PCLSRING is an operating system kernel design technique introduced in ITS for managing interruptions of long-running synchronous system calls. It was mentioned in an infamous diatribe by Dick Gabriel, and is described in loving detail by Allen Bawden in an article for the ages.
Discussions of PCLSRING usually center on fundamental questions of systems design. Is the ITS approach better than the Unix approach? Should the whole issue be avoided by using asynchronous system calls, as in VMS? And weren’t the good old days better than the bad new days anyway?
Let’s set those things aside for now and instead consider what it is, rather than what it’s for or whether it’s needed. The crux of the matter is this. Suppose you’re working with a system such as Unix that has synchronous system calls for file I/O, and you initiate a “large” read of n bytes into memory starting at address a. It takes a while to perform the transfer, during which time the process making the call may be interrupted for any number of reasons. The question is, what to do about the process state captured at the moment of the interrupt?
For various reasons it doesn’t make sense to snapshot the process while it is running inside the kernel. One solution is to simply stop the read “in the middle” and arrange that, when the process resumes, it returns from the system call indicating that some m<=n bytes have been read. You’re supposed to check that m=n yourself anyway, and restart the call if not. (This is the Unix solution.) It is all too easy to neglect the check, and the situation is made worse because so few languages have sum types, which would make it impossible to neglect the deficient return.
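The caller's obligation under the Unix convention looks like the following loop, sketched here in Python (the helper name `read_exactly` is my own): `os.read` may return fewer bytes than requested, and it is the caller who must notice and retry.

```python
import os

def read_exactly(fd, n):
    """Read exactly n bytes from fd, restarting after short reads.

    Illustrates the Unix convention: a read interrupted mid-transfer
    may return m <= n bytes, and the caller must check and re-issue
    the call for the remainder.
    """
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = os.read(fd, remaining)  # may return fewer than `remaining` bytes
        if not chunk:                   # end of file before n bytes arrived
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```

Forgetting this loop and calling `os.read` once is exactly the neglected check the paragraph above warns about; nothing in the return type forces you to write it.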
PCLSRING instead stops the system call in place, backs up the process PC to the system call, but with the parameters altered to read n-m bytes into location a+m, so that when the process resumes it simply makes a “fresh” system call to finish the read that was so rudely interrupted. The one drawback, if it is one, is that your own parameters may get altered during the call, so you shouldn’t rely on them being anything in particular after it returns. (This is all more easily visualized in assembly language, where the parameters are typically words that follow the system call itself in memory.)
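The parameter-rewriting maneuver can be simulated in a few lines. In this toy Python sketch (all names hypothetical), the “kernel” transfers at most `quantum` bytes before being interrupted, and before returning it rewrites the call’s parameters from (a, n) to (a+m, n−m), so the “process” finishes the transfer simply by re-issuing the very same call.

```python
def pclsr_read(src, dest, regs, quantum):
    """One execution of an interruptible 'system call'.

    regs holds the call's parameters as the process sees them:
    regs['a'] is the transfer offset and regs['n'] the bytes still
    to read. If interrupted after m bytes, the parameters are
    rewritten to (a+m, n-m), emulating PCLSRING's backed-up PC with
    altered arguments. Returns True when the call has completed.
    """
    a, n = regs['a'], regs['n']
    m = min(n, quantum)                   # interrupted after m bytes
    dest[a:a+m] = src[a:a+m]              # partial transfer
    regs['a'], regs['n'] = a + m, n - m   # alter the call's parameters in place
    return regs['n'] == 0
```

Re-running the call until it reports completion (`while not pclsr_read(...)`) models the resumed process making a “fresh” system call each time; note that `regs` is clobbered along the way, matching the caveat that you shouldn’t rely on your parameters after the call returns.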
While lecturing at this year’s OPLSS, it occurred to me that the dynamics of Modernized Algol in PFPL, which is given in Plotkin’s style, is essentially the same idea. Consider the rule for executing an encapsulated command:
if m → m’, then bnd(cmd(m);x.m”) → bnd(cmd(m’);x.m”)
(I have suppressed the memory component of the state, which is altered as well.) The expression cmd(m) encapsulates the command m. The bnd command executes m and passes its result to another command, m”, via the variable x. The above rule specifies that a step of execution of m results in a reconstruction of the entire bnd, albeit encapsulating m’, the intermediate result, instead of m. It’s exactly PCLSRING! Think of m as the kernel code for the read, think of cmd as the system call, and think of the bnd as the sequential composition of commands in an imperative language. The kernel only makes partial progress executing m before being interrupted, leaving m’ remaining to be executed to complete the call. The “pc” is backed up to the bnd, albeit modified with m’ as the new “system call” to be executed on the next transition.
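The correspondence can be made concrete in a few lines. Here is a hedged Python sketch, loosely modeled on PFPL’s Modernized Algol (the class names Ret, Step, and Bnd are my own, and the cmd encapsulation is elided): every intermediate state of the dynamics is again a whole bnd command, just as every interrupted PCLSRed process is again poised at a whole system call.

```python
class Ret:
    """A finished command returning value v."""
    def __init__(self, v): self.v = v

class Step:
    """A command with one unit of 'kernel' work left, then continuing as rest."""
    def __init__(self, rest): self.rest = rest

class Bnd:
    """bnd(m; x.m''): run m, pass its result to the continuation k."""
    def __init__(self, m, k): self.m, self.k = m, k

def step(c):
    """One transition of the dynamics."""
    if isinstance(c, Bnd):
        if isinstance(c.m, Ret):
            return c.k(c.m.v)           # bnd(ret(v); x.m'') -> [v/x]m''
        return Bnd(step(c.m), c.k)      # if m -> m', then bnd(m; x.m'') -> bnd(m'; x.m'')
    if isinstance(c, Step):
        return c.rest                   # one unit of partial progress in the "kernel"
    raise ValueError("stuck command")

def run(c):
    while not isinstance(c, Ret):
        c = step(c)                     # after each step, the state is a whole Bnd again:
    return c.v                          # the "pc" is backed up with m' as the new call
```

The second branch of `step` is the rule quoted above: partial progress inside m reconstructs the entire bnd around the intermediate result m’.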
I just love this sort of thing! The next time someone asks “what the hell is PCLSRING?”, you now have the option of explaining it in one line, without any mention of operating systems. It’s all a matter of semantics.
Isn’t PCLSRING exactly the Unix approach (with EINTR)?
I was writing from memory. It is more similar to SA_RESTART.
In any case, the meaningful part of PCLSRING is that you can always reach a “safe” – user-space representable – state in finite time.
Because processes are concurrent, you have

    read(m) >>= cont
    -> complex-kernel-state >>= cont    (SIGNAL)
    -> read(m’) >>= cont
And yes, what you describe can be seen as PCLSRing where cmd is a monad that implements parts of the system (system calls).
A couple of months back François-René Rideau gave a talk on PCLSRing semantics in the context of a tower of interpreters at Boston Haskell, based on his long-abandoned PhD research:
Thanks for the tip, I hadn’t heard of it. Similar point?
Tower of interpreters? No, tower of semantics. Usually compiled. I didn’t use the word “interpreter”, did I?
BTW, I’ve been working to resurrect those chapters of my thesis. So far I have covered material corresponding to the first part of the above talk (slide 29 of 71). I’ll be looking for proofreaders imminently for that part.
Current draft: http://fare.tunes.org/files/tmp/phd/thesis.pdf
NB: As of 2016-07-13, my draft’s text ends at p. 37, subsection 3.3.1 Composition (of Implementations), subsubsection “Composing Down: Virtualizing”. I’ll try to wrap up chapter 3 this week, and start a chapter 4 on a runtime reflective API based on the computational content of those nice implementation properties.