The Four Fundamental Operations of Definite Action

All definite actions (computer programs) can be defined by four fundamental patterns of combination:

  1. Sequence

  2. Branch

  3. Loop

  4. Parallel


Sequence

Do one thing after another. In Joy this is represented by putting two symbols together, juxtaposition:

foo bar

Operations have inputs and outputs. The outputs of foo must be compatible in “arity”, type, and shape with the inputs of bar.
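As an illustration, here is a rough Python sketch of sequencing. This is not the Thun implementation: the stack is modeled as a Python list with the top at the right, and foo and bar are made-up example functions.

```python
# Hypothetical model: a "function" takes a stack (list, top at the right)
# and returns a new stack. Juxtaposition is function composition.
def foo(stack):
    return stack + [2, 3]          # pushes two numbers

def bar(stack):
    a = stack.pop()
    b = stack.pop()
    return stack + [a + b]         # consumes two numbers, pushes their sum

def compose(*fns):
    def seq(stack):
        for f in fns:
            stack = f(stack)
        return stack
    return seq

program = compose(foo, bar)        # "foo bar" as juxtaposition
print(program([]))                 # -> [5]
```

Note that foo's outputs (two numbers) match bar's inputs, as required.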


Branch

Do one thing or another.

boolean [F] [T] branch

   t [F] [T] branch

   f [F] [T] branch

branch == unit cons swap pick i

boolean [F] [T] branch
boolean [F] [T] unit cons swap pick i
boolean [F] [[T]] cons swap pick i
boolean [[F] [T]] swap pick i
[[F] [T]] boolean pick i
[F-or-T] i
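The net effect can be sketched in a hypothetical Python model (not the actual interpreter): quoted programs are callables, the stack is a list with the top at the right, and branch selects which quote to run.

```python
def push(x):
    # a quoted program that pushes a constant
    return lambda stack: stack + [x]

def branch(stack):
    # stack layout: ... boolean [F] [T]
    T = stack.pop()
    F = stack.pop()
    flag = stack.pop()
    return (T if flag else F)(stack)

print(branch([True,  push('F'), push('T')]))   # -> ['T']
print(branch([False, push('F'), push('T')]))   # -> ['F']
```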

Given some branch function G:

G == [F] [T] branch

Used in a sequence like so:

foo G bar

The inputs and outputs of F and T must be compatible with the outputs of foo and the inputs of bar, respectively:

foo F bar

foo T bar


ifte

Often it will be easier on the programmer to write branching code with the predicate specified in a quote. The ifte combinator provides this (T for “then” and E for “else”):

[P] [T] [E] ifte

Defined in terms of branch:

ifte == [nullary not] dip branch

In this case, P must be compatible with the stack and return a Boolean value, and T and E must each be compatible with the preceding and following functions, as described above for F and T. (Note that in the current implementation we depend on Python for the underlying semantics, so the value doesn’t have to be strictly Boolean: Python’s rules for “truthiness” are used to evaluate it. I reflect this in the stack effect comment of branch, which only accepts Boolean values, and in the definition of ifte above by including not in the quote, which also has the effect of putting the subject quotes in the proper order for branch.)
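For illustration, here is a hypothetical Python model of ifte that takes P, T, and E as arguments rather than as stack items. Note how the predicate runs “nullary” on a copy of the stack, and only the truthiness of its top result matters.

```python
def ifte(stack, P, T, E):
    # run the predicate "nullary": on a copy of the stack, keeping only
    # its top result, then evaluate that result for truthiness
    flag = P(list(stack))[-1]
    return (T if flag else E)(stack)

result = ifte([5],
              lambda s: s + [s[-1] > 0],   # P: is the top of the stack positive?
              lambda s: s + ['pos'],       # T
              lambda s: s + ['neg'])       # E
print(result)  # -> [5, 'pos']
```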


Loop

Do one thing zero or more times.

boolean [Q] loop

   t [Q] loop
   Q [Q] loop

   ... f [Q] loop

The loop combinator generates a copy of itself in the true branch. This is the hallmark of recursive definitions. In Thun there is no equivalent of conventional loop statements. (There is, however, the x combinator, defined as x == dup i, which permits recursive constructs that do not need to be directly self-referential, unlike loop and genrec.)

loop == [] swap [dup dip loop] cons branch

boolean [Q] loop
boolean [Q] [] swap [dup dip loop] cons branch
boolean [] [Q] [dup dip loop] cons branch
boolean [] [[Q] dup dip loop] branch

In action the false branch does nothing while the true branch does:

t [] [[Q] dup dip loop] branch
      [Q] dup dip loop
      [Q] [Q] dip loop
      Q [Q] loop

Because loop expects and consumes a Boolean value, the Q function must be compatible with the stack below it and must itself leave a Boolean flag for the next iteration:

Q == G b

Q [Q] loop
G b [Q] loop
G Q [Q] loop
G G b [Q] loop
G G Q [Q] loop
G G G b [Q] loop
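The behavior can be sketched in a hypothetical Python model (stack as a list, top at the right), with a Q that plays the role of G b: one step of work followed by a fresh flag.

```python
def loop(stack, Q):
    # consume a Boolean; while true, run Q (which must leave a new Boolean)
    while stack.pop():
        stack = Q(stack)
    return stack

def Q(stack):
    # G: push the next counter value
    stack = stack + [stack[-1] + 1]
    # b: flag for the next iteration (keep going while the counter is below 3)
    return stack + [stack[-1] < 3]

print(loop([0, True], Q))  # -> [0, 1, 2, 3]
```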


while

Keep doing B while some predicate P is true. This is convenient as the predicate function is made nullary automatically and the body function can be designed without regard to leaving a Boolean flag for the next iteration:

            [P] [B] while
   [P] nullary [B [P] nullary] loop

while == swap [nullary] cons dup dipd concat loop

[P] [B] while
[P] [B] swap [nullary] cons dup dipd concat loop
[B] [P] [nullary] cons dup dipd concat loop
[B] [[P] nullary] dup dipd concat loop
[B] [[P] nullary] [[P] nullary] dipd concat loop
[P] nullary [B] [[P] nullary] concat loop
[P] nullary [B [P] nullary] loop
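The behavior can also be sketched directly in a hypothetical Python model (P and B as callables on a list stack; the predicate is checked on a copy of the stack, i.e. nullary):

```python
def while_(stack, P, B):
    # run B while the nullary predicate P (evaluated on a copy) yields true
    while P(list(stack))[-1]:
        stack = B(stack)
    return stack

# double the top of the stack until it reaches 100
result = while_([1],
                lambda s: s + [s[-1] < 100],   # P
                lambda s: s + [s[-1] * 2])     # B
print(result)  # -> [1, 2, 4, 8, 16, 32, 64, 128]
```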


Parallel

The parallel operation indicates that two (or more) functions do not interfere with each other and so can run in parallel. The main difficulty in this sort of thing is orchestrating the recombining (“join” or “wait”) of the results of the functions after they finish.

The current implementations and the following definitions are not actually parallel (yet), but there is no reason they couldn’t be reimplemented in terms of, e.g., Python threads. I am not concerned with the performance of the system just yet, only the elegance of the code it allows us to write.


cleave

Joy has a few parallel combinators, the main one being cleave:

               ... x [A] [B] cleave
   ... [x ...] [A] infra first [x ...] [B] infra first
                   ... a b

The cleave combinator expects a value and two quotes, and it executes each quote in a “separate universe” such that neither can affect the other; then it takes the first item from the stack in each universe and replaces the value and quotes with the respective results.
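A hypothetical Python model of those semantics (not the Thun implementation), copying the stack so that A and B cannot affect each other:

```python
def cleave(stack, A, B):
    # run A and B each on a private copy of the stack (minus the quotes),
    # then replace the value x with the top item of each result
    a = A(list(stack))[-1]
    b = B(list(stack))[-1]
    return stack[:-1] + [a, b]

# x = 10; A computes x + 1, B computes x * 2
print(cleave([10],
             lambda s: s + [s[-1] + 1],
             lambda s: s + [s[-1] * 2]))  # -> [11, 20]
```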

(I think this corresponds to the “fork” operator, the little upward-pointing triangle, that takes two functions A :: x -> a and B :: x -> b and returns a function F :: x -> (a, b), in Conal Elliott’s “Compiling to Categories” paper, et al.)

Just a thought: if you cleave two jobs and one requires more time to finish than the other, you’d like to be able to assign resources accordingly so that they both finish at the same time.

“Apply” Functions

There are also app2 and app3, which run a single quote on more than one value:

                ... y x [Q] app2
   ... [y ...] [Q] infra first [x ...] [Q] infra first

       ... z y x [Q] app3
   ... [z ...] [Q] infra first
       [y ...] [Q] infra first
       [x ...] [Q] infra first
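app2’s behavior can be sketched in a hypothetical Python model (list stack, quote as a callable), running Q separately with each value on top:

```python
def app2(stack, Q):
    # stack layout: ... y x; run Q on a copy with y on top, then with x on top,
    # and replace both values with the top item of each result
    x = stack.pop()
    y = stack.pop()
    a = Q(stack + [y])[-1]
    b = Q(stack + [x])[-1]
    return stack + [a, b]

print(app2([3, 4], lambda s: s + [s[-1] ** 2]))  # squares both -> [9, 16]
```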

Because the quoted program can be i, we can define cleave in terms of app2:

cleave == [i] app2 [popd] dip

(I’m not sure why cleave was specified to take that value; I may make a combinator that does the same thing but without expecting a value.)

clv == [i] app2

   [A] [B] clv
     a b


map

The common map function in Joy should also be thought of as a parallel operator:

[a b c ...] [Q] map

There is no reason why the implementation of map couldn’t distribute the Q function over e.g. a pool of worker CPUs.
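For example, Python’s standard concurrent.futures pool already distributes a mapped function over worker threads; a sketch of the idea (not the Thun implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def Q(n):
    # the quoted program applied to each item independently
    return n * n

# map's contract says nothing about where or in what order Q runs,
# so a pool of workers is a valid implementation strategy
with ThreadPoolExecutor() as pool:
    results = list(pool.map(Q, [1, 2, 3, 4]))

print(results)  # -> [1, 4, 9, 16]
```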


pam

One of my favorite combinators, the pam combinator, is just:

pam == [i] map

This can be used to run any number of programs separately on the current stack and combine their (first) outputs in a result list.

   [[A] [B] [C] ...] [i] map
   [ a   b   c  ...]
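In a hypothetical Python model, pam amounts to running each quoted program on a copy of the stack and collecting the first output of each in a list:

```python
def pam(stack, quotes):
    # run each quote on a private copy of the stack; collect the top
    # item of each result into a single list
    return stack + [[q(list(stack))[-1] for q in quotes]]

# stack holds 6 and 7; one program adds them, the other multiplies them
print(pam([6, 7], [lambda s: s + [s[-1] + s[-2]],
                   lambda s: s + [s[-1] * s[-2]]]))  # -> [6, 7, [13, 42]]
```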

Handling Other Kinds of Join

The cleave operators and the others all have pretty brutal join semantics: every sub-computation runs to completion, and we always wait for all of them. We can imagine a few different potentially useful patterns of “joining” results from parallel combinators.


First-to-Finish

Thinking about variations of pam, there could be one that returns only the first result of the first sub-program to finish, or one where the stack is replaced by that sub-program’s output stack.

The other sub-programs would be cancelled.


Futures or Promises

Also known as “Futures” or “Promises” (by everybody else; “Fulinators” is what I was going to call them when I was thinking about implementing them in Thun).

The runtime could be amended to permit “thunks” representing the results of in-progress computations to be left on the stack and picked up by subsequent functions. These would themselves be able to leave behind more “thunks”, the values of which depend on the eventual resolution of the values of the previous thunks.

In this way you can create “chains” (and more complex shapes) out of normal-looking code, consisting of a kind of call-graph interspersed with “asynchronous” … events?
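Python’s concurrent.futures.Future objects behave much like these thunks; a sketch of chaining, where a second computation depends on the eventual value of the first:

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor() as pool:
    # a "thunk" standing in for a value that is still being computed
    thunk = pool.submit(lambda: 6 * 7)
    # a later computation that depends on the eventual resolution of the thunk
    chained = pool.submit(lambda: thunk.result() + 1)

print(chained.result())  # -> 43
```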

In any case, until I can find a rigorous theory that shows that this sort of thing works perfectly in Joy code I’m not going to worry about it. (And I think the Categories can deal with it anyhow? Incremental evaluation, yeah?)