Intensional Logic in Context – from philosophy to technology

The most pervasive fallacy of philosophic thinking goes back to neglect of context.

John Dewey

What exactly is “intensional programming?” The easy answer is, programming in a language based on intensional logic. But that raises another, more important question, namely what is intensional logic? Logicians have been working on the answer for more than 2500 years.

The short answer is, logic in which the truth-value and more generally the meaning of an expression depends on an implicit context. Let me attempt to give you the full answer.

The term “intensional” itself is relatively recent. Carnap introduced it in the 1930s, based on Frege’s distinction between the “sense” and the “denotation” of an expression. In Frege’s terminology the denotation of an expression is just that – the particular object it (currently) denotes. (This is what Carnap and modern logicians call the “extension”). For example, “the President of France” currently [2021] denotes Emmanuel Macron.

On the other hand the “sense” (what we now call the “intension”) is the entire concept it represents – what we, at some level, intend when we write it. No one would claim that M. Macron somehow sums up the whole concept of the French presidency.

Intensional logic is therefore the logic of expressions in which the intensions must be taken into account. These are very common in natural language. The French constitution specifies that the President is directly elected; and this is not the same as specifying that M. Macron be directly elected.

The mysteries of intensionality

Many famous paradoxes are based on the observation that intensional expressions seem to violate the basic law of substitution of equals for equals.

According to the latest announcements,

The number of planets = 8

Kepler, the famous astronomer, was well aware of the basic rules of arithmetic. We can be sure that

Kepler knew that 8 is a perfect cube

But if we substitute equals for equals in this latter assertion, we get

Kepler knew that the number of planets is a perfect cube

which is almost certainly false.

In one sense, the explanation is simple: the equation relates only the extensions of the two expressions, whereas “Kepler knew …” refers to the intension of “the number of planets”. But what sort of mathematical object is an intension?

Necessity and Possibility

Aristotle, the founder of formal logic, was the first to address such problems. It is often said that Aristotle’s logic is two-valued, but this is not correct. He carefully distinguished between assertions that are true but not necessarily so, and those that are true by necessity – that could not possibly be false. Aristotle classified these different ‘modes’ of truth and falsity. He tried to extend his analysis of syllogisms to include those in which assumptions and/or conclusions were not simply true, but necessarily true, or only possibly true.

Necessity is, in our terminology, an intensional operator. We cannot always determine the truth of “necessarily P” knowing only the truth (extension) of P. P may be true but not necessarily so.

The Greek Stoics and the medieval Scholastics continued the tradition. During this entire period of more than two millennia, “modal” logic (the logic of necessity and possibility) was considered to be an integral part of formal logic.

Around 1900 Frege, Cantor, and Russell completed Leibniz’ program of mechanizing logic. Set theory and the predicate calculus are entirely extensional formalisms and deal with unchanging, immortal entities encountering each other in an empty context. Extensional logic proved very successful, and Russell and Wittgenstein wasted no time generalizing their approach to a whole philosophy, logical atomism. In this philosophy knowledge – in fact reality itself – can be described as a large collection of atomic facts evaluated in isolation.

Frege was aware that something (namely intensions) had been omitted. Efforts began almost immediately to extend mathematical logic to cover intensional phenomena (as they were later called).

Lewis and Langford formalize Modal Logic

The effort to recover modal logic began soon after, in the 1930s, when C. I. Lewis formalized his logic of “strict implication”, conceived as a corrective to Russell’s material implication. He soon recast it in terms of necessity and possibility, for which he introduced the symbols ☐ and <> that have since become standard.

One of the challenges was that it was not completely obvious which axioms are needed to capture the notions of necessity and possibility. For example, Aristotle noted that everything that is necessary is possible, i.e.

☐P → <>P

More fundamentally, everything that is necessary is in fact true; and everything that is in fact true must clearly be possible. Hence

☐P → P

and

P → <>P

On the other hand, if P and Q are both possible, we cannot assume that they can be true simultaneously; so

(<>P ∧ <>Q) → <>(P ∧ Q)

cannot be an axiom or even a theorem.

Some principles, however, are hard to decide. If something is possible, is it necessarily possible? In other words, should

<>P → ☐<>P

be a theorem? Lewis and Langford avoided ruling on such contentious issues and instead developed a series (S1 through S5) of (propositional) modal logic systems based on increasingly powerful axioms. Since then logicians have proposed literally dozens of different theories, and the whole collection is far from being linearly ordered.

The syntactic approach confirmed that modal logic is not inherently nonsensical; Lewis and others who followed showed that their systems are consistent, and do not collapse the modal operators onto the identity operator. Nevertheless, these systems had no model theory (no semantics, in computer science terminology).

As a result, for at least another three decades modal logic lacked the respectability of extensional logic.

Kripke’s possible worlds semantics for modal logic

Finally, in the early sixties, Saul Kripke presented the first completely formal semantics for modal logic. This semantics was based on the notion of a “possible world” or “alternate state of affairs”, which can be traced back more than seven hundred years, to the medieval logician Duns Scotus. For Scotus, to say that an event is possible means that we can imagine it taking place without contradiction in at least one alternate state of affairs. Conversely, something is necessary if it remains true in all the alternate states of affairs.

Kripke’s contribution (anticipated by Church and Carnap) was to completely avoid the whole question of what a possible world is and to take it to be an undefined concept. A Kripke model is simply an indexed family of normal interpretations, one for each possible world. The only structure on the universe of possible worlds is a binary relation that specifies which worlds are possible alternatives to a given world (this is called the accessibility relation).

In Kripke semantics, propositions are not simply true or false; they are true in some worlds and false in others. The Kripke semantics of necessity and possibility formalizes the “alternate state of affairs” idea of Scotus. A proposition of the form ☐P is true at a world w iff P is true at all worlds w’ accessible from w. Dually, a proposition of the form <>P is true at a world w iff P is true at some world w’ accessible from w.

Kripke showed that most of the proposed modal axioms correspond to different assumptions about the accessibility relation. For example, if everything necessary is to be true, and everything true is to be possible, we require that the relation be reflexive (so that every world is an alternate to itself). If the relation is transitive, we have

☐P → ☐☐P

so that everything necessary is necessarily so.
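
To make these clauses concrete, here is a minimal Python sketch of a toy Kripke model. The worlds, the accessibility relation, and the valuation of P below are invented for illustration; the relation is reflexive but not transitive, which is enough to see both of the correspondences just mentioned.

# A toy Kripke model: three worlds, an accessibility relation, and a
# valuation giving the extension of the atomic proposition P at each world.
# All of the particular choices here are illustrative assumptions.

WORLDS = {"w1", "w2", "w3"}

ACCESS = {                     # which worlds are alternatives to each world
    "w1": {"w1", "w2"},
    "w2": {"w2", "w3"},
    "w3": {"w3"},
}

P = {"w1": True, "w2": True, "w3": False}

def box(prop, w):
    # []prop is true at w iff prop is true at every world accessible from w
    return all(prop[v] for v in ACCESS[w])

def diamond(prop, w):
    # <>prop is true at w iff prop is true at some world accessible from w
    return any(prop[v] for v in ACCESS[w])

# The relation is reflexive, so []P -> P holds at every world:
assert all((not box(P, w)) or P[w] for w in WORLDS)

# It is not transitive, so []P -> [][]P can fail: []P holds at w1
# (P is true at w1 and w2) but [][]P does not (since []P fails at w2).
print(box(P, "w1"), box({w: box(P, w) for w in WORLDS}, "w1"))   # True False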



Scott’s semantics for intensional logic

Once the notion of possible world/context was formalized, it was easy to find new intensional formalisms that went beyond the traditional necessity/possibility.

The most important generalization, however, was to break any remaining ties with the ancient modalities and consider operators not defined by an accessibility relation. This means giving a semantics not just to modal logic, but to intensional logic in general. Credit for this primarily goes to Dana Scott. In 1969 he laid out a complete framework for a semantics of intensional logic in his (perhaps mistitled) Advice on Modal Logic.

In Advice Scott takes the basic idea of Kripke models and extends it to give a framework in which Carnap’s distinction between intension and extension can be formalized. In a “Scott model” we have a nonempty set I of reference points (essentially, possible worlds) but do not require that any accessibility relations be specified. Whatever the syntactic details of our language, propositions are not a priori absolutely true or false. Their truth value varies from world to world, although they may happen to have the same value at each world. Scott calls the truth value of φ at a particular world the extension of φ at that world. The intension of φ, on the other hand, he takes to be the function that maps each world w to the extension of φ at w. In other words, the intension of a formula is an element of 2^I, 2 being the set {0,1} of truth values.
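
Carnap’s distinction, as Scott formalizes it, is easy to mimic in code. Here is a minimal Python sketch in which the reference points are years and the office holders are my own illustrative choices; the only point is that an intension is a map from reference points to extensions, and a propositional intension is an element of 2^I.

# Intensions as maps from reference points to extensions.
# The reference points (years) and the office holders are illustrative only.

I = [2007, 2012, 2017, 2022]                     # reference points

president_of_france = {                          # an intension over I
    2007: "Sarkozy", 2012: "Hollande", 2017: "Macron", 2022: "Macron",
}

def extension(intension, i):
    # the extension of an expression is its value at one reference point
    return intension[i]

print(extension(president_of_france, 2017))      # Macron

# A propositional intension is an element of 2^I: a map from I to {0, 1}.
president_is_macron = {i: president_of_france[i] == "Macron" for i in I}
print(president_is_macron)                       # false at 2007 and 2012, true after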

In Scott’s approach, a (unary) intensional operator is simply a function that maps 2^I to 2^I. It may be defined in terms of an accessibility relation, but the framework allows for more general kinds of operators. As an example, he proposes a formalization of the present progressive tense (as in “I am eating”). He takes I to be the reals and defines [H]E to be true at time t iff E is true throughout some interval (however small) containing t.

Scott models are not restricted to propositional logics. They can also specify a collection D of individuals that serve as the extensions of terms. These individuals can be (understood to be) numbers, strings and lists, physical objects, people, organizations, etc. Intensional terms, however, denote elements of D^I: they may have different extensions at different worlds. The elements of D^I are intensional objects, or virtual individuals. They correspond to natural language phrases such as “the President of France”, and Scott models can give us a clear explanation of the puzzles described at the beginning of this article.

For Scott an important challenge was dealing with intensional individuals that don’t exist at certain worlds. This has not proved (so far) to be a problem for intensional programming. For example, we assume the string “Hello World” is always available.

Intensional Programming

The Scott–Montague–Kripke–Carnap–Lewis approach to modal and intensional logic has proved very successful in illuminating the foundations of natural language. For me, however, the real significance of the Scott approach is that it laid the basis for Lucid and intensional programming.

For example, consider the case of a time-varying, two-dimensional intensional spreadsheet. We define a smoothing operator that calculates, at each cell and timepoint, the previous value of the average of the surrounding cells:

smooth(S) = prev ((up S + right S + left S + down S)/4)

The intent is clear enough, but without a formalization of intensional logic what would we make of such definitions? The operator smooth is an intensional operator that acts on intensions that are maps from

time × space × space → floating-point numbers
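
Here is a minimal sketch of how such a definition can be read, with intensions modelled as Python functions from a context (t, x, y) to numbers. The encoding and the sample intension S are illustrative choices, not part of any actual Lucid implementation.

# Intensions as functions from a context (t, x, y) to numbers.
# prev shifts the time coordinate; up/down/left/right shift space.

def prev(S):  return lambda t, x, y: S(t - 1, x, y)
def up(S):    return lambda t, x, y: S(t, x, y - 1)
def down(S):  return lambda t, x, y: S(t, x, y + 1)
def left(S):  return lambda t, x, y: S(t, x - 1, y)
def right(S): return lambda t, x, y: S(t, x + 1, y)

def average4(S):
    # pointwise average of the four surrounding cells
    return lambda t, x, y: (up(S)(t, x, y) + right(S)(t, x, y) +
                            left(S)(t, x, y) + down(S)(t, x, y)) / 4

def smooth(S):
    # the previous value of the average of the surrounding cells
    return prev(average4(S))

S = lambda t, x, y: float(t + x + y)    # a sample intension
print(smooth(S)(3, 5, 5))               # 12.0: the neighbour average at t = 2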

The Creation of Intensional Programming

This is not the place to present even a brief overview of work in intensional programming. We can, however, explain the connection by presenting a Creation Myth that makes it clear (and also puts its creators in a more favorable light). According to the Myth, far-sighted computer science researchers read and understood Advice and decided to use it prescriptively, as the basis for new programming languages and systems. The first language was Lucid, invented by the author and E. A. Ashcroft in 1974. (Now you know who the far-sighted researchers were. However, the Myth does not explain why they waited five years.)

The Creators chose (a family of) Scott models in which I is the set of natural numbers, interpreted as timepoints. For the individuals, they took (in the simplest case) D to be the set of integers. Their language included the operators first, next and fby, with next (for example) being the intensional operator from D^I to D^I such that

next(X) = λn. X(n+1)

Programs consisted of equations defining program variables in terms of constants and each other. Program variables were therefore intensional integers.
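
The following Python sketch shows one way to read these definitions, with intensions over time modelled as functions from natural numbers (timepoints) to values. The encoding is an illustration, not the Creators’ implementation; note that next here shadows the Python builtin of the same name, which is harmless in this sketch.

# Intensions over time: functions from natural numbers (timepoints) to values.

def first(X):
    return lambda n: X(0)                 # the initial value of X, at every time

def next(X):
    return lambda n: X(n + 1)             # next(X) = λn. X(n+1)

def fby(X, Y):                            # "followed by"
    return lambda n: X(0) if n == 0 else Y(n - 1)

# The equation  n = 0 fby (n + 1)  defines the intensional integer 0, 1, 2, ...
def nat(n):
    return fby(lambda _: 0, lambda m: nat(m) + 1)(n)

print([nat(n) for n in range(6)])         # [0, 1, 2, 3, 4, 5]
print(next(nat)(0), first(nat)(41))       # 1 0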

At this point the Myth has the Creators look at what they had done, and see that it was Good (a few referee reports aside). They returned for more Advice and found it; for example:


This situation is easily appreciated where I is the context of time-dependent statements; that is in the case where I represents the instants of time. For more general situations one must not think of the elements of I as anything as simple as instants of time or even possible worlds. In general we will have i= (w,t,p,a) where the index i has coordinates; for example w is a world, t is a time, p = (x, y, z) is a (3-dimensional) position in the world, a is an agent, etc. All these coordinates can be varied, possibly independently, and thus affect the truth of statements which have indirect references to these coordinates.

The newly inspired Creators (by now a much larger group) first added extra time dimensions to Lucid to allow nested iterations. Then space dimensions, used much like arrays. The “place” dimension (actually a form of branching time) allowed programs with first-order recursive functions to be translated into pure intensional programs. Later, spreadsheet and attribute grammar tools used cell and tree node coordinates, respectively. Branching time allowed logic programs to express search strategies in a very simple way. We also discovered a translation scheme for some higher-order programs using multiple-place coordinates.



Myth meets reality

Intensional programming obviously developed quickly and rationally in the parallel universe of the mythical Creators. Over in the real world, however, the real creation was a longer and less orderly process. To be fair to the real creators, they had some serious obstacles to overcome, and in so doing they had to extend the system of Advice in important ways. From the beginning they — we — wanted to allow recursive definitions of operators. This meant that the collection of intensions had to form a domain (yes, a Scott domain) and that the basic operators had to be continuous over this domain. In particular, this meant that the operators had to be finitary: that any particular extension of the result of an operation can be computed from a finite set of extensions of the operands.

Intensional programming also used multiple dimensions almost from the beginning. It took a long time, however, to realize that it was not enough to have (even a large) set of fixed predefined dimensions, or even an infinite, indexed collection of predefined dimensions. Eventually, the GLU system allowed user-declarable local dimensions.

Finally, the Creators realized the language could be implemented through a demand-driven dataflow model, in which demands for particular extensions of variables can generate demands for other particular extensions.
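
As a rough illustration of the idea (and not of any actual Lucid implementation), the Python sketch below evaluates the Lucid-style definition fib = 0 fby 1 fby (fib + next fib) by demand: a request for the extension of fib at time n generates requests at earlier timepoints, and a memo table plays the role of a warehouse of already-computed extensions.

# Demand-driven evaluation of  fib = 0 fby 1 fby (fib + next fib).
# functools.lru_cache serves as the warehouse of computed extensions.

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n == 0:                       # the  0 fby ...  clause
        return 0
    if n == 1:                       # the  ... fby 1 fby ...  clause
        return 1
    return fib(n - 2) + fib(n - 1)   # (fib + next fib), shifted by two timepoints

print(fib(10))                       # 55: only the demanded extensions are computed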

From philosophy to technology

The system presented in Advice has proved to be almost exactly what was needed as a foundation for intensional programming; we have made no changes, and very few additions. It has passed the hardest design test of all, namely suitability for a purpose (intensional programming) that could not be foreseen. The same, in fact, can be said for intensional logic as a whole. It has developed in much the same way as other, very different technologies. It began with the study of a few curious phenomena and a collection of baffling paradoxes. A long period of study and experimentation, much of it apparently futile, eventually led to understanding and to the perfection of useful tools.

We have now reached the stage where, in Scott’s words, “the old puzzles can be cast aside, and one can begin to provide meaningful applications.”
