TLA+ in Practice and Theory
Part 2: The + in TLA+

This is part 2 in a four-part series. Part 1, Part 3, Part 4. A video of a 40-minute talk that covers parts of this series.

If you find TLA+ and formal methods in general interesting, I invite you to visit and participate in the new /r/tlaplus on Reddit.

Writing is nature’s way of letting you know how sloppy your thinking is.

Mathematics is nature’s way of letting you know how sloppy your writing is.

…Formal mathematics is nature’s way of letting you know how sloppy your mathematics is.

In part 1 we discussed the motivation, use and general principles guiding the design of TLA+. We now turn to study the details of the language. We begin with the language elements used to describe the static state of a computation, namely the program’s data and data operations, that form the primitive building blocks of computation. We will call this part of TLA+ the data logic.

Describing Data and Operations with Math

TLA+ uses the temporal logic of actions, TLA, to describe computation as a discrete dynamical system, which is analogous to the use of ordinary differential equations for describing continuous systems. Just as ODEs are defined over a state space in which the variables can take values in $\mathbb{R}$, the state space of TLA formulas is some logical structure (we’ll learn precisely what that means) that contains the values which TLA variables can take at any instant in time. In TLA+, sets form that structure, and the logical theory for describing elements in that structure is based on first-order logic and Zermelo–Fraenkel set theory (ZFC) — the standard formalism of ordinary mathematics. This means that we will use formal mathematics to describe our software, or, for now, just its data structures and basic operations. “Formal” simply means working in a system (a “formal system”, or a “formalism”) with precise rules, both syntactic — how expressions must be formed — and semantic — what the expressions mean.

This static component of TLA+ takes up most of the TLA+ language, and if you’re new to specification languages and proof assistants, understanding it will teach you all the essentials of doing formal mathematics on a computer. But from a theoretical standpoint, it is the least interesting and least essential part of TLA+ (TLA, the dynamic component, is the interesting, essential part), although it is important from a design and usability perspective, as it was designed to be as easy and familiar as possible. And yet it is the most controversial aspect of TLA+. This is because Lamport chose an untyped formalism for mathematics, while most specification languages opt for a typed one.1

Aside from controversial claims (on either side of the debate) regarding metrics such as correctness or productivity, types in programming languages have advantages that, I believe, most agree on: they help compilers (especially AOT compilers) generate efficient machine code, and they help organize code, by providing some form of up-to-date documentation and encouraging a sensible focus to subroutines. They also assist tooling with automatic completion and automatic refactoring. None of these less-controversial advantages, however, is relevant for a specification language like TLA+ because it is not compiled into an executable, and because specifications are orders of magnitude shorter than program code and so don’t benefit much from features intended to help large teams of programmers maintain large codebases. The tradeoffs typed and untyped formalisms make for specification languages — or math in general — are therefore different from those concerning programming languages, and are well covered in the joint, and balanced, paper by Lamport and Larry Paulson (author of the proof assistant Isabelle), Should Your Specification Language Be Typed?

Lamport’s choice is driven by his wish for simplicity (as he sees it) for TLA+’s intended audience, namely engineers, and probably by his background as a classical mathematician. Set theory has the advantage of being familiar2 yet powerful, and Lamport says he believes it is indeed simpler in practice for engineers to use in specification of systems.

Other specification languages may need type systems because they are intended for uses other than software or hardware specification. Lamport writes:

There’s no good reason to write a set like {1, {2,3}}, and making it impossible to write seems like a good idea. However, I’ve found that there is no simple way to make it impossible to write that set without also making it impossible to write useful sets. As this implies, Coq is not simple… That doesn’t mean that there’s anything wrong with Coq; it just means that it’s not meant for ordinary engineers. For example, if you look at a math text, you might find that the symbol + is used in a single paragraph to mean several different things. To formalize that math in TLA+, you’d have to use a different symbol for each of those different meanings. That would drive a mathematician crazy. Coq allows you to use the same symbol for all of them. So, as George Gonthier3 will tell you, you need something like Coq for formalizing serious math. Since system builders and algorithm designers don’t use that kind of math, they don’t need to deal with the complexity of a language like Coq.

The static component of TLA+ is a formal set theory that Lamport calls ZFM, ZF for Mathematics, which he explains in this short paper (along with a comparison with typed formalisms).

Logicians will likely find the TLA+ formalism too ordinary; perhaps even dull. The mathematician G.H. Hardy once wrote4: “The ‘real’ mathematics of the ‘real’ mathematicians… is almost wholly ‘useless’… It is the dull and elementary parts… that work for good or ill.” But TLA+ is not a tool for exploring the secrets of mathematics or novel logics, but a tool for engineers to specify systems that “work”5. As far as ordinary math goes, TLA+ is a clear, simple and powerful formalism, with a convenient and natural syntax6.

TLA+ uses formal mathematics to specify software systems. As you’ll see, it is not as scary as it may first sound to a programmer. Programming is turning informal requirements into a low-level formal specification of the software — the program. If you can do that, and if you understand why your program works, or at least, how it is supposed to work, you can also write a high-level mathematical specification. It just takes some practice. In the introduction to Specifying Systems Lamport writes:

The mathematics we use is more formal than the math you’ve grown up with… The mathematics written by most mathematicians and scientists is not really precise. It’s precise in the small, but imprecise in the large. Each equation is a precise assertion, but you have to read the accompanying words to understand how the equations relate to one another and exactly what the theorems mean. Logicians have developed ways of eliminating those words and making the mathematics completely formal and, hence, completely precise… [M]athematicians and scientists think that formal mathematics, without words, is long and tiresome. They’re wrong. Ordinary mathematics can be expressed compactly in a precise, completely formal language. It takes only about two dozen lines to define the solution to an arbitrary differential equation in the DifferentialEquations module… But few specifications need such sophisticated mathematics. Most require only simple application of a few standard mathematical concepts.

I must remind you again that this is not a tutorial, and much of the material covered is not necessary to write good TLA+ specifications. While I will cover the basics and hope that the examples in this post will give you a taste of what formal mathematical specification is and how it feels similar to programming, my emphasis is the mathematical theory and design principles behind TLA+. For good hands-on tutorials on TLA+, refer to those I mentioned in part 1.

What Is (a) Logic?

TLA+ uses logic to specify both algorithms and their data structures. For those of you who may be rusty on mathematical logic, here’s a review.

Logic Fundamentals

Formal logic, symbolic logic, or mathematical logic, is a formal system, or a formalism7. It is a precise language to talk about things, or, at least, things that are amenable to precision. Like any language it has a syntax, which can be thought of as grammatical rules for forming sentences, and semantics, or meaning – what the language talks about. But, being precise, it has precise syntax and precise semantics. The syntax defines rules for how legal (well-formed) expressions in the language can be formed, and the semantics defines what those expressions mean by connecting them to precisely defined mathematical objects. The semantics allows us to know exactly what any phrase in the language means, and the syntax allows us to manipulate phrases in such a way that we know exactly their effect on meaning.

The syntax of the logic is made of some built-in connectives — usually $\land$ for conjunction (“and”), $\lor$ for disjunction (“or”), $\neg$ or sometimes $\sim$ for negation (“not”), $\implies$ or $\rightarrow$ for implication (“if-then”), $\equiv$ or $\Leftrightarrow$ for equivalence (if-and-only-if, or iff), a set of variables ($x, y, \ldots$) — names we use to refer to objects — and a signature, which is a set of symbols with a specific arity (how many arguments the symbol takes), like 5 (0-ary), = (2-ary), < (2-ary), * (2-ary), or - (unary minus). For example, the expression $x * 5 < -y \land \neg(x < 5)$ is a legal expression in the logic I’ve given as an example. The logic can also have quantifiers, the most common of which are the universal quantifier $\A$ (“for all”) and the existential quantifier $\E$ (“there exists”). Quantifiers usually bind variables. For example, $\A x \ldots$, which means “for all objects x such that …”, or $\E x \ldots$, which means “there exists an object x such that …”.

A well-formed expression is called a term (of the language), and so the syntax is often thought of as the set of all terms — all possible well-formed expressions in the language. A formula is a boolean-valued expression, namely one that is either true or false.8 Variables that appear in a formula unbound are called free variables. A formula that has no free variables is called a sentence or a closed formula.

A logic also has a structure, which is the logic’s domain of discourse — what the logic is talking about — which gives meaning, or semantics, to the signature. Assigning a specific structure to a logic is called an interpretation. The meaning of a 0-ary symbol like 2 is usually called a constant, while the meaning of a symbol of higher arity is usually called a relation (e.g., the less-than relation, <) or a function9. The meaning of a variable is also defined by the structure, although here the “order” of the logic matters. We’ll discuss this subject a bit later, but in first-order logic, a variable can refer to any value in a collection of possible values defined by the structure; that collection is called the universe of the logic.

A model is the relationship between the syntax and semantics: a model of a formula is a structure that satisfies it, namely an assignment of values to the variables that makes the formula true (truth is a semantic property). The usual symbol for “satisfies” is $\vDash$; on its left is a structure that makes the formula on its right true — a model of the formula. For example, the structure for our logic can be the set of integers with multiplication, negation and the less-than relation. A model for the expression $x < 2$ could then be $x = -5$. The collection of all models of a formula $A$ forms its formal semantics, and is often written $\sem A$. I will colloquially refer to that collection of models — the semantics of a formula — simply as the model of the formula, and say that the model of $x < 2$ is any assignment of a number less than 2 to $x$, or, more simply, all numbers less than 2. We can say that that formula specifies all numbers less than two. A formula that is true under all interpretations is said to be valid, and we write $\vDash A$ (with no structure on the left-hand side). The formula $\TRUE$ is satisfied by all interpretations, while the formula $\FALSE$ has no model at all.

The various logical operators interact with the model in specific ways. The model for $A \land B$ is the intersection of the model of $A$ with the model of $B$, or $\sem{A \land B} = \sem A \cap \sem B $. The model for $A \lor B$ is the union of the model of $A$ with the model of $B$, or $\sem{A \lor B} = \sem A \cup \sem B $. The model for $\neg A$ is the complement of the model for $A$, or $\sem{\neg A} = \sem{A}^c$. These are exactly the familiar definitions of the logical operators with Venn diagrams.

When we work with a logic, we usually work within a specific logical theory, which is a set of formulas called axioms, taken to be equivalent to $\TRUE$. A model of a theory is a structure that satisfies all axioms of the theory; in other words, the theory characterizes, or specifies, a model. Often, therefore, a logic is not defined with a structure, just with a theory, which then characterizes all appropriate structures. For example, the Peano axioms are a logical theory that characterizes the natural numbers and the familiar arithmetic operations. The natural numbers we know, with the familiar arithmetic operations, are a model of Peano arithmetic.

Of course, a logical formula, or even an entire logical theory, can have multiple interpretations over different structures. For example, the formula $x > 2$ has a different model if interpreted over the real numbers than over the naturals. The sentence $\E x . x > 0 \land \A y . y > 0 \implies y \geq x$ is true when interpreted over the integers, but false when interpreted over the reals.

A logic also usually has a calculus, a syntactic system for deriving expressions from other expressions, like natural deduction. If formula $C$ can be derived by a finite number of applications of inference rules from formulas $A$ and $B$, we write $A, B \vdash C$, and say that $A$ and $B$ prove $C$ or entail $C$, where $A$ and $B$ are the assumptions, and $C$ is their consequence. Provability is the syntactic counterpart to the semantic notion of truth. If a formula $A$ is entailed by the theory’s axioms alone, without any other assumptions, we write $\vdash A$, and say that $A$ is a tautology. If $\vdash A$ and $A$ is not an axiom, we say that $A$ is a theorem. If we want to prove the theorem $A$ but we haven’t yet done so, we call $A$ a proposition.

According to the axioms of most logics, if $A, B \vdash C$ then $\vdash (A \land B) \implies C$ and vice versa. I.e., $A$ and $B$ entail, or prove, $C$ iff $A \land B$ imply $C$.

There are two important axioms that shape the general properties of a logic. The first is called the principle of explosion, and it states that for any formula $A$, $\vdash \neg (A \land \neg A)$, or equivalently $A \vdash \neg\neg A$. The second is called the law of excluded middle, and it states that for any formula $A$, $\vdash A \lor \neg A$, or equivalently, $\neg\neg A \vdash A$. A logic with both axioms is called a classical, or standard, logic. A logic with the principle of explosion but without the law of excluded middle is called an intuitionistic, or constructive logic (in which, if $A$ is known not to be false, we cannot conclude that it must be true). A logic with the law of excluded middle but without the principle of explosion is called a paraconsistent logic (in which, if $A$ is true, we cannot conclude that it may not also be false). Both constructive and paraconsistent logics have interesting and useful applications. With neither axiom, we have no means of relating negated and positive statements, which is tantamount to a logic without negation at all. All logic employed in TLA+ is classical.

Note that in a classical logic (but not in a non-classical logic!), the universal and existential quantifiers can each be defined in terms of the other:
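$\E x : P(x) \equiv \neg \A x : \neg P(x)$

$\A x : P(x) \equiv \neg \E x : \neg P(x)$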

In short we say that $\E \equiv \neg \A \neg$ and $\A \equiv \neg \E \neg$.

A logic, like all languages, expresses meaning. But the meaning of a logical statement is not always entirely captured by its formal semantics. The logician Gottlob Frege pointed out that a logical statement may have two kinds of meaning: sense and reference (the latter is sometimes also called denotation). For example, the sentence $3 > 2$ has a semantic value of $\TRUE$ and so refers to the value $\TRUE$. The sentence $2 > 1$ also refers to the value $\TRUE$, and so is equivalent to $3 > 2$. However, the two sentences have different senses, as they express different ideas (one about the relationship between 3 and 2, and the other about 2 and 1). The distinction between sense and reference is a key philosophical observation required for a full understanding of all logic (and all language, really), and some non-classical logics — in particular, intuitionistic logics — can explicitly refer to both kinds of meaning (using different notions of equivalence: one called extensional, which is an equivalence on reference/denotation, and the other called intensional, which can be seen as an equivalence on sense). But we will be dealing only with classical logic, so this important philosophical point has no practical significance.

First Order Logic and Other Orders

A first-order logic (or FOL) has all the regular logical connectives ($\land$, $\neg$, etc.), and the universal and existential quantifiers $\A$ and $\E$. The variables in FOL range over the simple values of the universe: if the universe is the natural numbers, then a variable — either free or quantified, i.e. bound — stands for a natural number. If we want to quantify over richer structures, like sets of natural numbers, functions of natural numbers etc., we need to add them to the universe by extending our theory.

In a second-order logic (or SOL), a variable can either represent an element of the universe or a set of elements. It is easier to think of this set as a predicate; a predicate $p(x)$ is true iff the value $x$ is in the set $p$. So, while in FOL all predicates must be “constant” and given in the signature, in SOL we can quantify over them and say, for example, $\A_1 x . \E_2 p . \A_1 y . p(y) \equiv (y = x)$ (the symbol $\equiv$ stands for double implication, or if-and-only-if, and I’ve used subscripts to distinguish between first- and second-order quantification; often this distinction is clear from the context), or in words, “for any $x$, there exists a predicate $p$ that is true for an argument $y$ iff $y = x$”.

Similarly, a third-order logic allows us to quantify over sets of sets of values etc. Higher-order logic, or HOL, allows us to quantify over variables of any order, and is usually used in typed formalisms only. The original concept of types was created by Bertrand Russell precisely to distinguish orders: objects of different orders are of different types, i.e., if our “ground” values are of type one, then sets of those are of type two, sets-of-sets are of type three etc. Types have a syntactic as well as a semantic significance: if $x$ and $y$ are of different types, then the expression $x = y$ is not false, but rather ill-formed, meaning it is not a legal expression at all. Similar to the subscripts I used above, quantifiers in HOL are distinguished by types; we write $\E x : T$ (instead of $\E_{\scriptsize T} x$) meaning “there exists an $x$ of type $T$”.

One of the most important theorems in all of logic is that in first-order logics (only!), semantic truth and syntactic provability coincide, i.e. $\vdash A$ iff $\vDash A$, or, $A$ is a tautology iff it is valid. In other words, if a formula $A$ is valid — satisfied by all interpretations (that also satisfy the axioms of the theory) — then it has a proof, and vice-versa. Similarly, $A \vdash B$ iff $\sem A \vDash B$. This is Gödel’s completeness theorem. Note, however, that this doesn’t mean that every sentence in FOL can be proven or disproven. If your theory is rich enough, there will always be sentences that can be neither disproven nor proven (which, in the case of FOL, means that such theories will have multiple models, in some the sentence would be true and in others false). This is a consequence of yet another famous result, known as Gödel’s first incompleteness theorem. That theorem doesn’t apply only to FOL, but to any formalism whatsoever, as it is a direct result of the halting theorem (even though the halting theorem was proven some years later).

As both the “static” or data logic and the temporal logic in TLA+ only allow first-order formulas, we can just use the symbol $\vdash$ to mean both validity and provability (unless we want to specifically talk about a semantic structure on the left-hand side). But as the first incompleteness theorem applies to ZFC, not every TLA+ sentence can be proven or disproven.

Because TLA+ allows free variables, I’d like to point out a possible source of confusion when reading formulas containing implication ($\implies$) and free variables: The formula $x > 4 \implies x > 2$ is a theorem, i.e., $\vdash x > 4 \implies x > 2$, and is satisfied by all assignments to $x$, and $\A x : x > 4 \implies x > 2$ is also a theorem. On the other hand, while $\A x : x > 2 \implies x > 4$ is equivalent to $\FALSE$, i.e. $\vdash \neg(\A x : x > 2 \implies x > 4)$, the formula $x > 2 \implies x > 4$ does have a model: it is satisfied by all numbers that are either less than or equal to 2 or greater than 4. This means that we have to know whether we’re treating $x > 2 \implies x > 4$ as a proposition — in which case we are asking whether it is valid, i.e. satisfied by all assignments to $x$, which in our case it is not — or as a description of a model. Describing the model of $A \implies B$ as we did above is not very interesting, but the proposition $\vdash A \implies B$ is, from a semantic perspective: $\vdash A \implies B$ iff $\sem A \subseteq \sem B$.

Now, notice that the model for a FOL formula, say $x > 2$, is always a set of values from the universe (in this case, all numbers greater than 2). So the model of a FOL formula is always a second-order object. But if the meaning of a formula is a set, a second-order object, then how can we write a general axiom such as $\vdash A \lor \neg A$ where $A$ stands not for a specific formula but for any formula? We can’t write $\A A . A \lor \neg A$ because we can’t quantify over formulas (which, as we’ve just seen, stand for sets, or second-order values) nor treat them as free variables for the same reason. The answer is that we don’t. We say that such an axiom is an axiom schema that refers to an infinite number of axioms in the logic, one for each possible formula $A$. The language used to write the schema is therefore not the logic itself, but the metalogic, or metalanguage of our logic.

While this is fine from a theory standpoint, it is not satisfactory for mechanical proof software. When proving a theorem, we often need to prove some intermediate lemmas, and we would like to be able to state and prove general lemmas that we can reuse in many proofs; such general second-order lemmas can be very useful.

One solution is, therefore, to adopt a higher-order logic. But keeping the logic first-order has some advantages in terms of simplicity, model checking algorithms, and general theorems (if we know that a formula is first-order, it’s easier to state theorems about its meaning based on its syntactic structure) — the last point is more relevant for TLA, the temporal logic part of TLA+, which we’ll cover in the next installments. But it turns out that we can have our cake and eat it, too. We can keep all formulas first-order, but allow writing second-order propositions in the form $A \vdash B$, which are not in themselves formulas (and so they don’t have a model). We basically provide access to the metalanguage. This is the solution adopted by TLA+, which we’ll see later in the section about proofs.

By the way, because I mentioned model checking, I’d like to take the opportunity to clarify what may be a certain misconception about model checkers. A model checker takes its name from the definition of “model” we’ve just learned in the context of logic. It is a program that takes a description of a logical structure, $M$, and a logical formula, $\varphi$, and checks that $M$ is a model of $\varphi$, or that $M \vDash \varphi$.

Alongside the theoretical and practical advantages of FOL, it also has some theoretical disadvantages. For example, first-order theories have non-standard models (e.g. there are models of the first-order Peano axioms with uncountably many numbers). However, this is not a concern at all given TLA+’s intended use and audience. “I’m certainly not worried,” Lamport writes, “about a bug occurring because an engineer implemented a non-standard model of the integers.”10

Logical Formulas and Expressions

We will now begin learning the particulars of TLA+. If you’re familiar with standard mathematical notation you’ll find TLA+’s syntax intuitive and readable for the most part. However, because TLA+ is completely formal, the meaning of every expression must be precisely defined, with no room for ambiguity or intuition. This makes the description seem obsessive, but this precision is what allows mechanical analyses of the logic. Because TLA+ is untyped and tries to mimic the familiar, informal mathematical notation, albeit formally, its details may be of particular interest to those who are generally interested in proof assistants but are used to typed formalisms.

The Logical Connectives and the Quantifiers

The static, data logic of TLA+ is first-order logic with equality. The logical constants are $\TRUE$ and $\FALSE$.11 We use the familiar operators:12 $\land$ for conjunction, $\lor$ for disjunction, $\lnot$ for negation, $\Rightarrow$ for implication, and $\equiv$ for equivalence. $\land$ and $\lor$ have the same precedence, so the expression $a \land b \lor c$ is a syntax error due to the precedence ambiguity, and parentheses are required to resolve it. Implication has a lower precedence, so $A \land B \implies C$ is equivalent to $(A \land B) \implies C$.

$\exists$ and $\forall$ are the regular first order existential and universal quantifiers, and introduce bound variables. In the TLA+ syntax, we write : (colon) instead of the more common single dot after a quantifier, which is used instead of parentheses to delineate the scope of the bound variables, and is read as “such that”. E.g.,
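$\E x : x > 2$ — “there exists an $x$ such that $x > 2$.”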

The quantifiers bind their variables in a scope that extends “as far as it can,” i.e., until a closing parenthesis or the end of the current expression. Therefore, $\A x : P(x) \implies \E x : Q(x)$ is illegal because it’s parsed as $\A x : (P(x) \implies \E x : Q(x))$, and so the existential quantifier introduces the variable $x$, which is already bound by the universal quantifier. However, $(\A x : P(x)) \implies \E x : Q(x)$ is well-formed.

In addition, TLA+ has a couple of special constructs inspired by programming languages. The construct
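    IF p THEN x ELSE y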

where $p$ is some logical predicate and $x$ and $y$ are values, takes the value of $x$ if $p$ is $\TRUE$ or the value of $y$ otherwise. The construct
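    CASE p_1 -> e_1
      [] p_2 -> e_2
      ...
      [] p_n -> e_n
      [] OTHER -> e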

is equal to some $e_i$ such that $p_i$ is true, if any; otherwise it is equal to $e$ if the $\OTHER$ clause exists; if it does not, the value of the expression is undefined (we’ll look at what precisely that means in the next section). More precisely:
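    \* roughly, ignoring the OTHER clause:
    CASE p_1 -> e_1 [] ... [] p_n -> e_n
        ==  CHOOSE v : (p_1 /\ (v = e_1)) \/ ... \/ (p_n /\ (v = e_n))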

If more than one predicate is true, the value of the expression would be one of the corresponding $e_i$, but the language does not commit to which. Both $\IF/\THEN/\ELSE$ and $\CASE$ are defined in terms of the $\CHOOSE$ operator we’ll see later.

Because in TLA+ specifications of algorithms and systems are written as formulas, it requires some ergonomics for writing long formulas in a way that makes them easy to read. Lamport devised such a scheme in How to Write a Long Formula. It uses alignment of conjunctions and disjunctions in place of parentheses, like so:
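    /\ a
    /\ \/ b
       \/ c
    /\ d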

Where a, b, c and d stand for arbitrary subformulas. Note the prefix connective that starts the disjunction and conjunction lists. The implied parentheses extend from the expression following the connective starting the line until another connective (of the same kind) is encountered in some new line aligned with the opening connective. So the above means the same as the following (which is also allowed in TLA+):
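    a /\ (b \/ c) /\ d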

So the following is well-formed (despite the variable $x$ being bound by two different quantifiers),
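    \/ \A x : P(x)
    \/ \E x : Q(x)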

because it is parsed as $(\A x : P(x)) \lor (\E x : Q(x))$.

The CHOOSE Operator and the Meaning of Undefined

Completing the picture of the fundamental operators is $\CHOOSE$, which is Hilbert’s epsilon operator.

$\CHOOSE x : P(x)$ — where $P(x)$ is some predicate that may contain $x$ — is equal to some value in the logic’s domain of discourse, i.e. universe of values, satisfying the predicate $P$, if one exists. $\CHOOSE$ satisfies these two rules:
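$(\E x : P(x)) \equiv P(\CHOOSE x : P(x))$

$(\A x : P(x) \equiv Q(x)) \implies (\CHOOSE x : P(x)) = (\CHOOSE x : Q(x))$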

The first rule shows how the quantifier $\E$ can be defined in terms of $\CHOOSE$ (and therefore so can $\A$); we’ll understand why in the next paragraph. The second rule is sometimes called right-uniqueness or a schema of extensionality, and means that $\CHOOSE$ always assigns the same value given equivalent predicates (although not one that is dictated or can be determined by the logic); even though the choice is always the same, we don’t know – or care – what it is, other than that it is some value that satisfies the predicate $P$. In other words, $\CHOOSE$ is deterministic, although the particular choice is not prescribed; this is a common misunderstanding among new TLA+ users. This must be so for the simple reason that in mathematics, $e = e$ for any expression $e$, and, therefore, $(\CHOOSE x : P(x)) = (\CHOOSE x : P(x))$ regardless of what $P$ is.

If no value satisfies the predicate $P$, the value of $\CHOOSE x : P(x)$ is undefined, which means that it is equal to some value in our universe of values (we will only properly encounter values in the next section, but possible ones include 42, $\set{3, \set{4, 5}}$ or $\str{hello}$) which we simply cannot determine (meaning, prove in the logic).13 In other words, “undefined” means “we don’t know and we don’t care.” There is no notion of an undefined “value,” but if we were to give it some notation, a good choice might be ¯\_(ツ)_/¯.

This definition interprets the informal notion of “undefined” – which probably means no value – as unspecified (i.e., some value, but we don’t know which). You may find this interpretation philosophically troubling, but rest assured that it poses no issue to the soundness of mathematics.

It is important to emphasize that $\CHOOSE$, despite being named with a verb — a confusing choice for programmers, I believe14 — does not imply any algorithm, or indeed any dynamic process for finding a value. It describes what something is, not how to find it. For example, $\CHOOSE x : x \in Int \land x \% 2 = 0$ means “some even integer”, not “find an even integer”. The distinction is, indeed, mostly philosophical, but programmers must be reminded that, at this point, we are dealing only with mathematical definitions, not some algorithm that “runs”.

It is also important to remember that $\CHOOSE$ only selects values; it does not construct them. You cannot $\CHOOSE$ a value that doesn’t exist in the universe of values, i.e., one whose existence isn’t postulated by the axioms of the logical theory, which I will introduce in the chapter Sets.

TLA+’s $\IF/\THEN/\ELSE$ and $\CASE$ constructs are actually defined in terms of $\CHOOSE$15:
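    IF p THEN e_1 ELSE e_2  ==  CHOOSE v : (p => (v = e_1)) /\ (~p => (v = e_2))
    \* and similarly for CASE, as sketched above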

We can now see how fundamental $\CHOOSE$ is. All the TLA+ constructs we’ve seen so far outside of the connectives of propositional logic — the quantifiers, $\IF/\THEN/\ELSE$ and $\CASE$ — can be defined in terms of it.

For a value to be undefined, $\CHOOSE$ need not be used in its definition. Undefined values can arise whenever a specific value cannot be determined by axioms and definitions, as we’ll see in the section on $\CONSTANT$.

Definitions

The definition is the main building-block in TLA+ — in fact it is almost the only one, the only other being the module, which we’ll learn about in part 4. When writing definitions informally, mathematicians use many different hopefully-intuitive-but-ill-specified notations. On the other hand, the minimalistic language of first-order logic only allows us to write definitions in the form of axioms, which can be cumbersome. For example, the absolute value function ($|\cdot|$) would be defined in FOL, interpreted over, say, the real numbers, like so:
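$\A x : (x \geq 0 \implies |x| = x) \land (x < 0 \implies |x| = -x)$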

In TLA+ we can write definitions in a precise and formal notation that resembles familiar informal ones. A definition looks like so:
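$\text{Name} \defeq e$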

Where $\text{Name}$ is the name of our definition, and $e$ is a TLA+ expression.

Definitions can be parameterized, in which case they’re called operators (we’ll get to what numbers are in TLA+ later, but for now let’s assume that numbers and arithmetic operators are at our disposal):
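    Double(x) == 2 * x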

Operators are not functions; those have a precise meaning, and we’ll learn the distinction later. Operators can be second-order:
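    ApplyTwice(F(_), x) == F(F(x))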

Defining some symbolic operators is also supported:
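    a (+) b == (a + b) % 5    \* a hypothetical example; (+) renders as the symbol ⊕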

Unlike some other languages, TLA+ only allows defining symbolic operators from a predetermined list of symbols, whose arity, position (prefix/infix/postfix), and precedence are fixed. We can use symbolic names for operator arguments, as in this example for an infix operator:
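    \* a sketch: the second argument is an infix operator, referred to symbolically as (+)
    IsCommutative(S, _ (+) _) == \A a, b \in S : a (+) b = b (+) a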

Recursive operators are also allowed:
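    RECURSIVE Factorial(_)
    Factorial(n) == IF n = 0 THEN 1 ELSE n * Factorial(n - 1)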

Anonymous operators can be defined inline with $\LAMBDA$16, as in $ApplyTwice(\LAMBDA x: x^2, 3)$.

Definitions have the scope of the module in which they are defined (we will learn about modules in part 4), and a definition must precede its use. Definitions that are local to an expression are introduced with the $\LET$ construct:
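    LET Sq(x) == x * x
    IN  Sq(2) + Sq(3)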

$\LET$ definitions can be made inside any expression.

TLA+ does not let you rebind names — of variables or definitions — that are already bound in scope, and overloading an operator name with another of a different arity is not allowed.

Name bindings are introduced by top-level definitions or declarations (by declarations I mean the $\CONSTANT$ and $\VARIABLE$ constructs we’ll see later), whose binding extends until the end of the module in which they are defined; by parameters of an operator, whose scope extends until the end of the operator definition; by definitions in $\LET$ constructs, whose binding extends to the end of the expression in the $\IN$ clause; and by quantified variables, whose binding extends to the end of the quantifier’s scope, as explained above. So this is not allowed:
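    x == 3
    Foo(x) == x + 1    \* illegal — the parameter x is already bound by the definition above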

and neither is this:
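    Bar(y) == \E y : P(y)    \* illegal — the quantifier rebinds the parameter y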

But this is allowed because $x$ is not in scope when $Foo$ is defined:
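    Foo(x) == x + 1
    x == 3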

Unless an operator is recursive, in which case its name is introduced to the scope with the $\RECURSIVE$ declaration, a name being defined on the left-hand side of a definition is not yet in scope inside the definition body. This is why we can, and often do, write $x \defeq \CHOOSE x : P(x)$. There is no significance to the use of the same variable name on both the left- and right-hand sides of the definition, which is equivalent to $x \defeq \CHOOSE z : P(z)$.

The values of the logic are the objects of its structure. As we’ll see momentarily, the data logic — the language used to describe the static data of algorithms — in TLA+ is a set theory, and so the values of the logic are all sets. Operators are not themselves values, and we cannot quantify over them. For example, we cannot write:
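    \A F(_) : \A x : F(x) = F(x)    \* illegal — we cannot quantify over operators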

Operators work by syntactic substitution17; from a practical point of view they behave similarly to how hygienic macros work in a programming language, but they must evaluate to a value. We, therefore, cannot define the following operator, as the $\LAMBDA$ expression is an operator, not a value:
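    Adder(a) == LAMBDA b : a + b    \* illegal — a LAMBDA is an operator, not a value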

This could, however, be easily achieved with functions — which are values in the logic — as we’ll see later.

From a logic point-of-view, operators are second-order objects, or objects of the meta-language. TLA+ allows treating them as free variables denoting second-order objects using the $\CONSTANT$ construct we’ll learn in the next section, as well as in the proof language. So while the above expression that uses quantifiers over operators is illegal, its intent can be expressed in TLA+, as we’ll see shortly.

Even though TLA+ is considered untyped, its syntax enforces more than just the well-formedness of its first-order logic (when rejecting ill-formed expressions such as $A \land \land B$). TLA+ also syntactically enforces operator arity, and it is a syntax error to use an operator with a different number of arguments from its definition. In parts 3 and 4 we will see some more properties that are enforced by the TLA+ syntax, and that is essentially done through a mechanism of type inference.

Free and Uninterpreted Variables: CONSTANT

Values or operators can be declared without being defined (with $\defeq$); instead, their meaning can be specified with the use of axioms. Symbols that are not directly defined in terms of others, but are given axioms that determine how they interact, are called uninterpreted symbols in logic. In part 4 we’ll see how those declarations can also be seen as inputs or parameters, when we learn of modules and module instantiation.

Free variables/uninterpreted symbols are declared in TLA+ with the $\CONSTANT$ keyword (or its synonym $\CONSTANTS$), and assumptions about them — i.e. formulas that are axiomatically asserted to be true — are introduced with the $\ASSUME$ keyword (or its synonym $\ASSUMPTION$). For example, if $P$ and $Q$ are some predicates, we can write:
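    CONSTANTS A, B
    ASSUME P(A) /\ Q(B)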

Assumptions can also be named ($\ASSUME A1 \defeq \ldots$) so that they can be referred to in proofs, where they’re treated as axioms. In fact, the $\AXIOM$ keyword is synonymous with $\ASSUME$ (with the minor difference that the model checker TLC checks assumptions declared with $\ASSUME$ but not with $\AXIOM$, which is why $\ASSUME$ is more common in TLA+ specifications).

From a formal logic perspective, constants are just free variables. That $P(A)$ and $Q(B)$ hold is all we know about $A$ and $B$. If we use $A$ in a theorem, we must therefore prove the theorem for any value $A$ such that $P(A)$. In addition, the value of $Q(A)$ or $F(A)$ (for any operator $F$ that is not just defined in terms of $P$) — i.e. that of any expression whose value cannot be determined from our axioms about $A$ — is undefined, in the exact same sense explained in the section about $\CHOOSE$.

While $\CHOOSE x : P(x)$ and
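    CONSTANT x
    ASSUME P(x)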

initially appear to be very different — the former talks about some value of $x$ such that $P(x)$, while the latter talks about any $x$ such that $P(x)$ — proving a theorem about $x$ in both cases is the same: we must prove it for every $x$ such that $P(x)$, because in either case we know nothing about $x$ except that $P(x)$ is true. Nevertheless, the two constructs are used in very different ways. $\CHOOSE$ can be used inside any expression and its predicate may refer to variables bound by quantifiers or operator parameters, while a $\CONSTANT$ can be instantiated to particular values (we’ll see how that’s done when we learn about modules in part 4).

As constants can be not only values but also operators, they are technically free second-order variables. The reason is that we can write something like the following (we’ll introduce the $\THEOREM$ construct when we talk about proofs, but its meaning should be intuitive here):
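    CONSTANT F(_)
    THEOREM \A x : F(x) = F(x)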

This theorem treats $F$ as any operator, i.e. as a free variable, which is equivalent to the quantified second-order formula that we cannot write in TLA+:
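$\A F : \A x : F(x) = F(x)$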

However, TLA+ does not define an equality relation over those second-order objects (i.e., operators). Writing,
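    ASSUME F = G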

is a syntax error. One must write $\ASSUME \A x : F(x) = G(x)$.

Constants are one way in which TLA+ allows us to state second-order propositions even though it is a first-order logic. We’ll say more about that in the section on proofs.

The name constant simply means that the declared value/operator is not a temporal one that changes over time when specifying a dynamic system. We’ll learn about temporal variables in part 3, when we learn how to write algorithms in TLA+, but it is important to keep in mind that temporal variables may change during an execution of an algorithm while constants may not, though constants may differ between different executions.

Sets

Now that we have the core logic, we introduce the universe of discourse of that logic, or, put differently, the logical theory we will work with. The data logic is based on ZFC set theory, so all values in TLA+ are formally sets. Even the number 1 is a set, the string $\str{hello}$ is a set, and the function $\tan$ is a set. But as we’ll see later, how those values are encoded as sets is hidden, and their internal structure is so inaccessible to us that it may as well not exist at all, and we can safely treat them as primitive values.

When working with set theory, we usually use first-order logic. However, as the values of the universe are sets, the expressivity of the logic is similar to that of higher-order logic18. In fact, FOL over ZFC is considered the “standard” formal system for (ordinary) mathematics.

Set Fundamentals

The signature (the symbols used in the logic beyond those of first-order logic) contains just the membership operator $\in$, which is a binary relation defined on all sets (i.e., for any sets $a$ and $b$, $a \in b$ is defined), and the theory is based on the axioms of Zermelo-Fraenkel set theory with the axiom of choice (ZFC). We then define the equality relation, $=$, using membership as $a = b \defeq \A x : x \in a \equiv x \in b$ (axiom of extensionality); equality is therefore also defined on all sets (as equality is often taken to be a primitive relation on the logic’s universe of discourse rather than defined by the theory, the axiom of extensionality can be taken to be defining the relationship between the equality and membership relations).

The quantifiers and the $\CHOOSE$ operator can – and usually do – appear bounded to a set (e.g. $\forall x \in S : \ldots$), in which case they are defined as follows:
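    \A x \in S : P(x)      ==  \A x : (x \in S) => P(x)
    \E x \in S : P(x)      ==  \E x : (x \in S) /\ P(x)
    CHOOSE x \in S : P(x)  ==  CHOOSE x : (x \in S) /\ P(x)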

Next we specify which sets exist in our universe using the axioms of set theory.19 To make this more precise (and to tie everything together), recall that stating that some object $S$ — in our case, a set — satisfying some property $P$ exists, i.e., $\E S : P(S)$, can be done using choice: $P(\CHOOSE S : P(S))$. The following axioms can all be understood as being of this form – i.e., they postulate $\E S : P(S)$ for some predicate $P$ – only they introduce notation which is used instead of writing $\CHOOSE S : P(S)$.

The constant $\set{}$ is the empty set (axiom of the empty set), and $\A x : x \notin \set{}$ (which is an abbreviation for $\A x : \neg (x \in \set{})$). I.e. $\set{} \defeq \CHOOSE S : \A x : x \notin S$, and this set exists by axiom.

If $a$ and $b$ are sets, then $S = \set{a, b}$ is another set (axiom of pairing), and $\A x : x \in \set{a, b} \equiv x = a \lor x = b$.

If $S$ is a set and $P$ is some predicate, then a set $T$ that contains all those members of $S$ that satisfy $P$ also exists, i.e., $\A x : x \in T \equiv x \in S \land P(x)$ (axiom of specification/separation). We introduce a convenient syntax (called set-comprehension), and write the set $T$ like so: $\set{x \in S : P(x)}$. For example, $\set{n \in Nat : n \% 2 = 0}$ is the set of even natural numbers. Note that a colon is used instead of the more common | in the set comprehension expression.

If $S$ is a set and $F$ some expression then another form of comprehension, $U = \set{F(x) : x \in S}$, is a set constructed by replacing each member of $S$ with its image under the operator $F$ (axiom of replacement)20; formally, $\A x : x \in U \equiv \E z \in S : F(z) = x$. For example, $\set{2*n : n \in Nat}$ is also the set of even natural numbers.

If $S$ is a set, then $\UNION S$ is a set containing all members of the members of $S$ (axiom of union), i.e., $\A x : x \in \UNION S \equiv \E s \in S : x \in s$. E.g. $\UNION \set{\set{1, 2}, \set{2, 3}} = \set{1, 2, 3}$.

If $S$ is a set, then $\SUBSET S$ is the power set of $S$, namely the set of all subsets of $S$ (axiom of the power set), or, $\A x : x\in \SUBSET S \equiv \A z \in x : z\in S$ (note that for any set $S$, the empty set is a member of $\SUBSET S$ because $\A z \in \set{} : z \in S$). For example, $\SUBSET \set{1,2,3} = \set{\set{}, \set{1}, \set{2}, \set{3}, \set{1, 2}, \set{1, 3}, \set{2, 3}, \set{1, 2, 3}}$.

TLA+ has the usual set operators $\cup$ (union), $\cap$ (intersection), $\subseteq$ (subset or equal), $\subset$ (strict subset), \ (set difference) etc., all trivially defined using the fundamental operations above (for example $a \subseteq b \defeq \A x : x \in a \implies x \in b$, and $a \cap b \defeq \set{x \in a : x\in b}$).

The operations defined above, based on the axioms of ZFC, fully specify which sets exist in our universe. As $\CHOOSE$ only selects values, you can’t $\CHOOSE$ the Russell paradox “set” because one does not exist, as it cannot be constructed using any of the set construction operations. The value of $\CHOOSE x : \A s : s \in x \equiv s \notin s$ (which says that $x$ is the set of all sets that do not contain themselves — a paradox) is, therefore, undefined, as no such set $x$ exists; the right-hand side of the expression is false for all sets in ZFC.

Let’s see some examples of TLA+ definitions using sets. Here is an example of a useful operator that states that there exists one and only one member of a set satisfying some predicate:
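    ExistsOne(S, P(_)) == \E x \in S : P(x) /\ \A y \in S : P(y) => y = x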

(this is a common pattern in logic for expressing uniqueness; a value is unique iff any two variables referring to it must be equal).

Note that we could have defined an unbounded operator, $ExistsOne0$,
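    ExistsOne0(P(_)) == \E x : P(x) /\ \A y : P(y) => y = x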

and then defined $ExistsOne$ in terms of $ExistsOne0$ like so:
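    ExistsOne(S, P(_)) == ExistsOne0(LAMBDA x : x \in S /\ P(x))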

Now let’s define some richer mathematical concepts. We’ll define a proset — a set with a preorder — a poset — a partially-ordered set — and a totally-ordered set:
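    \* a sketch; the operator names and exact formulation are illustrative
    Proset(S, R(_, _)) ==
        /\ \A x \in S : R(x, x)                              \* reflexivity
        /\ \A x, y, z \in S : R(x, y) /\ R(y, z) => R(x, z)  \* transitivity

    Poset(S, R(_, _)) ==
        /\ Proset(S, R)
        /\ \A x, y \in S : R(x, y) /\ R(y, x) => x = y       \* antisymmetry

    TotalOrder(S, R(_, _)) ==
        /\ Poset(S, R)
        /\ \A x, y \in S : R(x, y) \/ R(y, x)                \* totality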

And here are some important algebraic structures, the semigroup, the monoid and the group21:
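    \* again a sketch, with the binary operation passed as an operator argument
    Semigroup(S, Op(_, _)) ==
        /\ \A a, b \in S : Op(a, b) \in S                          \* closure
        /\ \A a, b, c \in S : Op(Op(a, b), c) = Op(a, Op(b, c))    \* associativity

    Monoid(S, Op(_, _)) ==
        /\ Semigroup(S, Op)
        /\ \E e \in S : \A a \in S : Op(e, a) = a /\ Op(a, e) = a  \* identity

    Group(S, Op(_, _)) ==
        /\ Monoid(S, Op)
        /\ \E e \in S : \A a \in S :
              /\ Op(e, a) = a /\ Op(a, e) = a                      \* the identity, repeated
              /\ \E b \in S : Op(a, b) = e /\ Op(b, a) = e         \* inverses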

I could have defined $Group$ directly in terms of $Semigroup$, as we must repeat the identity condition anyway (or use a construct we haven’t yet learned), but I wanted to emphasize that a group is a monoid. That the identity element is unique is a theorem we will formally state and prove later.

Finally, the quotient set of a set $S$ contains its equivalence classes with respect to an equivalence $\sim$ (often written $S/{\sim}$) and can be defined like so:
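    Quotient(S, R(_, _)) ==
        { x \in SUBSET S :
             /\ \A a, b \in x : R(a, b)                     \* all members of x are equivalent
             /\ x # {}                                      \* x is not empty
             /\ \A a \in x, b \in S : R(a, b) => b \in x }  \* x is maximal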

The first conjunct says that all members of $x$ — a member of the quotient — are equivalent, the second conjunct says that $x$ must not be empty, and the third conjunct says that $x$ is maximal, i.e., contains all members of $S$ that are equivalent to those in $x$; therefore $x$ is an equivalence class.

People who are not accustomed to writing formal definitions — and even those who are — may neglect to write the second or third conjuncts, but the TLA+ tooling can help. $Quotient(\set{1,2,3,4}, \LAMBDA a, b : a \% 2 = b \% 2)$ specifies the equivalence classes of numbers that are of equal parity: $\set{\set{1,3}, \set{2, 4}}$. The TLA+ toolbox allows you to evaluate expressions using the model checker TLC22; not quite a REPL, but almost. If you try to evaluate the above expression without the third conjunct in the definition you’ll get $\set{\set{1}, \set{2}, \set{3}, \set{4}, \set{1,3}, \set{2, 4}}$ and realize your mistake.

There are often many equivalent ways of defining mathematical concepts. For example, here is another way of defining $Quotient$:
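    Quotient(S, R(_, _)) == { { b \in S : R(a, b) } : a \in S }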

The FiniteSets module (which is part of the TLA+ standard module library), defines useful set operators, like $IsFiniteSet(S)$, which is true iff $S$ is finite, and $Cardinality(S)$, which is the number of elements in $S$ if $S$ is finite (we define $Cardinality$ ourselves later on).

$ChooseOne$ is a useful operator, whose value is defined only if exactly one element of a given set matches the predicate:
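    ChooseOne(S, P(_)) == CHOOSE x \in S : P(x) /\ \A y \in S : P(y) => y = x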

Another useful operator is $AnyOf$, which picks an arbitrary element from a set (and is undefined if the set is empty):
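    AnyOf(S) == CHOOSE x : x \in S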

Speaking of undefined: because we now know that all values in TLA+ are sets, and because the relations $\in$ and $=$ are defined on all sets, we should revisit our definition of “undefined”, because we can now have an undefined value used in an expression yet still know something about the expression’s value. For example, $AnyOf(\set{})$ is undefined, which means it is some unknowable value. But as that value is a set, then, say, $AnyOf(\set{}) = 42$ must be either $\TRUE$ or $\FALSE$ (and so, similarly, $AnyOf(\set{3, 5})$ must be either 3 or 5). In such cases I will also say that the expression is undefined even though it is known to be in some set, because the logic does not let us determine its precise value.

We can use $\CONSTANT$ declarations and the definitions we’ve just learned as follows:
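    CONSTANTS S, R(_, _)
    ASSUME Poset(S, R)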

We can now think of $S$ as a type parameter that has a partial order relation defined on it, similar to a typeclass in some functional programming languages, or interfaces in object-oriented ones.

Some Important Sets

The set $\BOOLEAN$ is an integral part of TLA+. Its existence is crucial because TLA+ does not distinguish between the elements of this set and logical truth values – $\TRUE$ and $\FALSE$ are ordinary values, i.e. sets, and if $p$ is some predicate (e.g. $p \defeq a \in b$) then $p \in \BOOLEAN$ – and a formula (and a predicate) is, therefore, any expression whose value is a boolean.23 For example, $(3 < 4) \in \BOOLEAN$ is true; we can write $(3 < 4) = (8 > 2)$ – using $=$ instead of $\equiv$ even though the latter is a logical connective while the former is a relation on sets – and it is true; and instead of $(A \implies B)\land (\neg A \implies C)$, or the equivalent $(A \land B) \lor (\neg A \land C)$, we can write $\IF A \THEN B \ELSE C$, even though, as you may recall, $\IF/\THEN/\ELSE$ is defined in terms of $\CHOOSE$.

$\STRING$ is another built-in set, which is the set of all finite character strings. Strings are sequences of characters (we’ll cover sequences in the next section), and characters are primitive values with an opaque encoding; TLA+ has no syntax for character literals.

The modules Naturals, Integers, and Reals define the sets $Nat$ ($\mathbb{N}$), $Int$ ($\mathbb{Z}$), and $Real$ ($\mathbb{R}$) respectively, with $Nat \subset Int \subset Real$, along with the familiar arithmetic operators (the natural numbers and the integers have addition, subtraction, multiplication, integer division, $\div$, and modulo, $\%$, defined for them; division is defined only for the reals), order relations ($\leq$) etc. Importing the public definitions from a module is done with the $\EXTENDS$ statement, as in $\EXTENDS Naturals$. You can’t use arithmetic operators in a TLA+ specification (the numeric literals are built-in) unless you import one of those modules that defines them. TLA+ currently has no special syntax for complex numbers or matrices, nor does it include a standard module defining the rationals ($\mathbb{Q}$).

The special syntax $a .. b$ defines the set $\set{n \in Int : n \geq a \land n \leq b}$, so $-1..1 = \set{-1, 0, 1}$, $2..5 = \set{2,3,4,5}$ and $5..2 = \set{}$.

Division for real numbers (defined in the Reals module) could be defined like so:
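    a / b == CHOOSE c \in Real : a = b * c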

This definition immediately tells us, by the meaning of the $\CHOOSE$ operator, that $1/0$ is undefined24, in the very precise sense explained in our discussion of $\CHOOSE$.

With numbers, arithmetic operations and the order relation on numbers we can start doing some more concrete math. We’ll begin with something simple: a definition of the operator $Prime$, which is a predicate saying whether a number is prime (other definitions are also possible):
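    Prime(n) == n \in Nat /\ n > 1 /\ \A d \in 2..(n - 1) : n % d # 0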

Now let’s define the GCD operator, the greatest common divisor of two natural numbers:
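    Divides(d, n) == \E q \in Nat : n = q * d
    Divisors(n)   == { d \in 1..n : Divides(d, n) }
    Max(S)        == CHOOSE x \in S : \A y \in S : x >= y
    GCD(m, n)     == Max(Divisors(m) \cap Divisors(n))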

Notice how straightforward the GCD definition is: first we define what a divisor is; then we define what the divisors of a number are. The common divisors are then just the intersection of the two numbers’ divisors, and the GCD is the greatest among them.

We can use the sets we’ve seen and some definitions we made above to state a couple of interesting theorems:
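    \* for instance, using the operators defined above:
    THEOREM \A n \in Nat : \E p \in Nat : p > n /\ Prime(p)   \* there are infinitely many primes
    THEOREM \A m, n \in Nat \ {0} : GCD(m, n) = GCD(n, m)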

It is important to point out that $Nat$ is the set of all natural numbers, $Int$ is the set of all integers, and $Real$ is the set of the actual real numbers — all uncountably many of them. If we want to work with, say, 32-bit integers or 64-bit floating point numbers, we need to define them and the arithmetic operations on them. You may wonder what good are so many real numbers considering that a program can only represent a small, finite subset of them. The answer is that TLA+ is not a programming language but a specification language, and it is useful to specify and reason about more than just the actual variables of your program. For example, if you’re specifying a numerical algorithm, it is easier to analyze its error if you can actually express the precise result it tries to approximate. Or if you are building a cyber-physical system — a discrete system that interacts with objects in the real world; think drone, sensor or robot — you may want to also model the system’s environment, which is easier to do by describing it as a continuous system using familiar physics equations (we will talk about cyber-physical systems in part 4).

Set Equality

In set theory, all values are encoded as some set (so the number 1 is somehow encoded as a set), and, as we’ve learned, equality is a relation defined on all sets, and could have been defined as $a = b \defeq \A x : x \in a \equiv x \in b$ (except that, as we’ve seen, the boolean values are also sets in TLA+ rather than special constants of the logic itself, and so $\equiv$ is defined in terms of $=$, and $=$ must be primitive).

When a set is defined, this encoding is, at least notionally, a part of the definition, and so equality is also defined for that set. However, comparing values in different sets is another matter. TLA+ lets us hide encodings within modules so that their details do not escape the module boundary and appear to external modules as primitives or uninterpreted (we will cover modules in part 4), and so the value of $1 \in 2$ or of $1 = \str{hi}$ is undefined (i.e., indeterminable). As the integers and strings are defined separately, neither “knows” about the encoding of the other; we simply don’t know whether $1$ and $\str{hi}$ have the same encoding, and so are the same set and are equal, or not (we do know that the integer $1$ and the real number $1$ are equal because the reals are defined as a superset of the integers).

Because equality (and membership) is defined on all sets to have a boolean value, we do know that $1 = \str{hi}$ must be some boolean, namely, either $\TRUE$ or $\FALSE$, but we have no way of determining which. In a typed formalism, the expression $1 = \str{hi}$ is simply illegal (ill-formed); in a dynamically-typed programming language it evaluates to false. But in TLA+ it is simply nonsensical — we cannot assign it any meaning, i.e. a value25 — like the English expression “Thursday is purple”, which is grammatically legal, or well-formed, yet carries no well-accepted meaning. If you’d like to revisit the tradeoffs of this choice, I’ll refer you to the two papers I linked to above26.

As a result, we don’t know whether the set $\set{1, \str{hi}}$ contains one or two elements because we cannot know if 1 is equal to “hi” or not, as we don’t know how “hi” is encoded (TLC forbids such a set altogether, as its elements are incomparable). So what do we do if we want to have a set that contains the values 0, 1, and UNKNOWN? We use $\CHOOSE$ to define UNKNOWN like so: $\text{UNKNOWN} \defeq \CHOOSE x : x \notin \set{0, 1}$, and then we can write the set $\set{0, 1, \text{UNKNOWN}}$, now known to contain three elements.

So while TLA+ is untyped, it does not work in a similar way to untyped programming languages, which are usually “dynamically typed.”

Functions

Most interesting objects we deal with both in mathematics and programming are not normally thought of as sets. TLA+ lets us cleanly express standard mathematical and programming objects, for which the function serves as the main building block.

Usually, a function is defined to be a one-valued relation, where a relation is a set of pairs — in other words, a function is defined by its graph — but in TLA+ functions are not defined as sets of pairs but as primitives (meaning that, like numbers, their encoding as sets is unknown, or opaque27). In fact, it is pairs that are actually functions in TLA+ as they’re a special case of sequences, which are in turn a special case of functions, as we’ll see. In any event, functions in TLA+ are not computations; they have no dynamic behavior and no computational complexity. They are just values in the state space of algorithms. Programmers may best think of them as associative arrays, albeit possibly infinite in size (even uncountably big).

If $A$ and $B$ are sets, $[A \to B]$ is the set of all functions from $A$ to $B$.28 Function application is denoted with square brackets ($f[x]$) to syntactically differentiate it from operator “application” (really, substitution). Unlike an operator, whose arguments can be any value in the universe or even other operators, every function has a domain — a set — which is obtained with the $\DOMAIN$ operator, like so: $\A f \in [A \to B] : \DOMAIN f = A$. Because a function has a set domain, we cannot have a function whose parameter can be any set or even any function (as we can with operators), because that would make the domain of the function “too large to be a set”, namely, a proper class, which doesn’t exist in ZFC.

Functions are (first-order) objects in our logic — they are just (opaque) sets themselves — while operators are not (they are second-order objects, or objects in the metalanguage). We can therefore quantify over functions as we do over any other values, as in $\A f \in [A \to B] : … $ Because a function’s encoding as a set is opaque, statements like $3 \in f$, for some function $f$, are undefined and nonsensical.

The image of a function can be defined as:
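    Image(f) == {f[x] : x \in DOMAIN f}  \* the set of all values f takes on its domain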

Equality for functions is extensional, meaning, for two functions, $f$ and $g$,
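    f = g  <=>  DOMAIN f = DOMAIN g /\ \A x \in DOMAIN f : f[x] = g[x]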

We could define a specific function $double$ on the natural numbers like so,
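    double == CHOOSE f \in [Nat -> Nat] : \A n \in Nat : f[n] = 2 * n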

but that would be very cumbersome and so TLA+ allows us to define it like so:
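    double == [n \in Nat |-> 2 * n]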

or like so,
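    double[n \in Nat] == 2 * n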

The latter form is syntactic sugar for
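    double == CHOOSE double : double = [n \in Nat |-> 2 * n]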

Because of this, functions defined in this form can be recursive:
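    fact[n \in Nat] == IF n = 0 THEN 1 ELSE n * fact[n - 1]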

as this is just shorthand for
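    fact == CHOOSE fact : fact = [n \in Nat |-> IF n = 0 THEN 1 ELSE n * fact[n - 1]]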

which defines $fact$ as a fixed-point (or fixpoint). To see that more clearly, we can define:
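    Fix(F(_)) == CHOOSE f : f = F(f)  \* one way to write a general fixpoint operator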

and then:
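    fact == Fix(LAMBDA f : [n \in Nat |-> IF n = 0 THEN 1 ELSE n * f[n - 1]])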

Note that recursive functions, because they are really specified as fixpoints, may turn out to be undefined if the fixpoint does not exist, as in the following case:
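    \* for instance: no function satisfies the equation bad = [n \in Nat |-> bad[n] + 1]
    bad[n \in Nat] == bad[n] + 1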

So when using recursive function definitions in proofs, you may need to prove that they define actual functions by showing that they indeed yield a value for every element of their domain (or, in CS speak, that they’re total).

Functions can be defined on any set (including sets of functions) and “return” any value, including other functions, so we can define the “higher-order” function:
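    add == [n \in Nat |-> [m \in Nat |-> m + n]]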

and then,
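    inc == add[1]  \* the function that increments every natural number by one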

Formally, however, unlike in typed formalisms, functions such as $add$ are not really higher-order in TLA+, as all functions — all values really — have the same type; they are all values in the universe of sets.

We are now in a position to define a few common important properties of functions: injection (a one-to-one mapping), surjection (an onto mapping) and bijection:
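    \* a sketch of plausible definitions; S stands for the intended codomain
    Injection(f, S)  == \A x, y \in DOMAIN f : x # y => f[x] # f[y]
    Surjection(f, S) == \A y \in S : \E x \in DOMAIN f : f[x] = y
    Bijection(f, S)  == Injection(f, S) /\ Surjection(f, S)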

where the variable $S$ represents the function’s codomain. Note that we could have also defined $Injection$ as $\A x, y \in \DOMAIN f : f[x] = f[y] \implies x = y$.

Now let’s define a function composition operator (I will use the $\bullet$ operator because $\circ$, the common symbol for function composition, is normally used in TLA+ for sequence concatenation):
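    g \bullet f == [x \in DOMAIN f |-> g[f[x]]]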

Note that $g \bullet f$ is only defined if $f$ and $g$ are composable, meaning $Image(f) \subseteq \DOMAIN g$ — remember, there is no such thing as a partial function in ordinary math, at least not in the same sense as in programming. If we want function composition that works like that of partial functions in programming, we could define:
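    g \star f == [x \in {y \in DOMAIN f : f[y] \in DOMAIN g} |-> g[f[x]]]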

Unlike $g \bullet f$, $g \star f$ is always a function, but it is not necessarily a function on $\DOMAIN f$ but potentially only on a subset of it, and it may even be the empty function (the function on the empty set, one that is undefined for any argument) if $Image(f) \cap \DOMAIN g = \set{}$.

The following operator defines an identity function on an arbitrary set:
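    Identity(S) == [x \in S |-> x]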

The following is a theorem about $Identity$:
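    \* a plausible statement of such a theorem (identity is neutral for composition):
    THEOREM \A S : \A f \in [S -> S] : (f \bullet Identity(S) = f) /\ (Identity(S) \bullet f = f)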

Let’s also define an $Inverse$ operator for the inverse of any invertible function (the definition says: for any $y$ in the function’s image we pick an $x$ in the domain that is mapped to $y$, provided that $x$ is the only point that maps to $y$):
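    Inverse(f) == [y \in Image(f) |->
                     CHOOSE x \in DOMAIN f :
                        f[x] = y /\ \A z \in DOMAIN f : f[z] = y => z = x]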

A function is invertible if for every element in its codomain, there is one and only one element in its domain that is mapped to it, or in other words, if it is a bijection. We would like to state that as a general theorem about functions, but here we have a problem: the collection of all functions is not a set (but a proper class), so we cannot write $\A f \in Function : \ldots$ (unlike in typed functional languages, where the type forall a b. a → b denotes all functions). However, we can define the predicate $Fn$, with a seemingly silly definition, which is true iff its argument is any function:
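    Fn(f) == f = [x \in DOMAIN f |-> f[x]]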

The following is, then, a theorem:
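    \* a plausible theorem in this spirit: inverting after applying recovers the argument
    THEOREM \A f : Fn(f) /\ Injection(f, Image(f)) =>
                      \A x \in DOMAIN f : Inverse(f)[f[x]] = x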

as is:
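    \* and the other direction: applying after inverting recovers the image point
    THEOREM \A f : Fn(f) /\ Injection(f, Image(f)) =>
                      \A y \in Image(f) : f[Inverse(f)[y]] = y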

We can manipulate functions in all sorts of ways. For example, if $inc$ is the function defined above that increments every natural number by one, we can, of course, define the following:
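    \* the particular point and values are hypothetical, for illustration
    inc2 == [n \in Nat |-> IF n = 3 THEN 100 ELSE inc[n]]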

But the $\EXCEPT$ construct, which allows us to “change” specific function values, makes it easier:
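    inc2 == [inc EXCEPT ![3] = 100]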

The above could also be written as:
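    inc2 == [inc EXCEPT ![3] = @ + 96]  \* @ stands for inc[3], i.e. 4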

where $@$ refers to the original function’s value at that point.

$\EXCEPT$ expressions also support cases where functions can be “nested” (i.e., a function returns a function, or a multi-dimensional array), so, if we wanted to express incrementing the (1,2) coordinate of a two-dimensional array, instead of writing:
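    arr2 == [arr EXCEPT ![1] = [@ EXCEPT ![2] = @ + 1]]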

we can write:
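    arr2 == [arr EXCEPT ![1][2] = @ + 1]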

We also have syntactic sugar to define and apply functions with multiple parameters. We’ll talk about it once we’ve learned about tuples.

It is as easy to define non-computable functions as it is to define computable ones. Here is an example of a non-computable function:
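    \* an illustrative sketch (not necessarily the original example): deciding whether
    \* an arbitrary infinite function is constantly zero is not computable
    allzero == [f \in [Nat -> Nat] |-> IF f = [n \in Nat |-> 0] THEN 1 ELSE 0]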

As is, by the way, this one, even though at first it seems so simple and natural:
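    \* presumably the dirichlet function referenced later in this installment,
    \* assuming a set Rat of the rational numbers
    dirichlet == [x \in Real |-> IF x \in Rat THEN 1 ELSE 0]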

Now let’s use functions and one of the sets we’ve seen to formally define some more important mathematical concepts:29
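    \* a sketch of the Limit and Derivative operators referenced in the footnote;
    \* Abs is a helper introduced here for the epsilon-delta definition
    Abs(x) == IF x >= 0 THEN x ELSE -x

    Limit(f, a) == CHOOSE b \in Real :
                      \A e \in {r \in Real : r > 0} :
                         \E d \in {r \in Real : r > 0} :
                            \A x \in DOMAIN f :
                               0 < Abs(x - a) /\ Abs(x - a) < d => Abs(f[x] - b) < e

    Derivative(f, a) == Limit([x \in (DOMAIN f) \ {a} |-> (f[x] - f[a]) / (x - a)], a)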

While we can define any computable function (and many non-computable ones, too), those functions are not how we describe computations in TLA+. Unlike in specification languages based on functional programming, computations in TLA+ are not functions but dynamical systems (like ODEs). Instead, we use functions as data structures (associative arrays) or, like operators, as the primitive operations of our computations. For example, when we write a high-level specification and don’t wish to model the dynamic behavior of say, the factorial subroutine, but would rather consider it a primitive operation, we specify it as a function.

Sequences and Tuples

A sequence is a finite or infinite ordered list of values. In TLA+, sequences are defined as functions on some prefix of $Nat \setminus \set{0}$, and so use 1-based indexing as is the usual practice in math, but not in most programming languages, where 0-based indexing is common.

The sequence of all even numbers in ascending order is just $[n \in Nat \setminus \set{0} \mapsto 2 * (n - 1)]$, and the sequence of all even numbers between 2 and 200 is $[n \in 1..100\mapsto 2 * n]$.

The Sequences module contains several useful definitions for working with sequences. $Seq(S)$ is the set of all finite sequences over the set $S$ (the set of infinite sequences is just $[Nat\setminus\set{0} \to S]$), and is defined like so:
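    Seq(S) == UNION {[1..n -> S] : n \in Nat}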

The $Len(s)$ operator is the length of a sequence (which can be defined as $\CHOOSE n \in Nat : \DOMAIN s = 1..n$); the $\circ$ operator concatenates two sequences; the $Append(seq, x)$ operator appends the value $x$ to the end of the sequence $seq$; $Head(seq)$ is the first element in the sequence ($seq[1]$) and $Tail(seq)$ is the tail ($[i \in 1..Len(seq)-1 \mapsto seq[i+1]]$). $SubSeq(seq, i, j)$ is the subsequence of $seq$ between $i$ and $j$ inclusive, and $SelectSeq(seq, P(\_))$ is the sequence filtered to contain only those elements $x$ such that $P(x)$ is true.

As strings are just sequences of characters, the Sequences module’s $\circ$ operator also concatenates strings, $SubSeq$ selects a substring, etc.

Defining the familiar map operation from functional programming on sequences is easy:
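    Map(Op(_), seq) == [i \in DOMAIN seq |-> Op(seq[i])]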

flatmap requires a bit more work and the use of a recursive operator (the inner helper operator is required as TLA+ does not allow recursive operators with operator parameters):
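    \* a sketch: the inner Helper captures Op from the enclosing scope, since a
    \* recursive operator may not itself take an operator parameter
    FlatMap(Op(_), seq) ==
       LET RECURSIVE Helper(_)
           Helper(s) == IF s = <<>> THEN <<>> ELSE Op(Head(s)) \o Helper(Tail(s))
       IN  Helper(seq)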

TLA+ has special syntax for finite sequence literals, also called tuples (or lists, if you like). The tuple $\seq{10, \str{hi}, [x \in N \mapsto x + 1]}$ is simply syntax sugar for:
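    [i \in 1..3 |-> CASE i = 1 -> 10
                      [] i = 2 -> "hi"
                      [] i = 3 -> [x \in N |-> x + 1]]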

If $A$ and $B$ are sets, then $A \times B$ is their Cartesian product, $\set{\seq{a, b} : a \in A , b \in B}$. Similarly, $A \times B \times C = \set{\seq{a, b, c} : a \in A , b \in B, c \in C}$ etc. The Cartesian product is not associative in TLA+, so $A \times B \times C \neq (A \times B) \times C \neq A \times (B \times C)$, because $\seq{a,b,c} \neq \seq{\seq{a,b},c} \neq \seq{a,\seq{b,c}}$ (and so $\times$ is not an ordinary binary infix operator – despite behaving almost like one – but a special construct).

For convenience, tuples can be used as quantified variables: instead of $\E pair \in A \times B : pair[1] > pair[2]$, we can write $\E \seq{a, b} \in A \times B : a > b$.

Functions that take multiple parameters are syntactic sugar defined in terms of tuples. We can write:
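    \* a hypothetical example; STRING is the set of all strings
    foo[n \in Nat, s \in STRING] == n + Len(s)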

which is just syntax sugar for
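    foo == [<<n, s>> \in Nat \X STRING |-> n + Len(s)]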

and, similarly, we can define the same function as
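    foo == [n \in Nat, s \in STRING |-> n + Len(s)]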

We can apply such functions with the expression $foo[3, \str{hi}]$, which is just syntactic sugar for $foo[\seq{3, \str{hi}}]$.

Records

The record,
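    r == [a |-> 100, b |-> 20]  \* hypothetical field values, chosen for the examples below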

is syntax sugar for the function,
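    r == [x \in {"a", "b"} |-> IF x = "a" THEN 100 ELSE 20]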

and $r.a$ is syntax sugar for $r[\str{a}]$. $\EXCEPT$ also has special syntax for records, so a record that is equal to $r$ but whose “field” $b$ is 200 could be created with $[r \;\EXCEPT !.b = 200]$ (or with $[r \;\EXCEPT !.b = @*10]$). Nested structures also work (e.g. $[r \;\EXCEPT !.a.k = @+1]$). Finally, we have special syntax for sets of records:
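    [name : STRING, id : Nat]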

is the set of all records with the fields $name$ and $id$, whose $name$ field is a string and whose $id$ is a natural number.

Operators vs. Values

Should you express mathematical functions as TLA+ functions or as operators? Similarly, should you express relations as sets of pairs or as binary operators? That’s entirely up to you. Expressing them as objects in the theory, i.e. as functions or sets, “reifies” them, allowing us to refer to them or quantify over them, which may or may not be necessary in your specification. Using operators may be more convenient in terms of syntax (e.g. $x \preceq y$ looks nicer than $\seq{x,y} \in OrderRel$). Sometimes, the choice is dictated by the capabilities of the tools you’d like to use in verifying your specification. For example, the mechanical proof system TLAPS currently does not support recursive operator definitions, but it does support recursive functions (which are just syntax sugar for $\CHOOSE$)30. Also, operators can express things that functions simply can’t. For example, the sequence length operator or the set cardinality operator (which we’ll see later) could not have been defined as functions, because they’re not functions in set theory: their domain is not a set but “too many” sets (i.e. a collection, a class that cannot be constructed as a set in ZFC).

Putting it All Together

The logic we’ve learned allows us to define anything we want about the data computer programs operate on.

More Complex Definitions

Now let’s use everything we’ve seen to define the $Cardinality(S)$ operator, which is the number of elements in the finite set $S$ (the operator is provided by the standard module FiniteSets).

We can use a computer-sciency definition using recursion, based on the observation that the cardinality of a finite set is 1 plus the cardinality of the set after removing an arbitrary element from it:
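    RECURSIVE Cardinality(_)
    Cardinality(S) == IF S = {} THEN 0
                      ELSE 1 + Cardinality(S \ {CHOOSE x : x \in S})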

We can also opt for a more “mathematical” definition. We’ll define cardinality by finding a bijection from a range of natural numbers to the set.
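    Cardinality(S) == CHOOSE n \in Nat : \E f \in [1..n -> S] : Bijection(f, S)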

We can show off some other TLA+ features by using the following mathematical definition, which finds the largest natural number $n$ such that there is an injection (a one-to-one mapping) from $1..n$ to $s$:
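    Cardinality(s) ==
       LET CanInject(n) == \E f \in [1..n -> s] : Injection(f, s)
       IN  CHOOSE n \in Nat : CanInject(n) /\ \A m \in Nat : CanInject(m) => m <= n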

Describing Properties of Data

What do we do with all this? We use TLA+ values to define the data of our program at any step of its execution, and the primitive operations that our computation can perform in one step. Data structures can be easily defined using the objects we learned: Arrays and lists can be modeled as sequences; structures can be modeled as records; maps can be modeled as functions, and sets as, well, sets.

What about things like the heap and the stack? What about processes or threads? In TLA+, you choose the level of abstraction. If you wish, you could model memory at a low level, as an array containing bytes (say, as natural numbers between 0 and 255) and write definitions for memory allocation and deallocation operations like malloc and free, as well as definitions that take care of encoding and decoding constructs like strings, arrays, floating point numbers etc. to and from bytes. In most circumstances, however, you’re more likely to choose to model those constructs purely abstractly without worrying about their memory layout, or you could choose to find some middle ground where memory is allocated and deallocated in objects, but without worrying about lower-level representation, instead modeling interesting techniques for dealing with shared pointers like separation logic. As to processes, we’ll see how those are modeled in part 3.

What good are the non-computable functions or operators (like $Inverse$ or $dirichlet$) we’ve seen when specifying real software systems? For that matter, even when a definition is computable, it may not suggest a feasible (tractable) way of computing it, which is just as bad. As we’ll see in the following installments, such definitions can be convenient abstract representations of actual executable algorithms. They specify the what rather than the how of our program or some small part of it, and we can then choose to define the how and verify that it conforms to the what, or choose to leave things at that, and not worry about an implementation.

For example, when specifying some elaborate algorithm that relies on a sorting function, we may know that we already have an efficient library subroutine for sorting and we’re not interested in the details of its implementation. So instead of specifying the sorting subroutine as an algorithm or defining it in a way that mimics an efficient implementation, we may choose to simply define its behavior:
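    \* a sketch for sequences of numbers: the result is a sorted permutation of s
    Sort(s) == CHOOSE t \in Seq(Image(s)) :
                  /\ Len(t) = Len(s)
                  /\ \E f \in [DOMAIN s -> DOMAIN t] :
                        Bijection(f, DOMAIN t) /\ \A i \in DOMAIN s : s[i] = t[f[i]]
                  /\ \A i, j \in DOMAIN t : i < j => t[i] <= t[j]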

Data Refinement and Inductive Data Types

To make things more concrete, let’s consider a specification of linked lists. Say we decide to model the list as linked nodes, but we choose not to go into the detail of actual pointer arithmetic, and so we will not model memory directly.

Even though it is not necessary in TLA+, we can begin by specifying the “type” of our linked list node, where by type I mean the general structure of the node (without getting into memory layout). The elements of our list can be any members of the set $S$.

We may be tempted to define the “type” of the list — i.e. the set of all lists — like so:
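    \* a sketch: EMPTY marks the end of a list (assumed distinct from list elements);
    \* the last two lines are the self-referential "type" of lists over S
    EMPTY == CHOOSE x : x \notin S
    RECURSIVE List
    List == {EMPTY} \cup [val : S, next : List]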

However, as with any recursive (i.e. self-referential) definition, there are subtleties, because a recursion implies a fixed-point equation, but many recursions have multiple solutions and TLA+ does not commit to a specific one. The last two lines in the above definition could correspond to the following one:
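    List == CHOOSE List : List = {EMPTY} \cup [val : S, next : List]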

As $\CHOOSE$ picks any set that satisfies the equation, the above line may represent the set of all finite linked lists, or the set of all finite and infinite linked lists, which is yet another solution. If we want to be specific and, for example, only represent the set of finite lists, we could, instead, define:
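    List == LET F[n \in Nat] == IF n = 0 THEN {EMPTY}
                                ELSE {EMPTY} \cup [val : S, next : F[n - 1]]
            IN  UNION {F[n] : n \in Nat}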

In fact, we can generalize this idea:
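    \* the operator name Inductive is a reconstruction
    Inductive(F(_)) == LET T[n \in Nat] == IF n = 0 THEN F({}) ELSE F(T[n - 1])
                       IN  UNION {T[n] : n \in Nat}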

and then:
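    List == Inductive(LAMBDA List : {EMPTY} \cup [val : S, next : List])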

Note that $List$ inside the $\LAMBDA$ is not the set we’re defining, namely, $List$ on the left-hand side of the definition, but simply the name of the $\LAMBDA$’s parameter.

Another, more “set-theoretic” way of defining an inductive data type is by specifying precisely which solution to the recursion equation we want. In the case of lists, if we want to allow only finite lists, we want the least, or smallest solution. We can define it in general like so:
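    Least(C(_)) == CHOOSE T : C(T) /\ \A R \in SUBSET T : C(R) => R = T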

We $\CHOOSE$ a set $T$ that satisfies condition $C$, but with the added requirement that $T$ has no proper subsets that also satisfy $C$. We could then define the set of all finite lists like so:
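    List == Least(LAMBDA T : EMPTY \in T /\
                             \A x \in S, l \in T : [val |-> x, next |-> l] \in T)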

Or in words, the set $List$ is the smallest set that contains $\text{EMPTY}$ and is closed under adding a new head to the list (i.e., for any list in $List$, the list we get by adding the element with the value $x$ as the head, is also in $List$). This last definition using $Least$ is too abstract to be obvious to people who are not logicians. Indeed, as we’ll see in part 4, it is generally discouraged to specify programs in such an abstract way, as it quickly gets very complex. TLA+ offers alternatives.

We can later use this definition when we define algorithms over linked lists to show that the type of the list is preserved. This turns the familiar notion of type into just another, relatively simple property we can specify and verify. For example, $x \in List$ is just another proposition, like $a + b = 0$.

Now, let’s define a view that presents a linked list as a sequence. For simplicity, let’s assume we only allow finite lists. Remember, a finite sequence is just a function on some $1..n$:
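    \* a compact sketch, assuming finite lists
    ListAsSeq(l) == LET F[n \in List] == IF n = EMPTY THEN <<>>
                                         ELSE <<n.val>> \o F[n.next]
                    IN  F[l]

    SeqAsList(s) == LET F[i \in 0..Len(s)] ==
                          IF i = 0 THEN EMPTY
                          ELSE [val |-> s[Len(s) - i + 1], next |-> F[i - 1]]
                    IN  F[Len(s)]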

The operators $ListAsSeq$31 and $SeqAsList$ allow us to move between the two levels of abstraction and write a theorem like
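    THEOREM /\ \A s \in Seq(S) : ListAsSeq(SeqAsList(s)) = s
            /\ \A l \in List : SeqAsList(ListAsSeq(l)) = l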

We could then prove the theorem or check it on a finite model in the model checker to verify that our concrete implementation of lists behaves like an abstract sequence. This is a very simple form of something called a refinement relation between specifications; in particular, this is a data refinement, as it pertains to a data structure. In parts 3 and 4 we’ll see what refinement means exactly, as well as how we can use refinement relations of algorithms.

I hope that it is now clear why I opened by saying that this is the least essential part of TLA+: any other system for defining data and operations on it would do. For example we could have used HOL instead of ZFC, for a slightly different set of tradeoffs, although when it comes to actual work, the similarities would be greater than the differences; after all, math is math. The interesting — and unique — power of TLA+ comes from TLA, the logic used to define algorithms, which we’ll cover in the upcoming installments. Nevertheless, the data formalism chosen by TLA+ is one that is both powerful and relatively familiar to engineers, most of whom learned about sets and functions in a discrete math course.

In fact, this system is powerful enough that we don’t really need anything else. We could define computations as discrete dynamical processes that change state in time by modeling (discrete) time as the index of an infinite sequence32. However, this is not the path taken by TLA+, which offers a more structured and convenient way of describing algorithms and computations using the temporal logic of actions.

Proofs

The most important skill when using TLA+ (or any other specification language) is the ability to express your ideas precisely in the language of mathematics. If there is an automatic verification tool for your language of choice — like the TLC model checker for TLA+ — this is often all you need. However, the most important capability of logic, after serving as a precise language, is the ability to use syntactic manipulation rules — often called inference rules, deduction rules or a proof calculus — to write proofs, using the syntax of the logic to arrive at truth.

The TLA+ proof language is more readable than any I’ve seen, so I will cover its main features, as they’re important from a design, if not theory, perspective. The proof language is a relatively late addition to TLA+, added in 2008, along with an early version of TLAPS, the mechanical proof system. The design principles of the proof language, with comparisons to other proof languages, are covered in its introduction paper. Usage is covered in detail in section 7 of the TLA+2 Preliminary Guide, and a good tutorial, which includes the use of the mechanical proof system, is found in “The TLA+ Proof Track” of the hyperbook. More information about TLAPS can be found on its website. TLAPS can mechanically verify a useful subset of TLA+ proofs and specifications. TLAPS is not a standalone proof assistant, but a frontend, which uses multiple automated solvers and the Isabelle proof assistant as backends. It can even check and certify, in Isabelle, the proofs found by the automated provers (currently only Zenon, but certifying the SMT solvers is planned). You may be interested in reading some of the papers explaining TLAPS’s operation here and here. An interesting real-world specification of a real-time OS kernel scheduler has recently been mechanically proven with TLAPS.

In practice, when writing real specifications, you’ll find that using the model-checker has a much higher return-on-investment than writing proofs (by an order of magnitude). Not only does the model checker save you a great deal of effort, but you can only prove things that are true, and very often, you’ll make assertions that are wrong. The model checker will let you know exactly why it is that your assertions are wrong; struggling with a proof may lead you to your mistake, but only after considerable effort.

Nevertheless, sometimes you may find it useful to write a proof. TLA+ has a rich structured and declarative proof language, based on Lamport’s interesting ideas for a structured proof system that assists both in the readability and the rigor of mathematical proofs. The system is detailed in How to write a 21st century proof (with an older discussion in How to Write a Proof), which is an interesting read regardless of whether or not you’re interested in TLA+. Lamport first gave a talk explaining his ideas on structured proofs in 1991. It was not well received:

Lots of people jumped on me for trying to take the fun out of mathematics. The strength of their reaction indicates that I hit a nerve. Perhaps they really do think it’s fun having to recreate the proofs themselves if they want to know whether a theorem in a published paper is actually correct, and to have to struggle to figure out why a particular step in the proof is supposed to hold.

Twenty years later, attitudes have changed, although not practice:

The talk was received much more calmly than my earlier one, and the mathematicians were open to considering that I might have something interesting to say about writing proofs. Perhaps in the last 20 years I have learned to be more persuasive, or perhaps the mathematicians in the audience had just grown older and calmer. In any case, they were still not ready to try changing how they write their own proofs.

TLA+’s proof language is rich with constructs designed to make it easier for people to read and write proofs, and can be learned in a day or two once you know TLA+. The proof language deviates somewhat from the relative minimalism that characterizes the rest of TLA+, and focuses more on readability that evokes prose proofs as much as possible, using a wide selection of keywords — over 25 — many of which are synonyms and syntactic sugar. I will not cover them all, but just highlight the core elements.

The paper introducing the proof language33 reads:

The goal of the language is to make proofs easy to read and write for someone with no knowledge of how the proofs are being checked. This leads to a mostly declarative language, built around the uses and proofs of assertions rather than around the application of proof-search tactics.

While formal proofs are some sequence (or a tree) of applications of the logic’s inference rules, writing proofs in this way is untenable, as the rules are too primitive, and each moves you forward towards the goal too slowly. So mechanical proof assistants often have higher-level proof languages. Some, like Coq, have an imperative proof language, where a proof is constructed by chaining tactics. You can think of tactics as macros for the primitive inference rules. Others, like TLA+, have a declarative proof language (and Isabelle has both imperative and declarative proof languages). Declarative proofs are designed to be easily readable by people, and resemble how humans write proofs. At every step of the proof they only list those facts — axioms, other lemmas or theorems, and previous proof steps — required to prove the current statement, without explaining how. They basically list the input and the output for some chain of deduction steps, leaving the actual steps for the tool to figure out. This, however, means that there is a difference between a legal proof and a useful proof. For example, the following is (probably) a formal proof in TLA+:
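    \* a sketch: Fermat's Last Theorem; ArithmeticFacts is a hypothetical name
    \* standing for the arithmetic axioms and lemmas the statement follows from
    THEOREM \A n \in Nat \ (0..2) : \A x, y, z \in Nat \ {0} : x^n + y^n # z^n
      BY ArithmeticFacts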

but, of course, it will convince no mathematician and certainly not the mechanical proof system, which cannot deduce the steps from the input to the output. This, however:
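    \* Goldbach's conjecture, sketched with a hypothetical IsPrime predicate
    THEOREM \A n \in {m \in Nat : m > 2 /\ m % 2 = 0} :
               \E p, q \in {m \in Nat : IsPrime(m)} : n = p + q
      BY ArithmeticFacts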

may or may not be a valid proof, because we don’t know whether Goldbach’s conjecture is a theorem or not, and if it is, whether or not the Peano axioms suffice to prove it.

A useful proof is one that can be relatively easily verified either by a human reader or by the mechanical proof system. This happens when it is obvious how the proof goal (a theorem, lemma, or proof step) is entailed by the facts listed as its proof. What “obvious” means depends, of course, on the capabilities of the person or algorithm verifying the proof, where “verification” ultimately means becoming convinced (perhaps even supplying a formal certificate — the long chain of primitive inference rules connecting the assumptions to the consequence) that the goal is indeed entailed by the facts listed.

Both imperative and declarative proof languages have their pros and cons (and both are quite tedious). Imperative proofs are so hard to read that they are effectively meaningless to a human; on the other hand, they let you know exactly what the verifier knows at each step, so they are easier to write by comparison. Declarative proofs are easy to read, but writing them is often a game of trial and error, where you supply more detail and hope that would do the trick, while the feedback you get is just whether the verifier has managed to prove the goal from the supplied facts or not.

The TLA+ proof language does not presuppose any specific capabilities on the part of the verifier, be it human or mechanical. Instead, it allows every proof step to be hierarchically refined, by making it into a proof goal of its own and recursively breaking it down into another, hopefully simpler, series of proof steps, and so on, until finally we get proof steps that are each obvious enough for the verifier. This is the idea behind Lamport’s structured hierarchical proofs.

At any point in a TLA+ proof, there is a current proof obligation, which claims that a proof goal is entailed by a set of facts (other theorems, axioms or proof steps) and definitions called a context. A proof goal is anything that requires proof. It can be a theorem introduced with the keyword $\THEOREM$ (or one of its synonyms $\PROPOSITION$, $\LEMMA$, $\COROLLARY$), or, recursively, a proof step of one. A (declarative) proof is either omitted (with the $\OMITTED$ clause), stated to be $\OBVIOUS$ if it is a direct result of logical inference rules and other built-in axioms that don’t require further assumptions, or supplied in a $\BY$ (or, optionally, $\BY\;\ONLY$) clause, which lists all facts from which the obligation can be deduced as well as any definitions that the obligation depends on and must be examined (“expanded”).

A theorem can be stated in two ways. It can be a logical proposition given as a formula (like $\A r, s, t : r \subseteq s \land s \subseteq t \implies r \subseteq t$ or the above statement of Fermat’s Last Theorem), or as the more powerful, more general $\ASSUME/\PROVE$ pair, where the $\ASSUME$ clause lists some assumptions that, if true, would entail the consequence in the $\PROVE$ clause. The pair $\ASSUME A \;\PROVE B$ basically means $A \vdash B$, “assuming $A$, $B$ holds”. For example:
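    \* an illustrative example
    THEOREM ASSUME NEW S, NEW x \in S
            PROVE  S # {}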

or:
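    \* another illustrative example
    THEOREM ASSUME NEW n \in Nat, n > 0
            PROVE  n - 1 \in Nat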

The keyword $\NEW$ introduces a new variable, as in “let P be any…”. $\NEW$ by itself is shorthand for $\NEW\;\CONSTANT$, or just $\CONSTANT$, signifying that the variable is a constant rather than a temporal one that can refer to different values at different times in an algorithm; this is a $\CONSTANT$ in the exact same sense as we learned above in the section Constants. If unbounded, it means any value or any (non-temporal) operator or formula. It is a second-order free variable. $\ASSUME A \;\PROVE B$ is a proposition but it is not a TLA+ formula; it doesn’t have a model in the TLA+ logic, so the formulas of the logic are still all first-order. See the discussion to follow, as well as in the section First Order Logic and Other Orders, for the reasoning behind this design.

For example, the following theorem demonstrates a bounded $\NEW$, as well as introducing parameterized formulas:
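    \* a sketch: universal generalization, with a nested ASSUME/PROVE as an assumption
    THEOREM ASSUME NEW S, NEW P(_),
                   ASSUME NEW x \in S PROVE P(x)
            PROVE  \A x \in S : P(x)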

Note how we assume the existence of a theorem (a true proposition) in the nested $\ASSUME/\PROVE$.

The ability to write second-order theorems is important in a proof system, as it allows the user to make general, reusable lemmas for use in proofs. For example, an important theorem (exported by the NaturalsInduction module provided with TLAPS) that can be used in many circumstances is the following, which defines induction over the naturals:
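    THEOREM NatInduction ==
      ASSUME NEW P(_),
             P(0),
             \A n \in Nat : P(n) => P(n + 1)
      PROVE  \A n \in Nat : P(n)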

Instead of $\A n \in Nat : P(n) \implies P(n+1)$ above, we could have used the nested $\ASSUME \NEW n \in Nat, P(n) \;\PROVE P(n+1)$.

We can then use that theorem to prove the following (silly) one:
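    THEOREM ASSUME NEW x \in Nat
            PROVE  x = 0 \/ x - 1 >= 0
      BY NatInduction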

($\OBVIOUS$ would have sufficed in this case, both for a human reader as well as for TLAPS, and we could have stated the theorem more simply as the formula $\forall x \in Nat : x = 0 \lor x - 1 \geq 0$). Notice there is no need to actually define an operator of the form $P(n)$; it is automatically inferred from the formula $x = 0 \lor x - 1 \geq 0$ that it can be understood as an operator parameterized by $x$.

Theorems of the kind we’ve seen can be introduced as axioms to define inference rules for new logical connectives (for example, for something like separation logic). However, built-in axioms cannot be removed (so we can’t remove the law of excluded middle as an axiom to ensure our reasoning is constructive). Axioms are declared with the keyword $\AXIOM$ or with $\ASSUME$, the construct we covered in the section about constants.

While TLAPS has been designed to be fully backend-agnostic, when using it you may sometimes find it necessary to use specific solver features. For example, $\BY Z3$ can tell TLAPS to use the Z3 SMT solver to discharge (verify) the particular proof, and you can even — although it’s not recommended and you’re unlikely to need it in practice — make use of Isabelle tactics. In fact, the proof of the $NatInduction$ theorem above provided with TLAPS is $\BY IsaM(\str{(intro\;natInduct, auto)})$.

Let’s look at an example that requires expanding a definition in the proof. The TLAPS solvers are powerful enough to prove the following theorems about the algebraic structures we defined in the section Set Fundamentals directly from their definition, without any added assistance:

$\BY \DEF Monoid$ says that the proof relies on the content of the definition of $Monoid$, as no facts are used nor any definitions expanded unless we explicitly say so (inside long proofs — which we’ll get to right away — there are ergonomic constructs designed to save us repetitive mentions of facts or definitions in multiple proof steps). The second theorem requires examining the inner definitions of $Monoid$ and $Semigroup$ used in the definition of $Group$, as it requires the associativity property, defined in $Semigroup$.

More interesting theorems cannot be proven satisfactorily with a single $\BY$ clause, and must be broken down into smaller proof steps. Each proof step states a proposition and requires proof (or $\OMITTED$) with a $\BY$ clause, or, recursively, by a nested series of proof steps, thus forming a tree-like hierarchy. Every step is labeled with “$\seq{depth}name.$”, where depth is the depth of the step in the hierarchy, and name is an optional label for the step, usually just the index of the step in the sequence. A named step can be referenced as a fact in a $\BY$ clause as if it were a named theorem. Every sequence of proof steps, at any level of the tree, must end with a $\QED$ step, whose goal is the parent proposition of the sequence, known as the current goal. Here’s an example of a proof structure:
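    \* a schematic outline; the ellipses stand for propositions
    THEOREM Goal
    <1>1. SomeProposition
       <2>1. ...
          BY SomeFact
       <2>2. ...
          BY <2>1
       <2>3. QED
          BY <2>1, <2>2
    <1>2. AnotherProposition
       <2>1. ...
       <2>2. ...
       <2>3. QED
          BY <2>1, <2>2
    <1>3. QED
       BY <1>1, <1>2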

Note that a proof may contain multiple steps with the same name (e.g. $\seq{2}1, \seq{2}2, \seq{2}3.$ all appear twice in the outline above), however, there can be no ambiguity when referencing steps, as you can only reference steps appearing previously in the same sequence or in higher levels of the hierarchy. The idea of a hierarchical proof is that each nested sequence provides the details of how its goal (i.e., parent) is proven, until the leaf nodes are obvious enough. If, as a reader, you’re not interested in the details or the step is obvious enough without the more detailed steps, you can collapse the tree in the TLA+ Toolbox. This hierarchical structure results in very readable proofs, as you can easily grasp the outline of the proof, and then delve deeper for more detail if you like.

The proof language has some constructs that allow for a more natural expression of the proof (similar to common techniques in prose proofs), or to improve readability. Instead of listing the set of facts and definitions that are supposed to entail the goal, the keyword $\USE$ introduces facts and definitions that are then implicitly added to all subsequent steps in the same step sequence. The matching keyword $\HIDE$ does the opposite, removing implicit facts from consideration. Because $\USE$ doesn’t introduce a proposition, it is usually written in an unnamed step (e.g. $\seq{2}$ as opposed to, say, $\seq{2}5$), as there’s never a reason to reference it.

A more interesting ergonomic construct is $\SUFFICES$. Say that assuming A, we want to prove C, and that we could do that by first proving $A \vdash B$ and then $B \vdash C$. Sometimes it is more convenient (to guide the reader in the intent of the proof) to first prove $B \vdash C$ and only then $A \vdash B$. In a prose proof, we usually say “it suffices to prove B because…” and then move on to proving B. We can do the same in TLA+ with the $\SUFFICES$ construct. When used in an unnamed step, it changes the current goal as seen in the example below, taken from the hyperbook, which demonstrates the use of $\SUFFICES$ as well as gives you a sense of what a real proof looks like. The example proves some theorems that can be mechanically checked by TLAPS about the GCD operator we defined above in the section Some Important Sets34:

The $\CASE$ construct helps writing a proof by cases. For example, instead of:
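    \* Goal stands for the current goal
    <1>1. ASSUME x >= 0 PROVE Goal
    <1>2. ASSUME x < 0 PROVE Goal
    <1>3. QED
       BY <1>1, <1>2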

you can write:
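    <1>1. CASE x >= 0
    <1>2. CASE x < 0
    <1>3. QED
       BY <1>1, <1>2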

Finally, let’s take a look at the $\PICK$ construct. If we have an assumption of the form $\exists x : P(x)$, we can use it by $\PICK$ing a fresh variable $x$ for which $P(x)$ is assumed (it works similarly to $\NEW x, P(x)$). Here is an example in the spirit of one from the TLAPS documentation. In addition to $\PICK$, it makes use of $\SUFFICES$ and also demonstrates a proof of contradiction35:
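    \* a sketch: there is no largest natural number
    THEOREM ~\E n \in Nat : \A m \in Nat : m <= n
    <1>1. SUFFICES ASSUME \E n \in Nat : \A m \in Nat : m <= n
                   PROVE  FALSE
       OBVIOUS
    <1>2. PICK n \in Nat : \A m \in Nat : m <= n
       BY <1>1
    <1>3. n + 1 \in Nat /\ ~(n + 1 <= n)
       OBVIOUS
    <1>4. QED
       BY <1>2, <1>3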

There are quite a few other constructs that capture common proof techniques; you can find a list of them here (as you can see, their meaning is precisely defined by the more basic constructs we’ve covered). There are also constructs that allow adding local definitions to a proof, a scheme that allows naming and referencing parts of formulas, and a convenient syntax for writing proofs by a sequence of equalities or inequalities.

For an example of the last, if TLAPS weren’t able to prove the theorem $UniqueInverse$ automatically — or if the proof, although obvious to the mechanical prover, is not obvious to the human reader — we could help the system or the reader by writing the following detailed proof, where $@$ refers to the right-hand side of the relation mentioned in the previous step:

The appendix of How to write a 21st century proof contains a full, mechanically verified, TLA+ proof of a simple theorem in calculus along with all necessary definitions.

While TLA+ can be used to prove general mathematical theorems, it was not designed for that task. Mathematicians interested in formal proofs would be better served by tools designed for that task, like Isabelle or Coq. Engineers who choose to write formal proofs are encouraged not to waste their time on proving mathematical theorems, but to either use the library of proven mathematical theorems supplied with TLAPS or to merely state a mathematical theorem and omit the proof. They should concentrate on proving the correctness of their algorithms or system designs. In part 3 we’ll see how that’s done.

Conclusion

Using formal math to precisely define objects and their properties is pretty straightforward, can be learned easily by engineers, and doing that forms the core of the work in TLA+, and, in fact, in any specification language or proof assistant.

You’ll usually use TLC, the TLA+ model checker to automatically verify your assertions. Formal proofs, on the other hand, are rather laborious — especially if we want them to be mechanically checked — even when written in a relatively nice proof language like that of TLA+, and even when SMT solvers are used by TLAPS to carry some of the burden.

While we haven’t even begun exploring any of the interesting theory of TLA+, this installment covered nearly all of its syntax; in fact, we’ve covered almost all definitions in the standard module library, too. The entire logic for defining dynamical systems contains just a handful of additional operators — $’$, $\Box$, $\ENABLED$, and $\EE$ — applied to the data logic we’ve learned.

Next week we’ll look at how TLA+ models algorithms and their properties mathematically, and at the proof techniques that work best in verifying algorithms.


Discuss on Reddit

  1. This is not the full picture, as we’ll later see. While the data part of TLA+ is an untyped mathematical formalism, other aspects of TLA+ are essentially typed (e.g. the module system, which is typed in a manner similar to ML’s modules to ensure all names are resolved unambiguously) but they are typed in a way that’s different from how types are used in programming languages. TLA+’s syntax is much more elaborate than that of untyped programming languages. For example, if some identifier $a$ is defined as, say, $b(3)$, then, for some $P$, whether or not the expression $P(a)$ is well-formed depends not only on the definition of $P$, but that of $b$. This is common in typed programming languages with type inference, and pretty rare in untyped programming languages. However, we will only encounter such cases in parts 3 and 4. 

  2. While there are several set theories, ZFC is considered “standard”, whereas there are many type theories that differ from each other either in subtle or profound ways, none of which has become standard. 

  3. George Gonthier is a leading practitioner in formal mathematics and one of the Coq veterans who helped design the TLA+ proof language. 

  4. G.H. Hardy, A Mathematician’s Apology, 1940 

  5. Then again, Hardy took pride in his own “useless” field of number theory because, he believed, it would never have any “warlike” uses. 

  6. I’ve seen some online discussions where working programmers discuss the foundation of logic and mathematics with the same opinionated vigor they argue over Haskell vs. Python, and I just want to say this: If you think the choice of a mathematical foundation can significantly affect the way you specify and reason about real, physically realizable systems, then you misunderstand something basic about the whole subject of mathematical foundations. All foundations must agree on any physically observable prediction, and no foundation provides magic proof techniques that make it superior to others, certainly as far as the work of us programmers is concerned. 

  7. Not to be confused with Formalism, a finitist philosophy of mathematics. 

  8. Some people define formula to be any self-standing well-formed expression, and term to be any part of a formula, in which case the syntax is the set of all formulas. 

  9. A function is just a relation with one implicit arity, so *, which is a 2-ary function, could also be given as a 3-ary relation, multiplies(a, b, c), that says whether c is the result of multiplying a and b.

  10. That would be impossible, due to the physical limitations of computation. 

  11. Later we’ll see that $\TRUE$ and $\FALSE$ are also ordinary values in TLA+.

  12. When writing TLA+ specifications, you would enter the text in ASCII using special shortcuts, like \/ for $\lor$, /\ for $\land$, ~ for $\neg$, => for $\Rightarrow$, <=> for $\equiv$, -> for $\to$, |-> for $\mapsto$, == for $\triangleq$, \A for $\forall$, \E for $\exists$, [] for $\Box$, <> for $\Diamond$, << for $\langle$, >> for $\rangle$, \o for $\circ$ etc.. $\LaTeX$ notation is used for other symbols, like \cup for $\cup$ , and you can also use it for the symbols with more convenient shortcuts if you like (e.g. \forall for $\forall$ or \neg for $\neg$). The pretty printer preserves alignments, which, as we’ve seen, may be meaningful. 

  13. And so $(\A x : P(x)) \equiv P(\CHOOSE x : \neg P(x) )$ 

  14. Isabelle/HOL named its Hilbert’s epsilon operator SOME, which I think is a much better, less confusing name. 

  15. See Specifying Systems, p. 298. 

  16. Note that is not a function but an operator. We’ll see anonymous functions later. 

  17. For more on this, see Lamport’s Substitution: Syntactic versus Semantic 

  18. If you’re wondering what the model of a FOL+ZFC formula is (as the models of FOL formulas are always second-order objects, and yet sets are already our first-order objects), then that model is called a class. Many classes, like the universe of ZFC (i.e., the model of the formula $x = x$), are not formal sets, meaning they cannot be constructed by the axioms of ZFC. A class that is not a (formal) set is called a proper class.

  19. These set construction rules are how ZFC avoids Russell’s paradox. In HOL, Russell’s paradox is avoided by not allowing a set to contain elements of different orders, i.e. different types (so $\set{1, \set{2, 3}}$ is not a valid set).

  20. You may have noticed there is an ambiguity regarding the meaning of the expression $\set{x \in A : x \in B}$. If you think about it, it could be understood to mean $A \cap B$ (if the axiom of separation is used) or as some subset of $\BOOLEAN$ (if the axiom of replacement is used, reading the expression $x \in A$ as a boolean value computed for every $x$ in $B$). TLA+ chooses the first, more useful meaning.

  21. Actually, the $\cdot$ operator is already taken in TLA+ (for action composition), so you’d need to choose a different operator. 

  22. Provided that evaluating the expression does not require enumerating an infinite set, but TLC allows overriding set definitions so that you may refer to infinite sets in your specification, but evaluate them on finite subsets. 

  23. The identification of the members of $\BOOLEAN$ with the logical truth values is convenient, but also introduces some subtleties. They are discussed in section 16.1.3 Interpretation of Boolean Operators of Specifying Systems (p. 296). In particular, TLA+ requires that $\TRUE \lor x$ (and $x \lor \TRUE$), for any $x$, including $\sqrt{2}$, is $\TRUE$; similarly, $\FALSE \land x$ (and $x \land \FALSE$) is $\FALSE$ for any $x$, even non-Boolean. TLA+ does not require that the value of a Boolean operator when both sides are non-Boolean be a Boolean.

  24. While it doesn’t matter in practice — as it has no bad consequences — the definition allows us to deduce that $0/0$ is some indeterminate real number, as there are many numbers $c$ that satisfy the equation on the left-hand side. If that bothers you, it can be fixed by defining division as $ChooseOne(Real,\; \LAMBDA c : a = b*c)$. Also, the second rule in the definition of $\CHOOSE$ (right-uniqueness) allows us to deduce that $1/0 = 2/0$. If that bothers you, there is a somewhat more complex definition that makes such a deduction impossible.

  25. Attempting to evaluate the expression in the TLC model checker results in an error. 

  26. Lamport and Paulson’s Should Your Specification Language Be Typed? and Lamport’s Types Are Not Harmless

  27. And so $\seq{-1, 1} \in [x \in Int \mapsto x^2]$ is undefined rather than $\TRUE$.

  28. In deviation from familiar notation, the square brackets are necessary in the concrete TLA+ syntax to settle parsing ambiguities that may arise due to the other use of the $\to$ symbol in the $\CASE$ construct. In addition, square brackets are used in all forms of function definition and application (except for tuple definition, where angled brackets are used), so they serve as a visual cue that helps identify the use of functions in TLA+ expressions.

  29. If we wanted to do actual calculus, we’d be less sloppy, and define continuity and differentiability, so we could assert that a limit or a derivative actually exists (as our $Limit$ and $Derivative$ operators are undefined if the function is discontinuous or not differentiable at point $a$). To see how that’s done, see page 22 of Leslie Lamport’s How to Write a 21st Century Proof.

  30. This is not as big a limitation as it may sound, as recursive operators can usually be expressed as non-recursive operators with a recursive function defined in a $\LET$ clause.

  31. There are other, shorter ways of writing $ListAsSeq$, but I wanted to showcase some more TLA+ features. 

  32. And we could then define Kleene’s T predicate

  33. Kaustuv Chaudhuri, Damien Doligez, Leslie Lamport, Stephan Merz, A TLA+ Proof System, 2008 

  34. The theorems are later used in the book to prove the correctness of Euclid’s algorithm for computing the GCD. We will talk about Euclid’s algorithm in part 3. 

  35. Note to all pedantic intuitionists: I wrote of contradiction; not by contradiction.