Connectives: Conjunction, disjunction, and implication
module plfa.part1.Connectives where
This chapter introduces the basic logical connectives, by observing a correspondence between connectives of logic and data types, a principle known as Propositions as Types:
 conjunction is product,
 disjunction is sum,
 true is unit type,
 false is empty type,
 implication is function space.
Imports
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl)
open Eq.≡-Reasoning
open import Data.Nat using (ℕ)
open import Function using (_∘_)
open import plfa.part1.Isomorphism using (_≃_; _≲_; extensionality)
open plfa.part1.Isomorphism.≃-Reasoning
Conjunction is product
Given two propositions A and B, the conjunction A × B holds if both A holds and B holds. We formalise this idea by declaring a suitable inductive type:
data _×_ (A B : Set) : Set where
  ⟨_,_⟩ : A → B → A × B
Evidence that A × B holds is of the form ⟨ M , N ⟩, where M provides evidence that A holds and N provides evidence that B holds.

Given evidence that A × B holds, we can conclude that either A holds or B holds:
proj₁ : ∀ {A B : Set} → A × B → A
proj₁ ⟨ x , y ⟩ = x

proj₂ : ∀ {A B : Set} → A × B → B
proj₂ ⟨ x , y ⟩ = y
If L provides evidence that A × B holds, then proj₁ L provides evidence that A holds, and proj₂ L provides evidence that B holds.

When ⟨_,_⟩ appears in a term on the right-hand side of an equation we refer to it as a constructor, and when it appears in a pattern on the left-hand side of an equation we refer to it as a destructor. We may also refer to proj₁ and proj₂ as destructors, since they play a similar role.
Other terminology refers to ⟨_,_⟩ as introducing a conjunction, and to proj₁ and proj₂ as eliminating a conjunction; indeed, the former is sometimes given the name ×-I and the latter two the names ×-E₁ and ×-E₂. As we read the rules from top to bottom, introduction and elimination do what they say on the tin: the first introduces a formula for the connective, which appears in the conclusion but not in the hypotheses; the second eliminates a formula for the connective, which appears in a hypothesis but not in the conclusion. An introduction rule describes under what conditions we say the connective holds—how to define the connective. An elimination rule describes what we may conclude when the connective holds—how to use the connective.^{1}
In this case, applying each destructor and reassembling the results with the constructor is the identity over products:
η-× : ∀ {A B : Set} (w : A × B) → ⟨ proj₁ w , proj₂ w ⟩ ≡ w
η-× ⟨ x , y ⟩ = refl
The pattern matching on the left-hand side is essential, since replacing w by ⟨ x , y ⟩ allows both sides of the propositional equality to simplify to the same term.
We set the precedence of conjunction so that it binds less tightly than anything save disjunction:
infixr 2 _×_
Thus, m ≤ n × n ≤ p parses as (m ≤ n) × (n ≤ p).
Alternatively, we can declare conjunction as a record type:
record _×′_ (A B : Set) : Set where
  constructor ⟨_,_⟩′
  field
    proj₁′ : A
    proj₂′ : B
open _×′_
The record construction record { proj₁′ = M ; proj₂′ = N } corresponds to the term ⟨ M , N ⟩, where M is a term of type A and N is a term of type B. The constructor declaration allows us to write ⟨ M , N ⟩′ in place of the record construction.
The data type _×_ and the record type _×′_ behave similarly. One difference is that for data types we have to prove η-equality, but for record types, η-equality holds by definition. While proving η-×′, we do not have to pattern match on w to know that η-equality holds:
η-×′ : ∀ {A B : Set} (w : A ×′ B) → ⟨ proj₁′ w , proj₂′ w ⟩′ ≡ w
η-×′ w = refl
It can be very convenient to have η-equality definitionally, and so the standard library defines _×_ as a record type. We use the definition from the standard library in later chapters.
Given two types A and B, we refer to A × B as the product of A and B. In set theory, it is also sometimes called the Cartesian product, and in computing it corresponds to a record type. Among other reasons for calling it the product, note that if type A has m distinct members, and type B has n distinct members, then the type A × B has m * n distinct members.
For instance, consider a type Bool with two members, and a type Tri with three members:
data Bool : Set where
  true  : Bool
  false : Bool

data Tri : Set where
  aa : Tri
  bb : Tri
  cc : Tri
Then the type Bool × Tri has six members:
⟨ true , aa ⟩ ⟨ true , bb ⟩ ⟨ true , cc ⟩
⟨ false , aa ⟩ ⟨ false , bb ⟩ ⟨ false , cc ⟩
For example, the following function enumerates all possible arguments of type Bool × Tri:
×-count : Bool × Tri → ℕ
×-count ⟨ true  , aa ⟩ = 1
×-count ⟨ true  , bb ⟩ = 2
×-count ⟨ true  , cc ⟩ = 3
×-count ⟨ false , aa ⟩ = 4
×-count ⟨ false , bb ⟩ = 5
×-count ⟨ false , cc ⟩ = 6
Product on types also shares a property with product on numbers in that there is a sense in which it is commutative and associative. In particular, product is commutative and associative up to isomorphism.
For commutativity, the to function swaps a pair, taking ⟨ x , y ⟩ to ⟨ y , x ⟩, and the from function does the same (up to renaming). Instantiating the patterns correctly in from∘to and to∘from is essential. Replacing the definition of from∘to by λ w → refl will not work; and similarly for to∘from:
×-comm : ∀ {A B : Set} → A × B ≃ B × A
×-comm =
  record
    { to      = λ{ ⟨ x , y ⟩ → ⟨ y , x ⟩ }
    ; from    = λ{ ⟨ y , x ⟩ → ⟨ x , y ⟩ }
    ; from∘to = λ{ ⟨ x , y ⟩ → refl }
    ; to∘from = λ{ ⟨ y , x ⟩ → refl }
    }
Being commutative is different from being commutative up to isomorphism. Compare the two statements:
m * n ≡ n * m
A × B ≃ B × A
In the first case, we might have that m is 2 and n is 3, and both m * n and n * m are equal to 6. In the second case, we might have that A is Bool and B is Tri, and Bool × Tri is not the same as Tri × Bool. But there is an isomorphism between the two types. For instance, ⟨ true , aa ⟩, which is a member of the former, corresponds to ⟨ aa , true ⟩, which is a member of the latter.
For associativity, the to function reassociates two uses of pairing, taking ⟨ ⟨ x , y ⟩ , z ⟩ to ⟨ x , ⟨ y , z ⟩ ⟩, and the from function does the inverse. Again, the evidence of left and right inverse requires matching against a suitable pattern to enable simplification:
×-assoc : ∀ {A B C : Set} → (A × B) × C ≃ A × (B × C)
×-assoc =
  record
    { to      = λ{ ⟨ ⟨ x , y ⟩ , z ⟩ → ⟨ x , ⟨ y , z ⟩ ⟩ }
    ; from    = λ{ ⟨ x , ⟨ y , z ⟩ ⟩ → ⟨ ⟨ x , y ⟩ , z ⟩ }
    ; from∘to = λ{ ⟨ ⟨ x , y ⟩ , z ⟩ → refl }
    ; to∘from = λ{ ⟨ x , ⟨ y , z ⟩ ⟩ → refl }
    }
Being associative is not the same as being associative up to isomorphism. Compare the two statements:
(m * n) * p ≡ m * (n * p)
(A × B) × C ≃ A × (B × C)
For example, the type (ℕ × Bool) × Tri is not the same as ℕ × (Bool × Tri). But there is an isomorphism between the two types. For instance, ⟨ ⟨ 1 , true ⟩ , aa ⟩, which is a member of the former, corresponds to ⟨ 1 , ⟨ true , aa ⟩ ⟩, which is a member of the latter.
Exercise ⇔≃× (recommended)

Show that A ⇔ B as defined earlier is isomorphic to (A → B) × (B → A).
-- Your code goes here
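One possible solution sketch (not part of the original text). It assumes _⇔_ is the record type from Chapter Isomorphism, with fields to and from, so that η-equality holds for it definitionally:

```agda
-- Sketch only: assumes `_⇔_` is the record type from plfa.part1.Isomorphism.
open import plfa.part1.Isomorphism using (_⇔_)

⇔≃× : ∀ {A B : Set} → (A ⇔ B) ≃ ((A → B) × (B → A))
⇔≃× =
  record
    { to      = λ A⇔B → ⟨ _⇔_.to A⇔B , _⇔_.from A⇔B ⟩
    ; from    = λ{ ⟨ f , g ⟩ → record { to = f ; from = g } }
    ; from∘to = λ A⇔B → refl
    ; to∘from = λ{ ⟨ f , g ⟩ → refl }
    }
```

If _⇔_ were instead declared as a data type, the from∘to case would need to pattern match on its constructor rather than appeal to η-equality.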
Truth is unit
Truth ⊤ always holds. We formalise this idea by declaring a suitable inductive type:
data ⊤ : Set where
  tt : ⊤
Evidence that ⊤ holds is of the form tt.

There is an introduction rule, but no elimination rule. Given evidence that ⊤ holds, there is nothing more of interest we can conclude. Since truth always holds, knowing that it holds tells us nothing new.
The nullary case of η-× is η-⊤, which asserts that any value of type ⊤ must be equal to tt:
η-⊤ : ∀ (w : ⊤) → tt ≡ w
η-⊤ tt = refl
The pattern matching on the left-hand side is essential. Replacing w by tt allows both sides of the propositional equality to simplify to the same term.
Alternatively, we can declare truth as an empty record:
record ⊤′ : Set where
  constructor tt′
The record construction record {} corresponds to the term tt′. The constructor declaration allows us to write tt′.
As with the product, the data type ⊤ and the record type ⊤′ behave similarly, but η-equality holds by definition for the record type. While proving η-⊤′, we do not have to pattern match on w—Agda knows it is equal to tt′:
η-⊤′ : ∀ (w : ⊤′) → tt′ ≡ w
η-⊤′ w = refl
Agda knows that any value of type ⊤′ must be tt′, so any time we need a value of type ⊤′, we can tell Agda to figure it out:
truth′ : ⊤′
truth′ = _
We refer to ⊤ as the unit type. And, indeed, type ⊤ has exactly one member, tt. For example, the following function enumerates all possible arguments of type ⊤:
⊤-count : ⊤ → ℕ
⊤-count tt = 1
For numbers, one is the identity of multiplication. Correspondingly, unit is the identity of product up to isomorphism. For left identity, the to function takes ⟨ tt , x ⟩ to x, and the from function does the inverse. The evidence of left inverse requires matching against a suitable pattern to enable simplification:
⊤-identityˡ : ∀ {A : Set} → ⊤ × A ≃ A
⊤-identityˡ =
  record
    { to      = λ{ ⟨ tt , x ⟩ → x }
    ; from    = λ{ x → ⟨ tt , x ⟩ }
    ; from∘to = λ{ ⟨ tt , x ⟩ → refl }
    ; to∘from = λ{ x → refl }
    }
Having an identity is different from having an identity up to isomorphism. Compare the two statements:
1 * m ≡ m
⊤ × A ≃ A
In the first case, we might have that m is 2, and both 1 * m and m are equal to 2. In the second case, we might have that A is Bool, and ⊤ × Bool is not the same as Bool. But there is an isomorphism between the two types. For instance, ⟨ tt , true ⟩, which is a member of the former, corresponds to true, which is a member of the latter.
Right identity follows from commutativity of product and left identity:
⊤-identityʳ : ∀ {A : Set} → (A × ⊤) ≃ A
⊤-identityʳ {A} =
  ≃-begin
    (A × ⊤)
  ≃⟨ ×-comm ⟩
    (⊤ × A)
  ≃⟨ ⊤-identityˡ ⟩
    A
  ≃-∎
Here we have used a chain of isomorphisms, analogous to that used for equality.
Disjunction is sum
Given two propositions A and B, the disjunction A ⊎ B holds if either A holds or B holds. We formalise this idea by declaring a suitable inductive type:
data _⊎_ (A B : Set) : Set where
  inj₁ : A → A ⊎ B
  inj₂ : B → A ⊎ B
Evidence that A ⊎ B holds is either of the form inj₁ M, where M provides evidence that A holds, or inj₂ N, where N provides evidence that B holds.
Given evidence that A → C and B → C both hold, then given evidence that A ⊎ B holds we can conclude that C holds:
case-⊎ : ∀ {A B C : Set} → (A → C) → (B → C) → A ⊎ B → C
case-⊎ f g (inj₁ x) = f x
case-⊎ f g (inj₂ y) = g y
Pattern matching against inj₁ and inj₂ is typical of how we exploit evidence that a disjunction holds.

When inj₁ and inj₂ appear on the right-hand side of an equation we refer to them as constructors, and when they appear on the left-hand side we refer to them as destructors. We also refer to case-⊎ as a destructor, since it plays a similar role. Other terminology refers to inj₁ and inj₂ as introducing a disjunction, and to case-⊎ as eliminating a disjunction; indeed the former are sometimes given the names ⊎-I₁ and ⊎-I₂ and the latter the name ⊎-E.
Applying the destructor to each of the constructors is the identity:
η-⊎ : ∀ {A B : Set} (w : A ⊎ B) → case-⊎ inj₁ inj₂ w ≡ w
η-⊎ (inj₁ x) = refl
η-⊎ (inj₂ y) = refl
More generally, we can also throw in an arbitrary function from a disjunction:
uniq-⊎ : ∀ {A B C : Set} (h : A ⊎ B → C) (w : A ⊎ B) →
  case-⊎ (h ∘ inj₁) (h ∘ inj₂) w ≡ h w
uniq-⊎ h (inj₁ x) = refl
uniq-⊎ h (inj₂ y) = refl
The pattern matching on the left-hand side is essential. Replacing w by inj₁ x allows both sides of the propositional equality to simplify to the same term, and similarly for inj₂ y.
We set the precedence of disjunction so that it binds less tightly than any other declared operator:
infixr 1 _⊎_
Thus, A × C ⊎ B × C parses as (A × C) ⊎ (B × C).
Given two types A and B, we refer to A ⊎ B as the sum of A and B. In set theory, it is also sometimes called the disjoint union, and in computing it corresponds to a variant record type. Among other reasons for calling it the sum, note that if type A has m distinct members, and type B has n distinct members, then the type A ⊎ B has m + n distinct members.
For instance, consider a type Bool with two members, and a type Tri with three members, as defined earlier. Then the type Bool ⊎ Tri has five members:
inj₁ true     inj₂ aa
inj₁ false    inj₂ bb
              inj₂ cc
For example, the following function enumerates all possible arguments of type Bool ⊎ Tri:
⊎-count : Bool ⊎ Tri → ℕ
⊎-count (inj₁ true)  = 1
⊎-count (inj₁ false) = 2
⊎-count (inj₂ aa)    = 3
⊎-count (inj₂ bb)    = 4
⊎-count (inj₂ cc)    = 5
Sum on types also shares a property with sum on numbers in that it is commutative and associative up to isomorphism.
Exercise ⊎-comm (recommended)

Show sum is commutative up to isomorphism.
-- Your code goes here
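One possible solution sketch (not part of the original text), following the pattern of the commutativity proof for products above; the case analyses mirror the swap in both directions:

```agda
-- Sketch only: sum is commutative up to isomorphism.
⊎-comm : ∀ {A B : Set} → A ⊎ B ≃ B ⊎ A
⊎-comm =
  record
    { to      = λ{ (inj₁ x) → inj₂ x ; (inj₂ y) → inj₁ y }
    ; from    = λ{ (inj₁ y) → inj₂ y ; (inj₂ x) → inj₁ x }
    ; from∘to = λ{ (inj₁ x) → refl ; (inj₂ y) → refl }
    ; to∘from = λ{ (inj₁ y) → refl ; (inj₂ x) → refl }
    }
```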
Exercise ⊎-assoc (practice)

Show sum is associative up to isomorphism.
-- Your code goes here
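One possible solution sketch (not part of the original text); each direction reassociates the two uses of injection, and the inverse proofs go by the same three-way case analysis:

```agda
-- Sketch only: sum is associative up to isomorphism.
⊎-assoc : ∀ {A B C : Set} → (A ⊎ B) ⊎ C ≃ A ⊎ (B ⊎ C)
⊎-assoc =
  record
    { to      = λ{ (inj₁ (inj₁ x)) → inj₁ x
                 ; (inj₁ (inj₂ y)) → inj₂ (inj₁ y)
                 ; (inj₂ z)        → inj₂ (inj₂ z) }
    ; from    = λ{ (inj₁ x)        → inj₁ (inj₁ x)
                 ; (inj₂ (inj₁ y)) → inj₁ (inj₂ y)
                 ; (inj₂ (inj₂ z)) → inj₂ z }
    ; from∘to = λ{ (inj₁ (inj₁ x)) → refl
                 ; (inj₁ (inj₂ y)) → refl
                 ; (inj₂ z)        → refl }
    ; to∘from = λ{ (inj₁ x)        → refl
                 ; (inj₂ (inj₁ y)) → refl
                 ; (inj₂ (inj₂ z)) → refl }
    }
```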
False is empty
False ⊥ never holds. We formalise this idea by declaring a suitable inductive type:
data ⊥ : Set where
  -- no clauses!
There is no possible evidence that ⊥ holds.

Dual to ⊤, for ⊥ there is no introduction rule but an elimination rule. Since false never holds, knowing that it holds tells us we are in a paradoxical situation. Given evidence that ⊥ holds, we might conclude anything! This is a basic principle of logic, known in medieval times by the Latin phrase ex falso, and known to children through phrases such as “if pigs had wings, then I’d be the Queen of Sheba”. We formalise it as follows:
⊥-elim : ∀ {A : Set} → ⊥ → A
⊥-elim ()
This is our first use of the absurd pattern (). Here since ⊥ is a type with no members, we indicate that it is never possible to match against a value of this type by using the pattern ().
The nullary case of case-⊎ is ⊥-elim. By analogy, we might have called it case-⊥, but chose to stick with the name in the standard library.

The nullary case of uniq-⊎ is uniq-⊥, which asserts that ⊥-elim is equal to any arbitrary function from ⊥:
uniq-⊥ : ∀ {C : Set} (h : ⊥ → C) (w : ⊥) → ⊥-elim w ≡ h w
uniq-⊥ h ()
Using the absurd pattern asserts there are no possible values for w, so the equation holds trivially.
We refer to ⊥ as the empty type. And, indeed, type ⊥ has no members. For example, the following function enumerates all possible arguments of type ⊥:
⊥-count : ⊥ → ℕ
⊥-count ()
Here again the absurd pattern () indicates that no value can match type ⊥.
For numbers, zero is the identity of addition. Correspondingly, empty is the identity of sums up to isomorphism.
Exercise ⊥-identityˡ (recommended)

Show empty is the left identity of sums up to isomorphism.
-- Your code goes here
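One possible solution sketch (not part of the original text); the inj₁ case is impossible, which we record with an absurd pattern inside the pattern-matching lambdas:

```agda
-- Sketch only: empty is the left identity of sums up to isomorphism.
⊥-identityˡ : ∀ {A : Set} → ⊥ ⊎ A ≃ A
⊥-identityˡ =
  record
    { to      = λ{ (inj₁ ()) ; (inj₂ x) → x }
    ; from    = λ x → inj₂ x
    ; from∘to = λ{ (inj₁ ()) ; (inj₂ x) → refl }
    ; to∘from = λ x → refl
    }
```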
Exercise ⊥-identityʳ (practice)

Show empty is the right identity of sums up to isomorphism.
-- Your code goes here
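One possible solution sketch (not part of the original text), given directly rather than by composing commutativity with the left-identity result; this time the inj₂ case is the impossible one:

```agda
-- Sketch only: empty is the right identity of sums up to isomorphism.
⊥-identityʳ : ∀ {A : Set} → A ⊎ ⊥ ≃ A
⊥-identityʳ =
  record
    { to      = λ{ (inj₁ x) → x ; (inj₂ ()) }
    ; from    = λ x → inj₁ x
    ; from∘to = λ{ (inj₁ x) → refl ; (inj₂ ()) }
    ; to∘from = λ x → refl
    }
```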
Implication is function
Given two propositions A and B, the implication A → B holds if whenever A holds then B must also hold. We formalise implication using the function type, which has appeared throughout this book.
Evidence that A → B holds is of the form

λ (x : A) → N

where N is a term of type B containing as a free variable x of type A.
Given a term L providing evidence that A → B holds, and a term M providing evidence that A holds, the term L M provides evidence that B holds. In other words, evidence that A → B holds is a function that converts evidence that A holds into evidence that B holds.

Put another way, if we know that A → B and A both hold, then we may conclude that B holds:
→-elim : ∀ {A B : Set} → (A → B) → A → B
→-elim L M = L M
In medieval times, this rule was known by the name modus ponens. It corresponds to function application.
Defining a function, with a named definition or a lambda abstraction, is referred to as introducing a function, while applying a function is referred to as eliminating the function.
Elimination followed by introduction is the identity:
η-→ : ∀ {A B : Set} (f : A → B) → (λ (x : A) → f x) ≡ f
η-→ f = refl
Implication binds less tightly than any other operator. Thus, A ⊎ B → B ⊎ A parses as (A ⊎ B) → (B ⊎ A).
Given two types A and B, we refer to A → B as the function space from A to B. It is also sometimes called the exponential, with B raised to the A power. Among other reasons for calling it the exponential, note that if type A has m distinct members, and type B has n distinct members, then the type A → B has nᵐ distinct members. For instance, consider a type Bool with two members and a type Tri with three members, as defined earlier. Then the type Bool → Tri has nine (that is, three squared) members:
λ{true → aa; false → aa} λ{true → aa; false → bb} λ{true → aa; false → cc}
λ{true → bb; false → aa} λ{true → bb; false → bb} λ{true → bb; false → cc}
λ{true → cc; false → aa} λ{true → cc; false → bb} λ{true → cc; false → cc}
For example, the following function enumerates all possible arguments of the type Bool → Tri:
→-count : (Bool → Tri) → ℕ
→-count f with f true | f false
... | aa | aa = 1
... | aa | bb = 2
... | aa | cc = 3
... | bb | aa = 4
... | bb | bb = 5
... | bb | cc = 6
... | cc | aa = 7
... | cc | bb = 8
... | cc | cc = 9
Exponential on types also shares a property with exponential on numbers in that many of the standard identities for numbers carry over to the types.
Corresponding to the law
(p ^ n) ^ m ≡ p ^ (n * m)
we have the isomorphism
A → (B → C) ≃ (A × B) → C
Both types can be viewed as functions that, given evidence that A holds and evidence that B holds, return evidence that C holds. This isomorphism sometimes goes by the name currying.
The proof of the right inverse requires extensionality:
currying : ∀ {A B C : Set} → (A → B → C) ≃ (A × B → C)
currying =
  record
    { to      = λ{ f → λ{ ⟨ x , y ⟩ → f x y }}
    ; from    = λ{ g → λ{ x → λ{ y → g ⟨ x , y ⟩ }}}
    ; from∘to = λ{ f → refl }
    ; to∘from = λ{ g → extensionality λ{ ⟨ x , y ⟩ → refl }}
    }
Currying tells us that instead of a function that takes a pair of arguments, we can have a function that takes the first argument and returns a function that expects the second argument. Thus, for instance, our way of writing addition
_+_ : ℕ → ℕ → ℕ
is isomorphic to a function that accepts a pair of arguments:
_+′_ : (ℕ × ℕ) → ℕ
Agda is optimised for currying, so 2 + 3 abbreviates _+_ 2 3. In a language optimised for pairing, we would instead take 2 +′ 3 as an abbreviation for _+′_ ⟨ 2 , 3 ⟩.
Corresponding to the law
p ^ (n + m) ≡ (p ^ n) * (p ^ m)
we have the isomorphism:
(A ⊎ B) → C ≃ (A → C) × (B → C)
That is, the assertion that if either A holds or B holds then C holds is the same as the assertion that if A holds then C holds and if B holds then C holds. The proof of the left inverse requires extensionality:
→-distrib-⊎ : ∀ {A B C : Set} → (A ⊎ B → C) ≃ ((A → C) × (B → C))
→-distrib-⊎ =
  record
    { to      = λ{ f → ⟨ f ∘ inj₁ , f ∘ inj₂ ⟩ }
    ; from    = λ{ ⟨ g , h ⟩ → λ{ (inj₁ x) → g x ; (inj₂ y) → h y } }
    ; from∘to = λ{ f → extensionality λ{ (inj₁ x) → refl ; (inj₂ y) → refl } }
    ; to∘from = λ{ ⟨ g , h ⟩ → refl }
    }
Corresponding to the law
(p * n) ^ m ≡ (p ^ m) * (n ^ m)
we have the isomorphism:
A → B × C ≃ (A → B) × (A → C)
That is, the assertion that if A holds then B holds and C holds is the same as the assertion that if A holds then B holds and if A holds then C holds. The proof of left inverse requires both extensionality and the rule η-× for products:
→-distrib-× : ∀ {A B C : Set} → (A → B × C) ≃ (A → B) × (A → C)
→-distrib-× =
  record
    { to      = λ{ f → ⟨ proj₁ ∘ f , proj₂ ∘ f ⟩ }
    ; from    = λ{ ⟨ g , h ⟩ → λ x → ⟨ g x , h x ⟩ }
    ; from∘to = λ{ f → extensionality λ{ x → η-× (f x) } }
    ; to∘from = λ{ ⟨ g , h ⟩ → refl }
    }
Distribution
Products distribute over sum, up to isomorphism. The code to validate this fact is similar in structure to our previous results:
×-distrib-⊎ : ∀ {A B C : Set} → (A ⊎ B) × C ≃ (A × C) ⊎ (B × C)
×-distrib-⊎ =
  record
    { to      = λ{ ⟨ inj₁ x , z ⟩ → (inj₁ ⟨ x , z ⟩)
                 ; ⟨ inj₂ y , z ⟩ → (inj₂ ⟨ y , z ⟩) }
    ; from    = λ{ (inj₁ ⟨ x , z ⟩) → ⟨ inj₁ x , z ⟩
                 ; (inj₂ ⟨ y , z ⟩) → ⟨ inj₂ y , z ⟩ }
    ; from∘to = λ{ ⟨ inj₁ x , z ⟩ → refl
                 ; ⟨ inj₂ y , z ⟩ → refl }
    ; to∘from = λ{ (inj₁ ⟨ x , z ⟩) → refl
                 ; (inj₂ ⟨ y , z ⟩) → refl }
    }
Sums do not distribute over products up to isomorphism, but it is an embedding:
⊎-distrib-× : ∀ {A B C : Set} → (A × B) ⊎ C ≲ (A ⊎ C) × (B ⊎ C)
⊎-distrib-× =
  record
    { to      = λ{ (inj₁ ⟨ x , y ⟩) → ⟨ inj₁ x , inj₁ y ⟩
                 ; (inj₂ z)         → ⟨ inj₂ z , inj₂ z ⟩ }
    ; from    = λ{ ⟨ inj₁ x , inj₁ y ⟩ → (inj₁ ⟨ x , y ⟩)
                 ; ⟨ inj₁ x , inj₂ z ⟩ → (inj₂ z)
                 ; ⟨ inj₂ z , _      ⟩ → (inj₂ z) }
    ; from∘to = λ{ (inj₁ ⟨ x , y ⟩) → refl
                 ; (inj₂ z)         → refl }
    }
Note that there is a choice in how we write the from function. As given, it takes ⟨ inj₂ z , inj₂ z′ ⟩ to inj₂ z, but it is easy to write a variant that instead returns inj₂ z′. We have an embedding rather than an isomorphism because the from function must discard either z or z′ in this case.
In the usual approach to logic, both of the distribution laws are given as equivalences, where each side implies the other:
A × (B ⊎ C) ⇔ (A × B) ⊎ (A × C)
A ⊎ (B × C) ⇔ (A ⊎ B) × (A ⊎ C)
But when we consider the functions that provide evidence for these implications, then the first corresponds to an isomorphism while the second only corresponds to an embedding, revealing a sense in which one of these laws is “more true” than the other.
Exercise ⊎-weak-× (recommended)

Show that the following property holds:
postulate
  ⊎-weak-× : ∀ {A B C : Set} → (A ⊎ B) × C → A ⊎ (B × C)
This is called a weak distributive law. Give the corresponding distributive law, and explain how it relates to the weak version.
-- Your code goes here
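One possible proof sketch (not part of the original text; primed to avoid clashing with the postulate above). The corresponding full distributive law has conclusion (A × C) ⊎ (B × C); the weak version discards the evidence for C in the first disjunct, which is why it is only an implication rather than an isomorphism:

```agda
-- Sketch only: the weak distributive law.
⊎-weak-×′ : ∀ {A B C : Set} → (A ⊎ B) × C → A ⊎ (B × C)
⊎-weak-×′ ⟨ inj₁ x , z ⟩ = inj₁ x
⊎-weak-×′ ⟨ inj₂ y , z ⟩ = inj₂ ⟨ y , z ⟩
```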
Exercise ⊎×-implies-×⊎ (practice)

Show that a disjunct of conjuncts implies a conjunct of disjuncts:
postulate
  ⊎×-implies-×⊎ : ∀ {A B C D : Set} → (A × B) ⊎ (C × D) → (A ⊎ C) × (B ⊎ D)
Does the converse hold? If so, prove; if not, give a counterexample.
-- Your code goes here
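One possible proof sketch (not part of the original text; again primed to avoid clashing with the postulate). As for the converse, it fails: taking A and D to be ⊤ and B and C to be ⊥, the pair ⟨ inj₁ tt , inj₂ tt ⟩ inhabits (A ⊎ C) × (B ⊎ D), but (A × B) ⊎ (C × D) is empty.

```agda
-- Sketch only: a disjunct of conjuncts implies a conjunct of disjuncts.
⊎×-implies-×⊎′ : ∀ {A B C D : Set} → (A × B) ⊎ (C × D) → (A ⊎ C) × (B ⊎ D)
⊎×-implies-×⊎′ (inj₁ ⟨ a , b ⟩) = ⟨ inj₁ a , inj₁ b ⟩
⊎×-implies-×⊎′ (inj₂ ⟨ c , d ⟩) = ⟨ inj₂ c , inj₂ d ⟩
```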
Standard library
Definitions similar to those in this chapter can be found in the standard library:
import Data.Product using (_×_; proj₁; proj₂) renaming (_,_ to ⟨_,_⟩)
import Data.Unit using (⊤; tt)
import Data.Sum using (_⊎_; inj₁; inj₂) renaming ([_,_] to case-⊎)
import Data.Empty using (⊥; ⊥-elim)
import Function.Equivalence using (_⇔_)
The standard library constructs pairs with _,_ whereas we use ⟨_,_⟩. The former makes it convenient to build triples or larger tuples from pairs, permitting a , b , c to stand for (a , (b , c)). But it conflicts with other useful notations, such as [_,_] to construct a list of two elements in Chapter Lists and Γ , A to extend environments in Chapter DeBruijn.
The standard library _⇔_ is similar to ours, but the one in the standard library is less convenient, since it is parameterised with respect to an arbitrary notion of equivalence.
Unicode
This chapter uses the following unicode:
× U+00D7 MULTIPLICATION SIGN (\x)
⊎ U+228E MULTISET UNION (\u+)
⊤ U+22A4 DOWN TACK (\top)
⊥ U+22A5 UP TACK (\bot)
η U+03B7 GREEK SMALL LETTER ETA (\eta)
₁ U+2081 SUBSCRIPT ONE (\_1)
₂ U+2082 SUBSCRIPT TWO (\_2)
⇔ U+21D4 LEFT RIGHT DOUBLE ARROW (\<=>)
This paragraph was adapted from “Propositions as Types”, Philip Wadler, Communications of the ACM, December 2015. ↩