This post is a record of my struggles learning the Lean 4 Proof Assistant, which I’m doing by the inefficient and misleading strategy of interpreting it whenever I can as a souped-up version of Haskell.
Curry–Howard in propositional logic
The formula
$(p \to (q \to r)) \to ((p \to q) \to (p \to r))$
is the generic instance of the second axiom scheme in Lukasiewicz’s formulation of propositional logic. As such, it is a tautology, true under any assignment of true and false to the propositional letters. An elegant proof of this is by the Curry–Howard correspondence for propositional logic: a proposition is a tautology if and only if, interpreting the propositional letters as type variables, there is a function of the corresponding type. Thus the fact that the Haskell compiler accepts
lukasiewicz2 :: (p -> (q -> r))
-> ((p -> q) -> (p -> r))
lukasiewicz2 h_ptoqr h_pq h_p = (h_ptoqr h_p) (h_pq h_p)
-- h_ptoqr h_p :: q -> r evaluated at h_pq h_p :: q ...
-- has type r
or that the Lean compiler accepts
lemma lukasiewicz2 {p q r : Prop} : (p → (q → r))
→ ((p → q) → (p → r))
:= fun h_ptoqr h_pq h_p => (h_ptoqr h_p) (h_pq h_p)
is a proof that the Lukasiewicz formula is a tautology. Here h_pq can be read as ‘the hypothesis that $p \to q$’, or ‘a proof that $p \to q$’. It is a value of type $p \to q$. (For more on the curly braces block see below.) The Lean code may look more familiar in the entirely equivalent form below, which shows that Lean permits multiple arguments in type signatures to be listed by juxtaposition, just as for multiple arguments to functions:
lemma lukasiewicz2' {p q r : Prop}
(h_ptoqr : p → (q → r)) (h_pq : p → q) : p → r
:= fun h_p => (h_ptoqr h_p) (h_pq h_p)
This proof can be rewritten in tactic style using the intro tactic to introduce the hypothesis h_p of type p, and then the exact tactic to finish:
lemma lukasiewicz2'' {p q r : Prop}
(h_ptoqr : p → (q → r)) (h_pq : p → q) : p → r
:= by intro h_p; exact (h_ptoqr h_p) (h_pq h_p)
One thing I am struggling with in Lean is getting used to tactics, so examples like this, where the macro-rewriting black box can easily be understood, are particularly welcome. I hope the not-so-subtle neurolinguistic programming of typing fun every so often will keep me motivated.
Some more tautologies
In a similar spirit, the Curry–Howard proofs that p → p and p → (q → p) are tautologies have one-word solutions: id (valid in Haskell and Lean) and const (valid in Haskell). Here p → (q → p) is the generic instance of the first Lukasiewicz axiom scheme.
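In Lean these can be spelled out as follows (a quick sketch using anonymous examples; Lean’s library spells const as Function.const):
example {p : Prop} : p → p := id
example {p q : Prop} : p → q → p := fun h_p _ => h_p
-- the second is Haskell's const; Function.const q should work too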
Negation
In Lean, negation is definitionally the same as ‘implies false’. That is, $\neg p$ is $p \to \mathrm{False}$. We can mimic this in Haskell by writing
data False, to introduce a new type False which deliberately does not have any constructors. We then define negation by the type synonym type Not p = p -> False. (As a visual aid, -> is the Haskell arrow; this is allowed by Lean, but Lean also supports → which I’ve used for all Lean code.) For example, the proposition $(p \wedge \neg p) \to \mathrm{False}$ is a tautology, with Curry–Howard proof in Haskell
impossible :: (p, Not p) -> False
impossible (h_p, h_notp) = h_notp h_p
given by evaluating the function h_notp :: p -> False at h_p :: p. One could curry this definition to get the equivalent tautology $p \to (\neg p \to \mathrm{False})$ with definition
impossible' h_p h_notp = h_notp h_p. The analogous Lean code for the curried form is
lemma impossible {p : Prop} : p → ¬p → False
:= fun h_p => fun h_np => h_np h_p
which, very similarly to before, can be rewritten as
lemma impossible' {p : Prop} (h_p : p) (h_np : ¬p)
: False := h_np h_p
Observe that since a proposition cannot be both true and false, it is impossible, without using the special Haskell value undefined, or its Lean colleague sorry, to construct two valid inputs to impossible. (Or for another proof of this, note that a successful evaluation of impossible gives a value of the void type False.) But still, the fact that impossible type checks confirms that it is a tautology.
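For symmetry with the Haskell pair version, here is an uncurried Lean form (a sketch; the lemma name is my own, and the anonymous constructor pattern destructs the conjunction):
lemma impossible_pair {p : Prop} : p ∧ ¬p → False
  := fun ⟨h_p, h_np⟩ => h_np h_p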
For a simpler instance, observe that id :: False -> False type checks and so is a tautology. Thinking of types as their set of values, this is consistent with $|p \to q| = |q|^{|p|}$, since $|p \to \mathrm{False}| = 0^{|p|} = 0$ unless p is itself a void type, in which case we have $0^0 = 1$; correspondingly id is the unique function of type False -> False.
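Both counts can be checked in Lean (a quick sketch; False.elim is the library’s ex falso principle, the unique function from False to any type):
example : False → False := id
-- 0^0 = 1: the identity is the one inhabitant
example {p : Prop} : False → p := False.elim
-- |p|^0 = 1: one function from False to any p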
The contrapositive and vector space duality
Continuing in this mathematical theme, the contrapositive, formulated so that it introduces rather than removes a negation, is $(p \to q) \to (\neg q \to \neg p)$, and correspondingly
contrapositive :: (p -> q) -> (Not q -> Not p)
contrapositive h_pq h_nq h_p = h_nq (h_pq h_p)
is a valid Haskell program that takes h_pq :: p -> q and a proof that q is false, as expressed by a value h_nq :: q -> False, to produce a new value, namely the composition h_nq . h_pq of type p -> False, which is then evaluated at h_p :: p, the argument of the output Not p. Here it may be helpful to note that the final parentheses are redundant: the type could also be written as (p -> q) -> (q -> False) -> (p -> False). Again the Lean code is almost identical:
lemma contrapositive {p q : Prop} : (p → q) → (¬q → ¬p)
:= fun h_pq h_nq h_p => h_nq (h_pq h_p)
-- fun h_pq => (fun h_nq => (fun h_p =>
-- h_nq (h_pq h_p)))
where the commented-out lines are the less idiomatic form; like any civilised being, Lean curries by default.
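Currying also means partial application comes for free: applying contrapositive to a single argument already yields a proof of ¬q → ¬p (a small check):
example {p q : Prop} (h_pq : p → q) : ¬q → ¬p
  := contrapositive h_pq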
The flip in the direction of implication corresponds to vector space duality: if we read False as $F$, for a field $F$ (please ignore that this is not a void type), then the contrapositive states that, given an $F$-linear map $\phi : V \to W$, there is a map $W^\star \to V^\star$, namely the dual map $\phi^\star : \theta \mapsto \theta \circ \phi$. The introduction of a double negation corresponds to the canonical inclusion $\iota_V : V \to V^{\star\star}$ of a vector space in its double dual that sends $v$ to evaluation at $v$:
double_negation_intro :: p -> Not (Not p)
-- equivalently p -> (p -> False) -> False
double_negation_intro h_p h_np = h_np h_p
or in Lean
lemma double_negation_intro {p : Prop} : p → ¬¬p
:= fun h_p => (fun h_notp => h_notp h_p)
which works because ¬¬p is (p → False) → False so we can take the given value of type p and evaluate it at the argument of type p → False. As an antidote to the false argument that the number of negations cannot go down, I find it helpful to consider
lemma triple_negation_elimination {p : Prop} : ¬¬¬p → ¬p
:= fun h_nnnp =>
(fun h_p => h_nnnp (double_negation_intro h_p))
which corresponds to the canonical map $V^{\star\star\star} \to V^\star$ defined by dualising the canonical inclusion $V \to V^{\star\star}$ just seen. These maps are not just canonical but also ‘natural’ in the sense of ‘natural transformation’: if $\phi : V \to W$ is a linear map then the naturality squares below commute.
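For the double-dual inclusions, the square asserts that $\iota_W \circ \phi = \phi^{\star\star} \circ \iota_V$:
$$\begin{array}{ccc} V & \xrightarrow{\;\phi\;} & W \\ \iota_V \downarrow & & \downarrow \iota_W \\ V^{\star\star} & \xrightarrow{\;\phi^{\star\star}\;} & W^{\star\star} \end{array}$$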
As a big hint for the question in the following section, can you define a map $V^{\star\star} \to V$, natural in the vector space $V$? If your answer is ‘yes’, you are perhaps working in the category of finite-dimensional vector spaces, which curiously is not the right setting for this question, even though the question does not, on the face of it, involve any form of infinity!
Double negation elimination
The Lukasiewicz system needs a third axiom scheme to become complete, in the sense that all tautologies are provable from the axioms by repeated applications of modus ponens. This third axiom scheme has as its generic instance the contrapositive, this time removing negation:
$(\neg p \to \neg q) \to (q \to p)$
Does this have a Curry–Howard proof in Haskell? Does it have a Curry–Howard proof in Lean? Using the version of the contrapositive already proved, we can certainly prove $(\neg p \to \neg q) \to (\neg\neg q \to \neg\neg p)$; then using double negation introduction $q \to \neg\neg q$, we get
$(\neg p \to \neg q) \to (q \to \neg\neg p).$
The Curry–Howard proof of this in Haskell is, using our existing functions,
lukasiewicz3_nearly :: (Not p -> Not q)
-> q -> Not (Not p)
lukasiewicz3_nearly h_npnq h_q =
contrapositive h_npnq (double_negation_intro h_q)
which can be ‘desugared’ in careful steps to
lukasiewicz3_nearly1 h_npnq h_q = \h_nq ->
contrapositive h_npnq (double_negation_intro h_q) h_nq
lukasiewicz3_nearly2 h_npnq h_q = \h_nq ->
(double_negation_intro h_q) (h_npnq h_nq)
lukasiewicz3_nearly3 h_npnq h_q = \h_nq ->
(\h_nq' -> h_nq' h_q) (h_npnq h_nq)
lukasiewicz3_nearly4 h_npnq h_q = \h_nq
-> (h_npnq h_nq) h_q
lukasiewicz3_nearly5 h_npnq h_q h_nq = h_npnq h_nq h_q
I find it quite remarkable that, without a type signature, the inferred type for the final version is (a -> b -> c) -> b -> a -> c, which, by parametric polymorphism and the ‘theorems for free’ mantra, has as its unique value the standard combinator flip:
lukasiewicz3_nearly6 g x y = g y x
I have no intuition for why this apparently complicated tautology involving both negation and double negation should be such a simple combinator.
It is now clear that the third Lukasiewicz axiom scheme is equivalent in logical power to double negation elimination. But, finally reaching one punchline, the Haskell type Not (Not p) -> p, and its syntactic equivalent ((p -> False) -> False) -> p are uninhabited. Thus there is no way to complete
double_negation_elimination :: Not (Not p) -> p
double_negation_elimination h_nnp = ...
without using undefined. Thus Haskell exhibits the Curry–Howard correspondence for intuitionistic, rather than classical, logic.
The foundations of Lean lie in dependent type theory (the Calculus of Inductive Constructions), which is intuitionistic by default. But Lean supports classical logic, and if I understand right, this is automatically enabled by importing Mathlib. Remembering that proof by contradiction is ‘if the falsity of $p$ implies $\mathrm{False}$, then $p$’, we get a one-line proof of double negation elimination in Lean.
lemma double_negation_elimination {p : Prop} :
¬¬p → p := by_contra
-- by_contra has type (¬?m.1 → False) → ?m.1
If you are playing along and find this gives an error, please add import Mathlib.Tactic.ByContra at the top of your file. Or add import Mathlib, and go away and make some tea. I used a full import of Mathlib for the examples of dependent types involving natural numbers below. Using this we can prove the instance of the third Lukasiewicz scheme:
lemma contrapositive_eliminating_negation {p q : Prop}
: (¬p → ¬q) → q → p :=
fun h_npnq h_q => by by_contra h_np;
-- we have h_npnq : ¬p → ¬q, h_q : q and now h_np : ¬p
-- so all set up for
exact h_npnq h_np h_q
where the definition in the body could be rewritten as intro h_npnq h_q; by_contra h_np; exact h_npnq h_np h_q.
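Spelled out as a stand-alone lemma in tactic style (the primed name is my own), this reads:
lemma contrapositive_eliminating_negation' {p q : Prop}
    : (¬p → ¬q) → q → p := by
  intro h_npnq h_q
  by_contra h_np
  exact h_npnq h_np h_q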
Peirce’s Law
For a similar example, Peirce’s Law, that $((p \to q) \to p) \to p$, is negation-free but still unprovable in intuitionistic logic. Peirce’s Law is the type of the dreaded (well, by me anyway) Scheme function
call-with-current-continuation. This type is void in Haskell: instead the nearest approximation is callCC
class Monad m => MonadCont m where
callCC :: ((a -> m b) -> m a) -> m a
where, omitting type constructors, and choosing a fixed return type r, one suitable type for a monad in the class MonadCont is m p = (p -> r) -> r. This might suggest taking r to be False but then m a is Not (Not a), so we stay in the doubly negated world. I found it an instructive exercise to deduce double negation elimination from Peirce’s Law, by using Peirce with $q$ specialised to False to get $((p \to \mathrm{False}) \to p) \to p$, i.e. $(\neg p \to p) \to p$, and then using the chain of implications $\neg\neg p \Rightarrow (\neg p \to p) \Rightarrow p$; this becomes a function composition below. I will leave filling in the
sorry in the sublemma h_aux : ¬¬p → (¬p → p) below as an exercise. As a hint, one solution begins with intro h_nn h_n to get hold of values of the input types ¬¬p and ¬p; now compare the earlier proof by contradiction.
lemma peirce_implies_double_negation_elimination
: (∀ r q : Prop, ((r → q) → r) → r)
→ (∀ p : Prop, ¬¬p → p) := by
intro peirce p
specialize peirce p False
-- current state is p : Prop,
-- peirce : ((p → False) → p) → p, goal ¬¬p → p
have h_aux : ¬¬p → (¬p → p) := by sorry
exact fun h_nnp => peirce (h_aux h_nnp)
This proof is probably comically long-winded to experts but no matter. At least I now have a better appreciation for why the first year undergraduates doing my introduction to pure maths course used to write proofs that began with a confident ‘suppose for a contradiction’, continued with a contrapositive, flirted perhaps with induction, and so on and so on, until eventually one reached the meat of the proof, which was usually a one line direct proof of the required proposition.
Types of types
Whereas in Haskell there are values, types and kinds (but without extensions the only kind that matters is *, the common kind of all types), in Lean 4, in order to avoid Girard’s paradox, there is an infinite hierarchy. (The logical system of Principia Mathematica is, I believe, similar.) Nearly at the bottom of the hierarchy are values, just like in Haskell, such as the numbers 3 or 1.234, the list
[1,2,3] or the string literal "hello". As in Haskell, these values have various types, for instance Nat for 3, ℝ for 1.234, List Nat for [1,2,3] and String for "hello". (Unlike Haskell, strings are not definitionally the same as lists of characters, so String is more like Haskell’s efficient Text data type.) Thus 3 : Nat, 1.234 : ℝ, [1,2,3] : List Nat and "hello" : String. Each of Nat, ℝ, List Nat, String is a type. The common type of all these types is Type, which can also be notated Type 0. The hierarchy then proceeds: the type of Type is Type 1, the type of Type 1 is Type 2, and so on. Off to one side, but at the lowest level of all, are values of a further type: Prop. This is the only type we have used so far. The type of Prop is Type. Thus
10 : Nat : Type : Type 1 : Type 2
h_p : p → p : Prop : Type : Type 1 : Type 2
are chains of valid type signatures (supposing that h_p and p are in scope) and
variable (p q : Prop)
#check p → (q → p)
#check p → q
#check p → p
prints out p → (q → p) : Prop, p → q : Prop, and p → p : Prop, as expected. (Note the second is not, in general, an inhabited type.) To avoid having to inspect the interactive output, one can include the expected type in the query itself, using an extra pair of parentheses. Thus
#check (p → (q → p) : Prop)
is a suitable replacement for the first check; any mismatch gets highlighted in VSCode with the usual red cross. Some of the claims above on types of types can be confirmed using
#check (3 : Nat)
#check (Nat : Type)
#check (Prop : Type)
#check (Type : Type 1)
Impredicativity of the proposition type
One subtle point is that normally the type of a function type α → β where α : Type i, β : Type j is Type k where k is the maximum of i and j. But if β : Prop then no matter how high α is in the hierarchy, the resulting type is Prop. Thus the types of the entities following := in the three examples below are respectively Type 2, Prop and Prop.
example {α : Type 1} {β : Type 2} : Type 2 := α → β
example : Prop := (n : Nat) → 0 + n = n
example {p : Prop} : Prop := (α : Type 4) → p
This property, the ‘impredicativity of Prop’, is needed below so that our functions of dependent type are propositions, and so compound proofs do not quickly leave the world of propositions.
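This is what lets the statement of peirce_implies_double_negation_elimination above, which quantifies over all propositions, still be a single proposition; a quick check:
#check ((∀ r q : Prop, ((r → q) → r) → r) : Prop)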
Proof irrelevance
I have the strong feeling that there can be essentially different proofs of the same proposition. For example, the Fundamental Theorem of Algebra can be proved by induction on the 2-power dividing the degree of the polynomial, using only the intermediate value theorem from basic calculus, or instead as a corollary of Liouville’s theorem from complex analysis that a bounded entire function is constant. Lean 4 would disagree:
theorem proof_irrelevance {p : Prop} (h_p h'_p : p)
: h_p = h'_p := by rfl
Here rfl can, I think, be read as ‘are reflexively equal’. The analogue in Haskell would be a data type P that is made an instance of Eq by instance Eq P where _ == _ = True.
Aside on curly braces in types
The block {p q r : Prop} in the first example lukasiewicz2 declares that p, q, r are implicit arguments, each of type Prop; this block can be omitted if one first declares variable (p q r : Prop) at the top of the file, as in the check example above. This is a bit like declaring p, q, r to be global types, each themselves of type Prop; compare a global declaration const int x = 3 of global values in a traditional imperative language such as C. Lean also has an analogue of Haskell’s type classes, for which square brackets are used. For instance [Ring R] means that the compiler should know (probably from some other file) that R has the structure of a ring. We won’t need type classes at all below.
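For instance, the very first lemma can be restated in this style (a sketch; note that with the round-bracket declaration p, q, r become explicit arguments of the lemma):
variable (p q r : Prop)
lemma lukasiewicz2_var : (p → (q → r)) → ((p → q) → (p → r))
  := fun h_ptoqr h_pq h_p => (h_ptoqr h_p) (h_pq h_p)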
Function types and forall
So far we have stayed in the world of propositional logic (which unlike predicate logic does not have quantification), and correspondingly, except for double negation elimination, everything we have done in Lean can also be done in Haskell. Function types, or $\Pi$-types, are the key place where the Lean type system is far more expressive. Let me show how one can discover the syntax for oneself. Using the tautology $r \to r$, the interactive environment prints out the identical output
∀ (r : Prop), r → r : Prop for both the #check queries below.
#check (r : Prop) → r → r
#check ∀ r : Prop, r → r
This is not the same as the output p → p : Prop of #check p → p, which is only valid thanks to the variable declaration variable (p q : Prop) above. Thus ∀ r : Prop, r → r is the type of functions whose
- first argument is a proposition, called r (say)
- return value has type the proposition r → r
Or equivalently, ∀ r : Prop, r → r is the type of functions whose
- first argument is a proposition, called r (as it happens)
- second argument is a value of type r (let’s call it h_r)
- return value is of type r (it must be h_r).
As confirmation here are Lean proofs of the tautology $p \to p$, first using implicit declaration as before, and then written with each of these interpretations in mind:
lemma self_implication {p : Prop} : p → p := id
lemma self_implication' : (∀ r : Prop, r → r) :=
by intro r; exact id
lemma self_implication'' : (r : Prop) → r → r :=
fun r h_r => h_r
Both primed forms have identical type signatures (when read by the compiler) and the values defined, i.e. the identity functions r → r for each r : Prop, are definitionally equal. The compiler, at least with my default settings, will object that the argument r in the final functional form is unused; it is a nice touch that the output is then cluttered still further by the instruction set_option linter.unusedVariables false on how to declutter it. But fair enough: it would be more idiomatic to write
lemma self_implication''' : (r : Prop) → r → r :=
fun _ h_r => h_r
using the same underscore wildcard notation as Haskell for an unused argument. Yet further definitions (i.e. the part following :=) are possible, for instance,
lemma self_implication'''' : (r : Prop) → r → r :=
by simp
lemma self_implication''''' : (r : Prop) → r → r :=
by simp?
both succeed, and the latter even tells us how it did it:
Try this: simp only [imp_self, implies_true]
Well, if you really must. I think I prefer the identity function.
Dependent types
Observe that the type of the output of self_implication'' depends explicitly on the first argument: in the first slot r, of type Prop, is the proposition to be shown to imply itself, whereas in the second and third slots, r is the type. The nearest we can get to the type signature in Haskell, supposing a type Prop is defined, is
self_implication_Haskell :: Prop -> p -> p -- what's p?
except this entirely fails to capture that p is not just a proposition, but in fact the first argument of the function. Instead the type signature above specifies a function taking an arbitrary proposition to a fully polymorphic function p -> p, with no connection between the type variable p and the proposition. (To underline this point, equivalent Haskell type signatures in which p appears within the scope of a universal quantifier are :: Prop -> forall p. p -> p and :: forall p. Prop -> p -> p; you might need to enable the language extension ExplicitForAll for this to be accepted, but it seems that it is on by default in ghci-9.6.7.) Of course since we can give a Curry–Howard proof of self-implication in Haskell, there is an acceptable definition of self_implication_Haskell, namely \_ h_p -> h_p, but from our new dependent-type perspective, this is just a potentially confusing stroke of good luck.
Propositions indexed by natural numbers as a dependent type
For an example where it is not just the type signature but the whole function that cannot be realised in Haskell (at least without special tricks, see the final section below), we consider propositions parametrised by natural numbers. The following are equivalent declarations of a function that takes as input a natural number n, of type Nat, and returns a value of type 0 + n = n, i.e. a proof that adding zero to n gives n.
lemma zero_add_nat : ∀ n : Nat, 0 + n = n := by sorry
lemma zero_add_nat' : (n : Nat) → (0 + n = n) :=
by sorry
In both cases one suitable replacement for sorry is intro n; ring_nf: after introducing n, the target is 0 + n = n, which falls to the ring normal form tactic. Showing a better appreciation of Mathlib, we could also use intro n; exact zero_add n, or, showing a better appreciation of Peano arithmetic, we could use
lemma zero_add_nat'' : (n : Nat) → 0 + n = n := by
intro n; cases n
-- zero case 0 + 0 = 0
exact (add_zero 0)
-- succ case: ⊢ 0 + (n' + 1) = n' + 1
rename_i n
-- Nat.add_assoc (n m k : ℕ) : n+m+k = n+(m+k)
rewrite [← Nat.add_assoc]
-- goal is now 0 + n + 1 = n + 1
-- which is implicitly (0 + n) + 1 = n + 1
rewrite [zero_add_nat'' n]
rfl
Above I edited the Lean output to use n' rather than the dagger from Lean, which did not come through well on WordPress. It would be more idiomatic to replace cases n with induction n, binding the inductive hypothesis to hn in the succ case. Also it is standard to use rw rather than rewrite, combining multiple rewrites in one line. Since rw includes a weak form of rfl, the final line is then redundant. With these changes we get something you might recognise from the Natural Number Game.
lemma zero_add_nat''' : (n : Nat) → 0 + n = n := by
intro n; induction n with
| zero => exact (add_zero 0)
| succ n hn => rw [← Nat.add_assoc, hn]
This is also a good example of the impredicativity of Prop mentioned in the section above on the type hierarchy, where we saw that example : Prop := (n : Nat) → 0 + n = n type checks. Correspondingly, the type ∀ n : Nat, 0 + n = n of zero_add_nat is itself of type Prop. This can be verified, taking great care not to confuse types on the left of := with types (as values) on the right, by
def type_of_zero_add_nat := ∀ n : Nat, 0 + n = n
#check (type_of_zero_add_nat : Prop)
Pi-types
Returning to the type signatures, notice that both zero_add_nat : (n : Nat) → 0 + n = n and
lemma self_implication''' : (r : Prop) → r → r
:= fun _ h_r => h_r
taken verbatim from above are of the general form
lemma piTest {N : Type} {f : N → Prop} : (n : N) → f n
:= by sorry
which does indeed type check. (The triangle warning is from the use of sorry, which is of course the only way to complete the definition.) For more on the $\Pi$-type, see ‘Function types (a.k.a. Pi types)’ in Buzzard’s Formalising mathematics in Lean: one interesting remark I’ll trail here is that a $\Pi$-type is analogous to a tangent vector field on a manifold, in that each value of the field $X$ lies in a different (vector) space: $X(m) \in T_m M$ for each point $m$ of the manifold $M$. Anyway, we have now seen the most general type of a Lean function. Buzzard goes on to discuss inductive types (analogous to algebraic data types in Haskell) and quotient types (with no Haskell analogue, beyond what one could fake using GADTs).
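A small Lean example in this spirit (the function name is my own): each output of largest lives in its own type Fin (n + 1), the natural numbers less than n + 1, just as each tangent vector $X(m)$ lives in its own space $T_m M$.
def largest : (n : Nat) → Fin (n + 1)
  := fun n => ⟨n, Nat.lt_succ_self n⟩
-- largest 3 : Fin 4 and largest 7 : Fin 8; the codomain
-- depends on the value of the argument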
This is the end of the Lean part of this post, and probably a very good place to stop reading. Thank you for getting this far!
Playing the natural number game in Haskell with singletons
There is a website out there with about 25 ‘design patterns’, all of which as far as I can tell are ingenious ways to work around the limitations of the type systems in more traditional languages, of which the prime miscreant seems to be Java. So it is with some dismay that I learned from the wonderful Haskell Unfolder podcast that the singleton design pattern is a good way to bring some of the expressiveness of dependent types to Haskell.
We define Peano naturals as a data type
data Nat = Zero | Succ Nat deriving (Eq, Show)
If, like me, you often mistype a constructor when you mean a type, as in makeZero :: Zero as opposed to the correct makeZero :: Nat (with definition makeZero = Zero), then you will have got rather used to seeing variations on the theme of ‘Not in scope: type constructor or class ‘Zero’. A data constructor of that name is in scope; did you mean DataKinds?’. Well, the great moment has finally arrived, when we do, in fact, mean DataKinds! (Seriously though, this is a nice example of why shipping without a million extensions enabled is the right choice for producing understandable error messages.) Using this extension, we get access to type-level naturals, written as 'Zero,
type One = 'Succ 'Zero
and so on. Confusingly the quotes can be omitted, because (unlike this poor human) the compiler always knows what is at the value level and what is at the type level. Here I am including the quotes every time they are appropriate. The three further extensions required appear as comments in the next code block.
To encode the proposition $m + n = p$ at the type level, we introduce a new generalized algebraic data type (using the GADTs extension) that has ‘wired in’ the Peano axioms that $m + \mathrm{succ}(n) = \mathrm{succ}(m + n)$ for all $m$ and $n$ and, by definition, $m + 0 = m$. (Note that addition on the right is idiomatic, because it generalizes to ordinals in the usual way: $\omega + 1$ is the successor ordinal to $\omega$, whereas $1 + \omega = \omega$.)
data Add :: Nat -> Nat -> Nat -> Type where
-- Uses GADTs and KindSignatures; Type is imported from Data.Kind
AddZero :: Add m 'Zero m
AddSucc :: Add m n p -> Add m ('Succ n) ('Succ p)
deriving instance Show (Add m n p)
deriving instance Eq (Add m n p)
-- Uses StandaloneDeriving
We now use the Haskell compiler to prove that $0 + n = n$ for all $n$. What we want is to define a function
zeroAdd :: (n : Nat) -> Add Zero n n such that zeroAdd n has type Add Zero n n. But note that in zeroAdd n, n is a value, whereas in Add Zero n n it is a type. Thus the type of the output depends on a value in the input, exactly as expected for a dependent type. And this is not possible in Haskell, no matter how many extensions one turns on.
What we need is a bridge from values to types. For instance, we need a value of type SomeNewDataType One such that, when we pattern match on it, we know that the type parameter is One (and not 'Zero, or 'Succ ('Succ 'Zero), or …). This is what singletons do for us.
data TypeValueDict (n :: Nat) where
TVZero :: TypeValueDict 'Zero
TVSucc :: TypeValueDict n -> TypeValueDict ('Succ n)
As an illustration of the bridge, I offer
showTypeFromDict :: TypeValueDict nType
-> String
showTypeFromDict TVZero = "'Zero"
showTypeFromDict (TVSucc nValue) =
"'Succ " ++ showTypeFromDict nValue
tvOneTypeString = showTypeFromDict
(TVSucc TVZero :: TypeValueDict One)
-- no quote for One, it is a type (well really a Nat)
where the output is the string "'Succ 'Zero"; earlier we declared One as a type synonym for 'Succ 'Zero; it is a type of kind Nat.
The proof that $0 + n = n$ is now routine: note that the illegal type signature
zeroAdd :: (n : Nat) -> Add Zero n n is made legitimate using the dictionary.
zeroAdd :: TypeValueDict n -> Add Zero n n
zeroAdd TVZero = AddZero
zeroAdd (TVSucc r) = AddSucc (zeroAdd r)
zeroAddOne = zeroAdd (TVSucc TVZero) :: Add Zero One One
The fact that this compiles is a proof, at the type level, that $0 + n = n$ for all $n$, using only the Peano axioms. A truly expert Haskell programmer would probably refrain from running the code at all. Viewed from a sufficiently great distance, it is the same as the inductive proof in Lean.
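To make the comparison concrete, here is a Lean sketch of the same inductive structure (all names are my own); no singleton bridge is needed, because the value n is directly available to the type checker:
inductive MyAdd : Nat → Nat → Nat → Prop
  | addZero (m : Nat) : MyAdd m 0 m
  | addSucc {m n p : Nat} : MyAdd m n p → MyAdd m (n + 1) (p + 1)
theorem zeroAddLean : (n : Nat) → MyAdd 0 n n
  | 0 => .addZero 0
  | n + 1 => .addSucc (zeroAddLean n)
-- the pattern match on n plays the role of the match on
-- TypeValueDict n in the Haskell version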
Posted by mwildon 