haskell - "What part of Milner-Hindley do you not understand?" - Stack Overflow


URL: http://stackoverflow.com/q/12532552/286871


This syntax, while it may look complicated, is actually fairly simple. The basic idea comes from logic: the whole expression is an implication with the top half being the assumptions and the bottom half being the result. That is, if you know that the top expressions are true, you can conclude that the bottom expressions are true as well.
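Written out in LaTeX (this is just the generic shape, not any particular rule), a rule with two assumptions looks like this:

    \frac{\text{assumption}_1 \qquad \text{assumption}_2}{\text{conclusion}}

Read it as: if you can establish both assumptions, you may conclude the bottom line.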

Symbols

Another thing to keep in mind is that some letters have traditional meanings. In particular, Γ represents the "context" you're in, that is, the types of the other things you've seen so far. So something like Γ ⊢ ... means "the expression ..., assuming you know the types of every expression in Γ".

The ⊢ symbol essentially means that you can prove something. So Γ ⊢ ... is a statement saying "I can prove ... in a context Γ". These statements are also called type judgements.

Another thing to keep in mind: in math, just like ML and Scala, x : σ means that x has type σ. You can read it just like Haskell's x :: σ.
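Putting these pieces together, here is a small judgement written in LaTeX; the particular context and types are just made up for illustration:

    \{\, x : \mathrm{Int},\ f : \mathrm{Int} \to \mathrm{Bool} \,\} \vdash f\,x : \mathrm{Bool}

It reads: assuming x has type Int and f has type Int → Bool, the expression f x has type Bool (compare Haskell's f x :: Bool).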

What each rule means

So, knowing this, the first expression becomes easy to understand: if we know that x : σ ∈ Γ (that is, x has some type σ in some context Γ), then we know that Γ ⊢ x : σ (that is, in Γ, x has type σ). So really, this isn't telling you anything super-interesting; it just tells you how to use your context.
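Since the rules themselves appear as an image in the question, here is the usual way this first rule, [Var], is written, in LaTeX:

    \frac{x : \sigma \in \Gamma}{\Gamma \vdash x : \sigma} \quad [\mathrm{Var}]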

The other rules are also simple. For example, take [App]. This rule has two conditions: e₀ is a function from some type τ to some type τ' and e₁ is a value of type τ. Now you know what type you will get by applying e₀ to e₁! Hopefully this isn't a surprise :).
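For reference, [App] is usually written as:

    \frac{\Gamma \vdash e_0 : \tau \to \tau' \qquad \Gamma \vdash e_1 : \tau}{\Gamma \vdash e_0\ e_1 : \tau'} \quad [\mathrm{App}]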

The next rule has some more new syntax. Particularly, Γ, x : τ just means the context made up of Γ and the judgement x : τ. So, if we know that the variable x has a type of τ and the expression e has a type τ', we also know the type of a function that takes x and returns e. This just tells us what to do if we've figured out what type a function takes and what type it returns, so it shouldn't be surprising either.
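In the usual notation, this rule ([Abs]) is:

    \frac{\Gamma,\ x : \tau \vdash e : \tau'}{\Gamma \vdash \lambda x.\ e : \tau \to \tau'} \quad [\mathrm{Abs}]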

The next one just tells you how to handle let statements. If you know that some expression e₁ has a type τ as long as x has a type σ, then a let expression which locally binds x to a value of type σ will make e₁ have a type τ. Really, this just tells you that a let statement essentially lets you expand the context with a new binding—which is exactly what let does!
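The usual statement of [Let]:

    \frac{\Gamma \vdash e_0 : \sigma \qquad \Gamma,\ x : \sigma \vdash e_1 : \tau}{\Gamma \vdash \mathtt{let}\ x = e_0\ \mathtt{in}\ e_1 : \tau} \quad [\mathrm{Let}]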

The [Inst] rule deals with sub-typing. It says that if you have a value of type σ' and σ' is a sub-type of σ (⊑ represents a partial ordering relation), then that expression is also of type σ.
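In the usual notation:

    \frac{\Gamma \vdash e : \sigma' \qquad \sigma' \sqsubseteq \sigma}{\Gamma \vdash e : \sigma} \quad [\mathrm{Inst}]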

The final rule deals with generalizing types. A quick aside: a free variable is a variable that is not introduced by a let-statement or lambda inside some expression; the expression then depends on the value of that free variable from its context. The rule is saying that if there is some variable α which is not "free" in anything in your context, then it is safe to say that any expression e whose type you know (e : σ) will have that type for any value of α.
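The usual statement of [Gen]:

    \frac{\Gamma \vdash e : \sigma \qquad \alpha \notin \mathrm{free}(\Gamma)}{\Gamma \vdash e : \forall \alpha.\ \sigma} \quad [\mathrm{Gen}]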

How to use the rules

So, now that you understand the symbols, what do you do with these rules? Well, you can use these rules to figure out the type of various values. To do this, look at your expression (say f x y) and find a rule that has a conclusion (the bottom part) that matches your statement. Let's call the thing you're trying to find your "goal". In this case, you would look at the rule that ends in e₀ e₁. When you've found this, you now have to find rules proving everything above the line of this rule. These things generally correspond to the types of sub-expressions, so you're essentially recursing on parts of the expression. You just do this until you finish your proof tree, which gives you a proof of the type of your expression.
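As a tiny worked example (the types here are just made up for illustration), take Γ = { f : Int → Bool, x : Int } and the goal Γ ⊢ f x : Bool. The proof tree, in LaTeX, is:

    \frac{\dfrac{f : \mathrm{Int} \to \mathrm{Bool} \in \Gamma}{\Gamma \vdash f : \mathrm{Int} \to \mathrm{Bool}}\ [\mathrm{Var}]
          \qquad
          \dfrac{x : \mathrm{Int} \in \Gamma}{\Gamma \vdash x : \mathrm{Int}}\ [\mathrm{Var}]}
         {\Gamma \vdash f\ x : \mathrm{Bool}}\ [\mathrm{App}]

The goal at the bottom matches the conclusion of [App]; both premises are then closed off by [Var], so the tree is complete.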

So all these rules do is specify exactly—and in the usual mathematically pedantic detail :P—how to figure out the types of expressions.

Now, this should sound familiar if you've ever used Prolog: you're essentially computing the proof tree like a human Prolog interpreter. There is a reason Prolog is called "logic programming"! This is also relevant because the first way I was introduced to the H-M inference algorithm was by implementing it in Prolog. It's actually surprisingly simple and makes what's going on clear. You should certainly try it.
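If you'd rather stay in Haskell, here is a minimal sketch of the same idea (my own illustration, not code from the answer): it checks the [Var], [App] and [Abs] rules directly against a given context. It uses explicit type annotations on lambdas instead of real inference, so there is no unification, and [Let]/[Inst]/[Gen] are left out.

    -- Hypothetical sketch: a checker for [Var], [App] and [Abs] over
    -- monomorphic types only. Full Hindley-Milner inference also needs
    -- type variables, unification and generalization.
    data Type = TInt | TFun Type Type deriving (Eq, Show)

    data Expr
      = Lit Int              -- integer literal
      | Var String           -- variable
      | App Expr Expr        -- application  e0 e1
      | Lam String Type Expr -- annotated lambda  \(x :: t) -> e

    type Context = [(String, Type)]   -- this plays the role of Γ

    check :: Context -> Expr -> Maybe Type
    check _   (Lit _)     = Just TInt
    check ctx (Var x)     = lookup x ctx                  -- [Var]
    check ctx (App e0 e1) = do                            -- [App]
      TFun t t' <- check ctx e0     -- e0 must have type t -> t'
      t1        <- check ctx e1     -- e1 must have type t
      if t == t1 then Just t' else Nothing
    check ctx (Lam x t e) = do                            -- [Abs]
      t' <- check ((x, t) : ctx) e  -- extend Γ with x : t
      Just (TFun t t')

For example, check [] (App (Lam "x" TInt (Var "x")) (Lit 3)) evaluates to Just TInt, mirroring the little proof-tree exercise above.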

Note: I probably made some mistakes in this explanation and would love it if somebody would point them out. I'll actually be covering this in class in a couple of weeks, so I'll be more confident then :P.

