A commentary on 24 days of GHC extensions, part 3 (20 Dec 2014, augustss)<br />
It's time for some more Haskell opinions.<br />
<br />
<h2>Day 13: Functional Dependencies</h2>See day 12.<br />
<p>The addition of functional dependencies to multi-parameter type classes all of a sudden opened up the possibility of doing type level computations that had not been there before. The type level programming industry seems to have started with Thomas Hallgren's 2001 paper <a href="http://www.cse.chalmers.se/~hallgren/Papers/wm01.html">Fun with Functional Dependencies</a> (which got some inspiration from the type level numeric computation available in (classic) <a href="http://en.wikipedia.org/wiki/Bluespec,_Inc.">Bluespec</a>).<br />
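<p>A hedged sketch of the kind of type-level computation this opened up, in the style of Hallgren's paper (the names <tt>Zero</tt>, <tt>Succ</tt>, <tt>Add</tt>, and <tt>Nat</tt> are mine for illustration, and the pragmas assume a modern GHC):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances, ScopedTypeVariables #-}

-- Peano numerals at the type level
data Zero
data Succ n

-- Addition as type-level logic programming: the fundep says that
-- a and b together determine c, so the compiler computes the sum.
class Add a b c | a b -> c where
  add :: a -> b -> c
  add _ _ = undefined   -- only the types matter

instance Add Zero b b
instance Add a b c => Add (Succ a) b (Succ c)

-- Reflect a type-level numeral back to an Int
class Nat n where
  nat :: n -> Int
instance Nat Zero where
  nat _ = 0
instance Nat n => Nat (Succ n) where
  nat _ = 1 + nat (undefined :: n)

one :: Succ Zero
one = undefined
two :: Succ (Succ Zero)
two = undefined

main :: IO ()
main = print (nat (add one two))   -- 3
```

The fundep <tt>a b -> c</tt> is what makes this a computation rather than a mere relation; without it the result type would be ambiguous.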
<br />
<h2>Day 14: Deriving</h2>I don't know the history of deriving <tt>Functor</tt> et al. I believe Simon PJ added it after enough nagging.<br />
<p>I do have an intense dislike for <tt>GeneralizedNewtypeDeriving</tt> as implemented in GHC. As illustrated by ghc bug <a href="https://ghc.haskell.org/trac/ghc/ticket/1496">1496</a> (found by Stefan O'Rear who was 14-15 at the time) it leads to an inconsistent type system. Instead of removing the root cause of the problem GHC HQ has decided to add a rather ingenious, but grotesque, solution where type variables are given <i>roles</i>. It's complex and it's going to get worse. It also brings its own problems (e.g., <tt>join</tt> can no longer be made a method in the monad class. WAT?!?!).<br />
<p>In the two compilers where I've implemented GND I've chosen a different route. The compiler tries to construct a proper instance declaration for the derived class. If it can be constructed and type checks then all is well. If it doesn't, there's something tricky going on and the instance has to be written by hand. I claim this covers 99% of all GND uses. And it makes me sleep better at night having had the instance declaration approved by the type checker.<br />
<p>I also prefer to make the compiler recognise and remove the identity functions that <tt>newtype</tt> gives rise to, rather than forcing the user to explicitly use the new <tt>Coercible</tt> class.<br />
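<p>For reference, a minimal example of what GND does (the <tt>Age</tt> type is made up for illustration): the <tt>Num</tt> instance is lifted wholesale from the underlying <tt>Int</tt>.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Show, Eq, Ord use ordinary deriving; Num is obtained via GND,
-- reusing Int's instance through the newtype's identity coercion.
newtype Age = Age Int deriving (Show, Eq, Ord, Num)

main :: IO ()
main = print (Age 1 + Age 2)   -- Age 3
```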
<br />
<h2>Day 15: Derive Generic</h2>The generic deriving extension is a clever idea from Clean, where all the meta-data about a data type (constructor names, types, etc) is encoded at the type level, and thus available at compile time.<br />
<p>The details of this encoding are rather gruesome, but I think it could be a lot nicer using the new type level strings, type level numbers, and type families. Time for a small redesign?<br />
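<p>A small sketch of how the metadata shows up in GHC's current <tt>GHC.Generics</tt> (the <tt>Color</tt> type is illustrative): the data type's name is recovered from type-level metadata, not stored in the value.

```haskell
{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics

data Color = Red | Green | Blue deriving (Show, Generic)

-- datatypeName reads the D1 metadata of the generic representation
typeName :: String
typeName = datatypeName (from Red)

main :: IO ()
main = putStrLn typeName   -- prints "Color"
```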
<br />
<h2>Day 16: Overloaded Strings</h2>Overloaded strings were another extension I added to GHC because of my work on the Paradise DSL. Since Paradise is deeply embedded, using DSL-level strings would be ugly unless the string literals were overloaded.<br />
<p>It was quite easy to add to GHC. The least easy part was adapting the defaulting rules to strings as well.<br />
<p>Worth noting is that pattern matching on overloaded strings turns into an equality comparison (just as with numeric literals), and so requires the string type to be an instance of <tt>Eq</tt>, which is not a superclass of <tt>IsString</tt>. Oh, and the name <tt>IsString</tt> for the class with the <tt>fromString</tt> method was just temporary, since I couldn't think of a good name. I guess it stuck.<br />
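<p>A minimal sketch of how the pieces fit together (the <tt>Name</tt> type and <tt>greet</tt> are made-up illustrations, not from Paradise):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.String (IsString (..))

-- A stand-in for a DSL-level string type
newtype Name = Name String deriving (Eq, Show)

instance IsString Name where
  fromString = Name

greet :: Name -> String
greet (Name n) = "hello " ++ n

main :: IO ()
main = putStrLn (greet "world")   -- the literal desugars to fromString "world"
```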
<br />
<h2>Day 17: Rank-N Types</h2>Rank-n types are, of course, a rather old idea, but it took a while before they made it into Haskell. At the Haskell Workshop in La Jolla in 1995 Mark Jones had a very nice paper, <a href="http://web.cecs.pdx.edu/~mpj/pubs/haskwork95.html">From Hindley-Milner Types to First-Class Structures</a>. It shows how to add a form of records to Haskell, where the record components can be polymorphic. This has the same expressivity as current rank-n types, but looks a little different.<br />
<p>For example, the archetypical Haskell rank-2 function has this type now<br />
<pre>runST :: (forall s . ST s a) -> a
</pre>With records with polymorphic components we have to introduce a new record type to write this. With modern Haskell it would look like<br />
<pre>data ArgST a = ArgST (forall s . ST s a)
runST :: ArgST a -> a
</pre>Or with Mark's suggested anonymous record types<br />
<pre>runST :: { arg :: forall s . ST s a } -> a
</pre>The 1996 release of HBC contained this kind of polymorphic components, but it did not have the anonymous records. GHC got rank-n (or was it rank-2?) types in 2000. The GHC rank-n types are somewhat more convenient, but require function type signatures, which the hbc version did not.<br />
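<p>A small runnable sketch of a rank-2 function in today's GHC (the function <tt>applyToBoth</tt> is made up); note that, as mentioned, the type signature is required:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Rank-2: the function argument must itself be polymorphic,
-- so it can be applied at both Int and Bool.
applyToBoth :: (forall a . a -> a) -> (Int, Bool) -> (Int, Bool)
applyToBoth f (i, b) = (f i, f b)

main :: IO ()
main = print (applyToBoth id (1, True))   -- (1,True)
```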
<p>At the La Jolla ICFP we had a Haskell committee meeting about the next Haskell release, 1.3. If memory serves me, after a lot of discussions we decided that Haskell was going to get record types, something along the lines of what Mark's paper suggested, but I think they were going to be named. The records would support polymorphic components (thus general universal quantification) and data types would support existentials. But after we split up, I guess the czar got cold feet and we ended up with the current Haskell records (I didn't really like them when I first saw them, and I still don't). Haskell could have been rather different if we had followed through on the decision (I'm not sure about better, but at least different).<br />
<br />
<h2>Day 18: Existential Types</h2>The idea of existential types is old, but the first time I saw it in an ML-like programming language was Läufer&Odersky's <a href="http://webpages.cs.luc.edu/~laufer/papers/ml92.pdf">An Extension of ML with First-Class Abstract Types</a> at the ML workshop in San Francisco in June 1992. I promptly implemented it and it was in version 0.998.1 of HBC in July 1992.<br />
<p>Adding existentials to ML is very easy; it's just a matter of tweaking the type checker to accept some more programs. Adding it to Haskell is a little more involved, since the existential variables may appear in a context, and so a value of an existential type has to be a pair of the dictionaries and the actual value (ML only needs the actual value). The dictionaries are needed to use overloaded operations on the value.<br />
<p>What about syntax? I did it exactly like Läufer&Odersky, namely free variables in a data declaration are existentially quantified. E.g., <tt>data E = C a (a -> Int)</tt>, since the type variable <tt>a</tt> does not appear in the head of the data declaration it is existentially quantified.<br />
<p>Some people (I remember Mark Jones and Simon PJ) argued fiercely against this, saying that it would be too easy to make mistakes and forget a type variable and get it existentially quantified. So when GHC finally got them they looked like <tt>data E = forall a . C a (a -> Int)</tt>, using the confusing (but entirely logical!) <tt>forall</tt> to indicate an existential type. But then GADTs came along, and now it's suddenly no longer important to use the (pointless) <tt>forall</tt>, i.e., now we can write <tt>data E where { C :: a -> (a -> Int) -> E }</tt>. Oh, well.<br />
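<p>A small sketch of the point about dictionaries above: the <tt>Show</tt> dictionary is packed into the existential value together with the payload (the names are made up):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- The Show dictionary travels inside the value with the payload
data Showable = forall a . Show a => MkShowable a

render :: Showable -> String
render (MkShowable x) = show x   -- uses the packed-in dictionary

main :: IO ()
main = mapM_ (putStrLn . render) [MkShowable (1 :: Int), MkShowable True]
```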
<p>Some people say that existential types are an anti-pattern in Haskell. I don't agree. They have a few perfectly legitimate uses; the problem is just that if you come from an OO background you'll probably reach for existentials in all kinds of cases where there are better solutions.<br />
<br />
A commentary on 24 days of GHC extensions, part 2 (19 Dec 2014, augustss)<br />
<h2>Day 7: Type Operators</h2>I understand the usefulness of the type operator GHC extension, but I don't like it. For value identifiers any operator not starting with a '<tt>:</tt>' is treated the same way as identifiers starting with a lower case letter. This is how it used to be on the type level too. But with the latest version of the <tt>TypeOperators</tt> extension these operators now behave like upper case identifiers and can be used to name types. An ugly wart, IMO.<br />
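<p>For concreteness, this is what the extension allows (a made-up example): <tt>+</tt> names a type constructor, just as a capitalized name would.

```haskell
{-# LANGUAGE TypeOperators #-}

-- A sum type named by an operator
data a + b = L a | R b deriving Show

swap :: a + b -> b + a
swap (L a) = R a
swap (R b) = L b

main :: IO ()
main = print (swap (L 1 :: Int + Bool))   -- R 1
```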
<br />
<h2>Day 8: Recursive do</h2>This extension stems from Levent Erkök's PhD thesis <a href="https://sites.google.com/site/leventerkok/erkok-thesis.pdf">Value Recursion in Monadic Computations</a>. The motivation was to be able to describe circuits with feedback using monadic notation.<br />
<p>There are actually two ways to get recursive <tt>do</tt>-notation. You can either use the <tt>rec</tt> keyword inside the <tt>do</tt> or you can use the keyword <tt>mdo</tt> instead of <tt>do</tt>. Why <tt>mdo</tt>, you ask? It should really be <tt>μdo</tt>, where μ is the recursion operator. But people at GHC HQ are scared of Unicode, so instead we got <tt>mdo</tt>.<br />
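<p>A minimal sketch of <tt>mdo</tt> in action, tying a cyclic structure together (the <tt>Node</tt> type is made up; the circuits-with-feedback use case is the same idea):

```haskell
{-# LANGUAGE RecursiveDo #-}
import Data.IORef

data Node = Node Int (IORef Node)

-- Build two mutually referencing nodes; 'b' is used before it is
-- bound, which mdo resolves via MonadFix (value recursion).
firstLabel :: IO Int
firstLabel = mdo
  a <- newIORef (Node 0 b)
  b <- newIORef (Node 1 a)
  Node x _ <- readIORef a
  return x

main :: IO ()
main = firstLabel >>= print   -- prints 0
```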
<br />
<h2>Day 9: Nullary Type Classes</h2>Once you have accepted multi-parameter type classes then nullary type classes are a natural consequence; zero is also a number, after all.<br />
<p>In 1994 I worked on the <a href="http://csg.csail.mit.edu/projects/languages/ph.shtml">pH</a> language when I visited MIT. The acronym pH stands for "parallel Haskell", which is what it is. It had some imperative features that were not in a monad. The ST monad was just being invented at the time, so we were trying out other things. To keep track of what parts of the program used imperative features I used a nullary type class <tt>Imperative</tt> that tainted any code that used assignment et al. (As far as I know, this is the earliest use of a nullary type class.) When mixing imperative code and pure code it is important to not be too polymorphic. ML-like languages use the <a href="http://en.wikipedia.org/wiki/Value_restriction">value restriction</a> to accomplish this. Since Haskell's monomorphism restriction is essentially the same as the value restriction, having the <tt>Imperative</tt> type class neatly accomplished the value restriction for pH.<br />
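<p>A sketch of what such a taint class could look like in today's GHC, which accepts nullary classes under <tt>MultiParamTypeClasses</tt> (the names are mine for illustration, not pH's actual code):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
import Data.IORef

-- A nullary class: no parameters, no methods; a pure "taint" marker
class Imperative
instance Imperative

-- Assignments carry the Imperative constraint in their type
assign :: Imperative => IORef a -> a -> IO ()
assign = writeIORef

main :: IO ()
main = do
  r <- newIORef (0 :: Int)
  assign r 42
  readIORef r >>= print   -- prints 42
```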
<br />
<h2>Day 10: Implicit Parameters</h2>The implicit parameter started as an attempt to simplify Haskell overloading. Haskell overloading, when implemented by dictionary passing, really has two parts: first there is a way to implicitly pass dictionaries around, and second, dictionaries are constructed automatically by using the instance declarations as a logical inference system.<br />
<p>The implicit parameters do the automagic passing of parameters, and are thus half of what is needed for overloading. But they were never used for overloading; instead it was discovered that they can be made to behave like dynamic binding, but with static types. This is quite cool! Even if one doesn't like dynamic binding (I don't).<br />
<p>The idea is presented in Erik Meijer et al.'s paper <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.9849">Implicit Parameters: Dynamic Scoping with Static Types</a>. It's worth reading.<br />
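<p>A small sketch of the dynamic-binding flavour (the names are made up): the <tt>?prefix</tt> parameter is bound at the call site, yet fully visible in the type.

```haskell
{-# LANGUAGE ImplicitParams #-}

-- The implicit parameter appears as a constraint in the signature
logMsg :: (?prefix :: String) => String -> String
logMsg s = ?prefix ++ s

main :: IO ()
main = do
  let ?prefix = "info: "
  putStrLn (logMsg "starting")   -- prints "info: starting"
```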
<br />
<h2>Day 11: Type Families</h2>Type families grew out of the need to have type classes with associated types. The latter is not strictly necessary, since it can be emulated with multi-parameter type classes, but it gives a much nicer notation in many cases. The same is true for type families; they can also be emulated by multi-parameter type classes. But MPTC gives a very logic-programming style of doing type computation, whereas type families (which are just type functions that can pattern match on their arguments) are like functional programming.<br />
<p>Using closed type families adds some extra strength that cannot be achieved by type classes. To get the same power from type classes we would need to add closed type classes. Which would be quite useful; this is what <a href="http://web.cecs.pdx.edu/~mpj/pubs/instancechains.pdf">instance chains</a> give you.<br />
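<p>A minimal closed type family (the name <tt>Flatten</tt> is made up): the equations are tried in order, top to bottom, which is exactly what open classes cannot do.

```haskell
{-# LANGUAGE TypeFamilies #-}
import Data.Proxy

-- Closed family: a later equation may overlap an earlier one
type family Flatten a where
  Flatten [a] = Flatten a
  Flatten a   = a

-- This definition type checks only because Flatten [[Int]] reduces to Int
check :: Proxy (Flatten [[Int]]) -> Proxy Int
check = id

main :: IO ()
main = print (check Proxy)   -- prints "Proxy"
```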
<br />
<h2>Day 12: Multi-Parameter Type Classes</h2>Even the original paper on <a href="http://dl.acm.org/citation.cfm?id=75283">Haskell type classes</a> mentions multiple parameters, so this idea has been with us from the start. But it's not present in Stefan Kaes's original <a href="http://link.springer.com/chapter/10.1007%2F3-540-19027-9_9">work on type classes</a>.<br />
<p>Unfortunately, multiple type class parameters very quickly lead to type ambiguities in the code, and we are forced to add type signatures. So MPTC was not really useful until Mark Jones introduced <a href="http://web.cecs.pdx.edu/~mpj/pubs/fundeps.html">functional dependencies</a>. Mark got the idea from database theory where the same idea is used to limit the relations in the database.<br />
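<p>A sketch of the classic situation (the <tt>Container</tt> class is made up): without the dependency <tt>c -> e</tt> the element type below would be ambiguous and a signature would be forced on every use.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- The fundep says the container type determines the element type
class Container c e | c -> e where
  empty  :: c
  insert :: e -> c -> c

instance Container [a] a where
  empty  = []
  insert = (:)

main :: IO ()
main = print (insert 1 (insert 2 empty) :: [Int])   -- [1,2]
```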
<br />
A commentary on 24 days of GHC extensions (15 Dec 2014, augustss)<br />
Ollie Charles has continued his excellent series of Christmas Haskell blogs. This year he talks about <a href="http://ocharles.org.uk/blog/pages/2014-12-01-24-days-of-ghc-extensions.html">24 Days of GHC Extensions</a>. His blog posts are an excellent technical introduction to various Haskell extensions. Reading them inspired me to write some non-technical comments about the various extensions, giving a little bit of history and personal comments about them.<br />
<br />
<h2>Day 1: View Patterns</h2>View patterns have a long history. As far as I know <i>views</i> were first suggested by Phil Wadler in 1987, <a href="http://dl.acm.org/citation.cfm?id=41653">Views: a way for pattern matching to cohabit with data abstraction</a>. As the title of the paper suggests, it was about being able to pattern match on abstract data types. Variations of Phil's suggestions have been implemented several times (I did it both in <a href="http://en.wikipedia.org/wiki/Lazy_ML">LML</a> and for Haskell in hbc). In all these early suggestions the conversion between the concrete type and the view type was implicit, whereas in the <tt>ViewPatterns</tt> extension the conversion is explicit.<br />
<p>The addition of view patterns was partly spurred by the fact that F# has a very neat way of introducing pattern matching for abstract types, called <a href="http://en.wikibooks.org/wiki/F_Sharp_Programming/Active_Patterns">active patterns</a>. Since Simon Peyton Jones is in the same research lab as Don Syme it's natural that Simon couldn't let F# have this advantage over Haskell. :)<br />
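<p>A small sketch of the explicit conversion in <tt>ViewPatterns</tt> (the example is made up): the expression to the left of <tt>-></tt> is applied to the scrutinee, and its result is what gets matched.

```haskell
{-# LANGUAGE ViewPatterns #-}
import Data.List (stripPrefix)

-- stripPrefix runs on the argument; Just/Nothing is matched
greet :: String -> String
greet (stripPrefix "Dr. " -> Just name) = "Hello, doctor " ++ name
greet name                              = "Hello, " ++ name

main :: IO ()
main = do
  putStrLn (greet "Dr. Who")   -- Hello, doctor Who
  putStrLn (greet "Ada")       -- Hello, Ada
```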
<br />
<h2>Day 2: Pattern Synonyms</h2>Pattern synonyms are a more recent addition. The first time I remember hearing the idea was in the paper <a href="http://people.cs.uchicago.edu/~jhr/papers/1992/tr-sml-const.pdf">Abstract Value Constructors</a>, and ever since then I have wished that Haskell would have something like that. Patterns were one of the few constructs in Haskell that could not be named and reused. The first time they were added to Haskell was in the <a href="https://personal.cis.strath.ac.uk/conor.mcbride/pub/she/">SHE</a> preprocessor.<br />
<p>At a Haskell hackathon in Cambridge, August 2011, Simon Peyton Jones, Simon Marlow and I met to hash out how they could be added to GHC. I wanted to go beyond simple pattern synonyms. The simplest kind is uni-directional synonyms, which can only occur in patterns. The simply bi-directional synonyms can be used both in patterns and expressions, but are limited in what they can do. The explicit bi-directional synonyms allow the full power of both patterns and expressions. With explicit bi-directional synonyms and view patterns we can finally implement <i>views</i> as envisioned by Phil, but now broken down into smaller reusable pieces. The result of our discussions at the hackathon can be found <a href="https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms">here</a>. But we were all too busy to implement any of it. All the hard implementation work, and fixing all the additional snags found, was done by <a href="http://gergo.erdi.hu/">Gergő Érdi</a>.<br />
<p>I find this a very exciting extension, and you can take it even further, like Associated Pattern Synonyms in class declarations.<br />
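<p>A sketch of the first two kinds of synonym in today's GHC syntax (the names are made up):

```haskell
{-# LANGUAGE PatternSynonyms #-}

-- Simply bi-directional: usable both as a pattern and as an expression
pattern Singleton :: a -> [a]
pattern Singleton x = [x]

-- Uni-directional: pattern only (there is no way to rebuild the tail)
pattern Head :: a -> [a]
pattern Head x <- (x : _)

describe :: [Int] -> String
describe (Singleton n) = "one: " ++ show n
describe (Head n)      = "starts with " ++ show n
describe _             = "empty"

main :: IO ()
main = do
  putStrLn (describe (Singleton 1))   -- one: 1
  putStrLn (describe [2, 3])          -- starts with 2
```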
<br />
<h2>Day 3: Record Wildcards</h2>The record wildcard extension allows you to <i>open</i> a record and get access to all the fields without using qualified names for them. The first time I encountered this idea was in Pascal, which has a <tt>with</tt>-statement. It looks like <tt>with <i>expr</i> do <i>stmt</i></tt> where the <tt><i>expr</i></tt> is a record value and inside the <tt><i>stmt</i></tt> all the fields of the record can be accessed directly. The construct later appeared in Ada as well, where it's called <tt>use</tt>.<br />
<p>So having something similar in Haskell doesn't seem so far-fetched. But it was actually the dual, namely constructing values using record wildcard notation, that inspired me to make this extension.<br />
<p>In 2006 I was developing the <a href="http://dl.acm.org/citation.cfm?doid=1411204.1411236">Paradise</a> EDSL which is a DSL embedded in Haskell for describing computations and user interfaces. I wanted a way to make the EDSL less verbose and that's why record wildcards came about. Here's a simple example to illustrate the idea. We want to be able to input a coordinate record.<br />
<pre>data Coord = Coord { x :: Double, y :: Double }
coord :: P Coord
coord = do
    x <- input 0.0
    y <- input 0.0
    return Coord{..}
</pre>
This says that <tt>x</tt> and <tt>y</tt> are input and that their default value is 0. We need to have an <tt>input</tt> for every field in the <tt>Coord</tt> record and at the end we need to construct the record itself. Without the record wildcard the construction would have been <tt>Coord{x=x,y=y}</tt>. This isn't too bad, but for hundreds of inputs it gets tedious.
<p>I made a quick and dirty implementation of this in GHC, but it was too ugly to get into the official release. Instead Dan Licata reimplemented (and extended) it under Simon PJ's supervision into what we have today.
<p>I'm actually quite ambivalent about this extension. It's incredibly convenient (especially in the pattern form), but it introduces new bound names without an explicit mention of the name in the binder. This makes it harder to read code. The same objection can be made about the Haskell import declaration when it lacks an import list.
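<p>A minimal sketch of the pattern form mentioned above, reusing the <tt>Coord</tt> type from the example (the function <tt>norm</tt> is made up):

```haskell
{-# LANGUAGE RecordWildCards #-}

data Coord = Coord { x :: Double, y :: Double }

-- Coord{..} binds x and y without mentioning them explicitly
norm :: Coord -> Double
norm Coord{..} = sqrt (x * x + y * y)

main :: IO ()
main = print (norm Coord { x = 3, y = 4 })   -- 5.0
```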
<h2>Day 4: Bang Patterns</h2>I don't have much to say about <tt>BangPatterns</tt>. One day they simply appeared in a new GHC version. :)
<p>The <a href="http://en.wikipedia.org/wiki/Clean_%28programming_language%29">Clean</a> language has long had something similar, but in Clean the annotation goes on the type instead of on the pattern. In Haskell it seems like a nice duality to have it on the pattern, since there are also lazy patterns, introduced by <tt>~</tt>.
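<p>A minimal sketch of a bang pattern doing its usual job on an accumulator (the example is made up):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The bang forces acc at each step, so no chain of thunks builds up
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

main :: IO ()
main = print (sumTo 1000000)   -- 500000500000
```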
<h2>Day 5: Rebindable Syntax</h2>The rebindable syntax extension is almost like going back to the older Haskell reports. They lacked an exact specification of where the identifiers introduced by desugaring were bound. Of course, it was resolved so that they always refer to the Prelude identifiers. But when experimenting with other preludes it's very useful to be able to override this. Which is exactly what <tt>RebindableSyntax</tt> allows you to do.
<p>In my opinion, this extension should be a little different. It ought to be possible to give a module name for where the names should be found. So normally this would be Prelude, but I'd like to be able to say <tt>RebindableSyntax=MyPrelude</tt>, and then all names introduced by desugaring will be found in MyPrelude (even if they are not in scope).
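<p>A sketch of rebinding do-notation to hand-rolled list-monad operators (the example is made up; note that <tt>RebindableSyntax</tt> implies <tt>NoImplicitPrelude</tt>, hence the explicit import):

```haskell
{-# LANGUAGE RebindableSyntax #-}
import Prelude hiding ((>>=), (>>), return)

-- do-notation now desugars to whatever (>>=), (>>), return are in scope
(>>=) :: [a] -> (a -> [b]) -> [b]
xs >>= f = concatMap f xs

(>>) :: [a] -> [b] -> [b]
xs >> ys = xs >>= \_ -> ys

return :: a -> [a]
return x = [x]

pairs :: [(Int, Int)]
pairs = do
  x <- [1, 2]
  y <- [10, 20]
  return (x, y)

main :: IO ()
main = print pairs   -- [(1,10),(1,20),(2,10),(2,20)]
```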
<h2>Day 6: List comprehensions</h2>This bundles together a number of list comprehension extensions.
<p>First, <tt>MonadComprehensions</tt>, which allows list comprehension syntax for any monad. This is simply going back to Haskell 1.4, where monad comprehensions were introduced. The monad comprehension syntax had been used by Phil Wadler before that. Monad comprehensions were removed after 1.4, because they were deemed too confusing for beginners. They were brought back to GHC by George Giorgidze et al.
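<p>A minimal sketch of a monad comprehension over <tt>Maybe</tt> (the example is made up): the comprehension desugars to <tt>(>>=)</tt> and <tt>return</tt> of whatever monad fits.

```haskell
{-# LANGUAGE MonadComprehensions #-}

-- Comprehension syntax in the Maybe monad
both :: Maybe a -> Maybe b -> Maybe (a, b)
both ma mb = [ (a, b) | a <- ma, b <- mb ]

main :: IO ()
main = do
  print (both (Just 1) (Just 'x'))              -- Just (1,'x')
  print (both (Just 1) (Nothing :: Maybe Char)) -- Nothing
```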
<p>The <tt>ParallelListComp</tt> extension allows zip-like operations to be expressed in the comprehension. The idea originates from <a href="http://galois.com/">Galois</a> in the design of <a href="http://en.wikipedia.org/wiki/Cryptol">Cryptol</a>. John Launchbury, Jeff Lewis, and Thomas Nordin were turning crypto networks into recursive equations and they wanted a nicer notation than using <tt>zipWith</tt>. (Thanks to Andy Adams-Moran for the history lesson.)
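<p>A small sketch of a parallel comprehension (the example is made up): the two branches advance in lockstep, just like <tt>zipWith (+)</tt>.

```haskell
{-# LANGUAGE ParallelListComp #-}

-- The | separates branches that are zipped, not nested
sums :: [Int]
sums = [ x + y | x <- [1, 2, 3] | y <- [10, 20, 30] ]

main :: IO ()
main = print sums   -- [11,22,33]
```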
<p>The origins of <tt>TransformListComp</tt> are simple. It's just a way to bring more of the power of SQL into list comprehensions. It's an extension that introduces far too much special syntax for my taste, but the things you can do are quite neat.<br />
<br />
Haskell error reporting with locations, update (5 Apr 2014, augustss)<br />
Since some people (I'm among them) dislike impure features in Haskell I thought I'd present a slight variation on the error location feature that is "pure".<br />
<p>First, the <tt>__LOCATION__</tt> variable gets an abstract type. So<br />
<pre> data Location
__LOCATION__ :: Location
</pre>It's defined in the <tt>Prelude</tt> and always in scope. The type cannot be compared, shown, or anything. There's just one thing that can be done, namely:<br />
<pre> extractLocation :: Location -> IO String
</pre>The <tt>error</tt> function needs a new exception to throw<br />
<pre> data ErrorCallLoc = ErrorCallLoc Location String
{-# LOCATIONTRANSPARENT error #-}
error :: String -> a
error s = throw (ErrorCallLoc __LOCATION__ s)
</pre>This means that the location string cannot be used when we throw the error. But it can be used where the error is caught, since this can only be done in the <tt>IO</tt> monad.<br />
<p>Under the hood everything is just as before; <tt>Location</tt> is just a string. It just can't be manipulated except in the <tt>IO</tt> monad, so we can pretend it's pure.<br />
<pre> newtype Location = Location String
extractLocation (Location s) = return s
</pre>It now looks a lot like <a href="http://www.haskell.org/pipermail/haskell-cafe/2013-February/106617.html">Michael Snoyman's proposal</a>.<br />
Haskell error reporting with locations (4 Apr 2014, augustss)<br />
Error reporting in GHC is not always the nicest. For example, I often develop code by using <tt>undefined</tt> as a placeholder for code I've not written yet. Here's a simple example:<br />
<pre>import System.Environment

main = do
    args <- getArgs
    if null args then
        undefined
      else
        undefined
</pre>
Running this looks like this:
<pre>$ ./H
H: Prelude.undefined
</pre>Which <tt>undefined</tt> caused that? Looking at the error message we have no idea. Wouldn't it be nice with some location information?
<p>We can actually get location information by using <tt>Control.Exception.assert</tt>:
<pre>import Control.Exception(assert)
import System.Environment

main = do
    args <- getArgs
    if null args then
        assert False undefined
      else
        assert False undefined
</pre>
Now running it is much more informative:
<pre>$ ./H
H: H.hs:7:9-14: Assertion failed
</pre>How is <tt>assert</tt> able to report the location? If we dig deep enough we discover that it's because the ghc compiler contains a special hack to recognize this function and give it location information.
<h2>A generalized hack</h2>In a Haskell compiler that I've implemented I've taken this compiler hack and extended it so it can be used for any function. It comes in two parts, location information and location transparent definitions.
<h3><tt>__LOCATION__</tt></h3>The <tt>__LOCATION__</tt> identifier is always defined and utterly magical. Its value is a string that describes the location of that very identifier. This is the very opposite of a referentially transparent name. In fact its value varies with where it is placed in the code! So it's definitely not for purists. But I'm a practical man, so I sometimes have to resort to the ugliness of reality. And in reality we want to report locations in errors.
<p>Enough philosophy, here's an example:
<pre>main = do
    print __LOCATION__
    print __LOCATION__
</pre>And running it prints:
<pre>"test/Test.hs:2:11"
"test/Test.hs:3:13"
</pre>And to illustrate the impurity:
<pre>main = do
    let loc = __LOCATION__
    print loc
    print loc
</pre>And running this:
<pre>"test/Test.mu:2:15"
"test/Test.mu:2:15"
</pre><h3>Location transparency</h3>The <tt>__LOCATION__</tt> identifier gives the location of itself. This is of little use on its own. Imagine the definition we could give for <tt>undefined</tt>. Somewhere in the <tt>Prelude</tt> module it could say something like
<pre>undefined = error ("undefined: " ++ __LOCATION__)
</pre>But if we use this all that it will tell us is where the definition of <tt>undefined</tt> is, not where it was used.
<p>To get the point of use instead of the definition I've introduced location transparent definitions. In a location transparent definition the <tt>__LOCATION__</tt> identifier will not refer to its own position, but to the position of the reference to the definition. Location transparency is introduced with a pragma.
<pre>{-# LOCATIONTRANSPARENT undefined #-}
undefined = error ("undefined: " ++ __LOCATION__)
</pre>With this definition our initial example looks like this when we run it:
<pre>undefined: test/H.hs:6:9
</pre>In fact, the real definition of <tt>undefined</tt> doesn't look like that. The <tt>__LOCATION__</tt> identifier is only used in the definition of <tt>error</tt>, so it looks something like this:
<pre>{-# LOCATIONTRANSPARENT error #-}
error :: String -> a
error s = throw (ErrorCall (__LOCATION__ ++ ": " ++ s))
{-# LOCATIONTRANSPARENT undefined #-}
undefined = error "undefined"
</pre>Since both <tt>error</tt> and <tt>undefined</tt> are transparent any use of <tt>undefined</tt> will be reported with the location of the use.
<p>Furthermore, we can make a few more functions location transparent, e.g.,
<pre>{-# LOCATIONTRANSPARENT head #-}
head :: [a] -> a
head [] = error "Empty list"
head (x:xs) = x
</pre>A simple example:
<pre>main = putStr (head [])
</pre>Which will print:
<pre>test/Head.hs:1:16: Empty list
</pre>which is the location where <tt>head</tt> was called.
<h3>Implementation</h3>There are different ways to implement this feature, and I'm going to sketch two of them.
<p>First: Every function that has the <tt>LOCATIONTRANSPARENT</tt> pragma will be inlined at the point of use, and the <tt>__LOCATION__</tt> identifier in the inlined code will be updated to reflect the call site. The definitions must be processed in a bottom-up fashion for this to work. It's fairly simple to implement, but will cause some code bloat due to inlining.
<p>Second: Every function that has <tt>LOCATIONTRANSPARENT</tt> pragma will be rewritten (by the compiler) to have an extra location argument, and each use of this function will be rewritten to pass in the current location.
For example (using <tt>$$f</tt> for the location version of <tt>f</tt>):
<pre>main = putStr ($$head __LOCATION__ [])
$$head __LOCATION__ [] = $$error __LOCATION__ "Empty list"
$$head __LOCATION__ (x:xs) = x
$$error __LOCATION__ s = throw (ErrorCall (__LOCATION__ ++ ": " ++ s))
</pre>This should be fairly straightforward to implement, but I've not tried it. (It's somewhat like dynamic binding, so maybe ghc could reuse that mechanism for locations.)
<p>And, of course, the global <tt>__LOCATION__</tt> identifier has to be recognized by the compiler and replaced by a string that is its location.
<h3>Conclusion</h3>I implemented the <tt>__LOCATION__</tt> hack quite a while ago, and I like the much improved reporting of error locations. I hope someone will add it to ghc as well.<br />
A small Haskell extension (3 Apr 2014, augustss)<h2>
The extension</h2>
In Haskell you can give a type to an expression by writing <i>expr </i><span style="font-family: Courier New, Courier, monospace;">::</span> <i>type</i>. To an untrained eye the <span style="font-family: Courier New, Courier, monospace;">::</span> looks just like an infix operator, even if it is described as a special syntactical construct in the Haskell report. But let's think about it as an infix operator for a moment.<br />
<br />
For an infix operator you can form a section, i.e., a use of the operator with one operand left out. For instance <span style="font-family: Courier New, Courier, monospace;">(* 2)</span> leaves out the first operand, and Haskell defines this to be the same as <span style="font-family: Courier New, Courier, monospace;">(\ x -> x * 2)</span>. Regarding <span style="font-family: Courier New, Courier, monospace;">::</span> as an operator we should be able to write <span style="font-family: Courier New, Courier, monospace;">(:: <i>type</i>)</span> and it should have the obvious meaning <span style="font-family: Courier New, Courier, monospace;">(\ x -> x :: <i>type</i>)</span>.<br />
<br />
I suggest, and I plan on sending this to the <i>haskell-prime</i> mailing list, that Haskell should adopt this small extension.<br />
<br />
<h2>
Why?</h2>
<div>
First, the extension is very light weight and has almost no extra intellectual weight for anyone learning Haskell. I'd argue it makes the language simpler because it allows <span style="font-family: Courier New, Courier, monospace;">::</span> to be treated more like an infix operator. But without use cases this would probably not be enough of an argument.</div>
<h4>
Example 1</h4>
<div>
We want to make a function, <span style="font-family: Courier New, Courier, monospace;">canonDouble</span>, that takes a string representing a <span style="font-family: Courier New, Courier, monospace;">Double</span> and changes it to the standard Haskell string representing this <span style="font-family: Courier New, Courier, monospace;">Double</span>. E.g. <span style="font-family: Courier New, Courier, monospace;">canonDouble "0.1e1" == "1.0"</span>. A first attempt might look like this:</div>
<div>
<span style="font-family: inherit;"><br /></span></div>
<span style="font-family: Courier New, Courier, monospace;"> canonDouble :: String -> String</span><br />
<span style="font-family: Courier New, Courier, monospace;"> canonDouble = show . read -- WRONG!</span><br />
<div>
<br /></div>
<div>
This is, of course, wrong since the compiler cannot guess that the type between <span style="font-family: Courier New, Courier, monospace;">read</span> and <span style="font-family: Courier New, Courier, monospace;">show</span> should be a <span style="font-family: Courier New, Courier, monospace;">Double</span>. We can convey this type information in different ways, e.g.:<br />
<div>
<span style="font-family: inherit;"><br /></span></div>
<span style="font-family: Courier New, Courier, monospace;"> canonDouble :: String -> String</span><br />
<span style="font-family: Courier New, Courier, monospace;"> canonDouble = show . asDouble . read</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> where asDouble :: Double -> Double</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> asDouble x = x</span><br />
<div>
<br /></div>
<div>
This is somewhat clumsy. Using my proposed extension we can instead write:</div>
<div>
<div>
<span style="font-family: inherit;"><br /></span></div>
<span style="font-family: Courier New, Courier, monospace;"> canonDouble :: String -> String</span><br />
<span style="font-family: Courier New, Courier, monospace;"> canonDouble = show . (:: Double) . read</span></div>
<div>
<div>
<br /></div>
</div>
<div>
This has the obvious meaning, and succinctly describes what we want.</div>
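Since the proposed section is just a typed identity function, the same pipeline can already be written in standard Haskell (no extension needed) with an explicit <tt>id</tt>; a small runnable sketch:

```haskell
-- Standard Haskell today: (id :: Double -> Double) plays the role
-- that the proposed section (:: Double) would play.
canonDouble :: String -> String
canonDouble = show . (id :: Double -> Double) . read

main :: IO ()
main = putStrLn (canonDouble "0.1e1")  -- prints 1.0
```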
<h4>
Example 2</h4>
<div>
In ghc 7.8 there is a new, better implementation of <span style="font-family: Courier New, Courier, monospace;">Data.Typeable</span>. It used to be (before ghc 7.8) that to get a <span style="font-family: Courier New, Courier, monospace;">TypeRep</span> for some type you would have to have a value of that type. E.g., <span style="font-family: Courier New, Courier, monospace;">typeOf True</span> gives the <span style="font-family: Courier New, Courier, monospace;">TypeRep</span> for the <span style="font-family: Courier New, Courier, monospace;">Bool</span> type. If we don't have a value handy of the type, then we will have to make one, e.g., by using <span style="font-family: Courier New, Courier, monospace;">undefined</span>. So we could write <span style="font-family: Courier New, Courier, monospace;">typeOf (undefined :: Bool)</span>.</div>
<div>
<br /></div>
<div>
This way of using <span style="font-family: Courier New, Courier, monospace;">undefined</span> is rather ugly, and relies on non-strictness to work. Ghc 7.8 adds a new, cleaner way of doing it.</div>
<div>
<br /></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> typeRep :: proxy a -> TypeRep</span></div>
<div>
<br /></div>
<div>
The typeRep function does not need an actual value, but just a proxy for the value. A common proxy is the <span style="font-family: Courier New, Courier, monospace;">Proxy</span> type from <span style="font-family: Courier New, Courier, monospace;">Data.Proxy</span>:</div>
<div>
<br /></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> data Proxy a = Proxy</span></div>
<div>
<br /></div>
<div>
Using this type we can now get the <span style="font-family: Courier New, Courier, monospace;">TypeRep</span> of a <span style="font-family: Courier New, Courier, monospace;">Bool</span> by writing <span style="font-family: Courier New, Courier, monospace;">typeRep (Proxy :: Proxy Bool)</span>. Note that in the type signature of <span style="font-family: Courier New, Courier, monospace;">typeRep</span> the <span style="font-family: Courier New, Courier, monospace;">proxy</span> is a type variable. This means we can use other values as proxies, e.g., <span style="font-family: Courier New, Courier, monospace;">typeRep ([] :: [Bool])</span>.</div>
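To make this concrete, here is a runnable sketch (ghc 7.8 or later, using <tt>Data.Typeable</tt> and <tt>Data.Proxy</tt>); both proxy shapes denote the same <tt>TypeRep</tt>:

```haskell
import Data.Typeable (TypeRep, typeRep)
import Data.Proxy (Proxy (..))

-- Two different proxies for Bool: the dedicated Proxy type, and an
-- empty list whose element type is Bool.
boolRep1, boolRep2 :: TypeRep
boolRep1 = typeRep (Proxy :: Proxy Bool)
boolRep2 = typeRep ([] :: [Bool])

main :: IO ()
main = print (boolRep1 == boolRep2)  -- True
```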
<div>
<br /></div>
<div>
We can in fact use anything as a proxy that has a structure that unifies with <span style="font-family: Courier New, Courier, monospace;">proxy a</span>. For instance, if we want a proxy for the type <span style="font-family: Courier New, Courier, monospace;">T</span> we could use <span style="font-family: Courier New, Courier, monospace;">T -> T</span>, which is the same as <span style="font-family: Courier New, Courier, monospace;">(->) T T</span>. The <span style="font-family: Courier New, Courier, monospace;">(->) T</span> part makes up the <span style="font-family: Courier New, Courier, monospace;">proxy</span> and the last <span style="font-family: Courier New, Courier, monospace;">T</span> makes up the <span style="font-family: Courier New, Courier, monospace;">a</span>.</div>
<div>
<br /></div>
<div>
The extension I propose provides an easy way to write a function of type <span style="font-family: Courier New, Courier, monospace;">T -> T</span>, just write <span style="font-family: Courier New, Courier, monospace;">(:: T)</span>. So to get a <span style="font-family: Courier New, Courier, monospace;">TypeRep</span> for <span style="font-family: Courier New, Courier, monospace;">Bool</span> we can simply write <span style="font-family: Courier New, Courier, monospace;">typeRep (:: Bool)</span>. Doesn't that look (deceptively) simple?</div>
<div>
<br /></div>
<div>
In fact, my driving force for coming up with this language extension was to get an easy and natural way to write type proxies, and I think using <span style="font-family: Courier New, Courier, monospace;">(:: T)</span> for a type proxy is as easy and natural as it gets (even if the reason it works is rather peculiar).</div>
<div>
<br /></div>
<h2>
Implementation</h2>
<div>
I've implemented the extension in one Haskell compiler; it was very easy to add and it works as expected. Since it was so easy, I'll implement it for ghc as well, and the ghc maintainers can decide if they want to merge it. I suggest making this new feature available under the language extension name <span style="font-family: Courier New, Courier, monospace;">SignatureSections</span>.</div>
</div>
<div>
<br /></div>
<h2>
Extensions</h2>
<div>
Does it make sense to do a left section of <span style="font-family: Courier New, Courier, monospace;">::</span>? I.e., does <span style="font-family: Courier New, Courier, monospace;">(expr ::)</span> make sense? In current Haskell that does not make sense, since it would be an expression that lacks an argument that is a type. Haskell doesn't currently allow explicit type arguments, but if it ever will this could be considered.</div>
<div>
<br /></div>
<div>
With the definition that <span style="font-family: Courier New, Courier, monospace;">(:: T)</span> is the same as <span style="font-family: Courier New, Courier, monospace;">(\ x -> x :: T)</span> any use of quantified or qualified types as <span style="font-family: Courier New, Courier, monospace;">T</span> will give a type error. E.g., <span style="font-family: Courier New, Courier, monospace;">(:: [a])</span>, which is <span style="font-family: Courier New, Courier, monospace;">(\ x -> x :: [a])</span>, is a type error. You could imagine a different desugaring of <span style="font-family: Courier New, Courier, monospace;">(:: T)</span>, namely <span style="font-family: Courier New, Courier, monospace;">(id :: T -> T)</span>. Now <span style="font-family: Courier New, Courier, monospace;">(:: [a])</span> desugars to <span style="font-family: Courier New, Courier, monospace;">(id :: [a] -> [a])</span> which is type correct. In general, we have to keep quantifiers and qualifiers at the top, i.e., <span style="font-family: Courier New, Courier, monospace;">(:: forall a . a)</span> turns into <span style="font-family: Courier New, Courier, monospace;">(id :: forall a . a -> a)</span>.</div>
<div>
<br /></div>
<div>
Personally, I'm not convinced this more complex desugaring is worth the extra effort.</div>
https://augustss.blogspot.com/2014/04/a-small-haskell-extension.htmlnoreply@blogger.com (augustss)5tag:blogger.com,1999:blog-25541020.post-2698087297602282303Sat, 09 Jul 2011 17:54:00 +00002011-07-09T18:54:06.533+01:00<h2>
Impredicative polymorphism, a use case</h2>
In a recent <a href="http://stackoverflow.com/questions/6622524/why-is-haskell-sometimes-referred-to-as-best-imperative-language/6622857#6622857">
question on stackoverflow</a> I made a comment about how Haskell can
be considered a good imperative language because you can define
abstractions to make it convenient. When I was going to make my point
by implementing a simple example of it I found that what I wanted to
do no longer works in ghc-7.0.4. Here's a simple example of what I
wanted to do (which works in ghc-6.12.3). It's a simple library that
makes Haskell code look a bit like Python.<br />
<br />
<pre>{-# LANGUAGE ExtendedDefaultRules, TupleSections #-}
module Main where
import qualified Prelude
import Boa

last_elt(xs) = def $: do
    assert xs "Empty list"
    lst <- var xs                -- Create a new variable, lst
    ret <- var (xs.head)
    while lst $: do
        ret #= lst.head          -- Assign variable ret
        lst #= lst.tail
    return ret

first_elt(xs) = def $: do
    l <- var xs
    l.reverse                    -- Destructive reverse
    return (last_elt(l))

factorial(n) = def $: do
    assert (n>=0) "Negative factorial"
    ret <- var 1
    i <- var n
    while i $: do
        ret *= i
        i -= 1
    return ret

test = def $: do
    print "Hello"
    print ("factorial 10 =", factorial(10))

main = do
    test
    l <- varc [1, 2, 3]
    print ("first and last:",)
    print (first_elt(l),)
    print (last_elt(l))
</pre>
<br />
On the whole it's pretty boring, except for one thing.
In imperative languages there is (usually) a distinction between
l-values and r-values. An l-value represent a variable, i.e.,
something that can be assigned, whereas an r-value is simply a value.
So in Python, in the statement <tt>x = x + 1</tt> the <tt>x</tt> on
the left is an l-value (it's being assigned), whereas on the right
it's an r-value. You can use the same notation for both in most
imperative languages.
If you want to do the same in Haskell you have (at least) two choices.
First you can unify the concepts of l-value and r-value and have a
runtime test that you only try to assign variables. So, e.g., <tt>5 =
x</tt> would type check but have a runtime failure. I will not dwell
further on this since it's not very Haskelly.<br />
Instead, we want to have l-values and r-values being statically type
checked. Here's the interesting bit of the types for a simple
example.
<br />
<pre> </pre>
<pre>data LValue a
data RValue a
instance (Num a) => Num (RValue a)
class LR lr
instance LR RValue
instance LR LValue
var :: RValue a -> IO (forall lr . (LR lr) => lr a)
(#=) :: LValue a -> RValue a -> IO ()
foo = do
    x <- var 42
    x #= x + 1
</pre>
<br />
We have two type constructors <tt>LValue</tt> and <tt>RValue</tt>
representing l-values and r-values of some type <tt>a</tt>. The
r-values are an instance of <tt>Num</tt>. Furthermore, the class
<tt>LR</tt> captures that the type is either <tt>LValue</tt> or
<tt>RValue</tt>.<br />
The <tt>var</tt> function creates a new variable given a value.
The return type of <tt>var</tt> is the interesting part. It says that
the return value is polymorphic; it can be used either as an l-value
or as r-value. Just as we want.<br />
The assignment operator,
<tt>(#=)</tt>, takes an l-value, an r-value, and returns nothing.<br />
<br />
So in the example we expect <tt>x</tt> to have type
<tt>forall lr . (LR lr) => lr a</tt>, in which case the assignment
will type check.<br />
If we try to compile this we get
<br />
<pre> Illegal polymorphic or qualified type:
forall (lr :: * -> *). LR lr => lr a
Perhaps you intended to use -XImpredicativeTypes
In the type signature for `var':
var :: RValue a -> IO (forall lr. LR lr => lr a)
</pre>
<br />
The problem is that universal quantification is not normally allowed
as the argument to a type constructor. This requires the
impredicative polymorphism extension to ghc. If we turn it on it
compiles fine in ghc-6.12.3.<br />
But, with ghc-7.0.4 we get
<br />
<pre> Couldn't match expected type `LValue a0'
with actual type `forall (lr :: * -> *). LR lr => lr a1'
In the first argument of `(#=)', namely `x'
In the expression: x #= x + 1
</pre>
<br />
I can't really explain the rationale behind the change in the ghc type
system (Simon says it's simpler now), but I feel this has really made
ghc worse for defining DSELs. When you define a DSEL you usually use
the <tt>do</tt>-notation for the binding construct in the embedded
language. This is the only programmable binding construct (by
overloading <tt>(>>=)</tt>), so there is little choice. With the
change in ghc-7 it means you can no longer make DSELs with a
polymorphic binding construct (like I wanted here), because the binder
seems to be monomorphic now.<br />
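For what it's worth, the usual workaround is to hide the quantified type behind a newtype, at the cost of an explicit unwrapping pattern at each binder. Below is a minimal self-contained sketch of the idea; it is not the original library, and the <tt>Num</tt> instance and <tt>deref</tt> are simplified stand-ins:

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.IORef

newtype LValue a = LValue (IORef a)
newtype RValue a = RValue (IO a)

class LR lr where fromRef :: IORef a -> lr a
instance LR LValue where fromRef = LValue
instance LR RValue where fromRef r = RValue (readIORef r)

instance (Num a) => Num (RValue a) where
    RValue a + RValue b = RValue ((+) <$> a <*> b)
    RValue a * RValue b = RValue ((*) <$> a <*> b)
    RValue a - RValue b = RValue ((-) <$> a <*> b)
    abs    (RValue a)   = RValue (abs    <$> a)
    signum (RValue a)   = RValue (signum <$> a)
    fromInteger         = RValue . return . fromInteger

-- The polymorphic result is wrapped in a monomorphic newtype, so no
-- impredicative instantiation is needed.
newtype V a = V (forall lr . (LR lr) => lr a)

var :: a -> IO (V a)
var x = do r <- newIORef x; return (V (fromRef r))

(#=) :: LValue a -> RValue a -> IO ()
LValue r #= RValue m = m >>= writeIORef r

deref :: RValue a -> IO a
deref (RValue m) = m

main :: IO ()
main = do
    V x <- var (42 :: Int)  -- unwrapping V keeps x polymorphic
    x #= x + 1              -- x used both as an LValue and an RValue
    deref x >>= print       -- prints 43
```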
<br />
Please Simon, can we have polymorphic do bindings back?<br />
<br />https://augustss.blogspot.com/2011/07/impredicative-polymorphism-use-case-in.htmlnoreply@blogger.com (augustss)7tag:blogger.com,1999:blog-25541020.post-992454843212471185Mon, 02 May 2011 18:28:00 +00002011-05-02T19:30:08.146+01:00<h2>
More points for lazy evaluation</h2>
In a <a href="http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/">recent blog post</a> Bob Harper shows one use of laziness, but I think he
misses the really important points of laziness.
So I will briefly enumerate what I think are the important points of
laziness (excuse me for not coming up with any kind of pun about points).
<br />
<br />
First, I'd like to say that I don't think strict or lazy matters that
much in practice; they both work fine. I have programmed in regular
non-strict Haskell as well as strict Haskell, and most things carry
over well to a strict Haskell. Furthermore, when I say strict language I mean a strict language with at least non-termination as an effect; total languages are another matter.<br />
<h3>
Lazy bindings</h3>
I like to tell Haskell beginners that any subexpression can be named and "pulled out", modulo name capture, of course.
For instance
<br />
<pre> ... e ...
</pre>
is the same as
<br />
<pre> let x = e
in ... x ...
</pre>
<br />
The key thing is that
<br />
<pre> ... e ... e ...
</pre>
is the same as
<br />
<pre> let x = e
in ... x ... x ...
</pre>
so that common subexpressions can be given a name.<br />
<br />
This is (in general) just wrong in a strict language, of course.
Just take the simple example
<br />
<pre> if c then error "BOO!" else 0
</pre>
Which is not the same as
<br />
<pre> let x = error "BOO!"
in if c then x else 0
</pre>
<br />
In this case you can easily fix the problem by delaying the
computation with a lambda (a common theme).
<br />
<pre> let x () = error "BOO!"
in if c then x () else 0
</pre>
<br />
But for a slightly more complicated example this simple technique goes
wrong. Consider
<br />
<pre> map (\ a -> a + expensive) xs
</pre>
where <tt>expensive</tt> does not depend on <tt>a</tt>. In this case
you want to move the expensive computation out of the loop (cf. loop
invariant code motion in imperative languages). Like so
<br />
<pre> let x = expensive
in map (\ a -> a + x) xs
</pre>
In a lazy language <tt>x</tt> will be evaluated exactly zero times or
once, just as we want. Using the delaying trick doesn't work here:<br />
<pre> let x () = expensive
in map (\ a -> a + x ()) xs
</pre>
since <tt>expensive</tt> will get evaluated once for every list
element.
<br />
<br />
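In Haskell the hoisted form really is evaluated at most once; a tiny demonstration (Debug.Trace is used only to make the single evaluation visible on stderr):

```haskell
import Debug.Trace (trace)

expensive :: Int
expensive = trace "evaluating expensive" (sum [1 .. 1000])

main :: IO ()
main = print (map (\ a -> a + x) [1, 2, 3])
  where x = expensive  -- "evaluating expensive" appears only once
```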
This is easy enough to remedy, by introducing an abstraction for lazy
computations (which will contain an assignment to compute the value
just once). The signature for the abstract type of lazy values is
something like
<br />
<pre> data Lazy a
delay :: (() -> a) -> Lazy a
force :: Lazy a -> a
</pre>
Note that the <tt>delay</tt> needs to take a function to avoid the
<tt>a</tt> being evaluated early.
<br />
(This is probably what Bob would name a benign effect and is easily programmed using <span class="Apple-style-span" style="font-family: monospace; white-space: pre;">unsafePerformIO</span>, which means it needs careful consideration.)<br />
<br />
And so we get
<br />
<pre> let x = delay (\ () -> expensive)
in map (\ a -> a + force x) xs
</pre>
This isn't exactly pretty, but it works fine. In a language with
macros the ugliness can be hidden better.
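For completeness, here is one way to realize this Lazy abstraction, written in Haskell for concreteness using the benign-effect trick mentioned above (the NOINLINE pragmas are part of the "careful consideration"; in a real strict language the mutable cell would just be ordinary code):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A cell holding either the unevaluated thunk or the cached result.
newtype Lazy a = Lazy (IORef (Either (() -> a) a))

delay :: (() -> a) -> Lazy a
delay f = Lazy (unsafePerformIO (newIORef (Left f)))
{-# NOINLINE delay #-}

force :: Lazy a -> a
force (Lazy r) = unsafePerformIO $ do
    e <- readIORef r
    case e of
        Right v -> return v
        Left f  -> do
            let v = f ()
            writeIORef r (Right v)  -- cached: computed at most once
            return v
{-# NOINLINE force #-}

main :: IO ()
main = do
    let x = delay (\ () -> sum [1 .. 100 :: Int])
    print (map (+ force x) [1, 2, 3])  -- the sum is computed only once
```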
<br />
<h3>
Lazy functions</h3>
Even strict languages like ML and C have some lazy functions even if
they don't call them that, like SML's <em>if</em>, <em>andalso</em>, and
<em>orelse</em>. You really need the <i>if</i> construct to evaluate the
condition and then one of the branches depending on the condition.
But what if I want to define my own type with the same kind of
functions?
In ML I can't, in C I would have to use a macro.
<br />
<br />
The ability to define new functions that can be used as control
constructs is especially important when you want to design embedded
domain specific languages. Take the simple example of the
<tt>when</tt> (i.e., one-arm if) function in Haskell.
<br />
<pre> when :: (Monad m) => Bool -> m () -> m ()
</pre>
A quite common use of this function in monadic code is to check for
argument preconditions in a function, like
<br />
<pre> f x = do
when (x < 0) $
error "x must be >= 0"
...
</pre>
If the <tt>when</tt> function is strict this is really bad, of course,
since the call to error will happen before the <tt>when</tt> is
called.
<br />
<br />
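In Haskell itself no workaround is needed, because the monadic action argument is just an ordinary value; this is essentially how <tt>Control.Monad.when</tt> is defined:

```haskell
-- Mirrors the definition of Control.Monad.when.
when' :: (Monad m) => Bool -> m () -> m ()
when' b m = if b then m else return ()

f :: Int -> IO Int
f x = do
    when' (x < 0) $ error "x must be >= 0"  -- built, but only run when x < 0
    return (x * 2)

main :: IO ()
main = f 5 >>= print  -- prints 10; the error action is never executed
</imports>
```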
Again, one can work around this by using lazy values, like
<br />
<pre> myAnd :: MyBool -> Lazy MyBool -> MyBool
...
... myAnd x (delay (\ () -> y)) ...
</pre>
But in my opinion, this is too ugly to even consider. The intent of
the function is obscured by the extra noise to make the second
argument lazy.
<br />
<br />
I think every language needs a mechanism for defining some form of
call-by-name functions. And many languages have this in the form of
macros (maybe built in like in Lisp, or as a preprocessor like C).
<br />
If a language cannot define lazy functions it simply lacks in
abstraction power. I think any kind of fragment of the language
should be nameable and reusable. (Haskell lacks this ability for
patterns and contexts; a wart in the design.) So if you notice a
repeated expression pattern, like
<br />
<pre> if c then t else False
</pre>
and cannot give this a name, like
<br />
<pre> and c t = if c then t else False
</pre>
and then use it with the same effect as the original expression, well,
then your language is lacking.
<br />
<br />
For some language constructs the solution adopted by Smalltalk (and
later Ruby), i.e., a very lightweight way on constructing closures is
acceptable. So, for instance, I could accept writing
<br />
<pre> ... myAnd x {y} ...</pre>
<br />
(In SML you could make something using functors, but it's just too ugly to contemplate.)<br />
<h3>
Lazy constructors</h3>
Lazy constructors were sort of covered in what Bob claimed to be the point of laziness, so I'll just mention them for completeness. Sometimes you need them, but in my experience it's not very often.<br />
<h3>
Cyclic data structures</h3>
<div>
This is related to the last point.</div>
<div>
<br /></div>
Sometimes you really want cyclic data structures. An example are the
Haskell data types in <tt>Data.Data</tt> that describe data types and
constructors. A data type descriptor needs to contain a list of its
constructors and a constructor descriptor needs to contain the data type
descriptor.
<br />
In Haskell this can be described very naturally by having the two
descriptors reference each other.
<br />
In SML this is not possible. You will have to break the cycle by
something like a reference (or a function).
<br />
In OCaml you can define cyclic data structures in a similar fashion to
Haskell, so this isn't really a problem with strict languages, but
rather a feature that you can have if you like.
Of course, being able to define cyclic data leads to non-standard
elements in your data types, like
<br />
<pre> data Nat = Zero | Succ Nat
 omega :: Nat
 omega = Succ omega
</pre>
So having the ability to define cyclic data structures is a double
edged sword.
<br />
I find the lack of a simple way to define cyclic data a minor nuisance only.
<br />
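A runnable illustration of such mutually referencing descriptors (the types here are simplified stand-ins, not the real <tt>Data.Data</tt> API):

```haskell
-- Simplified stand-ins for data type / constructor descriptors.
data DataType = DataType { dtName :: String, dtConstrs :: [Constr] }
data Constr   = Constr   { cName :: String, cDataType :: DataType }

-- The descriptors refer to each other; laziness ties the knot.
boolType :: DataType
boolType = DataType "Bool" [falseC, trueC]

falseC, trueC :: Constr
falseC = Constr "False" boolType
trueC  = Constr "True"  boolType

main :: IO ()
main = do
    print (map cName (dtConstrs boolType))        -- ["False","True"]
    print (dtName (cDataType falseC))             -- "Bool"
```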
<h3>
Reuse</h3>
I've saved my biggest gripe of strict evaluation for last.
<em>Strict evaluation is fundamentally flawed for function reuse.</em><br />
What do I mean? I will illustrate with an example.
<br />
Consider the <tt>any</tt> function in Haskell:
<br />
<pre>any :: (a -> Bool) -> [a] -> Bool
any p = or . map p
</pre>
It's quite natural to express the <tt>any</tt> function by reusing the
<tt>map</tt> and <tt>or</tt> functions. Unfortunately, it doesn't
behave like we would wish in a strict language. The <tt>any</tt>
function should scan the list from the head forwards and as soon as an
element that fulfills the predicate is found it should return true and stop
scanning the list. In a strict language this would not happen, since
the predicate will be applied to every element before the <tt>or</tt>
examines the elements.
So we are forced to manually fuse the two functions, doing so we get:
<br />
<pre>any :: (a -> Bool) -> [a] -> Bool
any p = foldr (\ x r -> p x || r) False

or :: [Bool] -> Bool
or = foldr (||) False

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
</pre>
<br />
But the misery doesn't end here. This still doesn't do the right
thing, because the strict language will recurse all the way down
the list since it will call <span class="Apple-style-span" style="font-family: monospace;">foldr</span> before <span class="Apple-style-span" style="font-family: monospace;">f</span>. So we either have to fuse again, or invent a new version of
<tt>foldr</tt> that delays the recursive call.<br />
One more fusion gets us to
<br />
<pre>any p [] = False
any p (y:ys) = p y || any p ys
</pre>
So where's the function reuse? Nowhere in sight.
<br />
<br />
With strict evaluation you can no longer with a straight face tell
people: don't use recursion, reuse the recursion patterns in map,
filter, foldr, etc. It simply doesn't work (in general).<br />
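To see the contrast concretely, in Haskell the reused definition even works on an infinite list, because <tt>or</tt> stops at the first <tt>True</tt>:

```haskell
-- The naturally reused definition from above, unchanged.
anyL :: (a -> Bool) -> [a] -> Bool
anyL p = or . map p

main :: IO ()
main = print (anyL (> 10) [1 ..])  -- True; only the first 11 elements are examined
```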
<br />
Using macros doesn't really save us this time, because of the
recursive definitions. I don't really know of any way to fix this
problem short of making all (most?) functions lazy, because the
problem is pervasive. I.e., in the example it would not be enough to fix <span class="Apple-style-span" style="font-family: monospace;">foldr</span>; all the functions involved need to be lazy to get the desired semantics.<br />
<br />
I find this the biggest annoyance with strict evaluation, but at the
same time it's just an annoyance, because you can always rewrite the
code to make it work. But strict evaluation really, fundamentally
stops you from reusing functions the way you can with laziness.
<br />
<br />
As an aside, the definition of <tt>any</tt> doesn't work in SML for
another reason as well, namely the value restriction. But that's just
a language wart, like the monomorphism restriction in Haskell.
<br />
<h3>
Complexity</h3>
Another complaint Bob Harper had about lazy evaluation is the
difficulty of finding out the complexity of functions. I totally
agree that the space complexity is very messy in a lazy language.
For (sequential) time complexity I don't see any need to worry.
<br />
If a function has O(<i>f</i>(<i>n</i>)) complexity in a strict language then
it has complexity O(<i>f</i>(<i>n</i>)) in a lazy language as well. Why worry? :)
<br />
<h3>
Summing up</h3>
One of the most important principles in all software design is DRY,
i.e., Don't Repeat Yourself. This means that common patterns that we
can find in a program should be abstractable so you can reuse them.
In a lazy language you can use functions for abstracting control
structures, which a strict language does not permit (at least not in a
convenient way). This can be mitigated by providing some other
abstraction mechanism, like macros (hopefully some kind of sane macros).
<br />
For the reasons above, I find it more convenient to program in a lazy
language than a strict one. But a strict language with the ability to define lazy bindings, lazy functions, and lazy constructors is good enough that I find it usable.
<br />
<br />https://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.htmlnoreply@blogger.com (augustss)45tag:blogger.com,1999:blog-25541020.post-1085569182406070563Wed, 20 Apr 2011 22:25:00 +00002011-04-20T23:25:53.743+01:00<h2>
Ugly memoization</h2>
Here's a problem that I recently ran into. I have a function taking a
string and computing some value. I call this function a lot, but a
lot of the time the argument has occurred before. The function is
reasonably expensive, about 10 us. Only about 1/5 of the calls to the
function have a new argument.
<br />
<br />
So naturally I want to memoize the function. Luckily Hackage has a
couple packages for memoization. I found
<a href="http://hackage.haskell.org/package/data-memocombinators">data-memocombinators</a>
and
<a href="http://hackage.haskell.org/package/MemoTrie">MemoTrie</a>
and decided to try them.
The basic idea with memoization is that you have a function like
<br />
<br />
<pre> memo :: (a->b) -> (a->b)
</pre>
<pre>
</pre>
I.e., you give a function to memo and you get a new function of the
same type back. This new function behaves like the original one, but
it remembers every time it is used and the next time it gets the same
argument it will just return the remembered result.
This is only safe in a pure language, but luckily Haskell is pure.<br />
In an imperative language you can use a mutable memo table that stores
all the argument-result pairs and updates the memo table each time the
function is used. But how is it even possible to implement that in a
pure language? The idea is to lazily construct the whole memo table
in the call to memo, and it will then be lazily filled in.<br />
Assume that all values of the argument type <em>a</em> can be
enumerated by the method <tt>enumerate</tt>, we could then write
memo like this:
<br />
<pre>
</pre>
<pre> memo f =
     let table = [ (x, f x) | x <- enumerate ]
     in  \ y -> let Just r = lookup y table in r
</pre>
<pre>
</pre>
Note how the memo table is constructed given just f, and this memo
table is then used in the returned function.<br />
The type of this function would be something like
<br />
<pre>
</pre>
<pre> memo :: (Enumerate a, Eq a) => (a->b) -> (a->b)
</pre>
<pre>
</pre>
assuming that the class <tt>Enumerate</tt> has the magic method <tt>enumerate</tt>.<br />
<br />
This is just a very simplified example; if you tried to use this it would
be terrible because the returned function does linear lookup in a
list. Instead we want some kind of search tree, which is what the two
packages I mention implement. The MemoTrie package does this in a
really beautiful way, I recommend reading <a href="http://conal.net/blog/posts/elegant-memoization-with-functional-memo-tries/">Conal's blog post</a> about it.<br />
OK, enough preliminaries.
I used <a href="http://hackage.haskell.org/package/criterion">criterion</a> to perform the benchmarking, and I tried with no
memoization (<em>none</em>), memo-combinators (<em>comb</em>), and
MemoTrie (<em>beau</em>). I had a test function taking about 10us,
and then i called this functions with different number of repeated
arguments: 1, 2, 5, and 10. I.e., 5 means that each argument occurred
5 times as the memoized function was called.
<br />
<br />
<table border="4">
<tbody>
<tr><th></th> <td>1</td> <td>2</td> <td>5</td> <td>10</td></tr>
<tr><td>none </td> <td>10.7</td> <td>10.7</td> <td>10.7</td> <td>10.7</td></tr>
<tr><td>comb </td> <td>62.6</td> <td>52.2</td> <td>45.8</td> <td>43.4</td></tr>
<tr><td>beau </td> <td>27.6</td> <td>17.0</td> <td>10.4</td> <td>8.1</td></tr>
</tbody></table>
<br />
So with no memoization the time per call was 10.7 us all the time, no
surprise there. With the memo combinators it was much slower than no
memoization; the overhead for looking something up is bigger than the
cost of computing the result. So that was a failure. The MemoTrie
does better, at about an argument repetition of five it starts to
break even, and at ten it's a little faster to memoize.<br />
<br />
Since I estimated my repetition factor in the real code to be about
five even the fastest memoization would not be any better than
recomputation. So now what? Give up? Of course not! It's time to
get dirty.<br />
<br />
Once you know a function can be implemented in a pure way, there's no
harm in implementing the same function in an impure way as long as it
presents the pure interface. So let's write the memo function the way
it would be done in, e.g., Scheme or ML. We will use a reference to
hold a memo table that gets updated on each call. Here's the code,
with the type that the function gets.
<br />
<pre>
</pre>
<pre>import Data.IORef
import qualified Data.Map as M

memoIO :: (Ord a) => (a -> b) -> IO (a -> IO b)
memoIO f = do
    v <- newIORef M.empty
    let f' x = do
            m <- readIORef v
            case M.lookup x m of
                Nothing -> do let { r = f x }; writeIORef v (M.insert x r m); return r
                Just r -> return r
    return f'
</pre>
<pre>
</pre>
The <tt>memoIO</tt> allocates a reference with an empty memo table.
We then define a new function, <tt>f'</tt>, which when it's called
will get the memo table and look up the argument. If the argument is
in the table then we just return the result, if it's not then we
compute the result, store it in the table, and return it.
Good old imperative programming (see below why this code is not good
imperative code).<br />
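Here is the IORef version in action; <tt>memoIO</tt> is repeated verbatim so the example stands alone, and <tt>slowFib</tt> is just a stand-in for an expensive function:

```haskell
import Data.IORef
import qualified Data.Map as M

-- memoIO as defined above, repeated so this sketch compiles on its own.
memoIO :: (Ord a) => (a -> b) -> IO (a -> IO b)
memoIO f = do
    v <- newIORef M.empty
    let f' x = do
            m <- readIORef v
            case M.lookup x m of
                Nothing -> do let r = f x
                              writeIORef v (M.insert x r m)
                              return r
                Just r  -> return r
    return f'

-- A stand-in for an expensive function.
slowFib :: Int -> Integer
slowFib n = if n < 2 then fromIntegral n else slowFib (n - 1) + slowFib (n - 2)

main :: IO ()
main = do
    fib' <- memoIO slowFib
    r1 <- fib' 25  -- computed
    r2 <- fib' 25  -- fetched from the memo table
    print (r1, r2)
```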
<br />
But, horror, now the type is all wrong, there's IO in two places.
The function we want to implement is actually pure. So what to do?
Well, if you have a function involving the IO type, but you can prove
it is actually pure, then (and only then) you are allowed to use
<tt>unsafePerformIO</tt>.<br />
<br />
I'll wave my hands instead of a proof (but more later), and here we go
<br />
<pre>
</pre>
<pre> memo :: (Ord a) => (a -> b) -> (a -> b)
memo f = let f' = unsafePerformIO (memoIO f) in \ x -> unsafePerformIO (f' x)
</pre>
<pre>
</pre>
Wow, two <tt>unsafePerformIO</tt> on the same line. It doesn't get
much less safe than that.<br />
Let's benchmark again:
<br />
<br />
<table border="4">
<tbody>
<tr><th></th> <td>1</td> <td>2</td> <td>5</td> <td>10</td></tr>
<tr><td>none </td> <td>10.7</td> <td>10.7</td> <td>10.7</td> <td>10.7</td></tr>
<tr><td>comb </td> <td>62.6</td> <td>52.2</td> <td>45.8</td> <td>43.4</td></tr>
<tr><td>beau </td> <td>27.6</td> <td>17.0</td> <td>10.4</td> <td>8.1</td></tr>
<tr><td>ugly </td> <td>13.9</td> <td>7.7</td> <td>3.9</td> <td>2.7</td></tr>
</tbody></table>
<br />
Not too shabby, using the ugly memoization is actually a win already
at two, and just a small overhead if the argument occurs once. We
have a winner!<br />
<br />
Not so fast, there's
<br />
<h3>
A snag</h3>
My real code can actually be multi-threaded, so the memo function had
better work in a multi-threaded setting. Well, it doesn't. There's
no guarantee about <tt>readIORef</tt> and <tt>writeIORef</tt> when
doing multi-threading.<br />
So we have to rewrite it. Actually, the code I first wrote is the one
below; I hardly ever use IORef because I want it to work with
multi-threading.
<br />
<pre>
</pre>
<pre>memoIO f = do
    v <- newMVar M.empty
    let f' x = do
            m <- takeMVar v
            case M.lookup x m of
                Nothing -> do let { r = f x }; putMVar v (M.insert x r m); return r
                Just r -> do putMVar v m; return r
    return f'
</pre>
<pre>
</pre>
So now we use an <tt>MVar</tt> instead. This makes it thread safe.
Only one thread can execute between the <tt>takeMVar</tt> and the
<tt>putMVar</tt>. This guarantees that only one thread can update the
memo table at a time. If two threads try at the same time one has to
wait a little. How long? The time it takes for the lookup, plus some
small constant. Remember that Haskell is lazy, so the <tt>(f x)</tt>
is not actually computed with the lock held, which is good.<br />
So I think this is a perfectly reasonable <tt>memoIO</tt>. And we can
do the same unsafe trick as before and make it pure. Performance of
this version is the same as with the <tt>IORef</tt>.
<br />
<br />
Ahhhh, bliss.
But wait, there's
<br />
<h3>
Another snag</h3>
That might look reasonable, but in fact the <tt>memo</tt> function is
broken now. It appears to work, but here's a simple use that fails
<br />
<pre> sid :: String -> String
sid = memo id
fcn s = sid (sid s)
</pre>
What will happen here? The outer call to <tt>sid</tt> will execute
the <tt>takeMVar</tt> and then do the lookup. Doing the lookup will
evaluate the argument, <tt>x</tt>. But this argument is another call
to <tt>sid</tt>, which will try to execute the <tt>takeMVar</tt>.
Disaster has struck, deadlock.<br />
<br />
What happened here? The introduction of <tt>unsafePerformIO</tt>
ruined the sequencing guaranteed by the IO monad that would have
prevented the deadlock if we had used <tt>memoIO</tt>. I got what I
deserved for using <tt>unsafePerformIO</tt>.<br />
<br />
Can it be repaired? Well, we could make sure <tt>x</tt> is fully
evaluated before grabbing the lock. I settled for a different repair,
where the lock is held in a shorter portion of the code.
<br />
<pre>memoIO f = do
v <- newMVar M.empty
let f' x = do
m <- readMVar v
case M.lookup x m of
Nothing -> do let { r = f x }; m <- takeMVar v; putMVar v (M.insert x r m); return r
Just r -> return r
return f'
</pre>
This solution has its own problem. It's now possible for several threads
to compute <tt>(f x)</tt> for the same <tt>x</tt> and the result of
all but one of those will be lost by overwriting the table. This is a
price I'm willing to pay for this application.
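For reference, the pieces above assemble into a small self-contained program. The module wrapper, imports, the <tt>NOINLINE</tt> pragma, and the example <tt>main</tt> are my additions, not from the post:

```haskell
import qualified Data.Map as M
import Control.Concurrent.MVar
import System.IO.Unsafe (unsafePerformIO)

-- Final version: the lock is held only while updating the table,
-- so a nested memoized call no longer deadlocks.
memoIO :: (Ord a) => (a -> b) -> IO (a -> IO b)
memoIO f = do
    v <- newMVar M.empty
    let f' x = do
            m <- readMVar v
            case M.lookup x m of
                Nothing -> do let r = f x
                              m' <- takeMVar v
                              putMVar v (M.insert x r m')
                              return r
                Just r  -> return r
    return f'

-- Pure wrapper; safe only because memoIO is observably pure.
memo :: (Ord a) => (a -> b) -> (a -> b)
memo f = let f' = unsafePerformIO (memoIO f)
         in \ x -> unsafePerformIO (f' x)

sid :: String -> String
sid = memo id
{-# NOINLINE sid #-}   -- keep one shared memo table; inlining would duplicate it

main :: IO ()
main = putStrLn (sid (sid "hello"))   -- nested calls: no deadlock
```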
<br />
<h3>
Moral</h3>
Yes, you can use imperative programming to implement pure functions.
But the onus is on you to prove that it is safe. This is not as easy
as you might think. I believe my final version is correct (with the
multiple computation caveat), but I'm not 100% sure.https://augustss.blogspot.com/2011/04/ugly-memoization-heres-problem-that-i.htmlnoreply@blogger.com (augustss)24tag:blogger.com,1999:blog-25541020.post-2165437207763037673Sun, 10 Apr 2011 18:51:00 +00002011-04-10T19:51:44.670+01:00Phew! Cleaned out a lot of spam comments in my blog. Hopefully my new settings will prevent the crazy onslaught of spammers.https://augustss.blogspot.com/2011/04/phew-cleaned-out-lot-of-spam-comments.htmlnoreply@blogger.com (augustss)0tag:blogger.com,1999:blog-25541020.post-4910575514980456234Wed, 10 Jun 2009 21:35:00 +00002012-05-31T09:25:09.173+01:00<h2>More LLVM</h2>Recently someone asked me on #haskell if you could use the Haskell LLVM bindings to compile some abstract syntax to a Haskell function. Naturally I said yes, but then I realized I had only done it for a boring language with just one type. I had no doubt that it could be done for more complicated languages with multiple types, but it might not be totally obvious how. So I decided to write a simple compiler, and this blog post is the result. First, a simple example: <pre>main = do
let f :: Double -> Double
Just f = compile "\\ (x::Double) -> if x == 0 then 0 else 1/(x*x)"
print (f 2, f 3, f 0)
</pre>Running this program produces (as expected) <pre>(0.25,0.1111111111111111,0.0)
</pre>What has happened is that the string has been parsed to an abstract syntax tree, translated into LLVM code, then to machine code, and finally turned back into a Haskell callable function. Many things can go wrong along the way, like syntax and type errors, so <tt>compile</tt> returns a <tt>Maybe</tt> type to indicate if things went right or wrong. (A more serious version of the <tt>compile</tt> function would return an error message when something has gone wrong.) The definition of the compilation function is simple and illustrates the flow of the compiler <pre>compile :: (Translate a) => String -> Maybe a
compile = fmap translate . toTFun <=< mParseUFun
</pre>The context <tt>Translate</tt> is there to limit the types that can actually be translated; it's a necessary evil and exactly what types are allowed depends on how advanced we make the compiler. Had we ignored the <tt>Maybe</tt> type the definition would have been <pre>compile = translate . toTFun . mParseUFun
</pre>which says, first parse to the type <tt>UFun</tt> (untyped expressions), then type check and turn it into the type <tt>TFun a</tt>, and finally translate <tt>TFun a</tt> into an <tt>a</tt> by LLVM compilation. Let's see how this all works. <h3>The UExp module</h3>The first step is to just define an abstract syntax for the expressions that we want to handle. I'm only allowing leading lambdas (this is a very first-order language), so there's a distinction between the top level <tt>UFun</tt> type and the expression type <tt>UExp</tt>. The U prefix indicates that this version of the syntax is not yet type checked. The definition is pretty boring, but here it is: <pre>{-# OPTIONS_GHC -fno-warn-unused-binds #-}
{-# LANGUAGE RecordWildCards #-}
module UExp(Id, UFun(..), UTyp(..), UExp(..), parseUFun, showOp, mParseUFun) where
import Data.Maybe
import Data.List
import Data.Function
import Text.ParserCombinators.Parsec
import Text.ParserCombinators.Parsec.Expr
import Text.ParserCombinators.Parsec.Token
import Text.ParserCombinators.Parsec.Language
type Id = String
data UFun = UFun [(Id, UTyp)] UExp
data UTyp = UTBol | UTDbl
data UExp
= UDbl Double -- ^ Double literal
| UBol Bool -- ^ Bool literal
| UVar Id -- ^ Variable
| UApp Id [UExp] -- ^ Function application
| ULet Id UExp UExp -- ^ Local binding
</pre>Naturally, we want to be able to show the expressions, if nothing else so for debugging. So I make a <tt>Show</tt> instance that shows them in a nice way respecting operator precedences etc. There's nothing exciting going on, the large number of lines is just to cover operator printing. <pre>instance Show UFun where
showsPrec p (UFun [] e) = showsPrec p e
showsPrec p (UFun vts e) = showParen (p>0) (showString "\\ " . foldr (.) (showString "-> ") (map f vts) . showsPrec 0 e)
where f (v, t) = showParen True (showString v . showString " :: " . showsPrec 0 t) . showString " "
instance Show UTyp where
showsPrec _ UTDbl = showString "Double"
showsPrec _ UTBol = showString "Bool"
instance Show UExp where
showsPrec p (UDbl d) = showsPrec p d
showsPrec p (UBol b) = showsPrec p b
showsPrec _ (UVar i) = showString i
showsPrec p (UApp "if" [c, t, e]) =
showParen (p>0) (showString "if " . showsPrec 0 c . showString " then " . showsPrec 0 t . showString " else " . showsPrec 0 e)
showsPrec p (UApp op [a, b]) = showOp p op a b
showsPrec _ (UApp op _) = error $ "Uxp.show " ++ op
showsPrec p (ULet i e b) =
showParen (p>0) (showString "let " . showString i . showString " = " . showsPrec 0 e . showString " in " . showsPrec 0 b)
showOp :: (Show a, Show b) => Int -> String -> a -> b -> String -> String
showOp q sop a b = showParen (q>mp) (showsPrec lp a . showString sop . showsPrec rp b)
where (lp,mp,rp) = case lookup sop ops of
Just (p, AssocLeft) -> (p, p, p+1)
Just (p, AssocRight) -> (p+1, p, p)
Just (p, AssocNone) -> (p+1, p, p+1)
Nothing -> (9, 9, 10)
ops :: [(String, (Int, Assoc))]
ops = [("+", (6, AssocLeft)),
("-", (6, AssocLeft)),
("*", (7, AssocLeft)),
("/", (7, AssocLeft)),
("==", (4, AssocNone)),
("<=", (4, AssocNone)),
("&&", (3, AssocRight)),
("||", (2, AssocRight))
]
</pre>We also want to be able to parse, so I'm using <tt>Parsec</tt> to parse the string and produce an AST. Again, there's nothing interesting going on. I use the Haskell lexical analysis provided by <tt>Parsec</tt>. This is available as a <tt>TokenParser</tt> record, which can be conveniently opened with the <tt>RecordWildcard</tt> notation <tt>TokenParser{..}</tt>. <pre>parseUFun :: SourceName -> String -> Either ParseError UFun
parseUFun = parse $ do f <- pFun; eof; return f
where TokenParser{..} = haskell
pFun = do
vts <- between (reservedOp "\\")
(reservedOp "->")
(many $ parens $ do v <- identifier; reservedOp "::"; t <- pTyp; return (v, t))
<|> return []
e <- pExp
return $ UFun vts e
pTyp = choice [do reserved "Bool"; return UTBol, do reserved "Double"; return UTDbl]
pExp = choice [pIf, pLet, pOExp]
pIf = do reserved "if"; c <- pExp; reserved "then"; t <- pExp; reserved "else"; e <- pExp; return $ UApp "if" [c, t, e]
pLet = do reserved "let"; i <- identifier; reservedOp "="; e <- pExp; reserved "in"; b <- pExp; return $ ULet i e b
pOExp = buildExpressionParser opTable pAExp
pAExp = choice [pDbl, pVar, parens pExp]
pVar = fmap eVar identifier
pDbl = fmap (either (UDbl . fromInteger) UDbl) naturalOrFloat
eVar i = if i == "False" then UBol False else if i == "True" then UBol True else UVar i
opTable = reverse . map (map mkOp) . groupBy ((==) `on` prec) . sortBy (compare `on` prec) $ ops
where mkOp (s, (_, a)) = Infix (do reservedOp s; return $ \ x y -> UApp s [x, y]) a
prec = fst . snd
mParseUFun :: String -> Maybe UFun
mParseUFun = either (const Nothing) Just . (parseUFun "")
</pre>The parser is packaged up in <tt>mParseUFun</tt> which returns an AST if it all worked. <h3>The TExp module</h3>Since the LLVM API is typed it's much easier to translate a typed abstract syntax tree than an untyped abstract syntax tree. The <tt>TExp</tt> module contains the definition of the typed AST and the type checker that converts to it. There are many ways to formulate type safe abstract syntax trees. I've chosen to use GADTs. I've also picked to represent variables (still) by identifiers, which means that the syntax tree is not necessarily type safe. Furthermore, I've chosen a very limited way to represent function application since this is all I need for this example. The variantions on this are endless. <pre>{-# LANGUAGE GADTs, ExistentialQuantification, PatternGuards #-}
module TExp(Id,
TFun(..), TTyp(..), TExp(..), DblOp(..), BolOp(..), CmpOp(..),
Equal(..), test,
Type(..),
AFun(..), extractFun,
typeCheck, toTFun) where
import Data.Maybe
import Control.Monad
import UExp
data TFun a where
TBody :: TExp a -> TFun a
TLam :: Id -> TTyp a -> TFun b -> TFun (a->b)
data TTyp a where
TTBol :: TTyp Bool
TTDbl :: TTyp Double
TTArr :: TTyp a -> TTyp b -> TTyp (a->b)
data TExp a where
TDbl :: Double -> TExp Double
TBol :: Bool -> TExp Bool
TDblOp :: DblOp -> TExp Double -> TExp Double -> TExp Double
TBolOp :: BolOp -> TExp Bool -> TExp Bool -> TExp Bool
TCmpOp :: CmpOp -> TExp Double -> TExp Double -> TExp Bool
TIf :: TExp Bool -> TExp a -> TExp a -> TExp a
TLet :: Id -> TTyp a -> TExp a -> TExp b -> TExp b
TVar :: Id -> TExp a
data DblOp = DAdd | DSub | DMul | DDiv
deriving (Eq, Show)
data BolOp = BAnd | BOr
deriving (Eq, Show)
data CmpOp = CEq | CLe
deriving (Eq, Show)
</pre>So for instance, <tt>UApp "+" [UVar "x", UDbl 2.2]</tt> will be represented by <tt>TDblOp DAdd (TVar "x") (TDbl 2.2)</tt> which has type <tt>TExp Double</tt>. So the type of the expression is now accurately reflected in the type of the syntax tree. Even the <tt>UTyp</tt> type now has a typed equivalent where the real type is reflected. For completeness, here's some code for pretty printing etc. <pre>
instance Show (TFun a) where
showsPrec p (TBody e) = showsPrec p e
showsPrec p (TLam i t e) = showParen (p>0)
(showString "\\ " . showParen True (showString i . showString " :: " . showsPrec 0 t) . showString " -> " . showsPrec 0 e)
instance Show (TTyp a) where
showsPrec _ TTBol = showString "Bool"
showsPrec _ TTDbl = showString "Double"
showsPrec p (TTArr a b) = showParen (p>5) (showsPrec 6 a . showString " -> " . showsPrec 5 b)
instance Show (TExp a) where
showsPrec p (TDbl d) = showsPrec p d
showsPrec p (TBol b) = showsPrec p b
showsPrec _ (TVar i) = showString i
showsPrec p (TDblOp op a b) = showOp p (fromJust $ lookup op [(DMul, "*"), (DAdd, "+"), (DSub, "-"), (DDiv, "/")]) a b
showsPrec p (TBolOp op a b) = showOp p (fromJust $ lookup op [(BAnd, "&&"), (BOr, "||")]) a b
showsPrec p (TCmpOp op a b) = showOp p (fromJust $ lookup op [(CEq, "=="), (CLe, "<=")]) a b
showsPrec p (TIf c t e) = showParen (p>0) (showString "if " . showsPrec 0 c . showString " then " . showsPrec 0 t . showString " else " . showsPrec 0 e)
showsPrec p (TLet i _ e b) =
showParen (p>0) (showString "let " . showString i . showString " = " . showsPrec 0 e . showString " in " . showsPrec 0 b)
</pre>The aim of the type checker is to transform from the <tt>UExp</tt> type to the <tt>TExp</tt> type, so basically <pre>typeCheckExp :: UExp -> TExp a
</pre>But things can go wrong, so it's impossible to always return a <tt>TExp</tt>, so let's use a <tt>Maybe</tt> type: <pre>typeCheckExp :: UExp -> Maybe (TExp a)
</pre>But wait! This type is totally wrong. Why? Because it promises that given a <tt>UExp</tt> the type checker can return <b>any</b> type, i.e., writing out the (normally implicit) quantifier the type is: <pre>typeCheckExp :: forall a . UExp -> Maybe (TExp a)
</pre>But this is not the case, the type checker will figure out a type and return an expression with this specific type, so the type we really want is <pre>typeCheckExp :: exists a . UExp -> Maybe (TExp a)
</pre>Haskell doesn't allow this type to be written this way; we need to package up the existential type in a data type. Like so: <pre>data ATExp = forall a . TExp a ::: TTyp a
data AFun = forall a . AFun (TFun a) (TTyp a)
</pre>It might look funny that the existential type is written with a forall, but it makes sense when looking at the type of the constructor function (but not when doing pattern matching). Now we can attempt a couple of cases of the type checker: <pre>typeCheckExp :: UExp -> Maybe ATExp
typeCheckExp (UDbl d) =
return $ TDbl d ::: TTDbl
typeCheckExp (UBol b) =
return $ TBol b ::: TTBol
</pre>They look quite nice, and they actually work. So what about something more complicated, like arithmetic? <pre>typeCheckExp (UApp op [a, b]) | Just dop <- lookup op [("+", DAdd), ("-", DSub), ("*", DMul), ("/", DDiv)] = do
a' ::: TTDbl <- typeCheckExp a
b' ::: TTDbl <- typeCheckExp b
return $ TDblOp dop a' b' ::: TTDbl
</pre>
First we conveniently look up the operator among the arithmetic operators, then we recursively call the type checker for the operands. We do this in the <tt>Maybe</tt> monad. If type checking a subterm fails, that's automatically propagated; furthermore, if type checking a subterm does not yield a <tt>TTDbl</tt> type then the pattern match fails, which generates a <tt>Nothing</tt> in the <tt>Maybe</tt> monad. So we use failing pattern matches to our advantage here.
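The pattern-match trick can be seen in isolation. This is my own minimal example, not from the post: a refutable pattern in a <tt>Maybe</tt>-monad do block turns a mismatch into <tt>Nothing</tt> instead of a runtime error.

```haskell
-- A refutable pattern match in the Maybe monad fails gracefully:
-- the match failure becomes Nothing (via the monad's fail/MonadFail).
pairSum :: [Int] -> Maybe Int
pairSum xs = do
    [a, b] <- Just xs      -- gives Nothing unless xs has exactly two elements
    return (a + b)

main :: IO ()
main = print (pairSum [1, 2], pairSum [1, 2, 3])   -- prints (Just 3,Nothing)
```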
The interesting case is checking <tt>if</tt> (a <tt>UApp "if"</tt> node), because both arms have to have the same type, but we don't know which one. Here's an attempt:
<pre>typeCheckExp (UApp "if" [c,t,e]) = do
c' ::: TTBol <- typeCheckExp c
t' ::: tt <- typeCheckExp t
e' ::: te <- typeCheckExp e
guard (tt == te)
return $ TIf c' t' e' ::: tt
</pre>But this doesn't type check. The guard ensures that the two arms have the same type, but while that's something we know, the Haskell type checker doesn't. So it rejects the <tt>TIf</tt>, because it can't see that both arms have the same type. We need to be trickier in doing the equality test so that it reflects the equality on the type level. There's a standard trick for this, namely this type:
<pre>data Equal a b where
Eq :: Equal a a
</pre>If you ever have a value (which must be <tt>Eq</tt>) of type <tt>Equal foo bar</tt> then the type checker will know that <tt>foo</tt> and <tt>bar</tt> are actually the same type. So let's code equality for <tt>TTyp</tt>.
<pre>test :: TTyp a -> TTyp b -> Maybe (Equal a b)
test TTBol TTBol = return Eq
test TTDbl TTDbl = return Eq
test (TTArr a b) (TTArr a' b') = do
Eq <- test a a'
Eq <- test b b'
return Eq
test _ _ = mzero
</pre>This code is worth pondering for a while; it's actually rather clever (I take no credit for it; I stole it from Tim Sheard). Why does even the first clause type check? Because <tt>TTBol</tt> has type <tt>TTyp Bool</tt>, so both the type variables (a and b) must be <tt>Bool</tt> in the first clause, which means that <tt>Eq :: Equal Bool Bool</tt> is what we're returning.
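To see why an <tt>Equal a b</tt> value carries usable information, here is a tiny standalone demonstration (my example, not from the post): pattern matching on the <tt>Eq</tt> constructor teaches the type checker that the two type variables coincide, which licenses a free coercion.

```haskell
{-# LANGUAGE GADTs #-}

-- The equality-witness GADT from the post.
data Equal a b where
    Eq :: Equal a a

-- Matching on Eq refines b to a, so returning x at type b is accepted.
coerceWith :: Equal a b -> a -> b
coerceWith Eq x = x

main :: IO ()
main = print (coerceWith Eq (42 :: Int))   -- prints 42
```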
Equipped with this equality we can try type checking again.
<pre>typeCheckExp (UApp "if" [c,t,e]) = do
c' ::: TTBol <- typeCheckExp c
t' ::: tt <- typeCheckExp t
e' ::: te <- typeCheckExp e
Eq <- test tt te
return $ TIf c' t' e' ::: tt
</pre>And amazingly this actually works! (A tribute to the hard working ghc implementors.)
One (rather large) fly is left in the ointment. What about variables? What do we do when we type check <tt>UVar</tt>? We must check that there's a bound variable with the right type around. So the type checker needs to be extended with an environment where variables can be looked up. It's mostly straightforward. The environment simply maps a variable to <tt>ATExp</tt>. So here's the complete type checker as it's actually defined.
<pre>type Env = [(Id, ATExp)]
typeCheckExp :: Env -> UExp -> Maybe ATExp
typeCheckExp _ (UDbl d) =
return $ TDbl d ::: TTDbl
typeCheckExp _ (UBol b) =
return $ TBol b ::: TTBol
typeCheckExp r (UApp op [a, b]) | Just dop <- lookup op [("+", DAdd), ("-", DSub), ("*", DMul), ("/", DDiv)] = do
a' ::: TTDbl <- typeCheckExp r a
b' ::: TTDbl <- typeCheckExp r b
return $ TDblOp dop a' b' ::: TTDbl
typeCheckExp r (UApp op [a, b]) | Just bop <- lookup op [("&&", BAnd), ("||", BOr)] = do
a' ::: TTBol <- typeCheckExp r a
b' ::: TTBol <- typeCheckExp r b
return $ TBolOp bop a' b' ::: TTBol
typeCheckExp r (UApp op [a, b]) | Just cop <- lookup op [("==", CEq), ("<=", CLe)] = do
a' ::: TTDbl <- typeCheckExp r a
b' ::: TTDbl <- typeCheckExp r b
return $ TCmpOp cop a' b' ::: TTBol
typeCheckExp r (UApp "if" [c,t,e]) = do
c' ::: TTBol <- typeCheckExp r c
t' ::: tt <- typeCheckExp r t
e' ::: te <- typeCheckExp r e
Eq <- test tt te
return $ TIf c' t' e' ::: tt
typeCheckExp r (ULet i e b) = do
e' ::: te <- typeCheckExp r e
b' ::: tb <- typeCheckExp ((i, TVar i ::: te) : r) b
return $ TLet i te e' b' ::: tb
typeCheckExp r (UVar i) =
lookup i r
typeCheckExp _ _ =
mzero
</pre>Note the <tt>ULet</tt> case which extends the environment. First we type check the expression that's being bound, and then add a variable to the environment and type check the body.
Finally we need to type check the top level:
<pre>typeCheck :: UFun -> Maybe AFun
typeCheck = typeCheckFun []
typeCheckFun :: Env -> UFun -> Maybe AFun
typeCheckFun n (UFun [] b) = do
e ::: t <- typeCheckExp n b
return $ AFun (TBody e) t
typeCheckFun n (UFun ((x, typ):vts) b) =
case typ of
UTBol -> f TTBol
UTDbl -> f TTDbl
where f t = do AFun e r <- typeCheckFun ((x, TVar x ::: t) : n) (UFun vts b); return $ AFun (TLam x t e) (TTArr t r)
</pre>When encountering the expression we just type check it, and for an argument we add a variable with the right type to the environment.
A small test in ghci:
<pre>TExp UExp> mParseUFun "\\ (x::Double) -> x+1" >>= typeCheck
Just (\ (x :: Double) -> x+1.0 :: Double -> Double)
</pre>To be able to extract a function from <tt>AFun</tt> we need some small utilities.
<pre>class Type a where
theType :: TTyp a
instance Type Double where
theType = TTDbl
instance Type Bool where
theType = TTBol
instance (Type a, Type b) => Type (a->b) where
theType = TTArr theType theType
extractFun :: (Type a) => AFun -> Maybe (TFun a)
extractFun = extract theType
extract :: TTyp a -> AFun -> Maybe (TFun a)
extract s (AFun e t) = do
Eq <- test t s
return e
toTFun :: (Type a) => UFun -> Maybe (TFun a)
toTFun = extractFun <=< typeCheck
</pre>The class <tt>Type</tt> allows us to construct the <tt>TTyp</tt> corresponding to a Haskell type via overloading. Using this and the <tt>test</tt> function we can then extract a <tt>TFun</tt> at any type we like. If we try to extract at the wrong type we'll just get <tt>Nothing</tt> and at the right type we get <tt>Just</tt>.
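The <tt>Type</tt>/<tt>test</tt>/existential machinery can be distilled into a self-contained sketch. This is my reduction to two base types, reusing the names from the post; the <tt>AVal</tt> container and <tt>main</tt> are my additions:

```haskell
{-# LANGUAGE GADTs, ExistentialQuantification #-}

-- Runtime representations of the two base types.
data TTyp a where
    TTBol :: TTyp Bool
    TTDbl :: TTyp Double

data Equal a b where
    Eq :: Equal a a

-- Type equality test that produces evidence usable by the type checker.
test :: TTyp a -> TTyp b -> Maybe (Equal a b)
test TTBol TTBol = Just Eq
test TTDbl TTDbl = Just Eq
test _     _     = Nothing

-- Reify a compile-time type as a TTyp value via overloading.
class Type a where
    theType :: TTyp a
instance Type Bool   where theType = TTBol
instance Type Double where theType = TTDbl

-- An existential container pairing a value with its representation.
data AVal = forall a . AVal (TTyp a) a

-- Open the container at a chosen type; Nothing if the types disagree.
extract :: (Type a) => AVal -> Maybe a
extract (AVal t v) = case test t theType of
    Just Eq -> Just v
    Nothing -> Nothing

main :: IO ()
main = print ( extract (AVal TTDbl 1.5) :: Maybe Double
             , extract (AVal TTDbl 1.5) :: Maybe Bool )
```

Extracting at the right type yields <tt>Just 1.5</tt>; at the wrong type, <tt>Nothing</tt>.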
<h3>The Compiler module</h3>Now all we need to do is to write a function <tt>translate</tt> that translates a <tt>TFun a</tt> into the corresponding <tt>a</tt>. Naturally, using LLVM.
Let's start with some simple cases in translating literals to LLVM code.
<pre>compileExp :: TExp a -> CodeGenFunction r (Value a)
compileExp (TDbl d) = return $ valueOf d
compileExp (TBol b) = return $ valueOf b
</pre>The <tt>valueOf</tt> function is simply the one that lifts a Haskell value into an LLVM value. Note how nicely the GADT works out here: we handle both <tt>Double</tt> and <tt>Bool</tt> without any need to compromise type safety.
What about arithmetic? Equally easy.
<pre>compileExp (TDblOp op e1 e2) = bind2 (dblOp op) (compileExp e1) (compileExp e2)
dblOp :: DblOp -> Value Double -> Value Double -> CodeGenFunction r (Value Double)
dblOp DAdd = add
dblOp DSub = sub
dblOp DMul = mul
dblOp DDiv = fdiv
-- This should be in Control.Monad
bind2 :: (Monad m) => (a -> b -> m c) -> m a -> m b -> m c
bind2 f m1 m2 = do
x1 <- m1
x2 <- m2
f x1 x2
</pre>And we can just carry on:
<pre>compileExp (TBolOp op e1 e2) = bind2 (bolOp op) (compileExp e1) (compileExp e2)
compileExp (TCmpOp op e1 e2) = bind2 (cmpOp op) (compileExp e1) (compileExp e2)
compileExp (TIf b t e) = mkIf (compileExp b) (compileExp t) (compileExp e)
bolOp :: BolOp -> Value Bool -> Value Bool -> CodeGenFunction r (Value Bool)
bolOp BAnd = and
bolOp BOr = or
cmpOp :: CmpOp -> Value Double -> Value Double -> CodeGenFunction r (Value Bool)
cmpOp CEq = fcmp FPOEQ
cmpOp CLe = fcmp FPOLE
</pre>(The <tt>&&</tt> and <tt>||</tt> are not short circuiting in this implementation. It would be easy to change.)
It's rather amazing that despite these different branches producing and consuming different types it all works out. It's perfectly type safe and free from coercions. This is the beauty of GADTs.
Oh, yeah, <tt>mkIf</tt>. It's just a bit of messy plumbing: create some basic blocks, test, and jump.
<pre>mkIf :: (IsFirstClass a) =>
CodeGenFunction r (Value Bool) -> CodeGenFunction r (Value a) -> CodeGenFunction r (Value a) -> CodeGenFunction r (Value a)
mkIf mb mt me = do
b <- mb
tb <- newBasicBlock
eb <- newBasicBlock
jb <- newBasicBlock
condBr b tb eb
defineBasicBlock tb
t <- mt
br jb
defineBasicBlock eb
e <- me
br jb
defineBasicBlock jb
phi [(t, tb), (e, eb)]
</pre>OK, so I was lying. The <tt>translate</tt> function is not quite as easy as that. Just as with type checking we need an environment because of variables. It's easy to add, though, and here's the real code.
<pre>compileExp :: (Type a, IsFirstClass a) => Env -> TExp a -> CodeGenFunction r (Value a)
compileExp _ (TDbl d) = return $ valueOf d
compileExp _ (TBol b) = return $ valueOf b
compileExp r (TDblOp op e1 e2) = bind2 (dblOp op) (compileExp r e1) (compileExp r e2)
compileExp r (TBolOp op e1 e2) = bind2 (bolOp op) (compileExp r e1) (compileExp r e2)
compileExp r (TCmpOp op e1 e2) = bind2 (cmpOp op) (compileExp r e1) (compileExp r e2)
compileExp r (TIf b t e) = mkIf (compileExp r b) (compileExp r t) (compileExp r e)
compileExp r (TLet i t e b) = do
e' <- compileExp' r t e
compileExp ((i, AValue e' t):r) b
compileExp r (TVar i) = return $ fromJust $ castAValue theType =<< lookup i r -- lookup cannot fail on type checked code
compileExp' :: Env -> TTyp a -> TExp a -> CodeGenFunction r (Value a)
compileExp' r TTDbl e = compileExp r e
compileExp' r TTBol e = compileExp r e
compileExp' _ _ _ = error $ "compileExp': functions not allowed yet"
data AValue = forall a . AValue (Value a) (TTyp a)
castAValue :: TTyp a -> AValue -> Maybe (Value a)
castAValue t (AValue v s) = do
Eq <- test t s
return v
type Env = [(Id, AValue)]
</pre>Exactly as with the type checking environment, we stick the generated values in an environment, and use <tt>castAValue</tt> to project them out of the existential container. The <tt>fromJust</tt> call in the <tt>TVar</tt> case cannot fail on type checked code, but with my string based variable representation I have no evidence of this in the <tt>TExp</tt>, so there's actually a cast in the variable case that can fail if scope and type checking have not been performed. The <tt>compileExp'</tt> function is there to placate the type checker, giving it evidence that we only bind base values.
The rest of the code generation module is just housekeeping. It's a little ugly, but not terrible.
<pre>-- | Compile a TFun into the corresponding LLVM code.
compileFunction :: (Translate a) =>
TFun a -> CodeGenModule (Function (RetIO a))
compileFunction = createFunction ExternalLinkage . compileFun []
class Compile a where
type CG a
type RetIO a
type Returns a
compileFun :: Env -> TFun a -> CG a
instance Compile Double where
type CG Double = CodeGenFunction Double ()
type RetIO Double = IO Double
type Returns Double = Double
compileFun r (TBody e) = compileExp r e >>= ret
-- TLam is not well typed
instance Compile Bool where
type CG Bool = CodeGenFunction Bool ()
type RetIO Bool = IO Bool
type Returns Bool = Bool
compileFun r (TBody e) = compileExp r e >>= ret
-- TLam is not well typed
instance (Type a, Compile b) => Compile (a -> b) where
type CG (a->b) = Value a -> CG b
type RetIO (a->b) = a -> RetIO b
type Returns (a->b) = Returns b
-- TBody is not well typed
compileFun r (TLam i t e) = \ x -> compileFun ((i, AValue x t):r) e
</pre>The verbosity and large number of type functions in this section has convinced me that I need to simplify some of the types and classes involved in the LLVM code generation.
To convert an LLVM module to a callable function we call the JIT. This produces a function that returns a value in the IO monad (to be on the safe side), so we need to get rid of that IO; finally we can get rid of the top level IO as well, because externally what we are doing is really pure (in some sense).
<pre>translate :: (Translate a) => TFun a -> a
translate = unsafePerformIO . fmap unsafePurify . simpleFunction . compileFunction
</pre>The <tt>Translate</tt> context is just an abbreviation for a big context enforced by the LLVM functions. It looks horrendous, but the type checker figured it out for me and I just pasted it in.
<pre>{-# LANGUAGE TypeFamilies, FlexibleContexts, ExistentialQuantification, FlexibleInstances, UndecidableInstances #-}
module Compile(Translate, translate) where
import Data.Maybe
import Prelude hiding (and, or)
import TExp
import LLVM.Core hiding (CmpOp)
import LLVM.ExecutionEngine
import System.IO.Unsafe(unsafePerformIO)
class (Type a,
Unsafe (RetIO a) a,
FunctionArgs (RetIO a) (CG a) (CodeGenFunction (Returns a) ()),
IsFunction (RetIO a),
Compile a,
Translatable (RetIO a)) =>
Translate a
instance (Type a,
Unsafe (RetIO a) a,
FunctionArgs (RetIO a) (CG a) (CodeGenFunction (Returns a) ()),
IsFunction (RetIO a),
Compile a,
Translatable (RetIO a)) =>
Translate a
</pre><h3>Conclusion</h3>And that concludes the three parts of the compiler. In about 400 lines of code we can compile a small subset of Haskell expressions to (efficient) machine code. After type checking the rest of the processing is done in a type safe manner (except for a cast in <tt>TVar</tt>) which is the intention of the high level LLVM interface.
Oh, and if you instrument the code generator a little you can peek at the machine code being produced. For instance, for this input to <tt>compile</tt>
<pre>\ (x::Double) ->
let y = x*(x-1) in
let z = x/y + 1 in
if y <= 0 then 0 else 1/(y-z)
</pre>we get
<pre>__fun1:
subl $12, %esp
movsd LCPI1_0, %xmm0
movsd 16(%esp), %xmm1
movapd %xmm1, %xmm2
subsd %xmm0, %xmm2
mulsd %xmm1, %xmm2
pxor %xmm3, %xmm3
ucomisd %xmm2, %xmm3
jae LBB1_3
LBB1_1:
divsd %xmm2, %xmm1
addsd %xmm0, %xmm1
subsd %xmm1, %xmm2
movsd LCPI1_0, %xmm0
divsd %xmm2, %xmm0
LBB1_2:
movsd %xmm0, (%esp)
fldl (%esp)
addl $12, %esp
ret
LBB1_3:
pxor %xmm0, %xmm0
jmp LBB1_2
</pre>https://augustss.blogspot.com/2009/06/more-llvm-recently-someone-asked-me-on.htmlnoreply@blogger.com (augustss)10tag:blogger.com,1999:blog-25541020.post-3941171818143196666Sat, 07 Feb 2009 20:31:00 +00002009-02-09T11:43:08.996+00:00BASICDSLHaskell<h2>More BASIC</h2>
Not that anybody should care, but I've reimplemented my BASIC.
<p>
Here's a simple program.
<pre>
{-# LANGUAGE ExtendedDefaultRules, OverloadedStrings #-}
import BASIC
main = runBASIC $ do
10 GOSUB 1000
20 PRINT "* Welcome to HiLo *"
30 GOSUB 1000
100 LET I := INT(100 * RND(0))
200 PRINT "Guess my number:"
210 INPUT X
220 LET S := SGN(I-X)
230 IF S <> 0 THEN 300
240 FOR X := 1 TO 5
250 PRINT X*X;" You won!"
260 NEXT X
270 STOP
300 IF S <> 1 THEN 400
310 PRINT "Your guess ";X;" is too low."
320 GOTO 200
400 PRINT "Your guess ";X;" is too high."
410 GOTO 200
1000 PRINT "*******************"
1010 RETURN
9999 END
</pre>
In some ways this is a step backwards, since it requires some language extensions in Main. But I wanted to be able to use semicolon in the print statement.
<p>
But there it is, an exciting game!
<pre>
*******************
* Welcome to HiLo *
*******************
Guess my number:
50
Your guess 50 is too high.
Guess my number:
25
Your guess 25 is too low.
Guess my number:
37
Your guess 37 is too low.
Guess my number:
44
Your guess 44 is too low.
Guess my number:
47
Your guess 47 is too low.
Guess my number:
48
1 You won!
4 You won!
9 You won!
16 You won!
25 You won!
</pre>https://augustss.blogspot.com/2009/02/more-basic-not-that-anybody-should-care.htmlnoreply@blogger.com (augustss)8tag:blogger.com,1999:blog-25541020.post-5253528735726768410Fri, 06 Feb 2009 16:50:00 +00002016-11-08T16:28:45.786+00:00BASICBenchmarkDSLHaskellLLVM<h2>Is Haskell fast?</h2>Let's do a simple benchmark comparing Haskell to C. My benchmark computes an approximation to infinity by adding up 1/n. Here is the C code: <pre>#include <stdio.h>
int
main(int argc, char **argv)
{
double i, s;
s = 0;
for (i = 1; i < 100000000; i++)
s += 1/i;
printf("Almost infinity is %g\n", s);
}
</pre>And running it <pre>Lennarts-Computer% gcc -O3 inf.c -o inf
Lennarts-Computer% time ./inf
Almost infinity is 18.9979
1.585u 0.009s 0:01.62 97.5% 0+0k 0+0io 0pf+0w
</pre>And now the Haskell code: <pre>import BASIC
main = runBASIC' $ do
10 LET I =: 1
20 LET S =: 0
30 LET S =: S + 1/I
40 LET I =: I + 1
50 IF I <> 100000000 THEN 30
60 PRINT "Almost infinity is"
70 PRINT S
80 END
</pre>And running it: <pre>Lennarts-Computer% ghc --make Main.hs
[4 of 4] Compiling Main ( Main.hs, Main.o )
Linking Main ...
Lennarts-Computer% ./Main
Almost infinity is
18.9979
CPU time: 1.57s
</pre>As you can see it's about the same time. In fact the assembly code for the loops look pretty much the same. Here's the Haskell one: <pre>LBB1_1: ## _L4
movsd LCPI1_0, %xmm2
movapd %xmm1, %xmm3
addsd %xmm2, %xmm3
ucomisd LCPI1_1, %xmm3
divsd %xmm1, %xmm2
addsd %xmm2, %xmm0
movapd %xmm3, %xmm1
jne LBB1_1 ## _L4
</pre>https://augustss.blogspot.com/2009/02/is-haskell-fast-lets-do-simple.htmlnoreply@blogger.com (augustss)11tag:blogger.com,1999:blog-25541020.post-7024343775777574640Fri, 06 Feb 2009 11:12:00 +00002009-02-06T11:18:54.985+00:00BASICDSLHaskell<h2>Regression</h2>
They say that as you get older you regress back towards childhood. So I present you with today's Haskell program (the idea shamelessly stolen from JoshTriplett from #haskell on IRC):
<pre>
import BASIC
main = runBASIC $ do
10 LET X =: 1
20 PRINT "Hello BASIC world!"
30 LET X =: X + 1
40 IF X <> 11 THEN 20
50 END
</pre>
Yes, it runs. (I'm sorry about the <tt>=:</tt> instead of <tt>=</tt>, but some things are just too wired into Haskell to change.)https://augustss.blogspot.com/2009/02/regression-they-say-that-as-you-get.htmlnoreply@blogger.com (augustss)13tag:blogger.com,1999:blog-25541020.post-6096513306422692728Wed, 21 Jan 2009 01:07:00 +00002009-01-21T08:32:12.079+00:00Code generationHaskellLLVM<h2>A performance update</h2>
I've continued playing with the LLVM. I discovered that when generating code for the <tt>normcdf</tt> and Black-Scholes functions I did not tell LLVM that the functions that were called (<tt>exp</tt> etc.) are actually pure functions. That meant that the LLVM didn't perform CSE properly.
<p>
So here are some updated numbers for computing an option prices for 10,000,000 options:</p>
<ul>
<li>Pure Haskell: 8.7s</li>
<li>LLVM: 2.0s</li>
</ul>
Just as a reminder, the code for <tt>normcdf</tt> looks like this:
<pre>
normcdf x = x %< 0 ?? (1 - w, w)
  where w = 1.0 - 1.0 / sqrt (2.0 * pi) * exp(-l*l / 2.0) * poly k
        k = 1.0 / (1.0 + 0.2316419 * l)
        l = abs x
        poly = horner coeff
        coeff = [0.0,0.31938153,-0.356563782,1.781477937,-1.821255978,1.330274429]
</pre>
A noteworthy thing is that exactly the same code can be used both for the pure Haskell and the LLVM code generation; it's just a matter of overloading.
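The overloading trick can be illustrated without the LLVM bindings at all. Below is a toy sketch (the <tt>Expr</tt> type and its <tt>Num</tt> instance are invented for illustration, not part of the bindings): the same polynomial code either computes a number or builds an expression tree, depending purely on which <tt>Num</tt> instance is chosen.

```haskell
-- Toy illustration of "same code, two interpretations" via overloading.
data Expr = Lit Double | Add Expr Expr | Mul Expr Expr
  deriving Show

instance Num Expr where
  (+) = Add
  (*) = Mul
  fromInteger = Lit . fromInteger
  negate e = Mul (Lit (-1)) e
  abs = undefined
  signum = undefined

-- One definition, usable at both types.
poly :: Num a => a -> a
poly x = x * x + 2 * x + 1

main :: IO ()
main = do
  print (poly (3 :: Double))  -- interpreted: plain arithmetic, prints 16.0
  print (poly (Lit 3))        -- "compiled": builds an expression tree
```

In the real bindings the second interpretation is <tt>TValue r a</tt> instead of a tree, but the mechanism is the same.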
<h3>Vectors</h3>
A very cool aspect of the LLVM is that it has vector instructions. On the x86 these translate into using the SSE extensions to the processor and can speed up computations by doing things in parallel.
<p>
Again, by using overloading, the exact same code can be used to compute over vectors of numbers as with individual numbers.
<p>
So what about performance? I used four element vectors of 32 bit floating point numbers and got these results:
<ul>
<li>Pure Haskell: 8.7s</li>
<li>LLVM, scalar: 2.0s</li>
<li>LLVM, vector: 1.1s</li>
<li>C, gcc -O3: 2.5s</li>
</ul>
Some caveats if you try out vectors in the LLVM.
<ul>
<li>Only on MacOS does the LLVM package give you fast primitive functions, because that's the only platform that seems to have this as a standard.</li>
<li>The vector version of floating point comparison has not been implemented in the LLVM yet.</li>
<li>Do not use two-element vectors of 32-bit floats. This will generate code that is wrong on the x86. I sent in a bug report about this, but was told that it is a feature and not a bug. (I kid you not.) To make the code right you have to manually insert EMMS instructions.</li>
<li>The GHC FFI is broken for all operations that allocate memory for a <tt>Storable</tt>, e.g., <tt>alloca</tt>, <tt>with</tt>, <tt>withArray</tt> etc. These operations do not take the alignment into account when allocating. This means that, e.g., a vector of four floats may end up on 8 byte alignment instead of 16. This generates a segfault.</li>
</ul>
On the whole, I'm pretty happy with the LLVM performance now; about 8 times faster than ghc on this example.
<p>
[Edit:] Added point about broken FFI.https://augustss.blogspot.com/2009/01/performance-update-ive-continued.htmlnoreply@blogger.com (augustss)12tag:blogger.com,1999:blog-25541020.post-3502914505601036632Sat, 10 Jan 2009 15:33:00 +00002009-01-10T16:22:35.886+00:00Code generationHaskellLLVM<h2>LLVM arithmetic</h2>
So we want to compute <i>x</i><sup>2</sup>-5<i>x</i>+6 using the Haskell LLVM bindings. It would look something like this.
<pre>
xsq <- mul x x
x5 <- mul 5 x
r1 <- sub xsq x5
r <- add r1 6
</pre>
Not very readable, it would be nicer to write
<pre>
r <- x^2 - 5*x + 6
</pre>
But, e.g., the <tt>add</tt> instruction has the (simplified) type <tt>Value a -> Value a -> CodeGenFunction r (Value a)</tt>, where <tt>CodeGenFunction</tt> is the monad for generating code for a function. (BTW, the <tt>r</tt> type variable is used to keep track of the return type of the function, used by the <tt>ret</tt> instruction.)
<p>
We can't change the return type of <tt>add</tt>, but we can change the argument type.
<pre>
type TValue r a = CodeGenFunction r (Value a)
add' :: TValue r a -> TValue r a -> TValue r a
add' x y = do x' <- x; y' <- y; add x' y'
</pre>
Now the type fits what the <tt>Num</tt> class wants. And we can make an instance declaration.
<pre>
instance (Num a) => Num (TValue r a) where
    (+) = add'
    (-) = sub'
    (*) = mul'
    fromInteger = return . valueOf . fromInteger
</pre>
We are getting close, the only little thing is that the <tt>x</tt> in our original LLVM code has type <tt>Value a</tt> rather than <tt>TValue r a</tt>, but <tt>return</tt> takes care of that. So:
<pre>
let x' = return x
r <- x'^2 - 5*x' + 6
</pre>
And a quick look at the generated LLVM code (for Double) shows us that all is well.
<pre>
; x in %0
%1 = mul double %0, %0
%2 = mul double 5.000000e+00, %0
%3 = sub double %1, %2
%4 = add double %3, 6.000000e+00
</pre>
<p>
All kinds of numeric instances and some other goodies are available in the <tt>LLVM.Util.Arithmetic</tt> module. Here is a complete Fibonacci (again) using this.
<pre>
import Data.Int
import LLVM.Core
import LLVM.ExecutionEngine
import LLVM.Util.Arithmetic
mFib :: CodeGenModule (Function (Int32 -> IO Int32))
mFib = recursiveFunction $ \ rfib n ->
    n %< 2 ? (1, rfib (n-1) + rfib (n-2))
main :: IO ()
main = do
    let fib = unsafeGenerateFunction mFib
    print (fib 22)
</pre>
The <tt>%<</tt> operator compares values returning a <tt>TValue r Bool</tt>. The <tt>c ? (t, e)</tt> is a conditional expression, like C's <tt>c ? t : e</tt>.
<p>
The type given to mFib is not the most general one, of course. The most general one can accept any numeric type for argument and result. Here it is:
<pre>
mFib :: (Num a, Cmp a, IsConst a,
         Num b, Cmp b, IsConst b, FunctionRet b) =>
        CodeGenModule (Function (a -> IO b))
mFib = recursiveFunction $ \ rfib n ->
    n %< 2 ? (1, rfib (n-1) + rfib (n-2))
</pre>
It's impossible to generate code for <tt>mFib</tt> directly; code can only be generated for monomorphic types. So a type signature is needed when generating code to force a monomorphic type.
<pre>
main = do
    let fib :: Int32 -> Double
        fib = unsafeGenerateFunction mFib
        fib' :: Int16 -> Int64
        fib' = unsafeGenerateFunction mFib
    print (fib 22, fib' 22)
</pre>
<h3>Another example</h3>
Let's try a more complex example. To pick something mathematical with lots of formulae we'll do the cumulative distribution function of the normal distribution. For the precision of a Float it can be coded like this in Haskell (normal Haskell, no LLVM):
<pre>
normcdf x = if x < 0 then 1 - w else w
  where w = 1 - 1 / sqrt (2 * pi) * exp(-l*l / 2) * poly k
        k = 1 / (1 + 0.2316419 * l)
        l = abs x
        poly = horner coeff
        coeff = [0.0,0.31938153,-0.356563782,1.781477937,-1.821255978,1.330274429]
horner coeff base = foldr1 multAdd coeff
  where multAdd x y = y*base + x
</pre>
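As a quick sanity check of the Horner evaluation (the <tt>horner</tt> definition above, unchanged), evaluating the coefficient list for <i>x</i><sup>2</sup>-5<i>x</i>+6 at a root should give zero:

```haskell
-- horner takes coefficients lowest power first and evaluates by Horner's rule.
horner :: Num a => [a] -> a -> a
horner coeff base = foldr1 multAdd coeff
  where multAdd x y = y*base + x

main :: IO ()
main = print (horner [6,-5,1] 2 :: Double)  -- x^2 - 5x + 6 at x=2, prints 0.0
```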
We cannot use this directly; it has type <tt>normcdf :: (Floating a, Ord a) => a -> a</tt>. The <tt>Ord</tt> context is a problem, because there are no instances of <tt>Ord</tt> for LLVM types. The comparison is the root of the problem, since it returns a <tt>Bool</tt> rather than a <tt>TValue r Bool</tt>.
<p>
It's possible to hide the Prelude and overload the comparisons, but you cannot overload the <tt>if</tt> construct. So a little rewriting is necessary regardless. Let's just bite the bullet and change the first line.
<pre>
normcdf x = x %< 0 ? (1 - w, w)
</pre>
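The idea of a conditional you can overload (unlike <tt>if</tt>, which is baked into the language) can be sketched like this. The class name and the <tt>Bool</tt> instance are invented for illustration; the real <tt>??</tt> for lifted LLVM booleans lives in <tt>LLVM.Util.Arithmetic</tt>:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- Hypothetical sketch: a conditional operator that can be overloaded per
-- boolean type, which `if` cannot.
class Conditional b a where
  (??) :: b -> (a, a) -> a

-- For plain Bool it degenerates to an ordinary if-then-else.
instance Conditional Bool a where
  c ?? (t, e) = if c then t else e

main :: IO ()
main = putStrLn ((3 < (0 :: Int)) ?? ("neg", "nonneg"))  -- prints nonneg
```

An instance at <tt>TValue r Bool</tt> would instead emit the branch (or select) instructions.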
And with that change, all we need to do is
<pre>
mNormCDF = createFunction ExternalLinkage $ arithFunction $ normcdf
main :: IO ()
main = do
    writeFunction "CDF.bc" (mNormCDF :: CodeGenModule (Function (Float -> IO Float)))
</pre>
So what happened here? Looking at <tt>normcdf</tt>, it contains things that the LLVM cannot handle, like lists. But all the list operations happen when the Haskell code runs and nothing of that is left in the LLVM code.
<p>
If you optimize and generate code for <tt>normcdf</tt> it looks like this (leaving out constants and half the code):
<pre>
__fun1:
subl $28, %esp
pxor %xmm0, %xmm0
ucomiss 32(%esp), %xmm0
jbe LBB1_2
movss 32(%esp), %xmm0
mulss %xmm0, %xmm0
divss LCPI1_0, %xmm0
movss %xmm0, (%esp)
call _expf
fstps 24(%esp)
movss 32(%esp), %xmm0
mulss LCPI1_1, %xmm0
movss %xmm0, 8(%esp)
movss LCPI1_2, %xmm0
movss 8(%esp), %xmm1
addss %xmm0, %xmm1
movss %xmm1, 8(%esp)
movaps %xmm0, %xmm1
divss 8(%esp), %xmm1
movaps %xmm1, %xmm2
mulss LCPI1_3, %xmm2
addss LCPI1_4, %xmm2
mulss %xmm1, %xmm2
addss LCPI1_5, %xmm2
mulss %xmm1, %xmm2
addss LCPI1_6, %xmm2
mulss %xmm1, %xmm2
addss LCPI1_7, %xmm2
mulss %xmm1, %xmm2
pxor %xmm1, %xmm1
addss %xmm1, %xmm2
movss 24(%esp), %xmm1
mulss LCPI1_8, %xmm1
mulss %xmm2, %xmm1
addss %xmm0, %xmm1
subss %xmm1, %xmm0
movss %xmm0, 20(%esp)
flds 20(%esp)
addl $28, %esp
ret
LBB1_2:
...
</pre>
And that looks quite nice, in my opinion.
<h3>Black-Scholes</h3>
I work at a bank these days, so let's do the most famous formula in financial maths, the Black-Scholes formula for pricing options. Here's a Haskell rendition of it.
<pre>
blackscholes iscall s x t r v = if iscall then call else put
where call = s * normcdf d1 - x*exp (-r*t) * normcdf d2
put = x * exp (-r*t) * normcdf (-d2) - s * normcdf (-d1)
d1 = (log(s/x) + (r+v*v/2)*t) / (v*sqrt t)
d2 = d1 - v*sqrt t
</pre>
Again, it uses an <tt>if</tt>, so it needs a little fix.
<pre>
blackscholes iscall s x t r v = iscall ? (call, put)
</pre>
With that fix, code can be generated directly from <tt>blackscholes</tt>. The calls to <tt>normcdf</tt> are expanded inline, but it's easy to make some small changes to the code so that it actually does function calls.
<h3>Some figures</h3>
To test the speed of the generated code I ran <tt>blackscholes</tt> over a portfolio of 10,000,000 options and summed their value. The time excludes the time to set up the portfolio array, because that is done in Haskell. I also ran the code in pure Haskell on a list with 10,000,000 elements.
<pre>
Unoptimized LLVM 17.5s
Optimized LLVM 8.2s
Pure Haskell 9.3s
</pre>
The only surprise here is how well pure Haskell (with -O2) performs. This is a very good example for Haskell though, because the list gets fused away and everything is strict. A lot of the time is spent in <tt>log</tt> and <tt>exp</tt> in this code, so perhaps the similar results are not so surprising after all.https://augustss.blogspot.com/2009/01/llvm-arithmetic-so-we-want-to-compute-x.htmlnoreply@blogger.com (augustss)0tag:blogger.com,1999:blog-25541020.post-2319446660716223785Wed, 07 Jan 2009 16:14:00 +00002009-01-10T15:57:37.551+00:00Code generationDSLHaskellLLVM<h2>LLVM</h2>
The <a href="http://llvm.org/">LLVM</a>, Low Level Virtual Machine, is a really cool compiler infrastructure project with many participants. The idea is that if you want to make a new high quality compiler you just have to generate LLVM code, and then there are lots of optimizations and code generators available to get fast code.
<p>
There are different ways to generate input to the LLVM tools. You can generate a text file with LLVM code and feed it to the tools, or you can use bindings for some programming language and programmatically build the LLVM code. The original bindings from the LLVM project are for C++, but they also provide C bindings. On top of the C bindings you can easily interface to other languages; for instance OCaml and Haskell.
<p>
There are also different things you can do with LLVM code you have built programmatically. You can transform it, you can write it to a file, you can run an interpreter on it, or execute it with a JIT compiler.
<h3>Haskell LLVM bindings</h3>
There is a Haskell binding to the LLVM. It has two layers. You can either work on the C API level and have ample opportunity to shoot your own limbs to pieces, or you can use the high level interface which is mostly safe.
<p>
Bryan O'Sullivan did all the hard work of taking the C header files and producing the corresponding Haskell FFI files. He also made a first stab at the high level interface, which I have since changed a lot (for better or for worse).
<h3>An example</h3>
Let's do an example. We'll write the LLVM code for this function
<pre>
f x y z = (x + y) * z
</pre>
In Haskell this function is polymorphic, but when generating machine code we have to settle for a type. Let's pick <tt>Int32</tt>. (The Haskell <tt>Int</tt> type cannot be used in talking to LLVM; it doesn't have a well-defined size.) Here is how it looks:
<pre>
mAddMul :: CodeGenModule (Function (Int32 -> Int32 -> Int32 -> IO Int32))
mAddMul =
    createFunction ExternalLinkage $ \ x y z -> do
        t <- add x y
        r <- mul t z
        ret r
</pre>
For comparison, the LLVM code in text form for this would be:
<pre>
define i32 @_fun1(i32, i32, i32) {
  %3 = add i32 %0, %1
  %4 = mul i32 %3, %2
  ret i32 %4
}
</pre>
So what does the Haskell code say? The <tt>mAddMul</tt> definition is something in the <tt>CodeGenModule</tt> monad, and it generates a <tt>Function</tt> of type <tt>Int32 -> Int32 -> Int32 -> IO Int32</tt>. That last is the type of <tt>f</tt> above, except for that <tt>IO</tt>. Why the <tt>IO</tt>? The Haskell LLVM bindings force all defined functions to return something in the IO monad, because there are no restrictions on what can happen in the LLVM code; it might very well do IO. So to be on the safe side, there's always an IO on the type. If we know the function is harmless, we can use <tt>unsafePerformIO</tt> to get rid of it.
<p>
So the code does a <tt>createFunction</tt> which does what the name suggests. The <tt>ExternalLinkage</tt> argument says that this function will be available outside the module it's in, the obvious opposite being <tt>InternalLinkage</tt>. Using <tt>InternalLinkage</tt> is like saying <tt>static</tt> on the top level in C. In this example it doesn't really matter which we pick.
<p>
The function has three arguments <tt>x y z</tt>. The last argument to <tt>createFunction</tt> should be a lambda expression with the right number of arguments, i.e., the number of arguments should agree with the type. We then use monadic syntax to generate an <tt>add</tt>, <tt>mul</tt>, and <tt>ret</tt> instruction.
<p>
The code looks like assembly code, which is the level that LLVM is at. It's a somewhat peculiar assembly code, because it's on SSA (Static Single Assignment) form. More about that later.
<p>
So what can we do with this function? Well, we can generate machine code for it and call it.
<pre>
main = do
    addMul <- simpleFunction mAddMul
    a <- addMul 2 3 4
    print a
</pre>
In this code <tt>addMul</tt> has type <tt>Int32 -> Int32 -> Int32 -> IO Int32</tt>, so it has to be called in the IO monad. Since this is a pure function, we can make the type pure, i.e., <tt>Int32 -> Int32 -> Int32 -> Int32</tt>.
<pre>
main = do
    addMul <- simpleFunction mAddMul
    let addMul' = unsafePurify addMul
    print (addMul' 2 3 4)
</pre>
The <tt>unsafePurify</tt> function is simply an extension of <tt>unsafePerformIO</tt> that drops the IO on the result of a function.
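For a one-argument function the idea can be sketched like this. The name <tt>unsafePurify1</tt> is made up here; the bindings' actual <tt>unsafePurify</tt> handles functions of any arity:

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- Simplified one-argument version of the idea: drop the IO on the result.
-- Only safe when the underlying function really is pure.
unsafePurify1 :: (a -> IO b) -> (a -> b)
unsafePurify1 f = \ x -> unsafePerformIO (f x)

main :: IO ()
main = print (unsafePurify1 (\ x -> return (x + 1)) (41 :: Int))  -- prints 42
```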
<p>
So that was pretty easy. To make a function, just specify the LLVM code using the LLVM DSEL that the Haskell bindings provide.
<h3>Fibonacci</h3>
No FP example is complete without the Fibonacci function, so here it is.
<pre>
mFib :: CodeGenModule (Function (Word32 -> IO Word32))
mFib = do
    fib <- newFunction ExternalLinkage
    defineFunction fib $ \ arg -> do
        -- Create the two basic blocks.
        recurse <- newBasicBlock
        exit <- newBasicBlock
        -- Test if arg > 2
        test <- icmp IntUGT arg (2::Word32)
        condBr test recurse exit
        -- Just return 1 if not > 2
        defineBasicBlock exit
        ret (1::Word32)
        -- Recurse if > 2, using the cumbersome plus to add the results.
        defineBasicBlock recurse
        x1 <- sub arg (1::Word32)
        fibx1 <- call fib x1
        x2 <- sub arg (2::Word32)
        fibx2 <- call fib x2
        r <- add fibx1 fibx2
        ret r
    return fib
</pre>
Instead of using <tt>createFunction</tt> to create the function we're using <tt>newFunction</tt> and <tt>defineFunction</tt>. The former is a shorthand for the latter two together. But splitting creating the function from actually defining it means that we can refer to the function before it's been defined. We need this since <tt>fib</tt> is recursive.
<p>
Every instruction in the LLVM code belongs to a basic block. A basic block is a sequence of non-jump instructions (<tt>call</tt> is allowed in the LLVM) ending with some kind of jump. It is always entered at the top only. The top of each basic block can be thought of as a label that you can jump to, and those are the only places that you can jump to.
<p>
The code for fib starts with a test if the argument is Unsigned Greater Than 2. The <tt>condBr</tt> instruction branches to <tt>recurse</tt> if <tt>test</tt> is true otherwise to <tt>exit</tt>. To be able to refer to the two branch labels (i.e., basic blocks) before they are defined we create them with <tt>newBasicBlock</tt> and then later define them with <tt>defineBasicBlock</tt>. The <tt>defineBasicBlock</tt> simply starts a new basic block that runs to the next basic block start, or to the end of the function. The type system does <b>not</b> check that the basic block ends with a branch (I can't figure out how to do that without making the rest of the code more cumbersome).
<p>
In the false branch we simply return 1, and in the true branch we make the two usual recursive calls, add the results, and return the sum.
<p>
As you can see a few type annotations are necessary on constants. In my opinion they are quite annoying, because if you write anything different from <tt>::Word32</tt> in those annotations there will be a type error. This means that in principle the compiler has all the information, it's just too "stupid" to use it.
<p>
The performance you get from this Fibonacci function is decent, but in fact worse than GHC with -O2 gives. Even with full optimization turned on for the LLVM code it's still not as fast as GHC for this function.
<p>
[Edit: Added assembly] Here is the assembly code for Fibonacci. Note how there is only one recursive call. The other call has been transformed into a loop.
<pre>
_fib:
pushl %edi
pushl %esi
subl $4, %esp
movl 16(%esp), %esi
cmpl $2, %esi
jbe LBB1_4
LBB1_1:
movl $1, %edi
.align 4,0x90
LBB1_2:
leal -1(%esi), %eax
movl %eax, (%esp)
call _fib
addl %edi, %eax
addl $4294967294, %esi
cmpl $2, %esi
movl %eax, %edi
ja LBB1_2
LBB1_3:
addl $4, %esp
popl %esi
popl %edi
ret
LBB1_4:
movl $1, %eax
jmp LBB1_3
</pre>
<h3>Hello, World!</h3>
The code for printing "Hello, World!":
<pre>
import Data.Word
import LLVM.Core
import LLVM.ExecutionEngine
bldGreet :: CodeGenModule (Function (IO ()))
bldGreet = do
    puts <- newNamedFunction ExternalLinkage "puts" :: TFunction (Ptr Word8 -> IO Word32)
    greetz <- createStringNul "Hello, World!"
    func <- createFunction ExternalLinkage $ do
        tmp <- getElementPtr greetz (0::Word32, (0::Word32, ()))
        call puts tmp -- Throw away return value.
        ret ()
    return func
main :: IO ()
main = do
    greet <- simpleFunction bldGreet
    greet
</pre>
To get access to the C function <tt>puts</tt> we simply declare it and rely on the linker to link it in. The <tt>greetz</tt> variable has type pointer to array of characters. So to get a pointer to the first character we have to use the rather complicated <tt>getElementPtr</tt> instruction. See the <a href="http://llvm.org/docs/GetElementPtr.html">FAQ about it</a>.
<h3>Phi instructions</h3>
Let's do the following simple C function
<pre>
int f(int x)
{
if (x < 0) x = -x;
return (x+1);
}
</pre>
Let's try to write some corresponding LLVM code:
<pre>
createFunction ExternalLinkage $ \ x -> do
    xneg <- newBasicBlock
    xpos <- newBasicBlock
    t <- icmp IntSLT x (0::Int32)
    condBr t xneg xpos
    defineBasicBlock xneg
    x' <- sub (0::Int32) x
    br xpos
    defineBasicBlock xpos
    r1 <- add ??? (1::Int32)
    ret r1
</pre>
But what should we put at <tt>???</tt>? When jumping from the <tt>condBr</tt> the value is in <tt>x</tt>, but when jumping from the negation block the value is in <tt>x'</tt>. And this is how SSA works. Every instruction puts the value in a new "register", so this situation is unavoidable. This is why SSA (and thus LLVM) form has <tt>phi</tt> instructions. This is a pseudo-instruction to tell the code generator what registers should be merged at the entry of a basic block. So the real code looks like this:
<pre>
mAbs1 :: CodeGenModule (Function (Int32 -> IO Int32))
mAbs1 =
    createFunction ExternalLinkage $ \ x -> do
        top <- getCurrentBasicBlock
        xneg <- newBasicBlock
        xpos <- newBasicBlock
        t <- icmp IntSLT x (0::Int32)
        condBr t xneg xpos
        defineBasicBlock xneg
        x' <- sub (0::Int32) x
        br xpos
        defineBasicBlock xpos
        r <- phi [(x, top), (x', xneg)]
        r1 <- add r (1::Int32)
        ret r1
</pre>
The <tt>phi</tt> instruction takes a list of registers to merge, and paired up with each register is the basic block that the jump comes from. Since the first basic block in a function is created implicitly we have to get it with <tt>getCurrentBasicBlock</tt>, which returns the current basic block.
<p>
If, like me, you have a perverse interest in the machine code that gets generated, here is the optimized code for that function on x86:
<pre>
__fun1:
movl 4(%esp), %eax
movl %eax, %ecx
sarl $31, %ecx
addl %ecx, %eax
xorl %ecx, %eax
incl %eax
ret
</pre>
Note how the conditional jump has cleverly been replaced by some non-jumping instructions. I think this code is as good as it gets.
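For the curious, the branch-free sequence above is the classic sign-mask trick. Expressed in Haskell (a sketch of what the instructions compute, not code from the bindings):

```haskell
import Data.Bits (shiftR, xor)
import Data.Int (Int32)

-- What the sarl/addl/xorl/incl sequence computes: abs x + 1, branch-free.
-- shiftR on Int32 is an arithmetic shift, so m is 0 for x >= 0 and -1 for x < 0.
absPlusOne :: Int32 -> Int32
absPlusOne x = ((x + m) `xor` m) + 1
  where m = x `shiftR` 31

main :: IO ()
main = print (map absPlusOne [-5, 0, 7])  -- prints [6,1,8]
```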
<p>
<h3>Loops and arrays</h3>
Let's do some simple array code, the dot product of two vectors. The function takes a length and pointers to two vectors. It sums the elementwise product of the vectors. Here's the C code:
<pre>
double
dotProd(unsigned int len, double *aPtr, double *bPtr)
{
    unsigned int i;
    double s;
    s = 0;
    for (i = 0; i != len; i++)
        s += aPtr[i] * bPtr[i];
    return s;
}
</pre>
The corresponding LLVM code is much more complicated and has some new twists.
<pre>
import Data.Word
import Foreign.Marshal.Array
import LLVM.Core
import LLVM.ExecutionEngine
mDotProd :: CodeGenModule (Function (Word32 -> Ptr Double -> Ptr Double -> IO Double))
mDotProd =
    createFunction ExternalLinkage $ \ size aPtr bPtr -> do
        top <- getCurrentBasicBlock
        loop <- newBasicBlock
        body <- newBasicBlock
        exit <- newBasicBlock
        -- Enter loop, must use a br since control flow joins at the loop bb.
        br loop
        -- The loop control.
        defineBasicBlock loop
        i <- phi [(valueOf (0 :: Word32), top)] -- i starts as 0, when entered from top bb
        s <- phi [(valueOf 0, top)]             -- s starts as 0, when entered from top bb
        t <- icmp IntNE i size                  -- check for loop termination
        condBr t body exit
        -- Define the loop body
        defineBasicBlock body
        ap <- getElementPtr aPtr (i, ())        -- index into aPtr
        bp <- getElementPtr bPtr (i, ())        -- index into bPtr
        a <- load ap                            -- load element from a vector
        b <- load bp                            -- load element from b vector
        ab <- mul a b                           -- multiply them
        s' <- add s ab                          -- accumulate sum
        i' <- add i (valueOf (1 :: Word32))     -- Increment loop index
        addPhiInputs i [(i', body)]             -- Control flow reaches loop bb from body bb
        addPhiInputs s [(s', body)]
        br loop                                 -- And loop
        defineBasicBlock exit
        ret (s :: Value Double)                 -- Return sum
main = do
    ioDotProd <- simpleFunction mDotProd
    let dotProd a b =
            unsafePurify $
            withArrayLen a $ \ aLen aPtr ->
            withArrayLen b $ \ bLen bPtr ->
            ioDotProd (fromIntegral (aLen `min` bLen)) aPtr bPtr
    let a = [1,2,3]
        b = [4,5,6]
    print $ dotProd a b
    print $ sum $ zipWith (*) a b
</pre>
First we have to set up the looping machinery. There are four basic blocks involved: the implicit basic block that is created at the start of every function, <tt>top</tt>; the top of the loop, <tt>loop</tt>; the body of the loop, <tt>body</tt>; and finally the block with the return from the function, <tt>exit</tt>.
<p>
There are two "registers", the loop index <tt>i</tt> and the running sum <tt>s</tt>, that arrive from two different basic blocks at the top of the loop. When entering the loop for the first time they should be 0. That's what the <tt>phi</tt> instruction specifies. The <tt>valueOf</tt> function simply turns a constant into an LLVM value. It's worth noting that the initial values for the two variables are constants rather than registers. The control flow also reaches the basic block <tt>loop</tt> from the end of <tt>body</tt>, but we don't have the names of those registers in scope yet, so we can't put them in the <tt>phi</tt> instruction. Instead, we have to use <tt>addPhiInputs</tt> to add more phi inputs later (when the registers are in scope).
<p>
The most mysterious instruction in the LLVM is <tt>getElementPtr</tt>. It simply does address arithmetic, so it really does something quite simple. But it can perform several levels of address arithmetic when addressing through multilevel arrays and structs. It can take several indices, but since here we simply want to add the index variable to a pointer the usage is pretty simple. Doing <tt>getElementPtr aPtr (i, ())</tt> corresponds to <tt>aPtr + i</tt> in C.
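The same scaled address arithmetic exists on the Haskell FFI side as <tt>advancePtr</tt> from <tt>Foreign.Marshal.Array</tt>; a small pure-Haskell illustration of the "pointer plus index, scaled by element size" idea:

```haskell
import Foreign.Marshal.Array (advancePtr)
import Foreign.Ptr (Ptr, nullPtr, minusPtr)

main :: IO ()
main = do
  let p = nullPtr :: Ptr Double
  -- advancePtr p i corresponds to p + i in C: the offset is scaled by the
  -- element size, just like getElementPtr aPtr (i, ()).
  print (advancePtr p 3 `minusPtr` p)  -- prints 24 (3 * sizeOf Double)
```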
<p>
To test this function we need pointers to two vectors. The FFI function <tt>withArrayLen</tt> temporarily allocates the vector and fills it with elements from the list.
<p>
The essential part of the function looks like this in optimized x86 code:
<pre>
pxor %xmm0, %xmm0
xorl %esi, %esi
.align 4,0x90
LBB1_2:
movsd (%edx,%esi,8), %xmm1
mulsd (%ecx,%esi,8), %xmm1
incl %esi
cmpl %eax, %esi
addsd %xmm1, %xmm0
jne LBB1_2
</pre>
Which is pretty good. Improving this would require using SSE vector instructions. This is possible using the LLVM vector type, but I'll leave that for now.
<h3>Abstraction</h3>
The loop structure in <tt>dotProd</tt> is pretty common, so we would like to abstract it out for reuse. The creation of basic blocks and phi instructions is rather fiddly so it would be nice to do this once and not worry about it again.
<p>
What are the parts of the loop? Well, let's just do a simple "for" loop that loops from a lower index (inclusive) to an upper index (exclusive) and executes the loop body for each iteration. So there should be three arguments to the loop function: lower bound, upper bound and loop body. What is the loop body? Since the LLVM is using SSA the loop body can't really update the loop state variables. Instead it's like a pure functional language where you have to express it as a state transformation. So the loop body will take the old state and return a new state. It's also useful to pass the loop index to the loop body. Now when we've introduced the notion of a loop state we also need to have an initial value for the loop state as an argument to the loop function.
<p>
Let's start out easy and let the state to be updated in the loop be a single value. In <tt>dotProd</tt> it's simply the running sum (<tt>s</tt>).
<pre>
forLoop low high start incr = do
    top <- getCurrentBasicBlock
    loop <- newBasicBlock
    body <- newBasicBlock
    exit <- newBasicBlock
    br loop
    defineBasicBlock loop
    i <- phi [(low, top)]
    state <- phi [(start, top)]
    t <- icmp IntNE i high
    condBr t body exit
    defineBasicBlock body
    state' <- incr i state
    i' <- add i (valueOf 1)
    body' <- getCurrentBasicBlock
    addPhiInputs i [(i', body')]
    addPhiInputs state [(state', body')]
    br loop
    defineBasicBlock exit
    return state
</pre>
The <tt>low</tt> and <tt>high</tt> arguments are simply the loop bounds, <tt>start</tt> is the start value for the loop state variable, and finally <tt>incr</tt> is invoked in the loop body to get the new value for the state variable. Note that the <tt>incr</tt> can contain new basic blocks so there's no guarantee we're in the same basic block after <tt>incr</tt> has been called. That's why there is a call to <tt>getCurrentBasicBlock</tt> before adding to the phi instructions.
<p>
So the original loop in <tt>dotProd</tt> can now be written
<pre>
s <- forLoop 0 size 0 $ \ i s -> do
    ap <- getElementPtr aPtr (i, ()) -- index into aPtr
    bp <- getElementPtr bPtr (i, ()) -- index into bPtr
    a <- load ap                     -- load element from a vector
    b <- load bp                     -- load element from b vector
    ab <- mul a b                    -- multiply them
    s' <- add s ab                   -- accumulate sum
    return s'
</pre>
So that wasn't too bad. But what if the loop needs multiple state variables? Or none? The tricky bit is handling the phi instructions since the number of instructions needed depends on how many state variables we have. So let's create a class for types that can be state variables. This way we can use tuples for multiple state variables. The class needs two methods, the generalization of <tt>phi</tt> and the generalization of <tt>addPhiInputs</tt>.
<pre>
class Phi a where
    phis :: BasicBlock -> a -> CodeGenFunction r a
    addPhis :: BasicBlock -> a -> a -> CodeGenFunction r ()
</pre>
A simple instance is when we have no state variables.
<pre>
instance Phi () where
    phis _ _ = return ()
    addPhis _ _ _ = return ()
</pre>
We also need to handle the case with a single state variable. All LLVM values are encapsulated in the <tt>Value</tt> type, so this is the one we create an instance for.
<pre>
instance (IsFirstClass a) => Phi (Value a) where
    phis bb a = do
        a' <- phi [(a, bb)]
        return a'
    addPhis bb a a' = do
        addPhiInputs a [(a', bb)]
</pre>
Finally, here's the instance for pairs. Other tuples can be done in the same way (or we could just use nested pairs).
<pre>
instance (Phi a, Phi b) => Phi (a, b) where
    phis bb (a, b) = do
        a' <- phis bb a
        b' <- phis bb b
        return (a', b')
    addPhis bb (a, b) (a', b') = do
        addPhis bb a a'
        addPhis bb b b'
</pre>
Using this new class the looping function becomes
<pre>
forLoop :: forall i a r . (Phi a, Num i, IsConst i, IsInteger i, IsFirstClass i) =>
           Value i -> Value i -> a -> (Value i -> a -> CodeGenFunction r a) -> CodeGenFunction r a
forLoop low high start incr = do
    top <- getCurrentBasicBlock
    loop <- newBasicBlock
    body <- newBasicBlock
    exit <- newBasicBlock
    br loop
    defineBasicBlock loop
    i <- phi [(low, top)]
    vars <- phis top start
    t <- icmp IntNE i high
    condBr t body exit
    defineBasicBlock body
    vars' <- incr i vars
    i' <- add i (valueOf 1 :: Value i)
    body' <- getCurrentBasicBlock
    addPhis body' vars vars'
    addPhiInputs i [(i', body')]
    br loop
    defineBasicBlock exit
    return vars
</pre>
<h3>File operations</h3>
The Haskell bindings provide two convenient functions - <tt>writeBitcodeToFile</tt> and <tt>readBitcodeFromFile</tt> - for writing and reading modules in the LLVM binary format.
<p>
A simple example:
<pre>
import Data.Int
import LLVM.Core
mIncr :: CodeGenModule (Function (Int32 -> IO Int32))
mIncr =
    createNamedFunction ExternalLinkage "incr" $ \ x -> do
        r <- add x (1 :: Int32)
        ret r
main = do
    m <- newModule
    defineModule m mIncr
    writeBitcodeToFile "incr.bc" m
</pre>
Running this will produce the file <tt>incr.bc</tt> which can be processed with the usual LLVM tools. E.g.
<pre>
$ llvm-dis < incr.bc # to look at the LLVM code
$ opt -std-compile-opts incr.bc -f -o incrO.bc # run optimizer
$ llvm-dis < incrO.bc # to look at the optimized LLVM code
$ llc incrO.bc # generate assembly code
$ cat incrO.s # look at assembly code
</pre>
Reading a module file is equally easy, but what can you do with a module you have read? It could contain anything. To extract things from a module there is a function <tt>getModuleValues</tt> which returns a list of name-value pairs of all externally visible functions and global variables. The values all have type <tt>ModuleValue</tt>. To convert a <tt>ModuleValue</tt> to a regular <tt>Value</tt> you have to use <tt>castModuleValue</tt>. This is a safe conversion function that makes a dynamic type test to make sure the types match (think of <tt>ModuleValue</tt> as <tt>Dynamic</tt> and <tt>castModuleValue</tt> as <tt>fromDynamic</tt>).
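The <tt>Dynamic</tt> analogy can be made concrete with standard <tt>Data.Dynamic</tt> (nothing LLVM-specific here): <tt>castModuleValue</tt> succeeds or fails in the same way <tt>fromDynamic</tt> does.

```haskell
import Data.Dynamic (toDyn, fromDynamic)

main :: IO ()
main = do
  let d = toDyn (42 :: Int)
  print (fromDynamic d :: Maybe Int)   -- prints Just 42: the types match
  print (fromDynamic d :: Maybe Bool)  -- prints Nothing: the dynamic type test fails
```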
<p>
Here's an example:
<pre>
import Data.Int
import LLVM.Core
import LLVM.ExecutionEngine
main = do
    m <- readBitcodeFromFile "incr.bc"
    ee <- createModuleProviderForExistingModule m >>= createExecutionEngine
    funcs <- getModuleValues m
    let ioincr :: Function (Int32 -> IO Int32)
        Just ioincr = lookup "incr" funcs >>= castModuleValue
        incr = unsafePurify $ generateFunction ee ioincr
    print (incr 41)
</pre>
This post is getting rather long, so I'll let this be the last example for today.https://augustss.blogspot.com/2009/01/llvm-llvm-low-level-virtual-machine-is.htmlnoreply@blogger.com (augustss)17tag:blogger.com,1999:blog-25541020.post-4185981415808905008Wed, 10 Dec 2008 15:27:00 +00002008-12-10T15:32:48.321+00:00HaskellModulesOCamloverloading<h2>The OCaml code again</h2>
I'm posting a slight variation of the OCaml code that I think better shows what I like about the ML version.
<pre>
module MkPerson(O: sig
    type xString
    type xDouble
    val opConcat : xString -> xString -> xString
    val opShow : xDouble -> xString
  end) =
struct
  open O
  type person = Person of (xString * xString * xDouble)
  let display (Person (firstName, lastName, height)) =
    opConcat firstName (opConcat lastName (opShow height))
end

module BasicPerson = MkPerson(struct
  type xString = string
  type xDouble = float
  let opConcat = (^)
  let opShow = string_of_float
end)

open BasicPerson
let _ =
  let p = Person ("Stefan", "Wehr", 184.0)
  in display p
</pre>
Note how the <tt>open O</tt> opens the argument to the <tt>MkPerson</tt> functor, so all the values and types from the argument are in scope in the rest of the module. There's no need to change lots of code in <tt>MkPerson</tt>.
<p>
Similarly, the <tt>open BasicPerson</tt> makes the operations from that module available, so that <tt>Person</tt> and <tt>display</tt> can be used in a simple way.https://augustss.blogspot.com/2008/12/ocaml-code-again-im-posting-slight.htmlnoreply@blogger.com (augustss)0tag:blogger.com,1999:blog-25541020.post-5744056186368813968Wed, 10 Dec 2008 12:34:00 +00002008-12-10T15:26:25.511+00:00HaskellModulesoverloading<h2>Abstracting on, suggested solutions</h2>
I guess I should be more constructive than just whining about how Haskell doesn't always do what I want.
I do have some suggestions on how to fix things.
<h3>Explicit type applications</h3>
Let's look at a simple example again:
<pre>
f :: forall a . a -> a
f = \ x -> x
b :: Bool
b = f True
</pre>
The way I like to think of this (and what happens in ghc) is that this is shorthand for something more explicit, namely the F<sub>w</sub> version of the same thing. In F<sub>w</sub> all type abstraction and type application are explicit. Let's look at the explicit version (which is no longer Haskell).
<pre>
f :: forall (a::*) . a -> a
f = /\ (a::*) -> \ (x::a) -> x
b :: Bool
b = f @Bool True
</pre>
I'm using <tt>/\</tt> for type abstraction and <tt>expr @type</tt> for type application. Furthermore each binder is annotated with its type. This is what ghc translates the code to internally, this process involves figuring out what all the type abstractions and applications should be.
<p>
Now something a little more complicated (from my previous post):
<pre>
class C a b where
    x :: a
    y :: b

f :: (C a b) => a -> [a]
f z = [x, x, z]
</pre>
The type of <tt>x</tt> is
<pre>
x :: forall a b . (C a b) => a
</pre>
So whenever <tt>x</tt> occurs two type applications have to be inserted (there's also a dictionary to insert, but I'll ignore that).
The decorated term for <tt>f</tt> (ignoring the context) is
<pre>
f :: forall a b . (C a b) => a -> [a]
f = /\ (a::*) (b::*) -> \ (z::a) -> [ x @a @b1, x @a @b2, z]
</pre>
The reason for the ambiguity in type checking is that the type checker cannot figure out that the <tt>b</tt> is in any way connected to <tt>b1</tt> and <tt>b2</tt>. Because it isn't. And there's currently no way we can connect them.
<p>
So I suggest that it should be possible to use explicit type application in Haskell when you want to. The code would look like this
<pre>
f :: forall a b . (C a b) => a -> [a]
f z = [ x @a @b, x @a @b, z]
</pre>
The order of the variables in the <tt>forall</tt> determines the order in which the type abstractions come, and thus determines where to put the type applications.
<h3>Something like <tt>abstype</tt></h3>
Back to my original problem with abstraction. What about if this was allowed:
<pre>
class Ops t where
    data XString t :: *
    (+++) :: XString t -> XString t -> XString t

instance Ops Basic where
    type XString Basic = String
    (+++) = (++)
</pre>
So the class declaration says I'm going to use data types (which was my final try and which works very nicely).
But in the instance I provide a type synonym instead. This would be like using a <tt>newtype</tt> in the instance, but without having to use the newtype constructor everywhere. The fact that it's not a real data type is only visible inside the instance declaration. The compiler could in fact make a <tt>newtype</tt> and insert all the coercions.
This is, of course, just a variation of the <tt>abstype</tt> suggestion by Wehr and Chakravarty.https://augustss.blogspot.com/2008/12/abstracting-on-suggested-solutions-i.htmlnoreply@blogger.com (augustss)2tag:blogger.com,1999:blog-25541020.post-6971407551389533807Wed, 10 Dec 2008 10:55:00 +00002008-12-10T15:26:25.512+00:00HaskellModulesoverloading<h2>The abstraction continues</h2>
I got several comments to my lament about my <a href="http://augustss.blogspot.com/2008/12/somewhat-failed-adventure-in-haskell.html">attempts at abstraction</a> in my previous blog post. Two of the comments involve adding an extra argument to <tt>display</tt>. I don't regard this as an acceptable solution; the changes to the code should not be that intrusive. Adding an argument to a function is a change that ripples through the code to many places and not just the implementation of <tt>display</tt>.
<p>
Reiner Pope succeeded where I failed. He split up the operations in Ops into two classes and presto, it works.
<pre>
data Person t = Person {
    firstName :: XString t,
    lastName  :: XString t,
    height    :: XDouble t
    }

class (Show s, IsString s) => IsXString s where
    (+++) :: s -> s -> s

class (Num d, IsXString s) => IsXDouble s d where
    xshow :: d -> s

class (IsXDouble (XString t) (XDouble t)) => Ops t where
    type XString t :: *
    type XDouble t :: *

instance IsXString String where
    (+++) = (++)

instance IsXDouble String Double where
    xshow = show

data Basic = Basic

instance Ops Basic where
    type XString Basic = String
    type XDouble Basic = Double

display :: Ops t => Person t -> XString t
display p = firstName p +++ " " +++ lastName p +++ " " +++ xshow (height p + 1)
</pre>
That's neat, but a little fiddly if there are many types involved.
<h3>Another problem</h3>
Armed with this solution I write another function.
<pre>
incSpace :: (Ops t) => XDouble t -> XString t
incSpace x = xshow x +++ " "
</pre>
It typechecks fine. But as far as I can figure out there is <b>no</b> way to use this function.
Let's see what ghci says:
<pre>
> :t incSpace (1 :: XDouble Basic) :: XString Basic
<interactive>:1:0:
Couldn't match expected type `[Char]'
against inferred type `XString t'
In the expression: incSpace (1 :: XDouble Basic) :: XString Basic
<interactive>:1:10:
Couldn't match expected type `XDouble t'
against inferred type `Double'
In the first argument of `incSpace', namely `(1 :: XDouble Basic)'
In the expression: incSpace (1 :: XDouble Basic) :: XString Basic
</pre>
Despite my best efforts at providing types it doesn't work. The reason being that saying, e.g., <tt>(1 :: XDouble Basic)</tt> is the same as saying <tt>(1 :: Double)</tt>. And that doesn't match <tt>XDouble t</tt>. At least not to the typechecker's knowledge.
<p>
In the example of <tt>display</tt> things work because the parameter <tt>t</tt> occurs in <tt>Person t</tt> which is a real type and not a type family. If a type variable only occurs in type family types you are out of luck. There's no way to convey what that type variable should be (as far as I know). You can "solve" the problem by adding <tt>t</tt> as an argument to <tt>incSpace</tt>, but again, I don't see that as a solution.
<p>
In the paper <a href="http://www.cse.unsw.edu.au/~chak/papers/WC08.html">ML Modules and Haskell Type Classes: A Constructive Comparison</a> Wehr and Chakravarty introduce a notion of abstract associated types. That might solve this problem. I really want <tt>XDouble</tt> and <tt>XString</tt> to appear as abstract types (or associated data types) outside of the instance declaration. Only inside the instance declaration where I provide implementations for the operations do I really care what the type is.
<h3>A reflection on type signatures</h3>
If I write
<pre>
f x = x
</pre>
Haskell can deduce that the type is <tt>f :: a -> a</tt>.
<p>
If I instead write
<pre>
f :: Int -> Int
f x = x
</pre>
Haskell happily uses this type. The type checker does <b>not</b> complain as to say "Sorry dude, but you're wrong, the type is more general than what you wrote.". I think that's nice and polite.
<p>
Now a different example.
<pre>
class C a b where
    x :: a
    y :: b

f z = [x, x, z]
</pre>
What does ghc have to say about the type of <tt>f</tt>?
<pre>
f :: (C a b, C a b1) => a -> [a]
</pre>
OK, that's reasonable; the two occurrences of <tt>x</tt> could have different contexts. But I don't want them to. Let's add a type signature.
<pre>
f :: (C a b) => a -> [a]
f z = [x, x, z]
</pre>
What does ghc have to say?
<pre>
Blog2.hs:9:7:
    Could not deduce (C a b) from the context (C a b2)
      arising from a use of `x' at Blog2.hs:9:7
    Possible fix:
      add (C a b) to the context of the type signature for `f'
    In the expression: x
    In the expression: [x, x, z]
    In the definition of `f': f z = [x, x, z]

Blog2.hs:9:10:
    Could not deduce (C a b1) from the context (C a b2)
      arising from a use of `x' at Blog2.hs:9:10
    Possible fix:
      add (C a b1) to the context of the type signature for `f'
    In the expression: x
    In the expression: [x, x, z]
    In the definition of `f': f z = [x, x, z]
</pre>
Which is ghc's way of saying "Dude, I see your context, but I'm not going to use it because I'm more clever than you and can figure out a better type."
Rude, is what I say.
<p>
I gave a context, but there is nothing to link the <tt>b</tt> in my context to what ghc internally figures out the type of the two occurrences of <tt>x</tt> should be. I wish I could tell the type checker, "This is the only context you'll ever get; use it if you can." Alas, this is not how things work.
<h3>A little ML</h3>
Stefan Wehr provided the ML version of the code that I only alluded to:
<pre>
module MkPerson(O: sig
    type xString
    type xDouble
    val opConcat : xString -> xString -> xString
    val opShow : xDouble -> xString
  end) =
  struct
    type person = Person of (O.xString * O.xString * O.xDouble)
    let display (Person (firstName, lastName, height)) =
        O.opConcat firstName (O.opConcat lastName (O.opShow height))
  end

module BasicPerson = MkPerson(struct
    type xString = string
    type xDouble = float
    let opConcat = (^)
    let opShow = string_of_float
  end)

let _ =
    let p = BasicPerson.Person ("Stefan", "Wehr", 184.0)
    in BasicPerson.display p
</pre>
In this case, I think this is the natural way of expressing the abstraction I want. Of course, this ML code has some shortcomings too. Since string literals in ML are not overloaded they cannot be used as neatly in the display function as I could in the Haskell version, but that's a minor point.https://augustss.blogspot.com/2008/12/abstraction-continues-i-got-several.htmlnoreply@blogger.com (augustss)4tag:blogger.com,1999:blog-25541020.post-788298342897012865Wed, 10 Dec 2008 00:10:00 +00002008-12-10T15:26:34.573+00:00HaskellModulesoverloading<h2>A somewhat failed adventure in Haskell abstraction</h2>
I usually blog about weird and wonderful things you can do in Haskell. Today I'm going to talk about something very plain and not wonderful at all.
<p>
If you want to try out the code below, use these Haskell extensions:
</p><pre>
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses, OverloadedStrings,
FlexibleInstances, TypeSynonymInstances, ScopedTypeVariables,
FunctionalDependencies, RecordWildCards, FlexibleContexts,
GeneralizedNewtypeDeriving #-}
</pre>
<h3>The simple problem</h3>
We want to define a type for a person which has a few fields and operations. Like this
<pre>
module Person(Person(..), display) where
data Person = Person {
    firstName :: String,
    lastName  :: String,
    height    :: Double
    }

display :: Person -> String
display p = firstName p ++ " " ++ lastName p ++ " " ++ show (height p + 1)
</pre>
Very simple. To use it we can just import the module and then write something like
<pre>
print $ display $ Person { firstName = "Manuel", lastName = "Peyton Jones", height = 255 }
</pre>
But being efficiency conscious I'm not happy with using <tt>String</tt> and <tt>Double</tt>.
I'd like to experiment with different types for these. Maybe I should use <tt>ByteString</tt> and <tt>Int</tt> instead?
<p>
Simple enough, let's abstract out the types and operations into a different module.
</p><pre>
module Ops(XString, XDouble, (+++), xshow) where
import Data.String

newtype XString = XString String deriving (Eq, Show, IsString)
newtype XDouble = XDouble Double deriving (Eq, Show, Num)

(+++) :: XString -> XString -> XString
XString x +++ XString y = XString (x ++ y)

xshow :: XDouble -> XString
xshow (XDouble x) = XString (show x)


module Person(Person(..), display) where
import Ops

data Person = Person {
    firstName :: XString,
    lastName  :: XString,
    height    :: XDouble
    }

display :: Person -> XString
display p = firstName p +++ " " +++ lastName p +++ " " +++ xshow (height p + 1)
</pre>
There, problem solved. By changing the import in the <tt>Person</tt> module you can try out different types for <tt>XString</tt> and <tt>XDouble</tt>.
<p>
No, this is not problem solved. To try out different implementations I need to <i>edit</i> the <tt>Person</tt> module. That's not abstraction, that's obstruction. It should be possible to write the code for the <tt>Person</tt> module once and for all, once you have decided to abstract, and then never change it again.
</p><p>
I also didn't really want to necessarily have <tt>newtype</tt> in my module. Maybe I'd want this:
</p><pre>
module Ops(XString, XDouble, (+++), xshow) where
type XString = String
type XDouble = Double
(+++) :: XString -> XString -> XString
(+++) = (++)
xshow :: XDouble -> XString
xshow = show
</pre>
You can define Ops that way, but then the implementation of Ops may leak into the Person module. What you really want is to type check Person against the signature of Ops, like
<pre>
interface Ops where
    type XString
    type XDouble
    (+++) :: XString -> XString -> XString
    xshow :: XDouble -> XString
</pre>
And later supply the actual implementation. Alas, Haskell doesn't allow this.
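To see the leak concretely: with a type synonym, nothing stops client code from using the representation directly. A minimal self-contained sketch (the names <tt>shout</tt> etc. are made up):
<pre>
import Data.Char (toUpper)

-- The "abstract" type, implemented as a plain synonym.
type XString = String

(+++) :: XString -> XString -> XString
(+++) = (++)

-- A client can see right through the synonym and use String
-- operations directly; swapping in ByteString later breaks this.
shout :: XString -> XString
shout = map toUpper

main :: IO ()
main = putStrLn (shout ("abc" +++ "def"))
</pre>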
<p>
In ML (SML or O'Caml) this would be solved by using a functor. The Person module would be a functor that takes the Ops module as an argument and yields a new module. And then you can just plug and play with different Ops implementations. This is where ML shines and Haskell sucks.
</p><h3>Type classes</h3>
But the defenders of Haskell superiority say, Haskell has type classes, that's the way to abstract! So let's make Ops into a type class. Let's do old style with multiple parameters first. Since Ops defines two types it will correspond to having two type parameters to the class.
<pre>
class (IsString xstring, Num xdouble) => Ops xstring xdouble where
    (+++) :: xstring -> xstring -> xstring
    xshow :: xdouble -> xstring
</pre>
Ok, so how do we have to rewrite the Person module?
<pre>
data Person xstring xdouble = Person {
    firstName :: xstring,
    lastName  :: xstring,
    height    :: xdouble
    }

display :: (Ops xstring xdouble) => Person xstring xdouble -> xstring
display p = firstName p +++ " " +++ lastName p +++ " " +++ xshow (height p + 1)
</pre>
An implementation is provided by an instance declaration:
<pre>
instance Ops String Double where
    (+++) = (++)
    xshow = show
</pre>
We see the major flaw in this approach at once. The <tt>Person</tt> data type now has two parameters. This might be bearable, but imagine a more complicated example where Ops contains 15 types. And every time you add a field with a new type to Person you have to update every single place in the program that mentions the Person type. That's not abstraction.
<p>
But in fact, it's even worse than that. The definition of <tt>display</tt> might look plausible, but it's full of ambiguities. Compiling it gives lots of errors of this kind:
</p><pre>
Could not deduce (Ops xstring xdouble)
from the context (Ops xstring xdouble4)
</pre>
Well, we can remove the type signature and let GHC figure it out. Then we get this:
<pre>
display :: (Ops xstring xdouble,
            Ops xstring xdouble3,
            Ops xstring xdouble2,
            Ops xstring xdouble1,
            Ops xstring xdouble4) =>
           Person xstring xdouble4 -> xstring
</pre>
And this function can, of course, never be used because most of the type variables do not occur outside the context so they will never be determined. I don't even know how to put explicit types in the function to make it work.
<p>
Well, it's common knowledge that multi-parameter type classes without functional dependencies are asking for trouble. So can we add some functional dependencies? Sure, if we use
</p><pre>
class (IsString xstring, Num xdouble) => Ops xstring xdouble | xstring -> xdouble where
</pre>
then things work beautifully. Until we decide that another instance that would be interesting to try is
<pre>
instance Ops String Int
</pre>
which is not valid with the FD present.
<p>
So we can't have functional dependencies if we want to have flexibility with the instances. So what is it that goes wrong without the FDs? It's that all the uses of <tt>(+++)</tt> and <tt>xshow</tt> are not tied together; they could potentially have different types. Let's try to be sneaky and tie them together:
</p><pre>
display :: (Ops xstring xdouble) => Person xstring xdouble -> xstring
display p =
    let (++++) = (+++); xxshow = xshow
    in  firstName p ++++ " " ++++ lastName p ++++ " " ++++ xxshow (height p + 1)
</pre>
This only generates one error message, because there's still nothing that says that the two operations come from the same instance. We need to make the tie even closer.
<pre>
class (IsString xstring, Num xdouble) => Ops xstring xdouble where
    ops :: (xstring -> xstring -> xstring, xdouble -> xstring)

instance Ops String Double where
    ops = ((++), show)

display :: (Ops xstring xdouble) => Person xstring xdouble -> xstring
display p =
    let ((+++), xshow) = ops
    in  firstName p +++ " " +++ lastName p +++ " " +++ xshow (height p + 1)
</pre>
This actually works!
We can make it neater looking.
<pre>
class (IsString xstring, Num xdouble) => Ops xstring xdouble where
    ops :: DOps xstring xdouble

data DOps xstring xdouble = DOps {
    (+++) :: xstring -> xstring -> xstring,
    xshow :: xdouble -> xstring
    }

instance Ops String Double where
    ops = DOps (++) show

display :: (Ops xstring xdouble) => Person xstring xdouble -> xstring
display p =
    let DOps{..} = ops
    in  firstName p +++ " " +++ lastName p +++ " " +++ xshow (height p + 1)
</pre>
We have basically packaged up the dictionary and unpack it ourselves to get access to the operations. It's not pleasant, but it works.
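Here is the trick condensed to a self-contained file, with the <tt>IsString</tt>/<tt>Num</tt> superclasses dropped for brevity and a made-up <tt>greet</tt> function standing in for <tt>display</tt>:
<pre>
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, RecordWildCards #-}

-- The explicit dictionary of operations.
data DOps xstring xdouble = DOps {
    opConcat :: xstring -> xstring -> xstring,
    opShow   :: xdouble -> xstring
    }

class Ops xstring xdouble where
    ops :: DOps xstring xdouble

instance Ops String Double where
    ops = DOps (++) show

-- Fetching the whole dictionary in one go ties the operations together.
greet :: (Ops xstring xdouble) => xdouble -> xdouble -> xstring
greet a b = let DOps{..} = ops in opShow a `opConcat` opShow b

main :: IO ()
main = putStrLn (greet (1.5 :: Double) (2.5 :: Double) :: String)
</pre>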
<p>
But as I already said, the multiparameter type class version isn't really a good solution to the problem even if it works; it introduces too many parameters to the Person record.
</p><h3>Associated types</h3>
The new and shiny way of doing type classes is to use associated types instead of FDs. So let's give that a try. So what should the associated types be in the class. The associated type is supposed to be the one that can be computed from the main one. But we have two types that are on equal footing, so there is no main one. We can remedy that by introducing an artificial third type that is the main one, it can then determine the other two.
<pre>
class (IsString (XString t), Num (XDouble t)) => Ops t where
    type XString t :: *
    type XDouble t :: *
    (+++) :: XString t -> XString t -> XString t
    xshow :: XDouble t -> XString t

data Person t = Person {
    firstName :: XString t,
    lastName  :: XString t,
    height    :: XDouble t
    }
</pre>
That looks pretty neat. Note how the Person record has one parameter, and no matter how many new associated types we add it will still only have one parameter. One parameter is reasonable; the Person record is, after all, parameterized over what kind of Ops we are providing.
<p>
Let's do an instance. It will need the extra type that is somehow the name of the instance.
</p><pre>
data Basic = Basic

instance Ops Basic where
    type XString Basic = String
    type XDouble Basic = Double
    (+++) = (++)
    xshow = show
</pre>
Now what about the <tt>display</tt> function? Alas, now it breaks down again. The <tt>display</tt> function is full of type errors again. And the reason is similar to the multiparameter version; there's nothing that ties the operations together.
<p>
We can play the same trick as with DOps above, but for some reason it doesn't work this time. The type comes out as
</p><pre>
display :: (XString t ~ XString a,
            XDouble t ~ XDouble a,
            Ops a,
            Num (XDouble t)) =>
           Person t -> XString a
</pre>
I have no clue why. I find associated types very hard to get a grip on.
<p>
OK, multi-parameter type classes made things work, but had too many type parameters. And associated types are the other way around. You can try combining them, but that didn't get me any closer.
</p><h3>Associated data types</h3>
OK, I won't admit defeat yet. There are still associated data types. They are easier to deal with than associated types, because the type function is guaranteed to be injective.
<pre>
class (IsString (XString t), Num (XDouble t)) => Ops t where
    data XString t :: *
    data XDouble t :: *
    (+++) :: XString t -> XString t -> XString t
    xshow :: XDouble t -> XString t

data Basic = Basic

instance Ops Basic where
    newtype XString Basic = XSB String deriving (Eq, Ord, Show)
    newtype XDouble Basic = XDB Double deriving (Eq, Ord, Show)
    XSB x +++ XSB y = XSB (x ++ y)
    xshow (XDB x) = XSB (show x)

instance Num (XDouble Basic) where
    XDB x + XDB y = XDB (x + y)
    fromInteger = XDB . fromInteger

instance IsString (XString Basic) where
    fromString = XSB
</pre>
At last, this actually works! But it's at a price. We can no longer use the types we want in the instance declaration; instead we are forced to invent new types. Using this approach the original multi-parameter version could have been made to work as well.
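For reference, here is a stripped-down, self-contained version of this working approach (dropping the <tt>IsString</tt> and <tt>Num</tt> superclasses to keep it short):
<pre>
{-# LANGUAGE TypeFamilies #-}

class Ops t where
    data XString t :: *
    data XDouble t :: *
    (+++) :: XString t -> XString t -> XString t
    xshow :: XDouble t -> XString t

data Basic = Basic

instance Ops Basic where
    newtype XString Basic = XSB String deriving (Show)
    newtype XDouble Basic = XDB Double
    XSB x +++ XSB y = XSB (x ++ y)
    xshow (XDB x) = XSB (show x)

main :: IO ()
main = print (xshow (XDB 1.5) +++ XSB "!")
</pre>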
<p>
Normally the GeneralizedNewtypeDeriving language extension makes it relatively painless to introduce a newtype that has all the instances of the underlying type. But due to a bug in ghc you can't use this extension for associated newtypes. So we have to make manual instance declarations which makes this approach very tedious.
</p><h2>Conclusion</h2>
I have found no way of doing what I want. My request is very simple, I want to be able to abstract over the actual implementation of a module, where the module contains types, values, and instances.
<p>
Haskell normally excels in abstraction, but here I have found no natural way of doing what I want. Perhaps I'm just not clever enough to figure out how, but that is a failure of Haskell too. It should not take any cleverness to do something as simple as this. In ML this is the most natural thing in the world to do.
</p><p>
Associated types are <b>not</b> a replacement for a proper module system. They let you do some things, but others just don't work.
</p><p>I'd be happy to see anyone doing this in Haskell in a simple way.</p><p>
</p>https://augustss.blogspot.com/2008/12/somewhat-failed-adventure-in-haskell.htmlnoreply@blogger.com (augustss)13tag:blogger.com,1999:blog-25541020.post-2069749915560147065Thu, 03 Jul 2008 11:38:00 +00002008-07-05T00:52:49.730+01:00Haskell<h2>Lost and found</h2>
If I write <tt>10^8</tt> in Haskell, how many multiplications will be used to compute the power? A stupid question? Well, for this example it is, but if I were computing <tt>x^8</tt> and <tt>x</tt> had 100000 digits then I'd care.
So how can I find out? I can look at the definition of the exponentiation operator. Here it is, from the Haskell report and GHC 6.8:
<pre>
(^) :: (Num a, Integral b) => a -> b -> a
_ ^ 0 = 1
x ^ n | n > 0 = f x (n-1) x
    where f _ 0 y = y
          f a d y = g a d
              where g b i | even i    = g (b*b) (i `quot` 2)
                          | otherwise = f b (i-1) (b*y)
_ ^ _ = error "Prelude.^: negative exponent"
</pre>
It's a bit involved, but decipherable. Another way would be to insert some kind of debug trace message in the multiplication.
<h3>Traced values</h3>
I'd like to show a different way. Here's a ghci session:
<pre>
Prelude> :m +Debug.Traced
Prelude Debug.Traced> let x = 10 :: Traced AsValue Integer
Prelude Debug.Traced> let y = x ^ 8
Prelude Debug.Traced> y
100000000
Prelude Debug.Traced> :t y
y :: Traced AsValue Integer
Prelude Debug.Traced> asExp y
10 * 10 * (10 * 10) * (10 * 10 * (10 * 10))
Prelude Debug.Traced> asSharedExp y
let _2 = 10 * 10; in _2 * _2 * (_2 * (10 * 10))
Prelude Debug.Traced> :t asSharedExp y
asSharedExp y :: Traced AsExp Integer
Prelude Debug.Traced>
</pre>
So what's going on? The value of <tt>x</tt> has type <tt>Traced AsValue Integer</tt>, which means that there's some magic going on. The variable can be used as usual, for instance in computing <tt>x^8</tt>. A traced value can also be shown as an expression, which is what <tt>asExp</tt> does.
So a traced value is somewhat like the symbolic values I had in an earlier post, but in addition to having a symbolic representation they also have a normal value. But the output from <tt>asExp</tt> doesn't really help in answering how many multiplications there are, since the shown expression has no sharing; it is totally flattened. The <tt>asSharedExp</tt> function is the black magic here: it recovers the sharing, and we can see what happened. What we see is that there are actually five (5) multiplications involved in computing 10^8. This shows that the definition of exponentiation is suboptimal, since it can be done with three multiplications (three repeated squarings).
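For comparison, here is a sketch of exponentiation by repeated squaring that also counts the multiplications; for 10^8 it uses three (the name <tt>power</tt> is made up, and negative exponents are ignored):
<pre>
-- Compute x^n by repeated squaring, also returning the number of
-- multiplications performed.
power :: (Num a, Integral b) => a -> b -> (a, Int)
power _ 0 = (1, 0)
power x 1 = (x, 0)
power x n
    | even n    = let (y, c) = power x (n `div` 2) in (y * y, c + 1)
    | otherwise = let (y, c) = power x (n - 1)     in (x * y, c + 1)

main :: IO ()
main = print (power (10 :: Integer) (8 :: Int))  -- (100000000,3)
</pre>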
The <tt>asSharedExp</tt> function really does involve some <b>black</b> magic. It recovers information that is not part of the Haskell semantics, so from that we can conclude that it must contain the powerful incantation <tt>unsafePerformIO</tt> somewhere. How does it reveal implementation secrets? Look at this:
<pre>
Prelude Debug.Traced> asSharedExp $ let a = 1 + 2 in a * a
let _1 = 1 + 2; in _1 * _1
Prelude Debug.Traced> asSharedExp $ (1+2) * (1+2)
(1 + 2) * (1 + 2)
Prelude Debug.Traced>
</pre>
The let expression and the expression where the variable has been expanded are semantically equal in Haskell, so no (pure) function can possibly give different results for them.
<p>
OK, so how does it work? I'll show a simplified version of the Traced module here that only deals with one traced type, but it can be extended. The (soon to be available) hackage package contains the extended version.
<p>
In the <tt>Traced</tt> type we need to represent expressions. We only need constants and function applications.
<pre>
data Traced a
    = Con a
    | Apply a String [Traced a]
</pre>
The function application contains the value, the name of the function, and the arguments the function was applied to.
<p>
In the exported interface from the module we want to be able to convert to and from the Traced type. Nothing exciting here.
<pre>
traced :: a -> Traced a
traced = Con

unTraced :: Traced a -> a
unTraced (Con x)       = x
unTraced (Apply x _ _) = x
</pre>
We want to show a traced value the same way we show the underlying value.
<pre>
instance (Show a) => Show (Traced a) where
    show = show . unTraced
</pre>
Comparing for equality, we simply compare the underlying values.
<pre>
instance (Eq a) => Eq (Traced a) where
    x == y = unTraced x == unTraced y
</pre>
And we'll make traced numbers be an instance of <tt>Num</tt>. All the functions (except <tt>fromInteger</tt>) build apply nodes.
<pre>
instance (Num a) => Num (Traced a) where
    (+) = apply2 "+" (+)
    (-) = apply2 "-" (-)
    (*) = apply2 "*" (*)
    negate = apply1 "-" negate
    abs    = apply1 "abs" abs
    signum = apply1 "signum" signum
    fromInteger = traced . fromInteger

apply1 s f x   = Apply (f (unTraced x)) s [x]
apply2 s f x y = Apply (f (unTraced x) (unTraced y)) s [x, y]
</pre>
A fancier version of this module could make the Traced type an applicative functor etc., but that's not really so important.
<p>
Finally, we want to be able to show a traced expression as an expression tree instead of a value.
<pre>
showAsExp :: (Show t) => Traced t -> String
showAsExp (Con x) = show x
showAsExp (Apply _ s [x,y]) | not (isAlpha (head s)) =
    "(" ++ showAsExp x ++ " " ++ s ++ " " ++ showAsExp y ++ ")"
showAsExp (Apply _ s xs) =
    "(" ++ concat (intersperse " " $ s : map showAsExp xs) ++ ")"
</pre>
We only export what is necessary, so the module header should be
<pre>
module Traced(Traced, traced, unTraced, showAsExp) where
import Data.Char (isAlpha)
import Data.List (intersperse)
</pre>
A quick test:
<pre>
Traced> putStrLn $ showAsExp $ 10^8
(((10 * 10) * (10 * 10)) * ((10 * 10) * (10 * 10)))
Traced>
</pre>
And now we need the black magic to recover the sharing. We would like to have a unique label in each node of the expression tree. If we only had that we could see when two things referred to the same subexpression, and use the label to refer to it instead of the value. If we were doing this in, <i>e.g.</i>, Java we could use object identity for this purpose. If we were doing it in C, we'd just compare pointers to the structs containing the expressions. But we're doing it in Haskell and none of this is available. It's not unavailable because Haskell wants to make our lives difficult, quite the contrary. Languages that allow pointer comparisons (object identity) must introduce an extra level of indirection in the semantics to explain how this is possible. So now it's not enough to know we have the number 5, we need to know that this is the number 5 at location 1000. And that's not the same as the number 5 at location 1010. The numbers contained in the locations might be the same, but the locations are not interchangeable since they could, <i>e.g.</i>, be mutated differently in the future.
<p>
So is everything lost in Haskell? Not at all. GHC implements a library of stable names, which is (at first approximation) the same as the address of something in memory.
The API to <tt>System.Mem.StableName</tt> is very simple.
<pre>
data StableName a
makeStableName :: a -> IO (StableName a)
hashStableName :: StableName a -> Int
</pre>
The <tt>makeStableName</tt> function is like the & operator (address of) in C. It returns the "address" of something. So <tt>StableName</tt> is like a C pointer type. And the <tt>hashStableName</tt> function converts the "address" to an int, <i>i.e.</i>, like casting it to int in C. (In the simplified code below we'll assume that two stable names never hash to the same Int, although this is not absolutely guaranteed.)
<p>
How come this interface is OK? For instance, calling <tt>makeStableName</tt> on semantically equal values can yield different results if the values happen to be stored in different parts of memory. It's ok, because the returned value is in the IO monad. In the IO monad anything can happen, so it's perfectly reasonable that the same argument yields different results in different calls.
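A tiny sketch of the API in action; the hash values are runtime dependent, so we only compare stable names for equality:
<pre>
import System.Mem.StableName (makeStableName)

main :: IO ()
main = do
    let x = [1 .. 5 :: Int]
    length x `seq` return ()   -- force x first, so its stable name won't change
    a  <- makeStableName x
    a' <- makeStableName x
    print (a == a')            -- same evaluated object: True
</pre>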
<p>
Despite the name and the documentation the GHC stable names have a stability flaw. The stable name changes when an unevaluated expression is evaluated. It's annoying, but not a major flaw. But once evaluated, the stable name is guaranteed to remain the same. (The implementation of stable names is not as simple as taking the address of an object, since the GC can move objects around.)
<p>
So now we have a way to get the identity of each node in the expression tree built by the Traced type. The plan is to traverse the expression tree. For each node we'll use the "address" of it as its name, and remember that we've seen this node. As we traverse the tree we build a list of nodes we've seen. This list then corresponds to the let bindings we'd like to display as the final result.
As we traverse the nodes we'll also replace each node with a reference to its name instead, so we can see the sharing in the result.
<p>
To be able to represent the expression with rediscovered sharing we need to extend the Traced type. We need variable references and let bindings. In fact, we'll only generate a top level let binding, but we include it in the data type anyway.
<pre>
data Traced a
    ...
    | Var a Name
    | Let [(Name, Traced a)] (Traced a)

type Name = String
</pre>
We store the value in the <tt>Var</tt> constructor to make <tt>unTraced</tt> possible. New cases:
<pre>
...
unTraced (Var x _) = x
unTraced (Let _ e) = unTraced e
</pre>
And we want to show the new constructors:
<pre>
...
showAsExp (Var _ n) = n
showAsExp (Let bs e) = "let " ++ concatMap bind bs ++ "in " ++ showAsExp e
    where bind (n, b) = n ++ " = " ++ showAsExp b ++ "; "
</pre>
To rediscover the sharing we need to keep some state. We need a mapping from the node address to the <tt>Var</tt> that should replace it, and we need to accumulate the bindings, <i>i.e.</i>, the pairs of node name and expression. So we need a state monad to keep track of the state. We also need to be able to call <tt>makeStableName</tt> in the IO monad, so we need IO as well. We'll do this by using the state transformer monad on top of IO. So the function that discovers sharing will take a traced value and return a new traced value, all in this monad. So we have the type:
<pre>
type TState a = (M.IntMap (Traced a), [(Name, Traced a)])

share :: Traced a -> StateT (TState a) IO (Traced a)
</pre>
Assuming some imports
<pre>
import Control.Monad.State
import qualified Data.IntMap as M
import System.Mem.StableName
</pre>
Now the body:
<pre>
share e@(Con _) = return e
share e@(Apply v s xs) = do
    (sm, bs) <- get
    sn <- liftIO $ makeStableName e
    let h = hashStableName sn
    case M.lookup h sm of
        Just ie -> return ie
        Nothing -> do
            let n = "_" ++ show h
                ie = Var v n
            put (M.insert h ie sm, bs)
            xs' <- mapM share xs
            modify $ \ (sm', bs') -> (sm', (n, Apply v s xs') : bs')
            return ie
</pre>
Constants are easy, we don't bother sharing them (we could, but it's not that interesting).
For an apply node, we get its stable name (it's already evaluated, so it won't change) and hash it.
We then grab the map (it's an IntMap; a fast map from Int to anything) from the state and look up the node. If the node is found we just return the expression from the map. If it's not in the map, we invent a name (using the hash value) and insert a <tt>Var</tt> node in the map so we'll never process this node again. We then recursively traverse all the children of the apply node, and rebuild a new apply node with those new children. This constitutes a binding and we stick it in the accumulated list of bindings. Finally we return the new <tt>Var</tt> node.
<p>
At the top level we need to call <tt>share</tt> and then build a let expression. The bindings are put in the list with the top node first in the list, so it looks nicer to reverse it.
<pre>
shareIO :: Traced a -> IO (Traced a)
shareIO e = do
    (v, (_, bs)) <- runStateT (share e) (M.empty, [])
    return $ Let (reverse bs) v
</pre>
We could leave it there, but to make it more convenient to use, we'll do an <tt>unsafePerformIO</tt> to hide the use of IO. This is not unsafe in the sense that it will make our program crash, but it is unsafe in the sense that it ruins the Haskell semantics. But since this whole exercise is to make a debugging/tracing tool this is a legitimate use, in my opinion.
<pre>
reShare :: Traced a -> Traced a
reShare = unsafePerformIO . shareIO
showShared :: (Show a) => Traced a -> String
showShared = showAsExp . reShare
</pre>
And let's test it:
<pre>
Traced> putStrLn $ showShared $ 10^8
let _5 = (10 * 10); _7 = (_5 * _5); _11 = (10 * 10);
_1 = (_5 * _11); _8 = (_7 * _1); in _8
Traced>
</pre>
In my first example it looked a little prettier since the full implementation only shows bindings for nodes that are actually shared. This is a simple transformation to add. But looking at this, we can still see that there are five multiplications.
<p>
<h3>Hacking with types</h3>
It's a little annoying that we have both <tt>show</tt> and <tt>showAsExp</tt> to show traced values. It would be nicer if we could always use <tt>show</tt> and make the type determine how it is shown. We could invent a <tt>newtype</tt> to wrap around <tt>Traced</tt> and print the new type in a different way. But then this new type would not be compatible with the old one, which is a little annoying.
<p>
So we're going down a different road instead: we'll give the <tt>Traced</tt> type two parameters. The first one is a phantom type, used only to determine how to show the traced value. We will rename the old type to <tt>TracedT</tt>, and it will only be used internally in the module.
<pre>
newtype Traced t a = T { unT :: TracedT a }

data TracedT a
    = Con a
    | Apply a String [TracedT a]
    | Var a Name
    | Let [(Name, TracedT a)] (TracedT a)
</pre>
Some minor changes are needed to the code to accommodate this change.
<pre>
traced :: a -> Traced t a
traced = T . Con
unTraced :: Traced t a -> a
unTraced = unTracedT . unT
unTracedT :: TracedT a -> a
...
apply1 s f x = T $ Apply (f (unTraced x)) s [unT x]
apply2 s f x y = T $ Apply (f (unTraced x) (unTraced y)) s [unT x, unT y]
</pre>
And now for the fun part, the <tt>show</tt> function. We could do something like this:
<pre>
data AsValue
data AsExp
instance Show (Traced AsValue a) ...
instance Show (Traced AsExp a) ...
</pre>
This has problems: first, it's not Haskell 98; more importantly, we can't print something of type <tt>Traced t a</tt>, because t is too general. We'd like to print as a value by default, and be able to override this. There's only one defaulting mechanism in Haskell: numeric defaulting. So, as a somewhat disgusting hack, let's use numeric defaulting to our advantage. We'll print <tt>Traced Integer a</tt> as values and <tt>Traced Double a</tt> as expressions.
<pre>
instance (Num t, Show a) => Show (Traced t a) where
    show e = if doExp e then showAsExp (unT e) else show (unT e)

doExp :: (Num t) => Traced t a -> Bool
doExp x = '.' `elem` show (f x)
  where f :: (Num t) => Traced t a -> t
        f _ = 0
</pre>
We distinguish between Integer and Double by how 0 is printed; for Double it has a '.' in the string.
<p>
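The detection trick can be seen in isolation. This standalone sketch (the name <tt>hasDot</tt> is made up for the example) relies only on how <tt>show</tt> renders zero at each type:

```haskell
-- Double shows 0 as "0.0", Integer as "0", so the dot tells them apart.
hasDot :: (Num t, Show t) => t -> Bool
hasDot x = '.' `elem` show x

main :: IO ()
main = do
    print (hasDot (0 :: Integer))  -- False
    print (hasDot (0 :: Double))   -- True
```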
A small utility to force printing as an expression.
<pre>
asExp :: Traced t a -> Traced Double a
asExp = T . unT
</pre>
Let's try it in ghci:
<pre>
Traced> reShare $ 10^8
100000000
Traced> asExp $ reShare $ 10^8
let _8 = (10 * 10); _9 = (_8 * _8); _5 = (10 * 10);
_7 = (_8 * _5); _10 = (_9 * _7); in _10
Traced> reShare $ (asExp 10)^8
let _7 = (10 * 10); _8 = (_7 * _7); _1 = (10 * 10);
_5 = (_7 * _1); _9 = (_8 * _5); in _9
Traced>
</pre>
In the first expression the numeric default made t be Integer, so we got a value. In the second and third case we forced t to be Double.
<p>
Some other examples:
<pre>
Traced> let fac n = if n == 0 then 1 else n * fac(n-1)
Traced> asExp $ reShare $ fac 3
let _19 = (3 - 1); _24 = (_19 - 1); _18 = (_24 * 1);
_20 = (_19 * _18); _21 = (3 * _20); in _21
Traced> let slowFib n = if n < 2 then 1 else slowFib(n-1) + slowFib(n-2)
Traced> asExp $ reShare $ slowFib 5
let _18 = (1 + 1); _19 = (_18 + 1); _23 = (1 + 1); _20 = (_19 + _23);
_25 = (1 + 1); _17 = (_25 + 1); _21 = (_20 + _17); in _21
Traced> let fastFib n = fst $ iterate (\ (x,y) -> (y,x+y)) (1,1) !! n
Traced> asExp $ reShare $ fastFib 5
let _20 = (1 + 1); _21 = (1 + _20); _19 = (_20 + _21); _23 = (_21 + _19); in _23
Traced>
</pre>
Well, that's enough abuse of the type system for one day.
<h3>More examples</h3>
The full library for traced values contains some more functionality, like naming values and functions that operate on traced booleans.
<p>
A named value, and a symbolic named value:
<pre>
Prelude Debug.Traced> asSharedExp $ (named "x" 10)^8
let x = 10; _2 = x * x; in _2 * _2 * (_2 * (x * x))
Prelude Debug.Traced> asSharedExp $ (unknown "x")^8
let _2 = x * x; in _2 * _2 * (_2 * (x * x))
</pre>
For a normal definition of <tt>fac</tt> we only see the arithmetic:
<pre>
Prelude Debug.Traced> let fac n = if n == 0 then 1 else n * fac (n-1)
Prelude Debug.Traced> fac 5
120
Prelude Debug.Traced> asSharedExp $ fac 5
let _2 = 5 - 1;
_4 = _2 - 1;
_6 = _4 - 1;
in 5 * (_2 * (_4 * (_6 * ((_6 - 1) * 1))))
</pre>
By using traced booleans we can see exactly what's going on:
<pre>
Prelude Debug.Traced> let facT n = ifT (n %== 0) 1 (n * facT (n-1))
Prelude Debug.Traced> facT 5
120
Prelude Debug.Traced> asSharedExp $ facT 5
let _6 = 5 - 1;
_11 = _6 - 1;
_16 = _11 - 1;
_21 = _16 - 1;
in ifT (5 == 0) ...
(5 * ifT (_6 == 0) ...
(_6 * ifT (_11 == 0) ...
(_11 * ifT (_16 == 0) ...
(_16 * ifT (_21 == 0) ... (_21 * ifT (_21 - 1 == 0) 1 ...)))))
</pre>
We can also lift a regular function to a traced function:
<pre>
Prelude Debug.Traced> let fac' = liftFun "fac'" fac :: Traced t Integer -> Traced t Integer
Prelude Debug.Traced> let a = fac' 5 + fac' 10
Prelude Debug.Traced> a
3628920
Prelude Debug.Traced> asSharedExp a
fac' 5 + fac' 10
Prelude Debug.Traced>
</pre>
<h3>And what about exponentiation?</h3>
The new exponentiation function in GHC looks like this:
<pre>
(^) :: (Num a, Integral b) => a -> b -> a
x0 ^ y0 | y0 < 0    = error "Negative exponent"
        | y0 == 0   = 1
        | otherwise = f x0 y0
  where
    f x y | even y    = f (x * x) (y `quot` 2)
          | y == 1    = x
          | otherwise = g (x * x) ((y - 1) `quot` 2) x
    g x y z | even y    = g (x * x) (y `quot` 2) z
            | y == 1    = x * z
            | otherwise = g (x * x) ((y - 1) `quot` 2) (x * z)
</pre>
And if we try this definition we get
<pre>
Debug.Traced> asSharedExp $ 10 ^ 8
let _2 = 10 * 10; _1 = _2 * _2; in _1 * _1
Debug.Traced>
</pre>
Victory!
<p>
Edit: Package available on <a href="http://hackage.haskell.org/cgi-bin/hackage-scripts/package/traced-2008.7.4">hackage</a>.

<h2>Simple reflections of higher order</h2>
In a <a href="http://twan.home.fmf.nl/blog/haskell/simple-reflection-of-expressions.details">recent blog post</a> Twan van Laarhoven showed how to reflect Haskell expressions so that we can actually see them symbolically. This has now been included in lambdabot on the Haskell IRC channel (<a href="http://code.haskell.org/lambdabot/scripts/SimpleReflect.hs">source</a>).
Let's play with it for a moment:
<pre>
*SimpleReflect> x+y
x + y
*SimpleReflect> foldr f z [1,2,3]
f 1 (f 2 (f 3 z))
*SimpleReflect> let swap (x,y) = (y, x)
*SimpleReflect> swap (a,b)
(b,a)
*SimpleReflect> map swap [(a,b),(c,d)]
[(b,a),(d,c)]
*SimpleReflect> :t x
x :: Expr
</pre>
That's very cool! Read Twan's post to find out how it works. All we need to know is on the last line, <tt>x :: Expr</tt>. So <tt>Expr</tt> is a special type with special instances.
Let's try something else
<pre>
*SimpleReflect> \ x -> x + y
<interactive>:1:0:
No instance for (Show (Expr -> Expr))
...
</pre>
Oh, that's annoying, we can't print functions. But why can't we? If we made up some variable of type <tt>Expr</tt> and then applied the function we'd get something we could print.
Let's add some code to Twan's module. I've just randomly picked the variable <tt>a</tt>. (To make this compile you need the language extension FlexibleInstances.)
<pre>
instance Show (Expr -> Expr) where
    show f = "\\ " ++ show a ++ " -> " ++ show (f a)
</pre>
And try again with the example
<pre>
*SimpleReflect> \ x -> x + y
\ a -> a + y
</pre>
Pretty smooth. Except it doesn't really work.
<pre>
*SimpleReflect> \ x -> x + a
\ a -> a + a
</pre>
Those two functions are not the same. We can't just arbitrarily pick the variable <tt>a</tt> since it might be free in the expression.
<p>
So we need to pick a variable that is not free in the expression. How could we do that? No matter what we pick, it could be used. And we have no idea what is being used. Or do we? If we just turn the function into text we can look at the string and pick some variable that is unused in that string. This is really a gruesome hack, but who cares?
<p>
How do we print the function? Just invent some variable (I use <tt>_</tt>), turn the function into an expression, and tokenize the resulting string. We will then have a list of tokens not to use. To find a variable to use, just pick an infinite supply of variables, remove the ones being used, and then pick one of the remaining ones.
<pre>
instance Show (Expr -> Expr) where
    show f = "\\ " ++ show v ++ " -> " ++ show (f v)
      where v = var (head vars)
            vars = supply \\ tokenize (show $ f $ var "_")
            supply = [ "x" ++ show i | i <- [1..] ]
            tokenize "" = []
            tokenize s = case lex s of (x, s') : _ -> x : tokenize s'
</pre>
And try to fool it:
<pre>
*SimpleReflect> \ x -> x + y
\ x1 -> x1 + y
*SimpleReflect> \ x -> x + var "x1"
\ x2 -> x2 + x1
</pre>
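The <tt>lex</tt>-based scan can be exercised on its own. This standalone sketch repeats the <tt>tokenize</tt> and variable-supply idea from the instance above (the name <tt>freshVar</tt> is made up for the example):

```haskell
import Data.List ((\\))

-- Split a shown expression into Haskell tokens using the Prelude's lex.
tokenize :: String -> [String]
tokenize "" = []
tokenize s  = case lex s of (x, s') : _ -> x : tokenize s'

-- Pick the first variable from the supply that is not a token of the body.
freshVar :: String -> String
freshVar body = head ([ "x" ++ show i | i <- [1..] ] \\ tokenize body)

main :: IO ()
main = do
    print (tokenize "x1 + y")   -- ["x1","+","y"]
    print (freshVar "x1 + y")   -- "x2", since "x1" is taken
```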
OK, what about multiple arguments?
<pre>
*SimpleReflect> \ x y -> x + y + z
<interactive>:1:0:
No instance for (Show (Expr -> Expr -> Expr))
...
</pre>
Well, yeah, that's true. There is no such instance. But wait, why is the instance we have restricted to the type <tt>Expr->Expr</tt>? No particular reason, that's just what I wrote. In fact it works equally well for <tt>Expr->r</tt>, as long as we can show <tt>r</tt>, because that's the only thing we do with <tt>f v</tt>.
So we change the first line of the instance:
<pre>
instance (Show r) => Show (Expr -> r) where
</pre>
And now
<pre>
*SimpleReflect> \ x y -> x + y + z
\ x2 -> \ x1 -> x2 + x1 + z
*SimpleReflect> foldr (.) id [f::Expr->Expr,g,h] -- a little type help needed
\ x1 -> f (g (h x1))
*SimpleReflect> scanr (.) id [(*2),f::Expr->Expr,(+1)]
[\ x1 -> f (x1 + 1) * 2,\ x1 -> f (x1 + 1),\ x1 -> x1 + 1,\ x1 -> x1]
</pre>
Well, that wasn't too hard. So let's try another example.
<pre>
*SimpleReflect> \ (x,y) -> x+y+z
<interactive>:1:0:
No instance for (Show ((Expr, Expr) -> Expr))
...
</pre>
Hmmm, yes, the argument must be an <tt>Expr</tt>. That's annoying. We need to generalize the argument type. We want to be able to use <tt>Expr</tt>, pairs of <tt>Expr</tt>s, etc. Time for a type class. What does it need to do? It has to invent expressions, and never reuse variables doing so. So when we invent an <tt>Expr</tt> we need to consume one variable and leave the rest for others to consume. This sounds like a state monad. So we're going to use a state monad where the state is the (infinite) list of strings that are available for making variables.
<pre>
instance (Show a, ExprArg a, Show r) => Show (a -> r) where
    show f = "\\ " ++ show v ++ " -> " ++ show (f v)
      where v = evalState exprArg vars
            dummy = evalState exprArg $ repeat "_"
            vars = supply \\ tokenize (show $ f dummy)
            supply = [ "x" ++ show i | i <- [1..] ]
            tokenize "" = []
            tokenize s = case lex s of (x, s') : _ -> x : tokenize s'

class ExprArg a where
    exprArg :: State [String] a

instance ExprArg Expr where
    exprArg = do v:vs <- get; put vs; return (var v)
</pre>
Using this we're back where we were before, but now we can make some more instances.
<pre>
instance ExprArg () where
    exprArg = return ()

instance (ExprArg a, ExprArg b) => ExprArg (a, b) where
    exprArg = liftM2 (,) exprArg exprArg

instance (ExprArg a, ExprArg b, ExprArg c) => ExprArg (a, b, c) where
    exprArg = liftM3 (,,) exprArg exprArg exprArg
</pre>
And finally:
<pre>
*SimpleReflect> \ (x, y) -> x + y + z
\ (x1,x2) -> x1 + x2 + z
*SimpleReflect> curry f :: Expr -> Expr -> Expr
\ x2 -> \ x1 -> f (x2,x1)
*SimpleReflect> uncurry f :: (Expr, Expr) -> Expr
\ (x1,x2) -> f x1 x2
*SimpleReflect> \ () -> 1
\ () -> 1
</pre>
The last example is a curiosity since it does not involve <tt>Expr</tt> at all.
Well, that's enough for a Sunday night hack.

<h3>Some lambda calculus examples</h3>
<h4>Syntax</h4>
In a previous blog entry I described a simple evaluator and type checker for the lambda cube, i.e., various forms of lambda calculus.
<p>
Here I'm going to show some examples of code in pure typed λ-calculus. All the examples are typable in F<sub>ω</sub>; the full lambda cube is not necessary.
<p>
Before doing any examples we'd better have some syntax that is not too painful, because writing λ-expressions in raw Haskell is tedious. The syntax for variables, the * kind, and application is easy; I'll just use Haskell syntax. For λ-expressions the Haskell syntax doesn't allow explicit type annotations, but various Haskell compilers implement that extension, so I'll just pick that. So a λ will be written "<tt>\ (var::type) -> expr</tt>". And as in Haskell I'll allow multiple variables between the "<tt>\</tt>" and "<tt>-></tt>"; it's just shorthand for multiple lambdas.
<p>
So what about the dependent function type? The syntax <i>(x:t)→u</i> suggests <tt>(x::t)->u</tt>, so I'll use that. And when the variable doesn't occur we'll write <tt>t->u</tt> as usual.
For type variables Haskell (well, not Haskell 98, but extensions) uses <tt>forall (a::*) . t</tt>, so I'll allow that too.
<p>
An example, the identity function:
<pre>
\ (a::*) (x::a) -> x
</pre>
with type
<pre>
forall (a::*) . a->a
</pre>
And using it
<pre>
id Int 5
</pre>
Writing a pretty printer and parser for this is pretty straightforward so I'll skip that and just point you to the <a href="http://darcs.augustsson.net/Darcs/LambdaCube">code</a>. BTW, instead of using Parsec for the parser like everyone else, I used ReadP. The ReadP library is very nice, partly because the alternative operator is actually commutative (unlike Parsec's). But the error messages suck.
<h4>Enter <tt>let</tt></h4>
Now if we want to use, say, the identity function more than once we need to name it. There is a mechanism for that, namely λ. But it looks awkward. Look:
<pre>
(\ (id :: forall (a::*) . a->a) -> ... id ... id ... id ...) (\ (a::*) (x::a) -> x)
</pre>
What makes it awkward is that the name, <tt>id</tt>, is far away from the body, <tt>\ (a::*) (x::a) -> x</tt>. From Haskell we are more used to the <tt>let</tt> and <tt>where</tt> expressions.
So let's add <tt>let</tt>. Instead of what we have above we'll write
<pre>
let id :: forall (a::*) . a->a = \ (a::*) (x::a) -> x
in ... id ... id ... id ...
</pre>
The <tt>let</tt> construct could just be syntactic sugar for a lambda and an application, but I've decided to add it as a constructor to the expression type instead.
<pre>
| Let Sym Type Expr Expr
</pre>
Adding a new constructor means that we have to modify all the functions operating on <tt>Expr</tt>, and it's extra much work because <tt>Let</tt> is a variable binding construct.
For substitution we just cheat a little and use the expansion into an application and a λ:
<pre>
sub (Let i t e b) = let App (Lam i' t' b') e' = sub (App (Lam i t b) e)
                    in  Let i' t' e' b'
</pre>
And for normal form we use the same trick.
<pre>
spine (Let i t e b) as = spine (App (Lam i t b) e) as
</pre>
And for now, let's extend the type checker in the same way.
<p>
<br>
<p>
To make definitions a little less verbose I'll allow multiple bindings. E.g.
<pre>
let x :: Int = g a; y :: Int = f x in h x y
</pre>
It's just multiple nested <tt>Let</tt>s in the obvious way.
Another shorthand. I'll allow the identity function to be written
<pre>
let id (a::*) (x::a) :: a = x
in ...
</pre>
The translation is pretty easy
<pre>
let f (x1::t1) ... (xn::tn) :: t = b in e
</pre>
is
<pre>
let f :: forall (x1::t1) ... (xn::tn) . t = \ (x1::t1) ... (xn::tn) -> b in e
</pre>
And finally, to make it even more like Haskell, I'll allow the type signature to be on a line of its own, omitting types on the bound variables.
<pre>
let id :: forall (a::*) . a -> a;
id a x = x;
in ...
</pre>
It's pretty easy to translate to the bare expression type. Why all these little syntactic extras? Well, remember Mary Poppins "A spoonful of sugar makes the medicine go down."
Phew! Enough syntax, on to the examples.
<h4>Bool</h4>
We could start by throwing in all kinds of primitive types into our little language, but who knows what might happen then. All the nice properties that have been shown about the language might not hold anymore. So instead we'll code all the types we need with what we already have.
<p>
The Bool type has two values: False and True. So we need to find a type that has exactly two different values. Fortunately that's easy: <tt>a->a->a</tt> (I'll be a bit sloppy and leave off top level quantifier sometimes, just like Haskell; That should be <tt>forall (a::*) . a->a->a</tt>).
<p>
Why does that have two possible values? Well, we have a function type that must return an <tt>a</tt> for any possible <tt>a</tt> it's given, so it can't conjure up the return value out of thin air. It has to return one of its arguments. And there are two arguments to choose from, so there are two different values in that type.
Here's <tt>Bool, False, True</tt>:
<pre>
Bool :: *;
Bool = forall (a::*) . a->a->a;
False :: Bool;
False = \ (a::*) (x::a) (y::a) -> x;
True :: Bool;
True = \ (a::*) (x::a) (y::a) -> y;
</pre>
If the definition for <tt>Bool</tt> was written in Haskell it would be
<pre>
type Bool = forall a . a->a->a
false :: Bool
false = \ x y -> x
true :: Bool
true = \ x y -> y
</pre>
It would be easy to add more sugar and allow <tt>type</tt> which just means you're defining something of type <tt>*</tt>. And you could also make an omitted type in a <tt>forall</tt> mean type <tt>*</tt>. But I've not added these little extras.
<p>
Defining the <tt>if</tt> function is trivial; it's just a matter of permuting the arguments a little, because the boolean values come with the "if" built in.
<pre>
if :: forall (a::*) . Bool -> a -> a -> a;
if a b t f = b a f t;
</pre>
Note how the type signature is exactly the same as you'd find in Haskell. A difference is that we have explicit type abstraction and type application. For instance, <tt>if</tt> takes a first argument that is the type of the branches. So when using <tt>if</tt> we must pass in a type.
<p>
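These cube definitions transcribe almost directly into Haskell (assuming the RankNTypes extension; the names with a C suffix are just for this sketch, to avoid clashing with the Prelude):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church booleans: a value of this type must return one of its two arguments.
type BoolC = forall a . a -> a -> a

falseC, trueC :: BoolC
falseC x y = x
trueC  x y = y

-- False picks its first argument, so the else-branch goes first.
ifC :: BoolC -> a -> a -> a
ifC b t f = b f t

main :: IO ()
main = putStrLn (ifC trueC "then" "else")   -- prints "then"
```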
Given this we can try to type check some simple code:
<pre>
let Bool :: *;
Bool = forall (a::*) . a->a->a;
False :: Bool;
False = \ (a::*) (x::a) (y::a) -> x;
in False
</pre>
Here is what happens:
<pre>
*CubeExpr> typeCheck $ read "let Bool :: *; Bool = forall (a::*) . a->a->a; False :: Bool;False = \\ (a::*) (x::a) (y::a) -> x; in False"
*** Exception: Type error:
Bad function argument type:
Function: \ (False :: Bool) -> False
argument: \ (a :: *) (x :: a) (y :: a) -> x
expected type: Bool
got type: forall (a :: *) . a->a->a
</pre>
What is it whining about? Expected <tt>Bool</tt>, got <tt>forall (a :: *) . a->a->a</tt>. But it says right there in the code that <tt>Bool</tt> is exactly that. What's going on?
<br>
Well, the type checker knows the <b>type</b> of everything, but not the <b>value</b> of anything. So the type checker knows that type of <tt>Bool</tt> is <tt>*</tt>, but it doesn't know what it is equal to.
<p>
The problem is that when we have a <tt>let</tt> binding we know the value of the defined variable and to be able to do dependent type checking the type checker needs to know it too. We need to change the type checking of <tt>let</tt>. Here's a simple solution:
<pre>
tCheck r (Let s t a e) = do
    tCheck r t
    ta <- tCheck r a
    when (not (betaEq ta t)) $ throwError $ "Bad let def\n" ++ show (ta, t)
    te <- tCheck r (subst s a e)
    tCheck r (Pi s t te)
    return te
</pre>
Note how we substitute the value of the definition (<tt>a</tt>) into the body before type checking it. This is a sledgehammer approach. A better one would be to change the environment in the type checker to carry values when they are known. These values could then be used when computing the normal form of an expression. But let's keep it simple for now.
<p>
Let's try again:
<pre>
*CubeExpr> typeCheck' $ read "let Bool :: *; Bool = forall (a::*) . a->a->a; False :: Bool;False = \\ (a::*) (x::a) (y::a) -> x; in False"
forall (a :: *) . a->a->a
</pre>
That worked!
<h4>A simple top level</h4>
To make experimentation easier I've added a simple top level where you can evaluate expressions, make definitions, load files, etc. The files are just a bunch of definitions, the way they would go inside a <tt>let</tt>.
<p>
Sample session:
<pre>
Welcome to the Cube.
Use :help to get help.
Cube> \ (a::*) (x::a) -> x
\ (a :: *) (x :: a) -> x
::
forall (a :: *) . a->a
Cube>
</pre>
When evaluating an expression it will first print the value, then <tt>::</tt>, and finally the type.
<br>
Let's try some more. Here's the file <tt>bool.cube</tt>.
<pre>
-- The Bool type.
Bool :: *;
Bool = forall (boolT::*) . boolT->boolT->boolT;
False :: Bool;
False = \ (boolT::*) (false::boolT) (true::boolT) -> false;
True :: Bool;
True = \ (boolT::*) (false::boolT) (true::boolT) -> true;
if :: forall (a::*) . Bool -> a -> a -> a;
if a b t f = b a f t;
not :: Bool -> Bool;
not b = if Bool b False True;
and :: Bool -> Bool -> Bool;
and x y = if Bool x y False;
or :: Bool -> Bool -> Bool;
or x y = if Bool x True y;
</pre>
Note how I've named some of the type variables with more suggestive names. More about that in a moment.
<pre>
Cube> :load bool.cube
Cube> True
\ (boolT :: *) (false :: boolT) (true :: boolT) -> true
::
forall (boolT :: *) . boolT->boolT->boolT
Cube> and True False
\ (boolT :: *) (false :: boolT) (true :: boolT) -> false
::
forall (boolT :: *) . boolT->boolT->boolT
</pre>
By staring carefully at these you can convince yourself that it's right.
What's making it a little hard to read are all those leading lambdas (and the corresponding function types). But there's an option to suppress the printing of them.
<pre>
Cube> :skip
Cube> True
true
::
boolT
Cube> and True False
false
::
boolT
</pre>
Look, it got deceptively readable.
<h4>Characters</h4>
So we can define booleans. What about something like characters? You can define those too. What is the ASCII type? It's a type with 128 different values. We saw how we can make a type with two values (Bool); we can do the same for 128. Since that takes a lot of room, I'll do it for just four values.
<pre>
Char :: *;
Char = forall (charT::*) . charT->charT->charT->charT->charT;
'a' :: Char;
'a' = \ (charT::*) (_'a'::charT) (_'b'::charT) (_'c'::charT) (_'d'::charT) -> _'a';
'b' :: Char;
'b' = \ (charT::*) (_'a'::charT) (_'b'::charT) (_'c'::charT) (_'d'::charT) -> _'b';
'c' :: Char;
'c' = \ (charT::*) (_'a'::charT) (_'b'::charT) (_'c'::charT) (_'d'::charT) -> _'c';
'd' :: Char;
'd' = \ (charT::*) (_'a'::charT) (_'b'::charT) (_'c'::charT) (_'d'::charT) -> _'d';
eqChar :: Char -> Char -> Bool;
eqChar x y = x Bool (y Bool True False False False)
(y Bool False True False False)
(y Bool False False True False)
(y Bool False False False True);
</pre>
Since single quotes are allowed in identifiers in my little language the <tt>'a'</tt> is just an identifier as usual.
Some sample use:
<pre>
Cube> :load char.cube
Cube> 'a'
_'a'
::
charT
Cube> eqChar 'a' 'a'
true
::
boolT
Cube> eqChar 'a' 'b'
false
::
boolT
</pre>
<p>
A different way to use characters is just to assume that there is some character type and that it has some values. How can we do that? By using a lambda expression.
<pre>
\ (Char :: *) -> \ ('a' :: Char) ('b' :: Char) ... -> ...
</pre>
One final little syntactic sugar is to allow lambda expressions to be written with <tt>let</tt>.
So
<pre>
let x :: T;
in e
</pre>
is the same as
<pre>
\ (x :: T) -> e
</pre>
So for Char we can write
<pre>
Char :: *;
'a' :: Char;
'b' :: Char;
'c' :: Char;
'd' :: Char;
eqChar :: Char -> Char -> Bool;
</pre>
This is a very handy notation during program development, btw. Inside a <tt>let</tt> you can first just write the type of something and then use it freely. This something will then be lambda abstracted until such a time that you provide the definition.
<p>
Let's try the "fake" characters.
<pre>
Cube> :load extchar.cube
Cube> 'a'
'a'
::
Char
Cube> :skip
Cube> 'a'
\ (Char :: *) ('a' :: Char) ('b' :: Char) ('c' :: Char) ('d' :: Char)
(eqChar :: Char->Char->forall (boolT :: *) . boolT->boolT->boolT)
-> 'a'
::
forall (Char :: *) .
Char->Char->Char->Char->
(Char->Char->forall (boolT :: *) . boolT->boolT->boolT)->
Char
Cube> :skip
Cube> eqChar 'a' 'b'
eqChar 'a' 'b'
::
forall (boolT :: *) . boolT->boolT->boolT
</pre>
Now <tt>eqChar</tt> no longer gets evaluated, naturally, because it has no definition; it's λ bound.
<p>
Now we know how the <tt>Char</tt> type could be implemented (or supplied from the outside). It would then be all right to add it as a primitive (with the same behavior), because it will not ruin any properties. (Speaking of ruining properties, the <tt>seq</tt> function in Haskell is an example of a function that cannot be defined in the λ-calculus. And sure enough, adding it to Haskell ruined the validity of η-reduction.)
<p>
It's not difficult to extend the <tt>Expr</tt> type with some <tt>Prim</tt> constructor for primitive functions, and adding some special cases in the syntax and type checking for them. But since this is not intended to be a usable language I'll resist the temptation (for now).
<h4>Pairs</h4>
OK, booleans and characters were pretty easy. Let's do pairs next. The <tt>Pair</tt> type is parameterized over two types: the types of the first and the second component. As with booleans, the representation of pairs comes with the case analysis "built in". (Case analysis on pairs is <tt>flip uncurry</tt> in Haskell.) If we can correctly implement building pairs, <tt>fst</tt>, and <tt>snd</tt> we are done.
<pre>
Pair :: * -> * -> *;
Pair a b = forall (pairT::*) . (a->b->pairT) -> pairT;
PairC :: forall (a::*) (b::*) . a -> b -> Pair a b;
PairC a b x y = \ (pairT::*) (pair :: a->b->pairT) -> pair x y;
fst :: forall (a::*) (b::*) . Pair a b -> a;
fst a b p = p a (\ (x::a) (y::b) -> x);
snd :: forall (a::*) (b::*) . Pair a b -> b;
snd a b p = p b (\ (x::a) (y::b) -> y);
</pre>
So the type is called <tt>Pair</tt> and the constructor <tt>PairC</tt>. Since they live in the same name space we can't use Haskell's convention of giving them the same name (which is <tt>(,)</tt> in Haskell).
<p>
Again, in Haskell the first definition would be
<pre>
type Pair a b = forall pairT . (a->b->pairT) -> pairT
</pre>
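The whole pair vocabulary also transcribes into runnable Haskell. This sketch uses a newtype instead of the type synonym above, which sidesteps impredicative instantiation (the C-suffixed names are just for the example):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church pair: a function that hands both components to a continuation.
newtype PairC a b = PairC (forall r . (a -> b -> r) -> r)

pairC :: a -> b -> PairC a b
pairC x y = PairC (\ p -> p x y)

fstC :: PairC a b -> a
fstC (PairC p) = p (\ x _ -> x)

sndC :: PairC a b -> b
sndC (PairC p) = p (\ _ y -> y)

main :: IO ()
main = do
    let p = pairC (1 :: Int) "one"
    print (fstC p)      -- 1
    putStrLn (sndC p)   -- one
```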
Staring at these definitions makes it obvious(?) that they work. But we can test it too.
<pre>
Cube> :let S :: *; s :: S; T :: *; t :: T
Cube> :load pair.cube
Cube> :let p :: Pair S T; p = PairC S T s t
Cube> p
pair s t
::
pairT
Cube> fst S T p
s
::
S
Cube> snd S T p
t
::
T
</pre>
The <tt>:let</tt> command is used to make bindings without having to load them from a file. The first <tt>:let</tt> just introduces some values to play with.
<p>
Again, the big difference compared to Haskell is that all types are explicit. (But users of templates in C++ or generics in Java/C# might appreciate it?)
<h4>Maybe</h4>
We can follow the same pattern and define Haskell's <tt>Maybe</tt> type. In fact, any non-recursive Haskell data type has a mechanical and easy translation into lambda calculus. We've seen booleans, characters, and pairs above.
<pre>
Maybe :: * -> *;
Maybe a = forall (maybeT::*) . maybeT->(a->maybeT)->maybeT;
Nothing :: forall (a::*) . Maybe a;
Nothing a = \ (maybeT::*) (nothing::maybeT) (just::a->maybeT) -> nothing;
Just :: forall (a::*) . a -> Maybe a;
Just a x = \ (maybeT::*) (nothing::maybeT) (just::a->maybeT) -> just x;
maybe :: forall (a::*) (r::*) . r -> (a->r) -> Maybe a -> r;
maybe a r nothing just s = s r nothing just;
</pre>
<h4>Natural numbers</h4>
Let's carry on with some natural numbers. Natural numbers are trickier because it's a recursive type. In Haskell:
<pre>
data Nat = Zero | Succ Nat
</pre>
Let's try the cube version:
<pre>
Cube> :load nat.cube
Cube> 2
succ (succ zero)
::
natT
Cube> add 2 3
succ (succ (succ (succ (succ zero))))
::
natT
</pre>
What does the definitions for natural numbers look like?
<pre>
Nat :: *;
Nat = forall (natT::*) . natT -> (natT->natT) -> natT;
0 :: Nat;
0 = \ (natT::*) (zero::natT) (succ::natT->natT) -> zero;
Succ :: Nat -> Nat;
Succ n = \ (natT::*) (zero::natT) (succ::natT->natT) -> succ (n natT zero succ);
natprim :: forall (r::*) . (r->r) -> r -> Nat -> r;
natprim r succ zero n = n r zero succ;
add :: Nat -> Nat -> Nat;
add x y = x Nat y Succ;
mul :: Nat -> Nat -> Nat;
mul x y = x Nat 0 (add y);
power :: Nat -> Nat -> Nat;
power x y = y Nat (Succ 0) (mul x);
isZero :: Nat -> Bool;
isZero n = n Bool True (\ (a::Bool) -> False);
1 :: Nat;
1 = Succ 0;
2 :: Nat;
2 = Succ 1;
3 :: Nat;
3 = Succ 2;
</pre>
The <tt>natprim</tt> function corresponds to <tt>foldr</tt> for lists and is our means of recursion. This type for natural numbers comes with primitive recursion built in, just as the non-recursive data types earlier came with case analysis built in.
<p>
I'll skip defining subtraction &amp; co.; they are possible, but a little tedious (and very inefficient).
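A Haskell transcription of nat.cube can be played with directly (again assuming RankNTypes; the newtype keeps <tt>add</tt> and <tt>mul</tt> from needing impredicative instantiation, and the names are just for this sketch):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church numerals: a number n iterates its step function n times.
newtype Nat = Nat (forall a . a -> (a -> a) -> a)

zero :: Nat
zero = Nat (\ z _s -> z)

suc :: Nat -> Nat
suc (Nat n) = Nat (\ z s -> s (n z s))

-- Iterate suc x times, starting from y.
add :: Nat -> Nat -> Nat
add (Nat x) y = x y suc

-- Add y to zero, x times.
mul :: Nat -> Nat -> Nat
mul (Nat x) y = x zero (add y)

toInt :: Nat -> Int
toInt (Nat n) = n 0 (+1)

main :: IO ()
main = do
    let two = suc (suc zero); three = suc two
    print (toInt (add two three))   -- 5
    print (toInt (mul two three))   -- 6
```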
<h4>Lists</h4>
Lists are like a hybrid of natural numbers and the <tt>Maybe</tt> type. They follow the same pattern as before.
<pre>
List :: * -> *;
List e = forall (listT::*) . listT -> (e->listT->listT) -> listT;
Nil :: forall (e::*) . List e;
Nil e = \ (listT::*) (nil::listT) (cons::e->listT->listT) -> nil;
Cons :: forall (e::*) . e -> List e -> List e;
Cons e x xs = \ (listT::*) (nil::listT) (cons::e->listT->listT) -> cons x (xs listT nil cons);
</pre>
And some sample functions.
<pre>
foldr :: forall (a::*) (b::*) . (a->b->b) -> b -> List a -> b;
foldr a b f z xs = xs b z f;
map :: forall (a::*) (b::*) . (a->b) -> List a -> List b;
map a b f xs = foldr a (List b) (\ (x::a) (r::List b) -> Cons b (f x) r) (Nil b) xs;
append :: forall (a::*) . List a -> List a -> List a;
append a xs ys = foldr a (List a) (Cons a) ys xs;
foldl :: forall (a::*) (b::*) . (b->a->b) -> b -> List a -> b;
foldl a b f z xs = foldr a (b->b) (\ (x::a) (g::b->b) (v::b) -> g (f v x)) (\ (x::b) -> x) xs z;
reverse :: forall (a::*) . List a -> List a;
reverse a xs = foldl a (List a) (\ (r :: List a) (x :: a) -> Cons a x r) (Nil a) xs;
</pre>
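The definition of <tt>foldl</tt> above deserves a remark: the right fold builds up a function of type <tt>b->b</tt> that threads the accumulator from left to right. The same trick in plain Haskell (a sketch; the names are mine, chosen to avoid clashing with the Prelude):
<pre>
-- foldl written via foldr: each element x turns the continuation g
-- into \v -> g (f v x), so the accumulator flows left to right.
foldlViaFoldr :: (b -> a -> b) -> b -> [a] -> b
foldlViaFoldr f z xs = foldr (\x g v -> g (f v x)) id xs z

-- reverse as a left fold, as in the Cube code above.
reverseVia :: [a] -> [a]
reverseVia = foldlViaFoldr (flip (:)) []
</pre>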
And a test:
<pre>
Welcome to the Cube.
Use :help to get help.
Cube> :load bool.cube
Cube> :load nat.cube
Cube> :load list.cube
Cube> :load pair.cube
Cube> :load listmisc.cube
Cube> :load extchar.cube
Cube> :skip
Cube> replicate Char 3 'b'
cons 'b' (cons 'b' (cons 'b' nil))
::
listT
Cube> :quit
</pre>
And with that, I'll quit too for now.
<p>
(Permalink: https://augustss.blogspot.com/2007/11/some-lambda-calculus-examples-syntax-in.html, by augustss.)
<p>
Tue, 06 Nov 2007 16:28:00 +0000. Labels: Benchmark, Haskell.
<h3>Benchmarking ray tracing, Haskell vs. OCaml</h3>
On his web site about OCaml Jon Harrop has a <a href="http://www.ffconsultancy.com/languages/ray_tracer/index.html">benchmark</a> for a simple ray tracing program written in a number of languages. When I saw it I wondered why Haskell was doing so badly, especially since this benchmark was taken as some kind of proof that Haskell performs poorly in "real life".
So I have rerun the benchmarks. I've also rewritten the Haskell versions of the programs. The Haskell versions on Jon's web site were OK, but they were too far from the OCaml versions for my taste. I prefer to keep the programs very similar in a situation like this.
My rewrite of the benchmarks from OCaml to Haskell was done with a minimum of intelligence. Here are the only things I did that needed creative thought:
<ol>
<li>Use the type <tt>Vector</tt> to represent vectors instead of a tuple. This allows the components to be strict.
</li>
<li>Use the type <tt>Scene</tt> instead of a tuple to represent a scene. The tuple used in the OCaml code relies on the dubious feature of equi-recursive types (even Xavier considers it strange enough to hide behind the <tt>-rectypes</tt> flag).
</li>
<li>Rewrite the loop that computes a pixel's value using an accumulating updatable variable into a list comprehension that sums the list.
</li>
<li>Finally, the compiler flags needed a bit of tweaking to get good performance, even though "<tt>-O3 -funbox-strict-fields -fexcess-precision -optc-ffast-math</tt>" were pretty obvious.
</li>
</ol>
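Point 3 can be illustrated with a small sketch (hypothetical code, not the actual benchmark source; <tt>trace</tt> stands in for whatever computes the contribution of one sub-pixel ray):
<pre>
-- Instead of a mutable accumulator, supersample a pixel by summing a
-- list comprehension over the n-by-n grid of sub-pixel offsets, then
-- average the result.
pixelValue :: Int -> (Double -> Double -> Double) -> Double
pixelValue n trace =
  sum [ trace (fromIntegral i / fromIntegral n)
              (fromIntegral j / fromIntegral n)
      | i <- [0 .. n - 1], j <- [0 .. n - 1] ]
    / fromIntegral (n * n)
</pre>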
In addition to this I made the code look a little more Haskellish, e.g., using overloading to allow <tt>+</tt> and <tt>-</tt> on vectors. These are really just minor syntactic changes, but they make the code more readable.
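A sketch of what the strict vector type and the overloaded operators might look like (hypothetical code, not the actual benchmark source):
<pre>
-- Strict components, which -funbox-strict-fields can unbox into three
-- flat Doubles.
data Vector = V !Double !Double !Double
  deriving (Eq, Show)

instance Num Vector where
  V a b c + V x y z = V (a + x) (b + y) (c + z)
  V a b c - V x y z = V (a - x) (b - y) (c - z)
  V a b c * V x y z = V (a * x) (b * y) (c * z)  -- component-wise
  abs (V a b c)     = V (abs a) (abs b) (abs c)
  signum (V a b c)  = V (signum a) (signum b) (signum c)
  fromInteger i     = let d = fromInteger i in V d d d

dot :: Vector -> Vector -> Double
dot (V a b c) (V x y z) = a * x + b * y + c * z
</pre>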
<p>
To make the program size comparison fair I removed some dead code from the OCaml code.
<p>
I then reran the benchmarks using Haskell, OCaml, and C++.
<p>
The benchmarks are five programs that start from very simple ray tracing and specialize the program more and more to speed it up.
<p>
The numbers are in the tables below. The time is execution time in seconds, the characters are non-white characters in the file, and the lines are the number of lines in the file. To ease comparison I also include the relative numbers compared to OCaml (smaller numbers are better).
<p>
Interestingly, and unlike Jon's benchmark, the Haskell code is always smaller than the OCaml code. Furthermore, the Haskell code ranges from much faster to slightly faster than the OCaml code. Again, this is very unlike Jon's benchmark. I find the unoptimized version of the benchmark especially interesting since Haskell is 5 times(!) faster than OCaml. I've not investigated why, but I suspect laziness.
<h4>Results</h4>
The programs, ray1-ray5, are variations on the ray tracer as given on Jon's web site. I've used the same size metrics as Jon does.
<ul>
<li>Haskell: My Haskell code compiled with ghc 6.8.1</li>
<li>Haskell old: Jon's Haskell code, compiled with ghc 6.8.1</li>
<li>Haskell old 6.6: Jon's Haskell code, compiled with ghc 6.6.1</li>
<li>OCaml: Jon's OCaml code</li>
<li>C++: Jon's C++ code</li>
</ul>
<ul>
<li>Time: execution time in seconds</li>
<li>Char: number of non-white characters in the program</li>
<li>Lines: number of lines in the program</li>
<li>Rel T: execution time relative to OCaml</li>
<li>Rel C: non-white characters relative to OCaml</li>
<li>Rel L: lines relative to OCaml</li>
<li>Mem: Maximum resident memory</li>
</ul>
<TABLE BORDER=4>
<TR><TH>ray1 </TH> <TD> Time</TD> <TD> Chars</TD> <TD> Lines</TD> <TD> Rel T</TD> <TD> Rel C</TD> <TD> Rel L</TD><TD>Mem</TD></TR>
<TR><TD>Haskell </TD> <TD> 15.3</TD> <TD> 1275</TD> <TD> 51</TD> <TD> 0.202</TD> <TD> 0.990</TD> <TD> 1.020</TD><TD>5M</TD></TR>
<TR><TD>Haskell, old </TD> <TD> 15.8</TD> <TD> 1946</TD> <TD> 88</TD> <TD> 0.208</TD> <TD> 1.511</TD> <TD> 1.760</TD><TD>9M</TD></TR>
<TR><TD>Haskell, old 6.6 </TD> <TD> 28.1</TD> <TD> 1946</TD> <TD> 88</TD> <TD> 0.370</TD> <TD> 1.511</TD> <TD> 1.760</TD><TD>9M</TD></TR>
<TR><TD>OCaml </TD> <TD> 75.9</TD> <TD> 1288</TD> <TD> 50</TD> <TD> 1.000</TD> <TD> 1.000</TD> <TD> 1.000</TD><TD>18M</TD></TR>
<TR><TD>C++ </TD> <TD> 8.1</TD> <TD> 2633</TD> <TD> 122</TD> <TD> 0.106</TD> <TD> 2.044</TD> <TD> 2.440</TD><TD>8M</TD></TR>
</TABLE><P>
<TABLE BORDER=4>
<TR><TH>ray2 </TH> <TD> Time</TD> <TD> Chars</TD> <TD> Lines</TD> <TD> Rel T</TD> <TD> Rel C</TD> <TD> Rel L</TD><TD>Mem</TD></TR>
<TR><TD>Haskell </TD> <TD> 11.5</TD> <TD> 1457</TD> <TD> 50</TD> <TD> 0.206</TD> <TD> 0.912</TD> <TD> 0.943</TD><TD>12M</TD></TR>
<TR><TD>Haskell, old </TD> <TD> 12.0</TD> <TD> 2173</TD> <TD> 99</TD> <TD> 0.215</TD> <TD> 1.360</TD> <TD> 1.868</TD><TD>35M</TD></TR>
<TR><TD>Haskell, old 6.6 </TD> <TD> 21.1</TD> <TD> 2173</TD> <TD> 99</TD> <TD> 0.379</TD> <TD> 1.360</TD> <TD> 1.868</TD><TD>35M</TD></TR>
<TR><TD>OCaml </TD> <TD> 55.8</TD> <TD> 1598</TD> <TD> 53</TD> <TD> 1.000</TD> <TD> 1.000</TD> <TD> 1.000</TD><TD>15M</TD></TR>
<TR><TD>C++ </TD> <TD> 6.1</TD> <TD> 3032</TD> <TD> 115</TD> <TD> 0.108</TD> <TD> 1.897</TD> <TD> 2.170</TD><TD>8M</TD></TR>
</TABLE><P>
<TABLE BORDER=4>
<TR><TH>ray3 </TH> <TD> Time</TD> <TD> Chars</TD> <TD> Lines</TD> <TD> Rel T</TD> <TD> Rel C</TD> <TD> Rel L</TD><TD>Mem</TD></TR>
<TR><TD>Haskell </TD> <TD> 9.7</TD> <TD> 1794</TD> <TD> 62</TD> <TD> 0.970</TD> <TD> 0.919</TD> <TD> 0.939</TD><TD>12M</TD></TR>
<TR><TD>Haskell, old </TD> <TD> 11.1</TD> <TD> 2312</TD> <TD> 103</TD> <TD> 1.112</TD> <TD> 1.184</TD> <TD> 1.561</TD><TD>35M</TD></TR>
<TR><TD>Haskell, old 6.6 </TD> <TD> 19.7</TD> <TD> 2312</TD> <TD> 103</TD> <TD> 1.984</TD> <TD> 1.184</TD> <TD> 1.561</TD><TD>35M</TD></TR>
<TR><TD>OCaml </TD> <TD> 10.0</TD> <TD> 1953</TD> <TD> 66</TD> <TD> 1.000</TD> <TD> 1.000</TD> <TD> 1.000</TD><TD>15M</TD></TR>
<TR><TD>C++ </TD> <TD> 5.4</TD> <TD> 3306</TD> <TD> 143</TD> <TD> 0.545</TD> <TD> 1.693</TD> <TD> 2.167</TD><TD>8M</TD></TR>
</TABLE><P>
<TABLE BORDER=4>
<TR><TH>ray4 </TH> <TD> Time</TD> <TD> Chars</TD> <TD> Lines</TD> <TD> Rel T</TD> <TD> Rel C</TD> <TD> Rel L</TD><TD>Mem</TD></TR>
<TR><TD>Haskell </TD> <TD> 8.5</TD> <TD> 1772</TD> <TD> 66</TD> <TD> 0.985</TD> <TD> 0.867</TD> <TD> 0.957</TD><TD>12M</TD></TR>
<TR><TD>Haskell, old </TD> <TD> 11.7</TD> <TD> 2387</TD> <TD> 110</TD> <TD> 1.360</TD> <TD> 1.168</TD> <TD> 1.594</TD><TD>36M</TD></TR>
<TR><TD>Haskell, old 6.6 </TD> <TD> 19.2</TD> <TD> 2387</TD> <TD> 110</TD> <TD> 2.235</TD> <TD> 1.168</TD> <TD> 1.594</TD><TD>35M</TD></TR>
<TR><TD>OCaml </TD> <TD> 8.6</TD> <TD> 2043</TD> <TD> 69</TD> <TD> 1.000</TD> <TD> 1.000</TD> <TD> 1.000</TD><TD>11M</TD></TR>
<TR><TD>C++ </TD> <TD> 5.0</TD> <TD> 3348</TD> <TD> 149</TD> <TD> 0.584</TD> <TD> 1.639</TD> <TD> 2.159</TD><TD>8M</TD></TR>
</TABLE><P>
<TABLE BORDER=4>
<TR><TH>ray5 </TH> <TD> Time</TD> <TD> Chars</TD> <TD> Lines</TD> <TD> Rel T</TD> <TD> Rel C</TD> <TD> Rel L</TD></TR>
<TR><TD>Haskell </TD> <TD> 7.0</TD> <TD> 2246</TD> <TD> 95</TD> <TD> 0.999</TD> <TD> 0.878</TD> <TD> 0.950</TD></TR>
<TR><TD>OCaml </TD> <TD> 7.0</TD> <TD> 2559</TD> <TD> 100</TD> <TD> 1.000</TD> <TD> 1.000</TD> <TD> 1.000</TD></TR>
<TR><TD>C++ </TD> <TD> 4.7</TD> <TD> 3579</TD> <TD> 142</TD> <TD> 0.674</TD> <TD> 1.399</TD> <TD> 1.420</TD></TR>
</TABLE><P>
The source code is available in <a href="http://darcs.augustsson.net/Darcs/RayBen/">a Darcs repository</a>.
<h4>Software and hardware details</h4>
Hardware: MacBook, Intel Core Duo 2GHz, 2MB L2 Cache, 1GB 667MHz DRAM
<p>
Software:
<ul>
<li>Haskell compiler: ghc-6.8.1</li>
<li>OCaml compiler: 3.10.0</li>
<li>g++: gcc version 4.0.1 (Apple Computer, Inc. build 5367)</li>
</ul>
Compilation commands:
<ul>
<li>ghc: -O3 -fvia-C -funbox-strict-fields -optc-O3 -fexcess-precision -optc-ffast-math -funfolding-keeness-factor=10</li>
<li>OCaml: -rectypes -inline 100 -ffast-math -ccopt -O3</li>
<li>g++: -O3 -ffast-math</li>
</ul>
Target architecture is x86 (even though the processor is x86_64 capable).
<br>
<h4>Some observations</h4>
Haskell should really have the definitions of <tt>infinity</tt> and <tt>epsilon_float</tt> in a library. They are quite useful. Also, having them in a library would have made the Haskell code somewhat shorter and faster.
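For reference, one way to define them in Haskell 98 (a sketch; the names mirror the OCaml ones):
<pre>
-- Hypothetical Haskell 98 definitions of the two missing constants.
infinity :: Double
infinity = 1 / 0

-- Machine epsilon: keep halving until adding half of e no longer
-- changes 1.
epsilon_float :: Double
epsilon_float = until (\e -> 1 + e / 2 == 1) (/ 2) 1
</pre>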
<p>
Converting these programs from OCaml to Haskell was very mechanical; it could almost be done with just <tt>sed</tt>.
<p>
I'm glad version 5 of the benchmark didn't show much improvement, because it's a really ugly rewrite. :)
<p>
Note that the code is all Haskell98, no strange extensions (even though -funbox-strict-fields deviates subtly from H98).
<h4>Conclusion</h4>
Benchmarking is tricky. I'm not sure why my and Jon's numbers are so different. Different hardware, slightly different programs, different software.
<p>
Haskell is doing just fine against OCaml on this benchmark; the Haskell programs are always smaller and faster.
<p>
Edit: Updated tables with more numbers.
<p>
PS: Phil Armstrong wrote the Haskell code on Jon's web site, and I took some code from his original.
<p>
(Permalink: https://augustss.blogspot.com/2007/11/benchmarking-ray-tracing-haskell-vs.html, by augustss.)