Most mainstream functional languages employ an eager (a.k.a. strict) evaluation strategy, where an expression is evaluated entirely even if its resulting value is not needed or only parts of it are needed. As we will see, there are sometimes advantages in lazily evaluating certain expressions. There are two important aspects of lazy evaluation:
suspending (a.k.a. delaying) a computation until its result is actually needed (a.k.a. demanded or forced); and
memoizing (a.k.a. caching) the result of a suspended computation in case its value is demanded again.
As an example to motivate and illustrate lazy evaluation, we will work through an encoding of natural numbers and some simple operations on them.
type Nat
fromInt : Int -> Nat
toInt : Nat -> Int
toString : Nat -> String
plus : Nat -> Nat -> Nat
equals : Nat -> Nat -> Bool
Nat.elm
We start by inductively defining Natural numbers to be either Zero or the Successor of some other Natural number.
type Nat = Z | S Nat
Next, let's define functions toInt and fromInt to convert Natural numbers to and from Integers.
fromInt : Int -> Nat
fromInt i =
if i <= 0
then Z
else S (fromInt (i-1))
If we take fromInt for a spin, we see that it busts the stack rather quickly.
> fromInt 0
Z : Nat
> fromInt 1
S Z : Nat
> fromInt 10
S (S (S (S (S (S (S (S (S (S Z))))))))) : Nat
> fromInt 10000
RangeError: Maximum call stack size exceeded
Ah, right, we should define it tail-recursively so that it runs in constant stack space.
fromInt =
let
foo acc i =
if i <= 0
then acc
else foo (S acc) (i-1)
in
foo Z
That should do the trick...
> fromInt 10000
RangeError: Maximum call stack size exceeded
Or not. Okay, it's time for a little more investigative journalism. We could fire up Elm Reactor to start debugging. Or we can be lazy (pun intended) and continue to poke around at the REPL.
> let _ = fromInt 10000 in ()
() : ()
That's interesting. The call to fromInt was not the problem. So the act of printing the resulting Nat causes the stack overflow? Let's write our own tail-recursive printing function to test this hypothesis.
toString : Nat -> String
toString n =
let
foo acc n = case n of
Z -> acc
S n_ -> foo ("S" ++ acc) n_
in
foo "Z" n
Sure enough, that does the trick...
> fromInt 10000 |> toString
" ... SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSZ" : String
> fromInt 10000000 |> toString
" ... SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSZ" : String
... until we run out of heap space, that is.
> fromInt 100000000 |> toString
FATAL ERROR: JS Allocation failed - process out of memory
Okay, now that we have sorted out our call stack and heap space concerns, let's return to the task of programming with Nats. First, a bit of refactoring:
foldl : (b -> b) -> b -> Nat -> b
foldl f acc n =
case n of
Z -> acc
S n_ -> foldl f (f acc) n_
toString = foldl ((++) "S") "Z"
And a function to convert Nats to Ints:
toInt : Nat -> Int
toInt = foldl ((+) 1) 0
We can add two Nats together by peeling Successor labels off of y one at a time and wrapping them around x.
plus : Nat -> Nat -> Nat
plus x y =
let foo acc n = case n of
Z -> acc
S n_ -> foo (S acc) n_
in foo x y
Or:
plus x y = foldl S y x
Or:
plus x y = foldl S x y
Or:
plus = foldl S
This plus function encodes the usual notion of addition for our Nat type.
> plus (fromInt 0) (fromInt 0) |> toString
"Z" : String
> plus (fromInt 0) (fromInt 2) |> toString
"SSZ" : String
> plus (fromInt 10) (fromInt 2) |> toString
"SSSSSSSSSSSSZ" : String
> plus (fromInt 10) (fromInt 2) |> toInt
12 : Int
We can define equality for Nats by peeling off one data constructor at a time.
equals : Nat -> Nat -> Bool
equals x y =
case (x, y) of
(Z, Z) -> True
(S x_, S y_) -> equals x_ y_
_ -> False
This seems to work just fine...
> equals (fromInt 0) (fromInt 0)
True : Bool
> equals (fromInt 0) (fromInt 2)
False : Bool
> equals (fromInt 10) (fromInt 2)
False : Bool
... but it is really slow for some comparisons that seem like they should be easy to decide quickly.
> equals (fromInt 10000) (fromInt 10000000)
False : Bool
> equals (fromInt 0) (fromInt 10000000)
False : Bool
The problem is that, under eager evaluation, both Natural numbers are evaluated completely before calling equals, which then very quickly decides the last two disequalities.
ThunkNat.elm
A common approach to delaying the evaluation of an expression e of type a in an eager language is to define a function \() -> e, called a thunk, that waits for a dummy argument before evaluating the expression.
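Since Elm compiles to JavaScript, the same idea can be sketched directly in JavaScript, where a thunk is just a zero-argument function (the names here are illustrative, not part of any library):

```javascript
// A thunk: wrapping an expression in a zero-argument function
// suspends its evaluation until the function is called.
const expensive = () => {
  console.log("evaluating...");
  return 2 + 3;
};

// Forcing a thunk simply applies it to the dummy argument.
const force = (thunk) => thunk();

// Nothing is printed until we force the thunk.
const result = force(expensive); // prints "evaluating..." and yields 5
```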
We will port the implementations above in order to delay computing the representations of natural numbers. In our new representation of Nats, a Successor value stores the delayed computation of the Nat that it succeeds. The force function is used to evaluate a suspended computation.
type Nat = Z | S (Thunk Nat)
type alias Thunk a = () -> a
force : Thunk a -> a
force thunk = thunk ()
Note that implementing a function like the following is not a good idea, because a call to delay will force its argument to be evaluated!
delay : a -> Thunk a
delay e = \() -> e
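The pitfall is easy to demonstrate in JavaScript (the names here are illustrative): because arguments are evaluated before the call, the expression runs before delay ever gets a chance to suspend it.

```javascript
// BAD: under eager evaluation, the argument to `delay` is evaluated
// at the call site, before `delay` itself runs.
const delay = (value) => () => value;

let evaluated = false;
const compute = () => { evaluated = true; return 42; };

const thunk = delay(compute()); // compute() runs right here, eagerly!
// `evaluated` is already true even though `thunk` has never been forced.
```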
To implement fromInt, we no longer need to implement a (tail-recursive) helper function because there are no direct recursive calls; instead, the latter case immediately returns a Successor value (which may some time later lead to a call to fromInt).
fromInt i =
  if i <= 0
    then Z
    else S (\_ -> fromInt (i-1))
Notice that our new representation of non-Zero numbers is quite different from before:
> import Nat as N
> import ThunkNat exposing (..)
> N.fromInt 10
S (S (S (S (S (S (S (S (S (S Z))))))))) : N.Nat
> fromInt 10
S <function> : Nat
Unlike fromInt, toInt does need to make recursive calls immediately, because the resulting type (Int) does not have the notion of delayed computation built into its representation. Therefore, we will want to employ the tail-recursive helper strategy.
toInt =
let foo acc n =
case n of
Z -> acc
S n_ -> foo (1 + acc) (force n_)
in
foo 0
As before, fromInt and toInt are inverses:
> fromInt 100000000 |> toInt
100000000 : Int
Notice how toInt uses force to evaluate all of the nested suspensions that are stored within a Nat. A function like this is called monolithic, whereas a function like fromInt is called incremental because it does not trigger the evaluation of all nested suspensions.
Another example of a monolithic function is toString.
toString =
let foo acc n =
case n of
Z -> acc
S n_ -> foo ("S" ++ acc) (force n_)
in
foo "Z"
However, this function is no longer needed for its original purpose above, because printing the representation of a Successor value is now very quick.
> fromInt 10000
S <function> : Nat
> fromInt 10000 |> toString
" ... SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSZ" : String
We can now return to our motivation for delaying the evaluation of Nats.
equals x y =
case (x, y) of
(Z, Z) -> True
(S x_, S y_) -> equals (force x_) (force y_)
(_, _) -> False
When x and y represent the same number, equals evaluates all of the nested suspensions in both x and y. Otherwise, it evaluates only enough of their delayed representations in order to demonstrate a difference in corresponding data constructors. As a result, all of the following comparisons run quickly, unlike with our original Nat.elm implementation.
> equals (fromInt 0) (fromInt 0)
True : Bool
> equals (fromInt 0) (fromInt 2)
False : Bool
> equals (fromInt 10) (fromInt 2)
False : Bool
> equals (fromInt 10000) (fromInt 10000000)
False : Bool
> equals (fromInt 0) (fromInt 10000000)
False : Bool
We finish porting the original module with the following incremental implementation of plus.
plus x y =
case y of
Z -> x
S y_ -> S (\_ -> plus x (force y_))
As a result, the following comparison evaluates quickly.
> equals (fromInt 0) (plus (fromInt 10000000) (fromInt 10000000))
False : Bool
What happens if we ask Elm to compare Nats using built-in equality?
> fromInt 0 == fromInt 0
-- PARSE ERROR ------------------------------------------------------------- elm
...
> (fromInt 0 == fromInt 0)
True : Bool
> (fromInt 0 == fromInt 1)
False : Bool
> (fromInt 1 == fromInt 2)
Error: Trying to use `(==)` on functions. There is no way to know if
functions are "the same" in the Elm sense. ...
Elm throws a run-time error when trying to compare two different function values. Fair enough. Notice that "physical equality" between function values is supported, however.
> fromInt 1 == fromInt 1
Error: Trying to use `(==)` on functions. There is no way to know if
functions are "the same" in the Elm sense.
> let foo = fromInt 1 in foo == foo
True : Bool
Defining suspensions and incremental functions can be really valuable techniques, but there's no free lunch. The representation of a thunk is a closure, which is a function to evaluate along with bindings for all free variables referred to by the function. Delaying computations willy-nilly, then, can lead to a huge number of these closures building up. So one should restrict the use of thunks to situations where the benefits of being able to define incremental functions outweigh the overheads associated with delayed computations.
Another concern is that the same delayed computation may be demanded more than once. If the computation takes significant resources to evaluate, then redoing the work every time is undesirable.
As a result, many eager languages — such as OCaml, Standard ML, and Racket — offer special-purpose constructs for manipulating delayed computations with the guarantee that the result of forcing a delayed computation is cached in case it is forced again in the future. The term lazy evaluation is often used to describe support for delayed computations with the guarantee of evaluating any such computation at most once.
Prior versions of Elm included library support for lazy evaluation, with the following interface:
lazy : (() -> a) -> Lazy a
force : Lazy a -> a
The lazy function turned a Thunk a into a Lazy a value: the thunk was evaluated the first time it was forced; if and when forced again, the previous result was retrieved. Under the hood, this library was implemented using imperative features in JavaScript: each thunk had a corresponding mutable variable (called isForced) to track whether the particular thunk had been evaluated and a mutable variable (called value) to store its result.
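The scheme just described can be sketched in plain JavaScript; this is only an illustration of the idea, not the actual library code:

```javascript
// A memoizing thunk: `isForced` and `value` play the roles of the
// mutable variables described above, so the suspended computation
// runs at most once.
function lazy(thunk) {
  let isForced = false;
  let value;
  return () => {
    if (!isForced) {
      value = thunk(); // evaluate the suspension the first time...
      isForced = true; // ...and remember that we did
    }
    return value;      // later forces reuse the cached result
  };
}

// The delayed computation runs only on the first force.
let count = 0;
const five = lazy(() => { count += 1; return 5; });
const sum = five() + five() + five(); // 15, but `count` is only 1
```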
Lazy.elm
Elm v0.19 does not provide language or library support for memoizing delayed computations. So instead, we'll accomplish a bit of the same by explicitly maintaining caches for delayed computations.
First, rather than Thunk being simply a delayed computation of a ...
type Thunk a =
Thunk { thunk : () -> a }
... it computes an "action", or "stateful function", that takes an input Cache and computes an a and an updated Cache:
type Thunk a =
Thunk { id : Int, thunk : () -> Cache a -> (a, Cache a) }
(Requiring () in addition to the input Cache argument will help when defining (tail-)recursive functions that compute actions.)
Each thunk will be assigned a unique Integer identifier so that the Cache can track which Thunks have already been forced.
type Cache a =
Cache { size : Int, table : Dict Int a }
Given a delayed computation, which computes an "action" of type Cache a -> (a, Cache a), lazy registers it in the cache by assigning it a new id and bumping up the cache size:
lazy : (() -> Cache a -> (a, Cache a)) -> (Cache a -> (Thunk a, Cache a))
lazy thunk (Cache cache) =
let
id =
cache.size
_ =
Debug.log "Lazy.lazy " id
in
( Thunk { id = id, thunk = thunk }
, Cache { cache | size = 1 + cache.size }
)
Later, force will either (a) run the action and add the result to the "memo table" or (b) retrieve the previous result from the memo table:
force : Thunk a -> (Cache a -> (a, Cache a))
force (Thunk {id, thunk}) (Cache cache) =
case Dict.get id cache.table of
Nothing ->
let
_ =
Debug.log "Lazy.force MISS " id
(a, Cache newCache) =
thunk () (Cache cache)
newerTable =
Dict.insert id a newCache.table
in
(a, Cache { newCache | table = newerTable })
Just a ->
let
_ =
Debug.log "Lazy.force HIT " id
in
(a, Cache cache)
The library will not provide clients direct access to Caches. Instead:
run : (Cache a -> (a, Cache a)) -> a
run action =
let
initCache =
Cache { size = 0, table = Dict.empty }
in
Tuple.first (action initCache)
Now, for example:
example : Cache Int -> (Int, Cache Int)
example = \cache0 ->
let
makeThunkFive : Cache Int -> (Thunk Int, Cache Int)
makeThunkFive =
Lazy.lazy <|
\() cache ->
(Debug.log "delayed computation..." 5, cache)
(thunk, cache1) = makeThunkFive cache0
(i, cache2) = Lazy.force thunk cache1
(j, cache3) = Lazy.force thunk cache2
(k, cache4) = Lazy.force thunk cache3
in
(i + j + k, cache4)
> import Lazy exposing (..)
> import Example exposing (..)
> run example
Lazy.lazy : 0
delayed computation...: 5
Lazy.force MISS : 0
Lazy.force HIT : 0
Lazy.force HIT : 0
15 : Int
The lazy and force functions produce actions that compute Thunk cache and cache values, respectively, where cache is the type of values in the Cache. In general, we will want to intermingle other types of actions in between computations over values of the cache type.
We will generalize our notion of "action":
type alias Action cache a =
Cache cache -> (a, Cache cache)
We can rewrite the signatures for Thunk, lazy, and force using this alias:
type Thunk a =
Thunk { id : Int, thunk : () -> Action a a }
lazy : (() -> Action a a) -> Action a (Thunk a)
force : Thunk a -> Action a a
And we can assign a more general type for run:
run : Action cache a -> a
We will also define some helper functions, as manually threading caches through a sequence of actions can be tedious and error-prone.
pure : a -> Action cache a
pure a =
\cache ->
(a, cache)
map : (a -> b) -> Action cache a -> Action cache b
map f action =
\cache ->
Tuple.mapFirst f (action cache)
andThen : (a -> Action cache b) -> Action cache a -> Action cache b
andThen f action =
\cache ->
let (a, newCache) = action cache in
f a newCache
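To see what these combinators are doing under the hood, here is the same state-passing pattern sketched in JavaScript, with a bare number standing in for the Cache (all names here are illustrative):

```javascript
// An "action" is a function: cache -> [result, newCache].
const pure = (a) => (cache) => [a, cache];

// Run `action`, then feed its result (and the updated cache)
// to the next action produced by `f`.
const andThen = (f) => (action) => (cache) => {
  const [a, newCache] = action(cache);
  return f(a)(newCache);
};

// Example action: return the current cache value and bump it.
const tick = (cache) => [cache, cache + 1];

// Two ticks in sequence, summing their results.
const program =
  andThen((i) => andThen((j) => pure(i + j))(tick))(tick);

// program(0) yields [1, 2]: results 0 and 1, final cache 2.
```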
Using these combinators, we can rewrite the example above:
example : Action Int Int
example =
let five () = pure (Debug.log "delayed computation..." 5) in
Lazy.lazy five |> Lazy.andThen (\thunk ->
Lazy.force thunk |> Lazy.andThen (\i ->
Lazy.force thunk |> Lazy.andThen (\j ->
Lazy.force thunk |> Lazy.andThen (\k ->
pure (i + j + k)
))))
LazyNat.elm
Now let's port ThunkNat.elm to use our Lazy.elm library. Each of the operations in the API will now produce Action Nat values:
type Nat
fromInt : Int -> Lazy.Action Nat Nat
toInt : Nat -> Lazy.Action Nat Int
toString : Nat -> Lazy.Action Nat String
plus : Nat -> Nat -> Lazy.Action Nat Nat
equals : Nat -> Nat -> Lazy.Action Nat Bool
First, we redefine the type of Nat ...
type Nat = Z | S (Lazy.Thunk Nat)
... and update our function for creating Nats:
fromInt : Int -> Lazy.Action Nat Nat
fromInt i cache0 =
if i == 0 then
(Z, cache0)
else
let (thunk, cache1) = Lazy.lazy (\() -> fromInt (i-1)) cache0 in
(S thunk, cache1)
(We will resist the urge to use the nice pure, map, and andThen combinators for composing lazy actions; we'll return to this topic below.)
As before, fromInt is incremental:
> fromInt 10000 |> run
Lazy.lazy : 0
S (Thunk { id = 0, thunk = <function> })
: Nat
Next, recall that toInt is monolithic and needs to convert to an Int completely. Thus, we need a tail-recursive implementation:
toInt : Nat -> Lazy.Action Nat Int
toInt =
let
foo acc n cache =
case n of
Z ->
(acc, cache)
S thunk ->
let (n_, newCache) = force thunk cache in
foo (1 + acc) n_ newCache
in
foo 0
We can force the entire Nat to be built...
> fromInt 10000 |> andThen toInt |> run
...
Lazy.lazy : 9997
Lazy.force MISS : 9996
Lazy.lazy : 9998
Lazy.force MISS : 9997
Lazy.lazy : 9999
Lazy.force MISS : 9998
Lazy.force MISS : 9999
10000 : Int
... and then the second time it is found in the cache:
> fromInt 10000 |> andThen (\n -> toInt n |> andThen (\_ -> toInt n)) |> run
...
Lazy.force HIT : 9996
Lazy.force HIT : 9997
Lazy.force HIT : 9998
Lazy.force HIT : 9999
10000 : Int
The logging of hits and misses is instructive but expensive. If you want to do more accurate benchmarking, toggle the flag in Lazy.elm to suppress logging.
The monolithic toString function is similar to toInt.
The equals function is also still monolithic:
equals : Nat -> Nat -> Lazy.Action Nat Bool
equals x y cache0 =
case (x, y) of
(S x_thunk, S y_thunk) ->
let (x_, cache1) = force x_thunk cache0 in
let (y_, cache2) = force y_thunk cache1 in
equals x_ y_ cache2
(Z, Z) ->
(True, cache0)
_ ->
(False, cache0)
To be incremental, plus creates a delayed action in the non-Zero case:
plus : Nat -> Nat -> Lazy.Action Nat Nat
plus x y cache =
case y of
Z ->
(x, cache)
S thunk ->
let
action cache0 =
let (y_, cache1) = Lazy.force thunk cache0 in
plus x y_ cache1
(newThunk, newCache) =
Lazy.lazy (\() -> action) cache
in
(S newThunk, newCache)
To clarify how caches are getting passed around, let's push the outer lambda down through the branches of the case and let:
plus : Nat -> Nat -> Lazy.Action Nat Nat
plus x y =
case y of
Z ->
\cache ->
(x, cache)
S thunk ->
let
action cache0 =
let (y_, cache1) = Lazy.force thunk cache0 in
plus x y_ cache1
in
\cache ->
let
(newThunk, newCache) =
Lazy.lazy (\() -> action) cache
in
(S newThunk, newCache)
Though equals is monolithic, it evaluates only as much of the arguments as necessary to decide equality:
> fromInt 1000000 |> andThen (\x -> fromInt 0 |> andThen (\y -> equals x y)) |> run
Lazy.lazy : 0
False : Bool
> fromInt 1000000 |> andThen (\x -> fromInt 1 |> andThen (\y -> equals x y)) |> run
Lazy.lazy : 0
Lazy.lazy : 1
Lazy.lazy : 2
Lazy.force MISS : 0
Lazy.force MISS : 1
False : Bool
> fromInt 1000000 |> andThen (\x -> fromInt 1000000 |> andThen (\y -> equals x y)) |> run
...
Lazy.lazy : 1999999
Lazy.force MISS : 1999997
Lazy.force MISS : 1999998
Lazy.force MISS : 1999999
True : Bool
> fromInt 10 |> andThen (\x -> fromInt 10 |> andThen (\y -> plus x y |> andThen (\sum -> equals sum y))) |> run
...
Lazy.force MISS : 19
Lazy.force MISS : 20
Lazy.force HIT : 19
False : Bool
> fromInt 10000000 |> andThen (\x -> fromInt 10 |> andThen (\y -> plus x y |> andThen (\sum -> equals sum y))) |> run
...
Lazy.force MISS : 19
Lazy.force MISS : 20
Lazy.force HIT : 19
False : Bool
Let's rewrite the LazyNat operations using the nice Action combinators:
fromInt : Int -> Lazy.Action Nat Nat
fromInt i =
if i == 0 then
Lazy.pure Z
else
Lazy.map S (Lazy.lazy (\() -> fromInt (i-1)))
toInt : Nat -> Lazy.Action Nat Int
toInt =
let
foo acc n =
case n of
Z ->
Lazy.pure acc
S thunk ->
Lazy.force thunk |> Lazy.andThen (\n_ -> foo (1 + acc) n_)
in
foo 0
toString : Nat -> Lazy.Action Nat String
toString =
...
equals : Nat -> Nat -> Lazy.Action Nat Bool
equals x y =
case (x, y) of
(S x_thunk, S y_thunk) ->
Lazy.force x_thunk |> Lazy.andThen (\x_ ->
Lazy.force y_thunk |> Lazy.andThen (\y_ ->
equals x_ y_
))
(Z, Z) ->
Lazy.pure True
_ ->
Lazy.pure False
plus : Nat -> Nat -> Lazy.Action Nat Nat
plus x y =
case y of
Z ->
Lazy.pure x
S thunk ->
Lazy.map S (Lazy.lazy (\() ->
Lazy.force thunk |> Lazy.andThen (\y_ ->
plus x y_
)
))
No cache in sight, much better! But there's one small problem: these versions are stack hogs. In the monolithic operations toInt, toString, and equals, the recursive calls are no longer in tail position according to the simple, syntactic rules. A more aggressively optimizing compiler would need to, in this case, figure out that inlining the definition of andThen would unlock opportunities to eliminate tail recursion.
So, we'll stick with the versions that explicitly pass caches around. A pity!