The intricacies of implementing memoization in Ruby
In the never-ending quest to write code that is performant, we have many techniques at our disposal. One of those techniques is memoization,1 which boils down to storing the results of expensive function calls, so that these expensive functions do not need to be called more than absolutely necessary.
Many years ago, I wrote a Ruby gem for memoization, ddmemoize. It has since been superseded by better solutions, but for a long time, it was one of the best memoization libraries for Ruby out there.
Creating this library taught me a great deal. Memoization is surprisingly complex, and a proper implementation, it turns out, goes far beyond Ruby's ||= memoization operator. Memory management and thread safety, for example, are important considerations, though often overlooked.
In this article, I’ll walk you through all the learnings I gained in the process of implementing this memoization gem.
- Local variables
- The memoization operator
- Argument-aware memoization
- Memoization DSL
- Memoizing on frozen objects
- Memory-efficient memoization
- Supporting keyword arguments
- Supporting blocks
- Thread-safe memoization
- Metrics
- Conclusion
Local variables
The simplest tool at our disposal for remembering values is the concept of local variables. Trivial, perhaps — but still worth mentioning.
In the following example, the #calc_base_price function is called twice:
def total_price
  vat = calc_base_price * VAT_RATE
  calc_base_price + vat
end
If the #calc_base_price function were expensive, for example because of a database query, it would make sense to store the result in a local variable. In the following snippet, the base price is stored in a local variable called base_price:
def total_price
  base_price = calc_base_price
  vat = base_price * VAT_RATE
  base_price + vat
end
But there is certainly more to memoization than just this trivial example!
The memoization operator
Ruby comes with an operator, ||=, which is sometimes called the memoization operator. Here is an example of it in use:
def base_price
  @base_price ||= calc_base_price
end
When executing this method, Ruby will check whether the @base_price instance variable is already defined and has a value that is truthy, meaning not nil and not false. If so, it will return the value of @base_price.
If, on the other hand, @base_price is either undefined or set to nil or false, it will call the #calc_base_price method, store its return value in the @base_price instance variable, and use that value as the value for the entire expression (and thus as the return value of #base_price).
The #base_price method could be rewritten without the use of the ||= operator like this:
def base_price
  if defined?(@base_price) && @base_price
    @base_price
  else
    @base_price = calc_base_price
  end
end
However, there are cases in which Ruby's ||= operator does not work well: when the memoized value can be false or nil, and when the memoized method has parameters. Both are covered below.
Be careful with false and nil
The values false and nil can make memoization using the ||= operator not work as intended. This is because the ||= operator evaluates the right-hand side when the memoization variable is undefined, or when the memoized value is falsy.
Consider the following memoized method:
def open?
  @is_open ||= calc_is_open
end
If #calc_is_open returns false, the @is_open memoized instance variable will be set to false. But the next time #open? is called, the #calc_is_open method will be called again, because @is_open is falsy, meaning nil or false.
This is a general problem with using the ||= operator to memoize methods that return boolean values. It is a problem with methods that are expected to return nil, too.
A good way around this problem is to avoid ||= in this situation, and instead use defined? to check whether or not to use the memoized value:
def open?
  if defined?(@is_open)
    @is_open
  else
    @is_open = calc_is_open
  end
end
Less compact, but at least it works.
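To make the difference concrete, here is a small runnable sketch. The Shop class and its calc_is_open method are made up for this example; a call counter shows that the ||= version re-runs the expensive method on every call when the result is false, while the defined? version runs it only once:

```ruby
# Hypothetical Shop class. calc_is_open stands in for an expensive
# calculation that happens to return false.
class Shop
  attr_reader :calls

  def initialize
    @calls = 0
  end

  # Memoized with ||= : broken for falsy values.
  def open_with_memo_operator?
    @open_a ||= calc_is_open
  end

  # Memoized with defined? : works for falsy values.
  def open_with_defined?
    if defined?(@open_b)
      @open_b
    else
      @open_b = calc_is_open
    end
  end

  private

  def calc_is_open
    @calls += 1 # count how often the "expensive" method actually runs
    false
  end
end

shop = Shop.new
2.times { shop.open_with_memo_operator? }
puts shop.calls # ||= re-evaluated both times, so this prints 2

shop = Shop.new
2.times { shop.open_with_defined? }
puts shop.calls # defined? memoized correctly, so this prints 1
```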
Argument-aware memoization
Any piece of literature on the topic of memoization will inevitably bring up the Fibonacci sequence as an example where memoization is particularly useful. Here is a Ruby implementation of a function that returns a given entry in the Fibonacci sequence:
def fib(n)
  case n
  when 0
    0
  when 1
    1
  else
    fib(n - 1) + fib(n - 2)
  end
end
This works as intended: fib(6) evaluates to 8 and fib(7) evaluates to 13.
However, for larger numbers, the execution time quickly increases. I wrote some code to calculate the first 50 Fibonacci numbers, and print out the duration2 to calculate them:
def now
  Process.clock_gettime(Process::CLOCK_MONOTONIC)
end

50.times do |i|
  print "#{i}: "
  before = now
  val = fib(i)
  after = now
  duration = after - before
  puts "#{val} (#{format '%.1f', duration}s)"
end
Here is what it printed out:
0: 0 (0.0s)
1: 1 (0.0s)
2: 1 (0.0s)
3: 2 (0.0s)
4: 3 (0.0s)
5: 5 (0.0s)
6: 8 (0.0s)
7: 13 (0.0s)
8: 21 (0.0s)
9: 34 (0.0s)
10: 55 (0.0s)
11: 89 (0.0s)
12: 144 (0.0s)
13: 233 (0.0s)
14: 377 (0.0s)
15: 610 (0.0s)
16: 987 (0.0s)
17: 1597 (0.0s)
18: 2584 (0.0s)
19: 4181 (0.0s)
20: 6765 (0.0s)
21: 10946 (0.0s)
22: 17711 (0.0s)
23: 28657 (0.0s)
24: 46368 (0.0s)
25: 75025 (0.0s)
26: 121393 (0.0s)
27: 196418 (0.0s)
28: 317811 (0.0s)
29: 514229 (0.0s)
30: 832040 (0.1s)
31: 1346269 (0.1s)
32: 2178309 (0.2s)
33: 3524578 (0.3s)
34: 5702887 (0.5s)
35: 9227465 (0.8s)
36: 14930352 (1.2s)
37: 24157817 (2.0s)
38: 39088169 (3.2s)
39: 63245986 (5.2s)
40: 102334155 (8.3s)
41: 165580141 (13.5s)
42: 267914296 (21.8s)
Calculating number 42 in the Fibonacci sequence took 21 seconds,3 after which I gave up and aborted the script. Extrapolating from the durations above, I estimate that calculating number 50 in the Fibonacci sequence would take 17 minutes.4
The reason why calculating Fibonacci numbers gets so slow is that there is a lot of redundant work happening. For example, calculating fib(40) calculates fib(39) and fib(38). Calculating fib(39) also calculates fib(38). This redundant work gets progressively worse for lower numbers. For example, fib(40) calculates the third number (n=2) in the Fibonacci sequence 63 245 986 times. It only really needs to do that once. No wonder this implementation is slow.
One way to avoid this problem with execution speed would be to rewrite the method to avoid recursion and use looping instead:
def fib(n)
  arr = [0, 1]
  while arr.size <= n
    arr << arr.last(2).sum
  end
  arr[n]
end
The reason why this solution is so much faster is that it avoids recalculating anything. For example, fib(40) calculates the third number in the Fibonacci sequence only once, not 63 million times.
The version that uses a loop instead of recursion is much faster, but it is not nearly as easy to read as the initial version. In this faster, iterative version, the mathematical definition of the Fibonacci sequence is not readily visible.
The #fib function cannot be memoized by applying the ||= operator as before. Something a little more sophisticated is needed: a cache for each value of the argument n:
def fib(n)
  @fib ||= {}
  @fib[n] ||=
    case n
    when 0
      0
    when 1
      1
    else
      fib(n - 1) + fib(n - 2)
    end
end
With this change, calculating fib(40) is instantaneous. In fact, so is fib(4000).5
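As an aside, the same argument-aware cache can also be expressed with a Hash default block, a common Ruby idiom: the block runs on a cache miss, stores the result, and returns it. This is just a sketch, not the approach used in the rest of this article:

```ruby
# FIB is a Hash whose default block computes (and caches) missing entries.
FIB = Hash.new do |cache, n|
  cache[n] =
    case n
    when 0 then 0
    when 1 then 1
    else FIB[n - 1] + FIB[n - 2]
    end
end

p FIB[40]   # => 102334155
p FIB[100]  # => 354224848179261915075
```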
Memoization DSL
One of Ruby's great strengths is its capacity for metaprogramming. A good use case for this is automating memoization by tagging a call to memoize after a method definition, like this:
def fib(n)
  # [snip]
end
memoize :fib
The technique that I'll introduce here only works for methods, not for standalone functions. So, let's stick #fib in a class:
class Fib
  def fib(n)
    case n
    when 0
      0
    when 1
      1
    else
      fib(n - 1) + fib(n - 2)
    end
  end
end
The invocation is a little different, but it works as before (with the same slowness):
p Fib.new.fib(10)
We'll create a module named Memoization and stick the memoize method in there:
module Memoization
  def memoize(method_name)
Its goal will be to replace the method with the given name (:fib in our example) with a new one that performs automatic memoization. This new method needs to be able to call the original method, so it first creates a copy of the original method, using #alias_method:
    orig_method_name = '__orig_' + method_name.to_s
    alias_method(orig_method_name, method_name)
In our #fib example, this will create a method named #__orig_fib.
The original method (#fib in our example) has not been touched yet. The next step is to redefine that original method, for which #define_method is useful. For now, it'll use #send to call the original method; an implementation with memoization will follow later:
    define_method(method_name) do |*args|
      send(orig_method_name, *args)
    end
  end
end
To use this new functionality in our existing Fib class, first add extend Memoization near the top, and then add memoize :fib after the #fib method definition:
class Fib
  extend Memoization

  def fib(n)
    case n
    when 0
      0
    when 1
      1
    else
      fib(n - 1) + fib(n - 2)
    end
  end

  memoize :fib
end
This will still work, still without memoization (yet):
p Fib.new.fib(10)
It is now time to implement memoization in the newly defined method. The first thing it needs is a cache:
define_method(method_name) do |*args|
  @__cache ||= {}
This @__cache instance variable will contain the results of all invocations of all memoized methods on this particular instance. The keys of the @__cache hash will be the method names.
Next, the implementation needs to get the cache for this particular method, given by the method_name variable:
  method_cache = (@__cache[method_name] ||= {})
With the cache for this particular method now available, it can check whether there is already a value for the given arguments. If there is, that is the value that can be returned — the entire point of memoization:
  if method_cache.key?(args)
    method_cache[args]
If there is no value available yet, call the original method and store the return value in the cache for this method:
  else
    method_cache[args] =
      send(orig_method_name, *args)
  end
end
In Ruby, an assignment evaluates to the value that was assigned, so there is no need for an explicit method_cache[args] after the assignment.
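For reference, here are the fragments above assembled into one runnable sketch, using the module name Memoization:

```ruby
module Memoization
  def memoize(method_name)
    # Keep the original implementation around under a hidden name.
    orig_method_name = '__orig_' + method_name.to_s
    alias_method(orig_method_name, method_name)

    # Redefine the method with a per-instance cache keyed on the arguments.
    define_method(method_name) do |*args|
      @__cache ||= {}
      method_cache = (@__cache[method_name] ||= {})

      if method_cache.key?(args)
        method_cache[args]
      else
        method_cache[args] = send(orig_method_name, *args)
      end
    end
  end
end

class Fib
  extend Memoization

  def fib(n)
    case n
    when 0 then 0
    when 1 then 1
    else fib(n - 1) + fib(n - 2)
    end
  end

  memoize :fib
end

p Fib.new.fib(40) # => 102334155
```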
With this change, running fib(40) is now very fast indeed, practically instantaneous:
p Fib.new.fib(40)
There is one more neat change that is possible. In Ruby, a method definition returns the method name as a symbol, so memoize can be stuck in front of the def keyword and the memoize :fib line removed:
memoize def fib(n)
  # [snip]
end
Now it looks like a keyword, which I find rather neat.
Memoization requirements
Before continuing, I want to address the following question: under which circumstances can memoization be safely applied? Memoization is not a technique that can be spray-painted onto code to make it faster. There are restrictions to consider for memoized code to work correctly.
A memoized method must only use values that never change. This includes instance variables, arguments, global variables, constants, and more.
To illustrate this, take a look at the following example LineItem class, with a memoized #total_price method:6
class LineItem
  extend Memoization

  attr_accessor :unit_price
  attr_accessor :count

  def initialize(unit_price:, count:)
    @unit_price = unit_price
    @count = count
  end

  memoize def total_price
    count * unit_price
  end
end
The total price is calculated correctly:
line_item = LineItem.new(unit_price: 49, count: 2)
p line_item.total_price
# => 98
However, after changing the count, the total price is not updated, because it is memoized:
line_item.count = 3
p line_item.total_price
# => 98
A solution to this problem is to make LineItem immutable, either by freezing it or by replacing attr_accessor with attr_reader. This would prevent the count of a LineItem from being changed; instead, a new instance of LineItem can be created with the correct count:
line_item = LineItem.new(unit_price: 49, count: 2)
p line_item.total_price
# => 98

line_item = LineItem.new(unit_price: line_item.unit_price, count: 3)
p line_item.total_price
# => 147
A good general guideline is to use memoization only on objects that are immutable, and likewise pass in only arguments that are immutable as well.
Memoizing on frozen objects
There is one particular issue with this implementation. Attempting to use memoization on a frozen object fails. Take the following code as an example:
f = Fib.new
f.freeze
p f.fib(40)
This fails with a FrozenError:
example3.rb:20:in `block in memoize': can't modify frozen Fib: #<Fib:0x000000011ffeb008> (FrozenError)
This happens because the @__cache instance variable is frozen along with the rest of the object.
The solution7 I used was to turn the idea of a cache on its head. Rather than have one cache per instance (the @__cache instance variable) with keys for each method name, I opted for one cache per method, with keys for each instance. In practice, this solution looks like this:
module Memoization
  def memoize(method_name)
    # [snip]

    method_cache = {}

    define_method(method_name) do |*args|
      instance_cache = (method_cache[self] ||= {})

      if instance_cache.key?(args)
        instance_cache[args]
      else
        instance_cache[args] =
          send(orig_method_name, *args)
      end
    end
  end
end
The method_cache local variable is the cache for the particular method given by the method_name argument. The instance cache simply is method_cache[self], with self being the instance on which the memoized method is called.8
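Assembled into a runnable sketch (reusing the alias_method setup from before), this per-method cache makes memoization work on frozen instances:

```ruby
module Memoization
  def memoize(method_name)
    orig_method_name = '__orig_' + method_name.to_s
    alias_method(orig_method_name, method_name)

    # One cache per method, captured by the closure below.
    # Keys are instances; values are per-instance argument caches.
    method_cache = {}

    define_method(method_name) do |*args|
      instance_cache = (method_cache[self] ||= {})

      if instance_cache.key?(args)
        instance_cache[args]
      else
        instance_cache[args] = send(orig_method_name, *args)
      end
    end
  end
end

class Fib
  extend Memoization

  memoize def fib(n)
    case n
    when 0 then 0
    when 1 then 1
    else fib(n - 1) + fib(n - 2)
    end
  end
end

# No instance variable is written, so a frozen instance works fine:
f = Fib.new
f.freeze
p f.fib(40) # => 102334155
```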
Unfortunately, in the excitement of finding an approach to support memoization for frozen objects, a bigger problem has appeared: memory usage.
Memory-efficient memoization
The memoization implementation never frees up memory. This means that memory usage can keep on growing. I can get this to happen if I’m not careful:
low swap: killing largest compressed process with pid 95941 (ruby) and size 76343 MB
This particular Ruby process ballooned to north of 75 GB of memory usage, before the macOS kernel killed it.
This is especially an issue with long-running processes.9 Availability becomes a problem when a process can’t adhere to its memory limits!
This issue was not as much of a problem before. After all, memoized values were stored on the instance, and when the instance was deallocated, the memoized values were deallocated with it.
Weak references
There is a potential solution that Ruby provides by default: weak references. Explaining what weak references are is best done by comparing them to regular references, also known as strong references. Take a look at this piece of code:
arr = [1, 2, 3]
arr.reverse
# => [3, 2, 1]
This example creates an array, and returns a reversed version of it. Straightforward, so far. The garbage collector will not deallocate arr, because it is still referenced. This code snippet still works:
GC.start
arr.reverse
# => [3, 2, 1]
This happens because there is a reference (a strong reference) to the arr object. Any object that is referenced, directly or indirectly, by a variable in scope (like an instance variable, a global variable, a local variable, or a constant) is immune from being garbage-collected, i.e. having its memory reclaimed. This makes so much sense that we as programmers rarely think about it.
Weak references, however, change things up a little. Consider this code snippet:
require 'weakref'
arr = WeakRef.new([1, 2, 3])
arr.reverse
# => [3, 2, 1]
That still works as before. However, after invoking the garbage collector, the array is no longer accessible; trying to do anything with it will yield an exception:
GC.start
arr.reverse
# WeakRef::RefError (Invalid Reference - probably recycled)
The array, i.e. the object that the arr variable pointed to, has had its memory reclaimed; it is gone.
On first sight, weak references seem like they could provide an excellent solution to the problem of memory bloat caused by memoization. Wrap the memoized values in a WeakRef, and done — right?
But there is a problem: when using weak references for memoization, all memoized values are wiped out when garbage collection occurs. This is unfortunate, and in my experience it wrecks the usefulness of memoization. An alternative is needed.
Soft references
There exists a kind of reference that is weaker than a strong reference, but stronger than a weak reference. These are called soft references.
Soft references are much less eager to be garbage-collected than weak references. The value pointed to by a soft reference will only be garbage collected after it has not been accessed for a number of garbage collections.
Ruby does not come with an implementation of soft references by default. There is a now-deprecated gem, ref, that implements them:
require 'ref'

ref = Ref::SoftReference.new("hello")
ref.object.upcase
# => "HELLO"
The API is similar to that of Ruby's built-in WeakRef, though Ref::SoftReference does not automatically delegate to the object. Accessing the pointed-to object is done through the #object method.
Initially, the pointed-to object is accessible as usual.
p ref.object
# "hello"
However, after a couple of garbage-collection cycles, the object is finally garbage collected, and the pointer becomes nil:
p ref.object
# nil
It is not much work to replace the uses of WeakRef with Ref::SoftReference. But is it worth it? I don't think it is, for two reasons:
- The ref gem is no longer maintained, and there is no other soft reference implementation for Ruby that I know of.10
- Soft references have a performance overhead, especially when implemented in Ruby itself. This can diminish the gains from memoization.
There is also still the problem that the method arguments are not garbage-collected either. This was already a problem with WeakRef.
Neither WeakRef nor Ref::SoftReference works well enough for the purposes of memoization. The original implementation, which used an instance variable, is overall a better choice when it comes to memory management.
And there is a better way of dealing with frozen objects, too!
Freezing, revisited
There is a better way to deal with frozen objects. It is a slight variation on the original implementation of the memoization DSL. Gone are the weak references and the soft references.
The first change (a trivial one) is to move the memoize method into a Memoization::ClassMethods module:
module Memoization::ClassMethods
  def memoize(method_name)
    # [snip]

    define_method(method_name) do |*args|
      @__cache ||= {}
      method_cache = (@__cache[method_name] ||= {})
      # [snip]
    end
  end
end
This new module can already be used as-is, by passing it to extend:
class Fib
  extend Memoization::ClassMethods

  memoize def fib(n)
    # [snip]
  end
end
So far, nothing has changed: the behavior is identical to what we had in the initial implementation of memoize.
Next comes the solution to the frozenness problem. The idea is the following: when #freeze is called on an instance of a class that uses memoization, the cache is created just before the object becomes frozen. That is what the Memoization::InstanceMethods module is for:
module Memoization::InstanceMethods
  def freeze
    @__cache ||= {}
    super
  end
end
This new module can be put to use in the example Fib class using prepend:
class Fib
  extend Memoization::ClassMethods
  prepend Memoization::InstanceMethods

  memoize def fib(n)
    # [snip]
  end
end
It is important to use prepend rather than include here. With prepend (unlike include), the #freeze method of the prepended module is executed before the #freeze method defined on the class itself (if there is any). This is necessary because the @__cache instance variable needs to be created before any freezing takes place.
So now there is a better solution for dealing with frozen objects: extend the ClassMethods module, and prepend the InstanceMethods module. With this change, frozen objects are no longer an obstacle to memoization, and memory usage is not an issue anymore.
There is one more useful change to make. Calling both extend and prepend is slightly awkward. Ideally, there would be only a single call to extend Memoization, like this:
class Fib
  extend Memoization

  memoize def fib(n)
    # [snip]
  end
end
To make this work, implement the extended method on the Memoization module, which is called as a callback whenever anything calls extend Memoization. The extended method, which receives the extending class or module as an argument, can then delegate to the proper extend and prepend calls, like above:
module Memoization
  def self.extended(thing)
    thing.extend(ClassMethods)
    thing.prepend(InstanceMethods)
  end
end
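Here is the whole thing, end to end, as one runnable sketch; the memoize body is the instance-variable version from earlier:

```ruby
module Memoization
  # Wire up both modules whenever a class calls `extend Memoization`.
  def self.extended(thing)
    thing.extend(ClassMethods)
    thing.prepend(InstanceMethods)
  end

  module ClassMethods
    def memoize(method_name)
      orig_method_name = '__orig_' + method_name.to_s
      alias_method(orig_method_name, method_name)

      define_method(method_name) do |*args|
        @__cache ||= {}
        method_cache = (@__cache[method_name] ||= {})

        if method_cache.key?(args)
          method_cache[args]
        else
          method_cache[args] = send(orig_method_name, *args)
        end
      end
    end
  end

  module InstanceMethods
    # Create the cache just before the object becomes frozen.
    def freeze
      @__cache ||= {}
      super
    end
  end
end

class Fib
  extend Memoization

  memoize def fib(n)
    case n
    when 0 then 0
    when 1 then 1
    else fib(n - 1) + fib(n - 2)
    end
  end
end

f = Fib.new
f.freeze
p f.fib(40) # => 102334155
```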
And with that, the Memoization module provides a nice way of memoizing that works with frozen objects and is memory-efficient.
But there are still a few more problems left to solve.
Supporting keyword arguments
Ruby has had keyword arguments since version 2.0; since version 3.0, they are fully separated from regular arguments, also known as positional arguments. Here is a version of #fib that takes n as a keyword argument:
def fib(n:)
  case n
  when 0
    0
  when 1
    1
  else
    fib(n: n - 1) + fib(n: n - 2)
  end
end
At the moment, memoizing this method and calling it will fail with an ArgumentError:
example.rb:5:in `fib': wrong number of arguments (given 1, expected 0; required keyword: n) (ArgumentError)
The method created by #define_method inside the memoize method does not take keyword arguments; *args captures only positional arguments, not keyword arguments. The #define_method call needs to be updated to also capture keyword arguments, by adding **kwargs to the list of parameters:
define_method(method_name) do |*args, **kwargs|
Getting the method cache remains the same:
  @__cache ||= {}
  method_cache = (@__cache[method_name] ||= {})
The memoization key needs to change. Instead of only including positional arguments, it also needs to include the keyword arguments. The easiest approach is to construct a key that consists of positional arguments and keyword arguments:
  key = [args, kwargs]
This key is then used, instead of just args, when fetching the memoized value (if any):
  if method_cache.key?(key)
    method_cache[key]
If there is no memoized value, the original method needs to be called with not just the positional arguments, but also the keyword arguments:
  else
    method_cache[key] =
      send(orig_method_name, *args, **kwargs)
  end
end
With this change, memoization works as expected on methods with keyword arguments.
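Put together, a runnable sketch of the keyword-argument-aware version looks like this:

```ruby
module Memoization
  def memoize(method_name)
    orig_method_name = '__orig_' + method_name.to_s
    alias_method(orig_method_name, method_name)

    define_method(method_name) do |*args, **kwargs|
      @__cache ||= {}
      method_cache = (@__cache[method_name] ||= {})

      # The cache key now covers both positional and keyword arguments.
      key = [args, kwargs]

      if method_cache.key?(key)
        method_cache[key]
      else
        method_cache[key] = send(orig_method_name, *args, **kwargs)
      end
    end
  end
end

class Fib
  extend Memoization

  memoize def fib(n:)
    case n
    when 0 then 0
    when 1 then 1
    else fib(n: n - 1) + fib(n: n - 2)
    end
  end
end

p Fib.new.fib(n: 40) # => 102334155
```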
Supporting blocks
In Ruby, methods can take three kinds of things. We have tackled positional arguments and keyword arguments, but there is one more thing a Ruby method can take: a block.
However, there is a snag: every call that passes a block creates a new block object (a Proc instance), which makes it impossible to meaningfully memoize a method that takes a block. To illustrate this, consider the following trivial class:
class Test
  def run(&blk)
    p blk.__id__
  end
end
This code simply prints the ID of the block. Calling Test#run twice prints two different Proc IDs:
test = Test.new
test.run { }
# stdout: 860
test.run { }
# stdout: 880
It is not only because there are two distinct blocks in the source code that the Proc IDs are different. This even happens when there is only a single block in the source code:
2.times do
  test.run { }
end
# stdout: 860
# stdout: 880
That this happens in general makes sense, because each invocation of a block captures a (potentially) different environment.
This, however, means that memoizing a method that takes a block is not possible. Memoization will be entirely ineffective: no invocation of the memoized method will ever use a memoized value.
For this reason, I believe the best way forward is to explicitly disallow memoizing a method that takes a block. Memoization needs to raise an error in this case:
define_method(method_name) do |*args, **kwargs, &blk|
  if blk
    raise ArgumentError,
      "cannot memoize methods that take a block"
  end
Raising an exception when a block is given to a memoized method is the most reasonable thing to do. With this change, we’ve made it less likely that unexpected behavior arises.
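Here is a runnable sketch of that guard. The Greeter class is made up for the example; its memoized method works normally, and passing a block raises:

```ruby
module Memoization
  def memoize(method_name)
    orig_method_name = '__orig_' + method_name.to_s
    alias_method(orig_method_name, method_name)

    define_method(method_name) do |*args, **kwargs, &blk|
      # Blocks cannot be meaningfully memoized, so reject them outright.
      if blk
        raise ArgumentError,
          "cannot memoize methods that take a block"
      end

      @__cache ||= {}
      method_cache = (@__cache[method_name] ||= {})
      key = [args, kwargs]

      if method_cache.key?(key)
        method_cache[key]
      else
        method_cache[key] = send(orig_method_name, *args, **kwargs)
      end
    end
  end
end

# Hypothetical example class.
class Greeter
  extend Memoization

  memoize def greet(name)
    "hello, #{name}"
  end
end

greeter = Greeter.new
p greeter.greet('world') # => "hello, world"

begin
  greeter.greet('world') { :some_block }
rescue ArgumentError => e
  puts e.message # => cannot memoize methods that take a block
end
```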
Thread-safe memoization
In a multi-threaded environment, the current implementation of memoization potentially has some issues.
Two threads could be calling the same method (with the same arguments, if any) at the same time. This is wasteful, and could lock up valuable resources that could better be used for something else. When two threads are calling the same method with the same arguments, then it would be better if the second thread were to wait for the first thread to finish, and then reuse the just-obtained return value.
This is not a problem with correctness, but with resource usage. You might not consider this to be a problem that is worth solving.
But if you were to try to fix it, know that adding thread safety11 is not trivial. Take a look at this example for a class with a single memoized method that takes no arguments:
class Order
  def initialize
    @mutex = Thread::Mutex.new
  end

  def discount
    @_discount ||= begin
      @mutex.synchronize do
        unmemoized_discount
      end
    end
  end

  private

  def unmemoized_discount
    # [snip]
  end
end
Multiple threads might attempt to run Order#discount, but only a single #unmemoized_discount call will be coming from #discount at any given time. That is what a mutex is for, and it does its job admirably.
Admirably… and uselessly! This implementation is not quite correct. It is possible for two concurrent calls to #discount to happen, and those two calls will compete for the mutex. The first invocation will call #unmemoized_discount. The second invocation will wait to obtain the mutex lock, and eventually call #unmemoized_discount as well. The second call does not use the memoized value at all.
Here is a better implementation:
def discount
  @_discount ||= begin
    @mutex.synchronize do
      if defined?(@_discount) && @_discount
        @_discount
      else
        unmemoized_discount
      end
    end
  end
end
At the very beginning of the mutex-protected block, the method checks the presence of @_discount again. It could have changed since the method call began awaiting the mutex, after all.
This version works better, but there is another issue: the mutex block could have finished, and another thread could have started executing the mutex-protected block, before the memoized @_discount instance variable got assigned.
The solution is to move the assignment to @_discount into the mutex block:
into the mutex block:
def discount
  if defined?(@_discount) && @_discount
    return @_discount
  end

  @mutex.synchronize do
    if defined?(@_discount) && @_discount
      @_discount
    else
      @_discount = unmemoized_discount
    end
  end
end
This implementation properly prevents multiple concurrent and unnecessary calls to #unmemoized_discount.12
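Here is that final pattern as a runnable sketch. The unmemoized_discount body (a sleep and the 0.25 value) and the call counter are made up for the example; the counter shows that even with four concurrent callers, the expensive method runs only once:

```ruby
class Order
  attr_reader :calls

  def initialize
    @mutex = Thread::Mutex.new
    @calls = 0
  end

  def discount
    # Fast path: no locking once the value is memoized.
    if defined?(@_discount) && @_discount
      return @_discount
    end

    @mutex.synchronize do
      # Re-check inside the mutex: another thread may have won the race.
      if defined?(@_discount) && @_discount
        @_discount
      else
        @_discount = unmemoized_discount
      end
    end
  end

  private

  def unmemoized_discount
    @calls += 1       # only ever incremented while holding the mutex
    sleep 0.05        # simulate an expensive calculation
    0.25
  end
end

order = Order.new
threads = 4.times.map { Thread.new { order.discount } }
values = threads.map(&:value)

p values      # => [0.25, 0.25, 0.25, 0.25]
p order.calls # => 1
```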
There is something missing, however. At the moment, there is only one memoized method, so a single mutex, @mutex, is sufficient. Ideally, though, there would be one mutex per memoized method; after all, parallel calls to different memoized methods should probably not block each other.
Additionally, this approach also only realistically works for memoized methods that have no parameters. If there are parameters, it might make sense to have a mutex for each possible combination of arguments. But then it might happen that two threads create a mutex for the same combination of arguments at the same time, so there might need to be a mutex to prevent that from happening. Also, mutexes that are no longer needed should ideally not take up memory, so a way of automatically cleaning them up would be useful, too.
Multithreading-aware memoization gets complicated very fast. And none of this is likely necessary, and perhaps not even very useful at all. Memoized methods are meant to be idempotent (or pure) anyway.
My preferred approach to this problem is to explicitly not think about thread safety, and delegate this concern to the caller. Document the thread-safety guarantees and move on. An easy way out, perhaps, but also the most reasonable in my opinion.
That leaves us with just one thing left to improve.
Metrics
A memoized function must have exactly the same behavior as its non-memoized counterpart, save for differences in execution speed. That is the contract for using memoization. But if the behavior of a memoized method is exactly the same as its non-memoized counterpart, then how does one check whether memoization is useful at all?
One of the neat features of my old ddmemoize library is that you could configure it to track and print metrics. Here is an example of what DDMemoize.print_metrics could spit out:
memoization │ hit miss %
───────────────┼─────────────────
Order#discount │ 258 110 70.1%
A hit indicates an invocation where the memoized value was used, and a miss means that no memoized value was available and the original method thus had to be called. In this example, the hit rate of Order#discount is 70.1%, which means that of all calls (hits plus misses), 70.1% were hits.
Is 70.1% a good rate? It depends. Is #discount slow? Then perhaps memoizing it makes sense. But if #discount is already fast, then memoization might not help, and might even be harmful to performance, as memoization itself brings a small overhead in execution speed and memory usage.
If the number of misses is significantly higher than the number of hits, then memoization is most likely not effective.
A high hit rate is not inherently better than a low hit rate. The hit rate merely gives an idea of whether or not memoization is effective. When a code change makes the hit rate drop dramatically, the correct response would be to reconsider the use of memoization, rather than attempt to bring the hit rate back up.
There is a more interesting question to ask when the hit rate is high: why was the memoized method called so often (on the same object, and with the same arguments) in the first place? Perhaps rather than introducing memoization, it is worth figuring out where duplicate calls are coming from, and then eliminating them.
In my experience, memoization can be a crutch. That does not mean memoization is not useful, but it can be an indication of sub-optimal design or architecture. Such issues are often hard to spot or hard to fix, so memoization still has a place in a software developer's tool kit. Knowing the hit rate, too, is great for using memoization effectively.
Conclusion
The preceding few thousand words have hopefully made it clear that memoization is tricky to implement.
Use a battle-tested library instead of writing a memoization implementation yourself. I recommend the memo_wise Ruby gem. It is the best memoization library for Ruby that I am aware of. Jemma Issroff and Jacob Evelyn have written about optimising memo_wise performance and esoteric Ruby in memo_wise.
At the moment of writing, my ddmemoize gem is about seven years old, and deprecated. Don’t use it! Better solutions exist.13
And now, after quite a bit of a delay, I have shared my learnings from building this library, in the hope and conviction that they will be useful to others.
If you found this article useful, consider buying me a coffee.
- That's memoization, not memorization — there's no "r"! ↩︎
- This code snippet uses a monotonic clock to calculate elapsed time. This is more accurate than using Time.now, because the monotonic clock is not affected by small shifts that occur e.g. when synchronising with a network time server. ↩︎
- Did you spot how the durations form something quite close to the Fibonacci sequence, too? How neat is that?! ↩︎
- This is assuming that the thermal throttling on my M2 MacBook Air does not engage. If it does, I imagine it would take much longer than that. ↩︎
- Any number much larger than 8000 or so will yield a different problem: stack level too deep (SystemStackError). This is difficult, but not impossible, to avoid with a recursive solution. The version that uses a loop won't have this issue. ↩︎
- The use of memoization here is not particularly useful: multiplying two numbers, count and unit_price, is very fast — but it is appropriate as an example. ↩︎
- The initial solution, that is. Yes, I am foreshadowing an alternative implementation! ↩︎
- A benefit of this approach of using a local variable rather than an instance variable is that the instance variable no longer needs to be hidden behind an obscure name. The reason why @__cache starts with a double underscore is so that it is not accidentally accessed by code outside the gem. ↩︎
- Is it really still a problem with "long-running" processes if they're no longer "long-running" because they get quickly killed by the kernel? (Don't answer that — it is a rhetorical question.) ↩︎
- I was considering contributing my own implementation of soft references to Ruby, but I worry that I have neither the time nor the skill to see such a project through. ↩︎
- The term "thread safety" is possibly not quite correct in this context. After all, the code will execute correctly without changes, and the issue is really about efficiency. But I can't think of a term that fits this problem better than "thread safety", so that is what I am using. ↩︎
- To the best of my knowledge, that is. I wouldn't want to swear on this implementation being correct, because multithreading is hard. ↩︎
- I am happy to have abandoned ddmemoize in favor of better solutions. It is, after all, not about what we create, but about what we do to help us get stuff done and grow and learn as software developers and as people. The ddmemoize project definitely helped with that. ↩︎