A while back a friend of mine introduced me to Project Euler, a “series of challenging mathematical/computer programming problems”, all of which should be solved by a program that observes the “one minute rule”:

“…although it may take several hours to design a successful algorithm with more difficult problems, an efficient implementation will allow a solution to be obtained on a modestly powered computer in less than one minute.”

Even though my primary programming language is C++, I started solving these problems in Perl because of its faster development cycle. Another advantage Perl has over C++ (which I didn’t realize was an advantage at the time) is that it’s slow as molasses, especially for numeric computations. This comes from the fact that Perl, being a dynamic language, has to deduce the types of objects at run-time: when you write `$i + 1`, Perl has to check what type `$i` is in order to know whether it needs to add two numbers or concatenate a number to a string. ^{ [Correction: As Brad pointed out, Perl has a dot operator for concatenating strings, which makes + unambiguous; the point stands, however, for JavaScript and generally for many operations in dynamic languages.]}
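To make the JavaScript case concrete, here is a small illustration of my own (not from the original benchmark) of why `+` forces a run-time type check:

```javascript
// In JavaScript, + means numeric addition or string concatenation
// depending on the run-time types of its operands, so the engine
// must inspect both values before choosing an operation.
function plusOne(i) {
    return i + 1;
}

console.log(plusOne(41));   // numbers: arithmetic, prints 42
console.log(plusOne("4"));  // a string: concatenation, prints "41"
```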

So when I got to problem 14, my naive Perl implementation violated the one minute rule. Had I used C++, it would have run in less than three seconds and you would have been spared this blog post; but as it was, I tried to think of a way to make it faster.

Problem 14 goes as follows:

The following iterative sequence is defined for the set of positive integers:

`n` → `n`/2 (`n` is even)

`n` → 3`n` + 1 (`n` is odd)

The problem states that it’s an unproven conjecture that by iteratively feeding the result of the function back into itself you’ll eventually reach 1 for every positive integer. The question then is which number (less than one million) has to be fed into the function the most times before it reaches 1.

My brute force implementation was too slow so I tried to come up with an optimization. Consider the number 3, its chain is:

`3 → 10 → 5 → 16 → 8 → 4 → 2 → 1`: length is 8
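This length can be checked with a small loop (a sketch of the brute-force approach; the function name is mine):

```javascript
// Iteratively apply the Collatz step and count how many numbers
// appear in the chain, including the starting number and the final 1.
function chainLength(n) {
    var count = 1;
    while (n !== 1) {
        n = (n % 2 === 0) ? n / 2 : 3 * n + 1;
        ++count;
    }
    return count;
}

console.log(chainLength(3)); // → 8, matching the chain above
```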

If we look at 6 the chain is:

`6 → `**`3`**` → 10 → 5 → 16 → 8 → 4 → 2 → 1`: length is 9

Notice that we pass through a number we saw before: there’s no need to recompute the length of 3’s chain, since we already know it’s 8. I had to change my implementation from iterative to recursive, so that I could cache the result for every number I pass through.

In order to prove to myself that the difference in speed between C++ and Perl was due to Perl being a dynamic language, I rewrote this in JavaScript (the other dynamic language that I’m somewhat familiar with). My JavaScript implementation looks like this:

    function next(i) {
        if (i & 1)
            return i * 3 + 1;
        return i / 2;
    }

    function length(n, cache) {
        if (n == 1)
            return 1;
        if (cache !== null && n in cache)
            return cache[n];
        var ret = 1 + length(next(n), cache);
        if (cache !== null)
            cache[n] = ret;
        return ret;
    }

    // The problem's bound: consider starting numbers below one million.
    function limit() {
        return 1000000;
    }

    function naive() {
        check(null, "Naive");
    }

    function optimized() {
        check({}, "Optimized");
    }

    function check(cache, msg) {
        var max = [0, 0];
        var lim = limit();
        for (var i = 1; i < lim; ++i) {
            var curr = length(i, cache);
            if (curr > max[0])
                max = [curr, i];
        }
        alert(msg + " found " + max[1] + " (chain length: " + max[0] + ")");
    }
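The code above assumes a browser (`alert`). To try the optimized version from a shell with Node.js, a minimal standalone adaptation (my own; `console.log` replaces `alert`, and the bound is passed in explicitly) might look like this:

```javascript
// Standalone Node.js version of the memoized benchmark: `chain`
// recursively computes a chain length, caching every result it sees.
function next(i) {
    return (i & 1) ? i * 3 + 1 : i / 2;
}

function chain(n, cache) {
    if (n === 1)
        return 1;
    if (n in cache)
        return cache[n];
    var ret = 1 + chain(next(n), cache);
    cache[n] = ret;
    return ret;
}

// Scan all starting numbers below lim, tracking the longest chain.
function check(cache, msg, lim) {
    var max = [0, 0];
    for (var i = 1; i < lim; ++i) {
        var curr = chain(i, cache);
        if (curr > max[0])
            max = [curr, i];
    }
    console.log(msg + " found " + max[1] + " (chain length: " + max[0] + ")");
    return max;
}

check({}, "Optimized", 1000000);
```

Note that `i & 1` stays correct even for the chain’s large intermediate values: JavaScript’s bitwise operators truncate to 32 bits modulo 2^32, which preserves parity.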

I then checked this implementation on the three browsers I have installed: IE8^{[1]}, Firefox 3.5 and Chrome 3, with the following results:

IE took 168 seconds to run the naive solution while Firefox took only 101 seconds, but that’s nothing compared to Chrome’s **2.8**. That’s right, Chrome’s V8 JavaScript implementation is **60 times** faster than IE’s, more than an order of magnitude!

The second thing that struck me was how differently the same optimization manifested itself in the three browsers: in both IE and Chrome the optimization shaved off less than half the time (less than a 2-fold improvement), while in Firefox the optimized version was **24 times** faster than the naive one.

What I assume is happening is that JavaScript on the V8 engine behaves more like a statically typed language because it uses advanced JIT techniques, such as trace trees, which un-loop recursive function calls and make (checked) assumptions about the types of variables.
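As a rough illustration of the kind of assumption a JIT can exploit (my own example, not specific to V8’s internals): if a function only ever sees numbers, the engine can specialize it down to machine arithmetic with a cheap type check, while mixed-type call sites force it to stay generic:

```javascript
// next() is "type-stable" in the benchmark: every call passes a number,
// so a specializing JIT can compile it to plain integer arithmetic.
function next(i) {
    return (i & 1) ? i * 3 + 1 : i / 2;
}

// Feeding it a string still works, because & coerces its operands to
// numbers, but a call site that mixes types defeats the specialization.
console.log(next(7));    // → 22
console.log(next("7"));  // → 22 ("7" is coerced, then same arithmetic)
```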

**<Edit>**

It has been pointed out in the comments to this post and on reddit that Firefox’s TraceMonkey uses trace trees while Chrome’s V8 does not. I’m not familiar with this field; I do remember reading about an optimization made by V8 and thought it was trace trees, but I may have misremembered. In any case it’s pretty clear that V8 uses advanced JIT techniques (although trace trees may not be one of them).

**</Edit>**

Now I realise that this isn’t a characteristic use of JavaScript; most JavaScript code is not intensively numeric, so there’s no point in optimizing for this usage. What I find more worrying from the IE perspective is how much better the optimization behaves in Firefox than in IE: this optimization depends on property lookup in objects, which is an extremely common operation in JavaScript. Why there is such a difference between IE and Firefox in this respect escapes me.
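To get a feel for how hot property lookup is in the optimized version, one can count the cache probes (an instrumented sketch of my own, not part of the original benchmark):

```javascript
// Count how many times the memoized version probes the cache object.
// Every call except the base case performs an `n in cache` lookup,
// so property lookups dominate once the cache is warm.
function next(i) {
    return (i & 1) ? i * 3 + 1 : i / 2;
}

var lookups = 0;

function chain(n, cache) {
    if (n === 1)
        return 1;
    ++lookups;                 // one property lookup per visited number
    if (n in cache)
        return cache[n];
    var ret = 1 + chain(next(n), cache);
    cache[n] = ret;
    return ret;
}

var cache = {};
for (var i = 1; i < 1000; ++i)
    chain(i, cache);
console.log("cache probes for n < 1000: " + lookups);
```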

[1] In order to get meaningful results on IE I had to disable the “Stop running this script” dialog ↩