# jsmem - JavaScript memory benchmark

*Update 2011-02-19: The original version of this benchmark had a major flaw: it used only
integral values for the computation. Some browsers appear to optimize this case, in particular Chrome 10
with the Crankshaft engine, which seems to be able to store the numbers as 4-byte integers on 32-bit systems.
While a valid (if unexpected) optimization, it completely skews the benchmark results.
I've changed the code to make this kind of optimization more difficult. Chrome results are much lower
with the new version.*

I wrote this simple benchmark to see if current JavaScript implementations are already fast enough that the memory hierarchy becomes visible. It measures memory bandwidth for varying working set sizes. The reported bandwidth is for a loop that adds two arrays, overwriting the first with the sum:

    function sum(x, y, n) { for (var i = 0; i < n; i++) x[i] += y[i]; }

In JavaScript, all numbers use the IEEE 754 double-precision format, so the loop above moves 24 bytes per iteration (two 8-byte reads and one 8-byte write). This assumes the arrays are packed arrays of the native type. If they instead contain pointers to some kind of number object, all bets are off: the actual memory traffic might be closer to 20 bytes per number, every access goes through an extra indirection, and locality is worse, too.
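As a sanity check on these figures, here is a small sketch of the arithmetic (the helper names are mine, not part of the benchmark):

```javascript
// Packed doubles are 8 bytes each. Each loop iteration reads x[i] and
// y[i] and writes x[i] back: 2 reads + 1 write.
function trafficPerIteration(bytesPerNumber) {
  return 3 * bytesPerNumber;
}

// Both arrays must stay resident while the loop runs, so the working
// set is 2 * n numbers.
function workingSetBytes(n, bytesPerNumber) {
  return 2 * n * bytesPerNumber;
}

// With 8-byte doubles: 24 bytes of traffic per iteration,
// and e.g. n = 65536 gives a 1 MiB working set.
```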

The working set size in the results assumes a packed representation, but it is really only a rough guide. The important information is whether the bandwidth drops as the working set grows. Here are some results.
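The measurement itself can be sketched roughly as follows. This is my own reconstruction, not the original benchmark code; the non-integral initial values reflect the fix described in the update above.

```javascript
function sum(x, y, n) { for (var i = 0; i < n; i++) x[i] += y[i]; }

// Run the sum loop repeatedly over arrays of length n and report
// the achieved bandwidth in GB/sec (a sketch; timing via Date.now()).
function measureBandwidth(n, repeats) {
  var x = new Array(n), y = new Array(n);
  // Non-integral values, to make storing the numbers as integers harder.
  for (var i = 0; i < n; i++) { x[i] = i + 0.5; y[i] = i + 0.25; }
  var start = Date.now();
  for (var r = 0; r < repeats; r++) sum(x, y, n);
  var seconds = (Date.now() - start) / 1000;
  var bytes = 24 * n * repeats; // 2 reads + 1 write, 8 bytes each
  return bytes / seconds / 1e9;
}
```

Sweeping `n` over powers of two and plotting bandwidth against `2 * n * 8` bytes makes the cache levels visible as drops in the curve.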

New: a version using typed arrays. This usually yields better results, and the reported working set size is exact, too. Not all browsers support typed arrays, though.
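Since typed-array support varies, the benchmark has to feature-detect at runtime. A minimal sketch of that check (the helper name is mine):

```javascript
// Return a pair of arrays of length n: Float64Array where available
// (packed 8-byte doubles, so the working set size is exact), plain
// Array otherwise.
function makeArrays(n) {
  if (typeof Float64Array !== 'undefined') {
    return [new Float64Array(n), new Float64Array(n)];
  }
  return [new Array(n), new Array(n)];
}
```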

Bandwidth (GB/sec) vs. working set (Bytes):

Copyright © 2011 Christoph Breitkopf. Graph display uses the ProtoChart library.