serialize() vs. var_export() vs. json_encode() Memory Usage

18. November 2009

Based on serialize() vs. var_export() vs. json_encode() Part 2, let's take a look at the (real) memory usage:

Apart from var_export, all methods seem to use about the same amount of memory. Let's put the usage in relation to each other to get a better view:

With smaller arrays, the memory usage is exactly the same; from 10,000 elements on, the values begin to diverge.
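The measurement script itself isn't shown in the post; a minimal sketch of how the memory usage of one export could be measured (the array size here is an arbitrary example):

```php
<?php
// Build a test array (size chosen arbitrarily for illustration).
$array = array_fill(0, 100000, rand(1, 9999));

// Measure how much additional memory the exported string holds on to.
$before = memory_get_usage();
$export = serialize($array);
$after = memory_get_usage();
print('serialize(): ' . ($after - $before) . ' bytes' . PHP_EOL);
```

The same pattern applies to var_export($array, true) and json_encode($array); memory_get_peak_usage() can be used instead to also capture temporary peaks during the export.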

serialize() vs. var_export() vs. json_encode() Part 2

17. November 2009

Based on the comments on my first benchmark in the serialize() vs. var_export() vs. json_encode() article two days ago, I decided to benchmark once again, this time using different array sizes. I also added the JSON method with recursive UTF-8 encoding beforehand. This time, all results are combined results (exporting plus importing); the testing script was exactly the same, except that JSON+UTF8 also used a utf8_encode_recursive function. Let's see what happens when using array sizes from 10 to 1,000 elements:
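The utf8_encode_recursive function from the benchmark isn't shown in the post; a straightforward implementation (the name follows the post, the body is my assumption) might look like this:

```php
<?php
// Hypothetical implementation of the utf8_encode_recursive helper
// mentioned above; the actual benchmark code is not shown in the post.
function utf8_encode_recursive($data)
{
    if (is_array($data)) {
        // Encode string keys and values alike, descending into nested arrays.
        $result = array();
        foreach ($data as $key => $value) {
            $newKey = is_string($key) ? utf8_encode($key) : $key;
            $result[$newKey] = utf8_encode_recursive($value);
        }
        return $result;
    }
    return is_string($data) ? utf8_encode($data) : $data;
}
```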

So far, json_encode itself is fastest, closely followed by serialization. JSON with UTF-8 encoding is slowest, but consider that all the values are below 0.02 seconds.

So far so good. Let's look at the results with arrays from 10,000 to 1,000,000 elements:

Apart from the exception of serialization, the values mostly develop as with smaller arrays. What's quite odd is that serialization's runtime seems to explode when stepping up from 100,000 to 1,000,000 elements. Why that is, I cannot tell. Although JSON with UTF-8 encoding is slower than var_export and plain JSON, it's still way faster than serialization.


  1. None of the methods scales linearly.
  2. For smaller arrays, JSON is the way to go, as long as your data is already UTF-8 encoded. If not, you might want to go with serialization.
  3. With larger arrays, JSON is still fastest as long as your data is already UTF-8 encoded; otherwise, var_export is the best choice.

Closing, let's take a look at the graphs for arrays from 10 to 1,000,000 items:

serialize() vs. var_export() vs. json_encode()

16. November 2009

There are times when you need to store an array, for example when your array is an index you want to use again the next time you run your application. In order to store an array, you first have to transform it into some kind of string representation; most people would probably use serialize(). But there are also two other ways to achieve that: var_export() and json_encode().

Once stored, the functions to turn the strings back into arrays are unserialize() if you use serialize(), eval() if you use var_export(), and json_decode() if you use json_encode().
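As a quick sketch, the three round trips can be written like this (note that json_decode() needs true as its second argument to return an array rather than a stdClass object):

```php
<?php
$array = array('one', 'two', 'three');

// serialize() / unserialize()
$restored = unserialize(serialize($array));

// var_export() / eval(); the second argument of var_export() makes it
// return the representation as a string instead of printing it.
eval('$restored = ' . var_export($array, true) . ';');

// json_encode() / json_decode(); true requests arrays instead of objects.
$restored = json_decode(json_encode($array), true);
```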

So, what about the performance?

In order to test that, I wrote a little profiling script that first created a random array with 1,000,000 elements, then exported the array and then imported it again. For the json_encode() test, the script looked like this:

$array = array_fill(0, 1000000, rand(1, 9999));

$start = microtime(true);
$export = json_encode($array);
$end = microtime(true);
$duration = $end - $start;
print('JSON Encode: ' . $duration . PHP_EOL);

$start = microtime(true);
$import = json_decode($export);
$end = microtime(true);
$duration = $end - $start;
print('JSON Decode: ' . $duration . PHP_EOL);

Apart from the export and import functions used, the scripts for serialize() and var_export() looked pretty much the same, var_export() being the only exception, since I had to append an ending ; to $export in order for it to work with eval().
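For illustration, the var_export() variant could have looked roughly like this (a sketch based on the description above, not the original script):

```php
<?php
$array = array_fill(0, 1000000, rand(1, 9999));

$start = microtime(true);
// Second argument: return the representation instead of printing it.
// The trailing ; is appended so eval() receives a complete statement.
$export = var_export($array, true) . ';';
print('var_export: ' . (microtime(true) - $start) . PHP_EOL);

$start = microtime(true);
eval('$import = ' . $export);
print('eval: ' . (microtime(true) - $start) . PHP_EOL);
```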

While it is understandable that importing takes longer than exporting for every method, the differences in time are quite astounding:

It's not only interesting to see that unserialize() is damn slow, but also that JSON is fastest, which also becomes quite clear when looking at the combined results:


Since that is the version still in the Ubuntu repositories, I did the performance tests with PHP 5.2.6.

Useful Firefox addons for web developers

9. February 2009

Here's a short list of useful Firefox addons for web developers:


Firebug

Probably the most useful addon around. You can not only change the whole page on the fly (HTML and CSS), but you also get the very useful JavaScript console, which can even be used by your application for debugging. Furthermore, you can see all AJAX activity. Download at


FirePHP

FirePHP allows you to send debug messages to Firebug from a PHP script. Download at


Greasemonkey

Greasemonkey allows you to create JavaScript scripts for specific websites or all of them (use of the wildcard * is possible!). This can be very useful either to test how new scripts would integrate into your website or to change the look and behaviour of any other website. Download at


Same as Greasemonkey, only for CSS. This way you can easily test new styles on a website before integrating them. Download at

Web Developer

The classic one. Allows you to easily change the behaviour of your browser (caching, JavaScript, etc.) and has a whole lot of other useful tools, like showing the document size and the styles, submitting the page to the W3C Validator, resizing the browser frame (to test your application at specific resolutions), viewing response headers, outlining specific elements and so on. Download at

Search Status

Although it also shows the PageRank (not very reliable), I only use this addon to display all the nofollow links on a page, which is really extremely useful. Download at

DNS Cache

My own Firefox extension that allows you to disable Firefox's DNS caching, which comes in quite handy when you have to check your web servers quickly. For a more detailed description, see here. Download at

JavaScript loops profiled

20. January 2009

Today I was curious and wanted to know which way of looping in JavaScript is fastest. So far, I have always used for (var i in array), since someone once told me it is the fastest way. For testing, I created an array with 10,000 elements:

ids = [];
for (var i = 1; i <= 10000; i++) {
    ids.push(i);
}
I used the JavaScript profiler of Firebug for profiling. The test system was: Intel Dual Core T2500 @ 2.00 GHz, 2 GB RAM, Ubuntu 8.04, Firefox 3.0.5 (the only installed addon being Firebug). I profiled each loop variation 5 times and took the average time for comparison. The loop variations did nothing but loop and were the following:

Loop 1:

for(var i = 0; i < ids.length; i++) {}

Loop 2:

for (var i in ids) {}

Loop 3:

function process(element, index, array) {}
ids.forEach(process);

When I started the test, I didn't think there would be such huge differences in the performance of those three:

JavaScript loops profiled

Here are the profiling results in detail, in case you're interested (all times in ms):

Loop 1:

Loop 2:

Loop 3 (the number in brackets is the profiled runtime of the function process(); see its declaration above):
33.055 (16.439)
33.262 (16.489)
33.792 (17.044)
34.682 (17.312)
35.875 (17.637)