Organisation improves JavaScript performance

JavaScript now accounts for 189KB of the average page load, apparently up 50% on 2011. Mostly responsible for this are client-side JS libraries, which have successfully conquered the web with a unified, expressive syntax; jQuery has become so ubiquitous it’s estimated to be in use on 54% of the top 10,000 websites. A major problem is that jQuery has become the go-to answer for every browser-based JS issue and, as much as I love the library, it could be partly responsible for crippling websites when viewed on a modest device. The problems of bloated file sizes and the pain of multiple requests have been thoroughly covered over the last few years, but I’ve found a little code organisation can really improve performance once the page has finished downloading and JavaScript gets to work.

Technology gets faster; websites get slower

Software is getting slower more rapidly than hardware becomes faster.

Wirth's Law

As performance in hardware and connectivity improves, many content publishers expect more from websites. Higher-resolution media, more interactivity and deeper integration of external services push more and larger files down a user’s connection, and all of it requires parsing and executing. As a result, viewing a website requires more bandwidth, more memory and more processing power. In software development this observation is known as Wirth’s Law.

Moore’s Law has covered our needs for increased computing power for decades; each new desktop PC I have owned has been 2x, 3x or 5x faster than the previous one. This year, however, I moved from a quad-core monster to a low-voltage laptop, and we are all increasingly browsing the web on relatively low-power phones and tablets.

The power of our portable devices is increasing as predicted (Apple’s A6 processor is reputedly twice as fast as its predecessor) but the difference in power right now is considerable. To highlight the disparity I benchmarked the JavaScript execution speed of the browsers on an average Android smartphone, my work desktop PC, an iPad 2 and my laptop using WebKit’s SunSpider JavaScript benchmark.

Device        Average Speed (ms)
Phone         2963
iPad          1429
Laptop        237
Desktop PC    217

My modest, two-year-old work PC finished 93% and 85% faster than the phone and iPad respectively; processor clock speeds may look similar on paper, but half the field has substantially more power (watts) and a cooling system to work with. To directly test the JS parsing speed of each device (important when loading a page for the first time) I set up the Parse-N-Load library with a minified copy of jQuery 1.8.2.

Device        First load (ms)    Average Speed (ms)
Phone         212                88
iPad          30                 15
Laptop        14                 9
Desktop PC    9                  5

Compared to the execution benchmark, the tablet is almost indistinguishable from the conventional computers, but the phone lags behind, especially on the first run without the benefit of compilation caching. Websites using a JS library and a few plug-ins will start to feel noticeably slower on modest mobile devices before any scripts are even executed.

Parse on demand

Parsing isn’t necessarily a performance killer; benchmarking shows that after the initial hit most devices should perform reasonably, but there are ways to save processing power and render sooner. HTML5 defines the defer attribute for the script element (for example <script src="app.js" defer></script>) with the intention that the script is downloaded without blocking but only executed once the entire document has been parsed. This prevents the script from blocking the first page draw, giving the impression the site has loaded faster.

Because HTML5 will be a moving target for a few more years yet, Google adopted a more industrial approach by simply commenting out inline script blocks until they are needed. It’s an interesting approach and not especially hard to implement, so long as page bloat for visitors without JavaScript is not an issue. In some circumstances this could be a useful technique, but being both obtuse and ‘hacky’ (hint: use sourceURL) it isn’t likely to gain many fans.
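
To make the idea concrete, here’s a rough sketch of the technique under my own assumptions; the element ids, the data-evaluated flag and the click handler are illustrative rather than Google’s actual implementation.

// Assumes an inline block whose body is wrapped in a comment, e.g.
//   <script id="lazy-gallery">
//   /*
//   ...gallery code we don't want parsed up front...
//   */
//   </script>
function runLazyScript(id) {
    var el = document.getElementById(id);
    if (!el || el.getAttribute('data-evaluated')) { return; }
    // Strip the comment markers so the code can finally be parsed
    var source = el.textContent.replace(/^\s*\/\*/, '').replace(/\*\/\s*$/, '');
    // sourceURL names the evaluated code in debuggers and stack traces
    eval(source + '\n//# sourceURL=' + id + '.js');
    el.setAttribute('data-evaluated', 'true');
}

// Pay the parse and execute cost only when the feature is actually used
document.getElementById('show-gallery').onclick = function () {
    runLazyScript('lazy-gallery');
};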

Execute only what’s needed

Don’t initialise all the things! Abusing the DOM ready event to run hundreds of lines of code, whether the requested view requires it or not, is far too common and will slow a site down considerably. Managing modular, asynchronous, dependency-loaded scripts isn’t always feasible (and can be intimidating), especially within environments you don’t totally control, so throwing everything at the page is usually the easiest option despite the performance implications.

└── js/
    ├── src/
    │   ├── components/
    │   │   ├── tabs.js
    │   │   └── modal.js
    │   └── views/
    │       ├── global.js
    │       ├── view_1.js
    │       └── view_2.js
    └── compiled.js

In my WebsiteBase project I implemented the concept of splitting code into view objects whose methods are executed only when needed. Each block of functionality within a view is wrapped in a method so that it is portable and reusable. Views are created as individual files and compiled into a single file for deployment, giving the advantages of small, maintainable files with the network performance of a single request.

window._JSViews_.view_name = {
    init: function () {
        // view-specific setup (event bindings, DOM hooks) goes here
    }
};

At work we define which views to initialise with a custom template tag, but as views are simply registered in the global namespace (window._JSViews_), required views can be added on the client side (literally window._JSViews_.push('view_name')), pulled in with a script loader, or added as part of server-side development.
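
Stripped down, the dispatch step can be as simple as the sketch below; the _viewsToRun_ array, the example view names and the jQuery ready handler are my own stand-ins for the template tag and loaders mentioned above.

// Registry of view objects, one entry per view file
window._JSViews_ = window._JSViews_ || {};

// Names of the views the current page actually needs, emitted by the
// template tag on the server or pushed in by client-side code
window._viewsToRun_ = window._viewsToRun_ || ['global', 'view_1'];

jQuery(function () {
    // Initialise only the requested views once the DOM is ready
    for (var i = 0; i < window._viewsToRun_.length; i++) {
        var view = window._JSViews_[window._viewsToRun_[i]];
        if (view && typeof view.init === 'function') {
            view.init();
        }
    }
});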

Integrating dependency managers and understanding the incoming modular JS proposals is (probably) the end goal for all website development, but a simple, structural approach like this is sane, fast and a step in the right direction.
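
For comparison, a dependency-managed equivalent of a view would look something like the AMD module sketched below; the module names, paths and tabs API are illustrative only.

// views/view_1.js rewritten as an AMD module for a loader such as RequireJS
define(['jquery', 'components/tabs'], function ($, tabs) {
    return {
        init: function () {
            // initialise only the components this view actually uses
            tabs.init($('.tabs'));
        }
    };
});

// Page-specific entry point: fetch and run just the views this page requires
require(['views/view_1'], function (view) {
    view.init();
});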