Technology gets faster; websites get slower
Software is getting slower more rapidly than hardware becomes faster.
As hardware and connectivity improve, many content publishers expect more from websites. Higher-resolution media, more interactivity and deeper integration of external services push more, and larger, files down a user's connection, and all of it requires parsing and executing. As a result, viewing a website demands more bandwidth, more memory and more processing power. In software development this is known as Wirth's Law.
Moore's Law has covered our need for increased computing power for decades; each new desktop PC I have owned has been 2x, 3x or 5x faster than the previous one. This year, however, I moved from a quad-core monster to a low-voltage laptop, and we are all increasingly browsing the web on relatively low-powered phones and tablets.
| Device | Average Speed (ms) |
| --- | --- |
My modest, two-year-old work PC finished 93% and 85% faster than the phone and iPad respectively; processor clock speeds may look similar on paper, but half the field has substantially more power (watts) and cooling systems to work with. To directly test the JS parsing speed of each device (important when loading a page for the first time), I set up the Parse-N-Load library with a minified copy of jQuery 1.8.2.
| Device | First load (ms) | Average Speed (ms) |
| --- | --- | --- |
Compared to the execution benchmark, the tablet is almost indistinguishable from the conventional computers, but the phone lags behind, especially on the first run without the benefit of compilation caching. Websites using a JS library and a few plug-ins will start to feel noticeably slower on modest mobile devices before any scripts are even executed.
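As a rough illustration (this is not the Parse-N-Load harness used above), parse cost can be approximated with the `new Function(source)` trick: the engine must parse the source to construct the function, but nothing is executed because the function is never invoked.

```javascript
// Rough parse-time probe: new Function(source) forces the engine to
// parse the source without executing it, so timing the construction
// approximates the parse cost. (Engines that parse function bodies
// lazily will make this an underestimate.)
function timeParse(source) {
  var start = Date.now();
  new Function(source); // parsed (and possibly compiled), never invoked
  return Date.now() - start;
}

// e.g. fetch a minified library's source and run timeParse() on it
// a few times, on each device, to compare first-load and warm costs.
```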
Parse on demand
Parsing isn't necessarily a performance killer; benchmarking shows that after the initial hit most devices should perform reasonably, but there are ways to save processing power and render faster. HTML5 defines the `defer` attribute for the `script` element with the intention that the block is skipped when first encountered, then returned to and executed after the entire document has been parsed. This should result in a faster page draw and the impression that the site has finished loading.
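In markup that looks like the following (the `src` paths are placeholders):

```html
<!-- Deferred scripts are fetched in parallel but execution waits until
     the document has been parsed; they then run in source order,
     before the DOMContentLoaded event fires. -->
<script defer src="/js/jquery.min.js"></script>
<script defer src="/js/plugins.js"></script>
<script defer src="/js/site.js"></script>
```

Note that `defer` only applies to external scripts; inline blocks ignore it.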
Execute only what’s needed
Don't initialise all the things! Abusing the DOM-ready event to run hundreds of lines of code, whether the requested view requires them or not, is far too common and will slow a site down considerably. Fully managing modular, asynchronous, dependency-loaded scripts can be intimidating and isn't always feasible, especially in environments you don't totally control, so throwing everything at the page is usually the easiest option despite the performance implications.
WebsiteBase <plug>my more-than-a-boilerplate, less-than-a-framework project starter</plug> implements a simple approach: splitting code into view objects whose methods are executed only when needed, loosely inspired by the AMD 'factory' pattern and OOP principles. Each block of functionality within a view is wrapped as a method so that it is portable and reusable. Views are saved as individual files and compiled into a single file for deployment, giving the maintainability of small files and the speed boost of serving just one.
At work we define which views to execute with a custom template tag, but as they're simply registered in the global namespace (`window._JSViews`), required views can be added on the client side (literally `window._JSViews.push('view_name')`, or wrapped in a `require()` function) or as part of server-side development.
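A minimal sketch of the view-object idea follows; apart from `window._JSViews`, the names (`Views`, `runRegisteredViews`, the view names themselves) are illustrative, not WebsiteBase's actual API.

```javascript
var window = typeof window === 'undefined' ? {} : window; // shim for non-browser runs

// Each view wraps its functionality as portable, reusable methods.
var Views = {
  home: {
    init: function () {
      this.initCarousel();
      return 'home ready';
    },
    initCarousel: function () {
      // wire up carousel plug-ins, lazy-load images, etc.
    }
  },
  contact: {
    init: function () {
      return 'contact ready'; // map widget, form validation, etc.
    }
  }
};

// Views needed by the current page are registered in the global
// namespace, by a server-side template tag or on the client:
window._JSViews = window._JSViews || [];
window._JSViews.push('home');

// On DOM ready, execute only the registered views; everything else
// stays parsed but dormant.
function runRegisteredViews() {
  return window._JSViews
    .filter(function (name) { return Views.hasOwnProperty(name); })
    .map(function (name) { return Views[name].init(); });
}
```

Because unregistered views are never initialised, adding a new view to the compiled file costs only its parse time, not its execution time, on pages that don't use it.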
Integrating dependency managers and understanding the modular JS proposals is (probably) the end goal for all website development, but using a simple, structural approach like this is sane, fast and a step in the right direction.