Advanced Web Worker Performance

This post offers some advanced considerations on the mechanics of squeezing the best performance possible out of workers. I only mention a few (very) rough benchmarks since the goal is to focus on guidelines and food-for-thought, rather than specifics. It’s also fair to say that not all tasks benefit from using web workers. The best way to know is to test performance with and without the workers.

Processing costs. There are a variety of costs associated with manipulating data. Manipulating data means any task you run against it, such as looping, converting, parsing and analyzing. Each of these tasks should be evaluated for its cost in terms of CPU usage and time spent. By looking at those costs, you can determine the benefit of using a web worker versus not using one.

  • Pre-processing – manipulating data before sending it to the web worker.
  • Peri-processing – manipulating data while doing work on the background thread.
  • Post-processing – manipulating data after it’s been received back from the web worker.
  • Total processing – benchmarking the processing from the very beginning until the very end.

Take time stamps before and after each process that handles data. Tools such as console.time are invaluable in identifying actual and potential bottlenecks. Also make liberal use of developer tools such as Chrome’s CPU profiler, which now includes screen captures.
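For example, here’s a minimal timing sketch of those four phases. The worker file name and the data prep are illustrative only, not from the sample apps below:

// Rough timing harness for the processing phases described above.
// "parse-worker.js" and the JSON prep are illustrative, not prescriptive.
var worker = new Worker("parse-worker.js");
var rawData = { features: [] }; // stand-in for your real data

console.time("total-processing");

console.time("pre-processing");
var payload = JSON.stringify(rawData); // whatever prep your app needs
console.timeEnd("pre-processing");

worker.onmessage = function(e){
    console.time("post-processing");
    var result = JSON.parse(e.data); // convert the worker's output
    console.timeEnd("post-processing");
    console.timeEnd("total-processing");
};

// Peri-processing happens inside the worker between these two events.
worker.postMessage(payload);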

Total CPU and memory usage. There’s often a misperception that just because CPU usage has been outsourced to a background thread, it won’t negatively impact an application or contribute to jank. This is not true, especially on mobile devices that are already CPU and memory constrained. CPU headroom, or the amount available for application processing, is finite and typically doesn’t care whether CPU is used on the main thread or a background thread.

Web apps are wholly dependent on how each browser vendor uses CPUs and manages threads behind the scenes for every browser version and operating system. Using JavaScript you cannot guarantee how the browser will spread CPU load across cores.

In fact, I’ve seen web worker performance improvements made in Chrome that create almost an exact opposite performance decrease in Safari! Talk about a WTF moment.

Cloning costs. Careful consideration should be given to the costs associated with the act of sending and receiving messages. Sending and receiving isn’t “free”; in fact, the amount of CPU consumed can directly cause jank.

Web workers use a cloning algorithm for serializing objects that go in and back out, and cloning uses CPU cycles. Yes, even if you are using transferable objects there may still be performance implications, especially when dealing with large amounts of data (>100 MB) and on mobile devices. You simply will never know unless you test, test, test.
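For reference, here’s a small sketch of the difference between a structured-clone postMessage and a transferable one. The worker file name and the 100 MB size are just for illustration:

var worker = new Worker("worker.js"); // illustrative worker file
var buffer = new ArrayBuffer(100 * 1024 * 1024); // ~100 MB payload

// Structured clone: the buffer is copied, burning CPU on both sides.
worker.postMessage({ data: buffer });

// Transferable: ownership moves to the worker with no copy...
worker.postMessage({ data: buffer }, [buffer]);

// ...but the buffer is now neutered (unusable) on the main thread.
console.log(buffer.byteLength); // 0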

In some cases, the cloning costs related to both sending and receiving messages may outweigh the performance benefit of using a web worker. If the entire round trip consumes more time and CPU than simply keeping the processing on the main thread, then you probably don’t need a web worker.

Pooled workers. Should you use one worker or several? The only reliable way to know is to experiment. One approach to eke out greater performance is to create a pool of workers: break a task up into smaller concurrent worker tasks, then reassemble the results after all of the tasks have completed.
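Here’s a minimal sketch of that pattern, assuming a worker script that posts back the processed chunk it receives. The file name and chunking logic are illustrative:

// Fan a task out across a small pool of workers, then reassemble.
// Assumes "parse-worker.js" posts back the processed chunk it receives.
// navigator.hardwareConcurrency (where supported) can hint at a pool size.
function runPool(data, poolSize, done){
    var chunkSize = Math.ceil(data.length / poolSize);
    var results = [];
    var finished = 0;

    for (var i = 0; i < poolSize; i++){
        (function(index){
            var worker = new Worker("parse-worker.js");
            worker.onmessage = function(e){
                results[index] = e.data; // keep the chunks in order
                worker.terminate();
                if (++finished === poolSize){
                    done([].concat.apply([], results)); // reassemble
                }
            };
            worker.postMessage(data.slice(index * chunkSize, (index + 1) * chunkSize));
        })(i);
    }
}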

Here’s an example application that retrieves a GeoJSON file, parses it, then displays the data on a map. Open up the developer console and run each application multiple times. You can also download this application and experiment yourself with increasing and decreasing the size of the thread pool:

  • Example 1: Parsing using no web workers
  • Example 2: Parsing using one web worker
  • Example 3: Parsing using two web workers

With respect to binary versus non-binary data, I’ve seen diminishing returns when using larger numbers of workers with JSON files. On the other hand, there are a number of posts on the web that show great performance processing image data with increasing numbers of workers.

This may also depend on the browser’s threading model and how it uses the number of cores on your device or laptop. How do you know? Test and benchmark.

Re-use workers, or re-create for each loop? There are costs associated with initializing workers. Depending on the type of application, it may benefit you to re-use workers rather than re-create them for each loop. This depends on how complex your web workers are, how many you need to spin up, the type of device they are running on and how often you are using them.

Some applications may run on a timer once every 5 minutes. If that’s the case, does it make sense to keep workers sitting unused in memory for that long? Maybe, maybe not. Other applications may only need the workers at startup, and there’s definitely no need to keep them around after that. The list of use cases is endless.
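If you want numbers for your own hardware, one rough way to measure spin-up time is to create a worker and wait for its first message. This sketch assumes a worker script that calls postMessage as soon as it loads (an illustrative protocol):

// Rough worker spin-up measurement. Assumes "worker.js" calls
// postMessage("ready") as soon as it loads.
function timeWorkerInit(){
    var start = performance.now();
    var worker = new Worker("worker.js");
    worker.onmessage = function(){
        console.log("Worker ready in " +
            (performance.now() - start).toFixed(2) + " ms");
        worker.terminate();
    };
}

timeWorkerInit();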

Here are some very rough test results on the time it takes simply to initialize web workers. This example used identical workers that contained 67 lines of code:

  • MacBook Pro: 2 workers, 0.4 milliseconds on average
  • MacBook Pro: 4 workers, 0.6 milliseconds on average
  • Nexus 5: 2 workers, 6 milliseconds on average
  • Nexus 5: 4 workers, 15 milliseconds on average (borderline UI jank)

As far as computational horsepower goes, the MacBook used in the tests had a 2.6 GHz i7 with 16 GB of 1600 MHz DDR3; the Nexus 5 had a 2.26 GHz Snapdragon with 2 GB of 800 MHz RAM. Clearly the MacBook outclasses the Nexus. It’s important to note how much longer it took to spin up a pool of workers on a mobile device. That’s an example of an app that would work seamlessly on a powerful laptop, but could produce jank on a mobile device.

Closing Notes. Web workers are fairly straightforward to bolt into an application, but tuning them to gain the true performance benefits can be a bit trickier. With a little exploration and experimentation, workers can potentially provide huge benefits for your application’s performance.

Sample Application

[Fixed broken links: July 7, 2016]

Application Cache is not gone, oh my! Or, is it?

Reports of Application Cache’s early demise are false. Application Cache (a.k.a. Cache Manifest and AppCache) isn’t perfect: it can be very frustrating to get working, and many people hate it with a passion, but the fact is that the API will be around for the foreseeable future. I’ve been getting many questions, so this blog post is an attempt to shed some light on a confusing topic.

Mozilla says Application Cache is deprecated and unstable, WTF?

First, let’s review what Mozilla is saying about Application Cache, since the majority of questions I get come from what they are saying. Mozilla has some fairly strong wording on their developer site; you can read all about it here. Or, if you don’t feel like following a link, here’s the official text from the Mozilla Developer Network (MDN) website. I added the underlines to emphasize wording that’s tripping people up:

Deprecated. This feature has been removed from the Web standards. Though some browsers may still support it, it is in the process of being dropped. Do not use it in old or new projects. Pages or Web apps using it may break at any time.

Using the application caching feature described here is at this point highly discouraged; it’s in the process of being removed from the Web platform. Use Service Workers instead. In fact, as of Firefox 44, when AppCache is used to provide offline support for a page a warning message is now displayed in the console advising developers to use Service workers instead (bug 1204581).

Mozilla also references the Web Hypertext Application Technology Working Group’s (WHATWG) position on Application Cache. But Mozilla’s wording, while similar, is definitely different in meaning from the WHATWG statement in their HTML Living Standard, Section 7.7. You may want to read this several times:

This feature is in the process of being removed from the Web platform. (This is a long process that takes many years.) Using any of the offline Web application features at this time is highly discouraged. Use service workers instead.

An oversimplified description of the WHATWG is that it’s an organization that provides research and makes recommendations to the W3C (World Wide Web Consortium). It’s the W3C that finalizes recommendations and makes them into standards.

Also note: Application Cache is, for the moment, still officially part of the latest draft of the WHATWG HTML Living Standard document, as shown in Section 7.2.2 (it’s a big document; it’s best to just search for “application cache”).


What is the true state of Service Workers?

Reality is always nuanced when it comes to cross-browser web application development. Here’s a summary of facts for cross-browser developers to consider (as of February 1, 2016):

  • Service Workers are NOT supported on the following platforms, and you’ll need to use Application Cache if it suits your requirements and any Application Cache bugs don’t adversely affect your apps. You can verify these yourself:
    • Android Browsers at v4.4.4 and older (there are still a lot of phones out there using older browsers). Yes, a percentage of these users download Chrome. However, as of Feb 1, 2016, Android is reporting that v4.1.x through v4.4 represents 60.8% of all phones in circulation.
    • Desktop Safari
    • iOS Safari
    • IE and Edge – don’t forget that a lot of offline field worker apps are built specifically for laptops.
  • The Service Worker specification is still a Working Draft according to the W3C. It’s not final according to the official standards bodies. Mozilla even lists Service Workers as “Experimental Technology”. You can view that yourself here. Here’s MDN’s wording copied directly from their webpage:

This is an experimental technology 

Because this technology’s specification has not stabilized, check the compatibility table for the proper prefixes to use in various browsers. Also note that the syntax and behavior of an experimental technology is subject to change in future versions of browsers as the spec changes.

  • Service Workers are more complex and require coding skills, in comparison to Application Cache, which requires only a configuration file and one line of HTML code; the sketch below gives a sense of the difference. It’s going to be a big step for long-time Application Cache developers to wrap their heads around using Service Workers. It’s a very powerful API, but it’s not going to be as easy as simply swapping out your Application Cache for offline capabilities built on Service Workers.
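To give a rough sense of the gap: the Application Cache side is literally one HTML attribute, <html manifest="offline.appcache">, plus the manifest file itself. The Service Worker side starts with a registration script like this sketch (file names are illustrative) and the caching logic grows from there:

// Registering a Service Worker is only the first step; the actual
// caching logic lives in sw.js and is yours to write.
if ("serviceWorker" in navigator) {
    navigator.serviceWorker.register("/sw.js").then(function(registration){
        console.log("Service Worker registered with scope: " + registration.scope);
    }).catch(function(error){
        console.log("Service Worker registration failed: " + error);
    });
}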

What to do, what to do?

My thought is that MDN’s statement leaves developers between a rock and a hard place, and it holds the most water if the ONLY web browsers you need to support are browsers that have implemented all aspects of Service Workers that meet your requirements. You’ll also need to keep in mind that Service Workers are still experimental, so they can break or not work as expected, and there is a remote possibility the functionality could change.

When you combine MDN’s statement with the WHATWG statement and the WHATWG Living Standard document, the picture becomes clearer. Here’s my interpretation: Application Cache is going to be around for a while longer. However, it would be a really good idea for you, as a cross-browser developer, to keep your eye on the progress of Service Workers and to consider using them in applications if/when it makes sense based on requirements and browser capabilities.

Don’t throw away working, stable code built on Application Cache just yet. If you are building apps on platforms that don’t support Service Workers, then Application Cache may be your only viable alternative until you can upgrade to newer platforms or other browsers.


Advanced geolocation plugin for Cordova and PhoneGap for Android

The W3C’s browser-based JavaScript Geolocation API is excellent as a one-size-fits-all interface, but that approach comes at a price: it can impose some serious limitations when it comes to implementing more stringent professional, commercial and government use cases.

Challenges with the default Geolocation API. One of the primary limitations is that the Geolocation API does not tell you how it got a location. All locations are lumped together in a black box. Let me explain. On a smartphone or tablet, location data comes from one of three places: the GPS chipset, the cellular provider’s Location Service, or the browser’s Location Service. The W3C Geolocation API simply lumps these data points together. The end result is typically seen by the end user as significant, disturbingly wild jumps back and forth in the reported location, sometimes over large distances. A key to minimizing these fluctuations is to gain back control and understand which location provider created the latitude and longitude point.

Cordova-plugin-advanced-geolocation. The good news is that Android, in particular, has a very detailed API called LocationManager for examining geolocation data that comes from the device. And even better news for JavaScript developers is that the API, along with its access to all on-device GPS and Network location providers, has been exposed through a Cordova plugin: cordova-plugin-advanced-geolocation.

What geolocation data is available? With this plugin you’ll be able to programmatically differentiate between the following types of geolocation data, as well as get access to GPS satellite metadata:

  • Real-time GPS location – This is data from the on-board GPS; some devices also allow it to be location data from an external GPS connected to the device via Bluetooth.
  • Cached GPS location – Most devices cache the last-known GPS location and it’s persistent even when the device is restarted.
  • Real-time Network location triangulation – This is completely dependent on devices and cellular service providers. It may require WiFi to be turned on. It also may not be available in all countries or regions.
  • Cached Network location – Most devices cache the last-known network-based location and it’s persistent even when the device is restarted.

Use cases. With this plugin, you can now use your JavaScript skills to implement the use cases below and much more. For example, I’ve always wanted to play with the satellite data to make a 3D map of the satellites using JavaScript. The plugin provides a huge advantage to developers building applications for capturing a single location, such as field survey work, as well as plenty of scenarios I haven’t thought of:

  • Determine a static outdoors location and only use GPS.
  • While indoors turn on only network location. Do not use GPS.
  • While in an urban area, use network location to get an initial location before the GPS warms up, then turn off network location and only use GPS.
  • Compare the differences between GPS and Network locations.
  • …???

How does this plugin help minimize location fluctuations? The plugin comes with a configuration option for turning on a buffer. You can set the size of the buffer; each new geolocation from the device is added to it, and the plugin then determines the geometric center based on all the locations in the buffer.
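The plugin handles this internally, but the underlying idea is simple enough to sketch. Here’s an illustrative buffer that averages the most recent points (a plain mean is a reasonable stand-in for the geometric center over the short distances involved):

// Illustrative sketch of the buffering concept: keep the N most recent
// locations and return the average of the buffer as the smoothed point.
var MAX_BUFFER_SIZE = 10;
var buffer = [];

function addLocation(lat, lon){
    buffer.push({ lat: lat, lon: lon });
    if (buffer.length > MAX_BUFFER_SIZE){
        buffer.shift(); // drop the oldest point
    }

    var sumLat = 0, sumLon = 0;
    buffer.forEach(function(point){
        sumLat += point.lat;
        sumLon += point.lon;
    });

    return { lat: sumLat / buffer.length, lon: sumLon / buffer.length };
}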

Are there any other advanced plugins? Yes, some Cordova plugins are focused on being activity-based and will detect if you are walking, stopped, moving, etc. These plugins tend to work in apps that can be backgrounded. Feel free to browse the Cordova plug-in directory here.

Does using PhoneGap improve geolocation?

No, using PhoneGap or Cordova doesn’t change how you retrieve geolocation information from within a mobile application and it doesn’t improve accuracy. You still use the W3C HTML Geolocation API coding pattern.
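For reference, that pattern looks the same whether it runs in a desktop browser, a mobile browser or a PhoneGap WebView:

// The standard W3C Geolocation API pattern: identical in the browser
// and inside a PhoneGap/Cordova WebView.
var watchId = navigator.geolocation.watchPosition(
    function(position){
        console.log("lat: " + position.coords.latitude +
            ", lon: " + position.coords.longitude +
            ", accuracy: " + position.coords.accuracy + " meters");
    },
    function(error){
        console.log("Geolocation error: " + error.message);
    },
    { enableHighAccuracy: true, maximumAge: 0, timeout: 10000 }
);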

The reason is that PhoneGap runs the HTML/CSS/JS code in a native OS WebView: a chrome-less browser with functionality mostly similar to Chrome (Android) and Safari (iOS). There isn’t any additional functionality that would improve accuracy.

Some people have asked me if cordova-plugin-geolocation provides additional functionality. That plugin only adds support for devices that don’t already support geolocation, and it abides by the W3C Geolocation API spec.

The HTML Geolocation API in the browser also works offline. Yes, it’s true. You can use a web browser offline to get geolocation and even view a map without PhoneGap; here’s an example project that demonstrates the functionality in a number of ways. As long as the GPS on the device is enabled and working, and the user opts in at the geolocation-approval prompt, the application will receive GPS location data.

Are there any advantages to using PhoneGap for geolocation?

Yes, there are several potential advantages to using PhoneGap for geolocation. You can retrieve geolocation information while the device screen is locked and/or when the application is running in the background. You cannot do this with a typical web app. As soon as a browser is minimized a web app will stop running even if the GPS is still running.

Another advantage is you can write a PhoneGap plug-in to tap directly into the native OS location API. Taking this approach will give you a greater level of control over the information you are retrieving. And, you have coding tools and access to APIs to better manage the device’s battery life in a way that the HTML Geolocation API cannot deliver. In the case of Android, you can tap into significantly greater information such as details on GPS satellites, granular information on the device’s internet connection, as well as NMEA strings and more.

You can’t just wrap old websites in Bootstrap and call it a day

Just because you wrapped an existing website in Bootstrap doesn’t necessarily mean it’s ready for use on mobile devices. This is especially true if the website was originally designed for desktop browsers. Yes, Bootstrap can significantly improve the user interface and make it flexible across multiple screen sizes. But it’s also up to you to roll up your sleeves and make sure the code behind the scenes is also worthy of being mobile-ready.

So, here are three challenges to consider that will help keep your smartphone-using visitors happy.

Challenge 1 – The internet connection on mobile devices is not as reliable as your home or office wired network. That’s a fact. Download speeds can and will vary significantly. The larger the website in MBs, the more files it has to download and the more non-optimized images it contains, the longer it will take to render and become ready for use, especially on a mobile device. And mobile users are a very impatient bunch when it comes to sluggishness.

Size does matter with web sites. Smaller sized files download faster. The fewer number of files that make up a website also means faster downloads. Optimize, optimize and optimize some more. Minify files. Combine multiple files into one. Optimize images for web display.

Challenge 2 – Site navigation needs to be rethought and resized with mobile users in mind. Modern mobile devices use finger-based navigation, as opposed to high-precision mouse pointers. Teeny, tiny buttons or links that look cool on an ultra-high-resolution MacBook retina screen positively suck when you are trying multiple times to click on them with your fingertip. On some websites, using desktop navigational elements on your phone becomes like a macabre video game as you repeatedly play hit-or-miss with your fingertips.

Mobile websites should be finger lickin’ good. Okay, maybe you don’t really have to lick your fingers, but at least right size your navigational elements while keeping people’s fingers in mind. And fingers come in all shapes, sizes and levels of dexterity. Bootstrap can help with this.

Furthermore, your design and testing should work equally well in both portrait and landscape modes (phone right side up or phone on its side), and you should be able to switch back and forth seamlessly between the two modes in the same browser session.

Challenge 3 – Mobile devices are significantly more sensitive to browser memory leaks and bloated web pages. The mobile operating system will simply kill off any app that it deems to be using too much memory. And most of us are simply not good stewards at keeping our mobile browsers tuned up and happy. Browser caches grow huge and don’t get cleaned out regularly; we keep too many tabs open and probably have more information in our browser history than the Library of Congress. To further add to our woes, many of us let our phones run for weeks without a restart, which allows memory leakage to grow over time.

Tweak as many aspects of web page performance as your time will allow. Optimize old code by re-writing it and striving to add new efficiencies. I know it may sound crazy, but if your site is particularly large and complex then consider creating focused, mobile-only sites that have scaled-down content, rather than trying the one-size-fits-all approach. Smaller sites not only load faster, but they are easier to navigate, take up less memory and typically perform better.

Easy automation of JavaScript form testing

If you are writing unit tests that provide coverage for HTML forms, there is an easy, pure-JavaScript way to automate testing of the underlying code that works in modern browsers. The nice thing about this approach is you don’t have to manually load a file every time you run the tests. You still need to test the HTML interface components, but that’s a topic for a different blog post.

The concept is straightforward in that you need to emulate the underlying functionality of an HTML form. The good news is you don’t have to programmatically create an HTML Form or Input element. Here’s the pattern you need to follow:

  1. Create an xhr request to retrieve the file. Be sure to set the response type as blob.
  2. Take the xhr.response and create a new File Object using the File API.
  3. Inject the File Object into a fake Form Object or,
  4. You can also use FormData() to create an actual Form Object.
  5. The fake Form Object is now ready to pass into your unit tests. Cool!

Here’s how you create a JavaScript FormData Object. Depending on what data your code expects you can add additional key/value pairs using append():

var formData = new FormData();
formData.append("files",/* file array */files);

And, here’s what the basic pattern looks like to retrieve the file, process it and then make it available for your unit tests:

var formNode; // Unit tests can access the form node via this global

function retrieveFile(){

    var xhr = new XMLHttpRequest();
    xhr.open("GET","images/blue-pin.png",true); // set path to any file
    xhr.responseType = "blob";

    xhr.onload = function(){
        if( xhr.status === 200){
            var files = []; // This is our files array

            // Manually create the guts of the File
            var blob = new Blob([this.response],{type: this.response.type});
            var bits = [blob,"test", new ArrayBuffer(blob.size)];

            // Put the pieces together to create the File.
            // Typically the raw response Object won't contain the file name
            // so you may have to manually add that as a property.
            var file = new File(bits,"blue-pin.png",{
                lastModified: new Date(0),
                type: this.response.type
            });

            files.push(file);

            // In this pattern we are faking a form object
            // and adding the files array to it.
            formNode = {
                files: files
            };

            // Last, now run your unit tests
            // runUnitTests(formNode); // hypothetical test entry point
        }
        else {
            console.log("Retrieve file failed");
        }
    };

    xhr.onerror = function(e){
        console.log("Retrieved file failed: " + JSON.stringify(e));
    };

    xhr.send();
}