Just because you wrapped an existing website in Bootstrap doesn't necessarily mean it's ready for use on mobile devices. This is especially true if the website was originally designed for desktop browsers. Yes, Bootstrap can significantly improve the user interface and make it flexible across multiple screen sizes. But it's also up to you to roll up your sleeves and make sure the code behind the scenes is equally mobile-ready.

So, here are three challenges to consider that will help keep your smartphone-using visitors happy.

Challenge 1 – The internet connection on mobile devices is not as reliable as your home or office wired network. That's a fact. Download speeds can and will vary significantly. The larger the website in megabytes, the more files it has to download, and the more unoptimized images it contains, the longer it will take to render and become ready for use, especially on a mobile device. And mobile users are a very impatient bunch when it comes to sluggishness.

Size does matter with websites. Smaller files download faster. Fewer files making up a website also means faster downloads. Optimize, optimize and optimize some more: minify files, combine multiple files into one, and optimize images for web display.

Challenge 2 – Site navigation needs to be rethought and resized with mobile users in mind. Modern mobile devices use finger-based navigation, as opposed to high-precision mouse pointers. Teeny, tiny buttons or links that look cool on an ultra-high-resolution MacBook Retina screen positively suck when you have to tap them multiple times with your fingertip. On some websites, using desktop navigational elements on your phone becomes a macabre video game as you repeatedly play hit-or-miss with your fingertips.

Mobile websites should be finger lickin' good. Okay, maybe you don't really have to lick your fingers, but at least right-size your navigational elements with people's fingers in mind. And fingers come in all shapes, sizes and levels of dexterity. Bootstrap can help with this.

Furthermore, your design and testing should work equally well in both portrait and landscape modes (phone right side up or phone on its side), and you should be able to switch back and forth seamlessly between the two modes in the same browser session.

Challenge 3 – Mobile devices are significantly more sensitive to browser memory leaks and bloated web pages. The mobile operating system will simply kill off any app that it deems to be using too much memory. And most of us are simply not good stewards at keeping our mobile browsers tuned up and happy. Browser caches grow huge and don't get cleaned out regularly; we keep too many tabs open and probably have more information in our browser history than the Library of Congress. To further add to our woes, many of us let our phones run for weeks without a restart, which allows memory leakage to grow over time.

Tweak as many aspects of web page performance as your time allows. Optimize old code by rewriting it and striving for new efficiencies. I know it may sound crazy, but if your site is particularly large and complex, consider creating focused, mobile-only sites with scaled-down content rather than trying a one-size-fits-all approach. Smaller sites not only load faster, they are easier to navigate, take up less memory and typically perform better.

Posted in JavaScript, Mobile

Easy automation of JavaScript form testing

If you are writing unit tests that provide coverage for HTML forms, then there is an easy, pure-JavaScript way to automate testing of the underlying code that works on modern browsers. The nice thing about this approach is you don't have to manually load a file every time you run the tests. You still need to test the HTML interface components, but that's a topic for a different blog post.

The concept is straightforward: you need to emulate the underlying functionality of an HTML form. The good news is you don't have to programmatically create an HTML Form or Input element. Here's the pattern you need to follow:

  1. Create an XHR request to retrieve the file. Be sure to set the response type to blob.
  2. Take the xhr.response and create a new File Object using the File API.
  3. Inject the File Object into a fake Form Object, or use FormData() to create an actual Form Object.
  4. The fake Form Object is now ready to pass into your unit tests. Cool!

Here’s how you create a JavaScript FormData Object. Depending on what data your code expects you can add additional key/value pairs using append():

var formData = new FormData();
// append() takes one value at a time, so add each File individually.
files.forEach(function(file) { formData.append("files", file); });

And, here’s what the basic pattern looks like to retrieve the file, process it and then make it available for your unit tests:


var formNode; // Unit tests can access form node via this global

function retrieveFile(){

    var xhr = new XMLHttpRequest();
    xhr.open("GET","images/blue-pin.png",true); //set path to any file
    xhr.responseType = "blob"; 

    xhr.onload = function()
    {
        if( xhr.status === 200)
        {
            var files = []; // This is our files array

            // Manually create the guts of the File
            var blob = new Blob([this.response],{type: this.response.type});
            var bits = [blob,"test", new ArrayBuffer(blob.size)];

            // Put the pieces together to create the File.
            // Typically the raw response Object won't contain the file name
            // so you may have to manually add that as a property.
            var file = new File(bits,"blue-pin.png",{
                lastModified: new Date(0),
                type: this.response.type
            });

            files.push(file);

            // In this pattern we are faking a form object
            // and adding the files array to it.
            formNode = {
                elements:[
                    {type:"file",
                        files:files}
                ]
            };

            // Last, now run your unit tests
            runYourUnitTestsNow();
        }
        else
        {
            console.log("Retrieve file failed");
        }
    };
    xhr.onerror = function(e)
    {
        console.log("Retrieved file failed: " + JSON.stringify(e));
    };

    xhr.send(null);
}
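
For completeness, here's a hypothetical sketch of what runYourUnitTestsNow() might look like when it consumes the fake form node. The assertions are purely illustrative:

// Hypothetical unit test consuming the fake form node built above.
function runYourUnitTestsNow() {
    var fileInput = formNode.elements[0];
    console.assert(fileInput.type === "file", "expected a file input");
    console.assert(fileInput.files.length === 1, "expected exactly one file");
    console.assert(fileInput.files[0].name === "blue-pin.png", "unexpected file name");
    console.assert(fileInput.files[0].size > 0, "file should not be empty");
}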
Posted in JavaScript, testing

The Geolocation API is built into all modern mobile browsers, and it lets you take either a quick, one-time snapshot or get continuous location updates. Using the browser to get your approximate location is very, very cool, but it's also fraught with many challenges. The vast majority of blog posts on this API talk about what it can do; this blog post focuses on how to best use it and on understanding the data provided by the API.

To start things out, let’s take a quick look at a short list of some of the challenges when using the Geolocation API.

Challenge 1. You will not know where the location information is coming from. There's no way to tell if it's from the GPS, the cellular provider or the browser vendor's location service. If you care about these things, then the native Android SDK, for example, gives you a huge amount of control over what it calls 'location providers.'

Challenge 2. You cannot force the app to stay open. This means that the typical user has to keep tapping the app to keep it alive; otherwise the screen will go to sleep and minimize your app.

Challenge 3. Speaking of minimized apps, when the browser is minimized the geolocation capabilities stop working. If you have a requirement for the app to keep working in the background, then you'll need to go native.

Challenge 4. You’ll have very limited control over battery usage. Second only to the screen on your phone or tablet, the current generation of GPS chips are major energy hogs and can suck down your battery very quickly. Since the Geolocation API gives you very little control over how it works you cannot build much efficiency into your apps.

Challenge 5. Most smartphones and tablets use a consumer-grade GPS chip and antenna, and that limits the potential accuracy and precision. On average, the best possible accuracy is typically between 3 and 10 meters, or 10 – 33 feet. This is based on my own extensive experience building GPS-based mobile apps and working with many customers who are also using mobile apps. Under the most ideal scenario, the device will be kept stationary in one location until the desired accuracy number is achieved.

What's it good for? Okay, you may be wondering what browser-based geolocation is good for. It's perfect for very simple use cases that don't require much accuracy. If you need to map manhole covers, parking spaces, or any other physical things that are close together, you'll need a GPS device with professional-level capabilities.

Here are a few generic examples that I think are ideal for HTML5 Geolocation:

  • Simply getting your approximate location in latitude/longitude and converting it to a physical address.
  • Finding an approximate starting location for searching nearby places or things in a database or for getting one-time driving directions.
  • Determining which zip code, city or state you are in to enable specific features in the app.
  • Getting the approximate location of a decently sized geographical feature such as a park, a building, a pond, a parking lot, a driveway, a group of trees, an intersection, etc.

What’s the best way to get a single location? The best way to get a single location is to not use getCurrentPosition() but to use watchPosition() and analyze the data for a minimum set of acceptable values.

Why? Because getCurrentPosition() simply forces the browser to barf up the best available raw location snapshot right now. It literally forces a location out of the phone. Values can be wildly inaccurate, especially if the GPS hasn't warmed up, if you aren't near a WiFi hotspot with your WiFi turned on, if your cellular provider can't get a good triangulation fix on your phone, or if it returns a cached value from a different location altogether. There are many, many "what ifs?"

So, I recommend using watchPosition(): fire it off and let it run until the return values meet some minimum criteria that you set. You need to know that while this is happening, the location values returned may cover a fairly wide geographic area…remember, our best accuracy values are typically 3 – 10 meters. Here's a real-world example of Geolocation API location values that I captured over a 5-minute period while standing stationary in front of a building.

[Image: 5-minute snapshot of Geolocation API location values]

What steps do you recommend? Here are five basic steps to help guide you towards one approach for getting the best location. This is a very simplistic approach and may not be appropriate for all use cases, but I think it's adequate to demonstrate the basic concepts of working towards the best possible location.

Step 1. Immediately reject any values that have accuracy above a certain threshold that you determine. For example, let’s say we’ll reject any values with an accuracy reading greater than 50 meters.

Step 2. Create three arrays: one each for accuracy, latitude and longitude. If the accuracy is below your threshold, in this case < 50 meters, then push the values onto the appropriate arrays. You will also need to set a maximum size for the arrays and create a simple algorithm for adding new values and removing old ones.

The array length could be 10, 20 or even 100 or more entries. Just keep in mind that the longer the array, the longer it will take to fill up and the longer the user will have to wait for the end result.

Step 3. Start calculating the average values for accuracy, latitude and longitude.

Step 4. Start calculating the standard deviation for accuracy, latitude and longitude.

Step 5. If your arrays fill up to the desired length, the average accuracy meets your best-possible criteria, and the standard deviation is acceptable, then you can take the average latitude and longitude values as your approximate location.
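
To tie the five steps together, here's a minimal sketch of one way to implement them. The window size, accuracy threshold and deviation limit are illustrative values you should tune for your own use case:

// Minimal sketch of the five-step sliding-window approach above.
var MAX_SAMPLES = 20;           // illustrative window size
var ACCURACY_THRESHOLD = 50;    // meters, per Step 1
var accuracies = [], lats = [], lons = [];

function average(arr) {
    return arr.reduce(function(sum, v) { return sum + v; }, 0) / arr.length;
}

function stdDev(arr) {
    var mean = average(arr);
    return Math.sqrt(average(arr.map(function(v) {
        return (v - mean) * (v - mean);
    })));
}

var watchId = navigator.geolocation.watchPosition(function(position) {
    var coords = position.coords;

    // Step 1: immediately reject low-quality readings.
    if (coords.accuracy > ACCURACY_THRESHOLD) return;

    // Step 2: push new values; drop the oldest once the window is full.
    accuracies.push(coords.accuracy);
    lats.push(coords.latitude);
    lons.push(coords.longitude);
    if (accuracies.length > MAX_SAMPLES) {
        accuracies.shift();
        lats.shift();
        lons.shift();
    }

    // Steps 3 - 5: with a full window, accept the averaged location once
    // the average accuracy and its standard deviation look stable.
    if (accuracies.length === MAX_SAMPLES &&
            average(accuracies) < ACCURACY_THRESHOLD &&
            stdDev(accuracies) < 10 /* illustrative limit */) {
        navigator.geolocation.clearWatch(watchId);
        console.log("Location: " + average(lats) + ", " + average(lons));
    }
}, function(error) {
    console.log("Geolocation error: " + error.message);
}, { enableHighAccuracy: true });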

For an example of this simple algorithm at work visit the following URL on your phone and step outside to get a clear view of the sky: http://esri.github.io/html5-geolocation-tool-js/data-test.html.

Posted in Browsers, GPS

After the news last summer about a measly 1.2 billion usernames and passwords being stolen, I started doing my own research on the Fortune 1000 companies that I do business with. I say measly because many large companies have downplayed the importance of protecting passwords. As far as basic password security goes, the vast majority of them failed. By password security I mean how well a password will hold up when it goes head-to-head with a super-powerful, cloud-based password cracker.

There is a very easy test that you can do as well. Simply go to the company's website and attempt to reset your password.

The main problem is that login-based security enforcement is random and everyone does it somewhat differently. Some companies may do an excellent job of login and password security and others may not. And, yes, passwords are an antiquated, 20th-century invention that has unfortunately lived well past its useful lifespan. However, their usage is still so widely accepted that we as consumers are stuck with them for now. So I believe it's up to us, the consumers, to voice our opinions on what's acceptable and what's not, rather than leave companies to go about their business without any feedback.

Companies with failing grades will have the following password characteristics:

  • Allow fewer than 15 characters
  • Limit you to letters only
  • Don't allow mixed upper- and lower-case letters
  • Limit you to alphanumeric characters only, with no special characters
  • Provide no password strength meter. Good ones are not security theater, and consumers appreciate the instant feedback
  • Provide no hints on how to create a strong password
  • Block the pasting of passwords. This really discourages longer passwords.
  • Don't offer two-factor authentication
  • Don't provide the option to view the password you typed. The vast majority of us are in the privacy of our office or home when we type passwords. Seeing what you typed, especially if you got it wrong, is a huge time and frustration saver.
  • Don't track which computers you use or log IP addresses, dates and times of access.
  • Don't allow you to register a specific computer or device.
  • Don't enforce HTTPS. Any company that houses your secure data should enforce secure HTTP. It's very rare these days, but it does happen.
  • If you know of other characteristics not listed here, let me know.

Most of the Fortune 1000 companies I do business with easily have some (or most) of the above characteristics. The problem is this creates a honey pot for bad guys who know that a company allows very-easy-to-crack passwords. In my opinion, it's certainly an advertisement for would-be criminals looking for easy pickings, especially if they can simply steal a huge chunk of the company's database.

Where did I get the 15-character minimum? It's basic math. The longer and more complex a password, the harder it is to crack. At 15 characters you are talking about some serious and expensive computing power. For example, a simple all-lower-case password of "abcdefghijklmno" could take a few months to crack using a super-massive password cracker, according to https://www.grc.com/haystack.htm. That's potentially a very expensive proposition for cracking large numbers of passwords. In comparison, a complex six-character password combining lower-case letters, upper-case letters, numbers and special characters, "a1C&qZ", might take all of a few seconds to crack. Hmmm, if there is a decent percentage of short passwords in a database, that might make it a viable target for the bad guys and certainly easy pickings.
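
If you want to check the math yourself, here's a quick back-of-the-envelope comparison of the two search spaces, assuming 26 lower-case letters versus roughly 95 printable ASCII characters:

// Back-of-the-envelope search-space comparison (illustrative math).
var lowerCase15 = Math.pow(26, 15); // ~1.7e21 possible passwords
var complex6 = Math.pow(95, 6);     // ~7.4e11 possible passwords
console.log(lowerCase15 / complex6); // length wins by roughly nine orders of magnitude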

Who cares about passwords? I've seen many comments from large companies poo-pooing the need for complex passwords and saying they have far more important security issues to worry about. My retort: I totally disagree. Password security is the lowest common denominator in today's world. It's common knowledge that many crimes happen because there was an easy opportunity. Easy-to-crack passwords are no different than leaving your car or house door unlocked, or leaving your car window open with a laptop sitting on the front seat.

What type of information are we talking about here that hackers are getting access to? I'm not trying to overdramatize this, but we are talking about very personal information, as proven in recent data breaches, that includes but is not limited to: your social security number, date and place of birth, bank login information, private conversations with a doctor or hospital, credit card account information, information on when you'll be out of town, business dealings, corporate secrets, investments and oh so much more.

And, yeah, I know that consumers can whine about having to be bothered with complex passwords. I too hate having to manually create long, complex passwords, especially on a site that enforces what seem like ridiculously twisted rules. However, there is a great solution…password vault apps! PC Magazine, for one, reviews these apps, and they range from free to approximately $60. You can also share some of these apps between your laptop, phone and tablet.

Conclusion. Whom do you think a bad guy would target first: a large company that allows easy, short passwords, or a large company that enforces good password security and encrypts the passwords in its database?

My recommendation is this: if your company has weak user password security, write to the CEO and tell them that password strength does matter and that it's one more piece of armor helping protect all of their systems. Don't waste your time filling out the standard blah-blah feedback form. Complain to top management, since they have the power to make changes.

And use a password vault when you can. It will make managing complex passwords oh so much easier.


References

Wikipedia – Password Strength

How secure is my password?

Think you have a strong password?

Why you can’t have more than 16 characters or symbols in your password.

 

Posted in Security

WordPress images missing after blog was moved

If you are reading this, you probably just migrated your WordPress blog from one hosting provider to another. If everything else on your blog has been restored and it's working fine, then congratulations, since that was the hard part.

The good news is that restoring your images is fairly straightforward, and the even better news is you won't have to manually modify all your image links. This should give you some relief!

Here are the steps:

  1. Back up your database. Many hosting providers have this functionality built-in. If you are hosting your own blog, it's best to just go ahead and get another fresh export of the database and copy it to your local machine. This gives you the most up-to-date copy if something goes wrong in Step 6.
  2. Open your most recent blog post that has a broken image. Copy and paste the broken URL into a text editor. You should be able to figure out the broken link by clicking on the missing image's empty placeholder. Example:

http://www.myoldsite.com/WordPress/wp-content/uploads/2014/12/test.png

  3. Fix this one image only, using the WordPress blog post editor, and save your changes. Then refresh your browser and make sure the image is now displaying correctly.

IMPORTANT NOTE: In the majority of cases this step will go just fine. However, if it doesn't, there are a few potentially tricky reasons why your changes may not show up immediately. It may depend on how your blog cache is set up; for example, if you are using TotalCache you may have to manually blow away your blog's cache(s). Also, some hosting providers may take a few minutes to update your data in the cloud. And lastly, you may have to delete your browser cache, depending on how the web server that is hosting your blog is configured. Sorry, sometimes there's no easy answer here, but I believe it's better for you to be aware.

  4. Now click on the new image and get its URL, then copy and paste the URL into your text editor. Note in my example there are some slight differences between the old and new URLs. It's these differences that we need to correct. Example of a new URL:

http://www.mynewsite.com/htdocs/WordPress/wp-content/uploads/2014/12/test.png

  5. Open up the database in a SQL editor window. One popular way to access the database is via phpMyAdmin.
  6. In the text editor, create a SQL UPDATE command from the URLs you copied and pasted above, then run it. Here's an example template of how your command might look. Be sure to modify the URLs so they fit your unique blog:

UPDATE wp_posts SET post_content = REPLACE(post_content,'www.myoldsite.com','www.mynewsite.com/htdocs');

  7. Refresh your webpage and see if all the broken image links are restored.
  8. If for some reason your website crashes or the pages get messed up, you will need to restore the database and start over from Step 1 above. The most common cause of problems is a mistake made when creating the SQL UPDATE statement.

Reference

WordPress.org – Changing the Site URL

Posted in WordPress

Offline JavaScript Part 3 – Intermittent Offline

This is Part 3 of my offline JavaScript series, and it covers intermittently offline web apps. The vast majority of web apps are built on the false assumption that the internet will always be available. Yes, the internet is available the vast majority of the time, and most of us rarely encounter issues. However when, not if but when, the internet fails, most web apps simply crash and burn in fairly spectacular fashion. I suggest a different approach: there are many, many common use cases, in both consumer and professional apps, that can benefit from offline capabilities.

As discussed in Part 1, intermittently offline web apps are designed to gracefully handle the occasional, temporary internet connection hiccup. The goals of an intermittent offline app are to make the offline capabilities lightweight and invisible to the user, and to allow the user to pass seamlessly through a temporary loss of data connectivity.

The good news, as discussed in Part 2, is you can use a variety of libraries and APIs to solve many of the challenges related to partial offline, including detecting whether or not you have an internet connection and handling HTTP requests while offline.

How do I decide if I need intermittent offline capabilities?

If you answer ‘yes’ to the following question then you need to consider adding offline capabilities:

Does the app have any critical functionality that could fail if the internet temporarily goes down?

Critical functionality means functionality that's important to your core business. And, to be realistic, I'm not talking about building fully armored applications that take every possible contingency into account. That's just not feasible for the vast majority of non-military-grade applications. Some of the most common use cases are filling in forms and requesting data. And temporary interruptions can vary anywhere from a few seconds to a few minutes or longer, and they can happen once or multiple times.

If your application can't handle this and it needs to, then making changes to allow it to work offline can make a big difference to the user. It's almost as if web development should have its own version of "do no harm," or something like "we will do our best to make users' lives easier." You might be surprised that some very simple and common use cases, such as filling in form data or reading an online article, can benefit from being offline-enabled.

Filling in form data. This has probably happened to everyone who uses the internet and it applies to both retail/consumer and commercial applications. You spend a while filling out a detailed web form only to have the submit fail and destroy all your hard work because of a temporary interruption in the internet connection or something simply went wrong between the app and the web server.

If our form were offline-enabled, we could store the form data in LocalStorage before attempting to send it to the server. We could also temporarily prevent the web form from submitting and notify the user that there is no internet connection.
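
Here's a minimal sketch of that idea; the form id, storage key and message text are all illustrative:

// Sketch: snapshot the form into LocalStorage before submitting so a
// failed request can't destroy the user's work.
var form = document.getElementById("myForm");

form.addEventListener("submit", function(evt) {
    var data = {};
    for (var i = 0; i < form.elements.length; i++) {
        var el = form.elements[i];
        if (el.name) data[el.name] = el.value;
    }
    localStorage.setItem("pendingFormData", JSON.stringify(data));

    // A simple connectivity gate. As discussed below, a library such as
    // Offline.js is more reliable than navigator.onLine alone.
    if (!navigator.onLine) {
        evt.preventDefault();
        alert("No internet connection. Your entries have been saved locally.");
    }
});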

Reading an online article. In this scenario you are reading an article while waiting for a train. Once you get on the train, you know the internet will be marginal. You accidentally click on a navigation link while scrolling down, and the new page fails to load. This effectively ruins your browsing experience: the new page failed to load, and you can't go back to the previous page because it wasn't cached.

There are a number of different ways to protect this type of application. The easiest way is to block any page load requests until the internet is restored. You can also take advantage of the built-in browser cache to store HTML, CSS, images and JavaScript.

Show me an example workflow?

The most basic workflow takes into consideration the following questions. How these questions get answered depends on your requirements.

  • Do you allow users to restart apps while offline?
  • Do you simply block all HTTP requests and lock down the app?
  • Do you queue HTTP requests and their data?
  • Do you pre-cache certain data?
  • How will you detect if the app is online or offline?

Here is an example coding pattern for the most basic intermittent offline workflow. The sketch below queues outgoing requests while the connection is down and replays them when it comes back up; it assumes the same Offline.js library used in the demo later in this post:
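
// Sketch: queue outgoing requests while the connection is down and
// replay them when it comes back up. The endpoint is illustrative.
var requestQueue = [];

function sendOrQueue(url, payload) {
    if (Offline.state === "up") {
        postData(url, payload);
    } else {
        // Offline: hold onto the request instead of letting it fail.
        requestQueue.push({ url: url, payload: payload });
    }
}

// When Offline.js reports the connection is back, flush the queue.
Offline.on("up", function() {
    while (requestQueue.length > 0) {
        var queued = requestQueue.shift();
        postData(queued.url, queued.payload);
    }
});

function postData(url, payload) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", url, true);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify(payload));
}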

What about Offline/Online detection?

If you have no control over which browsers your customers choose, then my recommendation is to use a pre-built library such as Offline.js to check whether the internet connection is up or down. It's not perfect, but it's the best choice out there as of the writing of this post.

Don't rely solely on the window.navigator.onLine property. It has too many inconsistencies and is only marginally reliable if the general public is using your app.

What about caching?

There are several built-in browser caching mechanisms that can help your app get past the occasional internet hiccup. When your app goes offline, you'll have to rely on local, in-browser resources to keep things going:

  • Browser Caching
  • LocalStorage
  • IndexedDB

As mentioned above, browser caching can be a very efficient way to store HTML, JavaScript, images and CSS. Depending on how you set up your web server, this caching takes place automatically in the user's browser and can represent a huge performance gain by eliminating HTTP round trips. I'm not going to talk much about this because there are a ton of great online resources already out there.

Using LocalStorage involves writing JavaScript code if you want to temporarily store HTTP requests. It's limited to String-based data, so if you are using Objects or binary data you'll have to serialize the data when you write it to LocalStorage and deserialize it when you read it out. LocalStorage also almost always has a limit on how much storage is available; 5MB is the commonly accepted limit.
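
For example, here's a tiny sketch of the serialization round trip; the key and object are illustrative:

// LocalStorage only stores Strings, so Objects have to be serialized
// going in and deserialized coming out.
var pendingRequest = { url: "/api/items", payload: { id: 42 } };
localStorage.setItem("pendingRequest", JSON.stringify(pendingRequest));

var restored = JSON.parse(localStorage.getItem("pendingRequest"));
console.log(restored.payload.id); // 42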

IndexedDB, on the other hand, stores a wide variety of data types and can store significantly more than 5MB. While in theory the amount of storage space available to IndexedDB is unlimited, practical application of it on a mobile device limits you to around 50MB – 100MB. Your mileage may vary depending on available device memory, the current memory footprint of the browser and the phone’s operating system.

IndexedDB can work natively with the types String, Object, Array, Blob, ArrayBuffer, Uint8Array and File. This offers huge pre- and post-processing savings if you are simply able to pass data directly into IndexedDB.
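
As a rough sketch of what that looks like, here's a Blob going directly into IndexedDB with no serialization step; the database and store names are illustrative:

// Store a Blob as-is in IndexedDB - no JSON.stringify() required.
var someBlob = new Blob(["cached payload"], { type: "text/plain" });

var openRequest = indexedDB.open("offline-cache", 1);
openRequest.onupgradeneeded = function() {
    // First run: create an object store that uses out-of-line keys.
    openRequest.result.createObjectStore("responses");
};
openRequest.onsuccess = function() {
    var db = openRequest.result;
    var tx = db.transaction("responses", "readwrite");
    tx.objectStore("responses").put(someBlob, "feed-response");
    tx.oncomplete = function() { db.close(); };
};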

There are also a number of abstraction libraries that wrap LocalStorage and IndexedDB, such as Mozilla's localForage. These types of libraries are great if you have requirements to store 5MB of data or less. If your app is running in a browser that doesn't support IndexedDB or WebSQL (e.g. Safari) and you need more than 5MB of space, then you'll have problems. One potential advantage of these libraries is that some of them provide their own internal algorithms for serializing and deserializing data. If handling serialization yourself isn't your thing, then a library like this can be a huge benefit.
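
Here's a small sketch using localForage; it stores an Object directly and hands it back deserialized (the key and value are illustrative):

localforage.setItem("pending", { id: 42, note: "stored while offline" })
    .then(function() {
        return localforage.getItem("pending");
    })
    .then(function(value) {
        console.log(value.note); // the Object comes back intact
    });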

Can you show me some code?

Yes! Here is a very simple example of how to implement basic offline detection in your apps. It's easiest to try it in Firefox, since you can quickly toggle the connection on and off using the File > Work Offline option.

The code is available at: http://jsfiddle.net/agup/1yxj5mzp/. You'll notice two things when you go offline. First, jsfiddle itself will detect that you are offline, in addition to the web app code. Second, when you click the Get Data button while offline, the code sample should detect you are offline and fire off a JavaScript alert.

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>Simple Offline Demo</title>
</head>
<body>
<div id="status">Status is:</div>
<button onclick="getData()">Get Data</button>
<!-- This is our Offline detection library -->
<script src="http://github.hubspot.com/offline/offline.min.js"></script>

<script>

    // Set our options for the Offline detection library
    Offline.options = {
        checkOnLoad: true,
        checks: {
            image: {
                url: function() {
                    return 'http://esri.github.io/offline-editor-js/tiny-image.png?_='
                        + (Math.floor(Math.random() * 1000000000));
                }
            },
            active: 'image'
        }
    };

    Offline.on('up', internetUp);
    Offline.on('down',internetDown);

    var statusDiv = document.getElementById("status");
    statusDiv.innerHTML = "Status is: " + Offline.state;

    function getData() {

        // See if internet is up or down
        Offline.check();

        switch (Offline.state) {
            case "up":
                // If the internet is up go ahead and retrieve data.
                getFeed(function(success,response){
                    if(success){
                        alert(response);
                    }
                });
                break;
            case "down":
                alert("DOWN");
                break;
        }
    }

    function getFeed(callback) {
        var req = new XMLHttpRequest();
        req.open("GET",
                "http://tmservices1.esri.com/arcgis/rest/services/LiveFeeds/Earthquakes/MapServer?f=pjson");
        req.onload = function() {
            if (req.status === 200 && req.responseText !== "") {
                callback(true,req.responseText);
            } else {
                console.log("Attempt to retrieve feed failed.");
                callback(false,null);
            }
        };

        req.send(null);
    }

    function internetUp(){
        console.log("Internet is up.");
        statusDiv.innerHTML = "Status is: up";
    }

    function internetDown(){
        console.log("Internet is down.");
        statusDiv.innerHTML = "Status is: down";
    }
</script>
</body>
</html>

Are there any examples of real-life offline apps or libraries?

The GitHub repository offline-editor-js is a full-fledged set of libraries for taking maps and mapping data offline, and it's being used in commercial mapping applications around the world. It includes a variety of sample applications that demonstrate how applications can work in either intermittent or fully offline mode.

Wrap-up

Hopefully you have seen that common use cases can significantly benefit from having basic offline capabilities. Modern browsers have advanced to the point where it’s fairly easy to build web apps that can survive intermittent interruptions in the internet. Taking advantage of these capabilities can offer a huge benefit to your end users.

Resources

Optimizing content efficiency – HTTP caching
Offline-editor-js – Offline mapping library

Posted in Browsers, JavaScript