Performance

2017: Boost Firefox Mobile Browsers

As a product manager for mobile browsers, besides being the voice of our users, it’s my mission to look ahead, to collect, detect, and predict mobile trends, and to validate how well they fit our users and markets. Finally, we decide as a team what to focus on.

The beginning of a new year always seems to be a good time to reset, reflect, and set goals for the brand new year to come.

Mobile web is an exciting space to be in, even more so now, because last year marked an important milestone: for the first time, StatCounter reported that more people are browsing the web on their mobile devices than on their desktop computers.

This write-up covers some of my reflections and aspirations to build a strong and useful Firefox browser on mobile.

Web Compatibility and Video

If a website doesn’t render content properly, the user is frustrated and will probably switch to a different browser. That, of course, applies to video as well.

By 2020 over 75% of global mobile data traffic will be video content.

My goal for 2017 is to make sure every video on the mobile web works in every Firefox mobile browser. Mozilla’s platform & web compat team is eager to help us get there. If you see a video not working in your mobile Firefox browser, let us know.

Most cat videos live on the web, and not in apps.

“Mobile web is a mile wide and an inch deep; the app is a mile deep and an inch wide.”

One of the biggest advantages of the (mobile) web is that users don’t need to install an app to get the content or service they require. No need to sacrifice more storage on your device for yet another app that is rarely used. We can take advantage of one of the web’s finest qualities: discoverability. We have the entire web at our disposal to help users discover new media content.

The sections below include some questions I’d like to discuss with our UR & UX team.

👋 Welcome New Users in Emerging Markets

I’m not the first one to tell you this; anybody who has an interest in the mobile space knows by now to pay attention to India’s massive mobile growth in 2017 and the years to follow.

“In order to create value in India in the coming decade, companies must have a mobile-first strategy.”

What does that mean for us? Although our data shows that more people in India use an English version of our browser app, a lot of people in India consume mobile web content in Hindi (5x more). Can we localize and customize our app (even more) to accommodate non-English users?

How diverse are our users within different markets and cultures? Do they use features differently and how? For example, we’ve noticed that people in India (and Indonesia) save significantly more media content from the web to their device than in the US (almost double). Let’s investigate and help our users, no matter where they are, by finding solutions to their specific problem and needs.

India already surpassed the US smartphone market, and there will be an additional 330 million unique subscribers in India by 2020.

How can we best welcome these new smartphone users to the world of mobile browsing? Can we learn from their first, brand-new encounters with the web? What works well for them, and what doesn’t work at all? It’s a fresh chance for us to give these 330 million new users a safe home and entry point to the web.

Partnerships, Cross-App Usage & Diversification

I strongly believe that we should not force users to stay in our apps, but rather make them want to come back more frequently; we need to foster, indeed actively encourage, cross-app usage. Especially on iOS, where there is no concept of intents and users cannot choose their own preferred default apps, we need to strengthen our partnerships with other apps by forming an alliance that lets users decide (rather than the OS dictate) which apps they default to.

Stay tuned for our upcoming Firefox iOS update, which will be the start of a series of cross-app pollination efforts to address the issue mentioned above.

My personal goal is to establish more cross-app partnerships this year.

Check out our “Open in Firefox” SDK for iOS if you are interested in giving users a choice when opening web content from within your iOS app.

This year will also be a year for our mobile team to experiment with single-purpose apps (e.g. Focus, a simple private browser) and to validate whether diversifying our mobile presence into multiple apps proves to be a successful move.

I can’t wait to experiment with some of the great ideas the team has in mind.

Artificial Intelligence

Soon there will be hardly any mobile app or browser that doesn’t include some form of AI or machine learning. My colleagues on the Context Graph team are currently experimenting with and optimizing the concept of a recommendation engine for our users. Stay tuned for more to come this year, carried out in products like Activity Stream for our mobile browsers.

Support for Diverse Input Formats

Personal home assistants like Google Home and Alexa are operated primarily by voice to perform tasks.

I’d like to explore and think more about voice as another input format for browsing the web, be it via the home assistants mentioned above or via (other) apps on the mobile device.

And let’s not forget about the 2-in-1 devices. As a core mobile product manager, I want to make sure that our users on these devices can easily switch between touch, mouse, and other input methods to consume any web content they want.

(Mobile) Web Performance

There can’t be a post of mine without mentioning mobile web performance.

First off, I’m so excited about Mozilla’s Quantum project. The team has already committed to delivering a fast web engine by the end of 2017. In addition, our team in Taipei is knee-deep in building HASAL, a framework for testing web performance across different browsers.

As for the user-facing part, we will continue to include features that help our users browse the web faster. Obstacles such as slow-loading ads, data usage, and bandwidth constraints (especially in emerging markets) force us to find solutions for our users.

I want to continue to fight for fast(er) mobile websites and give users choices to understand performance and bandwidth impacts (list of mobile data related bugs).

This was a (small) selection of topics the core mobile browser team at Mozilla will work on in 2017 and the years to come.

Happy New Year, everyone! It’s going to be yet another exciting one.

Advocacy for a Faster Web: A Setting in Your Browser

I’ve talked about Third Party Footprint and Web Performance many times to help designers, developers and publishers understand the importance of web performance.

Now, I’m excited to share with you a solution that I hope will encourage third-party providers to fix their performance issues by sending another strong signal, this time from the actual consumers of tracking providers: the website visitors themselves.

As part of my mission as Fennec’s product manager, I don’t only want the web and the browser app to win; I also want to give users choices in how they experience their daily journeys online, accomplishing their tasks always (a little) faster.

Mozilla wants to give control back to the user, which is why we’ve added an important feature in our 42 release (Desktop and Android): Tracking Protection in Private Browsing.

Besides the privacy angle (the main reason for this feature is to stop advertisers from tracking you without your knowledge), there is one other important side effect: improved performance, achieved by disabling third-party scripts on the pages you visit.

Not only will it lead to faster-loading pages, it will also help users save battery life and data. This is especially important in countries such as India or China, where data-saving features are among the top reasons users pick their browser. Chrome and Opera on Android have both explored this idea as well.

In addition to protecting users’ privacy, I hope that by giving users the choice to browse the web without third-party scripts, we can send a message to tracking providers to improve the performance of their services. Don’t get me wrong, I don’t want people to lose their advertising jobs; rather, I see it as another layer of advocacy and strong(er) messaging to providers that not only publishers and their developers care about (mobile) performance, but that users can now show their opinion through concrete actions. Who knows, it might even encourage publishers to think outside the box and provide their own solutions.

It’s nothing new that users don’t like ads; the most popular add-ons for Firefox on Android are ad blockers.

However, not everyone knows about, feels comfortable with, or wants to install add-ons or additional ad-blocker apps. We shouldn’t make it difficult for users to take back control, and we should offer them speedy web experiences.

Allowing users to disable third-party trackers directly from within the browser app seems convenient for the user, and just makes sense (to me).

Performance Test

Let’s see how turning on tracking protection (in private browsing) benefits the user beyond protecting their privacy. The following test shows how much data and how many HTTP requests can be avoided by turning on Tracking Protection.

Test Scenario

I’ve tested it myself with remote debugging on several websites (e.g. people.com, espn.go.com, etc.). I connected my Android device to my laptop and used a real LTE connection (tethered via my other phone) to reproduce real mobile speeds.

I tried out two scenarios on people.com, collected the remote debugger information, and stored it in HAR files (a convenient file format for performance analysis).

  1. Private browsing mode with Tracking Protection enabled
  2. Private browsing mode with Tracking Protection disabled

The two graphs below, created with http://onlinecurl.com/har-diff, show how the two modes loaded people.com. The bars on the left, with tracking protection enabled, are shorter, indicating that the page loaded faster. The bars on the right, without tracking protection, are longer, representing longer loading times.


Left: Tracking protection enabled. Right: Tracking protection disabled


Results

Test          | Private Browsing | Private Browsing with Tracking Protection | Savings (%)
Data          | 2.1 MB           | 1.4 MB                                    | 33
HTTP Requests | 182              | 69                                        | 62

The actual HAR files can be found here.

Both metrics (data and HTTP requests) show clear differences between tracking protection enabled and disabled, hinting at an improved user experience for people.com with tracking protection enabled.

Try it out yourself: download the latest Firefox for Android on your phone and open a private browsing tab. Tracking protection is enabled by default in private browsing mode, no extra setup required.


 

Tracking Protection outside of Private Browsing (Experimental)

We are currently experimenting with options to enable tracking protection in normal mode.

I encourage everyone to try it out, share their opinion, and enjoy this experimental feature even when browsing in non-private mode (download Nightly).


To a healthier and faster web, everyone.


Disclaimer

  • Note that because tracking protection removes parts of a website when enabled, some websites might break.
  • The article describes my personal opinion.

 

Avoiding Temptations that Harm Website Performance

The following post is a cross-post to Sitepoint’s blog to promote my book Lean Websites.


Web performance matters. Studies have shown that improvements in website performance—such as page load times—can dramatically increase user engagement and profits.

However, life’s short, and time is money. As web developers, we’re paid to get the job done—by clients, bosses and colleagues who may not appreciate the importance of site performance. So the temptation is to cut corners to get the job done—to find the quickest solution, without regard for performance. In these times of rising mobile usage and search engine preference for lean websites, average page weights continue to soar. It’s not a good situation.

Temptation: the pressure to give in to a desire for easy or immediate pleasure

The consequences of giving in to temptation are often not felt until afterwards.

This article describes some of the temptations you’ve probably faced in your web development journey, and why it’s better not to give in to them.

Using Ready-made Scripts

It’s a typical scenario: you need to add something to a web page—such as a slideshow. So you google “web slideshow” and get hundreds of results. There are so many to choose from, all ready to go, and free. Why not just use one as is, save time, and get paid? Doesn’t everyone else?

We often forget to consider the performance of the scripts we choose. Is the code well written? Is it optimized? Do we need all the functionality it contains?

In Chapter 4 of Lean Websites, I examine how to differentiate between “copy and paste” and “copy and waste”.

Pretty Images and Designs

A picture is worth a thousand words; and when it comes to web performance, a picture might be worth more than a thousand lines of code in terms of page weight! Poorly optimized images are by far the biggest cause of bloated websites.

There are some image considerations that can make a huge difference to web page performance.

Not Every Device Needs a High Resolution

There’s no need to show everybody the high-resolution version of an image. Be context sensitive, considerate, and respectful. Don’t fill your page with unnecessarily heavy assets like images just because you don’t know what else to put there. Trust me, no user roaming on a mobile device wants to download a 2MB retina image.

Images Cost Bandwidth

Images remain the biggest performance culprits. They currently take up most of the file sizes and usage on the internet, as shown in the chart below:

Bandwidth usage of various content types

The temptation web developers face, especially when working under a lot of time pressure, is to just plug in big images without considering whether to convert them to more efficient dimensions or a more efficient format.

In my book Lean Websites, I look in depth at ways to optimize your images and other site assets to ensure that your site is as performant as possible—especially on devices connected to mobile networks.

Performance Optimization as a Part of Development

When time is money, there’s always a temptation to cut corners. One way to cut corners is to put things off and never end up doing them. Performance testing and optimizing are critical, but it’s tempting to put them off till later and then forget all about them.

Performance optimization is often not mentioned as part of the common software development life cycle at all. But as Ilya Grigorik says, “performance is a feature“, and shouldn’t be relegated to an afterthought.

Lean Websites discusses how you can automate optimization, and make it part of your deployment process, with some easy-to-use and free tools.

Libraries and Frameworks

Christian Heilmann, a web evangelist at Microsoft, calls it “death by a thousand plugins“. It’s so easy to chase modern web development trends by including yet another plugin or library. We sometimes forget that anything you put on your page will cost you and your visitors when it comes to performance. Don’t let too many plugins bloat your website. Heilmann also encourages us to remember that “it is not about what you can add—it is about what we can’t take away”. Something to remember the next time you want to paste another plugin into your site.

Libraries like jQuery, Dojo, and YUI are popular tools that help developers kick-start JavaScript projects, making access to JavaScript objects and methods faster and easier. They simplify the coding experience—but at what cost?

Most popular libraries (source: Google BigQuery)

The file size of these libraries may vary a lot, especially if they are not minified, gzipped and compressed. jQuery minified and compressed is almost eight times smaller than sending it over the wire without optimization (252 KB uncompressed, 32 KB minified and gzipped).

It’s important to decide what framework or library to use early on in a project. It normally doesn’t make sense to use more than one library or more than one MVC JavaScript framework with a project, as different libraries tend to achieve the same goal. And of course, a library should only be loaded once, though it’s not uncommon to see multiple instances of jQuery on a single page:

Website                 | Count | Different jQuery versions loaded
www.reddit.com          | 2     | 1.7.2, 1.7.1
www.washingtontimes.com | 2     | 1.4.2, 1.4.4
www.tudoseo.com         | 10    | 1.6.2, 1.7.2, 1.7.1, 1.6.4, 1.8.2, 1.4.2, 1.10.1, 1.4.4, 1.9.0, 1.8.3

Duplicate loading of jQuery, source: example Google Big Query on HTTP Archive for the month of July 2014

Why would you want to load more than one version of jQuery? Isn’t this screaming for a good clean-up? There is likely some legacy code in there that requires an older version of jQuery; hence the temptation is pretty big to just add a new version alongside it in order to use newer jQuery functionality. That invites a lot of maintenance and legacy trouble. Instead, take some time, go through the functionality of your site, and determine which single version to converge on.
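As a quick first diagnosis, you can run a snippet like the following in the browser console. It is only a sketch: it reports the version bound to the global jQuery object and greps the requested resources (via the Resource Timing API) for jQuery-looking filenames, a naming pattern that is an assumption about your setup.

// Report the version bound to the global jQuery object.
if (window.jQuery) {
  console.log("Global jQuery version: " + jQuery.fn.jquery);
} else {
  console.log("No global jQuery found.");
}

// List every jQuery-looking file the page requested (Resource Timing API).
// The /jquery[.-]/i pattern is an assumption about your file naming.
performance.getEntriesByType("resource")
  .map(function (entry) { return entry.name; })
  .filter(function (name) { return /jquery[.-]/i.test(name); })
  .forEach(function (name) { console.log("jQuery resource: " + name); });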

While sometimes there might be a good reason to include several JavaScript frameworks, the reasons should always be verified. The overlapping and duplication of plugins can stem from several causes:

  • The team building the site didn’t agree on a common framework or library to use.
  • Tangled code that developers have to work with. Sometimes they are only provided with isolated include files and have very little visibility into the parent code, so they may be tempted to just plug in their preferred library and version to continue their work.
  • Missing enforcement techniques.
  • Carelessness or laziness of developers.
  • Use of other web components including the same frameworks.

Lean Websites looks in detail at the consequences of using libraries and frameworks, and how to make the best use of them without negatively impacting on site performance.

Social Media, Ads and Tracking

If you work for a company with a business intelligence, analytics or marketing department, the chances are high that you are being asked to include anything that could help measure the company’s success.

Social media, ads and tracking scripts are big temptations for marketers and companies to better understand their customers and to add or find other revenue streams, such as selling ads. But any foreign content you add on top of your own content—especially if it’s JavaScript-based—will add weight and load time to your page.
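If a third-party script must be included, loading it without blocking the page is a sensible default. A minimal sketch (the tracker URL is hypothetical):

<!-- async: the script downloads in parallel and never blocks HTML parsing -->
<script async src="https://tracking.example.com/analytics.js"></script>

The async attribute doesn’t remove the script’s weight, but it keeps a slow third party from holding up your own content.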

There’s not one social media or tracking tool out there that marketing wouldn’t like to try out.

Lean Websites looks in detail at how to properly and securely include third-party scripts and plugins.

A handy rule of thumb is that the value you get from using a third party script has to be greater than its performance hit.

Conclusion

Performance optimization is often an exercise in compromise, and there are always competing interests to be considered.

This article has raised just a few of the issues involved in site optimization—a topic that is finally coming of age in a big way.

Lean Websites provides a detailed, in-depth overview of the many factors involved in creating efficient, performant websites—from understanding user experience and expectations to monitoring performance, automating tasks, and optimizing server requests, site assets, and the networks our sites run on.

Hopefully this brief introduction has whetted your appetite to find out more! I’d love to field any questions or comments you may have.

Fast-Forward Performance – The Future Looks Bright

Note: This is a cross-post, the original post can be found as part of the 2014 perf calendar.

 

Generally, I prefer to mention the bad news first:

  • Slow websites will always exist.
  • Websites will continue to become more complex and bigger.
  • Our demand for speed and patience will certainly not decline.

These facts shouldn’t come as a surprise to anyone who cares about web performance.

Predicting the future is difficult, and science has not yet made time travel possible for us to peek ahead at what will happen to web performance. However, have you paid attention to the W3C’s activities recently? There is some really exciting performance stuff cooking.

My contribution to this year’s performance calendar is to tell you what convenient features we can expect in the future when dealing with web performance.

The future is (almost) here

The good news is that the W3C Web Performance Working Group and browser vendors have acknowledged that performance is an important piece of web development. They have already pushed out, and continue to propose new standards and implementations for those performance APIs.

The purpose of a web API is to provide you with better access to your users’ browser and device. Performance APIs can ease the process of accurately measuring, controlling, and enhancing the users’ performance. In addition, new protocols and HTML elements have been proposed to help serve content even faster and more optimized to users. Prior to these enhancements, it was impossible for developers to accurately measure their website performance.

Please note, I added a browser compatibility table at the end of this post so you can verify each API against current browser support.

I’m excited about the future of web performance and this post describes why.

Can I get an API with that?

There are several existing, but also new, performance APIs currently being worked on. To ensure quality and interoperability, W3C standards go through a specification maturity process, from step 1, “Editor’s Draft”, to step 5, “W3C Recommendation”. Most start landing in browsers (“behind a flag”) during the “Working Draft” phase and get refined over time. After step 3 (“Candidate Recommendation”), developers can expect the API to be released un-prefixed in some browsers.

Let’s take a closer look at each API.

Navigation Timing

This specification defines an interface for web applications to access timing information related to navigation and elements. – W3C

The Navigation Timing API helps measure real user data such as bandwidth, latency, or the overall page load time for the main page, and it is mainly used to collect RUM data.

The API allows developers to inquire about the page’s performance via JavaScript through the PerformanceTiming interface.

var page = performance.timing,
    plt = page.loadEventStart - page.navigationStart;

// Page load time (PLT) output for specific browser/user in ms
console.log(plt);

Navigation timing covers metrics of the entire page. To gather metrics about individual resources, please check out the Resource Timing API further below.

You can use this API to collect performance metrics about your user, especially when using RUM as one of your measurement techniques.

Navigation Timing 2 has been announced and will replace the first version.

High Resolution Timing

This specification defines a JavaScript interface that provides the current time in sub-millisecond resolution and such that it is not subject to system clock skew or adjustments. – W3C

var perf = performance.now();
// console output 439985.4570000316

When it comes to performance, accurate measurements are very beneficial. The High Resolution Timing API supports floating point timestamps providing measurements to a microsecond level of detail.
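A typical use is timing a block of code by taking two readings; doWork() below is just a placeholder for whatever you want to measure:

var t0 = performance.now();
doWork(); // placeholder for the code under measurement
var t1 = performance.now();
console.log("doWork took " + (t1 - t0) + " ms");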

Page Visibility

This specification defines a means for site developers to programmatically determine the current visibility state of the page in order to develop power and CPU efficient web applications. – W3C

The visibilitychange event is fired on document whenever the visibility state of the page changes, for example when the user switches tabs.

document.addEventListener('visibilitychange', function(event) {
  if (document.hidden) {
    // Page currently hidden.
  } else {
    // Page currently visible.
  }
});

This event is very helpful for programmatically determining the current visibility state of the page. For example, the API can be applied if your user has several browser tabs open and you don’t want specific content to execute (e.g. playing a video, or rotating images in a carousel). Especially on mobile devices, this can be a great advantage in reducing battery consumption when your page is open in an inactive tab.
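As a sketch of the video example (the element ID is made up):

var video = document.getElementById("promo-video"); // hypothetical video element
document.addEventListener("visibilitychange", function() {
  if (document.hidden) {
    video.pause(); // save CPU and battery while in a background tab
  } else {
    video.play();
  }
});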

Here is a neat sample page illustrating the firing of the visibilitychange event.

Resource Timing

This specification defines an interface for web applications to access the complete timing information for resources in a document. – W3C

The Resource Timing API is a bit newer and not as well supported as the Navigation Timing API. It lets you dig deeper into the behaviour of each individual resource on a page. Imagine putting an image on your page without knowing how it performs in the real world; therefore, you would like to know the Time to First Byte (TTFB) metric for this image.

As an example, let’s pick the performance calendar logo (http://calendar.perfplanet.com/wp-content/themes/wpc/wpclogo.png).

var img = window.performance.getEntriesByName("http://calendar.perfplanet.com/wp-content/themes/wpc/wpclogo.png")[0];
var ttfb = parseInt(img.responseStart - img.startTime),
    total = parseInt(img.responseEnd - img.startTime);
console.log(ttfb);  // output 93 (in ms)
console.log(total); // output 169 (in ms)
// you could log this somewhere in a database or
// send an image beacon to your server
logPerformanceData('main logo', ttfb, total);

If the Timing-Allow-Origin header is set by third-party providers, you can even check the performance of third-party resources on your page.

Beyond the main page’s performance (via Navigation Timing API), you can track real user experiences on a more granular basis (i.e. resource-basis). By having knowledge of this data, you can find potential performance bottlenecks for a specific resource.

Performance Timeline

This specification defines an unified interface to store and retrieve performance metric data. This specification does not cover individual performance metric interfaces. – W3C

The Performance Timeline specification defines a unifying interface to retrieve the performance data collected via Navigation Timing, Resource Timing and User Timing.

// gets all entries in an array
var perf = performance.getEntries();
for (var i = 0; i < perf.length; i++) {
  console.log("Asset Type: " +
    perf[i].name +
    " Duration: " +
    perf[i].duration +
    "\n");
}

Check out the detailed post by Andrea Trasatti on the performance interface. He created a tool to generate HAR files from the performance timeline API, which provides you with a timeline view of performance metrics as they happen. You can plot the results as well. Andy Davies created a great waterfall bookmarklet to illustrate this.

Battery Status

This specification defines an API that provides information about the battery status of the hosting device. — Source

The API provides access to the battery status of your user’s battery-powered device, as well as related events.

The charging, chargingTime, dischargingTime and level properties can be queried, and events fire when these statuses change.

var battery =
  navigator.battery ||
  navigator.webkitBattery ||
  navigator.mozBattery ||
  navigator.msBattery;

if (battery) {
  // Parentheses matter here: without them, + would bind before the ?: operator.
  console.log("Battery charging? " + (battery.charging ? "Yes" : "No"));
  console.log("Battery level: " + battery.level * 100 + " %");
  console.log("Battery charging time: " + battery.chargingTime + " seconds");
  console.log("Battery discharging time: " + battery.dischargingTime + " seconds");
}

More samples and details are posted on the Mozilla Battery Status API page, as well as here.

By knowing the users’ battery state, you could serve content based on the status (e.g. don’t send energy intensive elements to the user if the battery level is below 20%).
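A minimal sketch of that idea, reusing the (prefixed) battery object from the snippet above; the 20% threshold and the CSS hook are assumptions:

// Skip energy-hungry enhancements when the battery is low and not charging.
if (battery && !battery.charging && battery.level < 0.2) {
  // e.g. have your CSS/JS turn off autoplaying video or heavy animations
  document.documentElement.className += " low-battery";
}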

User Timing

User Timing provides a simple JavaScript API to mark and measure application-specific performance metrics with the help of the same high-resolution timers. – W3C

With the User Timing API, you can set markers to measure specific blocks or functions of your application. The calculated elapsed time can be an indicator for good or bad performance.

performance.mark("start");
loadSomething();
performance.mark("end");
performance.measure("measureIt", "start", "end");
var markers = performance.getEntriesByType("mark");
var measurements = performance.getEntriesByName("measureIt");
console.log("Markers: ", markers);
console.log("Measurements: ", measurements);

function loadSomething() {
  // some crazy cool stuff here :)
  console.log(1 + 1);
}

The markers can help you focus on specific activities on your page and measure important milestones when your application/website is being executed.

Beacon

This specification defines an interoperable means for site developers to asynchronously transfer small HTTP data from the User Agent to a web server. – W3C

With the beacon API, you can send analytics or diagnostic code from the user agent to the server. By sending this asynchronously, you won’t block the rendering of the page.

navigator.sendBeacon("http://mywebserver.com",
  "any information you want to send");

Here is a neat demo page.

You can use the recommended beacon to carry performance information to a specific URL for further RUM analysis.

Animation Timing

This document defines an API web page authors can use to write script-based animations where the user agent is in control of limiting the update rate of the animation. The user agent is in a better position to determine the ideal animation rate based on whether the page is currently in a foreground or background tab, what the current load on the CPU is, and so on. Using this API should therefore result in more appropriate utilization of the CPU by the browser. – W3C

Instead of using setTimeout or setInterval to create animations, use requestAnimationFrame. This method grants the browser control over how many frames it renders; aiming to match the screen’s refresh rate (usually 60fps) results in a jank-free experience. It can also throttle animations if the page loses visibility (e.g., the user switches tabs), dramatically decreasing power consumption and CPU usage.
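A minimal sketch of a requestAnimationFrame loop (the element ID and distances are made up):

var box = document.getElementById("box"), // hypothetical element to animate
    start = null;

function step(timestamp) {
  if (start === null) start = timestamp;
  var progress = timestamp - start;
  // Slide the box up to 100px over one second; the browser picks the frame timing.
  box.style.transform = "translateX(" + Math.min(progress / 10, 100) + "px)";
  if (progress < 1000) {
    requestAnimationFrame(step);
  }
}
requestAnimationFrame(step);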

Check out Microsoft’s demo page comparing setTimeout with requestAnimationFrame.

Smoother animations result in happy users, low CPU usage, and low power consumption.

Resource Hints

This specification defines a means for site developers to programmatically give the User Agent hints on the download priority of a resource. This will allow User Agents to more efficiently manage the order in which resources are downloaded. – W3C

Predictive browsing is a great way to serve your users with exactly what they want to see or retrieve next. “Pre-browsing” refers to an attempt to predict the users’ activity with your page (i.e. is there anything we can load prior to the user requesting it?).

The following pre-browsing attributes are meant to help you with pre-loading assets on your page.

<link rel="dns-prefetch" href="//host_to_resolve.com">
<link rel="subresource" href="/javascript/mydata.json">
<link rel="prefetch" href="/images/labrador.jpg">
<link rel="prerender" href="//example.org/page_2.html">

For example, if you set up tracking on your page, you probably know where your users are headed most often. You could use resource hints to pre-load subsequent resources of the next page to allow for quicker loading of that consecutive page.

Other proposals (not supported yet)

  • Frame Timing

    This specification defines an interface to help web developers measure the performance of their applications by giving them access to frame performance data to facilitate smoothness (i.e. Frames per Second and Time to Frame) measurements. – W3C

  • Navigation Error Logging

    This specification defines an interface to store and retrieve error data related to the previous navigations of a document. – W3C

Protocols, standards, and new HTML elements

HTTP/2

HTTP/2 and SPDY protocol (developed by Google) allow several concurrent HTTP requests to run across one TCP session, and provide data compression of HTTP headers.

It’s no secret that SPDY has been a huge motivation for revamping the HTTP protocol. SPDY is not HTTP/2; however, when the HTTP/2 proposals were introduced in 2012, SPDY’s specifications were adopted as a starting point (one single TCP connection, HTTP header compression, Server Push, etc.), see more here.

When HTTP was introduced more than two decades ago, latency wasn’t something that was necessarily thought about. In HTTP/1.1, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client, and must instead wait to receive a request for that resource. HTTP/2 promises to make HTTP requests cheaper, reducing the need for us to come up with techniques (or maybe hacks?), such as CSS image sprites, inlining, etc., to minimize the number of HTTP requests needed.

The HTTP/2.0 encapsulation enables more efficient use of network resources and reduced perception of latency by allowing header field compression and multiple concurrent messages on the same connection. It also introduces unsolicited push of representations from servers to clients. — HTTP/2.0 Draft 4

WebP

WebP is a lossy compression format that promises optimized image delivery for the web. WebP is open source, developed by Google, and only supported in Chrome, Opera, and Android, but it promises 30% smaller file sizes than a comparable JPEG image. Big websites such as Facebook have started to adopt it with great success: 90% of images sent to Facebook and Messenger for Android use the WebP format, and they see file size decreases of up to 80% from PNG to WebP.

You can convert your images using a WebP converter like ImageMagick or others.

One of the drawbacks is that as long as not all browsers support this new format, you will need to save two versions of the image, one in WebP and one in a legacy image format, resulting in more storage costs.

Some examples of WebP images can be found here.

Picture element and srcset attribute

This specification defines the HTML picture element and extends the img and source elements to allow authors to declaratively control or give hints to the user agent about which image resource to use, based on the screen pixel density, viewport size, image format, and other factors. – W3C

The <picture> element and srcset attribute provide two ways of getting responsive images in the browser. The srcset attribute allows you to target particular screen densities with images that have been scaled accordingly. The picture element, on the other hand, is primarily used for “art directed” content, where the contents of the image change based on a CSS breakpoint. The standard <img> tag serves as both a fallback and the actual image container. Together, picture and srcset help deal with different device sizes and accommodate different image sizes on responsive websites.

<picture>
  <source media="(min-width: 1280px)" srcset="large-hero.jpg, large-hero-2.jpg 2x">
  <source media="(min-width: 600px)" srcset="med-hero.jpg, med-hero-2.jpg 2x">
  <source srcset="small-hero.jpg, small-hero-2.jpg 2x">
  <img src="hero-1.jpg" alt="Hero image">
</picture>

Responsive image solutions can help save bandwidth by providing the most optimized image to the users’ screen width and device.
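For the plain density-switching case, srcset on its own is enough; a small sketch with made-up filenames:

<!-- the browser picks hero-2x.jpg on high-density screens, hero-1x.jpg elsewhere -->
<img src="hero-1x.jpg" srcset="hero-1x.jpg 1x, hero-2x.jpg 2x" alt="Hero image">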

Browser support overview

The number in each column describes the browser’s version number and subsequent versions up.

Specification | Internet Explorer | Firefox | Chrome | Safari | Opera | iOS Safari | Android
Navigation Timing | 9 | 31 | all | 8 | 26 | 8 (not 8.1) | 4.1
High Resolution Timing | 10 | 31 | all | 8 | 26 | 8 (not 8.1) | 4.4
Page Visibility | 10 | 31 | all | 7 | 26 | 7.1 | 4.4
Resource Timing | 10 | 34 | all | - | 26 | - | 4.4
Battery Status* | - | 31 (partially) | 38 | - | 26 | - | -
User Timing | 10 | - | all | - | 26 | - | 4.4
Beacon | - | 31 | 39 | - | 26 | - | -
Animation Timing | 10 | 31 | all | 6.1 | 26 | 7.1 | 4.4
Resource Hints | - | - | Canary (limited) | - | - | - | -
Frame Timing | - | - | - | - | - | - | -
Navigation Error Logging | - | - | - | - | - | - | -
WebP* | - | - | all | - | 26 | - | 4.1
Picture element and srcset attribute* | - | - | 38 | - | 26 | - | -

A dash means no support at the time of writing.

*Not part of Web Performance Working Group

More information can be found here and here.

Ready, set, go?

The W3C brings together industry and community to make performance recommendations and specifications a reality so that developers can implement them. Please note that not all browsers implement the APIs exactly according to specification, so make sure to verify functionality against your supported browser list.

With great power comes great responsibility: while all these new techniques and APIs are being offered to developers, we will need to make sure that we understand their power. Browsers and content delivery networks (CDNs) have helped us quite a bit in prioritizing and optimizing web delivery, but giving developers additional tools to boost web performance is much appreciated.

Stay up-to-date

Your best look into the future is to subscribe to the Web Performance Working Group mailing list for any performance related updates.

Disclaimer

This blog post is a compilation of different sections from my upcoming book “Lean Websites”, where I discuss not only web performance APIs, but also provide general guidance on how to create lean websites. Feel free to pre-order your copy before the book officially launches in 2015.

Happy Holidays, Everyone!

“Lean Websites” – The ultimate performance bootcamp

I’m extremely delighted to let you know that I’ve started to write my very first book. The book is called “Lean Websites” and focuses on front-end web performance.

“Lean Websites” will help you understand today’s web performance hurdles and guide you through a fun performance bootcamp with the goal of shaving off unnecessary page weight and increasing the speed of your site.

Check out the link for more details.

Follow-up 3rd party footprint

The following post outlines the links, tools, and articles I mentioned in my 3rd party footprint talk.

Main Slides

Slides (Webdirections, Melbourne), Slides (Velocity NYC 2014), and Slides (Web Rebels, Oslo)

Shared Links and Articles

Tools and Tricks

WebPagetest Results

The Answer to Your Mobile Strategy and Performance Could Simply be…ESI

ESI stands for Edge Side Includes and is an XML-based markup language. ESI support is offered by content delivery network (CDN) vendors like Akamai and F5, and now also by Varnish (though only a very limited subset). If you use any of these vendors, you have ESI at your disposal. ESI can be used for caching purposes; however, in today’s post I will focus on how it can help you with your mobile strategy and performance.

The W3C states:
“ESI allows for dynamic content assembly at the edge of the network, whether it is in a Content Delivery Network, end-user’s browser, or in a “Reverse Proxy” right next to the origin server.” (http://www.w3.org/TR/esi-lang)

So, let’s pay attention to “dynamic content assembly” in the context of this post’s title.

If you are familiar with Server Side Includes (SSI) and XML/XSLT, you will have no problem understanding ESI. It supports access to variables based on HTTP request attributes, e.g. you can easily check HTTP_USER_AGENT or HTTP_HEADER. ESI can also include fragments or snippets of additional content via an include command. To make this even more powerful, ESI supports conditional processing, which means logic can be applied to execute specific content includes based on specific conditions, e.g. user agents. All of this is done at the edge; the user never receives content they are not supposed to receive. Additionally, when processing ESI there is no need to go back to the origin, hence the load on the origin is cut down.
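To illustrate, here is a rough sketch of what such edge-side conditional assembly can look like. The fragment paths are made up, and the exact test syntax varies by vendor (the regex-style matches operator shown here is an Akamai extension), so treat this as pseudocode against the ESI language spec:

<esi:choose>
  <!-- hypothetical check: serve the mobile fragment to iPhone/Android visitors -->
  <esi:when test="$(HTTP_USER_AGENT) matches '''(iPhone|Android)'''">
    <esi:include src="/fragments/mobile-home.html"/>
  </esi:when>
  <esi:otherwise>
    <esi:include src="/fragments/desktop-home.html"/>
  </esi:otherwise>
</esi:choose>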

What is (your) Mobile Strategy?

A mobile strategy or approach could range from “We don’t have one” or “We swear by responsive web design” to “We take mobile very seriously and have dedicated sites”. If you opt for the first statement, please read this and then come back. If you opt for the second or third statement, please continue reading. While there are several options out there for device detection via PHP or any other server-side language, I’d like to share several insights on how the same can be achieved with ESI. Similar to other redirect strategies, ESI can be used to redirect users to a different site based on the visitor’s user agent. The redirect occurs at the edge and is faster than putting the redirect logic at the origin; hence you gain performance improvements. ESI is a powerful and cheap tool, and your way to a proper mobile device strategy could work by following a few simple steps.

To continue, please go to my guest post for Stoyan’s perf calendar, December 3: http://calendar.perfplanet.com/2013/esi-mobile-strategy-performance/

“Grunt” your way to frontend performance optimization

Performance optimization is more than ever in the spotlight for web developers, especially mobile web developers, who have to know by heart the challenges and constraints of mobile devices: these devices run off a battery that drains faster if performance is not taken seriously, and they are powered by smaller CPUs than desktop devices. Unknown factors like latency and network connectivity challenge developers to build slim, light-weight, and fast websites. Data plans remain expensive, and inconsiderate use of served data by web developers should not be ignored.

Clearly, performance is not (and should not be) an afterthought anymore. When web developers create websites, performance can influence the success or failure of a web product. We’ve been hearing from leading performance advocates like Ilya Grigorik that speed is a feature, and it should not only be thought of just before a product hits production but rather be an essential part of the web product development cycle.

For example, instead of minifying and concatenating CSS and JavaScript files manually, tools and processes are out there that can help and put these performance tasks into an automated workflow, and more importantly right from the beginning of a product development cycle.

I’ve been using Maven to run most of the automated performance optimization at work; however, I’ve always been interested in using Grunt for the same purpose. Grunt is a JavaScript-based task runner, created for web projects, that can be leveraged to make performance part of the deployment process.

In today’s blog post I will be sharing some of the plugins for Grunt that can be used to speed up and automate performance optimization. At the end of the blog post, I will present performance results that will show that frontend optimization (FEO) can be fun and easily be automated to cut page load time.

Google’s “Make the Web Fast” team recommends frontend best practices, and Steve Souders’s “High Performance Web Sites” outlines rules that can be applied for FEO. I decided to pick two of Steve’s rules, “Make Fewer HTTP Requests” and “Minify JavaScript” (and CSS, HTML), and use Grunt plugins that help automate those specific rules. So here it goes.

Note: This post assumes that you’ve worked with Grunt before and know how to install it and its plugins (I won’t go into details; the links at the bottom will help you).

“Let’s grunt it up”

(All plugin headings in this post are clickable links to their appropriate pages)

grunt-montage

Montage helps you sprite images to reduce HTTP requests. You will need ImageMagick installed. Alternatively, you can also try out grunt-spritefiles.

grunt-usemin

This plugin is useful when you want to develop and debug a version of your site that doesn’t use the minified and concatenated versions of your JavaScript or CSS files. A comment block is wrapped around the JavaScript and CSS assets that will be concatenated to your destination file on deployment.

<!-- build:js js/magic.min.js -->
<script src="js/1.js"></script>
<script src="js/2.js"></script>
<script src="js/3.js"></script>
<script src="js/4.js"></script>
<!-- endbuild -->

will become

<script src="js/magic.min.js"></script>

You can use grunt-processhtml instead.

grunt-closure-compiler

Alternatively, you could use grunt-closure-compiler for your JavaScript files instead of combining concat and uglify.

grunt-contrib-uglify

Uglify and concat go almost hand in hand and should be used together: the concat plugin first combines all specified JavaScript files, then uglify minifies the combined code into one line. Uglify only works on JavaScript files; use cssmin for CSS files.

grunt-contrib-cssmin

Same logic and idea as uglify: once your CSS files are all concatenated, use cssmin to shrink many lines of CSS code into a single one.

grunt-contrib-concat

Combine CSS and JavaScript files with this plugin; it reduces many individual HTTP requests to just one combined file.
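Here is a minimal Gruntfile sketch wiring concat, uglify, and cssmin together; the source paths and output names are made-up examples, not the setup from my sample page:

// Gruntfile.js: minimal sketch; adjust paths to your project
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      js:  { src: ["js/*.js"],   dest: "build/magic.js" },
      css: { src: ["css/*.css"], dest: "build/magic.css" }
    },
    uglify: {
      js: { src: "build/magic.js", dest: "build/magic.min.js" }
    },
    cssmin: {
      css: { src: "build/magic.css", dest: "build/magic.min.css" }
    }
  });

  grunt.loadNpmTasks("grunt-contrib-concat");
  grunt.loadNpmTasks("grunt-contrib-uglify");
  grunt.loadNpmTasks("grunt-contrib-cssmin");

  // Running "grunt" now executes the whole minification pipeline.
  grunt.registerTask("default", ["concat", "uglify", "cssmin"]);
};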

grunt-contrib-imagemin

imagemin minifies JPG and PNG images. It’s a handy Grunt plugin if you don’t know whether the assets you got handed from your designer (or yourself) are optimized for the web yet. By using this tool, you have the peace of mind that you are using image files in an efficient way. Alternatively, you could use grunt-smushit; it’s based on Yahoo’s great Smush.it tool that is available in the YSlow plugin for several browsers.

grunt-image-embed

This plugin encodes images as base64 and leverages the technique of data URIs for images, something that can be used inline in CSS to reduce HTTP requests and hence page load time. I’ve written a blog post where this is explained in more detail.
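The resulting CSS looks something like this; a tiny 1x1 transparent GIF is shown for brevity, where the plugin would inline your actual image:

.logo {
  /* the image lives inside the stylesheet itself: no extra HTTP request */
  background-image: url("data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");
}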

grunt-htmlcompressor

This plugin uses the htmlcompressor tool to minify and compress HTML files. The options parameter is handy for tweaking your compression; my example uses compressJS to also compress inline JavaScript, and preserveServerScript to preserve server script tags in case I wanted to include some SSI code.

spofcheck

Use grunt-exec to run SPOFcheck, an excellent tool developed by the eBay team to identify bad third-party script includes. I didn’t include the scripts asynchronously, hence SPOFcheck complains and warns about potential SPOF.

“It’s Magic” Sample Page

I created a simple page themed “It’s magic” where I applied all the mentioned plugins. You can find the files, including Gruntfile.js, here. Please note, I intentionally didn’t put a lot of effort into the styling (it is supposed to look as simple and cheesy as it feels to you).

In a nutshell, the page has a logo, uses jQuery from the Google CDN, and includes a simple jQuery gallery with previous and next buttons. Simple JavaScript and CSS files are being used.

“without/magic.html”

The logo is a PNG, and the images are not optimized. Several CSS and JavaScript files are included individually, neither minified nor concatenated.

“with/magic.html”

This file is the one that Grunt will create for you. Visually, the file doesn’t look that different, besides the fact that the title has changed… see for yourself.

Below are screenshots of the two pages (and links) side by side: the one on the left before Grunt tasks were applied, the one on the right after. To the user they both look the same (except for the heading).

Screenshots: without/magic.html (“No magic here!”) and with/magic.html

Can you spot the differences?

  1. The logo was transformed into a data URI
  2. The title has changed from It’s not magic to It’s magic
  3. The local CSS and JavaScript files were minified and concatenated
  4. The HTML was compressed, and comments were removed automatically
  5. The next and previous buttons were converted to a sprite file
  6. On build, SPOFcheck was applied and gave us warnings so we could address possible SPOF issues


Let’s take a look under the hood

Here are the waterfalls for both versions:

Without magic (waterfall)

With magic (waterfall)

WebpageTest Results

I ran WebPagetest for both files, with 9 runs each on IE8 over a DSL connection, to retrieve the median. Here are the performance results:

  1. Without magic results
  2. With magic results
  • HTTP requests dropped by ~48%
  • Page load time (PLT) dropped by ~10%
  • File sizes dropped by ~10%

Even if those numbers are not that high (mostly due to the simplicity of the experiment), they show that Grunt can help you automatically optimize as part of your deployment process.

As you can tell from the sample code, there are many mix-and-match options available, depending on the magnitude and granularity of your page structure. Nevertheless, this little sample shows how to use Grunt to optimize performance and illustrates what is possible. Feel free to use the code as a starting point, and tweak or customize it to your liking.

General references and info to get you started with Grunt

Follow-up on my talk “Embracing Performance in Today’s Multi-Platform Macrocosm”

Hello everyone, this is a follow-up blog post on my talk presented at BDConf in San Diego and WebExpo in Prague.

If you landed here because you’ve typed in the URL after attending my talk, great! Thanks for making it all the way here. I hope you enjoyed my talk.

If you landed here via Google, Twitter or any other sites, I welcome you too, of course! You might want to first have a look at my slides (see link below) before clicking on any of the other links below.

Either way, feel free to leave a comment or contact me via twitter with my handle @bbinto.

Enjoy!

Slides

Slides available on SlideShare

Links and articles, recommended content

Maven Tools

Continuous Integration Tools (<3)

General Links

Image Credits

The Power of a Private HTTP Archive Instance: Finding a Representative Performance Baseline

(Note: cross-posted at programming.oreilly.com)

Be honest, have you ever wanted to play Steve Souders for a day and pull some revealing stats or trends about some web sites of your choice? Or maybe dig around the HTTP archive? You can do that and more by setting up your own HTTP Archive.

httparchive.org is a fantastic tool to track, monitor, and review how the web is built. You can dig into trends around page size, page load time, content delivery network (CDN) usage, distribution of different mimetypes, and many other stats. With the integration of WebPagetest, it’s a great tool for synthetic testing as well.

You can download an HTTP Archive MySQL dump (warning: it’s quite large) and the source code from the download page and dissect a snapshot of the data yourself.  Once you’ve set up the database, you can easily query anything you want.

Setup

You need MySQL, PHP, and your own webserver running. As I mentioned above, HTTP Archive relies on WebPagetest—if you choose to run your own private instance of WebPagetest, you won’t have to request an API key. I decided to ask Patrick Meenan for an API key with limited query access. That was sufficient for me at the time. If I ever wanted to use more than 200 page loads per day, I would probably want to set up a private instance of WebPagetest.

To find more details on how to set up an HTTP Archive instance yourself and any further advice, please check out my blog post.

Benefits

Going back to the scenario I described above: the real motivation is that you often don’t want to throw your website(s) into a pile of other websites (e.g. ones not related to your business) to compare or define trends. Our digital properties at the Canadian Broadcasting Corporation (CBC) span dozens of URLs that have different purposes and audiences. For example, CBC Radio covers most of the Canadian radio landscape, CBC News offers the latest breaking news, CBC Hockey Night in Canada offers great insights on anything related to hockey, and CBC Video is the home for any video available on CBC. It’s valuable for us to not only compare cbc.ca to the top 100K Alexa sites but also to verify stats and data against our own pool of websites.

In this case, we want to use a set of predefined URLs that we can collect HTTP Archive stats for. Hence a private instance can come in handy—we can run tests every day, or every week, or just every month to gather information about the performance of the sites we’ve selected. From there, it’s easy to not only compare trends from httparchive.org to our own instance as a performance baseline, but also have a great amount of data in our local database to run queries against and to do proper performance monitoring and investigation.

Visualizing Data

The beautiful thing about having your own instance is that you can be your own master of data visualization: you can now create more charts in addition to the ones that came out of the box with the default HTTP Archive setup. And if you don’t like Google chart tools, you may even want to check out D3.js or Highcharts instead.

The image below shows all mime types used by CBC web properties that are captured in our HTTP archive database, using D3.js bubble charts for visualization.

Mime types distribution for CBC web properties using D3.js bubble visualization. The data were taken from the requests table of our private HTTP Archive database.

Querying the Database

Sometimes, you want to get some questions answered without creating a chart. That’s when you can query the MySQL tables directly. Let’s run a simple query on the requests table.

For example, some of the CBC sites use YUI, some use jQuery—but we would really like to avoid having pages serve both. A simple sample query like the one below could help identify those sites:

SELECT req_referer
FROM requests
WHERE url LIKE "%/jquery_.js%" OR url LIKE "%/i/l/yui/%"
GROUP BY req_referer
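The query above lists referrers that load either library; to flag pages that serve both, a self-join variant along these lines could work (a sketch, assuming the same requests table schema):

SELECT a.req_referer
FROM requests a
JOIN requests b ON a.req_referer = b.req_referer
WHERE a.url LIKE "%/jquery_.js%"
  AND b.url LIKE "%/i/l/yui/%"
GROUP BY a.req_referer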

And More …

We will share more of the queries and insights gathered from our HTTP Archive instance that helped us identify bottlenecks. In addition, we will discuss how this setup came in very handy for discovering unnecessary page weight that we didn’t think we had.

Join our talk at the Velocity conference in Santa Clara in June, titled “The Canadian Public Broadcaster on A Diet: Slimming Down for A Whole Nation.” The talk will cover not only the private HTTP Archive instance but also many other aspects of how to focus on (mobile) web fitness and how to “slim down.”

Related Posts to our Talk