Javascript Wormholes: Playing with Channel messaging

Whilst building some recent experiments with ServiceWorkers I’ve discovered a whole new API that I never knew existed: Channel messaging. Paper-clipped onto the end of the HTML5 Web Messaging specification, Channel messaging enables:

…independent pieces of code (e.g. running in different browsing contexts) to communicate directly
http://www.w3.org/TR/webmessaging/#channel-messaging

As far as I can see, they’re basically Javascript wormholes between different tabs and windows.

Wormholes in the real world

How do they work?

To create a new wormhole you call the MessageChannel constructor in the normal way:

var wormhole = new MessageChannel();

The wormhole has two portals – wormhole.port1 and wormhole.port2 – and to send objects from one to the other you postMessage the data on the sending port and listen for message events on the receiving port.

One small complexity is that you won’t be able to listen to any of the incoming messages until start has been called on the receiving port.

Note: any data sent before the port has been opened will be lost – and there’s no way to interrogate the MessageChannel to find out whether a port is open or not.

Also note: as postMessage is asynchronous you can actually swap the wormhole.port2.start() and wormhole.port1.postMessage('HELLO'); lines around and it will still work.

var wormhole = new MessageChannel();

// Listen on one end of the wormhole...
wormhole.port2.addEventListener('message', function(event) {
  console.log('port2 received: ' + event.data);
});
wormhole.port2.start(); // ...remembering to open the port first

// ...then post into the other end
wormhole.port1.postMessage('HELLO');

See this for yourself on JSBin

It’s no fun to talk to yourself

Let’s now see if we can use a Shared Worker to wire two browser windows up with each other and see what we are able to send, window to window, tab to tab. The full code is up on GitHub and you can try it out there.

For this we’ll need two files: index.html and agent.js.

/agent.js

var mc;
onconnect = function(e) {
  var port = e.ports[0]; // the port connecting us to the newly opened window
  if (mc) {
    // An odd-numbered window is already waiting: hand this window the
    // other end of its channel, then forget it so the next pair starts fresh
    port.postMessage({port: mc.port2}, [mc.port2]);
    mc = undefined;
  } else {
    // First window of a new pair: create a channel and hand over one end
    mc = new MessageChannel();
    port.postMessage({port: mc.port1}, [mc.port1]);
  }
};

This is the SharedWorker. For every odd browser window that connects to it (i.e. the 1st, 3rd, 5th, etc.) it creates a new MessageChannel and passes one of the ports of that MessageChannel object to that browser window. It also keeps hold of a reference to the most recently created MessageChannel so that it can give the other port to the 'even' connecting browser windows (the 2nd, 4th, 6th, …).

This allows the SharedWorker to hook up the browser windows, after which it can simply get out of the way – allowing the browser windows to talk to each other directly.

/index.html

<!DOCTYPE HTML>
<title>MessageChannel Demo</title>
<pre id="log">Log:</pre>
<script>
  var worker = new SharedWorker('agent.js');
  var log = document.getElementById('log');
  worker.port.onmessage = function(e) {
    window.portal = e.data.port;
    window.portal.start();
    window.portal.addEventListener('message', function(e) {
      log.innerText += '\n'+ (typeof e.data) + ' : ' + e.data;
    });
  }
</script>
<button onclick="window.portal.postMessage('hi');">Send 'hi'</button>
<button onclick="var now = new Date();window.portal.postMessage(now);">Send a date object</button>
<button onclick="var node = document.createElement('div');window.portal.postMessage(node);">Send a dom node</button>

This code connects to the SharedWorker, waits for the SharedWorker to send it one of the ports of the MessageChannel (which the SharedWorker will create) and, when it gets one, starts listening to message events and prints the data it receives onto the web page.

I’ve also added some buttons so that it’s easy to test sending bits of data between the two browser windows. (Remember, you need to have two browser windows open for this to work)

Uncaught DataCloneError: Failed to execute 'postMessage' on 'MessagePort': An object could not be cloned.

Not every kind of javascript object can be sent in this way (which is why DOM nodes fail). According to the specification:

Posts a message to the given window. Messages can be structured objects, e.g. nested objects and arrays, can contain JavaScript values (strings, numbers, Dates, etc), and can contain certain data objects such as File, Blob, FileList, and ArrayBuffer objects.
http://www.w3.org/TR/2012/WD-webmessaging-20120313/#posting-messages
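
You can see that boundary for yourself. A minimal sketch (reusing a wormhole set up as in the earlier examples): a Date survives the structured clone, a DOM node throws synchronously.

var wormhole = new MessageChannel();
wormhole.port2.addEventListener('message', function(event) {
  console.log('received:', event.data);
});
wormhole.port2.start();

// Structured-clonable values travel through happily:
wormhole.port1.postMessage(new Date());
wormhole.port1.postMessage({nested: [1, 2, 3]});

// DOM nodes are not clonable, so this throws straight away:
try {
  wormhole.port1.postMessage(document.createElement('div'));
} catch (e) {
  console.log(e.name); // "DataCloneError"
}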

Fixing Layout thrashing in the real world

This is sort of a rehash of Wilson Page’s ‘Preventing Layout Thrashing’ but I wanted to look at the topic with a practical example from the FT Web app. Much credit goes to him, Jamie Blair and Ada Edwards for their work creating, and then taming, FastDOM.


Layout Thrashing occurs when JavaScript violently writes, then reads, from the DOM, multiple times causing document reflows.
http://wilsonpage.co.uk/preventing-layout-thrashing/

Often talks on the subject of Layout Thrashing will start with a simple code sample that causes layout thrashing – and show how a developer might rework it to prevent it. Here’s one originally from one of Andrew Betts’s presentations:-

var h1 = element1.clientHeight;           // Read (measures the element)
element1.style.height = (h1 * 2) + 'px';  // Write (invalidates current layout)
var h2 = element2.clientHeight;           // Read (measure again, so must trigger layout)
element2.style.height = (h2 * 2) + 'px';  // Write (invalidates current layout)
var h3 = element3.clientHeight;           // Read (measure again, so must trigger layout)
element3.style.height = (h3 * 2) + 'px';  // Write (invalidates current layout)
// etc.

Then there’ll be an example of how you might fix it:-

var h1 = element1.clientHeight;           // Read
var h2 = element2.clientHeight;           // Read
var h3 = element3.clientHeight;           // Read
element1.style.height = (h1 * 2) + 'px';  // Write (invalidates current layout)
element2.style.height = (h2 * 2) + 'px';  // Write (layout already invalidated)
element3.style.height = (h3 * 2) + 'px';  // Write (layout already invalidated)
// etc.

Often, though, presenters (myself included) will stop there and leave the pesky implementation details up to the developer to sort out.

The problem is nobody actually codes like this.

This is a screenshot from the project that I work on, the FT Web app:

[Screenshot: the FT Web app, with three components highlighted]

When we can use CSS (which is immune to layout thrashing) to lay out our pages we do, but sometimes it's not possible to do everything we need in CSS. To give an example of the layout thrashing challenges we have within the web app, I've highlighted three components that each require a little bit of javascript.

Component 1

It’s quite hard to see on the screenshot but we are required to add an ellipsis (…) when the text – which is displayed in two columns – overflows the component (look closely at the bottom right corner). Currently, multi-line ellipsis across multiple columns where the number of lines displayed is dynamic, dependent on the amount of space available, is not possible in CSS. Because of this we created the open source library, FT Ellipsis.

For the ellipsis to work, the library must first measure the size of the container and the number of lines contained within it; then it has to write the ellipsis styling and insert any additional helper elements into the DOM.

Component 2

The amount of space allocated to component 2 is equal to the height of the window minus the header above it and the advert beneath it – this is done in CSS. However, we want to vary the number of lines shown per item: the more space available, the more lines shown – and this is not currently possible in CSS.

To achieve this layout we must first measure the amount of space available and then write the appropriate styles into the DOM, clipping the text at the point where the component can no longer comfortably show any more.
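
As a rough sketch of that measure-then-write step (the element ID, class name and single-item handling here are hypothetical, not the web app's actual code):

// Hypothetical sketch: clip an item to the number of whole lines
// that fit in the space CSS has already given the component
var container = document.getElementById('component-2'); // hypothetical ID
var item = container.querySelector('.item');            // hypothetical class

// Read: measure the available space and the item's line height
// (assumes line-height is set in px, not 'normal')
var available = container.clientHeight;
var lineHeight = parseFloat(getComputedStyle(item).lineHeight);
var wholeLines = Math.floor(available / lineHeight);

// Write: clip the text at a whole number of lines
item.style.maxHeight = (wholeLines * lineHeight) + 'px';
item.style.overflow = 'hidden';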

Component 3

Finally, component 3 is a scrollable column. We would love to do this with pure CSS; however, scrolling support for sub-elements of a page on touch devices is currently quite poor, so we must use a momentum scrolling library instead – we use FT Scroller, but another popular open source scrolling library is iScroll.

In order to set up a scroller we must first measure the amount of space available and then add some new layers and apply some new classes to elements on the page.
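
Setting one up follows the same read-then-write shape. A sketch (the ID is hypothetical, and check FT Scroller's README for its exact options):

// Read: measure how much room the column has been given
var column = document.getElementById('component-3');  // hypothetical ID
var availableHeight = column.parentNode.clientHeight;

// Write: size the column, then let the scrolling library add
// its own layers and classes to the elements inside it
column.style.height = availableHeight + 'px';
var scroller = new FTScroller(column, {scrollingX: false});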

But we've used components! How could we possibly be causing layout thrashing?

Because we want to keep each component completely independent from every other, we store each component’s Javascript in a separate file. Taking the first component as an example, its implementation would look a bit like this:-

[...]

// Javascript to run when the component
// has been inserted into the page
insertedCallback: function() {
  this.ellipsis = new FTEllipsis(this._root);

  // Calculate the space available and figure out
  // where to apply the ellipsis (reads only)
  this.ellipsis.calc();

  // Actually apply the ellipsis styling/actually
  // insert ellipsis helper `div`s into the page.
  // (writes only)
  this.ellipsis.set();
}

[...]

At first glance this seems sensible and, on their own, each component will be as performant as it can be.

Except when we bring the three components together

Because each insertedCallback first does a bit of reading followed by a bit of writing, as the browser iterates through and runs them we inadvertently cause ourselves Layout Thrashing – even though there is no single place in the code where we appear to have interleaved DOM reads and DOM writes.

So we created FastDOM.

FastDOM provides a common interface for batching DOM read/write work and internally it uses requestAnimationFrame.
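
Conceptually the trick is two queues, flushed in order inside a single requestAnimationFrame callback. Here is a simplified sketch of the idea (not FastDOM's actual source):

// Simplified sketch of read/write batching – not FastDOM's real code
var reads = [], writes = [], scheduled = false;

function schedule() {
  if (scheduled) return;
  scheduled = true;
  requestAnimationFrame(function() {
    scheduled = false;
    // All reads run first, so layout is computed at most once...
    while (reads.length) reads.shift()();
    // ...then all writes, each of which merely invalidates layout
    while (writes.length) writes.shift()();
  });
}

function read(fn)  { reads.push(fn);  schedule(); }
function write(fn) { writes.push(fn); schedule(); }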

Here’s the code sample above rewritten with FastDOM:

[...]

// Javascript to run when the component
// has been inserted into the page
insertedCallback: function() {
  this.ellipsis = new FTEllipsis(this._root);

  FastDOM.read(function() {

    // Calculate the space available and figure out
    // where to apply the ellipsis (reads only)
    this.ellipsis.calc();

    FastDOM.write(function() {

      // Actually apply the ellipsis styling/actually
      // insert ellipsis helper `div`s into the page.
      // (writes only)
      this.ellipsis.set();
    }, this);
  }, this);
}

[...]

So now when the insertedCallbacks are run for each of the components, we don't touch the DOM at all. Instead, we tell FastDOM that we want to do a bit of reading and then a bit of writing – and allow FastDOM to sensibly order those operations. This eliminated Layout Thrashing.

Except that we had caused ourselves a thousand other problems instead

As the FT Web app is a single page app we are constantly loading and unloading pages, bringing new content and layouts into view – only to destroy them shortly after. In some circumstances the lifetime of any one of those pages can be very short. Sometimes even shorter than the lifetime of a requestAnimationFrame timeout.

And when that happened there was nothing to unschedule the work we had deferred: even though the DOM element no longer existed, those FastDOM callbacks would still try to do the work that had been assigned to them. The Chrome Dev Tools console was full of errors.

We could have simply added a check at the beginning of every FastDOM callback to see if the element still existed but that would have to have been added in hundreds of places – and would probably be forgotten often. We needed to find a proper solution.

FastDOMs for everybody

In order for FastDOM to be effective there can only be one of them active on a page – it needs to be in overall control of all the DOM reads and writes in order to schedule them all at appropriate times. However, the downside of having a single queue for reads and writes is that it is very difficult to unschedule all the work scheduled by a single component.

What we needed was a way for each component to maintain its own queue of FastDOM work – whilst still leaving scheduling and processing of work to the single app-wide FastDOM.

So we created Instantiable FastDOM.

Instantiable FastDOM

The name is actually a little confusing because Instantiable FastDOMs aren’t really FastDOMs at all – they’re queues of work that has been scheduled in FastDOM (we’re thinking about changing this).

They are intended to be used by components so that components can easily clear any outstanding FastDOM work when they are destroyed.
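
Conceptually it is just a thin wrapper that remembers the jobs it has scheduled, so that clear can discard them all in one go. A sketch, assuming a FastDOM-like API in which read/write return a job id and clear(id) cancels one (as early versions of FastDOM provided):

// Sketch: a per-component queue of FastDOM work
function InstantiableFastDOM() {
  this.jobs = [];
}

InstantiableFastDOM.prototype.read = function(fn, ctx) {
  this.jobs.push(FastDOM.read(fn, ctx));
};

InstantiableFastDOM.prototype.write = function(fn, ctx) {
  this.jobs.push(FastDOM.write(fn, ctx));
};

InstantiableFastDOM.prototype.clear = function() {
  // Cancel anything this component scheduled that hasn't yet run
  this.jobs.forEach(function(id) { FastDOM.clear(id); });
  this.jobs = [];
};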

So here is the code sample above rewritten with Instantiable FastDOM:

[...]

// Javascript to run when the component
// has been inserted into the page
insertedCallback: function() {
  this.ifd = new InstantiableFastDOM();
  this.ellipsis = new FTEllipsis(this._root);

  this.ifd.read(function() {

    // Calculate the space available and figure out
    // where to apply the ellipsis (reads only)
    this.ellipsis.calc();

    this.ifd.write(function() {

      // Actually apply the ellipsis styling/actually
      // insert ellipsis helper `div`s into the page.
      // (writes only)
      this.ellipsis.set();
    }, this);
  }, this);
},
removedCallback: function() {

  // Clear any pending work
  this.ifd.clear();
}
[...]

We have a winner

At long last we had a solution that:

  • eliminated layout thrashing;
  • didn’t cause hard javascript errors for its edge cases;
  • and didn’t add too much additional complexity to the implementations of each of our components.

What about 3rd parties?

No matter how performant and well written your application is, all that hard work can be completely undone by the potentially-not-as-well-informed developers of the widget you've been forced to embed in your application.

As usual there's no magic answer – except to hope that authors of Javascript libraries which interact with the DOM will split their instantiation logic up so that DOM reads and DOM writes can be run separately, and users of those libraries can (if they want to) schedule those pieces of work in a sensible order.

Our ellipsis library, FTEllipsis, is our first example of an open source library that provides this flexibility by separating its instantiation logic into: calc (which only does DOM reads) and set (which only does DOM writes).

Layout Thrashing isn’t going away

As web components get ever closer and websites start to be built that adhere to their principles, if we don’t start using tools like FastDOM, those components are going to merrily read and write to the DOM without pausing to consider what other components might be doing – and Layout Thrashing is going to become harder and harder to avoid.

A terrible solution to Service Workers on http

I’ve been following the Service Worker http/https discussion quite closely lately. After years of waiting Service Worker is starting to feel really close. We even have our ‘first’ real demo outside of browserland.

Anyway here’s my terrible idea about serving Service Workers through http:-

Warning: This is a terrible idea and I can’t believe I’m actually going to suggest it. I have already wasted too many hours of my life creating and managing developer keys for native blackberry apps. Here goes.

So native apps solve this problem of verifying the authenticity of code by requiring developers to sign their apps with keys.

(Brushing all the implementation technicalities aside) could it be technically possible to verify the authenticity of a ServiceWorker if it could be signed with a developer key – in the same way Android/Blackberry, etc. native apps are?

Maybe this would be a lose-lose situation – it doesn't exactly make Service Worker more usable and it also doesn't improve the overall security of the web in the way https does.

Also you’d probably want some kind of 3rd party to tie that key to the domain. Hmm. Sounds a lot like what https already does.

What does this actually solve?

Well, it means you don't have to install a certificate for your domain, which means – if you're using GitHub Pages or Heroku – you can have a custom domain without paying $20/m (that's for Heroku; it's not possible at all for GitHub Pages over https :( ). You might be using Cloudflare (who generously offer free http CDN services). If that's the case you'll save paying them some money too.

But it really solves it in the wrong place. https is the right choice and most of this just reimplements that, but badly. Basically, for developers to embrace this I think Heroku needs to lower the price it charges for running a small site behind https and/or GitHub need to start allowing people to upload their own certificates to GitHub pages.

Thank you for coming on this journey with me.

Beware of embedding tweets in full screen single page apps

Using components built by other people is fundamental to the success of any piece of technology. The more high quality physical and virtual components you can pull together, the less you need to build from scratch and the faster you can build things. We’ve been sharing and reusing code since the beginning of the web – and almost every web company that I can think of offers some way to embed their content on your site.

That's all fine until you find that the component does something you don't expect it to. For example, if the creator of the component made an assumption that is not true for your application, instead of saving you time it can cause problems for your application or the component itself. This happened to us when we tried to embed Tweets in the FT Web App.

This is an embedded Tweet:
(Please excuse the shameless self promotion)

One of the features of the javascript Twitter uses for embedded Tweets on external websites is that if you click reply or retweet, instead of taking you away from the website to Twitter, it helpfully opens a new, smaller window from which you can post Tweets, like this:

The problem is the way this is implemented: it doesn't just affect the behaviour of links within the <blockquote class="twitter-tweet"> elements, it listens to clicks on all links anywhere on your web page – and if the link is to a URL containing twitter.com/intent it will open a small new window.

To see this behaviour click here.

Interestingly it'll also match links to other domains, as long as they contain the pattern twitter.com/intent/. E.g. http://mattandre.ws/twitter.com/intent/tweet. Play around with this on JSBin.

After a bit of digging, hidden in the minified code Twitter encourage you to use are these few lines that are responsible for this behaviour:-

function m(e) {
  var t, r, i, s;
  e = e || window.event, t = e.target || e.srcElement;
  if (e.altKey || e.metaKey || e.shiftKey) return;
  while (t) {
    if (~n.indexOf(["A", "AREA"], t.nodeName))
      break;
    t = t.parentNode
  }
  t && t.href && (r = t.href.match(o), r && (s = v(t.href), s = s.replace(/^http[:]/, "https:"), s = s.replace(/^\/\//, "https://"), g(s, t), e.returnValue = !1, e.preventDefault && e.preventDefault()))
}

[...]

var o = /twitter\.com(\:\d{2,4})?\/intent\/(\w+)/, u = "scrollbars=yes,resizable=yes,toolbar=no,location=yes", a = 550, f = 520, l = screen.height, c = screen.width, h;
b.prototype = new t, n.aug(b.prototype, {render: function(e) {
  return h = this, window.__twitterIntentHandler || (document.addEventListener ? document.addEventListener("click", m, !1) : document.attachEvent && document.attachEvent("onclick", m), window.__twitterIntentHandler = !0), s.fulfill(document.body)
}}), b.open = g, e(b)
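
Deminified and simplified, the handler is doing roughly this (a sketch of the logic above, not Twitter's verbatim source):

var intentPattern = /twitter\.com(\:\d{2,4})?\/intent\/(\w+)/;

document.addEventListener('click', function(e) {
  if (e.altKey || e.metaKey || e.shiftKey) return;

  // Walk up from the click target to the nearest link element
  var el = e.target;
  while (el && el.nodeName !== 'A' && el.nodeName !== 'AREA') {
    el = el.parentNode;
  }

  // Any link whose href merely *contains* the intent pattern matches –
  // no matter where on the page it lives
  // (the real code also upgrades the URL to https before opening it)
  if (el && el.href && intentPattern.test(el.href)) {
    window.open(el.href, 'intent',
      'scrollbars=yes,resizable=yes,toolbar=no,location=yes,width=550,height=520');
    e.preventDefault();
  }
}, false);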

For most ordinary websites this behaviour wouldn’t be surprising – and probably even desired.

But our site ain’t no ordinary website. It’s one of those modern new fangled offline-first single page apps called the FT Web app.

Most of our users use our application full screen after it has been added to their (typically iOS) home screen. The problem is that we need to be in complete control (within javascript) of what happens when the user clicks any link, because the default behaviour is fairly ugly: the application suddenly closes and the link opens in Safari. To make that experience a little less awful, for external links we first show the user a popup warning them that they're about to leave the app.
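
That interception looks something like this sketch (the confirmation UI itself is elided and the helper name is hypothetical):

// Hypothetical sketch: intercept every link click so that external
// links show a confirmation rather than silently closing the app
document.addEventListener('click', function(e) {
  var el = e.target;
  while (el && el.nodeName !== 'A') el = el.parentNode;
  if (!el || !el.href) return;

  if (el.host !== location.host) {
    e.preventDefault();
    showLeavingAppWarning(el.href); // hypothetical popup helper
  }
}, false);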

I’d be the first to admit that this isn’t exactly the pinnacle of user experience – it reminds me of the Microsoft Office paperclip helpfully double checking that you’re absolutely “sure you wanna exit?” but it’s the best we can do for now.

When we tried to start using Twitter's embedded Tweet functionality we found that the code we'd carefully crafted to stop web links from inadvertently closing our full screen web app was being completely bypassed. In the end we decided not to use Twitter's javascript library.

It's a little bit unfair that I've singled out Twitter, especially as they do provide the raw CSS to style Tweets without the Javascript that does all the weird stuff. In fact we've ended up shunning lots of different libraries for similar reasons (e.g. jQuery and numerous advertising libraries) and every now and again one of our advertisers creates an advert that breaks critical features of our web application, which never fails to create a little excitement in the office. For being so averse to externally written code, we've gained something of a reputation internally.

The fundamental problem is that unless you use an iframe to embed content (like YouTube does) – which causes numerous other problems for our web app, so we don't support that either :( – the web is not encapsulated. If you add a 3rd party library to your web page, that library can do what it wants to your page and, short of removing it, there isn't always much you can do about it if it does something you don't agree with.

Guidance

If you’re building websites in non-standard ways (full screen ‘web apps’; packaged/hybrid apps; single page apps and/or offline first apps) don’t automatically assume that because you’re using ‘web technologies’ you will be able to use every existing library that was built for the web. All libraries – even modern, well written ones like the one Twitter use for embedding Tweets – are built with certain assumptions that may not be true for your product.

In the future, Web components (via Shadow DOM) will finally bring the encapsulation that the web needs, which will help us address some of these problems.

Hopefully iOS will also make the way it handles links in full screen web apps a little better too.

ServiceWorker and https

I recently attended State of the Browser in London, where one of the talks included a very convincing argument from Jake Archibald justifying the requirement of https for ServiceWorkers (the new HTML5 API that will replace web underdog AppCache).

So the logic goes:-

HTTP is wide-open to man-in-the-middle attacks. The router(s) & ISP you’re connected to could have freely modified the content of this page. It could alter the views I’m expressing here, or add login boxes that attempt to trick you. Even this paragraph could have been added by a third party to throw you off the scent. But it wasn’t. OR WAS IT? You don’t know. Even when you’re somewhere ‘safe’, caching tricks can make the ‘hacks’ live longer. We should be terrified that the majority of trusted-content websites (such as news sites) are served over HTTP.

To avoid giving attackers more power, service workers are HTTPS-only. This may be relaxed in future, but will come with heavy restrictions to prevent a MITM taking over HTTP sites.

http://jakearchibald.com/2014/service-worker-first-draft/

This makes sense. If I’m on a dodgy network I might not be too surprised if I saw something strange happening.

But I would be very surprised, and quite alarmed, if upon returning to the safety of my own home wifi, a trusted 3G connection or a VPN, that strangeness didn't go away. ServiceWorker, if it were enabled on http, would be wide open to this sort of attack.

This is where I start to disagree

The problem I have with this logic is that this sort of attack is already possible with AppCache. All you would need to do is serve an HTML page at any URL on a domain with a manifest attribute like this:

<html manifest="my-evil-appcache.manifest">

And you would have hijacked that URL forever (or until they clear their browser cache). (Because the next time the user loads that page, the AppCache will try to do an update, attempt to download my-evil-appcache.manifest – and it’ll get a 404 – preventing it from updating… locking it in that state forever…)

For extra fun, if you make sure to add a line with a forward slash in it, you'd also get control of that domain's home page:

CACHE:
/
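
Put together, the whole attack manifest is tiny. For completeness (an AppCache manifest must begin with the literal line CACHE MANIFEST):

CACHE MANIFEST
CACHE:
/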

So we should definitely require https on ServiceWorker then?

There's an argument that because ServiceWorker lets you take control of more URLs on a domain (with AppCache you have to specify them individually), ServiceWorker is higher risk:-

But I feel that actually it only takes one URL to be compromised (say the basket page of a popular shopping website that wasn't served over https – from which it would be safe to assume most users go straight to a payment gateway) for evil AppCache-savvy geniuses to cause some real harm.

All or nothing

I really don't see how ServiceWorker is any more dangerous than AppCache – and we've had AppCache since Chrome 4 (2010), and even earlier in Firefox 3.5 (2009).

That said, it does feel a bit risky, and requiring https would be a good thing for the web in general – but I think it should be consistent across both technologies: browsers should require https for both AppCache and ServiceWorker, or just leave them both open. I just don't think requiring https for one and not the other achieves very much…

(Alternatively give us an end-of-life date for AppCache ;-) )

HTML5 Offline Workshop this Autumn in Freiburg

I'm going to be teaching a workshop on offline technologies in Freiburg at Smashing Conference this Autumn, covering:-

  • A brief history of the offline web
  • Patterns for offline web applications
  • Cookies and Local Storage
  • IndexedDB and WebSQL
  • AppCache and ServiceWorker
  • Offline data sync strategies
  • Open-source libraries that can help us
  • Fallback techniques for older browsers, search-engine crawlers and users that do not need an offline experience

Come along, if you like :-)

Automate Sass Testing (with Travis CI)

View my demo project on GitHub

Having a large amount of CSS is unavoidable in a modern web application. Using preprocessors such as Sass helps us manage that CSS, but as we write more @mixins and @functions and adopt techniques such as object orientated css, the complexity grows. At FT Labs we even use (or perhaps abuse) npm as a package manager for Sass-only repositories for various projects, including the FT Web app, so that those styles can be shared across projects.

With this ever increasing complexity, the differences between writing CSS and any other programming language are eroding.

All this complexity adds risk

In other programming languages we mitigate this kind of risk with automated testing. It’s time to start testing our Sass.

Testing Sass with Travis CI

Sass isn’t a language that Travis CI currently has first class support for but we can get it working with just a small number of hacks to the .travis.yml file.

Apart from some Sass (which I’m assuming you have already) you will need a .travis.yml file that looks something like this:

language: sass
before_install:
 - gem install sass

script:
 - "test/travis.rb"

View source file

Here I’m telling Travis to first install Sass then execute the file located at test/travis.rb.

If you use a task runner such as make, Rake, Grunt or another you’ll probably want to use it rather than a script like this but I wanted to keep things as simple and technology agnostic as possible.

Interestingly, the language: option is actually optional and it even allows invalid values – helpfully, it will default to Ruby (the language Sass is written in). Optimistically I've set it to sass, but it may be more robust to set it to ruby.

The test

The next step will be to tell Travis exactly what to build.

Here are the contents of my test/travis.rb script:

#!/usr/bin/env ruby
result = `sass main.scss built.css`
raise result unless $?.to_i == 0
raise "When compiled the module should output some CSS" unless File.exists?('built.css')
puts "Regular compile worked successfully"

View source file

I'm using backticks rather than any of the other ways to run shell commands in ruby so I can easily check the status code and output any errors thrown by Sass (which come through via stdout). I then check that the built file exists – and raise an error if it does not.

An error thrown at either step will stop the script executing and cause the build to fail.

Protection against bad PR

From now on any Pull Request that causes our Sass not to compile will come with a bright yellow ‘The Travis CI build failed’ warning.

What can we actually test?

Compiling is a good first step but it offers little more than a linter and will only catch the most basic of regressions – there is plenty more that could be tested, from asserting that the compiled output matches expected CSS to unit testing individual @functions and @mixins.

Removing noise from Timeline reports in Chrome DevTools

Timeline in Chrome’s Dev Tools is really cool. It can help you get all sorts of data from dozens of metrics on the performance health of your web application.

The problem is that, to me, making Chrome Timeline recordings is a bit like the Manual Burn scene in Apollo 13. The minute you hit the red button and move your cursor back to the website, all hell breaks loose.

It becomes a race against time to kill the recording before it has filled up with so much information, triggered by so many different events, that you lose all hope of hunting down those janks, layout thrashes and performance burps.

No matter how hard I try, my timeline recordings never look consistent, with a (relatively) clear root cause like this:

[Video: the Pauls debugging websites – Chrome Office Hours: Performance]

Unlike other profiling tools in Chrome that can be controlled directly from Javascript, I always find my Timeline reports have really poor signal-to-noise ratios. They tend to be a chaotic mixture of colours, and where there is good information it can feel like I'm just being told that I'm doing everything wrong:

Bad paint, bad javascript, bad layout

Note how in the first few frames the frame rate budget is being burst by scripting, rendering and painting.

Where do you even start?

I want a timeline that looks like this:

Much neater

The only events in the timeline are those I have triggered (which you can see beneath, because I triggered them in Javascript). There is nothing after, and nothing before – which means there is no need to waste time digging through it looking for the event you're interested in.

Tip: don’t use your mouse when using Timeline profiling. Give elements IDs and trigger the events on them that you want to profile via the console.
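
For example, instead of mousing over to a button, give it an ID and poke it from the console once recording has started (the IDs here are hypothetical):

// Run in the DevTools console while Timeline is recording –
// no mousemove noise, and the same code path on every run
document.getElementById('next-button').click(); // hypothetical ID

// For events without a convenient trigger method, dispatch one directly:
var el = document.getElementById('card');       // hypothetical ID
el.dispatchEvent(new Event('touchstart'));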

Doing this gives you a timeline that contains only the data relevant to the action you are profiling, leading to easy-to-investigate, reproducible test cases:

Diagnosable

This is an improvement but it doesn't work in all cases – sometimes you might want to profile, say, a hover state. I'd really like more granular control over what can start a Timeline recording. Something like Event Listener Breakpoints (in the Sources panel of Dev Tools) to choose the sorts of events that kick off Timeline recording (to be honest, even just a way to stop mouse moves from doing it would be a good start)…

Bonus tips – Keyboard Shortcuts

Cmd/Ctrl + E starts and stops Timeline (and other profiler) recordings, but the Dev Tools window must be focused – so if you need to use the mouse during profiling (and want to avoid collecting too many extraneous mouse move events) you are probably going to want to switch back and forth with the keyboard.

Unfortunately there’s no keyboard shortcut that I know of to directly switch between Dev Tools on the Timeline and the browser*. If you have Dev Tools undocked you can only use the native OS’s window switching shortcuts (Alt + Tab on Windows, Cmd + ~ on Mac).

* In Chrome 30 if you switch on "Enable Cmd + 1-9 shortcut to switch panels" – the last option in the General tab of Dev Tools settings (click the cog) you can open the timeline with the easy to remember Cmd + Shift + J [release] Cmd + 5.

If you have Dev Tools undocked you can use Cmd + Alt + J to show and hide Dev Tools (and switch what is the focused at the same time), but when Dev Tools closes any Timeline recording in progress will be killed :(.

Pet Peeves

Another really helpful feature of Timeline is that when you hover over Layout events, it will show you the affected region by adding a semi-transparent blue rectangle over the affected area of your web page.

If you accidentally hover over anything in the timeline report whilst it's still recording, it'll still add blue highlighting to the elements affected – and that blue highlighting itself pollutes Timeline's output:

Blue highlighting

All the Composite layers events (the green ones) recorded after the ~8 second mark were caused by interacting with Timeline output.

When I'm profiling I'd rather Chrome didn't do any highlighting at all, but I feel it really, really shouldn't fill up the report produced by Timeline with irrelevant noise that I then have to filter out…

Update: you can now start and stop timeline profiles from javascript
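
Which means the tip above can go one step further: wrap the exact operation you care about, something like this (using the console.timeline API):

// Record only this operation in Timeline – run from page code or the console
console.timeline('ellipsis');                    // start a named recording
document.getElementById('next-button').click();  // hypothetical trigger
console.timelineEnd('ellipsis');                 // stop it again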

Full Frontal Conference 2013 Notes

09:50 – 10:30: ES6 Uncensored (slides) Angus Croll – @angustweets
A whistlestop tour of ES6 (the upcoming version of javascript due next year). The highlight for me was finally an explanation of yield that I actually understood. Is it me or does ES6 look a lot like Scala?


10:30 – 11:10: Javascript in the real world (slides) Andrew Nesbitt – @teabass
Mind still buzzing with yielding generators, we segued into Andrew Nesbitt's delightful Terminator-reference and rabbit-photo packed presentation on the cutting edge of Javascript-powered robots.


11:40 – 12:20: Mobile isn’t a thing, it is everything (slides) Joe McCann – @joemccann
Lots of charts of absurd growth, and examples of mobile changing everything in ways ranging from the most frivolous to the most life-saving and inspirational.


12:20 – 13:00: Pushing the limits of mobile performance (slides) Andrew Grieve – @GrieveAndrew
A peek into the history of the Gmail web app for the original iPhone. Amazing to see how much faster smartphones have become since 2007. My main takeaway: Javascript performance isn't the issue it used to be on mobile devices – rendering is (probably) a much bigger concern.


break for fish ‘n’ chips


14:30 – 15:10: Our web development workflow is completely broken (slides) Kenneth Auchenberg – @auchenberg
I imagine, like many, I had unquestioningly accepted the fact that when I use Chrome I must use Chrome dev tools; for Safari and iOS I must use Safari's, and so on.

We’ve all been doing it wrong.


15:10 – 15:50: Stunning visuals with maths and… No javascript? (slides) Ana Tudor – @thebabydino
Really, really cool demonstrations of what can be done with just CSS (and maths).


16:20 – 17:00: Building with web components using x-tags (slides) Angelina Fabbro – @angelinamagnum
The cutting edge of Web Components from the Mozilla camp.


17:00 – 17:40: Time (slides) Jeremy Keith – @adactio
And Full Frontal 2013 ended on Time – an exploration of the permanence of digital information, the longevity of formats and a brief history of time.


What an excellent day at the seaside.