How and why we built Performance Insights



In Chrome 102 you’ll notice a new experimental panel, Performance Insights, in your DevTools. In this post we’ll discuss not only why we’ve been working on a new panel, but also the technical challenges that faced us and the decisions we’ve made along the way.


Why build another panel?

(If you have not seen it already, we posted a video on why we built the Performance Insights panel and how you can use it to get actionable insights on your website's performance.)

The existing Performance panel is a great resource if you want to see all the data for your website in one place, but we felt it could be a little overwhelming. If you're not a performance expert, it's hard to know exactly what to look for and which parts of the recording are relevant.

Enter the Insights panel, where you can still view a timeline of your trace and inspect the data, but also get a handy list of what DevTools considers the main "Insights" worth digging into. Insights identify issues such as render-blocking requests, layout shifts, and long tasks, to name a few, all of which can negatively impact your website's page load performance and, specifically, your site's Core Web Vitals (CWV) scores. Alongside flagging these issues, Performance Insights provides you with actionable suggestions to improve your CWV scores, along with links to further resources and documentation.

Feedback link in the panel

This panel is experimental and we want your feedback! Please let us know if you encounter any bugs, or have feature requests that you think will help you when working on your site’s performance.

How we built Performance Insights

Like the rest of DevTools, we built Performance Insights in TypeScript and used web components, backed by lit-html, to build the user interface. Where Performance Insights differs is that the primary UI is an HTML canvas element, and the timeline is drawn onto this canvas. A lot of the complexity comes from managing this canvas: not only drawing the right details in the right place, but also managing mouse events (for example: where did the user click on the canvas? Did they click on an event we've drawn?) and ensuring that we re-render the canvas efficiently.
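To give a flavor of that bookkeeping, here is a minimal hit-testing sketch. This is not the panel's actual code: the drawnEvents list, the TraceEvent shape, and selectEvent are all hypothetical names for illustration.

// Hypothetical record of every event rectangle drawn onto the canvas.
interface TraceEvent { name: string; }
interface DrawnEventRect {
  x: number; y: number; width: number; height: number;
  event: TraceEvent;
}

declare const canvas: HTMLCanvasElement;
declare const drawnEvents: DrawnEventRect[];
declare function selectEvent(event: TraceEvent): void;

canvas.addEventListener('click', (mouseEvent) => {
  // Map viewport coordinates to canvas-relative coordinates.
  const bounds = canvas.getBoundingClientRect();
  const x = mouseEvent.clientX - bounds.left;
  const y = mouseEvent.clientY - bounds.top;
  // Did the user click on an event we drew?
  const hit = drawnEvents.find((rect) =>
      x >= rect.x && x <= rect.x + rect.width &&
      y >= rect.y && y <= rect.y + rect.height);
  if (hit) {
    selectEvent(hit.event);
  }
});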

Important

Note that because this panel is an experimental feature, the code for the panel is not open source, but it may become open source in the future.

Multiple tracks on a single canvas

For a given website, there are multiple "tracks" that we want to render, each representing a different category of data. The Insights panel shows three tracks by default, and as we continue to land features on the panel, we expect more tracks to be added.

Our initial thought was for each of these tracks to render its own <canvas>, so that the main view would become multiple canvas elements stacked vertically. This would simplify rendering at the track level, because each track could render in isolation with no danger of drawing outside its bounds, but unfortunately this approach has two major issues:

  • canvas elements are expensive to (re-)render; multiple canvases are more expensive than a single canvas, even if that one canvas is larger.
  • Rendering overlays that span multiple tracks (for example, vertical lines marking events such as FCP time) becomes complex: we would have to render onto multiple canvases and ensure they all render together and align properly.

Using one canvas for the entire UI meant we needed to ensure that each track renders at the right coordinates and doesn't overflow into another track. For example, if a particular track is 100px high, we can't allow it to draw something 120px high that bleeds into the track below. To solve this, we use clip. Before rendering each track, we define a rectangular path representing the track's visible window and call clip; any paths the track then draws outside those bounds are clipped.

// Restrict all subsequent drawing to the track's visible window.
canvasContext.beginPath();
canvasContext.rect(
    trackVisibleWindow.x, trackVisibleWindow.y,
    trackVisibleWindow.width, trackVisibleWindow.height);
canvasContext.clip();

We also didn't want each track to have to know its vertical position: each track should render itself as if it were drawing at (0, 0), and a higher-level component (which we call TrackManager) manages the overall track positions. This can be done with translate, which shifts the canvas origin by a given (x, y) offset. For example:

canvasContext.translate(0, 10); // Translate by 10px in the y direction.
canvasContext.rect(0, 0, 10, 10); // Draw a rectangle at (0, 0) that's 10px high and wide.

Despite the rect call specifying (0, 0) as the position, the applied translation causes the rectangle to be rendered at (0, 10). This lets us write each track as if it renders at (0, 0), while the track manager translates the canvas as it renders each track, ensuring each one lands correctly below the previous.
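Combining translate with the clipping shown earlier, a track manager's render loop can look roughly like the sketch below. This is not the real TrackManager: the Track interface and its members are assumptions for illustration.

interface Track {
  height: number;
  // Draws the track as if its top-left corner were at (0, 0).
  draw(context: CanvasRenderingContext2D, width: number): void;
}

function renderTracks(
    context: CanvasRenderingContext2D, tracks: Track[], width: number): void {
  let yOffset = 0;
  for (const track of tracks) {
    context.save(); // Snapshot the current transform and clip.
    context.translate(0, yOffset); // Move the origin to this track's top edge.
    context.beginPath();
    context.rect(0, 0, width, track.height);
    context.clip(); // Discard anything drawn outside the track's bounds.
    track.draw(context, width);
    context.restore(); // Reset the transform and clip for the next track.
    yOffset += track.height;
  }
}

The save and restore calls matter here: translate and clip both accumulate on the context, so each track's offset and bounds must be undone before the next track renders.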

Off-screen canvases for tracks and highlights

Canvas rendering is relatively expensive, and we want the Insights panel to stay smooth and responsive as you work with it. Sometimes you can't avoid re-rendering the entire canvas: for example, if the zoom level changes, we have to start again and re-render everything. Canvas re-rendering is particularly expensive because you can't re-render just a small part of it; you have to wipe the entire canvas and redraw. This is unlike DOM re-rendering, where tools can calculate the minimal work required rather than removing everything and starting again.

One area where we hit visual issues was highlighting. When you hover over metrics in the pane, we highlight them on the timeline, and likewise if you hover over an Insight for a given event, we draw a blue border around that event.

This feature was first implemented by detecting a mouse move over an element that triggers a highlight, then drawing that highlight directly onto the main canvas. The problem came when we had to remove the highlight: the only option was to redraw everything! It's impossible to redraw just the area where the highlight was (not without huge architectural changes), and redrawing the entire canvas just to remove a blue border around one item felt like overkill. It also lagged visibly if you moved your mouse rapidly over different items, triggering multiple highlights in quick succession.

To fix this, we split our UI into two off-screen canvases: the "main" canvas, where tracks render, and the "highlights" canvas, where highlights are drawn. We then render by copying both onto the single canvas that's visible on screen, using the drawImage method on the canvas context, which can take another canvas as its source.

Doing this means that removing a highlight doesn't cause the main canvas to be redrawn: instead, we clear the on-screen canvas and copy the main canvas back onto it. Copying a canvas is cheap; it's the drawing that's expensive. By moving highlights onto a separate canvas, we avoid that cost when toggling highlights on and off.
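Sketched out, the compositing step looks something like the following; mainCanvas and highlightCanvas are illustrative names for the two off-screen buffers, assumed to be the same size as the visible canvas.

// Off-screen buffers: tracks draw into one, highlights into the other.
const mainCanvas = document.createElement('canvas');
const highlightCanvas = document.createElement('canvas');

function composite(onScreenContext: CanvasRenderingContext2D): void {
  const {width, height} = onScreenContext.canvas;
  onScreenContext.clearRect(0, 0, width, height);
  // Copying a canvas with drawImage is cheap compared to redrawing it.
  onScreenContext.drawImage(mainCanvas, 0, 0);
  onScreenContext.drawImage(highlightCanvas, 0, 0);
}

To remove a highlight, we only need to clear highlightCanvas and composite again; the expensive main canvas is never touched.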

Comprehensively tested trace parsing

One of the benefits of building a new feature from scratch is that you can reflect on the technical choices made previously and make improvements. One of the things we wanted to improve on was to explicitly split our code into two, almost entirely distinct parts:

1. Parse the trace file and pull out the required data.
2. Render a set of tracks.

Keeping the parsing (part 1) separate from the UI work (part 2) enabled us to build a solid parsing system: each trace is run through a series of Handlers, each responsible for a different concern. A LayoutShiftHandler calculates all the information we need for layout shifts, while a NetworkRequestsHandler exclusively handles pulling out network requests. Having an explicit parsing step with different handlers responsible for different parts of the trace has been beneficial: trace parsing can get very complicated, and it helps to be able to focus on one concern at a time.
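The panel's code isn't open source, but the shape of this system might look something like the sketch below; the TraceHandler interface and the LayoutShiftHandler internals are assumptions based on the description above, not the actual implementation.

interface TraceEvent {
  name: string;
  ts: number;
}

// Each handler sees every trace event but accumulates only its own concern.
interface TraceHandler<Data> {
  handleEvent(event: TraceEvent): void;
  finalize(): Data;
}

class LayoutShiftHandler implements TraceHandler<TraceEvent[]> {
  private shifts: TraceEvent[] = [];

  handleEvent(event: TraceEvent): void {
    if (event.name === 'LayoutShift') {
      this.shifts.push(event);
    }
  }

  finalize(): TraceEvent[] {
    return this.shifts;
  }
}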

We've also been able to comprehensively test our trace parsing by taking recordings in DevTools, saving them, and then loading them as part of our test suite. This is great because we can test against real traces rather than building up huge amounts of fake trace data that could become obsolete.
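A parsing test in that style might read roughly as follows, reusing the hypothetical LayoutShiftHandler from the sketch above; the fixture path and the expected count are illustrative, and we assume a Node-based test runner.

import {readFileSync} from 'fs';
import {strict as assert} from 'assert';

// A real trace, recorded in DevTools and saved into the repository.
const events = JSON.parse(
    readFileSync('fixtures/layout-shifts.json', 'utf8'));

const handler = new LayoutShiftHandler();
for (const event of events) {
  handler.handleEvent(event);
}

// Assert against data we know exists in the real recording.
assert.equal(handler.finalize().length, 3);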

Screenshot testing for canvas UI

Staying on the topic of testing: we usually test our frontend components by rendering them into the browser and ensuring they behave as expected; we can dispatch click events to trigger updates and assert that the DOM the components generate is correct. This approach works well for us, but it falls down for canvas rendering: there is no way to inspect a canvas and determine what's drawn on it! So our usual approach of rendering and then querying is not appropriate.

To enable us to have some test coverage we turned to screenshot testing. Each test fires up a canvas, renders the track we want to test, and then takes a screenshot of the canvas element. This screenshot is then stored in our codebase, and future test runs will compare the stored screenshot against the screenshot they generate. If the screenshots are different, the test will fail. We also provide a flag to run the test and force a screenshot update when we’ve purposefully changed the rendering and need the test to be updated.
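In spirit, each screenshot test boils down to something like this simplified sketch; the real infrastructure drives a browser, and renderTrackForTest, the golden file path, and the update flag are all illustrative names rather than our actual tooling.

import {existsSync, readFileSync, writeFileSync} from 'fs';

declare function renderTrackForTest(): HTMLCanvasElement; // Hypothetical helper.

const goldenPath = 'goldens/network-track.golden';
const updateGoldens = process.argv.includes('--update-goldens');

// Render the track and serialize the canvas pixels for comparison.
const actual = renderTrackForTest().toDataURL('image/png');

if (updateGoldens || !existsSync(goldenPath)) {
  // Rendering changed on purpose: overwrite the stored screenshot.
  writeFileSync(goldenPath, actual);
} else {
  const expected = readFileSync(goldenPath, 'utf8');
  if (actual !== expected) {
    throw new Error(`Screenshot differs from golden: ${goldenPath}`);
  }
}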

Screenshot tests are not perfect and are a little blunt: you can only test that the entire component renders as expected, rather than making more specific assertions, and initially we were guilty of over-using them to ensure every single component (HTML or canvas) rendered correctly. This slowed our test suite down drastically and led to issues where tiny, almost irrelevant UI tweaks (such as subtle color changes, or adding some margin between items) caused multiple screenshots to fail and require updating. We've now scaled back, using screenshots purely for canvas-based components, and this balance has worked well for us so far.

Conclusion

Building the new Performance Insights panel has been a very enjoyable, educational experience for the team. We've learned a lot about trace files, working with canvas, and much more. We hope you enjoy using the new panel, and we can't wait to hear your feedback.

To learn more about the Performance Insights panel, see Performance insights: Get actionable insights on your website's performance.

Download the preview channels

Consider using Chrome Canary, Dev, or Beta as your default development browser. These preview channels give you access to the latest DevTools features, let you test cutting-edge web platform APIs, and help you find issues on your site before your users do!

Getting in touch with the Chrome DevTools team

Use the following options to discuss the new features and changes in the post, or anything else related to DevTools.

  • Submit a suggestion or feedback to us via crbug.com.
  • Report a DevTools issue using More options > Help > Report a DevTools issue in DevTools.
  • Tweet at @ChromeDevTools.
  • Leave comments on our What's new in DevTools YouTube videos or DevTools Tips YouTube videos.

More from the Chrome DevTools team

Subscribe to the Chrome DevTools blog to stay up to date with DevTools news.
