May 17, 2017

Release Notes for Safari Technology Preview 30

Surfin’ Safari

Safari Technology Preview Release 30 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 215859-216643.


Web API

  • Implemented Subresource Integrity (SRI) (r216347)
  • Implemented X-Content-Type-Options:nosniff (r215753, r216195)
  • Added support for Unhandled Promise Rejection events (r215916)
  • Updated document.cookie to only return cookies if the document URL has a network scheme or is a file URL (r216341)
  • Removed the non-standard document.implementation.createCSSStyleSheet() API (r216458)
  • Removed the non-standard Element.scrollByLines() and scrollByPages() (r216456)
  • Changed to allow a null comparator in Array.prototype.sort (r216169)
  • Changed to set the Response.blob() type based on the content-type header value (r216353)
  • Changed Element.slot to be marked as [Unscopable] (r216228)
  • Implemented HTMLPreloadScanner support for <link preload> (r216143)
  • Fixed setting Response.blob() type correctly when the body is a ReadableStream (r216073)
  • Moved offsetParent, offsetLeft, offsetTop, offsetWidth, offsetHeight properties from Element to HTMLElement (r216466)
  • Moved the style property from Element to HTMLElement and SVGElement, then made it settable (r216426)


JavaScript

  • Fixed Arrow function access to this after eval('super()') within a constructor (r216329)
  • Added support for dashed values in unicode locale extensions (r216122)
  • Fixed the behaviour of the .sort(callback) method to match Firefox and Chrome (r216137)


CSS

  • Fixed space-evenly behavior with Flexbox (r216536)
  • Fixed font-stretch:normal to select condensed fonts (r216517)
  • Fixed custom properties used in rgb() with calc() (r216188)


Accessibility

  • Fixed the behavior of aria-orientation="horizontal" on a list (r216452)
  • Prevented exposing empty roledescription (r216457)
  • Propagated aria-readonly to grid descendants (r216425)
  • Changed to ignore aria-rowspan value if a rowspan value is provided for a <td> or <th> (r216167)
  • Fixed an issue causing VoiceOver to skip cells after a cell with aria-colspan (r216134)
  • Changed to treat cells with ARIA table cell properties as cells (r216123)
  • Updated implementation of aria-orientation to match specifications (r216089)

Web Inspector

  • Added resource load error reason text in the details sidebar (r216564)
  • Fixed toggling the Request and Response resource views in certain cases (r216461)
  • Fixed miscellaneous RTL and localization issues (r216465, r216629)
  • Fixed Option-Click on URL behavior in Styles sidebar (r216166)
  • Changed 404 image loads to appear as failures in Web Inspector (r216138)


WebDriver

  • Fixed several issues that prevented cookie-related endpoints from working correctly (r216258, r216261, r216292)


Media

  • Removed black background from video layer while in fullscreen (r216472)


Rendering

  • Fixed a problem with the CSS Font Loading API’s load() function erroneously resolving promises when used with preinstalled fonts (r216079)
  • Fixed flickering on asynchronous image decoding and ensured the image is incrementally displayed as new data is received (r216471)

By Jon Davis at May 17, 2017 05:00 PM

May 15, 2017

Responsive Design for Motion

Surfin’ Safari

WebKit now supports the prefers-reduced-motion media feature, part of CSS Media Queries Level 5, User Preferences. The feature can be used in a CSS @media block or through the window.matchMedia() interface in JavaScript. Web designers and developers can use this feature to serve alternate animations that avoid motion sickness triggers experienced by some site visitors.

To explain who this media feature is for, and how it’s intended to work, we’ll cover some background. Skip directly to the code samples or prefers-reduced-motion demo if you wish.

Motion as a Usability Tool

CSS transforms and animations were proposed by WebKit engineers nearly a decade ago as an abstraction of Core Animation concepts wrapped in a familiar CSS syntax. The standardization of CSS Transforms/CSS Animations and adoption by other browsers helped pave the way for web developers of all skill levels. Richly creative animations were finally within reach, without incurring the security risk and battery cost associated with plug-ins.

The perceptual utility of appropriate, functional motion can increase the understandability and, yes, accessibility of a user interface. There are numerous articles on the benefits of animation for increasing user engagement.

In 2013, Apple released iOS 7, which included heavy use of animated parallax, dimensionality, motion offsets, and zoom effects. Animation was used as a tool to minimize visual user interface elements while reinforcing a user’s understanding of their immediate and responsive interactions with the device. New capabilities in web and native platforms like iOS acted as a catalyst, leading the larger design community to a greater awareness of the benefits of user interface animation.

Since 2013, use of animation in web and native apps has increased by an order of magnitude.

Motion is Wonderful, Except When it’s Not

Included in the iOS accessibility settings is a switch titled “Reduce Motion.” It was added in iOS 7 to allow users the ability to disable parallax and app launching animations. In 2014, iOS included public API for native app developers to detect Reduce Motion (iOS, tvOS) and be notified when the iOS setting changed. In 2016, macOS added a similar user feature and API so developers could both detect Reduce Motion (macOS) and be notified when the macOS pref changed. The prefers-reduced-motion media feature was first proposed to the CSS Working Group in 2014, alongside the release of the iOS API.

Wait a minute! If we’ve established that animation can be a useful tool for increasing usability and attention, why should it ever be disabled or reduced?

The simplest answer is, “We’re not all the same.” Preference is subjective, and many power users like to reduce UI overhead even further once they’ve learned how the interface works.

The more important, objective answer is, “It’s a medical necessity for some.” In particular, this change is required for a portion of the population with conditions commonly referred to as vestibular disorders.

Vestibular Spectrum Disorder

Vestibular disorders are caused by problems affecting the inner ear and parts of the brain that control balance and spatial orientation. Symptoms can include loss of balance, nausea, and other physical discomfort. Vestibular disorders are more common than you might guess: affecting as many as 69 million people in the United States alone.

Most people experience motion sickness at some point in their lives, usually while traveling in a vehicle. Consider the last time you were car-sick, sea-sick, or air-sick. Nausea can be a symptom of situations where balance input from your inner ear seems to conflict with the visual orientation from your eyes. If your senses are sending conflicting signals to your brain, it doesn’t know which one to trust. Conflicting sensory input can also be caused by neurotoxins in spoiled food, hallucinogens, or other ingested poisons, so a common hypothesis is that conflicting sensory inputs due to motion or vestibular responses lead your brain to infer that it’s being poisoned, and to seek to expel the poison through vomiting.

Whatever the underlying cause, people with vestibular disorders have an acute sensitivity to certain motion triggers. In extreme cases, the symptoms can be debilitating.

Vestibular Triggers

The following sections include examples of common vestibular motion triggers, and variants. If your site or web application includes similar animations, consider disabling or using variants when the prefers-reduced-motion media feature matches.

Trigger: Scaling and Zooming

Visual scaling or zooming animations give the illusion that the viewer is moving forward or backward in physical space. Some animated blurring effects give a similar illusion.

Note: It’s okay to keep many real-time, user-controlled direct manipulation effects such as pinch-to-zoom. As long as the interaction is predictable and understandable, a user can choose to manipulate the interface in a style or speed that works for their needs.

Example 1: Mouse-Triggered Scaling

How to Shoot on iPhone incorporates a number of video and motion effects, including a slowly scaling poster when the user’s mouse hovers over the video playback buttons.

The team implemented prefers-reduced-motion to disable the scaling effect and background video motion.

Example 2: 3D Zoom + Blur

The macOS web site simulates flying away from Lone Pine Peak in the Sierra Nevada mountain range. A three-dimensional dolly zoom and animated blur give the viewer a sense that physical position and focal depth-of-field is changing.

On mobile devices, or in browsers that can’t support the more complicated animation, the effect is reduced to a simpler scroll view. By incorporating similar visual treatment, the simpler variant retains the original design intention while removing the effect. The same variant could be used with prefers-reduced-motion to avoid vestibular triggers.

Trigger: Spinning and Vortex Effects

Effects that use spiraling or spinning movements can cause some people with vestibular disorders to lose their balance or vertical orientation.

Example 3: Spinning Parallax Starfield

Viljami Salminen Design features a spinning, background star field by default.

It has incorporated prefers-reduced-motion to stop the spinning effect for users with vestibular needs. (Note: The following video is entirely motionless.)

Trigger: Multi-Speed or Multi-Directional Movement

Parallax effects are widely known, but other variants of multi-speed or multi-directional movement can also trigger vestibular responses.

Example 4: iOS 10 Site Scrolling

The iOS 10 site features images moving vertically at varying speeds.

A similar variant without the scroll-triggered image offsets could be used with prefers-reduced-motion to avoid vestibular triggers.

Trigger: Dimensionality or Plane Shifting

These animations give the illusion of moving two-dimensional (2D) planes in three-dimensional (3D) space. The technique is sometimes referred to as two-and-a-half-dimensional (2.5D).

Example 5: Plane-Shifted Scrolling

Apple’s Environment site features an animated solar array that tilts as the page scrolls.

The site supports a reduced motion variant where the 2.5D effect remains a still image.

Trigger: Peripheral Motion

Horizontal movement in the peripheral field of vision can cause disorientation or queasiness. Think back to the last time you read a book while in a moving vehicle. The center of your vision was focused on the text, but there was constant movement in the periphery. This type of motion is fine for some, and too much to stomach for others.

Example 6: Subtle, Constant Animation Near a Block of Text

After scrolling to the second section on Apple’s Environment site, a group of 10-12 leaves slowly floats near a paragraph about renewable energy.

In the reduced motion variant, these leaves are stationary to prevent peripheral movement while the viewer focuses on the nearby text content.

Take note that only the animations known to be problematic have been modified or removed from the site. More on that later.

Using Reduce Motion on the Web

Now that we’ve covered the types of animation that can trigger adverse vestibular symptoms, let’s cover how to implement the new media feature into your projects.

CSS @Media Block

An @media block is the easiest way to incorporate motion reductions into your site. Use it to disable or change animation and transition values, or serve a different background-image.

@media (prefers-reduced-motion) {
  /* adjust motion of 'transition' or 'animation' properties */
}

Review the prefers-reduced-motion demo source for example uses.
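
For example, here is a minimal sketch (the selector, keyframe name, and animation values are hypothetical, not taken from the demo) that swaps a decorative animation for a static treatment:

```css
/* Hypothetical decorative animation */
.hero-banner {
  animation: drift 12s linear infinite;
}

@media (prefers-reduced-motion) {
  .hero-banner {
    /* disable the vestibular-trigger animation entirely */
    animation: none;
  }
}
```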

MediaQueryList Interface

Animations and DOM changes are sometimes controlled with JavaScript, so you can leverage the prefers-reduced-motion media feature with window.matchMedia and register an event listener to be notified whenever the user setting changes.

var motionQuery = matchMedia('(prefers-reduced-motion)');

function handleReduceMotionChanged() {
  if (motionQuery.matches) {
    /* adjust motion of 'transition' or 'animation' properties */
  } else {
    /* standard motion */
  }
}

motionQuery.addListener(handleReduceMotionChanged);
handleReduceMotionChanged(); // trigger once on load if needed

Review the prefers-reduced-motion demo source for example uses.

Using the Accessibility Inspector

When refining your animations, you could toggle the iOS Setting or macOS Preference before returning to your app to view the result, but this indirect feedback loop is slow and tedious. Fortunately, there’s a better way.

The Xcode Accessibility Inspector makes it easier to debug your animations by quickly changing any visual accessibility setting on the host Mac or a tethered device such as an iPhone.

  1. Attach your iOS device via USB.
  2. Select the iOS device in Accessibility Inspector.
  3. Select the Settings Tab.

Alternate closed-captioned version of the Accessibility Inspector demo below.

Don’t Reduce Too Much

In some cases, usability can suffer when reducing motion. If your site uses a vestibular trigger animation to convey some essential meaning to the user, removing the animation entirely may make the interface confusing or unusable.

Even if your site uses motion in a purely decorative sense, only remove the animations you know to be vestibular triggers. Unless a specific animation is likely to cause a problem, removing it prematurely only succeeds in making your site unnecessarily boring.

Consider each animation in its context. If you determine a specific animation is likely to be a vestibular trigger, consider serving an alternate, simpler animation, or display another visual indicator to convey the intended meaning.


In Summary

  1. Motion can be a great tool for increasing usability and engagement, but certain visual effects trigger physical discomfort in some viewers.
  2. Avoid vestibular trigger animations where possible, and use alternate animations when a user enables the “Reduce Motion” setting. Try out these settings, and use the new media feature when necessary. Review the prefers-reduced-motion demo source for example uses.
  3. Remember that the Web belongs to the user, not the author. Always adapt your site to fit their needs.

More Information

By James Craig at May 15, 2017 07:00 PM

May 03, 2017

Javier Fernández: Can I use CSS Box Alignment?

Igalia WebKit

As a member of the Igalia team implementing the CSS Grid Layout feature for the Blink and WebKit rendering engines, I’m very proud of what we’ve achieved in our collaboration with Bloomberg. I think Grid is a very interesting feature for the Web Platform, and we still haven’t seen all of its potential.

One of my main assignments on this project is to implement the CSS Box Alignment spec for Grid. It’s obvious that alignment is an important feature for many cases in web development, but I consider it key for a layout model like the one Grid provides.

We recently announced that the patch implementing self-baseline alignment landed in Blink. This was the last piece of alignment functionality pending implementation, so we can now consider the spec complete for Grid. However, implementing a feature like CSS Box Alignment carries additional complexity in the form of interoperability issues.

Interoperability is always a challenge when implementing any new specification, but I think it’s especially problematic for a feature like this, for several reasons:

  • The feature applies to several layout models.
  • The CSS Flexible Box specification already defined some of the CSS properties and values.
  • Once a new layout model implements the new specification, Flexbox is forced to follow it as well.

I admit that the editors of this new specification document made a huge effort to keep backward compatibility with the Flexbox spec (which caused quite a few implementation challenges). However, the current Flexbox implementation of the CSS properties and values that both specs have in common would become a Partial Implementation with respect to the new spec.

Recently, Florian Rivoal found out that this partial implementation of the CSS Box Alignment feature prevents the use of the cascade or @supports for providing customized fallbacks for the unimplemented alignment properties.

What does Partial Implementation actually mean?

As anybody can imagine, implementing a fancy web feature takes a considerable amount of time. During this period, the feature passes through several phases with different levels of exposure to end users. It’s precisely because of the importance of end-user feedback that these new web features are shipped behind experimental flags. This workflow is especially useful not only for browser developers but also for the spec editors.

For this reason, the W3C CSS Working Group defines a general policy to manage Partial Implementations, which can be summarized as follows:

So that authors can exploit the forward-compatible parsing rules to assign fallback values, CSS renderers must treat as invalid (and ignore as appropriate) any at-rules, properties, property values, keywords, and other syntactic constructs for which they have no usable level of support. In particular, user agents must not selectively ignore unsupported property values and honor supported values in a single multi-value property declaration: if any value is considered invalid (as unsupported values must be), CSS requires that the entire declaration be ignored.
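
As a sketch of the fallback pattern these forward-compatible parsing rules enable (the selector and property values are chosen purely for illustration):

```css
.item {
  /* Flexbox-era keyword, widely implemented */
  align-self: flex-start;
  /* Box Alignment keyword; where a renderer treats it as invalid, this
     whole declaration is ignored and the earlier fallback wins */
  align-self: start;
}
```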

This policy is added to every spec as part of its Conformance appendix, and so it is in the case of the CSS Box Alignment specification document. However, interpreting the Partial Implementation policy is far from trivial, especially for a feature like CSS Box Alignment. The most restrictive interpretation would imply the following facts:

  • Any new CSS property in the new spec should be treated as invalid until it is supported by all the layout models it applies to.
  • Any already existing CSS property with new values defined in the new spec should treat those new values as invalid until they are implemented in all the layout models the property applies to.
  • Browsers shouldn’t ship (without experimental flags) any CSS property or value until it is implemented in all the layout models it applies to.

When we discussed this at Igalia, we applied a less restrictive interpretation, based on the assumption that the spec actually defines several features which could be implemented and shipped independently, obviously avoiding any browser interoperability issues. As has always been the nature of the specification, keeping backward compatibility with Flexbox implementations has been a must, since that spec already defines some of the CSS properties now present in the new one.

The issue filed by Florian was discussed during the Tokyo F2F Apr 19-21 2017 meeting, where it was agreed to add a new section in the CSS Box Alignment spec to clarify how implementors of this feature should manage Partial Implementations:

Since it is expected that support for the features in this module will be deployed in stages corresponding to the various layout models affected, it is hereby clarified that the rules for partial implementations that require treating as invalid any unsupported feature apply to any alignment keyword which is not supported across all layout modules to which it applies for layout models in which the implementation supports the property in general.

The new text makes the Partial Implementation policy less restrictive and, even though it contradicts our interpretation of independent alignment features per layout model, it affects only models which already implement some of the CSS properties defined in the new spec. In this case, only Flexbox has to be updated to implement the new values defined for its alignment-related CSS properties: align-content, justify-content, and align-self.

Analysis of the implementation and shipment status

Before thinking about how to address the Partial Implementation issues, I decided to analyze the status of the CSS Box Alignment feature in the different browsers. If you are interested in the full analysis, it’s available here. The following table shows the implementation status of the new spec in the Safari, Chrome, and Firefox browsers, using a color code of unimplemented, grid only, or both (flex and grid).

If you want to try out some examples of these Partial Implementation issues, just try flexbox vs. grid cases with some of these alignment values: align-items: center, align-self: left, align-content: start, or justify-content: end.
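
A minimal side-by-side sketch of the kind of divergence to look for (selectors are illustrative):

```css
/* The same Box Alignment keyword applied to both layout models; in a
   partial implementation it may take effect in the grid container while
   being parsed as invalid (and thus dropped) in the flex container. */
.grid-example { display: grid; align-content: start; }
.flex-example { display: flex; align-content: start; }
```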

The three major browsers analyzed have shipped most, if not all, of the CSS Box Alignment spec as implemented for CSS Grid Layout (since Chrome 57, Safari 10.1, and Firefox 52). Firefox is the browser that has implemented and shipped the widest support for CSS Flexible Box.

We can extract the following conclusions:

  • The three browsers analyzed have shipped Partial Implementations of the CSS Box Alignment specification, although Firefox’s is almost complete.
  • The three browsers have shipped a Grid feature that completely supports the new CSS Box Alignment spec, although Safari still misses the self-baseline values.
  • The three implementations of the new CSS Box Alignment specification are backward compatible with the CSS Flexible Box specification, even though for some properties Flexbox implements a lower level of the spec (e.g. the self-baseline keywords).

Work in progress

Although we are still evaluating the problem together with the Blink and WebKit communities, at Igalia we are already working on improving the situation. We all agree on the damage these Partial Implementation issues are causing to the Web Platform, as Florian pointed out initially, so that’s a good starting point. There are bug reports against both WebKit and Blink, and we are already providing patches for some of them.

We are still discussing the best approach, but our bet would be to request an intent-to-implement-and-ship for a CSS Box Alignment (for flexbox layout) feature. This approach fits naturally with our initial plan of implementing several independent features from the alignment specification. It seems that this is what Firefox is doing, having already announced the implementation of CSS Box Alignment (for block layout).

Thanks to Bloomberg for sponsoring this work, as part of the efforts that Igalia has been doing all these years pursuing a better and more open web.


By jfernandez at May 03, 2017 08:19 PM

Release Notes for Safari Technology Preview 29

Surfin’ Safari

Safari Technology Preview Release 29 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 215271-215859.


JavaScript

  • Implemented Intl.DateTimeFormat.prototype.formatToParts (r215616)
  • Improved Date.parse to accept wider range of date strings (r215359)
  • Implemented Object.isFrozen() and Object.isSealed() according to ECMA specifications (r215272)


CSS

  • Added support for percentage gaps for CSS Grid (r215463)
  • Changed :focus-within behavior to match specifications (r215719)


Rendering

  • Avoided repaints for invisible animations on (r215507)
  • Fixed rendering flexbox children across columns (r215320)
  • Fixed text-align:start and text-align:end behavior in table cells (r215375)
  • Fixed animations with large negative animation-delays that fail depending on machine uptime (r215352)
  • Reduced redundant text measuring during mid-word breaking (r215666)
  • Changed memory handling to keep all of the decoded frames for an animated image if the total memory size of the frames is under 30MB (up from 5MB) (r215557)
  • Fixed <li> content inside <ul> to wrap mid-word when word-break:break-word is set (r215660)
  • Fixed the location of the “recent searches” popover of <input type="search"> in RTL mode (r215830)

Web Inspector

  • Added regular expression support to XHR breakpoints (r215584)
  • Added a pause reason for “All Requests” XHR breakpoint (r215427)
  • Fixed the enabled state of “All Requests” XHR breakpoint to be correctly restored (r215435)
  • Fixed a bug where XHR breakpoints would disappear when the inspected page is reloaded (r215473)
  • Fixed XHR breakpoints restored from settings but not appearing in the sidebar (r215422)
  • Fixed Network datagrid columns to correctly restore their shown or hidden state (r215449)
  • Added tooltips to Network grid items for easier reading when text overflows (r215631)
  • Fixed sorting by Priority column in Network datagrids (r215793)
  • Fixed the display of Web Socket messages with non-latin letters (r215388)
  • Prevented showing the Search tab for location links, prefer the Resources tab (r215630)
  • Changed to treat Uint8ClampedArray as an array, not an object (r215855)
  • Fixed Command-G (⌘G) shortcut to allow Find next to work in the console (r215795)
  • Implemented autocompletion for CSS variables (r215358)
  • Updated the icon for the Ignore resource cache button in the Network Tab (r215440)


WebCrypto

  • Added support for ECDSA (r215423)
  • Improved converting an ECDSA signature binary into DER format (r215791)


Accessibility

  • Changed the role description of <hr> from “separator” to “rule” (r215532)


Security

  • Restricted WebKit image formats to a known whitelist (r215706, r215829). WebKit now only loads images of the following formats:
    • PNG (.png)
    • GIF (.gif)
    • JPEG (.jpg), (.jpeg), (.jpe), (.jif), (.jfif), (.jfi)
    • JPEG 2000 (.jp2), (.j2k), (.jpf), (.jpx), (.jpm), (.mj2)
    • TIFF (.tiff), (.tif)
    • MPO (.mpo)
    • Microsoft Bitmap (.bmp), (.dib)
    • Microsoft Cursor (.cur)
    • Microsoft Icon (.ico)

Bug Fixes

  • Fixed an issue where the status bar would not display modifier key information (e.g. “Open * in new tab” when holding the Command key) (r215790)
  • Improved performance of typing on pages with many <input> elements
  • Fixed an issue where a hardware “enter” key would not dismiss JavaScript alert, confirm, or prompt; previously, only the “return” key would dismiss a dialog
  • Fixed QuotaExceededError when saving to localStorage in private browsing mode or WebDriver sessions (r215315)
  • Fixed an issue where the Content-Disposition header filename was ignored when the download attribute is specified (r215736)
  • Fixed escaping ‘<‘ and ‘>’ in attribute values when using XMLSerializer.serializeToString() API (r215648)
  • Fixed issues causing beforeunload dialog to be shown even though the user did not interact with the page (r215404)
  • Changed all CORS requests and cross origin access from file:// to be blocked unless Disable Local File Restrictions is selected from the Develop menu

By Jon Davis at May 03, 2017 05:00 PM

Carlos García Campos: WebKitGTK+ remote debugging in 2.18

Igalia WebKit

WebKitGTK+ has supported remote debugging for a long time. The current implementation uses WebSockets for the communication between the local browser (the debugger) and the remote browser (the debug target, or debuggable). This implementation was very simple and, in theory, you could use any web browser as the debugger, because all the inspector code was served over the WebSockets. I say “in theory” because in practice it was not always so easy, since the inspector code uses newer JavaScript features that are not yet implemented in other browsers. The other major issue with this approach was that the communication between debugger and target was not bi-directional, so the target browser couldn’t notify the debugger about changes (like a new tab opening, a navigation, or that it is about to be closed).

Apple abandoned the WebSockets approach a long time ago and implemented its own remote inspector, using XPC for the communication between debugger and target. They also moved the remote inspector handling into JavaScriptCore, making it available for debugging JavaScript applications without a WebView too. In addition, the remote inspector is also used by Apple to implement WebDriver. We think that this approach has many more advantages than disadvantages compared to the WebSockets solution, so we have been working on making it possible to use this new remote inspector in the GTK+ port too. After some refactoring to separate the cross-platform implementation from the Apple one, we could add our implementation on top of it. This implementation is already available in WebKitGTK+ 2.17.1, the first unstable release of this cycle.

From the user’s point of view there aren’t many differences. With WebSockets, we launched the target browser this way:


This hasn’t changed with the new remote inspector. To start debugging we opened any browser and loaded

With the new remote inspector we have to use any WebKitGTK+ based browser and load


As you may have noticed, it’s no longer possible to use just any web browser; you need a recent enough WebKitGTK+ based browser as the debugger. This is because of the way the new remote inspector works: it requires a frontend implementation that knows how to communicate with the targets. In Apple’s case that frontend implementation is Safari itself, which has a menu with the list of remote debuggable targets. In WebKitGTK+ we didn’t want to force the use of a particular web browser as the debugger, so the frontend is implemented as a built-in custom protocol of WebKitGTK+. Loading inspector:// URLs in any WebKitGTK+ WebView will show the remote inspector page with the list of debuggable targets.
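
Putting the two sides together, a sketch of a typical session (the address, port, and browser binary here are example values, not fixed names):

```shell
# On the machine being debugged, expose the remote inspector:
WEBKIT_INSPECTOR_SERVER=192.168.0.50:5000 ./target-browser

# On the debugging machine, load this URL in any WebKitGTK+ based browser:
#   inspector://192.168.0.50:5000
```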

It looks quite similar to what we had, just a list of debuggable targets, but there are a few differences:

  • A new debugger window is opened when the Inspect button is clicked, instead of reusing the same web view. Clicking Inspect again just brings the window to the front.
  • The debugger window loads faster, because the inspector code is not served over HTTP, but loaded locally like the normal local inspector.
  • The target list page is updated automatically, without having to manually reload it when a target is added, removed or modified.
  • The debugger window is automatically closed when the target web view is closed or crashes.

How does the new remote inspector work?

The web browser checks for the presence of the WEBKIT_INSPECTOR_SERVER environment variable at startup, the same way it was done with WebSockets. If present, the RemoteInspectorServer is started in the UI process, running a DBus service listening on the IP and port provided. The environment variable is propagated to the child web processes, which create a RemoteInspector object and connect to the RemoteInspectorServer. There’s one RemoteInspector per web process, and one debuggable target per WebView. Every RemoteInspector maintains a list of debuggable targets that is sent to the RemoteInspectorServer when a new target is added, removed, or modified, or when explicitly requested by the RemoteInspectorServer.

When the debugger browser loads an inspector:// URL, a RemoteInspectorClient is created. The RemoteInspectorClient connects to the RemoteInspectorServer using the IP and port from the inspector:// URL and asks for the list of targets, which is used by the custom protocol handler to create the web page. The RemoteInspectorServer works as a router, forwarding messages between RemoteInspector and RemoteInspectorClient objects.

By carlos garcia campos at May 03, 2017 03:43 PM

April 20, 2017

A Few Words on Fetching Bytes

Surfin’ Safari

Like all good puzzles, a web browser is composed of many different pieces. Some are all shiny, like your favorite web API. Some are less visible, like HTML parsing and web resource loading.

Even dull pieces require a lot of work to standardize their behavior across browsers. For example, HTML parsing originally provided only: give me HTML and I’ll give you a document. Now, it is much more reliable across browsers because it has been standardized in detail. Similarly, the loading of web resources was somewhat consistent up to: give me an HTTP request and I’ll get you an HTTP response. But loading a web resource encompasses much more than that. The Fetch specification thoroughly standardizes those details. As well as specifying how the browser loads resources, the Fetch specification also defines a JavaScript API for loading resources. This API, the Fetch API, is a replacement for XMLHttpRequest, providing the lowest-level set of options possible in the context of a web page. Let’s see how shiny the Fetch API might be.

The Fetch API

The Fetch API consists of a single Promise-returning method called fetch. The returned promise resolves to a Response object, which contains the response headers and body information. Let’s use the Fetch API to retrieve the list of WebKit features:

async function isFetchAPIFeelingGood() {
    let webkitFeaturesURL = "";
    let response = await fetch(webkitFeaturesURL);
    let features = await response.json();
    return features.specification.find((feature) => feature.name == "Fetch API");
}

isFetchAPIFeelingGood().then((value) => alert(!!value ? "Oh yes!" : "not really!"))

You might notice two uses of await in the example above. fetch returns a promise that resolves when the response headers are received. The data being requested is JSON. The second promise, returned by response.json(), resolves when the entire response body has been received and parsed.

fetch can take either a URL or a Request object. The Request object allows access to a whole new set of options compared to XMLHttpRequest. Let’s try again to check whether the Fetch API is supported in WebKit, but this time, let’s make sure our cache does not serve us out-of-date information.

async function isFetchAPIFeelingGoodForReal() {
    let webkitFeaturesURL = "";
    let response = await fetch(new Request(webkitFeaturesURL,
        { cache: "no-cache" }
    ));
    let latestFeatures = await response.json();
    return latestFeatures.specification.find((feature) => feature.name == "Fetch API");
}

fetch also provides more flexible access to the response body. In addition to getting it in various flavors (JSON, arrayBuffer, blob, text…), the response provides a ReadableStream body attribute. This makes it possible to process chunks of bytes progressively as they arrive without buffering all of the data, and even to abort the resource load:

async function featureListAsAReader() {
    let webkitFeaturesURL = "";
    let response = await fetch(new Request(webkitFeaturesURL));
    return response.body.getReader();
}

function checkChunk(searched, buffer, count) {
    var i = 0;
    while (i < buffer.length) {
        if (buffer[i++] == searched.charCodeAt(count)) {
            if (++count == searched.length)
                return count;
        } else if (count) {
            count = 0;
        }
    }
    return count;
}

async function isFetchAPIFeelingGoodWhileChunky(reader, count) {
    reader = reader ? reader : await featureListAsAReader();
    count = count ? count : 0;

    let chunk = await reader.read();
    if (chunk.done)
        return false;

    let searched = "Fetch API";
    count = checkChunk(searched, chunk.value, count);
    if (count == searched.length)
        return true;
    return isFetchAPIFeelingGoodWhileChunky(reader, count);
}

Fetching The Future

The Fetch API journey is not finished. New proposals might cover important features of XMLHttpRequest that Fetch currently lacks, like early cancellation and timeout. New proposals might also cover HTTP/2 push and priority, as well as wider use of the Response object in web APIs: media elements, WebAssembly… The Fetch algorithm is also being constantly refined to reach full interoperability of web resource loading. A first iteration of the WebKit Fetch API implementation has shipped in Safari. The WebKit community is eager to hear your feedback on this feature. Comments, suggestions, priorities, use cases, tests, bug reports and candies are all very welcome through the usual WebKit channels. That would be so fetch indeed!


By Youenn Fablet at April 20, 2017 06:00 PM

April 19, 2017

Release Notes for Safari Technology Preview 28

Surfin’ Safari

Safari Technology Preview Release 28 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 214535-215271.

Power and Performance

  • Changed to pause silent WebAudio rendering in background tabs (r214721)
  • Changed to pause animated SVG images on pages loaded in the background (r214561)
  • Changed to make inaudible background tabs become eligible for memory kill after 8 minutes (r215077)
  • Changed to kill any WebContent process using over 16 GB of memory (r215055)
  • Throttled DOM Timers to 30fps in cross-origin iframes that the user did not interact with (r215116)
  • Throttled requestAnimationFrame callbacks to 30fps in cross-origin iframes the user did not interact with (r215070, r215153)

CSS

  • Adapted content-alignment properties to the new baseline syntax (r214624)
  • Adapted place-content alignment shorthand to the new baseline syntax (r214852)
  • Adapted self-alignment properties to the new baseline syntax (r214564)
  • Fixed scroll offset jumps after a programmatic scroll in an overflow container with scroll snapping (r215075)
  • Implemented the place-items shorthand (r214966)
  • Implemented stroke-color CSS property (r215261)
  • Implemented stroke-miterlimit CSS property (r214787)
  • Unprefixed CSS cursor values grab and grabbing (r215146)

JavaScript

  • Fixed objects with gaps between numerical keys getting filled by NaN values (r214714)
  • Fixed Object.seal() and Object.freeze() on global this (r215072)
  • Fixed String.prototype.replace to correctly apply special replacement parameters when passed a function (r214662)

Web APIs

  • Changed _blank, _self, _parent, and _top browsing context names to be case-insensitive (r214944)
  • Cleaned up touch event handler registration when moving nodes between documents (r214819)
  • Fixed <input type="range"> to prevent breaking all mouse events when changing to disabled while active (r214955)
  • Prevented double downloads of content preloaded with <link rel=preload> when the content is in MemoryCache (r215229)
  • Fixed WebSocket.send (r215102)

Web Inspector

  • Added a preference for Auto Showing Scope Chain sidebar on pause (r214847)
  • Changed the order of Debugger tab sidebar panels: Scope Chain, Resource, Probes (r215047)
  • Changed XHR breakpoints to be global (r214956)
  • Changed hierarchical path component labels to guess directionality based on content for RTL layout (r214862)
  • Fixed RTL alignment of close button shown while docked (r214902)
  • Fixed RTL layout issues in call frame tree elements and async call stacks (r214846)
  • Fixed RTL layout issues in the debugger dashboard putting arrows on the wrong side (r214899)
  • Fixed RTL layout issues in Type Profiler popovers (r214906)
  • Fixed misplaced highlights in Search results of the Search navigation sidebar for RTL layout (r214864)
  • Fixed disappearing section when clicking on the body of a CSS rule after editing (r214863)
  • Fixed showing indicators for hidden DOM element breakpoints in the Elements tab (r214844)
  • Fixed blank Network tab content view after reload (r214551)
  • Made “Enter Class Name” text field wider so the placeholder text doesn’t clip (r215192)
  • Fixed probe values not showing in the Debugger tab sidebar (r214967)
  • Fixed focusing the Find banner immediately after showing it (r214856)
  • Fixed showing Source Map Resources in the Debugger Sources list (r215082)
  • Fixed Styles sidebar warning icon appearing inside property value text (r214617)
  • Fixed broken tabbing in Styles sidebar when additional “:” and “;” are in the property value (r215170)
  • Fixed clipped data in WebSockets data grid (r215206)
  • Fixed staying scrolled to the bottom as new WebSocket log messages get added (r214587)
  • Included additional pause reason details for DOM “subtree modified” breakpoint (r214861)
  • Included more Network information in Resource Details Sidebar (r214903)
  • Included all headers in the Request Headers section of the Resource details sidebar (r215062)

WebDriver

  • Fixed an issue that prevented non-popup windows from being maximized or resized
  • Fixed an issue that caused previously opened tabs to reopen when Safari was launched in order to run a WebDriver test

Accessibility

  • Exposed a new AXSubrole for the explicit ARIA “group” role (r214623)
  • Fixed VoiceOver web article navigation with an article rotor for sites like Facebook and Twitter (r215236)

Media

  • Fixed seeks to currentTime=0 if currentTime is already 0 (r214959)

Rendering

  • Fixed clipping across page breaks when including <caption>, <thead> or <tbody> in a <table> (r214712)
  • Fixed Japanese fonts in vertical text to support synthesized italics (r214848)
  • Fixed long Arabic text in ContentEditable with CSS white-space=pre to prevent hangs (r214726)
  • Fixed overly heavy fonts by attempting to normalize variation ranges (r214585, r214572)

Web Crypto

  • Added support for AES-CTR (r215051)

Security

  • Changed private browsing sessions to not look in keychain for client certificates (r215125)

AppleScript

  • Fixed an issue where Safari would throw an exception when evaluating JavaScript ending with an implied return value, where the final statement doesn’t include the return keyword

By Jon Davis at April 19, 2017 05:00 PM

April 05, 2017

WebGPU Prototype and Demos

Surfin’ Safari

A few weeks ago, we announced the creation of the W3C GPU for the Web Community Group and our proposal, WebGPU. As of Safari Technology Preview Release 26, and WebKit Nightly Build, a WebGPU prototype is available for you to experiment with on macOS.

To enable WebGPU, first make sure the Develop menu is visible using Safari > Preferences > Advanced > Show Develop menu in menu bar. Then, in the Develop menu, make sure Experimental Features > WebGPU is checked.

We also have written some simple demos to give you an idea of how the API works. Note that our implementation and documented proposal are not quite aligned, so the code in these demos will change over time. We’ve tried to indicate the places in the code where the prototype is behind the proposal. And, to reiterate, this is a proposal – the final API will almost certainly be quite different.

You should be aware that it is not recommended to browse the Web with this feature enabled. Not only is it experimental and non-standard, we haven’t implemented any validation of content, so an error in WebGPU might cause a crash, or worse.

By Dean Jackson at April 05, 2017 10:40 PM

Release Notes for Safari Technology Preview 27

Surfin’ Safari

Safari Technology Preview Release 27 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 213822-214535.

Browser Changes

  • Added a “Reload Page From Origin” alternate menu item to the View menu. This action reloads a page without using cached resources.
  • Removed the Option-Command-R (⌥⌘R) keyboard shortcut from “Enter/Exit Responsive Design Mode” and mapped it to “Reload Page From Origin” instead.
  • Removed the Disable Caches menu item in the Develop menu. The equivalent functionality is now available through Web Inspector’s Network tab.

JavaScript

  • Implemented ESNext Object Spread proposal (r214038)
  • Changed to allow labels named let when parsing a statement in non-strict mode (r213850)
  • Fixed const location = "foo" in a worker to not throw a SyntaxError (r214145)

Web APIs

  • Aligned initEvent, initCustomEvent, initMessageEvent with the latest specification (r213825)
  • Aligned Document.elementFromPoint() with the CSSOM specification (r213836)
  • Changed XMLHttpRequest getAllResponseHeaders() to transform header names to lowercase before sorting (r214252)
  • Fixed sending an empty "Access-Control-Request-Headers" in preflight requests (r214254)
  • Implemented self.origin (r214147)
  • Implemented the “noopener” feature for window.open() (r214251)
  • Improved index validation when using uint index values in WebGL (r214086)
  • Prevented beforeunload alerts when the user hasn’t interacted with the web page (r214277)
  • Prevented innerText setter from inserting an empty text node if the value starts with a newline (r214136)
  • Prevented new navigations during document unload (r214365)
  • Prevented WebSQL databases from being openable in private browsing (r214309)
  • Changed serialization of custom properties in longhand to be "", not the value of the shorthand property (r214383)
  • Changed to tear down descendant renderers when a <slot> display value is set to "contents" (r214232)

Rendering

  • Fixed pausing animated SVG images when they are outside the viewport, or removed from the document (r214503, r214327)
  • Changed asynchronous image decoding to consider when the drawing size is smaller than the size of the image (r213830)
  • Prevented large images from being decoded asynchronously when they are drawn on a canvas (r214450)
  • Fixed the flow state for positioned inline descendants (r214119)
  • Fixed initial letter rendering that follows pagination (r214110)
  • Fixed clipping columns horizontally in multi-column layout (r213832)
  • Fixed animated GIFs that fail to play in multi-column layout (r213826)

CSS

  • Fixed an issue where a dynamically applied :empty pseudo class with display:none does not get unapplied (r214290)
  • Unprefixed -webkit-min-content, -webkit-max-content and -webkit-fit-content (r213831)

Media

  • Fixed loading media files in a <video> tag that are served without a filename extension (r214269)
  • Suspended silent videos playback in background tabs to save CPU (r214195)

Web Inspector

  • Added “Disable Caches” toggle in the Network tab that only applies to the inspected page while Web Inspector is open. (r214494)
  • Added “Save Selected” context menu item to Console (r214077)
  • Added RTL support to the Timeline tab (r213925, r213942, r213928, r213997, r213924, r214009, r214062, r214076)
  • Added RTL support for the Find banner (r214048)
  • Added more accurate Resource Timing data in Web Inspector (r213917)
  • Added context menu item to log content of WebSocket frame (r214371)
  • Added icons for SVG Image cluster path components (r214011)
  • Added keyboard shortcut to clear timeline records (r214140)
  • Added a connection indicator for when a WebSocket connection is open or closed (r214354)
  • Changed Option-clicking the close tab button to close all other tabs (r214464)
  • Changed to allow the user to copy locked CSS selectors in Style Rules sidebar (r213887)
  • Changed to allow users to click links in inline and user-agent styles (r214366)
  • Changed SVG image content view to allow toggling between the image and source (r213999)
  • Changed Event Listeners detail section to show listeners by element rather than by event (r213874)
  • Changed Event Listeners to add missing ‘once’ and ‘passive’ event listener flags (r213873)
  • Fixed an issue where adding a WebSocket message could change the currently selected resource (r214387)
  • Fixed clicking DOM breakpoint marker to enable and disable breakpoints (r214256)
  • Fixed an exception when clicking on Clear Network Items icon with the timing popover visible (r214199)
  • Fixed local storage keys and values starting with truncated strings (r214308)
  • Fixed empty attributes added to a DOM tree outline element adding whitespace within the tag (r214141)
  • Fixed an exception when fetching computed styles that can break future updates of the section (r213961)
  • Fixed syntax highlighting and formatting when inspecting a main resource that is JavaScript or JSON (r214492)
  • Fixed pseudo-class markers overlapping DOM breakpoints and disclosure triangles (r214196)
  • Fixed an issue causing the Resource details sidebar to display previous image metrics when viewing resource where content load failed (r214436)
  • Fixed text selection in the Console to select only message text (r214024)
  • Fixed formatting JSON request data (r214487)
  • Fixed the filename used when saving a resource from the resource image content view (r214133)
  • The file save dialog no longer suggests the top level directory as the default location (r214442)

Accessibility

  • Fixed VoiceOver for editable text on the web (r214112)

Web Crypto

  • Added support for SPKI/PKCS8 Elliptic Curve cryptography (r214074)

By Jon Davis at April 05, 2017 05:00 PM

April 04, 2017

Manuel Rego: Announcing a New Edition of the Web Engines Hackfest

Igalia WebKit

Another year, another Web Engines Hackfest. Following the tradition that started back in 2009, Igalia is arranging a new edition of the Web Engines Hackfest that will happen in A Coruña from Monday, 2nd October, to Wednesday, 4th October.

The hackfest is a gathering of participants from the different parts of the open web platform community, working on projects like Chromium/Blink, WebKit, Gecko, Servo, V8, JSC, SpiderMonkey, Chakra, etc. The main focus of the event is to increase collaboration between the different browser implementors by working together for a few days. On top of that, we arrange a few talks about some interesting topics which the hackfest attendees are working on, and also arrange breakout sessions for in-depth discussions.

Web Engines Hackfest 2016 Main Room

Last year almost 40 hackers joined the event, the biggest number of attendees ever. Previous attendees might have already received an invitation, but if not, just send us a request if you want to attend this year.

If you don’t want to miss any update, remember to follow @webhackfest on Twitter. See you in October!

April 04, 2017 10:00 PM

March 29, 2017

New Web Features in Safari 10.1

Surfin’ Safari

A new version of Safari shipped with the release of iOS 10.3 and macOS Sierra 10.12.4. Safari on iOS 10.3 and Safari 10.1 on macOS add many important web features and improvements from WebKit that we are incredibly excited about.

While this release makes the web platform more capable and powerful, it also makes web development easier, simplifying the ongoing maintenance of your code. We’re excited to see how web developers will translate these improvements into better experiences for users.

Read on for a quick look at the features included in this release.

Fetch

Fetch is a modern replacement for XMLHttpRequest. It provides a simpler approach to requesting resources asynchronously over the network. It also makes use of Promises from ECMAScript 2015 (ES6) for convenient, chainable response handling. Compared to XMLHttpRequest, the Fetch API allows for cleaner, more readable code that is easier to maintain.

let jsonURLEndpoint = "";
fetch(jsonURLEndpoint, {
    method: "get"
}).then(function(response) {
    response.json().then(function(json) {
        // use the parsed JSON here
    });
}).catch(function(error) {
    // handle network or parsing errors here
});

Find out more in the blog post, A Few Words On Fetching Bytes.

CSS Grid Layout

CSS Grid Layout gives web authors a powerful new layout system based on a grid of columns and rows in a container. It is a significant step forward in providing manageable page layout tools in CSS that enable complex graphic designs that respond to viewport changes. Authors can use CSS Grid Layout to more easily achieve designs normally seen in print that previously required layout quirks in existing CSS tools like floats and Flexbox.
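As a quick illustration (the selectors and track sizes here are hypothetical, not from the post), a classic header/sidebar/content layout can be declared directly on the container:

```css
/* Hypothetical page layout: two columns, two rows, named areas */
.page {
    display: grid;
    grid-template-columns: 200px 1fr;   /* fixed sidebar, flexible content */
    grid-template-rows: auto 1fr;
    grid-template-areas:
        "header header"
        "nav    main";
    grid-gap: 10px;
}
.page > header { grid-area: header; }
.page > nav    { grid-area: nav; }
.page > main   { grid-area: main; }
```

Reassigning grid-template-areas inside a media query is all it takes to rearrange the same markup for a narrow viewport.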

Read more in the blog post, CSS Grid Layout: A New Layout Module for the Web.

ECMAScript 2016 & ECMAScript 2017

WebKit added support in Safari 10.1 for both ECMAScript 2016 and ECMAScript 2017, the latest standards revisions for the JavaScript language. ECMAScript 2016 adds small incremental improvements, but the 2017 standard brings several substantial improvements to JavaScript.

ECMAScript 2016 includes the exponentiation operator (x ** y instead of Math.pow(x, y)) and Array.prototype.includes. Array.prototype.includes is similar to Array.prototype.indexOf, except it can find values including NaN.
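A two-line sketch of both additions (the variable names are ours):

```javascript
// The exponentiation operator is shorthand for Math.pow.
const squared = 3 ** 2;                   // 9, same as Math.pow(3, 2)

// Unlike indexOf, includes uses SameValueZero comparison, so it finds NaN.
const readings = [1, NaN, 3];
const hasNaN = readings.includes(NaN);    // true
const indexOfNaN = readings.indexOf(NaN); // -1: indexOf cannot find NaN
```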

ECMAScript 2017 brings async and await syntax, shared memory objects including Atomics and SharedArrayBuffer, String.prototype.padStart, String.prototype.padEnd, Object.values, Object.entries, and allows trailing commas in function parameter lists and calls.
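A brief sketch touring several of these additions together (names and values are ours):

```javascript
// String padding to a target length.
const padded = "7".padStart(3, "0");              // "007"
const dotted = "7".padEnd(3, ".");                // "7.."

// Object.values and Object.entries iterate an object's own enumerable properties.
const point = { x: 1, y: 2 };
const pointValues = Object.values(point);         // [1, 2]
const pointEntries = Object.entries(point);       // [["x", 1], ["y", 2]]

// An async function always returns a promise; await unwraps one inside it.
async function addLater(a, b) {
    return a + b;
}
addLater(40, 2).then((sum) => console.log(sum));  // logs 42
```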

IndexedDB 2.0

WebKit’s IndexedDB implementation has significant improvements in this release. It’s now faster, standards compliant, and supports new IndexedDB 2.0 features. IndexedDB 2.0 adds support for binary data types as index keys, so you’ll no longer need to serialize them into strings or array objects. It also brings object store and index renaming, getKey() on IDBObjectStore, and getPrimaryKey() on IDBIndex.
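A hedged sketch of the two additions in a browser context (the database, store, and index names here are made up):

```javascript
// Hypothetical store keyed by id, with an index over a binary digest.
const openRequest = indexedDB.open("demo-db", 1);
openRequest.onupgradeneeded = () => {
    const db = openRequest.result;
    const store = db.createObjectStore("files", { keyPath: "id" });
    store.createIndex("byDigest", "digest"); // binary values can now serve as index keys
};
openRequest.onsuccess = () => {
    const db = openRequest.result;
    const store = db.transaction("files", "readwrite").objectStore("files");
    // No more serializing binary data into strings before storing it.
    store.put({ id: 1, digest: new Uint8Array([0xde, 0xad]) });
    // getKey() retrieves only the key of a matching record, not the whole value.
    store.getKey(1).onsuccess = (event) => console.log(event.target.result);
};
```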

Find out more in the Indexed Database API 2.0 specification.

Custom Elements

Custom Elements enables web authors to create reusable components defined by their own HTML elements without the need for a JavaScript framework. Like built-in elements, Custom Elements can communicate and receive new values in their attributes, and respond to changes in attribute values using reaction callbacks.
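A minimal sketch of a reaction callback (the element name and behavior are hypothetical):

```javascript
// A <status-badge> element that re-renders when its "state" attribute changes.
class StatusBadge extends HTMLElement {
    static get observedAttributes() { return ["state"]; }

    attributeChangedCallback(name, oldValue, newValue) {
        // Reaction callback: runs whenever an observed attribute changes.
        this.textContent = newValue;
    }
}
customElements.define("status-badge", StatusBadge);
```

Once defined, `<status-badge state="ready"></status-badge>` can be used in markup or created with document.createElement, just like a built-in element.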

For more information, read the Introducing Custom Elements blog post.

Gamepad

The Gamepad API makes it possible to use game controllers in your web apps. Any gamepad that works on macOS without additional drivers will work in Safari on a Mac. All MFi gamepads are supported on iOS.

Read more about the API in the Gamepad specifications.

Pointer Lock

In Safari on macOS, requesting Pointer Lock on an element gives developers the ability to hide the mouse pointer and access the raw mouse movement data. This is particularly helpful for authors creating games on the web. It extends the MouseEvent interface with movementX and movementY properties to provide a stream of information even when the movements are beyond the boundaries of the visible range. In Safari, when the pointer is locked on an element, a banner is displayed notifying the user that the mouse cursor is hidden. Pressing the Escape key once dismisses the banner, and pressing the Escape key again releases the pointer lock on the element.
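A sketch of the flow, assuming a <canvas> element on the page:

```javascript
const canvas = document.querySelector("canvas");

// Pointer Lock must be requested from a user gesture, such as a click.
canvas.addEventListener("click", () => canvas.requestPointerLock());

document.addEventListener("mousemove", (event) => {
    if (document.pointerLockElement === canvas) {
        // Raw deltas keep arriving even past the edges of the visible range.
        console.log(event.movementX, event.movementY);
    }
});
```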

You can get more information from the Pointer Lock specifications.

Keyboard Input in Fullscreen

WebKit used to restrict keyboard input in HTML5 fullscreen mode. With Safari 10.1 on macOS, when using HTML5 fullscreen mode, WebKit removes the keyboard input restrictions.

Interactive Form Validation

With support for HTML Interactive Form Validation, authors can create forms with data validation constraints that are checked automatically by the browser when the form is submitted, all without the need for JavaScript. It greatly simplifies ensuring good data entry from users on the client side and minimizes the need for complex JavaScript.
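For example, a sketch of a form the browser validates on submission (the field names and action URL are made up):

```html
<form action="/signup">
  <!-- Submission is blocked, with a message, until both constraints pass -->
  <input type="email" name="email" required>
  <input type="text" name="zip" pattern="[0-9]{5}" title="Five digits">
  <button>Sign Up</button>
</form>
```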

Read more about HTML Interactive Form Validation in WebKit.

Input Events

Input Events simplifies implementing rich text editing experiences on the web in contenteditable regions. The Input Events API adds a new beforeinput event to monitor and intercept default editing behaviors and enhances the input event with new attributes.
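A sketch of intercepting one editing action, assuming a contenteditable region on the page:

```javascript
const editor = document.querySelector("[contenteditable]");

editor.addEventListener("beforeinput", (event) => {
    // inputType describes the pending editing action, e.g. bold formatting.
    if (event.inputType === "formatBold") {
        event.preventDefault(); // cancel the default behavior...
        // ...and apply custom styling here instead.
    }
});
```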

You can read more about Enhanced Editing with Input Events.

HTML5 Download Attribute

The download attribute for anchor elements is now available in Safari 10.1 on macOS. It indicates the link target is a download link that should download a file instead of navigating to the linked resource. It also enables developers to create a link that downloads blob data as files entirely from JavaScript. Clicking a link with a download attribute causes the target resource to be downloaded as a file. The optional value of the download attribute can be used to provide a suggested name for the file.

<a href="" download="webkit-favicon.ico">Download Favicon</a>
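The blob case mentioned above can be sketched like this (the file name and contents are made up):

```javascript
// Build a file in memory and trigger a download for it, entirely from JavaScript.
const blob = new Blob(["Hello, WebKit!"], { type: "text/plain" });
const link = document.createElement("a");
link.href = URL.createObjectURL(blob);
link.download = "hello.txt"; // suggested file name
link.click();
URL.revokeObjectURL(link.href);
```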

Find out more from the Downloading resources section in the HTML specification.

HTML Media Capture

In Safari on iOS, HTML Media Capture extends file input controls in forms to allow users to use the camera or microphone on the device to capture data.

File inputs can be used to capture an image, video, or audio:

<input name="imageCapture" type="file" accept="image/*" capture>
<input name="videoCapture" type="file" accept="video/*" capture>
<input name="audioCapture" type="file" accept="audio/*" capture>

More details are available in the HTML Media Capture specification.

Improved Fixed and Sticky Element Positioning

When using pinch-to-zoom, fixed and sticky element positioning has improved behavior using a “visual viewports” approach. Using the visual viewports model, focusing an input field that triggers the on-screen keyboard no longer disables fixed and sticky positioning in Safari on iOS.

Improved Web Inspector Debugging

The WebKit team added support for debugging Web Worker JavaScript threads in Web Inspector’s Debugger tab. There are also improvements to debugger stepping with highlights for the currently-executing and about-to-execute statements. The highlights make it much clearer what code is going to execute during debugging, especially for JavaScript with complex control flow or many expressions on a single line.

Learn more about JavaScript Debugging Improvements in Web Inspector.

CSS Wide-Gamut Colors

Modern devices support a broader range of colors. Now, web authors can use CSS colors in wide-gamut color spaces, including the Display P3 color space. A new color-gamut media query can be used to test if the display is capable of displaying a given color space. Then, using the new CSS color() function, developers can define a color in a specific color space.

@media (color-gamut:p3) {
    .brightred {
        color: color(display-p3 1.0 0 0);
    }
}
For more information, see the CSS Color Module Level 4 standards specification.

Reduced Motion Media Query

The new prefers-reduced-motion media query allows developers using animation to make accommodations for users with conditions where large areas of motion or drastic movements can trigger physical discomfort. With prefers-reduced-motion, authors can create styles that avoid motion for users that set the reduced motion preference in system settings.

@keyframes decorativeMotion {
    /* Keyframes for a decorative animation */
}

.background {
    animation: decorativeMotion 10s infinite alternate;
}

@media (prefers-reduced-motion) {
    .background {
        animation: none;
    }
}
Read more about Responsive Design for Motion.

Feedback

We’re looking forward to what developers will do with these features to make better experiences for users. These improvements are available to users running iOS 10.3 and macOS Sierra 10.12.4, as well as Safari 10.1 for OS X Yosemite and OS X El Capitan.

Most of these features were also previewed in Safari Technology Preview over the last few months. The changes included in this release of Safari span Safari Technology Preview releases 14, 15, 16, 17, 18, 19, and 20. You can download the latest Safari Technology Preview release to stay on the forefront of future web features.

Finally, we’d love to hear from you! Send a tweet to @webkit or @jonathandavis and let us know which of these features will have the most impact on your design or development work on the web.

By Jon Davis at March 29, 2017 10:00 PM

March 24, 2017

Michael Catanzaro: A Web Browser for Awesome People (Epiphany 3.24)

Igalia WebKit

Are you using a sad web browser that integrates poorly with GNOME or elementary OS? Was your sad browser’s GNOME integration theme broken for most of the past year? Does that make you feel sad? Do you wish you were using an awesome web browser that feels right at home in your chosen desktop instead? If so, Epiphany 3.24 might be right for you. It will make you awesome. (Ask your doctor before switching to a new web browser. Results not guaranteed. May cause severe Internet addiction. Some content unsuitable for minors.)

Epiphany was already awesome before, but it just keeps getting better. Let’s look at some of the most-noticeable new features in Epiphany 3.24.

You Can Load Webpages!

Yeah that’s a great start, right? But seriously: some people had trouble with this before, because it was not at all clear how to get to Epiphany’s address bar. If you were in the know, you knew all you had to do was click on the title box, then the address bar would appear. But if you weren’t in the know, you could be stuck. I made the executive decision that the title box would have to go unless we could find a way to solve the discoverability problem, and wound up following through on removing it. Now the address bar is always there at the top of the screen, just like in all those sad browsers. This is without a doubt our biggest user interface change:

Discover GNOME 3! Discover the address bar!

You Can Set a Homepage!

A very small subset of users have complained that Epiphany did not allow setting a homepage, something we removed several years back since it felt pretty outdated. While I’m confident that not many people want this, there’s not really any good reason not to allow it — it’s not like it’s a huge amount of code to maintain or anything — so you can now set a homepage in the preferences dialog, thanks to some work by Carlos García Campos and myself. Retro! Carlos has even added a home icon to the header bar, which appears when you have a homepage set. I honestly still don’t understand why having a homepage is useful, but I hope this allows a wider audience to enjoy Epiphany.

New Bookmarks Interface

There is now a new star icon in the address bar for bookmarking pages, and another new icon for viewing bookmarks. Iulian Radu gutted our old bookmarks system as part of his Google Summer of Code project last year, replacing our old and seriously-broken bookmarks dialog with something much, much nicer. (He also successfully completed a major refactoring of non-bookmarks code as part of his project. Thanks Iulian!) Take a look:

Manage Tons of Tabs

One of our biggest complaints was that it’s hard to manage a large number of tabs. I spent a few hours throwing together the cheapest-possible solution, and the result is actually pretty decent:

Firefox has an equivalent feature, but Chrome does not. Ours is not perfect, since unfortunately the menu is not scrollable, so it still fails if there is a sufficiently-huge number of tabs. (This is actually surprisingly-difficult to fix while keeping the menu a popover, so I’m considering switching it to a traditional non-popover menu as a workaround. Help welcome.) But it works great up until the point where the popover is too big to fit on your monitor.

Note that the New Tab button has been moved to the right side of the header bar when there is only one tab open, so it has less distance to travel to appear in the tab bar when there are multiple open tabs.

Improved Tracking Protection

I modified our adblocker — which has been enabled by default for years — to subscribe to the EasyPrivacy filters provided by EasyList. You can disable it in preferences if you need to, but I haven’t noticed any problems caused by it, so it’s enabled by default, not just in incognito mode. The goal is to compete with Firefox’s Disconnect feature. How well does it work compared to Disconnect? I have no clue! But EasyPrivacy felt like the natural solution, since we already have an adblocker that supports EasyList filters.

Disclaimer: tracking protection on the Web is probably a losing battle, and you absolutely must use the Tor Browser Bundle if you really need anonymity. (And no, configuring Epiphany to use Tor is not clever, it’s very dumb.) But EasyPrivacy will at least make life harder for trackers.

Insecure Password Form Warning

Recently, Firefox and Chrome have started displaying security warnings on webpages that contain password forms but do not use HTTPS. Now, we do too:

I had a hard time selecting the text to use for the warning. I wanted to convey the near-certainty that the insecure communication is being intercepted, but I wound up using the word “cybercriminal” when it’s probably more likely that your password is being gobbled up by various governments. Feel free to suggest changes for 3.26 in the comments.

New Search Engine Manager

Cedric Le Moigne spent a huge amount of time gutting our smart bookmarks code — which allowed adding custom search engines to the address bar dropdown in a convoluted manner that involved creating a bookmark and manually adding %s to its URL — and replacing it with a real search engine manager that’s much nicer than adding search engines via bookmarks. Even better, you no longer have to drop down to the command line to change the default search engine to something other than DuckDuckGo, Google, or Bing. Yay!

New Icon

Jakub Steiner and Lapo Calamandrei created a great new high-resolution app icon for Epiphany, which makes its debut in 3.24. Take a look.

WebKitGTK+ 2.16

WebKitGTK+ 2.16 improvements are not really an Epiphany 3.24 feature, since users of older versions of Epiphany can and must upgrade to WebKitGTK+ 2.16 as well, but it contains some big improvements that affect Epiphany. (For example, Žan Doberšek landed an important fix for JavaScript garbage collection that has resulted in massive memory reductions in long-running web processes.) And sometimes WebKit improvements are necessary for implementing new Epiphany features; that was true this cycle more than ever. For example:

  • Carlos García added a new ephemeral mode API to WebKitGTK+, and modified Epiphany to use it in order to make incognito mode much more stable and robust, avoiding corner cases where your browsing data could be leaked on disk.
  • Carlos García also added a new website data API to WebKitGTK+, and modified Epiphany to use it in the clear data dialog and cookies dialog. There are no user-visible changes in the cookies dialog, but the clear data dialog now exposes HTTP disk cache, HTML local storage, WebSQL, IndexedDB, and offline web application cache. In particular, local storage and the two databases can be thought of as “supercookies”: methods of storing arbitrary data on your computer for tracking purposes, which persist even when you clear your cookies. Unfortunately it’s still not possible to protect against this tracking, but at least you can view and delete it all now, which is not possible in Chrome or Firefox.
  • Sergio Villar Senin added new API to WebKitGTK+ to improve form detection, and modified Epiphany to use it so that it can now remember passwords on more websites. There’s still room for improvement here, but it’s a big step forward.
  • I added new API to WebKitGTK+ to improve how we handle giving websites permission to display notifications, and hooked it up in Epiphany. This fixes notification requests appearing inappropriately on websites like the

Notice the pattern? When there’s something we need to do in Epiphany that requires changes in WebKit, we make it happen. This is a lot more work, but it’s better for both Epiphany and WebKit in the long run. Read more about WebKitGTK+ 2.16 on Carlos García’s blog.

Future Features

Unfortunately, a couple of exciting Epiphany features we were working on did not make the cut for Epiphany 3.24. The first is Firefox Sync support. This was developed by Gabriel Ivașcu during his Google Summer of Code project last year, and it’s working fairly well, but there are still a few problems. First, our current Firefox Sync code is only able to sync bookmarks, but we really want it to sync much more before releasing the feature: history and open tabs at the least. Also, although it uses Mozilla’s sync server (please thank Mozilla for their quite liberal terms of service allowing this!), it’s not actually compatible with Firefox. You can sync your Epiphany bookmarks between different Epiphany browser instances using your Firefox account, which is great, but we expect users will be quite confused that those bookmarks do not sync with their Firefox bookmarks, which are stored separately. Some things, like preferences, will never be possible to sync with Firefox, but we can surely share bookmarks. Gabriel is currently working to address these issues while participating in the Igalia Coding Experience program, and we’re hopeful that sync support will be ready for prime time in Epiphany 3.26.

Also missing is HTTPS Everywhere support. It’s mostly working properly, thanks to lots of hard work from Daniel Brendle (grindhold) who created the libhttpseverywhere library we use, but it breaks a few websites and is not really robust yet, so we need more time to get this properly integrated into Epiphany. The goal is to make sure outdated HTTPS Everywhere rulesets do not break websites by falling back automatically to use of plain, insecure HTTP when a load fails. This will be much less secure than upstream HTTPS Everywhere, but websites that care about security ought to be redirecting users to HTTPS automatically (and also enabling HSTS). Our use of HTTPS Everywhere will just be to gain a quick layer of protection against passive attackers. Otherwise, we would not be able to enable it by default, since the HTTPS Everywhere rulesets are just not reliable enough. Expect HTTPS Everywhere to land for Epiphany 3.26.

Help Out

Are you a computer programmer? Found something less-than-perfect about Epiphany? We’re open for contributions, and would really appreciate it if you would try to fix that bug or add that feature instead of slinking back to using a less-awesome web browser. One frequently-requested feature is support for extensions. This is probably not going to happen anytime soon — we’d like to support WebExtensions, but that would be a huge effort — but if there’s some extension you miss from a sadder browser, ask if we’d allow building it into Epiphany as a regular feature. Replacements for popular extensions like NoScript and Greasemonkey would certainly be welcome.

Not a computer programmer? You can still help by reporting bugs on GNOME Bugzilla. If you have a crash to report, learn how to generate a good-quality stack trace so that we can try to fix it. I’ve credited many programmers for their work on Epiphany 3.24 up above, but programming work only gets us so far if we don’t know about bugs. I want to give a shout-out here to Hussam Al-Tayeb, who regularly built the latest code over the course of the 3.24 development cycle and found lots of problems for us to fix. This release would be much less awesome if not for his testing.

OK, I’m done typing stuff now. Onwards to 3.26!

By Michael Catanzaro at March 24, 2017 01:18 AM

March 22, 2017

Release Notes for Safari Technology Preview 26

Surfin’ Safari

Safari Technology Preview Release 26 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 213542-213822.



  • Added support for history.scrollRestoration (r213590)
  • Aligned Document.elementFromPoint() with the CSSOM specification (r213646)
  • Changed the parameter to input.setCustomValidity() to not be nullable (r213606)
  • Fixed transitions and animations of background-position with right-relative and bottom-relative values (r213603)
  • Fixed an issue where WebSQL directories were not removed when removing website data (r213547)
  • Made the XMLHttpRequest method setRequestHeader() use “, ” (a comma followed by a space) as the separator (r213766)
  • Prevented displaying the label of an <option> element in quirks mode (r213542)
  • Prevented extra downloads of preloaded CSS (r213672)
  • Dropped support for non-standard document.all.tags() (r213619)


  • Implemented stroke-width CSS property (r213634)


  • Enabled asynchronous image decoding for large images (r213764, r213563)
  • Fixed memory estimate for layers supporting subpixel-antialiased text (r213767)
  • Fixed columns getting clipped horizontally in CSS Multicolumn (r213593)

Web Inspector

  • Added DOM breakpoints for pausing on node and subtree modifications (r213626)
  • Added XHR breakpoints for pausing on requests by URL (r213691)
  • Added a “Create Breakpoint” context menu item for linked source locations (r213617)
  • Added settings for controlling Styles sidebar intelligence (r213635)
  • Added cache source information (Memory Cache or Disk Cache) in the Network tab (r213621)
  • Added protocol, remote address, priority, and connection ID in the Network tab (r213682)
  • Added individual messages to the content pane for a WebSocket (r213666)
  • Fixed an issue where the DOM tree is broken if an element has a debounce attribute (r213565)
  • Fixed an issue in the Resources tab navigation bar allowing the same file from a contextual menu item to be saved more than once (r213738)
  • Improved the layout of the compositing reasons in the Layers sidebar popover (r213739)


  • Fixed an issue where automation commands hang making it impossible to navigate back or forward (r213790)


  • Implemented ECDH ImportKey and ExportKey operations (r213560)

By Jon Davis at March 22, 2017 05:00 PM

March 20, 2017

Carlos García Campos: WebKitGTK+ 2.16

Igalia WebKit

The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically improves the memory consumption, adds new API as required by applications, includes new debugging tools, and of course fixes a lot of bugs.

Memory consumption

After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which made hardware acceleration always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the biggest improvement in memory consumption, but not the only one. Even in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. In Mesa-based drivers that means the software rasterizer fallback is never required, so the context doesn’t need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

CSS Grid Layout

Yes, the future is here, and it is now available by default in all WebKitGTK+-based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you have all the details in Manuel’s blog.


The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

Hardware acceleration policy

Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior, WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, only by developers to test the different code paths. The main problem with those variables is that they apply to all web views of the application. Not all WebKitGTK+ applications are web browsers, so it can happen that an application knows it will never need hardware acceleration for a particular web view (like the Evolution composer), while other applications, especially in the embedded world, always want hardware acceleration enabled and don’t want to waste time and resources switching between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.

Network proxy settings

Since the switch to WebKit2, where the SoupSession is no longer available from the API, it hasn’t been possible to change the network proxy settings from the API. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don’t run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases WebKitGTK+ 2.16 includes a new UI process API to configure all proxy settings available in GProxyResolver API.

Private browsing

WebKitGTK+ has always had a WebKitSetting to enable or disable private browsing mode, but it has never worked really well. For that reason, applications like Epiphany have always implemented their own private browsing mode just by using a different profile directory in /tmp to write all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in /tmp with all the personal data there. WebKitGTK+ 2.16 adds a new API for creating ephemeral web views, which never write any persistent data to disk. It’s possible to create ephemeral web views individually, or to create ephemeral web contexts where all associated web views are automatically ephemeral.

Website data

WebKitWebsiteDataManager was added in 2.10 to configure the default paths on which website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded to include methods to retrieve and remove the website data stored on the client side. Not only persistent data like HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement the new personal data dialog.

Dynamically added forms

Web browsers normally implement the remember-passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically after the document has been loaded, and in those cases web browsers couldn’t find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal to notify when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.

Custom print settings

The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16 a similar API to the GTK+ one has been added to recover that functionality in evolution.

Notification improvements

Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It’s also possible to get the tag identifier of a WebKitNotification.

Debugging tools

Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

Memory sampler

This tool monitors the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes automatically take samples of memory usage every second. For every sample a detailed report of the memory used by the process is generated and written to a file in the temp directory.

Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036

The files contain a list of sample reports like this one:

Timestamp                          1490004807
Total Program Bytes                1960214528
Resident Set Bytes                 84127744
Resident Shared Bytes              68661248
Text Bytes                         4096
Library Bytes                      0
Data + Stack Bytes                 87068672
Dirty Bytes                        0
Fast Malloc In Use                 86466560
Fast Malloc Committed Memory       86466560
JavaScript Heap In Use             0
JavaScript Heap Committed Memory   49152
JavaScript Stack Bytes             2472
JavaScript JIT Bytes               8192
Total Memory In Use                86477224
Total Committed Memory             86526376
System Total Bytes                 16729788416
Available Bytes                    5788946432
Shared Bytes                       1037447168
Buffer Bytes                       844214272
Total Swap Bytes                   1996484608
Available Swap Bytes               1991532544

Resource usage overlay

The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It shows an overlay with information about the resources currently in use by the web process, such as CPU usage, total memory consumption, JavaScript memory, and JavaScript garbage collector timers. The overlay can be shown/hidden by pressing Ctrl+Shift+G.

We plan to add more information to the overlay in the future like memory cache status.

By carlos garcia campos at March 20, 2017 03:19 PM

Enrique Ocaña: Media Source Extensions upstreaming, from WPE to WebKitGTK+

Igalia WebKit

A lot of good things have happened to the Media Source Extensions support since my last post, almost a year ago.

The most important piece of news is that the code upstreaming has kept going forward at a slow, but steady pace. The amount of code Igalia had to port was pretty big. Calvaris (my favourite reviewer) and I considered that the regular review tools in WebKit bugzilla were not going to be enough for a good exhaustive review. Instead, we did a pre-review in GitHub using a pull request on my own repository. It was an interesting experience, because the change set was so large that it had to be (artificially) divided into smaller commits just to avoid reaching GitHub diff display limits.

394 GitHub comments later, the patches were mature enough to be submitted to bugzilla as child bugs of Bug 157314 – [GStreamer][MSE] Complete backend rework. After some comments more in bugzilla, they were finally committed during Web Engines Hackfest 2016:

Some unforeseen regressions in the layout tests appeared, but after a couple more commits, all the mediasource WebKit tests were passing. There are also some other tests imported from W3C, but I kept them skipped because webm support was needed for many of them. I’ll focus again on that set of tests in due time.

Igalia is proud of having brought the MSE support up to date in WebKitGTK+. Eventually, this will improve the browser video experience for a lot of users of Epiphany and other web browsers based on that library. Here’s how it enables the usage of YouTube TV at 1080p@30fps on desktop Linux:

Our future roadmap includes bugfixing and webm/vp9+opus support. This support is important for users from countries enforcing patents on H.264. The current implementation can’t be included in distros such as Fedora for that reason.

As mentioned before, part of this upstreaming work happened during Web Engines Hackfest 2016. I’d like to thank our sponsors for having made this hackfest possible, as well as Metrological for giving upstreaming the importance it deserves.

Thank you for reading.


By eocanha at March 20, 2017 11:55 AM

March 15, 2017

Manuel Rego: CSS Grid Layout is Here to Stay

Igalia WebKit

It’s been a long journey but finally CSS Grid Layout is here! 🚀 In the past week, Chrome 57 and Firefox 52 were released, becoming the first browsers to ship CSS Grid Layout unprefixed (Explorer/Edge has been shipping an older, prefixed version of the spec since 2012). Not only that, but Safari will hopefully be shipping it very soon too.

I’m probably biased after having worked on it for a few years, but I believe CSS Grid Layout is going to be a big step in the history of the Web. Web authors have been waiting for a solution like this since the early days of the Web, and now they can use a very powerful and flexible layout module supported natively by the browser, without the need of any external frameworks.

Igalia has been playing a major role in the implementation of CSS Grid Layout in Chromium/Blink and Safari/WebKit since 2013 sponsored by Bloomberg. This is a blog post about that successful collaboration.

A blast from the past

Grids are not something new at all, since we can even find references to them in some of the initial discussions of the CSS creators. Next is an excerpt from a mail by Håkon Wium Lie in June 1995 to www-style:

Grids! Let the style sheet carve up the canvas into golden rectangles, and use an expert system to lay out the elements!! Ok, drop the expert system and define a set of simple rules that we hardcode.. whoops! But grids do look nice!


Since that time the Web hasn’t stopped moving and there have been different solutions and approaches to try to solve the problem of having grid-based designs in HTML/CSS.

At the beginning of the decade Microsoft started to work on what eventually became the initial CSS Grid Layout specification. This spec was based on the Internet Explorer 10 implementation and the experience gathered by Microsoft during its development. IE10 was released in 2012, shipping a prefixed version of that initial spec.

Then Google started to add support to WebKit at the end of 2011. At that time, WebKit was the engine used by both Chromium and Safari; later in 2012 it would be forked to create Blink.

Meanwhile, Mozilla had not started the Grid implementation in Firefox as they had some conflicts with their XUL grid layout type.

Igalia and Bloomberg collaboration

Bloomberg uses Chromium, and they were looking forward to having a proper solution for their layout requirements. They had detected performance issues due to the limitations of the layout modules available on the Web at the time, and they saw CSS Grid Layout as the right way to fix those problems and cover their needs.

Bloomberg decided to push CSS Grid Layout implementation as part of the collaboration with Igalia. My colleagues, Sergio Villar and Xan López, started to work on CSS Grid Layout around the summer of 2013. In 2014, Javi Fernández and I replaced Xan, joining the effort as well. We’ve been working on this for more than 3 years and counting.

At the beginning, we were working together with some Google folks but later Igalia took the lead role in the development of the specification. The spec has evolved and changed quite a lot since 2013, so we’ve had to deal with all these changes always trying to keep our implementations up to date, and at the same time continue to add new features. As the codebase in Blink and WebKit was still sharing quite a lot of things after the fork, we were working on both implementations at the same time.

Igalia and Bloomberg working together to build a better web

The results of this collaboration have been really satisfying: CSS Grid Layout has now shipped in Chromium and is enabled by default in WebKit too (which will hopefully mean that it’ll ship in the upcoming Safari 10.1 release as well).

Thanks @jensimmons for the feedback regarding Safari 10.1.

And now what?

Update your browsers, be sure you grab a version with Grid Layout support and start to use CSS Grid Layout, play with it, experiment and so on. We’d love to get bug reports and feedback about it. It’s too late to change the current version of the spec, but ideas for a future version are already being recorded in the CSS Working Group GitHub repository.

If you want to start with Grid Layout, there are plenty of resources available on the Internet:

It’s possible to think that now that CSS Grid Layout has shipped, it’s all over. Nothing could be further from the truth, as there is still a lot of work to do:

  • An important step would be to complete the W3C Test Suite. Igalia has been contributing to it and it’s currently imported into Blink and WebKit, but it doesn’t cover the whole spec yet.
  • There are some missing features in the current implementations. For example, nobody supports subgrids yet, though web authors tell us they would love to have them. Another example: in Blink and WebKit we are still finishing support for baseline alignment.
  • When bugs and issues appear they will need to be fixed and some might even imply some minor modifications to the spec.
  • Performance optimizations should be done. CSS Grid Layout is a huge spec, so the biggest part of the effort so far has gone into the implementation. Now it’s time to improve the performance of different use cases.
  • And as I explained earlier, people are starting to think about new features for a future version of the spec. Progress won’t stop now.


First of all, it’s important to highlight once again Bloomberg’s role in the development of CSS Grid Layout. Without their vision and support it probably would not have shipped so soon.

But this is not an individual effort, but something much bigger. I’ll mention several people next, but I’m sure I’ll forget a lot of them, so please forgive me in advance.

So big thanks to:

  • The Microsoft folks who started the spec.
  • The current spec editors: Elika J. Etemad (fantasai), Rossen Atanassov, and Tab Atkins Jr. Especially fantasai & Tab, who have been dealing with most of the issues we have reported.
  • The whole CSS Working Group for their work on this spec.
  • Our reviewers in both Blink and WebKit: Christian Biesinger, Darin Adler, Julien Chaffraix, and many others.
  • Other implementors: Daniel Holbert, Mats Palmgren, etc.
  • People spreading the word about CSS Grid Layout: Jen Simmons, Rachel Andrew, etc.
  • The many other people I’m missing in this list who helped to make CSS Grid Layout the newest layout module for the Web.

Thanks to you all! 😻 And particularly to Bloomberg for letting Igalia be part of this amazing experience. We’re really happy to have walked this path together and we really hope to do more cool stuff in the future.


March 15, 2017 11:00 PM

March 09, 2017

CSS Grid Layout: A New Layout Module for the Web

Surfin’ Safari

People have been using grid designs in magazines, newspapers, posters, etc. for a long time before the Web appeared. At the point when web developers started to create web pages, many of them were based on a grid layout. Different solutions have been used to create grid layouts, like tables, floats, inline blocks, or flexboxes, but all of these techniques have different issues when you try to define a complex grid design.

In order to solve these problems, a new standard was defined to provide a good solution to create grid designs. This specification, called CSS Grid Layout, allows users to very easily create two-dimensional layouts on the Web. It has been designed specifically for this purpose and it brings very powerful features to divide the web page into different regions, granting great flexibility to web authors in order to define the sizing of the different sections and how the elements are positioned in each of them.

Grid Layout has been under development in WebKit for a while, and you can experiment with Grid Layout today using WebKit in Safari Technology Preview.

Basic Concepts

A grid is a structure made up of a series of intersecting lines. The main concepts of the CSS Grid Layout spec are:

  • The grid lines that define the grid: they can be horizontal or vertical,
    and they are numbered starting at 1.
  • The grid tracks, which are the rows (horizontal)
    or columns (vertical) defined in the grid.
  • The grid cells, the intersection of a row and a column.
  • A grid area, one or more adjacent grid cells that define a rectangle.
Grid Concepts

Grid Definition

To create a grid you just need to use a new value for the display property: grid or inline-grid. This is the same syntax as Flexbox, so you might already be used to it.

Then you will need to define the structure of your grid. For example, to define the size of the tracks you can do something like this:

display: grid;
grid-template-rows: 100px 100px;
grid-template-columns: 400px 200px 100px;

This will create a grid with two rows of 100px each, and three columns of 400px, 200px, and 100px respectively. This grid will have four vertical lines (1, 2, 3, 4) and three horizontal lines (1, 2, 3). You can also name the lines so you can reference them easily later. For example:

    grid-template-columns: [first] 400px [main] 200px [side] 100px [last];
Grid Definition

Regarding track sizing, you have a lot of flexibility:

  • You can define a fixed size track, setting the length or percentage.
  • You can define an intrinsic-sized track, where the size is based on the size of its content, using auto, min-content, max-content, or fit-content.
  • You can use a new unit fr to take advantage of the available space.

In addition, there are some functions that can be useful to set the track sizes:

  • minmax(), to set the minimum and maximum size of the track, so its size will end up depending on the available space and the size of the rest of the tracks.
  • repeat(), to define a fixed number of repetitions. It can also be used in automatic mode to cause the number of tracks to depend on the available space.
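
For instance, the two functions can be combined; a common responsive pattern (the 200px minimum below is an arbitrary example) is:

```css
/* Create as many columns as fit in the container, each at least
   200px wide, with the leftover space shared equally among them. */
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
```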

There is a special syntax that allows you to define the grid structure using ASCII art:

    grid-template-areas: "header  header"
                         "sidebar main  "
                         "sidebar footer";

In this example we create a grid with three rows and two columns, where the first row is the header, the rest of the first column is the sidebar, the main content is in the second row and second column, and the footer is in the last row and column.

Grid Areas

You can also define the gutter between tracks. For that, you just need to use the grid-row-gap and grid-column-gap properties.

Item Placement

The children of a grid container are called grid items. They can be positioned in the different parts of the grid using the placement properties grid-row-start, grid-row-end, grid-column-start, and grid-column-end. But in most cases you’ll be using the shorthands grid-row, grid-column, and grid-area.

Note that these properties refer to the grid lines, so if you want to put an element on the third row and the second column you can use something like this:

    grid-row: 3;
    grid-column: 2;

The item can also span several lines, so you can use the following syntax to take three rows and two columns:

    grid-row: 2 / 5;
    grid-column: 3 / span 2;
Grid Placement

Apart from that, you can also refer to named lines or areas with these properties, which is very convenient in some scenarios.

As you can imagine, when you’re using Grid Layout you can very easily break the relationship between the DOM order and the visual order. You have to be careful to keep the right order in the DOM to avoid making your content less accessible.

Lastly, there’s also the possibility to let the items place themselves into the grid. If you don’t set any placement property (or if you use auto), the items will be automatically placed on some empty cell of the grid, creating new rows (by default) or columns (if specified through grid-auto-flow property) when required.
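
That automatic placement can be steered with the grid-auto-flow property; a small sketch:

```css
/* Place auto-positioned items column by column instead of row by
   row, creating implicit 100px-wide columns as needed. */
grid-auto-flow: column;
grid-auto-columns: 100px;
```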


One of the big things that comes for free when you use CSS Grid Layout is the alignment support. In Grid Layout you can align horizontally and vertically with just a few simple CSS properties.

The alignment capabilities of Grid Layout operate over two different subjects: grid tracks, with regard to the grid container, and grid items in their respective grid areas. In addition, we can operate on both axes, horizontally and vertically.

The CSS properties justify-content and align-content apply to grid tracks to align them horizontally and vertically, respectively. These properties, which define what is known as Content Distribution behavior, can also be used to distribute the grid container’s available space among the tracks following different distributions: space-between, space-around, space-evenly, and stretch. For example, check the following grid:

    display: grid;
    grid-template-rows: 100px 100px;
    grid-template-columns: 150px 150px 150px;
    height: 500px;
    width: 650px;
    align-content: center;
    justify-content: space-evenly;
Grid Alignment Tracks

When it comes to aligning the grid items, the justify-self and align-self properties are used to align horizontally and vertically, respectively. These properties define the Self Alignment behavior of the grid items. It’s possible to define a default behavior for all the items of a grid container by using the Default Alignment properties, align-items and justify-items, which gives an incredible syntactic flexibility for defining the grid’s alignment behavior.
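
For example, the container can set defaults for every item, which an individual item can then override (a minimal sketch):

    align-items: center;
    justify-items: start;

And on one particular grid item:

    align-self: end;
    justify-self: stretch;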

Grid Alignment Items

Responsive Design with Grid

As you can already imagine, all the different Grid Layout properties make it much easier to create responsive designs. You can take advantage of powerful track sizing mechanisms like the fr unit, minmax(), or repeat(), and combine them with media queries to completely change the structure of your grid with just a few lines of CSS.

For example:

    display: grid;
    grid-gap: 10px 20px;
    grid-template-rows: 100px 1fr auto;
    grid-template-columns: 1fr 200px;
    grid-template-areas: "header  header"
                         "content aside "
                         "footer  aside ";

    @media (max-width: 600px) {
        grid-gap: 0;
        grid-template-rows: auto 1fr auto auto;
        grid-template-columns: 1fr;
        grid-template-areas: "header "
                             "content"
                             "aside  "
                             "footer ";
    }
Responsive grid using flexible sizes and media queries

And Much More

This is just an introductory blog post about Grid Layout, not a deep review of all the different features that it provides: that would require much more than a single post. The different examples explained in this blog post have been published online. You can start to play with them now!

If you want to learn more about CSS Grid Layout, there are plenty of good resources out there.

On top of that, several people have been talking about Grid Layout at different conferences and events. You won’t have problems finding some of the talks published online.


CSS Grid Layout is here to stay. We’re looking forward to seeing this soon in shipping versions of Safari and other web browsers. This is very good news for the Web authors that have been waiting for a tool like this for years. We believe this is going to be a huge step forward for the Web.

The implementation of Grid Layout in WebKit has been performed by Igalia’s Web Platform team and sponsored by Bloomberg. If you have any comments or questions, don’t hesitate to contact any of the people working on it: Javi (@lajava77), Manuel (@regocas), or Sergio (@svillarsenin). If you want to keep track of the development you can follow bug #60731. New bug reports are very welcome, especially now that Grid Layout is hitting your browser and testing is easier than ever. Exciting times ahead!

By Manuel Rego at March 09, 2017 06:00 PM

March 08, 2017

Release Notes for Safari Technology Preview 25

Surfin’ Safari

Safari Technology Preview Release 25 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 212356-213542.

Resource Timing

  • Added Resource Timing as an experimental feature enabled by default (r212945)
  • Added Resource Timing support in Workers (r212449)
  • Improved gathering timing information with reliable responseEnd time (r212993)
  • Changed loads initiated by media elements to set the initiatorType to their element name (r212994)

User Timing

  • Enabled User Timing by default as an experimental feature (r212945)
  • Changed performance.measure in Workers to throw a SyntaxError if provided mark name is not found (r212806)


  • Added support for AES-CFB (r212736)


  • Added a new webglcontextchanged event that is dispatched when the GraphicsContext3D notices that the active GPU has changed (r212637)
  • Changed the onbeforeunload event return value coercion to match specification behavior (r212625)
  • Exposed Symbol.toPrimitive and Symbol.valueOf on Location instances (r212378)
  • Fixed <input type=color> and <input type=range readonly> to prevent applying the readonly attribute to match specifications (r212617, r212610)
  • Fixed handling of <input>.labels when the input type changes from "text" to "hidden" to "checkbox" (r212522)
  • Prevented aggressive throttling of DOM timers until they’ve reached their maximum nesting level (r212845)

Web Inspector

  • Enabled import() for modules in Web Inspector console (r212438)
  • Changed the zoom level in the Settings tab to use localized formatting (r212578)
  • Changed Web Inspector to use the Resources tab when showing files instead of the Network tab (r212761)
  • Changed the split console to be allowed in the Elements, Resources, Debugger, and Storage tabs when Web Inspector is docked to the bottom (r212400)
  • Changed CSS variable uses that are unresolved to get marked with a warning icon (r213187)
  • Fixed the Zoom level user interface to match the setting value (r212580)
  • Prevented dismissing popovers when dragging a Web Inspector window (r212427)
  • Improved copy and paste behavior for request headers (r212423)
  • Included additional detail in the display name of Timeline data elements (r212570)


  • Fixed centering text inside a button set to display:flex with justify-content:center (r213173)
  • Unprefixed -webkit-line-break (r213094)


  • Prevented fixed elements from bouncing when scrolling beyond the bottom of the page (r212559)
  • Improved text wrapping consistency where text might wrap when its preferred logical width is used for sizing the containing block (r213008)


  • Fixed local audio-only streams to trigger playback to begin (r212696)

Bug Fixes

  • Changed pending scripts to execute asynchronously after stylesheet loads are completed (r212614)
  • Fixed an issue where font-weight in @font-face can cause a font to be downloaded even when it’s not used (r212513)
  • Made special URLs without a host invalid (r212470)

By Jon Davis at March 08, 2017 06:00 PM

February 22, 2017

Release Notes for Safari Technology Preview 24

Surfin’ Safari

Safari Technology Preview Release 24 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 211256-212356.

User Timing

  • Added User Timing as an experimental feature (r211332)
  • Implemented PerformanceObserver for User Timing (r211406)
  • Added support for the Performance API (UserTiming) in Workers (r211594)

Link Preload

  • Added <link preload> as an experimental feature (r211341)
  • Added support for speculative resource loading (r211480)
  • Prevented preloaded resources from being cleared after parsing is done (r211649)
  • Addressed memory issues related to clearing preloaded resources (r211673)


  • Changed Location object to throw a TypeError for Object.preventExtensions() (r211778)
  • Changed Pointer Lock to require keyboard focus (r211652)
  • Changed Pointer Lock events to be delivered directly to the target element (r211650)
  • Changed the HTML Form Validation popover to be dismissed when pressing the Escape key (r211653)
  • Changed the HTML Form Validation popover to respect the minimum font size setting (r212325)
  • Fixed an issue causing Fetch to fail when passing undefined as the headers (r212162)
  • Fixed the <details> element to work correctly when content is changed between closing and opening (r212027)
  • Implemented toJSON() for URL objects (r212193)
  • Improved URL specification compliance (r211636, r212279)
  • Prevented a redundant scroll to top-left corner of the page when navigating back to a URL with no fragment (r212197)
  • Made Symbols configurable when exposed on cross-origin Window or Location objects (r211772)


  • Implemented dynamic import operator (r211280)
  • Changed dynamic import through setTimeout() and setInterval() to correctly inherit SourceOrigin (r211314)
  • Changed scripts load priority to ‘high’ (r211334)
  • Fixed Apple Pay line validation to prevent validating line items that are “pending” (r211446)
  • Implemented ArrayBuffer.prototype.byteLength and SharedArrayBuffer.prototype.byteLength (r212196)
  • Implemented lifting template escape sequence restrictions in tagged templates (r211319)


  • Fixed elements with a backdrop-filter and a mask to correctly mask the backdrop (r211305)
  • Updated line-break:auto to match the latest version of Unicode (r212235)

Web Inspector

  • Enabled the console to evaluate dynamic module import() (r211777)
  • Added CSS color keyword entries for all “grey” and “gray” variations (r211452)
  • Added stroke-linecap property values to CSS autocompletion (r211640)
  • Added a horizontal slider for gradient editor angle value where applicable (r211318)
  • Added a limit on Async Call Stacks for asynchronous loops (r211385)
  • Added a setting to preserve network data on navigation for the Network tab (r211451)
  • Added the ability to show the current value of CSS variables in style rules (r212273)
  • Added a warning that webkitSubtle in WebCrypto is deprecated (r212261)
  • Changed docking Web Inspector to collapse the split console in the Timeline and Network tabs (r211976)
  • Fixed jumping from Search tab results to see the resource in other tabs (Resource, Debugger, Network) (r211608)
  • Fixed a Debugger sidebar panel issue that can cause it to have multiple tree selections (r212171)
  • Fixed DOM tree view collapsing when switching back to the Elements tab (r211829)
  • Removed the Shift-Command-W (⇧⌘W) shortcut to close a tab (r211485)


  • Fixed text string range from index and length in text controls when there are newlines (r211491)


  • Fixed column progression after enabling pagination on a right-to-left document (r211564)


  • Suspended SVG animations on hidden pages (r211612)
  • Avoided initially creating a layer backing store for elements outside the visible area (r211845)


  • Changed CSS data URL resources to be treated as same origin loads when loaded through HTML <link> elements (r211926)

By Jon Davis at February 22, 2017 06:00 PM

February 15, 2017

JavaScript Debugging Improvements

Surfin’ Safari

Debugging JavaScript is a fundamental part of developing web applications. Having effective debugging tools makes you more productive by making it easier to investigate and diagnose issues when they arise. The ability to pause and step through JavaScript has always been a core feature of Web Inspector.

The JavaScript debugger hasn’t changed in a long time, but the Web and the JavaScript language have. We recently took a look at our debugger to see how we could improve the experience and make it even more useful for you.


Web Inspector now includes extra highlights for the active statement or expression that is about to execute. For previous call frames, we highlight the expression that is currently executing.

Previously, Web Inspector would only highlight the line where the debugger was paused. However, knowing exactly where on the line the debugger was and therefore what was about to execute might not be obvious. By highlighting source text ranges of active expressions we eliminate any confusion and make stepping through code easier and faster.

For example, when stepping through the following code it is always immediately clear what is about to execute, even when we are executing a small part of a larger statement:

Debugger Stepping Highlights

Highlighting expressions is also useful when looking at previous call frames in the Call Stack. Again, when selecting parent call frames it is always immediately clear where we are currently executing:

Parent Call Frame Expression Highlight

We also made improvements to the stepping behavior itself. We eliminated unnecessary pauses, added pause points that were previously missed, and generally made pausing locations more consistent between old and new syntaxes of the language. Stepping in and out of functions is also more intuitive. Combined with the new highlights, stepping through complex code is now easier than ever.


Web Inspector is now smarter and more forgiving about where it resolves breakpoints, making them more consistent and useful.

Previously, setting a breakpoint on an empty line or a line with a comment would create a breakpoint that would never get triggered. Now, Web Inspector installs the breakpoint on the next statement following the location where the breakpoint was set.

We also made it simpler to set a breakpoint for a function or method. Previously, you had to find the first statement within the function and set a breakpoint on that line. Now, you can just set a breakpoint on the line with the function name or its opening brace and the breakpoint will trigger on the first statement in the function.

New Acceptable Breakpoint Locations

A new global breakpoint was added for Assertion failures triggered by console.assert. This breakpoint can be found beside the existing global breakpoints, such as pausing on uncaught exceptions.

Asynchronous Call Stacks

JavaScript functions make it very convenient to evaluate code asynchronously. Callbacks, Events, Timers, and new language features such as Promises and async functions make it easier than ever to run asynchronous code.

Debugging these kinds of asynchronous chains can be complex. Web Inspector now makes it much easier to debug asynchronous code by displaying the call stacks across asynchronous boundary points. Now when your timer fires and you pause inside your callback, you can see the call stack from where the timer was scheduled, and so on if that code was itself triggered asynchronously.

WebKit currently records asynchronous call stacks in just a few places and is actively bringing support to more features like Promises.

Web Workers

While JavaScript itself is single threaded, Web Workers allow web applications to run scripts in background threads. Web Inspector can now debug scripts in Workers just as easily as scripts in the Page.

Worker Resources in Resources Tab

When inspecting a page with Workers, Worker resources will show in the Resources tab sidebar. Each Worker becomes a top level resource like the Page, allowing you to quickly see the list of active Workers and their scripts and resources.

An execution context picker becomes available in the quick console, allowing you to choose to evaluate JavaScript in either the Page’s context or a Worker’s context. Workers have dramatically improved console logging support, so you will be able to interact with objects logged from a Worker just as you would expect.

Setting breakpoints behaves as expected. When any single context pauses, all other contexts are immediately paused. Selecting call frames inside of a particular thread allows you to step just that individual thread. Use the Continue debugger control to resume all scripts.


When debugging a page with Workers, Web Inspector adds a thread name annotation next to the debugger highlights. If you have multiple Workers, or even Workers and the Page, all paused and stepping through the same script you will be able to see exactly where each thread is.

WebKit currently only supports debugging Workers. Profiling Worker scripts with Timelines will come in the future.

Code Coverage and Type Profiling

Web Inspector also supports Code Coverage and Type profiling.

Previously, Web Inspector had a single button to toggle both profilers. A new [C] button was added to toggle just Code Coverage. The [T] button now only toggles the Type Profiler.


You can try out all of these improvements in the latest Safari Technology Preview. Let us know how they work for you. Send feedback on Twitter (@webkit, @JosephPecoraro) or by filing a bug.

By Joseph Pecoraro at February 15, 2017 07:00 PM

February 10, 2017

Carlos García Campos: Accelerated compositing in WebKitGTK+ 2.14.4

Igalia WebKit

The WebKitGTK+ 2.14 release was very exciting for us: it finally introduced the threaded compositor, drastically improving accelerated compositing performance. However, the threaded compositor required accelerated compositing to always be enabled, even for non-accelerated contents. Unfortunately, this caused various kinds of problems for several people, and proved that we are not ready to render everything with OpenGL yet. The most relevant problems reported were:

  • Memory usage increase: OpenGL contexts use a lot of memory, and since the compositor runs in the web process, we have at least one OpenGL context in every web process. The threaded compositor uses the coordinated graphics model, which also requires more memory than the simple mode we previously used. People who use a lot of tabs in epiphany quickly noticed that the amount of memory required was a lot more.
  • Startup and resize slowness: The threaded compositor makes everything smooth and performs quite well, except at startup or when the view is resized. At startup we need to create the OpenGL context, which is quite slow by itself, but we also need to create the compositing thread, so things are expected to be slower. Resizing the viewport is the only threaded compositor task that needs to be done synchronously, to ensure that the web view in the UI process, the OpenGL viewport, and the backing store surface all stay in sync. This means we need to wait until the threaded compositor has updated to the new size.
  • Rendering issues: Some people reported rendering artifacts or even nothing rendered at all. In most of the cases these were not issues in WebKit itself, but in the graphics driver or library. It’s quite difficult for a general-purpose web engine to support and deal with all possible GPUs, drivers, and libraries. Chromium has a huge list of hardware exceptions to disable some OpenGL extensions or even hardware acceleration entirely.

Because of these issues people started to use different workarounds. Some people, and even applications like evolution, started to use the WEBKIT_DISABLE_COMPOSITING_MODE environment variable, which was never meant for users, but for developers. Other people just started to build their own WebKitGTK+ with the threaded compositor disabled. We didn’t remove the build option because we anticipated that some people using old hardware might have problems. However, it’s a code path that is not tested at all and will certainly be removed in 2.18.

All these issues are not really specific to the threaded compositor, but to the fact that it forced accelerated compositing mode to be always enabled, using OpenGL unconditionally. It looked like a good idea: entering/leaving accelerated compositing mode was a source of bugs in the past, and all other WebKit ports force accelerated compositing mode too. Other ports use UI-side compositing, though, or target very specific hardware, so the memory problems and the driver issues are not a problem for them. The requirement to force accelerated compositing mode came from the switch to coordinated graphics, because, as I said, other ports using coordinated graphics have accelerated compositing mode always enabled, so they didn’t care about the case of it being disabled.

There are a lot of long-term things we can do to improve all these issues, like moving the compositor to the UI (or a dedicated GPU) process to have a single GL context, implementing tab suspension, etc., but we really wanted to fix or at least improve the situation for 2.14 users. Switching back to using accelerated compositing mode on demand is something we could do in the stable branch, and it would improve things, at least to a level comparable to what we had before 2.14, but with the threaded compositor. Making it happen was a matter of fixing a lot of bugs, and the result is this 2.14.4 release. Of course, this will be the default in 2.16 too, where we have also added API to set a hardware acceleration policy.

We recommend all 2.14 users to upgrade to 2.14.4 and stop using the WEBKIT_DISABLE_COMPOSITING_MODE environment variable or building with the threaded compositor disabled. The new API in 2.16 will allow setting a policy for every web view, so if you still need to disable or force hardware acceleration, please use the API instead of WEBKIT_DISABLE_COMPOSITING_MODE and WEBKIT_FORCE_COMPOSITING_MODE.

We really hope this new release and the upcoming 2.16 will work much better for everybody.

By carlos garcia campos at February 10, 2017 05:18 PM

February 08, 2017

Release Notes for Safari Technology Preview 23

Surfin’ Safari

Safari Technology Preview Release 23 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 210845-211256.


  • Fixed Gamepad support for PS4 controllers (r211220)
  • Exposed more directional pads for other types of gamepads (r211231)

Pointer Lock

  • Fixed sending Pointer Lock events directly to the target element (r211235)
  • Fixed page requests to re-establish Pointer Lock without a user gesture after being released without a user gesture (r211249)


  • Added client notification when the user plays media otherwise prevented from autoplaying (r211226)


  • Fixed Speak Selection for <iframe> elements (r211095)

Web Inspector

  • Added a way to trigger Garbage Collection (r211075)

Bug Fixes

  • Fixed Flash object placeholder painting when Safari reloads pages with Flash objects after Flash is installed (r211114)
  • Improved switching between GPUs for WebGL content in order to maximize battery life (r211244)

By Jon Davis at February 08, 2017 06:00 PM

Michael Catanzaro: An Update on WebKit Security Updates

Igalia WebKit

One year ago, I wrote a blog post about WebKit security updates that attracted a fair amount of attention at the time. For a full understanding of the situation, you really have to read the whole thing, but the most important point was that, while WebKitGTK+ — one of the two WebKit ports present in Linux distributions — was regularly releasing upstream security updates, most Linux distributions were ignoring the updates, leaving users vulnerable to various security bugs, mainly of the remote code execution variety. At the time of that blog post, only Arch Linux and Fedora were regularly releasing WebKitGTK+ updates, and Fedora had only very recently begun doing so comprehensively.

Progress report!

So how have things changed in the past year? The best way to see this is to look at the versions of WebKitGTK+ in currently-supported distributions. The latest version of WebKitGTK+ is 2.14.3, which fixes 13 known security issues present in 2.14.2. Do users of the most popular Linux operating systems have the fixes?

  • Fedora users are good. Both Fedora 24 and Fedora 25 have the latest version, 2.14.3.
  • If you use Arch, you know you always have the latest stuff.
  • Ubuntu users rejoice: 2.14.3 updates have been released to users of both Ubuntu 16.04 and 16.10. I’m very pleased that Ubuntu has decided to take my advice and make an exception to its usual stable release update policy to ensure its users have a secure version of WebKit. I can’t give Ubuntu an A grade here because the updates tend to lag behind upstream by several months, but slow updates are much better than no updates, so this is undoubtedly a huge improvement. (Anyway, it’s hardly a bad idea to be cautious when releasing a big update with high regression potential, as is unfortunately the case with even stable WebKit updates.) But if you use the still-supported Ubuntu 14.04 or 12.04, be aware that these versions of Ubuntu cannot ever update WebKit, as it would require a switch to WebKit2, a major API change.
  • Debian does not update WebKit as a matter of policy. The latest release, Debian 8.7, is still shipping WebKitGTK+ 2.6.2. I count 184 known vulnerabilities affecting it, though that’s an overcount as we did not exclude some Mac-specific security issues from the 2015 security advisories. (Shipping ancient WebKit is not just a security problem, but a user experience problem too. Actually attempting to browse the web with WebKitGTK+ 2.6.2 is quite painful due to bugs that were fixed years ago, so please don’t try to pretend it’s “stable.”) Note that a secure version of WebKitGTK+ is available for those in the know via the backports repository, but this does no good for users who trust Debian to provide them with security updates by default without requiring difficult configuration. Debian testing users also currently have the latest 2.14.3, but you will need to switch to Debian unstable to get security updates for the foreseeable future, as testing is about to freeze.
  • For openSUSE users, only Tumbleweed has the latest version of WebKit. The current stable release, Leap 42.2, ships with WebKitGTK+ 2.12.5, which is coincidentally affected by exactly 42 known vulnerabilities. (I swear I am not making this up.) The previous stable release, Leap 42.1, originally released with WebKitGTK+ 2.8.5 and later updated to 2.10.7, but never past that. It is affected by 65 known vulnerabilities. (Note: I have to disclose that I told openSUSE I’d try to help out with that update, but never actually did. Sorry!) openSUSE has it a bit harder than other distros because it has decided to use SUSE Linux Enterprise as the source for its GCC package, meaning it’s stuck on GCC 4.8 for the foreseeable future, while WebKit requires GCC 4.9. Still, this is only a build-time requirement; it’s not as if it would be impossible to build with Clang instead, or a custom version of GCC. I would expect WebKit updates to be provided to both currently-supported Leap releases.
  • Gentoo has the latest version of WebKitGTK+, but only in testing. The latest version marked stable is 2.12.5, so this is a serious problem if you’re following Gentoo’s stable channel.
  • Mageia has been updating WebKit and released a couple security advisories for Mageia 5, but it seems to be stuck on 2.12.4, which is disappointing, especially since 2.12.5 is a fairly small update. The problem here does not seem to be lack of upstream release monitoring, but rather lack of manpower to prepare the updates, which is a typical problem for small distros.
  • The enterprise distros from Red Hat, Oracle, and SUSE do not provide any WebKit security updates. They suffer from the same problem as Ubuntu’s old LTS releases: the WebKit2 API change makes updating impossible. See my previous blog post if you want to learn more about that. (SUSE actually does have WebKitGTK+ 2.12.5 as well, but… yeah, 42.)

So results are clearly mixed. Some distros are clearly doing well, and others are struggling, and Debian is Debian. Still, the situation on the whole seems to be much better than it was one year ago. Most importantly, Ubuntu’s decision to start updating WebKitGTK+ means the vast majority of Linux users are now receiving updates. Thanks Ubuntu!

To arrive at the above vulnerability totals, I just counted up the CVEs listed in WebKitGTK+ Security Advisories, so please do double-check my counting if you want. The upstream security advisories themselves are worth mentioning, as we have only been releasing these for two years now, and the first year was pretty rough when we lost our original security contact at Apple shortly after releasing the first advisory: you can see there were only two advisories in all of 2015, and the second one was huge as a result of that. But 2016 seems to have gone decently well. WebKitGTK+ has normally been releasing most security fixes even before Apple does, though the actual advisories and a few remaining fixes normally lag behind Apple by roughly a month or so. Big thanks to my colleagues at Igalia who handle this work.

Challenges ahead

There are still some pretty big problems remaining!

First of all, the distributions that still aren’t releasing regular WebKit updates should start doing so.

Next, we have to do something about QtWebKit, the other big WebKit port for Linux, which stopped receiving security updates in 2013 after the Qt developers decided to abandon the project. The good news is that Konstantin Tokarev has been working on a QtWebKit fork based on WebKitGTK+ 2.12, which is almost (but not quite yet) ready for use in distributions. I hope we are able to switch to use his project as the new upstream for QtWebKit in Fedora 26, and I’d encourage other distros to follow along. WebKitGTK+ 2.12 does still suffer from those 42 vulnerabilities, but this will be a big improvement nevertheless and an important stepping stone for a subsequent release based on the latest version of WebKitGTK+. (Yes, QtWebKit will be a downstream of WebKitGTK+. No, it will not use GTK+. It will work out fine!)

It’s also time to get rid of the old WebKitGTK+ 2.4 (“WebKit1”), which all distributions currently parallel-install alongside modern WebKitGTK+ (“WebKit2”). It’s very unfortunate that a large number of applications still depend on WebKitGTK+ 2.4 — I count 41 such packages in Fedora — but this old version of WebKit is affected by over 200 known vulnerabilities and really has to go sooner rather than later. We’ve agreed to remove WebKitGTK+ 2.4 and its dependencies from Fedora rawhide right after Fedora 26 is branched next month, so they will no longer be present in Fedora 27 (targeted for release in November). That’s bad for you if you use any of the affected applications, but fortunately most of the remaining unported applications are not very important or well-known; the most notable ones that are unlikely to be ported in time are GnuCash (which won’t make our deadline) and Empathy (which is ported in git master, but is not currently in a releasable state; help wanted!). I encourage other distributions to follow our lead here in setting a deadline for removal. The alternative is to leave WebKitGTK+ 2.4 around until no more applications are using it. Distros that opt for this approach should be prepared to be stuck with it for the next 10 years or so, as the remaining applications are realistically not likely to be ported so long as zombie WebKitGTK+ 2.4 remains available.

These are surmountable problems, but they require action by downstream distributions. No doubt some distributions will be more successful than others, but hopefully many distributions will be able to fix these problems in 2017. We shall see!

By Michael Catanzaro at February 08, 2017 06:32 AM

Michael Catanzaro: On Epiphany Security Updates and Stable Branches

Igalia WebKit

One of the advantages of maintaining a web browser based on WebKit, like Epiphany, is that the vast majority of complexity is contained within WebKit. Epiphany itself doesn’t have any code for HTML parsing or rendering, multimedia playback, or JavaScript execution, or anything else that’s actually related to displaying web pages: all of the hard stuff is handled by WebKit. That means almost all of the security problems exist in WebKit’s code and not Epiphany’s code. While WebKit has been affected by over 200 CVEs in the past two years, and those issues do affect Epiphany, I believe nobody has reported a security issue in Epiphany’s code during that time. I’m sure a large part of that is simply because only the bad guys are looking, but the attack surface really is much, much smaller than that of WebKit. To my knowledge, the last time we fixed a security issue that affected a stable version of Epiphany was 2014.

Well, that streak has unfortunately ended; you need to make sure to update to Epiphany 3.22.6, 3.20.7, or 3.18.11 as soon as possible (or Epiphany 3.23.5 if you’re testing our unstable series). If your distribution is not already preparing an update, insist that it do so. I’m not planning to discuss the embarrassing issue here — you can check the bug report if you’re interested — but rather why I made new releases on three different branches. That’s quite unlike how we handle WebKitGTK+ updates! Distributions must always update to the very latest version of WebKitGTK+, as it is not practical to backport dozens of WebKit security fixes to older versions of WebKit. This is rarely a problem, because WebKitGTK+ has a strict policy to dictate when it’s acceptable to require new versions of runtime dependencies, designed to ensure roughly three years of WebKit updates without the need to upgrade any of its dependencies. But new major versions of Epiphany are usually incompatible with older releases of system libraries like GTK+, so it’s not practical or expected for distributions to update to new major versions.

My current working policy is to support three stable branches at once: the latest stable release (currently Epiphany 3.22), the previous stable release (currently Epiphany 3.20), and an LTS branch defined by whatever’s currently in Ubuntu LTS and elementary OS (currently Epiphany 3.18). It was nice of elementary OS to make Epiphany its default web browser, and I would hardly want to make it difficult for its users to receive updates.

Three branches can be annoying at times, and it’s a lot more than is typical for a GNOME application, but a web browser is not a typical application. For better or for worse, the majority of our users are going to be stuck on Epiphany 3.18 for a long time, and it would be a shame to leave them completely without updates. That said, the 3.18 and 3.20 branches are very stable and only getting bugfixes and occasional releases for the most serious issues. In contrast, I try to backport all significant bugfixes to the 3.22 branch and do a new release every month or thereabouts.

So that’s why I just released another update for Epiphany 3.18, which was originally released in September 2015. Compare this to the long-term support policies of Chrome (which supports only the latest version of the browser, and only for six weeks) or Firefox (which provides nine months of support for an ESR release), and I think we compare quite favorably. (A stable WebKit series like 2.14 is only supported for six months, but that’s comparable to Firefox.) Not bad?

By Michael Catanzaro at February 08, 2017 05:56 AM

February 07, 2017

Next-generation 3D Graphics on the Web

Surfin’ Safari

Apple’s WebKit team today proposed a new Community Group at the W3C to discuss the future of 3D graphics on the Web, and to develop a standard API that exposes modern GPU features including low-level graphics and general purpose computation. W3C Community Groups allow all to freely participate, and we invite browser engineers, GPU hardware vendors, software developers and the Web community to join us.

To kick off the discussion, we’re sharing an API proposal, and a prototype of that API for the WebKit Open Source project. We hope this is a useful starting point, and look forward to seeing the API evolve as discussions proceed in the Community Group.

UPDATE: There is now a prototype implementation and demos of WebGPU.

Let’s cover the details of how we got to this point, and how this new group relates to existing Web graphics APIs such as WebGL.

First, a Little History

There was a time where the standards-based technologies for the Web produced pages with static content, and the only graphics were embedded images. Before long, the Web started adding more features that developers could access via JavaScript. Eventually, there was enough demand for a fully programmable graphics API, so that scripts could create images on the fly. Thus the canvas element and its associated 2D rendering API were born inside WebKit, quickly spread to other browser engines, and standardized soon afterward.

Over time, the type of applications and content that people were developing for the Web became more ambitious, and began running into limitations of the platform. One example is gaming, where performance and visual quality are essential. There was demand for games in browsers, but most games were using APIs that provided 3D graphics using the power of Graphics Processing Units (GPUs). Mozilla and Opera showed some experiments that exposed a 3D rendering context from the canvas element, and they were so compelling that the community decided to gather to standardize something that everyone could implement.

All the browser engines collaborated to create WebGL, the standard for rendering 3D graphics on the Web. It was based on OpenGL ES, a cross-platform API for graphics targeted at embedded systems. This was the right starting place, because it made it possible to implement the same API in all browsers easily, especially since most browser engines were running on systems that had support for OpenGL. And even when the system didn’t directly support OpenGL, the API sat at a high enough level of abstraction for projects like ANGLE to emulate it on top of other technologies. As OpenGL evolved, WebGL could follow.

WebGL has unleashed the power of graphics processors to developers on an open platform, and all major browsers support WebGL 1, allowing console-quality games to be built for the Web, and communities like three.js to flourish. Since then, the standard has evolved to WebGL 2 and, again, all major browser engines, including WebKit, are committed to supporting it.

What’s Next?

Meanwhile, GPU technology has improved and new software APIs have been created to better reflect the designs of modern GPUs. These new APIs exist at a lower level of abstraction and, due to their reduced overhead, generally offer better performance than OpenGL. The major platform technologies in this space are Direct3D 12 from Microsoft, Metal from Apple, and Vulkan from the Khronos Group. While these technologies have similar design concepts, unfortunately none are available across all platforms.

So what does this mean for the Web? These new technologies are clearly the next evolutionary step for content that can benefit from the power of the GPU. The success of the web platform requires defining a common standard that allows for multiple implementations, but here we have several graphics APIs that have nuanced architectural differences. In order to expose a modern, low-level technology that can accelerate graphics and computation, we need to design an API that can be implemented on top of many systems, including those mentioned above. With a broader landscape of graphics technologies, following one specific API like OpenGL is no longer possible.

Instead we need to evaluate and design a new web standard that provides a core set of required features, an API that can be implemented on a mix of platforms with different system graphics technologies, and the security and safety required to be exposed to the Web.

We also need to consider how GPUs can be used outside of the context of graphics and how the new standard can work in concert with other web technologies. The standard should expose the general-purpose computational functionality of modern GPUs. Its design should fit with established patterns of the Web, to make it easy for developers to adopt the technology. It needs to be able to work well with other critical emerging web standards like WebAssembly and WebVR. And most importantly, the standard should be developed in the open, allowing both industry experts and the broader web community to participate.

The W3C provides the Community Group platform for exactly this situation. The “GPU for the Web” Community Group is now open for membership.

WebKit’s Initial API Proposal

We anticipated the situation of next-generation graphics APIs a few years ago and started prototyping in WebKit, to validate that we could expose a very low-level GPU API to the Web, and still get worthwhile performance improvements. Our results were very encouraging, so we are sharing the prototype with the W3C Community Group. We will also start landing code in WebKit soon, so that you can try it out for yourself. We don’t expect this to become the actual API that ends up in the standard, and maybe not even the one that the Community Group decides to start with, but we think there is a lot of value in working code. Other browser engines have made their own similar prototypes. It will be exciting to collaborate with the community and come up with a great new technology for graphics.

Let’s take a look at our experiment in detail, which we call “WebGPU”.

Getting a Rendering Context and Rendering Pipeline

The interface to WebGPU is, as expected, via the canvas element.

let canvas = document.querySelector("canvas");
let gpu = canvas.getContext("webgpu"); 

WebGPU is much more object-oriented than WebGL. In fact, that is where some of the efficiencies come from. Rather than setting up state before each draw operation, WebGPU allows you to create and store objects that represent state, along with objects that can process a set of commands. This way we can do some validation up front as the states are created, reducing the work we need to perform during a drawing operation.

A WebGPU context exposes graphics commands and parallel compute commands. Let’s just assume we want to draw something, so we’ll be using a graphics pipeline. The most important elements in the pipeline are the shaders, which are programs that run on the GPU to process the geometric data and provide a color for each drawn pixel. Shaders are typically written in a language that is specialized for graphics.

Deciding on a shading language in a Web API is interesting because there are many factors to consider. We need a language that is powerful, allows programs to be easily created, can be serialized into a format that is efficient for transfer, and can be validated by the browser to make sure the shader is safe. Parts of the industry are moving to shader representations that can be generated from many source formats, sort of like an assembly language. Meanwhile, the Web has thrived on the “View Source” approach, where human readable code is valuable. We expect the discussions around the shading language to be one of the most fun parts of the standardization process, and look forward to hearing community opinions.

For our WebGPU prototype, we decided to defer the issue and just accept an existing language for now. Since we were building on Apple platforms we picked the Metal Shading Language. How do we load our shaders into WebGPU?

let library = gpu.createLibrary( /* source code */ );

let vertexFunction = library.functionWithName("vertex_main");
let fragmentFunction = library.functionWithName("fragment_main");

We ask the gpu object to load and compile the shader from source code, producing a WebGPULibrary. The shader code itself isn’t that important—imagine a very simple vertex and fragment combination. A library can hold multiple shader functions, so we extract the functions we want to use in this pipeline by name.

Now we can create our pipeline.

// The details of the pipeline.
let pipelineDescriptor = new WebGPURenderPipelineDescriptor();
pipelineDescriptor.vertexFunction = vertexFunction;
pipelineDescriptor.fragmentFunction = fragmentFunction;
pipelineDescriptor.colorAttachments[0].pixelFormat = "BGRA8Unorm";

let pipelineState = gpu.createRenderPipelineState(pipelineDescriptor);

We get a new WebGPURenderPipelineState object from the context by passing in the description of what we need. In this case we say which vertex and fragment shaders we’ll use, as well as the type of image data we want.


In order to draw something you need to provide data to the rendering pipeline using a buffer. WebGPUBuffer is the object that holds this data, e.g. geometry coordinates, colors and normal vectors.

let vertexData = new Float32Array([ /* some data */ ]);
let vertexBuffer = gpu.createBuffer(vertexData);

In this case we have data for each vertex we want to draw in our geometry inside a Float32Array, and then create a WebGPUBuffer from that data. We’ll use this buffer later when we issue a draw operation.

Vertex data such as this rarely changes, but some data changes nearly every time a draw happens. These values are called uniforms. A common example of a uniform is the current transformation matrix representing a camera position. WebGPUBuffers are used for uniforms too, but in this case we want to write into the buffer after we’ve created it.

// Imagine "buffer" is a WebGPUBuffer that was allocated earlier.
// buffer.contents exposes an ArrayBufferView, that we then interpret
// as an array of 32-bit floating point numbers.
let uniforms = new Float32Array(buffer.contents);

// Set the uniform of interest.
uniforms[42] = Math.PI;

One of the nice things about this is that a JavaScript developer can wrap the ArrayBufferView with a class or Proxy object with custom getters and setters, so that the external interface looks like typical JavaScript objects. The wrapper object then updates the right ranges within the underlying Array that the buffer is using.
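As a sketch of that wrapper idea, here a Proxy maps friendly property names onto indices in the underlying typed array. The property names and offsets are made up for illustration, and a plain Float32Array stands in for buffer.contents:

```javascript
// Hypothetical uniform layout; the names and offsets are
// assumptions for this sketch, not part of the WebGPU proposal.
const OFFSETS = { time: 0, scale: 1 };

function wrapUniforms(floats, offsets) {
    // The Proxy turns property access into indexed reads/writes on
    // the Float32Array, so callers write uniforms.scale = 2.0
    // instead of remembering that "scale" lives at index 1.
    return new Proxy({}, {
        get(_, name) { return floats[offsets[name]]; },
        set(_, name, value) { floats[offsets[name]] = value; return true; }
    });
}

// In real WebGPU code this would be new Float32Array(buffer.contents).
let floats = new Float32Array(2);
let uniforms = wrapUniforms(floats, OFFSETS);
uniforms.time = 1.5;
uniforms.scale = 2.0;
```

Because the writes land directly in the typed array, the GPU sees the updated values the next time the buffer is used, with no extra copying.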


Before we can tell the WebGPU context to draw something, we need to set up some state. This includes the destination of the rendering (a WebGPUTexture that will eventually be shown in the canvas ), and a description of how that texture is initialized and used. That state is stored in a WebGPURenderPassDescriptor.

// Ask the context for the texture it expects the next
// frame to be drawn into.
let drawable = gpu.nextDrawable();

let passDescriptor = new WebGPURenderPassDescriptor();
passDescriptor.colorAttachments[0].loadAction = "clear";
passDescriptor.colorAttachments[0].storeAction = "store";
passDescriptor.colorAttachments[0].clearColor = [0.8, 0.8, 0.8, 1.0];
passDescriptor.colorAttachments[0].texture = drawable.texture;

First we ask the WebGPU context for an object that represents the next frame that we can draw into. This is what is ultimately copied into the canvas element. After we’ve finished our drawing code, we tell WebGPU that we’re done with the drawable object so it can display the results and prepare the next frame.

The WebGPURenderPassDescriptor is initialized indicating that we won’t be reading from this texture in a draw operation (the loadAction is clear), that we will use the texture after the draw (storeAction is store), and the color it should fill the texture with.

Next, we create the objects we’ll need to hold the actual draw operations. A WebGPUCommandQueue has a set of WebGPUCommandBuffers. We push operations into a WebGPUCommandBuffer using a WebGPUCommandEncoder.

let commandQueue = gpu.createCommandQueue();
let commandBuffer = commandQueue.createCommandBuffer();

// Use the descriptor we created above.
let commandEncoder = commandBuffer.createRenderCommandEncoderWithDescriptor(passDescriptor);

// Tell the encoder which state to use (i.e. shaders).
commandEncoder.setRenderPipelineState(pipelineState);

// And, lastly, the encoder needs to know which buffer
// to use for the geometry.
commandEncoder.setVertexBuffer(vertexBuffer, 0, 0);

At this point we have set up a rendering pipeline with shaders, a buffer holding the geometry, a queue that we’ll submit draw operations to, and an encoder that can submit to the queue. Now we just push the actual command to draw into the encoder.

// We know our buffer has three vertices. We want to draw them
// with filled triangles.
commandEncoder.drawPrimitives("triangle", 0, 3);
commandEncoder.endEncoding();

// All drawing commands have been submitted. Tell WebGPU to
// show/present the results in the canvas once the queue has
// been processed.
commandBuffer.presentDrawable(drawable);

// And lastly, send the command buffer to the GPU.
commandBuffer.commit();
Like most 3D graphics sample code, it feels like a lot of work in order to draw a simple shape. But it’s not a waste. An advantage of these modern APIs is that much of that code is creating objects that can be reused to draw other things. For example, often content will only need a single WebGPUCommandQueue instance, or can create multiple WebGPURenderPipelineState objects up-front for different shaders. And again, the browser can do a lot of early validation to reduce the overhead during the drawing operations.

Hopefully this gave you a taste of the WebGPU proposal. Even though the final API produced by the W3C Community Group may be very different, we expect a lot of the general design principles to be common.

An Open Invitation

Apple’s WebKit team has proposed establishing a W3C Community Group for GPU on the Web to be the forum for this work, and today you are invited to join us in defining the next standard for GPUs. Our proposal has been received positively by our colleagues at other browser engines, GPU vendors, and framework developers. With support from the industry, we invite all with an interest or expertise in this area to join the Community Group.

By Dean Jackson at February 07, 2017 08:00 PM

February 02, 2017

New Interaction Behaviors in iOS 10

Surfin’ Safari

Last year we published a blog post about getting more responsive tapping on iOS. With the release of iOS 10, we’ve made some minor adjustments to the behavior of our fast tapping, and an important change to a very common user interaction: pinch zooming.

Fast Tapping

A common complaint on iOS 9 and earlier was that events triggered by a user tapping the screen were slightly delayed. This was because the browser was waiting to see if the gesture was a double-tap, indicating the user wanted to zoom. Once a small delay had expired without seeing a second tap, the browser would know it was a single tap and dispatch the event. This made some pages that were designed for an instant reaction to tapping feel slightly slow.

As we described in our introductory post, iOS 10 detects situations when a page can support faster taps and dispatches the events instantly, making Web sites feel much more responsive. The feedback has been very positive. However, before iOS 10 shipped we made a few tweaks to the method described in the original article. Here are the current details.

Enabling fast tapping on iOS 10 requires pages to have the following:

  1. There must be a meta tag of type viewport
  2. The viewport must be defined to have width=device-width
  3. The content must be at a scale of 1, which means both:
    a. the user has not manually zoomed off a scale of 1 (e.g. they can have zoomed, but they must have returned to the original scale)
    b. the page content wasn’t so wide that the browser was forced to shrink it to fit

Note: Explaining 3.b, WebKit often sees pages that define width=device-width but then explicitly lay out content at very large widths, often greater than 1000px. These sites are usually designed for a large screen, and have added a viewport tag in the hope that it makes for a mobile-friendly design. Unfortunately this is not the case. If the browser respected a misleading viewport rule such as this, the user would only see the top left corner of the content—clearly a bad experience. Instead WebKit looks for this scenario and adjusts the zoom factor appropriately. Conceptually, this behaviour is the same as the browser loading the page, then the user pinch zooming out far enough to see all the content, which means the page is no longer at a scale of 1.
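For reference, a typical viewport tag that satisfies the first two requirements looks like this:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```

Requirement 3 then depends on the content actually fitting that width and on the user not having zoomed away from a scale of 1.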

Zooming Everywhere

Safari on iOS 10 allows the user to pinch zoom on every page. As a developer, you should be aware of this, and make sure your content works well when zoomed.

What changed? Prior to iOS 10, Safari allowed the content to block the user from zooming on a page by setting user-scalable=no in the viewport, or appropriate min-scale and max-scale values. This unfortunately enabled pages to pick a text size that was unreadable while giving the user no way to zoom. Also, there is now such a wide range of devices with different display dimensions, screen resolutions, pixel densities… it is very difficult to choose an appropriate text size in a design.

Now, we ignore the user-scalable, min-scale and max-scale settings. If you have content that disabled zoom, please test it on iOS 10, and understand that many users will be zooming now.

As users, we’ve all come across content that is too small to comfortably read. We know that a huge number of people appreciate this zooming improvement, even though it might mean some sites that attempt to block zooming are broken until they update.

Zooming in WKWebView Content

You might have an app that mixes a WKWebView with native content, where allowing the user to scale only the web content may be inappropriate. In these cases, you can prevent the user from zooming using a new property on WKWebViewConfiguration:

var ignoresViewportScaleLimits: Bool

The default value is false, which means that a WKWebView will allow its content to block zooming. This preserves behavior with older versions of iOS.

Meanwhile, Safari and SafariViewController set the value to true. If your app uses a WKWebView in a similar manner, such as showing a large amount of text, we encourage you to change the value to true too.

For feedback, email or tweet to @webkit.

By Dean Jackson at February 02, 2017 05:18 PM

January 27, 2017

Enhanced Editing with Input Events

Surfin’ Safari

Today, the easiest way to create a rich text editor on the web is to add the contenteditable attribute to an element. This allows users to insert, delete and style web content and works great for many uses of editing on the web. However, some web-based rich text editors, such as iCloud Pages or Google Docs, employ JavaScript-based implementations of rich text editing by capturing key events using a hidden contenteditable element and then using the information in these captured events to update the DOM. This gives more control over the editing experience across browsers and platforms.

However, such an approach comes with a weakness — capturing key events only covers a subset of text editing actions. Examples of this include the bold/italic/underline buttons on iOS, the context menu on macOS, and the editing controls shown in the Touch Bar in Safari. While some of these editing actions dispatch input events, these input events do not convey any notion of what the user is trying to accomplish — they only indicate that some editable content has been altered, which is not enough information for a JavaScript-based editor to respond appropriately.

Furthermore, you may need not only to know when a user has performed some editing action, but also to replace the default behavior resulting from this editing action with custom behavior. For instance, you could imagine such functionality being useful for an editable area that only inserts pasted or dropped content as plaintext rather than HTML. Existing input events do not suffice for this purpose, since they are dispatched after the editing action has been carried out, and are therefore non-preventable. Let’s see how input events can address these issues.

Revisiting Input Events

The latest Input Events specification introduces beforeinput events, which are dispatched before any change resulting from the editing action has taken place. These events are cancelable by calling preventDefault() on the event, which also prevents the subsequent input event from being dispatched. Additionally, each input and beforeinput event now contains information relevant to the editing action being performed. Here is an overview of the attributes added to input events:

  • InputEvent.inputType describes the type of editing action being performed. A full list of input types is enumerated in the official spec, linked above. The names of input types also share prefixes — for instance, all input types that cause text to be inserted begin with the string "insert". Some examples of input types are insertReplacementText, deleteByCut, and formatBold.
  • InputEvent.data contains plain text data to be inserted in the case of insert* input types, and style information in the case of format* input types. However, if the content being inserted contains rich text, this attribute will be null, and the dataTransfer attribute will be used instead.
  • InputEvent.dataTransfer contains both rich and plain text data to be inserted in a contenteditable area. The rich text data is retrieved as an HTML string using dataTransfer.getData("text/html"), while the plain text representation is retrieved using dataTransfer.getData("text/plain").
  • InputEvent.getTargetRanges is a method that returns a list of ranges that will be affected by editing. For example, when spellchecking or autocorrect replaces typed text with replacement text, the target ranges of the beforeinput event indicate the existing ranges of text that are about to be replaced. It is important to note that each range in this list is a type of StaticRange, as opposed to a normal Range; while a StaticRange is similar to a normal Range in that it has start and end containers and start and end offsets, it does not automatically update as the DOM is modified.
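Putting a few of these attributes together, the plain-text-only paste behavior mentioned earlier can be sketched with a beforeinput handler. The installPlainTextPaste helper and its insertText callback are hypothetical names for this sketch, not part of the spec:

```javascript
// Sketch: cancel rich-text paste and re-insert only the plain text.
// "editor" is an editable element and "insertText" is some insertion
// routine of your own; both are assumptions for illustration.
function installPlainTextPaste(editor, insertText) {
    editor.addEventListener("beforeinput", event => {
        if (event.inputType !== "insertFromPaste")
            return;
        // Cancel the default rich-text insertion (this also suppresses
        // the subsequent input event)...
        event.preventDefault();
        // ...and insert the plain text representation instead.
        insertText(event.dataTransfer.getData("text/plain"));
    });
}
```

All other editing actions fall through untouched, since the handler returns early for any inputType other than insertFromPaste.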

Let’s see how this all comes together in a simple example.

Formatting-only Regions Example

Suppose we’re creating a simple editable area where a user can compose a response to an email or comment. Let’s say we want to restrict editing within certain parts of the message that represent quotes from an earlier response — while we allow the user to change the style of text within a quote, we will not allow the user to edit the text content of the quote. Consider the HTML below:


<body onload="setup()">
    <div id="editor" contenteditable>
        <p>This is some regular content.</p>
        <p>This text is fully editable.</p>
        <div class="quote" style="background-color: #EFFEFE;">
            <p>This is some quoted content.</p>
            <p>You can only change the format of this text.</p>
        </div>
        <p>This is some more regular content.</p>
        <p>This text is also fully editable.</p>
    </div>
</body>
This gives us the basic ability to edit the contents of our message, which contains a quoted region highlighted in blue. Our goal is to prevent the user from performing editing actions that modify the text content of this quoted region. To accomplish this, we first attach a beforeinput event handler to our editable element. In this handler, we call event.preventDefault() if the input event is not a formatting change (i.e. its inputType does not begin with 'format') and it might modify the contents of the quoted region, which we can tell by inspecting the target ranges of the event. If any of the affected ranges starts or ends within the quoted region, we immediately prevent editing and bail from the handler.


function setup() {
    editor.addEventListener("beforeinput", event => {
        if (event.inputType.match(/^format/))
            return;

        for (let staticRange of event.getTargetRanges()) {
            if (nodeIsInsideQuote(staticRange.startContainer)
                || nodeIsInsideQuote(staticRange.endContainer)) {
                event.preventDefault();
                return;
            }
        }
    });

    function nodeIsInsideQuote(node) {
        let currentElement = node.nodeType == Node.ELEMENT_NODE ? node : node.parentElement;
        while (currentElement) {
            if (currentElement.classList.contains("quote"))
                return true;
            currentElement = currentElement.parentElement;
        }
        return false;
    }
}
After adding the script, attempts to insert or delete text from the quoted region no longer result in any changes, but the format of the text can still be changed. For instance, users can bold text by right-clicking selected text in the quote and then choosing Font ▸ Bold, or by tapping the Bold button in the Touch Bar in Safari. You can check out the final result in an Input Events demo.

Additional Work

Input events are crucial if you want to build a great text editor, but they don’t yet solve every problem. We believe they could be enhanced to give web developers control over more native editing behaviors on macOS and iOS. For instance, it would be useful for an editable element to specify the set of input types that it supports, so that (1) input events of an unsupported input type are not dispatched on the element, and (2) the browser will not show enabled editing UI that would dispatch only unsupported input types.

Another capability is for web pages to provide a custom handler that WebKit can use to determine the style of the current selection. This is particularly useful in the context of the bold/italic/underline controls on both the iOS keyboard and the Touch Bar — these buttons are highlighted if the current selection is already bold, italic or underlined, indicating to the user that interacting with these controls will undo bold, italic or underlined style. If a web page prevents default behavior and renders these text styles via custom means, it would need to inform WebKit of the current text style to ensure that platform controls remain in sync with the content.

Input events are enabled by default as of Safari Technology Preview 18, and available in Safari 10.1 in the recent beta releases of macOS 10.12.4 and iOS 10.3. Please give our example a try and experiment with the feature! If you have any questions or comments, please contact me, or Jonathan Davis, Apple’s Web Technologies Evangelist, at @jonathandavis.

By Wenson Hsieh at January 27, 2017 09:00 PM

January 25, 2017

Release Notes for Safari Technology Preview 22

Surfin’ Safari

Safari Technology Preview Release 22 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 210274-210845.


  • Fixed an error when calling an async arrow function which is in a class’s member function (r210558)
  • Improved the speed of Array.prototype.slice in DFG/FTL JITs (r210695)


  • Implemented scroll-snap-type:proximity scroll snapping (r210560)
  • Fixed updating :active and :hover states across Shadow DOM slots (r210564)
  • Fixed a CSS Grid issue with very big values for grid lines (r210320)
  • Implemented baseline positioning for grid containers (r210792)
  • Made the CSS Grid sizing data persistent through layouts (r210669)
  • Fixed overflow:scroll scroll position getting restored on back navigation (r210329)

Form Validation

  • Fixed the validation message to use singular form of “character” when maxLength value is 1 (r210447)
  • Truncated lengthy validation messages with an ellipsis (r210425)
  • Aligned email validation with the latest HTML specification (r210361)

Web Inspector

  • Added “Persist Logs on Navigation” to Settings tab (r210793)
  • Added UI zoom level to the Settings tab (r210788)
  • Added Command-, (⌘,) keyboard shortcut to open Settings tab (r210772)
  • Fixed showing application cache details in the Storage tab (r210311)
  • Improved the cubic-bezier editor for invalid inputs in component fields (r210674)
  • Fixed an issue clearing pseudo classes toggled on in the Styles sidebar when Web Inspector is closed (r210316)
  • Fixed resources disappearing from the network tab when an iframe gets removed (r210759)
  • Fixed restoring Settings tab when reopening Web Inspector (r210764)
  • Improved the layout of the spring function editor with left-aligned labels and slider tracks (r210618)


Web APIs

  • Provided more detailed role descriptions for many new HTML5 input types (r210295)
  • Aligned the innerText setter with the HTML specification (r210767)
  • Fixed an issue changing the modified timestamp for a given gamepad when it is updated (r210827)
  • Changed pointer lock to release when the page state is reset for any reason, not just when the process exits (r210281)
  • Fixed editing of nested RTL-to-LTR content (r210831)
  • Added support for iterating over URLSearchParams objects (r210593)
  • Changed the first parameter of Event.initEvent() to be mandatory (r210559)


Media

  • Added support for MediaKeys.generateRequest() (r210555)
  • Added protection against the MediaPlayer being destroyed in the middle of a load() (r210747)


Rendering

  • Fixed an issue that caused the highlighting of text using the Yoon Gothic webfont to reflow (r210456)
  • Fixed reordering text inside a blockquote when un-indenting the text (r210524)


Security

  • Volume-separated file URLs: disallowed a file URL on one volume from loading a file on another volume in macOS 10.12.4 or later (r210571)
Note: Safari WebDriver is broken in this release. We expect this to be fixed in release 23.

By Jon Davis at January 25, 2017 06:00 PM

January 20, 2017

Introducing Riptide: WebKit’s Retreating Wavefront Concurrent Garbage Collector

Surfin’ Safari

As of r209827, 64-bit ARM and x86 WebKit ports use a new garbage collector called Riptide. Riptide reduces worst-case pause times by allowing the app to run concurrently to the collector. This can make a big difference for responsiveness since garbage collection can easily take 10 ms or more, even on fast hardware. Riptide improves WebKit’s performance on the JetStream/splay-latency test by 5x, which leads to a 5% improvement on JetStream. Riptide also improves our Octane performance. We hope that Riptide will help to reduce the severity of GC pauses for many different kinds of applications.

This post begins with a brief background about concurrent GC (garbage collection). Then it describes the Riptide algorithm in detail, including the mature WebKit GC foundation, on which it is built. The field of incremental and concurrent GC goes back a long time and WebKit is not the first system to use it, so this post has a section about how Riptide fits into the related work. This post concludes with performance data.


Garbage collection is expensive. In the worst case, for the collector to free a single object, it needs to scan the entire heap to ensure that no objects have any references to the one it wants to free. Traditional collectors scan the entire heap periodically, and this is roughly how WebKit’s collector has worked since the beginning.

The problem with this approach is that the GC pause can be long enough to cause rendering loops to miss frames, or in some cases it can even take so long as to manifest as a spin. This is a well-understood computer science problem. The originally proposed solution for janky GC pauses, by Guy Steele in 1975, was to have one CPU run the app and another CPU run the collector. This involves gnarly race conditions that Steele solved with a bunch of locks. Later algorithms like Baker’s were incremental: they assumed that there was one CPU, and sometimes the application would call into the collector but only for bounded increments of work. Since then, a huge variety of incremental and concurrent techniques have been explored. Incremental collectors avoid some synchronization overhead, but concurrent collectors scale better. Modern concurrent collectors like DLG (short for Doligez, Leroy, Gonthier, published in POPL ’93 and ’94) have very cheap synchronization and almost completely avoid pausing the application. Taking garbage collection off-core rather than merely shortening the pauses is the direction we want to take in WebKit, since almost all of the devices WebKit runs on have more than one core.

The goal of WebKit’s new Riptide concurrent GC is to achieve a big reduction in GC pauses by running most of the collector off the main thread. Because Riptide will be our always-on default GC, we also want it to be as efficient — in terms of speed and memory — as our previous collector.

The Riptide Algorithm

The Riptide collector combines:

  • Marking: The collector marks objects as it finds references to them. Objects not marked are deleted. Most of the collector’s time is spent visiting objects to find references to other objects.
  • Constraints: The collector allows the runtime to supply additional constraints on when objects should be marked, to support custom object lifetime rules.
  • Parallelism: Marking is parallelized on up to eight logical CPUs. (We limit to eight because we have not optimized it for more CPUs.)
  • Generations: The collector lets the mark state of objects stick if memory is plentiful, allowing the next collection to skip visiting those objects. Sticky mark bits are a common way of implementing generational collection without copying. Collection cycles that let mark bits stick are called eden collections in WebKit.
  • Concurrency: Most of the collector’s marking phase runs concurrently to the program. Because this is by far the longest part of collection, the remaining pauses tend to be 1 ms or less. Riptide’s concurrency features kick in for both eden and full collections.
  • Conservatism: The collector scans the stack and registers conservatively, that is, checking each word to see if it is in the bounds of some object and then marking it if it is. This means that all of the C++, assembly, and just-in-time (JIT) compiler-generated code in our system can store heap pointers in local variables without any hassles.
  • Efficiency: This is our always-on garbage collector. It has to be fast.

This section describes how the collector works. The first part of the algorithm description focuses on the WebKit mark-sweep algorithm on which Riptide is based. Then we dive into concurrency and how Riptide manages to walk the heap while the heap is in flux.

Efficient Mark-Sweep

Riptide retains most of the basic architecture of WebKit’s mature garbage collection code. This section gives an overview of how our mark-sweep collector works: WebKit uses a simple segregated storage heap structure. The DOM, the Objective-C API, the type inference runtime, and the compilers all introduce custom marking constraints, which the GC executes to fixpoint. Marking is done in parallel to maximize throughput. Generational collection is important, so WebKit implements it using sticky mark bits. The collector uses conservative stack scanning to ease integration with the rest of WebKit.

Simple Segregated Storage

WebKit has long used the simple segregated storage heap structure for small and medium-sized objects (up to about 8KB):

  • Small and medium-sized objects are allocated from segregated free lists. Given a desired object size, we perform a table lookup to find the appropriate free list and then pop the first object from this list. The lookup table is usually constant-folded by the compiler.
  • Memory is divided into 16KB blocks. Each block contains cells. All cells in a block have the same cell size, called the block’s size class. In WebKit jargon, an object is a cell whose JavaScript type is “object”. For example, a string is a cell but not an object. The GC literature would typically use object to refer to what our code would call a cell. Since this post is not really concerned with JavaScript types, we’ll use the term object to mean any cell in our heap.
  • At any time, the active free list for a size class contains only objects from a single block. When we run out of objects in a free list, we find the next block in that size class and sweep it to give it a free list.

Sweeping is incremental in the sense that we only sweep a block just before allocating in it. In WebKit, we optimize sweeping further with a hybrid bump-pointer/free-list allocator we call bump’n’pop (here it is in C++ and in the compilers). A per-block bit tells the sweeper if the block is completely empty. If it is, the sweeper will set up a bump-pointer arena over the whole block rather than constructing a free-list. Bump-pointer arenas can be set up in O(1) time while building a free-list is a O(n) operation. Bump’n’pop achieves a big speed-up on programs that allocate a lot because it avoids the sweep for totally-empty blocks. Bump’n’pop’s bump-allocator always bumps by the block’s cell size to make it look like the objects had been allocated from the free list. This preserves the block’s membership in its size class.
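The bump’n’pop idea can be sketched in a few lines. This is an illustrative stand-alone model, not WebKit’s actual allocator classes: the names (BumpNPopAllocator, setUpBumpArena) are hypothetical, and the real code also coordinates with the sweeper and the GC.

```cpp
#include <cassert>
#include <cstddef>

// Sketch of a bump'n'pop allocator: a bump-pointer arena over a completely
// empty block (set up in O(1)), with a free-list pop as the fallback. The
// bump step is always the block's cell size, so the block stays in its size
// class. All names here are illustrative, not WebKit's.
struct FreeCell { FreeCell* next; };

struct BumpNPopAllocator {
    char* bumpCursor = nullptr;   // next free byte in the bump arena, if any
    char* bumpEnd = nullptr;      // end of the bump arena
    size_t cellSize = 0;          // the block's size class
    FreeCell* freeList = nullptr; // fallback free list built by the sweeper

    // Set up an O(1) bump arena over a completely empty block.
    void setUpBumpArena(char* begin, char* end, size_t size) {
        bumpCursor = begin;
        bumpEnd = end;
        cellSize = size;
    }

    void* allocate() {
        // Fast path: bump by the cell size, as if popping from a free list.
        if (bumpCursor && bumpCursor + cellSize <= bumpEnd) {
            void* result = bumpCursor;
            bumpCursor += cellSize;
            return result;
        }
        // Slow path: pop the first cell from the free list.
        if (freeList) {
            void* result = freeList;
            freeList = freeList->next;
            return result;
        }
        return nullptr; // caller would sweep the next block in the size class
    }
};
```

The point of the design is that a totally-empty 16KB block costs O(1) to prepare, while building a free list is O(n) in the number of cells.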

Large objects (larger than about 8KB) are allocated using malloc.

Constraint-Based Marking

Garbage collection is ordinarily a graph search problem and the heap is ordinarily just a graph: the roots are the local variables, their values are directional edges that point to objects, and those objects have fields that each create edges to some other objects. WebKit’s garbage collector also allows the DOM, compiler, and type inference system to install constraint callbacks. These constraints are allowed to query which objects are marked and they are allowed to mark objects. The WebKit GC algorithm executes these constraints to fixpoint. GC termination happens when all marked objects have been visited and none of the constraints want to mark any more objects. In practice, the constraint-solving part of the fixpoint takes up a tiny fraction of the total time. Most of the time in GC is spent performing a depth-first search over marked objects that we call draining.

Parallel Draining

Draining takes up most of the collector’s time. One of our oldest collector optimizations is that draining is parallelized. The collector has a draining thread on each CPU. Each draining thread has its own worklist of objects to visit, and ordinarily it runs a graph search algorithm that only sees this worklist. Using a local worklist means avoiding worklist synchronization most of the time. Each draining thread will check in with a global worklist under these conditions:

  • It runs out of work. When a thread runs out of work, it will try to steal 1/Nth of the global worklist where N is the number of idle draining threads. This means acquiring the global worklist’s lock.
  • Every 100 objects visited, the draining thread will consider donating about half of its worklist to the global worklist. It will only do this if the global worklist is empty, the global worklist lock can be acquired without blocking, and the local worklist has at least two entries.

This algorithm appears to scale nicely to about eight cores, which is good enough for the kinds of systems that WebKit usually runs on.

Draining in parallel means having to synchronize marking. Our marking algorithm uses a lock-free CAS (atomic compare-and-swap instruction) loop to set mark bits.
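A lock-free CAS marking loop can be sketched as follows. This is a stand-alone illustration (WebKit’s real mark bits live in per-block, versioned bitmaps); the function name tryMark is hypothetical.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Sketch of lock-free marking: a word of a mark bitmap is updated with a
// compare-and-swap loop, so parallel draining threads can race safely. The
// thread whose CAS sets the bit "wins" and is responsible for visiting the
// object; every other thread sees the bit already set and moves on.
bool tryMark(std::atomic<uint64_t>& markWord, unsigned bitIndex) {
    uint64_t bit = 1ull << bitIndex;
    uint64_t oldWord = markWord.load(std::memory_order_relaxed);
    do {
        if (oldWord & bit)
            return false; // some other thread already marked this object
        // On failure, compare_exchange_weak reloads oldWord and we retry.
    } while (!markWord.compare_exchange_weak(oldWord, oldWord | bit));
    return true; // we won the race
}
```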

Sticky Mark Bits

Generational garbage collection is a classic throughput optimization first introduced by Lieberman and Hewitt, and by Ungar. It assumes that objects that are allocated recently are unlikely to survive. Therefore, focusing the collector on objects that were allocated since the last GC is likely to free up lots of memory — almost as much as if we collected the whole heap. Generational collectors track the generation of objects: either young or old. Generational collectors have (at least) two modes: eden collection that only collects young objects and full collection that collects all objects. During an eden collection, old objects are only visited if they are suspected to contain pointers to new objects.

Generational collectors need to overcome two hurdles: how to track the generation of objects, and how to figure out which old objects have pointers to new objects.

The collector needs to know the generation of objects in order to determine which objects can be safely ignored during marking. In a traditional generational collector, eden collections move objects and then use the object’s address to determine its generation. Our collector does not move objects. Instead, it uses the mark bit to also track generation. Quite simply, we don’t clear any mark bits at the start of an eden collection. The marking algorithm will already ignore objects that have their mark bits set. This is called sticky mark bit generational garbage collection.

The collector will avoid visiting old objects during an eden collection. But it cannot avoid all of them: if an old object has pointers to new objects, then the collector needs to know to visit that old object. We use a write barrier — a small piece of instrumentation that executes after every write to an object — that tells the GC about writes to old objects. In order to cheaply know which objects are old, the object header also has a copy of the object’s state: either it is old or it is new. Objects are allocated new and labeled old when marked. When the write barrier detects a write to an old object, we tell the GC by setting the object’s state to old-but-remembered and putting it on the mark stack. We use separate mark stacks for objects marked by the write barrier, so when we visit the object, we know whether we are visiting it due to the barrier or because of normal marking (i.e. for the first time). Some accounting only needs to happen when visiting the object for the first time. The complete barrier is simply:

object->field = newValue;
if (object->cellState == Old)
    remember(object);

Generational garbage collection is an enormous improvement in performance on programs that allocate a lot, which is common in JavaScript. Many new JavaScript features, like iterators, arrow functions, spread, and for-of allocate lots of objects and these objects die almost immediately. Generational GC means that our collector does not need to visit all of the old objects just to delete the short-lived garbage.

Conservative Roots

Garbage collection begins by looking at local variables and some global state to figure out the initial set of marked objects. Introspecting the values of local variables is tricky. WebKit uses C++ local variables for pointers to the garbage collector’s heap, but C-like languages provide no facility for precisely introspecting the values of specific variables of arbitrary stack frames. WebKit solves this problem by marking objects conservatively when scanning roots. We use the simple segregated storage heap structure in part because it makes it easy to ask whether an arbitrary bit pattern could possibly be a pointer to some object.

We view this as an important optimization. Without conservative root scanning, C++ code would have to use some API to notify the collector about what objects it points to. Conservative root scanning means not having to do any of that work.

Mark-Sweep Summary

Riptide implements complex notions of reachability via arbitrary constraint callbacks and allows C++ code to manipulate objects directly. For performance, it parallelizes marking and uses generations to reduce the average amount of marking work.

Handling Concurrency

Riptide makes the draining phase of garbage collection concurrent. This works because of a combination of concurrency features:

  • Riptide is able to stop the world for certain tricky operations like stack scanning and DOM constraint solving.
  • Riptide uses a retreating wavefront write barrier to manage races between marking and object mutation. Using retreating wavefront allows us to avoid any impedance mismatch between generational and concurrent collector optimizations.
  • Retreating wavefront collectors can suffer from the risk of GC death spirals, so Riptide uses a space-time scheduler to put that in check.
  • Visiting an object while it is being reshaped is particularly hard, and WebKit reshapes objects as part of type inference. We use an obstruction-free double collect snapshot to ensure that the collector never marks garbage memory due to a visit-reshape race.
  • Lots of objects have tricky races that aren’t on the critical path, so we put a fast, adaptive, and fair lock in every JavaScript object as a handy way to manage them. It fits in two otherwise unused bits.

While we wrote Riptide for WebKit, we suspect that the underlying intuitions could be useful for anyone wanting to write a concurrent, generational, parallel, conservative, and non-copying collector. This section describes Riptide in detail.

Stopping The World and Safepoints

Riptide does draining concurrently. It is a goal to eventually make other phases of the collector concurrent as well. But so long as some phases are not safe to run concurrently, we need to be able to bring the application to a stop before performing those phases. The place where the collector stops needs to be picked so as to avoid reentrancy issues: for example stopping to run the GC in the middle of the GC’s allocator would create subtle problems. The concurrent GC avoids these problems by only stopping the application at those points where the application would trigger a GC. We call these safepoints. When the collector brings the application to a safepoint, we say that it is stopping the world.

Riptide currently stops the world for most of the constraint fixpoint, and resumes the world for draining. After draining finishes, the world is again stopped. A typical collection cycle may have many stop-resume cycles.

Retreating Wavefront

Draining concurrently means that just as we finish visiting some object, the application may store to one of its fields. We could store a pointer to an unmarked object into an object that is already visited, in which case the collector might never find that unmarked object. If we don’t do something about this, the collector would be sure to prematurely delete objects due to races with the application. Concurrent garbage collectors avoid this problem using write barriers. This section describes Riptide’s write barrier.

Write barriers ensure that the state of the collector is still valid after any race, either by marking objects or by having objects revisited (GC Handbook, chapter 15). Marking objects helps the collector make forward progress; intuitively, it is like advancing the collector’s wavefront. Having objects revisited retreats the wavefront. The literature is full of concurrent GC algorithms, like the Metronome, C4, and DLG, that all use some kind of advancing wavefront write barrier. The simplest such barrier is Dijkstra’s, which marks objects anytime a reference to them is created. I used these kinds of barriers in my past work because they make it easy to make the collector very deterministic. Adding one of those barriers to WebKit would be likely to create some performance overhead since this means adding new code to every write to the heap. But the retreating wavefront barrier, originally invented by Guy Steele in 1975, works on exactly the same principle as our existing generational barrier. This allows Riptide to achieve zero barrier overhead by reusing WebKit’s existing barrier.

It’s easiest to appreciate the similarity by looking at some barrier code. Our old generational barrier looked like this:

object->field = newValue;
if (object->cellState == Old)
    remember(object);

Steele’s retreating wavefront barrier looks like this:

object->field = newValue;
if (object->cellState == Black)
    revisit(object);

Retreating wavefront barriers operate on the same principle as generational barriers, so it’s possible to use the same barrier for both. The only difference is the terminology. The black state means that the collector has already visited the object. This barrier tells the collector to revisit the object if its cellState tells us that the collector had already visited it. This state is part of the classic tri-color abstraction: white means that the GC hasn’t marked the object, grey means that the object is marked and on the mark stack, and black means that the object is marked and has been visited (so is not on the mark stack anymore). In Riptide, the tri-color states that are relevant to concurrency (white, grey, black) perfectly overlap with the sticky mark-bit states that are relevant to generations (new, remembered, old). The Riptide cell states are as follows:

  • DefinitelyWhite: the object is new and white.
  • PossiblyGrey: the object is grey, or remembered, or new and white.
  • PossiblyBlack: the object is black and old, or grey, or remembered, or new and white.

A naive combination generational/concurrent barrier might look like this:

object->field = newValue;
if (object->cellState == PossiblyBlack)
    slowPath(object);

This turns out to need tweaking to work. The PossiblyBlack state is too ambiguous, so the slowPath needs additional logic to work out what the object’s state really was. Also, the order of execution matters: the CPU must run the object->cellState load after it runs the object->field store. That’s hard, since CPUs don’t like to obey store-before-load orderings. Finally, we need to guarantee that the barrier cannot retreat the wavefront too much.

Disambiguating Object State

The GC uses the combination of the object’s mark bit in the block header and the cellState byte in the object’s header to determine the object’s state. The GC clears mark bits at the start of full collection, and it sets the cellState during marking and barriers. It doesn’t reset objects’ cellStates back to DefinitelyWhite at the start of a full collection, because it’s possible to infer that the cellState should have been reset by looking at the mark bit. It’s important that the collector never scans the heap to clear marking state, and even mark bits are logically cleared using versioning. If an object is PossiblyBlack or PossiblyGrey and its mark bit is logically clear, then this means that the object is really white. Riptide’s barrier slowPath is almost like our old generational slow path but it has a new check: it will not do anything if the mark bit of the target object is not set, since this means that we’re in the middle of a GC and the object is actually white. Additionally, the barrier will attempt to set the object back to DefinitelyWhite so that the slowPath path does not have to see the object again (at least not until it’s marked and visited).
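The disambiguation can be summarized in a small sketch. This is a simplification under stated assumptions: the real logic also deals with mark-bit versioning and the remembered states, and the function name trueColor is illustrative.

```cpp
#include <cassert>

// Sketch of how the cellState byte plus the (logically versioned) mark bit
// together determine an object's true color. State names mirror the post.
enum CellState { DefinitelyWhite, PossiblyGrey, PossiblyBlack };
enum Color { White, Grey, Black };

Color trueColor(CellState cellState, bool markBitIsLogicallySet) {
    // A stale PossiblyGrey or PossiblyBlack with a clear mark bit means the
    // cellState was simply never reset at the start of this full collection:
    // the object is really white.
    if (!markBitIsLogicallySet)
        return White;
    return cellState == PossiblyBlack ? Black : Grey;
}
```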

Store-Before-Barrier Ordering

The GC must flag the object as PossiblyBlack just before it starts to visit it and the application must store to field before loading object->cellState. Such ordering is not guaranteed on any modern architecture: both x86 and ARM will sink the store below the load in some cases. Inserting an unconditional store-load fence, such as lock; orl $0, (%rsp) on x86 or dmb ish on ARM, would degrade performance way too much. So, we make the fence itself conditional by playing a trick with the barrier’s condition:

object->field = newValue;
if (object->cellState <= blackThreshold)
    slowPath(object);

Where blackThreshold is a global variable. The PossiblyBlack state has the value 0, and when the collector is not running, blackThreshold is 0. But once the collector starts marking, it sets blackThreshold to 100 while the world is stopped. Then the barrier’s slowPath leads with a check like this:

if (object->cellState != PossiblyBlack)
    return;

This means that the application takes a slight performance hit while Riptide is running. In typical programs, this overhead is about 5% during GC and 0% when not GCing. The only additional cost when not GCing is that blackThreshold must be loaded from memory, but we could not detect a slow-down due to this change. The 5% hit during collection is worth fixing, but to put it in perspective, the application used to take a 100% performance hit during GC because the GC would stop the application from running.

The complete Riptide write barrier is emitted as if the following writeBarrier function had been inlined just after any store to target:

ALWAYS_INLINE void writeBarrier(JSCell* target)
{
    if (LIKELY(target->cellState() > blackThreshold))
        return;
    storeLoadFence();
    if (target->cellState() != PossiblyBlack)
        return;
    writeBarrierSlow(target);
}

NEVER_INLINE void writeBarrierSlow(JSCell* target)
{
    if (!isMarked(target)) {
        // Try to label this object white so that we don't take the barrier
        // slow path again.
        if (target->compareExchangeCellState(PossiblyBlack, DefinitelyWhite)) {
            if (Heap::isMarked(target)) {
                // A race! The GC marked the object in the meantime, so
                // pessimistically label it black again.
                target->setCellState(PossiblyBlack);
            }
        }
        return;
    }
    // The object is marked and was already visited: mark it grey again and
    // queue it so the collector revisits it.
    target->setCellState(PossiblyGrey);
    m_mutatorMarkStack.append(target);
}

The JIT compiler inlines the part of the slow path that rechecks the object’s state after doing a fence, since this helps keep the overhead low during GC. Moreover, our just-in-time compilers optimize the barrier further by removing barriers if storing values that the GC doesn’t care about, removing barriers on newly allocated objects (which must be white), clustering barriers together to amortize the cost of the fence, and removing redundant barriers if an object is stored to repeatedly.


When the barrier does append the object to the m_mutatorMarkStack, the object will get revisited eventually. The revisit could happen concurrently to the application. That’s important since we have seen programs retreat the wavefront enough that the total revisit pause would be too big otherwise.

Unlike advancing wavefront, retreating wavefront means forcing the collector to redo work that it has already done. Without some facilities to ensure collector progress, the collector might never finish due to repeated revisit requests from the write barrier. Riptide tackles this problem in two ways. First, we defer all revisit requests. Draining threads do not service any revisit requests until they have no other work to do. When an object is flagged for revisiting, it stays in the grey state for a while and will only be revisited towards the end of GC. This ensures that if an old object often has its fields overwritten with pointers to new objects, then the GC will usually only scan two snapshots’ worth of those fields: one snapshot whenever the GC visited the object first, and another towards the end when the GC gets around to servicing deferred revisits. Revisit deferral reduces the likelihood of runaway GC, but fully eliminating such pathologies is left to our scheduler.

Space-Time Scheduler

The bitter end of a retreating wavefront GC cycle is not pretty: just as the collector goes to visit the last object on the mark stack, some object that had already been visited gets written to, and winds up back on the mark stack. This can go on for a while, and before we had any mitigations we saw Riptide using 5x more memory than with synchronous collection. This death spiral happens because programs allocate a lot all the time and the collector cannot free any memory until it finishes marking. Riptide prevents death spirals using a scheduler that controls the application’s pace. We call it the space-time scheduler because it links the amount of time that the application gets to run for in a timeslice to the amount of space that the application has used by allocating in the collector’s headroom.

The space-time scheduler ensures that the retreating wavefront barrier cannot wreak havoc by giving the collector an unfair advantage: it will periodically stop the world for short pauses even when the collector could be running concurrently. It does this just so the collector can always outpace the application in case of a race. If this was meant as a garbage collector for servers, you could imagine providing the user with a bunch of knobs to control the schedule of these synthetic pauses. Different applications will have different ideal pause lengths. Applications that often write to old memory will retreat the collector’s wavefront a lot, and so they will need a longer pause to ensure termination. Functional-style programs tend to only write to newly allocated objects, so those could get away with a shorter pause. We don’t want web users or web developers to have to configure our collector, so the space-time scheduler adaptively selects a pause schedule.

To be correct, the scheduler must eventually pause the world for long enough to let the collector terminate. The space-time scheduler is based on a simple idea: the length of pauses increases during collection in response to how much memory the application is using.

The space-time scheduler selects the duration and spacing of synthetic pauses based on the headroom ratio, which is a measure of the amount of extra memory that the application has allocated during the concurrent collection. A concurrent collection is triggered by memory usage crossing the trigger threshold. Since the collector allows the application to keep running, the application will keep allocating. The space that the collector makes available for allocation during collection is called the headroom. Riptide is tuned for a max headroom that is 50% of the trigger threshold: so if the app needed to allocate 100MB to trigger a collection, its max headroom is 50MB. We want the collector to complete synchronously if we ever deplete all of our headroom: at that point it’s better for the system to pause and free memory than to run and deplete even more memory. The headroom ratio is simply the available headroom divided by the max headroom. The space-time scheduler will divide time into fixed timeslices, and the headroom ratio determines how much time the application is resumed for during that period.

The default tuning of our collector is that the collector timeslice is 2 ms, and the first C ms of it is given to the collector and the remaining M ms is given to the mutator. We always let the collector pause for at least 0.6 ms. Let H be the headroom ratio: 1 at the start of collection, and 0 if we deplete all headroom. With a 0.6 ms minimum pause and a 2 ms timeslice, we define M and C as follows:

M = 1.4 H
C = 2 – M
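As a quick sanity check of these formulas (times in milliseconds; the constants 1.4 and 2 come from the tuning above, and the helper names are just for illustration):

```cpp
#include <cassert>
#include <cmath>

// The space-time scheduler's timeslice split, directly from the formulas
// above: H is the headroom ratio in [0, 1], the timeslice is 2 ms, and the
// collector always gets at least 0.6 ms of it.
double mutatorMs(double headroomRatio) { return 1.4 * headroomRatio; }
double collectorMs(double headroomRatio) { return 2.0 - mutatorMs(headroomRatio); }
```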

For example, at the start of usual collection we will give 0.6 ms to the collector and then 1.4 ms to the application, but as soon as the application starts allocating, this window shifts. Aggressive applications, which both allocate a lot and write to old objects a lot, will usually end collection with the split being closer to 1 ms for the collector followed by 1 ms for the application.

Thanks to the space-time scheduler, the worst that an adversarial program could do is cause the GC to keep revisiting some object. But it can’t cause the GC to run out of memory, since if the adversary uses up all of the headroom then M becomes 0 and the collector gets to stop the world until the end of the cycle.

Obstruction-Free Double Collect Snapshot

Concurrent garbage collection means finding exciting new ways of side-stepping expensive synchronization. In traditional concurrent mark-sweep GCs, which focused on nicely-typed languages, the worst race was the one covered by the write barrier. But since this is JavaScript, we get to have a lot more fun.

JavaScript objects may have properties added to them at any time. The WebKit JavaScript object model has three features that make this efficient:

  • Each object has a structure ID: The first 32 bits of each object is its structure ID. Using a table lookup, this gives a pointer to the object’s structure: a kind of meta-object that describes how its object is supposed to look. The object’s layout is governed by its structure. Some objects have immutable structures, so for those we know that so long as their structure IDs stay the same, they will be laid out the same.
  • The structure may tell us that the object has inline storage. This is a slab of space in the object itself, left aside for JavaScript properties.
  • The structure may tell us about the object’s butterfly. Each object has room for a pointer that can be used to point to an overflow storage for additional properties that we call a butterfly. The butterfly is a bidirectional object that may store named properties to the left of the pointer and indexed properties to the right.
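
A rough model of these three pieces might look as follows (a Python sketch with invented names; the real WebKit layout is compact C++, not objects like these):

```python
# Toy model of WebKit's JavaScript object layout (names are ours).
structure_table = {}  # maps a 32-bit structure ID to its Structure

class Structure:
    def __init__(self, property_slots, immutable=True):
        self.property_slots = property_slots  # property name -> butterfly slot
        self.immutable = immutable

class JSObject:
    def __init__(self, structure_id, inline_storage=None, butterfly=None):
        self.structure_id = structure_id      # the "first 32 bits" of the object
        self.inline_storage = inline_storage  # slab for in-object properties
        # The butterfly stores named properties to the left of the pointer and
        # indexed properties to the right; modeled here as two lists.
        self.butterfly = butterfly or ([], [])

    def structure(self):
        # Table lookup from structure ID to the meta-object that
        # governs this object's layout.
        return structure_table[self.structure_id]
```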

It’s imperative that the garbage collector visit the butterfly using exactly the structure that corresponds to it; if the object has a mutable structure, the collector must likewise read the structure’s data in a state that matches that butterfly. The collector would crash if it tried to decode the butterfly using the wrong information.

To accomplish this, we use a very simple obstruction-free version of Afek et al.’s double collect snapshot. To handle the immutable structure case, we just ensure that the application uses this protocol to set both the structure and butterfly:

  1. Nuke the structure ID — this sets a bit in the structure ID to indicate to the GC that the structure and butterfly are changing.
  2. Set the butterfly.
  3. Set the new (decontaminated) structure ID — decontaminating means clearing the nuke bit.

Meanwhile the collector does this to read both the structure and the butterfly:

  1. Read the structure ID.
  2. Read the butterfly.
  3. Read the structure ID again, and compare to (1).

If the collector ever reads a nuked structure ID, or if the structure IDs read in (1) and (3) differ, then we know that we would have a butterfly-structure mismatch. But if neither of these conditions holds, then we are guaranteed that the collector will have a consistent structure and butterfly. See here for the proof.
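
The two protocols can be simulated in a few lines (a Python sketch; the nuke-bit position and names are ours, not WebKit’s actual encoding, and the fences are shown only as comments):

```python
NUKE_BIT = 1 << 31  # hypothetical position of the nuke bit

class Obj:
    def __init__(self, structure_id, butterfly):
        self.structure_id = structure_id
        self.butterfly = butterfly

def set_structure_and_butterfly(obj, new_structure_id, new_butterfly):
    obj.structure_id |= NUKE_BIT             # 1. nuke: warn concurrent readers
    # (on ARM, store-store fences -- dmb ishst -- order these stores)
    obj.butterfly = new_butterfly            # 2. set the butterfly
    obj.structure_id = new_structure_id & ~NUKE_BIT  # 3. decontaminated ID

def snapshot(obj):
    """Collector side: return (structure_id, butterfly), or None on a race."""
    first = obj.structure_id                 # 1. read the structure ID
    # (on ARM, load-load fences -- dmb ish -- order these loads)
    butterfly = obj.butterfly                # 2. read the butterfly
    second = obj.structure_id                # 3. re-read and compare
    if (first & NUKE_BIT) or first != second:
        return None  # obstruction detected: schedule a revisit
    return first, butterfly
```

A None result corresponds to the collector bailing out and letting the revisit scheduler try the object again later.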

Harder still is the case where the structure is mutable. In this case, we ensure that the protocol for setting the fields in the structure is to set them after the structure is nuked but before the new one is installed. The collector reads those fields before/after as well. This allows the collector to see a consistent snapshot of the structure, butterfly, and a bit inside the structure without using any locking. All that matters is that the stores in the application and the loads in the collector are ordered. We get this for free on x86, and on ARM we use store-store fences in the application (dmb ishst) and load-load fences in the collector (dmb ish).

This algorithm is said to be obstruction-free because it will complete in O(1) time no matter what kind of race it encounters, but if it does encounter a race then it’ll tell you to try again. Obstruction-free algorithms need some kind of contention manager to ensure that they do eventually complete. The contention manager must provably maximize the likelihood that the obstruction-free algorithm will eventually run without any race. For example, this would be a sound contention manager: exponential back-off in which the actual back-off amount is a random number between 0 and X where X increases exponentially on each try. It turns out that Riptide’s retreating wavefront revisit scheduler is already a natural contention manager. When the collector bails on visiting an object because it detected a race, it schedules revisiting of that object just as if a barrier had executed. So, the GC will visit any object that encountered such a race again anyway. The GC will visit the object much later and the timing will be somewhat pseudo-random due to OS scheduling. If an object did keep getting revisited, eventually the space-time scheduler will increase the collector’s synthetic pause to the point where the revisit will happen with the world stopped. Since there are no safepoints possible in any of the structure/butterfly atomic protocols, stopping the world ensures that the algorithm will not be obstructed.
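
For reference, the randomized exponential back-off mentioned above as a sound contention manager could look like this (an illustrative sketch; Riptide itself relies on the revisit scheduler instead of explicit back-off):

```python
import random

def backoff_bounds(tries, base=1.0):
    """Upper bounds X that grow exponentially with the attempt number."""
    return [base * (2 ** i) for i in range(tries)]

def backoff_delays(tries, base=1.0, rng=random):
    """Each retry waits a random amount between 0 and the current bound X,
    where X doubles on each try."""
    return [rng.uniform(0.0, bound) for bound in backoff_bounds(tries, base)]
```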

Embedded WTF Locks

The obstruction-free object snapshot is great, but it’s not scalable — from a WebKit developer sanity standpoint — to use it everywhere. Fortunately, we have been adding more concurrency to WebKit for a while, so we already had a custom locking infrastructure in WTF (Web Template Framework) to make this easier. One of the goals of WTF locks was to fit locks in two bits so that we could one day stuff a lock into the header of each JavaScript object. Many of the loony corner-case race conditions in the concurrent garbage collector happen on paths where acquiring a lock is fine, particularly if that lock has a great inline fast path like WTF locks. So, all JavaScript objects in WebKit now have a fast, adaptive, and fair WTF lock embedded in two bits of what is otherwise the indexingType byte in the object header. This internal lock is used to protect mutations to all sorts of miscellaneous data structures. The collector will hold the internal lock while visiting those objects.

Locking should always be used with care since it can be a slow-down. In Riptide, we only use locking to protect uncommon operations. Additionally, we use an optimized lock implementation to reduce the cost of synchronization even further.
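
The idea of squeezing a lock into two bits of an existing header byte can be sketched as follows (a Python model of the bit packing only; the bit assignments are hypothetical and the real implementation would use an atomic compare-and-swap, which is not shown):

```python
# Two hypothetical lock bits sharing a byte with the indexing type.
IS_LOCKED_BIT  = 1 << 6  # lock currently held
HAS_PARKED_BIT = 1 << 7  # a waiter is parked on this lock
INDEXING_MASK  = 0x3F    # remaining six bits: the indexing type

def try_lock(header_byte):
    """Fast path: returns (acquired, new_header). A real implementation
    performs this transition with a compare-and-swap."""
    if header_byte & IS_LOCKED_BIT:
        return False, header_byte  # contended: fall to the parking slow path
    return True, header_byte | IS_LOCKED_BIT

def unlock(header_byte):
    return header_byte & ~(IS_LOCKED_BIT | HAS_PARKED_BIT)

def indexing_type(header_byte):
    return header_byte & INDEXING_MASK  # untouched by locking
```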

Algorithm Summary

Riptide is an improvement to WebKit’s collector and retains most of the things that made the old algorithm great. The changes that transformed WebKit’s collector were landed over the past six months, starting with the painful work of removing WebKit’s previous use of copying. Riptide combines Guy Steele’s classic retreating wavefront write barrier with a mature sticky-mark-sweep collector and lots of concurrency tricks to get a useful combination of high GC throughput and low GC latency.

Related Work

The paper that introduced retreating wavefront did not claim to implement the idea — it was just a thought experiment. We are aware of two other implementations of retreating wavefront. The oldest is the BDW (Boehm-Demers-Weiser) collector’s incremental mode. That collector uses a page-granularity revisit because it relies entirely on page faults to trigger the barrier. The collector makes pages that have black objects read-only and then any write to that page triggers a fault. The fault handler makes the page read-write and logs the entire page for revisiting. Riptide uses a software barrier that precisely triggers revisiting only for the object that got stored to. The BDW collector uses page faults for a good reason: so that it can be used as a plug-in component to any kind of language environment. The compiler doesn’t have to be aware of retreating wavefronts or generations since the BDW collector will be sure to catch all of the writes that it cares about. But in WebKit we are happy to have everything tightly integrated and so Riptide relies on the rest of WebKit to use its barrier. This was not hard since the new barrier is almost identical to our old one.

Another user of retreating wavefront is ChakraCore. It appears to have both a page-fault-based barrier like BDW and a software card-marking barrier that can flag 128-byte regions of memory as needing revisit. (For a good explanation of card-marking, albeit in a different VM, see here.) Riptide uses an object-granularity barrier instead. We tried card-marking, but found that it was slower than our barrier unless we were willing to place our entire heap in a single large virtual memory reservation. We didn’t want our memory structure to be that deterministic. All retreating wavefront collectors require a stop-the-world snapshot-at-the-end increment that confirms that there is no more marking left to do. Both BDW and ChakraCore perform all revisiting during the snapshot-at-the-end. If there is a lot of revisiting work, that increment could take a while. That risk is particularly high with card-marking or fault-based barriers, in which a write to a single object usually causes the revisiting of multiple objects. Riptide can revisit objects with the application resumed. Riptide can also resume the application in between executions of custom constraints. Riptide is tuned so that the snapshot-at-the-end is only confirming that there is no more work, rather than spending an unbounded amount of time creating and chasing down new work.
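
To make the granularity difference concrete, here is a sketch of the bookkeeping each barrier style implies (illustrative Python; the names are ours):

```python
CARD_SHIFT = 7  # 2**7 = 128-byte cards, as in the ChakraCore barrier above

def card_for(address, heap_base):
    """Card marking: a store dirties the whole 128-byte region containing
    the stored-to address."""
    return (address - heap_base) >> CARD_SHIFT

def objects_to_revisit_card(dirty_card, objects):
    """The collector must revisit every object overlapping a dirty card.
    objects: list of (start_offset, size) pairs relative to heap_base."""
    lo, hi = dirty_card << CARD_SHIFT, (dirty_card + 1) << CARD_SHIFT
    return [o for o in objects if o[0] < hi and o[0] + o[1] > lo]

def objects_to_revisit_precise(stored_to_object):
    """Object-granularity barrier (as in Riptide): just the stored-to
    object is scheduled for revisiting."""
    return [stored_to_object]
```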

Instead of retreating wavefront, most incremental, concurrent, and real-time collectors use some kind of advancing wavefront barrier. In those kinds of barriers, the application marks the objects it interacts with under certain conditions. Baker’s barrier marks every pointer you load from the heap. Dijkstra’s barrier marks every pointer you store into the heap. Yuasa’s barrier marks every pointer you overwrite. All of these barriers advance the collector’s wavefront in the sense that they reduce the amount of work that the collector will have to do — the thinking goes that the collector would have marked the object anyway so the barrier is helping. Since these collectors usually allocate objects black during collection, marking objects will not postpone when the collector can finish. This means that advancing wavefront collectors will mark all objects that were live at the very beginning of the cycle and all objects allocated during the cycle. Keeping objects allocated during the GC cycle (which may be long) is called floating garbage. Retreating wavefront collectors largely avoid floating garbage since in those collectors an object can only be marked if it is found to be referenced from another marked object.
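
The three advancing-wavefront barrier styles named above can be summarized in a short sketch (Python for illustration; `mark` stands in for the collector’s marking hook):

```python
marked = set()

def mark(value):
    if value is not None:
        marked.add(value)

def dijkstra_store(obj, field, value):
    """Dijkstra-style: mark every pointer stored into the heap."""
    mark(value)
    obj[field] = value

def yuasa_store(obj, field, value):
    """Yuasa-style (deletion barrier): mark every pointer overwritten."""
    mark(obj.get(field))
    obj[field] = value

def baker_load(obj, field):
    """Baker-style: mark every pointer loaded from the heap."""
    value = obj.get(field)
    mark(value)
    return value
```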

Advancing wavefront barriers are not a great match for generational collection. The generational barrier isn’t going to overlap with an advancing wavefront barrier the way that Riptide’s, ChakraCore’s, and BDW’s do. This means double the barrier costs. Also, in an advancing wavefront generational collector, eden collections have to be careful to ensure that their floating garbage doesn’t get promoted. This requires distinguishing between an object being marked for survival versus being marked for promotion. For example, the Domani, Kolodner, Petrank collector has a “yellow” object state and special color-toggling machinery to manage this state, all so that it does not promote floating garbage. The Frampton, Bacon, Cheng, and Grove version of the Metronome collector maintains three nurseries to gracefully move objects between generations, and in their collector the eden collections and full collections can proceed concurrently to each other. While those collectors have incredible features, they are not in widespread use, probably because of increased baseline costs due to extra bookkeeping and extra barriers. To put in perspective how annoying the concurrent-generational integration is, many systems like V8 and HotSpot avoid the problem by using synchronous eden collections. We want eden collections to be concurrent because although they are usually fast, we have no bound on how long they could take in the worst case. Not having floating garbage is another reason why it’s so easy for retreating wavefront collectors to do concurrent eden collection: there’s no need to invent states for black-but-new objects.

Using retreating wavefront means we don’t get the advancing wavefront’s GC termination guarantee. We make up for it by having more aggressive scheduling. It’s common for advancing wavefront collectors to avoid all global pauses because all of collection is concurrent. In the most aggressive advancing wavefront concurrent collectors, the closest thing to a “pause” is that at some point each thread must produce a stack scan. Even if all of Riptide’s algorithms were concurrent, we would still have to artificially stop the application simply to ensure termination. That’s a trade-off that we’re happy with, since we get to control how long these synthetic pauses are.

In many ways, Riptide is a classic mark-sweep collector. Using simple segregated storage is very common, and variants of this technique can be found in Jikes RVM, the Metronome real-time garbage collector, the BDW collector, the Bartok concurrent mark-sweep collector, and probably many others. Combining mark-sweep with bump-pointer is not new; Immix is another way to do it. Our bump’n’pop allocator looks most like Hoard’s, and the technique was also used in Vam and reaps. Our conservative scan is almost like what the BDW collector does. Sticky mark bits are also used in BDW, Jikes RVM, and ChakraCore.


We enabled Riptide once we were satisfied that it had no major remaining regressions (in stability, performance, or memory usage) and that it demonstrated an improvement on some test of GC pauses. Enabling it now exposes it to a lot of testing as we continue to tune and validate the collector. This section summarizes what we know about Riptide’s performance so far.

The synchronization features that enable concurrent collection were landed in many revisions over a six month period starting in July 2016. This section focuses on the performance boost that we get once we enable Riptide. Enabling Riptide means that draining will resume the application and allow the application and collector to run alongside each other. The application will still experience pauses: both synthetic pauses from the space-time scheduler and mandatory pauses for things like DOM constraint evaluation. The goal of this evaluation is to give a glimpse of what Riptide can do for observed pauses.

The test that did the best job of demonstrating our garbage collector’s jankiness was the Octane SplayLatency test. This test is also included in JetStream. WebKit was previously not the best at either version of this test, so we wanted a GC that would give us a big improvement. The Octane version of this test reports the reciprocal of the root-mean-square, which rewards uniform performance. JetStream reports the reciprocal of the average of the worst 0.5% of samples, which rewards fast worst-case performance. We tuned Riptide on the JetStream version of this test, but we show results from both versions.
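
The two scoring rules can be sketched like this (illustrative Python; the benchmarks apply additional scaling, so only the shape of the computation is shown):

```python
def octane_style(samples_ms):
    """Reciprocal of the root-mean-square of pause samples:
    rewards uniformly low latency."""
    rms = (sum(s * s for s in samples_ms) / len(samples_ms)) ** 0.5
    return 1.0 / rms

def jetstream_style(samples_ms):
    """Reciprocal of the average of the worst 0.5% of samples:
    rewards fast worst-case behaviour."""
    worst = sorted(samples_ms, reverse=True)
    count = max(1, len(worst) // 200)  # worst 0.5%
    return 1.0 / (sum(worst[:count]) / count)
```

A single 10 ms hiccup among thousands of 1 ms iterations barely moves the RMS but dominates the worst-0.5% average, which is why the JetStream variant is the harsher test of GC pauses.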

The performance data was gathered on a 15″ MacBook Pro with a 2.8 GHz Intel Core i7 and 16GB RAM. This machine has four cores, and eight logical CPUs thanks to hyperthreading. We took care to quiet down the machine before running benchmarks, by closing almost all apps, disconnecting from the network, disabling Spotlight, and disabling ReportCrash. Our GC is great at taking advantage of hyperthreaded CPUs, so it runs eight draining threads on this machine.

The figure above shows that Riptide improves the JetStream/splay-latency score by a factor of five.

The figure above shows that Riptide improves the Octane/SplayLatency score by a factor of 2.5.

The chart above shows what is happening over 10,000 iterations of the Splay benchmark: without Riptide, an occasional iteration will pause for >10 ms due to garbage collection. Enabling Riptide brings these hiccups below 3 ms.

You can run this benchmark interactively if you want to see how your browser’s GC performs. That version will plot the time per iteration in milliseconds over 2,000 iterations.

We continue to tune Riptide as we validate it on a larger variety of workloads. Our goal is to continue to reduce pause times. That means making more of the collector concurrent and improving the space-time scheduler. Continued tuning is tracked by bug 165909.


This post describes the new Riptide garbage collector in WebKit. Riptide does most of its work off the main thread, allowing for a significant reduction in worst-case pause times. Enabling Riptide leads to a five-fold improvement in latency as reported by the JetStream/splay-latency test. Riptide is now enabled by default in WebKit trunk and you can try it out in Safari Technology Preview 21. Please try it out and file bugs!

By Filip Pizlo at January 20, 2017 06:00 PM

December 21, 2016

Frédéric Wang: ¡Igalia is hiring!

Igalia WebKit

If you read this blog, you probably know that I joined Igalia early this year, where I have been involved in projects related to free software and web engines. You may, however, not be aware that Igalia has a flat & cooperative structure where all decisions (projects, events, recruitment, company agreements, etc.) are voted on by members of an assembly. In my opinion, such an organization makes for better decisions and avoids frustration, compared to more traditional hierarchical organizations.

After several months on staff, I finally applied to become an assembly member, and my application was approved in November! Hence I attended my first assembly last week, where I got access to all the internal information and was also able to vote… In particular, we approved the opening of two new job positions. If you are interested in state-of-the-art free software projects and are willing to join a company with great human values, you should definitely consider applying!

December 21, 2016 11:00 PM

December 19, 2016

Miguel A. Gómez: WPE: Web Platform for Embedded

Igalia WebKit

WPE is a new WebKit port optimized for embedded platforms that can support a variety of display protocols like Wayland, X11 or other native implementations. It is the evolution of the port formerly known as WebKitForWayland, and it was born as part of a collaboration between Metrological and Igalia as an effort to have a WebKit port running efficiently on STBs.

QtWebKit has been unmaintained upstream since the Qt project decided to switch to Blink, so relying on a dead port for the future of STBs was a no-go. Meanwhile, WebKitGTK+ has remained maintained and alive upstream, which made it a perfect basis for developing this new port, removing the GTK+ dependency and trying Wayland as a replacement for the X server. WebKitForWayland was born!

During a second iteration, we were able to make the Wayland dependency optional and change the port to use platform-specific libraries to implement window drawing and management. This is very handy for those platforms where Wayland is not available. Because of this, the port was renamed to reflect that Wayland is just one of the several supported backends: welcome, WPE!

WPE has been designed with simplicity and performance in mind. Hence, we developed just a fullscreen browser with no tabs but with multimedia support, as small (in both memory usage and disk space) and light as possible.

Current repositories

We are now in the process of moving from the WebKitForWayland repositories to what will be the WPE final ones. This is why this paragraph is about “current repositories”, and why the names include WebKitForWayland instead of WPE. This will change at some point, and expect a new post with the details when it happens. For now, just bear in mind that where it says WebKitForWayland it really refers to WPE.

  • Obviously, we use the main WebKit repository git:// as our source for the WebKit implementation.
  • Then there are some repositories on GitHub to host the specific WPE bits. These repositories include the dependencies needed to build WPE together with the modifications we made to WebKit for this new port. This is the main WPE repository, and it can easily be built for the desktop and run inside a Wayland compositor. The build and run instructions can be checked here. The mission of these repositories is to be the WPE reference repository, containing the differences needed from upstream WebKit that are common to all the possible downstream implementations. Every release cycle, the changes in upstream WebKit are merged into this repository to keep it updated.
  • And finally we have the Metrological repositories. As in the previous case, we added the dependencies we needed together with the WebKit code. This third repository’s mission is to hold the Metrological-specific changes to both WPE and its dependencies, and it is also updated from the main WPE repository each release cycle. This version of WPE is meant to be used inside Metrological’s buildroot configuration, which can build images for the several target platforms they use. These platforms include all versions of the Raspberry Pi boards, which are the ones we use as reference platforms, especially the Raspberry Pi 2, together with several industry-specific boards from chip vendors such as Broadcom and Intel.


As I mentioned before, building and running WPE from the main repository is easy and the instructions can be found here.

Building an image for a Raspberry Pi is quite easy as well, just a bit more time-consuming because of the cross-compiling and the extra dependencies. There are currently a couple of configs in Metrological’s buildroot that can be used and that don’t depend on Metrological-specific packages. Here are the commands you need to run in order to test it:

    • Clone the buildroot repository:
      git clone
    • Select the buildroot configuration you want. Currently you can use raspberrypi_wpe_defconfig to build for the RPi1 and raspberrypi2_wpe_defconfig to build for the RPi2. This example builds for the RPi2; to build for the RPi1, just change this command to use the appropriate config. The rest of the commands are the same for both cases.
      make raspberrypi2_wpe_defconfig
    • Build the config:
      make
    • And then go for a coffee because the build will take a while.
    • Once the build is finished you need to deploy the result to the RPi’s card (SD for the RPi1, microSD for the RPi2). This card must have 2 partitions:
      • boot: fat32 file system, with around 100MB of space.
      • root: ext4 file system with around 500MB of space.
    • Mount the SD card partitions on your system and deploy the build result (stored in output/images) to them. The deploy commands assume that the boot partition was mounted on /media/boot and the root partition on /media/rootfs:
      cp -R output/images/rpi-firmware/* /media/boot/
      cp output/images/zImage /media/boot/
      cp output/images/*.dtb /media/boot/
      tar -xvpsf output/images/rootfs.tar -C /media/rootfs/
    • Remove the card from your system and plug it into the RPi. Once booted, ssh into it, and the browser can easily be launched:


I’m planning to write a dedicated post about the technical details of the project, where I’ll cover this in more depth, but briefly, these are some of the features that can be found in WPE:

    • support for the common HTML5 features: positioning, CSS, CSS3D, etc
    • hardware accelerated media playback
    • hardware accelerated canvas
    • WebGL
    • MSE
    • MathML
    • Forms
    • Web Animations
    • XMLHttpRequest
    • and many other features supported by WebKit. If you are interested in the complete list, feel free to browse to and check it yourself!


Current adoption status

We are proud to see that, thanks to Igalia’s effort together with Metrological, WPE has been selected to replace QtWebKit inside the RDK stack, and that it has also been adopted by some big cable operators like Comcast (surpassing other options like Chromium, Opera, etc.). Also, several other STB manufacturers have shown interest in putting WPE on their boards, which will lead to new supported platforms and more people contributing to the project.

This is really great news for WPE, and we hope to build an awesome community around the project (both companies and individuals) to collaborate on making the engine even better!

Future developments

Of course, periodically merging upstream changes while adding new functionality and supported platforms to the engine is a very important part of what we are planning to do with WPE. Both Igalia and Metrological have a lot of ideas for future work: finishing WebRTC and EME support, improving the graphics pipeline, adding new APIs, improving security, etc.

But besides that, there’s also a very important refactoring being performed: upstreaming the code to the main WebKit repository as a new port. Basically, this means that the main WPE repository will be removed at some point, and its content will be integrated into WebKit. Together with this, we are putting the pieces in place to have a continuous build and testing system, as the rest of the WebKit ports have, to ensure that the code always builds and that the layout tests pass properly. This will greatly improve the quality and robustness of WPE.

So, when we are done with those changes, the repository structure will be:

  • The WebKit main repository, with most of the code integrated there
  • Clients/users of WPE will have their own repositories with their specific code, and they will merge the main repository’s changes directly. This is the case for the Metrological repository.
  • A new third repository that will store WPE’s rendering backends. This code cannot be upstreamed to the WebKit repository as in many cases the license won’t allow it. So only a generic backend will be upstreamed to WebKit while the rest of the backends will be stored here (or in other client specific repositories).

By magomez at December 19, 2016 02:39 PM

December 15, 2016

Claudio Saavedra: Thu 2016/Dec/15

Igalia WebKit

Igalia is hiring. We're currently interested in Multimedia and Chromium developers. Check the announcements for details on the positions and our company.

December 15, 2016 05:13 PM

December 09, 2016

Frédéric Wang: STIX Two in Gecko and WebKit

Igalia WebKit

On the 1st of December, the STIX Fonts project announced the release of STIX 2. If you have never heard of this project, it is described as follows:

The mission of the Scientific and Technical Information Exchange (STIX) font creation project is the preparation of a comprehensive set of fonts that serve the scientific and engineering community in the process from manuscript creation through final publication, both in electronic and print formats.

This sounds like a very exciting goal, but the way it has been pursued has made the STIX project infamous for its numerous delays, its poor or confusing packaging, its delivery of math fonts with too many bugs to be usable, its lack of openness & communication, and its bad handling of third-party feedback & contributions.

Because of these laborious travels towards unsatisfactory releases, some snarky people claim that the project was actually named after Styx (Στύξ) the river from Greek mythology that one has to cross to enter the Underworld. Or that the story of the project is summarized by Baudelaire’s verses from L’Irrémédiable:

Une Idée, une Forme, un Être
Parti de l’azur et tombé
Dans un Styx bourbeux et plombé
Où nul œil du Ciel ne pénètre ;

More seriously, the good news is that the STIX Consortium finally released text fonts with a beautiful design and a companion math font that is usable in math rendering engines such as Word Processors, LaTeX and Web Engines. Indeed, WebKit and Gecko have supported OpenType-based MathML layout for more than three years (with recent improvements by Igalia) and STIX Two now has correct OpenType data and metrics!

Of course, the STIX Consortium did not address all the technical or organizational issues that have made its reputation but I count on Khaled Hosny to maintain his more open XITS fork with enhancements that have been ignored for STIX Two (e.g. Arabic and RTL features) or with fixes of already reported bugs.

As Jacques Distler wrote in a recent blog post, OS vendors should ideally bundle the STIX Two fonts in their default installation. For now, users can download and install the OTF fonts themselves. Note however that the STIX Two archive contains WOFF and WOFF2 fonts that page authors can use as web fonts.

I just landed patches in Gecko and WebKit so that future releases will try and find STIX Two on your system for MathML rendering. However, you can already do the proper font configuration via the preference menu of your browser:

  • For Gecko-based applications (e.g. Firefox, Seamonkey or Thunderbird), go to the font preference and select STIX Two Math as the “font for mathematics”.
  • For WebKit-based applications (e.g. Epiphany or Safari) add the following rule to your user stylesheet: math { font-family: "STIX Two Math"; }.

Finally, here is a screenshot of MathML formulas rendered by Firefox 49 using STIX Two:

Screenshot of MathML formulas rendered by Firefox using STIX 2

And the same page rendered by Epiphany 3.22.3:

Screenshot of MathML formulas rendered by Epiphany using STIX 2

December 09, 2016 11:00 PM

November 21, 2016

A tale of cylinders and shadows

Gustavo Noronha

Like I wrote before, we at Collabora have been working on improving WebKitGTK+ performance for customer projects, such as Apertis. We took the opportunity brought by recent improvements to WebKitGTK+ and GTK+ itself to make the final leg of drawing contents to screen as efficient as possible. Then we went on to investigate why so much CPU was still being used in some of our test cases.

The first weird thing we noticed is performance was actually degraded on Wayland compared to running under X11. After some investigation we found a lot of time was being spent inside GTK+, painting the window’s background.

Here’s the thing: the problem only showed under Wayland because in that case GTK+ is responsible for painting the window decorations, whereas in the X11 case the window manager does it. That means all of that expensive blurring and rendering of shadows fell on GTK+’s lap.

During the web engines hackfest, a couple of months ago, I delved deeper into the problem and noticed, with Carlos Garcia’s help, that it was even worse when HiDPI displays were thrown into the mix. The scaling made things unbearably slower.

You might also be wondering why would painting of window decorations be such a problem, anyway? They should only be repainted when a window changes size or state anyway, which should be pretty rare, right? Right, that is one of the reasons why we had to make it fast, though: the resizing experience was pretty terrible. But we’ll get back to that later.

So I dug into that, made a few attempts at understanding the issue, and came up with a patch showing that applying the blur was way too expensive. After a bit of discussion with our own Pekka Paalanen and Benjamin Otte, we found the root cause: a fast path in pixman was not being hit due to the difference in scale factors between the shadow mask and the target surface. We made the shadow mask’s scale the same as the surface’s and voilà, sane performance.

I keep talking about this being a performance problem, but how bad was it? In the following video you can see how huge the impact in performance of this problem was on my very recent laptop with a HiDPI display. The video starts with an Epiphany window running with a patched GTK+ showing a nice demo the WebKit folks cooked for CSS animations and 3D transforms.

After a few seconds I quickly alt-tab to the version running with unpatched GTK+ – I made the window the exact size and position of the other one, so that it is under the same conditions and the difference can be seen more easily. It is massive.

Yes, all of that slow down was caused by repainting window shadows! OK, so that solved the problem for HiDPI displays, made resizing saner, great! But why is GTK+ repainting the window even if only the contents are changing, anyway? Well, that turned out to be an off-by-one bug in the code that checks whether the invalidated area includes part of the window decorations.

If the area being changed spanned the whole window width, say, it would always cause the shadows to be repainted. By fixing that, we now avoid all of the shadow drawing code when we are running full-window animations such as the CSS poster circle or gtk3-demo’s pixbufs demo.

As you can see in the video below, the gtk3-demo running with the patched GTK+ (the one on the right) is using a lot less CPU and has smoother animation than the one running with the unpatched GTK+ (left).

Pretty much all of the overhead caused by window decorations is gone in the patched version. It is still using quite a bit of CPU to animate those pixbufs, though, so some work still remains. Also, the overhead added to integrate cairo and GL rendering in GTK+ is pretty significant in the WebKitGTK+ CSS animation case. Hopefully that’ll get much better from GTK+ 4 onwards.

By kov at November 21, 2016 05:04 PM

November 14, 2016

Manuel Rego: Recap of the Web Engines Hackfest 2016

Igalia WebKit

This is my personal summary of the Web Engines Hackfest 2016 that happened at Igalia headquarters (in A Coruña) during the last week of September.

The hackfest is a special event, because the target audience is implementers of the different web browser engines. The idea is to bring browser hackers together for a few days to discuss different topics and work together on some of them. This year we almost reached 40 participants, the largest number of people attending the hackfest since we started it 8 years ago. It was also really nice to have people from the different communities around the open web platform.

One of the breakout sessions of the Web Engines Hackfest 2016


Although the main focus of the event is hacking time and the breakout sessions to discuss the different topics, we took advantage of having great developers around to arrange a few short talks. I’ll do a brief review of this year’s talks, which have already been published on a YouTube playlist (thanks to Chema and Juan for the amazing recording and video editing).

CSS Grid Layout

In my case, as usual lately, my focus was the CSS Grid Layout implementation in Blink and WebKit, which Igalia is doing as part of our collaboration with Bloomberg. The good thing was that Christian Biesinger from Google was present at the hackfest; he’s usually the Blink developer reviewing our patches, so it was really nice to have him around to discuss and clarify some topics.

My colleague Javi already wrote a blog post about the hackfest with lots of details about the Grid Layout work we did. Probably the most important bit is that we’re getting really close to shipping the feature. The spec is now a Candidate Recommendation (despite a few issues that still need to be clarified) and we’ve been focusing lately on finishing the last features and fixing interoperability issues with the Gecko implementation. We’ve just sent the “Intent to ship” mail to blink-dev; if everything goes fine, Grid Layout might be enabled by default in Chromium 57, which will be released around March 2017.


MathML

It’s worth mentioning the work performed by Behdad, Fred and Khaled adding support in HarfBuzz for the OpenType MATH table. Again, Fred wrote a detailed blog post about all this work some weeks ago.

We also had the chance to discuss with more Googlers the possibilities of bringing MathML back into Chromium. We showed the status of the experimental branch created by Fred and explained the improvements we’ve been making to the WebKit implementation. Both Gecko and WebKit now follow the implementation note, and the tests have been upstreamed into the W3C Web Platform Tests repository. Let’s see how all this evolves in the future.


Last, but not least, as one of the event organizers, I have to say thanks to everyone attending the hackfest; without all of you the event wouldn’t make any sense. I also want to acknowledge the support of the hackfest sponsors: Collabora, Igalia and Mozilla. And I’d like to give kudos to Igalia for hosting and organizing the event one more year. Looking forward to the 2017 edition!

Web Engines Hackfest 2016 sponsors: Collabora, Igalia and Mozilla

Igalia 15th Anniversary & Summit

Just to close this blog post let me talk about some extra events that happened on the last week of September just after the Web Engines Hackfest.

Igalia was celebrating its 15th anniversary with several parties during the week, one of them was in the last day of the hackfest at night with a cool live concert included. Of course hackfest attendees were invited to join us.

Igalia 15th Anniversary Party (picture by Chema Casanova)

Then, on the weekend, Igalia arranged a new summit. We usually do two per year and, in my humble opinion, they’re really important for Igalia as a whole. The flat structure is based on trust in our peers, so spending a few days a year all together is wonderful. It allows us to get to know each other better while having a good time with our colleagues. And I’m sure they’re very useful for newcomers too, helping them understand our company culture.

Igalia Fall Summit 2016 (picture by Alberto Garcia)

And that’s all for this post, let’s hope you didn’t get bored. Thanks for reading this far. 😊

November 14, 2016 11:00 PM

November 10, 2016

Xabier Rodríguez Calvar: Web Engines Hackfest 2016

Igalia WebKit

From September 26th to 28th we celebrated the 2016 edition of the Web Engines Hackfest at the Igalia HQ. This year we broke all records and got participants from the three main companies behind the three biggest open source web engines: Mozilla, Google and Apple. Of course, it was not only them; we had some other companies, and ourselves. I was an active part of the organization, and I think that not only did we not get any complaints, people were comfortable and happy all around.

We had several talks (I included the slides and YouTube links):

We had lots and lots of interesting hacking and we also had several breakout sessions:

  • WebKitGTK+ / Epiphany
  • Servo
  • WPE / WebKit for Wayland
  • Layout Models (Grid, Flexbox)
  • WebRTC
  • JavaScript Engines
  • MathML
  • Graphics in WebKit

What I did during the hackfest was work with Enrique and Žan to advance the review of our downstream implementation of Media Source Extensions (MSE), in order to land it as soon as possible, and I can proudly say that we already did (we didn’t finish at the hackfest but managed to do it afterwards). We broke the bots and pissed off Michael and Carlos, but we managed to deactivate it by default and continue working on it upstream.

So, summing up: from my point of view, and not only because I was part of the organization at Igalia but also based on other people’s opinions, I think the hackfest was a success, and I think we will continue as we are, or maybe grow a bit (no spoilers!).

Finally I would like to thank our gold sponsors Collabora and Igalia and our silver sponsor Mozilla.

By calvaris at November 10, 2016 08:59 AM

October 31, 2016

Manuel Rego: My experience at W3C TPAC 2016 in Lisbon

Igalia WebKit

At the end of September I attended W3C TPAC 2016 in Lisbon together with my Igalian fellows Joanie and Juanjo. TPAC is where all the people working in the different W3C groups meet for a week to discuss tons of topics around the Web. It was my first time at a W3C event, so I had the chance to meet a lot of amazing people there that I’d been following on the internet for a long time.

Igalia booth

Like last year, Igalia was present at the conference, and we had a nice exhibitor booth just in front of the registration desk. Most of the time my colleague Juanjo was there, explaining Igalia and the work we do to the people who came to the booth.

Igalia Booth at W3C TPAC 2016 (picture by Juan José Sánchez)

In the booth we were showcasing some videos of the latest standards implementations we’ve been doing upstream (like CSS Grid Layout, WebRTC, MathML, etc.). In addition, we were also showing a few demos of our WebKit port called WPE, which has been optimized to run on low-end devices like the Raspberry Pi.

It’s really great to have the chance to explain the work we do around browsers, and to see that there are quite a lot of people interested in it. In this regard, Igalia is a peculiar consultancy that can help companies develop standards upstream, contributing both to the implementations and to the specifications’ evolution and the discussions inside the different standards bodies. Of course, don’t hesitate to contact us if you think we can help you achieve similar goals in this field.

CSS Working Group

During TPAC I attended the CSS WG meetings as an observer. It was really nice to see that a lot of people there appreciate the work we do around CSS Grid Layout as part of our collaboration with Bloomberg: not only the implementation effort we’re leading in Blink and WebKit, but also the contributions we make to the spec itself.

New ideas for CSS Grid Layout

There was not a lot of discussion about Grid Layout during the meetings, just a few issues that I’ll explain later. However, Jen Simmons did a very cool presentation with a proposal for a new regions specification.

As you probably remember, one of the main concerns regarding the CSS Regions spec is the need for dummy divs, which are used to flow the text into. Jen was suggesting that we could use CSS Grids to solve that. Grids create boxes from CSS into which you place elements from the DOM; the idea is that if you could reference those boxes from CSS, then content could flow there without the need for dummy nodes. This was linked with some other new ideas to improve CSS Grid Layout:

  • Apply backgrounds and borders to grid cells.
  • Skip cells during auto-placement.

All this is already possible nowadays using dummy div elements (provided you use a browser with Regions and Grid support, like Safari Technology Preview). However, the idea would be to achieve the same thing without any empty items, by referring to the cells directly from CSS.

This of course needs further discussion and won’t be part of level 1 of the CSS Grid Layout spec, but it’d be really nice to get some of these things included in future versions. I discussed this with Jen, and some of them (like skipping cells during auto-placement) shouldn’t be hard to implement. Finally, this reminded me of a discussion we had two years ago at the Web Engines Hackfest 2014 with Mihnea Ovidenie.

Last issues on CSS Grid Layout

After the meetings I had the chance to discuss some issues one on one with fantasai (one of the Grid Layout spec editors).

One of the topics was discussed during the CSS WG meeting: how to handle percentages inside Grid Layout when they need to be resolved in an intrinsic size situation. The question was whether percentage tracks and percentage gaps should resolve the same way or differently; the group agreed that they should work exactly the same. However, there is something else, as Firefox computes percentages for margins differently from the other browsers. I tried to explain all this in a detailed issue for the CSS WG. I really think we only have one option here, but it’ll take a while until we have a final resolution on this.

Another issue that I think is still open is the minimum size of grid items. I believe the text in the spec still needs some tweaks but, thanks to fantasai, we managed to clarify what the expected behavior should be in the common case. However, there are still some open questions and an ongoing discussion regarding images.

Finally, it’s worth mentioning that, just after TPAC, the Grid Layout spec transitioned to Candidate Recommendation (CR), which means it’s getting stable enough to finish the implementations and release it to the wild. Hopefully these open issues will be fixed pretty soon.


Houdini

I also attended the first day of Houdini meetings. I was lucky, as they started with the Layout API (the one I was most interested in). It’s clear that Google is pushing hard for this to happen; all the new Layout NG project inside Blink seems quite related to the new Houdini APIs. It looks like a nice thing to have but, at first sight, it seems quite hard to get right, mainly due to the big complexity of layout on the web.

On the bright side, the Painting API transitioned to CR. Blink already has some prototype implementations that can be used to create cool demos. It’s really amazing to see the Houdini effort already starting to produce some real results.


It’s always a pleasure to visit Portugal and Lisbon: a wonderful country and city. One of the conference afternoons I had the chance to go for a walk from the congress center to the lovely Belém Tower. It was clear I couldn’t miss the chance to go there during such a big conference; you don’t always have TPAC so close to home.

Lisbon wall painting

Overall it was a really nice experience, everyone was very kind and supportive about Igalia and our work on the web platform. Let’s keep working hard to push the web forward!

October 31, 2016 11:00 PM

October 05, 2016

Web Engines Hackfest 2016!

Gustavo Noronha

I had a great time last week at the web engines hackfest! It was the 7th web hackfest hosted by Igalia and the 7th hackfest I attended. I’m almost a local Galician already; Brazilian Portuguese being so close to Galician certainly helps! Collabora co-sponsored the event, and it was great that two colleagues of mine managed to join me in attendance.

There were great talks that will eventually end up as videos uploaded to the web site. We were amazed at the progress being made on Servo, including some performance results that blew our minds. We also discussed the next steps for WebKitGTK+, WebKit for Wayland (or WPE), our own Clutter wrapper for WebKitGTK+ which is used for the Apertis project, and much more.

Zan giving his talk on WPE (former WebKitForWayland)

One thing that drew my attention was how many Dell laptops there were. Many Collaborans (myself included) and Igalians are now using Dells, it seems. Sure, there were ThinkPads and MacBooks, but there were plenty of Inspirons and XPSes as well. It’s interesting how the brand makeup has shifted over the years since 2009, when the hackfest could easily have been mistaken for a ThinkPad shop.

Back to the actual hackfest: with the recent release of GNOME 3.22 (and Fedora 25 nearing release), my main focus was on dealing with some regressions users experienced after a change in how the final rendering, composited by the nested Wayland compositor we have inside WebKitGTK+, is handed to the GTK+ widget so it can be shown on the screen.

One of the main problems people reported was applications that use WebKitGTK+ not showing anything where the content was supposed to appear. It turns out the problem was caused by GTK+ not being able to create a GL context. If the system was simply not able to use GL there would be no problem: WebKit would then just disable accelerated compositing and things would work, albeit slower.

The problem was WebKit being able to use an older GL version than the minimum required by GTK+. We fixed it by testing that GTK+ is able to create GL contexts before using the fast path, falling back to the slow glReadPixels codepath if not. This way we keep accelerated compositing working inside WebKit, which gives us nice 3D transforms and less repainting, but take the performance hit in the final “blit”.

Introducing “WebKitClutterGTK+”

Another issue we hit was GTK+ not properly updating its knowledge of the window’s opaque region when painting a frame with GL, which led to some really interesting issues like a shadow appearing when you tried to shrink the window. There was also an issue where the window would not use all of the screen when fullscreen which was likely related. Both were fixed.

André Magalhães also worked on a couple of patches we wrote for customer projects and are now pushing upstream. One enables the use of more than one frontend connected to a remote web inspector server at once. This can be used, for instance, to show the regular web inspector in a browser window while also using IDE integration for setting breakpoints and so on.

The other patch was cooked up by Philip Withnall and helped us deal with some performance bottlenecks we were hitting. It improves the performance of painting scrollbars. WebKitGTK+ does its own painting of scrollbars (we do not use the GTK+ widgets, for various reasons), and it turns out painting scrollbars can be quite a hit when the page is being scrolled fast, if not done efficiently.

Emanuele Aina had a great time learning more about Meson while figuring out a build issue we had when a more recent GStreamer was added to our jhbuild environment. He came out of the experience rather sane, which makes me think Meson might indeed be much better than autotools.

Igalia 15 years cake

It was a great hackfest, great seeing everyone face to face. We were happy to celebrate Igalia’s 15 years with them. Hope to see everyone again next year =)

By kov at October 05, 2016 12:23 PM

September 22, 2016

WebKitGTK+ 2.14 and the Web Engines Hackfest

Gustavo Noronha

Next week our friends at Igalia will be hosting this year’s Web Engines Hackfest. Collabora will be there! We are gold sponsors and have three developers attending. It will also be an opportunity to celebrate Igalia’s 15th birthday \o/. Looking forward to meeting you there! =)

Carlos Garcia has recently released WebKitGTK+ 2.14, the latest stable release. This is a great release that brings a lot of improvements and works much better on Wayland, which is becoming mature enough to be used by default. In particular, it fixes the clipboard, which was one of the main missing features, thanks to Carlos Garnacho! We have also been able to contribute a bit to this release =)

One of the biggest changes this cycle is the threaded compositor, which was implemented by Igalia’s Gwang Yoon Hwang. This work improves performance by not stalling other web engine features while compositing. Earlier this year we contributed fixes to make the threaded compositor work with the web inspector and fixed elements, helping with the goal of enabling it by default for this release.

Wayland was also lacking an accelerated compositing implementation. There was a patch to add a nested Wayland compositor to the UIProcess, with the WebProcesses connecting to it as Wayland clients to share the final rendering so it could be shown on screen. It was not ready though; there were questions as to whether that was the way to go, and alternative proposals on how best to implement it were floating around.

At last year’s hackfest we had discussions about what the best path for that would be, in which Collaborans Emanuele Aina and Daniel Stone (proxied by Emanuele) contributed quite a bit to figuring out how to implement it in a way that was both efficient and platform agnostic.

We later picked up the old patchset, rebased on the then-current master and made it run efficiently as proof of concept for the Apertis project on an i.MX6 board. This was done using the fancy GL support that landed in GTK+ in the meantime, with some API additions and shortcuts to sidestep performance issues. The work was sponsored by Robert Bosch Car Multimedia.

Igalia managed to improve and land a very well designed patch that implements the nested compositor, though it was still not as efficient as it could be, as it was using glReadPixels to get the final rendering of the page to the GTK+ widget through cairo. I have improved that code by ensuring we do not waste memory when using HiDPI.

As part of our proof of concept investigation, we got this WebGL car visualizer running quite well on our sabrelite imx6 boards. Some of it went into the upstream patches or proposals mentioned below, but we have a bunch of potential improvements still in store that we hope to turn into upstreamable patches and advance during next week’s hackfest.

One of the improvements that already landed was an alternate code path that leverages GTK+’s recent GL super powers to render using gdk_cairo_draw_from_gl(), avoiding the expensive copying of pixels from the GPU to the CPU and making it go faster. That improvement exposed a weird bug in GTK+ that causes a black patch to appear when shrinking the window, which I have a tentative fix for.

We originally proposed to add a new gdk_cairo_draw_from_egl() to use an EGLImage instead of a GL texture or renderbuffer. In our proof of concept we noticed it is even more efficient than the texturing currently used by GTK+, and could give us even better performance for WebKitGTK+. Emmanuele Bassi thinks it might be better to add EGLImage as another code branch inside from_gl() though, so we will look into that.

Another very interesting Igalian addition to this release is support for the MemoryPressureHandler, even on systems with no cgroups set up. The memory pressure handler is a WebKit feature that flushes caches and frees unused resources when the operating system notifies it that memory is scarce.

We worked with the Raspberry Pi Foundation to add support for that feature to the Raspberry Pi browser and contributed it upstream back in 2014, when Collabora was trying to squeeze as much as possible from the hardware. We had to add a cgroups setup to wrap Epiphany in, back then, so that it would actually benefit from the feature.

With this improvement, systems will benefit even without a custom cgroups setup, by having the UIProcess monitor memory usage and notify each WebProcess when memory is tight.

Some of these improvements were achieved by developers getting together at the Web Engines Hackfest last year and laying out the ground work or ideas that ended up in the code base. I look forward to another great few days of hackfest next week! See you there o/

By kov at September 22, 2016 05:03 PM

December 15, 2014

Web Engines Hackfest 2014

Gustavo Noronha

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept, both for Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play youtube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: HTML5 notifications support. With help from Carlos Garcia, I managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize notifications, and to show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent through this mechanism. It looks for a .desktop file named after the application ID used to initialize the GApplication instance and rejects the notification if it cannot find one. Besides making this a pain to test (our test browser would need a .desktop file to be installed), it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

By kov at December 15, 2014 11:20 PM

December 08, 2014

How to build TyGL

University of Szeged

This is a follow-up blog post of our announcement of TyGL - the 2D-accelerated GPU rendering port of WebKit.

We have received lots of feedback about TyGL, and we would like to thank you for all the questions, suggestions and comments. As we promised, let’s get into some technical details.

read more

By szilard.ledan at December 08, 2014 12:47 PM

November 12, 2014

Announcing the TyGL-WebKit port to accelerate 2D web rendering with GPU

University of Szeged

We are proud to announce the TyGL port, built on top of EFL-WebKit. TyGL (pronounced tigel) is part of WebKit and provides 2D-accelerated GPU rendering on embedded systems. The engine is purely GPU based. It has been developed on and tested against an ARM Mali GPU, but it is designed to work on any GPU conforming to OpenGL ES 2.0 or higher.

GPU involvement in future graphics is inevitable considering the pixel growth rate of displays, but harnessing GPU power requires a different approach than CPU-based optimizations.

read more

By zoltan.herczeg at November 12, 2014 02:18 PM

October 22, 2014

Fuzzinator reloaded

University of Szeged

It's been a while since I last (and actually first) posted about Fuzzinator. Now I think that I have enough new experiences worth sharing.

More than a year ago, when I started fuzzing, I was mostly focusing on mutation-based fuzzer technologies, since they were easy to build and pretty effective. Having a nice error-prone test suite (e.g. LayoutTests) was a warrant for fresh new bugs. At least for a while.

read more

By renata.hodovan at October 22, 2014 10:38 PM

September 25, 2014

Measuring ASM.JS performance

University of Szeged

What is ASM.JS?

Now that mobile computers and cloud services have become part of our lives, more and more developers see the potential of the web and online applications. ASM.JS, a strict subset of JavaScript, is a technology that provides a way to achieve near-native speed in browsers, without the need for any plugin or extension. It is also possible to cross-compile C/C++ programs to it and run them directly in your browser.

In this post we will compare the JavaScript and ASM.JS performance in different browsers, trying out various kinds of web applications and benchmarks.

read more

By matyas.mustoha at September 25, 2014 10:40 AM

August 28, 2014

CSS Shapes now available in Chrome 37 release

Adobe Web Platform

Support for CSS Shapes is now available in the latest Google Chrome 37 release.


What can I do with CSS Shapes?

CSS Shapes lets you think outside the box! It gives you the ability to wrap content around any shape. Shapes can be defined by geometric shapes, by images, and even by gradients. Using Shapes as part of your website design takes a visitor’s visual and reading experience to the next level. If you want to start with some tutorials, go visit Sarah Soueidan’s article about Shapes.


The following shapes use case is from the Good Looking Shapes Gallery blog post.

Without CSS Shapes
With CSS Shapes

In the first picture, we don’t use CSS Shapes. The text wraps around the rectangular image container, which leads to a lot of empty space between the text and the visible part of the image.

In the second picture, we use CSS Shapes. You can see the wrapping behavior around the image. In this case the white parts of the image are transparent, thus the browser can automatically wrap the content around the visible part, which leads to this nice and clean, visually more appealing wrapping behavior.

How do I get CSS Shapes?

Just update your Chrome browser to the latest version from the Chrome/About Google Chrome menu, or download the latest stable version from

I’d like to thank the WebKit and Blink engineers, and everyone else in the community who has contributed to this feature, for their collaboration. The fact that Shapes is shipping in two production browsers (Chrome 37 now, and Safari 8 later this year) is the upshot of open source collaboration between people who believe in a better, more expressive web. Although Shapes will be available in these browsers, you’ll need another solution for the other browsers. The CSS Shapes Polyfill is one way of achieving consistent behavior across browsers.

Where should I start?

For more info about CSS Shapes, please check out the following links:

Let us know your thoughts or if you have nice demos, here or on Twitter: @AdobeWeb and @ZoltanWebKit.

By Zoltan Horvath at August 28, 2014 05:12 PM

May 13, 2014

Good-Looking Shapes Gallery

Adobe Web Platform

As a modern consumer of media, you rarely crack open a magazine or a pamphlet or anything that would be characterized as “printed”. Let me suggest that you take a walk on the wild side. The next time you are in a doctor’s office, a supermarket checkout lane, or a library, thumb through a magazine. Most of the layouts you’ll find inside can also be found on the web, but not all of them. Layouts where content hugs the boundaries of illustrations are common in print and rare on the web. One of the reasons non-rectangular, contour-hugging layouts are uncommon on the web is that they are difficult to produce.

They are not difficult to produce anymore.

The CSS Shapes specification is now in the final stages of standardization. This feature enables flowing content around geometric shapes (like circles and polygons), as well as around shapes defined by an image’s alpha channel. Shapes make it easy to produce the kinds of layouts you can find in print today, with all the added flexibility and power that modern online media affords. You can use CSS Shapes right now with the latest builds of WebKit and Blink based browsers, like Safari and Chrome.

Development of CSS Shapes has been underway for about two years, and we’ve been regularly heralding its progress here. Many of those reports have focused on the evolution of the spec and implementations, and they’ve included examples that emphasized basics over beauty. This article is an attempt to tilt the balance back towards good-looking. Listed below are simple shapes demos that we think look pretty good. Everyone on Adobe’s CSS Shapes engineering team contributed at least one.

There’s a live version of each demo in the gallery. Click on the demo screenshot or one of the handy links to take a look. You’ll want to view the demos with a browser that supports Shapes and you’ll need to enable CSS Shapes in that browser. For example you can use a nightly build of the Safari browser or you can enable shapes in Chrome or Chrome Canary like this:

  1. Copy and paste chrome://flags/#enable-experimental-web-platform-features into the address bar, then press enter.
  2. Click the ‘Enable’ link within that section.
  3. Click the ‘Relaunch Now’ button at the bottom of the browser window.

A few of the demos use the new Shapes Polyfill and will work in most browsers.

And now, without further ado, please have a look through our good-looking shapes gallery.

Ozma of Oz


This demo reproduces the layout style that opens many of the chapters of the L. Frank Baum books, including Ozma of Oz. The first page is often dominated by an illustration on the left or right. The chapter’s text conforms to the illustration, but not too tightly. The books were published over 100 years ago and they still look good in print. With CSS Shapes they can still look good on the web.

Top Cap


The conventional “drop-cap” opens a paragraph by enlarging and highlighting the first letter, word or phrase. The drop-cap’s goal is to draw your attention to where you can start reading. This demo delivers the same effect by crowning the entire opening paragraph with a “top cap” that funnels your attention into the article. In both cases, what’s going on is a segue from a graphic element to the text.



A violator is a small element that “violates” rectangular text layout by encroaching on a corner or a small part of an edge. This layout idiom is common in short-form magazines and product packaging. That “new and improved” banner which blazes through the corner of thousands of consumer products (whether or not they are new or improved) – it’s a violator.

Column Interest


When a print magazine feels the need to incorporate some column layout melodrama, it often reaches for this idiom. The shape spans a pair of columns, which creates visual interest in the middle of the page. Without it you’d be faced with a wall of attention-sapping text and more than likely turn the page.


Screenshot of the wine jug caption demo.

The old-school approach for including a caption with an image is to put the caption text alongside or below the image. Putting a caption on top of an image requires a little more finesse, since you have to ensure that the text doesn’t obscure anything important and that the text is rendered in a way that preserves readability.  The result can be relatively attractive.

This photograph was taken by Zoltan Horvath who has pointed out that I’ve combined a quote about tea with a picture of a ceremonial wine jug.  I apologize for briefly breaching that beverage boundary. It’s just a demo.


Screenshot of the paging demo.

With a layout like this, one could simply let the content wrap around the shape on the right and then expand into the usual rectangle. In this demo, the content is instead served up a paragraph at a time, in response to the left and right arrow keys.

Note also: yes, in fact, the mate gourd is perched on exactly the same windowsill as in the previous demo. Zoltan and Pope Francis are among the many fans of yerba mate tea.

Ersatz shape-inside

Screenshot of the ersatz shape-inside demo.

Originally the CSS Shapes spec included shape-inside as well as shape-outside. Sadly, shape-inside was promoted to “Level 2” of the spec and isn’t available in the current implementations. Fortunately for shape insiders everywhere, it’s still sometimes possible to mimic shape-inside with an adjacent pair of carefully designed shape-outside floats. This demo is a nice example of that, where the text appears inside a bowl of oatmeal.
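The trick can be sketched roughly like this. The class names and polygon coordinates below are illustrative, not taken from the actual demo: two empty floats carve away everything outside the desired region, and the text flowing between them appears to sit inside a shape.

```css
/* Two empty floats whose shape-outside boundaries carve away the area
   outside the desired region; the text flowing between them looks like
   it has a shape-inside applied. Coordinates here are placeholders. */
.carve-left {
  float: left;
  width: 50%;
  height: 400px;
  -webkit-shape-outside: polygon(0 0, 100% 0, 25% 100%, 0 100%);
  shape-outside: polygon(0 0, 100% 0, 25% 100%, 0 100%);
}
.carve-right {
  float: right;
  width: 50%;
  height: 400px;
  -webkit-shape-outside: polygon(0 0, 100% 0, 100% 100%, 75% 100%);
  shape-outside: polygon(0 0, 100% 0, 100% 100%, 75% 100%);
}
```

The real demo would tune the two polygons so that their inner edges trace the contour of the bowl.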



This is an animated demo, so to appreciate it you’ll really need to take a look at the live version. It is an example of using an animated shape to draw the user’s attention to a particular message.  Of course one must use this approach with restraint, since an animated loop on a web page doesn’t just gently tug at the user’s attention. It drags at their attention like a tractor beam.



Advertisements are intended to grab the user’s attention, and a second or two of animation will do that. In this demo a series of transition motions have been strung together into a tiny performance that will temporarily get the reader’s attention. The highlight of the performance is – of course – the text snapping into the robot’s contour for the finale. Try to imagine a soundtrack that punctuates the action with some whirring and clanking noises; it’s even better that way.

By hmuller at May 13, 2014 05:38 PM

April 24, 2014

Adobe Web Platform Goes to the 2014 WebKit Contributors’ Meeting

Adobe Web Platform

Last week, Apple hosted the 2014 WebKit Contributors’ Meeting at their campus in Cupertino. As usual it was an unconference-style event, with session scheduling happening on the morning of the first day. While much of the session content was very specific to WebKit implementation, there were topics covered that are interesting to the wider web community. This post is a roundup of some of these topics from the sessions that Adobe Web Platform Team members attended.

CSS Custom Properties for Cascading Variables

Alan Stearns suggested a session on planning a new implementation of CSS Custom Properties for Cascading Variables. While implementations of this spec have been attempted in WebKit in the past, they never got past the experimental stage. Despite this, there is still much interest in implementing this feature. In addition, the current version of the spec has addressed many of the issues that WebKit contributors had previously expressed. We talked about a possible issue with using variables in custom property values, which Alan is investigating. More detail is available in the notes from the Custom Properties session.

CSS Regions

Andrei Bucur presented the current state of the CSS Regions implementation in WebKit. The presentation was well received and well attended. Notably, this was one of the few sessions with enough interest that it had a time slot all to itself.

While CSS Regions shipped last year in iOS 7 and Safari 6.1 and 7, the implementation in WebKit hasn’t been standing still. Andrei mentioned the following short list of changes in WebKit since the last Safari release:

  • correct painting of fragments and overflow
  • scrollable regions
  • accelerated content inside regions
  • position: fixed elements
  • the regionoversetchange event
  • better selection
  • better WebInspector integration
  • and more…

Andrei’s slides outlining the state of CSS Regions also contain a roadmap for the feature’s future in WebKit as well as a nice demo of the fix to fragment and overflow handling. If you are following the progress of CSS Regions in WebKit, the slides are definitely worth a look. (As of this writing, the Regions demo in the slides only works in Safari and WebKit Nightly.)

CSS Shapes

Zoltan Horvath, Bear Travis, and I covered the current state of CSS Shapes in WebKit. We are almost done implementing the functionality in Level 1 of the CSS Shapes Specification (which is itself a Candidate Recommendation, the last step before becoming an official W3C standard). The discussion in this session was very positive. We received good feedback on use cases for shape-outside and even talked a bit about the possibilities for when shape-inside is revisited as part of CSS Shapes Level 2. While I don’t have any slides or demos to share at the moment, we will soon be publishing a blog post to bring everyone up to date on the latest in CSS Shapes. So watch this space for more!

Subpixel Layout

This session was mostly about implementation. However, Zalan Bujtas drew an interesting distinction between subpixel layout and subpixel painting. Subpixel layout allows for better space utilization when laying out elements on the page, as boxes can be sized and positioned more precisely using fractional units. Subpixel painting allows for better utilization of high DPI displays by actually drawing elements on the screen using fractional CSS pixels (For example: on a 2x “Retina” display, half of a CSS pixel is one device pixel). Subpixel painting allows for much cleaner lines and smoother animations on high DPI displays when combined with subpixel layout. While subpixel layout is currently implemented in WebKit, subpixel painting is currently a work in progress.

Web Inspector

The Web Inspector is full of shiny new features. The front-end continues to shift to a new design, while the back-end gets cleaned up to remove cruft. The architecture for custom visual property editors is in place and will hopefully enable quick and intuitive editing of gradients, transforms, and animations in the future. Other goodies include new breakpoint actions (like value logging), a redesigned timeline, and IndexedDB debugging support. The Web Inspector still has room for new features, and you can always check out the #webkit-inspector channel on freenode IRC for the latest and greatest.

Web Components

The Web Components set of features continues to gather interest from the browser community. Web Components is made up of four different features: Custom Elements, HTML Imports, Shadow DOM, and HTML Templates. The general gist of the talk was that the Web Components concepts are desirable, but there are concerns that the features’ complexity may make implementation difficult. The main concerns seemed to center around performance and encapsulation with Shadow DOM, and will hopefully be addressed with a prototype implementation of the feature (in the works). You can also take a look at the slides from the Web Components session.

CSS Grid Layout

The WebKit implementation of the CSS Grid Layout specification is relatively advanced. After learning in this session that the only way to test out Grid Layout in WebKit was to make a custom build with it enabled, session attendees concluded that it should be turned on by default in the WebKit Nightlies. So in the near future, experimenting with Grid Layout in WebKit should be as easy as installing a nightly build.


As I mentioned earlier, this was just a high-level overview of a few of the topics at this year’s WebKit Contributors’ Meeting. Notes and slides for some of the topics not mentioned here are available on the 2014 WebKit Meeting page in the wiki. The WebKit project is always welcoming new contributors, so if you happen to see a topic on that wiki page that interests you, feel free to get in touch with the community and see how you can get involved.


This post would not have been possible without the notes and editing assistance of my colleagues on the Adobe Web Platform Team that attended the meeting along with me: Alan Stearns, Andrei Bucur, Bear Travis, and Zoltan Horvath.

By Bem Jones-Bey at April 24, 2014 05:23 PM

March 18, 2014

QtWebKit is no more, what now?

Gustavo Noronha

Driven by the technical choices of some of our early clients, QtWebKit was one of the first web engines Collabora worked on, building the initial support for NPAPI plugins and more. Since then we had kept in touch with the project from time to time when helping clients with specific issues, hardware or software integration, and particularly GStreamer-related work.

With Google forking Blink off WebKit, a decision had to be made by all vendors of browsers and platform APIs based on WebKit on whether to stay or follow Google instead. After quite a bit of consideration and prototyping, the Qt team decided to take the second option and build the QtWebEngine library to replace QtWebKit.

The main advantage of WebKit over Blink for engine vendors is the ability to implement custom platform support. That meant QtWebKit was able to use Qt graphics and networking APIs and other Qt technologies for all of the platform-integration needs. It also enjoyed the great flexibility of using GStreamer to implement HTML5 media. GStreamer brings hardware-acceleration capabilities, support for several media formats and the ability to expand that support without having to change the engine itself.

People who are using QtWebKit because it is GStreamer-powered will probably be better served by switching to one of the remaining GStreamer-based ports, such as WebKitGTK+. Those who don’t care about the underlying technologies but really need or want to use Qt APIs will be better served by porting to the new QtWebEngine.

It’s important to note though that QtWebEngine drops support for Android and iOS as well as several features that allowed tight integration with the Qt platform, such as DOM manipulation through the QWebElement APIs, making QObject instances available to web applications, and the ability to set the QNetworkAccessManager used for downloading resources, which allowed for fine-grained control of the requests and sharing of cookies and cache.

It might also make sense to go Chromium/Blink, either by using the Chrome Content API or by switching to one of its siblings (QtWebEngine included), if the goal is to make a browser that needs no integration with existing toolkits or environments. You will be limited to the formats supported by Chrome and the hardware platforms targeted by Google. Blink does not allow multiple implementations of the platform support layer, so you are stuck with what upstream decides to ship, or with a fork to maintain.

Chromium is a good alternative when Android itself is the main target, since it is the technology used to build Android’s main browser. The main advantage here is that you get Chrome’s fast-paced development and great support for the targeted hardware out of the box. If you need to support custom hardware or want flexibility in the kinds of media you support, then WebKit still makes more sense in the long run, since that support can be maintained upstream.

At Collabora we’ve dealt with several WebKit ports over the years, and still actively maintain the custom WebKit Clutter port out of tree for clients. We have also done quite a bit of work on Chromium-powered projects. Some of the decisions you have to make are not easy and we believe we can help. Not sure what to do next? If you have that on your plate, get in touch!

By kov at March 18, 2014 07:44 PM

February 25, 2014

Improving your site’s visual details: CSS3 text-align-last

Adobe Web Platform

In this post, I want to give a status report regarding the text-align-last CSS3 property. If you are interested in taking control of the small visual details of your site with CSS, I encourage you to keep reading.

The problem

First, let’s talk about why we need this property. You’ve probably already seen many text blocks on pages that don’t quite seem visually correct, because the last line isn’t justified with the previous lines. Check out the example paragraph below:

Example of the CSS3 text-align-last property

In the first column, the last line isn’t justified. This is the expected behavior, when you apply the ‘text-align: justify’ CSS property on a container. On the other hand, in the second column, the content is entirely justified, including the last line.

The solution

This magic is the ‘text-align-last’ CSS3 property, which is set to justify on the second container. The text-align-last property is part of the CSS Text Module Level 3 specification, which is currently a working draft. It describes how the last line of a block, or a line right before a forced line break, is aligned when ‘text-align’ is ‘justify’. In other words, you gain full control over the alignment of the last line of a block. The property allows several more values, which you can read about in the docs or in the CSS Text Module Level 3 W3C Specification.
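In code, the two columns in the example differ by a single declaration. The class names here are assumptions for illustration, and at the time of writing the property may still require a vendor prefix or an experimental flag depending on the browser:

```css
/* First column: justified, but the last line keeps the default alignment. */
.column-one {
  text-align: justify;
}

/* Second column: the last line is stretched to the full measure as well. */
.column-two {
  text-align: justify;
  text-align-last: justify;
}
```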

A possible use case (Added April – 2014)

After looking at the previous example (which was rather focusing on the functionality of the property), let’s move on to a more realistic use case. The feature is perfect to make our multi-line captions look better. Check out the centered, and the justified image caption examples below.


And now, compare them with a justified, multi-line caption, where the last line has been centered by text-align-last: center.

I think the proper alignment of the last line gives the caption a more polished look.

Browser Support

I recently added rendering support for the property in WebKit (Safari) based on the latest specification. Dongwoo Joshua Im from Samsung added rendering support in Blink (Chrome). If you’d like to try it out in WebKit, you’ll need to make a custom developer build and use the CSS3 text support build flag (--css3-text).

The property is already included in Blink’s developer nightlies by default, so after launching your latest Chrome Canary, you only need to enable ‘Enable experimental Web Platform features’ under chrome://flags, and enjoy the full control over your last lines.

Developer note

Please keep in mind that both the W3C specification and the implementations are under experimental status. I’ll keep blogging about the feature and let you know if anything changes, including when the feature ships for production use!

By Zoltan Horvath at February 25, 2014 04:58 PM

December 11, 2013

WebKitGTK+ hackfest 5.0 (2013)!

Gustavo Noronha

For the fifth year in a row the fearless WebKitGTK+ hackers have gathered in A Coruña to bring GNOME and the web closer. Igalia has organized and hosted it as usual, welcoming a record 30 people to its office. The GNOME Foundation has sponsored my trip, allowing me to fly the cool 18-seat propeller airplane from Lisbon to A Coruña, which is a nice adventure, and have pulpo a feira for dinner, which I simply love! All that in addition to enjoying the company of so many great hackers.

Web with wider tabs and the new prefs dialog

The goals for the hackfest have been ambitious, as usual, but we made good headway on them. Web the browser (AKA Epiphany) has seen a ton of little improvements, with Carlos splitting the shell search provider into a separate binary, which allowed us to remove some hacks from the browser’s session management code. It also makes testing changes to Web more convenient again. Jon McCann has been pounding at Web’s UI, making it sleeker, with tabs that expand to make better use of the available horizontal space in the tab bar, and new dialogs for preferences, cookies, and password handling. My tiny contribution was to stop Web from keeping around tabs that were created just for what turned out to be a download. For this last day of the hackfest I plan to also fix an issue with text encoding detection and help track down a hang that happens upon page load.

Martin Robinson and Dan Winship hack

Martin Robinson and I have, as usual, dived into the more disgusting and wide-reaching maintainership tasks that we have lots of trouble pushing forward in our day-to-day lives. Porting our build system to CMake has been one of these long-term goals, not because we love CMake (we don’t) or because we hate autotools (we do), but because it should make people’s lives easier when adding new files to the build, and should also make our build less hacky and quicker – it is sad to see how slow our build can be when compared to something like Chromium, and we think a big part of the problem lies in how complex and dumb autotools and make can be. We have picked up a few of our old branches, brought them up to date, and landed them, which now lets us build the main WebKit2GTK+ library through CMake in trunk. This is an important first step, but there’s plenty to do.

Hackers take advantage of the icecream network for faster builds

Under the hood, Dan Winship has been pushing HTTP2 support for libsoup forward, with a dead-tree version of the spec by his side. He is refactoring libsoup internals to accommodate the new code paths. Still on the HTTP front, I have been updating soup’s MIME type sniffing support to match the newest living specification, which includes several new types and a new security feature introduced by Internet Explorer and later adopted by other browsers. The huge task of preparing the ground for one process per tab (or other kinds of process separation; this will still be a topic of discussion for a while) has been pushed forward by several hackers, with Carlos Garcia and Andy Wingo leading the charge.

Jon and Guillaume battling code

Other than that, I have been putting in some more work on improving the integration of the new Web Inspector with WebKitGTK+. Carlos has reviewed the patch to allow attaching the inspector to the right side of the window, but we have decided to split it in two: one patch providing the functionality, and one the API that will allow browsers to customize how that is done. There’s a lot of work to be done here; I plan to land at least this first patch during the hackfest. I have also fought one more battle in the never-ending User-Agent sniffing war, which it looks like we cannot win.

Hackers chillin’ at A Coruña

I am very happy to be here for the fifth year in a row, and I hope we will be meeting here for many more years to come! Thanks a lot to Igalia for sponsoring and hosting the hackfest, and to the GNOME foundation for making it possible for me to attend! See you in 2014!

By kov at December 11, 2013 09:47 AM

August 27, 2013

HTML Alchemy – Combining CSS Shapes with CSS Regions

Adobe Web Platform

Note: Support for shape-inside is only available up to the following nightly builds: WebKit r166290 (2014-03-26); Chromium 260092 (2014-03-28).

I have been working on rendering for almost a year now. Since I landed the initial implementation of Shapes on Regions in both Blink and WebKit, I’m incredibly excited to talk a little bit about these features and how you can combine them together.


Don’t know what CSS Regions and Shapes are? Start here!

The first ingredient in my HTML alchemy kitchen is CSS Regions. With CSS Regions, you can flow content into multiple styled containers, which gives you enormous creative power to make magazine style layouts. The second ingredient is CSS Shapes, which gives you the ability to wrap content inside or outside any shape. In this post I’ll talk about the “shape-inside” CSS property, which allows us to wrap content inside an arbitrary shape.

Let’s grab a bowl and mix these two features, CSS Regions and CSS Shapes, together to produce some really interesting layouts!

In the latest Chrome Canary and Safari WebKit Nightly, after enabling the required experimental features, you can flow content continuously through multiple kinds of shapes. This rocks! You can step out of the rectangular text flow world and break text up into multiple, non-rectangular shapes.


If you already have the latest Chrome Canary/Safari WebKit Nightly, you can just go ahead and try a simple example on If you are too lazy, or if you want to extend your mouse button life by saving a few button clicks, you can continue reading.


In the picture above we see that the “Lorem ipsum” story flows through 4 different, colorful regions. There is a circle shape on each of the first two fixed size regions. Check out the code below to see how we apply the shape to the region. It’s pretty straightforward, right?
#region1, #region2 {
    -webkit-flow-from: flow;
    background-color: yellow;
    width: 200px;
    height: 200px;
    -webkit-shape-inside: circle(50%, 50%, 50%);
}
The content flows into the third (percentage sized) region, which represents a heart (drawn by me, all rights reserved). I defined the heart’s coordinates in percentages, so the heart will stretch as you resize the window.
#region3 {
    -webkit-flow-from: flow;
    width: 50%;
    height: 400px;
    background-color: #EE99bb;
    -webkit-shape-inside: polygon(11.17% 10.25%,2.50% 30.56%,3.92% 55.34%,12.33% 68.87%,26.67% 82.62%,49.33% 101.25%,73.50% 76.82%,85.17% 65.63%,91.63% 55.51%,97.10% 31.32%,85.79% 10.21%,72.47% 5.35%,55.53% 14.12%,48.58% 27.88%,41.79% 13.72%,27.50% 5.57%);
}

The content that doesn’t fit in the first three regions flows into the fourth region. The fourth region (see the retro-blue background color) has its CSS width and height set to auto, so it grows to fit the remaining content.
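The post doesn’t show the rule for that last region; a sketch consistent with the first three might look like this (the background color value is a guess, not taken from the demo):

```css
#region4 {
    -webkit-flow-from: flow;
    background-color: #aaccee; /* the "retro-blue" in the screenshot; exact value assumed */
    width: auto;
    height: auto;  /* grows to fit whatever content remains in the named flow */
}
```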

Real world examples

After trying the demo and checking out the links above, I’m sure you’ll see the opportunities for using shape-inside with regions in your next design. If you have some thoughts on this topic, don’t hesitate to comment. Please keep in mind that these features are under development, and you might run into bugs. If you do, you should report them on WebKit’s Bugzilla for Safari or Chromium’s issue tracker for Chrome. Thanks for reading!

By Zoltan Horvath at August 27, 2013 04:00 PM

August 21, 2013

Steps towards a new Web Inspector

Brent Fulgham

Those of you who follow the Surfin' Safari blog may remember that Apple recently open sourced the new Web Inspector shipping with the forthcoming version of Safari and Mac OS. You might even have tried the Developer Preview to see how it worked.

At the time there was some discussion about the fact that the original inspector was no longer being actively maintained, and some encouragement to other ports to migrate to the new infrastructure as soon as possible.

I'm very pleased to announce that we are close to landing changes to the Windows and WinCairo ports that will enable this awesome new development environment for those ports.

There's still some work to be done: there are a few missing folder graphics, shortcuts probably don't have the right key combinations, and I noticed that some 'localized strings' don't seem to be properly encoded. But the core functionality works, and now Windows users can access shadow DOM elements, check load times, and examine the size of resources retrieved from a website.

For example, who knew that I downloaded a megabyte of material from the YouTube website each time it nags me that I don't have Flash Installed?

By Brent Fulgham ( at August 21, 2013 05:55 AM

August 06, 2013

WebGL, at last!

Brent Fulgham

It's been a long time since I've written an update -- but my lack of blog posting is not an indication of a lack of progress in WebKit or the WinCairo port. Since I left my former employer (who *still* hasn't gotten around to updating the build machine I set up there), we've:

  • Migrated from Visual Studio 2005 to Visual Studio 2010 (and soon, VS2012)
  • Enabled New-run-webkit-tests
  • Updated the WinCairo Support Libraries to support 64-bit builds
  • Integrated a ton of cURL improvements and extensions thanks to the TideSDK guys 
  • and ...
... thanks to the hard work of Alex Christensen, brought up WebGL on the WinCairo port. This is a little exciting for me, because it marks the first time (that I can recall) that the WinCairo port actually gained a feature that was not already part of the core Apple Windows port.

The changes needed to see these circa-1992 graphics in all their three-dimensional glory are already landed in the WebKit tree.  You just need to:

  1. Enable the libEGL, libGLESv2, translator_common, translator_glsl, and translator_hlsl projects for the WinCairo build (they are currently turned off).
  2. Make the following change to WTF/wtf/FeatureDefines.h: 

Brent Fulgham@WIN7-VM ~/WebKit/Source/WTF/wtf
$ svn diff
Index: FeatureDefines.h
--- FeatureDefines.h    (revision 153733)
+++ FeatureDefines.h    (working copy)
@@ -245,6 +245,13 @@

+#define ENABLE_WEBGL 1
+#define WTF_USE_3D_GRAPHICS 1
+#define WTF_USE_OPENGL 1
+#define WTF_USE_OPENGL_ES_2 1
+#define WTF_USE_EGL 1
 #endif /* PLATFORM(WIN_CAIRO) */

 /* --------- EFL port (Unix) --------- */

Performance is a little ragged, but we hope to improve that in the near future.

We have plenty more plans for the future, including full 64-bit support (soon), and hopefully some improvements to the WinLauncher application to make it a little more useful.

As always, if you would like to help out,

By Brent Fulgham ( at August 06, 2013 05:53 AM

March 27, 2013

Freeing the Floats of the Future From the Tyranny of the Rectangle

Adobe Web Platform

With modern web layout you can have your content laid out in whatever shape you want, as long as it’s a rectangle. Designers in other media have long been able to have text and other content lay out inside and around arbitrarily complex shapes. The CSS Exclusions, CSS Shapes Level 1, and CSS Shapes Level 2 specifications aim to bring this capability to the web.

While these features aren’t widely available yet, implementation is progressing and it’s already possible to try out some of the features yourself. Internet Explorer 10 has an implementation of the exclusions processing model, so you can try out exclusions in IE 10 today.

At Adobe we have been focusing on implementing the shapes specification. We began with an implementation of shape-inside and now have a working implementation of the shape-outside property on floats. We have been building our implementation in WebKit, so the easiest way to try it out yourself is to download a copy of Chrome Canary. Once you have Canary, enable Experimental Web Platform Features and go wild!

What is shape-outside?

“Now hold up there,” you may be thinking, “I don’t even know what a shape-outside is and you want me to read this crazy incomprehensible specification thing to know what it is!?!”

Well you’ll be happy to know that it really isn’t that complex, especially in the case of floats. When an element is floated, inline content avoids the floated element. Content flows around the margin box of the element as defined by the CSS box model. The shape-outside CSS property allows you to tell the browser to use a specified shape instead of the margin box when wrapping content around the floating element.

CSS Exclusions

The current implementation allows for rectangles, rounded rectangles, circles, ellipses, and polygons. While this gives a lot of flexibility, eventually you will be able to use an SVG path or the alpha channel of an image to make it easier to create complex shapes.

How do I use it?

First, you need to get a copy of Chrome Canary and then enable Experimental Web Platform features. Once you have that, load up this post in Chrome Canary so that you can click on the images below to see a live example of the code. Even better, the examples are on Codepen, so you can and should play with them yourself and see what interesting things you can come up with.

Note that in this post and the examples I use the unprefixed shape-outside property.
If you want to test these examples outside of my Codepen then you will need to use the prefixed -webkit-shape-outside property or use (which is a built in option in Codepen).

We’ll start with an HTML document with some content and a float. Currently shape-outside only works on floating elements, so those are the ones to concentrate on. For example: (click on the image to see the code)

HTML without shape-outside

You can now add the shape-outside property to the style for your floats.

.float {
  shape-outside: circle(50%, 50%, 50%);
}
A circle is much more interesting than a standard rectangle, don’t you think? This circle is centered in the middle of the float and has a radius that is half the width of the float. The effect on the layout is something like this:

shape-outside circle

While percentages were used for this circle, you can use any CSS unit you like to specify the shape. All of the relative units are relative to the dimensions of the element where the shape-outside is specified.

Supported shapes

Circles are cool and all, but I promised you other shapes, and I will deliver. There are four types of shapes that are supported by the current shape-outside implementation: rectangle, circle, ellipse, and polygon.

Rectangles

You have the ability to specify a shape-outside that is a fairly standard rectangle:

shape-outside: rectangle(x, y, width, height);

The x and y parameters specify the coordinates of the top-left corner of the rectangle. This coordinate is in relation to the top-left corner of the floating element’s content box. Because of the way this interacts with the rules of float positioning, setting these to anything other than 0 causes an effect that is similar to relatively positioning the float’s content. (Explaining this is beyond the scope of this post.)

The width and height parameters should be self-explanatory: they are the width and height of the resulting rectangle.

Where things get interesting is with the six-argument form of rectangle:

shape-outside: rectangle(x, y, width, height, rx, ry);

The first four arguments are the same as explained above, but the last two specify corner radii in the horizontal (rx) and vertical (ry) directions. This not only allows the creation of rounded rectangles; you can create circles and ellipses as well. (Just like with border-radius.)

Here’s an example of a rectangle, a rounded rectangle, a circle, and an ellipse using just rectangle syntax:

shape-outside rectangle

If you’re reading this in Chrome Canary with exclusions turned on, play around with this demo and see what other things you can do with the rectangles.
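The four shapes in that demo boil down to declarations along these lines (class names and radii are illustrative):

```css
/* A plain rectangle covering the float's box. */
.rect    { shape-outside: rectangle(0, 0, 100%, 100%); }
/* The six-argument form adds corner radii. */
.rounded { shape-outside: rectangle(0, 0, 100%, 100%, 20px, 20px); }
/* Radii of half the width and height produce an ellipse,
   or a circle if the float happens to be square. */
.round   { shape-outside: rectangle(0, 0, 100%, 100%, 50%, 50%); }
```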

Circles

I already showed you a simple circle demo and you’ll be happy to know that’s pretty much all there is to know about circles:

shape-outside: circle(cx, cy, radius);

The cx and cy parameters specify the coordinates of the center of the circle. In most situations you’ll want to put them at the center of your box. Just like with rectangles, moving this around can be useful, but it behaves like relatively positioning the float’s content with respect to the shape.

The radius parameter is the radius of the resulting circle.

In case you’d like to see it again, here’s what a circle looks like:

shape-outside circle

While it is possible to create circles with rounded rectangles as described above, having a dedicated circle shape is much more convenient.

Ellipses

Sometimes, you need to squish your circles and that’s where the ellipse comes in handy.

shape-outside: ellipse(cx, cy, rx, ry);

Just like a circle, an ellipse has cx and cy to specify the coordinates of its center, and you will likely want them at the center of your float. And just like all the previous shapes, changing these will cause the float’s content to be positioned relative to your shape.

The rx and ry parameters will look familiar from the rounded rectangle case and they are exactly what you would expect: the horizontal and vertical radii of the ellipse.

Ellipses can be used to create circles (rx = ry) and rounded rectangles can be used to create ellipses, but it’s best to use the shape that directly suits your purpose. It’s much easier to read and maintain that way.

Here’s an example of using an ellipse shape:

shape-outside ellipse
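In code, an ellipse like the one pictured might look like this (selector and radii are illustrative):

```css
.float {
  float: left;
  /* Centered in the float, wider than it is tall. */
  shape-outside: ellipse(50%, 50%, 50%, 30%);
}
```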

Polygons

Now here’s where things get really interesting. The polygon shape-outside allows you to specify an arbitrary polygonal shape for your float:

shape-outside: polygon(x1 y1, x2 y2, ... , xn yn);

The parameters of the polygon are the x and y coordinates of each vertex of the shape. You can have as many vertices as you would like.

Here’s an example of a simple polygon:

shape-outside triangle

Feel free to play with this and see what happens if you create more interesting shapes!
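For reference, a triangle like the one above could be written like so for a 200px × 200px float (coordinates are illustrative):

```css
.float {
  float: left;
  width: 200px;
  height: 200px;
  /* Vertices: top-left, bottom-left, middle-right. */
  shape-outside: polygon(0 0, 0 200px, 200px 100px);
}
```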

Putting content in the float

The previous examples all had divs without any content just to make it easier to read and understand the code, but a big motivation for shape-outside is to wrap around other content. Interesting layouts often involve wrapping text around images as this final example shows:

shape-outside with images

As usual, you should take a look and play with the code for this example of text wrapping around floated images. This is just the beginning of the possibilities, as you can put a shape outside on any floating element with any content you want inside.
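A minimal sketch of that pattern, assuming a square image (class name illustrative; browsers of the time needed the -webkit- prefix):

```css
img.wrapped {
  float: left;
  width: 200px;
  height: 200px;
  border-radius: 50%;                    /* round the image visually */
  shape-outside: circle(50%, 50%, 50%);  /* wrap the text around the circle */
}
```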

Next steps

We are still hard at work on fixing bugs in the current implementation and implementing the rest of the features in the CSS Shapes Level 1 specification. We welcome your feedback on what is already implemented and also on the spec itself. If you are interested in becoming part of the process, you can raise issues with the current WebKit implementation by filing bugs in the WebKit bugzilla. If you have issues with the spec, those are best raised on the www-style mailing list. And of course, you can leave your feedback as comments on this post.

I hope that you enjoy experimenting with shape-outside and the other features we are currently working on.

By Bem Jones-Bey at March 27, 2013 05:10 PM

March 05, 2013

CSS Fragmentation In WebKit

Adobe Web Platform

What is fragmentation?

The CSS 2.1 specification defines a box model to represent the layout of a document, and pretty much everything is a box. Normal flow nodes (i.e. not absolutely positioned) are laid out child by child starting at the top of their parent element’s box. If an element’s box is too small to fit all the content, the content is said to overflow, and this overflow can either be visible or get clipped.

Fragmentation is different from overflow because it allows flowing the content through multiple boxes called fragmentation containers – fragmentainers for short. When the end of the current fragmentainer is reached, a break occurs and the layout continues within the next fragmentainer. Using CSS, authors can also force breaks to occur after or before an element, and even avoid them altogether.

The important detail to remember is that fragmentation doesn’t mean taking the overflow of a box and visually moving it to another one. Fragmentation happens during the layout and affects the dimensions of the content boxes.

There are a few specifications in CSS based on fragmentation:

  • CSS3 Pagination – the document is laid out so it can fit on pages; the pages act as fragmentainers.
  • CSS3 Multi-column – the element defines a number of columns that fill its box and where the content is laid out; the columns act as fragmentainers.
  • CSS3 Regions – selected content forms a flow that’s laid out inside boxes called regions; the regions act as fragmentainers.

All of these specifications share common concepts covered by the CSS3 Fragmentation specification.
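To make the fragmentainer idea concrete in author-facing CSS, here is a sketch of a multi-column setup with break control (property names per the specifications above; 2013-era browsers still required vendor prefixes for most of them):

```css
article {
  /* Three columns act as fragmentainers for the article's content. */
  column-count: 3;
  column-gap: 2em;
}
article h2 {
  /* Force a break so every section starts at the top of a column. */
  break-before: column;
}
article figure {
  /* Never split a figure across two columns. */
  break-inside: avoid;
}
```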

Paragraph in regions.

Example of a paragraph flown in two regions.

Fragmentation in WebKit

The Layout Process

To fully understand the concepts presented here you should know a bit about how layout works in WebKit. There are some nice articles on the Web covering the basics such as Bem’s article.

Long story short, the DOM tree is internally mapped to a render tree that is used to build up the box properties for every node. The render tree is made up of objects like RenderBox, RenderBlock, RenderInline, etc. that represent the concepts defined in CSS 2.1. During layout this tree is traversed and the various geometrical properties of the renderers are computed based on the style values. In the end all the elements have their box information computed.

As a side note, you’ll notice the symbols inside WebKit are usually named using the “pagination” or “page” terminology. This is for historical reasons, but most of the time the pagination concepts map to the fragmentation ones – pages, columns or regions.

There were a couple of ways the fragmentation behavior could have been implemented in WebKit. One of them was to have a renderer for each element fragment. For instance, for a simple paragraph with ten lines and a break after the fourth one, we would have had one renderer with four lines in the first fragmentainer and a second renderer with six lines in the second fragmentainer. Both of the renderers would belong to the initial element. This approach is difficult to implement correctly and maintain because of the complexity it brings to the codebase. Additionally, it is also very risky from a security standpoint, as it can introduce many subtle memory management bugs.

A rule of maximum one renderer per DOM node was created because of these concerns. Fragmentation is implemented by shifting the monolithic boxes (boxes that can’t be fragmented, such as line boxes – rectangles wrapping each line of text) so that they don’t overlap with the breaks. The correct rendering is obtained during the painting phase by placing each fragment exactly where it is supposed to appear in the fragmentainer. More about this topic in the next section.

In the case of unforced breaks, a position adjustment, called a pagination strut in the codebase, is attached to the boxes during layout. This value represents a shift offset from the default layout position to the next fragmentainer in case the box doesn’t fit the current one. This offset needs to be stored separately on each box, not as part of the top position, because it’s not an attribute of the renderer. It’s a layout artifice that helps measure the fragmented boxes. For example, if a block is collapsing its top margin with its container (e.g. it’s the first child) and it also has a pagination strut, that offset will be transferred to the container. This makes sense because both the block and the container need to be placed in the next fragmentainer even though it’s the child that won’t fit.

The same rule applies to the first line box of a block. Also, during subsequent layouts it’s possible for an element to shift in the block-flow direction. The line boxes’ pagination struts then need to be recomputed, because the shift offsets are most likely different after the element changed position.

When a line doesn’t fit the current fragmentainer it is shifted using the pagination strut.

In the case of the figure above, the first four line boxes would be positioned normally in the first fragmentainer. The fifth line box may need a pagination strut if it only partially fits in the first fragmentainer. This strut would logically shift the line into the next fragmentainer. The rest of the lines are all placed normally in the second fragmentainer.

In the case of forced breaks, the pagination strut is no longer used because the authors directly specify where the breaks occur. The boxes are positioned inside their container so they respect the break condition.

Each type of fragmentation layout has its own specialized behavior. The multi-column elements have the ColumnInfo object attached to them. It contains information about the number and the width of the columns, the distance between breaks (used to balance the columns if they have auto-height) etc. This object is also pushed on the layout state stack so it can be accessed from any point inside the render tree.

The regions implementation is currently the most complex type of layout that makes use of fragmentation. The content node renderers are not attached to their DOM parent’s renderer. Instead, they are moved to a special object, RenderFlowThread, that sets up a pagination context when it is laid out. These renderers need to hold and use various information about the flow thread: the regions’ sizes, how the descendant boxes change width in every region they flow into, etc. Because the concept of the flow thread is so generic, there are plans to port the multi-column implementation on top of it.

The Layout Performance

From a performance perspective, the layout process will always be slower for fragmented content because the engine can’t apply the same optimizations that are made for continuous vertical content. For example, without the possibility of fragmentation, updating the top margin of an element can shift it on the vertical axis, but no relayout may be needed. If the element is enclosed in a fragmentation context, shifting it may make it no longer fit inside the fragmentainer, triggering a break. To cover this case, a relayout of the element is always required.

The layout engine is usually optimized for both speed and memory consumption. This means fragmentation code is used only when necessary: if there is a printing context, if the node being laid out is a part of a multi-column element or if the renderer belongs to a flow thread. The logic for enabling the fragmentation code can be found inside the LayoutState class. During layout, a stack of LayoutState objects is created and stored on the root of the render tree, the RenderView. Any renderer can query the top of this stack to determine if the content can be fragmented or not.

The Painting Phase

The painting of the renderers in WebKit is handled by the RenderLayer tree. For some renderers the engine creates layers that are used to paint the document in the correct order (e.g. with respect to the z-index restrictions). Layers are always created for the renderers that fragment, such as multi-column blocks or RenderFlowThread objects. Because of this, multi-column elements and regions create stacking contexts and are painted as a single item.

For regions, when the painting operation occurs, the engine takes into account the fragmentation properties of the renderer and shifts the layer to the correct position so the painting is always executed inside the correct fragmentainer. The content not belonging to the current fragmentainer is clipped so the engine always paints only the content that should be displayed in a certain area. The mechanism is in some ways similar to how sprites (e.g. CSS Sprites) work.

The RenderFlowThread is painted in the two regions at the offsets corresponding to each content fragment.

For example, let’s say the paragraph above is flown into regions A and B. When region A is painted, the flow thread layer is called to paint in the fragmentainer box. It will paint the first four lines and then blank space until the bottom of the fragmentainer because the fifth line was shifted at layout-time using the pagination strut. When region B is painted, the flow thread layer is called again to paint inside the fragmentainer box, but at a new offset, where the region B fragment starts (top of the fifth line). The last six lines of the paragraph are painted.

Something similar happens with the multi-column blocks. However, the main difference is that the fragmentainers for multi-column blocks don’t have their own layers. By using the same layer as the multi-column block, the content is able to use advanced graphics features such as 3D transforms and video tags (see WebKit Compositing for more details about how accelerated layers work).

In the case of a flow thread, making its layer work with the region layers is one of the major challenges that need to be solved. As a consequence of this problem, content layers that require accelerated compositing (e.g. video layers) will not work correctly inside flow threads.


The modern fragmentation concepts are not yet fully implemented inside WebKit. There are some issues with the handling of forced breaks and the avoid value is implemented only for the break-inside property. The widows and orphans properties were just recently fully enabled.

On the CSS Regions side there’s some work left to do to achieve a smooth integration with the layers and compositing subsystem. Testing is also much needed to ensure good integration with other Web features.

By Andrei Bucur at March 05, 2013 05:45 PM

February 07, 2013

A look into Custom Filters reference implementation

Adobe Web Platform

A shader example

Over the past two years, my team at Adobe has been actively working on the CSS Custom Filters specification (formerly CSS Shaders), which is just one part of the greater CSS Filters specification. Alongside the spec work, we have been working on the CSS Custom Filters WebKit implementation, so I’ve decided to write this blog post to explain how this is all being implemented in WebKit, giving a high-level conceptual overview of the game.

Support for OpenGL shaders on general DOM content has been one of the noted features for the latest Chrome releases. Users and web developers are excited about the ability to apply cinematic effects on their pages.

Shaders, Shaders, Shaders

I’ve often been asked “what is a shader?”; well, shaders are short programs, run on the GPU, that – unlike regular programs – deal solely with bitmaps (textures) and 3D meshes, performing specific operations on pixels (called fragments) and vertices. You can get a more in-depth explanation of shaders on Wikipedia.
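For a sense of scale, a minimal GLSL fragment shader, the kind of short program described above, simply writes a color for every fragment:

```glsl
precision mediump float;

void main()
{
    // Output solid red for every fragment (pixel).
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```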

Roughly speaking, the reason why these small programs run on your graphics card’s processor (the GPU) instead of the regular CPU is that GPUs are very good at performing ad hoc math operations in parallel. While the GPU is busy performing graphical operations, your CPU is idle and ready to perform other tasks. Everything is faster and snappier.

In our case, the texture passed to the graphics pipeline is styled DOM content, your page content. This means that with one line of CSS and a shader, you can get splendid and astonishing results; just take a look at the image at the beginning of this post.

HTML5Rocks has a great post covering the whole idea behind CSS Custom Filters with an example of their capabilities, so I won’t spend too many words on how to use them; instead I’ll be focusing on the general idea behind the implementation. Let’s start!

Parsing and Style Resolution

The life of a CSS Custom Filter starts with its CSS declaration. With the currently implemented WebKit syntax, it might look like the following:

#myElement {
    -webkit-filter: custom(url(vertex.vs) none);
}

This CSS is first encountered by WebKit’s CSSParser, which – as the name might suggest – is the component within WebKit responsible for CSS parsing. Raw text data gathered from this object is then computed by WebKit’s StyleResolver, and from there a CustomFilterOperation is created. This class will contain all the major information about the Custom Filter you’ve asked to use, such as its parameters. It also references the program, a CustomFilterProgram, which is a vertex shader and fragment shader pair, that will be applied to the texture (your styled DOM content).

Here’s a picture of the happy event:

PassRefPtr<CustomFilterOperation> StyleResolver::createCustomFilterOperation(WebKitCSSFilterValue* filterValue)
{
    // ...snip...
    RefPtr<StyleCustomFilterProgram> program = StyleCustomFilterProgram::create(vertexShader.release(), fragmentShader.release(), programType, mixSettings, meshType);
    return CustomFilterOperation::create(program.release(), parameterList, meshRows, meshColumns, meshType);
}

Computing and Rendering

The RenderLayer tree is one of the many trees in WebKit, and it’s currently used, among other things, to deal with group operations such as opacity, stacking contexts, and transforms, and to do things like scrolling and accelerated compositing: it’s a huge, huge piece of code that defines the word “complexity”. :-)

What happens after the parsing and the style resolution is the most interesting part: when a DOM element has a filter associated with it, it receives its own RenderLayer. When this happens, the relevant DOM content is painted into an image buffer and then sent to the GPU for the “shading treatment” as a regular texture. All this starts to take place thanks to RenderLayerFilterInfo, which, among other things, downloads the shaders as external resources and calls the RenderLayer back to notify it that everything’s ready for repaint.

The computing magic begins when a RenderLayer starts to iterate on all the filters associated with it:

FilterOperations RenderLayer::computeFilterOperations(const RenderStyle* style)
{
    // ...snip...
    for (size_t i = 0; i < filters.size(); ++i) {
        RefPtr<FilterOperation> filterOperation = filters.operations().at(i);
        if (filterOperation->getOperationType() == FilterOperation::CUSTOM) {
            // ...snip...
            CustomFilterGlobalContext* globalContext = renderer()->view()->customFilterGlobalContext();
            RefPtr<CustomFilterValidatedProgram> validatedProgram = globalContext->getValidatedProgram(program->programInfo());
            // ...snip...

For every custom filter, a validated version of the program (see next paragraph) is requested; this is then appended to the list of the associated filter operations.

The actual rendering will take place in the FilterEffectRenderer class where the “filter building” operation will start in a surprisingly straightforward manner:

bool FilterEffectRenderer::build(RenderObject* renderer, const FilterOperations& operations)
{
    // ...snip...
    effect = createCustomFilterEffect(this, document, customFilterOperation);
    // ...snip...

FilterEffectRenderer::createCustomFilterEffect is the method that will rule them all. Take a look at it, it’s just gorgeous! 🙂

A word about security

Among other things, shaders deal with pixels, which means that a texture can be sampled (read). This translates into a potential security risk: we don’t really like the idea of malicious shaders reading your page content. Imagine what would happen if anyone were able to push a shader capable of taking a screenshot of your private messages on some social network.

In light of this, we enforce restrictions on shader code before it’s sent to the GPU. We do this by validating and rewriting the shaders in a controlled and secure fashion, and CustomFilterValidatedProgram serves just this purpose; one of its goals is to prevent texture sampling, meaning that shaders cannot read pixels from your page content. Most of the validation work actually relies on ANGLE, an open source library used by many other projects, including WebKit and Gecko, for WebGL validation and portability.

If you want to have an idea of the kind of validation we perform, pay attention to CustomFilterValidatedProgram: it might prove a very instructive read.

Wrapping up

Currently, we’re working with the CSS Working Group to make the CSS Custom Filters syntax more elegant and future-proof. As with any new web feature, you can expect changes. Stay tuned for more information on upcoming improvements!

By Michelangelo De Simone at February 07, 2013 05:00 PM

February 05, 2013

A Visual Method for Understanding WebKit Layout

Adobe Web Platform

My last post was an introduction to the WebKit layout code. It was pretty high level, with a focus on documentation. This post is much more hands on: I will explain some changes that you can make to WebKit’s C++ rendering code to be able to see which classes handle which parts of a web page.

Getting and building the code

This is entirely optional, so if you don’t feel like setting this up or don’t have a machine powerful enough to do the build, feel free to skip to the next section. I will put in links to all of the code that I reference throughout the document, so you should be able to follow along with nothing more than the web browser you are using to read this.

Note that this section does not attempt to be a complete guide to setting up a build environment and getting a working build. That would be an entire blog series of its own!

Getting the code

If you’re still with me, before you build anything, you’ll need to get a copy of the source code. WebKit uses Subversion for source control; however, most developers work with the Git mirror. If you’re on Windows, you should jump directly to the Building the code section below, as you need to have the code checked out to a specific location in order for the code to build. Otherwise, you can continue with this section.

Installing Git on Linux and Mac OS X

Installing Git is pretty straightforward on Linux and Mac OS X. On Linux, I’d suggest that you use your distribution’s package manager, but if you can’t do that for some reason, you can download Git from the Git Homepage.

For recent versions of Mac OS X, it is included in the Xcode developer tools, which you will need to build the code anyway. Once you’ve installed Xcode, you will need to install the command line tools, which are located under Xcode -> Preferences -> Downloads in Xcode 4.5.2, and in a similar location in earlier versions.

Downloading the source

Once you have Git installed, grabbing the source is pretty straightforward. First, make sure you have at least 4 gigs of free space. Then, open a terminal, find a suitable directory, and run the following command:

git clone git:// WebKit

This will take a while, but when it’s done, you’ll have your very own copy of the WebKit source code.

That’s great, but I still don’t get this Git thing

Explaining how to use Git is beyond the scope of this post, but here are a couple of resources:

Building the code

WebKit is designed to be able to run on many different platforms. To facilitate this, functions like graphics and networking have abstraction layers that allow an implementation for a specific platform to be plugged in. This combination of the platform independent part of WebKit with the low level bindings for a specific platform is called a port.

WebKit Port Diagram

Note that while there are ports that are operating system specific, like the Mac and Windows ports, there are also ports for platforms like Qt, which is a library that can be used on many different operating systems. The short of it is that in order to build WebKit, you need to choose the proper port to build for your purposes.

The best place to start to learn how to build WebKit is to take a look at the documentation on installing the developer tools to build webkit. It lists out the most common ports with either instructions in the page or links to the Wiki documentation on how to set up a development environment for that port. I would suggest using the Mac port if you’re on a Mac, and the Windows port if you’re on Windows. If you’re on Linux, you can choose between the Gtk, Qt, and EFL ports. If you don’t care or don’t know which one to choose, you could use the Gtk port if you’re using Gnome, and the Qt port if you’re using KDE.

If you’re building a port other than the Mac or Windows port, all of the build instructions are on the wiki page about that port, which I linked to in the paragraph above. Otherwise, you will want to read the build instructions for the Mac and Windows ports.

There are more ports than the ones I mentioned here; however, when doing WebKit development, it is probably easiest to start with one of the ports I mentioned above. If you would like to know about others, there is a list of ports on the WebKit wiki.

Help! I followed all the instructions and it still doesn’t work!

If you can’t get your chosen port to build, or if you’re unsure of which port to choose, there is a page of contact information for the WebKit project. I would suggest starting by sending a message to the webkit-help mailing list, as that will get you in contact with people that should be able to help with any build problems or questions that you may encounter. You can also contact the developers on IRC: the #webkit channel is WebKit central.

Visualizing layout

The layout process determines what features of a web page get drawn into which locations. However, when starting out, it can be hard to link the C++ code to the visual result that you see in your browser. Probably the easiest way to make this connection in a visual way is to modify the code that draws everything to the screen. This drawing step happens after layout, and WebKit calls it painting.

Basics of painting

After layout has finished, the process of painting begins. It happens in a very similar way to layout: each rendering class has a paint method that is analogous to the layout method we discussed in the last post. This paint method is responsible for drawing the current renderer and its children onto the display.

The CSS specification defines the painting order for elements, and the WebKit implementation has painting phases for all of these steps. As with many of these things, it is too complex to explain here in its entirety, but the rules of the painting order ensure that all of CSS’s positioning rules are rendered properly. This must be taken into account when deciding where in the individual paint methods changes should be made.

The paint method can be instrumented to draw the outline of the area painted by the class. This can make it much easier to understand which parts of the code create which parts of the rendered page.

What does it look like?

To generate the above screenshot, the RenderBlock::paint method (look in Source/WebCore/rendering/RenderBlock.cpp for the definition) was instrumented to draw a green border around the block itself.

Drawing the outlines

The C++ code that generated those lovely green outlines is the following:

Color color;
color.setRGB(0, 255, 0);
paintInfo.context->setStrokeColor(color, style()->colorSpace());
paintInfo.context->strokeRect(overflowBox, 2);

Let’s break this down line by line:

Color color;

This creates a Color object. It is hopefully not surprising that this object represents a color, and will be used to set the color that we are going to draw our border with.

color.setRGB(0, 255, 0);

This call sets the color for our Color object. It takes 3 numbers between 0 and 255, each one representing the amount of red, green, or blue that makes up the color. For simplicity, I just went with entirely green, but you could do any sort of fancy thing you like here. Setting the color to different values could come in handy if you want to instrument multiple classes at once: each could have a different color, making it easy to tell them apart.

paintInfo.context->setStrokeColor(color, style()->colorSpace());

Now’s when things get more complex. paintInfo is an instance of PaintInfo that is passed as an argument to the paint method. PaintInfo is a simple structure that contains the state of the painting operation as well as the object used to actually call the native drawing routines. That object is the context member of PaintInfo. This object is actually a facade that provides a platform-independent API for drawing. It is declared in platform/graphics/GraphicsContext.h, and there are corresponding platform-specific implementations that it uses depending on which WebKit port is in use.

If you have done graphics programming before (even using HTML’s canvas element), you will probably find the use of the GraphicsContext familiar. The methods may have different names, but the underlying concepts of the API are exactly the same. For example, this setStrokeColor call sets the color that is used by subsequent drawing operations that use this context.

The last piece of this line that I should explain is the style() call. This gets the CSS style information for the RenderBlock, and queries it for the color space. This is one of the things that will need to change when modifying this code to work in paint methods in other objects, since there are some cases where a renderer delegates painting some part of its content to another class that itself is not a renderer. In those cases, that class has a reference to the renderer it is attached to, and the style() method can be called via that member. In most cases, this member is called m_renderer, so this call changes to m_renderer->style()->colorSpace(). I give an example of this case later in this post.

paintInfo.context->strokeRect(overflowBox, 2);

This is the call that actually draws the border rectangle. The overflowBox is the dimensions of the block plus any visible overflow: that’s the entire area of the page that is rendered by this block and its children. The second argument to strokeRect is the width of the line used to draw the rectangle.

Where is this code added?

Of course, this code cannot just be added at random, you need to put it in a specific place in the paint method. As of this writing, the paint method in RenderBlock.cpp looks like the following:

void RenderBlock::paint(PaintInfo& paintInfo, const LayoutPoint& paintOffset)
{
    LayoutPoint adjustedPaintOffset = paintOffset + location();

    PaintPhase phase = paintInfo.phase;

    // Check if we need to do anything at all.
    // FIXME: Could eliminate the isRoot() check if we fix background painting
    // so that the RenderView paints the root's background.
    if (!isRoot()) {
        LayoutRect overflowBox = overflowRectForPaintRejection();
        if (!overflowBox.intersects(paintInfo.rect))
            return;
    }

    bool pushedClip = pushContentsClip(paintInfo, adjustedPaintOffset);
    paintObject(paintInfo, adjustedPaintOffset);
    if (pushedClip)
        popContentsClip(paintInfo, phase, adjustedPaintOffset);

    // Our scrollbar widgets paint exactly when we tell them to,
    // so that they work properly with z-index. We paint after we painted
    // the background/border, so that the scrollbars will sit above
    // the background/border.
    if (hasOverflowClip() && style()->visibility() == VISIBLE &&
        (phase == PaintPhaseBlockBackground ||
        phase == PaintPhaseChildBlockBackground) &&
            paintInfo.shouldPaintWithinRoot(this))
        layer()->paintOverflowControls(paintInfo.context,
            roundedIntPoint(adjustedPaintOffset), paintInfo.rect);
}

I’m not going to go over this one line at a time, as by the time you read this, this method may or may not look the same. However, I would like to share where I placed the code and the rationale behind it so that hopefully you will still be able to instrument the code yourself even if it has changed, and will be able to think about how to instrument the paint methods in other render classes as well.

When looking for a place to put the code in this method, the first thing I needed to find was a LayoutRect that represents the area that this renderer will paint into. This is the dimensions of the block plus any visible overflow. This value is usually used early on in the paint method to determine if the current block is actually in the area to be painted (paintInfo.rect). In RenderBlock::paint, you can see this in the if statement following the helpful comment “Check if we need to do anything at all.”:

if (!isRoot()) {
    LayoutRect overflowBox = overflowRectForPaintRejection();
    if (!overflowBox.intersects(paintInfo.rect))
        return;
}

So overflowBox is our variable, which you might remember from the instrumentation code example above. Now, we need to determine where to put our instrumentation code. In this case, unless we want to move the definition of overflowBox, we don’t have much choice: it must go in the body of the if statement above. Since we don’t want to draw the outline if the block is not in the area to be painted, we should put it after the return statement at the end, like so:

if (!isRoot()) {
    LayoutRect overflowBox = overflowRectForPaintRejection();
    if (!overflowBox.intersects(paintInfo.rect))
        return;

    Color color;
    color.setRGB(0, 255, 0);
    paintInfo.context->setStrokeColor(color, style()->colorSpace());
    paintInfo.context->strokeRect(overflowBox, 2);
}

And now if we build and run the code, we should see an effect like in the screenshot above. It is also very interesting to launch the web inspector with this active.

More complex paint methods

The paint method of RenderBlock makes for a nice example because it is fairly small and it is easy to see how to instrument it. (Most of the complexity is nicely factored out into helper methods.) However, this is not the case for the paint method in InlineFlowBox. InlineFlowBox is used to manage flows of inline content, like text.

I am not going to reproduce the entirety of InlineFlowBox::paint here, but you should be able to follow along by either opening the file in your favorite editor if you opted to download the code, or by looking at InlineFlowBox.cpp on Trac.

While this is a more complex paint method, the check if painting should happen is right at the top of the method:

LayoutRect overflowRect(visualOverflowRect(lineTop, lineBottom));

if (!paintInfo.rect.intersects(pixelSnappedIntRect(overflowRect)))
    return;

From this, we can glean two pieces of information, just like we did in the RenderBlock example: that overflowRect defines our painted area, and that we should instrument after the if statement that determines whether we should paint.

Looking at the rest of the method and the plethora of if and return statements, it is less obvious how far into the method our instrumentation code should go. All of those conditions might look very opaque, but you may be happy to know that you don’t need to understand them all to decide where to put the instrumentation code.

Most of the conditions in InlineFlowBox::paint have to do with the different phases of painting. I mentioned earlier that CSS defines the order in which painting should be done: WebKit uses paint phases to implement this. So all these if statements do is ensure that only the proper things are drawn in a given phase. If we were writing production code, we would need to determine in which phase our outline should be drawn, and make sure that it is only drawn in that phase. However, since this is exploratory code, we can just draw the outline in every phase, which ensures that it shows up on top of any other content rendered by this object. The simplest way to do this is to place the instrumentation code immediately after the check that determines that painting should happen.

Great! Now we know where the code should go, and we know which variable contains our bounding box, so we can take the code from earlier, changing overflowBox to overflowRect and changing the color to red just in case we want to have this active at the same time as the changes to RenderBlock:

Color color;
color.setRGB(255, 0, 0);
paintInfo.context->setStrokeColor(color, style()->colorSpace());
paintInfo.context->strokeRect(overflowRect, 2);

That’s the right direction, but if you do this, you will find that it doesn’t compile: the compiler can’t find a style() method! That’s because, as we’ve just discovered, InlineFlowBox is not a render object; it is a helper used by render objects like RenderInline.

Luckily, these helper objects have a reference to their render object, accessible via the m_renderer instance variable. This allows us to make one more change to our code which will allow it to compile:

Color color;
color.setRGB(255, 0, 0);
paintInfo.context->setStrokeColor(color, m_renderer->style()->colorSpace());
paintInfo.context->strokeRect(overflowRect, 2);

And then you can see the InlineFlowBoxes in all their glory:

What’s next?

If you didn’t get a working build before reading the whole thing, I would suggest going back, setting up a build, and then trying out the code. It really is a great way to learn how layout (and even a bit of painting) works.

If you have a working build and were following along at home, you should try instrumenting the paint methods of some of the other classes in Source/WebCore/rendering (InlineBox and InlineTextBox are good places to start), and see how these classes map to the visual elements of a web page. You might be surprised by what you discover.

And if that isn’t enough for you, there are plenty of open bugs that you could dig into. (And if you can’t find something you find interesting there, I’m sure that someone in the #webkit IRC channel could give suggestions.) After all, there’s no better way to learn than to really work with the code.

Of course, I will be also writing more posts here as well. Stay tuned for my next post on WebKit’s layout tests and the infrastructure that runs them.

By Bem Jones-Bey at February 05, 2013 05:14 PM

January 24, 2013

Introduction to the Performance-Tests in WebKit

Adobe Web Platform

In this post I would like to give a short overview of WebKit’s performance and memory testing framework. Along with a bunch of WebKit geeks, I have been involved in its development for a while, mostly contributing on the memory-measurement side.

If I were to summarize the evolution of performance tests in WebKit, I’d start with the early days, when we had JavaScript performance tests only for JavaScriptCore (the SunSpider test suite) and another set of tests for V8 (the V8 test suite). We also had a ton of layout tests, which could give performance feedback but had purposes other than performance testing. You might say at this point: “Hey, stop! There are benchmark sites and suites all around online!” I totally agree, but we needed something more engine-specific: something for testing WebKit itself (including both the JavaScript engine and the web engine) instead of the browser that is built on top of WebKit.

Let’s distinguish between online benchmarks for browser-level performance (e.g. Chromium performance testing) and engine-level performance tests. Engine-level performance tests do not test the browser or the underlying platform’s graphical performance; they test part of a specific component of the web engine (e.g. parse time of the HTML parser, or runtime of the layout system for floating containers). Browser-level performance tests usually do complex things that exercise many components of the web engine at once; the goal of these tests is usually to reproduce some kind of real browsing behavior in order to measure the browser’s response time. This post is only about engine-level testing.

Engine-level performance tests have been in WebKit trunk for more than a year now. We improved the system a lot in the last year (e.g. we added support for memory measurements in WebKit Bug #78984), so I think it is time to blog about it to a wider audience. Although work on the system is still in progress, it’s already capable of demonstrating improvements and catching both performance and memory regressions.

In the first part of this entry, I want to give you a short introduction to the system and a short description of how you can use the testing infrastructure. In the second part I intend to talk about the continuous performance measuring system and its online visualizer, the performance-test website. If you follow carefully, I’m sure you’ll pick up some information about our super-exciting future plans!

What is a performance test in WebKit?


Our performance tests measure the run-time performance and the memory usage of WebKit. In all the test cases we start the measurement before the concrete test (the function, the animation, the page loading) starts and stop it when it finishes. Although measuring performance and memory sounds pretty straightforward, it can be deceptively difficult. For example, what do we mean by run-time performance? The right answer depends on the test itself. The animation performance tests produce frames-per-second (fps) values. The tests that measure the runtime of different JS functions (DOM manipulation, page loading, etc.) can produce either milliseconds (ms) or runs per second (runs/s). If we really want to understand the meaning of a specific set of performance measurements, we need to look deeper at the actual test cases, but that’s not the goal of this post. C’mon, this is just an introduction!

The memory consumption tests produce two values. The first is the general heap usage of WebKit (memory allocated through the FastMalloc interface) and the second is the heap usage of JavaScript (memory allocated by the JavaScript engine). We count all of our raw memory results in bytes, but as you will see later on the result pages, we display them in kilobytes.

Both the performance and the memory results are produced via the JavaScript engine. All of our performance tests are JavaScript-based, except the Webpage Replay tests (you can read about them on the related page in WebKit Trac). We experimented with other approaches like C++ and Python, but the JavaScript one was the most general across the different WebKit ports, so we stayed with it.

How to run performance tests in WebKit?


It’s time to do some actual experiments! We have a script to run performance tests; it is located under the trunk/Tools/Scripts directory and is called run-perf-tests. I assume you have a WebKit build (if you don’t, there is documentation on how to set one up), so to run all performance tests (located under trunk/PerformanceTests) you only need to run that script (different ports require additional parameters; for the details check out the --platform parameter). Because running all the tests can take a long time, you can restrict which tests run by specifying some directories or a list of tests as parameters to run-perf-tests.

The script produces pretty straightforward output:


Running Bindings/set-attribute.html (18 of 115)
DESCRIPTION: This benchmark covers 'setAttribute' in Dromaeo/dom-attr.html and
             other DOM methods that return an undefined.
RESULT Bindings: set-attribute= 670.696533834 runs/s
median= 670.967741935 runs/s, stdev= 4.72565174943 runs/s, 
        min= 663.265306122 runs/s, max= 677.966101695 runs/s
RESULT Bindings: set-attribute: JSHeap= 57804.8 bytes
median= 57832.0 bytes, stdev= 1283.97383151 bytes, 
        min= 54112.0 bytes, max= 59296.0 bytes
RESULT Bindings: set-attribute: Malloc= 1574148.0 bytes
median= 1572772.0 bytes, stdev= 3346.56713349 bytes, 
        min= 1568992.0 bytes, max= 1584504.0 bytes
Finished: 16.537399 s


The output contains all the things that we are interested in: test names, descriptions (if they exist), and the performance and memory results. By default, after a warm-up run we run each test 20 times (in DumpRenderTree / WebKitTestRunner) to provide a stable result.

After the script has finished testing, in addition to the console output we get a nice HTML-based table with all of our results. The results in the table show the average values and their deviation for each test. If a result is not stable enough, we also get a warning sign next to it.




It’s possible to switch between the Time and the Memory views and reorder the results. If we are interested in a specific test, we can select it to see more details, as in the screenshot below.



How to compare performance results?

Let’s see a simple workflow: we have a clean WebKit repository with a build. We run all the performance tests and want to compare the results with our modified WebKit. Since all the performance results are stored by default in the WebKitBuild/Release/PerformanceTestsResults.json file, it’s very simple to compare them. You just need to apply your patch to the repository, rebuild WebKit, and run the performance tests again. After the rerun, the results HTML page contains a new column and shows all the new and old results in the same table. You can repeat the measurement as many times as you wish; each run will add a new column to your results. Additionally, you can compare different repositories or specify another file for storing the results. You can find additional details by running run-perf-tests --help.




This way you can easily test the effect of your changes. Furthermore the results table will inform you about your improvements and regressions in a simple and clear way.

Continuous performance and memory results online


Each platform has its own performance bot that provides continuous performance measurements; the test results are submitted to the Perf-O-Matic site.




I think the most useful feature of the Perf-O-Matic system is the Custom Charts section. With the help of Custom Charts, we are able to make custom queries to check out, compare, and investigate individual test results from past runs on the performance bots. This way we can verify improvements or catch regressions at the individual test level. See below for an example test that evaluates CSS property setters and getters by setting all the possible CSS properties to a pre-defined value and accessing them through JavaScript.




You can find several useful pieces of information in the chart above. We always want to know the name of the test, the port that we tested, the difference from the previous measurement, the SVN revision, and the date of the measurement. All of this information is shown on a nice zoomable chart. Take some time to play with the charts; I’m pretty sure you will also find them useful. We are planning to update this system to a newer version soon, so more useful features are coming!

Looking into the future of performance measurements


After we further stabilize the continuous performance testing system (WebKit Bug #77037) and identify a well-defined set of tests with low deviation and meaningful results (WebKit Bug #105003), we are planning to add performance-testing Early Warning System support to Bugzilla. The system will automatically report possible performance and memory regressions for uploaded patches in a Bugzilla comment, so it will work just like the build/layout Early Warning System. It will help improve the quality of WebKit development a lot. It sounds cool, doesn’t it?

I hope you enjoyed this little introduction to the WebKit Performance Test system and its online visualizer, and that you will try some of these tools.



By Zoltan Horvath at January 24, 2013 05:01 PM

June 25, 2012

OpenGL Cairo Backend

I've been trying to get the OpenGL backend working under Windows, and have finally gotten a simple test application to work.

The next step is to attempt to tie in the various Cairo GL routines in the WebKit tree to see if we can get any hardware accelerated action on the 3D CSS and perhaps WebGL fronts.

I'm in the process of updating the WinCairoRequirements tree with the changes (as well as pushing them upstream), and should have it wrapped up some time tomorrow.

By Brent Fulgham at June 25, 2012 04:06 AM

June 14, 2012

Updates Galore

After an extremely long delay, I have finally been able to devote a little time to WebKit hacking. Over the past couple of days I have completed the following:
  • Updated the WinCairoRequirements source repository:
    • Cairo (version 1.12.2)
    • ICU (version 4.6.1, with Apple's 10.7.4 updates)
    • libxml2 (version 2.8.0)
    • libxslt (version 1.1.26)
    • libpng (version 1.5.10)
    • OpenCFLite (version 635.21)
    • Pixman (version 0.26.0)
    • SQLite (version
    • zlib (version 1.2.7)
    Note that anything I don't mention here is the same version as before. I also updated the build solutions for VS2005, VS2008, and VS2010.
  • Posted a patch against WebKit rev 120024 that allows you to build with either Visual Studio 2008 or Visual Studio 2010.
  • And finally, I've moved the file from my soon-to-be-defunct iDisk to DropBox. As of r120235 the system should automatically grab the build dependencies from the new location. Please let me know if this does not work for you.
I'm hoping to devote a lot more time to WebKit over the next few weeks, and hope to focus on the glaring lack of 3D CSS support. I'd love to hear from anyone interested in helping out! If you haven't used the WinCairo build recently, I suggest you give it a try with the updated Cairo release. Cairo 1.12.2 has significantly sped up several operations, so 2D graphical work is faster than ever!

By Brent Fulgham at June 14, 2012 03:11 AM