July 25, 2016

New <video> Policies for iOS

WebKit Blog

Since before your sun burned hot in space and before your race was born, Safari on iOS has required a user gesture to play media in a <video> or <audio> element. When Safari first supported <video> in iPhoneOS 3, media data loaded only when the user interacted with the page. But with the goal of returning more control over media playback to web developers, we relaxed this restriction in iOS 8: Safari began honoring the preload="metadata" attribute, allowing <video> and <audio> elements to load enough media data to determine that media’s size, duration, and available tracks. For Safari in iOS 10, we are further relaxing this user gesture requirement for silent <video> elements.

Motivation

It turns out that people these days really like GIFs. But the GIF format turns out to be a very expensive way to encode animated images when compared to a modern video codec like H.264. We’ve found that GIFs can be up to twelve times as expensive in bandwidth and twice as expensive in energy use. It’s so expensive that many of the largest GIF providers have been moving away from GIFs and toward the <video> element. Since most of these GIFs started out their lives as video clips, were converted into animated GIFs, and were again converted back to video clips, you might say that the circle is complete.

But while this move does spare websites’ bandwidth costs as well as saving users’ batteries, it comes at a usability cost. On iOS 9, <video>s will only begin playing as a result of a user gesture. So pages which replace an <img> with a <video> will require a user gesture before displaying their animated content, and, on iPhone, the <video> will enter fullscreen when starting playback.

A note about the user gesture requirement: when we say that an action must have happened “as a result of a user gesture”, we mean that the JavaScript which resulted in the call to video.play(), for example, must have directly resulted from a handler for a touchend, click, dblclick, or keydown event. So, button.addEventListener('click', () => { video.play(); }) would satisfy the user gesture requirement; video.addEventListener('canplaythrough', () => { video.play(); }) would not.

Similarly, web developers are doing some seriously amazing stuff by integrating <video> elements into the presentation of their pages. And these pages either don’t work at all on iOS due to the user gesture requirement, or the <video> elements completely obscure the page’s presentation by playing an animated background image in fullscreen.

WebKit’s New Policies for Video

Starting in iOS 10, WebKit relaxes its inline and autoplay policies to make these presentations possible, but still keeps in mind sites’ bandwidth and users’ batteries.

By default, WebKit will have the following policies:

  • <video autoplay> elements will now honor the autoplay attribute, for elements which meet the following conditions:
    • <video> elements will be allowed to autoplay without a user gesture if their source media contains no audio tracks.
    • <video muted> elements will also be allowed to autoplay without a user gesture.
    • If a <video> element gains an audio track or becomes un-muted without a user gesture, playback will pause.
    • <video autoplay> elements will only begin playing when visible on-screen such as when they are scrolled into the viewport, made visible through CSS, and inserted into the DOM.
    • <video autoplay> elements will pause if they become non-visible, such as by being scrolled out of the viewport.
  • <video> elements will now honor the play() method, for elements which meet the following conditions:
    • <video> elements will be allowed to play() without a user gesture if their source media contains no audio tracks, or if their muted property is set to true.
    • If a <video> element gains an audio track or becomes un-muted without a user gesture, playback will pause.
    • <video> elements will be allowed to play() when not visible on-screen or when out of the viewport.
    • video.play() will return a Promise, which will be rejected if any of these conditions are not met (see the sketch after this list).
  • On iPhone, <video playsinline> elements will now be allowed to play inline, and will not automatically enter fullscreen mode when playback begins.
    • <video> elements without playsinline attributes will continue to require fullscreen mode for playback on iPhone.
    • When exiting fullscreen with a pinch gesture, <video> elements without playsinline will continue to play inline.
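As a quick illustration of how a page might work with these policies, here is a minimal sketch (not code from this post; selecting the first <video> on the page and falling back to showing native controls are assumptions): playback is attempted muted and inline, and the returned promise decides what happens next.

var video = document.querySelector('video');
video.muted = true;                      // satisfies the "no audible audio" policy
video.setAttribute('playsinline', '');   // allow inline playback on iPhone

video.play().then(function () {
    // Autoplay succeeded without a user gesture.
}).catch(function () {
    // The promise was rejected (for example, the media has an audible,
    // un-muted track, or the element is not visible). Show the native
    // controls so the user can start playback with a gesture instead.
    video.controls = true;
});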

For clients of the WebKit framework on iOS, these policies can still be controlled through API, and clients who are using existing API to control these policies will see no change. For more fine-grained control over autoplay policies, see the new WKWebViewConfiguration property mediaTypesRequiringUserActionForPlayback. Safari on iOS 10 will use WebKit’s default policies.

A note about the playsinline attribute: this attribute has recently been added to the HTML specification, and WebKit has adopted this new attribute by unprefixing its legacy webkit-playsinline attribute. This legacy attribute has been supported since iPhoneOS 4.0, and in accordance with our updated unprefixing policy, we’re pleased to have been able to unprefix webkit-playsinline. Unfortunately, this change did not make the cut-off for iOS 10 Developer Seed 2. If you would like to experiment with this new policy on iOS 10 Developer Seed 2, the prefixed attribute will work, but we would encourage you to transition to the unprefixed attribute when support for it is available in a future seed.

Examples

So how would the humble web developer take advantage of these new policies? Suppose one had a blog post or article with many GIFs which one would prefer to serve as <video> elements instead. Here’s an example of a simple GIF replacement:

<video autoplay loop muted playsinline>
  <source src="image.mp4">
  <source src="image.webm" onerror="fallback(parentNode)">
  <img src="image.gif">
</video>
function fallback(video)
{
  // Replace the <video> with its <img> fallback if none of the <source>s loaded.
  var img = video.querySelector('img');
  if (img)
    video.parentNode.replaceChild(img, video);
}

On iOS 10, this provides the same user experience as using a GIF directly, with graceful fallback to that GIF if none of the <video>’s sources are supported. In fact, this code was used to show you that awesome GIF.

If your page design requires different behavior depending on whether inline playback is allowed or fullscreen playback is required, use the -webkit-video-playable-inline media query to differentiate the two:

<div id="either-gif-or-video">
  <video src="image.mp4" autoplay loop muted playsinline></video>
  <img src="image.gif">
</div>
#either-gif-or-video video { display: none; }
@media (-webkit-video-playable-inline) {
    #either-gif-or-video img { display: none; }
    #either-gif-or-video video { display: initial; }
}

These new policies mean that more advanced uses of the <video> element are now possible, such as painting a playing <video> to a <canvas> without taking that <video> into fullscreen mode.

  var video;
  var canvas;

  function startPlayback()
  {
    if (!video) {
      video = document.createElement('video');
      video.src = 'image.mp4';
      video.loop = true;
      video.addEventListener('playing', paintVideo);
    }
    video.play();
  }

  function paintVideo()
  {
    if (!canvas) {
      canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      document.body.appendChild(canvas);
    }
    canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
    if (!video.paused)
      requestAnimationFrame(paintVideo);
  }
<button onclick="startPlayback()">Start Playback</button> 

The same technique can be used to render into a WebGL context. Note in this example that a user gesture (the click event) is required, as the <video> element is not in the DOM, and thus is not visible. The same would be true for a <video style="display:none"> or <video style="visibility:hidden">.
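Returning to the WebGL case: here is a rough sketch (the shader compilation and textured-quad draw call are the usual WebGL boilerplate and are omitted, so treat this as an outline rather than a complete program, and the video variable is assumed to be the playing <video> from the previous example). Each frame of the <video> is uploaded as a texture:

  var glCanvas = document.createElement('canvas');
  document.body.appendChild(glCanvas);
  var gl = glCanvas.getContext('webgl');
  var texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  function paintVideoToWebGL()
  {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // An HTMLVideoElement is accepted directly as the pixel source.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
    // ... issue the textured-quad draw call with a compiled shader program here ...
    if (!video.paused)
      requestAnimationFrame(paintVideoToWebGL);
  }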

We believe that these new policies really make video a much more useful tool for designing modern, compelling websites without taxing users’ bandwidth or batteries. For more information, contact me at @jernoble, Jonathan Davis, Apple’s Web Technologies Evangelist, at @jonathandavis, or the WebKit team at @WebKit.

By Jer Noble at July 25, 2016 02:00 PM

July 22, 2016

Improvements in MathML Rendering

WebKit Blog

MathML is a W3C recommendation and an ISO/IEC standard that can be used to write mathematical content on HTML5 web pages, EPUB3 e-books and many other XML-based formats. Although it has been supported by WebKit for a long time, the rendering quality of mathematical formulas was not as high as one would expect and several important MathML features were missing. However, Igalia has recently contributed to WebKit’s MathML implementation and big improvements are already available in Safari Technology Preview 9. We give an overview of these recent changes as well as screenshots of beautiful mathematical formulas rendered by the latest Safari Technology Preview. The MathML demos from which the screenshots were taken are also provided and you are warmly invited to try them yourself.

Mathematical Fonts

We continue to rely on Open Font Format features to improve the quality of mathematical rendering. The default user agent stylesheet will try to find known mathematical fonts, but you can always set the font-family on the <math> element to pick your favorite font. In the following screenshot, the first formula is rendered with Latin Modern Math while the second one is rendered with Libertinus Math. The last formula is rendered with an obsolete version of the STIX fonts lacking many Open Font Format features; for example, one can see that integral and summation symbols are too small or that their scripts are misplaced. Scheduled for the third quarter of 2016, STIX 2 will hopefully have many of these issues fixed and hence become usable for WebKit’s MathML rendering.

Screenshot of MathML with different fonts in Safari Technology Preview 9
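For instance, a page can request a specific mathematical font directly on the <math> element; a minimal sketch (the font names are just examples, any installed OpenType math font can be used):

<math style="font-family: 'Latin Modern Math', 'Libertinus Math';">
  <msqrt>
    <msup><mi>x</mi><mn>2</mn></msup>
    <mo>+</mo>
    <mn>1</mn>
  </msqrt>
</math>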

Hyperlinks

WebKit now supports the href attribute, which can be used to set hyperlinks on any part of a formula. This is useful for providing references (e.g. to notations or theorems), as shown in the following example.

Screenshot of MathML hyperlinks in Safari Technology Preview 9
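In markup, this is just a matter of putting an href attribute on the element that should become a link; a minimal sketch (the #theorem-1 target is a placeholder):

<math>
  <mrow href="#theorem-1">
    <msup><mi>a</mi><mn>2</mn></msup>
    <mo>+</mo>
    <msup><mi>b</mi><mn>2</mn></msup>
    <mo>=</mo>
    <msup><mi>c</mi><mn>2</mn></msup>
  </mrow>
</math>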

Mathvariants

Unicode contains Mathematical Alphanumeric Symbols to convey special meaning. WebKit now uses the italic characters from this Unicode block for mathematical variables and the mathvariant attribute can also be used to easily access these special characters. In the example below, you can see italic, fraktur, blackboard bold and script variables.

Screenshot of MathML mathvariants in Safari Technology Preview 9
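For instance, markup like the following selects those styles explicitly (a minimal sketch, not the exact example from the screenshot):

<math>
  <mi>x</mi>                               <!-- single-letter <mi> defaults to the italic math alphabet -->
  <mi mathvariant="fraktur">g</mi>         <!-- fraktur, e.g. for a Lie algebra -->
  <mi mathvariant="double-struck">R</mi>   <!-- blackboard bold -->
  <mi mathvariant="script">L</mi>          <!-- script -->
</math>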

Operators

A big refactoring has been performed on the code handling stretchy operators, large symbols and radicals. As a consequence the rendering quality is now much better and many weird bugs have been fixed.

Screenshot of MathML operators in Safari Technology Preview 9

Displaystyle

A mathematical formula can be integrated inside a paragraph of text (inline math in TeX terminology) or displayed in its own horizontally centered paragraph (display math in TeX terminology). In the latter case, the formula is in displaystyle and has no restrictions on vertical spacing. In the former case, the layout of the formula is modified a bit to minimize vertical spacing and to integrate better within the surrounding text. The displaystyle property can also be set using the corresponding attribute or can change automatically in subformulas (e.g. in fractions or scripts). The screenshot below shows the layout difference according to whether the equation is in displaystyle or not. Note that the displaystyle property should also affect the font-size, but WebKit does not support scriptlevel yet.

Screenshot of MathML displaystyle in Safari Technology Preview 9
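In practice, the same formula can be toggled between the two layouts with the display attribute on <math>; a small sketch (the summation is just a sample formula):

<!-- Display math: rendered with no restriction on vertical spacing. -->
<math display="block">
  <munderover>
    <mo>&#x2211;</mo>
    <mrow><mi>i</mi><mo>=</mo><mn>0</mn></mrow>
    <mi>n</mi>
  </munderover>
  <msub><mi>a</mi><mi>i</mi></msub>
</math>

<!-- The same sum as inline math: scripts move to the side and vertical spacing is tightened. -->
<math display="inline">
  <munderover>
    <mo>&#x2211;</mo>
    <mrow><mi>i</mi><mo>=</mo><mn>0</mn></mrow>
    <mi>n</mi>
  </munderover>
  <msub><mi>a</mi><mi>i</mi></msub>
</math>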

OpenType MATH Parameters

Many new OpenType MATH features have been implemented following the MathML in HTML5 Implementation Note:

  • Use of the AxisHeight parameter to set the vertical position of fractions, tables and symmetric operators.
  • Use of layout constants for radicals, scripts and fractions in order to improve math spacing and positioning.
  • Use of the italic correction of large operator glyphs to set the position of subscripts.

The screenshots below illustrate some of these improvements. For the first one, the use of AxisHeight makes it possible to better align the fraction bars with the plus, minus, and equal signs. For the second one, the use of layout constants for scripts, as well as the italic correction of the surface integral, improves the placement of its subscript.

Screenshot of MathML examples with OpenType MATH parameters in Safari Technology Preview 9

Arabic Mathematics

WebKit already had support for the right-to-left mathematical layout used to write Arabic mathematical notations. Although glyph-level mirroring is not supported yet, we added support for right-to-left radicals. This makes it possible to use basic arithmetic notations, as shown in the image that follows.

Screenshot of Arabic Mathematics in Safari Technology Preview 9
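Right-to-left layout is requested with the standard dir attribute; a minimal sketch with a mirrored radical (the Arabic letter is just an example symbol):

<math dir="rtl">
  <msqrt>
    <mi>&#x633;</mi>
    <mo>+</mo>
    <mn>1</mn>
  </msqrt>
</math>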

Conclusion

Great changes have happened in WebKit’s MathML support recently and Igalia’s web platform team is excited about seeing beautiful mathematical formulas in WebKit-based browsers and e-readers! If you have any comments, questions or feedback, you can contact Frédéric at fwang@igalia.com or Manuel at @regocas. Detailed bug reports are also welcome on WebKit’s bug tracker. We are still refining the implementation so stay tuned for more awesome news!

By Frédéric Wang at July 22, 2016 02:00 PM

July 20, 2016

Release Notes for Safari Technology Preview Release 9

WebKit Blog

Safari Technology Preview Release 9 is now available for download for both macOS Sierra betas and OS X El Capitan. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 202612–203152.

JavaScript

  • Improved spec compliance for Array and %TypedArray% (r202631, r202943, r202926, r203147, r203101, r203107, r203037)
  • Improved the performance of %TypedArray%.prototype.subarray (r203076)
  • Fixed Date.setYear() to be more spec compliant (r202683)
  • Fixed Date.toGMTString() to be the Date.toUTCString() function (r202752)
  • Fixed RegExp.compile to properly return the regular expression on success (r202770)
  • Improved String to better match specs (r202916, r202956)
  • Fixed cyclic prototypes to throw the right error type according to specs (r202832)
  • Implemented HasRestrictedGlobalProperty when checking for global lexical tier conflicts (r202734)
  • Fixed an unexpected “Out of memory” error for String.prototype.repeat() with a negative value argument (r202954)
  • Fixed use strict handling when parsing functions that include use strict after other non-strict statements (r202828)

Web APIs

  • Changed FontFaceSet.load and FontFaceSet.check to honor the second argument (r203092)
  • Moved assignedSlot from CharacterData to Text to match the latest Shadow DOM specs (r202873)
  • Replaced Event.prototype.scoped by Event.prototype.composed to reflect the latest Shadow DOM spec (r202953)
  • Aligned document.body setter with the HTML specification (r202893)
  • Fixed document.body to return the first child of the HTML element that is either a body or frameset element (r202881)
  • Fixed document.title setter for SVG documents (r202895)
  • Fixed an issue that prevented calling the document.fonts.ready callback (r202945)
  • Added support for unprefixed version of the CSS image-set() function (r202765)
  • Changed setting table.tFoot and calling table.createTFoot() to append the HTML <tfoot> element to the end of the table (r203011)
  • Exposed <td> and <th> elements as HTMLTableCellElement objects (r202937)
  • Prevented tbody.deleteRow(-1) and tr.deleteCell(-1) from throwing an error when there are no rows or cells (r202952)

MathML

  • Added support for href attribute in MathML (r203104)
  • Improved the layout of <munderover> using parameters from the OpenType MATH table (r203074)
  • Added support for mathvariants that cannot be emulated via CSS. (r203072)
  • Refactored MathML layout functions to avoid using flexbox (r202934)

Apple Pay

  • Fixed a bug allowing multiple payment sheets in Safari to be open at the same time (r203084)
  • Renamed addressFields to contactFields (r202644)
  • Added type and paymentPass properties in PaymentMethod (r202655)

Web Inspector

  • Added “Copy as cURL” feature for resource requests in the timeline (r203132)
  • Fixed an issue causing Top Functions data to show even when disabled (r203102)
  • Improved JSON pretty printing in more scenarios (r202933)
  • Added spring to transition-timing-function value auto-completion (r202702)
  • Added pixel area column to the Layout timeline view (r202713)
  • Improved the API view of native DOM APIs (r202666)
  • Fixed ⌘⇧S in the Styles sidebar to always show the save dialog (r203017)
  • Fixed an issue causing a scrolled JavaScript Snapshot list to be drawn blank when switching back from a comparison view (r202932)
  • Fixed an issue that caused a UTF8-encoded XHR to show garbled data in the Resource sidebar (r202843)
  • Added Shadow Content type to DOM tree in the Elements tab (r202634)
  • Fixed an issue preventing the last normal tab from being closed with the “Close Tab” context menu item (r202711)
  • Fixed an issue causing the wrong function name to show in the Scope Chain sidebar of the Debugger tab (r202717)

Media

  • Paused video elements that are too small when returning to inline playback (r203066)
  • Changed the picture-in-picture control glyph (r202880)
  • Changed fullscreen and picture-in-picture to not animate when navigating away from the host page (r202872)
  • Fixed WebAudio convolver channels to throw exceptions for an invalid number of channels (r202617)
  • Fixed clearing of MSE buffers by switching to MediaTime (r202641)
  • Fixed an issue where Facebook videos without audio tracks will sometimes cause playback controls to appear (r202918)

Rendering

  • Fixed rendering an <img> element with a wide gamut PDF (r202927)
  • Fixed drawing an SVG image into a <canvas> that is not in the DOM (r202712)

Bug Fixes

  • Fixed database process crashes when deleting a corrupt SQLite database file (r202822)
  • Fixed blob content type when retrieving blobs from IndexedDB (r202747)
  • Prevented a crash when attempting to copy an image (r202754)

By Jon Davis at July 20, 2016 05:00 PM

July 12, 2016

Frédéric Wang: MathML Improvements in WebKit

Igalia WebKit

In a previous blog post, I explained the work made by Igalia’s web platform team to refactor WebKit’s MathML layout classes. I stated that although some rendering improvements were a nice side effect, the main goal of the first phase was really to clean the code up so that it is easier for developers to work on MathML in the future. Indeed this really made things easier to review: Quite unexpectedly to me, the second phase only took 4 days to be upstreamed… Kudos to Brent Fulgham for having reviewed so many patches in such a short period of time!

In this blog post, I am going to give an overview of the improvements made during these first two phases, taking changeset r203109 as a reference. The changes will be available in WebKitGTK+ 2.14 in September and are likely to be included this month in the next Safari Technology Preview. More work definitely remains to be done, such as the third phase or other rendering improvements, but I believe we have already made a big step forward!

Mathematical Fonts

Two years ago, basic support for operator stretching via the OpenType MATH table was added to WebKit. During the refactoring, we improved that support and also made use of more parameters to improve the math layout (see the section about OpenType MATH parameters below). While Latin Modern Math will be used in most screenshots, the following one shows that you can use any math font. By default, WebKit will try to use one of the known math fonts, but if none are available, or if you force a non-math font-family, the rendering quality may not be good.

The following screenshot gives the rendering for various fonts. For the last one we used the value sans-serif to illustrate issues with non-math fonts (displaystyle integral too small, mathvariant italic glyphs taken from another font, missing italic correction, etc.).

Screenshot: integral obtained by contour integration, rendered in WebKitGTK+ r203109 with various font-family values.
From top to bottom: TeX Gyre Schola, Latin Modern, DejaVu Math, STIX, and sans-serif.

The href Attribute

This new feature is obvious: You can now create a hyperlink for any part of a mathematical formula! Here is a screenshot of MathML Torture Test 21 with additional links, as displayed in WebKit r203109.

Screenshot of MathML Torture test 21 with hyperlinks

The mathvariant Attribute

Unicode contains Mathematical Alphanumeric Symbols to convey special meaning such as double-struck or specific Arabic styles. Characters for these symbols are generally provided by math fonts. In MathML, mathematical variables are automatically rendered using the italic characters from this Unicode block. One can also access these characters via the mathvariant attribute and that attribute is actually used by many LaTeX-to-MathML converters.

In the following screenshot, you can see that the letters f, x and y are now drawn with these special mathematical italic glyphs and that WebKit uses the conventional fraktur style for the Lie algebra g. Note that the prime is still too small because WebKit does not make use of the ssty feature yet.

Screenshot: homomorphism of Lie algebras, from the Wikipedia article on Lie algebras.
Top: Safari 9.1.1. Bottom: Safari r203109.

Operators and Spacing

As said in my previous blog post, the rendering of large and stretchy operators has been largely rewritten and, as a consequence, has improved. Also, I mentioned that the width of operators may depend on their height. This may cause accumulated approximations during the computation of preferred widths. The old flexbox-based implementation incorrectly forced layout during preferred width computation to avoid that, but a quick workaround for that security concern caused the approximate preferred widths to be used as the logical widths. With our new implementation, the logical width is now correctly calculated. Finally, we added partial support for the mpadded element, which is often used to tweak spacing in mathematical formulas.

The screenshot below illustrates the fix for a serious regression with large operators (the summation symbol) as well as improvements in the rendering of stretchy operators (the horizontal braces). Note that the formula contains a hack with a zero-width mpadded element, which used to cause improper spacing (a large gap between the group of a's and the group of b's).

Screenshot: tests 21 and 22 from Mozilla's MathML torture test.
Left: Safari 9.1.1. Right: Safari r203109.

The following screenshot shows how incorrect width computations used to cause excessive spacing after the stretchy radical and slash symbols:

Screenshot: sine of 1 degree, from the Wikipedia article on trigonometric functions.
Top: Safari 9.1.1. Bottom: Safari r203109.

The displaystyle Property

A mathematical formula can be integrated inside a paragraph of text (inline math in TeX terminology) or displayed in its own horizontally centered paragraph (display math in TeX terminology). In the latter case, the formula is in displaystyle and has no restrictions on vertical spacing. In the former case, the layout of the formula is modified a bit to optimize vertical spacing and to integrate better within the surrounding text. The displaystyle property can also be set using the corresponding attribute or can change automatically in subformulas (e.g. in fractions).

In the following screenshot the fix for the large operator regression is obvious but you can also notice that the summation is now slightly different for the definition of a Bézier curve (top) and for the one of a rational Bézier curve (bottom). For example, to save some vertical space in the fractions, the sigma symbol is drawn smaller and the scripts attached to it are moved on its right. However, the script size could still be improved when we implement the scriptlevel property.

Screenshot: definitions of Bézier curves, from the Wikipedia article on Bézier curves.
Left: Safari 9.1.1. Right: Safari r203109.

OpenType MATH Parameters

Many new OpenType MATH features have been implemented following the MathML in HTML5 Implementation Note:

  • Use of the AxisHeight parameter to set the vertical position of fractions, tables and symmetric operators.
  • Use of layout constants for radicals (some of them were already used), scripts and fractions. This improves math spacing and positioning and makes it possible to adjust them according to the value of the displaystyle property discussed in the previous section.
  • Use of the italic correction of large operator glyphs to set the position of subscripts.

The screenshots below illustrate some of these improvements. For the first one, the use of AxisHeight makes it possible to better align the fraction bar with the plus sign. For the second one, the use of layout constants for scripts, as well as the italic correction of the integral, improves the placement of its limits.

Screenshot: generalized continued fraction, from the Wikipedia article on continued fractions.
Top: Safari 9.1.1. Bottom: Safari r203109.
Screenshot: definition of the Gamma function, from the Wikipedia article on the Gamma function.
Top: Safari 9.1.1. Bottom: Safari r203109.

Right-to-left radicals

One of the advantages of the old flexbox-based implementation was that right-to-left layout was available for free. This support has of course been preserved in the new implementation. We also added a simple workaround to mirror radicals using a scale transform, as shown in the screenshot below. However, general glyph-level mirroring is still missing.

Screenshot: test 13 from the MathML torture test (Machrek style).
Top: Safari 9.1.1. Bottom: Safari r203109.

Conclusion

Igalia’s web platform team has been able to follow the MathML in HTML5 Implementation Note in order to significantly improve the rendering of mathematical expressions in WebKit. More work remains to be done, but we will definitely appreciate any feedback that can help improve native MathML support in web engines. We are also excited to continue work and collaboration at the next Web Engines Hackfest!

July 12, 2016 10:00 PM

July 11, 2016

ES6 Feature Complete

WebKit Blog

As of r202125, JavaScriptCore supports all of the new features in the ECMAScript 6 (ES6) language specification. All of the new ES6 features are available in the latest WebKit Nightly and Safari Technology Preview. While we have an implementation of modules, no web-facing API has been finalized yet. ES6 adds a huge number of great features to the JavaScript language and all the JavaScriptCore contributors are excited about the future of JavaScript. We have worked hard to not just implement the new ES6 features but also make sure those features perform exceptionally. Today, however, we want to talk about a different but equally important aspect of our ES6 feature development, maintaining the current performance of existing ES5 websites and applications.

RegExp changes in ES6

ES6 adds an incredible amount of customizability to JavaScript regular expressions. It allows developers to customize how the String.prototype functions handle a RegExp argument via Symbol property methods. For example, String.prototype.match() attempts to use the Symbol.match property on its first argument in order to execute the match[1]. By default, RegExp.prototype has functions for each of the corresponding String.prototype operations (Symbol.replace, Symbol.search, Symbol.match, etc), which can be changed if a developer wants custom behavior. Additionally and more importantly, ES6 specifies how each of the RegExp.prototype functions access or call all the RegExp properties, such as flags, global, and exec. If naively implemented, these new features, while providing more functionality for developers who want to use them, come at a performance cost for webpages that do not.

One of the goals of the JavaScriptCore Virtual Machine (VM) is to ensure that developers don’t pay a cost for features that they don’t use. In the case of RegExp functions, our goal means existing webpages should not see an impact in the performance of the page’s RegExp code. In order to maintain the same performance, JavaScriptCore needs to be able to avoid looking up and executing functions and getters, when possible. For example, if JavaScriptCore can ensure that every call to String.prototype.match() is passed a RegExp object that does not override or modify any of match’s relevant properties (Symbol.match, exec, global, and unicode) it can execute match() with a faster, specialized implementation. With the knowledge that these properties have not changed, the specialized implementation can avoid potentially costly property lookups and can load any information the match function needs directly off of the RegExp object. This also allows the specialized implementation to inline the exec function.
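As a hypothetical illustration of this customization point (not an example from the engine itself), any object with a Symbol.match method can be handed to String.prototype.match:

// Any object with a Symbol.match method can act as the "pattern" argument.
class WordMatcher {
    constructor(word) {
        this.word = word;
    }

    [Symbol.match](str) {
        // A toy matcher: report the word if it occurs anywhere in the string.
        return str.includes(this.word) ? [this.word] : null;
    }
}

"WebKit is now ES6 feature complete".match(new WordMatcher("ES6")); // ["ES6"]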

The JavaScriptCore VM uses a multi-tiered engine design where each tier does progressively more advanced optimizations. The highest tiers of JavaScriptCore make speculations about what the code is doing, which enables optimizations that would otherwise be impossible. More information about the tiered architecture of JavaScriptCore can be found in previous blog posts[2][3]. In order to make speculations about the properties on RegExp objects in the optimizing tiers of our engine, we needed a way to look up those properties without observable effects.

To do this, a new bytecode was added, TryGetById, which acts similarly to the bytecode for a normal property lookup on an identifier, GetById. If the property is either unset or is a normal value (JavaScript has some special properties with special effects, such as [].length, which are excluded) then TryGetById just returns the appropriate value. Otherwise, if the property is an accessor, TryGetById returns our internal GetterSetter object, which is compared to the expected original value. Since accessors have a getter and setter associated with them, GetterSetter objects are an internal object the VM uses to hold those functions. By itself, TryGetById does not solve the performance problems associated with the ES6 behavior since TryGetById does work to prove that the values are what the VM expects. As code moves into the VM’s optimizing compilers, however, information gathered from TryGetById in the lower tiers allows us to remove almost all the checks needed to execute the specialized code.

Structures

Before going into detail on how TryGetById is optimized, it is useful to understand the concept of Structures. Readers that are already familiar with the concept of Structures (also commonly referred to as shapes or maps) may want to skip to the next section. Originating in the 80s as an optimization for the Smalltalk and Self languages[4], Structures are a way to represent memory layout of properties on an object. A naive way to store the properties on an object in JavaScript would be to have every object be a hash map from an identifier to its corresponding value/accessor. While using a hash map does work, it does not take advantage of how objects are commonly used. In JavaScript, it is very common to create objects by calling a constructor, which adds several properties. Most of the time, objects constructed by the same function will share the same set of properties. Structures are a way to take advantage of the similarity between these objects. Consider the following:

function Point(i, j) {
    this.x = i;
    this.y = j;
}

let p1 = new Point(1, 2);
let p2 = new Point(3, 4);

Even though p1 and p2 have different values, they are both initialized in the same way. Namely, when calling new Point(), a new, empty object, o, is allocated. Then the property x is added to that object, followed by the property y. In the above example, when the code assigns this.x to i, the VM starts by looking up property x on the structure of o. Since o is initially an empty object, it will have the initial Structure shared by all normal JavaScript objects with no properties. That structure will not have an entry for property x. Hence, it will need to be added.

Since Structures are immutable, in order to add the x property to the object, the VM will need to replace the object’s Structure with another one that has an entry for x in addition to all of the object’s old properties. This replacement of Structures is what we call a transition. Besides adding a new property, there are many other reasons why a transition might be necessary, such as, deleting properties, changing the attributes of a property (via Object.defineOwnProperty), or changing the prototype.

Whenever performing a Structure transition to add a new property, the VM first looks to see if any other objects, sharing the same Structure, have added a property with the same identifier as the one we are about to add. If such a Structure exists, it is reused. On the other hand, if one does not exist, a new Structure is allocated, copying all the properties on the current Structure before adding the new property to the new Structure. Then, we change the object’s Structure pointer to the new Structure.

In JavaScriptCore, object properties are stored in an array-like object called the Butterfly. The Butterfly is pointed to by the object. The Structure stores a hash map from identifiers to the offset in the Butterfly where that property’s value is stored. Each new property is simply given the next free offset. Once the VM has fully initialized the new Structure, it marks on the old Structure that any other new property transition adding the same identifier should reuse the new Structure.

For example, in the figure above we can see how a Point instance transitions at each line of a call to new Point(3, 4). Each time a new property is added, the Point instance changes its structure and adds a new offset to the Butterfly. Note that Structure 1 is marked with a transition to Structure 2 when a property x is added to it. Similarly, Structure 2 is marked with a transition to Structure 3 when property y is added to it. When a subsequent instance of Point is created and initialized, it will reuse the same Structures 1, 2, and 3, and will have the same Butterfly layout as that of the first Point instance.
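Continuing the Point example, here is a small, hypothetical sketch of how this sharing plays out; the comments describe the transitions discussed above, and note that property insertion order matters:

let p1 = new Point(1, 2); // {} -> {x} -> {x, y}: three Structures are created
let p2 = new Point(3, 4); // the existing transitions are found and reused, so p2
                          // shares p1's Structure and Butterfly layout

let q = {};
q.y = 5;                  // {} -> {y}: starts a different transition chain
q.x = 6;                  // {y} -> {y, x}: a different Structure from p1's, even
                          // though the final set of properties is the same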

Using Structures provides two main benefits. First, it can save a significant amount of memory. If a program allocates thousands of Points, without Structures, each Point needs to hold a hash table of its properties. With Structures, only a constant amount of memory is used to map properties and each object only needs an array of its properties. Additionally, most of the time, objects that have the same properties also have the same prototype. Hence, instead of storing the pointer to the prototype in each of these objects, JavaScriptCore only stores one copy of the prototype pointer on the common Structure shared by these objects. The figure below shows how the prototype chain for a Set instance object might look.

The second, and perhaps more important, benefit of Structures comes from the fact that each object that shares a Structure has the same layout of properties in its Butterfly. JavaScriptCore capitalizes on this fact to perform various optimizations throughout the engine.

Object Property Conditions and Adaptive Watchpoints

Since TryGetById participates in the same inline caching system that GetById uses, which you can read more about in previous blog posts, TryGetById can take advantage of all the optimizations that GetById has. Inline caching is relatively simple when caching a property located on the object itself (also referred to as an own property). After the first access, to ensure all subsequent accesses are fast, all the VM needs to do is repatch a couple of instructions with the Structure and offset of the property. The next time the GetById/TryGetById is executed, the program checks if the new object has the same Structure as the last; if so, the VM can load the property from the cached offset, skipping the hash table lookup on the Structure to find the offset. Verifying that the Structure of an object is one the VM knows about is referred to as a Structure check and is used for many different purposes.

Loading properties from the prototype, as is the case for all of the new ES6 RegExp changes, is a much harder problem. While a Structure check guarantees that any object with that Structure does not have the desired property, it does not guarantee anything about whether the object’s prototype has that property. Since the Safari 9.0 release, our inline caches for prototypes have been made significantly more powerful with the addition of two new concepts, Object Property Conditions and Adaptive Watchpoints. In particular, Object Property Conditions and Adaptive Watchpoints allow JavaScriptCore, in some cases, to completely eliminate all heap loads and almost all Structure checks in optimized code for property accesses.

Object Property Conditions

An Object Property Condition (OPC) is a constraint that validates some heap access when looking up a property on the prototype chain. An Object Property Condition, much like its name suggests, holds an object and some condition on that object. There are three kinds of property conditions used by GetById/TryGetById: Presence, Absence, and Equivalence. Presence and Absence are simple. Presence says that the object in the OPC has a property with a given identifier. Absence, on the other hand, says that the object in the OPC does not have a property with that identifier. The Equivalence condition will be discussed a little bit later. For any GetById/TryGetById on an object there needs to be an OPC for each object in the prototype chain between the base object and the prototype with the property. When no object on the prototype chain has the desired property, there needs to be an OPC on every object in the prototype chain.

function foo() {
    let r = new Set();
    console.log(r.toString());
}

In the example above, if we wanted to quickly load the toString property off a newly created Set instance object, we would need two OPCs and a Structure check. The first OPC says Object.prototype has a Presence condition for the toString property, and the second says Set.prototype has an Absence condition for the toString property. With those conditions and a Structure check, our GetById/TryGetById inline caches can directly load the current value of Object.prototype.toString. Note, a single Structure check is sufficient because the Structure tells us both the prototype of the object and that the object has no toString property.

Adaptive Watchpoints

Just because an OPC was valid when it was created does not mean that it will remain so later. It is entirely possible that a programmer deletes a property that was present before, or adds a property with the same identifier on the prototype chain, intercepting the old property. This is where Adaptive Watchpoints come in. Whenever the VM creates an OPC for some object, it also creates an Adaptive Watchpoint attached to the condition object’s Structure to ensure that the condition remains valid. If an object with the watched object’s Structure ever transitions, the Adaptive Watchpoint is fired. Once fired, an Adaptive Watchpoint checks if its OPC is still valid on the watched object. If the transition is on the watched object and has invalidated the OPC, any code that depends on that condition will be thrown away. Otherwise, the Adaptive Watchpoint relocates itself to the watched object’s new Structure. The figure below illustrates how Adaptive Watchpoints and Object Property Conditions would interact with the prototype chain when looking up the toString property on a Set instance object.

Once some code tiers up to one of JavaScriptCore’s optimizing compilers, the VM starts to utilize the Equivalence conditions mentioned before. An Equivalence condition is essentially a stronger Presence condition. It tells us not only that a property, p, is present on an object, but also that p has a specific value. Knowing that a GetById/TryGetById may return a constant allows our optimizing compilers to make numerous optimizations that would otherwise be impossible.

As the VM tiers up a piece of code to the optimizing compilers, the VM examines the cases each GetById/TryGetById saw in the lower tiers. If the VM finds that every case loads from a Presence condition on the same prototype object, the VM attempts to convert that Presence condition into an Equivalence condition. In order to ensure that an Equivalence condition remains valid, the Adaptive Watchpoint holding the Equivalence condition needs to ensure no property store to the watched object replaces the existing value for the condition’s property. Although there is a significant cost to checking each store to the watched object, in practice, stores to prototype objects are quite rare and the gains in the optimizing compilers tend to be much larger.

Now that we have an understanding of the way JavaScriptCore does property load optimizations, let’s look at why TryGetById can be used to improve the performance of regular expressions. Since most code does not change properties on the RegExp.prototype object, the VM is usually able to convert all the TryGetByIds on the relevant properties into constants via Equivalence conditions. Our optimizing compilers are then able to recognize that all the pre-checks before our specialized code are redundant and eliminate them. In the String.prototype.match example from before, we can eliminate the checks that none of Symbol.match, RegExp.prototype.exec, RegExp.prototype.global, and RegExp.prototype.unicode have been overridden. The resulting optimized code need only perform a single Structure check on the argument’s RegExp object, and can then go straight to the fast, specialized code.
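As a hypothetical sketch of the kind of page code that would break these assumptions, replacing exec on RegExp.prototype is exactly the sort of store that invalidates an Equivalence condition and sends matching back to the generic path:

// Typical pages never touch RegExp.prototype, so the Equivalence conditions
// hold and String.prototype.match stays on the specialized fast path.
"some text".match(/text/);

// This store replaces the value guarded by an Equivalence condition on
// RegExp.prototype, so any optimized code compiled under that assumption
// has to be thrown away.
RegExp.prototype.exec = function (string) {
    return null; // a deliberately useless custom exec
};

"some text".match(/text/); // now takes the generic, slower path (and returns null here)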

Conclusion

The JavaScriptCore team is very excited about all of the new features in ES6 and we think developers will get a lot of mileage out of them. Moving forward, we will continue to make those features faster and we plan on putting out more blog posts on our progress as we go. For now, you can check out the current implementation of our ES6 features in WebKit nightly or Safari Technology Preview. As always, let us know what you think and let us know about any issues or bugs you experience.

By Keith Miller at July 11, 2016 04:00 PM

July 08, 2016

Frédéric Wang: MathML Refactoring in WebKit

Igalia WebKit

Introduction

If you follow WebKit developments, you are certainly aware that Igalia has been working on WebKit’s MathML implementation for some time. More recently, effort has been made to write a clean implementation addressing issues reported by WebKit reviewers in the past. After joining Igalia in March, I have been in charge of getting this work reviewed and merged into WebKit’s development branch. In the past four months, we have been successful in upstreaming the first phase of the refactoring and the work accomplished is described in this blog post.

Screenshot: MathML torture tests with the Latin Modern Math font.
Left: Safari 9.1.1. Right: current MathML branch.

Note that the focus was on code refactoring so the improvement may not be obvious for non-developers. Nevertheless many issues have already been fixed as a consequence of that work: math italic, position of scripts, stretchy and large operators, rendering update and more. More importantly, this preliminary step opens the way for beautiful math rendering based on TeX rules and the OpenType MATH table. Rendering improvements and implementation of new features have already started in the next phase of the refactoring, so stay tuned…

Design Issues

As explained in a previous report, the main design issues in the flexbox-based implementation released in 2013 can essentially be summarized in three points:

  1. WebKit’s code to stretch operators was not efficient at all and was limited to some basic fences buildable via Unicode characters.
  2. WebKit’s MathML code violated many layout invariants, making the code unreliable.
  3. WebKit’s MathML code relied heavily on the C++ renderer classes for flexboxes and had to manage too many anonymous renderers.

For the first point, the performance issue had been fixed by Igalia developers right after the initial feedback from WebKit developers, and we improved that again during our refactoring. Partial support for the OpenType MATH table was added during my crowdfunding project and made it possible to stretch more operators with the help of math fonts. For the second point, the main issue was also fixed right after the initial feedback. However, one could still have some doubts regarding the layout steps, given the complexity implied by the third point. That last issue remained problematic, and addressing it was the main achievement of our refactoring.

Technically, the dependence on flexbox is unnecessary and the implementation actually only used a limited set of flexbox features. Thus executing the whole flexbox code was overkill. It can also be a burden for people working on other parts of the layout code. For example, Javi Fernández has worked on improving box alignment in the past, and he had a hard time fixing the MathML code impacted by his changes. This is probably the cause of the bad position of the summation symbol that can be seen in the screenshot above.

From the layout perspective, most of the rendering logic was implemented in the flexbox classes and the MathML “renderer” classes were really just managing the creation and update of anonymous nodes and style. Although this sounds like good code reuse, it actually made it impossible to understand how and when layout steps happen, or to add more advanced features. The new implementation replaced this manipulation of the render tree with simple arithmetic calculations on box metrics, which is safer and more reliable. Also, complex renderers such as RenderMathMLScripts or RenderMathMLRoot actually achieve better rendering quality with less code!

As an example of the complexity, RenderMathMLUnderOver can behave as a RenderMathMLScripts in some situations, so we really want the former class to reuse the latter. As we will see below, the old implementations of the two renderers were quite different: RenderMathMLUnderOver only relied on setting column direction in the user agent stylesheet, while RenderMathMLScripts created complex render tree structures with anonymous style. Hence it seemed difficult to share code between the two cases or to handle DOM changes that cause a move from one case to the other. With our new implementation, this is reduced to simple C++ inheritance.

When I started to work on WebKit some years ago, I made the mistake of continuing with the existing approach. The implementation of multiscripts or automatic italic mathvariant added more anonymous objects and made the situation even worse. After the end of my crowdfunding project, Alex Castro did more cleanup and tried to implement important features such as displaystyle but he also soon realized that it was too hard to work with the current code base…

Layout Refactoring

In order to solve the issues discussed in the previous section, Javi and Alex worked on a new MathML branch where the first step was to remove the inheritance from the flexbox layout classes. During the Web Engines Hackfest 2015, I collaborated with Igalia’s web platform team to continue the work on this branch. In a second step, we rewrote many MathML renderer functions so that they stop creating anonymous nodes or style. We obtained very encouraging results: the implementation looked much simpler and much more understandable!

Alex announced the initial plan on the webkit-dev mailing list. He started opening bugs and attaching patches to merge the first step. However, that step still required much of the flexbox logic and so made the code hard for reviewers to understand. Hence, when I joined Igalia four months ago, Alex asked me to see how to reorganize the patches so that the two initial steps could be submitted in one go. This corresponds to the first phase mentioned in the introduction. As indicated on the wiki page, the layout refactoring consisted in rewriting the following member functions of each renderer class:

  • computePreferredLogicalWidths: calculate preferred widths, based on the preferred widths of child renderers.
  • layoutBlock: set final position and size of child renderers.
  • firstLineBaseline: calculate the ascent of the renderer.
  • paint (optional): perform special painting such as fraction bars.

Refactored renderers no longer rely on any flexbox code nor anonymous renderers and the functions mentioned above essentially perform arithmetic computations. By reading the code, one can be sure that we follow standard layout rules and that we do not perform unnecessary reflow. Moreover, the rules specific to math rendering are only located in the MathML renderers and can be better understood. Details for each class are provided in the next subsections. After all the layout functions were rewritten and the code managing the render tree structure removed, we were able to make the RenderMathMLBlock class inherit from RenderBlock instead of RenderFlexibleBox. Many of the bugs could then be immediately closed or otherwise fixed with small follow-up patches.

Spacing

RenderMathMLSpace is a simple class inserting blank boxes for adjusting spacing of math formulas. Obviously, we do not need any of the complexity of flexbox so it was straightforward to write the layout functions.

Example: a large space between 3 and x.

Grouping

RenderMathMLRow performs rendering of a row of math items. Since WebKit does not support linebreaking in MathML at the moment, this is just putting child boxes on a same baseline. One specificity is that some operators can be stretched vertically and so their width may depend on their height.

Example: a row containing a stretched brace, a fraction, and a scripted element.

Again, flexbox features are useless here. With the old code, it was not clear whether we were violating the CSS invariant regarding preferred and logical widths, or which kind of relayout or render tree changes would happen when doing the stretch call. By properly implementing the layout functions mentioned previously, all of this became much more trustworthy.

Fractions

RenderMathMLFraction draws a fraction with numerator and denominator.

Example: a simple fraction, (x+1)/(y+2).

This used to be implemented using a column direction for the fraction element. Numerator and denominator were wrapped into anonymous nodes with additional style to leave space for the fraction bar and to adjust the horizontal alignments.

RenderMathMLFraction (flexbox with column direction)
  RenderMathMLBlock (anonymous flexbox)
    RenderMathMLRow (numerator)
      ...
  RenderMathMLBlock (anonymous flexbox)
    RenderMathMLRow (denominator)
      ...

It was relatively easy to implement this without any anonymous nodes, and again the use of flexbox did not seem justified. For example, to calculate the preferred width we just take the maximum of the preferred widths of the numerator and denominator. For the layout, the calculation of the logical width is similar and we calculate the horizontal coordinates of the numerator and denominator so that their centers are aligned. Vertical metrics are similarly calculated from the vertical metrics of the numerator and denominator. During that step, we also fixed some bugs with the linethickness attribute and added support for some OpenType MATH table constants.

Scripts above and below

RenderMathMLUnderOver is used to attach some scripts above and below a base. Each child can itself be a horizontal stretchy operator.

Example: a base with a stretchy arrow over it.

This was implemented in the user agent stylesheet by using flexboxes with column direction for the corresponding MathML elements, and the C++ class had additional rules to fire the stretching. So the problems and solutions for this class were essentially a mix of the cases of RenderMathMLFraction and RenderMathMLRow we just discussed.

Subscripts and Superscripts

RenderMathMLScripts is used for a base with an arbitrary number of scripts. All the scripts can have different positions (pre, post, sub, super) and metrics (width, ascent and descent). We must avoid collisions and take care of horizontal and vertical alignments.

Example: a base with pre- and post-scripts.

The old code used a complex render tree with additional style to achieve the best possible result. However, the quality was still bad, as you can see for the script attached to the integral in the screenshot above. Managing the render tree was a nightmare: just to give an idea, additional anonymous nodes and style were used to allow horizontal and vertical adjustments (similar to RenderMathMLFraction above), and prescripts had a negative order property so that they were positioned before the base.

RenderMathMLScripts
    Base Wrapper (anonymous flexbox)
      RenderMathMLRow (base)
        ...
    SubSupPair Wrapper (anonymous flexbox with column direction)
      RenderMathMLRow (post-subscript)
        ...
      RenderMathMLRow (post-superscript)
        ...
    SubSupPair Wrapper (anonymous flexbox with column direction)
      RenderMathMLRow (post-subscript)
        ...
      RenderMathMLRow (post-superscript)
        ...
    ... (more postscripts)
    RenderMathMLBlock (prescripts separator)
    SubSupPair Wrapper (anonymous flexbox with column direction and order -1)
      RenderMathMLRow (pre-subscript)
        ...
      RenderMathMLRow (pre-superscript)
        ...
    SubSupPair Wrapper (anonymous flexbox with column direction and order -1)
      RenderMathMLRow (pre-subscript)
        ...
      RenderMathMLRow (pre-superscript)
        ...
    ... (more prescripts)

Rules from TeX and the OpenType MATH table are relatively complex and we decided to implement them directly in the new refactoring as otherwise it was impossible to get decent quality. The code is still complex but we now have clear rules, we only perform simple calculations and the render tree structure matches the DOM tree.

“Enclosing” Notations

RenderMathMLMenclose is a row of math items with some additional notations. Gurpreet Kaur implemented this element two years ago but she followed the same approach, combining anonymous nodes and style for some simple notations and special painting for others.

Example: x+1 with circle and strike notations.

During the refactoring, the code has been completely rewritten so that RenderMathMLMenclose is now essentially a derived class of RenderMathMLRow with the measuring and painting functions adjusted to take into account the additional notations. During that refactoring, we also removed support for unused radical notation, which was implemented using an anonymous RenderMathMLSquareRoot (see Radicals section below).

Helper Classes for Operators

The RenderMathMLOperator class is used for math operators. It was quite a complex class and we decided to extract from it two features that are unrelated to layout:

  • The handling of the operator dictionary properties (spacing, stretchiness and so forth).
  • The shaping and stretching of operator glyphs, now performed by the MathOperator helper class.

The remaining code was indeed the real layout part, but the mess with anonymous nodes and style was only removed later (see Text Classes below). Although it seems we just needed to move the code out of RenderMathMLOperator into those new classes, the case of MathOperator was particularly difficult. We had to split the effort into several small steps to make review possible and also fixed many issues due to the entanglement and confusion of these three different features of the RenderMathMLOperator class… The work done for MathOperator actually improved the rendering of stretchy operators, as you can see for the horizontal braces in the screenshot above.

Radicals

RenderMathMLRoot is used for square roots or arbitrary N-th roots. Many of the TeX and OpenType MATH table rules were already used by the old implementation with anonymous nodes and style. However, there were bugs that were difficult to fix, related to zooming, child removal, or style changes, due to the management of the anonymous RenderMathMLOperator used to draw the radical sign.

Screenshot: square and cube roots of x+1

The old implementation actually had two classes for the square and general cases (RenderMathMLSquareRoot and RenderMathMLRoot). The usual technique with various anonymous wrappers and style was used. After the refactoring, we were able to merge everything into a single RenderMathMLRoot class. Because the square root behaves as an mrow, we also made that class derive from RenderMathMLRow to reuse as much code as possible. Here is how the render trees used to look:

RenderMathMLSquareRoot
  RenderMathMLBlock (anonymous used for metric adjustments)
    RenderMathMLRadicalOperator (anonymous used for the radical symbol)
      ...
  RenderMathMLRootWrapper (anonymous used for the children)
    RenderMathMLRow (child 1)
      ...
    RenderMathMLRow (child 2)
      ...
    ...
    RenderMathMLRow (child N)
      ...

RenderMathMLRoot
  RenderMathMLRootWrapper (anonymous for the index)
    ...
  RenderMathMLBlock (anonymous used for metric adjustments)
     RenderMathMLRadicalOperator (anonymous used for the radical symbol)
       ...
  RenderMathMLRootWrapper (anonymous for the base)
    ...

Again, we rewrote the implementation using only simple box positioning. The difficult part was to get rid of the anonymous RenderMathMLRadicalOperator used to draw the radical symbol. This class was derived from RenderMathMLOperator and extended it with some fallback drawing when math fonts are not available. After having extracted stretchy operator shaping from RenderMathMLOperator, it became possible to use the MathOperator helper class to draw the radical symbol. We implemented the fallback for missing math fonts the same way Gecko does: use a scale transform to stretch the base glyph for U+221A SQUARE ROOT. As a bonus, we used such a transform to implement glyph mirroring, as required to draw right-to-left radicals in some Arabic mathematical notations.
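To illustrate the idea, here is a sketch using the 2D canvas API (not WebKit's actual painting code; the function and its parameters are hypothetical) of stretching the base glyph with a scale transform and mirroring it for RTL radicals:

// Illustrative sketch only; WebKit's real code paints into its own
// graphics context, not a <canvas>.
function paintRadicalFallback(ctx, x, baseline, targetHeight, rtl) {
  const glyph = "\u221A";                      // U+221A SQUARE ROOT
  const m = ctx.measureText(glyph);
  const glyphHeight = m.actualBoundingBoxAscent + m.actualBoundingBoxDescent;

  ctx.save();
  ctx.translate(x, baseline);
  // The vertical scale stretches the radical to the height it must cover;
  // a negative horizontal scale mirrors it for right-to-left layout.
  ctx.scale(rtl ? -1 : 1, targetHeight / glyphHeight);
  ctx.fillText(glyph, 0, 0);
  ctx.restore();
}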

Text Classes

These classes are containers for math text such as variables or operators. There is a generic RenderMathMLToken class and a derived class RenderMathMLOperator adding features specific to operators such as spacing, dictionary properties and stretching. Anonymous wrappers and style were used to implement automatic italic mathvariant or operator spacing. The RenderText child of RenderMathMLOperator was (re)built as an anonymous text node so that it was possible to convert U+002D HYPHEN-MINUS into U+2212 MINUS SIGN or to provide some text for the anonymous operators created by RenderMathMLFenced (see the Unchanged Classes section).

RenderMathMLToken (e.g. mi element)
  RenderMathMLBlock (anonymous flexbox used to apply CSS italic)
    RenderBlock (anonymous created elsewhere to honor CSS rules)
      RenderText
        text run "x"

RenderMathMLOperator (mo element)
  RenderMathMLBlock (anonymous flexbox used for spacing)
    RenderBlock (anonymous created elsewhere to honor CSS rules)
      RenderText (anonymous destroyed and built again)
        text run "−"

We did a big refactoring to remove all the anonymous nodes created by the MathML renderer classes. Just like for MathOperator, we had to be careful and submit various small pieces, as the text rendering was quite sensitive to code changes.

The simplified operator spacing that was supported by WebKit was easy to implement with the new approach. To do automatic italic mathvariant, we modified the paint function to use Mathematical Alphanumeric Symbols instead of CSS italic, as you can notice for the variables displayed in the screenshot above. Hence we could remove the anonymous RenderMathMLBlock wrapper.
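For reference, the character substitution itself is straightforward; here is a sketch in JavaScript (WebKit performs this in its C++ paint code, so this is only an illustration of the mapping):

// Map a basic Latin letter to its Mathematical Italic counterpart, e.g.
// "x" (U+0078) becomes "𝑥" (U+1D465). The letter h is special-cased because
// U+1D455 is a reserved hole: it maps to U+210E PLANCK CONSTANT instead.
function toMathItalic(ch) {
  const c = ch.codePointAt(0);
  if (c >= 0x41 && c <= 0x5A)                   // A-Z
    return String.fromCodePoint(0x1D434 + c - 0x41);
  if (c === 0x68)                               // h
    return "\u210E";
  if (c >= 0x61 && c <= 0x7A)                   // a-z
    return String.fromCodePoint(0x1D44E + c - 0x61);
  return ch;                                    // anything else is unchanged
}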

The use of an anonymous node for the text prevented it from appearing in the dumped render tree of layout tests and also required some hacks in the accessibility code to expose that text. In order to address the cases of the minus sign and of mfenced operators, we decided to use our new MathOperator class again. Indeed, MathOperator is actually also able to draw unstretched operators made of a single character, and this works for the minus sign and for the mfenced operators used in practice.

Unchanged Classes

Two classes have not been modified, as such modifications were not needed to remove the dependency on RenderFlexibleBox:

  • RenderMathMLFenced is used for an mrow-like element that is defined in the MathML specification as strictly equivalent to constructions with rows and operators. It is implemented as a derived class of RenderMathMLRow and creates anonymous RenderMathMLOperators. This is the only remaining class that modifies the render tree structure. Note that prominent MathML websites and generators do not use the mfenced element, so it is not a big concern.

  • RenderMathMLTable is used for table layout. It is just derived from RenderTable, not RenderFlexibleBox. We did not change anything for now, but we considered creating our own implementation in order to make our code independent of HTML tables, to support MathML-specific table features and to make it better integrated with the rest of the MathML code.

Accessibility

Even though our main focus was on rendering, the changes we made also had an impact on the MathML accessibility code. Indeed, the accessibility tree is generated from the MathML renderer classes: since we changed the latter during the refactoring, we also had to adjust the accessibility code. Fortunately, we are lucky to have Joanmarie Diggs on our team, and she was able to provide some help here.

First, the accessibility code exposes the linethickness of fractions to implement Apple’s AXMathLineThickness attribute. In practice, what really matters is whether the linethickness is zero or not (e.g. a binomial coefficient vs. the Legendre symbol). Apple’s unit test seemed to expose the ratio between the actual thickness and the default thickness, but the accessibility code really just reads the actual thickness calculated by RenderMathMLFraction. Our fix and improvement for linethickness made Apple’s unit test fail, so we had to adjust RenderMathMLFraction to expose the value expected by that test.

In general, the accessibility code does not care about anonymous nodes created for layout purposes, and there was some code to avoid exposing them in the accessibility tree. So removing all the anonymous nodes during the layout refactoring was actually a good and safe thing to do. There were some helper functions to implement Apple’s AXMathRootRadicand and AXMathRootIndex attributes that had to be adjusted, though. These functions used to do some work to skip the anonymous wrappers, and we were actually able to simplify them.

There was also some specific code for the RenderMathMLOperators and their anonymous RenderText that was necessary to expose the text content. Actually, there was an old bug in the accessibility code and the anonymous RenderMathMLOperators created by mfenced were not correctly exposed. The unit test we had for mfenced operators was only checking the text content, but it was still passing, and so the regression had never been detected before. After the layout refactoring we removed the anonymous RenderText of mfenced operators and so broke that test. We thus spent some time fixing the RenderMathMLOperator code. Essentially, we removed all the old hacks and only left specific handling for mfenced operators. We also used this opportunity to improve and extend our MathML accessibility tests.

Finally, the MathML accessibility code was directly implemented in the generic AccessibilityRenderObject class. There were some functions to access math nodes and properties, but also special cases scattered all over the code (anonymous boxes, mfenced operators, math roles, etc.). In order to facilitate future work and maintenance we decided to move all the MathML code into a new AccessibilityMathMLElement class. Hence the work implied by the layout refactoring actually encouraged us to improve the organization and testing of our accessibility code!

Conclusion

In the past four months, Igalia’s web platform team has successfully upstreamed the refactoring of WebKit’s MathML renderer classes, and we are now very confident about the quality of the layout code. In addition to the people mentioned above, I would personally like to thank everybody who helped with this work. More specifically, I am very grateful to the people from Igalia (Martin Robinson, Sergio Villar and Manuel Rego) and Apple (Brent Fulgham and Darin Adler) who have spent some time reviewing patches. As a nice side effect of this work, mathematical formulas look better and the accessibility code has been improved. More is happening in the next two phases. We are looking forward to continuing to implement Web standards and to collaborating with browser vendors at the next Web Engines Hackfest!

July 08, 2016 10:00 PM

July 06, 2016

Release Notes for Safari Technology Preview Release 8

WebKit Blog

Safari Technology Preview Release 8 is now available for download for both macOS Sierra beta 2 and OS X El Capitan. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. Release 8 of Safari Technology Preview covers WebKit revisions 202085–202612.

JavaScript

  • Achieved 100% ECMAScript 2015 (ES6) support by adding back Symbol.isConcatSpreadable (r202125)
  • Unified the handling of regular expression character escapes \w and \W and word assertions \b and \B (r202490)
  • Implemented isFinite and isNaN in JavaScript for a performance benefit (r202413)
  • Fixed a crash when a finally clause is used inside a for-in loop (r202608)

CSS

  • Updated :default CSS pseudo-class to match checkbox and radio inputs with a checked attribute (r202245)
  • Fixed an issue where the :hover CSS pseudo-class would match after the cursor left the element (r202324)
  • Changed :in-range and :out-of-range CSS pseudo-classes to not match inputs that do not have range limitations (r202143)
  • Changed :in-range and :out-of-range CSS pseudo-classes to not match disabled or readonly inputs (r202159)
  • Fixed border rendering for elements with min-width: -webkit-fill-available and zero available width (r202103)
  • Fixed style updates applied by :host() when changing the CSS class name of a shadow-host element (r202227)

Web APIs

  • Added fallback mechanisms for stretching and mirroring radical symbols in MathML (r202161)
  • Used the MathOperator to handle some non-stretchy operators in MathML (r202271)
  • Set the upper limit for the size or number of pieces of stretchy operators in MathML (r202489)
  • Allowed phrasing content to be accepted in <mo> elements (r202572)
  • Fixed a bug retrieving Blobs from IndexedDB using cursors in WebKit2 sandboxing (r202414)
  • Changed synchronous event tracking to each event type instead of each event sequence (r202408)

Apple Pay

  • Moved Apple Pay code to the open source repository (r202298, r202309, r202310, r202311, r202312, r202432, r202345, r202346, r202444)
  • Moved WebKit2 Apple Pay code to the open source repository (r202432)
  • Moved the WebKit1 Apple Pay code to the open source repository (r202346)
  • Added a logged error message when passing an invalid API version to ApplePaySession constructor (r202499)
  • Moved the user gesture requirement to the ApplePaySession constructor (r202584)
  • Fixed the Apple Pay total amount error message to trigger for the correct limits (r202582)
  • Added shippingType to the list of valid Apple Pay payment request properties (r202409)
  • Added an exception to Apple Pay when the shipping method has an invalid amount (r202341, r202342)
  • Fixed discounted Apple Pay line items to display positive amounts (r202504)

Web Inspector

  • Changed console.profile to use the new Sampling Profiler (r202234)
  • Fixed an infrequent but persistent crash that could occur when closing Web Inspector (r202492, r202515)
  • Prevented automatic timeline recording when Web Inspector is not visible (r202352, r202353)
  • Delayed the first auto-capture heap snapshot during page reloads until after the page performs its first navigation (r202384)
  • Fixed showing impossible values for underflowed bmalloc sizes in the Memory Timeline (r202394)
  • Uncoupled the Quick Console selection updates from the UI to improve testing (r202566)
  • Fixed selectElement.options entries in console for named indexes beyond the collection length (r202568)
  • Fixed the Snapshot List to show the total size and the total live size (r202253)
  • Improved JavaScript Heap Snapshot clean up handling (r202383)
  • Ensured localStorage is updated when modifying sessionStorage (r202529)
  • Fixed broken text search in resources with <CR> (r202498)

Media

  • Fixed picture-in-picture placeholder visibility when the controls attribute is removed (r202509)
  • Fixed the volume slider for web video playback controls for right-to-left content (r202183)
  • Implemented support for “replacement” codec (r202599)
  • Fixed an issue where media controls stop working after exiting picture-in-picture mode (r202333)
  • Fixed media elements to not lose playback controls when muted by a user gesture (r202459)
  • Fixed the playback controls element used when playing multiple items in a page (r202425)

Rendering

  • Fixed the position for composition underlines in right-to-left content (r202250)
  • Fixed <attachment> elements jumping around when subtitle text changes slightly (r202117)
  • Fixed style invalidation for :active when the activated node has no renderer (r202517)
  • Flipped the behavior for the Forward and Back keyboard shortcuts in right-to-left content (r202129)
  • Fixed the placement of <select> popup menus in right-to-left content (r202112)
  • Corrected the behavior of the :active state of elements when focus changes (r202470)

Accessibility

  • Added support for CSS4 :focus-within pseudo-class (r202358)
  • Exposed anonymous RenderMathMLOperators to the accessibility tree (r202497)
  • Fixed the ARIA role attribute for label elements (r202516)

Security

  • Prevented file scheme access to a resource on a different volume (r202186)
  • Set CORS preflight with a non-200 response to be a preflight failure (r202162)
  • Allowed * to match the originating page’s scheme for Content Security Policy (r202155)
  • Changed security origin inheritance check to ignore case (r202174)

Bug Fixes

  • Fixed parent document scrolling when a focus event is dispatched in an <iframe> (r202292)
  • Fixed an issue causing Google Maps transit schedule explorer to initially load blank (r202104)
  • Changed HTMLElement and SVGElement to implement GlobalEventHandlers (r202539)
  • Implemented a constructor for TouchEvent (r202178)
  • Fixed double-tap to zoom on Yahoo Finance (r202354)
  • Implemented support for Vary:Cookie validation in private browsing (r202089)
  • Fixed playback controls on Vimeo.com videos (r202455)
  • Improved resource handling when navigating by discarding decoded images that are only “in use” by the page cache, not actual pages (r202231)

By Jon Davis at July 06, 2016 05:15 PM

July 01, 2016

Improving Color on the Web

WebKit Blog

The past few years have seen a dramatic improvement in display technology. First it was the upgrade to higher-resolution screens, starting with mobile devices and then desktops and laptops. Web developers had to understand high-DPI and know how to implement page designs that used this extra resolution. The next revolutionary improvement in displays is happening now: better color reproduction. Here I’ll explain what that means, and how you, the Web developer, can detect such displays and provide a better experience for your users.

Let’s call the typical computer monitor, the type that you’ve been using for more than a decade, an sRGB display. Apple’s recent hardware, including the late-2015 Retina iMac and early-2016 iPad Pro, are able to show more colors than an sRGB display. Such displays are called wide-gamut (I’ll explain what sRGB and gamut are below).

Why is this useful? A wide-gamut system will often provide a more accurate representation of the original color. For example, my colleague @hober has some extremely funky shoes.

hober's bright orange shoes

Unfortunately, what you’re seeing above doesn’t really convey just how funky the shoes are! The problem is that the shoe fabric uses colors that cannot be represented by an sRGB display. The camera that took the photograph (Sony a6300) has a sensor that is able to capture more accurate color information, and that data is included in the original file, but the display can’t show it. Here’s a version of the photo where every pixel that used a color outside the range of a typical display has been replaced with a light blue:

hober's bright orange shoes, but with all the out-of-gamut pixels replaced by blue

As you can see, the colors on the shoe fabric and much of the grass are outside the range of the sRGB display. In fact, fewer than half of the pixels in the photograph are accurately represented. As a Web developer, you should be concerned by this. Suppose you’re an online store selling these shoes. Your customers are not going to know exactly what color they’ve ordered, and might be surprised when their purchase arrives.

This problem is lessened with a wide-gamut display. If you have one of the devices mentioned above, or a similar device, here is a version of the photograph that will show you more colors:

hober's bright orange shoes, this time with a color profile embedded

For those of you on a wide-gamut display, you’ll see the shoes are a brighter orange, and there is slightly more variation in the green grass. Unfortunately, if you don’t have a wide-gamut display, you’re probably seeing something very close to the first image above. In this case, the best I can offer is that tinted image that highlights which parts of the image you’re missing out on.

Anyway, this is great news! Wide-gamut displays are more vibrant, and provide a more accurate depiction of reality. Obviously you’re going to want to make sure you serve your users with imagery that makes use of this technology.

Here’s another example, this time with a generated image. To users on an sRGB display there is a uniform red square below. However, it’s a bit of a trick. There are actually two different shades of red in that image, one of which is only distinct on wide-gamut displays. On such a display you’ll see a faint WebKit logo inside the red square.

A red square with a faint WebKit logo

Sometimes the difference between a normal and wide-gamut image is subtle. Sometimes it is more dramatic.

Demo

Take a look at this page of examples that lets you swap between different versions of images to compare, as well as shows you where in the image there are pixels outside of the range of an sRGB display. There’s also a slightly more interactive version that shows you the different images side by side.

Definitions

Here’s a quick explainer for terms you often hear when discussing color.

Color Space: A color space is an environment in which you can define and compare colors. There are a few types of color spaces, each using a different set of parameters to describe the colors. For example, a greyscale color space only has one parameter which controls the level of brightness going from black to white. You’re probably familiar with RGB-type color spaces that use red, green, and blue as parameters representing the colors of the lights in a display that add together to make a final color. Print workflows often use CMYK-type color spaces, where cyan, magenta, yellow, and black are the parameters representing the colors of the inks.

Color Profile: In 1993, a group of vendors formed the ICC to define a standard that described color spaces. A color profile is data that defines what the color space of a device is, and can be used to convert between different spaces. The common ones are given names, such as sRGB (or more formally, IEC 61966-2-1). My use of sRGB above now makes a bit more sense: such a display is able to show colors corresponding to the sRGB color space. Color profile data can be written to a file, or embedded directly into an image, which allows a computer to understand what the color values in the image actually mean.

Gamut: A gamut is the range of colors a device can process, or that a color space can define. In the case of a computer display, a gamut is all the colors it can accurately show. Visualizing a gamut is a bit complicated, but it is slightly similar to the color pickers you see in design software. Imagine the range of all possible colors applied to a surface, with the primary colors at different extremes. As you move towards the red extreme, the color gets more red. Moving towards blue, more blue, etc. A gamut is the area on this imaginary surface that represents as far as the device can go in all directions. You might see a gamut represented with a diagram such as this, where the colored shape is showing what a typical human eye can see, and the white triangle shows the range of the sRGB color space (as you notice, it’s a lot smaller than what the eye could detect).

Diagram showing the sRGB gamut

Image source: Wikipedia

These diagrams can be a bit misleading because they are showing you colors that you can obviously see, and then telling you that a gamut doesn’t include those colors. However, they do give you a nice way to compare the size of different gamuts. Note also that here you’re seeing a two-dimensional representation of a color gamut surface, when in reality it is a three- or four-dimensional space (this is all pretty complicated – we’re just trying to give a simple introduction).

Wide-gamut: This is an informal term that the industry is using to describe devices or color spaces that have a gamut larger than sRGB, which is the gamut that nearly all computer displays have used for the last decade or so. Wider gamut displays have been around for a while, but were mostly limited to professional use. Now they are becoming available to regular consumers, which means that there are more colors available. Sometimes wide-gamut is also referred to as extended color. The modern Apple displays support the Display P3 color space, which is about 25% wider than sRGB.

Color Depth: Computers can use varying levels of precision, or depth, to represent a color. This is not the same as gamut, which describes a range of colors. Rather it is the number of distinct colors within the gamut that can be defined. Web developers are probably familiar with the CSS rgb() syntax, which takes 0-255 values for red, green and blue. That is a depth of 8 bits per channel, for a total of 16,777,216 different colors. If you add in alpha/opacity, you can store the color in 32 bits. If you use a depth of 8 bits per channel, you can only ever represent that same number of colors, no matter what color space you’re using – it would just be a different set of colors. If you chose 16 bits per channel, you would have a deeper space, and could represent more colors within the gamut. A good example of this is when you draw a gradient between similar colors: you can see banding, where the computer and display don’t have the depth to show a smooth range of colors between the end points.

Here is an example which shows how a lack of color depth produces banding, even though all the colors between the end points are within the gamut (it’s an artificial example to exaggerate the effect).

A smaller color depth might demonstrate distinct jumps between similar colors.

A light red to dark red gradient with distinct bands

With a larger color depth, the jumps are much less noticeable.

A light red to dark red gradient without distinct bands
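To put rough numbers on that banding effect, here is a small back-of-the-envelope calculation in plain JavaScript (not tied to any browser API; the gradient width and red values are made up for illustration):

// With 8 bits per channel there are only 256 distinct values per channel,
// so a wide gradient between two nearby reds must repeat values as bands.
const bitsPerChannel = 8;
const valuesPerChannel = 2 ** bitsPerChannel;        // 256
const totalColors = valuesPerChannel ** 3;           // 16,777,216

const gradientWidth = 1000;                          // pixels (example)
const startRed = 200, endRed = 232;                  // two similar reds
const distinctSteps = endRed - startRed;             // only 32 distinct reds
const bandWidth = gradientWidth / distinctSteps;     // ≈ 31px per band

console.log(totalColors, distinctSteps, bandWidth);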

With that introduction, let’s get into the details of color on the Web and recent improvements in WebKit to help you develop content that is more aware of color. We also introduce some features we’ve proposed to the W3C that will allow you to take even more advantage of this new display technology.

Colors on the Web

The Web has often struggled to handle colors correctly. I’m sure there are some readers out there who painfully remember Web-safe colors! While we’ve moved on from that, we still have limitations, such as HTML and CSS having been defined to work only in the sRGB color space. Just like the example of hober‘s shoes above, this means there are many colors that your CSS, images, and canvas are unable to represent. This is a problem if you’re trying to show your family the spring flowers blooming in your garden, or shopping for a bright red sports car to help with your mid-life crisis.

So when we showed the photograph of the shoes above, an sRGB display squished all the colors outside the sRGB gamut into colors that it could show. But a display with a wider gamut such as Display P3 didn’t need to squish into sRGB. Here’s a visual representation showing the difference between sRGB and the Display P3 color space.

sRGB P3 gamut comparison

The colored triangle is the sRGB space. The white triangle shows the coverage of the Display P3 space, significantly bigger than sRGB, particularly in the more saturated reds, yellows, purples and greens. The black outline shows the ability of the typical human eye.

Remember the red square with the faint WebKit logo? That was generated by creating an image in the Display P3 color space, filling it with 100% red, rgb(255, 0, 0), and then painting the logo in a slightly different red, rgb(241, 0, 0). On an sRGB display, you can’t see the logo, because all the red values above 241 in Display P3 are beyond the highest red in sRGB, so the 241 red and the 255 red end up as the same color.

Note: I saw a bit of confusion about this on Twitter, so let’s try an alternate explanation. Basically, all the fully red values between 241/255 and 255/255 in the Display P3 color space are indistinguishable when shown in sRGB. This doesn’t mean that a 241 red from Display P3 is the same as a 255 red in sRGB – unfortunately, it’s not quite that simple and I don’t want to get too detailed in this introductory article. For those who are interested, on macOS there is an app called ColorSync Utility that allows you to convert between color spaces in different ways, as well as comparing gamuts.

So now you understand why you should be aware of color, and that you should use this technology to give users a better experience. That’s the background — let’s talk about what this means for WebKit.

Color-matching Images

We mentioned above that the Web is defined to use sRGB. WebKit/Safari on Mac has operated in this color space for a number of years now, meaning you’d get consistent colors across different displays (at the time of writing, most other browser engines just operate in what’s called the device color space, which means they don’t process the color values before passing them on to the display hardware).

WebKit color-matches all images on both iOS and macOS. This means that if the image has a color profile, we will make sure the colors in the image are accurately represented on the display, whether it is normal or wide gamut. This is useful since many digital cameras don’t use sRGB in their raw format, so simply interpreting the red, green and blue values as such is unlikely to produce the correct color. Typically, you won’t have to do anything to get this color-matching. Nearly all image processing software allows you to tag an image with a color profile, and many do it by default.

With the examples linked above, you’re seeing this color-matching in action on Safari from OS X 10.11.3 and above, as well as iOS 9.3 and above (Retina devices). All I had to do was make sure that the images had been tagged with the appropriate color profile.

If an image doesn’t have a tagged profile, WebKit assumes that it is sRGB. This makes it easy for your generated artwork, such as border and background images, to match what you have defined in CSS. That is, a CSS value of rgb(255, 0, 0) would match the corresponding full sRGB red value.

Note: This was another point that provoked some comment, with some suggesting an untagged image should not be assumed to be sRGB. The reason we do this is explained above: it’s to make sure that colors in the image match the CSS colors in the page.

That’s ok for the display technology from the last decade. But now that we have displays that support a wider gamut, you’ll want more control over how your content is shown.

Detecting the Display

I’ve explained above why you should prefer to serve wide-gamut images to users on a wide-gamut display. If you serve wide-gamut images to users not on a wide-gamut display, WebKit will color-match the images and show them in the sRGB space. However, this conversion into sRGB can be done a few ways and isn’t guaranteed to happen identically on other browsers or platforms. As a discerning Web developer or designer you will want to convert your images offline to better control what the end user will see. Also, embedding the color profile adds a little to the file size, so you might not want to send that extra data if it isn’t necessary.

The best solution is to serve a wide-gamut image when the user is on a wide-gamut display, and an sRGB image when the user is not on a wide-gamut display. This is just another case of responsive images, and is exactly what the <picture> element and media queries were designed to handle.

WebKit now supports the (new to CSS Color Level 4) color-gamut media query. Here’s how you use it:

<picture>
  <source media="(color-gamut: p3)" srcset="photo-wide.jpg">
  <img src="photo-srgb.jpg">
</picture>

You can also use this query inside a style sheet:

.main {
  background-image: url("photo-srgb.jpg");
}

@media (color-gamut: p3) {
  .main {
    background-image: url("photo-wide.jpg");
  }
}

Or script:

if (window.matchMedia("(color-gamut: p3)").matches) {
  // Do especially colorful stuff here.
}

The color-gamut query accepts p3 and rec2020 as values, which are intentionally vague terms to specify the range of colors supported by the system, including the browser engine and the display hardware. By default, since nearly all displays support sRGB, there is no need to test for that functionality. But a typical modern wide-gamut display can support the range of colors included in the DCI P3 space, or close to it, and would pass the media query. For example, the Display P3 space I mentioned above is a variant of DCI P3. The rec2020 value indicates the system has a display that supports an even wider color gamut, one approximately defined by the Rec. 2020 space (at the moment, it’s quite rare to come across hardware on the Web that actually supports Rec. 2020, so you probably don’t need to worry about that yet).

Since media queries have a graceful fallback, you can start using color-gamut right away to give wide-gamut users nicer colors while still accommodating users who don’t yet have compatible browsers or hardware.

Future Directions

Serving and rendering wide-gamut images is relatively straightforward, but what if you want to combine them with other page elements like a background color, or paint an image into a canvas element? These are some of the challenges the standards bodies are grappling with, and I’d like to talk about where they’re going.

Wide-gamut colors in CSS

I showed above that rgb(241, 0, 0) in Display P3 is rgb(255, 0, 0) in sRGB. What can you do if you want a color defined in CSS to match something inside a wide-gamut image? We can’t yet specify those colors in CSS.

This is what members of the WebKit project have proposed for CSS. The current idea is to add a new function called color() that can take a color profile as well as the parameters defining the color.

/* NOTE: Proposed syntax. Not yet implemented. */
strong {
  color: color(p3 1.0 0 0); /* 100% red in the P3 color space */
}

In practice, you’d probably use this with an @supports rule.

strong {
  color: rgb(255, 0, 0); /* 100% red in the sRGB color space */
}

@supports (color: color(p3 0 0 0)) {
  strong {
    color: color(p3 1.0 0 0); /* 100% red in the P3 color space */
  }
}

Note: I originally had a bad typo, showing the syntax as color(p3, 255, 0, 0). This is one of the annoyances of the existing rgb() function. The new color() function will take floating point numbers as parameters, not 8-bit integers.

CSS will define some known color profile names, so you can easily find the color values you want. There is still some discussion about allowing authors to link to external profiles, or maybe point at an image that has an embedded profile.

In addition, CSS might decide to allow you to specify values outside 0-255 (or 0-100%) in the existing rgb() function. For example, rgb(102.34%, -0.1%, 4%) would be a color that has more red, less green and a tiny touch of blue. The problem here is that it might be confusing to understand these values (what does it mean to be a negative green?).

Another suggestion is to be able to define a color space for the entire document, which would mean that all the regular CSS color values would be interpreted in that space. External images with embedded profiles would still be color matched.

These discussions are happening in the W3C CSS Working Group at the moment. Your opinions are valuable – the group needs more input from Web developers. If you’re interested in participating, look for emails with subjects starting “[css-color]” on the www-style email list.

WebKit hopes to implement these features as soon as we’re confident they are stable.

Wide-gamut colors in HTML

While CSS handles most of the presentation of an HTML document, there is still one important area which is outside its scope: the canvas element. Both 2D and WebGL canvases assume they operate within the sRGB color space. This means that even on wide-gamut displays, you won’t be able to create a canvas that exercises the full range of color.

The proposed solution is to add an optional flag to the getContext function, specifying the color space the canvas should be color matched to. For example:

// NOTE: Proposed syntax. Not yet implemented.
canvas.getContext("2d", { colorSpace: "p3" });

There is something else to consider, which is how to create a canvas that has a greater color depth. For example, in WebGL you can use half-float textures that give you 16 bits of precision per color channel. However, even if you use these deeper textures inside WebGL, you’re limited to 8 bits of precision when you paint the WebGL into the document.

There needs to be a way for the developer to specify the depth of the color buffer for a canvas element.

This gets more complicated when combined with the getImageData/putImageData functions (or the readPixels equivalent in WebGL). With today’s 8 bit per channel buffers, there is no loss of precision going into and out of a canvas. Also, the transfer can be efficient, both in performance and memory, because the canvas and the program-side data are the same type. But when you have variable color depths, this might not be possible. For example, WebGL’s half-float buffer doesn’t have an equivalent type in JavaScript, which means there will either have to be some transformation as the data is read or written (and more memory used to store the data), or you’ll have to work with a raw ArrayBuffer and do awkward math with bit masks.
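To give a flavour of that awkward math, here is a sketch (plain JavaScript, not a proposed API) of decoding one 16-bit half-float value pulled out of a raw buffer:

// Decode an IEEE 754 half-precision value stored as a 16-bit integer, e.g.
// one channel of a pixel read into a Uint16Array from an ArrayBuffer.
function halfToFloat(bits) {
  const sign = bits & 0x8000 ? -1 : 1;
  const exponent = (bits >> 10) & 0x1f;
  const mantissa = bits & 0x3ff;

  if (exponent === 0)                         // subnormal numbers
    return sign * mantissa * Math.pow(2, -24);
  if (exponent === 0x1f)                      // infinities and NaN
    return mantissa ? NaN : sign * Infinity;
  return sign * (1 + mantissa / 1024) * Math.pow(2, exponent - 15);
}

// Example: one RGBA pixel of half-float data, reinterpreted and decoded.
const rawPixel = new Uint16Array([0x3C00, 0x3800, 0x0000, 0x3C00]);
console.log(Array.from(rawPixel, halfToFloat)); // [1, 0.5, 0, 1]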

These discussions are currently happening on the WhatWG github and soon at the W3C. Again, we’d love for you to be part of the conversation.

Summary

Wide-gamut displays have arrived and are the future of computing devices. As your audience using these beautiful displays grows, you’ll want to take advantage of the stunning range of colors on offer and provide users with a more engaging Web experience.

WebKit is very excited to provide improved color features to developers through color matching and color gamut detection, available today in the Safari Technology Preview, as well as the macOS Sierra and iOS 10 betas. We’re also keen to start implementing more advanced color features, such as specifying wide-gamut colors in CSS, tagging canvas elements with profiles and allowing greater color depth.

If you have any comments, questions, delicious cookies or spam I can be found at dino@apple.com or @grorgwork. You can also contact Jonathan Davis at web-evangelist@apple.com, @jonathandavis, or send general comments to the @webkit Twitter account.

By Dean Jackson at July 01, 2016 04:30 PM

May 26, 2016

Manuel Rego: CSS Grid Layout and positioned items

Igalia WebKit

As part of the work done by Igalia in the CSS Grid Layout implementation on Chromium/Blink and Safari/WebKit, we’ve been implementing the support for positioned items. Yeah, absolute positioning inside a grid. 😅

Probably the first idea that comes to your mind is that you don’t want to use positioned grid items, but maybe in some use cases they can be needed. The idea of this post is to explain how they work inside a grid container, as they have some particularities.

Actually there’s not such a big difference compared to regular grid items. When the grid container is the containing block of the positioned items (e.g. using position: relative; on the grid container) they’re placed almost the same as regular grid items. But there are a few differences:

  • Positioned items don't stretch by default.
  • They don't use the implicit grid. They don't create implicit tracks.
  • They don't occupy cells regarding auto-placement feature.
  • auto has a special meaning when referring to lines.

Let’s explain with more detail each of these features.

Positioned items shrink to fit

We’re used to regular items that stretch by default to fill their area. However, that’s not the case for positioned items; similar to what a regular positioned block does, they shrink to fit.

This is pretty easy to get, but a simple example will make it crystal clear:

<div style="display: grid; position: relative;
            grid-template-columns: 300px 200px; grid-template-rows: 200px 100px;">
    <div style="grid-column: 1 / 3; grid-row: 1 / 3;"></div>
    <div style="position: absolute; grid-column: 1 / 3; grid-row: 1 / 3;"></div>
</div>

In this example we have a simple 2x2 grid. Both the regular item and the positioned one are placed with the same rules, taking the whole grid. This defines the area for those items, which takes the 1st & 2nd rows and 1st & 2nd columns.

Positioned items shrink to fit

The regular item stretches by default both horizontally and vertically, so it takes the whole size of the grid area. However, the positioned item shrinks to fit and adapts its size to its contents.

For the examples in the next points I’m ignoring this difference, as I want to show the area that each positioned item takes. To get the same result as in the pictures, you’d need to set 100% width and height on the positioned items.

Positioned items and implicit grid

Positioned items don’t participate in the layout of the grid, nor do they affect how other items are placed.

You can place a regular item outside the explicit grid, and the grid will create the required tracks to accommodate the item. However, in the case of positioned items, you cannot even refer to lines in the implicit grid; they’ll be treated as auto. This means that you cannot place a positioned item in the implicit grid: positioned items cannot create implicit tracks, as they don’t participate in the layout of the grid.

Let’s use an example to understand this better:

<div style="display: grid; position: relative;
            grid-template-columns: 200px 100px; grid-template-rows: 100px 100px;
            grid-auto-columns: 100px; grid-auto-rows: 100px;
            width: 300px; height: 200px;">
    <div style="position: absolute; grid-area: 4 / 4;"></div>
</div>

The example defines a 2x2 grid, but the positioned item uses grid-area: 4 / 4;, so it tries to go to the 4th row and 4th column. However, positioned items cannot create those implicit tracks. So it’s positioned as if it had auto, which in this case will take the whole explicit grid. (auto has a special meaning for positioned items; it’ll be properly explained later.)

Positioned items do not create implicit tracks

Imagine another example where regular items create implicit tracks:

<div style="display: grid; position: relative;
            grid-template-columns: 200px 100px; grid-template-rows: 100px 100px;
            grid-auto-columns: 100px; grid-auto-rows: 100px;
            width: 500px; height: 400px;">
    <div style="grid-area: 1 / 4;"></div>
    <div style="grid-area: 4 / 1;"></div>
    <div style="position: absolute; grid-area: 4 / 4;"></div>
</div>

In this case, the regular items create the implicit tracks, making a 4x4 grid in total. Now the positioned item can be placed on the 4th row and 4th column, even though those tracks are outside the explicit grid.

Positioned items can be placed on the implicit grid

As you can see this part of the post has been modified, thanks to @fantasai for notifying me about the mistake.

Positioned items and placement algorithm

Again the positioned items do not affect the position of other items, as they don’t participate in the placement algorithm.

So, if you have a positioned item and you’re using auto-placement for some regular items, it’s expected that the positioned one overlaps the others. Positioned items are completely ignored during auto-placement.

Here’s a simple example that shows this behavior:

<div style="display: grid; position: relative;
            grid-template-columns: 300px 300px; grid-template-rows: 200px 200px;">
    <div style="grid-column: auto; grid-row: auto;"></div>
    <div style="grid-column: auto; grid-row: auto;"></div>
    <div style="grid-column: auto; grid-row: auto;"></div>
    <div style="position: absolute;
                grid-column: 2 / 3; grid-row: 1 / 2;"></div>
</div>

Here we have again a 2x2 grid, with 3 auto-placed regular items and 1 absolutely positioned item. As you can see, the positioned item is placed on the 1st row and 2nd column, but there’s an auto-placed item in that cell too, below the positioned one. This shows that the grid container doesn’t care about positioned items and just ignores them when it has to place regular items.

Positioned items and placement algorithm

If none of the children were positioned, the last one would be placed in the given position (1st row and 2nd column), and the rest of them (auto-placed) would take the other cells, without overlapping.

Positioned items and auto lines

This is probably the biggest difference compared to regular grid items. If you don’t specify a line, it’s considered that you’re using auto, but auto is not resolved as span 1 like in regular items. For positioned items, auto is resolved to the padding edge.

The specification introduces the concepts of the lines 0 and -0. Despite how weird it can sound, it actually makes sense: the auto lines reference those 0 and -0 lines, which represent the padding edges of the grid container.

Again let’s use a few examples to explain this:

<div style="display: grid; position: relative;
            grid-template-columns: 300px 200px; grid-template-rows: 200px 200px;
            padding: 50px 100px; width: 500px; height: 400px;">
    <div style="position: absolute;
                grid-column: 1 / auto; grid-row: 2 / auto;"></div>
</div>

Here we have a 2x2 grid container, which has some padding. The positioned item will be placed in the 2nd row and 1st column, but its area will extend up to the padding edges (as the end line is auto in both axes).

Positioned items and auto lines

We could even place positioned grid items on the padding itself. For example using “grid-column: auto / 1;” the item would be on the left padding.

Positioned items using auto lines to be placed on the left padding

Of course, if the grid is wider and we have some free space in the content box, the items will take that space too. For example:

<div style="display: grid; position: relative;
            grid-template-columns: 300px 200px; grid-template-rows: 200px 200px;
            padding: 50px 100px; width: 600px; height: 400px;">
    <div style="position: absolute;
                grid-column: 3 / auto; grid-row: 2 / 3;"></div>
</div>

Here the grid columns add up to 500px, but the grid container is 600px wide. This means that we have 100px of free space in the grid content box. As you can see in the example, that space will also be used when the positioned items extend up to the padding edges.

Positioned items taking free space and right padding

Offsets

Of course you can use offsets to place your positioned items (left, right, top and bottom properties).

These offsets will apply inside the grid area defined for the positioned items, following the rules explained above.

Let’s use another example:

<div style="display: grid; position: relative;
            grid-template-columns: 300px 200px; grid-template-rows: 200px 200px;
            padding: 50px 100px; width: 500px; height: 400px;">
    <div style="position: absolute;
                grid-column: 1 / auto; grid-row: 2 / auto;
                left: 90px; right: 70px; top: 80px; bottom: 60px;"></div>
</div>

Again a 2x2 grid container with some padding. The positioned item has some offsets, which are applied inside its grid area.

Positioned items and offsets

Wrap-up

I’m not completely sure how important support for positioned elements is for web authors using Grid Layout. You’ll be the ones to tell us whether you really find use cases that need this. I hope this post helps you understand it better and make up your minds about real-life scenarios where this might be useful.

The good news is that you can test this already in the most recent versions of some major browsers: Chrome Canary, Safari Technology Preview and Firefox. We hope that the 3 implementations are interoperable, but please let us know if you find any issue.

There’s one last thing missing: alignment support for positioned items. This hasn’t been implemented yet in any of the browsers, but the behavior will be pretty similar to the one you can already use with regular grid items. Hopefully, we’ll have time to add support for this in the coming months.

Igalia and Bloomberg working together to build a better web

Last but not least, thanks to Bloomberg for supporting Igalia in the CSS Grid Layout implementation on Blink and WebKit.

May 26, 2016 10:00 PM

May 23, 2016

Igalia Compilers Team: Awaiting the future of JavaScript in V8

Igalia WebKit

On the evening of Monday, May 16th, 2016, we made history. We landed the initial implementation of “Async Functions” in V8, the JavaScript runtime used by Google Chrome and Node.js. We do these things not because they are easy, but because they are hard. Because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one we are willing to accept. It is very exciting to see this land after roughly 2 months of implementation, code review and standards finagling/discussion. It is truly an honour.

 

To introduce you to Async Functions, it’s first necessary to understand two things: the status quo of async programming in JavaScript, as well as Generators (previously implemented by fellow Igalian Andy).

 

Async programming in JavaScript has historically been implemented by callbacks. `window.setTimeout(function toExecuteLaterOnceTimeHasPassed() {}, …)` being the common example. Callbacks on their own are not scalable: when numerous nested asynchronous operations are needed, code becomes extremely difficult to read and reason about. Abstraction libraries have been tacked on to improve this, including caolan’s async package, or Promise libraries such as Q. These abstractions simplify control flow management and data flow management, and are a massive improvement over plain Callbacks. But we can do better! For a more detailed look at Promises, have a look at the fantastic MDN article. Some great resources on why and how callbacks can lead to utter non-scalable disaster exist too, check out http://callbackhell.com!

 

The second concept, Generators, allows a runtime to return from a function at an arbitrary line and later re-enter that function at the following instruction, in order to continue execution. So right away you can imagine where this is going: we can continue execution of the same function, rather than writing a closure to continue execution in a new function. Async Functions rely on this same mechanism (and in fact, on the underlying Generators implementation) to achieve their goal, immensely simplifying non-trivial coordination of asynchronous operations.
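For readers who haven’t used them yet, here is a tiny generator example (my own illustration, not code from the V8 patch):

// Execution pauses at each `yield` and resumes where it left off on the
// next call to next(); this is the same suspend/resume machinery that
// async functions build upon.
function* steps() {
  console.log("start");
  yield 1;                  // suspend here
  console.log("resumed");
  yield 2;                  // suspend again
}

const it = steps();
it.next(); // logs "start",   returns { value: 1, done: false }
it.next(); // logs "resumed", returns { value: 2, done: false }
it.next(); //                 returns { value: undefined, done: true }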

 

As a simple example, let’s compare the following two approaches:
function deployApplication() {
  return cleanDirectory(__DEPLOYMENT_DIR__).
    then(fetchNpmDependencies).
    then(
      deps => Promise.all(
        deps.map(
          dep => moveToDeploymentSite(
            dep.files,
            `${__DEPLOYMENT_DIR__}/deps/${dep.name}`
        )))).
    then(() => compileSources(__SRC_DIR__,
                              __DEPLOYMENT_DIR__)).
    then(uploadToServer);
}

The Promise boilerplate makes this harder to read and follow than it could be. And what happens if an error occurs? Do we want to add catch handlers to each link in the Promise chain? That will only make it even more difficult to follow, with error handling interleaved in difficult to read ways.

Let’s refactor this using async functions:
async function deployApplication() {
    await cleanDirectory(__DEPLOYMENT_DIR__);
    let dependencies = await fetchNpmDependencies();

    // *see below*
    for (let dep of dependencies) {
        await moveToDeploymentSite(
            dep.files,
            `${__DEPLOYMENT_DIR__}/deps/${dep.name}`);
    }

    await compileSources(__SRC_DIR__,
                         __DEPLOYMENT_DIR__);
    return uploadToServer();
}

 

You’ll notice that the “moveToDeploymentSite” step is slightly different in the async function version, in that it completes each operation in a serial pipeline, rather than completing each operation in parallel, and continuing once finished. This is an unfortunate limitation of the async function specification, which will hopefully be improved on in the future.

 

In the meantime, it’s still possible to use the Promise API in async functions, as you can `await` any Promise, and continue execution after it is resolved. This grants compatibility with  numerous existing Web Platform APIs (such as `fetch()`), which is ultimately a good thing! Here’s an alternative implementation of this step, which performs the `moveToDeploymentSite()` bits in parallel, rather than serially:

 

await Promise.all(dependencies.map(
  dep => moveToDeploymentSite(
    dep.files,
    `${__DEPLOYMENT_DIR__}/deps/${dep.name}`
)));

 

Now, it’s clear from the let dependencies = await fetchNpmDependencies(); line that Promises are unwrapped automatically. What happens if the promise is rejected with an error, rather than resolved with a value? With try-catch blocks, we can catch rejected promise errors inside async functions! And if they are not caught, the async function will automatically return a rejected Promise.
function throwsError() { throw new Error("oops"); }

async function foo() { throwsError(); }

// will print the Error thrown in `throwsError`.
foo().catch(console.error)

async function bar() {
  try {
    var value = await foo();
  } catch (error) {
    // Rejected Promise is unwrapped automatically, and
    // execution continues here, allowing us to recover
    // from the error! `error` is `new Error("oops")`
  }
}

 

There are also lots of convenient forms of async function declarations, which hopefully serve lots of interesting use-cases! You can concisely declare methods as asynchronous in Object literals and ES6 classes, by preceding the method name with the `async` keyword (without a preceding line terminator!)

 

class C {
  async doAsyncOperation() {
    // ...
  }
};

var obj = {
  async getFacebookProfileAsynchronously() {
    /* ... */
  }
};

 

These features allow us to write more idiomatic, easier to understand asynchronous control flow in our applications, and future extensions to the ECMAScript specification will enable even more idiomatic forms for writing complex algorithms, in a maintainable and readable fashion. We are very excited about this! There are numerous other resources on the web detailing async functions, their benefits, and perhaps ways they might be improved in the future. Some good ones include [this piece from Google’s Jake Archibald](https://jakearchibald.com/2014/es7-async-functions/), so give that a read for more details. It’s a few years old, but it holds up nicely!

 

So, now that you’ve seen the overview of the feature, you might be wondering how you can try it out, and when it will be available for use. For the next few weeks, it’s still too experimental even for the “Experimental JavaScript” flag. But if you are adventurous, you can try it already! Fetch the latest Chrome Canary build, and start Chrome with the command-line flag `--js-flags="--harmony-async-await"`. We can’t make promises about the shipping timeline, but it could ship as early as Chrome 53 or Chrome 54, which will become stable in September or October.

 

We owe a shout out to Bloomberg, who have provided us with resources to improve the web platform that we love. Hopefully, we are providing their engineers with ways to write more maintainable, more performant, and more beautiful code. We hope to continue this working relationship in the future!

 

As well, shoutouts are owed to the Chromium team, who have assisted in reviewing the feature, verifying its stability, getting devtools integration working, and ultimately getting the code upstream. Terrific! In addition, the WebKit team has also been very helpful, and hopefully we will see the feature land in JavaScriptCore in the not too distant future.

By compilers at May 23, 2016 02:08 PM

April 16, 2016

Frédéric Wang: OpenType MATH in HarfBuzz

Igalia WebKit

TL;DR:

  • Work is in progress to add OpenType MATH support in HarfBuzz and will be instrumental for many math rendering engines relying on that library, including browsers.

  • For stretchy operators, an efficient way to determine the required number of glyphs and their overlaps has been implemented and is described here.

In the context of the Igalia browser team’s effort to implement MathML support using TeX rules and OpenType features, I have started implementing OpenType MATH support in HarfBuzz. This table from the OpenType standard is made of three subtables:

  • The MathConstants table, which contains layout constants. For example, the thickness of the fraction bar of $\frac{a}{b}$.

  • The MathGlyphInfo table, which contains glyph properties. For instance, the italic correction indicating how slanted an integral is, e.g. to properly place the subscript in $\displaystyle\int_{D}$.

  • The MathVariants table, which provides larger size variants for a base glyph or data to build a glyph assembly. For example, either a larger parenthesis or an assembly of U+239B, U+239C, U+239D to write something like:

      $\left(\frac{\frac{\frac{a}{b}}{\frac{c}{d}}}{\frac{\frac{e}{f}}{\frac{g}{h}}}\right.$

Code to parse this table was added to Gecko and WebKit two years ago. The existing code to build glyph assemblies in these Web engines was adapted to use the MathVariants data instead of only private tables. However, as we will see below, the MathVariants data to build glyph assemblies is more general, with an arbitrary number of glyphs or with additional constraints on glyph overlaps. Also, there are various fallback mechanisms for old fonts and other bugs that I think we could get rid of when we move to OpenType MATH fonts only.

In order to add MathML support in Blink, it is very easy to import the OpenType MATH parsing code from WebKit. However, after discussions with some Google developers, it seems that the best option is to directly add support for this table in HarfBuzz. Since this library is used by Gecko, by WebKit (at least the GTK port) and by many other applications such as Servo, XeTeX or LibreOffice, it makes sense to share the implementation to improve math rendering everywhere.

The idea for HarfBuzz is to add an API to

  1. Expose data from the MathConstants and MathGlyphInfo.

  2. Shape stretchy operators to some target size with the help of the MathVariants.

It is then up to a higher-level math rendering engine (e.g. a TeX or MathML rendering engine) to beautifully display mathematical formulas using this API. The design choice for exposing MathConstants and MathGlyphInfo is almost obvious from reading the MATH table specification. The choice for the shaping API is a bit more complex, and discussion is still in progress. For example, because we want to accept stretching after glyph-level mirroring (e.g. to draw RTL clockwise integrals), we should accept any glyph and not just an input Unicode string, as is the case for other HarfBuzz shaping functions. This shaping also depends on a stretching direction (horizontal/vertical) or on a target size (and Gecko currently has various ways to approximate that target size). Finally, we should also have a way to expose the italic correction for a glyph assembly or to approximate the preferred width for Web rendering engines.

As I mentioned at the beginning, the data and algorithm to build a glyph assembly are the most complex part of the OpenType MATH table and deserve special attention. The idea is that you have a list of n ≥ 1 glyphs available to build the assembly. For each 0 ≤ i ≤ n-1, the glyph g_i has advance a_i in the stretch direction. Each g_i has a straight connector part at its start (of length s_i) and at its end (of length e_i) so that we can align the glyphs on the stretch axis and glue them together. Also, some of the glyphs are “extenders”, which means that they can be repeated 0, 1 or more times to make the assembly as large as possible. Finally, the end/start connectors of consecutive glyphs must overlap by at least a fixed value o_min to avoid gaps at some resolutions, but of course without exceeding the length of the corresponding connectors. This gives some flexibility to adjust the size of the assembly and get closer to the target size t.

Figure 0.1: Two adjacent glyphs g_i and g_{i+1} in an assembly, with their advances a_i and a_{i+1}, start connectors s_i and s_{i+1}, end connectors e_i and e_{i+1}, and the overlap o_{i,i+1} between them.

To ensure that the width/height is distributed equally and the symmetry of the shape is preserved, the MATH table specification suggests the following iterative algorithm to determine the number of extenders and the connector overlaps to reach a minimal target size t:

  1. Assemble all parts by overlapping connectors by the maximum amount, and removing all extenders. This gives the smallest possible result.

  2. Determine how much extra width/height can be distributed into all connections between neighboring parts. If that is enough to achieve the size goal, extend each connection equally by changing overlaps of connectors to finish the job.

  3. If all connections have been extended to the minimum overlap and further growth is needed, add one of each extender, and repeat the process from the first step.

We note that at each step, each extender is repeated the same number of times r ≥ 0. So if I_Ext (respectively I_NonExt) is the set of indices 0 ≤ i ≤ n-1 such that g_i is an extender (respectively is not an extender), we have r_i = r (respectively r_i = 1). The size we can reach at step r is at most the one obtained with the minimal connector overlap o_min, that is

  \sum_{i=0}^{n-1}\left(\sum_{j=1}^{r_{i}}(a_{i}-o_{\mathrm{min}})\right)+o_{\mathrm{min}}=\left(\sum_{i\in I_{\mathrm{NonExt}}}(a_{i}-o_{\mathrm{min}})\right)+\left(\sum_{i\in I_{\mathrm{Ext}}}r\,(a_{i}-o_{\mathrm{min}})\right)+o_{\mathrm{min}}

We let N_Ext = |I_Ext| and N_NonExt = |I_NonExt| be the number of extenders and non-extenders, and we let S_Ext = ∑_{i∈I_Ext} a_i and S_NonExt = ∑_{i∈I_NonExt} a_i be the sums of advances for extenders and non-extenders. If we want the advance of the glyph assembly to reach the minimal size t, then

  S_{\mathrm{NonExt}}-o_{\mathrm{min}}\left(N_{\mathrm{NonExt}}-1\right)+r\left(S_{\mathrm{Ext}}-o_{\mathrm{min}}N_{\mathrm{Ext}}\right)\geq t

We can assume S_Ext - o_min N_Ext > 0, as otherwise we would be in the extreme case where the overlap takes at least the full advance of each extender. Then we obtain

  r\geq r_{\mathrm{min}}=\max\left(0,\left\lceil\frac{t-S_{\mathrm{NonExt}}+o_{\mathrm{min}}\left(N_{\mathrm{NonExt}}-1\right)}{S_{\mathrm{Ext}}-o_{\mathrm{min}}N_{\mathrm{Ext}}}\right\rceil\right)

This provides a first simplification of the algorithm sketched in the MATH table specification: directly start the iteration at step r_min. Note that at each step we start at possibly different maximum overlaps and decrease all of them by the same value. It is not clear what to do when one of the overlaps reaches o_min while the others can still be decreased. However, the sketched algorithm says all the connectors should reach the minimum overlap before the next increment of r, which means the target size will indeed be reached at step r_min.

One possible interpretation is to stop decreasing the overlaps of the adjacent connectors that have reached the minimum overlap and to keep decreasing the others uniformly until all the connectors reach the minimum overlap. In that case we may lose equal distribution or symmetry, though in practice this should probably not matter much. So we propose instead the dual option, which should behave more or less the same in most cases: start with all overlaps set to o_min and increase them evenly to reach a common value o. By the same reasoning as above we want the inequality

  S_{\mathrm{NonExt}}-o\left(N_{\mathrm{NonExt}}-1\right)+r_{\mathrm{min}}\left(S_{\mathrm{Ext}}-oN_{\mathrm{Ext}}\right)\geq t

which can be rewritten

  S_{\mathrm{NonExt}}+r_{\mathrm{min}}S_{\mathrm{Ext}}-o\left(N_{\mathrm{NonExt}}+r_{\mathrm{min}}N_{\mathrm{Ext}}-1\right)\geq t

We note that N = N_NonExt + r_min N_Ext is just the exact number of glyphs used in the assembly. If there is only a single glyph, then the overlap value is irrelevant, so we can assume N_NonExt + r_min N_Ext - 1 = N - 1 ≥ 1. This provides the greatest theoretical value for the overlap o:

  o_{\mathrm{min}}\leq o\leq o_{\mathrm{max}}^{\mathrm{theoretical}}=\frac{S_{\mathrm{NonExt}}+r_{\mathrm{min}}S_{\mathrm{Ext}}-t}{N_{\mathrm{NonExt}}+r_{\mathrm{min}}N_{\mathrm{Ext}}-1}

Of course, we also have to take into account the limit imposed by the start and end connector lengths, so o_max must also be at most min(e_i, s_{i+1}) for 0 ≤ i ≤ n-2. And if r_min ≥ 2, then consecutive copies of an extender are connected to each other, so o_max must also be at most min(e_i, s_i) for i ∈ I_Ext. To summarize, o_max is the minimum of o_max^theoretical, of e_i for 0 ≤ i ≤ n-2, of s_i for 1 ≤ i ≤ n-1, and possibly of s_0 (if 0 ∈ I_Ext) and of e_{n-1} (if n-1 ∈ I_Ext).
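
To make the computation concrete, here is a small worked example with invented numbers (they do not come from any actual font): suppose the assembly has n = 3 parts, a bottom hook g_0, an extender g_1 and a top hook g_2, all with advance 1000 font units, all connectors of length 200, o_min = 100 and target size t = 3000. Then N_NonExt = 2, N_Ext = 1, S_NonExt = 2000 and S_Ext = 1000, so

  r_min = max(0, ⌈(3000 - 2000 + 100×(2-1)) / (1000 - 100×1)⌉) = ⌈1100/900⌉ = 2

which gives N = 2 + 2×1 = 4 glyphs and

  o_max^theoretical = (2000 + 2×1000 - 3000) / (4 - 1) = 1000/3 ≈ 333

Since every connector is only 200 units long, o_max = min(333, 200) = 200, and the resulting assembly advance is 4×1000 - 3×200 = 3400 ≥ t.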

With the algorithm described above, N_Ext, N_NonExt, S_Ext, S_NonExt, r_min and o_max can all be obtained using simple loops over the glyphs g_i, so the complexity is O(n). In practice n is small: for existing fonts, assemblies are made of at most three non-extenders and two extenders, that is n ≤ 5 (incidentally, Gecko and WebKit do not currently support larger values of n). This means that all the operations described above can be considered to have constant complexity. This is much better than a naive implementation of the iterative algorithm sketched in the OpenType MATH table specification, which seems to require at worst

  \sum_{r=0}^{r_{\mathrm{min}}-1}\left(N_{\mathrm{NonExt}}+rN_{\mathrm{Ext}}\right)=N_{\mathrm{NonExt}}r_{\mathrm{min}}+\frac{r_{\mathrm{min}}\left(r_{\mathrm{min}}-1\right)}{2}N_{\mathrm{Ext}}=O(n\times r_{\mathrm{min}}^{2})

and at least Ω(r_min).

One issue is that the number of extender repetitions r_min, and hence the number of glyphs in the assembly N, can become arbitrarily large, since the target size t can take large values, e.g. if one writes \underbrace{\hspace{65535em}} in LaTeX. The improvement proposed here does not solve that issue, since setting the coordinates of each glyph in the assembly and painting them require Θ(N) operations, as well as (in the case of HarfBuzz) a glyph buffer of size N. However, such large stretchy operators do not happen in real-life mathematical formulas. Hence, to avoid possible hangs in Web engines, a solution is to impose a maximum limit N_max on the number of glyphs in the assembly so that the complexity is limited by the size of the DOM tree. Currently, the proposal for HarfBuzz is N_max = 128. This means that if each assembly glyph is 1em large you won’t be able to draw stretchy operators larger than 128em, which sounds like a quite reasonable bound. With the above proposal, r_min and so N can be determined very quickly and the cases N ≥ N_max rejected, so that we avoid losing time on such edge cases.

Finally, because in our proposal we use the same overlap o everywhere, an alternative for HarfBuzz would be to set the output buffer size to n (i.e. ignore the r-1 extra copies of each extender and only keep the first one). This will leave gaps that the client can fill by repeating the extenders, as long as o is also provided. Then HarfBuzz math shaping can be done with time and space complexity of just O(n), and it will be up to the client to optimize or limit the painting of extenders for large values of N.

By fredw at April 16, 2016 09:23 PM

December 15, 2014

Web Engines Hackfest 2014

Gustavo Noronha

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play youtube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it at the last day of the hackfest! It provides new signals that can be used to authorize notifications, show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance and reject the notification if it cannot find it. Besides making this a pain to test (our test browser would need a .desktop file to be installed), it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia


By kov at December 15, 2014 11:20 PM

December 08, 2014

How to build TyGL

University of Szeged

This is a follow-up blog post of our announcement of TyGL - the 2D-accelerated GPU rendering port of WebKit.

We have received lots of feedback about TyGL and we would like to thank you for all the questions, suggestions and comments. As we promised, let’s get into some technical details.

read more

By szilard.ledan at December 08, 2014 12:47 PM

November 12, 2014

Announcing the TyGL-WebKit port to accelerate 2D web rendering with GPU

University of Szeged

We are proud to announce the TyGL port (link: http://github.com/szeged/TyGL) on top of EFL-WebKit. TyGL (pronounced as tigel) is part of WebKit and provides 2D-accelerated GPU rendering on embedded systems. The engine is purely GPU based. It has been developed on and tested against an ARM Mali GPU, but it is designed to work on any GPU conforming to OpenGL ES 2.0 or higher.

GPU involvement in future graphics is inevitable considering the pixel growth rate of displays, but harnessing GPU power requires a different approach than CPU-based optimizations.

read more

By zoltan.herczeg at November 12, 2014 02:18 PM

October 22, 2014

Fuzzinator reloaded

University of Szeged

It's been a while since I last (and actually first) posted about Fuzzinator. Now I think that I have enough new experiences worth sharing.

More than a year ago, when I started fuzzing, I was mostly focusing on mutation-based fuzzer technologies since they were easy to build and pretty effective. Having a nice error-prone test suite (e.g. LayoutTests) was the warrant for fresh new bugs. At least for a while.

read more

By renata.hodovan at October 22, 2014 10:38 PM

September 25, 2014

Measuring ASM.JS performance

University of Szeged

What is ASM.JS?

Now that mobile computers and cloud services have become part of our lives, more and more developers see the potential of the web and online applications. ASM.JS, a strict subset of JavaScript, is a technology that provides a way to achieve near-native speed in browsers, without the need for any plugin or extension. It is also possible to cross-compile C/C++ programs to it and run them directly in your browser.

In this post we will compare the JavaScript and ASM.JS performance in different browsers, trying out various kinds of web applications and benchmarks.

read more

By matyas.mustoha at September 25, 2014 10:40 AM

August 28, 2014

CSS Shapes now available in Chrome 37 release

Adobe Web Platform

Support for CSS Shapes is now available in the latest Google Chrome 37 release.


What can I do with CSS Shapes?

CSS Shapes lets you think out of the box! It gives you the ability to wrap content outside any shape. Shapes can be defined by geometric shapes, images, and even gradients. Using Shapes as part of your website design takes a visitor’s visual and reading experience to the next level. If you want to start with some tutorials, please go visit Sarah Soueidan’s article about Shapes.
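
As a minimal sketch of the idea (the class name and sizes here are made up for illustration, not taken from a real site), floating an element and giving it a circular shape makes the surrounding text wrap along the circle instead of the float’s rectangular box:

.rounded-float {
    float: left;
    width: 200px;
    height: 200px;
    border-radius: 50%;          /* purely visual: make the element itself look round */
    shape-outside: circle(50%);  /* make the text wrap along the circular contour */
    shape-margin: 1em;           /* keep a little breathing room between text and shape */
}

Safari 8 is expected to need the -webkit- prefixed form of the shape properties, so you may have to duplicate the declarations with -webkit-shape-outside and -webkit-shape-margin.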

Demo

The following shapes use case is from the Good Looking Shapes Gallery blog post.

Without CSS Shapes
With CSS Shapes

In the first picture, we don’t use CSS Shapes. The text wraps around the rectangular image container, which leads to a lot of empty space between the text and the visible part of the image.

In the second picture, we use CSS Shapes. You can see the wrapping behavior around the image. In this case the white parts of the image are transparent, thus the browser can automatically wrap the content around the visible part, which leads to this nice and clean, visually more appealing wrapping behavior.
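
A hedged sketch of how that image-based wrapping can be expressed (the file name and threshold are placeholders, not the demo’s actual values): the shape is extracted from the image’s alpha channel, and pixels above the threshold are treated as part of the shape.

.keyart {
    float: left;
    shape-outside: url("tea-pot.png");   /* derive the wrap contour from the image itself */
    shape-image-threshold: 0.5;          /* pixels with alpha above 0.5 count as the shape */
    shape-margin: 10px;                  /* small gap between the contour and the text */
}

Note that the image used for shape-outside has to be CORS-accessible; otherwise the browser falls back to wrapping around the regular margin box.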

How do I get CSS Shapes?

Just update your Chrome browser to the latest version from the Chrome/About Google Chrome menu, or download the latest stable version from https://www.google.com/chrome/browser/.

I’d like to thank the WebKit and Blink engineers, and everyone else in the community who has contributed to this feature, for their collaboration. The fact that Shapes is shipping in two production browsers (Chrome 37 now and Safari 8 later this year) is the upshot of the open source collaboration between the people who believe in a better, more expressive web. Although Shapes will be available in these browsers, you’ll need another solution for the other browsers. The CSS Shapes Polyfill is one method of achieving consistent behavior across browsers.

Where should I start?

For more info about CSS Shapes, please check out the following links:

Let us know your thoughts or if you have nice demos, here or on Twitter: @AdobeWeb and @ZoltanWebKit.

By Zoltan Horvath at August 28, 2014 05:12 PM

May 13, 2014

Good-Looking Shapes Gallery

Adobe Web Platform

As a modern consumer of media, you rarely crack open a magazine or a pamphlet or anything that would be characterized as “printed”. Let me suggest that you take a walk on the wild side. The next time you are in a doctor’s office, or a supermarket checkout lane, or a library, thumb though a magazine. Most of the layouts you’ll find inside can also be found on the web, but not all of them. Layouts where content hugs the boundaries of illustrations are common in print and rare on the web. One of the reasons non-rectangular contour-hugging layouts are uncommon on the web is that they are difficult to produce.

They are not difficult to produce anymore.

The CSS Shapes specification is now in the final stages of standardization. This feature enables flowing content around geometric shapes (like circles and polygons), as well as around shapes defined by an image’s alpha channel. Shapes make it easy to produce the kinds of layouts you can find in print today, with all the added flexibility and power that modern online media affords. You can use CSS Shapes right now with the latest builds of WebKit and Blink based browsers, like Safari and Chrome.

Development of CSS Shapes has been underway for about two years, and we’ve been regularly heralding its progress here. Many of those reports have focused on the evolution of the spec and implementations, and they’ve included examples that emphasized basics over beauty. This article is an attempt to tilt the balance back towards good-looking. Listed below are simple shapes demos that we think look pretty good. Everyone on Adobe’s CSS Shapes engineering team contributed at least one.

There’s a live CodePen.io version of each demo in the gallery. Click on the demo screenshot or one of the handy links to take a look. You’ll want to view the demos with a browser that supports Shapes and you’ll need to enable CSS Shapes in that browser. For example you can use a nightly build of the Safari browser or you can enable shapes in Chrome or Chrome Canary like this:

  1. Copy and paste chrome://flags/#enable-experimental-web-platform-features into the address bar, then press enter.
  2. Click the ‘Enable’ link within that section.
  3. Click the ‘Relaunch Now’ button at the bottom of the browser window.

A few of the demos use the new Shapes Polyfill and will work in most browsers.

And now, without further ado, please have a look through our good-looking shapes gallery.


Ozma of Oz

Screenshot of the Ozma of Oz demo.

This demo reproduces the layout style that opens many of the chapters of the L. Frank Baum books, including Ozma of Oz. The first page is often dominated by an illustration on the left or right. The chapter’s text conforms to the illustration, but not too tightly. The books were published over 100 years ago and they still look good in print. With CSS Shapes they can still look good on the web.
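
A rough sketch of the kind of rule such a layout leans on (the selector, polygon and margin are illustrative, not the demo’s actual code): the illustration is floated and given a shape that follows its outline, with a shape-margin so the text conforms to it “but not too tightly”.

.chapter-illustration {
    float: left;
    shape-outside: polygon(0 0, 100% 0, 60% 100%, 0 100%);  /* rough contour of the drawing */
    shape-margin: 1.5em;                                     /* keep the text from hugging it */
}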

Top Cap

Screenshot of the top cap demo.

The conventional “drop-cap” opens a paragraph by enlarging and highlighting the first letter, word or phrase. The drop-cap’s goal is to draw your attention to where you can start reading. This demo delivers the same effect by crowning the entire opening paragraph with a “top cap” that funnels your attention into the article. In both cases, what’s going on is a segue from a graphic element to the text.

Violator

Screenshot of the violator demo.

A violator is a small element that “violates” rectangular text layout by encroaching on a corner or a small part of an edge. This layout idiom is common in short-form magazines and product packaging. That “new and improved” banner which blazes through the corner of thousands of consumer products (whether or not they are new or improved) – it’s a violator.

Column Interest

Screenshot of the column interest demo.

When a print magazine feels the need to incorporate some column layout melodrama, they often reach for this idiom. The shape spans a pair of columns, which creates visual interest in the middle of the page. Without it you’d be faced with a wall of attention sapping text and more than likely turn the page.

Caption

Screenshot of the wine jug caption demo.

The old-school approach for including a caption with an image is to put the caption text alongside or below the image. Putting a caption on top of an image requires a little more finesse, since you have to ensure that the text doesn’t obscure anything important and that the text is rendered in a way that preserves readability.  The result can be relatively attractive.

This photograph was taken by Zoltan Horvath who has pointed out that I’ve combined a quote about tea with a picture of a ceremonial wine jug.  I apologize for briefly breaching that beverage boundary. It’s just a demo.

Paging

Screenshot of the paging demo.

With a layout like this, one could simply let the content wrap around the shape on the right and then expand into the usual rectangle. In this demo the content is instead served up a paragraph at a time, in response to the left and right arrow keys.

Note also: yes in fact the mate gourd is perched on exactly the same windowsill as the previous demo. Zoltan and Pope Francis are among the many fans of yerba mate tea.

Ersatz shape-inside

Screenshot of the ersatz shape-inside demo.

Originally the CSS Shapes spec included shape-inside as well as shape-outside. Sadly, shape-inside was promoted to “Level 2” of the spec and isn’t available in the current implementations. Fortunately for shape insiders everywhere, it’s still sometimes possible to mimic shape-inside with an adjacent pair of carefully designed shape-outside floats, as sketched below. This demo is a nice example of that, where the text appears inside a bowl of oatmeal.
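
A hedged sketch of the trick (the polygons and sizes are arbitrary placeholders, not the demo’s actual shapes): two floats, one on each side of the container, whose outside shapes leave a bowl-like gap in the middle for the text to fill.

.carve-left {
    float: left;
    width: 50%;
    height: 300px;
    shape-outside: polygon(0 0, 100% 0, 30% 100%, 0 100%);    /* pushes text right, less so near the bottom */
}

.carve-right {
    float: right;
    width: 50%;
    height: 300px;
    shape-outside: polygon(0 0, 100% 0, 100% 100%, 70% 100%); /* mirror image on the right side */
}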

Animation

Screenshot of the animation demo.

This is an animated demo, so to appreciate it you’ll really need to take a look at the live version. It is an example of using an animated shape to draw the user’s attention to a particular message.  Of course one must use this approach with restraint, since an animated loop on a web page doesn’t just gently tug at the user’s attention. It drags at their attention like a tractor beam.

Performance

Screenshot of the performance demo.

Advertisements are intended to grab the user’s attention and a second or two of animation will do that. In this demo a series of transition motions have been strung together into a tiny performance that will temporarily get the reader’s attention. The highlight of the performance is – of course – the text snapping into the robot’s contour for the finale. Try and imagine a soundtrack that punctuates the action with some whirring and clanking noises, it’s even better that way.

By hmuller at May 13, 2014 05:38 PM

April 24, 2014

Adobe Web Platform Goes to the 2014 WebKit Contributors’ Meeting

Adobe Web Platform

Last week, Apple hosted the 2014 WebKit Contributors’ Meeting at their campus in Cupertino. As usual it was an unconference-style event, with session scheduling happening on the morning of the first day. While much of the session content was very specific to WebKit implementation, there were topics covered that are interesting to the wider web community. This post is a roundup of some of these topics from the sessions that Adobe Web Platform Team members attended.

CSS Custom Properties for Cascading Variables

Alan Stearns suggested a session on planning a new implementation of CSS Custom Properties for Cascading Variables. While implementations of this spec have been attempted in WebKit in the past, they never got past the experimental stage. Despite this, there is still much interest in implementing this feature. In addition, the current version of the spec has addressed many of the issues that WebKit contributors had previously expressed. We talked about a possible issue with using variables in custom property values, which Alan is investigating. More detail is available in the notes from the Custom Properties session.

CSS Regions

Andrei Bucur presented the current state of the CSS Regions implementation in WebKit. The presentation was well received and well attended. Notably, this was one of the few sessions with enough interest that it had a time slot all to itself.

While CSS Regions shipped last year in iOS 7 and Safari 6.1 and 7, the implementation in WebKit hasn’t been standing still. Andrei mentioned the following short list of changes in WebKit since the last Safari release:

  • correct painting of fragments and overflow
  • scrollable regions
  • accelerated content inside regions
  • position: fixed elements
  • the regionoversetchange event
  • better selection
  • better WebInspector integration
  • and more…

Andrei’s slides outlining the state of CSS Regions also contain a roadmap for the feature’s future in WebKit as well as a nice demo of the fix to fragment and overflow handling. If you are following the progress of CSS Regions in WebKit, the slides are definitely worth a look. (As of this writing, the Regions demo in the slides only works in Safari and WebKit Nightly.)

CSS Shapes

Zoltan Horvath, Bear Travis, and I covered the current state of CSS Shapes in WebKit. We are almost done implementing the functionality in Level 1 of the CSS Shapes Specification (which is itself a Candidate Recommendation, the last step before becoming an official W3C standard). The discussion in this session was very positive. We received good feedback on use cases for shape-outside and even talked a bit about the possibilities for when shape-inside is revisited as part of CSS Shapes Level 2. While I don’t have any slides or demos to share at the moment, we will soon be publishing a blog post to bring everyone up to date on the latest in CSS Shapes. So watch this space for more!

Subpixel Layout

This session was mostly about implementation. However, Zalan Bujtas drew an interesting distinction between subpixel layout and subpixel painting. Subpixel layout allows for better space utilization when laying out elements on the page, as boxes can be sized and positioned more precisely using fractional units. Subpixel painting allows for better utilization of high DPI displays by actually drawing elements on the screen using fractional CSS pixels (For example: on a 2x “Retina” display, half of a CSS pixel is one device pixel). Subpixel painting allows for much cleaner lines and smoother animations on high DPI displays when combined with subpixel layout. While subpixel layout is currently implemented in WebKit, subpixel painting is currently a work in progress.

Web Inspector

The Web Inspector is full of shiny new features. The front-end continues to shift to a new design, while the back-end gets cleaned up to remove cruft. The architecture for custom visual property editors is in place and will hopefully enable quick and intuitive editing of gradients, transforms, and animations in the future. Other goodies include new breakpoint actions (like value logging), a redesigned timeline, and IndexedDB debugging support. The Web Inspector still has room for new features, and you can always check out the #webkit-inspector channel on freenode IRC for the latest and greatest.

Web Components

The Web Components set of features continues to gather interest from the browser community. Web Components is made up of four different features: HTML Components, HTML Imports, Shadow DOM, and HTML Templates. The general gist of the talk was that the Web Components concepts are desirable, but there are concerns that the features’ complexity may make implementation difficult. The main concerns seemed to center around performance and encapsulation with Shadow DOM, and will hopefully be addressed with a prototype implementation of the feature (in the works). You can also take a look at the slides from the Web Components session.

CSS Grid Layout

The WebKit implementation of the CSS Grid Layout specification is relatively advanced. After learning in this session that the only way to test out Grid Layout in WebKit was to make a custom build with it enabled, session attendees concluded that it should be turned on by default in the WebKit Nightlies. So in the near future, experimenting with Grid Layout in WebKit should be as easy as installing a nightly build.

More?

As I mentioned earlier, this was just a high-level overview of a few of the topics at this year’s WebKit Contributors’ Meeting. Notes and slides for some of the topics not mentioned here are available on the 2014 WebKit Meeting page in the wiki. The WebKit project is always welcoming new contributors, so if you happen to see a topic on that wiki page that interests you, feel free to get in touch with the community and see how you can get involved.

Acknowledgements

This post would not have been possible without the notes and editing assistance of my colleagues on the Adobe Web Platform Team that attended the meeting along with me: Alan Stearns, Andrei Bucur, Bear Travis, and Zoltan Horvath.

By Bem Jones-Bey at April 24, 2014 05:23 PM

March 18, 2014

QtWebKit is no more, what now?

Gustavo Noronha

Driven by the technical choices of some of our early clients, QtWebKit was one of the first web engines Collabora worked on, building the initial support for NPAPI plugins and more. Since then we had kept in touch with the project from time to time when helping clients with specific issues, hardware or software integration, and particularly GStreamer-related work.

With Google forking Blink off WebKit, a decision had to be made by all vendors of browsers and platform APIs based on WebKit on whether to stay or follow Google instead. After quite a bit of consideration and prototyping, the Qt team decided to take the second option and build the QtWebEngine library to replace QtWebKit.

The main advantage of WebKit over Blink for engine vendors is the ability to implement custom platform support. That meant QtWebKit was able to use Qt graphics and networking APIs and other Qt technologies for all of the platform-integration needs. It also enjoyed the great flexibility of using GStreamer to implement HTML5 media. GStreamer brings hardware-acceleration capabilities, support for several media formats and the ability to expand that support without having to change the engine itself.

People who are using QtWebKit because it is GStreamer-powered will probably be better served by switching to one of the remaining GStreamer-based ports, such as WebKitGTK+. Those who don’t care about the underlying technologies but really need or want to use Qt APIs will be better served by porting to the new QtWebEngine.

It’s important to note though that QtWebEngine drops support for Android and iOS as well as several features that allowed tight integration with the Qt platform, such as DOM manipulation through the QWebElement APIs, making QObject instances available to web applications, and the ability to set the QNetworkAccessManager used for downloading resources, which allowed for fine-grained control of the requests and sharing of cookies and cache.

It might also make sense to go Chromium/Blink, either by using the Chrome Content API or by switching to one of its siblings (QtWebEngine included), if the goal is to make a browser which needs no integration with existing toolkits or environments. You will be limited to the formats supported by Chrome and the hardware platforms targeted by Google. Blink does not allow multiple implementations of the platform support layer, so you are stuck with what upstream decides to ship, or with a fork to maintain.

It is a good alternative when Android itself is the main target. That is the technology used to build its main browser. The main advantage here is you get to follow Chrome’s fast-paced development and great support for the targeted hardware out of the box. If you need to support custom hardware or to be flexible on the kinds of media you would like to support, then WebKit still makes more sense in the long run, since that support can be maintained upstream.

At Collabora we’ve dealt with several WebKit ports over the years, and still actively maintain the custom WebKit Clutter port out of tree for clients. We have also done quite a bit of work on Chromium-powered projects. Some of the decisions you have to make are not easy and we believe we can help. Not sure what to do next? If you have that on your plate, get in touch!

By kov at March 18, 2014 07:44 PM

February 25, 2014

Improving your site’s visual details: CSS3 text-align-last

Adobe Web Platform

In this post, I want to give a status report regarding the text-align-last CSS3 property. If you are interested in taking control of the small visual details of your site with CSS, I encourage you to keep reading.

The problem

First, let’s talk about why we need this property. You’ve probably already seen many text blocks on pages that don’t quite seem visually correct, because the last line isn’t justified with the previous lines. Check out the example paragraph below:

Example of the CSS3 text-align-last property

In the first column, the last line isn’t justified. This is the expected behavior, when you apply the ‘text-align: justify’ CSS property on a container. On the other hand, in the second column, the content is entirely justified, including the last line.

The solution

This magic is the ‘text-align-last’ CSS3 property, which is set to justify on the second container. The text-align-last property is part of the CSS Text Module Level 3 specification, which is currently a working draft. The text-align-last property describes how the last line of a block or a line right before a forced line break is aligned when ‘text-align’ is ‘justify’, which means you gain full control over the alignment of the last line of a block. The property allows several more options, which you can read about on WebPlatform.org docs, or the CSS Text Module Level 3 W3C Specification.
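
As a minimal sketch of the rule used for the second column (the class name is illustrative, and in the experimental builds described below the property may still need a -webkit- prefix):

.fully-justified {
    text-align: justify;       /* justify every line except the last by default */
    text-align-last: justify;  /* also stretch the last line to the full column width */
}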

A possible use case (Added April 2014)

After looking at the previous example (which focused rather on the functionality of the property), let’s move on to a more realistic use case. The feature is perfect for making our multi-line captions look better. Check out the centered and the justified image caption examples below.

Screenshots of the centered and the justified caption examples.

And now, compare them with a justified, multi-line caption, where the last line has been centered by text-align-last: center.
Screenshot of the justified caption with text-align-last: center.

I think the proper alignment of the last line gives the caption a better overall look.

Browser Support

I recently added rendering support for the property in WebKit (Safari) based on the latest specification, and Dongwoo Joshua Im from Samsung added rendering support in Blink (Chrome). If you’d like to try it out in WebKit, you’ll need to make a custom developer build and use the CSS3 text support build flag (--css3-text).

The property is already included in Blink’s developer nightlies by default, so after launching your latest Chrome Canary, you only need to enable ‘Enable experimental Web Platform features’ under chrome://flags, and enjoy the full control over your last lines.

Developer note

Please keep in mind that both the W3C specification and the implementations are under experimental status. I’ll keep blogging about the feature and let you know if anything changes, including when the feature ships for production use!

By Zoltan Horvath at February 25, 2014 04:58 PM

December 11, 2013

WebKitGTK+ hackfest 5.0 (2013)!

Gustavo Noronha

For the fifth year in a row the fearless WebKitGTK+ hackers have gathered in A Coruña to bring GNOME and the web closer. Igalia has organized and hosted it as usual, welcoming a record 30 people to its office. The GNOME Foundation has sponsored my trip, allowing me to fly on the cool 18-seat propeller airplane from Lisbon to A Coruña, which is a nice adventure, and have pulpo a feira for dinner, which I simply love! That in addition to enjoying the company of so many great hackers.

Web with wider tabs and the new prefs dialog


The goals for the hackfest have been ambitious, as usual, but we made good headway on them. Web the browser (AKA Epiphany) has seen a ton of little improvements, with Carlos splitting the shell search provider into a separate binary, which allowed us to remove some hacks from the session management code of the browser. It also makes testing changes to Web more convenient again. Jon McCan has been pounding at Web’s UI making it more sleek, with tabs that expand to make better use of available horizontal space in the tab bar, and new dialogs for preferences, cookies and password handling. I have made my tiny contribution by making it not keep around tabs that were created just for what turned out to be a download. For this last day of the hackfest I plan to also fix an issue with text encoding detection and help track down a hang that happens upon page load.

Martin Robinson and Dan Winship hack


Martin Robinson and myself have as usual dived into the more disgusting and wide-reaching maintainership tasks that we have lots of trouble pushing forward in our day-to-day lives. Porting our build system to CMake has been one of these long-term goals, not because we love CMake (we don’t) or because we hate autotools (we do), but because it should make people’s lives easier when adding new files to the build, and should also make our build less hacky and quicker – it is sad to see how slow our build can be when compared to something like Chromium, and we think a big part of the problem lies in how complex and dumb autotools and make can be. We have picked up a few of our old branches, brought them up to date and landed them, which now lets us build the main WebKit2GTK+ library through CMake in trunk. This is an important first step, but there’s plenty to do.

Hackers take advantage of the icecream network for faster builds


Under the hood, Dan Winship has been pushing HTTP2 support for libsoup forward, with a dead-tree version of the spec by his side. He is refactoring libsoup internals to accommodate the new code paths. Still on the HTTP front, I have been updating soup’s MIME type sniffing support to match the newest living specification, which includes specifications for several new types and a new security feature introduced by Internet Explorer and later adopted by other browsers. The huge task of preparing the ground for one process per tab (or other kinds of process separation, this will still be a topic for discussion for a while) has been pushed forward by several hackers, with Carlos Garcia and Andy Wingo leading the charge.

Jon and Guillaume battling code


Other than that I have been putting in some more work on improving the integration of the new Web Inspector with WebKitGTK+. Carlos has reviewed the patch to allow attaching the inspector to the right side of the window, but we have decided to split it in two, one part providing the functionality and one the API that will allow browsers to customize how that is done. There’s a lot of work to be done here; I plan to land at least this first patch during the hackfest. I have also fought one more battle in the never-ending User-Agent sniffing war, which it looks like we cannot win.

Hackers chillin' at A Coruña


I am very happy to be here for the fifth year in a row, and I hope we will be meeting here for many more years to come! Thanks a lot to Igalia for sponsoring and hosting the hackfest, and to the GNOME foundation for making it possible for me to attend! See you in 2014!


By kov at December 11, 2013 09:47 AM

August 27, 2013

HTML Alchemy – Combining CSS Shapes with CSS Regions

Adobe Web Platform

Note: Support for shape-inside is only available up to the following nightly builds: WebKit r166290 (2014-03-26); Chromium 260092 (2014-03-28).


I have been working on rendering for almost a year now. Since I landed the initial implementation of Shapes on Regions in both Blink and WebKit, I’m incredibly excited to talk a little bit about these features and how you can combine them together.

Mad_scientist

Don’t know what CSS Regions and Shapes are? Start here!

The first ingredient in my HTML alchemy kitchen is CSS Regions. With CSS Regions, you can flow content into multiple styled containers, which gives you enormous creative power to make magazine style layouts. The second ingredient is CSS Shapes, which gives you the ability to wrap content inside or outside any shape. In this post I’ll talk about the “shape-inside” CSS property, which allows us to wrap content inside an arbitrary shape.

Let’s grab a bowl and mix these two features – CSS Regions and CSS Shapes – together to produce some really interesting layouts!

In the latest Chrome Canary and Safari WebKit Nightly, after enabling the required experimental features, you can flow content continuously through multiple kinds of shapes. This rocks! You can step out from the rectangular text flow world and break up text into multiple, non-rectangular shapes.

Demo

If you already have the latest Chrome Canary/Safari WebKit Nightly, you can just go ahead and try a simple example on codepen.io. If you are too lazy, or if you want to extend your mouse button life by saving a few button clicks, you can continue reading.


iBuyd3

In the picture above we see that the “Lorem ipsum” story flows through 4 different, colorful regions. There is a circle shape on each of the first two fixed size regions. Check out the code below to see how we apply the shape to the region. It’s pretty straightforward, right?
#region1, #region2 {
    -webkit-flow-from: flow;
    background-color: yellow;
    width: 200px;
    height: 200px;
    -webkit-shape-inside: circle(50%, 50%, 50%);
}
The content flows into the third (percentage sized) region, which represents a heart (drawn by me, all rights reserved). I defined the heart’s coordinates in percentages, so the heart will stretch as you resize the window.
#region3 {
    -webkit-flow-from: flow;
    width: 50%;
    height: 400px;
    background-color: #EE99bb;
    -webkit-shape-inside: polygon(11.17% 10.25%,2.50% 30.56%,3.92% 55.34%,12.33% 68.87%,26.67% 82.62%,49.33% 101.25%,73.50% 76.82%,85.17% 65.63%,91.63% 55.51%,97.10% 31.32%,85.79% 10.21%,72.47% 5.35%,55.53% 14.12%,48.58% 27.88%,41.79% 13.72%,27.50% 5.57%);
}

The content that doesn’t fit in the first three regions flows into the fourth region. The fourth region (see the retro-blue background color) has its CSS width and height set to auto, so it grows to fit the remaining content.
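For completeness, here is a minimal sketch of the pieces the demo implies but the snippets above don’t show: the element that feeds the named flow and the auto-sized fourth region. The selector names (#content, #region4) and the background value are my own illustrations, not taken from the demo.

#content {
    -webkit-flow-into: flow; /* this element's content feeds the "flow" named flow */
}
#region4 {
    -webkit-flow-from: flow; /* receives whatever content is left over */
    background-color: #3399cc; /* the retro-blue background; exact value illustrative */
    width: auto;
    height: auto; /* grows to fit the remaining content */
}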

Real world examples

After trying the demo and checking out the links above, I’m sure you’ll see the opportunities for using shape-inside with regions in your next design. If you have some thoughts on this topic, don’t hesitate to comment. Please keep in mind that these features are under development, and you might run into bugs. If you do, you should report them on WebKit’s Bugzilla for Safari or Chromium’s issue tracker for Chrome. Thanks for reading!

By Zoltan Horvath at August 27, 2013 04:00 PM

August 06, 2013

WebGL, at last!

Brent Fulgham

It's been a long time since I've written an update -- but my lack of blog posting is not an indication of a lack of progress in WebKit or the WinCairo port. Since I left my former employer (who *still* hasn't gotten around to updating the build machine I set up there), we've:

  • Migrated from Visual Studio 2005 to Visual Studio 2010 (and soon, VS2012)
  • Enabled New-run-webkit-tests
  • Updated the WinCairo Support Libraries to support 64-bit builds
  • Integrated a ton of cURL improvements and extensions thanks to the TideSDK guys 
  • and ...
... thanks to the hard work of Alex Christensen, brought up WebGL on the WinCairo port. This is a little exciting for me, because it marks the first time (that I can recall) where the WinCairo port actually gained a feature that was not already part of the core Apple Windows port.



The changes needed to see these circa-1992 graphics in all their three-dimensional glory are already landed in the WebKit tree.  You just need to:

  1. Enable the libEGL, libGLESv2, translator_common, translator_glsl, and translator_hlsl projects for the WinCairo build (they are currently turned off).
  2. Make the following change to WTF/wtf/FeatureDefines.h: 

Brent Fulgham@WIN7-VM ~/WebKit/Source/WTF/wtf
$ svn diff
Index: FeatureDefines.h
===================================================================
--- FeatureDefines.h    (revision 153733)
+++ FeatureDefines.h    (working copy)
@@ -245,6 +245,13 @@
 #define ENABLE_VIEW_MODE_CSS_MEDIA 0
 #endif

+#define ENABLE_WEBGL 1
+#define WTF_USE_3D_GRAPHICS 1
+#define WTF_USE_OPENGL 1
+#define WTF_USE_OPENGL_ES_2 1
+#define WTF_USE_EGL 1
+#define WTF_USE_GRAPHICS_SURFACE 1
+
 #endif /* PLATFORM(WIN_CAIRO) */

 /* --------- EFL port (Unix) --------- */

Performance is a little ragged, but we hope to improve that in the near future.

We have plenty more plans for the future, including full 64-bit support (soon), and hopefully some improvements to the WinLauncher application to make it a little more useful.

As always, if you would like to help out, please get in touch!

By Brent Fulgham (noreply@blogger.com) at August 06, 2013 05:53 AM

May 15, 2013

CSS Level 3 Text Decoration on WebKit and Blink – status

Bruno de Oliveira Abinader

It’s been a while since I wrote the last post about progress on implementing CSS Level 3 Text Decoration features in WebKit. I’ve been involved with other projects but now I can finally resume the work in cooperation with my colleague from basysKom, Lamarque Souza. So far we have implemented:

  • text-decoration-line (link)
  • text-decoration-style (link)
  • text-decoration-color (link)
  • text-underline-position (link)

These properties are currently available under the -webkit- prefix on WebKit, and guarded by a feature flag – CSS3_TEXT – which is enabled by default on both the EFL and GTK ports. On Blink, the plan is to ship these properties unprefixed and under a runtime flag, which can be activated by enabling the “Experimental WebKit Features” (renamed “Experimental Web Platform Features” in latest builds) flag – see chrome://flags inside Google Chrome/Chromium. There are still some Skia-related issues to fix on Blink to enable proper dashed and dotted text decoration styles to be displayed. In the near future, we shall also have the text-decoration shorthand as specified in the CSS Level 3 specification.
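As a quick illustration, here is a minimal sketch of how these prefixed properties can be exercised in a build with CSS3_TEXT enabled – the selector and the particular values are mine, not taken from the patches:

.fancy-link {
    -webkit-text-decoration-line: underline;
    -webkit-text-decoration-style: dashed;
    -webkit-text-decoration-color: red;
    -webkit-text-underline-position: under;
}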

See below a summary of things I plan to finish in the near future:

  • [webkit] Property text-decoration-line now accepts blink as valid value
  • [blink] Fix implementation of dashed and dotted styles on Skia
  • [blink] Fix an issue where previous Skia stroke styles were used when rendering paint decorations
  • [blink] Implement CSS3_TEXT as a runtime flag
  • [blink] Property text-decoration-line now accepts blink as valid value
  • [blink] Implement support for text-decoration shorthand
  • [webkit] Implement support for text-decoration shorthand

Note: Please do not confuse text-decoration‘s blink value with Blink project :)

Stay tuned for further updates!

By Bruno Abinader at May 15, 2013 04:52 PM

March 27, 2013

Freeing the Floats of the Future From the Tyranny of the Rectangle

Adobe Web Platform

With modern web layout you can have your content laid out in whatever shape you want as long as it’s a rectangle. Designers in other media have long been able to have text and other content lay out inside and around arbitrarily complex shapes. The CSS Exclusions, CSS Shapes Level 1, and CSS Shapes Level 2 specifications aim to bring this capability to the web.

While these features aren’t widely available yet, implementation is progressing and it’s already possible to try out some of the features yourself. Internet Explorer 10 has an implementation of the exclusions processing model, so you can try out exclusions in IE 10 today.

At Adobe we have been focusing on implementing the shapes specification. We began with an implementation of shape-inside and now have a working implementation of the shape-outside property on floats. We have been building our implementation in WebKit, so the easiest way to try it out yourself is to download a copy of Chrome Canary. Once you have Canary, enable Experimental Web Platform Features and go wild!

What is shape-outside?

“Now hold up there,” you may be thinking, “I don’t even know what a shape-outside is and you want me to read this crazy incomprehensible specification thing to know what it is!?!”

Well you’ll be happy to know that it really isn’t that complex, especially in the case of floats. When an element is floated, inline content avoids the floated element. Content flows around the margin box of the element as defined by the CSS box model. The shape-outside CSS property allows you to tell the browser to use a specified shape instead of the margin box when wrapping content around the floating element.

CSS Exclusions

The current implementation allows for rectangles, rounded rectangles, circles, ellipses, and polygons. While this gives a lot of flexibility, eventually you will be able to use a SVG path or the alpha channel of an image to make it easier to create complex shapes.

How do I use it?

First, you need to get a copy of Chrome Canary and then enable Experimental Web Platform features. Once you have that, load up this post in Chrome Canary so that you can click on the images below to see a live example of the code. Even better, the examples are on Codepen, so you can and should play with them yourself and see what interesting things you can come up with.

Note that in this post and the examples I use the unprefixed shape-outside property. If you want to test these examples outside of my Codepen, you will need to use the prefixed -webkit-shape-outside property or a prefixing helper such as -prefix-free (which is a built-in option in Codepen).

We’ll start with an HTML document with some content and a float. Currently shape-outside only works on floating elements, so those are the ones to concentrate on. For example: (click on the image to see the code)

HTML without shape-outside

You can now add the shape-outside property to the style for your floats.

.float {
  shape-outside: circle(50%, 50%, 50%);
}

A circle is much more interesting than a standard rectangle, don’t you think? This circle is centered in the middle of the float and has a radius that is half the width of the float. The effect on the layout is something like this:

shape-outside circle

While percentages were used for this circle, you can use any CSS unit you like to specify the shape. All of the relative units are relative to the dimensions of the element where the shape-outside is specified.

Supported shapes

Circles are cool and all, but I promised you other shapes, and I will deliver. There are four types of shapes that are supported by the current shape-outside implementation: rectangle, circle, ellipse, and polygon.

rectangle

You have the ability to specify a shape-outside that is a fairly standard rectangle:

shape-outside: rectangle(x, y, width, height);

The x and y parameters specify the coordinates of the top-left corner of the rectangle. This coordinate is in relation to the top-left corner of the floating element’s content box. Because of the way this interacts with the rules of float positioning, setting these to anything other than 0 causes an effect that is similar to relatively positioning the float’s content. (Explaining this is beyond the scope of this post.)

The width and height parameters should be self-explanatory: they are the width and height of the resulting rectangle.

Where things get interesting is with the six-argument form of rectangle:

shape-outside: rectangle(x, y, width, height, rx, ry);

The first four arguments are the same as explained above, but the last two specify corner radii in the horizontal (rx) and vertical (ry) directions. This not only allows the creation of rounded rectangles, you can create circles and ellipses as well. (Just like with border-radius.)

Here’s an example of a rectangle, a rounded rectangle, a circle, and an ellipse using just rectangle syntax:

shape-outside rectangle
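If you’d rather not open the Codepen, here is a rough sketch of those variants; the class names, dimensions, and radii are illustrative, not copied from the demo:

.rect         { shape-outside: rectangle(0, 0, 100%, 100%); }              /* plain rectangle */
.rounded-rect { shape-outside: rectangle(0, 0, 100%, 100%, 20px, 20px); }  /* rounded corners */
.curved       { shape-outside: rectangle(0, 0, 100%, 100%, 50%, 50%); }    /* a circle on a square float, an ellipse otherwise */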

If you’re reading this in Chrome Canary with exclusions turned on, play around with this demo and see what other things you can do with the rectangles.

circle

I already showed you a simple circle demo and you’ll be happy to know that’s pretty much all there is to know about circles:

shape-outside: circle(cx, cy, radius);

The cx and cy parameters specify the coordinates of the center of the circle. In most situations you’ll want to put them at the center of your box. Just like with rectangles moving this around can be useful, but it behaves similarly to relatively positioning the float’s content with respect to the shape.

The radius parameter is the radius of the resulting circle.

In case you’d like to see it again, here’s what a circle looks like:

shape-outside circle

While it is possible to create circles with rounded rectangles as described above, having a dedicated circle shape is much more convenient.

ellipse

Sometimes, you need to squish your circles and that’s where the ellipse comes in handy.

shape-outside: ellipse(cx, cy, rx, ry);

Just like a circle, an ellipse has cx and cy to specify the coordinates of its center and you will likely want to have them at the center of your float. And just like all the previous shapes, changing these around will cause the float’s content to position relative to your shape.

The rx and ry parameters will look familiar from the rounded rectangle case and they are exactly what you would expect: the horizontal and vertical radii of the ellipse.

Ellipses can be used to create circles (rx = ry) and rounded rectangles can be used to create ellipses, but it’s best to use the shape that directly suits your purpose. It’s much easier to read and maintain that way.

Here’s an example of using an ellipse shape:

shape-outside ellipse
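As a rough sketch, an ellipse covering the whole of a 200×100 pixel float could look something like this (the class name and dimensions are illustrative):

.float {
    float: left;
    width: 200px;
    height: 100px;
    shape-outside: ellipse(50%, 50%, 50%, 50%);
}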

polygon

Now here’s where things get really interesting. The polygon shape-outside allows you to specify an arbitrary polygonal shape for your float:

shape-outside: polygon(x1 y1, x2 y2, ... , xn yn);

The parameters of the polygon are the x and y coordinates of each vertex of the shape. You can have as many vertices as you would like.

Here’s an example of a simple polygon:

shape-outside triangle
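A simple triangle, for instance, might be declared like this – again, the class name, dimensions, and coordinates are just an illustration, not the demo’s actual values:

.float {
    float: left;
    width: 200px;
    height: 200px;
    shape-outside: polygon(0 0, 200px 0, 100px 200px);
}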

Feel free to play with this and see what happens if you create more interesting shapes!

Putting content in the float

The previous examples all had divs without any content just to make it easier to read and understand the code, but a big motivation for shape-outside is to wrap around other content. Interesting layouts often involve wrapping text around images as this final example shows:

shape-outside with images

As usual, you should take a look and play with the code for this example of text wrapping around floated images. This is just the beginning of the possibilities, as you can put a shape outside on any floating element with any content you want inside.

Next steps

We are still hard at work on fixing bugs in the current implementation and implementing the rest of the features in the CSS Shapes Level 1 specification. We welcome your feedback on what is already implemented and also on the spec itself. If you are interested in becoming part of the process, you can raise issues with the current WebKit implementation by filing bugs in the WebKit bugzilla. If you have issues with the spec, those are best raised on the www-style mailing list. And of course, you can leave your feedback as comments on this post.

I hope that you enjoy experimenting with shape-outside and the other features we are currently working on.

By Bem Jones-Bey at March 27, 2013 05:10 PM

August 01, 2012

WebKit CSS3 text-decoration properties (preview)

Bruno de Oliveira Abinader

WebKit currently supports the CSS Text Level 2.1 version of the text-decoration property (link). This version covers only the decoration line types (underline, overline, line-through and blink – the latter is not supported on WebKit).

The draft version of CSS Text Level 3 upgrades the text-decoration (link) property into a shorthand for 3 newly added properties, named text-decoration-line (link), text-decoration-style (link) and text-decoration-color (link), and also adds the text-decoration-skip (link) property.

Among the other WebKit stuff I’ve been doing lately, this is one of the coolest features I’ve been enjoying implementing. I’ve grabbed the task of implementing all of these CSS3 text-decoration* properties on WebKit, and the results are great so far!

As you can see below, these are the new text decoration styles (solid, double, dotted, dashed and wavy – the latter still requires platform support) available:

Text decoration style layout test results on Qt platform

And also specific text decoration colors can be set:

Text decoration color layout test results on Qt platform
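For a flavour of what those layout tests exercise, here is a hedged sketch of the prefixed syntax; the selector and values are made up for illustration, not taken from the tests:

em.highlight {
    -webkit-text-decoration-line: underline;
    -webkit-text-decoration-style: double;
    -webkit-text-decoration-color: green;
}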

These features (with the exception of the text-decoration-skip property) are already implemented in Firefox, which makes it easier to compare results between different web engines. It is important to note that, since the CSS3 specification is still in development, all these properties have a -webkit- prefix (i.e. -webkit-text-decoration), so plain text-decoration still follows the CSS 2.1 specification requirements. The patches are being reviewed and will hopefully land upstream soon!

By Bruno Abinader at August 01, 2012 09:58 PM

April 11, 2012

A guide for Qt5/WebKit2 development setup for Nokia N9 on Ubuntu Linux

Bruno de Oliveira Abinader

As part of my daily activities at basysKom on QtWebKit maintenance and development for Nokia devices, it is interesting to keep track of the latest developments around QtWebKit. There is currently a promising project for a Qt5/WebKit2-based browser called Snowshoe, mainly developed by my friends from INdT, which is completely open source. This browser requires the latest Qt5 and QtWebKit binaries and thus requires us to have a functional build system environment. There is a guide available on WebKit’s wiki (link) which is very helpful but lacks some information about compilation issues found when following the setup steps. So I am basing this guide on that wiki page, and I hope it gets updated soon :)

On this guide it is assumed the following:

  • All commands are issued on a Linux console. I am not aware of how this guide would work on other systems.
  • All commands are supposed to be issued inside the base directory, unless expressly stated otherwise (ie. cd <QT5_DIR>).
  • You might want to check if you have git and rsync packages installed in your system.

1. Install Qt SDK

In order to build Qt5 and QtWebKit for Nokia N9, you need to set up a cross-compiler. Thankfully, Qt SDK already comes with a working setup. Please download the online installer from Qt Downloads section (link).

NOTE: The offline installer comes with an outdated version of the MADDE target, which can be updated by running the script below and choosing “Update components” when asked:

$ ~/QtSDK/SDKMaintenanceTool

2. Directory setup

It is suggested (and actually required by some build scripts) to have a base directory which holds Qt5, Qt Components and WebKit project sources. The suggested base directory can be created by running:

$ mkdir -p ~/swork

NOTE: You can actually choose another directory name, but so far it is required by some scripts to have at least a symbolic link pointing to <HOME_DIR>/swork.

3. Download convenience scripts

3.1. browser-scripts

$ git clone https://github.com/resworb/scripts.git browser-scripts

3.2. rsync-scripts

$ wget http://trac.webkit.org/attachment/wiki/SettingUpDevelopmentEnvironmentForN9/rsync-scripts.tar.gz?format=raw
$ tar xzf rsync-scripts.tar.gz

4. Download required sources

4.1. testfonts

$ git clone git://gitorious.org/qtwebkit/testfonts.git

4.2. Qt5, QtComponents and WebKit

The script below when successfully run will create ~/swork/qt5, ~/swork/qtcomponents and ~/swork/webkit directories:

$ browser-scripts/clone-sources.sh --no-ssh

NOTE: You can also manually download sources, but remember to stick with the directory names described above.

5. Pre-build hacks

5.1. Qt5 translations

Qt5 translations are not being properly handled by the cross-platform toolchain. This happens mainly because the lrelease application is called to generate Qt message files, but since it is an ARMEL binary your system is probably not capable of running it natively (unless you have a misc_runner kernel module properly set up, in which case you can safely skip this step). In this case, you can use lrelease from your system’s Qt binaries without any worries.

If you have a Scratchbox environment set, it is suggested for you to stop its service first:

$ sudo service scratchbox-core stop

Now you can manually generate Qt message files by running this:

$ cd ~/swork/qt5/qttranslations/translations
$ for file in *.ts; do lrelease "$file" -qm "${file%.ts}.qm"; done

5.2. Disable jsondb-client tool

QtJsonDB module from Qt5 contains a tool called jsondb-client, which depends on libedit (not available on MADDE target). It is safe to disable its compilation for now:

$ sed -i 's/jsondb-client//' ~/swork/qt5/qtjsondb/tools/tools.pro

5.3. Create missing symbolic links

Unfortunately the Qt5 build system is not robust enough to support our cross-compilation environment, so some symbolic links are required on MADDE to avoid compilation errors (where <USER> is your system user name):

$ ln -s ~/swork/qt5/qtbase/include ~/QtSDK/Madde/sysroots/harmattan_sysroot_10.2011.34-1_slim/home/<USER>/swork/qt5/qtbase
$ ln -s ~/swork/qt5/qtbase/mkspecs ~/QtSDK/Madde/sysroots/harmattan_sysroot_10.2011.34-1_slim/home/<USER>/swork/qt5/mkspecs

6. Build sources

You can execute the script that will build all sources using cross-compilation setup:

$ browser-scripts/build-sources.sh --cross-compile

If everything went well, you now have the most up-to-date binaries for Qt5/WebKit2 development for the Nokia N9. Please have a look at WebKit’s wiki for more information about how to update the sources after a previous build and how to keep files in sync with the device. The guide assumes the PR1.1 firmware for the N9 device, which is already outdated, so I might come up next with updated instructions on how to safely sync files to your PR1.2-enabled device.

That’s all for now, I appreciate your comments and feedback!

By Bruno Abinader at April 11, 2012 07:18 AM

March 10, 2012

WebKitGTK+ Debian packaging repository changes

Gustavo Noronha

For a while now the git repository used for packaging WebKitGTK+ has been broken. Broken as in nobody was able to clone it. In addition to that, the packaging workflow had been changing over time, from a track-upstream-git/patches-applied one to an import-orig-only/patches-not-applied one.

After spending some more time trying to unbreak the repository for the third time, I decided it might be a good time for a clean up. I created a new repository and imported all upstream versions for series 1.2.x (which is in squeeze), 1.6.x (unstable), and 1.7.x (experimental). I also imported packaging-related commits for those versions using git format-patch and black magic.

One of the good things about doing this move, and one which should make hacking the WebKitGTK+ Debian package more pleasant and accessible, can be seen here:


kov@goiaba ~/s/debian-webkit> du -sh webkit/.git webkit.old/.git
27M webkit/.git
1.6G webkit.old/.git

If you care about the old repository, it’s on git.debian.org still, named old-webkit.git. Enjoy!

By kov at March 10, 2012 05:32 PM

December 07, 2011

WebKitGTK+ hackfest \o/

Gustavo Noronha

It’s been a couple days since I returned from this year’s WebKitGTK+ hackfest in A Coruña, Spain. The weather was very nice, not too cold and not too rainy, we had great food, great drinks and I got to meet new people, and hang out with old friends, which is always great!

Hackfest black board, photo by Mario

I think this was a very productive hackfest, and as usual a very well organized one! Thanks to the GNOME Foundation for the travel sponsorship, to our friends at Igalia for doing an awesome job at making it happen, and to Collabora for sponsoring it and granting me the time to go there! We got a lot done, and although, as usual, our goals list had many items not crossed, we did cross a few very important ones. I took part in discussions about the new WebKit2 APIs, got to know the new design for GNOME’s Web application, which looks great, discussed Accelerated Compositing along with Joone, Alex, Nayan and Martin Robinson, hacked libsoup a bit to port the multipart/x-mixed-replace patch I wrote to the awesome gio-based infrastructure Dan Winship is building, and did some random misc.

The biggest chunk of time, though, ended up being devoted to a very uninteresting (to outsiders, at least), but very important task: making it possible to more easily reproduce our test results. TL;DR? We made our bots’ and development builds use jhbuild to automatically install dependencies; if you’re using tarballs, don’t worry, your usual autogen/configure/make/make install have not been touched. Now to the more verbose version!

The need

Our three build slaves reporting a few failures

For a couple years now we have supported an increasingly complex and very demanding automated testing infrastructure. We have three buildbot slaves, one provided by Collabora (which I maintain), and two provided by Igalia (maintained by their WebKitGTK+ folks). Those bots build as many check-ins as possible with 3 different configurations: 32-bit release, 64-bit release, and 64-bit debug.

In addition to those, we have another bot called the EWS, or Early Warning System. There are two of those at this moment: one VM provided by Collabora and my desktop, provided by myself. These bots build every patch uploaded to the bugzilla, and report build failures or passes (you can see the green bubbles). They are very important to our development process because if the patch causes a build failure for our port people can often know that before landing, and try fixes by uploading them to bugzilla instead of doing additional commits. And people are usually very receptive to waiting for EWS output and acting on it, except when they take way too long. You can have an idea of what the life of an EWS bot looks like by looking at the recent status for the WebKitGTK+ bots.

Maintaining all of those bots is at times a rather daunting task. The tests require a very specific set of packages, fonts, themes and icons to always report the same size for objects in a render. Upgrades, for instance, had to be synchronized, and usually involve generating new baselines for a large number of tests. You can see in these instructions, for instance, how strict the environment requirements are – yes, we need specific versions of fonts, because they often cause layouts to change in size! At one point we had tests fail after a compiler upgrade, which made rounding act a bit different!

So stability was a very important aspect of maintaining these bots. All of them have the same version of Debian, and most of the packages are pinned to the same version. On the other hand, and in direct contradiction to the stability requirement, we often require bleeding edge versions of some libraries we rely on, such as libsoup. Since we started pushing WebKitGTK+ to be libsoup-only, its own progress has been pretty much driven by WebKitGTK+’s requirements, and Dan Winship has made it possible to make our soup backend much, much simpler and way more featureful. That meant, though, requiring very recent versions of soup.

To top it off, for anyone not running Debian testing and tracking the exact same versions of packages as the bots it was virtually impossible to get the tests to pass, which made it very difficult for even ourselves to make sure all patches were still passing before committing something. Wow, what a mess.

The explosion^Wsolution

So a few weeks back Martin Robinson came up with a proposed solution, which, as he says, is the “nuclear bomb” solution. We would have a jhbuild environment which would build and install all of the dependencies necessary for reproducing the test expectations the bots have. So over the first three days of the hackfest Martin and myself hacked away in building scripts, buildmaster integration, a jhbuild configuration, a jhbuild modules file, setting up tarballs, and wiring it all in a way that makes it convenient for the contributors to get along with. You’ll notice that our buildslaves now have a step just before compiling called “updated gtk dependencies” (gtk is the name we use for our port in the context of WebKit), which runs jhbuild to install any new dependencies or version bumps we added. You can also see that those instructions I mentioned above became a tad simpler.

It took us way more time than we thought for the dust to settle, but it eventually began to. The great thing of doing it during the hackfest was that we could find and fix issues with weird configurations on the spot! Oh, you build with AR_FLAGS=cruT and something doesn’t like it? OK, we fix it so that the jhbuild modules are not affected by that variable. Oh, turns out we missed a dependency, no problem, we add it to the modules file or install them on the bots, and then document the dependency. I set up a very clean chroot which we could use for trying out changes so as to not disrupt the tree too much for the other hackfest participants, and I think overall we did good.

The aftermath

By the time we were done our colleagues who ran other distributions such as Fedora were already being able to get a substantial improvements to the number of tests passing, and so did we! Also, the ability to seamlessly upgrade all the bots with a simple commit made it possible for us to very easily land a change that required a very recent (as in unreleased) version of soup which made our networking backend way simpler. All that red looks great, doesn’t it? And we aren’t done yet, we’ll certainly be making more tweaks to this infrastructure to make it more transparent and more helpful to the users (contributors and other people interested in running the tests).

If you’ve been hit by the instability we caused, sorry about that, poke mrobinson or myself in the #webkitgtk+ IRC channel on FreeNode, and we’ll help you out or fix any issues. If you haven’t, we hope you enjoy all the goodness that a reproducible testing suite has to offer! That’s it for now, folks, I’ll have more to report on follow-up work started at the hackfest soon enough, hopefully =).

By kov at December 07, 2011 11:34 PM

November 29, 2011

Accelerated Compositing in webkit-clutter

Gustavo Noronha

For a while now my fellow Collaboran Joone Hur has been working on implementing the Accelerated Compositing infrastructure available in WebKit in webkit-clutter, so that we can use Clutter’s powers for compositing separate layers and performing animations. This work is being done by Collabora and is sponsored by BOSCH, whom I’d like to thank! What does all this mean, you ask? Let me tell you a bit about it.

The way animations usually work in WebKit is by repainting parts of the page every few milliseconds. What that means in technical terms is that an area of the page gets invalidated, and since the whole page is one big image, all of the pieces that are in that part of the page have to be repainted: the background, any divs, images, text that are at that part of the page.

What the accelerated compositing code paths allow is the creation of separate pieces to represent some of the layers, allowing the composition to happen on the GPU, removing the need to perform lots of cairo paint operations per second in many cases. So if we have a semi-transparent video moving around the page, we can have that video be a separate texture that is layered on top of the page, made transparent and animated by the GPU. In webkit-clutter’s case this is done by having separate actors for each of the layers.

I have been looking at this code on and off, and recently joined Joone in the implementation of some of the pieces. The accelerated compositing infrastructure was originally built by Apple and, for that reason, works in a way that is very similar to Core Animation. The code is still a bit all over the place as we work on figuring out how best to translate the concepts into Clutter concepts, and there are several bugs, but some cool demos are already possible! Below you have one of the CSS3 demos that were made by Apple to demo this new functionality running on our MxLauncher test browser.

You can also see that the non-Accelerated version is unable to represent the 3D space correctly. Also, can you guess which of the two MxLauncher instances is spending less CPU? ;) In this second video I show the debug borders being painted around the actors that were created to represent layers.

The code, should you like to peek or test is available in the ac2 branch of our webkit-clutter repository: http://gitorious.org/webkit-clutter/webkit-clutter/commits/ac2

We still have plenty of work to do, so expect to hear more about it. During our annual hackfest in A Coruña we plan to discuss how this work could be integrated also in the WebKitGTK+ port, perhaps by taking advantage of clutter-gtk, which would benefit both ports, by sharing code and maintenance, and providing this great functionality to Epiphany users. Stay tuned!

By kov at November 29, 2011 05:55 PM

October 09, 2011

Tests Active

Brent Fulgham


Looking back over this blog, I see that it was around a year ago that I got the initial WinCairo buildbot running. I'm very pleased to announce that I have gotten ahold of a much more powerful machine, and am now able to run a full build and tests in slightly under an hour -- a huge improvement over the old hardware which took over two hours just to build the software!

This is a big step, because we can now track regressions and gauge correctness compared to the other platforms. Up to now, testing has largely consisted of periodic manual runs of the test suite, and a separate set of high-level tests run as part of a larger application. This was not ideal, because it was easy for low-level functions in WebKit that I rarely use to be broken and missed.

All is not perfect, of course. Although over 12,000 tests now run (successfully) with each build, that is still only about two thirds of the full test suite. Most of the tests I have disabled are due to small differences in the output layout. I'm trying to understand why these differences exist, but I suspect many of them simply reflect small differences in Cairo compared to the CoreGraphics rendering layer.

If any of you lurkers are interested in helping out, trying out some of the tests I have disabled and figuring out why they fail would be a huge help!

By Brent Fulgham (noreply@blogger.com) at October 09, 2011 02:43 AM

July 14, 2011

An Unseasonable Snowfall

Brent Fulgham

A year or two ago I ported the Cocoa "CallJS" application to MFC for use with WebKit. The only feedback I ever got on the topic was a complaint that it would not build under the Visual Studio Express software many people used.

After seeing another few requests on the webkit-help mailing list for information on calling JavaScript from C++ (and vice-versa), I decided to dust off the old program and convert it to pure WINAPI calls so that VS Express would work with it.

Since my beloved Layered Window patches finally landed in WebKit, I also incorporated a transparent WebKit view floating over the main application window. Because I suck at art, I stole – er, appropriated – the Let It Snow animation example to give the transparent layer something to do.

Want to see what it looks like?

By Brent Fulgham (noreply@blogger.com) at July 14, 2011 06:34 PM

July 10, 2011

Updated WebKit SDK (@r89864)

Brent Fulgham

I have updated the WebKit SDK to correspond to SVN revision r89864.

Major changes in this revision:
* JavaScript engine improvements.
* Rendering improvements.
* New 'Transparent Web View' support.
* General performance and memory use improvements.

This ZIP file also contains updated versions of Zlib, OpenSSL, cURL, and OpenCFLite.

Note that I have stopped statically linking Cairo; I'm starting to integrate some more recent Cairo updates (working towards some new rendering features), and wanted to be able to update it incrementally as changes are made.

This package contains the same Cairo library (in DLL form) as used in previous versions.

As usual, please let me know if you encounter any problems with this build.

[Update] I forgot to include zlib1.dll! Fixed in the revised zip file.

By Brent Fulgham (noreply@blogger.com) at July 10, 2011 04:24 AM

July 05, 2011

WinCairoRequirements Sources Archive

Brent Fulgham

I've posted the 80 MB source archive of the requirements needed to build the WinCairo port of WebKit.

Note that you do NOT need these sources unless you plan on building them yourself or wish to archive the source code for these modules. The binaries are always present in the WinCairoRequirements.zip file, which is downloaded and unzipped to the proper place when you execute the update-webkit --wincairo command.

By Brent Fulgham (noreply@blogger.com) at July 05, 2011 07:39 PM

June 28, 2011

Towards a Simpler WinCairo Build

Brent Fulgham


For the past couple of years, anyone interested in trying to build the WinCairo port of WebKit had to track down a number of support libraries, place them in their development environment's include (and link search) paths, and then cross their fingers and hope everything built.

To make things a little easier, I wrapped up the libraries and headers I use for building and posted them as a zip file on my .Mac account. This made things a little easier, but you still had to figure out where to drop the files and figure out if I had secretly updated my 'requirements.zip' file without telling anyone. Not ideal.

A couple of days ago, while trolling through the open review queue, I ran across a Bug filed by Carl Lobo, which automated the task of downloading the requirements file when running build-webkit --wincairo. This was a huge improvement!

Today, I hijacked Carl's changes and railroaded the patch through the review process (making a few modifications along the way):

  • I renamed my requirements file WinCairoRequirements.zip.

  • I added a timestamp file, so that build-webkit --wincairo can check to see if the file changed, and download it if necessary.

  • I propagated Carl's changes to update-webkit, so that now by adding the --wincairo argument it will update the WinCairoRequirements file.


I'm really excited about this update. If you've been wanting to try out the WinCairo port of WebKit, this would be a great time to try it out. I'd love to hear your experiences!

By Brent Fulgham (noreply@blogger.com) at June 28, 2011 04:42 AM

June 14, 2011

Benchmarking Javascript engines for EFL

Lucas De Marchi

The Enlightenment Foundation Libraries has several bindings for other languages in order to ease the creation of end-user applications, speeding up its development. Among them, there’s a binding for Javascript using the Spidermonkey engine. The questions are: is it fast enough? Does it slowdown your application? Is Spidermonkey the best JS engine to be used?

To answer these questions Gustavo Barbieri created some C, JS and Python benchmarks to compare the performance of EFL using each of these languages. The JS benchmarks were using Spidermonkey as the engine since elixir was already done for EFL. I then created new engines (with only the necessary functions) to also compare to other well-known JS engines: V8 from Google and JSC (or nitro) from WebKit.

Libraries setup

For all benchmarks EFL revision 58186 was used. The setup for each engine follows:

  • Spidermonkey: I’ve used version 1.8.1-rc1 with the already available bindings on EFL repository, elixir;
  • V8: version 3.2.5.1, using a simple binding I created for EFL. I named this binding ev8;
  • JSC: WebKit’s sources are needed to compile JSC. I’ve used revision 83063. Compiling with CMake, I chose the EFL port and enabled the option SHARED_CORE in order to have a separated library for Javascript;

Benchmarks

Startup time: This benchmark measures the startup time by executing a simple application that imports evas, ecore, ecore-evas and edje, brings in some symbols and then iterates the main loop once before exiting. I measured the startup time for both hot and cold cache cases. In the former the application is executed several times in sequence; the latter includes a call to drop all caches so we have to load the library again from disk.
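The post doesn’t show the benchmark script itself, but dropping the caches between cold-cache runs on Linux is typically done with something along these lines (an assumption on my part, not the actual script):

$ sync                                        # flush dirty pages first
$ echo 3 | sudo tee /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes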

Runtime – Stress: This benchmark executes as many frames per second as possible of a render-intensive operation. The application is not so heavy, but it does some loops, math and interacts with EFL. Usually a common application would do far less operations every frame because many operations are done in EFL itself, in C, such as list scrolling that is done entirely in elm_genlist. This benchmark is made of 4 phases:

  • Phase 0 (P0): Un-scaled blend of the same image 16 times;
  • Phase 1 (P1): Same as P0, with additional 50% alpha;
  • Phase 2 (P2): Same as P0, with additional red coloring;
  • Phase 3 (P3): Same as P0, with additional 50% alpha and red coloring;

The C and elixir versions are available in the EFL repository.

Runtime – animation: usually an application doesn’t need “as many FPS as possible”, but instead would like to limit itself to a certain number of frames per second. E.g.: the iPhone’s browser tries to keep a constant 60 FPS. This is the value I used in this benchmark. The same application as in the previous benchmark is executed, but it tries to always keep the same frame rate.

Results

The first computer I used to test these benchmarks on was my laptop. It’s a Dell Vostro 1320, Intel Core 2 Duo with 4 GB of RAM and a standard 5400 RPM disk. The results are below.

Benchmarks on Dell 1320 laptop

First thing to notice is there are no results for the “Runtime – animation” benchmark. This is because all the engines kept a constant 60 FPS and hence there were no interesting results to show. The first benchmark shows that V8’s startup time is the shortest one when considering we have to load the application and libraries from disk. JSC was the slowest and Spidermonkey was in between.

With hot caches, however, we have a completely different scenario, with JSC being almost as fast as the native C application, followed by V8 with a slightly larger delay and Spidermonkey as the slowest one.

The runtime-stress benchmark shows that all the engines perform well when there’s some considerable load in the application, i.e. removing P0 from this scenario. JSC was always at the same speed as native code; Spidermonkey and V8 had an impact only when considering P0 alone.

 

The next computer to consider for executing these benchmarks was a Pandaboard, so we can see how well the engines perform on an embedded platform. The Pandaboard has an ARM Cortex-A9 processor with 1GB of RAM, and the partition containing the benchmarks is on an external flash storage drive. The results for each benchmark follow:

 

Benchmarks on Pandaboard

Once again, runtime-animation is not shown since it had the same results for all engines. For the startup tests, Spidermonkey was now much faster than the others, followed by V8 and JSC, in both hot and cold caches. In the runtime-stress benchmark, all the engines performed well, as on the first computer, but now JSC was the clear winner.

 

There are several points to be considered when choosing an engine to use as a binding for a library such as EFL. The raw performance and startup time seem to be very near those achieved with native code. Recently there were some discussions on the EFL mailing list regarding which engine to choose, so I think it is good to share the numbers above. It’s also important to notice that these bindings take an approach similar to elixir’s, mapping each function call in JavaScript to the corresponding native function. I did this to be fair in the comparison among them, but depending on the use case it’d be good to have a JS binding similar to what Python’s did, embedding the function call in real Python objects.

By Lucas De Marchi at June 14, 2011 05:25 PM

April 29, 2011

Collection of WebKit ports

Holger Freyther

WebKit is a very successful project. It is that in many ways. The code produced seems to be very fast, the code is nice to work on, the people are great, and the parties involved collaborate with each other in the interest of the project. The project is also very successful in the mobile/smartphone space. All the major smartphone platforms but Windows 7 are using WebKit. This all looks great, a big success, but there is one thing that stands out.

From all the smartphone platforms, no one has fully upstreamed their port. There might be many reasons for that, and I think the most commonly heard reason is the time needed to get it upstreamed. It is especially difficult in a field that is moving as fast as the mobile industry. And then again there is absolutely no legal obligation to work upstream.

For most of today I collected the ports I am aware of, put them into one git repository, tried to find the point where they were branched, and rebased their changes. The goal is to make it easier to find interesting things and move them back to upstream. One can find the combined git tree with the tags here. I started with WebOS, moved to iOS, then to Bada and stopped at Android, as I would have to pick the source code for each Android release for each phone from each vendor. I think I will just be happy with the Android git tree for now. At this point I would like to share some of my observations in the order I did the import.

Palm


Palm's release process is manual. In the last two releases they called the file .tgz but forgot to gzip it, and in 2.0.0 the tarball name was in camel case. The thing that is very nice about Palm is that they provide their base and their changes (patch) separately. From looking at the 2.1.0 release it looks like they want to implement complex font rendering for the Desktop version. Earlier versions (maybe it is still the case) lack support for animated GIFs.

iOS


Apple's release process seems to be very structured. The source can be downloaded here. What is worth noting is that the release tarball contains some implementations of WebCore only as .o files, and Apple has stopped releasing the WebKit source code beginning with iOS 4.3.0.

Bada


This port is probably not known by many. The release process seems to be manual as well, the name of directories changed a lot between the releases, they come with a WML Script engine and they do ship something they should not ship.

I really hope that this combined tree is useful for porters that want to see the tricks used in the various ports and don't want to spend the time looking for each port separately.

By zecke (noreply@blogger.com) at April 29, 2011 07:20 PM

February 13, 2011

How to make the GNU Smalltalk Interpreter slower

Holger Freyther

This is another post about a modern Linux based performance measurement utility. It is called perf, it is included in the Linux kernel sources, and it entered the kernel in v2.6.31-rc1. In many ways it is obsoleting OProfile; in fact, for many architectures oprofile is just a wrapper around the perf support in the kernel. perf comes with a few nice applications: perf top provides statistics about which symbols in user and in kernel space are called, perf record records an application (or starts an application to record it), and perf report lets you browse that recording with a very simple CLI utility. There are also tools to bundle the recording and the application in an archive, and a diff utility.

For the last year I was playing a lot with GNU Smalltalk, and someone posted the results of a very simplistic VM benchmark run across many different Smalltalk implementations. In one of the benchmarks GNU Smalltalk scores last among the interpreters, and I wanted to understand why it is slower. In many ways the JavaScriptCore interpreter is a lot like the GNU Smalltalk one: a simple direct-threaded bytecode interpreter, it uses computed goto (it is even compiled with -fno-gcse as indicated by the online help, not that it changed anything for JSC), and it heavily inlines many functions.

There are also some differences: the GNU Smalltalk implementation is a lot older and in C. The first notable difference is that it is a stack machine and not register based; there are global pointers for the SP and the IP. Some magic makes sure that in the hot loop the IP/SP is 'local' in a register and, depending on the available registers, the current argument is kept in one as well. The interpreter definition is in a special file format but is mostly similar to what Interpreter::privateExecute looks like. The global state mostly comes from the fact that it needs to support switching processes, and there might be some event during the run that requires access to the IP to store it so the old process can be resumed. But in general the implementation is already optimized, there is little low hanging fruit, and most experiments result in a slow down.

The two important things are again: having a stable benchmark, and having a tool that helps you know where to look. In my case the important tools are perf stat, perf record, perf report and perf annotate. I have put a copy of the output at the end of this blog post. The stat utility provides one with the number of instructions executed, branches, branch misses (e.g. badly predicted), and L1/L2 cache hits and cache misses.

The stable benchmark helps me to judge if a change is good, bad or neutral for performance within the margin of error of the test. E.g. if I attempt to reduce the code size, the instructions executed should decrease; if I start putting __builtin_expect.. into my code, the number of branch misses should go down as well. The other useful utility is perf report, which allows one to browse the recorded data; this can help identify the methods one wants to start optimizing. It allows annotating these functions inside the simple TUI interface, but does not support searching in it.
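For reference, the workflow sketched above boils down to invocations along these lines (the symbol name passed to perf annotate is just a placeholder, not an actual GNU Smalltalk function):

$ perf record -- gst -f Bench.st    # sample the benchmark run
$ perf report                       # browse which functions ate the cycles
$ perf annotate some_hot_function   # per-instruction view of one symbol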

Because the codebase is already highly optimized any of my attempts should either decrease the code size (and the pressure on the i-cache), the data size (d-cache), remove stores or loads from memory (e.g. reorder instructions), fix branch predictions. The sad truth is that most of my changes were either slow downs or neutral to the performance and it is really important to undo these changes and not have false pride (unless it was also a code cleanup or such).

So after about 14 hours of toying with it the speed ups I have managed to make come from inlining a method to unwind a context (callframe), reordering some compares on the GC path and disabling the __builtin_expect branch hints as they were mostly wrong (something the kernel people found to be true in 2010 as well). I will just try harder, or try to work on the optimizer or attempt something more radical...



$ perf stat gst -f Bench.st
219037433 bytecodes/sec; 6025895 sends/sec

Performance counter stats for 'gst -f Bench.st':

17280.101683 task-clock-msecs # 0.969 CPUs
2076 context-switches # 0.000 M/sec
123 CPU-migrations # 0.000 M/sec
3925 page-faults # 0.000 M/sec
22215005506 cycles # 1285.583 M/sec (scaled from 70.02%)
40593277297 instructions # 1.827 IPC (scaled from 80.00%)
5063469832 branches # 293.023 M/sec (scaled from 79.98%)
70691940 branch-misses # 1.396 % (scaled from 79.98%)
27844326 cache-references # 1.611 M/sec (scaled from 20.02%)
134229 cache-misses # 0.008 M/sec (scaled from 20.03%)

17.838888599 seconds time elapsed


PS: The perf support probably works best on Intel based platforms, and the biggest other problem is that perf annotate has some issues when the code is included from other C files.

By zecke (noreply@blogger.com) at February 13, 2011 08:56 PM

January 17, 2011

Using systemtap userspace tracing...

Holger Freyther

At the 27C3 we were running a GSM network, and during the preparation I noticed a strange performance problem coming from the database library we are using. I filled our database with some dummy data, created a file with the queries we normally run, and executed time cat queries | sqlite3 file as a mini benchmark. I also hacked this code into our main routine and ran it with time as well. For some reason the code running through the database library was five times slower.

I was a bit puzzled and decided to use systemtap to explore this, to build a hypothesis and to also have the tools to answer it. I wanted to find out if it is slow because our database library is doing some heavy work in the implementation, or because we execute a lot more queries behind the scenes. I created the probe below:


probe process("/usr/lib/libsqlite3.so.0.8.6").function("sqlite3_get_table")
{
a = user_string($zSql);
printf("sqlite3_get_table called '%s'\n", a);
}


This probe will be executed whenever the sqlite3_get_table function of the mentioned library is called. $zSql is a variable passed to the sqlite3_get_table function and contains the query to be executed. I convert the pointer to a local string variable and can then print it. Using this simple probe helped me to see which queries were executed by the database library and helped me to do an easy optimisation.

In general it could be very useful to build a set of probes (I think one calls such a set a tapset) that check for API misuse, e.g. calling functions with certain parameters where something else might be better. E.g. in GLib, use truncate instead of assigning "" to a GString, or check for calls to QString::fromUtf16 coming from Qt code itself. On second thought this might be better as a GCC plugin, or both.
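As a sketch of what such a misuse probe could look like – assuming GLib's g_string_assign(GString *string, const gchar *rval) signature and a typical library path, both of which would need adjusting for your system:

probe process("/usr/lib/libglib-2.0.so.0").function("g_string_assign")
{
    str = user_string($rval);
    if (str == "")
        printf("g_string_assign(\"\") called - consider g_string_truncate()\n");
}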

By zecke (noreply@blogger.com) at January 17, 2011 12:41 PM

December 17, 2010

In the name of performance

Holger Freyther

I tend to see people doing weird things and then claiming that the change improves performance. This can be re-ordering instructions to help the compiler, attempting to use multiple cores of your system, or writing a memfill in assembly. On the one hand people can be right and the change does make things faster; on the other hand they could be using assembly to make things look very complicated and justify their pay, and you might feel awkward questioning whether it makes any sense.

In the last couple of weeks I have stumbled on some of those things. For some reason I found this bug report about GLIBC changing the memcpy routine for SSE and breaking the flash plugin (because it uses memcpy in the wrong way). The breakage is justified by the claim that the new memcpy was optimized and is faster. As Linus points out with his benchmark, the performance improvement is mostly just wishful thinking.

Another case was someone providing MIPS optimized pixman code to speed-up all drawing which turned out to be wishful thinking as well...

The conclusion is: if someone claims that things are faster with his patch, do not simply trust him. Make sure he refers to his benchmark and provides numbers from before and after, and maybe even try to run it yourself. If he cannot provide this, you should wonder how he measured the speed-up! There should be no place for wishful thinking in benchmarking. This is one of the areas where Apple's WebKit team is constantly impressing me.

By zecke (noreply@blogger.com) at December 17, 2010 01:48 PM

December 16, 2010

Benchmarking QtWebKit-V8 on Linux

University of Szeged

For some time it has been possible to build and run QtWebKit on Linux using Google's V8 JavaScript engine instead of the default JavaScriptCore. I thought it would be good to see some numbers comparing the runtime performance of the two engines in the same environment and also measuring the performance of the browser bindings.

read more

By andras.becsi at December 16, 2010 01:04 PM

October 23, 2010

Easily embedding WebKit into your EFL application

Lucas De Marchi

This is the first of a series of posts that I’m planning to write using basic examples in EFL, the Enlightenment Foundation Libraries. You may have heard that EFL is reaching its 1.0 release. Instead of starting from the very beginning with the basic functions of these libraries, I decided to go the opposite way, showing the fun stuff that is possible to do. Since I’m also a WebKit developer, let’s put the best of both pieces of software together and have a basic window rendering a webpage.

Before starting off, just some remarks:

  1. I’m using here the basic EFL + WebKit-EFL (sometimes called ewebkit). Developing an EFL application can be much simpler, particularly if you use an additional library with pre-made widgets like Elementary. However, it’s good to know how the underlying stuff works, so I’m providing this example.
  2. This could well have been the last post in a series about EFL, since it uses at least 3 libraries. Don’t be afraid if you don’t understand what a certain function is for or if you can’t get all of EFL and WebKit running right now. Use the comment section below and I’ll do my best to help you.

Getting EFL and WebKit

In order to be able to compile the example here, you will need to compile two libraries from source: EFL and WebKit. For both libraries, you can either check out the latest version from svn or use the latest snapshots provided.

  • EFL:

Grab a snapshot from the download page. How to check out the latest version from svn is detailed here, as well as some instructions on how to compile it.

  • WebKit-EFL:

A very detailed explanation of how to get WebKit-EFL up and running is available on trac. Recently, though, WebKit-EFL snapshots have started to be released too. This isn’t detailed in the wiki yet, but you can grab a snapshot instead of checking out from svn.

hellobrowser!

In the spirit of “hello world” examples, our goal here is to make a window showing a webpage rendered by WebKit. For the sake of simplicity, we will use a default start page and put a WebKit-EFL “widget” covering the entire window. See the screenshot below:

hellobrowser - WebKit + EFL

The code for this example is available here. Pay attention to the comment at the beginning of the file, which explains how to compile it:

gcc -o hellobrowser hellobrowser.c \
     -DEWK_DATADIR="\"$(pkg-config --variable=datadir ewebkit)\"" \
     $(pkg-config --cflags --libs ecore ecore-evas evas ewebkit)

The things worth noting here are the dependencies and a variable. We directly depend on ecore and evas from EFL and on WebKit. We define a variable, EWK_DATADIR, using pkg-config, so our browser can use the default theme for web widgets defined in WebKit. Ecore handles events like mouse and keyboard input, timers, etc., whilst Evas is the library responsible for drawing. In a later post I’ll detail them a bit more. For now, you can read more about them on their official site.

The main function is really simple. Let’s divide it by pieces:

    // Init all EFL stuff we use
    evas_init();
    ecore_init();
    ecore_evas_init();
    ewk_init();

Before you use a library from EFL, remember to initialize it. All of them use their own namespace, so it’s easy to know which library you have to initialize: for example, if you call a function starting by “ecore_”, you know you first have to call “ecore_init()”. The last initialization function is WebKit’s, which uses the “ewk_” namespace.

    window = ecore_evas_new(NULL, 0, 0, 800, 600, NULL);
    if (!window) {
        fprintf(stderr, "something went wrong... :(\n");
        return 1;
    }

Ecore-Evas is then used to create a new window of size 800×600. The other arguments are not relevant for an introduction to the libraries; you can find the complete documentation here.

    // Get the canvas off just-created window
    evas = ecore_evas_get(window);

From the Ecore_Evas object we just created, we grab a pointer to the evas, which is the space in which we can draw by adding Evas_Objects. Basically, an Evas_Object is an object that you draw somewhere, i.e. in the evas. We want to add only one object to our window: the one in which WebKit will render the webpages. So we ask WebKit to create this object:

    // Add a View object into this canvas. A View object is where WebKit will
    // render stuff.
    browser = ewk_view_single_add(evas);

Below I demonstrate a few Evas functions that can be used to manipulate any Evas_Object. Here we are manipulating the just-created WebKit object: moving it to the desired position, resizing it to 780x580px, and then telling Evas to show it. Finally, we show the window we created too. This way we have a window with a WebKit object inside it and a little border.

    // Make a 10px border, resize and show
    evas_object_move(browser, 10, 10);
    evas_object_resize(browser, 780, 580);
    evas_object_show(browser);
    ecore_evas_show(window);

We need to set up a few more things before we have a working application. The first is to give focus to the Evas_Object we are interested in, so that it receives keyboard events. Then we register a function that will be called when the window is closed, so we can properly exit our application.

    // Focus it so it will receive pressed keys
    evas_object_focus_set(browser, 1);
 
    // Add a callback so clicks on "X" on top of window will call
    // main_signal_exit() function
    ecore_event_handler_add(ECORE_EVENT_SIGNAL_EXIT, main_signal_exit, window);

After this, we are ready to show our application, so we start the mainloop. This function will only return when the application is closed:

    ecore_main_loop_begin();

The function called when the application is closed just tells Ecore to quit the main loop, so the function above returns and the application can shut down. See its implementation below:

static Eina_Bool
main_signal_exit(void *data, int ev_type, void *ev)
{
    // "data" is the Ecore_Evas window we passed to ecore_event_handler_add()
    ecore_evas_free(data);
    // Make ecore_main_loop_begin() return so the application can shut down
    ecore_main_loop_quit();
    return EINA_TRUE;
}

Before the application exits, we shut down all the libraries that were initialized, in the opposite order:

    // Destroy all the stuff we have used
    ewk_shutdown();
    ecore_evas_shutdown();
    ecore_shutdown();
    evas_shutdown();

This is a basic working browser with which you can navigate through pages, but there is no entry to set the current URL, no “go back” and “go forward” buttons, etc. To add them, all you have to do is add more Evas_Objects to your Evas and connect them to the object we just created; the sketch just below shows the kind of calls you would wire them to. For a still basic example, but with more implemented, refer to EWebLauncher, which we ship with the WebKit source code. You can find it in the “WebKitTools/EWebLauncher/” folder or online in WebKit’s trac. Eve is another browser, with a lot more features, that uses Elementary in addition to EFL and WebKit. See a blog post about it with some nice pictures.
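
For instance, here is a minimal sketch (my own addition, not part of the original hellobrowser code) of how a URL entry or a button callback might talk to the view: set a URI to load and listen for a load-finished notification via an Evas smart callback. The callback name "load,finished" and the ewk_view_uri_set()/ewk_view_uri_get() functions are my recollection of the WebKit-EFL API of that time; check ewk_view.h in your checkout to confirm.

    // Hypothetical callback: called by Evas when the view emits "load,finished"
    static void
    on_load_finished(void *data, Evas_Object *view, void *event_info)
    {
        printf("finished loading: %s\n", ewk_view_uri_get(view));
    }

    // ... in main(), after browser = ewk_view_single_add(evas):
    evas_object_smart_callback_add(browser, "load,finished",
                                   on_load_finished, NULL);
    ewk_view_uri_set(browser, "http://www.webkit.org/");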

Now, let’s do something funny with our browser. With a bit more lines of code you can turn your browser upside down. Not really useful, but it’s funny. All you have to do is to rotate the Evas_Object WebKit is rendering on. This is implemented by the following function:

// Rotate an evas object by 180 degrees
static void
_rotate_obj(Evas_Object *obj)
{
    Evas_Map *map = evas_map_new(4);

    // Start from the object's current geometry, then rotate it around
    // (400, 300), the center of our 800x600 window
    evas_map_util_points_populate_from_object(map, obj);
    evas_map_util_rotate(map, 180.0, 400, 300);
    evas_map_alpha_set(map, 0);
    evas_map_smooth_set(map, 1);
    evas_object_map_set(obj, map);
    evas_object_map_enable_set(obj, 1);

    evas_map_free(map);
}
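
The snippet above doesn’t show where the function is called from; presumably (this is my guess, the complete source linked below has the author’s actual placement) you simply call it on the view once it has been created and shown:

    // Hypothetical call site, e.g. right after the objects are shown
    evas_object_show(browser);
    ecore_evas_show(window);
    _rotate_obj(browser);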

See the screenshot below and get the complete source code.

EFL + WebKit doing Politreco upside down

By Lucas De Marchi at October 23, 2010 09:53 PM

October 02, 2010

Deploying WebKit, common issues

Holger Freyther

From my exposure to people deploying QtWebKit or WebKit/GTK+, there are some issues that keep re-appearing, and I would like to discuss them here.

  • Weird compile error in JavaScript?
  • It is failing in JavaScriptCore because that is the first part to be built. Most likely the person who provided you with the toolchain has placed a config.h into it. There are a couple of resolutions: one would be to remove the config.h from the toolchain (many things will break), another to use -isystem instead of -I for system includes.
    The best way to find out if you suffer from this problem is to use -E instead of -c to only pre-process the code and see where the various includes are coming from. It is a strategy that is known to work very well.

  • No pages are loaded.
  • Most likely you do not have a DNS server set, have no networking at all, or the system your board is connected to is not forwarding the traffic. Make sure you can ping a website that is supposed to work, e.g. ping www.yahoo.com; the next step would be to use nc to execute a simple HTTP/1.1 GET on the site and see if it works. In most cases you simply lack network connectivity.

  • HTTPS does not work
  • It might be either an issue with Qt or an issue with your system time. SSL certificates carry at least two dates (creation and expiration), and if your system time is after the expiration or before the creation you will have issues. The easiest fix is to add ntpd to your root filesystem to make sure you have the right time.

    The possible issue with Qt is a bit more complex. You can build Qt without OpenSSL support, make it link to OpenSSL, or make it dlopen OpenSSL at runtime. If SSL does not work, it is most likely that you have either built Qt without SSL support, or built it with runtime support but failed to install the OpenSSL library.

    Depending on your skills it might be best to go back to ./configure and make Qt link to OpenSSL to avoid the runtime issue. strings is a very good tool to find out whether your libQtNetwork.so contains SSL support; together with objdump -x (search for the NEEDED entries) it will tell you which configuration you have.

  • Local pages are not loaded
  • This is a pretty common issue for WebKit/GTK+. WebKit/GTK+ uses GIO for local files, and to determine the file type it relies on the freedesktop.org shared-mime-info database. Make sure you have that installed.

  • The page only displays blank
  • This is another issue that comes back from time to time. It only appears on WebKit/GTK+ with the DirectFB backend, but sadly people never report back whether and how they have solved it. You could make a difference and contribute back to the WebKit project.


In general most of these issues can be avoided by using a pre-packaged embedded Linux distribution like Ångström (or even Debian). The biggest benefit of that approach is that someone else has made sure that when you install WebKit, all dependencies will be installed as well and it will just work on your ARM/MIPS/PPC system. It will save you a lot of time.

By zecke (noreply@blogger.com) at October 02, 2010 06:12 AM

August 28, 2010

WebKit

Lucas De Marchi

After some time working with the EFL port of WebKit, I’ve been nominated as an official WebKit developer. Now I have super powers in the official repository :-), but I swear I intend to use them with caution and responsibility. I’ll not forget Uncle Ben’s advice: “with great power comes great responsibility”.

I’m preparing a post to talk about WebKit, EFL, eve (a new web browser based on WebKit + EFL) and how to easily embed a browser in your application. Stay tuned.

By Lucas De Marchi at August 28, 2010 03:15 AM

August 10, 2010

Coscup2010/GNOME.Asia with strong web focus

Holger Freyther

This coming weekend Coscup 2010/GNOME.Asia is taking place in Taipei. The organizers have decided to put a strong focus on the Web, as can be seen in the program.

On Saturday there is a keynote and various talks about HTML5 and node.js. Sunday will see three talks touching on WebKit/GTK+: one about building a tablet OS with WebKit/GTK+, one by Xan Lopez on how to build hybrid applications (a topic I have devoted moiji-mobile.com to), and a talk by me using gdb to explain how WebKit/GTK+ works and how the porting layer interacts with the rest of the code.

I hope the audience will enjoy the presentations, and I am looking forward to attending the conference; there will also be a strong presence of the ex-Openmoko Taiwan engineering team. See you on Saturday/Sunday and drop me an email if you want to talk about WebKit or GSM...

By zecke (noreply@blogger.com) at August 10, 2010 04:32 PM

July 16, 2010

Cross-compiling QtWebKit for Windows on Linux using MinGW

University of Szeged

In this post I'll show you how to configure and compile a MinGW toolchain for cross-compilation on Linux, then how to build Qt using this toolchain, and finally how to compile the Qt port of WebKit from trunk.

read more

By andras.becsi at July 16, 2010 09:31 AM

September 06, 2008

Skia graphics library in Chrome: First impressions

Alp Toker

With the release of the WebKit-based Chrome browser, Google also introduced a handful of new backends for the browser engine, including a new HTTP stack and the Skia graphics library. Google’s Android WebKit code drops have previously featured Skia for rendering, though this is the first time the sources have been made freely available. The code is apparently derived from Google’s 2005 acquisition of North Carolina-based software firm Skia and is now provided under the open source Apache License 2.0.

The library weighs in at some 80,000 lines of C++ code (to Cairo’s 90,000 as a ballpark reference). Some of the differentiating features include:

  • Optimised software-based rasteriser (module sgl/)
  • Optional GL-based acceleration of certain graphics operations including shader support and textures (module gl/)
  • Animation capabilities (module animator/)
  • Some built-in SVG support (module svg/)
  • Built-in image decoders: PNG, JPEG, GIF, BMP, WBMP, ICO (module images/)
  • Text capabilities (no built-in support for complex scripts)
  • Some awareness of higher-level UI toolkit constructs (platform windows, platform events): Mac, Unix (sic. X11, incomplete), Windows, wxWidgets
  • Performance features
    • Copy-on-write for images and certain other data types
    • Extensive use of the stack, both internally and for API consumers, to avoid needless allocations and memory fragmentation
    • Thread-safety to enable parallelisation

The library is portable and has (optional) platform-specific backends:

  • Fonts: Android / Ascender, FreeType, Windows (GDI)
  • Threading: pthread, Windows
  • XML: expat, tinyxml
  • Android shared memory (ashmem) for inter-process image data references

Skia Hello World

In this simple example we draw a few rectangles to a memory-based image buffer. This also demonstrates how one might integrate with the platform graphics system to get something on screen, though in this case we’re using Cairo to save the resulting image to disk:

    #include "SkBitmap.h"
    #include "SkDevice.h"
    #include "SkPaint.h"
    #include "SkRect.h"
    #include <cairo.h>
     
    int main()
    {
      SkBitmap bitmap;
      bitmap.setConfig(SkBitmap::kARGB_8888_Config, 100, 100);
      bitmap.allocPixels();
      SkDevice device(bitmap);
      SkCanvas canvas(&device);
      SkPaint paint;
      SkRect r;
     
      paint.setARGB(255, 255, 255, 255);
      r.set(10, 10, 20, 20);
      canvas.drawRect(r, paint);
     
      paint.setARGB(255, 255, 0, 0);
      r.offset(5, 5);
      canvas.drawRect(r, paint);
     
      paint.setARGB(255, 0, 0, 255);
      r.offset(5, 5);
      canvas.drawRect(r, paint);
     
      {
        SkAutoLockPixels image_lock(bitmap);
        cairo_surface_t* surface = cairo_image_surface_create_for_data(
            (unsigned char*)bitmap.getPixels(), CAIRO_FORMAT_ARGB32,
            bitmap.width(), bitmap.height(), bitmap.rowBytes());
        cairo_surface_write_to_png(surface, "snapshot.png");
        cairo_surface_destroy(surface);
      }
     
      return 0;
    }

You can build this example yourself by linking statically against the libskia.a library generated during the Chrome build process on Linux.

Not just for Google Chrome

The Skia backend in WebKit, the first parts of which are already hitting SVN (r35852, r36074), isn’t limited to the Chrome/Windows configuration, and some work has already been done to get it up and running on Linux/GTK+ as part of the ongoing porting effort.

By alp at September 06, 2008 12:11 AM

June 12, 2008

WebKit Meta: A new standard for in-game web content

Alp Toker

Over the last few months, our browser team at Nuanti Ltd. has been developing Meta, a brand new WebKit port suited to embedding in OpenGL and 3D applications. The work is being driven by Linden Lab, who are eagerly investigating WebKit for use in Second Life.

While producing Meta we’ve paid great attention to resolving the technical and practical limitations encountered with other web content engines.

uBrowser running with the WebKit Meta engine

High performance, low resource usage

Meta is built around WebKit, the same engine used in web browsers like Safari and Epiphany, and features some of the fastest content rendering around as well as nippy JavaScript execution with the state-of-the-art SquirrelFish VM. The JavaScript SDK is available independently of the web renderer for sandboxed client-side game scripting and automation.

It’s also highly scalable. Some applications may need only a single browser context, but virtual worlds often need to support hundreds of web views or more, each with active content. To optimize for this use case, we’ve cut down resource usage to an absolute minimum and tuned performance across the board.

Stable, easy to use cross-platform SDK

Meta features a single, rock-solid API that works identically on all supported platforms including Windows, OS X and Linux. The SDK is tailored specifically to embedding and allows tight integration (shared main loop or operation in a separate rendering thread, for example) and hooks to permit seamless visual integration and extension. There is no global setup or initialization, and the number of views can be adjusted dynamically to meet resource constraints.

Minimal dependencies

Meta doesn’t need a conventional UI toolkit and doesn’t need any access to the underlying windowing system or the user’s filesystem to do its job, so we’ve done away with these concepts almost entirely. It adds only a few megabytes to the overall redistributable application’s installed footprint and won’t interfere with any pre-installed web browsers on the user’s machine.

Nuanti will be offering commercial and community support and is anticipating involvement from the gaming industry and homebrew programmers.

In the mid term, we aim to submit components of Meta to the WebKit Open Source project, where our developers are already actively involved in maintaining various subsystems.

Find out more

Today we’re launching meta.nuanti.com and two mailing lists to get developers talking. We’re looking to make this site a focal point for embedders, chock-full of technical details, code samples and other resources.

By alp at June 12, 2008 09:35 AM

April 21, 2008

Acid3 final touches

Alp Toker

Recently we’ve been working to finish off and land the last couple of fixes to get a perfect pixel-for-pixel match against the reference Acid3 rendering in WebKit/GTK+. I believe we’re the first project to achieve this on Linux — congratulations to everyone on the team!

Epiphany using WebKit r32284

We also recently announced our plans to align more closely with the GNOME desktop and mobile platform. To this end we’re making a few technology and organisational changes that I hope to discuss in an upcoming post.

By alp at April 21, 2008 02:38 AM

April 06, 2008

WebKit Summer of Code Projects

Alp Toker

With the revised deadline for Google Summer of Code ’08 student applications looming, we’ve been getting a lot of interest in browser-related student projects. I’ve put together a list of some of my favourite ideas.

If in doubt, now’s the time to submit proposals. Already-listed ideas are the most likely to get mentored but students are free to propose their own ideas as well. Proposals for incremental improvements will tend to be favoured over ideas for completely new applications, but a proof of concept and/or roadmap can help when submitting plans for larger projects.

Update: There’s no need to keep asking about the status of an application on IRC/private mail etc. It’s a busy time for the upstream developers but they’ll get back in touch as soon as possible.

By alp at April 06, 2008 08:40 PM

March 27, 2008

WebKit gets 100% on Acid3

Alp Toker

Today we reached a milestone with WebKit/GTK+ as it became the first browser engine on Linux/X11 to get a full score on Acid3, shortly after the Acid3 pass by WebKit for Safari/Mac.

Acid3
Epiphany using WebKit r31371

There is actually still a little work to be done before we can claim a flawless Acid3 pass. Two of the most visible remaining issues in the GTK+ port are :visited (causing the “LINKTEST FAILED” notice in the screenshot) and the lack of CSS text-shadow support in the Cairo/text backend, which is needed to match the reference rendering.

It’s amazing to see how far we’ve come in the last few months, and great to see the WebKit GTK+ team now playing an active role in the direction of WebCore as WebKit continues to build momentum amongst developers.

Update: We now also match the reference rendering.

By alp at March 27, 2008 09:06 PM

March 15, 2008

Bossa Conf ’08

Alp Toker

Am here in the LHR lounge. In a couple of hours we take off for the INdT Bossa Conference in Pernambuco, Brazil, via Lisbon. Bumped into Pippin, who will be presenting Clutter. Also looking forward to Lennart’s PulseAudio talk, amongst others.

If you happen to be going, drop by my WebKit Mobile presentation, 14:00 in Room 01 this Monday. We have a small surprise waiting for Maemo developers.

WebKit Mobile

By alp at March 15, 2008 03:29 AM