June 29, 2015

Manuel Rego: CSS Grid Layout is just around the corner (CSSConf US 2015)

Igalia WebKit

Coming back to real life after a wonderful week in New York City is not that easy, but here we are on the other side of the pond, writing about CSS Grid Layout again.

First, kudos to Bocoup for organizing CSSConf US 2015, especially to Adam Sontag and the rest of the conference staff. You were really supportive during the whole week. And the videos with live transcripts were available just a few days after the conference, awesome job! The only issue was the internet connection, which was really flaky.

So, yeah I attended CSSConf this year, but not only that, I was also speaking about CSS Grid Layout and the video of my talk is already online together with the slides.

During the talk I described the basic concepts, syntax and features of CSS Grid with different live coding examples. Then I tried to explain the main tasks that the browser has to do in order to render a grid and gave some tips about grid performance. Finally, we reviewed browser adoption and the status of the Chromium/Blink and Safari/WebKit implementations that Igalia is doing.

CSS Grid Layout is just around the corner talk sketchnotes by Susan

The feedback about my talk was incredibly positive and everybody seemed really excited about what CSS Grid Layout can bring to the web platform. Big thanks to you all!

Of course, there were other great talks at CSSConf, as you can check in the videos. Off the top of my head, I loved the one by Lea Verou, an impressive talk as usual, where she even released a polyfill for conic gradients on stage. SVG and animations had two nice talks by Chris Coyier and Sarah Drasner. PostCSS and inline styles were also hot topics. Responsive (and responsible!) images, Fun.css and CSS? WTF! were also great (and I’m probably forgetting some others).

Lastly, on Thursday night we attended BrooklynJS, which had a great panel discussing CSS. The inline styles vs stylesheets topic became hot, as projects like React are moving people away from stylesheets. Chris Coyier (one of the panelists and also a speaker at CSSConf) wrote a nice post last week giving a good overview of this topic. Also, The Four Fives were amazing!

On top of that, as part of the collaboration between Igalia and Bloomberg, I was visiting their fancy office in Manhattan. I spent a great time there talking about grids with several people from the team. They really believe that CSS Grid Layout will change the future of the web benefiting lots of people in different use cases, and hopefully helping to alleviate performance issues in complex scenarios.

Igalia and Bloomberg working together to build a better web

Looking forward to the next opportunity to talk about CSS Grid Layout. Keeping up the hard work to make it a reality as soon as possible!

June 29, 2015 10:00 PM

June 24, 2015

Web Inspector Console Improvements

Surfin’ Safari

The console is an essential part of Web Inspector. Evaluating expressions in the quick console is one of the primary ways of interacting with the inspected page. Logs, errors, and warnings emitted from the page show up in the console and exploring or interacting with these objects is a given while debugging.

We recently improved both the Console and Object views in Web Inspector to make them more powerful and fun to use. Our main focus was getting quicker access to useful data and modernizing them to work better with the new changes in JavaScript.

Basics – Object Previews, Trees, and $n

Object previews allow you to see the first few properties without needing to expand them. You’ll notice that each evaluation provides you with a “$n” debugger variable to refer back to that object later. These special variables are known only to the tools, so you won’t be cluttering the page with temporary variables. $0 still exists and refers to the current selected node in the DOM Tree.
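For example, a quick console session might look like this; the exact $n index depends on your session, so the numbers below are only illustrative:

// Evaluate an expression; the Console labels the result with a $n variable, e.g. $1.
({ name: "Ada", scores: [1, 2, 3] })

// Later, refer back to that exact object through the saved reference:
$1.scores.push(4)

// $0 always refers to the node currently selected in the DOM Tree:
$0.classList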

Object Preview

When expanded, the full object tree view cleanly separates properties and API. Again, we use object previews where possible to reveal more data at a glance. The icons for each property correspond to the type of the value for that property. For example, in the image below you’ll see properties with number values have a blue N icon, strings a red S, functions a green F, etc. The icons give objects a visual pattern, and make it easy to visually find a particular property or an unexpected change in the normal data an object holds.

Object Tree

Supporting New Types

Web Inspector has always had great support for inspecting certain built-in JavaScript types, such as Arrays, and DOM types, like Nodes. Web Inspector has improved those views and now has comprehensive support for all of the built-in JavaScript types. This includes the new ES6 types (Symbol, Set, Map, WeakSet, WeakMap, Promises, Classes, Iterators).

Array, Set, and Map object trees

WebKit’s tools are most useful when they show internal state of objects, known only to the engine, that is otherwise inaccessible. For example, showing the current status of Promises:

Promises

Or upcoming values of native Iterators:

Iterators

Other interesting cases are showing values in WeakSets and WeakMaps, or showing the original target function and bound arguments for bound functions.
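To see these views yourself, you can log a few of the new types from the console; the variable names below are just for illustration:

let set = new Set(["a", "b", "a"]);
let map = new Map([["x", 1], ["y", 2]]);
let weak = new WeakSet([document.body]);
let pending = new Promise(() => {});   // stays pending, so the internal status is visible
let bound = function greet(name) { return "hi " + name; }.bind(null, "Ada");
console.log(set, map, weak, pending, bound);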

API View

When expanding an object’s prototype you get a great API view showing what methods you can call on the object. The API view always provides parameter names for user functions and even provides curated versions for native functions. The API view makes it really convenient to look up or discover the ways that you can interact with objects already available to you in the console.

Array API View; Local Storage Object Tree

As an added bonus, if you are working with ES6 Classes and log a class by its name or its constructor you immediately get the API view for that class.
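For example, with a class like this one (a made-up sample), logging the class by name or via its constructor brings up the API view for its methods:

class Point {
    constructor(x, y) { this.x = x; this.y = y; }
    distanceTo(other) { return Math.hypot(this.x - other.x, this.y - other.y); }
    static origin() { return new Point(0, 0); }
}

console.log(Point);            // API view for the class
console.log(new Point(1, 2));  // expanding the instance's prototype shows the same API view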

Interactivity

Object trees are more interactive. Hover a property icon to see the property’s descriptor attributes. Hover the property name to see the exact path you can use to access the property. Getters can be invoked, and their results can be further explored.
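For example, with an object like the following (illustrative), the getter is not run when the object is logged; you can invoke it later from the Object tree when you actually want its value:

const account = {
    id: 42,
    get balance() { return this.id * 100; }   // only evaluated when invoked in the Object tree
};
console.log(account);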

Property Descriptor tooltip; Property Path tooltip

Context menus also provide more options. One of the most powerful features is that with any value in an Object tree you can use the context menu and select “Log Value” to re-log the value to the Console. This immediately creates a $n reference to the live object, letting you interact with it or easily reference it again later.

Console Messages

Console messages have also had a UI refresh, making logs, errors, warnings, and their location links stand out more:

Console Messages

Feedback

These enhancements are available to use in WebKit Nightly Builds. We would love to hear your feedback! You can send us quick feedback on Twitter (@JosephPecoraro, @xeenon), file a bug report, or even consider contributing your own enhancements!

By Joseph Pecoraro at June 24, 2015 09:00 PM

Javier Fernández: Performance analysis of Grid Layout

Igalia WebKit

Now that we have a fairly complete implementation of the CSS Grid Layout specification, it’s time to take care of performance analysis and optimizations. In this essay, which is the first of a series of posts about performance, I’ll briefly introduce how to use the Blink (Chrome) and WebKit (Safari) performance analysis tools, describe some of the most interesting cases I’ve seen during my work on the implementation of this spec and, finally, present a basic case to compare the Flexbox and Grid layout models, which I’d like to evolve and analyze further in the coming months.

Performance analysis tools

Both the WebKit and Blink projects provide several useful and easy to use scripts (Python) to run a set of test cases, take different measurements and do some early analysis. They were written before the fork, which is why the related documentation can be found in WebKit’s Trac, but both engines still use them, for the time being.

Tools/Scripts/run-perf-tests
Tools/Scripts/webkitpy/performance_tests/

There is a wide set of performance tests under the PerformanceTests folder, at Blink’s/WebKit’s root directory, but even though both engines share a substantial number of tests, there are some differences.

(blink’s root directory) $ ls PerformanceTests/
Bindings BlinkGC Canvas CSS DOM Dromaeo Events inspector Layout Mutation OWNERS Parser resources ShadowDOM Skipped SunSpider SVG XMLHttpRequest XSSAuditor

The Chromium project has introduced a new performance tool, called Telemetry, which, in addition to running the above mentioned tests, is designed to execute more complex cases like running specific PageSets or doing benchmarking to compare results with a preset recording (WebPageRelay). It’s also possible to send patches to performance try bots, directly from the gclient or git (depot_tools) command line. There is quite a lot of information available in the following links:

Regarding profiling tools, it’s possible in both WebKit and Blink to use the --profiler option when running the performance tests so we can collect profiling data. However, while WebKit recommends perf for Linux, Google’s Blink engine provides some alternatives.

CSS Grid Layout performance tests and current status

While implementing a new browser feature it’s not easy to measure performance, since the code evolves so much and so quickly and, what’s worse, it’s hard to stay aware of regressions introduced by new logic. When the feature’s syntax changes or there is missing or incomplete functionality, it’s not always possible to establish a well defined baseline for performance. It’s also a tough decision to determine which use cases we might care about; obviously the faster the better, but adding performance optimizations usually complicates code, it may affect its robustness and it could lead to unexpected, and even worse, hard to find bugs.

At the time of this writing, we had 3 basic performance tests: auto-sized tracks, fixed-sized tracks and stretch alignment.

Why have we selected those use cases to measure and keep track of performance regressions? First of all, note that auto-sizing is one of the most expensive branches inside the grid track sizing algorithm, so we are really interested in both improving it and keeping track of regressions on this code path.

body {
    display: grid;
    grid-template-rows: repeat(100, auto);
    grid-template-columns: repeat(20, auto);
}
.gridItem {
    height: 200px;
    width: 200px;
}

On the other hand, fixed-sized is the easiest/fastest path of the algorithm, so besides the importance of avoiding regressions (when possible), it’s also a good case to compare against auto-sized.

body {
    display: grid;
    grid-template-rows: repeat(100, 200px);
    grid-template-columns: repeat(20, 200px);
}
.gridItem {
    height: 200px;
    width: 200px;
}

Finally, a stretching use case was added because it’s the default alignment value for grid items and the two test cases already described use fixed size items, hence no stretch (even though items fill the whole grid cell area). Given that I implemented CSS Box Alignment support for grid, I was aware of how expensive the stretching logic is, so I considered it an important use case to analyze and optimize as much as possible. Actually, I’ve already introduced several optimizations because the early implementation was quite slow, around 40% slower than using any other basic alignment (start, end, center). We will talk more about this later, when we analyze a case comparing Flexbox and Grid layout performance.

body {
    display: grid;
    grid-template-rows: repeat(100, 200px);
    grid-template-columns: repeat(20, 200px);
}
.gridItem {
    height: auto;
    width: auto;
}

The basic HTML body of these 3 tests is quite simple because we want to analyze the performance of very specific parts of the Grid Layout logic, in order to detect regressions in sensitive code paths. We’d like to eventually have some real use cases to analyze and create many more performance tests, but the Chrome performance platform is definitely not the place to do so. The following graphs show the performance evolution during 2015 for the 3 tests we have defined so far.

grid-performance-overview

Note that the yellow trace shows data taken from a reference build, so we can discount temporary glitches on the machine running the performance tests of the target build, which are shown in the blue trace; this reference trace is also useful to detect invalid regression alerts.

Why is performance so different for these cases?

The 3 tests we have for Grid Layout use runs/second values as a way to measure performance; this is the preferred method for both the WebKit and Blink engines because we can detect regressions with relatively small tests (a rough sketch of the idea behind such a test is shown after the list below). It’s possible, though, to do other kinds of measurements. Looking at the graphs above we can extract the following data:

  • auto-sized grid: around 650 runs/sec
  • fixed-sized grid: around 1400 runs/sec
  • fixed-sized stretched grid: around 1250 runs/sec
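To make those runs/second figures concrete, here is a minimal, hand-rolled sketch of what such a test measures; the real tests use WebKit’s shared performance test harness rather than this code, and the element id here is an assumption:

// Hypothetical runs/second measurement: invalidate and force a full layout of
// the grid container repeatedly, and count how many runs fit in one second.
function measureRunsPerSecond(run, seconds = 1) {
    let runs = 0;
    const start = performance.now();
    while (performance.now() - start < seconds * 1000) {
        run();
        runs++;
    }
    return runs / seconds;
}

const grid = document.getElementById("grid");   // assumed id of the container styled as above
let toggle = false;
console.log(measureRunsPerSecond(() => {
    toggle = !toggle;
    grid.style.width = toggle ? "500px" : "600px";   // invalidate layout
    grid.offsetHeight;                               // force a synchronous layout
}));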

Before analyzing possible causes of the performance drop for each case, I’ve defined some additional tests to stress these 3 cases even more, so we can see how grid size affects the obtained results. I defined 20 tests for these cases, each one with a different number of grid items, from 10×10 up to 200×200 grids. I ran those tests on my own laptop, so let’s take the absolute numbers of each case with a grain of salt; the differences between these 3 scenarios should still be consistent, though. The table below shows some numeric results of this experiment.

grid-fixed-VS-auto-VS-stretch

First of all, recall that these 3 tests produce the same web visualization, consisting of grids with NxN items of 100px each. The only difference is the grid layout strategy used to produce such a result: auto-sizing, fixed-sizing and stretching. So now, focusing on the previous table’s data, we can evaluate the cost, in terms of layout performance, of using auto-sized tracks for defining the grid (which may be the only solution for certain cases). The performance drop even grows with the number of grid items, but we can conclude that it stabilizes at around 60%. On the other hand, stretching is also slower but, unlike auto-sized, in this case the performance drop does not show a strong dependency on grid size, staying more or less constant at around 15%.

grid-performance-graphs-2

Impact of auto-sized tracks in layout performance

Basically, the track sizing algorithm can be described in the following 4 steps:

  • 1- Initialize per Grid track variables.
  • 2- Resolve content-based TrackSizingFunctions.
  • 3- Grow all Grid tracks in GridTracks from their baseSize up to their growthLimit value until freeSpace is exhausted.
  • 4- Grow all Grid tracks having a fraction as the MaxTrackSizingFunction.

These steps will be executed twice: a first cycle for determining column track sizes and then another cycle to set row track sizes, which may depend on the grid’s width. When using just fixed-sized tracks, in the very simple case we are testing, the only computation required to determine the grid’s size is completing step 1 and determining the free available space based on the specified fixed-size values of each track.

// 1. Initialize per Grid track variables.
for (size_t i = 0; i < tracks.size(); ++i) {
    GridTrack& track = tracks[i];
    GridTrackSize trackSize = gridTrackSize(direction, i);
    const GridLength& minTrackBreadth = trackSize.minTrackBreadth();
    const GridLength& maxTrackBreadth = trackSize.maxTrackBreadth();
 
    track.setBaseSize(computeUsedBreadthOfMinLength(direction, minTrackBreadth));
    track.setGrowthLimit(computeUsedBreadthOfMaxLength(direction, maxTrackBreadth, track.baseSize()));
 
    if (trackSize.isContentSized())
        sizingData.contentSizedTracksIndex.append(i);
    if (trackSize.maxTrackBreadth().isFlex())
        flexibleSizedTracksIndex.append(i);
}
for (const auto& track: tracks) {
    freeSpace -= track.baseSize();
}

Focusing now on the auto-sized scenario, we will have the overhead of resolving content-sized functions for all the grid items.

// 2. Resolve content-based TrackSizingFunctions.
if (!sizingData.contentSizedTracksIndex.isEmpty())
    resolveContentBasedTrackSizingFunctions(direction, sizingData);

I didn’t add the source code of resolveContentBasedTrackSizingFunctions because it’s quite complex, but basically it implies a cost proportional to the number of grid tracks (a minimum of 2x), in order to determine the minContent and maxContent values for each grid item. It might imply additional computation overhead when using spanning items; it would require sorting them based on their spanning value and iterating over them again to resolve their content-sized functions.

Some issues may be interesting to analyze in the future:

  • How much does each content-sized track cost?
  • What is the impact on performance of using flexible-sized tracks? Would it be the worst-case scenario? Considering it will require following all four steps of the track sizing algorithm, it likely will.
  • What are the performance implications of using spanning items?

Why is stretching such a performance drain?

This is an interesting issue, given that stretch is the default value for both Grid and Flexbox items. Actually, it’s the root cause of why Grid beats Flexbox in terms of layout performance in the cases where stretch alignment is used. As I’ll explain later, Flexbox doesn’t have the optimizations I’ve implemented for Grid Layout.

The stretching logic takes place during the grid container’s layout operations, after all tracks have their size precisely determined and we have properly computed all grid track positions relative to the grid container. It happens before the alignment logic is executed because stretching may imply changing some grid items’ size, hence they will be marked for layout (if they weren’t already).

Obviously, stretching only takes place when the corresponding Self Alignment properties (align-self, justify-self) have either auto or stretch as their value, but there are other conditions that must be fulfilled to trigger this operation:

  • box’s computed width/height (as appropriate to the axis) is auto.
  • neither of its margins (in the appropriate axis) are auto
  • still respecting the constraints imposed by min-height/min-width/max-height/max-width

In that scenario, stretching logic implies the following operations:

LayoutUnit stretchedLogicalHeight = availableAlignmentSpaceForChildBeforeStretching(gridAreaBreadthForChild, child);
LayoutUnit desiredLogicalHeight = child.constrainLogicalHeightByMinMax(stretchedLogicalHeight, -1);
 
bool childNeedsRelayout = desiredLogicalHeight != child.logicalHeight();
if (childNeedsRelayout || !child.hasOverrideLogicalContentHeight())
    child.setOverrideLogicalContentHeight(desiredLogicalHeight - child.borderAndPaddingLogicalHeight());
if (childNeedsRelayout) {
    child.setLogicalHeight(0);
    child.setNeedsLayout();
}
 
LayoutUnit LayoutGrid::availableAlignmentSpaceForChildBeforeStretching(LayoutUnit gridAreaBreadthForChild, const LayoutBox& child) const
{
    LayoutUnit childMarginLogicalHeight = marginLogicalHeightForChild(child);
 
    // Because we want to avoid multiple layouts, stretching logic might be performed before
    // children are laid out, so we can't use the child cached values. Hence, we need to
    // compute margins in order to determine the available height before stretching.
    if (childMarginLogicalHeight == 0)
        childMarginLogicalHeight = computeMarginLogicalHeightForChild(child);
 
    return gridAreaBreadthForChild - childMarginLogicalHeight;
}

In addition to the extra layout required for changing the grid item’s size, computing the available space for stretching adds additional overhead, above all if we have to compute the grid item’s margins because some layout operations are still incomplete.

Given that the grid container relies on generic block layout operations to determine the stretched width, this specific logic is only executed for determining the stretched height. Hence the performance drop is alleviated, compared with the auto-sized tracks scenario.

Grid VS Flexbox layout performance

One of the main goals of the CSS Grid Layout specification is to complement the Flexbox layout model for 2 dimensions. It’s to be expected that creating grid designs with Flexbox will be less efficient than using a layout model specifically designed for these cases, not only regarding CSS syntax, but also regarding layout performance.

However, I think it’s interesting to measure Grid Layout performance in 1-dimensional cases, usually managed using Flexbox, so we can have comparable scenarios to evaluate both models. In this post I’ll start with such cases, using a very simple one on this occasion. I’d like to cover more complex examples in future posts, the ones more common in Flexbox based designs.

So, let’s consider the following simple test case:

<div class="className">
   <div class="i1">Item 1</div> 
   <div class="i2">Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</div>
   <div class="i3">Item 3 longer</div>
</div>

I evaluated the simple HTML example above with both Flexbox and Grid layouts to measure performance. I used a CPU profiler to figure out where the bottlenecks are for each model, trying to explain where the differences come from. So, I defined 2 CSS classes, one for each layout model, as follows:

.flex {
    background-color: silver;
    display: flex;
    height: 100px;
    align-items: start;
}
.grid {
    background-color: silver;
    display: grid;
    grid-template-columns: 100px 1fr auto;
    grid-template-rows: 100px;
    align-items: start;
    justify-items: start;
}
.i1 { 
    background-color: cyan;
    flex-basis: 100px; 
}
.i2 { 
    background-color: magenta;
    flex: 1; 
}
.i3 { 
    background-color: yellow; 
}

Given that there is no concept of row in Flexbox, I evaluated the performance of 100 up to 2000 grid or flex containers, creating 20 tests to be run inside the Chrome performance framework described at the beginning of this post. You can check out the resources and a script to generate them at our github examples repo.
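For reference, a generator along these lines could be as simple as the following sketch; this is not the actual script from the repo, but the markup and class names come from the example above:

// Hypothetical test generator: appends N copies of the example markup,
// laid out either with the .flex or the .grid class.
function buildContainers(count, className) {
    const fragment = document.createDocumentFragment();
    for (let i = 0; i < count; i++) {
        const container = document.createElement("div");
        container.className = className;
        container.innerHTML =
            '<div class="i1">Item 1</div>' +
            '<div class="i2">Lorem ipsum dolor sit amet, consectetur adipiscing elit.</div>' +
            '<div class="i3">Item 3 longer</div>';
        fragment.appendChild(container);
    }
    document.body.appendChild(fragment);
}

buildContainers(1000, "grid");   // or "flex"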

flexVSgrid

When comparing both layout models with respect to layout times, we see clearly that Grid Layout beats Flexbox when using the default values for the CSS properties controlling layout itself and alignment, which is stretch for these containers. As was explained before, the stretching logic adds an important computation overhead which, as we can see in the numeric table above, weighs more heavily on Flexbox than on Grid.

Looking at the plot about differences in layout time, we see that for the default case, the Grid performance improvement stabilizes at around 7%. However, when we avoid the stretching logic, for instance by using any other alignment value, layout performance is considerably worse than Flexbox’s for this test case, around 15% slower. This is understandable, as this test case is the ideal one for Flexbox, while a bit artificial for Grid; using a single Grid with N rows improves performance considerably, getting much better numbers than Flexbox, but we will see these cases in future analysis.

Grid Layout’s better results for the default case (stretch) are explained by the several optimizations I implemented for Grid. Probably Flexbox should do the same, as it’s the default value and it could affect many sites using this layout model in their designs.

Thanks to Bloomberg for sponsoring this work, as part of the efforts that Igalia has been doing all these years pursuing a better and more open web.

Igalia & Bloomberg logos

By jfernandez at June 24, 2015 12:03 PM

June 12, 2015

Introduction to WebKit Content Blockers

Surfin’ Safari

Browser extensions have been a big part of modern browsers for a while now. With extensions, everyone can change their browser to their preferences.

Today, there are several models to extend browsers. Most extensions are written in JavaScript and loaded by the browser, following a model introduced by Mozilla over a decade ago. That model is also used with WebKit. It is the classical way to write extensions for OS X Safari, Web/Epiphany, and other browsers.

On OS X and iOS, there is also the concept of App Extensions with a different approach to security and performance. They are essentially little sandboxed applications that are launched on demand to extend some specific piece of functionality, known as an extension point.

The JavaScript extensions model has been great for many use cases, but there is one category of extensions where members of the WebKit project felt we should do better: the content blocking extensions. Such extensions are the most popular kind; they let users decide what should load and not load, who can track them, what should be visible on pages, etc.

The reason we are unhappy about the JavaScript-based content blocking extensions is that they have significant performance drawbacks. The current model uses a lot of energy, reducing battery life, and increases page load time by adding latency for each resource. Certain kinds of extensions also reduce the runtime performance of webpages. Sometimes, they can allocate tremendous amounts of memory, which goes against our efforts to reduce WebKit’s memory footprint.

It is an area where we want to do better. We are working on new tools to enable content blocking at a fraction of the cost.

One new feature we are developing allows describing content blocking rules in a structured format ahead-of-time, declaratively, rather than running extension-provided code at the moment a decision about blocking needs to be made. This model allows WebKit to compile the ruleset into a format that’s very efficient to apply to loads and page content.

Content blockers in action

Before I dive into the details, let’s see what a declarative extension looks like in the new format.

In essence, each content blocker extension is a list of rules that tells the engine how to act when loading a resource.

The rules are written in JSON format. For example, here is an extension with two rules:


    [
        {
            "trigger": {
                "url-filter": "evil-tracker.js"
            },
            "action": {
                "type": "block"
            }
        },
        {
            "trigger": {
                "url-filter": ".*",
                "resource-type": ["image", "style-sheet"]
                "unless-domain": ["reputable-content-server.com"]
            },
            "action": {
                "type": "block-cookies"
            }
        }
    ]

The first rule activates for any URL that contains the string “evil-tracker.js”. When the first rule is activated, the load is blocked.

The second rule activates for any resource loaded as an image or a style sheet on any domain except “reputable-content-server.com”. When the rule is activated, cookies are stripped from the request before sending it to the server.

The rules are passed to the engine by the browser. In iOS Safari, it is done through the native app extension mechanism. In OS X Safari, browser extensions can provide their rules through a new API. If you hack on WebKit, MiniBrowser also lets you load rule sets directly from the Debug menu.

Once the rules are passed to WebKit, they are compiled into an efficient bytecode format. The engine then executes this bytecode for each resource request, and uses the result to modify the request or inject CSS.

The bytecode is executed for each resource in the network subsystem. The goal is to reduce the latency between a request being created by the page and the request being actually dispatched over the network.

Content blocker format

The content blocker rules are passed in JSON format. The top level object is an array containing every rule that needs to be loaded.

Each rule of the content blocker is a dictionary with two parts: a trigger which activates the rule, and an action defining what to do when the rule is activated.

A typical rule set looks like this:


    [
        {
            "trigger": {
                …
            },
            "action": {
                …
            }
        },
        {
            "trigger": {
                …
            },
            "action": {
                …
            }
        }
    ]

The order of the rules is important. For every extension, the actions are applied in order. There is an action that skips all the rules that appear before the current one: “ignore-previous-rules”.

Let’s dive into the “trigger” and “action” objects.

Trigger definition

The “trigger” defines what properties activate a rule. When the rule is activated, its action is queued for execution. When all the triggers have been evaluated, the actions are applied in order.

Currently, the triggers are based on resource load information: the url and type of each resource, the domain of the document, and the relation of the resource to the document.

The valid fields in the trigger are:

  • “url-filter” (string, mandatory): matches the resource’s URL.
  • “url-filter-is-case-sensitive” (boolean, optional): changes the “url-filter” case-sensitivity.
  • “resource-type” (array of strings, optional): matches how the resource will be used.
  • “load-type” (array of strings, optional): matches the relation to the main resource.
  • “if-domain”/“unless-domain” (array of strings, optional): matches the domain of the document.

The most important field, and the only mandatory one, is “url-filter”. In this field, you define a regular expression that will be evaluated against the URL of each resource. It is possible to match every URL by matching every character (e.g. “.*”) but in general it is better to be as precise as possible to avoid unforeseen side effects.

The syntax of the regular expression is a strict subset of JavaScript regular expressions. It is introduced later in this post.

It is possible to change the case-sensitivity of “url-filter” with the field “url-filter-is-case-sensitive”. By default, the matching is case-insensitive.

The optional field “resource-type” specifies the type of load to match. The content of this field is an array with all the types of load that can activate the trigger. The possible values are:

  • “document”
  • “image”
  • “style-sheet”
  • “script”
  • “font”
  • “raw” (any untyped load, like XMLHttpRequest)
  • “svg-document”
  • “media”
  • “popup”

Since the triggers are evaluated before the load starts, these types define how the engine intends to use the resource, not necessarily the type of the resource itself (for example, <img src=”something.css”> is identified as an image).

If “resource-type” is not specified, the default is to match all types of resources.

The field “load-type” defines the relation between the domain of the resource being loaded and the domain of the document. The two possible values are:

  • “first-party”
  • “third-party”

A “first-party” load is any load where the URL has the same security origin as the document. Every other case is “third-party”.

Finally, it is possible to make a trigger conditional on the URL of the main document. In this case, the rule only applies for a particular domain, or only outside of a particular domain.

The domain filters are “if-domain” and “unless-domain”; those fields are mutually exclusive. In those fields, you can provide a domain filter for the main document.

It is important to be careful with the trigger to avoid rules being unexpectedly activated. Since it is impossible to test the entire web to validate a trigger, it is advised to be as specific as possible.
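Putting these trigger fields together, a rule that strips cookies from third-party images on one particular site might look like this (the filter string and domain here are hypothetical):

    {
        "trigger": {
            "url-filter": "tracking-pixel",
            "resource-type": ["image"],
            "load-type": ["third-party"],
            "if-domain": ["example.com"]
        },
        "action": {
            "type": "block-cookies"
        }
    }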

Action definition

The “action” part of the dictionary defines what the engine should do when a resource is matched by a trigger.

Currently, the action object has only 2 valid fields:

  • “type” (string, mandatory): defines what to do when the rule is activated.
  • “selector” (string, mandatory for the “css-display-none” type): defines a selector list to apply on the page.

There are 3 types of actions that limit resources: “block”, “block-cookies”, “css-display-none”. There is an additional type that does not have any impact on the resource but changes how the content extension behaves: “ignore-previous-rules”.

The action “block” is the most powerful one. It tells the engine to abort loading the resource. If the resource was cached, the cache is ignored and the load will still fail.

The action “block-cookies” changes the way the resource is requested over the network. Before sending the request to the server, all cookies are stripped from the header. Safari has its own privacy policy that applies on top of this rule. It is only possible to block cookies that would otherwise be accepted by the privacy policy, combining “block-cookies” and “ignore-previous-rules” still follows the browser’s privacy settings.

The action “css-display-none” acts on the CSS subsystem. It lets you hide elements of the page based on selectors. When this action is set, there should be a second entry named “selector” containing a selector list. Any element matching the selector list has its “display” property set to “none” which hides it.

Every selector supported by WebKit is supported for content extensions, including compound selectors and the new selectors from CSS Selectors Level 4. For example, the following action definition is valid:


    "action": {
        "type": "css-display-none",
        "selector": "#newsletter, :matches(.main-page, .article) .annoying-overlay"
    }

Finally, there is the action type “ignore-previous-rules”. All it does is ignore every rule before the current one if the trigger is activated. Note that it is not possible to ignore the rules of another extension. Each extension is isolated from the others.
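One common way to use it is as a per-site whitelist: block broadly first, then switch those rules off again on sites where you want everything to load (the filter and domain below are hypothetical):

    [
        {
            "trigger": {
                "url-filter": "tracker"
            },
            "action": {
                "type": "block"
            }
        },
        {
            "trigger": {
                "url-filter": ".*",
                "if-domain": ["trusted-site.com"]
            },
            "action": {
                "type": "ignore-previous-rules"
            }
        }
    ]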

The Regular expression format

Triggers support filtering the URLs of each resource based on regular expressions.

All strings in “url-filter” are interpreted as regular expressions. You have to be careful to escape regular expression control characters. Typically the dot appears in filters and needs to be escaped (for example, “energy-waster.com” should appear as “energy-waster\.com”).

The format is a strict subset of JavaScript regular expressions. Syntactically, everything supported by JavaScript is reserved but only a subset will be accepted by the parser. An unsupported expression results in a parse error.

The following features are supported:

  • Matching any character with “.”.
  • Matching ranges with the range syntax [a-b].
  • Quantifying expressions with “?”, “+” and “*”.
  • Groups with parentheses.

It is possible to use the beginning of line (“^”) and end of line (“$”) marker but they are restricted to be the first and last character of the expression. For example, a pattern like “^bar$” is perfectly valid, while “(foo)?^bar$” causes a syntax error.

All URL matching is done against the canonical version of the URL. As such, you can expect the URL to be completely ASCII. The domain will already be punycode encoded. Both the scheme and domain are already lowercase. The resource part of the URL is already percent encoded.

Since the URL is known to be ASCII, the url-filter is also restricted to ASCII. Patterns with non-ASCII characters result in a parse error.

Privacy

We have been building these features with a focus on providing better control over privacy. We wanted to enable better privacy filters, and that is what has been driving the feature set that exists today.

There is a whole universe of features that can take advantage of the content blocker API, around privacy or better user experience. We would love to hear your feedback about what works well, what needs improvement, and what is missing.

A major benefit of the declarative content blocking extension model is that the extension does not see the URLs of pages and resources the user browsed to or that a page requested. WebKit itself does not keep track of what rules have been executed on which URLs; we do not track you by design.

Everything has been developed in the open; everyone is welcome to audit and improve the code. The main part of content blockers lives in Source/WebCore/contentextensions.

Performance advice

A big focus of this feature is performance. We are trying to have good scalability with minimal performance impact.

If the rule compiler detects that a set of rules would negatively impact user experience, it refuses to load them and returns an error.

There are parameters in your extensions that do impact performance. In this section, I will give some general rules to get good performance. There are a few big themes to maximize performance:

  • Avoid quantifiers (“*”, “+”, “?”) in “url-filter” as much as possible.
  • CSS rules are best defined before any “ignore-previous-rules”.
  • Make the trigger as specific as possible.
  • Group rules with similar actions.

Minimize quantifiers in regular expressions

Avoiding quantifiers helps us optimize the triggers in the backend. Quantifiers are useful, but they increase the matching possibilities, which tends to reduce performance. We have found that many existing privacy extensions sometimes use quantifiers excessively, which tends to be costly.

One particularly bad case is a quantifier appearing in the middle of a string. Cases like:


    foo.*bar

tend to be slower than matching “foo” and “bar” separately. Using too many of them can cause the rule set to be rejected.

One exception to that rule is common prefixes. For example, if you have many rules such as those:


    https?://user-tracker.com
    https?://we-follow-you.com
    https?://etc.com

The rules are grouped by the prefix “https?://”, and it only counts as one rule with quantifiers.

Have CSS rules before “ignore-previous-rules”

When compiling the rules, we group the CSS rules whenever we determine they will be used together. For example, if a set of rules applies on every page (by using the filter “.*”), a special stylesheet is prepared for them so that it is ready to be used instantly when the page loads.

When an “ignore-previous-rules” action appears, it forces the compiler to break the stylesheet, since the rules appearing before an “ignore-previous-rules” action are all dismissed when that action is activated.

Use specific triggers

Good triggers try to exclude everything that could be activated by accident. Regular expressions should be as specific as possible to achieve your desired goal. Specify the resource types if possible, and use a domain filter if the rule should only apply to certain domains.

Having specific rules is important to avoid changing pages inadvertently, but it is also useful for performance. Having few actions to execute is a good idea.

Grouping rules with the same actions

Finally, grouping rules with similar actions can simplify the execution. Ideally, your extension will have all the rules blocking loads, followed by all the rules blocking cookies, etc.

Since rules are evaluated in order, it is useful to have them grouped together since matching a trigger means all the following rules of the same action can be skipped.

For example, if all the “block” rules are together, as soon as the first one is activated, all the following block rules are skipped since they have the same actions. The trigger evaluation continues on the first following rule with a different action.

We want your feedback

We have been developing these capabilities with the goal of achieving better privacy without incurring an unreasonable performance cost. We have intentionally limited ourselves to just a few features that could be improved.

If you start building content blocker extensions, it would help us if you send us your JSON files. Having many different use cases will help us optimize the code better.

For short questions, you can contact me on Twitter. For longer questions, you can email webkit-help or file a bug report. If you’re interested in hacking on the code, feel free to ask me, Alex Christensen, Brady Eidson or Sam Weinig for help.

For questions about Safari’s adoption of these APIs, contact Brian Weinstein or Jon Davis.

By Benjamin Poulain at June 12, 2015 11:26 PM

June 02, 2015

Manuel Rego: Grid and the City

Igalia WebKit

I’m really glad to announce that my talk “CSS Grid Layout is just around the corner” has been accepted at CSSConf US 2015 (18-19 June). Thanks to the organizers for selecting my proposal, it’s a pleasure to be among all these great speakers. BTW, if you haven’t grabbed your ticket yet, you can use the following promo code when checking out to save some money: MR200

I’m part of the Igalia Web Platform team, and I’m currently working on the implementation of the CSS Grid Layout W3C spec in Blink and WebKit. So, I’m kind of an “exotic” profile for a conference like CSSConf, as I’m not working on the frontend. However, I’ll try to bring the implementor perspective to the table, explaining some internals about how grid works. I’ll also introduce the basic syntax so you can start playing with it.

My talk abstract from CSSConf website

CSSConf this year is happening in New York City and the venue, Caroline’s on Broadway, is in the heart of Manhattan. So, I’ll take advantage of the trip to pay a visit to our friends at Bloomberg, whom we collaborate with in the development of CSS Grid Layout. In addition, BrooklynJS is being held on the evening of June 18th, and as part of the CSSConf ticket, we’ll have the chance to attend this event too.

From the personal side, this will be my first time in NYC, exciting times ahead! Feel free to ping me if you want to talk about grid, the web, Igalia or simply do some sightseeing, as I’ll be arriving on the night of June 15th.

Igalia and Bloomberg working together to build a better web

As you might guess, I’m very excited about this crazy week, full of events and new experiences. I’m sure I’ll meet lots of great people and I’ll do my best to convince the world about the goodness of grid and make them feel how awesome it is. Exciting times ahead!

June 02, 2015 10:00 PM

June 01, 2015

Javier Fernández: Distributing tracks along Grid Layout container

Igalia WebKit

In my last post I introduced the concept of Content Distribution alignment and how it affects the Grid Layout implementation. At that time, it was possible to use all the <content-position> values to select the grid tracks’ position inside a grid container, moving them across the available space. However, it wasn’t until recently that users could distribute grid tracks along such available space, literally adding gaps in between or even stretching them.

In this post I’ll describe how each <content-distribution> value affects tracks in a Grid Layout, their position and size, using different grid structures (e.g. number of tracks, span).

Let’s start analyzing the new Content Distribution alignment syntax defined in the CSS Box Alignment specification:

auto | <baseline-position> | <content-distribution> || [ <overflow-position>? && <content-position> ]

In case a <content-distribution> value can’t be applied, its associated fallback <content-distribution> value should be used instead. However, the CSS syntax allows users to specify a preferred fallback value:

If both a <content-distribution> and <content-position> are given, the <content-position> provides an explicit fallback alignment.

Before going into each value, I think it’s a good idea to refresh the concepts of alignment container and alignment subject and how they apply in the context of Grid Layout:

The alignment container is the grid container’s content box. The alignment subjects are the grid tracks.

The different <content-distribution> values that can be used for align-content and justify-content CSS properties are defined as follows:

  • space-between: The alignment subjects are evenly distributed in the alignment container. Default fallback: start.
  • space-around: The alignment subjects are evenly distributed in the alignment container, with a half-size space on either end. Default fallback: center.
  • space-evenly: The alignment subjects are evenly distributed in the alignment container, with a full-size space on either end. Default fallback: center.
  • stretch: Any auto-sized alignment subjects have their size increased equally (not proportionally) so that the combined size exactly fills the alignment container. Default fallback: start.

The picture below describes how these values would behave depending on the number of grid tracks; for simplicity I only use the justify-content property, so tracks are distributed along the inline (row) axis. In the next examples we will see how both properties work together using more complex grid definitions.

content-distribution-1

Effect of different Content Distribution values on Grid Layout. Click on the Image to evaluate the behavior when using different number of tracks.

The previous examples were defined with grid items filling grid areas of just 1×1 tracks, which makes distribution pretty simple and easier to predict. But thanks to the flexibility of the Grid Layout syntax we can define irregular grids, for instance, using the grid-template-areas property like in the next example.

align-content-and span-4

Basic example of how to apply the different values and its effect on irregular grid design.

Since Content Distribution alignment considers grid tracks as the alignment subject, distributing tracks along the available space may have the consequence of modifying the dimensions of grid areas defined by more than one track. The following picture shows the result of the code above and provides an excellent example of how powerful the effect of Content Alignment on a Grid Layout is.

These use cases can be obtained from Igalia’s Grid Layout examples repository, so anybody can play with different grid designs and alignment values combinations. They are also available at our codepen repository.

Grid Layout behind the scene

Now I’d like to explain a bit what I had to implement in the browsers’ WebCore to get these new features done; just some small pieces of source code, the ones I considered most illustrative, to give an idea of what implementing new behavior in browsers implies.

As you might already know from my previous posts, the CSS Box Alignment specification was born to generalize Flexbox’s alignment behavior so that it can be used for grid and even regular blocks. Several new properties were added, like justify-items and justify-self, and the CSS syntax has changed considerably. Especially noteworthy is how the Content Distribution alignment properties have changed from their initial Flexbox definition. They now support complex values like ‘space-between true’, ‘space-around start’, or even ‘stretch center safe’. This makes it possible to express more information than with the previous simple keyword form, although it requires new CSS parsing logic in browsers.

More complex CSS parsing

Since both the align-content and justify-content properties accept multiple optional keywords, I needed to completely re-implement their parsing logic. I’m happy to announce that it recently landed in WebKit’s trunk too, so now both web engines support the new CSS syntax.

Due to the complex values defined for these CSS properties, a new CSSValue-derived class was defined to hold all the Content Alignment data, named CSSContentDistributionValue. This data is then converted to something meaningful for the style logic using the StyleBuilderConverter class. This is the preferred method in both the WebKit and Blink engines, and it just needs to be declared in the CSSPropertyNames.in and CSSProperties.in template files, respectively.

align-content initial=initialContentAlignment, converter=convertContentAlignmentData
justify-content initial=initialContentAlignment, converter=convertContentAlignmentData

The StyleBuilderConverter logic is pretty simple thanks to these 2 new data structures, as can be appreciated in the following excerpt of source code:

StyleContentAlignmentData StyleBuilderConverter::convertContentAlignmentData(StyleResolverState&, CSSValue* value)
{
    StyleContentAlignmentData alignmentData = ComputedStyle::initialContentAlignment();
    CSSContentDistributionValue* contentValue = toCSSContentDistributionValue(value);
    if (contentValue->distribution()->getValueID() != CSSValueInvalid)
        alignmentData.setDistribution(*contentValue->distribution());
    if (contentValue->position()->getValueID() != CSSValueInvalid)
        alignmentData.setPosition(*contentValue->position());
    if (contentValue->overflow()->getValueID() != CSSValueInvalid)
        alignmentData.setOverflow(*contentValue->overflow());
    return alignmentData;
}

The StyleContentAlignmentData class was defined to simplify how we manage these complex values, so that we can handle the properties as if they had an atomic value. This approach allows a more efficient and robust way of detecting and managing style changes in these properties.

New Layout operations

Once this new CSS syntax is correctly parsed and a LayoutStyle instance is generated according to the user-defined CSS style rules, I needed to modify Flexbox’s layout code to adapt it to the new data structures, ensuring browser backward compatibility and passing all the Layout and Unit tests. I implemented this logic from scratch for Grid Layout, so I had the opportunity to introduce several performance optimizations to avoid unnecessary layouts and repaints. This area is pretty interesting and I’ll talk about it soon in a new post.

One interesting aspect of Content Distribution alignment is that it might take part in the track sizing algorithm. As was explained in my previous post about Self Alignment, the stretch value increases the alignment subject’s size to fill its alignment container’s available space. This is also the case for Content Alignment, but considering tracks as the alignment subject. However, there is another, not so obvious, case where <content-distribution> values may influence track sizing resolution, or perhaps better said, grid area sizing.

Let’s consider this example of grid where there are certain areas using more than one track:

grid-template-areas: "a a b"
                     "c d b"
grid-auto-columns: 20px;
grid-auto-rows: 40px;
width: 150px;
height: 300px;

The example above defines a grid with 3 column tracks of 20px and 2 row tracks of 40px, which would be laid out as shown in the following diagram:

content-distribution-spans

Grid Layout with areas filing more than one track. Click on the picture to evaluate the effect of each value on the grid area size.

This fact has interesting implementation implications, because in certain cases, in order to determine a grid item’s logical height, we need its logical width to be resolved first. The track sizing algorithm uses the children’s grid area size to determine the grid cell’s logical height; hence, given that the alignment logic needs track sizes to have been already resolved, it may imply a re-layout of the grid items whose size could be affected by the used content-distribution value. The following source code shows how I handle this scenario:

LayoutUnit LayoutGrid::gridAreaBreadthForChild(const LayoutBox& child, GridTrackSizingDirection direction, const Vector<GridTrack>& tracks) const
{
    const GridCoordinate& coordinate = cachedGridCoordinate(child);
    const GridSpan& span = (direction == ForColumns) ? coordinate.columns : coordinate.rows;
    const Vector<LayoutUnit>& trackPositions = (direction == ForColumns) ? m_columnPositions : m_rowPositions;
    if (span.resolvedFinalPosition.toInt() < trackPositions.size()) {
        LayoutUnit startOfTrack = trackPositions[span.resolvedInitialPosition.toInt()];
        LayoutUnit endOfTrack = trackPositions[span.resolvedFinalPosition.toInt()];
        return endOfTrack - startOfTrack + tracks[span.resolvedFinalPosition.toInt()].baseSize();
    }
    LayoutUnit gridAreaBreadth = 0;
    for (GridSpan::iterator trackPosition = span.begin(); trackPosition != span.end(); ++trackPosition)
        gridAreaBreadth += tracks[trackPosition.toInt()].baseSize();

    return gridAreaBreadth;
}

The code above will return different results, in the cases mentioned before, depending on whether it’s run during the track sizing algorithm or after applying the alignment logic. This will likely make a new layout of the whole grid necessary, or at least of the affected grid items, which likely has a negative impact on performance.

Current status and next steps

I’d like to finish this post with a snapshot of current situation and challenges for the next months, as I’ve been regularly doing in my last posts.

Unlike in previous reports, this time I’ve got good news regarding the reduction of implementation gaps between the two web engines we are focusing our efforts on, WebKit and Blink. The following table describes the current situation:

alignment-status

The table above indicates that several milestones were reached since the last report, although there are still some pending issues:

  • I’ve completed the implementation in WebKit of the parsing logic for the new Box Alignment properties: align-items and align-self.
  • As a side effect, I’ve also upgraded the ones already present because of Flexbox to the latest CSS3 Box Alignment specification.
  • WebKit now has full support for Default and Self Alignment for Grid Layout, including overflow handling.
  • Blink now has full support for Content Distribution alignment, which was missing the <content-distribution> values.
  • WebKit’s Grid Layout implementation still misses support for Content Distribution alignment.
  • Baseline Alignment is still missing in both web engines.

In addition to the above mentioned pending issues, our roadmap includes the following tasks as part of my todo list for the next months:

  • Even though there is support for different writing-modes and flow directions, there are still some issues with orthogonal flows. I’ve already got some promising patches, but they still have to be reviewed by Blink and WebKit engineers.
  • Optimizations of the style and repaint invalidations triggered by changes to the alignment properties. As mentioned before, this is a very interesting topic, which I’ll elaborate on further in future posts.
  • Performance analysis of relevant Grid Layout use cases, which hopefully will lead to optimization proposals.

All this work and many other contributions to Grid Layout for WebKit and Blink web engines are the result of the collaboration between Bloomberg and Igalia to implement this W3C specification.

Igalia & Bloomberg logos

By jfernandez at June 01, 2015 07:32 PM

March 30, 2015

Manuel Rego: Web Engines Hackfest 2015: Save the dates!

Igalia WebKit

This is a short note to announce the dates of the Web Engines Hackfest 2015, which will happen next winter at the Igalia Headquarters in A Coruña (Spain), from Monday, 7th December, to Wednesday, 9th December.

After all the great work and collaboration that happened during last year’s edition, with hackers from all parts of the Web Platform community (Chromium/Blink, WebKit, Gecko, Servo, JSC, V8, SpiderMonkey, etc.), Igalia is really excited to host this great event again.

Web Engines Hackfest 2014 (picture by Adrián Pérez)

We’re still closing the last details and will be sending the invitations in the coming weeks. However, do not hesitate to send us an invitation request if you are willing to come by the end of the year.

Do not miss any update by following @webhackfest on Twitter. For more details, visit the official webpage http://www.webengineshackfest.org/.

March 30, 2015 10:00 PM

March 23, 2015

Carlos García Campos: WebKitGTK+ 2.8.0

Igalia WebKit

We are excited and proud to announce WebKitGTK+ 2.8.0, your favorite web rendering engine, now faster, even more stable and with a bunch of new features and improvements.

Gestures

Touch support is one of the most important features that had been missing since WebKitGTK+ 2.0.0. Thanks to the GTK+ gestures API, it’s now more pleasant to use a WebKitWebView on a touch screen. For now only the basic gestures are implemented: pan (for scrolling by dragging from any point of the WebView), tap (handling clicks with the finger) and zoom (for zooming in/out with two fingers). We plan to add more touch enhancements like kinetic scrolling, overshoot feedback animation, text selections, long press, etc. in future versions.

HTML5 Notifications

notifications

Notifications are transparently supported by WebKitGTK+ now, using libnotify by default. The default implementation can be overridden by applications to use their own notifications system, or simply to disable notifications.

WebView background color

There’s new API to set the base background color of a WebKitWebView. The given color is used to fill the web view before the actual contents are rendered. This will not have any visible effect if the web page contents set a background color, of course. If the web view parent window has an RGBA visual, we can even have transparent colors.

webkitgtk-2.8-bgcolor

A new WebKitSnapshotOptions flag has also been added to allow taking web view snapshots over a transparent surface, instead of filling the surface with the default background color (opaque white).

User script messages

The communication between the UI process and the Web Extensions is something that we have always left to the users, so that everybody can use their own IPC mechanism. Epiphany and most of the apps use D-Bus for this, and it works perfectly. However, D-Bus is often too much for simple cases where only a few messages are sent from the Web Extension to the UI process. User script messages make these cases a lot easier to implement and can be used from JavaScript code or using the GObject DOM bindings.

Let’s see how it works with a very simple example:

In the UI process, we register a script message handler using the WebKitUserContentManager and connect to the “script-message-received” signal for the given handler:

webkit_user_content_manager_register_script_message_handler (user_content, 
                                                             "foo");
g_signal_connect (user_content, "script-message-received::foo",
                  G_CALLBACK (foo_message_received_cb), NULL);

Script messages are received in the UI process as a WebKitJavascriptResult:

static void
foo_message_received_cb (WebKitUserContentManager *manager,
                         WebKitJavascriptResult *message,
                         gpointer user_data)
{
        char *message_str;

        message_str = get_js_result_as_string (message);
        g_print ("Script message received for handler foo: %s\n", message_str);
        g_free (message_str);
}

Sending a message from the web process to the UI process using JavaScript is very easy:

window.webkit.messageHandlers.foo.postMessage("bar");

That will send the message “bar” to the registered foo script message handler. It’s not limited to strings; we can pass any JavaScript value that can be serialized to postMessage(). There’s also a convenient API to send script messages in the GObject DOM bindings API:

webkit_dom_dom_window_webkit_message_handlers_post_message (dom_window, 
                                                            "foo", "bar");


Who is playing audio?

WebKitWebView now has a boolean read-only property, is-playing-audio, that is set to TRUE when the web view is playing audio (even if it’s a video) and to FALSE when the audio is stopped. Browsers can use this to provide visual feedback about which tab is playing audio; Epiphany already does that :-)

ephy-is-playing-audio

HTML5 color input

The color input element is now supported by default, so instead of rendering a text field where the color has to be manually entered as a hexadecimal color code, WebKit now renders a color button that, when clicked, shows a GTK color chooser dialog. As usual, the public API allows the default implementation to be overridden to use your own color chooser. MiniBrowser uses a popover, for example.

mb-color-input-popover

APNG

APNG (Animated PNG) is a PNG extension that allows the creation of animated PNGs, similar to GIF but much better, supporting 24-bit images and transparency. Since 2.8, WebKitGTK+ can render APNG files. You can check how it works with the Mozilla demos.

webkitgtk-2.8-apng

SSL

The POODLE vulnerability fix introduced compatibility problems with some websites when establishing the SSL connection. Those problems were actually server-side issues, with servers incorrectly banning SSL 3.0 record packet versions, but they could be worked around in WebKitGTK+.

WebKitGTK+ already provided a WebKitWebView signal to notify about TLS errors when loading, but only for the connection of the main resource in the main frame. However, it’s still possible for subresources to fail due to TLS errors when they use a connection different from the main resource’s. WebKitGTK+ 2.8 gained the WebKitWebResource::failed-with-tls-errors signal, emitted when a subresource load fails because of an invalid certificate.

Cipher suites based on RC4 are now disallowed when performing TLS negotiation, because RC4 is no longer considered secure.

Performance: bmalloc and concurrent JIT

bmalloc is a new memory allocator added to WebKit to replace TCMalloc. Apple had already used it in the Mac and iOS ports for some time with very good results, but it needed some tweaks to work on Linux. WebKitGTK+ 2.8 now also uses bmalloc, which drastically improved the overall performance.

Concurrent JIT was not enabled in the GTK (and EFL) ports for no apparent reason. Enabling it also had an amazing impact on performance.

Both performance improvements were very noticeable in the performance bot:

webkitgtk-2.8-perf


The first jump on 11th Feb corresponds to the bmalloc switch, while the other jump on 25th Feb is when concurrent JIT was enabled.

Plans for 2.10

WebKitGTK+ 2.8 is an awesome release, but the plans for 2.10 are quite promising.

  • More security: mixed content for most resource types will be blocked by default. New API will be provided for managing mixed content.
  • Sandboxing: seccomp filters will be used in the different secondary processes.
  • More performance: FTL will be enabled in JavaScriptCore by default.
  • Even more performance: this time in the graphics side, by using the threaded compositor.
  • Blocking plugins API: new API to provide full control over the plugin loading process, allowing plugins to be blocked/unblocked individually.
  • Implementation of the Database process: to bring back IndexedDB support.
  • Editing API: full editing API to allow using a WebView in editable mode with all editing capabilities.

By carlos garcia campos at March 23, 2015 11:56 AM

January 27, 2015

Building WebKit for iOS Simulator

Surfin’ Safari

I am proud to formally announce that you can now build and run top-of-tree WebKit for iOS in the iOS Simulator. We have updated the pages on webkit.org with details on building for iOS Simulator. For your convenience, I have summarized the steps to get you up and running below:

  1. Install Xcode 6.1.1.
  2. Get the Code.
  3. Enable Xcode to build command line tools by running sudo Tools/Scripts/configure-xcode-for-ios-development in the Terminal.
  4. Build WebKit for iOS Simulator by running Tools/Scripts/build-webkit --ios-simulator.
  5. Launch Safari in the iOS Simulator with the WebKit version you built by running Tools/Scripts/run-safari --ios-simulator.

Early Warning System (EWS) bots for iOS are running to help contributors catch build breakage before a patch is landed. The EWS bots build 32-bit iOS WebKit for ARMv7 hardware. We chose to build this configuration because it will most likely reveal build errors that differ from the configuration built by the existing Mac EWS bots.

We are working to bring up support for running layout tests, build-and-test bots, and additional iOS EWS configurations to help contributors notice build issues and regressions in WebKit for iOS.

We have always encouraged you to file all WebKit bugs that you find. Since upstreaming iOS WebKit to open source in early 2014, we have tracked iOS WebKit bugs in bugs.webkit.org. Now that you are able to build and run iOS WebKit yourself, we invite you to help fix them!

By Daniel Bates at January 27, 2015 04:01 PM

December 15, 2014

Web Engines Hackfest 2014

Gustavo Noronha

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not-so-small number of pragmatic limitations and hacks to get to a multi-tab browser that can play YouTube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize notifications, show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance and reject the notification if it cannot find that. Besides making this a pain to test (our test browser would need a .desktop file to be installed), it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

By kov at December 15, 2014 11:20 PM

December 08, 2014

How to build TyGL

University of Szeged

This is a follow-up blog post of our announcement of TyGL - the 2D-accelerated GPU rendering port of WebKit.

We have received lots of feedback about TyGL and we would like to thank you for all the questions, suggestions and comments. As we promised, let’s get into some technical details.

read more

By szilard.ledan at December 08, 2014 12:47 PM

November 12, 2014

Announcing the TyGL-WebKit port to accelerate 2D web rendering with GPU

University of Szeged

We are proud to announce the TyGL port (link: http://github.com/szeged/TyGL) on top of EFL-WebKit. TyGL (pronounced as tigel) is part of WebKit and provides 2D-accelerated GPU rendering on embedded systems. The engine is purely GPU based. It has been developed on and tested against ARM-Mali GPU, but it is designed to work on any GPU conforming to OpenGL ES 2.0 or higher.

GPU involvement in future graphics is inevitable considering the pixel growth rate of displays, but harnessing GPU power requires a different approach than CPU-based optimizations.

read more

By zoltan.herczeg at November 12, 2014 02:18 PM

October 22, 2014

Fuzzinator reloaded

University of Szeged

It's been a while since I last (and actually first) posted about Fuzzinator. Now I think that I have enough new experiences worth sharing.

More than a year ago, when I started fuzzing, I was mostly focusing on mutation-based fuzzer technologies since they were easy to build and pretty effective. Having a nice error-prone test suite (e.g. LayoutTests) was a guarantee of fresh new bugs. At least for a while.

read more

By renata.hodovan at October 22, 2014 10:38 PM

September 25, 2014

Measuring ASM.JS performance

University of Szeged

What is ASM.JS?

Now that mobile computers and cloud services have become part of our lives, more and more developers see the potential of the web and online applications. ASM.JS, a strict subset of JavaScript, is a technology that provides a way to achieve near-native speed in browsers, without the need of any plugin or extension. It is also possible to cross-compile C/C++ programs to it and run them directly in your browser.

In this post we will compare the JavaScript and ASM.JS performance in different browsers, trying out various kinds of web applications and benchmarks.

read more

By matyas.mustoha at September 25, 2014 10:40 AM

August 28, 2014

CSS Shapes now available in Chrome 37 release

Adobe Web Platform

Support for CSS Shapes is now available in the latest Google Chrome 37 release.

chromelogo

What can I do with CSS Shapes?

CSS Shapes lets you think out of the box! It gives you the ability to wrap content around any shape. Shapes can be defined by geometric shapes, images, and even gradients. Using Shapes as part of your website design takes a visitor’s visual and reading experience to the next level. If you want to start with some tutorials, please visit Sarah Soueidan’s article about Shapes.

Demo

The following shapes use case is from the Good Looking Shapes Gallery blog post.

Without CSS Shapes
the_11_guesses_no_shapes
With CSS Shapes
the_11_guesses_shapes

In the first picture, we don’t use CSS Shapes. The text wraps around the rectangular image container, which leads to a lot of empty space between the text and the visible part of the image.

In the second picture, we use CSS Shapes. You can see the wrapping behavior around the image. In this case the white parts of the image are transparent, thus the browser can automatically wrap the content around the visible part, which leads to this nice and clean, visually more appealing wrapping behavior.

How do I get CSS Shapes?

Just update your Chrome browser to the latest version from the Chrome/About Google Chrome menu, or download the latest stable version from https://www.google.com/chrome/browser/.

I’d like to thank the WebKit and Blink engineers for their collaboration, and everyone else in the community who has contributed to this feature. The fact that Shapes is shipping in two production browsers — Chrome 37 now and Safari 8 later this year — is the upshot of the open source collaboration between the people who believe in a better, more expressive web. Although Shapes will be available in these browsers, you’ll need another solution for the other browsers. The CSS Shapes Polyfill is one method of achieving consistent behavior across browsers.

Where should I start?

For more info about CSS Shapes, please check out the following links:

Let us know your thoughts or if you have nice demos, here or on Twitter: @AdobeWeb and @ZoltanWebKit.

By Zoltan Horvath at August 28, 2014 05:12 PM

August 16, 2014

A Quick'n'Dirty Set-up of an Aarch64 Ubuntu 14.04 VM with QEMU

University of Szeged

Lately, I came up with the idea to do some development on Aarch64. However, I couldn't get my hands on real hardware easily so I started to look for alternatives (i.e., emulators). The ARMv8 Foundation Model seemed to be the trivial solution but I've heard that QEMU is somewhat faster so I gave it a try. My goal was to set up the VM as quick as possible: reuse whatever is already "out there" and rebuild only what's utterly necessary. In the end it turned out that it's quite easy to get such a VM working ... once you know what you need.

read more

By akos.kiss at August 16, 2014 02:20 PM

June 10, 2014

Using ARIA 1.0 and the WebKit Accessibility Node Inspector

Surfin’ Safari

On the heels of the 25th birthday of the Web, WAI-ARIA 1.0—the specification for Accessible Rich Internet Applications—is a W3C Recommendation, thanks in part to WebKit’s implementation. Most major web applications use ARIA and related techniques to improve accessibility and general usability.

Many web developers are familiar with the simple parts of ARIA, such as retrofitting roles in legacy or otherwise non-semantic uses of HTML like <div role="button" tabindex="0">, but ARIA has other equally important uses:

  • Improving languages like SVG where no accessibility semantics exist natively.
  • Augmenting technologies like EPUB that build on existing HTML semantics.
  • Allowing accessibility in native implementations, like the sub-DOM controls of <video> elements.
  • Supporting accessibility and full keyboard access when HTML is insufficient, such as with data grids, tree controls, or application dialogs.
  • Enabling accessible solutions where there is no equivalent semantic or functionality. For example, HTML has no concept similar to “live” regions.

More on these topics below, including how to diagnose and debug problems using new accessibility inspection features in the WebKit Web Inspector.

Example 1: ARIA in an SVG Map of Africa

The Scalable Vector Graphics (SVG) language does not include semantics to indicate what type of content is represented, such as a chart, illustration, or interactive application control. However, ARIA roles and attributes can be used in SVG today for both raster- and vector-based images, and the SVG Working Group recently adopted ARIA officially into SVG 2.

The following video shows VoiceOver’s touchscreen navigation of an accessible map. It uses a simple role="img" on each country path, and an aria-labelledby relationship to associate that country path with the text label. After watching the video, view the source of the test case SVG map of Africa to see how it works.

Closed captioned video showing VoiceOver on iOS with touch screen navigation of African map in SVG+ARIA.

Prior to WebKit’s implementation of ARIA in SVG, the best opportunity for a blind individual to experience spatial data like charts and maps was to print expensive tactile graphics on swell paper or with a modified braille embosser. Along with WebKit’s first native implementation of accessible MathML, accessible graphics represent new possibilities in the category of study collectively referred to as STEM: science, technology, engineering, and math.

Note: The test case SVG map of Africa is based on an original by Peter Collingridge, with accessibility added for the demo.

Introducing the Accessibility Node Inspector

Recent nightly builds of WebKit include a new accessibility section in the node properties of the Web Inspector. Select any DOM element to see relevant accessibility information, such as the computed role.

The properties and relationships displayed come from WebCore. Accessibility children and parent nodes cannot be detected through a JavaScript interface and are not a one-to-one mapping to the DOM, so these relationships have not previously been available to web developers. Many other accessibility properties are likewise not detectable through the rendered DOM or a JavaScript interface.

We’ll use the WebKit Accessibility Node Inspector to diagnose and inspect the examples below.

Complex ARIA Widget Examples

Many of the original features of ARIA (such as dialogs, landmarks, and menus) have been adopted into recent versions of HTML as elements. However, there are interaction patterns, common on the Web since the 1990s, that still have no native support, or only unreliable rendering support, in HTML, such as date pickers, combo boxes, tab sets, data grids, tree controls, and more. Web developers must render all these controls themselves, using custom code or a JavaScript framework.

Making ARIA widgets can be challenging. The examples below illustrate some of the trickier bits of implementation. Debugging these controls is made easier by observing accessibility properties in the Web Inspector.

Example 2: Selectable List Box with Full Keyboard Support Using Native Focus Events

This demo was created in 2009 for Apple’s World Wide Developer Conference (WWDC) and uses the “roaming tabindex” technique on a custom list box.

Assistive technologies do not change the DOM, so there’s no hidden magic happening. JavaScript running in the page uses standard event handling and DOM interfaces like setAttribute() and focus(). View the source or step through in the WebKit debugger for a deeper understanding.
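
A rough sketch of the roaming tabindex idea itself follows (this is not the demo’s code; the listbox markup, ids and key handling below are assumptions for illustration):

// Assumes a container like <ul id="listbox" role="listbox"> whose children
// have role="option"; the ids and roles are illustrative, not the demo's markup.
var listbox = document.getElementById("listbox");
var options = Array.prototype.slice.call(
    listbox.querySelectorAll('[role="option"]'));

function selectOption(index) {
    options.forEach(function (option, i) {
        // Only the selected option stays in the tab order.
        option.setAttribute("tabindex", i === index ? "0" : "-1");
        option.setAttribute("aria-selected", i === index ? "true" : "false");
    });
    options[index].focus(); // real keyboard focus, not just focused-looking CSS
}

listbox.addEventListener("keydown", function (event) {
    var current = options.indexOf(document.activeElement);
    if (event.key === "ArrowDown" && current < options.length - 1) {
        selectOption(current + 1);
        event.preventDefault();
    } else if (event.key === "ArrowUp" && current > 0) {
        selectOption(current - 1);
        event.preventDefault();
    }
});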

Closed captioned video showing an accessible “managed focus” list box demo

For a full explanation of the techniques and test case roaming tabindex demo used in the video, see WWDC 2009: Improving Accessibility in Web Applications.

Example 3: Combo Box with a “Status” Live Region

During the life cycle of a web application, there may be multiple points of user interest. In the list box example, the web application moves focus to the updated item, but moving focus is not always appropriate. In a combo box, keyboard focus should remain on the text field so the user can continue typing. The selected item in the related drop-down list is conveyed to the API when the selection changes, allowing a screen reader to speak updates to both elements. Some combo boxes have an additional status display to indicate the total number of results. In this demo, we’ll use an ARIA “live region” for the status updates.

As with the previous example, there’s no hidden magic occurring in the DOM. JavaScript running in the page uses standard event handling and DOM interfaces like setAttribute(). View the source or step through in the WebKit debugger for a deeper understanding of the techniques.
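
A minimal sketch of just the live-region part (the element ids and option list are assumptions for illustration, not taken from the demo):

// Assumes <input id="combo-input"> and a status element such as
// <div id="combo-status" role="status" aria-live="polite"></div>.
var input = document.getElementById("combo-input");
var statusRegion = document.getElementById("combo-status");
var fruits = ["Apple", "Apricot", "Banana", "Blueberry", "Cherry"];

input.addEventListener("input", function () {
    var query = input.value.toLowerCase();
    var matches = fruits.filter(function (fruit) {
        return fruit.toLowerCase().indexOf(query) === 0;
    });
    // Updating the text of the live region is enough for a screen reader to
    // announce the new status; keyboard focus never leaves the text field.
    statusRegion.textContent = matches.length + " results available.";
});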

Closed captioned video showing an accessible combo box demo with live region support

As this combo box demo shows, the ability for an assistive technology to simultaneously follow and report changes to multiple points of user interest was never possible in web applications prior to ARIA.

Major Contributors to WebKit’s ARIA Implementation

WebKit’s implementation of ARIA played a significant part in the ARIA 1.0 Recommendation milestone, and many individuals collaborated on the work.

The initial implementation was completed in 2008 by Alice Liu Barraclough and Beth Dakin. Much of the remaining ARIA implementation in WebCore, as well as the Mac and iOS platform mapping, was completed by Chris Fleizach. Sam White made major improvements to WebKit’s accessibility architecture. Jon Honeycutt, Brent Fulgham, Dominic Mazzoni, Mario Sánchez Prada, and others completed the platform mapping to the Windows and Linux accessibility APIs. Credit for the ARIA test harness and WebKit test results goes to Michael Cooper, Jon Gunderson, Mona Heath, Joseph Scheuhammer, Rich Schwerdtfeger, and others. The full list of working group acknowledgements is available in the ARIA spec.

The Web is a more enabling resource for everyone because of the efforts of these individuals. Thank you!

Future Direction of ARIA

ARIA 1.0 has much room for improvement, but it’s an incredibly important step toward ensuring equal access, opportunity, and usability of the web platform.

Future work on ARIA will cover additional semantics for HTML, SVG, and EPUB, and some of the work proposed includes non-declarative JavaScript accessibility API support for custom view technologies like WebGL and Canvas. There is also work to be done for rich text editing that is beyond the capability of contenteditable, such as the custom display and input-proxied views that are used on application suites like Google Docs and iWork for iCloud.

A Call to Action

Many of the widget libraries available in JavaScript frameworks do NOT include support for accessibility and keyboard behavior. If you are a front-end engineer, you have an opportunity to change this situation.

When you contribute to JavaScript UI libraries, include support for accessibility. Test your code for accessibility and keyboard behavior using focus() where appropriate. Detect and update your web app state based on user focus events. Don’t just style the CSS of controls to look focused; use real keyboard focus.

The amount of effort it takes to add and test for accessibility is well worth the fit-and-finish it will bring to your web app. You’ll improve the experience for all users.

Additional Video Resources

Each of these videos is about an hour in length. They cover ARIA and related techniques in detail.

By James Craig at June 10, 2014 05:40 PM

June 02, 2014

Introducing the JetStream benchmark suite

Surfin’ Safari

Today we are introducing a new WebKit JavaScript benchmark test suite, JetStream. JetStream codifies what our de facto process has been: combining latency and throughput benchmarks with roughly equal weighting, and capturing both traditional JavaScript programming styles and new JavaScript-based technologies that have captured our imaginations. Scores on JetStream are a good indicator of the performance users would see in advanced web applications like games.

Optimizing the performance of our JavaScript engine is a high priority for the WebKit project. Examples of some of the improvements we introduced in the last year include concurrent compilation, generational GC, and the FTL JIT. Engineering such improvements requires focus: we try to prioritize high-impact projects over building and maintaining complex optimizations that have smaller benefits. Thus, we motivate performance work with benchmarks that illustrate the kinds of workloads that WebKit users will likely encounter. This philosophy of benchmark-driven development has long been part of WebKit.

The previous state of JavaScript benchmarking

As we made enhancements to the WebKit JavaScript engine, we found that no single benchmark suite was entirely representative of the scenarios that we wanted to improve. We like that JSBench measures the performance of JavaScript code on popular websites, but WebKit already does very well on this benchmark. We like SunSpider for its coverage of commonly-used language constructs and for the fact that its running time is representative of the running time of code on the web, but it falls short for measuring peak throughput. We like Octane, but it skews too far in the other direction: it’s useful for determining our engine’s peak throughput but isn’t sensitive enough to the performance you’d be most likely to see on typical web workloads. It also downplays novel JavaScript technologies like asm.js; only one of Octane’s 15 benchmarks was an asm.js test, and this test ignores floating point performance.

Finding good asm.js benchmarks is difficult. Even though Emscripten is gaining mindshare, its tests are long-running and until recently, lacked a web harness. So we built our own asm.js benchmarks by using tests from the LLVM test suite. These C and C++ tests are used by LLVM developers to track performance improvements of the clang/LLVM compiler stack. Emscripten itself uses LLVM to generate JavaScript code. This makes the LLVM test suite particularly appropriate for testing how well a JavaScript engine handles native code. Another benefit of our new tests is that they are much quicker to run than the Emscripten test suite.

Having good JavaScript benchmarks allows us to confidently pursue ambitious improvements to WebKit. For example, SunSpider guided our concurrent compilation work, while the asm.js tests and Octane’s throughput tests motivated our work on the FTL JIT. But allowing our testing to be based on a hodgepodge of these different benchmark suites has become impractical. It’s difficult to tell contributors what they should be testing if there is no unified test suite that can tell them if their change had the desired effect on performance. We want one test suite that can report one score in the end, and we want this one score to be representative of WebKit’s future direction.

Designing the new JetStream benchmark suite

Different WebKit components require different approaches to measuring performance. For example, for DOM performance, we just introduced the Speedometer benchmark. In some cases, the obvious approach works pretty well: for example, many layout and rendering optimizations can be driven by measuring page load time on representative web pages. But measuring the performance of a programming language implementation requires more subtlety. We want to increase the benchmarks’ sensitivity to core engine improvements, but not so much so that we lose perspective on how those engine improvements play out in real web sites. We want to minimize the opportunities for system noise to throw off our measurements, but anytime a workload is inherently prone to noise, we want a benchmark to show this. We want our benchmarks to represent a high-fidelity approximation of the workloads that WebKit users are likely to care about.

JetStream combines a variety of JavaScript benchmarks, covering a variety of advanced workloads and programming techniques, and reports a single score that balances them using a geometric mean. Each test is run three times and scores are reported with 95% confidence intervals. Each benchmark measures a distinct workload, and no single optimization technique is sufficient to speed up all benchmarks. Some benchmarks demonstrate tradeoffs, and aggressive or specialized optimization for one benchmark might make another benchmark slower. Demonstrating trade-offs is crucial for our work. As discussed in my previous post about our new JIT compiler, WebKit tries to dynamically adapt to workload using different execution tiers. But this is never perfect. For example, while our new FTL JIT compiler gives us fantastic speed-ups on peak throughput tests, it does cause slight regressions in some ramp-up tests. New optimizations for advanced language runtimes often run into such trade-offs, and our goal with JetStream is to have a benchmark that informs us about the trade-offs that we are making.

JetStream includes benchmarks from the SunSpider 1.0.2 and Octane 2 JavaScript benchmark suites. It also includes benchmarks from the LLVM compiler open source project, compiled to JavaScript using Emscripten 1.13. It also includes a benchmark based on the Apache Harmony open source project’s HashMap, hand-translated to JavaScript. More information about the benchmarks included in JetStream is available on the JetStream In Depth page.

We’re excited to be introducing this new benchmark. To run it, simply visit browserbench.org/JetStream. You can file bugs against the benchmark using WebKit’s bug management system under the Tools/Tests component.

By Filip Pizlo at June 02, 2014 07:42 PM

Speedometer: Benchmark for Web App Responsiveness

Surfin’ Safari

Today we are pleased to announce Speedometer, a new benchmark that measures the responsiveness of web applications.

Benchmark

Speedometer measures simulated user interactions in web applications. Version 1.0 of Speedometer uses TodoMVC to simulate user actions for adding, completing, and removing to-do items. Speedometer repeats the same actions using DOM APIs — a core set of web platform APIs used extensively in web applications — as well as six popular JavaScript frameworks: Ember.js, Backbone.js, jQuery, AngularJS, React, and Flight. Many of these frameworks are used on the most popular websites in the world, such as Facebook and Twitter. The performance of these types of operations depends on the speed of the DOM APIs, the JavaScript engine, CSS style resolution, layout, and other technologies.

Motivation

When we set out to improve the performance of interactive web applications in WebKit last year, we looked for a benchmark to guide our work. However, many browser benchmarks we checked were micro-benchmarks, and didn’t reflect how DOM APIs were used in the real world, or how individual APIs interacted with the rest of the web browser engine.

For example, one popular DOM benchmark assigns the value of element.id to a global variable repeatedly inside a loop:

for (var i = 0; i < count; i++)
     globalVariable = element.id; 

We like these micro-benchmarks for tracking regressions in heavily used DOM APIs like element.id. However, we couldn’t use them to guide our performance work because they don’t tell us the relative importance of each DOM API. With these micro-benchmarks, we could have easily over-optimized APIs that don’t matter as much in actual web applications.

These micro-benchmarks can also encourage browser vendors to implement optimizations that don’t translate into any real-world benefit. In the example above, for instance, some browser engines detect that element.id doesn’t have any side effect and eliminate the loop entirely, assigning the value exactly once. However, real-world websites rarely access element.id repeatedly without ever using the result or modifying the DOM in between.
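
For contrast, here is a contrived sketch of the kind of access pattern real pages tend to have (the selector and class name are purely illustrative), which an engine cannot simply optimize away:

// Each value is actually consumed and the DOM is modified between reads,
// so the loop cannot be collapsed into a single element.id access.
var ids = [];
var items = document.querySelectorAll("li");
for (var i = 0; i < items.length; i++) {
    ids.push(items[i].id);
    items[i].classList.add("seen");
}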

Hence we decided to write a new benchmark for the end-to-end performance of a complete web application instead of testing individual DOM calls.

Mechanics

We tried to make Speedometer faithfully simulate a typical workload on a demo application by replaying a sequence of user interactions. We did have to work around certain limitations of the browser, however. For instance, we call click() on each checkbox in order to simulate a mouse click since many browsers don’t allow web content to create fake mouse or keyboard events. To make the run time long enough to measure with the limited precision, we synchronously execute a large number of operations, such as adding one hundred to-do items.

We also noticed that some browser engines have used an optimization strategy of doing some work asynchronously to reduce the run time of synchronous operations. Returning control back to JavaScript execution as soon as possible is worth pursuing. However, a holistic, accurate measurement of web application performance involves measuring when these related, asynchronous computations actually complete since they could still eat up a big proportion of the 16-millisecond-per-frame budget required to achieve smooth, 60 frames-per-second user interaction. Thus, we measure the time browsers spend executing those asynchronous tasks in Speedometer, estimated as the time between when a zero-second delay timer is scheduled and when it is fired.
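
The following is a rough sketch of that measurement idea, not Speedometer’s actual harness; the checkbox selector and the timing helper are assumptions for illustration:

// Click every checkbox synchronously, then estimate the asynchronous
// follow-up work as the delay before a zero-second timer actually fires.
function runStep(done) {
    var syncStart = performance.now();
    var checkboxes = document.querySelectorAll('input[type="checkbox"]');
    for (var i = 0; i < checkboxes.length; i++)
        checkboxes[i].click(); // dispatches a real click, like a user action
    var syncTime = performance.now() - syncStart;

    var timerScheduled = performance.now();
    setTimeout(function () {
        // Asynchronous tasks queued during the synchronous work generally run
        // before this zero-delay timer, so the gap approximates their cost.
        var asyncTime = performance.now() - timerScheduled;
        done({ sync: syncTime, async: asyncTime });
    }, 0);
}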

It is worth noting that Speedometer is not meant to compare the performance of different JavaScript frameworks. The mechanism by which we simulate user actions is different for each framework, and we’re forcing frameworks to do more work synchronously than needed in some cases to ensure that run time can be measured.

Optimizations in WebKit

Over the past eight months since we introduced the first beta version of Speedometer under a different name, we have been optimizing WebKit’s performance on this benchmark.

One of our biggest improvements on Speedometer came from making render tree creation lazy. In WebKit, we create render objects for each element that gets displayed on screen in order to compute each element’s style and position. To avoid thrashing, we changed WebKit to only create an element’s render object when it was needed, giving WebKit a huge performance boost. We also made many DOM APIs not trigger synchronous style resolutions or synchronous layout so that we get further benefit from lazily creating the render tree.

Another area where we made substantial improvements was JavaScript bindings, the layer that sits between C++ browser code and JavaScript. We removed redundant layers of abstraction and made more properties and member functions on DOM objects inline cacheable (See Introducing the WebKit FTL JIT). For example, we added a new JavaScriptCore feature to deal with named properties on the document object so that its attributes could be inline cached. We also optimized node lists returned by getElementsByTagName and made their length property inline cacheable.

Finally, WebKit’s performance on Speedometer benefited from two major architectural changes. JavaScriptCore’s concurrent and parallel JIT (See Introducing the WebKit FTL JIT), which allows JavaScript to be compiled while the main thread is running other code, reduced the run time of JavaScript in Speedometer. The CSS JIT, which allows CSS selectors to be compiled up front and rapidly checked against elements, reduced time spent in style resolution and made querySelector and querySelectorAll much faster.

Because Speedometer is an end-to-end benchmark that uses popular JavaScript frameworks, it also helped us catch surprising performance regressions. For instance, when we tried to optimize calls to bound functions in JavaScript by doing more work up front in Function.prototype.bind, we saw a few percent performance regression on Speedometer because many bound functions were called only once before being discarded. We initially doubted that this result reflected the behavior of real web applications, so we collected statistics on popular websites like Facebook and Twitter. To our surprise, we found exactly the same behavior: the average bound function was called only once or twice on the websites we studied.

Future Plans

In future versions, we hope to add more variations of web applications and frameworks. If you know of any good demo web applications distributed under MIT license or BSD license that could be incorporated into Speedometer, please let us know.

With Speedometer, the web browser community now has a benchmark that measures the responsiveness of real-world web applications. We are looking forward to making further performance improvements in WebKit using Speedometer, and we hope other browser vendors will join us.

By Ryosuke Niwa at June 02, 2014 07:40 PM

May 13, 2014

Good-Looking Shapes Gallery

Adobe Web Platform

As a modern consumer of media, you rarely crack open a magazine or a pamphlet or anything that would be characterized as “printed”. Let me suggest that you take a walk on the wild side. The next time you are in a doctor’s office, or a supermarket checkout lane, or a library, thumb through a magazine. Most of the layouts you’ll find inside can also be found on the web, but not all of them. Layouts where content hugs the boundaries of illustrations are common in print and rare on the web. One of the reasons non-rectangular contour-hugging layouts are uncommon on the web is that they are difficult to produce.

They are not difficult to produce anymore.

The CSS Shapes specification is now in the final stages of standardization. This feature enables flowing content around geometric shapes (like circles and polygons), as well as around shapes defined by an image’s alpha channel. Shapes make it easy to produce the kinds of layouts you can find in print today, with all the added flexibility and power that modern online media affords. You can use CSS Shapes right now with the latest builds of WebKit and Blink based browsers, like Safari and Chrome.

Development of CSS Shapes has been underway for about two years, and we’ve been regularly heralding its progress here. Many of those reports have focused on the evolution of the spec and implementations, and they’ve included examples that emphasized basics over beauty. This article is an attempt to tilt the balance back towards good-looking. Listed below are simple shapes demos that we think look pretty good. Everyone on Adobe’s CSS Shapes engineering team contributed at least one.

There’s a live CodePen.io version of each demo in the gallery. Click on the demo screenshot or one of the handy links to take a look. You’ll want to view the demos with a browser that supports Shapes and you’ll need to enable CSS Shapes in that browser. For example you can use a nightly build of the Safari browser or you can enable shapes in Chrome or Chrome Canary like this:

  1. Copy and paste chrome://flags/#enable-experimental-web-platform-features into the address bar, then press enter.
  2. Click the ‘Enable’ link within that section.
  3. Click the ‘Relaunch Now’ button at the bottom of the browser window.

A few of the demos use the new Shapes Polyfill and will work in most browsers.

And now, without further ado, please have a look through our good-looking shapes gallery.


Ozma of Oz

ozma-demo-screenshot

This demo reproduces the layout style that opens many of the chapters of the L. Frank Baum books, including Ozma of Oz. The first page is often dominated by an illustration on the left or right. The chapter’s text conforms to the illustration, but not too tightly. The books were published over 100 years ago and they still look good in print. With CSS Shapes they can still look good on the web.

Top Cap

topcap-demo-screenshot

The conventional “drop-cap” opens a paragraph by enlarging and highlighting the first letter, word or phrase. The drop-cap’s goal is to draw your attention to where you can start reading. This demo delivers the same effect by crowning the entire opening paragraph with a “top cap” that funnels your attention into the article. In both cases, what’s going on is a segue from a graphic element to the text.

Violator

monsters-demo-screenshot

A violator is a small element that “violates” rectangular text layout by encroaching on a corner or a small part of an edge. This layout idiom is common in short-form magazines and product packaging. That “new and improved” banner which blazes through the corner of thousands of consumer products (whether or not they are new or improved) – it’s a violator.

Column Interest

columns-demo-screenshot

When a print magazine feels the need to incorporate some column layout melodrama, they often reach for this idiom. The shape spans a pair of columns, which creates visual interest in the middle of the page. Without it you’d be faced with a wall of attention-sapping text and more than likely turn the page.

Caption

Screenshot of the wine jug caption demo.

The old-school approach for including a caption with an image is to put the caption text alongside or below the image. Putting a caption on top of an image requires a little more finesse, since you have to ensure that the text doesn’t obscure anything important and that the text is rendered in a way that preserves readability.  The result can be relatively attractive.

This photograph was taken by Zoltan Horvath who has pointed out that I’ve combined a quote about tea with a picture of a ceremonial wine jug.  I apologize for briefly breaching that beverage boundary. It’s just a demo.

Paging

Screenshot of the paging demo.

With a layout like this, one could simply let the content wrap around the shape on the right and then expand into the usual rectangle. In this demo the content is served up a paragraph at a time, in response to the left and right arrow keys.

Note also: yes in fact the mate gourd is perched on exactly the same windowsill as the previous demo. Zoltan and Pope Francis are among the many fans of yerba mate tea.

Ersatz shape-inside

Screenshot of the ersatz shape-inside demo.

Originally the CSS Shapes spec included shape-inside as well as shape-outside. Sadly, shape-inside was promoted to “Level 2” of the spec and isn’t available in the current implementations. Fortunately for shape insiders everywhere, it’s still sometimes possible to mimic shape-inside with an adjacent pair of carefully designed shape-outside floats. This demo is a nice example of that, where the text appears inside a bowl of oatmeal.

Animation

animation-demo-screeenshot

This is an animated demo, so to appreciate it you’ll really need to take a look at the live version. It is an example of using an animated shape to draw the user’s attention to a particular message.  Of course one must use this approach with restraint, since an animated loop on a web page doesn’t just gently tug at the user’s attention. It drags at their attention like a tractor beam.

Performance

performance-demo-screenshot

Advertisements are intended to grab the user’s attention and a second or two of animation will do that. In this demo a series of transition motions have been strung together into a tiny performance that will temporarily get the reader’s attention. The highlight of the performance is – of course – the text snapping into the robot’s contour for the finale. Try and imagine a soundtrack that punctuates the action with some whirring and clanking noises; it’s even better that way.

By hmuller at May 13, 2014 05:38 PM

April 24, 2014

Adobe Web Platform Goes to the 2014 WebKit Contributors’ Meeting

Adobe Web Platform

Last week, Apple hosted the 2014 WebKit Contributors’ Meeting at their campus in Cupertino. As usual it was an unconference-style event, with session scheduling happening on the morning of the first day. While much of the session content was very specific to WebKit implementation, there were topics covered that are interesting to the wider web community. This post is a roundup of some of these topics from the sessions that Adobe Web Platform Team members attended.

CSS Custom Properties for Cascading Variables

Alan Stearns suggested a session on planning a new implementation of CSS Custom Properties for Cascading Variables. While implementations of this spec have been attempted in WebKit in the past, they never got past the experimental stage. Despite this, there is still much interest in implementing this feature. In addition, the current version of the spec has addressed many of the issues that WebKit contributors had previously expressed. We talked about a possible issue with using variables in custom property values, which Alan is investigating. More detail is available in the notes from the Custom Properties session.

CSS Regions

Andrei Bucur presented the current state of the CSS Regions implementation in WebKit. The presentation was well received and well attended. Notably, this was one of the few sessions with enough interest that it had a time slot all to itself.

While CSS Regions shipped last year in iOS 7 and Safari 6.1 and 7, the implementation in WebKit hasn’t been standing still. Andrei mentioned the following short list of changes in WebKit since the last Safari release:

  • correct painting of fragments and overflow
  • scrollable regions
  • accelerated content inside regions
  • position: fixed elements
  • the regionoversetchange event
  • better selection
  • better WebInspector integration
  • and more…

Andrei’s slides outlining the state of CSS Regions also contain a roadmap for the feature’s future in WebKit as well as a nice demo of the fix to fragment and overflow handling. If you are following the progress of CSS Regions in WebKit, the slides are definitely worth a look. (As of this writing, the Regions demo in the slides only works in Safari and WebKit Nightly.)

CSS Shapes

Zoltan Horvath, Bear Travis, and I covered the current state of CSS Shapes in WebKit. We are almost done implementing the functionality in Level 1 of the CSS Shapes Specification (which is itself a Candidate Recommendation, the last step before becoming an official W3C standard). The discussion in this session was very positive. We received good feedback on use cases for shape-outside and even talked a bit about the possibilities for when shape-inside is revisited as part of CSS Shapes Level 2. While I don’t have any slides or demos to share at the moment, we will soon be publishing a blog post to bring everyone up to date on the latest in CSS Shapes. So watch this space for more!

Subpixel Layout

This session was mostly about implementation. However, Zalan Bujtas drew an interesting distinction between subpixel layout and subpixel painting. Subpixel layout allows for better space utilization when laying out elements on the page, as boxes can be sized and positioned more precisely using fractional units. Subpixel painting allows for better utilization of high DPI displays by actually drawing elements on the screen using fractional CSS pixels (For example: on a 2x “Retina” display, half of a CSS pixel is one device pixel). Subpixel painting allows for much cleaner lines and smoother animations on high DPI displays when combined with subpixel layout. While subpixel layout is currently implemented in WebKit, subpixel painting is currently a work in progress.

Web Inspector

The Web Inspector is full of shiny new features. The front-end continues to shift to a new design, while the back-end gets cleaned up to remove cruft. The architecture for custom visual property editors is in place and will hopefully enable quick and intuitive editing of gradients, transforms, and animations in the future. Other goodies include new breakpoint actions (like value logging), a redesigned timeline, and IndexedDB debugging support. The Web Inspector still has room for new features, and you can always check out the #webkit-inspector channel on freenode IRC for the latest and greatest.

Web Components

The Web Components set of features continues to gather interest from the browser community. Web Components is made up of four different features: HTML Components, HTML Imports, Shadow DOM, and HTML Templates. The general gist of the talk was that the Web Components concepts are desirable, but there are concerns that the features’ complexity may make implementation difficult. The main concerns seemed to center around performance and encapsulation with Shadow DOM, and will hopefully be addressed with a prototype implementation of the feature (in the works). You can also take a look at the slides from the Web Components session.

CSS Grid Layout

The WebKit implementation of the CSS Grid Layout specification is relatively advanced. After learning in this session that the only way to test out Grid Layout in WebKit was to make a custom build with it enabled, session attendees concluded that it should be turned on by default in the WebKit Nightlies. So in the near future, experimenting with Grid Layout in WebKit should be as easy as installing a nightly build.

More?

As I mentioned earlier, this was just a high-level overview of a few of the topics at this year’s WebKit Contributors’ Meeting. Notes and slides for some of the topics not mentioned here are available on the 2014 WebKit Meeting page in the wiki. The WebKit project is always welcoming new contributors, so if you happen to see a topic on that wiki page that interests you, feel free to get in touch with the community and see how you can get involved.

Acknowledgements

This post would not have been possible without the notes and editing assistance of my colleagues on the Adobe Web Platform Team that attended the meeting along with me: Alan Stearns, Andrei Bucur, Bear Travis, and Zoltan Horvath.

By Bem Jones-Bey at April 24, 2014 05:23 PM

March 18, 2014

QtWebKit is no more, what now?

Gustavo Noronha

Driven by the technical choices of some of our early clients, QtWebKit was one of the first web engines Collabora worked on, building the initial support for NPAPI plugins and more. Since then we had kept in touch with the project from time to time when helping clients with specific issues, hardware or software integration, and particularly GStreamer-related work.

With Google forking Blink off WebKit, a decision had to be made by all vendors of browsers and platform APIs based on WebKit on whether to stay or follow Google instead. After quite a bit of consideration and prototyping, the Qt team decided to take the second option and build the QtWebEngine library to replace QtWebKit.

The main advantage of WebKit over Blink for engine vendors is the ability to implement custom platform support. That meant QtWebKit was able to use Qt graphics and networking APIs and other Qt technologies for all of the platform-integration needs. It also enjoyed the great flexibility of using GStreamer to implement HTML5 media. GStreamer brings hardware-acceleration capabilities, support for several media formats and the ability to expand that support without having to change the engine itself.

People who are using QtWebKit because it is GStreamer-powered will probably be better served by switching to one of the remaining GStreamer-based ports, such as WebKitGTK+. Those who don’t care about the underlying technologies but really need or want to use Qt APIs will be better served by porting to the new QtWebEngine.

It’s important to note though that QtWebEngine drops support for Android and iOS as well as several features that allowed tight integration with the Qt platform, such as DOM manipulation through the QWebElement APIs, making QObject instances available to web applications, and the ability to set the QNetworkAccessManager used for downloading resources, which allowed for fine-grained control of the requests and sharing of cookies and cache.

It might also make sense to go Chromium/Blink, either by using the Chrome Content API, or by switching to one of its siblings (QtWebEngine included), if the goal is to make a browser which needs no integration with existing toolkits or environments. You will be limited to the formats supported by Chrome and the hardware platforms targeted by Google. Blink does not allow multiple implementations of the platform support layer, so you are stuck with what upstream decides to ship, or with a fork to maintain.

It is a good alternative when Android itself is the main target. That is the technology used to build its main browser. The main advantage here is you get to follow Chrome’s fast-paced development and great support for the targeted hardware out of the box. If you need to support custom hardware or to be flexible on the kinds of media you would like to support, then WebKit still makes more sense in the long run, since that support can be maintained upstream.

At Collabora we’ve dealt with several WebKit ports over the years, and still actively maintain the custom WebKit Clutter port out of tree for clients. We have also done quite a bit of work on Chromium-powered projects. Some of the decisions you have to make are not easy and we believe we can help. Not sure what to do next? If you have that on your plate, get in touch!

By kov at March 18, 2014 07:44 PM

February 25, 2014

Improving your site’s visual details: CSS3 text-align-last

Adobe Web Platform

In this post, I want to give a status report regarding the text-align-last CSS3 property. If you are interested in taking control of the small visual details of your site with CSS, I encourage you to keep reading.

The problem

First, let’s talk about why we need this property. You’ve probably already seen many text blocks on pages that don’t quite seem visually correct, because the last line isn’t justified with the previous lines. Check out the example paragraph below:

Example of the CSS3 text-align-last property

In the first column, the last line isn’t justified. This is the expected behavior, when you apply the ‘text-align: justify’ CSS property on a container. On the other hand, in the second column, the content is entirely justified, including the last line.

The solution

The magic is the ‘text-align-last’ CSS3 property, which is set to justify on the second container. The text-align-last property is part of the CSS Text Module Level 3 specification, which is currently a working draft. It describes how the last line of a block, or a line right before a forced line break, is aligned when ‘text-align’ is ‘justify’, which means you gain full control over the alignment of the last line of a block. The property allows several more values, which you can read about in the WebPlatform.org docs or the CSS Text Module Level 3 W3C specification.
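To make the difference concrete, here is a minimal sketch of the two columns from the example above; the class names are my own, not from the original demo.

/* First column: justified text, the last line keeps its default alignment. */
.column-default {
    text-align: justify;
}

/* Second column: the last line is justified as well. */
.column-justified-last {
    text-align: justify;
    text-align-last: justify;
}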

A possible use case (Added April 2014)

After looking at the previous example (which focused on the functionality of the property), let’s move on to a more realistic use case. The feature is perfect for making multi-line captions look better. Check out the centered and the justified image caption examples below.

Centered and simply justified image caption examples

And now, compare them with a justified, multi-line caption, where the last line has been centered by text-align-last: center.
Justified image caption with text-align-last: center

I think the proper alignment of the last line gives the caption a much better overall look.
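The styling behind that last screenshot boils down to very little CSS. Here is a hedged sketch, assuming the caption lives in a figcaption element (and keeping in mind that a -webkit- prefix may be needed depending on the build you try it in):

figcaption {
    text-align: justify;       /* justify every line of the caption... */
    text-align-last: center;   /* ...except the last one, which is centered */
}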

Browser Support

I recently added rendering support for the property in WebKit (Safari) based on the latest specification. Dongwoo Joshua Im from Samsung added rendering support in Blink (Chrome). If you’d like to try it out in WebKit, you’ll need to make a custom developer build and use the CSS3 text support build flag (--css3-text).

The property is already included in Blink’s developer nightlies by default, so after launching your latest Chrome Canary, you only need to enable ‘Enable experimental Web Platform features’ under chrome://flags, and enjoy the full control over your last lines.

Developer note

Please keep in mind that both the W3C specification and the implementations are under experimental status. I’ll keep blogging about the feature and let you know if anything changes, including when the feature ships for production use!

By Zoltan Horvath at February 25, 2014 04:58 PM

January 05, 2014

Funding MathML Developments in Gecko and WebKit (part 2)

Frédéric Wang

As I mentioned three months ago, I wanted to start a crowdfunding campaign so that I can have more time to devote to MathML developments in browsers and (at least for Mozilla) continue to mentor volunteer contributors. Rather than doing several crowdfunding campaigns for small features, I finally decided to do a single campaign with Ulule so that I only have to worry about the funding once. Relying on a French/EU website also sounded more convenient for me regarding legal issues, taxes, etc. Also, just like Kickstarter, Ulule makes it possible to offer some "rewards" to backers according to their level of contribution, which gives a better way to motivate them.

As everybody following MathML activities has noticed, big companies/organizations do not want to significantly invest in funding MathML developments at the moment. So the rationale for a crowdfunding campaign is to rely on the support of the current community and on the help of smaller companies/organizations that have a business interest in it. Each one can give a small contribution and these contributions sum up to enough money to fund the project. Of course this model is probably not viable from a long-term perspective, but at least it allows us to start something instead of complaining without acting, and to show bigger actors that there is a demand for these developments. As indicated on the Ulule website, this is a way to start a relationship and to build a community around a common project. My hope is that it could lead to long-term funding of MathML developments and better partnerships between the various actors.

Because one of the main demands for MathML (besides accessibility) is in EPUB, I've included in the project goals a collection of documents that demonstrate advanced Web features with native MathML. That way I can offer more concrete rewards to people and federate them around the project. Indeed, much of the work needed to improve the MathML rendering requires some preliminary "code refactoring" which is not really exciting or immediately visible to users...

Hence I launched the crowdfunding campaign on the 19th of November and we reached 1/3 of the minimal funding goal in only three days! This was mainly thanks to the support of individuals from the MathML community. In mid-December we reached the minimal funding goal after a significant contribution from the KWARC Group (Jacobs University Bremen, Germany), with which I have been in communication since the launch of the campaign. Currently we are at 125%, which means that, minus the Ulule commission and my social/fiscal obligations, I will be able to work on the project for about 3 months.

I'd like to thank again all the companies, organizations and people who have supported the project so far! The crowdfunding campaign continues until the end of January, so I hope more people will get involved. If you want better MathML in Web rendering engines and ebooks, then please support this project, even with a symbolic contribution. If you want to make a more significant contribution as a company/organization, note that Ulule only provides a service to organize the crowdfunding campaign; otherwise the funding is legally treated as required by my self-employed status. Feel free to contact me with any questions on the project or funding and to discuss the long-term perspective.

Finally, note that I've used my savings and I plan to continue like that until the official project launch in February. Below is a summary of what has been done during the five weeks before the holiday season. This is based on my weekly updates for supporters, where you can also find references to the Bugzilla entries. Thanks to the Apple and Mozilla developers who spent time reviewing my patches!

Collection of documents

The goal is to show how to use existing tools (LaTeXML, itex2MML, tex4ht, etc.) to build EPUB books for science and education using Web standards. The idea is to cover various domains (maths, physics, chemistry, education, engineering...) as well as Web features. Given that many scientific circles are too biased toward "math on paper / PDF" and closed research practices, it may look innovative to use the Open Web, but to be honest the MathML language and its integration with other Web formats has been well established for a long time. Hence in theory it should "just work" once you have native MathML support, without any convolutions or hacks. Here are a couple of features that are tested in the sample EPUB books that I wrote:

  • Rendering of MathML equations (of course!). Since the screen size and resolution vary for e-readers, automatic line breaking / reflowing of the page is "naturally" tested and is an important distinction with respect to paper / PDF documents.
  • CSS styling of the page and equations. This includes using (Web) fonts, which are very important for mathematical publishing.
  • Using SVG schemas and how they can be mixed with MathML equations.
  • Using non-ASCII (Arabic) characters and RTL/LTR rendering of both the text and equations.
  • Interactive document using Javascript and <maction>, <input>, <button> etc. For those who are curious, I've created some videos for an algebra course and a lab practical.
  • Using the <video> element to include short sequences of an experiment in a physics course.
  • Using the <canvas> element to draw graphs of functions or of physical measurements.
  • Using WebGL to draw interactive 3D schemas. At the moment, I've only adapted a chemistry course and used ChemDoodle to load Crystallographic Information Files (CIF) and provide a 3D representation of crystal structures. But of course, there is no problem combining MathML equations with WebGL to create other kinds of scientific 3D schemas.

WebKit

I've finished some work I started as a MathJax developer, including the maction support requested by the KWARC Group. I then tried to focus on the main goals: rendering of token elements and more specifically operators (spacing and stretching).

  • I improved LTR/RTL handling of equations (full RTL support is not implemented yet and not part of the project goal).
  • I improved the maction elements and implemented the toggle actiontype.
  • I refactored the code of some "mrow-like" elements to make them all behave like an <mrow> element. For example, while WebKit stretched (some) operators in <mrow> elements, it could not stretch them in <mstyle>, <merror>, etc. Similarly, this refactoring will be needed to implement correct spacing around operators in <mrow> and other "mrow-like" elements.
  • I analyzed more carefully the vertical stretching of operators. I see at least two serious bugs to fix: baseline alignment and stretch size. I've uploaded an experimental patch to improve that.
  • Preliminary work on the MathML Operator Dictionary. This dictionary contains various properties of operators like spacing and stretchiness and is fundamental for later work on operators.
  • I have started to refactor the code for mi, mo and mfenced elements. This is also necessary for many serious bugs like the operator dictionary and the style of mi elements.
  • I have written a patch to restore support for foreign objects in annotation-xml elements and to implement the same selection algorithm as Gecko.

Gecko

I've continued to clean up the MathML code and to mentor volunteer contributors. The main goal is the support for the Open Type MATH table, at least for operator stretching.

  • Xuan Hu's work on the <mpadded> element landed in trunk. This element is used to modify the spacing of equations, for example by some TeX-to-MathML generators.
  • On Linux, I fixed a bug with preferred widths of MathML token elements. Concretely, when equations are used inside table cells or similar containers there is a bug that makes equations overflow the containers. Unfortunately, this bug is still present on Mac and Windows...
  • James Kitchener implemented the mathvariant attribute (e.g used by some tools to write symbols like double-struck, fraktur etc). This also fixed remaining issues with preferred widths of MathML token elements. Khaled Hosny started to update his Amiri and XITS fonts to add the glyphs for Arabic mathvariants.
  • I finished Quentin Headen's code refactoring of mtable. This allowed to fix some bugs like bad alignment with columnalign. This is also a preparation for future support for rowspacing and columnspacing.
  • After the two previous points, it was finally possible to remove the private "_moz-" attributes. These were visible in the DOM or when manipulating MathML via Javascript (e.g. in editors, tree inspector, the html5lib etc)
  • Khaled Hosny fixed a regression with script alignments. He started to work on improvements regarding italic correction when positioning scripts. Also, James Kitchener made some progress on script size correction via the Open Type "ssty" feature.
  • I've refactored the stretchy operator code and prepared some patches to read the OpenType MATH table. You can try experimental support for new math fonts with e.g. Bill Gianopoulos' builds and the MathML Torture Tests.

Blink/Trident

MathML development in Chrome or Internet Explorer is not part of the project goal, even if MathML improvements to WebKit could hopefully be imported into Blink in the future. Users keep asking for MathML in IE and I hope that a solution will be found to save MathPlayer's work. In the meantime, I've sent a proposal to Google and Microsoft to implement fallback content (alttext and semantics annotation) so that authors can use it. This is just a couple of CSS rules that could be integrated into the user agent style sheet. Let's see which of the two companies is the more reactive...
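To give an idea of what such rules could look like, here is a hedged sketch of fallback styling for an engine with no MathML layout at all; the selectors are my own guess, not the rules actually sent in the proposal.

/* Hide the unhandled MathML markup and surface the author-provided alttext. */
math > * {
    display: none;
}
math::before {
    content: attr(alttext);
}
/* For <semantics>, an alternative would be to render only one chosen child
   (for example an HTML or image annotation) instead of hiding everything. */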

By fredw at January 05, 2014 07:45 PM

December 11, 2013

WebKitGTK+ hackfest 5.0 (2013)!

Gustavo Noronha

For the fifth year in a row the fearless WebKitGTK+ hackers have gathered in A Coruña to bring GNOME and the web closer. Igalia has organized and hosted it as usual, welcoming a record 30 people to its office. The GNOME Foundation sponsored my trip, allowing me to fly the cool 18-seat propeller airplane from Lisbon to A Coruña, which is a nice adventure, and to have pulpo a feira for dinner, which I simply love! That in addition to enjoying the company of so many great hackers.

Web with wider tabs and the new prefs dialog

Web with wider tabs and the new prefs dialog

The goals for the hackfest have been ambitious, as usual, but we made good headway on them. Web the browser (AKA Epiphany) has seen a ton of little improvements, with Carlos splitting the shell search provider into a separate binary, which allowed us to remove some hacks from the browser’s session management code. It also makes testing changes to Web more convenient again. Jon McCann has been pounding at Web’s UI to make it sleeker, with tabs that expand to make better use of the available horizontal space in the tab bar, and new dialogs for preferences, cookies and password handling. I have made my tiny contribution by making it no longer keep around tabs that were created just for what turned out to be a download. For this last day of the hackfest I plan to also fix an issue with text encoding detection and help track down a hang that happens upon page load.

Martin Robinson and Dan Winship hack

Martin Robinson and Dan Winship hack

Martin Robinson and I have, as usual, dived into the more disgusting and wide-reaching maintainership tasks that we have lots of trouble pushing forward in our day-to-day lives. Porting our build system to CMake has been one of these long-term goals, not because we love CMake (we don’t) or because we hate autotools (we do), but because it should make people’s lives easier when adding new files to the build, and should also make our build less hacky and quicker. It is sad to see how slow our build can be compared to something like Chromium, and we think a big part of the problem lies in how complex and dumb autotools and make can be. We have picked up a few of our old branches, brought them up to date and landed them, which now lets us build the main WebKit2GTK+ library through CMake in trunk. This is an important first step, but there’s plenty to do.

Hackers take advantage of the icecream network for faster builds

Hackers take advantage of the icecream network for faster builds

Under the hood, Dan Winship has been pushing HTTP/2 support for libsoup forward, with a dead-tree version of the spec by his side. He is refactoring libsoup internals to accommodate the new code paths. Still on the HTTP front, I have been updating soup’s MIME type sniffing support to match the newest living specification, which covers several new types and a new security feature introduced by Internet Explorer and later adopted by other browsers. The huge task of preparing the ground for one process per tab (or other kinds of process separation; this will still be a topic for discussion for a while) has been pushed forward by several hackers, with Carlos Garcia and Andy Wingo leading the charge.

Jon and Guillaume battling code

Jon and Guillaume battling code

Other than that, I have been putting in some more work on improving the integration of the new Web Inspector with WebKitGTK+. Carlos has reviewed the patch to allow attaching the inspector to the right side of the window, but we have decided to split it in two: one part providing the functionality and one the API that will allow browsers to customize how that is done. There’s a lot of work to be done here; I plan to land at least this first patch during the hackfest. I have also fought one more battle in the never-ending User-Agent sniffing war, which it looks like we cannot win.

Hackers chillin' at A Coruña

Hackers chillin’ at A Coruña

I am very happy to be here for the fifth year in a row, and I hope we will be meeting here for many more years to come! Thanks a lot to Igalia for sponsoring and hosting the hackfest, and to the GNOME foundation for making it possible for me to attend! See you in 2014!


By kov at December 11, 2013 09:47 AM

October 15, 2013

Funding MathML Developments in Gecko and WebKit

Frédéric Wang

update 2013-10-15: since I got feedback, I have to say that my funding plan is independent of my work at MathJax; I'm not a MathJax employee but an independent contractor. Actually, I already used my business to fund an intern for Gecko MathML developments during Summer 2011 :-)

Retrospect

Since last April, I have been allowed by the MathJax Consortium to dedicate a small amount of my time to do MathML development in browsers, until possibly more serious involvement later. At the same time, we mentioned this plan to Google developers but unfortunately they just decided to drop the WebKit MathML code from Blink, making external contributions hard and unwelcome...

Hence I have focused mainly on Gecko and WebKit: You can find the MathML bugs that have been closed during that period on bugzilla.mozilla.org and bugs.webkit.org. For Gecko, this has allowed me to finish some of the work I started as a volunteer before I was involved full-time in MathJax as well as to continue to mentor MathML contributors. Regarding WebKit, I added a few new basic features like MathML lengths, <mspace> or <mmultiscripts> while I was getting familiar with the MathML code and WebKit organization/community. I also started to work on <semantics> and <maction>. More importantly, I worked with Martin Robinson to address the design concerns of Google developers and a patch to fix these issues finally landed early this week.

However, my progress has been slow so as I mentioned in my previous blog post, I am planning to find a way to fund MathML developments...

Why funding MathML?

Note: I am assuming that the readers of this blog know why MathML is important and are aware of the benefits it can bring to the Web community. If not, please check Peter Krautzberger's Interview by Fidus Writer or the MozSummit MathML slides for a quick introduction. Here my point is to explain why we need more than volunteer-driven development for MathML.

First the obvious thing: Volunteer time is limited, so if we really want to see serious progress in MathML support we need to give a boost to MathML developments. E-book publishers/readers, researchers/educators who are stuck outside the Web in a LaTeX-to-PDF world, developers/users of accessibility tools and the MathML community in general want good math support in browsers now, and do not want to wait another 15 years until all layout engines catch up with Gecko or until the old Gecko bugs are fixed.


There are classic misunderstandings from people thinking that non-native MathML solutions and other polyfills are the future, or that math on the Web could be implemented via PNG/SVG images or Web Components. Just open a math book and you will see that, for example, inline equations must be correctly aligned with the text and participate in line wrapping. Moreover, we are considering math on the Web, not math on paper, so we want it to be compatible with HTML, SVG, CSS, JavaScript, Unicode, bidi, etc., and also something that is fast and responsive. Technically, this means that a clean solution must live in the core rendering engine, spread over several parts of the code, and must interact strongly with the various components like the HTML5 parser, the layout tree, the graphics and font libraries, the DOM module, the style tree and so forth. I cannot think of any volunteer-driven Blink/Gecko/WebKit feature off the top of my head that has this characteristic, and actually even SVG or any other language for graphics has less interaction with HTML than MathML has.

The consequence of this is that it is extremely difficult for volunteers to get involved in native MathML and to make quick progress, because they have to understand how the various components of the Blink/Gecko/WebKit code work and be sure to do things correctly. Good mathematical rendering is hard by itself, and it is even more complicated when you are not writing an isolated rendering engine for math over which you have full control. Also, working at the Blink/Gecko/WebKit level requires above-average technical skills, so finding volunteers who can work with the high-minded engineers of the big browser companies is not easy. For instance, among the enthusiastic people coming to me and willing to help MathML in Gecko, many got stuck when, for example, they tried to build the Firefox source or do something more advanced, and I never heard back from them. In the other direction, paid Blink/Gecko/WebKit developers are generally not familiar with MathML and do not have time to learn more about it, and thus cannot always provide a relevant review of the code, or they may break something while trying to modify code they do not entirely understand. Moreover, both the volunteers and the paid staff have only a small amount of time to do MathML work while the other parts of the engine evolve very quickly, so it's sometimes hard to keep everything in sync. Finally, the core layout engines have strong security requirements that are difficult to satisfy in a volunteer-driven situation...

Beyond volunteer-driven MathML developments

At that point, there are several options. First the lazy one: Give up on native math rendering, focus only on features that have an impact on the widest Web audience (i.e. those that would allow browser vendors to gain more market share and thus increase their profit), thank the math people for creating the Web and kindly ask them to use whatever hacks they can imagine to display equations on the Web. Of course, as a Mozillian, I think people must decide the Web they want, and thus I exclude this option.

Next there is the ingenuous option: Expect that browser companies will understand the importance of math-on-the-Web and start investing seriously in MathML support. However, Netscape and Microsoft rejected the <MATH> tag from the 1995 HTML 3.0 draft and the browser companies have kept repeating they would only rely on volunteer contributions to move MathML forward, despite the repeated requests from MathML folks and other scientific communities. So that option is excluded too, at least in the short to medium term.

That leaves the ambitious option: Math people and other interested parties must get together and try to fund native MathML developments. Despite the efforts of my manager at MathJax to convince partners and raise funds, my situation has not changed much since April and it is not clear when/if the MathJax Consortium can take the lead in native MathML developments. Given my expertise in Gecko, WebKit and MathML, I feel a duty to do something. Hence I wish to reorganize my work time: decrease my involvement in MathJax core in order to increase my involvement in Gecko/WebKit developments. But I need the help of the community for that purpose. If you run a business with an interest in math on the Web and are willing to fund my work, then feel free to contact me directly by mail for further discussion. In the short term, I want to experiment with crowd funding as discussed in the next section. If this is successful we can think of a better organization for MathML developments in the long term.

Crowd Funding

Wikipedia defines crowd funding as "the collective effort of individuals who network and pool their money, usually via the Internet, to support efforts initiated by other people or organizations". There are several crowd funding platforms with similar rules and interfaces. I am considering Catincan, which specializes in Open Source crowd funding, can be used by any backer/developer around the world, can rely on Bugzilla to track the bug status, and seems to have a good process for collecting the funds from backers and paying developers. You can easily log in to the Catincan website if you have a GitHub, Facebook or Google account (apparently Persona is not supported yet...). Finally, it seems to have a communication interface between backers and developers, so that everybody can follow the development of the funded features.

One distinctive feature of Catincan is that only well-established Open Source projects can be funded and only developers from these projects can propose and work on the new features, so that backers can trust that the features will be implemented. Of course, I have been working on the Gecko, WebKit and MathML projects, so I hope people believe I sincerely want to improve MathML support in browsers and that I have the skills to do so ;-)

As said in my previous blog post, it is not clear at all (at least to me) whether crowd funding can be a reliable method, but it is worth trying. There are many individuals and small businesses showing interest in MathML without the technical knowledge or appropriate staff to improve MathML in browsers. So if each one funds a small amount of money, perhaps we can get somewhere.

One constraint is that each feature has 60 days to reach its funding goal. I have no idea how many people are willing to contribute to MathML or how much money they can give. The statistical sample of projects currently funded is too small to extract relevant information. However, I essentially see two options: either propose small features and split the big ones into small steps, so that each Catincan submission needs less work/money and improvements are progressive with regular feedback to backers; or propose larger features so they look more attractive and exciting to people and require less frequent Catincan submissions. At the beginning I plan to start with the former and, if the crowd funding is successful, perhaps try the latter.

Status in Open Source Layout Engines

Note: Obviously, Open Source Crowd Funding does not apply to Internet Explorer, which is the one main rendering engine not mentioned below. Although Microsoft has done a great job on MathML for Microsoft Word, they did not give any public statement about MathML in Internet Explorer and all the bug reports for MathML have been resolved "by design" so far. If you are interested in MathML rendering and accessibility in Internet Explorer, please check Design Science blog for the latest updates and tools.

Blink

Note: I am actually focusing on the history of Chromium here but of course there are other Blink-based browsers. Note that programs like QtWebEngine (formerly WebKit-based) or Opera (formerly Presto-based) lost the opportunity to get MathML support when they switched to Blink.

Alex Milowski and François Sausset's first MathML implementation did not pass Google's security review. Dave Barton fixed many issues in that implementation and as far as I know, there were not any known security vulnerabilities when Dave submitted his last version. MathML was enabled in Chrome 24 but Chrome developers had some concerns about the design of the MathML implementation in WebKit, which indeed violated some assumptions of WebKit layout code. So MathML was disabled in Chrome 25 and as said in the introduction, the source code was entirely removed when they forked.

Currently, the Chromium Dashboard indicates that MathML is shipped in Firefox/Safari, has positive feedback from developers and is an established standard, but the Chromium status remains "No active development". If I understand correctly, Google's official position is that they do not plan to invest in MathML development but will accept external contributions and may re-enable MathML when it is ready (for some sense of "ready" to be defined). Given the MathML story in Chrome, it seems really unlikely that any volunteer will magically show up and be willing to submit a MathML patch. Incidentally, note the interesting work of the ChromeVox team regarding MathML accessibility: their recent video provides a good overview of what they have achieved (where Volker Sorge politely regrets that "MathML is not implemented in all browsers").

Although Google's design concerns have now been addressed in WebKit, one serious remark from a Google engineer is that the WebKit MathML implementation is of too low quality to be shipped, so they prefer to have no MathML at all. As a consequence, the best short-term strategy seems to be improving WebKit MathML support and, once it is good enough, submitting a patch to Google. The immediate corollary is that if you wish to see MathML in Chrome or other Blink-based browsers, you should help WebKit MathML development. See the next section for more details.

Chromatic

Actually, I tried to import MathML into Blink one day this summer. However, there were divergences between the WebKit and Blink code bases that made that a bit difficult. I do not plan to try again anytime soon, but if someone is interested, I have published my script and patch on GitHub. Note there may be even more divergences now and the patch is certainly bit-rotten. I also thought about creating/maintaining a "Chromatic" browser (Chrome + mathematics) that would be a temporary fork to let Blink users benefit from native MathML until it is integrated back in Blink. But at the moment, that would probably be too much effort for one person and I would prefer to focus on WebKit/Gecko developments for now.

WebKit

The situation in WebKit is much better. As said before, Google's concerns have now been addressed and MathML will be enabled again in all WebKit releases soon. Martin Robinson is interested in helping MathML development in WebKit, and his knowledge of fonts will be important for improving operator stretching, which is one of the biggest issues right now. One new volunteer contributor, Gurpreet Kaur, also started to do some work on WebKit, like support for the *scriptshift attributes or for the <menclose> element. Last but not least, a couple of Apple/WebKit developers reviewed and accepted patches and even helped fix a few bugs, which made it possible to move development forward.

WebKineTic

When he was still working on WebKit, Dave Barton opened bug 99623 to track the top priorities. When the bugs below and their related dependencies are fixed, I think the rendering in WebKit will be good enough to be usable for advanced math notations and WebKit will pass the MathML Acid 1 test.

  • Bug 44208: For example, in an expression like sin(x), the "x" should be in italic but not the "sin". This is actually slightly more complicated: the bug is about when the default mathvariant value must be normal or italic. mathvariant is more like the text-transform CSS property in the sense that it remaps some characters to the corresponding mathematical characters (italic, bold, fraktur, double-struck...); for example, an A with mathvariant="fraktur" should render exactly the same as 𝔄 (U+1D504). By the way, there is the related bug 24230 on Windows, which prevents the use of plane 1 characters. The best solution will probably be to implement mathvariant correctly. See also Gecko's ongoing work by James Kitchener below.
  • Bug 99618: Implement <mmultiscripts>, which allows expressions with pre- and post-scripts such as the isotope notation ¹⁴₆C or tensor expressions like R^{ij}_{;j} = ½ S^{;i}. As said in the introduction, this is fixed in WebKit Nightly.
  • Bug 99614: Support for stretchy operators, like parentheses that must grow to enclose a tall expression such as a root of z₁+z₂ raised to the fourth power. Currently, WebKit can only stretch operators vertically using a few Unicode constructions like ⎛ (U+239B) + ⎜ (U+239C) + ⎝ (U+239D) for the left parenthesis. Essentially only similar delimiters like brackets, braces, etc. are supported. For small sizes like ( or for large operators like a big summation sign, it is necessary to use non-Unicode glyphs from various math fonts, but this is not possible in WebKit MathML yet. All of this will require a fair amount of work: implementing horizontal stretching, font-specific stuff, largeop/symmetric properties, etc.
  • Bug 99620: Implement the operator dictionary. Currently, WebKit treats all operators the same way, so for example it will use the same 0.2em spacing before and after parentheses, equal signs or invisible operators in an expression like f(x) = x². Instead it should use the information provided by the MathML operator dictionary. This dictionary also specifies whether operators are stretchy, symmetric or largeop and thus is related to the previous point.
  • Bug 119038: Use the code for vertical stretchy operators to draw the radical symbols in roots like ∛2. Currently, WebKit uses graphic primitives which do not give a really good rendering.
  • Bug 115610: Implement <mspace> which is used by many MathML generators to do some spacing in mathematical formulas. As said in the introduction, this is fixed in WebKit Nightly.

In order to pass the Mozilla MathML torture tests, at least displaystyle and scriptlevel must be implemented too, probably as internal CSS properties. This should also make it possible to pass Joe Java's MathML test, although that one relies on the infamous <mfenced>, which duplicates the stretchy operator features and is implemented inconsistently in rendering engines. I think passing the MathML Acid 2 test will require slightly more effort, but I expect this goal to be achievable if I have more time to work on WebKit:

  • Bug 115610: Implement <mspace>. Fixed!
  • Bug 120164: Implement negative spacing for <mspace> (I have an experimental patch).
  • Bug 85730: Implement <mpadded>, which is also used by MathML generators to do some tweaking of formulas. I have only done some experiments; this would be a generalization of <mspace>.
  • Bug 85733: Implement the href attribute; well, I guess the name is explicit enough to understand what it is used for! I only have an experimental patch here too. That would be mimicking what is done in SVG or HTML.
  • Bug 120059 and bug 100626: Implement <maction> (at least partially) and <semantics>, which have been asked by long-time MathML users Jacques Distler and Michael Kohlhase. I have patches ready for that and this could be fixed relatively soon, I just need to find time to finish the work.

In general, passing the MathML Acid 2 test is not too hard: you only need to implement those few MathML elements whose exact rendering is clearly defined by the MathML specification. Passing the MathML Acid 3 test is not expected in the medium term. However, the score will naturally increase as we improve the WebKit MathML implementation. The priority is to implement what is currently known to be important to users. To give examples of bugs not previously mentioned: implementing menclose or fixing various DOM issues like bugs 57695, 57696 or 107392.

More advanced features like those mentioned in the next section for Gecko are probably worth considering later (Open type MATH, linebreaking, mlabeledtr...). It is worth noting that Apple has already done some work on accessibility (with MathML being readable by VoiceOver in iOS7), authoring and EPUB (MathML is enabled in WebKit-based ebook readers and ibooks-author has an integrated LaTeX-to-MathML converter).

Gecko

Mathzilla

In general I think I have a good relationship with the Mozilla community and most people have respect for the work that has been done by volunteers for almost 15 years now. The situation has greatly improved since I joined the project; at that time some people claimed the Mozilla MathML project was dead after Roger Sidge's departure. One important point is that Karl Tomlinson worked on repairing the MathML support when Roger Sidge left the project, so there is at least one Mozilla employee with good knowledge of MathML who can review the volunteer patches. Another key ingredient is the work Mozilla has recently done to increase engagement of the volunteer community, like good documentation on MDN, the #Introduction channel, Josh Matthews' mentored bugs and of course programs like GSOC. However, as said above, it is one thing to attract enthusiastic contributors and another to get long-term contributors who can work independently on more advanced features. So let's go back to my latest Roadmap for the Mozilla MathML Project and see what has been accomplished in one year:

  • Font support: Dmitry Shachnev created a Debian package for the MathJax fonts and Mike Hommey added the MathJax and Asana fonts to the list of suggested packages for Iceweasel. The STIX fonts have also been updated in Fedora and are installed by default on Mac OS X Lion (10.7). For Linux distributions, it would be helpful to implement Auto Installation Support. The bug to add mathematical fonts to Android was assigned in June but no more progress has happened so far. Henri Sivonen opened a bug for FirefoxOS but there has not been any progress there either. I had some patches to restore the "missing MathML fonts" warning (using an information bar) but they were refused by Firefox reviewers. However, the code to detect missing MathML fonts could still be used for the similar bug 648548, which also seems inactive since January. There are still some issues on the MathJax side that prevent integrating Web fonts for the native MathML output mode. So at the moment the solution is still to inform visitors about MathML fonts or to add MathML Web fonts to your Web site. Khaled Hosny (font and LaTeX expert) recently updated my patches to prepare support for OpenType fonts and he offered to help on that feature. After James Kitchener's work on mathvariant, we realized that we will probably need to provide Arabic mathematical fonts too.
  • Spacing: Xuan Hu continued to work on <mpadded> improvements and I think his patch is close to being accepted. Quentin Headen made some progress on <mtable> before focusing on his InstantBird GSOC project. He is still far from being able to work on mtable@rowspacing/columnspacing, but a workaround for that has been added to MathJax. I fixed the negative space regression, which was needed to pass the MathML Acid 2 test and is used in MathJax. Again, Khaled Hosny is willing to help use the spacing from the OpenType MATH table, but that will still be a lot of work.
  • <mlabeledtr>: A workaround for native MathML has been added in MathJax.
  • Linebreaking: No progress except that I have worked on fixing a bug with intrinsic width computation. The unrelated printing issues mentioned in the blog post have been fixed, though.
  • Operator Stretching: No progress. I tried to analyze the regression more carefully, but nothing is ready yet.
  • Tabular elements: As said above, Quentin Headen has worked a bit on cleaning up <mtable> but not much improvements on that feature so far.
  • Token elements: My patch for <ms> landed and I have made significant progress on the bad measurement of intrinsic widths for token elements (however, the fix only seems to work on Linux right now). James Kitchener has taken over my work on improving our mathvariant support and doing related refactoring of the code. I am confident that he will have something ready soon. The primes in exponents should render correctly with MathJax fonts, but for other math fonts we will have to do some glyph substitutions.
  • Dynamic MathML: No progress here but there are not so many bugs regarding Javascript+MathML, so that should not be too serious.
  • Documentation: It is now possible to use MathML in code samples or directly in the source code on MDN. The MathML project pages have been entirely migrated to MDN. Also, Florian Scholz has recently been hired by Mozilla as a documentation writer (congrats!) and will in particular continue the work he started as a volunteer to document MathML on MDN.

I apologize to volunteers who worked on bugs that are not mentioned above or whose documentation or testing work does not appear here. For a complete list of activity since September 2012, Bugzilla is your friend. There are two ways to consider the progress above. If you see the glass half full, then you see that several people have continued the work on various MathML issues, they have made some progress and we now pass the MathML Acid 2 test. If you see the glass half empty, then you see that most issues have not been addressed yet, in particular those that are blocking native MathML from being enabled in MathJax: bug 687807, bug 415413, the math font issues discussed in the first point, and perhaps linebreaking too. That is why I believe we should go beyond volunteer-driven MathML developments.

Most of the bugs mentioned above are tested by the MathML Acid 3 tests and we will win a few points when they are fixed. Again, passing MathML Acid 3 test is not a goal by itself so let's consider what are the big remaining areas it contains:

  • Improving Tabular Elements and Operator Stretching, which are obviously important and used a lot in e.g. MathJax.
  • Linebreaking, which as I said is likely to become fundamental with small screens and ebooks.
  • Elementary Mathematics (you know addition, subtraction, multiplication, and division that kids learn), which I suspect will be important for educational tools and ebooks.
  • Alignment: This is the one part of MathML that I am not entirely sure is relevant to work on in the short term. I understand it is useful for advanced layout but most MathML tools currently just rely on tables to do that job and as far as I know the only important engine that implements that is MathPlayer.

Finally there are other features outside the MathML rendering engines that I also find important but for which I have less expertise:

  • Transferring MathML, that is, implementing copy/cut/drag and paste. Currently, we can do that by treating MathML as normal HTML5 code or by using the "show MathML source" feature and copying the source code. However, it would be best to implement a standard way to communicate with other MathML applications like Microsoft Word, Mathematica, Maple, Windows' Handwriting panel, etc. I wrote some work-in-progress patches last year.
  • Authoring MathML: Essentially implementing things like deletion, insertion and maybe simple MathML token creation in Gecko's core editor, which is used by BlueGriffon, KompoZer, SeaMonkey, Thunderbird or even MDN. Other things, like integrating JavaScript parsers (e.g. ASCIIMath) or equation panels with symbol buttons, are probably better done at the higher JS/HTML/XUL level. Daniel Glazman already wrote math input panels for BlueGriffon and Thunderbird.
  • MathML Accessibility: This is one important application of MathML for which there is strong demand and where Mozilla is behind the competitors. James Teh started some experimental work on his NVDA tool before the summit.
  • EPUB reader for FirefoxOS (and other mobile platforms): During the "Co-creating Action Plans" session, the Mozilla Taipei people were thinking about missing features for FirefoxOS and this idea about EPUB reader was my modest contribution. There are a few EPUB readers relying on Gecko and it would be good to check if they work in FirefoxOS and if they could be integrated by default, just like Apple has iBooks. BTW, there is a version of BlueGriffon that can edit EPUB books.

Conclusion

I hope I have convinced some readers of the need to fund MathML in browsers. There is a lot of MathML work to do on Gecko and WebKit, but both projects have volunteers and core engineers who are willing to help. There are also several individuals/companies relying on MathML support in rendering engines for their projects who could support the MathML developments in some way. I am willing to put more of my time into Gecko and WebKit developments, but I need financial help for that purpose. I'm proposing Catincan crowd funding in the short term so that anyone can contribute at the appropriate level, but other ways to fund the MathML development can be found, like asking Peter Krautzberger about native MathML funding in MathJax, discussing with Igalia about funding Martin Robinson to work more on WebKit MathML, or contacting me directly to establish some kind of part-time consulting agreement.

Please leave a comment on this blog or send me a private mail if you agree that funding MathML in browsers is important, if you like the crowd funding idea and plan to contribute, or if you have any opinions about alternative funding options. Also, please tell me what the priority seems to be for you and your projects among what I have mentioned above (layout engines, features, etc.) or among others that I may have forgotten. Of course, any other constructive comment to help MathML support in browsers is welcome. I plan to submit features on Catincan soon, once I have more feedback on what people are interested in. Thank you!

By fredw at October 15, 2013 09:05 PM

October 07, 2013

Post-Summit Thoughts on the MathML Project

Frédéric Wang

I'm back from a great Mozilla Summit 2013 and I'd just like to write a quick blog post about the MathML booths at the Innovation Fairs. I have not yet had the opportunity to talk with the MathML people who ran the booth at Santa Clara. However, everything went pretty well at Brussels, modulo of course some demos failing when done live... If you are interested, the slides and other resources are available on my GitHub page.

Many Mozillians did not know about MathML or that it had been available in Gecko since the early days of the Mozilla project. Many people who use math (or who just know someone who does) were curious about that feature and excited about MathML's potential. I appreciated getting this positive feedback from Mozillians willing to use math on the Web and related media, instead of the scorn or hatred I sometimes see from misinformed people. I expect to provide more updates on LaTeXML, MediaWiki Math and MathJax when their next versions are released. Gecko's MathML support improves slowly, but there has been interesting work by James Kitchener recently that I'd like to mention too.

MathZilla on blackboard

Let's do an estimation à la Fermi: only a few volunteers have been contributing regularly and simultaneously to MathML in Gecko, while most Mozilla-funded Gecko projects certainly have development teams that are 3 times as large. Let's be optimistic and assume that these volunteers have been able to dedicate a mean of 1 work day per week, compared to 5 for full-time staff. Given that the Mozilla MathML project will celebrate its 15th anniversary next May, that means the volunteer work transposed into paid-staff time is only 15 / (3 × 5) = 1 year. To be honest, I'm disregarding here the great work done by the Mozilla NZ team around 2007 to repair MathML after the Cairo migration. But still, what we have achieved in quality and completeness with such limited resources and time is really impressive.

As someone told me at the MathML booth, it's really frustrating that something so important to the small portion of math-educated people is ignored because it is useless to the vast majority. This is not entirely true, since even the elementary mathematics taught at school, like the formulas in this blog post, are not easily expressed with standard HTML, and even less in a way accessible to people with visual disabilities. However, it summarizes well the feeling MathML folks had when they tried to convince Google to accept the volunteer work on MathML, despite its low quality.

As explained in the Summit sessions, Mozilla's mission is different and the goal is to give people the right to control the Web they want. The MathML project is perhaps one of the oldest and most successful volunteer-driven Mozilla projects that is still active, and it demonstrates the idea of Mozilla's mission concretely, with e.g. the work of Roger Sidge, who started to write the MathML implementation when Netscape opened its source code, or that of Florian Scholz, who made MDN one of the most complete Web resources for MathML.

Mozilla Corporation has kept saying they don't want to invest in MathML developments and the focus right now is clearly on other features like FirefoxOS. Even projects that have a larger audience than MathML support, like the mail client or the editor, are not among the priorities, so someone else definitely needs to step in for MathML. I've tried various methods, with more or less success, to boost the MathML developments, like mentoring a GSoC project, funding a summer internship or relying on mentored bugs. I'm now considering crowd funding to help the MathML developments in Gecko (and WebKit). I don't want to do another Fermi estimation now, but at first glance that looks like a very unreliable method. The only revenue generated by the MathML project so far is the 2 × π ≈ 2 × 3.14 = 6.28 dollars donated to the Mozilla Foundation via contributions to my MathML-fonts add-on, so it's hard to get an idea of how much people would contribute to the Gecko implementation. However, that makes sense since the only people who have shown interest in native MathML support so far are individuals or small businesses (e.g. working on EPUB or accessibility), and I think it's worth trying anyway. That's definitely something I'll consider after MathJax 2.3 is released...

By fredw at October 07, 2013 04:18 PM

August 27, 2013

HTML Alchemy – Combining CSS Shapes with CSS Regions

Adobe Web Platform

Note: Support for shape-inside is only available until the following nightly builds: WebKit r166290 (2014-03-26); Chromium 260092 (2014-03-28).


I have been working on rendering for almost a year now. Now that I have landed the initial implementation of Shapes on Regions in both Blink and WebKit, I'm incredibly excited to talk a little bit about these features and how you can combine them.


Don’t know what CSS Regions and Shapes are? Start here!

The first ingredient in my HTML alchemy kitchen is CSS Regions. With CSS Regions, you can flow content into multiple styled containers, which gives you enormous creative power to make magazine style layouts. The second ingredient is CSS Shapes, which gives you the ability to wrap content inside or outside any shape. In this post I’ll talk about the “shape-inside” CSS property, which allows us to wrap content inside an arbitrary shape.

Let’s grab a bowl and mix these two features together, CSS Regions and CSS Shapes to produce some really interesting layouts!

In the latest Chrome Canary and Safari WebKit Nightly, after enabling the required experimental features, you can flow content continuously through multiple kinds of shapes. This rocks! You can step out from the rectangular text flow world and break up text into multiple, non-rectangular shapes.

Demo

If you already have the latest Chrome Canary/Safari WebKit Nightly, you can just go ahead and try a simple example on codepen.io. If you are too lazy, or if you want to extend your mouse button life by saving a few button clicks, you can continue reading.


Screenshot of the demo: the “Lorem ipsum” story flowing through four regions

In the picture above we see that the “Lorem ipsum” story flows through 4 different, colorful regions. There is a circle shape on each of the first two fixed size regions. Check out the code below to see how we apply the shape to the region. It’s pretty straightforward, right?
#region1, #region2 {
    -webkit-flow-from: flow;
    background-color: yellow;
    width: 200px;
    height: 200px;
    -webkit-shape-inside: circle(50%, 50%, 50%);
}
The content flows into the third (percentage sized) region, which represents a heart (drawn by me, all rights reserved). I defined the heart’s coordinates in percentages, so the heart will stretch as you resize the window.
#region3 {
    -webkit-flow-from: flow;
    width: 50%;
    height: 400px;
    background-color: #EE99bb;
    -webkit-shape-inside: polygon(11.17% 10.25%,2.50% 30.56%,3.92% 55.34%,12.33% 68.87%,26.67% 82.62%,49.33% 101.25%,73.50% 76.82%,85.17% 65.63%,91.63% 55.51%,97.10% 31.32%,85.79% 10.21%,72.47% 5.35%,55.53% 14.12%,48.58% 27.88%,41.79% 13.72%,27.50% 5.57%);
}

The content that doesn’t fit in the first three regions flows into the fourth region. The fourth region (see the retro-blue background color) has its CSS width and height set to auto, so it grows to fit the remaining content.
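For completeness, here is a hedged sketch of the remaining pieces of the demo that are not shown above; the element id and the exact background color are my assumptions, not the original source.

#content {
    -webkit-flow-into: flow;     /* pour this element's content into the named flow */
}
#region4 {
    -webkit-flow-from: flow;     /* collect whatever did not fit in the first three regions */
    width: auto;
    height: auto;                /* the region grows to fit the remaining content */
    background-color: #3377bb;   /* stand-in for the retro-blue shade in the screenshot */
}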

Real world examples

After trying the demo and checking out the links above, I’m sure you’ll see the opportunities for using shape-inside with regions in your next design. If you have some thoughts on this topic, don’t hesitate to comment. Please keep in mind that these features are under development, and you might run into bugs. If you do, you should report them on WebKit’s Bugzilla for Safari or Chromium’s issue tracker for Chrome. Thanks for reading!

By Zoltan Horvath at August 27, 2013 04:00 PM

August 06, 2013

WebGL, at last!

Brent Fulgham

It's been a long time since I've written an update -- but my lack of blog posting is not an indication of a lack of progress in WebKit or the WinCairo port. Since I left my former employer (who *still* hasn't gotten around to updating the build machine I set up there), we've:

  • Migrated from Visual Studio 2005 to Visual Studio 2010 (and soon, VS2012)
  • Enabled New-run-webkit-tests
  • Updated the WinCairo Support Libraries to support 64-bit builds
  • Integrated a ton of cURL improvements and extensions thanks to the TideSDK guys 
  • and ...
... thanks to the hard work of Alex Christensen, brought up WebGL on the WinCairo port.  This is a little exciting for me, because it marks the first time (I can recall) where the WinCairo port actually gained a feature that was not already part of the core Apple Windows port.



The changes needed to see these circa-1992 graphics in all their three-dimensional glory have already landed in the WebKit tree.  You just need to:

  1. Enable the libEGL, libGLESv2, translator_common, translator_glsl, and translator_hlsl projects for the WinCairo build (they are currently turned off).
  2. Make the following change to WTF/wtf/FeatureDefines.h: 

Brent Fulgham@WIN7-VM ~/WebKit/Source/WTF/wtf
$ svn diff
Index: FeatureDefines.h
===================================================================
--- FeatureDefines.h    (revision 153733)
+++ FeatureDefines.h    (working copy)
@@ -245,6 +245,13 @@
 #define ENABLE_VIEW_MODE_CSS_MEDIA 0
 #endif

+#define ENABLE_WEBGL 1
+#define WTF_USE_3D_GRAPHICS 1
+#define WTF_USE_OPENGL 1
+#define WTF_USE_OPENGL_ES_2 1
+#define WTF_USE_EGL 1
+#define WTF_USE_GRAPHICS_SURFACE 1
+
 #endif /* PLATFORM(WIN_CAIRO) */

 /* --------- EFL port (Unix) --------- */

Performance is a little ragged, but we hope to improve that in the near future.

We have plenty more plans for the future, including full 64-bit support (soon), and hopefully some improvements to the WinLauncher application to make it a little more useful.

As always, if you would like to help out,

By Brent Fulgham (noreply@blogger.com) at August 06, 2013 05:53 AM

July 10, 2013

Fuzzinator, a mutation and generation based browser fuzzer

University of Szeged

Fuzzers are widely used tools for testing software. They generate random test cases and use them as input against the software under test. Since the tests have randomly built content, it is not necessary to check them for correctness, but they are suitable for catching rough bugs like use-after-frees, memory corruptions, assertion failures and other crashes. There are many approaches to generating these tests, but all of them can be categorized into three main groups: whitebox, blackbox and graybox fuzzers.

read more

By renata.hodovan at July 10, 2013 12:50 PM

May 15, 2013

CSS Level 3 Text Decoration on WebKit and Blink – status

Bruno de Oliveira Abinader

It’s been a while since I wrote the last post about progress on implementing CSS Level 3 Text Decoration features in WebKit. I’ve been involved with other projects but now I can finally resume the work in cooperation with my colleague from basysKom, Lamarque Souza. So far we have implemented:

  • text-decoration-line (link)
  • text-decoration-style (link)
  • text-decoration-color (link)
  • text-underline-position (link)

These properties are currently available under the -webkit- prefix in WebKit, guarded by a feature flag (CSS3_TEXT) which is enabled by default on both the EFL and GTK ports. On Blink, the plan is to ship these properties unprefixed and behind a runtime flag, which can be activated by enabling "Experimental WebKit Features" (renamed "Experimental Web Platform Features" in latest builds) in chrome://flags inside Google Chrome/Chromium. There are still some Skia-related issues to fix on Blink before proper dashed and dotted text decoration styles can be displayed. In the near future, we shall also have the text-decoration shorthand as specified in the CSS Level 3 specification.
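As a rough illustration, here is a hedged sketch of how the prefixed properties can be exercised in a WebKit build with CSS3_TEXT enabled; the selector and the concrete values are my own examples, not taken from the patches linked above.

.fancy-link {
    -webkit-text-decoration-line: underline;
    -webkit-text-decoration-style: wavy;
    -webkit-text-decoration-color: #cc0000;
    -webkit-text-underline-position: under;
}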

See below a summary of things I plan to finish in the near future:

  • [webkit] Property text-decoration-line now accepts blink as valid value
  • [blink] Fix implementation of dashed and dotted styles on Skia
  • [blink] Fix an issue where previous Skia stroke styles were used when rendering paint decorations
  • [blink] Implement CSS3_TEXT as a runtime flag
  • [blink] Property text-decoration-line now accepts blink as valid value
  • [blink] Implement support for text-decoration shorthand
  • [webkit] Implement support for text-decoration shorthand

Note: Please do not confuse text-decoration‘s blink value with Blink project :)

Stay tuned for further updates!

By Bruno Abinader at May 15, 2013 04:52 PM

May 03, 2013

Firefox Nightly passes the Acid2 test

Frédéric Wang

Some updates on the MathML Acid Tests... First the patch for bug 717546 landed in Nightly and thus Gecko is now the first layout engine to pass the MathML Acid2 test. Here is a screenshot that should look familiar:

MathML Acid2, Nightly

As you know, Google developers forked WebKit and decided to remove from Blink all the code (including MathML) that they don't plan to work on in the short term. As a comparison, here is how the MathML Acid2 test looks in Chrome Canary:

MathML Acid 2 Test, Canary

Next, someone reported that Firefox on Mac got more errors in the MathML Acid3 test. I was already aware of some shortcomings anyway and thus took the opportunity to rewrite the tests with better error tolerance. The changes also fixed some measurement issues with auto resizing on mobile platforms or when the zoom level is not set to the default. I also made the tests for stretchy operators more reliable and, as a consequence, Gecko lost two points: the new score is 60/100. I still need to review and describe the tests and hope I won't find more mistakes.

Finally, I also added a MathML Acid1 test. It does not really look like the "classical" Acid1 test and is not "automated", in the sense that a reader must carefully (and in a subjective way) check the basic requirements. But at least it provides a small test in the spirit of CSS Acid 1: all 100%-conformant HTML 5 agents should be able to render these very elementary MathML expressions. Note that the formulas in the MathML Acid1 test are supposed to express mathematical properties of boxes from the CSS Acid1 test.

By fredw at May 03, 2013 12:43 PM

March 27, 2013

Freeing the Floats of the Future From the Tyranny of the Rectangle

Adobe Web Platform

With modern web layout you can have your content laid out in whatever shape you want as long as it’s a rectangle. Designers in other media have long been able to have text and other content lay out inside and around arbitrarily complex shapes. The CSS Exclusions, CSS Shapes Level 1, and CSS Shapes Level 2 specifications aim to bring this capability to the web.

While these features aren’t widely available yet, implementation is progressing and it’s already possible to try out some of the features yourself. Internet Explorer 10 has an implementation of the exclusions processing model, so you can try out exclusions in IE 10 today.

At Adobe we have been focusing on implementing the shapes specification. We began with an implementation of shape-inside and now have a working implementation of the shape-outside property on floats. We have been building our implementation in WebKit, so the easiest way to try it out yourself is to download a copy of Chrome Canary. Once you have Canary, enable Experimental Web Platform Features and go wild!

What is shape-outside?

“Now hold up there,” you may be thinking, “I don’t even know what a shape-outside is and you want me to read this crazy incomprehensible specification thing to know what it is!?!”

Well you’ll be happy to know that it really isn’t that complex, especially in the case of floats. When an element is floated, inline content avoids the floated element. Content flows around the margin box of the element as defined by the CSS box model. The shape-outside CSS property allows you to tell the browser to use a specified shape instead of the margin box when wrapping content around the floating element.

CSS Exclusions

The current implementation allows for rectangles, rounded rectangles, circles, ellipses, and polygons. While this gives a lot of flexibility, eventually you will be able to use an SVG path or the alpha channel of an image to make it easier to create complex shapes.

How do I use it?

First, you need to get a copy of Chrome Canary and then enable Experimental Web Platform features. Once you have that, load up this post in Chrome Canary so that you can click on the images below to see a live example of the code. Even better, the examples are on Codepen, so you can and should play with them yourself and see what interesting things you can come up with.

Note that in this post and the examples I use the unprefixed shape-outside property. If you want to test these examples outside of my Codepen, you will need to use the prefixed -webkit-shape-outside property or enable a prefix-handling option (Codepen has one built in).

We'll start with an HTML document with some content and a float. Currently shape-outside only works on floating elements, so those are the ones to concentrate on. For example: (click on the image to see the code)

HTML without shape-outside

You can now add the shape-outside property to the style for your floats.

.float {
  shape-outside: circle(50%, 50%, 50%);
}

A circle is much more interesting than a standard rectangle, don’t you think? This circle is centered in the middle of the float and has a radius that is half the width of the float. The effect on the layout is something like this:

shape-outside circle

While percentages were used for this circle, you can use any CSS unit you like to specify the shape. All of the relative units are relative to the dimensions of the element where the shape-outside is specified.

Supported shapes

Circles are cool and all, but I promised you other shapes, and I will deliver. There are four types of shapes that are supported by the current shape-outside implementation: rectangle, circle, ellipse, and polygon.

rectangle

You have the ability to specify a shape-outside that is a fairly standard rectangle:

shape-outside: rectangle(x, y, width, height);

The x and y parameters specify the coordinates of the top-left corner of the rectangle. This coordinate is in relation to the top-left corner of the floating element’s content box. Because of the way this interacts with the rules of float positioning, setting these to anything other than 0 causes an effect that is similar to relatively positioning the float’s content. (Explaining this is beyond the scope of this post.)

The width and height parameters should be self-explanatory: they are the width and height of the resulting rectangle.

Where things get interesting is with the six-argument form of rectangle:

shape-outside: rectangle(x, y, width, height, rx, ry);

The first four arguments are the same as explained above, but the last two specify corner radii in the horizontal (rx) and vertical (ry) directions. This not only allows the creation of rounded rectangles; you can create circles and ellipses as well, just like with border-radius.
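In code, that could look something like the following sketch (the values are illustrative, not taken from the live demo below):

.rounded-float {
  shape-outside: rectangle(0, 0, 100%, 100%, 20px, 20px);
}

.ellipse-via-rectangle {
  /* rx and ry at 50% of the box turn the rectangle into an ellipse */
  shape-outside: rectangle(0, 0, 100%, 100%, 50%, 50%);
}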

Here’s an example of a rectangle, a rounded rectangle, a circle, and an ellipse using just rectangle syntax:

shape-outside rectangle

If you’re reading this in Chrome Canary with exclusions turned on, play around with this demo and see what other things you can do with the rectangles.

circle

I already showed you a simple circle demo and you’ll be happy to know that’s pretty much all there is to know about circles:

shape-outside: circle(cx, cy, radius);

The cx and cy parameters specify the coordinates of the center of the circle. In most situations you’ll want to put them at the center of your box. Just like with rectangles moving this around can be useful, but it behaves similarly to relatively positioning the float’s content with respect to the shape.

The radius parameter is the radius of the resulting circle.

In case you’d like to see it again, here’s what a circle looks like:

shape-outside circle

While it is possible to create circles with rounded rectangles as described above, having a dedicated circle shape is much more convenient.

ellipse

Sometimes, you need to squish your circles and that’s where the ellipse comes in handy.

shape-outside: ellipse(cx, cy, rx, ry);

Just like a circle, an ellipse has cx and cy to specify the coordinates of its center and you will likely want to have them at the center of your float. And just like all the previous shapes, changing these around will cause the float’s content to position relative to your shape.

The rx and ry parameters will look familiar from the rounded rectangle case and they are exactly what you would expect: the horizontal and vertical radii of the ellipse.

Ellipses can be used to create circles (rx = ry) and rounded rectangles can be used to create ellipses, but it’s best to use the shape that directly suits your purpose. It’s much easier to read and maintain that way.
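As a quick sketch (again with illustrative values), an ellipse centered in the float and squashed vertically would look like this:

.float {
  shape-outside: ellipse(50%, 50%, 50%, 30%);
}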

Here’s an example of using an ellipse shape:

shape-outside ellipse

polygon

Now here’s where things get really interesting. The polygon shape-outside allows you to specify an arbitrary polygonal shape for your float:

shape-outside: polygon(x1 y1, x2 y2, ... , xn yn);

The parameters of the polygon are the x and y coordinates of each vertex of the shape. You can have as many vertices as you would like.
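In code, a simple triangle could be written like this (the coordinates are illustrative):

.float {
  /* top-left, top-right, and bottom-left corners of the float's box */
  shape-outside: polygon(0 0, 100% 0, 0 100%);
}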

Here’s an example of a simple polygon:

shape-outside triangle

Feel free to play with this and see what happens if you create more interesting shapes!

Putting content in the float

The previous examples all had divs without any content just to make it easier to read and understand the code, but a big motivation for shape-outside is to wrap around other content. Interesting layouts often involve wrapping text around images as this final example shows:

shape-outside with images

As usual, you should take a look and play with the code for this example of text wrapping around floated images. This is just the beginning of the possibilities, as you can put a shape outside on any floating element with any content you want inside.

Next steps

We are still hard at work on fixing bugs in the current implementation and implementing the rest of the features in the CSS Shapes Level 1 specification. We welcome your feedback on what is already implemented and also on the spec itself. If you are interested in becoming part of the process, you can raise issues with the current WebKit implementation by filing bugs in the WebKit bugzilla. If you have issues with the spec, those are best raised on the www-style mailing list. And of course, you can leave your feedback as comments on this post.

I hope that you enjoy experimenting with shape-outside and the other features we are currently working on.

By Bem Jones-Bey at March 27, 2013 05:10 PM

March 02, 2013

MathML Acid Tests

Frédéric Wang

There has recently been discussion in the Mozilla community about Opera's switch from Presto to WebKit and the need to preserve browser competition and diversity of rendering engines, especially on mobile devices. Some people outside the community seem a bit skeptical about that argument. Perhaps a striking example to convince them is the case of MathML, where basically only Gecko has a decent native implementation, and the situation at the recent eBooks workshop illustrates that very well: MathML support is very important for some publishers (e.g. for science or education) but the main eBook readers rely exclusively on the WebKit engine and its rudimentary MathML implementation. Unfortunately, because there are currently essentially no alternatives on mobile platforms, developers of eBook readers have no choice other than offering partial EPUB support or relying on a polyfill...

After Google's announcement that MathML would be removed from Chrome 25, someone joked on Twitter that an Acid test for MathML should be written, since that seems to motivate them more than community feedback. I do not think that MathML support is considered important from the point of view of browser competition, but I took this idea and started writing MathML versions of the famous Acid2 and Acid3 tests. The current source of these MathML Acid tests is available on GitHub. Of course, I believe that a native MathML implementation is very important, and I expect at least that these tests could help the MathML community: users and implementers.

Here is the result of the MathML Acid2 test with the stable Gecko release. To pass the test we only need to implement negative spacing, or at least integrate the patch I submitted when I was still active in Gecko development (bug 717546).

MathML Acid2 test ; Gecko

And here is the score of the MathML Acid 3 test with the stable Gecko release. The failure of test 18 was not supposed to happen, but I discovered it when I wrote the test. That will be fixed by James Kitchener's refactoring in bug 827713. Obviously, reaching a score of 100/100 will be much more difficult for our volunteer developers, but the current score is not too bad compared to other rendering engines...

MathML Acid 3 ; Gecko

By fredw at March 02, 2013 06:19 PM

January 11, 2013

MathML in Chrome, a couple of demos and some perspectives...

Frédéric Wang

For those who missed the news, Google Chrome 24 has recently been released with native MathML support. I'd like to thank Dave Barton again for his efforts over the past year, which have made this happen. Obviously, some people may joke about how long it took for Google to make this happen (the Mozilla MathML project started in 1999) or criticize the bad rendering quality. However the MathML folks, aware of the history of the language in browsers, will tend to be more tolerant and appreciate this important step towards MathML adoption. After all, this now means that among the most popular browsers, Firefox, Safari and Chrome have MathML support and Opera a basic CSS-based implementation. This also means that about three people out of four will be able to read pages with MathML without the need of any third-party rendering engine.

After some testing, I think the Webkit MathML support is now good enough to be used on my Website. There are a few annoyances with stretchy characters or positioning, but in general the formulas are readable. Hence in order to encourage the use of MathML and let people report bugs upstream and hopefully help to fix them, I decided to rely on the native MathML support for Webkit-based browsers. I'll still keep MathJax for Internet Explorer (when MathPlayer is not installed) and Opera.

I had the chance to meet Dave Barton when I was in Silicon Valley last October for the GSoC mentor summit. We could exchange our views on the MathML implementations in browsers and discuss the perspectives for the future of MathML. The history of MathML in WebKit is actually quite similar to Gecko's: one volunteer, Alex Milowski, decided to write the initial implementation. This idea attracted more volunteers who joined the effort and helped to add new features and drive the project. Dave told me that the initial WebKit implementation did not pass Google's security review and that's why MathML was not enabled in Chrome. It was actually quite surprising that Apple decided to enable it in Safari, and in particular in all of Apple's mobile products. Dave's main achievement has been to fix all these security bugs so that MathML could finally appear in Chrome.

One of the ideas I share with Dave is how important it is to have native MathML support in browsers, rather than delegating the rendering to JavaScript libraries like MathJax or browser plug-ins like MathPlayer. It's always a bit sad to see that third-party tools are necessary to improve the native browser support of a language that is sometimes considered a core XML language for the Web together with XHTML and SVG. Not only is native support faster, it also integrates better in the browser environment: zooming text, using links, applying CSS style, mixing with SVG diagrams, doing dynamic updates with e.g. JavaScript; all of the features Web users are familiar with are immediately available. In order to illustrate this concretely, here are a couple of demos. Some of them are inspired by Mozilla's MathML demo pages, recently moved to MDN. By the way, the famous MathML torture page is now here. Also, try this test page to quickly determine whether you need to install additional fonts.

MathML with CSS text-shadow & transform properties, href & dir attributes as well as Javascript events

[Live MathML demo: det(1 2 3; 4 5 6; 7 8 9) = 45 + 84 + 96 − (105 + 48 + 72) = 0, styled with text-shadow and transform, linked via href, and repeated right-to-left with Eastern Arabic numerals via the dir attribute]

HTML and animated SVG inside MathML tokens


MathML inside animated SVG (via the <foreignObject> element):

[Live demo: the series sum from n = 0 to +∞ of α^n / n!, which equals exp(α), rendered as MathML inside the animated SVG]

Note that although Dave was focused on improving MathML, the language naturally integrates with the rest of WebKit's technologies, and almost all the demos above work as expected without any additional effort. Actually, Gecko's MathML support relies less on the CSS layout engine than WebKit's does, and this has been a recurrent source of bugs. For example, in the first demo the text-shadow property is not applied to some operators (bug 827039), while it is in WebKit.

In my opinion, one of the problems with MathML is that the browser vendors have never really shown a lot of interest in the language, and the standardization and implementation efforts were mainly led and funded by organizations from the publishing industry or by volunteer contributors. As the MathML WG members keep repeating, they would love to get more feedback from browser developers. This is quite a problem for a language whose main goals include the publication of mathematics on the Web. It leads, for example, to MathML features (some of them now deprecated) duplicating CSS properties, or to the <mstyle> element, which has most of its attributes unused and does similar things to CSS inheritance in an incompatible way. As a consequence, it was difficult to implement all MathML features properly in Gecko and this is the source of many bugs like the one I mentioned in the previous paragraph.

Hopefully, the new MathML support in Chrome will bring more interest to MathML from contributors or Web companies. Dave told me that Google could hire a full-time engineer to work on MathML. Apparently, this is mostly because of demands from companies working on Webkit-based mobile devices or involved in EPUB. Although I don't have the same impression from Mozilla Corporation at the moment, I'm confident that with the upcoming FirefoxOS release, things might change a bit.

Finally I also expect that we, at MathJax, will continue to accompany the MathML implementations in browsers. One of the ideas I proposed to the team was to let MathJax select the output mode according to the MathML features supported by the browser. Hence the native MathML support could be used if the page contains only basic mathematics while MathJax's rendering engine will be used when more advanced mathematical constructions are involved. Another goal to achieve will be to make MathJax the default rendering in Wikipedia, which will be much better than the current raster image approach and will allow the users to switch to their browser's MathML support if they wish...

By fredw at January 11, 2013 01:53 PM

August 01, 2012

WebKit CSS3 text-decoration properties (preview)

Bruno de Oliveira Abinader

WebKit currently supports the CSS Text Level 2.1 version of the text-decoration property (link). This version only covers the decoration line types (underline, overline, line-through and blink – the latter is not supported on WebKit).

The draft version of CSS Text Level 3 upgrades the text-decoration property (link) to a shorthand for 3 newly added properties, named text-decoration-line (link), text-decoration-style (link) and text-decoration-color (link), and also adds the text-decoration-skip (link) property.

Among other WebKit work I've been doing lately, this is one of the features I'm enjoying implementing the most. I've grabbed the task of implementing all of these CSS3 text-decoration* properties on WebKit, and the results are great so far!

As you can see below, these are the new text decoration styles (solid, double, dotted, dashed and wavy – the latter still requires platform support) available:

Text decoration style layout test results on Qt platform

And also specific text decoration colors can be set:

Text decoration color layout test results on Qt platform

These features (with the exception of the text-decoration-skip property) are already implemented in Firefox, which makes it easier to compare results between web engines. It is important to note that, since the CSS3 specification is still in development, all these properties carry a -webkit- prefix (i.e. -webkit-text-decoration), so text-decoration still maintains the CSS 2.1 specification requirements. The patches are under review and should land upstream soon, let's hope so!
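Put together, a rule exercising the new properties might look something like this rough sketch (the selector and values are illustrative, using the prefix of the current WebKit build):

.strikeout-note {
  -webkit-text-decoration-line: line-through;
  -webkit-text-decoration-style: dashed;
  -webkit-text-decoration-color: #3366cc;
}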

By Bruno Abinader at August 01, 2012 09:58 PM

April 11, 2012

A guide for Qt5/WebKit2 development setup for Nokia N9 on Ubuntu Linux

Bruno de Oliveira Abinader

As part of my daily activities at basysKom on QtWebKit maintenance and development for Nokia devices, it is interesting to keep track of the latest developments around QtWebKit. There is currently a promising project for a Qt5/WebKit2-based browser called Snowshoe, mainly developed by my friends from INdT, which is completely open source. This browser requires the latest Qt5 and QtWebKit binaries and thus requires a functional build environment. There is a guide available on WebKit's wiki (link) which is very helpful, but it lacks some information about compilation issues found when following the setup steps. So I am basing this guide on that wiki page and I hope it gets updated soon :)

On this guide it is assumed the following:

  • All commands are issued on a Linux console. I am not aware of how this guide would work on other systems.
  • All commands are supposed to be issued inside the base directory, unless expressly said otherwise (i.e. cd <QT5_DIR>).
  • You might want to check if you have git and rsync packages installed in your system.

1. Install Qt SDK

In order to build Qt5 and QtWebKit for Nokia N9, you need to set up a cross-compiler. Thankfully, Qt SDK already comes with a working setup. Please download the online installer from Qt Downloads section (link).

NOTE: The offline installer comes with an outdated version of the MADDE target, which can be updated by running the script below and choosing “Update components” when asked:

$ ~/QtSDK/SDKMaintenanceTool

2. Directory setup

It is suggested (and actually required by some build scripts) to have a base directory which holds Qt5, Qt Components and WebKit project sources. The suggested base directory can be created by running:

$ mkdir -p ~/swork

NOTE: You can actually choose another directory name, but so far it is required by some scripts to have at least a symbolic link pointing to <HOME_DIR>/swork.

3. Download convenience scripts

3.1. browser-scripts

$ git clone https://github.com/resworb/scripts.git browser-scripts

3.2. rsync-scripts

$ wget http://trac.webkit.org/attachment/wiki/SettingUpDevelopmentEnvironmentForN9/rsync-scripts.tar.gz?format=raw
$ tar xzf rsync-scripts.tar.gz

4. Download required sources

4.1. testfonts

$ git clone git://gitorious.org/qtwebkit/testfonts.git

4.2. Qt5, QtComponents and WebKit

The script below when successfully run will create ~/swork/qt5, ~/swork/qtcomponents and ~/swork/webkit directories:

$ browser-scripts/clone-sources.sh --no-ssh

NOTE: You can also manually download sources, but remember to stick with the directory names described above.

5. Pre-build hacks

5.1. Qt5 translations

Qt5 translations are not properly handled by the cross-platform toolchain. This happens mainly because the lrelease application is called to generate Qt message files, but since it is an ARMEL binary your system is probably not capable of running it natively (unless you have a misc_runner kernel module properly set up, in which case you can safely skip this step). Instead, you can use lrelease from your system's Qt binaries without any worries.

If you have a Scratchbox environment set, it is suggested for you to stop its service first:

$ sudo service scratchbox-core stop

Now you can manually generate Qt message files by running this:

$ cd ~/swork/qt5/qttranslations/translations
$ for file in `ls *ts`; do lrelease $file -qm `echo "$file" | sed 's/ts$/qm/'`; done

5.2. Disable jsondb-client tool

QtJsonDB module from Qt5 contains a tool called jsondb-client, which depends on libedit (not available on MADDE target). It is safe to disable its compilation for now:

$ sed -i 's/jsondb-client//' ~/swork/qt5/qtjsondb/tools/tools.pro

5.3. Create missing symbolic links

Unfortunately the Qt5 build system is not robust enough to support our cross-compilation environment, so some symbolic links are required on MADDE to avoid compilation errors (where <USER> is your system user name):

$ ln -s ~/swork/qt5/qtbase/include ~/QtSDK/Madde/sysroots/harmattan_sysroot_10.2011.34-1_slim/home/<USER>/swork/qt5/qtbase
$ ln -s ~/swork/qt5/qtbase/mkspecs ~/QtSDK/Madde/sysroots/harmattan_sysroot_10.2011.34-1_slim/home/<USER>/swork/qt5/mkspecs

6. Build sources

You can execute the script that will build all sources using cross-compilation setup:

$ browser-scripts/build-sources.sh --cross-compile

If everything went well, you now have the most up-to-date binaries for Qt5/WebKit2 development for the Nokia N9. Please have a look at WebKit's wiki for more information about how to update sources after a previous build and how to keep files in sync with the device. The guide assumes the PR1.1 firmware for the N9 device, which is already outdated, so I might come up next with updated instructions on how to safely sync files to your PR1.2-enabled device.

That’s all for now, I appreciate your comments and feedback!

By Bruno Abinader at April 11, 2012 07:18 AM

March 10, 2012

WebKitGTK+ Debian packaging repository changes

Gustavo Noronha

For a while now the git repository used for packaging WebKitGTK+ has been broken. Broken as in nobody was able to clone it. In addition to that, the packaging workflow had been changing over time, from a track-upstream-git/patches-applied one to an import-orig-only/patches-not-applied one.

After spending some more time trying to unbreak the repository for the third time, I decided it might be a good time for a clean up. I created a new repository and imported all upstream versions for series 1.2.x (which is in squeeze), 1.6.x (unstable), and 1.7.x (experimental). I also imported packaging-related commits for those versions using git format-patch and black magic.

One of the good things about this move, which should make hacking on the WebKitGTK+ Debian package more pleasant and accessible, can be seen here:


kov@goiaba ~/s/debian-webkit> du -sh webkit/.git webkit.old/.git
27M webkit/.git
1.6G webkit.old/.git

If you care about the old repository, it’s on git.debian.org still, named old-webkit.git. Enjoy!

By kov at March 10, 2012 05:32 PM

December 07, 2011

WebKitGTK+ hackfest \o/

Gustavo Noronha

It’s been a couple days since I returned from this year’s WebKitGTK+ hackfest in A Coruña, Spain. The weather was very nice, not too cold and not too rainy, we had great food, great drinks and I got to meet new people, and hang out with old friends, which is always great!

Hackfest black board, photo by Mario

I think this was a very productive hackfest, and as usual a very well organized one! Thanks to the GNOME Foundation for the travel sponsorship, to our friends at Igalia for doing an awesome job at making it happen, and to Collabora for sponsoring it and granting me the time to go there! We got a lot done, and although, as usual, our goals list had many items not crossed, we did cross a few very important ones. I took part in discussions about the new WebKit2 APIs, got to know the new design for GNOME’s Web application, which looks great, discussed about Accelerated Compositing along with Joone, Alex, Nayan and Martin Robinson, hacked libsoup a bit to port the multipart/x-mixed-replace patch I wrote to the awesome gio-based infrastructure Dan Winship is building, and some random misc.

The biggest chunk of time, though, ended up being devoted to a very uninteresting (to outsiders, at least), but very important task: making it possible to more easily reproduce our test results. TL;DR? We made our bots’ and development builds use jhbuild to automatically install dependencies; if you’re using tarballs, don’t worry, your usual autogen/configure/make/make install have not been touched. Now to the more verbose version!

The need

Our three build slaves reporting a few failures

For a couple years now we have supported an increasingly complex and very demanding automated testing infrastructure. We have three buildbot slaves, one provided by Collabora (which I maintain), and two provided by Igalia (maintained by their WebKitGTK+ folks). Those bots build as many check ins as possible with 3 different configurations: 32 bits release, 64 bits release, and 64 bits debug.

In addition to those, we have another bot called the EWS, or Early Warning System. There are two of those at this moment: one VM provided by Collabora and my desktop, provided by myself. These bots build every patch uploaded to the bugzilla, and report build failures or passes (you can see the green bubbles). They are very important to our development process because if the patch causes a build failure for our port people can often know that before landing, and try fixes by uploading them to bugzilla instead of doing additional commits. And people are usually very receptive to waiting for EWS output and acting on it, except when they take way too long. You can have an idea of what the life of an EWS bot looks like by looking at the recent status for the WebKitGTK+ bots.

Maintaining all of those bots is at times a rather daunting task. The tests require a very specific set of packages, fonts, themes and icons to always report the same size for objects in a render. Upgrades, for instance, had to be synchronized, and usually involve generating new baselines for a large number of tests. You can see in these instructions, for instance, how strict the environment requirements are – yes, we need specific versions of fonts, because they often cause layouts to change in size! At one point we had tests fail after a compiler upgrade, which made rounding act a bit different!

So stability was a very important aspect of maintaining these bots. All of them have the same version of Debian, and most of the packages are pinned to the same version. On the other hand, and in direct contradiction to the stability requirement, we often require bleeding edge versions of some libraries we rely on, such as libsoup. Since we started pushing WebKitGTK+ to be libsoup-only, its own progress has been pretty much driven by WebKitGTK+'s requirements, and Dan Winship has made it possible to make our soup backend much, much simpler and way more featureful. That meant, though, requiring very recent versions of soup.

To top it off, for anyone not running Debian testing and tracking the exact same versions of packages as the bots it was virtually impossible to get the tests to pass, which made it very difficult for even ourselves to make sure all patches were still passing before committing something. Wow, what a mess.

The explosion^Wsolution

So a few weeks back Martin Robinson came up with a proposed solution, which, as he says, is the “nuclear bomb” solution. We would have a jhbuild environment which would build and install all of the dependencies necessary for reproducing the test expectations the bots have. So over the first three days of the hackfest Martin and myself hacked away in building scripts, buildmaster integration, a jhbuild configuration, a jhbuild modules file, setting up tarballs, and wiring it all in a way that makes it convenient for the contributors to get along with. You’ll notice that our buildslaves now have a step just before compiling called “updated gtk dependencies” (gtk is the name we use for our port in the context of WebKit), which runs jhbuild to install any new dependencies or version bumps we added. You can also see that those instructions I mentioned above became a tad simpler.

It took us way more time than we thought for the dust to settle, but it eventually began to. The great thing of doing it during the hackfest was that we could find and fix issues with weird configurations on the spot! Oh, you build with AR_FLAGS=cruT and something doesn’t like it? OK, we fix it so that the jhbuild modules are not affected by that variable. Oh, turns out we missed a dependency, no problem, we add it to the modules file or install them on the bots, and then document the dependency. I set up a very clean chroot which we could use for trying out changes so as to not disrupt the tree too much for the other hackfest participants, and I think overall we did good.

The aftermath

By the time we were done, our colleagues who run other distributions such as Fedora were already able to get substantial improvements in the number of tests passing, and so did we! Also, the ability to seamlessly upgrade all the bots with a simple commit made it possible for us to very easily land a change that required a very recent (as in unreleased) version of soup, which made our networking backend way simpler. All that red looks great, doesn't it? And we aren't done yet; we'll certainly be making more tweaks to this infrastructure to make it more transparent and more helpful to the users (contributors and other people interested in running the tests).

If you’ve been hit by the instability we caused, sorry about that, poke mrobinson or myself in the #webkitgtk+ IRC channel on FreeNode, and we’ll help you out or fix any issues. If you haven’t, we hope you enjoy all the goodness that a reproducible testing suite has to offer! That’s it for now, folks, I’ll have more to report on follow-up work started at the hackfest soon enough, hopefully =).

By kov at December 07, 2011 11:34 PM

November 29, 2011

Accelerated Compositing in webkit-clutter

Gustavo Noronha

For a while now my fellow Collaboran Joone Hur has been working on implementing the Accelerated Compositing infrastructure available in WebKit in webkit-clutter, so that we can use Clutter's powers for compositing separate layers and performing animations. This work is being done by Collabora and is sponsored by BOSCH, whom I'd like to thank! What does all this mean, you ask? Let me tell you a bit about it.

The way animations usually work in WebKit is by repainting parts of the page every few milliseconds. What that means in technical terms is that an area of the page gets invalidated, and since the whole page is one big image, all of the pieces that are in that part of the page have to be repainted: the background, any divs, images, text that are at that part of the page.

What the accelerated compositing code paths allow is the creation of separate pieces to represent some of the layers, allowing the composition to happen on the GPU, removing the need to perform lots of cairo paint operations per second in many cases. So if we have a semi-transparent video moving around the page, we can have that video be a separate texture that is layered on top of the page, made transparent and animated by the GPU. In webkit-clutter’s case this is done by having separate actors for each of the layers.

I have been looking at this code on and off, and recently joined Joone in the implementation of some of the pieces. The accelerated compositing infrastructure was originally built by Apple and, for that reason, works in a way that is very similar to Core Animation. The code is still a bit all over the place as we work on figuring out how to best translate the concepts into Clutter concepts, and there are several bugs, but some cool demos are already possible! Below you have one of the CSS3 demos that were made by Apple to show off this new functionality, running on our MxLauncher test browser.

You can also see that the non-Accelerated version is unable to represent the 3D space correctly. Also, can you guess which of the two MxLauncher instances is spending less CPU? ;) In this second video I show the debug borders being painted around the actors that were created to represent layers.

The code, should you like to peek or test is available in the ac2 branch of our webkit-clutter repository: http://gitorious.org/webkit-clutter/webkit-clutter/commits/ac2

We still have plenty of work to do, so expect to hear more about it. During our annual hackfest in A Coruña we plan to discuss how this work could be integrated also in the WebKitGTK+ port, perhaps by taking advantage of clutter-gtk, which would benefit both ports, by sharing code and maintenance, and providing this great functionality to Epiphany users. Stay tuned!

By kov at November 29, 2011 05:55 PM

October 09, 2011

Tests Active

Brent Fulgham


Looking back over this blog, I see that it was around a year ago that I got the initial WinCairo buildbot running. I'm very pleased to announce that I have gotten ahold of a much more powerful machine, and am now able to run a full build and tests in slightly under an hour -- a huge improvement over the old hardware which took over two hours just to build the software!

This is a big step, because we can now track regressions and gauge correctness compared to the other platforms. Up to now, testing has largely consisted of periodic manual runs of the test suite, and a separate set of high-level tests run as part of a larger application. This was not ideal, because it was easy for low-level functions in WebKit that I rarely use to be broken and missed.

All is not perfect, of course. Although over 12,000 tests now run (successfully) with each build, that is effectively two thirds of the full test suite. Most of the tests I have disabled are due to small differences in the output layout. I'm trying to understand why these differences exist, but I suspect many of them simply reflect small differences in Cairo compared to the CoreGraphics rendering layer.

If any of you lurkers are interested in helping out, trying out some of the tests I have disabled and figuring out why they fail would be a huge help!

By Brent Fulgham (noreply@blogger.com) at October 09, 2011 02:43 AM

July 14, 2011

An Unseasonable Snowfall

Brent Fulgham

A year or two ago I ported the Cocoa "CallJS" application to MFC for use with WebKit. The only feedback I ever got on the topic was a complaint that it would not build under the Visual Studio Express software many people used.

After seeing another few requests on the webkit-help mailing list for information on calling JavaScript from C++ (and vice-versa), I decided to dust off the old program and convert it to pure WINAPI calls so that VS Express would work with it.

Since my beloved Layered Window patches finally landed in WebKit, I also incorporated a transparent WebKit view floating over the main application window. Because I suck at art, I stole (er, appropriated) the Let It Snow animation example to give the transparent layer something to do.

Want to see what it looks like?

By Brent Fulgham (noreply@blogger.com) at July 14, 2011 06:34 PM

July 10, 2011

Updated WebKit SDK (@r89864)

Brent Fulgham

I have updated the WebKit SDK to correspond to SVN revision r89864.

Major changes in this revision:
* JavaScript engine improvements.
* Rendering improvements.
* New 'Transparent Web View' support.
* General performance and memory use improvements.

This ZIP file also contains updated versions of Zlib, OpenSSL, cURL, and OpenCFLite.

Note that I have stopped statically linking Cairo; I'm starting to integrate some more recent Cairo updates (working towards some new rendering features), and wanted to be able to update it incrementally as changes are made.

This package contains the same Cairo library (in DLL form) as used in previous versions.

As usual, please let me know if you encounter any problems with this build.

[Update] I forgot to include zlib1.dll! Fixed in the revised zip file.

By Brent Fulgham (noreply@blogger.com) at July 10, 2011 04:24 AM

July 05, 2011

WinCairoRequirements Sources Archive

Brent Fulgham

I've posted the 80 MB source archive of the requirements needed to build the WinCairo port of WebKit.

Note that you do NOT need these sources unless you plan on building them yourself or wish to archive the source code for these modules. The binaries are always present in the WinCairoRequirements.zip file, which is downloaded and unzipped to the proper place when you execute the update-webkit --wincairo command.

By Brent Fulgham (noreply@blogger.com) at July 05, 2011 07:39 PM

June 28, 2011

Towards a Simpler WinCairo Build

Brent Fulgham


For the past couple of years, anyone interested in trying to build the WinCairo port of WebKit had to track down a number of support libraries, place them in their development environment's include (and link search) paths, and then cross their fingers and hope everything built.

To make things a little easier, I wrapped up the libraries and headers I use for building and posted them as a zip file on my .Mac account. This made things a little easier, but you still had to figure out where to drop the files and figure out if I had secretly updated my 'requirements.zip' file without telling anyone. Not ideal.

A couple of days ago, while trolling through the open review queue, I ran across a Bug filed by Carl Lobo, which automated the task of downloading the requirements file when running build-webkit --wincairo. This was a huge improvement!

Today, I hijacked Carl's changes and railroaded the patch through the review process (making a few modifications along the way):

  • I renamed my requirements file WinCairoRequirements.zip.

  • I added a timestamp file, so that build-webkit --wincairo can check to see if the file changed, and download it if necessary.

  • I propagated Carl's changes to update-webkit, so that now by adding the --wincairo argument it will update the WinCairoRequirements file.
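In practice, the workflow now boils down to something like this (run from the top of a WebKit checkout; a sketch rather than official documentation):

$ Tools/Scripts/update-webkit --wincairo
$ Tools/Scripts/build-webkit --wincairo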


I'm really excited about this update. If you've been wanting to try out the WinCairo port of WebKit, this would be a great time to try it out. I'd love to hear your experiences!

By Brent Fulgham (noreply@blogger.com) at June 28, 2011 04:42 AM

June 14, 2011

Benchmarking Javascript engines for EFL

Lucas De Marchi

The Enlightenment Foundation Libraries have several bindings for other languages in order to ease the creation of end-user applications and speed up their development. Among them, there's a binding for Javascript using the Spidermonkey engine. The questions are: is it fast enough? Does it slow down your application? Is Spidermonkey the best JS engine to use?

To answer these questions Gustavo Barbieri created some C, JS and Python benchmarks to compare the performance of EFL using each of these languages. The JS benchmarks were using Spidermonkey as the engine, since elixir was already available for EFL. I then created new bindings (with only the necessary functions) to also compare other well-known JS engines: V8 from Google and JSC (or Nitro) from WebKit.

Libraries setup

For all benchmarks EFL revision 58186 was used. Following the setup of each engine:

  • Spidermonkey: I’ve used version 1.8.1-rc1 with the already available bindings on EFL repository, elixir;
  • V8: version 3.2.5.1, using a simple binding I created for EFL. I named this binding ev8;
  • JSC: WebKit’s sources are needed to compile JSC. I’ve used revision 83063. Compiling with CMake, I chose the EFL port and enabled the option SHARED_CORE in order to have a separated library for Javascript;

Benchmarks

Startup time: This benchmark measures the startup time by executing a simple application that imports evas, ecore, ecore-evas and edje, brings in some symbols and then iterates the main loop once before exiting. I measured the startup time for both hot and cold cache cases. In the former the application is executed several times in sequence; the latter includes a call to drop all caches so we have to load the library again from disk.

Runtime – Stress: This benchmark executes as many frames per second as possible of a render-intensive operation. The application is not so heavy, but it does some loops, math and interacts with EFL. Usually a common application would do far less operations every frame because many operations are done in EFL itself, in C, such as list scrolling that is done entirely in elm_genlist. This benchmark is made of 4 phases:

  • Phase 0 (P0): Un-scaled blend of the same image 16 times;
  • Phase 1 (P1): Same as P0, with additional 50% alpha;
  • Phase 2 (P2): Same as P0, with additional red coloring;
  • Phase 3 (P3): Same as P0, with additional 50% alpha and red coloring;

The C and Elixir’s versions are available at EFL repository.

Runtime – animation: usually an application doesn't need “as many FPS as possible”, but would instead like to limit itself to a certain number of frames per second. E.g. the iPhone's browser tries to keep a constant 60 FPS. This is the value I used in this benchmark. The same application as in the previous benchmark is executed, but it tries to keep a constant frame rate.

Results

The first computer I used to test these benchmarks on was my laptop. It’s a Dell Vostro 1320, Intel Core 2 Duo with 4 GB of RAM and a standard 5400 RPM disk. The results are below.

Benchmarks on Dell 1320 laptop

First thing to notice is that there are no results for the “Runtime – animation” benchmark. This is because all the engines kept a constant 60 FPS and hence there were no interesting results to show. The first benchmark shows that V8's startup time is the shortest one when we have to load the application and libraries from disk. JSC was the slowest and Spidermonkey was in between.

With hot caches, however, we have a completely different scenario, with JSC being almost as fast as the native C application, followed by V8 with a slightly larger delay and Spidermonkey as the slowest one.

The runtime-stress benchmark shows that all the engines perform well when there's some considerable load in the application, i.e. removing P0 from this scenario. JSC was always at the same speed as native code; Spidermonkey and V8 had an impact only when considering P0 alone.

 

The next computer to consider for these benchmarks was a Pandaboard, so we can see how well the engines perform on an embedded platform. The Pandaboard has an ARM Cortex-A9 processor with 1 GB of RAM, and the partition containing the benchmarks is on an external flash storage drive. Below are the results for each benchmark:

 

Benchmarks on Pandaboard

Once again, runtime-animation is not shown since it had the same results for all engines. For the startup tests, Spidermonkey was now much faster than the others, followed by V8 and JSC, in both hot and cold cache cases. In the runtime-stress benchmark, all the engines performed well, as on the first computer, but now JSC was the clear winner.

 

There are several points to consider when choosing an engine to use as a binding for a library such as EFL. The raw performance and startup time seem to be very close to those achieved with native code. Recently there were some discussions on the EFL mailing list regarding which engine to choose, so I think it is good to share the numbers above. It's also important to note that these bindings take a similar approach to elixir, mapping each function call in Javascript to the corresponding native function. I did this to be fair in the comparison among them, but depending on the use case it would be good to have a JS binding similar to what the Python bindings do, embedding the function call in real Python objects.

By Lucas De Marchi at June 14, 2011 05:25 PM

April 29, 2011

Collection of WebKit ports

Holger Freyther

WebKit is a very successful project. It is that in many ways. The code produced seems to be very fast, the code is nice to work on, the people are great, and the parties involved collaborate with each other in the interest of the project. The project is also very successful in the mobile/smartphone space. All the major smartphone platforms but Windows Phone 7 are using WebKit. This all looks great, a big success, but there is one thing that stands out.

Of all the smartphone platforms, not one has fully upstreamed its port. There might be many reasons for that, and I think the most commonly heard reason is the time needed to get it upstreamed. It is especially difficult in a field that is moving as fast as the mobile industry. And then again there is absolutely no legal obligation to work upstream.

For most of today I collected the ports I am aware of, put them into one git repository, tried to find the point where they were branched, and rebased their changes. The goal is to make it easier to find interesting things and move them back upstream. One can find the combined git tree with the tags here. I started with WebOS, moved to iOS, then to Bada and stopped at Android, as I would have to pick the source code for each Android release for each phone from each vendor. I think I will just be happy with the Android git tree for now. At this point I would like to share some of my observations in the order I did the import.

Palm


Palm's release process is manual. In the last two releases they called the file .tgz but forgot to gzip it, and in 2.0.0 the tarball name was in camel case. The thing that is very nice about Palm is that they provide their base and their changes (patch) separately. From looking at the 2.1.0 release it looks like they want to implement complex font rendering for the desktop version. Earlier versions (maybe it is still the case) lacked support for animated GIFs.

iOS


Apple's release process seems to be very structured. The source can be downloaded here. Worth noting is that the release tarball contains some implementations of WebCore only as .o files, and Apple stopped releasing the WebKit source code beginning with iOS 4.3.0.

Bada


This port is probably not known by many. The release process seems to be manual as well, the names of directories changed a lot between releases, they come with a WML Script engine, and they do ship something they should not ship.

I really hope that this combined tree is useful for porters that want to see the tricks used in the various ports and don't want to spend the time looking for each port separately.

By zecke (noreply@blogger.com) at April 29, 2011 07:20 PM

February 13, 2011

How to make the GNU Smalltalk Interpreter slower

Holger Freyther

This is another post about a modern Linux-based performance measurement utility. It is called perf, it is included in the Linux kernel sources, and it entered the kernel in v2.6.31-rc1. In many ways it is obsoleting OProfile; in fact, for many architectures oprofile is just a wrapper around the perf support in the kernel. perf comes with a few nice applications: perf top provides statistics about which symbols in user and in kernel space are called, perf record records a running application or starts an application to record it, and perf report lets you browse that recording with a very simple CLI utility. There are also tools to bundle the recording and the application into an archive, and a diff utility.

For the last year I have been playing a lot with GNU Smalltalk, and someone posted the results of a very simplistic VM benchmark run across many different Smalltalk implementations. In one of the benchmarks GNU Smalltalk scores last among the interpreters and I wanted to understand why it is slower. In many ways the JavaScriptCore interpreter is a lot like the GNU Smalltalk one: a simple direct-threaded bytecode interpreter that uses computed goto (and is even compiled with -fno-gcse as indicated by the online help, not that it changed anything for JSC), with many heavily inlined functions.

There are also some differences: the GNU Smalltalk implementation is a lot older and written in C. The first notable difference is that it is a stack machine and not register based; there are global pointers for the SP and the IP. Some magic makes sure that in the hot loop the IP/SP is 'local' in a register and, depending on the available registers, the current argument is kept in one as well; the interpreter definition is in a special file format but mostly similar to how Interpreter::privateExecute looks. The global state mostly comes from the fact that it needs to support switching processes and there might be some event during the run that requires access to the IP to store it and resume the old process. But in general the implementation is already optimized, there is little low-hanging fruit, and most experiments result in a slowdown.

The two important things are again: having a stable benchmark, and having a tool that helps you know where to look. In my case the important tools are perf stat, perf record, perf report and perf annotate. I have put a copy of the output at the end of this blog post. The stat utility provides the number of instructions executed, branches, branch misses (e.g. badly predicted), L1/L2 cache hits and cache misses.

The stable benchmark helps me judge whether a change is good, bad or neutral for performance within the margin of error of the test. E.g. if I attempt to reduce the code size, the instructions executed should decrease; if I start putting __builtin_expect... into my code, the number of branch misses should go down as well. The other useful utility is perf report, which allows one to browse the recorded data; this can help to identify the methods one wants to start optimizing. It allows annotating these functions inside the simple TUI interface, but does not support searching in it.
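For reference, the basic workflow looks roughly like this (the perf stat output for this exact benchmark is reproduced at the end of the post):

$ perf stat gst -f Bench.st      # overall counters: instructions, branches, cache misses
$ perf record gst -f Bench.st    # sample where the time is actually spent
$ perf report                    # browse the hottest symbols in the TUI
$ perf annotate                  # drill down into the annotated code of a hot function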

Because the codebase is already highly optimized, any of my attempts should decrease the code size (and the pressure on the i-cache) or the data size (d-cache), remove stores or loads from memory (e.g. by reordering instructions), or fix branch predictions. The sad truth is that most of my changes were either slowdowns or neutral to the performance, and it is really important to undo these changes and not have false pride (unless the change was also a code cleanup or such).

So after about 14 hours of toying with it the speed ups I have managed to make come from inlining a method to unwind a context (callframe), reordering some compares on the GC path and disabling the __builtin_expect branch hints as they were mostly wrong (something the kernel people found to be true in 2010 as well). I will just try harder, or try to work on the optimizer or attempt something more radical...



$ perf stat gst -f Bench.st
219037433 bytecodes/sec; 6025895 sends/sec

Performance counter stats for 'gst -f Bench.st':

17280.101683 task-clock-msecs # 0.969 CPUs
2076 context-switches # 0.000 M/sec
123 CPU-migrations # 0.000 M/sec
3925 page-faults # 0.000 M/sec
22215005506 cycles # 1285.583 M/sec (scaled from 70.02%)
40593277297 instructions # 1.827 IPC (scaled from 80.00%)
5063469832 branches # 293.023 M/sec (scaled from 79.98%)
70691940 branch-misses # 1.396 % (scaled from 79.98%)
27844326 cache-references # 1.611 M/sec (scaled from 20.02%)
134229 cache-misses # 0.008 M/sec (scaled from 20.03%)

17.838888599 seconds time elapsed


PS: The perf support probably works best on Intel based platforms and the biggest other problem is that perf annotate has some issues when the code is included from other c files.

By zecke (noreply@blogger.com) at February 13, 2011 08:56 PM

January 17, 2011

Using systemtap userspace tracing...

Holger Freyther

At the 27C3 we were running a GSM network and during the preparation I noticed a strange performance problem coming from the database library we are using. I filled our database with some dummy data, created a file with the queries we normally run, and executed time cat queries | sqlite3 file as a mini benchmark. I also hacked this code into our main routine and ran it with time as well. For some reason the code running through the database library was five times slower.

I was a bit puzzled and decided to use systemtap to explore this, to build a hypothesis and to have the tools to answer it. I wanted to find out if it is slow because our database library is doing some heavy work in its implementation, or because we execute a lot more queries behind the scenes. I created the probe below:


probe process("/usr/lib/libsqlite3.so.0.8.6").function("sqlite3_get_table")
{
a = user_string($zSql);
printf("sqlite3_get_table called '%s'\n", a);
}


This probe will be executed whenever the sqlite3_get_table function of the mentioned library is called. $zSql is a parameter passed to sqlite3_get_table and contains the query to be executed. I convert the pointer into a local string and can then print it. Using this simple probe helped me to see which queries were executed by the database library and allowed me to do an easy optimisation.

In general it could be very useful to build a set of probes (I think such a set is called a tapset) that check for API misuse, e.g. calling functions with certain parameters where something else might be better. For example, in GLib use g_string_truncate instead of assigning "" to the GString, or check for calls to QString::fromUtf16 coming from Qt code itself. On second thought this might be better as a GCC plugin, or both.
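
To make the GLib example concrete, here is a small sketch (my own, not taken from any particular code base) of the two patterns such a probe or plugin could distinguish:

/* Resetting a GString: assigning "" is the pattern a probe could flag,
 * g_string_truncate() is the direct way to express the same intent. */
#include <glib.h>

int main(void)
{
    GString *buf = g_string_new("some accumulated text");

    /* Pattern to flag: clearing the buffer by assigning an empty literal. */
    g_string_assign(buf, "");

    /* Preferred: simply drop the length back to zero. */
    g_string_append(buf, "more text");
    g_string_truncate(buf, 0);

    g_string_free(buf, TRUE);
    return 0;
}

Both calls leave an empty buffer behind, but the truncate states the intent directly instead of going through the generic assignment path.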

By zecke (noreply@blogger.com) at January 17, 2011 12:41 PM

December 17, 2010

In the name of performance

Holger Freyther

I tend to see people doing weird things and then claiming that the change improves performance. This can be re-ordering instructions to help the compiler, attempting to use multiple cores of your system, or writing a memfill in assembly. On the one hand people can be right and the change does make things faster; on the other hand they could be using assembly to make things look very complicated and justify their pay, and you might feel awkward questioning whether it makes any sense.

In the last couple of weeks I have stumbled on some of those things. For some reason I found this bug report about GLIBC changing the memcpy routine for SSE and breaking the Flash plugin (because it uses memcpy in the wrong way). The breakage was justified by the claim that the new memcpy is optimized and faster. As Linus points out with his benchmark, the performance improvement is mostly just wishful thinking.

Another case was someone providing MIPS-optimized pixman code to speed up all drawing, which turned out to be wishful thinking as well...

The conclusion is: if someone claims that things are faster with his patch, do not simply trust him. Make sure he refers to his benchmark, provides before and after numbers, and maybe even try to run it yourself. If he cannot provide this, you should wonder how he measured the speed-up! There should be no place for wishful thinking in benchmarking. This is one of the areas where Apple's WebKit team is constantly impressing me.

By zecke (noreply@blogger.com) at December 17, 2010 01:48 PM

October 23, 2010

Easily embedding WebKit into your EFL application

Lucas De Marchi

This is the first of a series of posts that I’m planning to write using basic examples in EFL, the Enlightenment Foundation Libraries. You may have heard that EFL is reaching its 1.0 release. Instead of starting from the very beginning with the basic functions of these libraries, I decided to go the opposite way, showing the fun stuff that is possible to do. Since I’m also a WebKit developer, let’s put the best of both projects together and have a basic window rendering a webpage.

Before starting off, just some remarks:

  1. I’m using here the basic EFL + WebKit-EFL (sometimes called ewebkit). Developing an EFL application can be much simpler, particularly if you use an additional library with pre-made widgets like Elementary. However, it’s good to know how the underlying stuff works, so I’m providing this example.
  2. This could have been the last post in a series about EFL, since it uses at least 3 libraries. Don’t be afraid if you don’t understand what a certain function is for or if you can’t get all of EFL and WebKit running right now. Use the comment section below and I’ll do my best to help you.

Getting EFL and WebKit

In order to be able to compile the example here, you will need to compile two libraries from source: EFL and WebKit. For both libraries, you can either get the latest version from svn or use the latest snapshots provided.

  • EFL:

Grab a snapshot from the download page. How to check out the latest version from svn is detailed here, as well as some instructions on how to compile it.

  • WebKit-EFL:

A very detailed explanation on how to get WebKit-EFL up and running is available on trac. Recently, though, WebKit-EFL started to be released too. It’s not detailed in the wiki yet, but you can grab a snapshot instead of checking out from svn.

hellobrowser!

In the spirit of “hello world” examples, our goal here is to make a window showing a webpage rendered by WebKit. For the sake of simplicity, we will use a default start page and put a WebKit-EFL “widget” to cover the entire window. See below a screenshot:

hellobrowser - WebKit + EFL

The code for this example is available here. Pay attention to a comment in the beginning of this file that explains how to compile it:

gcc -o hellobrowser hellobrowser.c \
     -DEWK_DATADIR="\"$(pkg-config --variable=datadir ewebkit)\"" \
     $(pkg-config --cflags --libs ecore ecore-evas evas ewebkit)

The things worth noting here are the dependencies and a variable. We depend directly on ecore and evas from EFL and on WebKit. We define a variable, EWK_DATADIR, using pkg-config so our browser can use the default theme for web widgets defined in WebKit. Ecore handles events like mouse and keyboard input, timers etc., whilst Evas is the library responsible for drawing. In a later post I’ll detail them a bit more. For now, you can read more about them on their official site.

The main function is really simple. Let’s divide it by pieces:

    // Init all EFL stuff we use
    evas_init();
    ecore_init();
    ecore_evas_init();
    ewk_init();

Before you use a library from EFL, remember to initialize it. All of them use their own namespace, so it’s easy to know which library you have to initialize: for example, if you call a function starting with “ecore_”, you know you first have to call “ecore_init()”. The last initialization function is WebKit’s, which uses the “ewk_” namespace.

    window = ecore_evas_new(NULL, 0, 0, 800, 600, NULL);
    if (!window) {
        fprintf(stderr, "something went wrong... :(\n");
        return 1;
    }

Ecore-Evas is then used to create a new window of size 800×600. The other options are not relevant for an introduction to the libraries, and you can find the complete documentation here.

    // Get the canvas off just-created window
    evas = ecore_evas_get(window);

From the Ecore_Evas object we just created, we grab a pointer to the evas, which is the space in which we can draw by adding Evas_Objects. Basically, an Evas_Object is an object that you draw somewhere, i.e. in the evas. We want to add only one object to our window: the one where WebKit will render the webpages. Then we have to ask WebKit to create this object:

    // Add a View object into this canvas. A View object is where WebKit will
    // render stuff.
    browser = ewk_view_single_add(evas);

Below I demonstrate a few Evas functions that you can use to manipulate any Evas_Object. Here we are manipulating the just-created WebKit object, moving it to the desired position, resizing it to 780x580px and then telling Evas to show it. Finally, we tell Evas to show the window we created too. This way we have a window with a WebKit object inside and a little border.

    // Make a 10px border, resize and show
    evas_object_move(browser, 10, 10);
    evas_object_resize(browser, 780, 580);
    evas_object_show(browser);
    ecore_evas_show(window);

We need to set up a few more things before we have a working application. The first is to give focus to the Evas_Object we are interested in, so it receives keyboard events. Then we connect a function that will be called when the window is closed, so we can properly exit our application.

    // Focus it so it will receive pressed keys
    evas_object_focus_set(browser, 1);
 
    // Add a callback so clicks on "X" on top of window will call
    // main_signal_exit() function
    ecore_event_handler_add(ECORE_EVENT_SIGNAL_EXIT, main_signal_exit, window);

After this, we are ready to show our application, so we start the mainloop. This function will only return when the application is closed:

    ecore_main_loop_begin();

The function called when the application is closed just tells Ecore to exit the main loop, so the function above returns and the application can shut down. See its implementation below:

static Eina_Bool
main_signal_exit(void *data, int ev_type, void *ev)
{
    ecore_evas_free(data);
    ecore_main_loop_quit();
    return EINA_TRUE;
}

Before the application exits, we shutdown all the libraries that were initialized, in the opposite order:

    // Destroy all the stuff we have used
    ewk_shutdown();
    ecore_evas_shutdown();
    ecore_shutdown();
    evas_shutdown();

This is a basic working browser, with which you can navigate through pages, but you don’t have an entry to set the current URL, nor “go back” and “go forward” buttons, etc. All you have to do is start adding more Evas_Objects to your Evas and connect them to the object we just created. For a still basic example, but with more stuff implemented, refer to the EWebLauncher that we ship with the WebKit source code. You can see it in the “WebKitTools/EWebLauncher/” folder or online at webkit’s trac. Eve is another browser with a lot more features that uses Elementary in addition to EFL and WebKit. See a blog post about it with some nice pictures.
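
As a rough idea of what such additions look like, the sketch below loads a specific page and maps two keys to history navigation. This is my own illustration rather than code from the example file, and it assumes the legacy ewk_view API of the time (ewk_view_uri_set(), ewk_view_back(), ewk_view_forward()); check ewk_view.h in your WebKit-EFL snapshot for the exact names and signatures.

// Illustrative only: assumes the legacy ewk_view API mentioned above
// and needs <string.h> for strcmp().
static void
_on_key_down(void *data, Evas *e, Evas_Object *obj, void *event_info)
{
    Evas_Event_Key_Down *ev = event_info;

    if (!strcmp(ev->keyname, "F1"))
        ewk_view_back(obj);      // go back in the page history
    else if (!strcmp(ev->keyname, "F2"))
        ewk_view_forward(obj);   // go forward again
}

    // In main(), after creating and focusing the view:
    ewk_view_uri_set(browser, "http://www.webkit.org/");
    evas_object_event_callback_add(browser, EVAS_CALLBACK_KEY_DOWN,
                                   _on_key_down, NULL);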

Now, let’s do something funny with our browser. With a few more lines of code you can turn your browser upside down. Not really useful, but it’s funny. All you have to do is rotate the Evas_Object WebKit is rendering on. This is implemented by the following function:

// Rotate an evas object by 180 degrees
static void
_rotate_obj(Evas_Object *obj)
{
    Evas_Map *map = evas_map_new(4);
 
    evas_map_util_points_populate_from_object(map, obj);
    evas_map_util_rotate(map, 180.0, 400, 300);
    evas_map_alpha_set(map, 0);
    evas_map_smooth_set(map, 1);
    evas_object_map_set(obj, map);
    evas_object_map_enable_set(obj, 1);
 
    evas_map_free(map);
}
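
Wiring it up is then a single call on the WebKit view once it has been created, for example right after evas_object_show(browser):

    // Flip the WebKit view by 180 degrees using the helper above
    _rotate_obj(browser);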

See this screenshot below and get the complete source code.

EFL + WebKit doing Politreco upside down

By Lucas De Marchi at October 23, 2010 09:53 PM

October 02, 2010

Deploying WebKit, common issues

Holger Freyther

From my exposure to people deploying QtWebKit or WebKit/GTK+ there are some things that re-appear and I would like to discuss these here.

  • Weird compile error in JavaScript?
  • It is failing in JavaScriptCore as it is the first thing that is built. It is most likely that the person that provided you with the toolchain has placed a config.h into it. There are some resolutions: remove the config.h from the toolchain (many things will break), or use -isystem instead of -I for system includes.
    The best way to find out if you suffer from this problem is to use -E instead of -c to only pre-process the code and see where the various includes are coming from. It is a strategy that is known to work very well.

  • No pages are loaded.
  • Most likely you do not have a DNS server set, you have no networking, or the system your board is connected to is not forwarding the data. Make sure you can ping a website that is supposed to work, e.g. ping www.yahoo.com. The next thing would be to use nc to execute a simple HTTP/1.1 GET on the site and see if it is working. In most cases you simply lack network connectivity.

  • HTTPS does not work
  • It might be either an issue with Qt or an issue with your system time. SSL Certificates at least have two dates (Expiration and Creation) and if your system time is after the Expiration or before the Creation you will have issues. The easiest thing is to add ntpd to your root filesystem to make sure to have the right time.

    The possible issue with Qt is a bit more complex. You can build Qt without OpenSSL support, you can make it link to OpenSSL, or you can make it dlopen OpenSSL at runtime. If SSL does not work, it is most likely that you have either built it without SSL support, or built it with runtime support but failed to install the OpenSSL library.

    Depending on your skills it might be best to go back to ./configure and make Qt link to OpenSSL to avoid the runtime issue. strings is a very good tool to find out if your libQtNetwork.so contains SSL support; together with objdump -x and searching for NEEDED you will find out which configuration you have.

  • Local pages are not loaded
  • This is a pretty common issue for WebKit/GTK+. In WebKit/GTK+ we are using GIO for local files, and to determine the filetype it uses the freedesktop.org shared-mime-info. Make sure you have that installed (a quick way to check is sketched after this list).

  • The page only displays blank
  • This is another issue that comes back from time to time. It only appears on WebKit/GTK+ with the DirectFB backend but sadly people never report back if and how they have solved it. You could make a difference and contribute back to the WebKit project.


    In general most of these issues can be avoided by using a pre-packaged Embedded Linux Distribution like Ångström (or even Debian). The biggest benefit of that approach is that someone else made sure that when you install WebKit, all dependencies will be installed as well and it will just work for your ARM/MIPS/PPC system. It will save you a lot of time.
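
    As a quick check for the shared-mime-info point above, the following small GIO sketch (my own diagnostic, not part of WebKit/GTK+) should report text/html for an .html file name once the MIME database is installed; a generic fallback type suggests it is missing:

    /* Diagnostic sketch (not from the post): check that GIO resolves
     * HTML file names via the freedesktop.org shared-mime-info database. */
    #include <gio/gio.h>
    #include <stdio.h>

    int main(void)
    {
        gboolean uncertain = FALSE;
        gchar *type;

        g_type_init(); /* required by the GLib versions of that era */

        type = g_content_type_guess("index.html", NULL, 0, &uncertain);
        printf("index.html => %s%s\n", type, uncertain ? " (uncertain)" : "");

        g_free(type);
        return 0;
    }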

    By zecke (noreply@blogger.com) at October 02, 2010 06:12 AM

    August 28, 2010

    WebKit

    Lucas De Marchi

    After some time working with the EFL port of WebKit, I’ve been nominated as an official WebKit developer. Now I have super powers in the official repository :-), but I swear I intend to use them with caution and responsibility. I’ll not forget Uncle Ben’s advice: “with great power comes great responsibility”.

    I’m preparing a post to talk about WebKit, EFL, eve (a new web browser based on WebKit + EFL) and how to easily embed a browser in your application. Stay tuned.

    By Lucas De Marchi at August 28, 2010 03:15 AM

    August 10, 2010

    Coscup2010/GNOME.Asia with strong web focus

    Holger Freyther

    On the coming weekend, Coscup 2010/GNOME.Asia is taking place in Taipei. The organizers have decided to have a strong focus on the Web, as can be seen in the program.

    On Saturday there is a keynote and various talks about HTML5 and node.js. Sunday will see three talks touching WebKit/GTK+: one about building a tablet OS with WebKit/GTK+, one by Xan Lopez on how to build hybrid applications (a topic I have devoted moiji-mobile.com to), and a talk by me using gdb to explain how WebKit/GTK+ works and how the porting layer interacts with the rest of the code.

    I hope the audience will enjoy the presentations, and I am looking forward to attending the conference; there is also a strong presence of the ex-Openmoko Taiwan Engineering team. See you on Saturday/Sunday, and drop me an email if you want to talk about WebKit or GSM...

    By zecke (noreply@blogger.com) at August 10, 2010 04:32 PM

    September 06, 2008

    Skia graphics library in Chrome: First impressions

    Alp Toker

    With the release of the WebKit-based Chrome browser, Google also introduced a handful of new backends for the browser engine including a new HTTP stack and the Skia graphics library. Google’s Android WebKit code drops have previously featured Skia for rendering, though this is the first time the sources have been made freely available. The code is apparently derived from Google’s 2005 acquisition of North Carolina-based software firm Skia and is now provided under the Open Source Apache License 2.0.

    Weighing in at some 80,000 lines of code (to Cairo’s 90,000 as a ballpark reference) and written in C++, some of the differentiating features include:

    • Optimised software-based rasteriser (module sgl/)
    • Optional GL-based acceleration of certain graphics operations including shader support and textures (module gl/)
    • Animation capabilities (module animator/)
    • Some built-in SVG support (module svg/)
    • Built-in image decoders: PNG, JPEG, GIF, BMP, WBMP, ICO (module images/)
    • Text capabilities (no built-in support for complex scripts)
    • Some awareness of higher-level UI toolkit constructs (platform windows, platform events): Mac, Unix (sic. X11, incomplete), Windows, wxwidgets
    • Performance features
      • Copy-on-write for images and certain other data types
      • Extensive use of the stack, both internally and for API consumers to avoid needless allocations and memory fragmentation
      • Thread-safety to enable parallelisation

    The library is portable and has (optional) platform-specific backends:

    • Fonts: Android / Ascender, FreeType, Windows (GDI)
    • Threading: pthread, Windows
    • XML: expat, tinyxml
    • Android shared memory (ashmem) for inter-process image data references

    Skia Hello World

    In this simple example we draw a few rectangles to a memory-based image buffer. This also demonstrates how one might integrate with the platform graphics system to get something on screen, though in this case we’re using Cairo to save the resulting image to disk:

    #include "SkBitmap.h"
    #include "SkDevice.h"
    #include "SkPaint.h"
    #include "SkRect.h"
    #include <cairo.h>
     
    int main()
    {
      SkBitmap bitmap;
      bitmap.setConfig(SkBitmap::kARGB_8888_Config, 100, 100);
      bitmap.allocPixels();
      SkDevice device(bitmap);
      SkCanvas canvas(&device);
      SkPaint paint;
      SkRect r;
     
      // White 10x10 square at (10, 10)
      paint.setARGB(255, 255, 255, 255);
      r.set(10, 10, 20, 20);
      canvas.drawRect(r, paint);

      // Red square, offset by (5, 5)
      paint.setARGB(255, 255, 0, 0);
      r.offset(5, 5);
      canvas.drawRect(r, paint);

      // Blue square, offset by another (5, 5)
      paint.setARGB(255, 0, 0, 255);
      r.offset(5, 5);
      canvas.drawRect(r, paint);
     
      {
        SkAutoLockPixels image_lock(bitmap);
        cairo_surface_t* surface = cairo_image_surface_create_for_data(
            (unsigned char*)bitmap.getPixels(), CAIRO_FORMAT_ARGB32,
            bitmap.width(), bitmap.height(), bitmap.rowBytes());
        cairo_surface_write_to_png(surface, "snapshot.png");
        cairo_surface_destroy(surface);
      }
     
      return 0;
    }

    You can build this example for yourself, linking statically against the libskia.a library generated during the Chrome build process on Linux.

    Not just for Google Chrome

    The Skia backend in WebKit, the first parts of which are already hitting SVN (r35852, r36074), isn’t limited to use in the Chrome/Windows configuration, and some work has already been done to get it up and running on Linux/GTK+ as part of the ongoing porting effort.

    The post Skia graphics library in Chrome: First impressions appeared first on Alp Toker.

    By alp at September 06, 2008 12:11 AM

    June 12, 2008

    WebKit Meta: A new standard for in-game web content

    Alp Toker

    Over the last few months, our browser team at Nuanti Ltd. has been developing Meta, a brand new WebKit port suited to embedding in OpenGL and 3D applications. The work is being driven by Linden Lab, who are eagerly investigating WebKit for use in Second Life.

    While producing Meta we’ve paid great attention to resolving the technical and practical limitations encountered with other web content engines.


    uBrowser running with the WebKit Meta engine

    High performance, low resource usage

    Meta is built around WebKit, the same engine used in web browsers like Safari and Epiphany, and features some of the fastest content rendering around as well as nippy JavaScript execution with the state of the art SquirrelFish VM. The JavaScript SDK is available independently of the web renderer for sandboxed client-side game scripting and automation.

    It’s also highly scalable. Some applications may need only a single browser context but virtual worlds often need to support hundreds of web views or more, each with active content. To optimize for this use case, we’ve cut down resource usage to an absolute minimum and tuned performance across the board.

    Stable, easy to use cross-platform SDK

    Meta features a single, rock-solid API that works identically on all supported platforms including Windows, OS X and Linux. The SDK is tailored specifically to embedding and allows tight integration (shared main loop or operation in a separate rendering thread, for example) and hooks to permit seamless visual integration and extension. There is no global setup or initialization and the number of views can be adjusted dynamically to meet resource constraints.

    Minimal dependencies

    Meta doesn’t need to use a conventional UI toolkit and doesn’t need any access to the underlying windowing system or the user’s filesystem to do its job, so we’ve done away with these concepts almost entirely. It adds only a few megabytes to the overall redistributable application’s installed footprint and won’t interfere with any pre-installed web browsers on the user’s machine.

    Nuanti will be offering commercial and community support and is anticipating involvement from the gaming industry and homebrew programmers.

    In the mid term, we aim to submit components of Meta to the WebKit Open Source project, where our developers are already actively involved in maintaining various subsystems.

    Find out more

    Today we’re launching meta.nuanti.com and two mailing lists to get developers talking. We’re looking to make this site a focal point for embedders, choc-full of technical details, code samples and other resources.

    The post WebKit Meta: A new standard for in-game web content appeared first on Alp Toker.

    By alp at June 12, 2008 09:35 AM

    April 21, 2008

    Acid3 final touches

    Alp Toker

    Recently we’ve been working to finish off and land the last couple of fixes to get a perfect pixel-for-pixel match against the reference Acid3 rendering in WebKit/GTK+. I believe we’re the first project to achieve this on Linux — congratulations to everyone on the team!


    Epiphany using WebKit r32284

    We also recently announced our plans to align more closely with the GNOME desktop and mobile platform. To this end we’re making a few technology and organisational changes that I hope to discuss in an upcoming post.

    The post Acid3 final touches appeared first on Alp Toker.

    By alp at April 21, 2008 02:38 AM

    April 06, 2008

    WebKit Summer of Code Projects

    Alp Toker

    With the revised deadline for Google Summer of Code ’08 student applications looming, we’ve been getting a lot of interest in browser-related student projects. I’ve put together a list of some of my favourite ideas.

    If in doubt, now’s the time to submit proposals. Already-listed ideas are the most likely to get mentored but students are free to propose their own ideas as well. Proposals for incremental improvements will tend to be favoured over ideas for completely new applications, but a proof of concept and/or roadmap can help when submitting plans for larger projects.

    Update: There’s no need to keep asking about the status of an application on IRC/private mail etc. It’s a busy time for the upstream developers but they’ll get back in touch as soon as possible.

    The post WebKit Summer of Code Projects appeared first on Alp Toker.

    By alp at April 06, 2008 08:40 PM

    March 27, 2008

    WebKit gets 100% on Acid3

    Alp Toker

    Today we reached a milestone with WebKit/GTK+ as it became the first browser engine on Linux/X11 to get a full score on Acid3, shortly after the Acid3 pass by WebKit for Safari/Mac.

    Acid3
    Epiphany using WebKit r31371

    There is actually still a little work to be done before we can claim a flawless Acid3 pass. Two of the most visible remaining issues in the GTK+ port are :visited (causing the “LINKTEST FAILED” notice in the screenshot) and the lack of CSS text shadow support in the Cairo/text backend which is needed to match the reference rendering.

    It’s amazing to see how far we’ve come in the last few months, and great to see the WebKit GTK+ team now playing an active role in the direction of WebCore as WebKit continues to build momentum amongst developers.

    Update: We now also match the reference rendering.

    The post WebKit gets 100% on Acid3 appeared first on Alp Toker.

    By alp at March 27, 2008 09:06 PM

    March 15, 2008

    Bossa Conf ’08

    Alp Toker

    Am here in the LHR lounge. In a couple of hours, we take off for the INdT Bossa Conference, Pernambuco, Brazil via Lisbon. Bumped into Pippin, who will be presenting Clutter. Also looking forward to Lennart‘s PulseAudio talk amongst others.

    If you happen to be going, drop by on my WebKit Mobile presentation, 14:00 Room 01 this Monday. We have a small surprise waiting for Maemo developers.

    WebKit Mobile

    The post Bossa Conf ’08 appeared first on Alp Toker.

    By alp at March 15, 2008 03:29 AM