April 15, 2021

Introducing CSS Grid Inspector

Surfin’ Safari

CSS Grid Layout is a web-standard layout system used to define a grid structure of rows and columns in CSS. HTML elements can then be distributed to the resulting grid cells to achieve a flexible and predictable layout.

The system itself is highly capable, but it does require a shift from mental models of the past when creating layouts with CSS. Since the grid definition itself is not visible (only its effect on the position of elements is), complex grid layouts require web developers to memorize constraints and imagine how space is distributed among elements to deduce whether the layout works as intended.

Concert poster demo using CSS Grid Layout by Jen Simmons

Being able to see both the structure of the grid and its effects in context on the page makes working with CSS Grid Layout more approachable and easier to reason about.

Overlay showing grid structure of the concert poster demo

CSS Grid Inspector is a new tool in Web Inspector which helps to visualize grids defined using CSS Grid Layout notation and to verify that elements are positioned within them as expected. It was introduced in Safari Technology Preview 123.

Visualizing grids

CSS Grid Inspector shows a page overlay on top of HTML elements which are CSS Grid containers. An element is a CSS Grid container when its display CSS property has a value of either grid or inline-grid.
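For reference, here is a minimal sketch of a grid container (the selector and track values are illustrative):

```css
/* Any element whose display is grid or inline-grid becomes a
   CSS Grid container, which Web Inspector can overlay. */
.demo {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr; /* three column tracks */
  grid-template-rows: auto 1fr;       /* two row tracks */
  gap: 10px;                          /* grid gaps the overlay can show */
}
```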

You can turn on the overlay for a CSS Grid container in one of two ways:

  • Click the “grid” badge shown next to the element in the DOM Tree outline of the Elements Tab.
  • Open the new Layout panel in the details sidebar of the Elements Tab, then click the checkbox next to the corresponding element in the Grid Overlays list.
A “grid” badge shown next to <div id="css-grid-demo"> in the DOM Tree outline.
Layout sidebar panel with the overlay enabled for the “div#css-grid-demo” CSS Grid container.

The overlay can show:

  • Lines which define the grid rows and columns (a.k.a. grid tracks)
  • Spacing between grid tracks (a.k.a. grid gaps)
  • Labels for line numbers and grid track sizes
  • Labels for grid line names and grid area names
Elements positioned on a grid defined with CSS Grid Layout
CSS Grid Inspector overlay showing labels for grid line numbers and grid track sizes

Configuring the CSS Grid Inspector overlay

The CSS Grid Inspector overlay can show many properties of a grid’s components, according to the CSS properties used. Showing everything at once can be overwhelming, so depending on your workflow you may prefer to see all of it or just a subset.

Use the settings in the Page Overlay Options section of the Layout panel in the details sidebar to configure the level of detail presented with the overlay. Changes are applied immediately and saved across Web Inspector sessions.

You can toggle the following options:

  • Track Sizes: shows a label with the user-authored value for the track size, or auto if the value is not explicitly set. This helps visual inspection by matching the value set in CSS with the corresponding grid track on the page.
  • Line Numbers: shows a label with the ordinal and the reverse ordinal of explicit grid lines. The reverse ordinal is useful when referencing lines backward from the end. For example, a label of 1 -4 means the line is both the first line and the fourth line counting from the end.
  • Line Names: shows a label with the user-defined name for a grid line, or the implicit grid line name derived from a grid area name. If no names are defined, this option has no effect. Learn more about implicit grid line names on MDN.
  • Area Names: shows a label with the user-defined name for a group of cells. If no area names are defined, this option has no effect.
  • Extended Grid Lines: extends grid lines infinitely in their respective directions. This is useful for checking alignment with other elements on the page.
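As an illustration (all names below are made up), here is a grid whose line and area names the overlay could label:

```css
/* Named grid lines (in brackets) and named grid areas.
   Each named area also produces implicit *-start/*-end line names. */
.poster {
  display: grid;
  grid-template-columns: [full-start] 1fr [content-start] 3fr [content-end full-end];
  grid-template-areas:
    "header header"
    "aside  main";
}
.poster h1 { grid-area: header; } /* implies lines header-start / header-end */
```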

To change the overlay color, use the color swatch next to the corresponding element in the Grid Overlays list in the Layout sidebar panel. The new color is saved for that element on that page and remembered across Web Inspector sessions, so when you return later to inspect the same element, the overlay will use the color you picked.

Try it out

If you’re using Safari Technology Preview 123 or a later release, you can inspect the example below to try out the CSS Grid Inspector on this page. Open Web Inspector, go to the Elements Tab, switch to the Layout sidebar panel, then toggle the grid overlay for the element marked div#css-grid-demo.


If you encounter any issues, please file a report at webkit.org/new-inspector-bug.
If you want to share feedback or ideas, please send them to us on Twitter: @webkit.

Note: Learn more about Web Inspector from the Web Inspector Reference documentation.

April 15, 2021 09:00 AM

April 13, 2021

Enrique Ocaña: GStreamer WebKit debugging tricks using GDB (1/2)

Igalia WebKit

I’ve been developing and debugging desktop and mobile applications on embedded devices over the last decade or so. For most of this period I’ve focused on the multimedia side of the WebKit ports using GStreamer, an area that is a mix of C (glib, GObject and GStreamer) and C++ (WebKit).

Over these years I’ve had to work on ARM embedded devices (mobile phones, set-top-boxes, Raspberry Pi using buildroot) where most of the environment aids and tools we take for granted on a regular x86 Linux desktop just aren’t available. In these situations you have to be imaginative and find your own way to get the work done and debug the issues you find along the way.

I’ve been writing down the most interesting tricks I’ve found on this journey and I’m sharing them with you in a series of 7 blog posts, one per week. Most of them aren’t mine, and the ones I learnt at the beginning of my career can even seem a bit naive, but I find them worth sharing anyway. I hope you find them as useful as I do.

Breakpoints with command

You can break at a location, run some commands and continue execution. This is useful for getting logs:

break getenv
commands
 # This disables scroll continue messages
 # and suppresses output
 set pagination off
 p (char*)$r0
 continue
end

break grl-xml-factory.c:2720 if (data != 0)
commands
 call grl_source_get_id(data->source)
 # $ is the last value in the history, the result of
 # the previous call
 call grl_media_set_source (send_item->media, $)
 call grl_media_serialize_extended (send_item->media, GRL_MEDIA_SERIALIZE_FULL)
 continue
end

This idea can be combined with watchpoints and applied to trace reference counting of GObjects, to find out from which places the refcount is increased and decreased.
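For instance, one possible sketch (the address below is a placeholder for the GObject you want to trace; ref_count is the public refcount field of GObject):

```
# Watch the ref_count field of a GObject at a known address
watch ((GObject *) 0x7fffffffd020)->ref_count
commands
 # Print a short backtrace showing who changed the refcount
 bt 5
 continue
end
```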

Force execution of an if branch

Just wait until the if chooses a branch and then jump to the other one:

6 if (i > 3) {
(gdb) next
7 printf("%d > 3\n", i);
(gdb) break 9
(gdb) jump 9
9 printf("%d <= 3\n", i);
(gdb) next
5 <= 3

Debug glib warnings

If you get a warning message like this:

W/GLib-GObject(18414): g_object_unref: assertion `G_IS_OBJECT (object)' failed

the functions involved are g_return_if_fail_warning(), which calls g_log(). It’s good to set a breakpoint on either of the two:

break g_log

Another method is to export G_DEBUG=fatal-criticals, which converts all criticals into crashes, making the debugger stop at the point where the critical is raised.
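For example (the program name is a placeholder):

```
# Make every GLib critical abort, so gdb stops right at the offending spot
export G_DEBUG=fatal-criticals
gdb --args ./my-program
```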

Debug GObjects

If you want to inspect the contents of a GObject that you have a reference to…

(gdb) print web_settings 
$1 = (WebKitWebSettings *) 0x7fffffffd020

you can dereference it…

(gdb) print *web_settings
$2 = {parent_instance = {g_type_instance = {g_class = 0x18}, ref_count = 0, qdata = 0x0}, priv = 0x0}

even if it’s an untyped gpointer…

(gdb) print user_data
(void *) 0x7fffffffd020
(gdb) print *((WebKitWebSettings *)(user_data))
{parent_instance = {g_type_instance = {g_class = 0x18}, ref_count = 0, qdata = 0x0}, priv = 0x0}

To find the type, you can use GType:

(gdb) call (char*)g_type_name( ((GTypeInstance*)0x70d1b038)->g_class->g_type )
$86 = 0x2d7e14 "GstOMXH264Dec-omxh264dec"

Instantiate C++ object from gdb

(gdb) call malloc(sizeof(std::string))
$1 = (void *) 0x91a6a0
(gdb) call ((std::string*)0x91a6a0)->basic_string()
(gdb) call ((std::string*)0x91a6a0)->assign("Hello, World")
$2 = (std::basic_string<char, std::char_traits<char>, std::allocator<char> > &) @0x91a6a0: {static npos = <optimized out>, _M_dataplus = {<std::allocator<char>> = {<__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x91a6f8 "Hello, World"}}
(gdb) call SomeFunctionThatTakesAConstStringRef(*(const std::string*)0x91a6a0)

See: 1 and 2

By eocanha at April 13, 2021 10:49 AM

April 04, 2021

Manuel Rego: :focus-visible in WebKit - March 2021

Igalia WebKit

Another month is gone, and we are back with another status update (see January and February ones).

This is about the work Igalia is doing on the implementation of :focus-visible in WebKit. This is part of the Open Prioritization campaign and is being sponsored by many people. Thank you!
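As a quick refresher, :focus-visible matches only when the browser would natively draw a focus indicator (typically keyboard focus), letting authors avoid focus rings on mouse clicks:

```css
/* Draw a custom focus ring only for "visible" focus
   (e.g. keyboard navigation), not for pointer clicks. */
button:focus-visible {
  outline: 2px solid royalblue;
}
/* Suppress the default ring when focus is not "visible". */
button:focus:not(:focus-visible) {
  outline: none;
}
```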

Work in March slowed down, so this status update is shorter than previous ones. The main focus has been on spec discussions, trying to reach agreement.

Implementation details

The initial patch is available in the latest Safari Technology Preview (STP) releases behind a runtime flag, but it had an annoying bug that caused the body element to match :focus-visible when you used the keyboard to move focus. The issue was fixed last month but hasn’t been included in an STP release yet (hopefully it’ll make it into release 124). Apart from that, some minor patches related to implementation details have landed too. But this was just a small part of the work during March.

In addition, I realized that :focus-visible appears in the Chromium and Firefox DevTools, so I took a look at how to make that happen in WebKit too. At that point I realized that :focus-within, which has been shipping for a long time, isn’t available in WebKit’s Web Inspector yet, so I cooked up a simple patch to add it there. However, that hasn’t landed yet because it needs some UI rework; otherwise the list of pseudo-classes would be too long and wouldn’t look nice in the inspector. So the patch is waiting for some changes to the UI before it can be merged. Once that’s solved, adding :focus-within and :focus-visible to the Web Inspector will be pretty straightforward.

Spec discussions

This was the main part of the work during March, and the goal was to reach some agreement before finishing the implementation bits.

The main issue was how :focus-visible should work when a script moves focus. The behavior of the current implementations was not interoperable, the spec was not totally clear and, as explained in the previous report, I created a set of new tests to clarify this. These tests demonstrated some interesting incompatibilities. Based on this, we compared the results with the widely used polyfill as well. We found various misalignments in tricky cases, which generated significant discussion about which behavior was correct, and why. After considerable discussion with people from Google and Mozilla, it looks like we have finally reached an agreement on the expectations.

Next was to see if we could clarify the text so that these cases couldn’t be interpreted in incompatible ways. Following the advice of the CSS Working Group, I worked on a PR for the HTML spec trying to define when a browser should draw a focus indicator, and thus match :focus-visible. Some discussion was raised about which elements should always match :focus-visible and how to define that in normative text (as some elements like <select> draw a focus ring when clicked in some browsers and not others, and some elements like <input type="date"> allow keyboard input or not depending on the platform). The discussion is still ongoing, and we’re still trying to find the proper way to define this in the HTML spec. If we manage to do that, it would be a great step forward for the interoperability of :focus-visible implementations, and a big win for the people ultimately using this feature.

Apart from that I’ve also created a test for my proposal about how :focus-visible should work in combination with Shadow DOM, but I don’t think I want to open that can of worms until the other part is done.

Some numbers

Let’s take a look at the numbers again, though things have moved slowly this time.

  • 21 PRs merged in WPT (1 in March).
  • 17 patches landed in WebKit (3 in March).
  • 7 patches landed in Chromium.
  • 2 PRs merged in CSS specs (1 in March).
  • 1 PR merged in HTML spec.

Next steps

The main goal for April is to close the spec discussions and land the PRs on the HTML spec, so we can focus again on finishing the implementation in WebKit.

However, if reaching an agreement on these HTML spec changes is not possible, we can probably still land some patches on the implementation side, adding some support for script focus in WebKit.

Stay tuned for more updates!

April 04, 2021 10:00 PM

March 31, 2021

Release Notes for Safari Technology Preview 123

Surfin’ Safari

Safari Technology Preview Release 123 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 273903-274002.

Web Inspector

  • Make the border color of the grid badge match the corresponding outline (r273992)
  • Save and restore user-defined CSS Grid overlay colors (r273912)


  • Changed to consider intrinsic sizes as automatic whenever the block axis of the flex item is the flexible box main size (r273955)
  • Fixed orthogonal items with percentage sizes in Flexbox (r273958)
  • Fixed position: sticky behavior in a table with dir=RTL (r273982)


  • Removed the Origin header if the navigation request goes from POST to GET (r273905)

March 31, 2021 05:50 PM

March 11, 2021

Release Notes for Safari Technology Preview 122

Surfin’ Safari

Safari Technology Preview Release 122 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 272845-273903.

Web Inspector


  • Changed CSS properties that disallow negative values to not animate to negative values (r273001)
  • Changed blending of border-image-width to be discrete between auto values and other types (r273635)
  • Fixed border-image-outset to handle float values (r273478)
  • Fixed border-image-slice blending to account for the fill keyword (r273625)


  • Changed min-content and max-content keywords to behave as an initial value in block axis (r273206)
  • Changed CSS grid to not allow negative heights (r273470)
  • Fixed min- and max- widths of grid affected by ancestor (r273435)
  • Changed the initial value for border-image-outset to 0 (r273882)
  • Implemented :focus-visible (r273812, r272983)
  • Implemented the first case in Definite and Indefinite Sizes specifications in section 9.8 for flexbox (r273072)
  • Fixed runtime-disabled CSS features still appearing enabled via CSS.supports() (r273385)
  • Removed support in the CSS parser for -khtml- prefixed CSS values (r273637)
  • Removed support for pos and pixel prefixed CSS values (r273627)

CSS Color

  • Added experimental support for CSS Color 5 color-contrast() (r273683)
  • Added experimental support for CSS Color 5 color-mix() (r273244)
  • Added experimental support for CSS Color 5 Relative Color Syntax (r273127)
  • Changed color(lab ...) to serialize as color(lab ...) not lab() according to latest CSS Color 4 spec (r273211)
  • Fixed lab() and lch() colors to clamp out-of-bound values at parse time (r272909)
  • Fixed lch() colors to serialize as lch() (r273078)

CSS Aspect Ratio

  • Added support for aspect-ratio on flexbox items (r273193)
  • Changed an aspect-ratio that ends with forward slash to be treated as invalid (r273068)
  • Fixed aspect-ratio showing in computed styles when disabled (r273314)
  • Changed to take box-sizing into account in replaced element intrinsic sizing for aspect-ratio (r273753)

JavaScript

  • Enabled private methods (r273125)
  • Implemented private static methods (r273107)
  • Implemented top-level-await (r273225)
  • Implemented RegExp Match Indices (r273086)
  • Implemented GC verifier (r273138)
  • Added support for modules in Workers and Worklets (r273203)
  • Added support for modules in Service Workers (r273224)
  • Avoided performing toObject for options in new Intl constructors to align them to a new spec change (r273153)
  • Reduced promise reaction memory usage when there are multiple reactions (r273718)
  • Optimized object-reset expression (r273135)
  • Optimized Promise#then by avoiding function allocations in major case (r273605)
  • Micro-optimized for-in (r273766)
  • Threw TypeError when getFunctionRealm hits revoked Proxy (r273661)
  • Threw TypeError when TypedArray’s [[DefineOwnProperty]] failed (r273750)
  • Fixed delete with index for module namespace object when arbitrary module identifiers use index (r273816)

WebAssembly

  • Extended wasm type with type index (r273813)
  • Implemented non-trapping float to int conversion (r272933)


  • Enabled Paint Timing (r273221, r273220)
  • Changed window proxy of detached iframe doesn’t respect updates to global values (r273901)
  • Fixed devicemotion and deviceorientation events to work in third-party iframes with Feature-Policy allowing it (r273444)

Media

  • Fixed media segment getting incorrectly dropped when using negative timestampOffset or when source buffer appendWindow is set in MSE (r273461)
  • Fixed audio that stops playing when backgrounding a page that is playing and capturing audio (r273069)


  • Added support for WebRTC priority (r273550)
  • Fixed MediaRecorder.stop() to work correctly when recording has been paused (r272911)
  • Added support for BigInt as media-stream encryption key (r273158)

Accessibility

  • Added the ability for an embedded accessibility image description in an image file to be reported if available (r273214)
  • Fixed VoiceOver announces grid as having “0 columns” causing VoiceOver to not enter the grid (r273715)
  • Fixed VoiceOver incorrectly announcing groups in ARIA tree instances as empty (r273328)


  • Fixed scroll snapping when dragging scrollbars (r273690)

March 11, 2021 09:30 PM

March 03, 2021

Embracing the inevitable: It’s time for B2B financial services to take on digital

Adobe Web Platform

<div class="embed embed-internal embed-internal-embracingtheinevitableitstimeforb2bfinancialservicestotakeondigital embed-internal-03"> <div> <h1 id="embracing-the-inevitable-its-time-for-b2b-financial-services-to-take-on-digital">Embracing the inevitable: It’s time for B2B financial services to take on digital</h1> <p>B2B brands must drive the same level of personal engagement that was once had with clients and customers across the boardroom table, but in digital.</p> </div> <div> <img src="/hlx_52caa66088d7c12096570745d799e1a84ba741ab.jpeg" alt="Woman at desk with laptop in video meeting"> </div> <div> <p>By Christopher Young</p> <p>Posted on 03-03-2021</p> </div> <div> <p>For years, the B2B side of financial services succeeded without needing to make a focused investment in digital capabilities, until 2020 arrived and pandemic changed everything. While face-to-face business interactions essentially shut down, other trends emerged that depended on digital, such as buyers doing more research online and embracing mobile more than ever. Marketers were forced to adapt to a digital way of doing business.</p> <p>There is a new sense of urgency to drive the same level of personal engagement you once had with clients and customers across the boardroom table, only in a digital format. B2B brands can seize this newfound digital opportunity to redefine client experiences and grow relationships and business in unexpected ways.</p> <p>We asked experts from Adobe’s Digital Strategy Group, product marketing, and one of our strategic partners why it’s vital for financial services to embrace digital now — and how it can position you for success in 2021.</p> <p>“There’s a lot more that we can do digitally than we ever thought possible,” said Alarice Cesareo Lonergan, IBM, partner, NA financial services, enterprise strategy &amp; iX market leader. 
“By providing content and sharing perspectives more broadly, you might attract a whole new segment of potential clients.</p> <h3 id="switch-your-focus">Switch your focus</h3> <p>Direct-to-consumer businesses already see experience as the primary means of competitive differentiation. They’re well-versed in using physical <em>and</em> digital channels to attract customers. To differentiate in a sustainable way, B2B brands need to look towards what the most successful B2C brands are doing, and change their focus from products and services to experiences.</p> <p><img src="/hlx_423b425e11fcc6c97dd957a5591428808a4a7c1b.png" alt="Infographic: differences in CX approach between B2B and B2C"></p> <p>“Customer expectations have changed,” said Adrienne Whitten, director of product and segment marketing at Adobe. “Even the consumer side of the same financial services businesses have set the bar.”</p> <p>“We see more examples of banks consolidating the martech stack across B2B and B2C,” said Adobe’s Karen Cha, senior analyst, digital strategy group. “They’ll take advanced processes that the B2C side is using and leverage for stronger processes on the B2B side.”</p> <p>Some firms are already forging ahead. They’re investing in all-digital ways to engage customers, including developing content like blogs, case studies, webinars, and podcasts. 
But reaching clients regularly with content that exactly fits their journey stage requires technology.</p> <p><img src="/hlx_447ca80626378dd15c6041f8afc9d120c9c9c9a4.png" alt="Infographic: plans businesses have to address customer engagement challenges"></p> <p><em>Reshaping the Customer Experience: July 2020 COVID-19 Pulse Survey, Celent</em></p> <p>“You need to take a look at the processes where you can interject automation and amp up the digital capabilities for more useful and engaging interactions,” said Lonergan.</p> <h3 id="digitals-time-is-now">Digital’s time is now</h3> <p>Enterprise sales have already embraced digital channels with strategies that financial services can adopt to accelerate growth — and it’s not too late.</p> <p>“Those who didn’t establish a digital foundation yet aren’t that far behind, because there are lots of tools and technologies they can invest in now to help them overcome that hurdle,” Lonergan said.</p> <p>Customer demographics are also compelling a switch to digital. Millennials and Gen Z are moving up — they comprise 55 percent of directors, senior directors, and the C/VP level, according to the “Tech Buyers Generational Insights Research” by Adobe and PK Global (2020). The new generation was brought up in a different environment and depends on technology. They prefer digital as a way to interact, without the need for humans or phones. And they prioritize thought leadership and other online content over product-specific information like features, benefits, and roadmaps.</p> <h3 id="aim-for-the-north-star">Aim for the North Star</h3> <p>Many businesses now use account-based marketing (ABM) to identify and pursue qualified opportunities and high-value accounts, and to provide relevant experiences that generate more revenue, more quickly.</p> <p>“Sales and business leaders may not understand how digital can play a role in driving conversions, sales, and revenues,” said Lisa Sheth, head of digital strategy, Adobe. 
“By establishing organizational buy-in, you can work toward a digital mindset and foundation.”</p> <p>The digital capabilities your business needs start with personalization across channels, purposeful content, measurement and attribution, and journey management.</p> <ol> <li><strong>Personalize across channels</strong></li> </ol> <p>“Start with personalization to simulate the connection you’ve had in person,” Lonergan said. “Target and create that experience so it feels tailored for your customer. You’ll attract the customer with the experiences you’re going to deliver to them.”</p> <p>You need to coordinate touches across digital and in-person channels — when we have them again. Great experiences keep customers loyal, happy, and coming back.</p> <p>“We see a need to drive the empathy and deep connection between humans in a digital way,” said Lonergan. “That should be top of mind in 2021 across financial institutions.”</p> <ol start="2"> <li><strong>Deliver content that resonates</strong></li> </ol> <p>Ask yourself what content you need to develop to attract a particular persona or buying group. Determine what’s resonated well in the past and since the pandemic. Then engage the right people within each business account at complementary touchpoints throughout their journey</p> <p>“One of the largest banks went from a handful of webinars to hundreds,” said Sheth. “That’s how they’ve been able to continue their conversations and stay top of mind.</p> <ol start="3"> <li><strong>Track your marketing impact</strong></li> </ol> <p>You can’t improve what you can’t measure. Analyze marketing impact by tracking the progress of activities that are part of your digital initiative. 
Prove the impact of digital marketing by attributing revenue to the points where it’s generated — and then fine-tune to improve the results.</p> <ol start="4"> <li><strong>Design the complete experience</strong></li> </ol> <p>Technology, manufacturing, and other B2B verticals are emulating B2C by creating digital journeys that personalize the end-to-end experience. Financial services companies need to follow their lead.</p> <p>“You can architect an end-to-end experience and determine where you can use automation to personally engage the customer,” Lonergan said. You also need to evaluate which points could benefit from human interaction, either virtual or in person.
Sixty-five percent of B2B decision makers believe the new sales model is at least as effective as it was before COVID-19, according to McKinsey, May 2020.</p> <p>To learn more about the steps you need to take, how to proceed, and the success other B2B financial services marketers are having, read our article <a href="https://blog.adobe.com/en/publish/2021/02/19/why-commercial-banks-and-asset-managers-need-adopt-account-based-experience-mindset.html">on implementing the account-based experience mindset in commercial banking.</a></p> </div> <div> <h2 id="featured-posts">Featured posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/2020/07/28/in-complex-times-panasonic-made-its-b2b-marketing-simple.html#gs.t969is">https://blog.adobe.com/en/2020/07/28/in-complex-times-panasonic-made-its-b2b-marketing-simple.html#gs.t969is</a></li> <li><a href="https://blog.adobe.com/en/2020/08/03/tapping-into-the-power-of-ai-to-drive-better-b2b-event-marketing.html#gs.t96cp5">https://blog.adobe.com/en/2020/08/03/tapping-into-the-power-of-ai-to-drive-better-b2b-event-marketing.html#gs.t96cp5</a></li> <li><a href="https://blog.adobe.com/en/2020/05/19/adobe-named-a-leader-in-the-forrester-wave-b2c-and-b2b-commerce-suites.html#gs.t96fyi">https://blog.adobe.com/en/2020/05/19/adobe-named-a-leader-in-the-forrester-wave-b2c-and-b2b-commerce-suites.html#gs.t96fyi</a></li> </ul> </div> <div> <p>Topics: Insights &amp; Inspiration, Digital Transformation, Personalization, Financial Services, Experience Cloud,</p> <p>Products: Experience Cloud, Marketo&nbsp;Engage,</p> </div> </div>

March 03, 2021 12:00 AM

Go paperless in the workplace with Acrobat online PDF tools

Adobe Web Platform

<div class="embed embed-internal embed-internal-gopaperlessintheworkplacewithacrobatonlinepdftools embed-internal-03"> <div> <h1 id="go-paperless-in-the-workplace-with-acrobat-online-pdf-tools">Go paperless in the workplace with Acrobat online PDF tools</h1> <p>Work is stressful enough without unsightly clutter on your desk. Adobe Acrobat can help you go paperless.</p> </div> <div> <img src="/hlx_c1f3633366b4126ec86a24c8b1479c2513e47651.jpeg" alt="Go paperless with online PDF tools"> </div> <div> <p>By Adobe Document Cloud Team</p> <p>Posted on 03-03-2021</p> </div> <div> <p>Most offices wrestle with an all-too-common problem: What to do with all the paper? It piles up on desks and gets crammed into folders, creating unsightly clutter and organizational chaos. Even with a good file management system, the clutter and sheer volume of paper files make it much too easy for important documents to be misplaced, lost, or even stolen. Your workday is stressful enough without having to wrangle reams of paper around the office. If you’re ready to save the planet and your sanity, Adobe Acrobat is here to help.</p> <div class="embed embed-internal embed-internal-acrobat embed-internal-documentcloud"> <div><p><img src="/hlx_3e0e58654cbc37f0005f6ba2a61edb9314c3feaf.png" alt=""></p> <h3 id="adobe-acrobat">Adobe Acrobat</h3> <p>Stay connected to your team with simple workflows across desktop, mobile, and web — no matter where you’re working.</p> <p><a href="https://acrobat.adobe.com/us/en/acrobat.html">Learn more</a></p></div> </div> <p>Adobe Document Cloud offerings can help make going paperless as smooth as possible. <a href="https://acrobat.adobe.com/us/en/mobile/scanner-app.html">Adobe Scan</a> is a simple yet powerful mobile application for digitizing paper documents and reducing waste. Once you’ve started the digitization process, Acrobat <a href="https://www.adobe.com/acrobat/online.html">online PDF tools</a> can help you manage your new digital documents. 
You can give these tools a try in any browser to reduce paper use and increase your workplace efficiency:</p> <ul> <li>Combine similar or related documents, like monthly reports, into a single file using the <a href="https://www.adobe.com/acrobat/online/merge-pdf.html">Merge PDFs</a> tool to create a comprehensive year-end document.</li> <li>The <a href="https://www.adobe.com/acrobat/online/rearrange-pdf.html">Reorder PDF Pages</a> tool allows users to organize scanned documents, like a presentation deck, by moving pages around within the document to present content in your preferred order.</li> <li>The <a href="https://www.adobe.com/acrobat/online/delete-pdf-pages.html">Delete PDF Pages</a> tool lets users remove pages from a document that are outdated or unnecessary while keeping the relevant pages intact.</li> </ul> <h3 id="the-benefits-of-going-paperless">The benefits of going paperless</h3> <p>With simple and affordable digitization tools available, there’s no good reason to put off going paperless. On the contrary, there are many good reasons to make the transition as soon as possible.</p> <ul> <li><strong>Cost reduction</strong>: Paper costs a lot of money. Some studies have estimated that businesses spend as much as $80 per employee on paper every year. Add in the costs of ink and toner, printing and copying machines, machine maintenance, and storage space, and your bottom line can take quite a hit.</li> <li><strong>Environmental sustainability:</strong> Paper production is the third-largest industrial polluter in the United States, and it contributes to deforestation around the world. Going digital eliminates massive amounts of material, energy, and pollution costs associated with paper production and consumption.</li> <li><strong>Efficiency and accessibility:</strong> Paper systems also cost time. Employees spend hours every day looking for paper documents, resulting in tremendous efficiency and productivity losses. 
In a paperless office, they can locate documents with a simple keyword search from their desktop, phone, or any other connected device.</li> <li><strong>Collaboration:</strong> PDFs and digital documents enable easy communication and collaboration. Teammates can share files online and work together from a single digital copy rather than printing as many copies as there are collaborators.</li> <li><strong>Durability:</strong> Damage to paper records from fire, flood, or decay can devastate an organization. When you go paperless, you never have to worry that an unforeseen event could wipe out all of your important records and information as long as you’re maintaining digital backups of your documents.</li> </ul> <h3 id="tips-for-transitioning-to-a-paperless-office">Tips for transitioning to a paperless office</h3> <p>Making the transition to a paperless office is more complicated than digitizing all of your existing documents, however. In order to become a truly paperless organization, you need to put in place tools and systems to ensure that paperless work and collaboration is simple, effective, and enjoyable.</p> <ul> <li><strong>Create a plan:</strong> Creating a paperless office is a complex task. In order to make sure you’re setting the right goals and sticking to them, appoint an individual or a group of people to map your paper use and create a set of priorities and action items to reduce paper consumption throughout the organization.</li> <li><strong>Use a project management system:</strong> Instead of communicating and keeping track of ongoing projects with memos, sticky notes, and project binders, adopt a digital project management system. 
Such systems allow you to create timelines for deliverables, assign tasks to teams or individuals, and keep track of updates and progress across platforms.</li> <li><strong>Hold paperless meetings:</strong> Instead of providing all attendees with their own copy of meeting materials, share a digital version they can access during and after the meeting. Simply appoint one person in each meeting to keep digital minutes, which attendees can refer back to later.</li> <li><strong>Start using e-signatures:</strong> Cut down the amount of paper you use externally by switching to digital agreements for all of your client and vendor contracts. You can try sending a document to others for e-signing with the <a href="https://www.adobe.com/acrobat/online/request-signature.html">Acrobat Request Signatures tool</a>.</li> </ul> <p>Are you ready to take your workplace completely into the digital realm? If so, sign up for a <a href="https://acrobat.adobe.com/us/en/free-trial-download.html">free seven-day trial of Adobe Acrobat Pro DC</a> (then US$14.99/mo) for unlimited access to PDF and e-signing tools that can make the transition to paperless as easy and painless as possible.</p> </div> <div> <h2 id="featured-posts">Featured posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/02/02/adobe-adds-new-acrobat-tools-to-tackle-pdf-tasks-in-the-browser.html#gs.ugxeo4">https://blog.adobe.com/en/publish/2021/02/02/adobe-adds-new-acrobat-tools-to-tackle-pdf-tasks-in-the-browser.html#gs.ugxeo4</a></li> <li><a href="https://blog.adobe.com/en/publish/2020/12/16/maximize-your-productivity-on-the-go-with-adobe-acrobat.html#gs.ugxft5">https://blog.adobe.com/en/publish/2020/12/16/maximize-your-productivity-on-the-go-with-adobe-acrobat.html#gs.ugxft5</a></li> <li><a 
href="https://blog.adobe.com/en/publish/2020/11/02/recycle-your-pdf-content-with-acrobat-online-tools.html#gs.ugxj0a">https://blog.adobe.com/en/publish/2020/11/02/recycle-your-pdf-content-with-acrobat-online-tools.html#gs.ugxj0a</a></li> </ul> </div> <div> <p>Topics: Responsibility, Future of Work, Productivity, Sustainability, Document Cloud</p> <p>Products: Document Cloud, Scan, Acrobat</p> </div> </div>


Discover the future of work management at Adobe Summit

Adobe Web Platform

<div class="embed embed-internal embed-internal-discoverthefutureofworkmanagementatadobesummit embed-internal-03"> <div> <h1 id="discover-the-future-of-work-management-at-adobe-summit">Discover the future of work management at Adobe Summit</h1> <p>The Collaborative Work Management Track at Adobe Summit will introduce attendees to the collective power of Workfront and Adobe enterprise solutions.</p> </div> <div> <img src="/hlx_076f3ba302e7ba437a3fe5be94a0c54264a3a769.png" alt="Work Front an Adobe Company with abstract art in the background. "> </div> <div> <p>By Adobe Communications Team</p> <p>Posted on 03-03-2021</p> </div> <div> <p>Today, every business is a digital business — whether it is B2B or B2C, brick or click, or local or global. Over the last 12 months, accelerated digital transformation has pushed us all to rethink and refine the way we do business, including every aspect of how we power the people and processes behind digital work.</p> <p>That is why our recent acquisition of Workfront is so exciting. With Workfront coming under the Adobe umbrella, our customers will now have an agile enterprise work management solution that seamlessly connects <a href="https://www.adobe.com/creativecloud.html">Adobe Creative Cloud</a>, <a href="https://www.adobe.com/experience-cloud.html">Adobe Experience Cloud</a>, and other applications and platforms. Now, customers will have a single system that supports the entire marketing lifecycle, from planning and collaboration to governance, maximizing productivity and creativity, and helping deliver even better real-time customer experiences. 
And during Adobe Summit, attendees will have a chance to preview it all.</p> <div class="embed embed-internal embed-internal-summitregistration embed-internal-summit"> <div><p><img src="/hlx_87e087ba6e238de864812fc9d69832a821c85f64.jpeg" alt=""></p> <h3 id="expand-your-genius-at-adobe-summit">Expand your genius at Adobe Summit</h3> <p>A free virtual event April 27-29</p> <p>Every great event requires a great maker. Stay ahead of the latest CXM trends, learn industry best practices, and virtually connect with peers.</p> <p><a href="http://apps.enterprise.adobe.com/go/7015Y000002e4q9QAA">Register for free</a></p></div> </div> <p>“Even before the acquisition, customers were leveraging integrations between Workfront and Adobe to work faster and smarter,” said Alex Shootman, VP and GM of Workfront, an Adobe company. “Summit gives us the chance to showcase the work of some of these innovative leaders, as well as the opportunity to preview our vision for a fully integrated marketing system of record that powers the people and work behind great customer experiences.”</p> <h3 id="the-workfront-experience-at-adobe-summit">The Workfront experience at Adobe Summit</h3> <p>Adobe Summit content will also pick up where Leap — Workfront’s annual conference — left off in 2020.</p> <p>“There are a few ways we’re folding our Leap content into Summit,” says Gary Clinger, head of marketing programs for Adobe. “The most significant is our Innovation Super Session featuring Alex Shootman. 
He will highlight Workfront’s unique perspective on work transformation and why the marriage of Workfront technology with Adobe solutions is so powerful for our customers.”</p> <p>Attendees can also expect to hear from current Workfront customers and joint Workfront/Adobe customers, including Monique Evans, Workfront systems analyst for Stanley Black &amp; Decker; Nick Zappas, senior manager of project management at The Walt Disney Company; and presenters from VaynerMedia and Informatica.</p> <p>“We also have over 25 breakout sessions within the Collaborative Work Management track,” Clinger says. “Sessions will range from work visibility, to agile work management, to adoption challenges surrounding new technology. About half of those sessions will include Workfront customers, so attendees can understand diverse use cases that, ideally, they can apply to their own businesses.”</p> <p>“Learning through customer examples and use cases is central to the Adobe Summit experience,” adds Wade Sherman, VP and head of the Adobe + Workfront Integration. These peer-led sessions, he explains, tend to drive deeper understanding and relevance for the broad audience of attendees.</p> <p>“We want to present meaningful examples, best practices, and use cases that are going to resonate the most with our customers,” Sherman says. “We know, in this case, that means showing real-world examples of customers utilizing Workfront as their marketing system of record — as the work management platform that ties together their use of Adobe Creative Cloud and Adobe Experience Cloud. 
Seeing that on the big screen or hearing a customer talk about that is so compelling because it grounds everyone in the message and helps to align it with their own business and work.”</p> <h3 id="building-on-a-solid-partnership-and-shared-customer-base">Building on a solid partnership and shared customer base</h3> <p>For many attendees, much of this will feel familiar — the two companies share more than 1,000 enterprise clients.</p> <p>“Workfront has been a premier Adobe partner since April 2020,” says Sherman. “We’ve been building on that synergy for some time.”</p> <p>Now, he adds, shared and individual customers will be able to do and manage even more, with integrated workflows and optimized work management solutions.</p> <p>“Bringing in a marketing system of record that serves to connect Adobe Creative Cloud Enterprise with Adobe Experience Cloud apps like AEM and Marketo helps manage the cradle-to-grave content ROI,” Sherman says. “Now teams can create beautiful, compelling content assets using our world-class creative products — and then manage, launch, and measure the effectiveness of those assets once they are out in the world. Workfront sits right in the middle of that. It’s the platform that enables it all — and increases content velocity by connecting the creative tools with marketing tools.”</p> <p>For other attendees, Summit will be a first look at Workfront and other Adobe enterprise solutions.</p> <p>“We’re anticipating a diverse audience, in terms of product knowledge and insights,” Clinger says. “Many attendees are already using Workfront and want to learn how to optimize or accelerate their workflows. Others are still learning the difference between project management and workflow management. 
Some are familiar with the combined power of Adobe and Workfront — people who are integrating the APIs already.</p> <p>Sessions will offer universal takeaways, ensuring everyone walks away with a clear-cut sense of what Workfront could do for their organizations. “Our sessions and workshops are designed to create the right type of curiosity for our existing customers and Adobe customers who aren’t Workfront users — yet,” says Clinger. “Our goal is that Summit attendees will want to continue the conversation post-Summit. We want them to experience how much potential exists in this acquisition, and see how our combined technology can really make a difference.”</p> <p>Shootman agrees. “There is a better way to manage marketing work,” he says. “Adobe understands that — and that’s where the Workfront integration comes in. We’re eager for people to come to Summit and learn about Workfront. Adobe is an expert at understanding the needs of the marketer, and during our sessions and training workshops, you’ll see how bringing in Workfront helps us support our customers even more.”</p> <p><em><a href="http://apps.enterprise.adobe.com/go/7015Y000001zEANQA2">Register for Adobe Summit for free</a>, then browse and register for targeted sessions and training workshops in the <a href="https://portal.adobe.com/widget/adobe/as21/catalogsum2021?search.track=1610747845177001lC4i&amp;search=">Collaborative Work Management Track</a>.</em></p> </div> <div> <h2 id="featured-posts">Featured Posts:</h2> <ul> <li> <p><a href="https://blog.adobe.com/en/publish/2020/12/07/adobe-completes-workfront-acquisition.html#gs.tzo42i">https://blog.adobe.com/en/publish/2020/12/07/adobe-completes-workfront-acquisition.html#gs.tzo42i</a></p> </li> <li> <p><a href="https://blog.adobe.com/en/publish/2020/03/24/cc-integrations-work-from-home.html#gs.tzo5he">https://blog.adobe.com/en/publish/2020/03/24/cc-integrations-work-from-home.html#gs.tzo5he</a></p> </li> <li> <p><a 
href="https://blog.adobe.com/en/publish/2021/02/09/photoshop-illustrator-and-fresco-introduce-document-collaboration.html#gs.tzo8tl">https://blog.adobe.com/en/publish/2021/02/09/photoshop-illustrator-and-fresco-introduce-document-collaboration.html#gs.tzo8tl</a></p> </li> </ul> </div> <div> <p>Topics: Future of Work, Events, Adobe Summit, B2B, Experience Cloud</p> <p>Products: Creative Cloud, Experience Cloud, Workfront</p> </div> </div>


March 02, 2021

How YouTuber Amy Tangerine inspires creativity through craft

Adobe Web Platform

<div class="embed embed-internal embed-internal-howyoutuberamytangerineinspirescreativitythroughcraft embed-internal-02"> <div> <h1 id="how-youtuber-amy-tangerine-inspires-creativity-through-craft">How YouTuber Amy Tangerine inspires creativity through craft</h1> <p>Amy Tangerine has cultivated a loyal community of creatives with her YouTube channel, with DIY videos edited with Adobe Premiere Pro.</p> </div> <div> <p><img src="/hlx_46231b66a9aee9cce6dba6d3abd387e812ca4a08.jpeg" alt="Colorful items strewn across turquoise background. "></p> <p><em>Image Source: Amy Tangerine.</em></p> </div> <div> <p>By Meagan Keane</p> <p>Posted on 03-02-2021</p> </div> <div> <p>Amy Tan is crafting a life she loves — and loves to look back on.</p> <p>After turning her passion for scrapbooking into a successful business, the creative entrepreneur and author is inspiring people around the world to take time for reflection through art. “I’m a very sentimental person, so I love memory-keeping and looking back on the most random things,” she says. “Crafting isn’t just a hobby. It’s a way of being that inspires people to live their best lives.”</p> <p>Tan has grown her business under the moniker Amy Tangerine — so as not to be confused with the similarly named author. In addition to taking on art and fashion design work for private clients, Tan has cultivated a loyal community of creatives with her YouTube channel, branded online workshops, podcast, and products. They flock to her not only for inspiration, but for permission.</p> <p>“People often put the brakes on their own creativity because they feel they should have a purpose, but that’s simply not the case,” says Tan. “Allow yourself permission to just play and see where it takes you, because it could trigger something else that makes you have a breakthrough. 
And if it doesn’t, it’s okay because you still spent time doing something that you enjoyed.”</p> <h2 id="helping-people-make-things--and-make-things-happen">Helping people make things — and make things happen</h2> <p>With more than 57,000 YouTube subscribers, Tan has made video yet another creative outlet for sharing tips on everything from stickers and watercolors to traveler’s notebooks and craft room makeovers — with some family trip vlogs thrown in for good measure.</p> <p>Tan was inspired to create her YouTube channel in 2015. With minimal scrapbooking content available on the platform, she dove into the experience headfirst.</p> <p>“Being a YouTuber means you really just have to take the time to figure it out,” she says. “People who might have budget to hire a team can spend more time honing their craft, but then they don’t understand the little things that you might do to make it easier on yourself if you had to do this all on your own. I think that’s an important experience to have.”</p> <p>That’s why these days, Tan is the one who plans which shots she needs and shoots them herself using her Canon G7X Mark II or her iPhone. And when she’s done, she’ll take <a href="https://www.adobe.com/creativecloud/video/discover/b-roll.html">B-roll shots</a> of certain tools or materials in case she needs to splice them in during the editing process. Tan usually leaves the audio voiceover to the end so that she has the creative freedom she needs to craft in real-time.</p> <p>“While you’re doing something, sometimes you don’t really think about why you’re doing it. You just do it,” says Tan. “Afterwards I realize that it might be helpful for me to explain the reasoning behind certain things that I’ve done, and that’s when I’ll do the voiceover. 
People like to both see the process and hear my thoughts.”</p> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/tDaIBw2u3es?rel=0&amp;v=tDaIBw2u3es&amp;feature=youtu.be&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <h3 id="sharing-the-editing-responsibilities">Sharing the editing responsibilities</h3> <p>Although Tan is a big do-it-yourselfer, she does split the editing work with an editor, Amanda Benson, who uses <a href="https://www.adobe.com/products/premiere.html">Adobe Premiere Pro</a> to pull everything together. It’s a trade-off Tan can live with — Benson can get the job done in half the time, which frees Tan to focus more attention on content creation.</p> <p>“I’m getting better and faster at editing, but my editor is so familiar with my style that it just makes sense for us to split the work,” says Tan.</p> <div class="embed embed-internal embed-internal-premierepro embed-internal-creativecloud"> <div><p><img src="/hlx_6437eac3f9725128f1febd131fffb983cdd30b0b.png" alt="Inserting image..."></p> <h3 id="premiere-pro">Premiere Pro</h3> <p>Video editing that’s always a cut above.</p> <p><a href="https://www.adobe.com/products/premiere.html">Learn more</a></p></div> </div> <p>Benson describes Tan’s style as “run-n-gun”, so she tries not to make the videos too cinematic because that wouldn’t feel genuine to Tan’s style. Instead, she lets the experiences Tan shoots speak for themselves. In Premiere Pro, Benson keeps clips in chronological order, so the adventure translates through the vlog. 
She also uses the Lumetri Color panel in Premiere Pro to make the videos bright and fun, just like Tan’s brand.</p> <p>“I use the Lumetri Color panel and the color scopes in Premiere Pro as my first step in color correcting and grading,” says Benson. “I love how you can do very basic corrections, but if you want to go above and beyond, you can create really cool and unique color looks. Lumetri is a great place to start.”</p> <p>The reason their collaboration works so well is twofold. First, Benson’s familiarity with Tan’s content and style means she just gets it, and she isn’t afraid to experiment. But none of it looks like an experiment — it’s all very seamless. And second, Tan shares what inspires her, so that Benson can draw direction and inspiration directly from the source. But the end goal is the same.</p> <p>“I want to consistently create content that resonates with people,” says Tan. “Everyone should realize that they can make things — and make things happen.”</p> <p><em>Watch the <a href="https://www.youtube.com/watch?v=peheahLIY7Y&amp;list=PLHRegP5ZOj7CIVBmsN__kyOqdYewqRhwe&amp;index=29">Tips &amp; Tricks Tuesday session on IRL editing</a> and get access to host Valentina Vee’s <a href="https://premiere.adobelanding.com/yts-livestream/">tutorial guide</a>.</em></p> </div> <div> <h2 id="featured-posts">Featured Posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2020/12/17/how-amber-torrealba-making-splash-in-and-out-of-the-water.html">https://blog.adobe.com/en/publish/2020/12/17/how-amber-torrealba-making-splash-in-and-out-of-the-water.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2020/11/18/how-kevin-parry-creates-stop-motion-animation-that-will-stop-you-in-your-tracks.html">https://blog.adobe.com/en/publish/2020/11/18/how-kevin-parry-creates-stop-motion-animation-that-will-stop-you-in-your-tracks.html</a></li> <li><a 
href="https://blog.adobe.com/en/publish/2021/01/05/five-editing-tips-adam-epstein-learned-from-his-time-at-snl.html">https://blog.adobe.com/en/publish/2021/01/05/five-editing-tips-adam-epstein-learned-from-his-time-at-snl.html</a></li> </ul> </div> <div> <p>Topics: Customer Stories, Creativity, Media &amp; Entertainment, Creative Cloud</p> <p>Products: Premiere Pro, Creative Cloud</p> </div> </div>


Austin Community College prepares students for success in a digital future

Adobe Web Platform

<div class="embed embed-internal embed-internal-austincommunitycollegepreparesstudentsforsuccessinadigitalfuture embed-internal-02"> <div> <h1 id="austin-community-college-prepares-students-for-success-in-a-digital-future">Austin Community College prepares students for success in a digital future</h1> <p>Austin Community College, an Adobe Creative Campus, is dedicated to preparing students for success in a digital world.</p> </div> <div> <p><img src="/hlx_e3464be10ca920a264893b51d7eb2f93e6b14c99.jpeg" alt="The Paseo at ACC Highland campus"></p> <p><em>The Paseo at ACC Highland Campus, Building 2000 — Friday, January 22, 2021.</em></p> </div> <div> <p>By Karen McCavitt</p> <p>Posted on 03-02-2021</p> </div> <div> <p>With 11 campuses and more than 70,000 students annually, Austin Community College District (ACC) is one of the largest community college systems in the nation. It’s also the top school in Texas for transferring into public universities, providing an affordable start for students pursuing a four-year degree. As part of the dynamic ecosystem of technology and art that thrives in the Austin area, the institution is dedicated to preparing students for success in a digital world. And ACC is an Adobe Creative Campus — the country’s first community college to have that distinction.</p> <p>“Many companies look for digitally fluent employees well-versed in <a href="https://www.adobe.com/education/digital-literacy.html">Adobe Creative Cloud</a>,” says Thomas Nevill, Dean of Arts and Digital Media at Austin Community College. “As an Adobe Creative Campus, ACC gives students an advantage and helps them build digital portfolios in an increasingly online world.”</p> <p>Digital fluency is an important part of ACC’s 2025 Academic Master Plan, with Adobe Creative Cloud at the center of its Digital Literacy Equity and Inclusion for All (DLEI) initiative. 
As co-chair of the planning process, Nevill recognizes the need to integrate digital skills into every discipline — not only in Arts and Digital Media but also in Health Sciences, Education, Liberal Arts, and beyond. Today, Adobe Creative Cloud is available at no cost to employees and at a deep discount to students, and his goal is to make Adobe Creative Cloud free for both students and employees in the future. With Adobe Creative Cloud tools such as <a href="https://blog.adobe.com/en/topics/xd.html">Adobe XD</a>, ACC is bringing digital fluency to the community college curriculum, introducing digital skills to people who might not otherwise develop them. He wants people to explore the possibilities, and he’s not afraid to lead by example.</p> <p>“To create the final proposals for the Academic Master Plan, I decided to try something new and taught myself how to use <a href="https://blog.adobe.com/en/topics/premiere-rush.html">Adobe Premiere Rush</a>, complete with still shots, video, and voice-over,” Nevill says. 
“It was important for me to take that step and show people that if I can do it, anyone can.”</p> <p>Indeed, Adobe Creative Cloud tools such as Adobe XD and <a href="https://blog.adobe.com/en/topics/spark.html">Adobe Spark</a> are catching on across departments, from User Experience Design to Speech Communication, and ACC makes sure students and faculty have the training they need.</p> <div class="embed embed-internal embed-internal-digitalliteracycc embed-internal-bigrockassets"> <div><p><img src="/hlx_f07df7196230fa129c7946bb099aafe35a8e2e2e.png" alt=""></p> <h3 id="adobe--digital-literacy">Adobe &amp; Digital Literacy</h3> <p>Become an Adobe Creative Campus to build digital literacy at your school.</p> <p><a href="https://www.adobe.com/education/digital-literacy.html">Learn more</a>.</p></div> </div> <h3 id="how-acc-is-planting-the-seeds-of-digital-fluency">How ACC is planting the seeds of digital fluency</h3> <p>As Associate Vice President of Academic Programs at ACC, Dr. Gaye Lynn Scott is eager to drive adoption of Adobe Creative Cloud. She oversees programs that reach up to 75 percent of students, including transfer programs in liberal arts, science, engineering, and math — and for her, digital fluency is critical across the board.</p> <p>“Students should all have the same opportunities to learn and succeed,” says Dr. Scott. “Becoming an Adobe Creative Campus exposes students to digital tools that will serve them in the world after graduation.”</p> <p>Dr. Scott is a firm believer in the power of “informal but intentional efforts” to drive adoption. That includes recruiting early adopters and seeding cross-talk among faculty, as well as practicing what she preaches. “I intend to be vocal whenever I can, encouraging faculty to replace their slide decks with Adobe Spark,” she says. “Spark is easy yet sophisticated, and it can be a great way to engage students.”</p> <p>Meanwhile, ACC invests in formal training to support both faculty and students. 
They rely heavily on Adobe tutorials and LinkedIn Learning tutorials, but the community college is finding creative ways to establish resident experts and get ideas flowing.</p> <p>“We’re developing a one-year faculty fellowship program around Adobe Creative Cloud, where fellows receive advanced training and get a stipend to redesign a course,” says Matthew Evins, Director of Academic Technology in the Teaching and Learning Excellence Division. “It’s a way to support faculty members who are serious about integrating digital tools into their courses — and a great way to show what’s possible.”</p> <p>Evins and his team not only train faculty to use technology in the classroom, they also set up audio/visual equipment and support websites. They even produce videos to supplement course materials, using <a href="https://blog.adobe.com/en/topics/premiere-pro.html">Adobe Premiere Pro</a> and <a href="https://blog.adobe.com/en/topics/after-effects.html">Adobe After Effects</a> to create polished, professional work. During the pandemic, the video production team was busy producing dozens of videos as faculty members looked for ways to replace in-person, hands-on instruction. And the team is increasingly focused on students — providing support for Adobe tools as well as wi-fi, laptops, tablets, and software to make sure students can succeed in their online courses.</p> <p><img src="/hlx_972658915537daa8b1bf8d7dc33ce4d86e596add.png" alt="Photo of a studio with a green-screen and chairs"></p> <p><em>The larger of the two multicam Radio, Television &amp; Film studios at ACC Highland Campus, Building 2000 — Wednesday, November 11, 2020.</em></p> <h3 id="students-bring-designs-to-life-with-adobe-xd">Students bring designs to life with Adobe XD</h3> <p>It’s no surprise that the User Experience (UX) Design program at ACC is already immersed in digital tools and technologies. 
“Many of our students are professional designers looking to transition into a UX career, so they already have extensive experience with Adobe Creative Cloud,” says Molly McClurg, Associate Professor of User Experience Design at ACC. “Instead of spending time on learning new tools, students get to focus on creativity and putting new ideas into the world.”</p> <p>Most of McClurg’s students are well-versed in <a href="https://blog.adobe.com/en/topics/photoshop.html">Adobe Photoshop</a>, <a href="https://blog.adobe.com/en/topics/illustrator.html">Adobe Illustrator</a>, and <a href="https://blog.adobe.com/en/topics/indesign.html">Adobe InDesign</a>. But she has recently introduced a new tool into the mix — Adobe XD. It’s a fun, refreshing way to engage students in her Survey of User Experience Design course, where she teaches them how to research, ideate, prototype, and test solutions. Near the end of the semester, students conduct the entire design process, working in small teams to create a prototype they can show off.</p> <p>“Many students decided to address the affordable housing challenge in Austin,” says McClurg. “Because they’re just starting to explore UX design, we don’t put any constraints on them — so they really go big and experiment with fun ideas.”</p> <p>One of those ideas came from student Carson Stanch, who explored the idea of creating a tiny home for people with chronic medical conditions or mobility limitations, along with a smart home interface for easier control of door locks, lights, heaters, and appliances. Her team used Adobe XD to design <a href="http://carsonstanch.com/tiny-smart-home">wireframes and a working prototype</a>.</p> <p>Another student, Amy Frazier, decided to tackle the need for safe restaurant takeout orders during the pandemic. 
Noticing that many local businesses lacked an online ordering process, she used Adobe XD to <a href="https://www.amyfrazierdesign.com/portfolio/flories">mock up wireframes</a> that would make it easy for restaurants to take orders on mobile and desktop.</p> <p>Seeing their ideas solidified in wireframes and prototypes helps McClurg provide useful feedback. “Using Adobe XD, I can assess the details of the individual screens as well as the vision of how everything will work together,” she says. “Plus, students see their ideas come to life, and their faces light up. It’s exciting to see them discover that UX design could be their future.”</p> <h3 id="ready-for-the-job-market-with-communication-and-digital-skills">Ready for the job market with communication and digital skills</h3> <p>Digital fluency has also become essential in the liberal arts department, thanks to the dedication of faculty members like Theresa Glenn, Chair of Communication Studies. Glenn is on a mission to equip students with strong digital skills, an online portfolio, and a modern, tech-savvy resume as they enter the workforce.</p> <p>Glenn has found that Adobe Spark is an easy entry point to digital literacy. In her Introduction to Human Communication course, students have been using Adobe Spark to demonstrate their mastery of concepts they learn in her class. That includes conflict management skills such as active listening, perception checking, and assertive messaging — and students do a great job of modeling those techniques on camera.</p> <p>At the end of the semester, students pull everything together into an Adobe Spark website that serves two purposes, as Glenn points out. “Not only does this project serve as a final exam, students can link to it on their resumes to show employers their competencies in both communication and digital literacy,” she says.</p> <p>Students have taken the assignments and run with them, producing webpages that would impress potential employers. 
Check out <a href="https://spark.adobe.com/page/jC9ZQncf7YeBG/">a Spark page created by student Amurri Davis</a> and <a href="https://spark.adobe.com/page/Q1rpntIXpSmhY/">another from fellow student Tyler Langlais</a>.</p> <h3 id="the-college-experience-goes-paperless">The college experience goes paperless</h3> <p>ACC takes its commitment to digital literacy seriously, and that means using new tools and technologies beyond the classroom. A few years ago, the community college adopted <a href="https://blog.adobe.com/en/topics/sign.html">Adobe Sign</a> and <a href="https://blog.adobe.com/en/topics/acrobat-dc.html">Adobe Acrobat DC</a> to go paperless and use e-signatures to make administrative processes more efficient. Today, ACC processes a few thousand electronic documents every month through Adobe Sign — and that number only continues to grow.</p> <p>“Our primary value as a community college is providing a good education for students, not processing documents — so the more streamlined we can get, the better,” says John Wilsonmay, Product Owner at ACC. “By increasing efficiency, Adobe Sign helps us keep tuition affordable while making the college experience easier and more enjoyable for students.”</p> <p>The community college uses Adobe Sign for many processes, including faculty onboarding forms, faculty performance evaluations, study abroad applications, and articulation agreements with other universities, which define transfer policies. During the pandemic, faculty members started using Adobe Sign to have students review and sign class rules agreements and syllabi.</p> <p>By moving to digital workflows and e-signatures, Wilsonmay estimates that ACC has reduced document processing time by as much as two weeks. With Adobe Acrobat DC, administrators create clearer, more effective forms — for example, using required fields to avoid missing information and pop-up boxes to minimize confusion. 
Instead of mailing or walking paper documents across campus, people rely on automated approval routing and an audit trail of signatures in Adobe Sign to keep things moving no matter how many people have to sign a document.</p> <p>ACC uses Box as its document storage solution, and integration with Adobe Sign makes it easy to send documents for signature while keeping them in a central repository, with the ability to set deadlines and track workflows. That’s good for everyone in the ACC community, who can focus on what really matters — student success.</p> <h3 id="helping-students-navigate-their-own-path-to-a-digital-future">Helping students navigate their own path to a digital future</h3> <p>Students at ACC are diverse and non-traditional — they might be finishing their high school diploma, earning a degree in nursing or computer science, or simply enriching their lives with a ceramics class. But they all have one thing in common: they are forging their own paths to success. With its commitment to digital fluency as an Adobe Creative Campus, ACC is ready to help them on their journeys.</p> <p>“No matter where students come from, we meet them where they are,” says Nevill. “That’s one of the most rewarding aspects of working at Austin Community College. 
We have so many unique and exciting ways to help them reach their goals.”</p> </div> <div> <h2 id="featured-posts">Featured posts</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/01/22/duke-university-undergrads-spend-an-epic-summer-with-code-and-adobe.html#gs.sj5w5x">https://blog.adobe.com/en/publish/2021/01/22/duke-university-undergrads-spend-an-epic-summer-with-code-and-adobe.html#gs.sj5w5x</a></li> <li><a href="https://blog.adobe.com/en/2020/11/12/university-of-new-mexico-builds-a-bridge-to-the-future.html#gs.sj5woy">https://blog.adobe.com/en/2020/11/12/university-of-new-mexico-builds-a-bridge-to-the-future.html#gs.sj5woy</a></li> <li><a href="https://blog.adobe.com/en/2020/09/17/swinburne-university-learning-adapting-changing-world.html#gs.sj5ws7">https://blog.adobe.com/en/2020/09/17/swinburne-university-learning-adapting-changing-world.html#gs.sj5ws7</a></li> </ul> </div> <div> <p>Topics: Insights &amp; Inspiration, Customer Stories, Education, Digital Literacy, Creative Cloud, Document Cloud</p> <p>Products: Creative Cloud, XD, Premiere Pro, After Effects, Acrobat, Sign</p> </div> </div>

March 02, 2021 12:00 AM

2020 digital economy index | a closer look at consumer electronics

Adobe Web Platform

<div class="embed embed-internal embed-internal-2020digitaleconomyindexacloserlookatconsumerelectronics embed-internal-02"> <div> <h1 id="2020-digital-economy-index--a-closer-look-at-consumer-electronics">2020 digital economy index | a closer look at consumer electronics</h1> <p>After analyzing the digital economic landscape, we’ve found that electronics is one of three categories that have a heavy influence on online economy trends.</p> </div> <div> <img src="/hlx_ce2d598a82b2636d9ca1908a3a20d52f78b936f2.jpeg" alt="Black and white photo of electronics"> </div> <div> <p>By Jill Steinhour</p> <p>Posted on 03-02-2021</p> </div> <div> <p>After analyzing the <a href="https://www.adobe.com/experience-cloud/digital-insights/digital-economy-index.html">digital economic landscape</a>, we have found that electronics is one of three categories (along with grocery and apparel) that have a heavy influence on online economy trends. We are excited to share powerful data insights to help you understand and act on digital trends in consumer electronics. Let us take a closer look at this segment — the online shopping trends, price changes, and adoption of consumer technologies.</p> <h3 id="online-shopping-trends">Online shopping trends</h3> <p>There has been steady growth in the consumer electronics segment since the beginning of the COVID-19 pandemic due to consumers’ shift to online shopping, but growth was not as strong as in the rest of B2C ecommerce. Things changed around Black Friday and Cyber Monday — holiday shopping events drove an increased demand for electronics. In the first part of the holiday season, CE retailers outpaced overall ecommerce growth — check out the graph below for the details.</p> <p>While one in five consumers upgraded their home office in 2020, and one in three upgraded their distance learning environment, privacy remains a significant concern. 
Baby Boomers tend to be the most concerned, and Gen Z the least concerned.</p> <p><img src="/hlx_45413a6db3499288360205e56cb592c1c3ef0221.png" alt=""></p> <h3 id="price-change-for-electronics">Price change for electronics</h3> <p>Electronics prices are now beginning to plateau, after years of innovation-fueled price reduction. The absorption of offline purchasing into the online electronics category and COVID-associated demand have been key factors. There has been a 44 percent drop in online prices for electronics between the years 2014 and 2020.</p> <p>Price (74 percent) is one of the top three buying criteria for electronic devices, preceded by quality (82 percent) and ease of use (77 percent). Learn about the consumer decision-making process by examining the rank of each buying criterion in the <a href="https://www.adobe.com/offer/consumer-electronics-report.html">full report</a>.</p> <h3 id="penetration-of-consumer-technologies">Penetration of consumer technologies</h3> <p>Consumers picked up more electronic devices in 2020. When we asked consumers what devices they own, almost every trending product saw an increase. To illustrate, 31 percent of participants owned a smart speaker in 2020 (26 percent in 2019), 66 percent owned a laptop (62 percent in 2019), and 16 percent owned a car with voice assistant (12 percent in 2019).</p> <p>Virtual Reality (VR) technology has yet to reach the masses, with only one in four consumers having tried VR. One in three consumers has purchased a voice-controlled device in the past year, with Millennials and Gen Z leading the charge.</p> <p>These insights come from analyzing more than 1 trillion visits to U.S.-based retail websites, as well as sales of more than 100 million unique products and a survey of 1,000 U.S. consumers. 
For more insights into consumer electronics and how to act on the latest trends, read the full <a href="https://www.adobe.com/offer/consumer-electronics-report.html">2020 Digital Economy Index</a> report.</p> <div class="embed embed-internal embed-internal-consumerelectronicsreport embed-internal-promotions"> <div><p><img src="/hlx_70a17ac6bcdf2c8ef40555b187eeeb0d0a309258.png" alt=""></p> <h3 id="consumer-electronics-report">Consumer Electronics Report</h3> <p>Breaking down shopper data to build insight-driven strategies.</p> <p><a href="https://www.adobe.com/offer/consumer-electronics-report.html">Read the report</a></p></div> </div> </div> <div> <h2 id="featured-posts">Featured posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/2021/01/26/how-adobe-experience-cloud-helped-powersports-company-brp-take-adventure-to-the-next-level.html">https://blog.adobe.com/en/2021/01/26/how-adobe-experience-cloud-helped-powersports-company-brp-take-adventure-to-the-next-level.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/02/23/the-digital-economy-trends-challenges-and-opportunities-for-the-year-ahead.html">https://blog.adobe.com/en/publish/2021/02/23/the-digital-economy-trends-challenges-and-opportunities-for-the-year-ahead.html</a></li> <li><a href="https://blog.adobe.com/en/2021/01/21/adobe-experience-manager-content-commerce-innovations-delivering-exceptional-shoppable-experiences.html">https://blog.adobe.com/en/2021/01/21/adobe-experience-manager-content-commerce-innovations-delivering-exceptional-shoppable-experiences.html</a></li> </ul> </div> <div> <p>Topics: Trends &amp; Research, Insights &amp; Inspiration, Manufacturing, Retail, Experience Cloud,</p> <p>Products: Experience Platform,</p> </div> </div>

March 02, 2021 12:00 AM

Creativity for All: Adobe and Sundance nurture next-generation filmmakers

Adobe Web Platform

<div class="embed embed-internal embed-internal-creativityforalladobeandsundancenurturenextgenerationfilmmakers embed-internal-02"> <div> <h1 id="creativity-for-all-adobe-and-sundance-nurture-next-generation-filmmakers">Creativity for All: Adobe and Sundance nurture next-generation filmmakers</h1> </div> <div> <img src="/hlx_9503d2a4d8a0ec487a14923b40b17991966828fd.png" alt="Sundance Ignite short film challenge: young filmmakers at work"> </div> <div> <p>By Adobe Corporate Communications</p> <p>Posted on 03-02-2021</p> </div> <div> <p>Everyone has a story to tell, but for aspiring filmmakers who want to share their stories with the world, having mentors, a platform to showcase their work, and the resources to invest in creative exploration can make all the difference.</p> <p>To give everyone an opportunity to tell their story, we’re once again collaborating with the Sundance Institute on the <a href="https://collab.sundance.org/ignitelanding2021">Sundance Ignite x Adobe Short Film Challenge</a>. Both Sundance and Adobe, a founding partner in the program, share a mission to inspire creativity and bring diverse stories to life. Through the challenge, we’ll identify 10 new Sundance Ignite x Adobe Fellows, who will have yearlong access to powerful mentorships and professional development opportunities.</p> <p>The challenge is open now through April 6. 
To enter the challenge, documentary and narrative filmmakers between the ages of 18 and 25 must submit a 1- to 15-minute short film that demonstrates their creative vision and unique voice.</p> <h3 id="designed-to-ignite-your-film-career">Designed to ignite your film career</h3> <p>Since 2015, Adobe has partnered with Sundance Ignite to create a path for 80 young filmmakers to become better at their craft, giving them the opportunity to be mentored by experienced filmmakers, receive feedback and direction on their work, collaborate with peers, and gain more exposure to the industry.</p> <p>The 10 challenge winners selected as fellows will begin their fellowship with a weeklong virtual lab and orientation. Throughout the year, they’ll work with a Sundance Institute alumni mentor, be eligible for internships and program opportunities, receive an artist grant to produce future work, and receive a complimentary one-year Adobe Creative Cloud membership.</p> <p>Along with these opportunities, one of the program highlights is that fellows have the opportunity to join the 2022 <a href="https://www.sundance.org/festivals/sundance-film-festival/">Sundance Film Festival</a>, one of the premier industry events for independent filmmakers.</p> <h3 id="dont-just-take-our-word-for-it">Don’t just take our word for it…</h3> <p>The Sundance Ignite program has laid the groundwork for emerging creators to grow their careers. 10 fellows have premiered their films at Sundance, 9 fellows have interned at labs in the film industry, and 5 fellows have participated in other fellowships, intensives, or summits.</p> <p>“For a lot of young people, Sundance has been an alternative film festival to celebrate independent film and independent filmmakers. So, to me as a young person, the Ignite program and Sundance represent everything that I want in the film industry, which is inclusivity. It represents diverse voices. 
It represents democratizing creativity, filmmaking, and just storytelling in general,” says Carol Nguyen, a 2018 Sundance Ignite x Adobe Fellow who debuted her latest film, “No Crying at the Dinner Table,” at the Toronto International Film Festival in 2019.</p> <p>Lance Oppenheim, a 2019 fellow, produced a feature documentary, “Some Kind of Heaven,” that was named an official selection for the 2020 Sundance Film Festival. He says one of the best parts of the Ignite program is the mentorship element. Lance’s mentor was Jeff Orlowski, a filmmaker he’s long admired. Jeff and Lance would talk several times a month and he eventually became a producer on Lance’s film.</p> <p>“With Jeff, we really bonded. Being in a very specific situation where I was dealing with so many different kinds of creative and business worlds while we’re trying to get this feature off the ground, our relationship really deepened,” Lance says.</p> <p><a href="https://theblog.adobe.com/whats-really-like-sundance-ignite-fellow/">2017 fellow Charlotte Regan</a>, whose film “Fry Up” premiered at the 2018 Sundance Film Festival, adds that the support from peers is invaluable.</p> <p>“<a href="https://project1324.com/challenge/sundance2018">Sundance Ignite</a> really showed me how much your work can improve with collaboration. I’ve never had a group of peers to pass work back and forth between each other, give each other notes and help out on each other’s shoots,” she says. “I come from a background of self-shooting music videos, and they’re a pretty lonely place creatively — it’s you, your camera, and, at most, a camera assistant. 
So having this group of friends who actually gives you the time you need to chat about and improve your work is incredible.”</p> <p>Lance says that aspiring filmmakers who are interested in applying to the challenge should “submit stuff that you believe in and that you are very passionate about.”</p> <p>“You don’t need to have made 10 short films to be an Ignite Fellow,” he says. “Some filmmakers who were in my class, it was their first film, so don’t be intimidated by the idea of being an experienced filmmaker. Just make the things you believe in and put your best foot forward.”</p> <p><em>Learn more about</em> <em>the</em> <em><a href="https://www.sundance.org/initiatives/ignite">Sundance Ignite program</a> and get ready to</em> <em><a href="https://collab.sundance.org/catalog/ignite">submit your short film</a>.</em></p> </div> <div> <h2 id="featured-posts">Featured posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/01/27/celebrating-creativity-at-2021-sundance-film-festival.html">https://blog.adobe.com/en/publish/2021/01/27/celebrating-creativity-at-2021-sundance-film-festival.html</a></li> <li><a href="https://blog.adobe.com/en/2020/03/11/celebrating-and-empowering-women-today-and-everyday.html">https://blog.adobe.com/en/2020/03/11/celebrating-and-empowering-women-today-and-everyday.html</a></li> <li><a href="https://blog.adobe.com/en/2020/08/10/elevating-the-voices-of-women-in-film.html#gs.uzh9ts">https://blog.adobe.com/en/2020/08/10/elevating-the-voices-of-women-in-film.html#gs.uzh9ts</a></li> </ul> </div> <div> <p>Topics: Creativity, Video &amp; Audio, Insights &amp; Inspiration, Media &amp; Entertainment, Creative Cloud, Brand</p> <p>Products: Creative Cloud,</p> </div> </div>

March 02, 2021 12:00 AM

Capgemini’s RDV design system for financial UX with Adobe XD

Adobe Web Platform

<div class="embed embed-internal embed-internal-capgeminixdrdvdesignsystemfinancialux embed-internal-02"> <div> <h1 id="capgeminis-rdv-design-system-for-financial-ux-with-adobe-xd">Capgemini’s RDV design system for financial UX with Adobe XD</h1> <p>Learn how Capgemini combined Rapid Design &amp; Visualization with Adobe XD to create a customized design system for their fintech clients.</p> </div> <div> <img src="/hlx_850743689b665aaa682ba8e5d8590d4856f44867.jpeg" alt="Title slide: A comprehensive design system for Fintech using Adobe XD"> </div> <div> <p>By Kapil Joshi</p> <p>Posted on 03-02-2021</p> </div> <div> <p>In today’s fast-paced world, people are accessing (and indeed relying on) their digital devices more than ever before. At Capgemini, we keep this top of mind in all of our work — our constant endeavor is to provide seamless user experience and design solutions, always considering market trends, industry standards, and the scope of technology.</p> <p>When it comes to the work of designers, there is one fundamental challenge that the whole community faces these days — tighter deadlines with faster turnaround times. Add to that the constant challenge of preferences and personalized workflows — every business or organization we work with has their choice of tools.</p> <p>At Capgemini, we have chosen <a href="https://www.adobe.com/products/xd.html">Adobe XD</a> as our solution for meeting the growing demands facing designers, due to its diverse features and the way it caters to all our requirements. In fact, we’ve been using XD ever since its launch in 2016 (enjoying the advanced <a href="https://www.adobe.com/ca/products/xd/features/whats-new.html">features the team has been regularly adding</a> with every release). 
XD is a tool that caters to every part of our work — designing, prototyping, exporting visual assets, and UI specs — and this has given us a serious edge over the competition.</p> <div class="embed embed-internal embed-internal-xd embed-internal-creativecloud"> <div><p><img src="/hlx_14c20520abab7f5a26508a03e5607d6f48395803.png" alt=""></p> <h3 id="adobe-xd">Adobe XD</h3> <p>Create and share designs for websites, mobile apps, voice interfaces, games, and more.</p> <p><a href="https://www.adobe.com/products/xd.html">Learn more</a></p></div> </div> <p>Nowhere has this been clearer than in a recent project we completed in the notoriously tough-to-design-for financial services sector.</p> <h3 id="rapid-design-and-visualization-a-new-process-to-meet-fintech-client-requirements-faster">Rapid Design and Visualization: A new process to meet fintech client requirements faster</h3> <p>We were recently asked to help a global fintech leader increase its revenues. To do this, we created a <a href="https://xd.adobe.com/ideas/principles/design-systems/">design system</a> in Adobe XD, and this, coupled with our Rapid Design and Visualization (RDV) methodology, allowed us to find success in a big way.</p> <p>Rapid Design and Visualization is a scientific requirement gathering and experience design methodology, which is based on <a href="https://xd.adobe.com/ideas/principles/human-computer-interaction/user-centered-design/">user-centered design principles</a>. 
It helps us capture and define user requirements in an agile fashion through collaborative design thinking workshops conducted with business stakeholders and our domain experts and technical architects.</p> <p><img src="/hlx_e4484f49db99f6160bfcf8421e2725794ab1b711.jpeg" alt="RDV methodology infographic"></p> <p><em>The RDV methodology enhances requirement gathering and experience design.</em></p> <p>We also create visual simulations without writing a single line of code, which helps save a lot of development effort as the usability can be assessed beforehand. In this financial services project, and other design projects we’ve worked on, RDV has enabled us to deliver multiple benefits for our clients:</p> <ul> <li><strong>Save time and cost:</strong> We help business users and technical teams capture requirements by creating visual simulations.</li> <li><strong>Re-define experiences:</strong> Our RDV experts help customers boost the user experience by improving journeys and usability.</li> <li><strong>Understand users:</strong> We conduct usability and A/B testing with RDV simulations to validate the requirements and journeys without a single line of code.</li> <li><strong>Facilitate consistency and iterate faster:</strong> We use a large and structured repository built to banking and insurance-industry standards.</li> </ul> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/bS6QpHT7PyI?rel=0&amp;v=bS6QpHT7PyI&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <p>In terms of business goals, we have an advantage as we can provide robust solutions and save time and effort that go 
into coding during the delivery of the project. Here’s how we did it for this specific financial services client.</p> <h3 id="a-robust-design-system-for-financial-services-clients">A robust design system for financial services clients</h3> <p>For this fintech giant, we built a one-stop white-label solution for their banking and wealth management clients. The design system, which our team created in four months working in Adobe XD, includes a huge collection of reusable interaction patterns and UI elements. It’s robust, cohesive, and customizable, which saves a lot of development effort, and the solution is far more realistic and effective.</p> <p>The main goal was to ensure a quick turnaround time for creating low-fidelity prototypes. We wanted to build a foundational ecosystem that could help our designers meet their stringent deadlines and reduce the number of iterations, boosting their productivity. We also wanted the design system to be more in line with our changing customer needs and technical feasibility. Finally, we wanted to ensure our design system was not limited to our own domain, taking a holistic perspective rather than an isolated view.</p> <table> <thead> <tr> <th>Block Embed</th> </tr> </thead> <tbody> <tr> <td><a href="https://video.tv.adobe.com/v/331913">https://video.tv.adobe.com/v/331913</a></td> </tr> </tbody> </table> <p>Designed for end users, the design system improves the user experience by reducing cognitive and motor load, which leads to faster decision making. We rapidly created visual simulations of multiple investor portfolios, including customized branding of all their banking and wealth management clients, for hundreds of screens in record time. These high-fidelity interactive simulations, or prototypes, not only gave a precise understanding of how actions would be performed on the digital platforms but also of what the end product was going to look like. 
It ensured our customers could visualize the product or solution faster during the requirement gathering phase.</p> <p>An enhanced digital exercise of this kind helps our clients take their users through a visual experience of the end results and helps build confidence and a true partnership to achieve their common goal of delighting their customers.</p> <p>This is just one of many success stories in which the RDV methodology is helping to boost revenues across financial services organizations. It’s crucial to understand the client’s requirements to make this happen. Every customer wants a wow factor — and for us, working in Adobe XD has allowed us to create effective solutions that save time and promote faster output while meeting the requirements for our financial services clientele.</p> </div> <div> <h2 id="featured-posts">Featured Posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2017/07/12/building-a-new-service-in-six-weeks-with-adobe-xd.html">https://blog.adobe.com/en/publish/2017/07/12/building-a-new-service-in-six-weeks-with-adobe-xd.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/02/09/zoom-plugin-xd-virtual-meeting-integrations.html">https://blog.adobe.com/en/publish/2021/02/09/zoom-plugin-xd-virtual-meeting-integrations.html</a></li> <li><a href="https://blog.adobe.com/en/2021/01/14/user-testing-feedback-plugins.html">https://blog.adobe.com/en/2021/01/14/user-testing-feedback-plugins.html</a></li> </ul> </div> <div> <p>Topics: Insights &amp; Inspiration, Customer Stories, Financial Services, Creative Cloud,</p> <p>Products: XD,</p> </div> </div>

March 02, 2021 12:00 AM

Introducing the 2020 Sundance Ignite x Adobe Fellows

Adobe Web Platform

<div class="embed embed-internal embed-internal-introducingthe2021sundanceignitexadobefellows embed-internal-02"> <div> <h1 id="introducing-the-2020-sundance-ignite-x-adobe-fellows">Introducing the 2020 Sundance Ignite x Adobe Fellows</h1> </div> <div> <img src="/hlx_62bbe79d98cee36bf976e0eb99b46e1932dd27a8.jpeg" alt="Collage of 10 different filmmakers selected for Sundance Ignite x Adobe Fellowship."> </div> <div> <p>By Adobe Communications Team</p> <p>Posted on 03-02-2021</p> </div> <div> <p>Creativity plays a crucial role in how we tell our stories, and it’s never been more important to elevate new voices, particularly in a medium like film. For the past six years, we’ve partnered with Sundance Institute with the common vision of inspiring creativity and bringing stories from the next generation of filmmakers to life. We are proud to be a founding partner of the Sundance Ignite x Adobe Fellowship program, empowering a new wave of emerging, diverse voices with the tools, mentorship and resources needed to tell their stories. Today we are thrilled to announce the 2020 Sundance Ignite x Adobe Fellows. Meet the 10 emerging filmmakers, selected from over 1,600 submissions around the world, who represent the next generation of storytellers.</p> <div class="embed embed-oembed embed-slideshare"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%; padding-top: 38px;"><iframe src="https://www.slideshare.net/slideshow/embed_code/key/6hJo2bA9XJRk5K?ref=https%3A%2F%2Fblog.adobe.com%2F&amp;kind=embed-slideshare&amp;provider=slideshare" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media" title="content from slideshare" loading="lazy"></iframe></div></div> <p>With support from Adobe, the Fellowship fosters the filmmakers both professionally and artistically. 
Each Fellow is matched with a Sundance Institute alumni mentor, is eligible for internships and program opportunities, and receives an artist grant to produce future work in addition to a complimentary subscription to Adobe Creative Cloud. Later in the summer, several of the Fellows’ winning short films were screened as part of <a href="https://london.sundance.org/">Celebration of Sundance Film Festival: London</a>.</p> <p>The future is bright for this new class of fellows. Fellows from previous cohorts – represented across 15 countries – have gone on to <a href="https://theblog.adobe.com/adobe-at-sundance-2020/">produce more work</a> (98%), have had their work commissioned, and have earned prestigious awards and <a href="https://www.sundance.org/blogs/sundance-ignite--meet-the-next-generation-of-independent-filmmakers">recognition</a> (60%). Beyond recognition, these filmmakers are bringing important topics to light and sharing diverse points of view. “We are so proud to partner with Sundance in the Sundance Ignite x Adobe program and look forward to working with the 2020 Fellows to help bring their stories, creativity and perspectives to the world,” said John Travis, Adobe’s VP Brand Marketing.</p> <p>Learn more about the <a href="https://www.sundance.org/initiatives/ignite">Sundance Ignite program</a> and get ready to <a href="https://collab.sundance.org/catalog/ignite">submit your short film</a> to the 2021 Sundance Ignite x Adobe Short Film Challenge. Through the challenge, we’ll identify 10 new Sundance Ignite x Adobe Fellows, who will have yearlong access to powerful mentorships and professional development opportunities. The challenge is open now through April 6.</p> <p>At Adobe, we believe creativity doesn’t just open doors—it opens worlds. 
Given the overwhelming number of submissions from young filmmakers around the globe, we’re expanding our support for those looking to hone their skills and explore filmmaking opportunities with 500 full-access scholarships to <a href="https://collab.sundance.org/">Sundance Co//ab</a>—a platform providing emerging creators with online resources to create and share their stories.</p> <p>“Our mission is to enable creativity for all. We believe that everyone has a story to tell and that those stories deserve to be heard. When we elevate a broader and more diverse set of voices, we can create change within ourselves, our communities and the world,” said Travis.</p> <p>For more resources and opportunities to foster your creativity, check out <a href="https://www.adobe.com/corporate-responsibility/creativity/scholarship-programs.html">other scholarships, grants and career development programs</a> from Adobe.</p> </div> <div> <h2 id="featured-posts">Featured posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/03/01/time-to-shine-announcing-the-2021-adobe-experience-maker-awards.html">https://blog.adobe.com/en/publish/2021/03/01/time-to-shine-announcing-the-2021-adobe-experience-maker-awards.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/01/28/how-je-delve-turns-young-creators-into-music-video-pros.html">https://blog.adobe.com/en/publish/2021/01/28/how-je-delve-turns-young-creators-into-music-video-pros.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/01/27/celebrating-creativity-at-2021-sundance-film-festival.html">https://blog.adobe.com/en/publish/2021/01/27/celebrating-creativity-at-2021-sundance-film-festival.html</a></li> </ul> </div> <div> <p>Topics: Creativity, Responsibility, Diversity &amp; Inclusion,</p> <p>Products:</p> </div> </div>

March 02, 2021 12:00 AM

Celebrating Black Life, Love, and Legacy at Adobe

Adobe Web Platform

<div class="embed embed-internal embed-internal-celebratingblacklifeloveandlegacy embed-internal-02"> <div> <h1 id="celebrating-black-life-love-and-legacy-at-adobe">Celebrating Black Life, Love, and Legacy at Adobe</h1> </div> <div> <img src="/hlx_a29180d9b7a661eebc91a9babc0242f7e66d4536.png" alt="Black History Month 2021, a celebration of Black life, love and legacy."> </div> <div> <p>By Karen Robinson</p> <p>Posted on 03-02-2021</p> </div> <div> <p>Black History Month (BHM) is a time for me to reflect, celebrate, and honor my ancestry, but this year BHM took on a greater significance as I’ve yearned for connections and community amidst the pandemic and on-going racial injustice incidents. My Adobe family has helped lift me up during this challenging time through the strong ties I have developed with my fellow members in Adobe’s Black Employee Network (BEN), and through engagement with leaders and colleagues to address how we can support the success of our Black community.</p> <p>This year, as the Executive Sponsor of BEN, I was proud to be a part of Adobe’s BHM events centered around the theme: <em>Celebrating Black Life, Love, and Legacy</em>. Our goal was to bring members of BEN and our many allies together to celebrate the talent, leadership, creativity, and stories from the Black community. We helped unite the global Adobe community through these programming highlights:</p> <ul> <li>An energizing kick-off event, with more than 1,000 employees across 11 countries, showcasing talented artists and musicians including virtual performances by artist <a href="https://www.kwamebrathwaite.com/">Kwame S. 
Brathwaite</a>, musical guests <a href="http://brandeeyounger.com/">Brandee Younger</a>, <a href="https://youtu.be/U33YkWySr4w">Marcus Gilmore</a>, <a href="https://youtu.be/f_dSlxgD2Fo">Nicholas Payton</a>, and hip-hop performance group <a href="https://www.1520arts.org/about-1520arts/">1520 Arts</a>.</li> <li>A series of employee story spotlights with <a href="https://www.youtube.com/watch?v=7Kzlk7RmQ_s">Ronell Hugh</a>, <a href="https://youtu.be/buTHXtCwlfE">Bria Alexander</a>, and <a href="https://youtu.be/pQAv_tXFbiI">TJ Rhodes</a>, sharing their legacy at Adobe and beyond, and <a href="https://blog.adobe.com/en/publish/2021/02/04/create-the-miracle.html#gs.uglf27">Markeia Brox-Chester</a> and <a href="https://blog.adobe.com/en/publish/2021/02/18/afa-earnest-mack-the-talk.html#gs.ugldl6">Earnest Mack</a> sharing their personal journeys and triumphs.</li> <li>A fireside chat with Major General William J. Walker, the leader of the Washington, DC National Guard, who shared his experiences as a Black leader in both the military and law enforcement.</li> <li>A Black Women Creatives Panel, a food and cultural showcase, and a virtual marketplace to support Black-owned businesses.</li> <li>A celebration of <a href="https://blog.adobe.com/en/publish/2021/02/01/adobe-celebrates-black-history-month.html#gs.ugn7ju">Black creativity</a> across the industry by elevating and amplifying diverse voices through multi-media content, grants and scholarships, and media partnerships.</li> <li>Donations to help the larger Black community through corporate and individual contributions to: <a href="https://100blackmen.org/">100 Black Men</a>, <a href="https://evc.org/">Educational Video Center</a>, and <a href="https://www.greenescholars.org/what-we-do">Greene Scholars Program</a>.</li> </ul> <p>Throughout the month I was humbled by the courage and vulnerability of my Black colleagues in telling their unique personal experiences. 
And as the daughter and granddaughter of servicemen, hearing from Major General William J. Walker not only brought nostalgia for my beloved father and grandfather, but also underscored the sincere appreciation I have to all those who serve.</p> <p>Our impactful BHM events and programs helped set the stage for ongoing investments and progress throughout the year. A key driver for this is Adobe’s Taking Action Initiative (TAI), which was established in 2020, in <a href="https://blog.adobe.com/en/2020/06/10/listening-learning-and-taking-action.html#gs.jqbxlh">response</a> to the death of George Floyd and other incidents of racial injustice. I am honored to be a co-leader of the initiative with the mission to accelerate the representation, development, and success of Adobe’s Black employees through five task forces: Community, Hiring &amp; Recruitment, Growth &amp; Advancement, Responsibility &amp; Advocacy, and Transparency &amp; Governance. I collaborate with task force participants – encompassing BEN members and organizational subject matter owners – to strengthen our commitment and initiate new programs. 
Given the level of focus and engagement from senior leadership, including our CEO and Chief People Officer, I am encouraged that TAI will make an impact on the success of our Black community and overall representation at the company.</p> <p>And while I know this is a marathon, where meaningful progress will take time, I’m motivated by recent TAI milestones including <a href="https://theblog--adobe.hlx.page/en/publish/2020/12/23/five-diversity-and-inclusion-lessons.html#gs.ugjwz5">establishing aspirational goals</a> relative to employee representation, increasing investments in Historically Black Colleges and Universities (HBCUs), and investing in growth and development through programs such as the McKinsey Black Leadership Academy.</p> <p>As BHM concludes I am invigorated by the commitment of BEN members and allies who united employees at a whole new level throughout BHM 2021 and by the ongoing efforts at Adobe to accelerate the success of Adobe’s Black employees while creating a change in the broader landscape of social justice.</p> <div class="embed embed-internal embed-internal-adobeforall embed-internal-adobelife"> <div><p><img src="/hlx_c8c81ea35423b0511a921017439ef0a520f84d33.png" alt=""></p> <h3 id="adobe-for-all">Adobe For All</h3> <p>We believe that when people feel respected and included they can be more creative, innovative, and successful, which is why we are committed to investing in building a diverse and inclusive environment for our employees, customers, partners, and the tech industry as a whole.</p> <p><a href="https://www.adobe.com/diversity.html">Learn more</a></p></div> </div> </div> <div> <h2 id="featured-posts">Featured posts</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/02/01/adobe-celebrates-black-history-month.html#gs.uzo3u7">https://blog.adobe.com/en/publish/2021/02/01/adobe-celebrates-black-history-month.html#gs.uzo3u7</a></li> <li><a 
href="https://blog.adobe.com/en/publish/2021/02/19/amplifying-black-creatives-ben.html#gs.uzo3io">https://blog.adobe.com/en/publish/2021/02/19/amplifying-black-creatives-ben.html#gs.uzo3io</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/02/04/create-the-miracle.html#gs.uzo4zk">https://blog.adobe.com/en/publish/2021/02/04/create-the-miracle.html#gs.uzo4zk</a></li> </ul> </div> <div> <p>Topics: #AdobeForAll, Brand, Adobe Life, Adobe Culture, Diversity &amp; Inclusion, Celebrating the Black Community</p> <p>Products:</p> </div> </div>

March 02, 2021 12:00 AM

February 28, 2021

Manuel Rego: :focus-visible in WebKit - February 2021

Igalia WebKit

One month has passed since the previous report so it’s time for a status update.

As you probably already know, Igalia is working on the implementation of :focus-visible in WebKit, a project that is being sponsored by many people and organizations through the Open Prioritization campaign. We’ve reached 84% of the goal, thank you all! 🎉 And if you haven’t contributed yet, there’s still time to do so if you believe this work is important.

The main highlight for February is that initial work has started to land in WebKit, though some important bits are still under review.
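As a quick refresher before diving into the details, :focus-visible matches only when the browser would show a default focus indicator, which lets authors style keyboard focus without affecting mouse clicks. A minimal example (selector and colors are illustrative):

```css
/* Shows a custom focus ring after keyboard navigation, but not after
   a mouse click (in browsers that wouldn't natively show an indicator
   for click focus on a button). */
button:focus-visible {
  outline: 2px solid dodgerblue;
}
```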

Spec issues

There have been some open discussions on the spec side regarding different topics; let’s review where each of them stands:

  • :focus-visible on <select> element: After some discussion on the CSS Working Group (CSSWG) issue, it was agreed to remove the <select> element from the tests and let each browser decide whether to match :focus-visible when it’s clicked, as there was no clear agreement on whether a <select> accepts keyboard input.

    In any case, Chromium still has a bug on elements that have a popup (like a <select>). When you click them they match :focus-visible, but they don’t show the focus indicator (because Chromium wants to avoid showing two focus indicators, one on the <select> and another on the <option>). Since no focus indicator is shown, the element shouldn’t actually match :focus-visible in that situation.

  • :focus-visible on script focus: The spec is not totally clear about when an element should match (or not) :focus-visible after script focus. There was a nice proposal from Alice Boxhall and Brian Kardell on the CSSWG issue, but when we discussed it in the CSSWG it was decided that this was not the proper forum. The reason is that the CSSWG has defined that :focus-visible should match when the browser shows a focus indicator to the user, without saying anything about when the indicator should actually be shown. That definition is very clear, and even though each browser shows the focus indicator in different situations, the definition remains correct.

    Currently the list of heuristics in the spec is not normative; it’s just a hint to browsers about when to match :focus-visible (when to show a focus indicator). But I believe it’d be really nice to have interoperability here, so that people using this feature won’t find weird problems here and there. The CSSWG’s suggestion was therefore to move this discussion to the HTML spec directly, proposing there a set of rules about when a browser should show a focus indicator to the user. Those rules would be the current heuristics from the :focus-visible spec, with some slight modifications to cover the corner cases that have been discussed. Hopefully we can reach an agreement between the different parties and define this properly in the HTML spec, so all implementations can be interoperable in this regard.

    I believe we need to dig deeper into the specific case of script focus, as I’m not totally sure how some scenarios (e.g. calling blur() before focus() and the like) should work. For that reason I worked on a set of tests that try to clarify in which situations the browser should show a focus indicator. These need to be discussed with more people to see if we can reach an agreement and prepare some spec text for HTML.

  • :focus-visible and Shadow DOM: Another topic already explained in the previous report. My proposal to the CSSWG was to avoid matching :focus-visible on the ShadowRoot when some element in the Shadow Tree has the focus, in order to avoid having two focus indicators.

    A concern has been raised: this would make it possible to guess whether an element is a ShadowRoot by focusing it via script and then checking whether it matches :focus-visible (and ShadowRoots shouldn’t be observable). However, that’s already possible in WebKit, which currently uses :-webkit-direct-focus in the default User Agent style sheet to avoid the double focus indicator in this case: in WebKit you can focus an element via script and check whether it has an outline to determine whether it’s a ShadowRoot.

    Anyway, like the previous case, this would be part of the heuristics, so according to the CSSWG’s suggestion it should be discussed in the HTML spec directly.
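To make the double-indicator scenario concrete, here is a schematic sketch (the custom element name and colors are illustrative, not from the spec discussion):

```css
/* A shadow host whose shadow tree contains a focusable <input>.
   Without the proposed rule, focusing the inner input could make
   both of these selectors apply at once, drawing two focus rings. */
fancy-input:focus-visible {
  outline: 2px solid red;     /* page style, applied to the host */
}

/* inside the shadow tree's own style sheet: */
input:focus-visible {
  outline: 2px solid blue;    /* applied to the inner input */
}
```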

Default User Agent style sheet

Early this month I landed a patch to start using :focus-visible in the Chromium User Agent style sheet; it is included in version 90. 🚀 This means that from that version on you won’t see an outline when you click on a <div tabindex="0">, only when you focus it with the keyboard. The hack :focus:not(:focus-visible) also won’t be needed anymore (it has actually been removed from the spec too).
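For reference, the pattern this change makes obsolete looked roughly like this (a schematic example):

```css
/* Old workaround: strip the default outline only for focus that
   wouldn't show a native indicator (e.g. mouse clicks). */
:focus:not(:focus-visible) {
  outline: none;
}

/* With :focus-visible in the UA style sheet, authors can instead
   simply opt in to a custom indicator for indicator-worthy focus: */
:focus-visible {
  outline: 2px solid currentColor;
}
```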

In addition, Firefox has been using :focus-visible in its User Agent style sheet since version 87.

More about tests

During this month there has still been some extra work on the tests. While implementing things in WebKit I noticed some minor issues in the tests, which have been fixed along the way.

I also found some limitations in WebKit’s testdriver.js support for simulating keyboard input. Some of the :focus-visible tests use the method test_driver.send_keys() to send keys like Control or Enter, so I added support for them in WebKit. Apart from that, I fixed how modifier keys are identified in WebKitGTK and WPE, as they were not exactly following other browsers (e.g. event.ctrlKey was not set on the keydown event, only on keyup).
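A minimal sketch of the kind of WPT test involved, assuming the usual testharness/testdriver setup (the element id and assertion are illustrative, not copied from the actual test suite):

```html
<!DOCTYPE html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script src="/resources/testdriver.js"></script>
<script src="/resources/testdriver-vendor.js"></script>
<div id="el" tabindex="0">Focus me</div>
<script>
promise_test(async () => {
  el.focus();
  // "\uE009" is the WebDriver key code for Control; send_keys needs
  // browser-side support, which is what was added to WebKit.
  await test_driver.send_keys(el, "\uE009");
  assert_true(el.matches(":focus-visible"));
}, "Element still matches :focus-visible after a modifier key press");
</script>
```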

WebKit implementation

And now the most important part: the actual WebKit implementation has moved forward during this month. I managed to get a patch that passes all the tests, and split it up a bit in order to merge things upstream.

The first patch, which just does the parsing of the new pseudo-class and adds an experimental flag, has already landed.

Now a second patch is under review. It originally contained the whole implementation, passing all the tests, but due to the ongoing discussion around script focus, that part has been removed. Anyway, the review is in progress; hopefully it’ll land soon and you’ll be able to start testing it in the WebKit nightlies.

Some numbers

Like in the previous post, let’s review the numbers of what has been done in this project so far:

  • 20 PRs merged in WPT (7 in February).
  • 14 patches landed in WebKit (9 in February).
  • 7 patches landed in Chromium (3 in February).
  • 1 PR merged in Selectors spec (1 in February).
  • 1 PR merged in HTML spec (1 in February).

Next steps

First thing is to get the main patch landed in WebKit and verify that things are working as expected on the different platforms.

Another issue to solve is reaching an agreement on how script focus should work with regard to :focus-visible, and then implementing that in WebKit, covering all the cases.

After that we can request that the feature be enabled by default in WebKit. Once that’s done, we can discuss the possibility of changing the default User Agent style sheet to use :focus-visible too.

There is also some interop work pending. A few things are failing in Firefox, and we could try to help fix them. Some weird issues with <select> elements in Chromium might need some love too. And depending on the ongoing spec discussions, changes may or may not be needed in the different browsers. We may or may not find the time to do all of this; let’s see how things evolve in the next weeks.

Big thanks to everyone who has contributed to this project; you’re all wonderful for letting us work on this. 🙏 Stay tuned for more updates in the future!

February 28, 2021 11:00 PM

February 25, 2021

Release Notes for Safari Technology Preview 121

Surfin’ Safari

Safari Technology Preview Release 121 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 271794-272845.

Web Inspector

  • Sources
    • Collapsed blackboxed call frames in the Call Stack section (r272371)


CSS

  • Added support for aspect-ratio on grid items (r272307)
  • Added support for logical variants of scroll-padding and scroll-margin (r272035)
  • Added support for color(a98-rgb ...), color(prophoto-rgb ...), color(rec2020 ...), color(xyz ...), hwb() as part of CSS Color 4 (r271992, r272125, r272123, r272311, r272344)
  • Added support for percentages when parsing color(srgb ...) and color(display-p3 ...) per-spec (r271866)
  • Changed sRGB to XYZ conversion matrix values to match values in the latest spec (r272498)
  • Fixed max-height percentages that are wrongly resolved for replaced grid items (r272309)
  • Fixed grid item to fill the grid area for stretch or normal self alignment (r272308)
  • Fixed animation of rotate or scale property to correctly account for static translate property (r272201)
  • Fixed font-stretch to apply to system-ui (r272073)
  • Fixed the nested grid container which has a replaced item with max-height incorrectly getting width(0px) (r272338, r272711)
  • Implemented scroll-snap-stop for scroll snapping (r272610)
  • Handled aspect-ratio: auto m/n for replaced elements (r272360)
  • Handled min-width: auto or min-height: auto for aspect-ratio (r272718)
  • Handled zero aspect-ratio width or height (r271948)
  • Made auto && <ratio> use content box-sizing (r272569)
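As a small illustration of the new grid-item support for aspect-ratio, a hedged sketch (class names are illustrative):

```css
.gallery {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
}

.gallery > .card {
  /* Each grid item derives its height from its track-sized width,
     which this release now also honors for grid items. */
  aspect-ratio: 16 / 9;
}
```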

GPU Process

  • Enabled audio capture in GPUProcess by default (r272735)
  • Enabled audio capture for speech recognition in GPUProcess (r272434)
  • Enabled GPU WebRTC codecs in GPUProcess by default on macOS (r272496)
  • Enabled video capture in GPUProcess by default on macOS (r272810)
  • Fixed <audio> not loading when the URL ends with .php causing some tests to time out (r272750)
  • Fixed implementation of WebGL power preference and discrete/internal GPU selection with ANGLE (r271880)


Media

  • Added intermediate volume icon states between “mute” and “max” (r272375)
  • Changed media controls to show the total duration, only switching to time remaining when clicked (r272373)
  • Changed MediaStream-backed video elements to not compute the mediaType based on track muted states (r272583)
  • Connected MediaSession with MediaRemote and NowPlaying (r272445, r272589)
  • Fixed sound on YouTube after switching back to foreground (r272829)
  • Fixed playback of WebM/Opus generated from Chrome MediaRecorder (r272822)
  • Fixed Picture-in-Picture video pauses when scrolling on Twitter (r271870)
  • Updated media controls time scrubber styles (r272352, r272438)
  • Updated media controls to use new SF Symbols (r272339)


Web API

  • Fixed the return key binding for date inputs conflicting with pressing return to submit a form (r272495)
  • Fixed selecting a date on datetime-local inputs unexpectedly adding second and millisecond fields (r272368)
  • Fixed rendering a pattern with an SVG image (r272549)
  • Forbid “|” in URL hosts (r271899)
  • Reduced the overhead of HTMLDocumentParser in innerHTML setter (r272622)


JavaScript

  • Added @ in Error#stack even if function name does not exist (r272139)
  • Added Atomics support for BigInt64Array and BigUint64Array behind a runtime flag (JSC_useSharedArrayBuffer=1) (r272341)
  • Adjusted properties order of host JS functions (r272099)
  • Changed Object.assign to throw for property creation on non-extensible target (r272411)
  • Handled milliseconds in Date’s timezone without floating point rounding (r272127)
  • Implemented BigInt64Array and BigUint64Array (r272170, r272215)
  • Implemented private methods behind flag (JSC_usePrivateMethods=1) (r272580)
  • Made JSON.parse faster by using table for fast string parsing (r272570)
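A quick illustration of the newly implemented 64-bit typed arrays (a minimal sketch, runnable in any engine that ships them, such as Node.js or current browsers):

```javascript
// BigInt64Array and BigUint64Array store 64-bit integers as BigInt values.
const signed = new BigInt64Array(2);
signed[0] = 2n ** 40n;  // comfortably beyond Number's safe-integer range
signed[1] = -1n;        // the signed view keeps the sign

// Reinterpreting the same buffer as unsigned: -1 becomes 2^64 - 1.
const unsigned = new BigUint64Array(signed.buffer);
console.log(signed[0]);    // 1099511627776n
console.log(unsigned[1]);  // 18446744073709551615n
```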


WebAssembly

  • Implemented WebAssembly.instantiateStreaming and WebAssembly.compileStreaming (r271993)
  • Implemented streaming compilation/instantiation for the Blob type (r272221)
  • Updated WebAssembly.Global to support Funcref and Externref (r272071, r272081, r272119)
  • Enabled Wasm bulk-memory and reference-types (r272074)


Accessibility

  • Exposed focusable elements even if the element or ancestor has aria-hidden=true (r272390)
  • Fixed long narrow tables to not be made into data tables unnecessarily (r272024)

Speech Recognition

  • Used the user media permission prompt for Speech Recognition (r272165)

February 25, 2021 09:26 PM

February 16, 2021

Combine PDF files with ease using Acrobat online tools

Adobe Web Platform

<div class="embed embed-internal embed-internal-combinepdffileswitheaseusingacrobatonlinetools embed-internal-16"> <div> <h1 id="combine-pdf-files-with-ease-using-acrobat-online-tools">Combine PDF files with ease using Acrobat online tools</h1> <p>Take the hassle out of merging PDF files. With the Adobe Acrobat online Merge PDFs tool, it’s easy to combine PDFs with great results.</p> </div> <div> <img src="/hlx_c22612eb4acc0b72286baef2ef683b403c0441ef.jpeg" alt="Merge files into one PDF"> </div> <div> <p>By Adobe Document Cloud Team</p> <p>Posted on 02-16-2021</p> </div> <div> <p>Combining multiple PDFs into a single document can streamline your work in so many ways. But if you have ever tried it, you know that many methods commonly used to merge PDFs can be time consuming or produce less-than-ideal results. With Adobe Acrobat <a href="https://www.adobe.com/acrobat/online.html">online PDF tools</a>, combining PDFs into a single document is quick, easy and effective.</p> <div class="embed embed-internal embed-internal-acrobat embed-internal-documentcloud"> <div><p><img src="/hlx_3e0e58654cbc37f0005f6ba2a61edb9314c3feaf.png" alt=""></p> <h3 id="adobe-acrobat">Adobe Acrobat</h3> <p>Stay connected to your team with simple workflows across desktop, mobile, and web — no matter where you’re working.</p> <p><a href="https://acrobat.adobe.com/us/en/acrobat.html">Learn more</a></p></div> </div> <h3 id="take-the-hassle-out-of-merging-pdfs">Take the hassle out of merging PDFs</h3> <p>There are plenty of reasons to combine PDF files. Maybe you need to create an end-of-year report for your business by merging monthly reports, or maybe you need to combine various forms and sheets into a single cohesive package for your next meeting. If you are an educator, perhaps you would like to create a single set of reading materials for your students from multiple sources. 
Or maybe you would like an easy way to compile your favorite home recipes into a digital cookbook.</p> <p>With the new Acrobat <a href="http://www.adobe.com/acrobat/online/merge-pdf.html">Merge PDFs</a> tool, you can complete any of these tasks in just a few moments. Simply open the tool in any browser, select the PDFs you want to combine, and Acrobat takes care of the rest. The Merge PDFs tool is just one of 19 powerful online PDF tools that Acrobat lets you try for free. And if you need to make your workday even more efficient, consider an Acrobat DC subscription. You can start a seven-day <a href="https://acrobat.adobe.com/us/en/free-trial-download.html">free trial</a><a href="https://acrobat.adobe.com/us/en/free-trial-download.html"> of Acrobat Pro DC</a> (then US$14.99/mo) to get unlimited access to all tools today.</p> <h3 id="easy-document-organizing-with-acrobat">Easy document organizing with Acrobat</h3> <p>Once you combine your PDFs, you will likely want to reorganize or edit the pages in the merged PDF. If you are already a subscriber or have started your free trial, you will have access to these Acrobat organization tools to finalize and polish your documents:</p> <ul> <li>The <a href="https://www.adobe.com/acrobat/online/rearrange-pdf.html">Reorder PDF Pages</a> tool to move pages around within a PDF.</li> <li>The <a href="https://www.adobe.com/acrobat/online/delete-pdf-pages.html">Delete PDF Pages</a> tool to delete pages from a PDF.</li> <li>The <a href="https://www.adobe.com/acrobat/online/rotate-pdf.html">Rotate PDF Pages</a> tool to rotate individual pages right or left within a PDF</li> <li>The <a href="https://www.adobe.com/acrobat/online/pdf-editor.html">Edit PDF</a> tool to add comments and annotations to a PDF.</li> </ul> <p>With help from Acrobat, you can make sure your documents present content just the way you want with quality you can trust. 
Discover all you can do with PDFs with a free trial of Acrobat DC and Acrobat’s online PDF tools today.</p> </div> <div> <h2 id="featured-posts">Featured Posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/02/02/adobe-adds-new-acrobat-tools-to-tackle-pdf-tasks-in-the-browser.html#gs.sqtpfv">https://blog.adobe.com/en/publish/2021/02/02/adobe-adds-new-acrobat-tools-to-tackle-pdf-tasks-in-the-browser.html#gs.sqtpfv<br> </a></li> <li><a href="https://blog.adobe.com/en/publish/2021/01/19/get-ahead-of-tax-planning-for-2021-with-adobe-acrobat-tools.html#gs.sqp1ad">https://blog.adobe.com/en/publish/2021/01/19/get-ahead-of-tax-planning-for-2021-with-adobe-acrobat-tools.html#gs.sqp1ad<br> </a></li> <li><a href="https://blog.adobe.com/en/publish/2020/08/13/sign-here-there-and-everywhere.html#gs.sqtrol">https://blog.adobe.com/en/publish/2020/08/13/sign-here-there-and-everywhere.html#gs.sqtrol</a></li> </ul> </div> <div> <p>Topics: Future of Work, Productivity, Document Cloud,</p> <p>Products: Document Cloud, Acrobat,</p> </div> </div>

February 16, 2021 12:00 AM

5 reasons why you shouldn’t miss Adobe Summit this year

Adobe Web Platform

<div class="embed embed-internal embed-internal-5reasonswhyyoushouldntmissadobesummitthisyear embed-internal-16"> <div> <h1 id="5-reasons-why-you-shouldnt-miss-adobe-summit-this-year">5 reasons why you shouldn’t miss Adobe Summit this year</h1> <p>Why you shouldn’t miss Adobe Summit 2021, the world’s premiere conference for Experience Makers.</p> </div> <div> <img src="/hlx_a20b34305718bb4a8f4029ec50d8a0849a144426.jpeg" alt="Abstract art that is colorful on a white background. "> </div> <div> <p>By Adobe Communications Team</p> <p>Posted on 02-16-2021</p> </div> <div> <p>Adobe Summit 2021 registration is off to a great start. Set for April 27 — 29, the virtual event will bring together Experience Makers from some of the world’s most-recognized brands. No matter where you are in your career, you’ll be able to attend sessions that will leave you and your company future-proof. This is true regardless of your area of expertise — from content creation and personalization, to data and analytics.</p> <p><a href="https://portal.adobe.com/pages/adobe/as21/signin">Register for free</a> to build the skills to create client experiences that inspire loyalty and drive growth.</p> <h3 id="explore-the-latest-customer-experience-trends">Explore the latest customer experience trends</h3> <p>This year’s Adobe Summit comes at a time of seismic shifts in the customer experience landscape. That’s why we’ve planned more than 250 sessions and training workshops that will highlight game-changing innovation in how brands deliver experiences at every stage of the customer journey.</p> <p>“Driving demand during COVID is top-of-mind for a lot of businesses,” says Alexandra Quick, a product marketing manager at Adobe. “That’s a subject we’re attacking head-on with a dedicated session.” Overall, participants can expect a “huge emphasis on digital marketing,” she adds. 
Included within this emphasis will be a deep dive into all the ways companies are using new technologies to advance their mobile commerce capabilities.</p> <p>“In the case of digital commerce, 2020 has been an incredible year for growth,” says Shannon Hane, senior product marketing manager for commerce at Adobe. This has been true on both the B2C and the B2B sides of industries. Adobe Summit 2021, she says, represents a unique opportunity to learn “how people are managing that growth and adapting their commerce capabilities and experiences to this new environment.”</p> <h3 id="get-inspired-by-top-brands-from-around-the-world">Get inspired by top brands from around the world</h3> <p>This year’s conference will represent one of the most industry-diverse entries in the Summit series to date, with speakers from every corner of the tech sector, as well as education, financial services, sports, retail, and more. Leaders from each industry will share the field notes they’ve amassed during the unprecedented upheaval of the last year. Among the subjects you can expect them to cover are the following:</p> <ul> <li><a href="https://www.adobe.com/experience-platform.html">Adobe Experience Platform</a></li> <li>Analytics, Insights, and Activation</li> <li>B2B Marketing and ABM</li> <li>Campaign Management</li> <li>Collaborative Work Management</li> <li>Content Creation</li> <li>Developer Ecosystem</li> <li>Digital Commerce</li> <li>Digital Document Productivity</li> <li>Personalization</li> <li>Trends and Inspiration</li> </ul> <h3 id="discover-best-practices-and-new-techniques-and-products">Discover best practices and new techniques and products</h3> <p>Maybe you’re a B2B company looking to sell directly to consumers, or a retailer looking to recreate the physical showroom experience online. 
Whatever your challenge, you’re likely to find a case study at this year’s Adobe Summit that offers you a shortcut to success in building customer experience, from IT to creative — and everything in between.</p> <p>Finding and delighting customers takes more than the right strategy, however. It also requires the right tools, not to mention knowing how to use them. This April, you can expect to learn all about the latest in task-automation software, progressive web applications, 3D visualization, and more.</p> <p>Customers will also get a sneak peek at the exciting new features coming to Adobe products, including <a href="https://www.adobe.com/marketing/experience-manager.html">Adobe Experience Manager</a> and <a href="https://www.adobe.com/marketing/marketo.html">Marketo Engage</a>.</p> <p>“We’re going to showcase the innovations that we’re launching this year that give customers and brands the ability to make those experiences happen in a more agile, personalized way on every channel,” says product marketing manager Vebeka Guess.</p> <h3 id="connect-with-adobe-experts-and-your-peers-worldwide">Connect with Adobe experts and your peers worldwide</h3> <p>Opportunities for connecting with other Experience Makers can be hard to come by these days. That’s why networking activities are included with your Summit registration. Just because this year’s Summit is virtual, doesn’t mean it can’t be interactive.</p> <p>“It’s not just the content that draws people to Summit, but the ability to network and interact with experts at Adobe, and also your peers,” Vebeka says. 
“People come to Summit to learn from each other as well as from Adobe.”</p> <p>To this end, we’ll be launching Braindate, an interactive networking platform that will enable you to engage and interact with Summit attendees, expanding your learning and your network.</p> <h3 id="its-free-and-its-virtual">It’s free and it’s virtual</h3> <p>Adobe Summit 2021 is the most accessible conference ever for Experience Makers around the globe, and represents a unique opportunity for those who haven’t always been able to join in person. Reserve your spot now for the world’s premier conference on customer experience, where you’ll learn how to drive demand in a rapidly shifting marketplace, explore the latest technological and strategic trends, and draw inspiration from experts who are changing the world — one customer experience at a time.</p> <p><em>Have questions? Visit our official Adobe Summit 2021 <a href="http://apps.enterprise.adobe.com/go/7015Y000002e5NKQAY">website</a>. You can also connect with us on social media using the hashtags #AdobeSummit, #MagentoImagine, or #MktgNation.</em></p> <div class="embed embed-internal embed-internal-summitregistration embed-internal-summit"> <div><p><img src="/hlx_87e087ba6e238de864812fc9d69832a821c85f64.jpeg" alt=""></p> <h3 id="expand-your-genius-at-adobe-summit">Expand your genius at Adobe Summit</h3> <p>A free virtual event April 27-29, 2021</p> <p>Join us to expand your skills, engage with other Experience Makers, and be inspired to create exceptional experiences that drive business growth and customer loyalty.</p> <p><a href="http://apps.enterprise.adobe.com/go/7015Y000002e4q9QAA">Register for free</a></p></div> </div> </div> <div> <h2 id="featured-posts">Featured Posts:</h2> <ul> <li><a 
href="https://blog.adobe.com/en/publish/2021/02/01/registration-is-open-adobe-summit-2021-is-free-virtual-and-global.html">https://blog.adobe.com/en/publish/2021/02/01/registration-is-open-adobe-summit-2021-is-free-virtual-and-global.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2020/04/29/10-brands-that-shifted-their-advertising-strategy-amid-covid-19.html#gs.t9pzk4">https://blog.adobe.com/en/publish/2020/04/29/10-brands-that-shifted-their-advertising-strategy-amid-covid-19.html#gs.t9pzk4</a></li> <li><a href="https://blog.adobe.com/en/publish/2020/09/03/adobe-events-go-virtual-through-june-2021.html#gs.t9pzpj">https://blog.adobe.com/en/publish/2020/09/03/adobe-events-go-virtual-through-june-2021.html#gs.t9pzpj</a></li> </ul> </div> <div> <p>Topics: Adobe Summit, News, B2B, Experience Cloud,</p> <p>Products: Experience Manager, Marketo&nbsp;Engage,</p> </div> </div>

February 16, 2021 12:00 AM

Apply to be a 2021-22 Creative Resident

Adobe Web Platform

<div class="embed embed-internal embed-internal-applytobea202122creativeresident embed-internal-16"> <div> <h1 id="apply-to-be-a-2021-22-creative-resident">Apply to be a 2021-22 Creative Resident</h1> </div> <div> <img src="/hlx_24468c73e708a3ce0b899b5ada4ca2c80d3d7af9.png" alt="Adobe Creative Residency on colorful background. "> </div> <div> <p>By Julia Tian</p> <p>Posted on 02-16-2021</p> </div> <div> <p>The Adobe Creative Residency is a year-long program that allows artists in the early stages of their creative careers to focus on building up their local creative communities and their own dream portfolios. Each Resident is supported with a full salary, health benefits, mentorship, access to Adobe software, speaking opportunities, and other project-specific support to make their vision a reality. The ideal candidate is passionate about building their community, skilled in their field, in the beginning stages of their creative career, and has the desire to make the Creative Residency their professional focus for one year.</p> <p>We are excited to announce that applications for the 2021 Adobe Creative Residency will be open February 16 9a.m. PST — March 14 11:59 p.m. PST. If you’re interested in growing your creative career, read on for application information, tips, and other resources to help you with your submission.</p> <p><img src="/hlx_5b2cda6c0bbcd804359c2df7cae93320eb3156cf.png" alt=""></p> <h3 id="application-information">Application information</h3> <p>Every year we look for a variety of ideas, people, and cultures representative of the diversity of Adobe’s creative community. 
This year, applications will be accepted from candidates living in the United States.</p> <p>We are looking for future Residents who have a strong area of focus in one of these categories:</p> <ul> <li>Short-form online video</li> <li>Photography</li> <li>Illustration: <a href="https://www.adobe.com/creativecloud/tools/drawing-software.html">digital drawing</a> and painting</li> <li>Multi-disciplinary design: print and/or digital</li> <li>Experience/Interaction design using Adobe XD</li> </ul> <p>If your area of expertise isn’t listed above, we still welcome your application. See our <a href="https://www.adobe.com/about-adobe/creative-residency/alumni.html">alumni page</a> to get a sense of the diversity of fields and styles we have supported in the past.</p> <p><img src="/hlx_92fe789c844c645667e8c8fd4a8e6bc975a0191a.png" alt=""></p> <h3 id="application-tips">Application tips</h3> <p><strong>Get started</strong></p> <ul> <li>Begin by reading through the Creative Residency <a href="https://www.adobe.com/about-adobe/creative-residency.html">site</a>. Visit the How to Apply and FAQ pages for all the answers to your questions. Make sure you learn what the Residency is about and whether you are an eligible candidate.</li> <li>Bookmark this article and the links in it that you find helpful, and refer to them as you build your application.</li> <li>Review what previous residents submitted for their applications. Watch resident Anna Daviscourt’s <a href="https://youtu.be/DKpEX9VUgmw?t=512">video of five tips for applying</a>.</li> </ul> <p><strong>Form your project vision</strong></p> <ul> <li>Take time to think through your project. What is your motivation? What will your project accomplish? What materials or support will you need? How much time will each portion of your project take? 
Give yourself time to prepare thoughtful answers to the application questions.</li> </ul> <p><strong>Complete the application form</strong></p> <ul> <li>Make a copy or download the application <a href="https://docs.google.com/presentation/d/1DhIgHFAFXwing80_ymgg1G_Pw5PRZQuIf_SOu1AjtlY/edit?usp=sharing">template deck</a> as a PPT file and use it as a starting place for building out your project proposal. Make sure that you address all items and questions outlined in the template.</li> <li>Upload your proposal as a PDF into the application online.</li> </ul> <p><img src="/hlx_8ba459e9af97f383249346546a69228744413fe9.png" alt=""></p> <h3 id="application-resources">Application resources</h3> <p>This is your opportunity to use your creative strengths to communicate your style, voice, and process. Craft a project proposal layout, storyline, and design that exemplify your creative strengths.</p> <p>Refer to the past residents’ applications on our website. Notice how they used their artistic and organizational strengths to build a strong proposal. Remember that every submission is unique, so make sure yours reflects your style, brand, and voice.</p> <p>In addition to the project proposal, we also consider your previous creative work, flexibility in trying new things, past work experiences, and willingness to take on new challenges. You can <a href="https://vimeo.com/256816569">watch 2018 resident Aaron Bernstein’s application video here</a> to see how he uses stop motion animation, audio, and photography to communicate his project concept. This gave us a good feel for his level of skill, what style to expect from his project, and his ability to pitch his design concepts visually and succinctly.</p> <p><em>Good luck! 
We look forward to reviewing your application and hope to meet you soon.</em></p> <div class="embed embed-internal embed-internal-creativeresidency embed-internal-creativecloud"> <div><p><img src="/hlx_020e0eec1e5c8b6ed4488341e9641abd6d547991.png" alt=""></p> <h3 id="adobe-creative-residency">Adobe Creative Residency</h3> <p>Create. Share. Activate. Empower. The Adobe Creative Residency and Community Fund supports the creative community with grants, tools, resources and guidance, giving creatives the opportunity to focus on building their dream portfolios.</p> <p><a href="https://www.adobe.com/about-adobe/creative-residency/community-fund.html">Learn more</a></p></div> </div> </div> <div> <h2 id="featured-posts">Featured Posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/2020/05/01/creative-resident-aiko-fukuda-teams-up-with-pantone-for-mermay.html#gs.tcfutn">https://blog.adobe.com/en/2020/05/01/creative-resident-aiko-fukuda-teams-up-with-pantone-for-mermay.html#gs.tcfutn</a></li> <li><a href="https://blog.adobe.com/en/2020/08/28/100-pencils-project-a-love-letter-to-drawing.html">https://blog.adobe.com/en/2020/08/28/100-pencils-project-a-love-letter-to-drawing.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/01/13/amazon-creative-jam-gives-future-designers-a-lift-through-mentorship.html#gs.tcgn93">https://blog.adobe.com/en/publish/2021/01/13/amazon-creative-jam-gives-future-designers-a-lift-through-mentorship.html#gs.tcgn93</a></li> </ul> </div> <div> <p><em>Topics: Creativity, Art,</em></p> <p><em>Products:</em></p> </div> </div>

February 16, 2021 12:00 AM

Spring call for content: Signs of renewal

Adobe Web Platform

<div class="embed embed-internal embed-internal-spring2021adobestockcallforcontent embed-internal-16"> <div> <h1 id="spring-call-for-content-signs-of-renewal">Spring call for content: Signs of renewal</h1> <p>Now is the time for Adobe Stock, when clients are looking at new beginnings and looking for designs that capture the season of rebirth.</p> </div> <div> <p><img src="/hlx_38bda93ad67d54b32ab92f46080a7f1512dcf9d1.jpeg" alt=""></p> <p><em>Image source: Adobe Stock / <a href="https://stock.adobe.com/images/pensive-black-woman-in-flowery-land/380733334">JimenaRoquero/Stocksy</a></em> <em>.</em></p> </div> <div> <p>By Brenda Milis</p> <p>Posted on 02-16-2021</p> </div> <div> <p>With subzero temperatures chilling much of the northern hemisphere, spring can seem ages away. But now’s the time at <a href="https://stock.adobe.com/">Adobe Stock</a> when everyone is looking at new beginnings, planning their campaigns, and looking for images, designs, illustrations, and videos that capture the sparkling excitement of the season of rebirth.</p> <div class="embed embed-internal embed-internal-stockspring2021 embed-internal-creativecloud"> <div><p><img src="/hlx_2278364dc79d9438b4563396fab411af0b9cd079.png" alt=""></p> <h3 id="spring-2021">Spring 2021</h3> <p>Highlights from Adobe Stock.</p> <p><a href="https://stock.adobe.com/collections/bLKnfC8p8gn3tyuCLjp7rrwaSMH02aEU">Learn more</a></p></div> </div> <p>While you will find a need for classics like colorful eggs, gingham blankets, and bundles upon bundles of flowers (after an especially isolated and static winter), visual trends are urging us towards a spring that is richer, more colorful, and more dreamlike than before. 
Here are some of the key topics and themes for spring 2021 at Adobe Stock.</p> <p><img src="/hlx_8317b7a1d2d78214349889b027c226030a9553bb.jpeg" alt=""></p> <p><em>Image source: Adobe Stock / <a href="https://stock.adobe.com/images/a-grandparent-waters-the-garden-flowers-with-his-grandchildren-older-person-and-child-doing-everyday-things-together/369646942">Antonio Rodriguez</a>.</em></p> <h3 id="spring-2021s-exuberant-colors">Spring 2021’s exuberant colors</h3> <p>Traditionally, the arrival of spring brings cheerful pastels. The soft and creamy roses, lavenders, buttercups, and blue bells we see sprouting each year find a home in everything from Easter eggs and linen dresses to macaroons.</p> <p>This year there is a thirst for the most saturated and rich possibilities of these vernal mainstays. Perhaps it is a response to our dull, monochrome days behind a screen, but people are gravitating towards images, designs, and creations that convey vibrance and brightness. These rich colors play alongside the more muted pastel members of the same family, seen in the luscious Soft Start color palette.</p> <p><img src="/hlx_1a29b36241b9f52948c6a06f0c8842178aa74551.jpeg" alt=""></p> <p><em>Image source: Right: Adobe Stock/ <a href="https://stock.adobe.com/images/cheerful-woman-in-floral-hijab-against-black-wall-during-covid-19/398532442">Jose Luis CARRASCOSA/Westend61</a> Left: Adobe Stock / <a href="https://stock.adobe.com/images/man-wint-flowers-covering-his-eyes-behind-a-foral-bakground/187417312">Thais Ramos Varela/Stocksy</a>.</em></p> <p><img src="/hlx_fe74885b5e51a2950e5db93bd7f30f3e758a1687.jpeg" alt=""></p> <p><em>Image source: Adobe Stock /</em> <a href="https://stock.adobe.com/images/close-up-of-colorful-rose/379904296">Hayden Williams/Stocksy</a>.</p> <p><img src="/hlx_c73e9e473527a6861c9d94f8486bf2e36c764cb5.jpeg" alt=""></p> <p><em>Image source: Adobe Stock / <a href="https://color.adobe.com/Soft%20Start-color-theme-16559542">Soft Start color palette for spring 
on Adobe Color</a>.</em></p> <p>Desire for energetic and flashy colors this spring can be seen as part of the 2021 <a href="https://blog.adobe.com/en/publish/2021/01/26/positively-colorful-adobe-stocks-2021-visual-trend-mood-boosting-color.html#gs.rfbvcm">Mood Boosting Color</a> trend. After a stressful and challenging 2020, artists and clients are letting a sense of optimism, pride, and exuberance glow through their color choices.</p> <p><img src="/hlx_842b85d2bbcded5ef7b638a8d079d2e3a5e0fc8f.jpeg" alt=""></p> <p><em>Image source: Adobe Stock / <a href="https://stock.adobe.com/images/kid-hiding-behind-colorful-clothes/380739343">JimenaRoquero/Stocksy</a></em> <em>.</em></p> <h3 id="expanding-the-family-bubble">Expanding the family bubble</h3> <p>One upside to our lifestyles in a time of quarantining, restricted travel, and social distancing, has been a renewed appreciation for the communities we live with every day. Roommates shared our birthdays, holidays, griefs, and joys with us.</p> <p>Building a bubble together helped us appreciate the sense of connection we get from shared spaces and activities. Stock images, videos, and designs will reflect our expanded and enriched sense of communal life.</p> <p><img src="/hlx_ad95255197dd87ae1f1782222825c5a4c4aaa461.jpeg" alt=""></p> <p><em>Image source: Left: Adobe Stock / <a href="https://stock.adobe.com/images/badminton-rackets-on-pink-and-blue-background/194518654">AudreyShtecinjo/Stocksy</a></em> <em>Right: Adobe Stock / <a href="https://stock.adobe.com/images/skate-park-with-men-riding-bicycles/224201519">ADDICTIVE STOCK</a>.</em></p> <p>As we head into the warmer months, we expect a transition from solitary exercise to group activities and play. Creatives can turn to the nostalgic appeal of sports like roller-skating, badminton, cycling, and jump rope to offer us new ways to explore the same backyards and local parks. 
Meanwhile more adventurous outings like paddle boarding or surfing give people more opportunities to play together in a socially distant setting. The <a href="https://stock.adobe.com/collections/CZopNySbTpKmoXOLaShZx9EvZuuHxVk0">Back in Action</a> stock collection showcases these shared ways of getting fresh air into our lungs.</p> <p>Families have also adapted to our new reality. By July 2020, 52 percent of young adults resided with a parent, up from 47 percent just four months earlier. Even before a global pandemic, a record 64 million Americans lived in multi-generational housing (a kind of family life that has been popular outside the West for centuries). The <a href="https://stock.adobe.com/collections/WYswEM4UocKbGFa2JquhWxFy6qjRtg3F">Family Circle</a> collection at Adobe captures these larger, more diverse, and multigenerational families in a world of images and designs that celebrate the junctures between children and grandparents as they emerge into a new season.</p> <p><img src="/hlx_39d42d69d151828762ab74ead945d820985bfd19.jpeg" alt=""></p> <p><em>Image source: Adobe Stock / <a href="https://stock.adobe.com/images/young-mixed-race-girls-playing-on-tablet-wearing-masks-on-brownstone-stoop/354781980">Granger Wootz</a>.</em></p> <h3 id="the-delights-of-small-things">The delights of small things</h3> <p>In our screen-focused lifestyles and work environments, our day-to-day stimulation takes on a kind of visual monotony. Netflix following Zoom following social media. But the blooming of flowers, chirping of birds, and thawing of soil reminds us of the inexhaustible simple pleasures that surround us. As we emerge from our comfortable winter havens, spring offers a whole new dimension of experiences.</p> <p>There is a new kind of intensity to all kinds of sensory pleasures and delights this spring, engaging not only the eyes but the senses of sound, taste, smell, and touch as well. 
The artwork in the <a href="https://stock.adobe.com/collections/f0twSN7GhH2nzU226sDazNIT292vDLpj">Spring Awakening</a> collection evokes the scent of freshly picked herbs and lettuces and brewed tea, the tart sweetness of young berries, and the bracing flutter of wind blowing through fields of grass.</p> <p>There is a focus on ways to experience nature closer to home and in simpler ways: planting a new patch of oregano in your garden or windowsill, or picking local wildflowers as opposed to flying across the world to a foreign land. And as we bring this embodied approach inside on the cool nights and blustery days, we bring with us a desire to nourish our senses with playful luxuries like baking, fort-building, and drawing.</p> <p><img src="/hlx_b594409ce775849a95259f76538435ee84fba53d.jpeg" alt=""></p> <p><em>Image source: Adobe Stock / <a href="https://stock.adobe.com/images/farmers-hands-hold-organic-green-salad-vegetables-in-the-plot-concept-of-healthy-eating-non-toxic-food-growing-vegetables-to-eat-at-home/327299516">SUPERMAO</a>.</em></p> <h3 id="reveries-of-spring">Reveries of spring</h3> <p>We often describe spring as the season of rebirth and new life. This year, it is also the host to our most sumptuous dreams and fantasies. If our bodies are stuck in place, our minds are wandering to worlds that are sun-drenched, saturated, soft, and luxurious.</p> <p>The <a href="https://stock.adobe.com/collections/vcEFDEldJZDnXA7UHIqThHU1USm3y2Lj">Dreaming of Spring</a> Adobe Stock collection communicates a cottagecore-inflected fantasy — a sense of being in places unstuck from time and technology. People wander in endless fields of daisies, poppies, and dandelions wearing simple rustic fabrics. Chickens and sheep remind us of our connection to the rest of nature’s creatures. Friends and lovers picnic in lush and picturesque settings where time seems to have stopped. 
There is a freedom and creativity that these dreamlike themes allow, with images that employ double exposure, abstraction, and playfully surreal compositions.</p> <p><img src="/hlx_a996078543798100626f817f8f14b1a6d5b040c8.jpeg" alt=""></p> <p><em>Image source: Adobe Stock / <a href="https://stock.adobe.com/images/young-woman-and-alpaca-in-field/375016607">JovanaRikalo/Stocksy</a></em> <em>.</em></p> <p><em>Get inspired with our <a href="https://stock.adobe.com/collections/bLKnfC8p8gn3tyuCLjp7rrwaSMH02aEU">Spring 2021 Collection gallery on Adobe Stock</a>. You can also<a href="https://discord.gg/tZbBPRe"></a></em> <em><a href="https://discord.gg/tZbBPRe">join our Discord channel</a> for stock artists and follow the #call-for-content channel. When you are ready,<a href="https://contributor.stock.adobe.com/"></a></em> <em><a href="https://contributor.stock.adobe.com/">submit your latest content to Adobe Stock</a>.</em></p> </div> <div> <h2 id="featured-posts">Featured Posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/02/09/austere-romanticism-2021-design-trend-of-cottagecore-dreams.html#gs.shpwhw">https://blog.adobe.com/en/publish/2021/02/09/austere-romanticism-2021-design-trend-of-cottagecore-dreams.html#gs.shpwhw</a></li> <li><a href="https://blog.adobe.com/en/2021/01/26/positively-colorful-adobe-stocks-2021-visual-trend-mood-boosting-color.html">https://blog.adobe.com/en/2021/01/26/positively-colorful-adobe-stocks-2021-visual-trend-mood-boosting-color.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/01/14/adobe-stock-motion-audio-creative-trends-2021.html">https://blog.adobe.com/en/publish/2021/01/14/adobe-stock-motion-audio-creative-trends-2021.html</a></li> </ul> </div> <div> <p>Topics: Creativity, Creative Inspiration &amp; Trends, Design, Illustration, Photography, Creative Cloud,</p> <p>Products: Creative Cloud, Stock,</p> </div> </div>

February 16, 2021 12:00 AM

Adobe Japan celebrates being named a Best Company to Work For

Adobe Web Platform

<div class="embed embed-internal embed-internal-adobejapanbestcompanytoworkfor embed-internal-16"> <div> <h1 id="adobe-japan-celebrates-being-named-a-best-company-to-work-for">Adobe Japan celebrates being named a Best Company to Work For</h1> </div> <div> <img src="/hlx_80432212e698eb38c9a8c9687d95eed1594c8875.png" alt="For the fifth year in a row Adobe Japan is named a Best Company to Work For."> </div> <div> <p>By Adobe Life Team</p> <p>Posted on 02-16-2021</p> </div> <div> <p>For the fifth year in a row, Adobe Japan has been selected as one of the “Best Companies to Work For” by the Great Place to Work Institute, <a href="https://hatarakigai.info/ranking/japan/2021.html?utm_source=twitter&amp;utm_medium=social&amp;utm_campaign=tw200305?utm_source=twitter&amp;utm_medium=social&amp;utm_campaign=tw200515#main">coming in at no. 17 in the medium-size category</a> (100-999 employees), skyrocketing 23 spots from last year!</p> <p>The Best Companies to Work For ranking is held annually by the Great Place to Work Institute Japan (GPTW Japan), which scores an organization’s level of job satisfaction. In this year’s ranking, the focus of the list was around newly established and expanded benefit programs and the effectiveness of internal communications, especially in response to the coronavirus.</p> <p>Adobe has made the safety and health of its employees, as well as the health of their families and the people around them, its top priority. Since instructing employees to work from home in March 2020, the Adobe Japan team has introduced several new benefit programs. Here is a snapshot of what has made a big impact in the lives of our employees.</p> <h3 id="global-days-off">Global days-off</h3> <p>While many employees feel that working from home has been productive, there is no doubt that it is still a challenge, especially when it comes to balancing their work life with their personal life. 
To address this issue, we have set up monthly global days-off for all our employees around the world. These global days-off allow <em>all</em> our employees to take a break together and focus on maintaining their wellbeing.</p> <h3 id="wellness-reimbursement">Wellness reimbursement</h3> <p>We know that having a healthy lifestyle is a top priority for our employees. To that end, we expanded our Wellness Reimbursement program, which covers the cost of wellbeing resources that help employees and their families live healthy lives. Resources that we cover include gym equipment, virtual fitness classes, running shoes, and more.</p> <h3 id="work-from-home-reimbursement">Work from home reimbursement</h3> <p>In order to create a comfortable office setup at home, we have given employees a work from home reimbursement fund. This covers computer equipment, desks, chairs, and everything an employee needs to be successful at home. We even provide a separate internet reimbursement!</p> <h3 id="transparent-communication">Transparent communication</h3> <p>Outside of new benefits, we also know that employees want transparent communication. To that end, our leadership team gives frequent updates around the future of work, business goals, and more. Not only that, but monthly newsletters have also helped employees with wellbeing tips and career development resources. Our Adobe Japan team regularly looks forward to monthly virtual happy hours hosted by the Adobe Japan president, Jim McCready!</p> <h3 id="supporting-our-customers">Supporting our customers</h3> <p>Our Adobe Japan team is also especially dedicated to our customers, which is why they’ve gone above and beyond to help them succeed. As the effects of the coronavirus continue, Adobe has introduced <a href="https://www.adobe.com/covid-19-response.html">several support programs</a> to ensure that our customers keep their businesses moving forward. 
This includes implementing new capabilities in our products to improve collaboration, hosting monthly virtual events that teach customers new ways to use our products, and more. We also established the <a href="https://www.adobe.com/about-adobe/creative-residency/community-fund.html">Adobe Creative Residency Community Fund</a> to help visual creators stay on track and achieve their dreams.</p> <p>Even in the wake of the coronavirus, Adobe Japan has made it an absolute priority to create a comfortable work from home environment for its employees. We are honored to receive this recognition and we couldn’t be prouder of our Adobe Japan team, who have been incredibly resilient during these challenging times. As business development manager Hiroto Ryuzaki sums it up, “I feel we have a culture that encourages people to think creatively and generate new ideas. We are then empowered to put those ideas into action to overcome challenges together.”</p> <div class="embed embed-internal embed-internal-adobecareers embed-internal-adobelife"> <div><p><img src="/hlx_517674f7ac3435de41eb5f1470e52828fa149895.png" alt=""></p> <h3 id="adobe-careers">Adobe Careers</h3> <p>We believe in hiring the very best and are committed to creating exceptional work experiences for all. Great ideas can come from everywhere in the organization, and we know the next big idea could be yours. 
Let’s create experiences that matter.</p> <p><a href="https://www.adobe.com/careers.html">Explore opportunities</a></p></div> </div> </div> <div> <h2 id="featured-posts">Featured posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/02/09/forbes-award-americas-best-employers.html">https://blog.adobe.com/en/publish/2021/02/09/forbes-award-americas-best-employers.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/02/09/founders-award-proudest-career-moments.html">https://blog.adobe.com/en/publish/2021/02/09/founders-award-proudest-career-moments.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/02/01/fortune-worlds-most-admired-companies-adobe.html#gs.toxu8g">https://blog.adobe.com/en/publish/2021/02/01/fortune-worlds-most-admired-companies-adobe.html#gs.toxu8g</a></li> </ul> </div> <div> <p>Topics: Adobe Life, Brand, Adobe Culture, Adobe Life - Asia</p> <p>Products:</p> </div> </div>

February 16, 2021 12:00 AM

February 11, 2021

Release Notes for Safari Technology Preview 120

Surfin’ Safari

Safari Technology Preview Release 120 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 271358-271794.

Web Inspector

  • Elements
    • Fixed RTL content inside elements being reversed and unreadable (r271458)
    • Font details sidebar panel
      • Improved line wrapping of table row titles (r271528)
      • Updated fractional variation axis ranges and default values to not be rounded (r271620)
      • Changed the “Historical Figures” section name to “Alternate Glyphs” (r271612)
  • Sources
    • Allowed breakpoint actions to be evaluated as a user gesture (r271373)
  • Console
    • Fixed bidi confusion when evaluation result has RTL text (r271726)


Scrolling

  • Changed slow-scrolling reasons to not propagate across frame boundaries (r271508)
  • Fixed scroll-chaining to trigger before the complete end of overscroll (r271730)
  • Fixed scroll-padding to affect paging operations (r271788)
  • Fixed scroll-snap points to be triggered during programmatic scroll (r271439)
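For context, here is a minimal sketch of a snap-scrolling container where these scroll-padding and scroll-snap fixes apply (the selectors and values are hypothetical, not taken from the release):

```css
/* Hypothetical example: a horizontally snapping carousel. */
.carousel {
  overflow-x: auto;
  scroll-snap-type: x mandatory;  /* snap points now also trigger during programmatic scrolls */
  scroll-padding: 0 2rem;         /* now also respected by paging operations */
}
.carousel > .slide {
  scroll-snap-align: start;       /* each slide snaps at the container's start edge */
}
```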


CSS

  • Added relayout for grid items when definiteness changes (r271745)
  • Added support for lab(), lch(), and color(lab ...) colors (r271362)
  • Fixed incorrect scroll-snap-align parsing (r271480)
  • Handled shapeMargin becoming NaN (r271738)
  • Implemented logical border-radius (r271447)
  • Included aspect-ratio in percentage resolution (r271375)
  • Supported transferred min/max block size for aspect-ratio (r271554, r271648)
  • Optimized :hover and :active style invalidation for deep trees and descendant selectors (r271584)
  • Updated font when resolving letter-spacing: calc(...) values (r271688)
  • Fixed a reversed transform animation not being applied alongside other transform animations (r271524)
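As a rough illustration of the new color notations and logical border-radius support (the selector and values here are arbitrary examples):

```css
/* Hypothetical example using the newly supported features. */
.swatch {
  background-color: lab(52% 40 59);  /* CIE Lab color notation */
  border: 4px solid lch(52% 72 50);  /* LCH color notation */
  /* Logical border-radius: the rounded corner follows the writing
     mode, flipping automatically in right-to-left text. */
  border-start-start-radius: 8px;
}
```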


JavaScript

  • Fixed super accesses on arrow functions defined as a class field (r271420)
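The super-access fix concerns code along these lines; this minimal sketch (class names made up) shows the expected behavior, where an arrow function defined as a class field has no `super` of its own and instead resolves it lexically through the field initializer:

```javascript
class Base {
  greet() {
    return "base";
  }
}

class Derived extends Base {
  // `super.greet()` resolves through the field initializer
  // to Base.prototype.greet, called with the instance as `this`.
  greet = () => super.greet() + "/derived";
}

console.log(new Derived().greet()); // "base/derived"
```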


Web API

  • Accelerated HTMLInputElement creation (r271672)
  • Changed to use the event loop to set the page title (r271514)
  • Changed elements with a non-integer tabindex to behave as if the tabindex was omitted (r271523)
  • Disabled the context menu item and menu bar item to toggle automatic spelling correction where autocorrect="off" (r271459)
  • Fixed elements in a table getting incorrectly selected in JavaScript (r271635)
  • Included an origin identifier when writing promised image data to the drag pasteboard (r271685)
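A hypothetical markup sketch of the tabindex change described above:

```html
<!-- Hypothetical example: a fractional tabindex value now behaves
     as if the attribute were omitted, so this div is not focusable. -->
<div tabindex="1.5">Treated as if tabindex were omitted</div>
<!-- An integer value still places the element in the tab order. -->
<div tabindex="0">Focusable via the keyboard</div>
```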


Media

  • Fixed video elements to ignore requests to enter/exit fullscreen until the current fullscreen mode change has completed (r271377)
  • Allowed MediaStream and non MediaStream backed videos to play together (r271698)
  • Changed to prevent two pages in the same process from playing media-stream-backed video elements at the same time (r271670)
  • Fixed videos not playing on Facebook Stories (r271725)
  • Fixed picture-in-picture video subtitles that stop updating when Safari is backgrounded (r271737)
  • Fixed playback failure on marketwatch.com (r271531)
  • Fixed Netflix controls to correctly fade out after entering fullscreen (r271656)
  • Fixed Facebook pausing video in picture-in-picture during scroll (r271470)
  • Introduced a MediaSessionGroupIdentifier (r271643)
  • Updated buttons of the media permission prompt (r271485)


WebRTC

  • Enabled WebRTC VP9 profile 0 by default (r271641)
  • Disabled verification timer in case of capture suspension (r271749)
  • Changed to notify capture state immediately on page close (r271640)

Web Audio

  • Addressed WebRTC live Opus audio stream stutters (r271575)


Accessibility

  • Implemented aria-braillelabel and aria-brailleroledescription (r271416)
  • Fixed AT-synthesized key events for common user actions such as increment or decrement (r271760, r271536)
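As a rough sketch of what the new attribute enables (the labels are invented for illustration): a connected braille display can receive a shorter, contracted label than the one a screen reader speaks.

```html
<!-- Hypothetical example: screen readers speak the aria-label, while
     a braille display shows the shorter aria-braillelabel. -->
<button aria-label="3 out of 5 stars" aria-braillelabel="3/5*">★★★☆☆</button>
```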

Payment Request

  • Changed to use the first item in shippingOptions even when it’s not selected (r271735)
  • Fixed constructor to throw an error if the same payment method is provided more than once (r271734)
  • Fixed issue where the shippingOption of the PaymentResponse is null after updateWith (r271703)

Speech Recognition

  • Made SpeechRecognition permission error more informative (r271381)
  • Updated media state for active speech recognition as it uses audio capture (r271636)

Private Click Measurement

  • Enabled output logs by default, including to Web Inspector (r271473)

Bug fixes

  • Fixed “Blocked Plug-in” being shown instead of a PDF (r271650)
  • Fixed combinations of nested perspective and transforms causing blurry layers on retina displays (r271388)
  • Fixed SVG reference filter chain with errors applying only some of the filters and producing incorrect output (r271785)
  • Removed explicit clamp to SRGB for Lab colors on CG platforms that support wide color (r271712)

February 11, 2021 06:00 PM

How Dell transformed customer experiences with Adobe Experience Cloud

Adobe Web Platform

<div class="embed embed-internal embed-internal-howdelltransformedcustomerexperienceswithadobeexperiencecloud embed-internal-11"> <div> <h1 id="how-dell-transformed-customer-experiences-with-adobe-experience-cloud">How Dell transformed customer experiences with Adobe Experience Cloud</h1> </div> <div> <img src="/hlx_3e6d4bba8d596d47c2af04a9887d8af1f2277a02.jpeg" alt="Woman using a laptop"> </div> <div> <p>By Sunil Menon</p> <p>Posted on 02-11-2021</p> </div> <div> <p>For over 35 years, Dell technology has powered work, play and school for people all over the world. The company was founded in 1984 when Michael Dell was just a young college student customizing computer upgrades for classmates, and today is one of the leading technology providers in the world. Since its inception, Dell has been laser-focused on empowering their customers to achieve their goals, and that mission continues today.</p> <h3 id="customer-convenience-as-the-north-star">Customer convenience as the north star</h3> <p>At the heart of Dell’s marketing efforts is customer convenience. This focus is evident across multiple touchpoints with the customer. Take, for example, Dell.com, the company’s site. The site has multiple sections tailored to simplify the search for the right technology. A new gamer who wants to play Fortnite may not know which computer or accessories they may need to meet the minimal requirements for gameplay. 
By navigating to the site’s gaming section, the customer can click on Fortnite as the game they want to play and be presented with options for the computers built for optimal performance.</p> <div class="embed embed-internal embed-internal-experiencecloud embed-internal-experiencecloud"> <div><p><img src="/hlx_34720ecd0ad7c3510509b6fa3a9337f0696639ab.png" alt=""></p> <h3 id="adobe-experience-cloud">Adobe Experience Cloud</h3> <p>AI-driven customer experience applications for marketing, analytics, advertising, and commerce.</p> <p><a href="https://www.adobe.com/experience-cloud.html">Learn more</a></p></div> </div> <p>Additionally, Dell recently made updates to its site to make shopping from home a little easier for their customers. The site was enhanced with <a href="https://www.dell.com/en-us/member/shop/laptops/new-13/spd/xps-13-9310-laptop">interactive 360-degree 3D demos</a> of products and <a href="https://content.hmxmedia.com/xps-15-9500-laptop-AR/index.html">augmented reality shopping widgets</a> (fueled by <a href="https://www.adobe.com/marketing/experience-manager-assets/dynamic-media.html">Adobe Dynamic Media</a>) to give customers a better feel for the products they are purchasing from the convenience of their own homes.</p> <p>Understanding that the customer experience extends beyond just Dell’s web properties, the company turned to Adobe applications to create custom and engaging experiences across email, mobile, and even their direct mail campaigns.</p> <h3 id="a-tailored-approach-to-customer-experience">A tailored approach to customer experience</h3> <p>And to better tailor the emails customers receive, the company took their email marketing in-house with the help of <a href="https://www.adobe.com/marketing/campaign.html">Adobe Campaign</a>, part of <a href="https://www.adobe.com/experience-cloud.html">Adobe Experience Cloud</a>. Through Adobe Campaign, Dell sends 1.6 billion emails to customers every year. 
The global transition to Adobe Campaign has enabled Dell to better understand how customers engage with their email and marketing campaigns. For example, the marketing team can now see how content promoting new product innovations is performing and make any necessary adjustments quickly to provide a better experience for their customers.</p> <p>Dell is also using <a href="http://www.adobe.com/marketing/target.html">Adobe Target</a> to personalize the customer experience, as well as <a href="https://www.adobe.com/analytics/adobe-analytics.html">Adobe Analytics</a> to measure the performance of their marketing campaigns.</p> <p>Customer-centricity truly is core to Dell’s DNA. Teams across the company are constantly testing new technologies and ways to improve the customer experience. After the successful rollout of Adobe Campaign for email marketing, Dell tested a new Adobe Campaign capability, SMS communications, just in time for the 2020 holiday season. Customers who opted into the pilot received text messages with offers on the hottest tech of the holiday season, making it easier for consumers to check off their gift lists.</p> <p>During the short pilot, Dell successfully delivered approximately 70,000 text messages to American customers and saw some great results. The pilot saw return on ad spend increase significantly, and click rates for text message promotions were 20 to 40 times higher than for email. Dell also discovered that pilot participants were purchasing higher-value products. In fact, 48 percent of customers in the pilot who made a purchase bought premium products.</p> <table> <thead> <tr> <th>Pull Quote</th> </tr> </thead> <tbody> <tr> <td><h2>“Adobe Campaign has given us the ability to get a deeper understanding of how our marketing campaigns perform. 
The scalability and flexibility to make campaign changes quickly have paved the way for us to expand our marketing channels and test more cutting-edge technology like artificial intelligence to give our customers even more personalized experiences in real-time.”</h2><p><strong>Nishanth Yata,</strong> head of digital transformation, applications development at Dell</p></td> </tr> </tbody> </table> <p>Dell is looking forward to testing and implementing new technologies to deliver engaging, personalized experiences to power the lives of its customers. And the lessons learned from this past holiday season and future innovation will pave the way for many years of leadership in delivering exceptional digital experiences.</p> </div> <div> <h2 id="featured-posts">Featured posts</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/01/26/how-adobe-experience-cloud-helped-powersports-company-brp-take-adventure-to-the-next-level.html">https://blog.adobe.com/en/publish/2021/01/26/how-adobe-experience-cloud-helped-powersports-company-brp-take-adventure-to-the-next-level.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2020/12/23/fighting-covid-19-how-cdc-delivering-critical-information.html">https://blog.adobe.com/en/publish/2020/12/23/fighting-covid-19-how-cdc-delivering-critical-information.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/01/26/how-discount-tire-became-a-market-leader-by-focusing-on-cx.html">https://blog.adobe.com/en/publish/2021/01/26/how-discount-tire-became-a-market-leader-by-focusing-on-cx.html</a></li> </ul> </div> <div> <p>Topics: Digital Transformation, Customer Stories, High Tech, Experience Cloud,</p> <p>Products: Experience Cloud,</p> </div> </div>

February 11, 2021 12:00 AM

Ramy brings a fresh perspective through the lens of a first-generation Muslim-American

Adobe Web Platform

<div class="embed embed-internal embed-internal-ramybringsafreshperspectivethroughthelensofafirstgenerationmuslimamerican embed-internal-11"> <div> <h1 id="ramy-brings-a-fresh-perspective-through-the-lens-of-a-first-generation-muslim-american">Ramy brings a fresh perspective through the lens of a first-generation Muslim-American</h1> <p>Joanna Naugle, editor of the award-nominated TV show <em>Ramy</em>, shares editing tips and tricks and walks through one of her favorite scenes from the hit show</p> </div> <div> <p><img src="/hlx_0904ef9380dde4a540a57ca54116bf78ef842fc9.jpeg" alt=""></p> <p><em>Photo from Hulu</em></p> </div> <div> <h2 id="by-adobe-communications-team">By Adobe Communications Team</h2> <p>Posted on 02-11-2021</p> </div> <div> <p>Nominated for multiple Emmy and Critics Choice Awards, the acclaimed series <em>Ramy</em> follows a first-generation Egyptian-American man as he tackles heavy themes of self-discovery, faith, commitment, and compassion within and outside of his community. The show combines comedy and drama to take the audience on Ramy’s spiritual journey while he struggles between the values of his Muslim community and life as a millennial living in the moment.</p> <p><a href="http://www.senior-post.com/">Senior Post</a> lead editor and co-owner Joanna Naugle breaks down a key scene for us in the behind-the-scenes video below, and discusses her work on <em>Ramy</em>, the transition to a remote workflow halfway through post-production, and balancing the unique tones to achieve the overall creative vision for the show’s second season.</p> <div class="embed embed-internal embed-internal-premierepro embed-internal-creativecloud"> <div><p><img src="/hlx_6437eac3f9725128f1febd131fffb983cdd30b0b.png" alt=""></p> <h3 id="premiere-pro">Premiere Pro</h3> <p>Video editing that’s always a cut above.</p> <p><a href="https://www.adobe.com/products/premiere.html">Learn more</a></p></div> </div> <table> <thead> <tr> <th>Block Embed</th> </tr> </thead> 
<tbody> <tr> <td><a href="https://www.youtube.com/watch?v=wFiA4zOauHU&amp;feature=youtu.be">https://www.youtube.com/watch?v=wFiA4zOauHU&amp;feature=youtu.be</a></td> </tr> </tbody> </table> <h3 id="how-and-where-did-you-first-learn-to-edit">How and where did you first learn to edit?</h3> <p>I first experimented with editing by using in-camera techniques on my home video camera growing up and then using iMovie for school projects in high school. However, I really fell in love with editing while attending Tisch School of the Arts and taking a class where we shot on film and then cut it together on a Steenbeck. This was an incredibly formative experience for me, and literally cutting and taping together pieces of film made me realize how crucial editing was to the storytelling experience and made me want to specialize in this part of the filmmaking process. In 2012, I met Josh Senior and later became a co-owner of Senior Post. Everything I’ve cut at our post house, whether it be a feature film, television show, comedy special, music video, commercial or documentary, has been using <a href="https://blog.adobe.com/en/topics/premiere-pro.html">Adobe Premiere Pro</a>.</p> <h3 id="how-do-you-begin-a-projectset-up-your-workspace">How do you begin a project/set up your workspace?</h3> <p>I’m lucky that we had really talented and organized assistant editors on <em>Ramy</em>, and so by the time the projects came to me, all the footage was synced and organized by scene. There were always two cameras shooting (sometimes three) so the multi-cam function was crucial for keeping things organized. I also had the assistant editors create line by lines for each scene, putting each take for each line in a sequence so I could easily watch them all back-to-back and choose the best option for the scene. Those line by lines proved to be supremely helpful in the long run, especially when Ramy wanted to see line readings when we were working remotely. 
I could just export those sequences and send to him to review quickly.</p> <h3 id="tell-us-about-a-favorite-scene-or-moment-from-this-project-and-why-it-stands-out-to-you">Tell us about a favorite scene or moment from this project and why it stands out to you.</h3> <p>I have a couple of favorite moments from this season, but I had the most fun cutting episode #4, where Ramy and Zainab go to Prince Bin Khaled’s estate to ask for money. There were so many absurd moments (the breast milk, Mia Khalifa, the parrot) and we really tried to create a foreboding and bizarre atmosphere while they were at this strange location. The archery scene was especially fun to build out. I really wanted to emphasize the twist halfway through the scene, where we think Ramy has succeeded but then Bin Khaled turns the tables on him. We had to find that right balance between making sure the stakes felt real while still finding moments of humor throughout that scene. And then of course the episode ends with a surprising proposition from Sheikh Ali, and making that moment feel sincere and vulnerable was important for sticking the landing.</p> <h3 id="what-were-some-specific-post-production-challenges-you-faced-that-were-unique-to-your-project-how-did-you-go-about-solving-them">What were some specific post-production challenges you faced that were unique to your project? How did you go about solving them?</h3> <p>Going into quarantine halfway through editing the season was definitely challenging to say the least! But luckily our post producer, Josh Senior, and our post supervisor, Andrew Rowley, figured out a contingency plan quickly — we copied all the transcoded media onto hard drives and sent one home with each of our editors, assistant editors, and online editors. Then we were easily able to share projects remotely by simply reconnecting to the media on each of our drives. 
Anytime someone added a new asset, we all had to be sure we labeled it the same way and downloaded it to our respective hard drives. Another challenge was editing a show that has a huge component in a language I do not speak. Salah Anwar, our wonderful assistant editor, was hugely helpful with that. In Arabic-heavy scenes, he would go through in advance and create subtitles to give me and the other editor, Matthew Booras, context. He would also use markers to call out takes in Arabic that were especially funny, which allowed us to create the best possible versions of the scenes.</p> <h3 id="what-adobe-tools-did-you-use-on-this-project-and-why-did-you-choose-them">What Adobe tools did you use on this project and why did you choose them?</h3> <p>I used Premiere Pro to edit <em>Ramy</em>. The multi-cam function is incredibly helpful for our show since we’re shooting with multiple cameras. I was hesitant to use it at first on Season 1 because I was worried it would be too complicated, but it was really easy to set up. Without it, watching dailies would have taken two or three times as long. I also used markers way more often on this project than before, since we didn’t have the luxury of being in the same room as our assistant editors during project handoffs. These markers served as a shorthand for reminders or details about the project that we didn’t want to get lost in the shuffle. When sending projects back and forth, we also frequently used the command that allows you to export a sequence as its own Premiere Pro project. This made the files much smaller and more streamlined, and required less relinking and uploading time between different users.</p> <p><img src="/hlx_6ff71fbbfae9beabcad080340d05107a58eca9c1.png" alt=""></p> <h3 id="what-do-you-like-about-premiere-pro-andor-any-of-the-other-tools-you-used">What do you like about Premiere Pro, and/or any of the other tools you used?</h3> <p>My favorite thing about Premiere Pro is that it’s so easy to get started. 
You can just add media and go, which is what makes it great for editing so many different types and sizes of projects. It’s also great to be able to integrate the other Adobe programs so easily, like <a href="https://www.adobe.com/products/aftereffects.html?sdid=KKQOW&amp;mv=search&amp;kw=test&amp;ef_id=Cj0KCQiA3smABhCjARIsAKtrg6KA37YhfcUHhxL8dFi4xXYsaWtiyrItbHFVjLRkLGg9LDo8FEuBFr4aAhmCEALw_wcB:G:s&amp;s_kwcid=AL!3085!3!469198201844!e!!g!!adobe%20after%20effects&amp;gclid=Cj0KCQiA3smABhCjARIsAKtrg6KA37YhfcUHhxL8dFi4xXYsaWtiyrItbHFVjLRkLGg9LDo8FEuBFr4aAhmCEALw_wcB">After Effects</a> and <a href="https://blog.adobe.com/en/topics/photoshop.html">Photoshop</a>. Having files auto-update across the programs saves a lot of time and energy. I’m also a huge fan of the new title tool; it keeps the project so much cleaner to not have individual files in bins for each corresponding piece of text.</p> <h3 id="whats-your-favorite-hidden-gem-workflow-hack-in-adobe-creative-cloud">What’s your favorite hidden gem/ workflow hack in Adobe Creative Cloud?</h3> <p>One of my favorite features of Premiere Pro is the ability to customize label colors. I’m an incredibly visual person and so while working on episodes, I label each scene a different color. This makes it super easy to know exactly where one scene starts and the next finishes, and I can easily zoom out and see how much of the run time each scene takes up proportionally. If my sequence doesn’t look clean and organized, I can’t concentrate on making creative choices so keeping things color coded and simplified is key for my productivity. Also changing the “Nudge Selection Up” and “Nudge Selection Down” commands to one of my key shortcuts totally changed my workflow. I use that command constantly and only needing one keystroke makes me much more efficient as an editor.</p> <h3 id="who-is-your-creative-inspiration-and-why">Who is your creative inspiration and why?</h3> <p>Thelma Schoonmaker is definitely an editing hero of mine. 
I remember seeing her win the Oscar for editing “The Departed” and it was just as I was getting ready to attend film school and I thought, “maybe I could be her one day.” She made my wildest dreams seem possible and I really admire the longtime collaboration between her and Martin Scorsese. The Coen Brothers are also a huge creative inspiration for me; they are so skilled at seamlessly shifting between genres within their films to create a totally unique experience. “Fargo” is my all-time favorite film, and I’m definitely drawn to similar projects that blur the line between comedy and drama, just like “Ramy” does.</p> <h3 id="whats-the-toughest-thing-youve-had-to-face-in-your-career-and-how-did-you-overcome-it-what-advice-do-you-have-for-aspiring-filmmakers-or-content-creators">What’s the toughest thing you’ve had to face in your career and how did you overcome it? What advice do you have for aspiring filmmakers or content creators?</h3> <p>I’ve had some amazing creative collaborations in my career and some that didn’t pan out the way I had hoped, and when that happens it can be easy to beat yourself up or obsess over what you did wrong during the process. It took some time, but I’ve learned not to take these things personally — there will be people who you communicate really well with and you’ll feel like equal creative partners working and creating together. And there will be other people who are incredibly talented and creative, but they just don’t know how to convey their ideas or engage with you as an editor in order to accomplish their vision. There has to be a level of trust between the director and editor in order for that collaboration to be most effective and fruitful. So, the most important thing you can do starting out as an editor is to work with as many different directors and collaborators as possible, so you can find the people who are the best creative match for you and build that trust with them while working together. 
I’m lucky to count Ramy as one of those successful collaborators for me, and he strikes a great balance between having a singular and focused creative vision while still leaving room for input and suggestions from me and the rest of the team.</p> <h3 id="share-a-photo-of-where-you-work">Share a photo of where you work.</h3> <p><img src="/hlx_80a91b238c9fc0fb73066349e440c2c8326b1c4c.jpeg" alt=""></p> <h3 id="whats-your-favorite-thing-about-your-workspace-and-why">What’s your favorite thing about your workspace and why?</h3> <p>It’s been an adjustment working from home the past few months but the best part is sharing the space with my fiancé, Michael Litwak. He is a writer/director and so it’s really great to be able to bounce ideas off each other whenever we’re at a creative roadblock. I’ve also gotten very used to working in sweatpants and slippers… it’s going to be tough going back to wearing jeans and real shoes every day.</p> <p><em>Ramy</em> is available to stream on Hulu.</p> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/mPDQ5bUsZxM?rel=0&amp;v=mPDQ5bUsZxM&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> </div> <div> <h2 id="featured-posts">Featured posts</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/01/22/exploring-a-new-side-of-an-icons-fight-for-justice-and-equality.html#gs.rghis7">https://blog.adobe.com/en/publish/2021/01/22/exploring-a-new-side-of-an-icons-fight-for-justice-and-equality.html#gs.rghis7</a></li> <li><a 
href="https://blog.adobe.com/en/2021/01/21/disability-civil-rights-at-the-forefront-of-netflix-crip-camp.html#gs.rciw19">https://blog.adobe.com/en/2021/01/21/disability-civil-rights-at-the-forefront-of-netflix-crip-camp.html#gs.rciw19</a></li> <li><a href="https://blog.adobe.com/en/2020/11/30/scare-me-pushes-the-boundaries-of-horror-anthology.html#gs.rghk2u">https://blog.adobe.com/en/2020/11/30/scare-me-pushes-the-boundaries-of-horror-anthology.html#gs.rghk2u</a></li> </ul> </div> <div> <p>Topics: Media &amp; Entertainment, News, Creative Cloud,</p> <p>Products: Creative Cloud, Premiere Pro,</p> </div> </div>

February 11, 2021 12:00 AM

Introducing your 2021 Marketo Engage Champions!

Adobe Web Platform

<div class="embed embed-internal embed-internal-introducingyour2021marketoengagechampions embed-internal-11"> <div> <h1 id="introducing-your-2021-marketo-engage-champions">Introducing your 2021 Marketo Engage Champions!</h1> <p>We are proud to announce the 10th anniversary class of our most elite group of Marketo Engage advocates, the 2021 Marketo Engage Champions!</p> </div> <div> <img src="/hlx_964d1da5191ed7e94ca8bc47380e1d700f3bf296.png" alt="Colorful abstract art"> </div> <div> <p>By Jess Darnell</p> <p>Posted on 02-11-2021</p> </div> <div> <p>Today we are thrilled to announce the 2021 Marketo Engage Champions! This year marks the 10th anniversary of the Marketo Engage Champion program, which recognizes and celebrates the most passionate, knowledgeable Marketo Engage users across the globe.</p> <p>Since its inception ten years ago, the Marketo Engage Champion program has grown in prestige and has become a career catalyst for many Marketo Engage users.</p> <p>Each member is a Marketo Engage expert and avid Marketo Engage ambassador, focused on actively sharing their knowledge and expertise with fellow users. They have demonstrated outstanding leadership and are loyal advocates who consistently go above and beyond to support others on their journey with Marketo Engage.</p> <p>This group will embark on a year filled with career development, knowledge sharing, exclusive speaking opportunities, connection with fellow Marketo Engage users, and opportunities to share feedback with Adobe product teams to inform the Marketo Engage product roadmap.</p> <p>Here’s how a few Marketo Engage Champions describe their experience with the program:</p> <p><em>“The Champion program has grown my career by leaps and bounds. 
I have met the smartest, friendliest and most ambitious group of like-minded people in this program and I am forever grateful for the opportunities (and laughs) it has given me!” Chelsea Kiko, Marketing Automation Manager, McGraw-Hill</em></p> <p><em>“Being a Marketo Engage Champion has accelerated my career and given me access to the top talent in marketing operations.” Christina Zuniga, Sr. Marketing Automation Manager, Databricks</em></p> <p>Please join us in congratulating Adobe’s 2021 Marketo Engage Champions!</p> <p>Get to know our 2021 Marketo Engage Champions by checking out their profiles below:</p> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/c65570a3d937f02825427646a4b2d377fe539c2f#image.png" alt=""><p></p><p><a href="https://www.linkedin.com/in/ajay-sarpal/">Ajay Sarpal</a>, Founder and CEO, Unicorn Martech DBA Ajay Sarpal</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/cb8b9ca16ebcae72fbcadd725581a90cdbb650ea#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/alexgreger/">Alex Greger</a>, Marketing Automation Manager, Skillsoft</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/f7953a9d6a7aabe7e8024f4224a17045f51378f2#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/alexandra-lane-609b8752/">Alexandra Lane</a>, Manager, Marketing Automation, John Hancock</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/5d286ffc7b81a3157203a1cc884f927f5e552617#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/heyamandat/">Amanda Thomas</a>, Marketing Technology Consultant, Etumos</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img 
src="https://hlx.blob.core.windows.net/external/5cbfba28b54e03b94d234e2d582d89f13b9d5bc7#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/amitjainmaexpert/">Amit Jain,</a> Marketo IT Specialist - Sales and MarTech, Atlassian</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/81022272b8bd7463d43038d08f989045c8c3406e#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/amygoldfine/">Amy Goldfine,</a> Senior Marketing Operations Manager, Iterable</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/52a7295e8b4f62f0811d6dd53a72c73681baa5be#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/andy-caron/">Andy Caron</a>, Head of Martech Consulting, Revenue Pulse</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/c79c904511e04c394ef8d4a21ccf5b5ad101684a#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/bethmassura/">Beth Massura,</a> Associate Platform Operations Consultant, Etumos</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/7ac193b0765039663d6657bf3298cf4597219911#image.png" alt=""><p></p><p><a href="https://www.linkedin.com/in/brandon-benjamin/">Brandon Benjamin</a>, Manager, Marketing Technology, PagerDuty</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/80583ebc7d473e5f21ba9f273a2df7780744857a#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/britneynyoung/">Britney Young</a>, Marketing Operations Specialist, McKesson</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/7efe08883758097548f27bd6b8a7c934188600ea#image.jpeg" 
alt=""><p></p><p><a href="https://www.linkedin.com/in/brookembartos/">Brooke Bartos</a>, Senior Manager, Marketing Operations, Walker Sands</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/e170cd071e41f1fd2f5932ed88be5219c6e259c1#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/carriejchandler/">Carrie Chandler</a>, Digital Marketing Manager, Genworth Mortgage Insurance</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/549b516ed0aa1a934371c9d16fb1d73e2c6c60f7#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/chelsea-kiko/">Chelsea Kiko</a>, Marketing Operations Manager, McGraw Hill</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/5ac4cc9b04793baf06d03ec708d0dd55c7a3a2f4#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/chiarariga/">Chiara Riga</a>, Marketing Operations Manager, Digital Shadows</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/6e4a030750ac03eefe4e85c2574336787465d9f9#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/chloepott/">Chloe Pott</a>, Senior Manager, Marketing Operations &amp; Analytics, Sqreen</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/4a71bff5e1f61580e31e82caa66fc4216279dc18#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/chris-saporito-26374530/">Chris Saporito</a>, Sr. 
Manager, Marketing Operations, Paycor</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/8fd06b3a8cb16cec0ff565d29b35eda64edfc4ce#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/chriswilcoxmba/">Chris Wilcox</a>, Director, Digital Marketing, Hartford Funds</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/3bd5918d0a7222e1e61737b8b80d7424b2d9bf5b#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/chriswillis96/">Chris Willis</a>, Global Marketing Operations Manager, Integration Marketing Strategies, Trimble</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><img src="/hlx_65ad0c5baeb29e559c4422c85c53832dd2221c38.jpeg" alt=""><a href="https://www.linkedin.com/in/christinazuniga/">Christina Zuniga</a>, Senior Marketing Operations Manager, Databricks</td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/85f86ea427e4b0cc03348e4929a7289050a515aa#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/baylesscorey/">Corey Bayless</a>, Marketing Automation Program Manager II, Amazon Web Services</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/fb2f80e92ec530aeac9cc6bd49fa468be4db81c6#image.png" alt=""><p></p><p><a href="https://www.linkedin.com/in/courtneytobe/">Courtney Tobe</a>, Manager of Marketing Operations, AvidXchange Inc.</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/72545923b8a48662b53d93fa5e2fa377167bced4#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/diederikmartens/">Diederik Martens</a>, Chief Marketing Technologist, Chapman Bright</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> 
<th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/1aaea473f352efc432340cca0f0cc24a4b7e1318#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/enricodeleon/">Enrico de Leon,</a> Senior Associate, Digital Marketing, Altisource</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/e9aedc705a23c50f2df2d0bdc553752f30936925#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/gracebrebner/">Grace Brebner</a>, Senior Manager of Client Services, Digital Pi</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/105c98df4c5a052a4af26ad8482b727622cd8a67#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/gregahsue/">Greg Ah Sue</a>, Director of Marketing Operations, Blip</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/743186e08411735e20031e37c410d3510002d959#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/helen-abramova-8b308935/">Helen Abramova</a>, Marketing Technology Lead, Verizon</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/6ee0ec8abce0f9e9a1141a1f0d2d34aa0ba0a96b#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/jdmarketer/">JD Nelson</a>, Senior Manager, Marketing Operations, Vimeo</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/84172458a8406fada9d9ac53726e20cf449fe6a5#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/jenniferdimaria/">Jenn DiMaria</a>, Director of Client Services, Digital Pi</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img 
src="https://hlx.blob.core.windows.net/external/989870a3d49e89f53a28b1a2c7b103e0eb11301e#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/jenny-robertson-5a326439/">Jenny Robertson</a>, Senior Vice President, Technology Solutions &amp; Architecture, ANNUITAS</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/7df87e06ecd3e324b5bbe8e93ddffcd36764fecc#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/jesskao/">Jessica Kao</a>, Director of Client Strategy, Digital Pi</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/acf3260e27c144b5fb83aee1c51c57df219e1f17#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/grundyje/">John Grundy</a>, Marketing Program Manager, Booking.com</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/b8b184081df03cbcea20f6179e17f323e1c8e50c#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/joshypickles/">Josh Pickles</a>, Customer Experience Lead, Crimson Global Academy</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/5a98672eb25a1bb5d89acf796eae80d2d1233a76#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/julijames/">Julz James</a>, Sr. 
Manager, Marketing Operations, Blue Prism</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/d96f589c2a9d0d6506422923a0b03157f5361508#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/kimberlygalitz/">Kimberly Galitz</a>, Marketing Operations Manager, Bandwidth</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/3f5a12b357b67dee2b98f5fb3f98b0f34043fb76#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/kylehmccormick/">Kyle McCormick</a>, Senior Manager, Marketing Technology and Operations, Palo Alto Networks</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/2ce0f2433e6fc86994a33e0189749fa957171e8d#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/lauren-mccormack-7b20a62/">Lauren McCormack</a>, Senior Manager, Digital and Marketing Automation, Neo4J</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/c6e10752f172e891ba739c5ac094f7ba75547dbe#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/makikohultz/">Makiko Hultz</a>, Sr. 
Digital Marketing Supervisor, Honeywell</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/f6e0b09c4c302028f303a60eb4dcc86e04ad9ab1#image.png" alt=""><p></p><p><a href="https://www.linkedin.com/in/maxmaurier/">Maxwell Maurier,</a> Director, Marketing Operations &amp; Analytics, Druva</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/ae8b2953870e3e2149d42d19636d9ff3c9479e6f#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/megancrone/">Megan Crone</a>, Marketing Automation Administrator, Insight</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/cfd9671716d60fe215cc71de67133101f1c9d0e9#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/melissadaymarketing/">Melissa Day</a>, Global Digital Marketing Leader, Chemours</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/5eb1cf80dba1b2b182b6d39424850746b907a46a#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/mjtucker647/">Michael Tucker</a>, Marketing Technology Consultant/Owner, The Conversion Store</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/f80cbc0cc860b3cc079b54d8f096c91d20d3c53a#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/moni-oloyede/">Moni Oloyede</a>, Sr. 
Marketing Operations Manager, Fidelis Cybersecurity</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/eb907e6e32a2e3af0c5b372b91633211958ef92f#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/natalie-kremer/">Natalie Kremer</a>, Marketing Automation Manager, McGraw Hill</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/2b94bebd4ce5f820ddea86abb24bf9ee916de15d#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/weilioceane/">Oceane Li</a>, Marketing Operations Manager II, Google</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/cc2ac4e404da9ad098f02876dfcd26c6540eccc7#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/phillip-wild-456a4431/">Phillip Wild</a>, Global Lead, Marketing Technology, G Adventures</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/cf50848dfb2ccb0cac0a173d6bc1b697e46aaf8c#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/shaynawilczynski/">Shayna Wilczynski,</a> Director of Revenue Operations, JRNI</p></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/fec5f24cc41ed0630b89e8eda46e99edafdd39f3#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/taishi-yamada/">Taishi Yamada</a>, Director, Enterprise Marketing, Softbank</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/a8c0219e6bc600b39c0015b5d276f1272592fc64#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/veronicalaz/">Veronica Lazarovici</a>, Marketing Operations Consultant, Hotelbeds Holdings</p></td> </tr> </tbody> </table> <table> <thead> <tr> 
<th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><p></p><img src="https://hlx.blob.core.windows.net/external/502ca4020f04a0b6ff49f4624792e81e538db481#image.png" alt=""><p></p><p><a href="https://www.linkedin.com/in/vladislav-vagner-seo/">Vladislav Vagner</a>, Sr. Marketing Automation Admin, Mitel</p></td> <td><p></p><img src="https://hlx.blob.core.windows.net/external/94581fe385313e1576b27ed4c669c92005c57b67#image.jpeg" alt=""><p></p><p><a href="https://www.linkedin.com/in/socialwarren/">Warren Stokes</a>, Lead Consultant, Kniva Ltd</p></td> </tr> </tbody> </table> <div class="embed embed-internal embed-internal-marketoengage embed-internal-experiencecloud"> <div><p><img src="/hlx_4e9ab0dbcb1130b4a6f3d5bd44dcb7213fc322ee.png" alt=""></p> <h3 id="marketo-engage">Marketo Engage</h3> <p>Radically transform your customer experience management by aligning sales and marketing at every touchpoint.</p> <p><a href="https://www.adobe.com/marketing/marketo.html">Learn more</a></p></div> </div> </div> <div> <h2 id="featured-posts">Featured Posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/2021/01/14/how-to-build-a-marketing-team-ready-to-win-2021.html">https://blog.adobe.com/en/2021/01/14/how-to-build-a-marketing-team-ready-to-win-2021.html</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/01/26/new-year-new-marketing-2021.html">https://blog.adobe.com/en/publish/2021/01/26/new-year-new-marketing-2021.html</a></li> <li><a href="https://blog.adobe.com/en/2021/01/07/healthcare-goes-digital-and-personal.html#gs.t6gmxm">https://blog.adobe.com/en/2021/01/07/healthcare-goes-digital-and-personal.html#gs.t6gmxm</a></li> </ul> </div> <div> <p>Topics: Future of Work, News, Digital Transformation, Commerce, Experience Cloud,</p> <p>Products: Marketo&nbsp;Engage,</p> </div> </div>

February 11, 2021 12:00 AM

Scaling fantastic user experiences, over drinks

Adobe Web Platform

<div class="embed embed-internal embed-internal-scalingfantasticuserexperiencesoverdrinks embed-internal-11"> <div> <h1 id="scaling-fantastic-user-experiences-over-drinks">Scaling fantastic user experiences, over drinks</h1> <p>Omnichannel at ZX Venture’s design system utilizes Adobe XD and Adobe Creative Cloud to maintain quality of user experience across brands, continents, languages, and cultures.</p> </div> <div> <img src="/hlx_c585f1f8e349b4f7032f584a3146dfb4b99ee813.png" alt=""> </div> <div> <p>By Courtney Spencer</p> <p>Posted on 02-11-2021</p> </div> <div> <p>There’s always been a universal, enjoyable aspect to getting together with your friends and family over beverages. People talk, build relationships, and enjoy the company of others. It’s one of the most time-honored ways of bringing people together.</p> <p>As the global head of design, omnichannel at ZX Ventures, DTC Group inside AB-InBev, I rely on design and user experiences to translate this camaraderie to our digital experiences. Our venture consists of startups that sell beer straight to customers — for example subscription websites, traditional e-commerce websites, or courier apps. It’s an exciting world to be a part of.</p> <p>As part of the omnichannel team, my role encompasses product and marketing design, product and marketing, overseeing the numerous ventures and their design teams. I support and advise our portfolio companies from design operations as well as personally designing for them. Supporting our businesses and bringing designs to life would be impossible without a great framework and toolset in place.</p> <p>For us, the framework is a design system and templated approach to using it, and the toolset is <a href="https://www.adobe.com/products/xd.html">Adobe XD</a> and <a href="https://www.adobe.com/creativecloud.html">Adobe Creative Cloud</a>. 
With this approach, we’ve been able to effectively scale our work and maintain quality and consistency of experience across dozens of different brands — working across continents, languages, and cultures.</p> <div class="embed embed-internal embed-internal-xd embed-internal-creativecloud"> <div><p><img src="/hlx_14c20520abab7f5a26508a03e5607d6f48395803.png" alt=""></p> <h3 id="adobe-xd">Adobe XD</h3> <p>Create and share designs for websites, mobile apps, voice interfaces, games, and more.</p> <p><a href="https://www.adobe.com/products/xd.html">Learn more</a></p></div> </div> <h3 id="using-design-systems-to-manage-dozens-of-brands">Using design systems to manage dozens of brands</h3> <p>When overseeing a portfolio of early stage businesses, it is important to prioritize more than 20 brands — we’re essentially popping in and out of our various businesses at any given time and solving problems and helping however we can. Our job is to ensure that each brand stays true to its vision and what makes it special, while also staying consistent with the overall branding and quality standards of ZX Ventures and Anheuser-Busch.</p> <p>We have a master ZX Ventures design system, which we use as a template for each brand we invest in — this template provides us with a brand foundation when building digital experiences, a way to avoid any redundancies (why build a new button when you have a great one already?). Additionally, having this template and sharing it with each brand gives them and us a chance to build on top of our central vision to create something truly unique. We achieve this personalization in two key ways.</p> <p>First, we use data from existing ventures to inform how our design system visually communicates the value proposition of a specific brand; this will then be constant in each single application for that brand, going forward. We then identify any variables — like colors, logos, and copy — that we curate for a specific culture market or business type. 
For instance, while copy changes from language to language, the tone and general layout remains the same; while the logo template is constant across different businesses, the brand name will differ.</p> <p>Second, we rely on iterative testing to make sure we get it right. Every time we bring a product or design system to market, we rely on A/B testing to determine what’s working and what’s not. We map out our certainties, doubts, and assumptions — then, we establish which design decisions we made on data and which ones we made on assumptions. For example, we recently tested lifestyle or product imagery to see what content performs better and how that can inform other design decisions made on assumptions. You could say we are slightly obsessed with testing.</p> <p>We’re also addressing the inevitable challenges in language created by working with so many diverse teams. At the moment, we design in English and then get either a product manager or local designer to translate into the local language, which takes time and requires a lot of testing after the fact. Since having more designers who speak Spanish or Portuguese is vital, we’re hiring more designers in Mexico and other Latin American countries.</p> <p>Simultaneously, we’re establishing standards for linguistics. For instance, we want to enshrine our tone of voice for UX and the consistency of copy for calls to action.</p> <h3 id="a-ux-designer-for-other-ux-designers">A UX designer, for other UX designers</h3> <p>Our work with our various ventures and their individual designers takes a very bottom-up approach — we are here to be the problem solvers and support. Designers and founders come to us with their biggest challenges, and we solve them collaboratively. 
Together, we identify patterns in their problems and then find a solution for those problems.</p> <p>To do this, we follow a three-step process:</p> <ul> <li>Evaluate the symptoms</li> <li>Identify the root cause of the problem</li> <li>Build out a program to address the design problem</li> </ul> <p>For example, in one case, we determined that one of our brand’s designers was spending too much time manually choosing their primary colors and fonts since nothing was standardized. This is a case where the solution was to use our templated design system, creating a custom design system for this brand, to maximize efficiency. For another brand, however, we created simple UI kits for already existing experiences, allowing us to update some of the user interfaces on their websites.</p> <p>Overall, our design systems have led to massive time savings and increased productivity for our individual designers. For example, one designer was able to reduce her design time by 50 percent and onboard her intern in just one hour.</p> <h3 id="leaning-on-adobe-xd-to-do-the-heavy-lifting">Leaning on Adobe XD to do the heavy lifting</h3> <p>Working with many different designers, scattered in teams across different countries (many of whom are now working from home), we need a solid tool we can count on that would help us realize our vision of a central design team, ensuring brand consistency while also empowering individual brands to be unique. For us, that is <a href="https://www.adobe.com/ca/products/xd.html">Adobe XD</a>.</p> <p>For our team, the biggest benefit is the many collaboration capabilities in XD. Since we’re dealing with different time zones and different parts of teams, it’s crucial for us to clearly and quickly see what we’re all working on. 
To that end, using <a href="https://www.adobe.com/ca/creativecloud/libraries.html">Creative Cloud Libraries</a> has been a huge help, too — we’re always going from <a href="https://www.adobe.com/products/illustrator.html">Illustrator</a> to <a href="https://www.adobe.com/products/photoshop.html">Photoshop</a> to XD in our workflows.</p> <p>For example, on the Creative Cloud library panel, when I need to copy our master design system and create a new one, I can do that in just one click. In my new design system, I can then change things like colors for the links. By changing these colors in the assets panel, XD applies the change to each instance of it in the design document, saving us massive amounts of time.</p> <h3 id="uniting-people-through-great-digital-experiences">Uniting people through great digital experiences</h3> <p>With our work at ZX Ventures, I believe we are bringing people together and helping our designers do great work, wherever in the world they are or whichever ventures they’re working at. The cool thing about my role is I get to support other designers — as they improve their skill set, it helps our businesses grow as well. I find this super empowering.</p> <p>Ultimately, I believe this results in better digital experiences for our customers, too — like great drinks at a party, great digital experiences go a long way in connecting people, especially when we can’t all be together in person.</p> <p><img src="/hlx_6af4bb7ea720be92ece3d92523496a725829fd04.jpeg" alt="Photograph of 4 deisgners at a table. "></p> <p><em>ZX Ventures &amp; Zé Delivery Designers in São Paulo, Brazil. 
Left to Right: Diego Carrion, Fernanda Magalhaes, and Otávio Sueitt.</em></p> </div> <div> <h2 id="featured-posts">Featured Posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/2021/01/13/amazon-creative-jam-gives-future-designers-a-lift-through-mentorship.html#gs.sn6zm0">https://blog.adobe.com/en/2021/01/13/amazon-creative-jam-gives-future-designers-a-lift-through-mentorship.html#gs.sn6zm0<br> </a></li> <li><a href="https://blog.adobe.com/en/2021/01/11/baronfig-new-website-a-thinkers-journey.html#gs.sn71t7">https://blog.adobe.com/en/2021/01/11/baronfig-new-website-a-thinkers-journey.html#gs.sn71t7<br> </a></li> <li><a href="https://blog.adobe.com/en/2020/05/18/penske-ux-design-pre-post-covid.html?sdid=7N826B8Q&amp;mv=social#gs.sn73sv">https://blog.adobe.com/en/2020/05/18/penske-ux-design-pre-post-covid.html?sdid=7N826B8Q&amp;mv=social#gs.sn73sv</a></li> </ul> </div> <div> <p>Topics: Digital Transformation, Insights &amp; Inspiration, Customer Stories, Content Management, Creative Cloud, no-interlinks</p> <p>Products: Creative Cloud, Illustrator, Photoshop, XD,</p> </div> </div>

February 11, 2021 12:00 AM

Adobe employees from around the world welcome the Year of the Ox

Adobe Web Platform

<div class="embed embed-internal embed-internal-employeeswelcomeyearoftheox embed-internal-11"> <div> <h1 id="adobe-employees-from-around-the-world-welcome-the-year-of-the-ox">Adobe employees from around the world welcome the Year of the Ox</h1> </div> <div> <img src="/hlx_ce7ae3791229a7a19c82682466b9bc075843f10b.png" alt="Lunar New Year decorations in Singapore."> </div> <div> <p>By Celest Lim</p> <p>Posted on 02-11-2021</p> </div> <div> <p>The <a href="https://spark.adobe.com/page/ww4crunNvCPwQ/">Lunar New Year</a> (LNY) — also known as Chinese New Year or Spring Festival — marks the first new moon of the lunisolar calendars that are traditionally used in many east Asian countries, including China, Singapore, Korea, and Vietnam. It is one of the most significant and vibrant celebrations around the world.</p> <p>With Adobe’s upcoming Global Day Off falling on the first day of LNY, employees around the world shared their own traditions and how they’ll be celebrating this year.</p> <h3 id="year-of-the-ox-and-festive-decorations">Year of the Ox and Festive Decorations</h3> <p>February 12 marks the beginning of the Year of the Ox, the second animal on the Chinese zodiac, bringing a renewed source of strength and resilience to the year ahead.</p> <table> <thead> <tr> <th>Image 50</th> <th>Image 50</th> </tr> </thead> <tbody> <tr> <td><img src="/hlx_d778a2f75176993af3274e2437868bf444af4fb3.png" alt="Lunar New Year decorations in Singapore, showing the Year of the Ox."></td> <td><img src="/hlx_03f8f7ab479b9d8a98cc01a75c66b66772b65c88.png" alt="Food stand in Singapore." title="Inserting image..."></td> </tr> </tbody> </table> <p>“Every LNY, I would stroll down Chinatown to enjoy the massive lantern displays. This year even with the beautiful Ox displays, the streets were quieter and stalls empty. 
I’m missing the Adobe Lion Dance and big loud cheering during Yusheng — a uniquely Singaporean tossing of fresh vegetables and raw fish for good luck.”</p> <p><em><strong>— Vincent Chia, DME enterprise account executive, Singapore</strong></em></p> <p><img src="/hlx_61a6cd38e499eb7f16bad601c06b37ad53af9a11.png" alt="The ‘Chun Lian’ (Spring Couplet), also known as 'a pair of antithetical phrases,' is a special form of literature in China."></p> <p>“The ‘Chun Lian’ (Spring Couplet), also known as ‘a pair of antithetical phrases,’ is a special form of literature in China. Often written with black or gold ink on red paper, these couplets are traditionally pasted on the sides of the front door and above the door frame as LNY decorations and wishes for happiness and a good year ahead.”</p> <p><em><strong>— Enno Zhong, senior solutions consultant, Guangzhou, China</strong></em></p> <h3 id="auspicious-tunes-red-envelopes-and-family-gatherings">Auspicious Tunes, Red Envelopes, and Family Gatherings</h3> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/34nU-CZvz7c?rel=0&amp;v=34nU-CZvz7c&amp;feature=emb_title&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <p>“Nothing quite like the blasting of festive tunes at the malls and supermarkets to remind us that LNY is ‘round the corner! 
Here’s us singing one of our favorites called “He Xin Nian (贺新年)” — a song wishing all a happy new year.”</p> <p><em><strong>— Angela Lee, JAPAC lifecycle marketing specialist &amp; Joyce Neo, APAC marketing manager, Singapore</strong></em></p> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/owCUzmyFo_s?rel=0&amp;v=owCUzmyFo_s&amp;feature=emb_title&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <p>“I always looked forward to receiving Hongbao from my parents. It’s a red envelope with money inside for good luck. Now my dog, Burger, gets Hongbao treats too.”</p> <p><em><strong>— Zihan Miao, customer insights manager, San Francisco, U.S.</strong></em></p> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/p0PBtwb6UOw?rel=0&amp;v=p0PBtwb6UOw&amp;feature=emb_title&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <p>“My family would spend hours together, baking trays of pineapple tarts for friends and relatives. 
Mom’s recipe is the best!”</p> <p><em><strong>— Clarissa Nah, marketing manager, Singapore</strong></em></p> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/RbAo6GSveTM?rel=0&amp;v=RbAo6GSveTM&amp;feature=emb_title&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <p>“Lunar New Year is a really special time for me, as my family gets together to celebrate my Grandmother and treat her with new clothes, flowers and her favorite food!”</p> <p><em><strong>— Zara Un, EMEA e-commerce manager, London, UK</strong></em></p> <h3 id="food-glorious-food">Food, Glorious Food</h3> <p>Food is the heart of LNY. While the family reunion dinner on LNY eve is the most significant meal of the celebrations, holiday feasting often starts weeks before and lasts through the first 15 days of LNY. For our employees, we’ll be hosting a Lunar New Year Celebration and Cooking Demonstration on February 18! 
We can’t wait to celebrate with everyone virtually!</p> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/ZkIn7w4YS2I?rel=0&amp;v=ZkIn7w4YS2I&amp;feature=emb_title&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <p>“My wife’s family typically mixes the Asian must-haves (roast pig) with the Australian (sashimi and oysters) — it becomes a very fusion plate.”</p> <p><em><strong>— William Leung, Japan &amp; APAC, head of acquisitions and search marketing, Sydney, Australia</strong></em></p> <p><img src="/hlx_4f9c704941a089d677c99396ca6ac9e9041672b8.png" alt="Franklin celebrating Lunar New Year with his family."></p> <p>“LNY for me means a big family gathering with each household bringing dishes; wearing red and new clothes signifying a new year and a fresh start; and handing out red envelopes to the kids.”</p> <p><em><strong>— Franklin Tjhin, APAC search marketing manager, Sydney, Australia</strong></em></p> <p><img src="/hlx_b9eda85217a4b7f5cb927697f5b924413531fda3.png" alt="Dan Jiao’ — egg wrap with seasoned pork, which symbolizes &quot;treasures of gold,&quot; and ‘Jiu Niang Tang’ — sweet wine-rice soup with small rice balls symbolizing &quot;family reunion and perfection.&quot; "></p> <p>“For LNY, we would prepare ‘Dan Jiao’ — egg wrap with seasoned pork, which symbolizes “treasures of gold,” and ‘Jiu Niang Tang’ — sweet wine-rice soup with small rice balls symbolizing “family reunion and perfection.” This year, to commemorate the Year of the Ox, we will have a beef dish too.”</p> <p><em><strong>— Chloe Wang, executive assistant, Shanghai, China</strong></em></p> <div 
class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/JSg81DmzfmU?rel=0&amp;v=JSg81DmzfmU&amp;feature=emb_title&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 75%;"><iframe src="https://www.youtube.com/embed/HanLdv_0Qxc?rel=0&amp;v=HanLdv_0Qxc&amp;feature=emb_title&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <h3 id="cross-culture-celebration">Cross-culture Celebration</h3> <div class="embed embed-oembed embed-youtube"><div style="left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.25%;"><iframe src="https://www.youtube.com/embed/chR_XZebpR8?rel=0&amp;v=chR_XZebpR8&amp;feature=emb_title&amp;kind=embed-youtube&amp;provider=youtube" style="border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;" allowfullscreen="" scrolling="no" allow="encrypted-media; accelerometer; clipboard-write; gyroscope; picture-in-picture" title="content from youtube" loading="lazy"></iframe></div></div> <p>“Special greetings in Korean, French, and English!”</p> <p><em><strong>— Hyojin Shin, marketing manager, Seoul, South Korea</strong></em></p> <p><img src="/hlx_fd0cf5d4f2ba67d63d02436d8d75388fa591f322.png" alt=""></p> <p>“We are celebrating LNY in a special way this year. 
Our 10-year-old daughter was selected as a young ambassador to represent the U.S at the Silk Road Children’s Emissary in China, which is being broadcast to over 82 countries. Makenna was chosen because of her singing ability and her relationship as the great great niece of Helen Foster Snow, a journalist from Utah who was nominated twice for the Nobel Peace Prize for her contributions to modern Chinese history in the 1930s.”</p> <p><em><strong>— Adam Foster, talent management and development, Workfront, Utah, U.S.</strong></em></p> <div class="embed embed-internal embed-internal-adobeforall embed-internal-adobelife"> <div><p><img src="/hlx_c8c81ea35423b0511a921017439ef0a520f84d33.png" alt=""></p> <h3 id="adobe-for-all">Adobe For All</h3> <p>We believe that when people feel respected and included they can be more creative, innovative, and successful, which is why we are committed to investing in building a diverse and inclusive environment for our employees, customers, partners, and the tech industry as a whole.</p> <p><a href="https://www.adobe.com/diversity.html">Learn more</a></p></div> </div> </div> <div> <h2 id="featured-posts">Featured posts:</h2> <ul> <li><a href="https://blog.adobe.com/en/publish/2021/02/11/adobe-celebrates-lunar-new-year.html#gs.t8wzd5">https://blog.adobe.com/en/publish/2021/02/11/adobe-celebrates-lunar-new-year.html#gs.t8wzd5</a></li> <li><a href="https://blog.adobe.com/en/publish/2020/05/04/inyong-kim.html#gs.t6i2kb">https://blog.adobe.com/en/publish/2020/05/04/inyong-kim.html#gs.t6i2kb</a></li> <li><a href="https://blog.adobe.com/en/publish/2021/02/09/forbes-award-americas-best-employers.html">https://blog.adobe.com/en/publish/2021/02/09/forbes-award-americas-best-employers.html</a></li> </ul> </div> <div> <p>Topics: #AdobeForAll, Brand, Adobe Life, Adobe Culture, Adobe Life - Asia</p> <p>Products:</p> </div> </div>

February 11, 2021 12:00 AM

February 10, 2021

Updates to the Storage Access API

Surfin’ Safari

The Storage Access API allows third-party web content to ask for permission to get access to its unpartitioned storage, typically in order to authenticate the user. In the case of Safari and WebKit, using the Storage Access API enables cookie access under Intelligent Tracking Prevention.

This blog post covers two changes to the Storage Access API in Safari and WebKit, as well as a how-to guide on adoption based on questions we’ve been asked over the last two years.

Changes to API Functionality

The iOS and iPadOS 14.5 and macOS Big Sur 11.3 betas feature two sought-after changes to the Storage Access API in Safari and WebKit: per-page storage access and support for nested iframes. Both changes were driven by the standards process in the W3C Privacy Community Group (Privacy CG).

Per-Page Storage Access

If a request for storage access is granted to embedee.example, access is now granted to all embedee.example resource loads under the current first-party webpage. This includes sibling embedee.example iframes as well as other, non-document resources.

Nested Iframes Can Request Storage Access

Imagine a webpage embedding a cross-site iframe from embedeeOne.example, which in turn embeds a cross-site iframe from embedeeTwo.example, making the latter a so-called nested iframe. As of this release, nested iframes such as embedeeTwo.example are also allowed to request storage access. Note that we may require first parties to explicitly delegate this capability through Permissions Policy at a later stage; Mozilla has expressed an interest in such control.

How To Use the Storage Access API

For the purposes of this guide we will use the domains social.example for the embedded content in need of cookie access and news.example as the first party website embedding social.example.

First, Cross-Site Iframes Call the API

The Storage Access API is called from inside cross-site, or third-party, iframes. You don’t have to call the API if your website is the first party, and first-party websites cannot call the API on behalf of third parties.
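As a quick first check, embedded script can detect whether it is running cross-site before reaching for the API. This is an illustrative sketch, not a web API: the helper name is ours, and the `win` parameter stands in for the frame’s `window` so the logic can be exercised outside a browser.

```javascript
// Illustrative helper (not part of any browser API): decide whether
// this frame is embedded cross-site and may therefore need the
// Storage Access API before it can read its cookies.
function isEmbeddedCrossSite(win) {
  if (win === win.top) return false; // we are the first party
  try {
    // Same-origin frames may read the top frame's origin;
    // cross-origin access throws a SecurityError.
    return win.location.origin !== win.top.location.origin;
  } catch {
    return true; // access threw, so we are embedded cross-site
  }
}
```

In a real iframe you would simply pass `window`.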

How-To #1: Meet and Greet the User as First Party

If you want to make use of the Storage Access API as a third-party, you first need to take these steps as a first party:

  1. Make sure you are using regular browsing mode, i.e. not Private Browsing. We will cover Private Browsing at the end of this guide.
  2. Take the user to your domain as first party. This is your website showing itself and giving the user a chance to recognize your brand and domain name. Recognition is important since the prompt for storage access features your embedded iframe’s domain. In our example, this is taking the user to a webpage with social.example in the URL bar, either through a navigation or a popup.
  3. Have the user interact (tap, click, or use the keyboard) with your website as first party. This tells the browser that the user has actually seen and used the site. Note: Navigating to and from your website in a redirect without user interaction does not count. Formally, WebKit’s requirement is user interaction as first party within the last 30 days of browser use. Being granted storage access through the Storage Access API counts as such user interaction. In our example, this is having the user tap/click on the webpage with social.example in the URL bar.
  4. Set cookies when you are first party. This establishes the website as “visited” for the purposes of the underlying cookie policy. Third parties without existing cookies cannot set cookies in Safari, and never have been able to since Safari 1.0 in 2003. This means you cannot use the Storage Access API as third party until you have set at least one cookie as first party. In our example, this is setting cookies for social.example with social.example in the URL bar.

The above requirements exist to make sure that the sometimes 50–100 embedded third parties on a single webpage cannot all prompt the user for storage access; only the ones the user has visited and interacted with can.
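Step 4 above, setting cookies as first party, typically means setting an ordinary cookie on your own page. One detail worth noting: for that cookie to be sent in a cross-site context later, once storage access has been granted, it must be marked `Secure` and `SameSite=None`. A small sketch (the helper and cookie names are ours, purely illustrative):

```javascript
// Build a cookie string for first-party use that remains usable in
// cross-site contexts once storage access is granted. Secure and
// SameSite=None are required for cookies sent in cross-site requests.
function firstPartyCookie(name, value, maxAgeSeconds) {
  return `${name}=${encodeURIComponent(value)}; Secure; SameSite=None; Path=/; Max-Age=${maxAgeSeconds}`;
}

// In the browser, on a page where you are the first party:
//   document.cookie = firstPartyCookie("sa_visited", "1", 31536000);
```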

How-To #2: Use the Storage Access API as Third Party

Once you have had the user interact with your website as first party and have set cookies as first party, you are ready to make use of the Storage Access API.

  1. In shipping Safari, your cross-site iframe that is about to request storage access must be a direct child frame of the top frame. Nested iframes can request storage access as of iOS 14.5 and macOS 11.3 (currently in beta).
  2. Make your cross-site iframe call document.hasStorageAccess() as soon as it’s rendered to check your status. Note: Don’t call this function upon a user gesture since it’s asynchronous and will consume the gesture. Once the user gesture is consumed, a subsequent call to document.requestStorageAccess() will fail because it’s not called while processing a user gesture. In our example, this is social.example’s iframe.
  3. If document.hasStorageAccess() returns false, your iframe doesn’t have storage access. Now set an event handler on elements that represent UI which requires storage access, and make the event handler call document.requestStorageAccess() on a tap or click. This is the API call that requires a user gesture. In our example, this is social.example’s iframe calling the API.
  4. Render the page with your cross-site iframe. Tap or click on an element with an event handler in the iframe. In our example, this is rendering a page from news.example with the social.example iframe and clicking on an element in the social.example iframe’s document.
  5. If the user has not yet opted in to storage access for social.example under news.example, there will now be a prompt. Choose “Don’t Allow” in the prompt. Tip: Don’t choose “Allow” yet because it’ll be remembered and you’ll have to delete browser history to reset it. If you are not getting the prompt, you either have not yet gotten user interaction as first party and set cookies for social.example (see How-To #1), or you have already chosen “Allow” earlier, which is remembered.
  6. Test the behavior for the “Don’t Allow” case. You can do it repeatedly. Do it until you’re happy with how your code handles it. Note that when the user chooses “Don’t Allow” in the prompt, their user gesture is consumed and any further API calls in your iframe that require a user gesture will have to get the user to tap or click again. We’ve deliberately designed it this way to make sure that an explicit deny from the user doesn’t trigger further privileged API calls from the iframe. The user should at this point be able to continue with other things.
  7. Now tap or click the iframe again and this time choose “Allow” in the prompt. This should open up cookie access on resolution of the promise.
  8. Test the behavior for the “Allow” case. Note that when the user chooses “Allow” in the prompt, their user gesture is preserved and any further API calls in your iframe that require a user gesture can go right ahead. We’ve deliberately designed it this way so that when you get access to cookies and note that the user is not in the desired state, such as not logged in, you can open a popup or navigate them to your website without further user gestures. In our example this would be a popup or navigation to social.example.
  9. Now reload the webpage. This will reset your per-page storage access. Tap or click the iframe to trigger the call to document.requestStorageAccess(). This should open up cookie access without a prompt since the user has already opted in and that choice is remembered.
  10. Finally, test the flow in Private Browsing Mode. In that mode, the user must interact with your website as first party (see How-To #1) in the same tab where you later request storage access as third party. This is because Private Browsing Mode uses a separate ephemeral session for each new tab the user opens, i.e. the state of those tabs is separate. The rest should work the same as in regular mode.
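The request flow in the steps above boils down to a small amount of script in the iframe. Here is a minimal sketch; the helper name and callbacks are ours, and in the real iframe you would pass the actual document:

```javascript
// Minimal sketch of a click handler inside the social.example iframe.
// `doc` is injected so the logic can be exercised outside a browser;
// in the iframe you would pass the real `document`.
function makeClickHandler(doc, onGranted, onDenied) {
  return async function handleClick() {
    try {
      // Must be called from a user gesture (tap or click).
      await doc.requestStorageAccess();
      // Cookie access is now open; on "Allow" the gesture is preserved.
      onGranted();
    } catch (err) {
      // "Don't Allow": the gesture is consumed, so any further privileged
      // API call will need a fresh tap or click from the user.
      onDenied(err);
    }
  };
}
```

On “Allow” the promise resolves with the user gesture preserved; on “Don’t Allow” it rejects and the gesture is consumed, matching the behavior described in steps 6 and 8.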

February 10, 2021 05:00 PM

February 01, 2021

Introducing Private Click Measurement, PCM

Surfin’ Safari

This blog post covers a new feature called Private Click Measurement, or PCM, for measuring ad clicks across websites and from iOS apps to websites. It is part of iOS and iPadOS 14.5 betas.

Motivation and Goals

Classic ad attribution on the web is done with cookies carrying user or device IDs. Such attribution constitutes cross-site tracking which WebKit is committed to preventing. Websites should not be able to attribute data of an ad click and a conversion to a single user as part of large scale tracking.

At the same time, we want to support measurement of online advertising. PCM achieves this tradeoff by sending attribution reports with limited data in a dedicated Private Browsing mode without any cookies, delaying reports randomly between 24 and 48 hours to disassociate events in time, and handling data on-device.
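The report delay can be modeled as follows; this is an illustrative sketch of the scheduling behavior, not WebKit’s actual code:

```javascript
// Illustrative model of PCM report scheduling: a report goes out at a
// random point between 24 and 48 hours after the triggering event,
// to disassociate the click and the report in time.
function randomReportDelayMs(random = Math.random) {
  const HOUR = 60 * 60 * 1000;
  return 24 * HOUR + Math.floor(random() * 24 * HOUR);
}
```

If the browser isn’t running when the delay elapses, the report is sent at the earliest opportunity thereafter.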

The Feature in a Nutshell

  • A new, on-by-default feature called Private Click Measurement, or PCM, for privacy-preserving measurement of ad clicks across websites and from iOS apps to websites in iOS and iPadOS 14.5 betas.
  • An 8-bit identifier on the click source side, which means 256 parallel ad campaigns can be measured per website or app.
  • A 4-bit identifier on the conversion side, which means 16 different conversion events can be distinguished.
  • Fraud prevention via unlinkable tokens will be coming.

A Proposed Standard

We first proposed privacy-preserving measurement of ad clicks in May 2019. Since then the proposal has changed name to Private Click Measurement and been discussed extensively in the W3C Privacy Community group, both through meetings and on GitHub.

A proposal needs two independent implementations to be on track to become a web standard. This means another browser such as Firefox, Brave, Chrome, or Edge needs to independently implement PCM before it can move further along the standards track. We are working with them to get there.

Nevertheless, we are happy to be the first browser to enable a proposed web standard for measuring advertising!

On By Default

You may ask why we are enabling PCM by default before there is a second independent implementation and before we’ve added the fraud prevention mechanism discussed in W3C Privacy CG. The reasons are:

  • Early access. We recognize the need for early access so that advertisers, websites, and apps can adopt the technology, analyze real data, tune their measurement, and report any issues to us.
  • Equal access. We want to provide everyone with the opportunity to test and use this technology from the get-go. An alternative would be to only run it with selected partners but we have opted for an open approach.
  • Attribution data is stable. Fraud prevention tokens will be added and naming of data labels might change, but the functionality and attribution data is stable, namely 8 bits on the click source side and 4 bits on the attribute-on side. Thus, full scale tests of PCM are meaningful and useful at this point.

Web-to-Web Click Measurement

PCM web-to-web is the case covered by the proposed standard, i.e. a user clicks a link on a webpage, is navigated cross-site, and up to seven days later, there’s a signal on the destination website saying it would like attribution for any previous clicks that took the user here.

For the purposes of the examples below, we assume the click happens on a website called social.example and the click navigates the user to shop.example.

The Click Side

Links that want to store click measurement data should look like this:

<!-- Link on social.example -->
<a href="https://shop.example/product.html"
   attributionsourceid="[8-bit source ID]"
   attributeon="https://shop.example">
  Tablet Stand Deluxe
</a>

The two mandatory attributes are:

  • attributionsourceid: The 8-bit attribution source ID, allowed to be between 0 and 255. This was earlier referred to as the ad campaign ID but since PCM is not technically tied to advertising, it was decided in the standards discussion that its attributes and key names should not use advertising terms.
  • attributeon: The click destination website which wants to attribute incoming navigations to clicks. Note that PCM only uses the registrable domain or eTLD+1, i.e. there is no separation based on subdomains. This is so that the destination cannot be set up as https://janeDoeTracking.shop.example to track user Jane Doe.

If the click indeed navigated the user to the attributeon website, the attributionsourceid is stored as a click from social.example to shop.example for 7 days.

Note that this data is not accessible to websites. It’s silently stored in the browser.
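Conceptually, that browser-internal storage behaves like a map from a (source site, destination site) pair to the latest attributionsourceid, expiring after 7 days. A rough model, with all names ours:

```javascript
// Rough model (ours, not WebKit's code) of the browser-internal click
// store: one pending click per (source site, destination site) pair,
// kept for 7 days, and never exposed to websites.
const CLICK_LIFETIME_MS = 7 * 24 * 60 * 60 * 1000;

function createClickStore(now = Date.now) {
  const clicks = new Map();
  return {
    storeClick(sourceSite, destinationSite, sourceId) {
      // A newer click for the same pair overwrites the previous entry.
      clicks.set(`${sourceSite} -> ${destinationSite}`,
                 { sourceId, expiresAt: now() + CLICK_LIFETIME_MS });
    },
    lookupClick(sourceSite, destinationSite) {
      const entry = clicks.get(`${sourceSite} -> ${destinationSite}`);
      return entry && entry.expiresAt > now() ? entry : null;
    },
  };
}
```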

The Triggering Event

To trigger click attribution, the “attribute on” website has to make an HTTP GET request to the website(s) where it is running click-through ads. This way of doing it is intended to support existing “tracking pixels” and make adoption easy. In our example this would be the shop.example site making an HTTP GET request to social.example. For a more modern way of triggering attribution, see the Future Enhancements section.

The HTTP GET request to social.example triggers attribution if it is redirected to https://social.example/.well-known/private-click-measurement/trigger-attribution/[4-bit trigger data]/[optional 6-bit priority]. Note: The first beta lacks the /trigger-attribution path component since this was a very recent decision in the standards group.

The two URL path parameters are:

  • Trigger data: This is a 4-bit value between 00 and 15 that encodes the triggering event (note the mandatory two digits). This was earlier referred to as the conversion value but again, PCM is not technically tied to advertising so it doesn’t use advertising terms.
  • Optional priority: This is a 6-bit value between 00 and 63 that allows multiple triggering events to result in a single attribution report for the event with the highest priority (again, note the two digits). For instance, there might be multiple steps in a sales funnel where each step triggers attribution, but steps further down the funnel have higher priority. This value only controls which trigger data goes into the attribution report and is not part of the report itself. You may ask why this isn’t a 4-bit value like the trigger data. The reason is to support easy changes to what’s being measured without having to remap several trigger-data-to-priority pairs. Triggering events 00-15 may start out mapped to priorities 00-15, but then the shop owner wants to drill into events 05-07. With the extra bits, it’s easy to assign triggering events 05-07 to priorities 20-22 so as to focus attribution reports on those.

Once a triggering event matches a stored click, a single attribution report is scheduled by the browser to be sent out randomly between 24 and 48 hours later, or the earliest time thereafter when the browser is running. As long as an attribution report has not yet been sent, it can be rescheduled based on a triggering event with higher priority.
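On the click-source server, triggering attribution is therefore just a matter of redirecting the pixel request to the well-known URL. A small helper for building that URL, sketched from the path layout above (the helper name is ours; note the mandatory two-digit formatting):

```javascript
// Builds the well-known trigger-attribution URL that the click-source
// server redirects the pixel request to. Helper name is ours.
function triggerAttributionURL(sourceSite, triggerData, priority) {
  if (!Number.isInteger(triggerData) || triggerData < 0 || triggerData > 15)
    throw new RangeError('trigger data is a 4-bit value (0-15)');
  const pad2 = n => String(n).padStart(2, '0'); // two digits are mandatory
  let url = `https://${sourceSite}/.well-known/private-click-measurement` +
            `/trigger-attribution/${pad2(triggerData)}`;
  if (priority !== undefined) {
    if (!Number.isInteger(priority) || priority < 0 || priority > 63)
      throw new RangeError('priority is a 6-bit value (0-63)');
    url += `/${pad2(priority)}`;
  }
  return url;
}
```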

The Attribution Report

PCM attribution reports are sent as HTTP POST requests to /.well-known/private-click-measurement/report-attribution/ on the website where the click happened, in our example https://social.example/.well-known/private-click-measurement/report-attribution/. Note: The first beta lacks the /report-attribution path component since this was a very recent decision in the standards group.

The report is in JSON and looks like this:

{
  "source_engagement_type" : "click",
  "source_site" : "social.example",
  "source_id" : [8-bit source ID],
  "attributed_on_site" : "shop.example",
  "trigger_data" : [4-bit trigger data],
  "version" : 1
}

Notes on the non-obvious key-values above:

  • source_engagement_type is always “click” for PCM. This field allows for future use of this report mechanism for other types of attribution such as view-through.
  • version signals to the receiving end which version of the attribution feature this is. You should expect this number to be increased when fraud prevention tokens are added or something else about the mechanism is changed. This allows concurrent versions to work in parallel and provides a signal to developers that there may be things they need to change or adopt on their side.
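A report endpoint can sanity-check incoming POST bodies against the ranges described above. An illustrative validator (ours, not part of PCM):

```javascript
// Checks an incoming PCM attribution report against the field ranges
// described above. Illustrative only; names follow the report JSON.
function isValidAttributionReport(report) {
  return report.source_engagement_type === 'click'
    && typeof report.source_site === 'string'
    && Number.isInteger(report.source_id)
    && report.source_id >= 0 && report.source_id <= 255
    && typeof report.attributed_on_site === 'string'
    && Number.isInteger(report.trigger_data)
    && report.trigger_data >= 0 && report.trigger_data <= 15
    && Number.isInteger(report.version);
}
```

Checking the version field explicitly lets an endpoint run concurrent handlers when the mechanism changes, as the post notes.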

App-to-Web Click Measurement

This is exciting – we’re adding the capability to measure ad clicks from iOS and iPadOS apps to Safari!

Many advertisers in apps want to take the user to their website where the user can buy a product or sign up for a service. This is exactly the kind of ad PCM app-to-web allows them to measure.

The Click Side

The only thing that differs from PCM web-to-web is the click side, which is in an iOS app. To adopt this technology you need to do the following:

  1. Add a URL to where you want PCM’s ad attribution reports to be sent when ads are clicked in your app. You do this under the key NSAdvertisingAttributionReportEndpoint in your Info.plist. The naming of this endpoint is deliberately not tied to PCM. Potential future ad measurement reports associated with apps may use this URL with a differing well-known location if appropriate. Note that the subsequent HTTP redirect to trigger attribution needs to go to this website.
  2. Populate and add the new UIEventAttribution to the options of your call to openURL:. See below for what fields you need to enter in UIEventAttribution.
  3. Overlay the parts of the click-through ad that will trigger navigations to websites with the new UIEventAttributionView. This view only serves as a checkpoint for Apple’s code on-device to check that a user gesture happened before the navigation. The view does not consume the gesture and you are free to decide whether or not to navigate to a website even if the gesture happened on one of these views. A user gesture is required for your UIEventAttribution object to be forwarded to the browser as part of the call to openURL:. Note that PCM app-to-web is so far only supported in Safari and only on iOS and iPadOS. We intend to add WebKit API to enable other default browsers to be the destination of PCM app-to-web too.


UIEventAttribution

This is the optional data structure you submit in your call to openURL: when you want to measure clicks:

open class UIEventAttribution : NSObject, NSCopying {
    open var sourceIdentifier: UInt8 { get }
    open var destinationURL: URL { get }
    open var reportEndpoint: String? { get }
    open var sourceDescription: String { get }
    open var purchaser: String { get }
    public init(sourceIdentifier: UInt8,
                destinationURL: URL,
                sourceDescription: String,
                purchaser: String)
}

  • sourceIdentifier is the same as PCM’s attributionsourceid attribute for links. Allowed values are 0-255.
  • destinationURL is the same as PCM’s attributeon attribute for links but it should be a full URL with protocol. The report will be sent to the URL’s registrable domain (eTLD+1) and over HTTPS.
  • reportEndpoint will be picked up by Apple code from your Info.plist’s NSAdvertisingAttributionReportEndpoint. As you can see, the init function does not take this parameter. This is where PCM will send any subsequent ad attribution reports. The reason it needs to be stated in the static Info.plist is so that it cannot be used as a tracking vector by dynamically submitting user-specific reporting URLs such as janeDoeTracking.example.
  • sourceDescription is a human-readable description of the content that was tapped. This string should be no longer than roughly 100 characters and can be localized according to the context. It will not be seen by Apple or the destination website. Instead, it’s intended for showing users what ad click data they have stored.
  • purchaser is a human-readable name or description of the purchaser of the content that was tapped, typically the ad buyer. This string should be no longer than roughly 100 characters and can be localized according to the context. It will not be seen by Apple or the destination website. Instead, it’s intended for showing users what ad click data they have stored.

UIEventAttribution Sample Code

func openAdLink() {
    let adURL = URL(string: "https://shop.example/tabletStandDeluxe.html")!
    let eventAttribution =
        UIEventAttribution(sourceIdentifier: 4,
                           destinationURL: adURL,
                           sourceDescription: "Banner ad for Tablet Stand Deluxe.",
                           purchaser: "Shop Example, Inc.")

    // If using scene lifecycle.
    let sceneOpenURLOptions = UIScene.OpenExternalURLOptions()
    sceneOpenURLOptions.eventAttribution = eventAttribution
    self.view.window?.windowScene?.open(adURL,
                                        options: sceneOpenURLOptions,
                                        completionHandler: nil)

    // If using application lifecycle.
    let appOpenURLOptions: [UIApplication.OpenExternalURLOptionsKey : Any] = [
        .eventAttribution: eventAttribution
    ]
    UIApplication.shared.open(adURL,
                              options: appOpenURLOptions,
                              completionHandler: nil)
}


UIEventAttributionView

UIEventAttributionView is the view that is placed over the tappable content, typically an ad. It’s used by the system to verify that a user gesture has occurred.

open class UIEventAttributionView : UIView {
}

The view is invisible and very lightweight. The simplest use case is to create one of these views and stretch it over your entire tappable content. You can also place multiple views over a single piece of content if, for instance, you want to create specific tappable areas.

To ensure your UIEventAttributionView works correctly:

  • Ensure isUserInteractionEnabled is false. This is the default value for this view and ensures the view doesn’t consume events which would otherwise go to the content beneath it.
  • Ensure there are no views placed on top of the event attribution view. The user should be tapping this view for it to count as a user gesture for the purposes of PCM app-to-web.
  • Ensure your tap handling occurs on a touch up event. This automatically occurs if your content is tapped in response to a UITapGestureRecognizer firing or at the .ended state of a UILongPressGestureRecognizer.

UIEventAttributionView Sample Code

func addEventAttributionView() {
    // Create an event attribution view.
    let eventAttributionView = UIEventAttributionView()

    // Place it over your ad however you'd like.
    eventAttributionView.translatesAutoresizingMaskIntoConstraints = false
    adView.addSubview(eventAttributionView)
    NSLayoutConstraint.activate([
        adView.topAnchor.constraint(equalTo: eventAttributionView.topAnchor),
        adView.leadingAnchor.constraint(equalTo: eventAttributionView.leadingAnchor),
        adView.trailingAnchor.constraint(equalTo: eventAttributionView.trailingAnchor),
        adView.bottomAnchor.constraint(equalTo: eventAttributionView.bottomAnchor)
    ])
}

Testing and Debugging

WebKit has an experimental feature called Private Click Measurement Debug Mode. You’ll find it under Develop > Experimental Features on macOS and under Settings > Safari > Advanced > Experimental Features on iOS and iPadOS. When you enable this mode and restart Safari, reports go out a mere 10 seconds after the triggering event instead of 24-48 hours later. This allows quick turnaround in testing and debugging.

The debug mode also enables debug output in Web Inspector’s console. This output will show up by default in a later beta.

Remember to disable debug mode once you’re done testing.

Future Enhancements

As is always the case with web standards, proposed or established, there are enhancement requests, corner cases, and a need to evolve the specification as the platform progresses. Below is a list of prominent and relevant issues that may show up as changes to our implementation of PCM in upcoming releases. Please take part on GitHub if you have input.

  • Fraud prevention with unlinkable tokens, GitHub issue #27. A proposed solution was presented to W3C Privacy CG in May 2020. It will use what is traditionally called blinded signatures (we call them unlinkable tokens). The intention is to allow websites to cryptographically sign tokens that will be included in attribution reports in a format that makes it impossible to link them back to the event at which they were signed. These tokens serve as proof to the report recipient that they trusted the events involved (link click and attribution trigger) without revealing which events.
  • Modern JavaScript API for the triggering event instead of legacy tracking pixels, GitHub issue #31. The intent here is to let a JavaScript call serve as the triggering event instead of redirected tracking pixels. This will remove the requirement for making third-party requests altogether.
  • Attribution reports to advertisers too, GitHub issue #53. We have expressed that we’d like the attribution report to be sent to both the click source and the advertiser site. However, this sparked a conversation on sending reports to designated third-parties and you can read and join that conversation in GitHub issue #57.
  • Support PCM links in nested iframes, GitHub issue #7. This is about measuring click-through ads served in cross-site iframes. Since subsequent attribution reports will be sent to the first-party click source site, it’s not clear how that first party should control click measurement requested on its behalf. Part of this conversation covers not just serving of ads by third parties but also reporting to such third parties. The privacy risk of such a scheme is explored in GitHub issue #57.

Misuse or Use Together With Tracking May Lead To Blocking

PCM is intended to support privacy-preserving measurement of clicks across websites or from apps to websites. It is not intended to be used to track users, events, or devices across those contexts.

If PCM is being misused for tracking purposes or being used in conjunction with unrelated means of tracking users, events, or devices, we may block the offending party from using PCM and potential future measurement features.


FAQ

  • What about PCM web-to-app? We are interested in this but don’t have a solution yet.
  • What about view-through ad attribution? We are interested in this but don’t have a privacy-preserving solution yet.
  • Is there a reason why the click has to take the user to the device’s browser? Yes. Stored clicks are valid for 7 days. Let’s assume that the user doesn’t trigger attribution right after they click but wants to think about it first. When they choose to re-engage a few hours or days later they will most likely go to their browser and either look up the tab where they left off, use a bookmark they might have saved, use their search provider to find the right webpage, or enter the website’s address directly in the URL bar. For the stored click data to be readily available when the user re-engages in this fashion, the initial click needs to take the user to their browser, since PCM data, just like other website data, is not shared between browsers and WebViews. In short: The user’s browser is the most likely place where delayed click-through attribution will happen.
  • Does use of PCM app-to-web require the app to be granted permission to track according to AppTrackingTransparency? No.
  • How do users delete stored clicks? Stored clicks are deleted when they delete website data.
  • Can users opt out of PCM? Yes. There is a new Safari privacy setting for privacy-preserving ad measurement. If the user has opted out, no click metadata will be stored and no attribution reports will be sent out.
  • Is PCM enabled in Private Browsing Mode? No.
  • What is the maximum number of parallel ad campaigns per source website or source app? 256, with the actual value being between 0 and 255.
  • What is the maximum number of triggering events I can distinguish? 16, with the actual value being between 0 and 15.
  • What is the maximum time between a click and a triggering event to still get attribution? 7 days.
  • Can I use PCM app-to-web with WebViews? No. Apps have too much control over WebViews for a feature like PCM to be able to protect the data.
  • Can I use PCM app-to-web with SFSafariViewController? We are interested in this but don’t have a solution yet.
  • Can other default browsers on iOS and iPadOS participate in PCM app-to-web? It is our intention to add such an API at a later point. Please let us know if you are interested.
  • Where can I provide feedback? Please file any web-facing issues or issues with the attribution report mechanism directly to WebKit: https://bugs.webkit.org. Please use Feedback Assistant for any issues with UIKit APIs or the Info.plist integration: https://developer.apple.com/bug-reporting/.

Thank You

We’d like to thank the W3C Privacy Community Group for all the work filing issues, suggesting changes, and engaging with us on this work. Please continue to do so as we move forward. Also, a big thank you to the engineers who’ve helped implement this feature – Anant, Kate, Jon, Chris, Jonathan, Chris, and Glen.

February 01, 2021 07:30 PM

January 27, 2021

Manuel Rego: :focus-visible in WebKit - January 2021

Igalia WebKit

Let’s do a small introduction as this is a kind of special post.

As you might already know, last summer Igalia launched the Open Prioritization experiment, and :focus-visible in WebKit was the winner according to the pledges that the different projects got. Now it has moved into the funds-collection stage; so far we’ve reached 80% of the goal, and Igalia has already started to work on this. If you are interested and want to help sponsoring this work, please visit the project page at Open Collective.

In our regular client projects in Igalia, we provide periodic progress reports about the status of tasks and next plans. This blog post is a kind of monthly report, but this time the project has many customers, so it looks like this is a better format to share information about the status of things. Thank you all for supporting us in this development! 🙏

Understanding :focus-visible

Disclaimer: This is not a blog post explaining how :focus-visible works or the implications it has, you can read other articles if you’re looking for that.

First things first, my initial thoughts were that :focus-visible was a pseudo-class that would match an element when the browser natively shows a focus indicator (focus ring, outline) as an element of a page is focused. And that’s more or less what the spec says in its first sentence:

The :focus-visible pseudo-class applies while an element matches the :focus pseudo-class and the user agent determines via heuristics that the focus should be made evident on the element.

The key part here is that the native behavior deliberately doesn’t show the focus indicator in some situations when the :focus pseudo-class matches, mainly because usability studies indicate that showing it in all cases is not what the user expects and wants. Before :focus-visible, web authors had no way to apply the same criteria and style the focus indicator only when it’s going to be shown natively, while still keeping the website accessible.

Apart from that, the spec has a set of heuristics that, despite being non-normative, all implementations appear to follow. Summarizing them briefly, they’d be something like:

  • If you use the mouse to focus an element it won’t match :focus-visible.
  • If you use the keyboard it’ll match :focus-visible.
  • Elements that support keyboard input (like <input> or contenteditable) always match :focus-visible.
  • When a script focuses a new element, whether it matches :focus-visible depends on the previous active element.

This is just a quick & dirty summary, please read the spec for all the details. There have been years of research around these topics (how focus should work or not on the different use cases, what are the users and accessibility needs, how websites are managing focus, etc.) and these heuristics are somehow the result of all that work.
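As a rough mental model only (browsers implement these heuristics natively; this sketch is not how any engine is written), the summary above could be written as:

```javascript
// Simplified model of the non-normative :focus-visible heuristics.
// All names are ours; real engines decide this internally.
function matchesFocusVisible({ focusedByKeyboard = false,
                               supportsKeyboardInput = false,
                               focusedByScript = false,
                               previousMatched = false } = {}) {
  if (supportsKeyboardInput) return true;       // e.g. <input>, contenteditable
  if (focusedByScript) return previousMatched;  // inherits from previous active element
  return focusedByKeyboard;                     // mouse focus doesn't match
}
```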

:focus-visible in the default UA style sheet

At this point it looks like we can more or less understand what :focus-visible is about. So let’s start playing with it. The definition seems very clear, but testing things in the current implementations (Chromium and Firefox) you might find some unexpected situations.

Let’s use a very basic example:

  :focus-visible { background: lime; }
<div tabindex="0">Focus me.</div>

If you focus the <div> with a mouse click, :focus-visible doesn’t match per spec, so in this case the background doesn’t become green (if you use the keyboard to focus it, it will match :focus-visible and the background will be green). This works the same in Chromium and Firefox, but Chromium, despite the element not matching :focus-visible, shows a focus indicator. Somehow the first spec definition is already not working as expected in Chromium… The issue here is that Chromium still uses :focus { outline: auto; } in the default UA style sheet, and the element matches :focus after the mouse click; that’s why it shows a focus indicator while not matching :focus-visible.

Actually this was already in the spec, but Chromium is not following it yet:

User agents should also use :focus-visible to specify the default focus style, so that authors using :focus-visible will not also need to disable the default :focus style.

There was already a related CSSWG issue on the topic, as the spec currently suggests the following code:

:focus:not(:focus-visible) {
  outline: 0;
}

This works as a kind of workaround for this issue, but if the default UA style sheet uses :focus-visible that won’t be needed.
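Authors who want to apply that workaround conditionally can also feature-detect :focus-visible from script, since querySelector throws on selectors the engine doesn’t understand. A sketch; the injectable root parameter is ours so the logic can run outside a browser:

```javascript
// Script-side feature detection for :focus-visible: querySelector throws
// a SyntaxError for selectors the engine doesn't support. In a page you
// would call this with no argument so it defaults to `document`.
function supportsFocusVisible(root = document) {
  try {
    root.querySelector(':focus-visible');
    return true;
  } catch (e) {
    return false;
  }
}
```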

Anyway, I’ve reported the Chromium bug and created WPT tests, and during the tests review Emilio Cobos realized that this needed a change in the HTML spec and wrote a PR to update it. After some discussion with Alice Boxhall, the HTML change and the tests were approved and merged. I was even brave enough to write a patch to change this in Chromium, which is still under review and needs some extra work.

The tests

WebKit is the third browser engine adding support for :focus-visible so the first natural step was to import the WPT tests related to the feature. This looked like something simple but it ended up needing some work to improve the tests.

There was a bunch of :focus-visible tests already in the WPT suite, but they needed some love:

  • Some parts of the spec were not covered by tests, so I added some new tests.
  • Some tests were passing in WebKit even though there was no implementation yet, so I modified them to fail if there’s no :focus-visible support.

Then I imported the tests into WebKit and discovered a bug related to the focus event and the :focus pseudo-class: :focus was not matching inside a focus event handler. This is probably not important for web authors, but the :focus-visible tests were relying on it. Actually this had been fixed in Chromium more than 5 years ago, so I first moved Chromium’s internal test to WPT and used it to fix the problem in WebKit.

Once the tests were imported into WebKit, the problem was that a bunch of them were timing out on Mac platforms. After investigating the issue I realized that it’s because the focus event is not dispatched when you click on a button on Mac. Digging deeper I found this old bug with lots of discussion on the topic; it looks like this is done to keep alignment with the native platform behavior, and also to avoid showing a focus indicator. Even Firefox has the same behavior on Mac. However, Chromium always dispatches the event independently of the platform. This means some of the tests don’t work automatically on Mac, as they wait for a focus event that is never dispatched. Anyway, maybe once :focus-visible is implemented, the possibility of modifying this behavior could be rediscussed, though it might not be possible anyway. In any case, the WebKitGTK port, the one I’m using for the development, does trigger the focus event in this case; and I’ve also changed the WPE port to do the same (maybe the Windows port will follow too).

One more thing about the tests: lots of these :focus-visible tests use testdriver.js to simulate user actions. For example, for clicking an element they use test_driver.click(element); however, that simple instruction was causing some kind of memory leak in Chromium when running the tests. The actual Chromium bug hasn’t been fixed yet, but I landed some workarounds that prevent the issue in these tests (waiting for the promise to be resolved before marking the test as done).

Status of WPT :focus-visible tests in the different browsers

To close the tests part, you can check the status on wpt.fyi; most of the tests are passing in all implementations, which is great, but there are some interoperability issues that we’ll review next.

Interop issues

As I mentioned the wpt.fyi website helps to easily identify the interop issues between the different implementations.

  • :focus-visible on the default UA style sheet: This has been already commented before, but this is the reason why Chromium fails focus-visible-018.html test. Firefox fails focus-visible-017.html because the default UA style sheet mentions outline: auto, but Firefox uses a dotted outline.

  • :focus-visible on <select> element: There’s a Firefox failure on focus-visible-002.html because it doesn’t match :focus-visible when you click a <select> element. I opened a CSSWG issue to discuss this, and I initially thought that the agreement was that Firefox behavior is the right one. So I did a patch to change Chromium’s behavior and update the tests, but during the review I was pointed to a Chromium bug about this topic that was closed as WONTFIX, the reason is that when you click a <select> element you can type letters to select the option from the keyboard. Right now the discussion has been reopened and we’ll need to wait for the final resolution on the topic, to see which is the right implementation.

  • Keyboard interaction once an element is focused: This is tested by focus-visible-007.html. The example here is that you click an element to focus it, initially the element doesn’t match :focus-visible but then you use the keyboard (for example you type a letter), in that situation Chromium will start matching :focus-visible while Firefox won’t. The spec is quite explicit on the topic so it looks like a Firefox bug:

    If the user interacts with the page via the keyboard, the currently focused element should match :focus-visible (i.e. keyboard usage may change whether this pseudo-class matches even if it doesn’t affect :focus).

  • Programmatic focus and :focus-visible: What should happen with :focus-visible when the website uses element.focus() from a script to move the focus? The spec has some heuristics that depend on if the active element before focus() is called was matching (or not) :focus-visible. But I’ve opened a CSSWG issue to discuss what should happen when there’s no active element. The discussion is still ongoing and depending on that there might be changes in the current implementations. Right now there are some subtle differences between Chromium and Firefox here.


:-webkit-direct-focus

You probably don’t know what that is, but it’s somehow related to :focus-visible, so I believe it’s worth mentioning here.

WebKit is the browser with the best support for the :focus pseudo-class on Shadow DOM (see the WPT test results). The issue here is that the ShadowRoot should match :focus when one of its descendants is focused, so if you have an <input> element in the shadow tree and you focus it, you’ll have two elements matching :focus: the ShadowRoot and the <input>.

  #host { padding: 1em; background: lightgrey; }
  #host:focus { background: lime; }

  <div id="host"></div>

  shadowRoot = host.attachShadow(
    {mode: 'open', delegatesFocus: true});
  shadowRoot.innerHTML =
    '<input value="Focus me">';

In Chromium, if you use delegatesFocus=true in element.attachShadow() and you have an example like the one described above, you’ll get two focus indicators: one on the ShadowRoot and one on the <input>. Firefox doesn’t match :focus on the ShadowRoot, so the issue is not present there.

WebKit matches :focus independently of the delegatesFocus value (which is the right behavior per spec), so it’d be even more common to end up with two focus indicators. To avoid that, WebKit introduced the :-webkit-direct-focus pseudo-class, which is not web-exposed but is used in the default UA style sheet to avoid this bad effect of having a focus indicator on the ShadowRoot.

I believe the :focus-visible spec should describe how it works on a ShadowRoot so that it doesn’t match in those situations. That way WebKit could get rid of :-webkit-direct-focus and use :focus-visible instead once it’s implemented. I’ve reported a CSSWG issue to discuss this topic.

WIP implementation

So far I haven’t talked about the implementation at all; the reason is that all the previous work is required in order to do a proper implementation, one with good quality that is interoperable between the different browsers. :focus-visible is a new feature, and despite all the interop mess regarding how focus works in the different browsers and platforms, we should aim for a :focus-visible implementation that is as interoperable as possible.

Despite all this related work, I’ve also found some time to work on a patch. It’s still not ready to be sent upstream, but it’s already doing some things and passing some of the WPT tests. Of course several things are still missing, but below you can see a quick screen recording of :focus-visible working on WebKit.

:focus-visible example running on WebKitGTK MiniBrowser

Some numbers

I know this is not really relevant, but it helps to get a grasp of what has been happening during this month:

  • 3 CSSWG issues reported.
  • 13 PRs merged in WPT.
  • 5 patches landed in WebKit.
  • 4 patches landed in Chromium.
  • And many discussions with different people; special thanks to Alice and Emilio, who have been really helpful.

Next steps

The plan for February is to try to find an agreement on the CSSWG issues, close them, and update the WPT tests accordingly. This work might even include landing some patches on the current implementations. And of course, focus (pun intended) the effort on the implementation of :focus-visible in WebKit.

I hope this blog post helps you understand better the work that goes on behind the scenes when a web platform feature is implemented, especially if you want to do it in a way that ensures browser interoperability and reduces web authors’ pain.

If you enjoyed this project update, stay tuned, as there will be more in the future.

January 27, 2021 11:00 PM

Release Notes for Safari Technology Preview 119

Surfin’ Safari

Safari Technology Preview Release 119 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 270749-271358.

Web Inspector

  • Elements
    • Enabled independent Styles details sidebar panel in the Elements Tab by default (r271319)
    • Improved the visibility of values by de-emphasizing range information in the Font details sidebar panel (r271329)
  • Timelines
    • Added a banner to the JavaScript Allocation timeline when new heap snapshots are added that are immediately filtered (r271236)

Speech Recognition

  • Enabled SpeechRecognition by default (r270854)
  • Added webkit- prefix to SpeechRecognition (r270868)
  • Added availability check of speech recognition service before requesting permissions (r271031)
  • Changed to fail speech recognition when the page is muted for audio capture (r271154)
  • Implemented recognizer for SpeechRecognition (r270772)
  • Stopped speech recognition if page becomes invisible (r271169, r271205)


CSS

  • Added support for aspect-ratio on positioned elements (r271061)
  • Changed to take aspect-ratio into account for percentage resolution (r271293)
  • Fixed an issue where toggling pointer-events on <body> prevented child elements from scrolling (r270849)
  • Fixed CSS Scroll Snap when the user scrolls via the keyboard (r270838)
  • Fixed :focus to match inside the focus event (r271146)
  • Fixed the default namespace getting ignored inside non-type selectors for :is() and :not() (r270955)
  • Fixed width: max-content with box-sizing: border-box to leave space for padding (r271003)
  • Implemented ::file-selector-button pseudo-element (r270784)
  • Prevented layout overflow from being computed incorrectly inside Flexbox and breaking sticky positioning (r271053)
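
Among the CSS changes above, the new ::file-selector-button pseudo-element lets authors style the button rendered inside a file input. A minimal sketch (the styling choices are mine):

```css
/* Style the "Choose File" button inside <input type="file">. */
input[type="file"]::file-selector-button {
    border: none;
    padding: 0.5em 1em;
    background: steelblue;
    color: white;
}
```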


Scrolling

  • Fixed scrolling issues when scrolling on only one axis is enabled (r271090)
  • Fixed sibling element wheel event regions being wrong (r271054)


JavaScript

  • Fixed non-enumerable property to shadow inherited enumerable property from for-in (r270874)
  • Fixed Intl.DateTimeFormat#formatRange to generate the same output as Intl.DateTimeFormat#format when startDate and endDate are “practically-equal” (r271224)
  • Implemented arbitrary-module-namespace-identifier-names (r270923)
  • Improved performance of Object rest and spread (r271343)


Media

  • Used low-power audio buffer sizes for more output devices (r270943)
  • Updated the video element to ignore requests to enter or exit fullscreen before the current fullscreen mode change is completed (r271341)


WebAssembly

  • Added support for memory.copy, memory.init, and data.drop behind flag (r270948)
  • Added support for memory.fill behind flag (r270855)
  • Added support for type-annotated select behind flag (r270827)
  • Updated WebAssembly instance’s exports object (r271112)
  • Updated WebAssembly multi-value to iterate iterable result from JS function first before converting values (r271113)
  • Updated WebAssembly Table/Memory/Global to allow inheritance (r271115)
  • Implemented WebAssembly BigInt handling (r271168)

Web Animations

  • Fixed animation issue on sibling elements caused by style sharing (r270837)


Accessibility

  • Fixed aria-orientation getting ignored on input[type="range"] (r271166)
  • Implemented prefers-contrast: more (r270823)
  • Updated list heuristics to include linked lists inside navigation containers (r270896)
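
The prefers-contrast: more media feature mentioned above can be used like this (a sketch; the .card selector and styles are hypothetical):

```css
/* Increase contrast when the user has requested it in system settings. */
@media (prefers-contrast: more) {
    .card {
        border: 2px solid black;
        box-shadow: none;
    }
}
```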


  • Adjusted date input placeholder color based on specified text color (r270875)
  • Corrected the intrinsic size stored for SVG images (r271129)
  • Fixed “Open with Preview” menu item in PDF context menus on Big Sur (r270946)
  • Fixed some issues with PDFs as <object>. (r270998)
  • Fixed Service Worker inspection (r271294)
  • Changed text fields to not be translated while typing (r271262)

Bug Fixes

  • Fixed text content alignment inside an inline-block element (r271284)
  • Fixed inline block baseline with clipped overflow (r271348)

January 27, 2021 06:45 PM

January 20, 2021

Sergio Villar: Flexbox Cats (a.k.a fixing images in flexbox)

Igalia WebKit

In my previous post I discussed my most recent contributions to flexbox code in WebKit, mainly targeted at reducing the number of interoperability issues among the most popular browsers. The ultimate goal was, of course, to make the life of web developers easier. It got quite some attention (I loved Alan Stearns’ description of the post), so I decided to write another one, this time focused on the changes I recently landed in WebKit (Safari’s engine) to improve the handling of elements with aspect ratio inside flexbox.

January 20, 2021 10:45 AM

January 06, 2021

Release Notes for Safari Technology Preview 118

Surfin’ Safari

Safari Technology Preview Release 118 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 270230-270749.

Web Inspector

  • Elements
    • Added an experimental Font details sidebar panel for showing information about the currently used font of the selected node (r270637)
  • Sources
    • Added support for intercepting and overriding network requests (r270604)


CSS

  • Implemented Definite and Indefinite Sizes in flexbox (r270578)
  • Corrected cases in which box-sizing was border-box and didn’t use the content box to compute size based on aspect ratio (r270617)
  • Fixed preserving aspect ratio when computing cross size of flexed images in auto-height flex container (r270288)
  • Added support for aspect-ratio on replaced and non-replaced elements (r270551, r270618)
  • Changed text-decoration-color animation not to be discrete (r270597)
  • Changed getComputedStyle rounding lineHeight to nearest pixel (r270248)
  • Changed to trigger web font loads earlier (r270590)


Scrolling

  • Made only the first wheel event in a gesture to be cancelable (r270425)


JavaScript

  • Enabled “at” methods (r270550)
  • Changed get and set for object literal and class to not be escaped (r270487)
  • Accepted escaped keywords for class and object property names (r270481)
  • Aligned %TypedArray% constructor/slice behavior with the specification strictly (r270552, r270371)
  • Added a JSC API to allow acquiring the JSLock to accelerate performance (r270659)
  • Removed unnecessary JSLock use from various JSC APIs (r270665)
  • Aligned [[DefineOwnProperty]] method of mapped arguments object with the specification strictly (r270664)
  • Changed Reflect.preventExtensions not throwing if called on WindowProxy or Location (r270702)


WebGL

  • Fixed rasterizer discard interfering with implicit clears in WebGL 2 (r270253)


Media

  • Implemented WebVTT VTTCue region attribute (r270738)

Private Click Measurement

  • Exposed an API for enabling or disabling Private Click Measurement (r270710)


WebRTC

  • Added support for RTCRtpSender::setStreams (r270486)
  • Enabled use of new socket API for WebRTC TLS connections by default (r270680)
  • Fixed ICE not resolving for turns relay candidates rooted in LetsEncrypt CA (r270626)
  • Improved RTCRtpSender and RTCRtpReceiver transforms support (r270641, r270290, r270294, r270507, r270532)
  • Introduced an experimental flag specific to VP9 profile 2 (r270256)


Web API

  • Changed to allow blob URLs with fragments (r270269)
  • Fixed lazy loaded iframe to not lazy load when scripting is disabled (r270300)
  • Changed Reflect.preventExtensions to not throw if called on WindowProxy or Location (r270702)
  • Changed sessionStorage to not be cloned when a window is opened with rel=noopener (r270273)
  • Updated the list of blocked ports according to the Fetch specification (r270321)


Accessibility

  • Fixed VoiceOver not announcing the aria-checked state for ARIA treeitem (r270333)


Web Extensions

  • Fixed the onClicked listener not being called for page actions

January 06, 2021 09:10 PM

December 22, 2020

Manuel Rego: 2020 Recap

Igalia WebKit

2020 is not a great year to do any kind of recap, but there have been some positive things happening at Igalia during this year. Below you can find highlights of some of these things, in no particular order.

CSS Working Group A Coruña F2F

The year couldn’t have started better: in January, Igalia hosted a CSS Working Group face-to-face meeting in our office in A Coruña (Galicia, Spain). Igalia has experience arranging other events in our office, but this was the first time the CSSWG came here. It was an amazing week, and I believe everyone enjoyed their visit to this corner of the world. 🌍

Brian Kardell from Igalia talked to everybody about Container Queries. This is one of the features that web authors have been asking for for ages, and Brian was trying to push the topic forward and find some kind of solution (even if not 100% feature complete). During that week there were discussions about the relationship with other topics like Resize Observer or CSS Containment, and new ideas appeared too. Brian posted a blog post after the event explaining some of those ideas. Later my colleague Javi Fernández worked on an experiment that Brian mentioned in a recent post. The good news is that all these conversations managed to bring the topic back to life, and last November Google announced that they have started working on a Container Queries prototype in Chromium.

During the meeting Jen Simmons (at Mozilla at that time, now at Apple) presented some topics from Mozilla, including a detailed proposal for Masonry Layout based on Grid. This is something authors have also shown interest in, and Firefox already has a prototype implementation behind a runtime flag.

Apart from the three days full of meetings and interesting discussions, some of the CSSWG members participated in a local meetup, giving four nice talks.

Finally, I remember some corridor conversations about the Mozilla layoffs that had happened just a few days before the event, but nobody could have expected what was going to happen during the summer. It looks like 2020 has been a bad year for Mozilla in general and Servo in particular. 😢

Open Prioritization

This summer Igalia launched the Open Prioritization campaign, where we proposed a list of topics to be implemented in the different browser engines and people supported them with different pledges; I wrote a blog post about it at the time.

Open Prioritization: :focus-visible in Safari/WebKit: $30.8K pledged out of $35K.

This was a cool experiment, and it looks like a successful one, as :focus-visible in WebKit/Safari has been the winner. Igalia is currently collecting funds through Open Collective in order to start the implementation of :focus-visible in WebKit, you still have time to support it if you’re interested. If everything goes fine this should happen during the first quarter of 2021. 🚀

Igalia Chats

This actually started in late 2019, but it has been ongoing throughout 2020. Brian Kardell has been recording a podcast series about the web platform and some of its features with different people from the industry. The episodes have been getting more and more popular, and Brian was even asked to record one for the last BlinkOn edition.

So far 8 episodes of around 1 hour in length have been published, with 13 different guests. More to come in 2021! If you are curious and want to know more, you can find them on the Igalia website or in your favourite podcasting platform.

Igalia contributions

This is not a comprehensive list but just some highlights of what Igalia has been doing in 2020 around CSS:

We’re working on a demo of these features, which we’ll be publishing next year.

In February Chromium published the requirements to become an API owner. Due to my involvement in the Blink project since the fork from WebKit back in 2013, I was nominated and became a Blink API Owner last March. 🥳

Yoav Weiss on the BlinkOn 13 Keynote announcing me as API owner

The API owners met on a weekly basis to review the intent threads and discuss them; it’s an amazing learning experience to be part of this group. In my case, when reviewing intents I usually pay attention to things related to interoperability, like the status of the spec, test suites, and other implementations. In addition, I have the support of all my awesome colleagues at Igalia who help me play this role; thank you all!

2021 and beyond…

Igalia keeps growing, and a bunch of amazing folks will join us soon; in particular, Delan Azabani and Felipe Erias are already starting these days as part of the Web Platform team.

Open Prioritization should see its first successful project, as the :focus-visible funding is advancing and it gets implemented in WebKit. We hope this can lead to new similar experiments in the future.

And I’m sure many other cool things will happen at Igalia next year, stay tuned!

December 22, 2020 11:00 PM

December 14, 2020

CSS Individual Transform Properties

Surfin’ Safari

CSS Transforms appeared on the Web along with CSS Animations and CSS Transitions to add visual effects and motion on the Web. Those technologies have been a staple of the Web platform and Web developers’ toolkit for well over a decade. In fact, the CSS transform property first shipped in Safari all the way back in July 2008 when iPhone OS 2.0 shipped. You can find some historical posts about initial support in WebKit from October 2007, and another post from July 2009 focusing on 3D transforms when CSS Transforms shipped in Mac OS X Leopard.

And now, there is some news in the world of CSS Transforms: individual transform properties are enabled by default in Safari Technology Preview 117. This means that, as in Firefox and Chrome Canary, you can now use the new translate, rotate and scale CSS properties to specify what have so far been functions of the transform property, including 3D operations.

Using these properties is simple and should make Web developers feel right at home. Consider these two equivalent examples:

div.transform-property {
    transform: translate(100px, 100px) rotate(180deg) scale(2);
}

div.individual-properties {
    translate: 100px 100px;
    rotate: 180deg;
    scale: 2;
}
But why would you use these new properties over the transform property? One reason is convenience, as you might deem it simpler to write scale: 2 rather than transform: scale(2) when all you intend to do is scale an element.

But I think the main draw here is that you are now free to compose those various transform properties any way you see fit. For instance, you can easily write a CSS class to flip an element using the scale property without worrying that you might override other transform-related properties:

.flipped {
    scale: -1;
}

Your flipped class will work just fine even if a rotate or transform property applies a rotation to the element.

This feature also comes in handy when animating transforms. Let’s say you’re writing an animation that scales an element up over its entire duration but also applies a rotation during the second half of that animation. With the transform property, you would have had to pre-compute what the intermediate values for the scale should be when the rotation starts and ends:

@keyframes scale-and-rotate {
    0%   { transform: scale(1) }
    50%  { transform: scale(1.5) rotate(0deg) }
    100% { transform: scale(2) rotate(180deg) }
}

While this may not look like a big deal at first glance, making any further changes to those keyframes would require recomputing those values. Now, consider this same animation written with the individual transform properties:

@keyframes scale-and-rotate {
    0%   { scale: 0 }
    50%  { rotate: 0deg }
    100% { scale: 1; rotate: 180deg }
}

You can easily change the keyframes and add other properties as you like, leaving the browser to work out how to correctly apply those individual transform properties.

But that’s not all; there is also the case where you want separate animations to apply to an element at the same time. You could split out this single set of keyframes into two different sets and tweak the timing instead:

.animated {
    /* Apply the scale keyframes for 1s and the rotate
       keyframes for 500ms with a 500ms delay. */
    animation: scale 1s, rotate 500ms 500ms;
}

@keyframes scale {
    from { scale: 0 }
    to   { scale: 1 }
}

@keyframes rotate {
    from { rotate: 0deg }
    to   { rotate: 180deg }
}

Now keyframes applying to transforms are not only easier to author, but you can better separate the timing and the keyframes by composing multiple transform animations. And if you are a seasoned CSS Animations developer, you’ll know how important this can be when you factor in timing functions.

Additionally, animating the new individual transform properties retains the same great performance as animating the transform property since these properties support hardware acceleration.

But what about the transform property? How does it relate to those new individual transform properties?

First, remember that the transform property supports transform functions that are not represented as individual transform properties. There are no equivalent CSS properties for the skew(), skewX() and skewY() functions and no property equivalent to the matrix() function.

But what happens when you specify some of the individual transform properties as well as the transform property? The CSS Transforms Level 2 specification explains how the individual transform properties and the transform-origin and transform properties are composed to form the current transformation matrix. To summarize, first the individual transform properties are applied – translate, rotate, and then scale – and then the functions in the transform property are applied.

This means that there’s a clear model to use those individual transform properties and the transform property together to enhance your ability to transform content on the Web platform.
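
For instance, under that model the following two rules (a sketch with made-up class names) should produce the same rendering, since translate, rotate and scale apply in that order before the transform function list:

```css
.individual-then-transform {
    translate: 50px 0;
    rotate: 45deg;
    scale: 1.5;
    transform: skewX(10deg);
}

/* Equivalent single transform list, in the same order. */
.equivalent-transform {
    transform: translate(50px, 0) rotate(45deg) scale(1.5) skewX(10deg);
}
```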

And before you start using these new properties, it is important that you know how to detect their availability and use transform as a fallback. Here, the @supports rule will allow you to do what you need:

@supports (translate: 0) {
    /* Individual transform properties are supported */
    div {
        translate: 100px 100px;
    }
}

@supports not (translate: 0) {
    /* Individual transform properties are NOT supported */
    div {
        transform: translate(100px, 100px);
    }
}

We encourage you to start exploring how to use those three new properties in Safari Technology Preview in your projects and file bug reports on bugs.webkit.org should you encounter unexpected issues. You can also send a tweet to @webkit or @jonathandavis to share your thoughts on individual transform properties.

December 14, 2020 06:00 PM

December 10, 2020

Release Notes for Safari Technology Preview 117

Surfin’ Safari

Safari Technology Preview Release 117 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 268651-270230.

Web Inspector

  • Elements
    • Added the option to “Edit Breakpoint…” or “Reveal Breakpoint” in Sources Tab (r269337)
    • Fixed an extra closing parenthesis being added after var in the styles panel (r269201)
  • Network
    • Fixed background color of rows from previous session (r269127)
    • Truncated data URLs in the Resources sidebar and Headers panel (r269075)
  • Search
    • Prevented stealing focus from the search field when shown (r269074)
  • Sources
    • Changed the default breakpoint action to be evaluate (r269547)
  • Console
    • Exposed console command line API to breakpoint conditions and actions (r269023, r269044)
    • Fixed using Show JavaScript Console in an empty tab in Safari Technology Preview (r270060)
  • Other Changes
    • Updated styles to use CSS properties with neutral directionality (r269166)


CSS

  • Added support for discrete animations of many CSS properties (r269812, r269333, r269357, r268792, r268718, r268726)
  • Added support for animations on more pseudo-elements (such as :marker) (r269813)
  • Added support for more properties on ::marker (r269774)
  • Added parse support for aspect-ratio CSS property (r269641)
  • Made CSS font shorthands parsable within a worker (r269957)
  • Changed images as flex items to use the overridingLogicalHeight when defined to compute the logical width (r270073)
  • Changed images as flex items to use the overridingLogicalWidth when defined to compute the logical height (r270116)
  • Changed background-size to not accept negative values (r269237)
  • Fixed issues with percentage height on grid item replaced children when the grid item has a scrollbar (r269717)
  • Serialized aspect ratio with spaces around the slash (r268659)
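
The aspect-ratio property mentioned above (parse support only at this point) is written as follows; note that it serializes with spaces around the slash (a sketch; the selector is hypothetical):

```css
.thumbnail {
    width: 100%;
    /* Serializes as "16 / 9", with spaces around the slash. */
    aspect-ratio: 16 / 9;
}
```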


JavaScript

  • Enabled static public class fields (r269922, r269939)
  • Enabled static and instance private class fields (r270066)
  • Implemented Intl.DateTimeFormat.formatRangeToParts (r269706)
  • Implemented Intl.ListFormat (r268956)
  • Aligned %TypedArray% behavior with recent spec adjustments (r269670)
  • Implemented @@species support in ArrayBuffer#slice (r269574)
  • Fixed toLocaleDateString() resolving incorrect date for some old dates (r269502)
  • Resurrected SharedArrayBuffer and Atomics behind a flag (JSC_useSharedArrayBuffer=1) (r269531)


WebAssembly

  • Added wasm atomics instructions, partially behind a flag (JSC_useSharedArrayBuffer=1) (r270208)
  • Fixed opcodes for table.grow and table.size (r269790)
  • Implemented shared WebAssembly.Memory behind a flag (JSC_useSharedArrayBuffer=1) (r269940)
  • Implemented i32 sign-extension-ops (r269929)


Web API

  • Added proper garbage collection to ResizeObserver (r268860)
  • Changed Worklet.addModule() to reject promise with an AbortError when the network load fails (r270033)
  • Changed event targets to be cleared after dispatch if the target pointed to a shadow tree (r269546)
  • Changed WebSocket constructor to not throw when the port is blocked (r269459)
  • Fixed toggling dark mode to update the scrollbar appearance in overflow: scroll elements (r269437)
  • Fixed navigator.clipboard to be exposed on *.localhost pages (r269960)
  • Fixed auto-focus of text input to not select text (r269587)
  • Fixed Canvas drawImage to not raise an IndexSizeError on empty sources (r270126)
  • Fixed getIndexedParameter indexing crash (r270160)
  • Fixed text getting clobbered when assigning to input.defaultValue (r269528)
  • Fixed <input disabled> to fire click events after dispatchEvent (r269452)
  • Fixed the space between minute and meridiem fields in time inputs being too large (r270148)
  • Fixed window.event to not be affected by nodes moving post-dispatch (r269500)
  • Improved exception messages when AudioContext.suspend() / resume() promises are rejected (r268999)
  • Changed promises returned by our DOM API to use the caller’s global instead of the callee’s (r269227)
  • Removed unneeded whitespace between content and <br> (r268958, r269036)

Speech Recognition

  • Added audio capture for SpeechRecognition (r270158)
  • Added a default action for SpeechRecognition permission request (r269918)
  • Implemented basic permission check for SpeechRecognition (r269810)


WebRTC

  • Added WebRTC SFrame transform (r269830)
  • Added infrastructure for WebRTC transforms (r269764)
  • Added support for RTCPeerConnection.onicecandidateerror event (r270101)
  • Added support for RTCRtpScriptTransform (r270107)
  • Added support for VP9 Profile 2 (10-bit color) in WebRTC (r268971)
  • Increased camera failing timer to 30 seconds (r269190)


Media

  • Fixed a video element that could fail to enter picture-in-picture from fullscreen (r268816)
  • Added handling trackId changes across Initialization Segments in MSE (r269121)
  • Added addOutput() and removeOutput() utility functions to AudioSummingJunction (r268820)
  • Added skeleton implementation of Media Session API (r268735)
  • Changed to ensure WebAudio API throws exceptions with useful error messages (r268812)
  • Changed AudioBuffer channels to be neuterable and detachable (r269108)
  • Fixed an infinite loop in sample eviction when duration is NaN in MSE (r270106)
  • Fixed Web Audio continuing to play when navigating off the web page via an iframe (r268893)
  • Fixed poor resampling quality when using AudioContext sampleRate parameter (r270141, r270157)
  • Fixed AudioBuffer.getChannelData(x) to keep returning the same JavaScript wrapper for a given channel (r269081)
  • Fixed AudioContext.suspend() to not reject promise when the audio session is interrupted (r269039)
  • Fixed transparent video poster image to keep element transparent once the first frame is preloaded (r269407)
  • Fixed fetching an audio worklet module using a data URL (r270046)
  • Improved the speed of audio and video element creation, making it up to 50x faster (r269077)

Web Animations

  • Ensured animation updates are not scheduled when there are no styles to update (r269963)
  • Fixed KeyframeEffect.pseudoElement to return a valid string when targeting ::marker or ::first-letter (r269623)
  • Fixed accelerated animations of individual transform properties to apply rotate before scale (r269527)


Scrolling

  • Changed programmatic scroll to stop rubberbanding (r269373, r269559)
  • Changed to update scrolling geometry immediately for programmatic scrolls (r269558)

Scroll Snap

  • Fixed scroll snap specified on :root (r269506)
  • Fixed scroll-snap on root aligning to the body margin edge, not the viewport edge (r269622)
  • Made axis in scroll-snap-type required (r268665)
  • Made scroll-margin independent of scroll snapping and applied it when scrolling to anchors (r269144)
  • Made scroll-padding independent of scroll-snap and have it affect scrollIntoView (r270023)
  • Stopped creating implicit snap points at scrollmin and scrollmax (r268856)
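
Taken together, the Scroll Snap changes above suggest usage like the following sketch (class names are mine): the axis keyword in scroll-snap-type is now required, and scroll-margin / scroll-padding also affect scrollIntoView and scrolling to anchors:

```css
.scroller {
    overflow-x: scroll;
    /* The axis ("x" here) is now required. */
    scroll-snap-type: x mandatory;
    /* Also affects scrollIntoView and scrolling to anchors. */
    scroll-padding: 1rem;
}

.scroller > .item {
    scroll-snap-align: start;
    scroll-margin: 1rem;
}
```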

Private Click Measurement

  • Added persistence for pending ad clicks and attributions so they survive browser restart (r270136)
  • Changed to accept ad click data when the link opens a new window (r269129)
  • Changed attribute and JSON key names according to the W3C conversation (r269886)
  • Switched to JSON report format (r269489)

Web Driver

  • Added handling for surrogate pairs in keyboard actions (r269421)
  • Added support for a sequence of character key presses (r269035)
  • Added handling HTTPS configuration for WebDriver tests (r268723)
  • Fixed elements in Shadow DOM incorrectly marked as stale (r268867)

December 10, 2020 09:15 PM

November 29, 2020

Philippe Normand: Catching up on WebKit GStreamer WebAudio backends maintenance

Igalia WebKit

Over the past few months the WebKit development team has been working on modernizing support for the WebAudio specification. This post highlights some of the changes that were recently merged, focusing on the GStreamer ports.

My fellow WebKit colleague, Chris Dumez, has been very active lately, updating the WebAudio implementation …

By Philippe Normand at November 29, 2020 12:45 PM

November 26, 2020

Víctor Jáquez: Notes on using Emacs (LSP/ccls) for WebKit

Igalia WebKit

I used to regard myself as an austere programmer in terms of tooling: Emacs —with a plain configuration— and grep. This approach forces you to understand all the elements involved in a project.

Some time ago I had to code in Rust, so I needed to learn the language as fast as possible. I looked for packages in MELPA that could help me be productive quickly. Obviously, I installed rust-mode, but I also found racer for auto-completion. I tried it out. It was messy to set up and unstable, but it helped me to code while learning. When I felt comfortable with the base code, I uninstalled it.

This year I returned to work on WebKit. The last time I contributed to it was around five years ago, though now I work in a different area (still in the multimedia stack). WebKit is huge, and because of C++, I found gtags rather limited. Out of curiosity I looked for something similar to racer but for C++, and I spent a while digging into it.

The solution consists in the integration of three MELPA packages:

  • lsp-mode: a client for Language Server Protocol for Emacs.
  • company-mode: a text completion framework.
  • ccls: a C/C++ language server. In addition, emacs-ccls adds more functionality to lsp-mode.

(I know, there's a simpler alternative to lsp-mode, but I haven't tried it yet.)

First, let's explain what LSP is. It stands for Language Server Protocol, a protocol of JSON-RPC messages exchanged between the editor and a language server. It was originally developed by Microsoft for Visual Studio Code, and its purpose is to support auto-completion, finding a symbol's definition, showing early error markers, and so on, inside the editor. Therefore, lsp-mode is an Emacs mode that communicates with different language servers over LSP and operates in Emacs accordingly.

In order to support the auto-completion use case, lsp-mode uses company-mode. This Emacs mode is capable of creating a floating context menu at the point where the editing cursor is placed.

The third part of the puzzle is, of course, the language server. There are language servers for different programming languages. For C and C++ there are two: clangd and ccls. The former uses the Clang compiler; the latter can use Clang, GCC, or MSVC. Throughout this text ccls is used, for reasons explained later. In between, emacs-ccls leverages and extends the support for ccls in lsp-mode, though it's not mandatory.

In short, the basic .emacs configuration, using use-package, would have these lines:

(use-package company
  :config (global-company-mode 1))

(use-package lsp-mode
  :diminish "L"
  :init (setq lsp-keymap-prefix "C-l"
              lsp-enable-file-watchers nil
              lsp-enable-on-type-formatting nil
              lsp-enable-snippet nil)
  :hook (c-mode-common . lsp-deferred)
  :commands (lsp lsp-deferred))

(use-package ccls
  :init (setq ccls-sem-highlight-method 'font-lock)
  :hook ((c-mode c++-mode objc-mode) . (lambda () (require 'ccls) (lsp-deferred))))

The snippet first configures company-mode. It is enabled globally because, normally, it is a nice feature to have, even in non-coding buffers, such as this very one, for writing a blog post in markdown format. Diminish mode hides or abbreviates the mode description in the Emacs’ mode line.

Next comes lsp-mode. It's big and aims to do a lot of things, so we have to tell it to disable certain features: the file watcher, which is not viable in massive projects such as WebKit; snippets (generic text templates), which I don't use; and on-type formatting, since I don't know how the code style is figured out, but in my experience it's always detected wrong, so I disabled it too. Finally, lsp-mode is launched when a buffer uses c-mode-common, which is shared by c++-mode. It is launched deferred, meaning it won't start up until the buffer is visible; this is important since we want to delay the ccls session creation until the buffer's .dir-locals.el file has been processed, where the server is configured for the specific project.

And lastly, the ccls configuration, hooked when c-mode, c++-mode, or objc-mode are loaded, again in a deferred fashion (as explained above).

It’s important to understand how ccls works in order to integrate it in our workflow for a specific project, since it might need to be configured using Emacs’ per-directory local variables.

We are living in a post-Makefile world (almost), and ccls is proof of that: instead of a makefile, it uses a compilation database, a record of the compile options used to build the files in a project. It's commonly described in JSON and generated automatically by build systems such as Meson or CMake, then consumed by ninja or ccls to drive the compilation. Bear in mind that ccls uses a cache, which can eat a couple of gigabytes of disk.
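As an illustration (the flags and the source file name here are hypothetical, not taken from WebKit's actual build), a compilation database is just a JSON array of entries, one per translation unit:

```json
[
  {
    "directory": "/app/webkit/WebKitBuild/Release",
    "command": "c++ -std=c++17 -I../../Source/WTF -c ../../Source/WTF/wtf/SomeFile.cpp",
    "file": "../../Source/WTF/wtf/SomeFile.cpp"
  }
]
```

ccls reads these entries to know exactly how each file is compiled, without ever invoking the build system itself.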

Now, let’s review the concrete details of using these features with WebKit. Let me assume that WebKit local repository is cloned in ~/WebKit.

As you may know, the cool way to compile WebKit is with flatpak. Flatpak adds an indirection in the compilation process, since it’s done in an isolated environment, above the native system. As a consequence, ccls has to be the one inside the Flatpak environment. In ~/.local/bin/webkit-ccls:

#!/bin/sh
set -eu
cd "$HOME/WebKit/"
exec Tools/Scripts/webkit-flatpak -c ccls "$@"

Basically, the script calls ccls inside flatpak, where it is available in the SDK. And this is why ccls rather than clangd: clang is not provided.

By default ccls assumes the compilation database is in the project's root directory, but in our case it's not, so we need to configure the database directory for our WebKit setup. For that, as mentioned above, a .dir-locals.el file is used.

((c-mode . ((indent-tabs-mode . nil)
            (c-basic-offset . 4)))
 (c++-mode . ((indent-tabs-mode . nil)
              (c-basic-offset . 4)))
 (objc-mode . ((indent-tabs-mode . nil)
               (c-basic-offset . 4)))
 (java-mode . ((indent-tabs-mode . nil)
               (c-basic-offset . 4)))
 (change-log-mode . ((indent-tabs-mode . nil)))
 (nil . ((fill-column . 100)
         (ccls-executable . "/home/vjaquez/.local/bin/webkit-ccls")
         (ccls-initialization-options . (:compilationDatabaseDirectory "/app/webkit/WebKitBuild/Release"
                                         :cache (:directory ".ccls-cache")))
         (compile-command . "build-webkit --gtk --debug"))))

As you can see, ccls-executable is defined here, though it's not a safe local variable. So is ccls-initialization-options, which is a safe local variable. It is important to note that the compilation database directory is a path inside flatpak, and to always use the Release path. I don't understand why, but the Debug path didn't work for me. This means that WebKit should be compiled as Release frequently, even if we only use the Debug build for coding (as you may see in my compile-command).

Update: Now we can explain why it’s important to configure lsp-mode as deferred: to avoid connections to ccls before processing the .dir-locals.el file.

And that’s all. Now I have early programming errors detection, auto-completion, and so on. I hope you find these notes helpful.

Update: Sadly, because of the flatpak indirection, finding symbol definitions won't work, because the file paths stored in the ccls cache are relative to flatpak's file system. For that I still rely on global and its Emacs mode.

By vjaquez at November 26, 2020 04:20 PM

November 23, 2020

MediaRecorder API

Surfin’ Safari

Safari Technology Preview 105 and Safari in the latest iOS 14.3 beta enabled support for the MediaRecorder API by default. This API takes as input live audio/video content to produce compressed media. While the immediate use case is to record from the camera and/or microphone, this API can take any MediaStreamTrack as input, be it a capture track, coming from the network using WebRTC, or generated from HTML (Canvas, WebAudio), as illustrated in the chart below.

The generated output, exposed as blobs, can be readily rendered in a video element to preview the content, edit it, and/or upload to servers for sharing with others.

This API can be feature-detected, as can the set of supported file/container formats and audio/video codecs. Safari currently supports the MP4 file format with H.264 as video codec and AAC as audio codec. MediaRecorder support can be checked as follows:

function supportsRecording(mimeType)
{
    if (!window.MediaRecorder)
        return false;
    if (!MediaRecorder.isTypeSupported)
        return mimeType.startsWith("audio/mp4") || mimeType.startsWith("video/mp4");
    return MediaRecorder.isTypeSupported(mimeType);
}

The following example shows how camera and microphone can be recorded as mp4 content and locally previewed on the same page.

<button onclick="startRecording()">start</button><br>
<button onclick="endRecording()">end</button>
<video id="video" autoplay playsInline muted></video>
<script>
let blobs = [];
let stream;
let mediaRecorder;
async function startRecording()
{
    stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.ondataavailable = (event) => {
        // Let's append blobs for now, we could also upload them to the network.
        if (event.data)
            blobs.push(event.data);
    };
    mediaRecorder.onstop = doPreview;
    // Let's receive 1 second blobs
    mediaRecorder.start(1000);
}
function endRecording()
{
    // Let's stop capture and recording
    mediaRecorder.stop();
    stream.getTracks().forEach(track => track.stop());
}
function doPreview()
{
    if (!blobs.length)
        return;
    // Let's concatenate blobs to preview the recorded content
    video.src = URL.createObjectURL(new Blob(blobs, { type: mediaRecorder.mimeType }));
}
</script>

Future work may extend the support to additional codecs as well as supporting options like video/audio bitrates.

getUserMedia in WKWebView

Speaking of Safari in the latest iOS 14.3 beta and local capture, navigator.mediaDevices.getUserMedia can now be exposed to WKWebView applications. navigator.mediaDevices.getUserMedia is automatically exposed if the embedding application is able to natively capture either audio or video. Please refer to Apple documentation to meet these requirements. Access to camera and microphone is gated by a user prompt similar to Safari and SafariViewController prompts. We hope to extend WKWebView APIs to allow applications to further control their camera and microphone management in future releases.

We hope you will like these new features. As always, please let us know if you encounter any bugs (or if you have ideas for future enhancements) by filing bugs on bugs.webkit.org.

November 23, 2020 06:00 PM

November 20, 2020

Paulo Matos: A tour of the for..of implementation for 32bits JSC

Igalia WebKit

We look at the implementation of the for-of intrinsic in 32bit JSC (JavaScriptCore).


By Paulo Matos at November 20, 2020 02:00 PM

November 19, 2020

Release Notes for Safari Technology Preview 116

Surfin’ Safari

Safari Technology Preview Release 116 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 267959-268651.

Web Extensions

  • Added support for non-persistent background pages
  • Fixed browser.tabs.update() to accept calls without a tabId parameter
  • Fixed browser.tabs.update() to allow navigations to a URL with a custom scheme

Web Inspector

  • Sources
    • Added support for creating a local override from resources that failed to load (r267977)
    • Added a + to the Local Overrides section in the navigation sidebar to streamline creating custom local overrides (r267979)
    • Fixed issue where event breakpoints were not editable after being added (r267976)
    • Fixed issue where line-based JavaScript breakpoints were not added on reload (r268629)
    • Fixed issue where the Sources Tab had wrong icon when paused (r268427)

Web Audio API

  • Enabled AudioWorklet API by default (r268459)
  • Added implementation for AudioWorkletGlobalScope.registerProcessor() (r268103)
  • Added implementation for AudioWorkletGlobalScope‘s currentFrame, currentTime, and sampleRate attributes (r268076)
  • Changed to use AudioWorkletProcessor to process audio (r268365)
  • Changed calling AudioContext.resume() right after AudioContext.suspend() to be a no-op (r268368)
  • Changed AudioWorkletGlobalScope to perform a microtask checkpoint after each rendering quantum (r268369)
  • Fixed parameters argument for AudioWorkletProcessor.process() to be spec-compliant (r268414)

Media

  • Enabled video capture by default on macOS (r268052)
  • Added support for MediaRecorder bitrate getters (r268363)
  • Added support for MediaRecorder pause and resume (r268130)
  • Added support for respecting enabled and muted tracks (r267987)
  • Added support for BlobEvent.timecode (r268136)
  • Fixed MediaRecorder .stop to not throw in Inactive state (r268477)
  • Made sure to fire the correct set of events in case MediaRecorder stream has track changes (r268119)

CSS

  • Added support for the individual transform properties translate, rotate, scale, including accelerated animation (r267985, r268627)
  • Fixed flex-grow property to be animatable (r268516)
  • Fixed CSS image-orientation: none to be ignored for cross-origin images (r268249)
  • CSS transform computed style should not reflect individual transform properties (r268263)
  • Added painting CSS highlights over images (r268487)
  • Fixed clip-path: path() ignoring page zooming (r268138)
  • Fixed background-clip: var(--a) invalidating -webkit-background-clip: text when --a: text (r268158)


  • Respect the font size when presenting the <select> dropdown when custom fonts are used (r268126)

JavaScript

  • Changed arguments.callee to become ThrowTypeError if the function has a complex-parameter-list (spec-term) (r268323)
  • Changed BigInt constructor to be constructible while it always throws an error (r268322)
  • Fixed Array.prototype.sort‘s sortBucketSort which accessed an array in an invalid way leading to incorrect results with indexed properties on the prototype chain (r268375)
  • Improved the essential internal methods for %TypedArray% to adhere to spec (r268640)

Web Authentication

  • Removed the alg field from the attestation statement (r268602)


  • Fixed AirPlay menu not showing up when the AirPlay button is clicked (r268308)
  • Improved computation of default audio input and output devices (r268396)

Web API

  • Allowed passive mouse wheel event listeners to not force synchronous scrolling (r268476)
  • Implemented Blob.stream (r268228)
  • Updated FileReader.result to return null if it isn’t done yet (r268232)
  • Improved xhr.response conformance to the specification (r267959)

URL Parsing

  • Aligned URL setters to reasonable behaviors of other browsers (r268050)
  • Changed to parse “#” as a delimiter for fragment identifier in data URIs (r267995)
  • Changed to fail parsing URLs with hosts containing invalid punycode encodings (r267965)
  • Fixed UTF-8 encoding in URL parsing (r267963)

Storage Access API

  • Enabled per-page storage access scope (r267973)

Accessibility

  • Fixed accessibility on Presidential Executive Order pages (r268117, r268206)

WebDriver

  • Fixed WebDriver Input clear/value commands when the target is inside a Shadow DOM (r267978)

November 19, 2020 10:17 PM

November 16, 2020

New WebKit Features in Safari 14

Surfin’ Safari

With the release of Safari 14 for macOS Big Sur, iPadOS 14, iOS 14, and watchOS 7, WebKit brings significant improvements to performance and privacy along with a host of new features for web developers.

Take a look at all of the improvements WebKit is adding with the release of Safari 14.

Safari Web Extensions

This release brings support for Safari Web Extensions. They are a type of extension primarily built with JavaScript, HTML, and CSS, and packaged with native apps. This allows extension developers to maintain a single codebase that can be packaged for other browsers.

It also means developers with extensions for other browsers can easily bring their projects to Safari with a command-line tool. It jump-starts your development by converting your web extension into an Xcode project, ready to build and test. After testing, you can submit it to the App Store.

You can learn more about Safari’s web extension support by watching the “Meet Safari Web Extensions” session from WWDC 2020.

Webpage Translation

WebKit with Safari 14 on macOS Big Sur, iOS 14, and iPad OS 14 allows users to translate webpages between English, Spanish, Simplified Chinese, French, German, Russian, and Brazilian Portuguese. Safari automatically detects the language of webpages and offers translation based on the user’s Preferred Languages list.

Content authors can instruct Safari on the specific elements that should or should not be translated. Enable translation of element contents with an empty translate attribute or translate="yes", or disable with translate="no". It’s best to mark specific elements and avoid using the attribute on a single container for the entire document.
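For example (a hypothetical snippet, not taken from Apple's documentation), a page could opt a product name out of translation while leaving the surrounding prose translatable:

```html
<p>
  <!-- The surrounding text may be translated by Safari... -->
  Download the latest build of
  <!-- ...but the product name is kept verbatim -->
  <span translate="no">Safari Technology Preview</span>
  to try these features.
</p>
```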

Performance Improvements

One area of focus in WebKit was performance. Significant gains improve both page load times and runtime page performance. Loading a previously unvisited page is 13% faster, and loading recently visited pages is 42-52% faster. Tab closing time improved from 3.5 seconds to 50 milliseconds. WebKit also added support for incrementally loading PDF files and now renders the first page up to 60× faster.

For web developers, WebKit improved asynchronous scrolling for iframes and overflow: scroll containers on macOS. Faster IndexedDB operations, for-of loops, JavaScript Promises, JavaScript cookie access, and JavaScript delete operations improve page performance for web developers and users.

WebKit and Safari can now use platform support for HTTP/3 for improved network efficiency and faster load times. HTTP/3 makes use of multiplexed connections over UDP to reduce congestion and transport latency. It all adds up to better perceived performance for your web apps.

For more details, see the “What’s new for web developers” session from WWDC 2020.

Improved Compatibility

Another area of focus was improving WebKit's interoperability. One measure of that is passing Web Platform Tests, a set of tests used by browser developers to ensure implementations are cross-browser compatible, helping developers write more interoperable code. In these releases, WebKit improved the pass rates for over 140,000 tests across Service Workers, SVG, CSS, XHR+Fetch, and more.

Learn more by watching the “What’s new for web developers” session from WWDC 2020.

Privacy Updates

With each release, WebKit refines its privacy protections for users. This year WebKit enabled full third-party cookie blocking and added support for the Storage Access API in Private Browsing mode in Safari. In addition, Safari added a Privacy Report that shows users the trackers that Intelligent Tracking Prevention prevented from accessing identifying information.

Learn more about WebKit’s privacy enhancements in the “CNAME Cloaking and Bounce Tracking Defense” and “Full Third-Party Cookie Blocking and More” blog posts.

Touch ID and Face ID for the Web

Web developers can now support logging into websites with Face ID and Touch ID. New platform authenticator support in WebKit’s Web Authentication implementation provides a highly secure alternative to usernames and passwords. Support for WebAuthn was introduced in Safari 13 on macOS and iOS 13.3 with support for hardware security keys. New in this release is added support for PIN entry and account selection on external Web Authentication security keys.

For more, read the “Meet Face ID and Touch ID for the Web” blog post.

WebP Support

Improvements for media in WebKit include support for a new image format and new video playback capabilities. This release of WebKit in Safari 14 adds support for the WebP open-source image format. It offers content authors smaller file sizes for lossy and lossless formats with advanced features like alpha-channel transparency and animations.

Learn more about WebP support from the “What’s new for web developers” talk from WWDC 2020.

Reserving Layout Space for Images

Another image-related improvement eliminates layout shifting. It comes from a change to how WebKit derives the aspect ratio of an image. Web authors can simply add width and height attributes with numeric values to an <img> element to tell WebKit the proportions to reserve when calculating the image's size from CSS. It's a simple change that significantly improves the user experience.

To see this in action watch the “What’s new for web developers” session from WWDC 2020.
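As a sketch (the file name and dimensions here are illustrative), giving the browser the image's intrinsic proportions looks like this:

```html
<!-- width/height describe the aspect ratio (4:3 here); CSS still controls the
     rendered size, but the browser can reserve space before the image loads. -->
<img src="photo.jpg" width="400" height="300"
     style="width: 100%; height: auto;" alt="A photo">
```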

New CSS Features

Safari 14 supports the image-orientation property in CSS to override WebKit’s default behavior of rotating based on image EXIF data. The default image-orientation: from-image can be set to image-orientation: none to override the behavior and ignore the EXIF orientation flag.
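A minimal sketch of the override described above (the class selector is illustrative):

```css
/* Default behavior: rotate according to the EXIF orientation flag. */
img { image-orientation: from-image; }

/* Opt out for images whose pixels are already upright: */
img.pre-rotated { image-orientation: none; }
```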

New support for the :is() pseudo-selector works as a synonym for the previously supported :matches(). It can be used to match a list of selectors with the specificity of the most specific selector.

It can be used to avoid repetitive selectors. Compare the following:

/* Removing margins from any subsequent headings */
h1, h2, h3, h4, h5, h6 {
    margin-top: 3em;
}

h1 + h2, h1 + h3, h1 + h4, h1 + h5, h1 + h6,
h2 + h3, h2 + h4, h2 + h5, h2 + h6,
h3 + h4, h3 + h5, h3 + h6,
h4 + h5, h4 + h6,
h5 + h6 {
    margin-top: 0;
}

The override could be written with the :is() pseudo-selector like this instead:

:is(h1, h2, h3, h4, h5, h6) + :is(h1, h2, h3, h4, h5, h6) {
    margin-top: 0;
}

The :where() pseudo-selector is also supported and works like :is(), except it resets the specificity back to 0, making it easy to override complex matches.
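To illustrate the difference (the selectors below are hypothetical): with :is() the id inside the list counts toward specificity, while with :where() it does not, so a later plain rule can win:

```css
/* Specificity (1,0,1): the #sidebar id is counted,
   so this rule beats a plain `a` rule. */
:is(#sidebar, .menu) a { color: gray; }

/* Specificity (0,0,1): :where() contributes nothing, so the later
   plain `a` rule below overrides it. */
:where(#sidebar, .menu) a { color: gray; }
a { color: blue; }
```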

Other notable CSS additions include support for line-break: anywhere to break long content before it overflows the container, and image-set() support for all other image functions including image(), -webkit-canvas(), -webkit-cross-fade(), and -webkit-*-gradient().

Learn more about these CSS features by watching the “What’s new for web developers” from WWDC 2020.

Media Enhancements

For video, Safari on iOS 14 adds support for the Picture-in-Picture API for iPhone. On macOS, new support for high-dynamic range (HDR) video playback is added. Content authors can use media-queries or the matchMedia method in JavaScript to detect high-dynamic range display capability and deliver a progressively enhanced experience for users with HDR displays.

    @media only screen and (dynamic-range: high) {
        /* HDR-only CSS rules */
    }

    if (window.matchMedia("(dynamic-range: high)").matches) {
        // HDR-specific JavaScript
    }
You can learn more about these media enhancements by watching the “What’s new for web developers” from WWDC 2020.

JavaScript Improvements

Beyond performance improvements, WebKit added several new capabilities to its JavaScript engine. This release includes support for BigInt, a new datatype for integers larger than Number.MAX_SAFE_INTEGER.

let bigInt = BigInt(Number.MAX_SAFE_INTEGER) + 2n;

Three new types of logical assignment operators are available: AND, OR, and nullish. Using these operators only evaluates the left-hand side of an expression once and can be used non-destructively when assigning values.

let foo = null;

foo ??= 1; // nullish assignment operator
> 1

foo &&= 2; // AND assignment operator
> 2

foo ||= 3; // OR assignment operator
> 2

foo ??= 4; // nullish assignment operator
> 2

WebKit also introduces support for the optional chaining operator that gives you a shortcut for safely accessing object properties.

function optionalChaining(object) {
    return object?.foo;
}

function optionalChainingTranspiled(object) {
    if (object !== null && object !== undefined)
        return object.foo;
    return undefined;
}
There’s also added support for the EventTarget constructor, which means developers can create custom instances of EventTarget of their own design without the overhead of repurposing a DOM element, giving non-DOM objects an interface for dispatching custom events.
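As a small sketch (the Downloader class is invented for illustration), any plain class can extend EventTarget and dispatch events to listeners:

```javascript
// A non-DOM object with a standard event interface.
class Downloader extends EventTarget {
    finish(url) {
        // Attach the payload as a property on a plain Event.
        this.dispatchEvent(Object.assign(new Event("done"), { url }));
    }
}

const downloader = new Downloader();
downloader.addEventListener("done", (event) => {
    console.log(`finished: ${event.url}`);
});
downloader.finish("https://example.com/file");
```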

You can learn more about JavaScript improvements by watching the “What’s new for web developers” from WWDC 2020.

Web Inspector Updates

Web Inspector in Safari 14 on macOS added the Sources Tab, combining the Resources Tab and Debugger Tab. It lists all resources loaded by the inspected page since Web Inspector opened, along with XHR+Fetch resources and long-lived WebSockets. Web Inspector’s JavaScript debugging tools are here too, with all of the stepping and breakpoint controls, organized in a more compact and unified way alongside the resources of the inspected page. The Sources Tab also offers new capabilities such as organizing by file path instead of file type, Local Overrides for completely replacing the content and headers of responses loaded over the network, and the Inspector Bootstrap Script to evaluate JavaScript before anything else in the page.

In the Timelines Tab is the new Media & Animations timeline to capture events related to media elements, CSS animations and CSS transitions. It makes it easy to correlate activity captured in other timelines to state changes in media elements, such as pausing or resuming playback, or CSS animations or transitions, such as when they’re created and each time they iterate.

Among other enhancements, Web Inspector offers improved VoiceOver support and a new HSL color picker with Display-P3 color support.

You can learn more watching the “What’s new in Web Inspector” video session from WWDC 2020 or referring to the Web Inspector Reference documentation.


These improvements are available to users running watchOS 7, iOS 14 and iPadOS 14, macOS Big Sur, macOS Catalina and macOS Mojave. These features were also available to web developers with Safari Technology Preview releases. Changes in this release of Safari were included in the following Safari Technology Preview releases: 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109. Download the latest Safari Technology Preview release to stay on the forefront of future web platform and Web Inspector features. You can also use the WebKit Feature Status page to watch for changes to your favorite web platform features.

Send a tweet to @webkit or @jonathandavis to share your thoughts on this release. If you run into any issues, we welcome your bug reports for Safari, or WebKit bugs for web content issues.

November 16, 2020 05:00 PM

November 12, 2020

CNAME Cloaking and Bounce Tracking Defense

Surfin’ Safari

This blog post covers several enhancements to Intelligent Tracking Prevention (ITP) in Safari 14 on macOS Big Sur, Catalina, and Mojave, iOS 14, and iPadOS 14 to address our latest discoveries in the industry around tracking.

CNAME Cloaking Defense

ITP now caps the expiry of cookies set in so-called third-party CNAME-cloaked HTTP responses to 7 days. On macOS, this enhancement is specific to Big Sur.

What Is CNAME Cloaking?

In the eyes of web browsers, the first party of a website is typically defined by its registrable domain. This means that www.blog.example and comments.blog.example are considered same-site and the same party. If the user loads a webpage from www.blog.example, and that page makes a subresource request to comments.blog.example, that request will carry all cookies that are set to cover the blog.example site, including login cookies and user identity cookies. In addition, the response to that comments.blog.example subresource request can set cookies for blog.example, and those cookies will be first-party cookies.

Enter CNAMEs. CNAME stands for canonical name record and maps one domain name to another as part of the Domain Name System, or DNS. This means a site owner can configure one of their subdomains, such as sub.blog.example, to resolve to thirdParty.example, before resolving to an IP address. This happens underneath the web layer and is called CNAME cloaking — the thirdParty.example domain is cloaked as sub.blog.example and thus has the same powers as the true first party.
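In DNS terms, the setup sketched above (the domains are the post's examples; the TTL and the IP, a documentation address, are illustrative) would look roughly like:

```
; sub.blog.example is an alias for the third party...
sub.blog.example.      300  IN  CNAME  thirdParty.example.
; ...which finally resolves to an address the third party controls.
thirdParty.example.    300  IN  A      203.0.113.7
```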

CNAME Cloaking and Tracking

Cross-site trackers have convinced site owners to set up CNAME cloaking in order to circumvent tracking prevention, such as ITP’s 7-day expiry cap on cookies set in JavaScript. In our blog case, this would be making track.blog.example resolve to tracker.example.

A recent paper from researchers at the Graduate University for Advanced Studies (Sokendai) and the French National Cybersecurity Agency (ANSSI) found 1,762 websites CNAME cloaking 56 trackers in total.

CNAME Cloaking and Website Security

Site owners who set up CNAME cloaking risk full website takeovers or customer cookie hijacking if the CNAME records aren’t properly managed, for instance if CNAME cloaking isn’t decommissioned when no longer in use. It was recently reported that 250 websites of banks, healthcare companies, restaurant chains, and civil rights groups had been compromised through mismanaged CNAME cloaking. In June this year, Microsoft documented these attacks and how their cloud customers should prevent them.

ITP’s Defense Against CNAME Cloaking Tracking

ITP now detects third-party CNAME cloaking requests and caps the expiry of any cookies set in the HTTP response to 7 days. This cap is aligned with ITP’s expiry cap on all cookies created through JavaScript.

Third-party CNAME cloaking is defined as a first-party subresource that resolves through a CNAME that differs from the first-party domain and differs from the top frame host’s CNAME, if one exists. Yes, the whole site can be CNAME cloaked, when it uses so called edge servers.

The best way to explain this is through a table (1p means first-party, 3p means third-party):

1p host, e.g. www.blog.example | 1p subdomain other than the 1p host, e.g. track.blog.example | Capped cookie expiry?
No cloaking                    | No cloaking                              | No cap
No cloaking                    | other.blog.example (1p cloaking)         | No cap
No cloaking                    | tracker.example (3p cloaking)            | 7-day cap
abc123.edge.example (cloaking) | No cloaking                              | No cap
abc123.edge.example (cloaking) | abc123.edge.example (matching cloaking)  | No cap
abc123.edge.example (cloaking) | other.blog.example (1p cloaking)         | No cap
abc123.edge.example (cloaking) | tracker.example (3p cloaking)            | 7-day cap

SameSite=Strict Cookie Jail for Bounce Trackers

In June 2018, we announced an update to ITP to detect and defend against first party bounce trackers. In March 2020, we announced an enhancement to also detect delayed bounce tracking. Since then, we have received a report of one specific website engaged in bounce tracking while also being likely to get frequent user interaction. To combat such issues, we proposed to the W3C Privacy Community Group what we call a SameSite=Strict jail as well as other escalations.

What the SameSite=strict jail does is detect bounce tracking and, at a certain threshold, rewrite all the tracking domain’s cookies to SameSite=strict. This means that they will not be sent in cross-site, first-party navigations, and they can no longer be used for simple redirect-based bounce tracking.

Our implementation is rather relaxed, with the threshold set to 10 unique navigational, first-party redirects (unique in the sense of going to unique domains), and an automatic reset of that counter once the cookies are rewritten to SameSite=strict. This automatically gives the domain a new chance so that they can disengage in bounce tracking and “get out of jail.”

Our current list of domains we subject to this protection is empty because the domain reported to us has stopped their bounce tracking. But this protection remains in our toolbox.

Partitioned Ephemeral IndexedDB

Up until now, WebKit has blocked cross-origin IndexedDB. WebKit now allows partitioned and ephemeral third-party IndexedDB in an effort to align with other browsers now that they are interested in storage partitioning too. You can partake in the ongoing standardization effort for storage partitioning on GitHub.

Partitioned means unique IndexedDB instance per first-party site and ephemeral means in-memory-only, i.e. goes away on browser quit.

Third-Party Cookie Blocking and Storage Access API In Private Browsing

Private Browsing in Safari is based on WebKit’s ephemeral sessions where nothing is persisted to disk. This means ITP would not be able to learn things between launches of Safari. Further, Private Browsing also uses a separate ephemeral session for each new tab the user opens. To uphold this separation between tabs, ITP wouldn’t be able to classify cross-site trackers from the user’s full browsing even in-memory.

However, full third-party cookie blocking doesn’t need classification and is now enabled by default in Private Browsing. This might seem simple to support, but the challenge was to make the Storage Access API work with the aforementioned tab separation. This is how it works: say identityProvider.example wants to request storage access as third-party on the login page for social.example in Tab A. Interacting with identityProvider.example as a first-party website in Tab B will not suffice to allow it to request storage access in Tab A, since that would leak state between the separate ephemeral sessions. Thus, the user must interact with identityProvider.example in the same tab where identityProvider.example later requests storage access as third-party. This ensures that login flows involving two different parties, where third-party cookie access is required, are possible in Private Browsing mode.

Home Screen Web Application Domain Exempt From ITP

Back in March 2020, when we announced ITP’s 7-day cap on all script-writeable storage, developers asked about home screen web applications and whether they were exempt from this 7-day cap. We explained how ITP’s counter of “days of use” and capture of user interaction effectively made sure that the first party of home screen web applications would not be subjected to the new 7-day cap. To make this more clear, we have implemented an explicit exception for the first-party domain of home screen web applications to make sure ITP always skips that domain in its website data removal algorithm.

In addition, the website data of home screen web applications is kept isolated from Safari and thus will not be affected by ITP’s classification of tracking behavior in Safari.

Thanks To My Coworkers

The above updates to WebKit and ITP would not have been possible without the help from Kate, Jiten, Scott, Tommy, Sihui, and David. Thank you!

November 12, 2020 06:30 PM

October 29, 2020

Claudio Saavedra: Thu 2020/Oct/29

Igalia WebKit

In this line of work, we all stumble at least once upon a problem that turns out to be extremely elusive and very tricky to narrow down and solve. If we're lucky, we might have everything at our disposal to diagnose the problem but sometimes that's not the case – and in embedded development it's often not the case. Add to the mix proprietary drivers, lack of debugging symbols, a bug that's very hard to reproduce under a controlled environment, and weeks in partial confinement due to a pandemic and what you have is better described as a very long lucid nightmare. Thankfully, even the worst of nightmares end when morning comes, even if sometimes morning might be several days away. And when the fix to the problem is in an unimaginable place, the story is definitely one worth telling.

The problem

It all started with one of Igalia's customers deploying a WPE WebKit-based browser in their embedded devices. Their CI infrastructure had detected a problem caused when the browser was tasked with creating a new webview (in layman's terms, you can imagine that to be the same as opening a new tab in your browser). Occasionally, this view would never load, causing ongoing tests to fail. For some reason, the test failure had a reproducibility of ~75% in the CI environment, but during manual testing it would occur with less than a 1% probability. For reasons that are beyond the scope of this post, the CI infrastructure was not reachable in a way that would allow access to running processes in order to diagnose the problem more easily. So with only logs at hand and less than a 1/100 chance of reproducing the bug myself, I set out to debug this problem locally.


The first thing that became evident was that, whenever this bug occurred, the WebKit feature known as web extension (an application-specific loadable module that is used to allow the program to have access to the internals of a web page, as well as to enable customizable communication with the process where the page contents are loaded – the web process) wouldn't work. The browser would wait forever for the web extension to load, and since that wouldn't happen, the expected page wouldn't load. The first place to look into, then, is the web process, to try to understand what is preventing the web extension from loading. Enter our good friend GDB, with less than spectacular results thanks to stripped libraries.

#0  0x7500ab9c in poll () from target:/lib/libc.so.6
#1  0x73c08c0c in ?? () from target:/usr/lib/libEGL.so.1
#2  0x73c08d2c in ?? () from target:/usr/lib/libEGL.so.1
#3  0x73c08e0c in ?? () from target:/usr/lib/libEGL.so.1
#4  0x73bold6a8 in ?? () from target:/usr/lib/libEGL.so.1
#5  0x75f84208 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#6  0x75fa0b7e in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#7  0x7561eda2 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#8  0x755a176a in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#9  0x753cd842 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#10 0x75451660 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#11 0x75452882 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#12 0x75452fa8 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#13 0x76b1de62 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#14 0x76b5a970 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#15 0x74bee44c in g_main_context_dispatch () from target:/usr/lib/libglib-2.0.so.0
#16 0x74bee808 in ?? () from target:/usr/lib/libglib-2.0.so.0
#17 0x74beeba8 in g_main_loop_run () from target:/usr/lib/libglib-2.0.so.0
#18 0x76b5b11c in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#19 0x75622338 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#20 0x74f59b58 in __libc_start_main () from target:/lib/libc.so.6
#21 0x0045d8d0 in _start ()

From all the threads in the web process, after much tinkering around it slowly became clear that one of the places to look into is that poll() call. I will spare you the details related to what other threads were doing; suffice to say that whenever the browser would hit the bug, there was a similar stacktrace in one thread, going through libEGL to a call to poll() on top of the stack, that would never return. Unfortunately, a stripped EGL driver coming from a proprietary graphics vendor was a bit of a showstopper, as was the inability to have proper debugging symbols running inside the device (did you know that a non-stripped WebKit library binary with debugging symbols can easily get GDB and your device out of memory?). The best one could do to improve that was to use the gcore feature in GDB, and extract a core from the device for post-mortem analysis. But for some reason, such a stacktrace wouldn't give anything interesting below the poll() call to understand what's being polled here. Did I say this was tricky?

What polls?

Because WebKit is a multiprocess web engine, having system calls that signal, read, and write in sockets communicating with other processes is an everyday thing. Not knowing what a poll() call is doing and who it is trying to listen to is not very good. Because the call is happening under the EGL library, one can presume that it's graphics related, but there are still different possibilities, so trying to find out what this call is polling is a good idea.

A trick I learned while debugging this is that, in the absence of debugging symbols that would give a straightforward look into variables and parameters, one can examine the CPU registers and try to figure out from them what the parameters to function calls are. Let's do that with poll(). First, its signature.

int poll(struct pollfd *fds, nfds_t nfds, int timeout);

Now, let's examine the registers.

(gdb) f 0
#0  0x7500ab9c in poll () from target:/lib/libc.so.6
(gdb) info registers
r0             0x7ea55e58	2124766808
r1             0x1	1
r2             0x64	100
r3             0x0	0
r4             0x0	0

Registers r0, r1, and r2 contain poll()'s three parameters. Because r1 is 1, we know that there is only one file descriptor being polled. fds is a pointer to an array with one element, then. Where is that first element? Well, right there, in the memory pointed to directly by r0. What does struct pollfd look like?

struct pollfd {
  int   fd;         /* file descriptor */
  short events;     /* requested events */
  short revents;    /* returned events */
};

What we are interested in here is the contents of fd, the file descriptor that is being polled. Memory alignment is again on our side; we don't need any pointer arithmetic here. We can inspect the register r0 directly and find out what the value of fd is.

(gdb) print *0x7ea55e58
$3 = 8

So we now know that the EGL library is polling the file descriptor with an identifier of 8. But where is this file descriptor coming from? What is on the other end? The /proc file system can be helpful here.

# pidof WPEWebProcess
1944 1196
# ls -lh /proc/1944/fd/8
lrwx------    1 x x      64 Oct 22 13:59 /proc/1944/fd/8 -> socket:[32166]

So we have a socket. What else can we find out about it? Turns out, not much without the unix_diag kernel module, which was not available in our device. But we are slowly getting closer. Time to call another good friend.

Where GDB fails, printf() triumphs

Something I have learned from many years working with a project as large as WebKit is that debugging symbols can be very difficult to work with. To begin with, it takes ages to build WebKit with them. When cross-compiling, it's even worse. And then, very often the target device doesn't even have enough memory to load the symbols when debugging. So they can be pretty useless. It's then that just using fprintf() and logging useful information can simplify things. Since we know that it's at some point during initialization of the web process that we end up stuck, and we also know that we're polling a file descriptor, let's find some early calls in the code of the web process and add some fprintf() calls with a bit of information, especially in those that might have something to do with EGL. What can we find out now?

Oct 19 10:13:27.700335 WPEWebProcess[92]: Starting
Oct 19 10:13:27.720575 WPEWebProcess[92]: Initializing WebProcess platform.
Oct 19 10:13:27.727850 WPEWebProcess[92]: wpe_loader_init() done.
Oct 19 10:13:27.729054 WPEWebProcess[92]: Initializing PlatformDisplayLibWPE (hostFD: 8).
Oct 19 10:13:27.730166 WPEWebProcess[92]: egl backend created.
Oct 19 10:13:27.741556 WPEWebProcess[92]: got native display.
Oct 19 10:13:27.742565 WPEWebProcess[92]: initializeEGLDisplay() starting.

Two interesting findings from the fprintf()-powered logging here: first, it seems that file descriptor 8 is one known to libwpe (the general-purpose library that powers the WPE WebKit port). Second, that the last EGL API call right before the web process hangs on poll() is a call to eglInitialize(). fprintf(), thanks for your service.

Number 8

We now know that file descriptor 8 is coming from WPE and is not internal to the EGL library. libwpe gets this file descriptor from the UI process, as one of the many creation parameters that are passed via IPC to the nascent process in order to initialize it. Turns out that this file descriptor in particular, the so-called host client file descriptor, is the one that the freedesktop backend of libwpe, from here onwards WPEBackend-fdo, creates when a new client is set to connect to its Wayland display. In a nutshell, in the presence of a new client, a Wayland display is supposed to create a pair of connected sockets, create a new client on the display side, give it one of the file descriptors, and pass the other one to the client process. Because this will be useful later on, let's see how that is currently implemented in WPEBackend-fdo.

    int pair[2];
    if (socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0, pair) < 0)
        return -1;

    int clientFd = dup(pair[1]);

    wl_client_create(m_display, pair[0]);

The file descriptor we are tracking down is the client file descriptor, clientFd. So we now know what's going on in this socket: Wayland-specific communication. Let's enable Wayland debugging next, by running all relevant processes with WAYLAND_DEBUG=1. We'll get back to that code fragment later on.

A Heisenbug is a Heisenbug is a Heisenbug

Turns out that enabling Wayland debugging output for a few processes is enough to alter the state of the system in such a way that the bug does not happen at all when doing manual testing. Thankfully the CI's reproducibility is much higher, so after waiting overnight for the CI to continuously run until it hit the bug, we have logs. What do the logs say?

WPEWebProcess[41]: initializeEGLDisplay() starting.
  -> wl_display@1.get_registry(new id wl_registry@2)
  -> wl_display@1.sync(new id wl_callback@3)

So the EGL library is trying to fetch the Wayland registry and it's doing a wl_display_sync() call afterwards, which will block until the server responds. That's where the blocking poll() call comes from. So, it turns out, the problem is not necessarily on this end of the Wayland socket, but perhaps on the other side, that is, in the so-called UI process (the main browser process). Why is the Wayland display not replying?

The loop

Something that is worth mentioning before we move on is how the WPEBackend-fdo Wayland display integrates with the system. This display is a nested display, with each web view a client, while it is itself a client of the system's Wayland display. This can be a bit confusing if you're not very familiar with how Wayland works, but fortunately there is good documentation about Wayland elsewhere.

The way that the Wayland display in the UI process of a WPE WebKit browser is integrated with the rest of the program, when it uses WPEBackend-fdo, is through the GLib main event loop. Wayland itself has an event loop implementation for servers, but for a GLib-powered application it can be useful to use GLib's and integrate Wayland's event processing with the different stages of the GLib main loop. That is precisely how WPEBackend-fdo is handling its clients' events. As discussed earlier, when a new client is created a pair of connected sockets are created and one end is given to Wayland to control communication with the client. GSourceFunc functions are used to integrate Wayland with the application main loop. In these functions, we make sure that whenever there are pending messages to be sent to clients, those are sent, and whenever any of the client sockets has pending data to be read, Wayland reads from it and dispatches the events that might be necessary in response to the incoming data. And here is where things start getting really strange, because after doing a bit of fprintf()-powered debugging inside the Wayland GSourceFuncs functions, it became clear that the Wayland events from the clients were never dispatched, because the dispatch() GSourceFunc was not being called, as if there was nothing coming from any Wayland client. But how is that possible, if we already know that the web process client is actually trying to get the Wayland registry?

To move forward, one needs to understand how the GLib main loop works, in particular with Unix file descriptor sources. A very brief summary of this is that, during an iteration of the main loop, GLib will poll file descriptors to see if there are any interesting events to be reported back to their respective sources, in which case the sources will decide whether to trigger the dispatch() phase. A simple source might decide in its dispatch() method to directly read or write from/to the file descriptor; a Wayland display source (as in our case) will call wl_event_loop_dispatch() to do this for us. However, if the source doesn't find any interesting events, or if the source decides that it doesn't want to handle them, the dispatch() invocation will not happen. More on the GLib main event loop in its API documentation.

So it seems that for some reason the dispatch() method is not being called. Does that mean that there are no interesting events to read from? Let's find out.

System call tracing

Here we resort to another helpful tool, strace. With strace we can try to figure out what is happening when the main loop polls file descriptors. The strace output is huge (because it easily takes over a hundred attempts to reproduce this), but we already know some of the calls that involve file descriptors from the code we looked at above, when the client is created. So we can use those calls as a starting point when searching through the several MBs of logs. Fast-forward to the relevant logs.

socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0, [128, 130]) = 0
dup(130)               = 131
close(130)             = 0
fcntl64(128, F_DUPFD_CLOEXEC, 0) = 130
epoll_ctl(34, EPOLL_CTL_ADD, 130, {EPOLLIN, {u32=1639599928, u64=1639599928}}) = 0

What we see there is, first, WPEBackend-fdo creating a new socket pair (128, 130) and then, when file descriptor 130 is passed to wl_client_create() to create a new client, Wayland adds that file descriptor to its epoll() instance for monitoring clients, which is referred to by file descriptor 34. This way, whenever there are events in file descriptor 130, we will hear about them in file descriptor 34.

So what we would expect to see next is that, after the web process is spawned, when a Wayland client is created using the passed file descriptor and the EGL driver requests the Wayland registry from the display, there should be a POLLIN event coming in file descriptor 34 and, if the dispatch() call for the source was called, an epoll_wait() call on it, as that is what wl_event_loop_dispatch() would do when called from the source's dispatch() method. But what do we have instead?

poll([{fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=59, events=POLLIN}, {fd=110, events=POLLIN}, {fd=114, events=POLLIN}, {fd=132, events=POLLIN}], 6, 0) = 1 ([{fd=34, revents=POLLIN}])
recvmsg(30, {msg_namelen=0}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)

strace can be a bit cryptic, so let's explain those two function calls. The first one is a poll on a series of file descriptors (including 30 and 34) for POLLIN events. The return value of that call tells us that there is a POLLIN event in file descriptor 34 (the Wayland display epoll() instance for clients). But unintuitively, the call right after is trying to read a message from socket 30 instead, which we know doesn't have any pending data at the moment, and consequently returns an error value with an errno of EAGAIN (Resource temporarily unavailable).

Why is the GLib main loop triggering a read from 30 instead of 34? And who is 30?

We can answer the latter question first. Breaking on a running UI process instance at the right time shows who is reading from the file descriptor 30:

#1  0x70ae1394 in wl_os_recvmsg_cloexec (sockfd=30, msg=msg@entry=0x700fea54, flags=flags@entry=64)
#2  0x70adf644 in wl_connection_read (connection=0x6f70b7e8)
#3  0x70ade70c in read_events (display=0x6f709c90)
#4  wl_display_read_events (display=0x6f709c90)
#5  0x70277d98 in pwl_source_check (source=0x6f71cb80)
#6  0x743f2140 in g_main_context_check (context=context@entry=0x2111978, max_priority=<optimized out>, fds=fds@entry=0x6165f718, n_fds=n_fds@entry=4)
#7  0x743f277c in g_main_context_iterate (context=0x2111978, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>)
#8  0x743f2ba8 in g_main_loop_run (loop=0x20ece40)
#9  0x00537b38 in ?? ()

So it's also Wayland, but on a different level. This is the Wayland client source (remember that the browser is also a Wayland client?), which is installed by cog (a thin browser layer on top of WPE WebKit that makes writing browsers easier) to process, among others, input events coming from the parent Wayland display. Looking at the cog code, we can see that the wl_display_read_events() call happens only if GLib reports that there is a G_IO_IN (POLLIN) event in its file descriptor, but we already know that this is not the case, as per the strace output. So at this point we know that there are two things here that are not right:

  1. A FD source with a G_IO_IN condition is not being dispatched.
  2. A FD source without a G_IO_IN condition is being dispatched.

Someone here is not telling the truth, and as a result the main loop is dispatching the wrong sources.

The loop (part II)

It is at this point that it would be a good idea to look at what exactly the GLib main loop is doing internally in each of its stages and how it tracks the sources and file descriptors that are polled and that need to be processed. Fortunately, debugging symbols for GLib are very small, so debugging this step by step inside the device is rather easy.

Let's look at how the main loop decides which sources to dispatch, since for some reason it's dispatching the wrong ones. Dispatching happens in the g_main_dispatch() method. This method goes over a list of pending source dispatches and, after a few checks and setting the stage, the dispatch method for the source gets called. How is a source set as having a pending dispatch? This happens in g_main_context_check(), where the main loop checks the results of the polling done in this iteration and runs the check() method for sources that are not ready yet so that they can decide whether they are ready to be dispatched or not. Breaking into the Wayland display source, I know that the check() method is called. How does this method decide to be dispatched or not?

    [](GSource* base) -> gboolean
    {
        auto& source = *reinterpret_cast<Source*>(base);
        return !!source.pfd.revents;
    },

In this lambda function we're returning TRUE or FALSE, depending on whether the revents field in the GPollFD structure has been filled during the polling stage of this iteration of the loop. A return value of TRUE indicates to the main loop that we want our source to be dispatched. From the strace output, we know that there is a POLLIN (or G_IO_IN) condition, but we also know that the main loop is not dispatching it. So let's look at what's in this GPollFD structure.

For this, let's go back to g_main_context_check() and inspect the array of GPollFD structures that it received when called. What do we find?

(gdb) print *fds
$35 = {fd = 30, events = 1, revents = 0}
(gdb) print *(fds+1)
$36 = {fd = 34, events = 1, revents = 1}

That's the result of the poll() call! So far so good. Now the method is supposed to update the polling records it keeps and uses when calling each of the sources' check() functions. What do these records hold?

(gdb) print *pollrec->fd
$45 = {fd = 19, events = 1, revents = 0}
(gdb) print *(pollrec->next->fd)
$47 = {fd = 30, events = 25, revents = 1}
(gdb) print *(pollrec->next->next->fd)
$49 = {fd = 34, events = 25, revents = 0}

We're not interested in the first record quite yet, but clearly there's something odd here. The polling records are showing a different value in the revents fields for both 30 and 34. Are these records updated correctly? Let's look at the algorithm that is doing this update, because it will be relevant later on.

  pollrec = context->poll_records;
  i = 0;
  while (pollrec && i < n_fds)
    {
      while (pollrec && pollrec->fd->fd == fds[i].fd)
        {
          if (pollrec->priority <= max_priority)
            {
              pollrec->fd->revents =
                fds[i].revents & (pollrec->fd->events | G_IO_ERR | G_IO_HUP | G_IO_NVAL);
            }
          pollrec = pollrec->next;
        }

      i++;
    }


In simple words, what this algorithm does is traverse the polling records and the GPollFD array simultaneously, updating the polling records' revents with the results of polling. From reading how the pollrec linked list is built internally, it's possible to see that it's purposely sorted by increasing file descriptor identifier value. So the first item in the list will have the record for the lowest file descriptor identifier, and so on. The GPollFD array is also built in this way, allowing for a nice optimization: if more than one polling record – that is, more than one polling source – needs to poll the same file descriptor, this can be done at once. This is why this otherwise O(n^2) nested loop can actually be reduced to linear time.

One thing stands out here though: the linked list is only advanced when we find a match. Does this mean that we always have a match between polling records and the file descriptors that have just been polled? To answer that question we need to check how the array of GPollFD structures is filled. This is done in g_main_context_query(), as we hinted before. I'll spare you the details, and just focus on what seems relevant here: when is a poll record not used to fill a GPollFD?

  n_poll = 0;
  lastpollrec = NULL;
  for (pollrec = context->poll_records; pollrec; pollrec = pollrec->next)
      if (pollrec->priority > max_priority)
          continue;

Interesting! If a polling record belongs to a source whose priority is lower than the maximum priority that the current iteration is going to process, the polling record is skipped. Why is this?

In simple terms, this happens because each iteration of the main loop finds out the highest priority among the sources that are ready in the prepare() stage, before polling, and then only those file descriptor sources with at least such a priority are polled. The idea behind this is to make sure that high-priority sources are processed first, and that no file descriptor sources with lower priority are polled in vain, as they shouldn't be dispatched in the current iteration.

GDB tells me that the maximum priority in this iteration is -60. From an earlier GDB output, we also know that there's a source for file descriptor 19 with priority 0.

(gdb) print *pollrec
$44 = {fd = 0x7369c8, prev = 0x0, next = 0x6f701560, priority = 0}
(gdb) print *pollrec->fd
$45 = {fd = 19, events = 1, revents = 0}

Since 19 is lower than 30 and 34, we know that this record comes before theirs in the linked list (and as it happens, it's the first one in the list too). But we know that, because its priority is 0, it is too low to be added to the file descriptor array to be polled. Let's look at the loop again.

  pollrec = context->poll_records;
  i = 0;
  while (pollrec && i < n_fds)
    {
      while (pollrec && pollrec->fd->fd == fds[i].fd)
        {
          if (pollrec->priority <= max_priority)
            {
              pollrec->fd->revents =
                fds[i].revents & (pollrec->fd->events | G_IO_ERR | G_IO_HUP | G_IO_NVAL);
            }
          pollrec = pollrec->next;
        }

      i++;
    }


The first polling record was skipped during the filling of the GPollFD array, so the condition pollrec && pollrec->fd->fd == fds[i].fd is never going to be satisfied, because 19 is not in the array. The innermost while() is not entered, and as such the pollrec list pointer never moves forward to the next record. So no polling record is updated here, even though we have updated revents information from the polling results.

What happens next should be easy to see. The check() method for all polled sources is called with outdated revents. In the case of the source for file descriptor 30, we wrongly tell it there's a G_IO_IN condition, so it asks the main loop to dispatch it, triggering a wl_connection_read() call in a socket with no incoming data. For the source with file descriptor 34, we tell it that there's no incoming data and its dispatch() method is not invoked, even though on the other side of the socket we have a client waiting for data to come and blocking in the meantime. This explains what we see in the strace output above. If the source with file descriptor 19 continues to be ready and with its priority unchanged, then this situation repeats in every further iteration of the main loop, leading to a hang in the web process that is forever waiting for the UI process to read its socket pipe.

The bug – explained

I have been using GLib for a very long time, and I have only fixed a couple of minor bugs in it over the years. Very few actually, which is why it was very difficult for me to come to accept that I had found a bug in one of the most reliable and complex parts of the library. Impostor syndrome is a thing and it really gets in the way.

But in a nutshell, the bug in the GLib main loop is that the very clever linear update of records is missing something very important: it should skip to the first matching polling record before attempting to update its revents. Without this, in the presence of a file descriptor source with the lowest file descriptor identifier and also a lower priority than the cutoff priority in the current main loop iteration, revents in the polling records are not updated and therefore the wrong sources can be dispatched. The simplest patch to avoid this would look as follows.

   i = 0;
   while (pollrec && i < n_fds)
     {
+      while (pollrec && pollrec->fd->fd != fds[i].fd)
+        pollrec = pollrec->next;
+
       while (pollrec && pollrec->fd->fd == fds[i].fd)
         {
           if (pollrec->priority <= max_priority)

Once we find the first matching record, let's update all consecutive records that also match and need an update, then let's skip to the next record, rinse and repeat. With this two-line patch, the web process was finally unlocked, the EGL display initialized properly, the web extension and the web page were loaded, CI tests started passing again, and this exhausted developer could finally put his mind to rest.

A complete patch, including improvements to the code comments around this fascinating part of GLib and also a minimal test case reproducing the bug, has already been reviewed by the GLib maintainers and merged to both stable and development branches. I expect that at least some GLib sources will start being called in a different (but correct) order from now on, so keep an eye on your GLib sources. :-)

Standing on the shoulders of giants

At this point I should acknowledge that without the support from my colleagues in the WebKit team in Igalia, getting to the bottom of this problem would have probably been much harder and perhaps my sanity would have been at stake. I want to thank Adrián and Žan for their input on Wayland, debugging techniques, and for allowing me to bounce ideas and findings back and forth as I went deeper into this rabbit hole, helping me to step out of dead-ends, reminding me to use tools out of my everyday box, and ultimately, to be brave enough to doubt GLib's correctness, something that much more often than not I take for granted.

Thanks also to Philip and Sebastian for their feedback and prompt code review!

October 29, 2020 01:10 PM

October 22, 2020

Release Notes for Safari Technology Preview 115

Surfin’ Safari

Safari Technology Preview Release 115 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 267325-267959.

Web Inspector

  • Sources Tab
    • Added a checkbox to the popover when configuring a local override to allow it to skip the network (r267723)

Web Audio

  • Enabled the modern unprefixed WebAudio API (r267488, r267504)
  • Changed AnalyserNode to downmix input audio to mono (r267346)
  • Changed AnalyserNode’s getByteFrequencyData() and getFloatFrequencyData() to only do FFT analysis once per render quantum (r267349)
  • Changed AudioBufferSourceNode to update grain parameters when the buffer is set after rendering has started (r267386)
  • Updated AudioParam.setValueCurveAtTime() to have an implicit call to setValueAtTime() at the end (r267435)
  • Updated AudioParams with automations to process timelines (r267432)
  • Fixed BiquadFilterNode’s lowpass and highpass filters (r267444)
  • Fixed Web Audio API outputting silence for 302 redirected resource (r267507, r267532)
  • Made AudioBufferSourceNode loop fixes (r267443)
  • Changed to properly handle AudioParam.setTargetAtTime() followed by a ramp (r267381)
  • Improved AudioBufferSourceNode resampling (r267453)
  • Added stubs for AudioWorklet (r267744)
  • Added basic infrastructure for AudioWorklet (r267859)
  • Added stubs for AudioWorkletProcessor and AudioWorkletGlobalScope (r267891)


JavaScript

  • Fixed BigInt to work with Map and Set (r267373)
  • Enabled Intl.DateTimeFormat dayPeriod (r267454)
  • Updated Intl rounding behavior to align with specifications update (r267500)
  • Updated functions to consistently enumerate length property before name property (r267364)
  • Updated Array.prototype.sort to be consistent with specifications (r267514)
  • Implemented the item() method proposal; note that it will be renamed to at() later (r267814)


  • Fixed Performance.navigation and Performance.timing being incorrectly exposed to workers (r267333)
  • Updated User Timing interfaces to User Timing Level 3 (r267402)
  • Fixed visibilitychange:hidden event to fire during page navigations (r267614)
  • Re-aligned HTMLElement with the HTML spec (r267893)

Media

  • Added support for HTMLMediaElement.setSinkId (r267472)
  • Fixed webkitfullscreenchange to fire for Shadow DOM elements (r267724)

CSS

  • Added support for the individual transform properties translate, rotate, scale, including accelerated animation (r267887, r267937, r267958)
  • Changed to clear the override width to properly compute percent margins in CSS Grid (r267503)
  • Implemented the CSS math-style property (r267578)
  • Implemented row-gap and column-gap for flex layout (r267829)
  • Implemented list-style-type: <string> (r267940)
  • Fixed CSS Selector an-plus-b serialization (r267812)
  • Fixed CSS serialization to emit comments between certain tokens as expected (r267766)
  • Fixed CSS variable causing a background url() to get resolved with a different base (r267951)
  • Updated to repaint as needed when adding and removing highlights (r267863)

Web Authentication

  • Changed to not set the UV option if the authenticator doesn’t support it (r267369)

Selection API

  • Fixed selectAllChildren to return InvalidNodeTypeError when passed a DocumentType node (r267327)
  • Improved VisibleSelection, FrameSelection, and DOMSelection to preserve anchor and focus (r267329)

WebRTC

  • Updated toRTCIceProtocol to handle ssltcp candidates (r267401)

WebDriver

  • Added support for accessing the ‘SameSite’ cookie attribute (r267919)
  • Fixed several issues when switching to new browser context (r267918)

October 22, 2020 09:00 PM

October 19, 2020

Meet Face ID and Touch ID for the Web

Surfin’ Safari

People often see passwords as the original sin of authentication on the web. Passwords can be easy to guess and vulnerable to breaches. Frequent reuse of the same password across the web makes breaches even more profitable. As passwords are made stronger and unique, they can quickly become unusable for many users. Passwords indeed look notorious, but are passwords themselves the problem, or is it their use as a sole factor for authentication?

Many believe the latter, and thus multi-factor authentication has become more and more popular. The introduction of a second factor does fix most of the security issues with passwords, but it inevitably makes the whole authentication experience cumbersome with an additional step. Therefore, multi-factor authentication has not become the de facto authentication mechanism on the web. Face ID and Touch ID for the web provides both the security guarantees of multi-factor authentication and ease of use. It offers multi-factor authentication in a single step. Using this technology, available on over a billion capable Apple devices, web developers can now broadly offer traditional multi-factor authentication with a smooth, convenient experience. And being built on top of the Web Authentication API makes Face ID and Touch ID phishing resistant as well.

This blog post extends the content of WWDC 2020 “Meet Face ID and Touch ID for the web” session by providing detailed examples to assist developers’ adoption of this new technology, including how to manage different user agent user interfaces, how to propagate user gestures from user-activated events to WebAuthn API calls, and how to interpret Apple Anonymous Attestation. This article will end by summarizing the unique characteristics of Apple’s platform authenticator and the current status of security key support. If you haven’t heard about WebAuthn before, you’re strongly encouraged to first watch the WWDC 2020 session, which covers the basic concepts. Otherwise, please enjoy.

Managing User Experiences

Although user agents are not required to offer UI guidance to users during WebAuthn flows, the reality is that all of them do. This allows user agents to share some of the burden from websites to manage the user experience, but it creates another complexity for websites as each user agent has a different way of presenting the WebAuthn ceremony in its UI. A WebAuthn ceremony could either be the authentication process or the registration process. This section presents how WebAuthn ceremony options map to WebKit/Safari’s UI and the recommended user experience for Face ID and Touch ID for the web.

One challenge is to manage different user experiences among the platform authenticator and security keys. Although the WebAuthn API allows presenting both options to the user simultaneously, it’s not the best approach. First, most users are probably only familiar with the branding of the platform authenticator, i.e., Face ID and Touch ID on Apple’s platforms, but are unfamiliar with security keys. Offering both at the same time can confuse users and make it difficult for them to decide what to do. Secondly, the platform authenticator has different behaviors and use cases from security keys. For example, Face ID and Touch ID are suitable for use as a more convenient, alternative mechanism to sign in when most security keys are not. And credentials stored in security keys can often be used across different devices and platforms while those stored in the platform authenticator are typically tied to a platform and a device. Therefore, it is better to present these two options to the user separately.

Presenting Face ID and Touch ID Alone

What follows is the recommended way to invoke Face ID and Touch ID for the web. Below is the corresponding Safari UI for registration ceremonies. Here, the Relying Party ID is picked to be displayed in the dialog.

Here is the corresponding code snippet to show the above dialog.

const options = {
    publicKey: {
        rp: { name: "example.com" },
        user: {
            name: "john.appleseed@example.com",
            id: userIdBuffer,
            displayName: "John Appleseed"
        },
        pubKeyCredParams: [ { type: "public-key", alg: -7 } ],
        challenge: challengeBuffer,
        authenticatorSelection: { authenticatorAttachment: "platform" }
    }
};

const publicKeyCredential = await navigator.credentials.create(options);

The essential option is to specify authenticatorSelection: { authenticatorAttachment: "platform" } , which tells WebKit to only invoke the platform authenticator. After the publicKeyCredential is returned, one of the best practices is to store the Credential ID in a server-set, secure, httpOnly cookie, and mark its transport as "internal". This cookie can then be used to improve the user experience of future authentication ceremonies.

To protect users from tracking, the WebAuthn API doesn’t allow websites to query the existence of credentials on a device. This important privacy feature, however, requires some extra effort for websites to store provisioned Credential IDs in a separate source and query it before the authentication ceremony. The separate source is often on the backend server. This practice works well for security keys given that they can be used across platforms. Unfortunately, it does not work for the platform authenticator as credentials can only be used on the device where they were created. A server-side source cannot tell whether or not a particular platform authenticator indeed preserves a credential. Hence, a cookie is especially useful. This cookie should not be set through the document.cookie API since Safari’s Intelligent Tracking Prevention caps the expiry of such cookies to seven days. It’s also important to mark those credentials as "internal" such that websites could supply it in the authentication ceremony options to prevent WebKit from asking users for security keys at the same time.
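To make this concrete, here is a minimal sketch of building such a server-set cookie value; the cookie name and the one-year lifetime are illustrative assumptions, not anything mandated by the WebAuthn API.

```javascript
// Hypothetical sketch: after a successful registration ceremony, the server
// stores the base64url-encoded Credential ID in a Secure, HttpOnly cookie.
// The cookie name and lifetime below are illustrative choices.
function credentialIdCookie(credentialIdBase64url) {
    return [
        `webauthn-credential-id=${credentialIdBase64url}`,
        "Max-Age=31536000", // one year; a server-set cookie is not capped to seven days
        "Secure",           // only sent over HTTPS
        "HttpOnly",         // invisible to document.cookie
        "SameSite=Strict",
        "Path=/"
    ].join("; ");
}
```

A server would emit this value in a Set-Cookie response header and read it back on later visits to decide whether to offer Face ID or Touch ID sign-in.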

Below are two different UIs for authentication ceremonies. The first one is streamlined for the case where the user agent only has a single credential, while the second one shows how the user agent allows the user to select one of many credentials. For both cases, only user.name submitted in the registration ceremony is selected to display. For the second case, the order of the list is sorted according to the last used date of the credential. WebKit keeps track of the last used date. Websites thus do not need to worry about it.

Here is the corresponding code snippet to show the above dialogs.

const options = {
    publicKey: {
        challenge: challengeBuffer,
        allowCredentials: [
            { type: "public-key", id: credentialIdBuffer1, transports: ["internal"] },
            // ... more Credential IDs can be supplied.
        ]
    }
};

const publicKeyCredential = await navigator.credentials.get(options);

Note that an improvement could be made to WebKit such that transports: ["internal"] is not necessary to prevent WebKit from asking users for security keys, as long as all allowed credentials are found within the platform authenticator; however, that would only cover the happy path. In the case where no credentials are found, this extra property tells WebKit to show an error message instead of asking the user for security keys.

Presenting Face ID and Touch ID along with Security Keys

Despite the fact that the following usage is discouraged, WebKit/Safari has prepared dedicated UI to allow the user to select a security key in addition to the platform authenticator. Below is the one for registration ceremonies.

The above dialog can be obtained by deleting authenticatorSelection: { authenticatorAttachment: "platform" } from the registration ceremony code snippet above.

The above dialog will be shown if any entry in the allowCredentials array from the authentication ceremony code snippet above doesn’t have the transports: ["internal"] property.

Note that security keys can be used immediately in both cases after the UI is shown. The “Use Security Key” and “Account from Security Key” options are there to show instructions on how to interact with security keys.

Specifying allowCredentials or not

allowCredentials is optional for authentication ceremonies. However, omitting it will result in undetermined behavior in WebKit/Safari’s UI. If credentials are found, the authentication ceremony UI above will be shown. If no credentials are found, WebKit will ask the user for their security keys. Therefore, it is highly recommended not to omit this option.
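Putting the cookie practice and the allowCredentials recommendation together, a sketch of rebuilding the allowCredentials entry from a Credential ID the server read back from its cookie might look like this. The base64url encoding of the stored ID is an assumption; the decoding uses only standard APIs (atob is available in browsers and modern Node.js).

```javascript
// Hypothetical sketch: decode a base64url Credential ID (as a server might
// have stored it in a cookie) and build the allowCredentials array for the
// authentication ceremony.
function base64urlToUint8Array(base64url) {
    const base64 = base64url.replace(/-/g, "+").replace(/_/g, "/")
        .padEnd(Math.ceil(base64url.length / 4) * 4, "=");
    const binary = atob(base64);
    return Uint8Array.from(binary, c => c.charCodeAt(0));
}

function makeAllowCredentials(credentialIdBase64url) {
    return [{
        type: "public-key",
        id: base64urlToUint8Array(credentialIdBase64url),
        transports: ["internal"] // keep WebKit from asking for security keys
    }];
}
```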

Propagating User Gestures

Unsolicited permission prompts are annoying. Mozilla has conducted surveys [1, 2] that verify this. Even though WebAuthn prompts are not as often seen on the web as notification prompts today, this situation will change with the release of Face ID and Touch ID for the web.

Websites don’t ask for notification permission for fun. They ask because notifications can bring users back to their sites and increase their daily active users metric. A similar financial incentive could be found with WebAuthn prompts, especially when platform authenticators are available, as a fulfilled authentication request results in a high-fidelity, persistent unique identifier of the user. This is a universal truth about authentication, and that is why many sites ask for it before users even interact with the site. Though it is inevitable that WebAuthn credentials will be leveraged to serve targeted ads to users, at least a protection similar to the one Mozilla applied in Firefox for notification permission prompts can be utilized to make those WebAuthn prompts less annoying to users: requiring user gestures for the WebAuthn API to eliminate annoying ‘on load’ prompts.

We foresaw this problem some time ago and filed an issue on the WebAuthn specification, but it didn’t get much traction back then. One reason is that it is a breaking change. Another reason is that the risk is not as high with security keys, since they are not that popular and not always attached to the platform; the number of unsolicited prompts has been surprisingly low. The situation is different with the release of Face ID and Touch ID for the web. So, Face ID and Touch ID for the web require user gestures to function. (User gestures are not required for security keys for backward compatibility.)

A user gesture is an indicator to signal WebKit that the execution of the current JavaScript context is a direct result of a user interaction, or more precisely from a handler for a user-activated event, such as a touchend, click, dblclick, or keydown event [3]. Requiring user gestures for the WebAuthn API means API calls must happen within the above JavaScript context. Normally, the user gesture will not be propagated to any async executors within the context. Since it is popular for websites to fetch a challenge asynchronously from a server right before invoking the WebAuthn API, WebKit allows the WebAuthn API to accept user gestures propagated through XHR events and the Fetch API. Here are examples of how websites can invoke Face ID and Touch ID for the web from user-activated events.

Calling the API Directly from User Activated Events

// Fetching the challengeBuffer before the onclick event.

button.addEventListener("click", async () => {
    const options = {
        publicKey: {
            challenge: challengeBuffer,
            // ...
        }
    };

    const publicKeyCredential = await navigator.credentials.create(options);
});

Propagating User Gestures Through XHR Events

button.addEventListener("click", () => {
    const xhr = new XMLHttpRequest();
    xhr.onreadystatechange = async function() {
        if (this.readyState == 4 && this.status == 200) {
            const challenge = this.responseText;
            const options = {
                publicKey: {
                    challenge: hexStringToUint8Array(challenge), // a custom helper
                    // ...
                }
            };

            const publicKeyCredential = await navigator.credentials.create(options);
        }
    };
    xhr.open("POST", "/WebKit/webauthn/challenge", true);
    xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
    xhr.send();
});

Propagating User Gestures Through Fetch API

button.addEventListener("click", async () => {
    const response = await fetch("/WebKit/webauthn/challenge", { method: "POST" });
    const challenge = await response.text();

    const options = {
        publicKey: {
            challenge: hexStringToUint8Array(challenge), // a custom helper
            // ...
        }
    };

    const publicKeyCredential = await navigator.credentials.create(options);
});

Note that readable streams cannot propagate user gestures yet (related bug). Also, the user gesture will expire after 10 seconds for both XHR events and the Fetch API.
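The snippets above refer to hexStringToUint8Array() as “a custom helper”. Assuming the server returns the challenge as a plain hex string, one possible implementation is:

```javascript
// Hypothetical helper: convert a hex string such as "0aff" into the
// Uint8Array that the WebAuthn challenge field expects.
function hexStringToUint8Array(hexString) {
    if (hexString.length % 2 !== 0)
        throw new Error("hex string must have an even length");
    const bytes = new Uint8Array(hexString.length / 2);
    for (let i = 0; i < bytes.length; i++)
        bytes[i] = parseInt(hexString.substr(i * 2, 2), 16);
    return bytes;
}
```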

Easter Egg: Propagating User Gestures Through setTimeout

button.addEventListener("click", () => {
    setTimeout(async () => {
        const options = { ... };
        const publicKeyCredential = await navigator.credentials.create(options);
    }, 500);
});

The user gesture in the above example will expire after 1 second.

On iOS 14, iPadOS 14 and macOS Big Sur Beta Seed 1, only the very first case is supported. Thanks to early feedback from developers, we were able to identify limitations and add the later cases. This also helped us recognize that user gestures are not a well understood concept among web developers. Therefore, we are going to contribute to the HTML specification and help establish a well-defined concept of a user gesture for consistency among browser vendors. Depending on how it goes, we might reconsider expanding the user gesture requirement to security keys.

Interpreting Apple Anonymous Attestation

Attestation is an optional feature which provides websites a cryptographic proof of the authenticator’s provenance such that websites that are restricted by special regulations can make a trust decision. Face ID and Touch ID for the web offers Apple Anonymous Attestation. Once verified, this attestation guarantees that an authentic Apple device performed the WebAuthn registration ceremony, but it does not guarantee the operating system running on that device is untampered. If the operating system is untampered, it also guarantees that the private key of the just generated credential is protected by the Secure Enclave and the usage of the private key is guarded with Face ID or Touch ID. (A note: the guard falls back to device passcode if biometric fails multiple times in a row.)

Apple Anonymous Attestation is the first of its kind, providing a service like an Anonymization CA, where the authenticator works with a cloud-operated CA owned by its manufacturer to dynamically generate per-credential attestation certificates such that no identification information of the authenticator will be revealed to websites in the attestation statement. Furthermore, among data relevant to the registration ceremony, only the public key of the credential along with a hash of the concatenated authenticator data and client data are sent to the CA for attestation, and the CA will not store any of these. This approach makes the whole attestation process privacy preserving. In addition, this approach avoids the security pitfall of Basic Attestation, where the compromise of a single device results in revoking certificates from all devices with the same attestation certificate.

Enabling Apple Anonymous Attestation

const options = {
    publicKey: {
        attestation: "direct", // the essential option
        // ...
    }
};

const publicKeyCredential = await navigator.credentials.create(options);

Verifying the Statement Format

This is the definition of the Apple Anonymous Attestation statement format. Issue 1453 is tracking the progress of adding this statement format to the WebAuthn standard.

$$attStmtType //= (
                       fmt: "apple",
                       attStmt: appleStmtFormat
                  )

appleStmtFormat = {
                       x5c: [ credCert: bytes, * (caCert: bytes) ]
                  }

The semantics of the above fields are as follows:

x5c: credCert followed by its certificate chain, each encoded in X.509 format.

credCert: The credential public key certificate used for attestation, encoded in X.509 format.

Here is the verification procedure given inputs attStmt, authenticatorData and clientDataHash:

  1. Verify that attStmt is valid CBOR conforming to the syntax defined above and perform CBOR decoding on it to extract the contained fields.
  2. Concatenate authenticatorData and clientDataHash to form nonceToHash.
  3. Perform SHA-256 hash of nonceToHash to produce nonce.
  4. Verify nonce matches the value of the extension with OID ( 1.2.840.113635.100.8.2 ) in credCert. The nonce here is used to prove that the attestation is live and to protect the integrity of the authenticatorData and the client data.
  5. Verify credential public key matches the Subject Public Key of credCert.
  6. If successful, return implementation-specific values representing attestation type Anonymous CA and attestation trust path x5c.

The final step is to verify x5c is a valid certificate chain starting from the credCert to the Apple WebAuthn root certificate, which then proves the attestation. (This step is usually shared among different types of attestations that utilize x5c [4].) Note that the AAGUID is all zeros even if the attestation is enabled, as all Apple devices that support Face ID and Touch ID for the web should have the same properties as explained at the beginning of this section, and no other devices can request Apple Anonymous Attestation.

Unique Characteristics of Apple’s Platform Authenticator

Here is a summary of the unique characteristics of Apple’s platform authenticator, i.e., Face ID and Touch ID for the web.

  • Different option sets result in different UI, so please specify them wisely.
  • Only RP ID and user.name are selected to display in the UI.
  • User gestures are required to invoke the platform authenticator.
  • Apple Anonymous Attestation is available. Use it only if attestation is necessary for you.
  • AAGUID is all zeros even if attestation is used.
  • Face ID and Touch ID for the web is available in Safari, SFSafariViewController and ASWebAuthenticationSession on iOS 14, iPadOS 14 and macOS Big Sur. For macOS, Safari 14 with downlevel OS will not get this feature because the attestation relies on a new system framework.
  • All public key credentials generated by the platform authenticator are resident keys regardless of what option is specified.
  • Credentials can only be cleared all at once, via Safari > History > Clear History… on Mac Safari or Settings > Safari > Clear History and Website Data on iOS & iPadOS.
  • The signature counter is not implemented and therefore it is always zero. Secure Enclave is used to prevent the credential private key from leaking instead of a software safeguard.

Current Status of Security Key Support

Besides the introduction of Face ID and Touch ID for the web, iOS 14, iPadOS 14 and Safari 14 on all supported macOS also have improved security key support including PIN entry and account selection. Here is a list of features that are currently supported. All of them have been supported since iOS 13.3, iPadOS 13.3 and Safari 13 except the two aforementioned.

  • All MUST features in WebAuthn Level 1 and all optional features except CollectedClientData.tokenBinding and most of the extensions. Only the appid extension is supported.
  • All CTAP 2.0 authenticator API except setPin and changePin.
  • USB, Lightning, and NFC transports are supported on capable devices.
  • U2F security keys are supported via CTAP 2.0 but not CTAP 1/U2F JS.
  • Like Face ID and Touch ID for the web, security key support is available in Safari, SFSafariViewController and ASWebAuthenticationSession.


In this blog post, we introduced Face ID and Touch ID for the web. We believe it is a huge leap forward for authentication on the web. It serves as a great alternative to traditional multi-factor authentication mechanisms when signing in. With the assistance of this technology, we believe multi-factor authentication will replace sole-factor passwords as the de facto authentication mechanism on the web. Developers, please start testing this feature today and let us know how it works for you by sending feedback on Twitter (@webkit, @alanwaketan, @jonathandavis) or by filing a bug.

October 19, 2020 05:00 PM

October 08, 2020

Release Notes for Safari Technology Preview 114

Surfin’ Safari

Safari Technology Preview Release 114 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 265893-267325.

Web Inspector

  • Elements Tab
    • Changed to grey out properties that aren’t used or don’t apply (r266066)
    • Changed to hide non-inheritable properties when viewing inherited rules (r266069)
    • Changed to not show inline swatches for properties that aren’t used or don’t apply (r266070)
  • Sources Tab
    • Changed to allow event breakpoints to be configured (r266074, r266480)
    • Changed to evaluate breakpoint conditions before incrementing the ignore count (r266138)
    • Changed to allow DOM breakpoints to be configured (r266669)
    • Changed to allow special JavaScript breakpoints to be configured (r266534)
    • Changed to allow URL breakpoints to be configured (r266538)
  • Network Tab
    • Fixed WebSockets to be reported as type websocket (r266441)
    • Fixed issue where response content was not shown for 304 responses from XHR requests (r266568)
  • Timelines Tab
    • Fixed duplicate “Timeline Recording 1” on open (r266477)
    • Fixed re-enabling the JavaScript Allocations timeline to show previously captured heap snapshots in the table (r266463)
    • Fixed the record button disappearing when interface is narrow (r266537)
    • Fixed the Stop Recording button to actually stop the recording (r267038)
  • Audit Tab
    • Allow audits to be created and edited in Edit mode in Web Inspector (r266317)
  • Miscellaneous
    • Fixed issue where the docking buttons wouldn’t work when docked if the window is too small (r267031)

JavaScript

  • Added Intl.DateTimeFormat dateStyle and timeStyle (r266035)
  • Added Intl.Segmenter (r266032)
  • Added a syntax error for async function in a single-statement context (r266340)
  • Added Object.getOwnPropertyNames caching and accelerated Object.getOwnPropertyDescriptor (r265934)
  • Aligned legacy Intl constructor behavior to spec (r266655)
  • Applied Intl.DateTimeFormat hour-cycle correctly when timeStyle is used (r267108)
  • Enabled Intl.DisplayNames (r266029)
  • Changed to not allow let [ sequence to appear in an ExpressionStatement context (r266327)
  • Changed to allow new super.property syntax (r266322)
  • Changed to allow new import.meta() syntax (r266318)
  • Changed to use locale-sensitive grouping for grouping options in IntlRelativeTimeFormat (r266341)
  • Implemented Intl.DateTimeFormat dayPeriod (r266323)
  • Implemented Intl Language Tag Parser (r266039)
  • Implemented Intl.DateTimeFormat.prototype.formatRange (r266033)
  • Implemented unified Intl.NumberFormat (r266031)
  • Fixed an invalid early error for object literal method named __proto__ (r266117)
  • Fixed implementation of the class “extends” clause incorrectly using __proto__ for setting prototypes (r266106)
  • Fixed Performance and PerformanceNavigation interfaces missing toJSON operations (r267316)
  • Updated Intl.Collator to take a collation option (r267102)
  • Updated Array.prototype.push to always perform Set in strict mode (r266581, r266641)
  • Updated Promise.prototype.finally to perform PromiseResolve (r266896)

Date and Time Inputs

  • Added editing to <input type="datetime-local"> (r266830)
  • Updated date inputs to contain editable components (r266351)
  • Updated date picker appearance to match system date pickers (r267085)
  • Updated date picker when the inner control is edited (r266461)
  • Updated date pickers to respect the document’s color scheme (r267131)
  • Updated date/time inputs to focus the next editable component when entering a separator key (r267281)
  • Updated date/time inputs to preserve focus on value change (r266739)
  • Updated date/time inputs to not use user-specified formats to prevent fingerprinting (r267283)

Web Audio

  • Added AudioParam.automationRate attribute (r265980)
  • Added proper support for AudioContextOptions.sampleRate (r267014)
  • Allowed direct creation of replacement codec (r266466)
  • Changed AudioParam.value setter to call setValueAtTime(value, now) (r266293)
  • Changed AudioParam.linearRampToValueAtTime() formula to match specification (r266261)
  • Changed AudioBufferSourceNode to use final values for playbackRate and detune (r265981)
  • Fixed AnalyserNode.getFloatFrequencyData() to fill array with -Infinity when input is silent (r267202)
  • Fixed AudioBufferSourceNode.start() behavior when the offset is past the end of the buffer (r267169)
  • Fixed AudioBufferSourceNode.start() ignoring when parameter when the pitch rate is 0 (r267170)
  • Fixed AudioContext not rendering until an AudioNode is constructed (r266922)
  • Fixed AudioDestinationNode.maxChannelCount always returning 0 (r266559)
  • Fixed AudioParam.linearRampToValueAtTime() and exponentialRampToValueAtTime() having no effect when there is no preceding event (r266788)
  • Fixed BiquadFilterNode.getFrequencyResponse() to return NaN for out-of-bounds frequencies (r266541)
  • Fixed the types of Panner.setPosition() and setOrientation() parameters to not be unrestricted float (r267071)
  • Dropped non-standard AudioBuffer.gain (r267065)
  • Made AudioParam.cancelScheduledValues() standards compliant (r266558)
  • Improved interpolation algorithm in OscillatorNode (r266627)
  • Introduced StereoPannerNode Interface (r265962)
  • Stopped performing “de-zippering” when applying gain (r266794)

Media Recorder

  • Enabled MediaRecorder by default on macOS (r267225)
  • Changed end of media capture to not be reported before 3 seconds from the start of capture (r267081)
  • Fixed the MediaRecorder timeslice parameter causing an internal error on longer videos (r266611)

Paint Timing

  • Enabled paint timing by default (r267235)

WebGL

  • Enabled WebGL2 by default (r267027)
  • Added WebGL and WebGL2 context support to OffscreenCanvas (r266275)
  • Fixed WebGL going into a bad state where glContext.createProgram() returns null (r266362)

CSS

  • Fixed text-transform inheritance to ::marker (r266288)
  • Changed to set available column space before grid items prelayout (r266173)
  • Added support for flow-relative shorthand and offset properties (r266674)
  • Changed to allow indefinite size flex items to be definite with respect to resolving percentages inside them (r266696)
  • Changed to not skip flexboxes with auto height for percentage computations in quirks mode (r266716)
  • Changed to use min-content size for intrinsic maximums resolution (r266675)
  • Fixed min-height: auto not getting applied to nested flexboxes (r266695)
  • Fixed :visited color taken on a non-visited link when using CSS variables (r266656)
  • Fixed CSS revert to serialize as “revert”, not “Revert” (r266660)
  • Updated to safely handle overly-long CSS variable values (r266989)

Web API

  • Aligned length properties of function prototypes with specifications (r266018)
  • Updated ReadableStream.pipeTo implementation to match specifications (r266129)
  • Updated Web Share API to prevent non-HTTP(S) URLs (r266151)
  • Aligned ISO-8859-{3,6,7,8,8-I} and windows-{874,1253,1255,1257} encodings with specifications (r266527)
  • Changed XML documents in iframes to not inherit encoding from the parent frame (r266671)
  • Changed Element to not set an attribute inside its constructor (r267074)
  • Changed new URL("#") to throw an error (r266748)
  • Fixed consecutive requestAnimationFrame callbacks that may get passed the same timestamp (r266526)
  • Fixed XHR.timeout getting affected by long tasks (r267227)
  • Fixed taking too long to fetch images from memory cache (r266699)
  • Implemented encodeInto() TextEncoder method (r266533)
  • Updated the URL fragment percent encode set (r266399)

Lazy Loading

Media

  • Fixed the PiP window getting closed when the video element is removed from the DOM (r265904)
  • Fixed an HDCP error for all streams on Netflix (r266176)
  • Fixed <video> element preventing screen from sleeping even after playback finishes (r266410)

WebRTC

  • Added RTCRtpSynchronizationSource.rtpTimestamp (r266052)
  • Exposed RTCPeerConnection.restartIce (r266511)
  • Fixed Safari not being able to hear audio when using WebRTC in multiple tabs (r266454)

Rendering

  • Fixed animations invalidating too often (r266229)
  • Fixed flickering on sedona.dev (r266189)
  • Fixed Facebook posts with lots of comments having a cut-off scrollbar that couldn’t scroll to the bottom (r266156)
  • Changed to handle fonts that lie about being monospaced (r266118)
  • Fixed programmatic selection of text in a text field that causes the highlight overlay to spill out (r266051)
  • Fixed overflow: scroll rubber-banding getting interrupted by post-layout scrolling (r267002, r266337)
  • Fixed a flash when closing a webpage (r267250)

Text Rendering

  • Changed letter-spacing to disable ligatures (r266683)

Scrolling

  • Fixed vertical scrolling getting stuck when a horizontal scroller is under the mouse (r266292)
  • Fixed select element scrolling after scrolling the page (r266262)

Back-Forward Cache

  • Added support for third-party domains to get stored for back-forward navigations (r265916)

Storage Access API

  • Allowed requests for storage access from nested iframes (r266479)

October 08, 2020 05:40 PM

October 01, 2020

Sergio Villar: Closing the gap (in flexbox 😇)

Igalia WebKit

Flexbox had a lot of early problems, but by mid-May 2020 where our story begins, both Firefox and Chromium had done a lot of work on improving things with this feature. WebKit, however, hadn’t caught up. Prioritizing the incredible amounts of work a web engine requires is difficult. The WebKit implementation was still passable for very many (most) cases of the core features, and it didn’t have problems that caused crashes or something that urgently demanded attention, so engineers dedicated their limited time toward other things. The net result, however, was that as this choice repeated many times, the comparative state of WebKit’s flexbox implementation had fallen behind pretty significantly.
Web Platform Tests (WPT) is a huge ongoing effort from many people to come up with a very extensive list of tests that could help both spec editors and implementors to make sure we have great compatibility. In the case of flexbox, for example, there are currently 773 tests (2926 subtests) and WebKit was failing a good amount of them. This matters a lot because there are things that flexbox is ideal for, and it is exceptionally widely used. In mid-May, Igalia was contracted to improve things here, and in this post, I’ll explain and illustrate how we did that.

The Challenge

The main issues were (in no particular order):
  • min-width:auto and min-height:auto handling
  • Nested flexboxes in column flows
  • Flexboxes inside tables and vice versa
  • Percentages in heights with indefinite sizes
  • WebKit CI not running many WPT flexbox tests
  • and of course… lack of gap support in Flexbox
Modifying Flexbox layout code is a challenge by itself. Tiny modifications in the source code could cause huge differences in the final layout. You might even have a patch that passes all the tests and regresses multiple popular web sites.
The good news is that we were able to tackle most of those issues. Let’s review the changes you could eventually expect from future releases of Safari (note that Apple doesn’t disclose information about future products and/or releases) and the other WebKit-based browsers (like GNOME Web).

Flexbox gaps 🥳🎉

Probably one of the most awaited features in WebKit by web developers, it’s finally here after Firefox and Chrome landed it not so long ago. The implementation was initially inspired by the one in Chrome, but it diverged a bit in the final version of the patch. The important thing is that the behaviour should be the same; at least, all the WPT tests related to gaps now pass in WebKit trunk.

<div style="display: flex; flex-wrap: wrap; gap: 1ch">
  <div style="background: magenta; color: white">Lorem</div>
  <div style="background: green; color: white">ipsum</div>
  <div style="background: orange; color: white">dolor</div>
  <div style="background: blue; color: white">sit</div>
  <div style="background: brown; color: white">amet</div>
</div>
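The effect of gap on placement is easy to sketch. The following Python toy (my own illustration, not WebKit’s implementation) computes where each item of a single flex line starts along the main axis once a fixed gap is inserted between adjacent items:

```python
# Sketch (not WebKit code): main-axis start offsets for items in one
# flex line, with a fixed gap between adjacent items only.
def main_axis_offsets(item_sizes, gap):
    offsets = []
    position = 0
    for size in item_sizes:
        offsets.append(position)
        position += size + gap  # gap goes *between* items, none after the last
    return offsets

# Three 50px items with a 10px gap start at 0, 60 and 120.
print(main_axis_offsets([50, 50, 50], 10))  # [0, 60, 120]
```

The same idea applies between flex lines for row gaps in wrapping containers.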

Tables as flex items

Tables should obey the flex container sizing whenever they are flex items. As can be seen in the example below, the table layout code was kicking in and ignoring the constraints set by the flex container. Tables should do what the flex algorithm mandates, and thus they should allow being stretched/squeezed as required.

<div style="display:flex; width:100px; background:red;">
  <div style="display:table; width:10px; max-width:10px; height:100px; background:green;">
    <div style="width:100px; height:10px; background:green;"></div>
  </div>
</div>

Tables with items exceeding 100% of the available size

This is the case of tables placed inside flex items. The automatic table layout algorithm was generating tables with unlimited widths when the sum of the sizes of their columns (expressed in percentages) exceeded 100%. It was impossible to satisfy the constraints set by the table and flexbox algorithms at the same time.

<div style="display:flex; width:100px; height:100px; align-items:flex-start; background:green;">
  <div style="flex-grow:1; flex-shrink:0;">
    <table style="height:50px; background:green;" cellpadding="0" cellspacing="0">
      <tr>
        <td style="width:100%; background:green;"> </td>
        <td style="background:green;"> </td>
      </tr>
    </table>
  </div>
</div>

Note how, before the fix, the table was growing indefinitely to the right (I cropped the “Before” picture to fit in the post).

Alignment in single-line flexboxes

Interesting case. The code considered single-line flexboxes to be those where all the flex items were placed in a single line after computing the required space for them. Though sensible, that’s not what a single-line flexbox is: it’s a flex container with flex-wrap:nowrap. This means that a flex container with flex-wrap:wrap whose children do not need more than one flex line is not a single-line flex container from the spec’s point of view (corollary: implementing specs is hard).

<div style="display: flex; flex-wrap: wrap; align-content: flex-end; width: 425px; height: 70px; border: 2px solid black">
  <div style="height: 20px">This text should be at the bottom of its container</div>
</div>
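The spec’s definition is simple enough to capture in a tiny predicate. A Python sketch of the rule (my own illustration, not WebKit code):

```python
# Sketch of the spec definition: whether a flex container is single-line
# depends only on its flex-wrap value, not on how many lines its content
# actually ends up needing.
def is_single_line(flex_wrap):
    return flex_wrap == "nowrap"

# align-content has no effect on a single-line flex container, which is
# why align-content: flex-end applies in the wrapping example above even
# though everything fits on one line.
def align_content_applies(flex_wrap):
    return not is_single_line(flex_wrap)

assert not is_single_line("wrap")      # still multi-line per spec
assert align_content_applies("wrap")
```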

Percentages in flex items with indefinite sizes

One of the trickiest ones. Although it didn’t involve a lot of code, it caused two serious regressions (in YouTube’s upload form and when viewing Twitter videos in fullscreen) which required some preliminary fixes and delayed the landing of this patch a bit. Note that this behaviour was really contentious from the pure specification point of view, as it changed many times over the years; defining a good behaviour is really complicated. Without going into too much detail, flexbox has a couple of cases where sizes are considered definite when they are theoretically indefinite. In this case we consider that if the flex container’s main size is definite, then the post-flexing size of flex items is also treated as definite.

<div style="display: flex; flex-direction: column; height: 150px; width: 150px; border: 2px solid black;">
  <div style="height: 50%; overflow: hidden;">
    <div style="width: 50px; height: 50px; background: green;"></div>
  </div>
  <div style="flex: none; width: 50px; height: 50px; background: green;"></div>
</div>

Hit testing with overlapping flex items

There were some issues with pointer events passing through overlapping flex items (caused by negative margins, for example). This was fixed by making the hit testing code walk the items in reverse order-modified document order (the opposite of painting order) instead of using the raw order from the DOM.

<div style="display:flex; border: 1px solid black; width: 300px;">
  <a style="width: 200px;" href="#">Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua</a>
  <div style="margin-left: -200px; width: 130px; height: 50px; background: orange;"></div>
</div>

In the “Before” case, hit testing bypassed the orange block, and thus the cursor showed a hand because it detected it was hovering a link. After the fix, the cursor is properly rendered as an arrow because the orange block covers the link underneath.
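The core idea (walk boxes in the reverse of paint order so the topmost-painted box wins) can be sketched like this; a Python toy, not WebKit’s actual C++:

```python
# Sketch: hit testing must visit children in the reverse of paint order,
# so the box painted last (on top) is the one that gets the hit.
def hit_test(children_in_paint_order, point):
    for child in reversed(children_in_paint_order):
        if child["contains"](point):
            return child["name"]
    return None

# Toy boxes mimicking the example above: a 200px-wide link, overlapped
# by a 130px-wide orange block painted after it.
link = {"name": "link", "contains": lambda p: 0 <= p[0] < 200}
orange = {"name": "orange", "contains": lambda p: 0 <= p[0] < 130}

print(hit_test([link, orange], (50, 10)))  # orange
```

A point inside only the link (e.g. x = 150) still hits the link, since the orange block is checked first but does not contain it.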

Computing percentages with scrollbars

In this case the issue was that, in order to compute percentages in heights, we were incorrectly including the size of the scrollbars too.

<div style="display: inline-flex; height: 10em;">
  <div style="overflow-x: scroll;">
    <div style="width: 200px; height: 100%; background: green"></div>
  </div>
</div>

Note that in the “After” picture the horizontal scrollbar background is visible while in the “Before” the wrong height computation made the flex item overlap the scrollbar.
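The arithmetic behind the fix is simple; a sketch (not WebKit code, numbers are made up for illustration) of resolving a percentage height against the content box rather than the full box:

```python
# Sketch: a child's percentage height should resolve against the
# container height *minus* the horizontal scrollbar (the content box),
# not the full border-box height as the buggy code effectively did.
def resolve_percent_height(container_height, h_scrollbar, percent):
    content_height = container_height - h_scrollbar
    return content_height * percent / 100

# A height:100% child in a 160px-tall scroller with a 15px scrollbar
# should be 145px tall, leaving the scrollbar visible.
print(resolve_percent_height(160, 15, 100))  # 145.0
```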

Image items with specific sizes

The flex layout algorithm needs the intrinsic sizes of the flex items to compute their sizes and the size of the flex container. Changes to those intrinsic sizes should trigger new layouts, and the code was not doing that.

<!-- Just to showcase how the img below is not properly sized -->
<div style="position: absolute; background-color: red; width: 50px; height: 50px; z-index: -1;"></div>
<div style="display: flex; flex-direction: column; width: 100px; height: 5px;">
  <img style="width: 100px; height: 100px;" src="https://wpt.live/css/css-flexbox/support/100x100-green.png">
</div>

Nested flexboxes with ‘min-height: auto’

Another tricky one, and another related to the handling of nested column flexboxes: as in the previous nested-column-flexbox issue, the problem was that we were not supporting this case. For those wanting a deeper understanding, this bug was about implementing section 4.5 of the spec. It was one of the more complicated ones to fix; Edward Lorenz would love this part of the layout code, where the slightest change to a single source line can trigger huge changes in the final rendering.

<div style="display: flex; flex-direction: column; overflow-y: scroll; width: 250px; height: 250px; border: 1px solid black">
  <div style="display: flex;">
    <div style="width: 100px; background: blue"></div>
    <div style="width: 120px; background: orange"></div>
    <div style="width: 10px; background: yellow; height: 300px"></div>
  </div>
</div>

As can be seen, in the “Before” picture the blue and orange blocks are sized differently from the yellow one; that’s fixed in the “After” picture.

Percentages in quirks mode

Another one affecting how percentages in heights are computed, this one specific to quirks mode. We now match Firefox, Chrome and pre-Chromium Edge: flexbox should not care much about quirks mode, since it was invented many years after quirky browsers dominated the earth.

<div style="width: 100px; height: 50px;">
  <div style="display: flex; flex-direction: column; outline: 2px solid blue;">
    <div style="flex: 0 0 50%"></div>
  </div>
</div>

Percentages in ‘flex-basis’

Percentages generally worked fine inside flex-basis, but there was one particularly problematic case: it arose whenever the percentage referred to (oh surprise) an indefinite height. And again, we’re talking about nested flexboxes with column flows. Indeed, definite vs. indefinite sizes is one of the toughest things to get right from the layout point of view. In this particular case, the fix was to ignore the percentage and treat it as height: auto.

<div style="display: flex; flex-direction: column; width: 200px;">
  <div style="flex-basis: 0%; height: 100px; background: red;">
    <div style="background: lime">Here's some text.</div>
  </div>
</div>

Flex containers inside STF tables

This fixes a couple of test cases submitted by an anonymous Opera employee 8 (!) years ago. It is another case of competing layout contexts trying to do things their own way.

<div style="display: table; background:red">
  <div style="display: flex; width: 0px">
    <p style="margin: 1em 1em; width: 50px">Text</p>
    <p style="margin: 1em 1em; width: 50px">Text</p>
    <p style="margin: 1em 1em; width: 50px">Text</p>
  </div>
</div>

After the fix the table is properly sized to 0px width and thus no red is seen.


These examples are just some interesting ones I’ve chosen to highlight. In the end, almost 50 new flexbox tests pass in WebKit that weren’t passing back in May! I wouldn’t like to forget the great job done by my colleague Carlos Lopez, who imported tons of WPT flexbox tests into the WebKit source tree. He also performed awesome triage work, which made my life a lot easier.
Investing in interoperability is a huge deal for the web. It’s good for everyone, from spec authors to final users, including browser vendors, downstream ports and web authors. So if you care about the web, or your business orbits around web technologies, you should definitely promote and invest in interoperability.

Implementing standards and fixing bugs in web engines is the kind of work we happily do at Igalia on a daily basis. We are the second largest contributor to both WebKit and Chrome/Blink, so if you have an annoying bug in a particular web engine (Gecko and Servo as well) that you want fixed, don’t hesitate to contact us; we’d be glad to help. Also, should you want to be part of a workers-owned cooperative with an assembly-based decision-making mechanism and a strong focus on free software technologies, join us!


Many thanks to WebKit reviewers from Apple and Igalia, like Darin Adler, Manuel Rego, Javier Fernández and Daniel Bates, who made the process really easy for me, always providing very nice feedback on the patches I submitted.
I’m also really thankful to Googlers like Christian Biesinger, David Grogan and Stephen McGruer who worked on the very same things in Blink and/or provided very nice guidance and support when porting patches.

By svillar at October 01, 2020 11:34 AM

September 28, 2020

Adrián Pérez de Castro: Sunsetting NPAPI support in WebKitGTK (and WPE)

Igalia WebKit

  1. Summary
  2. What is NPAPI?
  3. What is NPAPI used for?
  4. Why are NPAPI plug-ins being phased out?
  5. What are other browsers doing?
  6. Is WebKitGTK following suit?


Here’s a tl;dr list of bullet points:

  • NPAPI is an old mechanism to extend the functionality of a web browser. It is time to let it go.
  • One year ago, WebKitGTK 2.26.0 removed support for NPAPI plug-ins which used GTK2, but the rest of plug-ins kept working.
  • WebKitGTK 2.30.x will be the last stable series with support for NPAPI plug-ins at all. Version 2.30.0 was released a couple of weeks ago.
  • WebKitGTK 2.32.0, due in March 2021, will be the first stable release to ship without support for NPAPI plug-ins.
  • We have already removed the relevant code from the WebKit repository.
  • While the WPE WebKit port allowed running windowless NPAPI plug-ins, this was never advertised nor supported by us.

What is NPAPI?

In 1995, Netscape Navigator 2.0 introduced a mechanism to extend the functionality of the web browser. That was NPAPI, short for Netscape Plugin Application Programming Interface. NPAPI allowed third parties to add support for new content types; for example Future Splash (.spl files), which later became Flash (.swf).

When an NPAPI plug-in is used to render content, the web browser carves a hole at the rectangular location where the content handled by the plug-in will be placed, and hands off the rendering responsibility to the plug-in. This would end up causing trouble, as we will see later.

What is NPAPI used for?

A number of technologies have used NPAPI over the years for different purposes:

  • Displaying multimedia content using the Flash Player or Silverlight plug-ins.
  • Running rich Java™ applications in the browser.
  • Displaying documents in non-Web formats (PDF, DjVu) inside browser windows.
  • A number of questionable practices, like VPN client software using a browser plug‑in for configuration.

Why are NPAPI plug-ins being phased out?

The design of NPAPI makes the web browser give full responsibility to plug-ins: the browser has no control whatsoever over what plug-ins do to display content, which makes it hard to make them participate in styling and layout. More importantly, plug-ins are compiled, native code over which browser developers cannot exercise quality control, which resulted in a history of security incidents, crashes, and browser hangs.

Today, web browsers’ rendering engines can do a better job than plug-ins, more securely and efficiently. The web platform is mature and there is no place for blindly trusting third-party code to behave well. NPAPI is a 25-year-old technology showing its age; it has served its purpose, but it is no longer needed.

The last nail in the coffin was Adobe’s 2017 announcement that the Flash plug-in would be discontinued in January 2021.

What are other browsers doing?

Glad that you asked! It turns out that all major browsers have plans for incrementally reducing how much NPAPI usage they allow, until they eventually remove it.


Let’s take a look at the Firefox roadmap first:

Version Date Plug-in support changes
47 June 2016 All plug-ins except Flash need the user to click on the element to activate them.
52 March 2017 Only loads the Flash plug‑in by default.
55 August 2017 Does not load the Flash plug‑in by default, instead it asks users to choose whether sites may use it.
56 September 2017 On top of asking the user, Flash content can only be loaded from http:// and https:// URIs; the Android version completely removes plug‑in support. There is still an option to allow always running the Flash plug-in without asking.
69 September 2019 The option to allow running the Flash plug-in without asking the user is gone.
85 January 2021 Support for plug-ins is gone.
Table: Firefox NPAPI plug-in roadmap.

In conclusion, the Mozilla folks have been slowly boiling the frog for the last four years and will completely remove the support for NPAPI plug-ins coinciding with the Flash player reaching EOL status.

Chromium / Chrome

Here’s a timeline of the Chromium roadmap, merged with some highlights from their Flash Roadmap:

Version Date Plug-in support changes
? Mid 2014 The interface to unblock running plug-ins is made more complicated, to discourage usage.
? January 2015 Plug-ins blocked by default, some popular ones allowed.
42 April 2015 Support for plug-ins disabled by default, setting available in chrome://flags.
45 September 2015 Support for NPAPI plug-ins is removed.
55 December 2016 Browser does not advertise Flash support to web content, the user is asked whether to run the plug-in for sites that really need it.
76 July 2019 Flash support is disabled by default, can still be enabled with a setting.
88 January 2021 Flash support is removed.
Table: Chromium NPAPI/Flash plug-in roadmap.

Note that Chromium continued supporting Flash content even after it removed support for NPAPI in 2015: in a fit of acute NIH syndrome, Google came up with PPAPI, which replaced NPAPI and was basically designed to support Flash. PPAPI is currently used by Chromium’s built-in PDF viewer, which will nevertheless also go away coinciding with Flash reaching EOL.


On the Apple camp, the story is much easier to tell:

  • Their handheld devices—iPhone, iPad, iPod Touch—never supported NPAPI plug-ins to begin with. Easy-peasy.
  • On desktop, Safari has required explicit approval from the user to allow running plug-ins since June 2016. The Flash plug-in has not been preinstalled in Mac OS since 2010, requiring users to manually install it.
  • NPAPI plug-in support will be removed from WebKit by the end of 2020.

Is WebKitGTK following suit?

Yes. In September 2019 WebKitGTK 2.26 removed support for NPAPI plug-ins which use GTK2. This included Flash, but the PPAPI version could still be used via freshplayerplugin.

In March 2021, when the next stable release series is due, WebKitGTK 2.32 will remove the support for NPAPI plug-ins. This series will receive updates until September 2021.

The above gives a full two years between when we started restricting which plug-ins can be loaded and when they stop working, which we reckon should be enough. At the time of writing this article, the support for plug-ins is already gone from the WebKit source for both the GTK and WPE ports.

Yes, you read that right: WPE supported NPAPI plug-ins, but in a limited fashion, as only windowless plug-ins worked. In practice, making NPAPI plug-ins work on Unix-like systems requires using the XEmbed protocol so they can place their rendered content overlaid on top of WebKit’s, but the WPE port does not use X11. Given that we never advertised nor officially supported NPAPI in the WPE port, we do not expect any trouble removing it.

September 28, 2020 09:50 PM

September 09, 2020

Release Notes for Safari Technology Preview 113

Surfin’ Safari

Safari Technology Preview Release 113 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 265179-265893.

Web Inspector

  • Timelines Tab
    • Fixed background colors for odd and even items in Dark Mode in the Timelines tab (r265498)
    • Media & Animations timeline shouldn’t shift when sorting (r265356)
  • Adapted Web Inspector’s user interface and styling to better match macOS Big Sur (r265237, r265507)

Web Audio

  • Added constructor for GainNode (r265227)
  • Added constructor for BiquadFilterNode (r265290)
  • Added constructor for ConvolverNode (r265298)
  • Added constructor for DelayNode (r265221)
  • Added constructor for AudioBuffer (r265210)
  • Added constructor for AnalyserNode (r265196)
  • Added support for suspending and resuming an OfflineAudioContext (r265701)
  • Added constructor for the MediaElementAudioSourceNode interface (r265330)
  • Aligned AudioListener with the W3C specification (r265266)
  • Aligned BiquadFilterNode.getFrequencyResponse() with the specification (r265291)
  • Fixed BiquadFilterNode’s lowpass filter (r265517)
  • Fixed missing length attribute on OfflineAudioContext (r265388)
  • Fixed missing baseLatency attribute on the AudioContext interface (r265393)


  • Added support for MediaRecorder bitrate options (r265328)


  • Updated to avoid triggering redundant compositing updates when trying to run a steps() animation on transform (r265358)
  • Fixed inconsistent spacing of Chinese characters in Safari for macOS Big Sur (r265488)


  • Enabled H.264 low latency code path by default for macOS (r265547)
  • Fixed the picture-in-picture button disappearing in the fullscreen YouTube player after starting a new video in a playlist (r265690)


  • Changed to apply aspect ratios when computing flex-basis (r265855)
  • Fixed updating min-height: auto after an image loads when the image has a specified height and width (r265858)
  • Fixed @font-face font-weight descriptor to reject bolder and lighter (r265677)
  • Fixed the CSS specificity of :host() pseudo-classes (r265812)


  • Fixed window.print to not invoke native UI (r265207)


  • Added VoiceOver access to font styling at insertion point (r265259)


  • Fixed font loads quickly followed by navigations failing indefinitely (r265603)


  • Implemented Canvas.transferControlToOffscreen and OffscreenCanvasRenderingContext2D.commit (r265543)
  • Implemented createImageBitmap(ImageData) (r265360)
  • Implemented PerformanceObserverInit.buffered (r265390)
  • Fixed text input autocorrect="off" attribute getting ignored on macOS (r265509)

Gamepad API

  • Added a special HID mapping for the Google Stadia controller (r265180)
  • Added HID mapping for the Logitech F310/F710 controllers. (r265183)


  • Fixed table data incorrectly translated in some articles on wikipedia.org (r265188)
  • Fixed leading and trailing spaces to be ignored when comparing content (r265361)

September 09, 2020 05:32 PM

September 07, 2020

Víctor Jáquez: Review of Igalia Multimedia activities (2020/H1)

Igalia WebKit

This blog post is a review of the various activities the Igalia Multimedia team was involved in during the first half of 2020.

Our previous reports are:

Just before a new virus turned into a pandemic, we could enjoy our traditional FOSDEM. There, our colleague Phil gave a talk about many of the topics covered in this report.


GstWPE’s wpesrc element produces a video texture representing a web page rendered off-screen by WPE.

We have worked on a new iteration of the GstWPE demo, focusing on one-to-many, web-augmented overlays, broadcasting with WebRTC and Janus.

Also, since the merge of the gstwpe plugin into gst-plugins-bad (the staging area for new elements), new users have come along, spotting rough areas and improving the element along the way.

Video Editing

GStreamer Editing Services (GES) is a library that simplifies the creation of multimedia editing applications. It is based on the GStreamer multimedia framework and is heavily used by Pitivi video editor.

Implemented frame accuracy in the GStreamer Editing Services (GES)

As required by the industry, it is now possible to reference all times by frame number, providing a precise mapping between frame numbers and play time. Many issues were fixed in GStreamer to reach enough precision to make this work, and intensive regression tests were added.
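The mapping itself is simple once framerates are kept as exact rationals; a Python sketch of the idea (my own illustration of frame accuracy, not GES code) shows why rational arithmetic avoids the floating-point drift that breaks the frame ↔ time round trip:

```python
from fractions import Fraction

# Sketch of a frame <-> time mapping: with an exact rational framerate
# (e.g. NTSC's 30000/1001 fps) there is no floating-point drift, so
# frame_for_time(time_for_frame(n)) == n for every frame.
def time_for_frame(frame, framerate):
    return Fraction(frame) / framerate  # seconds, exact

def frame_for_time(t, framerate):
    return int(t * framerate)

ntsc = Fraction(30000, 1001)
# Round trip is exact over a long range of frames.
assert all(frame_for_time(time_for_frame(n, ntsc), ntsc) == n
           for n in range(10000))
```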

Implemented time effects support in GES

Important refactoring inside GStreamer Editing Services has happened to allow cleanly and safely changing the playback speed of individual clips.

Implemented reverse playback in GES

Several issues have been fixed inside GStreamer core elements and base classes in order to support reverse playback. This allows us to implement reliable and frame-accurate reverse playback for individual clips.

Implemented ImageSequence support in GStreamer and GES

Since OpenTimelineIO implemented ImageSequence support, many users in the community had said it was really required. We reviewed and finished up imagesequencesrc element, which had been awaiting review for years.

This feature is now also supported in the OpenTimelineIO GES adapter.

Optimized nested timelines preroll time by an order of magnitude

Caps negotiation, done while the pipeline transitions from the paused state to the playing state and which exercises the whole pipeline functionality, was the bottleneck for nested timelines, so pipelines were reworked to avoid useless negotiations. At the same time, other members of the GStreamer community have improved caps negotiation performance in general.

Last but not least, our colleague Thibault gave a talk in The Pipeline Conference about The Motion Picture Industry and Open Source Software: GStreamer as an Alternative, explaining how and why GStreamer could be leveraged in the motion picture industry to allow faster innovation, and solve issues by reusing all the multi-platform infrastructure the community has to offer.

WebKit multimedia

There has been a lot of work on WebKit multimedia, particularly for WebKitGTK and WPE ports which use GStreamer framework as backend.

WebKit Flatpak SDK

But first of all we would like to draw readers attention to the new WebKit Flatpak SDK. It was not a contribution only from the multimedia team, but rather a joint effort among different teams in Igalia.

Before the WebKit Flatpak SDK, JHBuild was used for setting up a WebKitGTK/WPE environment for testing and development. Its purpose is to provide a common set of well-defined dependencies instead of relying on the ones available in different Linux distributions, which might produce different results. Nonetheless, Flatpak offers a much more coherent environment for testing and development, isolated from the rest of the build host and approaching reproducible outputs.

Another great advantage of the WebKit Flatpak SDK, at least for the multimedia team, is the possibility of using gst-build to set up a custom GStreamer environment, with the latest master, for example.

Now, for the sake of brevity, let us sketch a non-exhaustive list of activities and achievements related to WebKit multimedia.

General multimedia

Media Source Extensions (MSE)

Encrypted Media Extension (EME)

One of the major results of this first half, is the upstream of ThunderCDM, which is an implementation of a Content Decryption Module, providing Widevine decryption support. Recently, our colleague Xabier, published a blog post on this regard.

And it has enabled client-side video rendering support, which ensures video frames remain protected in GPU memory so they can’t be reached by third parties. This is a requirement for DRM/EME.



Though we normally contribute in GStreamer with the activities listed above, there are other tasks not related with WebKit. Among these we can enumerate the following:

GStreamer VAAPI

  • Reviewed a lot of patches.
  • Support for media-driver (iHD), the new VAAPI driver for Intel, mostly for Gen9 onwards. There are a lot of features with this driver.
  • A new vaapioverlay element.
  • Deep code cleanups. Among these we would like to mention:
    • Added quirk mechanism for different backends.
    • Changed the base classes of most classes and buffer types to GstObject and GstMiniObject.
  • Enhanced caps negotiation given the current driver’s constraints


The multimedia team at Igalia has kept working, throughout the first half of this strange year, in our three main areas: browsers (mainly WebKitGTK and WPE), video editing, and the GStreamer framework.

We worked adding and enhancing WebKitGTK and WPE multimedia features in order to offer a solid platform for media providers.

We have enhanced the Video Editing support in GStreamer.

And, along with these tasks, we have contributed just as much to the GStreamer framework itself, particularly in hardware-accelerated decoding and encoding and VA-API.

By vjaquez at September 07, 2020 03:12 PM

September 02, 2020

Xabier Rodríguez Calvar: Serious Encrypted Media Extensions on GStreamer based WebKit ports

Igalia WebKit

Encrypted Media Extensions (a.k.a. EME) is the W3C standard for encrypted media on the web. This way, media providers such as Hulu, Netflix, HBO, Disney+, Prime Video, etc. can provide their contents with a reasonable amount of confidence that it will be very complicated for people to “save” their assets without permission. Why do I use the word “serious” in the title? In WebKit there is already support for Clear Key, which is the W3C EME reference implementation, but EME supports more encryption systems, even proprietary ones (I have my opinion about this; you can ask me privately). No service provider (that I know of) supports Clear Key; they usually rely on Widevine, PlayReady or some other system.

Three years ago, my colleague Žan Doberšek finished the implementation of what was going to be the shell of WebKit’s modern EME implementation, following the latest W3C proposal. We implemented that downstream (at Web Platform for Embedded) as well, using Thunder, which includes as a plugin a fork of what was the Open Content Decryption Module (a.k.a. OpenCDM). The OpenCDM API changed quite a lot during this journey. It works well and there are millions of set-top boxes using it currently.

The delta between downstream and the upstream GStreamer-based WebKit ports was quite big, testing was difficult and syncing was not always easy, so we decided to reverse the situation.

Our first step was taken by my colleague Charlie Turner, who made Clear Key work upstream again while adapting some changes the Apple folks had made in the meantime. It was amazing to see the Clear Key tests passing again, and his work on the CDMProxy-related classes was awesome. After getting Clear Key working, I had to adapt those classes a bit to accommodate Thunder. To explain a bit about the WebKit EME architecture: there are two layers. The first is the cross-platform one, which implements the W3C API (MediaKeys, MediaKeySession, CDM…). These classes rely on the platform ones (CDMPrivate, CDMInstance, CDMInstanceSession), which form the second layer and handle platform management, message exchange, etc. Apple’s playback system is fully integrated with their DRM system, so they don’t need anything else. We do, because we need to integrate our own decryptors to defer to Thunder for decryption, so in the GStreamer-based ports we also need the CDMProxy-related classes: CDMProxy, CDMInstanceProxy, CDMInstanceSessionProxy… The last two extend CDMInstance and CDMInstanceSession respectively to be able to deal with key management, which is abstracted into KeyHandle and KeyStore.

Once the abstraction is there (let’s remember that the abstraction works for both Clear Key and Thunder), the Thunder implementation is quite simple: just glue the CDMProxy, CDMInstanceProxy and CDMInstanceSessionProxy classes to the Thunder system and write a GStreamer decryptor element for it. I might have made a mistake when selecting the files, but counting the Thunder classes plus the common GStreamer decryptor code, cloc says it is just 1198 lines of platform code. I think that is pretty low for what it does. Apart from that, obviously, there are 5760 lines of cross-platform code.

To build and run all this you need to do several things:

  1. Build the dependencies with WEBKIT_JHBUILD=1 JHBUILD_ENABLE_THUNDER="yes" to enable the old-fashioned JHBuild build and force it to build the Thunder dependencies. All dependencies are in JHBuild; even Widevine is referenced, but to download it you need the proper credentials, as it is closed source.
  2. Pass --thunder when calling build-webkit.sh.
  3. Run MiniBrowser with WEBKIT_GST_EME_RANK_PRIORITY="Thunder" and pass parameters --enable-mediasource=TRUE --enable-encrypted-media=TRUE --autoplay-policy=allow. The autoplay policy is usually optional but in this case it is necessary for the YouTube TV tests. We need to give the Thunder decryptor a higher priority because of WebM, that does not specify a key system and without it the Clear Key one can be selected and fail. MP4 does not create trouble because the protection system is specified and the caps negotiation does its magic.

As you might have guessed from a closer look at the GStreamer JHBuild moduleset, only Widevine is currently supported. To support more key systems, you only have to make them build in the Thunder ecosystem and add them to CDMFactoryThunder::supportedKeySystems.

When I coded this, all YouTube TV tests for Widevine were green on the desktop. At the moment of writing this post they aren't, because of some problem with the Widevine installation that I hope will be sorted out quickly.

By calvaris at September 02, 2020 02:59 PM

August 27, 2020

Chris Lord: OffscreenCanvas, jobs, life

Igalia WebKit

Hoo boy, it’s been a long time since I last blogged… About 2 and a half years! So, what’s been happening in that time? This will be a long one, so if you’re only interested in a part of it (and who could blame you), I’ve titled each section.

Leaving Impossible

Well, unfortunately my work with Impossible ended, as we essentially ran out of funding. That’s really a shame; we worked on some really cool, open-source stuff, and we’ve definitely seen similar innovations in the field since we stopped working on it. We took a short break (during which we also, unsuccessfully, searched for further funding), after which Rob started working on a cool, related project of his own that you should check out, and I, being a bit less brave, started seeking out a new job. I did consider becoming a full-time musician, but business wasn’t picking up as quickly as I’d hoped it might in that down-time, and with hindsight, I’m glad I didn’t (Covid-19 and all).

I interviewed with a few places, which was certainly an eye-opening experience. The last ‘real’ job interview I did was for Mozilla in 2011, which consisted mainly of talking with engineers that worked there, and working through a few whiteboard problems. Being a young, eager coder at the time, this didn’t really faze me back then. Turns out either the questions have evolved or I’m just not quite as sharp as I used to be in that very particular environment. The one interview I had that involved whiteboard coding was a very mixed bag. It seemed a mix of two types of questions: those that are easy to answer (but, unless you’re in the habit of writing very quickly on a whiteboard, slow to write down) and those that were pretty impossible to answer without specific preparation. Perhaps this was the fault of recruiters, but you might hope that interviews would be catered somewhat to the person you’re interviewing, or the work they might actually be doing, neither of which seemed to be the case. Unsurprisingly, I didn’t get past that interview, but in retrospect I’m also glad I didn’t. Igalia’s interview process was much more humane, and involved mostly discussions about actual work I’ve done, hypothetical situations and ethics. They were very long discussions, mind, but I’m very glad that they were happy to hire me, and that I didn’t entertain different possibilities. If you aren’t already familiar with Igalia, I’d highly recommend having a read about them/us. I’ve been there a year now, and the feeling is quite similar to when I first joined Mozilla, but I believe that with Igalia’s structure, this is likely to remain a happier and safer environment. Not that I mean to knock Mozilla, especially now, but anyone who has worked there will likely admit that along with the giddy highs, there are also some unfortunate lows.


I joined Igalia as part of the team that works on WebKit, and that’s what I’ve been doing since. It almost makes perfect sense in a way. Surprisingly, although I’ve spent overwhelmingly more time on Gecko, I did actually work with WebKit first while at OpenedHand, and for a short period at Intel. While celebrating my first commit to WebKit, I did actually discover it wasn’t my first commit at all, but I’d contributed a small embedding-related fix-up in 2008. So it’s nice to have come full-circle! My first work at Igalia was fixing up some patches that Žan Doberšek had prototyped to allow direct display of YUV video data via pixel shaders. Later on, I was also pleased to extend that work somewhat by fixing some vc4 driver bugs and GStreamer bugs, to allow for hardware decoding of YUV video on Raspberry Pi 3b (this, I believe, is all upstream at this point). WebKit Gtk and WPE WebKit may be the only Linux browser backends that leverage this pipeline, allowing for 1080p30 video playback on a Pi3b. There are other issues making this less useful than you might think, but either way, it’s a nice first achievement.


After that introduction, I was pointed at what could be fairly described as my main project, OffscreenCanvas. This was also a continuation of Žan’s work (he’s prolific!), though there has been significant original work since. This might be the part of this post that people find most interesting or relevant, but having not blogged in over 2 years, I can’t be blamed for waffling just a little. OffscreenCanvas is a relatively new web standard that allows the use of canvas API disconnected from the DOM, and within Workers. It also makes some provisions for asynchronously updated rendering, allowing canvas updates in Workers to bypass the main thread entirely and thus not be blocked by long-running processes on that thread. The most obvious use-case for this, and I think the most practical, is essentially non-blocking rendering of generated content. This is extremely handy for maps, for example. There are some other nice use-cases for this as well – you can, for example, show loading indicators that don’t stop animating while performing complex DOM manipulation, or procedurally generate textures for games, asynchronously. Any situation where you might want to do some long-running image processing without blocking the main thread (image editing also springs to mind).

Currently, the only complete implementation is within Blink. Gecko has a partial implementation that only supports WebGL contexts (and last time I tried, crashed the browser on creation…), but as far as I know, that’s it. I’ve been working on this, with encouragement and cooperation from Apple, on and off for the past year. In fact, as of August 12th, it’s even partially usable, though there is still a fair bit missing. I’ve been concentrating on the 2d context use-case, as I think it’s by far the most useful part of the standard. It’s at the point where it’s mostly usable, minus text rendering and minus some edge-case colour parsing. Asynchronous updates are also not yet supported, though I believe that’s fairly close for Linux. OffscreenCanvas is enabled with experimental features, for those that want to try it out.
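Since the implementation is gated behind experimental feature flags, code that wants to try OffscreenCanvas should feature-detect it first. A minimal sketch (the helper name is invented, not part of any API):

```javascript
// Feature-detect OffscreenCanvas before using it. Returns null where the
// API (or a 2d context) is unavailable, e.g. engines without the feature
// or with it disabled.
function createOffscreen2d(width, height) {
  if (typeof OffscreenCanvas === 'undefined') return null;
  const canvas = new OffscreenCanvas(width, height);
  const ctx = canvas.getContext('2d');
  return ctx ? { canvas, ctx } : null;
}

// In a supporting browser or Worker, one could then draw off the DOM:
//   const off = createOffscreen2d(256, 256);
//   if (off) off.ctx.fillRect(0, 0, 256, 256);
```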

My next goal, after asynchronous updates on Linux, is to enable WebGL context support. I believe these aren’t particularly tough goals, given where it is now, so hopefully they’ll happen by the end of the year. Text rendering is a much harder problem, but I hope that between us at Igalia and the excellent engineers at Apple, we can come up with a plan for it. The difficulty is that both styling and font loading/caching were written with the assumption that they’d run on just one thread, and that that thread would be the main thread. A very reasonable assumption in a pre-Worker and pre-many-core-CPU world of course, but increasingly less so now, and very awkward for this particular piece of work. Hopefully we’ll persevere though, this is a pretty cool technology, and I’d love to contribute to it being feasible to use widely, and lessen the gap between native and the web.

And that’s it from me. Lots of non-work related stuff has happened in the time since I last posted, but I’m keeping this post tech-related. If you want to hear more of my nonsense, I tend to post on Twitter a bit more often these days. See you in another couple of years 🙂

By Chris Lord at August 27, 2020 08:56 AM

August 18, 2020

Release Notes for Safari Technology Preview 112

Surfin’ Safari

Safari Technology Preview Release 112 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 264601-265179.

Web Inspector

  • Changed the default tab order to display most commonly used tabs first (r264959)
  • Changed the background, text, and border colors to match the OS (r265120)
  • Changed to only show scrollbars when needed (r265118)
  • Fixed issue where a failed initial subresource load would break the Sources Tab (r264717)
  • Fixed the ability to save files that are base64 encoded (r264669)
  • Prevented blurring the add class input when a class is added in the Styles sidebar of the Elements tab (r264667)


  • Fixed pop-up dialog sizing for percentage height values applied to <html> (r264960)
  • Added support for replacing a Safari App Extension with a Safari Web Extension by specifying the SFSafariAppExtensionBundleIdentifiersToReplace key in the NSExtension element in your Safari Web Extension Info.plist file. The value for the key should be an array of strings, each of which is the bundle identifier of a Safari App Extension you want to replace.


  • Fixed align-content in grid containers with small content area (r265020)
  • Fixed the CSS clip-path being applied to the view-box coordinates (r264622)
  • Fixed scroll snap when using RTL layout (r264908)


  • Implemented Intl.DisplayNames (r264639)
  • Changed eval?.() to be an indirect eval (r264633)


  • Added support for SVG <a> element’s rel and relList attributes (r264789)


  • Added behaviors for YouTube to offer HDR variants to devices which support HDR (r265167)
  • Adopted AVPlayer.videoRangeOverride (r264710)
  • Added HDR decode support in software-decoded VP9 (r265073)
  • Fixed becoming unresponsive after playing a video from a YouTube playlist in picture-in-picture mode (r264684)


  • Added OfflineAudioContext constructor (r264657)
  • Fixed scaleResolutionDownBy on RTCRtpSender (r265047)


  • Added support for the type attribute to PerformanceObserver (r265001)
  • Changed date and time input types to have a textfield appearance (r265157)
  • Changed to propagate the user gesture through Fetch API (r264853)
  • Fixed highlight color to update after being set in System Preferences (r265072)
  • Fixed datalist dropdown scrollbar position to match the visible region (r264783)
  • Made mousemove event cancelable (r264658)

Text Manipulation

  • Changed text manipulation to not extract non-breaking spaces (r264947)
  • Fixed article headlines being split across multiple lines after translating (r264729)


  • Changed to allow IndexedDB in third-party frames (r264790)

August 18, 2020 06:20 PM

August 13, 2020

Javier Fernández: Improving CSS Custom Properties performance

Igalia WebKit

Chrome 84 reached the stable channel a few weeks ago, and there are already several great posts describing the many important additions, interesting new features, security fixes and improvements in privacy policies ([1], [2], [3], [4]) it contains. However, there is a change that I worked on in this release which might have passed unnoticed by most, but which I think is very valuable: a change regarding CSS Custom Properties (variables) performance.

The design of CSS, in general, takes great care in considering how features are designed with respect to making it possible for them to perform well. However, implementations may not perform as well as they could, and it takes a considerable amount of time to understand how authors use the features and which cases are more relevant for them.

CSS Custom Properties are an interesting example to look at here: They are a wonderful feature that provides a lot of advantages for web authors. For a whole lot of cases, all of the implementations of CSS Custom Properties perform well enough that most people won’t notice. However, we at Igalia have been analyzing several use cases and looking at some reports around their performance in different implementations.

Let’s consider a fairly straightforward example in which an author sets a single property in a toggleable class in the body, and then uses that property several times deeper in the tree to change the foreground color of some text.

   .red { --prop: red; }
   .green { --prop: green; }

Only about 20% of those actually use this property, 5 elements deep into the tree, and only to change the foreground color.

To evaluate Chromium’s performance in a case like this we can define a new perf test, using the perf tools the Chromium project has available for browser engineers. In this case, we want a huge tree so that we can better evaluate the impact of the different optimizations.

    .green { --prop: green; }
    .red { --prop: red; }


These are the results obtained running the test in Chrome 83:

            avg        median     std dev   min        max
Chrome 83   163.74 ms  163.79 ms  3.69 ms   158.59 ms  163.74 ms

I admit that it’s difficult to evaluate the results, especially considering the number of nodes of such a huge DOM tree. Let’s compare the results of the same test on Firefox, using different numbers of nodes.

Nodes      50K        20K       10K       5K        1K       500
Chrome 83  163.74 ms  55.05 ms  25.12 ms  14.18 ms  2.74 ms  1.50 ms
FF 78      28.35 ms   12.05 ms  6.10 ms   3.50 ms   1.15 ms  0.55 ms
FF/Chrome  1/6        1/5       1/4       1/4       1/2      1/3

As I commented before, the data are more meaningful when the DOM tree has a lot of nodes; in any case, the difference is quite clear and shows there is plenty of room for improvement. WebKit based browsers show results similar to Chromium’s as well.

Performance tests like the one above can be added to browsers for tracking improvements and regressions over time, so we’ve added one (r763335) to Chromium’s tree: we’d like to see it get faster over time, and definitely cannot afford regressions (see the Chrome Performance Dashboard and the ChangeStyleCustomPropertyDeclaration test for details).

So… What can we do?

In Chrome 83 and lower, whenever the custom property declaration changed, the new declaration would be inherited by the whole tree. This inheritance implied executing the whole CSS cascade and recalculating the styles of all the nodes in the entire tree, since with this approach, all nodes may be affected.

Chrome had already implemented an optimization in the CSS cascade implementation for regular CSS properties that don’t depend on any other property to resolve their value. This subset of CSS properties is defined as Independent Properties in the Chromium codebase. The optimization affects how the inheritance mechanism is implemented for these independent properties: whenever one of them changes, instead of recalculating the styles of the inherited properties, children can just copy the whole parent’s computed style. Blink’s style engine has a component known as the Matched Properties Cache, responsible for deciding when it is possible to avoid the style resolution of an element and instead perform an efficient copy of the matched computed style. I’ll get back to this concept in the last part of this post.

In the case of CSS Custom Properties, applying a similar approach is a good first step. We can consider that nodes whose computed styles don’t reference custom property declarations shouldn’t be affected by the new declaration, and we can implement the inheritance directly by copying the parent’s computed style. The patch with this optimization, which I implemented in r765278, initially landed in Chrome 84.0.4137.0.
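As a toy model of this idea (all names invented; Blink's real implementation works on its internal ComputedStyle objects): a node that declares nothing and references no var() inherits by copying the parent's computed style, and only nodes that declare properties or consume variables pay for full resolution.

```javascript
// Toy style recalc: count how many nodes do full resolution vs. a cheap
// copy of the parent's computed style when a custom property changes.
function recalc(node, parentComputed, stats) {
  const values = Object.values(node.declared);
  const usesVar = values.some(v => typeof v === 'string' && v.includes('var('));
  let computed;
  if (values.length === 0 && !usesVar) {
    computed = parentComputed; // cheap: share the parent's computed style
    stats.copied += 1;
  } else {
    computed = { ...parentComputed, ...node.declared }; // simplified cascade
    for (const [prop, value] of Object.entries(computed)) {
      const m = typeof value === 'string' && value.match(/^var\((--[\w-]+)\)$/);
      if (m) computed[prop] = computed[m[1]]; // resolve the variable reference
    }
    stats.resolved += 1;
  }
  node.computed = computed;
  for (const child of node.children) recalc(child, computed, stats);
}

// A tiny tree: the root toggles --prop, a deep leaf consumes it, and the
// node in between declares nothing, so it can simply copy.
const leaf = { declared: { color: 'var(--prop)' }, children: [] };
const mid = { declared: {}, children: [leaf] };
const root = { declared: { '--prop': 'red' }, children: [mid] };
const stats = { copied: 0, resolved: 0 };
recalc(root, {}, stats);
console.log(leaf.computed.color, stats); // "red", 1 copied, 2 resolved
```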

Let’s look at the result of this one action in the Chrome Performance Dashboard:

That’s a really good improvement!

However, it’s also just a first step. It’s clear that Chrome still has a wide margin for improvement in this case, as does any WebKit based browser; Firefox is still, impressively, markedly faster, as described in the bug report filed to track this issue. The following table shows the results of the different browsers together; even after disabling the multi-thread capabilities of Firefox’s Stylo engine (STYLO_THREAD=1), FF is much faster than Chrome with the optimization applied.

            avg        median     std dev   min        max
Chrome 83   163.74 ms  163.79 ms  3.69 ms   158.59 ms  163.74 ms
Chrome 84   117.37 ms  117.52 ms  1.98 ms   113.66 ms  120.87 ms
FF 78       28.35 ms   28.50 ms   0.93 ms   26.00 ms   30.00 ms
FF 78 th=1  38.25 ms   38.50 ms   1.86 ms   35.00 ms   41.00 ms

Before continuing, I want to get back to the Matched Properties Cache (MPC) concept, since it has an important role in these style optimizations. This cache is not a new concept in Chrome’s engine; as a matter of fact, it’s also used in WebKit, since it was implemented long ago, before the fork that created the new Blink engine. However, Google has been working a lot in this area in recent years, and some of the most recent changes in the MPC have had an important impact on style resolution performance. As a result of this work, elements with independent and non-independent properties using CSS Variables might produce cache hits in the MPC. The results of the Performance Dashboard show a considerable improvement in the mentioned ChangeStyleCustomPropertyDeclaration test (avg: 108.06 ms).

Additionally, there are several other cases where the use of CSS Variables has a considerable impact on performance, compared with using regular CSS properties. Obviously, resolving CSS Variables has a cost, so it’s clear that we could apply additional optimizations that reduce the impact of variable resolution, especially for handling specific style changes that might not affect a substantial portion of the DOM tree. I’ve been experimenting with the MPC to explore the idea of an independent CSS Custom Properties cache; nodes with variables referencing the same custom property would produce cache hits in the MPC, even though other properties don’t match. The preliminary approach I’ve been implementing consists of a new matching function, specific to custom properties, and a mechanism to transfer/copy the property’s data to avoid resolving the variable again, since the property’s declaration hasn’t changed. We would need to apply the CSS cascade again, but at least we could save the cost of the variable resolution.

Of course, at the end of the day, improving performance has costs and challenges – and it’s hard to keep performance even once you get it. But if we really want performant CSS Custom Properties, this means that we have to decide to prioritize this work. Currently there is reluctance to explore the concept of a new Custom Properties specific cache – the challenge is big and the risks are not non-existent; cache invalidation can get complicated. But the point is that we have to understand that we aren’t all going to agree on what is important enough to warrant attention, or how much investment, or when. Web authors must convince vendors that these use cases are worth optimizing and that the cost and risks of such complex challenges should be assumed by them.

This work has been sponsored by Bloomberg, which I consider one of the most important contributors to the Web Platform. After several years, the vision of this company and its sense of responsibility as a consumer of the platform have led to many important contributions that we all enjoy now. Although CSS Grid Layout might be the most remarkable one, there are many others not as big, like this work on CSS Custom Properties, or several other new features of the CSS Text specification. This is a perfect example of a company that tries to change priorities and adapt the web platform to its needs and the use cases it considers most aligned with its business strategy.

I understand that not every user of the web platform can make this kind of investment. This is why I believe that initiatives like Open Prioritization could help move the web platform in a positive direction, by providing a way for us to move past a lot of these conversations and focus on the needs that some web authors and users of the platform consider more important, or higher priority. Improving performance for CSS Custom Properties isn’t currently one of the projects we’ve listed, but perhaps it would be an interesting one we might try in the future if we are successful with these. If you haven’t already, have a look and see if there is something there that is interesting to you or your company – pledges of any size are good – ten thousand $1 donations are every bit as good as ten $1000 donations. Together, we can make a difference, and we all benefit.

Also, we would love to hear about your ideas. Is improving CSS Custom Properties performance important to you? What else is? Share your comments with us on Twitter, either with me (@lajava77) or our developer advocate Brian Kardell (@briankardell), or email me at jfernandez@igalia.com. I’d be glad to answer any questions about the Open Prioritization experiment.

By jfernandez at August 13, 2020 06:16 PM

July 29, 2020

Release Notes for Safari Technology Preview 111

Surfin’ Safari

Safari Technology Preview Release 111 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 263988-264601.

Web Inspector

  • Added an error message if unable to fetch shader source in the Canvas tab (r264045)
  • Fixed Heap Snapshot Object Graph view not getting populated in some cases when inspecting a JSContext (r264124)
  • Updated the tab bar colors of undocked Web Inspector to match Safari in macOS Big Sur (r264410)
  • Updated the title bar of undocked Web Inspector to be white in macOS Big Sur (r264204)

Web Extensions

  • Fixed chrome.tabs.update() so it does not open a new tab for safari-web-extension URLs
  • Fixed chrome.tabs.create() so it passes a valid tab object to the callback for relative extension URLs


  • Fixed content changes not triggering re-snapping with scroll snap after a scroll gesture (r264190)
  • Fixed scrolling pages with non-invertible transforms in children of an overflow: scroll element (r264031)
  • Fixed stuttery scrolling by ensuring a layout-triggered scroll snap does not happen if a user scroll is in progress on the scrolling thread (r264203)


  • Fixed high CPU usage on Bitbucket search results pages (r264008)


  • Fixed line name positions after implicit grid track (r264465)


  • Made String.prototype.toLocaleLowerCase’s availableLocales HashSet more efficient (r264293)
  • Changed Intl.Locale maximize, minimize to return Intl.Locale instead of String (r264275)
  • Fixed Math.max() yielding the wrong result for max(0, -0) (r264507)
  • Fixed redefining a property that should not change its insertion index (Object.keys order) (r264574)

Web Authentication

  • Added a console message to indicate a user gesture is required to use the platform authenticator (r264490)
  • Relaxed the user gesture requirement to allow it to be propagated through XHR events (r264528)


  • Fixed the ability to pause playback of a MediaStream video track (r264312)
  • Added support for parsing VP-style codec strings. (r264367)


  • Changed URL.host to not override the port (r264516)
  • Fixed autocapitalize="words" capitalizing every word’s second character (r264112)
  • Multiplexed the HID and GameController gamepad providers on macOS (r264207)
  • Removed the concept of “initial connected gamepads” (r264004)

Storage Access API

  • Added the capability to open a popup and get user interaction so we can call the Storage Access API as a quirk, on behalf of websites that should be doing it themselves (r263992)

Intelligent Tracking Prevention

  • Added an artificial delay to WebSocket connections to mitigate port scanning attacks (r264306)


  • Implemented user action specifications for Escape action (r264000)

Text Manipulation

  • Fixed text manipulation to observe manipulated text after update (r264305)
  • Fixed text manipulation to ignore white spaces between nodes (r264120)
  • Fixed the caret leaving trails behind when the editable content is subpixel positioned (r264386)

July 29, 2020 05:32 PM

Speculation in JavaScriptCore

Surfin’ Safari

This post is all about speculative compilation, or just speculation for short, in the context of the JavaScriptCore virtual machine. Speculative compilation is ideal for making dynamic languages, or any language with enough dynamic features, run faster. In this post, we will look at speculation for JavaScript. Historically, this technique, or closely related variants of it, has been applied successfully to Smalltalk, Self, Java, .NET, Python, and Ruby, among others. Starting in the 90’s, intense benchmark-driven competition between many Java implementations helped to create an understanding of how to build speculative compilers for languages with small amounts of dynamism. Despite being a lot more dynamic than Java, the JavaScript performance war that started in the naughts has generally favored increasingly aggressive applications of the same speculative compilation tricks that worked great for Java. It seems like speculation can be applied to any language implementation that uses runtime checks that are hard to reason about statically.

This is a long post that tries to demystify a complex topic. It’s based on a two hour compiler lecture (slides also available in PDF). We assume some familiarity with compiler concepts like intermediate representations (especially Static Single Assignment Form, or SSA for short), static analysis, and code generation. The intended audience is anyone wanting to understand JavaScriptCore better, or anyone thinking about using these techniques to speed up their own language implementation. Most of the concepts described in this post are not specific to JavaScript and this post doesn’t assume prior knowledge about JavaScriptCore.

Before going into the details of speculation, we’ll provide an overview of speculation and an overview of JavaScriptCore. This will help provide context for the main part of this post, which describes speculation by breaking it down into five parts: bytecode (the common IR), control, profiling, compilation, and OSR (on stack replacement). We conclude with a small review of related work.

Overview of Speculation

The intuition behind speculation is to leverage traditional compiler technology to make dynamic languages as fast as possible. Construction of high-performance compilers is a well-understood art, so we want to reuse as much of that as we can. But we cannot do this directly for a language like JavaScript because the lack of type information means that the compiler can’t do meaningful optimizations for any of the fundamental operations (even things like + or ==). Speculative compilers use profiling to infer types dynamically. The generated code uses dynamic type checks to validate the profiled types. If the program uses a type that is different from what we profiled, we throw out the optimized code and try again. This lets the optimizing compiler work with a statically typed representation of the dynamically typed program.
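As a toy sketch of that profile-then-speculate loop (these are not JSC's actual data structures, and all names here are invented; a real engine tracks far finer-grained information in machine code):

```javascript
// Toy value profile: tally the type "kinds" an operation observes so the
// optimizing compiler can later speculate only when the site is monomorphic.
function makeValueProfile() {
  const counts = new Map();
  return {
    record(v) {
      const kind = typeof v === 'number'
        ? (Number.isInteger(v) ? 'int' : 'double')
        : typeof v;
      counts.set(kind, (counts.get(kind) || 0) + 1);
    },
    // The kind to speculate on, or null if the profile is polymorphic.
    speculatedKind() {
      return counts.size === 1 ? counts.keys().next().value : null;
    },
  };
}

const profile = makeValueProfile();
[1, 2, 3].forEach(v => profile.record(v));
console.log(profile.speculatedKind()); // "int"
profile.record('hi');
console.log(profile.speculatedKind()); // null: stay generic, don't speculate
```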

Types are a major theme of this post even though the techniques we are describing are for implementing dynamically typed languages. When languages include static types, it can be to provide safety properties for the programmer or to help give an optimizing compiler leverage. We are only interested in types for performance, and the speculation strategy in JavaScriptCore can be thought of in broad strokes as inferring the kinds of types that a C program would have, but using an internal type system purpose-built for our optimizing compiler. More generally, the techniques described in this post can be used to enable any kind of profile-guided optimization, including ones that aren’t related to types. But both this post and JavaScriptCore focus on the kind of profiling and speculation that is most natural to think of as being about types (whether a variable is an integer, what object shapes a pointer points to, whether an operation has effects, etc.).

To dive into this a bit deeper, we first consider the impact of types. Then we look at how speculation gives us types.

Impact of Types

We want to give dynamically typed languages the kind of optimizing compiler pipeline that would usually be found in ahead-of-time compilers for high-performance statically typed languages like C. The input to such an optimizer is typically some kind of internal representation (IR) that is precise about the type of each operation, or at least a representation from which the type of each operation can be inferred.

To understand the impact of types and how speculative compilers deal with them, consider this C function:

int foo(int a, int b)
{
    return a + b;
}

In C, types like int are used to describe variables, arguments, return values, etc. Before the optimizing compiler has a chance to take a crack at the above function, a type checker fills in the blanks so that the + operation will be represented using an IR instruction that knows that it is adding 32-bit signed integers (i.e. ints). This knowledge is essential:

  • Type information tells the compiler’s code generator how to emit code for this instruction. We know to use integer addition instructions (not double addition or something else) because of the int type.
  • Type information tells the optimizer how to allocate registers for the inputs and outputs. Integers mean using general purpose registers. Floating point means using floating point registers.
  • Type information tells the optimizer what optimizations are possible for this instruction. Knowing exactly what it does allows us to know what other operations can be used in place of it, allows us to do some algebraic reasoning about the math the program is doing, and allows us to fold the instruction to a constant if the inputs are constants. If there are types for which + has effects (like in C++), then the fact that this is an integer + means that it’s pure. Lots of compiler optimizations that work for + would not work if it wasn’t pure.

Now consider the same program in JavaScript:

function foo(a, b)
{
    return a + b;
}

We no longer have the luxury of types. The program doesn’t tell us the types of a or b. There is no way that a type checker can label the + operation as being anything specific. It can do a bunch of different things based on the runtime types of a and b:

  • It might be a 32-bit integer addition.
  • It might be a double addition.
  • It might be a string concatenation.
  • It might be a loop with method calls. Those methods can be user-defined and may perform arbitrary effects. This’ll happen if a or b are objects.
Figure 1. The best that a nonspeculative compiler can do if given a JavaScript plus operation. This figure depicts a control flow graph as a compiler like JavaScriptCore’s DFG might see. The Branch operation is like an if and has outgoing edges for the then/else outcomes of the condition.

Based on this, it’s not possible for an optimizer to know what to do. Instruction selection means emitting either a function call for the whole thing or an expensive control flow subgraph to handle all of the various cases (Figure 1). We won’t know which register file is best for the inputs or results; we’re likely to go with general purpose registers and then do additional move instructions to get the data into floating point registers in case we have to do a double addition. It’s not possible to know if one addition produces the same results as another, since they have loops with effectful method calls. Anytime a + happens we have to allow for the possibility that the whole heap might have been mutated.

In short, it’s not practical to use optimizing compilers for JavaScript unless we can somehow provide types for all of the values and operations. For those types to be useful, they need to help us avoid basic operations like + seeming like they require control flow or effects. They also need to help us understand which instructions or register files to use. Speculative compilers get speed-ups by applying this kind of reasoning to all of the dynamic operations in a language — ranging from those represented as fundamental operations (like + or memory accesses like o.f and o[i]) to those that involve intrinsics or recognizable code patterns (like calling Function.prototype.apply).
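To make that dispatch concrete, here is a simplified sketch of what a fully generic + must be prepared to do at runtime (the real semantics follow ECMAScript's ToPrimitive rules, and valueOf/toString may run arbitrary user code with arbitrary effects):

```javascript
// Sketch of a generic add: convert objects to primitives (which can run
// user code), then pick string concatenation or numeric addition.
function genericAdd(a, b) {
  const toPrimitive = v =>
    v !== null && typeof v === 'object' ? v.valueOf() : v;
  const pa = toPrimitive(a);
  const pb = toPrimitive(b);
  if (typeof pa === 'string' || typeof pb === 'string') {
    return String(pa) + String(pb); // string concatenation
  }
  return Number(pa) + Number(pb); // numeric addition (int or double)
}

console.log(genericAdd(1, 2));                     // 3
console.log(genericAdd('a', 1));                   // "a1"
console.log(genericAdd({ valueOf: () => 40 }, 2)); // 42, after a user call
```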

Speculated Types

This post focuses on those speculations where the collected information can be most naturally understood as type information, like whether or not a variable is an integer and what properties a pointed-to object has (and in what order). Let’s appreciate two aspects of this more deeply: when and how the profiling and optimization happen and what it means to speculate on type.

Figure 2. Optimizing compilers for C and JavaScript.

Let’s consider what we mean by speculative compilation for JavaScript. JavaScript implementations pretend to be interpreters; they accept JS source as input. But internally, these implementations use a combination of interpreters and compilers. Initially, code starts out running in an execution engine that does no speculative type-based optimizations but collects profiling about types. This is usually an interpreter, but not always. Once a function has a satisfactory amount of profiling, the engine will start an optimizing compiler for that function. The optimizing compiler is based on the same fundamentals as the one found in a C compiler, but instead of accepting types from a type checker and running as a command-line tool, here it accepts types from a profiler and runs in a thread in the same process as the program it’s compiling. Once that compiler finishes emitting optimized machine code, we switch execution of that function from the profiling tier to the optimized tier. Running JavaScript code has no way of observing this happening to itself except if it measures execution time. (However, the environment we use for testing JavaScriptCore includes many hooks for introspecting what has been compiled.) Figure 2 illustrates how and when profiling and optimization happens when running JavaScript.

Roughly, speculative compilation means that our example function will be transformed to look something like this:

function foo(a, b)
{
    speculate(isInt32(a));
    speculate(isInt32(b));
    return a + b; // now known to be an int32 addition
}

The tricky thing is what exactly it means to speculate. One simple option is what we call diamond speculation. This means that every time that we perform an operation, we have a fast path specialized for what the profiler told us and a slow path to handle the generic case:

if (is int)
    int add
else
    Call(slow path)

To see how that plays out, let’s consider a slightly different example:

var tmp1 = x + 42;
... // things
var tmp2 = x + 100;

Here, we use x twice, both times adding it to a known integer. Let’s say that the profiler tells us that x is an integer but that we have no way of proving this statically. Let’s also say that x‘s value does not change between the two uses and we have proved that statically.

Figure 3. Diamond speculation that x is an integer.

Figure 3 shows what happens if we speculate on the fact that x is an integer using a diamond speculation: we get a fast path that does the integer addition and a slow path that bails out to a helper function. Speculations like this can produce modest speed-ups at modest cost. The cost is modest because if the speculation is wrong, only the operations on x pay the price. The trouble with this approach is that repeated uses of x must recheck whether it is an integer. The rechecking is necessary because of the control flow merge that happens at the things block and again at more things.
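Written out in plain JavaScript, the diamond-speculated addition might look like the following sketch. The names speculativeAdd and slowPathAdd and the int32 test are illustrative stand-ins, and the overflow check a real compiler would also emit is omitted:

```javascript
function slowPathAdd(x, k) {
    return x + k;  // the fully generic + operation
}

function speculativeAdd(x, k) {
    if ((x | 0) === x) {
        // Fast path: x checked to be an int32 (overflow check omitted).
        return x + k;
    }
    // Slow path: bail out to a helper that handles every case.
    return slowPathAdd(x, k);
}
```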

The original solution to this problem was splitting, where the region of the program between things and more things would get duplicated to avoid the branch. An extreme version of this is tracing, where the entire remainder of a function is duplicated after any branch. The trouble with these techniques is that duplicating code is expensive. We want to minimize the number of times that the same piece of code is compiled so that we can compile a lot of code quickly. The closest thing to splitting that JavaScriptCore does is tail duplication, which optimizes diamond speculations by duplicating the code between them if that code is tiny.

A better alternative to diamond speculations or splitting is OSR (on stack replacement). When using OSR, a failing type check exits out of the optimized function back to the equivalent point in the unoptimized code (i.e. the profiling tier’s version of the function).

Figure 4. OSR speculation that x is an integer.

Figure 4 shows what happens when we speculate that x is an integer using OSR. Because there is no control flow merge between the case where x is an int and the case where it isn’t, the second check becomes redundant and can be eliminated. The lack of a merge means that the only way to reach the second check is if the first check passed.

OSR speculations are what gives our traditional optimizing compiler its static types. After any OSR-based type check, the compiler can assume that the property that was checked is now fact. Moreover, because OSR check failure does not affect semantics (we exit to the same point in the same code, just with fewer optimizations), we can hoist those checks as high as we want and infer that a variable always has some type simply by guarding all assignments to it with the corresponding type check.

Note that what we call OSR exit in this post and in JavaScriptCore is usually called deoptimization elsewhere. We prefer to use the term OSR exit in our codebase because it emphasizes that the point is to exit an optimized function using an exotic technique (OSR). The term deoptimization makes it seem like we are undoing optimization, which is only true in the narrow sense that a particular execution jumps from optimized code to unoptimized code. For this post we will follow the JavaScriptCore jargon.

JavaScriptCore uses OSR or diamond speculations depending on our confidence that the speculation will be right. OSR speculation has higher benefit and higher cost: the benefit is higher because repeated checks can be eliminated but the cost is also higher because OSR is more expensive than calling a helper function. However, the cost is only paid if the exit actually happens. The benefits of OSR speculation are so superior that we focus on that as our main speculation strategy, with diamond speculation being the fallback if our profiling indicates lack of confidence in the speculation.

Figure 5. Speculating with OSR and exiting to bytecode.

OSR-based speculation relies on the fact that traditional compilers are already good at reasoning about side exits. Trapping instructions (like for null check optimization in Java virtual machines), exceptions, and multiple return statements are all examples of how compilers already support exiting from a function.

Assuming that we use bytecode as the common language shared between the unoptimizing profiled tier of execution and the optimizing tier, the exit destinations can just be bytecode instruction boundaries. Figure 5 shows how this might work. The machine code generated by the optimizing compiler contains speculation checks against unlikely conditions. The idea is to do lots of speculations. For example, the prologue (the enter instruction in the figure) may speculate about the types of the arguments — that’s one speculation per argument. An add instruction may speculate about the types of its inputs and about the result not overflowing. Our type profiling may tell us that some variable tends to always have some type, so a mov instruction whose source is not proved to have that type may speculate that the value has that type at runtime. Accessing an array element (what we call get_by_val) may speculate that the array is really an array, that the index is an integer, that the index is in bounds, and that the value at the index is not a hole (in JavaScript, loading from a never assigned array element means walking the array’s prototype chain to see if the element can be found there — something we avoid doing most of the time by speculating that we don’t have to). Calling a function may speculate that the callee is the one we expected or at least that it has the appropriate type (that it’s something we can call).
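The hole speculation exists because of a real language behavior: loading a never-assigned array element really does consult the prototype chain:

```javascript
let a = [1, , 3];                 // index 1 was never assigned: a "hole"
Array.prototype[1] = "surprise";  // plant a value on the prototype
let v = a[1];                     // walks the prototype chain, finds "surprise"
delete Array.prototype[1];        // clean up the prototype
```

Speculating that loads never hit a hole lets the compiler skip that walk entirely on the fast path.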

While exiting out of a function is straightforward without breaking fundamental assumptions in optimizing compilers, entering turns out to be super hard. Entering into a function somewhere other than at its primary entrypoint pessimizes optimizations at any merge points between entrypoints. If we allowed entering at every bytecode instruction boundary, this would negate the benefits of OSR exit by forcing every instruction boundary to make worst-case assumptions about type. Even allowing OSR entry just at loop headers would break lots of loop optimizations. This means that it’s generally not possible to reenter optimized execution after exiting. We only support entry in cases where the reward is high, like when our profiler tells us that a loop has not yet terminated at the time of compilation. Put simply, the fact that traditional compilers are designed for single-entry multiple-exit procedures means that OSR entry is hard but OSR exit is easy.

JavaScriptCore and most speculative compilers support OSR entry at hot loops, but since it’s not an essential feature for most applications, we’ll leave understanding how we do it as an exercise for the reader.

Figure 6. Speculation broken into the five topics of this post.

The main part of this post describes speculation in terms of its five components (Figure 6): the bytecode, or common IR, of the virtual machine that allows for a shared understanding about the meaning of profiling and exit sites between the unoptimized profiling tier and the optimizing tier; the unoptimized profiling tier that is used to execute functions at start-up, collect profiling about them, and to serve as an exit destination; the control system for deciding when to invoke the optimizing compiler; the optimizing tier that combines a traditional optimizing compiler with enhancements to support speculation based on profiling; and the OSR exit technology that allows the optimizing compiler to use the profiling tier as an exit destination when speculation checks fail.

Overview of JavaScriptCore

Figure 7. The tiers of JavaScriptCore.

JavaScriptCore embraces the idea of tiering and has four tiers for JavaScript (and three tiers for WebAssembly, but that’s outside the scope of this post). Tiering has two benefits: the primary benefit, described in the previous section, of enabling speculation; and a secondary benefit of allowing us to fine-tune the throughput-latency tradeoff on a per-function basis. Some functions run for so short — like straight-line run-once initialization code — that running any compiler on those functions would be more expensive than interpreting them. Some functions get invoked so frequently, or have such long loops, that their total execution time far exceeds the time to compile them with an aggressive optimizing compiler. But there are also lots of functions in the grey area in between: they run for not enough time to make an aggressive compiler profitable, but long enough that some intermediate compiler designs can provide speed-ups. JavaScriptCore has four tiers as shown in Figure 7:

  • The LLInt, or low-level interpreter, which is an interpreter that obeys JIT compiler ABI. It runs on the same stack as the JITs and uses a known set of registers and stack locations for its internal state.
  • The Baseline JIT, also known as a bytecode template JIT, which emits a template of machine code for each bytecode instruction without trying to reason about relationships between multiple instructions in the function. It compiles whole functions, which makes it a method JIT. Baseline does no OSR speculations but does have a handful of diamond speculations based on profiling from the LLInt.
  • The DFG JIT, or data flow graph JIT, which does OSR speculation based on profiling from the LLInt, Baseline, and in some rare cases even using profiling data collected by the DFG JIT and FTL JIT. It may OSR exit to either baseline or LLInt. The DFG has a compiler IR called DFG IR, which allows for sophisticated reasoning about speculation. The DFG avoids doing expensive optimizations and makes many compromises to enable fast code generation.
  • The FTL JIT, or faster than light JIT, which does comprehensive compiler optimizations. It’s designed for peak throughput. The FTL never compromises on throughput to improve compile times. This JIT reuses most of the DFG JIT’s optimizations and adds lots more. The FTL JIT uses multiple IRs (DFG IR, DFG SSA IR, B3 IR, and Assembly IR).

An ideal example of this in action is this program:

"use strict";

let result = 0;
for (let i = 0; i < 10000000; ++i) {
    let o = {f: i};
    result += o.f;
}
Thanks to the object allocation inside the loop, it will run for a long time until the FTL JIT can compile it. The FTL JIT will kill that allocation, so then the loop finishes quickly. The long running time before optimization virtually guarantees that the FTL JIT will take a stab at this program’s global function. Additionally, because the function is clean and simple, all of our speculations are right and there are no OSR exits.

Figure 8. Example timeline of a simple long loop executing in JavaScriptCore. Execution times recorded on my computer one day.

Figure 8 shows the timeline of this benchmark executing in JavaScriptCore. The program starts executing in the LLInt. After about a thousand loop iterations, the loop trigger causes us to start a baseline compiler thread for this code. Once that finishes, we do an OSR entry into the baseline JITed code at the for loop’s header. The baseline JIT also counts loop iterations, and after about a thousand more, we spawn the DFG compiler. The process repeats until we are in the FTL. When I measured this, I found that the DFG compiler needs about 4× the time of the baseline compiler, and the FTL needs about 6× the time of the DFG. While this example is contrived and ideal, the basic idea holds for any JavaScript program that runs long enough since all tiers of JavaScriptCore support the full JavaScript language.

Figure 9. JavaScriptCore tier architecture.

JavaScriptCore is architected so that having many tiers is practical. Figure 9 illustrates this architecture. All tiers share the same bytecode as input. That bytecode is generated by a compiler pipeline that desugars many language features, such as generators and classes, among others. In many cases, it’s possible to add new language features just by modifying the bytecode generation frontend. Once linked, the bytecode can be understood by any of the tiers.

The bytecode can be interpreted by the LLInt directly or compiled with the baseline JIT, which mostly just converts each bytecode instruction into a preset template of machine code. The LLInt and Baseline JIT share a lot of code, mostly in the slow paths of bytecode instruction execution. The DFG JIT converts bytecode to its own IR, the DFG IR, and optimizes it before emitting code. In many cases, operations that the DFG chooses not to speculate on are emitted using the same code generation helpers as the Baseline JIT. Even operations that the DFG does speculate on often share slow paths with the Baseline JIT. The FTL JIT reuses the DFG’s compiler pipeline and adds new optimizations to it, including multiple new IRs that have their own optimization pipelines. Despite being more sophisticated than the DFG or Baseline, the FTL JIT shares slow path implementations with those JITs and in some cases even shares code generation for operations that we choose not to speculate on.

Even though the various tiers try to share code whenever possible, they aren’t required to. Take the get_by_val (access an array element) instruction in bytecode. This has duplicate definitions in the bytecode liveness analysis (which knows the liveness rules for get_by_val), the LLInt (which has a very large implementation that switches on a bunch of the common array types and has good code for all of them), the Baseline (which uses a polymorphic inline cache), and the DFG bytecode parser. The DFG bytecode parser converts get_by_val to the DFG IR GetByVal operation, which has separate definitions in the DFG and FTL backends as well as in a bunch of phases that know how to optimize and model GetByVal. The only thing that keeps those implementations in agreement is good convention and extensive testing.

To give a feeling for the relative throughput of the various tiers, I’ll share some informal performance data that I’ve gathered over the years out of curiosity.

Figure 10. Relative performance of the four tiers on JetStream 2 on my computer at the time of that benchmark’s introduction.

We’re going to use the JetStream 2 benchmark suite since that’s the main suite that JavaScriptCore is tuned for. Let’s first consider an experiment where we run JetStream 2 with the tiers progressively enabled starting with the LLInt. Figure 10 shows the results: the Baseline and DFG are more than 2× better than the tier below them and the FTL is 1.1× better than the DFG.

The FTL’s benefits may be modest but they are unique. If we did not have the FTL, we would have no way of achieving the same peak throughput. A great example is the gaussian-blur subtest. This is the kind of compute test that the FTL is built for. I managed to measure the benchmark’s performance when we first introduced it and did not yet have a chance to tune for it. So, this gives a glimpse of the speed-ups that we expect to see from our tiers for code that hasn’t yet been through the benchmark tuning grind. Figure 11 shows the results. All of the JITs achieve spectacular speed-ups: Baseline is 3× faster than LLInt, DFG is 6× faster than Baseline, and FTL is 1.6× faster than DFG.

Figure 11. Relative performance of the four tiers on the gaussian-blur subtest of JetStream 2.

The DFG and FTL complement one another. The DFG is designed to be a fast-running compiler and it achieves this by excluding the most aggressive optimizations, like global register allocation, escape analysis, loop optimizations, or anything that needs SSA. This means that the DFG will always get crushed on peak throughput by compilers that have those features. It’s the FTL’s job to provide those optimizations if a function runs long enough to warrant it. This ensures that there is no scenario where a hypothetical competing implementation could outperform us unless they had the same number of tiers. If you wanted to make a compiler that compiles faster than the FTL then you’d lose on peak throughput, but if you wanted to make a compiler that generates better code than the DFG then you’d get crushed on start-up times. You need both to stay in the game.

Another way of looking at the performance of these tiers is to ask: how much time does a bytecode instruction take to execute in each of the tiers on average? This tells us just about the throughput that a tier achieves without considering start-up at all. This can be hard to estimate, but I made an attempt at it by repeatedly running each JetStream 2 benchmark and having it limit the maximum tier of each function at random. Then I employed a stochastic counting mechanism to get an estimate of the number of bytecode instructions executed at each tier in each run. Combined with the execution times of those runs, this gave a simple linear regression problem of the form:

ExecutionTime = (Latency of LLInt) * (Bytecodes in LLInt)
              + (Latency of Baseline) * (Bytecodes in Baseline)
              + (Latency of DFG) * (Bytecodes in DFG)
              + (Latency of FTL) * (Bytecodes in FTL)

Where the Latency of LLInt means the average amount of time it takes to execute a bytecode instruction in LLInt.

After excluding benchmarks that spent most of their time outside JavaScript execution (like regexp and wasm benchmarks) and fiddling with how to weight benchmarks (I settled on solving each benchmark separately and computing the geomean of the coefficients, since this matches JetStream 2 weighting), the solution I arrived at was:

Execution Time = (3.97 ns) * (Bytecodes in LLInt)
               + (1.71 ns) * (Bytecodes in Baseline)
               + (.349 ns) * (Bytecodes in DFG)
               + (.225 ns) * (Bytecodes in FTL)

In other words, Baseline executes code about 2× faster than LLInt, DFG executes code about 5× faster than Baseline, and the FTL executes code about 1.5× faster than DFG. Note how this data is in the same ballpark as what we saw for gaussian-blur. That makes sense since that was a peak throughput benchmark.
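Those ballpark ratios fall directly out of the fitted coefficients:

```javascript
// Per-bytecode latencies (ns) from the regression above.
const latency = { llint: 3.97, baseline: 1.71, dfg: 0.349, ftl: 0.225 };

const baselineOverLLInt = latency.llint / latency.baseline;  // ≈ 2.3×
const dfgOverBaseline = latency.baseline / latency.dfg;      // ≈ 4.9×
const ftlOverDFG = latency.dfg / latency.ftl;                // ≈ 1.55×
```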

Although this isn’t a garbage collection blog post, it’s worth understanding a bit about how the garbage collector works. JavaScriptCore picks a garbage collection strategy that makes the rest of the virtual machine, including all of the support for speculation, easier to implement. The garbage collector has the following features that make speculation easier:

  • The collector scans the stack conservatively. This means that compilers don’t have to worry about how to report pointers to the collector.
  • The collector doesn’t move objects. This means that if a data structure (like the compiler IR) has many possible ways of referencing some object, we only have to report one of them to the collector.
  • The collector runs to fixpoint. This makes it possible to invent precise rules for whether objects created by speculation should be kept alive.
  • The collector’s object model is expressed in C++. JavaScript objects look like C++ objects, and JS object pointers look like C++ pointers.

These features make the compiler and runtime easier to write, which is great, since speculation requires us to write a lot of compiler and runtime code. JavaScript is a slow enough language even with the optimizations we describe in this post that garbage collector performance is rarely the longest pole in the tent. Therefore, our garbage collector makes many tradeoffs to make it easier to work on the performance-critical parts of our engine (like speculation). It would be unwise, for example, to make it harder to implement some compiler optimization as a way of getting a small garbage collector optimization, since the compiler has a bigger impact on performance for typical JavaScript programs.

To summarize: JavaScriptCore has four tiers, two of which do speculative optimizations, and all of which participate in the collection of profiling. The first two tiers are an interpreter and bytecode template JIT while the last two are optimizing compilers tuned for different throughput-latency trade-offs.

Speculative Compilation

Now that we’ve established some basic background about speculation and JavaScriptCore, this section goes into the details. First we will discuss JavaScriptCore’s bytecode. Then we show the control system for launching the optimizing compiler. Next will be a detailed section about how JavaScriptCore’s profiling tiers work, which focuses mostly on how they collect profiling. Finally we discuss JavaScriptCore’s optimizing compilers and their approach to OSR.

Bytecode

Speculation requires having a profiling tier and an optimizing tier. When the profiling tier reports profiling, it needs to be able to say what part of the code that profiling is for. When the optimizing compiler wishes to compile an OSR exit, it needs to be able to identify the exit site in a way that both tiers understand. To solve both issues, we need a common IR that is:

  • Used by all tiers as input.
  • Persistent for as long as the function that it represents is still live.
  • Immutable (at least for those parts that all tiers are interested in).

In this post, we will use bytecode as the common IR. This isn’t required; abstract syntax trees or even SSA could work as a common IR. We offer some insights into how we designed our bytecode for JavaScriptCore. JavaScriptCore’s bytecode is register-based, compact, untyped, high-level, directly interpretable, and transformable.

Our bytecode is register-based in the sense that operations tend to be written as:

add result, left, right

Which is taken to mean:

result = left + right

Where result, left, and right are virtual registers. Virtual registers may refer to locals, arguments, or constants in the constant pool. Functions declare how many locals they need. Locals are used both for named variables (like var, let, or const variables) and temporaries arising from expression tree evaluation.

Our bytecode is compact: each opcode and operand is usually encoded as one byte. We have wide prefixes to allow 16-bit or 32-bit operands. This is important since JavaScript programs can be large and the bytecode must persist for as long as the function it represents is still live.

Our bytecode is untyped. Virtual registers never have static type. Opcodes generally don’t have static type except for the few opcodes that have a meaningful type guarantee on their output (for example, the | operator always returns int32, so our bitor opcode returns int32). This is important since the bytecode is meant to be a common source of truth for all tiers. The profiling tier runs before we have done type inference, so the bytecode can’t have any more types than the JavaScript language.
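The | guarantee is easy to observe from the language itself:

```javascript
// | converts its operands with ToInt32, so the result is always an int32.
const t1 = 3.7 | 0;        // 3: fraction dropped
const t2 = "42" | 0;       // 42: string converted to a number first
const t3 = (2 ** 31) | 0;  // -2147483648: wraps into int32 range
```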

Our bytecode is almost as high-level as JavaScript. While we use desugaring for many JavaScript features, we only do that when implementation by desugaring isn’t thought to cost performance. So, even the “fundamental” features of our bytecode are high level. For example, the add opcode has all of the power of the JavaScript + operator, including that it might mean a loop with effects.

Our bytecode is directly interpretable. The same bytecode stream that the interpreter executes is the bytecode stream that we will save in the cache (to skip parsing later) and feed to the compiler tiers.

Finally, our bytecode is transformable. Normally, intermediate representations use a control flow graph and make it easy to insert and remove instructions. That’s not how bytecode works: it’s an array of instructions encoded using a nontrivial variable-width encoding. But we do have a bytecode editing API and we use it for generatorification (our generator desugaring bytecode-to-bytecode pass). We can imagine this facility also being useful for other desugarings or for experimenting with bytecode instrumentation.

Compared to non-bytecode IRs, the main advantages of bytecode are that it’s easy to:

  • Identify targets for OSR exit. OSR exit in JavaScriptCore requires entering into an unoptimized bytecode execution engine (like an interpreter) at some arbitrary bytecode instruction. Using bytecode instruction index as a way of naming an exit target is intuitive since it’s just an integer.
  • Compute live state at exit. Register-based bytecode tends to have dense register numberings so it’s straightforward to analyze liveness using bitvectors. That tends to be fast and doesn’t require a lot of memory. It’s practical to cache the results of bytecode liveness analysis, for example.

JavaScriptCore’s bytecode format is independently implemented by the execution tiers. For example, the baseline JIT doesn’t try to use the LLInt to create its machine code templates; it just emits those templates itself and doesn’t try to match the LLInt exactly (the behavior is identical but the implementation isn’t). The tiers do share a lot of code – particularly for inline caches and slow paths – but they aren’t required to. It’s common for bytecode instructions to have algorithmically different implementations in the four tiers. For example the LLInt might implement some instruction with a large switch that handles all possible types, the Baseline might implement the same instruction with an inline cache that repatches based on type, and the DFG and FTL might try to do some combination of inline speculations, inline caches, and emitting a switch on all types. This exact scenario happens for add and other arithmetic ops as well as get_by_val/put_by_val. Allowing this independence allows each tier to take advantage of its unique properties to make things run faster. Of course, this approach also means that adding new bytecodes or changing bytecode semantics requires changing all of the tiers. For that reason, we try to implement new language features by desugaring them to existing bytecode constructs.

It’s possible to use any sensible IR as the common IR for a speculative compiler, including abstract syntax trees or SSA, but JavaScriptCore uses bytecode so that’s what we’ll talk about in the rest of this post.

Control

Speculative compilation needs a control system to decide when to run the optimizing compiler. The control system has to balance competing concerns: compiling functions as soon as it’s profitable, avoiding compiling functions that aren’t going to run long enough to benefit from it, avoiding compiling functions that have inadequate type profiling, and recompiling functions if a prior compilation did speculations that turned out to be wrong. This section describes JavaScriptCore’s control system. Most of the heuristics we describe were necessary, in our experience, to make speculative compilation profitable. Otherwise the optimizing compiler would kick in too often, not often enough, or not at the right rate for the right functions. This section describes the full details of JavaScriptCore’s tier-up heuristics because we suspect that to reproduce our performance, one would need all of these heuristics.

JavaScriptCore counts executions of functions and loops to decide when to compile. Once a function is compiled, we count exits to decide when to throw away compiled functions. Finally, we count recompilations to decide how much to back off from recompiling a function in the future.

Execution Counting

JavaScriptCore maintains an execution counter for each function. This counter gets incremented as follows:

  • Each call to the function adds 15 points to the execution counter.
  • Each loop execution adds 1 point to the execution counter.

We trigger tier-up once the counter reaches some threshold. Thresholds are determined dynamically. To understand our thresholds, first consider their static versions and then let’s look at how we modulate these thresholds based on other information.

  • LLInt→Baseline tier-up requires 500 points.
  • Baseline→DFG tier-up requires 1000 points.
  • DFG→FTL tier-up requires 100000 points.
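A sketch of the static version of this counting scheme, using the point values quoted above (the function and method names here are made up for illustration):

```javascript
const CALL_POINTS = 15;  // points added per function call
const LOOP_POINTS = 1;   // points added per loop execution

function makeTierUpCounter(threshold) {
    let points = 0;
    return {
        // Each returns true when tier-up should trigger.
        onCall() { points += CALL_POINTS; return points >= threshold; },
        onLoop() { points += LOOP_POINTS; return points >= threshold; },
    };
}

// The static LLInt→Baseline threshold of 500 points fires after
// 34 calls (510 points), or one call plus 485 loop iterations, etc.
```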

Over the years we’ve found ways to dynamically adjust these thresholds based on other sources of information, like:

  • Whether the function got JITed the last time we encountered it (according to our cache). Let’s call this wasJITed.
  • How big the function is. Let’s call this S. We use the number of bytecode opcodes plus operands as the size.
  • How many times it has been recompiled. Let’s call this R.
  • How much executable memory is available. Let’s use M to say how much executable memory we have total, and U is the amount we estimate that we would use (total) if we compiled this function.
  • Whether profiling is “full” enough.

We select the LLInt→Baseline threshold based on wasJITed. If we don’t know (the function wasn’t in the cache) then we use the basic threshold, 500. Otherwise, if the function wasJITed then we use 250 (to accelerate tier-up) otherwise we use 2000. This optimization is especially useful for improving page load times.
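As a sketch, with a hypothetical function name and undefined standing in for “not in the cache”:

```javascript
function llintToBaselineThreshold(wasJITed) {
    if (wasJITed === undefined)
        return 500;  // no cache entry: use the basic threshold
    // Accelerate tier-up if the cache says it got JITed last time;
    // otherwise delay it.
    return wasJITed ? 250 : 2000;
}
```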

Baseline→DFG and DFG→FTL use the same scaling factor based on S, R, M, and U. The scaling factor is defined as follows:

(0.825914 + 0.061504 * sqrt(S + 1.02406)) * pow(2, R) * M / (M - U)

We multiply this by 1000 for Baseline→DFG and by 100000 for DFG→FTL. Let’s break down what this scaling factor does:

First we scale by the square root of the size. The expression 0.825914 + 0.061504 * sqrt(S + 1.02406) gives a scaling factor that is between 1 and 2 for functions smaller than about 350 bytecodes, which we consider to be “easy” functions to compile. The scaling factor uses a square root so it grows somewhat gently. We’ve also tried making the scaling factor linear, but that’s much worse: it’s worth delaying the compilation of large functions a bit, but not too much. Note that the ideal delay isn’t just about the cost of compilation. It’s also about running long enough to get good profiling. Maybe there is some deep reason why square root works well here, but all we really care about is that scaling by this amount makes programs run faster.

Then we introduce exponential backoff based on the number of times that the function has been recompiled. The pow(2, R) expression means that each recompilation doubles the thresholds.

After that we introduce a hyperbolic scaling factor, M / (M - U), to help avoid cases where we run out of executable memory altogether. This is important since some configurations of JavaScriptCore run with a small available pool of executable memory. This expression means that if we use half of executable memory then the thresholds are doubled. If we use 3/4 of executable memory then the thresholds are quadrupled. This makes filling up executable memory a bit like going at the speed of light: the math makes it so that as you get closer to filling it up the thresholds get closer to infinity. However, it’s worth noting that this is imperfect for truly large programs, since those might have other reasons to allocate executable memory not covered by this heuristic. The heuristic is also imperfect in cases of multiple things being compiled in parallel. Using this factor increases the maximum program size we can handle with small pools of executable memory, but it’s not a silver bullet.

Finally, if the execution count does reach this dynamically computed threshold, we check that some kinds of profiling (specifically, value and array profiling, discussed in detail in the upcoming profiling section) are full enough. We say that profiling is full enough if more than 3/4 of the profiling sites in the function have data. If this threshold is not met, we reset the execution counters. We let this process repeat five times. The optimizing compilers tend to speculate that unprofiled code is unreachable. This is profitable if that code really won’t ever run, but we want to be extra sure before doing that, hence we give functions with partial profiling 5× the time to warm up.

This is an exciting combination of heuristics! These heuristics were added early in the development of tiering in JSC. They were all added before we built the FTL, and the FTL inherited those heuristics just with a 100× multiplier. Each heuristic was added because it produced either a speed-up or a memory usage reduction or both. We try to remove heuristics that are not known to be speed-ups anymore, and to our knowledge, all of these still contribute to better performance on benchmarks we track.

Exit Counting

After we compile a function with the DFG or FTL, it’s possible that one of the speculations we made is wrong. This will cause the function to OSR exit back to LLInt or Baseline (we prefer Baseline, but may throw away Baseline code during GC, in which case exits from DFG and FTL will go to LLInt). We’ve found that the best way of dealing with a wrong speculation is to throw away the optimized code and try optimizing again later with better profiling. We detect if a DFG or FTL function should be recompiled by counting exits. The exit count thresholds are:

  • For a normal exit, we require 100 * pow(2, R) exits to recompile.
  • If the exit causes the Baseline JIT to enter its loop trigger (i.e. we got stuck in a hot loop after exit), then it’s counted specially. We only allow 5 * pow(2, R) of those kinds of exits before we recompile. Note that this can mean exiting five times and tripping the loop optimization trigger each time or it can mean exiting once and tripping the loop optimization trigger five times.

The first step to recompilation is to jettison the DFG or FTL function. That means that all future calls to the function will call the Baseline or LLInt function instead.


If a function is jettisoned, we increment the recompilation counter (R in our notation) and reset the tier-up functionality in the Baseline JIT. This means that the function will keep running in Baseline for a while (twice as long as it did before it was optimized last time). It will gather new profiling, which we will be able to combine with the profiling we collected before to get an even more accurate picture of how types behave in the function.

It’s worth looking at an example of this in action. We already showed an idealized case of tier-up in Figure 8, where a function gets compiled by each compiler exactly once and there are no OSR exits or recompilations. We will now show an example where things don’t go so well. This example is picked because it’s a particularly awful outlier. This isn’t how we expect our engine to behave normally. We expect amusingly bad cases like the following to happen occasionally since the success or failure of speculation is random and random behavior means having bad outliers.

_handlePropertyAccessExpression = function (result, node)
{
    result.possibleGetOverloads = node.possibleGetOverloads;
    result.possibleSetOverloads = node.possibleSetOverloads;
    result.possibleAndOverloads = node.possibleAndOverloads;
    result.baseType = Node.visit(node.baseType, this);
    result.callForGet = Node.visit(node.callForGet, this);
    result.resultTypeForGet = Node.visit(node.resultTypeForGet, this);
    result.callForAnd = Node.visit(node.callForAnd, this);
    result.resultTypeForAnd = Node.visit(node.resultTypeForAnd, this);
    result.callForSet = Node.visit(node.callForSet, this);
    result.errorForSet = node.errorForSet;
}

This function belongs to the WSL subtest of JetStream 2. It’s part of the WSL compiler’s AST walk. It ends up being a large function after inlining Node.visit. When I ran this on my computer, I found that JSC did 8 compilations before hitting equilibrium for this function:

  1. After running the function in LLInt for a bit, we compile this with Baseline. This is the easy part since Baseline doesn’t need to be recompiled.
  2. We compile with DFG. Unfortunately, the DFG compilation exits 101 times and gets jettisoned. The exit is due to a bad type check that the DFG emitted on this.
  3. We again compile with the DFG. This time, we exit twice due to a check on result. This isn’t enough times to trigger jettison and it doesn’t prevent tier-up to the FTL.
  4. We compile with the FTL. Unfortunately, this compilation gets jettisoned due to a failing watchpoint. Watchpoints (discussed in greater detail in later sections) are a way for the compiler to ask the runtime to notify it when bad things happen rather than emitting a check. Failing watchpoints cause immediate jettison. This puts us back in Baseline.
  5. We try the DFG again. We exit seven times due to a bad check on result, just like in step 3. This still isn’t enough times to trigger jettison and it doesn’t prevent tier-up to the FTL.
  6. We compile with the FTL. This time we exit 402 times due to a bad type check on node. We jettison and go back to Baseline.
  7. We compile with the DFG again. This time there are no exits.
  8. We compile with the FTL again. There are no further exits or recompilations.

This sequence of events has some intriguing quirks in addition to the number of compilations. Notice how in steps 3 and 5, we encounter exits due to a bad check on result, but none of the FTL compilations encounter those exits. This seems implausible since the FTL will do at least all of the speculations that the DFG did and a speculation that doesn’t cause jettison also cannot pessimise future speculations. It’s also surprising that the speculation that jettisons the FTL in step 6 wasn’t encountered by the DFG. It is possible that the FTL does more speculations than the DFG, but that usually only happens in inlined functions, and this speculation on node doesn’t seem to be in inlined code. A possible explanation for all of these surprising quirks is that the function is undergoing phase changes: during some parts of execution, it sees one set of types, and during another part of execution, it sees a somewhat different set. This is a common issue. Types are not random and they are often a function of time.

JavaScriptCore’s compiler control system is designed to get good outcomes both for functions where speculation “just works” and for functions like the one in this example that need some extra time. To summarize, control is all about counting executions, exits, and recompilations, and either launching a higher tier compiler (“tiering up”) or jettisoning optimized code and returning to Baseline.

Profiling

This section describes the profiling tiers of JavaScriptCore. The profiling tiers have the following responsibilities:

  • To provide a non-speculative execution engine. This is important for start-up (before we do any speculation) and for OSR exits. OSR exit needs to exit to something that does no speculation so that we don’t have chains of exits for the same operation.
  • To record useful profiling. Profiling is useful if it enables us to make profitable speculations. Speculations are profitable if doing them makes programs run faster.

In JavaScriptCore, the LLInt and Baseline are the profiling tiers while DFG and FTL are the optimizing tiers. However, DFG and FTL also collect some profiling, usually only when it’s free to do so and for the purpose of refining profiling collected by the profiling tiers.

This section is organized as follows. First we explain how JavaScriptCore’s profiling tiers execute code. Then we explain the philosophy of how to profile. Finally we go into the details of JavaScriptCore’s profiling implementation.

How Profiled Execution Works

JavaScriptCore profiles using the LLInt and Baseline tiers. LLInt interprets bytecode while Baseline compiles it. The two tiers share a nearly identical ABI so that it’s possible to jump from one to the other at any bytecode instruction boundary.

LLInt: The Low Level Interpreter

The LLInt is an interpreter that obeys JIT ABI (in the style of HotSpot‘s interpreter). To that end, it is written in a portable assembly language called offlineasm. Offlineasm has a functional macro language (you can pass macro closures around) embedded in it. The offlineasm compiler is written in Ruby and can compile to multiple CPUs as well as C++. This section tells the story of why this crazy design produces a good outcome.

The LLInt simultaneously achieves multiple goals for JavaScriptCore:

  • LLInt is JIT-friendly. The LLInt runs on the same stack that the JITs run on (which happens to be the C stack). The LLInt even agrees on register conventions with the JITs. This makes it cheap for LLInt to call JITed functions and vice versa. It makes LLInt→Baseline and Baseline→LLInt OSR trivial and it makes any JIT→LLInt OSR possible.
  • LLInt allows us to execute JavaScript code even if we can’t JIT. JavaScriptCore in no-JIT mode (we call it “mini mode”) has some advantages: it’s harder to exploit and uses less memory. Some JavaScriptCore clients prefer the mini mode. JSC is also used on CPUs that we don’t have JIT support for. LLInt works great on those CPUs.
  • LLInt reduces memory usage. Any machine code you generate from JavaScript is going to be big. Remember, there’s a reason why they call JavaScript “high level” and machine code “low level”: it refers to the fact that when you lower JavaScript to machine code, you’re going to get many instructions for each JavaScript expression. Having the LLInt means that we don’t have to generate machine code for all JavaScript code, which saves us memory.
  • LLInt starts quickly. LLInt interprets our bytecode format directly. It’s designed so that we could map bytecode from disk and point the interpreter at it. The LLInt is essential for achieving great page load time in the browser.
  • LLInt is portable. It can be compiled to C++.

It would have been natural to write the LLInt in C++, since that’s what most of JavaScriptCore is written in. But that would have meant that the interpreter would have a C++ stack frame constructed and controlled by the C++ compiler. This would have introduced two big problems:

  1. It would be unclear how to OSR from the LLInt to the Baseline JIT or vice versa, since OSR would have to know how to decode and reencode a C++ stack frame. We don’t doubt that it’s possible to do this with enough cleverness, but it would create constraints on exactly how OSR works and it’s not an easy piece of machinery to maintain.
  2. JS functions running in the LLInt would have two stack frames instead of one. One of those stack frames would have to go onto the C++ stack (because it’s a C++ stack frame). We have multiple choices of how to manage the JS stack frame (we could try to alloca it on top of the C++ frame, or allocate it somewhere else) but this inevitably increases cost: calls into the interpreter would have to do twice the work. A common optimization to this approach is to have interpreter→interpreter calls reuse the same C++ stack frame by managing a separate JS stack on the side. Then you can have the JITs use that separate JS stack. This still leaves cost when calling out of interpreter to JIT or vice versa.

A natural way to avoid these problems is to write the interpreter in assembly. That’s basically what we did. But a JavaScript interpreter is a complex beast. It would be awful if porting JavaScriptCore to a new CPU meant rewriting the interpreter in another assembly language. Also, we want to use abstraction to write it. If we wrote it in C++, we’d probably have multiple functions, templates, and lambdas, and we would want all of them to be inlined. So we designed a new language, offlineasm, which has the following features:

  • Portable assembly with our own mnemonics and register names that match the way we do portable assembly in our JIT. Some high-level mnemonics require lowering. Offlineasm reserves some scratch registers to use for lowering.
  • The macro construct. It’s best to think of this as a lambda that takes some arguments and returns void. Then think of the portable assembly statements as print statements that output that assembly. So, the macros are executed for effect and that effect is to produce an assembly program. These are the execution semantics of offlineasm at compile time.

Macros allow us to write code with rich abstractions. Consider this example from the LLInt:

macro llintJumpTrueOrFalseOp(name, op, conditionOp)
    llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch)
        get(condition, t1)
        loadConstantOrVariable(size, t1, t0)
        btqnz t0, ~0xf, .slow
        conditionOp(t0, .target)
        dispatch()

    .target:
        jump(target)

    .slow:
        callSlowPath(_llint_slow_path_%name%)
        nextInstruction()
    end)
end

This is a macro that we use for implementing both the jtrue and jfalse opcodes. There are only three lines of actual assembly in this listing: the btqnz (branch test quad not zero) and the two labels (.target and .slow). This also shows the use of first-class macros: on the second line, we call llintOpWithJump and pass it a macro closure as the third argument. The great thing about having a lambda-like construct like macro is that we don’t need much else to have a pleasant programming experience. The LLInt is written in about 5000 lines of offlineasm (if you only count the 64-bit version).

To summarize, LLInt is an interpreter written in offlineasm. LLInt understands JIT ABI so calls and OSR between LLInt and JIT are cheap. The LLInt allows JavaScriptCore to load code more quickly, use less memory, and run on more platforms.

Baseline: The Bytecode Template JIT

The Baseline JIT achieves a speed-up over the LLInt at the cost of some memory and the time it takes to generate machine code. Baseline’s speed-up is thanks to two factors:

  • Removal of interpreter dispatch. Interpreter dispatch is the costliest part of interpretation, since the indirect branches used for selecting the implementation of an opcode are hard for the CPU to predict. This is the primary reason why Baseline is faster than LLInt.
  • Comprehensive support for polymorphic inline caching. It is possible to do sophisticated inline caching in an interpreter, but currently our best inline caching implementation is the one shared by the JITs.

The Baseline JIT compiles bytecode by turning each bytecode instruction into a template of machine code. For example, a bytecode instruction like:

add loc6, arg1, arg2

Is turned into something like:

0x2f8084601a65: mov 0x30(%rbp), %rsi
0x2f8084601a69: mov 0x38(%rbp), %rdx
0x2f8084601a6d: cmp %r14, %rsi
0x2f8084601a70: jb 0x2f8084601af2
0x2f8084601a76: cmp %r14, %rdx
0x2f8084601a79: jb 0x2f8084601af2
0x2f8084601a7f: mov %esi, %eax
0x2f8084601a81: add %edx, %eax
0x2f8084601a83: jo 0x2f8084601af2
0x2f8084601a89: or %r14, %rax
0x2f8084601a8c: mov %rax, -0x38(%rbp)

The only parts of this code that would vary from one add instruction to another are the references to the operands. For example, 0x30(%rbp) (that’s x86 for the memory location at frame pointer plus 0x30) is the machine code representation of arg1 in bytecode.

The Baseline JIT does few optimizations beyond just emitting code templates. It does no register allocation between instruction boundaries, for example. The Baseline JIT does do some local optimizations, such as emitting a specialized template when an operand to a math operation is a constant, or using profiling information collected by the LLInt. Baseline also has good support for code repatching, which is essential for implementing inline caching. We discuss inline caching in detail later in this section.

To summarize, the Baseline JIT is a mostly unoptimized JIT compiler that focuses on removing interpreter dispatch overhead. This is enough to make it a ~2× speed-up over the LLInt.

Profiling Philosophy

Profiling in JSC is designed to be cheap and useful.

JavaScriptCore’s profiling aims to incur little or no cost in the common case. Running with profiling turned on but never using the results to do optimizations should result in throughput that is about as good as if all of the profiling was disabled. We want profiling to be cheap because even in a long running program, lots of functions will run only once or for too short a time for an optimizing JIT to be profitable. Some functions might finish running in less time than it takes to optimize them. The profiling can’t be so expensive that it makes functions like that run slower.

Profiling is meant to help the compiler make the kinds of speculations that cause the program to run faster when we factor in both the speed-ups from speculations that are right and the slow-downs from speculations that are wrong. It’s possible to understand this formally by thinking of speculation as a bet. We say that profiling is useful if it turns the speculation into a value bet. A value bet is one where the expected value (EV) is positive. That’s another way of saying that the average outcome is profitable, so if we repeated the bet an infinite number of times, we’d be richer. Formally the expected value of a bet is:

p * B - (1 - p) * C

Where p is the probability of winning, B is the benefit of winning, and C is the cost of losing (both B and C are positive). A bet is a value bet iff:

p * B - (1 - p) * C > 0

Let’s view speculation using this formula. The scenario in which we have the choice to make a bet or not is that we are compiling a bytecode instruction, we have some profiling that implies that we should speculate, and we have to choose whether to speculate or not. Let’s say that B and C both have to do with the latency, in nanoseconds, of executing a bytecode instruction once. B is the improvement to that latency if we do some speculation and it turns out to be right. C is the regression to that latency if the speculation we make is wrong. Of course, after we have made a speculation, it will run many times and may be right sometimes and wrong sometimes. But B is just about the speed-up in the right cases, and C is just about the slow-down in the wrong cases. The baseline relative to which B and C are measured is the latency of the bytecode instruction if it was compiled with an optimizing JIT but without that particular OSR-exit-based speculation.

For example, we may have a less-than operation, and we are considering whether to speculate that neither input is double. We can of course compile less-than without making that speculation, so that’s the baseline. If we do choose to speculate, then B is the speed-up to the average execution latency of that bytecode in those cases when neither input is double. Meanwhile, C is the slow-down to the average execution latency of that bytecode in those cases when at least one input is a double.

For B, let’s just compute some bounds. The lower bound is zero, since some speculations are not profitable. A pretty good first order upper bound for B is the difference in per-bytecode-instruction latency between the Baseline JIT and the FTL. Usually, the full speed-up of a bytecode instruction from Baseline to FTL is the result of multiple speculations as well as nonspeculative compiler optimizations. So, a single speculation being responsible for the full difference in performance between Baseline and FTL is a fairly conservative upper bound for B. Previously, we said that on average in the JetStream 2 benchmark on my computer, a bytecode instruction takes 1.71 ns to execute in Baseline and .225 ns to execute in FTL. So we can say:

B <= 1.71 ns - .225 ns = 1.48 ns

Now let’s estimate C. C is how many more nanoseconds it takes to execute the bytecode instruction if we have speculated and we experience speculation failure. Failure means executing an OSR exit stub and then reexecuting the same bytecode instruction in baseline or LLInt. Then, all subsequent bytecodes in the function will execute in baseline or LLInt rather than DFG or FTL. Every 100 exits or so, we jettison and eventually recompile. Compiling is concurrent, but running a concurrent compiler is sure to slow down the main thread even if there is no lock contention. To fully capture C, we have to account for the cost of the OSR exit itself and then amortize the cost of reduced execution speed of the remainder of the function and the cost of eventual recompilation. Fortunately, it’s pretty easy to measure this directly by hacking the DFG frontend to randomly insert pointless OSR exits with low probability and by having JSC report a count of the number of exits. I did an experiment with this hack for every JetStream 2 benchmark. Running without the synthetic exits, we get an execution time and a count of the number of exits. Running with synthetic exits, we get a longer execution time and a larger number of exits. The slope between these two points is an estimate of C. This is what I found, on the same machine that I used for running the experiments to compute B:

[DFG] C = 2499 ns
[FTL] C = 9998 ns

Notice how C is way bigger than B! This isn’t some slight difference. We are talking about three orders of magnitude for the DFG and four orders of magnitude for the FTL. This paints a clear picture: speculation is a bet with tiny benefit and enormous cost.

For the DFG, this means that we need:

p > 0.9994

For speculation to be a value bet. p has to be even closer to 1 for FTL. Based on this, our philosophy for speculation is we won’t do it unless we think that:

p ~ 1

Since the cost of speculation failure is so enormous, we only want to speculate when we know that we won’t fail. The speed-up of speculation happens because we make lots of sure bets and only a tiny fraction of them ever fail.

It’s pretty clear what this means for profiling:

  • Profiling needs to focus on noting counterexamples to whatever speculations we want to do. We don’t want to speculate if profiling tells us that the counterexample ever happened, since if it ever happened, then the EV of this speculation is probably negative. This means that we are not interested in collecting probability distributions. We just want to know if the bad thing ever happened.
  • Profiling needs to run for a long time. It’s common to wish for JIT compilers to compile hot functions sooner. One reason why we don’t is that we need about 3-4 “nines” of confidence that the counterexamples didn’t happen. Recall that our threshold for tiering up into the DFG is about 1000 executions. That’s probably not a coincidence.

Finally, since profiling is a bet, it’s important to approach it with a healthy gambler’s philosophy: the fact that a speculation succeeded or failed in a particular program does not tell us if the speculation is good or bad. Speculations are good or bad only based on their average behavior. Focusing too much on whether profiling does a good job for a particular program may result in approaches that cause it to perform badly on average.

Profiling Sources in JavaScriptCore

JavaScriptCore gathers profiling from multiple different sources. These profiling sources use different designs. Sometimes, a profiling source is a unique source of data, but other times, profiling sources are able to provide some redundant data. We only speculate when all profiling sources concur that the speculation would always succeed. The following sections describe our profiling sources in detail.

Case Flags

Case flags are used for branch speculation. This applies anytime the best way to implement a JS operation involves branches and multiple paths, like a math operation having to handle either integers or doubles. The easiest way to profile and speculate is to have the profiling tiers implement both sides of the branch and set a different flag on each side. That way, the optimizing tier knows that it can profitably speculate that only one path is needed if the flags for the other paths are not set. In cases where there is clearly a preferred speculation — for example, speculating that an integer add did not overflow is clearly preferred over speculating that it did overflow — we only need flags on the paths that we don’t like (like the overflow path).

Let’s consider two examples of case flags in more detail: integer overflow and property accesses on non-object values.

Say that we are compiling an add operation that is known to take integers as inputs. Usually the way that the LLInt interpreter or Baseline compiler would “know” this is that the add operation we’ll talk about is actually part of a larger add implementation, after we’ve already checked that the inputs are integers. Here’s the logic that the profiling tier would use, written as if it were C++ code to make it easy to parse:

int32_t left = ...;
int32_t right = ...;
ArithProfile* profile = ...; // This is the thing with the case flags.
int32_t intResult;
JSValue result; // This is a tagged JavaScript value that carries type.
if (UNLIKELY(addOverflowed(left, right, &intResult))) {
    result = jsNumber(static_cast<double>(left) +
                      static_cast<double>(right));

    // Set the case flag indicating that overflow happened.
    profile->setObservedInt32Overflow();
} else
    result = jsNumber(intResult);

When optimizing the code, we will inspect the ArithProfile object for this instruction. If !profile->didObserveInt32Overflow(), we will emit something like:

int32_t left = ...;
int32_t right = ...;
int32_t result;
speculate(!addOverflowed(left, right, &result));

I.e. we will add and branch to an exit on overflow. Otherwise we will just emit the double path:

double left = ...;
double right = ...;
double result = left + right;

Unconditionally doing double math is not that expensive; in fact on benchmarks that I’ve tried, it’s cheaper than doing integer math and checking overflow. The only reason why integers are profitable is that they are cheaper to use for bit operations and pointer arithmetic. Since CPUs don’t accept floats or doubles for bit and pointer math, we need to convert the double to an integer first if the JavaScript program uses it that way (pointer math arises when a number is used as an array index). Such conversions are relatively expensive even on CPUs that support them natively. Usually it’s hard to tell, using profiling or any static analysis, whether a number that a program computed will be used for bit or pointer math in the future. Therefore, it’s better to use integer math with overflow checks so that if the number ever flows into an operation that requires integers, we won’t have to pay for expensive conversions. But if we learn that any such operation overflows — even occasionally — we’ve found that it’s more efficient overall to unconditionally switch to double math. Perhaps the presence of overflows is strongly correlated with the result of those operations not being fed into bit math or pointer math.

A simpler example is how case flags are used in property accesses. As we will discuss in the inline caches section, property accesses have associated metadata that we use to track details about their behavior. That metadata also has flags, like the sawNonCell bit, which we set to true if the property access ever sees a non-object as the base. If the flag is set, the optimizing compilers know not to speculate that the property access will see objects. This typically forces all kinds of conservatism for that property access, but that’s better than speculating wrong and exiting in this case. Lots of case flags look like sawNonCell: they are casually added as a bit in some existing data structure to help the optimizing compiler know which paths were taken.

To summarize, case flags are used to record counterexamples to the speculations that we want to do. They are a natural way to implement profiling in those cases where the profiling tiers would have had to branch anyway.

Case Counts

A predecessor to case flags in JavaScriptCore is case counts. It’s the same idea as flags, but instead of just setting a bit to indicate that a bad thing happened, we would count. If the count never got above some threshold, we would speculate.

Case counts were written before we realized that the EV of speculation is awful unless the probability of success is basically 1. We thought that we could speculate in cases where we knew we’d be right a majority of the time, for example. Initial versions of case counts had variable thresholds — we would compute a ratio with the execution count to get a case rate. That didn’t work as well as fixed thresholds, so we switched to a fixed count threshold of 100. Over time, we lowered the threshold to 20 or 10, and then eventually found that the threshold should really be 1, at which point we switched to case flags.

Some functionality still uses case counts. We still have case counts for determining if the this argument is exotic (some values of this require the function to perform a possibly-effectful conversion in the prologue). We still have case counts as a backup for math operations overflowing, though that is almost certainly redundant with our case flags for math overflow. It’s likely that we will remove case counts from JavaScriptCore eventually.

Value Profiling

Value profiling is all about inferring the types of JavaScript values (JSValues). Since JS is a dynamic language, JSValues have a runtime type. We use a 64-bit JSValue representation that uses bit encoding tricks to hold either doubles, integers, booleans, null, undefined, or pointers to cells, which may be JavaScript objects, symbols, or strings. We refer to the act of encoding a value in a JSValue as boxing it and the act of decoding as unboxing (note that boxing is a term used in other engines to refer specifically to the act of allocating a box object in the heap to hold a value; our use of the term boxing is more like what others call tagging). In order to effectively optimize JavaScript, we need to have some way of inferring the type so that the compiler can assume things about it statically. Value profiling tracks the set of values that a particular program point saw so that we can predict what types that program point will see in the future.

Figure 12. Value profiling and prediction propagation for a sample data flow graph.

We combine value profiling with a static analysis called prediction propagation. The key insight is that prediction propagation can infer good guesses for the types of most operations if it is given a starting point for certain opaque operations:

  • Arguments incoming to the function.
  • Results of most load operations.
  • Results of most calls.

There’s no way that a static analysis running just on some function could guess what types loads from plain JavaScript arrays or calls to plain JavaScript functions could have. Value profiling is about trying to help the static analysis guess the types of those opaque operations. Figure 12 shows how this plays out for a sample data flow graph. There’s no way static analysis can tell the type of most GetByVal and GetById operations, since those are loads from dynamically typed locations in the heap. But if we do know what those operations return, then we can infer types for the entire graph by using simple type rules for Add (for example, if it takes integers as inputs and the case flags tell us there was no overflow, then it produces integers).

Let’s break down value profiling into the details of how exactly values are profiled, how prediction propagation works, and how the results of prediction propagation are used.

Recording value profiles. At its core, value profiling is all about having some program point (either a point in the interpreter or something emitted by the Baseline JIT) log the value that it saw. We log values into a single bucket so that each time the profiling point runs, it overwrites the last seen value. The code looks like this in the LLInt:

macro valueProfile(op, metadata, value)
    storeq value, %op%::Metadata::profile.m_buckets[metadata]
end

Let’s look at how value profiling works for the get_by_val bytecode instruction. Here’s part of the code for get_by_val in LLInt:

    op_get_by_val, OpGetByVal,
    macro (size, get, dispatch, metadata, return)
        macro finishGetByVal(result, scratch)
            get(dst, scratch)
            storeq result, [cfr, scratch, 8]
            valueProfile(OpGetByVal, t5, result)
            dispatch()
        end

        ... // more code for get_by_val

The implementation of get_by_val includes a finishGetByVal helper macro that stores the result in the right place on the stack and then dispatches to the next instruction. Note that it also calls valueProfile to log the result just before finishing.

Each ValueProfile object has a pair of buckets and a predicted type. One bucket is for normal execution. The valueProfile macro in the LLInt uses this bucket. The other bucket is for OSR exit: if we exit due to a speculation on a type that we got from value profiling, we feed the value that caused OSR exit back into the second bucket of the ValueProfile.

Each time that our execution counters (used for controlling when to invoke the next tier) count about 1000 points, the execution counting slow path updates all predicted types for the value profiles in that function. Updating value profiles means computing a predicted type for the value in the bucket and merging that type with the previously predicted type. Therefore, after repeated predicted type updates, the type will be broad enough to be valid for multiple different values that the code saw.
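
The mechanics can be sketched in a few lines (a Python sketch with hypothetical type names, not JSC’s actual C++):

```python
def speculated_type(value):
    """Map a value to a crude type name (a stand-in for SpeculatedType)."""
    if isinstance(value, bool):
        return {"Boolean"}
    if isinstance(value, int):
        return {"Int32"}
    if isinstance(value, float):
        return {"Double"}
    return {"Other"}

class ValueProfile:
    def __init__(self):
        self.bucket = None           # last value seen; overwritten each time
        self.predicted_type = set()  # union of everything ever merged in

    def record(self, value):
        # Profiling is cheap: just overwrite the single bucket.
        self.bucket = value

    def update_prediction(self):
        # Run on the execution-counting slow path: merge the bucket's type
        # into the prediction, so the prediction only ever broadens.
        if self.bucket is not None:
            self.predicted_type |= speculated_type(self.bucket)

profile = ValueProfile()
for v in [1, 2, 3]:
    profile.record(v)
profile.update_prediction()   # prediction is now {"Int32"}
profile.record(1.5)
profile.update_prediction()   # broadens to {"Int32", "Double"}
```

Because only the last value is kept, a profiling point stays cheap no matter how hot it is; repeated prediction updates are what make the type broad enough to cover everything the code saw.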

Predicted types use the SpeculatedType type system. A SpeculatedType is a 64-bit integer in which we use the low 40 bits to represent a set of 40 fundamental types. The fundamental types, shown in Figure 13, represent non-overlapping sets of possible JSValues. 2^40 SpeculatedTypes are possible by setting any combination of bits.

Figure 13. All of the fundamental SpeculatedTypes.

This allows us to invent whatever types are useful for optimization. For example, we distinguish between 32-bit integers whose value is either 0 or 1 (BoolInt32) versus those whose value is anything else (NonBoolInt32). Together these form the Int32Only type, which just has both bits set. BoolInt32 is useful for cases where integers are converted to booleans.
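
As a sketch of the bitset arithmetic (Python, with made-up bit positions; the real engine defines 40 such bits in C++):

```python
# Hypothetical bit positions; JSC's SpeculatedType reserves 40 such bits.
SpecBoolInt32    = 1 << 0   # int32 whose value is 0 or 1
SpecNonBoolInt32 = 1 << 1   # any other int32
SpecInt32Only    = SpecBoolInt32 | SpecNonBoolInt32   # both bits set

def is_subtype(t, of):
    # t is within `of` if t sets no bits outside `of`.
    return (t & ~of) == 0

def merge(a, b):
    # Merging two predictions is just bitwise or.
    return a | b
```

Subtype checks and prediction merges are single integer operations, which is why this representation is so convenient inside a compiler.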

Prediction propagation. We use value profiling to fill in the blanks for the prediction propagation pass of the DFG compiler pipeline. Prediction propagation is an abstract interpreter that tracks the set of types that each variable in the program can have. It’s unsound, since the types it produces are just predictions (it can produce any combination of types and at worst we will just OSR exit too much). However, we try to make it as sound as we can: the more sound it is, the fewer OSR exits we have. Prediction propagation fills in the things that the abstract interpreter can’t reason about (loads from the heap, results returned by calls, arguments to the function, etc.) using the results of value profiling. On the topic of soundness, we would consider it a bug if prediction propagation was unsound in a world where value profiling is never wrong. Of course, in reality, we know that value profiling will be wrong, so we know that prediction propagation is unsound.

Let’s consider some of the cases where prediction propagation can be sure about the result type of an operation based on the types of its inputs.

Figure 14. Some of the prediction propagation rules for Add. This figure doesn’t show the rules for string concatenation and objects.

Figure 15. Some of the prediction propagation rules for GetByVal (the DFG opcode for subscript access like array[index]). This figure only shows a small sample of the GetByVal rules.

Figure 14 shows some of the rules for the Add operation in DFG IR. Prediction propagation and case flags tell us everything we want to know about the output of Add. If the inputs are integers and the overflow flag isn’t set, the output is an integer. If the inputs are any other kinds of numbers or there are overflows, the output is a double. We don’t need anything else (like value profiling) to understand the output type of Add.
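
The Add rule can be sketched like this (a Python sketch using type-name strings as stand-ins for SpeculatedTypes):

```python
def predict_add(left, right, overflow_flag_set):
    """Predict the result type of Add from input predictions and case flags."""
    numeric = ("Int32", "Double")
    if left == "Int32" and right == "Int32" and not overflow_flag_set:
        return "Int32"   # integer inputs and no recorded overflow
    if left in numeric and right in numeric:
        return "Double"  # other kinds of numbers, or overflow happened
    return "Any"         # strings, objects, etc. need other rules
```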

Figure 15 shows some of the rules for GetByVal, which is the DFG representation of array[index]. In this case, there are types of arrays that could hold any type of value. So, even knowing that it is a JSArray isn’t enough to know the types of values inside the array. Also, if the index is a string, then this could be accessing some named property on the array object or one of its prototypes and those could have any type. It’s in cases like GetByVal that we leverage value profiling to guess what the result type is.

Prediction propagation combined with value profiling allows the DFG to infer a predicted type at every point in the program where a variable is used. This allows operations that don’t do any profiling on their own to still perform type-based speculations. It’s of course possible to also have bytecode instructions that can speculate on type collect case flags (or use some other mechanism) to drive those speculations — and that approach can be more precise — but value profiling means that we don’t have to do this for every operation that wants type-based speculation.

Using predicted types. Consider the CompareEq operation in DFG IR, which is used for the DFG lowering of the eq, eq_null, neq, neq_null, jeq, jeq_null, jneq, and jneq_null bytecodes. These bytecodes do no profiling of their own. But CompareEq is one of the most aggressive type speculators in all of the DFG. CompareEq can speculate on the types it sees without doing any profiling of its own because the values it uses will either have value profiling or will have a predicted type filled in by prediction propagation.

Type speculations in the DFG are written like:

CompareEq(Int32:@left, Int32:@right)

This example means that the CompareEq will speculate that both operands are Int32. CompareEq supports the following speculations, plus others we don’t list here:

CompareEq(Boolean:@left, Boolean:@right)
CompareEq(Int32:@left, Int32:@right)
CompareEq(Int32:BooleanToNumber(Boolean:@left), Int32:@right)
CompareEq(Int32:BooleanToNumber(Untyped:@left), Int32:@right)
CompareEq(Int32:@left, Int32:BooleanToNumber(Boolean:@right))
CompareEq(Int32:@left, Int32:BooleanToNumber(Untyped:@right))
CompareEq(Int52Rep:@left, Int52Rep:@right)
CompareEq(DoubleRep:DoubleRep(Int52:@left), DoubleRep:DoubleRep(Int52:@right))
CompareEq(DoubleRep:DoubleRep(Int52:@left), DoubleRep:DoubleRep(RealNumber:@right))
CompareEq(DoubleRep:DoubleRep(Int52:@left), DoubleRep:DoubleRep(Number:@right))
CompareEq(DoubleRep:DoubleRep(Int52:@left), DoubleRep:DoubleRep(NotCell:@right))
CompareEq(DoubleRep:DoubleRep(RealNumber:@left), DoubleRep:DoubleRep(RealNumber:@right))
CompareEq(DoubleRep:..., DoubleRep:...)
CompareEq(StringIdent:@left, StringIdent:@right)
CompareEq(String:@left, String:@right)
CompareEq(Symbol:@left, Symbol:@right)
CompareEq(Object:@left, Object:@right)
CompareEq(Other:@left, Untyped:@right)
CompareEq(Untyped:@left, Other:@right)
CompareEq(Object:@left, ObjectOrOther:@right)
CompareEq(ObjectOrOther:@left, Object:@right)
CompareEq(Untyped:@left, Untyped:@right)

Some of these speculations, like CompareEq(Int32:, Int32:) or CompareEq(Object:, Object:), allow the compiler to just emit an integer compare instruction. Others, like CompareEq(String:, String:), emit a string compare loop. We have lots of variants to optimally handle bizarre comparisons that are not only possible in JS but that we have seen happen frequently in the wild, like comparisons between numbers and booleans and comparisons between one value that is always a number and another that is either a number or a boolean. We provide additional optimizations for comparisons between doubles, comparisons between strings that have been hash-consed (so-called StringIdent, which can be compared using comparison of the string pointer), and comparisons where we don’t know how to speculate (CompareEq(Untyped:, Untyped:)).
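
The way a speculation gets chosen can be sketched as follows (a hypothetical Python reduction to four of the cases above; the real DFG considers many more variants and picks among them more carefully):

```python
def choose_compare_eq(left_pred, right_pred):
    # Each prediction is a set of possible type names. Pick the cheapest
    # speculation that covers both operands.
    if left_pred <= {"Int32"} and right_pred <= {"Int32"}:
        return "Int32:Int32"     # a single integer compare instruction
    if left_pred <= {"Object"} and right_pred <= {"Object"}:
        return "Object:Object"   # pointer compare
    if left_pred <= {"String"} and right_pred <= {"String"}:
        return "String:String"   # string compare loop
    return "Untyped:Untyped"     # fully dynamic fallback
```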

The basic idea of value profiling — storing a last-seen value into a bucket and then using that to bootstrap a static analysis — is something that we also use for profiling the behavior of array accesses. Array profiles and array allocation profiles are like value profiles in that they save the last result in a bucket. Like value profiling, data from those profiles is incorporated into prediction propagation.

To summarize, value profiling allows us to predict the types of variables at all of their use sites by just collecting profiling at those bytecode instructions whose output cannot be predicted with abstract interpretation. This serves as the foundation for how the DFG (and FTL, since it reuses the DFG’s frontend) speculates on the types of JSValues.

Inline Caches

Property accesses and function calls are particularly difficult parts of JavaScript to optimize:

  • Objects behave as if they were just ordered mappings from strings to JSValues. Lookup, insertion, deletion, replacement, and iteration are possible. Programs do these operations a lot, so they have to be fast. In some cases, programs use objects the same way that programs in other languages would use hashtables. In other cases, programs use objects the same way that they would in Java or some sensibly-typed object-oriented language. Most programs do both.
  • Function calls are polymorphic. You can’t make static promises about what function will be called.

Both of these dynamic features are amenable to optimization with Deutsch and Schiffman’s inline caches (ICs). For dynamic property access, we combine this with structures, based on the idea of maps in Chambers, Ungar, and Lee’s Self implementation. We also follow Hölzle, Chambers, and Ungar: our inline caches are polymorphic and we use data from these caches as profiling of the types observed at a particular property access or call site.

It’s worth dwelling a bit on the power of inline caching. Inline caches are great optimizations separately from speculative compilation. They make the LLInt and Baseline run faster. Inline caches are our most powerful profiling source, since they can precisely collect information about every type encountered by an access or call. Note that we previously said that good profiling has to be cheap. We think of inline caches as negative cost profiling since inline caches make the LLInt and Baseline faster. It doesn’t get cheaper than that!

This section focuses on inline caching for dynamic property access, since it’s strictly more complex than for calls (accesses use structures, polymorphic inline caches (PICs), and speculative compilation; calls only use polymorphic inline caches and speculative compilation). We organize our discussion of inline caching for dynamic property access as follows. First we describe how structures work. Then we show the JavaScriptCore object model and how it incorporates structures. Next we show how inline caches work. Then we show how profiling from inline caches is used by the optimizing compilers. After that we show how inline caches support polymorphism and polyvariance. Finally we talk about how inline caches are integrated with the garbage collector.

Structures. Objects in JavaScript are just mappings from strings to JSValues. Lookup, insertion, deletion, replacement, and iteration are all possible. We want to optimize those uses of objects that would have had a type if the language had given the programmer a way to say it.

Figure 16. Some JavaScript objects that have x and y properties. Some of them have exactly the same shape (only x and y in the same order).

Consider how to implement a property access like:

var tmp = o.x;


o.x = tmp;

One way to make this fast is to use hashtables. That’s certainly a necessary fallback mechanism when the JavaScript program uses objects more like hashtables than like objects (i.e. it frequently inserts and deletes properties). But we can do better.

This problem frequently arises in dynamic programming languages and it has a well-understood solution. The key insight of Chambers, Ungar, and Lee’s Self implementation is that property access sites in the program will typically only see objects of the same shape. Consider the objects in Figure 16 that have x and y properties. Of course it’s possible to insert x and y in two possible orders, but folks will tend to pick some order and stick to it (like x first). And of course it’s possible to also have objects that have a z property, but it’s less likely that a property access written for the part of the program that works with {x, y} objects will be reused for the part that uses {x, y, z}. It’s possible to have shared code for many different kinds of objects but unshared code is more common. Therefore, we split the object representation into two parts:

  • The object itself, which only contains the property values and a structure pointer.
  • The structure, which is a hashtable that maps property names (strings) to indices in the objects that have that structure.

Figure 17. The same objects as in Figure 16, but using structures.

Figure 17 shows objects represented using structures. Objects only contain object property values and a pointer to a structure. The structure tells the property names and their order. For example, if we wanted to ask the {1, 2} object in Figure 17 for the value of property x, we would load the pointer to its structure, {x, y}, and ask that structure for the index of x. The index is 0, and the value at index 0 in the {1, 2} object is 1.

A key feature of structures is that they are hash consed. If two objects have the same properties in the same order, they are likely to have the same structure. This means that checking if an object has a certain structure is O(1): just load the structure pointer from the object header and compare the pointer to a known value.

Structures can also indicate that objects are in dictionary or uncacheable dictionary mode, which are basically two levels of hashtable badness. In both cases, the structure stops being hash consed and is instead paired 1:1 with its object. Dictionary objects can have new properties added to them without the structure changing (the property is added to the structure in-place). Uncacheable dictionary objects can have properties deleted from them without the structure changing. We won’t go into these modes in too much detail in this post.

To summarize, structures are hashtables that map property names to indices in the object. Object property lookup uses the object’s structure to find the index of the property. Structures are hash consed to allow for fast structure checks.
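
The whole scheme fits in a short sketch (Python with hypothetical class names; the real implementation is C++ and also handles transitions, dictionary modes, and much more):

```python
class Structure:
    """Maps property names to slot indices for one object shape."""
    def __init__(self, names):
        self.names = tuple(names)
        self.index_of = {name: i for i, name in enumerate(self.names)}

# Hash consing: one Structure per distinct property order.
structure_table = {}

def structure_for(names):
    names = tuple(names)
    if names not in structure_table:
        structure_table[names] = Structure(names)
    return structure_table[names]

class JSObject:
    def __init__(self, **props):
        self.structure = structure_for(props)   # shared shape
        self.slots = list(props.values())       # just the property values

    def get(self, name):
        return self.slots[self.structure.index_of[name]]

a = JSObject(x=1, y=2)
b = JSObject(x=3, y=4)   # same order => identical structure pointer
c = JSObject(y=5, x=6)   # different insertion order => different structure
```

A structure check is then just `obj.structure is expected` — a single pointer comparison, which is what makes inline caching cheap.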

Figure 18. The JavaScriptCore object model.

JavaScriptCore object model. JavaScriptCore uses objects with a 64-bit header that includes a 32-bit structure ID and 32 bits worth of extra state for GC, type checks, and arrays. Figure 18 shows the object model. Named object properties may end up either in the inline slots or the out-of-line slots. Objects get some number of inline slots based on simple static analysis around the allocation site. If a property is added that doesn’t fit in the inline slots, we allocate a butterfly to store additional properties out-of-line. Accessing out-of-line properties in the butterfly costs one extra load.

Figure 19 shows an example object that only has two inline properties. This is the kind of object you would get if you used the object literal {f:5, g:6} or if you assigned to the f and g properties reasonably close to the allocation.

Figure 19. Example JavaScriptCore object together with its structure.

Simple inline caches. Let’s consider the code:

var v = o.f;

Let’s assume that all of the objects that flow into this have structure 42 like the object in Figure 19. Inline caching this property access is all about emitting code like the following:

if (o->structureID == 42)
    v = o->inlineStorage[0]
else
    v = slowGet(o, "f")

But how do we know that o will have structure 42? JavaScript does not give us this information statically. Inline caches get this information by filling it in once the code runs. There are a number of techniques for this, all of which come down to self-modifying code. Let’s look at how the LLInt and Baseline do it.

In the LLInt, the metadata for get_by_id contains a cached structure ID and a cached offset. The cached structure ID is initialized to an absurd value that no structure can have. The fast path of get_by_id loads the property at the cached offset if the object has the cached structure. Otherwise, we take a slow path that does the full lookup. If that full lookup is cacheable, it stores the structure ID and offset in the metadata.
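
In a Python-flavored sketch (hypothetical field names; note that the metadata belongs to one particular get_by_id instruction, so the property name is fixed per cache):

```python
class GetByIdMetadata:
    def __init__(self):
        self.cached_structure = None   # the "absurd" initial value
        self.cached_offset = None

def get_by_id(obj, name, metadata):
    # Fast path: the object has the cached structure, so the property
    # must be at the cached offset.
    if obj["structure"] is metadata.cached_structure:
        return obj["slots"][metadata.cached_offset]
    # Slow path: full lookup, then fill the cache if cacheable.
    offset = obj["structure"]["indices"][name]
    metadata.cached_structure = obj["structure"]
    metadata.cached_offset = offset
    return obj["slots"][offset]

shape = {"indices": {"f": 0, "g": 1}}        # one shared structure
o1 = {"structure": shape, "slots": [5, 6]}
o2 = {"structure": shape, "slots": [7, 8]}

meta = GetByIdMetadata()
get_by_id(o1, "f", meta)   # slow path; caches (shape, offset 0)
get_by_id(o2, "f", meta)   # fast path: same structure, cached offset
```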

The Baseline JIT does something more sophisticated. When emitting a get_by_id, it reserves a slab of machine code space that the inline caches will later fill in with real code. The only code in this slab initially is an unconditional jump to a slow path. The slow path does the fully dynamic lookup. If that is deemed cacheable, the reserved slab is replaced with code that does the right structure check and loads at the right offset. Here’s an example of a get_by_id initially compiled with Baseline:

0x46f8c30b9b0: mov 0x30(%rbp), %rax
0x46f8c30b9b4: test %rax, %r15
0x46f8c30b9b7: jnz 0x46f8c30ba2c
0x46f8c30b9bd: jmp 0x46f8c30ba2c
0x46f8c30b9c2: o16 nop %cs:0x200(%rax,%rax)
0x46f8c30b9d1: nop (%rax)
0x46f8c30b9d4: mov %rax, -0x38(%rbp)

The first thing that this code does is check that o (stored in %rax) is really an object (using a test and jnz). Then notice the unconditional jmp followed by two long nop instructions. This jump goes to the same slow path that we would have branched to if o was not an object. After the slow path runs, this is repatched to:

0x46f8c30b9b0: mov 0x30(%rbp), %rax
0x46f8c30b9b4: test %rax, %r15
0x46f8c30b9b7: jnz 0x46f8c30ba2c
0x46f8c30b9bd: cmp $0x125, (%rax)
0x46f8c30b9c3: jnz 0x46f8c30ba2c
0x46f8c30b9c9: mov 0x18(%rax), %rax
0x46f8c30b9cd: nop 0x200(%rax)
0x46f8c30b9d4: mov %rax, -0x38(%rbp)

Now, the is-object check is followed by a structure check (using cmp to check that the structure is 0x125) and a load at offset 0x18.

Inline caches as a profiling source. The metadata we use to maintain inline caches makes for a fantastic profiling source. Let’s look closely at what this means.

Figure 20. Timeline of using an inline cache at each JIT tier. Note that we end up having to generate code for this get_by_id six times in the best case that each tier compiles this only once.

Figure 20 shows a naive use of inline caches in a multi-tier engine, where the DFG JIT forgets everything that we learned from the Baseline inline cache and just compiles a blank inline cache. This is reasonably efficient and we fall back on this approach when the inline caches from the LLInt and Baseline tell us that there is unmanageable polymorphism. Before we go into how polymorphism is profiled, let’s look at how a speculative compiler really wants to handle simple monomorphic inline caches like the one in Figure 20, where we only see one structure (S1) and the code that the IC emits is trivial (load at offset 10 from %rax).

When the DFG frontend (shared by DFG and FTL) sees an operation like get_by_id that can be implemented with ICs, it reads the state of all ICs generated for that get_by_id. By “all ICs” we mean all ICs that are currently in the heap. This usually just means reading the LLInt and Baseline ICs, but if there exists a DFG or FTL function that generated an IC for this get_by_id then we will also read that IC. This can happen if a function gets compiled multiple times due to inlining — we may be compiling function bar that inlines a call to function foo and foo already got compiled with FTL and the FTL emitted an IC for our get_by_id.

If all ICs for a get_by_id concur that the operation is monomorphic and they tell us the structure to use, then the DFG frontend converts the get_by_id into inline code that does not get repatched. This is shown in Figure 21. Note that this simple get_by_id is lowered to two DFG operations: CheckStructure, which OSR exits if the given object does not have the required structure, and GetByOffset, which is just a load with known offset and field name.

Figure 21. Inlining a simple monomorphic inline cache in DFG and FTL.

CheckStructure and GetByOffset are understood precisely in DFG IR:

  • CheckStructure is a load to get the structure ID of an object and a branch to compare that structure ID to a constant. The compiler knows what structures are. After a CheckStructure, the compiler knows that it’s safe to execute loads to any of the properties that the structure says that the object has.
  • GetByOffset is a load from either an inline or out-of-line property of a JavaScript object. The compiler knows what kind of property is being loaded, what its offset is, and what the name of the property would have been.

The DFG knows all about how to model these operations and the dependency between them:

  • The DFG knows that neither operation causes a side effect, but that the CheckStructure represents a conditional side exit, and both operations read the heap.
  • The DFG knows that two CheckStructures on the same structure are redundant unless some operation between them could have changed object structure. The DFG knows a lot about how to optimize away redundant structure checks, even in cases where there is a function call between two of them (more on this later).
  • The DFG knows that two GetByOffsets that speak of the same property and object are loading from the same memory location. The DFG knows how to do alias analysis on those properties, so it can precisely know when a GetByOffset’s memory location got clobbered.
  • The DFG knows that if it wants to hoist a GetByOffset then it has to ensure that the corresponding CheckStructure gets hoisted first. It does this using abstract interpretation, so there is no need to have a dependency edge between these operations.
  • The DFG knows how to generate either machine code (in the DFG tier) or B3 IR (in the FTL tier) for CheckStructure and GetByOffset. In B3, CheckStructure becomes a Load, NotEqual, and Check, while GetByOffset usually just becomes a Load.

Figure 22. Inlining two monomorphic inline caches, for different properties on the same object, in DFG and FTL. The DFG and FTL are able to eliminate the CheckStructure for the second IC.

The biggest upshot of lowering ICs to CheckStructure and GetByOffset is the redundancy elimination. The most common redundancy we eliminate is multiple CheckStructures. Lots of code will do multiple loads from the same object, like:

var f = o.f;
var g = o.g;

With ICs, we would check the structure twice. Figure 22 shows what happens when the speculative compilers inline these ICs. We are left with just a single CheckStructure instead of two thanks to the fact that:

  • CheckStructure is an OSR speculation.
  • CheckStructure is not an IC. The compiler knows exactly what it does, so that it can model it, so that it can eliminate it.
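
The elimination itself can be sketched as a simple forward pass (Python; a stand-in for the DFG’s real redundancy elimination, which also reasons about control flow and effects much more precisely):

```python
def eliminate_redundant_checks(ops):
    known = {}   # object -> structure proven by a dominating check
    out = []
    for op in ops:
        kind = op[0]
        if kind == "CheckStructure":
            _, obj, structure = op
            if known.get(obj) == structure:
                continue              # redundant: drop it
            known[obj] = structure
            out.append(op)
        elif kind == "GetByOffset":
            out.append(op)            # reads the heap, changes no structure
        else:
            known.clear()             # conservatively: may change structures
            out.append(op)
    return out

ops = [("CheckStructure", "o", "S1"),
       ("GetByOffset",    "o", 0),
       ("CheckStructure", "o", "S1"),  # redundant: dropped
       ("GetByOffset",    "o", 1)]
```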

Let’s pause to appreciate what this technique gives us so far. We started out with a language in which property accesses seem to need hashtable lookups. A o.f operation requires calling some procedure that is doing hashing and so forth. But by combining inline caches, structures, and speculative compilation we have landed on something where some o.f operations are nothing more than load-at-offset like they would have been in C++ or Java. But this assumes that the o.f operation was monomorphic. The rest of this section considers minimorphism, polymorphism, and polyvariance.

Minimorphism. Certain kinds of polymorphic accesses are easier to handle than others. Sometimes an access will see two or more structures but all of those structures have the property at the same offset. Other times an access will see multiple structures and those structures do not agree on the offset of the property. We say that an access is minimorphic if it sees more than one structure and all structures agree on the offset of the property.

Our inline caches handle all forms of polymorphism by generating a stub that switches on the structure. But in the DFG, minimorphic accesses are special because they still qualify for full inlining. Consider an access o.f that sees structures S1 and S2, and both agree that f is at offset 0. Then we would have:

CheckStructure(@o, S1, S2)
GetByOffset(@o, 0)

This minimorphic CheckStructure will OSR exit if @o has none of the listed structures. Our optimizations for CheckStructure generally work for both monomorphic and minimorphic variants. So, minimorphism usually doesn’t hurt performance much compared to monomorphism.

Polymorphism. But what if some access sees different structures, and those structures have the property at different offsets? Consider an access to o.f that sees structures S1 = {f, g}, S2 = {f, g, h}, and S3 = {g, f}. This would be a minimorphic access if it saw just S1 and S2, but S3 has f at a different offset. In this case, the FTL will convert this to:

MultiGetByOffset(@o, [S1, S2] => 0, [S3] => 1)

in DFG IR and then lower it to something like:

if (o->structureID == S1 || o->structureID == S2)
    result = o->inlineStorage[0]
else
    result = o->inlineStorage[1]

in B3 IR. In fact, we would use B3’s Switch since that’s the canonical form for this code pattern in B3.

Note that we only do this optimization in the FTL. The reason is that we want polymorphic accesses to remain ICs in the DFG so that we can use them to collect refined profiling.

Figure 23. Polyvariant inlining of an inline cache. The FTL can inline the inline cache in foo-inlined-into-bar after DFG compiles bar and uses an IC to collect polyvariant profiling about the get_by_id.

Polyvariance. Polyvariance is when an analysis is able to reason about a function differently depending on where it is called from. We achieve this by inlining in the DFG tier and keeping polymorphic ICs as ICs. Consider the following example. Function foo has an access to o.f that is polymorphic and sees structures S1 = {f, g}, S2 = {f, g, h}, and S3 = {g, f}:

function foo(o)
{
    // o can have structure S1, S2, or S3.
    return o.f;
}

This function is small, so it will be inlined anytime our profiling tells us that we are calling it (or may be calling it, since call inlining supports inlining polymorphic calls). Say that we have another function bar that always passes objects with structure S1 = {f, g} to foo:

function bar(p)
{
    // p.g always happens to have structure S1.
    return foo(p.g);
}

Figure 23 shows what happens. When the DFG compiles bar (step 3), it will inline foo based on the profiling of its call opcode (in step 2). But it will leave foo’s get_by_id as an IC because foo’s Baseline version told us that it’s polymorphic (also step 2). But then, since the DFG’s IC for foo’s get_by_id is in the context of that call from bar, it only ever sees S1 (step 4). So, when the FTL compiles bar and inlines foo, it knows that this get_by_id can be inlined with a monomorphic structure check for just S1 (step 5).

Inline caches also support more exotic forms of property access, like loading from objects in the prototype chain, calling accessors, adding/replacing properties, and even deleting properties.

Inline caches, structures, and garbage collection. Inline caching results in objects that are allocated and referenced only for inline caching. Structures are the most notorious example of these kinds of objects. Structures are particularly problematic because they need strong references to both the object’s prototype and its global object. In some cases, a structure will only be reachable from some inline cache, that inline cache will never run again (but we can’t prove it), and there is a large global object only referenced by that structure. It can be difficult to determine if that means that the structure has to be deleted or not. If it should be deleted, then the inline cache must be reset. If any optimized code inlined that inline cache, then that code must be jettisoned and recompiled. Fortunately, our garbage collector allows us to describe this case precisely. Since the garbage collector runs to fixpoint, we simply add the constraint that the pointer from an inline cache to a structure only marks the structure if the structure’s global object and prototype are already marked. Otherwise, the pointer behaves like a weak pointer. So, an inline cache will only be reset if the only way to reach the structure is through inline caches and the corresponding global object and prototype are dead. This is an example of how our garbage collector is engineered to make speculation easy.
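
The marking constraint can be sketched as a fixpoint loop (a Python sketch with a hypothetical representation of the heap):

```python
def mark_to_fixpoint(marked, ic_edges, structure_deps):
    """marked: objects already reached by ordinary marking.
    ic_edges: (inline_cache, structure) pairs.
    structure_deps: structure -> (global_object, prototype)."""
    marked = set(marked)
    changed = True
    while changed:
        changed = False
        for ic, structure in ic_edges:
            if ic in marked and structure not in marked:
                global_object, prototype = structure_deps[structure]
                # The IC-to-structure edge only marks the structure if its
                # global object and prototype are already marked; otherwise
                # it behaves like a weak pointer.
                if global_object in marked and prototype in marked:
                    marked.add(structure)
                    changed = True
    return marked
```

If the structure stays unmarked after the fixpoint, it is dead, and the inline caches pointing at it get reset.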

To summarize, inline caching is an optimization employed by all of our tiers. In addition to making code run faster, inline caching is a high-precision profiling source that can tell us about the type cases that an operation saw. Combined with structures, inline caches allow us to turn dynamic property accesses into easy-to-optimize instructions.

Watchpoints

We allow inline caches and speculative compilers to set watchpoints on the heap. A watchpoint in JavaScriptCore is nothing more than a mechanism for registering for notification that something happened. Most watchpoints are engineered to trigger only the first time that something bad happens; after that, the watchpoint just remembers that the bad thing had ever happened. So, if an optimizing compiler wants to do something that is valid only if some bad thing never happened, and the bad thing has a watchpoint, the compiler just checks if the watchpoint is still valid (i.e. the bad thing hasn’t happened yet) and then associates its generated code with the watchpoint (so the code will only get installed if the watchpoint is still valid when the code is done getting compiled, and will be jettisoned as soon as the watchpoint is fired). The runtime allows for setting watchpoints on a large number of activities. The following stick out:

  • It’s possible to set a watchpoint on structures to get a notification whenever any object switches from that structure to another one. This only works for structures whose objects have never transitioned to any other structure. This is called a structure transition watchpoint. It establishes a structure as a leaf in the structure transition tree.
  • It’s possible to set a watchpoint on properties in a structure to get a notification whenever the property is overwritten. Overwriting a property is easy to detect because the first time this happens, it usually involves repatching a put_by_id inline cache so that it’s in the property replacement mode. This is called a property replacement watchpoint.
  • It’s possible to set a watchpoint on the mutability of global variables.
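The fire-once behavior described above can be modeled with a small sketch (class and method names are invented for illustration; this is not JSC's actual API):

```javascript
// Toy model of a fire-once watchpoint set. Compiled code that relies on
// "the bad thing never happened" registers a jettison callback; firing
// the set once runs all callbacks and leaves the set permanently invalid.
class WatchpointSet {
  constructor() {
    this.valid = true;
    this.watchers = [];
  }
  isStillValid() {
    return this.valid;
  }
  add(jettison) {
    if (!this.valid)
      throw new Error("cannot watch a set that has already fired");
    this.watchers.push(jettison);
  }
  fire() {
    if (!this.valid)
      return; // the set just remembers that it already fired
    this.valid = false;
    for (const jettison of this.watchers)
      jettison(); // e.g. jettison dependent compiled code
    this.watchers = [];
  }
}
```

An optimizing compiler would check `isStillValid()` before relying on the invariant and call `add` to register its generated code; a structure transition or property replacement would call `fire()`.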

Putting these watchpoints together gives the speculative compiler the ability to constant-fold object properties that happen to be immutable. Let’s consider a simple example:

Math.pow(42, 2)

Here, Math is a global property lookup. The base object is known to the compiler: it’s the global object that the calling code belongs to. Then, Math.pow is a lookup of the pow property on the Math object. It’s extremely unlikely that the Math property of the global object or the pow property of the Math object has ever been overwritten. Both the global object and the Math object have structures that are unique to them (both because those structures have special magic since those are special objects and because those objects have what is usually a globally unique set of properties), which guarantees that they have leaf structures, so the structure transition watchpoint can be set. Therefore, except for pathological programs, the expression Math.pow is compiled to a constant by the speculative compiler. This makes lots of stuff fast:

  • It’s common to have named and scoped enumerations using objects and object properties, like TypeScript.NodeType.Error in the typescript compiler benchmark in JetStream 2. Watchpoints make those look like a constant to the speculative compiler.
  • Method calls like o.foo(things) are usually turned just into a structure check on o and a direct call. Once the structure is checked, watchpoints establish that the object’s prototype has a property called foo and that this property has some constant value.
  • Inline caches use watchpoints to remove some checks in their generated stubs.
  • The DFG can use watchpoints to remove redundant CheckStructures even when there is a side effect between them. If we set the structure transition watchpoint then we know that no effect can change the structure of any object that has this structure.
  • Watchpoints are used for lots of miscellaneous corner cases of JavaScript, like having a bad time.
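To make the folding decision concrete, here is a hedged sketch (the function name and data layout are invented; this is not JSC's real API) of what a compiler checks before folding a property to a constant:

```javascript
// If the property's replacement watchpoint is still valid, the current
// value can be baked into generated code as a constant, provided the
// code registers itself on that watchpoint so it gets jettisoned if the
// property is ever overwritten.
function tryFoldProperty(object, name, replacementWatchpoints) {
  const watchpoint = replacementWatchpoints.get(name);
  if (!watchpoint || !watchpoint.valid)
    return null; // the property was overwritten before: cannot fold
  return { constant: object[name], dependency: watchpoint };
}
```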

To summarize, watchpoints let inline caches and the speculative compilers fold certain parts of the heap’s state to constants by getting a notification when things change.

Exit Flags

All of the profiling sources in our engine have a chance of getting things wrong. Profiling sources get things wrong because:

  • The program may change behavior between when we collected the profiling and when we speculated on it.
  • The profiling has some stochastic element and the program is getting unlucky, leading to wrong profiling.
  • The profiling source has a logic bug that makes it not able to see that something happened.
  • We neglected to implement a profiler for something and instead just speculated blind.

The first of these issues – behavior change over time – is inevitable and is sure to happen for some functions in any sufficiently large program. Big programs tend to experience phase changes, like some subroutine going from being called from one part of a larger library that uses one set of types, to being called from a different part with different types. Those things inevitably cause exits. The other three issues are all variants of the profiling being broken. We don’t want our profiling to be broken, but we’re only human. Recall that for speculation to have good EV, the probability of being right has to be about 1. So, it’s not enough to rely on profiling that was written by imperfect lifeforms. Exit flags are a check on the rest of the profiling and are there to ensure that we get things right eventually for all programs.

In JavaScriptCore, every OSR exit is tagged with an exit kind. When a DFG or FTL function exits enough times to get jettisoned, we record all of the exit kinds that happened along with the bytecode locations that semantically caused the exits (for example if we do a type check for add at bytecode #63 but then hoist the check so that it ends up exiting to bytecode #45, then we will blame #63 not #45). Whenever the DFG or FTL decide whether to perform a kind of speculation, they are expected to check whether there is an exit flag for that speculation at the bytecode that we’re compiling. Our exit flag checking discipline tends to be strictly better than our profiling discipline, and it’s way easier to get right — every phase of the DFG has fast access to exit flags.

Here’s an example of an actual OSR exit check in DFG:

speculationCheck(
    OutOfBounds, JSValueRegs(), 0,
    m_jit.branch32(
        MacroAssembler::AboveOrEqual, propertyReg,
        MacroAssembler::Address(storageReg, Butterfly::offsetOfPublicLength())));

Note that the first argument is OutOfBounds. That’s an example exit kind. Here’s another example, this time from the FTL:

speculate(NegativeZero, noValue(), nullptr, m_out.lessThan(left, m_out.int32Zero));

Again, the first argument is the exit kind. This time it’s NegativeZero. We have 26 exit kinds, most of which describe a type check condition (some are used for other uses of OSR, like exception handling).

We use the exit kinds by querying if an exit had happened at the bytecode location we are compiling when choosing whether to speculate. We typically use the presence of an exit flag as an excuse not to speculate at all for that bytecode. We effectively allow ourselves to overcompensate a bit. The exit flags are a check on the rest of the profiler. They are telling the compiler that the profiler had been wrong here before, and as such, shouldn’t be trusted anymore for this code location.
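The bookkeeping can be pictured roughly like this (a simplified sketch; the real machinery lives inside JSC and is more involved):

```javascript
// Record, per bytecode location, which exit kinds caused jettisons;
// consult the flags before making the same speculative bet again.
class ExitFlags {
  constructor() {
    this.byLocation = new Map(); // bytecode index -> Set of exit kinds
  }
  record(bytecodeIndex, exitKind) {
    if (!this.byLocation.has(bytecodeIndex))
      this.byLocation.set(bytecodeIndex, new Set());
    this.byLocation.get(bytecodeIndex).add(exitKind);
  }
  hasExited(bytecodeIndex, exitKind) {
    const kinds = this.byLocation.get(bytecodeIndex);
    return kinds !== undefined && kinds.has(exitKind);
  }
}

// A compiler phase then refuses a speculation the profiler suggests if
// that speculation has already failed at this bytecode location:
function shouldSpeculateInt32(exitFlags, bytecodeIndex, profileSaysInt32) {
  return profileSaysInt32 && !exitFlags.hasExited(bytecodeIndex, "BadType");
}
```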

Summary of Profiling

JavaScriptCore’s profiling is designed to be cheap and useful. Our best profiling sources tend to either involve minimal instrumentation (like just setting a flag or storing a value to a known location) or be intertwined with optimizations (like inline caching). Our profilers gather lots of rich information and in some cases we even collect information redundantly. Our profiling is designed to help us avoid making speculative bets that turn out to be wrong even once.

Compilation and OSR

Now that we have covered bytecode, control, and profiling, we can get to the really fun part: how to build a great speculative optimizing compiler. We will discuss the OSR aspect of speculation in tandem with our descriptions of the two optimizing compilers.

This section is organized into three parts. First we give a quick and gentle introduction to DFG IR, the intermediate representation used by both the DFG and FTL tiers. Then we describe the DFG tier in detail, including how it handles OSR. Finally we describe how the FTL tier works.

DFG IR

The most important component of a powerful optimizing compiler is the IR. We want to have the best possible speculative optimizing compiler for JavaScript, so we have the following goals for our IR:

  • The IR has to describe all of the parts of the program that are interesting to the optimizer. Like other high quality optimizing IRs, DFG IR has good support for talking about data flow, aliasing, effects, control flow, and debug information. Additionally, it’s also good at talking about profiling data, speculation decisions, and OSR.
  • The IR has to be mutable. Anything that is possible to express when first lowering a program to the IR should also be expressible during some later optimization. We prefer that decisions made during lowering to the IR can be refined by optimizations later.
  • The IR has to have some validation support. It’s got to be possible to catch common mistakes in a validator instead of debugging generated code.
  • The IR has to be purpose-built. If there exists an optimization whose most comprehensive implementation requires a change to the IR or one of its core data structures, then we need to be able to make that change without asking anyone for permission.

Note that IR mutability is closely tied to how much it describes and how easy it is to validate. Any optimization that tries to transform one piece of code into a different, better, piece of code needs to be able to determine if the new code is a valid replacement for the old code. Generally, the more information the IR carries and the easier it is to validate, the easier it is to write the analyses that guard optimizations.

Let’s look at what the DFG IR looks like using a simple example:

function foo(a, b)
{
    return a + b;
}

This results in bytecode like:

[   0] enter             
[   1] get_scope         loc3
[   3] mov               loc4, loc3
[   6] check_traps       
[   7] add               loc6, arg1, arg2
[  12] ret               loc6

Note that only the last two lines (add and ret) are important. Let’s look at the DFG IR that we get from lowering those two bytecode instructions:

  23:  GetLocal(Untyped:@1, arg1(B<Int32>/FlushedInt32), R:Stack(6), bc#7)
  24:  GetLocal(Untyped:@2, arg2(C<BoolInt32>/FlushedInt32), R:Stack(7), bc#7)
  25:  ArithAdd(Int32:@23, Int32:@24, CheckOverflow, Exits, bc#7)
  26:  MovHint(Untyped:@25, loc6, W:SideState, ClobbersExit, bc#7, ExitInvalid)
  28:  Return(Untyped:@25, W:SideState, Exits, bc#12)

In this example, we’ve lowered the add opcode to four operations: two GetLocals to get the argument values from the stack (we load them lazily and this is the first operation that needs them), a speculative ArithAdd instruction, and a MovHint to tell the OSR part of the compiler about the ArithAdd. The ret opcode is just lowered to a Return.

In DFG jargon, the instructions are usually called nodes, but we use the terms node, instruction, and operation interchangeably. DFG nodes are simultaneously nodes in a data flow graph and instructions inside of a control flow graph, with semantics defined “as if” they executed in a particular order.

Figure 24. Explanation of an example ArithAdd DFG instruction.

Let’s consider the ArithAdd in greater detail (Figure 24). This instruction is interesting because it’s exactly the sort of thing that the DFG is designed to optimize: it represents a JavaScript operation that is dynamic and impure (it may call functions) but here we have inferred it to be free of side effects using the Int32: type speculations. These indicate that before doing anything else, this instruction will check that its inputs are Int32’s. Note that the type speculations of DFG instructions should be understood like function overloads. ArithAdd also allows for both operands to be double or other kinds of integer. It’s as if ArithAdd was a C++ function that had overloads that took a pair of integers, a pair of doubles, etc. It’s not possible to add any type speculation to any operand, since that may result in an instruction overload that isn’t supported.

Another interesting feature of this ArithAdd is that it knows exactly which bytecode instruction it originated from and where it will exit to. These are separate fields in the IR (the semantic and forExit origins) but when they are equal we dump them as one, bc#7 in the case of this instruction.

Any DFG node that may exit will have the Exits flag. Note that we set this flag conservatively. For example, the Return in our example has it set not because Return exits but because we haven’t found a need to make the exit analysis any more precise for that instruction.

Figure 25. Example data flow graph.

DFG IR can be simultaneously understood as a sequence of operations that should be performed as if in the given order and as a data flow graph with backwards pointers. The data flow graph view of our running example is shown in Figure 25. This view is useful since lots of optimizations are concerned with asking questions like: “what instructions produce the values consumed by this instruction?” These data flow edges are the main way that values move around in DFG IR. Also, representing programs this way makes it natural to add SSA form, which we do in the FTL.
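In code, this dual view amounts to nodes that live in an ordered list (the instruction view) while also pointing back at the nodes that produce their operands (the data flow view). A toy rendering of the running example (node shapes invented for illustration):

```javascript
// Toy DFG for `a + b`: an ordered instruction sequence whose nodes also
// carry backwards data flow edges (pointers to their producer nodes).
const arg1 = { op: "GetLocal", local: "arg1" };
const arg2 = { op: "GetLocal", local: "arg2" };
const add  = { op: "ArithAdd", children: [arg1, arg2] };
const ret  = { op: "Return", children: [add] };
const block = [arg1, arg2, add, ret]; // execution order

// "What instructions produce the values consumed by this instruction?"
// is answered by following the edges backwards from Return through
// ArithAdd to the two GetLocals:
const producers = ret.children[0].children.map(node => node.op);
```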

Figure 26. DFG and FTL compiler architecture. The pass pipeline depicted above the dotted line is shared between the DFG and FTL compilers. Everything below the dotted line is specialized for DFG or FTL.

DFG IR, in both non-SSA and SSA forms, forms the bulk of the DFG and FTL compilers. As shown in Figure 26, both JITs share the same frontend for parsing bytecode and doing some optimizations. The difference is what happens after the DFG optimizer. In the DFG tier, we emit machine code directly. In the FTL tier, we convert to DFG SSA IR (which is almost identical to DFG IR but uses SSA to represent data flow) and do more optimizations, and then lower through two additional optimizers (B3 and Assembly IR or Air). The remaining sections talk about the DFG and FTL compilers. The section on the DFG compiler covers the parts of DFG and FTL that are common.

DFG Compiler

The point of the DFG compiler is to remove lots of type checks quickly. Fast compilation is the DFG feature that differentiates it from the FTL. To get fast compilation, the DFG lacks SSA, can only do very limited code motion, and uses block-local versions of most optimizations (common subexpression elimination, register allocation, etc). The DFG has two focus areas where it does a great job despite compiling quickly: how it handles OSR and how it uses static analysis.

This section explains the DFG by going into these three concepts in greater detail:

  • OSR exit as a first-class concept in the compiler.
  • Static analysis as the main driver of optimization.
  • Fast compilation so that we get the benefits of optimization as soon as possible.

OSR Exit

OSR is all about flattening control flow by making failing checks exit sideways. OSR is a difficult optimization to get right. It’s especially difficult to reason about at a conceptual level. This section tries to demystify OSR exit. We’re going to explain the DFG compiler’s approach to OSR, which includes both parts that are specific to the DFG tier and parts that are shared with the FTL. The FTL section explains extensions to this approach that we use to do more aggressive optimizations.

Our discussion proceeds as follows. First we use a high-level example to illustrate what OSR exit is all about. Then we describe what OSR exit means at the machine level, which will take us into the details of how optimizing compilers handle OSR. We will show a simple OSR exit IR idea based on stackmaps to give a sense of what we’re trying to achieve and then we describe how DFG IR compresses stackmaps. Finally we talk about how OSR exit is integrated with watchpoints and invalidation.

High-level OSR example. To start to demystify DFG exit, let’s think of it as if it was an optimization we were doing to a C program. Say we had written code like:

int foo(int* ptr)
{
    int w, x, y, z;
    w = ... // lots of stuff
    x = is_ok(ptr) ? *ptr : slow_path(ptr);
    y = ... // lots of stuff
    z = is_ok(ptr) ? *ptr : slow_path(ptr);
    return w + x + y + z;
}

Let’s say we wanted to optimize out the second is_ok check. We could do that by duplicating all of the code after the first is_ok check, and having one copy statically assume that is_ok is true while another copy either assumes it’s false or makes no assumptions. This might make the fast path look like:

int foo(int* ptr)
{
    int w, x, y, z;
    w = ... // lots of stuff
    if (!is_ok(ptr))
        return foo_base1(ptr, w);
    x = *ptr;
    y = ... // lots of stuff
    z = *ptr;
    return w + x + y + z;
}

Where foo_base1 is the original foo function after the first is_ok check. It takes the live state at that point as an argument and looks like this:

int foo_base1(int* ptr, int w)
{
    int x, y, z;
    x = is_ok(ptr) ? *ptr : slow_path(ptr);
    y = ... // lots of stuff
    z = is_ok(ptr) ? *ptr : slow_path(ptr);
    return w + x + y + z;
}

What we’ve done here is OSR exit. We’re optimizing control flow on the fast path (removing one is_ok check) by exiting (tail-calling foo_base1) if !is_ok. OSR exit requires:

  • Somewhere to exit, like foo_base1 in this case. It should be a thing that can complete execution of the current function without getting stuck on the same speculation.
  • The live state at exit, like ptr and w in this case. Without that, the exit target can’t pick up where we left off.

That’s OSR exit at a high level. We’re trying to allow an optimizing compiler to emit checks that exit out of the function on failure so that the compiler can assume that the same check won’t be needed later.

OSR at the machine level. Now let’s look at what OSR exit looks like at a lower level. Figure 27 shows an example of OSR at a particular bytecode index.

Figure 27. OSR exit at the machine level for an example bytecode instruction.

OSR is all about replacing the current stack frame and register state, which correspond to some bytecode index in the optimizing tier, with a different frame and register state, which correspond to the same point in the profiling tier. This is all about shuffling live data from one format to another and jumping to the right place.

Knowing where to jump to is easy: each DFG node (aka instruction or operation) has a forExit (or just exit) origin that tells us which bytecode location to exit to. This may even be a bytecode stack in case of inlining.

The live data takes a bit more effort. We have to know what the set of live data is and what its format is in both the profiling and optimizing tiers. It turns out that knowing what the set of live data is and how to represent it for the profiling tiers is easy, but extracting that data from the optimizing tier is hard.

First let’s consider what’s live. The example in Figure 27 says that we’re exiting at an add and it has loc3, loc4, and loc8 live before. We can solve for what’s live at any bytecode instruction by doing a liveness analysis. JavaScriptCore has an optimized bytecode liveness analysis for this purpose.
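A bytecode liveness analysis of this kind is a standard backwards pass. A minimal sketch for straight-line bytecode (real bytecode needs a fixpoint over the control flow graph; the instruction shape here is invented):

```javascript
// Walk instructions in reverse: a definition kills a variable, a use
// revives it. The result maps each instruction index to the set of
// bytecode variables live just before that instruction executes.
function liveBefore(instructions, liveAtEnd = []) {
  const live = new Set(liveAtEnd);
  const result = new Array(instructions.length);
  for (let i = instructions.length - 1; i >= 0; i--) {
    for (const def of instructions[i].defs) live.delete(def);
    for (const use of instructions[i].uses) live.add(use);
    result[i] = new Set(live);
  }
  return result;
}
```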

Note that the frame layout in the profiling tier is an orderly representation of the bytecode state. In particular, locN just means framePointer - 8 * N and argN just means framePointer + FRAME_HEADER_SIZE + 8 * N, where FRAME_HEADER_SIZE is usually 40. The only difference between frame layouts between functions in the profiling tier is the frame size, which is determined by a constant in each bytecode function. Given the frame pointer and the bytecode virtual register name, it’s always possible to find out where on the stack the profiling tiers would store that variable. This makes it easy to figure out how to convert any bytecode live state to what the Baseline JIT or LLInt would expect.
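The layout rule above is simple enough to state as arithmetic (assuming FRAME_HEADER_SIZE is 40 and 8-byte slots, as the text describes; the function name is invented):

```javascript
// Where the profiling tiers would store a bytecode virtual register,
// given the frame pointer: locN below the frame pointer, argN above
// the frame header.
const FRAME_HEADER_SIZE = 40;
function profilingTierSlot(framePointer, virtualRegister) {
  const match = /^(loc|arg)(\d+)$/.exec(virtualRegister);
  const n = Number(match[2]);
  if (match[1] === "loc")
    return framePointer - 8 * n;
  return framePointer + FRAME_HEADER_SIZE + 8 * n;
}
```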

The hard part is the optimizing tier’s state. The optimizing compiler might:

  • Allocate the stack in any order. Even if a variable is on the stack, it may be anywhere.
  • Register-allocate a variable. In that case there may not be any location on the stack that contains the value of that variable.
  • Constant-fold a variable. In that case there may not be any location on the stack or in the register file that contains the value of that variable.
  • Represent a variable’s value in some creative way. For example, your program might have had a statement like x = y + z but the compiler chose to never actually emit the add except lazily at points of use. This can easily happen because of pattern-matching instruction selection on x86 or ARM, where some instructions (like memory accesses) can do some adds for free as part of address computation. We do an even more aggressive version of this for object allocations: some program variable semantically points to an object, but because our compiler is smart, we never actually allocated any object and the object’s fields may be register-allocated, constant-folded, or represented creatively.

We want to allow the optimizing compiler to do things like this, since we want OSR exit to be an enabler of optimization rather than an inhibitor. This turns out to be tricky: how do we let the optimizing compiler do all of the optimizations that it likes to do while still being able to tell us how to recover the bytecode state?

The trick to extracting the optimized-state-to-bytecode-state shuffle from the optimizing compiler is to leverage the original bytecode→IR conversion. The main difference between an SSA-like IR (like DFG IR) and bytecode is that it represents data flow relationships instead of variables. While bytecode says add x, y, z, DFG IR would have an Add node that points to the nodes that produced y and z (like in Figure 25). The conversion from bytecode to DFG IR looks like this pseudocode:

case op_add: {
    VirtualRegister result = instruction->result();
    VirtualRegister left   = instruction->left();
    VirtualRegister right  = instruction->right();

    stackMap[result] = createAdd(
        stackMap[left], stackMap[right]);
    break;
}

This uses a standard technique for converting variable-based IRs to data-flow-based IRs: the converter maintains a mapping from variables in the source IR to data flow nodes in the target IR. We’re going to call this the stackMap for now. Each bytecode instruction is handled by modeling the bytecode’s data flow: we load the left and right operands from the stackMap, which gives us the DFG nodes for those locals’ values. Then we create an ArithAdd node and store it into the result local in the stackMap to model the fact that the bytecode wanted to store the result to that local. Figure 28 shows the before-and-after of running this on the add bytecode in our running example.
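The same conversion, written out as a runnable toy in JavaScript (the opcode set and node shapes are invented for illustration):

```javascript
// Maintain a stackMap from bytecode virtual registers to the data flow
// nodes currently producing their values; each instruction reads its
// operands out of the map and writes its result node back in.
function lowerToDataFlow(bytecode) {
  const stackMap = new Map();
  const nodes = [];
  for (const instruction of bytecode) {
    switch (instruction.op) {
    case "get_argument": {
      const node = { op: "GetArgument", index: instruction.index };
      nodes.push(node);
      stackMap.set(instruction.result, node);
      break;
    }
    case "add": {
      const node = {
        op: "ArithAdd",
        // Data flow edges to the producer nodes, not variable names:
        left: stackMap.get(instruction.left),
        right: stackMap.get(instruction.right),
      };
      nodes.push(node);
      stackMap.set(instruction.result, node);
      break;
    }
    }
  }
  return { nodes, stackMap };
}
```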

Figure 28. Example of stackMap before and after running the SSA conversion on add at bc#42 along with an illustration of the data flow graph around the resulting ArithAdd.

The stackMap, pruned to bytecode liveness as we are doing in these examples, represents the set of live state that would be needed to be recovered at any point in bytecode execution. It tells us, for each live bytecode local, what DFG node to use to recover the value of that local. A simple way to support OSR would be to give each DFG node that could possibly exit a data flow edge to each node in the liveness-pruned stackMap.

This isn’t what the DFG actually does; DFG nodes do not have data flow edges for the stackmap. Doing literally that would be too costly in terms of memory usage since basically every DFG node may exit and stackmaps have O(live state) entries. The DFG’s actual approach is based on delta-compression of stackmaps. But it’s worth considering exactly how this uncompressed stackmap approach would work because it forms part of the FTL’s strategy and it gives a good mental model for understanding the DFG’s more sophisticated approach. So, we will spend some time describing the DFG IR as if it really did have stackmaps. Then we will show how the stackmap is expressed using delta compression.

OSR exit with uncompressed stackmaps. Imagine that DFG nodes really had extra operands for the stackmap. Then we would have an ArithAdd like the following, assuming that bc#42 is the exit origin and that loc3, loc4, and loc8 are live, as they are in Figures 27 and 28:

c: ArithAdd(@a, @b, loc3->@s, loc4->@a, loc8->@b, bc#42)

In this kind of IR, we’d let the first two operands of ArithAdd behave the expected way (they are the actual operands to the add), and we’d treat all of the other operands as the stackmap. The exit origin, bc#42, is a control flow label. Together, this tells the ArithAdd where to exit (bc#42) and the stackmap (@s, @a, and @b). The compiler treats the ArithAdd, and the stackmap operands, as if the ArithAdd had a side exit from the function the compiler was compiling.

One way to think about it is in terms of C pseudocode. We are saying that the semantics of ArithAdd and any other instruction that may exit are as if they did the following before any of their effects:

if (some conditions)
    return OSRExit(bc#42, {loc3: @s, loc4: @a, loc8: @b});

Where the return statement is an early return from the compiled function. So, this terminates the execution of the compiled function by tail-calling (jumping to) the OSRExit. That operation will transfer control to bc#42 and pass it the given stackmap.
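Put together with the orderly frame layout described earlier, the exit itself is just a data shuffle plus a jump (a sketch with invented names, not JSC's actual exit compiler):

```javascript
// OSR exit as data shuffling: take the stackmap (bytecode variable ->
// value materialized by the optimized code) and write each value where
// the profiling tier's frame layout expects it, then resume at the
// exit origin's bytecode location.
function osrExit(bytecodeIndex, stackmapValues, profilingFrame) {
  for (const [local, value] of Object.entries(stackmapValues))
    profilingFrame[local] = value;
  // ...then jump to the profiling tier's code for bytecodeIndex.
  return { resumeAt: bytecodeIndex, frame: profilingFrame };
}
```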

Figure 29. Example of control flow in a compiler with OSR exit. OSR exit means having an additional implicit set of control flow edges that come out of almost every instruction and represent a side exit from the control flow graph.

This is easy to model in a compiler. We don’t allocate any kind of control flow constructs to represent the condition check and side exit but we assume it to exist implicitly when analyzing ArithAdd or any other node that may exit. Note that in JavaScript, basically every instruction is going to possibly exit, and JavaScriptCore’s may exit analysis defaults to true for most operations. Figure 29 illustrates what this looks like. We are going to have three kinds of control flow edges instead of the usual two:

  1. The normal control flow edges between basic blocks. This is what you normally think of as “control flow”. These edges are explicitly represented in the IR, as in, there is an actual data structure (usually vector of successors and vector of predecessors) that each block uses to tell what control flow edges it participates in.
  2. The implicit fall-through control flow for instructions within blocks. This is standard for compilers with basic blocks.
  3. A new kind of control flow edge due to OSR, which goes from instructions in blocks to OSR exit landing sites. This means changing the definition of basic blocks slightly. Normally the only successors of basic blocks are the ones in the control flow graph. Our basic blocks have a bunch of OSR exit successors as well. Those successors don’t exist in the control flow graph, but we have names for them thanks to the exit origins found in the exiting instructions. The edges to those exit origins exit out of the middle of blocks, so they may terminate the execution of blocks before the block terminal.

The OSR landing site is understood by the compiler as having the following behaviors:

  • It ends execution of this function in DFG IR. This is key, since it means that there is no merge point in our control flow graph that has to consider the consequences of exit.
  • It possibly reads and writes the whole world. The DFG has to care about the reads (since they may observe whatever happened just before the exit) but not the writes (since they affect execution after execution exited DFG).
  • It reads some set of values, namely those passed as the stackmap.

This understanding is abstract, so the compiler will just assume the worst case (after exit every location in memory is read and written and all of the bits in all of the values in the stackmap are etched into stone).

This approach is great because it allows precise reconstruction of baseline state when compiling OSR exit and it mostly doesn’t inhibit optimization because it “only” involves adding a new kind of implicit control flow edge to the control flow graph.

This approach allows for simple reconstruction of state at exit because the backend that compiles the DFG nodes would have treated the stackmap data flow edges (things like loc3->@s in our example) the same way it would have treated all other edges. So, at the ArithAdd, the backend would know which registers, stack slots, or constant values to use to materialize the stackmap values. It would know how to do this for the same reason that it would know how to materialize the two actual add operands.

If we survey the most common optimizations that we want the compiler to do, we find that only one major optimization is severely inhibited by this approach to OSR exit. Let’s first review the optimizations this doesn’t break. It’s still possible to perform common subexpression elimination (CSE) on ArithAdd. It’s still possible to hoist it out of loops, though if we do that then we have to edit the exit metadata (the exit destination and stackmap will have to be overwritten to be whatever they are at the loop pre-header). It’s still possible to model the ArithAdd to be pure in lots of the ways that matter, like that if there are two loads, one before and one after the ArithAdd, then we can assume them to be redundant. The ArithAdd could only cause effects on the exit path, in which case the second load doesn’t matter. It’s still possible to eliminate the ArithAdd if it’s unreachable.

The only thing we cannot easily do is what compilers call dead code elimination, i.e. the elimination of instructions if their results are not used. Note that the compiler terminology is confusing here. Outside the compiler field we use the term dead code to mean something that compilers call unreachable code. Code is unreachable if control flow doesn’t reach it and so it doesn’t execute. Outside the compiler field, we would say that such code is dead. It’s important that compilers be able to eliminate unreachable code. Happily, our approach to OSR has no impact on unreachable code elimination. What compilers call dead code is code that is reached by control flow (so live in the not-compiler sense) but that produces a result that no subsequent code uses. Here’s an example of dead code in the compiler sense:

int tmp = a + b;
// nobody uses tmp.

Dead code elimination (DCE) is the part of a compiler that removes this kind of code. Dead code elimination doesn’t quite work for the ArithAdd because:

  • ArithAdd’s speculation checks must be assumed live even if the result of the add is unused. We may do some optimization to a later check because we find that it is subsumed by checks done by this ArithAdd. That’s a pretty fundamental optimization that we do for OSR checks and it’s the reason why OSR ultimately flattens control flow. But we don’t bother recording whenever this ArithAdd’s check is used to unlock a later optimization, so we have to assume that some later operation is already depending on the ArithAdd doing all of its checks. This means that if the result of some operation A is used by a dead operation B, then B will still have to do whatever checks it was doing on its inputs, which will keep A alive even though B is dead. This is particularly devastating for ArithAdd, since ArithAdd usually does an overflow check, and you have to do the add to check for overflow. So, ArithAdd is never really dead. Consider the alternative: if we did not consider the ArithAdd’s overflow check’s effect on abstract state, then we wouldn’t be able to do our range analysis, which uses the information inferred from overflow checks to remove array bounds checks and vice versa.
  • The ArithAdd is almost sure to end up in the stackmap of some later operation, as is basically every node in the DFG program, unless the node represents something that was dead in bytecode. Being dead in bytecode is particularly unlikely because in bytecode we must assume that everything is polymorphic and possibly effectful. Then the add is really not dead: it might be a loop with function calls, after all.

The DFG and FTL still do DCE, but it’s hard and usually only worth the effort for the most expensive constructs. We support decaying an operation just to its checks, for those rare cases where we can prove that the result is not used. We also support sinking to OSR, where an operation is replaced by a phantom version of itself that exists only to tell OSR how to perform the operation for us. We mainly use this complex feature for eliminating object allocations.

To summarize the effect on optimizations: we can still do most of the optimizations. The optimization most severely impacted is DCE, but even there, we have found ways to make it work for the most important cases.

The only real downside of this simple approach is repetition: almost every DFG operation may exit and the state at exit may easily have tens or hundreds of variables, especially if we have done significant inlining. Storing the stackmap in each DFG node would create a case of O(n²) explosion in memory usage and processing time within the compiler. Note that the fact that this explosion happens is somewhat of a JavaScript-specific problem, since JavaScript is unusual in the sheer number of speculations we have to make per operation (even simple ones like add or get_by_id). If the speculations were something we did seldom, like in Java where they are mostly used for virtual calls, then the simple approach would be fine.

Stackmap compression in DFG IR. Our solution to the size explosion of repeated stackmaps is to use a delta encoding. The stackmaps don’t change much. In our running example, the add just kills loc8 and defines loc7. The kill can be discovered by analyzing bytecode, so there’s no need to record it. All we have to record about this operation is that it defines loc7 to be the ArithAdd node.

We use an operation called MovHint as our delta encoding. It tells which bytecode variable is defined by which DFG node. For example, let’s look at the MovHint we would emit for the add in Figure 28:

c: ArithAdd(@a, @b, bc#42)
   MovHint(@c, loc7, bc#42)

We need to put some care into how we represent MovHints so that they are easy to preserve and modify. Our approach is two-fold:

  • We treat MovHint as a store effect.
  • We explicitly label the points in the IR where we expect it to be valid to exit based on the state constructed out of the MovHint deltas.

Let’s first look at how we use the idea of store effects to teach the compiler about MovHint. Imagine a hypothetical DFG IR interpreter and how it would do OSR exit. The key idea is that in that interpreter, the state of the DFG program comprises not just the mapping from DFG nodes to their values, but also an OSR exit state buffer containing values indexed by bytecode variable name. That OSR exit state buffer contains exactly the stack frame that the profiling tiers would use. MovHint’s interpreter semantics are to store the value of its operand into some slot in the OSR exit state buffer. This way, the DFG interpreter is able to always maintain an up-to-date bytecode stack frame in tandem with the optimized representation of program state. Although no such interpreter exists, we make sure that the way we compile MovHint produces something with semantics consistent with what this interpreter would have done.

MovHint is not compiled to a store. But any phase operating on MovHints or encountering MovHints just needs to understand it as a store to some abstract location. The fact that it’s a store means that it’s not dead code. The fact that it’s a store means that it may need to be ordered with other stores or loads. Lots of desirable properties we need for soundly preserving MovHints across compiler optimizations fall out naturally from the fact that we tell all the phases that it’s just a store.

The compiler emits zero code for MovHint. Instead, we use a reaching defs analysis of MovHints combined with a bytecode liveness analysis to rebuild the stackmaps that we would have had if each node carried a stackmap. We perform this analysis in the backend and as part of any optimization that needs to know what OSR is doing. In the DFG tier, the reaching defs analysis happens lazily (when the OSR exit actually occurs — so could be long after the DFG compiled the code), which ensures that the DFG never experiences the O(n2) blow-up of stackmaps. OSR exit analysis is not magical: in the “it’s just a store” model of MovHint, this analysis reduces to load elimination.
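To make the delta encoding concrete, here is a toy sketch of rebuilding a stackmap on demand from MovHint deltas plus bytecode liveness. This is not JSC’s actual code; the tuple encoding of instructions is hypothetical.

```python
# Hypothetical sketch: instead of storing a full stackmap on every node,
# record only MovHint deltas and rebuild the map when asked. A block is a
# list of ("MovHint", node, var) or ("Op", node) tuples.

def stackmap_at(block, index, live_vars):
    """Reaching-defs over MovHints up to `index`, filtered by the
    bytecode liveness analysis (`live_vars`)."""
    defs = {}
    for kind, node, *rest in block[:index + 1]:
        if kind == "MovHint":
            defs[rest[0]] = node      # latest MovHint for this variable wins
    return {var: node for var, node in defs.items() if var in live_vars}

block = [
    ("MovHint", "@a", "loc8"),
    ("MovHint", "@c", "loc7"),        # e.g. the ArithAdd result from Figure 28
    ("Op", "@d"),
]
# loc8 is dead in bytecode at the Op, so the rebuilt stackmap only has loc7.
print(stackmap_at(block, 2, live_vars={"loc7"}))
```

Because the map is rebuilt only when somebody asks for it, nothing quadratic is ever stored, which mirrors the DFG tier’s lazy reaching-defs analysis at exit time.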

DFG IR’s approach to OSR means that OSR exit is possible at some points in DFG IR and not at others. Consider some examples:

  • A bytecode instruction may define multiple bytecode variables. When lowered to DFG IR, we would have two or more MovHints. It’s not possible to have an exit between those MovHints, since the OSR exit state is only partly updated at that point.
  • It’s not possible to exit after a DFG operation that does an observable effect (like storing to a JS object property) but before its corresponding MovHint. If we exit to the current exit origin, we’ll execute the effect again (which is wrong), but if we exit to the next exit origin, we’ll neglect to store the result into the right bytecode variable.

We need to make it easy for DFG transformations to know if it’s legal to insert operations that may exit at any point in the code. For example, we may want to write instrumentation that adds a check before every use of @x. If that use is a MovHint, then we need to know that it may not be OK to add that check right before that MovHint. Our approach to this is based on the observation that the lowering of a bytecode instruction produces two phases of execution in DFG IR of that instruction:

  • The speculation phase: at the start of execution of a bytecode, it’s both necessary and possible to speculate. It’s necessary to speculate since those speculations guard the optimizations that we do in the subsequent DFG nodes for that bytecode instruction. It’s possible to speculate because we haven’t done any of the instruction’s effects, so we can safely exit to the start of that bytecode instruction.
  • The effects phase: as soon as we perform any effect, we are no longer able to do any more speculations. That effect could be an actual effect (like storing to a property or making a call) or an OSR effect (like MovHint).

To help validate this, all nodes in DFG IR have an exitOK flag that they use to record whether they think that they are in the speculation phase (exitOK is true) or if they think that they might be in the effects phase (exitOK is false). It’s fine to say that exitOK is false if we’re not sure, but to say exitOK is true, we have to be completely sure. The IR validation checks that exitOK must become false after operations that do effects, that it becomes true again only at prescribed points (like a change in exit origin suggesting that we’ve ended the effects phase of one instruction and begun the speculation phase of the next one), and that no node that may exit has exitOK set to false. This validator helps prevent errors, like when dealing with bytecode operations that can be lowered to multiple effectful DFG nodes. One example is when put_by_id (i.e. something like o.f = v) is inferred to be a transition (the property f doesn’t exist on o so we need to add it), which results in two effects:

  • Storing a value v into the memory location for property o.f.
  • Changing o‘s structure to indicate that it now has an f.

The DFG IR for this will look something like:

CheckStructure(@o, S1)
PutByOffset(@o, @v, f)
PutStructure(@o, S2, ExitInvalid)

Note that PutStructure will be flagged with ExitInvalid, which is the way we say that exitOK is false in IR dumps. Failing to set exitOK to false for PutStructure would cause a validation error since PutByOffset (right before it) is an effect. This prevents us from making mistakes like replacing all uses of @o with some operation that could speculate, like:

a: FooBar(@o, Exits)
   CheckStructure(@a, S1)
b: FooBar(@o, Exits)
   PutByOffset(@b, @v, f)
c: FooBar(@o, Exits)
   PutStructure(@c, S2, ExitInvalid)

In this example, we’ve used some new FooBar operation, which may exit, as a filter on @o. It may seem absurd to instrument code this way, but it is a goal of DFG IR to:

  • Allow replacing uses of nodes with uses of other nodes that produce an equivalent value. Let’s assume that FooBar is an identity that also does some checks that may exit.
  • Allow inserting new nodes anywhere.

Therefore, the only bug here is that @c is right after the PutByOffset. The validator will complain that it is not marked ExitInvalid. It should be marked ExitInvalid because the previous node (PutByOffset) has an effect. But if you add ExitInvalid to @c, then the validator will complain that a node may exit with ExitInvalid. Any phase that tries to insert such a FooBar would have all the API it needs to realize that it will run into these failures. For example, it could ask the node that it’s inserting itself in front of (the PutStructure) whether it has ExitInvalid. Since it is ExitInvalid, we could do either of these things instead of inserting @c just before the PutStructure:

  1. We could use some other node that does almost what FooBar does but without the exit.
  2. We could insert @c earlier, so it can still exit.

Let’s look at what the second option would look like:

a: FooBar(@o, Exits)
   CheckStructure(@a, S1)
b: FooBar(@o, Exits)
c: FooBar(@o, Exits)
   PutByOffset(@b, @v, f)
   PutStructure(@c, S2, ExitInvalid)

Usually this is all it takes to deal with regions of code with !exitOK.
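The validation rules can be sketched as a small checker. This is a hypothetical toy, not JSC’s actual validator; the node encoding is made up, and `exit_invalid` plays the role of a node claiming exitOK is false.

```python
# Hypothetical sketch of the exitOK validation rules: exitOK resets to
# true at each new exit origin, must be false after any effect, and no
# node may exit while exitOK is false.

def validate(nodes):
    exit_ok = True
    prev_origin = None
    for n in nodes:
        if n["origin"] != prev_origin:            # new bytecode instruction:
            exit_ok, prev_origin = True, n["origin"]  # speculation phase again
        if n["may_exit"] and not exit_ok:
            return False                          # exiting in the effects phase
        if not n["exit_invalid"] and not exit_ok:
            return False                          # claims exitOK after effects
        if n["effect"]:
            exit_ok = False                       # effects phase begins
    return True

# The put_by_id transition example validates: CheckStructure, PutByOffset,
# PutStructure(ExitInvalid). A may-exit FooBar after the effects does not.
good = [
    dict(origin=1, effect=False, may_exit=True,  exit_invalid=False),
    dict(origin=1, effect=True,  may_exit=False, exit_invalid=False),
    dict(origin=1, effect=True,  may_exit=False, exit_invalid=True),
]
bad = good + [dict(origin=1, effect=False, may_exit=True, exit_invalid=False)]
```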

Note that in cases where something like FooBar absolutely needs to do a check after an effect, DFG IR does support exiting into the middle of a bytecode instruction. In some cases, we have no choice but to use that feature. This involves introducing extra non-bytecode state that can be passed down OSR exit, issuing OSR exit state updates before/after effects, using an exit origin that indicates that we’re exiting to some checkpoint in the middle of a bytecode instruction’s execution, and implementing a way to execute a bytecode starting at a checkpoint during OSR exit. It’s definitely possible, but not the sort of thing we want to have to do every time that some DFG node needs to do an effect. For this reason, canonical DFG IR use implies having !exitOK phases (aka effect phases) during some bytecode instructions’ execution.

Watchpoints and Invalidation. So far we have considered OSR exit for checks that the compiler emits. But the DFG compiler is also allowed to speculate by setting watchpoints in the JavaScript heap. If it finds something desirable — like that Math.sqrt points to the sqrt intrinsic function — it can often incorporate it into optimization without emitting checks. All that is needed is for the compiler to set a watchpoint on what it wants to prove (that the Math and sqrt won’t change). When the watchpoint fires, we want to invalidate the compiled code. That means making it so that the code never runs again:

  • no new calls to that function go to the optimized version and
  • all returns into that optimized function are redirected to go to baseline code instead.

Ensuring that new calls avoid optimized code is easy: we just patch all calls to the function to call the profiled code (Baseline, if available, or LLInt) instead. Handling returns is the interesting part.

One approach to handling invalidation is to walk the stack to find all returns to the invalidated code, and repoint those returns to an OSR exit. This would be troublesome for us due to our use of effects phases: it’s possible for multiple effects to happen in a row in a phase of DFG IR execution where it is not possible to exit. So, the DFG approach to invalidation involves letting the remaining effects of the current bytecode instruction finish executing in optimized code and then triggering an OSR exit right before the start of the next bytecode instruction.

Figure 30. How OSR exit and invalidation might work for hypothetical bytecodes.

Invalidation in DFG IR is enabled by the InvalidationPoint instruction, which is automatically inserted by the DFG frontend at the start of every exit origin that is preceded by effects that could cause a watchpoint to fire. InvalidationPoint is modeled as if it was a conditional OSR exit, and is given an OSR exit jump label as if there was a branch to link it to. But, InvalidationPoint emits no code. Instead, it records the location in the machine code where the InvalidationPoint would have been emitted. When a function is invalidated, all of those labels are overwritten with unconditional jumps to the OSR exit.

Figure 30 shows how OSR exit concepts like speculation and effect phases combine with InvalidationPoint for three hypothetical bytecode instructions. We make up intentionally absurd instructions because we want to show the range of possibilities. Let’s consider wat in detail. The first DFG IR node for wat is an InvalidationPoint, automatically inserted because the previous bytecode (foo) had an effect. Then wat does a CheckArray, which may exit but has no effects. So, the next DFG node, Wat, is still in the speculation phase. Wat is in a sort of perfect position in DFG IR: it is allowed to perform speculations and effects. It can perform speculations because no previous node for wat‘s exit origin has performed effects. It can also perform effects, but then the nodes after it (Stuff and Derp) cannot speculate anymore. But, they can perform more effects. Since wat has effects, an InvalidationPoint is immediately inserted at the start of the next bytecode (bar). Note that in this example, Foo, Wat, and StartBar are all in the perfect position (they can exit and have effects). Since Stuff, Derp, and FinishBar are in the effects region, the compiler will assert if they try to speculate.

Note that InvalidationPoint makes code layout tricky. On x86, the unconditional jump used by invalidation is five bytes. So, we must ensure that there are no other jump labels in the five bytes after an invalidation label. Otherwise, it would be possible for invalidation to cause one of those labels to point into the middle of a 5-byte invalidation jump. We solve this by adding nop padding to create at least a 5-byte gap between a label used for invalidation and any other kind of label.
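As a simplified illustration of the layout constraint, the padding computation might look like the following sketch. It assumes the five-byte x86 jump mentioned above, and it ignores the fact that inserting nops shifts every later offset; the function names are hypothetical.

```python
JUMP_SIZE = 5  # x86 rel32 unconditional jump: 1 opcode byte + 4-byte offset

def invalidation_padding(labels, invalidation_labels):
    """For each invalidation label, count the nop bytes needed so that no
    other label falls inside the JUMP_SIZE bytes that may later be
    overwritten with an unconditional jump."""
    padding = {}
    for inv in invalidation_labels:
        clashing = [l for l in labels if inv < l < inv + JUMP_SIZE]
        if clashing:
            padding[inv] = inv + JUMP_SIZE - min(clashing)
    return padding

# A label at offset 12 sits inside the 5 bytes after the invalidation
# label at 10, so 3 bytes of nop padding are needed.
print(invalidation_padding(labels=[10, 12, 40], invalidation_labels=[10]))
```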

To summarize, DFG IR has extensive support for OSR exit. We have a compact delta encoding of changes to OSR exit state. Exit destinations are encoded as an exit origin field in every DFG node. OSR exit due to invalidation is handled by automatic InvalidationPoint insertion.

Static Analysis

The DFG uses lots of static analysis to complement how it does speculation. This section covers three static analyses in the DFG that have particularly high impact:

  • We use prediction propagation to fill in predicted types for all values based on value profiling of some values. This helps us figure out where to speculate on type.
  • We use the abstract interpreter (or just AI for short in JavaScriptCore jargon) to find redundant OSR speculations. This helps us emit fewer OSR checks. Both the DFG and FTL include multiple optimization passes in their pipelines that can find and remove redundant checks but the abstract interpreter is the most powerful one. The abstract interpreter is the DFG tier’s primary optimization and it is reused with small enhancements in the FTL.
  • We use clobberize to get aliasing information about DFG operations. Given a DFG instruction, clobberize can describe the aliasing properties. In almost all cases that description is O(1) in time and space. That description implicitly describes a rich dependency graph.

Both the prediction propagator and the abstract interpreter work by forward-propagating type information. They’re both built on the principles of abstract interpretation. It’s useful to understand at least some of that theory, so let’s do a tiny review. Abstract interpreters are like normal interpreters, except that they describe program state abstractly rather than considering exact values. A classic example due to Kildall involves just remembering which variables have known constant values and forgetting any variable that may have more than one value. Abstract interpreters are run to fixpoint: we keep executing every instruction until we no longer observe any changes. We can execute forward (like Kildall) or backward (like liveness analysis). We can either have sets that shrink as we learn new things (like Kildall, where variables get removed if we learn that they may have more than one value) or we can have sets that grow (like liveness analysis, where we keep adding variables to the live set).
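The Kildall-style analysis can be written down in a few lines. This is a generic textbook sketch, not JSC code: blocks hold `(dest, constant)` loads or `(dest, source_var)` copies, and the merge keeps only the constant bindings that all reachable predecessors agree on.

```python
# Kildall-style forward constant propagation, run to fixpoint.

def const_prop(blocks, edges, entry):
    state = {b: None for b in blocks}          # None = not yet reachable
    changed = True
    while changed:                             # iterate until nothing changes
        changed = False
        for b in blocks:
            ins = [state[p] for p, s in edges if s == b and state[p] is not None]
            if b == entry:
                cur = {}
            elif not ins:
                continue                       # unreachable so far
            else:
                # meet: keep only (var, value) pairs every predecessor has
                cur = dict(set.intersection(*(set(i.items()) for i in ins)))
            for dest, src in blocks[b]:        # (dest, const) or (dest, var)
                val = cur.get(src) if isinstance(src, str) else src
                if val is None:
                    cur.pop(dest, None)        # unknown value: forget dest
                else:
                    cur[dest] = val
            if cur != state[b]:
                state[b], changed = cur, True
    return state

# Diamond: x=1 in A, x=2 in C, so z is unknown at the merge point D.
blocks = {"A": [("x", 1)], "B": [("y", "x")], "C": [("x", 2)], "D": [("z", "x")]}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
print(const_prop(blocks, edges, "A"))
```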

Now let’s go into more details about the two abstract interpreters and the alias analysis.

Prediction propagation. The prediction propagator’s abstract state comprises variable to speculated type (Figure 13) mappings. The speculated type is a set of fundamental types. The sets tell which types a value is predicted to have. The prediction propagator is not flow sensitive; it has one copy of the abstract state for all program statements. So, each execution of a statement considers the whole set of input types (even from program statements that can’t reach us) and joins the result with the speculated type of the result variable. Note that the input to the prediction propagator is a data flow IR, so multiple assignments to the same variable aren’t necessarily joined.

The prediction propagator doesn’t have to be sound. The worst case outcome of the prediction propagator being wrong is that we either:

  • do speculations that are too strong, and so we exit too much and then recompile.
  • do speculations that are too weak, so we run slower than we could, forever.

Note that the second of those outcomes is generally worse. Recompiling and then speculating less at least means that the program eventually runs with the optimal set of speculations. Speculating too weakly and never recompiling means that we never get to optimal. Therefore, the prediction propagator is engineered to sometimes be unsound instead of conservative, since unsoundness can be less harmful.
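A flow-insensitive propagator of this kind can be sketched as one speculated-type set per variable, grown by set union until fixpoint. The transfer rule below is invented for illustration; JSC’s real prediction rules are much richer.

```python
# Hypothetical sketch of a flow-insensitive prediction propagator: a
# single type-set per variable for the whole program, joined by union.
INT32, DOUBLE = "Int32", "Double"

def propagate(profiles, transfers):
    # profiles: var -> type set observed by value profiling
    # transfers: (result, inputs, rule); rule maps input types to output types
    preds = {v: set(ts) for v, ts in profiles.items()}
    changed = True
    while changed:
        changed = False
        for result, inputs, rule in transfers:
            in_types = set().union(*(preds.get(v, set()) for v in inputs))
            out = rule(in_types)
            if not out <= preds.setdefault(result, set()):
                preds[result] |= out           # join, so sets only grow
                changed = True
    return preds

def add_rule(in_types):
    # made-up rule: an add is predicted Int32 only if every input is Int32
    return {INT32} if in_types and in_types <= {INT32} else {DOUBLE}

print(propagate({"a": {INT32}, "b": {INT32}}, [("c", ["a", "b"], add_rule)]))
```

Because there is only one abstract state for all program statements, a variable assigned an Int32 on one path and a Double on another simply ends up predicted as both.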

The abstract interpreter. The DFG AI is the DFG tier’s most significant optimization. While there are many abstract interpreters throughout JavaScriptCore, this one is the biggest in terms of total code and the number of clients — hence to us it is the abstract interpreter.

The DFG AI’s abstract state comprises variable to abstract value mappings where each abstract value represents a set of possible JSValues that the variable could have. Those sets describe what type information we have proved from past checks. We join abstract states at control flow merge points. The solution after the fixpoint is a minimal solution (smallest possible sets that have a fixpoint). The DFG AI is flow-sensitive: it maintains a separate abstract state per instruction boundary. AI looks at the whole control flow graph at once but does not look outside the currently compiled function and whatever we inlined into it. AI is also sparse conditional.

The DFG abstract value representation has four sub-values:

  • Whether the value is known to be a constant, and if so, what that constant is.
  • The set of possible types (i.e. a SpeculatedType bitmap, shown in Figure 13).
  • The set of possible indexing types (also known as array modes) that the object pointed to by this value can have.
  • The set of possible structures that the object pointed to by this value can have. This set has special infinite set powers.

The last two sub-values can be mutated by effects. DFG AI assumes that all objects have escaped, so if an effect happens that can change indexing types and structures, then we have to clobber those parts of all live abstract values.

We interpret the four sub-values as follows: the abstract value represents the set of JSValues that reside in the intersection of the four sub-value sets. This means that when interpreting abstract values, we have the option of just looking at whichever sub-value is interesting to us. For example, an optimization that removes structure checks only needs to look at the structure set field.
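The four-part abstract value can be sketched as follows. This is a hypothetical toy, not JSC’s AbstractValue: `TOP` stands in for the unbounded case (including the structure set’s “infinite set” behavior), and the redundant-check query only needs to consult the structure field, as described above.

```python
# Hypothetical sketch of a DFG-style abstract value: the set of JSValues
# it represents is the intersection of four sub-value sets.
TOP = "TOP"   # stands for "could be anything" in a given sub-value

class AbstractValue:
    def __init__(self, constant=None, types=TOP, indexing=TOP, structures=TOP):
        self.constant = constant      # known constant, if any
        self.types = types            # SpeculatedType-style set (Figure 13)
        self.indexing = indexing      # possible indexing types (array modes)
        self.structures = structures  # possible structures

    def check_structure_is_redundant(self, s):
        # Redundant if every structure the value could have is already s.
        return self.structures != TOP and self.structures <= {s}

    def clobber(self):
        # An effect may change structures and indexing types of any
        # escaped object, so those sub-values fall back to TOP.
        self.indexing = TOP
        self.structures = TOP

v = AbstractValue(types={"Object"}, structures={"S1"})
assert v.check_structure_is_redundant("S1")     # a second CheckStructure dies
v.clobber()                                     # e.g. a call happened
assert not v.check_structure_is_redundant("S1")
```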

Figure 31. Examples of check elimination with abstract interpretation.

The DFG AI gives us constant and type propagation simultaneously. The type propagation is used to remove checks, simplify checks, and replace dynamic operations with faster versions.

Figure 31 shows examples of checks that the DFG AI lets us remove. Note that in addition to eliminating obvious same-basic-block check redundancies (Figure 31(a)), AI lets us remove redundancies that span multiple blocks (like Figure 31(b) and (c)). For example, in Figure 31(c), the AI is able to prove that @x is an Int32 at the top of basic block #7 because it merges the Int32 states of @x from BB#5 and #6. Check elimination is usually performed by mutating the IR so that later phases know which checks are really necessary without having to ask the AI.

The DFG AI has many clients, including the DFG backend and the FTL-to-B3 lowering. Being an AI client means having access to its estimate of the set of JSValues that any variable or DFG node can have at any program point. The backends use this to simplify checks that were not removed. For example, the backend may see an Object-or-Undefined, ask AI about it, and find that AI already proved that we must have either an object or a string. The backend will be able to combine those two pieces of information to only emit an is-object check and ignore the possibility of the value being undefined.

Type propagation also allows us to replace dynamic heap accesses with inlined ones. Most fast property accesses in DFG IR arise from inline cache feedback telling us that we should speculate, but sometimes the AI is able to prove something stronger than the profiler told us. This is especially likely in inlined code.

Clobberize. Clobberize is the alias analysis that the DFG uses to describe what parts of the program’s state an instruction could read and write. This allows us to see additional dependency edges between instructions beyond just the ones expressed as data flow. Dependency information tells the compiler what kinds of instruction reorderings are legal. Clobberize has many clients in both the DFG and FTL. In the DFG, it’s used for common subexpression elimination, for example.

To understand clobberize, it’s worth considering what it is about a program’s control flow that a compiler needs to remember. The control flow graph shows us one possible ordering of the program and we know that this ordering is legal. But both the DFG and FTL tiers want to move code around. The DFG tier mostly only moves code around within basic blocks rather than between them while the FTL tier can also move code between basic blocks. Even with the DFG’s block-local code motion, it’s necessary to know more than just the current ordering of the program. It’s also necessary to know how that ordering can be changed.

Some of this is already solved by the data flow graph. DFG IR provides a data flow graph that shows some of the dependencies between instructions. It’s obvious that if one instruction has a data flow edge to another, then only one possible ordering (source executes before sink) is valid. But what about:

  • Stores to memory.
  • Loads from memory.
  • Calls that can cause any effects.
  • OSR effects (like MovHint).

Data flow edges don’t talk about those dependencies. Data flow also cannot tell which instructions have effects at all. So, the data flow graph cannot tell us anything about the valid ordering of instructions if those instructions have effects.

The issue of how to handle dependencies that arise from effects is particularly relevant to JavaScript compilation — and speculative compilation in general — because of the precision about aliasing that speculation gives us. For example, although the JavaScript o.f operation could have any effect, after speculation we often know that it can only affect properties named f. Additionally, JavaScript causes us to have to emit lots of loads to fields that are internal to our object model and it’s good to know exactly when those loads are redundant so that we can remove as many of them as possible. So, we need to have the power to ask, for any operation that may access internal VM state, whether that state could be modified by any other operation, and we want that answer to be as precise as it can while being O(1)-ish.

Clobberize is a static analysis that augments the data flow and control flow graphs by telling us constraints on how instructions can be reordered. The neat thing about clobberize is that it avoids storing dependency information in the instructions themselves. So, while the compiler is free to query dependency information anytime it likes by running the analysis, it doesn’t have to do anything to maintain it.

Figure 32. Some of the abstract heap hierarchy. All heaps are subsets of World, which is subdivided into Heap, Stack and SideState. For example, JS function calls say that they write(Heap) and read(World). Subheaps of Heap include things like JSObject_butterfly, which refer to fields that are internal to the JSC object model and are not directly user-visible, and things like NamedProperties, a heap that contains subheaps for every named property the function accesses.

For each DFG instruction, clobberize reports zero or more reads or writes. Each read or write says which abstract heaps it is accessing. Abstract heaps are sets of memory locations. A read (or write) of an abstract heap means that the program will read (or write) from zero or more actual locations in that abstract heap. Abstract heaps form a hierarchy with World at the top (Figure 32). A write to World means that the effect could write to anything, so any read might see that write. The hierarchy can get very specific. For example, fully inferred, direct forms of property access like GetByOffset and PutByOffset report that they read and write (respectively) an abstract heap that names the property. So, accesses to properties of different names are known not to alias. The heaps are known to alias if either one is a descendant of the other.

It’s worth appreciating how clobberize combined with control flow is just a way of encoding a dependence graph. To build a dependence graph from clobberize information, we apply the following rule. If instruction B appears after instruction A in control flow, then we treat B as having a dependence edge to A (B depends on A) if:

  • any heap read by B overlaps any heap written by A, or
  • any heap written by B overlaps any heap read or written by A.
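The rule above can be written down directly. The following is a hypothetical toy, not JSC’s clobberize: abstract heaps are modeled as a parent tree mirroring part of Figure 32, two heaps overlap iff one is an ancestor of the other, and instructions carry explicit read/write sets.

```python
# Hypothetical sketch: derive dependence edges from clobberize-style
# read/write sets over an abstract heap hierarchy.
PARENT = {"Heap": "World", "Stack": "World", "SideState": "World",
          "NamedProperties": "Heap", "f": "NamedProperties",
          "g": "NamedProperties"}

def overlaps(a, b):
    def ancestors(h):
        out = {h}
        while h in PARENT:
            h = PARENT[h]
            out.add(h)
        return out
    return a in ancestors(b) or b in ancestors(a)

def dependence_edges(instrs):
    # instrs: ordered (name, reads, writes); B depends on an earlier A if
    # B's reads overlap A's writes, or B's writes overlap A's reads/writes.
    edges = []
    for i, (bn, br, bw) in enumerate(instrs):
        for an, ar, aw in instrs[:i]:
            if any(overlaps(r, w) for r in br for w in aw) or \
               any(overlaps(w, x) for w in bw for x in ar | aw):
                edges.append((bn, an))
    return edges

instrs = [
    ("PutByOffset_f", set(),     {"f"}),   # write property "f"
    ("GetByOffset_g", {"g"},     set()),   # read property "g": no aliasing
    ("Call",          {"World"}, {"Heap"}),
]
print(dependence_edges(instrs))
```

Note that the two property accesses get no edge between them because "f" and "g" are disjoint heaps, while the Call depends on both, since World and Heap are ancestors of every named property.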

Conversely, any dependence graph can be expressed using clobberize. An absurd but correct representation would involve giving each edge in the dependence graph its own abstract heap and having the source of the edge write the heap while the sink reads it. But what makes clobberize such an efficient representation of dependence graphs is that every dependence that we’ve tried to represent can be intuitively described by reads and writes to a small collection of abstract heaps.

Those abstract heaps are either collections of concrete memory locations (for example the "foo" abstract heap is the set of memory locations used to represent the values of properties named “foo”) or they are metaphorical. Let’s explore some metaphorical uses of abstract heaps:

  • MovHint wants to say that it is not dead code, that it must be ordered with other MovHints, and that it must be ordered with any real program effects. We say this in clobberize by having MovHint write SideState. SideState is a subheap of World but disjoint from other things, and we have any operation that wants to be ordered with OSR exit state either read or write something that overlaps SideState. Note that DFG assumes that operations that may exit implicitly read(World) even if clobberize doesn’t say this, so MovHint’s write of SideState ensures ordering with exits.
  • NewObject wants to say that it’s not valid to hoist it out of loops because two successive executions of NewObject may produce different results. But it’s not like NewObject clobbers the world; for example if we had two accesses to the same property on either side of a NewObject then we’d want the second one to be eliminated. DFG IR has many NewObject-like operations that also have this behavior. So, we introduce a new abstract heap called HeapObjectCount and we say that NewObject is metaphorically incrementing (reading and writing) the HeapObjectCount. HeapObjectCount is treated as a subheap of Heap but it’s disjoint from the subheaps that describe any state visible from JS. This is sufficient to block hoisting of NewObject while still allowing interesting optimizations to happen around it.

Figure 33. Sample sequence of DFG IR instructions and their dependence graph. DFG IR never stores the dependence graph in memory because we get the information implicitly by running clobberize.

The combination of clobberize and the control flow graph gives a scalable and intuitive way of expressing the dependence graph. It’s scalable because we don’t actually have to express any of the edges. Consider for example a dynamic access instruction that could read any named JavaScript property, like the Call instruction in Figure 33. Clobberize can say this in O(1) space and time. But a dependence graph would have to create an edge from that instruction to any instruction that accesses any named property before or after it. In short, clobberize gives us the benefit of a dependence graph without the cost of allocating memory to represent the edges.

The abstract heaps can also be efficiently collected into a set, which we use to summarize the aliasing effects of basic blocks and loops.

To summarize, the DFG puts a big emphasis on static analysis. Speculation decisions are made using a combination of profiling and an abstract interpreter called prediction propagation. Additionally, we have an abstract interpreter for optimization, simply called the DFG abstract interpreter, which serves as the main engine for redundant check removal. Abstract interpreters are a natural fit for the DFG because they give us a way to forward-propagate information about types. Finally, the DFG uses the clobberize analysis to describe dependencies and aliasing.

Fast Compilation

The DFG is engineered to compile quickly so that the benefits of OSR speculations can be realized quickly. To help reduce compile times, the DFG is focused on what optimizations it does and how it does them. The static analysis and OSR exit optimizations discussed so far represent the most powerful things that the DFG is capable of. The DFG does a quick and dirty job with everything else, like instruction selection, register allocation, and removal of redundant code that isn’t checks. Functions that benefit from the compiler doing a good job on those optimizations will get them if they run long enough to tier up into the FTL.

The DFG’s focus on fast compilation happened organically, as a result of many separate throughput-latency trade-offs. Initially, JavaScriptCore just had the Baseline JIT and then later Baseline as the profiling tier and DFG as the optimizing tier. The DFG experienced significant evolution during this time, and then experienced additional evolution after the FTL was introduced. While no single decision led to the DFG’s current design, we believe that it was most significantly shaped by tuning for short-running benchmarks and the introduction of the FTL.

The DFG was tuned for a diverse set of workloads. On the one hand, it was tuned for long-running tests in which one full second of warm-up was given to the speculative compiler for free (like the old V8 benchmarks, which live on in the form of Octane and JetStream, albeit without the freebie warm-up), but on the other hand, it was also tuned for shorter-running benchmarks like SunSpider and page load tests. SunSpider focused on smallish programs running for very short bursts of time with little opportunity for warm-up. Compilers that do more optimizations than the DFG tend to lose to it on SunSpider because they fail to complete their optimizations before SunSpider finishes running. We continue to use tests that are in the spirit of SunSpider, like Speedometer and JetStream. Speedometer has a similar code-size-to-running-time ratio, so like SunSpider, it benefits a lot from the DFG. JetStream includes a subset of SunSpider and puts a big emphasis on short-running code in all of its other tests. That’s not to say that we don’t also care about long-running code. It’s just that our methodology for improving the DFG was to try to get speed-ups on both short-running things and long-running things with the same engine. Since any long-running optimization would regress the short-running tests, we often avoided adding any long-running optimizations to the DFG. But we did add cheap versions of many sophisticated optimizations, giving respectable speed-ups on both short-running and long-running workloads.

The introduction of the FTL solidified the DFG’s position as the compiler that optimizes less. So long as the DFG generates reasonably good code quickly, we can get away with putting lots of expensive optimizations into the FTL. The FTL’s long compile times mean that many programs do not run long enough to benefit from the FTL. So, the DFG is there to give those programs a speculative optimization boost in way less time than an FTL-like compiler could do. Imagine a VM that only had one optimizing compiler. Unless that one compiler compiled as fast as the DFG and generated code that was as good as the FTL, it would end up being reliably slower than JavaScriptCore on some workloads. If that compiler compiled as fast as the DFG but didn’t have the FTL’s throughput then any program that ran long enough would run faster in JavaScriptCore. If that compiler generated code that was as good as the FTL but compiled slower than the DFG then any program that ran short enough to tier up into the DFG but not that compiler would run faster in JavaScriptCore. JavaScriptCore has multiple compiler tiers because we believe that it is not possible to build a compiler that compiles as fast as the DFG while generating code that is as good as the FTL.

To summarize, the DFG focuses on fast compilation because of the combination of the history of how it was tuned and the fact that it sits as the tier below the FTL JIT.

Figure 34. Illustration of a sample DFG IR program with all three graphs: local data flow, global data flow, and control flow.

The DFG compiler’s speed comes down to an emphasis on block-locality in the IR. The DFG IR used by the DFG tier has a two-level data flow graph:

  • Local data flow graph. The local data flow graph is used within basic blocks. This graph is a first-class citizen in the IR; when working with data flow in the DFG’s C++ code, it sometimes seems like this is the only data flow graph. DFG IR inside a basic block resembles SSA form in the sense that there’s a 1:1 mapping between instructions and the variables they assign, and data flow is represented by having users of values point at the instructions (nodes) that produce those values. This representation does not allow you to use a value produced by an instruction in a different block except through tedious escape hatches.
  • Global data flow graph. We say global to mean the entire compilation unit, so some JS function and whatever the DFG inlined into it. So, global just means spanning basic blocks. DFG IR maintains a secondary data flow graph that spans basic blocks. DFG IR’s approach to global data flow is based on spilling: to pass a value to a successor block, you store it to a spill slot on the stack, and then that block loads it. But in DFG IR, we also thread data flow relationships through those loads and stores. This means that if you are willing to perform the tedious task of traversing this secondary data flow graph, you can get a global view of data flow.

Figure 34 shows an example of how this works. The compilation unit is represented as three graphs: a control flow graph, local data flow, and global data flow. Data flow graphs are represented with edges going from the user to the value being used. The local data flow graphs work like they do in SSA, so any SSA optimization can be run in a block-local manner on this IR. The global data flow graph is made of SetLocal/GetLocal nodes that store/load values into the stack. The data flow between SetLocal and GetLocal is represented completely in DFG IR, by threading data flow edges through special Phi nodes in each basic block where a local is live.
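
The two-level structure can be sketched as a toy model (hypothetical class and operation names, not JSC's actual data structures): local data flow is direct node-to-node edges, while the global graph is threaded through SetLocal/GetLocal and per-block Phi nodes.

```python
# Toy model of DFG IR's two-level data flow (hypothetical names, not JSC code).
# Inside a block, users point at producers (SSA-like local data flow). Across
# blocks, a value travels through SetLocal/GetLocal on a named stack slot, and
# the global graph is threaded through a Phi node in each block where the
# local is live.

class Node:
    def __init__(self, op, *children, local=None):
        self.op = op
        self.children = list(children)  # local data flow: user -> producer
        self.local = local              # for SetLocal/GetLocal/Phi: the slot

# Block 1: x = a + b; SetLocal(loc0, x)
a = Node("GetArgument")
b = Node("GetArgument")
add = Node("ArithAdd", a, b)
set_loc0 = Node("SetLocal", add, local="loc0")

# Block 2: Phi for loc0 (global edge threaded through it), then GetLocal(loc0)
phi = Node("Phi", local="loc0")
phi.children.append(set_loc0)
get_loc0 = Node("GetLocal", phi, local="loc0")

def global_def(get_local):
    """Tediously traverse the secondary (global) graph:
    GetLocal -> Phi... -> SetLocal -> defining value."""
    node = get_local.children[0]
    while node.op == "Phi":
        node = node.children[0]
    assert node.op == "SetLocal"
    return node.children[0]

assert global_def(get_loc0) is add
```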

From the standpoint of writing outstanding high-throughput optimizations, this approach to IR design is like kneecapping the compiler. Compilers thrive on having actual SSA form, where there is a single data flow graph, and you don’t have to think about an instruction’s position in control flow when traversing data flow. The emphasis on locality is all about excellent compile times. We believe that locality gives us compile time improvements that we can’t get any other way:

  • Instruction selection and register allocation for a basic block can be implemented as a single pass over that basic block. The instruction selector can make impromptu register allocation decisions during that pass, like deciding that it needs any number of scratch registers to emit code for some DFG node. The combined instruction selector and register allocator (aka the DFG backend) compiles basic blocks independently of one another. This kind of code generation is good at register allocating large basic blocks but bad for small ones. For functions that only have a single basic block, the DFG often generates code that is as good as the FTL.
  • We never have to decompress the delta encoding of OSR exit. We just have the backend record a log of its register allocation decisions (the variable event stream). While the DFG IR for a function is thrown out after compilation, this log along with a minified version of the DFG IR (that only includes MovHints and the things they reference) is saved so that we can replay what the backend did whenever an OSR exit happens. This makes OSR exit handling super cheap in the DFG – we totally avoid the O(n²) complexity explosion of OSR stackmaps despite the fact that we speculate like crazy.
  • There is no need to enter or exit SSA. On the one hand, SSA conversion performance is a solved problem: it’s a nearly-linear-time operation. Even so, the constant factors are high enough that avoiding it entirely is profitable. Converting out of SSA is worse. If we wanted to combine SSA with our block-local backend, we’d have to add some sort of transformation that discovers how to load/store live state across basic blocks. DFG IR plays tricks where the same store that passes data flow to another block doubles as the OSR exit state update. It’s not obvious that exiting out of SSA would discover all of the cases where the same store can be reused for both OSR exit state update and the data flow edge. This suggests that any version of exiting out of SSA would make the DFG compiler either generate worse code or run slower. So, not having SSA makes the compiler run faster because entering SSA is not free and exiting SSA is awful.
  • Every optimization is faster if it is block-local. Of course, you could write block-local optimizations in an SSA IR. But having an IR that emphasizes locality is like a way to statically guarantee that we won’t accidentally introduce expensive compiler passes to the DFG.

The one case where global data flow is essential to the DFG’s mission is static analysis. This comes up in the prediction propagator and the abstract interpreter. Both of them use the global data flow graph in addition to the local data flow graphs, so that they can see how type information flows through the whole compilation unit. Fortunately, as shown in Figure 34, the global data flow graph is available. It’s in a format that makes it hard to edit but relatively easy to analyze. For example, it implicitly reports the set of live variables at each basic block boundary, which makes merging state in the abstract interpreter relatively cheap.

Figure 35. The DFG pipeline.

Figure 35 shows the complete DFG optimization pipeline. This is a fairly complete pipeline: it has classics like constant folding, control flow simplification, CSE, and DCE. It also has lots of JavaScript-specifics like deciding where to put checks (unification, prediction injection and propagation, prediction propagation, and fixup), a pass just to optimize common patterns of varargs, some passes for GC barriers, and passes that help OSR (CPS rethreading and phantom insertion). We can afford to do a lot of optimizations in the DFG so long as those optimizations are block-local and don’t try too hard. Still, this pipeline is way smaller than the FTL’s and runs much faster.

To summarize, the DFG compiler uses OSR exit and static analysis to emit an optimal set of type checks. This greatly reduces the number of type checks compared to running JavaScript in either of the profiled tiers. Because the benefit of type check removal is so big, the DFG compiler tries to limit how much time it spends doing other optimizations by restricting itself to a mostly block-local view of the program. This is a trade off that the DFG makes to get fast compile times. Functions that run long enough that they’d rather pay the compile time to get those optimizations end up tiering up to the FTL, which just goes all out for throughput.

FTL Compiler

We’ve previously documented some aspects of the FTL’s architecture in the original blog post and when we introduced B3. This section provides an updated description of this JIT’s capabilities as well as a deep dive into how FTL does OSR. We will structure our discussion of the FTL as follows. First we will enumerate what optimizations it is capable of. Then we will describe how it does OSR exit in detail. Finally we will talk about patchpoints — an IR operation based on a lambda.

All The Optimizations

The point of the FTL compiler is to run all the optimizations. This is a compiler where we never compromise on peak throughput. All of the DFG’s decisions that were known trade-offs in favor of compile time at the expense of throughput are reversed in the FTL. There is no upper limit on the amount of cycles that a function compiled with the FTL will run for, so it’s the kind of compiler where even esoteric optimizations have a chance to pay off eventually. The FTL combines multiple optimization strategies:

  • We reuse the DFG pipeline, including the weird IR. This ensures that any good thing that the DFG tier ever does is also available in the FTL.
  • We add a new DFG SSA IR and DFG SSA pipeline. We adapt lots of DFG phases to DFG SSA (which usually makes them become global rather than local). We add lots of new phases that are only possible in SSA (like loop invariant code motion).
  • We lower DFG SSA IR to B3 IR. B3 is an SSA-based optimizing JIT compiler that operates at the abstraction level of C. B3 has lots of optimizations, including global instruction selection and graph coloring register allocation. The FTL was B3’s first customer, so B3 is tailored for optimizing at the abstraction level where DFG SSA IR can’t.

Having multiple ways of looking at the program gives the FTL maximum opportunities to optimize. Some of the compiler’s functionality, particularly in the part that decides where to put checks, thrives on the DFG’s weird IR. Other parts of the compiler work best in DFG SSA, like the DFG’s loop-invariant code motion. Lots of things work best in B3, like most reasoning about how to simplify arithmetic. B3 is the first IR that doesn’t know anything about JavaScript, so it’s a natural place to implement textbook optimization that would have difficulties with JavaScript’s semantics. Some optimizations, like CSE, work best when executed in every IR because they find unique opportunities in each IR. In fact, all of the IRs have the same fundamental optimization capabilities in addition to their specialized optimizations: CSE, DCE, constant folding, CFG simplification, and strength reductions (sometimes called peephole optimizations or instruction combining).
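
The kind of CSE that pays off in every IR can be sketched as a minimal local value-numbering pass (hypothetical representation: instructions as name/op/operand tuples, nothing like any of the real IRs):

```python
# Minimal block-local value-numbering CSE sketch (hypothetical IR encoding).
# Pure operations with identical operands are folded to the first occurrence,
# and later users are rewritten to reference the surviving result.

def local_cse(block):
    seen = {}            # (op, operands) -> canonical result name
    replacements = {}    # eliminated name -> surviving name
    out = []
    for name, op, operands in block:
        operands = tuple(replacements.get(o, o) for o in operands)
        key = (op, operands)
        if op in ("add", "mul") and key in seen:   # pure ops only
            replacements[name] = seen[key]          # reuse earlier result
            continue
        seen[key] = name
        out.append((name, op, operands))
    return out

block = [
    ("t1", "add", ("a", "b")),
    ("t2", "add", ("a", "b")),   # same pure expression: eliminated
    ("t3", "mul", ("t2", "c")),  # rewritten to use t1
]
assert local_cse(block) == [("t1", "add", ("a", "b")),
                            ("t3", "mul", ("t1", "c"))]
```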

Figure 36. The FTL pipeline. Note that Lower DFG to B3 is in bold because it’s FTL’s biggest phase; sometimes when we say “FTL” we are just referring to this phase.

The no-compromise approach is probably best appreciated by looking at the FTL optimization pipeline in Figure 36. The FTL runs 93 phases on the code it encounters. This includes all phases from Figure 35 (the DFG pipeline), except Varargs Forwarding, only because it’s completely subsumed by the FTL’s Arguments Elimination. Let’s review some of the FTL’s most important optimizations:

  • DFG AI. This is one of the most important optimizations in the FTL. It’s mostly identical to the AI we run in the DFG tier. Making it work with SSA makes it slightly more precise and slightly more expensive. We run the AI a total of six times.
  • CSE (common subexpression elimination). We run this in DFG IR (Local Common Subexpression Elimination), DFG SSA IR (Global Common Subexpression Elimination), B3 IR (Reduce Strength and the dedicated Eliminate Common Subexpressions), and even in Air (Fix Obvious Spills, a CSE focused on spill code). Our CSEs can do value numbering and load/store elimination.
  • Object Allocation Sinking is a must-points-to analysis that we use to eliminate object allocations or sink them to slow paths. It can eliminate graphs of object allocations, including cyclic graphs.
  • Integer Range Optimization is a forward flow-sensitive abstract interpreter in which the state is a system of equations and inequalities that describe known relationships between variables. It can eliminate integer overflow checks and array bounds checks.
  • The B3 Reduce Strength phase runs a fixpoint that includes CFG simplification, constant folding, reassociation, SSA clean-up, dead code elimination, a light CSE, and lots of miscellaneous strength reductions.
  • Duplicate Tails, aka tail duplication, flattens some control flow diamonds, unswitches small loops, and undoes some cases of relooping. We duplicate small tails blindly over a CFG with critical edges broken. This allows us to achieve some of what splitting achieved for the original speculative compilers.
  • Lower B3 to Air is a global pattern matching instruction selector.
  • Allocate Registers By Graph Coloring implements the IRC and Briggs register allocators. We use IRC on x86 and Briggs on arm64. The difference is that IRC can find more opportunities for coalescing assignments into a single register in cases where there is high register pressure. Our register allocators have special optimizations for OSR exit, especially the OSR exits we emit for integer overflow checks.
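
To illustrate the Integer Range Optimization bullet above, here is a toy sketch (hypothetical encoding, far simpler than the real system of inequalities) of how proven upper bounds let later bounds checks be dropped:

```python
# Toy range analysis in the spirit of Integer Range Optimization (hypothetical
# encoding): a dominating check that proved index < bound' makes any later
# check index < bound with bound' <= bound redundant.

def eliminate_bounds_checks(checks):
    """checks: list of (index_var, exclusive_upper_bound) in program order."""
    proven = {}   # var -> tightest proven exclusive upper bound
    kept = []
    for var, bound in checks:
        if var in proven and proven[var] <= bound:
            continue                      # already implied: eliminate
        proven[var] = min(proven.get(var, bound), bound)
        kept.append((var, bound))
    return kept

checks = [("i", 10), ("i", 10), ("i", 16), ("i", 4)]
# The duplicate check and i < 16 are implied by i < 10; i < 4 is new info.
assert eliminate_bounds_checks(checks) == [("i", 10), ("i", 4)]
```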

OSR Exit in the FTL

Now that we have enumerated some of the optimizations that the FTL is capable of, let’s take a deep dive into how the FTL works by looking at how it compiles and does OSR. Let’s start with this example:

function foo(a, b, c)
{
    return a + b + c;
}

The relevant part of the bytecode sequence is:

[   7] add loc6, arg1, arg2
[  12] add loc6, loc6, arg3
[  17] ret loc6

Which results in the following DFG IR:

  24:  GetLocal(Untyped:@1, arg1(B<Int32>/FlushedInt32), R:Stack(6), bc#7)
  25:  GetLocal(Untyped:@2, arg2(C<BoolInt32>/FlushedInt32), R:Stack(7), bc#7)
  26:  ArithAdd(Int32:@24, Int32:@25, CheckOverflow, Exits, bc#7)
  27:  MovHint(Untyped:@26, loc6, W:SideState, ClobbersExit, bc#7, ExitInvalid)
  29:  GetLocal(Untyped:@3, arg3(D<Int32>/FlushedInt32), R:Stack(8), bc#12)
  30:  ArithAdd(Int32:@26, Int32:@29, CheckOverflow, Exits, bc#12)
  31:  MovHint(Untyped:@30, loc6, W:SideState, ClobbersExit, bc#12, ExitInvalid)
  33:  Return(Untyped:@30, W:SideState, Exits, bc#17)

The DFG data flow from the snippet above is illustrated in Figure 37 and the OSR exit sites are illustrated in Figure 38.

Figure 37. Data flow graph for FTL code generation example. Figure 38. DFG IR example with the two exiting nodes highlighted along with where they exit and what state is live when they exit.

We want to focus our discussion on the MovHint @27 and how it impacts the code generation for the ArithAdd @30. That ArithAdd is going to exit to the second add in the bytecode, which requires restoring loc6 (i.e. the result of the first add), since it is live at that point in bytecode (it also happens to be directly used by that add).
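
The decompression of MovHint deltas into full exit state can be sketched as a simple replay (hypothetical encoding of instructions as tuples): each MovHint updates a running bytecode-local map, and each exit site snapshots it.

```python
# Sketch of decompressing MovHint deltas into full stackmaps (hypothetical
# encoding). Each MovHint says "bytecode local L now holds node N"; replaying
# them in order yields, at any exit site, the complete local -> node map.

def stackmap_at_exits(instructions):
    live = {}
    maps = {}
    for instr in instructions:
        if instr[0] == "MovHint":
            _, node, local = instr
            live[local] = node
        elif instr[0] == "Exit":
            maps[instr[1]] = dict(live)   # snapshot the accumulated deltas
    return maps

program = [
    ("MovHint", "@26", "loc6"),   # loc6 = result of the first add
    ("Exit", "bc#12"),            # the second ArithAdd exits here
    ("MovHint", "@30", "loc6"),   # loc6 = result of the second add
    ("Exit", "bc#17"),
]
maps = stackmap_at_exits(program)
# At bc#12, loc6 must be restored from @26, as described above.
assert maps["bc#12"] == {"loc6": "@26"}
assert maps["bc#17"] == {"loc6": "@30"}
```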

This DFG IR is lowered to the following in B3:

Int32 @42 = Trunc(@32, DFG:@26)
Int32 @43 = Trunc(@27, DFG:@26)
Int32 @44 = CheckAdd(@42:WarmAny, @43:WarmAny, generator = 0x1052c5cd0,
                     earlyClobbered = [], lateClobbered = [], usedRegisters = [],
                     ExitsSideways|Reads:Top, DFG:@26)
Int32 @45 = Trunc(@22, DFG:@30)
Int32 @46 = CheckAdd(@44:WarmAny, @45:WarmAny, @44:ColdAny, generator = 0x1052c5d70,
                     earlyClobbered = [], lateClobbered = [], usedRegisters = [],
                     ExitsSideways|Reads:Top, DFG:@30)
Int64 @47 = ZExt32(@46, DFG:@32)
Int64 @48 = Add(@47, $-281474976710656(@13), DFG:@32)
Void @49 = Return(@48, Terminal, DFG:@32)

CheckAdd is the B3 way of saying: do an integer addition, check for overflow, and if it overflows, execute an OSR exit governed by a generator. The generator is a lambda that is given a JIT code generator object (that it can use to emit code at the jump destination of the OSR exit) and a stackmap generation parameters object that tells it the B3 value representation for each stackmap argument. The B3 value reps tell you which register, stack slot, or constant to use to get the value. B3 doesn’t know anything about how OSR exit works except that it involves having a stackmap and a generator lambda. So, CheckAdd can take more than 2 arguments; the first two arguments are the actual add operands and the rest are the stackmap. It’s up to the client to decide how many arguments to pass to the stackmap and only the generator will ever get to see their values. In this example, only the second CheckAdd (@46) is using the stackmap. It passes one extra argument, @44, which is the result of the first add — just as we would expect based on MovHint @27 and the fact that loc6 is live at bc#12. This is the result of the FTL decompressing the delta encoding given by MovHints into full stackmaps for B3.
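
The contract between a Check's client and the compiler can be sketched as follows (hypothetical classes and method names, not B3's actual C++ API): the client appends opaque stackmap arguments and a generator; the compiler later calls the generator with one value rep per stackmap argument.

```python
# Sketch of the Check/stackmap/generator contract (hypothetical names, not
# B3's API). The compiler treats stackmap arguments opaquely; only the
# generator ever sees where they ended up.

class Check:
    def __init__(self, op, lhs, rhs, stackmap, generator):
        self.op, self.lhs, self.rhs = op, lhs, rhs
        self.stackmap = stackmap      # extra args, opaque to the compiler
        self.generator = generator

    def emit_exit(self, reps_by_value):
        # Pretend register allocation picked a rep for each stackmap argument;
        # the generator is handed one rep per argument, in order.
        reps = [reps_by_value[v] for v in self.stackmap]
        return self.generator(reps)

# Mirrors @46 = CheckAdd(@44, @45, @44): one stackmap argument, @44 (= loc6).
exit_plan = {}
def generator(reps):
    exit_plan["loc6"] = reps[0]       # restore loc6 from wherever @44 lives
    return exit_plan

check = Check("CheckAdd", "@44", "@45", ["@44"], generator)
check.emit_exit({"@44": "register rax", "@45": "register rdx"})
assert exit_plan == {"loc6": "register rax"}
```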

Figure 39. The stackmaps and stackmap-like mappings maintained by the FTL to enable OSR.

FTL OSR exit means tracking what happens with the values of bytecode locals through multiple stages of lowering. The various stages don’t know a whole lot about each other. For example, the final IRs, B3 and Air, know nothing about bytecode, bytecode locals, or any JavaScript concepts. We implement OSR exit by tracking multiple stackmap-like mappings per exit site that give us the complete picture when we glue them together (Figure 39):

  • The DFG IR stackmaps that we get by decompressing MovHint deltas. This gives a mapping from bytecode local to either a DFG node or a stack location. In some cases, DFG IR has to store some values to the stack to support dynamic variable introspection, like via function.arguments. DFG OSR exit analysis is smart enough to recognize those cases, since it’s more efficient to handle them by having OSR exit extract the value from the stack. Hence, OSR exit analysis may report that a bytecode local is available through either a DFG node or a stack location.
  • The B3 value reps array inside the stackmap generation parameters that B3 gives to the generator lambdas of Check instructions like CheckAdd. This is a mapping from B3 argument index to a B3 value representation, which is either a register, a constant, or a stack location. By argument index we mean index in the stackmap arguments to a Check. This is three pieces of information: some user value (like @46 = CheckAdd(@44, @45, @44)), some index within its argument list (like 2), and the value that index references (@44). Note that since this CheckAdd has two argument indices for @44, that means that they may end up having different value representations. It’s not impossible for one to be a constant and another to be a register or spill slot, for example (though this would be highly unlikely; if it happened then it would probably be the result of some sound-but-inefficient antipattern in the compiler). B3’s client gets to decide how many stackmap arguments it will pass and B3 guarantees that it will give the generator a value representation for each argument index in the stackmap (so starting with argument index 2 for CheckAdd).
  • The FTL OSR exit descriptor objects, which the FTL’s DFG→B3 lowering creates at each exit site and holds onto inside the generator lambda it passes to the B3 check. Exit descriptors are based on DFG IR stackmaps and provide a mapping from bytecode local to B3 argument index, constant, stack slot, or materialization. If the DFG IR stackmap said that a bytecode local is a Node that has a constant value, then the OSR exit descriptor will just tell us that value. If the DFG stackmap said that a local is already on the stack, then the OSR exit descriptor will just tell us that stack slot. It could be that the DFG stackmap tells us that the node is a phantom object allocation — an object allocation we optimized out but that needs to be rematerialized on OSR exit. If it is none of those things, the OSR exit descriptor will tell us which B3 argument index has the value of that bytecode local.
  • The FTL’s DFG→B3 lowering already maintains a mapping from DFG node to B3 value.
  • The FTL OSR Exit object, which is a mapping from bytecode local to register, constant, stack slot, or materialization. This is the final product of the FTL’s OSR exit handling and is computed lazily from the B3 value reps and FTL OSR exit descriptor.

These pieces fit together as follows. First we compute the DFG IR stackmap and the FTL’s DFG node to B3 value mapping. We get the DFG IR stackmap from the DFG OSR exit analysis, which the FTL runs in tandem with lowering. We get the DFG to B3 mapping implicitly from lowering. Then we use that to compute the FTL OSR exit descriptor along with the set of B3 values to pass to the stackmap. The DFG IR stackmap tells us which DFG nodes are live, so we turn that into B3 values using the DFG to B3 mapping. Some nodes will be excluded from the B3 stackmap, like object materializations and constants. Then the FTL creates the Check value in B3, passes it the stackmap arguments, and gives it a generator lambda that closes over the OSR exit descriptor. B3’s Check implementation figures out which value representations to use for each stackmap argument index (as a result of B3’s register allocator doing this for every data flow edge), and reports this to the generator as an array of B3 value reps. The generator then creates a FTL::OSRExit object that refers to the FTL OSR exit descriptor and value reps. Users of the FTL OSR exit object can figure out which register, stack slot, constant value, or materialization to use for any bytecode local by asking the OSR exit descriptor. That can tell the constant, spill slot, or materialization script to use. It can also give a stackmap argument index, in which case we load the value rep at that index, and that tells us the register, spill slot, or constant.
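
The final lookup step can be sketched like this (hypothetical encoding of descriptors and reps, simplified from the real objects): the exit descriptor either answers directly, or defers to the value rep at a stackmap argument index.

```python
# Sketch of resolving a bytecode local at exit time (hypothetical encoding).
# The exit descriptor maps a local to a constant, stack slot, materialization,
# or a stackmap argument index; only in the last case do we consult the value
# rep that the register allocator reported for that index.

def resolve(local, exit_descriptor, value_reps):
    kind, payload = exit_descriptor[local]
    if kind in ("constant", "stack", "materialization"):
        return (kind, payload)            # descriptor alone has the answer
    assert kind == "argument_index"
    return value_reps[payload]            # defer to the reported value rep

exit_descriptor = {
    "loc3": ("constant", 42),
    "loc5": ("stack", "slot 8"),
    "loc6": ("argument_index", 2),        # first stackmap arg of a CheckAdd
}
value_reps = {2: ("register", "rax")}     # what "B3" reported for index 2

assert resolve("loc3", exit_descriptor, value_reps) == ("constant", 42)
assert resolve("loc6", exit_descriptor, value_reps) == ("register", "rax")
```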

This approach to OSR exit gives us two useful properties. First, it empowers OSR-specific optimization. Second, it empowers optimizations that don’t care about OSR. Let’s go into these in more detail.

FTL OSR empowers OSR-specific optimizations. This happens in DFG IR and B3 IR. In DFG IR, OSR exit is a mutable part of the IR. Any operation can be optimized by adding more OSR exits and we even have the ability to move checks around. The FTL does sophisticated OSR-aware optimizations using DFG IR, like object allocation sinking. In B3 IR, OSR exit gets special register allocation treatment. The stackmap arguments of Check are understood by B3 to be cold uses, which means that it’s not expensive if those uses are spilled. This is powerful information for a register allocator. Additionally, B3 does special register allocation tricks for addition and subtraction with overflow checks (for example we can precisely identify when the result register can reuse a stackmap register and when we can coalesce the result register with one of the input registers to produce optimal two-operand form on x86).

FTL OSR also empowers optimizations that don’t care about OSR exit. In B3 IR, OSR exit decisions get frozen into stackmaps. This is the easiest representation of OSR exit because it requires no knowledge of OSR exit semantics to get right. It’s natural for compiler phases to treat extra arguments to an instruction opaquely. Explicit stackmaps are particularly affordable in B3 because of a combination of factors:

  1. the FTL is a more expensive compiler anyway so the DFG OSR delta encoding optimizations matter less,
  2. we only create stackmaps in B3 for exits that DFG didn’t optimize out, and
  3. B3 stackmaps only include a subset of live state (the rest may be completely described in the FTL OSR exit descriptor).

We have found that some optimizations are annoying, sometimes to the point of being impractical, to write in DFG IR because of explicit OSR exit (like MovHint deltas and exit origins). It’s not necessary to worry about those issues in B3. So far we have found that every textbook optimization for SSA is practical to do in B3. This means that we only end up having a bad time with OSR exit in our compiler when we are writing phases that benefit from DFG’s high-level knowledge; otherwise we write the phases in B3 and have a great time.

This has some surprising outcomes. Anytime FTL emits a Check value in B3, B3 may duplicate the Check. B3 IR semantics allow any code to be duplicated during optimization and this usually happens due to tail duplication. Not allowing code duplication would restrict B3 more than we’re comfortable doing. So, when the duplication happens, we handle it by having multiple FTL OSR exits share the same OSR exit descriptor but get separate value reps. It’s also possible for B3 to prove that some Check is either unnecessary (always succeeds) or is never reached. In that case, we will have one FTL OSR exit descriptor but zero FTL OSR exits. This works in such a way that DFG IR never knows that the code was duplicated and B3’s tail duplication and unreachable code elimination know nothing about OSR exit.

Patchpoints: Lambdas in the IR

This brings us to the final point about the FTL. We think that what is most novel about this compiler is its use of lambdas in its IRs. Check is one example of this. The DFG has some knowledge about what a Check would do at the machine code level, but that knowledge is incomplete until we fill in some blanks about how B3 register-allocated some arguments to the Check. The FTL handles this by having one of the operands to a B3 Check be a lambda that takes a JIT code generator object and value representations for all of the arguments. We like this approach so much that we also have B3 support Patchpoint. A Patchpoint is like an inline assembly snippet in a C compiler, except that instead of a string containing assembly, we pass a lambda that will generate that assembly if told how to get its arguments and produce its result. The FTL uses this for a bunch of cases:

  • Anytime the B3 IR generated by the FTL interacts with JavaScriptCore’s internal ABI. This includes all calls and call-like instructions.
  • Inline caches. If the FTL wants to emit an inline cache, it uses the same inline cache code generation logic that the DFG and baseline use. Instead of teaching B3 how to do this, we just tell B3 that it’s a patchpoint.
  • Lazy slow paths. The FTL has the ability to only emit code for a slow path if that slow path executes. We implement that using patchpoints.
  • Instructions we haven’t added to B3 yet. If we find some JavaScript-specific CPU instruction, we don’t have to thread it through B3 as a new opcode. We can just emit it directly using a Patchpoint. (Of course, threading it through B3 is a bit better, but it’s great that it’s not strictly necessary.)

Here’s an example of the FTL using a patchpoint to emit a fast double-to-int conversion:

if (MacroAssemblerARM64::
    supportsDoubleToInt32ConversionUsingJavaScriptSemantics()) {
    PatchpointValue* patchpoint = m_out.patchpoint(Int32);
    patchpoint->append(doubleValue, ValueRep::SomeRegister);
    patchpoint->setGenerator(
        [=] (CCallHelpers& jit,
             const StackmapGenerationParams& params) {
            jit.convertDoubleToInt32UsingJavaScriptSemantics(
                params[1].fpr(), params[0].gpr());
        });
    patchpoint->effects = Effects::none();
    return patchpoint;
}
This tells B3 that it’s a Patchpoint that returns Int32 and takes a Double. Both are assumed to go in any register of B3’s choice. Then the generator uses a C++ lambda to emit the actual instruction using our JIT API. Finally, the patchpoint tells B3 that the operation has no effects (so it can be hoisted, killed, etc).

This concludes our discussion of the FTL. The FTL is our high throughput compiler that does every optimization we can think of. Because it is a speculative compiler, a lot of its design is centered around having a balanced handling of OSR exit, which involves a separation of concerns between IRs that know different amounts of things about OSR. A key to the FTL’s power is the use of lambdas in B3 IR, which allows B3 clients to configure how B3 emits machine code for some operations.

Summary of Compilation and OSR

To summarize, JavaScriptCore has two optimizing compilers, the DFG and FTL. They are based on the same IR (DFG IR), but the FTL extends this with lots of additional compiler technology (SSA and multiple IRs). The DFG is a fast compiler: it’s meant to compile faster than typical optimizing compilers. But, it generates code that is usually not quite optimal. If that code runs long enough, then it will also get compiled with the FTL, which tries to emit the best code possible.

Related Work

The idea of using feedback from cheap profiling to speculate was pioneered by the Hölzle, Chambers, and Ungar paper on polymorphic inline caches, which calls this adaptive compilation. That work used a speculation strategy based on splitting, which means having the compiler emit many copies of code, one for each possible type. The same three authors later invented OSR exit, though they called it dynamic deoptimization and only used it to enhance debugging. Our approach to speculative compilation means using OSR exit as our primary speculation strategy. We do use splitting in a very limited sense: we emit diamond speculations in those cases where we are not sure enough to use OSR and then we allow tail duplication to split the in-between code paths if they are small enough.

This speculative compilation technique, with OSR or diamond speculations but not so much splitting, first received extraordinary attention during the Java performance wars. Many wonderful Java VMs used combinations of interpreters and JITs with varied optimization strategies to profile virtual calls and speculatively devirtualize them, with the best implementations using inline caches, OSR exit, and watchpoints. Java implementations that used variants of this technique include (but are not limited to):

  • the IBM JIT, which combined an interpreter and an optimizing JIT and did diamond speculations for devirtualization.
  • HotSpot and HotSpot server, which combined an interpreter and an optimizing JIT and used diamond speculations, OSR exit, and lots of other techniques that JavaScriptCore uses. JavaScriptCore’s FTL JIT is similar to HotSpot server in the sense that both compilers put a big emphasis on great OSR support, comprehensive low-level optimizations, and graph coloring register allocation.
  • Eclipse J9, a major competitor to HotSpot that also uses speculative compilation.
  • Jikes RVM, a research VM that used OSR exit but combined a baseline JIT and an optimizing JIT. I learned most of what I know about this technique from working on Jikes RVM.

Like Java, JavaScript has turned out to be a great use case for speculative compilation. Early instigators in the JavaScript performance war included the Squirrelfish interpreter (predecessor to LLInt), the Squirrelfish Extreme JIT (what we now call the Baseline JIT), the early V8 engine that combined a baseline JIT with inline caches, and TraceMonkey. TraceMonkey used a cheap optimizing JIT strategy called tracing, which compiles lots of speculative paths. This JIT sometimes outperformed the baseline JITs, but often lost to them due to overspeculation. V8 upped the ante by introducing the speculative compilation approach to JavaScript, using the template that had worked so well in Java: a lower tier that does inline caches, then an optimizing JIT (called Crankshaft) that speculates based on the inline caches and exits to the lower tier. This version of V8 used a pair of JITs (baseline JIT and optimizing JIT), much like Jikes RVM did for Java. JavaScriptCore soon followed by hooking up the DFG JIT as an optimizing tier for the baseline JIT, then adding the LLInt and FTL JIT. During about the same time, TraceMonkey got replaced with IonMonkey, which uses similar techniques to Crankshaft and DFG. The ChakraCore JavaScript implementation also used speculative compilation. JavaScriptCore and V8 have continued to expand their optimizations with innovative compiler technology like B3 (a CFG SSA compiler) and TurboFan (a sea-of-nodes SSA compiler). Much like for Java, the top implementations have at least two tiers, with the lower one used to collect profiling that the upper one uses to speculate. And, like for Java, the fastest implementations are built around OSR speculation.

Conclusion

JavaScriptCore includes some exciting speculative compiler technology. Speculative compilation is all about speeding up dynamically typed programs by placing bets on what types the program would have had if it could have types. Speculation uses OSR exit, which is expensive, so we engineer JavaScriptCore to make speculative bets only if they are a sure thing. Speculation involves using multiple execution tiers, some for profiling, and some to optimize based on that profiling. JavaScriptCore includes four tiers to also get an ideal latency/throughput trade-off on a per-function basis. A control system chooses when to optimize code based on whether it’s hot enough and how many times we’ve tried to optimize it in the past. All of the tiers use a common IR (bytecode in JavaScriptCore’s case) as input and provide independent implementation strategies with different throughput/latency and speculation trade-offs.

This post is an attempt to demystify our take on speculative compilation. We hope that it’s a useful resource for those interested in JavaScriptCore and for those interested in building their own fast language implementations (especially the ones with really weird and funny features).

July 29, 2020 05:00 PM

July 16, 2020

Release Notes for Safari Technology Preview 110

Surfin’ Safari

Safari Technology Preview Release 110 is now available for download for macOS Big Sur and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 263214-263988.

WebRTC

  • Added a functional WebRTC VP9 codec (r263734, r263820)
  • Allowed registering VP9 as a VT decoder (r263894)
  • Added support for freeze and pause receiver stats (r263351)
  • Added MediaRecorder.onstart support (r263671, r263896)
  • Changed MediaRecorder to support peer connection remote video tracks (r263928)
  • Enabled VTB required low latency code path (r263931)
  • Fixed MediaRecorder stopRecorder() returning an empty Blob after first use (r263511, r263633, r263891)
  • Fixed MediaRecorder.start() Method ignoring the timeslice parameter (r263565, r263651, r263892)
  • Fixed RTCDataChannel.bufferedAmount to stay the same even if channel is closed (r263655)
  • Updated the max width and height for mock sources (r263844)

Web Authentication

  • Improved UI for PIN entry for security keys

Web Animations

  • Keyframe animation with infinite iteration count doesn’t show up in the Animations timeline (r263400)

Web API

  • Changed to require a <form> to be connected before it can be submitted (r263624)
  • Fixed window.location.replace with invalid URLs to throw (r263647)
  • Fixed the behavior when setting url.search="??" (two question marks) (r263637)
  • Changed to allow selecting HEIF images if the ‘accept’ attribute includes an image MIME type that the platform can transcode (r263949)
  • Added referrerpolicy attribute support for <link> (r263356, r263442)
  • Allow setting empty host/hostname on URLs if they use file scheme (r263971)
  • Allow the async clipboard API to write data when copying via menu action or key binding (r263480)

Media

  • Changed to check for mode=“showing” to consider a text track as selected in the tracks panel (r263802)

Layout

  • Changed to allow indefinite size flex items to be definite with respect to resolving percentages inside them (r263399)
  • Changed to not include scrollbar extents when computing sizes for percentage resolution (r263794)
  • Fixed pointer events (click/hover/etc) passing through flex items, if they have negative margin (r263659)

CSS

  • Changed to resolve viewport units against the preferred content size (r263311)

Scrolling

  • Fixed overlapping content when margin-right is present (r263550)
  • Fixed content sometimes missing in nested scrollers with border-radius (r263578)

Accessibility

  • Fixed honoring aria-modal nodes wrapped in aria-hidden (r263673)
  • Implemented relevant simulated key presses for custom ARIA widgets for increment and decrement (r263823)

Bug Fixes

  • Fixed the indeterminate progress bar animation periodically jumping in macOS Big Sur (r263952)

JavaScript

  • Enabled RelativeTimeFormat and Locale by default (r263227)
  • Configured option-offered numberingSystem in Intl.NumberFormat through locale (r263837)
  • Changed Intl.Collator to set usage:”search” option through ICU locale (r263833)
  • Fixed Promise built-in functions to be anonymous non-constructors (r263222)
  • Fixed incorrect TypedArray.prototype.set with primitives (r263216)

Storage Access API

  • Added the capability to call the Storage Access API as a quirk, on behalf of websites that should be doing it themselves (r263383)

Text Manipulation

  • Updated text manipulation to exclude text rendered using icon-only fonts (r263527)
  • Added a new text manipulation heuristic to decide paragraph boundary (r263958)

Loading

  • Enabled referrer policy attribute support by default (r263274)
  • Changed image crossorigin mutations to be considered “relevant mutations” (r263345, r263350)

Web Inspector

  • Added a tooltip to the icon of resources replaced by a local override explaining what happened (r263429)
  • Allow selecting text of Response (DOM Tree) in Network tab (r263872)
  • Adjusted the height of title area when Web Inspector is undocked to match macOS Big Sur (r263377, r263402)

July 16, 2020 05:35 PM

July 12, 2020

Manuel Rego: Open Prioritization and CSS Containment

Igalia WebKit

Igalia is a major contributor to all the open source web rendering engines (Blink, Gecko, Servo and WebKit). We have been making different kinds of contributions for years, which has earned us an important position in the different communities. This allows us to help our customers solve their problems through upstream contributions that also benefit the whole web community.

Implementing a feature in a rendering engine (or in several) might look very simple at first sight, but contributing it upstream can take a while depending on the standardization status, the related bugs, the browser architecture, and many other factors. You can find examples of things Igalia has implemented in the past in my previous blog posts, and you will appreciate all the work behind some of those features.

There’s a common pattern everywhere: people get really angry because the bug they reported years ago is still not fixed in a given browser. That can happen for a variety of reasons, and not simply because the developers of that browser are lazy and ignoring that particular bug. In many cases the answer to why it hasn’t been fixed yet is pretty simple: priorities. The companies and individuals contributing to these projects have their own interests and priorities; they rank the different issues and tasks and put their focus and effort on the ones with the highest priority for them. A possible solution, now that the major browsers are all open source, would be to hire a consulting company like Igalia to fix that bug for you; but as an individual, or even as a company, you may not have the budget to make that happen.

What would happen if we allowed several parties to contribute together to the development of some features? That would make it possible for individuals and organizations who don’t have the means to implement them alone to each contribute their piece of the cake towards adding support for those features on the web platform.

Open Prioritization

Igalia is launching Open Prioritization, a crowd-funding campaign for the web platform. We believe this can open the door for many different people and organizations to prioritize the development of some features on the different web engines. Initially we have defined 6 tasks that can be found on the website, together with a FAQ explaining all the details of the campaign. 🚀

Let’s hope we can make this happen. If this is a success and some of these items get funded and implemented, probably there’ll be more in the future, including new things or ideas that you can share with us.

Open Prioritization by Igalia: an experiment in crowd-funding prioritization

One of the tasks of the Open Prioritization campaign we’re starting this week is about adding CSS Containment support in WebKit, and we have experience working on that in Chromium.

Why CSS Containment in WebKit?

Briefly speaking, CSS Containment is a standard focused on improving the rendering performance of web pages. It allows authors to isolate a DOM subtree from the rest of the document, so that any change happening inside the “contained” subtree doesn’t affect anything outside that element.

This is the spec behind the contain property, which can take a few values defining the “type of containment”: layout, paint, size and style. I’m not going to go deeper into this here; see my introductory post or my CSSconf EU talk if you’re interested in more details about this specification.
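As a quick illustration (the class names here are made up, only the property and its values come from the spec), an author opts a subtree into containment like this:

```css
/* Layout + paint containment: nothing inside .card can affect layout
   outside of it, and its descendants never paint outside its bounds. */
.card {
  contain: layout paint;
}

/* Size containment: the box is sized as if it had no contents, so the
   surrounding layout doesn't depend on what's inside. An explicit size
   is needed, otherwise the element collapses. */
.ad-slot {
  contain: size;
  width: 300px;
  height: 250px;
}
```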

So why do we think this is important? Currently we have an issue with CSS Containment: it’s supported in Chromium and Firefox (except style containment) but not in WebKit. That might not seem like a big deal, since it’s a performance-oriented feature: without support you’d simply get worse performance, and that’s all. But that’s not completely true, as the different types of containment impose some restrictions on the contained element (e.g. layout containment makes the element become the containing block of positioned descendants), which might cause interoperability issues if you start to use the contain property in your websites.

The main goal of this task is to add CSS Containment support in WebKit, at least to the level that it’s spec-compliant with the other implementations, and, if time permits, to implement some optimizations based on it. Once we have interoperability, you can start using it in your web pages without any concern, as the behavior won’t change between the different browsers, and you might get some performance improvements (which will vary depending on each browser’s implementation).

In addition, this will allow WebKit to implement further optimizations thanks to the information that web authors provide through the contain property. On top of that, this initial support is a prerequisite for new features that build on it, like the new CSS properties content-visibility and contain-intrinsic-size, which are part of the Display Locking feature.
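To sketch where that leads (selector names invented for illustration, properties from the Display Locking proposals), those two properties let the browser skip rendering work for off-screen content entirely:

```css
/* Sections far below the fold are only laid out and painted once they
   approach the viewport. */
.long-article > section {
  content-visibility: auto;
  /* Reserve an estimated size so the scrollbar stays stable while the
     section's rendering is being skipped. */
  contain-intrinsic-size: 1000px;
}
```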

If you think this is an important feature for you, please go ahead and do your pledge so it can get prioritized and implemented in WebKit upstream.

Really looking forward to seeing how this Open Prioritization campaign goes in the coming weeks. 🤞

July 12, 2020 10:00 PM