
Improving rendering performance with CSS content-visibility | Read the Tea Leaves


Recently I got an interesting performance bug report for emoji-picker-element:

I’m on a fedi instance with 19k custom emojis […] and when I open the emoji picker […], the page freezes for like a full second at least and overall performance stutters for a while after that.

If you’re not familiar with Mastodon or the Fediverse, different servers can have their own custom emoji, similar to Slack, Discord, etc. Having 19k (really closer to 20k in this case) is highly unusual, but not unheard of.

So I booted up their repro, and holy moly, it was slow:

Screenshot of Chrome DevTools with an emoji picker showing high ongoing layout/paint costs and 40,000 DOM nodes

There were multiple things wrong here:

  • 20k custom emoji meant 40k elements, since each one used a <button> and an <img>.
  • No virtualization was used, so all these elements were just shoved into the DOM.

Now, to my credit, I was using <img loading="lazy">, so those 20k images were not all being downloaded at once. But no matter what, it’s going to be achingly slow to render 40k elements – Lighthouse recommends no more than 1,400!

My first thought, of course, was, “Who the heck has 20k custom emoji?” My second thought was, “*Sigh* I guess I’m going to need to do virtualization.”

I had studiously avoided virtualization in emoji-picker-element, namely because 1) it’s complex, 2) I didn’t think I needed it, and 3) it has implications for accessibility.

I’ve been down this road before: Pinafore is basically one big virtual list. I used the ARIA feed role, did all the calculations myself, and added an option to disable “infinite scroll,” since some people don’t like it. This is not my first rodeo! I was just grimacing at all the code I’d have to write, and wondering about the size impact on my “tiny” ~12kB emoji picker.

After a few days, though, the thought popped into my head: what about CSS content-visibility? I could see from the trace that lots of time was being spent in layout and paint, and content-visibility might also help with the “stuttering.” This could be a much simpler solution than full-on virtualization.

If you’re not familiar, content-visibility is a new-ish CSS feature that allows you to “hide” certain parts of the DOM from the perspective of layout and paint. It largely doesn’t affect the accessibility tree (since the DOM nodes are still there), it doesn’t affect find-in-page (⌘+F/Ctrl+F), and it doesn’t require virtualization. All it needs is a size estimate of off-screen elements, so that the browser can reserve space there instead.

Luckily for me, I had a good atomic unit for sizing: the emoji categories. Custom emoji on the Fediverse tend to be divided into bite-sized categories: “blobs,” “cats,” etc.

Screenshot of emoji picker showing categories Blobs and Cats with different numbers of emoji in each but with eight columns in a grid for all

Custom emoji on mastodon.social.

For each category, I already knew the emoji size and the number of rows and columns, so calculating the expected size could be done with CSS custom properties:

.category {
  content-visibility: auto;
  contain-intrinsic-size:
    /* width */
    calc(var(--num-columns) * var(--total-emoji-size))
    /* height */
    calc(var(--num-rows) * var(--total-emoji-size));
}

These placeholders take up exactly as much space as the finished product, so nothing is going to jump around while scrolling.
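
For illustration, the per-category values could be wired up as inline custom properties when each category is rendered. This is only a sketch under my own assumptions (the function and argument names are made up, and --total-emoji-size would presumably be set once at a higher level), not emoji-picker-element’s actual code:

// Hypothetical rendering step: expose each category's grid dimensions to CSS,
// so the contain-intrinsic-size calc() above has real numbers to work with.
function setCategorySizeHints(categoryElement, emojiCount, numColumns) {
  const numRows = Math.ceil(emojiCount / numColumns);
  categoryElement.style.setProperty('--num-columns', numColumns);
  categoryElement.style.setProperty('--num-rows', numRows);
}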

The next thing I did was write a Tachometer benchmark to track my progress. (I love Tachometer.) This helped validate that I was actually improving performance, and by how much.
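
If you haven’t used it, Tachometer loads two or more versions of a page over many samples and reports the difference with confidence intervals. A config file is roughly this shape (a sketch; the file names and sample size are placeholders, not the project’s actual benchmark setup):

{
  "sampleSize": 50,
  "benchmarks": [
    { "name": "before", "url": "benchmark/before.html", "browser": "chrome" },
    { "name": "after", "url": "benchmark/after.html", "browser": "chrome" }
  ]
}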

My first stab was really easy to write, and the perf gains were there… They were just a little disappointing.

For the initial load, I got a roughly 15% improvement in Chrome and 5% in Firefox. (Safari only has content-visibility in Technology Preview, so I can’t test it in Tachometer.) This is nothing to sneeze at, but I knew a virtual list could do a lot better!

So I dug a bit deeper. The layout costs were nearly gone, but there were still other costs that I couldn’t explain. For instance, what’s with this big undifferentiated blob in the Chrome trace?

Screenshot of Chrome DevTools with large block of JavaScript time called "mystery time"

Whenever I feel like Chrome is “hiding” some perf information from me, I do one of two things: bust out chrome://tracing, or (more recently) enable the experimental “show all events” option in DevTools.

This gives you a bit more low-level information than a standard Chrome trace, but without needing to fiddle with a completely different UI. I find it’s a pretty good compromise between the Performance panel and chrome://tracing.

And in this case, I immediately saw something that made the gears turn in my head:

Screenshot of Chrome DevTools with previous mystery time annotated as ResourceFetcher::requestResource

What the heck is ResourceFetcher::requestResource? Well, even without searching the Chromium source code, I had a hunch – could it be all those <img>s? It couldn’t be, right…? I’m using <img loading="lazy">!

Well, I followed my gut and simply commented out the src from each <img>, and what do you know – all those mystery costs went away!

I tested in Firefox as well, and this was also a massive improvement. So this led me to believe that loading="lazy" was not the free lunch I assumed it to be.

At this point, I figured that if I was going to get rid of loading="lazy", I may as well go whole-hog and turn those 40k DOM elements into 20k. After all, if I don’t need an <img>, then I can use CSS to just set the background-image on an ::after pseudo-element on the <button>, cutting the time to create those elements in half.

.onscreen .custom-emoji::after {
  background-image: var(--custom-emoji-background);
}

From there, it was just a simple IntersectionObserver to add the onscreen class when a category scrolled into view, and I had a custom-made loading="lazy" that was much more performant. This time around, Tachometer reported a ~40% improvement in Chrome and ~35% improvement in Firefox. Now that’s more like it!
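
The wiring for that class is small. Roughly, and this is a sketch rather than the library’s exact code (the root margin and the .category lookup are my assumptions):

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      // Once a category nears the viewport, let its emoji backgrounds load
      entry.target.classList.add('onscreen');
      observer.unobserve(entry.target);
    }
  }
}, { rootMargin: '50%' }); // start loading a bit before the category is visible

for (const category of document.querySelectorAll('.category')) {
  observer.observe(category);
}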

Note: I could have used the contentvisibilityautostatechange event instead of IntersectionObserver, but I found cross-browser differences, and plus it would have penalized Safari by forcing it to download all the images eagerly. Once browser support improves, though, I’d definitely use it!
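
For the record, the event-based version would be even shorter. A sketch, assuming one listener per category element:

for (const category of document.querySelectorAll('.category')) {
  category.addEventListener('contentvisibilityautostatechange', (event) => {
    // event.skipped is true while the browser is skipping this element's rendering
    category.classList.toggle('onscreen', !event.skipped);
  });
}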

I felt good about this solution and shipped it. All told, the benchmark clocked a ~45% improvement in both Chrome and Firefox, and the original repro went from ~3 seconds to ~1.3 seconds. The person who reported the bug even thanked me and said that the emoji picker was much more usable now.

Something still doesn’t sit right with me about this, though. Looking at the traces, I can see that rendering 20k DOM nodes is just never going to be as fast as a virtualized list. And if I wanted to support even bigger Fediverse instances with even more emoji, this solution would not scale.

I am impressed, though, with how much you get “for free” with content-visibility. The fact that I didn’t need to change my ARIA strategy at all, or worry about find-in-page, was a godsend. But the perfectionist in me is still irritated by the thought that, for maximum perf, a virtual list is the way to go.

Maybe eventually the web platform will get a real virtual list as a built-in primitive? There were some efforts at this a few years ago, but they seem to have stalled.

I look forward to that day, but for now, I’ll admit that content-visibility is a good rough-and-ready alternative to a virtual list. It’s simple to implement, gives a decent perf boost, and has essentially no accessibility footguns. Just don’t ask me to support 100k custom emoji!

derekgates (57 days ago, Pensacola, FL, USA): I love seeing reductions in JS code to achieve perf improvements!

Garbage collection and closures - JakeArchibald.com


Me, Surma, and Jason were hacking on a thing, and discovered that garbage collection within a function doesn't quite work how we expected.

function demo() {
  const bigArrayBuffer = new ArrayBuffer(100_000_000);

  const id = setTimeout(() => {
    console.log(bigArrayBuffer.byteLength);
  }, 1000);

  return () => clearTimeout(id);
}

globalThis.cancelDemo = demo();

With the above, bigArrayBuffer is leaked forever. I didn't expect that, because:

  • After a second, the function referencing bigArrayBuffer is no longer callable.
  • The returned cancel function doesn't reference bigArrayBuffer.

But that doesn't matter. Here's why:

JavaScript engines are reasonably smart

This doesn't leak:

function demo() {
  const bigArrayBuffer = new ArrayBuffer(100_000_000);
  console.log(bigArrayBuffer.byteLength);
}

demo();

The function executes, bigArrayBuffer is no longer needed, so it's garbage collected.

This also doesn't leak:

function demo() {
  const bigArrayBuffer = new ArrayBuffer(100_000_000);

  setTimeout(() => {
    console.log(bigArrayBuffer.byteLength);
  }, 1000);
}

demo();

In this case:

  1. The engine sees bigArrayBuffer is referenced by inner functions, so it's kept around. It's associated with the scope that was created when demo() was called.
  2. After a second, the function referencing bigArrayBuffer is no longer callable.
  3. Since nothing within the scope is callable, the scope can be garbage collected, along with bigArrayBuffer.

This also doesn't leak:

function demo() {
  const bigArrayBuffer = new ArrayBuffer(100_000_000);

  const id = setTimeout(() => {
    console.log('hello');
  }, 1000);

  return () => clearTimeout(id);
}

globalThis.cancelDemo = demo();

In this case, the engine knows it doesn't need to retain bigArrayBuffer, as none of the inner-callables access it.

The problem case

Here's where it gets messy:

function demo() {
  const bigArrayBuffer = new ArrayBuffer(100_000_000);

  const id = setTimeout(() => {
    console.log(bigArrayBuffer.byteLength);
  }, 1000);

  return () => clearTimeout(id);
}

globalThis.cancelDemo = demo();

This leaks, because:

  1. The engine sees bigArrayBuffer is referenced by inner functions, so it's kept around. It's associated with the scope that was created when demo() was called.
  2. After a second, the function referencing bigArrayBuffer is no longer callable.
  3. But, the scope remains, because the cleanup function within is still callable.
  4. bigArrayBuffer is associated with the scope, so it remains in memory.

I thought engines would be smarter, and GC bigArrayBuffer since it's no longer referenceable, but that isn't the case.

globalThis.cancelDemo = null;

Now bigArrayBuffer can be GC'd, since nothing within the scope is callable.
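
One way to actually watch this happen is with a FinalizationRegistry. This is just a sketch, and bear in mind GC timing is unspecified, so the log may only appear after a while (or after a manual GC from DevTools):

const registry = new FinalizationRegistry((label) => {
  console.log(`${label} was garbage collected`);
});

function demo() {
  const bigArrayBuffer = new ArrayBuffer(100_000_000);
  registry.register(bigArrayBuffer, 'bigArrayBuffer');

  const id = setTimeout(() => {
    console.log(bigArrayBuffer.byteLength);
  }, 1000);

  return () => clearTimeout(id);
}

globalThis.cancelDemo = demo();
// Nothing is logged while cancelDemo is kept around.
// After `globalThis.cancelDemo = null`, the callback can (eventually) fire.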

This isn't specific to timers, it's just how I encountered the issue. For example:

function demo() {
  const bigArrayBuffer = new ArrayBuffer(100_000_000);

  globalThis.innerFunc1 = () => {
    console.log(bigArrayBuffer.byteLength);
  };

  globalThis.innerFunc2 = () => {
    console.log('hello');
  };
}

demo();

// bigArrayBuffer is still retained: it lives in the scope, and innerFunc2
// keeps that scope alive even though it never touches the buffer.
globalThis.innerFunc1 = undefined;

// Only now can bigArrayBuffer be collected, since nothing callable from
// that scope remains.
globalThis.innerFunc2 = undefined;

TIL!

An IIFE is enough to trigger the leak

I originally thought this 'capturing' of values only happened for functions that outlive the initial execution of the parent function, but that isn't the case:

function demo() {
  const bigArrayBuffer = new ArrayBuffer(100_000_000);

  (() => {
    console.log(bigArrayBuffer.byteLength);
  })();

  globalThis.innerFunc = () => {
    console.log('hello');
  };
}

demo();

Here, the inner IIFE is enough to trigger the leak.

It's a cross-browser issue

This whole thing is an issue across browsers, and it's unlikely to be fixed, given the performance cost a fix would incur.

I'm not the first to write about this

And no, this is not due to eval()

Folks on Hacker News and Twitter were quick to point out that this is all because of eval(), but it isn't.

Eval is tricky, because it means code can exist within a scope that can't be statically analysed:

function demo() {
  const bigArrayBuffer1 = new ArrayBuffer(100_000_000);
  const bigArrayBuffer2 = new ArrayBuffer(100_000_000);

  globalThis.innerFunc = () => {
    eval(whatever);
  };
}

demo();

Are either of the buffers accessed within innerFunc? There's no way of knowing. But the browser can statically determine that eval is there. This causes a deopt where everything in the parent scopes is retained.

The browser can statically determine this, because eval acts kinda like a keyword. In this case:

const customEval = eval;

function demo() {
  const bigArrayBuffer1 = new ArrayBuffer(100_000_000);
  const bigArrayBuffer2 = new ArrayBuffer(100_000_000);

  globalThis.innerFunc = () => {
    customEval(whatever);
  };
}

demo();

…the deopt doesn't happen. Because the eval keyword isn't used directly, whatever will be executed in the global scope, not within innerFunc. This is known as 'indirect eval', and MDN has more on the topic.

This behaviour exists specifically so browsers can limit this deopt to cases that can be statically analysed.

derekgates (112 days ago, Pensacola, FL, USA): Amazing to be learning more gotchas with memory latching in JS

Testing HTML With Modern CSS


A long time ago, I wrote a reasonably popular bit of open source code called REVENGE.CSS (the caps are intentional). You should know upfront, this hasn’t been maintained for years and if I ever did get round to maintaining it, it would only be to add the “No Maintenance Intended” badge. Alas, that would technically count as maintenance.

Anyway, I was recently reminded of its existence because, curiously, I was contacted by a company who were looking to sponsor its development. Nothing came of this, which is the usual way these impromptu side quests go. But it got me thinking again about CSS-based testing (testing HTML integrity using CSS selectors) and what recent advancements in CSS itself may have to offer.

In a nutshell, the purpose of REVENGE.CSS is to apply visual regressions to any markup anti-patterns. It makes bad HTML look bad, by styling it using a sickly pink color and the infamous Comic Sans MS font. It was provided as a bookmarklet for some time but I zapped that page in a Marie Kondo-inspired re-platforming of this site.

The selectors used to apply the vengeful styles make liberal use of negation, which was already available *squints at commit history* about 11 years ago?

Here are a few rules pertaining to anchors:

a:not([href]), a[href=""], a[href$="#"], a[href^="javascript"] {...}

Respectively, these cover anchors that

  1. Don’t have href attributes (i.e. don’t conventionally function as links and are not focusable by keyboard)
  2. Have an empty href attribute
  3. Have an href attribute suffixed with a # (an unnamed page fragment)
  4. Are doing some bullhonky with JavaScript, which is the preserve of <button>s

Since I released REVENGE.CSS, I did some more thinking about CSS-based testing and even gave a 2016 talk about it at Front Conference Zurich called “Test Driven HTML”.

One of the things I recommended in this talk was the use of an invalid CSS ERROR property to describe the HTML shortcomings. This way, you could inspect the element and read the error in developer tools. It’s actually kind of neat, because you get a warning icon for free!

crossed out error property with the text "you screwed up here" followed by a yellow warning symbol
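
A rough reconstruction of what that looked like (any unknown property name will do; the browser discards it, but it still shows up, struck through, in the inspector):

a:not([href]) {
  ERROR: 'you screwed up here';
}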

You can even restyle this particular ERROR property in your Chrome dev tools (to remove the line-through style, for starters) should you wish.

For context, REVENGE.CSS previously used pseudo-content to describe the errors/anti-patterns on the page itself. As you might imagine, this came up against a lot of layout issues and often the errors were not (fully) visible. Hence diverting error messages into the inspector.

Custom properties

In 2017, a year or so after the Zurich conference, we would get custom properties: a standardized way to create arbitrary properties/variables in CSS. Not only does this mean we can now define and reuse error styling, but we can also secrete error messages without invalidating the stylesheet:

:root {
  --error-outline: 0.25rem solid red;
}

a:not([href]) {
  outline: var(--error-outline);
  --error: 'The link does not have an href. Did you mean to use a <button>?';
}

Of course, if there are multiple errors, only one would take precedence. So, instead, it makes sense to give them each a unique—if prefixed—name:

a[href^="javascript"] {
  outline: var(--error-outline);
  --error-javascript-href: 'The href does not appear to include a location. Did you mean to use a <button>?';
}

a[disabled] {
  outline: var(--error-outline);
  --error-anchor-disabled: 'The disabled property is not valid on anchors (links). Did you mean to use a <button>?';
}

Now both errors will show up upon inspection in dev tools.

Expressive selectors

Since 2017, we’ve benefited from a lot more CSS selector expressiveness. For example, when I wrote REVENGE.CSS, I would not have been able to match a <label> that both

  • lacks a for attribute and
  • does not contain an applicable form element.

Now I can match such a thing:

label:not(:has(:is(input,output,textarea,select))):not([for]) {
  outline: var(--error-outline);
  --error-unassociated-label: 'The <label> neither uses the `for` attribute nor wraps an applicable form element'
}

By the same token, I can also test for elements that do not have applicable parents or ancestors. In this case, I’m just using a --warning-outline style, since inputs outside of <form>s are kind of okay, sometimes.

input:not(form input) {
  outline: var(--warning-outline);
  --error-input-orphan: 'The input is outside a <form> element. Users may benefit from <form> semantics and behaviors.'
}
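
(The --warning-outline property isn’t defined above; something along these lines would sit alongside the error style, with the exact color and line style being my guess rather than anything canonical:)

:root {
  --error-outline: 0.25rem solid red;
  --warning-outline: 0.25rem dashed orange;
}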

(Side note: It’s interesting to me that :not() allows you to kind of “reach up” in this way.)

Cascade layers

The specificity of these testing selectors varies wildly. Testing for an empty <figcaption> requires much less specificity than testing for a <figure> that doesn’t have an ARIA label or a descendant <figcaption>. To ensure all the tests take precedence over normal styles, they can be placed in the highest of cascade layers.

@layer base, elements, layout, theme, tests;

To ensure errors take precedence over warnings we’re probably looking at declaring error and warning layers within our tests.css stylesheet (should we be maintaining just one). Here is how that might look for a suite of <figure> and <figcaption> tests:

@layer warnings {

  figure[aria-label]:not(:has(figcaption)) {
    outline: var(--warning-outline);
    --warning-figure-label-not-visible: 'The labeling method used is not visible and only available to assistive software';
  }

  figure[aria-label] figcaption {
    outline: var(--warning-outline);
    --warning-overridden-figcaption: 'The figure has a figcaption that is overridden by an ARIA label';
  }
  
}

@layer errors {

  figcaption:not(figure > figcaption) {
    outline: var(--error-outline);
    --error-figcaption-not-child: 'The figcaption is not a direct child of a figure';
  }

  figcaption:empty {
    padding: 0.5ex; /* give it some purchase */
    outline: var(--error-outline);
    --error-figcaption-empty: 'The figcaption is empty';
  }

  figure:not(:is([aria-label], [aria-labelledby])):not(:has(figcaption)) {
    outline: var(--error-outline);
    --error-no-figure-label: 'The figure is not labeled by any applicable method';
  }
  
  figure > figcaption ~ figcaption {
    outline: var(--error-outline);
    --error-multiple-figcaptions: 'There are two figcaptions for one figure';
  }
  
}

Testing without JavaScript?

Inevitably, some people are going to ask “Why don’t you run these kinds of tests with JavaScript? Like most people already do?”

There’s nothing wrong with using JavaScript to test JavaScript and there’s little wrong with using JavaScript to test HTML. But given the power of modern CSS selectors, it’s possible to test for most kinds of HTML pattern using CSS alone. No more elem.parentNode shenanigans!

As a developer who works visually/graphically, mostly in the browser, I prefer seeing visual regressions and inspector information to command line logs. It’s a way of testing that fits with my workflow and the technology I’m most comfortable with.

I like working in CSS but I also think it’s fitting to use declarative code to test declarative code. It’s also useful that these tests just live inside a .css file. Separation of concerns means you can use the tests in your development stack, across development stacks, or lift them out into a bookmarklet to test any page on the web.
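
As a sketch of that bookmarklet route (the stylesheet URL is a placeholder, and the whole thing gets collapsed to one line when saved as a bookmark):

javascript:(() => {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = 'https://example.com/tests.css'; /* wherever tests.css is hosted */
  document.head.append(link);
})();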

I’m a believer in design systems that do not provide behavior (JavaScript). Instead, I prefer to provide just styles and document state alongside them. There are a few reasons for this but the main one is to sublimate the design system away from JS framework churn. The idea of shipping tests with the component CSS written in CSS sits well with me.

How I use this in client work

I had a client for whom I was auditing various sites/properties for accessibility. In the process, I identified a few inaccessible patterns that were quite unique to them and not something generic tests (like those that make up the Lighthouse accessibility suite) would identify.

One of these patterns was the provision of breadcrumb trails unenclosed by a labeled <nav> landmark (as recommended by the WAI). I can identify any use of this pattern with the following test:

ol[class*="breadcrumb"]:not(:is(nav[aria-label], nav[aria-labelledby]) ol) {
  outline: var(--error-outline);
  --error-undiscoverable-breadcrumbs: 'It looks like you have provided breadcrumb navigation outside a labeled `<nav>` landmark';
}

(Note that this test finds both the omission of a <nav> element and the inclusion of a <nav> element but without a label.)

Another issue that came up was content not falling within a landmark (therefore escaping screen reader landmark navigation):

body :not(:is(header,nav,main,aside,footer)):not(:is(header,nav,main,aside,footer) *):not(.skip-link) {
  outline: var(--error-outline);
  --error-content-outside-landmark: 'You have some content that is not inside a landmark (header, nav, main, aside, or footer)';
}

(A more generalized version of this test would have to include the equivalent ARIA roles [role="banner"], [role="navigation"] etc.)
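
That generalized version might look something like this (a sketch extending the selector above with the equivalent ARIA roles):

body :not(:is(header, nav, main, aside, footer, [role="banner"], [role="navigation"], [role="main"], [role="complementary"], [role="contentinfo"])):not(:is(header, nav, main, aside, footer, [role="banner"], [role="navigation"], [role="main"], [role="complementary"], [role="contentinfo"]) *):not(.skip-link) {
  outline: var(--error-outline);
  --error-content-outside-landmark: 'You have some content that is not inside a landmark or an element with an equivalent ARIA role';
}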

As a consultant, I’m often not permitted to access a client’s stack directly, to set up or extend accessibility-related tests. Where I am able to access the stack, there’s often a steep learning curve as to how everything fits together. I also have to go through various internal processes to contribute. It’s often the case there are multiple stacks/sites/platforms involved and they each have idiosyncratic approaches to testing. Some may not have Node-based testing in place yet at all. Some may do testing in a language I can barely read or write, like Java.

Since a test stylesheet is just CSS I can provide it independently. I don’t need to know the stack (or stacks) to which it can be applied. It’s an expedient way for clients to locate instances of specific bad patterns I’ve identified for them—and without having to “onboard” me to help them do so.

And it’s not just accessibility issues CSS tests can be used to find. What about HTML bloat?

:is(div > div > div > div > *) {
  outline: var(--warning-outline);
  --warning-divitis: 'There’s a whole lot of nesting going on here. Is it needed to achieve the layout? (it is not)';
}

Or general usability?

header nav:has(ul > ul) {
  outline: var(--warning-outline);
  --warning-nested-navigation: 'You appear to be using tiered/nested navigation in your header. This can be difficult to traverse. Index pages with tables of content are preferable.';
}

If you liked this post, please check out my videos about the web and maybe buy a T-shirt or hoodie or something.


Golang disables Nagle's algorithm, making it evil on shitty networks

1 public comment
JayM (127 days ago, Atlanta, GA): Grrr.

Mozilla's Original Sin

jwz
Some will tell you that Mozilla's worst decision was to accept funding from Google, and that may have been the first domino, but I hold that implementing DRM is what doomed them, as it led to their culture of capitulation. It demonstrated that their decisions were the decisions of a company shipping products, not those of a non-profit devoted to preserving the open web.

Those are different things and are very much in conflict. They picked one. They picked the wrong one.

In light of Mozilla's recent parade of increasingly terrible decisions, there have been cries of "why doesn't someone fork it?" followed by responses of "here are 5 sketchy forks of it that get no development and that nobody uses". And inevitably following that, several people have made comments in the "Mozilla is an advertising company now" thread to the effect that it is now impossible for a non-corporate, open source project to actually implement a web browser, since a full implementation requires implementing DRM systems which you cannot implement without a license that the Content Mafia will not give you.

This is technically true. ("Technically" being the best kind of "true" in some circles.)

Blaming and shaming:

  • It used to be that to watch Netflix (and others) in an open browser required the use of a third party proprietary plugin. That doesn't work any more: now Netflix will only work in a browser that natively implements DRM.

  • That step happened because Mozilla took that license and implemented DRM.

  • That happened because: "it's in the W3C spec, we didn't have a choice."

  • How did it get into the spec? Oh, it got into the spec because when the Content Mafia pressured W3C to include it, Mozilla caved. At the end of the day they said, "We approve of this and will implement it". Their mission -- their DUTY -- was to pound their shoe on the god damned table and say: "We do not approve, and will not implement if approved."

    But they went and did it just the same.

"But muh market shares!" See, now we're back to the kitten-meat deli again.

(BTW, how's that market share looking these days? Adding DRM really helped you juice those numbers, did it? Nice hockey-stick growth you got there? Good, good.)

If you were unable to watch Netflix in Mozilla out of the box, yes, that would have impacted their market share. You know what else would have happened? Some third party patch would have solved that problem.

When Netscape released the first version of the Mozilla source with no cryptography in it due to US export restrictions, it was approximately 30 minutes before someone outside the US had patched it back in. I'm not exaggerating, it happened that night. This is the sort of software activism at which the open source community excels, even if it is "technically" illegal. ("Technically", again, being the best kind of illegal in some circles.)

Mozilla had a duty to preserve the open web.

Instead they cosplayed as a startup, chasing product dreams of "growth hacking", with Google's ad money as their stand-in for a VC-funding firehose, with absolutely predictable and tragic results.

And those dreams of growth and market penetration failed catastrophically anyway.

(Except for the C-suite, who made out quite well. And Google, who got exactly what they paid for: a decade of antitrust-prosecution insurance. It was never about ad revenue. The on-paper existence of Firefox as a hypothetical competitor kept the Federal wolves at bay, and that's all Google cared about.)


Now hear me out, but What If...? browser development was in the hands of some kind of nonprofit organization?

As I have said many times:

In my humble but correct opinion, Mozilla should be doing two things and two things only:

  1. Building THE reference implementation web browser, and
  2. Being a jugular-snapping attack dog on standards committees.
  3. There is no 3.

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

2 public comments
jlvanderzwan (141 days ago): Let's see what Ladybird will turn into. Although the "looks more like the Meta logo than the actual Meta logo" rebranding doesn't get my hopes up
satadru (150 days ago, New York, NY): 💯💯💯💯💯

Mozilla is an advertising company now

jwz
This seems completely normal and cool and not troublesome in any way.

Mozilla has acquired Anonym, a [blah blah blah] raise the bar for the advertising industry [blah blah blah] while delivering effective advertising solutions. [...]

Anonym was founded with two core beliefs: [blah blah blah] and second, that digital advertising is critical for the sustainability of free content, services and experiences. [...]

As we integrate Anonym into the Mozilla family, we are excited about the possibilities this partnership brings. While Anonym will continue to serve its customer base, together, we are poised to lead the industry toward a future where privacy and effective advertising go hand in hand, supporting a free and open internet.

Anonym was founded in 2022 by former Facebook executives Brad Smallwood and Graham Mudd. The company was backed by Griffin Gaming Partners, Norwest Venture Partners, Heracles Capital as well as a number of strategic individual investors.

Now hear me out, but What If...? browser development was in the hands of some kind of nonprofit organization?

Oh wait.

Previously, previously, previously, previously, previously.

1 public comment
satadru (150 days ago, New York, NY): Sigh...