The Era of the Trident Engine (schepp.dev)
415 points by ttepasse on Jan 25, 2020 | 186 comments



> One part of why Microsoft's ideas didn't really catched on was that we developers just didn't get it. Most of us were amateurs and had no computer degree.

Personally, I'd say it was mainly because we were tech geeks who, back then, actually believed in a web that should be accessible to as many different platforms as possible using as many different browsers as possible, not just Windows and IE users.

In a sense, that notion prevailed: the web is ubiquitous, browser lock-in is considered a douche move and you can still access some really worthwhile sites even with rather simple means.

In another sense, we all failed, lending our hands to help create the current privacy-invading, dark-patterned UX nightmare we're still trying to pass off as a reasonable way of making the world a better place.

I never thought I'd lament the day Microsoft stopped making their own browser core. As a web developer, one less rendering engine to GAF about is nice. As a netizen, the Google dominance is going from unsettling to scary.


> As a web developer, one less rendering engine to GAF about is nice

I started out using the Mozilla Communicator Suite, then Phoenix, and now Firefox as my primary browser. I never defaulted to IE or to Chrome, but I have always strived to make everything I wrote work across as many browsers as possible.

Something funny happened around IE 10: HTML, CSS, and JavaScript all started running correctly on the first try on all browsers, even mobile.

It was glorious for about a year or two, then everything started to diverge again, and with mobile versions of browsers we're heading back towards the hell that was developing for the Web around 2004.

There are certainly more platforms to juggle today but they're more similar than different at this point.


If the web's reliance on the Chromium code proves to be a hindrance, it will take a monumental effort to correct. Brendan Eich, speaking of Brave, said something to the effect that it would take hundreds of engineers years to build a new browser equivalent to Chromium in capability. For a free piece of software.

I think it's more likely that a new communication paradigm will supplant the web than that a Chromium competitor will replace Chromium, just as the new smartphone/cloud computing platforms finally broke the Windows stranglehold. PCs are still dominated by Windows.


The fact that this software is so complex is the problem imo. The focus at this point should be on cleanup and simplification instead of feature bloat. Every design decision should consider "what can we delete?"


Complex software is what customers want. A rendering engine for clean, valid XHTML could be very simple. But everyone wants to be able to see web pages created by technically incompetent designers with "tag soup" HTML. There's no point in complaining, that's just the reality of the modern web.


The complexity in modern browser engines has nothing to do with parsing invalid HTML and everything to do with the vast, absurd feature set of the modern web platform, and the complex interactions between the components.


Yeah, if you were to design HTML, CSS, and JS from the ground up today they'd likely look very different.

The bigger issue is backwards compatibility; the web has bent over backwards to maintain it, which is a blessing and a curse. On the one hand it makes the feature set absurd, but on the other hand the web is (relatively) easy to archive since format interpreters for it are widespread, and pages can be built and then forgotten about because they still work.


> Yeah, if you were to design HTML, CSS, and JS from the ground up today they'd likely look very different.

I'd like to see that experiment done.


This is sort of what Flutter claims to be.


Flutter is a GUI toolkit; it's a different use case entirely, with some small overlap (Flutter can run on the web, or could be used to build the front end for a new browser).


I think he's referring to the fact that Flutter was started as a project, by some of Google's browser and web standards people, to see what they could come up with if they designed a browser from scratch with the knowledge we have today.

This is talked about in a few videos and articles by its creators.


WASM does seem to be on the road to eventually being a better JS.


I wish WASM were AST-based instead of built on a stack-based architecture.


What does that statement mean? As far as I know, stack vs. AST is apples vs. oranges. If you meant a register-based architecture, that would make sense. I don't know if WASM code is verifiable, in which case it's trivial to rewrite it to a register-like architecture (like Dalvik does for JVM bytecode).
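
For what it's worth, here is a rough sketch (plain JavaScript, purely illustrative) of the distinction in question: the same expression (a + b) * c written once as an AST and once as stack-machine-style instructions, which is roughly the shape of encoding WASM uses.

    // AST form: the structure is explicit and an engine walks the tree.
    const ast = { op: '*', left: { op: '+', left: 'a', right: 'b' }, right: 'c' };

    // Stack form: a flat instruction sequence operating on an implicit value stack.
    const code = [
      ['get', 'a'], ['get', 'b'], ['add'],
      ['get', 'c'], ['mul'],
    ];

    function run(code, env) {
      const stack = [];
      for (const [op, arg] of code) {
        if (op === 'get') stack.push(env[arg]);
        else if (op === 'add') stack.push(stack.pop() + stack.pop());
        else if (op === 'mul') stack.push(stack.pop() * stack.pop());
      }
      return stack.pop();
    }

    console.log(run(code, { a: 2, b: 3, c: 4 })); // (2 + 3) * 4 = 20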


I think we can drop all non-flexbox layout without any degradation in functionality.


You mean grid? I suspect you can emulate most if not all of flexbox in grid, but definitely not vice versa.


> A rendering engine for clean, valid XHTML could be very simple. But everyone wants to be able to see web pages created by technically incompetent designers with "tag soup" HTML.

The HTML5 parsing algorithm is not _that_ hard:

https://www.w3.org/TR/2011/WD-html5-20110113/parsing.html

Do you really think that a full XML implementation of the half-dozen specs required for XHTML would represent that significant a savings compared to the actual features browser developers spend their time on?

You're correct that complex software is what people want, but that's complex as in an advanced document layout system with advanced language support, rich media, forms, etc., rather than the format those features are implemented in.


That algorithm - for what it is, namely pretty basic tree construction - is absurdly complicated. Did you actually read and grok all that stuff about the stack of open elements, and how all kinds of elements have special gotcha clauses? How you can't nest stuff like `<div>`s in `<p>`s, even in descendants - except in those weird corner cases where you can? And let's not forget that the page you linked to is only one of a few pages you need to parse HTML; stuff like tokenization also has a bunch of weird legacy modes, and so on. It's pretty crazy; worse than quite a few programming languages, and they have a conceptually much, much more complex domain. The worst part perhaps isn't even the sheer size; it's the haphazard inconsistency. If you don't know all the exceptions, it's hard to predict which part will be exceptional.
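
To make the `<div>`-in-`<p>` case concrete, here is a tiny sketch you can paste into a modern browser's console; the parser silently closes the open <p> when it meets the <div>, and the stray </p> at the end typically produces an extra empty paragraph.

    const container = document.createElement('div');
    container.innerHTML = '<p>before<div>inside</div>after</p>';
    console.log(container.innerHTML);
    // Typically serializes as: <p>before</p><div>inside</div>after<p></p>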

Seriously, XML is arguably bad, but HTML5 is absurdly horrible. The only reason it's acceptable is that it's so widely used that parsers are huge shared projects and most bugs are shallow.

I guess it's all a matter of perspective: sure, you might argue, hey, the spec isn't hundreds of pages, so it's humanly comprehensible. But from my perspective that's a really low bar, and it clears it just barely.

It matters too, because these weird quirks have often hidden things like performance issues, misparses, and security issues due to incorrect normalization.


Again, it's not like there's no work to implement it but … do you think that's more or less complexity than implementing CSS Grids, text layout systems for complex scripts, building a high-performance JavaScript engine, etc.? The browser teams are not shy about saying when they think changes are important for security, performance, or reliability so I'd consider it an indicator that it's not a huge part of their ongoing efforts.


Browsers do much more than just HTML5 parsing.

https://en.wikipedia.org/wiki/Comparison_of_lightweight_web_...


Yes, that was my point? Even if it was plausible to switch to XHTML, that’s a small and rather insignificant part of browser code bases.


"Complex software is what customers want."

I think I disagree. Most of my customers don't even want software. They want to watch a movie, or read something, or make some data transform, or something else that is their actual goal.

Complex software is what we make in order to enable them to achieve their goal; perhaps if we were smarter and better, we'd be able to make simple software that enabled their goals.


Much of the complexity isn’t about HTML rendering, but instead about security and device optimizations.


Examples?

Or perhaps more useful: domains / classes of such issues?

I suspect a lot is:

- XHR

- JS generally

- Cookies

A heatmap of problem areas might be interesting.

Update: that's sort of here:

https://www.cvedetails.com/vulnerabilities-by-types.php


I think you hit on the big ones, but also issues around file system access, secure+performant+efficient use of memory across a lot of devices, and complying with an ever-changing landscape of external standards.

Great link!


That would be incompatible with the mantra 'don't break the web', I'm afraid.


There will probably never be another complete new web browser developed from scratch. Browsers are already being supplanted by native mobile apps, wearable devices, voice interfaces, and (just starting) AR / VR. Browser use is still growing worldwide but the trend is toward slower growth. No one will invest hundreds of person years to build a product in a market with declining growth and entrenched competition.


Doubtful claim, with all the operating systems being brewed as insurance policies against Google pulling a Google on Android. They will all have their own browsers too.


Modern browser engines are reasonably portable. When a new OS arrives the developers will just port Firefox or Chromium to it.


Haiku had a port of Mozilla, and Web+ is WebKit-based, but it still doesn't have a port of Chromium or Firefox.


I don't get it. Firefox works great, maybe even better.


Firefox is built with a similar level of effort. Brendan Eich is talking about the effort required for a company to develop a new, full-featured browser engine to compete with Firefox and Chromium.


It may be true that starting from scratch would be impractical, but it might be practical to take Firefox/Chromium and make deep changes, to the point that it's effectively another browser.

Mozilla themselves have been doing this with Firefox. They've been replacing C++ with Rust (via the Servo rendering-engine project). They've replaced the JavaScript JIT more than once, if I recall correctly.

We've already seen WebKit give rise to two divergent major browsers: Chrome and Safari.


The problem is keeping pace with the upstream: it's not that it's impossible but that it's very expensive — when Google forked WebKit they dedicated a large and very talented (i.e. expensive) engineering team to the project. The same could be done again but you're looking at companies like Microsoft, not startups.


The resources needed to maintain a Gecko-based or Blink-based browser will depend on the amount of customisation. Vivaldi/Opera/Brave are doing fine, but they make relatively shallow changes over Chromium.

I just discovered Goanna on Wikipedia, a fork of the Gecko engine, presumably with relatively thin resources. Don't know how well it compares to mainstream engines though. [0]

I suppose the short version is that the workload is a function of the goals.

[0] https://en.wikipedia.org/wiki/Goanna_(software)


That’s the point: if you’re not customizing Blink, you’re not changing the huge influence which Google has over de facto web standards. If you want to make more than simple customizations you need a significant commitment just to keep pace with the upstream – Microsoft can afford that, Samsung can, etc. but it’s not clear that Brave or Opera can.


Mozilla is rebuilding much of an engine with Servo and related (e.g. the CSS bits that are now actually integrated into FF) projects. I can really see them replacing more and more of FF with Rust in the future, because Mozilla really seem to be doubling down on trust, in all forms. For example, they recently announced that the UI is web component based - which means that using Servo for the window chrome is in the realm of possibility.

Edit: at that point I wouldn't really consider it "FF of yore." A rewrite is really a new product.


Firefox has been using Gecko for chrome since before it was even called Firefox. It's pretty much the reason why the Google browser has the name that it does. Anyone contributing to Firefox leading up to the birth of Chrome, including major contributors on Google payroll, dealt with chrome:// documents (or bindings, or scripts, or...) on a daily basis.


Yeah, but that was via XUL, not via web standards.


That's orthogonal to the comment I was responding to.


Your response to my comment isn't only orthogonal, it isn't even in the same realm of what I was talking about. I was referring to "the window chrome." I never once mentioned "Chrome."

The comment you called out is directly related to what I was talking about, it is not orthogonal.


You can't rewrite history. SahAssar's comment moved the goalposts away from being about the browser engine and towards whether XUL was standardized. Whereas your comment is undeniably about using the browser engine to handle the Firefox UI—which yes, I grasped—although you seem not to have grasped my (incredibly straightforward) response to it, which contained an aside about how Chrome got its name.

I'll repeat myself: Firefox is already using the browser engine for its own UI, and that's been the case—I'll repeat again—since before it was even called Firefox.


I'm not sure what you are trying to say, but the parent comment said

> they recently announced that the UI is web component based - which means that using Servo for the window chrome is in the realm of possibility

IMO some of the key words there are "web component" and "Servo":

1. IIRC you can't render XUL strictly with web components or anything that would be called "web", since that usually refers to web standards or something you would use from a webpage but not included as a standard (non-standardized extensions, flash, silverlight and so on).

2. Regarding the Servo part, I don't think that Servo ever included XUL support at all. Servo isn't used wholesale as the rendering engine in firefox, but the parent comment talks about the possibility to use it and web standards for the browser chrome.

I feel like you are responding to a comment that never was and then responding to a comment on that comment by saying the original comment was something you thought it was, but it wasn't.


Funny, I feel the exact same way.

> IIRC you can't render XUL strictly with web components or anything that would be called "web"

> use [Servo] and web standards for the browser chrome.

Geez, this is excruciating.

If you want to use web standards for the browser chrome, this is not a new development. Because Gecko supports web standards. And Gecko has been used for the browser UI for years. There is nothing particular to Servo here. There's nothing particular to Rust.

Using standardized web tech for the "window chrome" is not "in the realm of possibility". It is possible. Full stop. It has never not been the case.


I'll happily "loose" this whatever this discussion is if I'm wrong (but the "win" reference is now removed from your comment) but the basic idea is:

1. Servo supports many web standards, not XUL

2. The firefox UI used to be built in XUL

3. The firefox UI is now not built on XUL, but rather on web standards

This means that the firefox UI can now be rendered by servo or similar components.

That's basically the parent comment. If you disagree with any of those facts that's an interesting discussion, but I think you think that somebody said that the firefox UI could not be built on gecko previously, which nobody said.


So what we're dealing with here is Mitch Hedberg's "I used to do drugs" bit, only you're not joking.


Android put Chrome over the top.


Firefox worked great for a few months after the big fuss over Quantum coming out and then went right back to just being a browser for enthusiasts and privacy activists. And that's after that whole dumb advertising push they did with billboards reading scary stuff like "Big Browser is watching."

If anything, Firefox is a good example of why it's probably better for aspiring browser competitors to start off by just spinning off from Chromium like Brave did. Not that Brave is perfect either, especially not on desktops, but the saved resources let them focus on finding ways to surpass the competition like improving the mobile ui.


Turning Firefox into a Chrome copycat* with built-in blocklists and Tor Browser Bundle features certainly didn't make it gain more users. Mozilla should've let Brave occupy that space. Brave, Pale Moon and Vivaldi deserve praise.

* References:

1. https://www.dedoimedo.com/computers/firefox-suckfest.html

2. https://www.dedoimedo.com/computers/firefox-disable-australi...

3. https://www.dedoimedo.com/computers/firefox-directory-tiles....

[Edit (fixing link)]: 4. https://www.dedoimedo.com/computers/firefox-addons-future.ht...


The thing costing Firefox users is Google's billion-dollar advertising push for Chrome and heavy promotion on Google properties. That won't be changed by trying to satisfy a small number of critics who are not exactly jumping to contribute significant effort to the project, whereas the Mozilla team inevitably has solid answers about maintenance costs when people bash their decisions.


Well, apparently IE had a thousand developers for a few years, so it should still be possible for a megacorp.


I think the article missed the part where Microsoft basically went to sleep and stopped doing anything with the browser.

Just like in the 1st edition of Bill Gates' book, Microsoft decided the internet thing wasn't as interesting as whatever else they had cooked up. To their credit, they figured out they were wrong.


It’s in there:

> The final nail in the coffin was that after the release of Internet Explorer 6, Microsoft decided to tightly couple new Internet Explorer releases to Windows releases. So they dismantled the Internet Explorer team and integrated it into the Windows product team.


I recall obtaining a new version of IE was a happy occasion, from IE4 to 5 was such an improvement! Then 5.5! And then 6 came with XP and it was... the last release anybody really was eager for. I recall switching to Firefox not long after and I've not switched since.


IE6 was so dominant at the time there would have been little incentive to continue innovating rapidly.

I also wonder if they realised they were just building infrastructure that competitors would use to supplant them. Already by 2001 and 2002, Google was starting to dominate search.


I'm sure that someone did, but my guess is that it was more about some internal political warfare than anything.

Besides, enterprise customer adoption of browser based tech sold lots of SQL Server.


Even if they had, they missed mobile as well, so IE likely would have suffered a similar fate.


For me, it was Firebug, not some noble agenda. Microsoft actually had some quite good debugging tools, but getting the toolchain set up was damn near impossible.


I've been doing .NET dev for the last decade. I have no idea how to get the whole browser part of the toolchain working with Visual Studio. I just don't bother. The client-side debugging is done with a browser that is basically Chrome (Brave these days). You can set up everything to go through VS, but I don't see any benefit in doing so.


> we're still trying to pass off as a reasonable way of making the world a better place.

Outside of the SV bubble, this is seen as the creepy grandstanding that it is.


Just because a common open source engine has been decided upon doesn’t mean there’s no competition or that no innovation can happen.


No, it doesn't. But with Microsoft, Google and Amazon all relying on Chromium, there's going to be massive pull in that direction no matter what. Together, these three companies will now, basically, control the desktop OS market, the browser market and the cloud market. I'm not exactly giddy with excitement about that.


No, this is incorrect. It’s open source.

It's like saying that because Microsoft, Amazon and Google all use Linux, we'll have no innovation.


It just goes to show how tone-deaf most of Microsoft is, and always has been.

They have a handful of products that dominate the market and everything else is a joke. And it seems like every good idea/invention they have, gets turned into a mess by marketing/management.


Most tech companies have 1-5 successful products and the rest are all "jokes"


the webkit monoculture is saddening.

just yesterday i ran into chrome's 2016 img/flex-basis bug which works properly in firefox but requires an extra wrapping div as a work-around in chrome.

https://bugs.chromium.org/p/chromium/issues/detail?id=625560

what possible motivation is there to fix it when you're not competing with anyone?

hopefully microsoft can help fix it now?

also yesterday, i was writing some ui tests that use getBoundingClientRect() at different media query breakpoints. not only does chrome intermittently fail to deliver consistent results between runs (even with judicious timeouts), at different screen pixel densities its rounding errors are several pixels off and accumulate to bork all tests in a major way. on the other hand, firefox behaves deterministically across test runs and there's a single pixel (non-accumulating) error in one of several hundred tests.
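
for context, the kind of check involved looks roughly like this (a minimal sketch; the helper name and the expected values are illustrative, not from any particular test suite). the pain described above is essentially about how large a tolerance you are forced to accept in chrome versus firefox.

    // Compare a measured rect against an expected one with a pixel tolerance,
    // since sub-pixel rounding differs between engines and pixel densities.
    function rectRoughlyEquals(rect, expected, tolerancePx = 1) {
      return ['x', 'y', 'width', 'height'].every(
        key => Math.abs(rect[key] - expected[key]) <= tolerancePx
      );
    }

    const rect = document.body.getBoundingClientRect();
    // The expected values here are purely illustrative:
    console.log(rectRoughlyEquals(rect, { x: 0, y: 0, width: 320, height: 48 }));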

somehow, i made it through the dark ages of IE6 without permanent hair loss, but i dont have fond memories of those years in my career.

now manifest v3 is starting to roll out in Chrome 80. once uBlock Origin stops working, i will use chrome even less (i try only to use it for its devtools currently)


what possible motivation is there to fix it when you're not competing with anyone?

Considering the bug hasn't been fixed in 4 years with competition from Firefox and Edge, I think your assuming that competition drives bug fixes might be a bit off. It clearly doesn't.


> I think your assuming that competition drives bug fixes might be a bit off.

i guess phrased differently, dominant market position (which Chrome has had for those 4 years) is not conducive to "boring" tasks, such as bug fixes - at least those for which workarounds exist. but this is also what happened with IE6 - devs found clumsy workarounds for its bugs, so they never got fixed because even if end users switched to firefox, it's not like devs could suddenly ignore the 800lb gorilla with 90% market share, their sites had to continue to work everywhere, greatly reducing the incentive to switch (users will say "it works fine in both browsers!"). it becomes a self-fulfilling prophecy.

that being said, i've had some positive experience with a rendering bug i've reported getting fixed: https://bugs.chromium.org/p/chromium/issues/detail?id=899342, but that bug was relatively simple since it did not affect layout, just paint.


Chromium and WebKit are not synonymous


ok, it's Blink now.

nuances aside, the fact that 75-80% of the landscape will be blink-based is unfortunate.


blink and webkit are diverging in not-insignificant ways. Is it really that important that various rendering engines have zero shared lineage?


Why does nobody say “the TCP/IP monoculture is maddening,” or “the HTML monoculture is maddening” ?

Seriously, I need somebody to explain it to me because I don’t get it.

Having one core base of code is not a bad thing to me.

What am I missing here?


There is no single TCP monoculture codebase -- even among Unix-alikes, the Linux networking stack is a reimplementation, with no common ancestry with the BSD stack. (To say nothing of routers with their own implementations, often including hardware assist at the high end.)


Specification vs implementation.

And there certainly are people that say the HTML monoculture is maddening. Perhaps you've heard people complaining about Electron being used for desktop apps.


TCP/IP is a protocol and HTML is a format. The reason that’s OK is that lots of people have built different TCP/IP stacks and HTML renderers, and so no one person/organization can control them. They aren’t “one codebase”.


There's a fundamental difference between a standard (eg TCP/IP, HTML...) and an implementation (Chromium/Blink/Trident...).


There are definitely people pushing back against TCP. QUIC is one example. IP is a bit too low level for most people to care about. Also, pretty much every other thing we've tried at that layer has been worse for general internet traffic. I still remember the days of ATM networks and what a constant headache they were to keep working.


You're conflating standards with products.


Quick thanks for everyone’s feedback on this. Some really good points that I hadn’t thought about before.


Well, it's almost the same thing, since Google now controls web standards (and Mozilla).


html/webtech is a horrible shit show (imo), most people would not say that about tcp/ip


Although I can see the advantages for Microsoft, I don't think it's a good thing that the browser engine landscape is going to get even more homogeneous. Firefox is now the only web browser of note that is not based on a WebKit/Blink/Chromium derivative.

Sure, it's one fewer target to test against, but it saddens me to think that this is going to make it even more likely that web developers target Chrome and its ilk only and that the layout bugs in it are becoming the de-facto standard, just as happened with IE6 for ages. This is actually bad for Firefox even in the parts where it adheres to the standards when other browsers won't.


It seems to me that having a single pervasive open source web rendering engine is actually the ideal state of affairs. How is fragmentation helpful in this area? It's duplicated effort and it makes Web development harder (and less efficient) for the millions of Web developers out there. I personally don't see people taking this approach to, say, Linux; generally people are happy that it's dominant in the server realm and that they can learn one thing and use it everywhere.

IE used a proprietary rendering engine. It's now being replaced with a free and open source one. This seems like a strict improvement. It's the opposite situation -- a single proprietary engine being dominant -- that is the doomsday scenario, and that's what we saw a decade and a half ago with IE. The farther we get from that being a possibility, the better.

For related reasons, I'm not happy that DRM has become part of the standard Web feature set.


> It seems to me that having a single pervasive open source web rendering engine is actually the ideal state of affairs. How is fragmentation helpful in this area?

Because the Chromium monoculture has allowed Google to dominate the web standards process. They can veto any feature or force one through by shipping it and pushing sites to depend on it (including their own).

There is an army of Googlers whose job it is to keep tacking on new web standards. And Google will implement the features before proposing the specs, so their competitors—well, now it's just Mozilla and Apple, I guess—are kept playing constant catch-up. Meanwhile, anything that comes from outside of Google will have to brave the same army trying to smother it in committee.

Just ask anyone who's dealt with web standards politics from outside of Google. It isn't fun anymore.

(Oh, yeah, and because there's essentially no accountability now, plenty of these new features rushed through the door are buggy and introduce security holes. It's like IE all over again.)


Apple's WebKit is kept in sync with Chromium - whenever Google adds something, Apple gets it for free within a couple of months - though Apple tends to disable or vendor-prefix new features it doesn't like.


This hasn't been true since Google hard-forked WebKit to Blink in 2013.


The opposite is true. WebKit is now quite far diverged from Blink. Apple very much does not keep it "in sync" with Chromium.


Ah - my mistake. I was operating on the assumption the Blink and WebKit teams were exchanging patches regularly.

That's a dang shame then :/

Apple's a big company - but we saw how they mishandled their own first-party Maps service after divorcing from Google - I can see Apple's Safari potentially falling behind badly if they can't keep up with Google's work on Blink.


They already are; there are some pretty glaring bugs/missing features in webkit nowadays.

In fact, I can't think of any webkit developments that positively surprised me the past few years; development seems glacial, at best. A list of somewhat notable stuff chromium and gecko have that webkit is still missing:

Stuff that's missing because, apparently, it's better to make your devs pay codec licenses for no good reason:

- https://caniuse.com/#feat=webp

- https://caniuse.com/#feat=av1

- https://caniuse.com/#feat=opus

- https://caniuse.com/#feat=ogg-vorbis

- In "fairness": https://caniuse.com/#feat=hevc

There's a whole bunch of stuff that would make it easier for webapps to replace app store apps or otherwise appear native; can't have that!

- https://caniuse.com/#feat=vibration (trying to push people to the apple app store?)

- https://caniuse.com/#feat=webgl2

- https://caniuse.com/#feat=fullscreen

- https://caniuse.com/#feat=registerprotocolhandler

- https://caniuse.com/#feat=css-containment

Weird stuff:

- https://caniuse.com/#feat=flow-root (supported on osx, not ios?)

- https://caniuse.com/#feat=input-datetime (mostly supported on ios, but not osx?)

Then there are the missing features that just seem to be there to bug users and devs:

- https://caniuse.com/#feat=link-icon-png (I mean, seriously?)

Then there's useful stuff they don't seem to be willing to work with:

- https://caniuse.com/#feat=css-text-align-last

- https://caniuse.com/#feat=requestidlecallback

- https://caniuse.com/#feat=shadowdomv1

- https://caniuse.com/#feat=custom-elementsv1

Obviously, there are features that webkit has that others do not, but by and large they're not as interesting or plausibly useful.

Webkit is definitely not blink; not anymore.


The monoculture around Linux is actually a problem. Linux' design is suboptimal in many ways, particularly how drivers run with full privilege. (For a sobering look at how this impacts security, watch [1].) It's unfortunate that the monoculture of Linux is going to make this very difficult to change.

[1]: https://www.youtube.com/watch?v=qrBVXxZDVQY


But it's not monoculture that's the problem here, it's architecture. Different OS with similar architecture, like FreeBSD, can't solve those problems.

The same way reimplementing Chromium in Firefox can't fix anything.


There are significant architectural differences in components of Blink, WebKit, Gecko. For example, the CSS style recalculation is very different—Blink uses a single-threaded engine in C++, WebKit uses a C++ JIT, and Gecko does multithreaded style recalculation in Rust. It would be hard to do WebKit or Gecko's approach with the current Chromium governance, because WebKit's CSS JIT uses parts of JavaScriptCore (per my understanding) and because Chromium doesn't currently allow Rust code.


Sure, but not impactful enough to matter in practice. FreeBSD can claim a lot of architectural differences too and yet they don't solve the problems you were talking about. And the actually different OSes that can help with those problems are not really usable except for some niche use cases.


It's about governance and dominance, not implementation.

It doesn't matter if the code for a single web rendering engine is available if the standards process is closed.

Although in fact there are also implementation issues. Nowhere has it been proven that open source implementations are optimally efficient, secure, and robust. In fact the various debacles around SSL etc strongly suggest otherwise.

The fact that development is either open source or proprietary continues to be a huge problem. They both have strong points, but they also have obvious weaknesses. Realistically neither is optimal for critical public infrastructure.

Currently Google has far too much influence over infrastructure - rather like Microsoft did in the late 90s, but even more so.

Open source won't fix this. Anti-trust action - which is long overdue - might.


Another way of looking at this is that we lost 0 open source engines, and all major engines are open source.

A huge benefit from that is that companies like Igalia are able to push features forward across all engines via OSS contributions.


"WebKit/Blink/Chromium" is a really poor choice in grouping, as far as the context of this conversation goes. Blink is not WebKit.


If Microsoft wanted to take their toys and leave the browser market, they would have done just that — perhaps bundling Firefox (to stab at Google) for a fraction of the cost of continued Edge development.

Trident was not much of a contender in the 2010s. Its formal death does not decrease diversity. Now that Microsoft _embraced_ Chrome, I expect the browser market to become more diverse, not less.


> perhaps bundling Firefox (to stab at Google) for a fraction of the cost of continued Edge development.

I think that would have been a preferable outcome, for it would have pushed at least some fraction of users towards Firefox, potentially helping to shore up the only browser using a (significantly) different engine. Instead we have another Chromium-based browser, which doesn't add anything of significance to the browser landscape.


> If Microsoft wanted to take their toys and leave the browser market, they would have done just that...

With all the coupling with other Windows subsystems, and some features that exist just to enable a Windows-only PC ecosystem, I'm not sure that IE/Edge/Trident/EdgeHTML could be open-sourced on a whim.

> Now that Microsoft _embraced_ Chrome, I expect the browser market to become more diverse, not less.

Internet Explorer just became an OS-vendor-backed version of the NeoPlanet browser. Just the same thing, in a different shell.


Former Internet Explorer SE here: it's not the coupling with Windows that makes it impossible (MSHTML is actually really self-contained). The main reason Microsoft doesn't open-source legacy code is that there are huge chunks of licensed third-party source code that cannot be publicly disclosed.

While those portions could be stripped out, it would be a mammoth task to go through the source code history and identify what belongs to whom and stub it out if necessary (let alone replace it with first-party code).

Whereas MS’ new projects (like .NET Core) were made open-source from the start and made in the open (Cathedral and the Bazaar) - so there’s no mean ol’ lawyers from LCA to stop people having fun.


Hey, thanks for the information. That's really enlightening. Now I know better.


Really enjoyed the overview of all the ahead-of-their-time features introduced by IE!

I feel like many people forget that, back when it competed against Netscape, IE really was the best browser on the market. The problem was that once they had "won" the browser war, Microsoft just completely abandoned development, allowing the product to languish and become the terrible abomination many of us remember having to write ridiculous workarounds to support.


Having more features does not make a browser superior. IE had a universe of memory leaks and CSS rendering quirks. Their rendering was so overly permissive that "anyone" could put a mess together and it'd render. That permissive client almost single-handedly slowed web development progress by a decade, because nobody wanted to produce a browser that failed to render some web pages that could be rendered in other browsers. Nobody cared that the pages in question were gibberish. If anything, all these IE-only features did more to harm web dev than help it. They're why we're in the situation where all this enterprise software still requires old versions of IE to operate.


The idea that "if only web browsers didn't render bad HTML, the web would be so much better" is one of the oldest myths in web development, and I'm kind of amazed that it's still showing up.

What you call the permissiveness of web browsers - in other words, their insistence on attempting to render invalid or badly-formed HTML - is what has made the web succeed at all.

Firstly, it was fundamental from the start: NCSA Mosaic was implemented that way, as was Netscape, so there's no point blaming Microsoft.

Secondly, and far more importantly, the robustness of web browsers is the reason why you can read 99.9% of web pages at all, including the one you're reading right now. (Yes, it's invalid: https://validator.w3.org/nu/?doc=https%3A%2F%2Fnews.ycombina... )

I know it's tempting to believe that draconian error handling would have forced people to code web pages "properly". Unfortunately, when draconian error handling was added to the web (as XHTML), it failed to take off. Check the history: https://www.w3.org/html/wg/wiki/DraconianErrorHandling

Mark Pilgrim wrote several excellent pieces about why non-draconian error handling is better, and as someone who wrote XML feed parsers and validators that were among the most robust and thorough in existence, he is deeply qualified to know. My favourite of those pieces is the "Thought Experiment"[1] but I also recommend [2], which includes:

There are no exceptions to Postel’s Law. Anyone who tries to tell you differently is probably a client-side developer who wants the entire world to change so that their life might be 0.00001% easier. The world doesn’t work that way.

[1] http://web.archive.org/web/20080609005748/http://diveintomar...

[2] http://web.archive.org/web/20090306160434/http://diveintomar...


But your publishing tool had a bug, and it automatically inserted their illegal characters into your carefully and validly authored page, and now all hell has broken loose.

I'm not entirely sure an XML-error-deface is the worst way to expose a program that automatically inserts anyone's garbage in your web page while not having a clear model of acceptable garbage.


@yoz is right. If web browsers didn't render bad HTML, the web would not be so much better; it simply would not have worked.

There is a historical precedent to show it.

I remember when XHTML was the future, about 2002-2005. Pages were loaded in Firefox by an XML parser. If the page was invalid XML for any reason, Firefox would render a parser error message: "error X in line Y, column Z" with a copy of the offending line and a nice caret under the error position thanks to a monospace font.

Wrong percent encoding? No page rendered. Invalid entity? No page rendered. Messy comment separator (two minus signs)? No page rendered. Inserting an element where not allowed? I guess no page rendered.

This is nice from a rigorous developer's perspective, and I appreciated it. But (I used to hate that "but"; a wise person sees the world as it is) it is a catastrophe for real-world adoption.

Fixing one static page on your dev machine, thanks to the error message, is one thing. Making a dynamic website work becomes practically impossible unless all your engineers are extremely rigorous and well-organized, and/or use a framework that generates guaranteed-valid XHTML every time.

But all frameworks (except a few obscure ones) had (have?) no notion of a document tree or proper escaping; they just concatenate text snippets.

From a business perspective, it means your website is much more difficult to get displayed at all (let alone correctly displayed). And even if it works today, it can blow up at any time because of a minor fix anywhere. Worse, the pages your team tests are okay, but real-world visitors will hit some corner case and get an error message intended for a developer.

One may have hoped that some cleaner framework would appear and serve guaranteed-valid XHTML every time. I would have liked this option: developers would create tree hierarchies in memory and serialize them into XHTML, as sketched below. If any commenter can name frameworks that do this, and say how popular they are, please do. Did any of them save XHTML?
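
As a minimal sketch of that "build a tree, then serialize" idea using only standard DOM APIs (no particular framework implied): because the markup is produced by a serializer rather than by string concatenation, it is well-formed by construction.

    const XHTML = 'http://www.w3.org/1999/xhtml';
    const doc = document.implementation.createDocument(XHTML, 'html', null);
    const body = doc.createElementNS(XHTML, 'body');
    const p = doc.createElementNS(XHTML, 'p');
    // Text is escaped by the serializer, so "dangerous" input cannot break the markup.
    p.textContent = 'User input like <script> & "quotes" comes out escaped';
    body.appendChild(p);
    doc.documentElement.appendChild(body);
    console.log(new XMLSerializer().serializeToString(doc));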

That aspect may be reason number one why XHTML was ditched in favor of HTML5: the web worked because it made a best effort to render invalid pages. Any solution that strays from this principle will not be adopted at large.

Meta bonus: we're discussing the HTML level, but we would have had this kind of discussion at any other level, had the stack been consistent a few levels higher (script) or lower (HTTP, TCP). It's funny how HTTP and TCP look like they just work, but they have their own corner cases and spec holes. The ecosystem just happened to have mostly converged on a few implementations that mostly work okay. (No, let's not talk about IPv4, NAT, and the like. ;-)


XHTML was late to the party by at least four years. And the problem was not specifically about doctype validation, it was about how pages looked. Contracts with web users are visual first.


"How page looked"? Any reference on this? When I ask Google why XHTML fails it replies https://www.quora.com/Why-did-the-XHTML-specification-fail which mentions first invalid pages not rendering then more subtle interoperability problems, not "how page look".


I'd say it depends on the features: smooth scrolling was pretty cool at the time (1998/1999), so to me, a nicer and faster UI made a huge difference given other user experience improvements as well.

Also, Netscape was extremely buggy as well: I remember moving friends and family to IE as it was a much better experience in general. For example, one very annoying bug (given the slow downloads at 28.8) was Netscape would often not use its cache for certain code paths and would re-download files again, even if it had them in its cache. In particular window resizing would cause it to re-download files for no reason (this was obviously well before anything like responsive layouts, or imgsrc), and you'd often wait 30-40 seconds to see the page again.

Similarly, I seem to remember IE's "offline mode" worked almost perfectly from the cache and you could revisit pages when not dialed up, but Netscape often would not show anything. (Obviously there were other ways of handling this, like Teleport Pro to download entire sites, but it was convenient IE's features generally just worked from a user perspective).


Oh, and: as someone who was doing a huge amount of web development between 1995 and 2000, I can confirm that IE took the lead in version 3 because it was better in almost every way than Netscape. I spent ages trying to persuade Netscape versions 3 and 4.* to render HTML correctly and not crash.

I'm not saying that Netscape's engineers did a poor job, because web pages are hard to render correctly. (No, not just because of badly-formed HTML.) I'm saying that IE, for whatever reason, was faster, more stable and more correct.


IE took the lead with v4 because it was shipped with Win95 sp2.5 and Win98 v1. There was a lawsuit about it and everything. More importantly, web users don't know or care which browser renders sites "more correctly." They did notice which sites could and could not make a page that worked in either. They probably did care about crashes. But they also cared about the security vulnerabilities created by the other inappropriate applications of Postel's law for which Microsoft remains famous.


This article is not just about the web, though. MsHTA was a great development platform and supported many features that IE did not because of the trusted nature of the applications it ran. This was nevertheless made possible by Trident.


I would argue that in 2005 the best browser was Opera. A lot of things we now take for granted in browsers were pioneered by them. I was sad they cancelled development of their engine instead of open-sourcing it.


Opera was probably using licensed third-party code internally which they couldn't legally release as open source.


I believe this comment is talking about a time prior to that, closer to 2000.


I was using Opera as my chief browser in the 2002-2010 period, so 2005 was Opera's peak, around versions 8-9 and the race to pass the ACID2 test.

I finally switched away to Chrome because Opera became unbearable on Ubuntu: it slowed down and a lot of graphics issues appeared.


Yup, you're correct -- by the time 2005 had rolled around, the Netscape successors had finally produced a good product: Firebird/Firefox! It felt leagues beyond IE6, which came out near 2000 and was basically untouched for 5 years at that point.


Well, Firefox passed the ACID2 test almost 2 years later than Opera (only in 2008), and was rather slow at the time.


Opera was amazing back then. Ran rings around IE in features and was super fast. It’s definitely too bad they didn’t open source it and build a large community around it.


I find the article to be pretty lenient, or amusingly silent, on "why" IE technologies never got adopted or standardized.

I am old enough to remember the time when VML and the others were invented. And at this time "Open Source" was still being called a "cancer" by Microsoft, and Linux users were being threatened with patent-violation claims by MS every two months.

It would have been insane to re-implement or standardize anything coming from Microsoft at that time; it would surely have landed you in front of a judge for patent infringement...

Anything coming from Microsoft was radioactive due to stupid political decisions and aggressive patent & IP attitude.

This is sad, and it caused us to lose 10 years of web evolution, reinventing the wheel many, many times.

Without even considering the millions of hours of engineering wasted fighting broken HTML compatibility, locked-in technologies (Flash, Silverlight, ActiveX, VBScript) and continuously deprecated proprietary APIs.


I was really disappointed that the OP wasn't talking about the UGM-133 Trident:

https://en.wikipedia.org/wiki/UGM-133_Trident_II

(Alas, I think they'll be around for a few decades more, unless someone is wicked enough to use them.)


I thought about Trident VGA cards... thinking "they're still around!?".


This is what I was thinking as well, until I saw the “.dev” in the URL.


Thought the same - I always think of this when I hear Trident: the nuclear missile Harrods would sell https://www.youtube.com/watch?v=XyJh3qKjSMk


I thought of the Trident missile, as well.

Anyone who's either a member of the Baby Boom or Generation X will bear witness to the Cold War for most of the rest of this century.


Ditto. As I understand it, that program largely put me and my sister through college...


So every cloud has a silver lining!

(Even the fiery mushroom cloud surrounding the fireball burning your face off a few years later ...)


Yes he's not talking about Trident missiles.


I don't like IE. I never have. But this part rings true:

"Internet Explorer already had many of the things that we came to reinvent later and that we now celebrate as innovations."

It happens all the time in tech, and IE probably reinvented some things from Hypercard or whatever came before it.


Every one of these "innovations" was invented by Microsoft alone, without any dialog with or input from the standards committees. Their intent was clear: to subvert web standards in any way they possibly could in order to force people onto Windows. And in large part they succeeded.

Treating them as advances that were, sadly, not adopted by the rest of the web takes a lot of chutzpah.


That was the standard way of doing things at the beginning of the web: let browsers experiment, then standardize what works. Everybody was doing this.

The problem with Microsoft was aggressive pricing and forced default installation, not the technical side of the browser.


This is not true. It was only Microsoft's innovations that were tied to a specific operating system. Other browsers were cross platform. Even Microsoft's short-lived IE for Mac was not really IE.

ActiveX nearly destroyed the web, and as recently as a few years ago there were still enterprises digging themselves out of the proprietary mess they developed themselves into.


It's unreasonable to blame ActiveX for this. NPAPI was not somehow implicitly portable; it was still just native code being embedded in the browser, with no particular guarantee of quality and no guarantee that a Win32 NPAPI plugin had Mac and Linux ports.

The ActiveX API was also well-specified and debuggable in a way NPAPI was not, and it was possible to embed it in other runtime environments like Office documents and Visual Basic applications relatively easily, because COM was truly wonderful technology (even if using it was, at times, very painful). It's not a coincidence that Firefox made heavy use of COM for a long time (though they've rightly been removing it).

Having used COM and ActiveX extensively, despite their flaws they were vastly superior technologies compared to NPAPI and they were a pleasure to work with. The security model was bad but again none of the competitor technologies were any better. I shipped large-scale native apps that successfully embedded ActiveX controls (like the flash player) and this was reasonable specifically because of how good the APIs were.

Even after NPAPI and ActiveX made an exit, the web still was infected by swf files and unity games and what have you. Those things are all either dead now or on life support because it turns out browser vendors don't want to maintain them and they're not portable.


Sounds like what chrome is doing now.


>MHTML was proposed as an open standard to the IETF but somehow it never took off.

I had always wished MHT had replaced PDF. But due to the rivalry at the time, Firefox refused to support MHT (even to this day). WebKit has WebArchive, which as far as I know isn't supported outside of Apple's ecosystem.

I don't actually buy the argument that it was Vista that slowed down IE development. IE 7 wasn't that much different from IE 6. It shows Microsoft had very little incentive to improve the Web. I don't know how many people actually hated them for not complying with the ACID "standards"; I certainly didn't. But at the time the Web had so much low-hanging fruit that a lot of people (or just me) were pissed Microsoft didn't even bother improving things while holding web standards hostage with IE's dominance. Along with the crap called Windows Vista.

Luckily we got the first iPhone, 2 (?) years later. And the rest is history.


MHT does not do what PDF does. Web text rendering remains abysmal in comparison to what 1980s computer technology can achieve.


But it would do a lot of the things people use PDF for much better.


This is a shame in some ways. Internet Explorer was always very strict about how it worked.

Anything before 8 was a challenge due to some atrocious bugs.

This had its problems, but it really taught you not to write sloppy CSS and JS, as it usually just wouldn't work.

In versions after 7, basically anything that wasn't in the spec wasn't implemented, so you had to write code pretty much bang on the spec.

Just this Friday I solved a rendering problem with IE where SVG TEXT elements weren't being rendered correctly: I was calling element.innerHTML to set the text, which was incorrect. I should have been using element.textContent. Using element.innerHTML is incorrect, as SVG elements shouldn't have an innerHTML property (they are not HTML). IE11 was actually working correctly, where the latest Chrome's behaviour was incorrect.
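
A small sketch of the fix described above (the coordinates are illustrative, and the page is assumed to already contain an <svg> element):

    const SVG_NS = 'http://www.w3.org/2000/svg';
    const label = document.createElementNS(SVG_NS, 'text');
    label.setAttribute('x', '10');
    label.setAttribute('y', '20');
    label.textContent = 'Hello';   // works in IE11 as well as current browsers
    // label.innerHTML = 'Hello';  // relies on SVG elements exposing innerHTML
    document.querySelector('svg').appendChild(label);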

So spending time making it work in IE has improved my code.


>Using element.innerHTML is incorrect as SVG elements shouldn't have a innerHTML property

Is that definitely the case? Chrome, Firefox, and Safari all return a value for the innerHTML property of an element in an SVG document.

This W3C spec [0] specifically mentions XML documents in addition to HTML documents. And as I understand it, it seems like embedded SVG elements also inherit from the Element interface which includes InnerHTML.

IE11 might also be correct, following an older spec, but I don't think you can jump to the conclusion that Chrome is wrong just because the property is called innerHTML.

[0] https://w3c.github.io/DOM-Parsing/#the-innerhtml-mixin


That is interesting.

I assumed that innerHTML must have been wrong because textContent works in all the browsers I have tried it on, whereas innerHTML doesn't. A cursory search for textContent vs innerHTML seemed to suggest textContent was the correct way.

It looks like it isn't a simple case of IE11 (I haven't had a chance to test on 9 & 10 yet) being correct and the others being incorrect. Thanks for the info.


Totally agree with your conclusion. I remember generating a JavaScript array in a server-side loop, and the code wouldn't work (at first) because of a trailing comma on the last element before the closing bracket. Firefox would ignore it, but IE 8 wanted perfect, no-extraneous-commas code.
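
To illustrate the pitfall (a hedged sketch; modern engines simply ignore the trailing comma):

    // Emitted by a server-side loop that appends a comma after every element:
    var items = [
      'alpha',
      'beta',
      'gamma',   // trailing comma: harmless in modern engines (length is 3),
    ];           // but per the comment above, IE 8 rejected code like this
    console.log(items.length);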


I wish they'd open-source it and put it on GitHub... one thing I've noticed with IE is that for non-script/app HTML it tends to use far less memory and is faster at rendering than Firefox or the Webkit browsers. I suppose that's because it originally was written to work in Win95 with very limited memory, and so there's been a lot of optimisations around that.


> One part of why Microsoft's ideas didn't really catched on was that we developers just didn't get it. Most of us were amateurs and had no computer degree. Instead we were so busy learning about semantic markup and CSS that we totally missed the rest

Should we address the elephant in the room? For those of us without a CS degree, Flash was easily the first choice; IE had lag problems whenever there were more than three layers of <DIV>s around. Yes, it had lots of cool capabilities, but they were rendered largely impractical. Even Adobe Flex was about to take over the "business app" world.

On the Microsoft side, .NET happened and Silverlight happened.

But ultimately, the iPhone happened. 1-charge per day battery phones happened.

BTW the article didn't mention <IMG DYNSRC> and background MIDI music support.


I'm a bit irked at the specific way the article keeps on bringing up unstandardized and clunky-as-hell looking features (I never knew about IE's behavior attribute; that looks scary) that were only ever implemented in IE as if IE was being unfairly judged. It's neat to see IE did those things, but the article glosses over that web features are good when they're cross-platform, standardized, and mesh well with other/future features. It's easy to just add features if you're IE and not worried about those things. It's telling that the article links to a demo for one of the old features and mentions you need a VM with some specific old IE for the demo to work. One way standardized features are better is that they tend to stay working in future browsers.


I was wondering what the building is shown in the first photo of this article. It's this:

https://en.m.wikipedia.org/wiki/Buzludzha_monument


Skimming through this article, while all these IE features are cool, they seem to have been created with no common strategy or goal other than to make something that another MS team thought useful, like MSXML for the Outlook Web Access team.

The implementations of them seem to be totally inconsistent (sometimes weird nonstandard CSS syntax, weird meta tags, ".htc" files, etc.), and very IE-specific, so it was almost impossible for other browsers to implement them.

This is the real reason why they cranked out all these weird features: to vendor-lock people into IE.


I hope they bring the Chromium-based version of Edge to the Xbox too. I was playing some Babylon.js demos and it kept freezing up; once I had to force-restart the whole thing. But WebGL plus the controller API means you could ship to the console directly.

Not sure if the PlayStation browser could do this either, but it would be nice, since consoles are more locked down. They do seem to be opening up, though: Fortnite, I believe, is the first cross-platform game where your PlayStation and Xbox friends can play together. Then again, if you created something like a virtual world where dynamic content is allowed, I think the console makers might not be too thrilled about that, so I really like the idea of being able to publish console games as just a web app directly.

I also think Microsoft is more open when it comes to consoles. For example, you can go to Walmart, buy an Xbox, and turn it into a dev kit, while I believe the others make you buy expensive dev-kit hardware that isn't the same as the console that already shipped; maybe this is because of Microsoft's PC background. So from my understanding it's easier to publish to the Xbox if you're making a native game compared to the other consoles: you can get started faster but still need approval to ship. With the PlayStation, I think you have to spend a lot of money just to license the tools before you even write that first line of code.


Yes, the Redmond Middle School science project called Internet Explorer prototyped some excellent concepts. For sure. You can get a LOT of cool concepts for a hundred megabucks a year from people of the intellectual and creative firepower hired by Microsoft.

But, why did it fail?

Of course, as the article says, one reason was the Ballmer-era tsunami of bureaucratic confusion that inundated Microsoft and stymied the release of the Windows versions that carried IE.

Another was security. Cybercreeps love IE. Drive-by malware? IE. "Internet Exploder."

A third was the cost of compatibility. It was necessary for web developers and, later, web app developers to develop and test once on all the other browsers and then again on each version of IE. It didn't help that it came bundled with Windows: large-org IT managers often forced their users onto an atrocity like IE6 for years after it had been superseded. This bogus standardization shackled a ball and chain to third-party developers.

A fourth was, paradoxically, the whole ActiveX Control subsystem. Apartment threading, anyone? Monikers, anyone? It was just barely good enough that DICOM and other high-end imaging systems could use it. That took away incentives to get <canvas>-like stuff working well.

Other companies have done similar things. DECNet, GM's MAP/TOP. Apollo Token Ring. SysV vs. BSD. But none of those things hobbled an industry quite like IE.

Trebuchets are cool tech too. But imagine if every UPS truck had to carry one to place packages on peoples' doorsteps.


Every now and then I daydream about what it would take to totally re-invent the frontend stack with today's knowledge that the web is a place for applications, not just documents. There are decades of legacy and backwards-compatibility baggage in HTML+CSS+JS, and it's basically impossible for a small entity with little funding to build their own browser engine now. What would a new spec for a platform for delivering applications look like?


An incredibly well-cited and researched article! So many of these I had never heard of. (If only M$ had documented these as well as MDN does, maybe folks would have used them more and demanded their implementation in the competing browsers. Ah, getting to the end of the article, this is mentioned.)

>The other reason could have been a lack of platforms to spread knowledge to the masses. The internet was still in its infancy, so there was no MDN ...

Really incredible demos, too. You can see the URL in some of the demos; it looks like the author stood up a VM and wrote many of the demos. (Recent Star Wars trailers in Windows XP?) Ah, later there's an Internet Archive link to an M$-published VM image!

> You think Internet Explorer could not animate stuff? Not entirely true. Because, back in the days there was already SMIL, the Synchronized Multimedia Integration Language. SMIL is a markup language to describe multimedia presentations, defining markup for timing, layout, animations, visual transitions, and media embedding. While Microsoft was heavily involved in the creation of this new W3C standard, they ultimately decided against implementing it in Internet Explorer.

This brings back bad memories; I recall being taught SMIL briefly in a web-dev class in college. I think Mozilla implemented it. IIRC, you could declaratively animate SVG via XML-like tags, rather than JS or CSS. I didn't know it could access DOM/HTML, or play audio/video!
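
For anyone who never saw it, the declarative SVG flavour looked roughly like this minimal sketch; the <animate> element is the SMIL part, with no JS or CSS involved:

    <svg xmlns="http://www.w3.org/2000/svg" width="200" height="60">
      <circle cx="30" cy="30" r="20" fill="teal">
        <!-- Slide the circle back and forth forever, purely declaratively -->
        <animate attributeName="cx"
                 values="30;170;30"
                 dur="3s"
                 repeatCount="indefinite" />
      </circle>
    </svg>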

The implementation of the currentScript() example using `i` without declaring it made me panic for a moment. (Thank god JS doesn't allow that to work; I had to double-check in a console quickly though: surely `undefined`++ won't convert anything to a number.)


> surely `undefined`++ won't convert anything to a number

You're right, it converts it to Not-A-Number. undefined++ becomes NaN.
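
A minimal console check of that behaviour:

    let i;            // declared but never assigned, so its value is undefined
    i++;              // undefined coerces to NaN under ++, and NaN + 1 is still NaN
    console.log(i);   // NaN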


This talk by Adam Bosworth from 2007 is on the same topic and is quite an interesting take on the early history of IE. https://www.eweek.com/networking/googles-bosworth-why-ajax-f...


We missed out on using these features in part because we were too concerned with making pixel-perfect copies of designs that had to look exactly the same in every browser, because that is what our clients signed off on. This was in the days before responsive design, and all our time was spent fiddling with margins and padding to get things pixel-perfect. There was no scope to take a step back and play with some of this Microsoft technology.

Invariably the cast-in-stone designs were PDF drawings from Photoshop, where the art was in second-guessing what the designer was thinking and where they had stolen their influences from.

You could not implement a table in a cool way in IE and in a more boring way in the other browsers; the knowledge just wasn't there, nor was the space to experiment.


I remember using page transitions as a teenager to emulate PowerPoint in a programmable info-screen thing for the school library. Baby's first PHP. Lots of copy-and-paste cos I didn't really understand the fuss about writing functions. Good times, RIP.


Oh yeah, PowerPoint Jeopardy! I remember we did that in high school science class once. Totally forgot about PowerPoint being able to link buttons to different slides, but in that case I do think it was a lot of copying and pasting.


Why did Microsoft choose Chromium, and not Firefox?


Gecko is apparently a pain to embed. Other browsers built on Gecko, such as Epiphany and Flock, later moved to WebKit.


Blink is also not embeddable.


Market share. Many web developers these days are bad at their jobs (or are ordered to be by their bosses) and only test things on Google Chrome, causing many websites to have issues on Firefox (the same happened to Trident). Using Chromium as a base allows them to ride Google's market share and have almost all websites work with their browser.


Here is another theory:

https://news.ycombinator.com/item?id=22058923

Microsoft was already somewhat invested in Chromium tech before starting, so the skill and knowledge were there.

I would love to read an official interview though.


The current portability of the engine. The Firefox codebase carries some legacy burden, plus Rust as a build dependency.


The Rust part is actually an improvement, as it is not in addition to but instead of legacy components. Building any Rust component together with all its dependencies is one step and fully modernized. That's one of the benefits of moving away from legacy components with competing build systems duct-taped together.

Additionally, pure Rust dependencies (which are the ultimate goal of the effort) are all statically linked, and different versions of the same library can be used multiple times without symbol-renaming voodoo etc., solving all sorts of typical DLL-hell issues associated with adopting a huge foreign component into a project.


Yes, yes, I like the new libraries, too. The Quantum team is following the Boy Scout rule -- when they refactor code, they leave it better than they found it.

But Microsoft needs to release New Edge today. And today, Gecko is not made from a bunch of reusable, standalone Rust libraries. It's built from a couple of good libraries, plus a lot of ugly, tangled legacy. Some day, Gecko might be the best choice for software that wants to embed web technologies, but that's not today.


I never commented on the viability of Gecko for MS Edge; I merely rejected the notion that Rust was part of the reason it wasn't viable.


A lot of it boils down to codebase quality. I know some hardcore browser developers, and they prefer Chromium.


I never use Edge on my computer because I found it to be slower than Firefox.

But I still use IE11, ironically, because I like to develop quick HTA tools for the enterprise in HTML and TypeScript, powered by Excel documents or an Access database via COM, instead of having to download Electron and whatnot, which I don't need since I'm only developing for Windows.
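
For anyone who has never seen one, here is a stripped-down sketch of that kind of HTA (the file name, path, and cell being read are made up, and it uses plain JScript rather than TypeScript for brevity; the ActiveXObject/Excel COM calls are the standard automation ones):

    <!-- inventory.hta: a hypothetical minimal HTA reading one cell from Excel via COM -->
    <html>
    <head>
      <title>Inventory Tool</title>
      <HTA:APPLICATION id="app" applicationname="InventoryTool" border="thin" />
      <script language="JScript">
        function loadCell() {
          var excel = new ActiveXObject("Excel.Application");           // COM automation server
          var book  = excel.Workbooks.Open("C:\\data\\inventory.xlsx"); // hypothetical path
          var value = book.Worksheets(1).Cells(1, 1).Value;             // read A1 on the first sheet
          document.getElementById("out").innerText = value;
          book.Close(false);                                            // close without saving
          excel.Quit();
        }
      </script>
    </head>
    <body onload="loadCell()">
      <div id="out"></div>
    </body>
    </html>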

It's unfortunate that everybody lost nearly 10 years because Microsoft stopped taking web tech seriously in order to focus on Silverlight and whatnot, which they later abandoned anyway.

I need to find an alternative to HTA that still supports COM, though, since eventually Windows will stop supporting HTA apps.


> It's unfortunate that everybody lost nearly 10 years because Microsoft stopped taking web tech seriously

Yes it is. It is also unfortunate that some people refuse to upgrade their browsers and engineers have to jump through hoops in order to still support them... ;)



Let's not celebrate embrace, extend, extinguish, but let's also not celebrate MS handing the keys to Google so they can do the same with Chrome. Why help your competitor? They should have gone with Firefox.

Around 2012, when all three major browsers had similar market share [1], is what we want: everyone can propose and try extensions, but no one can ram them down our throats.

[1]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers


In adopting Chromium they're not just helping their competitor; their competitor is now also helping them.

At the end of the day, the default option for most users would naturally be to stick to Edge, but they instead have been installing Chrome. Why? Because Chrome was better. Now Edge is like Chrome.

There's much less of a reason for users to install Chrome as a result. Microsoft are likely to regain some marketshare with this approach. If Bing is good enough, that may also mean billions in revenue.


That's it, and everything syncs across easily. Installing Chrome is now pretty much redundant.


> If Bing is good enough

I've actually noticed it is better in some kinds of searches, and worse in others. Thanks to Edge, it's my default and to be honest, I don't even realize it's not Google most days.


1. Google will still push people to Chrome. Users could well still switch because of that.

2. Mozilla is not a competitor but they would get just as much free work. (Or MS sponsors Firefox for the Bing plug, and they get that side benefit.)


For others who weren't aware, "embrace, extend, extinguish" was an explicit strategy discussed internally at Microsoft for gaining control of the internet https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...

Though I think there are real innovations discussed in the article, the portrayal of Microsoft as an altruistic actor trying to get others to adopt their tech is downright dishonest given what we know about their internal discussions of their motivations and strategy at the time.

I found the article really interesting, but I could do without the rose colored glasses.


I'm not sure this article is celebrating EEE, just more celebrating innovation in general. Regardless of the goal of these features, in isolation these features are interesting and fun to read about. And not to mention they did lead to things like SVG, fetch, etc, so it wasn't all bad in the end.


Warning: article is excellent, but contains rapidly flashing colors that could trigger a photosensitive epileptic reaction.


> One part of why Microsoft's ideas didn't really catched on was that we developers just didn't get it. Most of us were amateurs and had no computer degree.

Well, only if you look at just the web and exclude the bulk of enterprise software in the '00s.


This is generally good, but the conclusion is mostly wrong.

> One part of why Microsoft's ideas didn't really catched on was that we developers just didn't get it. Most of us were amateurs and had no computer degree. Instead we were so busy learning about semantic markup and CSS that we totally missed the rest. And finally, I think too few people back then were fluent enough in JavaScript, let alone in architecting complex applications with JavaScript to appreciate things like HTML Components, Data Bindings or Default Behaviors. Not to speak of those weird XML sprinkles and VML.

> The other reason could have been a lack of platforms to spread knowledge to the masses. The internet was still in its infancy, so there was no MDN, no Smashing Magazine, no Codepen, no Hackernoon, no Dev.to and almost no personal blogs with articles on these things. Except Webmonkey.

People were building complex JavaScript apps back then (the term DHTML was coined in 1997), and there were plenty of experienced software developers and people interested in really learning how to code well. Similarly, there were many sites where you could learn techniques — not just A List Apart but many personal blogs and sites like Philip Greenspun's https://philip.greenspun.com/panda/ (1997). If you look at the first A List Apart entry in the Wayback Machine, they list many well-known resources:

http://web.archive.org/web/19991005040451fw_/http://www.alis...

(There is, however, a quite legitimate argument that back then English fluency was a very significant barrier)

The main problem was that 1990s Microsoft was all about “cutting off their air supply”. They threw huge amounts of money into building out tons of features, exclusive bundling agreements and promotions with various other companies, etc., but they were not nearly as lavish in spending on QA or developing web standards, even before the collapse of Netscape led them to pull most of the IE team away. If you tried to use most of the features listed, they often had performance issues, odd quirks and limitations, or even crashing bugs which made them hard to use in a production project, and many of those bugs took years to be fixed or never were.

In many cases — XHR being perhaps the best example — going through the process of cleaning the spec up to the point where another browser would implement it might have led to the modern era dawning a decade earlier, with a better-than-weekend-hackathon-grade implementation in IE. I look at that era of Microsoft as a tragedy of management: they had some very smart people but no strategy other than “prevent competition”.


Having the width include the border, because that is how a physical box works, has to be one of the all-time stupid design decisions. What sort of management structure would lead to something like that? It suggests there was some guy/girl acting as a supreme dictator who made lots of these decisions.

While committees often make crazy decisions, this is the other end of the spectrum. Similarly, the orders from the Office team for extensions.

The stories behind these things would make even better reading.


> Having the width include the border, because that is how a physical box works, has to be one of the all-time stupid design decisions.

...no? If I want two side by side divs, I could give them width: 50%. But with regular box-sizing, if I apply a border to those boxes they'll stack vertically. That's pretty dumb.
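
A small illustration of the two models (class names made up):

    /* Default W3C model (content-box): each div is 50% wide *plus* a pixel of
       border on each side, so the pair adds up to more than 100% and wraps. */
    .w3c-col {
      float: left;
      width: 50%;
      border: 1px solid #333;
    }

    /* The old IE model, now opt-in: the border is counted inside the 50%,
       so the two columns fit side by side. */
    .ie-col {
      float: left;
      width: 50%;
      border: 1px solid #333;
      box-sizing: border-box;
    }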


While having a very long, complete answer is interesting, there's a very simple tl;dr that answers the question just as well.

You only have N bandwidth available to the house. Cable companies have a tradition of providing content; content can be very complex, and very rarely was content ever coming back from the house. So, instead of doing N/2 upload and N/2 download, they did, let's say, N/4 upload and 3N/4 download. Why haven't they fixed it? Legacy systems. We've all been there.


[flagged]


Your comment is needlessly insulting and very patronizing without actually providing any substance or even a single point of contention.


Disclosure: I work at Google on Chrome. Opinions are my own.

Using chromium as a base, browsers will have to differentiate by offering a better product.

“Better” is less likely to mean better performance, compatibility, or accessibility, since those largely come from the shared engine.

By using much of the same code as another browser implementer, any browser vendor [hint hint] that still makes their own engine could reduce the resources they put on the foundations and web platform and put more of it on the product itself.

Perhaps we’ll have more groundbreaking innovations to move browsers forward. The last few big ones: multi-process architecture, tabs.

In effect, by using the same foundations, the browser wars could in fact be reignited and the users could be the winners.

On the web platform side, i.e. the stuff you see on MDN and in W3C specs, using the same “base” doesn't mean the browsers won't have different implementations of future APIs if the vendors' opinions diverge strongly. Case in point: Chromium used to use WebKit as its renderer and now uses Blink, a fork of WebKit.


> By using much of the same code as another browser implementer, any browser vendor [hint hint] that still makes their own engine could reduce the resources they put on the foundations and web platform and put more of it on the product itself.

In other words, throw out all the advantages that come from using Rust in the browser in favor of a codebase that has a policy forbidding any use of that language. No thanks.

Furthermore, I have to note the irony that you say nobody else should implement their own engine when your team is the one that forked Blink from WebKit in the first place.


Open browsers have been around for decades; Chromium didn't offer anything new there, and as you say it even started off as an open browser itself. If everyone did what you suggest, there is a net loss for the user: where there used to be competition in browser backends, there is now whatever Google considers it in its interest to merge into Chromium for others to also adopt (until the point where someone manages a fork and we're back at the part where you say companies should drop their engines for whatever is popular instead).


About the web APIs: I didn't want to imply that Blink was used instead of WebKit for that reason; it was just an example showing that rendering engines can still evolve independently.



