Flow browser passes the Acid tests (ekioh.com)
525 points by methyl on June 13, 2020 | 321 comments



This actually looks pretty interesting.

They started out as an SVG engine for set-top boxes (embedded devices running on TVs), since browsers at the time weren't fast/light enough for the underpowered chips in set-top boxes. Then when the devices got better chips, they realized perf still wasn't good enough for big 4K screens – and that multicore rendering could help.

So they've implemented a full HTML/CSS/JS browser from scratch, all to take advantage of a multicore architecture (today's browsers all render on a single thread), which has enabled (they claim) greater than 4x better framerates during web animations than the stable channels of Chrome/Blink and Safari/WebKit. Oh, and 60% faster layout on a quad-core machine.

They also claim "extremely low memory consumption", which I think would be quite appealing to many HNers, though that may only be true of their HTML engine and not of the full browser (e.g. when using multiple tabs and hungry JS apps).


Yesterday I stumbled upon a post here about a guy trying to use a Pi as his main machine for a couple of days, and one of his conclusions was that he needed 8GB of RAM to run Chromium acceptably. It made me think: hardware has evolved into these amazingly small and powerful machines, but software is bloating faster. All this is to say that software such as Flow might be the way back to more performant ways of doing “essential” computing.


There was a meme when I was at Google (~2012) that went:

"What would you do if you had 16GB of RAM?"

"I'll tell you what I'd do with that. 2 Chrome tabs at the same time."

(Superimposed over the scene from Office Space.)

(On a more charitable note, I worked with the Chrome performance team on a few occasions. The reason for Chrome's massive memory requirements is that they make memory vs. CPU/performance/security tradeoffs on a regular basis, and usually make them against memory. For example, the reason you can have smooth animations in Chrome is that there are multiple in-memory representations of the layout: changing a CSS property relies on cached parsing/cascading/layout decisions and only needs a repaint on the GPU, using blocks of the page that have been pre-rendered into a texture. For another example, the multi-process architecture that gives each webpage isolation from the others is a lot more RAM-intensive than a shared-state architecture that could share common components, like say the object code from compiling shared JS libraries.)
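To make the animation tradeoff concrete, here's a minimal CSS sketch (hypothetical class names): animating transform or opacity lets the compositor just re-blend those cached textures, while animating a layout property like left throws away the cached layout work every frame.

  /* Cheap: compositor-only, re-blends pre-rendered GPU textures each frame. */
  .panel      { transition: transform 300ms; }
  .panel.open { transform: translateX(240px); }

  /* Expensive: every frame re-runs style/layout before repainting. */
  .panel-naive      { position: relative; transition: left 300ms; }
  .panel-naive.open { left: 240px; }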


> The reason for Chrome's massive memory requirements is that they make memory vs. CPU/performance/security tradeoffs on a regular basis

Do they actually test both approaches, or is it more a matter of "we can code it way A or way B, and way A uses more memory, but we think it'll be faster"?

Many years ago I worked at Watcom, back when they made "the" C++ compiler for DOS games. (Doom, Descent, Duke Nukem 3D, and Tomb Raider all used it.)

The compiler had various optimization settings; some optimized for size, while others optimized for speed. The compiler was used to compile itself. They normally compiled with the "optimize for size" settings enabled, but one time they decided to try optimizing it for speed to see how fast it could be.

It turned out that optimizing it "for speed" made it considerably slower because of the cache.


The Chrome team does exactly what you describe. They both compile for size on some platforms (because in practice it's faster, for several reasons) and spend a lot of time researching speed vs. memory trade-offs and making those deliberately. In recent times the memory trade-offs have also sometimes been related to reducing power consumption.


There was a lot of debate back in the Gentoo days about whether to compile with -O3, -O2, or -Os

-Os meant faster launch times, which for desktop Linux made sense, but in classic Gentoo fashion a big group used -O3 and then did a bunch of hacks to make sure binaries would be loaded at startup and never be evicted from memory (even if the application itself wasn't running).

Gentoo was delightfully silly about this stuff, and mostly self aware that it was excessive.


Oh god I remember this. Compiling the compiler to compile it again to compile the kernel to install everything. Every few weeks. While trying to find something that used less memory than openbox.


Depending on the conditions, you can get faster speed with -Os even if the binary is never evicted from memory. If you are optimizing for size, then you may be able to fit more of the program into the CPU cache, which can be accessed much faster than the main memory.


Weren't there tests back then that showed -Os was actually faster on a lot of modern CPUs? Something like more of the binary staying in CPU cache?


> Do they actually test both approaches, or is it more a matter of "we can code it way A or way B, and way A uses more memory, but we think it'll be faster"?

A lot of memory consumption is various caches (at various layers); those caches weren't added for no reason, they were added because they provided a demonstrable perf gain.


> The reason for Chrome's massive memory requirements is that they make memory vs. CPU/performance/security tradeoffs on a regular basis, and usually make them against memory.

This is generally a good trade. Memory is not very useful (modulo filesystem caches) when you're not using it, and has a very low cost when you are (extremely tiny power consumption). CPU cycles cost power (leading to heat dissipation and battery life loss) when you use any amount, and latency when you use too much. Latency directly affects user experience and efficiency, which is a much higher cost than "my RAM is at 75% usage".

Additionally, Chrome's tradeoff is such that it works out better on cheaper devices. Memory is far cheaper than more powerful processors - which also reduce battery life, which is even more expensive. On more expensive devices, you have all the memory you need for Chrome anyway, so you're just getting better latency.

Finally, there are very few cases where you should have a large number of Chrome tabs open anyway. Multi-tasking is impossible (the human brain simply doesn't support it) and rapid context-switches are inefficient. As much fun as it might be to have 5 tabs open on each of 10 different topics, you're simply decreasing your productivity and capacity for focus. Even when doing multi-disciplinary research, there are a ton of ways to get around the problem of running out of memory due to Chrome tabs: printing pages to PDF, turning off JavaScript, downloading research papers and reading them in a native PDF viewer, extracting content from webpages and inserting into local documents, and the classic using darn bookmarks instead of tabs like they were intended to be used.

(of course, in an ideal world, all web browsers would use absolutely no memory or processing power whatsoever - I'm just defending this particular engineering tradeoff)


> Finally, there are very few cases where you should have a large number of Chrome tabs open anyway. Multi-tasking is impossible (the human brain simply doesn't support it) and rapid context-switches are inefficient.

I agree that the human brain doesn't task-switch effectively, but that is exactly why I want to have many tabs open. In the case of reading Hacker News, I will go through the main page once, opening each article and comment page in a new tab, then use that as a to-read list. If I am researching how to do a task, I will have several pages open for documentation, Stack Overflow posts, bug reports, etc. These are open not for the sake of multitasking, but in order to reduce the amount of working memory I need. For example, if a post mentions that a particular workaround may be required, then I want to keep that tab open until I have verified that the workaround is not needed in my case.


Sounds like you want bookmarks with state/tagging.


Are there any solutions with good UX for this workflow?


I guess I'm making two claims here: (1) you should only open tabs for the task you're currently working on and (2) very few tasks require large numbers of tabs. Fine, maybe it's more than a dozen, but the graph of "number of tabs used by a task" versus "frequency of that number occurring" falls off exponentially (e.g. most tasks require 1 tab, fewer require 2, even fewer require 3, etc.), and I've never noticed myself actually needing more than...probably 20 tabs to complete a single task before.

> In the case of reading Hacker News...

This isn't a "task" - you're reading a bunch of completely separate articles, one at a time. You're consuming, not producing/processing/working on a task/problem - and each element of consumption is separate than all of the others.

> If I am researching how to do a task...

I understand and agree with this definition and workflow! This is how tabs were designed to be used. However, I haven't ever seen a common task (for programmers - who have even more demanding requirements than the general populace) that requires a "large" number of tabs. I guess that's kind of vague and I can always change the number, but really, think about it - how often does a task require more than a dozen tabs? More than two dozen tabs?

Just for fun, I opened every frontpage article (as of the time I accessed it - whoops, should have used Wayback HN or something) in Chrome. All 30 articles took 3.9 GB RAM, and adding all 30 pages of comments consumed another 300 MB, so 4.2 GB total.


I disagree with the first claim:

> (1) you should only open tabs for the task you're currently working on

In good part because there are no other simple "read later" alternatives in a browser.

Something that would be an option would be a special single use kind of bookmark that gets deleted once you reopen it.

I am on the extreme side of keeping too many tabs open; part of the reason is that they offer a very simple and unique functionality in the browser (it was one of my reasons for switching to Firefox, which easily handles a few hundred tabs, with some restarts from about:profiles).


> Finally, there are very few cases where you should have a large number of Chrome tabs open anyway.

From what I observe in others, it seems that tabs get created when a new task needs to be done in the same site (that is already open) but people don't want to lose context of the previous ones...

From my own personal experience, I like to reuse them as much as I can, but it gets harder as more and more get added; it's easier to start from scratch, open a new tab, and go where I want to go...

I think browsers could be doing much more than just tabs and bookmarks to help people organize, because right now it seems like the equivalent of a desk with lots of papers strewn around, where someone grabs yet another blank piece to start writing on...


> I think browsers could be doing much more than just tabs and bookmarks to help people organize, because right now it seems like the equivalent of a desk with lots of papers strewn around, where someone grabs yet another blank piece to start writing on...

Absolutely. Zettelkasten is the hotness right now. I'd love to see something like a cross between Zettelkasten and Zotero.

Personally I want something in between bookmarks (which I actually infrequently revisit) and history, which is everything and too much noise, and also not permanent in case of history wipe. Strangely I bookmark a lot but don't revisit often, but when I _don't_ bookmark something and need to find some memory thread, it's awful.

I've been scheming about something where I could hit a hotkey, grab the text and metadata, index it, and save the index, the set of words/ngrams, or maybe just the zipped HTML content; it's light enough. Then some personal search interface.

https://news.ycombinator.com/item?id=22160572


> From what I observe in others, it seems that tabs get created when a new task needs to be done in the same site (that is already open) but people don't want to lose context of the previous ones...

Could you give a few examples of this? I still can't think of anything particularly common that would reasonably justify more than a dozen tabs at once...

> I think browsers could be doing much more than just tabs and bookmarks to help people organize, because right now it seems like the equivalent of a desk with lots of papers strewn around, where someone grabs yet another blank piece to start writing on...

Yes, I completely agree. I'm working on a piece of software that goes several levels beyond what browsers currently offer for information organization, and (per the topic) it optimizes for CPU consumption at the expense of memory, for the reasons I gave in my comment. However, for the browsers that we have now, bookmarks are much better for organization than tabs. You can tag them, name them, and put them into folders - none of which you can do with tabs.


I find bookmarks literally useless nowadays.

For example, currently I'm researching bicycles for my mother. I'm on an online shop, and I just open potential candidates on new tabs, which means they start loading in the background while I keep browsing. And using Tree-Style Tabs, this makes them all "children" of the listing tab, which would be lost information if I bookmarked them instead. When I switch to another task, I can just collapse this tree, and when I get back, the listing will be exactly where I left it, which is particularly useful if the site uses JS search without updating the URL. Bookmarks lose all the memory except for the URL and page title.

Bookmarks could be useful for a long-term reference storage (months/years instead of days/weeks), but then chances are a bunch of those links will have rotted, so I need to use something that mirrors the content instead (I used Pinboard previously, now I just use the SingleFile addon to store it as a file).


> And using Tree-Style Tabs, this makes them all "children" of the listing tab, which would be lost information if I bookmarked them instead.

Bookmarks can be categorized with folders and tags - which, together, are much more flexible than tab groups. A tab can only exist in one group/tree at a time, but a bookmark can have infinitely many tags, exist in a tree-like structure (which is what you described) using folders, and be relabeled so it's easier for you to find using full-text search.

> Bookmarks lose all the memory except for the URL and page title.

Yes, because bookmarks are meant to be long-term storage, tabs are meant to be working memory, and transient web page state like whether a radio button is checked or not is not intended to be long-term. Web page state is supposed to either be stored server-side or client-side using cookies or HTML5 local storage - which persist after the tab is closed.

> if the site uses JS search without updating the URL

Then you can just keep that search tab open if you absolutely have to interrupt yourself and go to another task. Bookmarks will do just fine for all the stuff you found. It's extremely unlikely that you're going to have more than a few of these tasks that all absolutely require page state that isn't present in the URL or through local storage and you're accumulating them faster than you can complete them.


To be clear, I'm not arguing for just tabs; I agree that tab trees are limited for the long-term storage needs. I just think bookmarks are too crippled to serve as the solution for those needs.

> Web page state is supposed to either be stored server-side or client-side using cookies or HTML5 local storage - which persist after the tab is closed.

"Supposed" doesn't matter to me, frankly. A solution for me must work for most sites, regardless of whether the web developers of those sites do the "supposed" or not.

But even if they did, those cookies and local storage aren't tied to the bookmarks, so I can't rely on them. Chances are they'll expire much sooner, plus they generally only work on the same machine where they were created, which in the long-term may not be the same I'll be using later.

And the ultimate state loser still applies: URLs rot, and bookmarks rot with them. For long-term storage, they're absolutely not enough. Hence my use of page archiving as files, instead.

> It's extremely unlikely that you're going to have more than a few of these tasks that all absolutely require page state that isn't present in the URL or through local storage and you're accumulating them faster than you can complete them.

Maybe, but I don't care to waste time finding out which pages do or do not have their state in the URL or local storage if I need it. Tabs simply work.


> Could you give a few examples of this? I still can't think of anything particularly common that would reasonably justify more than a dozen tabs at once...

I clicked an internal link in this thread (in Firefox) and after it loaded (1s) I clicked the back button. 0.7 seconds later I was back, but not at the same position, so I had to scroll to find the position to continue reading. I can save 0.7s of load time plus 5 or 10s of searching by opening a new tab instead.

Disadvantages of single tab usage (off the top of my head):

- additional load time

- form inputs should be preserved, but I wouldn't count on it.

- highlighted text is not highlighted any more

- the server can send me another version (or nothing, a 404), so I lose the content altogether.



That's not an actual example, though. That's an action you took. I was asking for a concrete example of a task (e.g. doing your online banking) that reasonably requires more than a dozen tabs (metaphorically - change that to fit your RAM usage, the actual number matters less than "unbounded" which is what many people use, or the number that would consume all the RAM on your device).


Tabs and windows are a very convenient UI for maintaining state. Browsers aren't used just to complete one "task" at a time. The "job to be done" isn't "do online banking" or "buy a bike", it's "organize active information". Across virtual desktops you might have a "finance" window during tax season with several different sites, a window for each ongoing project with dozens of references queued for each, a window for comms (email, chat, social, etc.) with half a dozen tabs, a window for news with all the day's threads in tabs, a window of "TODO" tabs with things to buy, a window for gaming, etc.


> Could you give a few examples of this? I still can't think of anything particularly common that would reasonably justify more than a dozen tabs at once...

As an example, someone asks "hey, can you update the App Store screenshots" and while I may still have the App Store open in another tab, I'll open a new one just for that task. Then later someone asks "hey, what's up with Jira ticket 1234"; now I already have 3 tickets open in Jira, and I don't wanna disturb those, so I'll open another tab. Same with online docs like Confluence, or web searches on Google... at some point the overwhelming number of tabs themselves gets to be too much, so clearing context into yet-another-tab is a way of coping, I think...

I see this all the time in group meetings, btw.

> I'm working on a piece of software that goes several levels beyond what browsers currently offer for information organization, and (per the topic)

That sounds cool. One idea I've had for a while is that sites (like Confluence or Jira, or gdocs) are basically apps, so they should be treated as such and automatically grouped together (in windows or tabs) just like native apps...


> However, for the browsers that we have now, bookmarks are much better for organization than tabs

By "browsers we have now" you clearly mean "Chrome, the only browser I care about". Firefox can handle literally thousands of tabs with lower memory than Chrome using one or two dozen due to different trade-offs. Very few people ever use that many, but the presumption that no better should be done on the memory front because your favorite doesn't manage it isn't supported by the facts.

Bookmarks aren't used in the same way as tabs by most people, and it's telling that your workflow so easily conflates them that you're making criticisms based on the rest of us using them differently. I get the impression that what you want bookmarks for would be better done with tab groups for most people? There are certainly a decent number of happy OneTab users out there who're probably closer to your preferred interface than the rest of us, but even on Firefox its userbase is minuscule.


> By "browsers we have now" you clearly mean "Chrome, the only browser I care about".

False. I use Firefox for all of my browsing, hate Chrome, and only use Chrome as a whole-browser sandbox for Google products.

> Firefox can handle literally thousands of tabs with lower memory than Chrome using one or two dozen due to different trade-offs.

Yes, and I'm arguing that that's a tradeoff and that's a good thing based on the intent of tabs.

> the presumption that no better should be done on the memory front because your favorite doesn't manage it

Neither did I say that no better should be done, nor do I actually believe that. Presumptuous today, aren't we.

> isn't supported by the facts.

The facts are that tabs are intended to be a working set and are optimized for that. If you don't use them as such, you're going against the design of the tool and expecting it to do well, which is insane. It's equivalent to using files on a filesystem like RAM (expecting extremely quick access and no wear over time) and then complaining when they're slow and your SSD wears out in a few weeks.

> Bookmarks aren't used in the same way as tabs by most people, and it's telling that your workflow so easily conflates them that you're making criticisms based on the rest of us using them differently.

Correct, because they are different things. Bookmarks are intended to be used for saving and organizing things for the long term. Tabs are intended to be used for actively working on a task. Thinking that the case where you reasonably want to work on a task that requires dozens of tabs to be open, all at once, all on the same topic, switching between all of them very frequently, is a common case is completely unsupported by evidence, and logically incoherent.

This has nothing to do with my workflow, and everything to do with the design intent of the tool, and proper use of the tool. Again, see the example about using disk as working memory.

> I get the impression that what you want bookmarks for would be better done with tab groups for most people?

Again, this is something that bookmarks are much, much better optimized for than tabs. Bookmarks can exist in folders - which are probably like tab groups, but first-class, and much better supported - and tabs cannot. Bookmarks can be tagged, and tabs cannot. Bookmarks can be exported to a file, and tabs cannot. Bookmarks can be renamed, and tabs cannot. Bookmarks are straight-up better at their job than tabs are. This is not "better done with tab groups for most people" - this is "worse than tab groups for everyone".

To summarize: working memory is not the same as long-term storage (in the human brain, in operating systems, in physical desks, in the browser, and everywhere). Tabs are for working memory, and optimized for that. Bookmarks are for long-term storage, and optimized for that. Bookmarks are objectively better than tabs for long-term storage in non-performance metrics (can be organized using folders and tags, can be exported/imported, can be renamed) and performance metrics, and you and everyone else who insist on using tabs like bookmarks are (1) doing yourselves a disservice by using the tool wrong and suboptimally and (2) doing everyone else a disservice by demanding that the tool literally be made less optimal to fit your incorrect use of it.


What makes you believe that's the "design intent"?


Yes, I think a better way to quickly save and organize bookmarks, addressing needs for both immediate use and for long-term reference would be a good way forward.


Yeah and this is what makes a lot of mobile forms/fields so incredibly annoying. You're filling out a form or something and need to look something up, do a search, go check check another tab or app, switch back and sorry we just killed that context, wanna start over?


> Finally, there are very few cases where you should have a large number of Chrome tabs open anyway.

How users use the program determines how the program should be implemented, not the other way around.


We actually measured this when I was at Google. It turns out most Googlers and users of HN are weird - having lots of tabs open is the exception, not the rule, but it's an exception that's very common among the techie demographic that constitutes Google employees and people who criticize them. The average web user doesn't know what tabbed browsing is, and then another large plurality uses only a handful (~2-3) of tabs. The percentage of web users who have 10+ tabs open is in the low single digits.


> The average web user doesn't know what tabbed browsing is

My anecdotal observations are that this often causes extreme number of tabs. I've seen multiple examples from social media in the last month of people scrolling through their mom's (/little brother's/etc) phone, showing hundreds of tabs, but usually all duplicates of the same dozen or so. I remember seeing similar things firsthand (but not to as great an extent) when the first generation iPad came out. And I've also witnessed the same in offices, where doing something in an app or an existing page will arbitrarily open a new window or tab, leading to an explosion of windows and tabs all hidden beneath the IE11 icon in the Windows taskbar.

By all signs, it seems true that (a) average people don't know what tabs are, and (b) they have a lot of them. So it makes me wonder about your methodology. I used to be a Mozillian, and I remember the less-than-rigorous excuses that Firefox UX folks reached for when backfilling justifications for their design whims.


> I've seen multiple examples from social media in the last month of people scrolling through their mom's (/little brother's/etc) phone, showing hundreds of tabs, but usually all duplicates of the same dozen or so.

Honestly, this is the only reasonable way to use tabs on a mobile browser. There is almost no drawback to keeping them, and closing them is a chore. I get that it sounds like the typical "boomers using the internet" thing, but given that you should expect old inactive tabs to unload anyway, having even hundreds of tabs open is a detail.


That's a very interesting result - in particular, because it doesn't agree with my anecdotal evidence of almost everyone I know outside of tech; I can't think of a single person whose computer I've seen that doesn't keep >10 tabs open in perpetuity, even across restarts.


Interesting. Depending on the percentages and methodology, that sounds like a reasonable argument for deprioritizing multiple tab performance.

That said, what you're saying is, "In practice, users don't have a lot of tabs open", which is very different from what I was responding to, which was "users shouldn't have a lot of tabs open".


> Memory is not very useful when you're not using it, and has a very low cost when you are ... CPU cycles cost power when you use any amount ...

Of course, it's important to remember that increased memory utilization results in increased CPU utilization. The range of degradation is quite wide: increased cache misses (which cost a significant number of cycles), access patterns not in line with the prefetcher, and increased TLB misses.

It's for good reason that one of the most potent optimization techniques for hot code is to reduce the memory footprint and memory access.


I specifically said that I was talking about an engineering tradeoff, however - that is, when you have to choose between one or the other. Excluded from that is the possibility of improving both memory and CPU consumption at the same time - that means that you haven't hit the edge of the curve, so to speak.


You wrote it as a "pick memory utilization or CPU utilization", stating memory as being practically free. I just wanted to correct that by saying that its utilization is very expensive for the CPU.

The trade-off you're thinking of is really "caching vs. realtime computation" rather than just "memory vs. CPU". Outside caching, it's pretty universal that more memory utilization just means poorer performance, both in wallclock time and in CPU utilization.


> Memory is not very useful when you're not using it

It's also not very useful when someone else is hogging all of it! There is an opportunity cost to the user and other programs. It is supremely selfish to only think of oneself and reserve all the common resources because "well, it was just lying around".

I PAY for the capacity in case I NEED it, not for some bloated browser to snorf it all up because they can't be assed to write reasonable software.


Multiple open tabs is such a common scenario.

A common use-case: I visit a site and open new links in separate tabs. Those tabs are visited later.

Having Gmail open, the Slack web client, Jira, Git, and the dev and prod versions of a site I'm building is a common scenario.


Re: large number of tabs

Use the Great Suspender extension; my friend in academia thanks me every single time we meet. And now I see them with 4 Chrome windows open, each having ~20+ tabs in it!!!

For them everything happens in the browser, it's synced, and the Great Suspender lets them run that monstrosity without choking their MacBook Airs.


Doesn't this kill the primary benefit of tabs, namely that the content is already loaded and you don't need to hit the server again?


Kind of, but if you have enough tabs the cost may become too high. I have my FF set up so that tabs are suspended only after a generous time limit, and I have also excluded some sites so that they are never suspended. Would be nice to be able to suspend tabs to disk instead of throwing the state away completely.



“Memory is not very useful (modulo filesystem caches) when you're not using it, and has a very low cost when you are (extremely tiny power consumption)”

One problem is that CPUs have various ways to use less power or even switch off parts when not in use, but you always pay the same for RAM.

For battery-operated devices that sleep most of the time, such as smartphones, that means being careful with memory usage is the way to go. I think that’s part of the reason iPhones ship with less RAM than Android devices (reference counting needs less RAM than garbage collection for the same performance level).


I'm specifically referring to applications (like Chrome) that you use interactively in the foreground. When using one of these applications, your device is not off or in sleep mode - it is on, you are interacting with it, and it is either consuming more CPU or less. A memory-hungry, CPU-light design drains your battery more slowly than one the other way around.

Additionally, the extra power cost of adding more RAM to a device is very low relative to even using 25% of total CPU TDP. [1] gives 3W per DIMM for DDR3 (which is an upper bound, given that LPDDR3 will consume far less power while fully on, let alone in deep sleep mode) and [2] gives 2.5W for DDR2.

Edit: aha, found a paper[3] on RAM power consumption in a phone. Far less than 5% in either "suspended" or "idle" states, under 10% in almost every other benchmark, and universally lower than CPU power consumption.

[1] http://www.buildcomputers.net/power-consumption-of-pc-compon...

[2] https://superuser.com/questions/40113/does-installing-larger...

[3] https://www.usenix.org/legacy/event/atc10/tech/full_papers/C...


Unaffiliated plug: The Great Suspender is really handy for automatically suspending unused tabs after X minutes for us taboholics.


And for those using Safari: you could simply restart your browser, and most of the tabs will not be loaded until you click on them.


Having seen the realtime-rendered Horizon Forbidden West trailer, justifying Chrome's ‘smooth’ animations this way is like stepping into a parallel world.

I can't even scroll Twitter without stutter on my hefty gaming laptop, my browser (the whole thing! not just the tab!) often freezes for 10+ seconds when first rendering a page, and toggling Reddit's sidebar takes half a second.

To the extent Chrome's anything can be deemed ‘smooth’, it's because computers are fast, not because they made some clever space-time trade-off.


Twitter and Reddit are both known to be poorly-engineered and ridiculously heavy applications. They're very slow on Firefox, too, and probably any browser you care to use.


Sometimes I feel like trying to test the reddit infinite scroll and measure GB and CPU% per meter.


What kind of websites need 8GB per tab? I have never had problems with Chrome or Firefox on my 4GB box.

EDIT: someone in this thread posted this:

  I can't even scroll Twitter without stutter on my hefty gaming laptop, my browser (the whole thing! not just the tab!) often freezes for 10+ seconds when first rendering a page, and toggling Reddit's sidebar takes half a second.
This explains a lot; I (almost) never visit Twitter or Reddit.


It would be a dream, as a user, to be able to choose between CPU-optimized and memory-optimized modes.


Browser vendors can barely keep one renderer/JavaScript engine free of 0-days; I can't imagine how secure a browser would be with double the code paths.


Nothing is free of 0-days.


hello_world.c is still pretty secure, right?


Not since Spectre.


I wish this was a display option instead.

I.e. “Disable animations in all web pages (gracefully)”.


There is a CSS media query for that: prefers-reduced-motion

Now whether any sites actually make use of it is another matter. Also I don't know how the user would enable it. Perhaps an OS-specific, system-wide setting like dark mode.
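For example, a site could guard its animations behind the query (a minimal sketch):

  /* Animate only for users who haven't asked for reduced motion. */
  @media (prefers-reduced-motion: no-preference) {
    .card { transition: transform 200ms ease-out; }
  }

  /* Or, inversely, switch animations off when the preference is set. */
  @media (prefers-reduced-motion: reduce) {
    * { animation: none !important; transition: none !important; }
  }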

https://developer.mozilla.org/en-US/docs/Web/CSS/@media/pref...


Again, disabling features is a feature given to the site developer, not to the visitor, who's actually running the code.

It would be so much better if browsers were User Agents and respected user settings before the will of the site creator.


They are? For example:

  * {
    animation-play-state: paused !important;
    transition-duration: 0s !important;
  }


> Also I don't know how the user would enable it.

In Apple products, it's exposed as an accessibility option:

macOS: https://support.apple.com/guide/mac-help/reduce-screen-motio...

iOS: https://support.apple.com/en-us/HT202655

tvOS: https://support.apple.com/guide/tv/reduce-screen-motion-atvb... (which also disables autoplaying previews!)


A friend integrated it into Bootstrap for default animations last year, so I’d guess that a lot of sites are using it.


only for Safari on macOS and iOS




When would you ever not opt for a CPU-optimized mode? CPUs can't get faster, and you can always get more memory.


I, rather unfortunately for myself I suppose, find most animations on the web or even CPU intensive sites to be... well, quite frankly, piles of shit.

I would opt to stop letting web page authors just mangle my CPU cycles because they've opted to poorly animate an SVG or some HTML element. If anything I'd prefer each got a CPU budget and that was all they could use, unless they request more CPU time from the user, then voila! People who want to be annoyed can be, and the rest of us can go back to "browsing the web."


A perhaps more useful tradeoff would be if one could disable a subset of CSS/JS with a simple switch in the settings, so e.g. transition animations are skipped but the webpage is otherwise readable.

I think I have yet to see a webpage that uses animations in a way I enjoy, even when they’re smooth. (The same goes for most PowerPoint transitions and window manager animations...)


I also go back to switch off animations wherever I have the choice. Some animations look really good but long term they all just feel like fancy input delays.


For most people you can’t get more memory without swapping in your device for a newer one. All mobile phones, all tablets, and a good chunk of laptops have soldered in memory with no expansion possibilities.


Getting a faster CPU is about as difficult as getting more memory for almost everybody, because people need to buy new machines to do so. Either because they own a machine that can't be upgraded (hello soldered ram), or because they don't have the technical skills to do so.

Personally I'd rather wait longer for things to complete than have my whole system grind to a halt because it starts swapping like crazy.


I wonder, has someone tried to dissect where all that Chrome memory is going?

I can run IntelliJ IDEA, a Java IDE, load a project with thousands of files into it, and it won't eat 8 GB.

I understand that Chromium is maintained by extremely talented developers and all the obvious improvements were made, so that memory usage is probably necessary. But I still wonder how it is possible. A gigabyte of RAM is a lot.


Not an exact answer to your question, but there are a lot of things taking more memory than you would think. Today's websites contain high-res images. Well, you might say, 800KB isn't much. But that is the compressed size. The bitmap data is way bigger. All this data has to be kept in memory.
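For a rough sense of scale: an 800KB JPEG at 4000x3000 pixels decodes to 4000 × 3000 × 4 bytes ≈ 48 MB of RGBA bitmap data, roughly sixty times its compressed size.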

But yeah, 8GB is unbelievable. I have no clue why Chrome needs this.


> I can run IntelliJ IDEA, a Java IDE, load a project with thousands of files into it, and it won't eat 8 GB.

I don't have the same experience; PyCharm is absolutely killing my machine, and I always have it in Power Save mode. Sure, it isn't eating 8GB, but that is because the JVM by default doesn't have that much available. I had to raise the memory limit considerably to be able to open my projects without it hanging.


That’s not normal.

What plugins do you have installed? Maybe one is slowing it down?

And perhaps run “clear caches / restart” just in case there’s cache corruption

I run it fine on a Linux machine that has 4GB of RAM total — I'm curious how much memory you're talking about.


I don't know about Chrome, but in Firefox, if you want to know, go to about:memory. Caveat: it's very technical :)


That page ("Stats for nerds") was removed from Chrome ages ago. And before it was removed, it was a pretty petty thing since it spent time and effort on also showing memory usage from other currently running browsers.


I think it's a mistake to think that, just because someone works at Google or on Chromium, they are necessarily "talented" developers. They're just the developers that are working on Chromium. Normal people like the rest of us.


Google has a ~0.2% hiring acceptance rate, which is much more selective than the vast majority of software companies. For Google developers to be "normal people", the hiring process would have to be absolutely terrible.


On a related note, I once came across the notion that free memory isn't doing anything for you.

It's kind of obvious, and it does not at all explain Chromium's lower bound of memory usage. But if Chromium runs better with more memory and no other process is using it: please take it!

That is what it's there for, after all.


This is fine, until another application needs it. Then it becomes a power struggle for the throne, and both sides lose.


That's not correct for any modern OS. Free memory is used as a disk cache, so your computer is significantly faster with more free memory.

Well, not as significantly now, with SSD everywhere, but still faster.


If it actually detected that, it would be great.

But I have a chromebook with 4GB of ram, and it often struggles to keep a particular set of 3 expensive tabs open because it runs out. It's definitely not reacting to memory pressure by being less greedy.


Your pages might have huge JS heaps allocated or contain supersampled assets.


This topic always reminds me of the work of Paul Virilio, on how system design often (deliberately or not) impedes our own natural processing capabilities by slowing us down / wasting our time.


You can hit F12 in a Chromium-based browser, hit the Memory tab, and find out where all the RAM is going...


My Chromebook doesn't have an F12 key.


Also works in FF.


All (browser) developers should be using a Pi as their main machine.


Yes, developers in general. Force them to use their own stuff on laptop HDDs, mounted via NFS.

Gnome and KDE folks. Although I stopped using either of those so I don't care.


Over remote X too, please. Doesn't have to be very remote (but you know, bonus points if it is)


Me too. I'm absolutely dreading moving to Wayland for this reason since there are issues using it over the network.


The W3C and the WHATWG, too.

I keep obsolete computers and phones around to test with.


I used a Raspi 3B as my main desktop for a while a couple of years back. It worked fine for most things I wanted to do, except for web browsing. NetSurf was good enough for most things, but I had to use Chromium for online banking. It was slow, but it worked. Sure, 1 gig is a bit cramped these days, but the problem in general is the web sites, not the browsers.


I'd say it's a vicious circle. Chrome folks look at real world performance and optimize for that. Web devs see their site performing better and add more shit until their dev machine is almost bogging down. Rinse and repeat.


Yeah it is mostly web devs being crap. No problem with sites like HN.


I tried installing a modern Xubuntu desktop on a 3B+ a few weeks ago. You can hardly even open Chromium or Firefox these days (maybe with a lighter desktop it would be slightly better)...


I think the best choice is still Raspbian. I was running Raspbian+Fvwm2 at the time.


> but software is bloating faster

This is called Wirth's Law: Software gets slower faster than hardware gets faster.

https://en.wikipedia.org/wiki/Wirth%27s_law


That depends on what you're doing with it, I suppose - Chromebooks typically don't have that much RAM and are usually sufficiently powerful for most any website. My wife uses one exclusively for YouTube, Netflix, Facebook, etc.

Pretty much the only thing we've seen it stutter on is browsing very high res images taken with a high end camera off of a USB stick.


For that matter, Sciter is pretty happy to run (rendering HTML/CSS and executing scripts) on a Raspberry Pi: https://sciter.com/raspberry-pi-4/

So it comes down to the architecture of Chromium: multiple processes (not even threads) and IPC between them to render stuff.


Servo also uses multicore rendering. It would be interesting to know how Flow solved the concurrency issues. As far as I know, all UI engines and web engines work single-core, as concurrency bugs are almost inevitable in that area without special language support like Rust's.


Current engines also do multicore rendering. There's a single thread that decides what goes where on the page, but the actual rendering (i.e. putting text and images into GPU buffers) is done by a thread pool.

That's why in iOS Safari, if you scroll fast enough, you see a checkerboard pattern... that means the tile isn't rendered yet...


Safari got rid of that long ago. That would play fairly poorly with iOS 13's ability to drag the scrollbar around :)


All browsers still "checkerboard" including Safari. No browser that I'm aware of shows an actual checkerboard pattern anymore though, they paint white (or the layer background color).


> So they've implemented a full HTML/CSS/JS browser

The JS is SpiderMonkey, the rest indeed is written from scratch: https://twitter.com/flowbrowser/status/1200098712816631809


> "extremely low memory consumption"

Might be nice as an Electron replacement


Microsoft is working on this with their cross-OS, Chromium-based Edge - so far on Mac I see about a 60% reduction in RAM footprint using Edge. I believe the end goal is a single managed Chromium runtime so each Electron app wouldn't need its own full Chromium instance.


> I believe the end goal is a single managed Chromium runtime so each Electron app wouldn't need its own full Chromium instance.

I have spent hours searching the web for any clues as to Microsoft's intentions here. I would be grateful for any words on how you came to believe this.


Used to work in Windows; this is just from offhand conversations with former colleagues.


Thanks for the reply.


He said "I believe", it's not an official statement from Microsoft, it's just his own opinion.


a7c32be153f91327db8a62932af28cac4dcfc7612a446c5021b9045a2af269b8


How will they manage different version requirements?


Isn't that pushed onto the app developers, to maintain compatibility, in other ecosystems?


My thoughts exactly... Now I have too many electron apps... So anything that helps to diminish my RAM consumption is welcome.


A good Electron replacement should re-use your installed browser engine. That would reduce RAM consumption dramatically, while providing better security (because browsers usually update faster than individual apps).


You just created a “website”


There are a number of problems with locally-run websites.

One is that TLS is married to DNS, and no third-party CA would issue a certificate for localhost. TLS with self-signed certs generated on the spot becomes easy to forge.

Another is that no web browser supports Unix domain sockets, or a Windows equivalent, as an access protocol. You are stuck with IP-based access; see above.

Third is that access to the OS is limited or forbidden inside the browser, for a good reason. Want to run a locally installed compiler from within your web IDE? Tough luck.

Electron solves all of these problems. It doesn't do it in the best way imaginable, but in some cases just a web site cannot be a replacement.


You mean, "website with native API's and (optionally) custom native chrome".

Developers don't use electron because it'll get its own browser engine instance.


Depends; there is a bunch of Electron apps that work just fine as websites (e.g. Slack, Spotify, Microsoft Teams), as far as I know. I personally prefer running those in a browser tab, to both save resources and benefit from privacy-enhancing tools like content blockers. I would be curious to know how many Electron apps actually need these extra native APIs.


Some of those you listed have extra features with Electron. I was just being sarcastic with my remark. Like, I don't believe you can screen share with Slack via a browser, for example.


> Like I don’t believe you can screen share with slack via a browser for example.

If you can't, it's not the browser's fault. I can definitely screen share in Jitsi from the browser, so there's no technical reason Slack shouldn't be able to do the same thing.
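The underlying capability is a standard web API; a minimal sketch (inside an async function, error handling omitted):

  // Prompts the user to pick a screen/window and returns a MediaStream.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  // Preview it locally, or feed it into a WebRTC peer connection to share it.
  document.querySelector('video').srcObject = stream;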



You can show your screen but you cannot pass control.


Well, there is Apache Cordova. Cordova uses the built-in webview. It also supports Windows and Mac, but it's not as popular as Electron because its API is not as rich as Electron's.


Electron supports native V8 extensions that are really fast. Cordova (at the time I tried it) only supported message passing, which is very slow. Sufficient for many applications, but for some it won't cut it.


Google had a project called Carlo to do this: https://github.com/GoogleChromeLabs/carlo But it's been discontinued now that the devs left the company.

I also see this project but you have to write Go to talk to the OS, not Node: https://github.com/zserge/lorca


Windows has had that feature since at least IE5. The main drawback was that "installed browser engine" might in practice mean "whatever ancient IE version this user who disabled windows update is using". Of course that is now mostly solved, and e.g. iOS offers the same feature.

But if you want broad cross-platform support "just get the locally installed browser engine" suddenly becomes a hard problem, unless this becomes a popular OS feature.


I think that the memory savings would be very limited, unfortunately.


This! A browser mode that removes all the browser UI but still uses the same engine would make web apps more practical as desktop replacements.


Back in the Win98 days, I think a .hta (HTML Application) was meant to be that - like .html, but it opens as an "application". It never really took off, though; I presume due to the lack of APIs at the time.


It had access to the full VBScript or JScript runtime and WScript host. This means you could do OLE Automation, filesystem access, network access, registry access, and I believe invoke processes.
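For anyone who never saw one, a minimal sketch of an HTA (a hypothetical demo.hta; the HTA:APPLICATION tag and ActiveX objects shown are real):

  <html>
  <head>
    <hta:application id="demo" applicationname="Demo" icon="demo.ico" />
    <script language="JScript">
      // WScript-style host access: filesystem, shell, OLE Automation...
      var fso = new ActiveXObject("Scripting.FileSystemObject");
      var shell = new ActiveXObject("WScript.Shell");
      shell.Run("notepad.exe"); // ...and invoking processes
    </script>
  </head>
  <body>Hello from an HTA</body>
  </html>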

At the same time, companies were locking down WScript due to scripting viruses and HTAs that were impersonating other applications with their configurable icons.


This is, or was, sort of that: https://github.com/webview/webview/issues/305


Like Java?


There’s the “webview” project on GitHub which does this.


Removing the Electron apps works well. Usually they have a browser version, and that's still dogshit slow, but at least it shares more resources and has better sandboxing.


Are we stuck with HTML/CSS/JS forever? For today's interactive applications, those three are clearly insufficient. Can't the standards committees come up with a new "web language" that actually satisfies the needs of 2020? Or is the goal to just have everything run on WebAssembly?

Kudos to the author.. but I was hoping it would be a "for real new" browser.


Can you elaborate on why it's clearly insufficient? There are myriad web applications made with HTML/CSS/JS, so it's clearly sufficient for any kind of web application, IMO.

There are some weak points for the web. One of them is performance; hopefully it'll be resolved by WebAssembly, so you could write performance-sensitive portions in C++ or Rust. Another one is access to hardware, and that's being resolved by evolving web standards like WebUSB, WebGPU and so on. I don't think that replacing HTML, CSS or JS with something different would solve any actual problem on the web. They are not ideal, but they are good enough, and that's all that matters.

Well, technically you could replace JS with any other language compiling to JS or WebAssembly. Then you could just use one full-screen canvas and draw anything you want there, so you could use Qt or something like that. That does not require any new APIs, but I don't think that's a good idea in the long run.


People always jump to criticizing JS, but frankly HTML and CSS are my biggest pain points whenever I work with them. Any reasonably complicated UI inevitably turns into a soup of divs. Why would you stick with markup and styling heavily tailored towards documents for an application platform if you could start fresh with something else?

To be fair, CSS is making big strides in this regard with Houdini[0]. It's not yet complete but it's a much better alternative to composing divs and spans in weird ways to get the look you want.

0. https://developer.mozilla.org/en-US/docs/Web/Houdini


Will the current web stack be the preferred language of applications that run in a browser forever? Maybe not. Having the browser just be a WASM target with (potentially) more than one rendering and layout engine doesn’t seem that far fetched.

However, I think the need to support the current web stack will continue for the foreseeable future since it would be a terrible waste to simply abandon a platform that so many applications are written for.


I agree. The current stack is here to stay and we will need to support it. However, as a web developer, I spend insane amounts of time dealing with simple issues that stem from the fact that we are building complex applications on top of it. Modal window has a JS select element in it? Now the whole select list has to have an absolute position to make sure it flows out of the modal. Oh wait... now the thing won't reposition itself when the user starts scrolling.
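A minimal sketch of that trap (hypothetical class names; position: fixed is one common variant of the workaround):

  /* The modal clips anything that should visually escape it, like a
     dropdown's option list. */
  .modal { position: relative; overflow: hidden; }

  /* Positioning the list against the viewport escapes the clip... but now
     it no longer follows the page when the user scrolls, so you end up
     repositioning it from JS on every scroll event. */
  .select-options { position: fixed; }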

So much tech today is being built just to put a band-aid over this stuff, instead of addressing the core issue (the browser is no longer used to just deliver static documents). I can't be the only one that wishes for a new language/browser. WebAssembly might be a saviour, although I'm unsure how issues like accessibility will be addressed.


Just sounds like grass-is-greener hopefulness to me.

If it was just a matter of a fresh start with a new language/stack, then Android/iOS/macOS dev would be compelling experiences. But they're not. And they're changing just as much, bandaging over themselves so much that you end up credentializing in a whole lineage of idiosyncrasies worse than just putting a <select> in a modal, except unlike the web you're just building something that works on an iPhone.

Clientdev is hard.


Try Delphi/Lazarus, Qt, or maybe even Visual Basic.

Client dev has regressed since those days; now it's in a state where devs can't even imagine anything better, but we clearly had that...


> However, as a web developer, ... modal window has a JS select element in it?

Try Sciter then; it was designed precisely for running desktop UI using HTML/CSS as the UI declaration languages.

I was a native UI developer initially, then a web frontend architect, and then I returned to desktop development, taking HTML/CSS with me to Sciter.


"So much tech today is being built just to put a band-aid over this stuff, instead of addressing the core issue (the browser is no longer used to just deliver static documents)"

Am I the only one who thinks the sooner the web goes back to just delivering static documents, the better?


This is never going to happen so what’s the point arguing that it should?

As a dev if have to choose between a flawed but still very capable open platform and proprietary walled gardens where corporations wield absolute and arbitrary control I’ll take the open option and so should you.


For the Web that would probably be good, yes. But many of us are application programmers using the browser as a UI platform, and that doesn't have that much to do with the Web proper.


The Web platform needs to be split again into a document and a clearly separate application platform. With the proper architecture for each of them. Right now it is a compromise between right and left shore, dropping you into the water...


Each of these languages you mentioned is extensible, so to a large degree you can shape them to fit your needs!

I don't do native software development but just looking at things, it seems like software is eating the world, but also at the same time that web technology is eating software development.

I don't think the web is the best development tech on any OS, but it's proving to be the best cross-platform tech that runs on any device. It will continue to evolve to be more suitable, but the idea that it's too weak or wrong is exaggerated.


They already did. You probably don't want to go through the SMIL spec on an empty stomach, though.


JS is already largely a compile target. Aside from ClojureScript, TypeScript, ReasonML, etc., even human-authored JS is usually compiled down to more legacy-browser-friendly JS.


That's gotten trendy in the recent past, but I think you have a generation or wave of developers who literally don't realize it's an interpreted language and you can write short, very sweet vanilla JS and have it run almost everywhere without needing any ahead of time optimizations. _Modern_ JS in _modern_ engines is a dream!


If you profile your app on V8, you will find that almost everything is binary code, not interpreted.


In this sense it means not requiring a precompilation step. JITting is an implementation detail; also, modern engines compile to a bytecode first during interpretation.


This got me to look up some of the ages, for anyone else curious: ClojureScript is 9-ish years old, CoffeeScript 10+, TypeScript 7.


No, but I think it will take a little re-imagining of what a "browser" should be. In my own opinion I find it very odd that we are using what is effectively a static publishing model to do so many dynamic activities. Modulo all the hacks that have been added since, the original intent and architecture of the web seems to be to share scientific papers for researchers at CERN. It's essentially a single file-server namespace with the ability to embed references to other parts of the namespace inside objects already attached to the namespace in some way. Sprinkle in JavaScript and CSS animations and now we have "applications".

Looking at what web browsers are both capable of and expected to do today I think what we really want is an "internet terminal" which would be able to contact resources over the internet, negotiate both local and remote resources for an intended purpose, and then share state bi-directionally (stateless is just zero state).

To just conjure a couple of quick examples you can imagine the differences required in providing a service that produces a simple JSON feed versus a full interactive experience that mutates a shared database. The JSON feed wouldn't require a display and could easily be consumed by autonomous agents while the other might require a minimum display size and want to use a widget set to produce controls and toolbars, etc. If my client agent is allowed access to my video display, mouse, and keyboard it could offer these to either service and allow it to respond in whatever way it feels appropriate. The JSON feed could provide a simple UI for viewing recent events, for example.

Now we do all of this today with HTML, CSS, JavaScript, etc. but none of it feels organized and all you see is re-invention after re-invention of essentially these same concepts. The fact that we even call them "applications" when they are "documents" is really what seals it to me. I personally would like to see something that is more true to the reality of computing and internet connectivity today and looks at it from a device-to-service POV. Locally I might have a graphical display, audio, and inputs but I might also have access to an SQL database or some other network-attached service. Assuming some level of trust and if negotiated properly there's no reason a remote application couldn't make use of these local resources using credentials and security profiles under my control. The can of worms here is that this is basically RPC which is the other never-ending disaster...

Internet is hard. Let's go shopping.


HTML isn't so bad these days because of grid and flexbox, but it is certainly no XAML.

Scoped CSS works really well in conjunction with web components. Languages like SCSS make everything more sane, especially in combination.
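
For example, shadow DOM gives you style scoping with zero tooling; a minimal sketch (the fancy-badge element name is made up):

  // Rules inside the shadow root neither leak out nor get clobbered
  // by page-level CSS.
  customElements.define('fancy-badge', class extends HTMLElement {
    connectedCallback() {
      this.attachShadow({ mode: 'open' }).innerHTML =
        '<style>span { color: crimson; }</style>' +
        '<span><slot></slot></span>';
    }
  });

After that, <fancy-badge>beta</fancy-badge> just works, and outer span rules can't touch it.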

I'm currently messing about with Yew[1], which is an entire SPA framework in Rust/WebAssembly. There's about 5 lines of bootstrap JS involved (and fiddling about with bundlers, which is much more sane since Rollup[2]).

[1]: https://yew.rs/docs/ [2]: https://rollupjs.org/guide/en/


I’m basically going to treat the whole web platform as a compilation target. As long as the result is accessible, I’d like to forget about the “big three” for some time.


Realistically, the standards committees follow where the implementations lead. And the problem with implementations is that all the large companies making them would much prefer to be either monopolists or drastically advantaged by the choice of technology. Whose native toolkit should it most resemble: Microsoft's, Apple's, or Google's?

The web is a mine-strewn no-mans-land between competing factions, which accounts for a lot of the strangeness.

(Ironically, the tech is leaking in the other direction as web apps become more popular on both mobile and desktop)


Isn’t Firefox at least partly rendering multi core now? Wasn’t that the reason Rust was built by them?


Firefox's CSS selector matching is now multi-core, written in Rust, imported from Servo. Other things are a work in progress.


The new engine is still a WIP which you can download and play with (see MDN). But I've read that parts of FF proper now use Rust. The situation is murky to me.


> multicore architecture

This works if the browser is the only application running on the machine, so it limits possible use cases (like Electron) quite a lot.

> low memory consumption

Sciter's Quark (https://quark.sciter.com) - a 5 MB binary and 28 MB RAM by default - is probably the lowest you can get for an HTML/CSS/script engine.


> This works if browser is the only application that runs on machine.

I don't understand what you mean - individual applications within a multi-application system can absolutely utilise multiple cores. It isn't true that it won't work. The kernel schedules all the tasks of all applications across the available cores.

And even if you have many applications running, most of them don't need multiple cores all the time, so it makes sense for each application to be able to use as many cores as possible, when the user is looking at it and doing something.

For example, if I'm using my text editor and browsing docs: my text editor only needs a tiny fraction of a core every now and again, but each time I navigate it'd be great if the browser temporarily used all sixteen cores to do that as quickly as possible.


While true, one concern is that the proliferation of threads/processes per application could reach the point where context switching and thread management alone consume a lot of the capacity. IIRC supercomputers often need special interconnects and software to minimize the distribution costs of so many cores.


"supercomputers" are designed (typically) to run CPU-bound tasks.

User applications with GUIs are almost never CPU-bound, so the comparison is not really appropriate.


Common misconception. Most supercomputer tasks are communication-bound or memory-throughput-bound, and devs just try to fill the resulting slack with more work.


> User applications with GUIs are almost never CPU-bound

I watch live streams of music producers working with DAWs and they actually end up needing to offload synthesis and filtering to multiple separate machines as they can't do it on one.

Also, if you think things aren't CPU-bound, then why do people spend hundreds or thousands of dollars on CPUs?


I'm the author of a cross-platform DAW. I know what they are capable of. Also, that story of multiple systems is becoming a thing of the past in these days of 32-core Ryzen Threadrippers.

The vast majority of what the vast majority of users do on computers remains non-CPU-bound. I chose not to enumerate all the exceptions because, as usual, they are exceptions.


I'd assume the author of Ardour knows what they are talking about! I remember running most things just fine on a dual Athlon MP, Cubase and Ardour. Granted, this was quite a while ago.


This seems contradictory to me.

Most people aren't doing CPU-bound work and there's no point parallelising your application... but people are buying 32-core processors? Why do you think they're doing that?

I'm just a normal developer and I am very frequently CPU-bound in my work, and I see many other people in other fields also CPU-bound. I'd absolutely love to have a massive Xeon in my laptop. I think it'd have a genuine impact on my productivity.


"Just a normal developer" ... you're already in a tiny minority of computer users.

And sure, as someone who regularly spends 4 minutes waiting for my 16 core threadripper to build my application, I appreciate the sentiment.

But ... this is very, very far from what the majority of computer users do, and quite far from what "supercomputers" are used for (frequent, repeated, long-lived CPU-bound workloads).


> User applications with GUIs are almost never CPU-bound, so the comparison is not really appropriate.

Video conferencing, JS-heavy web browsing, video editing, and gaming are all examples of CPU-intensive applications that are becoming increasingly common.


Of the things you listed, only JS would count as "CPU-intensive" in the strict sense of the CPU as the microprocessor; the others are typically bound on specialized hardware -- hardware video encoders/decoders, and GPUs.

And, having done plenty of video dev work recently: beyond the codecs and GPUs, it's also memory bandwidth, lock contention and the data path that are really key to getting good video performance. We almost never decode or encode video on the CPU anymore, if we can avoid it.

Arguably with JS heavy web browsing you're also looking at memory bandwidth or lock contention as a key limitation -- garbage collection in the JS interpreter, fetching and dispatching opcodes, etc. CPU is only part of the story.

Life just became so very complicated in the 90s when CPU performance started to completely outstrip main memory and bus performance.


1) I don't tend to think of games as applications with GUIs, but you're right, they really are, and yes, they can be CPU-bound sometimes. However, a game that was actually CPU-bound would be very unpleasant to play. That's not true for the applications that run on supercomputers: they just take longer to run than someone would like.

2) video conferencing is not CPU-bound unless you're doing 1-to-many/many-to-1 feeds locally.

3) JS heavy web browsing ... 'nuff said.

4) Very few people do video editing that is CPU-bound. For professional video editors, certainly, it is an issue.

Clearly, I should have said:

  "The majority of applications used by the majority of users are rarely CPU-bound"
But more importantly: even the exceptions are really nothing like "supercomputer workloads".


> While true one concern is that proliferation of threads/processes per application could reach the point where context switching and thread management alone consume a lot of the capacity.

Have you ever played around with BeOS or its community-driven successor Haiku? They are what's called "pervasively multithreaded" such that every single widget in the standard UI toolkit has its own thread. It's literally possible in BeOS/Haiku to lower the priority of a single radio button.

This is an OS designed around early '90s era dual CPU hardware. Considering that they made this choice at that time and still ended up with an OS that's remembered primarily for its performance and responsiveness when multitasking it seems hard to believe that threading overhead itself is too big of a deal even at extreme levels.

> IIRC super computers often need special interconnects and software to minimize the distribution costs of so many cores.

This is for entirely different reasons. The specialized supercomputer interconnects exist to minimize the cost of sharing data across nodes. Basically they provide DMA over a network fabric. It only really matters once your workload is too large to throw a bigger single machine at, but also impossible to split into smaller chunks that would fit within individual nodes.

If you're familiar with NUMA in multi-socket systems (or first-gen Epyc/Threadripper) you can think of the supercomputer interconnects as basically extending that principle up another tier to encompass multiple independent servers.


> It's literally possible in BeOS/Haiku to lower the priority of a single radio button.

This is not true, individual widgets most definitely do not have their own threads. Windows each have their own threads, which is indeed different from virtually every other OS which does not have this (or even allow off-the-main-thread UI operations.)


WPF can have a different Dispatcher per window but, indeed, that's a fairly rare capability. I can't say I've ever needed to use it, though.


Win32 in general allows multiple threads each with its own UI event loop and associated windows.


It depends... On a Raspberry Pi, for example (an ARM device), parallel compilation (make ... -j 4) makes the UI completely non-responsive.

The only reasonable option for UI rendering these days (high-DPI monitors) is to delegate rasterization to the GPU; the CPU is for running application logic. If you do UI on the CPU, it will be the only task your CPU does. E.g. animations require a stable frame rate (60 FPS or more), so the UI must run at the highest possible priority on your OS.


-j4 on a Pi freezes the UI because you're using 100% of all cores continually for minutes - smaller chunks of work and better scheduling (or the "trick" of -j3) fix this, and it's doubtful that a browser uses 100% of CPU for any length of time.


I know how to tackle such problems.

My comment is rather about "multi-core UI". Many UI tasks have almost real-time requirements (animations, synchronized rendering, video), so they behave like "make -j 4", meaning the device is just busy rasterizing pixels and not doing anything else meaningful.

And all that while we have a dedicated, powerful (YMMV) GPU that is supposed to do this for us.


I don't think you can run layout code on a GPU effectively. It isn't a data-parallel problem.

Obviously actual pixel rendering is done on a GPU - why would you do it anywhere else - but that's not what we're talking about here.


It's real-time but not particularly CPU-hungry. Play back an animation and see how much CPU time it actually uses - you'll end up yielding most of the time, so other apps can use the remainder just fine.
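
A rough way to see this yourself (just a sketch; the browser's built-in profiler gives far better numbers): drive an animation with requestAnimationFrame and log only the frames that miss the ~16.7 ms budget - on most pages there are very few, and the main thread sits idle in between.

  // Warn only when a frame is dropped; the rest of the time the thread yields.
  let last = performance.now();
  function tick(now) {
    if (now - last > 20) {
      console.warn('dropped frame: ' + (now - last).toFixed(1) + ' ms');
    }
    last = now;
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);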


If make -j4 makes your UI unresponsive, it is probably due to swapping. The CPU scheduler lowers the priority of CPU-intensive tasks over time.

It could be that you need to “renice -5” X11 or something (I think some distributions used to tweak the UI runtime priorities. Not sure if they still do.)


>On Raspberry Pi for example (ARM device) parallel compilation (make ... -j 4) makes its UI completely non-responsive.

Have you tried using nice(1) to set the make's priority lower?


> Sciter's Quark (https://quark.sciter.com) - 5 MB binary and 28 MB RAM by default is probably the lowest what you can get for HTML/CSS/script engine.

That doesn't even run standard HTML/CSS/JavaScript. That fact alone would make me completely ignore it for any possible application.


I really like Sciter. The different script language may be annoying the first day, but you'll get used to it, especially after using the Reactor component. It has the best of both worlds: very good performance and separation of the app code and layout (in my case: Go | HTML/CSS). It took me 4 days to write something in Go+Sciter that took a full month in Qt (C++), and the resource usage is almost the same (using the Skia renderer: Linux: 50 MB RAM, Windows: 25 MB RAM). I just wish the Nim bindings were up to date so that I could get binaries of maybe <1 MB + 5 MB (5 being Sciter's size); currently with Go it is 13 MB + 5 MB.


> very good performance

This may very well be true depending on the use case, but I'd be very surprised if V8 didn't just blow Sciter out of the water with regard to performance. You can be a genius and come up with a comparable engine that's 50x smaller (e.g. QuickJS), but V8 is not big for bigness' sake; it's also ~35x faster according to this: https://bellard.org/quickjs/bench.html

I'm mentioning QuickJS here because I couldn't find benchmarks comparing Sciter Script and V8; if anybody has some I'd be interested in seeing them.


> V8 doesn't just blow Sciter out

And it really does not - in most typical UI tasks Sciter is faster.

The reason lies in different goals. Sciter is an embeddable engine; script there is glue: take the output of one native function, transform it, and pass it as input to another native function.

In places where the UI needs maximum performance, applications use native functions in C/C++, Rust, Go, etc. This keeps the script VM compact and achieves maximum performance without sacrifices. Why do you need a JIT or WebAssembly (and tons of associated binaries) if you can compile what you need to native code with battle-proven compilers?

So good portions of jQuery were implemented natively. React[or] is also natively implemented: JSX is part of the script syntax (and so uses the built-in native compiler), and reconciliation (Element.merge()) is also native. Thus you don't need all that nightmare of TS-to-JS compilation, then loading the React script, then compiling, then JIT-compiling/warming up, etc.

As for pure script performance: slower than V8 but faster than, let's say, Python.


> And it really does not - in most of typical UI tasks Sciter is faster.

I'd really like to see some benchmarks about this, could you maybe make Sciter run the test suite QuickJS is benchmarked against and report back? https://bellard.org/quickjs/bench.html

> In places where UI needs maximum performance applications use native functions in C/C++, Rust, Go, etc.

That's cool, but I'm talking about JavaScript performance here; these kinds of cross-language computations can totally be coded on top of Electron and, I would imagine, any other mature similar framework.

> if you can compile what you need with battle-proof compilers to native code?

The compiler might not be the problem here: not even the Chrome team manages to write completely memory-safe C++, for example. That's trivial in JavaScript, and it's one reason, among others, why one might not want to write everything in C++.


> QuickJS benchmarks

Typically we do not do such tasks in script. Indeed, who needs ray tracing in script when there are native libraries for that? Number-crunching tasks in script will be slower than native ones by an order of magnitude, so why bother?

For comparison you can try https://notes.sciter.com/, which uses Sciter not just for the UI; its logic layer also uses script (https://github.com/c-smile/sciter-sdk/tree/master/notes/res). The application has UI and complexity similar to VSCode, Slack, etc.

It is hard to come up with formal benchmarks, but startup time, CPU/RAM consumption and responsiveness can be estimated.


> Typically we do not do such task in scripts. And indeed who would need ray tracing in script if you have native libraries for that? 100% that number crunching tasks will be slower in order of magnitude than native ones so why to bother?

Those kinds of synthetic benchmarks aren't there because ray tracing is what you would use JavaScript for, but to give a measurable quantity for the speed of the interpreter.

Put simply, if V8 produces a number that's 10x higher than another interpreter's, you can be sure it's going to be faster than that other interpreter pretty much across the board, by a significant margin.

By the way, V8 can compile some optimized JavaScript that executes at half the speed of optimized C++, not an "order of magnitude" slower. Can Sciter do the same? I really don't think so; happy to be proven wrong with some numbers.

So can you run the benchmark suite with Sciter or not? Is Sciter even able to run it properly? Do you think that benchmark does an acceptable job at telling which interpreter is faster between the tested ones or not?

> The application has UI and complexity similar to VSCode

Sure it has. Come on man, be real: there isn't even an app menu loading in this thing: https://i.imgur.com/MXw1LZX.png - this thing is crap.

These kind of claims are an insult to all the people working on VSCode.

> It is hard to come up with some formal benchmarks but startup times, CPU/RAM consumption, responsiveness can be estimated.

There are literally already-written benchmarks that one just runs and that output NUMBERS; how is that hard? Compare that to measuring "responsiveness".


Check this: https://sciter.com/wp-content/uploads/2020/06/ide.png

That one has a menu; check menu.htm in that sample: https://github.com/c-smile/sciter-sdk/tree/master/samples/id...

> So can you run the benchmark suite with Sciter or not?

Yes, I could do that, but the purpose is not clear. Compiled C versions of those benchmarks, run inside Sciter, will definitely be faster.

These functions have nothing to do with UI per se.

Yet you can extend Sciter with native DirectX or OpenGL code (see: https://sciter.com/sciter-and-directx/), which will definitely beat any possible browser/Electron solution. So script benchmarks are orthogonal to the reality.

They make sense in the Electron case, as JS is the only realistic option for writing app code there. But not in Sciter's case. It is not a browser, for that matter.


You don't need Sciter's script to do the heavy stuff; you can do it in another language and feed the data to Sciter through its script language. In my case I write Go for the app's backend and Sciter script to control the HTML/CSS UI. To send data to the UI, I define a function in Sciter script and call it from Go with `window.Call("the_script_function", args)`. To pull data from the backend, I register a Go function as a script function with `window.DefineFunction("function_name", myFunc)` and invoke it from Sciter script through the view namespace: `var result = view.function_name(args);`.


Sure, I suppose Sciter makes this easier to code, but one could do the same in Electron as well; that doesn't seem a good enough reason not to go with proper HTML/CSS/JavaScript rather than whatever Sciter implements.


Well, applications that use Sciter UI are native ones: https://sciter.com/#customers

And applications that use Electron are not, simply because the only reasonable option (for many reasons) for writing code there is script. So each such application ships a full-blown web browser and web server (Node.js) under the hood.


What's your definition of "native" though? Sciter interprets "scripts", Electron interprets JavaScript; I'd say both produce (to varying degrees) interpreted applications. Either of them can run code written in other languages, again with varying degrees of ease, e.g. https://keminglabs.com/blog/building-a-fast-electron-app-wit...

Last time I checked, Evernote, mentioned in the page you linked, renders notes under macOS using WebKit; why don't they use Sciter?


> What's your definition of "native" though?

Let's take a typical native GUI calculator application.

The code a) creates a window by calling CreateDialog(..., template, ...). The template sits in an .RC file; that is the UI layout and structure definition.

Then you have b) event-handling code and c) so-called business-layer code (a.k.a. data-model code) that does calculations whose results end up in the UI again.

Same thing in applications that use Sciter; it is just that the UI definitions are not in some proprietary format but in HTML/CSS. The rest is the same: event handling is done either in script (which ends up in native calls) or directly in native code - Sciter allows event handlers to be defined in native code directly. So Sciter applications are native in this regard.

As for Evernote...

Evernote 2.0 used what would later become the Sciter engine.

Please check my article: https://notes.sciter.com/2017/09/11/motivation-and-a-bit-of-...

At some point they hired a technical director who made the decision to switch to WebKit. No one knows the reasoning behind that (IMO very bad) decision, but the company lost its innovation momentum around that time.

My https://notes.sciter.com/ is simply proof that you don't need WebKit for those tasks.


> but I'd be very surprised if V8 doesn't just blow Sciter out of the water with regards to performance

V8 in Electron comes bogged down with Chromium and its bloat. Sciter's engine does not. For building UIs, that's a huge difference.


Yes, it uses its own script yet supports HTML5 and CSS2.1 + some CSS3 modules.

But if you just need to create some relatively small utility - let's say calculator.exe - then it is pretty adequate for the task.

If you want to run something that is designed for a web browser, then you need a browser and a server (Electron or Atom, for example).


> But if you just need to create some relatively small utility, let's say, calculator.exe then it is pretty adequate for the task.

If making a calculator with web technologies is more important to me than making a very resource-efficient one, why would I _ever_ use Sciter, which gives me neither proper web technologies nor an actually resource-efficient calculator?


That's probably true for a calculator. But if you need some side application for communication (e.g. Slack, Skype, whatever) accompanying your main workflow, then it is better if it consumes as few resources as possible.

But even for a more or less feature-rich calculator, e.g. something like a personal accounting/finance utility, you'd better have a flexible UI.


> then it is better if that will consume as less resources as possible.

Which would automatically mean Sciter would not be a good fit for this.

> you'd better have flexible UI.

If that's more important, then why not go with proper web technologies?


> 5 MB binary and 28 MB RAM by default

I wonder how WPE [1] compares to that.

[1]: https://webkit.org/wpe/


It's always nice to see you around! Usually I catch threads too late to participate.

Sciter is a really cool technology. However I've always wanted to ask you - do you ever feel that your deviation from standards has reduced your market? I was considering Sciter for a project previously, but ultimately decided against it because it required me to write two separate implementations for desktop and browser.

There are obviously benefits to deviating from HTML/CSS/JS; they're very much products of history. Sciter's native (and seamless) custom HTML tags, some built-in preprocessing, being more immediately feature-complete - all just wonderful. I'd just like to hear your specific reasoning, though.

And if you're able to comment on it, how many people maintain this project? It seems like a huge task, but most of the discussions I've seen hint toward it just being yourself.

Finally, how is accessibility support? This seems like a big point, but it's not directly addressed in the documentation I've seen. In one forum post I saw you say that Sciter assigns some screen reader semantics based on a custom behavior tag in the CSS?


> deviation from standards has reduced your market

Just an example of why that happens: Sciter got flexes and grids 10 years ago, as they are a must for desktop UI; the web platform only got them in practice in the last couple of years. And it is simply too late to change anything - many customers rely on them. But it's quite easy to handle this using @media sciter {...} sections, as the features are easily transformable: https://terrainformatica.com/w3/flex-layout/flex-vs-flexbox....

The same goes for script: classes, namespaces, symbols and async/await were in Sciter 7 years ago - I cannot change too much there. Support for existing customers is critical for me.

> how many people maintain this project?

I built Sciter by myself. Besides customers, there are three independent developers (in Russia, Brazil and France) who requested and have access to the Sciter source tree; I accept patches from them and from customers. Three AV vendors independently performed security audits of the code, and Symantec (Norton Antivirus) contributed the accessibility support implementation, as Section 508 compliance was a must for them.


Thank you for the quick response!


I used to write STB apps for an SVG ekioh browser, probably over a decade ago now. We were in the UK but had no UK customers.

These guys are obviously immensely impressive, but they're only one part of the chain. It frustrates me to this day that the UK market doesn't allow you to pick the DVB+IPTV STB you want and just expect it to work with any and all services.


> today's browsers all render on a single thread

That's not true anymore with WebRender; it's been available in FF for a while.


In the benchmarks provided on Flow's site, WebRender was still about 5x slower than Flow, though about 2x faster than regular FF.


That rationale doesn't seem to add up: if you render the same content in 1080p and 4K, the only change is the amount of memory bandwidth spent filling pixels. Multicore support in the layout engine will help speed up layout calculations (and maybe CSS), but the amount of work involved in that ought to be constant regardless of the display resolution.


If you just scaled up the image, yes, but rendering SVGs, fonts, and effects like shadows at 4x the resolution isn't just a matter of memory bandwidth.


It also supports full Google Mail [1]. While I don't think it will ever be competitive with Chrome or Firefox, it's super cool to see it's actually possible to start a new web browser project and get it to this point.

[1] https://www.ekioh.com/devblog/full-google-mail-in-a-clean-ro...


Yes, this is really surprising. Browsers have in many ways gotten more complex than operating systems, so creating a new one from scratch seemed off the table for now. A big player like Microsoft scrapping Edge certainly helped to strengthen that impression.

It's certainly not as compatible with the modern web as Chrome and Firefox, but hey, it's doing much more than I'd have expected.


I like the author's writing style too... This seems to be as much an exploration as anything:

>I’ll be the first to admit it’s not perfect yet – some vertical alignment is wrong in a few places, Hangouts doesn’t work, and our ‘contenteditable’ support is minimal. But I don’t think these detract from the achievement – as far as I’m aware, no browser engine other than Webkit, Blink and Gecko can run full Google Mail, and these have all had over twenty years of development work on them.


Why does he call it "Google Mail"? Isn't "Gmail" the official name (and the most widely used one)?


In many countries such as the UK (where the author is from) and Germany, it was called Google Mail due to trademark disputes with local companies that had the Gmail name first. I think they've sorted most of these out with licensing agreements or by just buying the rights to the name, but a lot of UK people even still have @googlemail addresses.


That's interesting trivia, thanks!


The author's blog site seems to be down; here's an archive.org link:

https://web.archive.org/web/20200613124642/https://www.ekioh...


And this is the project's homepage: https://web.archive.org/web/20190623202921/https://www.ekioh...

A quirksmode post about the project: https://www.quirksmode.org/blog/archives/2020/01/new_browser...

And the project's twitter handle: https://twitter.com/flowbrowser


From https://www.quirksmode.org/blog/archives/2020/01/new_browser...

>Is Flow open source?

>It’s not. There’s no current plan for that as we don’t have a large corporation backing our development.


Thanks for the links. I was looking everywhere for the information that ended up being in the "quirksmode" post. So far that post has had the most value in answering questions.


I wonder if they'll ever open source it, sounds interesting enough.


In the quirksmode interview they say they will not open source it, since they are a small company and don't have the resources.


Congratulations to the Flow team! Very nice work.

The world needs more browser engines (dang it Microsoft!), so seeing this development makes me really happy.

I've just started working on ACID2 compliance in my own engine a few days ago, and passing all three tests is no small feat :)


> a new, clean-sheet browser

I don't think that's entirely true. It may have its own HTML/CSS rendering engine, but apparently [0] it uses SpiderMonkey for JavaScript. Still very impressive, of course.

[0] https://www.quirksmode.org/blog/archives/2020/01/new_browser...


I think it's reasonable to not roll your own JS engine when Spidermonkey and V8 are intended to be integrated in other projects.


Of course.


Amusingly, I went to http://acid3.acidtests.org/ and was surprised to find that the latest Chrome 83.0.4103.97 (Official Build) (64-bit) on MacOS 10.13.6 apparently doesn't pass the acid3 test!

https://i.imgur.com/ZW4BYH2.png

Now I'm curious to know which 3 features can be ignored by the dominant browser...

EDIT: Oh my, I suspected it was one of my extensions. Nope. Now I'm getting a "You should not see this at all" in a fresh Chrome guest: https://imgur.com/CrAYhpI

I burst out laughing because it was so unexpected. I wonder what's going on.


"Acid3, in particular, contains some controversial tests and no longer reflects the consensus of the Web standards it purports to test, especially when it comes to issues affecting mobile browsers. The tests remain available for historical purposes and for use by browser vendors. It would be inappropriate, however, to use them as part of a certification process, especially for mobile browsers."

Wikipedia gives this as an example of a standard that changed: https://github.com/whatwg/dom/issues/319

Edit: there's an updated version at http://wpt.live/acid/acid3/test.html


And the updated one is in https://github.com/web-platform-tests/wpt/tree/master/acid for those wanting to look at the history (though it only includes the post-fork history, not the earlier updates).


To expand a bit more:

> Wikipedia gives this as an example of a standard that changed: https://github.com/whatwg/dom/issues/319

That's the reason why tests 23 and 25 fail (both because of the "wrong" exception being thrown); https://github.com/web-platform-tests/wpt/commit/3cbdbaa7ca9... is the Acid3 change for that.

The other test that fails, test 35, is down to https://www.w3.org/Style/CSS/Tracker/issues/317; https://github.com/web-platform-tests/wpt/commit/e6f63a6e69e... is the Acid3 change for that.

(The reason they're simply commented out is it follows the precedent set by Hixie when he was still maintaining Acid3, c.f., https://github.com/web-platform-tests/wpt/blob/master/acid/a..., plus there are other feature-specific tests for those features in WPT.)


> A quick note about the score: Acid3 dates back to 2008 and the web platform specs have changed over time so a perfect score, using the version hosted on acidtests.org, is now only 97/100. The version hosted on the web platform tests site has been updated to match the specs and so, using that, browsers should score 100/100. Similarly, on all browsers, Acid2 on a retina screen renders slightly differently, so the image above was taken on a non-retina display.

From the original blog post about Flow.


The web evolves, and some features are deliberately changed in backwards incompatible ways - eg. third party cookies, cross site resources, no audio before user interaction, removal of cross-thread arraybuffers, etc.

Browser makers seem to have this idea that if a feature is rarely used, it's fine to redefine or break it in the name of simplicity...


Browsers are user agents; the changes you mention all seem to the benefit of user experience and security. Standards are codification of things browsers have already done, or have attempted to do.


The web (html/css/javascript) standards have become so wide and complex that it's very hard to build a browser from scratch. It gets worse by the day, as more features get added to those standards.

This makes me wonder if you could define a subset of those standards that would support 90%+ of the use cases, would be simple and efficient to implement and would be supported by existing browsers. You could then push this light version of the web as a new base for a future standard.

The problem, of course, would be that you would have to keep a different browser around to render old sites. Still worth thinking about, though. It doesn't seem feasible as a long-term strategy to have browsers support thousands of APIs, and counting.


> The problem would of course be that you would have to keep around a different browser to render old sites.

Not really, you could polyfill the missing features like it's commonly done with many unimplemented new ones.
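
The classic pattern, sketched here with Element.matches standing in for whatever API the slimmed-down browser dropped:

  // Restore a missing API in terms of what the engine does provide,
  // so old pages keep working on a smaller engine.
  if (!Element.prototype.matches) {
    Element.prototype.matches =
      Element.prototype.msMatchesSelector ||
      Element.prototype.webkitMatchesSelector;
  }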

I'd prefer starting from scratch with a completely new, simple language that can be translated quickly to HTML/CSS/JS. You'd have to remove from the W3C those interested in being gatekeepers of the web and in sabotaging any effort to make browsers simpler, though.


> simple language

What is conceptually wrong (or complex) with current HTML/CSS? What would you change?


Here's hoping though that the sites you actually want to visit, such as HN, don't need all that crap. As such, web bloat can serve as a negative marker of sorts ;)


> Here's hoping though that the sites you actually want to visit, such as HN, don't need all that crap.

Though HN is all tables, and table layout is still largely undefined behaviour in the specs.


Please see my comment above.

https://github.com/runvnc/noscriptweb

That GitHub project is just an idea and I am probably never going to actually code it because I have another side project. But I think the spec might be interesting to people.


Some feedback: from the README and the repo, I don't get the idea of what you're attempting. You should try to be more concise, write an abstract, or move some of the text in README.md into a DETAILS.md.


This would be good for promoting and "testing" one's progressive enhancement and/or graceful degradation.

Usually that is tested by using some older version of one of the major browsers, often IE11, but it would be great to have a more well-defined subset to target.


I think what we need is basically Markdown (with good syntax highlighting) and WebAssembly (with a simple canvas-like API), where the two are totally separate so the wasm stuff can't interfere with the Markdown loading.
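
A hypothetical sketch of the glue such a split might need (app.wasm and its render export are invented; the point is that the module draws into its own linear memory and never touches the document):

  async function boot() {
    // The module fills a 300x150 RGBA buffer at offset 0 of its memory;
    // the host just blits it to a canvas - no DOM access for the module.
    const { instance } = await WebAssembly.instantiateStreaming(fetch('app.wasm'));
    instance.exports.render();
    const pixels = new Uint8ClampedArray(instance.exports.memory.buffer, 0, 300 * 150 * 4);
    const ctx = document.querySelector('canvas').getContext('2d');
    ctx.putImageData(new ImageData(pixels, 300, 150), 0, 0);
  }
  boot();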


The problem is that every site needs a different 90% of the spec.


Right, but there's still a lot of overlap between features. Take layout, for example: if you want to arrange boxes using HTML and CSS, you have quite a few choices - at least tables, floats, flexbox or CSS grid. Imagine if there were only one of those, covering all possible layouts.


> Only one of those that would cover all possible layouts.

In fact, 10 years ago there was such an idea on the W3C style mailing list: https://terrainformatica.com/2018/12/11/10-years-of-flexboxi...

A single CSS property, `flow`, that defines the layout manager in classic Java AWT terms: https://docs.oracle.com/javase/tutorial/uiswing/layout/visua...

