Chrome 69: “www.” subdomain missing from URL (chromium.org)
1572 points by gouggoug on Sept 6, 2018 | 876 comments



Considering a subdomain "trivial" is ridiculous... there's a difference between "www.example.com" and "example.com". Not only can they serve different sites, they can even have different DNS records!

It seems that "m." is also considered a trivial subdomain. So when a user clicks a link to an "m.facebook.com" URI, they'll be confused why FB looks different when the browser reports it's on "facebook.com".

I sincerely hope Firefox doesn't follow suit.


This is certainly subverting the domain name system. I can't see the value or the gain in security from this.

(If you want to put focus on the domain, then display the host-part with less contrast, i.e. grey, but don't hide any potentially vital information. Otherwise, put out an RFC defining "www" as a substitute for "*", or a zero-value atom, in order to guarantee consistent behavior.)

Edit: There are also legal concerns with catch-all domains in some countries. Blurring the lines certainly doesn't help.


A proposal for better security with domain names:

The domain name system has been around for decades and it's a clever and proven system. It can – and should – be taught in school; knowledge of it, while not difficult to obtain, is arguably essential in our times. Additional ambiguity in this is probably not what we want.

Arguably, the most serious problems arise from mixed alphabets with Unicode domains and look-alike characters/glyphs. This could be addressed by a) going back to codepages (Unicode subranges) and defining a valid subset for each range, and b) enforcing a domain name (hostname and domain) to be in a single codepage. Clients should derive the codepage from the Unicode range and generate a codepage-identifier, which may be displayed as a badge identifying the respective range. And, of course, any mixed domains should be regarded as illegal and invalid. (We may even want to make this codepage-identifier a mandatory particle of any URI, preceding the hostname.)
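A minimal sketch of what such a single-script check could look like (Python; the first word of each character's Unicode name stands in for the real Unicode Script property, which a serious implementation would use instead):

  import unicodedata

  def script_of(ch):
      # First word of the Unicode name ("LATIN", "CYRILLIC", "GREEK", ...)
      # roughly approximates the character's script.
      return unicodedata.name(ch).split()[0]

  def single_script(label):
      # Digits and hyphens are script-neutral in hostnames.
      scripts = {script_of(ch) for ch in label if ch not in "0123456789-"}
      return len(scripts) <= 1

  print(single_script("example"))  # True  (all LATIN)
  print(single_script("exаmple"))  # False (Cyrillic 'а' mixed into Latin)

The derived script name could double as the badge/codepage-identifier proposed above.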


> Arguably, the most serious problems arise from mixed alphabets with Unicode domains and look-alike characters/glyphs.

No way. The most serious problem is that hostnames do not enforce any binding to a real-world identity that users can understand (nobody inspects certs), and that the most trustworthy component of a hostname is the second-to-last section (right before ".com"). Humans tend to look at the front of the URL, making "www.bank.evil.com" a mind-bogglingly effective phishing technique.

Homoglyphs are almost always a sign of bad behavior and can just be banned to a large degree. The fact that "foo.com" or "foo.evil.com" are not necessarily owned by company foo is much worse.


Regarding the parsing of URLs: this is a common, but mostly counterintuitive, argument. Take, for example, names in most western countries, addresses (street-zip-city-country), etc. Most of our important identifiers work this way.

Regarding the lack of identity binding: on the other hand, this has been one of the most important features of the web from the very beginning. Also, there is no way to set up a system that will attribute a name to a single person in a readable and intuitive way. (E.g., names fail to do so.) Arguably, this should be left to (optional) extensions.

I'd argue the knowledge required to parse a URI safely may be conveyed in a couple of minutes. Why not enforce this knowledge? Why not have a URL-parsing note on the start screen of any browser? Why dumb down the system and introduce ambiguity – and by this even more insecurity – instead of educating users? URL-parsing is a vital skill, which can be acquired in less time than memorizing a basic table of partial addition results. Why do we still try to teach addition, if we can't teach URLs?


Except this is already broken by multi-part TLDs.

foo.com is owned by "foo"

foo.evil.com is owned by "evil"

foo.co.uk is NOT owned by "co"
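This is exactly why real software doesn't guess; it consults the Public Suffix List. A rough sketch of the idea (the suffix set here is a tiny hypothetical stand-in for the full list at publicsuffix.org):

  # Tiny stand-in for the Public Suffix List.
  SUFFIXES = {"com", "org", "uk", "co.uk", "at", "co.at"}

  def registrable_domain(host):
      labels = host.lower().split(".")
      # Find the longest known suffix, then keep one extra label:
      # whoever registered that label is who you're talking to.
      for i in range(len(labels)):
          if i > 0 and ".".join(labels[i:]) in SUFFIXES:
              return ".".join(labels[i - 1:])
      return host

  print(registrable_domain("foo.com"))       # foo.com    -> owned by "foo"
  print(registrable_domain("foo.evil.com"))  # evil.com   -> owned by "evil"
  print(registrable_domain("foo.co.uk"))     # foo.co.uk  -> NOT owned by "co"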


The best security change we could make, imo, is rewriting domains so that they look like com.evil.bank.www/now/urls/go/from/most/specific/to/least
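Mechanically, the rewrite is trivial – it's purely a display-order question. A quick sketch of what such a most-specific-last notation could look like:

  def reverse_notation(url):
      host, _, path = url.partition("/")
      # Flip the host labels so the whole URL reads from least
      # specific to most specific, left to right.
      return ".".join(reversed(host.split("."))) + "/" + path

  print(reverse_notation("www.bank.evil.com/now/urls/go/from/most/specific/to/least"))
  # -> com.evil.bank.www/now/urls/go/from/most/specific/to/least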


You're in good company. Tim Berners-Lee said something similar when reflecting on what he would do differently if given the chance:

> Looking back on 15 years or so of development of the Web is there anything you would do differently given the chance?

> "I would have skipped on the double slash - there’s no need for it. Also I would have put the domain name in the reverse order - in order of size so, for example, the BCS address would read: http:/uk.org.bcs/members. The last two terms of this example could both be servers if necessary."

From http://www.impactlab.net/2006/03/25/interview-with-tim-berne...


Yes. Think how you have to read "mobile.tasty.noodles.com/italian/fettuccine" to determine where it goes.

First start at the slash and work your way left: "com -> noodles -> tasty -> mobile" - then jump back to the slash and work your way right: "italian -> fettuccine".

This is counterintuitive and I doubt most users understand it. "com.noodles.tasty.mobile/italian/fettuccine" makes more sense to me.

Also, I think TLDs like "com" and "edu" and now "io" and "cool", etc, are misguided. I wish we had "country.language" as the only TLDs. For instance, "us.en.apple.www/mac". I see several advantages.

One, if "us.en.apple" and "uk.en.apple" were different entities, it would make legal sense, whereas "apple.com" and "apple.cool" being different entities makes no sense. Two, a use would likely notice if they ventured outside their usual TLD(s), and be less surprised by the different entity. Three, these TLDs could have rules about allowed characters; eg, only ASCII in "us.en". This would make homoglpyh attacks much more difficult.


I'm not so convinced. There would still be "uk.co.bbc" and "com.bbc" pointing to the same body behind them, and any kind of confusion arising from this, like "is 'ug.co.bbc' the same?" The most important part is teaching users that the identity isn't just "bbc" with some extra decoration. Also, we have the reverse example in software packaging (com/example/disruptiveLibrary) and it isn't fool-proof either (especially if you only know of "disruptiveLibrary" and not about its origin).


Years ago, before the WWW, some parts of the world indeed ordered them that way.

* https://en.wikipedia.org/wiki/JANET_NRS


I don't see how you could enforce a hostname binding to some real-world identity. Hostnames really need to have a non-ambiguous mapping from a name to a computer (more or less), but real-world entities don't have that without using really cumbersome identifiers. Many natural persons share a name, so how do we decide who gets a hostname based on that? The same is true of corporate persons – there are many that share names. Even if there were a way to disambiguate these things, it seems unlikely that the entity in charge of this would also want to run a public registry – so how do you make that work?


Absolutely. People don't have a single coherent model of identity in the real world. It's hard to glue certs to a pile of sand. Did I buy lunch from "Tim Horton's", from "The TDL Group Corp." or from some random company with an address for a name? The answer is yes to all three, despite only buying one lunch.


On a higher level, all those considerations are about a single question: Is the Web about communication (then it's probably OK as it is), or about a viable business platform with an entry-level as low as it can be?

Whoever is without interest may throw the first stone, er, browser extension.


Both techniques are deceptively effective; I know the following might be only anecdotally relevant, but it's the most recent case of a successful phishing attack I know of:

Recently a friend of mine didn't see the lower dot on the 'e' in a URL [0], and promptly ended up inadvertently broadcasting messages to everyone on her WhatsApp contacts list.

[0] www [dot] hẹb [dot] com/coupon/


How about displaying an identicon, rendered from the domain, in the address bar? People might soon learn what the icons of their important sites look like and will easily detect if somebody is trying to phish their bank account.
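To make the idea concrete: an identicon is derived purely from a hash of the domain, so an attacker can't copy it without controlling the name. A toy sketch (ASCII "icon" with a GitHub-style mirrored grid; a real one would add color and nicer geometry):

  import hashlib

  def identicon(domain, size=5):
      digest = hashlib.sha256(domain.lower().encode()).digest()
      rows = []
      for y in range(size):
          # Fill the left half plus the middle column from hash bits,
          # then mirror for a symmetric, face-like pattern.
          half = [digest[y * 3 + x] & 1 for x in range(size // 2 + 1)]
          row = half + half[-2::-1]
          rows.append("".join("#" if bit else "." for bit in row))
      return "\n".join(rows)

  print(identicon("facebook.com"))
  print(identicon("faceb00k.com"))  # lookalike -> a visibly different pattern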


The space of easily visually distinguishable images has a certain size. Let's assume there's a deterministic, pseudorandom mapping from domains to images. For a given domain, how many plausible impostor domains are there? What's the chance that there's at least one impostor domain that happens to get the same image?

If you have 1000 distinct images, but a given domain has 5 letters that could each be replaced with any of 3 visually identical Unicode characters, then, well, the chances are very high that there exists a plausible impostor domain with the same image. I don't think this is a very workable approach.
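Running the parent's numbers (assuming each of the 5 letters may stay as-is or be swapped for one of its 3 lookalikes, giving 4^5 - 1 = 1023 impostor domains):

  impostors = 4 ** 5 - 1   # 5 letters x (original + 3 lookalikes each)
  images = 1000            # distinct, easily distinguishable identicons

  # Probability that at least one impostor gets the same image:
  p = 1 - (1 - 1 / images) ** impostors
  print(f"{p:.0%}")        # ~64%

So even under these generous assumptions, odds are an attacker can find a lookalike domain with the very same icon.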


Yes, I admit it's hard to get something like this secure. It's in the same problem space as hash functions and might require some research.


FWIW, OpenSSH already does this and calls it "randomart".


Right, didn't think of this.


We might even call it "favicon", for the fun of it… :-)


Any fake-Facebook website can copy Facebook's favicon, so that wouldn't add any security at all.

An identicon is a hash value represented as an icon. "facebook.com", for instance, may hash to a red image with a yellow line through it. While you wouldn't remember the icon initially, over time you would – or at least your subconscious would. If you ever visited a fake-Facebook, you'd immediately notice that something was wrong if the icon suddenly was green with a blue dot in it, for instance.


Not sure if serious, but no. Anyone can copy a favicon; the point of an identicon is that it's generated from the domain name, so subverting it would require an attacker to find a hash collision with a visually similar domain.


Sorry, I mistook "that is rendered from the domain" for "rendered from a resource from the domain".

However, teach users to read domain names! If users do not grasp the general concept – e.g., if the supposed identity is just "example" (possibly with some decoration considered insignificant) and not "example.com" – how are they supposed to survive? Domains have been around for more than a quarter of a century, and the Internet is actually part of our lives… There is no excuse, and there is no sense in pretending that there is no harm in not understanding the basics. That said, there are real ambiguities that have to be addressed.


Yes, it would be nice if every child would learn these basics in school.


Which would be called computer literacy. I find it both awesome and terrifying that people can successfully do jobs that require working on a computer and still be computer-illiterate. Awesome in the sense that it illustrates how good computer interfaces actually are, and terrifying in the sense that in any profession with heavy machinery, a person whose solution is "adjust switches and dials until something happens" would be told to immediately vacate the place for safety reasons.


Yes, learning to be good consumers is way more important than basic skills such as maths, reading, writing and general critical thinking /s


We do teach kids not to talk to strangers in the street and consider it quite important, I think. What's so much different about teaching them how not to get robbed on-line? It's not about being a good consumer, but about minding your own feet.


Yeah, cause moving from a text domain with no collision possible to some sort of collision-prone visual system to ensure users are able to understand the domain they are viewing seems like a great idea.

FFS, if users can't see that somedomainname.com is different from somedomanname.com, how does a randomized image of the domain name based on a hash solve this?


I'm not saying it's a good idea. I don't think it is. But it's not just a favicon.


How is the identicon designed such that it's difficult to spoof?

I know of at least one site which uses a user-selected image on the login screen to thwart phishing attempts. Because it's user-selected, it's memorable – I think more so than a password, for example. It would be hard for a scammer to spoof as well, because they don't know the image the user selected when they created the account.

Unfortunately this would probably be less notable and thus memorable if everyone did it.


I dedicated a blog post to this idea: https://vorba.ch/2018/url-security-identicons.html

Here is the discussion on HN: https://news.ycombinator.com/item?id=17947467


>enforcing a domain name (hostname and domain) to be in a single codepage

This is essentially what is already implemented in most browsers. You can't mix characters from different scripts in a domain name, except for special cases (e.g. Japanese and Latin are frequently used together and have little potential for confusion).


However, these are just "anti-phishing" heuristics. We really should make this a proper rule.


Do you have any examples of school class materials which teach stuff like this? I'd be really interested to read through them.


The reason why Google is doing this is because they are slowly trying to do away with URLs, as direct traffic is probably their greatest untapped segment.

Google is trying to get users to go through their doorway pages, which is exactly the kind of thing for which they penalize publishers.

Pay attention to when you enter direct addresses, let's say from a device/media subscription authorization page. The autosuggestion feature will often recommend Google searches, disguised as URLs, instead of helping you complete the very obvious URL.

If they help you get to the site directly, the opportunity to acquire your page views diminishes.

These behaviors are hostile toward users. I'd like to see further into their playbook to deprecate the URL as we know it.


Compare yesterday's Ars Technica piece, https://arstechnica.com/gadgets/2018/09/google-wants-to-get-...

As a comment reads there, do they want to reintroduce AOL keywords?

Edit: May we expect a non-standard subdomain "google-remote", which is more of a protocol-extension and will be also hidden?


They already reintroduced AOL keywords in 2011 with their Direct Connect "feature" for Google+, https://googleblog.blogspot.com/2011/11/google-pages-connect..., so one could go straight to Pepsi's Google+ page with +Pepsi.

They killed it off in 2014.


"Their complexity makes them a security hazard."

GTFOH! HANDS OFF!


Notably, what is the common answer to a system regarded as too complex to be handled on a general level, so that it may be considered a common risk? Authority (read: a trusted man in the middle).


Referencing the AMP URL controversy seems somewhat relevant in this context.


This needs to be a top-level comment.


> they are slowly trying to do away with URLs

They might be changing how they want to display them, but "do away with" is unsupported by the article:

> But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it as we’re figuring out the right way to convey identity.

https://www.wired.com/story/google-wants-to-kill-the-url/


Did you read the whole article?

"The focus right now, they say, is on identifying all the ways people use URLs to try to find an alternative that will enhance security and identity integrity on the web while also adding convenience for everyday tasks like sharing links on mobile devices."

My statement is clearly supported by the article. They paint a rosy picture of it, because this is a submarine piece, but they are definitely making moves against the url.


> My statement is clearly supported by the article.

You're ignoring a direct quote in favor of a Wired reporter paraphrase (one which mentions sharing links, no less). They cite an earlier effort, which was a display change. This issue is for a display change. None of this points to "trying to do away with URLs".


>"None of this points to 'trying to do away with URLs'"

Except for that "trying to identify an alternative" part. But let's ignore that, because doing so makes you comfortable.


Sorry, what? Could you expand on this? What do you mean by doing away with URLs?


From the linked article:

"I don’t know what this will look like, because it’s an active discussion in the team right now," says Parisa Tabriz, director of engineering at Chrome. "But I do know that whatever we propose is going to be controversial. That’s one of the challenges with a really old and open and sprawling platform. Change will be controversial whatever form it takes. But it’s important we do something, because everyone is unsatisfied by URLs. They kind of suck."

https://www.wired.com/story/google-wants-to-kill-the-url/

She says it's important that they do something! GTFOH! Hands off our Internet!

The problem here is that they view Chrome as their platform. They have too much market share, à la IE6. Instead of following and helping to shape standards, they are considering hijacking the project. Argh!!!!!


They seem to view the Internet as their platform, given the way they like to bully the tech sphere.


Their dominance has become problematic when they entertain concepts like this seriously. They are really growing into the monolith that we all feared.


+1 Why do they need to change anything? Of course it's going to be « controversial »!! What happened to RFCs?

Hiding the URL scheme was the first step down this path of utter stupidity, and I vividly remember the hostility and hubris of the Chrome team at the time.

We still have Firefox, but many times they just blindly follow suit.


Many users never use the url bar. They just 'Google' for websites they want to access and follow the results.


It is worse than that for some users. I've seen actual users who type/paste real URLs into Google's search box in order to go to the site. They actually had no idea that the bar at the top of the browser that said "google" (since they/someone set their default homepage to Google) was a place where they could delete "google.com" and type/paste the URL they wanted to visit instead, to actually get to the site they wanted.


You seem shocked at this, with word usage like "actual users", "real URLs" and "actually had no idea".

But how are we to expect users to know any better until general technology literacy improves?

Many people can't tell you the difference between a modem, router, OS, browser, or website.

I remember years ago sitting down with my elderly grandmother trying to show her how to use a desktop...

We are too close to our work so everything is familiar and easy.

Even the concept of moving the mouse on a table to represent moving the mouse cursor on the screen is something we take for granted.

Tell someone who's never used a mouse before to double click something to open it. You have to start way back earlier at the concept of which physical button on the mouse to use.

This turned more into a general rant about how we overestimate regular users, but it's been on my mind for a while.


> "Many people can't tell you the difference between a modem, router, OS, browser, or website."

They don't care, nor should they. How many people know how many spark plugs are in their car?

You're correct. We, the more tech-literate, take too much for granted; and most experiences and learning curves are too far over the head of the "average" user.

It's not them. It's us.


It's not about knowing how many spark plugs are in their car. It's more about buying a car that comes with a custom power adapter plugged into the cigarette lighter, never realizing that you can plug your own accessories into the cigarette lighter instead of buying your phone charger or GPS from the car company, and then not caring when they just take away the cigarette lighter and replace it with their own custom port.


Since we know all analogies break down under close inspection, I'm pushing the idea that the best analogy is actually a brief description of the event / idea itself.

So in this case:

Not displaying www. in the address bar is actually a whole lot like not displaying www. in the address bar.


And if anyone doesn't understand why that is a bad idea, maybe we should explain it to them, which might require using admittedly imperfect analogies that they can nonetheless understand.


I think the comparison to spark plugs is misleading when we talk about URLs and security.

It's more like looking in the mirror before changing lanes. It's something you need to check in order to stay safe.

Mirrors, like URLs, are just an implementation detail. But since currently driving works with mirrors, you have to learn how to use them.


The benefit is obvious in that instance. There is a very direct connection between checking your mirrors and not hitting a car as you merge or similar.

Where is the cause and effect for a URL or SSL cert? There is no learning experience.

Furthermore, as some have claimed, and I've personally witnessed: for some users, URLs literally don't exist. Just type whatever site you want into the Google box and hope you get lucky.


I think the spark plugs example is an excellent one. People used to require an extensive knowledge of how cars worked in order to have a prayer of using them effectively. Now they don't, because we realized none of that knowledge is necessary if you design the system correctly.

We have enough historical context to realize that things like parsing URLs by eye is unsafe for the general population, and always will be. The solution is to engineer that need out of existence.

You might want to consider that manufacturers have added blind spot detectors to cars as people are bad at changing lanes safely, even with all the training in the world.


When did you have to know how spark plugs work to drive a car? And isn't this why car mechanics exist? On the other hand, you had to learn at some point what an RPM gauge is... And we still have it in cars, even though you could say you don't really need it.


My car does not have an RPM gauge; instead it has two arrows that suggest when to gear up or down (it is manual).

One could say the interface was dumbed down to the minimum.


My brand new one has a lot of gauges... so I'd say my point is still valid. And I find them extremely useful, cos you can make better use of fuel if you know what they mean.


Do you seriously not know how many spark plugs are in your car? It's the same as the number of cylinders. How could you not know that?

They absolutely should care. They should be aware that when they store things in "the cloud" they are not stored on their device and are visible to third parties. They should understand what encryption is and how to use it. "I don't know what I'm doing, and I didn't get the result I wanted, but it's not my fault it's the machine" is not an acceptable statement, whether we're talking about cars or computers.


I don't even know how many cylinders I have!* Why should I care? Put key in. Press gas down. Car goes forward. Works for me.

"How do you not know that?"

Why would I need to know this? Why do I need to know what a cylinder is to drive? Is this even a logical question with electric cars now?

You are arguing what should be vs. what is.

* Well, I don't currently drive but I couldn't tell you with 100% accuracy the number of cylinders my last car had.


I guess the simile isn't entirely on the same level. You may not know how many cylinders there are in your car, like you may not know the number of cores in the CPU of your computer. They are both essentially hidden.

But you do know how many pedals there are in the car, and probably how many switches there are for the lights, and that the wiper has different steps of speed, etc. You even manage to control these few elements, because they are the user-facing elements you're dealing with: the interface. There's no need to unify the pedals into a single one and have the car decide whether it means accelerate, brake, or clutch. Doing so would alienate you from the very task of driving, from what it means and what risks are involved. Taking these few controls away from you in favor of an ambiguous I-know-it-all-so-you-shouldn't-care interface of ultimate convenience would probably not increase the security of operations.

On the other hand, we may expect you, as a driver, to know that there is an engine, that this is why the car moves, that it needs gas/petrol in order to run, that deceleration is proportional to speed, etc.

Why is it so different with anything involving a computer? Is it, because we're telling them so?


Computers are magic to the majority.

I brought up in a previous reply that mirrors – and now lights, pedals, and other controls – are directly user-facing and must be interacted with in order to get anything done. The same goes for knowing there is an engine that might need engine-y things like water and oil.

But where is the requirement that a user knows about URLs in order to use the web?

Way back when, we had AOL keywords. Now we have Google and apps and other tools that make URLs unnecessary.

My grandmother, whom I mentioned before, browses solely through bookmarks and via Google results. That a URL exists is not only an implementation detail but completely unneeded and unused in her case.

Then something like an SSL cert, when everything will seem to work just fine without one? I don't even want to imagine trying to explain that to my grandmother before sending her off to her decades-old AOL mail inbox.

Only recently, with Chrome displaying "Not Secure", have I even noticed any concern or interest amongst non-technical friends and acquaintances.


But why is it that computers are that magical? Computers have now been around for nearly 70 years. As a technology, they are as new as airplanes were in 1980. (If we include digital accounting machines with storage, they were around even before the first flight of the Wright brothers; they are older than any living person.) Computers are also what many, if not most, deal with for a living on a daily basis. If we consider users generally unfit to grasp even the basics, why is it that anyone is still admitted to their kitchen? (There are really dangerous, pointy objects there, which may cause real-life harm, and, if you have a gas oven, you may even blow up the house or the entire block. How could ordinary people tell a knife from a dish, and how could we assume that they would know where they put them? Isn't it possible that someone just wanted a glass of water from the tap and blew up the house instead?)

Also, I consider some of this very US-centric. In many parts of the world, AOL wasn't a big thing. In many languages, people are used to the fact that important parts of a sentence come at the very end, e.g., the verb, at least in some tenses. Moreover, most important identifiers go from the minor, less important part to the bigger, most significant ones. Why can't we tell users that domains work just like their postal address? (As in "street-city-country". And there are even funny ones, like "street-city-state-country", and even funnier ones, like "c/o", meaning it's not the usual addressee. Why are people able to deal with this?) If you're living in a western country, even your own name probably works like this. Why this "oh, it's magic, don't care"?

I'd say, it is mostly, because we encourage them not to care. Because we say, "Yes, that's really difficult", where we ought to say, "No, it's really simple and you ought to know." The user is still the person in charge. Pampering and flattering the person in charge into incompetence isn't apt to end well.

I'd say there's a chance to convey simple things, like: the cloud is not on your local machine; how a URL is principally constructed; or that a file is saved only when you save a file.

Edit: Returning to the obligatory-car-simile, when I did my driver's test, I had to know the intrinsics of an engine, of the braking mechanism, of the steering. I was tested for knowledge of ad-hoc technical repair. It was assumed reasonable for a driver to grasp, to memorize the details, to minutely describe them, and it was even mandatory to do so in order to obtain a license. However, it was less important to drive a car then (you could do well without this in most occupations) than it is to operate a computer nowadays.

Edit 2: And, to level up a bit, how come academics are able to correctly cite a book and page, but are unable to parse a URL – and are even flattered for the latter?


Without prejudicing the rest of your points: computers are very unlike most inventions. Computation is extremely powerful; our only working definition for what it even is relies on an intuition, called the Church-Turing thesis, that essentially says computers are doing categorically the same thing we are, but doesn't purport to explain why that's so. It looks observably true, and that's the best we have.

So, it's entirely unfair to suppose that since people got used to having tap water and so we are surprised if a person can't operate a tap, therefore they should be used to the entire complexity of computation by now.

You definitely _should not_ count machines that aren't actually computers ("digital accounting machines with storage") since those aren't Church-Turing, they're just another trivial machine like a calculator. Instead, compare the other working example we have of full-blown Church-Turing: Humans. Why aren't people somehow used to everything about people yet? People have been around a long time too. Why isn't everyone prepared for every idiosyncratic or even nonsensical behaviour from other people, they've surely had long enough right?


> Is it, because we're telling them so?

Anti-intellectualism runs deep in our society.


Some engines have two per cylinder :)


I find this interesting: the parallels up to this point. My intent is not to poke fun at anyone, but just to relook at the conversation we've just had.

We're talking about users not understanding the technology they use daily.

jwalton, in trying to give an example with spark plugs, allowed a more knowledgeable user or practitioner, mirimir, to give a more technically-correct description.

It seems to echo the main problem we are discussing in which users of a technology are not the same as those who design or know the nitty-gritty details of that technology.

Assumptions learned from day to day use in that technology (all cylinders have one plug, the google box is the only box I need) can so easily be proven incorrect when speaking to an actual expert in that field.


Well, I was just being pedantic, I suppose.

But it's arguably not such a great example, because details of engine design are generally trivial for drivers. Maybe a better example is the low oil pressure indicator. Maybe most people don't know what that actually means, but not having one can lead to severe engine damage. Years ago, I had a car with an oil radiator, and the oil line failed. So I knew to stop immediately.


Ohhhh holy cow I never realized Google might want to encourage this. Thanks!


I occasionally do that myself, and I especially suggest that non-technical users do exactly this. I can mistype a URL; Google will correct me if the site is well-known. Otherwise I risk going to a phishing website.


I just finished helping out a friend who did exactly this thing: clicked on an ad on the results page thinking it was Google's top result, and was redirected to an ESTA scam site where they lost a bunch of money.

What's easier to tell apart for nontechnical users? URL bar from Google search field or ads from Google results?


> There are also legal concerns with catch-all domains in some countries.

Wow, really? Could you expand a little? I tried to search but all I got was catch-all mail addresses and no legal issues. Thanks!


Here in Austria, we had a rather problematic court ruling regarding this. Following this ruling and common recommendations, catch-all domains were mostly disabled; at the least, you run them at your own risk.

What it was about: say there was a review or best-price-search site (here, "service.at"), using catch-all and mapping subdomain requests to product searches. So "acme.service.at" would be remapped to, say, "service.at/search?q=acme". Now Acme sued, claiming anything containing the name "acme" on the web ought to point to their site, including the subdomain "acme.service.at", since they were the owner of the name "Acme". To almost everybody's surprise the court decided that this was true, according to naming rights, and that a subdomain containing this name, even if just implemented virtually by a catch-all mechanism, was an infringement. This also implies that "acme.example.at", which is included in the set of "*.example.at" and mapped to the very same content as just "example.at", is a possible infringement. – Strange, but this is as it is. And, yes, it's particularly about search engines, like Google.

(I really don't remember the particulars, since this was some years ago by now, but we may assume that the results returned by the service weren't exactly favorable, and that the particular search enjoyed a higher PageRank than the site of this vendor, or at least a rank which brought it up near the site of the vendor in search results.)


Wow, that's particularly deranged if "service.at/search?q=acme" is considered acceptable (and if that's not, how could any search work?).

Thanks for the explanation. I honestly would not have imagined anything like that.


IANAL, and I'm just speculating here, but the law could have been termed as "copyright laws apply to domain names on the internet" (i.e you can't use the name of a brand you don't own in a domain name), and acme.service.at is a domain name, but service.at/search?q=acme is not.


That would be trademark, not copyright, by the way.


I think this is exactly right. All the letters of the name are important and cannot be left off. If people want to equate www.domain with the bare domain, they can put a 301 redirect on the www address, but the browser has no business making assumptions about what the owner of the namespace thinks is equivalent.
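For illustration, such a redirect is a one-liner in any web server config; here's a minimal self-contained sketch with Python's stdlib (example.com is a placeholder, and which direction is canonical is the site owner's choice):

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class RedirectToApex(BaseHTTPRequestHandler):
      def do_GET(self):
          # 301 = permanent: clients and crawlers learn the canonical name.
          self.send_response(301)
          self.send_header("Location", "https://example.com" + self.path)
          self.end_headers()

  # Run this on the www host; the bare domain serves the actual site.
  HTTPServer(("", 80), RedirectToApex).serve_forever()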


Once upon a time, back in the mid-1990s when it was a major WWW browser, Netscape Navigator assumed that it could wrap domain names in URLs with "www." and ".com".


Interestingly, in Firefox, you can type a word into the URL bar and hit ctrl+enter to add "www" and "com". Shift+enter adds "www" and "net".


Oh come on.

"www." was used as a way of delineating what was a web address. Hence the fashion of putting that there so people knew you had to do it in the browser. Before then people used to also put the "http://" on there, and the combination of the two on vehicles/signs was ridiculous.

We're now in a web world. People know what a URL is. "domain.com" isn't ambiguous; it's obvious to man, beast or child that you type it in the browser. Most decent websites redirect "www." or the bare domain to whichever is the canonical version; the one without should be canonical, tbf.

The 'm.' is ridiculous too and ruins shareability. If the link were the bare domain, and the frontend did any switching that's needed, we'd all be better off.


There is an actual (small) reason for the existence of "www" nowadays. You cannot have a CNAME record at the domain apex (example.com). Many DNS providers implement a workaround by resolving the CNAME record into A/AAAA records when queried.

https://serverfault.com/questions/613829/why-cant-a-cname-re...
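You can watch the workaround happen with a resolver library. A sketch assuming the third-party dnspython package (the names are placeholders; a host without a CNAME will raise NoAnswer):

  import dns.resolver  # third-party "dnspython" package

  # A www host may be a CNAME...
  for rr in dns.resolver.resolve("www.example.com", "CNAME"):
      print("CNAME ->", rr.target)

  # ...but the apex must keep its SOA/NS (and often MX) records, so
  # "flattening" providers answer the apex A query with addresses directly.
  for rr in dns.resolver.resolve("example.com", "A"):
      print("A ->", rr.address)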


You can, but it prevents you from having other records there, which includes things like an MX record. Just a nitpick, as in practice that prevents most people from being able to use a CNAME at the apex.


I upvoted you because you're correct - it's a failing of the system.

My original point still stands though – we used to use 'http://' and 'http://www.' as a signal that this was a web address. I cannot believe this will still stand in 5 years' time. The default is now the domain name, not the phone number.


Something that I was struggling with just today. Of all the days to update Chrome... At first I thought my redirect was broken.


Another good point, I forgot about that - thank you.


"www" was not a marketing trick, it was legitimately a different domain, by convention. General users never understood it, so companies started to have to add it to match their weird expectations.

Associating a base domain with a company identity happens to be right MOST of the time, but isn't actually guaranteed. Plus, foo.example.com follows different security rules than bar.example.com (CORS, certs, etc).

The problem here is that the precise domain has a technical meaning... but consumers are using it for a different meaning. One that is also useful BUT NOT THE SAME.

Pretending the url matches this new meaning (and altering the display to match) serves both groups poorly.


There was never a requirement for www. to be anything other than the bare domain for most people. It became useful because it was synonymous with being a web thing right back when people hardly knew what the web was. This was serendipity, which turned out not to be serendipitous when people had to write it on signs / read it out on an advert etc.

I see no reason now to associate www. with the web version of your service. If I receive a request on port 80 or 443 for the bare domain, what's a better option than serving the 99% of people who want a webpage?

You are splitting hairs on this one.


> It became useful because it was synonymous with being a web thing right back when people hardly knew what the web was

You're missing some history here (or we're talking past one another). Back then (source: lived through it), subdomains for particular protocols (www.example.com, ftp.example.com, gopher.example.com, mail.example.com) were pretty common, though not a requirement at all. Almost all the users were technical, so this helped users AND admins. Plus, machines were FAR less powerful back then, so anything exposed to the "public" probably didn't want to handle multiple purposes anyway.

Then non-technical users came in, saw "www.example.com" being used in many places, and assumed it was part of the system. New domains either created a "www" subdomain or lost traffic (until browsers started trying to compensate). Note that what we're discussing is a switch in behavior. Prior to what the article is discussing, a browser would try the domain as typed, and if it failed would try prepending "www" AND ADD IT.

> I see no reason now to associate www. with the web version of your service

First, you still have people that type the "www" automatically because they never learned that was technically incorrect.

Second, what if you're reselling subdomains? The concept of "base domain == identity" is relatively recent and possibly temporary.

Third, what if you don't HAVE a single "web version of your service"?

The internet (and the web) has succeeded (granted, half by accident) by providing loose rules so practices can evolve inside those rules. If we start encoding the current practices in the rules, the rules no longer handle evolution well (or possibly at all).

I'm splitting hairs because hairs sometimes matter.


Sure, perhaps there is no reason to associate www. But the issue here is a browser showing an incorrect url.


Message 3[1] in the linked discussion has a great counter-example: "How will you distinguish http://www.pool.ntp.org vs http://pool.ntp.org ?

One takes you to the website about the project, the other goes to a random ntp server."

I do totally agree about m., but it's not Google's place to dictate that, rather it's a decision for each entity to make for themselves.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=881410...
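The difference is easy to demonstrate: the two names resolve to entirely different address sets. A quick stdlib sketch:

  import socket

  def addrs(host):
      return sorted({ai[4][0] for ai in socket.getaddrinfo(host, 80)})

  print(addrs("www.pool.ntp.org"))  # the project's web servers
  print(addrs("pool.ntp.org"))      # rotating volunteer NTP servers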


Easy answer. If you end up where you wanted to go, you are in the right place. If not, google it. This is how the vast vast majority of users behave.


What about the amp. prefix? You know that's the whole point of this, right? They're going to hide the fact that you're viewing the entire web through AMP.


Did you look at the bug report? It's rife with valid examples of why this behavior is wrong on Chrome's part. For example, when "www" isn't the first subdomain, Chrome still elides it.


This thread contains many examples of professional technologists who don't know what a URL is. You, for example, don't seem to know what a URL is.

I guarantee you, most non-tech people don't know what a URL is. People know what links are, to the extent that they can click/tap on them to get to some thing, or copy them and share/email them. That's not the same as knowing what they are.

You're making a dangerous assumption about what people know, including yourself.


> "domain.com" isn't ambiguous

But that's because it's .com. Now, there are too many gTLDs, and companies will build their brand around their use of .io, .me, .cs, .es, etc. Just the other day I saw a link that caught my eye to studio.zeldman, and I had to take a moment to hover over the link to see if that was some new branded gTLD.


Nitpicking: .io, .me, and .es are ccTLD (respectively British Indian Ocean Territory, Montenegro, and Spain) and have been around for at least a decade. .cs was a ccTLD for Czechoslovakia.


Ahem. "www" comes from the times, when you had to have a dedicated machine, or, at least, a dedicated network interface, for each service. Hence, you had an FTP server, creatively named "ftp", and your WWW service ran on a host surprisingly named "www". A concept similar to well-known addresses.


That time has never existed. Some did separate things that way, many did not.


That time absolutely existed. Was common. Source: lived through it.


No, it didn't. Some separated services that way, but it was never any more necessary than it is now. Source: used to run an ISP through the early days of the web, and collocated services on the same host all the time.

The idea of the commenter I replied to that you 'had to' have a separate host or interface for each service is flat out false.

When people split it, it was over capacity or manageability concerns, but often we also set up separate hostnames for different services just because it was what people expected; often it pointed to the same hosts.


Yep. Or ftp (or mail/smtp/whatever) was a single host separate from the web servers, and www was a CNAME to a virtual IP/load balancer.

Still separated physically.

That time wasn't even that long ago.


The point was that this was always a choice - there has never been a point where it was required. My first ISP back in '93 ran mail, web, ftp and shell accounts on a single pc. So did the ISP I cofounded in 95. It isn't and never has been a technical limitation, but a choice down to what worked for you. Especially as address rewriting firewalls also existed back then, so multiple services pointing to the same external IP in no way implied they had to be the same physical host.


For us (early regional ISP, mid-'90s), a lack of separate per-service hostnames caused significant scaling fragility.

In the initial rollout, all services were served from a single physical host with just one listening IP, which the bare 'example.net' resolved to. (Was this naive of us? You bet.) Other service hostnames (www., smtp., etc) were all just either CNAMEs to that hostname, or A records to that IP.

When our SMTP usage started to exceed the capacity of that single host, we tried to move 'smtp.example.net' to a different host. This is when we discovered that many users were configured to use 'example.net' for SMTP instead. We had to update all of those users' configs before we could turn down SMTP on the original host. (We couldn't afford big-iron load balancers, and they were less common then – we just used DNS round-robin for load distribution.)

At that point, we realized that customers were using bare "example.net" for everything - homepage, SMTP, POP3, IMAP, FTP, DNS, shell access - you name it. It was easy to remember - and it worked. So it was hard-coded everywhere - FTP scripts, non-dynamic DNS settings, etc. And this was looong before email clients had automatic configuration detection, so that was all hard-coded, too.

So we had to painfully track down all the users who were still hitting 'example.net' for SMTP, and help them update their configs before we could turn down SMTP on the original ancient host. The other services had to go through a similar painful transition.

We concluded that the only way to prevent this from happening again was to make sure that the bare hostname never offered any services at all - except for a single HTTP service whose sole purpose was to redirect 'example.net' to 'www.example.net'.

From then on, each new vISP domain had the same non-overlapping service namespace ... so that the otherwise inevitable configuration drift would be impossible.

Later, with the rise of things like email autoconfiguration, load balancers, and POP/IMAP multiplexors (like 'smunge'), we had more options. But at the time, avoiding services on 'example.net' was the only way to go (for us). Having a bare 'example.com' as the sole hostname in the browser bar was a sign of brokenness. :)


I wasn't claiming it was a technical limitation or a requirement, just that the time where this happened certainly did exist. Choice or not, the time existed. That was my point. Fair enough.


I disagree about the 'm.domain' convention; it's good and useful. I like the ability to retrieve a mobile site on my desktop, and vice versa. Sometimes I'll be on a site that's difficult to read on mobile, and speculatively try the 'm.domain' - often it will work. When the site itself tries to autodetect what device I'm using, it often makes a poor decision that is not subject to appeal.


On the downside, the m subdomain makes for terrible social media posting.

E.g. a commenter using Wikipedia's mobile site posts a link to said page and desktop viewers are unexpectedly taken to the mobile, not desktop, page.


And Google's decision makes it even worse by making the incorrect display incomprehensible.


This whole comment is two-faced.

People know what a URL is. But this issue demonstrates misunderstandings of URLs, as "www.x.y" is not necessarily an official "x.y" page.

"www" == "web address", or "m. is ridiculous" which are annoying fashions (agreed there!), but that has literally zero impact on the security characteristics, implying yet again that people do not understand URLs.

---

No. This comment is a perfect example of why this is not safe to do. It's throwing open the door to abuse.


What makes you think that HTTP requests are the only thing domains are used for? As mentioned elsewhere in this thread, Active Directory requires that the A record for the bare domain be pointed at the PDC.

If you're serving email from example.com, for reputation reasons you should have the bare domain's A record pointed at your primary MX.

I understand that here on HN we're focused around web-based companies, but for every other corporation, there is a plethora of other services served out of a domain -- of which web/www traffic is maybe 10%, if not less. Everything from email to voip to directory services to vpns to crazy internal apps all rely on the corp's domain/domains, and you definitely should not be pointing your bare domain at your web server (which, chances are, is some contractor-built page living on GoDaddy completely outside your own infrastructure).

In a typical company, you'd have some server serving example.com doing some or all of the above. It would then be running a light http server which accepts requests on 80/443 and permanent-redirects them to www.example.com.

This is why www matters.


How about older people? Can your parents explain the difference? Mine can't, and I can assure you most of their generation is the same.


Try this:

Grandma: I'm typing domain.com into my browser on [random device] and it doesn't work.

Nerd grandchild: well that's because it doesn't exist.

Grandma: But it works on my other computer.

Nerd grandchild: that's because the browser tries lots of domains like www.domain, domain.org and so on when you enter domain.

Grandma: yes, I noticed that. When I enter www.domain.com, it automatically corrects it to domain.com. So that must be the right one, surely?

Nerd grandchild: Nope. domain.com is the correct domain. It's just trying not to confuse you.

Grandma: ?


Good point - but even my mum (65 and not very good with phones or computers) will just bash the bare domain into her browser. If it's got "www." she'll use that; if not, she won't. She still knows it's a web thing; the www. for her and everybody else is superfluous.

Things change. We've had something like 30-odd years of URLs. The people who can't deal with this are vanishingly few, and they're likely not your target market; or they're the sort that'll just consider Facebook to be the web.

I'm not disagreeing with your point btw that some people can't deal with this - all I disagree is the extent.


> We've had something like 30-odd years of URLs.

24 from the RFC, 26 from the discussion that led to it, per Wikipedia.


Good point. I was on Janet / ARPANet before the web. Point conceded!


I'd bet good money this change was put in by people younger than the web.


That may be true, and while I wholeheartedly disagree with this change, those who are younger than the web are its next shepherds.

We're going to see more and more changes that the "old folks of the internet" are going to hate. Some, or even many, of these changes will actually be good changes. We shouldn't prejudge based on age.

Again, to be clear, I think this particular change is horribly broken.


Older people care more about consistency than "usability". They can successfully complete long tasks – maybe with several retries, but they can – as long as things are consistent: input a long text somewhere, dial a 15-20 digit phone number, etc. But they usually can't deal with unpredictable situations, where the computer will "intelligently" help them and fill in parts of the input, where the same action is different on different devices, or where they need to know in advance how the system will behave.

PS: consider how you would guide an older person over the phone while he/she is accessing the Citibank website (for which www.citibank is a different website), with Chrome "intelligently" hiding an essential part of the address.


Agreed – my experience exactly. A very recent example: my father's using Skype to talk with family. They refreshed the UI, he got the new version installed, and that was it. I had to answer multiple questions about what this and that button does. "The same, dad, it just looks a little bit different." What I got as an answer was: "It's placed somewhere else. It is so confusing. Couldn't they just have left everything where it's always been..." Thankfully I have remote access to his desktop, so I can guide him around in situations like this.

And I am quite sure removing 'www' on the address bar's domains is as confusing.


m. is where you often receive a feature-degraded, app-walled, or sign-in-walled website that isn't present on the full-featured website.

I want to know that I'm receiving a degraded version of a website and that maybe dropping the m. will restore it.


It's also where you receive a lightweight, fast, to the point version.

Admittedly, it's not just "m", but if I _must_ use Facebook, the only bearable version is mbasic.facebook.com.


It’s certainly not true in the case of Reddit. Their mobile site is horrible by any measure, way slower than the old desktop site.


But they want you to use the app anyway, so they don't put effort into the web version.


m. is where you find a lightweight, no-bullshit version of a site that doesn't load excessive images or Flash or JS before they're required.

I want to know I'm on a faster version of a website.

(Seriously, mbasic.facebook.com allows chat, whereas m.facebook.com reminds you to install the Facebook app.)

You're the second person recently to lament mobile sites on HN. I think they're great. Responsive isn't there yet.


Some of them are great. Some of them do incomprehensible shit like loading text in chunks while scrolling, making it impossible to scroll to the end of an article without waiting 5-10 seconds for it to appear.

(This might be a result of my using a content blocker to block mobile ads, but the fact that it’s even possible infuriates me as a user. I mean, it’s text! Just show me the text!)

I agree with you 100% that responsive design isn’t there yet.


I'm in the "basic usually better" camp too, but this doesn't matter, as we're all in agreement here! We want to know whether we're on the "better" or "worse" version of the site, for whatever each of us mean by "better" or "worse".


Some sites look absolutely terrible on a wide monitor when they are designed for a narrow mobile screen.


I must say that m.twitter.com is a better desktop website than twitter.com itself.


Most people don't care about the location bar.

Every time they want to use Facebook, they type 'facebook' or maybe 'facebook login' into the location/search bar.

And to get to their gmail account they might type their own email address in the location bar.


Ehh, this might have been true 10+ years ago, but most people are more savvy than that today. I think you are describing something that is less and less common (which may explain why Google is trying to encourage people to keep doing it).


You'd be really surprised. There are still people that do exactly what the grandparent comment relates, today, in 2018.


"Less common" isn't "doesn't exist".


Yes, but I suspect it is far more common (I have no evidence to supply however) than the grandparent comment appears to imply.

We (the HN crowd) can easily get caught in a 'bubble' where because we known the details, and those with whom we typically associate also know the details, that we extrapolate those observations to conclude that "most people" know the details.

But until one has been in a situation of providing support or training for a diverse user group, one does not see just how little technical knowledge the "average joe" (a set of which we, the HN crowd, are very much not members) has of these things. The "average joe"'s level of technical knowledge is astonishingly low compared to the HN crowd's.


Yahoo Search still shows a second search box under the first search result if you search for Google, to capture users that search for Google then type in their actual search term, even when they're already on a search engine.

They may not make up a huge proportion of users, but they still make up a huge number of people in actual terms.


Google knows best, DNS standards be damned. Practice makes standard. /s


What part of the DNS standard governs how URLs are displayed, exactly?


>there's a difference between "www.example.com" and "example.com"

Can you link to a site where these two are different?


Many orgs do this.

For example, with Active Directory, the DNS A record for your foo.com domain must resolve to your domain controllers. Your www.foo.com will resolve to a separate non-domain controller web server.

I think a lot of the commenters here are thinking solely in terms of commercial web services such as twitter.com and such, but there's so much more to the wider landscape.


Thinking about it that way gives me conflicted feelings. Much as I hate what Google has done here I also feel like any organization stupid enough to use their public domain name for their Active Directory domain name deserves every little pain they receive for it.


You lack the compassion that comes with experience.

My $dayjob has its AD root domain the same as our public root domain, because we implemented AD in the year 2000, and this was Microsoft's recommendation for domain naming way back then.

And if you use Exchange, you can’t rename your AD domain, you have to rebuild your forest and migrate piecemeal. So we’re stuck with it.

The practice of using Corp.example.com did not evolve until many years after Windows 2000 and Exchange 2000 were in the wild.

So we run HTTP redirectors on each of our domain controllers to send traffic to www.


This one is kind of a "religious" topic for me, I guess. I'm sorry that it is, but it makes me exceedingly defensive.

I trained on Active Directory (AD) with a group of veteran sysadmins in 1999. I don't have access to the "Microsoft Official Curriculum" book from my class in '99 (long since thrown away), but I have a distinct memory of a lively conversation in class re: the pitfalls of using a public domain name as an AD domain name (or, worse yet, as a Forest Root domain name). It was very evident to our group of veteran sysadmins that using a public domain name in AD would create silly make-work scenarios (like installing IIS on every DC just to redirect visitors to "www.example.com" – just as you describe, albeit IIS didn't natively support sending redirects at the time).

I'd go further and suggest that anybody with a modicum of familiarity with DNS knows having multiple roots-of-authority for a single domain name is a bad idea. Microsoft not supporting split-horizon in their DNS server (like BIND does with 'views') compounded the difficulties with such a scenario in an all-Windows environment.
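
For readers who haven't seen it, BIND's 'views' look roughly like this (a minimal sketch; networks and file names are placeholders):

  // named.conf -- minimal split-horizon sketch
  acl internal { 10.0.0.0/8; };

  view "inside" {
      match-clients { internal; };
      zone "example.com" { type master; file "db.example.com.inside"; };
  };

  view "outside" {
      match-clients { any; };
      zone "example.com" { type master; file "db.example.com.outside"; };
  };

Internal clients get the internal zone data; everyone else gets the public records.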

I certainly wouldn't argue that Microsoft has given exclusively good recommendations for AD domain names in the past (evidence ".local" in Windows Small Business Server), but I am reasonably certain that their documentation always suggested that using a subdomain of a public domain name was a supported and workable option.

I started deploying AD in 2000. I've deployed roughly 50 forests in different enterprises, and I've never used a public domain name as an AD domain name. I've domain-renamed all my subsequently-acquired Customers for whom it was an option (which it was, so long as they had not yet installed Exchange 2007), and have been rebuilding the Forests of Customers who made the wrong decision in the past, where it makes economical sense.


Microsoft has provided mechanisms for split-horizon DNS service since Server 2003. views are not the only way of providing split-horizon DNS service.

* http://jdebp.info./FGA/dns-split-horizon.html#SeparateConten...


Windows 2000 didn't support stub zones, however. At the time that Active Directory was new there wasn't a good way to do split-horizon DNS with the Windows DNS server.

As an aside: I really enjoy your writing about using SRV lookups. It makes me sad that SRV records aren't being used as much as they could / should be.


I don’t know anything about AD, so this might be a stupid question: can you not just run a web server on the same host as the AD server or port forward all HTTP traffic to a different server?


A domain controller on the internal network might not be the right place to run a copy of the public-facing content HTTP server (which might be in a datacentre, or even managed and run by an outside party, and might not be served by IIS). Then there are considerations of firewalling rules, browser rules, anti-virus rules, and even DNS rules for machines on the internal network that access a public WWW site that DNS lookups map into non-public IP addresses. (To prevent certain forms of external attacks, system administrators have taken in recent years to preventing this very scenario from working by filtering DNS results.)

* http://jdebp.eu./FGA/dns-split-horizon-common-server-names.h...

* http://jdebp.eu./FGA/dns-ms-dcs-overwrite-domain-name.html

* http://jdebp.eu./FGA/dns-use-domain-names-that-you-own.html


From the two comments above, it sounds like yes, some people who named their AD the same as their root DNS zone now have to run HTTP redirectors.

And the other comment mentioned that this was a known issue 20 years ago, because old versions of IIS did not support redirecting.


We beat this to death on Serverfault.com 9 years ago, so I'll spare all the rehashing here: https://serverfault.com/questions/76715/windows-active-direc...

Having a disjoint DNS namespace (and the needless make-work that it creates) is the issue, more than running HTTP servers on all your DCs to do redirects. There is absolutely no practical advantage to running an Active Directory domain with a public DNS name. It's all downside. It has always been all downside, and anybody who had any experience with DNS could see that all the way back in the beta and RC releases of the product in 1999 and 2000.


From one of the comments there:

http://www.pool.ntp.org vs http://pool.ntp.org

One takes you to the website about the project, the other goes to a random ntp server.


OK, which one of you hooligans runs this NTP server[1] that plays some loud obnoxious dubstep track?

[1]: https://i.imgur.com/cEukhNu.jpg


Those go to the same place for me


Not me.

http://www.pool.ntp.org/ redirects me to https://www.ntppool.org/en/.

http://pool.ntp.org/ takes me to an "It works!" default Apache 2 page for an Ubuntu installation. As the comment in the issue describes, http://pool.ntp.org/ takes you to a random ntp server.

If you want another example, try google.com using Google's own DNS:

  PS U:\> nslookup - 8.8.8.8
  Default Server:  google-public-dns-a.google.com
  Address:  8.8.8.8
  
  > google.com
  Server:  google-public-dns-a.google.com
  Address:  8.8.8.8
  
  Non-authoritative answer:
  Name:    google.com
  Addresses:  2607:f8b0:4009:810::200e
            172.217.8.206
  
  > www.google.com
  Server:  google-public-dns-a.google.com
  Address:  8.8.8.8
  
  Non-authoritative answer:
  Name:    forcesafesearch.google.com
  Addresses:  216.239.38.120
            216.239.38.120
  Aliases:  www.google.com
Even if you ultimately end up at the same site through redirects, you're clearly not going to the same site initially.


>http://pool.ntp.org/ takes me to an "It works!" default Apache 2 page for an Ubuntu installation. As the comment in the issue describes, http://pool.ntp.org/ takes you to a random ntp server.

Either way, the ask was for a difference in www.example.com vs example.com. Not a difference in www.pool.example.com vs pool.example.com. In the latter case, the different subdomains will still be shown (AFAIK).

>Even if you ultimately end up at the same site through redirects, you're clearly not going to the same site initially.

Which is nothing that an end user is going to care about, and it doesn't answer the question that was asked.


>In the latter case, the different subdomains will still be shown (AFAIK).

http://www.pool.example.com displays as http://pool.example.com

Here's a gif: https://vgy.me/61I0DA.gif

For fun I'm going to set up a www.www.www.www.www.www.www.www.www record.

http://www.www.www.www.www.www.www.www.www.www.example.com shows as example.com

E: I'll add it to my certs later but I did it: https://www.www.www.www.www.www.www.www.www.www.www.www.aish...

E2: http://www.example.www.example.org shows up as example.example.org - this is fun.
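
The behaviour above is consistent with the display logic simply dropping every "www" label, wherever it appears in the hostname. An assumed reconstruction in Python (inferred from these examples, not Chrome's actual code):

  # Assumed reconstruction of the elision, inferred from the examples
  # above: remove every "www" label, not just a leading one.
  def displayed_host(host: str) -> str:
      return ".".join(label for label in host.split(".") if label != "www")

  assert displayed_host("www.pool.example.com") == "pool.example.com"
  assert displayed_host("www.example.www.example.org") == "example.example.org"
  assert displayed_host("www.www.www.example.com") == "example.com"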


Re: E2 (http://www.example.www.example.org === example.example.org)

I just found the same thing. How exactly is this a feature? What an insane decision.


That is absolutely insane and someone should be fired and shamed for this. I didn't like trimming even a leading "www.", but trimming any "www." label in the hostname is just dumb behaviour.

How would I differentiate between loadbalancer1.www.intranet and loadbalancer1.intranet? THOSE ARE NOT THE SAME.


Wow. You could do some pretty amazing spoofing with the www.com domain, then.


Some small subset of pool servers run an HTTP server that redirects you to www. Not all of them. You just got lucky.


That's exactly right. www.pool.ntp.org is the project site. pool.ntp.org is for getting an NTP server. Which one you get will depend on your location and random chance. That server will run NTP, but what it happens to run on port 80, if anything, is up to the operator of the server.
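
You can watch the rotation yourself by resolving the name a few times; a quick sketch in Python (answers will vary by resolver, location, and time):

  # Resolve pool.ntp.org repeatedly and print the answer sets; the
  # addresses change because the pool rotates its DNS answers.
  import socket

  for _ in range(3):
      addrs = sorted({ai[4][0] for ai in socket.getaddrinfo("pool.ntp.org", 123)})
      print(addrs)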


I must be lucky too, as I got the same result from both.


They definitely do not for me (ios).


See the issue.

http://www.pool.ntp.org/ http://pool.ntp.org/

https://www.citibank.com.sg/ https://citibank.com.sg/

Plus, this actually removes any www part of the domain.

So subdomain.www.example.com shows as subdomain.example.com

Why even open that can of worms?


A) Consider any sharing platforms where unrelated bodies coexist with distinct subdomains under a common root domain (e.g., Blogspot, Tumblr, etc.). While "www" is probably a reserved name and mostly not of practical concern, "m" may be a practical issue.

B) Consider subdomains for test purposes like "www.test.www.example.com" (now displayed as "test.example.com", which is actually not even the root of the specific subdomain).

C) Users unsure whether they are on the full-featured site or a reduced mobile site when "m" is hidden.

D) I may actually want to have a service-agnostic default host at the root and subdomains for dedicated servers (like "www", "ftp", "mail", "stun", "voip", etc). Maybe this one just returns a short text message by design, if accessed on port 80. Not every domain is just about the WWW. (Edit: While we may assume that such a server would forward in practice, this may be assuming too much.)


>> there's a difference between "www.example.com" and "example.com"

> Can you link to a site where these two are different?

There are 3rd-level domains where everyone can register "www.{TLD}". E.g., .com.kg, .net.kg, .org.kg. Look at www.com.kg. It's also available as www.www.com.kg. Or www.org.kg, which is in fact www.www.org.kg. If you display just the last part (com.kg, org.kg), does that mean that you're viewing the root website? Nope, it doesn't. That means that Chrome is fucked up.


Someone mentioned www.citibank.com.sg vs citibank.com.sg in the issue.

One of my school's websites: I can't remember which one it was, and this was before I understood what the difference is, but www worked much better than without, iirc.

This also applies to m.*, so literally any web-app with a mobile version.


Consider the different types of records you need to add for those examples if your web host is Heroku or some other cloud provider:

https://devcenter.heroku.com/articles/custom-domains
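
The short version: a subdomain like www can be a CNAME to the platform, but a CNAME is not allowed at the zone apex, so the bare domain needs a provider-specific ALIAS/ANAME or a plain A record. An illustrative sketch (targets and addresses are placeholders):

  ; www can follow the platform by name...
  www.example.com.  IN  CNAME  example-app.herokuapp.com.
  ; ...but the apex cannot be a CNAME, so you either use a
  ; provider-specific ALIAS/ANAME record or pin an A record:
  example.com.      IN  A      203.0.113.27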


I don't remember the site offhand, but I was going to one recently where example.com didn't even work, it was some weird error page -- you had to use www.example.com. If it comes to me, I'll post it.


I've seen this behaviour, and the reverse. Can't remember examples, but it does happen.


This is what Chrome's update is trying to fix. Developers are confused when setting up DNS about whether they should have www, not have www, or only have www...


Not really fixing it though, because they just strip the www part from the name. If the developer does not set up www.domain.com and the user goes there, Chrome will not "fix" anything.

I haven't tested it, but it will most likely show up as domain.com in the address bar and result in an error shown to the customer.

If Chrome wants to strip www because it's essentially the same as domain.com, they can submit an RFC and not just decide for everyone. Honestly, I hope they make more stupid decisions like this so people move to Firefox and we get more competition.


> If the developer does not setup www.domain.com and the user goes there chrome will not “fix” anything

Yup, that's on the developers. Hopefully this fix will make it easier to set up DNS with just one domain instead of two. Props to Chrome.


Read the source link. A concrete example (Citibank) is given.


www.pool.ntp.org pool.ntp.org


For ages, my former high school's website did not respond to requests that omitted the www. subdomain :/


Many companies have their marketing site at www. and their app at, say, app. E.g., https://www.netlify.com/ vs https://app.netlify.com/


That's www vs app, not www vs lack-of.


Ah. Thanks for clarifying that.


app. subdomains are not hidden


Couldn't trivial subdomains just be colored gray instead of being done away with? Maybe also color the .com and similar in blue.


> It seems that "m." is also considered a trivial subdomain. So when a user clicks a link to a "m.facebook.com" uri, they'll be confused why FB looks different when the browser reports it's on "facebook.com".

Will they? I find it very unlikely that many users would even check the URL in the first place, let alone understand that m.foo and foo route to different places.


Hmm... for sure that wasn't a technical decision, and here the problems start.


This is going to break a lot of sites.


Lots of people saying this is for the benefit of non-technical users.

For me, this is a minor inconvenience, precisely because I'm technically capable/interested enough to handle the inconsistency.

But this kind of stuff (and I am speaking somewhat generally here) tends to frustrate me precisely when I'm trying to educate or deal with a non-technical user in some capacity where it happens to matter. I can't just tell them, "that is the address of the page, and that will always lead to the exact same place if you type it fully and correctly, and that's that". Instead I have to get my head around whether they're using browser X or operating system Y, I have to ask on the phone first, "hang on, tell me what you see on your screen", I have to say to the lady who's eagerly sat in front of me with pen hovered above paper waiting for me to dictate how to do a thing in straightforward steps, "well it depends, first you have to check this thing, and if it's like this then you can do this but it might also be like that in which case it's a bit different, let me explain" - and this is usually the point at which the non-technical user gets tired and throws the book at me.

In short, I think consistency of information and process is usually much more understandable and useful to users of any level, than the dumb 'simplification' of this half-baked information-hiding.


Yeah, I think that consistency is greatly undervalued.

Grandma has no problem with technical details being shown; she just ignores them. She just knows that clicking the button on the top left will go to the webmail and that she needs to click the big red button in order to write a new email. Change anything and she will get lost, click everywhere, and usually find the solution, but sometimes make a mess.

There are also security implications. I told her to be wary of any change, because it may imply a phishing attempt or some malware. But how is that going to work if legitimate software always changes? You are basically training them to stop thinking about what happens, which is terrible, since thinking is the only thing that can protect them – they don't have the technical intuition most of us have.


This! Consistency.

Browser start screens with a large search box in the center haven't helped either. Some users see no difference between the location field and this search box; some have even unlearned the distinction. Arguably, it facilitates ignorance of the location, the significance of URLs and how they work. Reading a URL isn't witchcraft, it's just about three simple things. But dumbing things down towards convenience at the expense of consistency will not empower users.

(Surprisingly, ordinary people have been able to manually dial a phone or to parse a street address without the help of a map service in the past. It can't be that bad.)


I wholeheartedly agree with this. There are two sides of the debate now. One side says that the machines should be clever enough to guess the user's intention and work around their mistakes, while the other side says that the machines should stay dumb and square. The question is about who is handling the complexity of this world, and in my opinion it should often be left to the human, not to the machines.


The worst outcome may be when there's ambiguity involved and the smart system takes precedence, but occasionally happens to take the wrong route – and suppresses any feedback for the user to intervene or even notice. Which is much where we have arrived by this. I guess collateral damage has become a matter of everyday life.


Most comments assume that this is for solving user confusion, or security, or building a better URL scheme, et al.

It's not, that is all smokescreen.

As ivs wrote[1], they are going to hide the amp subdomain, so you don't know whether you're looking at AMP or the actual destination. And then suddenly the whole world funnels through AMP.

And for that reason, it won't be reversed until people call them out on what they are actually trying to do.

[1]: https://news.ycombinator.com/item?id=17928939


This should be the top comment. After this change, we are just one step away from using the browser's address bar only as a Google search box, and Google as the entire internet's gatekeeper. Google doesn't make money when you type the URL into your browser's address bar – it makes money when you don't.


AMP pages are served through google.com, though? It's one of the big problems with them.


Not always. Sometimes Google results have taken me to websites like "amp.reddit.com" on mobile.


That makes sense. And not just AMP but they want to train users to NOT pay attention to domain/subdomains, leading to more room for other exploits.


Yeah... no. That's just baseless FUD.

They _are_ indeed planning to get rid of AMP cache URLs, but they'll be doing it through open W3C standards anyone can use, not through special-casing their own domains: https://amphtml.wordpress.com/2018/05/08/a-first-look-at-usi...


No, this Chrome update is about hiding the "amp." subdomain from the original URL. What Google wants to achieve, is to make it impossible for the average user to tell when the entire website is being served from Google Cache.


Google cache links aren't served from `amp.yoursite.com`, they're served from `cdn.ampproject.org`.

If you're visiting `amp.yoursite.com`, then the site _isn't_ being served from the Google cache.

Also "this Chrome update is about hiding the "amp." subdomain on the original site from the viewer" is patently false since this update _doesn't_ hide `amp.`; only `m.` and `www.`.


> Google cache links aren't served from `amp.yoursite.com`

That's not where things are going, according to your own source from the previous comment:

> Our approach uses one component of the emerging Web Packaging technologies—technologies that also support a range of other use cases. This component allows a publisher to sign an HTTP exchange (a request/response pair), which then allows a caching server to do the work of actually delivering that exchange to a browser. When the browser loads this “Signed Exchange”, it can show the original publisher’s web origin in the browser address bar because it can prove similar integrity and authenticity properties as a regular HTTPS connection.

So, the content will be served from Google Cache with the original publisher's URL in the address bar.

> this update _doesn't_ hide `amp.`; only `m.` and `www.`

It's Google, who decides what and when it wants to add to its browser's list of "trivial subdomains". Especially, when the websites with "amp." subdomains will become common.


Yes, once the Web Package Standard is finalized and implemented then AMP pages will indeed use the normal `amp.` URLs.

But at that point, what would be your concern with hiding `amp.`? That's no worse than hiding `m.`; it's just another subdomain which serves a different version of the same content. Heck, sites could serve their amp pages on `m.` domains if they wanted to; the actual subdomain they decide to use is irrelevant.


Seeing "amp." in the URL meant that it's not a "full version" of the site. Google wants to remove the separation for the end user, so that all publishers would serve their content through Google Cache. And that's a big concern to me, since it means, the entire web will be served from a single company's database.


> Seeing "amp." in the URL meant that it's not a "full version" of the site.

Yes, but once again that's no different from `m.`.

> And that's a big concern to me, since it means, the entire web will be served from a single company's database.

Are we talking about before or after the Web Package Standard is implemented here?

If before, then your concerns about the URL aren't applicable because `amp.` links aren't served from the Google cache (only `cdn.ampproject.org` links). If after, then the content isn't "served from a single company's database" anymore; it's served using a decentralized and open standard for cross-origin server push.


> If after, then the content isn't "served from a single company's database" anymore; it's served using a decentralized and open standard for cross-origin server push.

Does this mean that Google will no longer rank those who implement AMP and serve through Google Cache higher than those who don't?


Yes. https://amphtml.wordpress.com/2018/03/08/standardizing-lesso...

> Based on what we learned from AMP, we now feel ready to take the next step and work to support more instant-loading content not based on AMP technology in areas of Google Search designed for this, like the Top Stories carousel. This content will need to follow a set of future web standards and meet a set of objective performance and user experience criteria to be eligible.

Furthermore, once the Web Package Standard is finalized, the "Google Cache" won't exist anymore, at least not in the same way it does now.

The Web Package Standard allows any web page which supports origin signed responses to be served via cross-origin server push from any server that supports HTTP/2. So Google will probably still cache and push pages via their own infrastructure when you visit those pages from your Google search results, but the actual content being served will be fully controlled by the original publisher and behave exactly as if your browser received the page directly from the publisher's server.


> So Google will probably still cache and push pages via their own infrastructure when you visit those pages from your Google search results

And that's what I mean by saying that the entire web will be served from a single company's database – a company which already controls the browser and the search. You will be able to browse the web without ever leaving Google's servers, and Google will be able to track your every interaction on the web.


This doesn't increase Google's ability to track you at all. If you click a link on a Google search results page they already know you visited that site; them serving the initial page load via a cross-origin server push changes nothing.

It also doesn't give them any more control over the web, since the page contents are still strictly controlled by the original publisher (and that's cryptographically enforced).

So again, what's your actual concern?


Google now only knows the first page I visit from its search results. After this update, Google will be able to follow me across the entire web, because it will be the one who serves it to me. How is that not a concern?

Are you seriously claiming that the largest ad company in the world is interested in decentralizing the web? The blog article you linked to yourself says that the goal of this entire initiative is to increase the usage of AMP by "displaying better AMP URLs".


> After this update, Google will be able to follow me across the entire web, because it will the one who serves it to me.

That's not how it works. Only the initial page is loaded over cross-origin server push. After you actually navigate to that page you're no longer on Google's site (which is why the URL bar is able to show the domain of the site you just navigated to instead of still showing google.com), so obviously they don't have any enhanced ability to monitor what you do after that point.

> Are you seriously claiming that the largest ad company in the world is interested in decentralizing the web?

The general web is already decentralized. This is about decentralizing AMP. And yes, decentralizing AMP is exactly what Google is doing here.

> the goal of this entire initiative is to increase the usage of AMP by "displaying better AMP URLs"

Yes, and they're accomplishing that by pursuing the development of open W3C standards which can be used by anyone. Just like how offline storage on the web started as a feature enabled by [a proprietary plugin developed by Google (Google Gears)][1] until Google pursued the development of open standards to replace it: https://www.w3.org/TR/service-workers-1/ (Check out who the editors are on that draft.)

Google's been following this pattern for over a decade now. They start with a proprietary initiative, then use the lessons learned from that effort to develop open web standards that improve the web for everyone. (I can give maybe a dozen more examples if you still don't believe me.) There's no reason to think AMP will be any different in this regard, especially since Google has already made their intentions on this matter clear.

[1]: https://support.google.com/code/answer/69197?hl=en


Another very real problem is not being able to share the real URL rather than an AMP link.

Is ampproject Google's website? On amp.google.com I can find the original URL for sharing purposes, whereas on ampproject.org URLs I can't.


This and many other changes over the course of a short period of time have caused me to go to Firefox exclusively now. I heard Firefox is going to stop third-party cookie tracking altogether. Why not give Google the big finger and use a different browser? Vote with your cold, hard actions if you feel so strongly about something.


I switched to Firefox a year ago. It's a little slower, but I'm a lot happier.

I've been trying to de-Google as much as reasonable. I moved to Fastmail as well. Still using an Android, but I would switch if a reasonable alternative that wasn't an iPhone came up. I'm not paranoid or a privacy nut, I just think Google is too involved in my life.


Maybe try an Android derivative? One tailored for privacy? I've heard of LineageOS, it's marketed as privacy-friendly.


Same here. Have you found any viable alternative to the Google Calendar? I'm at the point where I'm thinking about hosting a calendar project from GitHub myself.


Nextcloud, whether self-hosted or otherwise, works great! It's just WebDAV (CalDAV/CardDAV under the hood). You can get calendar, contacts, task, and note syncing, and it can even host your documents for reference-management software like Zotero.


Fastmail's calendar works. It's not great, but it does the job.


If you own a Samsung phone, the calendar app is good. I wonder if you can install those Samsung apps (which for some are just forks of unmaintained AOSP apps) on a regular Android if you somehow get the apk.


I was in exactly the same camp as you a year ago. Then I played with a hand-me-down iPhone 6s and couldn't believe how much more pleasant it was to use iOS than to use Android (Nougat at the time). Having owned an iPhone 3G and 5, my memories were of a restrictive OS and a dumb Siri, but both have really come along since. I made the switch and can't imagine going back to Android now.


Are people still considering smaller, local ISPs for email? Or are there even enough of those to consider?


Upvoted from Firefox. Only reason I use Chrome nowadays is when apps launch it directly (whereupon I strongly consider uninstalling them) or when work requires it (... which is utterly ridiculous, and very likely why our web rendering performance and consistency is utter trash).


I'm now the same way, and it pains me to see "optimized for Google Chrome"

https://www.theverge.com/2018/1/4/16805216/google-chrome-onl...


Hangouts and GotoMeeting are the only things I open in Chrome. I'm also completely sold on Tree Style Tabs, and I don't think I could live without it now.


I would love to, but Firefox just feels more clunky. Not sure what it is, but the scrolling doesn't feel native to me (MacOS, Magic Trackpad and Logitech Mouse)


You can turn off that scroll behavior.


I wonder if it has anything to do with the HiDPI issue:

https://bugzilla.mozilla.org/show_bug.cgi?format=default&id=...

(there's a couple of hits when I search for 'scrolling' in that thread)


Faster than Chrome for me on MacOS


Been using Firefox on my desktop and it's amazing. For some reason the mobile version is incredibly slow to load pages, though.


Firefox is a better browser due to tree style tabs. But it is noticeably slower.


Having to use both for supporting complex web apps, I can't really agree that FF is noticeably slower. Chrome does seem to have fewer silly bugs though, like quick searching doesn't find multi-select box text in FF. Works great in Chrome.

Regardless of FF's little quirks, I use it almost exclusively for personal stuff. I would rather deal with those types of things than the mentality Chrome brings to the table.


I've been using Edge on Windows for at least a year now and I'm quite happy. Now that it supports plugins, I haven't fired up another browser for months now.


Because the battery drain with Firefox is unacceptable.


I use Firefox as my "at home"/private browser. However, for work I unfortunately feel forced to continue using Chrome. First, I just really prefer the Chrome devtools, and I can't seem to find an equivalent replacement for the "manage people"/multi-user built-in function that Chrome offers. I really wish Firefox had something similar...


Multi-Account Containers do that part for me, and I find them much nicer than Chrome's similar functionality: https://addons.mozilla.org/en-GB/firefox/addon/multi-account...

For bonus fun, also install Temporary Containers: https://addons.mozilla.org/en-GB/firefox/addon/temporary-con...


Firefox has both profiles (for actually different users) and containers for isolating stuff (i.e. a sub-profile) for a single user.

The containers have a great UI/UX. I'm looking at the profile stuff now (after not having in years) and it seems counterintuitive and clunky.

https://support.mozilla.org/en-US/kb/profile-manager-create-...

https://support.mozilla.org/en-US/kb/containers

Anyway, "forced" is a strong word when you simply mean "prefer". The firefox dev tools are in the same league as the chrome ones, imo.


The containers are great, thanks for sharing – I will definitely use those at home! You're right, I should have emphasized the "feel" part of my comment, since the dev tools especially are just a matter of preference. However, I stand by my point that the profiles are definitely not on par with Chrome's, since there doesn't seem to be a way to have multiple profiles open at once.


I’m sure there are other ways to do this, including specifying a profile when you initially launch the browser, but you can enter about:profiles in the address bar to see a UI to manage the profile. One of the options is to launch that profile in a new browser.


If you just need a "personal" profile and a "work" profile, what I do as a workaround is to use normal firefox for personal and firefox developer edition for work. They are completely sandboxed from each other.


Firefox does similar things, though. They hide the URL scheme by default. And subdomains are displayed in a more subtle colour than the rest of the domain.


Firefox hides the scheme IFF it is `http://`. It doesn't hide `https://`. Also the subdomain AND the path is slightly toned down. The net effect is precisely what Google tries to do and Apple has been doing (namely, only showing the second level domain) without actually hiding any information.


You should really give the Brave browser from (www.)brave.com a try.


Yes. I've been using Brave exclusively for more than a year and I'm not going back.

I'm still undecided on which search engine to use though.


Have you tried Searx? Meta search engine, open source, multiple instances (domains) to choose from. Once configured to your particular needs, searx can prove very powerful.


Safari already hides "www.". In fact it hides everything except the root-level domain, e.g. "https://www.google.com/about/" shows just "[lock] google.com".

Firefox and Opera show the full domain but gray out everything in the entire URL except the root-level domain, so "www." is gray.

Just saying, de-emphasizing and hiding parts of the URL is clearly a trend. This isn't just a Google thing.


De-emphasizing is fine; hiding altogether is not – for both protocols and subdomains.


But Chrome does exactly that. If you put focus in the URL bar in Chrome 69, it shows the full domain including the protocol: where amazon.de is visible, on focus it becomes https://www.amazon.de.


I thought that's how it worked too, but it is not. One click shows the entire URL and selects it – still without "www.". Another click will show "www.".

Regardless of how you feel about the change, it does indeed hide 'www.' to the point where a power user could easily be fooled that it was the naked domain.

Edit: Here's a demo of how it works: https://www.useloom.com/share/f7d71b95d75b4c4582bb38cdc84326...


This is actually the main reason I cannot use Safari. It always boggled my mind that they made this decision.

Power users never look at the URL unless they want information from it, in which case the `www` is valuable.

For low tech users, it can lead to straight up incomprehensible issues, like sites not rendering properly (think of a `m.*`).

The UI gains are so small: that part of the screen is never really looked at, but it needs to be there and typically has tons of horizontal room... I don't get it.


Low-tech users don’t often understand that a difference could possibly exist between “m.” and “www.” at all.

However, if it shows the TLD, they can confirm it says “google.com”. Imagine they’re visiting a Paypal phishing link, to the domain:

www.paypal.com.www.com

The most important thing to show the user is “www.com”, because they’re expecting “paypal.com”. All the rest is nonessential for protecting users from bad actor sites.
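
For illustration, that "most important part" is what the Public Suffix List calls the registrable domain. A sketch using the third-party tldextract package (just a demonstration, not what any browser actually runs):

  # pip install tldextract -- splits a hostname against the Public Suffix List
  import tldextract

  parts = tldextract.extract("www.paypal.com.www.com")
  print(parts.registered_domain)  # "www.com" -- the part the owner controls
  print(parts.subdomain)          # "www.paypal.com" -- attacker-chosen dressing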


Looking at the bug report, Chrome would actually show "www.paypal.com.www.com" as "paypal.com.com". At least Safari does the wrong thing the right way.

Personally, I always want to see the full URL. It's fine if part of the domain, the scheme etc. are grayed out to emphasize the second and top level domains, but don't omit elements that are necessary to fully identify the resource because the lowest common denominator may think that fishing.com/paypal.com is paypal.


Yep, I verified that bug as well, apparently they never planned for "www" being somewhere other than at the front of the domain name. Sounds like they already know, woot!


> Low-tech users don’t often understand that a difference could possibly exist between “m.” and “www.” at all.

They should. Children probably have difficulty with '6' vs. '9,' but they need to learn in order to use our number system. Likewise, users of the Internet need to learn the domain name system. Could there be better name systems? Sure. There could be better number systems, too, but this is what we have for now.


What difference is indicated by "news." rather than "www."?


Um, that they can be different websites?


The general public does not perceive that difference, likely as a direct result of dot-com inventiveness with respect to domain names. Thanks to the stupidity of “m.” (WAP is dead) and “amp.” (WAP lives!) and the cuteness of “baredoma.in” (Silicon Valley represent) and the insanity of “www1034.www” (here’s looking at you, HP), we have spent the last decade on the web directly teaching non-tech users that what used to matter (“www”) no longer means anything at all, and they’ve listened.


This is not a feature. Make users understand this, don't hide it. Make the main domain glowing green, wash out the rest, anything – but this trend of hiding complexity will only lead to severe undereducation on the topic, and eventually it will reach professionals as well, who also won't understand what they should.


Reducing the displayed value from { "is_secure" YES/NO, "http/https" ARGH/WHAT, "full URL" GIBBERISH } to { "is_secure" YES/NO, "domain" AOL KEYWORD } improves my chances of defending against a phishing attack someday, as well as those of non-tech users.

Reducing information density is a critical component of automobile safety measures. Dashboards in cars just prior to the "screens everywhere" era had been boiled down to the essence of what's necessary for a human being to operate a vehicle safely and without putting others at risk: one bright line showing speed, one bright line showing engine speed, one bright line showing fuel remaining, and a few multicolored status icons; and then a central info display where any logic more complex than "push to show next value" requires parking the car.

EDIT: Changed NAME to AOL KEYWORD.


You can still see the full URL by focusing the address bar with either a click or ⌘-L.

I think it makes sense for the default display to show the most security-relevant information (TLD, SLD, and presence + validity of the certificate), while deferring the full display (incl. spurious or malicious information that might be in the full URL, e.g. https://example.com/www/paypal/com/login) to a user request (click or shortcut).

That said, Chrome 69's decision to hide /all/ instances of www in the domain is unconscionably bad.


> this is actually the main reason I cannot use Safari.

Then I have good news for you! If you go to Safari's preferences and select the Advanced tab, there's a checkbox called "Show full website address" that disables this behavior and shows the full URL in the search bar.


Unfortunately this is not in Safari for iOS.


Safari on iOS barely has enough room to show the domain, let alone the full URL. Tapping on the URL bar will present the full URL in an editable/scrollable text field.


You know it’s a setting, right?


Hiding file extensions in Windows is a setting, too.


Funny how life repeats itself.


Safari does hide www, but it's a default setting that can be changed by checking [0]

  Preferences > Advanced > Show full website address
[0] https://imgur.com/a/3VMo5zH

EDIT: Formatting. Add missing article. Change linked image.


There is quite a bit of confusion about how this actually works; nobody seems to have had a look at it. Chrome simply hides the subdomain if the URL bar is not in edit mode; these parts are still accessible/editable, and the HTTP host behaves (and is recovered from history) as before. Copying the URL works as expected.

It is still confusing for tech people, because we often need awareness of where we are.


>Chrome simply hides the subdomain if the url bar is not in edit mode

Not so. You have to click the Omnibox twice.[1] Clicking on the Omnibox once puts you in a completely new state, "edit mode [with corrupted URL]," and clicking on the Omnibox again puts you in "edit mode [with correct URL]."

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=881410...


It seems like it would be confusing for anyone. Especially since it doesn't just remove the lowest-level label if it's www, but any label that is www. So "www.paypal.www.com" would be displayed as "paypal.com". If that isn't great for phishing, I don't know what is.


It's a setting. Settings / Advanced / Show full website address to show the www.

