What a weird piece of writing. Is this like just chicken scratch? Or is this seriously some kind of part of the W3C working process?
Section 2: Third party cookies have gotten bad. Ok.
Section 3: There are legitimate use cases that third party cookies currently cover. Also ok. Then they throw in, "Be aware that a set of new technologies which carry minimal risk individually, could be used in combination for tracking or profiling of web users." Yes? Huge scope increase in the document though and all of a sudden we're now talking about tons of tracking technologies in aggregate? The authors move on without further comment.
Section 4: I think the first half is essentially saying that new technology coming online in the web platform will make the third party cookie problem worse, so we should fix it soon. OK, I'm back with you. Then the document suddenly pivots to proposing general standards for web privacy again, saying that the burden of proof is on the people originating the proposal, before concluding by saying (apparently without irony?) that the business impact of removing third-party cookies is outside the scope of the document.
I'm missing a ton of cultural context here about how W3C works, so I'm guessing this probably amounts to rough notes that somebody intends to clean up later that I'm being overly critical of, and they didn't expect it to get any traction on hacker news.
They're very far from irrelevant; it depends on what kind of Web development you do, I would say -- I have been writing WebAssembly by hand (a lot can be said about that, but it's a thing) and the spec is authored by the W3C. There's plenty of other things they author, like any one of the many _CSS_-related specifications.
It's just that with the modern Web 7.0 (or whatever version we're on now), it's WHATWG that's most prominent, since there's that one spec that defines 90% of what happens on the Web, called "The HTML Standard" or some such. Then you have Google de facto authoring specs, which may or may not find their way back into the HTML document, but even if they don't, they do make it feel like the W3C has been left behind.
It's exactly this: there is a group who come together and rarely agree on rules, and when they do, they never enforce them. It's, I believe, the definition of a paper tiger, sadly. A great idea, executed horribly.
So not at all? Shipping something in chrome isn’t enforcing a standard in my opinion. Enforcing a standard would be a regulatory thing, like having to use USB-C in certain situations.
Chrome is in a monopoly position. If they decide to ship a new feature, then all the other browsers need to implement it as well, or their users assume their browser is broken.
Okay but that's still not the same as enforcing a standard, in any way... You're suggesting the W3C should simply roll a "reference browser" that supplants Chrome so they can force standards on users themselves. That really doesn't seem like a great way to do it.
Design by committee is more likely malice than accident or stupidity. Some participants work towards goals which are good for them but bad for the majority.
There are regulatory agencies which have specifically told Google it is not allowed to remove 3rd party cookies without a replacement as while Google would be able to continue to function fine, their competitors would take a major loss.
Seems like the CMA is concerned for the other advertisers who profit from 3rd-party cookies, with no concern for users' privacy. That poor billion-dollar industry, how will it cope?
Adopting javascript universally, and consequently making it a de facto standard, was a terrible mistake. I think the only way out of this nightmarish privacy-less state of things (in this regard) might be something like the EU putting out an extremely severe law banning all these bad practices. Banning all practices that undermine privacy is the only morally valid option; it's not only about 3rd-party cookies. It'd be an incontrovertible measure for everyone but bad actors, just like USB-C or user-removable batteries (February 2027).
If third-party cookies are removed, the tracking parties will just ask web sites to include the script on their web server, so their cookies become "first party" again. I don't understand how this helps the web unless protections against tracking itself, not the methods used, are established.
There's all kinds of cryptography available for solving trust problems. I guarantee you that within six months of third party cookies being removed someone will have built an impression signing system that is satisfactory to both the ad companies and the server owners.
There are also trust issues the other way. I've seen a lot of contention between developers and security teams and marketing about putting third party code or proxying third party domains on the first party site for analytics, tracking, ad attribution, etc.
I doubt that. Their script could as well be "fetch that script from that URL and run it". They would have fraud detections already in place on their side regardless of which script runs on the client.
but this request could be faked, if the first party wanted to fake the traffic (for example, to make ad revenue). This third party cookie is what prevents this faking at this moment.
That's vastly more expensive, though. Now you have to run extra servers to make outbound connections to the ad tracker's API server instead of turfing off all the work to visitors. It would be enough to significantly affect the ad market.
You also get to do it on your fast cloud backend infrastructure instead of the end-users home computer and ISP. They will appreciate the increase in page load speed and overall responsiveness, and as a bonus they can't use ad blockers or hostfile tricks anymore.
I don't think it's that expensive to do. All it takes is one well-written package that is easy to install, and this will become standard.
I could even see a data broker centralizing this and distributing tracking to all of their clients. The client would just need to communicate with the central broker, which is not hard at all.
This doesn’t actually help. If you consider Prebid, Criteo already has js running on the site serving the ads, but that js has no mechanism to figure out whether the user has something in their cart and is eligible for retargeting.
The workaround is looking more and more like IP, fingerprinting, and AI. I’d argue this is worse than 3p cookies, which were at least dumb and easy to clear.
I think many adtech companies (at least in affiliate marketing) use redirects because third party cookies are unreliable and redirects make all the cookies first party. As mentioned elsewhere, they’ve also been switching to proxies and other such techniques to make it even harder to block their tracking endpoints.
Proxies for analytics are already a thing; e.g. Plausible shows you how to set one up. A 3rd party cookie, however, can be the same value sent again and again from the same browser on different sites to the central server tracking you across the web. The global "who you are" is in the cookie.
> include the script on their web server, so their cookies become "first party" again.
That script would execute with the origin of the server. Its access to resources and /shared state/ would be hampered by this. So as a cross-site tracking strategy I don't think this works.
> I don't understand how this helps the web unless protections against tracking itself, not the methods used, are established.
Which is why I think state partitioning[0] and CHIPs[1] are good technologies. It allows previously existing standards, like cookies, to continue to exist and function mostly as expected, but provides the user a good amount of default security against cross site trackers and other malware.
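For the curious, CHIPs is an opt-in at the cookie level: a third-party embed adds the `Partitioned` attribute, and the browser keys the cookie to the top-level site it was set under, so the same embed on two different sites sees two different cookie jars. A typical header (illustrative values) looks like:

```http
Set-Cookie: __Host-embedId=abc123; Secure; Path=/; SameSite=None; Partitioned
```

The `Secure` and `Path=/` requirements come with the `__Host-` prefix, and `SameSite=None` is what lets the cookie be sent in a cross-site iframe at all.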
Your point is pretty useless, as you assume the web server admins want to be more secure. The opposite is usually the case: they deliberately open up their security model to accommodate 3rd party tracking scripts. For example, Content-Security-Policy headers can effectively prevent all sorts of XSS attacks, but they will also prevent 3rd party tracking scripts etc.
You've misunderstood my point. It's not what the server admins want it's what the security policy will allow. If two sites, on two different domains, both use the same script, served directly from their domains, it creates absolutely no workaround for third party cookies. This is because the two sites have different origins. CSP does not create a bypass in this case.
Any other tracking methods are way more obvious, and way harder to implement, for the advertising industry. We shouldn't think in black/white here - the more difficult it is to track a user, the less likely it is to be implemented. It is okay if 30% of tracking sites disappear because the cost/value ratio doesn't work for them. We don't have to sit in silence and do nothing just because we can't have 100% privacy.
I do think there is a point here: any technical means to block tracking is going to be overrun by technical means to overcome the anti-tracking tech. There are simply too many dollars at stake for anything else to happen. If anti-tracking stops some players, that just means the industry will consolidate into a few large and well-resourced players.
While I am all in favor of continuing the technical battle against tracking, it’s time to recognize that the war will only be won with legislation.
> At this point, do we need to using JS disabled browser to really get privacy on the web?
My thoughts are that we need a distinction between web pages (no JS), which are minimally interactive documents that are safe to view, and web apps (sites as they exist now), which require considerable trust to allow on your device. Of course, looking at the average person's installed app list indicates that we have a long way to go culturally with regards to establishing a good sense of digital hygiene, even for native software.
The problem is that there is a lot of grey area between pure document-style pages and full-on apps (take online shops for example) and even for the former category of pages a lot of UI niceties are only possible with scripting.
It doesn't help that web browsers aren't even trying to help users make the distinction. They have an ever-growing list of features and permissions that sites can take advantage of, with no attempt to coalesce anything into a manageable user interface. Instead, it takes a hundred clicks to fully trust or distrust a site/app.
More UI/UX distinction is needed, just like the green lock for security! The browser should indicate the level of privacy of the page. If the page uses no JS or anything GPU-compromising (CSS, I'm looking at you), then it gets a green mark. For every privacy/security-compromising feature added, it turns yellow. Once it starts to ask for WebUSB or MIDI, it should be in some kind of Native Mode. It's mostly a UI/UX issue for the major browser makers!
It’s an interesting question: is it possible for JavaScript to be turing complete, able to read/write the DOM, and somehow prevent fingerprinting / tracking?
My gut says no, not possible.
Maybe we need a much lighter way to express logic for UI interactions. Declarative is nice, so maybe CSS grows?
But I don’t see how executing server-controlled JS could ever protect privacy.
I've always thought there should be a way to use the browser like a condom. It should obfuscate all the things that make a user uniquely identifiable. Mouse movement/clicks/typing cadence should be randomized and sanitized a bit. And no website should have any authority whatsoever to identify your extensions or other tabs, or even whether or not your tab is open. And it certainly shouldn't allow a website to overrule your right click functionality, or zoom, or other accessibility features.
Just create a _strict_ content security profile which doesn't allow any external requests (fetch) and only allows loading resources (css, images, whatever) from a predefined manifest.
An app cannot exfiltrate any data in that case.
You may add permission mechanisms of course (local disk, some cloud the user controls, etc).
That's a big challenge in standards, and I'm not sure anyone is working on such a strongly restricted profile for web/JS.
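A rough approximation of that profile is already expressible with today's CSP directives, though not the "predefined manifest" part. Something like (illustrative, not a complete policy):

```http
Content-Security-Policy: default-src 'self'; connect-src 'none'; form-action 'self'
```

Here `connect-src 'none'` blocks fetch/XHR/WebSocket/beacon outright, `default-src 'self'` restricts scripts, styles, and images to the page's own origin, and `form-action 'self'` closes the form-submission exfiltration path. What CSP can't do is pin resources to an enumerated manifest, which is presumably where a new standardized profile would have to come in.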
> It’s an interesting question: is it possible for JavaScript to be turing complete, able to read/write the DOM, and somehow prevent fingerprinting / tracking?
Yes, of course: restrict its network access. If JS can't phone home, it can't track you. This obviously lets you continue to write apps that play in a DOM sandbox (such as games) without network access.
You could also have an API whereby users can allow the JS application to connect to a server of the user's choosing. If that API works similarly to an open/save dialog (controlled entirely by the browser) then the app developer has no control over which servers the user connects to, thus cannot track the user unless they deliberately choose to connect to the developer's server.
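To make that idea concrete, here's a purely hypothetical sketch of such an API - nothing like `navigator.requestUserChosenConnection` exists in any browser; the point is only that the browser, not the page, would own the server picker, analogous to an open/save dialog:

```javascript
// Purely hypothetical API sketch (invented name, no such API exists).
async function connectToUserChosenServer() {
  // Imagined call: the browser shows its own trusted UI listing the
  // user's saved servers; the page never sees the list.
  const conn = await navigator.requestUserChosenConnection({ protocol: "wss" });
  // The page gets a socket-like handle to the user's chosen server but
  // cannot dial arbitrary origins itself, so it can't phone home.
  conn.send("hello");
}
```

The key property is the same as with `<input type="file">`: the page only ever touches what the user explicitly handed it.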
This is of course how desktop apps worked back in the day. An FTP client couldn't track you. You could connect to whatever FTP server you wanted to. Only the server you chose to connect to has any ability to log your activity.
There's no point. If you disable JS, they can track you in other ways: fingerprinting your DNS packets via timestamp clock skew and other things. With IPv6 they can assign you a unique IP address for a DNS lookup that can function like a cookie.
Don't want to be tracked? Don't go on the internet.
Websites can't fingerprint my dns packets by their clock skew, nor can they assign me a unique IP address for a dns lookup (what?). "Don't go on the internet" isn't a great starting point to improve things.
Why does it have to be a technological solution? That's what the media industry tried to do with DRM and it failed. The solution is legislation. We need the equivalent of DMCA for our privacy. Make it illegal to fingerprint.
I’m completely unsold on legislation. Another headline that recently hit the top of HN is about how Apple flagrantly ignored a court order. The judge has recommended the case for criminal contempt prosecution [1].
The comments on the story are completely unconvinced that anyone at Apple will ever be convicted. Any fines for the company are almost guaranteed to be a slap on the wrist since they stand to lose more money by complying with the law.
I think the same could be said about anti-cookie/anti-tracking legislation. This is an industry with trillions of dollars at stake. Who is going to levy the trillions of dollars in fines to rein it in? No one.
With a technological solution at least users stand a chance. A 3rd party browser like Ladybird could implement it. Or even a browser extension with the right APIs. Technology empowers users. Legislation is the tool of those already in power.
> The solution is legislation. We need the equivalent of DMCA for our privacy
And how does one know their privacy has been invaded? How would the user know to invoke such a DMCA-style privacy law?
I think the solution has to be technological. Just like encryption, we need some sort of standard to ensure all browsers are identical and unidentifiable (unless the user _chooses_ to be identified - like logging in). Tor-browser is on the right track.
I don't know what it is called, but if you try to open a window from a setTimeout it won't work. The user has to click on something; the click event then grants the permission.
You could make something similar where fingerprint-worthy information can't be posted or used to build a URL. For example, you read the screen size then add it to an array. The array is "poisoned" and can't be posted anymore. If you use the screen size for anything, those things and everything affected may stay readable but are poisoned too. New fingerprinting methods can be added as they are found. Complex calculations and downloads might temporarily make time into a sensitive value too.
In the old days, something similar to what you're calling "poisoned" was called "tainted" [0].
In those scenarios, tainted variables were ones which were read from untrusted sources, so could cause unexpected behaviour if made part of SQL strings, shell commands, or used to assemble html pages for users. Taint checking was a way of preventing potentially dangerous variables being sent to vulnerable places.
In your scenario, poisoned variables function similarly, but with "untrusted" and "vulnerable" being replaced with "secret" and "public" respectively. Variables read from privacy-compromising sources (e.g. screen size) become poisoned, and poisoned values can't be written to public locations like urls.
There's still some potential to leak information without using the poisoned variables directly, based on conditional behaviour - some variation on
if poisoned_screenwidth < poisoned_screenheight then load(mobile_css) else load(desktop_css)
is sufficient to leak some info about poisoned variables, without specifically building URLs with the information included.
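The direct-flow half of this taint idea is easy to sketch (the conditional side channel above is the genuinely hard part). Here's a minimal illustration, with all names invented: a wrapper marks values as tainted, operations propagate the mark, and "public sinks" like URL building refuse tainted inputs.

```javascript
// Minimal taint-propagation sketch (illustrative names, not a real library).
class Tainted {
  constructor(value) { this.value = value; }
}

// Any operation touching a tainted input yields a tainted result.
function add(a, b) {
  const raw = (a instanceof Tainted ? a.value : a) + (b instanceof Tainted ? b.value : b);
  return (a instanceof Tainted || b instanceof Tainted) ? new Tainted(raw) : raw;
}

// "Public sinks" like URL building refuse tainted values.
function buildUrl(base, param) {
  if (param instanceof Tainted) throw new Error("refusing to exfiltrate tainted value");
  return `${base}?v=${param}`;
}

const screenWidth = new Tainted(1920); // pretend this came from window.screen
const derived = add(screenWidth, 10);  // taint propagates through the arithmetic

console.log(buildUrl("https://example.com", "ok"));          // allowed: clean value
try { buildUrl("https://example.com", derived); }            // blocked: tainted value
catch (e) { console.log(e.message); }
```

As the comment above notes, this catches explicit flows but not implicit ones: branching on a tainted value and loading different resources leaks bits without any tainted value ever reaching a sink.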
On me it had the opposite effect of what was intended:
I opened the website on non anonymous session safari: it asked my name. Then I opened another new non anonymous window on the same browser: it showed my name as expected. I then opened the same browser in incognito mode: it asked my name again. I then opened chrome (non anonymous) and again it asked my name.
Exactly what I expected to see; everything seems to be working as intended. Anonymization online seems to be working perfectly fine.
Just tried this with Brave and it didn't seem to work, assuming the site working means that it can remember me in an incognito browser. I gave the site a name, and then opened it in incognito (still using brave), and it acts as if I visited the site for the first time.
Unmodified server request headers contain enough information for tracking even if JS is disabled. If you're keen to modify http headers while browsing, then you could also modify any JS run on your system that snoops system information (or strip the info from any request sent to the server) and continue with JS enabled.
They run arbitrary code from sketchy servers called "websites" on people's hardware with way too many privileges. Meanwhile, free and open source standalone applications exist that only use minimal JS code to access the same web resources with a much better user experience - without trackers, without ads, and without third parties.
I want a browser to be able to run arbitrary code. That's the whole point. I want to play a game or use a complex application in the browser without having to install anything.
I don’t mean to sound glib. But people derive a ton of utility from the web as it stands today. If they were asked if they supported the removal of web browsers they would absolutely say no. The privacy costs are worth the gains. If you want change you have to tackle that perception.
IMO this service should straight up be made illegal. I love the tagline they have of supposedly "stopping fraud" or "bots", when it's obvious it's just privacy invasive BS that straight up shouldn't exist, least of all as an actual company with customers.
I have almost no hope that this is a matter that has a technical solution.
The GDPR shows that law - even if not global, and even if not widely enforced - is pretty good at getting people to act. Most importantly, it will make the largest players the most afraid, as they have the most to lose. And if just a handful of the largest players online are looking after people's privacy, then that is a huge win for privacy.
Doing what this demo shows, is clearly a violation of the GDPR if it works the way I assume it does (via fingerprints stored server side).
They can track you just fine via CSS and countless other ways. They'll even fingerprint the subtle intricacies of your network stack.
What we need to do is turn the hoarding of personal information into a literal crime. They should be scrambling to forget all about us the second our business with them is concluded, not compiling dossiers on us as though they were clandestine intelligence agencies.
I've tried this recently and I found it very difficult. Cloudflare bot protection is everywhere, other anti-scrape protections, many 'document' sites using JS to render with no fallback, basic forms requiring JS, authentication requiring JS, payments requiring JS etc
Not intending to sound snarky but do you just not use the web much? Or if you're adding allows all the time, what's the net gain?
Google won't implement this spec. Currently, they're legally not allowed to, because advertisers called in the industry watchdog, asserting that without third party cookies to stalk users, they could not compete. Google extended their privacy sandbox, opened and closed it, talked about it, and eventually backed down from their plan to block third party cookies ASAP.
Maybe Chrome can get away with "the spec says it, sorry advertisers" but I doubt the courts will accept that.
That is, Firefox can reject third-party cookies because it's not made by a company that deals in online advertising, but Chrome cannot, because Google is the biggest online ads dealer and thus would have an unfair advantage over other ads dealers, correct?
This is kinda hollow while Google controls Chrome, and Chrome has majority market share[1]. And, if regulators get their way, and Google divests Chrome[2], I'm not expecting that the new highest bidder would do any better with it.
> Some of the use cases that are important enough to justify the creation of purpose-specific solutions include federated identity, authorizing access to cross-site resources, and fraud mitigation.
Unpopular opinion:
There is no privacy-preserving way to do "fraud mitigation".
Either you accept fraud as a cost of doing business, or you do away with privacy.
Most business owners don't want the fraudulent user to come back, ever.
If we value users' privacy, we need to harm some businesses.
In theory it is possible via "blind attestations" by a 3rd party, in an indirect way; that is what you get with Cloudflare, where they monitor traffic from an "agent" using their own heuristics for identity, without sharing that identity with you.
I've always assumed fingerprinting was already ubiquitous. I look at the absolute absurdity of tracking/fingerprinting permission dialogs on sites, stating up-front their data sharing with 'trusted partners' in the hundreds ranges (thingiverse.com with over 900, theverge.com on mobile with over 800) and find it more surprising that the default state of all clients shouldn't be to block everything by default.
Edit: for clarity, I believe anything with the ability to analyze the user environment via Javascript/etc on major sites is likely fingerprinting regardless. Blocking, environment isolation and spoofing is already necessary to mitigate this.
I have a feeling that it is all related. When users see a request to accept cookies with a list of over 9000 trackers, it doesn't mean that the page will have zillions of JavaScript files included. It just means that the site owners fingerprint the user and pass user interactions to third parties server-side.
The only reason we see this movement is because advertisers feel confident about removing third party cookies.
...thus raising the bar for privacy-preserving techniques in client-side browsing. Aggressive fingerprinting arrived years ago; if we can move beyond cookies altogether and focus on it as the next issue to tackle, I would think that's a net win. Saying that we should keep 3rd-party cookies alive and healthy because it will keep websites using them against users rather than fingerprinting is just throwing the majority of users who don't know to block them under the bus. Plus, it still leaves the door open for even privacy-conscious users to be defeated by fingerprinting anyway if a server is keen on tracking particular individuals.
Fingerprint-defeating technology is just the kind of thing that I wish Firefox spent its effort developing, instead of reimplementing features from Chrome like tab groups.
Yeah, the only way third-party cookies will block creepier fingerprinting crap is if the creepy stuff is prohibitively more expensive.
But once anyone gets a creepy fingerprinting system working, the barriers drop, and it becomes cheaper to resell the capability as a library or service.
It may offer some minor benefits in terms of enabling companies that "want to be more ethical than the competition", but that too seems like a long-shot. :p
I have always blocked third-party cookies. The only problem I've encountered (there may be others, but I haven't come across them) is that some embedded videos on certain web pages won't play and prompt me to enable cookies.
Sure, but this neither makes an attempt to list the valid uses of third party cookies, nor offers a suggestion of what magic definitely-not-a-third-party-cookie unicorn is going to ride in and offer us the safety we need. Pretty fluffy through and through.
I suggest that we do just need to keep third-party cookies, but make them explicitly opt-in. That could just be allowing (once) a third party to be present everywhere (like an SSO), with browsers making it known when a third party is accessing data.
> Some features of the web that people have come to expect, and which greatly improve user experience, currently depend on third-party cookies.
Idea: domains should be able to publish a text record in their DNS (similarly to SPF record for mail domains) designating other domains which are allowed to peek at their cookies.
Suppose I operate www.example.com. My cookie record could say that foo.com and bar.com may ask for example.com cookies (in addition to example.com, of course). A website from any other domain may not. As the operator of example.com, I can revoke that at any time.
Whenever a page asks for a cookie outside of its domain, the browser will perform a special DNS query for that cookie's domain. If that query fails, or returns data indicating that the page does not have access, then it is denied.
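To make the proposal concrete, the record syntax could be modeled on SPF, and the browser-side check is a few lines. Everything here is invented - the `v=cookie1` format, the `_cookies` label, and the function names are all hypothetical:

```javascript
// Hypothetical record, e.g. published as:
//   _cookies.example.com.  TXT  "v=cookie1 allow=foo.com allow=bar.com"
// No such record type exists today; this is a sketch of the idea.
function parseCookieRecord(txt) {
  const parts = txt.trim().split(/\s+/);
  if (parts[0] !== "v=cookie1") return null; // unknown version: treat as absent
  return parts.slice(1)
    .filter(p => p.startsWith("allow="))
    .map(p => p.slice("allow=".length));
}

// Browser-side check: may `requester` read cookies belonging to `cookieDomain`?
function mayAccess(requester, cookieDomain, txt) {
  if (requester === cookieDomain) return true; // first party is always allowed
  const allowed = parseCookieRecord(txt);
  return allowed !== null && allowed.includes(requester);
}

const record = "v=cookie1 allow=foo.com allow=bar.com";
console.log(mayAccess("foo.com", "example.com", record));  // true
console.log(mayAccess("evil.com", "example.com", record)); // false
```

The fail-closed behavior described above falls out naturally: a missing or malformed record parses to null, and access is denied.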
Ah, but in so doing they will have to publish their whitelist, which will exhaustively have to list every single affiliated domain.
Browsers and browser extensions will be able to use that info to identify shit sites, turning the whitelist around into blacklisting uses, like ad blocking and whatnot.
One simple mechanism would be for the browser to deny the cookie request if the requested domain's cookie DNS record contains more than, say, three affiliated domains. (At the discretion of the browser developer, and user settings.) The proliferation of that sort of config would discourage domains from being overly promiscuous with their tracking cookie access.
Plus, existing cookie control mechanisms don't go away.
I just feel uncomfortable putting more data into DNS.
DNS is not encrypted, and DNSSEC is easy to bypass (or breaks way too often, so nobody wants to enforce it).
Yes; if someone hijacks example.com's main A record, that gets caught at the SSL level.
If someone hijacks example.com's cookie record, that won't be caught; they just write themselves permission to have their page access example.com's cookies.
The same info could just be hosted by example.com (at some /.well-known path or whatever). The web could generate a lot of hits against that.
The DNS records could be (optionally?) signed. You'd need the SSL key of the domain to check the signature.
It's already used in a similar way for SPF records, in the context of e-mail.
Using a SPF record, a domain indicates hosts that are allowed to deliver mail on its behalf (meaning using an envelope sender address from that domain).
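For comparison, a typical SPF record looks like this: hosts in the listed network plus whatever `_spf.example.net` designates may send mail for the domain, and everything else hard-fails (`-all`):

```
example.com.  TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net -all"
```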
Replacement for what use case? The whole point is to eliminate the behavior, not provide another feature that has the same problems. What does failure mean? It's a problem for ad networks, not for regular humans.
The use case of not having to log in to system A which is being embedded within system B because you already logged in to system A? Without needing to introduce a third party SSO C? That's pretty "regular human", even if it's "medium sized corporation" instead of "Joe Regular" (but even Joe likes it if he doesn't have to log into the comment box on every site that uses THE_COMMENT_SYSTEM_HE_LIKES.)
This exists already. You can have cookies at higher level of the same domain. So foo.example.com and bar.example.com can share cookies at example.com. You can also use CORS to interact with a truly third party site. None of these require third party cookies.
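For the subdomain case, the sharing is just the standard `Domain` attribute: a cookie set from foo.example.com like this (illustrative values) is then sent to bar.example.com as well:

```http
Set-Cookie: session=abc123; Domain=example.com; Secure; SameSite=Lax
```

Omit the `Domain` attribute and the cookie stays host-only, scoped to foo.example.com alone.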
A use case this doesn't address is embedding across two completely different domains, which is pretty common in the education space with LMS platforms like Canvas (https://www.instructure.com/canvas) embedding other tools for things like quizzes, textbooks, or grading. I ended up in a Chrome trial that disabled third-party cookies which broke a lot of these embeds because they can no longer set identity cookies that they rely on from within their iframe.
As nwalters also points out, this isn't the same at all. System A and System A' both from Source Α are not the same as System A (Source Α) and System B (Source Β).
Which you know, because you say "you can also use CORS to interact with a truly third party site". But now, I invite you to go the rest of the way - what if the third party site isn't Project Gutenberg but `goodreads.com/my-reading-lists`? That is, what if the information that you want to pull into System A from System B should only be available to you, and not to anyone on the net?
The use case is web sites that want to earn income with as little user overhead as possible. Targeted ads have many downsides but they do pay websites without any money at all from the user, or even having to create an account.
So the problem for regular humans is the disappearance of features that they've grown used to having without paying any money. Finding a better way to support themselves has proven remarkably difficult.
Certainly a lot of people would care if Facebook disappeared.
There are also a billion other ad-supported web sites, each of which make ten people happy. Not a single one of them would be widely mourned, but 5 billion people would each be saddened by one of them.
For a long time I thought pinterest was search spam that no human could possibly want to see, but then I met real people in the world who like it and intentionally visit the site. I bet there are people who like ehow and the rest, too.
It is their problem when a feature that they like disappears.
They don't care about what happens to the business itself. But they do care about the things the business provides.
If they don't in fact care, then indeed, nothing is lost. But a lot of people will miss a lot of things. Whoever comes up with an alternative that suits the case will make a lot of people happy.
The article explicitly calls out that there are valid use cases (although doesn’t enumerate them). Federated sign-on and embedded videos seem like obvious examples
> Taking all of these factors into consideration, we’ve made the decision to maintain our current approach to offering users third-party cookie choice in Chrome, and will not be rolling out a new standalone prompt for third-party cookies.
Ah, now _that_ makes sense of why this got published. Glad to see that common sense prevailed. The day may come when all the use cases for third-party cookies that aren't "track Joe Regular all around the web" can be satisfied with other widely available web features, but until we have all those features I think taking a page from Linus' book and ensuring "we don't break userland" is important (and something I've always loved about the web, and I'm glad to see it continuing).
Which use cases? I use Brave, which has a built in toggle to disable 3rd party cookies, which I have set to default, and at least my experience of 'the entire internet' works fine.
Embedded iframes that need to authenticate logins, but don't trust the parent domain to store the login data, are a problem. You can somewhat work around it with the Storage Access API if the browser supports it (Brave doesn't), but that does mean every embed requires a click by the user before it works properly.
"Company whose market cap reflects pervasive non-requested surveillance announces that after serious consideration they won't be removing technologies that enable pervasive non-requested surreptitious surveillance."
It is going to be interesting to see if antitrust enforcement manages to separate Google from its financial and practical hold on web standards/browsers.
The opportunity to increase ethical norms of web browsing would be welcome to me.
Google wants to remove third-party cookies but they can't, as the government sees removing them as anticompetitive toward their competition. They don't need third-party cookies; everyone else does.
Precisely - removing third-party cookies doesn't stop Google from tracking anyone. It just prevents anyone who doesn't own a browser and have one of the three major email providers from tracking everyone.
Well, it doesn't prevent them, but it does make it a little bit harder ...
I personally think this decision hurts users more than anything else. We must let Google's competitors continue tracking us or else it won't be fair to them?
I don't even understand how being forced to divest Chrome will even help. Once another company owns Chrome and can remove third party cookies, Google gets the same benefit.
Google has remarkable financial influence across the four major commercially backed browsers.
So limiting Google's control over browsers will create more competition. More competition on implementations. And also more competition in terms of features and user centric service.
--
Question: Does Google really not gather information from anything but its search engine and first party apps? That would seem financially non-optimal for any advertising funded business.
I would think that sure, they log everything people use their search for.
But that they would also find a way to track post-search behavior as well. Google leaving money on the table seems ... unusual if there isn't some self-serving reason they would forgo that.
There are only 3 effective browsers - Chrome, Safari and Firefox. I don't see how limiting Google's control will create competition. The barrier to more browsers is the massive investment needed to create one, not any action that Google is doing.
You are correct, although it's more correct to say there are only 3 major browser engines: Blink (used by all Chromium derivatives), WebKit (used by Safari and some minor browsers), and Gecko (used by Firefox and its derivatives). Creating a browser engine is hard, so hard that even a multi-billion-dollar company like Microsoft gave up on doing it.
And we may soon witness Gecko going away as a side effect of the Google antitrust lawsuit.
Google has set up a replacement that puts the user in control of their ad interest tracking. It has its upsides and downsides, but I think it's pretty balanced. Anti-tracking features are embedded into the API so the API can't be abused by advertisers.
Of course, ad companies scream bloody murder, and the UK market watchdog had to step in so Google wouldn't turn off third party cookies by default.
I've had them turned off since Firefox added the feature. Looks like that was around 2018, though I could have sworn it was much earlier than that. I've never had an issue where I had to make an exception for a site. Is there still some environment where it's common for them to be needed?
I don't recall a browser that didn't let you disable third-party cookies; given how long ago cookies were introduced, I could have forgotten about it, but I'm at least sure that Mozilla always supported it.
Firefox, especially in the first versions, permitted much less control on cookies than Mozilla did, but I think it still always allowed disabling third party cookies.
Third-party cookies have done more harm than good, and it's time to fully remove them from the web platform. It is refreshing to see their acknowledgment that replacements must not just be privacy-washed clones of the old model: purpose-built alternatives need to prove they don't recreate the same surveillance infrastructure.
I had a little trouble when Safari rolled out ITP a while back. SSO providers scrambled to figure out how to fix federated logins, and because it affected every iPhone, they managed to do it with a quickness. I haven't had a single problem since.
Using a custom-built interception layer, I decouple session tokens from identifiable browser states, rotating my signature footprint every few requests via controlled entropy injection. “No more third-party cookies” sounds like a big shift, but it’s functionally irrelevant if your presence is already undetectable.
This is actually a somewhat inconvenient wish, because the alternative would increase the fingerprint investments required for all browsers to recognise us.
I block almost all 3rd party cookies, but at this point isn't it kind of nice to just have your google login follow you around, so you don't constantly have to login on other sites? Sure, it sucks for privacy, which is why your google account should never be tied to your phone number or your actual identity, but it's super convenient. Oh wait. It's tied to your real identity? Go back to square one and start a fake identity with all the root info. Buy a burner with a prepaid card, use it to set up a yahoo mail account, use that to set up a mail server you pay for in bitcoin, use that to verify a gmail account, and never let down your VPN. You're going to be tracked; the right move isn't to waste time worrying about that, it's to be someone invisible and untethered in the real world.
Fine. All that will happen is we'll see more sites switching to requiring a login to do anything on their website, so that they can track you with first-party cookies, and sell your information that way. Nothing will meaningfully change.
The only distinction is that I can do a decent job of blocking third-party cookies today with my existing solutions like uBlock Origin, but I will probably have a much more difficult time getting around login/paywalls.
They absolutely can. They have, at minimum, your account information and your IP address. Maybe you use a burner email address and/or phone number, and maybe a VPN, but chances are you’re not cycling your VPN IP constantly so there’s going to be some overlap there. And if you do cycle your IP, 99%+ of users probably aren’t clearing session cookies when doing so, which means you’re now tracked across IP/VPN sessions. Same deal if you ever connect without a VPN - that IP is tracked too. There’s tons of ways to fingerprint without third party cookies, they just make it easier (and also easier to opt out of if they exist, just disable third party cookies; if no one has third party cookies, sites are going to start relying on more intrusive tracking methods).
You can also easily redirect from your site to some third party tracking site that returns back to your successful login page - and fail the login if the user is blocking the tracking domain. The user then has to choose whether to enable tracking (by not blocking the tracking domain) or not seeing your website at all. Yes the site might lose viewers, but if they weren’t making the site any money, that might be a valid trade off if there’s no alternative.
Not saying I agree with any of this, btw, I hate ads and tracking with a passion - I run various DNS blocking solutions, have ad blockers everywhere possible, etc. Just stating what I believe these sort of sites would and can do.
There's no need for a login to track you with "first-party cookies", looking at the IP is perfectly adequate, at most adding some fingerprinting if you really want.
The only problem is that then the tracking companies have to place more trust on the first party that they're giving them real data.
But they're doing it, actually, see confection.io for example
Has anyone noticed this pattern? For some pulled-out-of-my-arse explanation, these standards groups and Google suddenly remove features that would be useful to people, deciding it's now not OK for the future. Like HTTP referers now only showing the domain, not the full URL, because [insert complete BS explanation]. And now 3rd-party cookies too...
UMatrix blocks those by default. Blocking third party cookies very rarely breaks anything. I can only think of one instance in the past five years, and that wasn't really a third party cookie, but one website using two different domains.
What a weird piece of writing. Is this like just chicken scratch? Or is this seriously some kind of part of the W3C working process?
Section 2: Third party cookies have gotten bad. Ok.
Section 3: There are legitimate use cases that third party cookies currently cover. Also ok. Then they throw in, "Be aware that a set of new technologies which carry minimal risk individually, could be used in combination for tracking or profiling of web users." Yes? Huge scope increase in the document though and all of a sudden we're now talking about tons of tracking technologies in aggregate? The authors move on without further comment.
Section 4: I think the first half is essentially saying that new technology coming online in the web platform will make the third party cookie problem worse, so we should fix it soon. OK, I'm back with you. Then the document suddenly pivots to proposing general standards for web privacy again, saying that the burden of proof is on the people originating the proposal, before concluding by saying (apparently without irony?) that justifying the removal of third-party cookies' impact on business is outside the scope of the document.
I'm missing a ton of cultural context here about how W3C works, so I'm guessing this probably amounts to rough notes that somebody intends to clean up later that I'm being overly critical of, and they didn't expect it to get any traction on hacker news.
It's W3C... They've never been the most coherent with standards, ironically.
Isn't W3C fairly irrelevant these days?
They're very far from irrelevant; it depends on what kind of Web development you do, I would say -- I have been writing WebAssembly by hand (I mean, a lot can be said about that but it's a thing) and the spec is authored by W3C. There's plenty of other things they author, like, you know, any one of the many _CSS_-related specifications.
It's just that with the modern Web 7.0 (or whatever version we're on now), it's WHATWG that's most prominent, since there's that one spec that defines 90% of what happens on the Web, it's called "The HTML standard" or some such. Then you have Google de-facto authoring specs, which may or may not find their way back into the HTML document, but even if they don't, they do make you feel like W3C is left behind.
...or it's a design by committee thing, and some people in the room are doing their best to preserve current and future tracking technology.
It's exactly this: there is a group who come together and rarely agree on rules, and when they do, they never enforce them. That is, I believe, the definition of a paper tiger, sadly. A great idea executed horribly.
Standards bodies rarely enforce rules themselves.
Is it really on the W3C to enforce standards? How would that even work?
By shipping their own reference browser ..
In what way would that enforce standards?
Well, the same way Google can enforce their standards via Chrome.
(I did not say it is a realistic goal for a theoretical committee)
The only reason Chrome can do that is because it has a huge chunk of the market. It does not work for a browser with no users.
So not at all? Shipping something in chrome isn’t enforcing a standard in my opinion. Enforcing a standard would be a regulatory thing, like having to use USB-C in certain situations.
Chrome is in a monopoly position. If they decide to ship a new feature, then all the other browsers need to implement it as well, or their users assume their browser is broken.
Okay but that's still not the same as enforcing a standard, in any way... You're suggesting the W3C should simply roll a "reference browser" that supplants Chrome so they can force standards on users themselves. That really doesn't seem like a great way to do it.
> A great idea executed horribly.
No. It's sabotage.
Never attribute to malice etc.
Design by committee is more likely malice than accident or stupidity. Some factors work towards goals which are good for them but malice for the majority.
The "replacement" is already being penned: https://www.w3.org/TR/privacy-preserving-attribution/
Which is just going to be in addition to 3rd-party cookies. Google's own study concluded removing 3rd-party cookies loses revenue and "privacy-preserving" tracking increases revenue: https://support.google.com/admanager/answer/15189422 So they'll just do both: https://privacysandbox.com/news/privacy-sandbox-next-steps/
There are regulatory agencies which have specifically told Google it is not allowed to remove 3rd party cookies without a replacement as while Google would be able to continue to function fine, their competitors would take a major loss.
Sounds like a great argument for running a different browser not developed by an advertising company, and thus not constrained by that.
Agreed. Curious what HNers feel is the most viable replacement. I'm experimenting w Arc this week...
Firefox with uBlock Origin, Privacy Badger at a minimum, other extensions to taste[0]
I’ve also been experimenting with Zen[1], which is Firefox based, recently and it seems quite promising in terms of a nicer default UI.
[0] I like Tab Stash, Vimium C, SponsorBlock, Decentraleyes, DeArrow, Archive Page, among others
[1] https://zen-browser.app/
Firefox is alright. I keep around a script called `chrome-new` for those rare case I still need Chrome.
I've been on Firefox for years, it's extremely good these days
Do you have links for this? I'm curious about which bodies and what was their argument.
https://www.gov.uk/cma-cases/investigation-into-googles-priv...
Seems like the CMA are concerned for other advertisers who profit from 3rd-party cookies, no concern for user's privacy. That poor billion dollar industry, how will it cope?
their mandate is to regulate competition
not privacy
Another "trusted third party"-based tracking system. That's all I need to know to avoid it, even when it is printed on toilet paper.
Yep, definitely "trusted third party". For example:
https://blog.mozilla.org/en/mozilla/mozilla-anonym-raising-t...
Owned by Mozilla, run by ex-Facebook employees. I'm sure it's entirely coincidental that this W3C draft was written by Mozilla and Facebook employees.
I just want someone to explain how I can edit my own privacy preserving attribution database. Is it a local SQLite database or something?
I feel like storing my "preferences" locally without letting me edit them is a stupid move.
Google's design stores the tracking data locally. Chrome already has a UI to manage topics of interest (chrome://settings/adPrivacy).
Adopting JavaScript universally, and consequently making it a de facto standard, was a terrible mistake. I think the only way out of this nightmarish privacy-less state of things (in this regard) might be something like the EU putting out an extremely severe law banning all these bad practices. Banning all practices that undermine privacy is the only morally valid option; it's not only about 3rd-party cookies. It'd be an incontrovertible measure for everyone but bad actors, just like USB-C or user-removable batteries (February 2027).
If third-party cookies are removed, the tracking parties will just ask web sites to include the script on their web server, so their cookies become "first party" again. I don't understand how this helps the web unless protections against tracking itself, not the methods used, are established.
It's about trust, the third-party ad companies don't trust that the first party will be honest with them, not generating fake impressions or clicks.
There's all kinds of cryptography available for solving trust problems. I guarantee you that within six months of third party cookies being removed someone will have built an impression signing system that is satisfactory to both the ad companies and the server owners.
There are also trust issues the other way. I've seen a lot of contention between developers and security teams and marketing about putting third party code or proxying third party domains on the first party site for analytics, tracking, ad attribution, etc.
I doubt that. Their script could as well be "fetch that script from that URL and run it". They would have fraud detections already in place on their side regardless of which script runs on the client.
> "fetch that script from that URL and run it"
but if you cannot have a third-party cookie, the tracker's remote site cannot be sure that the script was actually downloaded, nor executed.
Generate dynamic, short lifetime URLs that are locked to the client IP.
sure you can, if their script is making a 3rd party xhr request to that tracker.
but this request could be faked, if the first party wanted to fake the traffic (for example, to make ad revenue). This third party cookie is what prevents this faking at this moment.
That's old hat, the future is server to server calls from sites to vendors, profile the client but don't try to run any tracking js on it.
That's vastly more expensive, though. Now you have to run extra servers to make outbound connections to the ad tracker's API server instead of turfing off all the work to visitors. It would be enough to significantly affect the ad market.
You also get to do it on your fast cloud backend infrastructure instead of the end-users home computer and ISP. They will appreciate the increase in page load speed and overall responsiveness, and as a bonus they can't use ad blockers or hostfile tricks anymore.
Oh no!
I don't think it's that expensive to do. All it takes is one well-written package that is easy to install, and this will become standard.
I could even see a data broker centralizing this and distributing tracking to all of their clients. The client would just need to communicate with the central broker, which is not hard at all.
As long as your scale is tiny, sure. At a point you'd need to turn that into an async task queue etc etc.
BTW, I see this as a feature, not a bug. I'm glad it would be harder and more expensive to violate my privacy.
This setup already exists, they're called Supply Side Platforms.
That's also quite the possibility, and supports my point.
This doesn’t actually help. If you consider Prebid, Criteo already has js running on the site serving the ads, but that js has no mechanism to figure out whether the user has something in their cart and is eligible for retargeting.
The workaround is looking more and more like IP, fingerprinting, and AI. I’d argue this is worse than 3p cookies, which were at least dumb and easy to clear.
I think many adtech companies (at least in affiliate marketing) use redirects because third party cookies are unreliable and redirects make all the cookies first party. As mentioned elsewhere, they’ve also been switching to proxies and other such techniques to make it even harder to block their tracking endpoints.
Proxies for analytics are already a thing; e.g. Plausible shows you how to set one up. A 3rd-party cookie, however, can be the same value sent again and again from the same browser from different sites to the central server tracking you across the web. The global "who you are" is in the cookie.
> include the script on their web server, so their cookies become "first party" again.
That script would execute with the origin of the server. Its access to resources and /shared state/ would be hampered by this. So as a cross-site tracking strategy I don't think this works.
> I don't understand how this helps the web unless protections against tracking itself, not the methods used, are established.
Which is why I think state partitioning[0] and CHIPs[1] are good technologies. It allows previously existing standards, like cookies, to continue to exist and function mostly as expected, but provides the user a good amount of default security against cross site trackers and other malware.
[0]: https://developer.mozilla.org/en-US/docs/Web/Privacy/Guides/...
[1]: https://developer.mozilla.org/en-US/docs/Web/Privacy/Guides/...
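For anyone unfamiliar with CHIPS: a partitioned cookie is just an ordinary cookie with the `Partitioned` attribute set (cookie name and value below are made up), which the browser then keys by the top-level site, so an embed gets a *different* copy of the cookie on every site that embeds it:

```http
Set-Cookie: __Host-embedId=abc123; Secure; Path=/; SameSite=None; Partitioned;
```

The embed keeps working (sessions, preferences), but the cookie can no longer act as a single cross-site identifier.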
Your point is pretty useless, as you assume the web server admins want to be more secure. The opposite is the case; usually they deliberately open up their security model to accommodate 3rd party tracking scripts. For example, Content-Security-Policy headers can effectively prevent all sorts of xss attacks, but they will also prevent 3rd party tracking scripts etc.
You've misunderstood my point. It's not what the server admins want it's what the security policy will allow. If two sites, on two different domains, both use the same script, served directly from their domains, it creates absolutely no workaround for third party cookies. This is because the two sites have different origins. CSP does not create a bypass in this case.
Feels like this whole cookies thing is just whitewash, when with JS enabled they can track you whether you have cookies or not!
Nothing is private: https://nothingprivate.gkr.pw
More effort ought to be put into making the web spec unable to track users even when JS is turned on.
Browser vendors like Brave and Firefox, supposedly privacy browsers, are NOT doing anything about it.
At this point, do we need to use a JS-disabled browser to really get privacy on the web?
Any other tracking methods are way more obvious, and way harder to implement for the advertising industry. We shouldn't think in black/white here: the more difficult it is to track a user, the less likely it is to be implemented. It is okay if 30% of tracking sites disappear because the cost/value ratio doesn't work for them. We don't have to sit in silence and do nothing just because we can't have 100% privacy.
I do think there is a point here: any technical means to block tracking is going to be overrun by technical means to overcome the anti-tracking tech. There are simply too many dollars at stake for anything else to happen. If anti-tracking stops some players, that just means the industry will consolidate into a few large and well-resourced players.
While I am all in favor of continuing the technical battle against tracking, it’s time to recognize that the war will only be won with legislation.
> At this point, do we need to use a JS-disabled browser to really get privacy on the web?
My thoughts are that we need a distinction between web pages (no JS) which are minimally interactive documents that are safe to view, and web apps (sites as they exist now) which require considerable trust to allow on your device. Of course, looking at the average person's installed app list indicates that we have a long way to go culturally with regards to establishing a good sense of digital hygiene, even for native software.
The problem is that there is a lot of grey area between pure document-style pages and full-on apps (take online shops for example) and even for the former category of pages a lot of UI niceties are only possible with scripting.
It doesn't help that web browsers aren't even trying to help users make the distinction. They have an ever-growing list of features and permissions that sites can take advantage of, with no attempt to coalesce anything into a manageable user interface. Instead, it takes a hundred clicks to fully trust or distrust a site/app.
More UI/UX distinction is needed! Just like the green lock for security! The browser should indicate the level of privacy of the page. If the page uses no JS or anything GPU-compromising (CSS, I'm looking at you), then it gets a green mark. For every privacy/security-compromising feature added, it turns yellow. Once it starts asking for WebUSB or MIDI, it should be in some kind of Native Mode. It's mostly a UI/UX issue for the major browser makers!
https://nothingprivate.gkr.pw seems to (not) work fine in Firefox... I am running ublock-origin though, no other special things.
Same, they were "fooled" by a private window. I was recognized when just using a different Multi-Account Container[1] though.
[1] https://addons.mozilla.org/en-US/firefox/addon/multi-account...
Same here, it’s not just you. Judging by the other comments, it only seems to “work” on Blink-based browsers.
Also not working on Brave, without uBlock or similar extensions. Brave says it blocked one request, probably the one for fingerprinting.
The site also fails to track on mobile Safari with ”Prevent Cross-Site Tracking” turned on.
It's an interesting question: is it possible for JavaScript to be Turing complete, able to read/write the DOM, and somehow prevent fingerprinting / tracking?
My gut says no, not possible.
Maybe we need a much lighter way to express logic for UI interactions. Declarative is nice, so maybe CSS grows?
But I don’t see how executing server-controlled JS could ever protect privacy.
I've always thought there should be a way to use the browser like a condom. It should obfuscate all the things that make a user uniquely identifiable. Mouse movement/clicks/typing cadence should be randomized and sanitized a bit. And no website should have any authority whatsoever to identify your extensions or other tabs, or even whether or not your tab is open. And it certainly shouldn't allow a website to overrule your right click functionality, or zoom, or other accessibility features.
The obfuscation makes you more easily identifiable.
I think their idea was that it would be in the browser everyone uses.
How so?
Eldo Kim
you stand out when you obviously hide
only if you are the only one doing the obfuscation.
It's why Tor Browser is set to a specific window dimension (in terms of pixel size), has the same set of available fonts, etc.
And yet you still stand out if you use tor.
yes, and it's because not enough people use tor-browser (i meant the browser, not the network).
But if privacy is truly the desired goal, the regular browser ought to behave just like tor-browser.
Yes, it is.
Just create a _strict_ content-security profile which doesn't allow any external requests (fetch) and only allows loading resources (CSS, images, whatever) from a predefined manifest.
An app cannot exfiltrate any data in that case.
You may add permission mechanisms of course (local disk, some cloud the user controls, etc).
That's a big challenge in standards, and I'm not sure anyone is working on such a strongly restricted profile for web/JS.
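Existing CSP can already approximate part of this. A header along these lines (illustrative, not a standardized "profile") confines a page to same-origin resources and blocks fetch/XHR entirely, though links and form submissions remain as exfiltration side channels:

```http
Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self'; connect-src 'none'
```

What's missing is exactly what the parent describes: a user-visible, browser-enforced profile rather than something each site opts into voluntarily.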
> It's an interesting question: is it possible for JavaScript to be Turing complete, able to read/write the DOM, and somehow prevent fingerprinting / tracking?
Yes, of course: restrict its network access. If JS can't phone home, it can't track you. This obviously lets you continue to write apps that play in a DOM sandbox (such as games) without network access.
You could also have an API whereby users can allow the JS application to connect to a server of the user's choosing. If that API works similarly to an open/save dialog (controlled entirely by the browser) then the app developer has no control over which servers the user connects to, thus cannot track the user unless they deliberately choose to connect to the developer's server.
This is of course how desktop apps worked back in the day. An FTP client couldn't track you. You could connect to whatever FTP server you wanted to. Only the server you chose to connect to has any ability to log your activity.
There's no point. If you disable JS, they can track you in other ways: fingerprinting your DNS packets via timestamp clock skew and other things. With IPv6 they can assign you a unique IP address for a DNS lookup that can function like a cookie.
Don't want to be tracked? Don't go on the internet.
Websites can't fingerprint my dns packets by their clock skew, nor can they assign me a unique IP address for a dns lookup (what?). "Don't go on the internet" isn't a great starting point to improve things.
Why does it have to be a technological solution? That's what the media industry tried to do with DRM and it failed. The solution is legislation. We need the equivalent of DMCA for our privacy. Make it illegal to fingerprint.
I’m completely unsold on legislation. Another headline that recently hit the top of HN is about how Apple flagrantly ignored a court order. The judge has recommended the case for criminal contempt prosecution [1].
The comments on the story are completely unconvinced that anyone at Apple will ever be convicted. Any fines for the company are almost guaranteed to be a slap on the wrist since they stand to lose more money by complying with the law.
I think the same could be said about anti-cookie/anti-tracking legislation. This is an industry with trillions of dollars at stake. Who is going to levy the trillions of dollars in fines to rein it in? No one.
With a technological solution at least users stand a chance. A 3rd party browser like Ladybird could implement it. Or even a browser extension with the right APIs. Technology empowers users. Legislation is the tool of those already in power.
[1] https://news.ycombinator.com/item?id=43856795
> The solution is legislation. We need the equivalent of DMCA for our privacy
and how does one know their privacy has been invaded? How does the user know to enforce the DMCA law for privacy?
I think the solution has to be technological. Just like encryption, we need some sort of standard to ensure all browsers are identical and unidentifiable (unless the user _chooses_ to be identified - like logging in). Tor-browser is on the right track.
That'd be the GDPR
Which is only applicable in the EU
I don't know what it is called, but if you try to open a window from a timeout it won't work. The user has to click on something; the click event then grants the permission.
You could make something similar where fingerprint-worthy information can't be posted or used to build a URL. For example, you read the screen size then add it to an array. The array is "poisoned" and can't be posted anymore. If you use the screen size for anything, those things and everything affected may stay readable, but they are poisoned too. New fingerprinting methods can be added as they are found. Complex calculations and downloads might temporarily make time into a sensitive value too.
In the old days, something similar to what you're calling "poisoned" was called "tainted" [0].
In those scenarios, tainted variables were ones which were read from untrusted sources, so could cause unexpected behaviour if made part of SQL strings, shell commands, or used to assemble html pages for users. Taint checking was a way of preventing potentially dangerous variables being sent to vulnerable places.
In your scenario, poisoned variables function similarly, but with "untrusted" and "vulnerable" being replaced with "secret" and "public" respectively. Variables read from privacy-compromising sources (e.g. screen size) become poisoned, and poisoned values can't be written to public locations like urls.
There's still some potential to leak information without using the poisoned variables directly, based on conditional behaviour: branching on a poisoned value (say, fetching one URL when the screen is wide and a different one when it is narrow) is sufficient to leak some info about poisoned variables, without specifically building URLs with the information included.
[0] https://en.wikipedia.org/wiki/Taint_checking
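The poisoning/taint idea from the comments above can be sketched in a few lines of JavaScript. This is a toy model with made-up names; no browser actually implements anything like it:

```javascript
// Toy taint tracking: privacy-sensitive reads are wrapped, the wrapper
// propagates through derived values, and anything poisoned is refused
// at the network boundary. Illustrative only.
class Poisoned {
  constructor(value) { this.value = value; }
}

// Deriving anything from a poisoned value yields another poisoned value.
function derive(input, fn) {
  return input instanceof Poisoned ? new Poisoned(fn(input.value)) : fn(input);
}

// The network boundary rejects poisoned data.
function buildUrl(base, param) {
  if (param instanceof Poisoned) {
    throw new Error('refusing to send poisoned value');
  }
  return `${base}?v=${encodeURIComponent(param)}`;
}

const screenWidth = new Poisoned(1920); // a fingerprint-worthy read
const bucket = derive(screenWidth, (w) => (w > 1000 ? 'wide' : 'narrow'));

console.log(buildUrl('https://example.com/a', 'hello')); // fine: untainted
try {
  buildUrl('https://tracker.example/b', bucket);         // throws: tainted
} catch (e) {
  console.log(e.message);
}
```

Note that `derive` only catches explicit data flow; the conditional-behaviour leak described above (branching on a poisoned value and fetching different clean URLs) slips straight through, which is the classic weakness of taint checking.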
Doesn't work on Brave. It says to check it on private mode, but when I switch to private mode it just asks for my name again.
For me it had the opposite effect of what was intended:
I opened the website in a non-anonymous Safari session: it asked my name. Then I opened another new non-anonymous window in the same browser: it showed my name as expected. I then opened the same browser in incognito mode: it asked my name again. I then opened Chrome (non-anonymous) and again it asked my name.
Exactly what I expected to see; everything seems to be working as intended. Anonymization online seems to be working perfectly fine.
Also doesn’t work on iOS (for me).
Just tried this with Brave and it didn't seem to work, assuming the site working means that it can remember me in an incognito browser. I gave the site a name, and then opened it in incognito (still using brave), and it acts as if I visited the site for the first time.
What am I supposed to witness?
It didn't work on Firefox mobile either... Why are all these browser companies breaking the web!
I think this is a bit overblown. Brave and Safari were both private when I just tested. Chrome not so much, but that's expected.
Unmodified server request headers contain enough information for tracking even if JS is disabled. If you're keen to modify http headers while browsing, then you could also modify any JS run on your system that snoops system information (or strip the info from any request sent to the server) and continue with JS enabled.
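To illustrate how far plain headers go: even with JS disabled, a handful of standard request headers narrows a visitor down considerably (the header names below are real HTTP headers; the hashing scheme is made up for the sketch):

```python
import hashlib

def passive_fingerprint(headers):
    """Derive a stable identifier from request headers alone, no JS.

    Real trackers also fold in TLS- and TCP-level signals; this sketch
    only uses a few standard HTTP headers.
    """
    keys = ("User-Agent", "Accept", "Accept-Language", "Accept-Encoding")
    material = "|".join(headers.get(k, "") for k in keys)
    return hashlib.sha256(material.encode()).hexdigest()[:16]
```

Identically configured browsers collide, but combined with the source IP address the anonymity set gets very small.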
Web Browsers Must Be Removed
They run arbitrary code from sketchy servers called "websites" on people's hardware with way too many privileges. Meanwhile, free and open source standalone web applications exist that only use minimal JS code to access the same web resources with a much better user experience: without trackers, without ads and third parties.
I want a browser to be able to run arbitrary code. That's the whole point. I want to play a game or use a complex application in the browser without having to install anything.
It won’t happen because people don’t care enough.
I don’t mean to sound glib. But people derive a ton of utility from the web as it stands today. If they were asked if they supported the removal of web browsers they would absolutely say no. The privacy costs are worth the gains. If you want change you have to tackle that perception.
Works as advertised on Edge but not on safari
I can't get that site to work on Safari on my Mac, with JS enabled.
The more egregious and frankly disgusting one is https://fingerprint.com
IMO this service should straight up be made illegal. I love the tagline they have of supposedly "stopping fraud" or "bots", when it's obvious it's just privacy invasive BS that straight up shouldn't exist, least of all as an actual company with customers.
I have almost no hope that this is a matter with a technical solution. But the GDPR shows that law - even if not global, and even if not widely enforced - is pretty good at getting people to act. And most importantly, it will make the largest players the most afraid, as they have the most to lose. And if just a handful of the largest players online are looking after people's privacy, then that is a huge win for privacy.
Doing what this demo shows is clearly a violation of the GDPR, if it works the way I assume it does (via fingerprints stored server-side).
They can track you just fine via CSS and countless other ways. They'll even fingerprint the subtle intricacies of your network stack.
What we need to do is turn the hoarding of personal information into a literal crime. They should be scrambling to forget all about us the second our business with them is concluded, not compiling dossiers on us as though they were clandestine intelligence agencies.
I by default block JS on the web and only allow it for domains I accept. It's a tiny bit of work for a whole lot of safety.
I've tried this recently and I found it very difficult. Cloudflare bot protection is everywhere, other anti-scrape protections, many 'document' sites using JS to render with no fallback, basic forms requiring JS, authentication requiring JS, payments requiring JS etc
Not intending to sound snarky but do you just not use the web much? Or if you're adding allows all the time, what's the net gain?
Google won't implement this spec. Currently, they're legally not allowed to, because advertisers called in the industry watchdog, asserting that without third party cookies to stalk users, they could not compete. Google extended their privacy sandbox, opened and closed it, talked about it, and eventually backed down from their plan to block third party cookies ASAP.
Maybe Chrome can get away with "the spec says it, sorry advertisers" but I doubt the courts will accept that.
That is, Firefox can reject third-party cookies because it's not made by a company that deals in online advertising, but Chrome cannot, because Google is the biggest online ads dealer and thus would have an unfair advantage over other ads dealers, correct?
This is kinda hollow while Google controls Chrome, and Chrome has majority market share[1]. And, if regulators get their way, and Google divests Chrome[2], I'm not expecting that the new highest bidder would do any better with it.
[1] The exact figure may depend on which source you use, and there is some indication that ad and tracker blocking may artificially deflate Firefox and friends. https://gs.statcounter.com/browser-market-share [2] https://www.wired.com/story/the-doj-still-wants-google-to-di...
As long as the new steward of Chrome is not an advertising company, they will no longer be restricted from removing third-party cookies.
> Some of the use cases that are important enough to justify the creation of purpose-specific solutions include federated identity, authorizing access to cross-site resources, and fraud mitigation.
Unpopular opinion: there is no privacy-preserving way to do "fraud mitigation".
Either you accept fraud as a cost of running a business, or you do away with privacy. Most business owners don't want a fraudulent user to come back, ever. If we value users' privacy, we need to harm some businesses.
In theory it is possible via "blind attestations" by a 3rd party, in an indirect way. That is what you get with Cloudflare, where they monitor traffic from an "agent" using their own heuristics for identity, without sharing that identity with you.
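A toy sketch of that indirection, with all keys and names invented; real deployments (e.g. Privacy Pass) use blind signatures so even the attester can't link issuance to later use:

```python
import hashlib
import hmac

ATTESTER_KEY = b"attester-only-secret"  # held by the 3rd party, never the site

def issue_token(agent_id):
    """The attester runs its own heuristics on the agent, then issues a
    token bound to a one-way digest of the identity. The relying site
    can verify the token came from the attester without ever learning
    agent_id."""
    pseudonym = hashlib.sha256(agent_id.encode()).hexdigest()
    tag = hmac.new(ATTESTER_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return pseudonym, tag

def site_verifies(pseudonym, tag):
    # In practice the site would ask the attester to verify; shown
    # inline here only to keep the sketch self-contained.
    expected = hmac.new(ATTESTER_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

The site learns "this agent passed the attester's checks" and nothing else; the raw identity never leaves the attester.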
Careful what you wish for. Removing third party cookies without a replacement will make aggressive fingerprinting ubiquitous.
I've always assumed fingerprinting was already ubiquitous. I look at the absolute absurdity of tracking/fingerprinting permission dialogs on sites, stating up-front their data sharing with 'trusted partners' in the hundreds ranges (thingiverse.com with over 900, theverge.com on mobile with over 800) and find it more surprising that the default state of all clients shouldn't be to block everything by default.
Edit: for clarity, I believe anything with the ability to analyze the user environment via Javascript/etc on major sites is likely fingerprinting regardless. Blocking, environment isolation and spoofing is already necessary to mitigate this.
Do you believe that while third party cookies exist, tracking companies aren't using other fingerprinting methods?
I have a feeling that it is all related. When users see a request to accept cookies with a list of over 9000 trackers, it doesn't mean that the page will have zillions of JavaScript files included. It just means that site owners fingerprint the user and relay user interactions to third parties server-side.
The only reason we see this movement is that advertisers feel confident about removing third party cookies.
...thus raising the bar for privacy-preserving techniques in client-side browsing. Aggressive fingerprinting arrived years ago; if we can move beyond cookies altogether and focus on it as the next issue to tackle, I would think that's a net win. Saying that we should keep 3rd party cookies alive and healthy because it will keep websites using them against users rather than fingerprinting is just throwing the majority of users, who don't know to block them, under the bus. Plus it still leaves the door open for even privacy-conscious users to be defeated by fingerprinting anyway, if a server is keen on tracking particular individuals.
Fingerprint-defeating technology is just the kind of thing that I wish Firefox spent its effort developing, instead of reimplementing features from Chrome like tab groups.
Yeah, the only way third-party cookies will block creepier fingerprinting crap is if the creepy stuff is prohibitively more expensive.
But once anyone gets a creepy fingerprinting system working, the barriers drop, and it becomes cheaper to resell the capability as a library or service.
It may offer some minor benefits in terms of enabling companies that "want to be more ethical than the competition", but that too seems like a long-shot. :p
I have always blocked third-party cookies. The only problem I've encountered (there may be others, but I haven't come across them) is that some embedded videos on certain web pages won't play and prompt me to enable cookies.
Sure but this neither makes an attempt to list the valid uses of third party cookies, nor a suggestion of what magic definitely not a third-party cookie unicorn is going to ride in and offer us the safety we need. Pretty fluffy through and through.
I suggest that we do just need to keep third-party cookies, but make them explicitly opt-in. That could just be allowing (once) a third party to be present everywhere (like an SSO), with browsers making it known when a third party is accessing data.
> Some features of the web that people have come to expect, and which greatly improve user experience, currently depend on third-party cookies.
Idea: domains should be able to publish a text record in their DNS (similarly to SPF record for mail domains) designating other domains which are allowed to peek at their cookies.
Suppose I operate www.example.com. My cookie record could say that foo.com and bar.com may ask for example.com cookies (in addition to example.com, of course). A website from any other domain may not. As the operator of example.com, I can revoke that at any time.
Whenever a page asks for a cookie outside of its domain, the browser will perform a special DNS query for that cookie's domain. If that query fails, or returns data indicating that the page does not have access, then it is denied.
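Sketched in Python, with the record syntax invented by analogy with SPF (the browser would perform the TXT-style lookup and then apply something like this):

```python
def parse_cookie_record(txt):
    """Parse a hypothetical cookie-access DNS record.

    Assumed format, modeled on SPF: "v=cookie1 allow=foo.com allow=bar.com".
    Returns the allowed domains, or None if the record is missing or
    malformed.
    """
    parts = txt.split() if txt else []
    if not parts or parts[0] != "v=cookie1":
        return None
    return [p.split("=", 1)[1] for p in parts[1:] if p.startswith("allow=")]


def may_access_cookies(requesting_domain, cookie_domain, txt_record):
    if requesting_domain == cookie_domain:
        return True  # first-party access is always allowed
    allowed = parse_cookie_record(txt_record)
    if allowed is None:
        return False  # failed lookup or malformed record: deny
    return requesting_domain in allowed
```

A failed or malformed lookup denies access by default, matching the proposal above.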
But then all the ad-supported websites will whitelist the ad tracking cookies, which is precisely what they are trying to avoid here.
Ah, but in so doing they will have to publish their whitelist, which will exhaustively have to list every single affiliated domain.
Browsers and browser extensions will be able to use that info to identify shit sites, turning the whitelist around into blacklisting uses, like ad blocking and whatnot.
One simple mechanism would be for the browser to deny the cookie request if the requested domain's cookie DNS record contains more than, say, three affiliated domains. (At the discretion of the browser developer, and user settings.) The proliferation of that sort of config would discourage domains from being overly promiscuous with their tracking cookie access.
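That heuristic is a one-liner; the cutoff of 3 is the arbitrary example figure, presumably tunable per browser and per user:

```python
def looks_promiscuous(allowed_domains, max_affiliates=3):
    """Browser-side heuristic: treat a domain that whitelists more than
    a handful of affiliates as a tracking hub and deny the cross-site
    cookie request outright. The default cutoff is illustrative."""
    return len(allowed_domains) > max_affiliates
```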
Plus, existing cookie control mechanisms don't go away.
Not a bad idea, TBH.
I just feel uncomfortable putting more data into DNS. DNS is not encrypted, and DNSSEC is easy to bypass (or breaks so often that nobody wants to enforce it).
-- but these are not w3c's problem.
Yes; if someone hijacks example.com's main A record, that gets caught at the SSL level.
If someone hijacks example.com's cookie record, that won't be caught; they just write themselves permission to have their page access example.com's cookies.
The same info could just be hosted by example.com (at some /.well-known path or whatever). The web could generate a lot of hits against that.
The DNS records could be (optionally?) signed. You'd need the SSL key of the domain to check the signature.
When you say bypass, do you mean disable DNSSEC on your own computer? Or are there known vulnerabilities in DNSSEC cryptography or software?
DNSSEC isn't encrypted either.
I don't think DNS should be overloaded with a security measure.
It's already used in a similar way for SPF records, in the context of e-mail.
Using a SPF record, a domain indicates hosts that are allowed to deliver mail on its behalf (meaning using an envelope sender address from that domain).
Replacement solutions must be provided before it's mandatory to remove third party cookies. Otherwise, it's doomed to fail.
Replacement for what use case? The whole point is to eliminate the behavior, not provide another feature that has the same problems. What does failure mean? It's a problem for ad networks, not for regular humans.
The use case of not having to log in to system A which is being embedded within system B because you already logged in to system A? Without needing to introduce a third party SSO C? That's pretty "regular human", even if it's "medium sized corporation" instead of "Joe Regular" (but even Joe likes it if he doesn't have to log into the comment box on every site that uses THE_COMMENT_SYSTEM_HE_LIKES.)
This exists already. You can set cookies at a higher level of the same domain, so foo.example.com and bar.example.com can share cookies at example.com. You can also use CORS to interact with a truly third-party site. None of these require third-party cookies.
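The subdomain sharing works because of cookie domain matching (RFC 6265): a cookie set with `Domain=example.com` is sent to example.com and every subdomain of it. Roughly:

```python
def domain_match(request_host, cookie_domain):
    """Simplified RFC 6265 domain matching: exact match, or the host is
    a subdomain of the cookie's Domain attribute. Real browsers also
    apply public-suffix checks, omitted here."""
    host = request_host.lower().rstrip(".")
    domain = cookie_domain.lower().lstrip(".")
    return host == domain or host.endswith("." + domain)
```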
A use case this doesn't address is embedding across two completely different domains, which is pretty common in the education space with LMS platforms like Canvas (https://www.instructure.com/canvas) embedding other tools for things like quizzes, textbooks, or grading. I ended up in a Chrome trial that disabled third-party cookies which broke a lot of these embeds because they can no longer set identity cookies that they rely on from within their iframe.
As nwalters also points out, this isn't the same at all. System A and System A' both from Source Α are not the same as System A (Source Α) and System B (Source Β).
Which you know, because you say "you can also use CORS to interact with a truly third party site". But now, I invite you to go the rest of the way - what if the third party site isn't Project Gutenberg but `goodreads.com/my-reading-lists`? That is, what if the information that you want to pull into System A from System B should only be available to you and not to anyone on the net?
Use OAuth2 to get system B's access token, then use authenticated server-to-server API requests to pull needed information from system B.
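For reference, the grant this comment means (OAuth2 client credentials, RFC 6749 §4.4) boils down to one server-to-server POST; the client id and secret below are placeholders:

```python
from urllib.parse import urlencode

def client_credentials_body(client_id, client_secret, scope=None):
    """Build the form body POSTed to system B's token endpoint.

    This happens entirely server-to-server; the browser never handles
    the credentials or the resulting access token.
    """
    params = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        params["scope"] = scope
    return urlencode(params)
```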
This multiplies the cost of the integration by at least an order of magnitude
The use case is web sites that want to earn income with as little user overhead as possible. Targeted ads have many downsides but they do pay websites without any money at all from the user, or even having to create an account.
So the problem for regular humans is the disappearance of features that they've grown used to having without paying any money. Finding a better way to support themselves has proven remarkably difficult.
I feel like many people here wouldn't care if those websites simply stopped existing.
Certainly a lot of people would care if Facebook disappeared.
There are also a billion other ad-supported web sites, each of which make ten people happy. Not a single one of them would be widely mourned, but 5 billion people would each be saddened by one of them.
Many people would, though.
For a long time I thought pinterest was search spam that no human could possibly want to see, but then I met real people in the world who like it and intentionally visit the site. I bet there are people who like ehow and the rest, too.
The viability of their business model shouldn't be everyone's problem.
It is their problem when a feature that they like disappears.
They don't care about what happens to the business itself. But they do care about the things the business provides.
If they don't in fact care, then indeed, nothing is lost. But a lot of people will miss a lot of things. Whoever comes up with an alternative that suits the case will make a lot of people happy.
People made money on advertising before the existence of cookies and ubiquitous tracking. Nature will heal.
And people had websites before the existence of Internet advertising. Let's set our expectations higher for how much healing is needed.
The article explicitly calls out that there are valid use cases (although doesn’t enumerate them). Federated sign-on and embedded videos seem like obvious examples
Google/Chrome just declared that they won't be moving forward with removing 3rd party cookie support.
https://privacysandbox.com/news/privacy-sandbox-next-steps/
> Taking all of these factors into consideration, we’ve made the decision to maintain our current approach to offering users third-party cookie choice in Chrome, and will not be rolling out a new standalone prompt for third-party cookies.
Ah, now _that_ explains why this got published. Glad to see that common sense prevailed. The day may come when all the use cases for third-party cookies that aren't "track Joe Regular all around the web" can be satisfied with other widely available web features, but until we have all those features I think taking a page from Linus' book and ensuring "we don't break userland" is important (and something I've always loved about the web and am glad to see continuing).
Which use cases? I use Brave, which has a built in toggle to disable 3rd party cookies, which I have set to default, and at least my experience of 'the entire internet' works fine.
Embedded iframes that need to authenticate logins but don't trust the parent domain to store the login data there are a problem. You can somewhat work around it with the Storage Access API if the browser supports it (Brave doesn't), but it does mean every embed requires a click by the user before it works properly.
Same here, but other browsers. I’ve had zero issues since well before the dot com crash.
Company whose market cap reflects pervasive non-requested surveillance announces that, after serious consideration, they won't be removing technologies that enable pervasive non-requested surreptitious surveillance.
It is going to be interesting to see if antitrust enforcement manages to separate Google from its financial and practical hold on web standards/browsers.
The opportunity to increase ethical norms of web browsing would be welcome to me.
Google wants to remove third party cookies but they can't, as the government sees it as anticompetitive toward their competitors. They don't need third party cookies; everyone else does.
Precisely - removing third-party cookies doesn't stop Google from tracking anyone. It just prevents anyone who doesn't own a browser and have one of the three major email providers from tracking everyone.
Well, it doesn't prevent them, but it does make it a little bit harder ...
I personally think this decision hurts users more than anything else. We must let Google's competitors continue tracking us or else it won't be fair to them?
I don't even understand how being forced to divest Chrome will even help. Once another company owns Chrome and can remove third party cookies, Google gets the same benefit.
Google has remarkable financial influence across the four major commercial entity related browsers.
So limiting Google's control over browsers will create more competition. More competition on implementations. And also more competition in terms of features and user centric service.
--
Question: Does Google really not gather information from anything but its search engine and first party apps? That would seem financially non-optimal for any advertising funded business.
I would think that, sure, they log everything people use their search for, but that they would also find a way to track post-search behavior as well. Google leaving money on the table seems ... unusual if there isn't some self-serving reason to forgo it.
I am happy to become better informed.
There are only 3 effective browsers - Chrome, Safari and Firefox. I don't see how limiting Google's control will create competition. The barrier to more browsers is the massive investment needed to create one, not any action that Google is doing.
You are correct, although it's more correct to say there are only 3 major browser engines: Blink (used by all Chromium derivatives), WebKit (used by Safari and some minor browsers), and Gecko (used by Firefox and its derivatives). Creating a browser engine is so hard that even a multi-billion-dollar company like Microsoft gave up on doing it. And we may soon witness Gecko going away as a side effect of the Google antitrust lawsuit.
Google could have removed third-party cookies ten years ago as Safari did…
Their long wait to do it is part of why we ended up in a regulatory mess
We don't need a replacement, they're not needed today. I've been blocking them for years and I can't remember the last time it caused a problem.
Google has set up a replacement that puts the user in control of their ad interest tracking. It has its upsides and downsides, but I think it's pretty balanced. Anti-tracking features are embedded into the API so the API can't be abused by advertisers.
Of course, ad companies scream bloody murder, and the UK market watchdog had to step in so Google wouldn't turn off third party cookies by default.
Do not worry, the ad networks will come up with ways to circumvent it as soon as it becomes mandatory.
done. third parties can be replaced with legally culpable first parties.
I've had them turned off since Firefox added the feature. Looks like that was around 2018, though I could have sworn it was much earlier than that. I've never had an issue where I had to make an exception for a site. Is there still some environment where it's common for them to be needed?
I don't recall a browser that didn't let you disable third-party cookies; given how long ago cookies were introduced, I could have forgotten about it, but I'm at least sure that Mozilla always supported it.
Firefox, especially in the first versions, permitted much less control on cookies than Mozilla did, but I think it still always allowed disabling third party cookies.
> third-party cookies have done more harm than good, and it's time to fully remove them from the web platform

It is refreshing to see their acknowledgment that replacements must not just be privacy-washed clones of the old model: purpose-built alternatives need to prove they don't recreate the same surveillance infrastructure.
I haven't allowed third party cookies in a decade. No problem.
I had a little trouble when Safari rolled out ITP a while back. SSO providers scrambled to figure out how to fix federated logins, and because it affected every iPhone, they managed to do it with a quickness. I haven't had a single problem since.
How about third party js? The site doesn't render properly without third party js from www.w3.org.
Using a custom-built interception layer, I decouple session tokens from identifiable browser states, rotating my signature footprint every few requests via controlled entropy injection. “No more third-party cookies” sounds like a big shift, but it’s functionally irrelevant if your presence is already undetectable.
This is actually a somewhat inconvenient wish, because the alternative would increase the fingerprint investments required for all browsers to recognise us.
I block almost all 3rd party cookies, but at this point isn't it kind of nice to just have your google login follow you around, so you don't constantly have to login on other sites? Sure, it sucks for privacy, which is why your google account should never be tied to your phone number or your actual identity, but it's super convenient. Oh wait. It's tied to your real identity? Go back to square one and start a fake identity with all the root info. Buy a burner with a prepaid card, use it to set up a yahoo mail account, use that to set up a mail server you pay for in bitcoin, use that to verify a gmail account, and never let down your VPN. You're going to be tracked; the right move isn't to waste time worrying about that, it's to be someone invisible and untethered in the real world.
Fine. All that will happen is we'll see more sites switching to requiring a login to do anything on their website, so that they can track you with first-party cookies, and sell your information that way. Nothing will meaningfully change.
The only distinction is that I can do a decent job of blocking third-party cookies today with my existing solutions like uBlock Origin, but I will probably have a much more difficult time getting around login/paywalls.
First party cookies can't build a profile on you across multiple origins.
They absolutely can. They have, at minimum, your account information and your IP address. Maybe you use a burner email address and/or phone number, and maybe a VPN, but chances are you’re not cycling your VPN IP constantly so there’s going to be some overlap there. And if you do cycle your IP, 99%+ of users probably aren’t clearing session cookies when doing so, which means you’re now tracked across IP/VPN sessions. Same deal if you ever connect without a VPN - that IP is tracked too. There’s tons of ways to fingerprint without third party cookies, they just make it easier (and also easier to opt out of if they exist, just disable third party cookies; if no one has third party cookies, sites are going to start relying on more intrusive tracking methods).
You can also easily redirect from your site to some third party tracking site that returns back to your successful login page - and fail the login if the user is blocking the tracking domain. The user then has to choose whether to enable tracking (by not blocking the tracking domain) or not seeing your website at all. Yes the site might lose viewers, but if they weren’t making the site any money, that might be a valid trade off if there’s no alternative.
Not saying I agree with any of this, btw, I hate ads and tracking with a passion - I run various DNS blocking solutions, have ad blockers everywhere possible, etc. Just stating what I believe these sort of sites would and can do.
All they need to do is redirect you through a central hub after login.
Can't you just work around all of this by proxying to the third party site(s) with a subdomain?
I think you're right. I imagine if third party cookies were ever banned, we'd quickly see googleads.whatever.com become a common sight.
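This workaround already has a name: CNAME cloaking / first-party fronting. The publisher points a subdomain at the tracker, so the tracker's cookies become first-party. A sketch of the routing table (all hostnames invented):

```python
PROXY_MAP = {
    # publisher-controlled subdomain -> actual third-party origin
    "googleads.whatever.com": "collect.example-adnetwork.com",
}

def upstream_for(host):
    """Return the hidden third-party origin a 'first-party' host
    reverse-proxies to, or None if the host is genuinely first-party."""
    return PROXY_MAP.get(host.lower())
```

This is also why some blockers now resolve CNAMEs before applying their filter lists.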
There's no need for a login to track you with "first-party cookies", looking at the IP is perfectly adequate, at most adding some fingerprinting if you really want.
The only problem is that then the tracking companies have to place more trust on the first party that they're giving them real data.
But they're doing it, actually, see confection.io for example
Has anyone noticed the pattern where, with some pulled-out-of-my-arse explanation, these standards groups and Google suddenly remove features that would be useful to people, because they've decided they're not OK anymore? Like HTTP referers now only showing the domain, not the full URL, because [insert complete BS explanation]. And now 3rd party cookies too...
uMatrix blocks those by default. Blocking third party cookies very rarely breaks anything. I can only think of one instance in the past five years, and that wasn't really a third party cookie, but one website using two different domains.
You even don't need uMatrix for that. Every major website has a toggle for it in the settings.
Sounds like a diversion. Websites can use local storage and fingerprinting to do anything they want at this point.
So, the web ad market is being monopolized by platforms. Google and Facebook make overwhelming revenue from their own websites.
Now, down with the rest.
Facebook pixel works just fine without third party cookies.
Here we go again!