🛡️ Beyond cookies: browser fingerprinting in 2025 (Part 1)
Earlier this year, I was asked to give a talk on the state of browser fingerprinting by the Public Interest Technology Group (PITG). I wrote a blog post based on the talk that was published on the PITG blog. I decided to reproduce it in full on my own blog because I made some edits post-publication, and I think it reads better as a two-part blog post. So here’s part one. You can find part two here.
Beyond cookies: browser fingerprinting in 2025
Cookies are optional. Fingerprinting isn’t. In 2025, the easiest way for trackers and third-party advertisers to follow you across the Web is to read the traits your browser can’t help revealing (screen, fonts, GPU quirks) and stitch them into a stable ID. The third-party advertising and tracking ecosystem has metastasized to the point that even US intelligence agencies use ad blockers internally for security reasons. The connection between real-time bidding and personal data leaks is well-established. This personal data often ends up with data brokers and can eventually expose users to financial fraud.
This blog post gives an overview of how browser fingerprinting is used for tracking, how browsers protect users, and how users can protect themselves. This latter part is important, because most browsers leave anti-fingerprinting off by default. Web privacy is a fast-evolving and complicated field, so the goal of this writeup is to give a clear mental model and links for diving deeper.
What is a browser fingerprint?
A browser fingerprint is much like a human fingerprint: a unique identifier that is hard to change. The more ways in which you’re different from other users, the more identifiable your browser fingerprint is, and the easier you are to track across the Web. If all a website comes to know is that you’re on an iPhone 16, that’s not particularly identifying, since you are far (far, far) from the only iPhone 16 user. But websites also need to know things like your screen size (to properly display the website for your screen), your timezone (to show you your calendar), whether or not you have dark mode enabled (for accessibility as well as general hacker vibes), etc. In combination, all of these small differences contribute to making your browser look unique.
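To make "small differences add up" concrete: identifying information is usually measured in bits of entropy, and each trait contributes more bits the rarer its value is. Here's a minimal sketch with made-up, purely illustrative frequencies (real surveys like EFF's Panopticlick measure these empirically):

```javascript
// Hypothetical fraction of users sharing each of your trait values.
// These numbers are illustrative, not real measurements.
const traitFrequency = {
  userAgent: 0.05,   // 1 in 20 users send this exact User-Agent string
  screenSize: 0.1,   // 1 in 10 share your screen dimensions
  timezone: 0.04,    // 1 in 25 are in your timezone
  darkMode: 0.5,     // half of users have dark mode on
};

// Surprisal in bits: rarer trait values carry more identifying information.
const bits = (p) => -Math.log2(p);

// If traits were statistically independent, their bits would simply add up.
const totalBits = Object.values(traitFrequency)
  .reduce((sum, p) => sum + bits(p), 0);

console.log(totalBits.toFixed(1)); // ≈ 13.3 bits
```

No single trait here is alarming, but together they narrow you down to roughly one browser in 10,000 (2^13.3). For comparison, about 33 bits is enough to single out one person on Earth.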
For a browser, this presents a dilemma: break websites’ ability to detect dark mode and you incur the wrath of your most vocal users whose hacker aesthetics you just committed photocide against (ask me how I know). Don’t, and that’s yet another bit of information exposed to malicious tracking scripts. It gets even more complicated with more advanced fingerprinting techniques that rely on subtle differences between how different computers render pixels, or how sound cards process sound. We’ll come back to this point when discussing anti-fingerprinting strategies, but generally, the more modded and customized your computer setup, the more identifiable it is.
This majorly sucks, because the power of the Web is in its dynamism and diversity. JavaScript and other Web technologies let developers design immersive experiences and power the Web economy. Also, the same Wikipedia.org website can work across different operating systems, device manufacturers, form factors and hardware capabilities, ranging from my Apple device to my colleague’s bespoke Sailfish-flashed handset, and I think that’s beautiful. Powerful browsers and adaptive websites are a good thing!
Who does browser fingerprinting?
Advertisers
Advertisers want to know very legal and very cool things like whether that Nike ad you saw on Instagram ended up being responsible for a purchase you made on Nike’s website later that week. Without this kind of tracking data, they have no idea if the billions of dollars they pay advertising platforms like Meta are paying off. Advertising networks also want to know who you are in order to increase the chances you click on an ad. There is an overwhelming financial incentive to use every kind of user tracking they can get. Interestingly, browser fingerprinting is controversial even within the advertising industry, though it happens anyway.
Anti-fraud and anti-bot vendors
Anti-fraud and bot-mitigation companies aim to identify unwanted clients by fingerprinting their browsers. “Unwanted” typically means “could be a security threat” or “is a bot”. Identifying non-human traffic is a growing concern, especially as LLMs get better at solving CAPTCHAs. The New York Times and other news websites were caught harvesting local IP addresses as an anti-bot strategy a few years ago.
Law enforcement and nation states
Government agencies frequently use whatever data collection mechanism they can get their hands on. NSA’s XKEYSCORE program hoovered up Internet traffic directly from fiber optic cables around the world, and extracted browser fingerprints to assess exploitability of their targets. The UK tax revenue agency (HMRC) recently asked around for fingerprinting solutions to detect tax fraud.
Why fingerprint (when you can cookie)?
After much vacillation, Google Chrome announced in April 2025 that they will be rolling back their latest already-watered-down proposal to bring third-party cookie blocking to users, and will now be doing (checks notes) absolutely nothing. The working title of this blog post was “tracking in a post-cookie world”, but it looks like that world is still far away, given Chrome’s reluctance to touch third-party cookies and their dominant browser market share. More than half the Web’s traffic comes from Chrome (exact numbers vary depending on who you ask for interesting reasons that deserve their own blog post).
Other major browsers, thankfully, do block and partition third-party cookies. Even so, browser fingerprinting is still widely used by trackers and third-party advertisers to overcome the limitations of cookie-based tracking.
Cookies can be isolated (e.g. Private Browsing)
Users can set up dedicated browsing sessions, isolating cookies and other storage. The classic example is Private (aka Incognito) windows, which also clear storage when users exit them, but Firefox’s Containers or Chromium’s Profiles serve the same purpose of making sure that cookies are isolated to a particular storage area. If I visit a website in my normal profile and then re-visit in a Private window, and the website is able to tell that I’m the same user across both visits, that’s a major privacy leak.
Browser fingerprinters try to pierce session isolation in order to re-identify users. The NSA used Evercookie to unmask Tor users by recreating cookies even after they were deleted.
Cookies can be cleared
Cookies and other kinds of storage can be proactively cleared by the user even within the same session. Brave and DuckDuckGo offer ways to automatically clear storage when a tab/site/app is closed. Several browsers use heuristics to figure out when it’s safe to automatically clear a website’s storage so as to prevent tracking while still preserving functionality; it would be pretty annoying if I had to re-login to my favorite website every time I visited it. Bounce tracking mitigation is one category of this work that is implemented by most browsers, with varying degrees of aggressiveness. Again, Chrome lags behind other browsers by not applying bounce tracking mitigations by default.
Compared to cookies, a browser fingerprint is a lot more pernicious and hard to clear, since it relies on inherent characteristics of your machine.
Fingerprinting is invisible
Browser fingerprinting is often passive: the malicious website or script doesn’t need to do anything observable in order to fingerprint you. This is unlike cookies, where the tracking script actually has to leave data on your browser, which can be cleared as mentioned above. If a website is using your User-Agent string to create a fingerprint for your browser, there’s not much you can do about it, since you won’t even know that the website is doing it on its backend. Brave has a way for users to see if a website invoked a Web API that has fingerprinting protections applied, but fundamentally there’s no way for the browser to know if the information requested by the website is going to be used for nefarious or benign purposes.
Harder for regulators to enforce
Regulators have mostly enforced laws against storage-based tracking, since violations are much easier to detect. Cookie consent notices are a very visible example of this: you’re inundated with them as websites try to comply with laws that require explicit consent for storage on the user’s device. This leaves fingerprint-based profiling under-enforced, since it happens invisibly on websites’ and trackers’ backends.
Google announced in 2024 that they will no longer prohibit their advertising customers from fingerprinting users, which was (thankfully) sharply rebuked by the UK ICO.
Conclusion
Browser fingerprinting thrives because it exploits the Web’s necessary, high-entropy surfaces (screen, fonts, device quirks) quietly and across sites. Part two turns from diagnosis to defenses: how browsers approach this problem, what trade-offs they face, and concrete steps you can take to raise your anonymity set without breaking the modern Web.
That’s it for part 1! You can read part 2 here, where I’ll talk about how browsers defend against browser fingerprinting and what you can do to protect yourself.