As a self-proclaimed privacy engineer, I build privacy-centric products for users and review features across the company to prevent privacy flaws from shipping. At a previous company, I worked on infrastructure, making sure that the company’s DNS was not leaking sensitive customer data in transit. Prior to that, I worked on creating privacy compliance tools for enterprise customers. These are all valid things for a privacy engineer to work on; much like software engineering, the role spans many specialties. Privacy as an engineering function, as opposed to a purely legal and policy one, is a fairly new concept. Some big companies like Google had internal privacy teams, but it was only under threat of the incoming storm of privacy regulation, led by GDPR and FTC consent decrees in the US, that everyone panicked about building privacy into products and engineering processes. Now there’s high demand for privacy engineers, but every tech company interprets the role differently.

I’ll go over what motivates companies to hire privacy engineers, and what privacy engineer archetypes arise because of those needs.

Why hire privacy engineers?

Privacy engineering addresses the following business needs, arranged in roughly descending order of importance for most companies:

Need #1: Comply with privacy laws

GDPR allows fines of up to 4% of annual global revenue (or €20 million, whichever is higher). The CCPA penalty in California is more interesting: it’s capped at $7,500 per intentional violation, and every single customer whose privacy rights were violated counts as a separate violation. So if you have millions of users, good luck!
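To see why "per violation" is scarier than a percentage cap, here's a back-of-the-envelope exposure calculation. The revenue and user figures are purely hypothetical, and this is arithmetic, not legal advice:

```python
# Back-of-the-envelope regulatory exposure, with illustrative numbers.
# All figures below are hypothetical; this is not legal advice.

GDPR_MAX_RATE = 0.04         # GDPR ceiling: up to 4% of annual global revenue
CCPA_PER_VIOLATION = 7_500   # CCPA cap per intentional violation

annual_global_revenue = 500_000_000  # hypothetical: $500M/year company
affected_users = 2_000_000           # hypothetical: users affected by a violation

gdpr_ceiling = GDPR_MAX_RATE * annual_global_revenue
ccpa_ceiling = CCPA_PER_VIOLATION * affected_users  # one violation per user

print(f"GDPR ceiling: ${gdpr_ceiling:,.0f}")  # GDPR ceiling: $20,000,000
print(f"CCPA ceiling: ${ccpa_ceiling:,.0f}")  # CCPA ceiling: $15,000,000,000
```

With enough affected users, the per-violation structure dwarfs the revenue-based cap by orders of magnitude.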

Need #2: Stay out of the news

A classic adage in privacy engineering, and a common cudgel used by privacy review teams: even if we’re not explicitly breaking the law with this feature, are we okay with being on the front page of the New York Times for it? The answer is typically no.

Need #3: Build trust with users

Customers like to use products they trust. If they believe the company will use their data responsibly, they spend more money, which is behaviour companies like to encourage. Nothing ruins that trust like a data breach or evidence of sale of data to third parties.

Need #4: Build new privacy-preserving products

People want privacy-preserving tools. Many brands have built their product strategy around privacy. Examples: Apple, Signal, Brave, your favourite VPN du jour. I also include privacy products that address enterprise needs here, such as a data redaction product marketed at companies looking to fulfill Needs 1 through 3.

Not all companies have the same needs, and some companies have multiple needs. Now, on to the archetypes.

Archetypes

Archetype #1: The Product Privacy Engineer

Product privacy engineers are typically regular software engineers who work on features that explicitly enhance user privacy, such as VPNs, encrypted messaging apps, and user-facing privacy controls like privacy dashboards.

Sadly, there aren’t that many companies which sell privacy as a product. The best product privacy work happens when privacy dovetails with functionality. Biased example, given that I work at Brave: ad-blocking not only enhances browsing experience by speeding up page loads and conserving battery but also blocks real security and privacy threats. Another example is Apple’s Hide My Email feature: email addresses are pernicious tracking vectors, so being able to have a unique one per-website is extremely good for protecting online privacy. Plus, it lets you manage the emails you’ve used across websites and apps and easily deactivate them, which is pretty useful in itself. Web browsers often do a lot of interesting product privacy work, probably because the Web is a tracking cesspool. Building new privacy products fulfills Need #4.

Privacy dashboards empower users to manage privacy preferences (should your activity be public on Strava?) and consent around things like data sharing (should Twitter use your data to show you personalized ads?). They also let users do things like request their data and ask for it to be deleted or exported. Software engineers who work on these dashboards may be members of a dedicated privacy engineering team or engaged on an as-needed basis by the Privacy Compliance or Legal teams. Giving users privacy controls goes towards being compliant with privacy laws such as GDPR and CCPA, so fulfills Need #1. It also makes users feel that their privacy rights are being protected and lets them exercise those rights, so I’d argue it also goes towards Need #3.

Archetype #2: The Privacy Infrastructure Engineer

Privacy infrastructure work often takes the form of figuring out how data flows internally and externally and building shared privacy tooling for feature teams. These engineers are also sometimes called Privacy Architects.

To comply with data subject access requests, it’s crucial to know where user data lives across the backend. User data tends to be like a fungus that quickly spreads its tendrils throughout the host database. Mapping data and its flows is complicated and never-ending work. Also, you need to figure out how to surface data to your feature teams without violating privacy laws. How long can feature teams keep data around? They could write their own cron jobs to delete data after the promised duration, but this quickly gets messy. At some point, you want to build a centralized way of handling data retention policies that teams can hook into.
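The "centralized way of handling data retention" mentioned above could be sketched roughly as follows. All the dataset names, schemas, and function signatures here are hypothetical, invented for illustration:

```python
# A minimal sketch of centralized retention policies that feature teams
# can hook into, instead of each writing its own deletion cron job.
# Dataset names, windows, and callback signatures are all hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RetentionPolicy:
    dataset: str        # logical dataset a feature team registers
    max_age: timedelta  # retention window promised to users

# Central registry, instead of per-team ad hoc deletion logic:
POLICIES = [
    RetentionPolicy("search_logs", timedelta(days=90)),
    RetentionPolicy("crash_reports", timedelta(days=30)),
]

def expired(created_at: datetime, policy: RetentionPolicy) -> bool:
    """True if a record has outlived its dataset's retention window."""
    return datetime.now(timezone.utc) - created_at > policy.max_age

def sweep(fetch_records, delete_record):
    """One central job walks every registered policy and deletes
    expired records, so teams only declare a policy, not a job."""
    for policy in POLICIES:
        # fetch_records yields (created_at, record_id) for a dataset
        for created_at, record_id in fetch_records(policy.dataset):
            if expired(created_at, policy):
                delete_record(policy.dataset, record_id)
```

The key design choice is that teams declare a policy in one place, and a single audited job enforces it, which is much easier to reason about (and prove compliant) than dozens of bespoke cron jobs.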

Privacy infrastructure engineers also work on making sure data is encrypted, both at rest and in transit. Check out Meta’s blog posts on how they deployed TLS in their datacenters. Sometimes this kind of work requires your vendors and partners to comply with your technical privacy requirements, which can mean standardizing your privacy-enhancing protocol and contributing to open-source implementations.

This archetype helps with privacy compliance (Need #1) and reduces the PR fallout from a breach (Need #2).

Archetype #3: The Privacy Reviewer

I’d say the vast majority of privacy engineers, especially at companies like Google and Meta, are hired to do technical privacy review. These engineers review specs and features for privacy issues and identify risks. They work across legal, product and engineering teams to make sure the company’s privacy stance is upheld. They think about things like data minimization (is this feature collecting more data than it needs?), retention (when is the data collected by this feature deleted?) and consent (is the user being given a meaningful choice about their data?). They advocate on behalf of the user. Sometimes privacy reviewers act as consultants for a larger organization, and sometimes they’re deeply embedded into one product line. Either way, the job is all about understanding features quickly by asking the right questions, developing long-term relationships, clearly outlining risk and suggesting improvements.

Privacy reviewers may end up setting privacy principles in order to ensure consistency in reviews across different reviewers. These principles are either officially encoded in a document or unofficially exist in their heads and/or Slack threads. These principles guide concrete technical decisions in product development, distinct from legal privacy policies. For example, a privacy principle could be “we drop all IP addresses at the WAF layer for HTTP requests containing user data but not otherwise”; the privacy policy corollary for this could be something generic like “We do not store any user-identifying information”.
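A principle like "drop all IP addresses for requests containing user data" eventually becomes a concrete code path somewhere. Here's one hypothetical sketch of what that could look like at a logging layer; the endpoint list and function are invented for illustration:

```python
# Hypothetical sketch of the principle "drop IP addresses for requests
# containing user data, but not otherwise", applied at a logging layer.
# The path list and log schema are invented for illustration.

USER_DATA_PATHS = {"/api/profile", "/api/messages"}  # hypothetical endpoints

def build_log_entry(path: str, client_ip: str, user_agent: str) -> dict:
    """Build a request log entry, omitting the client IP whenever the
    endpoint handles user data, per the privacy principle."""
    entry = {"path": path, "user_agent": user_agent}
    if path not in USER_DATA_PATHS:
        # IP retained only for non-user-data traffic (e.g. health checks)
        entry["client_ip"] = client_ip
    return entry
```

The point of writing the principle down precisely is that it translates into an enforceable rule like this one, whereas the generic privacy-policy wording ("we do not store any user-identifying information") cannot be checked by code at all.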

Privacy reviewers inevitably see common patterns cropping up in the specs they review. For example, they might realize that a lot of product teams need to build their own bespoke way of deleting data. This insight leads to generalized privacy infrastructure (see previous section) being built. Roles can be fluid!

Privacy review helps product teams comply with privacy laws (Need #1), identify if a feature might “feel” sketchy to users (Need #2), and reduce risk of privacy exposure that might ruin user trust (Need #3). This is important work!

Archetype #4: The Privacy Red Teamer

This is related to privacy review, but with a more adversarial mindset and done after the product is deployed. Similar to security red teams, privacy red teams take an existing product or feature and try to “break” it by finding privacy vulnerabilities. For example, if a user presses the “Delete My Data” button on their privacy dashboard, is that data actually scrubbed from all database tables on the backend? If a company claims that telemetry data is “anonymized” (whatever that means), can a particular user’s identity be reconstructed using telemetry data points? Even if one particular feature is working as intended, things can go wrong in the interaction between systems. This is where privacy red teams shine: it’s hard to grasp all possible interactions before code hits production and data hits servers. For example, did the Marketing team accidentally start using data that was collected for product improvement purposes? Can we automatically detect when that happens? Privacy red teams report their findings to folks working on privacy infrastructure to prevent bad stuff from happening again, and to privacy reviewers so that future bad stuff is caught in the review stage.
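The "Delete My Data" example above can be made concrete as an automated check: after a deletion request completes, scan every table known to hold user data and report any leftovers. This is a hedged sketch using sqlite3 purely as a stand-in backend; the table inventory is hypothetical:

```python
# Sketch of a red-team style deletion check: after "Delete My Data",
# verify no rows for the user remain in any table known to hold user data.
# sqlite3 is a stand-in backend; the table inventory is hypothetical.
import sqlite3

USER_TABLES = ["profiles", "messages", "telemetry"]  # hypothetical inventory

def residual_rows(conn: sqlite3.Connection, user_id: int) -> dict:
    """Return {table: row_count} for tables still holding the user's data.
    An empty dict means the deletion actually propagated everywhere."""
    leftovers = {}
    for table in USER_TABLES:
        (count,) = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE user_id = ?", (user_id,)
        ).fetchone()
        if count:
            leftovers[table] = count
    return leftovers
```

Note that a check like this is only as good as its table inventory, which is exactly why red teams lean on the data-mapping work done by privacy infrastructure engineers.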

AFAIK, only Meta, Google and TikTok have privacy red teams. It’s definitely a bigger-company problem: it arises when there’s a Cambrian explosion in the surfaces where user data is ingested, and it becomes hard to identify all privacy risks prima facie.

This role helps to fulfill the same needs as privacy review (Needs #1, #2 and #3).

Archetype #5: The Privacy Incident Responder

Review and red-teaming notwithstanding, privacy incidents happen! Privacy incident response teams get involved when there’s a risk or occurrence of user data leakage. They then work with other privacy teams to make sure the incident doesn’t happen again.

Privacy incident response is classically a legal problem: there are regulations around disclosure of exposure of user data. Privacy incident response teams make sure those legal obligations are fulfilled, so this role helps with Need #1. It also helps retain trust (or at least mitigate its loss) by informing users quickly and clearly about privacy incidents (Need #3).

Conclusion

The field of privacy engineering is rapidly evolving, and it’s a fun time to be a privacy engineer. Privacy engineers often wear multiple hats and work across the roles I outlined: I’ve particularly enjoyed doing a combination of engineering, policy and review. Plus, it just feels good when your work is aligned with users’ interests. Other privacy engineers I know feel similarly.

Do you work on something else as a privacy engineer? Let me know!

Big thanks to Nandita Rao Narla and Jay Averitt for proofreading and suggestions.