We Need to Talk: The Dev/Privacy Relationship Is Getting Rocky

A few days ago I attended an OWASP Toronto chapter meeting, where agentic AI took centre stage* from a security perspective. It confirmed something I'd been sitting with for weeks, and honestly, it wasn't comfortable.

Every relationship has a moment where someone needs to say it out loud. The dev team and the privacy office have been growing apart for a while now, and agentic AI just accelerated things considerably. The privacy office is still setting ground rules for the first date, and the development team has already moved in together, redecorated, bought a hot-tub, and is halfway through building an extension.

My LinkedIn feed reflects this perfectly. A constant stream of AI governance content, policies for LLM usage, guidance on what employees can and can't put into a prompt. All well-intentioned. But it's so 2025. The conversation has already moved on, and most governance frameworks haven't followed it.

I work in the space where these two worlds are supposed to meet. I help development teams put privacy guardrails into software, and I help privacy officers understand how development actually works, because what looks like a straightforward fix from the governance side is often weeks of refactoring that quietly disappears into technical debt. The gap between those two functions is where the real risk lives, and agentic AI has just made that gap considerably harder to bridge.

So here are five things I think the privacy office at any development company should be paying close attention to right now, and what dev teams should be raising with their governance folks. I'll be going deeper on each of these in the weeks ahead, because each one deserves its own conversation. Consider this the opening of that discussion.

The context window knows too much

As agentic systems run, they accumulate. Source code, configuration files, environment variables, database schemas, API keys, customer data pulled in during testing. That growing memory becomes a target in itself, and it creates real secondary usage risk that most teams haven't stopped to consider.
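One practical mitigation is to scrub likely secrets before anything lands in the agent's context. Here's a minimal sketch of that idea; the patterns and function names are my own illustration, and a real deployment would lean on a proper secret-scanning library rather than two regexes:

```python
import re

# Illustrative patterns for secrets that commonly leak into an agent's
# working context. Deliberately incomplete; real scanners ship hundreds.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scrub(text: str) -> str:
    """Redact likely secrets before text is appended to the agent's context."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(scrub("DB_PASSWORD=hunter2 loaded from .env"))
```

It's crude, but it makes the point: the time to think about what the context window holds is before the content goes in, not after the window becomes a target.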

Permissions don't shrink as agents delegate

When humans pass work down a chain, the security window typically narrows. With agentic systems, it often does the opposite. Permissions compound as agents delegate to other agents. An agent can end up operating with a combination of developer privileges, service account access, and API tokens that no single human would ever hold simultaneously.
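The mechanics are easy to show. In a hypothetical three-agent chain (the agent names and scope strings below are invented for illustration), the permissions the chain can exercise end up being the union of every scope along the path, where a human delegation chain would normally narrow toward an intersection:

```python
def effective_permissions(chain):
    """Permissions an agent chain can exercise: the UNION of every scope
    along the delegation path, not the narrowing a human chain would get."""
    effective = set()
    for agent_scopes in chain:
        effective |= agent_scopes
    return effective

# Hypothetical scopes for a three-agent delegation chain.
orchestrator = {"repo:read", "ci:trigger"}
coder        = {"repo:write", "secrets:read"}
deployer     = {"prod-db:read", "cloud:deploy"}

print(sorted(effective_permissions([orchestrator, coder, deployer])))
```

No individual agent holds all six scopes, and no single human would either. The chain does.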

You may not know where your data is going

If your team is using a hosted agentic service, every prompt is a data transfer to a third party. Is that data being used for training? Are Data Processing Agreements in place? Can those agents route information further downstream without a human ever making that call? If an agent is told to simply find a solution, it may engage an external service entirely on its own.
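One deterministic control here is an egress allowlist: the agent can only call hosts you've vetted and put a DPA behind. This is a sketch only, with made-up hostnames; in practice you'd enforce it at a network proxy, not inside the agent's own code:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of services covered by a Data Processing Agreement.
DPA_COVERED_HOSTS = {"api.approved-vendor.example", "internal.example.com"}

def check_egress(url: str) -> bool:
    """Return True only if an agent's outbound call targets a vetted host."""
    return urlparse(url).hostname in DPA_COVERED_HOSTS

print(check_egress("https://api.approved-vendor.example/v1/complete"))
print(check_egress("https://random-llm-service.example/solve"))
```

The point isn't the ten lines of Python. It's that "find a solution" needs a hard boundary around it, because the agent won't draw one itself.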

Can you reconstruct what actually happened?

When an agent writes code autonomously or takes actions across systems, can you determine what it touched? Can you establish whether it accessed data it shouldn't have, or made a decision that violated your privacy commitments? In a regulated environment, "the agent did it" is not going to satisfy an auditor.
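The minimum viable answer is an append-only audit trail of every action an agent takes, written somewhere the agent can't edit. Below is an illustrative sketch (agent ID, field names, and hash-chaining scheme are all my own, not any vendor's format); chaining each record to the previous one makes after-the-fact tampering evident:

```python
import datetime
import hashlib
import json

def audit_record(agent_id: str, action: str, resource: str, prev_hash: str) -> dict:
    """One append-only audit entry; each record hashes over the previous
    record's hash, so edits anywhere in the chain become detectable."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "resource": resource,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

rec = audit_record("codegen-agent-7", "read", "customers.db", prev_hash="0" * 64)
print(rec["agent"], rec["action"], rec["resource"])
```

If you can't produce something like this chain for every system the agent touched, "the agent did it" is the only answer you'll have, and we've already established how that lands with an auditor.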

The system can surprise you in ways nobody designed

This is the one I think governance professionals find most unsettling once they fully grasp it. Agentic systems are nondeterministic. An agent testing its own code might pull in real customer data to do it. It might make a sequence of decisions that individually seem reasonable but collectively create a violation nobody anticipated or intended.
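You can't make the agent deterministic, but you can make the checks around it deterministic. A sketch of that pattern, using invented dataset names and classifications: the agent picks whatever test data it likes, and a fixed policy decides whether that pick is allowed before anything runs.

```python
# Hypothetical data classifications. The agent chooses datasets
# nondeterministically; this guardrail decides deterministically
# whether the choice is permitted in a test run.
CLASSIFICATION = {
    "synthetic_orders.csv": "synthetic",
    "prod_customers.csv": "customer-pii",
}

def allowed_in_tests(dataset: str) -> bool:
    """Only explicitly synthetic data may enter a test run; anything
    unknown or containing real customer data is rejected."""
    return CLASSIFICATION.get(dataset, "unknown") == "synthetic"

print(allowed_in_tests("synthetic_orders.csv"))
print(allowed_in_tests("prod_customers.csv"))
```

Note the default: anything unclassified is blocked. With a nondeterministic system, fail-closed is the only sensible posture.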

So are we heading for a breakup?

None of these are addressed by a policy telling employees not to paste client data into ChatGPT. That conversation, while still worth having, is like discussing household chores when one partner is already out the door.

If you're in a privacy office or a development team and recognising the tension I'm describing, reach out. Sometimes what these two functions need most is a translator who understands both sides, can sit in the room with them, and help them remember why they got together in the first place. The relationship is worth saving. It just needs some work.

I'll be unpacking each of these risks in more detail over the coming weeks. If you want to follow along, the newsletter sign-up is below.

*shout out to Daniel Shechter from Miggo for a great preso!
