ava's blog

can agentic AI consent on your behalf?

Tech companies have been promising that online shopping or booking a hotel will soon be handled by AI. Just tell it your requirements and to pick the cheapest option, and off you go.

We've seen agentic AI fail spectacularly: email inboxes cleared out, databases and hard drives deleted, and more. While a few of these failures may genuinely come down to hallucination or inexplicable actions, others happened because of vague instructions, missing safeguards, no confirmation steps, and overly broad permissions and access.

Still, this clearly needs to be refined before the general public uses it, especially less tech-savvy folks, intellectually disabled members of our communities, or children. Who's on the hook if the bot buys into a scam product, or if it agrees to buy 500 live ducks on your behalf when you just wanted to order a bathtub duck?

When websites are optimized around how bots navigate and crawl them, will we see dark patterns ("consent optimization") aimed at bots? Compared to humans, how easy will it be to trick AI? How do you teach an agent what an untrustworthy website looks like, when most warning signs also appear on plenty of reputable sites? Can AI do nuance?

How will notices like the Privacy Policy, Terms of Service, agreements to newsletters and marketing work when their presentation currently relies on you yourself navigating the website and being presented with the option to see them directly? If your agent opts into newsletters, cookies or additional data tracking, but you actually disagree, what does that mean for vendors, who have to keep definitive proof that you consented?

I think these are all interesting questions to get into, both philosophically and legally! So let's start!! :)

☁️☁️☁️

When you let an agentic system act on your behalf and enter contracts, it becomes something of a “proxy actor”, a representative. You set the general target, but the agent decides the path, the "how". How autonomous is that actor? If the system is merely executing explicit, pre-specified preferences, it looks like an extension of your will. Realistically though, these systems also interpolate, generalize, and occasionally improvise and hallucinate. At that point, the agent is not just expressing your will, but developing it further. There’s a risk of made-up consent based on a statistical guess about your preferences rather than your actual judgment.

There's also the problem of how to give consent for something you don't understand, or consent that relies on a possibly altered or incomplete AI summary. Human consent typically presumes having an overview of all information given (without some mediator meddling with it), and some understanding of what is being agreed to (and we will get to that later!). But AI can't "understand". If an AI negotiates or accepts terms too complex for you to fully grasp, that's a problem. The reasoning process of the agent is often non-transparent and non-auditable.

In terms of responsibility and the consequences of consent, we run into more issues. When the system is selecting among trade-offs and standards you have communicated or implied (price, privacy, convenience, etc.), it’s performing a lot of judgments about what you are willing to support, endure, financially invest in, and more. If the result is harm, do we blame the user, developer, deployer, or the agentic system? It'll depend on each case, but we will likely struggle with the fact that AI can be a black box where no single actor fully “owns” the decision making.

Assuming things go swimmingly though, and the actual process works well: How does our view of the value of a given consent change when agents seemingly prefer specific vendors, either because the company has a deal with the AI deployer, or because the agent learns from past orders and doesn't bother to look much elsewhere? In a way, this narrows down our consent and makes it worth a little less, as if part of the choice had already been made for you. You may believe your agent is maximizing your preferences while it is, in fact, constrained by guardrails or past usage.

We will definitely need new legislation on this, and I assume we will see interesting court cases tackling many of these issues.

☁️☁️☁️

So what does consent via AI agent look like in the EU? Several other EU laws refer to the GDPR when it comes to the definition of consent, so we can use it to guide us.

First off: By consenting to something (like data processing), you are waiving some of your fundamental rights - this sounds scary at first, but is normal!

You might do this because you are willing to forgo them in a specific situation. Think of it like surgery: You have a right to remain free from bodily harm, but to improve your health or save your life, you might agree that a doctor is allowed to physically harm you in order to help you. In the case of digital products, you might be okay with giving up some of your data protection rights and allowing processing if it gets you the goods and services you want :)

Article 4(11) GDPR says:

"‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her;"

So for consent to be valid, all four of the elements mentioned need to be met.

  1. It must be freely given - no coercion, pressure, intimidation, financial incentives, manipulation and the like! Power imbalances need to be considered here. Also, this is where legal capacity comes in: being old enough and able enough to consent.

  2. It has to be specific - like a specific processing operation or a specific purpose. You need to be able to pick and choose, like opting into and out of different processes or purposes, and they should not be conditionally tied together.

  3. It has to be informed - you have to know and understand what you consent to.

  4. It must be unambiguous and clear - you have to take an active step (like ticking a box that isn't pre-ticked) so it is obvious you have read or seen it. There is still room for implicit consent ("If you want to be in the picture, get together here" and you move into the frame, thereby being active, but not literally saying anything).

The most well-known form is explicit consent, like ticking a box, pressing a button, entering information and sending it off, or writing "I hereby agree that...". For some things, like sensitive data (health data, biometric data, sexual orientation, political and ethnic data, etc.) under Article 9, we definitely need explicit consent. Ordering the agent to book a ticket for you means the AI facilitates a contract between you and another party, and that falls under Article 6(1)(b), which is a legal basis for processing personal data for "the performance of a contract". So either you agree directly to data processing (a), or the data processing might be necessary to fulfill your order (b), which is a contract you also agreed to (or the bot did for you!).

The AI agent isn't able to genuinely be aware of or perceive anything, while perception and understanding still play a large role in proving that the standards for valid consent are fulfilled under our current model of it. It'll be difficult to prove that you as a person have given consent freely if an electronic tool picks and chooses for you without you being aware of the circumstances and options. Technically, if you tell an agentic AI to search for the cheapest rosemary body lotion and buy it, you give consent to the acquisition of this item, not to the popup asking whether you'd like to share your data with 1337 of the shop's partners.

You'd first have to give the agent standards to follow for these popups and scenarios, but even then, that could fall under a sort of broad consent that is technically not considered valid because it is too unspecific... which is less of a problem for you, and more for the company on the other side! They need to demonstrate consent, meaning they need to have proof on hand that you consented. You can find this in Article 7(1) GDPR, related to the principle of accountability in Article 5(2) GDPR. An instruction to your agent to always agree to cookies, tracking, newsletters, policies and the like just to get the transaction over with is too unspecific and not informed, since it doesn't take the contents and circumstances of each vendor into account. You're not agreeing to this specific vendor's policy or partner list.
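To make the difference between a blanket "always agree" instruction and per-scenario standards concrete, here is a purely illustrative sketch of what such a policy could look like. Everything here is hypothetical (the `ConsentPolicy` class and its option names are made up, not from any real agent framework), and, as argued above, even a carefully configured policy like this would not automatically make the resulting consent "specific" or "informed" in the GDPR sense.

```python
# Hypothetical sketch: a per-request consent policy an agent could consult
# instead of following a blanket "always agree" instruction. All names and
# categories here are invented for illustration; no real framework is assumed.
from dataclasses import dataclass


@dataclass
class ConsentPolicy:
    # Defaults are deliberately restrictive: decline anything not allowed.
    allow_cookies: str = "essential_only"   # "essential_only" | "all" | "none"
    allow_newsletters: bool = False
    allow_partner_data_sharing: bool = False
    # Anything the policy cannot answer gets escalated to the human.
    escalate_unknown: bool = True

    def decide(self, request: str) -> str:
        """Return 'accept', 'decline', or 'ask_user' for a consent request."""
        if request == "essential_cookies":
            return "accept" if self.allow_cookies in ("essential_only", "all") else "decline"
        if request == "tracking_cookies":
            return "accept" if self.allow_cookies == "all" else "decline"
        if request == "newsletter":
            return "accept" if self.allow_newsletters else "decline"
        if request == "partner_data_sharing":
            return "accept" if self.allow_partner_data_sharing else "decline"
        # Unknown popup wording: don't guess on the user's behalf.
        return "ask_user" if self.escalate_unknown else "decline"


policy = ConsentPolicy()
print(policy.decide("tracking_cookies"))              # declined by default
print(policy.decide("share_data_with_1337_partners")) # unknown -> ask the human
```

Even in this sketch, the interesting cases are the unknown ones: the honest fallback is to interrupt and ask the user, which is exactly the friction these agents are supposed to remove.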

You might feel reminded of browser extensions that handle cookie banners for you automatically. In that case, at least, you installed the extension for that specific purpose, can choose how it handles decisions via its settings, can disable it on specific websites, and you still perceive the rest of the website, policies and options yourself. That is a bit different from not even being aware that the bot is currently skipping over it all and agreeing to it.

All of this leaves your agreement very ambiguous for the vendor - was it you, or the bot? What instruction did you give your bot? Did it ask you? Did it agree despite you disagreeing, or did you consent specifically to this? Was this an invalid broad consent? Hard to detect for now. Vendors will try to argue that if you had not wanted this, you should have configured the agent better, and that this mode of obtaining services carries risks you have accepted - false consent and the loss of data might be part of that. We'll see how consumer protections catch up to this!

Then there are also underage people who might use these agents one day, whether because the service is explicitly open to them, or because children are curious, see them online and set them up, or use the services their parents also use. It can be easy and accessible: think of something Alexa-esque where voice alone is enough to order.

The age of digital consent differs between member states, but ranges from 13 to 16 years old. Article 8 GDPR mandates extra standards for the consent of children, part of it being that there should be reasonable efforts to verify that consent is given or authorized by the legal guardian of the child.

That's interesting, because what does that look like in practice? What is the parent specifically consenting to - the fact that the AI agent acted on behalf of the child and that the child used the bot, or just the transaction itself? The bot could also consent on behalf of a child who has not reached the legal age of consent for these transactions, and verification methods... well... you see how well that is going right now, and how it endangers digital safety and privacy. We haven't figured this one out yet.

conclusion

Can AI agents technically navigate the correct sequence of steps to consent to something for you online? Sure. But there are several philosophical and legal pain points that undermine that consent.

Our current design choices, legal standards and understanding of consent are not designed for this level of consent abstraction, because they were still built in an era where direct human perception was the norm. The link between a user’s intent and legally valid consent becomes fragile when a bot with capacity for hallucination and rogue behavior acts as a middleman, producing something more akin to inferred or fabricated consent.

Until there is a good way to preserve user intent, ensure transparency, and operate within verifiable consent boundaries, the use of these agents is problematic and will spawn some interesting court cases. Consent should not be watered down into something weaker, less informed, and harder to attribute just to make space for this tech. We all, especially regulators, courts, NGOs, consumer protection groups and system designers, need to rethink how autonomy and accountability are enforced and proven online without succumbing to surveillance and loss of rights.

Reply via email

#2026 #data protection