ava's blog

the flaws of digital consent management

Following up on my agentic consent piece, a reader (Shugo Nozaki) shared some interesting perspectives on human consent (both by email and in a post) that I felt were worth exploring and discussing!

He pointed out that while our current model of consent still relies on direct human perception and some understanding of what is being agreed to, it is already quite fragile in most areas. He rightly points out that the idea of human consent resting on real understanding is a polite fiction, as most consent flows are not really designed to be read. They are designed to get users past the gate.
So he asks: How much of the current standard are humans actually meeting today?

The reality is definitely that companies have squandered our trust and curiosity with the way consent mechanisms, Privacy Policies, Terms of Service etc. have been designed. Cookie banners keep popping up and sometimes don't seem to work correctly, other consent forms make it as annoying as possible to opt out, and any lengthier text is full of dry legalese. It has caused quite the consent fatigue, and for what?

Unfortunately, the wrong things seem to be incentivized: Agreement to data processing is strategically beneficial to companies, so making it easy not to consent is not in their best interest. And the following is just a hypothesis of mine, but I firmly believe that companies have used the little leeway they were given when implementing privacy law requirements to make things an absolute hassle, in the hopes that the laws would be seen as a failed experiment and users would complain until they were abolished again.

When we actually read the laws, recitals, recommendations by organizations etc., we quickly see that we do not have to live with these unpleasant implementations; yet, companies get to point at laws for a job done badly and go "They made us do it!". Multi-layered approaches1 have been acknowledged and recommended for a while now, but implementations of them are still rare. For many companies, these texts seem to be a one-time investment, never to be revisited, instead of being the living documents they should be. "If it works and we fulfill the requirement, why change it?"

At the same time, laypeople unfamiliar with law have to work with heuristics and take surface characteristics as signs of quality when it comes to legal texts like PP's, ToS and more: If it's very long, it must be complete and enough effort must have been invested, and if it has a lot of complicated jargon, it must be professional and correct. So that is what companies want to show, and weirdly, what some very invested users are reassured by. A short, casual version might seem incomplete, as if the company doesn't take privacy seriously.

The money side is the same: What do law firms and their clients feel more comfortable charging and paying a lot of money for - the short, casual-toned text that people will understand, or the huge, dry, difficult-to-read one that comes across better? How might companies feel if the person they hired to write these produces something that sounds sloppier than what their competitor has? Will it just come off as unprofessional to customers?

Small businesses needing to save money are usually confronted with the question: Why hire anyone to make it more understandable and engaging to read while fulfilling the law, if you can just copy a trusted template online and fill in the blanks?

No one wants to risk legal problems over a text that does not cover enough, so understandably, they resort to the most intense-sounding texts, and they let consent and cookies be implemented and handled by big consent management companies because otherwise, it can be really difficult. But those companies sell the promise of increasing agreement numbers, so the service is designed around that metric.

These wrong incentives and constraints have caused lost trust that is hard, or in some cases impossible, to get back. Most people weren't born yesterday, and they have lived through years of shitty implementations. What would convince them that it is worth reading, or that the next one will be better?

Consent management in general offloads a lot of data management onto individuals who are seldom correctly informed. On one hand, choice is what we want; on the other hand, it is also willfully ignorant of the collective issues. For now, the stance is: Giving the user the option to read and agree or disagree is enough. We cannot force anyone to do anything, and if they choose to forego information or agree just to make the process go faster because they are tired, then we have to accept that. Having a choice is also about the option to make a bad decision, or one you regret later, or wouldn't have made if you were your best self.

Fittingly, Shugo Nozaki also poses the following idea in email:

If the user policy is explicit enough, an agent may apply it with a kind of rule-following integrity that tired or distracted humans often fail to maintain. [...] How [can] we represent a user’s intent, boundaries, and escalation rules clearly enough for an agent to act on them?

He brings up the option of a machine-readable user policy: a set of constraints that defines what an agent may accept, must reject, or should bring back to the user.
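To make the idea concrete, here is a purely illustrative sketch of what such a policy could look like as a data structure. All names here (UserPolicy, decide, the category strings) are my own assumptions, not something from the email or any existing standard:

```python
# Hypothetical sketch of a machine-readable user policy.
# The three outcomes mirror the idea above: accept, reject, or
# escalate ("ask_user") anything the policy doesn't cover.
from dataclasses import dataclass, field

@dataclass
class UserPolicy:
    accept: set = field(default_factory=set)  # purposes the agent may accept on its own
    reject: set = field(default_factory=set)  # purposes the agent must always refuse

    def decide(self, purpose: str) -> str:
        if purpose in self.accept:
            return "accept"
        if purpose in self.reject:
            return "reject"
        # Everything else gets brought back to the user.
        return "ask_user"

policy = UserPolicy(
    accept={"strictly_necessary"},
    reject={"ad_personalization", "data_sale"},
)

print(policy.decide("strictly_necessary"))  # accept
print(policy.decide("analytics"))           # ask_user
```

The interesting design question is the last branch: anything the constraints don't explicitly cover defaults to escalation rather than to a silent yes or no.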

We'll likely have to move in that direction, but it still brings legal challenges, as broad consent isn't valid; consent needs to be granular and specific. A user could set an agent to always agree to cookies via personalization/custom instructions set as userpolicy.md, for example, but as they did not get to consent to each specific situation (website, their partners, their terms), its worth is questionable, and it's also difficult for companies to prove in court. Ideally, an agent would have to ask the user on the first "visit" to a website how to proceed for current and further uses of that website. So the more the agent gets around, the less this needs to happen.
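The ask-once-per-website idea could be sketched as a small decision cache: escalate on first contact, then reuse the stored answer. Again, everything here is illustrative (the class, the callback, the string answers are my own assumptions), not a real protocol:

```python
# Hypothetical sketch: the agent escalates to the human on the first
# "visit" to a site, then remembers the answer for further uses.

class ConsentMemory:
    def __init__(self, ask_user):
        self.decisions = {}       # site -> "accept" | "reject"
        self.ask_user = ask_user  # callback that brings the question back to the human

    def decide(self, site: str) -> str:
        if site not in self.decisions:
            # First visit: ask the user once and store their answer.
            self.decisions[site] = self.ask_user(site)
        return self.decisions[site]

# Stand-in for a real user prompt, so we can see how often it fires.
asked = []
def fake_user(site):
    asked.append(site)
    return "reject"

memory = ConsentMemory(fake_user)
memory.decide("example.org")
memory.decide("example.org")
print(asked)  # the user was only asked once: ['example.org']
```

This is exactly the "the more the agent gets around, the less this needs to happen" dynamic: the cache fills up, escalations taper off. Of course, it sidesteps the hard part, which is what happens when the site's terms change after the answer was stored.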

From a design perspective, even just asking the user to set up a policy themselves can be time-consuming, and I assume not quite feasible for people who are not embedded in the law context or very passionate about privacy. There is too little context and information about why a decision matters, what could happen, what is tracked, and what different kinds of situations could come up. I consider it beyond the realistic use case that the average user would introduce, all upfront, different categories of consent based on, for example, whether it is a blog or a shopping website, whether the banner says 4 partners or 1500, and other things that would enable more granular consent. An agent could, on first setup, lead a user through it, but that could also be seen as annoying and skipped.

Issues around the modalities of being asked and informed still remain: Do we trust the agent to relay information accurately? Will there be hidden instructions to influence the bot in what it tells the user? Would this approach really be less annoying than the existing method, when basically everything needs to be brought back for the user to decide at first? How will we reliably handle agents informing the user of changes in policies and the like?

Yes, ideally, agents and other means could handle consent management better than a fatigued and annoyed human. But what counts primarily for the laws around data processing consent is that without a middleman, there is no doubt that a user directly had a choice in taking notice and the chance to inform themselves, even if they chose not to; it often matters little whether they actually read and understood, as that cannot be proven or checked (and again, freedom to make bad choices). The waters get muddier when there is a translator in the middle that can sway you or skip it entirely, with no directly presented option. I'm sure that on the other side, companies will not be interested in having their consent workflow and information mangled by a bot either. It will also be interesting to see how worthwhile browsing data will still be if the metrics track the behavior of bots, not humans.

I'll hopefully have more to say about this soon, as I will be at a conference which has some sessions about consent management in the age of AI that I will attend! :)

Reply via email
Published

  1. In which the first text version a user sees is in easy language, casual and short, and if they want to, they can view a lengthier, more detailed version, down to another layer where the heavy legalese texts we are used to pop up.

#2026 #data protection