ava's blog

fast tech progress and risk/benefit analysis

Something I find scary and often overlooked in the conversation about social media is how quickly the risks of posting content can change: within a few years they can turn into something you would never have agreed to and could not have seen coming, and by then it's too late.

What I mean is: popular YouTubers, Twitch livestreamers, Instagram influencers and other creators probably couldn't have foreseen AI and what it can do now. They (hopefully) knew at least some of the risks of putting themselves out there back before all this - getting doxxed, being harassed, content aging badly, among other things - and they accepted those risks anyway. But who in their position could have foreseen that one day all of their content could and would be used to train AI, making it easy to impersonate them and create content with their likeness, even offensive content? Not just crawlers from the bigger companies, but fans saving videos and images as well, using them to train openly available models to create porn of that creator. Their content is already out there, spanning years, and cannot be taken back. It no longer matters whether they are okay with it; ending their career or deleting their accounts wouldn't change much.

I remember thinking about the risks of using messengers like Discord: all those server messages and DMs sent over the years. It was easy to think, "Well, most people aren't interesting enough for anyone to comb through millions of their messages, and the time required would be ridiculous." But as we're seeing now, you'll likely underestimate what will be possible in the near future. Famous people didn't foresee others digging through their old internet content, and nowadays it's easy to feed an AI something to summarize - why not the same data you'd get from a GDPR request? Of course there can be mistakes and hallucinations, but the time sink of a huge number of messages is no longer an obstacle; they can simply be fed in and analyzed very quickly. We're learning that we cannot feel safe behind today's limits and processes; we have to think ahead and anticipate the ways technological progress will lift those limits so that they no longer protect us.

All that is happening now can happen again in the future: something completely unforeseen that would have changed your willingness to use certain services or put yourself out there, had you known about it. And it could be something that seems completely unhinged and unlikely to us right now, just as what we're dealing with today would have sounded like a lie in 2012. We have always relied on Terms of Service and promises from companies that don't mean much; look at all the drama around TOS being changed semi-quietly to include dystopian-sounding things and AI clauses. We can't trust the TOS to stay fair and protect us, either.

Published 21 Aug, 2024, last edited 4 weeks ago

#social media