ava's blog

using AI to inflate your ego

Personally, I’m open to retrying AI use cases every now and then. I’ve written about it before, and I freely share the fails and wins in chats I am part of.

For me, it’s no use endorsing it based on online hype, nor trying it once and fixing my opinion on that single experience. I’m expected to engage with it somewhat at work, and independently of that, I want to know what to expect from these tools so I can make better decisions and write better about the data protection impact they have. No use sticking my head in the sand when my desired career path is so heavily affected by it.

What bewilders me is how many people seem to use the tool (and the topic in general) to inflate their ego. I don’t just mean the literal sycophancy displayed in model outputs, but also the ego-stroking in the conversation around its use.

There’s a group of people who say they do very important, difficult and smart work every day thanks to AI, at a pace and in a way humans just can’t. The gist of it is:

“I am better than you because I use AI, and have more productive output than you, and do more difficult work. The fact I need AI to do it means my work is very demanding, very admirable and at the bleeding edge, and humans could never do it like this, or have an output this fast. The fact that you don't want or need to use AI for your work must mean it's low-value.”

Often, they remain very vague about what that work even is, so it’s hard to verify from the outside.

On the other hand, there are also people who do the inverse: they don’t plainly say that AI performed badly in their use cases and therefore isn’t useful to them; instead, it becomes a way to prove that their work is so difficult and demanding that AI could just never do it. Something like:

“Behold, I am god’s gift to research and problem solving, and the machine cannot beat my perfect brain. The fact that you are able to use AI for your work must mean you are stupid and your work is easy, since AI can, at best, only do stupid and easy work.”

Both of these groups then make sweeping generalizations of what other people should do.

The former group tends to warn that “you’ll get left behind!”. It’s such a pathetic cope. It looks like it comes from people who were never the best at any skill in their environment, but who now think they can finally have an edge by adopting early and shitting out as much as possible in the quest to "learn" or hit some kind of jackpot and attract the right eyes. They have to cling to the fantasy that the naysayers will somehow be at a disadvantage, just so they can feel justified and special.

But tell me, were you left behind when you started using Excel late? Was it bad that you only learned office software when you needed it? Were you not able to catch up? Chances are it makes no difference, and with some effort and a workshop or a few YouTube videos, you can use the tools equally well. In the case of LLMs, using them has never been easier. You can just use plain, natural language! No submenus, settings, buttons, search operators and the like to remember. They’re designed to be easy.

Prompt engineering is and always was a scam. There are no secret incantations you only learn in a 500 Euro class. Anyone can use the tool, learn, and refine. It’s embarrassing to pretend otherwise. Coworkers who have trouble with basic Outlook and Word do surprisingly well with ChatGPT. And why wouldn’t they? They have spoken a natural language all their lives and have probably trained multiple new employees over their careers; they know how to explain standards, expectations, and tasks, whether to a human or a tool.

The other group I mentioned is so weirdly dismissive based on their attempts at very niche, still-unstable use cases. I understand criticism that’s about directly advertised claims by the companies that aren’t fulfilled, or about commonly seen use cases online that just don’t reliably work; I wrote about the same thing in the past, and about how the free models available are not capable enough to do many of the advertised things we are inundated with.

What I don’t understand is thinking

“The LLM couldn’t generate a PDF with all my branding included and a table with this and that and accurate graphs and footnotes with sources. That means it’s not even useful to create an email draft, or for grandma’s grocery shopping list, and you shouldn’t use it for a motivational letter.”

Why can’t there be nuance? It obviously sucks bad for some complex stuff, but it really hits the corporate bullshit text creation just right.

Don’t tell me I don’t get it - I recently tried out what it would recommend for a business card and it said I should use a transparent plastic card to signal transparency in my work. Of course I see how stupid it can be, even for some simple stuff. I get how it could royally screw up grandma's shopping list.

But to me, both of the groups identified above also ignore that most people simply aren’t in these high-stakes positions, interested in these hobbies, or working these jobs.

Many have no need or interest in vibecoding some custom solution for their smart home, or a family app that automatically rewards the kids’ homework time with gaming time, just to sell it to VCs or make a SaaS out of it, and they aren’t researchers or problem solvers coding complicated stuff or writing the next bleeding-edge paper in the field. They aren't hustlers scared of being outpaced by the competition.

Many people on this planet are taxi and bus drivers, nurses, kindergarten teachers, cleaners, cashiers, baristas, warehouse workers, construction workers, and the like. Or they’re doing a boring secretarial job that consists of writing e-mails and sending out meeting details with a few clicks, using templates or pre-generated e-mails.

They’re boomers or parents working part-time who aren’t that good with tech or don’t need much of it and pass office time clicking a couple of buttons. What are you optimizing for, when you realistically only work about four of your eight hours a day and it’s the easiest work ever, just following protocol? They sure as hell aren’t interested in automating themselves out of a job, and they don’t wanna do anything else or take on something more demanding.

They wanna earn money with the least amount of effort and the least change to their workflow, and they don’t particularly care for computers or hustle. But if they can get out of some annoying text-based stuff, like some part of their e-mail, maybe they’ll use it. And that's fine! They shouldn't be told by some AI fans that not letting AI take over everything makes them a redundant NPC with nothing to offer, or told by AI haters that doing easy work that AI can actually somewhat do means they're doing worthless work.

The funny thing is: their jobs are often just easy enough that it’s faster and more foolproof to do the work themselves than to attempt a vibecoded or generated solution, while also containing many of the use cases that work most reliably at this point and can actually be recommended.

For example: writing a short email thanking your boss for something is faster to do yourself than to type out the prompt; but asking an LLM to make your angry email disagreeing with your superior sound nicer and more diplomatic actually works. My coworker can’t vibecode a solution that lets AI fill in text fields in the database automatically, but she can ask ChatGPT how to hide cells in Excel (never mind that a search engine could also do this).

I definitely am in the “no use, better done quickly myself” boat with the core part of my job. So I just don’t understand why so many people need to brag that they’re moving the needle so much in their daily work, either by using or not using AI, while subtly shitting on people whose jobs are either replaceable with AI or aren’t fit for AI use, which I’d allege covers many of them!

It can’t be that everyone has such an unusual, high-impact knowledge-worker job where AI is either the magic enabler or not capable enough. I mean, for fuck’s sake, it seems like most of the people posting this stuff are students, trainees, junior devs, or in some vague office job. It’s like people use this controversial topic to present themselves as less expendable and more important than they actually are.

ā˜ļøā˜ļøā˜ļø

There’s also a group of people who won’t intellectually engage with the topic at all because they just do what everyone else does.

Their personal podcast idols rave about AI? Better give it all the data, put together a self-improvement plan, and let it talk you through some journaling prompts. They don’t wanna discuss the bad sides because the people they admire love it. In my experience, they’re also very easily impressed with shoddy work just because it’s written in a charismatic way. “This was groundbreaking”, and it’s something a Tumblr girl would have posted at age 14.

All your friends hate AI? Better not touch it, out of fear of social repercussions. They can’t talk with you about the bullshit it did the last time they tried it, or about ethical, privacy, or environmental concerns, because they never actually cared to develop an opinion beyond not wanting to be hated by their circle. That’s boring, people-pleaser behavior. I think you look silly if you have no deeper reason not to use something, no interesting arguments.

ā˜ļøā˜ļøā˜ļø

A tangent about arguments: I no longer care about whether images created by AI are good or bad and I don’t care about water or electricity usage.

That is because both the capabilities and the resource usage can and likely will improve, and to me they say more about missing regulation and a shitty government than about the tech itself; that debate is better had in a context removed from the actual core of the tool, as a question of how the industry needs to be regulated. I want to be more precise about what is actually the fault of the tool versus the fault of the region many of these services are located in, and its political problems.

If you claim to hate the tool, but only because it makes soulless images, what happens when it starts making better ones? I'm sure you won't suddenly have no concerns! You usually hate the tech for other reasons, so we should focus on those better arguments instead. I think it’s much more interesting to debate whether it is art or not, to talk about responsibility in war or accidents, or to focus on the privacy aspect, the intellectual theft, the e-waste, the job market effects, and so on.

Additionally, if we truly focus on electricity and water use (irrespective of regulation, placement, and the other factors that cause droughts and rising prices thanks to data centers), I think we would quickly have to argue against the terabytes of useless bullshit we all hurl onto the net, which gets stored for ages, takes up space, and is another reason for more data centers and for people’s increased use of their devices. Even your well-meaning blog post about enjoying a good sandwich counts, or your favorite cat video.

I don’t want to discuss an intellectual bar or importance metric that online content has to clear before it can be uploaded, for the sake of our precious resources, because it would hit most of us, and it would hit art and marginalized voices. If we haven’t ever seriously discussed looking critically at each search engine query, each video we watch, etc. as something potentially excessive that uses more resources than its usefulness justifies, I don’t know if this is the right way to start.

I think for many, it’s only okay to start that conversation because it’s about something they don’t (yet?) use. It’s hypocritical, as many are not ready to give up their other online consumption habits for resource reasons either; they don’t even stop them when their mental health and privacy are harmed. 🤔

Lastly, there is some weird ego stuff going on about talking or not talking about AI.

“You hate AI, yet you talk about it. Curious!”

“Why do you wanna focus on something negative?”

“The more you talk about it, the more you speak it into existence.”

I don’t need to speak it into existence; billion-dollar industries funnel money into the bubble and force it into every device, software, and ad. Don’t be disingenuous. Every industry, or art form (if you believe AI art is art), needs its critics. And as AI fans love to bring up, every new invention has had its moral panic, so if that also applies here, why are you mad?

On the other side: Boohooo, you avoid the word "AI" to “not give it more power”; fine, have fun self-censoring for virtue signalling reasons, Mr. I-did-it-with-all-ten-fingers-and-a-few-braincells.

I will keep writing about it, because everywhere I look, I just see people exploiting both ends of the spectrum for views and money, making extreme claims to get engagement. Whoever screams the loudest and makes the most absolute judgments is seen as the most correct, after all.

I will write about the AI Act, about labeling requirements, and about more of the spectacular failures and okay-ish results I’ve had, and for that I will have to name the beast. And I don't care to read the weird ego-boosting shit swirling around elsewhere.

Reply via email
Published

#2026 #tech