false expectations
I am a little mad at the false expectations that AI hype and marketing have fostered. Sure, that’s marketing for ya, nothing new, right? But I’m bothered by the way this impacts my work.
My boss has started a project of optimizing the process of generalizing certain documents, meaning: there’s an original, and we want a copy with no corporate design, no brand names, and other specifics removed and, where needed, replaced with placeholders.1 Additionally, some minor edits are supposed to be inserted as well. Of course, the hype has infiltrated my place of employment too, so instead of this being done by hand like it always was (much of it via finding and replacing in Word), they want AI to do it in one fell swoop instead of a human doing it for weeks, as the documents are not that small and other tasks take precedence.
Now here’s the thing: We all have grown up with a workplace where, when computers are present, everything you needed was usually already installed thanks to your IT department. They had preselected and narrowed down your choice, maybe even eliminated it. Especially the older generation, who have used computers in a work context for far longer than I have, just knew: This is for mail, this is for spreadsheets, this is for documents, this is for slides. One thing did one thing. And it worked and had all the features needed.
Now with AI, there are so many different models. We have four internal models, we are allowed free access to ChatGPT, some have been granted Enterprise licenses, and employees have been sending each other dubious links to websites that aggregate AI models, and playing with Perplexity AI without express permission. A data catastrophe waiting to happen2, but not the point.
This is overwhelming people doing simple office jobs that rely on tech but have no interest in it. It’s as if you gave people three different Excel icons on their desktop and said: “This one is better at calculating but doesn’t have colors, this one has colors but nothing else, and this one lets you filter the columns.” That would be horrible, and it is horrible now with AI. So many at work are confused about what to use for which task now, and at the same time, of course, every model is advertised as a generalist that can do just about everything office-wise, that’ll make you more productive and do tedious tasks for you. Which is why my boss was convinced this would work and would be the perfect AI use case. Just remove some elements and replace some words!
Office LLM training courses did nothing to help this, because the trainings that were offered had pre-planned prompts and tasks showing the Enterprise version of ChatGPT. To me, that is not training preparing you for the actual reality of working with the limited free model, but a marketing event disguised as training. But okay. Give everyone Enterprise, and it still might work out. The problem is: My employer doesn’t want to invest much in these licenses, so aside from very few people, no one has access and won’t get it - resulting in hundreds of people getting training on things they won’t be able to do right now.
With all that plus the hype outside of work, how can you tell people it can’t even do Word’s find and replace? That while it can give you a .docx output, it can’t reliably highlight the changes it made in yellow? How it can’t remove specific corporate design because it can’t detect what’s in the images in the document? How it fucks up the conversion if the original is a PDF and not a Word doc and produces something utterly unreadable? And how the free model gives you like 1-2 attempts at reading documents you uploaded before you’re out of attempts for the day?
Well, you let them find out, I guess. Twice last week, I was in a Teams call with my boss, 1 on 1, watching her screenshare her prompting, coaching her through it, giving tips, warning her of the limitations. And the tech is just not there yet. She was disappointed, to say the least - you’re told to cram AI into everything and that it would be good for tedious text-based tasks, and now it can’t do them. It’s too limited: it leaves out things you didn’t ask to be removed, it summarizes even when you asked it to keep the full text with only the minor edits applied, it’ll highlight the entire paragraph in yellow if it changed one word instead of just that word, and more. There was always something wrong, and not enough attempts left in the day to fix it. It may be better in a paid version, but that is meaningless to us right now as we won’t get one.
Of course this project was already somewhat paraded around internally and paperwork was handed in about it, and now it’s dead before it really began. It is baffling. You could say that maybe, people should have tested before declaring this a whole project with a paper trail, and while I agree, I am not surprised at all that this didn’t take place. If you are coming from other work software, your expectations are likely incompatible with AI in terms of reliability and more. I already touched on this in LLM prompt superstitions, but someone else’s prompts might not work for you or vice versa and nothing can reliably be reproduced. You have to be somewhat lucky - at least that’s what it feels like, and because it is so uncertain, we get little prompt superstitions.
My coworkers and boss want and need that reliability, and expected the tool to work. Just as a button in Excel would (99% of the time) do what it says, they want one prompt that will work on all documents and transform them the same way each time. No leeway. No happy accidents. And ideally, no or very little verification necessary after, just as you don’t have to verify Excel outputs and formulas constantly.
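That kind of determinism is exactly what plain scripting delivers for the find-and-replace half of the task. A minimal sketch, assuming the text has already been extracted from the document (the brand names and placeholder map here are made up, not from our real documents):

```python
import re

# Hypothetical replacement map: brand names to neutral placeholders.
# In a real project this would be built from the actual documents.
REPLACEMENTS = {
    "ExampleCorp": "[COMPANY]",
    "ExampleSuite": "[PRODUCT]",
}

def generalize(text: str) -> str:
    """Apply every replacement as a whole-word substitution, identically every run."""
    for brand, placeholder in REPLACEMENTS.items():
        text = re.sub(rf"\b{re.escape(brand)}\b", placeholder, text)
    return text

print(generalize("ExampleCorp licenses ExampleSuite to clients."))
# → [COMPANY] licenses [PRODUCT] to clients.
```

The same mapping runs identically on every document, so verification means checking the map once rather than proofreading every output - the reliability profile my coworkers were expecting all along.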
Of course you’ll read this and go “That’s just not what LLMs are good for,” and you’re right! I agree! But that is not what anyone is told in a work context where I am. And I can’t be the only one experiencing that.
In the end, not only did this cost us at least 5 hours across different days (and my boss likely even more due to the previously mentioned paperwork), but nothing of value was created or learned, and even if AI were incorporated into the task, it would create more things to fix in the documents than already exist.
To cheer her up, I said we can wait for a possible Enterprise license or technological progress of the free model, or split the task into smaller subtasks, one of which could possibly be done by AI. Thankfully it seems like she’s willing to just drop it for now.
Gotta say, asking for more projects is not working in my favor recently. One was this (as an AI sceptic… wohoo), and it is now dead, and the other project, migrating a Microsoft Access database, is now somewhat dead as well due to other reasons out of my hands. Now I’m back to square one. Oh well, I am busy in August anyway…
Published 03 Aug, 2025
Publicly available documents with no personal data and in cooperation with the creator.↩
Most people at my place of employment are not tech people. Most are parents working part time who do some data entry and write emails and who don’t really use computers outside of work. They aren’t interested in tech and have a hard time understanding where specific software begins and ends, or how something works under the hood. I think this is important to mention on the side as it gives a better picture of what you can expect them to know or understand. For them, using AI is sold as being like working with Excel or Word, where nothing bad can happen and company secrets aren’t leaked.↩