AI Slop in Translation?
- nicholasbachant-la
- March 30
- 3 min read
AI is the talk of the town: everyone and their grandmother (and their dog) is talking about it.
AI can sort out data, write a news article, sing, make short videos, make you fall in love with it, make you go crazy all alone in the dark in front of a window of light that pulls you under: this thing is incredible.
In translation and in linguistics in general, AI is also omnipresent and, naturally, I feel obligated to talk about it, because people seem to be missing the point: AI is there, but is it really THAT good?
Let’s talk about generative AI here, meaning AI that creates content. In a sense, translation, copywriting and transcreation are content, just like a video is content.
But can generative AI grab your attention for a whole hour? A whole ten minutes? A whole minute? I don’t know about you, but I get tired of AI videos pretty quickly. I saw one on YouTube once: something like a fat kid standing on a glass bridge, holding a large boulder over his head, dropping it, the whole structure starting to break, people seeming to walk in mid-air instead of falling (because AI doesn’t understand physics), and then the whole bridge collapsing.
Gee, I got bored just writing the last paragraph; I hope you are still reading. If not, I am sorry, I won’t do it again.
We could call Garçon gras tenant un rocher et faisant écrouler un pont de verre AI slop. AI slop is poor content generated by AI. That being said, did you know that AI slop also exists in translation?
In translation, I would call AI slop any translated copy produced with an over-reliance on machine translation.
What is Machine Translation?
In a nutshell, Machine Translation (MT) automatically translates text from one language (the source language) to another (the target language). It relies on very large databases of texts from around the Internet: many bilingual websites have been mined for their data to build those huge databases.
MT has been around in one form or another for decades, but the modern neural kind only really took off around the mid-2010s. It’s good, not great, and sometimes it does weird things.
Sometimes it translates literally, one word at a time: it translates words, not ideas and feelings. The sentences don’t flow, and the text becomes difficult to understand, hard to make sense of.
Sometimes it adds words that aren’t supposed to be there: it hallucinates, mainly because it doesn’t fully understand the context underlying the copy.
Sometimes it produces unbelievably long, convoluted sentences: the text becomes hard to read and hard to understand, it restates ideas that were clear the first time, and the copy becomes repetitive.
Well, let’s put it plainly: Machine Translation may very well sound like a machine. Or it may not: maybe it will turn out great and there won’t be any problem. But maybe your packaging, survey, research notes, news article or patent application will feel like Garçon gras tenant un rocher et faisant écrouler un pont de verre: are you really going to put people, your readers and users, through that?
Yesterday, I was reviewing a 6,000-word survey for one of my regular clients. It was supposed to be a human translation, meaning a human was supposed to write/translate the survey from scratch. Of course, the human translator used Machine Translation (well, not entirely, but for about 50% of the sentences), and boy was it boring, long and tedious to review, because, you know, the translation was too literal, too convoluted, too long to read. As a checker, I always strive to fix the mistakes made when someone overuses Machine Translation, but I can’t start over.
Realistically, respondents could spend an hour filling out that survey. Imagine that: having to spend an hour filling out a survey that is the equivalent of Garçon gras tenant un rocher et faisant écrouler un pont de verre.