
AI's current worrying state and its missed opportunities
Today, I'd like to talk about a topic that has become omnipresent over the past year: the rise of generative AI. I just want to share some thoughts that usually aren't the main focus of the discussion, so I won't delve into all the details of what the technology is or how it works. For that, I'll include some links.
Why I think generative AI is bad and harmful
There are a few obvious reasons. Art theft, privacy issues, ecological disasters, cheap labor. But I think there are other, more fundamental reasons to dislike this technology, or rather its current use, which I have not seen discussed as much. It mostly comes down to the fact that those problems are either treated as if they will all soon be resolved, or not seen as worth solving because AI dominance is viewed as inevitable. I'd like to step back a bit and think about how generative AIs like ChatGPT are a bad use of resources and technology. I highly suggest starting by reading the articles I've linked above to get some basic context.
I'll start this section by listing some common uses for generative AI and explaining why I think it's actively harmful.
Making search worse
One of the core uses of generative AI like ChatGPT is search. Sure, Google's search engine has deteriorated considerably over the years (partly because of AI-generated content!). But is that a reason to replace it with an even worse solution? ChatGPT will give you an answer, that's for sure. Is it right? Who knows. Can you check its validity? Absolutely not. Reading Wikipedia and taking everything written there as the truth is naive at best, but at least dozens of people regularly take on the task of verifying its content, and there are sources at the bottom that you can check, which are of course mandatory. An LLM will just give you the most probable answer, not the true one. Hence, to use it effectively, you need to already be an expert on the topic you're asking about, which defeats the purpose. The very way it works is by generating text token by token, based on probabilities. This has two issues: there is no database of verified information behind it, and common but "bad" (read: racist, misogynist, etc.) texts will be prioritized.
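To make the "probable, not true" point concrete, here is a minimal sketch of what next-token generation boils down to. The probabilities and tokens are entirely made up for illustration; a real model works over tens of thousands of tokens and a learned neural network, but the principle is the same: it samples what is likely, with no lookup into any store of verified facts.

```python
import random

# Toy illustration: for a given context, a language model only knows a
# probability distribution over possible next tokens. It has no notion of
# which continuation is *true*, only which is statistically likely.
# These probabilities are invented for the example.
next_token_probs = {
    "Paris": 0.62,      # frequent in the training data, so highly probable
    "Lyon": 0.20,
    "Marseille": 0.15,
    "Atlantis": 0.03,   # rare but not impossible: hallucinations live here
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick the next token at random, weighted by probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of France is", sample_next_token(next_token_probs))
```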
Searching for information on the internet has been nothing short of a revolution. There have always been issues with the veracity of any given piece of information, but a search engine at least points you to sources. ChatGPT gives you an answer, not a list of sources. Removing any curated aspect (which is partly the point of Wikipedia and other websites) just because we want to automate it is, in my opinion, dangerous. Biases that are already present get reinforced. Data scraping lags behind real-world news and up-to-date research, and not all information will be scraped. In fact, a lot of references on Wikipedia come from books or other media not available on the web. That data is probably not in generative AI datasets.
Humans debate, change positions, and information is not something absolute and fixed. Giving Google the sole power to decide what is worth seeing in a search is already an issue: corporations should not have this kind of power. But when we go further and give up access to the sources altogether, settling for a ready-made answer (that might be wrong!), we're simply refusing to exercise our own power. This might sound like a philosophical issue, but it's quite concrete. We should not give up our agency in exchange for a bit of saved time. As I will explain later, this is a problem even for the foundations of our democracies.
Making content less interesting
One other important aspect of AI is generating content. From what I've seen, it's generally... not great. Let me give some examples.
A cover letter. This is definitely one of the best applications of an LLM, since the task is generating boring, useless, predictable content. But guess what? You could just take a template, spend 30 seconds customizing it, and voilĂ ! No carbon-intensive operations involved.
A picture for an ad. Yeah, like most ads, it needs to be bland, to speak to most people without ever being creative. Bland content, for bland brands selling bland products. Stock footage does exactly the same thing, without people being robbed of their work by AI.
Bad, generic, uninspired content for social networks. The sense of community pushes users to all make the same thing together (I'm talking about those starter packs or those Ghibli pictures). I don't get how inspiring or imaginative this is. People just generate it, put it online, and that's it. Where is the thrill of creation in this process? Are we so dull that we are satisfied by the very thing brands have been doing as marketing from day one? On the contrary, some artists have been drawing their own starter packs in response to this: every drawing is different, unique, and personal. I'd argue this is what makes us human. A growing sense of community, built by people doing their own little things. I think this is how we can thrive.
And if you're thinking that it is fun to compare those generated images on a common theme, yes, I get you. That's why such tools have existed for years: tools like Picrew, made by artists, on purpose. And you have to actually do something with these tools. That is what makes it interesting again: agency. Perhaps you'll start by recreating yourself in the artist's style, but then you will have new ideas, and go on from there.
You can tell me that AI is getting better and better at this. And while that is partly true, its limits are arbitrary, it is built on theft, the agency stays with the model rather than with you (there are no rules you can explicitly and cleverly control), and it consumes far more resources.
I've also seen people ask ChatGPT whether what they've written is good or not. Language models tend to make things dull and lifeless, and as humans, what is interesting about us is our individuality. Do we want to trade it for the sum of everything that has already been written or drawn?
Most of those things are either useless or actively harmful for our wellbeing. Before using an AI "because it is a dull and boring task" or "because everyone's doing it", I think we should collectively think about what we're doing and why. Too many things are being done for no sensible reason at all.
I'd also like to address the argument that refining a prompt is itself art, and that one can get interesting results with iteration and work. To me, this is not true (and it is covered by the article on art theft I linked in the introduction). The first obvious reason is that you have no real agency over the result. You can always ask for more precise elements in the prompt, but you will never have real control over what you get. You do not control the neural network, nor do you have access to the data the AI has been trained on. Ultimately, you're just using art made by a computer. In some philosophical sense, you could even argue that you're stealing the program's art, since it has been "inspired", in its own way, by data. Either way, you have not made anything yourself.
Going from there, there are some LLMs that users run locally, with their own data. I will admit that I have not thought much about this aspect. However, I'm pretty sure the models used were trained on stolen content, so my point still stands. If, perhaps, in the future, we were able to use specific content, with consent (like our own creations), to draw some inspiration from... I'd say, perhaps, there could be some interest. But I'd argue that it will always, at some point, be based on software that was trained on theft, and that makes it ethically unacceptable.
Destroying the very existence of content
Today, the web is actively spammed with AI-generated nonsense. It is becoming harder and harder to do research based on real human text, because the share of human-made content keeps shrinking.
Replacing human text with generated text is, I think, actively harmful, especially in the current social and political landscape. It means that more and more of the content we consume is made by a biased model that has no motive to improve. And the people who own those models are actively working against our democracies: big entities like X and Facebook are increasingly becoming a threat to them. Generative AI, owned by corporations owned by billionaires with harmful agendas, will thus, of course, have immense power over our lives. And the consequences, which are already bad, will be catastrophic.
What about summaries generated by AI? We all lack time, and we cannot read all the news, books or emails we want. Surely this could be good, right? Well, I absolutely disagree. Articles exist for a reason. They develop arguments, our minds work while reading them, we build our own reflections and opinions. We can compare with other articles, and that stimulates our brains. With AI, its own biased dataset does that work instead of us. Sure, we're all biased, but again, we have agency over ourselves and can decide to improve. AI doesn't have that. And of course, there are always hallucination issues. There has been quite a backlash, with outright fake news made up by AI trying to summarize articles. You could argue that this will be resolved over time. I'd argue that my first point still stands: sure, there's a benefit to having excerpts from the news, but not through heavily biased AIs drawing on sources that are already biased. It's just diluting dilution.
And don't get me started on automating text messages to your contacts, as is being pushed in social media apps. As social animals, we thrive on interaction, and we should embrace it, not automate it.
A "good" use of generative AI?
As counterintuitive as it may sound, I think shitposting is the best use of generative AI, if not the only "worthwhile" one. For instance, popular Twitch streamer DougDoug regularly uses AI in his streams to make amusing content. Since AI regularly hallucinates, it makes for great entertainment. If you haven't seen it, I can only recommend the video of a Pajama Sam AI trying to beat its own game, hallucinating more and more, with DougDoug regularly having to "kill" it to reset its distorted AI "mind". This still raises the ethical concerns covered before, but it is quite fun.
However, if AI were to become any good in the future, this use would become less interesting. It's the inherent limitations of the technology in its current state that make it fun.
Good AI and what could have been
AI is present in a lot of products, has been for decades, and will not disappear. And I do not wish for its disappearance. My issues lie specifically with generative AI, and more precisely with how it's being used by big tech companies, trained on stolen data, annotated by exploited workers in the Global South. In this part, I will explore some interesting uses of AI.
The first thing to note is that I am highly critical of do-it-all generative AI applications; I think that's the main issue. I must also say that I don't know how power-intensive these other solutions are, or to what extent they rely on stolen data. I will address why AI's development has taken its current direction later on; this part focuses on how the current technology is useful, or could be more useful.
Pattern recognition
AI is good at pattern recognition. It's even what generative AI relies on to produce its results. I'd argue there are more useful ways to apply it, some of which are already widely used in current software.
Take for instance the magic wand tool in Photoshop. It works through pattern recognition, allowing it to detect elements in the picture. The same goes for Lightroom: there's a feature to select subjects, the sky, the landscape, etc. Why do I think this is good? Because it allows you to work with the picture. The feature I do not like in that software is inserting randomly generated content into pictures. There's a major conceptual difference between using pattern recognition to get some work done and using it to generate unpredictable content that has nothing to do with the picture.
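To illustrate the "select, don't generate" distinction: this is not Adobe's actual algorithm (modern subject selection relies on trained models), but a toy flood-fill version of a magic wand. The point is that the tool only marks pixels that already exist in your picture; nothing new is invented.

```python
from collections import deque

def magic_wand(image, start, tolerance=30):
    """Toy 'magic wand': return the set of pixel coordinates connected to
    `start` whose grayscale value stays within `tolerance` of the clicked
    pixel. `image` is a 2D list of values in 0-255."""
    h, w = len(image), len(image[0])
    sy, sx = start
    target = image[sy][sx]
    selected, queue = {(sy, sx)}, deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in selected
                    and abs(image[ny][nx] - target) <= tolerance):
                selected.add((ny, nx))
                queue.append((ny, nx))
    return selected

# A tiny made-up 4x4 "image": a bright subject on a dark background.
img = [
    [10, 12, 200, 210],
    [11, 13, 205, 207],
    [12, 14, 198, 202],
    [10, 11, 200, 201],
]
print(magic_wand(img, start=(0, 2)))  # selects the bright right-hand region
```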
As an aside, I'd like to add that I'm not confident in the ways in which Adobe has trained its system, but theoretically, it could be done without stolen content.
The same idea applies to industrial use. There's a growing use of AI to detect defects in manufacturing, for example in mechanical parts. With a trained reference of what a part should look like, the system can detect when something is wrong. This can save time and improve accuracy and safety. In biology too, protein structure research has been greatly improved thanks to pattern recognition. Those are just a few examples of pattern recognition being meaningful for humanity, or at least quite useful.
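Here is a minimal sketch of the defect-detection idea: learn what "normal" looks like from known-good parts, then flag anything that deviates too much. Real production lines use far more sophisticated learned models (often on images); this is just a statistical stand-in, and all measurements are made up.

```python
import math

def fit_reference(samples):
    """Learn per-feature mean and standard deviation from known-good parts."""
    n = len(samples)
    means = [sum(col) / n for col in zip(*samples)]
    stds = [math.sqrt(sum((x - m) ** 2 for x in col) / n) or 1e-9
            for col, m in zip(zip(*samples), means)]
    return means, stds

def is_defective(part, means, stds, threshold=3.0):
    """Flag a part if any feature is more than `threshold` standard
    deviations away from the known-good reference."""
    return any(abs(x - m) / s > threshold
               for x, m, s in zip(part, means, stds))

# Made-up measurements (length_mm, diameter_mm, weight_g) of good parts.
good_parts = [(50.1, 8.02, 120.3), (49.9, 7.98, 119.8),
              (50.0, 8.00, 120.1), (50.2, 8.01, 120.0)]
means, stds = fit_reference(good_parts)
print(is_defective((50.1, 8.01, 120.2), means, stds))  # False: within tolerance
print(is_defective((50.1, 8.01, 135.0), means, stds))  # True: weight is off
```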
Transcription
Another domain where it is useful is transcription. Rather than summarizing content, which I criticized before, I think it is already a great tool for transcribing audio content or extracting text from an image. VLC has started implementing real-time subtitles, helping people with hearing disabilities watch content when no subtitles are available. If the technology continues to progress and reaches near-perfect transcription, it will be nothing short of revolutionary. Today it's not quite there yet, but I think that's a domain where it helps at nobody's expense, because not all media have subtitles.
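For the curious, transcription like this is already something you can run locally. A minimal sketch using the open-source openai-whisper package (this is not VLC's implementation, and the audio file name is hypothetical):

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")        # small model, downloaded locally
result = model.transcribe("lecture.mp3")  # hypothetical local audio file

print(result["text"])                     # full transcript
for seg in result["segments"]:            # timestamped segments
    print(f'[{seg["start"]:.1f}s - {seg["end"]:.1f}s] {seg["text"]}')
```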
As for extracting text from images, this has been present in iOS and iPadOS for a few years. Not perfect yet, but it can be quite useful. Again, it simply saves us time. I wouldn't trust it without proofreading the result, but if research continues and it improves, I think it's a good use of AI, because unlike a summary, there's no loss of information and no added bias.
You could even imagine a system where you could search for data inside videos and find it thanks to this transcribed information. Imagine looking for a quote from a video. You would be able to find it, with the exact timestamp, by entering search tags (concept names, an exact quote, the date of publication, the creator, etc.). That would be systematic and controlled, not something potentially invented by an AI. It would require us to invent new interfaces to interact with, but the possibilities for end users could be incredible.
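The key point is that once the transcript exists, finding a quote is plain, deterministic text search, with nothing made up along the way. A hypothetical sketch, with invented segments of the kind a transcription step could produce:

```python
# Timestamped transcript segments (made up for the example).
segments = [
    {"start": 12.0, "end": 17.5, "text": "Today we will talk about the semantic web."},
    {"start": 17.5, "end": 24.0, "text": "Metadata lets us link information together."},
    {"start": 24.0, "end": 31.0, "text": "Browsing beats getting a single answer."},
]

def find_quote(segments, query):
    """Return (start, end, text) for every segment containing the query."""
    q = query.lower()
    return [(s["start"], s["end"], s["text"])
            for s in segments if q in s["text"].lower()]

print(find_quote(segments, "semantic web"))  # [(12.0, 17.5, '...')]
```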
Organization
The good uses I've just talked about are already deployed in a lot of sectors and improving day by day. That is great. However, there's one category where I'm not seeing progress (at least for end users): organization.
I think one of the areas where AI could be the most useful is helping us find things with more accuracy. Rather than giving us a made-up answer, it should help us organize data, whether that means accurately cataloguing our offline content or the results of a search engine.
Let me take an example. Say you're searching for a painting. You know it's got a blue sky, a woman with a parasol, and it's by an impressionist painter. It would be extremely useful for content to carry AI-generated tags with this kind of information, embedded as metadata. Today, a search engine will have no trouble finding a painting that famous. But for more obscure content, with less precise descriptions, such tags could help us browse.
And that's what I'm getting at. I think the very concept of an "answer" is not great. I do not want the answer, I want to browse content! I want a search engine where I can browse most of the paintings known to human history and organize them by year, by color, by anything! I want to find interesting books based on criteria available at my public library. I think that would be extraordinarily more inspiring and useful than getting one single answer (that might be wrong).
The same goes for my offline content! If I want to see only the pictures of my flowers, or only my texts with more than 6,000 words, I want to be able to do that quickly and efficiently. That, to me, would be the revolutionary use of AI tools: making us able not only to find what we want, but to browse what we want, how we want.
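To show how little magic this kind of browsing needs once the tags exist, here is a hypothetical sketch. The file names, tags and word counts are made up; an AI could generate such metadata, but the browsing itself stays fully under your control.

```python
# A made-up local "library" with metadata that could be AI-generated.
library = [
    {"path": "IMG_0142.jpg", "kind": "photo", "tags": ["flower", "garden"]},
    {"path": "IMG_0150.jpg", "kind": "photo", "tags": ["cat", "sofa"]},
    {"path": "novel_draft.txt", "kind": "text", "word_count": 8421},
    {"path": "shopping.txt", "kind": "text", "word_count": 63},
]

def browse(items, **criteria):
    """Keep items matching every criterion: exact value, membership for
    list-valued fields, or a predicate function for anything else."""
    def matches(item):
        for key, wanted in criteria.items():
            value = item.get(key)
            if callable(wanted):
                if value is None or not wanted(value):
                    return False
            elif isinstance(value, list):
                if wanted not in value:
                    return False
            elif value != wanted:
                return False
        return True
    return [item for item in items if matches(item)]

print(browse(library, kind="photo", tags="flower"))                 # my flower pictures
print(browse(library, kind="text", word_count=lambda n: n > 6000))  # long texts only
```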
Social media platforms have been using AI to detect harmful content for years. This same technology could be researched further and improved to detect and classify more kinds of content. It's not good enough today and still needs human work, but it could save us a lot of time.
This whole new organization of data is the idea behind the semantic web. The whole point was for data to be linked to other data, expressing relationships between pieces of information through metadata. With improved AI, that metadata could be generated, at least to some extent. The goal, again, is to let our human brains explore this human data.
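A toy version of the semantic-web idea, assuming made-up data: content is described by subject-predicate-object triples. Whether the triples come from humans or from an AI tagger, exploring them is deterministic querying, not answer generation.

```python
# Made-up triples describing paintings (real semantic-web systems use RDF).
triples = [
    ("woman_with_a_parasol", "painted_by", "claude_monet"),
    ("woman_with_a_parasol", "movement", "impressionism"),
    ("woman_with_a_parasol", "depicts", "parasol"),
    ("impression_sunrise", "painted_by", "claude_monet"),
    ("impression_sunrise", "movement", "impressionism"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None = wildcard)."""
    return [(s, p, o) for (s, p, o) in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)]

# All impressionist works in the dataset:
print(query(triples, predicate="movement", obj="impressionism"))
```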
Combined with the kind of video search I mentioned earlier, this would form a fully integrated system giving us incredibly configurable access to more data than we have ever had. It would be a whole new way of interacting and browsing, a whole new paradigm that could be quite exciting.
Instead, we have a hallucinating multi-purpose AI that gives answers that might be true. We're nowhere near this dream of an accessibility revolution. How come? Let's get to that in the next part.
Why I am worried
AI, in its current form, is a tech bubble, with OpenAI being the most overvalued company in the world. Like the Metaverse or NFTs before it, this is a huge bet on a product that is not as tangible as those companies would like you to believe (though admittedly more tangible than the previous ones). You can read Ed's articles about how OpenAI's current valuation is inflated, how it is not making any money, and how there is little reason for users to pay for it.
But you will also read everywhere that AI is becoming more and more common, and see friends, family, and colleagues using it. How come?
The industry needs you to like it
AI is being sold not just as a new way, but as the only way of doing things in 2025. It is being integrated everywhere, into everything. Too much money has been invested, so it needs to become the new norm. Does it need to be good? No. It needs to be used. That is the only thing that matters, even if the overall result is worse for users. It needs to become a habit, the same way Google has become a synonym for search. "ChatGPT said that..." needs to become the synonym of "the answer is". Whether ChatGPT is right is not important. If people believe it is and give OpenAI their money, then the investors have gotten what they're ultimately seeking: money. Big tech firms are increasingly struggling to find significant innovations that drive up profits and market share, so there is a panicked rush every time a new technology surfaces. This time it's AI, and everyone wants to invest, just in case it might be good. Except it isn't, and it might never be, but the money needs to be recouped!
Becoming the norm turns people who criticize its usage, and the technology itself, into outcasts. Just like it was strange to not have a Facebook account (there have been some dramatic takes on that), it needs to become strange to not use AI. Think of how smoking was once omnipresent before being banned indoors almost overnight in most countries without much uproar: it was something actively harming people that was simply there. AI companies aim to be this new, unquestioned norm. There is also the question of regulation. It's much harder to regulate something that is already there and widespread than to question its presence in the first place. People might even get angry at the governing bodies trying to regulate AI to protect them, simply because it will have become normal.
As I said before, the previous crap was the Metaverse and NFTs. They also needed to be sold, but their pitch was much harder. AI sells a tangible dream, even if it doesn't work correctly or deliver anything good. It's enough that users believe it (and they will, because generative AI skepticism gets very little mainstream visibility). For now, not enough people find that value worth paying for, so AI companies aren't making any money. I just hope it stays that way, and that fundamental AI research continues without corporate hype dragging it in a bad and harmful direction. The industry does not know what to sell you next, needs to grow because that is how capitalism works, and its struggle to do so leads to crappy decisions. It was especially baffling to me that Apple, a company usually conservative with novelties, jumping on the wagon only when a technology is mature enough for the user experience to be better than its competitors' (at least in its own vision), presented its generative AI solution so fast. Apple Intelligence looks like the decision of a company increasingly struggling to make new things. And so far, it's been quite chaotic.
What I don't understand is the media treatment of the whole matter. A lot of media outlets are owned by billionaires who have economic interests in selling their products. But most of them are not owned by people with stakes in generative AI, so it's baffling that for the past two years it has been a weekly topic in every news outlet. Of course people use something that gets this much free marketing, despite its questionable added value. I talked about generative AI trends earlier, and I see news coverage of every single one of them! No wonder it gains traction.
Capitalism is how we experience life
AI's popularity is only a consequence of how we've been told to experience life, for a long time now. Platforms like Instagram have been pushing people to experience the same things, in the same way, and to share them exactly the way that's popular. Sure, this can lead to interesting things, but for the most part, it leads to the loss of discovery.
This is something that has always saddened me: how, for the most part, discovery is not something people seek out. Netflix works exactly like that, and it's the reason I don't like it. Netflix doesn't want you to think and search for a movie or a show you want to watch. It wants you to accept what it recommends. You shall not search, you shall not think. Just indulge yourself! In the same way that generative AI wants to become omnipresent, Netflix has become a synonym for watching things on television.
Of course, I'm not saying every activity should be some big experience, with a ton of thinking and whatnot. I'm again talking about this loss of agency. Let me take another example.
Instagram is another platform that is huge and has become the norm for a lot of things. Take holidays, for instance. People post pictures on Instagram, and some places, some restaurants, automatically become "places you should go to". I've seen quite a few people plan holidays around nothing else. Is that inherently bad? No, recommendations are useful, and the internet has made informed travel easier than it ever was. But it creates a kind of obligation to have an experience that can only unfold in one particular way. Influencers and brands, for sure, have embraced this trend: they can sell you these places and make you feel like you're missing out on the "only good" experience if you skip them. So you might feel FOMO if you're not posting the same Instagram pictures as everyone else, or not going to that new trendy restaurant, even if you're not interested in any of it.
What I'm getting at is that I'm not surprised people are asking ChatGPT to create travel itineraries for them. It's obviously very practical and useful to know about those places. But when doing your own research, you might come across that small museum catering to one of your interests, and it would be sad not to go there, right? Not to mention the issues of inaccuracies, closures, holidays, renovations, and outright lies about things that do not exist, because generative AI hallucinates. You could get a decent skeleton of a 5-day itinerary through the Netherlands. But that data comes from blogs by people who have actually been there, so why not go read the source directly, rather than listen to a probabilistic model that is just making things up?
Well, the reason is simple (apart from a lack of understanding of generative AI, or plain naive optimism about its capabilities): as a society, we are desperately short on time. Capitalism is eating it all. There is not a single person I know who is satisfied with their work-life balance, and those standardized experiences are a direct result of that. Why spend a whole week of evenings putting together an interesting trip when an AI can do it in a minute? You'll see the exact same things as everyone else and get some wrong information, but at least you only spent a minute on it. I get the appeal. I just think we've collectively failed as a society to have reached this point. The whole dream of AI was to automate the boring tasks so we'd have more time for the fun things, like planning a trip, drawing, writing, etc. Not to automate the fun things!
We've failed people with computers
As a computer scientist, I can only come to a disastrous conclusion: we've failed people with computers. Sure, day-to-day computing can be a sum of frustrating experiences for us developers, but we get by, because we understand what's going on. But end users? It's just a mess. Basic computer tasks have become increasingly difficult for no valid reason, products are just not good enough, and in the process we've become entirely dependent on them.
For the longest time, computing was still a novel thing, getting better and more powerful with time. Today, we've reached a point where the computers in our pockets are insanely powerful, and yet things as simple as typing text are still not great. Nothing is great with computers. I blame this on enshittification and on bloated software designed with no thought for the end user. I won't develop this argument here, because it is not the main topic. However, I don't think it's a ground-breaking observation to say that people do not really like using computers. So when corporations sell an all-in-one solution that promises to do everything without the friction, it's natural that people don't question it and just think, "Finally, I can do less of this crap!"
We've collectively failed to make computers enjoyable for creating things. They're a passable experience for consuming content, on phones, on smart TVs, etc. But for actually creating? I'd argue that apart from nerds (who, by the way, spend far too much of their day being angry at computers), it's not enjoyable at all, and it comes with too many constraints even for those who push through. Just ask any artist what they think of the software they're using...
Conclusion
Today, even something as small as my website is targeted by AI crawlers. My Git server can't handle the requests, and my server is paying the toll. That is another hidden cost of AI: it puts pressure on everyone, without consent. What's even the point of making open-source projects if the data will just be scraped by a company owned by a billionaire who wants to get richer, one might ask? The same goes for artists. Well, I do hope people won't fall into that kind of despair, but I can certainly understand the feeling. I'm not putting my Git server back up until I've put some AI protection in place (which is taking up my time and energy and makes me grumble).
AI is a tool. It could have been used in good, interesting, creative, ground-breaking, emancipating ways. Instead, its development is actively harming most of the people in the world, making us dependent, uncreative, dull and harmless in a world turning technofascist. And this, to me, is incredibly worrying. It feels like the last straw in a collapsing world: it just makes me angry to see how it is being normalized, with all its issues brushed aside. The worst part might be that there is no sign the technology will actually become good or useful one day. I won't claim to have a solution. I would say the most important thing is to always ask why we're implementing a new technology. Is it solving what we want it to solve? If so, how? What are the consequences? I think that's the main thing we've collectively lost track of, and big tech companies even more so.
I needed to put into an article the thoughts I've been gathering over the past year, since the rapid democratization of this new technology. I hope it has made you think, even a little, about the consequences this has for everything around us, whether you're a sceptic or an enthusiast. And I will finally have something to link to when I want to rant about this topic.