Those who put up with me in class know it, and so do my clients: I talk about nothing but “AI” (and Astro, yes, that’s true). I explain at length that we’ll soon witness a social and economic paradigm shift, and that AI will play a major role in this transformation.
Let’s try to take stock of what makes this technology so utterly disruptive, and of what you can do to survive a sad reality: technical progress knows neither pity nor compassion.
But first, let’s establish some basics on the limits of what we wrongly call “AI.”
Current Technical Limitations of AI
AI has made tremendous progress in recent years, but it’s still far from matching human intelligence.
It excels in specific tasks, like image recognition or automatic translation, but it lacks common sense, creativity, and adaptability. Make no mistake: current AI is “weak AI,” specialized in precise tasks. True “strong AI,” one that possesses consciousness and general intelligence comparable to a human’s, remains a very distant goal.
Lack of Common Sense and Contextual Understanding
AI can perform complex tasks, but it often struggles to understand context and show common sense. It can follow instructions to the letter, but it’s unable to adapt to unforeseen situations or understand human language nuances.
This is what we call the “alignment” problem. Where humans think their instructions are simple, the AI will find a loophole and solve the problem in a completely hallucinated way, or simply won’t understand the question.
For now, for many tasks requiring thought, AI needs to be “fed” data for each situation. Common sense doesn’t download.
Difficulty Generalizing and Learning New Tasks
AI is often specialized in a precise task. If asked to perform a different task, it must be retrained from scratch. It struggles to generalize what it has learned and apply it to new situations.
For example, an algorithm trained to recognize cats won’t know how to recognize dogs.
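To make this concrete, here is a minimal sketch (with an untrained stand-in model and made-up labels, nothing taken from a real project) of why a classifier can only ever answer within the label set it was trained on:

```python
# Minimal sketch: a classifier's output space is frozen at training time.
# The model below is an untrained stand-in, and the labels are invented.
import torch
import torch.nn as nn

LABELS = ["cat", "not_cat"]                  # fixed when the model was trained

model = nn.Sequential(                       # stand-in for a trained image classifier
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, len(LABELS)),
)

image_of_a_dog = torch.rand(1, 3, 224, 224)  # placeholder tensor, not a real photo

with torch.no_grad():
    logits = model(image_of_a_dog)
    prediction = LABELS[logits.argmax(dim=1).item()]

# Whatever the picture contains, the answer can only be "cat" or "not_cat":
# the concept "dog" simply does not exist for this model.
print(prediction)
```

To recognize dogs as well, you would have to change the output layer and retrain (or at least fine-tune) the model on images of dogs.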
If AI is making waves today, it’s for its ability to generate text that makes sense and works, as long as context allows (we’ll talk about this again later).
Data Dependence
AI also needs large amounts of data to learn and improve. If the data is poor quality, incomplete, or biased, AI will produce mediocre or erroneous results. Moreover, AI is vulnerable to adversarial attacks, meaning subtle data manipulations that can deceive it into making bad decisions.
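For the curious, here is a minimal sketch of one classic adversarial-attack recipe, the fast gradient sign method; the model and the image are random, untrained stand-ins chosen purely to illustrate the principle:

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the loss, so the change is tiny but deliberately misleading.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.03                                  # near-imperceptible perturbation budget

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder input
true_label = torch.tensor([3])                         # made-up ground truth

loss = loss_fn(model(image), true_label)
loss.backward()

# The adversarial image differs from the original by at most `epsilon` per pixel,
# yet it is crafted specifically to push the model toward a wrong answer.
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```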
Try asking ChatGPT to code something completely new or poorly documented, and you’ll quickly understand that we still need developers.
From “Predicting the Next Token” to AGI…
Now that we’ve established these basics, let’s state the obvious once and for all: what we wrongly and loosely call “AI” has nothing to do with true artificial intelligence.
It’s not a “robot” that behaves like a human, but a system that “learns” from data and “answers” questions by computing the statistically most probable output.
Try asking Nova, our chatbot at the bottom right of this page: “The cat drinks …” to see what it will answer.
(Well, by the way, there’s a chance it’ll make a lame joke, but that’s another subject…)
Milk? Really?
When I had a cat, it drank water. Not milk.
Except that, in the collective imagination and in all the written representations of it found in AI training data, the cat drinks milk.
That doesn’t make it reality, certainly, but it demonstrates the limits of AI as we currently build it: it absorbs everything in its dataset and returns a kind of “average.”
For better and for worse!
An LLM is therefore not a form of intelligence, but a program that returns the statistically optimal output, meaning the most probable one.
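To see this next-token machinery for yourself, here is a minimal sketch using the Hugging Face transformers library with the small public gpt2 model (chosen purely because it is easy to download; Nova obviously runs on something else):

```python
# Minimal sketch: ask a language model which tokens are most likely to follow
# "The cat drinks", and print their probabilities.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The cat drinks", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only

probabilities = torch.softmax(logits, dim=-1)
top = torch.topk(probabilities, k=5)

# The model has no opinion about cats: it simply ranks continuations by how
# often they followed similar text in its training data.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob.item():.2%}")
```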
And that’s already a lot, because the LLM works better and faster than us… No more hours combing through Stack Overflow threads: ChatGPT does it in a second.
Need to code a WordPress function? It’s so well documented that the model will never be wrong.
Cool.
On the other hand, if we go completely off the beaten path and ask a question about something poorly documented, we get a thoroughly inaccurate answer that the user has no way of correcting: this is the famous AI “hallucination.”
Towards Agentic AI
To compensate for these little LLM lapses, “agents” come into play, and they are the future of AI: a form of intelligence that “supervises” the LLM’s work and defines complementary tasks to accomplish before providing an answer.
Basically, the supervisor generates a succession of prompts from your prompt, in order to contextualize your question more “safely” and “precisely” and obtain a more relevant answer.
Along the way, it might make several round trips between its different agents, so token consumption will be higher.
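Stripped of all the real-world plumbing, the logic looks roughly like the following minimal sketch, where call_llm is a purely hypothetical helper standing in for whatever chat-completion API is actually used:

```python
# Very simplified sketch of the "supervisor + agents" idea described above.
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send a prompt to some LLM and return its answer."""
    raise NotImplementedError

def supervisor(user_question: str) -> str:
    # 1. Ask the model to break the question down into smaller sub-tasks.
    plan = call_llm(
        "Break this question into a short numbered list of sub-tasks:\n" + user_question
    )

    # 2. Run each sub-task as its own prompt (every extra call costs extra tokens).
    partial_answers = []
    for task in plan.splitlines():
        if task.strip():
            partial_answers.append(call_llm(f"Carry out this task: {task}"))

    # 3. Ask the model to merge the intermediate results into one final answer.
    return call_llm(
        "Using these intermediate results, answer the original question.\n"
        f"Question: {user_question}\nResults: {partial_answers}"
    )
```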
The supervisor thus learns to “establish a succession of tasks to accomplish to answer more precisely”: but isn’t this the beginning of intelligence?
AGI will therefore very likely come from so-called “agentic” AIs, built as a succession of agents each specialized in a domain, managed by a “supervisor” that “trains” them to answer more precisely…
And we’re almost there: “deep thinking” mode will be what brings us closest to AGI, and it already exists on ChatGPT, DeepSeek R1, Manus AI, etc…
Social and Economic Paradigm Shift
Now that we’ve established this foundation (we’ll soon have access to models that will be faster and more efficient than us, for everything), what will happen?
Ask translators and web writers: their revenue is collapsing.
Or some development teams watching their headcount dwindle day after day, because the same “production capacity” can now be delivered with 20, 30, or 50% fewer people, well assisted by AI…
Or the “small hands” of the web working behind large platforms like Amazon, increasingly replaced by chatbots…
AI doesn’t unionize. It works 24/7. It makes no salary demands, and paying to run it generates neither social security contributions nor employer payroll charges.
There’s therefore good reason to believe that, as time passes, AI will increasingly substitute for humans: in a company, capital always wins out over labor.
We’ll therefore soon all be obsolete.
But What Are Governments Doing?
Faced with this growing threat, what are those supposed to protect us doing?
Each time a big market player has started to make a name for itself, it has very often, for communication or public relations purposes, set up an “ethics and safety department” (or some other marketing-bullshit name of that kind) whose purpose was to “define AI risks and challenges.”
And generally, not long after, this department disappears.
Why?
Because any attempt to limit/moderate/rein in/reduce risks considerably slows the development of models and companies, in a context where, precisely, EVERYONE is racing after AGI, or something approaching it.
Departments, companies, shareholders, and even higher: governments. Americans, Chinese, Europeans: EVERYONE.
AGI is a bit like the new “holy grail”: the tech will be so disruptive that whoever takes the lead in the market stands to win the jackpot.
So governments do nothing, quite the contrary: they push.
And when the tech is mature, really mature, with great ease of adoption by the general public, it will be too late.
Consequences of the AI-Related Paradigm Shift
In a company, using AI will be far less costly than calling on humans: we’ll all be obsolete. The rise of robotization will considerably accelerate this process. Certainly, for the most complex professions, there will be some resistance.
But for all simple or repetitive tasks requiring only basic motor skills and little analysis, it’s over: we’ll be replaced by robots.
And this isn’t science fiction, as you can see in this video: humanoid robots are making spectacular progress.
Will it happen? Yes.
What can we do about it? Not much.
For one simple reason: we’ll all adopt this technology. Moratorium or not.
The question that will then arise is that of the responsibility of these technologies’ owners: capital transfers will be massive, as will stock market valuations (where they exist). And it’s in this context of massive, brutal change that other problems will arise… even if the first of them will indeed be human obsolescence!
Security and Ethical Risk
Due to massive model use and deep integration with economic and home-automation ecosystems, AI will raise questions regarding data security and confidentiality.
The massive use of personal data to train AI algorithms raises legitimate concerns about privacy protection. And we must also consider the data already in companies’ possession, which they can use to sell you products and services, or resell to third parties.
AI’s Ethical Risks
AI is not intrinsically good or bad. It’s a tool, and like any tool, it can be used for noble or nefarious purposes. The question is therefore not whether AI is a threat, but rather how to frame its development to minimize ethical risks. Imagine an autonomous car that must choose between saving its passenger or a group of pedestrians. Who decides? According to what criteria? These questions, far from theoretical, are at the heart of AI’s ethical challenges.
Algorithmic Bias and Discrimination
AI algorithms learn from data. If this data is biased, AI will reproduce and amplify these biases. It’s a vicious circle. For example, a facial recognition system trained mainly on white faces will struggle to identify people of color.
Result? Unfair and discriminatory decisions in areas like recruitment, credit, or justice.
Don’t believe AI is neutral: it will amplify what it has in its dataset.
The moderation question therefore arises:
- Who will do it?
- According to what criteria?
It’s therefore crucial to ensure the quality and diversity of the data used to train algorithms, and to implement control mechanisms to detect and correct biases.
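As a very rough illustration of what such a control mechanism can look like, here is a minimal sketch of a “demographic parity”-style check on entirely made-up predictions: it simply compares acceptance rates between groups.

```python
# Minimal sketch: compare how often the model answers "yes" for each group.
# The predictions below are invented purely for illustration.
from collections import defaultdict

predictions = [
    {"group": "A", "accepted": True},
    {"group": "A", "accepted": True},
    {"group": "A", "accepted": False},
    {"group": "B", "accepted": False},
    {"group": "B", "accepted": False},
    {"group": "B", "accepted": True},
]

totals, accepted = defaultdict(int), defaultdict(int)
for row in predictions:
    totals[row["group"]] += 1
    accepted[row["group"]] += row["accepted"]

acceptance_rates = {group: accepted[group] / totals[group] for group in totals}
print(acceptance_rates)  # a large gap between groups is a red flag worth investigating
```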
Ethical questions are numerous, and it’s very likely this will take decades to handle effectively, knowing there will necessarily be a political or ideological bias.
The AI question will become terribly Promethean!
Manipulation and Disinformation
AI can also be used to create ultra-realistic deepfakes (fake videos), generate false information, or manipulate public opinion. It’s a formidable weapon in the hands of ill-intentioned people.
Imagine a deepfake of a politician making a compromising statement. The impact on public opinion could be devastating. It’s therefore essential to develop tools to detect and expose deepfakes and false information, and to raise public awareness of these risks. Disinformation in the AI era is a ticking time bomb.
Google is already working on the subject, with SynthID natively integrated into Gemini and Imagen (its most advanced model for generating images). To learn more, go here: everything about SynthID.
Mass Surveillance and Privacy Violation
AI also allows collecting, analyzing, and using massive amounts of personal data. It’s a windfall for companies and governments, but it’s also a threat to individuals’ privacy. Imagine a society where every movement, every purchase, every online interaction is monitored and analyzed by algorithms.
Data centralization and the race toward economic dematerialization, facilitated by blockchain and Central Bank Digital Currencies, will multiply this tendency and all the associated risks!
The risk, then, is very much that of sliding toward a mass surveillance society, where individual freedom is compromised.
The Need for Responsible and Transparent AI
Faced with these challenges, it’s imperative to develop responsible and transparent AI. This involves implementing strict and neutral regulations, promoting ethics in algorithm design, and raising public awareness of what is at stake with AI.
AI must serve humanity, not the other way around. Transparency is key. We must understand how algorithms make their decisions, what their potential biases are, and how to correct them.
Ideally, an open-source ecosystem could limit the damage, and we can only hope such models will emerge…
Public Awareness and Education
As you’ll have gathered, I’m one of those (enlightened?) people who think the future will be worthy of 2000s science fiction movies, but the news increasingly points in that direction (the digital euro arrives in October 2025, models keep getting more capable, robots are arriving on production lines, etc.).
Today, more than ever, it’s important to raise public awareness of AI: what’s at stake, its advantages, and its risks. We must educate citizens to understand how AI works, to spot potential biases, and to protect themselves against manipulation and disinformation.
AI education is an investment in the future. The more informed citizens are, the better equipped they’ll be to make sound decisions and actively take part in public debate.
You must also make this technology your own, and see how you can put it to work in your own company, in your department, or in your daily life, because the more comfortable you are with it, the more productive you’ll be, and the less likely you’ll be to be replaced…
I’m convinced that for most of us, it’s a matter of economic life or death: technical progress knows neither pity nor compassion, it’s merciless, and, well pushed by the market, it never stops…
You’ve been warned…





