Cavemen did not build the Eiffel Tower
At the time of writing, I have just survived a weekend of "Generative AI" training.
When I look at the news, even the technical news, there are two big camps:
- The pessimists
  - It will only lead to more energy consumption
  - It will destroy 80% of the jobs
  - It will reinforce societal biases, leading to more unfairness and poverty
  - It will create a major disruption in the digital world, and in society in general, something we cannot imagine
- The optimists
  - It will destroy 80% of the jobs
  - It will create many one-person billion-dollar companies
  - It will create a major disruption in the digital world, and in society in general, something we cannot imagine
  - Thanks to the Jevons paradox, we will need even more software engineers
  - It will only create new jobs (when high-level programming languages were introduced, the number of low-level programming jobs increased)
Let's stick to the facts:
- Generative AI has made astonishing progress over the last five years
- AI grew slowly for the 65-70 years before that
- LLMs can come up with not-so-bad solutions without too much effort
- Getting good solutions takes real effort (though I reject the term "prompt engineering")
- Today's LLMs need a lot of help
During the weekend, I worked on the frontend of a feed reader.
Even with a fairly detailed first prompt covering the specifications, the first result was disappointing: it compiled with a lot of warnings, the code was messy, all the interactions were broken, and the UI was very different from the requirements.
I had to rework my prompt, including code examples, enumerating the best practices precisely, and so on.
The result improved: still a lot of warnings and bugs, but a bit better.
After three hours of refactoring and improving, I got something usable.
And that is the whole point: not only does it take domain knowledge to achieve a "decent" result, it also takes real skill afterwards.
To be fair, I am just a beginner, and people more skilled than me achieve better results.
During the weekend, we had a speaker who introduced himself as an expert AI clip maker.
He had the following workflow:
- Write a script for a 90-120 second video
- Split it into 1-2 second still images
- Describe each of them precisely (background, foreground, action, lens brand, lens width, etc.)
- Animate each image
- Assemble the sequences
Even seasoned GenAI users need other domain knowledge to perform, which is good news, because it means professions won't disappear; they will be enhanced.
The other thing I noticed is that, as with software development, there is (still) no easy way to scale (e.g. to create a whole 90-minute movie).
Another thing I learned during the weekend is that software engineering principles and software architecture skills are still useful for "advanced" usages (e.g. RAG, MCP, agentic AI), as they still apply when building large systems that include LLMs.
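To illustrate why those engineering skills still matter, here is a minimal sketch of the retrieval-and-prompt-assembly half of a RAG pipeline, using only the Python standard library. The corpus, the keyword-overlap scoring, and the prompt format are hypothetical simplifications for illustration, not any specific framework's API; the actual LLM call is omitted.

```python
# Toy RAG sketch: score documents by word overlap with the query,
# then assemble an augmented prompt to send to an LLM.
from collections import Counter

# Hypothetical in-memory corpus; a real system would use a vector store.
CORPUS = {
    "doc1": "RSS feeds list articles published by a site",
    "doc2": "MCP is a protocol for connecting tools to language models",
    "doc3": "the Eiffel Tower was completed in 1889",
}

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Return the ids of the k documents sharing the most words with the query."""
    q = Counter(tokenize(query))
    scores = {
        doc_id: sum((q & Counter(tokenize(text))).values())
        for doc_id, text in corpus.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(corpus[d] for d in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("when was the Eiffel Tower completed?", CORPUS))
```

Even in this toy version, the classic concerns show up immediately: how to chunk and index documents, how to rank them, how to keep the prompt within a context budget. Those are architecture decisions, not prompt wording.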
The last fact is that all the predictions about AI are just predictions; sure, there has been a breakthrough, but that does not mean the pace will change or stay steady.
Let's finally tackle the title: I belong to the same species that was living in caves thousands of years ago, and the same species that built the Eiffel Tower.
Sure, cavemen could not build the Eiffel Tower, not because they were stupid, but because they had neither the social and technical knowledge to build it, nor the concepts to engineer it.
That is our new responsibility: to structure the new landscape created by AI, if it keeps expanding.