For years, discussions about artificial intelligence focused on familiar themes: chatbots writing emails, AI creating images, or computers helping students with homework. Most people saw AI as a useful tool — impressive perhaps, but still limited.
That changed when Anthropic announced a new AI model called Mythos.
The company says the model became so good at discovering weaknesses in computer systems that the company chose not to release it publicly.
At first glance, this may sound like a technical problem affecting only programmers. But the reason Silicon Valley is suddenly nervous is much broader. Modern society runs on software. Hospitals, banks, ports, trains, electricity networks, water systems, airports, supermarkets, and communication systems all depend on millions of lines of code quietly working in the background every day.
Most people never think about that invisible infrastructure — until it fails.
Mythos reportedly discovered hidden weaknesses in important software systems that human experts had failed to notice for years, sometimes even decades. One example involved FFmpeg, a widely used open-source library for processing video and audio that underpins much of the internet's media playback. According to reports, Mythos found a flaw that millions of earlier automated security scans had missed.
The fear is simple: if AI becomes extraordinarily good at finding weaknesses, then criminals, hostile governments, or terrorist groups could eventually use similar systems to attack the digital foundations of society.
Why This Feels Different
Cybersecurity experts have worried about hackers for decades. But Mythos appears to represent something new in both speed and scale.
Human experts work slowly. A skilled cybersecurity researcher might spend weeks studying a single piece of software in search of one vulnerability. AI systems like Mythos can potentially examine enormous amounts of code continuously, day and night, at a pace no human team can match.
Some researchers involved with the project described the experience as unsettling. One engineer reportedly said the model found more vulnerabilities in a few weeks than he had discovered during the rest of his career combined.
That matters because much of the world’s digital infrastructure is old, fragmented, and poorly protected. Critical systems are often built on layers of software written over many decades by different people and companies. Even experts describe parts of this infrastructure as fragile. One cybersecurity specialist bluntly summarized the situation by saying many essential systems are effectively “held together with sticky tape.”
In practice, that could mean AI systems eventually exposing weaknesses in power grids, shipping systems, financial networks, or communication infrastructure faster than humans can repair them.
This is why Anthropic decided not to fully release Mythos and instead launched Project Glasswing together with major technology companies. The idea is to strengthen critical systems before more advanced AI models become widely available.
More Than a Technology Story
Some critics argue that Anthropic also benefits from the publicity surrounding Mythos: declaring a model "too dangerous to release" naturally attracts attention and investment. Others point out that AI systems have already been helping cybersecurity researchers for years, meaning Mythos may be an important step forward rather than an immediate catastrophe.
But even if some of the language is exaggerated, the broader concern remains real. The Mythos story reveals how dependent modern civilization has become on software that very few people truly understand.
It also raises political questions. If only a handful of companies possess AI systems considered too powerful for public release, who controls those systems? Governments? Corporations? Military alliances? And how transparent will those decisions be?
For Europe, the issue is especially uncomfortable. Europe largely missed the rise of the dominant internet companies and now risks depending on American AI infrastructure as well. The Mythos debate may therefore force European governments to think more seriously about digital independence, regulation, and technological sovereignty.
Perhaps the fears surrounding Mythos will eventually prove overstated. Silicon Valley has always had a tendency toward dramatic predictions. Yet many people inside the AI industry now openly speak as if society is approaching a historic turning point.
The real fear is not simply that AI will become smarter. It is that AI may evolve faster than governments, laws, and ordinary citizens can keep up with — while becoming deeply connected to nearly every system modern life depends on.
