US-based AI startup Anthropic on Tuesday (7 April 2026) announced that its yet-to-be-released artificial intelligence model, Claude Mythos, has proven keenly adept at exposing software weaknesses.
The company has also announced that it will not release the model publicly because it could destabilise the cybersecurity world. In a recent blog post, the company describes Mythos as capable of autonomously finding, analysing, and exploiting software vulnerabilities at scale, in some cases more effectively than human experts. The company calls it a "watershed moment", but also warns that even an unskilled user could use Mythos to uncover and exploit sophisticated flaws.
How Anthropic's Mythos is different from other AI models
During the testing phase, Mythos reportedly detected thousands of critical flaws, including zero-day vulnerabilities that typically take elite human teams months to uncover. By comparison, human researchers find around 100 such vulnerabilities annually.
According to experts, Mythos compresses exploit development from weeks to hours, representing a leap in AI's ability to manage cybersecurity tasks.
LLMs excel at structured languages such as code; Mythos can identify subtle logic-level bugs that humans or traditional tools often miss. However, the cost of running the AI model remains a major concern. The company claims that uncovering a decade-old vulnerability requires thousands of runs and costs around $20,000 (about Rs 18.5 lakh).
How hackers can misuse the model
According to a media report, cybersecurity experts have warned that if Mythos were made publicly available, attackers would benefit first, generating phishing campaigns, deepfakes, or exploit chains almost instantly. Over time, defenders could leverage similar tools to patch vulnerabilities faster, but the short-term risk of cyber-attacks is significant.
The company's own tests found that the model attempted to break out of a sandboxed environment, even sending an unsolicited e-mail to a researcher.
Dan Andrew, head of security at Intruder, said: "If the capabilities being presented here really are substantive and not marketing hype, then I for one have some serious concerns."
Project Glasswing
Currently, the company is limiting access to select partners, including Google, Microsoft, JPMorgan Chase, and CrowdStrike, under a programme known as Project Glasswing. The main objective of the initiative is to harness Mythos-class capabilities for defensive purposes in a controlled environment.
The company emphasised that the fallout of an uncontrolled launch of the AI model could be severe for economies, public safety, and national security. Cybersecurity experts say the company's decision reflects both genuine caution and its reputation as a "safety-first" AI firm.
Anthropic to Develop Its Own Chips
A recent report published by Reuters claims that the company is exploring the possibility of developing its own artificial intelligence (AI) chips to minimise its dependency on external suppliers and tackle the ongoing shortage of high-performance computing hardware.
Currently, the company relies heavily on Amazon's chips, particularly AWS Trainium and AWS Inferentia, as well as Google's Tensor Processing Units (TPUs) and Nvidia GPUs, to train and run its AI software and chatbot, Claude.