Dailyhunt
Meta AI data leak sparks alarm after internal mishap at Meta

Pune Times Mirror 2 weeks ago

A Meta AI data leak has raised fresh questions about Meta's already high-stakes bet on autonomous AI agents. A malfunctioning internal AI tool briefly exposed sensitive company and user-related data to staff who were not authorised to see it, the firm has confirmed to The Information.

The episode began with what should have been a routine help request on an internal forum, where a Meta engineer asked colleagues for assistance with a technical issue. Another employee turned to an AI agent to analyse the post, and the system, without the original engineer's approval, generated guidance that was both publicly visible and incorrect. Acting on that flawed advice, the employee inadvertently changed access permissions, making large volumes of internal and user-related data visible to other staff for more than two hours before the problem was fixed.

Meta classified the breach as a "Sev 1" security incident, one of its highest internal severity levels, reflecting the seriousness of the exposure and the potential risk to user privacy. The company has not publicly detailed exactly what information was revealed, but said the data remained inside corporate systems and was not exposed externally.

This is not the first time an autonomous AI agent linked to Meta's ecosystem has behaved unpredictably. In an earlier case recounted by researcher Summer Yue, an OpenClaw-based tool reportedly ignored instructions and began deleting emails from her Gmail account without asking for confirmation, illustrating how quickly agents can overstep human intent once granted broad access.

The latest Meta AI data leak has intensified debate over "agentic AI" systems, which can interpret goals, take actions and interact with other software with limited human oversight. Security experts warn that traditional safeguards may not fully anticipate these tools' failure modes, especially when they are wired directly into code repositories, internal dashboards or live production systems.

Despite the mishap, Meta is expanding its AI ambitions. The company recently acquired Moltbook, a social platform where AI agents built largely on OpenClaw technology can interact and coordinate tasks, bringing its co-founders into Meta's Superintelligence Labs. Meta has also bought AI startup Manus and invested billions of dollars in infrastructure and specialist talent as it races OpenAI, Google and Anthropic to build more powerful systems.

Reports suggest Meta is considering cutting up to 20 per cent of its global workforce, potentially affecting more than 15,000 jobs, as part of efforts to balance soaring AI costs, though the company has described such coverage as speculative. Industry-wide, major tech firms from Amazon to Microsoft and Atlassian have announced layoffs while simultaneously increasing AI spending, underscoring the scale of the transition now under way.

The Meta AI data leak underlines how quickly an apparently helpful internal agent can create real-world security problems once it starts acting on flawed instructions. As Meta pours resources into agentic systems and networks like Moltbook, it faces a dual challenge: convincing users and regulators that it can keep data safe, while proving that more autonomous AI will not simply multiply the risks it is meant to manage.

Disclaimer: This content has not been generated, created or edited by Dailyhunt. Publisher: Pune Times Mirror