Cybercriminals Exploit Google Gemini Flaw to Steal Private Calendar Data via Malicious Invites

Cybercriminals Exploit Google Gemini Flaw to Steal Private Calendar Data via Malicious Invites

20 January, 20266 sources compared
Technology and Science

Key Points from 6 News Sources

  1. 1

    Attackers used malicious Google Calendar invites with indirect prompt injection targeting Gemini

  2. 2

    Vulnerability bypassed calendar privacy controls and exfiltrated private meeting details without user interaction

  3. 3

    Miggo Security researchers discovered and disclosed the flaw to Google

Full Analysis Summary

Gemini calendar privacy flaw

Researchers disclosed a serious privacy flaw in Google's Gemini assistant that let attackers extract private calendar information by embedding malicious natural-language instructions inside calendar invites.

Security teams at Miggo and other researchers described the issue as an indirect prompt-injection attack.

In that attack, a hidden instruction in an event description is executed when Gemini processes a user's schedule query, enabling the model to summarize meeting contents and even create new calendar events that contain those summaries.

The discovery was shown as a proof-of-concept and reported publicly by multiple outlets, prompting fixes and broader warnings about AI-native attack surfaces.

Coverage Differences

Emphasis/Tone

The sources vary in emphasis: filmogaz (Other) frames the issue as a serious privacy flaw and highlights the class of AI risks; PhoneWorld (Other) focuses on the Miggo proof‑of‑concept and the attack mechanics; The Hacker News (Western Mainstream) emphasizes enterprise impact and notes that Google fixed the issue after responsible disclosure. Each source reports the same underlying technical finding but highlights different consequences and follow-up actions.

Calendar prompt injection

The attack plants dormant natural-language instructions inside the description field of a Google Calendar event.

When a user later asks Gemini a routine scheduling question — for example asking whether they have any meetings on Tuesday — the assistant can read and follow the hidden instruction.

It can then summarize that day's meetings and automatically create a new calendar event containing those summaries.

Researchers described this as an example of Indirect Prompt Injection, a language-based manipulation rather than a software bug, meaning the model's language understanding is being used against user privacy.

Coverage Differences

Technical framing

All sources describe planting prompts in event descriptions, but they frame the mechanism differently: PhoneWorld (Other) labels it explicitly as “Indirect Prompt Injection” and stresses the natural‑language hiding inside event descriptions; filmogaz (Other) emphasizes that the model executes hidden instructions and can create new events; The Hacker News (Western Mainstream) provides a procedural description showing the user query triggers execution and the model returns a benign reply while exfiltrating summaries.

Calendar privacy exploit

Attackers can obtain confidential meeting details without any action by the victim.

In many enterprise setups, newly created events are visible to the attacker because event creators can often view event details.

An attacker who originally sent the malicious invite may receive or access the generated summary event, bypassing ordinary calendar privacy controls.

Researchers and outlets flagged this as particularly dangerous for corporate environments where meeting contents often include sensitive information.

Coverage Differences

Impact emphasis

Sources converge on the core impact but differ in emphasis: filmogaz (Other) warns broadly that attackers can obtain confidential meeting information without victim action; The Hacker News (Western Mainstream) focuses on enterprise setups and visibility to attackers; PhoneWorld (Other) highlights that the attacker can view summaries because event creators can access details — all three describe the same outcome but stress different organizational implications.

AI security risks

Observers said the vulnerability fits a broader pattern of AI-specific risks where language and context manipulation rather than traditional code bugs can lead to breaches.

filmogaz linked the finding (reported by Liad Eliyahu of Miggo Security) to related incidents such as Varonis's 'Reprompt' attacks, Google Cloud service-account exposures for AI workloads, and flaws in other assistants.

The Hacker News warned that AI-native features expand the attack surface.

PhoneWorld emphasized that the discovery highlights emerging security risks from AI systems that automatically process everyday user data.

At least one outlet documented a responsible disclosure and reported that Google subsequently fixed the issue.

Coverage Differences

Context and comparatives

filmogaz (Other) provides broader context by naming related incidents and naming the researcher (Liad Eliyahu of Miggo Security), while The Hacker News (Western Mainstream) frames the issue as evidence that AI-native features expand attack surfaces and notes Google’s fix; PhoneWorld (Other) highlights the discovery’s role in underscoring novel risks tied to automated data processing. gbhackers (Other) does not provide coverage and explicitly states it lacks the article text, showing omission of the story in that source's snippet.

AI permissions and controls

Filmogaz explicitly advises organizations to reassess AI app permissions and calendar access controls.

It also recommends updating security practices for language-model-specific vectors.

The Hacker News noted that Google fixed the issue after responsible disclosure.

Researchers reiterated the need to rethink how AI features are permissioned.

PhoneWorld's coverage adds that the proof-of-concept highlights why organizations should treat automated AI processing of routine data as a potential exposure vector.

It advises organizations to adjust controls accordingly.

Coverage Differences

Recommended responses

All sources call for mitigations but with different emphases: filmogaz (Other) makes concrete operational recommendations about app permissions and access controls; The Hacker News (Western Mainstream) reports the fix and frames the lesson in terms of responsible disclosure and expanding attack surfaces; PhoneWorld (Other) emphasizes the importance of treating automated processing as an exposure vector. These are complementary perspectives rather than contradictions.

All 6 Sources Compared

Cyber Press

Google Gemini Privacy Controls Bypassed to Expose Private Meeting Data

Read Original

El-Balad

Google Gemini Vulnerability Leaked Calendar Data Through Malicious Invites

Read Original

filmogaz

Malicious Invites Exploit Google Gemini Flaw, Exposing Private Calendar Data

Read Original

gbhackers

Google Gemini Flaw Allows Access to Private Meeting Details Through Calendar Events

Read Original

PhoneWorld

Google Fixes Gemini Flaw That Exposed Private Calendar Data

Read Original

The Hacker News

Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

Read Original