June 1, 2025

When AI Poses as the White House Chief of Staff’s Phone

FBI investigates sophisticated voice cloning attack on White House Chief of Staff Susie Wiles as compromised phone exposes nation’s most powerful leaders.

A quick recap of the incident

Today, federal investigators confirmed they are probing an AI voice-cloning attack targeting Susie Wiles, the White House chief of staff and the president’s closest adviser. An unknown caller, reportedly using an AI-generated clone of Wiles’ voice, spoke with governors, senators, Fortune 500 CEOs, and party rainmakers, requesting sensitive government information, asking members of Congress for political favors, and demanding cash transfers. Some recipients initially believed the requests were legitimate. The attacker reportedly directed one lawmaker to assemble a list of individuals the president could pardon.

“Safeguarding our administration officials’ ability to securely communicate … is a top priority,” FBI Director Kash Patel told reporters as the Bureau expanded the probe.

What began as a handful of “odd” calls and texts ballooned into a multi-state criminal investigation after recipients noticed the grammar was stilted and the caller sometimes fumbled facts Wiles herself would know cold. Forensic teams now believe the impostor first compromised Wiles’ personal smartphone, harvesting her extensive contacts list, and then used AI voice cloning and inexpensive spoof-caller software to pose as the White House Chief of Staff.

How AI is turning personal device security risks into new national security attack surfaces

The integration of AI voice cloning into this attack represents a paradigm shift in mobile device threats. Previously, a compromised contact list leaking from a phone posed a limited information security risk. It might enable spam or phishing attempts that would be quickly flagged. Now, attackers can leverage AI voice cloning to create a new, much more dangerous attack surface by generating convincing-sounding copies of trusted individuals. 

The voice clone itself likely required only 2–5 seconds of Wiles’ spoken audio; leading AI voice-recreation tools can produce one for pennies. How much would it have cost to recreate Wiles’ voice from five seconds of audio before the current AI era? What this attack highlights is how the information security value of some data has grown by orders of magnitude, and we’re just starting to wake up to it.

Think about what a modern phone actually does. It snaps pictures and video, captures snippets of speech, notes where you are and how fast you’re moving, maintains a BLE/Wi-Fi beacon log of all nearby devices, records the patterns of your face and fingerprints, and, because most of that ends up in the cloud, keeps a running backup of your life that an attacker can reach from anywhere. In practice, a smartphone is a compact log of its owner’s world.

For years, security people worried about this, but the danger felt bounded: a spy who stole a phone still had to sift through gigabytes of raw data by hand. That constraint just vanished. Machine-learning engines now turn crumbs of data leaking from phones into full meals. Feed a model five seconds of hallway chatter, and it spits out a flawless voice clone. Give it a time-stamped photo roll, and it infers a travel pattern accurate enough to predict tomorrow’s motorcade route. Toss in seemingly harmless Bluetooth metadata, and it maps the whole social graph of a secure compartment.
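The photo-roll inference described above can be sketched in a few lines. The records, coordinates, and grid size below are invented for illustration; a real pipeline would parse EXIF metadata and use proper spatial clustering rather than coordinate rounding:

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical EXIF-style records: (timestamp, lat, lon). Illustrative only.
photos = [
    ("2025-05-01 08:05", 38.8977, -77.0364),
    ("2025-05-02 08:10", 38.8977, -77.0364),
    ("2025-05-03 08:02", 38.8976, -77.0366),
    ("2025-05-01 18:30", 38.9072, -77.0369),
    ("2025-05-02 18:25", 38.9072, -77.0369),
]

def round_cell(lat, lon, precision=3):
    """Snap coordinates to a coarse grid cell (~100 m at 3 decimal places)."""
    return (round(lat, precision), round(lon, precision))

def pattern_of_life(records):
    """For each hour of the day, return the most frequently seen grid cell."""
    by_hour = defaultdict(Counter)
    for ts, lat, lon in records:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
        by_hour[hour][round_cell(lat, lon)] += 1
    return {h: cells.most_common(1)[0][0] for h, cells in by_hour.items()}

schedule = pattern_of_life(photos)
print(schedule)  # hour of day -> most likely location
```

Even this toy version surfaces a daily rhythm: the same cell keeps appearing at the same hour, which is exactly the regularity a route-prediction model exploits.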

Voice forgery is only the headline. 

AI makes all of these side-channel attacks orders of magnitude cheaper and more feasible, with even less data required:

  • Voice cloning from a few seconds of captured speech
  • Pattern-of-life and route prediction from a time-stamped photo roll
  • Social-graph mapping from Bluetooth/Wi-Fi beacon logs
  • Reconstruction of biometric signatures, and even floor plans, from cached sensor data and stray photo reflections

Each of these starts with what used to look like “noise.” AI makes the noise legible.
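As a toy illustration of how that noise becomes legible, the sketch below turns hypothetical passive BLE sightings into a social graph: devices that repeatedly appear in the same time window are probably carried by people who work together. Device names and windows are invented:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical passive BLE sightings: (time_window, device_id).
sightings = [
    (1, "phone_A"), (1, "phone_B"),
    (2, "phone_A"), (2, "phone_B"), (2, "watch_C"),
    (3, "phone_B"), (3, "watch_C"),
]

def cooccurrence_graph(sightings):
    """Count how often each pair of devices appears in the same time window."""
    by_window = defaultdict(set)
    for window, dev in sightings:
        by_window[window].add(dev)
    edges = defaultdict(int)
    for devices in by_window.values():
        for a, b in combinations(sorted(devices), 2):
            edges[(a, b)] += 1
    return dict(edges)

print(cooccurrence_graph(sightings))
```

The edge weights are a crude proximity graph; fed enough windows, the same idea maps who shares an office, a motorcade, or a secure compartment.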

That new leverage changes the math inside government spaces. Unmanaged, personal smartphones are notoriously vulnerable to data exfiltration. Security professionals within the government have long recognized this; former NSA Director Keith Alexander famously urged the public to reboot their phones weekly to counter zero-click spyware. Yet for years, the risk was regarded as manageable; an attacker still had to sift manually through gigabytes of exfiltrated data, and the damage was assumed to be limited to what that one device contained. 

AI has changed this. Because smartphones are so good at harvesting ambient information, every personal device carried into a secure facility now doubles as a reconnaissance sensor for adversarial AI. Even the slimmest egress (a stray Bluetooth handshake, a cached voicemail clip, an inadvertent photo reflection) feeds machine-learning pipelines that can reconstruct classified floor plans, patterns-of-life, or the biometric signatures of cleared personnel. The threat is no longer the theft of data “at rest”; it is the weaponization of sensor exhaust that was never meant to leave the room.

For national-security installations, this changes the calculus completely. The familiar tolerance for bringing personal devices into secure areas is no longer in effect. The policy has to shift to “phones stay outside, and the room’s RF airspace is continuously swept to prove it,” because AI is collapsing the gap between a few leaked bytes and a strategic compromise.

The Wireless Airspace Blind Spot

Without continuous RF situational awareness, an attacker with a mobile device can record, exfiltrate, or impersonate at will, and AI simply lowers the cost of doing so. The 2023 SECDEF memo directed all SCIFs to implement Wireless Intrusion Detection Systems. Recent cases, such as those involving Jian Zhao, Korbein Schultz, and Michael Schena, in which individuals allegedly used smartphones inside secure facilities to steal government information, suggest that too few government facilities have actually implemented systems to protect themselves from these risks.

Bastille’s Wireless Airspace Defense platform delivers the missing layer of visibility:

  • 100% passive sensors cover 100 MHz–7.125 GHz and monitor Wi-Fi, Cellular, Bluetooth/BLE, Zigbee, Z-Wave, and more.
  • AI-driven risk analytics classify every wireless device (smartphone, access point, rogue modem) by its behavior and threat level in real time.
  • One-to-three-meter geolocation lets security teams pinpoint where that “extra” smartphone or hidden LTE module just powered on.
  • Fusion-center integrations stream enriched alerts into your XDR, SOAR, SIEM, or PTZ camera systems, allowing operators to isolate the threat in seconds, not days.
  • SCIF-grade policy enforcement continuously clears classified spaces of covert phones, spy devices, and RF-enabled wearables, without relying on bag checks.
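Meter-scale geolocation of an emitter generally relies on several sensors ranging the same signal. As a simplified illustration (not Bastille’s actual algorithm), the sketch below trilaterates a 2-D position from three sensors with known positions and distance estimates; all positions and distances are invented:

```python
import math

# Hypothetical sensor positions (meters) and range estimates to one emitter.
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, math.sqrt(65), math.sqrt(45)]  # consistent with emitter at (3, 4)

def trilaterate(sensors, dists):
    """Solve for (x, y) by linearizing the three range equations.

    Subtracting the first circle equation (x-xi)^2 + (y-yi)^2 = di^2
    from the other two cancels the quadratic terms, leaving a 2x2
    linear system solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = sensors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

x, y = trilaterate(sensors, dists)
print(f"estimated emitter position: ({x:.1f}, {y:.1f})")
```

In practice the distances come from noisy RSSI or time-of-flight measurements, so production systems fuse many sensors with least-squares or filtering rather than an exact three-circle solve.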

To learn more about protecting your organization from AI-augmented wireless threats, visit Bastille Networks or contact our security experts for a wireless airspace assessment.

Close your cybersecurity gaps with AI-driven wireless visibility

See Bastille in action with a live demo from our experts in wireless threat detection.