The U.S. State Department has issued a cautionary message to its diplomats regarding attempts to impersonate high-ranking officials, including Secretary of State Marco Rubio, using advanced artificial intelligence technologies.
This advisory followed the discovery that an impostor, masquerading as Rubio, had targeted multiple officials, including at least three foreign ministers, a U.S. senator, and a governor.
The information was detailed in a cable circulated to all embassies and consulates last week, which was initially reported by The Washington Post.
The fraudulent messages were sent via text message, Signal, and voicemail; the cable shared with The Associated Press did not disclose the recipients' identities.
Tammy Bruce, a spokesperson for the State Department, confirmed that the department is aware of the incident and is actively monitoring and addressing it.
“The department takes seriously its responsibility to safeguard its information and continuously take steps to improve the department’s cybersecurity posture to prevent future incidents,” she stated, though she declined to elaborate on specifics due to security concerns and the current investigation.
This warning comes in light of another incident from May, where impersonators targeted Susie Wiles, President Donald Trump’s chief of staff. As artificial intelligence technology improves and becomes more accessible, the potential for misuse in impersonation schemes is on the rise.
The FBI also issued warnings earlier this spring about malicious actors impersonating senior U.S. government officials using text and voice messaging as part of a broader deception campaign.
Although the tactics used in the Rubio impersonation attempts were described as “not very sophisticated” and ultimately unsuccessful, State Department officials deemed it necessary to notify all employees and foreign governments. The decision reflects growing concern over efforts by foreign actors to infiltrate systems and compromise information security.
The cable stated that while the impersonation campaigns pose no direct cyber threat to the department, information shared with the targets could be compromised.
In a public service announcement, the FBI elaborated on a malicious campaign involving text and AI-generated voice messages that imitate high-ranking U.S. officials. These efforts aim to deceive other government officials and the contacts of the intended victims.
This latest incident is not Rubio’s first experience with impersonation; earlier this spring, a deepfake video was released depicting him making false claims about cutting off Ukraine’s access to Elon Musk’s Starlink internet service. The Ukrainian government quickly countered this misrepresentation.
As concerns about the misuse of AI for deception grow, a variety of proposed remedies have emerged, including criminal penalties for misuse and improved media literacy among the public.
In response to the threat of deepfakes, numerous apps and AI systems designed to identify fraudulent content are currently being developed. However, the technology companies behind these detection systems are now engaged in a competitive race against those leveraging AI for deceptive purposes.
Siwei Lyu, a professor and computer scientist at the University at Buffalo, highlighted the arms race between deepfake creators and those developing detection technologies. He expressed concern over the escalating sophistication of deepfakes, which now often appear so realistic that they can easily fool humans.
Lyu pointed out that just a few years ago, deepfakes typically contained identifiable flaws—such as exaggerated movements or unnatural voices—but advancements in AI have significantly reduced these issues, thus giving an edge to those creating deepfakes.
The recent attempts to impersonate Rubio reinforce the idea that the potential for AI-generated deception poses a genuine threat, necessitating ongoing vigilance and preparedness from U.S. officials.
Image source: NPR