WASHINGTON — At a workshop hosted by the Air Force’s Air University on Aug. 26 in Montgomery, Alabama, students were shown a video of President Joe Biden addressing the United Nations while effortlessly switching among five languages, including Mandarin and Russian.

While Thomas Jefferson and John Quincy Adams were fluent in several languages, Biden, like most U.S. presidents, is only known to speak English.

The video was a piece of synthetic media, more commonly known as a “deepfake.” Created using machine learning techniques, deepfakes are hyperrealistic videos that replace one person’s likeness with another’s, or appear to show someone doing something they never did.

And as the technology improves, they get harder to detect.

The workshop, which featured DeepMedia, a synthetic media start-up, introduced airmen to deepfakes, their military applications and the company’s AI technologies that could be used to counter the threat they pose.

Earlier this year, a deepfake video of Ukrainian President Volodymyr Zelenskyy showed him calling on his soldiers to lay down their arms and surrender amid the country’s conflict with Russia. Although the video was quickly debunked and taken down, the technology is increasingly being used in propaganda and disinformation campaigns.

Rijul Gupta, a co-founder of the Oakland, California-based DeepMedia, called deepfakes the “next frontier” of video technologies, citing their low production cost and relatively quick turnaround time.

“Everything will be able to be faked in real-time,” he said at the workshop.

The company created the Biden video to illustrate that not all deepfakes are harmful. DeepMedia’s AI-based universal translator tool, for example, can be used to facilitate diplomatic communications, it said.

The translator tool takes video of someone speaking one language and transforms it so the subject appears, and sounds, to be speaking a different language in real time. The same technology could also be used to pinpoint the linguistic markers that differentiate a deepfake from a real video.
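DeepMedia has not detailed how its translator works, but a generic speech-to-speech translation front end can be sketched with off-the-shelf open-source models. The Python snippet below is a minimal illustration of that general pipeline, assuming the Hugging Face transformers library; the model choices are illustrative, not a description of DeepMedia’s actual stack, and the voice-cloning and lip-sync stages that make such a tool synthetic media are noted only in comments.

```python
# Minimal sketch of a speech-to-speech translation pipeline, assuming the
# Hugging Face "transformers" library. Model names are illustrative picks,
# not DeepMedia's actual stack.
from transformers import pipeline

# Step 1: transcribe the source audio. Whisper supports many languages.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Step 2: translate the transcript, here from English to Russian.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ru")

def translate_speech(audio_path: str) -> str:
    """Return a Russian translation of the English speech in an audio file."""
    transcript = asr(audio_path)["text"]
    return translator(transcript)[0]["translation_text"]

if __name__ == "__main__":
    # Steps 3 and 4, omitted here, are what make the result synthetic media:
    # a voice-cloning text-to-speech model renders the translation in the
    # speaker's own voice, and a lip-sync model re-animates the mouth.
    print(translate_speech("un_address_clip.wav"))
```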

Hollywood has been using some form of deepfake technology for years, to make older actors appear decades younger or even provide new “life” to actors who have died. Actor Peter Cushing appeared in “Rogue One: A Star Wars Story” in 2016, more than two decades after his death.

Still, unauthorized deepfakes of celebrities and other politicians have spread widely across social media video platforms, prompting concern about the potential use of synthetic media to spread disinformation.

According to a 2021 report from the U.S. Department of Homeland Security, the threat of deepfakes comes not from the technology used to create them but from people’s natural inclination to believe what they see. As a result, deepfakes do not need to be particularly advanced or believable to spread misinformation effectively, the report said.

DeepMedia focuses on both detecting and producing synthetic media. In April, the Air Force Research Laboratory announced a partnership with the company to study the detection of deepfakes.

One of the main ways to detect whether a video is a deepfake is to analyze the language being used, which can reveal dialect mismatches between the video’s subject and the accompanying audio.
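As a toy version of that idea, the sketch below flags only the crudest possible inconsistency: audio in a language the subject is not known to speak. It assumes the transformers library for transcription and the langdetect package for language identification; real forensic analysis, as the article goes on to note, is far more demanding.

```python
# Toy language-consistency check for a suspect video's audio track, assuming
# the "transformers" and "langdetect" packages. This only catches the crudest
# mismatch; genuine dialect analysis requires expert human linguists.
from transformers import pipeline
from langdetect import detect

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

def language_mismatch(audio_path: str, expected_lang: str) -> bool:
    """Return True if the detected language of the speech differs from the
    language the subject is known to speak (ISO 639-1 codes, e.g. 'en')."""
    transcript = asr(audio_path)["text"]
    return detect(transcript) != expected_lang

if __name__ == "__main__":
    # Biden is only known to speak English, so Mandarin or Russian audio
    # attributed to him would be an immediate red flag.
    if language_mismatch("suspect_clip.wav", expected_lang="en"):
        print("Warning: audio language does not match the speaker's profile.")
```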

Analyzing videos for those differences takes time, money and effort, said Emma Brown, a co-founder of the company.

In many cases, the cryptologic linguist conducting the analysis must have a deep understanding of the language and its dialects as well as the country’s culture and history. Simply having a cursory knowledge of the language isn’t enough to pinpoint the flaws in deepfakes, she said.

As the military grapples with a major recruiting crisis, attracting the talent needed to fill cryptologic linguist positions poses yet another challenge. Rather than relying on scarce human specialists, Brown and Gupta said, DeepMedia has developed AI tools that take the place of linguists.

“There’s simply not enough people to do this without AI,” Gupta said.

Catherine Buchaniec is a reporter at C4ISRNET, where she covers artificial intelligence, cyber warfare and uncrewed technologies.
