Brian Blum, Israel21c, February 3, 2020
Reprinted with permission. Read the full article at Israel21c.
It’s November 2020, just days before the US presidential election, and a video clip comes out showing one of the leading candidates saying something inflammatory and out of character. The public is outraged, and the race is won by the other contender.
The only problem: the video isn’t authentic. It’s a “deepfake,” in which one person’s face is superimposed on another person’s body using sophisticated artificial intelligence, and a misappropriated voice is added via smart audio dubbing.
The AI firm Deeptrace uncovered 15,000 deepfake videos online in September 2019, double what was available just nine months earlier.
The technology can be used by anyone with a relatively high-end computer to push out disinformation – in politics, as well as in industries where credibility is key: banking, pharmaceuticals and entertainment.
Israeli startup Cyabra is one of the pioneers in identifying deepfakes fast, so they can be taken down before they snowball online.
Cyabra cofounder and CEO Dan Brahmy. Photo: courtesy
Cyabra CEO Dan Brahmy tells ISRAEL21c that there are two ways to train a computer algorithm to analyze the authenticity of a video.
“In a supervised approach, we give the algorithm a dataset of, say, 100,000 pictures of regular faces and face swaps,” he explains. “The algorithm can catch those kinds of swaps 95 percent of the time.”
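The supervised approach Brahmy describes is standard binary classification. Here is a minimal sketch in Python, assuming a labelled image folder with `data/real/` and `data/swap/` subdirectories; it fine-tunes an off-the-shelf network and is purely illustrative, not Cyabra’s actual model.

```python
# Illustrative supervised face-swap classifier (not Cyabra's model).
# Assumes images are laid out as data/real/*.jpg and data/swap/*.jpg.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder labels each image by its subdirectory: "real" vs. "swap"
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: genuine / swapped

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one epoch over the labelled dataset
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

With a large enough labelled set of swaps, a classifier along these lines is what the quoted 95 percent detection figure refers to.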
The second methodology is an “unsupervised approach” inspired by a surprising field: agriculture.
“If you fly a drone over a field of corn and you want to know which crop is ready and which is not, the analysis will look at different colors or the way the wind is blowing,” Brahmy explains. “Is the corn turning towards its right side? Is it a bit more orange than other parts of the field? We look for those small patterns in videos and teach the algorithm to spot deepfakes.”
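The unsupervised idea is closer to anomaly detection: learn what “normal” frames look like and flag the ones that deviate. A hedged sketch follows, using coarse colour histograms as a stand-in for the richer per-frame signals a production system would use; the file names are hypothetical.

```python
# Illustrative unsupervised outlier detection over video frames (not Cyabra's method).
import cv2
import numpy as np
from sklearn.ensemble import IsolationForest

def frame_features(frame, bins=16):
    """Coarse colour histogram as a simple per-frame descriptor."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def video_features(path):
    """Extract a feature vector for every frame of a video file."""
    cap = cv2.VideoCapture(path)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        feats.append(frame_features(frame))
    cap.release()
    return np.array(feats)

# Fit on frames believed to be genuine, then score a suspect clip.
normal = video_features("genuine.mp4")     # hypothetical file names
suspect = video_features("suspect.mp4")

detector = IsolationForest(contamination=0.05, random_state=0).fit(normal)
outlier_ratio = (detector.predict(suspect) == -1).mean()
print(f"{outlier_ratio:.0%} of frames look anomalous")
```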
Cyabra’s approach is more sophisticated than traditional methods of ferreting out deepfakes – for example, examining metadata to see where a picture was taken, what kind of camera was used and on what date it was shot.
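The traditional metadata check is simple enough to sketch: read the EXIF fields embedded in an image and compare them against the claimed provenance. The file name below is illustrative.

```python
# Traditional metadata inspection: print the EXIF fields mentioned above.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspect_frame.jpg")   # hypothetical file
exif = img.getexif()

for tag_id, value in exif.items():
    tag = TAGS.get(tag_id, tag_id)
    if tag in ("Make", "Model", "DateTime", "Software", "GPSInfo"):
        print(f"{tag}: {value}")
```

The weakness, of course, is that metadata is easy to strip or forge, which is why Cyabra analyzes the pixels themselves.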
“Our algorithm might not know the exact name of the manipulation used, but it will know that the video is not real,” Brahmy says.
Only a computer program can spot telltale signs the human eye would miss, such as eyeglasses that don’t fit perfectly or lip movements not perfectly synched with movements of the chin and Adam’s apple, Brahmy tells ISRAEL21c.
Staying a few steps ahead
Cyabra’s technology detects inauthentic nuances that the human eye would miss. Photo: courtesy
Deepfake detection technology must continually evolve.
In the early days – all the way back in 2017, when deepfakes first started appearing – fake faces didn’t blink normally. But no sooner had researchers alerted the public to watch for abnormal eye movements than deepfakes suddenly started blinking normally.
“To have a durable edge, you need to be a year or two ahead, to make sure no one can re-do what you just did,” Brahmy says.
That’s important both in catching the deepfakers and for a company like Cyabra to stay ahead of the competition.
Cyabra’s edge is that two of its four cofounders came out of IDF intelligence divisions where they looked for ways to foil terrorist groups trying to create fake profiles to connect with Israelis.
In addition, former Mossad deputy director Ram Ben Barak is on the company’s board of directors.
Fake social-media profiles
Cyabra’s deepfake detection technology was released only in the last month. For most of the two years since the company was founded, it has focused on spotting fake social-media profiles.
Cyabra cofounder and COO Yossef Daar. Photo: courtesy
Brahmy’s cofounder Yossef Daar claims there are 140 million fake accounts on Facebook, 38 million on LinkedIn, and 48 million bots on Twitter.
These, too, are not easy to detect.
Researchers from the University of Iowa discovered that some 100 million Facebook “likes” that appeared between 2015 and 2016 were created by spammers using around a million fake profiles.
Cyabra’s machine-learning algorithms run some 300 unique parameters to determine profile authenticity. A three-day-old profile with 700 friends whose user has no footprint outside of Facebook raises a red flag, for example.
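A toy version of one such rule, using only the red-flag example from the article (a three-day-old account with 700 friends and no footprint elsewhere); a real system scores hundreds of parameters like these, and the thresholds below are assumptions for illustration.

```python
# Toy profile-authenticity rule; thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Profile:
    age_days: int
    friend_count: int
    has_external_footprint: bool   # e.g. accounts on other platforms

def red_flags(p: Profile) -> list[str]:
    flags = []
    if p.age_days < 7 and p.friend_count > 500:
        flags.append("very new account with an unusually large friend list")
    if not p.has_external_footprint:
        flags.append("no presence outside this platform")
    return flags

# The article's example: three days old, 700 friends, no footprint elsewhere.
print(red_flags(Profile(age_days=3, friend_count=700, has_external_footprint=False)))
```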
In the 2016 U.S. elections, fake profiles on social media were the biggest problem – deepfakes didn’t exist yet.
By now, though, you’ve probably seen a few deepfakes yourself: Facebook CEO Mark Zuckerberg bragging about having “total control of billions of people’s stolen data,” former US President Obama using a profanity to describe President Trump or Jon Snow apologizing for the writing in season 8 of “Game of Thrones.”
Brahmy says the leadup to the 2020 election season is the right time to offer Cyabra’s solution.
Investors agree. Cyabra has raised $3 million from TAU Ventures and $1 million from the Israel Innovation Authority. The 15-person company started in The Bridge, a seven-month Tel Aviv-based accelerator sponsored by Coca-Cola, Turner and Mercedes. The company is now based at TAU Ventures, with a small presence in the United States as well.
Public and private sector clients
Cyabra’s clients prefer not to be named, although Brahmy did tell ISRAEL21c that 50% of its clients are in the public sector – governmental organizations or agencies – and the other half are “in the world of sensitive brands: consumer product, food and beverage, media conglomerates.”
“Imagine you’re in the business of providing unbiased information and suddenly 500 bots send you a message with a falsified picture and you’re ready to publish it. We want to be there five seconds before you pull the trigger, to let you know it’s false,” says Brahmy.
This heatmap shows the level of doctoring done to a picture or frame in a video. Emphasized areas represent more heavily forged pieces of content. Image courtesy of Cyabra
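Cyabra does not disclose how its heatmap is generated. One well-known, openly documented way to produce a forgery-localization heatmap is error level analysis (ELA): resave a JPEG at a fixed quality and highlight where the recompression error is largest, since edited regions tend to stand out. The sketch below uses that stand-in technique and a hypothetical file name.

```python
# Illustrative forgery heatmap via error level analysis (not Cyabra's disclosed method).
import io
from PIL import Image, ImageChops

original = Image.open("suspect_frame.jpg").convert("RGB")   # hypothetical file

# Recompress at a known quality and diff against the original.
buffer = io.BytesIO()
original.save(buffer, "JPEG", quality=90)
buffer.seek(0)
resaved = Image.open(buffer)

ela = ImageChops.difference(original, resaved)

# Stretch the differences so they are visible; brighter areas suggest heavier editing.
max_diff = max(ch_max for _, ch_max in ela.getextrema()) or 1
ela = ela.point(lambda px: min(255, px * 255 // max_diff))
ela.save("heatmap.png")
```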
Cyabra leaves the task of fact-checking content for “fake news” to other companies such as NewsGuard and FactMata. (Neither company is Israeli.)
There are also other companies dealing with deepfakes and fake profiles. But, Brahmy says, “we’re the only one doing both, with the technical capability to detect deepfakes along with cross-channel analysis to detect the bots [powering fake social media profiles], all under one roof.”
Facebook announced in January 2020 that it is banning deepfakes intended to mislead rather than entertain. But can Facebook really get ahead of all the deepfakes out there – and those to come?
If Cyabra and companies like it succeed, the next time you see a politician or celebrity saying something you find reprehensible, it might just be true.