Impostor Syndrome

So what is being done on the tech side to combat the proliferation of deepfakes? Farid’s answer was brutally honest. “I think it’s probably fair to say,” he told us, “that today there is no operationalized technique for reliably detecting deepfakes. Part of that is because deepfakes are a relatively new phenomenon, and we and other people are in the early stages of developing those techniques. I predict that by the end of the year, maybe by the fall, we certainly will have some techniques out there. I think the first round we’ll start seeing in the next six to 12 months. But there is very much a cat-and-mouse game to be played here. As we develop forensic techniques, deepfakers will learn them and try to circumvent them. I think the way that’s going to end up is the way most of these cat-and-mouse games end up. We will make it more difficult. You will need a little bit more skill. You’ll have to work a little bit harder. But in the end, you’ll probably still be able to create fake content. Yet if I can take out of the hands of the average person the ability to create compelling fakes that are undetectable, I will consider that a success. If I have now moved it into the hands of a relatively small number of people, while that is still a risk, I think we can probably agree that it is a significantly smaller risk than the average person on Reddit being able to create this fake content. So that’s our goal: to keep raising the bar.”

As to what the actual countermeasures will be, Farid suggested two techniques that rely on human nature. One is developing what he called a “soft biometric” to distinguish real recordings of, say, Barack Obama from deepfakes. “The basic idea is this,” he said. “When somebody is speaking, there is a correlation between what they say and how they say it. For example, when I frown and pinch my brow, something is upsetting to me. If I say something funny, I tend to smile and maybe lift my head up a little bit. How our faces move and how our heads move, we are finding, is tightly correlated with what we are saying.” The other is “controlled capture” technology, which, he told us, functions at the point of recording to authenticate the material. “Imagine,” Farid said, “that you witness a human rights violation, police misconduct, a natural disaster, some remarkable event — and you don’t want people down the line questioning the authenticity of your video or your image. So instead of capturing with a standard iPhone or Android camera, you use controlled capture software. There are companies out there that produce this commercially. At the point of recording you cryptographically sign the content. You put that on the blockchain, a distributed and immutable ledger. Then you can, with fairly high confidence, authenticate that content going down the line. This may be where we have to go, with Apple and the Android makers building this directly into the camera app, where you have the option, like turning the flash on and off, to record securely or not.”
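Farid’s “soft biometric” idea can be illustrated with a toy example: extract a per-frame facial-motion signal and an aligned speech signal from a video, then measure how strongly they move together. Genuine footage should show a clear correlation; a face swapped onto someone else’s audio often will not. The sketch below is purely illustrative — the signal names and numbers are invented, and real systems use far richer features (facial action units, prosody) than the two toy series here.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy per-frame signals (invented numbers, one value per video frame):
speech_energy   = [0.1, 0.4, 0.9, 0.7, 0.2, 0.5, 0.8, 0.3]
# Genuine footage: head motion tracks the speech.
real_head_pitch = [0.2, 0.5, 0.8, 0.6, 0.3, 0.5, 0.9, 0.2]
# A fake: the swapped face moves independently of the audio.
fake_head_pitch = [0.9, 0.1, 0.4, 0.2, 0.8, 0.1, 0.3, 0.7]

real_score = pearson(speech_energy, real_head_pitch)  # close to 1.0
fake_score = pearson(speech_energy, fake_head_pitch)  # near or below 0.0
```

The detector’s decision then reduces to a threshold on the correlation score — which is also why this is a cat-and-mouse defense: a forger who learns the measured features can try to reproduce them.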

Farid does not share the utopian view of tech that permeates large segments of our society. “There were people in the early days sounding the alarm bells on AI,” he notes. “Elon Musk, of course, has famously been talking about concerns about the turning-over of decisions to AI-powered systems — anything from algorithms that make bail decisions to algorithms that make admissions decisions at universities. People have been concerned about that, and rightfully so. It didn’t take long for us to see what the threat of the tech behind deepfakes was, even before they started appearing. So how should the people who are developing these technologies think about what they’re doing? Because the reality is that while there are some cool applications for these technologies, everybody also agrees that there are some nefarious applications. How do we as a community wrestle with moving technology forward, making advances, while knowing that those technologies are almost certainly going to be weaponized? If a biologist developed a deadly virus and said, ‘Let’s give this to the public and see what happens,’ I don’t think anybody would think that was acceptable. Yet we do that almost all the time with technology.”