Ethereum Co-Founder Vitalik Buterin on Tackling Deepfake AI Risks: 'Ask Security Questions'
Vitalik Buterin, a co-founder of the cryptocurrency project Ethereum, has raised an alert about deepfakes, AI-generated videos that impersonate real people, being used to persuade others into making financial transactions. For Buterin, the issue is not purely cryptographic and can be tackled by asking security questions of friends and colleagues.
Ethereum Co-Founder Vitalik Buterin Advises Asking Security Questions in Deepfake Era
Vitalik Buterin, a co-founder of Ethereum, has weighed in on the deepfake issue, stating that these impersonation attempts can be countered in security settings without relying on cryptographic techniques. Commenting on an incident in which a finance officer authorized a transfer worth $25 million after being tricked by attackers using deepfake technology, Buterin stated that security questions, among other measures, could have prevented it.
Buterin stressed that deepfakes, which rely on artificial intelligence (AI) technology, have improved dramatically over the years, going from “embarrassingly obvious and bad” to increasingly difficult to distinguish from the real thing. This makes security questions, layered with other mutual verification techniques, necessary today.
Buterin explained:
Security questions are nice because, unlike so many other techniques that fail because they are not human-friendly, security questions build off of information that human beings are naturally good at remembering.
As a complement, other techniques can be combined with these questions, including pre-agreed code words and even a duress key: a word that, when used, signals to the other party that the speaker is being coerced or threatened.
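For illustration, the kind of check these verbal safeguards imply can be sketched in code. The Python below is a hypothetical example only; the word choices and the classify_verification function are invented for this sketch, and in practice such words would be agreed in person rather than stored in software.

```python
# Illustrative sketch of a pre-agreed code word plus a duress word.
# The words and the idea of encoding this check at all are hypothetical.

SAFE_WORD = "bluebird"     # hypothetical pre-agreed code word
DURESS_WORD = "sunflower"  # hypothetical word signalling coercion

def classify_verification(word_given: str) -> str:
    """Classify a verification attempt based on the word supplied."""
    if word_given == DURESS_WORD:
        # The counterparty is signalling coercion: halt and escalate quietly.
        return "duress"
    if word_given == SAFE_WORD:
        return "verified"
    return "unverified"

# Example: words supplied during a call requesting a transfer.
print(classify_verification("bluebird"))   # verified
print(classify_verification("sunflower"))  # duress
```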
Regarding transactions, Buterin also recommends adding delays for irreversible actions, implemented at the policy level or even in code. “In a post-deepfake world, we do need to adapt our strategies to the new reality of what is now easy to fake and what remains difficult to fake, but as long as we do, staying secure continues to be quite possible,” he concluded.
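The idea of a code-level delay can be illustrated with a short sketch. The Python below is a hypothetical example, not an implementation from Buterin’s post; the PendingTransfer class and the 24-hour window are assumptions chosen for the illustration.

```python
import time
from dataclasses import dataclass, field

DELAY_SECONDS = 24 * 60 * 60  # hypothetical 24-hour waiting period

@dataclass
class PendingTransfer:
    """An irreversible transfer that only executes after a mandatory delay."""
    recipient: str
    amount: float
    requested_at: float = field(default_factory=time.time)
    cancelled: bool = False

    def cancel(self) -> None:
        # Anyone reviewing the queue can cancel within the delay window
        # if the request turns out to be a deepfake-driven impersonation.
        self.cancelled = True

    def execute(self) -> bool:
        """Attempt execution; refuse if cancelled or still inside the delay."""
        if self.cancelled:
            return False
        if time.time() - self.requested_at < DELAY_SECONDS:
            return False  # still inside the mandatory waiting period
        # ...perform the irreversible transfer here...
        return True

# Example: a large transfer request sits in the queue for a day, giving
# colleagues time to verify the requester through a separate channel.
transfer = PendingTransfer(recipient="treasury-account", amount=25_000_000)
print(transfer.execute())  # False -- blocked until the delay expires
```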
What do you think about Vitalik Buterin’s advice for dealing with deepfakes? Tell us in the comments section below.