Everyone You Know Is (Deep) Fake!

Malicious Actors in the Age of AI

Victor Hogrefe
5 min read · Mar 28, 2023

Picture this: One day, you receive a frantic call from your spouse. Their voice trembles with fear and desperation as they reveal they’ve been kidnapped. The abductors demand a significant amount of cryptocurrency be sent to a specific address. Concerned for your spouse’s safety, you hastily transfer the funds, hoping and praying for a favourable outcome. You also alert the authorities, but they can offer little assistance. After the payment is confirmed, the line goes dead, and you’re left to worry about the uncertain fate of your loved one. However, just moments later, your spouse walks through the door, completely unharmed and oblivious to the events that just transpired. It becomes evident that you’ve been the victim of a cruel scam.

The disturbing fact about this scenario is that the scammers aren't even human. They are the product of malicious AI software, operating from an office in Kolkata. The system gathers personal information, such as voice patterns, then auto-dials thousands of potential victims, extorting money through fabricated kidnapping threats or other malicious schemes.

As the gap between human and machine communication narrows due to the advancements in large language models, society becomes increasingly vulnerable. Trust becomes a scarce commodity, and we must grapple with the challenge of verifying the authenticity of those we interact with, be it social media users, fellow voters, consumers, or even family members.

The question arises: Which social media platform will be the first to have over 50% of its “daily active users” replaced by AI agents? How can we maintain trust in the digital age when even well-established news outlets could be infiltrated and manipulated by AI-driven deepfakes, presenting fabricated stories like the assassination of a president, complete with high-definition video footage?

In this rapidly evolving landscape, humans and society are ill-equipped to confront the dangers posed by malicious AI. The traditional methods of verifying facts, messages, videos, pictures, and news stories are no longer sufficient. It has become imperative to develop an entirely new form of verification methodology, one that is resilient to AI-generated fakes and manipulations.

Moreover, the issue of trust has been further exacerbated by existing social and political forces that exploit divisions within society, creating separate realities for different groups. Democrats and Republicans struggle to agree on which day of the week it is, or who won an election, even in the absence of AI interference. With the addition of malevolent AI, these divisions could deepen, plunging society into greater discord and confusion.

The advent of increasingly sophisticated AI technologies has exposed the limitations of human society in dealing with the perils they present. To counteract the damage malicious AI can inflict, we must urgently devise new verification methodologies to safeguard the authenticity and reliability of the information we consume. The consequences of inaction could be disastrous, as trust erodes and the boundary between reality and fiction blurs.

Here are three strategies to help deal with this issue:

1) Public-key cryptography (or some other secure method, like quantum cryptography) needs to move visibly into mainstream, conscious usage.

2) More AI will be needed to check on the authenticity of material (like detecting deepfakes, etc.); fighting fire with fire.

3) Some costs must be imposed on the spread of information, such as proof-of-work, or some other form of decentralized protocol.

Public Key Cryptography

The visible integration of public-key cryptography may assuage some concerns, but it poses a user-experience challenge. Every news channel, website, and public figure would sign all of their messages and broadcasts with their private key, and anyone listening, watching, or reading could verify the signature with readily available apps.

This does not solve the problem, but it may help. Taking it even further could mean integrating public-private key pairs into every recording device, such that all images and video can be authenticated as real and as originating from specific hardware, like a phone, camera, or microphone. Deepfakes do not originate inside hardware; they are synthetic or altered images, created from the obscure imagination of complex weight matrices. If the digital signature of a piece of media included the hash of that media, then any alteration would be easily detectable.
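To make the idea concrete, here is a minimal sketch of device-level signing, assuming an Ed25519 keypair and Python's `cryptography` library; both are illustrative choices rather than a prescription. The private key would live inside the camera or newsroom, and anyone holding the published public key could confirm that the bytes were never altered.

```python
# A minimal sketch of signed media (illustrative, not a standard).
# Ed25519 hashes the message internally, so the signature binds the
# exact bytes of the recording: change one bit and verification fails.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # would live inside the device
public_key = private_key.public_key()       # published for verifiers

media = b"raw bytes of a photo or video frame"  # stand-in for a real file
signature = private_key.sign(media)

try:
    public_key.verify(signature, media)  # raises if even one bit changed
    print("authentic: bytes match the device's signature")
except InvalidSignature:
    print("tampered or synthetic: signature does not verify")
```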

Fight AI with more AI

Deepfakes can be detected by training convolutional neural networks on fake images. The difficulty is that the way deep neural networks store and represent information within their weight matrices leaves them open to being tricked as well. A picture of a cat with just a few pixels changed in strategic places can make an AI agent think it is looking at an airplane, even when the change is undetectable to a human observer. The networks trained to detect deepfakes can themselves be tricked by networks that create deepfakes specifically to fool the deepfake-detectors. In turn, we must train new networks to detect the deepfake-detector-deepfakers. We are thus caught in an endless tug-of-war between fakers and detectors, a path that promises only an ever-escalating resource contest for anyone willing to take it.
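The fragility is easy to demonstrate. Below is a sketch of the Fast Gradient Sign Method (FGSM), one well-known way of crafting such pixel-level perturbations; the pretrained ResNet and the epsilon value are illustrative stand-ins for whatever classifier a detector might actually use.

```python
# FGSM sketch: nudge each pixel slightly in the direction that increases
# the classifier's loss. The per-pixel change is tiny, yet it is often
# enough to flip the predicted class.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_perturb(image, label, epsilon=0.03):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()

x = torch.rand(1, 3, 224, 224)      # stand-in for a photo of a cat
y = model(x).argmax(dim=1)          # the model's original prediction
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```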

Proof-of-Work Protocols

Another option is to employ truth machines, such as large proof-of-work (PoW) networks like the Bitcoin protocol, to make it prohibitively expensive for scammers to operate. Anti-spam was in fact the original purpose of PoW: it levied a computational tax on the spread of information, such as bulk email.
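A hashcash-style stamp illustrates the mechanism: finding a valid nonce costs the sender real compute, while checking it costs the receiver a single hash. The difficulty below is a toy value chosen for illustration.

```python
# Toy hashcash-style proof of work: find a nonce so that
# SHA-256(message || nonce) falls below a target. Expensive to mint,
# cheap to verify -- a computational "postage stamp" on a message.
import hashlib
from itertools import count

def mint_stamp(message: bytes, difficulty: int = 20) -> int:
    target = 1 << (256 - difficulty)
    for nonce in count():
        digest = hashlib.sha256(message + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # takes ~2**difficulty hashes on average

def verify_stamp(message: bytes, nonce: int, difficulty: int = 20) -> bool:
    digest = hashlib.sha256(message + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

nonce = mint_stamp(b"hello world")
print(verify_stamp(b"hello world", nonce))  # True, with a single hash
```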

Combining key-signed media with proof-of-work protocols may prove to be a powerful way of securing data authenticity, but it comes at a significant cost. It is conceivable that an advanced society will spend a considerable share of its resources and energy on such networks to create a reality anchor, not only to authenticate information in this world, but also to serve as a lighthouse within simulated worlds, which may soon outgrow the possibilities of actuality.

None of these methods would fully guard against a truly intelligent AI agent, let alone a Superintelligence, but they could impose costs on malicious actors until we figure out how to deal with the AI issue in earnest. In the long run, we must guard against a dispassionate God that may kill us unintentionally, rather than the evil intent of scammers, criminals, and politicians who seek to use AI tools for their own benefit. We shall cross that bridge when we get there.

