
Generative AI and Deepfakes

There is something magical about sitting in a movie theater and watching an adventure where a Norse god of thunder, a man in a flying suit of armor, and a flash-frozen World War II hero fight to save the galaxy from a purple alien demigod and his army. The characters look so natural, and the battle scenes are so realistic, that you lose yourself in the story. To enjoy it, you willingly suspend your capacity for disbelief. For a short time, you believe in Captain America and Iron Man.

To help you suspend your capacity for disbelief, and in the service of telling unforgettable stories, moviemakers have pioneered the use of technology to fool our eyes, ears, and minds. Over the past decade, scientists and technologists have made incredible advances in data science, computing power, data storage, artificial intelligence, and, most remarkably, Generative AI. Generative AI is a subset of the broader category of synthetic media, defined as content modified or created through artificial intelligence and machine learning. Generative AI can produce text, imagery, audio, and synthetic data that are increasingly hard to identify as the products of artificial intelligence.

Once you leave the movie theater, this storytelling technology, with its suspension of disbelief, has an increasingly sinister side. Foreign governments, spies, terrorists, hackers, stalkers, con artists, political activists, misogynists, sexual predators, perpetrators of revenge porn, financial criminals, and thieves have discovered the power of Generative AI and deepfakes. In the hands of bad actors, these storytelling technologies erode the information ecosystem, threaten national security, influence elections, facilitate financial fraud, and endanger women and children.

Unfortunately, the anonymity of the Internet and the power of these video, audio, and text storytelling technologies give sexual predators, misogynists, pedophiles, and disgruntled boyfriends powerful and poorly regulated tools. Women are overwhelmingly the victims of the misuse of deepfake technology. Sensity.AI, a firm specializing in deepfake detection, reported that in 2020 over 100,000 computer-generated fake nude images of women and underage children were created and distributed without their consent or knowledge. Sensity.AI also found that approximately 90% to 95% of deepfake videos created since 2018 are non-consensual pornography. Women targeted by the malicious use of these technologies face harassment, mental anguish, reputational damage, and economic harm.

With its powerful storytelling technologies and its ability to spoof living people and their biomarkers, Generative AI is a boon for hackers and criminals. Criminals use these technologies to defeat biometric security measures such as facial recognition and fingerprinting. Researchers at the University of Surrey developed a deep learning algorithm that can defeat 20% of all fingerprint scanners. Intel Labs and the University of Oregon have built a machine learning program that generates synthetic images of people capable of circumventing facial recognition systems. These technologies also facilitate the creation of false documents such as driver’s licenses, passports, and corporate IDs. Criminals also use Generative AI to deceive employees with deepfake audio and video messages impersonating senior executives, circumventing a company’s financial controls and tricking employees into sending money or revealing corporate secrets.

Our major geopolitical adversaries run sophisticated and well-funded propaganda and information warfare programs that have become increasingly effective with the addition of Generative AI and its improved algorithms for creating deepfake text, audio, and images. This technology can create false videos of candidates and elected officials and be used to embarrass or blackmail officials or create espionage opportunities. It can also create false narratives that seek to weaken confidence in public institutions, influence elections, steer public debate, or shape domestic or foreign policy. The Director of National Intelligence has warned that deepfakes pose a serious threat to national security.

In this world of Generative AI and deepfakes, how can you manage the risks? First, if you or anyone you know is being harassed or is the victim of a sexual predator, a pedophile, revenge porn, or a cyberbully, reach out and let us help… 914.576.8706. Second, be suspicious. All text, audio, and imagery need to be checked and verified. Unless you are in the theater, never suspend your capacity for disbelief. It is far too easy to fall prey to Generative AI and its powerful storytelling technology.
