

A shocking dossier intended to detonate a bomb under Joe Biden’s presidential campaign was defused after a researcher spotted that its author was a computer-generated deepfake.

A document penned by Typhoon Investigations began circulating in right-wing circles in September, alleging compromising ties between Biden’s son, Hunter Biden, and China.

But “Martin Aspen”, the document’s purported author, isn’t real. His likeness was produced by a generative adversarial network (GAN), a branch of artificial intelligence, and the report’s allegations were baseless.

Disinformation researchers have warned that deepfake personas like Martin Aspen pose a threat to democracy, though until now the threat has been minimal. We’ve seen convincing examples of Trump and Obama deepfakes, though neither was used for nefarious political purposes.

The Martin Aspen incident is something else — if political fakery is really on the rise, how do we protect ourselves?

There are tell-tale signs when a neural network has produced a fake image

First, it’s helpful to understand how these images are created.

In a GAN, two neural networks compete against each other: a generator produces synthetic images, while a discriminator tries to tell those fakes apart from real photos. Through this contest, played out over many rounds of training, the generator learns to produce images that are indistinguishable from the real thing.
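To make the idea concrete, below is a minimal sketch of that adversarial loop in PyTorch. It is illustrative only: the tiny fully connected networks, image size, learning rates, and random “training data” are placeholder assumptions, not the system behind the Aspen image. Real face generators follow the same recipe, just with far larger convolutional networks trained on photographs of real people.

```python
# Illustrative GAN training loop (PyTorch). All sizes and data are placeholders.
import torch
import torch.nn as nn

LATENT_DIM = 64      # random noise vector fed to the generator
IMG_DIM = 28 * 28    # toy image size; face GANs use far larger images

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how likely an image is to be real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to separate real from
    fake, then the generator learns to fool the updated discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator update: reward correct real/fake calls.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()  # no generator gradients here
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label fakes as real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Placeholder usage: one step on a batch of random stand-in "real" images.
train_step(torch.randn(16, IMG_DIM))
```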

GANs have …


Source: Business Insider – Tech

      
