Wednesday 8 May 2019

How AI is powering fake news: 'These people never existed'

'Seeing is believing' no longer holds true as technology makes the job of detecting false content increasingly difficult
By Vikram Khanna, Associate Editor, The Straits Times, 8 May 2019

At the Horasis Global Meeting in Cascais, Portugal, last month - a gathering of innovators and social scientists from around the world - a researcher in artificial intelligence (AI) showed me about 20 photographs of people's faces on her mobile phone.

She asked me to guess which of those faces belonged to actual people.

They all looked lifelike to me, down to fine details like wrinkles, skin tones, hair texture and candid expressions.

So I said, of course, those must be real people. I was wrong.

All the photos were fake - they were photos of people who never existed.

They had been created by a new kind of AI - a generative adversarial network, or GAN - developed by the technology company Nvidia and made public.

Earlier this year, a website called thispersondoesnotexist.com was launched. It uses the technology to display faces of people who have never existed - and every one of them looks real.

I knew there were apps like FaceApp, which use AI to transform anyone's face - making it look older or younger, adding a smile or changing a hairstyle - as well as various selfie-editing apps.

But what I just saw was something different.

It was the creation of real-looking people out of thin air.

It was proof that AI can now exercise something like imagination, coming up with convincing results without human supervision.

The same technology can generate imagined images of almost anything - pets, cars, homes - and can even turn rough sketches into beautiful art.

It can also create original poetry.

All of that is already available. Some of these imagined, synthetic renderings can be helpful, for instance, to designers, architects, interior decorators and artists, enabling them to experiment and generate new ideas.
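
For the technically curious, these generators rest on the GAN idea mentioned earlier: two neural networks locked in a contest, one producing images from random noise while the other judges whether each image is a real photograph or a fake, until the forger wins. The sketch below shows that adversarial loop in PyTorch. It is a minimal illustration only - the tiny layer sizes, the learning rates and the real_batch input are placeholder assumptions, nowhere near the scale of Nvidia's actual face generator.

```python
import torch
import torch.nn as nn

# Illustrative sizes only - real face generators are vastly larger.
NOISE_DIM, IMG_DIM = 64, 28 * 28

# Generator: maps random noise to a synthetic "image" in [-1, 1].
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (raw logit).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One round of the adversarial game (real_batch: n x IMG_DIM, in [-1, 1])."""
    n = real_batch.size(0)
    fake_batch = G(torch.randn(n, NOISE_DIM))

    # 1. Train the discriminator: label real photos 1, fakes 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_batch), torch.ones(n, 1)) +
              loss_fn(D(fake_batch.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator: push fakes towards 1.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake_batch), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()
```

Scaled up to millions of parameters and trained on vast collections of photographs, this same push-and-pull is what produces faces indistinguishable from the real thing.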

UNINTENDED CONSEQUENCES

But they can also have unintended consequences, some of which can be dangerous.

For example, technology writer Adam Ghahramani, who argues that Nvidia's new AI technology should be restricted, warns that it is a gift to con artists and purveyors of fake news. Non-existent people can, for instance, be portrayed in sensationalised stories as the perpetrators or victims of heinous crimes, creating panic.

Romance scammers can fool love-struck online paramours by impersonating imaginary people using unverifiable images. Pet scammers can cheat people into paying for non-existent pets online.

Untraceable images can also create problems for policing, making criminal evidence more difficult to obtain and cases harder to prove.

The already flourishing fake news industry has also been boosted by new techniques that use AI for video and audio.

Last year, tech website The Verge reported that film-maker Jordan Peele had teamed up with news portal BuzzFeed to create a "realistic fake" video of former United States president Barack Obama delivering a "public service announcement" in which he calls current President Donald Trump "a dips***."



The video was produced with the help of an AI-powered face-swopping tool called "FakeApp".
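
Face-swopping tools of this kind are commonly built on a shared-encoder autoencoder: one network learns to compress any face into a compact code, and two separate decoders learn to reconstruct person A and person B from that code; feed person A through person B's decoder at playback time and A's expressions are rendered with B's face. The sketch below illustrates the idea in PyTorch - the layer sizes and the faces_a and faces_b inputs are assumptions for illustration, not FakeApp's actual code.

```python
import torch
import torch.nn as nn

IMG_DIM, CODE_DIM = 64 * 64 * 3, 128  # illustrative sizes only

def make_decoder():
    """Decoder: reconstructs one person's face (pixels in [0, 1]) from a code."""
    return nn.Sequential(
        nn.Linear(CODE_DIM, 512), nn.ReLU(),
        nn.Linear(512, IMG_DIM), nn.Sigmoid(),
    )

# One shared encoder learns a generic "face code"...
encoder = nn.Sequential(
    nn.Linear(IMG_DIM, 512), nn.ReLU(),
    nn.Linear(512, CODE_DIM),
)
# ...and each person gets a private decoder.
decoder_a, decoder_b = make_decoder(), make_decoder()

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()],
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    """Teach each decoder to reconstruct its own person's face."""
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a) +
            loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

def swap_face(face_a):
    """The deepfake trick: encode person A, decode as person B."""
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```

In practice such tools train on thousands of video frames of each face and use convolutional layers and alignment steps, but the decoder swap is the heart of the trick.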

A Canadian AI start-up called Lyrebird has developed a voice-imitation algorithm that can mimic any person's voice and accent after analysing barely a minute of pre-recorded audio.

In a promotional exercise, the company provided samples of the voices of Mr Obama, Mr Trump and former US presidential candidate Hillary Clinton, extolling its technology. The voices are not yet entirely convincing, but plenty of people could be fooled - and the quality of the mimicry is expected to improve as the technology matures.



FAKE NEWS GETS NEW TOOLS

Such breakthroughs in AI technology have given the fake news industry powerful new tools.

Now, anybody - politicians, religious leaders, CEOs or even people we know personally - can be made to say anything, in what sounds like their own voice, intonation and all, and on video.

For many people, disbelief will become harder and harder to sustain as the technology improves. The maxim "seeing is believing" is no longer true.

In what is developing into an AI arms race, researchers are working on new AI technologies to detect AI-generated fakes. But until these appear - if they do - the gatekeepers who are supposed to guard against fake content, such as social media companies, mainstream media, corporations and governments, will have their work cut out. There may be times when they, too, will be fooled, despite their best intentions.

The problem will be compounded by the fact that fake news spreads faster than the truth.

In 2017, before the appearance of the AI innovations discussed above, consulting firm Gartner predicted that by 2022, most people in mature economies will consume more false than true information. Academic research adds credibility to this claim.

In a study in the journal Science published last year, Massachusetts Institute of Technology researchers Sinan Aral, Deb Roy and Soroush Vosoughi found that false stories "spread significantly farther, faster and more broadly than did true ones".

For instance, falsehoods were 70 per cent more likely to be retweeted than true stories, and they reached far more people.

Falsehoods about politics, about urban legends - mainly myths and rumours - and about science reached the most people.

Such falsehoods can do serious harm. They can influence political choices, lead to a misallocation of resources during emergencies and create panic and fear. Some of these outcomes have materialised.

IT'S PEOPLE, NOT BOTS

Worryingly, the research also found that people were more responsible than bots for spreading falsehoods.

One reason is that false news is more novel and inspires surprise, which makes it more likely to be shared. Spreaders are also motivated by a desire to demonstrate that they are people who are "in the know" or have some "inside information".

These disquieting findings apply to advanced, mature democracies.

We can only imagine what the results of similar studies might be for developing countries with a higher proportion of uneducated people equipped with smartphones and access to easy-to-use AI apps.

The evidence we have so far is that falsehoods that go viral have led to mayhem - for example, a spate of lynchings and murders in India since 2017, triggered by WhatsApp messages - and even inspired acts of communal violence and terrorism.

The latest advances in AI, harnessing video and audio, could turn the rising tide of false news into a tsunami.

As digital marketeers know, visual content commands more attention than text. Video, for example, makes information more digestible, drives more engagement and retention, and generates 12 times more shares than text alone.

Singapore's move to legislate against fake content is timely and important - even if the proposed law is controversial.

The debate so far has focused mainly on who should be the arbiter of what is true and what is false, and what recourse people might have to challenge rulings.

But fighting falsehoods should also address the technologies that make them appear more credible and convincing. In any event, it will take more than a law, however well-intentioned and implemented, to ensure that people are informed by content that is true rather than false.

It will take people themselves - us, in other words - to be more vigilant and discriminating about the information we consume and share.

