In the video Op-Ed above, Claire Wardle responds to growing alarm around "deepfakes": seemingly realistic videos generated by artificial intelligence. First seen on Reddit in pornographic videos doctored to feature the faces of female celebrities, deepfakes gained wider attention in 2018 through a fake public service announcement featuring former President Barack Obama. Words and faces can now be almost seamlessly superimposed. The result: We can no longer trust our eyes.
In June, the House Intelligence Committee convened a hearing on the threat deepfakes pose to national security. And platforms like Facebook, YouTube and Twitter are contemplating whether, and how, to address this new disinformation format. It's a conversation gaining urgency in the lead-up to the 2020 election.
Yet deepfakes are no scarier than their predecessors, "shallowfakes," which use far more accessible editing tools to slow down, speed up, omit or otherwise manipulate context. The real danger of fakes, deep or shallow, is that their very existence creates a world in which almost everything can be dismissed as false.
Read the story here: nyti.ms/2MgboZ9
More from The New York Times Video: nytimes.com/video
Whether it's reporting on conflicts abroad and political divisions at home, or covering the latest style trends and scientific developments, New York Times video journalists provide a revealing and unforgettable view of the world. It's all the news that's fit to watch.
Aug 14, 2019