A UN adviser says the world needs to be "vigilant" as artificial intelligence technology improves, allowing for more realistic-looking deepfakes.

Deepfakes refer to media, typically video or audio, manipulated with AI to falsely depict a person saying or doing something that never happened in real life.

"A digital twin is essentially a replica of something from the real world… Deepfakes are the mirror image of digital twins, meaning that someone had created a digital replica without the permission of that person, and usually for malicious purposes, usually to trick somebody," California-based AI expert Neil Sahota, who has served as an AI adviser to the United Nations, told CTVNews.ca over the phone on Friday.

Deepfakes have been used to produce a wide variety of fake news content, such as one supposedly showing Ukrainian President Volodymyr Zelenskyy telling his country to surrender to Russia.

Scammers have also used deepfakes to produce false celebrity endorsements. In one instance, an Ontario woman lost $750,000 after seeing a deepfake video of Elon Musk appearing to promote an investment scam.

On top of scams and fake news, Sahota notes that deepfakes have also been widely used to create non-consensual pornography. Last month in Quebec, a man was sentenced to prison for creating synthetically generated child sexual abuse imagery, using social media photos of real children.

"We hear the stories about the famous people, [but] it can actually be done to anybody. And deepfake actually got started in revenge porn," he said.

Sahota says people need to have a keen eye for videos and audio that appear off, as it could be a sign of manipulated media. "If it's a video, you got to look for weird things, like body language, weird shadowing, that kind of stuff. For audio, you got to ask… 'Are they saying things they would normally say? Do they seem out of character? Is there something off?'" he explained.

At the same time, Sahota says policymakers need to do more when it comes to educating the public on the dangers of deepfakes and how to spot them. He also suggests there should be a content verification system using digital tokens to authenticate media and snuff out deepfakes.

"Even celebrities are trying to figure out a way to create a trusted stamp, some sort of token or authentication system so that if you're having any kind of non-in-person engagement, you have a way to verify," he said. "That's kind of what's starting to happen at the UN level."

But the unleashing of powerful generative AI to the public is also raising concerns about another phenomenon: as the technology becomes more prevalent, it will become easier to claim that anything is fake.

"That's exactly what we were concerned about: that when we entered this age of deepfakes, anybody can deny reality," said Hany Farid, a digital forensics expert and professor at the University of California, Berkeley.

The "liar's dividend" is a term coined by law professors Bobby Chesney and Danielle Citron in a 2018 paper laying out the challenges deepfakes present to privacy, democracy, and national security. The idea is that, as people become more aware of how easy it is to fake audio and video, bad actors can weaponize that skepticism.

"Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence," Chesney and Citron wrote.

So far, though, courts aren't buying claims of deepfaked evidence. In Musk's case, the judge did not buy his lawyers' claims.

"If lawyers start to get juries to demand all the bells and whistles to prove that a piece of evidence is not a fake … that is a way for lawyers and for their clients who are seeking to downplay or dismiss damning evidence against them to essentially run up the bills and make it more expensive, more time-consuming for the other side to get that damning piece of evidence admitted," she said. That could shut out people who don't have the resources to hire experts.

And whether inside or outside the courtroom, denying that real events actually occurred has corrosive effects.
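The article describes Sahota's token-based verification idea only at a high level. As a rough illustration of the general concept, the sketch below uses a keyed hash (HMAC) to bind a token to a media file's exact bytes, so any later edit invalidates the token. This is a simplified assumption of how such a scheme could work, not a description of any deployed system; real content-provenance efforts (such as the C2PA standard) use public-key signatures and signed metadata so verifiers don't need a shared secret. All names here (`issue_token`, `verify_token`, the sample key) are hypothetical.

```python
import hashlib
import hmac

def issue_token(media: bytes, key: bytes) -> str:
    """Derive a hex token from the media bytes using a secret key.

    Only a holder of the key can produce a valid token, and the token
    matches these exact bytes and no others.
    """
    return hmac.new(key, media, hashlib.sha256).hexdigest()

def verify_token(media: bytes, key: bytes, token: str) -> bool:
    """Return True only if the media bytes are unchanged since issuance."""
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(issue_token(media, key), token)

key = b"publisher-secret"          # hypothetical publisher credential
original = b"example video bytes"  # stand-in for a real media file

token = issue_token(original, key)
print(verify_token(original, key, token))         # True
print(verify_token(original + b"x", key, token))  # False: bytes were altered
```

The design trade-off this sketch glosses over is key distribution: with a shared secret, every verifier must be trusted with the key, which is why provenance standards favor public-key signatures, where anyone can verify but only the publisher can sign.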