Microsoft is using technology to battle a dangerous AI-powered threat: deepfake content.
What you need to know
- Microsoft announced several pieces of technology that battle deepfake content.
- Deepfake content can make it appear as if a person said or did something that they never did or said.
- Deepfake content can be dangerous, especially as the U.S. election approaches.
Microsoft announced several measures that it's taking to battle deepfake content (via CNET). The company announced two pieces of technology to combat deepfakes: Microsoft Video Authenticator and a reader that can be included in a browser extension to check whether content is authentic. Deepfake content, including audio and video files, is increasingly dangerous as it becomes more realistic and convincing.
Deepfake content is manipulated content that makes it appear as if a person said or did something that they didn't actually do or say. It can be used to manipulate video or audio, and because it utilizes artificial intelligence, it continues to get better over time. While there are some harmless ways to use deepfakes, such as Collider's series of having a fake George Lucas react to new Star Wars content, deepfakes are more frequently used in dangerous ways.
Deepfake content can be used to make it appear as if a politician stated something they never said, to manipulate a video to alter a narrative, or to replace someone's face with a digital copy of someone else's. A growing number of deepfakes are in the pornography industry, targeting celebrities and other people by making it appear as if they are in a video. MIT created a fake video of President Richard Nixon delivering a speech about astronauts stranded on the Moon, a speech he never actually gave.
As the U.S. election approaches, deepfakes will become more dangerous and prevalent. To combat this, Microsoft announced several new technologies and tools to help people identify deepfakes. Microsoft explains that deepfakes will become harder to detect over time, but that detection technology can still be useful for identifying faked content.
The fact that they're generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.
The new Microsoft Video Authenticator can help identify deepfake content. It analyzes either still photos or videos and provides a percentage chance that the content is fake. These percentages can update in real time, so you can see when a video is likely manipulated. The technology works by detecting the blending boundary of the deepfake and subtle fading or grayscale elements. These are often indiscernible to the human eye but can be picked up by technology.
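Microsoft hasn't published Video Authenticator's internals, but the real-time percentage readout it describes can be illustrated with a toy sketch. Everything below is hypothetical: the per-frame anomaly scores stand in for whatever blending-boundary and grayscale features the real tool extracts, and the function name is illustrative, not Microsoft's API.

```python
def rolling_fake_confidence(frame_scores, window=5):
    """Average the last `window` per-frame anomaly scores (0.0-1.0)
    into a running percentage, the way a live readout would update
    as each frame of a video is analyzed."""
    readout = []
    for i in range(len(frame_scores)):
        recent = frame_scores[max(0, i - window + 1): i + 1]
        readout.append(round(100 * sum(recent) / len(recent), 1))
    return readout

# Hypothetical scores: they spike where a manipulated segment begins.
scores = [0.05, 0.04, 0.06, 0.82, 0.88, 0.91, 0.07]
print(rolling_fake_confidence(scores, window=3))
```

The point of the sketch is only the user-facing behavior: the confidence percentage rises and falls frame by frame, flagging the manipulated span without requiring a verdict on the whole clip.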
Microsoft Video Authenticator was created using a public dataset from FaceForensics++, and Microsoft tested it using the DeepFake Detection Challenge Dataset.
Microsoft also announced a tool built into Microsoft Azure that allows content producers to add digital hashes and certificates to content. These hashes and certificates then travel with the content in the form of metadata, making it easier to authenticate. In tandem with this, Microsoft announced a reader, which can be included in a browser extension, that checks the certificates and matches the hashes. This tool should provide a high degree of accuracy when identifying whether a piece of content is authentic.
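Microsoft hasn't detailed the Azure tool's certificate format, but the hash-matching half of the scheme is standard cryptography. Here is a minimal sketch assuming SHA-256 hashes carried as metadata; the function names and package layout are illustrative, not Microsoft's API, and a real system would additionally sign the hash with the producer's certificate.

```python
import hashlib

def package_content(content: bytes) -> dict:
    """Producer side: bundle content with its SHA-256 hash as metadata,
    so the hash travels with the content."""
    return {"content": content,
            "sha256": hashlib.sha256(content).hexdigest()}

def verify_content(package: dict) -> bool:
    """Reader side: recompute the hash of the content and
    match it against the hash in the metadata."""
    return hashlib.sha256(package["content"]).hexdigest() == package["sha256"]

package = package_content(b"original broadcast footage")
print(verify_content(package))          # True: untouched content matches

package["content"] = b"edited footage"  # any tampering breaks the match
print(verify_content(package))          # False
```

The design choice worth noting is that the reader never needs the original file, only the metadata that travels with the copy it received; any edit to the bytes changes the hash and the match fails.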
These new pieces of technology were built by Microsoft Research and Microsoft Azure in partnership with the Defending Democracy Program. The technology will power Project Origin, which was recently announced by the BBC.
Deepfake content can be surprisingly difficult to identify. Microsoft has a new interactive quiz that shows how difficult deepfakes can be to spot. After going through the quiz, it's easy to see how people can be fooled by deepfake content and why detection technology is important.