After AI-generated recording of Pikesville High principal, here’s what to know about deepfakes
As artificial intelligence becomes more advanced, the threat of deepfakes — synthetically generated content often used to misrepresent someone — becomes more pronounced.
Last week, Pikesville High School’s athletic director was arrested in connection with a racist audio recording of the school’s principal, which police say was made using AI. Authorities believe the recording, which included offensive statements about Black students and teachers and Jewish parents, was made by Dazhon Darien in retaliation for Principal Eric Eiswert investigating him over alleged misuse of school funds and theft.
Darien denied to detectives any involvement in the recording and its release. Subpoenaed documents from Google, AT&T and T-Mobile led police to an internet protocol address registered to Darien’s grandmother, and a Baltimore County Public Schools information technology employee searched his access to the school system’s network and found that he had used AI tools shortly before the recording was released, according to charging documents.
Darien was charged with disrupting school activities, as well as theft, retaliating against a witness and stalking. He was released on a $5,000 bond and did not have an attorney listed in online court records Wednesday morning.
After Darien’s arrest, Cindy Sexton, president of the Teachers Association of Baltimore County, said the union is troubled by AI’s potential to harm educators.
“As a society, we need to get in front of and get a handle on AI because, unfortunately, situations like this are going to continue to happen,” she told The Baltimore Sun at the time.
As their ramifications continue to surface, here’s what to know about deepfakes.
What are deepfakes?
Deepfakes can include artificially generated videos, photos and audio, usually made to present a false version of a person saying or doing whatever the creator desires, said Anton Dahbura, executive director of the Johns Hopkins Information Security Institute and co-director of the university’s Institute for Assured Autonomy, which teaches about and researches autonomous systems such as AI.
Deepfakes have been around for less than seven years, but the technology continues to evolve, Dahbura said.
“These techniques are becoming more common by the day and almost by the hour, and they aren’t going away anytime soon,” he said. “There are some very powerful AI techniques that can create at this point virtually any kind of video and audio.”
People can create these false recordings and videos on various websites using only a few seconds of a person’s voice, Dahbura said. These fakes have been used for movies, television, training videos and more.
In 2018, comedian Jordan Peele created a deepfake of former President Barack Obama as a public service announcement about fake news. This year, a deepfake of Taylor Swift depicted her supporting former President Donald Trump, and pornographic deepfake images of her circulated online.
How can experts spot a fake?
Earlier versions of deepfakes often had certain tells, such as additional fingers on a body, that made it easy to spot that the images were fake. But now, it’s almost impossible to tell the difference without specific tools that ascertain the authenticity of images or audio, Dahbura said.
“It’s evolved from very primitive and rudimentary versions that were easily detectable just by looking at it,” Dahbura said. “Today’s versions are much more powerful and so realistic that they are very difficult or not possible to discern with the human eye.”
Researchers have been working on ways for websites that generate deepfakes to add different signals, similar to watermarks, to the video or audio that would be detectable to special software, Dahbura said.
But that software has to continue to evolve as quickly as the technology used to make the deepfakes.
“The development of this (deepfake) technology is so widespread that there is nothing forcing people who do have evil intentions to put in those signals,” Dahbura said. “There is good research that can catch deepfakes. Then, the people that produce the deepfakes improve the software, so those techniques no longer work.”
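For readers curious how such a “watermark” signal might work in principle, here is a minimal, hypothetical sketch in Python: a generator embeds a faint pattern derived from a secret key in synthetic audio, and a detector checks for that pattern by correlation. The function names, key, strength and threshold are invented for illustration and do not represent any real provenance standard.

```python
# Illustrative only: a toy version of the kind of detectable "signal" Dahbura
# describes. Assumptions: NumPy is available; the key (RNG seed), signal
# strength and detection threshold are made-up values for this sketch.
import numpy as np

SAMPLE_RATE = 16_000  # samples per second of audio


def keyed_pattern(n_samples: int, key: int = 42) -> np.ndarray:
    """Pseudorandom pattern derived from a shared secret key."""
    return np.random.default_rng(key).standard_normal(n_samples)


def embed_watermark(audio: np.ndarray, strength: float = 0.01) -> np.ndarray:
    """Add a faint keyed pattern that listeners are unlikely to notice."""
    return audio + strength * keyed_pattern(audio.size)


def has_watermark(audio: np.ndarray, threshold: float = 0.005) -> bool:
    """Correlate against the keyed pattern; a high score suggests a watermark."""
    score = float(np.dot(audio, keyed_pattern(audio.size)) / audio.size)
    return score > threshold


# Toy demo: one second of noise stands in for a generated voice clip.
clip = 0.1 * np.random.default_rng(7).standard_normal(SAMPLE_RATE)
print(has_watermark(clip))                    # False: no watermark embedded
print(has_watermark(embed_watermark(clip)))   # True: watermark detected
```

The sketch also shows the limit Dahbura points to: anyone generating a deepfake can simply skip the embedding step, so voluntary signals alone cannot catch bad actors.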
These fakes can cause irreparable damage to a person’s reputation or livelihood, and laws need to be created to reflect this “reality hijacking,” Dahbura said.
What are Maryland lawmakers doing about deepfakes and AI?
In the annual 90-day legislative session that ended April 8 this year, Maryland lawmakers began the first significant debates on changes to state law to adapt to the rapid expansion of artificial intelligence.
Democratic Gov. Wes Moore signed an executive order in January that set some guiding principles for the government’s approach to the technology, primarily around implementing it into government services. And one law, the Artificial Intelligence Governance Act, that made it to Moore’s desk similarly deals with government use. Moore also hired a senior adviser for responsible artificial intelligence, a first for the state.
But potential laws that would have penalized certain uses of AI, including deepfakes, came up short.
One bill that passed unanimously in both the House and Senate but still failed to get across the finish line on the last day of the session would have updated Maryland’s revenge porn law to prohibit the distribution of deepfakes that depict someone nude or engaged in sexual activity. It also would have allowed an individual to file a civil suit against someone for doing so.
Another bill that passed the House but failed in the Senate would have required the Maryland Department of Education to study how AI could be used in schools. One more that didn’t make it through would have required political campaigns to disclose the use of “synthetic material” in election media.
How common are criminal charges for deepfake creators?
Baltimore County State’s Attorney Scott Shellenberger said the Pikesville High incident was the first time his office has prosecuted a case related to AI and one of the first they could find in the country.
In Florida, two teenage boys were arrested for allegedly creating and sharing AI-generated nude images of male and female classmates in December, according to court records from the Miami-Dade Police Department. The modified images were found on the boys’ phones, and the people featured in the images were identified by parents, according to the records.
New Hampshire authorities are investigating a robocall that mimicked President Joe Biden’s voice and falsely insinuated that voting in the Democratic primary would preclude voters from casting ballots in the general election, the Associated Press reported in February. The political consultant behind the call said he was trying to send a message about AI, not influence turnout.