AI: Humanity's only hope against the deepfake threat?

- Rising concerns over the risk of increasingly realistic deepfake videos generated by AI are spurring efforts to develop counter tools to detect them, but according to academic Sadi Evren Seker, the ever-present gap between the two leaves room for significant vulnerabilities

- Some technologies for detecting generative and transformative AI exist and are getting better, but generators are also improving, Seker, dean of Istanbul University’s Faculty of Computer and Information Technologies, tells Anadolu

By Emre Basaran

ISTANBUL (AA) - As advanced technologies such as artificial intelligence (AI) and machine learning continue to integrate into various aspects of daily life, offering considerable benefits, many are also grappling with their negative implications.

One of the most hotly debated issues surrounding AI is its potential for malicious use, particularly in the creation of deepfake videos — synthetic images impersonating individuals — which are becoming increasingly convincing due to advancements in data and machine learning.

Still, AI appears to be both the culprit behind deepfake content and the only real hope of fighting it, according to an expert who spoke to Anadolu.

“To detect deepfakes, AI is the only hope we have,” said Sadi Evren Seker, dean of Istanbul University’s Faculty of Computer and Information Technologies.

Underlining the universal threat that deepfakes pose in the near future, the Türkiye-based academic said the technology is “affecting daily life of every single person, and the threat is increasing by the time.”

“Unfortunately, both technologies — deepfake technology and counter-deepfake technology — are increasing at the same speed. But there’s always a gap,” he said, pointing out that counter-deepfake tech would always be at a disadvantage.

According to Seker, countering this threat will require more “focused and concentrated studies,” possibly funded by governments or international organizations to create publicly available tools.

Deepfakes are currently particularly dangerous as they take advantage of people’s tendency to believe visual media, he suggested, while acknowledging that this may change with a generational shift in attitudes on technology use.

“The paradigm is also changing … We are from the previous generation. The next generation is living and growing with these technologies. Their understanding and their perception of all the deepfake threats might be completely different.”

“What we see is something important for us, and we believe what we see. But think about a child growing with the deepfake technologies, or even generating these deepfake videos,” he said, noting that since the threat will persist until such change takes root, the “best way of fighting with these threats is generating an AI technology for counter-deepfake technologies.”


- Current counter-deepfake technologies

On the current state of counter-deepfake tech, Seker was cautiously hopeful, saying that humanity has “some” technologies but that progress is gradual.

“We have some technologies for detecting deepfake technologies or generative and transformative AI, and these technologies are getting better and better.

“But unfortunately, the generators are (also) getting better and better,” he said.

With every technological advance come those who would abuse it, underlined Seker.

“A technology might have some positive effects, some advantages, and on the other hand, there is always somebody trying to abuse or trying to use the technology for their own purposes. And also, they can cause some problems in society. Unfortunately, judicial concepts come later than crimes,” he said.


- Legislation lagging

On legal measures to keep deepfakes in check, Seker said lawmakers have been unable to keep pace with technology, with regulations coming late.

“Technology is ascending and spreading fast,” he said, adding that “lawmakers are coming far behind the technology.”

“This brings up a gap between the laws and the technology … Civil technologies supported by the universities or non-governmental organizations or even the governments are coming behind the technological risks and threats,” he said, adding that governments are also changing with the impacts of technology.

“They are shifting the paradigm in most of the cases. For example, for digital issues, you might have a local court, you might have a national court, you might also have a supreme court for considering the critical issues.

“You can have some regulations or legislations about any issue in your country, but the world is becoming more global and international, the concepts are becoming more international,” he stressed.

Therefore, a crime committed in one country can affect another, or leave an organization vulnerable to foreign attack, Seker warned.


- Independent from time and space

Globalization has resulted in digital content becoming “independent from time and space,” Seker said, pointing out the effects of the Internet.

“If you have a digital picture, you can keep this picture for decades or even centuries, there is no deformation, there is no loss of information,” he explained.

In terms of place, he noted that content generated in one country is easily accessible worldwide, making digital content free of spatial restrictions as well.

This time and space independence must be taken into consideration when drafting new laws, which Seker said must be future-proof.


- ‘Citizen data scientists’

Seker also pointed to the emergence of so-called “citizen data scientists,” who work in a manner similar to citizen journalists. Both terms refer to individuals with little or no formal training performing the tasks of those professions, albeit in a less sophisticated way.

“We can consider everybody as a data scientist, everybody is somehow related to data, at least they collect their own data about their bodies,” he said, referring to, for example, step counter apps.

“So, we are collecting data from every single person and organization, of course, so we are responsible for more data,” he said.

According to Seker, this has implications for privacy concerns that come with data collection practices of any kind.

“And of course, there’s a data privacy issue connected to the personal data and each citizen should be aware of these issues,” he said.

Underlining that humanity must be prepared for what is to come in the future, he said:

“Just 30 years ago, maybe transformative large language models such as Gemini or ChatGPT was a dream only. But now, we are living in this world and we can understand that the world is changing and transforming very fast.”

Failure to prepare could lead to “all opinions and all existence” being undermined in years to come as AI and deepfakes churn out ever-more realistic content.

“So, this is a time of transformation. And we have an opportunity to make correct decisions and we need to take correct actions, since we still have time.

“Otherwise, everything is going to be too late for the humanity.”
