Artificial intelligence is a rapidly growing field with broad reach across information technology, engineering and the sciences. Today, AI is among the most widely deployed technologies, with applications in nearly every walk of life.
A new report published by the SHERPA consortium – an EU project studying the impact of artificial intelligence on ethics and human rights – found that human attackers with access to machine learning techniques are likely to focus their efforts on manipulating existing AI systems for malicious purposes, rather than building new attacks that themselves use machine learning.
The study's primary focus was on how malicious actors can abuse AI, machine learning and smart information systems. The researchers identified many malicious uses of AI, including the creation of sophisticated disinformation and social engineering campaigns.
Although they found no firm evidence of AI-enabled cyber-attacks, they believe attackers are already abusing the technology by manipulating the AI systems used by search engines, social media platforms and recommendation services.
Andy Patel, a researcher at F-Secure's Artificial Intelligence Center of Excellence, thinks many people would find this surprising.
“Some humans incorrectly equate machine intelligence with human intelligence, and I think that’s why they associate the threat of AI with killer robots and out of control computers,” explains Patel.
“But human attacks against AI actually happen all the time. Sybil attacks designed to poison the AI systems people use every day, like recommendation systems, are a common occurrence. There are even companies selling services to support this behavior. So ironically, today’s AI systems have more to fear from humans than the other way around.”
Sybil attacks involve a single entity creating and controlling multiple fake accounts in order to manipulate the data that AI systems use to make decisions.
An example of this attack is manipulating search engine rankings or recommendation systems to promote or demote certain content.
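As a toy illustration (not taken from the report), the sketch below shows how a handful of Sybil accounts could skew a naive average-rating recommender. All account names, ratings and the scoring method are invented assumptions; real recommendation systems are far more complex, but the poisoning principle is the same.

```python
from statistics import mean

# Hypothetical ratings of one item from genuine users (1-5 scale).
genuine_ratings = {"alice": 2, "bob": 3, "carol": 2}

# A single attacker controls many fake ("Sybil") accounts that all
# rate the same item highly to push it up the rankings.
sybil_ratings = {f"bot_{i}": 5 for i in range(20)}

# A naive recommender that scores items by their mean rating.
honest_score = mean(genuine_ratings.values())
poisoned_score = mean({**genuine_ratings, **sybil_ratings}.values())

print(f"Score from genuine users only: {honest_score:.2f}")   # ~2.33
print(f"Score after Sybil injection:   {poisoned_score:.2f}") # ~4.65
```

Twenty fake accounts turn a poorly rated item into a top recommendation, which is why detecting coordinated fake accounts, rather than any single rating, is the hard part for service providers.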
“These types of attacks are already extremely difficult for online service providers to detect and it’s likely that this behavior is far more widespread than anyone fully understands,” says Mr. Patel.
Looking ahead, fake content may prove to be AI's most useful application for attackers. Given the rapid advances in modern AI systems, the technology can already help generate convincing false audio and visual material, and this type of misuse is likely to become increasingly problematic over time.
Muhammad Nadeem Jahangir