AI attacks🤖

8bz1...QVcw
13 Jan 2024
34

Artificial intelligence (AI) is undoubtedly changing the world as we know it. However, although this developing technology has many benefits, artificial intelligence also brings the possibility of cyber attacks. Bad actors can use AI to increase the effectiveness of existing threats and create new attack vectors. To stay one step ahead, businesses need to consider how to protect their systems against various AI attacks.

What is an artificial intelligence attack?

Bad actors can manipulate an AI system deployed by the organization to serve a malicious purpose. This attack occurs when a bad actor finds limitations in a machine learning (ML) model and exploits them. Threat actors can also use AI to direct offensive attack technologies. They use AI to automatically generate higher volumes of attacks or exploit vulnerabilities themselves, making it harder for organizations to protect themselves. Shawn Henry, chief security officer at an American cybersecurity company, explained that cybercriminals can use artificial intelligence to penetrate cybersecurity defenses, spread misinformation, or infiltrate corporate networks.

According to the expert, the US government relies on artificial intelligence more than ever and uses it as a workforce in various institutions. 27 departments of the US federal government use AI-powered systems In Texas, more than a third of state agencies have delegated basic tasks to AI, including answering people's questions about unemployment benefits. Ohio, Utah and other states are also using AI technology. As artificial intelligence is favored in many areas of government and public services, experts fear that systems may fall victim to loss of control over the technology or privacy breaches. In October, FBI director Chris Wray warned that artificial intelligence was in danger of taking low-level cybercriminals to the next level.
Cybersecurity experts have argued that rival governments could use AI tools to spread misinformation to undermine democratic institutions and achieve foreign policy goals. According to the research, one in three government agencies in Texas was using some form of artificial intelligence internally by 2022. Ohio's employment officials used artificial intelligence to detect fraud in unemployment insurance claims. Utah is using artificial intelligence to track livestock. Additionally, the U.S. Department of Education uses a chatbot to answer financial aid questions and a workflow bot to manage administrative schedules. Examples of emerging AI-based attacks In addition to improving existing attack vectors, AI is also enabling bad actors to create new methods that pose unprecedented risks to today's organizations. Some of these emerging threats include:

rapid injection
Evade attacks
Training data poisoning (AI poisoning attacks)
Armed models Data privacy attacks
Model denial of service (Sponge attacks)
model theft

A malicious actor accomplishes this attack by strategically inserting prompts into a large language model (LLM) that leverages prompt-based learning. They use these strategic prompts to make the model perform malicious actions. Evade attacks Evasion attacks fool ML models by changing the input to the system. Rather than interfering with the AI, these attacks interfere with incoming data to intentionally cause a system error or evade defensive measures. For example, changing the appearance of a stop sign could theoretically persuade a self-driving car's AI algorithm to ignore the sign or read it as something else (a turn sign, etc.). Training data poisoning (AI poisoning attacks) This type of attack manipulates the training set used by the AI model to produce false outputs, such as biases or misinformation. A poisoning attack often targets AI models that leverage user data as part of their training sets.
Armed models To create a weaponized model, attackers write files in a data format used for model interchange, such as KERAS. These files often contain executable, malicious code that is set to run at a specific point and work with the target machine or environment. Data privacy attacks In some cases, ML models leverage real-life user interactions as training data. If these users share confidential information with AI as part of their interactions, they put their organizations at risk. Since the ML model stores the exchange for training purposes, attackers can theoretically exploit this sensitive data if they enter the correct series of queries. Several Samsung engineers recently came under fire for putting private information on ChatGPT, increasing the organization's risk of a data privacy attack. Model denial of service (sponge attacks) These attacks are a type of distributed denial of service (DDoS) attack. The attacker formulates a prompt for an AI system that asks for an impossible or gigantic query, similar to RegexDOS. This prompt then consumes system resources, increasing costs in computer resources for the model owner. model theft Attackers may also attempt to steal custom AI models through traditional means, such as infiltrating private source code repositories via phishing or password attacks. Symbolic AI is most vulnerable to pattern theft because it is an “expert system” with set queries and answers based on the answers. To exploit symbolic AI, attackers only need to record all possible answers to each question and then move up the “answer tree.” In addition, a 2016 study at Cornell Tech showed that it is possible to reverse engineer models through systematic queries, risking model theft.

Get fast shipping, movies & more with Amazon Prime

Start free trial

Enjoy this blog? Subscribe to Hacker

1 Comment