AI

TECH INTELLIGENCE: The dark side


Listen to this article

Artificial intelligence is a major advancement. I am, however, concerned that in the rush to use this new technology, important security issues may be ignored.

Carl Mazzanti
Mazzanti

AI has many benefits, such as making tasks easier and saving time. This allows people to focus on more important and creative work. And of course, efficiency translates into cost savings and increased productivity for businesses.

AI also enhances decision-making by taking vast amounts of data at speeds that are beyond purely human capability, and by using its intelligence to analyze data and predict outcomes for decision-making purposes. In finance, AI can help with tasks such as risk assessment and fraud detection, while in health care, it can assist with diagnosing diseases and developing treatment plans.

Moreover, AI fosters innovation by enabling the development of new products and services. Already, AI-powered technologies like virtual assistants and autonomous vehicles are pushing boundaries and shaping the future by driving progress.

AI can make life easier for people with disabilities by creating personalized solutions, improving communication, and enhancing learning for students. This can help bridge gaps in access to resources and make things more inclusive.

AI systems offer these and other advantages, but AI also comes with new security risks. We must address these vulnerabilities, while not ignoring existing cybersecurity threats.

  • AI can get things wrong and present incorrect statements as facts, a flaw known as “AI hallucination.”
  • AI can show bias and often proves gullible when responding to leading questions.
  • AI can be coaxed into creating toxic content and is prone to “prompt injection attacks.”
  • Manipulating the data used to train AI models can corrupt them, a technique known as “data poisoning.”

 

AI technology can create hard-to-detect threats, such as AI-powered phishing attacks. Another concern is that bad actors could mix malware with AI, which could allow the AI to study a company’s cyber defenses and identify weaknesses.

Prompt injection attacks are one of the most widely reported weaknesses in large learning models, or AI systems, that are capable of understanding and generating human language by processing vast amounts of text data. In a prompt injection attack, an attacker creates an input designed to make the LLM behave in an unintended way. This could result in making abusive posts, sharing secrets, or causing problems in a system that does not filter input.

More Tech Intelligence

AI
DEPOSIT PHOTOS

Data poisoning attacks occur when someone alters the data used to train an AI model. This causes the model to generate undesired outcomes related to security and bias. As people increasingly use LLMs, the risks of attacks will increase.

Strict laws on data privacy regulate the collection, use and processing of sensitive information. This can pose challenges, since AI tools typically collect data from different places, often including sensitive information in the process — and as threat actors target systems for this information, these data stores are at risk for cyberattacks and data breaches.

Further, AI technology can analyze large data sets, like private communications and user behavior. This can lead to compliance violations if there is misuse or unauthorized access.

When development is moving quickly, like with AI, security is sometimes not the main focus. So, I believe security should be a top priority for AI systems, from development to end-of-life. It is important for people in charge of AI systems, like senior managers, to stay updated on new developments. To make AI products successful, professionals like data scientists, developers, decision-makers, risk owners, and cybersecurity consultants must collaborate.

They all need to ensure the products work well, are available when needed, and safeguard sensitive data from unauthorized access. This will lead to new levels of efficiency; and in this scenario, all legitimate users win.

Carl Mazzanti is president of eMazzanti Technologies in Hoboken, providing IT consulting and cybersecurity services for businesses ranging from home offices to multinational corporations.





Source

Related Articles

Back to top button