What is AI? | ITPro
When artificial intelligence (AI) is mentioned, Blade Runner's replicants or the Terminator franchise often come up in the same sentence. But outside pop culture, AI is already having a profound impact on every aspect of our lives and shaping how businesses connect with their customers. So what is AI, and how does it work?
At the heart of any AI system lies a complex amalgamation of algorithms, data, and human ingenuity. The fathers of AI are generally held to be Marvin Minsky, who founded MIT's AI laboratory, and Alan Turing, whose work on machine intelligence in the 1950s laid the foundations for the explosion of AI we see today.
From banks using AI to identify suspicious activity, to predictive text in email clients and productivity platforms, AI is already everywhere we turn. It can all sound very worrying that a machine intelligence knows so much about our lives, but AI is now a core component of many of the digital services we all take for granted.
How does AI work?
AI is an umbrella term that includes a range of different technologies, approaches, and architectures. Machine learning (ML) is a subset of AI that forms the basis for many AI systems in use today.
ML is the practice of feeding large amounts of data into algorithms, which over time result in a system that can make predictions and decisions without specific human instruction. For example, streaming platforms and online marketplaces use ML to learn consumer preferences and make accurate show or product recommendations.
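As a toy illustration of that idea of learning preferences from data, the sketch below recommends whichever genre a viewer has watched most often. Real streaming platforms use far more sophisticated statistical models trained on millions of users, so this is only a minimal stand-in for the principle, with hypothetical titles and genres.

```python
from collections import Counter

def recommend_genre(watch_history):
    """Pick the genre a viewer has watched most often.

    A toy stand-in for the statistical learning a real
    streaming platform performs at far larger scale.
    """
    counts = Counter(genre for _, genre in watch_history)
    return counts.most_common(1)[0][0]

history = [
    ("Blade Runner", "sci-fi"),
    ("The Terminator", "sci-fi"),
    ("Casablanca", "drama"),
]
print(recommend_genre(history))  # sci-fi
```

The key point is that no rule saying "this viewer likes sci-fi" was ever written by hand; the preference emerges from the data itself.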
Modern AI systems often work by running data through a complex ML structure called an artificial neural network (ANN), built to mimic the flow of neurons through a human brain.
Deep learning is a subset of ML used for AI systems that need to process large amounts of unstructured data. In a deep learning model, ANNs are arranged in a structure loosely modeled on the human brain. The "deep" aspect of deep learning is reflected in the multiple layers of ANNs within the structure, with each layer processing data independently before passing it to the next layer for further analysis.
In this way, deep learning models can be trained to recognize patterns in vast swathes of unstructured data. This is important when it comes to processing data in near real-time, as in a text-to-speech system or natural language processing (NLP).
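The layer-by-layer flow described above can be sketched in a few lines. The weights below are hand-picked for illustration only; in a real network they would be learned from training data, and production systems have millions or billions of them.

```python
def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by a ReLU."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical hand-picked weights for a tiny two-layer network.
w1 = [[0.5, -0.2], [0.1, 0.9]]
b1 = [0.0, 0.1]
w2 = [[1.0, -1.0]]
b2 = [0.0]

x = [0.8, 0.3]
hidden = layer(x, w1, b1)       # first layer processes the raw input
output = layer(hidden, w2, b2)  # second layer refines that result
print(output)  # [0.0]
```

Each layer transforms the previous layer's output, which is exactly the "depth" that gives deep learning its name.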
Generative AI, which has become widely used since 2022, relies on deep learning architectures to produce outputs based on user inputs. Most popular models currently operate using an architecture known as a transformer, which allows for sophisticated responses based on contextual processing of user inputs.
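At the core of a transformer layer is an operation called scaled dot-product attention, which lets the model weigh how relevant each part of the input is to every other part. The minimal sketch below shows the mechanism on tiny hand-made vectors; real models apply it across thousands of learned dimensions and many parallel "heads".

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention, the core of a transformer layer.

    Each score measures how relevant a key is to the query; a softmax
    turns the scores into weights used to blend the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key more closely, so the output
# leans toward the first value (10.0) rather than the second (20.0).
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0], [20.0]])
print(out)
```

This context-weighted blending is what allows a transformer to produce responses that take the whole of a user's input into account.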
Benefits of AI for business
Like any technology, AI has pros and cons. First and foremost, the technology allows for the automation of menial tasks so that employees can focus on more important and enriching activities.
Workers can save up to a month of work per year using AI tools, according to Slack’s 2023 State of Work report. The study took in responses from over 18,000 global workers, with many of those who regularly use generative AI tools reporting improved productivity through the technology.
Outside of automating tasks that humans would normally do, AI can be used to perform jobs at a speed or precision that humans could never achieve. For example, AI can carry out big data analytics on a firm’s unstructured data, to identify trends or flag anomalous results.
In short, AI is a vital technology when it comes to noticing the unnoticeable. It excels at drawing connections between data points. This is why AI cyber security has great potential to help businesses defend themselves, with AI security systems able to proactively identify threats based on suspicious activity and recommend courses of remedial action.
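The "noticing the unnoticeable" idea can be illustrated with a deliberately simple anomaly check: learn what normal activity looks like, then flag deviations. The z-score test and the login figures below are hypothetical stand-ins; commercial AI security products use far richer behavioral models, but the underlying principle is the same.

```python
from statistics import mean, stdev

def flag_anomalies(daily_logins, threshold=2.0):
    """Flag days whose login count sits far from the norm.

    Computes each day's distance from the average in units of
    standard deviation and flags anything beyond the threshold.
    """
    mu, sigma = mean(daily_logins), stdev(daily_logins)
    return [i for i, x in enumerate(daily_logins)
            if abs(x - mu) / sigma > threshold]

logins = [102, 98, 105, 99, 101, 97, 480]  # the final day looks suspicious
print(flag_anomalies(logins))  # [6]
```

A real system would learn "normal" per user and per resource, and would recommend remedial action rather than just printing an index, but the detection step rests on the same statistical intuition.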
Cyber security powered by AI will also be necessary to counter the new wave of AI threats businesses face. This includes the use of AI for deepfakes and social engineering campaigns.
Risks of AI for business
As with any technology, AI also has its drawbacks. From an employee perspective, AI-linked job cuts are a serious threat, particularly as AI systems become more capable of handling everyday office work. Research published by the UK Department for Education in 2023 found that skilled workers are most at risk from AI, particularly in the insurance and finance sectors.
AI also comes with the risk of making incorrect decisions or producing outputs that do not align with facts. Generative AI models are prone to ‘hallucinations’, a term used to describe confidently incorrect responses to user inputs. These can be damaging to a company’s reputation if sent directly to customers or the wider public and can mislead decision-making if not identified at an internal level.
On a broader level, all AI systems can exhibit inherent bias. For example, some hiring algorithms are biased against applicants from underrepresented groups, which can result in decisions that are ableist, sexist, or otherwise discriminatory. Many developers have committed themselves to ethical AI development, with a goal to remove as much bias as possible and to ensure models are accountable.
Legislators have been targeting the potential risks of AI for some time, but laws are only just coming into place to regulate the technology. For example, the EU AI Act seeks to limit the deployment of AI systems that are deemed to have a high innate risk.
For these reasons, some businesses are choosing not to invest in generative AI for now, with leaders in heavily regulated industries particularly cautious over the potential for unwanted outputs.
Examples of AI in use today
Digital assistants are often how people encounter AI for the first time. The use of chatbots for customer service has expanded massively as these AI tools have improved, with generative AI adding depth and detail to chatbot responses.
On a less visible level, AI algorithms have already become commonplace in many pieces of software. For example, in modern productivity suites users are given word or full-sentence suggestions based on their personal context.
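Word suggestion of this kind can be approximated, at its very simplest, by counting which word most often follows another in text the user has written. The bigram model below is a hypothetical toy; productivity suites use large neural language models, but the idea of predicting from personal context is the same.

```python
from collections import Counter, defaultdict

def build_model(text):
    """Count which word most often follows each word in the text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest(model, word):
    """Return the most likely next word, or None if the word is unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "please review the report and send the report to finance"
model = build_model(corpus)
print(suggest(model, "the"))  # report
```

Because the model is built from the user's own text, its suggestions naturally reflect their personal writing habits, which is the property the article describes.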
As mentioned above, AI can be fantastic for identifying anomalous, erroneous, or suspicious data across an organization’s estate. This has numerous applications, with AI systems able to chip away at laborious tasks night and day – with the right training – more accurately than human workers.
AI is already empowering security teams to turn the tide against hackers with enhanced protections against network or identity-based attacks, while generative AI tools such as Copilot for Security provide cyber security teams with summaries of attacks and suggestions for responses.
Leaders can also use AI to improve their supply chain through targeted analysis of inefficiencies and projections for how changes could improve overall operations. The same process can be applied to business models to help leaders make data-driven decisions, or even for detailed weather forecasting based on climate data. On a larger scale, AI is already being used by scientists to inform decisions in the fight against climate change.
Another practical application of AI is in healthcare. Advanced algorithms are being employed to analyze medical data and help diagnose diseases. The power of AI lies in the ability of these systems to sift through vast amounts of data about a patient and their condition, which would be impossible for a human clinician. Already, preventative care for breast cancer is being revolutionized by AI. For example, Imperial College London is testing an AI system that can detect 13% more cancers than a human clinician alone.
In the near future, improvements in AI computer vision could allow devices to better guide people who are blind or have vision impairments. The same technology is already being used on a more basic level for autonomous robots on manufacturing floors, as part of the move toward smart manufacturing.
Applied AI vs general AI
Applied AI, also known as weak AI, refers to AI systems designed for a specific purpose. These systems excel at making recommendations based on historical data and processing vast amounts of information to provide more precise predictions or suggestions. Weather forecasting is one example.
General AI, also known as strong AI or artificial general intelligence (AGI), is a term for a theoretical AI system that could equal or surpass the intellectual capabilities of a human. Such an AI would be self-aware and far closer to the intelligent machines of science fiction.
If realized, general AI could pose a serious risk to humanity. When generative AI development picked up in earnest, some called for a six-month pause in AI development so that innovation did not dangerously outpace legislation.
Some companies, such as OpenAI or Google DeepMind, are expressly seeking to create general AI. But at present, such models can only be found in academic papers and the realm of science fiction.
The future of AI
We have witnessed an unparalleled expansion of AI tools over the last two years. ChatGPT marked a step change in how AI is seen and used by millions. Across all sectors, AI's transformative capabilities are reshaping the way we work, communicate, and interact with technology.
However, alongside its benefits, AI also introduces significant risks and challenges. Concerns surrounding data privacy, bias, and job displacement loom large, underscoring the importance of ethical and responsible AI development and deployment. And the spectre of super intelligent AI, capable of outpacing human intelligence and autonomy, raises existential questions about the future of humanity.
As we navigate this rapidly evolving technological landscape, it is imperative to strike a balance between innovation and regulation, harnessing the power of AI while mitigating its potential harms. Collaborative efforts across governments, industries, and academia are essential to establish frameworks that promote transparency, accountability, and inclusivity in AI development and deployment.
Looking ahead, the future of AI holds immense promise, yet it is also fraught with uncertainties. Continued research and investment are crucial to unlock AI's full potential while addressing its ethical, social, and economic implications. By fostering a culture of responsible innovation and human-centric design, we can steer the future of AI toward maximum benefit and minimum risk.