Generative AI

How It Works, Benefits and Dangers


What is generative AI in simple terms?

Generative AI broadly describes machine learning systems capable of generating text, images, code or other types of content, often in response to a prompt entered by a user.

Generative AI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, after which the model generates a human-like response.

DOWNLOAD: This generative AI guide from TechRepublic Premium.

How does generative AI work?

Generative AI uses a computing process known as deep learning to analyze patterns in large sets of data, then replicates those patterns to create new data that appears human-generated. It does this by employing neural networks, a type of machine learning process loosely inspired by the way the human brain processes, interprets and learns from information over time.

To give an example, if you were to feed lots of fiction writing into a generative AI model, it would eventually gain the ability to craft stories or story elements based on the literature it’s been trained on. This is because the machine learning algorithms that power generative AI models learn from the information they’re fed — in the case of fiction, this would include elements like plot structure, characters, themes and other narrative devices.

Generative AI models get more sophisticated over time — the more data a model is trained on and generates, the more convincing and human-like its outputs become.
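
The learn-patterns-then-generate loop described above can be illustrated with a much simpler stand-in than a neural network. The sketch below is a toy character-level Markov chain, not a deep learning model: it records which characters tend to follow which short contexts in a training text, then samples new text from those learned patterns.

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Record which character follows each length-`order` context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, order=2, length=40):
    """Sample new text one character at a time from the learned contexts."""
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:
            break
        out += random.choice(followers)
    return out

corpus = "the cat sat on the mat. the cat ate the rat."
model = train(corpus)
print(generate(model, "th"))  # new text in the style of the corpus
```

Real generative models replace the lookup table with billions of learned neural network weights, but the principle is the same: the more (and more varied) the training data, the more convincing the samples.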

Examples of generative AI

The popularity of generative AI has exploded in recent years, largely thanks to the arrival of OpenAI’s ChatGPT and DALL-E models, which put accessible AI tools into the hands of consumers.

Since then, big tech companies including Google, Microsoft, Amazon and Meta have launched their own generative AI tools to capitalize on the technology’s rapid uptake.

Various generative AI tools now exist, although text and image generation models are arguably the most well-known. Generative AI models typically rely on a user feeding a prompt into the engine that guides it towards producing some sort of desired output, be it text, an image, a video or a piece of music, though this isn’t always the case.

Examples of generative AI models include:

  • ChatGPT: An AI language model developed by OpenAI that can answer questions and generate human-like responses from text prompts.
  • DALL-E 3: Another AI model by OpenAI that can create images and artwork from text prompts.
  • Google Gemini: Previously known as Bard, Gemini is Google’s generative AI chatbot and rival to ChatGPT. Originally powered by the LaMDA and later PaLM 2 large language models, it now runs on Google’s Gemini family of models and can answer questions and generate text from prompts.
  • Claude 2.1: Anthropic’s AI model, Claude, offers a 200,000-token context window, which its creators claim can handle more data than its competitors.
  • Midjourney: Developed by San Francisco-based research lab Midjourney Inc., this gen AI model interprets text prompts to produce images and artwork, similar to DALL-E.
  • GitHub Copilot: An AI-powered coding tool that suggests code completions within the Visual Studio, Neovim and JetBrains development environments.
  • Llama 2: Meta’s open-source large language model can be used to create conversational AI models for chatbots and virtual assistants, similar to GPT-4.
  • Grok: Elon Musk, who co-founded and helped fund OpenAI before leaving its board in 2018, announced his new generative AI venture, xAI, in July 2023. Its first model, the irreverent Grok, came out in November 2023.
  • Cohere AI’s Command R: The startup Cohere AI announced in April a new model for enterprise, Command R, which is designed for generative AI adoption at scale. In particular, Command R is optimized for retrieval-augmented generation (RAG). RAG is intended to improve the accuracy of generative AI responses by checking them against a source.
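
The retrieval-augmented generation idea mentioned above can be sketched in a few lines. This is a hypothetical toy pipeline, not Cohere's actual implementation: it scores documents by simple word overlap with the query and prepends the best match to the prompt so the model's answer can be checked against a source.

```python
def retrieve(query, documents):
    """Pick the document sharing the most words with the query (toy scoring)."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(documents, key=overlap)

def build_rag_prompt(query, documents):
    """Ground the model's answer in a retrieved source passage."""
    context = retrieve(query, documents)
    return f"Answer using only this source:\n{context}\n\nQuestion: {query}"

docs = [
    "Command R is optimized for retrieval-augmented generation.",
    "GANs pair a generator with a discriminator.",
]
prompt = build_rag_prompt("What is Command R optimized for?", docs)
print(prompt)
```

Production RAG systems replace the word-overlap scoring with vector embeddings and a dedicated search index, but the shape of the pipeline (retrieve, then generate against the retrieved context) is the same.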

Types of generative AI models

Various types of generative AI models exist, each designed for specific tasks and purposes. These can broadly be categorized into the following types.

Transformer-based models

Transformer-based models are trained on large sets of data to understand the relationships between sequential information like words and sentences. Underpinned by deep learning, transformer-based models tend to be adept at natural language processing and understanding the structure and context of language, making them well suited for text-generation tasks. OpenAI’s GPT models, which power ChatGPT, and Google Gemini are examples of transformer-based generative AI models.
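
The core mechanism transformers use to relate words in a sequence is attention. The sketch below is a minimal, dependency-free version of scaled dot-product attention over tiny hand-picked vectors; real transformers apply this across many layers and heads with learned weight matrices.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: weight each value by how well its key
    matches the query, so every token can 'look at' every other token."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Three 2-D token embeddings; the query matches the first key most strongly,
# so the output is pulled toward the first value.
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([1.0, 0.0], keys, values)
print(out)
```

Because the weighting considers the whole sequence at once, the model can capture long-range structure and context in language, which is what makes transformers well suited to text generation.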

Generative adversarial networks

Generative adversarial networks are made up of two neural networks known as a generator and a discriminator, which essentially work against each other to create authentic-looking data. As the name implies, the generator’s role is to generate convincing output, such as an image based on a prompt, while the discriminator works to evaluate the authenticity of said image. Over time, each component gets better at its respective role, resulting in more convincing outputs. StyleGAN, which produces photorealistic faces of people who don’t exist, is a well-known example of a GAN; popular image generators such as DALL-E and Midjourney, by contrast, are based on diffusion models.

Variational autoencoders

Variational autoencoders leverage two networks to interpret and generate data — in this case, an encoder and a decoder. The encoder takes the input data and compresses it into a simplified format. The decoder then takes this compressed information and reconstructs it into something new that resembles the original data but isn’t entirely the same.

One example might be teaching a computer program to generate human faces using photos as training data. Over time, the program learns how to simplify the photos of people’s faces into a few important characteristics — such as the size and shape of the eyes, nose, mouth, ears and so on — and then use these to create new faces.
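
The compress-then-reconstruct idea can be sketched with a toy encoder and decoder. This is an illustration only: here the "face" is just three correlated measurements, and the encoder/decoder pattern is fixed by hand, whereas a real VAE learns both networks from data and samples from a learned latent distribution.

```python
import random

def encode(face):
    """Compress a 'face' (three correlated measurements) to one latent number."""
    return sum(face) / 6.0   # inverse of the decoder's fixed 1x/2x/3x pattern

def decode(latent):
    """Reconstruct a face from the latent code using the fixed pattern."""
    return [latent * 1, latent * 2, latent * 3]

face = [2.0, 4.0, 6.0]
z = encode(face)
print(decode(z))   # reconstruction matches the original

# Sampling near z in latent space yields a novel face that resembles, but
# is not identical to, the training example.
new_face = decode(z + random.gauss(0, 0.1))
print(new_face)
```

The key property is that small moves in the compressed latent space produce plausible variations in the output, which is what lets VAEs generate new examples rather than only copying old ones.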

This type of VAE might be used to, say, increase the diversity and accuracy of facial recognition systems. By using VAEs to generate new faces, facial recognition systems can be trained to recognize more diverse facial features, including those that are less common.

Multimodal models

Multimodal models can understand and process multiple types of data simultaneously, such as text, images and audio, allowing them to create more sophisticated outputs. An example might be an AI model capable of generating an image from a text prompt, or a text description from an image. DALL-E 3 and OpenAI’s GPT-4 are examples of multimodal models.

Foundation models

Foundation models are the bedrock of generative AI chatbots. They may be machine learning or deep learning systems; either way, they are trained on large corpora of data, and any AI tool built on a particular foundation model can draw on that training. Foundation models can be further customized or used to answer general questions. OpenAI’s GPT-4, Amazon’s Titan and Anthropic’s Claude are some examples of foundation models.

Frontier models

“Frontier model” is a term for hypothetical upcoming AI that could far surpass the capabilities of today’s AI. There is no single definition of a frontier model’s capabilities, except that they will be larger in scope and more powerful than the AI available today. The term is sometimes used in the context of future-proofing today’s technology against possible future threats. Some people have begun to refer to today’s high-performance generative AI, such as GPT-4 or Mistral AI’s models, as “frontier AI,” muddying the definition somewhat.

What is ChatGPT?

ChatGPT is an AI chatbot developed by OpenAI. It’s a large language model that uses transformer architecture — specifically, the generative pretrained transformer, hence GPT — to understand and generate human-like text.

You can learn everything you need to know about ChatGPT in this TechRepublic cheat sheet.

What is Google Gemini?

Google Gemini (previously Bard) is another example of an LLM based on transformer architecture. Similar to ChatGPT, Gemini is a generative AI chatbot that generates responses to user prompts.

Google launched Bard in the U.S. in March 2023 in response to OpenAI’s ChatGPT and Microsoft’s Bing Chat (since renamed Copilot). It was launched in Europe and Brazil later that year.

Learn more about Gemini by reading TechRepublic’s comprehensive Google Gemini cheat sheet.

SEE: Google Gemini vs. ChatGPT: Is Gemini Better Than ChatGPT? (TechRepublic)

Benefits of generative AI

For businesses, efficiency is arguably the most compelling benefit of generative AI because it can help automate specific tasks and focus employees’ time, energy and resources on more important strategic objectives. This can result in lower labor costs, greater operational efficiency and insights into how well certain business processes are — or are not — performing.

For professionals and content creators, generative AI tools can help with idea creation, content planning and scheduling, search engine optimization, marketing, audience engagement, research and editing, and potentially more. Again, the key proposed advantage is efficiency, because generative AI tools can help users reduce the time they spend on certain tasks and invest their energy elsewhere. That said, manual oversight and scrutiny of generative AI models remains highly important; we explain why later in this article.

Use cases of generative AI

McKinsey estimates that, by 2030, activities that currently account for around 30% of U.S. work hours could be automated, prompted by the acceleration of generative AI.

SEE: Indeed’s 10 Highest-Paid Tech Skills: Generative AI Tops the List 

Generative AI has found a foothold in a number of industry sectors and is now popular in both commercial and consumer markets. The use of generative AI varies from industry to industry and is more established in some than in others. Current and proposed use cases include the following:

  • Healthcare: Generative AI is being explored as a tool for accelerating drug discovery, while tools such as AWS HealthScribe allow clinicians to transcribe patient consultations and upload important information into their electronic health record.
  • Digital marketing: Advertisers, salespeople and commerce teams can use generative AI to craft personalized campaigns and adapt content to consumers’ preferences, especially when combined with customer relationship management data.
  • Education: Some educational tools are beginning to incorporate generative AI to develop customized learning materials that cater to students’ individual learning styles.
  • Finance: Generative AI is one of the many tools within complex financial systems to analyze market patterns and anticipate stock market trends, and it’s used alongside other forecasting methods to assist financial analysts.
  • Environment: In environmental science, researchers use generative AI models to predict weather patterns and simulate the effects of climate change.

In terms of role-specific use cases of generative AI, some examples include:

  • In customer support, AI-driven chatbots and virtual assistants can help businesses reduce response times and quickly deal with common customer queries, reducing the burden on staff.
  • In software development, generative AI tools can help developers code more cleanly and efficiently by reviewing code, highlighting bugs and suggesting potential fixes before they become bigger issues.
  • Writers can use generative AI tools to plan, draft and review essays, articles and other written work — though often with mixed results.

Dangers and limitations of generative AI

A major concern around the use of generative AI tools — and particularly those accessible to the public — is their potential for spreading misinformation and harmful content. The impact of doing so can be wide-ranging and severe, from perpetuating stereotypes, hate speech and harmful ideologies to damaging personal and professional reputation.

SEE: Gartner analyst’s take on 5 ways generative AI will impact culture & society

The risk of legal and financial repercussions from the misuse of generative AI is also very real; indeed, it has been suggested that generative AI could put national security at risk if used improperly or irresponsibly.

These risks haven’t escaped policymakers. On Feb. 13, 2024, the European Council approved the AI Act, a first-of-its-kind piece of legislation designed to regulate the use of AI in Europe. The legislation takes a risk-based approach to regulating AI, with some AI systems banned outright.

Security agencies have made moves to ensure AI systems are built with safety and security in mind. In November 2023, 16 agencies including the U.K.’s National Cyber Security Centre and the U.S. Cybersecurity and Infrastructure Security Agency released the Guidelines for Secure AI System Development, which promote security as a fundamental aspect of AI development and deployment.

Generative AI has prompted workforce concerns, most notably that the automation of tasks could lead to job losses. Research from McKinsey suggests that, by 2030, around 12 million people may need to switch jobs, with office support, customer service and food service roles most at risk. The consulting firm predicts that clerks will see a decrease of 1.6 million jobs, “in addition to losses of 830,000 for retail salespersons, 710,000 for administrative assistants and 630,000 for cashiers.”

SEE: OpenAI, Google and More Agree to White House List of Eight AI Safety Assurances

What is the difference between generative AI and general AI?

Generative AI and general AI are related but distinct concepts: both belong to the field of artificial intelligence, but the former is a present-day subfield of AI, while the latter describes a hypothetical, far more capable form of it.

Generative AI uses various machine learning techniques, such as GANs, VAEs or LLMs, to generate new content from patterns learned from training data.

General AI, also known as artificial general intelligence, broadly refers to the concept of computer systems and robotics that possess human-like intelligence and autonomy. This is still the stuff of science fiction — think Disney Pixar’s WALL-E, Sonny from 2004’s I, Robot or HAL 9000, the malevolent AI from 2001: A Space Odyssey. Most current AI systems are examples of “narrow AI,” in that they’re designed for very specific tasks.

To learn more about what artificial intelligence is and isn’t, read our comprehensive AI cheat sheet.

What is the difference between generative AI and machine learning?

Generative AI is a subfield of artificial intelligence; broadly, AI refers to the concept of computers capable of performing tasks that would otherwise require human intelligence, such as decision making and natural language processing. Generative AI models use machine learning techniques to process and generate data.

Machine learning is a foundational component of AI and refers to the application of computer algorithms to data for the purposes of teaching a computer to perform a specific task. Machine learning is the process that enables AI systems to make informed decisions or predictions based on the patterns they have learned.

DOWNLOAD: TechRepublic Premium’s prompt engineer hiring kit

What is the difference between generative AI and discriminative AI?

Whereas generative AI is used for generating new content by learning from existing data, discriminative AI specializes in classifying or categorizing data into predefined groups or classes.

Discriminative AI works by learning how to tell different types of data apart. It’s used for tasks where data needs to be sorted into groups; for example, figuring out if an email is spam, recognizing what’s in a picture or diagnosing diseases from medical images. It looks at data it already knows to classify new data correctly.

So, while generative AI is designed to create original content or data, discriminative AI is used for analyzing and sorting it, making each useful for different applications.
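
The discriminative side of that contrast can be sketched with a toy word-count spam classifier. This is a hypothetical illustration, not a production filter: it learns which words separate the "spam" class from the "ham" class and then sorts new messages into one of those predefined groups, exactly the classification task described above.

```python
def train_classifier(examples):
    """Discriminative learning: count which words appear in each class."""
    counts = {"spam": {}, "ham": {}}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] = counts[label].get(word, 0) + 1
    return counts

def classify(counts, text):
    """Assign new text to whichever class its words match best."""
    def score(label):
        return sum(counts[label].get(w, 0) for w in text.lower().split())
    return "spam" if score("spam") > score("ham") else "ham"

examples = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting notes for monday", "ham"),
    ("monday project update", "ham"),
]
model = train_classifier(examples)
print(classify(model, "free prize money"))   # → spam
```

Note the asymmetry: this model can only sort text into existing categories; it has no way to produce new text, which is precisely the capability that distinguishes generative models.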

What is the difference between generative AI and regenerative AI?

Regenerative AI, while less commonly discussed, refers to AI systems that can fix themselves or improve over time without human help. The concept of regenerative AI is centered around building AI systems that can last longer and work more efficiently, potentially even helping the environment by making smarter decisions that result in less waste.

In this way, generative AI and regenerative AI serve different roles: Generative AI for creativity and originality, and regenerative AI for durability and sustainability within AI systems.

How big a role will generative AI play in the future of business?

It certainly looks as though generative AI will play a huge role in the future. As more businesses embrace digitization and automation, generative AI looks set to play a central role in industries of all types, with many organizations already establishing guidelines for the acceptable use of AI in the workplace. The capabilities of gen AI have already proven valuable in areas such as content creation, software development, medicine, productivity, business transformation and much more. As the technology continues to evolve, gen AI’s applications and use cases will only continue to grow.

SEE: Deloitte’s 2024 Tech Predictions: Gen AI Will Continue to Shape Chips Market

That said, the impact of generative AI on businesses, individuals and society as a whole is contingent on properly addressing and mitigating its risks. Key to this is ensuring AI is used ethically by reducing biases, enhancing transparency and accountability and upholding proper data governance.

None of this will be straightforward. Keeping laws up to date with fast-moving tech is tough but necessary, and finding the right mix of automation and human involvement will be key to democratizing the benefits of generative AI. Recent legislation such as President Biden’s Executive Order on AI, Europe’s AI Act and the U.K.’s Artificial Intelligence Bill suggest that governments around the world understand the importance of getting on top of these issues quickly.


