
Many people don’t trust AI


Artificial intelligence, or AI, has quickly captured the world's fascination. AI technology has helped drive tremendous innovations in science, medicine, research and development, engineering and the arts, among many other fields. It has the potential to transform economies and societies for generations to come. But the downside to this groundbreaking innovation is growing distrust and skepticism, fueled by a lack of regulation and the potential abuses that AI may bring.








According to a global survey by consulting firm Edelman, just 35% of Americans trust AI-related companies. This is down from 43% in 2023; in 2019, the level of trust was reported at 50%. Americans are also much more skeptical of AI than the rest of the world. Among respondents from the 28 nations included in its survey, Edelman notes, the global average of trust in AI stands at 54%.

Understandably, the rapid rise in the use of AI has created unease among the American public. There have been a number of high-profile incidents in which AI has been used to alter photographs, produce biased outcomes or spread disinformation.


Another concern is that even when AI is free from intentional bias or manipulation, its output can be incomplete or simply inaccurate. In 2023, a New York attorney used the AI tool ChatGPT to help research case law for his client's personal injury claim. Unfortunately, the AI-generated legal brief the attorney submitted to the court referenced six court decisions that simply didn't exist. Within the brief, ChatGPT also included false names and docket numbers and used made-up citations and quotes.

At the heart of this distrust is a lack of confidence in effective regulation and control. According to survey results from the MITRE-Harris Poll, just 39% of Americans believe AI is safe and secure, down nine points from the prior survey. Moreover, 82% say they are either somewhat or very concerned about AI being used with malicious intent. The top concerns cited by respondents included the use of AI in cyberattacks (80%), identity theft (78%), the sale of personal data (76%), a lack of accountability for those using AI (76%) and deceptive political ads (70%).

By overwhelming consensus, Americans want accountability. In fact, 85% of Americans believe AI technologies should be regulated to ensure adequate consumer protections. Douglas Robbins, vice president of engineering and prototyping at MITRE, stated: "The deep concerns that U.S. adults are expressing about AI are understandable. While the public has started to benefit from new AI capabilities such as ChatGPT, we've all watched as chatbots have spread political disinformation and shared dangerous medical advice. And we've seen the government announce an investigation into a leading company's data collection practices."

For Americans to fully embrace AI, there must first be trust. And according to the latest surveys, that trust just isn't there yet.




Mark Grywacheski is an expert in financial markets and economic analysis and an investment adviser with Quad-Cities Investment Group, Davenport.

Disclaimer: Opinions expressed herein are subject to change without notice. Any prices or quotations contained herein are indicative only and do not constitute an offer to buy or sell any securities at any given price. Information has been obtained from sources considered reliable, but we do not guarantee that the material presented is accurate or that it provides a complete description of the securities, markets or developments mentioned. Quad-Cities Investment Group LLC is a registered investment adviser with the U.S. Securities and Exchange Commission.


