AI

What are some ethical and legal concerns surrounding Artificial Intelligence?


Carolyne Tarver and Marissa Diaz

(KTAL/KMSS) – Artificial Intelligence (AI) is advancing and gaining popularity, but many individuals have ethical and legal concerns about its use.

AI is a “field that leverages mathematics and statistics, cognitive science, and computing to enable problem-solving based on vast and robust datasets” (Louisiana State University, n.d.). AI is highly advanced and can mimic human intelligence, learning from patterns and improving over time. Examples of AI include self-driving cars and tools such as ChatGPT, which can answer questions and generate text that reads as though a human wrote it.


There are both benefits and concerns associated with AI on social media because information can be posted and shared so quickly. Social media companies can use AI for positive purposes, such as content moderation, chatbots and virtual assistants, content creation, and personalized content. Concerns about online use include the spread of misinformation, job displacement, privacy (because AI collects data from online users), and algorithmic bias (because AI mimics humans and can reproduce prejudice and bias within the algorithm) (Keenan, 2023). Even more severe concerns surround another example of AI: deepfakes.

Deepfakes are AI-generated images, videos, and audio that mimic a specific individual. For instance, someone can take a person’s face and superimpose it onto another person’s body. AI can also use a sample of someone’s voice to mimic it, making them appear to say anything. Deepfakes can be very convincing, and many people have difficulty discerning whether a video, photo, or audio clip has been deepfaked. Producing deepfakes used to be difficult, but today the process has become much easier.

As a result, many cases have surfaced in which people, primarily young women, have had their faces deepfaked onto inappropriate videos that were then posted online (Erickson, 2023). There are also concerns about the use of AI during elections and campaigns: politicians or their supporters have used artificially generated images to defame opponents and persuade voters.

Even though AI regulation in the United States is still new, there is potential to prevent unethical use. Existing AI-related laws address privacy, security, and anti-discrimination (Li, 2023). Since 2019, 17 states have enacted 29 bills that could regulate the development and use of AI, with legislatures in California, Colorado, and Virginia spearheading the regulatory framework (Wright, 2023).

In Louisiana, a Senate bill, SB217, could be used to prevent the creation of synthetic media during elections (Telsee, 2024). If the bill passes the Senate, it must go to the House for another vote, and if it passes there, it will go to the governor, who could sign it into law (Louisiana State Legislature, 2024).

Future regulations will likely involve safeguarding elections, preventing deceptive campaign advertising, and, more generally, preventing defamatory images, audio, and video. Unfortunately, it can be challenging to hold social media platforms responsible for defamatory or misleading information posted by users because of Section 230, which shields platforms from legal responsibility for what users post, with minor exceptions (Ortutay, 2023). In the future, there will be much more discussion of and focus on regulating AI to prevent unethical uses and promote responsible use.

Sources

  • Erickson, S. (2023, April 3). Deepfake technology poses a threat to reality. Journal of Gender, Race & Justice, The University of Iowa. https://jgrj.law.uiowa.edu/news/2023/04/deepfake-technology-poses-threat-reality
  • Keenan, N. (2023, October 23). The impact of AI on social media, pros & cons. Born Social. https://www.bornsocial.co/post/impact-of-ai-on-social-media
  • Li, V. (2023, June 14). What could AI regulation in the US look like? American Bar Association. https://www.americanbar.org/groups/journal/podcast/what-could-ai-regulation-in-the-us-look-like/
  • Ortutay, B. (2023, February 21). How Section 230 helped shape speech on the Internet. AP News. https://apnews.com/article/us-supreme-court-technology-social-media-business-internet-eb89baf1fa30e245c030992b48a8a0ff
  • SB217. (n.d.). Louisiana State Legislature. Retrieved April 1, 2024, from https://legis.la.gov/legis/BillInfo.aspx?s=24RS&b=SB217&sbi=y
  • Telsee, D. (2024, March 13). Louisiana lawmakers push for bill to regulate “AI” during elections. KALB. https://www.kalb.com/2024/03/13/louisiana-lawmakers-push-bill-regulate-ai-during-elections/
  • What is AI. (n.d.). Louisiana State University. https://www.lsu.edu/ai/what-is-ai.php
  • Wright, R. (2023, December 6). Artificial intelligence in the states: Emerging legislation. The Council of State Governments. https://www.csg.org/2023/12/06/artificial-intelligence-in-the-states-emerging-legislation/

Carolyne Tarver is a graduate student at Louisiana State University, pursuing her master’s in mass communication. She was born in Shreveport, Louisiana, and has always had a passion for writing and understanding effective communication. She completed her bachelor’s degree in psychology at Louisiana Tech University in only three years. Carolyne is interested in becoming a journalist/reporter and is focused on the effects the media can have on minority groups and the possible effects emerging technology, such as AI, could have on the internet in the future.
