
Black tech entrepreneurs say AI can’t be ignored


Many people are afraid of artificial intelligence, commonly known as AI.

They worry that the technology could cause harm by creating false narratives and fabricated video and audio of real people or incidents.

On Thursday, the day after a Philadelphia discussion on artificial intelligence and the Black community, the Washington Post published a story about an FBI warning that foreign adversaries might use AI to interfere in American elections by spreading disinformation.

People also fear that AI will cost them their jobs. One reason for the Hollywood writers strike last year was a concern that AI would replace human writers for television and films.

Artificial intelligence is a term for any technology that enables a digital computer or computer-controlled robot to perform tasks usually done by human beings.

Sulaiman Rahman, founder and CEO of DiverseForce, a “strategic capital solutions firm” that helps to develop diverse talent pipelines to employers and corporate boards, said people need to face the realities of AI.

“There’s a lot of fear about AI,” Rahman said this week. “I sent a LinkedIn post where I had AI duplicate my face and voice and create a speech, and I wanted to see how our community responded.”

He had alerted people that an AI tool created the message, and he got two kinds of responses: Some people thought the technology was “amazing,” and others said, “This is scary.”

“We have to face that fear,” Rahman said. “We can’t put our heads in the sand. That’s not going to effect change.”


Rahman spoke with The Inquirer the day after DiverseForce teamed up with WURD Radio and the P4 Hub to sponsor “Artificial Intelligence, Black Realities: Unpacking AI’s True Impact,” a panel discussion Wednesday at the P4 Hub in Germantown.

About 200 people crowded into the P4 Hub Wednesday night for the standing-room-only conversation, which was part of Philly Tech Week 2024.

In addition to Rahman, the panel included Akinyemi Bajulaiye, founder of Pentridge Media; Shelton Mercer, founder and CEO of Virtuous Innovation and founder of Audigent, an Inc. 5000 company; and Deborah Roebuck, DNP, founder and CEO of Going Thru The Change. Technology and lifestyle expert Stephanie Humphrey, who regularly appears on a WURD radio show, moderated the panel.

Decades-old technology, breaking through

Artificial Intelligence has been around for decades, Rahman said.

But since ChatGPT came onto the scene in late 2022, there has been widespread concern that an artificial “super intelligence,” known as artificial general intelligence (AGI), in which AI becomes more capable than humans, is coming sooner than once expected.

He compared the recent innovations in artificial intelligence to how computers and mainframes were around for decades, on university campuses and in government and military institutions, before personal computers arrived.

“AI has been around for the last 30 years, but it was behind the scenes. Now it’s in the hands of individuals,” he said.

Shelton Mercer, a tech innovator, said that the proliferation of AI presents both opportunities and challenges for Black people and other people of color. Mercer said the panel discussion was needed to help Black people “make sense of the bombardment, whispers and chatter that people are hearing about AI.”

(Mercer is also on the board of the Lenfest Foundation, which owns The Philadelphia Inquirer.)

“There’s a lot of work for leaders to do to help demystify AI and show how this technology is influencing us currently and what it will do in the future,” Mercer said.

“Authenticity is the thing I talk about. We talk about artificial intelligence, but we’ve got to make sure we are tripling down on authenticity because it is more difficult to discern [what is real].”

People are already using AI when they interact with chatbots on a company website. It’s also at work in smartphones that suggest the next word while writing a text.

Can algorithms be biased?

One example of AI bias discussed Wednesday came up when someone mentioned asking for images of doctors, and the only physicians “imagined” by the AI tool were white or Asian.

That bias about who can be a doctor in American society came from the biases of the human programmers, Mercer said. Human beings program the algorithms, the sets of instructions a computer follows to solve a problem.

“We have seen those types of cultural biases show up in the way these machines generate output and the way the AI assumes that certain roles, like doctors, lawyers, or scientists, can only be seen in some populations, whether white Europeans or Asians,” Mercer said.

“Society suffers from the ills of many isms and phobias and unfortunately, the tools that are created and programmed by human beings are going to be affected by those kinds of ills,” Mercer added.

Deborah Roebuck, a women’s health and executive coach, said that when the COVID pandemic shut things down in 2020, she didn’t know how to use Zoom, nor did she know about social media. But she signed up for a course to learn about technology.

“If we don’t get on board, we are not being models for the other people,” said Roebuck, who will be 70 this year.

A technology that needs guidance

In the audience, Chris Brown, 31, of Mt. Airy, admitted the conversation was new to him, even though he is a young man.

“I’m an anti-social media person and it’s all scary to me,” said Brown, who works in financial services. “I’m trying to open myself up to learn more about it.”

Tatzanna Jackson, 25, a student at Community College of Philadelphia, said she had recently returned to college and wants to complete her degree. She said she came to make contacts and meet people who can help her make career choices.

While people may be afraid of AI, Rahman said they should compare it to how fire is used.

“Fire can be used to cook food or keep your home warm, or it can be dangerous and destructive,” he said.

There are already discussions about how AI should be governed or monitored and held in check both in the United States and globally, he said.

“It’s inevitably going to be a part of our lives, and if we want to leverage it for good, we need to be aware of the implications as it becomes part of our society and participate in the conversation of how we are going to raise this toddler — I think of AI as a toddler — and create it in a way that is responsible.”


