
Reframing the Artificial Intelligence conversation



A recent letter signed by more than 200 musical artists calls on "AI developers, technology companies, platforms and digital music services to cease the use of artificial intelligence (AI) to infringe upon and devalue the rights of human artists." Though the letter acknowledges that "when used responsibly, AI has enormous potential to advance human creativity," it argues that irresponsible uses pose "enormous threats to our ability to protect our privacy, our identities, our music and our livelihoods." AI, in other words, will upset the status quo.

The letter is representative of the sorts of objections we consistently face when we consider the negative consequences of AI. It addresses issues of economics, identity, privacy, and the erosion of “values” (e.g., human creativity expressed through music). In addition to mirroring the concerns many of us have about AI in other domains and fields, the letter illustrates a flaw in the way we think about AI. In particular, identifying “responsible” and “irresponsible” uses of AI is unhelpful because it masks an underlying set of trade-offs we are making even when our use of AI is “responsible.”

AI will change the world. As it does so, it will have varied effects on all of us. AI will define some new “normal” to which all of us will need to adjust. When we define certain uses of AI as irresponsible, we will tend to neglect responsible uses of AI that have negative consequences for various members of the population. Rather than considering “responsible” and “irresponsible” uses of AI, we need to decide what we are unwilling to give up regardless of the benefits AI promises to provide for some or most of us. AI’s benefits are enticing, but every change involves loss. We need to seriously consider that loss.


While we often think of technology as a means of progress, without some stable understanding of what it means to be human, we can’t be sure whether we are moving forward or backward … only that we are moving. Progress can’t just mean that we have invented some new device that has a specific set of benefits. It also has to mean that we arrange our individual and collective lives to privilege ideas like human creativity (or certain expressions of it), however difficult they may be to define, because we recognize them as valuable to our humanity. So, why might we decide that we care about human creativity?

First, we may conclude that humans are not simply consumers. Our relationships are not only transactional but storied. To some degree, we develop a relationship with those who produce the music, art, and literature we enjoy. Music, art, and literature are expressions of those who create them. As such, we encounter another human through the work they produce. We enter an interesting sort of dialogue that familiarizes us with other people who have taken the time to record their thoughts through lyrics, prose, pictures, and other media. While it is possible to enjoy a song, painting, or book without that connection, we can also point to many instances in which knowing something about the artist or author, or about the life setting that gave rise to the work, enhances our appreciation of it. The final product is important, but the human effort and story behind the product often tie us to it in a more intimate way (as any parent who has hung their child's drawing on the refrigerator will intuitively understand).

Second, we may come to certain convictions about what sort of effort we value and why. As someone who did a lot of art in high school and has done a lot of writing since, I tend to believe that the effort involved in such creative realms is valuable in and of itself. Could AI produce art and literature of a quality comparable to human work? If it can't now, I assume it will be able to at some point in the future. The product, however, isn't the point. Instead, the point (or perhaps the thesis) is that effort … the energies we invest, sacrifices we make, and difficulties we suffer … forms us into a particular sort of people in a way that a life without that effort cannot. The "product" that AI can't replicate is not the art, literature, or lyric, but the human character forged through success and failure as we engage in our various creative endeavors.

Finally, we may find that chasing our desires and interests through digital technologies has not created a situation in which we are the best version of ourselves. Can we really say that journalism in the digital age is the journalism we want? Can we really say social media has encouraged us to be the sort of people we’ve always wanted to be? Do we think we can get from our current state to a more desirable state by outsourcing more of ourselves to technology? Do we need to reorient our desires and interests so that the technological “tail” is no longer wagging the human “dog”? These sorts of questions point us back to something far more crucial than the “responsible” and “irresponsible” use of technology. They prompt us to question whether our goals and aims have led us to the wrong place. If so, they also prompt us to consider where we want to be and how we intend to get there.

AI governance is, at best, a half-measure because it assumes that AI is inevitable. AI (and any other technology) has to be up for grabs because technology should not determine who we are as human beings. We need to consider what we are unwilling to surrender to AI and why. Human creativity and the efforts associated with its creation are not trivial. It isn’t at all clear that replacing human creativity with AI creativity would be beneficial. To assert that AI will be a “net gain” for society, we must do more than identify its benefits. We must also account for the losses that may reduce human flourishing.

AI may well serve humanity, but we will never be sure of that until we consider deeply what it means to be human.

Dr. James Spencer currently serves as President of the D. L. Moody Center, an independent non-profit organization inspired by the life and ministry of Dwight Moody and dedicated to proclaiming the Gospel and challenging God’s children to follow Jesus. He also hosts a weekly radio program and podcast titled “Useful to God” on KLTT in Colorado. His book “Christian Resistance: Learning to Defy the World and Follow Jesus” is available on amazon.com. He previously published “Useful to God: Eight Lessons from the Life of D. L. Moody” and “Thinking Christian: Essays on Testimony, Accountability, and the Christian Mind,” and co-authored “Trajectories: A Gospel-Centered Introduction to Old Testament Theology.”
