We must ask how this technology fits into the Creation story, lest it take away from meaningful work and connection.
*****
It’s not uncommon to hear artificial intelligence (AI) described as a new “tool” that extends and expands our technological capabilities. Already, people are using AI in thousands of ways. All tools help accomplish a task more easily or efficiently. Some tools, however, have the potential to change the task at a fundamental level.
This is among the challenges presented by AI. If in the end it is not clear what AI is helping us to achieve more efficiently, this emerging technology will be easily abused. AI’s potential impact on education is a prime example.
Since the days of Socrates, the goal of education has been not only for students to gain knowledge but also to acquire the wisdom and experience to use that knowledge well. Whether the class texts appeared on scrolls or screens mattered little. Learning remained the goal, regardless of the tools used.
In a recent article at The Hill, English professor Mark Massaro described a “wave” of chatbot cheating that is making it nearly impossible to grade assignments or even to know whether students completed them. He has received essays written entirely by AI, complete with fake citations and statistics but meticulously formatted to appear legitimate. The cheating hurts the dishonest students, who aren’t learning anything, but attempts to flag AI-generated assignments (a process often powered by AI itself) can also yield false positives that bring honest students under suspicion.
Some professors are attempting to make peace with the technology, encouraging students to use AI-generated “scaffolding” to construct their essays. However, this is kind of like legalizing drugs: there’s little evidence it will cut down on abuse.
Consider also the recent flood of fake news produced by AI. In an article in The Washington Post, Pranshu Verma reported that “since May, websites hosting AI-created false articles have increased by more than 1,000 percent.” According to one AI researcher, “Some of these sites are generating hundreds if not thousands of articles a day. … This is why we call it the next great misinformation superspreader.”
Sometimes this faux journalism appears alongside otherwise legitimate articles. Often the technology is simply used by publications to cut corners and feed the content machine. But it can also have more sinister consequences.
A recent AI-generated story alleged that Israeli Prime Minister Benjamin Netanyahu’s psychiatrist had committed suicide. The fact that this psychiatrist never existed didn’t stop the story from circulating on TV, news sites, and social media in several languages. When confronted, the site’s owners said they had republished a story that was “satire,” but the incident demonstrates how the sheer volume of this kind of fake content makes it nearly impossible to police.
Of course, there’s no sense in trying to put the AI genie back in the bottle. For better or worse, the technology is here to stay. We must instead develop the ability to distinguish its legitimate uses from its illegitimate ones. In other words, we must know what AI is for before experimenting with what it can do.
That will require first knowing what human beings are for. For example, Genesis is clear (and research confirms) that human beings were made to work. After the fall, toil “by the sweat of your brow” became part of that work. The best human inventions throughout history have been the tools that reduce needless toil, blunt the effects of the curse, and restore some dignity to those who work.
We should ask whether a given application of AI helps achieve worthy human goals – for instance, teaching students or accurately reporting news – or whether it offers shady shortcuts and clickbait instead. Does it restore dignity to human work, or will it leave us like the squashy passengers of the ship in Pixar’s WALL-E – coddled, fed, entertained, and utterly useless?
Perhaps most importantly, we must govern how AI shapes our relationships. Even our most impressive inventions – the printing press, the telephone, the internet – facilitated more rapid and accurate human communication, yet they also left us more isolated and disconnected from those closest to us. Artificial intelligence clearly carries an even greater capacity to replace human communication and relationships altogether (consider chatbots and AI “girlfriends”).
In a sense, the most important questions as we enter the age of AI are not new. We must ask: What are humans for? And how can we love one another well? These questions won’t easily untangle every ethical dilemma, but they can help distinguish between tools designed to fulfill the creation mandate and technologies designed to rewrite it.
*****
This column was first published to Breakpoint.org on January 8, 2024, and is reprinted with their gracious permission. We’re sharing it because it’s a good article on an important topic. But we have another reason: we wanted to give RP readers this sample of Breakpoint’s Daily Commentaries to, hopefully, pique your interest. Breakpoint has an American focus and is not specifically Reformed (though some of its writers are), so we differ in some notable respects: they are anti-evolution, while RP is specifically 6-day creationist; and while we’ll highlight problems with the Pope both when he is acting Roman Catholic and when he is not, they’ll stick to the latter. So, as with everything, there is a need to read with discernment. But when it comes to the hottest cultural battles of our day – sexuality, gender, the unborn, and God’s sovereignty over “every square inch” of creation – they get it right, consistently, and they are timely, often responding to events that happened just the day before. That’s why Breakpoint articles have been featured in our Saturday 6 column for years now. If this article grabbed your interest, you’ll want to sign up here to get Breakpoint sent right to your email inbox each day.