
An AI bot passed this Wharton professor’s exam. Here’s why he’s not concerned.

“If we end up as in a status that is as good as prior to the release of ChatGPT, I think we have missed an enormous opportunity,” he said.

A professor at the University of Pennsylvania's Wharton School of Business recently tested ChatGPT's ability to take an operations management exam.

Wharton professor Christian Terwiesch was sitting with his grown children around the dinner table when the subject of artificial intelligence came up. Both of his kids had been experimenting with the nascent technology in their respective fields: “One of them is interested in design … and the other one is interested in computer science.”

Eventually, his son prompted ChatGPT to explain a sorting algorithm “using terms from Homer Simpson.”

“It was funny,” Terwiesch said, adding, “it was, from a computer science perspective, correct, and it was so easy as a user interface.”

ChatGPT users can input any prompt — The Inquirer previously asked it to write a story about Gritty — and the tool will respond nearly instantly with an answer so clear that it would be easy to believe a human had written it instead of a computer. Some examples show ChatGPT’s ability to write essays, legal briefs, and even full songs that mimic an artist’s writing style.


Terwiesch wanted to test it himself. He fed ChatGPT questions from the final exam for his operations management class — a subject he literally wrote the book on — to gauge whether the tool could pass an MBA-level course.

And it did. Terwiesch rated the bot’s “performance as a B to B-,” but noted in his research summary that some answers contained “surprising mistakes in relatively simple calculations at the level of sixth-grade math.”

He said those mathematical shortcomings prove that we’re far from using the current technology to replace trained professionals.

“Imagine a medical professional making a decision of what dosage of a drug to give ... If you would automate it as an investment adviser, this thing would be total garbage,” Terwiesch said. In these cases, professionals need to “stay away from this technology.”

Finding utility in an experimental technology

Terwiesch, who teaches in Wharton’s MBA program, encourages fellow educators to consider “opportunities where we can think about improving our learning process” by using AI tools in the classroom. That could include prepping testing materials — Terwiesch asked the bot to write new exam questions, with some success, according to his research — and other ways to lighten educators’ workloads across every grade level, including elementary and public schools.

“We all see and admire how all these teachers come to work and work their butts off in a very difficult work environment, and so I think we have to find ways to use technology to help them as opposed to just kind of worrying about the cheating point,” Terwiesch said, referring to instances where students have already used ChatGPT to cheat on tests, a concern for many critics.


When it comes to things like taking a driver’s test or a CPA exam, Terwiesch supports an outright ban, because the purpose of those exams is to certify that the test taker is qualified to perform particular skills.

But the purpose of teaching, and of learning, is to “engage with the material,” not simply recite it. Terwiesch remains optimistic that tech tools can be used for good in the classroom.

“Our job as educators is to use the technology to engage [students] differently,” Terwiesch said. “We have to find ways in the curriculum where the deepest skill that we really want to teach is taught in a novel way because if, at the end of the day, we end up as in a status that is as good as prior to the release of ChatGPT, I think we have missed an enormous opportunity.”

An example he gave was using the tool to act in the place of French philosophers, giving students the opportunity to interview them in “real time.” (ChatGPT does, indeed, speak French.) “How can we reimagine teaching French in a world where we have French pen pals available by the dozens?” he asked.

ChatGPT’s knowledge isn’t limitless, Terwiesch warned. That’s mainly because it is built on “what it has seen in the past,” meaning the technology is better viewed as a support for new developments in fields where it may be useful than as a source of them.

“We should not believe that this is the end of human thinking, and machines [are starting to] take over,” Terwiesch said.