
Most U-M students are already using ChatGPT. Engineering sophomore Varun Agrawal even used it to seek advice for moving into his apartment. | Photo: J. Adrian Wylie
“This could be what makes the next rocket to Mars possible,” says Ravi Pendse, his voice sparking with excitement. “This could inspire the next vaccine that saves lives across the world. I could go on and on. The potential for new knowledge creation is enormous.”
Pendse, U-M’s vice president for information technology, chief information officer, and a professor of electrical and computer engineering, is talking about “generative artificial intelligence.” Perhaps the best-known example of GenAI is ChatGPT, released last November, which currently draws more than 1.5 billion visits every month and can perform tasks previously thought to be exclusively products of the human mind—including composing essays, writing computer code, explaining concepts, and even engaging in conversation, all in response to prompts written in nontechnical language.
Understandably, GenAI tools are creating a sea change in higher education. Proponents say the technology can improve learning outcomes through personalized tutoring and augment research by automating redundant tasks. On the other hand, it raises thorny questions about data ownership and enables cheating that’s hard to detect.
“We could have taken two approaches,” says Pendse. “One approach would have been: Let’s just not do anything about it. Let’s just wait. But that’s not the Michigan way. Michigan leads.”
To that end, Information & Technology Services has partnered with Microsoft to develop its own university-hosted GenAI services that it will roll out in the fall semester. Although Ivy League and other Big Ten schools are offering guidance and conducting experiments, Pendse says, “we believe we are the only university at this time offering these kinds of services to our campus community.”
To take a thoughtful approach to GenAI, Pendse and Provost Laurie McCauley formed the Generative Artificial Intelligence Advisory Committee (GAIA) in May. Consisting of eighteen faculty, staff, and students, the committee spent the summer researching and discussing the potential impacts of GenAI on research, education, and life at U-M. Its initial report, released July 24, recommends the integration of GenAI to enhance education and enrich research, while also calling for the university to establish best practices for privacy and integrity.
“It is the most up-to-date, comprehensive report we have at the moment,” says GAIA member Varun Agrawal, a sophomore in the engineering school. “That isn’t to say it wouldn’t change.”

“It’s an important technology,” says public policy prof Shobita Parthasarathy, “but it’s also limited and problematic in a lot of ways.” | Photo Courtesy of Shobita Parthasarathy
A companion website offers guidance for students, faculty, and staff on how best to utilize GenAI in light of its strengths and weaknesses.
“There are so many different possibilities, it’s incredible,” says Josh Pasek, associate professor of communication and media and political science, and associate director of U-M’s Michigan Institute for Data Science. “One possibility is arguably it renders obsolete traditionally done things in higher education.”
“My hair is not on fire at the moment,” laughs Shobita Parthasarathy, public policy professor and director of the Science, Technology, and Public Policy Program. “It’s an important technology, but it’s also limited and problematic in a lot of ways.”
These problems include equity of access and data security—driving factors behind the development of U-M’s three-tiered GenAI services: U-M GPT, Maizey, and U-M GPT Toolkit.
A cursor blinks at the bottom of the U-M GPT screen, waiting for instructions. It’s used the same way as the chatbot from which it gets its name, but with a few key differences: free access to multiple models (GPT-3.5, GPT-4, and Llama 2), integration with screen readers for visually impaired users, and privacy protection.
“Create a New AI Bot,” invites the home screen of Maizey. Below it, two text fields and two drop-down menus let users point Maizey toward a data source, give it instructions—in layperson’s terms, no coding required—and “within a matter of minutes, which would have taken you years to do, you have now created a useful tool for the entire world to use,” Pendse says. “It will dramatically change how we analyze our data together.”
U-M GPT Toolkit is the realm of Mars rockets and lifesaving vaccines, and it more closely resembles what might come to mind when you imagine powerful computing technology—line after line of code inscrutable to all but those with deep technical knowledge. In a nutshell, Toolkit will enable faculty and researchers to build their own language models based on curated data sources.
“There could be a model around health care; there could be a model around water conservation,” Pendse says. “We will only be limited by our imagination.”
—
“U-M can approach GenAI as an opportunity to rethink how we teach and define meaningful learning objectives, promote inclusion and equity, and assess learning,” reads the GAIA report. “GenAI literacy will be vital in the future and using it ethically and responsibly will necessarily become part of our academic mission.”
Ethics being the operative term. Much has been made of GenAI-enabled plagiarism; for that reason, a handful of U.S. school systems, from New York to Los Angeles, have banned ChatGPT outright.
“You can just tell it to write an essay, and it will, and it will be pretty good,” Agrawal notes. But “if you’re not going to try [writing] it yourself, if you’re not going to try to logically figure it out, you’re going to miss out on the opportunity to learn from it.”
Of course, cheating is nothing new. From Wikipedia and Google Translate to Quizlet and Grammarly, the internet is replete with tools that can enable intellectual laziness—or streamline the educational process. It just depends on who’s using them.
“For many years I’ve had students—international students, for example—who use translation software,” Parthasarathy says. “And I’ve certainly had students in my class who have lifted large portions of text from websites in the past. All without ChatGPT.”
While the GAIA report acknowledges that GenAI could be used for plagiarism—at one point even using the phrase “cheating machine”—it also advises against banning it. Not only are detection tools like Turnitin and GPTZero inconsistent, it argues, GenAI is a part of the world students will graduate into.
“We are, at the end, an educational institute,” Agrawal says. “We don’t want our students to have a disadvantage going out into the world of not being familiar with these resources.”
Many have already plunged ahead. In a campus-wide survey with 6,037 responses, nearly 60 percent of faculty and students said they have already used GenAI—with students reporting more experience than their teachers. Agrawal confirms that he and his friends have used ChatGPT for a variety of purposes: constructing responses, automating tasks, research, outline summaries, coding.
“Just recently I was moving into an apartment, so I asked if it could give me a list of things I should be prepared for,” he says. “Sure, it’s not a replacement for actually just asking someone who’s been through it, but it’s a good start.”
Still, the issue of cheating remains. The GAIA report recommends that “honor codes, pertinent academic policies, and the definition of plagiarism be reviewed and potentially modified before Fall 2023.” As of press time, only the schools of nursing and social work have rewritten their academic policies to explicitly define “inappropriate” use of GenAI as plagiarism—and teachers will still have to decide what’s inappropriate in the context of their classes.
—
“I think the right policy is embracing it,” says Pasek. “It’s going to be hard to teach without acknowledging it pretty soon. Think of any discipline that uses statistics: You need to use a calculator, right? You need to engage with tools that are going to be more sophisticated, because that has become part of the process. And I think on an underlying basis, the students are probably rightly recognizing that they’re going to be using these tools in their jobs, and that they’re not going to be asked to do it the old-fashioned way.”
Pasek has spent his summer formulating a course for this fall on AI in human communication. He hopes to help students develop the skill of prompt generation—asking GenAI the right questions to get the most useful information.
It’s an important skill given ChatGPT’s biggest limitation: hallucination, a colorful term that describes its propensity to make up information. The large language models that drive chatbots like ChatGPT are based on prediction and probability, not accuracy, which tends to produce answers that Pasek describes as “seemingly right-ish” and “close to the truth but not quite.” ChatGPT is also prone to logical inconsistencies, biases, and a lack of citations.
“If you don’t have the foundational skills to understand what it’s doing, you can’t actually use it successfully,” Pasek warns. “You’ll make silly errors and not realize it.”
Those silly errors can have grave implications. Parthasarathy gives the example of the Michigan Integrated Data Automated System used by the Michigan Unemployment Insurance Agency, which between 2013 and 2015 falsely flagged more than 40,000 unemployment claims as fraudulent. In 2022, the state paid a $20 million settlement in the ensuing class action lawsuit, but not before 1,000 people filed for bankruptcy.
“We need to start teaching young people at a young age how to think critically about technology, how to evaluate technology, how to evaluate AI, and how to respond to it,” she cautions.
Although the GAIA report recommends giving instructors the freedom to choose whether and in what capacity to use GenAI, it urges them to at the very least reevaluate their methods in light of its effects.
“[R]egardless of one’s opinion on the potential, benefits, and risks of GenAI,” it reads, “this technology cannot be ignored.”
—
Timothy Cernak is no stranger to AI. An assistant professor of medicinal chemistry at the College of Pharmacy and of chemistry at the College of Literature, Science, and the Arts, he has been using algorithms in his lab for more than a decade to streamline the process of drug discovery. He says ChatGPT is particularly useful for searching, processing, and condensing “100 and some-odd years” of literature.

“We could have taken two approaches,” says U-M VP for information technology Ravi Pendse (third from left, with ITS staff). “One approach would have been: Let’s just not do anything about it. Let’s just wait. But that’s not the Michigan way. Michigan leads.” | Photo: J. Adrian Wylie
“In my field every day, there’s probably four dozen papers that are relevant to our research that come out. They’re dense, they’re the topic of, like, five PhD theses,” he says. “ChatGPT has no problem digesting all of that information. … It’s crazy how much time it can save.”
Cernak says he’s never seen a tool that worked so well on its first attempt—“like a light switch,” as he describes it. “And it’s so easy. Like it was literally just typing a short paragraph into the computer.
“It’s not flawless,” he adds. “Human expert chemists aren’t going to be replaced instantly.”
Will they be replaced eventually? Cernak doesn’t think so.
“The way it’s often described in our field is, chemists won’t get replaced by AI, but chemists will get replaced by chemists who use AI,” he explains. “I don’t feel threatened by it at all, because we asked it something that was very vanilla and textbook-y.”
And Cernak is optimistic about its potential uses, particularly in the realm of biodiversity.
“I’m so passionate about endangered species, we are attempting to invent medicines for them because many of them are dying from disease. That was a question we could not have asked last year,” he says. “I believe that the biggest problems facing the world are environmental. And so I hope that we can leverage it to solve those challenges.”
But increased computing power also means increased carbon emissions. Using GenAI requires not only a great deal of energy but also water to cool the data centers—roughly a pint for a conversation of twenty to fifty exchanges.
How will U-M reconcile this fact with its stated goal of carbon neutrality by 2040? Pendse says they partnered with Microsoft because the company has been carbon neutral since 2012 and aims for carbon negativity by 2030. Additionally, mindful decisions about which GenAI tool to use can mitigate carbon footprints.
“Do we use a large language model, or is just a language model enough? The difference could be potentially many, many megawatts of power being saved,” he says. “Our faculty members are very thoughtful and very environmentally conscious to make those right choices.”
—
Environmental impact aside, the GAIA report lists a number of other potential unintended consequences of GenAI. It could create overly high expectations for student performance, leading to stress and burnout. A loss of the human touch, and even inappropriate relationships with human-seeming AIs, could lead to isolation. Reliance on GenAI may impair original thinking and problem solving, as well as perceptions of creativity, curiosity, and knowledge.
“ChatGPT almost flattens the knowledge-generating process, because it says the knowledge comes from nowhere and has no lineage,” Parthasarathy observes. “If it were to get to a place where it worked better, and it becomes ubiquitous, what does that mean for the way we imagine the way knowledge gets made?”
And then there’s the world outside academia. Will GenAI render certain jobs obsolete, without creating new ones to take their place? At what point does work become something that the tool has done instead of the person using the tool? How does a university prepare students for a workforce—indeed, a world—that can change so much and so quickly in less than a year?
“The bottom line is this,” Pendse says. “In the fall semester we are all going to learn together, and we’re going to evolve with it.”