The uncertAInty of ChatGPT

AU community discusses the impact of artificial intelligence in education

Grace Higgins and Vanessa Levins

American University educators grapple with how to address plagiarism after the release of ChatGPT, an artificial intelligence chatbot, on Nov. 30, 2022.

OpenAI’s release of Chat Generative Pre-trained Transformer, or ChatGPT, a chatbot that can produce writing from a prompt, has made educators like Nathalie Japkowicz, chair of the Computer Science Department at AU, worried about how AI could affect what people write.

“I’m afraid of opening my newspaper on CNN and not knowing what I can trust, what’s reality, what’s not,” Japkowicz said. “That’s the biggest fear.”

OpenAI’s product description page says GPT models can analyze human language to complete tasks such as text generation, summarization and translation. The new program can converse with users in various ways.

“The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests,” according to the model’s website.

AI software development dates back to 1956 when a program specializing in mathematical proofs called Logic Theorist was released, according to a November 2022 article from History Computer, a site focused on technology and innovation. The program sparked a new wave of technological research as interest in AI and machine learning algorithms increased.

Since then, AI has expanded to the public. A Pew Research survey completed in December 2022 found that 27% of Americans say they interact with AI on a daily basis.

Before releasing ChatGPT, OpenAI had built earlier large language models, GPT-2 and GPT-3, according to the MIT Technology Review and a paper written by staff involved in the project. OpenAI’s website stated that the company released its newest model, GPT-4, on March 14.

According to Japkowicz, a large language model is trained on large amounts of data to create associations between words and phrases, allowing it to generate text. Japkowicz said the technology behind ChatGPT is revolutionary for her because she used to create large language models manually.

“It’s amazing, because when I got started, I was doing this kind of thing by hand, like many other people,” Japkowicz said. “So it’s like a miracle.”
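To illustrate what generating text from learned associations can look like in practice, the short sketch below uses the open-source Hugging Face transformers library and the publicly released GPT-2 model mentioned above; the prompt and settings are illustrative only and do not reflect how OpenAI runs ChatGPT.

# A minimal sketch of text generation with a small, publicly available
# language model (GPT-2). Assumes: pip install transformers torch
from transformers import pipeline

# Load a pretrained text-generation pipeline. GPT-2 learned associations
# between words and phrases from a large corpus of web text.
generator = pipeline("text-generation", model="gpt2")

# Give the model a prompt and let it predict a likely continuation,
# one token at a time. The prompt here is purely illustrative.
prompt = "Artificial intelligence in the classroom"
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])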

ChatGPT has gained widespread attention across college campuses since its release. A BestColleges survey conducted in March 2023 found that over 40% of college students report having used ChatGPT or similar AI programs.

Academic professionals see ChatGPT as a possible threat to education because of its ability to produce human-like writing. Japkowicz said she’s worried that ChatGPT’s abilities and accessibility increase the potential for students to cheat on class assignments.

“I am very concerned because I know that if I give my students some code to write, they could have it generated automatically,” Japkowicz said. “[ChatGPT] doesn’t always make errors, or maybe it makes errors that I may not detect. I’m very afraid of that.”

However, Alex Schuessler, a first-year pursuing a minor in computer science, said ChatGPT assisted with coding projects he completed in his courses, helping him correct mistakes and explaining unfamiliar code. He said the program’s coding ability cannot stand alone and requires human thinking to finish the work.

“You definitely need to have some combination of a human mind mixed in,” Schuessler said. “It can help you with errors, and it can fill in gaps, but it can’t make the whole picture.”

First-year Finnley Collins said he sees ChatGPT as a useful tool in non-academic work, such as personal or professional pursuits. Recently, he used ChatGPT to help him write an email. “I was trying to write an email to interview someone, and I didn’t know how to address them; I didn’t know how casual or how formal,” Collins said. “So I told ChatGPT to act like it was a student trying to reach out for an interview. And I critiqued what ChatGPT wrote, and melded it into my own.”

Educators at AU now have to find a way to account for ChatGPT’s capabilities, as students could use the technology to complete assignments. The university responded to concerns about the program’s educational implications by offering more support to professors, said Professor Alison Thomas, the academic integrity code administrator for the College of Arts and Sciences.

“I think the first most major thing that we started doing is talking about ChatGPT as something related to good teaching,” Thomas said. “We have a Center for Teaching Excellence that is providing guidance, care and support for faculty members who are wondering how to change the way they assess student abilities, the way they design their course materials, things like that.”

AU is also looking to rewrite the current Academic Integrity Code based on technological developments, Thomas said. She said the code was implemented in 2007, before the widespread use of handheld devices and before changes to the AU curriculum. The university last updated the code in 2020.

“The ideas that we’ll be pursuing are focused on what a new academic integrity system at AU would look like, what it should look like, given the commitments we’ve made to the educational models, like Habits of Mind that you see in AU Core,” Thomas said.

Provost and Chief Academic Officer Peter Starr said AU wants the new code to be a collaborative effort between students, faculty and administration. “It would be hard to underestimate the extent to which we want the ultimate code to be the product of a university-wide dialogue because students, faculty and administrators all have a vested interest in a code that is as clear as can possibly be,” Starr said.

As the administration works to create a code that considers new developments in technology, AU faculty members like Naomi Baron, professor emerita of world languages and cultures, are evaluating ChatGPT’s abilities.

Baron said ChatGPT produces responses that lack errors, differentiating it from human language and making it more detectable as a chatbot.

“You’ll never get a spelling error,” Baron said. “You don’t see a grammatical error. You don’t see a punctuation error. If it looks too good to be true, it probably is.”

Baron said that ChatGPT can form perfect sentences, but it doesn’t deliver perfect writing because the program lacks creativity.

“It is explicitly engineered to be safe and boring so it is not harmful,” Baron said.

Roberto Corizzo, a computer science professor who researches machine-generated text, said that ChatGPT differs from its AI predecessors.

“ChatGPT’s conversational capabilities are unparalleled with prior attempts,” Corizzo said. “That’s why it’s so convincing. And it’s gaining so much traction since their responses are really articulate, realistic and useful.”

Although ChatGPT has more advanced conversational skills than its predecessors, Corizzo said, there are linguistic indicators that make machine-generated text easy to identify.

“ChatGPT-generated text seems very realistic and good at first sight,” Corizzo said. “Then when you check all these indicators that define the richness of the language – the repetitiveness, the emotional semantics, the punctuation marks – it’s evident the human language is much richer. So you can tell apart machine-generated from human-generated quite easily.”

Mark Nelson, a computer science professor with a research focus on artificial intelligence, said the technology combines language with artificial neural networks that quickly gather large amounts of information from the internet to predict the most appropriate and effective words on a topic.

OpenAI’s description of the model claims the company has accounted for potential misuse and has “made efforts to make the model refuse inappropriate requests.” However, the developers said they are still improving the program’s response to “harmful instructions” that may encourage discriminatory behavior.

The Moderation API, a tool that prevents the program from outputting unsafe content, has allowed OpenAI to better regulate the program’s replies to users. ChatGPT also cannot offer opinions, Nelson said.
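As a rough illustration of that kind of screening step, the sketch below calls OpenAI’s moderation endpoint through the company’s Python library to check a piece of text before it is displayed; the helper function and surrounding logic are hypothetical and do not represent how OpenAI’s internal filtering works.

# A minimal, hypothetical sketch of checking text against OpenAI's
# Moderation API before showing it to a user.
# Assumes: pip install openai, and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

def is_safe(text: str) -> bool:
    """Return False if the moderation endpoint flags the text."""
    response = client.moderations.create(input=text)
    return not response.results[0].flagged

reply = "Example chatbot reply to be checked before display."
if is_safe(reply):
    print(reply)
else:
    print("[reply withheld: flagged by moderation]")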

He said ChatGPT has an “almost bureaucratic style” in which it lectures users if they want it to output something it considers controversial.

“If you ask things like, ‘who is the most overrated philosopher?’ it will say something like ‘Well, I don’t have opinions on this, but also, you can’t really say that someone’s overrated or not because people have different strengths,’” Nelson said.

Nelson also said he explored how the program would respond to academic questions requiring more critical thinking. He input several questions from a recent exam his students took, looking to see if ChatGPT would answer accurately. The results were unimpressive, Nelson said.

“It produced a paragraph of text that was sort of incompetent,” Nelson said. “It tends to ramble. The questions that it tends to do well at are ones that are fairly vague and just require sort of an approximate answer.”

Because ChatGPT is a new program, professors still have unanswered questions. Nelson said he is unsure whether it will influence how students learn since they can find the information needed to “take shortcuts” on assignments on Google and other software.

Nelson said it remains unclear whether the program will even prove useful for students. But Japkowicz said she believes ChatGPT has very dangerous implications that extend beyond education.

Some AI software, like The Next Rembrandt, can create art by analyzing previous paintings. Japkowicz said she worries about losing the human component of creativity if artistry, like Mozart’s songwriting, for example, were traded for AI programs.

“I’m listening to a human being who poured his heart into what he wrote,” Japkowicz said. “And then, well, here I have very pretty music, but it doesn’t mean anything.”

Baron said she’s more worried about the reaction of educational systems to the technology than ChatGPT itself.

“My biggest concern is that we’re panicking, and we’re panicking about plagiarism,” Baron said. “Plagiarism is not new. Cheating is not new. This is just another way. So what educators need to do is slow down.”

As professors figure out how ChatGPT will affect students in the classroom, Baron said they have discussed how to adapt to the program. She said professors have explored ideas to shift toward group-oriented work, multimedia projects or assignments that prompt students to answer questions that require critical thinking. Baron also said academic professionals should acknowledge the program’s benefits for students, especially for encouraging creativity.

“A lot of people have writer’s block,” Baron said. “So to get started, there’s nothing wrong with [using ChatGPT]. It’s a problem of whether you let the tool do all the writing as opposed to just getting some ideas.”

Baron said she has heard her colleagues consider whether they could use the program within writing courses as a method of having students recognize the difference between good and poor writing.

“A lot of people have talked about getting ChatGPT to write an essay and then have students critique it, talk it through together to say, what’s good, what’s bad,” Baron said. “And you start thinking about style, you start thinking about grammar, you start thinking about what would be a more interesting word here.”

Corizzo said he sees another value in ChatGPT for students who speak English as a second language. “If you’re asked to write an essay, and you don’t have excellent English articulation skills, it can really help to rephrase,” Corizzo said. “The initial content is yours, but ChatGPT is just helping you rephrase it, putting it in a way that makes sense from an English perspective. And so that is an excellent and legitimate use.”

Baron said ChatGPT requires more observation before educators can draw conclusions about its place in academia.

“I mean, it’s going to take months to figure out what are the things that might actually be useful,” Baron said. “And I’m sure we’ll find some things. But right now we’re grasping at straws.”

This article is a part of AWOL’s 32nd Magazine edition