American University’s College of Arts & Sciences announced a fund for researching generative artificial intelligence over the summer, but the AU community knows little else about it.
According to a June 26 press release on the university’s website, the fund, named the Hedayat Fund for Generative Artificial Intelligence, “will support advancements in key areas of impact including digital and AI ethics, data privacy and security, government policy, environmental and biomedical advances and public education on AI tools.”
Hamoon Hedayat, a sales executive at Google Cloud, and Nancy Hedayat, a former member of the AU Parent Leadership Council, endowed the fund. Their three daughters all graduated from AU. Hamoon Hedayat currently sits on the CAS Advisory Board and graduated from the School of International Service in 1993 with a master’s degree in International Development.
The release also said Hamoon Hedayat would meet with CAS Dean Linda Aldoory and other CAS faculty at the Google DC summit that summer to brainstorm partnership ideas and gain insight on using generative AI in higher education. It did not disclose the size of the Hedayats’ gift, which counted toward the completion of the Change Can’t Wait fundraising campaign.
But months after the fund’s announcement, professors say they have been left in the dark about what it will mean for them and their students. Many said the only information available to faculty is what the university has already made public. While many of them feel optimistic about the fund, some are concerned that increased generative AI usage could imperil AU’s sustainability initiatives.
Kelly Joyner, director of the Writing Studies program at AU, said all he knew about the fund came from the press release.
“It sounds like, from the way that they pitched it, it’s a healthy sort of partnership, because it includes research into the ethics of AI, intellectual property ethics, as well as other kinds of ethics,” Joyner said.
Intellectual property ethics, Joyner said, are why he hesitates to use AI-powered tools like ChatGPT. According to legal researchers, generative AI models are often trained on copyrighted material without permission from the original creators of the data.
Still, Joyner said he’s curious about the Hedayats’ vision for the role of AI in society.
“I would want to ask our dean what she knows about Hedayat,” Joyner said. “What are his future plans for this? Is he seriously invested in AI as a way to streamline people’s jobs? Is he worried at all about how it might put people out of work? New technologies always do, not necessarily inevitably, but they do, so I would want to ask questions about what’s the end game.”
In an email to AWOL, CAS Dean Aldoory said the fund was still in an early stage. She said that little money is currently available and none has been used yet.
“We expect it to be helpful starting earliest spring, and into next year and beyond,” Aldoory said. “The funds plan to be used to support student seed projects, students working with faculty, faculty research and potentially some programming in responsible AI.”
Elizabeth Deal, assistant vice president and deputy chief communications officer at AU, said the fund’s stewards would be willing to provide an interview once the project is further developed.
Sophomore Haider Zaidi, the president of the CAS Undergraduate Council, said he had not heard anything about the initiative, but is optimistic about the project because CAS houses many majors with connections to AI.
“So I’m sure it provides opportunities for some to grow in their educational endeavors and I’m excited to see what role that plays in their career output and how it helps them,” Zaidi said.
Despite the lack of available information, professors have said they are also excited about the research opportunities the fund could provide.
Joyner said he was interested in studying practical applications for AI systems like ChatGPT, which are built on models trained to process language, known as large language models.
“I would dig into the viability of LLMs to produce writing and see how people are actually using it to produce writing,” Joyner said. “I would want to know how these LLMs are being used, the variety of ways they are being used and if some of these ways are sort of counter to the purpose of the writing.”
Roberto Corizzo, an assistant professor of computer science, said he had only heard of the fund from the press release. However, he said there are numerous issues that could be addressed through research promoted by the fund, like the bias against racial minorities encoded in LLMs.
As an example, Corizzo referenced a study by researchers at the Allen Institute for AI, which found that LLMs exhibit racist biases against speakers of African American English.
“Knowing this requires a major effort to mitigate racism, sexism — all these covert types of inferences that this model can make,” Corizzo said. “And if you think about automation, if the language model does any form of automated decision, then think, you know, a very harmful impact in society.”
However, generative AI has been criticized for problems beyond the content it generates. AI products require large amounts of energy to produce text and images, according to a study by researchers at Carnegie Mellon University and Hugging Face, an AI development company. According to the study, generating a single image with a powerful model consumes enough energy to fully charge a smartphone.
These energy demands have swelled the carbon footprints of companies that once claimed to be sustainable. Google Search requires far more power than before because of the AI-generated topic summaries now built into the search engine, according to a 2023 article in the MIT Technology Review. And according to Google’s July 2024 Environmental Report, its parent company Alphabet is no longer maintaining carbon neutrality in its day-to-day operations.
AU became the first carbon-neutral university in the United States in 2018. Megan Litke, director of AU’s Office of Sustainability, said that her office had not been involved in the university’s investigations into AI. Instead, she said, the office’s main focus has been spreading awareness of generative AI’s high power costs.
Litke said AU measures its carbon footprint in separate numbered categories called scopes. According to the university website, direct emissions, meaning carbon byproducts from nonrenewable fuel burned on campus, are counted in Scope One. Emissions from electricity generated off-campus for on-campus use are counted in Scope Two. Indirect emissions, like air travel for study abroad, are counted in Scope Three.
Because generative AI workloads run in decentralized data centers whose energy consumption is difficult to itemize, they are not considered part of AU’s carbon footprint, Litke said.
According to the university’s page on tracking greenhouse emissions, AU publishes itemized reports of its greenhouse gas emissions annually. The most recent report does not include external computing in its categories of tracked emissions.
Emissions from the power used to run on-campus computers would fall into Scope Two, since AU buildings purchase their electricity from an energy provider. However, according to the report, AU has had zero Scope Two emissions since 2011. According to the university website, Scope Two emissions are mitigated by reduced on-campus electricity use, purchased renewable energy credits and solar power generated by AU.
Alexander Golub, an adjunct professor of environmental science at AU, said AU’s carbon emissions matter regardless of where they are released. Even if emissions from generative AI fall formally outside AU’s reporting boundaries, he said, AU is still responsible for them because it is the final consumer of the energy expended on its behalf.
But Golub, like many other professors, said he has ideas for green AI research that could be conducted with resources from the Hedayat Fund. He said he could research the potential for synergy between renewable energy and AI in growing the economy.
“I think what is important to understand for our policy makers is that decarbonization is not a barrier for economic growth,” Golub said. “It could be a driver, and deployment of AI and alternative energy may create jobs, may improve competitiveness, productivity of jobs.”
Computer scientists at AU have also been working to reduce the carbon footprint of AI models themselves. Corizzo said there is much being done to make generative AI more efficient.
“Now more than ever, we have these large models, and they are very computationally intense,” Corizzo said. “They’re too large.”
Part of Corizzo’s research is on compression, the shrinking of AI models to reduce their required computing power.
“So, we have a large model,” Corizzo said. “How can we build a smaller model that has a similar performance, but, let’s say, saving 90% of the carbon footprint?”
Unnecessary weights can be culled from AI neural networks. This process, known as pruning, streamlines the model, reducing its energy consumption while preserving most of its accuracy, according to a 2013 article by computer science researchers. Another process, called quantization, converts a model’s numerical values from a highly precise representation to a coarser one, according to a 2021 article by AI researchers.
For instance, a weight in a neural network stored as a 64-bit value could instead be expressed in eight or four bits, which would reduce the computational load without significantly affecting reliability.
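To make those two ideas concrete, here is a minimal sketch in Python, using a randomly generated weight matrix as a stand-in for a real model; the matrix size, the 90% pruning threshold and the eight-bit target are illustrative assumptions, not details from Corizzo’s research.

```python
import numpy as np

# Toy stand-in for a trained model's weights: 64-bit floats by default.
rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256))

# Pruning: cull the 90% of weights closest to zero. In practice, the
# savings come from storing and multiplying only the surviving weights.
threshold = np.quantile(np.abs(weights), 0.90)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)
print(f"weights kept after pruning: {np.count_nonzero(pruned) / pruned.size:.0%}")

# Quantization: map each 64-bit float onto one of 255 evenly spaced
# eight-bit integer levels, an 8x smaller representation.
scale = np.abs(weights).max() / 127.0
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Converting back shows the precision lost in the trade-off.
dequantized = quantized.astype(np.float64) * scale
print(f"max quantization error: {np.abs(weights - dequantized).max():.4f}")
```

The printed error is the “fine line” in miniature: coarser representations shrink the model’s computational load, but each step down in precision nudges its outputs further from the original.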
But Corizzo said simplification of language models could exacerbate the content issues, like racial bias, that researchers are currently working to correct.
“It’s an interesting trade off to explore. There are a number of now well-known techniques to do this, but there is always a fine line,” Corizzo said. “You need to decide where to set the bar.”