Iowa Professors Say Students Must Be Educated About Artificial Intelligence
Artificial intelligence technology will be essential in some workplaces and a hindrance in others, university professors say.
Three professors from Iowa's public universities are working to raise awareness of the importance and contradictory nature of artificial intelligence in higher education, pointing to concerns about privacy, bias and academic integrity.
The professors, speaking to the Board of Regents on June 14, pointed to the benefits and detriments of AI use in classrooms, as it is necessary for the workforce in some occupations and hinders others.
"It's important that we are, in all cases, educating our faculty, staff and students on the use of these technologies, both from the perspective of the opportunity they offer, but also the challenges and concerns that they present," Barrett Thomas, professor and senior associate dean of the Tippie College of Business at the University of Iowa, said.
Abram Anders, an associate professor of English and the interim associate director of the Student Innovation Center at Iowa State University, said the impact of AI is being witnessed by "pioneers" at higher education institutions across the world. He said large language model technology, in which computers learn to generate human language, is raising the bar for what's possible in the classroom, but it does come with limitations.
"Even though we can see magical-like performances of these tools, it's really important to know they have limitations," Anders said. "It's not like they're sentient; they don't think and feel like a human does. They're not objective, they are likely to have some of the same biases of the human language that they're trained on. They are not authoritative. Like a human author, they cannot be responsible for the consequences of their texts and they are not ethical."
Thomas agreed with Anders about the detriments of the newer generative AI technology, including bias.
"More broadly, all AI technologies have questions of bias and that bias comes in algorithmic design, it comes in how we sample the data that is used to train these models," he said. "It comes from the way the data is generated. This is, in these cases, human-generated data and so the data you get depends on who has access to that human generation."
He also pointed to AI responses that simply aren't true, which spread misinformation and affect individual users. Thomas cited a "now infamous case" of lawyers using ChatGPT and citing case law that doesn't exist.
Academic integrity questions and classroom needs
Jim O'Loughlin, professor and head of the University of Northern Iowa's Languages and Literature Department, showed the regents several headlines about academic integrity and the use of ChatGPT. He said questions of plagiarism are not new and that Iowa's universities already have policies on academic integrity violations.
"There's already some mechanism for dealing with electronic text," he said while showing the regents a copy of UNI's Academic Ethics Violation policy. "But we are, in the section in red, working on what modest changes may need to be made to account for generative AI."
O'Loughlin said these policies must remain flexible enough to allow appropriate use in different classroom settings, since some courses may encourage learning AI for future occupational application. Some students will need an extensive understanding of generative AI, he said, while others may need only a little knowledge of it.
He pointed to the job of prompt engineers, who develop, refine and optimize AI text prompts for accurate and relevant responses. Some current students at Iowa's universities will go into these jobs, he said, and will need several classes on how to use and improve AI.
Those aren't the only cases, though, O'Loughlin told the board.
"Clearly, there are going to be some circumstances and some classes where the use of AI would be detrimental and would need to be prohibited, and faculty would need to have the leeway for that," he said.
Another issue is the current infrastructure professors rely on to determine whether student work is plagiarized, O'Loughlin said.
"There are some concerns that a lot of faculty have right now," he said. "Electronic plagiarism checkers that are already in place, they've actually struggled to accurately identify AI-produced text; particularly, a lot of false positives come up for students for whom English is not their first language."
Needing new assignments
O'Loughlin said the kinds of assignments the regents and some current professors at UNI, ISU and the University of Iowa would have encountered in their own educations will likely be rendered ineffective by generative AI.
"We are also finding, now, that some standard forms of assessments, things that we all would've done – the take-home exam, the annotated bibliography, the research paper – these are going to become less reliable indicators of student performance because ChatGPT can be used with them so easily," he said.
Written communication, argumentation and basic computer coding are easily assisted or even fully produced by generative AI, he said. Discernment, understanding whether something is good, bad or argumentative, is becoming more important in higher education, he said, and is largely taught in humanities courses.
New courses are also being offered on AI, Anders said, pointing to a class he's teaching at ISU entitled "Artificial Intelligence and Writing." The course will teach students literacy skills for understanding and developing effective prompts and finding accurate information using AI.
O'Loughlin pointed to an epidemiology class at UNI where students analyze what ChatGPT has to say on public health issues for accuracy. There are also creative writing courses that use AI to understand original story ideas.
Opportunities for AI use exist in every discipline, Thomas said, pointing to UI classes in entrepreneurship and AI as well as hands-on experiences in the Commercializing New Technology Academy.
"It's going to impact all of the research across campus and then also all of our students as they go into the workforce," he said. "And it's important that we're preparing them for that space."
Privacy concerns
Thomas said one of the major issues with using ChatGPT and similar software is that students may not realize it stores data.
Generative AI retains the information users enter in order to train its next version, including any sensitive data.
"There are changes that are coming, particularly in ChatGPT, to allow you to keep your data private, but I think there are still concerns and it requires education to make sure that people understand these and, probably in certain circumstances, prohibition against using these technologies with certain data," he said.
The time is now
Anders said the disruption of AI is happening now.
"These technologies, unlike other technologies, are not emergent in the sense that we don't have to wait five years to see what they can do," he said. "They can already do it now and if we had no further progress they would already be transforming our world."
AI won't replace jobs, he said, but a human using AI will, as the technology is focused on "ramping up" human talent.
"The last point, that I think we all three agree on, is the question is not to ban or not to ban," Anders said. "That's already gone. This is here for good. But how can we assume leadership for inventing ethical features, ones that mitigate harms in our learning communities and prepare our students to use these tools moving forward?"
Iowa Capital Dispatch is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501(c)(3) public charity. Iowa Capital Dispatch maintains editorial independence. Contact Editor Kathie Obradovich with questions: [email protected].