The Future of Generative AI in Education

As generative AI continues to intersect with education at SHP and in society as a whole, debate about its benefits and dangers persists. On AI’s helpfulness at school, Nick Karros ‘24 remarked, “I’ll use it to generate ideas… So that can be really helpful.” However, he also said that “a goal of school is to teach us to write, and I think that kids are definitely going to fall into the trap of becoming too reliant on AI as it gets better.” How do we prevent that over-reliance while preserving the benefits Karros describes? Where do we draw the line between academic dishonesty and the legitimate use of a powerful tool? A few university educators and board members share their takes on generative AI’s place in and out of the classroom.
First to offer her opinion is Dr. Songyee Yoon, a Massachusetts Institute of Technology graduate with a Ph.D. in Artificial Intelligence and a member of the MIT board of trustees, known as the “MIT Corporation.” She is the co-president of NCSOFT, the second-largest gaming company in South Korea. When asked about the “new surge” in artificial intelligence, she replied, “I have been studying AI for the last 30 years. So it’s kind of surprising that we act as if AI is something very new.” She referenced the 2012 case reported by journalist Charles Duhigg, in which a father discovered his daughter was pregnant after a retailer’s predictive algorithms sent her targeted pregnancy-related ads. She then compared generative AI in the classroom to the calculator: “when calculators came around for the first time, it was very controversial. It’s debatable, right? Whether or not allowing use of the calculator undermined math ability of students.” She noted that the majority of schools in the U.S. now permit calculators in the classroom. “So if it could be a tool, then generative AI could be a tool, too.” Yoon made clear that AI should be used with caution and that “you should know your subject” because of “hallucinations, where AI produces an answer that’s very plausible, but not always very factual.” She cited the case of New York lawyer Steven A. Schwartz, who used ChatGPT to draft a legal brief that cited cases that did not exist.
When asked about MIT’s policy regarding generative AI, Dr. Yoon stated that it was left largely up to the individual professors and the kinds of assignments they wanted to give based on their subject. However, she also mentioned that the university was concerned with “how we can use AI for different types of research and come up with something more creative. That has been the focus on campus, rather than just allowing it or not.”
Professor Jennifer Trainor of the San Francisco State University (SFSU) English Department takes a stance similar to Dr. Yoon’s. She said she isn’t “seeing a lot of cheating or plagiarism based on AI,” but because generative AI is still an “emerging technology, it is going to shape how people write… so it’s good for students to know about it and what its limitations are and what it can do.” However, she also made clear that while students should understand the technology, “I want them to be aware that just blindly or uncritically using any technology may end up promoting more bias, discrimination, and inequality.” This was a reference to documented biases in ChatGPT’s answers. As OpenAI put it, “ChatGPT is not free from biases and stereotypes, so users and educators should carefully review its content.” Trainor also acknowledged that AI-generated writing is hard to detect reliably, and “they’ll identify non-native speakers of English more readily, so there’s a bias built into tools like, say, TurnItIn.” She compared generative AI to the rise of the Internet, when tools like Google and Wikipedia revolutionized the way students read and write. “As English professors, we see how we’ve changed a lot over the years,” she said. Having already weathered the drastic changes the Internet brought, the SFSU English Department doesn’t see the explosion of AI as unprecedented, and is thus largely open to leveraging AI’s benefits.
Trainor explained that SFSU faculty have had “a lot of meetings and conversations,” but that the university maintains the academic integrity policy already in place: “it’s against policy to present yourself fraudulently, to turn in work that isn’t yours… all of those things are already against policy.” She acknowledged a “gray area” the policy does not cover, where students use generative AI as a tool for their own work; given that ambiguity, it’s always best to check with a teacher before using generative AI. When asked whether college admission essays would carry less weight if generative AI could have been used, Trainor pointed out that “some students are already getting so much help from tutors and parents… I don’t know how admissions committees could discern which was written by a student, which was by a tutor, or which was written by AI.” In this way, generative AI is simply highlighting problems in an already flawed college application system.
Before 2021, Professor Michael Ostrovsky of Stanford University considered generative AI “more of a toy or project… it was not something that [he] ever considered using.” Like Trainor, Ostrovsky said ChatGPT and other forms of generative AI can be very useful for brainstorming ideas, and he added that they are a “really efficient method of quickly learning some basics about a subject.” As a professor of economics, he generally does not assign longer writing assignments, but he explained that some colleagues who assigned essays realized “that type of assignment could no longer be given.” Overall, though, “the university is actually pretty excited” by the popularization of generative AI. The school has taken a “firm position” that should a student use ChatGPT, the “student must disclose that.” Ostrovsky also said that to keep students from using ChatGPT, professors often “move certain assignments to in-class and the answers must be handwritten.” At the same time, he pointed out that generative AI is a “powerful tool” that has opened the door to new kinds of assignments: he may give students a topic and have them “run it through ChatGPT, get an output and… critique ChatGPT’s answer.” In this way, students learn not to blindly trust generative AI and figure out how to use it most effectively. The two main ways Ostrovsky expects generative AI to change education are by “speeding up mundane tasks, similar to how calculators make it easier to teach advanced subjects quicker,” and by making it so “more individualized education is going to be possible.” He noted that the university is in an “exploratory mode,” not wanting to ban generative AI because “it is something that students will use when they go get jobs,” and thus it is the university’s job to teach them how to use it wisely.
This sentiment was echoed by Professor Reuben Thiessen, an AI engineer and education technology leader at the Stanford Graduate School of Education, who said, “Stanford’s response has been: let’s not be afraid of this… but rather have course instructors set their own policy and explore it a little.” Thiessen explained that Stanford put together “some policy guidance around the Honor Code… What is interesting is that Stanford sees the use of AI as if one was working on an assignment with another person.” Treating ChatGPT like a fellow student means you can receive assistance from it, but you cannot copy an entire essay from it. Thiessen, like the previous interviewees, also took note of ChatGPT’s hallucinations, and explained them this way: “I heard somebody say, rather than calling it artificial intelligence, I would have called it applied statistics. Essentially what it’s doing is statistically figuring out what the best possible next letter, word, sentence is.” Thus, the new problem to solve is how to “appeal to the result of a generative AI output.” Currently, the best way to get a satisfactory answer is through prompt engineering. Thiessen explained that “with Google, you could be very brief, concise, and use keywords. Bringing that mindset to a chatbot isn’t going to work as well as sitting down and doing planning first.” He recommended that students be “verbose” when writing prompts for chatbots for best results.
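To make Thiessen’s “applied statistics” description concrete, here is a minimal, purely illustrative Python sketch (our own example, not anything Stanford or OpenAI uses): a toy model that counts which word follows which in a tiny sample text, then generates a sentence by repeatedly picking the statistically most likely next word. Systems like ChatGPT apply the same next-token idea at vastly greater scale, with learned neural networks over billions of parameters rather than raw counts.

```python
# A toy "next most likely word" model: the statistical core of what
# Thiessen describes, stripped down to simple counting.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
).split()

# Count how often each word follows each other word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation, one "best next word" at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # prints: "the cat sat on the cat"
```

Even this crude sketch hints at why hallucinations happen: the model outputs whatever is statistically plausible given what came before, with no notion of whether the result is true.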
When asked about college entrance essays, Professor Thiessen said their goal is “to convince the admissions team… that [you] are a serious person, and [you] have big ambitions and that [you’re] bringing that with you if let in.” Because such essays are so difficult even for a human to write, a chatbot could produce an acceptable one only if “you had written such a great prompt that you got that kind of response… that’s the funny thing: is that bad or actually not bad?” A great essay produced by a chatbot, at least today, requires extensive planning of the prompt by the student, which itself makes clear that the student is thoughtful and understands the requirements of the essay.
The main point all four educators made is that generative AI is not something we should shy away from or bar ourselves from exploring. It is a powerful tool capable of transforming how we work. Like any tool, it can be abused, and in the words of Trainor, “students will find a way to [cheat] anyways.” Thus, many universities have largely welcomed generative AI, and schools are now shifting toward teaching students how best to use it while still ensuring they understand what they are learning.
