Generative AI in academia: How Virginia Tech professors are approaching GenAI in 2026

(From left to right) Dr. Ivan Hernandez, Dale Jenkins, Prof. Liam Weikart. (Copyright 2026 by Virginia Tech - All rights reserved.)

BLACKSBURG, Va. – The use of generative AI in academia has sparked contentious debate in recent years as programs like ChatGPT and Sora have advanced. While scholars may not entirely agree on how generative AI tools should be handled, both in and outside the classroom, many see a need to acknowledge the technology's impact and develop plans that outline where and when it should be used.

10 News spoke to three professors at Virginia Tech about their opinions on generative AI programs, their classroom policies, and their personal experiences on the subject. All three had differing views and approaches.


Dr. Ivan Hernandez, an associate professor of psychology, takes a permissive, AI-forward approach to his teaching.

“For my own classroom, AI is similar in function to a friend/tutor who is capable of offering outside help. It can aid in explaining and understanding, but is also capable of doing the assignment entirely for a student. Therefore, the same policies for receiving outside help apply in certain circumstances, but not others.”

Dr. Ivan Hernandez, associate professor of psychology at Virginia Tech

Dr. Hernandez’s research focuses on the “intersection of AI and the workplace,” and many courses that he teaches involve the applications of AI, as well as how it can be used effectively. He believes avoiding AI usage altogether is misguided.

“Avoiding general AI usage altogether seems misguided because of its pervasiveness in various work environments. We have already conferred a great deal of social value from these technologies. There is a common misperception of these technologies not yet demonstrating usefulness or being generally unwanted. However, within medicine, Generative AI like OpenEvidence (a ChatGPT-like application that uses peer-reviewed medical literature as the basis for its answers to questions about treatment) is widely used.”

Dr. Ivan Hernandez, associate professor of psychology at Virginia Tech

Dale Jenkins, a senior instructor at Virginia Tech’s School of Communication, takes a direct stance against GenAI use in his classroom.

“In my classroom, I do not allow students to use ChatGPT or AI to complete assignments; I include this in my syllabi and make it clear that this is a violation of the Virginia Tech Honor Code. Using artificial intelligence in the writing process for organizational purposes is significantly different from using AI to compose the text for you.”

Dale Jenkins, senior instructor at Virginia Tech’s School of Communication

Jenkins teaches media writing, magazine writing, and international communications courses. He believes it is important for students not to rely on AI to do their work for them.

“I do not use generative AI in my teaching. My goal is to teach students how to write for media, not to teach them to rely on AI to do that work for them.”

Dale Jenkins, senior instructor at Virginia Tech’s School of Communication

Liam Weikart is an instructor at Virginia Tech’s Department of Sociology; he allowed students to use generative AI on certain assignments one semester, but encountered some flaws that dissuaded him from its use.

“One semester, I did allow it for a few assignments, as long as it was cited. Many times I noticed that the answers generated were indeed often decent, but so obvious (bullet-point; completely bland and bloodless; utterly lacking in any humanity; always taking the most bland, middling positions, etc.), and could be used as a learning tool for students--however, at that point there is little way for me to determine if they actually read the output, or merely cut-and-pasted.”

Prof. Liam Weikart, instructor at Virginia Tech's Department of Sociology

After that initial experiment, Prof. Weikart decided to cut back on GenAI usage in his coursework.

“I do not allow it for homework assignments or papers. AI use is hard to prove, however, so much of it comes down to encouraging student honesty, and the ways in which it is doing a disservice to them (students).”

Prof. Liam Weikart, instructor at Virginia Tech's Department of Sociology

Dr. Hernandez tends to embrace AI in his classroom and is more technical in his explanation for why he supports its implementation. He believes that the submission of students’ chat logs can allow him to confirm their understanding of the subject.

“Something that is uniquely beneficial about AI is that, even though it is equivalent to receiving outside help, unlike a human, its usage is more documentable, and so students are able to submit their chat logs with their homework assignments so you can see how their understanding developed, and what ways of using AI are especially beneficial to concept mastery based on the students’ performance on later assessments.”

Dr. Ivan Hernandez, associate professor of psychology at Virginia Tech

Generative AI has also improved drastically in the past three years, going from a seldom-used novelty to a potentially viable creative tool. The “Will Smith Eating Spaghetti” test is an informal benchmark that illustrates the rapid improvement of generative video models.

In 2023, a video using the generative abilities of Modelscope went viral, showing Will Smith eating a bowl of spaghetti. While odd and uncanny at the time, it has since improved to a hyper-realistic video that is almost indistinguishable from a real recording, most recently using Seedance 2.0.

Dr. Hernandez has acknowledged GenAI’s rapid development and explained how those in academia can benefit from the improved models.

“Sometimes courses that are regularly taught by an instructor have that instructor drop out with no time to find a replacement, and the topic is very specialized. This happened recently where students needed a course: Structural Equation Modeling (a type of statistical analysis that examines the associations between many variables in a complex social process simultaneously). This course was only offered intermittently in an outside department and was required for graduation. The sudden change affects students who need to graduate that semester.

In the Psychology Department, we pilot tested a graduate course in Structural Equation Modeling that would be partially taught by AI. Students received a learning objective sheet every week, and 50% of those weeks, they learned the objectives from a traditional textbook, and for the other half, they learned only with ChatGPT. All of the students were able to successfully pass the exams associated with the learning objectives, suggesting that AI can already serve as a replacement for traditional textbooks, and even partially for instructors."

Dr. Ivan Hernandez, associate professor of psychology at Virginia Tech

In addition, Hernandez mentioned the use of AI in the workforce, and how it is important to prepare students to be ready to utilize these tools outside of the classroom.

“Nearly all students have used ChatGPT. It looks like the same percentage of students that have a Spotify account (another question I ask). For usage in education, I would assume most students use it as much as possible or at least use generative AI whenever using it is more convenient than not using it…”

“In the workplace, Gallup finds that about 40% of US employees report using AI at least a few times a year. Therefore, one would hope that a large percentage of students are appropriately prepared for entering the workforce.”

Dr. Ivan Hernandez, associate professor of psychology at Virginia Tech

Jenkins, however, is concerned about the possible overreliance on AI in the classroom, and Weikart expressed frustration with perceived student laziness.

“I admit that AI has a place in our culture, but I consider total reliance on it to be a major problem. Any new technology can fill a void, but that does not mean that individuals should embrace it as a panacea.”

Dale Jenkins, senior instructor at Virginia Tech’s School of Communication

“It has made students lazy, and me, annoyed. I think the proliferation of AI-like tools has muddied the waters to the point where it is often unclear what is defined as AI.”

Prof. Liam Weikart, instructor at Virginia Tech's Department of Sociology

Most professors set their own AI outlook and policies; Virginia Tech offers guidelines rather than rules.

“There are no university-wide policies, but there are guidelines provided by the university to help faculty and staff navigate how to accommodate it.”

Dr. Ivan Hernandez, associate professor of psychology at Virginia Tech

Jenkins, in particular, states that using AI to complete assignments is an honor code violation; however, he emphasized that its use for organizational purposes or as an assistant (like Grammarly) is permissible.

“Students can use AI tools for organizational purposes in my classroom or an app like Grammarly, but that does not eliminate the reason to learn the course material. Becoming a stronger writer makes a student much more marketable in the workforce, especially in media; that is the ultimate goal that I have for my students.”

Dale Jenkins, senior instructor at Virginia Tech’s School of Communication

Prof. Weikart also noted the autonomy the university gives instructors in setting their own AI policies.

“Upon last check, university policy allows a fairly high degree of autonomy across departments and professors--we mostly get to decide.”

Prof. Liam Weikart, instructor at Virginia Tech's Department of Sociology

All three of these professors, despite differing views, understand that it is nearly impossible to ignore the impact of artificial intelligence today. In the 2026 Super Bowl, just under a quarter of all broadcast advertisements involved AI in some way. Ads for ChatGPT, Google Gemini, and other AI assistants were prominent, and some ads, such as one for Svedka, were themselves created using visual generative AI tools.

Google’s AI features are also hard to avoid, as the world’s most popular search engine offers no simple option to turn off the generative AI summary that accompanies nearly every query.

Many corporations also appear to be all in on AI. Companies like Google and Amazon have heavily invested in GenAI technologies over the past few years, as have companies that frequently pair with the U.S. government, such as Palantir. Despite this, a Pew Research study found that six in ten experts have “little to no confidence” in U.S. companies to responsibly develop and use AI, a sentiment which Prof. Weikart partially echoed.

“It is indeed the toxic combination of things like AI plus the profit motive and private ownership that lead to the biggest problems, in terms of labor (job-killing, etc.) and the environment.”

Prof. Liam Weikart, instructor at Virginia Tech's Department of Sociology

Jenkins also expressed concern about the development and motives of OpenAI in particular, and about the trust placed in information made available by AI assistants.

“The ownership of OpenAI remains uncertain, but the billionaires vying for control of it do not quell my concerns about society giving its unfettered trust to the info made available on AI. Who determines what is factually accurate? Who monitors the info disseminated to the general public?”

Dale Jenkins, senior instructor at Virginia Tech’s School of Communication

Outside of academia, data centers in Southwest Virginia remain a controversial topic among residents, including the proposed Google data centers in Botetourt and Montgomery Counties.