How We Know The Future of Higher Education Is Already Here

After the release of ChatGPT, professors at Pittsburgh colleges and universities are grappling with how new artificial intelligence tools will change work in the classroom.
ILLUSTRATIONS: SHUTTERSTOCK

If you had asked Pittsburgh professors a year ago to picture a robot that can answer any question and complete any written task, most would have envisioned a scene from a science fiction blockbuster, gears whirring and steam blowing from an android’s seams.

When asked the same question today, professors picture ChatGPT and other artificial intelligence tools, all easily accessible by cell phone and computer, that now have the potential to upend the pillars of academia.

Since ChatGPT’s release in November 2022, professors have questioned whether the new technologies should be feared or embraced.

Carnegie Mellon University has no plans to change its academic integrity policy, since unauthorized use of AI already falls under its category of “unauthorized assistance.” Other local schools, at the time of this reporting, were scrambling to incorporate rules for the new tech into existing academic frameworks.

The University of Pittsburgh recognizes that AI is a “powerful tool and can be used for good or evil” and has been providing resources for professors since November, according to a statement, but it is taking a “deep dive” into the new tech through a provost-commissioned committee before establishing any official policy.

At Chatham University, officials say they are trying to address AI within current academic policies rather than creating new ones that may quickly become outdated as the technology advances. Duquesne University is similarly considering a policy that is “flexible enough to grow with the technology,” according to a statement, and is “encouraging thoughtful experimentation” with the new tools.

Point Park University is organizing a committee of students and faculty members to make recommendations on how to best approach policy changes, and the Community College of Allegheny County is hosting common hours for faculty to discuss the new technology and its impacts on their programs.

Despite these varying approaches, Pittsburgh schools agree on two points: AI is here to stay, and students, professors and administrators must adapt now. Aarti Singh, a professor in the School of Computer Science at Carnegie Mellon, says the window to reform higher education will not stay open forever.

“The technology is out there, and people will use it regardless,” Singh said in an interview. “So if we don’t account for it, then it will be used to spread misinformation. It will be used in ways that we may not be prepared for.”

Understanding the New Technology

AI tools such as ChatGPT are large language models, or LLMs: artificial neural networks that learn skills by analyzing enormous amounts of digital text. By pinpointing statistical patterns in that text, LLMs learn to generate text of their own, which can range from blog posts to poems to real-time conversations.
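Under the hood, that pattern-spotting boils down to a simple idea: given the words so far, predict a likely next word. The toy Python sketch below illustrates the principle with simple word-pair counts over a made-up, four-sentence corpus. It is purely illustrative (the corpus and function names are invented for this example); real LLMs such as ChatGPT instead use transformer neural networks with billions of parameters trained on vast swaths of text.

from collections import Counter, defaultdict
import random

# A tiny stand-in corpus; real models train on billions of words.
corpus = (
    "students write essays . professors grade essays . "
    "students ask questions . professors answer questions ."
).split()

# "Pinpoint patterns": count which word tends to follow each word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    # "Generate text": repeatedly sample a likely next word.
    words = [start]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("students"))
# e.g. "students ask questions . professors grade essays ."

Scale that same predict-the-next-word loop up by many orders of magnitude, and swap the counting table for a deep neural network, and the result can hold a conversation rather than parrot a four-sentence corpus.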

ChatGPT exploded in popularity at the end of 2022 and had 100 million monthly users within two months of its debut — for comparison, it took TikTok nine months to reach that many users, and Instagram took two-and-a-half years.

Concern grew among tech experts as ChatGPT entered the mainstream, and many warned that if AI continued to advance without strict regulation, it could eventually replace human thought and jobs. Fears quickly spread into higher education as well; by December, articles with headlines such as “The College Essay Is Dead” questioned how writing samples could be used to grade undergraduate students, and even grant Ph.D.s to graduate students, when writing processes could now be significantly automated.

Now, as institutions are still grappling with how to address the technology in official university policies, professors are left to make their own rules — ones that they deem best for the given topics, levels and requirements of their courses — and adopt new strategies to incorporate, or deliberately avoid, AI use in their classrooms.

Mark DiMauro, assistant professor of English literature and multimedia and digital culture at the University of Pittsburgh-Johnstown, says there is no need to start this school year with the anxieties that initially surrounded AI technologies.

“When we invented the calculator, people didn’t stop doing math. When we invented color photography, people didn’t stop creating art,” he said in an interview. “AI is just another tool, and as long as we demystify it and contextualize it, there should be zero problem with applying it effectively and ethically.”

Responding to the Challenges

Professors now face new challenges in creating their syllabi and conducting their courses. The first is deciding how much AI use, if any, is allowed.

Singh says that because current AI tools can only complete “mundane, regular, low-complexity tasks,” they can help students in higher-level classes be more efficient and aid, not harm, their learning of difficult topics. However, in introductory classes that focus on simpler thought processes and don’t involve much critical thinking, Singh says AI tools are dangerous to student learning because they could replace nearly all student work.

DiMauro says that he prohibits AI use in his lower-level classes for that very reason — he doesn’t want students to become dependent on AI for foundational skills such as researching and writing. To ensure students are doing their own work, he gives assignments that AI cannot complete. His go-to strategies are incorporating current events into his assignments and asking students about their personal experiences with class material, as AI tools are not trained on data that can answer either type of question.


Grading assignments also proves to be a challenge, as students now have a tool that can gather information, condense that information into concise summaries, find alternative phrasings to improve clarity and more. So when professors grade papers, DiMauro says, they may not be able to tell what was written by students and what was written by AI.

DiMauro says tools such as ZeroGPT that claim to detect AI-generated text within students’ writing are unreliable, and he recommends instead that professors shift to grading students’ thinking process in addition to their final written product. Professors can designate grading checkpoints throughout an assignment and ask students how they organized their research, made improvements to their rough drafts, came to their final conclusions and other questions to ensure that no unauthorized shortcuts were taken with AI.

John Slattery, director of the Carl G. Grefenstette Center for Ethics in Science, Technology and Law at Duquesne University, says when he is suspicious of a student using AI on assignments without his permission, he asks that student to explain class concepts to him in an oral exam to assess their true understanding.

But Slattery says that professors’ focus shouldn’t be on catching students who misuse AI but rather on teaching students how to ethically engage with the technology in any situation. He teaches a class devoted to the ethics of technology and says he plans to start his fall class by introducing students to ChatGPT, asking it questions and dissecting its answers for bias, inaccuracy and other ethical concerns.

Embracing New Opportunities

Despite the looming challenges, Pittsburgh professors are finding ways to not only address the changes that AI has brought to higher education but also use AI to improve the learning experience.

“AI is going to enhance student creativity, student efficiency and student effectiveness,” says DiMauro, “and ultimately, in the long run, that’s what we as educators are here to do.”

The technology can act as an assistant to students in higher-level classes — a brainstorming assistant for generating prompts, a research assistant for gathering facts, or even a marketing assistant for creating taglines and logos, he says. Using AI as a helper doesn’t destroy human creativity but rather makes students more efficient and gives them more time to discover connections between ideas and critically analyze their work.

For example, DiMauro says it could take a student three weeks to brainstorm and develop a concept for a fictional short story on their own. Another student could develop a story idea in a few hours with the help of AI by asking ChatGPT to generate prompts, find patterns in previously written stories or simply respond to ideas. The student who uses AI then has more time to write the story and refine and improve their ideas — all of which are higher-level thinking skills that the student will benefit from practicing.

Inara Scott, an associate dean at Oregon State University’s College of Business and a leading voice for adapting higher education to AI, says the technology encourages creativity by forcing students to be unique and original if they want to stand out.

“You have to ask yourself, ‘What am I bringing to the world? Can a computer do this just as well as me?’ And if the answer is ‘yes,’ we’ve got a problem,” she says.

Stepping Into the Future of Higher Education

Some Pittsburgh professors are already looking ahead to how AI will continue to shape higher education and society beyond this academic year.

CMU is home to one of seven new National Artificial Intelligence Research Institutes and received $20 million from the U.S. National Science Foundation to begin work on June 1. The institute, directed by Singh, will first analyze how emergency responders, health officials and others who operate in time-sensitive environments make decisions. Researchers will then identify where AI can be incorporated into those processes and create AI tools that ethically support those officials’ work.

Singh says members of the institute are also creating a curriculum about AI’s place in society that will be shared with 40 public schools and 30 community colleges nationwide. She has been working closely with teachers from the Winchester Thurston School in Shadyside to design the curriculum and contemplate how AI can be taught to future elementary, middle and high school students.

CMU has also received one of 14 “Future of Work” grants from the National Science Foundation. The university is teaming up with CCAC to investigate how components of that school’s information technology courses can be aligned with AI tools to enhance and accelerate learning.

Michael Rinsem and Justin Starr, endowed professors of technical curriculum at CCAC, say the grant has already allowed for impressive developments, such as the creation of AI tutors that can help students identify their mistakes in real time. Starr says CCAC is committed to incorporating AI in all of its offerings, as made evident by the new $40 million, three-story Center for Education, Innovation and Technology on its North Side campus.

“A broad philosophy we’ve been talking about at CCAC is that AI is one of the new skilled trades and one of the new things that we need to integrate into our [general education] programs. If we want to prepare students for careers with AI, we need to not just integrate it into the computer science classes,” he says.

Some job postings already list ChatGPT experience as a preferred skill, DiMauro says. For this reason, Slattery says, universities would be doing students a disservice by not including AI in future courses.

“How can we serve these students to make sure that they’re not blindsided when coming out of college and that they actually have the skills to go into the workforce and say they know how to ethically work with AI?” Slattery says.


Oregon State Leads the Way in AI Use Standards

Inara Scott, an associate dean at Oregon State University, in April issued a clarion call to academics across the nation.

“I thought we could create personalized discussion questions, meaningful and engaged essay assignments and quizzes that were sufficiently individualized to course materials that they would be AI-proof.

“Turns out, I was incorrect.”

ChatGPT is creating a “crisis” in the classroom, and university officials and professors need to take action, she says, noting that efforts to create course materials that can “outwit” AI have already proven unsuccessful. She and her colleagues at Oregon State are among those taking the lead in developing policies and guidelines that send a clear message to students about what is and is not acceptable AI use.

Oregon State in February established a university-wide policy, which trickles down into different rules and guidelines for each school and individual course.

“The policy was intended to encourage faculty not to just reflexively block AI but to think about how it could be used and how to structure assignments in light of it,” says Scott.

An example is the AI policy of Oregon State’s College of Business, which states that a student’s use of AI without proper citation will be penalized like any other case of plagiarism. The policy includes a citation format so that students can credit AI as a source, giving them the freedom to reference AI in projects.

Scott and OSU are working behind the scenes to ensure that specific, detailed guidance is available to every student. The university’s top priorities are to eliminate blurred lines, head off academic integrity issues and keep pace with the technology.

“The worst possible situation would be for students to not have any guidance at all,” says Scott. “The way that I would approach it in a law class versus the way somebody approaches it in a data analytics class is going to be different. We just need students to understand what the expectations are for the class they’re in.”

As part of the new policies, Scott is developing a set of icons that faculty can include in their syllabi, each representing an AI task such as generating ideas, rephrasing text, adjusting grammar and spelling, generating outlines or first drafts, or analyzing data.

The icons give each professor the freedom to choose which tasks they are comfortable with their students using AI for, and they will be a universal way to set expectations without confusion or unexpected loopholes.

“Someone will realize that higher education is focused on the wrong things — the wrong outcomes, the wrong content — and make something better, a higher education for critical thinking, ethics, empathy, human dynamics and problem-solving, perhaps, skills students really need.”

— Abby Yoder


Emma Malinak is a rising junior at Washington and Lee University, where she is majoring in journalism and English and seeking a minor in Africana Studies. Abby Yoder is a rising senior at Point Park University, where she is studying multimedia.
