My post on artificial intelligence in the context of academic integrity this week looks a bit different! I created a video in the style of the “Subway Surfers TikTok story time” trend, where the screen recording of gameplay is below the main content in an effort to keep the viewer’s attention.

I admit that this isn’t the most adrenaline-inducing topic, so it’s a bit of a thought experiment for me to see if it works!

The info on UVic’s response to artificial intelligence can be found starting on page 212 of this Senate document.


Transcript of the video for reference:

Artificial intelligence and academic integrity: one of the most exciting innovations of our lifetime and one of the lamest parts about the university experience (and good alliteration!)

I know, I know, we might not want to talk about it, but it’s becoming more and more a part of our reality, so we have to at least consider the relationship between the two – specifically, UVic’s response to AI and academic integrity.

In the UVic calendar, academic integrity is defined as “honesty, trust, fairness, respect, and responsibility” for our work. Examples of academic integrity violations include plagiarism, unauthorized use of an editor, and cheating.

But like many of our other policies, this isn’t as future-proof as we want it to be, and our perceptions of academic integrity can and have changed over the years.

Just like here at UVic: in 2023, the Office of the Vice-President Academic and Provost released a statement that said UVic as an institution commits to integrating AI tools in a “responsible, ethical and equitable manner that enhances learning and teaching as appropriate”.

Words like that – responsibility, ethics, equity – kind of sound like our academic integrity statement. And that’s basically the stance of many Canadian universities. It’s undeniable that AI is going to surpass the exploration phase and jump into the application phase quickly. It’s probably already happening in your personal network as we speak; so why not capitalize on that opportunity to show people how to use it with integrity?

UVic hasn’t given black-and-white guidance as to what the expectations are when it comes to generative AI, and that’s kind of the point. Not because we’re still figuring out how it works – come on, we’re a research university – but because we’re still figuring out what the ethics are in all of this. Instructors can’t use AI to mark student work because it lacks human subjectivity, but maybe we can use it to help us translate a work that’s only available in Chinese.

Sitting at the crossroads of intelligence and artificialness, generative AI has the benefits of both “human” and computer, but it also has the flaws of both; in a way, it’s double the opportunity for flaws compared to what we’d walk into as just humans. So we can look at it like a Google of sorts: scraping the internet, with the awareness that it might not be correct, or it might be outdated, or it might be biased in its interpretations.

When the onus of understanding artificial intelligence’s place in academic integrity rests on students, lest they be the ones to suffer the consequences, I think our general confusion comes not from our misunderstanding of artificial intelligence, but from our misunderstanding of academic integrity policy.

Think about it: instructors say “academic integrity, academic integrity” like a broken record and I get it – the integrity of student work can be a reflection of the integrity of an instructor’s teaching or the university’s reputation. But students are skipping through the academic integrity pledges on Brightspace like it’s the TOS of some random internet account.

And if I asked one of my classmates to name the 7 categories of academic integrity violations in the calendar, chances are they probably wouldn’t be able to answer. Chances are, you wouldn’t be able to answer either, because there aren’t 7, there are 6. But you believed me, right?

Generative AI gives universities the unique and timely opportunity to look at their academic integrity strategies and begin to future-proof them. If in 2025 we are acknowledging the existence of AI and the potential it has, then we need to codify that in our policies. And if we’re going to commit to decolonization as a university, it’s also time for us to understand that our power doesn’t come only from policy; it comes from community.

What we need from the generative AI debate isn’t more bureaucracy; it’s more support for students in a confusing time when misinformation and distrust of experts are second nature to research. The debate isn’t just about ChatGPT. It’s about the people; about the past, the present, and the future of our university’s values and DNA.