A Mental Model for AI Use

I recently spoke with my potential advisor, who said she would like to hear my views on AI in the classroom when we meet next semester.

The next day, in a moment of serendipity, Cory Doctorow wrote about reverse-centaurs in his newsletter.

My first thought was, "This is a fantastic mental model for AI."

Don't become the assistant in your own thinking.

According to the essay, a centaur is a person who uses AI to assist them with mundane tasks. Doctorow's example involves using Whisper to scan 40 hours of audio files to find a specific claim from a podcast, something he never would have had time to do himself.

A reverse-centaur is a person who becomes an assistant to the AI. The example given is a freelancer who is hired for a job but not given enough time to complete it correctly. The freelancer has no choice but to delegate all of the work to AI, because they're doing a job that used to be done by dozens of journalists and interns.

These are obviously the extreme ends of a spectrum of AI use, and most users are likely to fall somewhere in the middle. But even if you aren't a full reverse-centaur, it's important to remain mindful of how you're using AI in the moment.

My views on AI in the classroom

Students are going to use AI whether it’s against the rules or not. They need to be taught how to use it in a way that doesn’t disrupt their own thinking.

A fundamental aspect of note-making is learning and/or clarifying your thoughts through personal connections and analogies. Because personal thought is so foundational to a course on generative note-making, I think it's perfect for introducing these ideas.

One of the biggest hurdles currently is school administrations that are focused on preventing the most obvious reverse-centaur behavior: cheating. But this is one of those areas where we have a choice. We can bury our heads in the sand and hope impulsive teenagers will follow the rules and avoid AI altogether, or we can face the reality that there will always be students who figure out new ways to cheat. These aren't the students administrators should be worried about.

The students who would benefit most from education have good intentions. They're not the type who would normally cheat on an exam or plagiarize a paper, but they don't see using AI against the rules as cheating. They may genuinely believe they're learning while unintentionally outsourcing the most important cognitive work to AI.

This is because AI makes it easy to fall into a version of the collector's fallacy, which warns us that we often believe we know something when we actually just know about something. This connection just occurred to me while writing this newsletter. It's a brand new seed that I need to grow a bit before writing about it further.

Education can help these students understand the difference between AI use that tricks you into thinking you're learning and AI use that actually aids in learning.

Centaurs in the classroom

Centaur mode for the classroom could look like AI scanning your notes to consolidate all your thoughts on a specific topic. It could be summarizing a freewriting session to eliminate all of the noise. It could be uploading an academic paper you just read, along with your analysis, to make sure you understood it correctly.

The common factor here is that you've done your own work first. You've synthesized the reading and written on the topic to clarify your own thoughts before you even touch AI.

The only thing I would use AI for before this point is searching for relevant papers and sources on the topic to read.

Using AI to produce a first draft for you is a good example of the spectrum I mentioned earlier.

Centaur: You've done all of the research and have written quite a bit yourself. You feed all of this information into the model and have it provide a first draft using your words. Now all of that previously chaotic information is organized and ready for your informed rewrites.

Reverse-Centaur: You've done little to no research or writing yourself. You feed the model your topic and ask for a first draft essay. AI has now done the bulk of the work, and all of the "thinking," and you're just its assistant, making small edits.

Anyone who uses AI regularly probably shifts between centaur and reverse-centaur modes without realizing it. Using this mental model to be aware of which mode we're in can help us make more deliberate choices about when and how we use AI.