Feed the Machine

Using AI ethically isn’t a one-time decision—it’s an ongoing process of experimentation, reflection, and dialogue. It invites us to question our role within a vast, often opaque system where the link between individual choices and collective outcomes can be difficult to see.

Feed the Machine is a game that creates that conversation. It puts players in a tension: do they use AI and risk potential hallucinations, or do they work without it and potentially move slower than a colleague? Designed for high school, CEGEP, and undergraduate audiences, the game asks players to reflect on AI use in their work.

Feed the Machine is designed to spark conversation with students about the benefits and drawbacks of AI adoption. It sets the stage for meaningful reflection on how our use of AI could have long-term impacts. Made by researchers, supported by teachers, and tested with students, the game does not tell but shows the trade-offs of AI use. It is playable in 30 minutes, easy to prepare, and quick to learn.

A Deeper Breakdown

Feed the Machine is built around the tension of using or not using an AI tool. Each turn, players choose: do they ‘Feed’ the AI a prompt and hope to get back a helpful answer, or do they ‘Work’ alone to get exactly what they need? To show this tension, cards are split in half: one side is to Feed, the other is to Work. The Work sides represent ethical principles of AI use (e.g., being transparent, disclosing AI use, or checking an AI’s sources), while the Feed sides point to potential challenges or implications of AI use (e.g., stealing ownership or training a system). These mechanics ask players to reflect on their role.

As players compete to finish news stories, they feel the pressure to feed the AI in hopes of gaining a lead on their opponents. They are incentivized to work with the AI: it gives them more resources, but at the cost of randomness. Relying on the AI too much might give them false prompts, negative points, or useless resources. Working alone, however, might be too slow to keep pace with their opponents.

Finally, the game ends with a twist. While players might think they have won, depending on how much they used the AI, it might beat everyone. Each game has a scenario that ends with the AI either taking the coveted job all of them are competing for or causing environmental harm.

We are seeking educators in various spaces to test the game with their students. You can download the cards with printing instructions and the rulebook for free. You can help us by filling out this feedback form.

The game rules and printable materials will be available shortly.

Scott DeJong

PhD Candidate
Scott DeJong is a PhD candidate in Communication Studies at Concordia University studying educational game design, play, and disinformation. Currently, his FRQSC-funded dissertation studies how games and play relate to disinformation practices and critical thinking skills. His work connects creative practice with research to improve media literacy approaches, as witnessed in his research board game Lizards and Lies. His past work, funded by the Social Sciences and Humanities Research Council of Canada, constructed games on issues of digital literacy and older adult mistreatment. In his free time, Scott co-produces a podcast speaking with scholars and practitioners on the role and design of humour in games. He is an active member of the TAG (Technoculture, Arts and Games) Lab, the Applied AI Institute, and the mLab.
Ann-Louise Davidson

Professor | Director of Innovation Lab

Ann-Louise Davidson, Ph.D., is the Director of Concordia University’s Innovation Lab and the Innovation Strategic Advisor for the Faculty of Arts and Science. She is also Associate Director of the Milieux Institute for Arts, Culture and Technology, where she directs #MilieuxMake, the institute’s makerspace.