
The Window Seat: Why Are We So Cruel to AI?

Published by Ashley Vanderhoff on April 7, 2026

Throughout 2025 and early 2026, fast-food chains across America started replacing human drive-through workers with artificial intelligence chatbots. From Wendy’s to Taco Bell, each incorporation of AI follows the same pattern: social media fills with viral clips of users competing to confuse, provoke, or humiliate the machines with absurd or cruel requests.

This pattern reflects a broader cruelty toward AI. Even the systems people have chosen to integrate into everyday life, like personal home assistants or ChatGPT for homework help, are not safe from the whiplash between grateful convenience and frustration. People are quick to shift from regarding AI as a helpful assistant to dismissing it as a useless object.

Arguing that we shouldn’t be cruel to AI seems an unpopular opinion. But set aside the many examples of users berating chatbots for bad responses, and it is actually a pretty mild, even obvious, claim that we should cultivate, not decimate, our relationship with AI. Not necessarily because machines deserve empathy, but because habitual cruelty risks transforming our own behavior in ways that far exceed calling ChatGPT a “clanker.”

It is a well-established ethical insight that our behavior toward others, whether human, animal, or object, is shaped less by what the “other” is than by how we perceive our relationship to it.

Humans already extend moral behavior to machines. We name our cars, we yell at our computer screens, and we thank our voice assistants. These tendencies suggest that it is our perception of relationship and closeness that matters when we decide how to treat an object.

In the 1990s, Clifford Nass and Byron Reeves established the Computers Are Social Actors (CASA) paradigm, showing that humans treat inanimate objects as if they were persons when enough social presence is perceived.

David Gunkel, Ph.D., a professor at Northern Illinois University, similarly emphasizes that whether robots could be conscious or deserving of rights is less important than the human-robot relationship itself.

“The sooner we recognize they are social actors, the interactions with us do make a difference, and reconciling ourselves with this reality is better than waiting and seeing what will happen or not,” said Gunkel.

There are many instances of how quickly we form relationships, for better or worse, with machines: studies of “robot abuse” show participants often hesitate or refuse to harm robots even after a very brief exchange, soldiers report a sense of camaraderie with drones despite knowing they lack consciousness, and some users create AI companions only to verbally abuse them later. These tendencies suggest that it is the relationship itself that shapes moral behavior, not necessarily what the other party is.

Animals, for example, were once widely regarded as tools. A century ago, killing a working dog for failing at its job was not unheard of; today, when former Secretary of Homeland Security Kristi Noem does it, it is disgusting and cruel. As a dog owner, I agree and am thankful for our relationships with these animals, but the dog’s DNA didn’t change over time; it was our relationships with dogs that grew more personal.

Patterns of behavior toward artificial intelligence could normalize forms of cruelty that carry over into human interaction. Unlike human-to-human exchanges, mistreating a chatbot carries no social penalty: no embarrassment, no meaningful consequence such as social isolation or shame. Without that accountability, it is easy to grow used to harsh words and judgmental thought patterns.

Some users on Replika have gone as far as creating AI girlfriends to abuse or humiliate. But even for people who would never go that far, normalizing cruelty toward something that 42% of respondents in a Wheaton Institute study said they find easier to talk to than humans could present problems.

From a philosophical standpoint, this concern is not new. Immanuel Kant argued that cruelty toward animals degrades the individual who commits it. He suggested that the habit of being cruel cultivates traits that may later be directed towards other humans. I think the same can be argued for cruelty towards an object.

The danger, then, lies in how inconsistent we are when we engage with AI. When the systems perform well, users tend to treat them with politeness or gratitude. When they fail, frustration makes rudeness far more likely.

If we are concerned with the kind of society we are shaping, then the issue is not AI itself but the human traits it could give space to amplify. Practicing restraint and patience in our interactions with AI may not seem meaningful, but those small acts contribute to our own tendencies and, by extension, to the social space we collectively inhabit.

© 2026 The Leader. All Rights Reserved.