Red team go!

Red and blue game pieces positioned on a 3D topographical map.

Earlier this week I posted about a custom GPT I had created to give AI-generated text a more “human” feel by randomly inserting typos. I called it TypoBot. It seemed like a fun idea at the time - a way to make AI text feel more authentic, maybe even a bit cheeky. But the next day, while casually chatting over a bowl of cereal with my partner, I realized that the tool I had spontaneously created and put out into the world without much thought was ethically problematic.

I felt a flush of worry come over me. What if someone used TypoBot maliciously? What if it was used to deceive with the intention of committing a crime? For example, what if TypoBot enabled someone to commit fraud by making phishing emails appear more believably human? As my wife reminded me, it’s important to consider how our creations might be used maliciously once they're out in the world.

She got me thinking about a process called "red teaming" used by AI developers. The term comes from cold-war era military simulations where the “red team” represented the enemy (which at the time was the Soviet Union) and the “blue team” represented the friendly forces (typically the USA). In these simulations, the red team would try to think like the enemy and exploit any weaknesses they could find. This in turn helped the blue team bolster their defences appropriately. In AI, it involves intentionally thinking through and testing all the ways that AI might be used to do harm. This is how AI developers determine how and where to put safe-guards into their products.

As web professionals, we're often so focused on creating cool new features or pushing the boundaries of what's possible that we might not always stop to consider the potential downsides. But in this brave new world of AI, maybe we need to.

“To err is human; to forgive, divine” the saying goes. The first part of this quote, originally published by Alexander Pope in his 1711 poem “An Essay on Criticism, Part II”, has found new meaning as AI becomes woven into the fabric of society and the creative process has become a collaboration with the current wave of computational technology. We now live in a world with AI that can write, make art and music, and even come up with recipes. As a result, the ability to distinguish between something created by a human and something generated by AI is becoming more difficult by the day. As more people become comfortable using AI tools like ChatGPT, Gemini, and Claude, and begin to integrate them into their creative workflows, the lines are getting blurrier. My hunch is that in the near future it might not matter and people will eventually stop caring if something was “made with AI” because everything will be to some extent. It will simply become the norm over time as access to the technology becomes ubiquitous and its incorporation into everyday tools and processes becomes unavoidable.

But we're not there yet. We're still going through a transition phase, and not everyone is onboard. I'm seeing a lot of distrust and even pushback in my news and social feeds, as traditional artists and creators (and their supporters) staunchly and vocally decry the use of AI in areas traditionally reserved for the human spirit. This shouldn't come as a surprise to anyone, least of all those whose livelihoods are at risk. But add to this the rise of deepfakes, scams, and misinformation campaigns, all upending our ability to know what's real and society now has a big challenge to navigate, both ethically and politically.

Yet, AI can be used in truly helpful and meaningful ways that benefit society and our ability to communicate with one another as it enables us to write more clearly, visually express the ideas in our heads, and even create music that evokes the emotions we’re feeling. It’s a classic case of the double-edged sword.

For now, we're caught in the middle, and the concept of trust and authenticity has become much more critical as we seek reassurance in our ability to accept what we see, hear, or read to be true. As a result, not surprisingly, some people think that if AI made something, or even helped with it, they don't like it or value it as much. It's a sentiment we might encounter from clients or users, and one we need to be prepared to address.

So what’s a creative person to do? What are UX and product designers to do?

For starters, we need to be aware of the risks, and by that I mean be aware that there are risks. We need to question our motivations, examine our intentions, and explore the potential outcomes if our creations are employed for harm. That doesn't mean we can't still create tools that might have genuine utility along with potential risks. A hammer can do a lot of damage in the wrong hands after all. But the world needs hammers, along with knives and other potentially lethal tools.

So we need to think about how we market these things, and how we educate each other about how to use them. That includes having discussions about how they can be dangerous, and why we need to be careful. These are things we have to teach kids as they're growing up, helping them understand why they need to be careful, and why they need to consider the consequences of their actions.

Perhaps we need to remember that when it comes to our use of AI, we're a lot like children, and that means we still have a lot to learn, and growing up to do. As professionals at the forefront of this technological revolution, it's on us to lead the way in using AI responsibly, to create amazing things while always keeping an eye on the potential consequences of our creations.

I'm still not sure what to do about TypoBot. If you have thoughts, please let me know in the comments!