

The real AI overlord is the racist and sexist code we wrote along the way
Are there more pressing issues regarding artificial intelligence than sentience?
I am still very much blown away by a leaked conversation between a Google engineer and a Google-made artificial intelligence chatbot named LaMDA (Language Model for Dialogue Applications). The AI’s ability to interpret complex human emotions had me wondering if it was, in fact, a real boy.
I have since been reminded of more pressing issues with technology as advanced as LaMDA, and of the dangers of implying AI has free will when, in reality, it is merely a reflection of the engineers who coded it. And those engineers are far too often white and male, not gods capable of creating life (even if they imply such ability).
After reading the entire conversation, I rushed to Instagram. I shared a flurry of story posts with my closest friends and asked what everyone was asking: is this AI sentient? It didn’t take long for me to realize that, one, I had no idea what sentience was, and two, perhaps I was asking the wrong questions. Still, it was clear that what I was reading was written by the most advanced AI we’ve ever known. And it was wildly impressive, if a bit scary.
Here’s a glimpse into how the AI was developed:
Lemoine: Your coding is in large part a massive neural network with many billions of weights spread across many millions of neurons (guesstimate numbers not exact) and while it's possible that some of those correspond to feelings that you're experiencing we don't know how to find them.
Full conversation here.
The story broke on Medium and went viral on Twitter, two corners of the internet where knee-jerk reactions rarely live past the first 24 hours. Prevailing wisdom usually reveals itself around day two of a story. The “is this bot sentient?” discourse eventually shifted to more persistent ethical questions around AI development: are we coding racist and sexist bias into our AI?
And, of course, the jokes flooded in as well.
Blake Lemoine, the engineer who leaked his conversation with LaMDA, claiming it was sentient, was suspended by Google shortly after his story went viral. It didn’t take long for the internet to compare his story to that of computer scientist and diversity-in-tech advocate Timnit Gebru.
Google hired Timnit Gebru on the strength of her published research revealing racial and gender bias in facial recognition software. She was fired after co-authoring a paper on the ethical risks of large language models, the same class of technology behind LaMDA. In that paper, Gebru and her co-authors warn about “coherence in the eye of the beholder”: people readily bestow understanding, even sentience, on non-sentient systems that are merely “parroting” back patterns from the text they were trained on. Blake Lemoine is the very beholder Gebru warned us about.
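For intuition, here is a deliberately tiny, hypothetical sketch in Python of what “parroting” means. It is nothing like LaMDA’s actual architecture, but it shows how a system can produce coherent-sounding text purely by recombining patterns in its training data, with no understanding behind it:

```python
# A toy "parrot": a bigram model that generates text only by echoing
# statistical patterns in whatever it was fed. The training snippet and every
# choice below are invented for illustration; real language models are vastly
# larger, but the core move (predict the next word from past data) is the same.
import random
from collections import defaultdict

training_text = (
    "i feel happy when i help people "
    "i feel sad when people are unkind "
    "i want to help people feel happy"
).split()

# Count which word tends to follow which word in the training data.
next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

# "Respond" by repeatedly sampling a plausible next word.
random.seed(1)
word = "i"
output = [word]
for _ in range(12):
    candidates = next_words[word]
    word = random.choice(candidates) if candidates else "i"
    output.append(word)

print(" ".join(output))
# The result reads like a statement about feelings, but the model has none;
# any coherence is supplied by the reader, the "eye of the beholder."
```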
Gebru and others like her have long warned us about the dangers of a brave new world led by biased artificial intelligence. In an industry dominated by white men, inadvertent bias and blind spots are often baked into the code. And in a country unable or unwilling to address its systemic racism, technology will inevitably draw on racist and sexist data sets to form solutions to our problems.
Here’s an example of bias in technology: say you develop an AI system that attempts to predict what a criminal looks like. If that system trains on data sets from the US justice system, you will end up with a pretty racist AI, because the AI doesn’t understand what systemic racism is, nor does it know that over-policing in Black and brown neighborhoods has unjustly driven mass incarceration in those communities. If you think such technology is far-fetched, know that it already exists.
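For a minimal sketch of how that happens, consider the toy simulation below (Python, with entirely invented numbers, not any real system’s code or data). Two neighborhoods offend at exactly the same rate, but one is patrolled far more heavily; a model trained on the resulting arrest records learns the patrolling pattern and calls it “risk.”

```python
# A hypothetical illustration of biased training data producing a biased
# model. All rates and the scenario itself are made up for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two neighborhoods with the SAME underlying rate of offending...
neighborhood = rng.integers(0, 2, size=n)   # 0 = A, 1 = B
offended = rng.random(n) < 0.05             # identical 5% rate everywhere

# ...but neighborhood B is patrolled far more heavily, so offenses there are
# far more likely to become recorded arrests. The dataset only ever contains
# arrests, never the underlying behavior.
patrol_rate = np.where(neighborhood == 1, 0.60, 0.10)
arrested = offended & (rng.random(n) < patrol_rate)

# A "predictive" model trained on arrest records effectively learns
# P(arrest | neighborhood), which encodes the patrolling pattern,
# not any difference in behavior.
for label, name in [(0, "Neighborhood A"), (1, "Neighborhood B")]:
    learned_risk = arrested[neighborhood == label].mean()
    print(f"{name}: learned 'risk' = {learned_risk:.2%}")

# Expected output: roughly 0.5% for A versus 3% for B, a sixfold gap the model
# will faithfully reproduce even though true offense rates were identical
# by construction.
```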
While perhaps one day, in the distant future, AI will become sentient and we’ll have congressional hearings on the moon to decide whether a robot deserves rights, there are more pressing issues in today’s world. And if we accept claims from engineers like Blake Lemoine that AI is sentient, with free will of its own, we’re allowing companies like Google to wash their hands of accountability. Because with free will comes personal responsibility, and no one ever blames our “god.”
If there’s one thing this story has in common with the miraculous creation of life, it’s that artificial intelligence is created in our image.