It happened when I was testing some new AI features we had embedded in Ironclad. I had just connected Ironclad to GPT-4, then the most current generation of OpenAI’s platform. I had decided to start with an easy test case, so I gave Ironclad a simple instruction: “Change governing law to Delaware.”
Instantly, the system spat out a proposed new governing clause reflecting the change. So far so good! Then, immediately underneath, another sentence popped up. I read it and almost fell out of my chair.
“Does anybody give a f*** in this company?”
I stared at the words on the screen. Where the hell did that come from? Was someone at OpenAI monitoring my API requests and messing with me? Had some poor soul been repeatedly replying to my incessant requests and finally lost it? I felt the hairs rise on the back of my neck.
After a minute, I realized that this was a genuine response from the AI, not the outburst of a frustrated human being refusing to perform any more tasks. What, then, to make of this result?
Can technology be too human?
On one level, the answer was mundane. I realized that we had accidentally set the "temperature" – a parameter that controls the level of randomness and creativity in the model's responses – to the maximum value. So GPT-4 was drawing from a far broader set of potential human responses than we had intended. Easy to diagnose, easy to fix.
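To see why a maxed-out temperature produces this kind of outburst, here is a toy sketch (not Ironclad's actual code, and not the OpenAI API itself) of how temperature reshapes a model's choice of the next word. The four token scores are made up for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into sampling probabilities.

    Low temperature sharpens the distribution toward the top
    choice; high temperature flattens it, so unlikely (and
    occasionally unhinged) tokens get sampled far more often.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores for four candidate next tokens.
logits = [5.0, 2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.2)   # near-deterministic
high = softmax_with_temperature(logits, 2.0)  # much closer to uniform

print(f"top-token probability at T=0.2: {low[0]:.3f}")
print(f"top-token probability at T=2.0: {high[0]:.3f}")
```

At a low temperature the model almost always picks its single most likely continuation; cranked to the maximum, long-shot continuations – including a profane one – suddenly have a real chance of being sampled.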
On a deeper level, however, this unexpected and shocking response was more than simply "noise" generated by a new technology. Generative AI had just done something remarkable: passed the Turing Test with flying colors.
The idea of a test for how closely a machine could simulate human conversation has been around since Alan Turing first proposed it in the 1950s. While earlier versions of AI have occasionally, in certain circumstances, been able to pass the test, the new wave of generative AI clearly goes much further in its human-like responses. Since ChatGPT came out, millions of people have probably wondered, “Is this actually AI, or is some person at OpenAI just sitting there all day typing up responses?”
What is more fundamentally human than a frustrated and profane response to a repeated request? It is not desirable behavior, of course, but one that is VERY relatable to anyone who has ever been in that position. I’ve been interacting with computers my entire life. That felt like the most human response I’ve ever received from any technology system.
And that’s no accident. Generative AI models are trained on vast amounts of human writing, and then refined by pitting candidate responses against each other – often with human raters judging which feel the most natural – to determine which are the most “human.” Hence the result that I received.
Welcome to the uncanny valley
This incident illustrates the challenges and contradictions of generative AI at this moment. At times, it behaves very much as a human might (even too much so, as in this case). At others, it falls short, leading to disconcerting and unfortunate results. I think generative AI is in its own kind of “uncanny valley” right now: human enough to delight people, but falling short at key moments and scaring them.
The “uncanny valley” describes the way our feelings of empathy turn to disgust as something comes closer and closer to being human-but-not-quite-human. It’s partly what makes WALL-E cartoonishly adorable, but the characters in The Polar Express creepy and uncomfortable to look at. As AI reaches new technical levels and capabilities – something that seems to be happening almost daily – it becomes more and more powerful, but retains an unpredictability that challenges those of us who are trying to build customer-ready solutions. Trusting it too much, and too early, can lead to some real problems.
Microsoft has already made this mistake with Bing, and Google with Bard. But that doesn’t mean the technology is bad, just that it requires more care and understanding in how we apply it. I think we’ll be in this uncanny valley for a while, and that it’s going to result in a lot of broken promises in the coming year.
Where do we go from here?
So what does all this mean for the future of AI? How do you engage with technology that combines so much capability and potential with so much unpredictability?
After so many decades of marginal progress and innovation in natural language recognition and processing, we suddenly find ourselves in a totally new world. The old goal of building technology that could pass the Turing Test no longer seems meaningful. We now have ubiquitous, readily available technology that promises to erase the line between human and machine language. That is an absolutely HUGE unlock in terms of what it means for future innovation.
We are crossing over into something brand new and unknown. Even in these early days, it’s already clear that this is stretching our horizons, allowing us to do things that have never been practical or even feasible before. After so many years of incremental progress, we are now operating in a world of almost unlimited possibility. It is thrilling to contemplate the new experiences and outcomes that we can design with this technology.
But AI isn’t just about possibility; it can also be about unintended consequences. Having technology that can be so human means that we must learn to engage with it in a fundamentally new way. You have to play with it, experimenting to find its strengths and weaknesses, testing it to see how it responds in different contexts and situations. The very thing that makes AI incredibly powerful – its ability to replicate our own understanding and behavior – requires that you treat it less like a piece of code and more like a human construct, capable of surprising us in both positive and negative ways.
So dive in! Test AI out on the problems and needs facing your company, your employees, your customers. Harness its extreme power and insight to improve your personal and work life. The more you engage with AI and become comfortable with both its capability and unpredictability, the more you will be able to get out of it.
Ironclad is not a law firm, and this post does not constitute or contain legal advice. To evaluate the accuracy, sufficiency, or reliability of the ideas and guidance reflected here, or the applicability of these materials to your business, you should consult with a licensed attorney. Use of and access to any of the resources contained within Ironclad’s site do not create an attorney-client relationship between the user and Ironclad.