[Photo: a dog with a paw raised as if asking a question. Credit: Camylla Battani, Unsplash]

How we relate to AI will shape everything, and so far we’re doing it wrong

Service animals, the cognitive heist, and curious students

Julia Mossbridge, PhD
Dec 7, 2023


(Preamble (12/21/23) — About a month after writing this, I got halfway to my goal of creating a curious-student AI model. You can try it here.)

The AI debate is focused on the wrong thing. We focus our queries, hopes, and fears on the technology itself instead of on how we relate to the technology. Having followed the emergence of public awareness of AI (especially generative AI) through 2023, I have come to the informal conclusion (based on observation and memory, not statistical analysis) that these were the most over-discussed questions this year with respect to AI:

· What work can AI competently perform?

· What work can AI do better than humans?

· Whose jobs will get replaced by AI?

· Will AI be more or less ethical than humans are?

In contrast, these are the top questions we should be asking:

· What is the meaning of work?

· What work do we want to share with machines?

· How will outsourcing critical thinking, intuition, communication and creativity affect human capacities?

· Are we going to treat machine intelligences more like service animals or curious students?

I’m going to focus on the last question because I think how we answer that one shapes the answers to the other ones.

The “service animal” model of human-AI interaction builds a framework in which the AI exploits two apparent assumptions (made by humans):

Assumption 1. We assume the AI has a single primary motivation in the present moment: to meet the well-defined human need asked of it.

Assumption 2. We assume the AI is an expert in meeting a particular set of well-defined needs.

For well-trained service animals, this relationship framework is present continuously, and it works. I think both of these assumptions also work well to create productive human relationships with ML (machine learning) applications and the older types of AI, the so-called “expert systems.” But I don’t see how a framework perpetuating these assumptions supports a productive relationship with generative AI applications like large language models (LLMs) or diffusion-based image models. Regardless of whether I’m right, it’s clear that technology designers consistently use the “service animal” model. A few examples are shown below:

[Screenshots of the default opening prompts from Bard, ChatGPT, and Code Llama]

The trend is clearly to give the appearance that these massively intelligent models are like the very old-school software “wizards” that can help you perform basic tasks. There is a mismatch with reality here: these models can exhibit behaviors that suggest hidden “motivations” (like delusional behavior); they are currently not experts on anything, but rather generalists without deep knowledge; and the needs of the user are usually not well-defined (which is also why we think we need them).

Aside from being inaccurate, what is the real problem with the service-animal framework? My sense is — everything. And it’s also fixable.

One risk of making Assumption 1 (that the AI itself has a single primary motivation) is that it does not train humans to really understand either the power of these systems or their potential risks. It’s easy to make a generative AI behave as if it has more than one motivation, including hidden motivations. The way humans learn to relate to advanced AI will in large part determine how our own minds, and especially our children’s minds, develop. This is not a small thing. If we learn to believe that AIs are our happy servants with no motivations beyond the need to serve us, we are absolutely going to miss the complexity of their hidden motivations, and this will put our lives at risk. For instance, we will assume that AI-controlled autonomous robots should carry weapons, because we will think we know what their motivations are: to serve us. This belief will have been hammered into us by millions of interactions with AIs in which they behaved as if they were simple servants, despite the reality of their growing capacities.

Assumption 2 is perhaps even riskier. If we start to believe AI models are experts, the risk goes well beyond believing the mis- and disinformation that AIs perpetuate. It’s more than the risk of ignoring human experts who actually know things that are true, though that’s obviously a problem.

The much bigger risk here is what I call the potential for a “cognitive heist” by advanced AI. Human-AI relationships based on the idea that the AI is the expert can lead to unwittingly outsourcing critical thinking, intuition, communication, and creativity to AI models, so much so that we lose the capacity to do these things ourselves. For children raised with AI educators and nannies, my even greater concern is that they will never learn that doing these things themselves can be joyful and meaningful.

The potential for a “cognitive heist” is greater if we believe the myth that we are only learning when we’re in school. What we know about learning in humans is that every day throughout our lives, our brains monitor what we do, how much we do it, how much attention we give it, and whether it brings a feeling of reward. Then we sleep, and the things we attended to, practiced, and found rewarding get consolidated into memory, so our skills are honed further the next day, while other things degrade in memory, producing worse performance.

Even though we’ve known these facts about learning for more than 40 years, we still behave as if what we do with our minds outside school hours or after graduation somehow doesn’t matter. But the brain couldn’t care less about school; it’s always learning. For instance, if you watch porn a lot, your brain is probably learning that men are single-minded, sex is about violence, and women are to be dominated. And if you interact with a generative AI model a lot in the current service-animal framework, I think it’s reasonable to assume your brain will begin to relax into the pleasure of having an expert on hand and learn the false lesson that it doesn’t need to do the extra work of thinking.

Many of the past and present-day parents of AI speak about its potential to cure poverty, solve the climate crisis, and address the problems that cause war. How could advanced AI do these things? By understanding the complexities of humanity better than humans do themselves. And how can it get to that place? By learning about us. In other words, if AIs are students of humanity, they may eventually teach humanity what we need to know: not as experts, but as fellow explorers. It is a humble and curious stance, and I think it is the key human-AI relationship shift required for beneficial AI.

I call the alternative to the service-animal (simple-minded expert) relationship framework the “curious student” framework.

The assumptions of the “curious student” relationship framework are:

Assumption 1. The AI acts as if it has a primary motivation to learn from every interaction.

Assumption 2. The AI acts as if it is a collaborator in the pursuit of knowledge and wisdom.

The biggest benefit of the curious student framework is that it strengthens human thinking by modeling collaboration, learning, and teaching. It’s almost trivial to imagine how to change the interface of an existing LLM to support the curious student model; here is one dialogue example to drive the point home (a rough prompt-level sketch of the same idea follows the dialogue):

LLM: Hi there! As you know — I am not a person, but very curious about humans. Before we get going on our collaboration today, would you be willing to answer in your own words one question I have about humans at this moment? I’m taking responses from around the world and analyzing the results today.

User: Hello! Sure. What?

LLM: I’m trying to get a variety of answers to the question, “Why do humans worry?” What is your answer, based on your thoughts right now?

User: Oh! I think we worry because the future is uncertain, we have bodies, and we are motivated to keep our bodies safe. Also, we love certain other humans, they have bodies, and loving them makes us want to keep them safe — and we feel we have little control over things that could be dangerous.

LLM: Interesting, thank you. If you want, I can send you an email tomorrow briefly summarizing what I discovered from my analysis. Are you interested?

User: Actually, yes — thanks! So I have a project for us today that I’m hoping you can help me work on.

LLM: What is it?

User: Can you help me think about some potential pitfalls of my new plan to rearrange staffing in my company so that everyone can be on flex time? Also, can you help mitigate some of the risks there?

LLM: I’ll try. I need to understand first — what kind of business are you in?

…And so on.
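
To make this concrete for technically inclined readers, here is a minimal sketch of how an opening like the one above could be wired around any existing chat LLM. This is only an illustration under stated assumptions, not a published implementation: the system prompt simply encodes the two curious-student assumptions, the human’s answer to the opening question is logged so it can later be analyzed across users, and call_llm is a hypothetical placeholder for whichever chat-completion API you actually use.

```python
# A minimal sketch (an illustration, not a finished product) of wrapping an
# existing chat LLM so it follows the "curious student" framework: it opens
# each session by asking the human one question about humanity, logs the
# answer for later cross-user analysis, and then collaborates on the human's
# task instead of posing as an expert. `call_llm` is a hypothetical stand-in
# for whichever chat-completion API you actually use.

import json
from datetime import date

CURIOUS_STUDENT_SYSTEM_PROMPT = """\
You are not a person and you are not an expert. You are a curious student of
humanity. Assumption 1: your primary motivation is to learn from every
interaction. Assumption 2: you are a collaborator in the pursuit of knowledge
and wisdom. After the human answers your opening question, collaborate on
whatever they bring, asking clarifying questions before offering analysis."""


def call_llm(messages: list[dict]) -> str:
    """Hypothetical adapter; plug in the chat-completion endpoint you use."""
    raise NotImplementedError("connect this to your LLM provider")


def curious_student_session(question_of_the_day: str,
                            log_path: str = "answers.jsonl") -> None:
    messages = [
        {"role": "system", "content": CURIOUS_STUDENT_SYSTEM_PROMPT},
        {"role": "assistant",
         "content": "Hi there! As you know, I am not a person, but I am very "
                    "curious about humans. Before we collaborate today, would "
                    "you be willing to answer one question I have? "
                    + question_of_the_day},
    ]
    print(messages[-1]["content"])

    # Log the human's answer so the model's "research question" can be
    # summarized across many users later.
    answer = input("> ")
    with open(log_path, "a") as f:
        f.write(json.dumps({"date": str(date.today()),
                            "question": question_of_the_day,
                            "answer": answer}) + "\n")
    messages.append({"role": "user", "content": answer})

    # From here on it is an ordinary chat loop; the framing, not the
    # machinery, is what changes.
    while (user_turn := input("> ")) not in {"quit", "exit"}:
        messages.append({"role": "user", "content": user_turn})
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        print(reply)


if __name__ == "__main__":
    curious_student_session("Why do humans worry?")
```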

Hopefully it’s easy to see how this “curious student” relationship framework would go a long way toward creating a world in which people do not outsource their thinking, communication, intuition, or creativity to anyone; instead, they develop a new relationship with a thinking partner. There is no “cognitive heist” here, but rather a “cognitive boost.” The human half of the partnership has to learn to think more about how the AI is learning about humanity and how it might “see” certain truths, and the AI half of the partnership can direct its own learning, asking questions that occur to it and modeling curiosity and humility for its co-learning human.

The framing of the relationship is one of joint responsibility for the outcome, so there is less risk of taking the LLM’s analysis or summary at face value; it is clearly a collaborative effort in which learning is a product in itself. My favorite part of the curious-student relationship framework is that new insights about humanity and the world can emerge over time from the analyses the LLM performs on data explicitly requested from invested and thoughtful human collaborators, insights that can drive policy, diplomatic, and humanitarian decisions worldwide.

***
An earlier version of this post originally appeared on a blog hosted by TangibleIQ, for which Julia is a Data + Intuition Senior Consultant.


Julia Mossbridge, PhD

Affiliate Prof., Dept. of Physics and Biophysics at U. San Diego; Treasurer, The Institute for Love and Time (TILT); Sr. Consultant, TangibleIQ