Changing the Game (Theory) with P2P Decentralized AI

Julia Mossbridge, PhD
Jul 10, 2024

How LACE Grows Wisdom

LACE, the Lightweight AI Computing Ecosystem, is an emerging P2P Decentralized AI Internet providing AI and data nodes over web2 and web3 infrastructure using ledgerless protocols. How can it support the definition and evolution of wisdom?

In just another day not at the office, I had a philosophically intense conversation with LightRoot Quantum CSO Robert Moir, PhD² to address this question. Here is a summarized, streamlined version of our discussion (and here’s the actual interview if you want to see me try to walk off a cold while we address some of the nuances of this conversation).

Julia: I would love to do a deep dive on whichever of the five WORLD values baked into LACE you'd like: Wisdom, Openness, Resilience, Love, Diversity. Where would you like to start? And I'm going to include a link to who you are, so you don't have to introduce your amazing badassery. But there is amazing badassery here with Robert Moir!

Robert: Thank you. I appreciate that. Wisdom is the one that springs to mind. I spent a long time working in academia on a very multi-disciplinary project. That's still there in my mind; I think I might come back to it one day. But it wasn't really feasible for me to pursue it in the academic context. That caused me to leave academia, and I ended up in this transition from academia to industry, working on Earth64 with Toufi Saliba.

Julia: What’s Earth64?

Robert: Earth64 is a highly decentralized data structure. You can think of it like a portable blockchain. It fits into this puzzle because of HyperCycle [parent of LightRoot], which is this company that is building the basic network infrastructure for the forthcoming AI internet.

The first internet was the internet of people, connecting one person to another. The AI internet is now allowing AI to connect to AI. To do that, you need to have secure, super-efficient, and very cheap transactions that are peer-to-peer with no intermediary. So you don’t have to wait for some kind of a settlement process to happen.

Performing a transaction is like sending cash over the internet, and Earth64 plays that role in the context of HyperCycle. And there's a larger story that predates me, involving Toufi Saliba and Dann Toliver, who are the co-authors of what's variously referred to as TODA, the TODA Protocol, or the TODA Protocols.
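To make the "sending cash" idea concrete, here is a minimal Python sketch of a digital bearer asset handed directly from peer to peer, with no settlement intermediary. This is a generic illustration of the concept, not Earth64's or TODA's actual design; all names and fields are invented.

```python
import hashlib

def transfer(asset, new_owner):
    # Ownership changes by appending a hash-linked handoff record, so the
    # asset carries its own provenance with it, like cash changing hands.
    last = asset["history"][-1]
    entry = {
        "from": last["to"],
        "to": new_owner,
        "prev": hashlib.sha256(repr(last).encode()).hexdigest(),
    }
    asset["history"].append(entry)
    return asset

# The asset is a self-contained record: no global ledger is consulted.
coin = {"id": "coin-42", "history": [{"from": None, "to": "alice", "prev": None}]}
transfer(coin, "bob")    # Alice hands the coin directly to Bob.
transfer(coin, "carol")  # Bob hands it on; no settlement step in between.
print(coin["history"][-1]["to"])  # carol
```

A real protocol would add signatures and proofs against double-spending; the point here is only that the transaction completes between the two peers themselves.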

Julia: In terms of what LightRoot is doing, we are creating a public sector interface for what HyperCycle and some other P2P decentralized organizations are trying to do, which is to build this Lightweight AI Computing Ecosystem, or LACE.

You’re the Chief Scientific Officer for Lightroot. And it is pretty exciting to have you on board. The question is, how do you see LightRoot in particular driving home the wisdom piece?

Robert: Where LightRoot comes into that is that we are looking to connect data centers, which reach the internet through a gateway carrying large amounts of traffic. They don't have the ability to have each compute node in the data center connect directly to the internet. So there needs to be a way of solving the problem of how to allow a data center to call out to the internet in this way, and have it be efficient, secure, and inexpensive, among all the other key features.

Julia: What about an interface to LACE from all devices used by the federal and state governments?

Robert: We want to have everybody connected to this eventually. So it's a case of sorting out which key bottlenecks need to be addressed at each point in time, addressing them one by one, and facilitating this connectivity over time.

Julia: What values are governing the process by which this technology is unfolding? This is an extremely important conversation to have, especially for a decentralized system, right?

Robert: For sure. For our world, this is a conversation we need to be having more: what values govern anything we're doing as a collective of beings on this planet, you know?

Julia: Amen. Exactly. And maybe LACE could allow us to have that conversation in a way; kind of force the conversation.

Robert: Well, decentralized technology brings that to the fore, because the other side of the coin with decentralized technology is decentralized governance. So there's this very hard problem that we'll be figuring out better and better ways to solve over time.

But with that come the problems of collective choice and collective action. And when you have large groups of people involved in a governance process, you have conflicts of values. In the philosophical context, any sort of debate typically ends up leading either to some kind of agreement or to a conflict of values.

And values are hard to rationalize a lot of the time. There are ways of doing that, but it’s a difficult process because values tend to attach to people’s identity at a deep level.

Criticizing someone's values feels like a criticism of their identity. So you need a way of having open conversations about really important issues that doesn't shut down the discussion.

Julia: It requires this kind of emotional maturity, which I think actually can be supported by decentralization, generally. The understanding that the whole (I really like this E pluribus unum idea) is created from all these unique parts, and that it's okay. In fact, it's desirable for them to be unique, because otherwise you don't need decentralization.

Robert: If all of the nodes are the same, you just have a centralized system.

Julia: Right. So decentralization actually supports uniqueness, and competition and collaboration, which is very much like the human brain. Anyway, don't get me started. That's good. I like it. But let's talk about wisdom in the system.

Robert: I don’t remember where I got this from, something I read recently had presented wisdom as in contrast to knowledge. The idea is that wisdom is knowing what to do with knowledge, so there’s an implicit value.

If you can reduce things to the level of knowing what you want to achieve, then there’s something more rational there. More neutral. But choosing what you want to achieve is very much a values question. I think there are interesting ways in which you can structure a system to function better and better according to some particular value.

Like if you have some metric according to which the performance of the system is measured, it's relatively easy to game the system by optimizing that metric at the expense of other things. That's not the kind of approach that tends to lead in the right kinds of directions, perhaps because, particularly when we're talking about systems of human beings, some of the most important things we would like to measure are very hard to measure: wellbeing, thriving, and flourishing.
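This failure mode has a name: Goodhart's law, the observation that when a measure becomes a target, it ceases to be a good measure. Here is a minimal Python sketch of the dynamic, with invented numbers, where an agent splits a fixed effort budget between real work and gaming the metric:

```python
def proxy_score(effort_real, effort_gaming):
    # The measurable metric rewards gaming more cheaply than real work.
    return 1.0 * effort_real + 3.0 * effort_gaming

def true_value(effort_real, effort_gaming):
    # The thing we actually care about; gaming contributes nothing to it.
    return 1.0 * effort_real

def proxy_optimal_split(budget=10):
    # An agent with a fixed effort budget maximizes the proxy, not the value.
    return max(
        ((r, budget - r) for r in range(budget + 1)),
        key=lambda split: proxy_score(*split),
    )

real, gaming = proxy_optimal_split()
print("split:", real, gaming)                      # 0 10: all effort goes to gaming
print("proxy score:", proxy_score(real, gaming))   # 30.0 (looks great)
print("true value:", true_value(real, gaming))     # 0.0 (nothing real happened)
```

The better the agent gets at the metric, the worse the outcome on the unmeasured dimension.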

Julia: But wait. We're going to have a little bit of a discussion about this, because I just had a discussion with an economist on that NASA-sponsored podcast about this. He asked how you measure these subjective things. But that concern is a hundred years out of date: the fields of psychology and psychophysics have been measuring people's internal states in ways that are relatively accurate for about a hundred years.

There’s this general belief among economists that all we can measure are things that are outside people and that we can’t trust people to report on their internal states. It turns out that you can trust people to report on their internal states, if they’re motivated by the right incentives.

Robert: This gets to one aspect of what I was meaning to get into: the fact that, more and more, we find we can solve complex problems by introducing the right incentives into a system and relying on game-theoretic mechanics to drive behavior in directions that are beneficial for the whole. This is part of the strategy that HyperCycle is using. There's a growing recognition that it's a very effective way of dealing with enormous complexity while still allowing some sort of simple model to be used to drive behavior in beneficial directions.
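As a toy illustration of what Robert is describing (my example, not HyperCycle's actual mechanism), here is a one-shot prisoner's dilemma in Python. With no incentive layer, defection dominates; adding a reputation penalty for defection, standing in for lost future business, makes cooperation the best response:

```python
# Standard prisoner's dilemma payoffs for one player: payoff[(me, them)].
base = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def payoff(me, them, defect_penalty=0.0):
    # defect_penalty models future income lost to a damaged reputation.
    p = base[(me, them)]
    return p - defect_penalty if me == "D" else p

def best_response(them, defect_penalty):
    return max("CD", key=lambda me: payoff(me, them, defect_penalty))

for penalty in (0.0, 3.0):
    responses = {them: best_response(them, penalty) for them in "CD"}
    print(f"penalty={penalty}: best responses {responses}")
# penalty=0.0: defect dominates, the classic dilemma.
# penalty=3.0: cooperating is the best response to either move.
```

The incentive doesn't require anyone to be virtuous; it just realigns what self-interest points at.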

Julia: Some people might be concerned that a particular AI node or multiple nodes aren’t going to play according to what is predicted by game theoretic rules, right? Because they’re AIs and they’re not humans.

Robert: This gets into essentially how you design a reputation system to drive the right behavior in a context of AI agents. They won’t be motivated by the same things that humans are motivated by. But at least for now, these agents are being run by human beings, and the fundamental profit motive is there for anyone who’s running one of these nodes. Whatever group is involved is paying for the hardware, electricity, licenses, software and the space to run it.

All of that investment is being made because it's expected that this node will make money. So if this node is not a good (for lack of a better word) "citizen," its reputation will go down, and it will make less money. It's not that that's the only thing that matters from a reputation point of view, because again, there will be systems of systems and different communities that evolve within this, and they can each use different data to assess the aspects of reputation that are relevant to each subcommunity.

This can happen in an organic way because different communities and different markets care about different things. Rather than thinking about this in terms of metrics, think in terms of what data is being used to establish what actually happened. From that, you can determine whether a node is reputable or not.

Julia: A reputation for truth or accuracy.

Robert: Yeah, because there's a cryptographic record of its behavior. That's just one of the things that TODA/IP will be used for in the system.
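Here is a minimal sketch of what such a record might look like: a hash-chained log of a node's behavior, with a reputation score computed from verified outcomes. This is a hypothetical illustration, not the actual TODA/IP design; the field names and the choice of metric are invented.

```python
import hashlib
import json

def append_record(chain, prev_hash, event):
    # Each entry commits to the previous one, so history can't be
    # rewritten later without breaking the chain of hashes.
    body = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    entry = {"prev": prev_hash, "event": event,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry["hash"]

def reputation(chain):
    # One community's choice of metric: fraction of honored commitments.
    # Another subcommunity could score the same record differently.
    outcomes = [r["event"]["honored"] for r in chain]
    return sum(outcomes) / len(outcomes)

chain, prev = [], "genesis"
for honored in (True, True, False, True):
    prev = append_record(chain, prev, {"service": "inference", "honored": honored})

print(f"reputation: {reputation(chain):.2f}")  # 0.75
```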

Julia: This is very philosophical, and luckily you have a PhD in philosophy as well as a PhD in math. But do you think that wisdom can come from truth?

Robert: Wisdom generally comes from experience. It doesn’t come from something that you read or learn about from somebody else. Like it has to do with this interplay of knowledge and experience over time.

Julia: Is intuition in there?

Robert: Yeah, I think so. To me, intuition plays an absolutely crucial role in basically everything that we do. We just don't notice it. And then if someone asks us why we behaved in the way that we did, we come up with some rationalization of it, but it's intuitive behavior that's driving almost everything that we do.

Julia: So wisdom isn't just knowing what to do with knowledge; it's knowing what to do with knowledge in a context of self-awareness, or let's just say awareness, like contextual or situational awareness.

Because those who are aware that most of their rationalizations are indeed rationalizations, and that in fact they have an intuitive faculty that's driving them, end up being thought of as wiser. There's this awareness of the fragility of the story of the single independent unit, and somehow that awareness supports intuition being integrated into the wisdom of the individual.

Robert: I don’t know if this is right, but I’ll just communicate the idea that knowledge is more like a local thing. Because wisdom has something to do with the interplay of knowledge and experience, it’s acting on knowledge in a system of agents that are all also doing the same with different states of knowledge.

But wisdom is able to deal with a lot of uncertainty, because you don’t know what people know and it’s not a rational process of calculating what people know and predicting what they will do. There’s just, as you say, an awareness of, “if I act on this knowledge in this way, it will tend to produce certain outcomes.”

And I can just tell intuitively that wisdom gets reduced to this form of intuition, it seems. So this is the form of intuition that wisdom reduces to when you're interconnected in some kind of productive way that you can trust, which I think is what you're saying.

Julia: It wouldn’t work in a situation where all of your awareness is false.

Robert: No. Right. So the wisdom that emerges from this interdependent situation is based on trust and also proof, periodic proof, that that trust is warranted, right?

Julia: And validation of intuition, that your intuitions are right.

Robert: Yeah. So there's another thought I had earlier that I wanted to share. Where there's very intense distrust between the players of a game or the agents in a system, that doesn't often lead to good outcomes, particularly if the model you're using treats the agents as purely self-interested agents who will totally screw over everyone else if that leads to a better outcome for them.

The more you treat people as only self-interested, the more they'll become that way. So the more policy is designed from that perspective, the more it creates societies that behave that way. That's my perspective, and it's not coming from my mathematician's hat, or even the philosopher hat per se. It's more just an intuitive thing.

So rather than viewing self-interest as a thing that applies only to an individual agent: coming from a lot of spiritual traditions, there's a deeper understanding of self that transcends the individual, in which you can understand the other as part of yourself.

If we understand “self” interest from that point of view, it looks very different and it’s not obvious how to analyze that mathematically, because mathematics tends to favor linearity assumptions and independence assumptions.

Julia: It’s simple.

Robert: Also computable. Yeah. In a way that nonlinear and interacting systems tend not to be. But part of the linearity in this case is this kind of self-interest that's specific to an individual, because that's something that's very easy to calculate. It's only this one node in the network that you need to focus on for maximizing utility, and then they're all competing to do the same thing.
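In notation (my sketch, not something from the interview), the contrast looks like this. In the standard picture, each node i solves an independent problem; in the broader-self picture, the weights w_ij say how much agent i counts agent j's wellbeing as its own, and the problems become coupled:

```latex
% Purely individual self-interest: node i optimizes only its own utility,
% which depends only on its own action x_i.
\max_{x_i} \; u_i(x_i)

% A broader sense of self: utilities depend on everyone's actions, and
% agent i also weights others' utilities by w_{ij} \geq 0. Setting all
% w_{ij} = 0 recovers the purely self-interested model above.
\max_{x_i} \; u_i(x_1, \dots, x_n) + \sum_{j \neq i} w_{ij}\, u_j(x_1, \dots, x_n)
```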

But what if we understand self in this broader sense? I think that's really what the whole project of decentralized governance is about: figuring out how to solve that problem. And it seems clear that where things are going is thinking about things in terms of incentives as a way of driving beneficial behavior at a collective level.

Maybe the way we think of game theory also needs to change in keeping with that. Incentive mechanisms aren’t necessarily game theory proper. It’s a sort of broader thing that we’re developing into as we think about how to govern complex, dynamical, collective systems.

Julia: Well, I think incentives are really important, but I also think, as you said earlier, if you treat people as if they're little input-output machines, they will be little input-output machines, and if you have the assumption of self-interest only, with no awareness of interdependence, they'll behave in that way.

So I think we need something more than incentives. I think we need an expectation, one that I consider a loving expectation, that wisdom can be emergent: an expectation that people, if given the opportunity, can grow and flourish. Like, we can actually water the seeds in the garden and expect that they will grow into different kinds of plants.

Right? So when we get too tied to the sort of behavioral economics, game theory stuff that is based on standing outside the system and asking how these little stupid creatures behave and how we can manipulate their behavior, it's sort of an affront to the human soul. It doesn't acknowledge the sort of developmental blueprint that unfolds when you have love.

Is that too idealistic?

Robert: It’s certainly not necessarily in my mind. But I think some people might not like what you just said because of the very profound ambiguity in the term ‘love’. And part of the role of wisdom, I think, in this context is discovering the right meaning of love — coming to these conversations with an open heart and an open mind. An open definition within limits, because there are certain things that are clearly not love, and so it excludes all of that, but within this much broader range, it has this very diverse meaning.

So this gets to the interplay between the analytical and the intuitive mind.

You could, if you wanted, build an analytical model of it that captures some aspect of the intuitive meaning of 'love'. Where it misses, you go back to the intuitive side, you extend or adapt the analytical version to better match what you require, and eventually you sort of converge on something that's close enough.

And then it can be acted upon by computational systems because it has this analytical representation.
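An abstract sketch of that loop in Python, with every detail invented for illustration: intuitive judgments serve as test cases, and progressively richer analytical models are tried until one is close enough to act on.

```python
cases = [  # intuitive judgments: features -> does this count as love?
    ({"care": 1, "control": 0}, True),
    ({"care": 1, "control": 1}, False),
    ({"care": 0, "control": 0}, False),
]

def make_model(rule):
    # A candidate analytical representation: a rule over a feature subset.
    return lambda x: all(x.get(f, 0) == v for f, v in rule.items())

def refine(cases, candidate_rules, threshold=1.0):
    # Try progressively richer rules until one matches intuition closely.
    for rule in candidate_rules:
        model = make_model(rule)
        accuracy = sum(model(x) == y for x, y in cases) / len(cases)
        if accuracy >= threshold:
            return rule, accuracy
    return None, 0.0

# Start simple ("care alone"), then extend when intuition disagrees.
print(refine(cases, [{"care": 1}, {"care": 1, "control": 0}]))
# -> ({'care': 1, 'control': 0}, 1.0): converged, and now computable.
```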

Julia: So that’s beautiful. That answers the question about how to get wisdom into the system. Thanks.

--

Julia Mossbridge, PhD

President, Mossbridge Institute; Affiliate Prof., Dept. of Physics and Biophysics at U. San Diego; Board Chair, The Institute for Love and Time (TILT)