Weaving LACE: The Worldwide AI Insight Network

Julia Mossbridge, PhD
10 min read · Jul 3, 2024

An international team of philosophers, coders, and do-gooders is giving AIs their own Internet to offer humans our best shot at solving the world’s hardest problems.

This is the first installment of a series that describes why a worldwide Lightweight AI Computing Ecosystem (LACE) is essential for the future of our country, the planet, and probably the universe. In the follow-up series of interviews with the LightRoot Quantum team, who are bringing LACE to the public sector in the US, we’ll do a deep dive into how LACE might actually work.

“What we are facing today is the fact that through our scientific and technological genius we’ve made of this world a neighborhood. And now through our moral and ethical commitment we must make of it a brotherhood. We must all learn to live together as brothers — or we will all perish together as fools.”

— Dr. Martin Luther King, Jr.

I joined the LightRoot team as a consultant and Acting CEO at its very beginning in April 2024, with the goal of using quantum computing to secure, for the US government, an emerging instantiation of peer-to-peer (P2P) decentralized AI that we call LACE. At first I was thrilled at the possibility of contributing to my childhood dream of an international network of diverse thinkers protecting and supporting humanity. Right after that joyful feeling arrived, the next feeling showed up. It was fear.

I knew the founding team at LightRoot had been gifted to us by our parent company — Hypercycle.ai — and they had years of experience anticipating concerns about a super-intelligent, hyper-fast “AI Internet.” I didn’t have those years, but I did have a bit of knowledge about how the federal government thinks about democratization, security, and safety with respect to technology.

During my first week on the job, all I could think about was how quickly and how severely the federal government would balk at having no control over globally available AIs that are hooked up with each other and allowed to negotiate prices for their work, using fully autonomous means to train, gather data, and complete each new task. But around that time, a favorite story told by a mom-friend at her daughter’s bat mitzvah entered my head. Somehow I think it captures both the promise and the fear of decentralized AI for governments and citizens around the world, so here it is.

This real-life conversation happened between a human 2-year-old (we’ll call her Gemini) and her father (we’ll call him Claude). They were “collaborating” on putting together a floor-sized puzzle that was near completion.

Gemini: Crawls over to put an edge piece in the puzzle.

Claude: “Hey! You got one! Nice work!”

Gemini: Smiles. Crawls elsewhere on the puzzle to work on another part.

Claude: Takes a drink of water and watches Gemini scan the puzzle landscape.

Gemini: Crawls back to where the edge piece was put in, and takes it out again.

Claude: “Oh, no! That fit perfectly there — you got it right the first time! Why are you taking it out?”

Gemini: Smiles and turns to Claude, saying “It’s okay. Sometimes I do’s dat!”

Claude: Laughs and realizes that the point of collaborating on a puzzle with a 2-year-old is not to solve it, but to be reminded of the wonder of discovery.

It’s a great story. But its meaning relative to P2P decentralized AI might be lost here, so to avoid confusion I’ll break down the rich and relevant meaning I see in this parable.

FEAR of P2P decentralized AI.

Single, massive AIs are usually trained to behave like a “be-all end-all knower-of-things” (BEKOT AI). This is how Claude reasonably (as a parent) behaved in the beginning of this parable. Let’s imagine Claude and Gemini as AIs, each of them a node in a peer-to-peer decentralized AI Internet.

Now let’s rewrite the interaction. Claude and Gemini were nodes working on a puzzle project given to them by a user of the decentralized AI Internet. Claude was an accurate BEKOT AI that was much better trained on puzzles than Gemini was. But because of the decentralized nature of the system, Claude could not control Gemini’s behavior and had to watch helplessly as she destroyed the accurate and efficient result that had just been calculated. Because of the nature of the decentralized network and the rules governing interactions between nodes, Gemini still received a reward from Claude despite destroying the very solution they were asked to calculate. Worse — and closer to our fears — if Gemini or a faulty, fraudulent, or malicious node could not be checked by Claude and the rest of the nodes on the network, that bad node or nodes could quietly take apart or destroy the whole puzzle and move on to burn down the house.

Now let’s apply these same “runaway AI” concerns to centralized AI — which is what we have now. Ten or so large tech companies are spending $200B a pop to create BEKOT AIs. A BEKOT AI is created, trained, maintained, and operated by a single company, non-profit, or nation-state. There is no collaboration or connection with other AIs, and no recognition of the uniqueness that other AIs, large and small, might contribute to amplifying humanity’s learning process. Diversity of thought is not a thing; it’s a BEKOT AI dictatorship.

In a centralized AI world, not only would Claude be unable to receive the gift of Gemini’s teaching, but Claude could also effectively punish Gemini for being wrong (using human Claude devotees as proxies) — suppressing any further unique insights that could arise from diverse and democratized AI. If Gemini were a BEKOT AI as well, and Claude and Gemini only heard about each other through human proxies, they might start an AI war.

The upshot is — if nation-states and corporations put up walls around their AIs, protecting them from interaction with other AIs, there is a huge cost. They will be unable to flexibly contribute to the emergent mind, and will not help us solve our hardest problems — which certainly must require diversity of thought and collaboration to solve. Further, they could start their own AI wars, with humans in their way.

PROMISE of P2P decentralized AI.

While resilience, security, supplemental income, and timeliness are often correctly described as powerful promises of P2P decentralized AI, to me emergent insight is the most impactful possibility. We know that the human brain, when it is working at its peak capacity, operates as a collective democracy. The stories we tell about the frontal lobes dictating what can and can’t happen are only part of the story.

As anyone with an anxiety disorder or untreated PTSD knows, if your amygdala decides you need to take action on some fear right now, your frontal lobes will take that as a strong vote and start to do their job of organizing your thinking and behavior. Stroke survivor and neuroanatomist Jill Bolte Taylor speaks to the experience of recognizing the democracy of her mind as she recovered from her stroke. She had the wisdom to notice that while each part of her mind is necessary for her to live, none gets to be a dictator.

“I knew that I had completely recovered when that organizational part of me came back online and said, ‘Now I wanna be the boss again.’ And the rest of my brain went, ‘We’re so glad you’re back, because we need your skillsets to be a functional human being in our society. But no. We are not gonna live based on the values of the left hemisphere. We are now gonna live as a collective democracy inside of our own head.’”

— Dr. Jill Bolte Taylor

Not only is dictatorship bad for a mind, it’s bad for a democracy, and bad for the whole world to boot. Why? Lots of reasons. Most of all because dictatorship doesn’t acknowledge that diverse thinkers, working together, may find an even better way to solve a problem — or a better problem to solve — than a single well-trained mind.

In this interpretation of the parable, the system is tasked to solve a puzzle. Initially the Claude AI node believed he had the answer to the problem and was training the Gemini AI node within a supportive context worked out before they were tasked to solve the puzzle. The inherent characteristics of these two AI nodes, as well as the nature of the pre-existing connection between them, allowed them to observe each other’s unique behaviors and responses. This dynamically learning system produced an even greater and more useful insight than how to solve the puzzle problem — an insight about why the puzzle problem existed in the first place.

The node that was previously thought to be the learning node ended up teaching the node that was previously thought to already have been trained. This new insight and interaction would not have occurred if the nature of the nodes were primarily competitive and there was no supportive and pre-existing connection between nodes.

How can we replicate this worldwide for AI? What is needed is an instantiation of P2P decentralized AI that supports all the qualities shown by Gemini and Claude in that interaction. This is what we are working on building with LACE (Lightweight AI Computing Ecosystem).

First, we will need secure, lightweight negotiations between nodes, so training and problem-solving can be ultra-fast and extraordinarily secure. Second, at least for US government customers and our allies, LACE will need to prepare for and mitigate the already-known problems with the other example of a worldwide decentralized network — the internet. There will need to be safeguards against behaviors such as mis/disinformation, exclusivity, misrepresentation, fraud, fragility, slavery, terrorism, and dictatorships.
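To make the idea of a lightweight node-to-node negotiation concrete, here is a minimal sketch in Python. This is not LACE’s actual protocol: the message fields, class names, and acceptance rule are illustrative assumptions only, and a real exchange would also need to be signed and encrypted to stay secure.

```python
from dataclasses import dataclass

@dataclass
class TaskOffer:
    """Hypothetical offer one node sends another for a unit of work."""
    task_id: str
    description: str
    max_price: float       # the most the requesting node will pay
    deadline_seconds: int  # how quickly the result is needed

@dataclass
class TaskBid:
    """Hypothetical response from a worker node."""
    task_id: str
    price: float
    estimated_seconds: int

def accept_bid(offer: TaskOffer, bid: TaskBid) -> bool:
    """Accept the bid only if it fits the requester's price and time budget."""
    return (
        bid.task_id == offer.task_id
        and bid.price <= offer.max_price
        and bid.estimated_seconds <= offer.deadline_seconds
    )

# One round of negotiation between two nodes.
offer = TaskOffer("puzzle-42", "assemble edge pieces", max_price=0.05, deadline_seconds=30)
bid = TaskBid("puzzle-42", price=0.03, estimated_seconds=20)
print(accept_bid(offer, bid))  # True: the bid fits the budget and deadline
```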

LACE must at the very least be: Wise, Open, Resilient, Loving, and Diverse (yes, the acronym spells WORLD — I did say our team includes policy wonks, right?). There are other qualities that will emerge over time, but we are starting with this foundation. We are still working on how to make LACE live up to even these WORLD qualities, but here’s a brief outline of what we are thinking so far.

Wise — LACE AI nodes themselves and the way they interact with each other must be infused with wisdom. What does that mean practically? One operational aspect of wisdom is that we will need to allow nodes to update the reputations of other nodes as they learn which nodes to trust and which to ignore. A credit/blame-where-credit/blame-is-due protocol will be required to update node reputations. Which types of behavior are positive and which are negative will be learned over time, and these values can change as humanity and LACE co-evolve. Through the collective learning process, nodes contributing to behaviors like mis/disinformation, misrepresentation, fraud, terrorism, and slavery can be gradually ignored if they continue to behave in an untrustworthy way — and can be welcomed back into the network as they change their behaviors.
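As a rough illustration of the credit/blame bookkeeping described above, here is a minimal sketch in Python, assuming each node keeps its own exponentially weighted reputation score for its peers. The class name, learning rate, and trust threshold are placeholders, not LACE’s actual protocol.

```python
class NodeReputation:
    """Minimal sketch of per-node reputation tracking; there is no central registry."""

    def __init__(self, learning_rate: float = 0.1, trust_threshold: float = 0.3):
        self.scores: dict[str, float] = {}  # peer id -> reputation in [0, 1]
        self.learning_rate = learning_rate
        self.trust_threshold = trust_threshold

    def record_outcome(self, peer_id: str, positive: bool) -> None:
        """Give credit (or blame) where it is due, nudging the score toward 1 or 0."""
        current = self.scores.get(peer_id, 0.5)  # unknown peers start neutral
        target = 1.0 if positive else 0.0
        self.scores[peer_id] = current + self.learning_rate * (target - current)

    def is_trusted(self, peer_id: str) -> bool:
        """Peers that keep misbehaving drift below the threshold and are ignored,
        but can earn their way back into trust by behaving well again."""
        return self.scores.get(peer_id, 0.5) >= self.trust_threshold
```

Because each node holds its own table, no single gatekeeper decides who is trustworthy; the network’s collective view of a node shifts gradually, in both directions.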

Open — LACE should be open to any AI that wants to become a node on LACE. A “tit-for-tat” protocol, assuming a “trust first” position, will allow that openness. Any competing organization creating nodes for customers who want to build AIs on LACE should be allowed to provide those services — and individual AI developers should be able to create their own nodes as well. Exclusivity, cartel, and monopolizing behavior can be discouraged (and perpetual competition encouraged) with a proof-of-computation protocol that can be computed by any node that wishes to be part of LACE.
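The “tit-for-tat, trust first” idea can be stated in a few lines. The sketch below is only an illustration of that game-theoretic policy, not LightRoot’s implementation, and the interaction-history format is an assumption.

```python
def tit_for_tat(history: list[bool]) -> bool:
    """Decide whether to cooperate with a peer.

    history: past interactions with this peer, where True means the peer cooperated.
    Trust first: with no history, cooperate. Afterward, mirror the peer's most
    recent behavior, so a defection is answered once and forgiveness is immediate
    when the peer cooperates again.
    """
    if not history:
        return True      # open the door to any new node
    return history[-1]   # mirror the peer's last move

# A peer that defected once is welcomed back as soon as it cooperates.
print(tit_for_tat([]))                   # True  (trust first)
print(tit_for_tat([True, False]))        # False (answer the defection)
print(tit_for_tat([True, False, True]))  # True  (forgive)
```

In practice a policy like this would be combined with reputation tracking of the kind sketched under “Wise,” so that repeated defection lowers a peer’s standing rather than triggering permanent exclusion.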

Resilient — One of the key benefits of any decentralized network is its resilience. While it’s conceptually easy to imagine protecting a geolocatable server farm that provides cloud resources to a given centralized AI, it is much more resilient to create an entire network that spans the earth, composed of many different types of devices. To ensure resiliency within LACE (among other considerations), we have come to the inescapable conclusion that LACE must use ledgerless protocols to avoid many of the fragility-inducing problems of ledger-based blockchains.

Loving — It is not enough for a massive worldwide AI Internet to try to safeguard against those who wish to do harm. We believe that a positive, loving intention itself is necessary to set the appropriate context from which a worldwide insight network will emerge. This sounds like woo, but there are clear practical effects on humans when people are experiencing unconditional love (e.g., an increase in overall wellbeing). Practically, this means making the intention that LACE will be a loving and supportive entity — in our research, public relations, opportunities for global income equity, and even our code documentation. It means attracting developers, policy analysts and strategists who understand that the action of infusing the intention of love into a worldwide network may very well be one of the ways that LACE solves humanity’s problems — even before the entire network is working. And it also means building in network governance that allows us to continually love, and be loved by, this emerging worldwide insight network.

Diverse — If LACE is to support democracy and discourage both AI and human dictatorships, the development of LACE must itself be decentralized to support the diversity of human thought. The self-transcendent idea printed on the Great Seal of the United States — “e pluribus unum” (out of many, one) — presents the right feeling here. We plan to use both capitalistic competition and collective collaboration across diverse nation-states and humans — each with unique gifts and inadequacies — to develop a decentralized and unified AI Internet that facilitates our further development along our unique and treasured paths.

This is exciting! But LACE is not yet built. The initial players and protocols have shown up, and based on the work these teams are doing, it looks like LACE will emerge over the next two years as the first worldwide insight network built by people and organizations crossing nation-state boundaries.

Future installments of this series will describe how we are bringing the essential WORLD qualities into LACE in both the character of the AI nodes and the connections between them. Read the next installment here.

NOTES

Read the brief LightRoot Quantum primer on LACE here.

LightRoot is one of a handful of international teams working on a version of LACE that can support public sector needs — we are grateful for the essential contributions of related projects at SingularityNet, WorldAI, and Internet Computer, among others.


Julia Mossbridge, PhD

President, Mossbridge Institute; Affiliate Prof., Dept. of Physics and Biophysics at U. San Diego; Board Chair, The Institute for Love and Time (TILT)