Chitra Sivanandam of Living Dino and I yukking it up in our discussion about LACE — edited transcript below. You can watch the original discussion (with more jokes) here: https://www.youtube.com/watch?v=fYw2HheR3so

Diversity of AI Thought: Enhancing the Signal with a Symphonic Approach

Julia Mossbridge, PhD

--

This is #4 in my series on LACE: Lightweight AI Computing Ecosystem. In this edition, LightRoot strategic advisor and AI expert Chitra Sivanandam and I have a lively chat about why AI diversity is crucial for our future and the future of intelligence and national security.

This article is highly edited and enhanced for clarity — if you prefer a more fun and slapdash approach, check out our discussion here: https://www.youtube.com/watch?v=fYw2HheR3so.

Julia: You’re a strategy advisor at LightRoot — but also you are famous for your AI work and your brewery (see Iron Butterfly podcast episode) — but could you introduce yourself, the way you’d like to be introduced?

Chitra: I would say if I had to characterize myself, I accidentally found myself enamored with the intelligence community. I’ve spent my life supporting this particular part of industry and have learned, over the years, how to throttle a lot of things on the emerging tech side with the things that are happening in the government.

There’s a lot of things in the federal sector that can be frustrating, but there’s a lot of really cool things inside too. So I always find it interesting. I think the way I like to look at it is there’s a lot of goodness in this crazy machinery that is part of the national security defense infrastructure. And there’s a lot of goodness that happens out in commercial tech.

I like those places where things mash up, right? So I think about finding ways to change our thinking or accelerate our thinking because of these funny little mashups is where I jam the most.

Nobody thinks of creativity usually when they think about the government, but there’s a lot of really interesting things that are under the hood. And I think it’s always fun to find new ways to advance those funny things in like deep dark pockets.

Julia: Well, that’s an intriguing introduction. Immediately, I’m like, “tell me all your secrets!”

Chitra: I’ve always been lucky in finding myself around these interesting people that like to think similarly and also are trying to help national security.

So I guess I’m just this person who’s fond of those funny problems, and of the people who like to chase those problems.

Julia: You’re a people supporter and a people extender. Yeah, but you’re also an idea generator.

Chitra: Yeah, connector of things. I think a project has to be exciting. To me, I get more energy off of the excitement of the rest of the people involved. And if people are like, okay, I’m just doing this because this is what I get paid to do, that tends to be less exciting. If I had to pick, the crazy idea with lots of energy is always more appealing than a solid idea with lower energy.

Julia: The moonshot is always more interesting than the incremental. Okay. Here we are with our moonshot. So when it comes to the world of LACE, this whole series is about what we’re hoping to accomplish with this Lightweight AI Computing Ecosystem. We are a group of people who are focused on moonshots and we are waving at the world saying “we’re doing things differently.” In an idealistic sense, what do you think is the best-case scenario, and in a practical sense, because I know that you also think practically, where do you think this goes in a more constrained scenario?

Chitra: I think they’re actually both the same, in my head. I say that because I think in the best scenario, it commoditizes really fast and becomes this thing we take for granted because it just is. Nobody wonders too much or puts a lot of thought today into how our networking infrastructure works, or TCP/IP. I think there was a moment in time when we were all concerned about IP addresses, and that went away really fast, and here we are, right? So LACE feels like it has the potential to have that same legacy of commoditizing and turning into something that nobody pays attention to.

And the critical thing to get there is that we have to trust it. We don’t wonder about it. We don’t question it. It becomes interesting at that magnitude because nobody will care. And it can only get to the point where nobody cares if we can sit there and say, I’m not going to have to second-guess it.

Julia: So when you say trust it, you’re not saying just trust it blindly without any evidence that it’s secure, right?

Chitra: I’m saying we basically have the hard work to do to get to where it is proven and demonstrated over and over again to where we don’t have to think twice about it. And that’s a long pole. It might not get there anytime soon.

Julia: Yeah, and I just want to get to the point where we can just type in a query for an AI at the prompt on the web, and we don’t really care which set of AIs is getting paid to do the computation as long as they’re in the reputation network that we are allowing ourselves to use. So there are pre-vetted nodes.
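As an illustrative sketch of the routing idea above: the caller doesn’t care which node computes the answer, only that it belongs to the pre-vetted set. The node names and allow-list below are hypothetical examples, not part of any actual LACE implementation:

```python
import random

# Hypothetical allow-list maintained by the reputation network (made up for
# illustration; a real system would populate and update this dynamically).
VETTED_NODES = {"node-a", "node-b", "node-c"}

def route_query(query: str, available: list[str]) -> str:
    """Pick any currently available node that passed vetting; reject the rest."""
    candidates = [n for n in available if n in VETTED_NODES]
    if not candidates:
        raise RuntimeError("no vetted node available for this query")
    # Any vetted node is acceptable; the caller is indifferent to which one.
    return random.choice(candidates)

# The query runs on whichever vetted node happens to be available.
node = route_query("summarize this report", ["node-x", "node-b", "node-c"])
```

The point of the sketch is only the filtering step: computation can land anywhere, as long as "anywhere" is constrained to nodes the reputation network has already vetted.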

Chitra: And that’s really hard because I think there’s something very altruistic that we talk about related to the reputation network. I love that idea. And I know it’s really hard to do, because the closest analog that touches us all is Amazon’s recommendation engine. But that has a clear motive and is not altruistic at all. So how do you really do this and balance all of these elements? It’s a nice, interesting challenge.

Julia: One reason why people love P2P decentralized networks is it’s not supposed to be biased, but at the fundamental code base if there’s something hidden in there that creates a bias, then the whole system is biased.

That’s why we have a bunch of idealistic people working on this and trying to influence the code base developers who are also a bunch of idealistic people trying to keep out desire for dominance or control of what is supposed to be decentralized.

That brings me to the WORLD principles that we’re talking about infusing in LACE: Wisdom, Openness, Resilience, Love, and Diversity. These things are key. In the very first article I talked vaguely about each of them. I’m curious which one resonates most with you. What’s your passion among those?

Chitra: If I had to lean into one, I’d probably pick the diversity one, especially on the federal side, right?

I feel like, for the last 20 years or so, all I see is continued noise, for lack of a better expression. We are in love with marketplaces. We think about lots of different things. There’s always a slightly different algorithm that does something slightly better or worse that’s needed for some other purpose.

But it’s always difficult to figure out that matching between need and algorithm, right? How do I think about my use case, my application, my data, my mission? How do I think about even within a dataset — like how I looked at normalization or filtering of that data to support the algorithm? There’s so much nuance to all this.

So when you think about it, it’s always going to be this end-to-end problem where you are overwhelmed with your choices. And we continue to innovate more, and we should never stop that. But then how do we start making better, effective decisions for whatever objectives we have?

The value here and the thing that becomes really interesting is I don’t have to throttle down the diversity in order to do something that’s better. Maybe at the end I can continue to say we can still amp it up. When we’re done with LACE, maybe I can trust that the system will know what right things to bring in context to each other in order to solve the thing we’re trying to solve.

It’s like nature, right? There’s a continued biodiversity. Minimizing the diversity doesn’t enrich the environment. And sometimes it looks like there’s too much, but it’s a matter of how they all actually work in concert. It’s something that happens so easily, just physically, in nature, and we have to figure out what to do and how to translate that.

The easy answer is to throttle it down, right? And to say everything has to conform in a way that I know how to consume it. But then you don’t end up with the best outcome.

Julia: That also speaks to why the noise happens. The noise happens because it’s not a choir. A choir is a bunch of people singing with different voices, and it comes out as this beautiful, diverse array of sound; these timbres of voices come together and it’s really beautiful.

That’s the kind of diversity that we’re going for. But what has happened is this noise — it’s each person figuring out a different solution, and then wanting to control it, or not figuring out a solution and wanting to control the information so they don’t look bad for not knowing the solution.

It feels to me that noise happens when people are in a system they treat as a control system, or their theory is that it’s simple enough to control, and they don’t realize that it’s actually a complex system that you can’t control.

And the choir, that really productive and beautiful diversity of voices, happens when people realize that this is authentically a diverse and complex system. And the only way to manage that is decentralized interaction.

Chitra: Yeah. And I really like your symphony example, because I come from more of an imaging background, from a technical education perspective. So when you think about signal to noise, it’s never about trying to wipe away the noise. It’s all about suppression of the noise and amplification of the signal, right?

So on any given day for any given thing, you’re playing with your amplifiers and you’re figuring out what you’re suppressing. You’re not getting rid of noise, because it might be that you need it for something else. So I think it’s an interesting signal problem.

And I think we have to sit there and say, “Hey, there’s lots of value in a lot of these things, and we never know what we need at any given time — and if we throw it away, we definitely don’t get it!” The other aspect of this that I like, related to the way we’ve been discussing all of this, is that I think it’s genuinely the free market enterprise concepts that we’ve all loved and endeared ourselves to in the Western world. The things that are good and valuable to an outcome are the ones that make their way through.

Julia: The truth will out, the diversity will out, and you might as well use it for the good, rather than have it be the noise, right? And so this is a way to use it to get to an end goal that is really very powerful.

Also, as we solve the problems of P2P decentralized AI, we are solving the same problems of P2P human intelligence interaction, right? As we are solving the problems of P2P decentralized AI, we are solving the problem of what it means to be a person in a community who’s trying to solve problems that make the world a better place, right?

So it’s all the same thing. People don’t actually react well to autocratic control. If you want people to do things, it works to incentivize each person. It works to give them their own autonomy and help them figure out what they’re good at and how they’re going to respond to negotiations.

This is what works for people. And as the whole world transitions from centralized control to decentralized control, this is sort of a sandbox. That’s part of why I’m excited about this. Since I was a kid, I’ve been trying to address the question of, why can’t we all get along? And I think the answer is because we’re not decentralized.

Chitra: No, it’s funny that you bring it up like this. When I was a kid, we were on a trip to India. I always had my own little issues with religion just because I was like, “y’all are saying the same thing, but a little different, and then everyone’s ‘this is my club’ and ‘you’re not invited to my club.’”

So it always felt a little funny to me. We went to visit this little town, this French settlement in Southern India called Pondicherry. And there was this area that’s basically like a commune, for all intents and purposes, but it was really interesting because, you go there and everybody has a thing they contribute.

There’s no job that pays you to do something. But if you were a doctor, then you are the doctor for all these people — whatever your skills are, you bring your skills and you do stuff, right? Like a kibbutz.

So they had built a lot of natural products there; it was the eighties, nineties when I saw some of that stuff. It was fascinating to me. In the end, the problem with all these societal implementations is that there is a set of people, aka “the leaders,” who determine what your skill sets are and assign you to some function. And you’re all happy to contribute in whatever function.

And so it’s like halfway there. It solves this one problem, but it introduces other problems. And so I think it’s in that same mode where we’re thinking about decentralization — it’s mixed in our heads with this kibbutz style philosophy, and we don’t really have a good analog for what decentralization really offers. We don’t know how to frame it — it’s new new, and we’ll have to figure out how this works.

But I think the problem is like, we think if it goes amok, we can’t put it back together. But that’s the way physics works.

Julia: Yeah, it falls apart into a new part, right?

Chitra: But I think there’s definitely something novel in the notion that nobody controls it. And on the flip side, bad things could similarly happen, but then that’s why we have to figure out how to put safeguards in the system. What are the things we pay attention to in order to prevent complete anarchy in the system, right?

Julia: To some extent, every system will have bad stuff that goes on and that will be exploited to do various things. The question is what can we do at the very ground level to try to discourage that?

Chitra: Yeah. And I do think that like all physical systems or anything you design — or even just people — the philosophy will be to take the path of least resistance. I think that’s still valid, even if we design this type of a system.

Julia: Especially if we design this type of a system.

Chitra: The incentive model depends on it. So there’s a lot of good things going on, there’s a lot of philosophy baked in, but in most cases I think the thing that’s most exciting is that this is actually workable.

Julia: It’s actually workable and it’s happening. And it’s scary. It sometimes scares me. Like, wait, are we really doing this? Yeah, because it’s the next step. It’s the thing that has to happen. It’s bigger than us. And it’s appropriate. And there’s going to be problems, and there will be solutions to those problems.

And potentially we could use the system itself to solve some of those problems. So it’s exciting. I really love that when you were a kid you were thinking about that. Those are exactly the kind of thoughts I would have had as a kid on that trip and that experience.

It’s, “now wait a minute, this is like England telling people you’re in class A and you’re in class B,” right?

Chitra: And I’ve assessed your skill set. But it’s interesting — you have to think about why these things are still around. There are always new versions of these things that pop up.

Julia: Exactly. And how do we do a reputation scoring system that’s not based on, “we’ve assessed your skill set and this is what you’re available to do,” but instead, how can we do reputation scoring on things that are appropriate, but that can be changed.
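One hypothetical way to make a reputation score “appropriate, but changeable” is to let old observations decay, so a score is a rolling summary of recent behavior rather than a permanent label. This sketch is purely illustrative (the decay constant and outcome scale are assumptions, not a LACE design):

```python
from dataclasses import dataclass

DECAY = 0.9  # older observations fade, so reputations can change

@dataclass
class Reputation:
    """Reputation as a decaying average of recent outcomes, not a fixed label.

    Each new outcome (0.0 = bad, 1.0 = good) gradually outweighs history,
    so a node (or person) that improves is not locked into an old score.
    """
    score: float = 0.5  # neutral prior

    def record(self, outcome: float) -> float:
        # Exponential moving average: recent behavior dominates over time.
        self.score = DECAY * self.score + (1 - DECAY) * outcome
        return self.score

rep = Reputation()
for _ in range(30):
    rep.record(0.0)   # a long run of bad behavior drives the score down...
low = rep.score
for _ in range(30):
    rep.record(1.0)   # ...but sustained improvement brings it back up
high = rep.score
```

The design choice here is that no single assessment is final: the same mechanism that punishes bad behavior lets an evolving participant earn its way back, which is the opposite of “we’ve assessed your skill set” once and forever.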

Chitra: People evolve. Even AI will evolve.

Julia: I am feeling good about this conversation. I think diversity is a really key piece. Anything you want to add?

Chitra: No, I think this is going to be awesome. I’ve always found myself lucky in that I’ve met lots of amazing thought leaders in the government, even though a lot of people diss on the government. And I think it’s going to be fun to see how this part of the world that we live in can not just be part of the story, but can help lead and frame the story.

Julia: Amen.

--

Julia Mossbridge, PhD

President, Mossbridge Institute; Affiliate Prof., Dept. of Physics and Biophysics at U. San Diego; Board Chair, The Institute for Love and Time (TILT)