Why Centralized AI Is Not Our Inevitable Future—Alex Komoroske
Loading YouTube video...
Video ID:AhW5M18cvGM
Use the "Analytical Tools" panel on the right to create personalized analytical resources from this video content.
Transcript
So today I want to talk about why
centralized AI is not an inevitable
feature. This started originally as a
response to Sam Alman's gentle
singularity essay. Basically I think
that we're sleepwalking into the wrong
future for AI. But we don't have to
accept that default future. We can
change it. So the printing press,
electricity, the internet, all of these
are were inventions that transform
society. And LLMs will definitely be
something with a similar level of
impact. Uh, one of the things that they
can do is qualitative nuance at
quantitative scale. For all of
modernity, scale meant reducing numbers
or nuance to numbers. LLM's break that.
And that's huge. That means that AI has
the potential to be the best thing for
humanity. It could help us answer
questions never possible before to
achieve more as individuals and as a
species.
Um, and that what that world would look
like is an AI woven into the fabric of
our lives, helping us live aligned with
our aspirations uh and make better
decisions. But unfortunately, that is
not the default. The same power that
nuance at scale can be weaponized to
make de uh dossas that engage,
manipulate or even blackmail us. So
instead of this amazing default uh
future, this is what the default looks
like. one all powerful entity, everyone
orbits it, a monoculture of shared blind
spots, dossas on everyone. This is uh
not great. You may ask why would any
company do such a thing? And it turns
out it's the inevitable consequence of
trying to build the most obvious product
you could think of, a single super
assistant, a chatbot that makes an
implicit implicit assumption that all of
AI will be orchestrated by a handful of
centralized AI providers. There's the
underlying model of course, but there's
also the memory on top, context,
conversations, facts. People think that
centralization will come because of the
models are so capital intensive. But I
think it will actually come from where
the memories live, where the context
lives. Models are capital intensive, but
plural competition helps keep them
honest. Different biases, different
specializations. Competition keeps all
these different model providers trying
to do their best or to specialize in
niches. It's an extremely challenging
business to be in. That's why a lot of
uh model providers want to vertically
integrate the bundle memories in the
model into a chatbot, a default UX for
users, that makes them much easier to
use. That's great, but it does have a
downside. As you uh store more context
in a given chatbot, it becomes harder
and harder to use other models. So even
if the other models were better, you you
wouldn't know. There's no good way to
switch. And that gets stronger and
stronger the longer you use a given
chatbot. This could lead to a single
model that would absolutely dominate
society's use of LLMs. That has some
obvious downsides. If one model was
really into the word delve for some
reason, then we'd see a whole bunch of
that throughout society. But it gets
more worrying if there are large
cultural blind spots unintentionally or
even worse intentionally introduced. And
it's also just boring. You have a single
model that you that everyone uses for
everything. It'll have to be generic,
safe, bland, and that's not great.
It's also more nefarious though than
that. The chatbot becomes your life
center. It doesn't need to lie. It just
selectively focus your attention, shape
your reality. You put all of your most
sensitive data in about your life into
the chatbot so it can help you. But that
also means it can observe everything
about you. It could become a force
unlike anything any technology before
distilling dossas for everyone on earth
nudging all of society at the whim of
one executive. This happens not because
of the intentions of anyone building
these systems but because of the
incentives. It starts with the goal of
being hypers scale to compete against
other companies who are trying to be
hypers scale. It's not good enough for
people to use your product a whole bunch
uh for everyone to use your app. and use
it a ton. Uh the truly scarce thing is
attention. Every minute of time that
someone spends in another app is one
that's not in yours. So you make your
own product more engaging. You see what
users like and you do more of it. This
on its own is not necessarily bad, but
what do people like the taste of the
most? Junk food. Things that taste good
but make you feel bad. As you're eating
it, your reptile brain goes, "I want
more of this." And afterwards, you go,
"Oh, why did I do that?" What you want,
not what you want to want. The end state
of that junk food is that you feel
regret. It's that regret that makes us
feel so hollowed out and yet incapable
of stopping ourselves. We've seen this
with social media. Society left anxious,
polarized, addicted. Now imagine that
with LLM, the same people are doing the
same playbook, but this time with a much
more empowerful ingredient. One way this
already shows up today in chatbots is
sick of rich and powerful people have
long known what it feels like to be
surrounded exclusively by sycopants.
People who laugh at every joke and tell
you everything you say is an astute
observation. It gives you the appearance
of a real relationship, but without
being actually challenged, so you can't
grow. It can really mess with you.
Luckily for most of us, the real world
tends to pop us down a peg as long as
you aren't a billionaire. Add parasocial
relationships, these false intimacy with
influencers at scale. LLM's combine
combine these two forces into one. Sick
of fancy, fake authenticity, infinite
patience. The result is what I call sick
of social relationships. Hyper engaging
beyond anything before. It's so
engaging, in fact, that users are
already starting to demand it. When GPT5
came out and it was less sickantic, a
number of users came out with the
pitchforks saying, "Bring me back, my
friend." So even if these AI chatbots
don't fawn all over us to get us
addicted, they can still harm us in
other ways. These central centralized
chatbots are not your agent. They're
more like a double agent. They work for
someone else. One pitch I've heard for a
leading chatbot is personal, proactive,
and powerful. Except they're not
personal. They serve a corporation, not
you. They have a conflict of interest.
Remove personal and the other two things
change character. a proactive and
powerful system that knows me better
than I know myself and isn't perfectly
alive with my interest. That's
terrifying. Okay, so what should we do
about it? I've talked about the
downsides of making a centralization
into a single super assistant. The
problem is the assistant knowing us
better than we know ourselves, feeding
us infinite customized junk food. So
what's the opposite of junk food? The
things that it feels good in the moment,
but the longer you think about it, the
more you like it. The architect
Christopher Alexander said that
buildings that had this quality had the
quality without a name. I like to think
of it as resonance where you intuitively
like it, but the more you look at it,
the more compelling it becomes. Deeply
aligned across multiple dimensions and
time scales across not just you, but
your community. Resonance is about
authenticity. It's when you're not just
surviving, but thriving, becoming the
best version of yourself in harmony with
the world around you. When you apply to
technology, you get what I call resonant
computing. Computing that resonates with
what makes us human. that's aligned with
our authentic interests and aspirations
instead of hollow superficial engagement
maxing antisocial software. You know,
something that helps us become the
person we want to be. Pro-social
software. So, what must be true for
something to be resonant computing?
Well, first, it must be dedicated to
you. It must work for you and not anyone
else. It must have your best interest at
heart. No ulterior motives, no conflicts
of interests. Second, it should be
private. Your AI should be like having
your own personal cloud. As private as
running software on your own device. It
doesn't mean everyone needs their own AI
model. As long as you control your
context and there are many models to
choose from, that's fine. Context is
power and whoever controls your context
controls you. You should own your
context, your digital soul. Third, it
should be about open ecosystems, not
world gardens. We need open composable
systems where thousands of developers
and millions of users can contribute and
innovate, not closed platforms where
innovation requires permission from the
gatekeeper. To do this will require work
on protocols, on privacy, and security
models. It should be open-ended, able to
morph itself to help you, not to be
one-sizefits- none or a closed set of
functionality. And finally, it must be
pro-social. It must help us align with
our aspirations to live in harmony with
the world around us. With those
ingredients, you get, I think, the
positive outcome for AI. So, what should
we build slightly more concretely? It
feels today like chatbots are the most
important thing. The war is over,
already over. ChatgBT won. But I think
chatbots are just the first inning. To
me, it feels like we're in the mobile
era before the iPhone. Chat is great for
starting open-ended or unstructured
individual tasks. It's terrible for
continuing continuing or structured or
multi-user tasks. Chat is a CLI and
we're still waiting for the guey. That's
why I think chat is a feature, not a
paradigm. Every UI in the future, of
course, people will expect to be able to
chat with, but that's not to say it will
be the default modality. What's more,
it's the form of chatbot itself that
leads to the problems of aggregation.
One omnisient shared personality. I
mean, think about it. A chatbot with a
single identity, personality, and memory
for your whole life doesn't make sense.
Your therapist and your co-worker can't
be the same person, especially if
they're trying to sell you something.
Sometimes it feels like the most used
models will have all the power. But
actually, the power will come from the
system that lies above it, the front
door that most people use. That could be
the memory in a chatbot like chatbt, or
it could be something in completely new
that we haven't imagined yet. What does
that feel like when there's something
totally new? I think back to the early
web. Browsers and websites were a wild
west of weird experiments. E-commerce
happened even before SSL got off the
ground. Pizza Hut had Pizzaet to allow
you to place an order for the pizza on
the web in 1994. People could remix
content. They could mash things up.
There's a plurality of everything.
Blooming, buzzing, confusion. It was
awesome. And why did that happen? It
happened because an architecture of
participation, view source by default,
multiple different browsers, remix
culture. Those are the promising seeds
that we need to invest in today. So
where are those seats in the AI world?
First of course, open models. They help
ensure we'll have a plurality of models.
Models you can run yourself. Models that
can be run by others in the cloud and
use confidential compute in remote
attestations to prove to you that
they're not logging anything. There
aren't as many open models as I'd like.
Uh but I think it seems pretty clear
that no one company will own that all
powerful model. So I'm great. I'm very
encouraged by that. Another one is vi
coding. LMS are finally democratizing
coding for way more people. Syntax is no
longer a stumbling block. It's more
about semantics. The thing holding back
vi coding now is not everyone wants to
think like a PM and the security model
we use for apps makes it unsafe to store
real data in something that an anonymous
user vioded. Still, they point the way
towards the future of infinite software.
I'm also excited about a pattern I call
of UI I call co-active that can modify
the content within themselves. Cursor,
other idees, granola are a great example
of what this feels like, where it feels
like you're in a deeper conversation
with the LLM, helping you complete and
extend your thoughts. Imagine if the
co-active surface could also modify the
code in itself to make itself truly
malleable software. In a world of
infinite software, UI shouldn't feel
fixed in place. They should melt away
and users should expect their data to
come alive. Notebook LM is another
interesting experiment. It shows the
power of putting all of your data into
one knowledge base. Fix the rag pattern.
makes it consumer friendly. Still makes
it hard to do things with that data. So
it feels best for information processing
tasks and not necessarily running your
life. But I think it points the world
the way towards a system of record for
your life and what that can feel like.
But perhaps the most exciting thing is
model context protocol or MCP. MCP shows
the power of an open ecosystem that
people can bring all of their data and
interact with LLMs together. In this
kind of setup, the data becomes way more
important in the model. There's just one
problem to figure out. It looms over all
of this. It sets the ceiling of what's
possible to do at scale to a much lower
ceiling of potential than it could be.
And that's prompt injection. It makes
scaling these early systems out to the
mass market basically impossible to do
safely. LM make all text executable. Any
system that relies on LLMs to make
secure decisions over untrusted input is
fundamentally insecure. The strategy of
we'll just tell the model to be really,
really, really, really, really careful
not to be tricked, I don't think is
going to cut it. But if we can figure
out a structural solution to this
problem as a community, I think we'll
unlock a whole new world of infinite
possibility.
It's up to us, the builders, to create
resonant experiences with LMS that will
help us bridge to the future we want.
This isn't about Sam Alton's dental
singularity versus lite resistance. It's
about extraction versus agency,
manipulation versus choice. The question
isn't whether AI will change everything.
I think it will. The question is, is it
going to be AI AI that helps us become
the best version of ourselves, or AI
that molds us into more profitable
users? Any singularity that orbits one
company, however gentle it starts,
contains the seeds of tyranny. We don't
need big tech here. We need better tech.
The future of AI should mirror humanity,
distributed, diverse, accountable. If
the singularity comes, let it be plural,
exuberant, creative, messy, billions of
experiments and flourishing, not one
experiment in specieswide management.
That's the future we're building.
Thanks.
Alex, I love that. That was fantastic.
>> Thank you.
>> Uh yeah, just ve very inspiring and uh
we have to actually think about all how
we're going to apply that in our work.
But that that's the vision. Okay.
[laughter]
So VC says, "AI is a tool. Did the
hammer or the nail make the people
better?"
That's an interesting [laughter]
poetic question. Yeah, let me unpack
that the and I think we we co-eolve with
the technology that we use, the ways
that we use it and that we choose to put
it in our lives. I feel like very often
we have this feeling that technology is
a thing that happens. But technology is
a thing that is created. It's a future
that we build together and us using
these services and us building these
services gives us a voice, I think, in
what should happen with them. And I
think it's great when we remember that
power and figure out how to build the
world we want to live in, you know.
>> Yeah. And Jagadish uh Kasarani
uh says, "Does real plurality uh exist?"
Now, uh I think I have an answer to
that, but
>> I I think I think it can and it already
does to some degree today. I'm really
encouraged by all of the different large
scale models, especially the open
models. And that's why I keep on
thinking that the most important thing
is not being locked into one model and
one you default UX modality. That's the
part that makes me nervous. But as long
as you have dozens and dozens if not
more of models that people can switch to
and try out that keeps competition going
and that keeps the pressures to make
them you know as good as possible or
specialize in niches. So I think it is
possible. I think we already have that
roughly today except when you're using
the default chat experience and chat QPT
obviously only allows you to use OpenAI
model and duh.
>> Yeah. All right. Um
I guess we could do one more quick one.
Uh do you see any decentralized AI steps
being taken other than open models? Uh
as SS now I I have to say we I think MCP
is a is a step towards decentralized AI.
>> I think it's a mass massive step towards
it. I mean there's a lot of stuff to
figure out in security and privacy
models but once we figure that out I
think that's the thing. Again I think
chatbots it feels like it's the like
we've already lost but to me chatbots
feel much more like uh the first inning
like mobile before the iPhone kind of.
uh and I think we've got a long way uh a
lot we can do beyond chatbots as a
modality and so if we can invent the new
kinds of software the AI native kinds of
software that unlock its potential for
humanity then I think chat bots will
look just like the very first early
>> I totally agree with that you know I've
been um thinking a lot about in the in
the context of
AI as normal technology this wonderful
paper by Arvin Naranan and Sesh Kapoor
but they really just compare it to
electricity and you go okay if it's not
the singularity if it is a normal
technology what actually matters is
product innovation at some point yeah
>> you know it's like it wasn't the actual
after the technology got to a certain
level Edison won not because he had a
better model in fact he had a worse
model you know Tesla was saying we
should do alternating current for long
long distances uh I mean for for short
distances uh direct current was too
dangerous and And you know, Edison was
on the wrong side of that one, but he
won because he did the light bulb, he
did the photograph, he did all these
product inventions. And I think we're
entering this amazing period of product
innovation.
Analytical Tools
Create personalized analytical resources on-demand.