OpenAI vs. Grok: The Race to Build the Everything App w/ Emad Mostaque, Dave Blundin & AWG | EP #199
Video ID: bXT0PBasDc0
Transcript
OpenAI Dev Day just occurred. Some of the most staggering things in human history got announced yesterday, and they still undersold it. "Good morning and welcome to Dev Day." The battle here is that human attention is finite. OpenAI, Meta, everyone's making a play for: who are you talking to that then enables these MCP-enabled agents to come and do the job? Everyone's trying to be the everything app.
What happens when suddenly we're able to 5x, 6x, 7x the amount of broadly accessible superintelligence across the world? I think this starts to become the foundation for transformative economic changes at a planetary scale.
OpenAI is really trying to do a global land grab, right? Going into India, going into the UK, going into Greece. And then you've also got all of the open-source models coming out of China. We're just at this tipping point, and the tipping point is in the next six months.
Now that's a moonshot, ladies and
gentlemen.
Everybody, welcome to Moonshots and our next episode of WTF Just Happened in Tech. We're spinning up this episode real quick with my extraordinary moonshot mates because OpenAI Dev Day just happened. We want to cover the subjects there, but there's a lot happening across the board in robotics. You know, FSD 14.1 from Tesla just dropped, as well as other robotics updates and data center updates. We've got my moonshot mate Dave Blundin. Dave, good to see you, pal.
Good morning.
And of course, we have AWG live from someplace in hyperspace. Good to see you, Alex. Welcome back. And then one of our other moonshot mates, Emad Mostaque, coming in from London. Emad, good morning to you.
Morning
or good afternoon as the case may be.
You know, in the rocket business, there's something called a hypergolic fuel: it's when two chemicals come together and they explode and make a propulsive force. And I think about AWG and Emad coming together as my hypergolic fuel this morning. So,
better than coffee. Completely agree.
Yeah, absolutely. All right. So, as always, this is the news that's breaking that I think is impacting the global economy, impacting our mindsets, impacting, you know, how we teach our kids and run our companies. So, nothing more important for me. Let's jump in. The reason we spun this up for everybody is that OpenAI Dev Day just occurred. We want to hit on this, and I'd like to really evaluate along the way, you know, how critically, how rapidly Sam is shaping the future of his company and AI, in a positive way. We're going to discuss this.
All right. I'm going to open up with a short video clip of Sam opening OpenAI Dev Day yesterday. Let's take a listen. "Back in 2023, we had 2 million weekly developers and 100 million weekly ChatGPT users. We were processing about 300 million tokens per minute on our API. And that felt like a lot to us, at least at the time. Today, 4 million developers have built with OpenAI. More than 800 million people use ChatGPT every week, and we process over 6 billion tokens per minute on the API. Thanks to all of you, AI has gone from something people play with to something people build with every day. We think this is the best time in history to be a builder. It has never been faster to go from idea to product. You can really feel the acceleration at this point. So to get started, let's take a look at apps inside of ChatGPT."
All right,
Dave, you want to open up?
Yeah. So, obviously Sam's not, you know, the best; he's not Steve Jobs on stage. But the numbers are just staggering. You know, the 300 million to 6 billion tokens per minute is the one that really jumps out. And it's going to explode from here forward too, because, you know, I could easily consume 10,000-plus myself just coding, and with the number of developers coming on board and the number of home users coming on board, it's just astronomical. So, as we've been saying, there's nowhere near enough compute to keep up with it. I think Jony Ive might be the guy driving the "hey, let's do this Steve Jobs style, have our very first big-stage developer day." You know, their GPT-5 launch was really flat. I mean really, really flat. They did a much better job yesterday. I've got to believe Jony is driving that: guys, put some money and some effort behind it. Let's go.
But, you know, some of the most staggering things in human history got announced yesterday, and they still undersold it relative to the implications. We'll see it in a couple of other videos here. I don't know if that's deliberate slow-playing, because they don't have enough compute to keep up with the demand anyway, or if it's just that they're learning how to do showbiz on the big stage. But in any event, we'll see some more just mind-blowing capabilities that, if anything, are understated.
And 800 million users is pretty extraordinary. They're tracking for a billion users. And I just wonder: is this a winner-take-most type scenario, or is there anything that can overturn them in the final result? Let's go to you, and then we'll go to Alex to bring us home on this one.
Yeah, I think to put it in context, it's a lot, but it's about as many weekly active users as Snapchat. And I know which one's going to have a bigger impact upon the world between the two, you know. I think there's still so much upside to come from here, but now you're seeing their model, with Sora 2 and others, maybe moving towards an advertising model as tokens get cheaper and as they get faster.
To put the token numbers in context, 6 billion a minute is about 3 quadrillion tokens a year. All of the humans in the world together speak about 50 quadrillion tokens a year, and I expect OpenAI's number to go up 10 times. So next year OpenAI is probably going to be at 30 quadrillion, and then the year after, they'll overtake, by themselves, all the human words spoken every single year.
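Those figures hold up as back-of-envelope arithmetic. A quick sketch, using the speakers' own estimates (6 billion API tokens per minute, roughly 50 quadrillion human-spoken tokens per year) rather than any audited data:

```python
# Back-of-envelope check of the token math quoted above.
# Inputs are the speakers' estimates, not audited figures.
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

openai_tokens_per_min = 6e9        # "over 6 billion tokens per minute"
openai_tokens_per_year = openai_tokens_per_min * MINUTES_PER_YEAR

human_spoken_tokens_per_year = 50e15  # Emad's ~50 quadrillion estimate

print(f"OpenAI API: ~{openai_tokens_per_year / 1e15:.1f} quadrillion tokens/year")
print(f"Ratio to human speech: {openai_tokens_per_year / human_spoken_tokens_per_year:.0%}")
```

That lands at roughly 3.2 quadrillion tokens a year, about 6% of the human-speech figure, consistent with the "about 3 quadrillion" claim.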
So this is the type of...
I think that moment could be calculated. That's really...
I think we're getting close to that, because Google said they're doing a quadrillion on their billion active users right now, because it's in search and things. So I think we're at that tipping point now, where the number of AI tokens coming into the world is about to overtake humans'. And maybe we should call it a something day, right?
Quadrillion here, quadrillion there.
Yes.
Quadrillion here, quadrillion there.
Yeah. Alex, what's your take on this
opening commentary?
Yeah, I think we're really far from saturation. I would add that, in addition to being at, call it, about 6% saturation, comparing the number of OpenAI-generated tokens versus human-spoken tokens per minute, I think there's probably an even more important statistic, which is that there are approximately 4 billion human users of smartphones that aren't yet using any sort of superintelligence, if you will. Now ask yourself: what happens when suddenly we're able to 5x, 6x, 7x the amount of broadly accessible superintelligence across the world? I think this starts to become the foundation for transformative economic changes at a planetary scale.
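The "5x, 6x, 7x" headroom follows from the user counts mentioned earlier. A rough sketch, assuming ~800 million current weekly users and ~4 billion smartphone users not yet on any AI assistant (both speakers' estimates):

```python
# Rough headroom arithmetic behind the "5x, 6x, 7x" claim.
# Both inputs are estimates quoted in the conversation.
current_weekly_users = 800e6         # ChatGPT weekly actives
untapped_smartphone_users = 4e9      # smartphone users with no AI assistant

expansion = (current_weekly_users + untapped_smartphone_users) / current_weekly_users
print(f"Potential expansion: {expansion:.0f}x")
```

That puts the ceiling right at 6x, in the middle of the quoted range.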
And you're limiting that to humans. Of course, humans might be the least significant users of superintelligence in the final result.
Well, with full autonomy, superintelligence is arguably the ultimate user of superintelligence. Yeah, there's a limit to the number of words we can say. You know, it's like 20,000 a day; our thinking tokens are 200,000 a day. AI has no limit to the number of tokens, economically valuable tokens, it can produce, except for the GPUs. That's the only limit.
Well, you know, on our last podcast we talked about how Sam was saying we're going to have to make a trade-off between tokens used for education for our children or tokens used for healthcare to save lives. But we don't have an infinite amount of compute, and I don't want to make those difficult decisions. If FSD is coming online, and all these cars are going to start driving themselves, and the quality of the driving is directly tied to the amount of compute available, then we're imminently going to make very, very difficult decisions around, you know, tolerating a very rare car crash versus giving somebody the ability to build something at home using AI. It's just incredible, the difficult decisions that are coming immediately after all these functions that we're about to see get deployed. And at the same time, we're also limited by energy, which we'll talk to
in this conversation. I'm going to move
us to a few of the features from OpenAI Dev Day. We'll see a few things. I didn't show the video here, but one of the primary high points was talking to apps within ChatGPT. They have an Apps SDK and the ability to say to Booking.com, book me this trip; or Figma, you know, diagram this; or Coursera, teach me this. Or, you know, just speak to Zillow. It's the ultimate interface with all the other apps out there. What's the significance of this for you, Emad?
Well, I think attention is all they need, as it were. The battle here is that human attention is finite, and so OpenAI, Meta, everyone's making a play for: who are you talking to that then enables these MCP-enabled agents to come and do the job? What is the Tencent WeChat-type super app that's coming together? Because everyone's folding themselves into these nice kinds of things, and again, that's how they're going to try and monetize. So you'll see this battle between Meta, via WhatsApp, Instagram, things like that, Google, and OpenAI to try and occupy that real estate. And then of course Elon's going to come in with X, and all sorts of interesting things are coming. Everyone's trying to be the everything app.
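Mechanically, the pattern under discussion is one conversational front end routing requests to whichever registered app can handle them. A minimal sketch of that routing idea; the app names and handler signatures here are invented for illustration, and this is not the actual OpenAI Apps SDK or the MCP wire format:

```python
from typing import Callable

# Registry of apps exposed to the assistant as callable tools.
# App names and behaviors are hypothetical.
TOOLS: dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Register a handler as an app the assistant can route to."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return wrap

@register("booking")
def book_trip(request: str) -> str:
    return f"[booking] searching itineraries for: {request}"

@register("figma")
def make_diagram(request: str) -> str:
    return f"[figma] drafting a diagram for: {request}"

def route(app: str, request: str) -> str:
    """The assistant picks an app and forwards the user's request to it."""
    return TOOLS[app](request)

print(route("booking", "a week in Lisbon in May"))
```

Whoever owns the conversation owns the registry, which is the "real estate" being fought over.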
Yeah, for sure. Dave,
Well, in a second we're going to see something actually built by voice. Why don't we look at it, and then we can... It's actually really cool when you see it.
The Codex example.
Yeah. Yeah.
Yeah. All right, we'll come to
that. But before that, I'm just wondering, you know, when OpenAI drops this capability, are they picking winners in the final result? Are they going to spread their attention equally across everybody? And are they basically eating away all the entrepreneurial startups? There was a tweet that went out, I was trying to capture it, but it said: okay, OpenAI just eliminated, you know, a million different startups out there working on their approach.
Remember, every platform ergonomically wants to have its own app store. So I think the notion of an app store being built on top of a new platform, with ChatGPT, and then presumably other frontier models as well, wanting to become the new operating system, certainly rhymes with the Facebook platform moment, when Facebook launched that as well. I think that's a very natural market movement. But I would also perhaps caution: at some point, I think it's reasonable to expect that every pixel is going to be generated. It's not just going to be vector art or HTML-type graphics; every pixel is going to be generated. So I would view this as almost a transitory moment, where apps are floating on top of ChatGPT as the new operating system environment, but it's a passing phase. At some point, every single pixel probably wants to be generated. That was my first thought. The other thought is: do you remember, Peter, back in 1987, when Apple, without Steve Jobs, launched their Knowledge Navigator concept?
Yes, I do. I mean
we're living in that now.
Yeah,
We're living in that, where, you know, the professor is having a conversation with basically a similar type of canvas that is able to pop open new apps and interact with them on demand. We caught up with the future approximately 40 years later. We're living the Knowledge Navigator future.
Every week, my team and I study the top
10 technology metatrends that will
transform industries over the decade
ahead. I cover trends ranging from
humanoid robotics, AGI, and quantum
computing to transport, energy,
longevity, and more. There's no fluff,
only the most important stuff that
matters that impacts our lives, our
companies, and our careers. If you want
me to share these meta trends with you,
I write a newsletter twice a week,
sending it out as a short two-minute
read via email. And if you want to
discover the most important meta trends
10 years before anyone else, this
report's for you. Readers include
founders and CEOs from the world's most
disruptive companies and entrepreneurs
building the world's most disruptive
tech. It's not for you if you don't
want to be informed about what's coming,
why it matters, and how you can benefit
from it. To subscribe for free, go to
diamandis.com/metatrends
to gain access to the trends 10 years
before anyone else. All right, now back
to this episode. One of the examples they had here, on their live demo stage: they had an individual propose a new startup. In this case, it was a dog-walking app. And they said, "Okay, create me an image for it. Create me a name for it." And then they said, "Okay, Canva, turn this into a deck. I want to raise money." At the end of the day, you know, we're not too many steps removed from: ChatGPT, start this business for me, and start, you know, wiring the revenues to this location. I mean,
I think that's the multi-trillion-dollar endgame here, where at some point we see autonomous corporations.
Mhm.
Yeah. I literally did exactly what you just said, Peter, yesterday at a red light in Cambridge. As I was sitting there, I created a business plan and tried to recruit a Princeton team into it via AI, at the red light.
Say yes! But that's like when Elon said, when he was driving from SpaceX back to his home in Beverly Hills and there was traffic, and he goes, "Damn it, I'm going to start, you know, a tunneling company. It's going to be boring. I'll call it Boring Inc." I mean, there's literally a future in which we're going from mind to materialization. It's stating what you want to do and having the universe conspire to create it for you.
That's crazy.
Yeah. I think, you know, he has the Boring Company, but then he has the even cooler name of Macrohard, his new software company.
I love that.
Against Microsoft.
Elon is a 13-year-old kid for sure. And Macrohard is literally trying to do this: trying to take ideas to full companies entirely digitally, right?
And I think what you've seen is three phases. Consumption was expensive; it became cheap. Creation was expensive; it's becoming cheap. And now the valuable thing is curation and attention. So again, the battle is who can have that value for the pixels that you see, for the noises that you hear. And then a lot of that creation element is going to be abstracted away. And I think all the big players realize this.
And the question is: where does it end up? Where does it go eventually, Dave?
Well, you know that quote that you had? I've heard it a hundred times: "A million startups just died because of what they rolled out yesterday." It's absolutely not true. Show me the names of those startups that died. And this came up when we were talking to Amjad Masad a couple weeks ago on that other podcast. You know, the founder of Replit, he had to build his entire foundation model from scratch to get to market, because it was before, you know, OpenAI had the APIs. And you ask him, do you regret that? You had to throw away all that code. He's like, "No, I absolutely don't regret it. You constantly have to change." You know, AI is going to move at this ridiculous accelerating pace. If you're not reinventing your business constantly, you're dead on arrival.
Yeah.
Yes. Exactly. But your team is intact. If you have a great team and you're in AI, you will succeed every single time. Yeah, maybe something you do gets crushed by the next iteration of OpenAI, but you pivot so quickly and easily, just like we're talking about right now. So, show me the names of those companies, those million companies that died. They don't exist.
All right, let's jump into the next demo they had at OpenAI Dev Day. It is Agent Builder: creating multi-step workflows without coding. I'll just show the first few seconds of this. "And to make this interesting, I'm going to give myself 8 minutes to build and ship an agent right here in front of you. So, I'm starting in the workflow builder in the OpenAI platform. And instead of starting with code, we can actually wire nodes up visually. Agent Builder helps you model really complex workflows in an easy and visual way, using the common patterns that we've learned from building agents our..."
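For listeners, the "wire nodes up visually" idea boils down to a directed graph of steps with branches. A toy sketch of that pattern in plain Python; the node names and routing logic are invented, not taken from Agent Builder itself:

```python
# Toy agent workflow: classify -> branch -> respond.
# Node names and logic are invented for illustration.
def classify(state: dict) -> dict:
    state["intent"] = "refund" if "refund" in state["message"].lower() else "other"
    return state

def refund_agent(state: dict) -> dict:
    state["reply"] = "Routing you to the refund workflow."
    return state

def fallback_agent(state: dict) -> dict:
    state["reply"] = "Let me connect you with a general agent."
    return state

def run(message: str) -> str:
    state = classify({"message": message})
    # Branch node: choose the next agent from the classifier's output.
    next_agent = refund_agent if state["intent"] == "refund" else fallback_agent
    return next_agent(state)["reply"]

print(run("I'd like a refund for my order"))
```

A visual builder draws this same graph as boxes and lines; the discussion that follows argues the graph itself can be generated and hidden entirely.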
All right. Emad, you're building agents left, right, and center right now for Intelligent Internet. What do you think of this?
Yeah, I think you've kind of gone from the creation to now the composition, the multi-stage process, for image generation and media. We built something called ComfyUI, which again is this node-based process. But where we're going, we don't need nodes and spaghetti.
Exactly.
You know, the future of these things, you can look at our Common Ground platform, for example. It flips between kanban and kind of workflows and Gantt charts and things. It will just show you what you need to see. And the way that you'll interact with agents is like interacting with Jarvis in Iron Man. I think in a year or two, that's what the agent builder is going to be. You'll just have a nice chat, and it will show you all these things and mock them up instantly. And in fact, Claude had this with their latest release for the pro users: this instantly-generated desktop-app type thing that literally programmed things on the fly without code, because code is just a human translation layer, and that can be removed completely.
For sure, Dave.
Yeah. It's funny, because the people succeeding in AI are overwhelmingly really young, really, really smart, with very limited business experience, and they keep recreating the same mistakes from like 20 years ago. It's okay, because, you know, AI is such a great tailwind. But this graphical programming language, those lines, is the stupidest thing in the world in the age of AI, where you can talk to the AI. It's very similar in the Cursor interface: if you want to upgrade your account and you're talking to Cursor, or to Claude 4.5, like, "Hey Claude, upgrade my account," and it says, "Well, go to the menus, navigate to the settings." Like, what are you talking about? I'm talking to you right now. You have MCP. Just do it. So that'll all get fixed very quickly, but it's crazy. The whole interface to AI is going to be voice,
voice and images,
and the idea that you're going to design programs by drawing boxes and connecting them with lines,
which has been around since like 1980,
no, no, no, no, no. So, it'll get cleaned up very, very quickly. It's just kind of funny to see this transition phase, and all the same-old same-old mistakes being made.
Alex, any other points you want to make on Agent Builder?
Yeah, I'm reminded, almost by analogy, of the early days of Hollywood, which were shaped around vaudeville-type design patterns. I think that's the stage we're at. This is the vaudeville-on-Hollywood-screens stage of AI for software development. On the one hand, it's great: glad that it exists, and glad that it provides a probably comforting safety net for enterprises that are migrating to end-to-end agentic workflows. On the other hand, it very much feels like a passing phase. I think it's an interesting modality for specifying software development, but really, why not take the full leap? Instead of specifying flowcharts for individual workflows, why not just dump in an entire org chart of a human organization, and rather than individual tasks or roles, port over the entire company org chart and ask for an entire enterprise to be replicated?
The vaudeville analogy is great. You know, if you look at an old original black-and-white movie, the actors are on an actual stage with actual stage lights, doing a play, and they're just recording it on film. Very similar to when the internet took off and the newspapers just copied the exact newspaper over to the internet and said, "Oh, you just read it over here instead." Same font, same everything. We humans are so stuck. We're so stuck in the way we do stuff. We have such a high cost of switching. All right, continuing on this theme of voice mode over everything, here's a little video of controlling Codex with voice mode. Take a listen. "Let's see if we can actually control this camera now with this Xbox controller. There we go. It works."
That's pretty awesome.
What's funny is, I didn't even specify to Codex which button should do what, but it figured that the joystick was probably a good idea. So, that's actually perfect. But let's do the real test now. Let's enter voice mode and see what happens. So, hello.
Hi there. I can hear you loud and clear.
Let me know what you need. Yeah. What do
you see on the camera?
I see a large audience in front of you
with rows of attendees seated and ready.
The room is well lit and the crowd
stretches far back filling the space.
That's right. Could you shine the lights towards the audience?
There we go.
I've just lit up the audience with a nice bright wash. They should be perfectly highlighted now.
All right. So, this is an unusual demo for Codex, to say the least, just showing our ability to control physical items in our environment. But I have to say, I love this for one reason. I've always said I will invest in the first company that's able to make AV foolproof, right? How many of us are in conversations or giving presentations and we can't get this to connect to that? I just want to be able to say to my AI, you know, show that video on that screen; connect me over Zoom to that screen. So there's hope still.
And that's a business plan that's not hypothetical. That's a business plan where, if someone's listening right now and they get together a team, and then Peter seed-invests in it to give it credibility, that will become yet another one of these massive success stories. It's really that simple.
I'm on stage and, you know, something fails, and I say, okay, AI is easy, AV is hard. Well, hopefully AI can solve that. But also controlling it: right now, you kind of wave to people that are backstage and they push some buttons or whatever. It's crazy, because the AI can now recognize your hand gestures and respond to your voice, and it's much more engaging for the audience if you're talking to the AV and it's changing the lighting, changing the slides, you know, pulling up things from the internet in real time.
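The AV-control idea described here is, at its core, a dispatcher from recognized speech to equipment actions. A toy sketch; the command phrases and device actions are made up, and a real system would sit behind speech recognition and an actual lighting/AV controller:

```python
# Map recognized voice commands to AV actions via simple keyword matching.
# Phrases and actions are hypothetical stand-ins for a real AV controller.
ACTIONS = {
    "lights audience": lambda: "lights -> audience",
    "lights stage": lambda: "lights -> stage",
    "next slide": lambda: "slides -> advance",
}

def handle(transcript: str) -> str:
    """Fire the first action whose keywords all appear in the transcript."""
    words = set(transcript.lower().replace("?", "").split())
    for phrase, action in ACTIONS.items():
        if set(phrase.split()) <= words:
            return action()
    return "no matching AV command"

print(handle("Could you shine the lights towards the audience?"))
```

The hard part in practice is the speech and vision front end, not this dispatch logic, which is why the capability is only now becoming doable.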
Very doable. You could get that product out the door in like six months or less and just crush it. And of course, it self-demos, and then, you know, Peter will bring it into the podcast. It's just that simple.
There you go. Let's start with Alex. Alex, what's your thinking?
I think it's sort of interesting, because so many facets of Codex are open source and available for review on GitHub, to actually trace where it appeared.
Tell us what Codex is, as a first point.
Codex is an OpenAI brand that seems to cover a number of different independent software tools. It covers their code-generation-specific AI model backend. It's also used as a web front end for agentic software development. It's also used in connection with a command-line interface tool. So they use it as an umbrella brand, as it were. But in this case, at least one of the Codex-associated projects is up on GitHub. You can review the source history. And so, pulling the thread on the story, it was interesting to discover that at least some aspect of this functionality appears to have originated as a feature request from a third party, from the Carnegie Mellon-affiliated Software Engineering Institute, back in April. That was where a user was complaining, or really pointing out, that human prompt-typing speed is increasingly becoming the limiting factor for software development. So,
by the way, I want to read a quick tweet here that came from an OpenAI employee. It says, "Agent Builder, which we released today, was built end to end in under 6 weeks, with Codex writing 80% of the PRs." This matches the AI 2027 report, which forecasted for 2026: coding automation goes mainstream, agents will work like teammates, AI R&D is 50% faster from algorithms. I mean, we are seeing, I want to say science fiction, but science predictions tying much closer to reality.
We're close to the point of recursive self-improvement, and it can go in the other direction as well. We can get negative speed, where the software is just written preemptively.
Interesting. So, you know, we're not smart enough to realize we need the software, but the AI is, and it's prepped for us in advance. Exactly. I love that. Emad, your thoughts on Codex here.
Yeah, if you get enough tokens. I think this is the thing: most people are just using half a million or a million there, and, you know, to build something like that is five bucks. With the new Grok model, it's 50 cents. You're seeing a crazy thing.
I want to come to that after we close on this, which is, you know: how would you be disrupting OpenAI if you were going to? Dave, do you want to comment on controlling Codex with voice mode?
Well, something Alex said really sparked a thought, which is, you know, Codex, when they launched it, was a way to run five or ten different coding processes in parallel and have status checks, and it made you much more efficient. But now, you know, they use it to kind of bundle five or ten different things together.
This is going to be a real problem, because we're used to products having a very specific name and brand and doing a very limited number of things. But with AI, the explosion of capabilities is exponential, and you can't even keep up with the names. So now it's going to be much more like thematic branding. Like, Codex is a grab bag, you know, GPT is a grab bag, but what else can you do? There's just so much going on. So just keeping up with the names of things, and naming things in general, is going to be...
We had that conversation with Kevin Weil at OpenAI: their naming protocols are kind of insane.
Yeah.
Yeah. If we can add one thing: six months ago, Dario Amodei from Anthropic said that 90% of code will be written by AI. I think he meant can be written by AI, and this is a really great example of that. And so again, they decided to embrace it. You see, Codex the CLI, the command-line interface tool, literally gets two updates a week, which for a multi-billion, half-a-trillion-dollar company is unheard of. And so again, as Alex said, I think you're going to get these recursive self-improvement cycles, first with humans in the loop, but then the software might just upgrade itself and respond to what people might need. Yeah. Continuously.
Yeah. What's interesting is how that interacts with the interface. Like, if you said my iPhone is going to update itself twice a week, you'd be confused as all hell. You would never know where anything is. But now that you have an AI interface on everything, it's okay, because it's self-explaining. It's just seamless.
I mean, I just want Jarvis. I just want an AI I talk to, and it does everything I need to get done. I'm just going to assume that anything is doable, and my AI is going to enable it or find the capability, and I don't need to know all the hard work it's doing on the back end. I don't need to know what it's calling or getting access to. It's just making it happen.
I'm telling you, Peter, within the virtual world, not within the physical world, but within the virtual world, that's today. You know, no one's productized it yet, but all the technology and capability exists right now. The robotic version of it, where it makes your Iron Man suit, might be a year or two or three out. The virtual world, you know, build me a video game, build me whatever: that's right now.
Yeah.
Someone needs to go and get Paul Bettany's voice rights, you know.
Let's take a look at one more video from OpenAI Dev Day, which was the Sora 2 API and a segment I call sketch-to-video. And again, this is going from mind to materialization: if I can imagine something, can I make it real? Take a listen. "Today, we're releasing a preview of Sora 2 in the API.
[Applause]
Mattel has been a great partner, working with us to test Sora 2 in the API and seeing what they can do to bring product ideas to life more quickly. So, one of their designers can now start with a sketch and then turn these early concepts into something that you can see and share and react to. So let's take a look at how this works."
So if you're listening to this podcast, what we're seeing here is basically a hand sketch being developed into a photorealistic video of a Mattel Hot Wheels toy, or Matchbox toy. Super compelling, being able to go from that. And I've talked about how in the future I'm going to be able to describe verbally what I want: I want a device that can hold a hot liquid; I want it to have a handle; I want it this color. And then, as I'm describing it, it's visually materializing on my, you know, AR glasses in front of me. And I say, "No, can you make it a little bit larger? Can you stretch the dimension?" just in plain English. And then: how much would it cost to make? It gives me a price. And can you give me an alternative that's cheaper, or that has better thermal insulation? And I go, yeah, that's it; please print it for me, manufacture it for me, and put it up on the web so anyone can grab it. I mean, this is going, again, from what I call mind to materialization. Super powerful. You want to open up with your thoughts on
this? I mean, the holodeck is getting closer, right? Not with hard light, but as you said, that aspect is there. These models learn physics, they learn materials. So in the video that was just shown, the car goes down these ramps and it's transformed into 3D. You can have 3D extensions from it.
One of the things you can do with these models is a sketchboard, where you show, scene by scene, how every single thing changes, and you have that as the input and it'll generate that clip. And it doesn't do that by thinking or breaking it apart; it literally interpolates the concept into the video. So we're actually only scratching the surface of how powerful these models are at the moment. And then I think, as they get more and more used, you'll see that they are genuinely world models that can create anything you can imagine, and then adapt on the fly as well, with perfect audio to match.
Yeah.
Yeah.
I think everybody everybody has access
to this, right? They just launched the
API version of it yesterday for large
scale use, but anyone can do this. And
if you haven't done it, you're crazy. Do
it. It's so mindopening and compelling.
Do exactly what Amad said. Do it as a
series of scenes. And then right right
now, you got to wait about five or six
minutes to get your video back, which is
really annoying. Uh, and it shows you
the comput
how quickly we are spoiled by
Well, it shows you the compute
bottleneck though. I'm sure when they do
it internally, it comes back in a
millisecond. You know, it is physically
possible to do it very very quickly. Uh,
but again, way too many users for the
capability, but you got to try it
because again, it's mind-opening. When I
first saw this video, I didn't get it,
because I couldn't tell the video was
actually synthetic. I thought, I was
like, "This guy's sketching a toy, a
Mattel toy, and here's the toy." Like,
so what? Like, oh, wait, that
toy doesn't even exist. That's
actually been synthetically
created. It's just so good. The
video has perfect physics. You know, you
just would never know that it's
synthetic. Uh, Alex, what does this mean
in the final result? Where are we going
here?
This is mechanical design getting
solved. MIT, the mechanical engineering
department, has an entire set of courses
just devoted to training the next
generation of mechanical engineers how
to do product design like this. We're
seeing right before our eyes an entire
discipline, or sub-discipline, get solved
in bulk by generative AI. And I think
maybe even more interesting than this
particular instance is the API pricing.
So if you go to the API pricing
page now for Sora 2, it's 10
cents per second for the base model. You
do the arithmetic, that's $360 per hour.
Assume 10x year-over-year hyperdeflation
in model costs within the next year.
Suddenly, it's far cheaper to outsource
mechanical product design to an API call
to Sora 2, or whatever it evolves into,
than to a human. That's an entire
field getting hyperdeflated and solved
overnight.
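Alex's pricing arithmetic can be sketched in a few lines; the $0.10/second rate and the $360/hour figure are from the discussion, while the 10x-per-year deflation is the speculative assumption he names, not an announced price schedule:

```python
# Sora 2 API cost arithmetic from the discussion above.
# The 10x/year deflation rate is a speculative assumption, not a quoted price.
price_per_second = 0.10                      # dollars, base model rate cited
price_per_hour = price_per_second * 60 * 60  # $360/hour

annual_deflation = 10                        # assumed 10x year-over-year drop
years = 2
future_price = price_per_hour / annual_deflation ** years
print(f"today: ${price_per_hour:.0f}/hr; in {years} years: ${future_price:.2f}/hr")
```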
Okay, keep those numbers top of mind
because when we start talking about
compute in a minute and the cost of
compute, you'll immediately recognize
what Alex just said. You know, that
10x deflation in price. We need that
desperately because the demand for what
we're seeing here is going to be orders
of magnitude bigger than the amount of
compute currently available.
What an amazing time to be a kid, right?
Imagine you're sitting down with your
mom and your dad and you're just
describing what you want as a toy or
what you'd like your toy to do and all
of a sudden it's materialized into a
video for you, and then some other
enterprising company in the 3D printing
world says, "I can just manufacture that
for you as an n of one." I mean, just
amazing.
Star Trek replicators aren't 24th
century. They're now, 2025.
We are really bringing Star Trek to
today. That makes me so happy. I'm so
happy about that.
Why wait a few centuries?
Yeah, for sure. All right, so uh we're
going to wrap on the OpenAI Dev Day there,
but I'd like you to jump in here a
second. Um, you know, we're seeing a half
a trillion dollar valuation for OpenAI.
We're seeing OpenAI really working hard
to create multiple revenue flows, from
advertising, from selling products, in a
multitude of other areas. What are your
thoughts on OpenAI?
So I think that the core business of
OpenAI, in terms of the monthly ChatGPT
subscription, is going to come under
challenge, because we've seen this
breakthrough with DeepSeek, Grok 4 and
others where the cost per million tokens
has literally dropped 20 or 30 times, and so
the basic chat experience is not good
enough anymore. So it's almost like you see
these levels of AI that will fill in. The
chat experience is basically worth a
couple of bucks a year when you
calculate the cost now, down
from 200 bucks just over a year ago. So
now they have to think about agentic
workflows. They have to think about
economically valuable workflows and then
even beyond because the number of tokens
goes from 2,000 to 20,000 to 200,000 to
2 million. And so this is why when we
see Sora, they're doing likenesses and
they'll be doing advertising and more
because how do you have the cash flows
to justify that? Google and Meta both
have the advertising cash flows.
Mhm.
How do you monetize those 800 million
users? Either by delivering excess value
through your $20 a month subscriptions
or by having these new verticals because
your competitors are going to release
what was your key $20-a-month product
at the start of this year, ChatGPT,
for free, because that's how far and how
quickly token prices have dropped.
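Immad's claim about chat economics can be rough-numbered; every figure below is an illustrative assumption for the sake of the argument, not a quote from any provider's price list:

```python
# Illustrative sketch of the token-deflation argument; all figures assumed.
old_price_per_m = 10.00   # dollars per million tokens, assumed starting point
deflation = 25            # midpoint of the "20 or 30 times" drop cited
new_price_per_m = old_price_per_m / deflation

tokens_per_year_m = 5     # assume a heavy chat user burns ~5M tokens a year
old_cost = old_price_per_m * tokens_per_year_m  # ~$50/year to serve
new_cost = new_price_per_m * tokens_per_year_m  # ~$2/year to serve
print(f"serving cost: ${old_cost:.0f}/yr -> ${new_cost:.0f}/yr")
```

Under these assumed numbers, a $20/month subscription that once roughly tracked serving cost becomes almost pure margin, which is exactly the opening a competitor can give away for free.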
Yeah. I mean, we've talked about the
notion that OpenAI is really trying to
do a global land grab, right? Going into
India, going into the UK where you are,
Immad, going into Greece, going into other
locations. I mean, it's an interesting
battle between its land grab and then
you've also got all of the open-source
models coming out of China, uh, which are
going after a land grab as
well. Um,
well, you know, Paul Graham said of Sam
Altman, if you dropped him on an island
full of cannibals and came back a year
later, he would be running the island.
Uh, so I mean, he's got to be one
of the greatest business strategists of
all time. And so he's going after India.
He's going after a massive installed
base. He's got an 800 million user
installed base. He's going after the
rest of the world. And he's also going
after the data centers. And we'll see
that later in this podcast. So I think
he's narrowed in on the the two
foundational points of control in this
great battle are installed base of users
and massive amounts of compute. If you
control the end points, everything in
the middle will fill in. That's the way
the way I think he sees it.
This episode is brought to you by
Blitzy, autonomous software development
with infinite code context. Blitzy uses
thousands of specialized AI agents that
think for hours to understand enterprise
scale code bases with millions of lines
of code. Engineers start every
development sprint with the Blitzy
platform, bringing in their development
requirements. The Blitzy platform
provides a plan, then generates and
pre-compiles code for each task. Blitzy
delivers 80% or more of the development
work autonomously while providing a
guide for the final 20% of human
development work required to complete
the sprint. Enterprises are achieving a
5x engineering velocity increase when
incorporating Blitzy as their pre-IDE
development tool, pairing it with their
coding co-pilot of choice to bring an AI
native SDLC into their org. Ready to 5x
your engineering velocity? Visit
blitzy.com to schedule a demo and start
building with Blitzy today.
All right. Uh, I call this section
"meanwhile, in the continuing AI wars." Uh,
let's hit on a few others. Anthropic
nears superhuman computer use. And here
we're seeing a graphic of, uh,
performance as a percentage, coming
very close to human performance, and
we're seeing it basically over the last
year. Alex, do you want to kick us off
on this one?
Yeah. So maybe a comment first on what
the benchmark is. In the past on on this
podcast, I've beaten the drum for the
importance of benchmarks more broadly
for not just measuring progress, but
also accelerating progress. In this
case, the benchmark, OSWorld, for
operating system world, is a really lovely
benchmark that was initially developed
by Salesforce and colleagues. And it's a
benchmark that measures the ability of a
computer use agent, an AI that has
access to a keyboard, mouse, and
screenshots, to conduct
regular, everyday, economically important
tasks on Ubuntu Linux, Windows, and
macOS. Hundreds of different types of
tasks. And so what Anthropic is
demonstrating with this chart is,
probably, again by the law of straight
lines, perhaps by the end of this year, in
the next few months, we're going to see,
at least from Anthropic, putting aside
other frontier labs, superhuman
performance at the ability to control
computers for normal everyday tasks.
So Alex and Immad, I asked uh
Perplexity Comet, what does this
benchmark even mean? It's really vague,
and it came back with some complete
garbage answer. So hopefully you can
fill me in. Like, what are we
measuring here? We're measuring
the ability for an AI to literally
control a Windows-type interface with
mouse control and keyboard control and
perform a variety of tasks. Web browser
navigation that would require
it's the ultimate, you know, verbal
interface,
you know, do this for me
without the verb.
I don't need to know. I mean, I just set
up a new MacBook Pro, and getting all of
the settings back to where I wanted them
just ate up half a day of wasted
time.
Yeah. Yeah.
Yeah.
Peter, you want Jarvis? This is Jarvis,
albeit not in the physical world, but
for controlling your computer
across applications. This is a
pretty good benchmark for Jarvis for
computer use. There are companies that
are also setting up giant science
factories controlling, you know,
hundreds or thousands of experimental
devices, right? Where it's uh just
basically putting an AI layer on top of
all of them and running 24/7 dark
experiments to ferret out the
breakthroughs of science. Um, anyway, uh
Immad, you want to add to this? You know,
it's 360-odd tasks that take over your
computer. Out of the labs and things,
it's a different kind of reinforcement
learning environment. I think what this
is showing is that these generalist
models are getting good enough to do
most human standard tasks. And again,
these models have economies of scope. So
now we're seeing thinking machines and
others building RL environments so they
can plug into the real world even more
seamlessly. And I don't think anyone
believes that that line isn't going to
break through the human level. Again,
this is the takeoff point. And so when
they can control anything we can control
digitally and then physically,
then it's only a question of the number
and quality of tokens behind that. And
so again, this is the takeoff point.
This is why we're about to see
and what could possibly go wrong.
I would say what could possibly go right
and quite a bit can go right.
Yeah. Okay. Thank you for bringing me
back to the world of abundance, Alex. I
appreciate you.
Anytime.
All right. And our next
news item here is a major update to Grok
Imagine, uh, going from V 0.1 to V 0.9. I
love the numbering protocols,
everybody. And also uh Elon entering the
gaming world, or at least announcing it.
You know, Elon, if nothing else, is a gamer,
and the video gaming industry is massive,
you know, outweighing
Hollywood entertainment
by a long shot. Let's take a quick look
at a video clip. Uh, and the thing that's
important is Grok Imagine can generate
15-second clips. Uh, and their comment
is, "We're focusing on speed and fun."
All right. Well, let's take a look at
some speed and fun. Grok launched as a
truth-seeking AI.
And I love
that it says Grok launched as a
truth-seeking AI. And there you see
Elon as this, you know, medieval
emperor, you know, in battles. It's
like, okay, this is the truth we're
seeing. But I think even
just taking that line, truth-seeking,
and combining it with these models, I
completely buy the notion that video
as a first-class modality, when
incorporated into chains of thought, is
going to help us to discover the truth.
I think it's one of the key
modalities for understanding our
universe. Okay. What does that
mean, Alex? Dive in a little bit deeper,
please.
So, when you ask a question of ChatGPT
or some other frontier model, now post
GPT-5, there's thinking that goes on
usually under the hood. It thinks
internally in a sequence of tokens before it
produces a final answer. Right now,
almost all of that thought takes the
form of text tokens. But imagine a near
future where the agent, as it were,
is able to think not just in terms of
text, but in terms of video. It's able
to hallucinate a short video clip,
imagine, you know, basically visual
imagination, imagining things that you
can introspect as well. You can pop open
a little drop-down and see the little
videos that it's generating as part
of its chain of thought before it
answers your question. Video reasoning,
I think, is going to end up being a
killer app for how these video models
that right now are obviously largely
aimed towards entertainment end up
delivering transformative economic
value.
Amazing. You know, our occipital
cortex, our neocortex for visual image
understanding, processes much more data
than we can bring in through
language.
Yeah, so OpenAI did $4.3 billion in
revenue in the first half of the year.
The video game market did $200 billion in
revenue last year. So you can see, when we
think about gaming, when we think about
media, this is a massive market to go
after, and, you know, X and Elon are going to
go after it from a first-principles
basis. Whether or not the games will be
any good, that's a question. You know, I
think they'll probably be quite
addictive.
Um, and again, the scarce things in the
world: there's Bitcoin, there's my
financial coin, there's human attention.
The battle for human attention is the
next battle for revenue,
and everyone is basically drawing their
lines getting their GPUs ready for it
so I think we'll see this type of thing
from everyone and it's good for
consumers in many ways because the
quality bar will lift and the access
will expand.
All right, I we can go so deep into that
entire conversation, but before we exit,
uh meanwhile in other AI wars, I wanted
to play a quick clip and say a thank you
to one of our subscribers, CJ Truheart,
who heard our call for a Moonshot theme
song and proposed one. Not saying this
is it, but I was super impressed. Okay,
let's hear what he has to say or sing or
produce.
So, I recently heard you guys mention on
the last podcast that you were going to
create a Moonshot theme song. And
someone who's been using Suno for
two years, and uh especially since uh
the Moonshots podcast is my favorite as
far as AI and technology goes. Um, I
really appreciate you helping me be able
to understand what's happening and giving
me a perspective for my entrepreneurial
creative mind to best position myself.
I made you a customized um theme song for
the Moonshots.
[Music]
Where the moon shot me breaking through
the noise
just with a clarifying voice in a races
disruptions never clean. We'll show you
what it means. The story between
[Music]
tomorrow.
[Music]
Love that.
Thank you, CJ. I love it. I love the
fact that you pulled over to shoot the
shoot the video, too. That's just
awesome. Much appreciated.
Yeah. I just love our
subscribers. They're just, uh, they're
generous. They're intelligent. They're
creative. And just a shout-out to all of
you guys. Thank you. We love your
feedback, your input. We read it. We
consume it. All right. Uh, let's jump
into our next segment. Chips and data
centers. A lot going on there, but
probably the single most important news.
AMD and OpenAI announced strategic
partnership to deploy 6 gigawatts of AMD
GPUs. Uh Dave, let me go to you, buddy.
Yeah, AMD stock moved, what, 30% on the
news, which shows you Sam's ability to
morph the world, or warp the world, to his
uh perspective or whatever
he says. Massive, massive impact on a huge
public company. Uh, and uh, you know, it's
interesting. OpenAI is going to get 10%
ownership if they hit milestones, for
basically no price. Uh, and how often do
you get to negotiate a deal like that?
Unless you're the president of the
United States, in which case you can
negotiate all the time.
Yeah, I guess that's true. The reason
this is a serious win-win, though, is, um,
you know, AMD has capacity to
manufacture with TSMC,
and anyone can design inference-time
chips uh and sell them, but you
have to have the manufacturing capacity.
So Sam's going to grab that capacity via
AMD. Uh, I'm really curious on November
14th to look at Leopold's 13F filing,
you know, from the Situational Awareness
hedge fund, and see if he also bought AMD
and got that 30 or 40%.
He probably did.
Probably did. Yeah.
I mean, Dave, isn't this I mean, we
could have predicted this as well. At
the end of the day, you know,
talking about Intel uh and the
criticality of that capability, you could
have said the exact same thing about
AMD. Who else? I mean, there's Broadcom,
uh, there's uh, Micron. Which of these
other chip manufacturers are going to be
pulled into sort of this US-centric uh
chips-first strategy? Well, I'll tell
you what else. If you drill a layer
deeper underneath the chips, there's a
whole bunch of other material that will
get dragged into the vortex that no
one's quite realized yet. So if you
really want to see these 30 40 50% pops,
you look a layer deeper than just the
chip companies into the underlying, you
know, you've got, you know, silicon
boules, you've got glass, you've got, you
know, all this underlying manufacturing
infrastructure that's all just going to
get sucked into this same exact vortex.
And you know, a lot of those are public
companies and some of them are smaller,
too. So the the movement is much bigger.
Yeah. Immad, thoughts on this one?
I mean, he'd probably just be calling
everyone now and saying, "Hey, you want
to give me warrants? Your stock price
will go up, right? To all the
companies."
Can you imagine if he
ironically did this exact deal 50 times,
back to back? The amount of value that
would create. Oh my god.
Just all the SaaS companies. Come on,
partner up with me. Right. Here's the
Hey, if you want to do a deal with
EverQuote, I'm the chairman of that one.
Just give me a call. We'll do
this deal tomorrow. You're right. I do
wonder if he's using GPT-6 Pro to kind
of come up with these deals, but I mean,
it's massive.
If you look at the 10 gigawatts that they're
doing with Nvidia and the 6 gigawatts here,
it's about $50 billion of buildout per
gigawatt.
So it's about $800 billion of buildout,
like a trillion that they've already
got, I think. Probably more.
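Immad's buildout arithmetic, sketched out; the $50 billion per gigawatt figure is his rough estimate from the conversation, not a quoted contract number:

```python
# Data center buildout math from the conversation.
# $50B per gigawatt is a rough estimate, not a quoted contract figure.
nvidia_gw = 10            # announced Nvidia partnership, gigawatts
amd_gw = 6                # announced AMD partnership, gigawatts
cost_per_gw_b = 50        # assumed buildout cost, billions of dollars per GW

total_gw = nvidia_gw + amd_gw
total_b = total_gw * cost_per_gw_b
print(f"{total_gw} GW x ${cost_per_gw_b}B/GW = ${total_b}B of buildout")
```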
Amazing. And completely sold out. Sold
out years in advance.
Sold out years in advance. And, you know,
again, the only market that can sustain
this and create the revenue is if
they're going after
the entire, like, all software jobs
effectively. So I think in the next few
years you're going to see basically
OpenAI and others replicate the whole
Macrohard strategy of fully autonomous
workers. That is the product that they
will bring to the market, and they will
cost like $10,000, $20,000, $30,000, $100,000,
and that's the only thing I can see that
will fill this particular massive amount
of
Alex are you going to stick with your
efficient market hypothesis from uh two
podcasts ago or are you going to just
start tracking the tail number of Sam's
jet and seeing who he's meeting with
next? Well, one might imagine losing
sleep as a public market investor that
maybe the singularity, as it were, happens
in some private company where there's
indirect-at-best exposure via public
markets. Like, what happens if OpenAI
and, call it, the 10 other largest
privately traded companies suddenly have
an intelligence explosion and are worth
tens of trillions of dollars overnight?
As a public retail investor, that's
perhaps a suboptimal outcome. So I
would actually view this through a
very positive lens: that through
indexing, through exposure to AMD,
Intel, etc., this is now an enormous
jump in exposure to OpenAI, to the
extent that an intelligence explosion
happens there. Let me uh hit on a couple
of these uh related stories. So,
BlackRock is buying up to 78 data
centers totaling 5 gigawatts in a $40
billion deal. Um, and then we're also
seeing here Corning uh is poised to
dominate AI data centers with optics,
obviously fiber optics for
connecting everything. Uh, on these two
topics of Corning and BlackRock,
uh, let's get some commentary there.
Yeah, I mean, that's the war for the
downstream kind of elements here, right?
Like BlackRock coming in with that 40
billion. They're coming in at about three
times what the normal multiples are. Like,
you need to deploy capital, and this
feels like currently the best capital to
deploy. Downstream, Corning is kind of
optimal here. But I think something
like half of all GDP growth in the US
this year is AI,
which is insane.
I mean, comparing to where we were even
just a year ago or two years ago,
um,
it's an economic transformation story
for the US, at a minimum. The
company that BlackRock is purportedly
considering buying, many of the
campuses that they're converting to data
centers are brownfields, including,
according to public reporting, a
former coal plant in Ohio. This is what
economic transformation, industrial
economic transformation at scale, looks
like. And again, we're on a war footing.
We have to realize that we're in just
pre-World War II. We're converting
automotive plants into aircraft plants.
I mean, you know, Santa
Monica airport, where I fly out of, was
basically built out as a secret
manufacturing and uh airport hub. Uh, it's
happening, and it's
this is a reprogramming of the entire
industrial base. Yeah.
Yeah.
Yeah. And also uh it's another
investment theme, just for our
investment-oriented listeners. Uh, one of
our best and most prolific partners, Kush
Bavaria, is starting a new company with
Alex's help. Uh, that funnels money into
data centers. But it's part of a broader
theme of, if you know this is half the
GDP growth of the country and
accelerating, there's all this pent-up
capital all over the world that's not
investing in things like Corning. So if
you can create new conduits of the money
into all the implications, you know, so
Alex has been talking about photonics for
months now, and the leap from there to
saying, oh, Corning is going to benefit, is
not a huge leap. So then, you know, the
capital just needs to get into these
avenues to keep this engine humming. And
so, uh, you know, new entities, new funds,
you know, BlackRock is obviously very,
very smart money pouring into this area.
But then all the other implications, you
know, data centers in new
geographies and pumped hydro, and what
about the equipment for pumped hydro, and
solar installation costs, all those
things all are investment opportunities.
I mean, is this an
infinite sink? In other words, it's going
to attract as much money and
capabilities and resources. Uh, you
know,
is there any moment where the
music stops and uh there's not enough
chairs for everybody who's invested?
It's easy. It's super super easy to
calculate. Now, it's an infinite demand,
no doubt, but it's limited by chip fabs.
So, if it gets overbuilt or
overinvested, it's just purely because,
you know, too much of X for the number
of chips. But, you know, the upper bound
is based on the chip fabs. And you can
see those coming four years in advance.
Mhm.
And so, you know, it's all bottlenecked
at Intel, TSMC, and Samsung. So, from
there, you can do all the math in both
directions in terms of data centers and
users and everything.
Well, we my mental model
Go ahead. Go ahead, Alex. My mental
model continues to be that the music can
continue as long as the transformative
applications continue. As long as we're
driving the cost of the service economy
to zero, as long as transformative
discoveries and scientific inventions
pour out of these super intelligent
boxes, then the music can
continue. The data center buildout
can continue to the point of trillions
of dollars of capex. We just need the
transformation to continue and the
revenue generation that results from
that. And the transformations, you know,
optical like, okay, nothing was optical,
now it's all going to be optical.
Corning, huge beneficiary. Nothing was
liquid cooled. Now it's all going to be
liquid cooled. Jeff Markley told me
he bought a million valves. Why'd you
buy a million valves? It's like, well,
because if water starts leaking out of a
pipe, you need to isolate it quickly.
These are like, you know, $60,000 in a
single 1U. You can't have water
dripping on them. So, I need to But
there aren't enough valves in the world.
So, I bought them all.
And then there's the under there's the
underlying problem here of energy
production, right? We're about to see
energy begin to spike. We're seeing
certain communities that are voting
against opening up data centers because
they don't want to have it soak up all
the energy. And so, are we going to get
differential pricing, where data centers
are paying this much per kilowatt hour
versus homeowners paying a
different rate? Otherwise, we're going to have,
you know, communities basically blaming,
uh, you know, the AI tech bros for taking
their jobs and hiking up the cost of
electricity, and that does not bode well.
I think that the scenario where uh new
data center deployments continue to be
connected to the utility-scale grid is
probably implausible at this point.
There's simply too much demand
for co-located new energy output that
is completely off-grid. As long as the
regulatory environment continues to be
favorable, and it does continue to
be, I think it's more likely
we end up in a future that looks like
Colossus, where there are co-located
natural gas, and soon SMRs and fusion plants
in a few years, and all of that is by
default disconnected from the broader
utility-scale grid.
Yeah, I think that you had Jeff
Bezos, who is a pretty smart guy, saying
we will have gigawatt data centers in
space, and when you actually do the math,
it actually makes sense in a few years,
when you look at payload costs, you look
at chip costs, you look at, again, power
with solar. And that's just something
that's crazy, but again, it just shows the
demand for these things.
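The orbital data center math Immad alludes to can be sketched; every number here is a hypothetical placeholder (assumed future launch pricing and assumed solar array specific power), not a figure from the episode:

```python
# Hypothetical back-of-envelope for a gigawatt solar data center in orbit.
# Both inputs are illustrative assumptions only.
launch_cost_per_kg = 200   # dollars/kg, assumed future heavy-lift pricing
watts_per_kg = 1000        # assumed specific power of thin-film solar arrays
target_watts = 1e9         # one gigawatt of power

array_mass_kg = target_watts / watts_per_kg         # 1,000,000 kg of arrays
launch_cost = array_mass_kg * launch_cost_per_kg    # $200M to lift the power
print(f"~{array_mass_kg:,.0f} kg of arrays, ~${launch_cost/1e6:,.0f}M to launch")
```

The point of the sketch is only that, under aggressive launch-cost assumptions, the power side of the equation stops being the dominant expense.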
Yes, I know Eric Schmidt has a great deal
of interest in that vision as well. All
right. I'm going to move us on to our
last conversation topic for today,
keeping this WTF episode sharp and fun,
and that's on robotics and the release
of FSD 14.1.
So, uh, Elon has uh released, as promised,
uh, something that's got 10x more AI
parameters. I love this. Navigation and
routing are now handled by Tesla's
neural net. Uh, it can help you find
detours, unexpected obstacles. You know,
I've had my Tesla drive me into, uh, you
know, situations that it shouldn't
have gone into. Uh, robotaxi style upon
arrival. So, you can now select
precisely your arrival option: where you
want to park, in the street, in a garage,
at a curbside. In Elon's words, V14
feels alive. Um, of course, this is just
the prelude to the entire automation of
driving across every sector. Who wants
to jump in here?
I'll tell you one thing that's
new: uh, you know, with the big
screen and with FSD, you can watch the
podcast on screen safely, so you
don't have to have Peter describe every
video to you. Anyone?
Let us know if you're watching this
while driving.
Yeah, I think that this promises to be a
big jump over 13.2.9.
And I think it aspirationally also
represents the beginning of several
different forms of convergence. The
convergence between, obviously, robotaxi
tech stacks and human-driven or
supervised-driven autonomy tech stacks.
Less obviously, I would expect
we're going to see over
the next few years, maybe 2 to 3 years, a
sequence of subsequent
convergences. I would expect to see, for
example, the Optimus tech stack converge
with FSD, maybe in some future version.
And at the core, I think
what we're seeing is the
emergence of a vision language action
model, a VLA model, from Tesla that's just
end-to-end embodiment. It works in
cars. Hopefully, it works in Optimus
robots as well. I would expect to see
from all of the other major frontier
labs also singular, consolidated VLA
models that work across a variety of
different embodiments.
Amazing. Um, speaking about VLA models,
uh, out of Google, we're seeing the
next gen of physical agents, Gemini
Robotics-ER 1.5. I'll
play a little video. If you're watching
this on YouTube, you can see the model
is identifying everything on your desk.
And so it helps robots think through
complex real-world tasks. Uh, it reasons
like a human. The model outperforms GPT-5 on
embodied reasoning and pointing
accuracy. Uh, Immad, you want to comment
on this one?
Yeah, I mean, what are the brains of
robots? It's these joint vision language
models um that can basically think and
reason. And the crazy thing about this
is, like, to do this a couple of years
ago, you really needed to have very high
performance chips with a thousand watts of
electricity. If you look at how
efficient models like this are, you can
extrapolate out. You can see they're
actually going to be possible on edge
compute, which just opens up the
opportunity so much. And again, I think,
as Alex said, this is why you're
standardizing around specific stacks,
just like Dojo was stopped in favor of
the edge compute at X, for example. So,
I think we'll see these very specialist
chips and these very specialist models
for them with tremendous capabilities
that can then act as a basis to learn
any given task effectively.
So, I mean, everything becomes smart?
Everything understands its context, where
it is, and you can speak to anything and
have it understand what you mean. Alex,
where does this go?
It gets even better than that. Just
in line with what we were discussing
a few minutes ago about our living in
the sci-fi future, you can't make
this stuff up. The safety benchmark
for DeepMind's Gemini robotics VLA
model is named ASIMOV. Of
course it is. And it's a benchmark that's uh
semi-synthetic, but it's based on lots of
different vision/language/action
scenarios, uh, and the relative safety
thereof. And the beauty is, if you
actually go and read the ASIMOV paper,
the Gemini team at DeepMind are
benchmarking the safety of Isaac
Asimov's three laws of robotics against
better constitutions for safety of these
embodied robotic models. And it turns out
that there are in fact better
constitutions for constitutional AI that
one can come up with beyond those three
laws. But the very fact that we're now
at a point in our future history
where we're benchmarking the three laws
of robotics against other, better models,
it's amazing.
I love the group at
DeepMind and Google. Thank you for uh
for what you're doing.
Um
Well, Alex, this week we'll close our
investment in uh, I don't know if I
Yeah, Andy Systems. Who knows, who cares
if that leaks out? But, uh, incredible
company, uh, that, you know, picks
through all the recycling using this
exact technology you just saw in the
video, pulls out the precious and rare
and valuable metals, the rare earth
metals, uh, and then gets them back into
recycling for the next generation of
chips and computers, uh, right out of
Wall-E. I mean, it shows
you how this human paradise is possible,
where, you know, everything can be
cleaned, sorted, fixed, repaired using
this exact vision capability you saw in
that video.
Love it. Love it.
Alex, your your investment picks so far
are still 100%. So add this to the
no investment advice for me.
These are private. That's okay.
I'm going to show this video of
Tesla's Optimus learning kung fu, just
because it's so cool. Let's take a quick
look.
Now, if you're listening rather than
watching on YouTube, we just saw Optimus
with a kung fu sparring partner making
some impressive moves. And I've got to
imagine, just for it to actually be
impressive, that was not a human
controlling Optimus. That was its
uh AI model in the world. Anybody
have any countervailing evidence of
that? Elon has actually said, in
connection with this, that it was
autonomous. It was not teleoperated.
Fantastic. uh you know, we've seen we've
seen our friends from uh uh from Unitry,
you know, doing impressive work, but
Optimus towers over the the G1 from
Unitry. So, you know, it's we're not too
far from mech bots fighting in the ring
train by imitation learning. We're we're
we're so painfully close, I think, at
this point to unlocking physical labor
and solving physical labor. Remember,
in the services economy approximately
two-thirds of all of the service labor
ultimately is connected to some sort of
physical task. So think of how in the
future so many tasks that no human would
ever want to perform for which there
there aren't any jobs even can just be
automated. Yeah, I think the
fact that it's all neural network based
and imitation learning, that's a really
important point, because people who've
been working in robotics, you know, I
had dinner with the founder of iRobot,
and he's all cynical about, you know,
robots are slow, robots, whatever. It's
just not true, because it's all neural
network driven now. The pace of
development and the smoothness of
the movement and the dexterity is
going to skyrocket because it's all
neural net based.
Yeah. And so
I need to jump shortly, but some some
closing thoughts here, pal.
It's again the most exciting of sci-fi
times. To learn kung fu is going to be
like a couple of megabytes. And to do
any task, it's probably not going to be
more than another couple.
But models once again,
you're going to have to wait for the
Neuralink to get better for that. I
think again we're just at this tipping
point and the tipping point is in the
next like six months across just about
all of these.
Yeah,
Alex, you're
we're going to need these
capabilities for data center
construction. If we're going to achieve
250 gigawatts by the early 2030s, we may
not have the the human labor to to
accomplish that. So, as you know,
Peter, I'm always looking for what the
innermost loop of the tech tree is,
in this case mixing metaphors, and it
increasingly, to me at least, looks
like the innermost loop is going to look
something like a recursive
self-improvement of robots building data
centers training better robots.
Robots building robots first, that
are then building data centers, uh, that
are then putting out the you know
digital super intelligence to increase
the efficiency of the materials that the
robots are built out of and the
efficiency of the energy used to pump
into data centers. It's a hyper
exponential. I can feel the singularity
coming.
You're feeling the AGI.
I'm feeling the ASI. Oh my god.
Right.
Yeah. Dave, thoughts to close us out.
Well, my final thought, tomorrow's my
25th wedding anniversary,
and so when Mora sees this podcast,
she'll see I I bought two tickets to
Bermuda for the weekend, so we're gonna
spend a ton of money and
she should see that. It'll be concurrent
with the podcast. So, uh, go ahead and
open it.
Yeah. And my question ultimately is, as
we hit longevity, escape velocity, uh,
does till death do us part hold out for
hundreds of years? We're going to find
out.
That'd be awesome. That would be awesome.
Uh, so I went to Tiffany's and I
bought something. I swear it's made of
vibranium and set with infinity
stones, given the pricing of it, but dealing
with people in a physical store is the
worst form of torture for me that I can
possibly endure. Uh so that's the that's
the real gift.
My god, amazing.
Immad and Alex, uh, grateful for your
brilliance as always and uh see you guys
next time. Everybody listening, thank
you for subscribing. Thank you for being
part of our community. Uh, super pumped
at the speed of, uh, these
breakthroughs. I mean, I don't know how
you asymptotically approach infinity, but
we're going to watch it happen.
All right, take care, guys. Every week,
my team and I study the top 10
technology meta trends that will
transform industries over the decade
ahead. I cover trends ranging from
humanoid robotics, AGI, and quantum
computing to transport, energy,
longevity, and more. There's no fluff,
only the most important stuff that
matters, that impacts our lives, our
companies, and our careers. If you want
me to share these meta trends with you,
I write a newsletter twice a week,
sending it out as a short two-minute
read via email. And if you want to
discover the most important meta trends
10 years before anyone else, this
report's for you. Readers include
founders and CEOs from the world's most
disruptive companies and entrepreneurs
building the world's most disruptive
tech. It's not for you if you don't want
to be informed about what's coming, why
it matters, and how you can benefit from
it. To subscribe for free, go to
diamandis.com/metatrends
to gain access to the trends 10 years
before anyone else. All right, now back
to this episode.