Replacing Humans with AI is Going Horribly Wrong
Transcript
Hi, welcome to another episode of Cold Fusion. Since 2023, the fast food chain Taco Bell has introduced artificial intelligence at over 500 locations in the United States. The aim was to reduce mistakes and speed up orders. But in some cases, AI delivered just the opposite, like this frustrated customer.
>> And what will you drink with that?
>> Oh my.
>> I want a large mouth soup.
>> And your drink?
McDonald's drive-throughs also tried AI,
but decided to scrap it because it was
too unreliable. One person had bacon
added to their ice cream in error, and
another had hundreds of dollars worth of
chicken nuggets mistakenly added to
their order. As for Taco Bell, the
fiasco has caused them to rethink their
use of AI. Their chief technology
officer, Dane Mathews, told the Wall
Street Journal in regards to deploying
the voice AI system, quote, "Sometimes
it lets me down, but sometimes it really
surprises me." End quote. And what he
said is the very crux of consumer
generative AI today. It works most of
the time, but a few percent of the time,
it just gets things wrong. But where
does this all lead? A recent MIT report
found that after surveying 150 business
leaders and 350 employees, just 5% of
integrated AI pilots are extracting
millions in value, while the vast
majority remain stuck with no measurable
profit and loss impact. In other words,
AI implementation fails in 95% of the
cases. The market was spooked by these
findings and shares in Nvidia, the $4
trillion company whose chips power the
AI boom, dropped by 3.5%, while Palantir fell by 9% off the back of the news. It all sounds pretty heavy, but
this was just a reaction to the
headline. Because as you dig deeper into
what was actually said in the report,
the picture isn't as clear-cut as AI
simply failing everywhere. A few
episodes ago, I talked about the current
crisis with new graduate jobs. A large
part of that was the AI threat. It's
true and happening in some entry-level
jobs, especially in the creative fields.
I did caveat in that episode that
current AI systems still get a lot
wrong, but will improve in the future.
But today, let's explore that fact a bit
more deeply. What if generative consumer
AI continues to underperform in
businesses for years to come? In this
episode, we'll look at how AI is
actually performing once put into the
position of taking people's jobs. To
summarize the sentiment of this episode,
it's basically this. Consumer generative
AI will revolutionize global
productivity eventually, but as for now,
it's possibly in a bubble.
>> You are watching Cold Fusion TV.
>> So, first thing, let's lay it out
straight. AI has some legitimate use
cases and works well in non-critical
areas that don't require 100% precision: robots, live translation, and even some
prototype website builders like Lovable
are some examples. But the sentiment is
clear. Most people online find it
annoying. But that aside, there's a
fundamental problem with current
generative AI. Let me explain. You see,
everything we call artificial
intelligence today was built on the
findings of a 2017 paper from Google.
That paper provided a way for AI to
focus on different parts of an input
sequence simultaneously and determine
the relevance of each word to every
other word. This innovation, called the transformer neural network, allowed researchers, computer scientists, and later companies to change the world. But herein lies the fundamental problem with the transformer approach: it turns out that by determining the relevance of each word and predicting the next word, the model just makes stuff up. Essentially, it doesn't know
what it's saying. We call this problem
hallucinations, and it has real world
consequences.
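To make that idea a little more concrete, here is a minimal sketch of the scaled dot-product attention step described in that 2017 paper ("Attention Is All You Need"): each word's query is scored against every other word's key, and those scores decide how much each word's value contributes to the output. The matrices below are random toy numbers purely for illustration, not real embeddings from any actual model.

```python
# A toy sketch of the scaled dot-product attention step from the 2017 paper.
# The numbers here are random placeholders, not real word embeddings.
import numpy as np

def attention(Q, K, V):
    # Q, K, V: one row per word (query, key, value vectors of size d)
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # relevance of every word to every other word
    scores -= scores.max(axis=-1, keepdims=True)     # subtract the row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row of weights sums to 1
    return weights @ V                               # blend the values according to relevance

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))  # 3 toy "words", 4-dimensional vectors
print(attention(Q, K, V).shape)                         # (3, 4): one context-aware vector per word
```

In a real model, those context-aware vectors are then turned into a probability for every possible next word, and the model emits one of the highest-scoring words, whether or not it actually "knows" anything.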
[Music]
Picture this. You're a business that
decided to replace its staff with AI to
write up patient documents, fill in
patient information, summarize meetings,
or do some basic scheduling. But as time
goes on, to your horror, you realize
that the AI system makes up 10% of
everything that it delivers. But the
real problem is you don't know what it's
made up and you don't know what's
accurate. So you or your staff manually
have to go back and check everything.
This ends up just being extra work for
the remaining staff and ultimately is a
waste of time. It sounds stupid, but
this is exactly what's happening now.
The following are Reddit comments taken
from those in the workforce who had to
take the brunt of upper management
thinking that blindly implementing AI
was a good idea. Quote, "My company paid
for some AI scheduling software a few
months ago, thinking that it could free
up the accounts team so that they could
continue their hiring freeze in that
department. Now the accounts team is
having to do extra work making sure the
program isn't messing everything up. Not
to mention, our production team that
never had to worry about schedules is
double- and triple-checking everything.
They're finally scrapping it for some
basic scheduling software." Another user reports the same experience: "Products sold and demonstrated to be a lifesaver. Ultimately, the staff that used to do the work spend all their time making sure the AI doesn't screw up and helping train it. Also, working in a
medical/clinical setting, we really
don't trust the new file sorting and
labeling system. Names, date of birth,
insurance data has to be perfect. AI is
less than that. It fails at gathering
proper demographic information and
assigning relevant tasks. Sometimes it
thinks the doctor is the patient or
doesn't know where the document was
faxed from, etc. And yet another user
states that AI was good at taking notes
from Zoom meetings, but would make up 5
to 20% of the content, even when hand-fed
the transcripts of the meeting. Those
who looked over the AI generated summary
realized that some of what was written
wasn't even said.
All of these disasters make sense. LLMs predict the next word statistically, but they'll never tell you when they don't know the answer or can't understand something, as the toy sketch below illustrates. There are many such stories of company regrets after rushing into half-baked AI solutions. A report suggests that 55% of companies regret replacing people with AI.
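To see why an LLM never flags its own uncertainty, here is a toy decoding step with a made-up four-word vocabulary and made-up scores: however flat and uncertain the probabilities are, softmax plus greedy decoding still commits to a word, because there is no built-in "I don't know" output.

```python
# A toy decoding step: made-up vocabulary and made-up scores (logits),
# purely to illustrate that the model always commits to some word.
import numpy as np

vocab = ["Paris", "London", "Rome", "Berlin"]     # hypothetical next-word candidates
logits = np.array([0.30, 0.29, 0.28, 0.27])       # nearly uniform: the model is effectively guessing

probs = np.exp(logits) / np.exp(logits).sum()     # softmax over the whole vocabulary
answer = vocab[int(np.argmax(probs))]             # greedy decoding picks the top word regardless

print(dict(zip(vocab, probs.round(3))))           # roughly 25% each, no clear winner
print("model's answer:", answer)                  # still answers "Paris", never "I'm not sure"
```

Real systems sample from far larger vocabularies and sometimes bolt confidence heuristics on top, but the underlying step is the same: the model has to output something.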
Take this bank, for example, which fired staff to install an AI chatbot that was so bad they begged for their old humans back. And take the example of Klarna. Two years ago, they implemented a hiring freeze and began to replace their human staff with AI. By 2024, their headcount had fallen from 3,800 to 2,000. But lo and behold, their customers wanted to speak to actual humans. Klarna said that its AI chatbots perform the work of 800 employees, but the company admits that
their service quality and customer
satisfaction have dropped and they
lament that human interaction is still
needed.
On replacing people with AI, the publication Fortune notes, quote, "Not
only is it shortsighted, it's
fundamentally bad business. The
companies cutting people today in the
name of AI will be the ones playing
catch-up tomorrow. There's no doubt that
AI is excellent at doing more with less.
It speeds up processes, cuts down
repetitive work, and buys back time. But
AI on its own cannot create the next
generation of products and services."
End quote.
On Cold Fusion, we try to be thorough here, so it has to be said that all of these failures aren't the full
story. There are indeed companies that
are extremely successful at using
artificial intelligence.
[Music]
Now, even though the MIT paper said that 95% of companies who implemented generative AI failed in said implementation, the same MIT paper states, quote, "Some large companies' pilots and younger startups are really excelling with generative AI." They mean startups led by 19 or 20-year-olds, which, quote, "have seen revenues jump from zero to $20 million in a year. It's because they pick one pain point, execute it well, and partner smartly with companies who use their tools." End quote.
So, how companies adopt AI is crucial.
For example, purchasing AI tools from
specialized vendors and building
partnerships succeed 67% of the time,
while internal builds succeed only
one-third as often.
This goes to show that you can't just
slap AI everywhere and expect it to
work. It needs thought in its
implementation and also depends on the
specificity of the AI tools in question.
But in the grand scheme of things, it's
still early days for AI, and it would be
naive to think that it will stay the
same forever. All it would take is
another groundbreaking paper, a new
underlying neural network architecture
and everything could change again. This
could mean another giant leap forward
that none of us are expecting. But in
the meantime, it's uncomfortable
territory. So what happens?
>> Like the steam engine which sparked the
industrial revolution of the late 1700s,
the internet is changing everything it
touches. And at the cutting edge of the
revolution is Wall Street. In the dotcom
bubble during the mid '90s, everyone who simply put a .com at the end of their
company name saw massive valuations
because investors who didn't understand
the technology saw them as the future.
In reality, these companies had no solid
way of making a profit or didn't even
have a business plan. When the broader
market realized that it was all a
smoke screen, the sector crashed. Most dot-com companies went out of business
and only a handful made it out and are
giants today. So, let's compare that to
the AI wave of today. The long-awaited release of GPT-5 was a disappointment, and many users even thought the previous version was better, which caused OpenAI to scramble.
In addition, the company was also caught
blatantly fudging the performance
numbers in some very strange graphs.
Incremental rather than revolutionary
was the tone.
Soon, Meta would announce that it was downsizing its AI division, and this comes amid a growing chorus of analysts saying that AI is heading towards, if not already in, a bubble.
And next, we have the massive valuations and spending. Nvidia's H100s, the GPUs that power the artificial intelligence boom, cost about $30,000 to $40,000 each. Google has 26,000 of them, and they've managed to create AlphaFold, Gemini, Veo 3, and more. Meta, on the other hand, has 600,000 Nvidia H100s. And while they do have the open-source Llama LLM, it isn't discovering new science like Google's AlphaFold, despite 23 times the compute. And
to give you an idea of how extreme this
is all getting, AI itself has caused a
4% increase in electricity use in the
US. Morgan Stanley states that data
center investment will reach $3 trillion
over the next 3 years in preparation for
AI use. And of course, that's heavily
fueled by debt.
The belief is that AI will cut costs by
40% and that this should add $16 trillion to the S&P. But as we've just seen at the
beginning of this episode, according to
that MIT study, that could be very
unrealistic.
So, this could be the future if AI
doesn't improve massively soon. Number
one, business executives and business
owners will get frustrated with
hallucinations, useless solutions, bad
code, and a very poor return on
investment.
Number two, a lot of the AI gurus like Sam Altman will have to admit that
artificial general intelligence isn't
going to be achieved by LLMs, and
they're essentially a dead end. Number
three, the general populace begins to
get sick of LLMs, and we're already
seeing it. the absolute flood of AI
slop, the hallucinations, AI agreeing
with what users say, sometimes driving
them insane.
Number four, in seeing this, the venture
capital finally dries up and LLMs become
just too expensive to justify unless
there's a massive increase in
efficiency. The cost to run OpenAI's data centers, all the pipes and guts that keep AI running, is about $40 billion a year. Their revenues right now are only around $15 to 20
billion. And finally, number five, after
a long winter, new implementations come
around that truly live up to the hype
promised by the first wave.
So, just like the dot-com bubble, there are going to be a few winners that rise from the ashes. And if or when it crashes, from there we'll see the true artificial intelligence companies that last the distance.
So to finish off this episode, let me
end with this. I've talked about this
many times in my older episodes, but
take a look at this diagram. It's called
the Gartner hype cycle, and it describes the typical progression of new technologies as they make their way into society.
So where do you think we are? The
technology trigger, the peak of inflated
expectations, the trough of
disillusionment, the slope of
enlightenment, or the plateau of
productivity? Feel free to comment down
below. So where to from here? Well, Sam Altman and all the other AI leaders
should focus their efforts on fixing the
hallucinations.
Once again, a different neural network
architecture could be discovered, one
that fixes hallucinations.
Or perhaps it could just be fixed manually. Either way, it would usher in a new boom. But that all being said, that's the funny thing about AI: nobody knows the future. So, what do you guys think? Do
you think we're in a bubble and a crash
is imminent, or do you think the next AI
innovation is just around the corner?
Now, if you've been recently hired by a
company or just simply need to brush up
on your knowledge, today's sponsor is
perfect for you. Brilliant is the best
way to learn subjects like computer
science, data analysis, and AI, but in a
way that's interactive and hands-on.
Brilliant gives you puzzles and
step-by-step challenges where you learn
by doing. It's proven to be six times
more effective than watching lecture
videos. Their courses cover everything
from the fundamentals of neural networks
and probability to the mathematics that
underpins AI and even everyday topics
like algorithms and data security. And
because all their content is crafted by
experts from places like Stanford, MIT,
Caltech, Microsoft, Google, and more,
you can trust that what you're learning
is not just accurate, but also useful.
Learn at your own pace to brush up on a
project for work or just for your own
self-development. Whether it's leveling
up or learning new skills, I highly
recommend checking it out. To get
started for free, head to
brilliant.org/coldfusion
or scan the QR code on screen or click
the link in the description. Brilliant has also given our viewers 20% off an annual
premium subscription, which gives you
unlimited daily access to everything on
Brilliant. Hey guys, it's me. Hi. I'm
not an AI, so you're finally seeing my
face. But anyway, thanks for watching.
If you did enjoy it, there's plenty of
interesting stuff here on Cold Fusion,
so feel free to subscribe. Otherwise,
that's about it from me. My name is
Togo, and you've been watching Cold
Fusion. And I'll catch you again soon
for the next one.
>> Cold Fusion, it's new thinking.
[Music]