Eric Schmidt: Google | MIT Artificial Intelligence (AI) Podcast

– The following is a
conversation with Eric Schmidt. He was the CEO of Google for 10 years and the chairman for six more, guiding the company through an incredible period of growth and a series of world-changing innovations. He is one of the most impactful leaders of the internet era and a powerful voice for
the promise of technology in our society. It was truly an honor to speak with him as part of the MIT course on
artificial general intelligence and the Artificial Intelligence podcast. And now, here’s my
conversation with Eric Schmidt. What was the first moment when you fell in love with technology? – I grew up in the 1960s as a boy, when every boy wanted to be an astronaut and part of the space program. So like everyone else of my age, we would go out to the cow
pasture behind my house, which was literally a cow pasture, and we would shoot model rockets off, and that I think is the beginning. And of course generationally today, it would be video games and
all of the amazing things that you can do online with computers. – [Lex] There’s a
transformative, inspiring aspect of science and math that maybe rockets would instill in individuals. You mentioned yesterday
that eighth grade math is where the journey through the
mathematical universe diverges for many people. It's this fork in the road. There's a professor of math at Berkeley, Edward Frenkel. I'm not sure if you're familiar with him. – I am. – [Lex] He has written this amazing book I recommend to everybody
called Love and Math. Two of my favorite words. (laughs) He says that if painting
was taught like math, then students would be
asked to paint a fence. It’s just his analogy of
essentially how math is taught. So you never get a chance to discover the beauty of the art of painting or the beauty of the art of math. So how, when, and where did
you discover that beauty? – I think what happens
with people like myself is that you’re math-enabled pretty early, and all of the sudden you discover that you can use that to
discover new insights. The great scientists
will all tell a story. The men and women who are fantastic today, it’s somewhere when they were
in high school or in college they discovered that they could discover something themselves. And that sense of building something, of having an impact that you own drives knowledge acquisition and learning. In my case, it was programming and the notion that I could build things that had not existed, that I had built, that had my name on them. And this was before open-source, but you could think of it as
open-source contributions. So today if I were a 16
or a 17-year-old boy, I’m sure that I would aspire
as a computer scientist to make a contribution
like the open-source heroes of the world today. That would be what would be driving me, and I would be trying and learning, and making mistakes and so
forth in the ways that it works. The repository that GitHub represents and that open-source libraries represent is an enormous bank of knowledge of all of the people who are doing that. And one of the lessons
that I learned at Google was that the world is a very big place, and there’s an awful lot of smart people. And an awful lot of
them are underutilized. So here’s an opportunity, for example, building parts or programs,
building new ideas, to contribute to the greater good of society. – [Lex] So in that moment in the 70's, the inspiring moment
where there was nothing and then you created
something through programming, that magical moment. So in 1975, I think, you
created a program called Lex, which I especially like
because my name is Lex. So thank you, thank you
for creating a brand that established a reputation
that’s long-lasting, reliable, and has a big impact on the
world and is still used today. So thank you for that. But more seriously, in that time, in the 70's, as an engineer, when personal computers were being born, did you think you would be able to predict the 80's, 90's, and the noughts
of where computers would go? – I’m sure I could not and
would not have gotten it right. I was the beneficiary of the great work of many many people who
saw it clearer than I did. With Lex, I worked with a
fellow named Michael Lesk who was my supervisor, and he essentially helped me architect and deliver a system
that’s still in use today. After that, I worked at Xerox
Palo Alto Research Center where the Alto was invented, and the Alto is the predecessor of the modern personal computer,
or Macintosh and so forth. And the Altos were very rare, and I had to drive an hour
from Berkeley to go use them, but I made a point of skipping classes and doing whatever it took to have access to this
extraordinary achievement. I knew that they were consequential. What I did not understand was scaling. I did not understand what would happen when you had 100 million
as opposed to 100. And so since then, having learned the benefit of scale, I always look for things which are going to scale to platforms, so mobile phones, Android,
all of those things. The world is enormous. There are many, many people in the world. People really have needs. They really will use these platforms, and you can build big
businesses on top of them. – [Lex] So it’s interesting, so when you see a piece of technology, now you think what will
this technology look like when it’s in the hands
of a billion people. – That’s right. So an example would be that the
market is so competitive now that if you can’t figure out a way for something to have a million
users or a billion users, it probably is not going to be successful because something else will
become the general platform and your idea will become a lost idea or a specialized service
with relatively few users. So it’s a path to generality. It’s a path to general platform use. It’s a path to broad applicability. Now there are plenty of good
businesses that are tiny, so luxury goods for example, but if you want to have
an impact at scale, you have to look for things
which are of common value, common pricing, common distribution, and solve common problems. They’re problems that everyone has. And by the way, people
have lots of problems. Information, medicine, health,
education, and so forth, work on those problems. – [Lex] Like you said, you’re a big fan of the middle class– – ‘Cause there’s so many of them. – [Lex] There’s so many of them. – By definition. – [Lex] So any product, any
thing that has a huge impact and improves their lives is
a great business decision, and it’s just good for society. – And there’s nothing
wrong with starting off in the high-end as long as you have a plan to get to the middle class. There's nothing wrong with starting with a specialized market in order to learn and to build and to fund things. So you start in the luxury market to build a general purpose market. But if you define yourself
as only a narrow market, someone else can come along
with a general purpose market that can push you into a corner, can restrict the scale of your operation, can force you to have a lesser impact than you might. So it's very important to think in terms of broad businesses and broad impact, even if you start in a
little corner somewhere. – [Lex] So as you look to the 70’s but also in the decades to
come and you saw computers, did you see them as tools, or was there a little
element of another entity? I remember a quote saying AI began with our dream to create the gods. Is there a feeling when
you wrote that program that you were creating another entity, giving life to something? – I wish I could say otherwise, but I simply found the
technology platforms so exciting. That’s what I was focused on. I think the majority of the
people that I’ve worked with, and there are a few exceptions,
Steve Jobs being an example, really saw this a great
technological play. I think relatively few of the
technical people understood the scale of its impact. So I used NCP, which is a predecessor to TCP/IP. It just made sense to connect things. We didn't think of it
in terms of the internet and then companies and then Facebook and then Twitter and then
politics and so forth. We never did that build. We didn’t have that vision. And I think most people, it’s a rare person who can
see compounding at scale. Most people can see, if you ask people to predict the future, they’ll give you an answer of six to nine months or 12 months because that’s about as
far as people can imagine. But there’s an old saying, which actually was attributed to a professor at MIT a long time ago, that we overestimate what
can be done in one year. We underestimate what
can be done in a decade. And there’s a great deal of evidence that these core platforms of hardware and software take a decade. So think about self-driving cars. Self-driving cars were
thought about in the 90’s. There were projects around them. The first DARPA Grand
Challenge was roughly 2004. So that’s roughly 15 years ago. And today we have
self-driving cars operating in a city in Arizona, so 15 years. And we still have a ways to go before they're more generally available. – [Lex] So you've spoken
about the importance, you just talked about
predicting into the future. You’ve spoken about the importance of thinking five years ahead and having a plan for those five years. – The way to say it is that almost everybody has a one-year plan. Almost no one has a proper five-year plan. And the key thing to have
on the five-year plan is having a model for
what's going to happen with the underlying platforms. So here's an example. Moore's law as we know it, the thing that powered the improvement in CPUs, has largely halted in its traditional shrinking mechanisms because the costs have just gotten so high and it's getting harder and harder. But there's plenty of
algorithmic improvements and specialized hardware improvements. So you need to understand the
nature of those improvements and where they’ll go
in order to understand how it will change the platform. In the area of network conductivity, what are the gains that are
to be possible in wireless? It looks like there’s
an enormous expansion of wireless conductivity
at many different bands and that we will primarily, historical I’ve always thought that we were primarily
going to be using fiber, but now it looks like
we’re going to be using fiber plus very powerful high bandwidth sort of short distance conductivity
to bridge the last mile. That’s an amazing achievement. If you know that, then you’re going to build
your systems differently. By the way, those networks have
different latency properties because they’re more symmetric. The algorithms feel
faster for that reason. – [Lex] And so when you think about, whether it’s fiber or just
technologies in general, so there’s this Barbara
Wootton poem or quote that I really like. It’s from the champions of the impossible, rather than the slaves of the possible, that evolution draws its creative force. So in predicting the next five years, I’d like to talk about the
impossible and the possible. – Well, and again, one of the
great things about humanity is that we produce dreamers. We literally have people who
have a vision and a dream. They are, if you will,
disagreeable in the sense that they disagree with what
the sort of zeitgeist is. They say there is another way. They have a belief. They have a vision. If you look at science, science is always marked by such people who went against some conventional wisdom, collected the knowledge at the time, and assembled it in a way that
produced a powerful platform. – [Lex] And you’ve been
amazingly honest about, in an inspiring way, about things you’ve been
wrong about predicting, and you’ve obviously been
right about a lot of things. But in this kind of tension, how do you balance as a company predicting the next five years
planning for the impossible, listening to those crazy dreamers, letting them run away and
make the impossible real, make it happen, and you know that’s how
programmers often think, and slowing things down and saying well this is the rational, this is the possible, the pragmatic, the dreamer
versus the pragmatist, that is. – So it's helpful to have a model which encourages a
predictable revenue stream as well as the ability to do new things. So in Google’s case, we’re big enough and well
enough managed and so forth that we have a pretty good sense
of what our revenue will be for the next year or two,
at least for a while. And so we have enough cash generation that we can make bets. And indeed, Google has become Alphabet, so the corporation is
organized around these bets. And these bets are in areas of fundamental importance to the world, whether it's artificial intelligence, medical technology, self-driving cars, connectivity through
balloons, on and on and on. And there’s more coming and more coming. So one way you could express this is that the current business
is successful enough that we have the luxury of making bets. And another one that you could say is that we have the wisdom
of being able to see that a corporate structure
needs to be created to enhance the likelihood of
the success of those bets. So we essentially turned
ourselves into a conglomerate of bets and then this
underlying corporation, Google, which is itself innovative. So in order to pull this off, you have to have a
bunch of belief systems, and one of them is that you have to have bottoms up and tops down. The bottoms up we call 20% time, and the idea is that people
can spend 20% of their time on whatever they want. And the top down is that
our founders in particular have a keen eye on technology, and they’re reviewing things constantly. So an example would be
they’ll hear about an idea or I’ll hear about something
and it sounds interesting. Let’s go visit them, and then let’s begin
to assemble the pieces to see if that’s possible. And if you do this long enough, you get pretty good at
predicting what’s likely to work. – [Lex] So that’s a beautiful
balance that’s struck. Is this something that
applies at all scales? – Seems to be. Sergey, again 15 years ago, came up with the concept that 10% of the budget should be on things that are unrelated. It was called 70/20/10: 70% of our time on core business, 20% on adjacent business,
and 10% on other. And he proved mathematically, of course he’s a brilliant mathematician, that you needed that 10% to
make the sum of the growth work. And it turns out that he was right. – [Lex] So getting into the world of artificial intelligence, you’ve talked quite
extensively and effectively about the impact in the near term, the positive impact of
artificial intelligence, especially machine learning in medical applications and education and just making information
more accessible. In the AI community,
there is a kind of debate. There’s this shroud of uncertainty as we face this new world
of artificial intelligence. And there are some people, like Elon Musk, you've disagreed with, at least in the degree of emphasis he places on the existential threat of AI. So I've spoken with Stuart
Russell, Max Tegmark, who share Elon Musk’s view, and Yoshua Bengio,
Steven Pinker who do not. And so there’s a lot of very smart people who are thinking about this stuff, disagreeing, which is
really healthy, of course. So what do you think is the healthiest way for the AI community to, and really for the general
public to think about AI and the concern of the technology being mismanaged in some kind of way? – So the source of education
for the general public has been killer robot movies
and Terminator, etcetera. And the one thing I can
assure you we’re not building are those kinds of solutions. Furthermore, if they were to show up, someone would notice and unplug them. So as exciting as those movies are, and they’re great movies, were the killer robots to start, we would find a way to stop them, so I’m not concerned about that. And much of this has to do with the timeframe of conversation. So you can imagine a
situation 100 years from now when the human brain is fully understood and the next generation and next generation of
brilliant MIT scientists have figured all this out, we’re gonna have a large
number of ethics questions around science and thinking and robots and computers and so forth and so on. So it depends on the
question of the timeframe. In the next five to 10 years, we’re not facing those questions. What we’re facing in the
next five to 10 years is how do we spread this
disruptive technology as broadly as possible to gain
the maximum benefit from it? The primary benefit should be in healthcare and in education. Healthcare because it's obvious. We're all the same even though
we somehow believe we’re not. As a medical matter, the fact that we have
big data about our health will save lives, allow us to deal with skin
cancer and other cancers, ophthalmological problems. There are people working
on psychological diseases and so forth using these techniques. I can go on and on. The promise of AI in
medicine is extraordinary. There are many many companies
and start-ups and funds and solutions and we will all
live much better for that. The same argument in education. Can you imagine that for each generation of child and even adult
you have a tutor educator. It’s AI based that’s not a human but is properly trained
that helps you get smarter, helps you address your
language difficulties or your math difficulties
or what have you. Why don't we focus on those two? The gains societally of
making humans smarter and healthier are enormous. And those translate for
decades and decades, and we’ll all benefit from them. There are people who are
working on AI safety, which is the issue that you’re describing, and there are conversations
in the community about what the rules should be, should there be such problems. Google, for example, has
announced its policies with respect to AI safety,
which I certainly support, and I think most everybody would support. And they make sense. So it helps guide the research. But the killer robots are
not arriving this year, and they’re not even being built. – [Lex] And on that line of thinking, you said the timescale. In this topic or other topics
have you found it useful, on the business side or
the intellectual side, to think beyond five to 10 years, to think 50 years out? Has it ever been useful or productive– – In our industry there
are essentially no examples of 50 year predictions
that have been correct. Let’s review AI. AI, which was partially
invented here at MIT and a couple of other
universities in 1956, 1957, 1958, the original claims were a decade or two. And when I was a PhD
student, I studied AI, and while I was looking at it, it entered a period which is known as the AI winter, which went on for about 30 years, a whole generation of scientists, a whole group of people, who didn't make a lot of progress because the algorithms had not improved and the computers had not improved. It took some brilliant mathematicians, starting with a fellow named Geoff Hinton at Toronto and Montreal, who basically invented
this deep learning model which empowers us today. The seminal work there was 20 years ago, and in the last 10 years
it’s become popularized. So think about the timeframes
for that level of discovery. It’s very hard to predict. Many people think that
we’ll be flying around in the equivalent of flying cars. Who knows? My own view, if I want
to go out on a limb, is to say we know a couple of things about 50 years from now. We know that there'll be more people alive. We know that we'll have to have platforms that are more sustainable because the earth is limited
in the ways we all know, and that the kind of platforms
that are gonna get built will be consistent with the
principles that I’ve described. They will be much more
empowering of individuals. They’ll be much more
sensitive to the ecology ’cause they have to be. They just have to be. I also think that humans are going to be a great deal smarter, and I think they’re gonna be a lot smarter because of the tools that
I've discussed with you, and of course people will live longer. Life extension is continuing apace. A baby born today has a reasonable
chance of living to 100, which is pretty exciting. It’s well past the 21st century, so we better take care of them. – [Lex] And you’ve mentioned
an interesting statistic that some very large percentage, 60% or 70%, of people may live in cities. – Today more than half
the world lives in cities, and one of the great stories of humanity in the last 20 years has been
the rural to urban migration. This has occurred in the United States. It’s occurred in Europe. It’s occurring in Asia, and
it’s occurring in Africa. When people move to cities,
the cities get more crowded, but believe it or not
their health gets better. Their productivity gets better. Their IQ and educational
capabilities improve. So it’s good news that
people are moving to cities, but we have to make them livable and safe. – [Lex] So first of all, you
are, but you've also worked with, some of the greatest leaders
in the history of tech. What insights do you draw from the difference in
leadership styles of yourself, Steve Jobs, Elon Musk, Larry Page, now the new CEO Sundar Pichai, and others, from, I would say, the calm sages to the mad geniuses? – One of the things that I
learned as a young executive is that there’s no single
formula for leadership. They try to teach one, but
that’s not how it really works. There are people who just
understand what they need to do and they need to do it quickly. Those people are often entrepreneurs. They just know, and they move fast. There are other people who are systems thinkers and planners. That’s more who I am,
somewhat more conservative, more thorough in execution, a little bit more risk-averse. There are also people who
are sort of slightly insane in the sense that they are
emphatic and charismatic and they feel it and they
drive it and so forth. There’s no single formula to success. There is one thing that unifies all of the people that you named, which is very high intelligence. At the end of the day, the
thing that characterizes all of them is that they saw
the world quicker, faster. They processed information faster. They didn’t necessarily make the right decisions all the time, but they were on top of it. And the other thing that’s interesting about all of those people is that they all started young. So think about Steve Jobs starting Apple roughly at 18 or 19. Think about Bill Gates
starting at roughly 20 or 21, Mark Zuckerberg a good example at 19 or 20. By the time they were 30, they had 10 years of experience of dealing with people and products and shipments and the press
and business and so forth. It’s incredible how
much experience they had compared to the rest of us
who are busy getting our PhDs. – [Lex] Yes, exactly. – So we should celebrate these people because they’ve just
had more life experience and that helps them form the judgment. At the end of the day, when you’re at the top
of these organizations, all of the easy questions
have been dealt with. How should we design the buildings? Where should we put the
colors on our products? What should the box look like? That’s why it’s so interesting
to be in these rooms. The problems that they face in terms of the way they operate, the way they deal with their
employees, their customers, their innovation are
profoundly challenging. Each of the companies is
demonstrably different culturally. They are not, in fact, cut from the same cloth. They behave differently based on input. Their internal cultures are different. Their compensation schemes are different. Their values are different. So there's proof that diversity works. – [Lex] So when faced
with a tough decision in need of advice, it’s been said that the
best thing one can do is to find the best person in the world who can give that advice and find a way to be in a room
with them one-on-one and ask. So here we are. And let me ask in a long-winded way. I wrote this down. In 1998, there were many
good search engines: Lycos, Excite, AltaVista, InfoSeek, Ask Jeeves maybe, Yahoo even. So Google stepped in and
disrupted everything. They disrupted the nature of search, the nature of our access to information, the way we discover new knowledge. So now it’s 2018, actually 20 years later. There are many good
personal AI assistants, including, of course,
the best from Google. So you’ve spoken in medical and education the impact of such an AI
assistant could bring. So we arrive at this question. So it’s a personal one for me, but I hope my situation
represents that of many others, as we said, the dreamers and the crazy engineers. So my whole life I've dreamed of creating such an AI assistant. Every step I've taken has
been towards that goal. Now I’m a research scientist in human-centered AI here at MIT. So the next step for me as
I sit here facing my passion is to do what Larry and Sergey did in ’98, the simple start-up. And so here’s my simple question. Given the low odds of success,
the timing and luck required, the countless other factors that can’t be controlled or predicted, which is all the things
that Larry and Sergey faced, is there some calculation, some strategy to follow in taking that step? Or do you simply follow the passion just because there's no other choice? – I think the people
who are in universities are always trying to study the extraordinarily chaotic nature of innovation and entrepreneurship. My answer is that they didn’t
have that conversation. They just did it. They sensed a moment when
in the case of Google, there was all of this data
that needed to be organized, and they had a better algorithm. They had invented a better way. So today, with human-centered AI, which is your area of research, there must be new approaches. It’s such a big field. There must be new approaches different from what we and others are doing. There must be start-ups to fund. There must be research projects to try. There must be graduate students
to work on new approaches. Here at MIT, there are people
who are looking at learning from the standpoint of
looking at child learning. How do children learn starting at age one and two–
– Josh Tenenbaum and others. – And the work is fantastic. Those approaches are different
most people are taking. Perhaps that’s a bet that you should make, or perhaps there’s another one. But at the end of the day, the successful entrepreneurs
are not as crazy as they sound. They see an opportunity
based on what’s happened. Let’s use Uber as an example. As Travis tells the story, he and his co-founder
were sitting in Paris, and they had this idea ’cause
they couldn’t get a cab. And they said we have smartphones,
and the rest is history. So what’s the equivalent of that Travis Eiffel
Tower "where is a cab?" moment that you could as an
entrepreneur take advantage of, whether it’s in human-centered
AI or something else? That’s the next great start-up. – [Lex] And the psychology of that moment. So when Sergey and Larry talk about, in listening to a few interviews, it’s very nonchalant. Well here’s a very fascinating web data, and here’s an algorithm we have. We just kind of want to
play around with that data, and it seems like that’s a really nice way to organize this data. – Well I should say
what happened, remember, is that they were graduate
students at Stanford, and they thought this was interesting. So they built a search engine and they kept it in their room. And they had to get power
from the room next door ’cause they were using too
much power in their room, so they ran an extension cord over. And then they went and found a house, and they had the Google world headquarters of five people to start the company. And they raised $100,000
from Andy Bechtolsheim, who was a founder of Sun, to do this, and from Dave Cheriton and a few others. The point is their
beginnings were very simple, but they were based on a powerful insight. That is a replicable
model for any start-up. It has to be a powerful insight, the beginnings are simple, and there has to be an innovation. In Larry and Sergey’s
case, it was PageRank, which was a brilliant idea, one of the most cited
papers in the world today. What's the next one? – [Lex] So you're one of the, if I may say, richest people in the world, and yet it seems that money
is simply a side effect of your passions and not an inherent goal. But you’re a fascinating person to ask. So much of our society
at the individual level and at the company level and as nations is driven by the desire for wealth. What do you think about this drive, and what have you learned about, if I may romanticize the notion, the meaning of life
having achieved success on so many dimensions? – There have been many
studies of human happiness, and above some threshold, which is typically relatively
low for this conversation, there’s no difference in
happiness about money. The happiness is correlated
with meaning and purpose, a sense of family, a sense of impact. So if you organize your life, assuming you have enough to get around and have a nice home and so forth, you’ll be far happier if you figure out what you care about and work on that. It’s often being in service to others. There’s a great deal of evidence
that people are happiest when they’re serving
others and not themselves. This goes directly against the sort of press-induced excitement about powerful and wealthy
leaders of the world, and indeed these are consequential people. But if you are in a situation where you’ve been very
fortunate as I have, you also have to take
that as a responsibility and you have to basically
work both to educate others and give them that opportunity but also use that wealth
to advance human society. In my case, I’m particularly interested in using the tools of
artificial intelligence and machine learning
to make society better. I've mentioned education. I've mentioned inequality and the middle class and things like this, all of
which are a passion of mine. It doesn’t matter what you do. It matters that you believe in it, that it’s important to you, and your life can be far more satisfying if you spend your life doing that. – [Lex] I think there’s
no better place to end than a discussion of the meaning of life. – Eric, thank you so much.
– Thank you very much, Lex.

100 thoughts on “Eric Schmidt: Google | MIT Artificial Intelligence (AI) Podcast

  1. That was one of the best yet. There were quotable quotes, deep moments, and a raft of great advice.

    Really good interview Lex.

    I really liked your shout out to Josh Tennebaum's work. There is really amazing work going on at MIT in AI.

  2. Pretty blown away by how truly modest and grounded Eric is. There's not even a tiny hint of self-aggrandizement or ego in the way he comports himself, despite how much he has achieved. Looks like he's living life with a really healthy mindset.

  3. I don’t like the “what moment did u X”. It perpetuates this one sudden moment culture that is false. Everything is a process, especially important realizations and skills.

  4. When i first listened to Eric schmidt i assumed he was just a ceo. I think it's because he's so great at communication that his tech background and sources of knowledge don't become front and center. He's a master of condensing huge amounts of information and I ttuly think every word he speaks is worth hearing. One of my heroes, thank you for Mr. Schmidt for having such developed and clearly worded points of view on so many things I didn't even know I wanted to hear about.

  5. Good interview, but I felt Eric was a little reluctant to speculate or philosophize too much even though Lex's questions were invitations to do so.  I also thought his "killer robot" references were somewhat disingenuous to the legitimate concerns coming from the AI safety community.  I guess if you are associated with a big company, there's no PR incentive to open the door to any conversation that can be taken out of context and used against your company's mission.Going forward, Lex, I think you should have all your interviews/conversations over drinks to loosen up the guests.

  6. I find Mr. Schmidt's refutation of the dangers of AI weak, because he mostly compares them to killer robots and doesn't address more serious arguments like "How do we know that we'll be in control if the AI is a new level of smart?"

  7. In 1901, Wilbur Wright wrote that he didn't think an airplane would be built in his lifetime…

  8. Lex, I don't know where this feeling comes from exactly (I'm not a scientist), but when I think about how humans reason, it seems as if our memory is compression-based. I think in order for an AI to reason more like a human, it needs to save algorithms in a compressed way and see only the highlights unless further unfolding is required, kind of like origami. Only when it simplifies an algorithm to its basic structure is it abstract enough to use in combination with other parts to build something entirely new, aka imagination, which leads to improvisation. So we need to go from an entirely linear processing of information to one that's parallel when it comes to building new algorithms.

    Think about what you do when you are faced with a new situation. You try to find where the known/old parts of the information stop and the new ones begin, then try to decipher whether this new information is made up of multiple old pieces, since everything is just that. To do this we use pattern recognition and scanning of our memory/database. We already have very good pattern recognition and memory storage; what we need now, in order to make computers more like an ideal human being, is the final part: the right compression algorithm that can transform other algorithms' shape/structure.

    I see current computers as having all the memories a human has, but with walls around every single one. There's zero connection between them, because none of the memories are used as part of another memory. When you think of the girl you love, you don't just see the first time you met her; you see all of the moments, and you also have the option to use memories of other places in which you can imagine her taking a spot. That's because we have the power to chain algorithms together into new ones in an abstract manner, not just strictly linearly. In order to make something less linear while still processing time linearly, one needs to cut out parts of the algorithms; the question is, based on what?

    I think humans do this in their sleep, probably during REM sleep. We scan our daytime memories, decide what is useful for future survival, and put the key aspects of algorithms/memories into long-term storage with the help of compression. How the compression algorithm(s?), pattern recognition, and memory work together is the holy grail; solve that and you understand the human mind completely. At least from my perspective. I have no mathematical backing for any of these claims; these are all just hunches from life experience and my limited understanding of the brain, AI, algorithms, etc. Hope this helps or inspires you or anyone in the field of human-like AI.

  9. I don't claim to be smarter than this man, but I have seen Russian killer robots firing guns and driving a jeep: robot soldiers.

  10. Go with Ben Go…, Gary Marcus, Jeff Dean, etc. (I would love it if you could talk with Sam Harris and Peter Thiel too.) There are also many talents, but they aren't celebrities. Also, if you can, get Demis.

  11. Damn, it's the best thing I have listened to this year. Thanks, man. By the way, what are your views on self-driving cars in India? (People there are unpredictable and traffic is excessive.)

  12. Nice guy, as in he's not a psychopath… the politics of expediency, kill me now. His legs are chained to his colossal Google shares and options. Would anyone shoot their own foot? What a load of baloney: dismissive, condescending, reacting to the question rather than making a real attempt. I don't buy it, pop. No offence to the fine and sophisticated audience of the channel.

  13. I tried three different ways to talk to anyone at M.I.T., either by email or directly. I have never got a response. My father taught at M.I.T. in the late 1930s. Radio engineering… He also got the football team started again around 1938 and became the coach for the team. I have 16mm films he shot of games, and extraordinary on-the-field footage of a massive tug of war with what seems to be hundreds of people. I'm a film guy, and I've never seen footage like this. I also have his albums from the period, which include team photos, newspaper articles, interviews with the press, and even a handmade play book. All this was mentioned in my contact attempts to M.I.T., as I thought they'd be interested in this material. I have never heard back from them. I live in San Francisco.

  14. I can't find the video, but I recall Elon indirectly hinting that Google is the only company he's concerned about when it comes to AI. In the Future of Life Institute video from last year, he said the biggest problems are 1) I/O bandwidth (for when AGI will be an issue) and 2) democratization; he emphasized it by rephrasing the Lord Acton quote: "Freedom consists of the distribution of power, and despotism in its concentration." Remember, it was the astronauts at Clavius, and their secrecy, that made HAL the way he was. You can't expect Heywood Floyd to tell you what's in the Tycho crater, especially with all he stands to gain from it. Anyway, it would be cool to hear interdisciplinary and/or esoteric AI opinions, especially those that intersect with biology, like the concept of Artificial Outelligence that Eric Weinstein presents here.

    Edit: Found it, ironically it was the top left recommendation on my home page.

  15. The over-emphasized equivalence of sensory awareness with the reflections of perception called consciousness tends to ignore or obscure the actual cause-effect mechanism, which operates much the same as a very complex analog computer/clock driving a time-duration resonance memory: universally, and in the subsystems, the mind-body context of humanity, individually and collectively. Altogether, it's Actual Intelligence operating on one Principle of constant creation/quantum measurements fitted in congruent probabilities.

    Whatever system extends those already begun by Google to correlate and access relevant knowledge and raw information, it is inevitably following the same principle by default as the evolutionary process that is the Observable Universe.
    The self-defining, self-regulating defensive immunity system from which the bio-logical Universe is constantly recycling is modeled by Defence/Health and Education Services. It is the obvious application of all forms of Intelligence.

    There's a world full of volunteers interested in improving Health and Education. Management of dissipating interference, when maintaining a healthy society, is the difficulty.

    All life is parasitism to some degree, so a hierarchical integration of time-delayed investment in succeeding generations, from whom a realistic return makes it possible to maintain and manage balanced continuity… it's complicated and messy.

  16. Eric explains things very clearly and makes them easy to understand, which reflects his clear mind and full understanding of what he is talking about.

    No wonder he was hired as Google CEO.

  17. All of this assumes you will have food to eat and water to drink. Once the philosophy of greed destroys biology, you are done. All hail technology.

  18. Lex,

    This was a very interesting and valuable interview. I am interested in how you got to be in the position you are in now.
    1. How did you start to be able to interview people like Eric Schmidt?
    2. What are your thoughts on blockchain?
    3. How important is identity in a person's process of creating?
    4. What would it take for you to be on my podcast?

  19. What was it that Carlos Slim's dad said about his version of Walmart stores in Mexico? "There are more poor people in the world; sell to poor people."

  20. What he really meant is that killer robots are not even being built and that they are not arriving this year, not the other way round.

  21. 21:33 Won't happen; in fact, lifespan will likely be worse than today. Between 1900 and 1960, if you account for infant mortality, lifespan in the USA only increased by six months.

  22. Amazing interview! I like how calmly he talks, and I like how highly he thinks of human beings! I hate having wasted my life till now.

  23. In the 1960s not every boy wanted to be an astronaut! (0:44) There were, of course, The Beatles, The Doors, Janis, Hendrix, the Rolling Stones… many wanted to be rock stars.

  24. Lex is looking for "that moment" that his startup idea is born. If all you ever do is search for the answer, you're never gonna find it.

  25. Great discussion, but why is it so short?
    There are so many things to discuss. It should be at least an hour.

  26. One thing about problems that are not yet our concern: such a problem could turn out to be unsolvable by the time it becomes a real concern.

  27. Great video, thanks for sharing! Nikola Tesla actually made a correct prediction in 1926 on the use of smartphones.

  28. I don't want to diminish anybody's hard work or talent, but most of the legendary technological individuals mentioned were years behind the obvious demand of the market. The resource-to-productivity ratio is pretty bad.

  29. Oh, come on! Killer robots, really?! Actually, drones (quadcopters) are becoming smaller and smaller, with longer and longer range. How are you going to "unplug" 1,000 of them if they are almost invisible until they attack? I'm sure he knows all of this; he's just saying it for the general public. But nobody (including Elon Musk) is really concerned about robots. Everybody is talking about an intangible piece of code which can improve itself continuously, and the moment it "enters the internet", it will become impossible to find/stop. And considering it's smarter than human beings, it will find a way to manipulate us, for example through fake news etc., to do many bad things if needed.

  30. The cuts in the video, with the adjustments after, were very off-putting. It's a decent interview; I just wish there weren't so many cuts.

  31. On "the great migration to cities…": look at all the homelessness, drug use, virtue signaling, loss of the values and culture that made the cities possible in all countries, and the basic lack of common sense. I am in no way convinced this is a net positive, but I could be wrong.

  32. He's so confident and smooth — but the artificial intelligence (once it becomes really intelligent) is likely going to fulfill the role of antichrist. And most people are going to believe in it and like it, just as they believe in and like the whole complex of science x technology x management that we have in technological society today.

  33. Oh my god, you’re going to look back at this podcast and cringe so hard, dude. The jokes, the long questions… the way chills are coursing through my body.

  34. Great interview, Lex. However, I think Eric was being slightly disingenuous regarding the Artificial General Intelligence (AGI) debate. The issue isn't necessarily a fear of Terminator-like robots in the streets; the matter is far more subtle. The question is: if our interests should become unaligned with the interests of the AGI, will that lead to unforeseen disastrous consequences like a disruption of markets, international conflict, etc.? Nick Bostrom's book 'Superintelligence: Paths, Dangers, Strategies' is a really good place to start in order to really unpack the issue properly.

  35. I respect you, Elon (sorry, Lex ;)), for bringing these amazing minds to your podcasts. And I love your passion. I have to tell you that you are my role model, Lex. 🙂

  36. He is not concerned about the killer robots because they will obey Google's will and ensure Google's world domination.

  37. Lex, you asked some questions that many interviewers would dance around and not ask. I really appreciate you asking those questions and the insight they brought. Thank you.

  38. Eric, thanks so much for your grand contribution to the fields of engineering and software. You are constantly finding ways to improve and make things better, computer- and technology-wise, and we the people love it. God has placed each of us here for a purpose, and you are surely fulfilling yours. Eric, God birthed the thoughts embedded in your head, which sparked your ideas coming to fruition to unleash what is now your business. Praise Him for that!!! God bless you, from your sister in Christ in Hammond, LA.

  39. Great interview. I like to stop and take a moment to appreciate what a privilege and delight it is to listen to these interviews with such amazing people. Thanks!

  40. Most people don't have enough money to get by, Eric. The guests on this podcast are all fantastically out of touch.
