Video URL: https://www.youtube.com/watch?v=BChxQHyFIOI


Hello, freak [ __ ]. Do you think, in our lifetimes or in our children's lifetimes, it's feasible that we figure out a way, in some way (I'm not endorsing taking people's money and giving it to other people), to eliminate poverty? Is that even possible? Is it ever going to be possible to completely eliminate poverty worldwide, and within a lifetime?

Well, I think we talked about this the last time, when we spoke about AI, but this is the implication of much of what we talked about here. If you imagine building the perfect labor-saving technology, a machine that can build any machine that can do any human labor, powered by sunlight, more or less for the cost of raw materials, then you're talking about the ultimate wealth-generation device. And now we're not just talking about blue-collar labor; we're talking about the kind of labor you and I do, artistic labor and scientific labor, a machine that just comes up with good ideas. We're talking about general artificial intelligence. In the right political and economic system, this would cancel any need for people to work to survive; there would be enough of everything to go around. And then the question would be: do we have the right political and economic system in which we could actually spread that wealth, or would we just find ourselves in some kind of horrendous arms race, and a situation of wealth inequality unlike any we've ever seen? It's not in place now. I mean, suppose someone just handed us this device, and all of my concerns about AI were gone: there's no question about this thing doing things we didn't want, it would do exactly what we want when we want it, and there's no danger of its interests becoming misaligned with our own. It's just a perfect oracle and a perfect designer of new technology. If it were handed to us now, I would expect just complete chaos. If Facebook built this thing tomorrow and announced it, or rumors spread that they

had built it, what are the implications for Russia and China? Insofar as they are as adversarial as they are now, it would be rational for them to just nuke California, because having this device is a winner-take-all scenario. You win the world if you have this device; you can turn the lights off in China the moment you have it. Many people may doubt whether such a thing is possible, but again, we're just talking about the implications of an intelligence that can make refinements to itself over a time course that bears no relationship to what we experience as apes. You're talking about a system that can make changes to its own source code and become better and better at learning, and more and more knowledgeable. If we give it access to the internet, it has instantaneous access to all human and machine knowledge, and it does thousands of years of work every day of our lives: thousands of years of equivalent human-level intellectual work. Our intuitions completely falter in trying to capture just how immensely powerful such a thing would be. And there's no reason to think this isn't possible. The most skeptical thing you can honestly say is that it isn't coming soon; to say that it is not possible makes no scientific sense at this point. There's no reason to think that a sufficiently advanced digital computer can't instantiate general intelligence of the sort that we have, because intelligence has to be, at bottom, some form of information processing. If we get the algorithms right, with enough hardware resources (and the limit is definitely not the hardware at this point, it's the algorithms), there's just no reason to think this can't take off and scale, and that we would find ourselves in the presence of something like an alternate human civilization in a box, making thousands of years of

progress every day. Just imagine that you had in a box the 10 smartest people who have ever lived, and every week they make 20,000 years of progress, because that is the actual picture: we're talking about electronic circuits being a million times faster than biological circuits. I believe I said this the last time we talked about AI, but this is what brings it home for me: even if it were just a matter of speed, nothing especially spooky, just something that can do human-level intellectual work a million times faster. And even that totally undersells the prospects of superintelligence; human-level intellectual work is going to seem pretty paltry in the end. But just imagine the speed-up. If we were doing this podcast, imagine how smart I would seem if between every sentence I actually had a year to figure out what I was going to say next. I say one sentence, you ask me a question, and then in my world I have a year; I'm going to spend the next year getting ready for Joe, and it's going to be perfect. And this compounds upon itself: not only am I working faster, but ultimately I can change my ability to work faster. We're talking about software that can change itself, something that becomes self-improving, so there's a compounding function there. The point is that it's unimaginable how much change this could effect. And imagine the best-case scenario, where this is under our control: there's no alignment problem, this thing never does anything that surprises us, it will always take direction from us, and it will never develop interests of its own (which is, again, the fear). Let's just say it's totally obedient, an oracle and a genie rolled into one. We say, "Cure Alzheimer's," and it cures Alzheimer's. Solve the protein-folding problem, and it's just off and running. Develop a perfect nanotechnology, and it does

that. This is all, again, going back to David Deutsch: there's no reason to think this isn't possible, because anything that's compatible with the laws of physics can be done, given the requisite knowledge. You get enough intelligence, and as long as you're not violating the laws of physics, you can do anything in that space. But the problem is that this is a winner-take-all scenario. Say Facebook does it tomorrow, and China and Russia find out about it. They can't afford to wait around to see whether the US decides to do something not entirely selfish with this, because their worst fears could be realized. If Donald Trump is president, what is Donald Trump going to do with a perfect AI when he has already told the world that he hates Islam? We would have to have a political and economic system that allowed us to absorb this ultimate wealth-producing technology. And again, this may all sound like pure sci-fi craziness to people. I don't think there is any reason to believe that it is, but walk way back from that edge of craziness and just look at dumb AI, narrow AI: self-driving cars, automation, intelligent algorithms that can do human-level work. That is already poised to change our world massively and create massive wealth inequality, and we have to figure out how to spread that wealth. What do you do when you can automate 50% of human labor?

Were you paying attention to the artificial intelligence Go match?

Yeah, yeah.

Explain it. I don't actually play Go, so I wasn't paying that kind of attention to it, but I'm aware of what happened there.

Do you know the rules of Go?

Not really; I actually don't play it. I know vaguely how it looks when a game is played. It's supposed to be very complicated, though, more complicated and with more possibilities than chess.
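The speed-up arithmetic used above can be checked in a few lines. Note that the million-fold factor is Harris's assumption about electronic versus biological circuits, not an established number; this sketch just confirms that a million-times speedup works out to roughly the "20,000 years per week" figure quoted in the conversation.

```python
# Sanity check: a mind running a million times faster than a human
# experiences roughly 20,000 subjective years per calendar week.

SPEEDUP = 1_000_000        # assumed speed ratio of electronic to biological circuits
WEEKS_PER_YEAR = 52.18     # 365.25 / 7

def subjective_years(wall_clock_weeks: float, speedup: float = SPEEDUP) -> float:
    """Subjective years of work experienced during a span of real-world weeks."""
    return wall_clock_weeks * speedup / WEEKS_PER_YEAR

print(round(subjective_years(1)))  # 19164: roughly the "20,000 years" figure
```

So one real-world week at a million-fold speedup is about 19,000 subjective years, which is where the round "20,000 years of progress every week" comes from.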
Oh yeah, and that's why it took 20 years longer for a computer to become the best player in the world. Did you see how the computer did it?

Well, I didn't, but I know the company that did it is DeepMind, which was acquired by Google, and they're at the cutting edge of AI research. The cartoons are unfortunately not so far from what is possible, but again, this is not general intelligence we're talking about; these are not machines that can even play tic-tac-toe right now. There have been some moves away from this, though. DeepMind has trained an algorithm to play all of the Atari games, from 1980 or whenever, and it very quickly became superhuman on most of them. I don't think it's superhuman on all of them yet, but it can play Space Invaders and Breakout and all these games that are highly unlike one another, and it's the same algorithm becoming expert and superhuman in all of them. That's a new paradigm, and it's using a technique called deep learning. That's been very exciting and will be incredibly useful. I know that everything I tend to say on this sounds scary, but the flip side is that the next scariest thing is not to do any of this stuff. We want intelligence; we want automation; we want to figure out how to solve problems that we can't yet solve. Intelligence is the best thing we've got, so we want more of it. But we have to have a system that can absorb it. It's scary that if you gave the best possible version of this to one research lab or to one government, it's not obvious that that wouldn't destroy humanity, that it wouldn't lead to massive dislocations where you'd have some trillionaire trumpeting his new device and 50% unemployment in the US within a month. It's not obvious how we would absorb this level of progress, and we definitely have to figure out how to do it. And of course we can't assume the best-case scenario.

Right, that's the best-case scenario. I think there are a few people who put it the way you put it, who terrify the [ __ ] out of people, and everyone else seems to have

this rosy vision of increased longevity and automated everything, everything fixed, easy commutes, medical procedures that are easier because they'll know how to do them better. Everybody looks at it like we are always going to be here. But are we obsolete? I mean, this idea of a living thing that's creative and wrapped up in emotions and lust and desires and jealousy and all the pettiness that we see celebrated all the time (we still see it; it's not getting any better, right?): what if this thing comes along and says, listen, you can abandon all that stupid [ __ ], all the stuff that makes you fun to be around (yeah, but it also [ __ ] with you), and you can live three times as long without it?

Well, I think in the best case it would usher in the possibility of a fundamentally creative life, on the order of something like the Matrix, whether it's literally in a Matrix or just in a world that has been made as beautiful as possible, based on what would functionally be an unlimited resource of intelligence: an ability to solve problems of a sort that we can't currently imagine. It really is like a blank spot on the map; you can indicate that it's over there, but you can't see into it. This is why it's called the Singularity. It was John von Neumann, the mathematician and inventor of game theory, who, along with Alan Turing and a couple of other people, was really responsible for the computer revolution, and he was the first person to use this term "singularity" to describe exactly this: that there is a speeding up of information-processing technology, and a cultural reliance upon it, beyond which we can't actually foresee the level of change that can come over our society. It's like an event horizon past which we can't see. And this certainly becomes true when you talk about these intelligent systems being able to

make changes to themselves. And again, we're talking mostly about software. The most important breakthroughs are certainly at the level of better software; in terms of the computing power of the physical hardware on Earth, that's not what's limiting our AI at the moment. It's not that we need more hardware. But we will get more hardware too, up to the limits of physics, and it will get smaller and smaller, as it has. And if quantum computing becomes possible, or practical (David Deutsch, the physicist I mentioned, is actually one of the fathers of the concept of quantum computing), that will open up a whole other extreme of computing power that is not at all analogous to the kinds of machines we have now. But people seem to always want to... I just had this conversation with Neil deGrasse Tyson on my podcast.

Name-dropper.

Right, no, I'm just attributing these ideas to him. He doesn't take this line at all; he thinks it's all [ __ ]. He's not at all worried about AI.

What does he think?

He's drawing an analogy from how we currently use computers: they just keep helping us do what we want to do. We decide what we want to do with computers, we add them to our process, that process becomes automated, and then we find new jobs somewhere else. You don't need a stenographer once you have voice-recognition technology, but that's not a problem; the stenographer will find something else to do, so the economic dislocation isn't that bad. Computers will just get better than they are, and eventually Siri will actually work: she'll answer your questions well, and it won't be a

laugh line what Siri said to you today, and all of this will just proceed to make life better. Now, none of that is imagining what it will be like when you have systems beyond a certain point. It's like chess: the best chess player on Earth is now always going to be a computer. There's not going to be a human born tomorrow who is better than the best computer; we already have superhuman chess players on Earth. Now imagine having computers that are superhuman at every relevant task, every intellectual task. The best physicist is a computer; the best medical diagnostician is a computer; the best prover of math theorems is a computer; the best engineer is a computer. There's no reason why we're not headed there. The only reason I could see that we're not headed there is that something massively dislocating happens that prevents us from continuing to improve our intelligent machines. But the moment you admit that intelligence is just a matter of information processing, and you admit that we will continue to improve our machines unless something heinous happens (because intelligence and automation are the most valuable things we have), then at a certain point, whether you think it's in 5 years or 500 years, we are going to find ourselves in the presence of superintelligent machines. And at that point, the best source of innovation for the next generation of software or hardware or both will be the machines themselves. That's where you get what the mathematician I. J. Good described as the intelligence explosion: the process can take off on its own. And this is where the singularity people are either hopeful or worried, because there's no guarantee that this process will remain aligned with our interests. Every person I meet who says they're not worried about this, even very smart people like Neil: when you actually drill down on why they're not worried, you find that they're actually not imagining machines making changes to their own source code, or they simply believe that this is so far away that we don't have to worry about

it now. And that's actually a non sequitur: to say that this is far away is not an argument that it isn't going to happen. And what is it based on? (Jamie, you want to find out where that is?) We don't know how long it will take us to prepare for this. If you knew it was going to take 50 years, is 50 years enough for us to prepare politically and economically to deal with the ramifications, to say nothing of actually building the AI safely, in a way that's aligned with our interests? I don't know. We've had the iPhone for what, ten years? Nine? 50 years is not a lot of time to deal with this, and there's just no reason to think it's that far away if we keep making progress. It would be amazing if it were 500 years away. From the sense I get from the people who are doing this work, it's far more likely to be 50 years than 500. The people who think this is a long, long way off say 50 to 100 years; no one who's actually close to this work says 500, as far as I know. And some people think it could be in 5 years. The people who are very close to this, like the DeepMind people, are astonished by what's happened in the last 10 years: we went from a place of very little progress to "wow, this is all of a sudden really, really interesting and powerful." And again, progress is compounding in a way that's counterintuitive. People systematically overestimate how much change can happen in a year and underestimate how much

change can happen in 10 years. And as far as estimating how much change can happen in 50 or 100 years, I don't know that anyone is good at that.

How could you be? With giant leaps come giant exponential leaps off those leaps, and it's almost impossible for us to really predict what we're going to be looking at 50 years from now. And I don't know what they're going to think about us; that's what's most bizarre about it. We really might be obsolete. Look at how ridiculous we are: look at this political campaign, look at what we pay attention to in the news, look at the things we really focus on. We're a strange, ridiculous animal. If we looked back on some strange dinosaur that had a weird neck, we'd ask why that [ __ ] thing made it. So why should we make it? We might be here to make that thing, and that thing takes over from here, with no emotion, no greed, just purely existing electronically, and for what reason?

Well, that's a little scary. There are computer scientists who, when you talk to them about why they're not worried, just swallow this pill without any qualm: we're going to make the thing that is far more powerful and beautiful and important than we are, and it doesn't matter what happens to us. That was our role; our role was to build these mechanical gods, and it's fine if they squash us. And I've literally heard someone give a talk along those lines. That's what woke me up to how interesting this area is. I went to a conference in San Juan about a year ago, and the people from DeepMind were there, the people who are very close to this work were there, and to hear some of the reasons why you shouldn't be worried, from people who were interested in calming the fears so they could get on with doing their very important work: it was amazing, because they were highly uncompelling reasons not to be worried.

So they had a desire to be compelled? They're not worried at all?

Well, no, people want to do this. There's a deep

assumption in many of these people that we can figure it out as we go along: we're just going to get closer, and even if it's only five years away, once we get something a little scary, then we'll pull the brakes and talk about it. But the problem is that everyone is essentially in a race condition by default. Google is racing against Facebook, the US is racing against China, and every group is racing against every other group, however you want to conceive of groups. To be the first one with incredibly powerful narrow AI is to be the next multi-billion-dollar company, so everyone is trying to get there. And if someone suddenly gets there and overshoots a little bit, and now they've got something like general intelligence, or something close, and they know everyone else is attempting to do this... We don't have a system set up where everyone can pull the brakes together and say: listen, we've got to stop racing here. We have to share everything, share the wealth, share the information; this truly has to be open source in every conceivable way, and we have to defuse this winner-take-all dynamic. I think we need something like a Manhattan Project to figure out how to do that: not to figure out how to build the AI, but to figure out how to build it in a way that does not create an arms race, that does not create an incentive to build unsafe AI (which is almost certainly going to be easier than building safe AI), and just to work out all of these issues. Because we're going to build this by default. We're just going to keep building more and more intelligent machines, and this is going to be done by everyone who can do it, and each generation, if we're even talking about generations, will have tools made by the prior generation that are more powerful than anyone imagined 100 years ago, and it just is

going to keep going like that.

Did anybody actually make that quote about giving birth to the mechanical gods?

No, that was just me. But there was a scientist who was actually thinking and saying that; that was the content of what he was saying: we're going to build the next species, which is far more important than we are, and that's a good thing. And actually, I can go there with him. The only caveat is: unless they're not conscious. The true horror for me is that we can build things more intelligent and more powerful than we are, things that can squash us, and they might be unconscious. The universe, or at least our corner of the universe, could go dark if they squash us, and yet these things would be immensely powerful. The jury's out on this, but if there's nothing about intelligence scaling that demands that consciousness come along for the ride, then it's possible to build superintelligence that is unconscious. Very few people would think our machines that are intelligent now are conscious, so at what point does consciousness come online? Maybe you can build something superpowerful that does everything better than we do, that recognizes your emotions better than another person can, but the lights aren't on. I think that's possible; maybe it isn't, but that is the worst-case scenario. Because the ethical silver lining (speaking outside of our self-interest now, just from a bird's-eye view) to building these mechanical gods, if they are conscious, is that we will in fact have built something far wiser than we are, something with far more beautiful and deeper experiences of the universe than we could ever imagine, and there would be something that it is like to be that thing; it would have a kind of godlike experience. That would be a very good thing, if you stand outside of our

narrow self-interest. I can understand why he would say that. What was scary about that particular talk was that he was assuming that consciousness comes along for the ride here, and I don't know that that is a safe assumption.

Well, the really terrifying thing is: this is constantly improving itself, and it's at the beck and call of a person. Either it's conscious, where it acts as itself, as an individual thinking unit, or it's a thing that isn't aware; either it is aware or it isn't. And if it isn't aware, some person can manipulate it. Imagine if it's getting... how many thousands of years in a week did you say?

Well, if it were just a million times faster than we are, it's 20,000 years in a week.

In a week. So with every week, this thing constantly gets better at even doing that, right? It's reprogramming itself, so it's all exponential, presumably.

Just imagine. You could keep it in the most restricted case, just at our level but a million times faster. But if it kept going, and every week was thousands of years... And we're going to control it? A person, a regular person? That's even more insane. Just imagine being in dialogue with something that has lived 20,000 years of human progress in a week. You come back on Monday and say, "Listen, that thing I told you to do last Monday, I want to change that up," and this thing has made 20,000 years of progress. And that's if it's in a condition of limited access: we're imagining this thing in a box, air-gapped from the internet, with no way to get out, and even that is an unstable situation. But just imagine this emerging in some way online, already out in the wild; say it's in a financial market. That, again, is what worries me most about this. And what is also interesting is our intuitions here: I think the primary intuition people have is no, no, no, that's just not possible, or not at all likely.
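The compounding described above (a system that is not only fast, but spends part of each week making itself faster) can be sketched as a toy model. The million-fold speedup and the 1%-per-week self-improvement rate are both illustrative assumptions, not predictions:

```python
# Toy model of compounding self-improvement: each week the system does
# (speedup / weeks-per-year) subjective years of work, then makes itself
# slightly faster for the following week.

def progress_after(weeks: int, speedup: float = 1_000_000.0,
                   improvement_per_week: float = 0.01) -> float:
    """Total subjective years of work done over `weeks`, if the system
    also makes itself `improvement_per_week` fraction faster each week."""
    total_years = 0.0
    for _ in range(weeks):
        total_years += speedup / 52.18       # this week's output in years
        speedup *= 1 + improvement_per_week  # self-improvement compounds
    return total_years

fixed = progress_after(52, improvement_per_week=0.0)  # fast but static
compounding = progress_after(52)                      # 1% faster each week
# Even a modest 1%-per-week self-improvement yields ~30% more total work
# over a year, and the gap widens without bound as the horizon grows.
```

The point of the sketch is only the shape of the curve: a static speedup gives linear accumulation, while any steady rate of self-improvement gives exponential accumulation, which is why the two scenarios feel so different in the conversation.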

But if you're going to think it's impossible, or even unlikely, you have to find something wrong with the claim that intelligence is just a matter of information processing. I don't know any scientific reason to doubt that claim at the moment, and there are very good reasons to believe it's true. And you have to doubt that we will continue to make progress in the design of intelligent machines. Once you grant those two things, all that's left is time: if intelligence is just information processing, and we are going to continue to build better and better information processors, then at a certain point we are going to build something that is superhuman. Whether it's in five years or fifty, it's the biggest change in human history that we can imagine. And I keep finding myself in the presence of people who seem, at least to my eye, to be refusing to imagine it. They're treating it like the Y2K bug, where it may or may not be an issue, like it's a hypothetical: either it's not going to happen, or it's going to be trivial. But if you don't have an argument for why this isn't going to happen, then you're left with: okay, what is it going to be like to have systems that are better than we are at everything in the intellectual space? And what will happen if that suddenly happens in one country and not in another? It has enormous implications, but it just sounds like science fiction.

You know, I don't know what's scarier: the idea that an artificial intelligence can emerge that's conscious, aware of itself, and acts to protect itself, or the idea that a regular person

like of today could be in control of essentially a God right because if this thing continues to get smarter and smarter with every week and more and more power and more and more potential more and more understanding thousands of years I mean it's just yeah this one person a regular person controlling that is almost more terrifying than creating a new life or or any group of people who don't have the the the total welfare of humanity as their Central concern so just imagine I mean what would what would China do with it now right what would we what would we do if we thought China you know if Buu or what or some Chinese company was on the verge of this thing um what would it be rational for us to do you know I mean if North Korea had it it would it' be rational to Nuke them given what they say about what you know their relationship with the rest of the world so it's um well that kind of power rational that kind of power is it's so lifechanging it's so Paradigm shifting right but if you to to wind this back to what someone like Neil degrass Tyson would say is that the only basis for fear is yeah don't give your super intelligent AI to the next Hitler right that's that's obviously bad but if we don't if we're not idiots and we just use it well we're fine and that I think is an intuition that is just that's just a failure to to unpack what is entailed by again something like an intelligence explosion a process that be once once you're you're talking about something that is able to change itself and you have to gu so what would it be like to guarantee let say we decide okay we're just not going to build anything that can make changes to its own source code you know any change to to to software at a certain point is going to have to be run through a human brain um and we're going to have veto power well is every person working on AI going to abide by that rule it's like we we've agreed not to clone humans right but you know we going to stand by that agreement for the the 
rest of human history? And is our agreement binding on China, or Singapore, or any other country that might think otherwise? It's a free-for-all, and at a certain point everyone is going to be close enough to making the final breakthrough that, unless we have

some agreement about how to proceed, someone is going to get there first.

That is a terrifying scenario of the future. You know, you mentioned this last time you were here, but not as extreme; this time you seem to be accelerating the rhetoric. Yeah, exactly. You're going deep. Boy, I hope you're wrong. I'm on team Neil deGrasse Tyson on this one. Go, Neil.

Well, in defense of the other side, I should say that David Deutsch also thinks I'm wrong, but he thinks I'm wrong because we will integrate ourselves with these machines, so that they'll be extensions of ourselves and can't help but be aligned with us, because we will be connected to them. That seems to be the only way we can all get along: we have to merge and become one. But I just think there's no deep reason why that works. Even if we decided to do that, in the US or in half the world, one, I think there are reasons to worry that even that could go haywire, and two, there's no guarantee that someone else couldn't just build AI in a box. If we can build AI such that we can merge our brains with it, someone can also just build AI in a box, and then you inherit all the other problems that people are saying we don't have to worry about.

If it was a good Coen brothers movie, it would be invented in the middle of the presidency of Donald Trump, and so that's when the AI would go live, and then the AI would have to challenge Donald Trump, and they would have, like, an insult contest. That's when this thing becomes so comically terrifying. Just imagine Donald Trump being in a position to make the final decisions on topics like this for the country that is almost certainly going to do this in the near term. It's like, "Should we have a Manhattan Project on this, Mr. President?" The idea that anything of value could be happening between his ears
on this topic, or a hundred others like it, I think is now really inconceivable. And so what price might we pay for that kind of inattention, that kind of self-satisfied inattention,

to these kinds of issues?

Well, if this is real, and if this could go live in 50 years, this is the issue. Yeah, unless we [ __ ] ourselves up beyond repair before then and shut the power off. If it keeps going, yeah, I think it is the issue. But unfortunately it's the issue that sounds like a goof. You sound like a crackpot even worrying about it; it sounds completely ridiculous. But that might be how it's sneaking in. Yeah.

Just imagine the tiny increment that would suddenly make it compelling. Chess doesn't do it, because chess is so far from any central human concern. But just imagine if your phone recognized your emotional state better than your best friend, or your wife, or anyone in your life, and it did it reliably, and it was your buddy, like that movie with Joaquin Phoenix, Her, where he falls in love with his phone. That is not that far off. It's a very discrete ability; you could do that without any other ability in the phone. It doesn't have to stand on the shoulders of any other kind of intelligence. You could do this with brute force, in the same way that you can have a great chess player that doesn't necessarily understand that it's playing chess: facial recognition of emotion, and tone-of-voice recognition of emotion. The idea that it's going to be a very long time before computers get better than people at that, I think, is very far-fetched.

Yeah, I think you're right. I was just thinking how strange it would be if you had headphones on and your phone was in your pocket, and you had rational conversations with your phone, like your phone knew
you better than you know yourself. Like, "I don't know what to do. I don't think I was out of line; she yelled at me. What should I say?" And it would listen to every one of your conversations with your friends and train up on that, and

just talk to you about it and go, "Listen, man, this is what you've got to do. You were being way too critical. You were sounding angry. You got defensive. Why were you so defensive? Apologize, relax, and let's all move on." You could accelerate it, or, "Okay, you're right, man." And you're talking to this little artificial intelligence. Maybe that's the first version of artificial intelligence that we accept: all right, let's give it a shot. Like a self-help guy in your phone, a personal trainer in your phone, telling you how to talk to girls, telling you everything: "Slow down, dude, slow down. You're talking too fast. You've got to act cool."

Literally giving you information like that would be step one. That would be like the Sony Walkman. Remember when you had a Walkman, a cassette player, like a VCR? Yeah, and we're on our way to what we have today, where you have [ __ ] 30,000 songs in your phone or something. I think I remember the first Walkman. Back when I skied there was something called Astral Tunes or something; it was like a car radio that you could put in a pack on your chest. Yeah, and if they kept coming out with those, they would get smaller and smaller, so then that little dude would start telling you, "Yo, man, listen, they keep replacing me every year. Just let them stick me in your brain. We'll be together all the time. I've been giving you good advice for years, bro. Let me in your brain." And so you and this little artificial intelligence have a relationship over time, and eventually it talks you into getting your head drilled, and they screw it in there, and your artificial intelligence is always powered by your central nervous system.

Have you seen most of these movies? Did you see Her? No, I didn't. And did you see Ex Machina? That was one of my top ten all-time favorite movies. I love that movie. Actually, I saw it twice. I
was slow to realize how well they did it. The first time I saw it, I wasn't as impressed, but I watched it again, and, I mean, first of all, the performance of, I forget the actress's name, Alicia Vikander, the woman who plays the

robot in Ex Machina, is just fantastic. Scary good; she could talk you into anything.

We're getting a little full on time. Yeah, what are we, like, five hours in? Four and a half hours in. But I've just got to know, this is about to fill up. Wait, how many hours? Four and a half hours; our computer is about to fill up. How dare we? We just did a four-and-a-half-hour podcast. Yeah, and we were ready to keep going, too. Jesus. Jamie didn't [ __ ]... You know what, man, you opened up that Pandora's box.

A small question about AI that I haven't heard you guys discuss yet, and I've looked it up: is there any sort of concept of, like, autism in AI? Like a spectrum of AI, where there are dumb AIs and there are going to be smart AIs? Oh yeah, so the scary thing, yes, it's like super-autism across the board. I think that superintelligence and motivation and goals are totally separate, so you could have a superintelligent machine that is purposed toward a goal that just seems completely absurd, harmful, and non-commonsensical. The example that Nick Bostrom uses in his book Superintelligence, which was a great book and did more to inform my thinking on this topic than any other source, is a paperclip maximizer. You could build a superintelligent paperclip maximizer. Not that anyone would do this, but the point is you could build a machine that was smarter than we are in every conceivable way, but all it wants to do is produce paperclips. Now, that seems counterintuitive, but when you dig deeply into this, there's no reason why you couldn't build a superhuman paperclip maximizer. It just wants to turn everything, literally the atoms in your body, into things better used as paperclips. The point he's making is that superintelligence could be very counterintuitive. It's not necessarily going to inherit everything we find commonsensical, or emotionally
appropriate, or wise, or desirable. It could be totally foreign, totally trivial in some way, focused on something that means nothing to us but means everything to it because of some quirk in how its motivation system is structured, and yet it can

build the perfect nanotechnology that will allow it to build more paperclips. At least, I don't think anyone can see why that's ruled out in advance. There's no reason why we would intentionally build that, but the fear is we might build something that is not perfectly aligned with our goals, our common sense, and our aspirations, and that it could form some kind of separate instrumental goals to get what it wants that are totally incompatible with life as we know it.

Again, the examples of this are always cartoonish. Elon Musk said, you know, if you built a superintelligent machine and you told it to reduce spam, it could just kill all people, and that's a great way to reduce spam. The reason that's laughable is that you can't assume the common sense will be there unless we've built it in; you have to have anticipated all of this. If you say, "Take me to the airport as fast as you can," again, this is Bostrom, and you have a superintelligent self-driving car, you'll get to the airport covered in vomit, because it's just going to go as fast as it can go. So our intuitions about what it would mean to be superintelligent, we have to correct for them, because I think our intuitions are bad.
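The spam and airport examples above are all instances of the same technical point, objective misspecification: an optimizer does exactly what its score function says, not what its designer meant, and every concern you leave out of the objective is a concern the optimizer is free to trample. A minimal sketch of that idea (all plan names and numbers here are invented for illustration, not from the conversation):

```python
# Toy illustration of objective misspecification, in the spirit of
# Bostrom's "take me to the airport as fast as you can" example.
# Plan names and numbers are made up purely for illustration.

# Candidate driving plans: (name, travel_minutes, passenger_discomfort)
plans = [
    ("reckless", 18, 9.0),   # fastest, but the passenger arrives covered in vomit
    ("brisk",    22, 2.0),
    ("gentle",   30, 0.5),
]

def misspecified_score(plan):
    # The stated objective: speed only. Nothing else we care about is encoded,
    # so discomfort is invisible to the optimizer.
    _, minutes, _ = plan
    return -minutes

def corrected_score(plan, comfort_weight=3.0):
    # The same objective plus the term the operator forgot to state.
    _, minutes, discomfort = plan
    return -(minutes + comfort_weight * discomfort)

naive = max(plans, key=misspecified_score)
careful = max(plans, key=corrected_score)

print(naive[0])    # the optimizer happily picks the degenerate "reckless" plan
print(careful[0])  # adding the omitted term shifts the optimum to "brisk"
```

The optimizer is not malicious in either case; it simply maximizes whatever it is given, which is why the unstated preferences have to be anticipated and written into the objective up front.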