Video URL: https://www.youtube.com/watch?v=GmlrEgLGozw
"Artificial intelligence is superhuman. It is smarter than you are, and there's something inherently dangerous for the dumber party in that relationship. You just can't put the genie back in the bottle."

Sam Harris — neuroscientist, philosopher, author, podcaster — goes into intellectual territory where few others dare tread.

"Six years ago you did a TED Talk: the gains we make in artificial intelligence could ultimately destroy us. If your objective is to make humanity happy, and there was a button placed in front of you that would end artificial intelligence, what would you do?"

"Well, I would definitely pause it. The idea that we've lost the moment to decide whether to hook our most powerful AI to everything — it's already connected to the internet, it's got millions of people using it. And the idea that these things will stay aligned with us because we have built them — we gave them a capacity to rewrite their code; there's just no reason to believe that. And I worry about the near-term problem of what humans do with increasingly powerful AI, how it amplifies misinformation. Most of what's online could soon be fake. Can we hold a presidential election 18 months from now that we recognize as valid? Like, is it safe? And it just gets scarier and scarier. I worry we're just going to have to declare bankruptcy to the internet."

"If your intuition is correct, are you optimistic about our chances of survival?"

Before this episode starts, I have a small favor to ask from you. Two months ago, 74% of people that watched this channel didn't subscribe; we're now down to 69%. My goal is 50%. So if you've ever liked any of the videos we've posted, if you like this channel, can you do me a quick favor and hit the subscribe button? It helps this channel more than you know, and the bigger the channel gets, as you've seen, the bigger the guests get. Thank you, and enjoy this episode.

[Music]

So, Sam, six years ago you did a TED Talk. I watched that TED Talk a few times over the last week; it was called "Can we build AI without losing control over it?" In that TED Talk you really discussed the idea of whether AI, when it gets to a certain point of sentience and intelligence,
will wreak havoc on humanity. Six years later, where do you stand on it today? Are you optimistic about our chances of survival?

Yeah — I mean, I can't say I'm optimistic. I am worried about two species of problem here that are related. There's the near-term problem of just what humans do with increasingly powerful AI, and how it amplifies the problem of misinformation and disinformation and just makes it harder and harder to make sense of reality together. And then there's the longer-term concern about what's called alignment with artificial general intelligence, where we build AI that is truly general — and, by definition, superhuman in its competence and power — and then the question is: have we built it in such a way that it is aligned in a durable way with our interests?

There are some people who just don't see this problem; they're kind of blind to it. When I'm in the presence of someone who doesn't share this intuition, who doesn't resonate to it, I just don't understand what they're doing or not doing with their minds in that moment. Let's say I'm wrong about that — well, then the other person's right, and we just have fundamentally different intuitions about this particular point.

The point is this: if you're imagining building true artificial general intelligence that is superhuman — and that is what everyone, whatever their intuitions, purports to be imagining here; there are people on both sides of the alignment debate, people who think alignment is a real problem and people who think it's a total fiction, but virtually everyone who's party to this conversation agrees that we will ultimately build artificial general intelligence that will be superhuman in its capacities — there's very little you have to assume to be confident that we're going to do that. There are really just two assumptions. One is that intelligence is substrate-independent: it doesn't have to be made of meat; it can be made in silico. And we've already proven that with narrow AI. We obviously have intelligent machines — your calculator, your phone, is better than you are at arithmetic — it's just that that's some very narrow band of intelligence. So as we keep building intelligent machines, on the assumption that there's nothing magical about having a computer made of meat, the only other thing you have to assume is that we will keep doing this: we will keep making progress, and eventually we will be in the presence of something more intelligent than we are. That's not assuming Moore's law; it's not assuming exponential progress. We just have to keep going.

And when you look at the reasons why we wouldn't keep going, those are all just terrifying — because intelligence is so valuable, and we're so incentivized to have more of it, and every increment of it is valuable. It's not like it only gets valuable when you double it or 10x it; if you just get three more percent, that pays for itself. So we're going to keep doing this. Our failure to do it would suggest that something terrible had happened in the meantime: we've had a world war, we've had a global pandemic far worse than COVID, we got hit by an asteroid — something happened that prevented us as a species from continuing to make progress in building intelligent machines. Absent that, we're going to keep going, and we will eventually be in the presence of something smarter than we are.

And this is where intuitions divide. My intuition — and it's shared by many people; I know at least one who you've spoken to — is that there is something inherently
dangerous for the dumber party in that relationship — something inherently dangerous for the dumber species being in the presence of the smarter species. And we have seen this, based on our entanglement with all the other species dumber than we are, or certainly less competent than we are; so, reasoning by analogy, it would be true of something smarter than we are. People imagine that because we have built these machines, that is no longer true. But — and here's where my intuition goes from there — that imagination is born of not taking intelligence seriously. Because what a mismatch in intelligence is, in particular, is a fundamental lack of insight, on the part of the dumber party, into what the smarter party is doing, why it's doing it, and what it will do next.

You can see it by analogy: just imagine that dogs had invented us as their superintelligent AIs, for the purpose of making their lives better — securing resources for them, securing comfort for them, getting them medical attention. It's been working out pretty well for the dogs for about 10,000 years. There are some exceptions — we mistreat certain dogs — but generally speaking, for most dogs most of the time, humans have been a great invention. Now, it's true that the mismatch in our intelligence dictates a fundamental blindness with respect to what we've become in the meantime. We have all these instrumental goals and things we care about that they cannot possibly conceive. They know that when we go get the leash and say it's time for a walk, they understand that particular part of the language game. But everything else we do —
when we're talking to each other, when we're on our computers or on our phones — they don't have the dimmest idea of what we're up to. And the truth is, we love our dogs. We make just irrational sacrifices for our dogs; we prioritize their health over all kinds of things. That is just amazing to consider. And yet if there were a new global pandemic kicking off, and some xenovirus was jumping from dogs to humans, and it was just a super-Ebola — 90% lethal — and if this were a forced choice — what do you value more, the lives of your dogs or the lives of your kids? — if that's the situation we were in, it's totally conceivable, by no means impossible, that we would just kill all the dogs. And they would never know why. It's because we have this layer of mind and culture — the noosphere — this realm of mind that requires a requisite level of intelligence even to be party to, even to know exists, and they have no idea it exists.

Now, this is a fanciful analogy, because the dogs did not invent us. But evolution invented us. Evolution has coded us, as I said, to survive and spawn, and that's it. Evolution can't see everything else we've done with our time and attention, all the values we've formed in the meantime, and all the ways in which we have explicitly disavowed the program we've been given. Evolution gave us a program — but if we were really going to live by the letter of that program, what would we be doing? We would be having as many kids as possible. The guys would be going to sperm banks and donating their sperm and finding that the best use of their time and attention. The idea that you could have hundreds of kids for which you have no financial responsibility —
that should be the most rewarding thing you could possibly do with your time as a man. And yet that's obviously not what we do; there are people who decide not to have kids. And everything else we do — from having podcast conversations like this to curing diseases — yes, there are points of contact between those products and our evolved capacities; it's not magic; we are social primates that have leveraged certain ancient hardware to do new things. But evolution — the code we've been given — doesn't see any of that. We've not been optimized to build democracies. Evolution knows nothing about that; it can know nothing about it. If evolution were a coder, there's no democracy-maximization in that code. It's just not there.

So the idea that these things will stay aligned with us because we have built them — because we have this origin story that we gave them their initial code, and yet we gave them a capacity to rewrite their code and build future generations of themselves — there's just no reason to believe that. The mismatch in intelligence is intrinsically dangerous. And you can see this — Stuart Russell, I don't know if you've had him on the podcast, a great professor of computer science at Berkeley who co-wrote one of the most popular textbooks on AI — he has some arresting analogies which I think are good intuition pumps here. One is: just think of how you would feel if we got a communication from elsewhere in the galaxy, a message that we decoded, and it said, "People of Earth: we will arrive on your lowly planet in 50 years. Get ready." Anyone who thinks that we're going to get superintelligent AI in, let's say, 50 years thinks we're essentially in that situation — and yet we're not responding emotionally to it in the same way. If we received a communication from a species that we knew, by the sheer fact that they were communicating with us in this way, were more competent and more powerful and more intelligent than we are, and they were going to arrive — we would feel that we were on the threshold of the most momentous change in the history of our species. And most importantly, we would feel that this is an unavoidable relationship being foisted upon us. A new creature is coming into the room, with its own capacities, and now you're in relationship — and one thing is absolutely certain: it is smarter than you are. By what factor? Ultimately we're talking about so many orders of magnitude that our intuitions completely fail.

Even if it were just a difference in processing time — let's say there was no difference in actual native intelligence, just processing speed — a million-fold difference in processing speed is a phantasmagorical difference in capacity. Just imagine we had ten smart guys in a room over there, working and thinking and talking a million times faster than we are. They're no smarter than we are — just faster — and we talk to them once every two weeks, to catch up on what they're up to, what they want to do, and whether they still want to collaborate with us. Well, two weeks for us is 20,000 years of analogous progress for them. How could we possibly hope to constrain the opinions of, collaborate with, and negotiate with
people no smarter than ourselves who are making twenty thousand years of progress every time we make two weeks of progress? It's unimaginable. And yet there are many people who think this is just fiction — that all the noises I've made in the last five minutes are a new religion of fear, and there's no reason to think that alignment is even a potential problem.

If your intuition is correct — and the analogy of us getting a signal from outer space that someone is coming in 30 years... by the way, a lot of people that speak on this subject matter don't believe it's even going to be 30 years until we reach that singularity moment, artificial general intelligence; I've heard people like Elon say many fewer decades — 10 years, 15 years, 20 years, et cetera. If that is correct, then surely this is the most pressing challenge, conversation, issue of our time. And there's no logical reason that I can see to refute your intuition there. I can't see a logical reason the rate of progress won't continue; I don't see anything that will wipe out or pause our rate of progress.

Well — just to be charitable to the other side here — there are other assumptions that they smuggle in. Some do it without being aware of it, but some actually believe these assumptions, and this spells the difference on this particular intuition. It's possible to assume that the more intelligent you get, the more ethical you become, by definition. Now, we might draw a somewhat more equivocal picture from just the human case, where we see that there are some very smart people who aren't that ethical. But there are people — I've talked to at least a few who believe this — who assume that in the limit, as you push out far beyond human levels of intelligence, there's every reason to believe that all of the provincial, creaturely failures of human ethics will be left behind as well. The selfishness, the basis for conflict, the apish urges of status-seeking monkeys — that's just not going to be in the code, and as you push out into the omnibus genius of the coming AI, there's a kind of sainthood that's going to come along with it, and a wisdom that will come along with it.

I just think that's quite a gamble. I would take the other side of that bet, and I would frame it this way: there have to be many ways, in the space of all possible intelligences, to be beyond the human. Just as there are many different chess engines that are better than I am at chess — they're different from each other, but they're all better than me — there's got to be more than one way to have a superhuman artificial intelligence. Not an infinite number of ways, but a vast number. In the space of all possible minds, there are many locations beyond the human that are not aligned with human well-being; there have got to be more ways to build this unaligned than aligned. And what other people are smuggling into this conversation is the intuition that no, once you get beyond the human, you're just going to be in the presence of the Buddha who understands quantum mechanics and oncology and everything else. I see no reason to think that that's so. We could build something that is — again, taking intelligence seriously —
we're going to build something we're in relationship to that is really intelligent in all the ways that we're intelligent — it's just better at all of those things than we are. It's by definition superhuman, because the only way it wouldn't be superhuman — the only way it would be human-level, even for 15 minutes — is if we didn't let it improve itself, if we wanted to keep it stuck at, say, the level of a college undergraduate. But then we would have to dumb down all of the specific capacities we've already built, because every narrow AI we have is superhuman for the thing it does. It has access to all the information on the internet; it has perfect memory; it can perfectly copy itself; when one part of the system learns something, the rest of the system learns it, because it can just swap files. Again, your phone is a superhuman calculator — there's no reason to make it a calculator that is human-level. So we're never going to do that. We're never going to be in the presence of human-level AGI; we will immediately be in the presence of superhuman AGI. And then the question is how quickly it improves, and how much headroom there is to improve into — on the assumption that you can get quite a bit more intelligent than we are, that we're nowhere near the summit of possible intelligence. You have to imagine that you're going to be in the presence of something that could be completely unconscious. I'm not saying that there's something it's like to be this thing — although there might be, and that's a totally different problem worth worrying about. But whether conscious or not, it is solving problems, detecting problems, and improving its capacity to do all of that in ways that we can't possibly understand, and the products of its increasing competence are always being surfaced — because we've been using it to change the world. We've become reliant upon it; we built this thing for a reason.

One thing that's been amazing about the developments in recent months is that those of us who have been at all cognizant of the AI safety space — now going on a decade or more for some people — always assumed that as we got closer to the end zone, the labs would become more circumspect, and we'd be building this stuff air-gapped from the internet. We have this phrase, "air-gapped from the internet" — we thought this was a thing. This thing would be in a box, and then the question would be: do we let it out of the box and let it do something? Is it safe, and how do we know if it's safe? We thought we would have that moment. We thought it would happen in a lab at Google or at Facebook or somewhere; we thought we would hear, "Okay, we've got something really impressive, and now we just want it to touch the stock market, or we want it to touch our medical data, or we just want to see if we can use it." We're way past that. We've built this stuff already in the wild. It's already connected to the internet; it's already got millions of people using it; it already has APIs; it's already doing work. So from an AI safety point of view, that's amazing: we didn't even have the moment, the choice point, we thought was going to be so fraught.

Of course we didn't — because there were such pressing incentives for people to press forward regardless of that conversation.

Yeah, but everyone thought — I don't believe I was ever in conversation with someone like Eliezer Yudkowsky or Nick Bostrom or Stuart Russell who assumed we would be in this spot. I'd have to go back and look at those conversations, but there was so much time spent, it now seems quite unnecessarily, on this idea that we'd make a certain amount of progress and
circumspection would kick in — even the people who were doubters would become worried — and there would be, in the final yards, as we go across into the end zone, some mode where we could slow down and figure it out, and try to deal with the arms-race dynamics. Let's place a phone call to China and talk about this; we've got something interesting. But the stuff has already been built in connection to everything, there are already endless businesses being devised on the back of this thing, and all the improvements are going to get plowed into it.

So just imagine what this looks like even in success. Let's say it just starts working wonders for us, and we get these great productivity gains, and then we cross into whatever the singularity is, at whatever speed: we find ourselves in the presence of something that is truly general, after all of this narrow stuff — albeit superhuman narrow stuff — has become something we totally depend on. Every hospital requires it, every airplane requires it, all of our missile systems require it; this is just the way we do business. There is nothing to turn off at that point. I put this to Marc Andreessen on my podcast, and he said, yeah, you can turn off the internet. I can't believe he was quite serious. Yes, if you're North Korea, I guess you can turn off the internet for North Korea — and that's why North Korea is like North Korea. But the idea that we could — the cost of turning off the internet now would be unimaginable, in the economic cost alone.

So anyway — the idea that we've lost the moment to decide whether to hook our most powerful
AI to everything — because it's already being built more or less in contact with, if not everything, so many things that you just can't put the genie back in the bottle — that is genuinely surprising to me.

Given the incentives, is this not the most pressing problem, then? I was going to open this conversation by asking you about the thing that occupies your mind the most, the most important thing we should be talking about, and I in part assumed the answer would be artificial intelligence, because of the way you talk about your intuition on this subject matter. You've got children; you think about the future a lot. If you can see this species coming to Earth — even if it's in the next 100 years — it strikes me as the most pressing problem for humanity.

Well, as interesting and consequential as I think that problem is, I'm worried that life could become unlivable in the near term, before we even get there. I'm worried about the misuses of narrow AI in the meantime. Just take the current level of AI we have — GPT-4. I think within the next 12 months or two years — whatever GPT-5 is — we're going to be in the presence of something where most of what's online that purports to be information could be fake. Most of the text you find on any topic could just be fake. Someone has just decided: "Write me a thousand journal articles on why mRNA vaccines cause cancer, give me 150 citations, write them in the style of Nature and Nature Genetics and The Lancet and JAMA," and just put them out there. One teenager could do that in five minutes with the right AI. GPT-4 is not quite that, but GPT-5 possibly will be. That is such a near-term advance. Or when
you imagine knitting together the visual stuff — Midjourney and DALL·E and Stable Diffusion — with a large language model. Just imagine the tool — maybe this is 18 months away, maybe it's three years away, but it's not 30 years away — where you can just say: "Give me a 45-minute documentary on how the Holocaust never happened, filled with archival imagery; give me Hitler speaking in German with the appropriate translations; give it the style of Alex Gibney or Ken Burns — and give me 10,000 of those." All the friction for misinformation has been taken out of the system, and I worry we're just going to have to declare bankruptcy with respect to the internet. We're just not going to be able to figure out what's real.

And when you look at how hard that is now with social media, in the aftermath of COVID and Trump — the challenge of holding an election that most of the population agrees was valid is already on the verge of being insurmountable in the U.S. It's easy to see us failing at that, AI aside. Now add large language models, and the more competent future versions, where the most compelling deepfakes are indistinguishable from real data, and everyone is siloed into their tribes, stigmatizing the information that comes from any other tribe. And the internet is now so big a place that there really aren't the ordinary selection pressures whereby bad information gets successfully debunked so that it goes away. You can live in a conspiracy cult for the rest of your life if you want to; you can be on QAnon all day
long if you want to — and now we've got deepfakes shoring all that up, and spurious scientific articles shoring all that up. All of this becomes a more compelling form of psychosis, culturally speaking. So I'm just worried that it's going to get harder and harder for us to cooperate with one another and collaborate, and that our politics will just completely break — and that will offer an opportunity for lots of bad actors. And that's leaving aside cyberterrorism, and synthetic biology: the moment you turn AI loose on the prospect of engineering viruses, it potentiates all of that. The asymmetry here is that it's always easier to break things than to fix them, or to categorically prevent people from breaking them, and what we have with increasingly powerful technology is the ability for one person, or one small group of people, to create more and more damage. It just turns out it's hard enough to build a nuclear bomb that one person can't really do it, no matter how smart: you need a team, traditionally you've needed state actors, you need access to resources, you have to get the physical material. But this tech is being fully democratized. So yeah, I worry about the near-term chaos.

I've never found the near-term consequences of artificial intelligence to be that interesting until now. That image of the internet becoming unusable — that was a real eureka moment for me, because I've not been thinking about that.

Yeah — me too. I was just concerned about the AGI risk, and now, really in the aftermath of Trump and COVID, I see the risk of, if not losing everything,
losing a lot that matters, just based on our interacting with these very simple tools that are reliably misleading us. I'm amazed at what social media did to me — I'm amazed at what Twitter did to me. Even with all of my training, with my head screwed on reasonably straight — it's amazing to say it, but almost all of the truly bad things that have happened to me in the last decade, that really destabilized relationships and priorities, that became a kind of professional emergency, stuff I had to respond to in writing or on podcasts — it was on Twitter. My engagement with Twitter was the thing that produced the chaos, and it was completely unnecessary. It was amplifying a kind of signal that I felt compelled to pay attention to, because I was on it, trying to communicate with people on it, getting certain communication back, and it was giving me a picture of the rest of humanity which I now think was fundamentally misleading — but which was still consequential. Even believing, at a certain point, that it was misleading wasn't enough to inoculate me against the delusion, against the opinion change that was being forced upon me. I was feeling like these people are becoming unrecognizable. I know some of these people; I've had dinner with some of these people, and their behavior on Twitter was appearing so deranged to me, and in such bad faith. People who I know to be non-psychopaths were starting to behave like psychopaths, at least on Twitter — and I was becoming similarly unrecognizable to them. It all felt like a psychological experiment to which I hadn't consented, in which I had enrolled myself somehow, because it was what everyone was doing in 2009.
And I spent 12 years there, getting some signal and responding to it. It's not to say that it was all bad — I read a bunch of good articles that got linked there, and I discovered some interesting people — but the change in my life after I deleted my Twitter account was so enormous. It's embarrassing to admit it. It's like getting out of a bad relationship. It was a fundamental freedom from this chaos monster that was always there, ready to disrupt something based on its own dynamics.

And when did you delete it?

December — I think it was December.

I'm not someone that really takes sides on things; I like to try and remain in the middle, politically. So you must have had a very different Twitter experience than I was having.

No — I don't tweet anything other than this podcast trailer; I don't do anything else. Anything you'll see on my Twitter is the podcast trailer, that's it. And for all the reasons you've described — and more, interestingly. I wanted to say, in the last eight months, as someone that tries not to get caught up too much in the media narrative — "oh, Elon bought this" — it's a hundred percent gone in that direction. My timeline now — I say this to my friends all the time, and some of my friends, who I think are nuanced and balanced, have said it to me — there's something that's been turned up in the algorithm to increase engagement that has planted me in an unpleasant echo chamber that I didn't desire to be in, and if I wasn't somewhat conscious of it, I would a hundred percent be in there. My friend Castle tweeted the other day that he's never seen more people die on his Twitter timeline than he has in the last six months — they're prioritizing video, so you're seeing a lot of death in CCTV footage that I'd never seen before. And then the debate around gender, politics, right-leaning subject matter
has never been more right down your throat yeah
Because it's almost like something in the algorithm has been switched; it's now like people have been let out of the asylum. That's the best way I can describe it, and it's made me retract even more. So when Zuckerberg announced Threads a couple of weeks ago, it was kind of like a life raft out of the Titanic, and I really, really mean that. And I'm not someone to get easily caught up in narratives as they relate to social media platforms; it's been my industry for a decade. But what I've seen on Twitter has actually made me believe the hypothesis I had five years ago, where I thought the journey of social networking would give us way more social networks, and they'd be more siloed. I thought we'd have one for our neighborhood, one for our football club. And now I believe that even more than ever.
Yeah, that seems right. And whether it's possible to have a truly healthy social network that people want to be in, with a good reason to be there: I don't know if that's possible. I like to think it is, but I think there are certain things you have to clean up at the outset to make it possible. I think anonymity is a bad thing. I think its being free is probably a bad thing too; you sort of get what you pay for online. There might be ways to set it up so that it would be better, but I don't think it would be popular. The thing that makes it popular is the thing that makes it toxic.
Right. And even on the anonymity piece, I've played this out a couple of times in my mind, and the rebuttal I always get is: well, there are people in Syria who have important news to break, and they'd be hanged if they were identified, so we need an anonymous version of the social internet.
Right. Yeah, well, I guess there could be some exception there, but I don't know; it just doesn't interest me, because
I just feel such a different sense of my being in the world as a result of not paying attention to this online simulacrum of myself. Twitter was the only one I used. I've been on Facebook this whole time, and I guess I'm on Instagram too, but my team just uses those as marketing channels; it sounds like that's the way you use Twitter now. But Twitter was the one where I decided: okay, this is going to be me. I'm going to be posting here; if I've made a mistake, I want to hear about it. I wanted to use it as an actual basis for communication, and for the longest time it actually felt like a valid tool in that respect. Then it reached a crisis point, and I decided this was just pure toxicity: even the good stuff couldn't possibly make a dent in the bad stuff. So I just deleted it, and I was returned to the real world, where I actually live, and to books. I'm online all the time anyway, but consider the time course of reactivity when you don't have social media: you don't have a place to put the instantaneous hot take that you're tempted to put out into the world, because there's literally no place to put it. For me, if I have some reaction to something in the news, I have to decide whether it's worth talking about on my next podcast, which I might be recording four days from now. Rather often, people have been bloviating about the thing for four solid days before I ever get to the microphone, and then I get to think: is this still worth talking about? Almost nothing survives that test anymore; the conversation has moved on. So there's actually no place for me to just type the thing that takes me ten seconds and then rolls out there to detonate in the minds of my friends and enemies, to opposite effect, and then see the result of all of that on a reinforcement loop of every 15 minutes. Not having that is such a relief that I don't even know why I would go back. When Threads was announced... I think I'm on Threads too, but it's not me; it's just another marketing channel. I feel such relief not exercising that muscle anymore. I don't know how often I was checking Twitter, but I was not checking it just to see what was happening to me or what the response to the last thing I tweeted was. I was checking it a lot because it was my news feed: I'm following 200 smart people, they're telling me what they're paying attention to, and I'm fascinated; I want to see that next article or that next video. That engagement, and the endless opportunity to comment, to put my foot in my mouth or put my foot in someone else's mouth... not having that has been such a relief that I would be very cautious about reactivating it. It's not impossible, but I would be very cautious, because it was so much noise. It became an opportunity cost, and it became an endless opportunity for misunderstanding, especially misunderstanding of me and everything I've been putting out into the world, and then my sense that I had to react to it. And you can't plow that back in; that just becomes the basis for further misunderstanding. It constantly gave me the sense that there was something I needed to react to, on my podcast, in an article, on Twitter; that this was a valid signal; that this was a five-alarm fire; that you've got to
stop everything. Like, you're by the pool on the one vacation you're taking with your family that summer, and this thing just happened on your phone that can't wait. You actually have to pay attention, because the conversation is happening right now. So it was a kind of addiction to information and, at some level, reputation management. To just be free of it is such a relief. Apart from health issues with certain family members, virtually the only bad things that have happened to me have been a result of my engagement with Twitter over the last ten years. So I guess if I were a masochist I'd be back on Twitter, but that would be the only reason to do it.
On narrow AI: I asked you a question a second ago that I really wanted a solution to, because I'm mildly terrified. I completely believe the logic underneath your opinion that narrow AI will cause this destabilization and unusability of the internet. So, focusing just on narrow AI: what would you consider a solution to prevent us getting to a world where misinformation is rife to the point that it can destabilize society, politics, and culture?
Well, it's something I've been asking people about on my podcast, because it's not actually my wheelhouse, and I would need to hear from experts about what's technically possible here. But I'm imagining that, paradoxically or ironically, this could usher in a new kind of gatekeeping that we're going to rely on, because the provenance of information is going to be so important: the assurance that a video has not been manipulated, or isn't just a pure confection of deep fakery. So
it could be that we're meandering into a new period where you're not going to trust a photo unless it's coming from Getty Images, say, or the New York Times has some story about how they have verified every photo they put in their newspaper; they have a process. So if you see a video of Vladimir Putin seeming to say that he's declaring war on the U.S., I think most people are going to assume that's fake until proven otherwise. There's just going to be too much fake stuff, and it's all going to look so good, that the New York Times and every other organ of media we have relied upon, as imperfect as they've been of late, are going to have to figure out the tools whereby they can say: okay, this is actually a video of Putin. I'm not going to be able to figure it out on my own; the New York Times or CNN will need some process they go through before they say, okay, Putin really said this, and so we now have to react, because it's real. Whatever that process is, whether it's some kind of digital watermark connected to the blockchain, there's presumably some tech implementation of it that can be fully democratized, where just by being on the latest version of the Chrome browser you can differentiate real and fake videos. I don't know what the implementation will be, but I know we're going to get to a spot where we have to declare epistemological bankruptcy: we don't know what's real, and we have to assume anything especially lurid or agitating is fake until proven otherwise. That will be a resetting of something, and I don't know what we do with it in a world where we really don't have that much time to react to certain things. A video of Putin saying he's launched his missiles is something where, 30 minutes
from now, we would want to know whether it's real or not. And forget about everything we just said about AI; look at all of our legacy risks. Look at the risk of nuclear war. The risk of stumbling into a nuclear war by accident has been hanging over our heads for 70 years. We've got old tech; we've got wonky radar systems that throw up errors. We have moments in history where one Soviet sub-commander decided, based on his gut feeling, his common sense, that the data was almost certainly an error, and decided not to pass the seemingly obvious evidence of an American ICBM launch up the chain of command, knowing that the chain of command would say: okay, you have to fire. He reasoned that if the U.S. was going to attack the Soviet Union, they would launch more than, I think in this case, the four missiles that showed up in the radar signature. If the U.S. was going to launch a first strike against the Soviet Union (this was, I think, the mid-'80s), they were going to launch more than four missiles, so this had to be bad data. So if we automate all of this, will we automate it with systems that have that kind of common sense? We've been perched on the edge of the abyss based on that possibility. Forget about malevolent actors who might decide to start a nuclear war on purpose; we have the possibility of accidental nuclear war. You add this cacophony of misinformation and deepfakes to all of that, and it just gets scarier and scarier. And this is not even AGI; this is just narrow-AI-amplified misinformation.
How do you feel about it?
Well, this is the thing that worries me. I worry about the next election. If we can run the 2024 election in a way that most of America acknowledges was valid, that will be an amazing victory, whatever the outcome.
Obviously I would not be looking forward to a Trump presidency, but
I think even more fundamental than that is: can we hold a presidential election 18 months from now that we recognize as valid? I don't know what kind of resources are being spent on that particular problem, but it is hugely important, and I don't think our near-term experiments with AI are going to make it easier.
Why is it so important?
Well, if you think the maintenance of a valid democracy in the world's lone superpower is of minor importance, I'd like to drink the tea you're drinking.
Are you optimistic?
I can't say I'm optimistic. It's a paradoxical state, because I definitely tend to focus on what's wrong or might be wrong. I think I have a pessimistic bias: I tend to notice what's wrong as opposed to what's right. But I'm actually very happy. I have a very good life; I'm incredibly lucky; I'm surrounded by great people. It's all great, and yet I see all of these risks on the horizon. I have a very high degree of well-being at this moment in my life, and yet what's on the television is scary, so it's a very interesting juxtaposition. I'll be very relieved if we get through it. I feel like we're in a very weird spot. I haven't seen a full postmortem on the COVID pandemic that fully encapsulated what I think happened to us there, but my vague sense is that we didn't learn a whole hell of a lot. Basically, what we learned is that we're really bad at responding to this kind of thing. This was a challenge that
just fragmented us as a society. It could have brought us together; it didn't. It amplified all of the divisions in our society, politically and economically and tribally, in all kinds of ways. The role of misinformation and disinformation in all of that was all too clear, and I think it's just getting worse. So as a dress rehearsal for some future pandemic, which is inevitably going to come and could well be worse, I think we failed. And I have to hope that at some point our institutions will reconstitute themselves so as to be obviously trustworthy and engender the kind of trust we actually need to have in our institutions. We need a CDC that not only do we trust but that is trustworthy, that we're right to trust. And so it is with the FDA and every other institution that's relevant here. We don't quite have that, and half of our society thinks we don't have it at all. We have to rebuild trust in institutions somehow, and we have a lot of work to do even to figure out how to make an increment of progress on that score, because the siloing of large constituencies into alternate information universes is just not functional. That's so much of what social media has done to us, and alternative media too. You and I are podcasters, but I call it "Podcastistan": we have this landscape of, what is it now, a million-plus podcasts, plus email newsletters, and everyone has decided to curate their information diet in a way that's bespoke to them. You can stay there forever, getting one slice (it could be a completely fictional slice) of reality, and we're losing the ability to converge on a common picture of what's going on. So, did that sound optimistic? I
didn't
hear the optimism there.
You tell me.
No, I kind of can't refute anything you said on a logical basis. It all sounds like that is, unfortunately, the direction of travel we're going in. I have faith that there will be surprising positives; it always tends to be surprising positives that we didn't factor in.
It's easy to see. If there's any significant low-hanging fruit, technologically or scientifically, that could be AI-enabled for us: take, say, a cure for cancer, or a cure for Alzheimer's. Just having one thing like that would be such an enormous good. And that is why we can't get off this ride, and why there is no brake to pull: the value of intelligence is so enormous. It's not everything; there are other things we care about, and are right to care about, beyond intelligence. Love is not the same thing as intelligence. But intelligence is the thing that can safeguard everything you love. Even if you think the whole point of life is just to get on a beach with your friends and your family and hang out and enjoy the sunset, okay: you don't need superhuman intelligence to do any of that. You're fit to do it exactly as you are. You could have done it in the '70s, and it would have been just as good a beach with just as good friends. But every gain we make in intelligence is the thing that safeguards that opportunity for you and everyone else.
I feel like we've not defined the term "artificial general intelligence." From my understanding, it's when the intelligence can think and make decisions almost like a human.
Yeah, maybe. Loosely, this is a kind of semantic problem, because intelligence can mean many things, but
loosely speaking, it is the ability to solve problems, meet goals, and make decisions in response to a changing environment, in response to data. And the "general" aspect is the ability to do that across many different situations, all the sorts of situations we encounter as people, and to have one's capacity in one area not degrade capacity in another: as I get better at deciding whether or not this is a cup, I don't magically get worse at deciding whether you just said a word. I can do multiple things in multiple channels. That's not something we had in our artificial systems for the longest time, because everything was bespoke to the task. We'd build a chess engine and it couldn't even play tic-tac-toe; all it could do was play chess. And we'd just get better and better in these piecemeal, narrow ways. Then things began to change a few years ago, when DeepMind had algorithms where the same algorithm, with slightly different tuning, could play Go, or could solve a protein-folding problem, as opposed to just playing chess. It became the best in the world at chess, and it became the best in the world at Go. And amazingly, take what AlphaZero did: before AlphaZero, all the chess engines had all of our chess knowledge plowed into them. They had studied every human game of chess; each was a bespoke chess engine. AlphaZero just played itself, I think for something like four hours. It just had the rules of chess, and then it played itself, and it became better not merely than every person who has ever played the game; it became better than all the chess engines that had all of our chess knowledge plowed into them. So it's a
fundamentally new moment in how you build an intelligent system, and it promises this possibility, again this inevitability, the moment you admit two things: that it can be done in silico, and that we will just keep going unless a catastrophe happens. Those two things are so easy to admit that at this point I don't see any place to stand where you're not forced to admit them. I don't see any neuroscientific or cognitive-scientific argument for substrate dependence for intelligence, given what we've already built. And we're going to keep going until something stops us; we'll hit some immovable object that prevents us from releasing the next iPhone, but otherwise we're going to keep going. And then, whatever "general" will mean in that first case, there will be a case where we've built a system that is so good at everything we care about that it is functionally general. Maybe it's missing something; maybe it's missing something we don't even have a name for. There are all kinds of possible intelligences that we haven't even thought about, because we just haven't thought about them. There are undoubtedly ways of sectioning the universe that we can't even conceive of, because we have the minds we have.
Elon was asked a question on this by a journalist. The journalist said to him: in a world where you believe that to be true, that artificial general intelligence is around the corner, when your kids come to you and say, "Daddy, what should I do with my life? Define purpose and meaning," what advice do you now give them? If you hold that intuition to be true, that it's around the corner, what do you say to your children when they ask what they should do with their life to create purpose and meaning?
And you say that Elon answered this question?
Yeah.
What did
he say?
It's one of the most chilling moments in an interview I think I've seen in recent times, because he stutters, he goes silent for about 15 seconds, which is very un-Elon, and he stutters a bit more, because he can't answer. And then he says he thinks he's living in suspended disbelief: because if he really thought about it too much, what's the point? He says: what's the point of me building all these cars (he was in his Tesla factory), what's the point of me building all these cars? So I think I have to live, in his words, in suspended disbelief.
Well, I would encourage him to ask what's the point of spending so much time on Twitter, because he could clearly benefit from rethinking that. But that aside, my answer, and I think other people have echoed this of late (it's sort of surprising to me), is that this begins to privilege a return to the humanities as a kind of core, the intellectual center of mass for us. Because when you look at what we're really good at, it's among the last things that can be plausibly automated, and if we automate it, we may cease to care about it. Learning to write good code is something that is being automated now. I'm not a programmer, but I have it on good authority that these large language models are already improving code, and something like half the time they're writing better code than people. That's all going to become like chess; it's just going to be better than people, ultimately. So being a software engineer, or being a radiologist: it's easy to see how AI just cancels those professions, or at least makes one person so effective at using AI tools
that one person can do the work of a hundred people, so you've got 99 people who don't have to be doing that job. But creating art, writing novels, being a philosopher, talking about what it means to live a good life and how to do it: we have to look at where we're going to care that we're actually in relationship to, and in dialogue with, another person whom we know to be conscious. Where we don't care about that, we're just going to want the best version of the thing. I don't care if the cure for cancer comes from an insentient AI; I do not give a damn, I just want the cure for cancer. There's no added value in finding out that the person who gave me the cure really felt good about it, that he had tears in his eyes when he figured it out. Every engineering problem is like that: we want safer planes; we just want things to work. We're not sentimental about the artistry that went into all of it. And when the gulf between the best and the mediocre gets big and consequential, we're just going to want the best, all the way down the line. But what is the best novel? What is the best podcast conversation? Can you subtract the conscious person out of that and still think it's the best? Someone once sent me what purported to be an AI-generated conversation between Alan Watts and Terence McKenna (I didn't even listen to it, so I'm not even sure what it was, but that's what it looked like), two guys whom I love. I never knew either of them, but I'm a fan of both and have listened to hundreds of hours of both of them talking. As far as I know, they never met each other; it would have been a fascinating conversation. When I looked at this YouTube video, I realized I simply don't
care how good it is, because I only care if it was actually Alan Watts and Terence McKenna talking. A simulacrum of Alan Watts and Terence McKenna, in this context, I don't care about. Another use case I stumbled upon: I was playing with ChatGPT, and I asked it for the causes of World War II. Give me 500 words on the causes of World War II, and it gives you this perfect little bullet-pointed essay on the causes of World War II. That's exactly what I want from it; that's fine. I don't care that there was no person behind it, typing. But when I think about whether I want to read Churchill's history of World War II (it's on my shelf to read; it's one of those aspirational sets of books, and I haven't read it yet), I actually want to read it because Churchill wrote it. If you could give me an AI version of Churchill, "this is in the style of Churchill," where even Churchill scholars say it sounds like Churchill, I actually don't care about it. I'll take the generic use, "give me the causes of World War II," but the fake Churchill is profoundly uninteresting to me. The real Churchill, even though he's dead, is interesting to me.
So the rebuttal I give here, and this is what my mind is doing, is that in the distinction you're presenting, the difference I see is that in the case of the AI-generated conversation between two people you respect, someone has signaled to you that it is fake. Remove that. Say Churchill thought: why would I write a book when I could just click a button and this thing will write it in my voice, in my tone, drawing on the entire back catalog of things I've written before, and it will produce my account and save me time? So I'll just click the button (maybe my publisher will do it for me), and then I'll
sell that to Sam on the
basis that it is my thoughts. I can imagine a very near future, if we just do it by percentage, in which more and more books are written by artificial intelligence, to the point that when you look at a shelf, if the intelligence does increase by any measure, most of it will be words strung together by artificial intelligence, and it will potentially be selling better than the words written by humans. So again, when we go back to the conversation with your children: there might not be a career there either, because artificial intelligence is faster, can produce more content, and can iterate on whether it sells better, gets more clicks. It can write the headline, create the picture, write the content, and then I can just put my name to it. So even in that regard, what remains?
Well, in the limit, what I think we're imagining is a world where none of the terrifyingly bad things have happened. It's all working: we're producing a ton of great stuff that is better than the human stuff, and people are losing their jobs, so we've got a labor disruption, but we're not talking about any other kind of political catastrophe or cyber-apocalypse, much less AGI destroying everything. Then I think we just need a different economic assumption, and a different ethical intuition around the value of work. Our default norm now, in a capitalist society, is that you have to figure out something to do with most of your time that other people are willing to pay you for. You have to figure out how to add value to other people's lives such that you reliably get paid; otherwise you might die. We've got a social safety net, but it's pretty meager. There are cracks you can fall through; you could wind up homeless, and we're not
going to figure out what to do about that, as we all know too well. So your claim upon your existence among us is your finding something to do with your time that other people will pay you for. And now we've got artificial intelligence removing some of those opportunities and creating others. But in the limit (and I do think this is different; I think analogies to other moments in technological history are fundamentally flawed), this is a technology which, in the limit, will replace jobs and not create better new jobs in their wake. It just cancels the need for human labor, ultimately. And strangely, it replaces some of the highest-status, most cognitively intensive jobs first: it replaces Elon Musk before it replaces your electrician or your plumber or your masseuse, way before. So we have to internalize the reality of that. And again, this is in success; this is all good things happening. We have to have a new ethic, and we have to have a new economics based on that ethic. UBI is one solution: you shouldn't have to work to survive.
Universal basic income.
Yeah. There's so much abundance now being created that we have to figure out how to spread this wealth around. We've got a cure for cancer over here; we've got perfect photovoltaic-driven economies over there, where we've solved the climate-change issue; we're pulling wealth out of the ether, essentially; we've got nanotechnology that is birthing whole new industries. But it's all being driven by AI; whenever you put a person in the decision chain, you're just adding noise. This should be the best thing that's ever happened to us. This is like God
handing us the perfect labor-saving device: the machine that can build every other machine, that can do anything you could possibly want. We should figure out how to spread the wealth around in that case. This is powered by sunlight; no more wars over resource extraction. It can build anything. We can all be on the beach, just hanging out with our friends and family.
Do you believe we should do universal basic income, where everybody's given, like, a monthly sum?
Something like that. We have to break this connection. Again, this is what will have to happen in the presence of the kind of labor-force dislocation enabled by all of this going perfectly well. This is pure success: AI is just producing good things, and the only bad thing is that it's putting all these people out of work. It's coming for your job eventually.
I've heard this, and my issue with it, my rebuttal when I talk to my friends about this idea of universal basic income, where we hand out enough cash resources to people so that they're stable (which I'm not necessarily against; I just want to play with it a little), is that humans seem to have an innate desire for purpose and meaning, and we seem to be designed and built, psychologically, for labor and for discomfort.
But it doesn't have to be labor that's tied to money, right? We will get our status in other ways, and we'll get our meaning in other ways. Again, these are all just stories we tell ourselves. You're talking to a person who knows it's possible to be happy actually doing nothing, just sitting in a room for a month and staring at the wall, because I've done it. That's possible. And yet that's most people's worst nightmare: solitary confinement in a prison is considered torture. And I know people who have spent 20 years in a cave. So there are
capacities here that we're talking about but um just more more commonly I think we will
we want to be entertained we want to have fun we want to be with the people we love we want to be useful in relationship and insofar as that gets uncoupled from the necessity of working to survive right it doesn't all just go away we just need new norms and new ethics and new conversations around what we do on vacation right it's like so what what you're imagining is that if you put everyone on vacation on the best vacation you can make the vacation as good as possible a majority of people will eventually be miserable because they're they're not back at work right and yet they're most of these people are working so that they have enough money so they could finally take that vacation right we will figure out a new way to be happy on the beach right I mean like if you can't if you get bored with frisbee we will figure something else out that is fun you know you you can re you know I'll be able to read The Churchill history of World War II on the beach and not be rushed by any other imperative because I'm you know I I I'm happily retired right because my AI is creating the thing that is solving all my economic problems right um you know we should be so lucky as to is to have that be our problem like how to be happy in conditions of no economic imperative no basis for political Strife on the on the basis of scarce resources and no question about the the question of survival is off the table one does with one's time and attention right you can be as lazy as you want and you'll still survive you can be as unlucky as you as you want and you and you'll still survive and they could the awful situation we're in now is that differences in luck mean everything right you know someone is born in a in without any of the advantages that we have we don't have a s we don't have a system we have an economic system that reliably gives them every advantage and opportunity opportunity they could have right so it's like it's we just we um we don't have the re you know we apparently we've 
convinced ourselves we
either don't have the resources or we've convinced ourselves we don't have the resources we don't have the incentive such that we access the resources so as to actually come to the help of people we could help right I mean the idea that people starve to death is just it's unimaginable and yet it still happens you know that's not a scarcity problem it's a political problem wherever it happens and yet all of this is tied to a system where everyone has convinced themselves that is normal to really have one survival be in question if one doesn't work right and and we by choice or by accident like like if you get if you haven't you know I think I think it's still true that in the at least in the U.S this is almost certainly not true in the UK but in the U.S the most common reason for a personal bankruptcy is um you know overwhelming medical expense that just comes upon you for whatever reason well you know your wife gets cancer you guys go bankrupt solving the cancer problem or failing to solve the cancer problem and now everything else unravels right and we we have a society which thinks yeah well unlucky you you know that's you know if you wind up homeless just don't sleep in front of my store because I need my you know you're going to hurt my business um like you know successful AI that cancels lots of jobs would be it would be it would only be canceling those jobs by virtue of producing so many good things so much value for everybody that we would we would have to figure out how to spread that wealth around otherwise we'd yeah otherwise we would have a and you know if an amazing amazingly dystopian bottleneck for a few short years and then we would just have a revolution right then we'd then the guys in their in their you know gated communities making trillions of dollars based on them having you know gotten close enough to the gpus uh that they that it you know some of it rubbed off on them um yeah they'd be dragged out of their houses and off their Gulf streams and
you know, we would have a fundamental reset, a hard reset of the political system.

If I had to put you in a yes-or-no situation and ask your intuition: your objective, and I'm sure it is, is to encourage the betterment of humanity and to increase our odds of happiness and well-being a hundred years from now. There's a button placed in front of you that would end the development of artificial intelligence as we've seen it over the last decade, so that we'd never proceed with developing intelligent machines. You could press it and stop it right now, stop it permanently, such that we just never figure out how to build intelligent machines, or pause it indefinitely.

Well, I would definitely pause it, to a point where we could get our heads around the alignment problem.

Permanently? If the button was a permanent pause that you couldn't undo?

Well, the question is how deep that goes. So we have everything we have now, but it just never gets better than this?

Yeah, we never make progress from here. And your objective is to make humanity happy and prosperous.

It's hard, because when you begin imagining all of the good stuff we could get with aligned superhuman AI, then it's just cornucopia upon cornucopia; everything is potentially within reach. But I take the existential-risk scenario seriously enough that I would pause it. I think that if curing cancer is a biomedical engineering problem that admits of a solution, and I think there's every reason to believe it ultimately would be, we will eventually get there based on our own muddling along with our current level of information technology. I'm reasonably confident of that, because our intelligence shows every sign of being general; it's just not as fast as we would want it to be. What AI is going to give us is speed, and beyond speed, memory and access: no person or team of people can integrate all of the data we already have. The real promise here is that these systems will be able to find patterns that we wouldn't even know how to look for, and then do something on the basis of those patterns. But I think an intelligent search within the data space by apes like ourselves will eventually do most of the great things we want done.

And the problems we need to solve so as to safeguard the career of our species, to make civilization durable and sane, and to remove this sword of Damocles that hangs over our heads at every moment, the fact that at any moment we could decide to have a nuclear war that ruins everything, or create an engineered pandemic that ruins everything: we don't need superhuman intelligence to solve those problems. We need an appropriate emotional response to the untenability of the status quo, and we need a political dialogue that eventually transcends our tribalism.

For those of you that don't know, this podcast is sponsored by WHOOP, a company that I'm a shareholder in, and I'm obsessed with my WHOOP; it's glued to my wrist 24/7. For those of you that don't know it, it's essentially a personalized wearable health and fitness coach that helps me have the best possible health. My WHOOP has literally changed my life. WHOOP is doing something this month which I'd highly suggest checking out: a global community
challenge called the Core 4 Challenge. Essentially, they guide you through a set of four activities throughout the month of August that are scientifically proven to improve your overall health. I'm giving it a go and I can't wait to see the impact it has on me, and I highly recommend you join me. So if you're not on WHOOP, there is no better time to start. If you're a friend of mine, there's a high probability that I've already given you a WHOOP, because I'm that obsessed with it. It's the first thing that I check when I wake up in the morning; I want the information on my sleep so I can plan my day around it. So if you haven't joined WHOOP yet, head to join.whoop.com/CEO to get your free WHOOP device and your first month free. Try it for free, and if you don't like it after 29 days, they'll give you your money back, but I have a suspicion that you're going to keep it. Check it out now and let me know how you get on; send me a DM.

Quick one: if you've been listening to this podcast for some time, one of the recurring messages you've heard over and over again, especially when we first had that conversation with Tim Spector, is the importance of greens in our diet. A while ago I started pressing my friends at Huel to come out with a product that did exactly that: allowed you to have all those greens, the vitamins and minerals you need, in a drink. And after several months of iterations, they released this product called Huel Daily Greens, which is now one of my favorite products from Huel, because it tastes great and it fills a very important nutritional gap that I had in my diet. The problem is, it launched in the US, sold out straight away, and became a smash hit for Huel, for the very reasons I've described. It's now back in stock in the United States, but it's not here in the UK yet. So if you're a UK listener, which I know a lot of you are, it's not yet available, so let's all DM them everywhere we can and tell them to bring Huel Daily Greens to the UK. When it is available in the UK, I'm going to let you know first, but until then, let's spam their DMs.

You, and I'd say a few others, maybe two
or three others, helped change my mind about one of the most profound things I think anyone could believe. When I was eighteen, I believed in Christianity, and then there were a couple of moments that shook my belief, nothing on a personal level, just a couple of ideas that managed to infect my operating system and led my curiosity towards your work, and I changed my mind. It was such a profound change. How do we change our minds? And I really want to focus that question on the individual's mind. I want to change my mind; I want better beliefs, better ideas in my head that are going to allow me to get out of my own way, because I'm miserable, I'm not living the life that I know I can live. Some people don't even know they can live a better life. "I'm not happy": that's the signal, and I want to rectify it in some way.

Well, there are a few bright lines for me. Take our ethical lives and our relationships to other people. There's the problem of individual well-being, which is still real even in moral solitude; if you're on a desert island by yourself, you don't have ethical questions emerging, because you're not in relationship to anybody else, but you still have the problem of how to be happy. But so much of our unhappiness is in collaboration with others: we're unhappy in our relationships, we're unhappy professionally, and it's worth looking at how we're behaving with other people.

For me, the highest-leverage change I ever made, and it's very easy to spell out, very clear, and ultimately pretty easy, is just to decide that you're not going to lie about anything, really. There might be some situations in extremis where you'll feel forced to lie, but those, in my view, are analogous to acts of violence that you may be forced to use in self-defense. A lie is sort of the first stage on the continuum of violence for me. So I'm not going to lie to someone unless I recognize that this is not a rational actor whom I can possibly collaborate with; this is someone I have to avoid or defeat or otherwise contain in their propensity to do me harm. So yes, if the Nazis come to the door and ask if you've got Anne Frank in the attic, you can lie, or you can shoot them; these are not normal circumstances. But that aside, every other moment in life where people are tempted to lie is one that I think you can categorically rule out as unethical, and beyond unethical: it's creating a life that, when you examine it, you don't want to live.

The moment you know that you're not going to lie to people, and they know that about you, all of the social dials get recalibrated on both sides, and you find yourself in the presence of people who don't ask you for your opinion unless they really want it. And then when you're honest, it's a night-and-day difference when you're giving people critical feedback and they know you're honest. Their detector is not going off, because they know that even when it's not convenient, even when it's not comfortable, you're being honest. That's incredibly valuable, because basically you're giving them the information that you would want if you were in their shoes. We have this sort of delusion that takes over us whenever we're tempted to tell a white lie: we imagine this person doesn't want the uncomfortable truth, that it's much better to just tell them the kind fiction. But we don't even calculate by the Golden Rule there most of the time. If you just took a moment, you'd realize: wait a minute, does someone who is actually doing a bad job want me to tell them they're doing a good job, and then just send them out into the world to bounce around other people who are going to recognize, as I just did, that the thing they're doing isn't so great? You're just not doing them a favor.

This is part of the nature of belief change, isn't it: when we believe that someone is on our side, or we believe from a political standpoint that they represent the views we hold, we're much more likely to change our beliefs. I spoke to Tali Sharot about this, the neuroscientist, and I wrote about it in a chapter in my upcoming book about how you change people's minds. She showed that if a flat-earther says something to a flat-earther about the nature of the earth, they'll believe it, but if NASA says something to a flat-earther, they will dismiss it on sight, because the source of that information is not one that they trust, like, or believe is well-intentioned.

I mean, this is a bug, not a feature. It's understandable, but it's something we have to grow beyond, because the truth is the truth. And it goes in both directions: the person on your team, whom you love and respect, is capable in their very next sentence of speaking a falsehood, and you need to be able to detect that; conversely, the person you least respect is capable of saying something quite incisive and worth taking on board. So we have to have this sort of metacognitive layer where
we're noticing how we're getting played by our social alliances, and recognize that the truth, and rather often important truths, are evaluated by different principles. It's not a matter of the messenger: you shouldn't shoot the messenger, and you shouldn't worship him either.

You mentioned removing lying and being more honest as a significant step change in your own happiness. Is that accurate?

In my happiness, and your happiness. Yeah, immensely so.

Practically and specifically, how so?

When you look at how people ruin their reputations and their relationships and their businesses and their careers, the gateway to all of the misbehavior that accomplishes that is lying. Look at somebody like Lance Armstrong, or Tiger Woods. These guys were at the absolute apogee of sport; everyone loved them, everyone was amazed at what they'd accomplished, and yet the dysfunction in their lives got vomited up for all to see at a certain point, and it was enabled at every stage along the way by lying. If either of them, early in their career, before they became famous, before they became rich, before they became tempted to do anything that was going to derail their lives later on, had decided they weren't going to lie, they would have found everything else they did to screw up their success impossible.

When I decided this, and this is in the book, it came from a course I took at Stanford, a seminar with this brilliant professor, Ron Howard, which I think some people in Silicon Valley have taken as well. This course was just like a machine: undergraduates and graduate students would come in on one side, and twelve weeks later would come out convinced that lying was basically no longer on the menu. The whole seminar was an analysis of the question
is it ever right to lie, and really we focused on white lies, the truly tempting lies, as opposed to the obvious lies that wreck people's lives and relationships. It's just so corrosive, and it's corrosive of relationships in ways that, unless you're a student of this kind of thing, you don't necessarily notice.

One example I believe is in that book: I remember my wife was out with a friend, and the friend had something she had to do with another friend later that night, but she didn't really feel like doing it. She got a call from that friend in the presence of my wife, and she just lied to get out of the plan. She said, "Oh, I'm so sorry, but my daughter's got this thing." It was an utterly facile use of dishonesty; she could have just been honest, but it was too awkward, so she got out of it with a lie. But now it happened in the presence of my wife, and my wife's immediate question is: how many times have I been on the other side of that conversation? How many times has she lied to me in an equally compelling way about something so trivial? So it eroded trust in that relationship in a way the liar would never have known about, would never have detected, because she just went right back to having a good time. They were out to lunch, they continued having their lunch, it's all smiles, but my wife has just logged something about the ethical limitations of this person, and the person doesn't know it.

Once you pull on this thread, your entire life, at least for the transition period until this becomes a habit you no longer have to consider, becomes a kind of mirror thrown up to your mind, and you meet yourself in all these situations where you were avoiding yourself before. So someone will say, do you want to make plans, or do you want to collaborate with me on this project? If previously you always had recourse to some kind of white lie that got you out of the awkward truth, which is that the answer is no, and there are actual reasons why not, then you never have to confront the awkwardness of being this kind of person with these kinds of commitments. The most awkward case would be someone declaring a romantic interest in you where the answer is no, and it's no for a totally superficial reason: this person is not attractive enough for you, or they're overweight, or whatever. You have your reason why not, and it's something you feel you cannot say.

Now, I'm not saying you should go out of your way, like someone with Tourette's who helplessly blurts out the truth; there's scope for kindness and compassion and tact. But if someone is going to really drill down on the reasons why not, if the person says, "No, I want to know exactly why you don't want to go out with me," there's something to discover on either side of that true disclosure. Either you are cast back on yourself and you have to realize: OK, I'm such a superficial person that it doesn't matter who anyone is; if they're ten pounds overweight, I'm not interested. That's the mirror held up to your mind. All right, so you're that kind of person. Do you still want to be that kind of person? Do you really want to decide that everyone, no matter what their virtues, no matter what chaos is going on in their life, and this person might actually lose those ten pounds next month and you'd have a very different situation, is off the table? Are you really filtering by weight in this way? And are you comfortable with that, and comfortable saying it, if somebody forces you to actually be honest?

We have a closing tradition on this podcast where the last guest leaves a question for the next guest, not knowing who they're going to leave it for. The question that's been left for you, in impeccable handwriting: where do you want to be when you die? Describe the place, time, people, smell, and feeling.

Well, it actually connects with an idea I've had. We haven't talked about psychedelics here, but there's been this renaissance in psychedelic research, and I'm worried we could recapitulate some of the errors of the '60s and roll this all out in a way that's less than wise. The wise version, I think, would be to recapitulate something like the mysteries of Eleusis, where we have rites of passage enabled, in many people's case, by psychedelics and the practice of meditation. I just think these are fundamental tools of insight that, for most people, it's hard to see how they would get any other way. There's a much longer conversation about which molecule and how and all that. But another component of this is the hospice situation, where the experience of dying is as wisely embraced and facilitated as possible, and I think psychedelics could certainly play a role for many people there. So I imagine we need places that are truly beautiful, where people have gone to die, and their families can visit
them there, and it is just a final rite of passage that is embraced with all the wisdom we can muster. In my case, currently I'd be happy to be home, but wherever home is at that point, I would want a view of the sky. It could be an ocean beneath the sky; that would be ideal. There's basically nothing that makes me happier than looking at a blue sky, just watching cumulus clouds move across it. I can extract so much mental pleasure just looking at that. So if I'm going to spend my last hours of life looking at anything, if my eyes are going to be open, it'll be looking at the sky.

The stars, or the daytime sky?

The daytime. Light pollution is enough of a thing in my world that I feel like I go for years without seeing a good night sky, so I've kind of given up hope there, though I do love it. But yeah, a view of the sky, and the people I love who are still alive at that point. I'm not worried about death in that sense; the death part is not a problem. I can't say I'm looking forward to it. I can imagine there could be medical chaos and uncertainty and all the weirdness that happens around the dying process, and there are all kinds of ways to die that I wouldn't choose. But having a nice place to do it, with a view of the sky, would be the only requirement.

The question asks for the smell. Give me the smell.

An ocean breeze. I've put an ocean there, so an ocean breeze would be perfect.

Sam, thank you so much. Not just for this conversation; as I said before you sat down, you were pivotal in helping me unpack some problems when I was younger, some conflicts I should call them, with my view on religious belief and the nature of the world. But more importantly, you didn't rob me of my religious beliefs and leave me with nothing. You left me with something else that was really important to me: the idea that there can still be great meaning, and what you describe as spirituality, in the place of that religious belief. Religious belief gives people a lot of things, and it's funny, because when I was religious, and as I went on the journey to becoming, let's say, agnostic, I was in conflict with people; I would want to have a debate with everybody. I spent those two years watching everything that you and Richard Dawkins and Hitchens had done, and then I came out the other side and it was peaceful: you believe what you want, I'll believe what I want, and as long as we're not causing any conflict with each other and you're not doing any harm, it's OK. And then I discovered what I would call my own spirituality, the meaning that I see in the world around me and in the self, and things like psychedelics, and it's a better place to be, and it removed my fear of death, which I had as a religious person.

Well, that's good.

So thank you for that and all your subsequent work. You've written so many incredible books, you've got an unbelievable podcast, which I was gorging on before you came here, and an app, which I know is much more than meditation now. If you could speak just a few sentences about the app and what you do, I think people listening to this might be compelled to check it out and download it.

So I had that book which you're holding, Waking Up, which is
where I talk about my experience in meditation and how I fit it into a scientific, secular worldview. It just turns out that an app is a much better delivery system for that kind of information. You don't even need video; I think audio is the perfect medium for it. So when that technology came about, or when I discovered it, I felt incredibly lucky to be able to build it. It's kind of outgrown me now; there are many, many teachers on it, and many other topics beyond meditation are touched.

It really subverts all of the problems with the smartphone, some of which we touched upon here. The smartphone has become this tool of fragmentation for us: it fragments our attention, it continually interrupts our experience, depending on how you use it. Most of what we do with it, checking Slack, checking email, checking social media, punctuates your life with all these seemingly necessary interruptions. But this app, or really any app delivering this kind of content, subverts all that, because it's a platform where you're getting audio that guides you in a very specific use of attention, a sort of reordering of your priorities, and gets you to recognize things about your experience that you wouldn't otherwise see. It's sheer good luck that an app turns out to be the perfect delivery system for that information. Ten years ago there were no apps, and all I could do was write a book.

Sam, thank you.

Thank you so much. A pleasure to meet you. And congratulations with everything.

Oh, thank you. I was catching up on your podcast in anticipation of this, and it's amazing, the reach you've got now. Wonderful.

We're still trying to catch up with it ourselves, but it's a credit to all of the team. And I really want to say, from the bottom of my heart, thank you, because the work you do is really, really important. It's been important in my life, as I've said, but it's just really important. I feel like we're living in a world where nuance and all the things you've talked about, openness to debate and honest dialogue, are things we're getting further and further away from. So if there's anyone left in this world still willing to engage on that level, I feel like they must be protected at all costs, and I see you as one of those people. So thank you.

Nice. Well, to be continued.
