Culture Series: “Nexus,” a Book by Yuval Noah Harari

Episode 273 November 06, 2024 00:56:52
Call It Like I See It

Hosted By

James Keys Tunde Ogunlana

Show Notes

James Keys and Tunde Ogunlana discuss a few things that stood out in Yuval Noah Harari’s new book “Nexus: A Brief History of Information Networks from the Stone Age to AI,” which looks at the role information and information networks have played in how human society has developed, and how the introduction of AI may be more revolutionary than we can imagine.

 

Nexus: A Brief History of Information Networks from the Stone Age to AI. (ynharari.com)

 

Nexus: A Brief History of Information Networks from the Stone Age to AI. (Penguin Random House)


Episode Transcript

[00:00:00] Speaker A: In this episode, we discuss some things that stood out in the 2024 book Nexus, which looks at the role information and information networks have played in how human society has developed, and also how the introduction of AI may be more revolutionary than we can even imagine, because it's not just a tool under human control, but actually can function with its own agency. Hello, welcome to the Call It Like I See It podcast. I'm James Keys, and rolling with me is a man who may say some wild things, sometimes so much that people may wonder what's gotten into him sometimes. Tunde Ogunlana. Tunde, are you ready to spit some venom today? [00:00:54] Speaker B: Yes, sir. I'm just. I'm just aging like fine wine. [00:00:59] Speaker A: All right, there we go. That's what we all strive for. Now, before we get started, if you enjoy the show, I ask that you hit subscribe and like on your podcast platform or YouTube. Doing so really helps the show out. Now, we're recording this on November 5, 2024, and since nothing else is going on today, we decided we'd continue our culture series today by doing some reading between the lines in the 2024 book Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari. In this book, Harari first looks back at how developments in information networks in world history have impacted societies and the direction of everything, and then he also looks forward at how the introduction of AI could influence things going forward. Now, Tunde, you know, I want to jump right in. What were your thoughts on the book? You know, you can start off generally, but then also just one of the central premises of the book, that information is not the same thing as truth, and that more information does not always bring us closer to truth, or more truth, so to speak. [00:02:13] Speaker B: Yeah, no, just to answer kind of the first part, it was an excellent book.
I highly recommend it. And we are not paid sponsors for this author or the book. But no, just like in the book Sapiens, this author does a great job, I think, of tying together not only some kind of broad concepts about knowledge and humanity, but also tying those things together with this arc of human history and how, I guess, we have evolved over this time. And in this way, I mean, Sapiens was more of the anthropological, kind of physiological evolution and the mental evolution, the cognitive revolution. This book, like you mentioned, is really about information systems and how humans kind of communicate, and how the actual use of information and the systems built around it, you know, kind of the scaffolding of information systems in the history of humanity, has led us to here. And like you mentioned, then the future, as machine learning now takes over and we are in this period of transition, let's put it, our potential transition, depending on how the future plays out. So I thought it was an excellent. [00:03:30] Speaker A: What I would add to that is, I think the purpose of the book is to try to look forward at AI and how AI may change things, may, you know, be revolutionary in ways that we can't even imagine. But in order to do that, he looked back at various developments, like when they started using stone tablets, or when they started writing, you know, putting together books, or when they started having periodicals, you know, pamphlets or newspapers, things like that. He looks back at these revolutionary developments in the past and points out that all of these were ultimately under human control. But this new one is different from those in the sense that it actually is something that is not necessarily in direct human control. But we can look back at how those revolutions changed things to at least get some kind of idea of how this one could change things.
[00:04:23] Speaker B: Yeah, and that's where I think this author does a great job with that part of it. Like you said, it's kind of like the old saying many of us have heard throughout our lifetime: if you don't know your history, you're doomed to repeat it. So what he does first of all is give us the history of information and information networks as it relates to humans. I think that's very important, because I learned a lot, and I mean, I'm a pretty well-read guy over the years. And then, like you're saying, with that foundation we can then look at how he transitions the book into, okay, what does this mean for the future, and why this is so unique. I think very important is the idea that machine learning marks the first time that we as humans have experienced, and are going to experience, content and information being created from a non-sentient source. Right. From a non-human intelligence. And I was naive to the real strength and force of AI already, until reading the parts about machine learning, and the fact that machines actually are intelligent at this point in their development; they're just not sentient. [00:05:35] Speaker A: Well, yeah, just the AI, the artificial intelligence, the ability to make decisions and so forth. This has been discussed over the years, but just the idea, and I think we've touched on this, the ability for AI algorithms to learn, you know, and that's the machine learning aspect, so that you can set them on an objective, give them information, and wherever the code started, wherever the process started, it will evolve itself without further input from the programmer toward whatever objective it was given, getting better and better and better at delivering that objective.
And so what you end up with six months down the road is not necessarily something that the original creator would recognize, or would know it was going to end up there, or anything like that. And so, yeah, the idea that you create something that then continues to grow and evolve on its own is definitely different from the idea of a printing press, where it's like, okay, I'm making physical copies of something that a human created, but it's not altering the human's creation, it's not updating it, it's not, you know, anything like that. And so it's a different kind of tool in the sense that it has its own decision-making capability, and its decision-making capability will change and evolve over time. Whatever you set its decision-making as initially, you know, if A happens, then you do B, it won't necessarily be that as time goes on. And wherever it is at six months isn't necessarily where it's going to be at 12 months, and so forth. So, you know, you kind of set something and then push it off on a stream, and where it goes, the creator, so to speak, doesn't control anymore. So it is less of a tool in that context than what we're used to. Humans have created tools for a long time, but those tools were meaningless without a human to operate them. Whereas you've got these AI bots, or whatever it would be, that can operate in the digital sphere, on the Internet or whatever, without that, and they can operate and continue to evolve and change their operations. So yeah, it's something that we haven't dealt with. But I do want to ask you, though, because I want to keep us moving, about the idea of information.
He has this central premise that he talks about, where he references a naive view of information, and that is that the more information you have, the closer you will be able to get to the truth, you know, that information and truth are, you know, convergent on one another and so forth. And he pushes back; he calls it the naive view, you know. So did you have any thoughts on that piece? Because, again, that's key to understanding where he goes, whether looking back at the printing press or at anything, you know, all the things he looks at in the past, or understanding where he's going forward. This is a key piece to understand, because if information itself isn't necessarily an automatic convergence to fact and truth, then information can serve a lot of different agendas. So your thoughts on that? [00:08:42] Speaker B: Yeah, I think this was a very important part of the discussion, because this is where I've even found myself historically being naive in this way. And the idea was that, you know, more information will correct misinformation. And he does a good job actually distinguishing misinformation from disinformation, disinformation being purposely an attempt to deceive someone, where misinformation can just be generally mistaken information or things like that, you know, kind of bad faith versus good faith, or at least not having bad faith with misinformation all the time. [00:09:19] Speaker A: But I like that idea: disinformation, you know, you get that D with the deceive; misinformation, like you said, could be a mistake, you know. And the piece you just said, as far as that more information can be used to correct misinformation, that's part of what he considers naive. [00:09:35] Speaker B: Yeah, well, that's what I'm saying.
Like, I realized that I fell under that trap a lot, you know, before having this kind of knowledge about how our brains work and all that. And I think this is where, you know, if you look at things like even some of our political phenomena, like the media and how they behave today, there's this idea that, well, if we just tell people facts, somehow they'll all just get on the same page of information. And maybe the COVID pandemic was a good example, just kind of a personal experience that I'm sure most people watching and listening went through as well, where we had, you know, the CDC and Dr. Fauci and certain medical experts and scientific experts that just wanted to get all this information out there. And for some reason, you know, there were large parts of our population that just didn't want to receive that information, and they were more comfortable with or trusted other sources of information. And I think that's where you can look at the idea: okay, so what's true? What was true in terms of that type of information? Was it true what the CDC was saying? Was it true that, you know, hydroxychloroquine or these other things were. And I think that's where the idea of just adding more information doesn't necessarily create anything other than just more, you know, more chatter and stuff. Yeah, so that to me was very interesting, and that's why I say I realize I've had that naive view, that I thought that just putting more information out there would somehow, you know, get people to understand a different truth. [00:11:08] Speaker A: Yeah, I think it's a common thing, because, I mean, I would say him pointing out the distinction was helpful for me as well, in the sense that if you walk in thinking that information inherently serves truth, that I think is the disconnect. Information doesn't inherently serve truth. Information can be used to. And he actually talks about a dichotomy.
Information can serve truth, or it can serve order, and order doesn't necessarily need to be based on truth. And I say this not as religious commentary, but, like, the Bible creates order amongst the people who adhere to it and follow it, regardless of whether or not everything in it is true. And so there's information in that book, and, you know, he talks about the Bible as a big innovation, which we can touch on later. But the idea that information can. Is the Bible helpful, from a physics standpoint, for understanding how the earth came into being? Many would say no. But the idea that it gives people something to believe in and follow, and allows people to bind together with other people that believe, serves other purposes. You know, so if we're looking at it only as each statement, each paragraph, how much of this information is truth that can be verified, either scientifically or with documented history, then you're missing the point of the whole book. The purpose of that book isn't necessarily to establish some factual record, some truth. It's information that can be used to create order, information that can be used to build community. You know, and so once you expand your understanding to the idea that information can serve a lot of different purposes, and truth is just one of them. To many people, truth isn't even the most important thing information is for. The most important thing it could be for could be order, it could be, again, community, it could be creating ties between different people. And so, with that piece, once you can become comfortable with that idea of information, then you can look back throughout history.
You can look at modern times and say, okay, I can see how information in this instance is not being used to serve truth. It's being used to gather power. It's being used to divide people. It's being used for a lot of different reasons other than truth and finding truth. And I'm an attorney, so this was something that was jarring to me, because a lot of times with the law, what we think of is what Harari calls this naive view, where, hey, let's just get the information out there, and then we'll have 12 people sit on a jury and decide, okay, based on this information, here's what we got. Or, you know, have nine people sit on the jury, and okay, based on this information, we're going to elucidate the truth. And it's like, well, you know, what information do we have? What is this information actually doing? [00:14:05] Speaker B: Well, I think that comes back. I mean, look, I'm not an attorney, so being on the outside of your profession, I've always heard that, right? It's not about who's really truthful or has the most facts in an argument at trial, for example, it's who. [00:14:19] Speaker A: Who. [00:14:19] Speaker B: Who presents the best argument. Right. And so I think that goes into part of that. [00:14:24] Speaker A: And that a lot of times is narrative, you know. [00:14:26] Speaker B: Correct. That's what I mean. Like, that becomes the truth for the jury. The better argument, the better narrative, who can tell the better story, you know. And I don't want to project on anyone else, but I'll say for someone like me, I'm 46 years old, and just in recent years of reading this kind of stuff, it's finally clicked in my head that that is actually how we work as humans. It is irrational, because what is truth, in a sense.
And I like this part on information; I just want to read it while we're on information, because he slows it down. I'll quote the book: information "doesn't necessarily inform us about things. Rather it puts things in formation." And I just kind of like the. [00:15:12] Speaker A: Way he said that, because I was like, excellent. [00:15:14] Speaker B: Yeah, I never thought about breaking apart. Yeah, breaking apart information. But yeah, it does put things in formation. And so then he goes on, but. [00:15:22] Speaker A: But also it puts things and people in formation. [00:15:25] Speaker B: Yes, and that's why I say I never thought of the ability to translate information this way. But, like he said, if you look at, and you've always said this somewhat, people can look at the stars and just see stars. Other people looked and saw constellations. Well, now we can say that's someone putting things in formation when they look at the sky. [00:15:47] Speaker A: He referenced taking that even further. Some people can then look at those same stars and see astrology, and say, okay, here's what happened to you based on your birthday. And so all of that is information that is not necessarily serving truth. It's serving. [00:16:04] Speaker B: Yeah. And the one that got me to appreciate it, and, you know, this might resonate with you, was music. I never thought of this, and it's true. He says music is a form of information. And then he kind of alludes to the idea: is music right or wrong? And that's a good point, right? Like good music versus bad music. Very subjective. And so if you're seeing music as information, well, is it truthful or not? You know what I mean? That's how you get into. [00:16:30] Speaker A: Well, but that's the murky area: music isn't even trying to serve truth. Like, it's information. Yeah, yeah, exactly.
And so that's the point: to break apart this assumption that a lot of us come in with, that information is this path, you know, that you get all this information and you're on the right way to truth. Or if there's untruth out there, if there's false stuff out there, you just need to throw information in, and that'll quench, you know, the false stuff. And that's just not necessarily the way it works. The way that truth happens is by deliberate effort with the information that's out there, and then the vetting of information, which, again, is a part of the book as well. We may not get into that part, but I do want to. And I don't want to spend too much time here, but he gave examples of how information networks can go wrong. And some of these things are some of the most infamous things in history, you know, like the kind of witch-burning phase that they went through in Europe. And I hate to sum it up like that, but, I mean, it's terrible, it's terrifying stuff. But even also kind of Nazism or Stalinism, and how information networks were created there to murder millions of people, you know. And so were there any of these things that stood out to you in terms of examples? And again, all of this is being provided in the context of looking back in order for us to look forward and say, okay, what's going to happen with our information networks now that, in addition to humans creating information, distributing information, curating information, you have AI doing it? So, again, let's look back at how this stuff has gone wrong in the past. In terms of the things that have gone wrong in the past, were any of those particularly significant to you, or what stood out in that? [00:18:09] Speaker B: Yeah, no, again, they all were fascinating.
So another quote: truth may be essential for the scientific method, but may not motivate a nation. And again, it goes back to, and I'll say this because you're right about the Stalinism, Nazism, all these isms, it is really all about ideas and what is the truth of that society or that nation. And that's an interesting kind of dissonance there, right? The scientific method requires actually finding out facts, and it requires one to, give me a second, to acknowledge ignorance. That's interesting. [00:18:49] Speaker A: But see, the scientific method is a discipline. That's the structure that's put onto the information gathering and the information vetting. The scientific method isn't information; it's saying, here's how we want to handle information. And so that's where you have this pursuit of truth, like you said, the recognition of ignorance, the introduction of self-correcting mechanisms, so people are rewarded for pointing out deficiencies in the current information as opposed to being shunned for it. And so, yeah, that's a good point to bring up: the scientific method is a discipline. You know, a jury trial is a discipline. A constitutional republic is a discipline that is trying to take all of this information that may float out there and introduce things that will allow you to get closer to truth and justice, or whatever other objectives you may have. [00:19:40] Speaker B: Yeah, and that's really, I think, if we look from 30,000 feet at kind of how societies can be organized, whether democracies or autocracies. [00:19:48] Speaker A: Well, don't go there yet. Don't go there yet. [00:19:50] Speaker B: No, I'm just saying the idea, like you're saying, about the self-correcting mechanism and the ability to acknowledge ignorance, you know, kind of that humility to say, well, we were wrong, or I don't know this, so I gotta go this other direction. I mean, I think. Yeah.
[00:20:04] Speaker A: That idea we'll get into. Save this piece for the next piece. [00:20:10] Speaker B: Yeah, no, no problem. So to go back, because what fascinated me about the quote was, you know, we got stuck a little bit here on the scientific method, but that truth may not be essential for motivating a nation. And so you talked about some of the regimes of the 20th century, and one of the things I thought of was something we've had different discussions about on our show, things like the Lost Cause, because I was thinking about our American story. [00:20:34] Speaker A: Yeah. [00:20:35] Speaker B: And this idea that, you know, think about it, from the Civil War all the way till 2023 in our state, in Florida, when our governor was promoting the banning of certain history books about American history and leaving out certain parts of American history. And, you know, you and I sometimes would talk about. [00:20:55] Speaker A: Which, by the way, is a form of controlling information, accessible information, and not for the purposes of truth, but for the purposes of, you know, building a society. [00:21:05] Speaker B: Correct. And that's what I'm saying. Like, for me, this book was just eye-opening to reflect on how I feel about things, because I personally was offended when Governor DeSantis did that. Because I'm thinking, wow, why are you trying to exclude the history of all Americans, you know, and just kind of stick to this narrow slice? [00:21:25] Speaker A: Put it a different way, though. You were saying, why in the world are we trying to make this information less truthful? You were objecting to the idea that he was walking us away from truth, so to speak. [00:21:40] Speaker B: And that's where, again, we have these pockets of culture. Right.
It's a truth that I understand from the facts that I've read and the history that I understand of the United States. What I've learned to appreciate is that, for whatever reason, right, just like this book tells us, to motivate a nation, before you and I were born, obviously, right, the stories of the nation that were accepted and allowed to be put into the curriculum, let's say, and, to use the word you like, curated into the American fabric and American history, unfortunately those stories sometimes did not include people like us and the realities, like we talk about, you know, the amount of black cowboys that really existed in the late 1800s, versus, by the 1980s, when you and I were growing up, the lack of that same representation when we would watch old Western films that were made in the last 60, 70 years. So somewhere along the line, you know, I guess decisions were made collectively by the nation to have just this kind of main narrative story that did exclude a lot of truths. But that's what a nation needs to. [00:22:47] Speaker A: Well, that's what they believed. It was believed that that was what was necessary to keep X number of people all pulling in the same direction, you know. And it's like, okay, we don't want to say anything about acts of terrorism committed against black folks, you know, we want to remove all that stuff, even though it's truth. We don't want that information circulating, because we feel like that won't build the nation, so to speak. Which, again, I disagree with, but in this context you understand exactly what that is, versus, you know, the personal animus you might interpret it as. To me, I would say, though, that the piece that really illustrated this was, you know, he pointed out two different publications.
You know, you have Copernicus's book, Copernicus, the guy that realized that the earth revolves around the sun, not the other way around. And then he also talks about the Witches' Hammer, the Hammer of the Witches book, you know, the book that set off the witch burnings. And he was talking about how one of these books is like one of the most groundbreaking, truthful information publications in the history of the world, you know, and the other is fantasies made up about women having sex with the devil and orgies and stealing men's penises and stuff like that. And the Copernicus book sells like 100 copies, and the other one is selling thousands and thousands of copies and motivating people to go on murderous rampages against women, you know, and accuse people of witchcraft. And, oh, if you deny being a witch, that means you're a witch. I mean, it's the craziest thing. And so what those two books did, like the Copernicus book, at least in the short term, did not give any way to put people in formation. It was just information out there. It was great, but it didn't capture anybody in terms of their imagination. Well, very few, not for hundreds of years, though. Whereas the Hammer of the Witches, you know, that put people in a formation not geared towards finding truth, but towards murderous rampages. But it actually was effective. It motivated a nation, or many nations, or many tribes, or whatever, to go do these things. And so the creation of information networks can go very wrong. And salacious information is going to be a lot of the most seductive stuff anyway, you know. And that's one of the illustrations.
Books on their own aren't self-correcting, you know, so if you don't have these self-correcting mechanisms, which we'll get to, these information networks can run wild and fundamentally mess up society in ways that are completely disconnected from what's true or not. And that can be very terrible for many members of the society. [00:25:33] Speaker B: Yeah. Now, this is where I want to move on, but I just want to share this, because to me that part about the witches was profound. And I often share it with people, especially people that are very anxious about just today's discourse, you know, in our culture and all that, because of the things that we've reported on, the amplification of emotions through social media and the algorithms triggering people. It all comes back to this stuff. And so that was a very interesting story that is funny on its face. But again, understanding history helps us kind of relate to today; at least it helps me keep my anxiety down. So that story, like you said, about the penises, I couldn't believe that was a true story. I had to go look up other stuff online, because, I mean, I believed it, but I was like, wow. Because, for the audience, the story was that people were actually convinced to believe that women were stealing men's penises, like literally taking them off in the middle of the night. And there was a story where a guy was going around saying that he saw this witch, that she put 20 to 30 penises in a bird's nest and was feeding them oats and things like that, and that's how they were staying alive. And this was all happening.
James, just because of the time frame we're reading this in, I was reading that part when we have, you know, a presidential candidate at that top level who was literally trying to convince the electorate that Haitians were eating people's cats and dogs. And reading that part about the witches and the penises, I had to think back, wow, people actually believed that? To the point, like you're saying, that they actually dragged some woman out of her house and burned her to death, because they believed that a friggin, somehow, a guy's penis could come off and the guy would stay alive and not bleed to death, back in the day before they even had anesthesia and things like that, and that the penis on its own would be eating in a bird's nest. And that's when I was thinking, well, of course then people could be convinced to believe in post-birth abortions, or in the fact that you're going to send your kid to elementary school in the morning and he'll come home with his penis cut off and he's a girl. Those are things that to me would be unbelievable to even believe when someone says them. But then I read this, and historically this is just the way human beings behave. It's like, okay, this isn't about being intellectually right or wrong. This is just about people wanting to believe certain stories. Some people do. And when enough people do, we have this critical mass. [00:28:00] Speaker A: Yeah, no, I mean, it's one of those things. And people like you and I are blind to it a lot of times, because the more fantastical a story is, typically, I'm looking, and I know you're like this too, I need more and more proof. Like, you know, if you said somebody jaywalked across the street, I don't need a lot of proof for that. But if you're saying, yeah, they're cutting off people's penises and then feeding oats to them, it's like, I'm not going to take people's word on that. But that's good. [00:28:28] Speaker B: Eating on its own.
[00:28:29] Speaker A: That's not how everybody looks at it, though. And a lot of times, I mean, that's why this book was very helpful, to see that and, you know, to really look at it from a historical standpoint: this stuff really happened. And it's like, okay, yeah, when I'm hearing all this fantastical stuff and I dismiss it, like, okay, you say all that, man, you better come up with a lot of proof, that's not necessarily the way that everybody, or even many people, are wired, so to speak. Particularly, again, when that information is put out there, if it serves some other purpose. I'm wondering, is it true? But what if it's just to make people feel together, to unite people behind another person, to make them feel connected to something? If the information is for that purpose, then while I'm looking for the truth of it, the truth of it is not even what the receiver of that information is looking for. And so another piece that I thought laid a good groundwork for his discussion looking forward was just how the information networks available at certain times in history were very important to the type of societies that evolved at those times. You know, he pointed out, for example, that for large-scale democracies, you needed ways to distribute the happenings of society in a wide way, to be able to distribute information efficiently. And one of the things he talks about with books, and then later with periodicals, newspapers, and things like that, was the ability to replicate information identically. That was the printing press, you know. It wasn't that you were hand-copying things, and he talks about all these hand-copied versions of the Bible from, you know, 1500 years ago.
These things all had little errors, and so you couldn't even make one-for-one copies of things very easily, whereas the printing press could do that. But the ability to distribute information about what was going on in society widely and quickly, through newspapers, pamphlets, things like that, is what really enabled large-scale democracy, you know, like in the United States. And that's a large country; that's not just a city practicing it, where people can keep up by word of mouth and stuff like that. And then along the same lines, large-scale totalitarianism, like when you look at what they did in Germany in the 1930s and 40s, or what they did in Stalinist Russia and so forth, where you have the government assuming total control over everybody's thoughts and actions. That wasn't how autocracies functioned 500, a thousand years ago, because there was just no way of moving all that information. Without the telegraph and ultimately radio, all that kind of stuff, you couldn't communicate that kind of information widely enough to exert that much centralized control. And so from that discussion, what stood out to you in how he laid out the connection between the capability of the information network and the type of society that can evolve? And it doesn't necessarily mean it's going to evolve into that type of society; it just opens up the possibilities for the kinds of societies. [00:31:32] Speaker B: Yeah, I think this is all just like awesome stuff to think about because it's kind of profound.
I mean, like you said, I see it kind of coming together over the last few hundred years. The scientific method and the industrial age and all of that combined has led to greater and greater discoveries and inventions, let me put it that way, of how we can share information quicker and easier, which then helps to increase things like human productivity. So, you know, 85 years ago a computer was a punch card, right? But that was still more efficient than something 100 years earlier than that, like the printing press. So I think it's all very interesting, because to your point about the 20th century and some of these totalitarian regimes that sprung up, whether Stalinism or Nazism or Mussolini, in terms of the way that they used information to create new truths about their societies and how people should see the world; or if it was things like, in the late 1700s, the US Constitution, and how that document and that type of information and that type of truth created what the United States became by the 20th century. To your point, without the technologies, without the ability to communicate and have these shared stories circulate vast distances between humans, this was not possible at any time in human history prior to this kind of last couple hundred years. And that's an interesting point too, because he talks about story chains. He kind of says, you know, back in the old days, it was human-to-human chains, meaning word of mouth. Like you said, maybe a small collective city of 1,000 people could become democratic. But how could you get 50 million, 100, 200 million humans to all agree on the same way of running a country? [00:33:29] Speaker A: And to all kind of look at what's happening in the same kind of context. Okay, well, what decisions do we need to make? You know, how could you even decide that without the ability?
[00:33:43] Speaker B: Yeah, the story could no longer be a human-to-human chain. There had to be a way to get this story out, and that, to your point, would be whether you start with the telegraph, then you get to electricity and radio and television. So yeah, it's all fascinating in terms of how we've now blanketed our ability globally to communicate, and the culmination is what we see, like we're talking about now, going into the future. We're here with AI, we're here with algorithms, and we're here competing for truth, right? And this is why Elon Musk bought X versus, you know, someone buying another platform: because they want to control how information is disseminated and how people view truth. [00:34:26] Speaker A: No, I think that's right. And it's fascinating in the sense that he discusses these approaches: as time has gone on, there's been more and more information available. And that's illustrated through, you know, if it's just word of mouth, then you're just limited in how much information you're going to consume and take in. You might go to a town square and hear from one person; other than that, you're just hearing from the people that you're talking to. So how many people can you talk to, and more so, how many people can one person talk to, to get that information out? But then as you go, you introduce printed material that can be shown to a lot of different people, and that starts with things like clay tablets.
But then if you look at the Bible, for example, that can be distributed to millions of people, especially after the printing press, when you can create so many copies, thousands and millions of copies. And then more and more books get added to that, and more and more information can be distributed. And so what do you do with all of that information? That's the question that's trying to be answered here. And so when you look at the forms of organizing society that have evolved more recently, the democratic type or the totalitarian type, what he points out is that these are two different approaches, diametrically opposed, to the question of what to do with all this information just floating around out there. And it's more and more. I mean, now it's not just books, but the ability to do daily papers or weekly papers, pamphlets, and then the telegraph, and then the radio, and all this information floating around more and more. So on the more democratic side, he's like, okay, democratic forms or systems do not assume infallibility of people or of information. They assume that things can be wrong and may need to be corrected. And so they encourage a flow of information. By and large, in theory, you encourage the flow of information and you build in structures that allow for self-correcting. Because if anything can be fallible, if humans or the information that is flowing around can be fallible, then you want the ability to self-correct. But again, that doesn't happen naturally just from more information floating around.
You have to have systems in place that will prioritize truth over what's sensational or what is engaging or anything like that. Versus the totalitarian approach, which was the other answer to this question, which is: we're just going to control all the information. But that assumes infallibility on behalf of the leadership, or of whatever information the leadership is putting in place. And so it's a completely different approach, but all it is is trying to answer how we are going to manage this huge, forever-increasing flow of information. Are we going to build in self-correcting and allow societies to evolve as more information becomes available? If we learn that something we thought was true is inaccurate, or if we see a person make a mistake or do something wrong, are we going to be able to correct for that, or are we just going to keep going down that road? The totalitarian alternative is just like, look, no, whoever's in charge, whatever they say, that's what it is, and it's infallible. And if you look at a spectrum of truth versus order, the totalitarian approach biases everything towards order. We want things to be in order and predictable and everything like that. Whereas the democratic, self-correcting approach prioritizes truth, sometimes at the expense of some level of order. And obviously the totalitarian approach prioritizes order at the expense of truth. So I thought that looking at it on this spectrum was really interesting. But what we'll talk about next is how this sets the stage for when AI comes: there's going to be much more information, and not just curated information, because AI has the capability of creating information. So I'll let you react real quick before we jump to that part.
[00:38:22] Speaker B: No, I think, because what you're getting at with the infallibility, a big part of this section of the book is also him discussing the idea that information was the entire cause of why bureaucracies were created. [00:38:40] Speaker A: Yeah. Why they were needed. Yeah, yeah. [00:38:42] Speaker B: Someone needed to be able to store this information, you know, tax receipts and things like that, for a nation. And so you're right, he says, you know, bureaucracies and myths both sacrifice truth for order. And it's just fascinating to me, because I think some of this is even how we're wired as individuals. This goes back to, like, Star Wars, right? The Rebels versus the Empire. The Empire was the Empire: Emperor Palpatine, Darth Vader. [00:39:08] Speaker A: Very much. They prioritized order. [00:39:10] Speaker B: Yeah, they had order, and everybody looked the same, right? The stormtroopers all had the same white uniforms, all that. And that was order. And some people are more comfortable in that world, because they know they can be plugged in and they know what to expect, both from above them on the food chain and below them on the food chain. Yeah, the Rebels were the ragtag group of misfits, right? You had the aliens, and you had Princess Leia with her hair buns, and Han Solo, and the whole group. And so for some people that feels messy; for other people that feels comfortable, because they don't have to be in some sort of order. And I think that is the wrestling in the modern age, right, between democracies and autocracies. Democracies can feel messy, but they have these self-correcting mechanisms.
Like in the United States, we vote for our leader every four years, and every two years, let's say, for Congress, whereas certain countries have had the same leaders for 20, 25 years. You know, leaders lead for life in certain countries, and both have their pros and cons. [00:40:19] Speaker A: It's because of the self-correcting that it feels messy. You know, those don't operate independently. By taking away self-correcting mechanisms you can make things feel orderly, because whatever world you woke up in today is going to be the same world you wake up in tomorrow and the next day and the next day, even if the world you woke up in is tragic for some reason or another, or we're just doing things in a terrible way. Versus the messiness, which comes from the self-correcting. And I think it's more of a spectrum. People are at various points in their level of discomfort when there is transition, when there is disorder. A lot of people are very averse to change; some people welcome change. And so that's kind of the struggle when we put all of us together in a society, and it's like, well, how are we going to go? And like we were talking about before, the democracies with self-correcting mechanisms — and again, it's not necessarily what you call yourself, it's how the structures in your society are set up. Are they set up to prioritize order and, you know, building a nation, so to speak, in the Ron DeSantis mold, like saying, hey, we're going to take truth out of the history books because we want order? Or are you going to say, hey, we want more information, we want to get closer to the truth and closer to what's actually real, by introducing self-correcting mechanisms?
And that's going to be messier. So all right, all of that to say: the book spends its second half looking forward. We looked back more in our show; we're not going to spend the same amount of time talking about the forward part, but we did want to touch on it, in terms of where the book basically says, okay, and I've sprinkled it in throughout this conversation: AI introduces something new, because AI grows and learns on its own and is then an actor. Think of, say, a Twitter bot. If it's a self-learning bot, a bot that has machine learning code, that bot is programmed with the objective of increasing engagement, and it's going to learn how to do that. Or the algorithm that curates your social media is set up to increase engagement, and it's going to continue to learn, based on what happens, based on its millions of interactions with millions of people, what does best at creating engagement. And it's going to adjust based on that and change without additional input from programmers. What Harari talks about is that these are actual agents within these information networks. They're not just reproducers. A printing press is a reproducer in an information network: a book goes in, and then 30 of those books go back out into the network, the same thing. AI agents, though, are actual agents that can not just curate what's out there; they also can put information out there and so forth. And so this is just revolutionary. He calls it an alien intelligence more than artificial intelligence. What stood out to you in this part?
You know, again, we're not going to spend a ton of time here, because this is admittedly a very speculative part of the book, but what stood out to you as far as the warnings or the concern that Harari expressed, as far as the challenges this will introduce when we have all these additional artificial or alien intelligence agents out there adding to the information network that exists? And again, another thing where we're drastically increasing the amount of information that's out there, and not all of it's coming from humans. [00:44:02] Speaker B: Now, I do think that it is something we should heed, a warning that our society should really look into and take very seriously. This is my concern, as I joke sometimes about our own political leaders: they aren't focused on things like this, it doesn't seem; they're focused on people eating other people's pets and stuff like that. But, you know, this is serious stuff. So for example, we discussed earlier in the show certain things that reminded me of an example he gives about an AI system that is instructed to make paper clips. And that's what it's. [00:44:44] Speaker A: Yeah, that's its main mission. It's a famous story. Yeah. [00:44:47] Speaker B: And it's been programmed to make paper clips — as many as you can, in the most efficient way you can, as a machine. And he says that, in this example, if the machine is strong enough, eventually it will end humanity in order to accomplish its goal of making paper clips, and it may end up destroying the Earth, because it's using all the resources to make all these paper clips. And so the whole point is, that's not necessarily right or wrong; it's just that this computer or program was instructed a certain way.
And, you know, there might have been a blind spot on the part of the programmer, because the machine learning, like we said earlier in this discussion, is not sentient, it's not conscious, but it can learn how to become more efficient. So you'd have to account for every possible way that an AI system could misinterpret what a human's intent was when the human programmed it. And again, that's the whole point of calling something a blind spot. There are probably going to be blind spots as these technologies proliferate that we can't see right now, and that will come back to bite us in some way. And so I think those should be things that we're discussing. [00:46:02] Speaker A: Well, just real quick, and I'll let you keep going, but on that, what he talks about is the alignment problem, so to speak. And the point he makes is that it's functionally impossible, in that scenario, to foresee, if you give a computer an instruction — maximize paperclip production — all of the different ways that, as it continues on its journey of evolution and learning the best ways to do this, through trial and error and all that, things can go wrong from a human standpoint. And so the question, which is still an open question, is whether you can constrain that. Many people think there's no way to just say, hey, do this, but never harm a human being. Well, that adds in this other scenario of, okay, well, is it directly harming human beings or not? And so without the ability to provide proper constraints on whatever instruction you give this self-learning artificial intelligence, we never know what the ultimate result will be. And he talks about the Myanmar disaster, where Facebook said, create engagement.
And the Facebook algorithms learned very quickly that fostering outrage, and ultimately genocide, was the best way to create engagement. So they fostered outrage and genocide. And it's like, well, hold up, that wasn't what the programmers set out for it to do. That's what the machine learned on its own, very quickly, from, again, millions of interactions all the time. And so this alignment problem is one that we've dealt with before; he goes back throughout time in terms of how alignment problems have emerged, oftentimes in war and politics and so forth. But this is something that is inherent to the difficulty of trying to create a being, so to speak, which is what you're doing when you're programming these AIs initially and giving them the capability to learn on their own. [00:47:58] Speaker B: Yeah, and the other thing that I found very interesting in this idea about the future and where things can go is when he talked about the potential for, like, a religion. And this is what I mean when I say this is kind of profound. We as humans, for the first time in the human experience of, you know, millions of years, potentially, are facing the possibility that we could have something other than a human brain creating sources of information and truths for us. So, for example, he gives what I thought was a great example: QAnon. He says, could we see in the future, let's say hundreds of years from now, even thousands of years from now, people worshiping a religion that was totally fabricated by an artificial intelligence? At first, on hearing that, I would have thought, yeah, probably not, that's crazy. But when he mentions QAnon, I'm like, yeah, actually, we've already got an example here. Because my understanding is, you know, this QAnon thing is something that happened online. No one knows who the original Q is.
You know, the person that really started and posted this. And then it's only been. [00:49:13] Speaker A: It could have been an AI bot, for. [00:49:15] Speaker B: That's my point. It could be, right? But just the idea that we just witnessed this, because QAnon's not that old either. It's, what, six, seven years old? It came out 2018, something like that. So the idea is, my understanding is that something like 15 million Americans actually really believe in the QAnon stuff, and then as many as 30 to 40 million kind of tacitly accept it: okay, you know, I could believe some of this stuff. I mean, that's a large number; that's more than 10% of the US population as Q-adjacent, if we can put it that way, in a very short period of time. Because of why? Because of the proliferation of information by algorithms that are machine-intelligent now, that aren't even being programmed anymore by humans. The machine itself has learned, through its own trial and error, how to get the response it wants out of us humans. [00:50:08] Speaker A: Yeah. [00:50:08] Speaker B: And it knows that. [00:50:10] Speaker A: It knows how to trigger the people that it's putting things in front of. You know what it made me think of when you said this? One of the Marvel movies recently, I think it was The Marvels, there was a planet that was ruled by an AI, and, you know, I think one of the Avengers destroyed it or whatever, which caused all these kinds of problems. But yeah, you look into the future and it's like, okay, you can definitely see how machines could learn how to manipulate us, which they do; that's been shown through testing and so forth. But if that happens, then we have the alignment problem: we are the ones that created these things, but we couldn't define the parameters they operate within in a way that will not lead to ruin for us.
We know, looking at the past, how our information networks operate: they're not always going to lead right away to some improvement in life. He uses the example of the printing press and how many people point to it as leading to the scientific revolution. But he's like, well, in the meantime, you had the killing of all these witches that followed the printing press as well. It wasn't until a couple hundred years after that that the actual benefit of the scientific revolution came. But it seems like the introduction of this, and I hate to sound very down about this, but it seems like it's something we're going to have to go through before we can put some constraints on it. I guess the example would be spam. We had to go through the spam downturn before the companies turned around and said, okay, hey, let's program this to get rid of spam, which they were able to do for the most part. But there were a few years, you know, five, ten years, where spam was everywhere in email, all the time. So it seems like we're going to have to go through something in order for society to get a handle on how to address it, and then try to come up with constraints that allow us to operate with all of this additional information, the information not just being curated by humans now, but by all these other things. So ultimately I look at it as more of a warning: be aware. And again, he throws out a lot of things that can happen; it's very speculative, and he acknowledges that he's just guessing. But really the takeaway is that things are gonna get a little hairy here for a little bit. Hopefully we can respond quickly.
If we're aware that these challenges are coming, we can respond quickly. I don't think there's a situation where we're able to just avoid it altogether, because you can't just pull the plug on these things. So I think we're gonna have to go through something. Hopefully we're prepared and able to respond quickly with self-correcting mechanisms, not with the totalitarian approach where we just try to control all of this, because it looks like it's going to be something that gets beyond our control pretty quickly. [00:53:01] Speaker B: Yeah, no, and just to wrap it up, that's the fascinating thing to me about this last roughly 200-year period, with the exponential growth in the ability to share information. Like you're saying, and I hate to use the word weaponization, because it's more negative than what I'm trying to say here, but the 20th century to me kind of represented, like you said, having to go through it: different societies experimented with how you kind of collectively gather information to create a truth for society. Nazism, Stalinism, the American system of kind of more open and messier democracy — all those were tried. And it appears that the one that won the 20th century was the one that allowed the little bit of messiness and self-correctiveness. And I say this from the migration patterns of humans, right? People wanting to come to the U.S., England, Canada, Australia, all the kind of Western-style nations that had self-corrective mechanisms of information sharing, seem to have won, at least through the votes of people migrating. Whereas the Stalinist, Nazi type only benefited a small few in the society, and not that many. So yeah, you're right.
Maybe what we're going to experience in the 21st century is that we've got to go through another churning, which hopefully is not too painful for us. But, you know, humans figuring out, okay, now how are we going to deal with this information, the exponential, lightning-speed way it can travel, and the fact that machines are now adding their own scripts to this. It's really uncharted water. So I guess let's stay tuned. [00:54:55] Speaker A: Hey man, no plans to the contrary. No plans to the contrary. But yeah, the warning is, you know, okay, well, it's coming, and so what can we do about it? And we do have lessons from the past. So, as we talked about earlier, hopefully you can learn from those, and not necessarily avoid any struggle, but once the struggle hits, be able to come up with solutions that we can use to put us on good ground. Because if you look at where the 20th century started versus where it ended, I mean, there was a lot of progress made, even in terms of avoiding large-scale war, in terms of creating a world that, while it wasn't perfect, you know, the economic exploitation and all that stuff was still there, still changed: if you look at what happened in the first 50 years of the 20th century, the way the last 50 years unfolded was not the same. It was not filled with the war and the death and so forth. And if you talked to somebody in the first half of the 20th century, they would never believe that the world would have been able to control nuclear arsenals as well as it did and not destroy the whole world that way. So I hope that when we look back at the 21st century, we can say, okay, well, yeah, great, I can't believe the world figured out how to not make the climate inhospitable, and I can't believe the world figured out all this other stuff.
But, you know, there's gonna be a process of going through that, and that's what we get to live through. But ultimately it was a good book. We both enjoyed it, so check it out if you have the time. And at the same time, we appreciate you for joining us on this episode of Call It Like I See It. Subscribe to the podcast, rate it, review it, tell us what you think, send it to a friend. Until next time, I'm James Keys. [00:56:33] Speaker B: I am Tunde Ogunlana. [00:56:35] Speaker A: All right, we'll talk to you next time.
