Episode Transcript
[00:00:14] Speaker B: Hello, welcome to the Call It Like I See it podcast. I'm James Keys, and in this episode of Call It Like I See it, we're going to consider whether Google has developed a sentient AI and discuss how the increasingly advancing artificial intelligence in our technology may be something that can make our lives much easier, but also something that could fundamentally change our lives and our societies. And later on, we're going to take a look at some recent scientific breakthroughs which have some scientists believing they're on the verge of being able to literally reset the age of ourselves to a younger age.
Joining me today is a man who's so skilled as a wealth advisor, his business cards just read, "I get the bag": Tunde Ogunlana. Tunde, are you ready to share your wealth of knowledge?
[00:01:06] Speaker C: Yeah, if compliance will let me. I gotta deal with FINRA and the SEC, so I gotta be careful how you introduce me here.
[00:01:13] Speaker B: Yeah, yeah.
No claims we can't back up, huh?
[00:01:16] Speaker C: Yeah, well, all right.
[00:01:19] Speaker B: Well, also joining me today we have a special guest. He's an esteemed attorney, author, and overall renaissance man, Rob Buchel. Rob, are you ready to sign, seal and deliver us some great takes today?
[00:01:34] Speaker A: Absolutely. Good to be here. Good. Good day.
[00:01:37] Speaker B: All right. All right. Now, we're recording this on July 2, 2022, and a few weeks back, we saw reports and discussions all over the place, all coming from a claim by a Google engineer, Blake Lemoine, that Google's artificially intelligent chatbot generator, LaMDA, was sentient.
Now, whether this is true or not, this kind of claim stands out, not just because it's the premise of many of our sci-fi nightmares, but because this is not some random guy. This is a trained software engineer. So the fact that he could even think something like this and say it out loud is at minimum notable and something we should pay attention to. Now, Google suspended him, but primarily on the grounds that he was violating their confidentiality policies. But the way technology is going, this discussion is really just starting. That's not going to be the end of it. So, Tunde, what was your reaction to seeing these claims made by this Google engineer who believes that Google has an AI chatbot generator that's sentient, based on his interactions with it?
[00:02:44] Speaker C: Yeah, it was interesting. I was intrigued.
Thoughts of every bad movie we ever saw, from The Matrix to The Terminator, came to mind. Of course, they're great movies, but I mean, the bad idea of technology taking over us humans came to mind on its own. And it's interesting because, like you say about Google suspending him, I believe they could have legitimately suspended him for a confidentiality breach, because he's an engineer working on sensitive stuff.
[00:03:14] Speaker A: But.
[00:03:15] Speaker C: But it lends to kind of the idea of almost like conspiracy, like, oh, what are they hiding?
They slapped this guy on the wrist because he told the truth. And so it's very interesting. For me, it was like, okay, another reminder that we're getting close to that kind of singularity from Ray Kurzweil, when technology may overtake humans. Are we getting close to that? That's what it made me think of.
[00:03:48] Speaker B: So it's like the first step that everybody who's alive remembers. Like, oh, remember that? We heard about this first, then. Rob, what was your thought on this?
[00:03:58] Speaker A: I was amazed by the number of pronunciations of the word sentient first.
Then I thought about.
I thought about Alan Turing and a book that I read a while ago called The Most Human Human by Brian Christian. They have these contests every year where the software has to fool a blind party on the other side into thinking they're talking to a human, or the party has to figure out whether it's a human they're talking to or a computer. There are contests about that, and a whole philosophy behind it. But originally this whole thing made me think about a program called ELIZA, which was supposed to give therapy through software. You could just type in, "I am sad," and the computer would say, "Why are you sad?" And then you'd give an answer, and so on. I don't know if that proves that therapy is something rote and programmable, or whether we really are getting to the point Isaac Asimov laid out decades ago, where the computers and the robots are going to run a bunch of stuff and humans are going to become more and more irrelevant, or rise to the level of just, you know, being catered to.
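The ELIZA-style exchange Rob describes can be sketched as a small table of pattern rules with canned reflections, with no understanding behind it. This is a minimal illustrative sketch; the original 1966 ELIZA used a much richer script of transformations.

```python
import re

# Each rule pairs a pattern with a reflection template. Matching is all
# there is: the program has no model of sadness, or of anything else.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why are you {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the matched fragment back as a question.
            return template.format(match.group(1).rstrip(". "))
    return "Please go on."  # stock reply when no rule matches

print(respond("I am sad"))  # reflects the input back as a question
```

That the reader feels "heard" by a lookup table is the point of the anecdote: the sense of communication comes from us, not from the program.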
[00:05:33] Speaker B: Yeah, yeah. One or the other. Either we become pampered or we become dispensable.
[00:05:41] Speaker C: Well, you know, it's interesting, though, because I think they discussed this about the engineer and the bot he was talking to. And it's kind of like what you're talking about with this therapy thing, right, Rob? It's almost like, are we deluding ourselves sometimes? Is it that the software has any type of intelligence, or is it the fact that it's communicating with us? It doesn't have to be sentient, but the fact that it's responding. Like you said, Rob, if you say, "I'm sad," and the computer is already programmed to say, "Why are you sad?", we feel like we're communicating with something, even though we're actually still communicating with something inanimate, in the sense of not alive.
[00:06:27] Speaker B: Well, that was actually my reaction to seeing this. You know, I talk about this sometimes: we project structure, we project design. We look at the sky, we see stars, but we see constellations. We look at clouds and we see organization. I think our minds are automatically wired to do that.
[00:06:49] Speaker C: Some people see Jesus in a piece of toast. Remember that?
[00:06:52] Speaker B: Or a lot of things.
[00:06:52] Speaker C: Yeah.
[00:06:53] Speaker B: In their coffee, you know, a lot of things. So to me, my reaction was that the who mattered. You can look more closely at the engineer particularly and say, okay, based on his background, this is understandable, or something like that. But this wasn't just someone sitting at their computer half on Facebook with partial attention; it's somebody whose job it is to interact with this thing, and he's asking these questions. But I think the one thing we have to be clear on is the difference between artificial intelligence and sentience. In terms of machines making decisions based on certain stimuli, we have that now, in all types of devices that we use. You have your phone with location on, you drive somewhere, and your phone has remembered that you drove there before, so it'll prompt you with something. It's taking inputs and making decisions based on those inputs and on other information it's tracking, and so forth. That type of artificial intelligence, so to speak, is here. Sentience is a much, much higher form. That's when we're talking about humans with emotions and things like that.
[00:08:03] Speaker A: Consciousness.
[00:08:04] Speaker B: Yeah, consciousness, exactly. Beyond just, I can take a bunch of information, process it, and then make a decision without you having to predetermine that decision for the machine. If you go back to more rudimentary machines, they're just automation: I pull a lever and a stamp comes down. There's no decision-making there; there's a trigger and then an action that happens. But taking a bunch of information and then making a decision based on how that information is weighed, that's here already. So how much further sentience is than the type of artificial intelligence we interact with and utilize every day is what really stood out to me here. But again, that goes back to the idea that this guy might have been projecting. The way he's asking questions and processing the answers, he's seeing that organization, and then, oh, this thing is talking about its feelings. It's questioning its life and its being: if you unplug me, then I'll be dead, and stuff like that. Some of that, I think, has to be a projection. But it's still remarkable that it can get close enough for him to project that, so to speak.
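The kind of input-driven decision-making James describes, a phone noticing you've driven somewhere before and prompting you, can be sketched as simple frequency counting over a trip log. This is a hypothetical sketch with made-up data shapes; real assistants are far more elaborate, but the principle of inputs in, decision out is the same.

```python
from collections import Counter

# Hypothetical trip log: (hour_of_day, destination) pairs the device has
# recorded. Suggesting a destination is just counting past inputs; there
# is no understanding involved, let alone sentience.
def suggest_destination(trip_log, hour):
    same_hour = [dest for h, dest in trip_log if h == hour]
    if not same_hour:
        return None  # no history at this hour, so no prompt
    # The most frequent destination at this hour wins.
    return Counter(same_hour).most_common(1)[0][0]

log = [(8, "gym"), (8, "gym"), (8, "office"), (18, "home")]
print(suggest_destination(log, 8))   # the 8am habit
print(suggest_destination(log, 12))  # nothing to suggest
```

Everything the "decision" rests on was predetermined by whoever wrote the counting rule, which is the distinction James draws between this and sentience.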
[00:09:19] Speaker C: Yeah, I think we've seen it in some movies already. Everyone knows about Star Wars and characters like C-3PO and R2-D2; one could argue whether they're sentient or not, but they definitely interact with humans in the stories in a way that makes you feel like there's something in there that's more than just a robot, so to speak. And then I'm thinking, if you guys have seen the film Interstellar with Matthew McConaughey, he had that machine called TARS. There was a time when he was kind of working it a bit, and he said, humor level at 80%, and the thing started making jokes. And he goes, well, let's take that down to 60%. You know what I mean? I thought that was kind of cool, because that's probably how it could develop. I didn't think that TARS was sentient, but it was just such an advanced version of AI that you could do that, like manipulate its humor level, and the thing was already programmed on how to be funny or not funny. So I think we're probably around that stage already, you know, with the high level of.
Well, to be fair, engineering out there.
[00:10:25] Speaker B: We're not at the Star Wars level, like, I think C-3PO, but think about this.
[00:10:30] Speaker A: Isaac Asimov, in I, Robot, has a story in which parents were very concerned. They bought a robot for their daughter, and they felt she was starting to become too attached to it, that she wasn't socializing with other kids her own age or getting involved in sports. All she did was hang out with the robot. So they took the robot away and said that the robot ran away. And instead of time passing and the kid getting over it, no, she went on a full-on search to find her lost robot.
[00:11:08] Speaker C: You know what that is? That's like a phone today with Instagram on it.
Right? That kind of came true.
[00:11:17] Speaker A: So the reverse is, what does it mean to be human when all you're doing is interacting with screens all day long? And if I can, I have a book open here. There's a notable author, David Foster Wallace, who sadly committed suicide, driven by his demons. He wrote this way before Facebook. But he writes that today's person spends more time in front of screens, in fluorescent-lit rooms and cubicles, on the other end of an electronic data transfer. And what is it to be human and alive and exercise your humanity in that kind of exchange?
[00:12:02] Speaker B: Interesting. Yeah. I mean, you ask those fundamental questions, but if we take a step back and look outside of our current context, it's not unheard of. If you look back 50 years, 100 years, 1,000 years, people have had affection for objects. If you go to kids of a certain age, their pet rock: I've got to take this rock everywhere, and if they lose the rock, they're going to cry for a little bit, and so forth. So that aspect alone isn't really where you pull the distinction, because that's part of being human: to find meaning and depth in places where it may not be immediately observable, or may not be observable to everyone, but it's observable to a person here or a person there. The difference would be that a rock is not impersonating anything, not pretending to be something. If someone just walks by observing, the rock doesn't answer; the rock doesn't say, oh, I'm feeling good today, how are you? It's all in the person who's interacting with the rock. It's all in their imagination.
[00:13:15] Speaker A: But what happens when it gets better? What happens, like in the movie A.I., where the kid is screaming emotionally and sending all the human triggers, please don't have me killed, when everybody knows the kid's a robot?
[00:13:32] Speaker C: Yeah, but there have been studies on that, that I've seen in various documentaries, about how, if a robot looks more human, humans have a harder time doing it harm. Take R2-D2 and C-3PO: C-3PO is a little more humanoid-looking, so if you ripped his arms out of their sockets, we'd feel it a little more emotionally than if you took one of R2-D2's legs off, because he doesn't look as much like a human. The research they found kind of showed that. And I think it goes back to your point, James, about projecting: in something that looks more human, more alive, we project that it is alive, even though intellectually we may understand that it's not. And I think it goes back to things we talk about with irrationality, where there are certain things that people accept.
Even things like an election might have been fair, but they may still believe somehow that something went wrong somewhere. Right? There's always that piece of the mind that still wants to hold onto the initial belief. No matter how much the intellect may be convinced that, okay, maybe this is a certain way, there's still the emotional pull. The elephant and the rider.
[00:14:47] Speaker B: It's that hope. Yeah. In many ways, hope can be a great power that humans have. Hope against all odds: oh, we'll make it out of here, we can keep our calm. Even when you're falling off a cliff, humans oftentimes can maintain that.
[00:15:04] Speaker C: You know, maybe that's what makes. Maybe that's true sentience. You know, like the ability to be totally irrational makes. You know, like until computers do some really off the wall stuff without being provoked, then we know that they're not.
[00:15:18] Speaker B: It's to be irrational while believing that you're rational.
[00:15:21] Speaker C: Yeah.
[00:15:21] Speaker B: Like, the self-deception is our greatest power.
[00:15:25] Speaker C: You know what I was thinking in this conversation, Just as a quick joke, I was like, what if. What if Q from QAnon was the ultimate like test of AI? What if it was actually just a bot sitting there spitting out stuff, but.
[00:15:38] Speaker B: It's going to be an experiment to.
[00:15:39] Speaker C: See how many humans can we actually get to do something based on a bot.
[00:15:43] Speaker B: That leads me to my next question, and I want to kick this one to you, Rob, first. What are your thoughts on how advanced AI has become? Again, forget about sentience. The sentience piece is kind of like, okay, we can go to the moon; sentience, then, is like going to Alpha Centauri. It's a much different scale. But in general, AI has become pretty advanced. There have been companies founded on the concept of, hey, we're working toward, and we're testing, cars that drive themselves. There's a lot of AI powering that, to be able to make all the decisions that have to be made on the road and so forth. So just generally, what are your thoughts on how advanced AI has become? Are we already seeing big stuff that we just don't fully appreciate because it's incremental, or are we on the verge of something big? But in general, what do you think?
[00:16:39] Speaker A: You must first define what we're talking about when you say artificial intelligence, AI, because it's like an abused marketing term now. When a computer does something, it's AI, and it becomes that thing, like "organic": oh, this is the organic of computers, so it means whatever I think it might mean. Oh, this is AI, algorithmic. You can invest for me with the AI, and we can all figure out how to be a billionaire fast with the AI. But AI truly means: can the function of a computer convince you it's human, convince you that it is sentient?
But it is misused, particularly the idea that artificial intelligence is a self-learning machine, that it's autodidactic. That's what you would think: oh, with these self-driving cars, that's AI, and it should self-learn. But that's the wall we're going to hit in the technology.
[00:17:58] Speaker B: And there's a fundamental difference there in terms of the scale of what you're talking about, auto-learning versus, again, all of the different permutations. Even when you have the various types of neural networks that are built in, and the ability to incorporate new data points, it's still, you know, a different world, so to speak.
[00:18:19] Speaker A: Do we want our self driving cars to self learn?
I think no. And I'll tell you why. Because there are going to be some malicious people out there who will teach your self-driving car that the stop sign means go. And now we've got problems.
[00:18:37] Speaker B: Or that the yellow light means accelerate. Just common things like that, you know.
[00:18:43] Speaker C: Yeah, you know, it's another film. Did you guys ever see the movie Chappie?
[00:18:47] Speaker A: I have, yeah.
[00:18:48] Speaker C: It's a very good one.
[00:18:49] Speaker B: Please, please continue.
[00:18:51] Speaker C: No, it's just, I think it's on Netflix now, and definitely on Amazon, you can pick it up.
Hugh Jackman was in it, made by the same guy who made District 9, so it's one of those kind of dystopian films. It's about a robot in a factory, probably a little bit ahead in the future. And it's exactly what you mentioned, Rob. One of the engineers takes the robot home to work on it, and somehow it ends up in the hands of criminals. They teach it how to rob banks and corner stores and all that. So you've got this robot walking around town that's super strong, everything you'd think of, but it's been taught to be bad instead of good, and it follows through. It's actually a very interesting concept.
[00:19:39] Speaker B: That's the duality of humans, you know. It's one of those things where, a lot of times, well, it depends: if you're in nightmare mode, you think of the negative; if you're in "oh, this could be so helpful" mode, you think of the positive. But it can go either way. But in terms of looking at an application, or how this thing could potentially affect us: Rob, you recently released a book called God's Ponzi that deals heavily in AI. Would you tell us a little bit about it?
[00:20:10] Speaker A: Well, God's Ponzi. I don't want you to think that God's running a Ponzi scheme. But artificial intelligence, if we get it to the level where it doesn't have to eat, doesn't have to sleep, doesn't have to drink, can stay up 24 hours a day, and we actually tasked the AI to manage a Ponzi scheme, we would call that Ponzi scheme God's Ponzi. That's a nickname in the software community, that AI is a god, that it doesn't need humans to keep it going. And this is to Tunde's point, that we could start tasking this artificial intelligence to do bad things. In the book, a group of friends from MIT get together and decide they are going to create and manage a Ponzi scheme in order to get revenge on behalf of one of their friends, who killed himself because he was being tortured in the legal system. A bunch of evil lawyers tortured him for years, and he finally kills himself. But he downloads his brain and his personality into an artificial intelligence. And the main character decides he is going to get revenge against these lawyers by creating his own Ponzi scheme. The problem is, he wasn't trying to make money from the Ponzi scheme. He really wanted to lure these lawyers into the scheme and then have them criminally prosecuted for participating in it. And so it became even more tricky, because he.
You got to make a lot more money. So a Ponzi scheme, you got to.
[00:22:14] Speaker B: Bring more people in. Yeah, yeah.
[00:22:15] Speaker A: So a Ponzi scheme is basically using new investors to pay off old investors. It's not a pyramid scheme, where you're kicking money upstairs and you want to be one of the first founders, and it continues down from there. Anything could be a Ponzi scheme. In the book, the Ponzi scheme is an investment bank, and the AI is very good at balancing the books, the different types of books: the real books, the books they show to everybody else, the books he shows to regulators. It can answer everybody's email. It's the ultimate assistant to the ultimate manager of the Ponzi scheme. And there is a controversy among the crew of MIT friends, because the main character believes he really is talking to his friend, that his friend is still alive as an artificial intelligence with his friend's personality. And the rest of the team think he's crazy, that he's lost his mind, because they don't think the artificial intelligence is that advanced; it's just a computer posing as his friend. And of course, there's a black swan. I'm sure Tunde knows that every so often the market just does something that.
That is against the statistics. It's the one in a thousand, but the one in a thousand happens. And when the economy does that downturn, we always say it's like the tide: whenever it goes out, we find out who's not wearing swim trunks, who's swimming naked. Everybody wants their money back, and reality kicks in: you're not supposed to get 300% on your money; I just want my money back.
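Rob's description of the mechanics, new deposits covering the returns promised to earlier investors until the tide goes out, can be put into a toy model. The numbers and function names below are purely illustrative assumptions, nothing from the book.

```python
# Toy cash-flow model of a Ponzi scheme: promised "returns" are paid out of
# new deposits, so the scheme survives only while inflows keep growing.
def simulate_ponzi(deposits, promised_rate):
    """Return the period at which cash runs out, or None if it never does.

    deposits: new money taken in each period
    promised_rate: per-period return promised on every dollar invested
    """
    cash = 0.0
    invested = 0.0
    for period, inflow in enumerate(deposits):
        cash += inflow
        invested += inflow
        cash -= invested * promised_rate  # pay old investors their "returns"
        if cash < 0:
            return period  # the tide went out
    return None

# Growing inflows keep it afloat; flat inflows eventually can't cover payouts.
print(simulate_ponzi([100, 200, 400, 800, 1600], 0.30))  # stays solvent here
print(simulate_ponzi([100] * 8, 0.30))                   # collapses
```

The flat-inflow case is the black swan in miniature: obligations grow with every deposit while inflows don't, so the shortfall is only a matter of time.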
And that's what happens. Then everybody's after the main character, Gregory Portent, and he has to make one final bet. It's a complete shift from what they were working on, in order to try to solve the problem and be victorious. And he's also trying to manage personalities and relationships, because basically the whole function of what Gregory Portent was doing, why he was running this Ponzi scheme, was almost therapy, trying to get over the suicide of his friend. And it gets complicated.
[00:24:52] Speaker B: Hey, that's. That's always good. That's what we want. I got it right here.
[00:24:55] Speaker C: There it is.
[00:24:57] Speaker B: So, yeah. Well, this is not a visual medium, but nonetheless, take my word for it, he held it up.
Exactly. But no, that's fascinating. And looking at that in the AI sense, that's how we look at what computers are capable of, or could be capable of, and this fuzzy line. Like you said, even in the book, the characters are dealing with: what exactly are we seeing? That's the same issue this Google employee, the Google engineer, is dealing with. He's saying what he's seeing is one thing; Google is saying it's something else. There are other people who weigh in and say, hey, whether he's right or not, we're close. And to me, as far as how advanced AI has come, I think we're still really in the nascent stage. My background was computer science, back in my pre-attorney days, so I saw where we were then, and I see where we are now, and the rate of growth in both the hardware and the software capabilities keeps accelerating. It's really something. Tunde, you and I have talked about this before. If you lived a thousand years ago, your life, the life of your grandfather, and the life of your grandson would all be pretty similar.
Nowadays that's not the case. My life, compared to the life of my grandfather, in terms of the technological environment and everything going on around me, is very different. And my grandchild's will be completely different again. There will be a few through lines: I would imagine we're still going to eat food and go to the bathroom and things like that, and I hope we still have humor in our interactions on a regular basis, but you don't really know. It's going to be very different, it has been very different, and things are getting more different fast. Ultimately, wherever it's going, if we looked into the future, our jaws would be on the floor. We can save our value judgments for later, but all I know is that it's going to keep getting more and more different. And then our humanity is going to take over, and we're going to project onto it what we want, or it's going to change us. So it's either going to change us or we're going to change it.
[00:27:24] Speaker C: All right, so we got to go back to the Stone Ages. That's what we need to do.
[00:27:29] Speaker A: Can I read a quote from a controversial character I think is important?
[00:27:32] Speaker B: Please, please.
[00:27:33] Speaker A: And you can guess who said it. I believe that robotics can inspire young people to pursue science and engineering. And I also want to keep an eye on those robots in case they try anything.
[00:27:48] Speaker B: Hey, that's good. Well, remember, I spoke to you guys offline about this, and I'll bring it up now. Ultimately, AI is another tool in humanity's quiver. We initially used our hands, then we started using rocks and sticks. We initially were hammering nails by hand, and then we got automation to do that kind of stuff. Now, generally speaking, those have been mechanical improvements: we've improved our ability to move things, mix things, combine things, put things together. This is different. This is decision-making. But it's still, in that sense, humans creating tools to do things.
[00:28:29] Speaker A: And humans are good with tools, and we're really bad without tools. So the ultimate question is: what decisions are we going to make when artificial intelligence gets us to a level where we can grow our own food, and food will be cheap, and we don't have to work 40 hours a week, and we don't have to have work be the central component of our identity and our lives? What do we do with that extra time?
[00:29:00] Speaker C: On a serious note, I think what modern society shows is we'll end up fighting, right? Yeah.
[00:29:06] Speaker B: Well, that's how we'll spend the extra time.
[00:29:08] Speaker C: Facebook was developed with good intentions, to share pictures of grandkids and cats and to keep in touch with your high school buddies, you know, and look at what we turned that into, that public square, so to speak.
But one thing I think is really interesting, from the standpoint of this conversation, is the progress, like you guys are saying. I'm thinking about 100 years ago, which would have been the year 1922. We've all seen the old black-and-white footage of what used to be a woman's job, being a phone operator, remember? You would pick up one of those old telephones and talk to the operator and say, I want to talk to so-and-so in Michigan, in this county, and they would take a little plug and plug it into something else. Now, 100 years later, we have an iPhone in our pockets that's like a Swiss Army knife of stuff, right? It's got a phone, it's got a video screen, you can communicate in various ways: text, email, all that. So that's one example, because that was a whole bunch of jobs that got taken out when the technology got better. And we could find thousands of examples where, as technologies improved, they took away the need to have a human doing something, which leaves fewer jobs or things for people to do. But James, you and I did a show several months ago where we quoted John Maynard Keynes, remember? His concern in the 1930s, as an economist, was that productivity was escalating at such a fast pace that within a generation or two there might not be anything for human beings to do.
And remember, we were talking pre-pandemic, pre-2020. Everybody was talking about how stressed out everybody was, working 80 hours a week and all that, before we all started doing Zoom and working at home and all this. So my point is that it goes back to what you're saying, James, in the show today: we project a lot of things, a lot of fears. And I just thought of John Maynard Keynes as you were talking, because here's a guy who was around 90 years ago saying everything is so advanced that we're going to have nothing to do in a generation or two. And here we are, almost 100 years later, and we're still all stressed out with work and everything else. So I feel like we're just going to pile on more stuff.
[00:31:32] Speaker B: Exactly.
[00:31:32] Speaker C: More stress.
[00:31:33] Speaker B: It's part of the human condition. I don't think we can escape it, because doing things is so ingrained in us, and trying to find meaning in what we're doing even more so. Even if what we're doing doesn't have the same meaning as, hey, I've got to farm, and if I don't farm, I'm not going to eat. That's a direct correlation between your action and your immediate need, like going down to the well to get water: if I don't, we don't drink. We make those connections no matter what. Whatever we do, we project onto it, or we find some meaning in it, that feels vital to our existence. That's the arguments you see on Facebook: people take issues that may be relatively remote to them and fight for them as if, should they lose, they won't have any water today. So I think part of it is our humanity. And really, what Keynes talked about, and Rob, what you mentioned, as far as a world where we can grow our own food and food is in abundance: one thing we have to keep in mind is that Keynes probably would have been right if the spoils of all the improvements were distributed in a way where everybody got at least some access. I'm not arguing they must be shared equally, but if they were spread around, we probably would be in a world where people only need to work 20 or 30 hours a week. But that's not how it works. Humans, you know, we dig opulence. There may be a ton of food that we're creating, but we're not saying, hey, let's make sure everybody gets a little bit. We're saying, hey, the people that can make it happen, you guys can have it all, and everybody else.
[00:33:16] Speaker C: That's what I was going to say too, because remember Nikola Tesla, who, you know, was one of the early — I don't want to say inventors — discoverers of the use of electricity for certain things. He was altruistic. He wanted to give it away for free. And remember we did the thing about solar power, and the first solar power stuff was created in the 1890s. But what happened is Edison was more of a, you know, he wanted to make money on it, so he hooked up with J.P. Morgan. And none of us really ever heard the name Tesla until Elon Musk named a car after it, 100 years later.
[00:33:51] Speaker A: He needed a better IP lawyer. Right.
[00:33:53] Speaker C: Yeah, exactly. So my point is that I think a lot of this stuff, from an idealism, altruistic standpoint, kind of was already there. It's just the human, like you're saying, James — we also have things like hierarchy in our setup, in our DNA, you know. We also have culture. And I was thinking, as you're talking, that the whole of Western civilization, our country included, is kind of built on the back of the British Victorian-age culture of work. If you look at a group like the Puritans, you know, the Christian group early in the founding of this country, their actual religious belief was tied to work. So it's interesting.
[00:34:35] Speaker B: I want to get Rob in too. But the thing is, I'm saying that that's humanity, is that our culture don't...
[00:34:42] Speaker C: ...have that, like, in their DNA. Like, they've worked, you know, not to...
[00:34:46] Speaker B: ...that extent, but some kind of meaningful work, I think, is tied in. Whether or not it's all-consuming, something to do that's meaningful is built in, you know. And then again, we're talking about whether or not that has to be all day, every day.
[00:34:59] Speaker C: Yeah.
[00:35:00] Speaker A: So it's even beyond becoming a trust fund baby or having so much money that you don't need to worry about money. The real next level is: right now we don't even need to go to a supermarket to get the food. They'll bring it to you if you order it. You need that one little plug? You can go click, click, click, they'll bring it to you, and bring it to you pretty quickly. The question is, what is the meaning of being human when the AI gets to a certain point that it's so accommodating? Hey, you know, James, last year you went skiing; at this point you started booking. Do you want me to book the same ski vacation with your family at the same place?
[00:35:45] Speaker B: I took the liberty of booking this for you again. You seemed to enjoy it, because your location data said you went here, there, there and there. You need to do it again. I've checked your blood pressure; your blood pressure seems to be rising, so we think we're due for a vacation. Here's where you like to go in August. Oh no, actually, it's March now — we see your blood pressure's rising, so we know you like to go here in March. So yeah, that's why I think if it becomes self-learning, then — and you had said this before, Rob — that's when it's like, okay, well, now we need to worry, because there's a lot to learn from humans, and the duality of humans means a lot of that is not good stuff.
[00:36:25] Speaker A: But it could be dangerous, because the computer is infinitely patient. The computer may ask you all sorts of questions that you answer, and explain why this March you're not going to travel to where you went last March. And you need to understand: what other person you're in a relationship with, what spouse, could withstand the level of questioning that you might be willing to give to a computer?
[00:36:54] Speaker B: Yeah, well, so I'll ask, and let's do this one briefly. With how reliant we are on our machines, do you think the direction we're going — while it may appear inevitable no matter what — do you think it's a good direction for society? Are you excited about the possibilities here, even understanding there are some downsides? Or are you more concerned or worried than excited? Either one of you guys, go ahead.
[00:37:20] Speaker A: I'm personally optimistic about it. I think there are always going to be growing pains, and we'll make fun of our friends: oh, that one never makes a decision, she just lets the AI do everything and she floats through life, or he floats through life. And there'll be some growing pains, like you said with social media, you know, where we get to the point that in our distance and anonymity we can be nasty to each other. But at some point it's gonna self-regulate, because nobody likes the nasty person 24/7. And so yes, there'll be a reaction the other way, where we start, you know — we call them cancel people — and we put them in timeout for a while, and then they need to self-reflect. And I think that is the human experience, which is, yes, we can say, I'm kind because I've thought myself that being kind is the best way to go.
But we're conditioned by the reaction of others over time to have our personality, to make decisions like, hey, maybe I shouldn't be like that 24/7, no one wants to hang out with me. But every once in a while you...
[00:38:44] Speaker B: Just find an AI that's cool with it.
[00:38:47] Speaker A: You ever have someone say, don't talk to her like that, about the various things you're yelling? Hey, Google — don't talk to Google like that.
[00:39:02] Speaker B: Well, let me jump in real quick, because I want to build off of that. I think you called it out right there: there is cause for concern and there is cause for excitement, because basically, like with all of these innovations, there is the step forward and then there is the growth. And I don't think we can get to the growth before we have the step forward, you know. With World War I, there was a step forward, relatively speaking — meaning in the warring context, in terms of how efficiently you could kill people. They got really good at that, and there were chemical gases and all that stuff. And then there was a kind of learning from that, so to speak: hey, maybe we shouldn't do war like this, you know, like this is crazy. And so it almost is inevitable that there are going to be, as you call them, growing pains, or, as you put it, the action and the reaction. And so I think the human spirit will prevail — the one that has generally had us on an arc to more inclusiveness, more acceptance and so forth. But it's not going to be something that magically appears, and it's not something that can appear unless we see the downside first. And so as long as the downside isn't something that we can't get out of — and I'm as optimistic as you; I hope that whatever negative stuff we're going to get from this is not something that can't be recovered from — then there will be growth, there will be learning, and society will find a new normal. And then it'll be on to some next thing that we have to worry about: oh, this is the next thing on the horizon that we're gonna have to learn to deal with.
[00:40:37] Speaker C: You guys are too optimistic. So let me come.
[00:40:39] Speaker B: I wanted you to be able to finish this, because I knew you were coming with the pessimism.
[00:40:43] Speaker C: Let me balance this out, you guys. You know, geez, audience, I gotta deal with all these optimistic, glass-half-full guys. Let me go break the glass and shatter it.
Let all the contents spill all over the floor.
Now, it's interesting — I'm full of crap today in terms of media I've consumed. This one's a video game that you guys can all look up. The video game is called Detroit; I played it on PS5. And it's very interesting, because it's a very well-made game, actually.
And it has very good graphics. It's set in the year 2055 or something.
And basically there are AI, like robots that look like people, right? But they're at the point where they're kind of getting sentient. And what's interesting is it's so believable, because think about it: it's kind of like the movie Artificial Intelligence — like you said, Robert, about the boy crying — where they look real. They just look like people, but if you were to cut their arm or something, they don't bleed; they are still robots. So what happens is, not everyone, but most people have one of these artificial-intelligence, human-looking things in their home, just like we might have an iRobot vacuum cleaner today, or like we have a dog or a cat — that's how common they are. And you know what people do in the video game? It's interesting, because the premise of the game is these robots rise up and they ask for their own rights, like they want freedom and all that. Because what's happening is people are just treating them like crap. People are having them in their house, they're putting cigarettes out on them, they're beating them, they're taking out all their aggression on them. And I thought, this is interesting, because this is how humans behave.
But we talked about it, James, on a serious note, and I've thought about this. I was just reading something about the Spanish Inquisition, and I thought about it: in Europe, it was Jews and gypsies that were the ones the rest of society took their anger out on, and everybody else was able to coexist with each other as long as you had that bottom group there. It's that "I'm not them." Yeah, in the United States, for a long time it was African Americans. Right? Black people were at the bottom of the barrel, and you had segregation, and every other group came in and jockeyed for position, but at least everyone could look down on that group. And, I mean, that's been...
[00:43:11] Speaker B: ...studied, you know. Like, that's been studied — Du Bois, the psychic wage and all that.
[00:43:15] Speaker C: Yeah, yeah. But every society has it. They have it in Asian countries; I lived in Australia, and they have it there. So what I'm saying is, if it were to be like that, that might, from one angle, be something that helps humanity, because now we all collectively have a group of non-living humans that we can just beat on. So that's one thing. And then the other thing I was thinking — because I remember my mom used to say this when I was young — if everything goes to hell in a handbasket in our society, the only humans on the earth that will actually be okay and know how to deal with it will be the people that are still living like hunter-gatherers. And so my other thought is, the more we get attached to needing this, and the more we have generations, like our kids, growing up immersed in this — if something were to happen, like an electromagnetic pulse the sun decides to send out that ends all electronics, all microchips, or, like what's going on in Russia and Ukraine now, we have a real war and somebody really uses cyber attacks to shut stuff down — we won't know how to act. That's my concern as Glass Half Empty Guy: something negative could happen quickly that takes it all away from us, and we literally won't know how to act. We won't know how to hunt and get our food. It'll be a period of chaos for humanity before it all calms down, for sure.
[00:44:40] Speaker B: I mean, well, I think, though, from that standpoint, that is where, ideally, society will learn. Not that something Mad Max-like...
[00:44:48] Speaker C: ...a bad movie like that...
[00:44:50] Speaker B: ...something is going to have to happen for people to learn from. And my hope is that it won't have to be that bad. Like, I'm hoping that science fiction won't come true in that sense.
[00:44:57] Speaker C: So I'll have a spot for you when it does. I'll remember that you were the optimistic one.
[00:45:01] Speaker B: Hey, I mean, that's the human superpower, man: delusion. That's how we can live, because there are a lot of things to worry about and you can't focus on it all the time. You can look at things optimistically, and that helps us in some ways; in other ways it may prevent us from seeing reality. But I do want to move on. I mean, that topic is one you can go on about forever, because we've all consumed science fiction in the fiction form, you know, and Rob, as an author, has contributed to that body of work. It's interesting stuff. But the other topic we want to do today is very science-fictiony as well, and I guess it's technology as well. There have been scientists who have successfully, in mice, been able to, quote unquote, reset the age of cells in a particular organ, for example. Initially they were able to reset the age of the cells all the way back to stem cells. But a stem cell is not differentiated; it doesn't know if it's supposed to be a heart cell or a blood cell or whatever. So they stopped doing that, and then they were able to keep it as a liver cell but make it young again — make it a young cell — and then do that for the whole organ. Now, they've been able to successfully do this in living organisms, like a mouse or something like that. And these scientists — and normally you don't see scientists as the most boastful people around — are saying they're going to be able to do this in humans pretty soon, where they can take an organ, or, the big thing is, oh, we'll just do this to your whole body. And it's not something that's permanent; it doesn't freeze your cells there, but it resets the age. It knocks off 20 years or something like that, and presumably you could keep doing it.
You guys have any thoughts on this kind of technology coming online, people talking big like it's about to happen now?
[00:46:47] Speaker A: So I have. Yes, I do. I have some thoughts.
[00:46:51] Speaker B: Please share, please do.
[00:46:52] Speaker A: So, if you can imagine, most people say they don't want to live to 100 years old. They just can't imagine themselves living a good life at that age.
But what happens if I ask the question: what if you felt like the age you are now, but at 100? So if you're 50 and you feel this way — your body's working, you can eat, you can walk, you can talk, you can think — would you want to live to be 100? And the answer most people give is yes; of course, they don't want to die. Now imagine if the science gets to the point, the AI gets to a point, where we say: gee, you need a new back, a new spine? We can replace it and give you a new spine. We can give you a 20-year-old spine and you'll feel good again. Or your legs. And we do it now, right? I need a new hip; all right, we'll give you a new hip, and if it's a successful surgery, it's great. And there's a whole foundation dedicated to this — you can easily look it up — called the Methuselah Foundation: a bunch of scientists and people who are dedicated to making 90 the new 50.
And I guess the belief is your consciousness stays alive in your body as long as your body is alive. And so if you start taking the right vitamins and getting the right stents, or getting a new heart or a new kidney or whatever is ailing you — if we get to the point where we can look at your body and say, you know, James, in two years you're going to need some new hands.
[00:48:31] Speaker B: Well, if you combine this with the previous topic, really, you would probably just be wearing a patch or something that's monitoring all this stuff. And then when it decides, hey, it looks like we need a new kidney here, or we need to reset the age of the kidney, it just releases something into your bloodstream, and you wake up the next day and your kidney is 20 years old again or something like that. So the possibilities along the lines of what you're saying are endless. Tunde, I see you ready to go, man. What, you gotta come...
[00:48:58] Speaker C: I gotta come in with my baseball bat and break this happy party up, you know, and shatter it.
No, it's funny, because you asked a very good question, Rob, and I'm thinking as you're asking. Because I'm 44, so I'm hitting a nice middle age and starting to, you know, pee more often every night than I used to 10 years ago — I have to wake up two, three times now — and feeling my body: it's harder to put on muscle lifting weights, you know, than it was when I was younger with all that extra testosterone. And I thought, when you said that — if I felt like this at 100, would I still want to keep going? — actually, my answer is no. I have really contemplated it. I'm fine living 80, 85 years and having a nice life, enjoying it. And I think that's the problem most of our society has: we don't know how to actually enjoy the present. It goes back to the book we did, The Power of Now, last year.
So we're always searching. It's like how I rail against all these fantasies about how we're gonna go find some alternative to Earth sometime in the next thousand years that we can all live on, as a kind of deflection from having to deal with things like climate change and pollution, and actually making sure that we keep this place habitable. I just think, you know, the whole system has been set up this way. Right? We evolved with the Earth; it can deal with certain conditions. We already have 8 billion humans on the planet, and that seems to be a lot for the Earth to handle. And, I mean...
[00:50:33] Speaker B: You seem to be too happy about that.
[00:50:34] Speaker C: Yeah. If we start having humans live two, three, four hundred years, I mean, what does that mean? Will we have 30 billion humans? You're talking about one war in the middle of Europe messing up food for 27 million people on this planet, you know what I mean? We're about to have Lake Mead dry up, which delivers power and water to 40 million Americans. That's what I'm saying: everybody has this idea, oh, I want to live forever and all that, but we don't sit here and say, okay, what does that mean for society? And I'm just thinking, if I'd been alive since 1880, let's say hypothetically — just thinking about it now, that's a long time.
I don't know if I want to be hanging out that long. And I just want to, you know, like, I'm a pretty happy guy right now. I'm okay getting older.
[00:51:21] Speaker B: Yeah.
[00:51:21] Speaker C: And that's it. Like, I'm not sure this.
[00:51:24] Speaker B: Yeah, it's an interesting thought, because I'm of the same mind. But I'll say this to your point, though. You're saying that at 44. Would your answer change if it was: okay, Tunde, we'll set you at 30, and then you can age — body-age — from 30 to 35, then we'll take you back to 30 again?
[00:51:42] Speaker C: No. I mean, because at some point it still has to end, so why do I want to extend this? To me, this is nature. And again, we've already seen what happens when we try and manipulate nature too much; it usually doesn't work out well. And, you know, I'll say this...
[00:51:57] Speaker B: Agree with you in the sense that I. I like the arc, the entire arc. Like, I'm bought in, so to speak, to the idea of the arc. The idea of staying young forever doesn't appeal to me as much. And because it's just a lot. It's a lot to deal with. It's so everybody's just going to keep being here. It's more than I can conceive now. I'll say this, though. I wonder if peer pressure, like, let's say this is available. Does peer pressure start to play a role? So what if your wife is like, you know what? I think I'm going to stay young. So you gonna be the old dude and your wife is like, you're worried.
[00:52:30] Speaker A: Like, hey, man, they do that. They do that.
[00:52:33] Speaker C: That's what I was gonna say. Now I might have to put the baseball bat down. That doesn't sound too bad.
I don't mind being 70. If she looks like she's 42, I'm good.
[00:52:42] Speaker B: But she might mind you being 70. That's the point. So I wonder about that. But you raise an interesting point as well...
[00:52:48] Speaker C: Well, because I'll call Rob about a postnub. We'll get a good lawyer about that one.
[00:52:53] Speaker B: The issue you raise, though, is one that we have to get a handle on no matter what, whether people are living forever or not: the idea of endless consumption that our Western culture is based on. We're gonna have to get a handle on that at some point. This goes back to what I was saying about how we usually have to have something bad happen before we learn from it.
The bad is starting to happen from our culture of just consume, consume, consume — and once you use up everything here, we'll just go somewhere else and use it all up there. That mindset. The Earth isn't getting any larger, but our imprint, our footprint, has gotten larger. So ultimately, whether people are living forever or not, we're going to have to do something about that. We're going to have to figure out a better way to be around, because the effects of just 200 years of the Industrial Revolution are being felt on the Earth much more than the past 2 million years. So ultimately, when I look at the aging piece — the ability, to Rob's point, to have vitality and avoid disease and so forth — I don't have a problem with aging and then coming to a natural end at a certain point, at a normal human lifespan. But the idea of saying, hey, we can keep your liver functioning like a 25-year-old's so I can party hard or something like that — you could convince me on that. Or, hey, you want to keep working out and stuff like that, keep your joints up and running; you can keep running, you don't have to worry about your cartilage starting to fail. You might be able to convince me on that. So I wouldn't look at this as something to try to stay alive longer, but I would want more vitality. And I do a lot now, as far as exercise, diet, all that type of stuff, to try to keep my vitality at a certain level. And I could be open to the idea of, oh yeah, we'll keep your eyes in good shape, so I'll be 80 but I can see really well — even though, hey, eventually I'm gonna go, and that's fine. It's the slow deterioration part I don't know about. Basically, I might be...
[00:54:47] Speaker A: Able to optimize what happens if it's a hard deadline and like 10day85. And even though you feel great and everything's good, you get to the point where, sir, you're 85, you're done.
[00:54:59] Speaker C: Yeah. I mean, talk about the revolution. Yeah.
[00:55:01] Speaker A: About a bunch of 85 year olds.
[00:55:03] Speaker B: Who are like running around with a bunch of testosterone.
[00:55:06] Speaker A: "Make me die."
[00:55:10] Speaker C: That sounds like Washington, D.C. today, with all the geriatrics running our country. We need term limits. It'll be like term limits on your life. Like, no, you just gotta stop, you know? It's just too much. We can't keep going.
[00:55:21] Speaker B: Yeah, we need a leader like George Washington to show people the way. Because otherwise people...
[00:55:26] Speaker C: That'll be me. I'll be like, guys, I'm ready to go.
Yeah, I'm the George Washington of life, you know. But it's funny, because this conversation, what you're saying about your liver, has got me rethinking it too. I'm still going to check out at 85, but I think at 44 my liver has already rebelled against me, given the amount of whiskey it's had to process and clean through. So yeah, certain things like that — I could definitely see a nice cleanse and a little bit of rejuvenation.
[00:55:58] Speaker B: Well, interestingly enough, though, the liver is one of the organs in our bodies that can regenerate.
[00:56:03] Speaker C: So I know, that's...
[00:56:04] Speaker B: Hey, I'm just going along with nature's plan.
[00:56:06] Speaker C: I'm just giving it a little boost. It gives me hope in the long run; that's why we'll keep doing it.
[00:56:11] Speaker B: But you gotta stop drinking for the liver to regenerate.
[00:56:13] Speaker C: There you go.
[00:56:14] Speaker B: But no, I mean, it's definitely something. This train, this thing, is happening, because this is another one of those things that's kind of ingrained in humanity, like the fountain of youth was. They dreamt that up thousands of years ago, and people spent their whole lives searching in lands they knew nothing about, looking for that thing. So this is something that's innate in us, you know: the search for more, the search for extension — to extend, whatever. And so I think it's interesting to see some type of concrete advancement in it, and I'm interested to see what happens. Like I said, how different people react and want to take advantage of it will also be something we'll see in society. Like you said, some people will want to go one way, others another, and so forth. But it's fascinating to see that this is happening. I think we can wrap this up from here, man. We appreciate Rob for joining us and taking the time out to be with us here. And as always, we appreciate everybody for joining us on this episode of Call It Like I See It. Subscribe to the podcast, rate it, review us, tell us what you think, send it to your friends. And until next time, I'm James Keys.
[00:57:26] Speaker C: I'm Tunde Ogunlana.
[00:57:28] Speaker A: I'm Rob Buchel.
[00:57:30] Speaker B: All right, and we'll talk to you next time.