Episode Transcript
[00:00:00] Speaker A: In this episode, we consider whether it's time to start worrying about where AI is going and what it will do to us, in light of the super realistic Veo 3 AI video generator that was recently released by Google.
Hello, welcome to the Call It Like I See It podcast.
I'm James Keys, and joining me today is a man whose podcast takes have been known to shake things up: Tunde Ogunlana. Tunde, are you ready to, you know, shake it to the max today?
[00:00:42] Speaker B: Of course, man.
Shake it till I fake it, till I make it.
So I just made that up.
[00:00:49] Speaker A: Let's see if it turns out all right. Hey, just put it on TikTok, man. We'll be all good. Now, before we get started, I ask that if you enjoy the show, you subscribe and like the show on YouTube or your podcast platform.
[00:01:02] Speaker B: Subscribe.
[00:01:02] Speaker A: Doing so really helps the show out. We're recording on June 3, 2025. And Tunde, last month, just this May, Google announced the release of its Veo 3 AI video generator. And the videos we've seen coming from this thing are concerningly realistic right now. Veo 3, this video generation model, makes eight-second videos, but more so, what it's given us is a snapshot of the current state of the tech and the current capability to just fabricate visuals, you know, that people would not necessarily be able to tell whether they're real or not, especially at a glance. So just to start us off today, a lot of people were expressing surprise, shock, and everything about how realistic these AI-generated videos were and the images that were being captured from them. So what was your reaction to seeing some of these, you know, these images and these videos?
[00:01:55] Speaker B: I was amazed, man, to be honest with you. That was my first reaction.
Amazed at the idea that all of this was done artificially, that there was not one single real video, you know, real or however you can say it. There was no filming of anything that actually existed in reality. This was all produced by, you know, binary code and AI. And that, to me, was just amazing. Like, all right, we are now at the point where we actually can't discern real videos from fake ones, if someone really wanted to produce something to convince the rest of us of whatever they wanted to convince us of. So I was just, you know, kind of blown away by that. Yeah, I got a few other ones, but I'll pass it back to you.
[00:02:42] Speaker A: Well, no, I mean, where you ended is where I started. Like, okay, I think we've crossed some, you know, some Rubicon here, where you look at this thing, at a glance especially, and you would not be like, okay, yeah, that's fake. The fact that we've got to that point, and we got to that point pretty quickly, it feels like. We're just a couple years into ChatGPT being available for commercial use and all that, which is, you know, more of a text generation type of thing. But where that tech is able to go right now with the current capabilities means that in 6 months, 12 months, 18 months, it's going to be so much further as well. So it's not just what it means for now, it's what it means moving forward. And so I do think it's time, like, we're looking at this like, okay, well, when we look at even, you know, what's next for the next business cycle, we're looking at the next set of announcements as far as, you know, companies' profits go. Do we have to worry about, you know, fake videos going around talking about, oh, this CEO said that they're expecting this from a profit standpoint, and this thing affecting the stock market or anything like that? It seems like we're coming very quickly to the place where, you already know, you can't trust what you read.
So now we're going to get to this point where we're not going to be able to trust what we see, you know, visuals, which is going to.
[00:04:03] Speaker B: Be much harder for us.
[00:04:04] Speaker A: Like, people don't like to read. People love to view, to see video. And now you can't trust that either. So yeah, I mean, I was just blown away that it was not something you could readily discern, like, oh, those kinds of features are off in this person or anything like that. Like, these people don't exist, you know.
[00:04:23] Speaker B: Let me jump in on that one, because that's the one that got me a bit. I want to hit something you said, when you said you already know that you can't trust what you're reading. And not to pick on what you're saying, because I know what you mean, but think about the way you said it: you already know. And I think we know, right?
But to be humbly honest, you and I would submit to the idea that, yes, at a certain 30,000-foot level, we don't really know how true or not what we're reading is. But how many people out there see something that they read quickly on TikTok or Instagram, whatever, and based on reading it, they just run with it like it's true? So I think your point is well taken. Video, symbols, imagery. Why do you think flags are so important for humanity, for cultures, for thousands of years? Symbols mean a lot. Signs, symbols, images. Humans are visual.
[00:05:14] Speaker A: Yeah.
[00:05:15] Speaker B: So the ability, like you're saying, to really recreate truth through images for the first time in a way that has only been done in print in terms of creating new realities and new sets of facts or history. Yeah. Like you're saying, I think this is, this is crazy. So.
But one of the things that got me thinking was, to your point about the fact that all those people were fake. One of the things I thought of is, imagine if you saw something that, you know, was created by AI, like one of those videos that Google put out, but then you saw someone that's a spitting image of yourself. Because I was thinking about it: the algorithm is taking binary information, code, and these things, and just mashing it up through randomness, right? And I thought about, like, a sperm hitting an egg and the DNA. The way you look, the way I look, was a random set of events within that merging of the sperm and egg, all that information that's packaged into the DNA. And I just thought, it's interesting.
[00:06:12] Speaker A: It's a good point. Well, let me throw something in, because I don't think random is the right word. It's probabilistic, you know. So it's not complete chance, it's not like it's a 0 or a 1 and we just don't know which. It's digesting a lot of images and seeing, okay, probabilistically, this is how wide eyes might be. You know, maybe there's a 20% probability it's this wide and a 20% probability it's that wide. So it's weighing all that stuff instantaneously. So not random, but probabilistic. But that is kind of similar to your DNA thing.
[00:06:45] Speaker B: It also has certain.
[00:06:48] Speaker A: What genes end up being dominant, what genes you get from your mom and dad. There's a probabilistic thing there as well, as far as what you're going to ultimately look like, you know. So that's an interesting kind of.
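The distinction James draws here, probabilistic sampling rather than pure randomness, can be sketched in a few lines of Python. This is only an illustration of the general idea; the feature categories and weights below are made up, not anything from an actual video model:

```python
import random

# Hypothetical learned probabilities for one facial feature.
# A generative model doesn't pick uniformly at random; it samples
# each option in proportion to a learned weight.
EYE_WIDTHS = ["narrow", "medium", "wide"]
WEIGHTS = [0.2, 0.6, 0.2]  # illustrative weights, summing to 1.0

def sample_feature(options, weights):
    """Draw one option according to its probability weight."""
    return random.choices(options, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Each run produces a plausible, weighted draw rather than a coin flip.
    face = {"eye_width": sample_feature(EYE_WIDTHS, WEIGHTS)}
    print(face)
```

Run many times, "medium" comes up most often, which is the sense in which the output is weighted rather than a blind 50/50, much like dominant genes skewing which traits appear.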
[00:06:58] Speaker B: I mean, in a sense, that's why it is interesting to live through this. Because I've seen AI referred to by some in the computer science community, if that's the proper title for the sector, as a species, that this is a new species, and stuff like that. And it's just interesting that there's these similarities, let's call it, these characteristics. Obviously, I'm not saying that AI is alive or sentient, but like you said, this probabilistic nature of, well, what should a face look like? That's no different than the DNA. And what I was thinking, James, I was thinking of this joke for you, because you're an attorney. If I saw someone that looked exactly like me, spitting image, in one of these videos, and I knew for a fact I never was filmed doing this, and I'm running Mission Impossible or something, right? I was thinking, because this is so fresh and there's no real law about this, can I go sue Google and act like they did film me and just didn't pay me? Because I'll be like, you guys got to prove that's not me, because look, that's me. Let's go to court and see if 12 people in a jury say that looks like me. You got to pay.
[00:08:03] Speaker A: Well, I mean, there's a practical thing of, Google has an army of lawyers, and you have, you would hire maybe one. Well, I got you. But beyond that, okay, so I want to play this out a little bit further, but not the example you gave, because you mentioned something to me offline that really got my mind turning on this. So not if you see a person in these videos and it's like, oh, that looks like me. What if you see a quote-unquote person, a generated, you know, kind of image, and people start falling for these things? You know what I'm saying? Like, we know how people really fall for celebrities, and they don't know these people, they just see them on screens and stuff like that. It's very conceivable that there can be AI-generated characters that people start falling for. Like, you know, we got these young men, you know. Like it's already happened, man. Oh, yeah, yeah, yeah.
[00:08:53] Speaker B: It's, it's.
[00:08:54] Speaker A: But it's going to become more, like, maybe it's more people that can get kind of caught up in that stuff. And then the worst part about it is people might get caught up in it unwittingly, like, not even know that they're looking at something that's not real, you know, and so forth. Because we're still animals, you know, like, we're going to see things. And again, the visual, that's the part that I can't emphasize enough. We know this with text, and, yeah, good point, that a lot of people do see the text and then will run with it. But a lot of times that's because it taps into something. It connects to something emotionally, where they want to believe it's true. We all generally know that you can't believe everything that you read. If you get a letter that says, hey, you just won a million dollars, just send your bank information to someone and then they're going to put this million dollars in your bank, most people can understand. Again, there's a few people that would go down that path. That's why there's, you know, scams that still exist. But most people would understand that just because it's written on a page doesn't mean that it's true. Now, we haven't made that jump yet beyond, like, movies, where it's CGI and stuff like that. But those are, like, million-dollar-budget stuff. This is, like, you just hop onto your computer and create this stuff. So the fact that we cannot trust things, and that video connects to us at such a deeper level, to me is just like, that's a brave new world, so to speak. So, yeah. Now I'll ask you this, though.
Generally speaking, in our society we got our tech optimists, we got our tech pessimists, and so forth. And this is another one of those kinds of events that brings out everybody in that sense. So we see a lot of people expressing excitement, you know, over this kind of stuff. We see a lot of people expressing fear, and I guess we probably would be on that side a little bit more, but nonetheless expressing fear about the AI systems and the capabilities and how they're increasing and so forth. So generally speaking, I would ask you, just bigger-picture-wise: are there things that you're very excited about, as far as the effects that AI will be able to have on society? And are there things that stand out as far as what we should be worried about? You know, like on both. If you could give maybe one, or give both, and then we'll kind of talk about it.
[00:11:05] Speaker B: Yeah. So I think there's obviously like, there's, there's pros and cons with all this kind of new technology.
I think the pros are things that we have seen already. The ability to increase productivity in certain, I would say kind of white collar environments. For example, for my business, we are testing out an AI software for our CRM, which is really good and really robust and.
[00:11:31] Speaker A: CRM?
[00:11:32] Speaker B: Yeah, client relationship manager. Basically the database, like a Salesforce or something like that. But long story short, we're gonna make the investment in it because it is that good. Now, the downside for the labor force is it's probably gonna allow me to knock out an employee. So, you know, that's the reality too. There's a tension there. Right.
And so for the business, it's good, because it'll allow us to be more productive and actually save money by knocking out an employee. But I know that if millions of business owners like me around the country do this, then we're obviously going to have an effect on the labor market. So I think there's these tensions of pros and cons. And, you know, doing my preparation, I saw that the sectors of the economy that stand to benefit the most over the next five years, because I think it's very difficult to try and predict two, three generations out with this kind of stuff, are education, health care, our two industries, finance and law, and then transportation.
And I found the healthcare and science kind of research very interesting, because with the ability of AI to really do its own learning and do millions of calculations exponentially faster than humans, with the idea of more research into things like mRNA and vaccines and all that, we can expect just in a few years a lot of change that we may not have had without this technology. So things like that I think are very interesting. And then, you know, I'll hand it back to you before I get into the cons.
[00:13:05] Speaker A: I thought you did a good job of running through a lot of those things, and I like that you focused on the economic piece, because I actually was looking at it socially. And, you know, on one hand, I think a lot of times, rightfully so, we're humans, and so we see threat around every corner before we acknowledge the possibilities that something could be put to good use. And a lot of times threat kind of, you know, runs a couple laps around the track before people with good intentions even get out of bed. But the fact that you can generate imagery, and we know that imagery affects people to a greater degree than text, means this is something that can conceivably be used, in the same way that social media could, to connect more people, to connect more people to kind of a mutual struggle, to that sense that, you know, we're all in this together. There are ways that you may be able to create imagery, create visuals that explain things, whether it be climate change or other things where, hey, if we don't get this under control, then this is the kind of thing that's going to happen, and actually not talk about it in some abstract language, but actually visualize it for people, which could, you know, bring people to a more common, objective mindset.
And so I think it can be used in a way that is very effective for persuasion. Now, I think that's a pro and a con too, because a lot of times the desire to divide people, or to get people distracted about different things, or more so to tap into people's emotions in order to control them, that kind of impetus seems to be more ascendant, or at least gets the head start, so to speak, on people that may come behind and do stuff to actually bring more common purpose. So I think that ultimately, from a social standpoint, it's a way that's going to be effective in moving people more, getting people to care about stuff more. You got a democracy now in the United States where such a large chunk of people don't vote, and so it's like, okay, well, maybe we can get more people to vote, if we're all on board with more people voting, you know. So I just think that the possibilities are endless in terms of its ability to use the visual to spur people to greater understanding or greater action. It's just that that coin has a pro side and a con side. And so I think it's all in one.
[00:15:41] Speaker B: Yeah, well, and for me, the cons were much longer than the pros.
[00:15:48] Speaker A: Well, give us one or something.
[00:15:50] Speaker B: That's why I say maybe I'm one of those guys seeing threats around every corner, you know?
[00:15:53] Speaker A: Yeah, that's called a human being.
[00:15:56] Speaker B: Yeah, well, you know, or a squirrel. You know, the squirrels I feed, they're all around, and I keep telling them, I'm the same guy feeding you every day, but you still don't trust me. It really hurts my feelings. But no, one of the cons is one that I think we can all appreciate in the way that technology has evolved, even, I'd say, pre-AI, but I just think that AI will make it more pronounced, which is the end of privacy, at least privacy as you and I and prior generations grew up with it. I just assume everywhere I'm going, I'm surveilled. And it's not because I have bad intentions. I just assume there's cameras randomly on the streets we walk. Whenever you're in buildings, there's always something out there. And I know that my phone is constantly on, and even when it's off, the GPS is still pinging some towers somewhere. So the idea of me really going somewhere and anticipating some level of privacy, in terms of where I'm going and that data not being shared by somebody out there, I recognize that that world is gone.
[00:16:57] Speaker A: Well, remember, just to tap into that real quick, that was, you know, I'm blanking on the book right now, but one of the books we read recently talked about how in those totalitarian situations, the biggest limitation wasn't the collection of data about what everybody was doing. It was the lack of analysts. They didn't have the analysts. And you know what?
[00:17:15] Speaker B: I'm getting there.
[00:17:16] Speaker A: Oh, okay.
[00:17:17] Speaker B: Yeah. Real life. Yeah, no, we got a real life thing going now that is on that. Exactly. So that's.
[00:17:22] Speaker A: That's AI to actually make the point. The AI allows right now we gather all this data about people and you got people sitting at a computer trying to analyze it. Well, once you allow the AI to analyze the data and you can actually create a totalitarian situation where everybody's actions are constantly evaluate, seen and evaluated and threat assessments are made about people all the time. And so you end up in this Minority Report world, conceivably.
[00:17:48] Speaker B: All right, so now you're going to force me to bring it up, and I'll get to the rest of my list later, because that's exactly it. So you're talking about, number one, the model from China, the social credit system, being copied in the United States.
Whether it is used like this in the United States or not, who knows, we'll see. But right now, it's being well discussed in the media that Palantir, which was co-founded by Peter Thiel, has a contract of $795 million from the US government to help it build a national citizens database, using AI to coordinate everything. I'm going to assume the things that the DOGE department was gathering, primarily inside the Social Security Administration, but they're tying it together with the IRS, the Department of Homeland Security, and again, this is in the works. Who knows what the final product looks like? But it looks to me that they are trying to do something similar to the Chinese, where they will have a database on hundreds of millions of Americans, and the government will have a centralized ability to know certain things across agencies.
How that's used or not, who knows? But like you said, if we want to be paranoid about it, we could see that being used in the ways that you're mentioning.
You know, a totalitarian state leader from the middle of the 20th century would be salivating for this type of ability. So that's what I was going to say. Regardless, one of the.
[00:19:13] Speaker A: There's no constitutional. Well, I was going to say there's no constitutional way to use that. I mean, like, however it's going to be.
[00:19:19] Speaker B: I mean, what is.
[00:19:19] Speaker A: Because what is that something that is not consistent with the whole limited government thing?
Yeah, well, that's the.
[00:19:25] Speaker B: I think that's the danger, James, is that the reality of this technology may run up against the rocks of our desire for individual.
The kind of individualism that we seek in American culture.
[00:19:38] Speaker A: Well, there's no maybe about that. You know, I think that the point of the technology is to run up against that. And to me, I would say, just to add to the express con kind of mindset, not the pro that could be used for con: what we've seen with these technological innovations that deal with feeding us information is that they ultimately move toward playing on our cognitive biases. And so we end up in this world where the Facebook algorithm understands that confirmation bias is the best way to generate engagement.
[00:20:19] Speaker B: And.
[00:20:19] Speaker A: And so it puts us all in these silos where it just feeds this confirmation bias and. Or just just supplies to feed our confirmation bias. And then we end up all kind of not knowing what's going on outside of this one little silo. And half of that stuff in the silo is not there because it's true. It's there because it'll keep us in the silo. And so I think that the other. Another one of these biases that comes into play is the mere exposure effect, which is if you see something, you see something once, you see them twice. When you see it ultimately or later on, as you continue to see something, it becomes more acceptable to you, becomes more true to you. And so this is the type of thing where video generation could be terrible, you know, or great. Like you could see a generated video of a person doing something or saying something and become more receptive to the idea that that person either has or will do that, you know, and so you can be again, you know, like imperceptible changes in behavior is kind of what social media does. Well, that's kind of what this can do as well. So to me, it's because of the power of the visual, you know. So I think that the cons being that just the. It opens up the possibilities for deception beyond what, you know, it opens the possibility of control and it opens the possibility of deception beyond what we can see.
[00:21:40] Speaker B: I think if you put those two together. Yeah. Deception, control, all of this, what it does. I think we lack the imagination to see where human society can go in general. And so, for example, as you were talking earlier, James, it made me think of something like the War of the Worlds. Right. You know, you've heard the story about the original broadcast. I think it was in 1938.
[00:22:02] Speaker A: Yeah. Shortly after the radio program becomes widespread.
[00:22:05] Speaker B: Yeah, correct. What if there was an AI-generated video that looked very legit, with well-known newscasters, right, who are telling us that either aliens are invading, and they're showing the image, or another nation is invading? What if it was actually showing.
[00:22:19] Speaker A: But you don't even need the video for this. This isn't much different than the stories of what the Facebook social media posts did in Myanmar, when they had people believing that there were these massacres going on, and they were going around killing people. You know, so, like, this is always there.
[00:22:35] Speaker B: But imagine having the ability to create the images that could disturb people within just a few minutes.
[00:22:40] Speaker A: Yeah.
[00:22:41] Speaker B: For $295 a month, or whatever the cost of the software is. So it's just, that's what I mean. And the other thing is.
[00:22:46] Speaker A: Well, let me. I'm gonna keep us moving though, because we.
[00:22:48] Speaker B: Well, let me just say this one, because this is. I wouldn't mind your Feedback. This is the first time that we're gonna have machines creating things that humans are consuming. And I think you're very, you're onto something with this. How does it affect our psychology?
Because as humans we suffer from things like loneliness, the need for belonging and all that. And I don't think we're prepared for this, that human beings could start following culturally something that's not made by a human, like, you know, an idea or a theme or something that is generated by one of these AIs. So that's, you know.
[00:23:20] Speaker A: No, no, I mean, I think that particularly as, like, we've seen that AI, in the chatbots at least, has shown some manipulative tendencies. You know, not that somebody's behind the scenes pulling strings, but the actual chatbot itself is going down the path of manipulation. And so if you put that together with the video generation, then yeah, you could be walking people down all types of.
[00:23:46] Speaker B: That's how people can fall in love with it. Yeah.
[00:23:49] Speaker A: So, the last question I'll put to you, and this is more of a cultural commentary, you know, like, what you think of the values that our culture promotes and actually sustains. Let's say we knew that we were moving towards some of these worst-case scenarios. Does our society even have the structure and the values where we would be able to stop? Like, is this happening no matter what, because of the way our culture works? Because whether it's growth at all costs, even if the cost is us, or it's, you know, progress, so to speak, quote-unquote, at all costs, even if the cost is us. Like, do you think that if we knew for a fact that something terrible was going to happen as a result of this unrestricted march towards, you know, wherever we're going, do you think we'd even be able to stop?
[00:24:44] Speaker B: No.
So here's my examples, historically. I would say this: I think humans used to be better at imagining a calamity that could be so catastrophic that once we see it, we're like, okay, let's not. And I think nuclear weapons are a good example of that. Two bombs go off in Japan in 1945, and since then, no one's really wanted to use them.
[00:25:08] Speaker A: CFCs, that's one of the ones from the 80s, our youth, you know, with the hole in the ozone and all that.
[00:25:13] Speaker B: Well, let me, let me go through my two examples I, I thought of for this, for this discussion, because one was the automobile and the other was the way we responded to things like firearms and food. So if you look at the new technology of the automobile 100 plus years ago, cars were crazy. There was no, think about it, there was no speed limit ever before because you didn't have to worry about that with horses. Right?
[00:25:35] Speaker A: Well, and there was also, there was a mechanical speed limit, though.
[00:25:38] Speaker B: Let me just get through the discussion, because I know we got limited time here. But generally, there was no stated speed limit, right? There were no seat belts, none of this safety stuff that we have now. There were no car seats, and people died. A lot of people just died in car crashes. There was no speed limit in school zones; little kids got run over, all that. And over the decades, every time one of these bad things happened, the system responded with a new law, a new regulation, to where the automobile industry is regulated the way it is today.
So I was thinking, that's an example of a brand-new technology that came into our society and over time is now a lot safer than it was originally. So I was like, okay, that's an example where we've shown we can do it. But then I thought about things like firearms, and I really do believe, James, if we took a time machine and put a microphone and a camera in people's faces 50, 60 years ago, and just explained that, hey, one day in America we might have more mass shootings in one year than there are days in the year.
Guns, by 2020, will be the deadliest thing for Americans under 18 in the United States, blah, blah, blah. And you ask people, how do you think Americans and the government would respond?
I don't think they would have said, oh, people will be apathetic in general, the government really won't do anything about it, the firearm companies will keep lobbying, and everything will just be allowed to continue.
So that part, to me, and the way that we react to food, meaning food has changed imperceptibly over the last 30, 40 years, and we're all sicker generally as a society, more obese, but we seem to accept it. So my concern is that this will go in the direction of the latter, not the former, the automobile. I think this is going to be something where, in 50 years, if me and you took a time machine to the future, we may be horrified and be like, what? How are people doing this? Like, people falling in love with robots and people committing suicide over this stuff. Because I'm concerned that we're just not going to deal with this in a way that you and I would be comfortable with. So.
[00:27:46] Speaker A: But I think the question goes to the culture. So what about the culture allowed the automobile to be reined in to some degree from a safety standpoint, and has not allowed firearms to be reined in? Now, that's a different discussion, because with automobiles, the reining in still allows the car companies to sell as many cars as they are able to, versus regulation of firearms doesn't allow the firearms companies to sell as many firearms as they want to. So I think there's a key distinction there. And I think that in our society, this is where the valuing of growth above all comes in, and I don't think we even understand this about our society. Like, when we were looking at the book Talking to My Daughter About the Economy not too long ago, we were talking about the invention of the profit motive. And it's like, you sit there like, huh, our whole world in the United States operates on this profit motive. And this was invented, this was created. This didn't exist before. This isn't just human nature, you know, to operate in this way. But I think that's kind of the god that we worship in this culture, this profit motive, this growth thought process. So AI right now, that's not even something that is profitable at this point, you know, but the investment in it is increasing because the potential is seen. And this growth mindset, I think, once it takes hold in certain industries, I just don't think there's a way to stop it. The only place where that could come from culturally, you know, like, oh, as a culture we believe in temperance or whatever, that stuff is gone. The culture here is a growth culture.
It could come from government, it could come from, you know, government regulation.
[00:29:38] Speaker B: Temperance was left a long time ago.
[00:29:41] Speaker A: But you know, I don't think that government is in any shape to do it, particularly because the government takes a lot of its cues from the culture. So ultimately, I think that what we're looking at right now is that this is something where, with the CFCs example, that was one where you used to have all these products in these aerosol cans, they were putting a hole in the ozone, and industry was able to get a handle on it. But it still came at only a small cost, like, the people that made the CFCs were like, oh yeah, we're out of business, or we got to switch to making something else. But in this instance, there is so much money and investment involved in going down this path, and there aren't any cultural values that would stand in its place. In fact, I would say the cultural values that we have reinforce this. So, yeah, I don't think there's any way, like, I think we're going on this ride, you know, and so we pull the bar down over our lap, and we'll just see where it goes. And that doesn't mean you don't try to make a difference wherever you think you can, you know, with your votes, with your wallet, how you spend your money or whatever.
[00:30:42] Speaker B: But.
[00:30:42] Speaker A: But at the same time, like, the cultural values that are in place right now are ones where, if something is aligned with growth, we will do it until it kills us. Like, that's just kind of the way our system is set up right now. So any last thoughts before we close it up?
[00:30:55] Speaker B: Yeah, just to finish on this, because the point you made about the profit motive, I think that's an example, just for the listener or the viewer, of why we lack the imagination sometimes. Because I think there are a lot of Americans that would hear that, James, and think you're absolutely crazy. They would think, what do you mean, that there's a world that can exist without the profit motive, this and that? And that's what I'm saying. Like, you're right.
We lack the imagination in the United States to imagine that humanity has existed for a long time without this constant growth mindset at all times for profit.
[00:31:27] Speaker A: Well, that's not a lack of imagination. That's a lack of knowledge, because I felt that way until it was explained to me.
[00:31:33] Speaker B: How did it go? I read the same book, and I'm just being honest, I still lack the imagination of how the world could work without a profit motive. That's just me personally. I just don't understand how people would wake up every day being motivated to do something without getting paid for it. And I'm not saying that's impossible. I'm not saying that doesn't exist.
[00:31:51] Speaker A: It did happen, you know, like, that's what I'm saying. I don't know how it would happen in the current context, but I'm saying I understand how it did.
[00:31:57] Speaker B: Yeah, I understand it did happen, and I believe it. I'm saying I lack the imagination of how that actually could play out. Just like I lack the imagination, being raised in America with my melanin complexion, of what a world without race looks like. I just lack that imagination. I've never experienced that, even though I know it existed prior to the modern era. So, you know, we'll have to all live through it, like you said, put the strap down and buckle up.
[00:32:24] Speaker A: All right.
[00:32:24] Speaker B: All right. So.
[00:32:25] Speaker A: Well, no, I mean, and that's what we're gonna have to do. So, you know, but I think that, at the very least, we'll be entertained along the way.
I guess that's the silver lining.
[00:32:34] Speaker B: Let's hope so.
[00:32:35] Speaker A: Yeah.
[00:32:35] Speaker B: Yeah, yeah.
[00:32:36] Speaker A: So hopefully we don't go to a dark place too quickly. But no, I think that, I mean, in the short term, seeing these things, there is wonder, there's amazement, you know, so there's that part. And as far as where we end up going, I mean, there is always the opportunity for human beings to diverge from the current track. So, you know, being aware of this stuff, maybe there's more of a chance of that happening, but we'll see. So we appreciate you for joining us on this episode of Call Like I See It. We'll have a second episode this week posting as well, so check that out too. Subscribe to the podcast, rate it, review it, tell us what you think. Send it to a friend. Till next time, I'm James Keys.
[00:33:10] Speaker B: I'm Tunde Ogonlana.
[00:33:12] Speaker A: All right, we'll talk.