AI Hype vs. Reality: Why Haven't Billion-Dollar Investments Driven GDP Growth?

Episode 359 March 25, 2026 00:41:34
Call It Like I See It



Hosted By

James Keys and Tunde Ogunlana

Show Notes

James Keys and Rob Richardson take a look at a recent claim from Goldman Sachs that the ongoing AI boom we are living through, with the hundreds of billions of dollars of investment and endless hype from media and corporate interests, did not actually produce GDP growth in the US in 2025. The guys also consider whether there is any truth to the narrative that AI development is being held back by too much regulation in the US.

You’ll Snort-Laugh When You Learn How Much AI Actually Added to the US Economy Last Year (Futurism)

Goldman finds ‘no meaningful relationship between AI and productivity at the economy-wide level,’ but a 30% boost for 2 specific use cases (Fortune)

Is AI already driving U.S. growth? (JP Morgan Asset Management)

‘Things are going to get much, much worse’: Andrew Yang says AI could eliminate millions of jobs and split the US economy — how to stay ahead (Moneywise)

Uber CEO says other execs are lying about AI: 'They say it'll be fine' publicly but privately admit millions of jobs are gone (Moneywise)

Why the Trump Administration’s Latest Approach to AI Deregulation is Dangerous (NYU Stern Center for Business & Human Rights)


Episode Transcript

[00:00:00] Speaker A: In this episode, we take a look at a recent claim coming out of Goldman Sachs that the ongoing AI boom we are living through did not actually produce economic growth in the US in 2025. Hello, welcome to the Call It Like I See It podcast. I'm James Keys and joining me today is a man who podcasts not just for himself, but for his whole tribe, Rob Richardson. Rob, want to ask you, man, you ready to show us here on this show how you kick it? [00:00:42] Speaker B: Hey, I'm ready to disrupt and bring the thunder. Let's do it, brother. [00:00:46] Speaker A: All right. All right. Now before we get started, if you enjoy the show, I ask that you subscribe and like the show on YouTube or your podcast app. Doing so really helps the show out. And we're recording on March 24, 2026. And Rob, Goldman Sachs recently released a pretty shocking report that outright asserts that despite all of the investment into and hype around AI, AI tech thus far is not producing economic growth, net wise, in the US in 2025. Now we all know there's been lots of investment. So many would think that that would lead to, or that on the back end of that would come, a lot of economic growth. So this analysis from Goldman Sachs is pretty shocking. And so they, and some other people who have done work on this, which I'll have in the show notes, suggest that we're a little out over our skis with this investment in terms of actually delivering economic growth, which is conceivably what it's for. So I want to ask you, what is your reaction and what context do you think may be missing from this idea put forth by Goldman Sachs and their calculation? I mean they're bankers, so they're doing this from a monetary standpoint, not necessarily to push an agenda. But what's your reaction to that? That AI has had zero impact on US economic growth in 2025.
[00:02:08] Speaker B: So when you think about the reaction that AI has zero economic impact according to Goldman Sachs, they're thinking about it like bankers straight up with the numbers that probably has that, that has partial truth we'll call it. Right. Because it's definitely, if you look at right now in terms of, if you're talking about how they're measuring it, right, in terms of like jobs created, how much actual capital was net returned to the US that's pro, that has a lot of truth. There's just, there's no getting around that now. Yeah. [00:02:35] Speaker A: Well, one thing, let me add to you real quick. Like one of the things they point to is the fact that a lot of the chips being bought are, are from foreign manufacturers, which by definition would not be something that would go to US growth. But go ahead. [00:02:46] Speaker B: I'm sorry, in terms of like the regular person, the regular corporation, how they're using, how they're seeing AI, they're not seeing this impact. And I'll talk more about that later. That's a separate issue. In terms of why they're not seeing it, I think it's how they're applying it. But in terms of the overall macro issue, it's right now people aren't seeing it because we are at the beginning stages. We are the equivalent. I can say this. We're at the stages that we were when people were just coming on and Microsoft was coming online. In terms of where we are with [00:03:15] Speaker A: AI, it's just this is like the AOL dial up phase. [00:03:20] Speaker B: That's right, we have the AOL dial up phase of AI. So. And there's a lot of infrastructure and costs that we're paying towards that. And so we could talk about whether that's justified and I'll get to that. But bottom line is we're at the beginning of this. And so, yes, we're not seeing that impact because it's literally, we literally just started like two seconds ago. [00:03:40] Speaker A: Yeah, yeah. 
No, the timing of this actually was very notable to me, because that was my biggest reaction. It was like, well, hold on. Cause there's two qualifiers in the statement. One is that it's in 2025. And the second being that it's US growth. Now that's not to say that if you take out all those qualifiers then there's all this growth that's being obscured by it. But when you step back, like, what we're talking about here potentially is a large sea change in how business is done, how our economy works. In the same way, I mean, you look at the Internet, you know, and people who were around and doing business before that, there was a big change with the Internet, you know, all that came along with that, you know, how that increased productivity, how that grew businesses. You know, Amazon didn't exist before that, and now it's one of the biggest companies in the world and so forth. So like how businesses came from that. And remember there was a lot of, you know, false starts, and you know, there was a bubble that burst. All that, and in time we got to the other end. So on its surface, I looked at this and said, this is very preliminary, you know. But one thing you said that was really interesting to me, or that I kind of connected as well, was that part of the issue that we have is that the way AI can be used now in certain ways to automate tasks or things like that to increase productivity, that's been happening a little bit, but I don't know that everybody necessarily knows how to do that yet that well. So a lot of it also is that we have these tools that are being developed, but people don't necessarily know how to best deploy them and how to use them in a way to effectively increase productivity in many ways. So I think that it's very preliminary. It's not a bad thing. You should take a temperature of these kinds of things at all times, you know, to see, okay, hey, a lot of money's going into this.
You know, there's a lot of people that are worried about it, a lot of people that are excited about it. So we should take periodic temperatures of it. [00:05:28] Speaker B: So. [00:05:29] Speaker A: But I think that context needs to be had, that I don't know that we're in the stage yet where we should be seeing a ton of growth from AI. Now what that stage ends up looking like we can get into later. But that to me is, you know, a lot of what really jumped out. [00:05:43] Speaker B: Right. So there's two things I want to really hammer on that you just said. So the first part is really thinking about. You made a really good comparison to Amazon. I think that's where I want to start, right. Amazon started during that period, the period of the dot com bubble. You know, you and I are sort of seasoned enough to have gone through enough of these cycles and bubbles. I'm not gonna call myself old, I'm not gonna claim this, but what I'm gonna say is that we saw that. So let's take Amazon. It took Amazon, I think, nearly like 12 years to make any profit. So hold on to that a minute. It took Uber years and years to make a profit. And so there's a lot of commentary online. Cause there are people that just like to be against AI, just to be against AI. And some people are just for it and say it's gonna, you know, cure everything in the world. Like neither one is true. [00:06:29] Speaker A: Yeah. And so, yeah, both of them actually aren't that helpful. Don't listen to the people that tell you everything is great or the people that tell you everything is bad. You know, like, that's not where we're trafficking here. [00:06:38] Speaker B: But, you know, there's a saying that I know you know: we often overestimate what we can do in one year and underestimate what we could do in five years. Yeah, you take that with AI. That is where that is.
That is the genuine kind of point I want to emphasize. People are overemphasizing what could be done in one year and they're underestimating, I think drastically, what's going to be done in five years. [00:07:00] Speaker A: Yeah. [00:07:01] Speaker B: The point is the integration, in terms of how people are using AI. So you're seeing real transformative impact. There are real companies that are making tons of money: Midjourney, Cursor. These are companies that don't have any more than 50 people and they're making $200 or $300 million a year because they're AI native. Which is a fancy way of saying they don't just use AI like 99% of us, which is just using it as advanced Google and helping us write a better email and just a basic thought partner. They've actually integrated it into their everyday workflows, allowing people to do 20 or 30x more than they could: create more, build more, make more decisions. So that is real. [00:07:43] Speaker A: Well, let me add something to that real quick and I'll kick it back to you, because it's a very, very important point. The native piece means that they didn't have systems in place prior to the deployment of the AI systems. So everything that they are trying to do, they conceived of the way they wanted to do it with AI in mind. And so it's much more direct to get from point A to point B when you're saying, I have a blank slate, or relatively a blank slate, and I'm trying to get from point A to point B, than it is when you have 20 years, 100 years or whatever of established ways that you go from point A to point B. You got staffing for that, you got all these things in place, and you have all this inertia basically. And we have to change that stuff now to find the potentially better way, even if it's a much better way to get from point A to point B. You know, so being native to it offers a lot of advantages as far as fast implementation.
And really, you know, again, you conceive of your processes with this stuff in mind, not trying to retrofit to what it was or what you've done already. [00:08:47] Speaker B: That's exactly correct. So one of the problems organizations have: we see organizations spending millions, hundreds of millions, billions. And what they're doing is they're trying to use AI thinking that it's going to solve all their problems. It's not going to do that. AI is an enhancer and amplifier. If you have broken systems, it will amplify the chaos. That's what it's going to do. So you still have to have things organized. You have to have a system. And then another problem, not surprising: a lot of these corporations and organizations are thinking that, oh, we're just going to be able to get AI to replace our people. You know, that may or may not be true in the future, but what I'm going to say is it's not true right now. Okay? And what's going to happen, and what has happened, what you've seen happen, is that organizations have implemented this technology and have not provided any type of workforce training or change management opportunities to teach people how to actually integrate it, to work with it. They say, here's a new tool and it's supposed to save you money. And we just spent millions of dollars, so you better use it. And so what do people do? They find BS reasons to say how they used AI, but it didn't improve anything. Yes, we used AI more. So what? That doesn't mean anything, right? But you have to actually systematically approach work differently, integrate it, and teach people. Instead, corporations just think, because some consultant they paid $10 million told them this is going to change your life, that that's going to happen.
It doesn't happen that way. In fact, the better way to put it is this way, because you and I are both athletes, James. Former athletes. I tell people, actually, with AI, here's the truth: you're going to get worse before you get better. So when you first start working out, you get sore, it's hard, and you're not better. That first month or two is really difficult. AI is the same thing. You're training new muscles, you're integrating new systems. And so people expect this instant result because you have AI. That's not true. So that's been the problem. How these organizations have approached it has been the problem. And these consultants have lied to them. And surprise, of course you don't see any change, because you didn't implement any change, you didn't help, you didn't pay for your workforce to do it. You just say, we have these consultants that said this is gonna change our world. [00:10:59] Speaker A: And it didn't. [00:11:00] Speaker B: So why didn't it work? [00:11:01] Speaker A: Yeah, no, I mean, and that's a big part of it. Because where the benefit really would come is once people know how to use it. Because AI is about making decisions more efficiently. It's about going through information more efficiently, sorting out what's important, what's not. It can make a lot of repetitive things happen in a snap. Now, there are concerns with this though. Like you mentioned a couple times, and I actually want to hit this directly head on, because I often think that I understand, from the business owner standpoint or from the capitalist standpoint, like, yes, this thing has the potential to make business run much more smoothly. But what I'm reminded of with that is the idea that, well, business owners and capitalists have been trying to come up with ways to replace workers since the beginning of capitalism.
And so I wonder though, because of that and because of the types of things that AI promises, whether or not we're seeing another round of that more so than anything. So the increases in productivity, the growth that we're speaking of, is that actually more so the ability to either maintain productivity with less work or increase productivity with fewer workers? And then what does that really look like for the rest of us? So should the rest of us be as excited about the potential of AI as the business owners are, so to speak? Because, you know, like, should the factory worker on the assembly line, should that person have been as excited about the coming automation, you know, when it came to building cars and so forth? And there are arguments that, yes, you'll get cheaper products, if you can find work, you know, your things will be cheaper and so forth. But that's the conversation that we're not having as well though: what does success ultimately look like from a societal standpoint with this growth that's promised? Because normally, when people go out of jobs, there's economic principles on this: a 1% increase in unemployment will drop GDP by 2%. Well, conceivably that may no longer be the case if you're able to integrate tools that allow one worker to do what a hundred workers can do. So when Jack Dorsey lays off thousands of people or whatever, then it's like, well, he's not doing that because he thinks it's gonna drop his company's productivity. So that though would justify explosive growth in stock market valuation, explosive growth in investment. It would justify investment more so than it would justify maybe the expectations of regular people. So that goes to the other side of, hey, you know, you shouldn't be 100% up on AI and you also can't be a hundred percent down on it.
But it's going to change things in ways that we probably can't conceive of in a lot of ways. And one thing, I'll let you react to what I just said, but one thing I want to get to though is, I want to directly hit on the idea, like, what Goldman Sachs is pointing out. And we've said, hey, this is early and things like that. But what type of things? And you may have already touched on it, so you can really reiterate, but what types of things do you think will reverse kind of the trend that we see right now, the year-long trend where the level of expectation and investment isn't matching the growth actually being produced, at least in the United States? [00:14:16] Speaker B: All right, so there's a lot said there. So. [00:14:22] Speaker A: Well, you can go back if you'd like. But I do want to get your take on the other piece as well. [00:14:27] Speaker B: Yeah. So let me go back, and then I'll take on how we reverse the trend. So we discussed a lot about, essentially, should the common worker be excited about what's happening with AI, or is this just going to the rich and those who are the business owners? The answer to whether the average worker should be excited is a complicated one, because we have been going in this trend, at least in the United States, that we don't care about workers very much. [00:14:57] Speaker A: That's the stock market. Company stock goes up when they lay off, when they're firing workers, you know. Right. [00:15:03] Speaker B: Like we have to, I think especially with AI, get back to having a system where we actually value workers and people, not just as numbers or a means to an end. I believe that fundamentally as a person and as a human being, and that's what we need to be as a society. We're not there. Now we value: how can we drive the cost the lowest in the most efficient way, no matter what that looks like, to a point of extreme. Right.
This is who we are right now. Right. This is the form of government and philosophy we have chosen, and it is just a fact. And people have been voting this way no matter how bad it is for them. So I think we're going to have to do a fundamental reexamination of that. [00:15:47] Speaker A: This is going to force that. Because, just real quick, the AI isn't necessarily your villain. The point is excellent, I just want to reiterate. The capability of AI isn't necessarily your villain here. If we're looking at people losing jobs, the villain here is the mentality as a society that we've been operating with. The people who are at the top of that are looking at AI as an opportunity to go further in what they've been trying to do already anyway. But the AI again, as we've said many times, is just the tool in that sense. [00:16:18] Speaker B: Right, but let me also then flip that. I just told you my fundamental belief: my fundamental belief is that we should value workers more. We need to have a reexamination of how we work in this society, and AI is going to force that conversation even more. However, hopefully. And this is the part I'll get to next. Us as individuals, everybody listening to the sound of my voice: you are literally cutting off your face, nose, arms, ears if you don't implement and learn how to use AI to improve yourself as an individual, as a business. Because what I am certain of is that those who understand how to use AI and implement it will be better positioned than everybody else, and that knowledge is going to compound. I will 100% say this: AI is a fast moving train and it is moving its slowest right now.
So take every opportunity you have to learn, to engage, and to actually use the technology beyond just having a chat. It is not Google. Actually learn how to use it to be an agent for you, to help you no matter what you do. If you're in social justice, there's a way to help you. If you're a lawyer, there's a way to help you. Learning how it can help you individually, learning how you can be better at your job, all those things are real, and you should be investing time and resources, doing that and just sitting down and building. Because you can't hope that people are going to see what's right. Because that's for the movies, as we say all the time. Like, you're waiting for superheroes. They're probably not coming. So you need to figure out how to use AI to both help advance yourself and help those in your community around you. [00:18:03] Speaker A: Yeah, I mean, that's a very key point. Because in the same way, when we saw even the Internet, a lot of what we see now is not what it was initially, and I'm not even saying initially, post dot com bubble bursting. But what it did for many people is it leveled the playing field, actually, in that smaller individuals or smaller groups and companies were able to compete on a level with larger entities if they were able to use and leverage the technology of the Internet effectively. It did wipe out, let's say, the mom and pop shop, you know, like the local hardware store, where the big boys, the Walmarts or whatever, were able to gobble them up. But people were able to spring up, or maintain, who used the Internet and actually compete on a better level with the big boys. So it leveled the playing field to some degree, and then we kind of lost the rope on the antitrust thing and things consolidated a lot more. But that again is a policy issue that we ran into more so than the tool itself.
The tool itself didn't lead to greater consolidation. In fact, the tool itself actually allowed for some leveling of the playing field. AI may be able to do the same thing. If one person in a year can do what it takes 20 people to do right now, well, that actually can level the playing field some, you know. So again, it's not all doom and gloom. We should be very vigilant, because this is not necessarily only going to be deployed in ways that help people. And all of the stuff we hear about, you know, billionaires saying, yeah, we can make it so nobody has to work anymore. They didn't become billionaires by trying to make it so people didn't have to work anymore, except by firing them. [00:19:44] Speaker B: So that, like, come on now. [00:19:48] Speaker A: Yeah, yeah, like don't wait for the billionaire hero. Iron Man is not coming. Nobody's coming to put a universal basic income in your pocket. That's not how he got a billion dollars, and he plans to keep it going up to a trillion. [00:19:59] Speaker B: I'm gonna say this, I'm a fan of Iron Man and all that series. You're not gonna get Iron Man. You're gonna get Dr. Doom. Like, Robert Downey plays both of them, but one has different intentions. [00:20:12] Speaker A: You're probably. Well, no, for sure, for sure. I mean, to me, on what will, I think, reverse the trend, kind of what we were talking about before: we're early. With AI, you start seeing growth over time, and more importantly, what comes with the time is more people learning how to leverage it to their advantage. And that's happening. We're on the early end of that now. People learning how to use AI agents, how to integrate them into what they're doing already, or to come up with new ways to do things or new things to do because of the expanded capability.
That's where you'll start seeing the growth, when it actually can be deployed in ways that are more helpful. That said, I am concerned whether or not that growth will be something that more in society can take advantage of, versus less. Because the people that are most motivated right now with this are the people that are looking at all the people that they could not have to pay anymore. And so, like, our motivation needs to kind of match them in a way and wait. [00:21:09] Speaker B: A really key point, if I can build on that. Our motivation has to match theirs. People with good intention and ability have to become builders in this space. Yeah, they have to become builders. It is literally the fight we have to be in. When you talk about what can you do to really make a difference here, we can use AI. There's so many applications. We can use AI to help in ways that weren't possible before. And the good thing about AI, and it has lots of downsides, but it's a tool. It's knowledge that's generally available to anybody that's willing to take the time to learn it. I won't say it's easy, but it's the most accessible we've ever had any knowledge and information ever be. Right. So we can use it. I'll give just one clear example. Like, people have used it to be able to fight back against medical bills that were used against them. When their loved one died, they put the bills through an AI and saw all these errors that came back on their medical bills, and they got their medical bills dropped by $60,000. And so what I'm afraid of, and what we're seeing happening, and I'll get to this a little more later, is that we will start seeing some type of policy and regulation seemingly look like it's there to protect people, but it's really there to make sure people can't use AI in that way. Right.
[00:22:31] Speaker A: No, what I'm concerned about, actually, there's a couple of things in history. I'll go back hundreds of years, then I'll go back thousands of years. My big concern here, the first one, hundreds of years back, is the enclosure period in England, where something that was accessible to the public, which was land at that moment, once the idea took hold, broadly speaking, of the ability to raise sheep and then sell wool on the international market, they closed off the land. Public lands became private lands, and landlords, you know, the monarchy and their people, basically exploited everything from the land and kicked the people out. And eventually the people, hundreds of years later, made it to factories and stuff like that. It's kind of the beginning of all this. But what's available and accessible to the public right now may not necessarily always be available and accessible to the public to improve their lives or to maintain their lives and so forth. So that I'm definitely worried about, especially because of the motives that are in play at the highest levels. And so, yes, right now, as we're helping build the knowledge base, they want us all to use it. But yeah, then what happens? But again, this is a policy concern. If we don't have people in government who are willing to look out for our best interests, and not just try to rub our bellies and make us feel good about this or about that, then we have to worry, because those people then will allow these public tools to be cordoned off. The other thing real quick I wanted to mention: this actually reminds me the most, as far as how transformative it can be, but also what the trade off may end up being, of the book Sapiens, where it talks about the introduction of agriculture and the adoption of agriculture, and how actually, you know, it.
It's amazing, you know, you can have more people in your family, more people in your village. Like, it allowed for human populations to explode, but the quality of life for individuals actually got worse. You know, their teeth started falling out because of the foods they were eating, they started having back problems, started living shorter lives and so forth. So, you know, ultimately what ended up happening with agriculture, in one way to look at it, is that instead of humans being free and taking from this plant or taking from this animal as they so pleased, they became servants of their plants or their animals. They're like, hey, I can't go anywhere, I got to tend to my corn. I can't go anywhere, I got to milk my cow. Like, they became servants of their crops or their livestock. I'm wondering now, like, we're doing all this with, you know where I'm going with this, but are we going to become servants of our computers? You know, like, where computers will start thinking for themselves, to some degree. So again, that's history. I look at history all the time and I'm just like, okay, what is. [00:25:09] Speaker B: Well taken. So with history, in terms of will we be servants to technology? In some ways we're already going down that path. [00:25:16] Speaker A: We're already there. [00:25:19] Speaker B: So we have to see that. So part of it, when we talk, like, I recently taught a lot of teachers about implementing AI and implementing it safely. And the one thing that I talk to them about is like, you know, make sure as you are walking through, I tell this to parents with your kids, you need to be monitoring it and you need to make sure that you understand how it works. Like, it is not Google. You tell them it's going to come back and say things that feel like it knows you, it's talking to you.
And kids, and some adults, haven't fully formed the ability to have their own thoughts and understand how that works. But certainly kids are still developing that muscle of how to actually know what's right from wrong and how to think. And so when you have an AI that is talking to you as if it's a friend, because it's pattern predicting, we have to sit there and talk to our kids about that, to say, no, this doesn't know you. This is math, this is how it works. Right? [00:26:17] Speaker A: It's all that stuff. It's a compilation of millions and millions and millions of interactions and so forth. And it's playing off of that. And you know, I wonder, and there's one more question I want to get to, but I wonder as far as what will be lost. Because something will be lost, you know, as we move into a society that's more dependent on and more enabled by AI. I worry particularly about the expertise that'll be lost. A lot of the discussion now, in terms of the types of things that AI can do right now for you, because a lot of times it's entry level white collar stuff, and it's like, well, yes, that stuff is repetitive, that stuff is pattern. There's patterns you're trying to recognize. But that stuff is also training for people too. You're training AI, but you also could train people with that. And so I wonder what will be lost, and if the opportunities for people to get that kind of training, so they can become, you know, an attorney, a high level attorney, or a high level anything, because they did all that boring menial work, you know, the first couple of years in the thing. And we're like, oh, we can get rid of all this stuff. And it's like, well, maybe that stuff had a purpose. So we'll see, you know. But I've learned that with anything, you know, like most things in life, there are trade offs there.
You know, there are no free lunches or anything like that.
[00:27:36] Speaker B: Trade-offs, not solutions.
[00:27:38] Speaker A: You're the one who told me to read that. I think it was called Essentialism, that book.
[00:27:41] Speaker B: That's where you're quoting from.
[00:27:42] Speaker A: Yeah. So it was all about the trade-offs. And the last point, I want to get to it. I know we don't have a bunch of time.
[00:27:50] Speaker B: Let me just say this. I have something really important to say on that. When it comes to training this current entry level, upcoming workforce, this also goes back to higher education. Education itself has to reimagine what that looks like. We now have a model that's built from the industrial revolution, almost an agricultural model. We have to move to a model that integrates and works with computers, that teaches our students how to come up with context and how to solve these problems earlier on, so they can learn that process as they go along. This should be happening in colleges and K through 12: knowing how to integrate AI into helping with those remedial tasks while still learning the process. Because I think you're right.
[00:28:34] Speaker A: Let me add, because the thing is, it's how to operate in the new systems. But I don't know that we have the answer yet on how you build the expertise. That may be something, like you said, to reimagine. We need to come up with a way to help people develop a certain level of expertise without the access or the opportunity to get paid to do a bunch of repetitive tasks that kind of lay the groundwork for you.
So, yeah, that is my thought on that.
[00:29:01] Speaker B: Very quickly, to finish the thought: it's giving people the ability to critically think and get clarity of thought. Because here's what AI is not, and I don't think it's ever going to do this: it's never going to be able to completely have judgment, taste and context. Those require the things you're talking about, because a machine is always going to think differently from a human. So we have to know how to critically think, bring pieces together, and be very clear about what we're trying to accomplish. And people will be surprised: it is very difficult to be clear. It takes a lot of work. That's what we have to teach students and others, how to solve problems, how to get clarity about what you're trying to achieve. And that's going to be the basis of all of our training.
[00:29:41] Speaker A: Well, and that clarity and communication is oftentimes where we've turned to the best and brightest among us, to look at what's going on, make sense of it, and provide clarity to people. So hopefully that need will still be one that can be met by humans, but it's not necessarily a given. And this is again where you get into what a society decides is important to them, because there are plenty of people who would prefer to offload that stuff. So we'll see. As we said at the beginning of this podcast, we're at the beginning of this stuff, and I like your message overall as far as how we need to take a very active role in engaging with it, because it's going to become a part of more and more of our lives. And right now it's still all available to us. The curtain could be pulled at a certain point, but that's not here yet.
So right now let's try to learn as much about it as we can, improve ourselves, and be prepared for however the next shoe drops, so to speak. So the last thing I want to get to, and I know we don't have a bunch of time for this: looking back at AI investment not lining up with growth, one thing we've seen a lot of people throw out is that the reason we're lagging in growth is that the tech space in general, and AI in particular, is so highly regulated in the US. They're saying it's all this regulation that's holding back the ability to really unleash AI and fully leverage its benefits to create more growth. I can imagine your thoughts from the belly laugh we got, but what are your thoughts on deregulation being what is standing between us and economic growth from AI?
[00:31:38] Speaker B: What AI regulation are they talking about? That was my first question. That's why I laughed. If they're talking about the basic level, where some states have some regulation around privacy, saying you can't just use everybody's information without disclosing it, is that what they're talking about? So here's the thing. When people tell me that we can't innovate because regulation is stopping us: aren't we the leaders, or at least we were, in aviation? We've been that for a long time. And it's highly regulated.
[00:32:14] Speaker A: Yeah.
[00:32:15] Speaker B: Guess what? These are false choices that are put in front of us, because I think they don't want to have to comply with regulations.
[00:32:23] Speaker A: They serve an agenda. Yeah, they serve an agenda.
[00:32:26] Speaker B: Regulations are needed to answer the first question, about how do we make...
And the second question: we need regulation to also make sure we move forward economically. I believe we need a framework that's universal, that makes sure people can have trust in the technology that's being used, and that lets us innovate. We need to know the rules of the road. Anthropic, as many folks listening know, was one of the central AI LLMs being used by the United States government. And recently the current administration dropped it because it wouldn't agree to Anthropic's terms. Anthropic, by the way, is probably the best out of all these companies in terms of having a basic framework. That doesn't mean they're perfect; they're still a multibillion dollar corporation and their goal is to make money. But they at least have a scope in terms of what they will or will not do, and I respect that. They only had two conditions. One condition was that the AI would not be used to surveil US citizens. Only US citizens, it didn't say any other citizens, but that's another thing. Number two, that it wouldn't be used for autonomous weapons. Those were the two things. And allegedly one of the targets during this Iran war was selected using AI, and that was the strike that killed the girls at that school. What we're saying is there's a reason: there are some things, 100%, that should always have human intervention and decision making.
[00:34:02] Speaker A: And I agree with you on that, in the sense that, at minimum, when you have that, somebody's butt is on the line. That's what you're concerned about in these situations a lot of times: the further it gets from a human decision, regardless of whether the AI is capable of making the decision, the more plausible deniability people have, and nobody's responsible for any of it.
[00:34:29] Speaker B: And I don't agree with...
[00:34:29] Speaker A: We just can't live in a world where nobody's responsible for death, where it's just happening.
[00:34:37] Speaker B: Exactly, like, oh well, what are we going to do? But I'll say this, because, James, this is an important point. With AI, the decisions lie with the people, the individuals in the company. According to current US law, corporations are people. So if you're a person and you use AI, you as a person are responsible for its ultimate actions. Period, hard stop, until that changes. That's what I think.
[00:35:00] Speaker A: Well, yeah, but a corporation itself can't be put in jail. So the idea that we're going to make a corporation a person by decree of a court or whatever still runs into some realities.
[00:35:15] Speaker B: I agree. I was being facetious. Yeah.
[00:35:17] Speaker A: But nonetheless, I think that, yes, this is agenda driven. People who don't want regulation anyway are trying to bang this drum, because again, the regulation is so minimal. You made that point very well. I'll add this, though, just for a full understanding: generally speaking, in newer industries you do want a lighter hand with regulation, because you would like the industry to explore some and figure out where the problem areas are. Then, once you figure out the problem areas, you can put the regulation on without necessarily stifling innovation. That's the theory. But that doesn't mean it needs to be complete "you can do whatever you want because we're at the beginning." There still has to be some level of common sense applied, and this is why we have a government: to look and say, okay, what makes sense here?
What serves the people? We're supposed to have a government of the people, by the people, and for the people. But again, if corporations are people, that kind of gets bastardized a little bit. Setting that aside, we're supposed to have representatives who look and say, okay, does it serve Americans to have AI be able to surveil them, regardless of what the law says? That is something I think we would want our elected officials to say no to. Or having autonomous weapons going around killing people, autonomously, as the word says. Those are decisions we want policy makers to make. We don't need to see AI driven weapons kill people in order to know that we don't want that to happen.
[00:37:02] Speaker B: That's right.
[00:37:03] Speaker A: Yeah.
[00:37:04] Speaker B: Or AI driven things having conversations with our kids, sexualizing them. These things are... yeah.
[00:37:11] Speaker A: There are certain things we can figure out without seeing the AI go too far. Like, hey, we don't want to do that. Exactly. So I thought you used a good term before, false choice, in the sense that, yes, we don't want overly restrictive regulation on AI, and we do want it to have some space. But people have been around long enough, we've lived long enough, to say there are certain things we don't want AI doing under any circumstances. We don't want it going around just killing people. We're good on that one. So when we're presented with these false choices, even if they're dressed up with a rationale as I laid out, we have to be able to see the big picture and say, okay, we're not stifling innovation by saying that AI should not be surveilling us nonstop, when we have a Fourth Amendment that says that shouldn't be happening.
So, yeah, I think it's one of those things where the vested interests will try to lead people astray, and they'll succeed in leading many people astray. But the rest of us will have to come together and realize that, hey, somebody's trying to make a monkey out of us, and be able to stand up to that, hopefully through our elected officials.
[00:38:23] Speaker B: Absolutely.
[00:38:26] Speaker A: So I think we can wrap this topic there. We wanted to have a conversation today that allowed people to understand some of these issues. A lot of these issues are talked about at a high level, and obviously we didn't break everything down at a granular level.
[00:38:40] Speaker B: But.
[00:38:41] Speaker A: But we tried to give more perspective on a lot of these discussions that are happening. Again, you'll see a headline like "AI: zero economic growth," and that's pretty shocking. So what's the context of that? Or what's the context of the deregulation argument, and so forth? So, Rob, I appreciate you joining me. Tunde's out today. Tell the people where they can find you, man.
[00:38:59] Speaker B: Well, you can find [email protected], of course, and I have my own podcast, Disruption Now, as well. And I'll just tell folks, as a final note: AI is neither going to be your savior nor your villain. We have to hold both its problems and its potential at the same time, because I see people usually operate on one extreme or the other. I can't tell you how many comments I've gotten saying AI is useless, don't use it, it makes you dumber, and all these other things. Somebody said AI makes you dumber. I said, only if you're dumb. You have to figure out a way to use it to help you increase.
[00:39:35] Speaker A: Yeah, I was going to say, if you use it correctly, it won't necessarily, but it can.
You can become, just like with anything, dependent on it in ways you shouldn't. If you use it one way, it can increase your capabilities and won't make you dumber. If you use it another way, then...
[00:39:51] Speaker B: Yeah, it'll make you lazy, like anything else. And when we talk about policy, there are two extremes of this. We do need an overall framework. What don't we want to leave in the hands of an AI agent? What has to happen? What is our relationship between workers and society? That's bigger than AI.
[00:40:11] Speaker A: That's a big one.
[00:40:12] Speaker B: That's a bigger one. That's bigger than AI. This is not an AI question; that's a "who are we?" question, and one that I hope we answer better in the coming years. But as we look at regulation going forward, we also have to be careful of individual states that may be well meaning but cause harm too. There's a law in New York that is attempting to limit the ability to use AI to help people legally, and that's really just protecting lawyers. But individuals might need that help when they can't afford a lawyer. So there are ways to do this where we can both help people and protect society, but we have to be mindful of it and be nimble in our minds, because this is an opportunity right now. If you're an individual, a person, a business, what I know for sure is this: you need to learn how to use it and help your business do it safely. Learn it. There's no disadvantage right now to moving forward. You're only disadvantaging yourself if you don't figure out how to implement it in a real way.
[00:41:07] Speaker A: There you go. There you go. All right, so subscribe to the podcast, rate it, review it, tell us what you think, send it to a friend. Until next time, I'm James Keys.
[00:41:16] Speaker B: And I'm Rob Richardson.
[00:41:17] Speaker A: All right, and we'll talk to you soon.
