Episode Transcript
[00:00:00] Speaker A: In this episode, we take a look at the big move two big Hollywood studios just made to try to kneecap what's going on with AI.
Hello, welcome to the Call It Like I See It podcast. I'm James Keys, and joining me today is a man who's had sick takes for decades at this point, Tunde Ogunlana. Tunde, you ready to show them how even 28 years later, you still got it?
[00:00:39] Speaker B: Yes, sir. And they're so sick. I gotta take em to the doctor today. So.
[00:00:44] Speaker A: Hey, might be. Might be past that point at this point, man.
[00:00:48] Speaker B: Yeah, but the 28 thing references zombies, so they might not be curable.
[00:00:52] Speaker A: Yeah, I was gonna say it might be too late for that.
[00:00:55] Speaker B: Yeah, so I get it.
[00:00:57] Speaker A: Now, before we get started, if you enjoy the show, I ask that you subscribe and like the show on YouTube or your podcast platform. Doing so really helps the show out. We're recording on June 24, 2025, and we just saw two of the biggest players in Hollywood, Disney and Universal, file a federal lawsuit against the AI company Midjourney for copyright infringement. Now, the lawsuit takes issue with the way that Midjourney generates near exact copies of characters and images that are protected by copyright and owned by Disney or Universal, like Homer Simpson or Darth Vader. But it also attacks the larger way that AI models generally are trained, and this would affect the entire AI industry: indiscriminately, on any content that's available over the web, which will include a lot of things that are protected by copyright. So, Tunde, you know, Hollywood studios historically have been pretty slow to do much about AI. I know a lot of the talent has been very concerned about this, but the studios themselves, not so much; in fact, a lot of the thinking was that the studios were just trying to figure out ways to leverage it for themselves, as opposed to being too worried about what the actual AI companies were doing.
So this is kind of a sea change in the way that the big studios are looking at it. So do you make anything in particular about the timing of the lawsuit and just kind of the direct attack that we're seeing here from, again, two of the biggest players in Hollywood?
[00:02:28] Speaker B: I don't make much of the timing. I'm thinking out loud here, and I know in your world of the law, a big case like this takes a long time to prepare. And the fact that, like you said, two big titans of the industry are coming together on this lawsuit probably means they've been working on it for a while. So I'm not going to look at the timing as anything to comment on. But I would say this: the timing may make sense now because of what you said leading into the question, which is that AI has become a threat to those industries and their bottom line. And it's clear, and they have documented that in this case against the company they referenced, Midjourney, which I know we'll talk about. But I also think you made a great point, which is that this is classic established industry tactics. A new technology evolves or is released.
The first instinct is, hey, how can we take advantage of this ourselves? And I do think that, yes, there's now beginning to be this actual split we can see between the, quote unquote, I'll say tech bros, people like the Sam Altmans and the Musks and the Thiels and those guys, and traditional, let's say, media.
And when I say media, I'm including everything from Hollywood, movies, all of that: the traditional industry that has a large infrastructure and a big scaffolding. And it's interesting to see that some of the case made by traditional big American corporations like Universal and Disney is actually the kind of case you would see made by governments. They're actually making arguments about not disrupting the employment market and how much they pay out; I think I saw a number of $229 billion in total salaries paid out.
[00:04:21] Speaker A: Yeah, talking about the public good, basically as part of their argument: hey, this is bad for the public good. Well, and to be specific about what you were referencing, I'm an intellectual property attorney, and this deals directly with copyright infringement. So that's your reference to my field. And to me, I actually make specific note of this: I think the timing is significant.
You know, the sea change is really the biggest factor to discuss, and you kind of alluded to it: I think they're seeing them as a threat more now. Maybe four or five years ago, the hope was that they could swallow these AI companies whole, so let them germinate a little bit, let them do their thing, and then we'll just take them over. But now the AI companies' power, their influence, may be something that the establishment companies, and I would say the Hollywood studios are the establishment here, are starting to be concerned about. And so, yes, you can view this through the lens of the old story where upstart businesses come and try to subsume the existing companies. And we've seen that even within the entertainment field, where you've got players like Apple and Amazon and Netflix now that dwarf the size of the Disneys and the Universals in many respects. So I think the lesson, though, and I heard this in the discussions because this topic was talked about a lot last year, but from the context of the creators, the people that work for the Hollywood studios, particularly with the strike in Hollywood last year: they wanted protection against AI, but they wanted protection against AI being used by the studios to replace them. Whereas here we're talking about the studios suing an AI company, trying to make sure that the AI company doesn't replace the studio. So everybody now is looking at the AI companies like, hey, you guys are taking all of our existing stuff, you're training your AI models on it, and then you're planning to replace us with it.
And so I think they all see the threat now, and maybe there's more alignment from Hollywood. And for an analogy, you might look at the music industry. If you look back 20, 25 years ago, it kind of missed this opportunity when music was going more digital, which was helpful for the consumer and actually drove innovation in terms of delivery of music. We don't all have to go buy physical CDs to own something, and we don't have to buy the whole album, we can buy songs, and then it led ultimately to streaming, where you can stream anything for a set amount of money. So that ended up being pro consumer. But the music industry players were very slow to react to the changes in technology. Now, it's not apples to apples, because those weren't players that would necessarily come and take over everything in the same respect, but it's still the same kind of thing. So this may be an earlier action that's happening now, saying, hey, before these people get so big that we can't take them out anymore, this is our shot to take a shot at them, and to try to establish some legal protections for copyright, for our legal ownership of this stuff, so that they can't just use it all, grow from it, and then wipe us out completely.
[00:07:33] Speaker B: Yeah.
And I think part of it is, as you're talking, it reminds me of some of my thoughts in preparing, which is basically: this is what happens when we have these new disruptive technologies. There are going to be players that find their way through the seams and figure out how to get ahead of certain things, you know, and make a profit doing it.
[00:07:55] Speaker A: Think about Amazon and bookstores. We grew up in a time when there were bookstores everywhere. Disruptive technology. Amazon leveraged that, and now there aren't bookstores everywhere. But.
[00:08:05] Speaker B: Well, this goes deeper, because, putting my professional hat on, this now is at the point of piracy, which, you know, my understanding is Amazon wasn't engaged in.
And so what I'm getting at is, I'm reading here now from the article, specifically about the lawsuit against the company called Midjourney. Midjourney charges monthly fees ranging from $10 to $120; the $120 one is a mega subscription, I guess. Their revenue was $50 million in 2022, and last year, in 2024, it was $300 million. So that is an exponential jump. So what explains that jump? And this is where they showed images of this in the article.
[00:08:45] Speaker A: Which we'll have in the show notes. But yeah, go ahead.
[00:08:48] Speaker B: Yep. So if I go to this Midjourney and ask it to create an image of Darth Vader, it basically creates exactly the image I want: Darth Vader. The difference is, under other industries and the rules that have been built around property rights in your world, they would have to give a piece of that revenue back to Disney. They would have to license Darth Vader's image from Disney and have an agreement with Disney and say, okay, for every $10 subscription or $100 one, you guys get $2, or whatever the agreement is. Right now they're making all this profit and generating all this revenue without compensating those who own the intellectual property rights. Because remember, George Lucas designed Darth Vader and created him.
Right. Then he sold that to Disney over a decade ago and now Disney owns the rights to that.
[00:09:45] Speaker A: Yeah.
[00:09:45] Speaker B: So you're right. This is a fundamental shift.
If this is allowed to continue, which is why the lawsuit's interesting.
[00:09:51] Speaker A: And yeah, that's Disney and Universal's point: hey, your customers are using this to create things that would otherwise violate our intellectual property. Right. And so there's two pieces; I think it's worthwhile to take a quick second to lay out the two main thrusts of this. And this is what we were talking about offline before.
One is what the AI companies are doing, and this is all of them; this is a broader issue in AI. They take existing material and they, quote unquote, train their AI on that. So they'll take, not a million, but say a thousand images of Darth Vader and run that through the system, and the system learns how to draw characters from that. But obviously they'll run all characters through. And so these AI systems have been trained on these copyrighted images. So, yes, if you're asking for, I think another example was, you give it a prompt to generate toys, action figure toys, it'll start generating Buzz Lightyear, because it's been trained to understand that that's what a toy looks like. Or, give me a villain that wears all black and wears a mask: it's been trained to understand that that's what that would look like. And so there's the training of the AI models using this copyrighted material and whether there needs to be a license.
A license is an agreement to allow somebody to use something that is otherwise legally protected. And there are other AI industry groups that are in line with this and saying, hey, we have licensed libraries that you can train on. So there is a market developing for this. So there's the training piece, and then there's just the straight up output piece: when it generates an output that is a copy of Homer Simpson or Darth Vader, then that itself is, hey, you've made a copy of copyrighted material. You haven't changed it, you haven't created something else; this is a copy of ours. So in this lawsuit they're going after both of these things, and really the training one is going to be the bigger fight ultimately, because there's not much to argue about when you are generating exact copies of existing characters which would be protected by copyright.
So the other thing I wanted to get with you on, you know, was the issue with the training, because that seems to be shaping up as a key issue of our time. And the reason I say this is because we hear there's a lot of lobbying going on in our government talking about this. There is the argument being made by the Sam Altmans, for example, that AI companies need to be able to train on whatever's out there indiscriminately in order for our, quote unquote, US based companies' models to be able to keep up with models that are being developed in China. And so you'll hear it termed as a national security issue: hey, we've got to disregard copyright law, we've got to disregard everything else, and be able to train these AIs so they have the most robust, the biggest library to train on. We can't start cutting up these libraries into stuff we can train on and stuff we can't train on, because otherwise the Chinese companies won't be subject to these restrictions and our companies will fall behind. And so I'm going to ask you this, and I know the question is going to sound crazy: do you think we should buy this kind of argument from the AI companies, that basically the existing legal structure shouldn't apply to them because we've got to keep up with China? Or do you think they're kind of setting us up for the okie doke here?
[00:13:22] Speaker B: Hold on, let me check what percentage of my portfolio is in tech stocks before I answer that.
[00:13:28] Speaker A: No, I'm just saying, put on a different hat, you got a different answer, right?
[00:13:31] Speaker B: Hold on. Yeah, if it's 90% in tech and AI related stocks, then I'll have to agree with OpenAI.
If I call my stock broker and we're only 10% in tech stocks, then I'll have to say yeah, especially if.
[00:13:46] Speaker A: You're in Disney or Universal.
No way.
[00:13:51] Speaker B: Let me be honest, as an American capitalist, all of that's going to determine the outcome of my answer. But on a serious note, I mean, look, it's funny, as you were talking I had my own stuff ready to go, but that just reminded me of another example where we've had fear mongering about China in my lifetime: the environment, right? Very similar. We shouldn't have environmental regulations and try to halt pollution in the United States, because China's just going to pollute the air with their factories anyway.
Okay. So then we shouldn't still have ideals and hold ourselves up to better standards and maybe try to bring the Chinese along to better standards as well? So, you know, on that argument, I agree with you. When I saw that, and I want to actually read a quote regarding China that they gave, that's exactly what I thought of: just the fear mongering, the projection. And the reason I use the term projection specifically is because what they're asking for in the United States is for us to allow them, the tech industry, to do to all of us what they are accusing China of doing to all of us.
[00:14:57] Speaker A: What they're trying to scare us that China's gonna do.
[00:14:59] Speaker B: Yeah, that's what I'm saying.
[00:15:00] Speaker A: That's why I said the TikTok thing; I thought that's where you were going. Because it's like, oh, hey guys, we've got to protect you from TikTok because they're stealing all your data and they're spying on you and stuff. And it's like, well, hold up. Isn't that what you guys are doing too?
[00:15:12] Speaker B: I mean, that's a good point too.
[00:15:14] Speaker A: With you doing it. Not, you know, but, but I should be worried.
[00:15:16] Speaker B: That's what I mean, it's a great point. It's another projection, right? They're trying to get us scared of someone else doing something that they themselves are doing, or at least desire to do. And so, here's why I started laughing. I'm gonna quote this paragraph here.
So, quoting here, this is OpenAI's response to, I guess, the people who wrote this article when they interviewed them, quote: OpenAI's models are trained not to replicate works for consumption by the public.
Instead, they learn from the works and extract patterns, linguistic structures and contextual insights. I just thought to myself, man, this is this great example of these guys hiring people to use big ass words to throw everyone off the scent, because that really doesn't mean anything. So they don't replicate works; they train their models to create the same thing. So it is still replication.
[00:16:04] Speaker A: That's kind of the subtlety. That's an expansive way to say what I said five or ten minutes ago. You're giving it, in a sense, thousands and thousands of examples of something so that it can then supposedly understand how it works and come up with its own solution. The issue is that it doesn't always play out like that, but go ahead.
[00:16:25] Speaker B: And the term expansive is interesting, because what is expanding is the pocketbook of the lawyer who wrote it. Right. This reads to me, as a business guy: oh, they hired a good firm that they paid $1,000 an hour. That's what they did. And they just put a bunch of words in there to make someone say, oh, these guys are smart.
So then I'll finish. The last quote is from OpenAI: this means our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without, and here's what I underlined, eroding the commercial value of those existing works. And I just thought, that's absolute BS. First of all, they are making something up. They are making up their own definition of what aligns with the core objectives of copyright and the fair use doctrine. That's why I guess we have this lawsuit, because the court's going to decide whether they're full of crap or not. And then.
[00:17:20] Speaker A: Well, no, you're right, though. They're trying to strong arm new understandings onto the public. What they're saying doesn't make sense under a traditional understanding of all of these concepts of copyright law and fair use, which we don't have to go into. But what they're trying to do, basically, is say, we're going to say this now, and then we're going to try to strong arm it into being what everybody understands going forward. And it's like, well, hold up, that didn't actually change what's happening. You're just trying to change the paradigm in which people perceive it.
[00:17:49] Speaker B: Hey, it's alternative facts.
[00:17:51] Speaker A: I think that's exactly what it is.
[00:17:54] Speaker B: But here's the one that kills me, reading that same sentence: using existing works to create something wholly new and different.
So in that sentence, they're acknowledging using existing works, and they're saying they're creating something new. And that's why I ask the audience to go look at the show notes you'll put up, because if you look at the image of Darth Vader that Midjourney's AI created, it is exactly the same.
He's got a red lightsaber, the same black suit, the whole thing; even the background looks like a Star Wars set.
So the fact that their AI recreated it, if I type in I want to see Darth Vader doing something, it's giving me the same thing. Like you're saying, the language is just BS. And so, long story short, the part I underlined, without eroding the commercial value of those existing works, is a lie. That's what the lawsuit against Midjourney shows. Like I said before, $50 million in 2022, and two years later, in 2024, they made $300 million and didn't give anything back to Disney.
[00:18:56] Speaker A: Basically, your point being, and just for the purpose of the audience, Midjourney is a lower level AI company, less capitalized, less funded than OpenAI. In fact, a lot of people think it's strategic that Disney and Universal went after Midjourney: they're not going to have as good lawyers, and maybe that's where we want to set the example, with this mid-level AI firm, versus going after one of the top guys, where we might lose and then that creates this law that's going to hurt us all. But based on this concept that OpenAI is trying to will into existence, all of that growth from Midjourney was supposedly not at the expense of the existing industry. It's like, hold up. So that many more people were just creating more images, as opposed to buying images that existed; they were generating them. And so, yes, I think it's a great example of our modern time, where there's this thing where you just say something, and by saying it you're expecting people to believe it's true, even if it's demonstrably false. And we see this in our political game a lot. So I guess they're just taking cues, because it seems to be successful in many respects.
[00:20:04] Speaker B: Yeah.
[00:20:04] Speaker A: Where I'll watch a video of something, and then somebody will go out there and say, oh no, that's not what happened, something else happened. And it's like, well, but I'm watching the video, and this isn't a deepfake. So you're not saying the video is false, you're just saying, what do you always say? The don't believe your lying eyes type of thing. And so, yeah, I think with the training issue, like I said, where we get into this is: even if you take what OpenAI is saying at face value, hey, we're just trying to build the most robust model possible, we want a model that has seen every permutation of everything, and then eventually, as we deploy it, we're going to lean it more towards new stuff and not replicating stuff. Even if you take that at face value, that's still a temporary game. Ultimately their game is to be able to do what Disney does or what Universal does without the need for Disney. So the question is whether the content that Disney and Universal have built up over 100 years, and are continuing to build, can be used towards this end goal of an AI company ultimately replacing them in five years, and doing so without compensating them at all. And that's just not the way the law has been set up historically.
Now, one interesting thing about the timing of this, also, is that in the 2024 election, this was an issue where there were a lot of people cozying up to candidates and saying, okay, I'll give you money, I'll support your candidacy, if when it comes to AI you take a more hands off approach and let us do what we want to do. And even in the budget bill that's been bouncing back and forth in Congress for the past month or so, one of the provisions in there is something that prevents states from doing anything on AI, any kind of restriction or limitation on AI, for the next 10 years. And so there is an impetus to act right now.
If you're going to speak up about AI, and particularly about AI being able to use all your stuff to build a better model to ultimately replace you, now is the time to speak on it. Because from a federal standpoint, at least in the executive branch, it seems like the walls are going to start closing around you, where you might speak up too late and look up and you're owned by OpenAI or Midjourney or something like that, and you're just working for them in whatever capacity they say. 'Cause they don't need much work, you know; they've got the computer generating everything. So any final thoughts before we get up out of here?
[00:22:45] Speaker B: Yeah, man. Just to finish on that note, you know, that's why I found it interesting that in the suit, Disney and Universal cite the public good. Because I almost thought, well, shouldn't the government be advocating for the public good? Like, shouldn't that be one of the agencies, like the Federal Trade Commission, saying, hey, you guys can't do this, because it's gonna disrupt society?
[00:23:06] Speaker A: Nobody does political donations for public good.
[00:23:09] Speaker B: I know, it's hilarious. So.
But that's what I mean. Like, we're relying on the old industrial age behemoths of Disney and Universal to save public jobs, so we can.
[00:23:21] Speaker A: Still have human animators and people writing stories and so.
[00:23:25] Speaker B: Yeah, yeah. But it's interesting, 'cause I mean, I didn't know those stats: 2.3 million jobs and $229 billion in salary. So again, the way the current system works, that's important for the system. If these guys, the tech bros, have got a different way and they're going to do the universal basic income and bring about techno feudalism or whatever they're doing, you know, maybe they need to show us a little bit more behind the curtain on all these crazy ideas. But I agree with you, and that's where I was going to land as well. Again, quoting the article that I'm citing here: OpenAI urged the US government to enact federal law that preempts state laws attempting to regulate AI. And it says that federal law, OpenAI suggested, should set up a, quote, voluntary partnership between the federal government and the private sector. So it's voluntary. So I gotta trust these tech bros are gonna voluntarily just come to the government and say, oh, we're gonna do the right thing for all these people, after all these shenanigans we've seen. So, yeah, that's what I was thinking too, James. Just to finish it off: right now, this is what happened. We got the tech bros running the government.
And this goes back to what you had brought up. I wasn't going to go directly there, but it's good you did, because it's all connected: money and power.
And this is, again, the laws of unintended, or maybe they were intended, consequences of things like the Citizens United ruling of the Supreme Court over a decade ago, and the chipping away of campaign finance laws, allowing corporations in and allowing money to be considered speech, like a person's speech. And that is now legal in the United States. So to your point, whether it's the UAE giving a President of the United States $2 billion via a meme coin that the president's family owns, or whether it was tech bros lobbying going into an election, this looks like the result. And it looks like the type of thing that most Americans generally haven't liked when other industries have done it in the past, like the oil industry, which has been seen as pushing around the government and taking us into wars in the Middle East and all that. Now we've got tech bros in there, because oil was the commodity of the 20th century; now it's data and information. So now the tech bros have the money, and they're beginning to influence the power structure.
[00:25:40] Speaker A: Yeah.
[00:25:40] Speaker B: And we're watching it.
[00:25:42] Speaker A: Yeah. I mean, and I think it's always important to remember, we see these headlines, but the headline doesn't include that there are strings attached to this money. That's the thing you kind of have to know yourself. Like, oh, okay, this person got a free plane. Okay, well, it wasn't really free; there are strings attached. And so when you accept these gifts, particularly on that level... We have, I guess we still have, but it's just not enforced, an emoluments clause in the Constitution to stop officials from taking gifts from other countries, because we didn't want a bunch of strings being attached to all this stuff. And if you've got the wrong person in there that wants to take all this stuff for free and then say, oh yeah, then I'll take care of you in this other way, then we end up in a situation where our leadership is beholden to either specific industries, like the AI industry in this case, or other countries or whatever. And the last thing I'll say on this is just that I think what's obscured a lot of times in these arguments is that there is another way. They don't have to take all this data to train on for free. There are agreements that can be made. There are always business solutions to these types of things, where you put up some money or you put up some equity and say, okay, we want to train on this stuff, but we're going to compensate you for it. And then everybody can win and all the boats can rise. So what's happening right now is more of a strong arm technique.
Like, look, we're not going to pay you anything, we're not going to give you anything, we're not going to do anything. We're just going to take it, and you sue us if you want to stop us. And then we're going to try to weasel our way into the government, so that if you sue us, we can exert influence in other ways to try to win the lawsuit. But there is another approach they could take. These guys are raising investment in the billions all the time. They could pay for this stuff and use it, and I'm in favor of them training on large data sets; I just don't think you should be able to take it. And this isn't the only lawsuit. Reddit has sued an AI company. We've seen Thomson Reuters, which is a legal research company, sue and win against an AI company that was trying to take all their data for free and use it to replace them. So the courts, it's harder to exert a bunch of influence on the courts quickly; it takes more time, as we've seen with the effort to influence the courts over the last 20 years. So I guess that's kind of the last bastion right now for this type of stuff. So we'll see how it plays out. But it seems like a lot of the major players now, and you would include the Hollywood studios, are at least on the same page that, hey, we can't let these people just run roughshod over us, or else we're going to look up and it's going to be too late. And I think we're at that point right now.
[00:28:26] Speaker B: No, and I think the public just needs to be aware of these games, like this whole thing about the 10 years where, you know, states can't fight against this, if it works.
[00:28:37] Speaker A: States aren't allowed to put in laws, you know, like, and it's like, that's.
[00:28:40] Speaker B: What I'm saying is, so if this is allowed to go, this is a big precedent in the United States that could upend states rights So, I mean, that's what I'm saying. This is.
[00:28:46] Speaker A: I mean, that's a pretty drastic thing if you think about it. Like, we're gonna make a law that says nobody else can make any laws about this.
[00:28:52] Speaker B: That's what I'm saying. So.
[00:28:54] Speaker A: Yeah.
[00:28:55] Speaker B: So. But, yeah, we're gonna live through seeing if it happens or not.
[00:28:57] Speaker A: You get what you pay for. At least you try to get what you pay for, and that's what we get, you know, as far as...
[00:29:02] Speaker B: I think I'm gonna run for president, is what I'm hearing, because it seems a lot more lucrative than me trying to work every day.
[00:29:08] Speaker A: It's been made that way, that's for sure. So you get yourself a meme coin, you do a bunch of things, man. So. But I think we can wrap this topic from there. We'll have a call out later today, so check that as well. Subscribe to the podcast. Rate it, review it, tell us what you think. Send it to a friend. Till next time, I'm James Keys.
[00:29:22] Speaker B: I'm Tunde Ogunlana.
[00:29:24] Speaker A: All right, we'll talk soon.