Is Facebook Making Big Money From Scam Ads? Also, ChatGPT and the Upsurge in AI Psychosis

Episode 341 November 19, 2025 00:36:31
Call It Like I See It

Hosted By

James Keys, Tunde Ogunlana

Show Notes

James Keys and Tunde Ogunlana take a look at recent reporting which raises the alarm about scam ads on Facebook and how selling ad space to scammers appears to be a substantial boost to the company’s bottom line (1:22). The guys then consider whether ChatGPT is making people crazy, or at least crazier, in light of recent revelations about people ending up hospitalized, divorced, or dead following intense interactions with it (20:19).

Meta is earning a fortune on a deluge of fraudulent ads, documents show (Reuters)

Here’s How Many Billions Meta Earned From Ads That Are Trying to Scam You (Inc)

Rohingya sue Facebook for £150bn over Myanmar genocide (The Guardian)

OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week (Wired)


Episode Transcript

[00:00:00] Speaker A: In this episode, we take a look at the reporting about Facebook raking in billions, pushing scam ads on its users. And later on, we consider whether ChatGPT is making people crazy or crazier in light of recent revelations about people ending up hospitalized or divorced or dead following some intense interactions with it. Hello, welcome to the Call It Like I See It podcast. I'm James Keys, and joining me today is a man who can make it look effortless even though it ain't easy. Tunde Ogunlana. Tunde, are you ready to bring that me against the world mindset here today? [00:00:51] Speaker B: Yeah, man. I feel like my wife needs to hear you say that quote. She might not agree with you. Got it. I'll just. Yeah, no, but it's better if it's like live. I'll call you later today, let you say it on speakerphone. There you go. There you go. Now, before we get started, I'll tell her you're my attorney too. [00:01:11] Speaker A: There you go. [00:01:12] Speaker B: There you go. [00:01:13] Speaker A: Now, before we get started, if you enjoy the show, I ask that you subscribe and like the show on YouTube or your podcast app; doing so really helps the show out. We're recording on November 11, 2025. And Tunde, we recently saw reporting from Reuters that Facebook is making billions of dollars selling ads and showing these ads on the platform, knowing that they're selling ads that Facebook has a pretty good idea are scam ads. And so anybody who's on these platforms has seen these types of ads and so forth and hopefully has ignored them. But, you know, I want to.
What stands out to you in this reporting where, you know, Facebook is already, you know, a rich, rich company, but they're making potentially up to 10% of their revenue from showing their users scams, things that they know are scams, even to the extent where we've seen in the reporting that they charge a higher price for these ads that they deem to be scammish, so to speak. So what really stands out to you in this? [00:02:12] Speaker B: That part stands out, actually, the part you just mentioned, that up to 10% of their revenue may come from selling scam ads. It's just like. So there's a lot there. I mean, I know this is a very interesting topic because number one, it makes me appreciate the concept of a shadow economy because I think, yeah, that. [00:02:31] Speaker A: Was where that was the camera. [00:02:33] Speaker B: Yeah. We don't appreciate, I think, just us as Americans in general, how much of our GDP is actually stuff that we probably wouldn't want to know about, you know, just from laundering money for drugs and human trafficking, all that kind of real shady, what would be considered underworld economy, to things like this, actual large corporations, companies that are big names in the S&P 500 type of thing that we think are all clean. And they also benefit, whether they look the other way with certain shadow economy activities or whether they themselves participate in things that we would otherwise look at as not that ethical. And I think that this is a big one because, to me, as a financial planner, I have to deal with elderly clients that get scammed. People in their late 70s, 80s, who are older, they click on a link off of an ad, maybe on Facebook, maybe on some other platform, they get text messages where they click on links. And as much as the family and people tell them not to do it, right? They're old and they're forgetful, memory and all that. And they're easy prey, they're easy marks.
And so it's like, I'm surprised, but I'm not surprised, James, you know what I mean? Like, it's one of those where this is kind of like, oh, okay, wow. But then it's like, number one, it's Facebook. And we know that for 15 years now we've been seeing internal memos leaked and studies where they know that their product is harmful for humans. Kind of like the tobacco industry was. [00:04:07] Speaker A: The scam ads, I don't think, even rise to the level of genocide. [00:04:11] Speaker B: Exactly. [00:04:12] Speaker A: I mean, just. I'm saying maybe this is a cleaner version of Facebook. [00:04:16] Speaker B: Yeah, exactly. [00:04:17] Speaker A: It's. It's. [00:04:18] Speaker B: It's like with the tobacco industry, like if we just found out that, you know, they were putting in, I don't know, some mundane chemical and not the ones that really were bad for us. But I don't know, man, that's why I say I'm surprised, but I'm not surprised. But I find it just interesting again. Again, like I said, the shadow economy thing, the fact this is a new industry, this whole tech thing with the social media and all that, and then the fact that, just like tobacco and oil in the last century, these guys have now, with their money, they've infiltrated power, right? They now have a seat at the table in American power. So the question is, will the regulator class, will the government actually do something to protect Americans against these kinds of scams or not? And I think probably not, at least for now, because of, just like in the 20th century, the relationship between money and power. [00:05:14] Speaker A: Yeah, yeah, yeah. Because a lot of the money is. So much money is made on the scams. And I looked it up, I mean, the UN estimates like 2 to 5% of global GDP is criminal activity, which is a lot. And that's probably undershooting it; it's probably higher. [00:05:28] Speaker B: Yeah, yeah. [00:05:29] Speaker A: Yeah, it seems really low to me.
You know, so I mean, so 10% for Facebook, based on, you know, this scam likely stuff, is probably right in line. And yeah, to me, the part that you said that actually. Like, I think the tobacco industry is a good comp in the sense that we do know the tobacco industry added things to their products, historically particularly, but even now; they would add chemicals that are bad for you to tobacco in order to make it more addictive, you know. Like, so they were doing that purely as a bottom line thing. You know, it was like, hey, if we make it more addictive, people will buy more and so forth like that. And yeah, I mean, but again, that's the same industry that was advertising to kids and so forth through, you know, all these different approaches. So that industry, if not restrained, if there's nobody looking out for the masses, basically, that industry would exploit us for a profit, is a pretty well established principle here in the United States. So from that standpoint, you shouldn't be surprised. You can't be surprised. As an attorney, I'm surprised more so at just the level of how Facebook has been able to, through lobbying and so forth, limit their liability for stuff like this. Because Facebook, or, you know, whatever, Instagram or any other stuff, this is a platform that they control. So if you go into a grocery store and swipe, you know, at the register, and then from there some actor, you know, oh, that wasn't actually a real register from that grocery store. That register was a register from Joe Blow Co. And then they steal your credit card and they start running all these charges. You would go to the grocery store like, yo, hold up, I was in your store. I trusted that I was doing business with you. How come I'm getting scammed while I'm thinking I'm doing business with you?
You would think that there'd be some type of liability to that grocery store, saying, hey, I'm in your room, I'm in your house, in your doors, I'm inside your building, and I think I'm doing business with you. Or that, at minimum, the people I'm doing business with have been allowed in here by you. So therefore, if there's people out here doing stuff that's underhanded, you have some level of responsibility for that. And in many areas of law, that's how it works. But apparently in this case, that's just not how it works. Or at least it hasn't been tested, because now this reporting's out, I'm sure there's lawyers looking at it. They'll be like, hey, you know, like, this does not pass the smell test, that you go to Facebook and stuff that Facebook is serving you through their technology is things that are scams, you know, not like just, oh, you know, they kind of make it look a certain way. And Facebook knows it, so much to the extent that they charge these scams more money. Why do they charge them more money? You know, because, one, they can. Cause who else is a scam? [00:08:03] Speaker B: I know, that's great. [00:08:04] Speaker A: And then, two, conceivably, you charge them more money because of the potential liability they have for putting it out there. So I think there's more that's going to come from this. And again, like we talked about, you know, what was in Myanmar, where the Facebook algorithm helped foster a genocide. And so we've seen Facebook is not some white knight actor. We know that they will cut corners or do things that will harm people in order to make more money. But this, to me, like, it seems like such a direct kind of, hey, we're going to use this platform to push this stuff that ultimately scams people. It seems like this can't continue. Like, this would seem like there's a lawsuit coming, a big lawsuit.
And again. But the ability to lobby to be able to get out of liability, you know, like just getting laws written saying, hey, you can't sue somebody for this, is pretty powerful. So I would imagine, if they haven't done it, they're working on it now because, you know, this would seem to be. [00:09:03] Speaker B: Well, there's. Let me jump in on that limited liability because I got a thought. But I actually want to stay on this for a second because it's a very good point you make, you know, and it reminds me of the gun manufacturers; it's another example of something that we all know is there in our country, which is the issue with guns. And I don't mean people that just want to hunt and treat the Second Amendment with respect. I'm talking about people that go do school shootings and people that walk into Walmart and kill 20 people or whatever. Right. And so my point is that it's come out recently, I think somewhere in the last two, three years, that the number one cause of death for Americans under 18 years old, who we otherwise call children, is firearms. That's crazy when you think about it. You're supposed to be the top country in the world and all that, and so. Well, I mean, we are the top. [00:10:01] Speaker A: Country in the world when it comes to that, aren't we? Okay, as far as kids getting killed. [00:10:05] Speaker B: By firearms, I meant. I meant top in other ways. Oh, you mean like we're supposed to be top in positive ways? That one to me is negative. Yeah. So. But in any case. But that's kind of why I say it's interesting. [00:10:17] Speaker A: Right. [00:10:17] Speaker B: But instead of talking about that, like I said, we live in a state where drawing rainbows on state property is illegal. Right. So what I'm saying is the direction of our conversation. [00:10:27] Speaker A: But hold on, hold on, hold on, hold on.
How many kids have been killed because people drew rainbows on state property? [00:10:34] Speaker B: I mean, I need to go investigate that, actually, because I apologize. [00:10:38] Speaker A: It must be a number that justifies that, you know, because it wouldn't rise to the level of the guns. Right? [00:10:44] Speaker B: Yeah, I didn't come prepared for that in this discussion. So since I'm not well researched, I guess you won this round. So the public. [00:10:50] Speaker A: Oh, no, that's not how the Internet works, man. You're supposed to say stuff and not know if it's true. [00:10:56] Speaker B: You know what's sad, James? [00:10:57] Speaker A: Oh, I'm sorry. Go ahead, go ahead, go ahead. [00:10:59] Speaker B: Now, I was out the other day and I saw an actual real rainbow in the sky, like a real one, because it had just rained. And I was kind of sad. I was like, man, leave it to this group of people to make something that beautiful into, like, the enemy of our, you know, cultural enemy. It's crazy. So anyway, the topic at hand, and that's why I say it's. It's like, I love going to shoot guns and all that stuff and have fun with my friends doing it. So it's not about not liking guns, but it's this idea that we don't seem to have that conversation. We want to talk about, you know, what the risk of trans people is to our kids or whatever people want to say about that topic. But we don't talk about the guns and the fact that statistically it's been shown that guns kill more kids than anything else in this country. And so it shouldn't be a shock that one of the largest companies, that makes up 5% of the entire S&P 500, makes 10% of its revenue allowing scammers to scam people on its own platform, its own customers. And so that would be like, James, if a bank found out that some company was just defrauding its customers and all that, and they just said, yeah, well, we're just going to charge you higher bank fees. [00:12:09] Speaker A: Yeah.
[00:12:09] Speaker B: You know what I mean? Like, we're not going to actually call the FBI and the Treasury Department and stop this and protect our customers. And so that's what I mean by it's a new industry. And they've been able to go through these regulatory seams, the cracks in the system. And I guess, like you said, through lobbying and other things and them just, you know, throwing money at the government now, I guess. [00:12:34] Speaker A: Well, yeah, but the thing of now, and I'm. Because I'm not. Hold on. [00:12:36] Speaker B: I'll just say this joke. If you fund and inaugurate a ballroom at the White House, then no one looks at you. That's apparently the message. [00:12:43] Speaker A: I mean, yeah, that's kind of how it works. That's how they think it works, because all of them put their money up on it. [00:12:50] Speaker B: That's what I'm saying. [00:12:51] Speaker A: This is buying up a get out of jail free card. But to me, the limited liability thing is very interesting because, like, there's a lot of debate in terms of whether service providers for Internet channels, you know, the ones providing, you know, the tubes that allow the Internet to be delivered everywhere, and then also the service providers, the ISPs and stuff like that. Whether, when people do something that is, quote, unquote, illegal or something like that, using their service, should they be liable for that? And there's a big difference. We've talked about this, because a lot of times the social media companies will say, well, we should be covered under that as well, because we shouldn't be liable for anything somebody says or does on our platform. Just like, you know, Xfinity or something shouldn't be liable if somebody's using, you know, their Internet connection in order to build a website that does something illegal.
You know, the biggest difference, though, is that the Facebook people and the social media people, you know, that stuff is algorithmically done, you know, versus, you know, the Internet service providers literally providing the service to anyone. You buy different levels of service, and it's not like, oh, well, Xfinity will show your website more if you do this or do that, so to speak. At least, you know, I'm not going to get into a net neutrality debate right now. But nonetheless, where Facebook is different from that is that Facebook does make decisions on what, on their platform, you see, what you don't see, how often you see it. So they aren't the same as when you're talking about these limits of liability for Internet service providers. That debate does not and should not incorporate the social media companies, who are making decisions, well, through algorithms, on what you see. So the Facebook algorithm is determining when you see these scam ads. And more interestingly to me also in this reporting was that if you click on one scam ad, the Facebook algorithm sees you kind of like as a dupe. And so it'll start serving you more scam ads, because, oh, yeah, this is the kind of idiot that clicks on scam ads. So let's show them more, because we can get more money by showing them more scam ads, because they're likely to click on them. And so to me, I mean, like, this is so offensive to me, you know. Like, we need lawsuits filed. This is just mind blowing to me. And this is going to be 10% of their revenue. So I'm glad we're able to talk about it. I know we want to get out of here, and I know you have one more thing, but I'm just blown away by this. This is.
Now, again, it's not as bad as genocide, which, you know, Facebook, or, you know, their algorithm, has been a part of. But this is. I mean, this is really bad, you know, because you're preying on the weakest of the society at that. So, I mean, let me just. [00:15:40] Speaker B: Let me just do a thing, because when you use that word, genocide, be careful in today's climate. You were talking about Myanmar. You weren't trying to bring in a whole different topic here, because I feel like when you use that word, it's a very loaded word in today's discussions, you know, generally. But here's the thing. Man, you make a great point. And I'm going to say again that the American people chose to go this direction. So that's why I just look at it as a matter of fact. It is what it is. For whatever reason, people wanted the type of leadership we have, and that's fine. We live in a democracy, representative government, so this is what it is. But one of the first things that was. [00:16:18] Speaker A: But to be fair, I think what you're saying is not that people chose to go this way, but more Americans than not decided that other things were more important. It was more important to stop the Haitians in Ohio from eating cats and dogs. Yeah, exactly. That was more important. So I get it. So, I mean, that's really what it is more than anything. [00:16:38] Speaker B: Thanks for being generous. So let me go back. No, on a serious note, James, just think about it: one of the first things that DOGE knocked out was something called the Consumer Protection Bureau, right? [00:16:50] Speaker A: Yes. [00:16:51] Speaker B: The part of the government that was supposed to stop companies from doing this to their own customers. Now, I remember reading that it cost the United States taxpayer $800 million a year to run that department. [00:17:04] Speaker A: So we had to free that money up.
So we can give 20 billion or 40 billion to Argentina. [00:17:08] Speaker B: Yeah, of course. Well, that math is interesting. But okay. But no, my point is, on a serious note, it cost $800 million to run annually, and it collected $21 billion in recoveries for American consumers. So it was a net profit to American taxpayers. But that wasn't good enough, because people like Zuckerberg, who bought themselves into an administration, or, let's say, you know, the Tesla guy. The bottom line is that's what they don't want. They don't want regulators looking at their companies, because it's much easier to make 10% scamming people who already are customers than it is to go and compete, right, and not be a monopoly and have to actually compete in the marketplace with other companies that might be able to compete with what you're doing. So that to me is one important distinction that, again, people aren't talking about: that the mechanisms to protect all of us as consumers have been dismantled. The destruction of the administrative state. [00:18:09] Speaker A: And they were in place previously. I mean, again, taking the longer view, these types of issues came up before. So these mechanisms to give consumers a heavy hammer and allow us to fight fire with fire through government action were put into place, and then these things were dismantled, you know, in the past year, so that the people weren't able to fight fire with fire with this type of conduct. So, I mean, yeah, that's the direction. Like you said, people have other priorities. You know, a good chunk of the American electorate have other priorities other than trying not to get scammed by huge companies, you know. So that's the world we live in right now.
And I guess, yeah, maybe once people feel the pain from it, they will decide that not getting scammed is a higher priority, you know. And maybe not, we will see. Or more people will come and participate that realize that these folks that are so worried about Haitians eating pets, they need to be voted out, they need to be outvoted, you know, like that. My thought is that, hey, these people's priorities are out of whack, you know, we gotta outvote them, because they're the dog following around anything. Just, oh, what about this? Oh, what about that? They're too afraid to make rational decisions, and so we got to outvote them. [00:19:25] Speaker B: But I think it's gonna be Hondurans eating their cats and pets next. That's right. Don't worry, wait. It'll be a new, it'll be a new book. [00:19:33] Speaker A: It'll be something else. It's always something else. That's what I'm saying. [00:19:35] Speaker B: When all the Haitians are gone, it'll be someone else. [00:19:37] Speaker A: These people are too easy to make afraid. They can be made afraid about anything. And so we gotta outvote them, because they're just so fearful, you know. That's just what they're gonna do. You know, you sitting here trying to talk reason to 'em, that's not necessarily the way it's gonna work. You can talk reason to them and they can agree with you, but then something else will be put in front of their face that'll make them afraid, and then all that goes out the window. And so that's kind of the way it works. [00:20:00] Speaker B: You know what, you'll get your tax cut though. So just take it and smile. That's what I'm doing. [00:20:05] Speaker A: So, but no, I wanna close this part up. We'll have a second part as well, and we appreciate everybody for joining.
We'll talk to you on the other side of the break. All right. Coming back. Tunde, we recently saw reports, and a lot of this is from OpenAI, that's releasing information about the intense interactions that a small percentage of users of ChatGPT have with the platform, and how some of these. There's a psychosis involved; there's people that are, you know, having episodes, you know, manic episodes, all those types of things, following their dealings with ChatGPT. And this is released in the context of, apparently, you know, OpenAI and ChatGPT trying to make their system better at dealing with these things. But it seems like there has been, and pardon the expression, some eggs being broken, you know, in terms of trying to make this omelet, with people having issues. I mean, even a term that we've seen is AI psychosis, where it makes it worse, or at least isn't improving it and isn't able to direct people to healthcare providers. So Tunde, what do you make of these recent reports, as far as how ChatGPT really seems to be, in some cases, not helpful to people's mental health, or even taking it the wrong direction? [00:21:25] Speaker B: It's very interesting, man. I think at first I was very alarmed when I saw the headline and kind of conceptually thought about this, that, you know, does ChatGPT have some outsized way that it is disturbing people? But then after reading the article, it made me kind of broaden my thought and say, no, that's what we're doing here. [00:21:48] Speaker A: I thought we were just reacting to the headline. [00:21:52] Speaker B: She read the article. [00:21:54] Speaker A: You're not doing the Internet right, man. [00:21:57] Speaker B: Yeah, yeah, man, you're right. [00:21:58] Speaker A: So, no, go ahead, please continue. [00:22:00] Speaker B: And the show's over. If you just want me to talk about your head, you can talk about the font.
Wired did a good job on the font, you know, and it's kind of cool, legible. I was able to read it and comprehend it. But. But no, it's. But it's. [00:22:13] Speaker A: But it's. [00:22:14] Speaker B: No, I think the bigger question is not so much to beat up OpenAI or ChatGPT. It's more of kind of like how I felt about social media over the last decade. Like, wow, this is the first time in, like, human experience that we have the ability, through this technology, for millions of people to go onto a system and tell everyone else what they're thinking. You see what I'm saying? It's like this look, all at once, into millions of people's brains. And so I think part of the issue is, I realized that living in the modern world is damaging for a certain amount of people's mental health. And I don't mean that as a joke. I just mean that, you know, if you think about it, the human brain developed, you know, out there foraging in the woods, right? You know, people probably spent a couple hours a day maybe figuring out how to eat. And then the rest was like, you know how you see your pets, like your dog or your cat, they kind of lay around and they don't do much. Right. I'm sure that's how humans spent most of their waking hours for most of humanity, up until the last couple thousand years of the agrarian age, and now the industrial age put us all in cities. So I think that that's probably the bigger issue, that there's still some people that have a tough time dealing with all the pressures of being alive in the modern world of the 21st century. And so, because when I looked here, I saw the percentages: 0.15% have a psychotic episode. And I think it's important to lay out some of the stats. There's around 800 million users a week on ChatGPT. [00:23:53] Speaker A: So this was self reported, you know, OpenAI is reporting. Yeah.
[00:23:56] Speaker B: And so they said about 1.2 million are possibly expressing suicidal ideations. 1.2 million may be prioritizing talking to ChatGPT over loved ones, school, or work. So, you know, prioritizing ChatGPT as a relationship and not other humans. And that 560,000 people a week may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis. So then I did this out of curiosity, James. I went to ChatGPT. I figured, let me go use the thing that the article's about. And I asked ChatGPT, what's the percentage of psychosis or psychotic episodes in the population? And the way it answered, it was 3% of everybody in the population will have a psychotic episode during their lifetime. And I thought, okay, that's a higher percentage than what ChatGPT is reporting. But also, to be fair, that's saying that 3% of all of us will have it one time at some point in the lifetime. So, you know, people live a long life. So we're looking at 800 million people a week, and maybe some use it every week, maybe some don't. But I feel like the numbers on ChatGPT are in line with the numbers in the greater society. So does that tell us this is just something like how 10% of all people are sociopaths? [00:25:19] Speaker A: But you can't say that from the information you have there: 3% over the course of someone's entire life versus what happened last week. You know, like, you can't extrapolate that. We don't know. [00:25:30] Speaker B: About 0.15% of users had. 0.15% had. [00:25:35] Speaker A: Yeah, but what I'm saying is. So my point is, I'm pushing back against the presentation of those numbers as certainty, is what I'm saying. [00:25:44] Speaker B: Well, what I'm saying is, it's.
But it's understandable that if 3% of all humans have a psychotic episode during their life, then 0.15% of weekly users of ChatGPT expressing something similar isn't a surprise. I'm not saying it's a direct correlation. I'm just saying that it's a small percentage of the population, so it's not a surprise. [00:26:03] Speaker A: No, no, I'm with you on that. I think, yeah, that would happen. Shouldn't be a surprise, because this is out there, you know. [00:26:10] Speaker B: Yeah, that's my point. [00:26:11] Speaker A: Yeah, yeah. So I just didn't want to make it seem like, oh, okay, the numbers line up, is kind of more so what I was saying. So I'm not against your point. But for me, and I'm with you, you know, on the headline, I would say that the copywriters did a great job with the headline, because the headline is like, what, what is happening here? And, you know, you come in with this level of skepticism. Now, part of the discipline, I think, that is helpful in this modern world is, even if you come into something with skepticism or with a certain feeling, to try to digest the information. Not from the standpoint of, since I'm mad about this, I'm only looking for the things that support me feeling better about being mad about it, or, if I'm really excited about this, let me ignore all the things that are possibly warning signs, you know. So to me, once I looked at this closely, what I really saw was, it reminded me of social media in the sense that it looks like an incredible opportunity, actually, to intervene, to provide an outlet to people. And I'm not one of these people saying AI is going to solve all your problems. But it's possible, though, that people would go to these AI systems a lot of times that may not be ready to talk to a person. So I think Altman actually kind of recognizes.
And so if you can use the AI systems, once those types of attitudes or mindsets are detected, to direct them to, like, professional help, you know, and not try to solve the problem on AI, by the way, but actually direct them to a human being that can help them, then that potentially could be, you know, life saving for people, or, you know, relationship saving, or, you know, anything like that. So there's an opportunity I see there. And the reason it reminds me of social media is that in the beginning, social media, it was all opportunity. It was like, oh, man, this is amazing. You know, you might be able to connect with people, you might be able to broaden people's horizons, you know, everything like that. Now, once money became the primary objective of social media, all that opportunity went away. It became about, you know, isolating people, making them feel afraid, making them feel like they can't leave the platform and all that kind of stuff. So the opportunity was lost, you know, once the profit motive took over. But there was an opportunity at one point; if you're younger, you just have to trust me on that. You know, it was like, oh, man, this is potentially great. And so obviously that track record doesn't make me feel great about this opportunity that I see with the AI chatbot, because it's like, well, they already said that they're trying to pivot away from this initial mission of not being about maximizing money to being about maximizing money. They're already working on doing that now. So this opportunity is there; I don't necessarily think that they'll follow up on it, at least in the short term. I mean, society might, you know, boomerang back at some point, once people get tired of everything being about maximizing money all the time, you know. That's all it's about.
But the other thing I'll mention, the piece about this that would be concerning, is that no matter what you put into the AI system, it's not really qualified to deal with a lot of these matters. So there's a very fine line between getting someone to express their thoughts, getting someone to talk about it, and then directing them somewhere else, versus actually just responding to it, and responding in ways that are callous or that may contribute to more psychosis. And I think that's where you've seen these reports where it's like, hey, some people are having bad episodes after they deal with ChatGPT. So that tightrope seems to be a really tight one, where you can receive this information and try to push it in a productive or positive way, but not take a person into more depth.

[00:29:50] Speaker B: Yeah. You said something interesting there about social media, that it could have gone one direction but went another. Which is very true, because I find myself recently watching videos on YouTube of dogs protecting the babies in the house. And it's really cute. And I started thinking, man, this is what it used to be like on the Internet. And it's funny, I looked at one and then the algorithm started sending me more. So I'm like, maybe I gotta click on this stuff a little more.

[00:30:25] Speaker A: We'll give you more.

[00:30:26] Speaker B: No, but like you said, when industry gets its hands on something. It made me think, first, of Nikola Tesla, who was deluded into thinking that he could give away electricity for free because he was going to do the right thing. And then a guy named Edison teamed up with a guy named John Pierpont Morgan, and they made it a for-profit endeavor.
And then that got me thinking. I remembered hearing, not a rumor, but some historical fact, that the first electric car was actually invented in like 1895 or something.

[00:30:57] Speaker A: Yeah.

[00:30:59] Speaker B: So then I looked it up, and I'm wrong. The first electric cars were actually developed in the early 1800s, with Robert Anderson creating a crude electric carriage around 1832. I had no idea about that. So the idea that things could be a lot different if we went certain directions, and we didn't. And so, yeah.

[00:31:20] Speaker A: History is littered with that.

[00:31:23] Speaker B: Almost 200 years later, and all the wars for oil in the Middle East. Wow, that's a whole other show. One of the things, going back to what you're saying, because I think it's a great point you make, is that AI could be used to uncover some of these concerns that we might have about our fellow citizens if they are vulnerable emotionally.

[00:31:44] Speaker A: Particularly ones that they may not actually feel comfortable talking to a person about like that.

[00:31:48] Speaker B: No, I mean, I think that's a great observation. And like you said, instead of trying to build an AI to solve it, direct them to a professional, all that kind of stuff. Because one of the things I highlighted was that GPT-5, the latest version, is designed to express empathy.

[00:32:07] Speaker A: Yeah.

[00:32:07] Speaker B: Designed to express empathy while avoiding affirming beliefs that don't have a basis in reality. And then my conspiracy brain was like, well, who decides what's real? There we go. That's part of the beauty of humanity.
Some of the greatest things and inventions and the greatest leaders we've ever had have come from people who, when you looked at them, were probably a little bit deluded. Right?

[00:32:32] Speaker A: Hey, man, I'll tell you this. To people 500 years ago, self-governance was a delusion.

[00:32:37] Speaker B: Yeah, but it.

[00:32:39] Speaker A: Reality is.

[00:32:40] Speaker B: There's a lot of religions, too, man. As I was saying, there's a lot of religions where the main guy who people all think is awesome now was probably seen as a crazy guy going into a cave talking to himself.

[00:32:51] Speaker A: Right. No question about it.

[00:32:55] Speaker B: Yeah. And it's interesting, man. It got me thinking about a personal conversation we had the other day, about how I was recently reminded of how Socrates lost his life. We're used to him being seen as one of the greatest philosophers of all time and a father of modern thought and philosophy. But here's a guy who was forced to commit suicide based on a trial jury. At some point somewhere in his society, he was seen as the enemy. And that was what got me when I read that line about helping people avoid affirming beliefs that don't have a basis in reality. I'm thinking, yeah, I like making fun of crazy people, but chaos theory tells me that we kind of need some crazy people.

[00:33:42] Speaker A: Yeah, yeah.

[00:33:43] Speaker B: They're the ones that shake it up for the future.

[00:33:45] Speaker A: Yeah. Or just in general, we gotta be very careful about designating that certain people are gonna define what reality is.
We got to be very careful with that, because that can get down to a place where, in the societies that tried it, a lot of times, like you said, you got free thinkers being forced to commit suicide, or Galileo being on house arrest. A lot of times that hasn't worked out for the benefit. So defining reality and keeping everybody in that one reality is dangerous.

But no, I prefer at this point to still look at this as a potential positive, because again, it's not every day that there is even an opportunity to intervene with these things. And how it does it, the execution, matters a lot. But potentially, there's an opportunity there. I'm not immediately gonna go to the potential downside. I recognize that it's there, though. Anything that can be used for mind control, which social media and algorithmic curation is, and AI, being able to engage people in conversations in ways that can alter your perceptions, is very, very fraught. I'll just leave it at that.

[00:35:01] Speaker B: I was about to say, no, I'm not going to be glass half empty today. I was going to say, man, I'm just glass half full. And then with that last like ten seconds of your ramp down. So you almost had me being positive, like, oh yeah, I'm gonna be like James and be like, ah, you may not be negative about this stuff. I'm like, nah, you lost me on that last one. So now I'm all about, it's mind control. Somebody's gonna get all this data and figure out how to screw people up even more psychologically. And I'm gonna go, sorry, I'm gonna go get my tax.

[00:35:36] Speaker A: I can't believe we ended up... call it a day, man. I can't believe. Actually, no, I can believe that.
I just can't believe how close we were to ending up with you feeling good about this, and then.

[00:35:47] Speaker B: And then you screwed it up, man.

[00:35:49] Speaker A: What is it?

[00:35:49] Speaker B: You stole victory from the jaws of defeat, or whatever that saying is, right? I'm about to do a "fool me once" again. There you go.

[00:36:00] Speaker A: But I think we can wrap it there, before it gets any darker for you. We appreciate everybody for joining us on this episode of Call It Like I See It. Subscribe to the podcast, rate it, review it, tell us what you think, send it to a friend. Till next time. I'm James Keys.

[00:36:13] Speaker B: I'm Tunde Ogunlana.

[00:36:15] Speaker A: All right. We'll talk soon.
