Episode Transcript
[00:00:14] Speaker A: Hello.
Welcome to the Call It Like I See It podcast.
I'm James Keys, and in this episode of Call It Like I See It, we're going to discuss the turmoil that took place at OpenAI this past couple of weeks, including key things that seem to have led up to it, as well as the path things seem to be on now in its aftermath. We'll also consider whether the exercise of caution in the development of AI is even possible, you know, in our current world setup.
And later on, we're going to take a look at a recent study involving how often, or really how rare, it actually is for people to make eye contact when communicating. Speaking to people is something we obviously all do on a regular basis, but it might surprise you how rare it is, at least based on this study, that people actually connect as far as making eye contact.
Joining me today is a man who's the shining star of the Call It Like I See It podcast, Tunde Ogunlana. Tunde, are you ready to show the people the way of the world?
[00:01:21] Speaker B: Yeah, but hold on. If I'm the shining star, what are you, man? Come on. You can't single me out like this.
[00:01:28] Speaker A: You got a lot to live up to in this show, man.
[00:01:32] Speaker B: There you go. I'll take that.
[00:01:33] Speaker A: All right, here we go. So we're recording this on November 28, 2023. And over the past week or two, we've seen a flurry of activity centering around OpenAI, around its board of directors and its former, and now current again, CEO, Sam Altman. In a nutshell, on November 17, the board of directors suddenly fired Altman, but never really gave a detailed reason why. The reason they gave was kind of vague; it didn't give a good explanation.
In the days that followed, a firestorm hit the company and really the industry. And, you know, the employees at OpenAI started to revolt. And Microsoft, one of the big investors in OpenAI, swooped in and offered to hire Altman and any of the employees who wanted to come, and so it was just madness. And then by the 22nd, you know, just a few days later, Altman was rehired as the CEO.
And the board, you know, the board that pushed him out, they were on the outs, so to speak. So this is just typical corporate stuff, you know. But what makes this more interesting is that OpenAI is not set up like a typical company, and also they work on something that is new, you know, with AI and so forth. OpenAI is governed by a nonprofit board, and this board, the one which fired Altman, is set up to make sure that OpenAI, and this is from their charter, is acting in the best interest of humanity in its development of AI. So with us all being sufficiently primed by science fiction to be worried about disaster when thinking about AI, when the people tasked with making sure that this company is acting in the best interest of humanity make a sudden move like this, it gets a lot of attention.
So, Tunde, before we look at the actual turmoil that enveloped OpenAI these past few days, first tell me, what do you think of this company, the world's leading AI company, and its stated mission, which I just talked about, and also how it is taking a decidedly non big tech approach to the development of AI?
[00:03:49] Speaker B: Yeah, no, great questions. And before I get into that, I just gotta apologize to you and the audience that I have the hiccups, so.
[00:03:56] Speaker A: Especially people watching, man.
[00:03:58] Speaker B: Yeah, no, but people watching the show on video will see my chest heave up every 10 seconds. That's why. Cause I'm one of these weirdos that when I get the hiccups, it's like an all day thing. Oh, wow. There you go. It doesn't let up. So I apologize.
[00:04:11] Speaker A: I'm surprised it's the first time it's come up in the years we've done the show.
[00:04:16] Speaker B: Yeah, yeah, no, it's rare, but that's why I hate when it happens, because it lasts like two, three days. So that's my weirdo quirk. But, yeah, I'm looking at my notes here because, yeah, these are great questions.
Everything you said, obviously, I'm following up on. And what makes this interesting is the fact that OpenAI was developed with the idea of being a nonprofit organization. And I even looked up their charter, which is very altruistic. I'll read it here, quote, the charter will guide us in acting in the best interest of humanity throughout its development. Excuse me, development. And so the thing is that that's a very high aspiration, to make something that's going to help all of humanity, because there's actually an opinion part of that: what you think might help humanity might be different than what someone else thinks. So I think what I got out of a lot of this, after doing the reading and prepping for today, is that the idea of language is very important.
And I think that the board had a certain language and a certain way they saw this happening.
Then there's others in the tech sphere that are looking at this with their own, like you said, their own experiences of how to do things like funding startups and funding these ideas. And what happens, it seems, is that they're trying to figure all this out between having a nonprofit that is courting for profit companies and asking them to invest in something, which is, you know, a donation type of.
[00:05:58] Speaker A: Yeah, you know.
[00:05:59] Speaker B: Yeah, but as an investor, from a for profit angle, they're looking at this from the culture of the tech space. And, okay, so if we invest, or if we donate, for example, what is either, a, our return on investment, or b, what do we get in return for our donation, our seat at the table, so to speak? And so I think that they try.
[00:06:18] Speaker A: To marry ultimately profit driven companies, you know, whose calculation is much more clear from a dollars and cents standpoint, whereas OpenAI has words which are supposed to be their driving force, so to speak.
[00:06:33] Speaker B: Yeah. And I think that's where there's a kind of that conflict of a nonprofit owning a for profit subsidiary.
And in that sense, to your point, their goals aren't aligned. One is altruistic. Yeah. And I kind of looked at the nonprofit like a Nikola Tesla back, you know, 120 years ago, that thought, altruistically, this electricity thing is awesome. Let's give it to everybody for free.
[00:06:59] Speaker A: Yeah.
[00:07:00] Speaker B: And then the for profit side is like JP Morgan and Thomas Edison saying, yeah, this is awesome, and somebody's going to make a lot of money on this, so let's be the ones. Yeah, let's be the ones driving that bus of creating a system that can, you know, make money from the dissemination.
[00:07:20] Speaker A: And let's drive out Tesla. Let's get Tesla out of here, since he's trying to give it away to people, you know.
[00:07:25] Speaker B: Yeah. And so, like that.
[00:07:26] Speaker A: Well, let me back up, because one of the things that's important to understand with the conversation we just had is that OpenAI started as just a nonprofit. It was just a nonprofit when it was set up. And then, because of the constraints they had with raising enough money, and continually raising enough money, to pay for the computing power they needed to develop the AI and to run the AI systems, and also to pay for the talent, the smartest people around that can develop this stuff, the nonprofit just couldn't keep up in terms of the amount of money it was raising. And so they created, underneath the nonprofit, a for profit company. The for profit company is governed by the nonprofit, but with the for profit company, it's much easier to raise capital. And so that's kind of that tension that you point to. It evolved over time; they initially set out to set it up as a nonprofit. To me, what stands out about this, though, is it wasn't a bunch of, like, hippies who just said, hey, let's create a nonprofit and develop AI. This was capitalists who said, hey, we want to develop AI, but we don't necessarily want to do it in the typical way that we've been doing things here in Silicon Valley, because we want to make sure this thing operates for the benefit of humanity, and, you know, it's dangerous if it doesn't. And it's like, well, to me, that's an acknowledgement that the way Silicon Valley is set up is not going to develop things in the best interest of humanity. So you can basically infer from this that the tech people know that what they're developing in Silicon Valley, the way it's currently being developed in the for profit model for corporations, is not in the best interest of people. You know, it is a very exploitative model, which we know, or many people know. But it's just an interesting acknowledgement to come from those very people, to say, hey, we can't do this the way we've successfully done so many other things.
We got to do this in a different way, because if we do it the way we've successfully done so many other things, it's gonna be terrible for everybody. And it's like, oh, well, what about all the other things that you're developing?
[00:09:33] Speaker B: Yeah, no, it's interesting you bring it up, because as you say it, it makes me feel like that old saying, where there's smoke, there's probably something burning. I don't wanna say if it's a full dumpster fire or if it's just a few embers burning, but there's something there. And the smoke, to me, has been probably the last twelve to 24 months of, like you're saying, people inside the industry. Remember the engineer at Google who had concerns about certain things he was working on? And Musk has been very public and is one of the original members of the board of OpenAI.
[00:10:02] Speaker A: Yeah, he was one of the founding people.
[00:10:05] Speaker B: Yeah. And so, which is to my point.
[00:10:06] Speaker A: Though, of, like, it's not people who weren't interested in making money who said, hey, let's set up OpenAI as a nonprofit.
[00:10:13] Speaker B: Yeah.
[00:10:14] Speaker A: It's people who love making money that set it up.
[00:10:16] Speaker B: No, and that's where I'm comparing it, actually, to my industry, financial services. It's been well established in this sector; we have something called an SRO, which is a self regulatory organization. So FINRA, for example, which stands for the Financial Industry Regulatory Authority, is the one that actually polices my securities licenses and makes sure that I'm doing the right thing. And they audit my office and all this kind of stuff. But they're actually not a division of the federal government. They are a private entity, which is called a self regulatory organization. Basically, the financial industry, all the firms, came together and said, hey, for the benefit of all of us having a smooth industry and maintaining public trust, kind of like the old story of the golden goose, we've got a good thing going here on Wall Street and in financial services, and we enjoy making money. Obviously, we know that banks and investment banks and all that make a lot of money, but they're kind of smart enough to say, well, let's protect this golden goose by making sure we regulate ourselves in a certain way to keep the interest of the public alive. So they're the ones that actually save us from the Bernie Madoffs and all that, why we don't have more of those.
[00:11:28] Speaker A: Another part of that, though, is let's do that so that we can tell the federal government and the state government, hey, you don't need to worry about it. We got it covered. And so as long as there's not.
[00:11:37] Speaker B: A bunch of over regulators. Yeah.
[00:11:38] Speaker A: Don't overregulate. As long as there's not a bunch of crazy stuff happening, then the government, a lot of times, will say, all right, fine. Correct. And now, if they failed at their mission, then the government probably would start looking.
[00:11:48] Speaker B: Yeah.
[00:11:49] Speaker A: And so as long as they can maintain, like, hey, there's no crazy stuff, we're nipping this stuff in the bud, then it helps them in their argument against regulation.
[00:11:58] Speaker B: Yeah, and that's actually a different discussion. But it is a good discussion of when government and the private sector actually can work together well to have a good regulatory framework that still allows the private sector room to breathe and operate. But the reason I bring all that up, not to get totally off the topic, is just that there is proof of concept of that already out there. There are certain industries that are mature enough that the big players have that type of dominance where, when they come together, they can police themselves relatively okay, along with the government oversight. And I think the maturity of the.
[00:12:38] Speaker A: Industry and also the specificity of the industry matters in that sense. Like, that's not something you can say generally for corporations or something like that. The industry itself is very highly technical, so to speak; there's things that you need to know in order to really be able to be proficient in it. Because a lot of times in situations like that, we don't end up with self regulating behaviors, we end up with a race to the bottom. And so I would say that what you're talking about is more the exception. And it's because there are higher barriers for entry, so to speak, relatively, to get into something like that. At least to get into it enough to where you can really affect a large number of people or a significant number of people.
[00:13:19] Speaker B: Well, that's where this is interesting. That's why I don't want to get too off the topic we're on. But the reason I bring it up is, yes, there's proof of concept out there already that this was not some crazy idea that they were coming up with.
[00:13:32] Speaker A: But this is still a different setup than that though.
[00:13:34] Speaker B: Well, no, that's where I'm going. That's what I was going to say: clearly, with mature industries that are highly specialized, that's one thing. Clearly AI is a specialized industry, but it's totally in its infancy.
[00:13:46] Speaker A: Yeah, it's still a developing thing. It's not even really, you know, it's not something that's really arrived yet, you know, from a, hey, we can make a lot of money on this right now type of thing. It's still in the we can make a lot of money on this in the future type of thing, and a lot of other things, like, we can do a lot here. And so the risks, I think, I mean, like I said in the intro, we're all sufficiently primed on the risks of AI, like large scale risks, but there's smaller scale risks, and we'll see those as it continues to develop.
I want to get to, though, actually what happened, you know, and I gave kind of the broad strokes. There's plenty of reading; we'll have stuff in the show notes if you want to really get down into the details of it. But just what stood out to you, you know, in this turmoil: the firing, the uproar inside and outside the company, just industry wide, then the rehiring also, you know, within like five days, like all of that. So what stands out to you in that?
[00:14:42] Speaker B: No, I have several things. One is, I mean, you know, Mister Altman also has acknowledged that he wasn't the best at communicating with the board.
[00:14:51] Speaker A: Yeah.
[00:14:52] Speaker B: It seems that the board has acknowledged that their initial reason to fire him had nothing to do with, you know, any financial bad practices, malfeasance, or his ability to run the company, all that. It seems that it had something to do with truly the interpersonal stuff, just communication and certain things like that.
[00:15:15] Speaker A: What they said further was basically that they weren't comfortable that he was being sufficiently forthright with them, you know, in terms of what was happening. Like, they didn't say he was lying. They didn't say he was withholding information. But they were saying that he wasn't necessarily giving all the information, so to speak, that they could or should have been getting, you know, essentially.
[00:15:38] Speaker B: So that's kind of vague. Yeah. And that's where I'm leading into. That's one part: both sides have just acknowledged that they both had kind of a part in this. And it wasn't the normal stuff, like corruption type of stuff, that we normally see with these kinds of quick firings of someone at the top like that.
[00:15:54] Speaker A: By the way, another thing real quick: it was said over the weekend that, hey, this didn't have anything to do with malfeasance or, you know, money, stuff like that. They did say that. But after the fact, I would think, coming together and saying, oh yeah, you know, it was my bad; oh yeah, it was my bad too, that's kind of the making up, the public making up, because they have to rebuild public trust after this also. So, but go ahead.
[00:16:13] Speaker B: Well, and there's a couple things; that's why this is very interesting. And like you're saying, I'm sure more will come out over time about the specifics of the initial thing. But really, going back to it, long story short, is language. Number one, realizing that the role of the nonprofit is to look out for its constituents, whereas the role of someone like Mister Altman, who's the CEO, is, it's weird, right? You've got a nonprofit company, but you've got a for profit situation in there, and he's CEO of the whole thing.
[00:16:46] Speaker A: So he's CEO of the for profit. So his obligation is to shareholders, you know, and so forth; the nonprofit board's obligation is to humanity.
[00:16:56] Speaker B: Correct. That's what I'm getting at. So as a nonprofit, unlike most nonprofits, like, let's say in our backyard we've got YMCA and Boys and Girls Club and all these regular nonprofits most people know about, like American Red Cross, we know that all of these individual organizations have a certain mission to the community, so on and so forth. You're right, that's where language is important. Their mission is to protect humanity. That's a very broad brush. And so what does that mean?
[00:17:20] Speaker A: As you said initially, that's something that means different things to different people.
[00:17:24] Speaker B: Exactly. And so, to your point, he's the CEO of this for profit firm that is negotiating with investors, and there's some very kind of technical stuff, but it's good to discuss. I mean, there's real life stuff here. Like, he was working on another series round of funding. Because in these startups, what happens is you get a lot of equity a lot of times as a person, as.
[00:17:48] Speaker A: An employee, as an employee, but you.
[00:17:50] Speaker B: Don't get a lot of salary, because there's not a lot of cash yet. So what happens is these funding rounds are important to keep morale up and to keep people that have been there building this thing to continue to stay there, because they got to feed their families. So what they said is one thing.
[00:18:03] Speaker A: He was negotiating with.
[00:18:05] Speaker B: Correct, exactly. You can't eat paper, literally. Yeah, kids don't eat paper. Right. You know, let's not try it. But you can't live on stock.
But, um, so the thing is that this next series of funding was partially so that they could cash out some of the employees' stock, so these people could just live their life. And so this is what a CEO is dealing with. He's got employees, he's got people under him that are raising their complaints, and, you know, my kid's got to start the new school season and all this. And so he's got to deal with those realities, and that can conflict with the realities and the goals of the nonprofit. And I think it's understandable that in.
[00:18:46] Speaker A: That environment, there's a point there that you have to say, though.
[00:18:48] Speaker B: Yeah.
[00:18:48] Speaker A: And because of the things he may have to agree to do, or the products that he's going to put on the market, whether they're ready or not ready, so to speak, there's things, in terms of the funding or raising the funds, that he has to do that make the board uncomfortable, or that he may, yeah, you know, again, this is speculation, he may not want to necessarily tell the board completely: here's the thing, we're going to get this money, and then we have to do A and B, and by the way, we have to do C, too. It creates a conflict, but go ahead.
[00:19:17] Speaker B: No, it does. And then the other thing that we've learned that is interesting, again, that creates more, I don't want to say conflict, it just makes it more kind of weird, and how do you deal with this, is Microsoft is a 49% owner of the for profit arm.
So it appears that Microsoft, they invested, or donated, let's call it not investing, they donated $13 billion to a nonprofit.
So, yeah, we can say it's a donation to a nonprofit, but in the real world, everyone will know that Microsoft made an investment and owns 49% of OpenAI. So.
[00:19:52] Speaker A: Which is why, you know, the ChatGPT stuff is making it into Bing and, you know, all that kind.
[00:19:58] Speaker B: Yeah. So clearly, like you're saying, if a big player like Microsoft already has, let's just call it, a half stake in OpenAI's for profit arm.
Obviously, it appears that Microsoft's not doing this because they just got a nice tax deduction for a nonprofit donation. They are trying to position themselves to be ahead of their competitors like Google and Amazon and Cisco Systems and IBM and the rest of them. So again, if you're a nonprofit that's trying to help humanity, it's interesting that you're kind of doing it in this environment where you're forced to court certain large corporations in order to make sure that you have the funding and the technological prowess behind the scenes, like the human part of it, people that actually know how to do this kind of engineering and coding. So that's why it's just, you know. And then the whole thing of, when they fired him, Microsoft strategically welcomed him in and said, you know what? We'll create a whole new division for you. We're gonna fully fund it. And then guess what? 90% of the employees of OpenAI said, okay, if he goes to Microsoft and they're gonna do all that, we'll just follow him there.
[00:21:14] Speaker A: Yeah.
[00:21:15] Speaker B: And that's one reason it looks like the board had to kind of say, okay, well, let's rehire him, because if that happens, then the company blows up.
[00:21:22] Speaker A: Yeah, they would blow up the company. And, you know, there were some that were speculating then, wondering whether or not that was kind of the board's goal, but then they basically got cold feet at a certain point and were like, all right, we're not going to go through with it. Like, maybe the board was so concerned about whatever was coming or forthcoming that they were like, all right, we got to hit eject, which is something the board was empowered to do, you know, based on this mission. A lot of that we just don't know, and we most likely won't know the full story; stuff will trickle out. But to me, what was really notable about this was how important, and I'm gonna sound like you here, how important leadership and charisma are. Because Altman is, by all accounts, a very charismatic leader, and he gets people to follow him. Historically, he's just been a guy that works on startups and gets things going, and he's a guy that people just like to follow. And so it's interesting to me, without knowing the inside, just looking from the outside, that people sign up, they want to work at OpenAI.
OpenAI has this public mission, as far as, hey, we want to be the company that develops this stuff cautiously and doesn't put the pedal to the metal like a normal big tech company would do. So if you're coming to work here, presumably you know that already; you're signing up to do this in a more responsible way, or something like that, than the big tech approach. But when the board, in their discretion, and again, whether they're right or wrong, takes issue with some of the steps that are happening along that mission, Altman is such a charismatic guy that the employees, like, it's reported 90% of them signed this letter saying that he needs to be brought back and the board needs to get out, and yada, yada, yada. They follow the man, not the mission, so to speak. And maybe they had chosen, oh, well, I disagree with the board's take that the mission is compromised by Altman's ongoing presence. Maybe that's the case. But it looks more to me like they were just down for the guy. And I think that when you're talking about any kind of institution, that's just very interesting, because we can see that in other scenarios where people say they're there for the mission, or they say they're there for the principle, but actually they get caught up. And this is a human being thing for many, many people; some people are more susceptible than others. But, you know, people get caught up, and it's like, no, no, actually, I'm just here for that guy, for that leader, you know, that guy, that girl, whatever. And it actually can go so far as to put the mission in jeopardy, or put the principles in jeopardy, because enough people just say, you know what, whatever we came here for initially, now I'm here because I'm following that dude, you know? So that, to me, is what really stood out about it. Because, again, not knowing anything else that's happening, either all of those people independently decided, or they decided together, and if they decided together, it's something else. But if they independently decided, it was either, no, technically what we're doing is still consistent with this mission that I signed up for, or they were like, yeah, I'm not with it.
[00:24:20] Speaker B: You know what? I just want to speak to that before we jump to the next section of this conversation, because there's an article that we'll have up in the show notes that did speak to kind of the board's dysfunction, which I found interesting. So I think there could have been some of that that the employees saw, which is, like you said, Mister Altman has a track record of building successful startups. And so a lot of this comes down to, and it's good you bring up this idea of leadership, because really what it comes down to as well is the idea of trust. What we learned is that 90% of the employees of OpenAI trusted Mister Altman more than they did the board. And so, some of the things that were discussed, I'll give you one example. Apparently a board member wrote a research paper that was actually critical of themselves, meaning OpenAI, and complimentary of one of their competitors. Yeah. And so I'll quote the article here: it says, while the board member defended her paper as an act of academic freedom, writing papers about the company while sitting on its board can be considered a conflict of interest, as it violates the duty of loyalty.
Excuse me. If she felt strongly about writing the paper, that was the moment to resign from the board. You know what I thought of, James, in this era, and we've talked about this in various ways, like when we talked about the corruption on the Supreme Court and these guys flying private jets and taking all these gifts without reporting them. And what do we say? We said, no one's got a problem with you sitting on a 100 foot yacht and hanging out with your billionaire friends and getting gifts and getting paid to do that and people buying your mom's house and all that. The thing is, just don't do it when you're a Supreme Court justice. You can retire from being a justice and show up at any of the biggest law firms earning millions of dollars a year and live that lifestyle. When you chose to serve the public in that role, part of that responsibility is severing certain things in your life. And I think that this was a good reminder that it's not just in government and those areas where we see this kind of attitude of, like, I'm just going to do it because I feel like it, because I say so. And it's this thing, like, I liked how they said it: while the board member defended her paper as an act of academic freedom. It's like, yeah, it's like freedom of speech. You can say certain things, but as a leader, you're also responsible, because your rhetoric can have consequences, like we've seen in politics a lot. And I think this is another example. Like, you're on the board of OpenAI, you want to save humanity and do all these altruistic things, but your ego is so big that you have to write this paper. You can't just say this is.
[00:26:53] Speaker A: I think that mischaracterizes it, though. I mean, I think you're mischaracterizing it.
[00:26:56] Speaker B: It's not a difference of opinion because I think.
[00:26:58] Speaker A: Well, no, I mean, I just think you're selectively giving the information about this. Like, this paper was about safety. And I agree that the board member should not have written this paper. You're on the freaking board. If you have an issue with the safety, then take that issue up; fire Sam Altman. Writing the paper is something that somebody without power would do. And so I agree with you that it was in bad form that the board member wrote that paper. But if it's about safety, AI safety, which is what the board should be concerned about, then a board member shouldn't be writing papers. They should be doing things as a board member, exercising power, you know. But this is where I disagree: I don't think this is ego. I think this is somebody who doesn't understand the nature of the game they're in, you know, because it's like, well, hold on, you're doing what the powerless would do when you're one of the powerful. But again, if this is about AI safety, that's what that board member should be concerned with, by the way.
[00:28:02] Speaker B: Whether they're talking or not, it's dysfunction, I guess, is my point.
[00:28:06] Speaker A: It is dysfunction. I agree.
[00:28:08] Speaker B: We can't get in their heads. But the idea, and that's kind of my point, is that the employees are seeing all that, right? And they probably made a choice. That's what I mean. Like, we trust Mister Altman more because he's shown that he can do certain things, where these people on the board look all dysfunctional, and they just look like they're, you know, jockeying for something that might not be in our best interest.
[00:28:30] Speaker A: Yeah, at minimum. Or, you know, it could be that, but it also could be, again, they haven't shown the ability to navigate the waters, and maybe he has. Again, a board member who wants to enact change and make OpenAI better with AI safety, the way you do that isn't writing a paper that will undermine people's faith in you, because, again, that's what an anonymous employee does, somebody who doesn't have the power to make any changes. So, but we can move from there. I do want to look forward, you know, and this is looking forward; you can talk about the development of AI in general, but then also OpenAI specifically. I want to know: does the path remain viable to develop AI with any kind of guardrails or with any kind of safety in mind, or has that basically been obliterated at this point?
And, you know, whether that's, again, with OpenAI or just the industry in general.
[00:29:31] Speaker B: I don't know, to be honest with you. I mean, that's the whole idea of defining viable and all that. I think that this specific approach of trying to fit a square peg in a round hole, you know, trying to figure out how to raise money for something by asking people to invest in something that is a loser until it makes money, is always difficult. That's the whole point of startups, and that's why most of them fail.
The difference here, by the way, that's.
[00:30:08] Speaker A: One of the things that capitalism has proven over the years, over hundreds of years, to be very good at is how do you get people to sink money into something that may or may not work out? This was no different than when people were sinking lots of money into ships to go sail across the Atlantic that may never make it back, or whatever, on the chance that it did and it hit really big. So capitalism is uniquely suited, or has proven to be uniquely suited, to be able to push the envelope with things like this. But go ahead.
[00:30:37] Speaker B: And this is why the insurance industry has been very profitable for the last 300, 400 years, because they insured all those ships that tried to find the gold from Spain and South America and all that, and all the slave ships.
[00:30:49] Speaker A: But it's the same thing now; we're sinking money in companies that may make it or may not, and most of them probably won't, but the ones that do, they're going to hit big, you know, so it's correct.
[00:30:58] Speaker B: And so that's where I'm getting at. Okay, so from a conceptual level. But can, do we all know of stories of kind of mom and pop funded businesses that took off, where people borrowed 100, 200,000 from their friends and family and it took off? Yes, that does exist. But in certain instances in history when we've had the need for massive capital investment to try and figure something out, like, let's go to, let's say, the Manhattan Project, when it was like, all right, we need to figure out this nuclear bomb thing. Or, let's say, the space race; Kennedy made the announcement in, what, '61, that we're going to the moon by '68, '69.
[00:31:40] Speaker A: We're going to the moon before the end of the decade.
[00:31:42] Speaker B: Yeah. And landing a man on the moon by 69.
So how does that all happen? It's not that the defense contractors, like Lockheed Martin and Bell Labs, all got together themselves and said, okay, let's take all our shareholders that are investing with us and buying our stock and tell them that we're just going to plow hundreds of billions of dollars for the next few years into something that may not work, and watch all those rockets blow up on the launch pad. So usually the only one who can afford to make those kinds of huge investments, at deficit spending for a couple years, to see things get off the ground, is the US government, because the government has a big.
[00:32:21] Speaker A: Budget funded by all of us in recent times. Right. Now, though, like that.
[00:32:26] Speaker B: You're talking World War II era. Yeah. Which, again, is there a coincidence? I mean, there's a correlation, to me, in that the technology has ramped up since then. Right? I mean, we went from flying biplanes in World War I to, pretty soon after World War II, I mean, you got basically the F-16 and F-15, which were all developed in the 1960s, these types of advanced fighter jets and all that. That didn't just happen.
[00:32:51] Speaker A: I mean, no, the creation of the Internet, you know, like something that was.
[00:32:55] Speaker B: Worldwide web, government funded, defense. Correct. Like DARPA, which is one of the major research arms of the Defense Department, came up with a lot of this stuff. And so traditionally that's where a lot of the beginning of the innovation would come from: the government would go to the tech companies themselves and say, hey, we're thinking this needs to happen.
We're not you guys, right? We're not tech companies, but we got a budget. You guys got the researchers and the talent and the ability to make something happen. Let's marry the two. And so another, more recent example actually was Tesla. Remember the idea in the early two thousands of an electric car, and really building that, and that consumers would take hold of it, and that you could actually build a battery that could go a couple hundred miles or something like that without needing a recharge? That didn't exist.
[00:33:49] Speaker A: Yeah.
[00:33:49] Speaker B: So part of where they got grant money was part of the 2009 stimulus.
During the recession, Tesla got a $500 million grant from the federal government, along with other grants they got from state and local agencies and tax credits and all that, to be able to fund it along with private sector investments. So that's where I think there is this tension with this one, because it's trying to do what's traditionally been done that way. Because they said it in various articles I read: this is going to require hundreds of billions of dollars at the minimum. So if you're going to ask the private sector and investors to invest that money, then at some point there's going to be a Microsoft that shows up and just says, hey, look, we got all this capital. We're going to take this off the shelf and we're going to develop it. And so if you don't want that to happen, you need to restructure, I think, how we're dealing with this and maybe centralize it.
[00:34:43] Speaker A: I was gonna say, with the examples you gave, there was an organizing structure. It was understood that anything the companies developed they would be able to deploy later for profit purposes, and they would get paid to develop the stuff whether it worked or not, but the government was the organizing principle: hey, we're gonna do this for the interests of the US government, so to speak. And that organizing principle isn't there with AI development right now, so to speak. It's the private sector getting together, to their credit, and saying, hey, you know, we want to do this, and we're trying, or at least appear to be trying, to learn from how things have unfolded thus far. And so we want to do this in a different way. Now, that's not to say there aren't other companies; Google's developing AI right now, you know, Meta is developing AI, so it's not like OpenAI is the only shop in town. There's other entities that are trying to do so in a safe way as well, other than OpenAI. They've just been kind of the most prominent one.
And with this, though, I think you raise a good point in terms of the government, because they provided that kind of organizing principle. I think the idea that OpenAI tried to come up with is probably not one that's viable in the long term, and I think we're seeing why. Companies are successful a lot of times because there is alignment, and this is fundamentally creating, like, a misalignment between the board and the executives. It's built in. They're never going to be aligned, because pushing the issue, staying ahead of Google, staying ahead of Meta, is super important to the for profit side, but the nonprofit side is like, ah, that's not necessarily the thing we're most concerned about. We're most concerned about not creating something that's going to eat us all one day, so to speak. So that misalignment, I think we're bumping up against the limits of that, so to speak. And they may be able to continue on for a little bit with this, but ultimately, the board of the nonprofit is going to have to start working for the for profit side. They tried to take control here in a clumsy way, in a way that was not executed well, but they failed. And so they ended up basically turning over all the power at this point with this failure. And the failure doesn't necessarily have to be because of what they tried to do; it might have been how they tried to do it. But nonetheless, they blew it. I think, moving forward, with the development of AI, I'm happy that the Biden administration seems to be on this, at least paying attention to it. I know the EU is working on stuff as well, legislation, regulation. Regulation is what's going to have to become a part of this. And it's always difficult when you're regulating new industries, because there's a balance: you want to allow innovation, but you also want to avoid catastrophe, and when it's new, you don't know what to do to allow innovation and avoid catastrophe. It's not like we're regulating railroads, where it's like, okay, yeah, we got to make sure that the track doesn't bend too fast, and that'll avoid that, or we got to make sure the train's not too heavy. We have a hard enough time, you know, with corporate lobbying, with the railroads and stuff lobbying, we have a hard enough time keeping those regulations in place. So now we're saying, oh, we need to try to regulate something where, okay, there's the limitation of the people who would do the regulating, their limitations as far as understanding what's going on. But on top of that, even if you understand what's going on, the balance that you have to strike with regulation is not an easy one to strike. And we don't have the contours yet of where this can safely go and where the risk factors are. The things we might think are okay may be completely terrible, for all we know, in five years. So, you know, I think we're flying blind. It's not the first time we've flown blind, but with this, similar to maybe the Manhattan Project, when we get to the other side, we may end up with something that is potentially world ending for us. Not the end of the earth, but the end of human beings, or something like that. And so the stakes are pretty high. So, like I said, I commend the Biden administration.
I commend the European Union for at least paying attention and trying to get on top of this in some way. The Biden administration, you know, just put forth executive orders in October, so they're trying to get on top of it, but it's going to be an ongoing thing.
[00:38:57] Speaker B: Yeah. And I think, you know, look, we are now fully in the information age. And I think we need to, as a culture, our societies and global society really, we need to be comfortable, which is not easy, because change is uncomfortable for humans. We need to begin to become comfortable, and figure out how to become comfortable, with the idea that we are now at a point where there's technological changes happening within a generation that could totally affect things differently five generations from now. So that's why, you know, I'm thinking.
[00:39:31] Speaker A: Of, I mean, one generation from now. That's the crazy thing about things nowadays.
[00:39:35] Speaker B: No, of course. And that's my point, is saying that it makes sense that the people who really are at the top of the technology space are concerned, because they can remember just 15, 20 years ago, this idea of something called social media and this new platform called Facebook, which took over a prior platform, remember, called MySpace, and everybody sharing pictures of their cats and their kids, and you can see your old friends from high school, and this was just a nice, sweet thing.
[00:40:02] Speaker A: And here we are; everybody was filled with optimism.
[00:40:05] Speaker B: Correct. And it was gonna, you know, do all this stuff. And then we see the damage it's done to our society, our ability to have discourse, the misinformation, blah, blah, blah. Right. So it's natural for them to now be concerned and say, okay, we've got this new thing now. And I pulled up this article from the MIT Technology Review, so I wanted to make sure I read from people that are way smarter than me, because what scared some of these folks was the new system they called Q*, which, I thought, you couldn't pick another letter so as to not promote more disinformation online in this day and age? Come on, you had to call it Q? So this is going to be the thing that takes over the world now, and I might as well just fall in line.
[00:40:50] Speaker A: Well, no, it's called Q*, though.
[00:40:51] Speaker B: Right, I know, but they call it Q for short. Of course. So, the long story short is that one of the things is that it's learned how to solve elementary level mathematics. And one of the research scientists in here says, you know, math is the benchmark for reasoning, so on and so forth. So, of course, there's all these fears now that this thing is going to start, you know, learning and all this stuff and kill us and be like the next Matrix thing. And I guess what I'm saying is that we really are in a new era, and it's hard for us to imagine what can come in the future. An example I'll give is when we did the show on the Gilded Age, we discussed that in the 1790s, when the founding fathers were forming the country and trying to figure out how to have laws and all this, it was a very agrarian country with very local economies. Literally 100 years later, by 1890, there were 70,000 miles of rail track and economies were shifting. They were no longer local; now, you know, you had trains running from Montana bringing coal into Pennsylvania to make steel. So what happened is you had to start regulating commerce differently, because what do you do with a guy like Carnegie that owns all the railroad passing through all these states, and he's not paying any taxes and he's hoarding all this money? So the society started asking different questions. And there's a lot of poor people in the country saying, how come I'm working for this guy, and I got to buy from his store, and I got to live on his property, and so on. So things changed from 1790 to 1890, and laws changed. And I think we need to accept the fact that, like the founding fathers, we can't imagine where this is going to go. So we need to be nimble enough, and keep the system nimble enough, to be able to address it whenever we have new innovations and the AI itself does something new; we need to then quickly be able to come back and say, okay, how are we going to deal with this in our society? And I don't have that answer. I'm just saying that this is clearly a new chapter in our ability to have a technology that we don't understand.
[00:42:51] Speaker A: Yeah, yeah. I mean, and I think the word that comes to mind for me there is framework. You know, we want to develop a framework. And from what I understand, with the recent executive orders, that's kind of the attempt: to not necessarily put the speed limit in, but to introduce the idea of the framework of, okay, here's the kind of things we're looking at. And so I think that's a good approach. But one thing you mentioned there, and I do want to get to our next topic, but I just want to comment on one thing you said, and that's that it's the same generation of tech entrepreneurs; they're still out there, the ones that brought us social media and so forth. So there's actually a benefit there.
A lot of times with society, we talk about living memory and so forth, and how societies have to learn the same lessons over and over again. So you have the Great Depression, and all the people that were alive then learned that the lack of regulation in the financial markets leads to bubbles that burst and then create depressions. And by the 1990s, that was out of the living memory. All the people that were making decisions then, basically, they read about this stuff, but it wasn't intimately in their character to really understand it. And so they lessened all the regulations, all the stuff that was put in there after the Great Depression; they were like, oh yeah, we don't need this stuff anymore, and they got rid of it all. And then we had, you know, the financial crisis and the Great Recession and so forth. And so we had to learn those lessons again, because the leadership, well, there's always people wanting to take advantage of the situation, but the leadership just didn't have the living memory anymore of, okay, yeah, we had this stuff in place for a reason, we got to try to avoid that same mistake. So we had to make that mistake again. One benefit, though, that plays to our advantage now is that it's kind of the same generation, or a generation and a half or so, of people who went through the social media thing and saw how, with all that optimism, came, you know, there's still good stuff, but with it came a lot of stuff they didn't foresee and a lot of consequences that have been very harmful to society. And so to even have the thought at the outset of this AI stuff and say, hey, you know what? We probably need to take a second and figure out how we can do this in a way that doesn't end in catastrophe. And so I applaud the effort to create OpenAI as a nonprofit, and even now, to still try to operate it under this framework, even though I think ultimately it's not one that's going to last unless everybody does it, which is the point of regulation, by the way. The point of regulation is that in a competitive marketplace, it makes no sense to self regulate unless everyone has to do it, whether through something like FINRA, like you said, where we get together as an industry and regulate ourselves, or through the government, so everybody has those same constraints. Otherwise it's suicide to do so if the other people aren't. If I'm disposing of my waste properly and my competitor's dumping it in a lake, he's going to make more money than me and eventually outcompete me. So you end up with that. So, I mean, those are a couple points I mashed in together. But you got anything real quick before we move to the next topic?
[00:45:50] Speaker B: Yeah, no, and it's great points you make here at the end, because that's kind of where I was going with this idea of the role of government. Because I'm agnostic and neutral on the idea of, like you're saying, whether it's an SRO environment and the private sector does this, or the government does it. But to the point, we've just had this whole discussion that, you know, the system they had clearly seemed dysfunctional and probably is unsustainable.
[00:46:13] Speaker A: It just created this misalignment, you know, like.
[00:46:15] Speaker B: Yeah, exactly.
[00:46:15] Speaker A: It was a good attempt, or it was an honorable attempt, I would say. But that misalignment is ultimately, you know, going to come back.
[00:46:22] Speaker B: But my point here, just to finish off, is that I think some people, when they hear people like us talking about, well, the government should have a role and all that, remember, we've been, especially in American culture, we've been so conditioned to have this distrust of the government. And what I would say, because I thought about this in preparing for today, I thought, what's the downside of the government kind of having control or an interest, or being the conduit for this investment? And of course, there's the Orwellian fears that the government's going to use the AI to control us and this and that. But I thought about it like, at the end of the day, the government would just buy whatever technology from the private sector, if you had the type of people in government that wanted to do that, you know, use AI in a sinister way against the American people, the way.
[00:47:05] Speaker A: Government can use social media against us if they.
[00:47:07] Speaker B: Correct. Yeah. So my point is, what they would do is they would pay for it at an inflated price, because they're buying an already finished product from the private sector. So we would just pay more for the government's ability to manipulate us anyway. So what I'm saying is just more like, I think, as a culture, we need to mature in this conversation about the government. Yes, it's good to have a healthy skepticism about the government, but in general, like when I give the example of Tesla, you know, its initial grant came from the federal government, we need to remember that there is a way that the government and the private sector have had very healthy exchanges for decades in our country, and we should begin to start refocusing on that.
[00:47:51] Speaker A: It's one of the big advantages that the United States has had over the last hundred years or so. And I'll say this with the regulation piece: that is another area where living memory matters. You know, we look at regulation and thumb our nose at it commonly because we don't remember a time when anything could be in your water, or anything could be in your air, or you would buy food and it would have stuff in it and you had no idea. All those regulations we just take for granted, in terms of water quality, in terms of what's in your food, everything like that, that's out of our living memory. The people who used to get water and it was full of pollutants, those people were like, yeah, we want the government to regulate what people can put in the water.
[00:48:33] Speaker B: Amazing, James. If they actually had a school system that would teach this to children so that every generation wouldn't.
[00:48:39] Speaker A: Good idea.
[00:48:40] Speaker B: Every two or three generations, you wouldn't have to relearn this, you know, it's a good idea.
[00:48:44] Speaker A: Man, you hear that? You know what I'm saying? That's why I say you're the shining star, man.
So. But I do want to get to our second topic. The second topic today: you sent me something really interesting. It was a study looking at eye contact, basically monitoring. They took a group of people and put them in a room and monitored, you know, with contraptions, how often they looked at each other's faces, how often they looked at each other's mouths when someone was talking, and how often they actually made eye contact. And it was surprisingly low, which was the biggest thing. It was a very low percentage, a single digit percentage, of how often people's eyes were in contact, you know, making sustained contact. And so, is this something that surprised you? Was this something you thought was, you know, about right? Or what was your take on this? And then, you know, if you have a critique of the study as well.
[00:49:31] Speaker B: No, it was very interesting because what it reminded me of is a show we did a couple years ago on sleep.
And it was another reminder to me that, yeah, eye contact is something that is actually very important. We all know that in a sense. Whether you consciously consider it or you subconsciously just understand it, eye contact is very important between human beings, and even animals. I mean, I've got dogs and cats, and I'll stare them in the eyes and have a staring contest.
Funny enough, I always win. I remember my mom used to tell me, when I was a real little kid, like five or six, that if you stare at an animal, they always blink first. They always look away first. So I always remember that.
But the reason I bring up the sleep conversation we had a few years ago is that I realized, like sleep, this is something that is extremely common for all of us, something every human being shares. We all have to sleep. But just like sleep, eye contact is little understood by the research and scientific community. So it's, again, one of these things that permeates our lives.
[00:50:39] Speaker A: Understood by some more than others, too, in the sense that some people use eye contact purposefully in interactions.
[00:50:46] Speaker B: Correct. Well, and that's another thing too, unlike sleep, because sleep is more of a physiological need that we all share in a similar way. Everyone generally needs that eight to ten hours of sleep to be healthy, right? But eye contact, I realized in reading this, just made me think of other things in my life and experiences, whether I was overseas or here, because it's actually very cultural as well. I thought about it: 100 years ago, you and I, in 1923, in certain parts of this country, could be jailed for looking a white person in the eye. That was a culture where, if you went to those towns at that time and you were a non-black person trying to speak to a black person directly, they would not look you in the eye out of fear of some sort of reprisal. So if you took a time machine from today's world, you would probably be offended, like, what's wrong with this person? They're not looking at me. Or that person is thinking, I can't look at this person.
[00:51:45] Speaker A: Yeah.
[00:51:46] Speaker B: And I thought about it too. If you and I took a plane to Afghanistan or Saudi Arabia or Iran, an area of the world that has a very strict Muslim culture, and let's say we went into someone's home and decided, hey, let's have a conversation with the lady of the house, she probably would also look down unless given permission by her husband or father, whoever the patriarch is, because of the cultural significance of eye contact. And so that's also what made me appreciate this topic. Yeah, this is something we all take seriously and share, but because of cultural differences, we can actually disturb or influence each other in ways we don't really understand.
[00:52:29] Speaker A: The thing that stood out to me about this, and you kind of went into that, is that to some people, eye contact has a lot of meaning, whether it be a status or dominance thing. So the fact that we're starting to study it, I'm interested. I actually wanted more from the study, and I thought it had some limitations, which I'll throw out there right now. One, it was a small sample size, we're talking tens of people, so it wasn't large. And two, they were studying strangers, and that's going to be a different level of eye contact than you might make with people that you know. And then the other thing I'll say, as somebody who played sports as a kid and as a young adult: what came to my mind really quickly was the amount of communication that can happen on the football field or the basketball court, in a team sport, based on eye contact. You look somebody in the eye and you're telling them to do something, or you understand they're telling you, hey, make that cut, or run that way, just with the eye contact. So there's a lot going on with eye contact. This study was interesting, but it was limited; you can't really glean too much from it. Even the authors acknowledge in the piece that, hey, this number might be higher if you're talking about friends. But there's a lot going on there, because I thought the cultural things that you talked about, again, that's very interesting in the sense that...
[00:53:49] Speaker B: Yeah, yeah.
[00:53:50] Speaker A: Because somebody decided at some point that if you look somebody in the eye, then you're saying you're on the same level with them, or you're trying to exert some dominance over them, or something like that. And that's very primal, and it gets into culture, historically.
[00:54:06] Speaker B: Well, that's why, I mean, there are a couple of things I definitely want to address, because what you said was very good. But one thing, thinking back even to part one: what I found interesting about this is that it's the technological improvements that have allowed us to study it. I know I've said that in various discussions we've had about this mind-body stuff and research on us humans, but they speak specifically here. The team explains that the research was limited partly because previously available mobile eye-tracking technology had limited measurement of eye movements. So, basically, we've heard of things like micro expressions, where our muscles make movements in fractions of a second.
[00:54:48] Speaker A: Yeah.
[00:54:49] Speaker B: And we are.
[00:54:50] Speaker A: We may not be aware of which we.
[00:54:51] Speaker B: Right, we're not aware of it consciously, but subconsciously we do pick them up. But if you and I were looking at each other, I mean, I know we're doing this on a screen, but if we were together, we wouldn't be able to recognize it consciously, but subconsciously we could kind of get an idea of...
[00:55:10] Speaker A: Separate that, actually.
[00:55:11] Speaker B: Yeah.
[00:55:11] Speaker A: Like, the person making the micro expression doesn't recognize that they're making it, and whoever is picking it up picks it up in their unconscious, not their conscious.
[00:55:21] Speaker B: And so that's why it's only now, with these technological advances, that they can do this. They talked about the type of glasses they made people wear when they did this study, and the glasses had a camera in them that could see the person's field of view, but also sensors that could measure where their eyes were looking.
[00:55:38] Speaker A: Eyes were looking.
[00:55:39] Speaker B: Correct. So that's how we got stats like: in 12% of the conversations they studied, it was eye to mouth, where one person's eyes are watching the other person's mouth. And only 3.5%, sorry, of a conversation was actually eye-to-eye contact. Now, what's interesting is they said that out of the test subjects, they could only actually use about half of the samples, because there were malfunctions in the software and the technology for the other half. So it tells you, again, and that's why I said it reminds me of the first part, that the technology is not even perfect yet. But I bet you in five years they'll be able to do this with ten times as many people, and it'll probably work fine. So as we have more technological advances, we're able to learn more about ourselves as humans.
[00:56:34] Speaker A: It's very true. I mean, even just wearing a smartwatch, it's something that tells you when you're asleep and this and that. So yeah, it's a good point. One thing I want to mention before we get out of here, man, is the piece you just mentioned. The eye-to-mouth number stood out to me, because I know a lot of times people read lips as well, either consciously or subconsciously, just to get a secondary pickup of what someone is saying. When somebody's talking, you're not only listening, you're also reading their lips, and it's like a secondary piece to gain better understanding. So the fact that that happened much more than the eye contact was very interesting to me. That's something I picked up on. Did you have anything before we get out of here?
[00:57:17] Speaker B: Yeah, yeah. Just to finish, to piggyback on what you said earlier about the cultural part, I mean, the sports analogy is great, actually. How many alley-oops did I catch when I was younger because the point guard just looked at me a certain way and we knew what we were about? And that's what I was going to say when I read this. It made me realize it would be very interesting if, in the future, they could do this study not only with a wider range of test subjects but also in different cultures, different parts of the world, because then you might have different settings, like you said. Because what they were very clear about is that this was a test setting where they brought together test subjects who were strangers. And that's what I thought about, because they were saying that one of the things they noticed is people would look at someone, but when the person started looking back at them, they would kind of look away from their eyes. And I thought, well, that's kind of cultural, because, like we're talking about now, right?
[00:58:13] Speaker A: And also strangers. That's also a stranger thing.
[00:58:16] Speaker B: That's my point.
[00:58:17] Speaker A: They may not do that.
[00:58:19] Speaker B: Well, that's what I was gonna say. In our culture, let's say we're two men. If I don't know you, James, and you don't know me, we both know there's a certain level of eye contact that, as men, we kind of have in our cultural norms. Anything more than that, and I'm not joking here, but to be serious, one of us might think the other is homosexual, that you're trying to hit on him because you're looking at him, which could disturb the other one, or, like you said earlier, someone's gonna take it as aggression. So that's what I was saying.
[00:58:47] Speaker A: So it's the wrong cue. It's just a misaligned cue then, because it's like, why is that dude looking at me? So to speak.
[00:58:54] Speaker B: Meaning the cultural impact, I think, is much greater than I even realized before.
[00:58:58] Speaker A: Yeah, yeah. So, I mean, it's good, though. It's something to look forward to as they continue to study this. But the initial studies, at minimum, were interesting in how little eye contact there was. But yeah, I think we can wrap from there. We appreciate everybody for joining us on this episode of Call It Like I See It. Subscribe to the podcast, rate it, review us, check us out on YouTube, send it to a friend. Till next time, I'm James Keys.
[00:59:19] Speaker B: I'm Tunde Ogunlana.
[00:59:20] Speaker A: All right, we'll talk to you next time.