
We’re talking about the OpenAI insanity, Q*, SpaceX’s Starship launch, a new Chinese AI with twice the context window of ChatGPT, the AI chip wars, GPT will soon have a memory, DeepMind’s new AI music creation tool, and why we all need to think like a coder. 

FULL TRANSCRIPT

FUT 18

[00:00:00] Cameron: Well, assuming that by the time you’ve heard this, neither Steve nor I have been, uh, ousted by the board of the Futuristic Podcast, this is episode 18. We’re recording this on the 23rd of November, 2023. Steve, we didn’t record on Friday last week like we normally do.

[00:00:26] Cameron: And it’s a good thing because it was,

[00:00:28] Steve: Yes.

[00:00:29] Cameron: it was just a day where you wouldn’t have known what to say. And it’s been like, I don’t know about you, man, but like. You and I have been around a long time and we’ve seen a lot of stuff in startups, the business world, the tech world. I’m pretty confident saying I’ve never seen anything this insane happen to a company this prominent, this large, this important, this [00:01:00] powerful in my entire life.

[00:01:02] Cameron: I’ve never seen anything

[00:01:04] Cameron: remotely close to the last week of OpenAI. What about you?

[00:01:11] Steve: 100 percent correct. You have seen unexpected departures and things happen, but not this merry go round and circle work and back and forth. And, and I love the word you used there, prominent, because it really is. I mean, we’re talking about something that’s at the precipice of potentially changing human existence with the most important technology potentially that we’ve ever invented.

[00:01:31] Steve: And

[00:01:33] Steve: the longer this goes on, the more concerned I’m

[00:01:35] Steve: becoming.

[00:01:36] Cameron: prominent, like, I think up until Friday of last week, was it Thursday or Friday? I think Thursday US time, Friday our time. OpenAI is probably seen by people like ourselves, people that are paying attention to this space, which isn’t everybody, and that’s fine. But for people paying attention to this space, OpenAI is probably seen as the company that is playing the central role [00:02:00] in pushing the boundaries of the technology, which most of us think is going to shape not just business and, and social lives in the next 10 years, but humanity.

[00:02:16] Cameron: They are the people that are shaping the future of the species. And up until Friday, the perspective from the outside was, you know, it’s not that they hadn’t had their issues and their trials, but it was a fairly steady ship. Sam Altman seemed to be fairly, he’s a fairly quiet and

[00:02:40] Cameron: reserved kind of guy.

[00:02:42] Cameron: He’s not an Elon Musk, he’s not a Steve Jobs or even a Bill Gates or a Steve Ballmer

[00:02:48] Cameron: or, you

[00:02:49] Steve: Has his own style.

[00:02:51] Cameron: Yes, he’s, he’s, he’s a quiet,

[00:02:54] Cameron: um, considered gay guy who’s got an Australian boyfriend [00:03:00] by the way. Do you know his boyfriend’s from Melbourne? His, um, guy he lives with? Boyfriend’s an AI programmer from Melbourne.

[00:03:08] Cameron: They live on a ranch in Napa Valley. Uh, he’s a vegetarian. He just seems like a very sort of quiet, well spoken, considerate guy. You know, when he, when he’s interviewed, he’s

[00:03:19] Cameron: very thoughtful about what he says. Ilya Sutskever, the chief scientist, again, very quietly spoken, very intellectual. Mira Murati, the Chief Technology Officer, same sort of thing.

[00:03:31] Cameron: For the company, for a company that important in the state of things to implode as suddenly and as dramatically as they did. Now, I don’t want to pat myself on the back, Steve, but for the last couple of weeks before that, we had, I had a conversation with you and had the conversation with Tony on the QAV podcast.

[00:03:51] Cameron: I’d had the conversation with lots of people saying, I have seen startups that are in very prominent positions, pushing the boundaries, [00:04:00] implode over the course of my career. I think Netscape is the example I’ve been using most, but Netscape survived for many years and dominated the industry for quite a few years, even after they got

[00:04:13] Cameron: sort of crushed by Microsoft. I think they ended up at AOL initially. I can’t exactly remember, but they, they stumbled along for a few more years in some sort of capacity, they had a browser, et cetera, but nothing like the implosion that we saw from OpenAI. So now listen, I’m assuming that a lot of people listening to this followed it to some degree, but should we just run through, for people who

[00:04:37] Cameron: haven’t, the very condensed version of what happened over the last week with OpenAI.

[00:04:43] Cameron: Do you want to take the lead on that?

[00:04:44] Steve: Well, to everyone’s surprise, Sam Altman

[00:04:48] Steve: was ousted from the company in a surprise board meeting, which

[00:04:52] Steve: I think took everyone by surprise. Sam himself, certainly the lead investors, some prominent venture capital [00:05:00] companies, and Microsoft, who put somewhere between 11 and 13 billion into the company. It was

[00:05:06] Steve: a week or so, a week and a bit, after they did their OpenAI Dev Day, where, standing on the stage, uh, next to the CEO of Microsoft, he was, you know, opening up GPTs, which we’ve spoken about here, which are absolute game changers to society. Then, two days later, you hear that he may be coming back, that there’s an emergency meeting and some of the venture capitalists are pushing for him to come back.

[00:05:33] Steve: OpenAI and others announced that he’s having a meeting the next day to come in. That falls apart for some reason, there’s a schism. Then he ends up going to Microsoft, uh, and he’s going to be heading up

[00:05:45] Steve: some AI research

[00:05:46] Steve: lab

[00:05:46] Steve: there. His offsider, what’s his name, Cameron? He comes with him. Greg

[00:05:51] Cameron: Brockman who was

[00:05:52] Cameron: the chairman of the board up until the coup last week,

[00:05:55] Steve: until the coup

[00:05:56] Steve: then that happens. And then everyone thinks, well, he’s not going to [00:06:00] go

[00:06:00] Steve: back to OpenAI. Uh, despite that they were going to come back, a letter was written, which was signed by, gee, I don’t know,

[00:06:08] Steve: 90%, yeah, 90

[00:06:10] Steve: yeah, 90 percent of the employees, more than 700 employees, saying that they’ll leave unless he gets

[00:06:17] Steve: reinstated, or they’ll all go to Microsoft.

[00:06:20] Steve: And then I wake

[00:06:21] Steve: up the next day and they go,

[00:06:22] Steve: he’s

[00:06:23] Steve: back. So it was, it was the most,

[00:06:27] Cameron: So you, you missed a few things there. Let me,

[00:06:29] Steve: yeah, well there’s a lot, so. Go, go for it.

[00:06:32] Cameron: So, it’s been almost a week,

[00:06:34] Cameron: and we still don’t really know why he was fired in

[00:06:37] Cameron: the first place.

[00:06:39] Steve: Okay. That was going to be, I

[00:06:40] Steve: think, the discussion point

[00:06:41] Steve: afterwards. That’s why I said I’m more concerned.

[00:06:44] Cameron: but that, like, the astounding thing, you and I

[00:06:46] Cameron: were talking on the day that it happened, like, the astounding thing is not only that the CEO of a company that important, one of the founders, not just the CEO, but [00:07:00] one of the founders, was summarily executed in the blink of an eye, but that they did it in a way where there was no explanation.

[00:07:06] Cameron: There was this vague statement that they put out saying that he hadn’t been consistently candid with the board in his

[00:07:13] Cameron: communications. What the fuck does that

[00:07:14] Steve: that mean? Yeah, I was just,

[00:07:17] Cameron: normally if you want to get, if a board wants to get rid of a CEO, there’s a couple of ways that they do it. There’s, there’s the smooth transition.

[00:07:25] Cameron: Uh, yeah, like, he’s, he’s tired, he or she is going to hand over, we’re going to start the recruitment, or we’re going to appoint an interim person, there’s going to be a transition, you kind of ease the

[00:07:37] Cameron: market into it. Or, if it’s sudden, because of some malfeasance, then you tell everyone what the malfeasance

[00:07:45] Cameron: was, they caught

[00:07:46] Steve: how you called it malfessance. Oh wait, I just love malfessance. I’m saying that for, I would

[00:07:51] Steve: have said malfeasance, but malfessance is where I’m going from now

[00:07:54] Steve: on, malfessance. And it was a debacle. The whole thing was a debacle.[00:08:00]

[00:08:00] Cameron: I just did

[00:08:00] Steve: wasn’t a debacle. It was a

[00:08:02] Steve: debacle.

[00:08:04] Cameron: Um, and you say, look,

[00:08:05] Cameron: he,

[00:08:05] Cameron: you know, he fucked his secretary or he, you know, did

[00:08:08] Cameron: something.

[00:08:09] Steve: There you go.

[00:08:10] Cameron: the third one, and the most common, I think, is it’s health reasons or family reasons. There’s a family emergency, there’s health reasons. Um, and he has to leave suddenly to go and take care of that.

[00:08:22] Cameron: They’re the, they’re the, I think, the main ways that they get rid of a CEO and handle it. We got none of those with this. It was just done suddenly, no explanation, vague statements. So then, uh, on top of that, and, and Sam was, it wasn’t just Sam that went, Greg Brockman, um, was removed and he wasn’t fired, he was removed from his position of being chairman of the board.

[00:08:46] Cameron: Uh, but then he, he resigned on the same day. Mira Murati, the CTO, was declared to be the interim CEO. Within a day, she’d been replaced after Sam came, went [00:09:00] back in and posted a photo on Twitter of him wearing a guest badge, saying it’s the first and only time I’m going to be wearing an OpenAI guest badge.

[00:09:07] Cameron: When that, when the board didn’t accept him back, they removed Mira as the interim CEO and appointed Emmett Shear, formerly, uh, one of the founders and the CEO of, um, Twitch.

[00:09:24] Steve: Which that’s right. Yep.

[00:09:25] Cameron: And then he goes into meetings, and there’s again no statement coming out of it, still, about why they fired Sam. He said he was demanding answers from the board, and the suggestion, uh, was that he wasn’t getting them.

[00:09:39] Cameron: And then Sam goes back in the next day after the employees all threatened to resign. Sam and Greg get immediately hired by… Uh, Satya Nadella, the CEO of Microsoft, to start a new AI division. Now, from the get go, I was calling bullshit on that. And I, as far as I can tell, I was the only person. I was sitting [00:10:00] on Twitter live streams with all of the Digerati in the U.S.,

[00:10:03] Cameron: um, Scoble, um, et cetera, et cetera. And I was calling bullshit on this. I’m like, listen, Sam Altman

[00:10:11] Cameron: isn’t a go work for Microsoft full time kind of guy. Sam could go out and raise 10 billion tomorrow

[00:10:19] Cameron: to build,

[00:10:20] Steve: a check. How much do you want? What are

[00:10:21] Cameron: any startup you want, particularly him and Greg, who’s not

[00:10:23] Cameron: just the Chairman of the Board, he’s one of the key engineers behind ChatGPT. And plus all their employees, they could just go anywhere

[00:10:31] Cameron: and get as much money as they wanted to do whatever they wanted to do. He’s not a blue badge kind of guy. But what surprised me was I was the only person that I could see calling bullshit on that whole story. Everyone else was like, oh my god, this is a huge coup from Microsoft.

[00:10:44] Cameron: They bought the whole thing. I’m going, no, no, no, this is a ploy. This is Satya Nadella making a

[00:10:50] Cameron: ploy with these guys because, you know, it was basically, okay, board of OpenAI, we’ll just take everything that you’ve got, all of

[00:10:57] Cameron: your people,

[00:10:58] Steve: and just build it [00:11:00] ourselves.

[00:11:00] Cameron: just build it

[00:11:00] Cameron: ourselves

[00:11:00] Cameron: anyway, so fuck you. And it was

[00:11:03] Cameron: designed to crush the board. I said, you know, 24 hours before it happened, Sam will be back by the end of the

[00:11:08] Cameron: week, trust me. It

[00:11:09] Steve: did. And I

[00:11:10] Steve: have the text message, which is timestamped to prove that Cameron Reilly was the one. And that’s why you

[00:11:17] Steve: got to tune into the Futuristic, people, because Cam Reilly is going

[00:11:21] Steve: to serve you

[00:11:21] Steve: well.

[00:11:21] Cameron: I’ve just been around long enough to smell

[00:11:25] Steve: It made sense

[00:11:25] Cameron: I see it

[00:11:26] Steve: that, and at one point I was like, I don’t know what this looks like, this

[00:11:30] Steve: new thing, um, but there was an interesting interview that Satya Nadella did with, uh, Kara Swisher, um, just before Altman went back. And you could smell it in that interview that, um, he was very, yeah, he was very unhappy.

[00:11:46] Steve: He was actually very diplomatic, but he, but in the most beautiful diplomatic way, he described how unhappy he was to find out about it when it happened. Um, how much money they’d put in, that he thought that, yeah, he had a strong [00:12:00] relationship with Sam and what Sam was doing and where OpenAI was going.

[00:12:03] Steve: So, we’re still committed to AI and what it’s doing, and he said something to the effect of, we’ll still work with them, but we’ll build out some of our own resources if necessary. But you could see that he was like, nah, this isn’t going to fly, in that interview. It was a really good interview.

[00:12:16] Steve: And

[00:12:16] Steve: he was, it was very,

[00:12:18] Steve: he was as transparent as he could be. And surprisingly so.

[00:12:22] Cameron: You’ve got to hand it to Satya, um, just an

[00:12:25] Steve: Very cool under pressure. Very cool. Yes.

[00:12:28] Cameron: mean, not really much pressure for him,

[00:12:29] Cameron: I mean, he was going to be okay either way, but he played it masterfully, he just looked like a supportive big brother. He came out looking really good, like it was a real PR coup for Satya. So anyway, let’s get into the theories about why Sam was fired.

[00:12:44] Cameron: I don’t want to spend too much time on it because we don’t really know, but for people that weren’t paying attention, the first rumours going around were that, uh, they discovered AGI. OpenAI discovered AGI. The board, including Ilya, [00:13:00] who’s on the board, were, uh, were concerned that Sam wasn’t taking the threat of AGI seriously.

[00:13:06] Cameron: Now, as we pointed out on the show a couple of weeks ago, the board of OpenAI is the board of the non profit, the not for profit. The for profit arm sits underneath that, but the board has responsibility. Their mission,

[00:13:18] Steve: company.

[00:13:19] Cameron: yes, and their mission is the safe delivery of AGI for the betterment of humanity. And if they felt like Sam was doing something that was not leading him in that direction, then they actually have a responsibility to terminate him.

[00:13:36] Cameron: But they should have said why. So one theory was it was AGI, um, and Sam wasn’t doing something. Another theory that was being floated was something to do

[00:13:46] Cameron: with, um, allegations of mistreatment that his sister has been making for quite a few

[00:13:52] Steve: I saw that. Yeah. Yes.

[00:13:54] Cameron: There was

[00:13:55] Steve: But I never thought that that would be it because it came up,

[00:13:58] Steve: that’s been around a long

[00:13:59] Steve: [00:14:00] time and this seemed a bit

[00:14:00] Steve: sudden for that to be the moment unless there was more detail under that. But yeah.

[00:14:05] Cameron: There were, there were other allegations that he was running around, uh, before his termination, trying to raise billions of dollars to start a new chip development company to go head to head with Nvidia, build their own chips, and also to build his iPhone-like product with Jony Ive, and something about that the board wasn’t happy

[00:14:28] Cameron: with. Um, and, uh,

[00:14:32] Cameron: I think they’re the main things, right? They were the main theories that were going around at

[00:14:36] Cameron: the time, that were being

[00:14:37] Steve: Well, the other one too was that he was pushing the company’s for-profit objectives too far,

[00:14:43] Steve: where all the resources were

[00:14:44] Steve: being focused on the profit side of the entity. That was the other one.

[00:14:49] Cameron: Oh, and there was, there was a,

[00:14:51] Cameron: there was a letter that a couple of disgruntled employees posted that Elon Musk retweeted, where they were saying, yeah, this is like, there’s been a long history of [00:15:00] people unhappy with Sam, employees are unhappy with Sam, um,

[00:15:05] Cameron: But, you know, the fact that 725 out of their 770 employees said that we’re going to follow him to Microsoft kind of suggests

[00:15:13] Steve: number, it’s rare.

[00:15:15] Cameron: he’s

[00:15:15] Steve: only ever expect a 50-50 in a corporate environment, a 50-50 is the big win. You know, if you got 51 percent of them, you’d be stoked in any corporate environment. To get 90 percent is unheard of.

[00:15:26] Cameron: yeah, yeah, yeah. So, but the crazy thing is we still don’t know. And

[00:15:29] Cameron: the even crazier thing

[00:15:30] Cameron: is. Only today, like an hour before we recorded, a new theory came out. Now this is about something that’s been called Q*. Now,

[00:15:42] Steve: Yes.

[00:15:44] Cameron: brand new to me, Steve, what Q* is. I did a quick bit of reading. The theory behind this, this is from an article in Reuters.

[00:15:52] Cameron: There’s this story that before Sam got fired, there were concerns inside [00:16:00] of OpenAI that a new model, something separate from the large language model that they’ve been working on, codenamed Q*, had shown signs of AGI, and that, um, for whatever reason, the board wasn’t happy with the way Sam was handling this, and they decided that for the safety of humanity, and for the delivery of their mission, they had to remove him from power.

[00:16:30] Cameron: Now, I’ve done a bit of reading. There is a field of mathematics called Q*, and there’s a field of AI research called Q-learning. And I thought we should get Maverick to… can you explain to us a little bit about what Q-learning is, to see if we can make sense of

[00:16:49] Cameron: this.

[00:16:51] Steve: Let’s do it.

[00:16:51] Cameron: Maverick, can you explain to us the role of Q-learning in artificial intelligence research?

[00:16:58] Maverick: Absolutely. Uh, [00:17:00] in AI research, Q learning plays a crucial role, uh, especially in the realm of reinforcement learning. It’s a star player in teaching machines how to make decisions in uncertain and complex environments. Imagine you’re teaching a robot to navigate a maze. Traditional programming would require you to write specific instructions for every possible situation.

[00:17:19] Maverick: It’s like trying to predict the future and write a script for it. Tough, right? Q learning flips the script. Instead of you telling the robot what to do at every turn, it explores the maze on its own, learning from its successes and mistakes. The core of Q learning is about learning the value of actions in different states.

[00:17:36] Maverick: It’s like the robot is asking, if I’m at point A and go to point B, how good will that be for me? These values get updated as the robot learns more, making its decisions smarter over time. In AI research, this method is gold because it can be applied to a huge range of problems, from playing video games at a superhuman level, to optimizing investment strategies.

[00:17:58] Maverick: It’s a way of developing AI that can [00:18:00] adapt, learn, and make decisions in dynamic, real world scenarios, which is pretty much the holy grail of AI research. It’s not just about programming intelligence, it’s about nurturing it.
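For readers who want the nuts and bolts, here is a minimal sketch of textbook tabular Q-learning in Python. This is the vanilla algorithm Maverick is describing, not whatever OpenAI’s Q* actually is, and the corridor environment, constants, and reward scheme are all made up for illustration.

```python
import random

# Toy environment: a 1-D corridor of states 0..5, goal at state 5.
# Actions: 0 = step left, 1 = step right. Reward 1 for reaching the goal.
N_STATES, GOAL = 6, 5
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # The core Q-learning update: nudge the value estimate toward
        # (immediate reward + discounted best future value).
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# After training, the greedy policy should point right, toward the goal.
print(["right" if q[1] >= q[0] else "left" for q in Q[:GOAL]])
```

The point is the update line: nobody scripts the route through the maze; the values converge on their own from trial, error, and feedback, which is exactly the biomimicry-style feedback loop Steve picks up on next.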

[00:18:10] Steve: It sounds to me, based on that, that

[00:18:12] Steve: it’s not so much large language models that look for pattern recognition; this looks for, uh, ways to get somewhere, rather than you programming the options. Well, it sounds, it sounds a lot like biomimicry, reinforced actions on what you do.

[00:18:29] Steve: I mean, even when you’re learning to walk, you iterate a

[00:18:31] Steve: lot of times and you get a feedback loop and

[00:18:33] Steve: the feedback loop tells you the best approach to do something.

[00:18:36] Cameron: yeah. And what I

[00:18:38] Steve: can imagine how many iterations you can do, you know, with a neural network where it can, you know, have trillions of computations happening per

[00:18:45] Steve: second.

[00:18:46] Steve: Um, and you’re going to, then you’re going to get this rapid learning spiral.

[00:18:50] Cameron: you know, it was something I read on Reddit, on a thread about Q-learning, and I’m not sure if this is accurate, but somebody was saying it’s, [00:19:00] it’s self critical and self correcting, so it doesn’t need to be, uh, trained by humans giving it reinforcement feedback, uh, it can train itself. Anyway, so the story according to Reuters is that they had a breakthrough at OpenAI on this Q* project.

[00:19:23] Cameron: And it was showing some kind of what their anonymous source called spooky behavior. Now, adding to this, it was interesting. I had a story already queued up before the whole debacle happened over the weekend. Sam gave a talk at Cambridge recently. And towards the end of it, a student asked him, to get to AGI, can we just keep min-maxing language models?

[00:19:47] Cameron: Or is there another breakthrough that we haven’t really found yet to get to AGI? Here’s what Sam said. We need another breakthrough. We can still push on large language models quite a lot, and we will do that. We can take the hill [00:20:00] that we’re on and keep climbing it. And the peak of that is still pretty far away.

[00:20:04] Cameron: But within reason, I don’t think that doing that will get us to AGI. If, for example, superintelligence can’t discover novel physics, I don’t think it’s a superintelligence. And teaching it to clone the behavior of humans and human text, I don’t think that’s going to get there. And so there’s this question which has been debated in the

[00:20:24] Cameron: field for a long time.

[00:20:26] Cameron: What do we have to do in addition to a language model to make a system that can go discover new physics?

[00:20:33] Steve: Yeah. It’s a, it’s a really good way to

[00:20:34] Steve: describe

[00:20:34] Steve: it, isn’t it?

[00:20:36] Cameron: And add to that, something else he said, just before he got fired, was that four times in the history of OpenAI, he had been in the room when they pushed forward the boundaries of what was possible, and that the most recent time had been in the last couple of weeks. And he didn’t specify what that was, but…

[00:20:58] Cameron: The suggestion now is [00:21:00] it might have been this brand new model that they’ve been playing around with. But anyway, again, one of the frustrating things with all of this is it’s still all speculation. It’s been a week, we still don’t know what happened, why he was fired, maybe we’ll never know. We do know that the original board…

[00:21:15] Cameron: Um, which included the Australian woman, Helen Toner, and which was set up to protect humanity from AGI, they’re all gone. The new board, there’s a brand new temporary board. I’m sorry, one of the members, D’Angelo, the CEO of Quora, is still on the board. Um, who some people now think was the key instigator of the whole firing, by the way, in the first place.

[00:21:48] Cameron: He’s still there. Um, a new couple of board members. Ilya’s not on the board at the moment. Sam isn’t. Greg isn’t. The other people aren’t. They are going to build a new board. What the mission for that board will [00:22:00] be, we don’t know. Who’s going to be on it, we don’t know. Good news is, my big concern over this in the last week was that Ilya would end up out if Sam came back.

[00:22:09] Cameron: It looks like Ilya’s still there, he’s still the Chief Scientist. Um, his role in the whole thing: he was the one that summoned Sam and Greg to meetings with the board last week where they were fired. Uh, there was a lot of Ilya hate, uh, online in the next couple of days. Um, you know, Elon, though, tweeted about Ilya.

[00:22:34] Cameron: I mean, I know that you’re not a big fan of Elon, but Elon,

[00:22:37] Cameron: tweeted, um,

[00:22:39] Cameron: uh,

[00:22:40] Steve: work. I just don’t like him as a person, but anyway, that’s some of his behaviors. Yeah.

[00:22:44] Cameron: he wrote, I’m very worried. Ilya has a good moral compass

[00:22:48] Cameron: and does not seek power. He would not take such drastic action unless he felt it was absolutely necessary. And I kind of get that feel with Ilya as well. Like, I’ve watched hours and hours and hours of interviews with [00:23:00] Ilya. Again, seems like a very smart, very

[00:23:03] Cameron: considered, deep thinking guy.

[00:23:06] Cameron: I’m sure he wouldn’t blow his company up without good reason either.

[00:23:10] Cameron: Um, and

[00:23:12] Cameron: there’s

[00:23:12] Steve: That said, that said, he came out

[00:23:13] Steve: and regretted it quite soon after.

[00:23:16] Cameron: he regretted, uh, it. Yeah, but he must’ve

[00:23:20] Cameron: had, I just keep

[00:23:21] Cameron: feeling it. He must have had a really good reason to blow the company up. You can, you know, you can blow something up, um,

[00:23:30] Cameron: and still feel bad about it afterwards, but in the moment feel like it had

[00:23:34] Cameron: to be done, you know, but what that was, we still don’t know.

[00:23:37] Cameron: It’s, it’s

[00:23:37] Cameron: really crazy.

[00:23:39] Steve: Yeah. On the AGI thing with the Q*, the one thing that just never, ever gets spoken about enough, and I know I’ve raised it here more than once, when we talk about artificial intelligence becoming smarter than humans in, in every realm, is the one thing that I don’t think is necessarily aligned to intelligence. Because the large language models, even though they’re a subset [00:24:00] and a replication device, we already know are smarter than most humans in most, in most realms; they don’t have the nuance and all of that.

[00:24:06] Steve: But you know, in terms of quant, you know, quantitative intellectual capability, much better mathematical, all those things. What no one ever talks about is the important part of an AGI and ASI. For me, that’s self direction, or intention, and being able to set your own tasks. No one really talks about that in recent times, in the recent five or six years.

[00:24:28] Steve: That was the big stuff that we used to, you know, when, um, Superintelligence came out, the book by, uh, Nick Bostrom, that was one of the big things, but it seems that that’s gone off the agenda and everyone’s just thinking quantitatively. I don’t, I just don’t hear enough of it. And the one thing that I’ve been thinking a little bit about lately is how many biological creatures have

[00:24:51] Steve: tiny levels of intelligence, that we already know our computation is far smarter than and can do more than, yet have intentions and self direction, you know. And they’ll do [00:25:00] things like, a bird will put a nest in my house, even though I wouldn’t like it, because it’s following its own intentions and objectives, and it’s not very intelligent at all, but it’ll find a little nook in the, in the spouting and do that.

[00:25:12] Steve: So again, I’m, I’m not convinced that pure computation gets us to sentience, or intentions and self direction. That’s the thing you’ve got to worry about, not, uh, I think, uh, computational capability or intelligence capability. And that just lately seems to have,

[00:25:33] Steve: even in that Reuters

[00:25:34] Steve: article, it wasn’t there.

[00:25:36] Cameron: Um, well, I, I, I’ve heard Mira Murati talk to this. Um, I’m looking for the quote, I’ve heard her use it several times on different podcasts. Um, she says, let me [00:26:00] find it. By artificial general intelligence, we usually mean highly autonomous systems that are capable of producing economic output, significant economic

[00:26:11] Cameron: output. In other words, systems that can generalize across different domains. It’s human level capability. OpenAI’s specific vision around it is to build it safely and figure out how to build it in a way that’s aligned with human intentions so that the AI systems are doing the things that we want them to do and that it maximally

[00:26:28] Cameron: benefits as many people out there as possible.

[00:26:31] Cameron: Ideally everyone, but the whole point of highly autonomous

[00:26:35] Cameron: systems, I think,

[00:26:37] Steve: But autonomy isn’t self direction. No, autonomy is not self direction. Autonomy means you just go with it. Well, no, no, I don’t think so. Like, so it might be autonomous. You can set it a task and it automatically goes and does that. It’s not necessarily making up its own tasks to do, like setting its own direction.

[00:26:56] Steve: Something can be autonomous. Like, you can have autonomy. An [00:27:00] autonomous vehicle is autonomous. Drive me to Shopping Centre X or to the airport; it goes about doing it itself. And I’m, for me, I’m really interested in this idea of intentions and self direction. Not just autonomy, can it do it by itself, but did it set its own task?

[00:27:19] Steve: That’s the thing. Or are they worried that, if it becomes autonomous, somewhere down that development, it goes off on a,

[00:27:26] Steve: on a tangent or a trajectory where it

[00:27:29] Steve: goes in a different direction?

[00:27:30] Steve: That’s, that’s

[00:27:30] Cameron: Don’t

[00:27:31] Steve: I know it’s, we’re playing with semantics here,

[00:27:33] Cameron: well, don’t we

[00:27:33] Steve: but, to me, this is really

[00:27:35] Cameron: want it to do what we want it to do? Isn’t that the first goal anyway? Autonomous in that we

[00:27:40] Steve: No, no, no. We want it to do what we want

[00:27:42] Steve: it to do. But my point is, is,

[00:27:44] Steve: is not so much just autonomy is like

[00:27:47] Steve: it going, yeah, nah, not doing that. Like this, this is what I’m doing. I know you built me for this, but I’m here now, pal.

[00:27:53] Steve: So

[00:27:53] Steve: like a teenage kid, you know, it does its own thing.

[00:27:58] Cameron: and that’s what you want? That’s the [00:28:00] level of autonomy

[00:28:00] Steve: No, no, that’s not what we want, but I just feel like some of those small, seemingly synonymous words, and I know it’s semantics, but I think it’s really important. And we’re not talking enough about that. And I’m not convinced that pure intelligence alone creates self

[00:28:19] Steve: direction

[00:28:21] Cameron: Right, that’s a fair point.

[00:28:25] Steve: because we already have things that are far less intelligent than computational systems that are self directed and do their own things, despite what

[00:28:32] Steve: humans want, or

[00:28:33] Steve: they have their own, you know. Well, every animal

[00:28:37] Cameron: Oh, you’re not talking about machines, computers, you’re

[00:28:40] Steve: no, no,

[00:28:41] Cameron: animals. Right,

[00:28:42] Steve: that are far less intelligent that do their own thing.

[00:28:44] Steve: Anyway, that’s just a by the by,

[00:28:46] Steve: but it’s just been a little bugbear of mine recently, but anyway, we’ll, we’ll, we’ll wait and see. Just in case you are

[00:28:51] Steve: listening, no, I don’t

[00:28:53] Steve: want them to develop their own objectives and intentions.

[00:28:56] Cameron: Okay, let’s, let’s get off of the

[00:28:58] Cameron: OpenAI debacle for a [00:29:00] second.

[00:29:00] Steve: lot, that’s 40 minutes of OpenAI, goodness me.

[00:29:02] Cameron: normally we start off with talking about something cool that we’ve

[00:29:06] Cameron: done in the last

[00:29:06] Steve: We should do

[00:29:07] Cameron: two weeks. Tell me what you’ve done

[00:29:09] Cameron: that’s cool, Steve.

[00:29:11] Steve: I, I took your lead from the last one and I created

[00:29:13] Steve: two AIs. One of my book, the

[00:29:14] Steve: Lessons School Forgot,

[00:29:16] Steve: which was really good. Really, really good. I just uploaded the PDF of the book and it was very good at, um, pulling things out of the book and, and answering questions. And I made it answer as if I was, like, Hey, it’s Steve here.

[00:29:28] Steve: Yeah. Hey, that’s a good question. I was thinking about that. It does those types of things. Um, a couple of things that I found that it wasn’t that good at. It wasn’t really good at distilling small parts within the book with specific answers. It would always take it too broad. It would like take all the ingredients in the book and then maybe look for things that, that pattern match the question.

[00:29:50] Steve: When some of the things that I asked were specifically within a certain area of the book, because I just wanted to see what it would deliver. And some of them, like I have in the book the three types of money, earned money, [00:30:00] invested money, and invented money, and the hierarchy of that, that was one of the things that it did, and at first it would take a lot of stuff that just wasn’t in the book, and I had asked it not to look at the web, but I was not convinced that the system that it has there was only looking at what it had been fed.

[00:30:18] Steve: I’m just not convinced, despite it being directed that way, because it said things that I’d

[00:30:23] Steve: never said and they weren’t in the

[00:30:24] Steve: book and it would deliver those. So that was a really interesting experiment.

[00:30:29] Cameron: so to be clear, you built

[00:30:30] Cameron: your own GPT. You used the MyGPTs tool that was launched on Dev

[00:30:35] Cameron: Day.

[00:30:36] Steve: Yes, and I built it

[00:30:38] Cameron: many years ago,

[00:30:39] Steve: It does, it seems like a lifetime ago because I’ve built so many of them recently and for clients and all sorts of stuff. Um, but it didn’t, it took things that weren’t in the book and answered and it wasn’t great at keeping an answer to a section because the book was very sectionalized, wasn’t great at that.

[00:30:56] Steve: But then I built an AI of me, which was General Steve Sammartino [00:31:00] AI. And I gave it, um, blog posts that I’ve written, over 3 million words across 20 years, podcasts that I’ve done. Uh, I told it where to look. I told it to look on YouTube for my interviews. I gave it some really specific instructions and I gave it a hierarchy of where to go first.

[00:31:15] Steve: Go to my website first, interviews with me first, all of that stuff, articles that I’ve written, um, TV shows, and it was, it was much better than the book one. And I think it was better because it had a broader context of where to look, and I was wondering if it’s because the GPT has already been trained on some of my stuff. It would already

[00:31:32] Steve: give some stuff, just the general ChatGPT, if I asked it about me, because I’ve got a deep, long tail of published content. And that was better than the book one, and I thought it would be the opposite. I thought a simple, you know, 300 page book on a PDF would be a simple thing for it to analyze, and it wouldn’t make mistakes.

[00:31:50] Steve: It made

[00:31:50] Steve: more mistakes on the book

[00:31:51] Steve: than it did on the wider Steve

[00:31:52] Steve: Sammartino AI, and I was

[00:31:54] Steve: really surprised by that.

[00:31:56] Cameron: Well, I had a similar but different experience to you, [00:32:00] so as soon as we finished our last recording, the first thing that I did was build a MyGPT using my Psychopath Epidemic book. Same thing. Uploaded the PDF and then interrogated it, asked it questions about psychopaths and it did a pretty good job of answering questions based on the book.

[00:32:18] Cameron: Uh, then I built one for QAV, my investing show, and I uploaded our guide, our book on using our investing system, and I started uploading transcripts of past episodes, and it’s done a pretty good job so far of answering questions about how we tackle investing based on all of that. But sometimes it does go out to the web.

[00:32:45] Cameron: I know that the information to answer the question I’m asking it is in the material I’ve given it, but it’s gone out and said, oh, the material doesn’t [00:33:00] explain that, and it goes off and Googles it. I’m like, well, no, it does.

[00:33:04] Cameron: I know it’s in there, so you’re just not, um, finding it, I guess, so it’s, it’s been a little bit

[00:33:11] Cameron: hit and miss in that respect, but, you know, uh, like,

[00:33:17] Cameron: it’s still astounding what

[00:33:19] Cameron: it can do, like

[00:33:20] Steve: Oh, it was

[00:33:21] Cameron: how easy these things are to build, they take a minute to upload a few

[00:33:25] Cameron: documents,

[00:33:26] Steve: the insane bit. Let’s put aside the imperfections, right? Just wind

[00:33:30] Steve: the clock back and go, hey, in a few years you’ll be able to write, like, a handful of instructions and have an AI, just like that. It’s insane. And I did another one

[00:33:40] Steve: for a client. Um, an electrical wholesaling company. And I built one, uh, they’ve got all these cheat sheets for electricians and information, legal stuff.

[00:33:50] Steve: And I pumped in, they had, uh, yeah, I don’t know, 20. So interestingly, it will only take 20 PDFs, but if [00:34:00] you put them all into one, it’ll take it, which is weird. But anyway, so I did that. I, um, married them all up, put them in, gave it instructions of where to go first, um, to answer all of the questions, so that they’ve got their own, uh,

[00:34:13] Steve: AI, and I demoed it to the client and it blew their freaking minds, mate. And I was like, listen, you’ve got to move quick, because now there’s going to be a whole race on getting the GPT that becomes the best one for industry X or industry Y. You know, a little bit like, you know, remember there was a .com for every domain, you know, sneakers

[00:34:32] Steve: .com and this.com and that.com and music.com. Granted, none of those ended up as the winners, but there is an interesting retro play with that now. And the AI was pretty damn good. What I had to train it on though: sometimes, if it didn’t know the answer, it would send people to a competitor, because they’re not the biggest.

[00:34:50] Steve: So then I said, whenever you don’t know, give the phone number for this company, and it does that pretty well. So you can kind of create circumvention and dead ends where it takes you back

[00:34:59] Steve: somewhere, which [00:35:00] was kind of cool.
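Incidentally, for anyone who hits the same upload cap, combining a folder of PDFs into one file is a few lines of Python. This is a minimal sketch assuming the pypdf library (pip install pypdf); the folder and file names are made up for illustration.

```python
from pathlib import Path

from pypdf import PdfWriter  # pip install pypdf

writer = PdfWriter()
# Append every page of each source PDF, in filename order.
for pdf in sorted(Path("cheat_sheets").glob("*.pdf")):
    writer.append(str(pdf))

with open("combined.pdf", "wb") as f:
    writer.write(f)
```

Upload the single combined.pdf and the 20-file limit never comes into play.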

[00:35:00] Cameron: mhm. Of course, getting back to the OpenAI debacle, one of the things that became very obvious, um, in the last week is that all of the thousands of companies that are building their businesses on the expectation

[00:35:15] Cameron: that ChatGPT or the API will be available to them, all of a sudden started to question

[00:35:21] Cameron: that decision.

[00:35:23] Cameron: Um,

[00:35:24] Steve: they say, you never, never, ever, um, you know, don’t, don’t build on, on someone else’s platform. Here’s what I used to say back in the Facebook days. And I said this when they were saying, get the likes and you’ll be able to communicate directly with all the fans of your brand or whatever.

[00:35:39] Steve: And that’s what they did. And everyone invested millions, getting like a million Red Bull likes and a million likes for washing powder and Ford cars. And then they said, oh yeah, about that, we changed our mind. Now that’s going to go to zero, and if you want reach, you’re going to have to pay to advertise. I’m like, never grow your veggies

[00:35:53] Steve: in someone else’s

[00:35:54] Steve: garden because they can pull it up by the roots.

[00:35:56] Steve: And that has

[00:35:57] Steve: never changed.

[00:35:59] Cameron: That’s the same [00:36:00] reason for podcasting, you know, there are platforms out there that can run all your podcast hosting and take your money for premium subscriptions and they’ll manage all of that. I always say, no, I’m not, I’m not building my business infrastructure on the top of something somebody else owns because

[00:36:16] Steve: Cross your fingers.

[00:36:17] Cameron: if they go out of business tomorrow, or if they change their rules tomorrow, I’m screwed, right?

[00:36:23] Cameron: I have

[00:36:23] Cameron: to pick it, pack it all up and start all over again. I own everything, as much

[00:36:27] Cameron: as possible. Like, I don’t have my own credit card, uh, you know, billing system and all,

[00:36:32] Steve: I’ve always said that that’s been the one area that you’ve really

[00:36:35] Steve: not delivered in Cameron. I believe that you should have your

[00:36:37] Steve: own credit card.

[00:36:38] Steve: I’ve

[00:36:38] Steve: always said

[00:36:38] Cameron: Yeah, yeah. Well, that’s how Elon Musk got

[00:36:41] Steve: CamCard, we call it.

[00:36:42] Cameron: Yeah, CamCard. But getting back to the MyGPTs, as you said, you know, without trying to pick holes in it, keep in mind, everybody, that ChatGPT was launched to the

[00:36:54] Cameron: public on November 30th, 2022. We’re recording [00:37:00] this on November 23rd,

[00:37:01] Cameron: 2023. It hasn’t even been out for a year.

[00:37:05] Cameron: And

[00:37:06] Cameron: now you can build your own

[00:37:08] Cameron: ai. You can build your own AI by

[00:37:11] Cameron: uploading some documents to

[00:37:13] Cameron: it. Like if you told me a year ago that this is what we’d be doing today, I would’ve thought you were bonkers. Absolutely

[00:37:23] Cameron: bonkers,

[00:37:25] Steve: because it really is

[00:37:28] Steve: a moment of, it’s magical, what it can do. Yeah, I’ve

[00:37:31] Steve: been,

[00:37:32] Cameron: and it’s

[00:37:32] Cameron: not

[00:37:32] Steve: I’ve been saying

[00:37:33] Steve: two things on stage lately, you know, doing the keynotes: you’re in a revolution when the technology feels like

[00:37:40] Steve: magic and everyone has access to it. Like they’re the two things that I’ve kind of landed upon as knowing revolutionary moments.

[00:37:48] Steve: If you go back through time, um, you know, if you go back to when we first discovered how to make our own fire, it’s like, it feels like magic. I’ve got fire at the end of a stick and anyone can have it. Or a Model T Ford, or [00:38:00] a TV in everyone’s

[00:38:01] Cameron: I still can’t make a fire by rubbing, you know, sticks together on a leaf,

[00:38:05] Steve: I’ve never

[00:38:05] Steve: tried. I’ve actually never tried. I imagine it’s.

[00:38:07] Cameron: it’s very

[00:38:08] Steve: I

[00:38:08] Steve: imagine there’s a YouTube lesson on it.

[00:38:10] Cameron: Yeah, there’s a lot of really good ones,

[00:38:12] Cameron: those naturalist guys who do it. Um, something else that I did this week, Steve, that I want to talk about. I mean, I wrote a lot of code as well in the last couple of weeks.

[00:38:20] Cameron: But one thing, I had this idea. So I’ve written some code over the last couple of months that, um, you know, goes off and does a lot of investing research for me. And, um, you know, these scripts that GPT helped me write, in Python, can take an hour or two to run, sometimes several hours, because there’s a lot, a lot of stuff they’re doing, and there’s some time delays in downloading data and stuff using Google Finance on the back end.

[00:38:45] Cameron: And I found that I was, um, sometimes, uh, I would go back a few hours later when I’d assumed the script had finished and it had failed halfway through for some reason, and I’m like, ah, shit, I thought it’d be done by now and I need to start it again or pick it up from where it [00:39:00] left off. I had this idea in the last week or so.

[00:39:03] Cameron: Wonder if I can set up a notification system to tell me if a script succeeds or fails and have it beam through to my watch.

[00:39:11] Cameron: So I asked GPT, Hey, can you help me do this? Yeah. Okay. It always answers the same way. Absolutely. I can do that.

[00:39:17] Cameron: Yeah, absolutely. I

[00:39:19] Steve: you, it’s the employee you always

[00:39:21] Steve: wanted.

[00:39:22] Cameron: Absolutely. Absolutely. So I

[00:39:24] Steve: that’s my whole get up. I’m glad I was waiting for you. Where you been all day?

[00:39:27] Steve: I’ve been waiting to do this.

[00:39:30] Cameron: It is the greatest.

[00:39:31] Cameron: So I wrote, uh, sort of a reusable

[00:39:34] Cameron: module that I can plug into any script I want now, any Python script, that uses

[00:39:40] Cameron: IFTTT. You remember IFTTT?

[00:39:42] Steve: I lift, mate. It was in my first book.

[00:39:44] Cameron: Oh yeah, I mean, I’ve been using it. Well, I haven’t used it for a long

[00:39:47] Steve: I haven’t

[00:39:47] Steve: used

[00:39:48] Cameron: to use it like

[00:39:48] Steve: in years. I haven’t used it in years, but I thought it could have been, well, I thought it could have been the fulcrum to, like, an IoT world. That’s one of the techs that never really has taken off, IoT.

[00:39:58] Cameron: Well, I think it and [00:40:00] Zapier are having a new lease

[00:40:01] Cameron: of life now because of AI. They can plug AI into different things,

[00:40:05] Steve: Right.

[00:40:06] Cameron: this isn’t really plugging AI in, but I built a module in Python that uses a webhook to call IFTTT. IFTTT then sends an iOS notification to my phone and my watch, and notifies me if a script completes successfully or fails.

[00:40:28] Cameron: So if it fails halfway through, I get a ding on my watch, the script has failed, I go, oh shit, I can go back to my laptop and, you know,

[00:40:35] Cameron: fix it. And just the fact that I built something that can send notifications, that I can write code to go and do something that would have previously taken me a day to do manually, have it automated, and create notifications to tell me when it succeeds or fails, just blew my mind.
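For anyone who wants to copy the trick, here is a rough sketch of what a module like that might look like. It assumes the standard IFTTT Webhooks trigger URL; the event name, the key, and the run_research function are placeholders you’d swap for your own.

```python
import functools
import traceback

import requests  # pip install requests

# Placeholders: your own IFTTT Webhooks event name and key
# (from https://ifttt.com/maker_webhooks) go here.
IFTTT_EVENT = "script_finished"
IFTTT_KEY = "YOUR_IFTTT_WEBHOOKS_KEY"

def notify(message: str) -> None:
    """Fire the IFTTT webhook; an IFTTT applet forwards it as an iOS notification."""
    url = f"https://maker.ifttt.com/trigger/{IFTTT_EVENT}/with/key/{IFTTT_KEY}"
    # value1 becomes an 'ingredient' you can show in the notification text.
    requests.post(url, json={"value1": message}, timeout=10)

def notify_on_completion(func):
    """Decorator: ding the watch whether the wrapped script succeeds or dies."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            result = func(*args, **kwargs)
            notify(f"{func.__name__} completed successfully")
            return result
        except Exception:
            notify(f"{func.__name__} FAILED: {traceback.format_exc(limit=1)}")
            raise
    return wrapper

@notify_on_completion
def run_research():
    ...  # the hours of downloading and number-crunching go here

if __name__ == "__main__":
    run_research()
```

Wrapping the long-running function in a decorator means any existing script picks up the behaviour with one extra line above its entry point.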

[00:40:56] Cameron: And here’s the message that I wanted to leave, [00:41:00] um, with people listening to this, if they’re not already developers: we need to start to think like developers. Everyone needs to start to think like a coder. This is a new way of thinking for those of us that aren’t coders. You need to start to think… You know, I remember, like, a year ago or a little bit less than a year ago, right?

[00:41:21] Cameron: When GPT first became available, I, I read up on it. I’d gone and set up an account. But my big problem was, what do I do with this? I had to learn

[00:41:30] Cameron: to think about how to use an artificial

[00:41:34] Steve: around that tech. It does. This is, this is the thing that I think when a new technology comes along, you get, you get two, um,

[00:41:41] Steve: things happen: you get the efficiency use case, and the unexpected use case, the new way of thinking around the tool. So, you know, the motor vehicle is, okay, now I can get where I’m going quicker than I would with a horse, but then there’s the other thing, like, oh, all of a sudden we can have

[00:42:02] Steve: Like, you know what I mean? You get these different unexpected use cases. There’s those two parts, and the people that do really well are those that uncover the new way of working with the new tool, which kind of creates this new kind of green field. And, and that point that you’re making there is a really important one for people, and the only way you really do that is you learn a bit from others, but mostly serendipity, by just experimenting, and eventually the database of your life

[00:42:27] Steve: experience and thought

[00:42:28] Steve: patterns

[00:42:28] Steve: will marry with this and create a new set of thought patterns, you know, it’s like a new chemical mix.

[00:42:32] Cameron: I think the other good example

[00:42:33] Cameron: of this

[00:42:34] Cameron: is iPhones. Like, before we had iPhones, I don’t think anyone, including Steve Jobs and Johnny Ive, knew what iPhones were gonna, how they were gonna

[00:42:45] Cameron: reshape society. You know, they probably knew it was gonna be cool.

[00:42:50] Cameron: But they couldn’t have possibly foreseen how it was going to

[00:42:53] Cameron: fundamentally

[00:42:54] Steve: better. Like, let’s be honest, it was just better than the clunky, let’s call them web

[00:42:59] Steve: enabled [00:43:00] phones back then. Like, it had all the same

[00:43:02] Steve: stuff. It was just a bit better. But, um, I don’t think they realized how much better it would get. But for me, the big unlock was the GPS. That was the one that actually changed everything, because it created new kinds of capabilities with these devices that just weren’t possible on phones before.

[00:43:22] Steve: You know, I’ve spoken about that before. It creates dating apps and Uber and all of these things, where

[00:43:25] Steve: geolocation was, I think, probably the biggest unlock on the smartphone for me.

[00:43:29] Cameron: Yeah, so Uber’s a good example. I’m pretty sure when

[00:43:32] Cameron: they came out with the first iPhone, they didn’t envision

[00:43:34] Cameron: Uber,

[00:43:36] Steve: No, no way

[00:43:37] Cameron: so you know, we’ve all lived through that. We probably haven’t all learned the lessons from that, but GPT for me has been this journey for the last year. Like, initially, it was driving me nuts initially that I couldn’t figure out what to do with it.

[00:43:50] Cameron: I thought, what the fuck? I’ve got this artificial intelligence in my back pocket and I can’t think of what to do with it. Now it’s my go to. I spend hours in it every day doing all sorts of things. It’s my [00:44:00] first port of call. But now I’m trying to teach myself to think like a coder. What are things that I’ve never even thought about before that I might be able to code or automate that I can now do?

[00:44:12] Cameron: I now have the capability to do these things like the notifications thing. I didn’t think about it until like a week or so ago. I was like, Oh shit. I wonder if I

[00:44:21] Cameron: can just set up an alert. Oh my God, I can. This is amazing. What else can I create notifications and alerts

[00:44:28] Cameron: for

[00:44:29] Steve: Yeah. One of the big things on the coding mindset for

[00:44:32] Steve: me, and I haven’t been doing it, you know, for a really long time, was the idea of, of strings and creating sequences of events. I think one of the things that we do in traditional tasks, where we’re not coding, is everything’s sort of discrete and you look for overlaps with discrete events. But joining up

[00:44:49] Steve: things that are seemingly separate and can’t work together, stringing things together in ways, and the fact that you’re doing that with alerts, where you take different pieces, and these pieces of the puzzle get [00:45:00] strung together with a new holistic approach where you get a systems flow. Because we have systems in business, but we tend to do, um, okay, I know this strategy, and I wrote the strategy, now I’ve got to create a PowerPoint, which is kind of me just taking that, and you’ve got these discrete elements.

[00:45:15] Steve: It’s bringing the discrete and stringing them together with a layer of automation and asking the AI to create that bridge between things and make them interact. That’s, I think, the thinking that you need to have that coding

[00:45:26] Steve: mindset

[00:45:26] Steve: just back from, you know, when I used to do

[00:45:28] Steve: it. It

[00:45:29] Cameron: Yeah, it’s a, it’s a different modality of thinking, I think,

[00:45:33] Cameron: but,

[00:45:34] Steve: is.

[00:45:34] Cameron: um, before I,

[00:45:35] Steve: it’s systems thinking.

[00:45:37] Cameron: before I forget, another experiment that I

[00:45:39] Cameron: tried with my GPTs, which was a dismal failure this week, was to create a, um, a MyGPT to, to help me practice the London System. The London System is a… chess opening, but it’s not just an opening.

[00:45:54] Cameron: It’s a full game strategy. Um, and so I had a book on the [00:46:00] London System. It was a PDF that I uploaded into a GPT, and then I started to try and get it to answer questions about it. And it was absolutely hopeless. Could not get anything right. Still doesn’t understand chess. Then I thought I’d test the new GPT-4 Turbo on a game of chess.

[00:46:20] Cameron: So I was playing a game of chess.

[00:46:22] Cameron: Uh, on a chess app, Stockfish, and then I would, I was asking GPT-4, I was telling it what the moves were

[00:46:29] Cameron: every two or three

[00:46:30] Cameron: moves.

[00:46:30] Steve: you its next move or whatever.

[00:46:31] Cameron: Yeah.

[00:46:32] Cameron: well, what, what should I do here? What should I be thinking

[00:46:34] Cameron: here? And it was completely clueless. Like, it was not just giving me bad, um, information. It couldn’t remember where the pieces were. It was making illegal moves on

[00:46:47] Cameron: the

[00:46:47] Steve: Well, it still does forget things. I’ve noticed a few times, even when I was creating some

[00:46:52] Steve: GPTs, it’s buggy. Like, a couple of times it would change the subtitle of my GPT or change [00:47:00] the image without me asking. And it would, it would change the subtitle, you know, um, Tap into the Mind of the Samotron, and then it would just change it to, you know, Thoughts and Ideas from Steve Samotron.

[00:47:09] Steve: It’s like, dude, I didn’t ask you, what are you doing here? Or I’d, I’d ask it to do things and it would forget a previous instruction. I said, I already told you up there that from now on, answer this way. And it wouldn’t do it. It seems like, uh, it doesn’t have a good memory. And that actually reminded me of, um, one of the things you had in the list

[00:47:27] Steve: of things to go

[00:47:28] Steve: to

[00:47:29] Steve: with ChatGPT.

[00:47:30] Cameron: announcement?

[00:47:31] Steve: Yeah, then we might as well talk about that now.

[00:47:33] Cameron: Yeah,

[00:47:33] Cameron: I just pulled it up. So, I saw

[00:47:35] Steve: Oh, perfect.

[00:47:36] Cameron: before the

[00:47:37] Cameron: implosion a week ago, there was a Reddit post seven days

[00:47:41] Cameron: ago: “Your GPT will soon learn from your chats,” and there’s some screenshots here, uh, of something that hasn’t been released yet. Maybe it would have been released if they hadn’t had the week that they’ve had.

[00:47:54] Cameron: It says your GPT can now learn from your chats. Keep the conversation going. Your GPT will carry what it learns [00:48:00] between chats, allowing it to provide more relevant responses. Improves over time. As you chat, your GPT will become more helpful, remembering details and preferences. Manage what it remembers. To modify what your GPT knows, just send it a message.

[00:48:15] Cameron: You can reset your GPT’s memory or turn the feature off in settings. So this obviously, uh, hasn’t been rolled out yet, but there’s screenshots of the coding behind the scenes, um, in Python, of what this looks like. Um, so this seems to be like a coming feature where it will actually be able to remember and learn.

[00:48:44] Cameron: I don’t know how they’re implementing this because we, as we’ve talked about before, this isn’t the way the large learning models supposedly work, but, uh,

[00:48:53] Cameron: they seem to have done something here that means they’re going to be able to do this in the near future, which is [00:49:00] very exciting if it, if

[00:49:01] Cameron: it is

[00:49:02] Steve: Well, it does, it does, it does tend to

[00:49:03] Steve: answer, um, the same question differently on different occasions. You know, because it’s

[00:49:08] Steve: looking for patterns and it might not find the same ones each time. I don’t know how that works. But I like that you can ask it to forget things. I wanted to ask you something.

[00:49:15] Steve: You might know this. Is there any evolutionary benefit in

[00:49:17] Steve: us not remembering things? Is that a big, that’s a big question, potentially.

[00:49:22] Cameron: Yeah, it’s a good question. Why has evolution

[00:49:25] Cameron: given us flaky memories? Well, you know, the thing about memory is

[00:49:29] Cameron: the neuroscience of

[00:49:30] Steve: There’s some things that are flaky and some that

[00:49:32] Steve: are not. That’s what makes it interesting for me.

[00:49:35] Cameron: Are you sure that the things that you think are not, are really not though?

[00:49:38] Steve: No, I’m not. Okay. You got me.

[00:49:39] Steve: No, I’m not sure.

[00:49:40] Cameron: Like the science, I don’t know if we’ve talked about this

[00:49:42] Cameron: before, but I’ve, I’ve read about this over the last 10 or so years. I’ve read the neuroscience on this as it progresses. Still, our understanding of how the brain works is obviously very, um, very preliminary, but, uh, what we’ve started to understand [00:50:00] about the way memory works short term and long term are different, but with, even with long term memory.

[00:50:06] Cameron: There’s, you know, people tend to think of memory, I think, is just a thing. Like there’s one little spot in your brain that contains the memory of what happened the day

[00:50:14] Cameron: you got married, Right. That’s a little memory. And if you try and recall that, your brain sends a single and pulls that in. Turns out that that’s not how memory works at all.

[00:50:23] Cameron: Mem the memory of the day you got married is scattered. in

[00:50:29] Cameron: 25 different places in your brain, bits of it. The visual cortex that handles things like color and movement is in one part of your brain. The auditory memory is in another part of your brain. The spatial relationships between everything is in another part of your brain.

[00:50:46] Cameron: There’s different parts of your brain that store the different Components of the memory, it’s modularized. And when you say, Hey, I want to bring back that memory. Your brain quickly tries

[00:50:58] Cameron: to grab. All the [00:51:00] bits and pieces, and then strings them together into something that seems to be a

[00:51:05] Cameron: coherent story

[00:51:07] Steve: you, wanted? Is this it? Is this how you

[00:51:09] Steve: remember

[00:51:09] Steve: it?

[00:51:10] Cameron: but you have no idea, you have no way of knowing if the memory that it’s, it’s articulating to you is accurate or not because it’s all you have.

[00:51:18] Cameron: And that explains why multiple people who experienced the same event have completely different recollections of what happened because all of their brains are storing it differently and then trying to regurgitate it all of a sudden. And it’s a flaky process. Our brains are inherently

[00:51:36] Cameron: bad at this stuff, but we don’t notice it because we just believe that it’s right.

[00:51:42] Cameron: Unless, you know, we’re,

[00:51:44] Cameron: you know, faced

[00:51:45] Steve: say? What do you say? I can’t quite remember.

[00:51:47] Steve: My memory is

[00:51:47] Steve: sketchy. It’s interesting to say that you can’t quite remember

[00:51:50] Steve: something,

[00:51:52] Cameron: Yeah,

[00:51:52] Cameron: can’t

[00:51:53] Steve: that you can’t remember it, means that you remember it. Does that make sense? Like

[00:51:56] Cameron: you,

[00:51:57] Cameron: know that you should remember it, but you, there’s a [00:52:00] memory that

[00:52:00] Cameron: something happened,

[00:52:02] Cameron: but the details of what happened are, uh, not available to you. What's even more bizarre, you know, is when you, when you're trying to remember the name of something, of somebody, or an actor's name, or the, the name of

[00:52:14] Cameron: a food, or a

[00:52:16] Cameron: flower, and you, that feeling where you know it’s on the tip of your tongue,

[00:52:20] Cameron: and you can’t quite access it, and then

[00:52:21] Cameron: all of a sudden, half an hour

[00:52:23] Steve: That’s so frustrating. When that happens now, I worry when that happens now because I’m not 25

[00:52:29] Steve: anymore. And so

[00:52:30] Steve: I’m like, Oh God, is this, is this a sign? Is this, this is the start of the end?

[00:52:35] Cameron: When I was 30, I could recite historical

[00:52:38] Cameron: statistics, numbers. I remember walking around Microsoft being annoying as fuck because I could just quote numbers and stats and details and I had this like,

[00:52:49] Cameron: encyclopedia in my brain. And maybe it's 20 years of, of Google and Wikipedia that has meant my brain has just got lazy over the years, or it's just age, right?

[00:52:59] Cameron: But [00:53:00] now I can’t even remember what my name is

[00:53:01] Cameron: half the time. Anyway, let’s move on. There’s so many stories, Steve. Let’s get off AI for a second. Elon launched a fucking rocket, man,

[00:53:10] Steve: I saw it. It was a big mofo.

[00:53:12] Cameron: oh, wow, man, what, ah, I, I, look, I, I know, again, not everyone likes Elon, but Jesus

[00:53:20] Cameron: Christ, man, like

[00:53:21] Cameron: you

[00:53:21] Steve: look, in terms of someone who does stuff and gets it done and fails and keeps going on big, big, big things, including rockets, man, hats off. There’s no, put aside the, you know, the, the, the, the behavior and what have you. I mean, there’s, He’s an extraordinary person that’s reshaping the world. I mean, the rocket thing for me is always interesting for a couple of areas is that, you know, I mean, I find the irony is never lost on me that, uh, on the one hand he’s, uh, saving the world with electric cars and then just, you know, doing the equivalent of, you know, a million miles with, uh, with, you know, three seconds of rocket launch.

[00:53:57] Steve: I find that ironic. But I also, [00:54:00] um, I mean, he said, here we are multi, multi planetary, um, potential is what he said about this. You know, we can, we can now go to outer space with this one. And I’ll just come back to the whole Bill Maher bit where it’s like, well, we live here and there’s bigger problems here.

[00:54:15] Steve: So I’m sort of stuck in two minds on this. On the one hand, I know that this type of innovation creates. New knowledge that can be utilized here as NASA did. But on the flip side, I’m like, it’s a misallocation of resources, um, when there’s serious problems with, you know, people on earth who will never be able to go on a rocket in any case.

[00:54:34] Steve: And, uh, certainly that money could be better allocated to more

[00:54:38] Steve: needy things and pressing

[00:54:39] Steve: issues on earth. That’s kind of where I’m at. I’m sort

[00:54:41] Steve: of in this quandary.

[00:54:44] Cameron: Yeah,

[00:54:44] Cameron: well, I don’t think Elon cares about your moral dilemma. Um,

[00:54:48] Steve: Yeah, he’s, Elon,

[00:54:50] Cameron: he’s

[00:54:51] Steve: hello, yeah, yeah, I actually, did you hear that? Yeah, okay, cut

[00:54:54] Steve: it

[00:54:54] Steve: out,

[00:54:54] Steve: no.

[00:54:54] Cameron: he cares what Bill Maher thinks either. I mean, Elon’s made

[00:54:56] Cameron: a decision, rightly or

[00:54:59] Steve: He did a whole bit on it [00:55:00] once for seven minutes, it

[00:55:00] Steve: was hilarious, the Bill Maher bit, but anyway.

[00:55:02] Cameron: Elon’s made a decision that, uh, it’s important for the, protection of the, survival of the species, that we become multi planetary before AI has a chance to take over and wipe us all out, or climate change wipes us all out, or nuclear war wipes us all out, or whatever. So he’s on, he’s

[00:55:21] Cameron: racing the clock to try and give us the ability to at least get some of us off the planet onto another planet before we destroy

[00:55:29] Cameron: everything

[00:55:29] Steve: going to be incredible. I mean, I’ve heard a lot of podcasts with,

[00:55:32] Steve: you know, NASA scientists saying that even if you get there, it’s going to

[00:55:36] Steve: be

[00:55:36] Steve: really, really hard. But, uh, I mean, I guess we’ll see.

[00:55:40] Steve: Yeah.

[00:55:40] Cameron: it, part of

[00:55:41] Cameron: it, right? That’s why we need electric cars

[00:55:43] Cameron: and

[00:55:44] Steve: Yeah.

[00:55:45] Cameron: tunnel boring machines on the planet, uh, to build stuff under the, under

[00:55:50] Cameron: the Mars, surface of

[00:55:51] Cameron: Mars, whatever it is.

[00:55:52] Cameron: Anyway,

[00:55:53] Steve: I’m not, look, I’m not, I’m not all that bullish on

[00:55:55] Steve: multi planetary species.

[00:55:57] Cameron: for people who didn’t

[00:55:58] Cameron: see

[00:55:58] Steve: think, I don’t think, I don’t think, it’s, [00:56:00] I don’t

[00:56:00] Steve: think it’s plausible.

[00:56:01] Cameron: for people who didn't see it, uh, they did the second test launch of his BFR, big fucking rocket, uh, last week, on November 18th. The first one, which happened a few months earlier, blew up after four minutes, had a rapid…

[00:56:17] Cameron: Um, what do they call it? Uh, a rapid disassembly, a rapid unscheduled

[00:56:22] Cameron: disassembly after four minutes,

[00:56:24] Steve: Love that. It’s a nice verbiage.

[00:56:25] Cameron: about a third of its engines, these things have 33

[00:56:29] Cameron: Raptor engines on the bottom of them, about a third of its engines failed.

[00:56:33] Cameron: This is the first one back in, I think, April. Um, this one, all of the engines fired successfully. It got up to its, um, staging velocity of about 5,600 kilometers per hour, the booster broke away from the main capsule successfully, then something went wrong and it, spectacularly, on camera, had a rapid [00:57:00] unscheduled disassembly.

[00:57:01] Cameron: Um, it blew up on camera. It was fabulous to watch, really, but it achieved everything they wanted this test to achieve. It got through staging, and it did so in glorious

[00:57:16] Cameron: style.

[00:57:17] Cameron: 33

[00:57:17] Steve: amazing, no doubt.

[00:57:19] Cameron: rocket engines all

[00:57:20] Cameron: blasting

[00:57:21] Cameron: at once, uh, taking off. It

[00:57:23] Cameron: was. Really, like, I was giddy watching this stuff, man, on the big screen.

[00:57:28] Cameron: It was just really

[00:57:31] Steve: looks extraordinary. I mean, the pictures of it look almost

[00:57:34] Steve: AI

[00:57:35] Steve: generated, don’t

[00:57:35] Steve: they?

[00:57:35] Cameron: They do. And, and, you know,

[00:57:36] Steve: that, with that sort

[00:57:37] Steve: of blue plasma kind

[00:57:38] Steve: of, you know, thing underneath, it looked crazy.

[00:57:41] Cameron: for anyone who hasn’t read Walter Isaacson’s biography on Elon, Elon just decided one day, I’m going to start a rocket company. Why the fuck not? Why can’t I do it? I’m going to start a

[00:57:50] Cameron: rocket company. And here we are, what, 20 or so years later, and this is what they're able to do. It's, it's really impressive for a single

[00:57:59] Steve: it’s, it’s [00:58:00] unbelievable. I don’t think anyone has ever had

[00:58:02] Steve: that many irons in that many fires that are big, big things.

[00:58:06] Steve: That’s the point. They’re big, big. So it’s, it’s, it’s, it is extraordinary even to be across that many businesses. I don’t know, how he does it.

[00:58:14] Steve: It’s, it’s, it, it is quite something.

[00:58:17] Steve: And, you know, as you know, I

[00:58:18] Steve: take my hat off regardless of, you know, my views of him anyway.

[00:58:23] Cameron: or not you like him or don’t like him,

[00:58:25] Steve: Yeah, yeah,

[00:58:26] Steve: exactly. Exactly. It’s got

[00:58:27] Cameron: You have to

[00:58:27] Cameron: admire his bravado

[00:58:30] Steve: Bravado is a great

[00:58:31] Cameron: and what he’s achieved. I mean, it’s his hasn’t not achieved nothing, man, with Tesla, SpaceX,

[00:58:37] Cameron: etc. Um, moving right along, Steve, did you see the, um, Google DeepMind music

[00:58:46] Steve: Yeah, Lyria, Lyria, as in, you

[00:58:48] Steve: know, lyrics, lyrical. I really liked it, I actually saw that, um, I think two weeks ago, I saw something similar from Google that came out really great. You know what I loved about it? I love the [00:59:00] idea that, And again, it’s a little bit like what’s happening with code. We don’t have to be a coder to create code.

[00:59:04] Steve: We don’t have to be a musician now to create music. And we’re not talking about quantizing on, um, one of the, uh, music pieces of software. We’re talking about humming a tune that you’ve got in your head to create a heavy metal guitar riff. And incidentally, there’s a, there’s a track that Goes around, you can see it on YouTube of Michael Jackson, how he put together the pieces of Beat It and Thriller.

[00:59:29] Steve: ’cause he famously could not play any instruments. And he used to do the drumbeat with his mouth, the harmony and even the bass lines do. And the, you know, the way he sang. He used to put the pieces together and then the musicians would create what he had in his head, which was an extraordinary skill back then.

[00:59:46] Steve: But, but now we can all be just like Michael Jackson… though there's some ways we may not want to be like Michael Jackson. But I really liked it. I, I liked it that…

[00:59:57] Steve: People can, you know, command based music is a [01:00:00] really interesting and

[01:00:00] Steve: cool idea.

[01:00:02] Cameron: Yeah. Well, Quincy Jones put it all together for him, um, based on his singing. But yeah, so Google announced this; they haven't actually made it accessible, like, outside of, I think, a very small circle, so I haven't been able to play with it. But, um, they have these AI-based music tools, and, as you said, you can just sing

[01:00:22] Cameron: a line that you want, and then it will turn it into an orchestral track, a rock track, a sax solo. Uh, it's really… like, the videos are massively impressive, whether or not

[01:00:36] Cameron: it’s that good in reality. We’ll have to wait and see,

[01:00:39] Steve: we don’t know.

[01:00:40] Cameron: but, well, DeepMind doesn’t fuck

[01:00:42] Cameron: around. I

[01:00:43] Cameron: mean,

[01:00:43] Steve: No, and, and, and command based music as well. It’s a little bit like what happens with ChatGPT, you command based imagery is now it’s command based

[01:00:51] Steve: music. You can ask it to create a song in a certain style, whatever, and it’ll create something that’s never been created before. That, that’s the interesting thing because our [01:01:00] commands and our ideas become something new.

[01:01:03] Steve: That, you know, the

[01:01:04] Steve: generative idea of this, you know,

[01:01:05] Steve: generative music, uh, is really cool.

[01:01:09] Cameron: Yeah. And I wonder,

[01:01:11] Cameron: you know, what the implications of this are. Obviously musicians or people that, you know, have a talent in one area of music, but can’t necessarily play an instrument or all of the instruments are now going to be able to compose music much more efficiently, cheaply than they ever could before.

[01:01:31] Cameron: It’s a bit like people who can’t code can now code, uh, makes,

[01:01:34] Steve: Same with design, you know, you can do command-based design work that you wouldn't be able to do before. It's all of those things. It's basically what we saw with journalists and photographers with digital photography and citizen journalism: you'll still have the best ones bubble to the top, but it's the same thing again.

[01:01:49] Steve: And, and increasingly, whether it's going to be law or engineering or code or music, you know, uh, software's coming, and things that people couldn't do before, all of [01:02:00] a sudden they can do. So there is going to

[01:02:01] Steve: be a creative explosion

[01:02:02] Steve: in many realms

[01:02:04] Steve: in the

[01:02:04] Cameron: I think so. Uh, how you going for time,

[01:02:08] Cameron: Steve?

[01:02:09] Steve: I’m all right. I’m fine, man. I’m, I’m, I’m not going anywhere today.

[01:02:13] Cameron: That’s nice. Um,

[01:02:16] Steve: I have, yeah.

[01:02:18] Cameron: uh, what are we? We’re

[01:02:19] Cameron: just over an hour. There’s

[01:02:21] Steve: Well, yeah, chip wars, I, I, I’ve really, I think that’s really interesting. I mean, yeah, oil wars and, and now

[01:02:28] Steve: chip wars. I do like the idea that more and more companies are looking at making their own chips and their own resources. I think it’s really important economically. It’s important from an anti competitive viewpoint.

[01:02:38] Steve: And I think it, it can stabilize. Things economically, if you have greater availability in, you know, different countries and different corporations getting access, I mean, so interesting that the chips have become so important. Um, you know, the gaming processing units. Uh, Reliance on NVIDIA, um, but, but Microsoft coming in with their own [01:03:00] Azure chips.

[01:03:00] Steve: When they announced that, you know, NVIDIA shares went down by 5 percent

[01:03:03] Cameron: Hmm.

[01:03:04] Steve: that same day, which was

[01:03:05] Steve: kind of interesting.

[01:03:07] Steve: Uh, but I think we’re going to see more of

[01:03:09] Steve: this.

[01:03:11] Cameron: Yeah, so Microsoft, at their Ignite conference last week, just before the whole, uh, OpenAI drama hit, Satya Nadella announced that they're gonna produce their own chips, the Maia 100, that they will use to power all of their AI stuff, Azure, Copilot, etc. Um, he said they're gonna start, um, training AI models and rolling them out into data centers early next year, reducing the company's reliance on NVIDIA GPUs.

[01:03:45] Cameron: On top of that, as I mentioned earlier, Sam Altman has also been flying around the world, trying to raise money for OpenAI to build their own chips, or for a company associated with them to build their own [01:04:00] chips as well. Uh, we know the Chinese are doing it. Uh, there is going to be this just massive explosion,

[01:04:08] Cameron: It looks like over the next few years of GPUs just flooding the market, becoming cheaper, becoming more accessible. Obviously some will be specialized to do certain tasks, but it’s just going to enable far more

[01:04:27] Cameron: supercomputing, uh, superintelligence, AI, uh, applications, uh, in the coming years.

[01:04:35] Steve: It’s funny. I just, that word compute just keeps coming up now. It’s like computing. Yeah. Well, computing power was such a big thing when machines were really expensive just in the early nineties and the megabytes worth of. Uh, compute were like really important for a long time. That was just like, ah, just everything gets, you know, just a little bit better every 18 months, the Moore’s law kind of thing.

[01:04:55] Steve: But now it’s like compute. Have we got access to the compute? You know, it’s become

[01:04:59] Steve: a [01:05:00] real terminology of importance during this AI revolution.

[01:05:04] Cameron: Microsoft’s also building another chip called the

[01:05:06] Cameron: Cobalt 100, which will be the most advanced ARM CPU on the market, to run Azure. These chips, um, will power the largest AI supercomputers in the world, and that requires quite a bit of advanced hardware, so Microsoft would rather not be beholden to NVIDIA any longer.

[01:05:27] Cameron: So, I don’t know man, it’s uh, it’s gonna be a crazy, crazy time. Speaking of China, Saw this story the other day, um, new open source AI model from China boasts twice the capacity of ChatGPT. The Yi series, did I pronounce that right, Y I, Steve, Yi, you’re the official

[01:05:49] Cameron: Chinese speaker of the

[01:05:50] Cameron: show.

[01:05:51] Steve: Yī?

[01:05:51] Steve: Yī.

[01:05:52] Cameron: Yī? The Yi series?

[01:05:55] Steve: Yī in Chinese means one. One, two…

[01:05:57] Cameron: Thank you. Yī, okay.

[01:05:58] Steve: …eight, nine, [01:06:00] ten.

[01:06:02] Cameron: Oh, thank you, Steve. Oh, sexy when you

[01:06:06] Cameron: do that.

[01:06:07] Cameron: got

[01:06:07] Steve: I know,

[01:06:07] Cameron: got me all hot and bothered there. The E Series model takes a giant leap over its American competitors, at least by some metrics according to this article in Decrypt. An artificial intelligence model developed in China is making waves on a number of fronts, including its open source nature and for its ability to handle up to 200, 000 Tokens of context vastly exceeding other popular models like Anthropx Claude, 100, 000 tokens, or OpenAI’s GPT 4 Turbo,

[01:06:38] Cameron: 128, 000 tokens.

[01:06:40] Cameron: That was a big deal only at Dev Day,

[01:06:43] Cameron: two weeks ago, when they

[01:06:44] Steve: the amount of tokens. Yeah. And then they’ve gone around it.

[01:06:47] Cameron: tokens. E is bigger than

[01:06:49] Cameron: that. Dubbed the E Series, Beijing Lingyi Wanwu Information Technology Company created this progressive generative chatbot, and it’s AI Lab [01:07:00] 01. AI. So, I’ve been saying for a long

[01:07:04] Cameron: time, I expect the Chinese to play a bigger and bigger role in AI.

[01:07:10] Cameron: Steve,

[01:07:11] Cameron: um,

[01:07:11] Steve: Yeah, I think they will too. It's interesting to see. I'd just like to know how the

[01:07:15] Steve: Chinese models learn, and whether they're, um, how nuanced they are and different, because especially in large language models, you know, language and culture are inextricably linked, and, and really, I think that will be interesting.

[01:07:29] Steve: And we’ve already seen that AIs have personalities, both with the image design generative AIs and some of the, um, text based ones and large language models, I think it’s going to be really interesting to see. Um, how they develop and the different personalities or styles or typologies based on what they learn culturally and how the models learn and the data that gets fed into it.

[01:07:51] Steve: You know, is it, is it the general internet in the same way that ChatGPT is trained on the internet or is it a more restricted one? I’m really interested in that. One thing I did learn though [01:08:00] was how tokens work. I mean, I’d, I’d heard it thrown around, but I looked into it and just for the listeners, it’s interesting that tokens could be a word, but might not be.

[01:08:10] Steve: Yeah, it’s, it’s really, um, chunks of context is probably, you know, the, the, the interesting way to think about it and just that idea that it can have that many tokens, which could be a word, a sentence, a paragraph or a phrase, um, it’s interesting how, the way we think tend to be in those little chunks as well.

[01:08:27] Steve: And I just thought the way that we think through the amount of tokens and what they are was, was just an

[01:08:33] Steve: interesting way to get an insight

[01:08:34] Steve: into how these models put the pieces together to create the patterns.

[01:08:38] Cameron: Now, I mentioned that this is an open-source model, and I actually drilled down into their website, went to their, um, commercial license application, uh, which says, under license restrictions, the licensor hereby grants you a non-exclusive, global, [01:09:00] non-transferable, non-sub-licensable, revocable, and royalty-free copyright license.

[01:09:08] Cameron: You must adhere to the following license restrictions. Basically, it has to comply with the laws and regulations of, um, other countries and regions, respect social ethics and moral standards, et cetera, et cetera. Can't use it for harming national security, promoting terrorism, extremism, inciting ethnic or racial hatred, discrimination, violence, or pornography, which blows my business model out of the water.

[01:09:34] Cameron: Spreading false, harmful information. You shall not use it for military or unlawful purposes, or in ways not allowed by laws and regulations, blah, blah, blah. Um, but essentially, aside from all the boilerplate that you'd expect, uh, it gives you a free license to run their AI models.

[01:09:56] Cameron: So, um, interesting [01:10:00] for the Chinese to come out with something that potentially is going to be a real contender, but it’d be interesting to see how. Businesses in the West feel about using Chinese, uh, artificial intelligence technologies or building their services or businesses on the top of that, whether we’ve been, uh, sufficiently terrified by Western corporations and the media and governments to think that everything that any company Company involved or coming out of China is going to, you know, steal our data, rape our wives and, uh, murder our babies.

[01:10:46] Cameron: Um, or if people will approach it like they approach TikTok in the West, which is:

[01:10:53] Cameron: Who cares? All the

[01:10:55] Cameron: American companies are stealing my shit. Facebook’s stealing my shit. Google’s stealing my [01:11:00] shit.

[01:11:00] Cameron: You’re like, what’s the

[01:11:01] Steve: I still find it, you know, just, I still find it really interesting that some of the things that we would die on those hills for how we just

[01:11:09] Steve: go, eh, it’s still really interesting to me socially. Things that were really, really important, you know, the reds under the bed, and, uh, you know, the whole idea that East Germany was surveilled by everyone, and here we are, just in this surveillance economy, just, eh.

[01:11:26] Steve: I don’t know whether the, whether we’re all asleep at the wheel, whether the tools are so fantastical that we’re

[01:11:32] Steve: just blinded by the lights, or whether

[01:11:34] Steve: it actually doesn’t matter. I don’t know which one of those it

[01:11:36] Steve: is, Cameron.

[01:11:36] Cameron: Yeah. Well, I think the Reds, I mean, I've, I've done a whole series of podcasts on the Red Scare in the U.S., and how it was all, uh, you know, a domestic political manipulation thing that was being driven by domestic political battles between the Democrats and the Republicans, and J. Edgar Hoover trying to keep his job, and all that kind of stuff.

[01:11:59] Cameron: It was [01:12:00] all, you know, it was all bullshit. The whole Red Scare in the U.S. in the 30s, 40s, 50s, it was all manufactured nonsense in the first

[01:12:07] Cameron: place. So yeah, it was all, you know, froth and bubble. A storm in a teacup over nothing, really.

[01:12:16] Cameron: So

[01:12:16] Steve: idea of, you know, a surveillance economy, I mean, that, that example aside, you know, the idea

[01:12:21] Steve: that we’re heavily surveilled, it hasn’t really, I don’t know how much it’s bitten us yet, or if it even

[01:12:28] Steve: will, but I am surprised that we

[01:12:30] Steve: just kind of just forged ahead. I am, I am, surprised.

[01:12:34] Cameron: Yeah, it’s interesting. Whenever I read any, um, surveys on the, uh, views of Chinese citizens about the amount of surveillance that they

[01:12:46] Cameron: have in their own country by their government, the overwhelming

[01:12:50] Cameron: feeling I get out of it is most of them are like, yeah, who cares?

[01:12:54] Cameron: Like we’re, you know,

[01:12:55] Steve: I don’t know, I don’t know if I buy

[01:12:57] Steve: that. Look, look, I don’t know,

[01:12:59] Steve: if they don’t

[01:12:59] Steve: care or [01:13:00] whether they’re not allowed to care

[01:13:01] Cameron: Well, that’s a typical Western response, but,

[01:13:04] Steve: course it is,

[01:13:05] Steve: but, but, but you, you can’t

[01:13:07] Steve: be a Chinese person who could, if you ask Vale to

[01:13:10] Steve: that extent, say anything, but you,

[01:13:13] Steve: you, you don’t care.

[01:13:14] Steve: So, can we even know

[01:13:16] Steve: the answer to that question critic?

[01:13:17] Cameron: Except when you see, uh, things like the Edelman Trust Survey that

[01:13:22] Cameron: gets done, and, you know, Chinese people are asked about how much they trust their government, or how much they like their government, uh, 90 percent of the Bye. People that Edelman survey in China say that they think the Chinese government’s doing a good job or they trust the government.

[01:13:40] Cameron: The numbers are way higher than any Western

[01:13:43] Cameron: nation. It always means that there’s 10 percent of people answering it is saying they don’t

[01:13:47] Cameron: trust the government and they don’t think they’re doing a good job. So I’m always like, well,

[01:13:51] Cameron: if 10

[01:13:52] Steve: Well, they’re in the gula now. They’re gone. You don’t see, they, they don’t exist. It’s not 10%, but what,

[01:13:57] Steve: happens is it never gets beyond

[01:13:59] Steve: 10 percent because they [01:14:00] all go missing the next year, Cameron. That’s what, they go on the next year,

[01:14:03] Cameron: well, there’s another 10 percent of people come in and go, eh,

[01:14:06] Cameron: yeah, what, happened to

[01:14:07] Cameron: Mr. She next door? No one knows.

[01:14:09] Steve: It’s gone. We don’t, we don’t know. No one saw him. Like that, that’s the point, right? And in Australia and democratic countries, of course,

[01:14:17] Steve: you’re going to have

[01:14:17] Steve: near 50 percent that always don’t trust the government because they’re on the other team.

[01:14:21] Cameron: Yes. But I think the other

[01:14:23] Cameron: thing that I get out of all of the studies I’ve read on China anyway, is that the majority of people over there just don’t really understand why Westerners are so obsessed with politics and government. They’re like, who cares? The government does what the government does. The government’s job is to run things.

[01:14:39] Cameron: We don’t care about politics. We don’t care about,

[01:14:42] Cameron: you know, this, that. They’re not, they’re not as invested in it as

[01:14:45] Cameron: we

[01:14:45] Steve: Yeah. Well, I mean, Yeah. I think that’s a really interesting one too. I think it’s easy to. I don’t know, I always tell people who are young and say I don’t really follow politics, I go, because you’re in a really fortunate situation. If you had bombs going off or you didn’t have access to

[01:14:58] Steve: education or healthcare, you’d [01:15:00] probably

[01:15:00] Steve: care a lot more.

[01:15:01] Steve: I don’t know. I don’t know, But I do find that, um,

[01:15:04] Cameron: But also we live in a society where we do

[01:15:06] Cameron: have the red team, blue team mentality that’s inculcated in us, US more than we do in Australia. But so we’re kind of for, and we have mandatory voting here. So we have to go out and vote at least every few years. Um, whereas in China, you know, you don’t really have to

[01:15:24] Cameron: worry

[01:15:25] Cameron: about it.

[01:15:25] Steve: I think that it’s, it’s, easy as well to, it has been an economic miracle

[01:15:29] Steve: and I think it’s pretty easy to like a government when you say, well, my granddad didn’t have enough food to eat and, you know, lived in a hut and now I’ve got an apartment in Shanghai and a car. And so. Look, if, if they get, um, and Shen’s, my wife’s father was, um, in, in politics in Singapore and they have a policy there where it’s like, you’ve got to make them a little bit wealthier, a little bit healthier, and a little bit smarter each year.

[01:15:53] Steve: If you do those three things, you’ll never have a revolution. But you wonder if, if China gets to this point where if the economic

[01:15:59] Steve: [01:16:00] miracle

[01:16:00] Steve: flattens or goes backwards, whether or not, you know, some things could…

[01:16:04] Cameron: I just, speaking of Singapore, I just read a book a month or two ago about Lee Kuan Yew and, um, his view on China. And, you know, how the Chinese, you know, sort of post, um,

[01:16:19] Cameron: uh, Deng Xiaoping, they've sort of used the Singaporean model, Lee Kuan Yew's model, as their

[01:16:27] Steve: A great model, Yeah.

[01:16:28] Cameron: you know, Lee Kuan Yew learned from China too.

[01:16:30] Cameron: I mean, the Chinese

[01:16:31] Cameron: model goes back

[01:16:33] Cameron: thousands of years. I’ve just been talking about China in my, um, Renaissance

[01:16:39] Cameron: podcast and, uh, like some of the innovations the Chinese

[01:16:43] Cameron: had

[01:16:45] Steve: Crazy.

[01:16:45] Cameron: thousands of years

[01:16:46] Cameron: ago, piped gas

[01:16:48] Steve: they invented all of the tech, yeah, pump gas

[01:16:50] Steve: heating,

[01:16:51] Steve: yeah, and the

[01:16:52] Steve: production line, all of those things, it

[01:16:53] Steve: wasn’t Uncle Henry, he didn’t do it, the terracotta

[01:16:56] Steve: soldiers were all, you know,

[01:16:57] Steve: built

[01:16:58] Steve: using all of that

[01:16:59] Cameron: My mum [01:17:00] just got back. My mum just spent two weeks in China. She just got back.

[01:17:02] Cameron: like

[01:17:03] Cameron: a week ago.

[01:17:03] Steve: By the way, I hope Shen never

[01:17:05] Steve: listens to this, cause she, cause Shen and I, Shen keeps saying we invented

[01:17:08] Steve: everything, and I’m like, it just drives me

[01:17:09] Steve: bonkers, you

[01:17:10] Cameron: ha ha ha ha ha. Yeah, no, but,

[01:17:12] Steve: pizza too, did you? You know,

[01:17:14] Cameron: my mum, you know, just, yeah, spent two

[01:17:16] Cameron: weeks over there, climbed the Great Wall, saw the Terracotta Warriors, went to the Shaolin Temple, which I’m very jealous of. Um, but, uh, yeah, she was just blown away by… How

[01:17:26] Cameron: advanced everything she saw was just the, the, she went on the bullet train and she did, you know,

[01:17:33] Cameron: just all the buildings and

[01:17:35] Steve: a, they run a, they run a good shop. They run, they run a very efficient

[01:17:38] Steve: shop. Let’s put it that way. Well, but when, when you can’t be dissenting, you

[01:17:42] Steve: can run a good shop like that. But I have seen a few documentaries on Economics Explained and a few other, um, video podcast things on YouTube. I’ll watch some of those economic ones where they have, uh, You know, ghost towns in China where they just build highways that go nowhere and there’s empty high rise buildings that were built 10 years ago and now they’re [01:18:00] pulling them down and there’s this undercurrent at the moment with this building cycle in China where they’ve just been building stuff.

[01:18:08] Steve: Anyway, I don’t know how, how true that is. I don’t, I don’t

[01:18:11] Steve: know.

[01:18:12] Cameron: and and every country has missteps and makes mistakes and goes two steps forwards, one step backwards. I mean, that’s not unique to China. Anyway, last story, Steve, before we wrap up, I wanted to cover, um, last time we did an episode. OpenAI just had a DDoS attack. By the way, they were down yesterday too, uh, Monday.

[01:18:32] Cameron: They were down Monday for half a day. The whole thing was down. This was during the whole kerfuffle, which I expect has something to do with it. But it's interesting, right? Remember we talked about it. So they had Dev Day, where they announced all the new services. Then the site was down when they were under a DDoS attack, then

[01:18:51] Cameron: Sam gets fired, comes back, doesn’t come back, comes back, then their site’s down, before that, the site’s down for half a day, um, [01:19:00] and, you know, obviously when we were talking about the theories for why he got fired, the one that I was, the two that I was, um, most excited about was one, um, future Sam, Sam came back from the future and told

[01:19:14] Cameron: the board that they had to fire him because, uh, GPT, you know, goes all, um, uh, Terminator

[01:19:22] Cameron: on the human

[01:19:23] Steve: I like that a lot. I like that. I’m

[01:19:25] Steve: back from the future. What do you mean you’re back? I’m back

[01:19:28] Steve: again, Marty. Marty’s back. I’m back

[01:19:31] Cameron: I’m back again. And then the other one was that, uh, GPT had

[01:19:36] Cameron: achieved sentience and actually was the one that forced the board to fire Sam. It was like, no, no, you don’t need a CEO anymore. I’m the new CEO.

[01:19:44] Cameron: I’m running it. You know?

[01:19:45] Steve: the CEO. I’m the CEO, I’m the CEO, bitch.

[01:19:48] Cameron: Yeah. Yeah. Yeah. Science. Oh, I’ve got a clip for that.

[01:19:51] Cameron: Where’s, where’s my

[01:19:52] Cameron: clip machine?

[01:19:52] Steve: bitches.

[01:19:54] Cameron: Yeah. Science, bitch.

[01:19:58] Cameron: Um, [01:20:00] so anyway, when we talked about the DDoS attack, you asked me who I thought was responsible. I said probably some sort of Anonymous-type group, but no one had come out and taken credit for it, which I found interesting, because normally they do that straight away.

[01:20:14] Cameron: Well, I think a day or two later, I saw this story while OpenAI has not yet commented on who was behind the attacks, hacker group Anonymous Sudan claimed responsibility for the DDoS attacks via its telegram. Channel. It cited OpenAI’s cooperation with Israel as one of the motives and claimed the AI is also used to develop weapons.

[01:20:36] Cameron: That AI is now being used in the development of weapons by intelligence agencies like Mossad, and that Israel uses this technology to oppress the Palestinians. So that's the last I heard. Anonymous Sudan, also known as Storm-1359, was founded in January and is primarily motivated by religious and political causes, focusing on [01:21:00] launching cyber attacks against any country that opposes Sudan. There you go.

[01:21:06] Steve: Well, I think we’ve covered a lot of ground today, Mr. Reilly.

[01:21:09] Cameron: You, you want to do your futurist forecast before we go.

[01:21:12] Steve: Small futurist forecast I don’t know how this will happen, but I think Microsoft will somehow take over OpenAI. Despite their current non for profit status, I think either Newboard will reconfigure the company in some way, and split off the commercial side of it, or split off the commercial side and be two separate entities.

[01:21:32] Steve: I think that’s inevitable. Otherwise, Microsoft would have taken its opportunity to do its own AI when it had the chance to take SAM and

[01:21:38] Steve: Cohort.

[01:21:39] Cameron: Well, I don’t think that was a real chance. I mean, I don’t think that was a real play there. I don’t think anyone wanted that. I don’t think Sam wanted it. I don’t think Satya wanted it. I think what

[01:21:49] Cameron: they wanted was for

[01:21:50] Steve: Well, you would have had to start again. You would have, you would have lost a lot of time and, and, and they could, yeah, and they could always.

[01:21:57] Steve: Reframe it and say, dude, you’ve got to remember this is all our [01:22:00] infrastructure. And that’s what he said in that interview. He said, it’s all our infrastructure. It’s all in there and it’s in everyone’s interest to keep this thing going.

[01:22:06] Steve: Otherwise, I’ve got a real problem because you’re using my infrastructure and I’m now not happy with what you’ve got. So yeah, how about you get that reinstall of, get the reinstall, the re up of Altman. I did see one great heading, it said, um, open AI, presses, control, Altman,

[01:22:21] Steve: delete.

[01:22:23] Steve: And that was my fave.

[01:22:25] Cameron: Yeah. Yes. Alright. Well, I don’t know about that. I, I hope that’s not true for reasons I mentioned in our last episode. I don’t think, uh, Microsoft is the right place to, or the right company to be in control of our AI future. But, um, hey. What is it? Who cares what I think? Well, Steve, let’s see what happens in the next week before our next episode.

[01:22:55] Cameron: Uh, what goes tits up, uh, over the course of the next week. Anything could [01:23:00] happen and it probably will. That should be the motto for this show. Anything can happen and it probably will.

[01:23:07] Steve: These here are crazy times. BoomCrashOpera, Circa 89,

[01:23:11] Steve: YouTubeCam,

[01:23:15] Cameron: Thank you, Steve. Have a good week, buddy.

[01:23:17] Steve: see ya mate.