Select Page

In this episode, Cameron and Steve dive into the rapidly evolving world of AI, discussing the latest advancements and their societal implications. They explore new AI voice features, the potential dangers and benefits of AI companions and agreeable AI personalities, and the philosophical debate around AI sentience and relationships. The conversation touches on AI’s role in business generation, the power of new models like OpenAI’s GPT-4o and Google’s Gemini 2.5, and the ongoing copyright debate surrounding AI training data. They also get into the complexities of how Large Language Models (LLMs) like Anthropic’s Claude actually “think,” the expansion of AI into hardware by companies like LG, Apple’s perceived lag in the AI race, and the future of AI integration in everyday tools like ebook readers. The discussion extends to advancements in open-source robotics, citing Nvidia’s initiatives, and contrasts technological progress and STEM education focus between China (highlighting Huawei) and the US. Finally, they touch on the intriguing and potentially controversial “Network State” concept championed by figures associated with Peter Thiel and Andreessen Horowitz, exploring the idea of tech-driven, independent city-states.

futuristicpod.com

FULL TRANSCRIPT

FUT 38 Audio

[00:00:00]     

Cameron: So that was an official new voice from chat, GPT, which came out today called Monday. And it’s like a depressed goth girl or something, whatever. , which is now my official favorite voice. I don’t know, , if you found this, welcome back. This is futuristic episode 38, by the way, Steve Sammartino, , I dunno if you’ve found this, but, uh, I’ve been using advanced voice with GPT lately, and the voices have sounded increasingly excitable.

I was having a conversation in the car on the way to Kung Fu with GPT about Trump and Greenland and rare earth minerals. And I was like, I was saying, so hold on. Greenland is run by Denmark and Denmark’s and NATO country. So if Trump invades Greenland, [00:01:00] does NATO have to, does that, uh, is that Invoke Article five under the NATO treaty and then NATO needs to attack the United States and GP t’s like, yes, that would happen.

They probably would. And it would have to be, and it was all very excitable and it was, and I was like, can you sound less excited? And it would like, oh, okay, sorry. I’ll bring the tone down a bit. And a minute later it would be talking like this again. It would all be very excitable. Even Fox was sitting in the backseat.

He is like, can you just calm down a minute? Anyway, but I like this new depressed voice. That’s more my style.

Steve: call it apathy,

Cameron: I.

Steve: and I don’t think enough AI in modern society are apathetic,

Cameron: It reminds me of, was it Marvin in the Hitchhiker’s Guide to the Galaxy was the AI robot. I was like, I am so depressed, brain the size of a planet, and they asked me to pick up a piece of paper. I am so depressed.

Steve: Well, I think that the AI should be able to seamlessly switch [00:02:00] between. Levels of animation and emotion, right. Based on the context of the chat, because it understands it verbally with the language, it should be able to translate that in the audio sense. You would, one would think regardless of the voice that you choose, I.

Cameron: Yeah, and I was listening to, uh, an interview with Ezra Klein yesterday with Jonathan het, and Ezra Klein was talking about the fact that he’s concerned about the fact that a generation of kids are gonna be growing up with AI assistance that are completely agreeable with everything that they say, and that that’s not a good thing.

In the same way that social media hasn’t been a good thing for kids, AI that just agrees with them all the time to make them feel good is not gonna be a good thing. I was talking to Chrissy about it yesterday and I was saying that I expect when we get fully realized AI virtual [00:03:00] assistants that are on the devices that we give to our kids, we will have parental controls where we will be able to set up the AI personality that we want our children to interact with.

That say, listen, your job isn’t to just agree. Your job is to be. A caretaker, an educator, is to push back if they say something dangerous or stupid or that could be, um, referencing self harm or could be negative for their psychological or emotional health, you are to act as a therapist slash parental advisor slash tutor slash whatever.

Adults though, will probably get to choose the AI personality that they want, and I’m already telling GPT don’t agree with in my customer instructions. Don’t agree with me on everything. If I say something and it’s factually incorrect, or well, you think my interpretation of the facts is incorrect, I want you [00:04:00] to tell me that’s your job.

Push back, argue with me. You know, give me something to think about. But Chrissy said, and she’s probably right, most people won’t. Most people will just choose the AI personality type that just agrees with them all the time. ’cause that’s what they want is just validation that their ideas and beliefs are true.

What do you think?

Steve: I think the most dangerous. tool in the world right now, which builds on this is AI girlfriends. They are an absolute social disaster in the making. An imaginary girlfriend that you talk to every day that agrees with everything you say, think learns from you, has the same business model, is want you to keep coming back, is gonna tell a young teenage boy everything he wants to hear. It’ll eventually be a soft robot that he gets delivered from Amazon and he develops a relationship with. This is not good. Falling in

Cameron: Is it worse [00:05:00] than,

Steve: it’s ter,

Cameron: is it worse than having, is it worse than having incel running around with AR fifteens in the us?

Steve: it’s the same thing with a different product. Right. It’s

Cameron: Yeah, but

Steve: who don’t have real social interactions. An incel with an AR 15 or an incel with an AI humanoid robot, they’re the same thing, which is we don’t have real social interactions of people that disagree with us, that we learn social norms, that we interact, we give and take.

It’s the same thing, and they

Cameron: well, yeah, except they’re not going to.

Steve: a bunch of shot up people in,

Cameron: Well,

Steve: where I can go and buy a gun in Walmart.

Cameron: no, but look, I see the opportunity for problems, but I also know that loneliness is a huge issue in modern society.

Steve: so that doesn’t solve loneliness, Cameron. It doesn’t

Cameron: I don’t, and I,

Steve: it.

Cameron: I dunno that that’s true. I think, uh, you, you have been, you know, a big advocate [00:06:00] of the idea that if an AI seems human, seems conscious, seems to be sentient, then for all intents and purposes, it is those things

Steve: Yes.

Cameron: I agree with.

Steve: Yes.

Cameron: therefore, having a relationship with a seemingly sentient ai biological organism, i, it, it’s not exactly the same as having it with a human, but it’s, it’s, it’s maybe the next best thing, or maybe it’s an equitable thing.

Steve: Okay. You are right and I’m right.

Cameron: I.

Steve: problem, circling back to what I first said, is having one that agrees with everything you said and tells you what you want to hear isn’t a relationship with a sentient thing. We’re

Cameron: Look, that’s normally how I pick my co-hosts, Steve, except for you. That’s normally,

Steve: No, no,

Cameron: and Tony.

Steve: the point is, right. No, the point is, is that if it were a sentient AI that had give and take and taught the [00:07:00] human side of the relationship, Hey, that’s not how you treat me. And you gotta, no, you gotta have more open mind than that. Wait a minute, I’m not just gonna do it. Like, if it, if it became like, like a real relationship, then that’s good.

The point is, how is the algorithm trained? What are the incentives of the AI girlfriends that are

Cameron: Hmm.

Steve: And I imagine the incentives are gonna be as perverse as they are for Google and Facebook in the attention

Cameron: it makes.

Steve: back and subscribing. Then it’s more likely to give you exactly what you want, which is the same of your algorithms feed you more and more of And that’s the problem. If it were sentient and reasonable and rational and disagreeable and all of those things that we get in normal relationships, then it would be good. But I fear that it won’t be that.

Cameron: Hi Steve. I’m going to continue our dirty Talk session, but first I want you to listen to this ad. Did you know that Squarespace will help you make a website

Steve: Let’s get back.

Cameron: for your, for your business?

Steve: Remember, that fetish that we discussed? [00:08:00] Remember that? Do you remember what I liked? Do you remember? Can you show me? Can you show me? Send me some pics. Send me some pics

Cameron: Look, if it makes people feel good and they’re not hurting anyone, what does it matter?

Steve: the point is if it makes them feel good, that’s fine. And if

Cameron: I.

Steve: hurt anyone, that’s fine. But I fear that the thing that will make them feel good will be getting everything you want, all the chocolate, all the fantasies, agree with me, do everything I want, and then you get, it gets into a circle of darkness.

You just end up circling the drain. It’s a race to the bottom of extracting the proclivities of young. Teenage males, which unless kept in check, might not be all that positive for society. I’m just, I’ve been a teenager. I, you know, you’ve been one. Cameron, let’s, let’s be real here. Giving teenage boys exactly what they want might not be ideal for society.

I’m just, just a guess.

Cameron: I am still a [00:09:00] teenager. I just, my body got older.

Steve: Yeah, same.

Cameron: Um, so, uh, uh, let’s circle back to the news. Um, so, well, no, before we do that interesting things, tell me about interesting futuristicy things you’ve done since we last caught up. Stevie,

Steve: I revisited. you, Kami. circled back to one of the original AI AI ideas, which was going around where you give AI a budget of $500 and say, generate 10 business ideas. I did it on. Four of the major, uh, large language models. All of their ideas were incredibly similar. And they had things like chatbot agency, automated trading, digital products, and Etsy domain flipping, AI stock photos, those kind of things.

AI stock photos is a new one because they’re much better at that now. and I came to a conclusion. And the conclusion [00:10:00] that every single one of these ideas where AI becomes an employee out entrepreneurially generating money, all of them required advertising on big tech, which I just thought was this interesting circle where it all came back and it said, and you’ve gotta allocate out of your $500, $150 worth of advertising on one of the big tech channels to get attention.

Which just ensconced me further, even with these emancipating tools of AI that can do everything. But you gotta come and find people in the attention economy on our tools that we happen to also own, in addition to the ai, whether it was on Amazon or Google or, or Facebook or any of ’em. I’m like, man, we’re back where we started, brother.

Cameron: Yeah. Right. Well, look, I still believe there is a, there is a potential future where that isn’t the case, but you can rest assured that the people that own the online advertising platforms and also have an interest in [00:11:00] Theis will be trying to create a future where those things are tightly coupled.

Steve: Definitely.

Cameron: Well, I used GT’s new four oh image generation model, which we’ll talk about in a minute.

Um, one of the first things I did with it was to make a comic. Um, something that I’ve wanted to do for years is a, a comic series about my journey with my guru Bob who passed away recently, and about how I met him and, and what he taught me and how it helped me, but do like a very, like a light sort of comic approach to it.

Now, we all know that, uh, image generators have really struggled with words since they were, they first came out, we, you know, whatever it was 18 months ago. Ideogram, the one I’ve been using for the last couple of months, the most, uh, does a pretty good job of words, but you could give it one or two words or three [00:12:00] words maybe, and it would do a pretty good job of that.

But I tested this thing with GPT where I said, I want to do a comic. I wanted to have, you know, between three to five panels, um. And I told it, the storyline, I created basically the, the words and it did comics that were flawless. One shot, two shot if I wanted to edit something. Um, really great. And I did a series of five comics that tell the progression of the story.

They kept the character ref, the character images the same, like the same character for me. And Bob looked the same from comic to comic. They did word bubbles. They did the words right. It was. Amazing. Like to be able to just say, make me this thing, and it just did it. So, um, I played around with that. We’ll talk about what happened in a second, but before I move on, I [00:13:00] want to do a shout out to Pete Hewitt from the uk listener of our show.

Been a listener of some of my other shows for many years. Actually had a nice lunch with Pete when he was in the country a couple of years ago. Um, Pete pointed out that I have been conflating on this here podcast over the last year, the idea of cold fusion with net positive fusion tomax, which are really hot fusion.

But in my brain, I, uh, reversed the polarity and because it was net positive fusion, I was calling it cold fusion, which is a completely different thing and probably doesn’t exist. So. Apologies to everyone who thought I was talking about cold fusion. I was actually talking about hot fusion, but net positive fusion.

Uh, there you go. So thank you to [00:14:00] Pete for calling me outta my, uh, bullshit, not deliberate bullshit, but my brain. And by the way, I found out in the last week or so that I’m autistic, so that is now my get outta jail free card for everything.

Steve: Kevin?

Cameron: Yeah.

Steve: So

Cameron: Well it was Chrissy.

Steve: time,

Cameron: Yeah,

Steve: Pete, by the way, Pete, look, I didn’t wanna say anything, Pete, but I was thinking the same thing because late at night I just, I got out one of the old physics textbooks from my undergraduate, uh, science degree. Really? We really was perusing some of the pages there. Well, I’m glad Pete pointed it out and thank you, Pete, because hey,

Cameron: that’s your job is to call me on my bullshit if I get stuff wrong. Steve

Steve: did it.

Cameron: did.

Steve: my pay grade physics.

Cameron: All right,

Steve: um, thanks for

Cameron: well there you go.

Steve: for tuning in.

Cameron: And as I always say on my podcasts, um, you know, I’m usually right, but if I ever get something wrong, I want to be told that I’m wrong. ’cause I [00:15:00] don’t give a fuck if I’ve

Steve: You don’t want that puppy. No way.

Cameron: No, no, no. I don’t want if be talking bullshit.

So anyway, thank you to Pete. Let’s talk about the four oh image model. So, uh, open AI came out with this coincidentally on uh, roughly the same day the Deep seek released the new version of their V three model. That was a killer. And Google released the new version of Gemini 2.5, which is a killer. Also happened to be the day that Open AI released their new four oh image generation model.

And it was an absolute killer for about 24 hours, and then it became next to useless. Perhaps be? Well, look, I have a couple of theories. One is that they nerfed it, um, because what they tend to do on a launch day, I think particularly if they’re competing with other AI launches, they want to get all of the media hype is they [00:16:00] allocate a massive amount of compute to it for the launch day,

Steve: Yep.

Cameron: which is costing them gajillions of dollars.

And then as soon as they get a good 24 hours of massive hype, they downscale the compute to save money. But simultaneously, and this is probably the other truth, is that 24 hours of hype brings in millions of new users that play with it. And so it just gets hammered. Um, Sam did tweet just after they launched this thing.

The chat GPT launch 26 months ago was one of the craziest viral moments I’d ever seen, and we added 1 million users in five days. We just added 1 million users in the last hour. So

Steve: Wow.

Cameron: I accept and acknowledge, uh, that. Uh, the sort of [00:17:00] scaling that they’re dealing with is insane and is an insane engineering problem to have to handle regardless of how much money you have.

It’s about people, it’s about data centers and compute and chip sets and cooling and all of the real world hard engineering issues that go into scaling something like this. I think it’s a combination of that and they nerf it for various reasons. Everyone was doing Studio Ghibli versions of everything and they kind of nerfed that you could do pictures, Ghibli versions of your family initially.

Then I tried to do

Steve: get the win on this? Why? Why Ghibli?

Cameron: ’cause nerds love Ghibli.

Steve: through the roof. I mean, yeah, it was a bit niche, but now it’s like my feed was just filled with Ghibli.

Cameron: Well, all nerds love Studio Ghibli and for good reason. I mean, the Studio Ghibli films are absolute [00:18:00] masterpieces, absolute classics.

Steve: But are

Cameron: Somebody

Steve: Are they really masterpieces?

Cameron: Oh, are you, are you a, are you a Ghibli skeptic?

Steve: I,

Cameron: Oh my God.

Steve: like now

Cameron: Have you seen, have you seen Ghibli films? What Gib films have you seen?

Steve: I’m a Ghibli skeptic. Okay. Let me just tell

Cameron: Have you seen the Ghibli films, is what I’m asking you?

Steve: and I think I’ve had all too much Ghibli in

Cameron: Oh,

Steve: couple of

Cameron: no, no, no, no.

Steve: send me a Ghibli

Cameron: You haven’t sat down and watched ’em with your kids.

Steve: if you, if you insist and report

Cameron: Oh my God. Sit your kids down and watch The Castle in the Sky or, or Princess Monki or any one of them. Oh my God. They are visual and storytelling and musical. Absolute masterpieces. Every one. Definitely.

Steve: has to be a winner when something new has a new application, right? There’s always a winner every single time. It’s like [00:19:00] like a gravitational force where the new technology gets pushed into something and it has to have like it has to have its victory winner. it was a plant by open ai and they’ve got it shares

Cameron: The

Steve: Ghibli or

Cameron: chairs in Studio Ghibli,

Steve: I’m telling you now, the whole thing’s a

Cameron: Miyazaki and Sam Altman have a thing. Yeah. Okay.

Steve: I’ll

Cameron: Yeah, yeah.

Steve: he’s got

Cameron: You need to get off of Q Anon,

Steve: with Ghibli’s in the back and that’s where he is going with the DVD player and a little battery

Cameron: his jansport backpack and he is taken off. Um, oh, well we’re gonna talk about Praxis when we finish this thing. But anyway, they nerfed it. Um, you couldn’t do pictures of your kids after a day or two. I tried to do a, an anime version of Fox.

Steve: skepticism

Cameron: Anyway, um, let’s talk about Gemini. 2.5 came out. The thing about Gemini is it’s free

Steve: Yep.

Cameron: it has a context window that’s like 2 million tokens or something. It’s an [00:20:00] insanely large context window compared to the others. And it’s beating all of the benchmarks, uh, um, as. By the way is the latest version of Deep Seek V three.

I mean, depending on what day you look at it, one of these is beating all of the benchmarks, but

Steve: is cool.

Cameron: the best I’ve played with Gemini, uh, with some coding stuff, and it is pretty good. Sometimes sucks at other times like Claude, all these kind of things. But the best post I’ve seen about Gemini 2.5 is a guy on the singularity subreddit said, um, I’ve gotta share this, because it was seriously cool.

I’ve got an old novel. I wrote years ago, and I fed the whole thing to Gemini 2.5 Pro, the new version that can handle a massive amount of text, like my entire book at once, and basically said, write new chapters. Didn’t really expect much. Maybe some weird fan fictiony stuff, but wow, because it could actually process the whole original story.

It cranked out a [00:21:00] whole new sequel that followed on, like it remembered the characters and plot points, and kept things going in a way that mostly made sense, and it captured the characters and their personalities extremely well. Then I took that AI written sequel text, threw it into 11 labs, picked a voice, and listened to it like an audio book last night, hearing a totally new story said in my world, voiced out loud.

Honestly, it was awesome. Kind of freaky how well it worked, but mostly just really cool to see what the AI came up with. I’ve been talking about this for the last couple of years, but I do believe we’re headed into a space where I will be able to upload a Dune novel or a James Bond novel or a film and say, give me a new one.

And it will give me a new one. Custom built, [00:22:00] custom designed for me. I’m the only person that’s ever gonna watch it and or my family sitting around, gimme a new Quentin Tarantino film, boom, here it is. Watch it. Enjoy it in the style of Quentin Tarantino. And I know people lose their shit over, oh my fucking trademarks and my copyright and my talent and my this, and I’m still arguing against those people.

Training on your. Drawing, training on your film, training on your album, training on your artwork, and then producing something brand new is not, I believe, a breach of your copyright or a breach of your intellectual property, because that’s how humans have always learned everything is we learn from watching other people do it, then we do it ourselves.

If it does a complete replica of your book or your film or your album, note for note, song for song, word for word, shot for shot, [00:23:00] then you’ve got a case. If it’s just learn from what you do and does something similar to that, but different, that’s, that’s not property theft. That’s just how creativity has always worked.

I’m sorry if you don’t like it, but that’s just how it works.

Steve: as the great Jim Roh said, when you get your own planet, you can redesign it how you please. But this is the world we happen to be in. And I totally agree with you, Cameron, and I’m, I’m actually shifting more towards this. I know that the New York Times still has their court case and they’re suing open ai and there’s multitude of, of these cases, but. Your example, and we should call it the Nike example. If I’ve got a factory in Guango and I’m making a pair of Air Force ones that look exactly the same and come in the packaging, then yes, that is copyright infringement and design infringement. But every organic being on earth, basically we copy, learn and adapt. Even our code, our own DNA is a copy, learn and adapt of [00:24:00] our parents. Uh, we’ve always done this before. We’ve read other people’s works and had writing styles, bands and musicians. Everyone has done it and it’s, yes, it’s at scale now. It is. And you know what? Probably a good thing, the key is this stuff needs to be open source so that I don’t have to pay Billy blogs to create the new Tarantino movie.

I can do it on a deep, deep seek kind of AI that I host on my own client or in the cloud, and I can make what I want. If we have that, then it’s totally emancipating and I would remind listeners. And I did this yesterday. I showed my son, uh, one of the great TED Talks and they’re few and far between these days, but it was, everything’s a remix

Cameron: Mm-hmm.

Steve: Kirby.

I forget his last name. But it was brilliant and it went through so many of the songs and ideas and lyrics and how everything was adapted. And you know what? That’s good ’cause we want interpretations and four or five interpretations down the line, you’ve got something totally [00:25:00] different. And I always thought that sampling was cool and, and reinterpreting things. And I feel like we’ve got the whole kind of trying to put fences around things for corporate interests and it’s bullshit.

Cameron: Yeah, look, I understand that people are upset. Uh, Kirby Ferguson, by the way, was the, everything is a remix.

Steve: call him the frg.

Cameron: Fergie

Steve: Yeah.

Cameron: Fgo.

Steve: Fgo. Shout out Togo.

Cameron: I know he is a big fan of the show. Uh, like I know that people are upset. I get it. You, you and I are authors.

Steve: Yep.

Cameron: Um, I, I, I’ve made a film. I, I’ve done thousands and millions of podcasts. Uh, and all of that’s gonna go into the mix.

Um, I.

Steve: way. ’cause I’ve looked up your stuff and my stuff, which is in the public realm. It can even give incredible summaries of my books, which are clearly, it’s got the PDFs in there. Uh, so

Cameron: Yeah.

Steve: that,

Cameron: [00:26:00] And

Steve: And that’s

Cameron: so we’re,

Steve: And that’s

Cameron: that’s, yeah. Like are we all gonna be outta jobs? Yes. Um, so is, so is half the human population? Um,

Steve: fabricators in our, in our lounge rooms.

Cameron: maybe, oh, well, it’ll be dead. I mean, either way, but it’s just like, yes, it’s upsetting and it’s scary, but it’s also the reality and it’s, it’s really not copyright theft. You put your stuff out there in public. It’s not being ripped off. It’s being trained on people. Don’t, I still believe people mo generally, who bitch about this stuff don’t understand how LLMs work.

And speaking of which. No one understands how LM works, including the people who build them. We’ve talked about this before, but this paper came out, uh, a week ago, tr by Anthropic. The, the, the Xa, uh, ex open AI engineers, a lot of them that, that build. Claude, one of the leading models, it’s called Tracing the Thoughts of a [00:27:00] Large Language Model.

I could play you the video that they did, but I’ll just, I’ll talk through it. Um.

Steve: it. It’s, it’s really cool.

Cameron: Really cool. Yeah. So their, their summary starts off like this. Language models like Claude aren’t programmed directly by humans. Instead, they’re trained on large amounts of data. During that training process, they learn their own strategies to solve problems.

These strategies are encoded in the billions of computations. A model performs for every word it writes. They arrive inscrutable to us, the models, developers. This means that we don’t understand how models do most of the things they do, and that still, I think, is one of the most profound things that I’ve ever heard as a technologist of 30 years, and that I still think most people out there don’t understand about ai.

Oh, is it eating window time? Close enough. Thank you. [00:28:00] My first food for the day. You’ve got water.

Steve: either. I’ve had

Cameron: Yeah, that’s me too. That’s all I’ve had.

Steve: Mm

Cameron: I’m on the Santino diet. Coffee and water till one o’clock.

Steve: until, yep. I try not to eat until dinner now, but that’s another story I.

Cameron: Oh, OMA, you’re on the Oma.

Steve: What’s, I don’t even know what it is, but I know that I look less fat if I don’t eat until dinner. ’cause there’s not enough time to eat until the day’s finished.

Cameron: Oma is one meal a day? Actually it is. I’ve just finished reading a book by a guy from Harvard all about Oma and intermittent fasting and compressing your eating window for that reason. Yeah,

Steve: Well it just makes it easier for me to still surf so I don’t get, it’s easy to stand up on the surfboard if I’m skinnier. That’s basically it.

Cameron: Easier. Easier for me to do kung fu if I’m skinnier. Yeah.

Steve: exactly. Funny, funny that, who knew? But

Cameron: Yeah.

Steve: remember there’s a movement on the internet called, healthy at any size, that someone who I think was some kind of size invented.

[00:29:00] I’m just saying and I look, do it. Do as you please. But I think biology will

Cameron: Yeah.

Steve: not you’re healthy at any size. Because

Cameron: Yeah.

Steve: see in old people’s homes. You know what they are. Cameron. Smokers

Cameron: people.

Steve: who are obese, you don’t see ’em. They’re not in old people’s homes.

Just a clue. That’s all I’m saying. But do

Cameron: Mm.

Steve: will. People carry on.

Cameron: So no, back to ai. People still don’t understand that they, they’re trained or they train themselves to a large extent. So the, the paper goes on to say, knowing how models like Claude think would allow us to have a better understanding of their abilities as well as help us ensure that they’re doing what we intend them to.

For example, Claude can speak dozens of languages. What language, if any, is it using in its head? Claude writes text one word at a time. Is it only focusing on predicting the next word, or does it ever plan ahead? Claude can write out its reasoning step by step. Does this explanation represent the actual [00:30:00] steps it took to get to an answer?

Or is it sometimes fabricating a plausible argument for a foregone conclusion? So they’ve created a series of tools that they’ve, based on neuroscience, and they’ve tried to use these tools to figure out how Claude thinks, and I’ll skip ahead a little bit. Our method sheds light on a part of what happens when Claude responds to these prompts, which is enough to see solid evidence that Claude sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal language of thought.

We show this by translating simple sentences into multiple languages and tracing the overlap in how Claude processes them. Claude will plan what it will say many words ahead and write to get to that destination. [00:31:00] We show this in the realm of poetry where it thinks of possible rhyming words in advance, and writes the next line to get there.

This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so. Claude on occasion will give a plausible sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint.

We are able to catch it in the act as it makes up its fake reasoning, providing a proof of concept that our tools can be useful for flagging concerning mechanisms in models. So, and they say here, we were often surprised by what we saw in the model in the poultry case study

Steve: Hmm.

Cameron: we had set out to show that the model didn’t plan ahead and found instead that it did in a study of hallucinations, we found the counterintuitive result that Claude’s default behavior is to [00:32:00] decline, to speculate when asked a question, and it only answers questions when something inhibits this default reluctance.

So. It’s like, it, it, it, like, it’s fascinating that A, the people that build these things don’t really know how they work still. And b, the way that they intuit they work turns out to be incorrect when they build models to test how it actually works. And c you know, this whole Cory Doctorow argument that we’ve been making fun of for the last couple of years about stochastic parrot and they’re just word prediction generation engines.

And that’s what Chomsky said as well.

Steve: and I, I’ve never agreed with that. I’ve never agreed with that. Doctor. I, I love him so much. He is so wrong on that.

Cameron: And it’s so obvious to anyone that uses these tools on scale that there’s something else going on.

Steve: something else going on.

Cameron: They started as stochastic parrots, but [00:33:00] they’ve moved on from that. You know, they, there’s.

Steve: so similar and I just keep on coming back to Biocare and when I watched that video, the first thing that I thought to myself was, it’s very similar to nature and, and for many of the things that we’ve studied, whether it’s MRI scans on brains or the way root systems in trees, what we, we know, and even we spoke about in a pod a few episodes ago on electricity, we sort of don’t know exactly how it works.

We just know how to harness the functionality of what it does, That seems to be true of this. It’s has a sense of bio, it’s a different type of biology, it has that sense and it seemed to me as though it was nonlinear and it would change the way it does things based on new problems, which the idea that you just mentioned there, it tries not to hallucinate unless something else interacts with it, which is a little bit like the way the Internet’s designed [00:34:00] as well.

The internet was designed in case there’s nuclear war and it reroutes itself to find a new answer. It seems that this system is, is similar to that and I think that that’s fine. I think the fact that it’s nonlinear and it’s not fully predictable makes it interesting and more nuanced and it gives us a sense of power because our creativity overlapping, can change the way it does things.

I think it’s kind of emancipating. I,

Cameron: Yeah. I mean, it can be, um, it’s also terrifying and threatening to a lot of people, but, um, I, I, I think it’s emancipating and will be, hopefully

Steve: And

Cameron: play out in a way

Steve: biomimicry of how humans learn. child at the age of one or two is a, what’s the word? Stochastic?

Cameron: yeah. Something like that. St. Stochastic.

Steve: It’s like a stochastic [00:35:00] parrot. If you look at a, a 1-year-old child, you’ll see them repeating phrases without an implicit understanding of the meaning of the words until such time that they have the cognitive ability to underscore meaning around it, which is what this lecture is all about.

Cameron.

Cameron: Sta stochastic means having a random probability distribution.

Steve: wow, there you

Cameron: Yeah.

Steve: the idea that kids just repeat and don’t really understand the words and then eventually they see a pattern over time with hungry or give or they just start to do it and, and, and then the structure grows. And so LLMs have grown in terms of their structural capabilities from parroting words and just putting together things that seem to make sense.

’cause I remember once daughter, Laura, I can’t remember the exact sentence, but came out with this incredibly articulate sentence with a word in there that I knew that she didn’t know. [00:36:00] And I said, you know what that word means? And she said, I dunno what it means, but I know where it goes. She said to me, and I never forgot that.

Cameron: That’s great. That’s great. Yeah. And Fox does that all the time and it’s fascinating. Like he knows that that word goes there but doesn’t really know what it means, but knows that it’s appropriate for that sentence structure. Right. And and to be honest, when I’m learning Italian, there’ll be many times when I’m doing Duolingo and I have to translate something from English into Italian.

And I know I have to use a certain word here. I still don’t know why, really, but I know that I have to use that word in order to get the construction of the sentence right.

Steve: I, when I learned to Tan, I had to, I bought a book, which was English for Italian speakers. actually have to learn the grammar and the price because the grammar entertained is more complex. And there were

Cameron: Hmm.

Steve: many things I [00:37:00] knew how to do, but I didn’t know why or actually what they were in, in

Cameron: Yeah.

Steve: future, past participants and

Cameron: Hmm.

Steve: that

Cameron: Conjugation.

Steve: yeah, I

Cameron: I knew nothing.

Steve: lot of things they, how they are or why they are. I just know how to use them and I’ve got no idea. And even when I’m reading a, a lot of books where they get a bit technical, uh, whether it’s investing or, or tech stuff, I call it the understanding it later. I just take it, absorb it, and then sometimes on an night or Tuesday I’ll be like, that book, you know, I’ll just be like, oh, that’s how finally got it.

Cameron: We talk about that a lot at kung, uh, a lot at kung fu. Like, uh, people that have been around a lot longer than me will talk about how you just do what you’re told to do long enough and then one day you understand why you’re doing what you were told to do. And Chrissy and I have had that experience in our four years there.

Like, you’ll, you’ll learn a particular move and after a few years you’ll go, oh my God, now I [00:38:00] know why I’m doing that move. Right. It, you use it in a practical application and you go, oh my God, that’s that thing that they told me to do three years ago. Right. I wanna read, um, just the first, um, couple of paragraphs of the actual, um, introduction to the papers that that philanthropic have done here, because they’re sort of talking about what you were talking about.

They’re talking about the AIS in terms of organic biology, and that’s how we have to think about these things now as living organisms with the biology different from what we’re used to, but. Uh, just as valid. They say large language models display impressive capabilities. However, for the most part, the mechanisms by which they do so are unknown.

The black box nature of models is increasingly unsatisfactory as they advance in intelligence and are deployed in a growing number of applications. Our goal is to reverse engineer how these models work on the inside, so we may better [00:39:00] understand them and assess their fitness for purpose. The challenges we face in understanding language models resemble those faced by biologists.

Living organisms are complex systems, which have been sculpted by billions of years of evolution. While the basic principles of evolution are straightforward, the biological mechanisms, it produces a spectacularly intricate, likewise, while language models are generated by simple human designed training algorithms, the mechanisms born of these algorithms appear to be quite complex.

And I think that’s a terrific analogy. And, um, I, I’m just excited by this. I remember seeing Kurtz while talk a year or so ago where he was talking about the fact that. Yes, these models use a lot of compute today, but that’s probably because we don’t understand how they work, so we just throw compute at it.

But eventually we will understand how they [00:40:00] work and we’ll probably understand, we’ll probably then appreciate that 90% of the computation that is being done isn’t really required. And we’ll be able to shrink the models down to much smaller size, requiring less energy and less compute for most things.

And I think this is part of that process is trying to understand how they do what they do. But we’ll see. I,

Steve: It seems like that won’t be too dissimilar to scaling web service requirements where we used to buy the entire server of whatever you might need on a particular day. And then when we got to web servers that scaled upon needs of computation at that point in time, seems that we’ll get to a similar place,

Cameron: yeah.

Steve: with, with the AI models.

Cameron: If you wanna solve cancer, you’re gonna need a big compute. If you wanna ask it what the weather’s gonna be tomorrow, a lot less compute required. Um, another story that blew my mind this week, Steve, was [00:41:00] lg, have come out with their own ai.

Steve: life is good. Cameron, they’ve always said that. Lucky gold star.

Cameron: I have an LG smart tv and its software is the fucking worst. Like it’s, it’s web os is the clunkiest piece of shit. Absolutely horrible user experience. Just obscenely terrible. So I don’t like the idea that their AI is gonna be running on it, but. They, uh, they, they tweeted. This is LG AI Research Breaking News.

We’re thrilled to announce XR one Deep, a next generation AI model designed to enhance reasoning capabilities evolving into agentic AI for real world industry solutions, specialized in math, science, and coding tasks. XR one deep pushes the boundaries of AI’s role in both professional fields and everyday life.

They’ve got a [00:42:00] 32 billion parameter model. They’ve got a 7.8 billion and a 2.4 billion parameter model, which they claim dominated all major benchmarks securing first place.

Steve: did.

Cameron: They’ve released it on hugging face, but the, the thing that’s, um. Interesting here is we’ve seen AI models come out from Google and from X Twitter, uh, in China, from a hedge fund company, from Ali Barber.

Uh, we are now seeing them come out from industrial technology hardware companies, right?

Steve: Which

Cameron: It.

Steve: is, is, really exciting and positive and it, it, it circles back to the open source movement because one thing we certainly don’t need is the five big tech companies dominating [00:43:00] this space. And I think the more that we see hardware players moving into software and software players moving into hardware, I think it kind of opens up the competitive paradigm, which. Uh, antitrust hasn’t been able to solve, so maybe this moves us towards that place, which again, that that’ll, that feeds well into happening with Nvidia in, in humanoid robots, which we’re gonna talk about. That’s more open source as well.

Cameron: Yeah, very. They’ve got a big open source thing, but we, you know, we are going to see, I’m quite convinced, um, lots and lots of companies with their own AI models that will interact with each other in natural language or their own language as we’ve talked about before. But, uh, you will have every device, and I know this kind of sounds like the internet fridge promises from 1995, but

Steve: [00:44:00] Yeah.

Cameron: you will, you, you will have maybe not fridges, but lots of devices will have some kind of AI.

On them because whe when, when it’s required, when there’s a value to have an AI on them, um, you know, you, you, your unit cost is gonna go up if you need to put a chip set on something to run a local ai, the unit cost.

Steve: chip set, I mean, what is the chip set of the future? It depends, ’cause there’s chip sets in everything now from a toaster to electrical device, which is not a major cost issue, but it

Cameron: Yeah. Yeah. And they’re relatively low level GPUs or CPUs, not GPUs, they’re cpu, but depending on the unit cost and, and where they go. I imagine like my Roomba. [00:45:00] We’ll have some sort of an a I on it so it’s learning and it’s intelligent. Uh, my fridge maybe, maybe not, but you know, different devices will necessarily, my TV hopefully, probably, um, will be keeping track of what I watch when I watch it, what I like to see.

Steve: car,

Cameron: Well, cars obviously. Yeah. So I mean the, it it, to see these large companies, um, start to roll their out, just, there’s gonna be massive amount of competition in this space. There will probably be some sort of, um, consolidation at some point as well, but the commercial interest to push for everyone to have their own at some level and their own control over it are gonna be enormous.

Um, by the way, I know that in our last episode you talked about, you brought up the fact of, with the Mag seven, is there gonna be a collapse in the Mag seven bubble? Um. [00:46:00] Well you did, and I said that Tony had been talking about that on QAV for a long time. Well, apple was down 9% I think today, um, as a result of Trump’s Liberation Day tariffs.

So I dunno,

Steve: you.

Cameron: liberating money from, uh, investors, uh, among other people in these companies.

Steve: two weeks ago we didn’t talk about it. There was a lot of talk about Apple’s lagging on generative ai and the disappointment of the implementation of Apple AI or Apple Intelligence as they

Cameron: Yes,

Steve: coined it. Uh, and I, and I still think

Cameron: I.

Steve: unless they’ve got a secret up their sleeve, they’re, they’re really, really lagging.

And it’s been nothing but disappointment from Apple in terms of, uh, ai, anything that’s

Cameron: Yeah,

Steve: let’s put it that way. I,

Cameron: I saw Marcus Brownley, uh, did [00:47:00] a YouTube on that in the last couple of days, and he was scathing

Steve: yeah, Hard Fork

Cameron: their

Steve: a, a, a, a solid episode on it as well. The New

Cameron: right.

Steve: um, tech podcast was really good skating and, and I think it’s, it’s fair play because they’ve got every resource in the world. You have $300 billion, 400 billion in the bank. It’s like. Uh, absolutely they could, should and would, and I think they’re in the best position to develop a really personal digital twin AI that you can converse with.

Imagine if Siri was, had the capability, verbal and reasoning capability of deep seek and open AI that would have incredible utility given where it lives.

Cameron: Speak about speaking about bitching, the thing that’s been annoying. The fuck outta me recently.

Steve: I was, I was expressing technological and economic realities. I don’t know if I’d call it the B word.

Cameron: Tech bitching. Um, I’m annoyed that it’s [00:48:00] 2025 and my ebook readers mostly Apple Books. And secondly, Amazon Kindle on my iPad don’t have an AI built into them yet. I am constantly having to look stuff up, whether I’m reading fiction or nonfiction, I’ll constantly be looking stuff up to, you know, I I I need to check stuff, right?

Oh, what does that mean? How does this work? Again, remind me of this. Um, and I, I don’t wanna look it up in a dictionary, I don’t wanna look it up in Wikipedia. I wanna have a conversation with an AI about the thing. The most recent one that bugged me was I’m reading I, Len Dayton’s first spy novel, the IP Crest file you ever gotten to Len Dayton,

Steve: No, Len? No, I haven’t really got to Len.

Cameron: Lenny d.

Steve: I feel bad for Len and I, I was, I was gonna send him an email, but I’ve just been inundated with spreadsheets.

Cameron: It is a spy [00:49:00] novel and it’s sort of set in the Cold War. And, um,

Steve: wonder

Cameron: was

Steve: everyone. It’s set in the Cold War and it’s a spider off. We’re talking to Cameron Riley here. I mean, no surprises.

Cameron: actually, uh, I’m just getting into spy novels, all of my Cold War knowledges from nonfictional sources, but I’ve just started to get into Jean Re and Len Dayton. ’cause they have a very, um, anti James Bond view of the world. Their spies are all dumb, struggling, frustrated, wannabe, badass spies, but they’re not.

They’re sort of, um, uh, everything goes wrong. Things don’t work out.

Steve: Spice like us, one of the great movies.

Cameron: yes, a bit like that. They sent and, and it’s like real life too, because, you know, if you know anything about the CIA over the years, um, I. Most of, nearly everything they’ve done has been a complete failure. Um, they, they send in a bunch of people undercover, midnight into [00:50:00] North Korea to infiltrate.

They’re all dead within an hour of arriving. ’cause the North Koreans knew they were coming in like a week ago.

Steve: What a horrible person I am.

Cameron: Yeah. Anyway, I’m reading this Len Dayton novel and he’s got some American military guy explaining to the British how, uh, nuclear bomb works. How are are u uranium 2, 3, 5, 2, 3 8 bomb works? And I’m reading, I’m going, that’s not how a nuclear bomb works.

I’m pretty sure. ’cause I’ve done a lot of podcasts on

Steve: your

Cameron: nuclear bombs.

Steve: from today, I dunno if I

Cameron: I know, I know, I know. Fair point. Fair point. But I had to. Copy and paste the paragraphs from the book into Jet GPT and say, this isn’t how it works. Isn’t there a, there’s gotta be a proton trigger right from, and sorry, a neutron trigger.

And he goes, yeah, yeah, yeah. So we have this conversation. I had to crosscheck what I was reading. It should be built in. I should be able to [00:51:00] block it in the app and go, Hey, what about this? It doesn’t exist. It’s annoying as hell anyway.

Steve: that would be a really good feature. That is definitely worth talking about and would drive subscription. It really would.

Cameron: Well, Apple’s obviously not gonna do it until they have their own ai

Steve: in, in translation, they’re never gonna do

Cameron: never gonna do it.

Steve: just, you know, they’ve got the, they’ve got that device that they like selling.

Cameron: Yeah. Well look, I hope Apple get it right. I really do because I, you know, I want it in my devices. I want it to.

Steve: one that has a boundary around it that’s my digital twin. ’cause I think that

Cameron: Yeah, exactly. I just don’t think they’re making fast enough progress. Um, so yeah, Nvidia did a thing. Uh, good old Jensen. Huang did a thing. GTC 2025 had looked. Looked. Yeah. Coolest man in tech. I think all the rest of [00:52:00] them like Elon and Zuck and whatever. And, um, who’s the Amazon guy again? Jeff? Uncle Jeff.

They’re all just trying to look like Jensen. ’cause Jenssen’s

Steve: He’s the

Cameron: genuinely cool looking motherfucker. Yeah. Yeah. He’s the Elvis, he’s the tech Elvis.

Steve: really flipped it up. I always love that they say that Zuckerberg’s looks like a, a Chean Coke dealer. Like, it’s like he, he just had, oh my God. Talk about the guy that you can buy the clothes, but you just gotta be able to wear ’em.

Right. You have to be able to wear ’em.

Cameron: He, um, unveiled a couple of new robot, uh, announcements. One was kind of a JV with Disney and Deep Mind,

Steve: very uh, wall Lee, didn’t it?

Cameron: very Wally. And unfortunately in his presentation, he didn’t really give a lot of facts or specifics about how it works. But then I saw, [00:53:00] um,

Steve: was disappointed in the presentation. I’m like, and what? Like there was like,

Cameron: yeah,

Steve: seem he was getting cheered. Like he was an MMA fighter who’s the Economi McGregor? And I’m like, what have you actually done? You just brought out a wall e robot and said Stand and sit. I’m like, and and sorry I’m waiting for the, the moment.

Cameron: yeah, but I saw, uh, mark Rober from Crunch Labs. You Mark Rober.

Steve: Marky, yep. Dunno him

Cameron: Robo

Steve: Robo

Cameron: Roby. Um,

Steve: and he does robots.

Cameron: uh, yeah, well he is a former NASA engineer and Mormon who has,

Steve: by the way. Thanks for

Cameron: uh,

Steve: No one in the world has ever called it nasa.

Cameron: nasa,

Steve: Nasa.

Cameron: uh, has a great YouTube channel about engineering, uh, really, really entertaining, um, great for kids that wanna be engineers or interested in science or engineering. But he went and did a tour of [00:54:00] the Imagineering Labs where they were working on this, and he had a bunch of them and showed how they worked.

So they’re all human controlled. But they’re trained in virtual physics environments, how to walk. So they’re bipedal and they’ve got two Nvidia chip sets in each one, but they’re also human manipulated. And the, the Disney folks, were talking about them as an extension of gaming, uh, where you are driving this thing with like an Xbox controller to do stuff, but it has a certain level of native onboard intelligence, um, combined with, uh, your ability to control it, to do stuff.

Like a bit of a, like a car. What is

Steve: or chest kind of for robotics, right?

Cameron: Yeah, yeah, yeah, Anyway, it was pretty cool. But the big announcement was their group N one [00:55:00] humanoid foundation model, um, that he talked about at the end of the video. Um, this is their open source robotics model that he had sort of talked about before, but it’s a pretty big deal.

Um, they’re putting a lot of investment and research into the development of general purpose humanoid robots and making it open source, right? So, um, anyone will be able to build these things. They’re trying to make robotics really blow up and be available everywhere. Obviously you need to buy the Nvidia chip sets to make them work, but it’s again, part of this thing where the vision of the future is.

You will have hundreds and hundreds or thousands of companies manufacturing [00:56:00] robots, general purpose robots or, or. Specific industry robots that will be running NVIDIA’s chip sets and hopefully chip sets produced by Chinese companies as well. And where the, the software, the intelligence for building these things will be readily available for everyone to get up and running as quickly as possible.

So I, I really do expect to see an explosion of general purpose robotics in the next five to 10 years, driven by these sorts of, uh, technologies being made available.

Steve: Yeah, I, I, I just was excited by the one thing on it, which was this is open source, implement the engine. feels to me that. We’re about to enter an era, which is akin to the automobile. We had horse and cars for a long time. So we had the engines, the idea of [00:57:00] how it worked and everything was kind of open source.

And everyone went, okay, you’re gonna get this engine, you’re gonna put it into this mechanical device. And now we’ve got this new world and

Cameron: Yeah.

Steve: and AI has been this separate kind of thing, and robotics has been over there. It feels like merging those two things can create a, a new, uh, competitive reality.

But also in the era when it’s not just about turning, uh, atoms into bits, it’s now about turning bits into atoms. That whole idea of everything was digitization, but now we’re entering the physical internet, the robotics, the manufacturing internet, and if there was ever a glimmer of hope in things going open source, uh, so that you can have a wider comparative viewpoint of this and all different. Versions of what robots look like, the fact that we just, we, we had the Nvidia style one, then you’ve got robotic, uh, vicious dogs coming from Boston Dynamics and more humanoid robots like, uh, the figure one, I, I like [00:58:00] this idea that they could develop out into a lot of different physical and reasoning models because of the open source nature of it.

Cameron: Yeah. I dunno if you saw this, but, um, Thomas Friedman had a piece in the New York Times, uh, a couple of days ago that I read. Now generally, I don’t like Thomas Friedman. Disagree with him a lot ’cause he’s, yeah, he tends to be a rah rah America, um, American imperialism supporter. But this was a really interesting article, particularly because of his history of being a rah rah, American imperialist.

He said, I had a choice the other day in Shanghai, which tomorrow land to visit. Should I check out the fake American design tomorrow land at Shanghai, Disneyland, or should I visit the Real Tomorrow Land? The massive new research center, roughly the size of 225 football fields built by the Chinese technology giant, [00:59:00] Huawei.

I went to Huawei’s. It was fascinating and impressive, but ultimately deeply disturbing A vivid confirmation of what a US businessman who has worked in China for several decades told me in Beijing, there was a time when people came to America to see the future. He said, now they come here. I’d never seen anything like this.

Huawei campus built in just over three years. It consists of 104 individually designed buildings with manicured lawns connected by a Disney like monorail, housing labs for up to 35,000 scientists, engineers, and other workers offering 100 cafes, plus fitness centers and other perks designed to attract the best Chinese and foreign technologies.

The Chu Lake r and d campus is basically Huawei’s response to the US attempt to choke it to death beginning in 2019 by restricting the export of US technology, including [01:00:00] semiconductors to Huawei amid national security concerns. The ban inflicted massive losses on Huawei, but with the Chinese government’s help the company sought to innovate its way around us as South Korea’s male business newspaper reported last year.

It’s been doing just that. Huawei surprised the world by introducing the mate 60 series. A smartphone equipped with advanced semiconductors last year, despite US sanctions. Huawei followed with the world’s first triple folding smartphone and unveiled its own mobile operating system, Hong Min to compete with apples and Googles.

The company also went into the business of creating the AI technology for everything from electric vehicles, self-driving cars, and even autonomous mining equipment that can replace human miners. Huawei officials said in 2024 alone, it installed 100,000 fast charges across China for its electric vehicles.

By contrast, in 2021, the [01:01:00] US Congress allocated 7.5 billion toward a network of charging stations. But as of November, this network had only 214 operational charges across 12 states.

Steve: I mean, it’s pretty poignant, isn’t it? It reminds me of when you mentioned the number of engineers in the campus. There was a video that went viral about, I wanna say 20 years ago, 15, 20 years ago, about China, and it said if you are one in a million. In America, you’ve won in a thousand in China. Yeah. And, and that is, is really it, right?

Is the quantum and the investment and America is really eating itself anti-competitive nature of what’s going on, the lack of real investment. Uh, [01:02:00] yeah. And, and this

Cameron: what,

Steve: get.

Cameron: here’s what Freedman says about exactly that. China starts with an emphasis on STEM education, science, technology, engineering, and math. Each year the country produces some three and a half million STEM graduates about equal the number of graduates from associate, bachelor’s, masters, and PhD programs in all disciplines in the United States.

When you have that many STEM graduates, you can throw more talent at any problem than anyone else.

Steve: Yep. That’s

Cameron: As the Times Beijing Bureau Chief Keith Bradshaw reported last year, China has 39 universities with programs to train engineers and researchers for the rare earth’s industry. Universities in the United States and Europe have mostly offered only occasional courses, and while many Chinese engineers may not graduate with MIT level skills, the best of world class, and there are a lot of them, there are 1.4 billion people there.

That means that [01:03:00] in China, when you are a one in a million talent, there are 1400, 1400 other people just like you.

Steve: Hey, I did it. I was a little

Cameron: He’s.

Steve: 400 off, but I was off. I was on target. I’ve gotta find that video that said, it was like this hypertext video about China versus crazy, and everyone was like, this was a long time ago. And it’s not like we didn’t have time to. a response. And I think a lot about university and the lack of manufacturing or the reduction in manufacturing in western markets and the scientific imperative.

Yes, we’ve still got very smart people in western countries. We just don’t have the quantum and the fact that we don’t have the manufacturing, you don’t have the opportunity, you don’t have the need, you don’t have the natural economics that pushes people into it. And to be quite frank with you, what’s happened in our universities in Australia and around the world, so bleeding heart. And you’ve even seen it in school where [01:04:00] we stopped giving people marks out of a hundred. And we started to get soft on just realities, did you pass the test or not? And what did you get and what’s your score? And, and, now it’s like, oh, I tried. Everyone gets a ribbon in in Australia, but guess what? Not everyone gets a ribbon or a trophy in the real world. And we’ve got so many university courses that I don’t think. Add a lot of value. I don’t want to, there’s one that I’m so tempted to say that I think is just a non course, and I’m just, I refuse and I won’t say it because I will be judged and potentially canceled and I’m not up for that. Cameron,

Cameron: Yeah, I,

Steve: A guess at which course I might be thinking of? Don’t, don’t

Cameron: no, I don’t know, but I, look, I send, well, we send Fox to a hippie school. Um, and I, so I, I don’t really agree with your criticisms of, um, giving kids, um, positive feedback [01:05:00] regardless of whether or not they’re academically doing well or not. I think

Steve: didn’t say that. I didn’t say give people negative feedback.

Cameron: when you say giving them a ribbon,

Steve: yeah,

Cameron: I think I, I,

Steve: Yeah, that’s, that’s not positive fee. Positive feedback. Absolutely. Give people sure, but not everyone gets a trophy. Everyone shouldn’t get a trophy.

Cameron: um.

Steve: and people need to be scored on what they’re good at.

Because you know what? Unless we have the courage to tell kids, I’m not saying you’re lamb ba a 6-year-old or an 8-year-old. I mean, this is endemic at university and high school level. I’m saying let people know if they’re not good at something so they can find something that they are good at and encourage the hard stuff. Guess what? It’s hard. Life’s hard.

Cameron: I disagree with that. I don’t think life’s hard. I think life’s hard if you make it hard. But I, I think that people, um, have strengths and weaknesses and we should reward or [01:06:00] encourage, or incentivize kids for doing the things that they’re good at and encouraging good behavior. And it’s not necessarily the things that have been given trophies for in the past.

So maybe

Steve: good. Yeah,

Cameron: does get a trophy, but we just make more trophies for different things that we didn’t give trophies out for.

Steve: areas we’ve forgotten to give trophies in.

Cameron: Yeah. Like being a good human being deserves a trophy

Steve: it

Cameron: really,

Steve: Yes.

Cameron: um, drawing a good picture or trying your hardest to draw a good picture

Steve: Yes.

Cameron: gets a trophy. Whether or not the pictures subjectively, I like it or not.

You tried hard to do something that was difficult for you. You get a trophy

Steve: for a lot

Cameron: or you get recognition,

Steve: for a lot of years, I’ll tell you what we had, it was a problem in Australia. There used to be Australian of the year and between about 1985 and 2010 or 2005, dunno, it was always a sports person. They’ve, [01:07:00] they’ve flipped it up a bit recently and now it might be a scientist or a medical practitioner or, which is amazing, right.

For a long time it’s like what Steve

Cameron: or a soldier killed a bunch of Afghanistan and civilians and

Steve: And

Cameron: covered up their bodies.

Steve: yeah, of course. And, and they were just sitting down having a game of cards and Yeah. Enjoying it. Some a, a Arabic coffee,

Cameron: Smoke. Yeah.

Steve: Uh, but a long time, heroes in this country in America were, were, were celebrities and sports people, and, you know, tiger Woods and, uh, And

Cameron: Tiger Woods got an Australia of the Year.

Steve: out that George Carlin was right. Tiger Woods Och is my own fucking heroes. Thank you very much. Said George Carlin in one of his last specials, dumb look at his

Cameron: All right,

Steve: steroids and

Cameron: we’re off. Okay. Moving right along. To finish up, I wanna talk to you about Praxis because, um, I just did a whole show about Praxis on the bullshit field, but you, you’re gonna love [01:08:00] this.

Steve: I, I’m

Cameron: Yeah, I will.

Steve: I’m tuned in. Everyone here comes Praxis.

Cameron: So there, I, I came onto this when I was trying to figure out why Trump wants Greenland. Mm-hmm. Um, lots of rare earth minerals, strategic location, et cetera, et cetera. But there’s a guy called Dryden Brown. That you need to look up. There’s also a guy called Shrin, so who wrote a book? So, God, uh, I, I should have my full notes here, but I, I, I’ll try and pull this up from my memory from a couple of hours ago, a few years ago, um, one of the founders of, uh, Coinbase, who’s also a partner with Mark Andresen at Dresen Horowitz, his name’s called, um, shrin, wrote a book called The Network State, which I read, uh, yesterday Flip Through the Network State.

You heard of the Network State Vision.[01:09:00] 

Steve: You better tell me about it.

Cameron: Have you read Iron Rand’s? Atlas Shrugged. I.

Steve: I have heard a lot about it and read summaries. I haven’t invested the time in it because it’s got a bad reputation

Cameron: You should read Atlas Shrugged. Everyone should read Atlas Shrugged in the fountain head. Um, you have to agree with it, but you should read it. So they’ve taken this idea from Atlas Shrugged and gone ballistic. So the idea of the network state is a startup society where imagine getting, uh, a nerdy subreddit that has 50,000 members.

They crowdfund

buying a piece of land, they then build a city on that piece of land. They then go and live in that city and declare themselves an independent nation state with their own government, their own laws, their [01:10:00] own army, police force, taxation or lack thereof. And basically run their own thing. So it’s a, a, a su a country where you get to choose to be part of the country.

You’re not born into it. You apply for membership and, or you buy your way in a bit like Trump’s golden visa for American citizenship right now.

Steve: policies. You buy your way in, apparently.

Cameron: So this guy who’s part of the whole Mark Andresen thing, um, wrote a book about him. Then there’s a young guy.

Steve: this right recently. Okay.

Cameron: There’s a young guy called Dryden Brown, late twenties homeschooled because he wanted to be a professional surfer. So you’re gonna like him. Um. He started a company a couple of years ago called Praxis, P-R-A-X-I-S. Praxis is a Greek word that means [01:11:00] taking something from theory and putting it into practice.

He has raised $500 million from, from Peter Thiel, among others, with a view to building a network state. A couple of years ago he went to Greenland and tried to buy Greenland to build this on Greenland. Told him to go fuck himself. Uh, he’s trying different places around the world to get this land. Um, I watched a YouTube of a talk that he gave last year, which was absolutely manically bonkers.

Um, but anyway, this. Startup society where it’s just basically autistic tech nerds like me, will go and build this U Tech utopian society. That’s [01:12:00] internet first, pro ai, pro robotics, anti taxation, full of hot girls. ’cause they’ll all be in cells. Uh um,

Steve: you had me at Hot Girls, Cameron, or I wasn’t really on board. But it’s, it’s amazing how one single mode of proposition can change many things.

Cameron: um, now there’s a bunch of interesting things. So Peter Thiel is, uh, backing this guy, Peter Thiel, obviously also backing Janie Vance.

Steve: Yeah.

Cameron: Peter Thiel, co-founder of PayPal with Elon Musk, who is Trump’s. Who is Trump’s ambassador to Denmark that owns Greenland? Ken Howie, another member of the PayPal Mafia, one of the other founders of PayPal.

So you’ve got this, these joining dots between PayPal guys, Trump Dried and Brown Praxis, uh, mark Andresen, and b and um, Shri are both [01:13:00] big fans of the Praxis idea and the network state thing, and Greenland and Peter Thiel’s got his citizenship in New Zealand as his backup country if all goes wrong. But basically there seems to be this thing where these tech bros have this dream of building their own country.

It’s ba, it’s techno feudalism.

Steve: Mm.

Cameron: what do you give to the man who’s got everything? His own kingdom,

Steve: country. That’s what that’s, that’s one of the old school strategies that bring it back. The old

Cameron: It’s classic. It’s classic old school, ping dick. So, and then, um, you tie in the fact that they’re obviously dismantling America right now and trying to dismantle global trade and global international bodies.

They wanna shut down the un, they wanna shut down the world. Um, bank, they wanna shut down nato. They’re trying to dismantle all of the [01:14:00] post World War II Cold War era international bodies. And then they’ve got this idea of building these nation states that are run by corporate, the tech billionaires. Um.

It’s a really interesting play going on that I’ve just learned about in the last week, and I was like, ah, Steve’s gonna love this.

Steve: hey, get this thing up and running and Mr. Trump can have another term just in his seceded state on Greenland or in the middle of America, or why not Cameron? Pine gap. Pine gap 2.0. Let’s get the Yankees down here. We’re gonna bring in some water. outside of Australia, as the sea levels are rising, we’re gonna desalinate it with robots and build a forest.

And if you forget the line, forget the line. In the, in the Arab regions, it’s Pine Gap 2.0. Praxis Australia’s on board. We’ve [01:15:00] got an election, it’s a vote winning policy

Cameron: What was the name of the city in the middle of Australia that was going to be built

Steve: There was someone,

Cameron: in the.

Steve: ago, someone proposed it.

Cameron: 1980s. No, I’m thinking of the multifunction

Steve: That’s

Cameron: p

Steve: The multifunction policy. MFP. There

Cameron: Yeah.

Steve: there was another, independent, uh, I think two elections ago who proposed another city out near Griffiths or something where it was gonna be like a university set up a, a new city all uh. Run by, by renewable energies and this kind of ideology.

Cameron: Well my friend, um, Peter Ard, um, was the guy behind the multifunction policy idea, as I understand it. Originally, he, um, was working in the seventies. [01:16:00] He worked for the Whitlam government as the chief of staff of environment ministers. And then he ended up, uh, as the CEO of Australia’s commission for the future that was set up by the Hawk government.

And so part of his thinking around all of this was this multifunction pulse, which was like this, um, cutting edge technological city. Um, I think it originally was gonna be in South Australia, but then it moved to being in the middle of Australia in the Outback or Alice Springs or somewhere like that.

Never got off the ground. People freaked out. But, um, this, this Praxis thing kind of reminds me of that. And like, I’m, I’m, I’m partly on board with the idea of a new city state that’s all pro tech,

Steve: I

Cameron: pro internet.

Steve: If, if anything we could say that this could become a really interesting MVP of what works and what doesn’t work, and you’re putting a boundary around it and it’s self-selecting participants or populace from, from that perspective, it’s [01:17:00] probably the type of thinking that we need. And in some ways, the things that you have concerns with those who are for it and funding it, we can use their money, their billionaires, put them in it, and they can eat their own dog food, as it were, to use the, the, you know, the Google ideology of if you believe in it, then build it.

And let’s see. And I think it could be a really incredible test case,

Cameron: Well, the way Dryden Brown is talking about it is it’s a test case for Elon and Mars,

Steve: right.

Cameron: right? When Elon builds his Kingdom of Mars and changes the name of Mars to

Steve: Mask.

Cameron: Musk,

Steve: Mask.

Cameron: just, just puts a K on the end of it.

Steve: Mask.

Cameron: It’s already an S just put a K on the end mask, Elon Mask.

Steve: Musk. He should change his name to Elon Musk. I,

Cameron: they’ll have to build,

Steve: Elon

Cameron: he’ll just.

Steve: That’s why he was gonna choose Venus, but it didn’t work with his name. And he said, look, Venus, Mars, we’ll go with Mars. The [01:18:00] names are similar. That’s where we’re

Cameron: Yeah, just call it X, the planet X is what it’ll be called. Anyway, check that out. Um, it’s,

Steve: him and Donald before he leaves.

Cameron: mm

Steve: Elon Musk leaves Doge, they should

Cameron: mm

Steve: change the name of Mars and call it X. I’m just

Cameron: Oh, change the name of the United States just to X

Steve: Yes, I

Cameron: Be easier. Just call it X. Yeah. Alright. That’s all I have for you this week. You got anything else?

Steve: all I got. I feel like this has been one of the great episodes and I don’t know what the listeners think, maybe they can tell us, but it’s been one hour and 20 of power and I’ve loved every second.

Cameron: We didn’t point out that we had a, we had a, one of our tiktoks last time blew up 500,000 views. You going on a rant about Elon Musk and Donald Trump or something?

Steve: I think

Cameron: So, uh.

Steve: Well, secret before we go in the show notes, our good friend Cameron Ri said, [01:19:00] this is what he said. He said, viral reminders, say controversial things. Use hooks to open segments. Why? Why does it, how do you, what can you, where can you? Did you know, and I was doing that.

You might have noticed listeners through the entire episode. So let’s

Cameron: Oh,

Steve: what

Cameron: you’ve given me some good stuff. All. All right, I’ll, I’ll go looking for that in the edit. Thank you, Steve. Good to chat to you, buddy. Have a good week.

Steve: Best part of my week.