
This week on the Futuristic podcast: running GPT4All (a local LLM) and SuperWhisper (dictation that actually works!) on a Mac; SamA says GPT-5 will solve all of the problems of GPT-4, and AGI is coming soon (also, Ilya’s status is still ?); the Figure 01 coffee robot; the Midjourney CEO in office hours just said he thinks they “can get to the holodeck” by 2024; what happens to “work” after the AI revolution; and we see how far we’ve come since this NYT article on OpenAI from 2018.



[00:00:00] Cameron: Welcome back to the Futuristic, Episode 19, Sammartino. Recording this on the 19th of

[00:00:15] Cameron: January, 2024. Forgive me, Father, for I have sinned. It has been about two months since our last episode. Domine Patria, Filio Spiritus Sanctis. Steve, uh, what have you been up to for the last couple of months, man?

[00:00:29] Steve: Oh, it’s been a, been a tough couple of months. I had a whole lot of stuff on my plate. So, if anyone needs to be forgiven, it isn’t Cameron, it’s me. So let’s just leave it at that.

[00:00:38] Cameron: I was forgiving you, but you

[00:00:40] Steve: Oh, oh, thanks. No, thank you, sorry. I

[00:00:42] Steve: thought you were asking for forgiveness from our, from our, uh, audience.

[00:00:47] Cameron: was just getting ahead of it. Um, how are you doing, buddy?

[00:00:50] Cameron: You good?

[00:00:50] Steve: Yeah, good, good. Had a good day so far today. It’s been a good day.

[00:00:56] Cameron: Well, well, we haven’t been replaced

[00:00:57] Cameron: by AI yet. Uh, but we’re going to do a quick show

[00:01:02] Cameron: today, new format,

[00:01:04] Cameron: quick as, quick as lightning. We’re going to do Speedy Gonzales

[00:01:07] Cameron: Roadrunner show today. So let’s get into it, Steve, um, what have

[00:01:12] Cameron: you done of note, technology wise, in the last, uh, say two months since we last spoke?

[00:01:19] Steve: Yeah, and even since Christmas, I haven’t done anything, really. Normally I’ve got, I’ve been working on here and I’ve seen this, but for the last three weeks I’ve kind of just been hanging with the kids and living a pretty analog life. It gave me a really strong insight. Towards the end of last year, a lot of things happened, and I

[00:01:39] Steve: was just signing up to read and consume everything.

[00:01:42] Steve: I got to the point where. My email lists and the things I’m trying to read are just out of control. And it’s a little bit like chasing rabbits. When you’re trying to chase a whole bunch of rabbits, you catch none. So I haven’t read anything, anything. It’s just all too big. So I feel two things. First one is I really want to refine what I read and listen to, to a small little trusted cohort where it’s going to summarize everything and then I can dig into that.

[00:02:06] Steve: So that’s one of the things I want to do in the next week: just decide which three email lists I sign up to that keep me in tune and know what to look at. So I need to do that, because I’m just chasing rabbits and catching none, and almost just blinded by the lights and not doing anything. But here’s what’s happened in the past three weeks.

[00:02:23] Steve: I feel so outdated. I feel like I haven’t got a clue as to what’s going on until I kind of read what we’re going to go through today with the top three in tech. And now I feel, um, I’ve got a bit of an update again on what’s going on. It was really refreshing to see the things that you had chosen, but it did remind me.

[00:02:41] Steve: That this is how 95 percent of society probably feels. This world is just racing ahead. They’re doing what they do. If they’re not working in technology, I imagine that they just feel blindsided by all this and confused and anxious about it. So it made me feel a little bit like the other half or the other 95 percent probably do all the time.

[00:03:02] Steve: That’s my insight for the week.

[00:03:05] Cameron: Oh God, I feel sorry for you. I never want to feel like the 95 percent, Steve. That’s my fear. My fear has always been waking up one day and finding out I’m average. It’s bad enough. Yesterday I was reading the Financial Review. There was a story about one of my oldest mates, who’s running a $4 billion company now.

[00:03:23] Cameron: He runs Kinetic, that runs all of Melbourne’s buses, and they’re going to take over the trams and thousands of buses around the country. My other mate, Dennis, who’s a billionaire now and runs a pharmaceutical company. One of my former, uh, employees just became the Queen of Denmark last week.

[00:03:38] Cameron: I’m like, and here I am sitting in Brisbane, still doing podcasts. Like, where did I go wrong? But anyway, moving right along. I know a lot of people have been listening to the show. I’ve been talking to people, people coming up to me over the last couple of months saying, Hey, loving the Futuristic, when’s another one coming out?

[00:03:51] Cameron: Um, people getting AI because they’ve been listening to us. So we’re having some impact on people, which is nice. Uh, I had a lot of conversations with people at Christmas parties and events. Most people have got no idea. My, my, my Christmas party trick is pulling out my phone and pulling up GPT-4 voice and letting them have a conversation with GPT, just so they can see the quality of it. And the thing is.

[00:04:19] Cameron: The thing that bugs me about that is most people just go, Oh, yeah, it’s pretty good. I’m like, what? Like, I expect their eyeballs to roll up into the back of their head and then have a convulsion at how amazing it is. They’re like, yeah, yeah, it’s pretty good. I’m like, what? Anyway, um, my highlight just in the recent week or so, Steve, is I got, um, I finally got a local GPT, a local LLM, let’s say, running on my MacBook.

[00:04:44] Cameron: I, I, I’m using Mistral 7B as the model, with 7 billion parameters. Uh, I’m running it using an app called GPT4All. You can download that if you’ve got a fairly modern Windows machine, or a Mac, or a Linux machine. It’ll run, it’ll install locally. You can install whatever model you want, you can install LLaMA, you can install your own local model, and you can train your local model.

[00:05:14] Cameron: That’s the next step for me: training Mistral on my own documents. But before you can do that, you need to do this thing they call pre-processing of the documents. You need to, uh, you know, remove all of the “the” and “and” words. You need to chunk it down into, uh, token sizes. You need to do a bunch of work in order to train your own LLM locally.
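The pre-processing Cameron describes, stripping the filler words and chunking text into token-sized pieces, can be sketched roughly like this. This is an illustration only, not GPT4All’s actual pipeline; the stop-word list, chunk size, and whitespace tokenizer are all stand-in assumptions (real pipelines tokenize with the model’s own vocabulary):

```python
# Rough sketch of document pre-processing for local LLM training:
# drop common stop words, then split into token-bounded chunks.
# Illustrative only: real pipelines use the model's own tokenizer
# (e.g. a BPE vocabulary), not whitespace splitting.

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it"}  # arbitrary list

def preprocess(text: str) -> list[str]:
    """Lowercase the text and drop stop words."""
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def chunk(tokens: list[str], max_tokens: int = 256) -> list[list[str]]:
    """Split a token list into consecutive chunks of at most max_tokens."""
    return [tokens[i:i + max_tokens] for i in range(0, len(tokens), max_tokens)]

if __name__ == "__main__":
    doc = "The history of Rome and the fall of the Republic"
    print(chunk(preprocess(doc), max_tokens=3))
```

Each chunk then becomes one unit the local model is trained on or indexes.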

[00:05:36] Cameron: But I’m going to be doing that over the next week or two, with a bit of luck, if I get some time. But you know, I’m at a point where, a little bit over a year ago, I wasn’t playing with AI at all; now I have a local AI installed on my machine that I can train and teach, train it on my documents, my knowledge, what I want it to know about.

[00:05:57] Cameron: So that’ll be interesting. The other app that’s really made a big difference for me recently is a Mac app called Super Whisper. It’s a dictation tool that uses OpenAI’s Whisper technology on the back end. Steve, I’ve been playing with dictation software since Dragon Dictate circa 1996. I was at an IT conference in Sydney.

[00:06:22] Cameron: I was on the OzEmail stand, where we were trying to convince people to get a dial-up account, and the Dragon Dictate guys were in the stand next to me. And I’ve been testing dictation software ever since, and never found anything that’s great. Even the stuff that comes in macOS is good, but not great.

[00:06:43] Cameron: If you, if you dictate a large amount of text, it’ll have like an 80%, maybe 90 percent accuracy, if you’re lucky, depending on the terminology that you’re using. And that’s not really good enough because then you need to re read through everything and fix it up. Super Whisper, 100 percent accuracy for me, even when I’m talking about Ancient Rome, which I’ve been doing with it for the last week.

[00:07:09] Cameron: So, you know, when I do podcast notes for my history shows, I’ll tend to have a bunch of books open in front of me. My process for the last 10 years is I’ll read a page of a book and then I’ll write some notes. Then I’ll read another page of a book and I’ll write some notes and I have to think about what I want to say and then I’ll write the notes.

[00:07:26] Cameron: Now I just have Super Whisper open, and I’ll read a paragraph and then I’ll just speak out loud what I want to say about that, and then read another paragraph. It’s basically like having a secretary that I’m talking to. And its accuracy is 99.99%, even when I’m using long Latin

[00:07:43] Cameron: names like, uh, you

[00:07:45] Cameron: know, Lucianus, uh, Plotus, whatever it is.

[00:07:50] Cameron: It gets it mostly right. Calpurnius Piso, it gets them nearly always right,

[00:07:55] Cameron: which is astounding to me. So check out Super Whisper, that’s my big tip for the

[00:07:59] Cameron: week.

[00:08:00] Steve: I did write large chunks of my last book into, you know, the notes function on my phone. It was pretty good, but I had to go back and refine words and fix simple errors. And so if I ever do write a book again, then you don’t write a book, you talk a book. And actually it’s better to read, because that way you’re hearing it the way someone would really say it.

[00:08:24] Cameron: Yeah, I mean, with podcast notes I don’t need to go back and re-edit it, obviously, because I’m just going to be talking about it anyway; it’s my notes to talk to. But I’ve seen there’s another tool called, I think it’s Open Interpreter, where you can then integrate Super Whisper with this other tool, Open Interpreter, into your Mac, and you can do everything on your Mac.

[00:08:45] Cameron: It basically gives it OS access and you can just tell it to, you know, uh, whatever you want. It’ll, it’ll run the entire Mac just with voice. I’m a little bit dubious about

[00:08:56] Cameron: giving something, um,

[00:08:58] Cameron: that I’m not really sure about the genesis of,

[00:09:00] Cameron: access to my operating system at this stage. But I think that’s where we will be at some point in the next couple

[00:09:06] Cameron: of years, is you will just talk to your computing devices and most of your devices, actually, and they will just function based on speech.

[00:09:13] Cameron: The days of using a keyboard to access, uh, your computing or your technology in general will

[00:09:20] Cameron: disappear.

[00:09:21] Steve: I

[00:09:22] Steve: think,

[00:09:22] Cameron: Top three?

[00:09:23] Steve: I think so too, especially just on, if you see anything in the future, I mean, voice is the killer app of humanity, language is the killer app. The keyboard, the only thing that I think will save it for

[00:09:33] Steve: now is that you can work next to someone

[00:09:36] Steve: just on your keyboard when everyone’s talking.

[00:09:38] Steve: That,

[00:09:38] Steve: that’s a, that’s a tough thing. And I wonder if like, at some point we’ll circumvent where you just think it.

[00:09:43] Steve: And it, and it just somehow just takes that. But it is true, definitely. The keyboard, um, feels like its days are numbered.

[00:09:51] Cameron: Well, the thinking side of it is what Elon and others are working on with, uh, neural implants. Yeah, no, I, I think you’re right. Like if you work in an office, um, but, uh, people speak on the phone in offices, right? So if you have a headset and you’re just talking quietly to your

[00:10:08] Cameron: computer,

[00:10:09] Steve: Yeah.

[00:10:09] Cameron: probably don’t need to worry about it. If you’re out in public and you’re, you know, you’re on a plane or sitting next to someone, maybe, but

[00:10:15] Cameron: you know, I

[00:10:16] Cameron: haven’t

[00:10:16] Steve: You know what my favorite thing is, Cameron? My, my favorite thing out in public is when people

[00:10:23] Steve: talk to their phone on loudspeaker in public. That’s one of my favorite

[00:10:26] Steve: things. And it’s one of my dad’s favorite pastimes. That’s all I’m saying.

[00:10:31] Steve: He loves that. He’s like, let’s put it on speaker. We’re in a cafe.

[00:10:35] Steve: Sure.

[00:10:36] Steve: Everyone’s

[00:10:36] Cameron: yeah,

[00:10:37] Steve: great. And he holds it out here if people can’t see, but he just holds it out there. Like it’s some beacon of hope.

[00:10:45] Cameron: yeah, I’ve had my mother, she was staying with us for six or seven weeks over Christmas and, uh, you know, she’d have very loud telephone conversations with people sitting in the room with us. I’m like, what are you doing? Like, go to your, go to your room, go outside. Why are you having a very loud phone conversation when we’re surrounded by other people who don’t want to hear this?

[00:11:04] Cameron: It’s a weird, uh, generation

[00:11:06] Cameron: gap

[00:11:06] Cameron: thing.

[00:11:07] Steve: just pull me aside if you see me doing that.

[00:11:10] Cameron: Shoot me. All right, let’s get into the top three news items for the week, Steve. The one I wanted to start with is Sam Altman. Uh, over Christmas, I’m not sure, I don’t think we, uh, did a show after this, but Sam got fired. Then he got back. We still really don’t know what happened, uh, all this time later.

[00:11:26] Cameron: We did hear, he did do an interview in the last couple of days where somebody asked him about Ilya Sutskever’s status, and he basically said, I love Ilya. Don’t really know what his status is at this stage. I hope we’ll find a role for him. So, sounds like Ilya is probably out, out. If it hasn’t been resolved yet, I don’t know if it will be resolved.

[00:11:48] Cameron: But, uh, the main thing I wanted to say is Sam recently, like in the last week, spoke at a Y Combinator event, as people probably know. He used to run Y Combinator. I think he was, he had a startup in Y Combinator, I think, initially, and then he ended up running Y Combinator and then OpenAI sort of came along after that.

[00:12:11] Cameron: At this talk, he was, at this event, he was talking about all of the problems that GPT 4 has. Hallucinations, etc, and said they will all be resolved, pretty much all will be resolved with GPT 5. He’s now publicly, you know, it wasn’t that long ago when he was saying they weren’t even working on GPT 5, now he is talking openly about GPT 5.

[00:12:35] Cameron: as basically solving all of the problems of GPT 4. He sounds very

[00:12:41] Cameron: confident.

[00:12:42] Steve: big

[00:12:43] Cameron: I think he said, uh, oh, let me pull up the quote here. This is from somebody who was at the event. This is on X, formerly known as Twitter. Uh, Howie Xu. He’s a, he’s an AI, um, data executive and entrepreneur.

[00:13:00] Cameron: Howie said at Y Combinator W24 kickoff today, Sam suggested people build with the mindset that GPT 5 and AGI will be achieved relatively soon. Most GPT 4 limitations will get fixed in GPT 5. So, um, so we’re getting it second or third hand removed, there is a photo though of Sam giving a talk at this W24 event.

[00:13:29] Cameron: So, I don’t know. What do you make out

[00:13:32] Cameron: of that? I mean, I mean, Sam doesn’t tend to be a guy who says stuff about what’s coming down the pipeline that’s, uh, somewhat controversial. I mean, no, sorry, um, what’s the word I’m looking for? Conservative. No, he, he’s pretty conservative.

[00:13:54] Cameron: He’s not one, he’s not an

[00:13:56] Cameron: Elon

[00:13:56] Steve: He’s not an Elon who

[00:13:57] Steve: says there’ll be, you know,

[00:13:58] Cameron: of

[00:13:59] Steve: robo taxis by 2019. I’m still waiting for one to pick me up.

[00:14:04] Cameron: Yeah. Well, they, they do exist in the

[00:14:05] Cameron: US.

[00:14:06] Steve: Yeah. But yeah, there’s not a million robo taxis. And by the way, can I just point out when it comes to robo taxis, Uber have their own clocks. They have their own minutes because it says seven minutes and it’s always nine.

[00:14:16] Steve: And they

[00:14:16] Steve: count down one minute every three minutes. That’s how it works, in case anyone wants to know.

[00:14:21] Cameron: Uber time.

[00:14:22] Cameron: Yeah.

[00:14:22] Steve: call it Uber time. Uh, I

[00:14:24] Cameron: I’ve

[00:14:25] Steve: Yeah, well,

[00:14:26] Cameron: What does, what does he mean? What do you, what do you

[00:14:28] Cameron: think

[00:14:28] Cameron: here, man?

[00:14:29] Steve: AGI is a

[00:14:29] Steve: really big statement, um, I think, yeah, removing the hallucinations and making sure it gets simple things like maths right, is obviously really important.

[00:14:39] Steve: It feels like all of those little hallucination elements, and let’s call them quirks, are probably easy to fix. In terms of it being AGI, I just wonder if it’s going to do some stuff which is far more

[00:14:50] Steve: extraordinary with the multimodal AI. Obviously the Google stuff came out and a lot of people said that it was gamed and it wasn’t really working as well. But if they take another leap from where they did last time, from posting an image and it tells you what’s in the image, or image generation and voice, and putting all this multimodal stuff together in a way that, um, all of those pieces of the puzzle come together.

[00:15:16] Steve: You’d have to say that that would be an AGI, because in terms of intelligence that you see on, you know, horizontal ideas, whether it’s history, code, engineering, marketing, it already does that pretty well. If it can get that multimodal element just further along, as well as not having the hallucinations, you’d have to say that the capability that it already has is extraordinary.

[00:15:39] Steve: It’s actually just bridging these pieces, removing the nuances and bridging what it can already do. I don’t think it needs to be that much more extraordinary in terms of what it can generate. It’s actually the ability to generate things in a multimodal fashion without errors. If you just achieve that, and just that alone, that would put it pretty damn close to AGI now.

[00:16:02] Steve: I mean, they say that it’s, you know, an average IQ of 155 in every single topic already. If that’s not general intelligence, I don’t know what is. So for me, it’s the linking and the bridge between all the capabilities, and cross-referencing and going backwards and forwards, and giving instructions to create video, and then analyze video, and all this back and forth stuff, and then removing just the, the well-documented problems.

[00:16:27] Steve: And I’ll tell you what, it doesn’t seem like it’s outside of the realms of possibility real soon.

[00:16:35] Cameron: Well, I’m just drilling down into Howie Xu’s tweet stream here. He was there himself. Uh, he said, just attended the kickoff event, et cetera, et cetera. Key takeaways. He says, um, the OpenAI API will continue to be faster, more reliable and cheaper; however, there will always be a balance between performance and cost.

[00:16:57] Cameron: I know that Sam has also just, uh, said in the last couple of days that one of the big issues moving forwards is we need more power to drive all of the compute that AI is going to require. We’re going to need way more power generation across the planet. But the other thing that Howie says, that Sam says, is that it is not advisable to build companies that focus primarily on addressing current GPT 4 limitations.

[00:17:22] Cameron: Most

[00:17:22] Cameron: limitations will get

[00:17:23] Cameron: partially slash entirely fixed in GPT

[00:17:26] Cameron: 5. And you’re seeing that with the GPT Store,

[00:17:30] Cameron: which finally launched in the last week or so. A lot of those are just, uh, tack-ons trying to fix some of the limitations of

[00:17:38] Cameron: GPT 4, and you can already see how they’re just gonna get smoked when GPT 5 comes

[00:17:44] Cameron: out in probably this year, I’m

[00:17:46] Cameron: guessing.

[00:17:47] Steve: It does seem like, if you do achieve that AGI status, then for GPTs, which are kind of like coaching limitations out of the GPT that’s there, you wonder how well a GPT store will work or whether or not it will be needed.

[00:18:05] Steve: If the AI is smart enough to give you what you need,

[00:18:10] Steve: then how much of it needs to be preordained in a specific GPT?

[00:18:15] Steve: That I don’t know the answer to, but it seems like it’s not going to be as specific and needed as a lot of the apps were in the app store. It feels like its capability might be so vast that the requirements for specific GPTs might not even be there. And maybe he’s hinting at that. I’ve

[00:18:34] Steve: noticed

[00:18:35] Cameron: know, I think one of the justifications for a specific GPT that I can see, and I’ve tried to build some along this line, is where you train it on a certain dataset. Like in my case with QAV, with the investing side of stuff, I can give it a whole bunch of transcripts for our pod, from our podcast. And then you can train it to answer those.

[00:18:56] Cameron: So it’s going to have a

[00:18:57] Cameron: specific knowledge, specific knowledge base that the general GPT won’t have. Problem I’ve had is that it doesn’t do a very good job at this stage of, uh, being able to really be trained on those documents. It’s sort of fairly hit and miss in its ability to answer those. So the one case for it that makes sense, it doesn’t really work yet.

[00:19:19] Steve: the same thing. So I’ve built a GPT like you have, a Steve Sammartino AI, into which I put my books, my podcasts, my

[00:19:28] Steve: blogs, and then asked it to refer to the web. If it can’t find what it needs out of my own database, the first thing I notice is it’s always a lot slower to generate an

[00:19:36] Steve: answer because it’s reviewing the materials,

[00:19:38] Steve: which it, it refers

[00:19:40] Steve: to.

[00:19:41] Steve: But often, um, I can see it’s answered things that I know I haven’t got in my works. So it’ll revert to the web, um, which I’ve told it to do, but to try and keep it within the realm of what I’ve said. So it goes off-piste, uh, a bit. And this is, you’ve, you’ve raised a really good point: the future of GPTs isn’t giving, uh, OpenAI direction on how to answer something.

[00:20:05] Steve: I think the future of GPTs is putting new data into the database that wouldn’t otherwise be there, so you can give specific answers from your corporate realm or your information realm, which puts a boundary around what it’s learning from and how it spits out an answer for that. And if you’re a GPT developer or looking at doing that, then I think that’s The future task rather than trying to just curate the possibilities of what exists already in OpenAI’s database.

[00:20:33] Steve: It’s actually about adding more meat to the brain so that it draws on your specific experience or data that you put in there. I think that’s a really good point, Cam.

[00:20:42] Cameron: And I suspect it won’t be OpenAI’s LLM that people end up doing that on from a corporate perspective. Maybe small businesses will, but from a corporate perspective, they’ll do what I’ve been doing lately, which is install your own local large language model and train

[00:20:58] Cameron: that one.

[00:20:59] Steve: And you would see that Microsoft would do that. I mean, Microsoft are in the perfect position to come in and help curate and build bespoke AIs for their corporate clients. You see there’s a massive

[00:21:07] Steve: revenue stream from Microsoft who are already deeply engaged in 90 percent of corporations around the world.

[00:21:12] Steve: I

[00:21:13] Cameron: AWS. I mean, they’ll have, like, you can just spin up an AWS instance now running Llama or something.

[00:21:19] Cameron: You’ll be able to just run up your

[00:21:21] Cameron: AWS instance running an LLM, and it’ll be, you know, your private LLM. You’ll be able to feed it whatever you want to feed it, and then it’ll all be contained within a particular ecosystem that you have some level of control over that’s corporatized and firewalled and billable and all that kind of stuff.

[00:21:42] Cameron: Okay, Steve, what do you want to talk about

[00:21:43] Cameron: next?

[00:21:44] Steve: want to talk about the coffee robot, Cam. I’ve had a terrible coffee today, and I just want to know if I could have had a better one with a robot.

[00:21:52] Cameron: So, I saw this on Twitter. Brett Adcock, who’s the founder and CEO, I think, of a robotics company called Figure, put this out on January 7th. The Figure 01, which is an android-type robot that they’ve built, has learned to make coffee. Our AI learned this after watching humans make coffee. This is end-to-end AI.

[00:22:21] Cameron: Our neural networks are taking video in, trajectories out. They’ve got video of it on Twitter. So, it’s, uh, pod coffee. It’s not, uh, tamping down and scooping out ground coffee or anything that fancy. It’s a fairly simple process. But the key, as he says here, is they didn’t program it how to make coffee.

[00:22:43] Cameron: It watched a human. And in other places where I’ve read him going into detail on this, it took about 10 hours. It watched video of humans making coffee with pods, then it spent about 10 hours training itself how to get it right, self-correcting its mistakes, until it was able to achieve the outcome.

[00:23:05] Cameron: Now, you might think 10 hours is a long time for a robot to learn how to put a pod in a

[00:23:10] Cameron: coffee machine and press a button, and that’s fair

[00:23:13] Cameron: enough, but here’s the thing. Imagine you have thousands of

[00:23:17] Cameron: robots watching videos of humans doing things, then spending 10 hours learning how to

[00:23:23] Cameron: replicate that. Once they’ve built the model for how to replicate it, they can share that model with a million, million other robots immediately that all know how to do it.

[00:23:34] Cameron: Instantly.

[00:23:35] Steve: So this

[00:23:36] Cameron: a world

[00:23:38] Steve: this is huge, Cam. And it almost gets lost in the nuance of, oh, isn’t it cute? A coffee made a robot. But what you’ve just said there

[00:23:46] Steve: is

[00:23:46] Steve: huge.

[00:23:46] Cameron: made a

[00:23:47] Cameron: robot?

[00:23:47] Steve: So, yeah, well, it did. It was a special coffee. Robot made a coffee. Thank you for the pickup. It’s really huge. The fact that it

[00:23:56] Steve: learnt from video is, is really extraordinary.

[00:23:59] Steve: Mind you, it

[00:24:00] Steve: took 10 hours. It takes a human about seven years to make a coffee, ’cause until you’re about seven, you’re probably not going to deal with anything hot and electrical anyway. So, I mean, it’s easy to forget the context: yeah, you can teach an adult in an hour, but it takes seven years for, you know, a kid to learn a whole lot of stuff.

[00:24:21] Steve: So, um, that, that’s not a throwaway statement; like, to learn something in 10 hours is really, really quick. And of course it’s going to get quicker and it’s going to get better. And then it’s going to have a database of learning coffee, and then that sort of, you know, goes on to cooking bacon and eggs, and it just flows on horizontally.

[00:24:41] Steve: I saw something similar, which was, um, from Elon Musk’s robot, uh, company, Tesla, the robotics division there, one of its bots folding clothing. Again, it was doing it a little bit slow and kooky, but it’s like, oh, this isn’t like, oh, come back to us in three years. This is like, come back in three weeks, and then three weeks after that.

[00:25:04] Steve: And this is where the exponential improvement is easy to forget. Um, I think robots doing many, many things and highly dexterous tasks. So making an iPhone, right? If it can do that, then foreseeably it can watch and do any task. And, and I think this for high cost labor markets is really, really interesting.

[00:25:27] Steve: Uh, because all of a sudden, you know, robotics, we know, can help in large-scale manufacturing in the high cost labor markets. But in a lot of those human areas where there’s a lot of people and bodies in manufacturing through China, this potentially has, uh, an opportunity to, to disrupt the Silk Road and that low cost labor market modality as well. And that also circles back to the importance of the chip wars and access to raw materials, because, you know, a lot of things can come back to high cost labor markets. And again, it further ensconces, uh, maybe, wealth disparity. You know, what happens to, to low cost

[00:26:07] Steve: people in highly dexterous, um, nuanced tasks that currently only humans can do?

[00:26:16] Cameron: Well, just today, in fact, Brett Adcock from Figure announced that they’ve just signed a commercial agreement with BMW to deploy general purpose robots in automotive manufacturing environments. Figure’s humanoid robots enable the automation of difficult, unsafe, or tedious tasks throughout the manufacturing process, which in turn allows employees to focus on skills and processes that cannot be automated, as well as continuous improvement in production efficiency and safety.

[00:26:43] Cameron: He said that they’re hoping to start rolling these out at BMW in 2024, the humanoid robots in the BMW manufacturing facility. Now, you know, of course, robots have been used in manufacturing and warehousing for a long time now, but we’re talking about humanoid robots, as they use these sorts of commercial deals

[00:27:09] Cameron: to scale up, you know, their manufacturing process. Eventually the price point is going to come down and you will see them get involved in more and more industries. And eventually, you would imagine, in the home, in elder care facilities, et cetera, et cetera. It’s interesting. I don’t know if you saw, Bill Gates, my old boss, has a podcast

[00:27:29] Cameron: now, and it’s called Unconfuse Me, and he had Sam Altman on, and they were talking about what’s going on with all of this stuff. And Sam made an interesting statement. He said, 10 years ago, the general consensus in tech circles was that the order in which technology was going to replace human jobs was first robotics, then knowledge workers, and lastly, if ever, creative

[00:28:02] Cameron: workers, because that was seen to be too

[00:28:05] Cameron: difficult for computers to ever really

[00:28:07] Cameron: replace.

[00:28:08] Cameron: He said, what we now know is that the opposite is actually

[00:28:11] Cameron: true. It’s creative workers are first, knowledge workers will be next,

[00:28:16] Cameron: and, you know, manual labor being replaced by robotics will probably be

[00:28:21] Cameron: last, only because of

[00:28:24] Cameron: the cost in building these

[00:28:26] Steve: and, and yeah, cost of building, but also non-repetitive tasks have been more difficult for any AI or robotic system to learn. The robots that we have are doing highly repetitive tasks in, you know, car manufacturing and so on. Uh,

[00:28:39] Steve: if you

[00:28:39] Cameron: Uh,

[00:28:40] Cameron: now,

[00:28:40] Steve: digitally. Yeah. Yeah. But if you can teach something visually and it can do non-repetitive tasks, it is interesting that humanoid robots, uh, coming out, uh, are in a way, I guess, let’s call them general purpose robotics. And it was when we had the general purpose computing revolution that, uh, it really opened up the use of computers, with the smartphone and, and,

[00:29:05] Steve: also, you know, the personal computer. It was the general purpose nature of computers, rather than the old big banking systems which were designed to do one thing, that changed everything.

[00:29:12] Steve: And that pattern remains true: general purpose will change everything. And I do really feel like the overlap between AGIs and robotics, and soft robots, eventually they’ll have to change their structure to be less dangerous and have, you know, soft exoskeletons, is going to be a dramatic shift that I think we’ll visit in the double dive.

[00:29:35] Cameron: You gotta be crazy, man. The last story I wanted to touch on today: the CEO of MidJourney. Uh, for people who don’t know, MidJourney is one of the image generation apps. Um, I’ve been using it a lot. MidJourney 6 only came out a couple of weeks ago. It’s a really astounding leap, way above DALL·E 3 and where MidJourney 5 was, particularly in the photorealism of what it creates.

[00:30:01] Cameron: Um, in a live chat in the last week, MidJourney’s CEO said, We’re gonna build a lot of stuff this year. I think we’ll build more stuff than I’ve ever built before. By the end of 2024, hopefully, we have real time open worlds.

[00:30:20] Cameron: Now, it’s a little bit unclear what he means by that. People are referring to it as the holodeck.

[00:30:27] Cameron: Uh, ha ha ha. Like an open world... My boys have gone to LA for a couple of months and they gave me their PlayStation 5, and I’ve been playing The Last of Us Part One, some of these more advanced, more recent games. I don’t know if you’ve touched these things, man, but the realism and the graphics on a big screen TV running a PS5 are kind of insanely good. But, you know, a real time open world suggests that you won’t just use MidJourney to create an image or a couple of seconds of video, like some of the other tools that are out there today. You’ll be able to create a real time open world just by giving something like MidJourney, uh, a paragraph of text, and it’ll build a real time world that an avatar can walk around in. Is that what you take away from this statement?

[00:31:24] Steve: Yeah, it would be. I mean, I’m imagining you need some sort of a VR device to be able to do that, or are you just talking about a flat screen that you’re navigating through?

[00:31:35] Cameron: There’s another quote from him here that is, Mid Journey isn’t a really fast artist, it’s more like a really slow game engine. The future isn’t one image a minute, but 60 frames per second, full volumetric 3D.

[00:31:49] Steve: So, I mean, the idea of being in a holodeck, uh, for the non-Star Trek fans, is really compelling. And, you know, I’m not sure if you would be on a travelator floor. I don’t know how the holodeck works in Star Trek; you just walked into the room and it was a new space. So I don’t know how you replicate the physicality.

[00:32:10] Steve: I know that your body can move. Maybe you’re in like a hanging haptic suit, where you’re sort of hanging from the wall and your arms and legs are moving as if you’re... I don’t know. But I imagine that it’s achievable audibly and visually and all of that, and if you could somehow make your body feel as though it’s moving as well, that would be quite compelling.

[00:32:35] Steve: And I was thinking about this driving here, when I was, you know, thinking in my head what we’re going to talk about. If you think kids get addicted to gaming now... I think there isn’t a person in the world who wouldn’t be tempted, if you could live a fantasy life, the thing that you’ve always wanted to do, whether it’s, you know, skiing down a certain mountain in powder snow, or surfing, or whatever you’re into, or whatever other fantasies you might have. Now let your own imagination take that: if you could have or do that...

[00:33:12] Steve: That would be so compelling. It would almost be addictive. What is that movie with Leonardo DiCaprio where he goes into the dream sequences?

[00:33:19] Cameron: Inception.

[00:33:20] Steve: Inception. You know how people got addicted to living in that fake world? Like they’re in a little opium den, sort of addicted to this.

[00:33:30] Cameron: Mm-hmm.

[00:33:31] Steve: I could see this as people just living in a world of fantasy where you almost might not even exit. I mean, if this stuff could happen this decade, that’s a pretty extraordinary shift.

[00:33:45] Cameron: Well, that’s sort of the world that the cyberpunk authors have been predicting since the 80s and early 90s. You know, Neuromancer, Ready Player One, and more recent

[00:33:57] Steve: Yeah, yeah,

[00:33:58] Cameron: 20 years ago now, where you strap on goggles and put on your, uh, haptic gloves, and, you know, you just navigate in a fully immersive 3D world. David Holz, by the way, is the MidJourney founder and CEO. Um, you know, that’s kind of what he’s suggesting that we might have this year, which is kind of crazy. But even if it’s not fully realized by the end of this year, anything even remotely close to that... Plus, Apple’s Vision Pro comes out this month. My boy Taylor, who’s in LA at the moment, is going to buy one and bring it back.

[00:34:35] Cameron: Um, to play with. You know, imagine a Vision Pro with a fully immersive world

[00:34:40] Cameron: that you can create of your own.

[00:34:43] Cameron: I mean, it’s mind blowing. By the way, speaking of robotics, David Holz, the MidJourney guy, also tweeted a couple of days ago: we should be expecting a billion humanoid robots on Earth in the 2040s, and a hundred billion robots throughout the solar system in the 2060s.

[00:35:02] Steve: Yeah. I don’t know what to say. A billion robots. Well, they’re going to be hard to build. I’m thinking about the logistics: what sort of raw materials do we need? How many chips? All of that. But anyway, I mean, it’s a big, big number. It’s a big number and

[00:35:15] Cameron: Well, that’s my news stories.

[00:35:17] Steve: yeah. Wow. Huge.

[00:35:19] Cameron: Let’s move into the Double D, Steve, the Deep Dive. What did you want to talk about?

[00:35:23] Steve: I just wanted to talk about the industrial revolution. I know that you and I have spoken about this a lot. Um, the thing that is evident is that when that happened, it took, you know, a couple of hundred years to happen. I don’t know, what do we say, sort of 1650 was when it maybe kicked off, around about that time.

[00:35:43] Steve: Yeah. The idea that everyone moved from farms to cities, from agriculture to various forms of manufacturing and industrialization. It really changed work like super radically. And I don’t think we could have even imagined then what we would do. You know, the idea that, um, someone’s going to make advertisements for TV, for marketing, for cars.

[00:36:06] Steve: There’s so many layers that get to you being an advertising executive, uh, in New York City, a Mad Man, you know, on Madison Avenue. There are so many layers that get to that, that it would be very, very difficult to imagine how that might happen in, you know, 1850, before anyone has a car, right? It feels like we’re at the dawn of that now. We’ve talked about soft robotics today. We’ve talked about ChatGPT 5 solving all of the problems of 4, and having an AGI that’s multimodal. We’ve talked about holodecks and us having, you know, immersive reality which has no noticeable difference to actual reality. In an AI revolution, let’s say a decade from now, you know, because it takes a little while to adapt and companies are slow and a bit conservative, you have all of this capability.

[00:36:54] Steve: Let’s say, you know, a decade from now. Seriously, it’s difficult to imagine what work goes to. Now, I’m a non-believer that we’ll never work, because that’s been touted forever and a day and it just never happens, and I think humans need to do something. But what does work go to, from this to that? Like, when AI gets radical, and we’re talking beyond efficiency, because so often a new technology is just a more efficient version.

[00:37:23] Steve: Email is a more efficient version of mail, and a car is a more efficient version of a horse and cart. When an AI can do everything a human can do, differently and better, and there’s almost nothing it can’t do, and everything can be done at light speed, what happens to our lives? What happens to the work we do?

[00:37:43] Steve: What happens to the revenue flows? Like, UBI is something I’m a non-believer in. Um, if you have a handful of companies that basically control everything, and everything can be done almost for free, I’m just wondering if you have any ideas on what work becomes. Remembering that 90 percent of people worked in agriculture before the industrial revolution, and now it’s less than 1 percent of humanity. And all of those things that happened were impossible to predict at the start of the industrial revolution, and my contention is it’s impossible to predict what we’ll do.

[00:38:18] Steve: So I was just wondering if you have any ideas on this.

[00:38:23] Cameron: No, I honestly don’t. I mean, I agree with you. I think, in terms of Maslow’s hierarchy, humans need to feel useful.

[00:38:35] Steve: Yes,

[00:38:36] Cameron: We need to feel like we’re fulfilling something, whether it’s an artistic ambition or... You know, let’s leave aside working for money for a second.

[00:38:51] Cameron: Let’s assume a world where we either have some form of UBI, or we all have a nanofabricator of some sort in our house and a couple of robots doing stuff for us, building stuff, making whatever we want, download a blueprint.

[00:39:07] Steve: At the atomic level, just...

[00:39:09] Cameron: At the atomic level, or at, you know, a more macro Newtonian level. But you have a robot that can make your clothes, can build stuff for you in the, you know...

[00:39:18] Steve: Earl Grey,

[00:39:19] Cameron: Yeah. Well, yeah, but, you know, I mean, maybe each suburb has a massive factory staffed by robots that are just making everything that you need. And it’s just funded by the government or by people, you know. You just have this massive... let’s say our governments have factories scattered around the country.

[00:39:46] Cameron: Every, every city has its own factories, staffed with thousands and thousands of humanoid robots that are just making stuff that you need, that you can’t make yourself at home with your nanofabricator, they’re just making stuff. Um, and we, we have a world where you don’t need to earn money anymore. You don’t need to do things to earn money because everything’s just available.

[00:40:09] Cameron: It’s a Star Trek economy, everything is just available that you need. What do you then do for fulfillment? For your life. And we’re also going to have, we’re going to be living 150, 200 years. Uh, we’re going to have to deal with problems of, you know, overpopulation, all that kind of stuff, people will move off planet, whatever.

[00:40:26] Cameron: But let’s say we solve all of those issues. You know, I’ve already created a life for myself, and I know you have too, where we do whatever the hell we want. You know, we do need money, me more than you, because you were a lot smarter than I was when you were in your twenties. But we have crafted lives for ourselves where we do things for work that are really just hobbies that we manage to get paid for, right?

[00:40:54] Steve: Yeah. I mean, I really like

[00:40:55] Steve: what we do. It’s good. It’s good stuff.

[00:40:57] Cameron: You think about the future for a living; you’d be doing that anyway. I read books about history and philosophy and technology, which I’d be doing anyway. I just get paid to do it now. I used to do it before podcasting was invented. And then I was like, oh, I’ll just take what I do anyway and get paid for it. I think everyone will find those sorts of things. People will be able to find the things that they love to do and dedicate themselves to doing that. You remember, what’s the Japanese thing? The three concentric circles that we used to talk about in the 2000s? Um, uh, Kaizen? No, not Kaizen. There was another Japanese thing.

[00:41:37] Steve: was the, that was the efficient manufacturing thing.

[00:41:39] Steve: I’m not

[00:41:40] Cameron: Well, whatever it was, the basic idea was, and I kind of, you know, designed my life around this in my mid 30s. Find the thing that you really enjoy doing that adds value. That you can get

[00:41:54] Cameron: paid for.

[00:41:55] Steve: Yeah. That’s

[00:41:55] Cameron: you can find where those, if you can find where those three things interact, you’re going to have a great life.

[00:42:02] Cameron: You know,

[00:42:03] Steve: That’s really good advice. And that’s, I

[00:42:05] Cameron: I was talking,

[00:42:06] Steve: figured that to an extent.

[00:42:08] Cameron: yeah, I was talking to my Sifu at Kung Fu, the husband of, the husband and wife that run our Kung Fu school a couple of days ago about

[00:42:13] Cameron: this. He and his wife did the same thing. He said, you know, we,

[00:42:16] Cameron: we took a big hit to our income when we decided to open a Kung Fu school. They both had good jobs.

[00:42:22] Cameron: They decided to

[00:42:23] Cameron: quit that. He said, but we traded, you know, jobs that we didn’t really like for doing something that we love doing: helping people, teaching them Kung Fu. We don’t make as much money as we used to, but we’re good at it, people want it, and, you know, we get paid

[00:42:37] Cameron: for it.

[00:42:38] Steve: It’s the old idea. It’s

[00:42:40] Steve: like, uh, you know, what am I gonna do? I’m gonna work really hard, get a lot of money, do a job I hate. And then, you know, when I’m 65, I’ll sit in a chair and enjoy. It’s like

[00:42:48] Steve: back to front. Just do what you enjoy. And if you’re enjoying what you do, how much money do you need? It really is that, that

[00:42:52] Steve: one. And yeah, the studies are numerous on how much money do you need? You know,

[00:42:56] Steve: happiness doesn’t improve after a certain number, you know,

[00:43:00] Steve: and, and that number I guess, changes over

[00:43:01] Steve: time. But it is

[00:43:02] Cameron: So like.

[00:43:03] Steve: I’ve,

[00:43:03] Steve: I’ve.

[00:43:04] Cameron: I think if it works out well, everyone will just get to do whatever brings them joy and hopefully adds value to other people around them, which is, you know, what I have

[00:43:12] Cameron: found is

[00:43:13] Cameron: actually part of fulfillment. If you’re doing something that’s just for you, it has

[00:43:16] Cameron: limited utility. If you’re doing something that helps others,

[00:43:20] Cameron: it’s more fulfilling, I think.

[00:43:22] Cameron: I think that’s probably true

[00:43:23] Cameron: for

[00:43:23] Cameron: most people.

[00:43:24] Steve: And I think the one thing that we chase, and we do it by proxy, is recognition. So often position and money and asset accumulation are a proxy for recognition. Some people want power, but I think recognition is the really powerful human one. Does that make sense? You want to have recognition that, and again, it comes back to what you’re saying, you’re creating value for society. But people use these proxies of position and power and money and asset accumulation instead of actually just being recognized for doing something that’s valuable.

[00:43:55] Steve: And maybe if we have access to all of the resources we need, you know, food and otherwise, um, then we’ll, we’ll do more things that people recognize that just create value for society because the economic value part of the equation is not as important as it once was.

[00:44:13] Cameron: When you think

[00:44:13] Cameron: about, what do we

[00:44:14] Steve: recognition is really important. You, I mean, you want to be recognized as being knowledgeable. You want to be recognized when Cam really knows his stuff with the Cold War, Cam. You know, and I want to be recognized that, like, Steve gives good voice when he’s on stage and he gives good commentary.

[00:44:26] Steve: Like, the recognition is a really, really important and valuable part of it. Whereas just pursuing your own goal... like, if I go out and just pursue surfing all day because I like that, it’s pretty shallow and thin, because it’s just for me. It’s just, you know, a hedonistic pursuit that doesn’t really create value for anyone else.

[00:44:43] Cameron: Well, the things that make me happy is when I get an email from people saying, Hey, you know, I’ve listened to your podcast for 20 years and, you know, it’s really given me a lot of, uh, it’s given me a passion for history and I’ve learned a lot and I’ve had a lot of laughs. Uh, it’s

[00:44:59] Cameron: really made a positive impact in my life, or I watched your film and it, and it, you know, got me out of religion and I thank you for that, or I read your book and that helped me deal with psychopaths in my life, or the three illusions book.

[00:45:10] Cameron: It’s given me, when you get emails from people saying. Hey, that piece of work that you did really changed my life. That’s, that’s

[00:45:18] Steve: Yeah, I get that

[00:45:19] Steve: occasionally. I get that. I get someone who said they’ve read my book

[00:45:21] Steve: and you, you, I had some people say, one guy, he goes, I read your book. I

[00:45:24] Steve: hopped on a plane in Perth and I read your book, The Lesson School Forgot. And I was on my way home to South Africa and I’m a mechanic and I’ve always hated this job.

[00:45:33] Steve: And I got off the phone, I

[00:45:35] Steve: got off the plane, I bragged my boss and quit after I read your

[00:45:38] Steve: book. I was like, holy fuck, I didn’t do that. And then I thought, well, You know, it wasn’t me. He already wanted to do that and, and, and, you know, I was the conduit. Um, but when you get those emails, I get them too every now and again, where people say, man, it really had an impact on me.

[00:45:53] Steve: It’s just, it gives you a feeling in your chest that money cannot buy, doesn’t it?

[00:45:58] Cameron: Well, I mean, that’s if you

[00:46:00] Cameron: do the work because you want to make people’s

[00:46:03] Cameron: lives better and then? people contact You and say, Hey, you made my life better. You’re like, okay, well, it was worth the four years that I put into that or the 20 years that I put into doing podcasts or whatever. Anyway, moving right along, Steve, technology time warp.

[00:46:16] Cameron: Um, I want to go way, way back in the technology time warp, Steve, to the year 2018.

[00:46:24] Steve: I don’t even think I was born then, Cam. I don’t think I was born then. What happened way back then? You better tell us.

[00:46:33] Cameron: Steve. I read this the other day. There was an article in the New York Times written by Cade Metz, November 18th, 2018, called finally a machine that can finish a sentence. Completing someone else’s thought is

[00:46:47] Cameron: not an easy

[00:46:48] Steve: Most of us have that. Wait, most of us have that. We have a partner. There’s a machine that can finish a sentence. Anyway, keep

[00:46:54] Steve: going.

[00:46:54] Cameron: yeah, Completing someone else’s thought is not an easy trick for AI, but new systems are starting to crack the code of natural language. In August, researchers from the Allen Institute for Artificial Intelligence, a lab based in Seattle, veiled an English test for computers. It examined whether machines could complete sentences like this one.

[00:47:15] Cameron: On stage, woman takes a seat at the piano. She A. Sits on a bench as her sister plays with the doll. B. Smiles with someone as the music plays. C. Is in the crowd watching the dancers. D. Nervously sets her fingers on the keys. For you, that would be an easy question, but for a computer, it was pretty hard.

[00:47:35] Cameron: While humans answered more than 88 percent of the test questions correctly, the lab’s AI systems hovered around 60%. Among experts, those who know just how difficult it is to build systems that understand natural language, that was an impressive number. Then, two months later, a team of Google researchers unveiled a system called BERT.

[00:47:55] Cameron: Its improved technology answered those questions just as well as humans did, and it was not even designed to take the test. BERT’s arrival punctuated a significant development in artificial intelligence. Over the last several months, researchers have shown that computer systems can learn the vagaries of language in general ways, and then apply what they have learned to a variety of specific tasks.

[00:48:19] Cameron: Now, it goes on to quote, uh, fellow Aussie Jeremy Howard, the founder of fast.ai. They’ve got a lab based in San Francisco where they’ve been doing a lot of AI work. I think he’s from Brisbane originally, Jeremy. Um, then they say: a system built by OpenAI, a lab based in San Francisco, analyzed thousands of self-published books, including romance novels, science fiction, and more.

[00:48:44] Cameron: Google’s BERT analyzed these same books plus the length and breadth of Wikipedia. OpenAI’s technology learned to guess the next word in a sentence. BERT learned to guess missing words anywhere in a sentence. And it goes on and on and talks about where the technology was at. In the weeks after the release of OpenAI’s system, outside researchers applied it to conversation.
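[Editor's note: the contrast Cameron reads out, OpenAI's system guessing the next word versus BERT guessing missing words anywhere in a sentence, can be made concrete with a deliberately tiny, hypothetical sketch. This is nothing like either lab's actual code; real models replace the counting table below with a neural network trained on billions of words, but "guess the next word" has the same basic shape:]

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which across a training corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def guess_next(follows, word):
    """Guess the next word: the most frequent follower seen in training."""
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# Tiny toy corpus: "the" is followed by "cat" twice and "mat" once.
model = train_bigrams("the cat sat on the mat and the cat ran")
print(guess_next(model, "the"))  # prints "cat"
```

[BERT's masked objective is the same idea run in both directions: rather than only counting what comes after a word, it also uses the words after the blank to guess what belongs in the middle.]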

[00:49:05] Cameron: An independent group of researchers used OpenAI’s technology to create a system that leads a competition to build the best chatbot, a competition organized by several top labs, including the Facebook AI lab. Um, so, uh, the thing that blew my mind about this, Steve, is it was 2018,

[00:49:24] Steve: Yeah.

[00:49:25] Cameron: 2018. It was basically five years ago, this article came out,

[00:49:30] Cameron: where they’re like, Hey, these crazy

[00:49:33] Cameron: kids doing this AI stuff have figured out how to build technology that

[00:49:37] Cameron: can finish a sentence.

[00:49:39] Cameron: Within Four years of this. It came out in November 2018. GPT 3 came out November

[00:49:47] Cameron: 2022. Within four years of this article, they blew the doors off the whole thing.

[00:49:54] Steve: Massively, like extraordinarily, because it sounds so rudimentary. It’s almost like, oh, isn’t it cute, it can finish a little sentence about a girl sitting at a piano. And the words chat and chatbot are a bit deceptive, because, yes, conversation is highly intellectual, but it just feels a bit limiting and thin next to the true capability. Large language models is, I think, you know, better verbiage to describe what it is.

[00:50:25] Steve: And the fact that it’s blown the doors off in these four years and our minds are totally blown... Now, what we need to do is put on our Ray Kurzweil hat here and think about the law of accelerating returns. If that has occurred in four years, then with what we have spoken about today, with MidJourney and ChatGPT 5 and soft robotics, I think we’re going to have our minds totally blown, uh, in the next four years.

[00:50:51] Steve: And it’s really close to the Kurzweil idea of the singularity; sort of 2028 was kind of one of his numbers. And I’ve got to tell you, it feels almost incomprehensible, as it is incomprehensible, because if it can do everything we can do now, I just don’t know what it’s going to do. And I revert back, and it might be a nice way to sort of close this out, to the idea that AI may well save us from ourselves.

[00:51:17] Steve: It may well be able to give us, you know, geopolitical diplomacy, and the ability to cure disease, and find new ways to generate energy without ruining the planet and using up resources. Because if it can do all of this human stuff now, what will it do that is beyond human comprehension or capability?

[00:51:39] Cameron: Well, the end of this New York Times article is kind of classic. It says But there is reason for scepticism that

[00:51:45] Cameron: this technology can keep improving

[00:51:47] Cameron: quickly because researchers tend to focus on the tasks they can make progress on and avoid the ones they can’t. Said Gary Marcus, a New York

[00:51:55] Cameron: University psychology professor who has long questioned the effectiveness of neural networks.

[00:52:02] Cameron: These systems are still a really long way from truly understanding running pros, he said. By a long way, he meant

[00:52:10] Cameron: four

[00:52:10] Steve: Four years! A long way.

[00:52:12] Steve: Well, and even understanding, that whole question of... it doesn’t matter if it understands. What matters is what it delivers. The understanding is irrelevant. And I still remember one of the most profound things my daughter ever said. She was like four or five, and she said some long, articulate word that you wouldn’t expect in a four-year-old’s vocabulary. I said, do you even know what that word means? And she said, no, but I know where it goes.

[00:52:38] Cameron: Yeah. That’s it,

[00:52:40] Cameron: right?

[00:52:41] Steve: That’s exactly it.

[00:52:44] Cameron: And if you know enough about where it goes, you can infer what it means, if you think about it.

[00:52:51] Steve: The inference and the inference is where the value comes from. It’s not necessarily understanding. Understanding is nice, but it’s not necessary. Certainly when it comes to computational intelligence.

[00:53:01] Cameron: Well, that’s where we’ve come from in four years, Steve. Uh, well, no, it’s been five years since that article; ChatGPT hit four years after it, and we’re a year after that. And yeah, it’s just blowing my mind still, every day. I mean, the stuff that I’ve been doing with it since we last spoke has just been astounding. It’s still crazy to me, all the software that I’ve written, the code that I’ve done, how it’s sped things up. Yeah, anyway, man, that is all I’ve got for today. 49 minutes, Steve.

[00:53:30] Steve: That’s good for us. It was good. We said

[00:53:32] Steve: half a, but 49 was real good. I knew, I knew it wasn’t going to be half an hour, but we have achieved the objective. It’s like setting a budget for a company, right? You set a budget and if we get

[00:53:42] Cameron: like if I, if I tell my wife we need to leave by 3, we’ll get out of here at 3. 30. You know, so it’s, if I really want to

[00:53:49] Cameron: leave at 3. 30, I’ll

[00:53:50] Cameron: say

[00:53:50] Steve: in. Build the FED into the supply chain.

[00:53:54] Cameron: Thanks Steve. Have a great week, mate. I’ll talk to you

[00:53:56] Cameron: soon.

[00:53:56] Steve: Thanks, Cam.