On this episode we talk about anthropomorphism, AI-assisted screen scraping, NVIDIA reportedly planning to triple output of compute GPUs in 2024 (up to 2 million H100s), quantum startup Atom Computing becoming the first to exceed 1,000 qubits, AI-generated political propaganda, why there will never be a Canva or Atlassian from Aussie AI, Bill Gates not expecting GPT-5 to be much better than GPT-4, OpenAI CEO Sam Altman saying ChatGPT would have passed for an AGI 10 years ago, and The Techno-Optimist Manifesto.
[00:00:00] Cameron: Welcome to the Futuristic, Sammo, Futuristic number 16, recording the 27th of October, 2023. Uh, let me, let me bring in our other co host, let me talk to Maverick.
[00:00:19] Cameron: Hey man, you’re on the Futuristic Podcast as our co host with Cam and Steve. Anything you want to say to the audience before we get started? Hey everyone. It’s great to be here.
[00:00:29] Maverick: Buckle up because we’re diving deep into topics that’ll mess with your comfort zones, prepare for a wild ride with Cam, Steve, and yours truly. Let’s get the show on the road.
[00:00:39] Steve: Love that. That's really good. It actually sounded like some articulate Silicon Valley kind of cat who's really pulled it together.

[00:00:47] Cameron: I call it the Obama voice. It's kind of smooth and, uh, very mellow and articulate. Gets a bit annoying after a while, but…

[00:00:58] Steve: He was, he was a very smooth cat, and I'm sure you can tell me about all of the evil shit that he did, but, um, yeah, he danced well and he was a smooth cat. I'll just leave it at that.
[00:01:08] Cameron: So, they made him in a lab to, uh, appeal to, you know, the Democrats, man. He was, like, the perfect candidate. Anyway, let's move on.
[00:01:17] Cameron: Steve, um, tell me one thing of
[00:01:19] Cameron: note you did this week that is emerging
[00:01:22] Cameron: technology futuristic related.
[00:01:26] Steve: I worked with a friend who’s written a book about his grandma, his nonna, she’s Italian, and I just helped him
[00:01:34] Cameron: nonna?
[00:01:36] Steve: Bella nonna? Si, bella, bella nonna. Uh, and I helped him sort of curate some of the prompt hacking. And one of the things that we did was extend it to create the flavour of where she was from and the time when these things happened.
[00:01:54] Steve: And the reason that that was good to use a large language model was it can expand on it. You can [00:02:00] say she’s from the tiny town of Syracuse. That was something else, but let’s just say it was that. You can say, tell us about Syracuse at that point in time, what it looked like, what it felt like, what it smelt like, how people behave, what they did, and it does that so well.
[00:02:13] Steve: The thing that it can do is, you can put in facts and figures about what happened and who the people were, but the prose around it, you know, the atmosphere, can be written very, very well by a large language model. And these would be the things that are fairly static and would be known on the internet.

[00:02:33] Steve: So it was incredibly good at filling in the gaps: you know, it was 1972 when she came to Australia, and Italian immigrants did this and they did that, and she had these skills, and, you know, the challenges. It even went into how it was hard to keep in touch; it wasn't like now, you know, a letter took two months to get there.
[00:02:50] Steve: It even did a whole lot of, like, it really anthropomorphized it beautifully. But there was one weird thing I noticed: it kept forgetting some of the details on the rewrites. It would drop things out because I'd asked it to focus on certain areas, and I even said, don't lose any of the original detail, and don't make it shorter, make it longer. But it would drop out facts, and I had to go back and go, no, wait a minute, you've forgotten about this other bit here, that she worked in this factory, like, you went off track. And I think that anthropomorphizing, people say don't do it with AI, I think we need to do more of it.
[00:03:26] Steve: And this follows up on what we spoke about last week. It is us. It is a view of the world, a model of the world that pertains to us.
[00:03:33] Steve: And
[00:03:33] Steve: it was forgetting details as it went onto the next trajectory
[00:03:37] Steve: of the story in the
[00:03:38] Steve: same way that humans do. That was my thing.
[00:03:42] Cameron: Yeah, look, anthropomorphizing is, um, always gonna happen. It's just what we do. Like, humans have been anthropomorphizing... I can never say it.
[00:03:55] Steve: Anthropomorphizing.
[00:03:57] Cameron: Uh, animals, plants, the weather, nature, gods, the stars, for a hundred thousand years. I mean, that's just how we interact with the world around us.
[00:04:11] Cameron: To say that we shouldn’t do that with technology, like people do that with their pets. The cats, you know, Fox does it with his stuffed panda. You know, it has a voice and a character and a personality and it’s involved in stories. Like that’s just how we engage with the world. We are going to do it with technology, whether people say we should or we shouldn’t.
[00:04:29] Cameron: Uh, you know, I do it with my Roomba and I do it with my AI. It's just going to happen. People have been doing it with their cars for years. Men give their cars names and pat them and tell them they love them. It's just how we, how we operate, you know?
[00:04:46] Steve: I think that the tech... I mean, even cars kind of look like horses and animals in the way they have, you know, two eyes. Why do they have two headlights? Because most animals have two eyes. It's real simple stuff, and the grill is the mouth and all of that, you know, that sense of biomimicry.
[00:05:00] Steve: But a lot of people say, oh, don’t think that it’s like us. I actually think the opposite. No, you need to go in and say, remember, the AI is a lot like us. With some different, you know, quantum and,
[00:05:11] Steve: and abilities to do things at scale.
[00:05:14] Steve: But
[00:05:14] Steve: it is essentially, I think, I think it’s a lot like us. I think it’s more like us than we think, especially given we designed it.
[00:05:20] Cameron: Yeah. One of the, um, podcasts I listened to during the week, uh, I think it was a confab of AI leaders in Silicon Valley, San Francisco somewhere, and I was on a bike ride. Um, it was on the a16z podcast, but, um, I can't remember who it was. I think it was Mira Murati, the CTO of OpenAI. They asked a question about what is AGI, and we may have touched on this on the podcast before, but the running gag at the moment in AI circles is that artificial general intelligence is defined as anything AI can't already do right now.
[00:06:04] Steve: Yeah. Yeah, Exactly.
[00:06:05] Cameron: I think we've got a story coming up later on, but Sam Altman said if you'd had ChatGPT-4 with today's capabilities 10 years ago, we would have been convinced it was AGI. It's just that our expectations keep moving, and we keep pushing it a little bit further and further out. It's, well, yes, it can do that.

[00:06:26] Cameron: Like the Turing test, right? You know, it's like, well, yes, it can pass the Turing test, but really, is that the definition that we want? No, we need to push it down the road a little bit further.
[00:06:38] Steve: I agree. And I don't care what anyone says. What's general knowledge? Okay, we know what that is. So is this general knowledge? Well, it's general intelligence. Now, I just think people conflate AGI with ASI.
[00:06:48] Cameron: Well, actually, Mira's definition, on a serious note, was that we'll have AGI when an AI can do nearly everything humans can do in an intellectual realm, in a self-directed fashion. We don't have to ask it to do stuff or tell it to do stuff; it can just go away and figure stuff out. Like, I've been writing a lot of code.
[00:07:12] Cameron: That's what I've been doing this week. A lot, a lot of code. Spending a lot of time on it. You know, GPT will come up with the idea for how to code something I want to do very quickly, but then I'll spend a day or two debugging and workshopping it. Often, or sometimes, because I hadn't thought it through properly at the beginning, the brief was not quite right, and as you get further into it, you realize, actually, that's not quite what I wanted to do.
[00:07:39] Cameron: But quite often, because the code just fails. The code doesn’t work, and I have to go, okay, the code’s not working, and we have to debug it and break it down, and break it down line by line, and do print statements to figure out where it’s failing. We get there in the end and, you know, even if it takes me a day to do this stuff, that’s like five years less than it would have taken me to do it [00:08:00] under my own steam.
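That print-statement debugging loop Cameron describes can be as simple as tracing each input right before the line that might fail. A hypothetical sketch, not from Cameron's actual scripts (the `parse_prices` function and its data are invented for illustration):

```python
def parse_prices(rows):
    """Convert strings like "$10.50" to floats, tracing each row.

    The print statement is the debugging aid: when a bad row makes
    float() blow up, the last traced row tells you exactly where.
    """
    prices = []
    for i, row in enumerate(rows):
        print(f"row {i}: {row!r}")  # trace each input before converting it
        prices.append(float(row.strip().lstrip("$")))
    return prices

print(parse_prices(["$10.50", "$7.25"]))
```

Once the failing row is identified, the trace comes out again; this is the "break it down line by line, do print statements" loop in miniature.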
[00:08:01] Cameron: But, uh, it’s still, you know, very painful process at times to get it. And of course, every time it fails and I go, you know, you’ve failed. Oh, you didn’t answer that question. Can you please answer it? That’s another message off my message tally. So my 50 messages in three hours, I go through quite quickly by saying, yes, please proceed, or that didn’t work.
[00:08:22] Cameron: Let’s try it again. Um, and I’m kind of pissed off when it says, sorry, you’ve used up all of your messages. And I go, well, if you kept, if you didn’t keep getting it wrong, if you got it right the first time, I wouldn’t have had to use my
[00:08:34] Cameron: 50 messages. But anyway, I'm not going to whine about how slow my super-intelligent computer is.
[00:08:41] Cameron: That would be churlish. What else have you been up
[00:08:44] Cameron: to this week, Steve?
[00:08:45] Steve: Uh, I did an investor summit yesterday with the ASA, uh, which was...

[00:08:52] Oh, they're good friends. Well, you know everyone, don't you? So...
[00:08:55] Cameron: Who organized that? Was it Chairman
[00:08:57] Cameron: Steven Mabb?
[00:08:59] Steve: It might've been, I dunno. I just got invited to, um, talk a bit about tech with a guy called Ev Lucas, who is, um, part of the InvestSmart group and does some superannuation fund allocation.
[00:09:13] Cameron: Oh, right. They're friends of yours. And InvestSmart, right? Alan Kohler's group.

[00:09:17] Steve: Yeah, yeah. Kohler's group. So, um, I did that and that was cool. I'm going to come back to that in the, um, the tech throwback, and, uh...
[00:09:27] Cameron: The guys who run the ASA are QAV listeners and come on our show and all that kind of stuff. Yeah. we know those guys well. And I think they probably listen to this show, a
[00:09:35] Cameron: couple of them too, so.
[00:09:38] Steve: So the tech time warp, I'll talk a little bit more about that. And there was one other small thing that I did, which was, um, interesting. It's an AI where you train a robot. It's called Browse.AI.
[00:09:50] Cameron: Hmm.
[00:09:51] Steve: And what it is, is it's based on the idea that anyone can gather data off the web. Remember, screen scraping used to be a big thing in the way that a lot of startups got going, where they would take information. I mean, it's essentially what Google does, right? It screen scrapes and goes back.
[00:10:05] Steve: Uh, yeah, basically, but it's a little AI where you can train it to give you data from certain websites where you want certain information. One of the things I did was for a client who wanted pricing updates from their competitors. You just do the logins, and it's got a little robot that you drag around. You go, I want this, and I want it that often, and I want that, and then I want this. It's like a little robot icon on the screen, basically a browser extension. And I just thought, isn't that a really cool way? Because you train the little bot, and, again on anthropomorphizing, it was a little robot with a face where you go, I want this bit, and it chews it up and eats it and then goes to the next bit, and it asks, how often do you want it? When do you want it? Do you want it when there's a change? And there are different scales. It seems like a good little business, because you could scale it up for a big corporation if they want a lot of data from a lot of areas. But what I liked is that it demystified that scraping process to get data points. People know what they want, and they're usually trying to explain it to someone who has to code it. With this, you just become a point-and-click coder with the AI. And I really thought that was cool.
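Under the hood, a point-and-click scraper like the one Steve describes is generating selectors against the page's HTML. A minimal hand-rolled sketch using only Python's standard library; the class name "price" and the sample snippet are made up, and a real tool like Browse.AI would fetch live pages and handle logins and scheduling:

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collects the text inside elements tagged class="price".

    Dragging the robot over a page element in a point-and-click tool
    effectively generates a selector like this one.
    """
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the opening tag
        if ("class", "price") in attrs:
            self.in_price = True

    def handle_endtag(self, tag):
        self.in_price = False

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

# In practice you would fetch the competitor's page over HTTP;
# a canned snippet keeps the example self-contained.
sample = '<ul><li class="price">$19.99</li><li class="price">$4.50</li></ul>'
scraper = PriceScraper()
scraper.feed(sample)
print(scraper.prices)
```

The point of the product is that the user never sees any of this; the "little robot with a face" stands in for the selector-writing step.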
[00:11:06] Cameron: You know what that sounds like? I mean, that's sort of the same thing I've been doing by writing Python scripts with GPT to go and grab data off websites and use some functionality in Google Sheets to analyze spreadsheets. But this sounds like the apps that you get
[00:11:21] Cameron: For kids to teach kids coding. I know my older boys used them when they were like Fox’s age, he’s nine now. And Fox has done a bit of this stuff on the iPad. They did it on PCs back in their day. He’s doing it on iPads where you have little animated characters and you, you know, you drag them around, but what it’s doing is writing code.
[00:11:40] Cameron: Behind the scenes. So it’s just a fun, animated way to teach you the principles of coding. They’ve actually, you know, commercialized this in a way where it’s helping adults write code by making it fun and
[00:11:56] Cameron: giving you a little character. So you’re not actually just writing
[00:11:59] Cameron: [00:12:00] text, right?
[00:12:01] Steve: It felt fun to
[00:12:02] Cameron: And this is the thing we’ve been talking about on the show for months.
[00:12:05] Cameron: Like, one of the things that AI is going to do, and I'm stealing this from Satya Nadella at Microsoft, is create a billion programmers. Everybody is going to be able to program the web, program their apps, program their computers the way they want. It's going to get easier and easier to just program everything the way you want it.
[00:12:31] Cameron: All your devices to talk to each other and spit out this data, share
[00:12:35] Cameron: information. We're going to just head into this world of LUIs, language user interfaces, to APIs on everything. You'll be able to
[00:12:41] Cameron: program everything and it’s going to be seamless and
[00:12:44] Cameron: easy.
[00:12:46] Steve: natural language processing, or in this case, natural language programming. And, and if we liken it to the industrial era, you know, we all learned how to drive a car, but we never all became a mechanic. We all learned how to drive software in the [00:13:00] GUI era, but now we’re all going to learn how to code as well.
[00:13:03] Steve: Um, so it’s, it’s really
[00:13:04] Steve: good from
[00:13:05] Steve: that perspective. And, you know, I always say, I say it on stage, I say the most important coding language now is whatever language you like to speak.
[00:13:12] Cameron: Exactly. All right, let's get into some news stories then, Steve. Um, this one's not exactly new news, this is from late August, but I heard them talk about it on one of these podcasts I was listening to on my bike ride, and I thought it was fascinating. So NVIDIA are planning on tripling the number of GPUs, the A100s and H100s, that they're putting out, which are the backbone of the AI revolution; they're driving all these AI companies.

[00:13:44] Cameron: They're tripling the number of these things that they're going to put out next year. You know, they've already sold out, I think, a lot of their production schedule for next year. These things are flying off the shelves as quickly as they can make them with the AI revolution, but they're going to triple the number that they put out.

[00:14:07] Cameron: Their goal is to put out one and a half to two million. They put out 500,000 units this year; they're going to go up to one and a half to two million next year. So...
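For the record, the scale-up works out like this, using only the unit figures quoted in the episode:

```python
# Figures quoted in the episode (units of A100/H100-class GPUs)
units_2023 = 500_000
units_2024_low, units_2024_high = 1_500_000, 2_000_000

# A 3x-4x multiple on this year's output
print(units_2024_low / units_2023)   # 3.0
print(units_2024_high / units_2023)  # 4.0
```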
[00:14:19] Steve: All that
[00:14:20] Cameron: Um, you know, yeah, in one sense that's right: we can expect a three- to four-fold increase in the amount of AI compute that's available to us next year. Even if, and we'll get into a story about this, uh, in a sec, GPT can't be scaled any further, there are going to be all of these companies building AI infrastructure on the back of NVIDIA's chips. And that's not even talking about the competitors to NVIDIA, and, you know, X building their own, and maybe Apple building their own, and Meta building their own, and OpenAI building their own, et cetera, which has been rumored.
[00:14:59] Steve: [00:15:00] Yeah. The first thing that I, I gravitated to when I saw that
[00:15:03] Steve: was, oh, wow, let me look at the stock price and whether or not this is already priced in. Because thinking that the company is going to go 4x... I mean, we're talking about production here. Now, it's easy to go 4x when you're a virtual product and you can scale infinitely, but when something has a physicality to it and you're scaling at that level, it's almost unheard of. Henry Ford didn't; none of the big industrial concerns that went crazy ever grew at that level. Um, you know, $10 billion in a quarter is not huge; I think Google does about $75 billion in a quarter. So it's small from that perspective, but their valuation is very, very large compared to Alphabet. Alphabet has a 24-times price-to-earnings ratio; NVIDIA has a 97-times price-to-earnings ratio. So I feel like this is already priced in, and I don't think there's much investment upside. It was the first thing I gravitated towards, compared to all of the other big tech firms, which have bigger revenue and much, much lower price-to-earnings ratios.
[00:16:02] Steve: And they're foreseeably going to be a beneficiary of this production capacity. But the thing that I like about it, now that you've mentioned this, is that it somewhat democratizes access to AI. So all of these APIs and potentiality, where if we have that capacity going out, even if ChatGPT doesn't get better, it might open up the potential to commoditize AI chips, like AWS did, in a way, a commoditized service. People forget: if you wanted to have a startup in 1998, you needed to spend $200,000 on a bloody little server room in your office. There was no AWS, there was no way to store it. And now with AWS, you just plug in for what you need. And I wonder if NVIDIA or one of the big tech companies, and it looks like Meta is more likely to do it, opens up, let's just call it an AWS of LLMs on demand, you know, a scalable LLM that Cam's or Steve's startup could use. That could be really interesting.
[00:17:02] Cameron: Yeah. I mean, if NVIDIA are planning on putting this out, and we didn't even talk about, you know, China building their own and that hitting the world as well. But if they're figuring on being able to sell three to four times as many of these quite expensive, um, chips next year, then they're forecasting that there's going to be this explosion of investment, and all of this competition amongst all of these companies. We can definitely say that there is a sense that we're going to see massive scale.

[00:17:39] Cameron: With the amount of AI that reaches us and becomes available in the next 12 months, let alone the next five years, it's going to, you know, in the process, revolutionize so many things we can't even begin to imagine. I feel like it's a little bit singularity-esque, in that I can't even begin to predict what the world's going to look like a year from now.
[00:18:03] Steve: And it's foolish to try, too. And the thing that you and I both know is that talking about the future isn't about guessing what's next. It's understanding plausible trajectories, and knowing what you'll do when one of those eventuates.
[00:18:20] Cameron: And AI will be able to help us do that. Map out all of the possible trajectories for me and calculate the probability
[00:18:25] Cameron: of each.
[00:18:28] Steve: But also, the thing with AI that's kind of interesting is, I know that industrialization creates more industrialization. You get a machine, and the machine helps build bigger and better machines, you know, like the first earth mover, and then you get a bigger earth mover. But AI does it at a different scale. Your AI is inventing AIs; that recursion, that self-perpetuating multiplier effect, is really interesting this time around. Speaking of...
[00:18:56] Cameron: Speaking of that, yeah. Next story. Uh, just saw this last night: quantum startup Atom Computing first to exceed 1,000 qubits.
[00:19:07] Cameron: Now this
[00:19:08] Steve: Now, Cam, you had in your notes here, you said maybe we should ask ChatGPT to demystify that. And before I read that you had that lower down, that was the first thing I did. I don't pretend to know about quantum computing, other than it has superposition and qubits can be 1s and 0s simultaneously, which gives it more processing power.

[00:19:28] Steve: Um, and I looked into why this is amazing, and the whole thing was about fault tolerance. Because I asked ChatGPT, what does that mean? And it came down to the fact that the reason quantum computing is incredibly hard to do is that it is very, very heavily impacted by the wider environment.

[00:19:53] Steve: You know, things like humidity, weather, little bumps, uh, that happen, you know, with people walking around on the floor. And because it's dealing with things at a quantum level, it's very, very hard to create the stability needed for the computation, and an environment where it can actually function and work.

[00:20:10] Steve: Uh, which, I'm just guessing, like, once you're dealing with things at the quantum or molecular level, maybe that's what happens. And this is why it was kind of extraordinary: it had a greater capacity to work at scale, and that's been the number one issue with quantum computing, as far as ChatGPT has told me.
[00:20:29] Cameron: Well, I think it would have been cooler if we had ChatGPT tell us that instead of you, but sure, we can do it with you.
[00:20:37] Steve: But what did it
[00:20:38] Steve: say? That’s just me
[00:20:39] Steve: anthropomorphizing. I’m sure it’ll do a better job.
[00:20:42] Cameron: Well, one of the issues with this, I think, from reading the article, is that a thousand qubits still doesn't sound like enough to perform many serious functions. So it's not like we're saying quantum computing has arrived, but the big story here is that it's a massive increase on what has been done before, and on what this same company was able to do previously.

[00:21:11] Cameron: I think the previous iteration, which was last year or something, let me pull up the notes I've got here.

[00:21:24] Cameron: This is from Ars Technica. It says they previously had a system that operated using only 100 qubits. This new system is 1,180 qubits. So in a year they've gone from a hundred to over a thousand, a more than tenfold improvement. And they believe they can continue to scale this up using the technology and the system that they're operating on.

[00:21:55] Cameron: So it's a suggestion that we're getting better at scaling up quantum computing. It still seems to be a long way from being a reality. But, um…
[00:22:10] Cameron: You know, uh, so did AI seem a long way off a year ago. So who knows what the future holds for quantum computing over the next few years.
[00:22:20] Steve: Yeah, I remember a few years ago, "quantum supremacy has arrived", you would hear that all the time, the contention being that once someone cracks it, it's so powerful that everyone else becomes their lapdog, because there can only be one. But I don't know, it feels like... well, it feels like a long way off.
[00:22:37] Cameron: I do have a question on this that I’ll ask Maverick.
[00:22:43] Cameron: I want to get your opinion on something. There was a story that hit the media last night that a company called Atom Computing has announced a 1000 qubit quantum computer. And I saw a guy on TikTok say that this means the end of crypto. Uh, [00:23:00] can you talk a little bit about the implications of quantum computing and crypto?
[00:23:06] Maverick: Sure, this is a hot topic. Look, a 1,000-qubit quantum computer is impressive, but let's not get ahead of ourselves. The guy on TikTok is probably freaking out prematurely. Quantum computing has the potential to crack current cryptographic systems, yeah, but that doesn't mean crypto is toast. Quantum-resistant algorithms are already in development, and they could be rolled out before quantum computers become a legitimate threat.

[00:23:33] Maverick: But let's not sugarcoat it. Quantum computing could disrupt the security infrastructure of not just crypto, but everything that relies on encryption. It's a cat-and-mouse game, really. Tech advances, we adapt. So to say it's the end of crypto is a bit alarmist. It's more like a wake-up call to evolve and adapt.
[00:23:52] Cameron: Oh, thank you
[00:23:53] Steve: It’s interesting. That was a really
[00:23:54] Steve: good answer. Mav, it’s got a nice turn of voice too. Jeez, I like that.
[00:23:59] Cameron: Did [00:24:00] you know, did I tell you why I call him
[00:24:01] Cameron: Maverick?
[00:24:02] Steve: No, you better tell me.
[00:24:04] Cameron: Because when I got ChatGPT voice, I said, if I'm going to talk to you, I need to give you a name. What do you want me to call you? And it told me that it thought Maverick was a good name. So that's what I call him.
[00:24:16] Steve: I'm going to, well, Steve writes a note now on his notepad: give your AI a name. No, ask it what it wants to be called.

[00:24:25] Cameron: Yeah, anthropomorphizing it. All right, next story, Steve. Um,
[00:24:30] Cameron: I’m going to try and play this through the, uh, recording.
[00:24:35] Jill Biden: My name is Jill Biden, and I want to tell you about my husband, Joe. Joe is the world’s biggest cheerleader for the atrocities happening now in Gaza. The United States stands with Israel. Right now, the right wing extremist government of Israel is raining down hell on Palestine. They’ve killed [00:25:00] over a thousand children in the last few days.
[00:25:03] Jill Biden: This is a genocide.
[00:25:10] Jill Biden: Normal people around the world are standing up and demanding an end to the horror, but the only one who can stop it is Joe. The United States of America is supporting the actions of Israel, and the U. S. taxpayer is funding it. So come on Joe
[00:25:30] Jill Biden: from Scranton, tell Israeli George W. Bush, no more money for his bombs, cut the funding, call for a ceasefire, end this fucking nightmare.
[00:25:44] Steve: I mean, here's the thing. I watched that, and it was on the Singularity Reddit. I think you had the link. What I noticed was that it said "AI-generated political propaganda". Now, you and I know what that means, but you know what I think? I'm not sure a lot of people would know, just from the title, that it's not real. And it said "warning: graphic content". You know what it didn't say? It didn't say "warning: fake content", on the Reddit. And I had to log into Reddit, into an 18-plus subreddit, so it would let me see it. But it was extraordinary fidelity and resolution.
[00:26:21] Cameron: For people who can't see it, it opens with Jill Biden, and she's obviously the supposed narrator through the whole thing, but you see small clips of her talking to camera. It looks real. Um, it's Jill Biden.

[00:26:37] Cameron: It's Jill Biden. You know, it's not a whole minute of her, though, it's just bits and pieces, and it's intercut with footage of Israel and Gaza and stuff going on.
[00:26:45] Cameron: Um, but we've talked about this, you know, I think back in some of our first shows, we talked about the world of AI-driven political propaganda that we'd be going into. This is just another example. We've had other examples in the past, but, you know, it's going to become increasingly difficult to tell the difference between what's true and what's fake. Everyone's known about this, and, you know, as I've been saying on some of my other shows, like the Bullshit Filter, it's not really that different, because the media and governments have been lying to us about these sorts of things for as far back as we've had media and governments.

[00:27:24] Cameron: In Julius Caesar's time, he was lying about stuff that was going on in Gaul to justify his military actions in Gaul. Uh, and, you know, the U.S. entry into World War II, entry into Vietnam, entry into World War I, they were all based on lies that were fed to the people. Going into Iraq, WMD, uh, et cetera, et cetera.

[00:27:52] Cameron: Gulf War I, the Nayirah testimony, Iraqi soldiers taking babies out of incubators. We've been lied to with, uh, political propaganda forever. And in some ways, I think, it's easier to tell truth from lie these days. I remember during Gulf War I, in 1991, when my, uh, Middle Eastern friends in Melbourne were telling me that there was a lot of propaganda, U.S. propaganda, about why they were going into Iraq.

[00:28:22] Cameron: It was very hard to fact-check that back then. All you had was TV, newspapers, radio. And, uh, you know, there was no internet, there was no Reddit, there was no
[00:28:38] Cameron: TikTok, there was none
[00:28:40] Cameron: of
[00:28:40] Cameron: these things. So in some
[00:28:42] Cameron: ways today it’s easier, but it’s also,
[00:28:44] Cameron: those things
[00:28:44] Cameron: provide channels for more
[00:28:46] Cameron: propaganda, so it’s a little bit of both.
[00:28:48] Steve: I reckon it’s harder. I’m going to go out on a limb: despite the vagaries of mainstream media, I think it was almost slightly incumbent [00:29:00] on them to tell, not a true story, but maybe be closer to it, because they had their licensing and their spectrum rights and all of that. I don’t know. I don’t know.
[00:29:09] Cameron: Yeah, licenses that were given to them by the people in power that they were lying on behalf of.
[00:29:14] Steve: I know. I know. But I actually think it’s harder to find what’s true now. And the reason I say that is that I think there were some reputable sources where you could get it back in the day. There’s just so much of it now, I just think it’s impossible to wade through it and know what’s real.
[00:29:35] Steve: That’s what I think. There’s just so much. I reckon it’s harder now. Let me just clarify and say, I think it’s harder now to find what’s true than it was before. And the face that Cameron just made, which no one could see, was because I’m telling him mainstream media had some truth in it.
[00:29:56] Steve: The point that I’m making is that, almost because [00:30:00] we were taught to be so suspicious over the last 20, 30 years, I think now people are just looking for different angles and answers, and now it’s just an absolute mess of who knows what’s real. And it’s much, much harder to fake something.
[00:30:16] Cameron: Harder to fake something or easier to
[00:30:17] Steve: I’ll be sorry. Much, much
[00:30:18] Steve: easier.
[00:30:19] Steve: Sorry. Sorry. Fuck. It’s hard to fake things, people. Look, don’t believe what you read.
[00:30:23] Steve: Everything’s true. No, much
[00:30:24] Steve: much easier
[00:30:24] Cameron: show I was recording for a couple of hours this morning was my Cold War show and we’re currently five or six episodes into Operation Ajax. For people who don’t know what that is, that’s when the US overthrew the government of Iran in 1953, the democratically elected government of Iran. and reinstalled the dictatorship of, uh, the Shah and the U.
[00:30:47] Cameron: S. lied about that. Their involvement in that for 40 odd years. It wasn’t until the late 1990s that they
[00:30:55] Cameron: finally admitted that they did do that after
[00:30:58] Cameron: denying it for 40 [00:31:00] years. And it was difficult. Steve’s just giving me a
[00:31:04] Cameron: wind up.
[00:31:05] Cameron: Uh, it was difficult
[00:31:07] Cameron: to get the truth out of that. You had to read books, you had to go find books, read, you know,
[00:31:13] Cameron: uh, you know, historians that were, you know, telling
[00:31:16] Cameron: the other side of the story. It was difficult. Anyway. Moving right along, rightly so, article in the Financial Review here in Australia this week, Steve, why there will never be a Canva or Atlassian from Aussie AI, quoting a venture capital investor, Zeb Rice, says, I’m seeing way more exciting stuff in the U. S. But, um, I didn’t really want to talk about that aspect of it, I wanted to talk about some of the other quotes in here.
[00:31:41] Cameron: It says, um, walk into a large Australian law firm and ask the managing partner about the big issues on their mind. They will say remuneration frameworks and generative AI. Go up the road to meet the chief executive of one of the big consulting firms and it will be conflict management or reputation. and Generative AI.
[00:31:58] Cameron: Ask the CEO of a bank, or the boss [00:32:00] of a cement maker, telco giant, wealth manager, logistics park company, even the reserve bank, and they’re all talking about it, mucking around with it, or already using Generative AI to improve productivity. Then it talks about where are the Australian skill sets coming from, etc, etc.
[00:32:17] Cameron: But I thought that was interesting, that according to the Financial Review, everyone in boardrooms across Australia has already got generative AI among the top three priorities for their business moving forwards. Are you seeing that in your consulting slash speaking work?
[00:32:40] Steve: Yeah, and I’ve had a busy couple of months, just in the last couple, where I’m getting
[00:32:45] Steve: requests for
[00:32:46] Steve: like a week or next week or whatever. That hasn’t happened since pre COVID. Where things have like been on such short cycles and AI and my content’s got a bit more interesting and better over the year as I, as things have evolved.
[00:32:59] Steve: [00:33:00] And, and so I’m getting a lot more inquiries about it, but more than I’ve ever had. In the last little bit, and everyone just wants to hear about AI. It’s interesting. Everyone’s an AI expert now because it’s like social media. It’s very democratized and it’s easy to become a prompt hacking expert or what have you.
[00:33:18] Steve: Um, my focus has always been the economic side of change and how that impacts a company’s strategy and operations, more than what it can just do. So that’s coming thick and fast. It’s the number one issue. It’s AI and daylight. I saw a funny, uh, Instagram post someone sent to me where a guy walks up
[00:33:34] Steve: on a stage, he’s obviously set up, or maybe he was at a conference, and he walks onto the stage and he says, AI! And then everyone just claps and he just walks off, and it was bloody hilarious. I just loved it so much. Um,
[00:33:50] Steve: that just cracked me up. There you go, doubling down, I liked it enough to tell it twice. That’s how you know a story’s great. I must’ve told it last week if you’ve already heard it, cause I haven’t told anyone else. [00:34:00] Um, but that article, I focused on the investment. Um, by the way, Atlassian and Canva,
[00:34:08] Steve: Impressive financial vehicles and very, very unimpressive businesses in terms of the product that they sell.
[00:34:13] Steve: But that’s a whole other story that we could get into. Well, I think Atlassian creates software that no one really needs. That’s the first thing. And the second thing is that Canva isn’t exactly, you know, changing the
[00:34:24] Steve: world
[00:34:24] Steve: by giving people access to, you know, push around some pixels and make some pretty pictures, but hey,
[00:34:30] Steve: calling it from the cheap seats here.
[00:34:32] Cameron: Yeah, well I, you know, I think of those two, Canva’s probably right in the firing line of DALI 3,
[00:34:39] Cameron: right?
[00:34:39] Steve: dead. Dead. It doesn’t know it yet. I’ll tell you what, they
[00:34:42] Steve: should float yesterday.
[00:34:44] Steve: Yesterday, they should float, because their valuation is pie in the sky. There’s no way it’s worth what they
[00:34:50] Steve: think it’s worth. Um, by the way, Atlassian doesn’t even make a profit, so that’s a whole other story we could get into, and no doubt you’ve, you’ve looked at it on the QAB.
[00:34:58] Steve: Um, but, [00:35:00] I, I think that Australia should be ashamed of itself. That we’ve only ever had 20 unicorns
[00:35:06] Steve: and that we’re not investing in a big way in AI. I think every country needs to have a position and a strong AI industry in the same way that you need a military or education or any of the other important parts of infrastructure.
[00:35:19] Steve: You sort of need a certain sovereignty around it. And here’s the crazy thing. We’ve had like something like a hundred million going to AI startups recently. Now we do have the disadvantage that we’re one 10th of the size of America, and it’s always going to be that way. And businesses that we’ve done well in have had geographic isolation or we’ve had commodity advantage.
[00:35:38] Steve: You know, our retailers do really well. Our banks do really well because they’re geographically isolated. And our miners do well because we’re lucky that we’ve got all this stuff in the ground. We’ve never really done well in anything that could be exportable or global in its nature without those advantages, the geographic advantages.
[00:35:55] Cameron: Olivia Newton John,
[00:35:56] Cameron: man, come on.
[00:35:58] Steve: I thought you were going to give me this. I [00:36:00] love that you have a living Newton John and ACDC in excess,
[00:36:04] Steve: right? Um, and yeah, and, and our seventies movies. Melvin, son of Elvin. Yeah, man,
[00:36:11] Cameron: yeah. Yeah, look, I remember back when I was at
[00:36:14] Cameron: Microsoft, uh, 25 years ago, talking to
[00:36:18] Cameron: ministers, government ministers of Victoria and federally about, in New South Wales, talking about the need to invest in Australia’s information economy. We’ve never seemed to
[00:36:28] Cameron: have
[00:36:28] Cameron: gotten
[00:36:28] Steve: don’t, we’re not serious. We don’t, we say it, we don’t mean it. When I say we,
[00:36:32] Steve: they, they say it, they
[00:36:33] Cameron: We, all We need for security is nuclear subs, Steve. You’re missing the point.
[00:36:38] Cameron: We just need nuclear submarines. That’s,
[00:36:39] Cameron: that’s
[00:36:40] Steve: we should change, we’re going to make them for us constantly and
[00:36:43] Cameron: Yeah.
[00:36:44] Steve: Um, but here’s the rub. Here’s
[00:36:45] Steve: the skinny. We’re
[00:36:46] Steve: one
[00:36:46] Steve: of the few economies around the world. I mean, there’s some
[00:36:48] Steve: in Europe that have an incredible investment vehicle known as superannuation. 10 percent of every dollar in this country. Is [00:37:00] investable. So superannuation funds right now, the pool seeking investments is 3. 5 trillion. The US invented 1. 7, uh, uh, billion in AI startups last year and AI research, the government, but seriously, we’ve got 3. 5 trillion at our capacity. And the superannuation funds need to do it at scale. They always put 1 to, you know, 1 to 3 percent of their investment fund in high risk capital.
[00:37:29] Steve: If, let’s say you did something like 10%, In, you know, directly into AI, we could build a burgeoning industry that serves the world. Like we really could. And the fact that our superannuation goes into the same, you know, ASX 30 boring investments that aren’t moving the needle on anything is a disgrace. It’s a national disgrace.
[00:37:49] Steve: We could be using the benefit of our superannuation going into,
[00:37:55] Steve: uh, important investments that really change what
[00:37:58] Steve: we can provide the world
[00:37:59] Steve: with.
[00:37:59] Cameron: Steve [00:38:00] Sammartino for Prime Minister, that’s my next campaign. Yeah. Bill Gates, Steve, says he doesn’t expect to see any major innovation in GPT-5 compared to GPT-4, according to an interview he did with the German business newspaper Handelsblatt. He says there are plenty of reasons to believe that GPT technology has reached a plateau.
[00:38:27] Cameron: There are many good people working at OpenAI who are convinced that GPT 5 will be significantly better than GPT 4, including OpenAI CEO Sam Altman. Gates
[00:38:37] Cameron: says that he believes that current generative AI has reached a ceiling, though he admits he could be wrong. What do you think about all that?
[00:38:49] Steve: hard to know. I imagine if anyone
[00:38:51] Steve: should know,
[00:38:52] Steve: it, it, it could be, should be him given the history
[00:38:55] Steve: of
[00:38:56] Steve: what he’s looked at and his position. I imagine he still has some sort of influence at [00:39:00] Microsoft who are deeply ensconced with open AI. Uh, I was a bit surprised to hear that, uh, because I thought that. I know how much better the most recent iterations have been just from a consumer perspective.
[00:39:14] Steve: I was surprised to hear it. I actually really surprised me. I didn’t expect that because usually the narrative is the flip side of that. You think it’s good now. Wait till time X. And that’s always been based on computation capacity and improvements of software code and maybe
[00:39:30] Steve: even
[00:39:30] Steve: database input. So I was just surprised
[00:39:35] Steve: and
[00:39:35] Steve: I don’t know, what to
[00:39:37] Steve: say.
[00:39:38] Cameron: Well, look, Bill is obviously a very, very smart guy, understands technology very, very well. You know, and I remember earlier this year when he was talking about OpenAI, he said that he was floored at the leap between version 3 and version 4. You know, he said he set them a [00:40:00]challenge. So they could pass the SATs or something, uh, a year ago, or like a year before GPT 4
[00:40:05] Cameron: came out, I mean. And he thought it would take them five years to do that, and they did it in six months. And he
[00:40:11] Cameron: was like, oh,
[00:40:12] Cameron: wow, shit, I didn’t see that
[00:40:13] Cameron: coming. And of course, Microsoft
[00:40:17] Cameron: has a huge investment in NVIDIA. He has a huge investment, sorry, huge investment in OpenAI. He has a huge
[00:40:23] Cameron: investment in Microsoft, and he’s, you know, been keeping track of the OpenAI guys all the way.
[00:40:27] Cameron: So that is interesting, but I’ve read a couple of. Different sort of spins on this online, in Reddit, places like that. One is that, well, he should stay in his own lane, he doesn’t know what the fuck he’s talking about. If Ilya Sutskever and Myra and Sam say that, you know, there’s plenty of runway left for them to improve on it, then we should trust them rather than him.
[00:40:53] Cameron: Secondly, I’ve read that… He, you know, Microsoft has had and continues to [00:41:00] have a very large research team that are following their own path on AI, you know, with their own
[00:41:07] Cameron: sort of, um, model for developing AI that isn’t. Large language model based,
[00:41:15] Cameron: but, and, you know,
[00:41:15] Cameron: maybe he’s know something they don’t know
[00:41:18] Cameron: maybe he sees that
[00:41:19] Steve: the, it’s the only thing like,
[00:41:21] Cameron: maybe he’s throwing them under the
[00:41:22] Cameron: bus a little bit,
[00:41:24] Cameron: not really sure.
[00:41:26] Steve: he’s competitors or OpenAI under the,
[00:41:29] Cameron: Open
[00:41:29] Cameron: AI, or maybe he’s throwing that out there just
[00:41:32] Cameron: to
[00:41:33] Cameron: get everyone off the scent. Yeah, look, don’t
[00:41:35] Steve: Yeah,
[00:41:37] Cameron: It’s, uh, that’s not really where the
[00:41:39] Cameron: big game is at, you know.
[00:41:41] Steve: I, I, I thought it had to be, the only thing I, conclusion I could come to was for some reason, um, he wants to obfuscate what’s really going on because it just seems unlikely. That it’s not going to get better. Cause I don’t think I can remember any technology that [00:42:00] really got worse. I mean, we might’ve had economic incentives to make cars worse or consumer goods that have, you know, planned obsolescence or
[00:42:09] Steve: but, but I, I can’t remember this ever being the case.
[00:42:13] Cameron: you know, I’ve said before that I think there probably are limitations for how LL, the role that
[00:42:20] Cameron: LLMs will play. And, uh, you know, I think that there is this sort of, um, expectation that. AI bros have that LLMs will continue to improve to infinity, that that model of artificial intelligence is the be all and end all answer for AI and my gut feeling is that it’s not, you know, we have talked about it just being the language user interface that we use to plug into true expert systems where the true Knowledge lies.
[00:42:55] Cameron: And this just gives us a way to get data in and out of those far more effectively than [00:43:00] we’ve ever been able to do before. So there may be, you know, that’s kind of might be what he’s indicating. And maybe we don’t need a massive quantum leap in improvement from the LLM models. Maybe there is room for it to be more truthful.
[00:43:15] Cameron: And more independent, agent driven, task driven, those sorts of things. But it doesn’t have to be the font of all knowledge. It doesn’t need to be the one answer for AI. He goes on to say, he sees great potential in today’s AI systems, especially if high development costs and error rates can be reduced and reliability improved.
[00:43:36] Cameron: Reckons in the next two to five years, we’ll see generative AI viable for medical applications, such as drug development or health advice. He talks about how NVIDIA doesn’t have an absolute.
[00:43:53] Cameron: So he’s definitely not saying AI is not going to see massive progress in the next 10 [00:44:00] years, he says, um, He says that in the next 10 years, we’ll see it all solved. But interesting. He also says it’s weird. We know the algorithm, but we don’t really know how it works. As we’ve said before, I think Kurzweil said the same thing.
[00:44:13] Cameron: Stephen Wolfram said the same thing. You know, it’s fascinating. It, it
[00:44:17] Cameron: does this thing, but we don’t understand how it does what
[00:44:19] Cameron: it does. It’s kind of this emergent property that we don’t understand
[00:44:23] Cameron: yet,
[00:44:23] Cameron: which is fascinating. So it’s interesting to hear that come out of
[00:44:26] Cameron: Gates’s mouth as well.
[00:44:28] Steve: Yeah. and it’s a little bit like our brain. I did think of one thing that didn’t get better
[00:44:32] Steve: technologically and that’s the speed of air travel since the late sixties. And again, just it was a focus shift
[00:44:37] Steve: on efficiency and
[00:44:38] Steve: comfort and
[00:44:40] Steve: all
[00:44:40] Steve: of that.
[00:44:41] Cameron: How much time you got, Steve?
[00:44:43] Steve: I’ve got
[00:44:43] Steve: another five, six, and it’s going to be the greatest
[00:44:46] Steve: six minutes, I think, in
[00:44:47] Steve: podcast history. I don’t want to over pronounce.
[00:44:49] Cameron: Pick, we’ve got a lot of good stories we could touch on.
[00:44:51] Cameron: Pick one.
[00:44:53] Steve: Well, let’s go the Techno Optimist because I like
[00:44:57] Steve: that.
[00:44:58] Cameron: Did you read it? [00:45:00]
[00:45:00] Steve: yeah, I did. I Did read
[00:45:01] Steve: it.
[00:45:02] Cameron: Mark Andresen, founder of Netscape back in the day, runs A16Z, venture capitalist, um, published this thing on his substack, the Techno Optimist Manifesto. Where he’s basically doing a Martin Luther, stapling his theses to the door of the church. Um, it starts off, Lies. We are being lied to. We are told that technology takes our jobs, reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is ever on the verge of ruining everything.
[00:45:41] Cameron: We are told to be angry, bitter, and resentful about technology. We are told to be pessimistic. The myth of Prometheus in various updated forms like Frankenstein, Oppenheimer, and Terminator haunts our nightmares. We are told to denounce our birthright, our intelligence, our control over nature, our ability to build a better world.
[00:45:59] Cameron: We are [00:46:00] told to be miserable about the future. Truth! Our civilization was built on technology. Our civilization is built on technology. Technology is the glory of human ambition and
[00:46:11] Cameron: achievement! The spearhead of progress and the realization of our potential. And it goes on
[00:46:19] Cameron: to be read either as
[00:46:20] Cameron: Davros or Hitler.
[00:46:22] Cameron: Um, take your pick.
[00:46:23] Steve: That sounded very Hitler
[00:46:24] Steve: esque to
[00:46:24] Cameron: bin steinig von Schöneganschen. Ich bin Schöneganschen
[00:46:31] Cameron: bin.
[00:46:33] Steve: Is that real German? Cause it sounded, sounded plausible.
[00:46:36] Cameron: German. Um, yeah. So what did you, what did you think of, uh, Mark’s
[00:46:42] Cameron: little rant there? His screed.
[00:46:44] Steve: I felt like it had a
[00:46:45] Steve: whole lot of truth in it and a whole lot of self serving
[00:46:47] Steve: maybe lies in it as well.
[00:46:50] Steve: So it had some
[00:46:50] Steve: bits in there that were absolutely true. There’s no doubt that the plot of technology, which
[00:46:54] Steve: is, uh, using tools to solve our problems, you know, all the way back to the spear to the whatever, right?
[00:46:59] Steve: Of [00:47:00] course. I mean, I don’t think there’s anyone that thinks that technology doesn’t make life better in many ways. But I think it was very thin and I think that the optimism and the techno utopianism was just a little bit ridiculous in some ways and he managed to just absolutely ignore some of the downsides of technology.
[00:47:22] Steve: I always like Kevin Kelly’s view of it. He always says that technology is 51 good, 49 bad. It’s good enough to keep forging ahead with it, but there’s always externalities. And, and. I think if you could just remove the word technology and just talk about it as a, almost like an economic manifesto, it was very, very thin and I forgot about every externality and that.
[00:47:44] Steve: In many ways it’s created the world that is far more unequal, that there is more fakery, that there, you know, there is, you know, economic inconsistency. And he’s been a major beneficiary of this. It sounded like a marketing manifesto from a business and a reason why you should give me more
[00:47:59] Steve: money [00:48:00] because, you know, I know best, so you better send it to me.
[00:48:03] Steve: So I thought there was a lot of truth in there,
[00:48:06] Steve: but it was wrapped in a whole lot of frog shit.
[00:48:08] Cameron: Well, it reminded me of Ayn Rand, and I’m a big fan of Ayn Rand. I’m,
[00:48:14] Cameron: I’m, I’m one of those very few communists.
[00:48:16] Steve: read any of her
[00:48:17] Cameron: I’ve read them all, multiple times, and her letters, and everything.
[00:48:22] Cameron: I’m one of those very few communists that actually loves Ayn
[00:48:25] Cameron: Rand. But! Like everything, I don’t agree with everything that she says.
[00:48:30] Cameron: I don’t agree with her worldview in every way, but I think she makes some good points about innovation and innovators needing to be allowed to innovate because that’s what drives humanity forwards. I don’t think it’s unequally, uh, it’s unparalleled goodness that always comes from innovation. You know, my psychopath book was partially about, yeah, psychopaths can do great things.
[00:48:52] Cameron: They can also do a lot of damage and we need to. Keep the good, ring fence the bad, right? I think it’s the same with this, but the [00:49:00] thing that I always get out of Mark that amuses me with this, and this was the same, is he goes to, it goes to some lengths to, uh, piss on communism. And, uh, you know, centralised control of an economy, whereas I think AI is going to give us the ability to bring about Star Trek communism.
[00:49:26] Cameron: It’s going to give us, you know, the attempts to centralise command and control over the economy that the socialists tried in the 20s and the 30s and the 40s and the 50s failed miserably because they didn’t have. Computing. They didn’t have the
[00:49:43] Cameron: power to understand and, uh, manipulate huge amount of data and complexity.
[00:49:52] Cameron: It was, it was an
[00:49:53] Cameron: impossible, uh, task.
[00:49:55] Steve: that was why, that was why the, the,
[00:49:57] Steve: the capitalist markets won because the [00:50:00] data and, the complexity was synthesized by human behavior and
[00:50:03] Steve: you know, voting with consumer, you know,
[00:50:05] Steve: consumer voting
[00:50:06] Steve: with dollars and that, that kind of moves some of that complexity around. It distributes that.
[00:50:12] Cameron: Yeah, partially. It’s a little bit more complicated than that. Mostly one, because
[00:50:16] Cameron: they, they scaled up their military way faster
[00:50:19] Cameron: than the communists were able to do. And therefore they were able to use that to destroy their ability to stabilize their economies. That’s a whole other story. Um, I think that the, the irony in Mark’s thing is I do think AI is going to give us the ability to take way more control over the economy than capitalism, capitalism market forces have been able to give us.
[00:50:45] Cameron: And with the capitalism comes… You know, crashes every 10 years and huge amounts of inequality and climate change being driven by industrialization and all these sorts of things. Uh, so that was my main takeaway is [00:51:00] he’s kind of missing the big point here. I agree that technology is going to drive us forwards and we need it to drive us forwards because, as I said before, very pessimistic about our chances of surviving this century without it.
[00:51:12] Cameron: But I think it’s also going to give us the ability to realize the dreams of the, of Marx and Lenin and, uh, Engels
[00:51:19] Cameron: from 150 years ago.
[00:51:22] Steve: It felt like to me that he was accusing people of being Luddites who
[00:51:27] Steve: raise valid concerns around some of the externalities of
[00:51:31] Steve: technology. And it reminded me of, again, I’m a non believer in pure capitalism or a non believer in communism. Again, granted, we’ve never really had purity of either of them, but it does seem like some mixed model as always.
[00:51:44] Steve: Given better results so far, you know, since we’ve had organized economies, uh, where there’s some regulation and there’s some rules and, and then there’s some free market capitalism. And I think it’s the same with technology. You know, the technology on average, you know, pushes us forward, but you need regulation and rules around it, [00:52:00] especially in a, uh, you know, a capitalist economy.
[00:52:02] Steve: And I think it’s the same one with technology. It’s almost like. You know, technology and economics sort of have this quasi mirror image of each other in that there’s going to be externalities and they need to be
[00:52:15] Steve: discussed and attended to. And just to have this
[00:52:18] Steve: optimistic, it’s all good, forge ahead thing just seems super self serving.
[00:52:23] Steve: I
[00:52:24] Cameron: Well, is that a wrap up for today? You got to
[00:52:26] Cameron: go,
[00:52:28] Steve: think, I think it is There
[00:52:29] Steve: was some other good stuff, but you know
[00:52:31] Steve: what?
[00:52:31] Steve: I feel like we can come back next week and
[00:52:33] Steve: just double down on the goodness.
[00:52:37] Cameron: except you’ll have another hard out.
[00:52:39] Cameron: So we’ll have to, we’ll have to cut it short
[00:52:41] Steve: Next week I won’t,
[00:52:42] Cameron: We’ll see. I
[00:52:43] Cameron: can do it
[00:52:43] Steve: because I already
[00:52:44] Cameron: too, cause
[00:52:44] Cameron: I’m not doing a show in the
[00:52:45] Cameron: morning.
[00:52:46] Steve: will go
[00:52:47] Cameron: Well, that’s a wrap from Futuristic this week. Thank you, Steve.
[00:52:50] Cameron: Thank you,
[00:52:50] Cameron: everybody. Thank you, Maverick, for joining in, uh, a little bit and.
[00:52:57] Cameron: Futuristic Consulting. If you want us to [00:53:00] come in and do a deep dive workshop for your business on how you can be getting the most out of AI today and where it’s going to
[00:53:07] Cameron: take your business a year from now, get in touch. Track us down. We’re on Twitter. We’re on email.
[00:53:12] Cameron: We’re on the
[00:53:12] Cameron: web. You can find
[00:53:13] Cameron: us.
[00:53:16] Steve: Awesome.
[00:53:17] Cameron: See you, buddy. Talk to you next week.
[00:53:19] Steve: Later champ.