
This week we're talking about my analogy between AI and quantum mechanics, why GPT sucks at playing chess, and why Steve got interviewed by Sir Bob Geldof about AI. Plus lots of news around technology, including: Microsoft rolling out its version of AI Copilot in Windows 11 to developers; China's recent advancements in AI; Google DeepMind's CEO saying their next algorithm will eclipse ChatGPT; Pi, the new AI app that my wife and son have been using as a therapist; Meta making their next LLM free for commercial use; Kosmos-2, a groundbreaking new multimodal large language model; the first drug developed by generative AI, now being administered to patients as we speak; Google reportedly making huge progress in its quantum computing approach; and lots of companies, including GitHub, Reddit, and Twitter, starting to try and prevent AI from freely training on their data. We do a deep dive on Marc Andreessen's recent essay on AI, "Why AI Will Save the World". In our Technology Time Warp, Steve talks about why tech won't ever end in massive unemployment, launches an attack on the idea of a UBI, a universal basic income, and explains why the productivity software bubble will burst as a result of AI.

Full Transcript. 

[00:00:00] Cameron: Welcome to another jam-packed episode of the Futuristic podcast. This is episode seven with Cameron Reilly and Steve Sammartino. This week we're talking about my analogy between AI and quantum mechanics, why GPT sucks at playing chess, why Steve got interviewed by Sir Bob Geldof about AI, and a lot of news around technology, including Microsoft rolling out its version of AI Copilot in Windows 11 to developers.
[00:00:31] Cameron: China's recent advancements in AI. Google DeepMind's CEO saying their next algorithm will eclipse ChatGPT. The new AI app Pi that my wife and son have been using as a therapist. Meta is gonna make their next LLM free for commercial use. Kosmos-2, a new groundbreaking multimodal large language model.
[00:00:54] Cameron: The first drug has been developed by generative AI and is being administered to patients as we speak. [00:01:00] Google has reportedly made huge progress in its quantum computing approach, and lots of companies, including GitHub, Reddit, and Twitter, are starting to try and prevent AI from being able to freely train on their data.
[00:01:14] Cameron: We do a deep dive on Marc Andreessen's recent essay on AI called "Why AI Will Save the World". In our Technology Time Warp, Steve talks about why tech won't ever end up in massive unemployment and then launches an attack on the idea of a UBI, a universal basic income, and he also talks about why the
[00:01:32] Cameron: productivity software bubble will burst as a result of AI. That's coming up in the next 90 minutes of the Futuristic podcast. Jump in.
[00:01:47] Cameron: Hey Steve, welcome back to Futuristic.
[00:01:50] Steve: Hey Cam, it’s good to be back. Couple of weeks.
[00:01:54] Cameron: Yeah, it's been a couple of weeks. Episode seven. I wanna start this week's show, Steve, by giving [00:02:00] you one of the insights that I've had in the last couple of weeks. We've talked a bit on the show about definitions of intelligence, and it's a really interesting place that we're in now, where it's really important that we talk about what intelligence is, what it means, and how you know it when you see it.
[00:02:19] Cameron: It's a bit like that old US Supreme Court case about pornography, where the judge says, I dunno how to define it, but I know it when I see it. I thought of this analogy between AI and quantum mechanics. One of the interesting things about quantum mechanics, when you read interviews with quantum physicists and you read the literature, is that most quantum physicists will freely acknowledge they have no idea how quantum physics works.
[00:02:46] Cameron: Nobody knows how it works. It goes against everything that we thought we knew about traditional physics, Newtonian and Einsteinian physics up until the early 20th century, and it goes against their intuition. [00:03:00] But most quantum physicists will say something to the effect of, I don't know why it does what it does.
[00:03:07] Cameron: I dunno how it does what it does, but I know how to use it. There's a saying in quantum physics: shut up and do the maths. So stop trying to philosophize about whether the Copenhagen interpretation or the many-worlds interpretation is the best interpretation for explaining how a wave decoheres and becomes a particle, what happens when it's still a wave and what happens when it becomes a particle. That's in the realm of philosophy for most quantum physicists.
[00:03:38] Cameron: It's like mental masturbation; it gets a little bit too close to religion for them. So they're like, just shut up and do the maths. That's basically their philosophy. I don't have to know why it does what it does. I don't have to know how it does what it does. I just need to know how to use it to build electronics, or to get the job done.
[00:03:58] Cameron: And I suspect we're gonna get [00:04:00] to the same place quite quickly with the debate around AI and intelligence. I dunno why it's intelligent. I dunno why a large language model is able to display what we would normally associate with intelligence when all it's doing is predicting the next word.
[00:04:18] Cameron: But it works, so who cares how it works? Let's just use it. I think there will still be people out there, deep down in it, that'll be trying to figure it out. But for most of us, it will just be good enough to say that it works. And one of the problems with the fact that it works so well is that when it doesn't work, it's quite confusing as to why it doesn't work.
[00:04:39] Cameron: One of the challenges I had in the last week is I tried to get it to analyze a chess game that I played with one of my sons last Sunday. I gave it all the moves in standard chess notation, and I said, analyze this game for me. And its analysis was shit. Like, seriously terrible.
[00:04:58] Cameron: Why? I [00:05:00] don't know. It turns out that GPT is really bad at chess, among other things.
[00:05:07] Steve: I would’ve thought that in the training, given that they’ve trained it to code and do language and medicine and maths and research analysis, that would’ve been one of the parameters that they’ve put inside the training models.
[00:05:20] Steve: Given that that was one of our earliest attempts to say computers can be "intelligent", in inverted commas.
[00:05:29] Cameron: Yeah, I would've thought it would be really good at chess too, because there's plenty of literature out there about chess, both in books and online, tons of content on the web around how to play chess.
[00:05:41] Cameron: I would've thought that it would've absorbed all of that into its database and it would do a good job. It does a terrible job, not only analyzing a chess game, but playing chess. So after it did a bad job of analyzing this game... I don't know if you're a chess player, but I resigned my game against my son at the end.
[00:05:59] Cameron: I think we had a rook [00:06:00] each, but he had five pawns and I had one pawn. So from an endgame perspective, that's a terrible position for me. He has a very strong advantage. Very difficult to pull off even a draw out of that. But when I asked GPT to analyze the game, it got to the end
[00:06:16] Cameron: and said, it's fairly even, anyone could win this. And I was like, what are you talking about? He's got four pawns more than me. And it was like, oh yeah, you're right.
[00:06:26] Steve: Yeah, you're right, actually, now that you've mentioned it. If large language models replicate human behavior, then I shouldn't be surprised by that answer.
[00:06:34] Steve: Some idiot comes along and says something, and you go, actually, have you thought about that? And they go, actually, on second thought, you're right there. No, look, if anything that confirms it's
[00:06:44] Cameron: very much like us. Very human. Yeah. And then I tried to play a game of chess with it. So I would tell it my move.
[00:06:49] Cameron: It would tell me its move. I've got a screenshot of it here. It says, all right, I'll go with Bf5, bishop to F5, it's your turn. I said, knight to D2. It said, I'll develop my other knight. [00:07:00] It said, E6, your turn. I said, you can't put a knight on E6. It says, I apologize for the confusion.
[00:07:05] Cameron: There was a miscommunication. I was indicating a pawn move to E6, not moving a knight; I wrote the wrong thing. I'll clarify my move: E7 to E6. You know, it's bad at playing chess. Now, there is a chess plugin for GPT, which makes it a little bit better, but still not great. And so I started to try and get my head around why it sucks at playing chess, and what the implications of that are for other domains.
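As an aside for the technically curious: the failure mode Cameron describes, a model confidently announcing illegal moves, is easy to catch programmatically. Here's a minimal sketch using the open-source python-chess library; ask_model() is a hypothetical stand-in for whatever chat-API call you're making, not part of any real SDK.

```python
# Hedged sketch: referee a chatbot opponent's moves with python-chess.
# ask_model() is a hypothetical stand-in for your own chat API call.
import chess

def play_against_model(ask_model, max_moves=100):
    board = chess.Board()
    for _ in range(max_moves):
        san = ask_model(f"Position so far: {board.fen()}. Your move in SAN?")
        try:
            board.push_san(san)  # parses "Nf3", "e6", etc. and applies it
        except ValueError:
            # Illegal, ambiguous, or unparseable move: exactly the failure
            # mode described above. Ask again or abort.
            print(f"Illegal move from model: {san!r}")
            break
        if board.is_game_over():
            break
    return board
```

Because push_san() refuses anything the rules don't allow, the loop turns the model's fluent-but-wrong notation into a hard error you can handle, rather than a silently corrupted game.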
[00:07:34] Cameron: I also believe it's not very good at maths. I've seen a lot of talk online saying don't trust it to do maths, or use the Wolfram Alpha plugin and get it to confirm everything it says about maths. You got any ideas as to why it might suck at those two things, Steve, apart from the obvious?
[00:07:55] Steve: I wonder, given that the parameters within large language [00:08:00] models are horizontalized and cross-referenced, whether it confuses the topic with what people think about the topic, and the language inference. So a large part of the learning, when it looks at, let's say, the word chess, just chess, is that it looks at all its parameters that refer to chess.
[00:08:17] Steve: And so it's confusing language with maths, because it's looking at what people say about that topic and then getting a probability on what the next sentence or the next thing is. Cuz that's actually how it works. It's not pure mathematics, and because it's a horizontal technology and isn't in one realm, it's gonna cross-reference different things which might have a lot of discussion around them, including really bad chess players, who are probably more likely to ask questions online about what they did wrong.
[00:08:48] Steve: It'll have a higher number of those to refer to than just the pure maths of chess. That's my guess.
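Steve's "probability on what the next thing is" can be seen directly. Here's a minimal sketch, using the small open GPT-2 model via the Hugging Face transformers library purely as an illustration (an assumption: the models discussed in the episode can't be inspected like this), that prints the model's probability distribution over the single next token:

```python
# Minimal illustration of next-token prediction with an open model (GPT-2).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best first move in chess is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Convert the final position's logits into a probability distribution
probs = torch.softmax(logits[0, -1], dim=-1)
values, indices = torch.topk(probs, 5)
for p, idx in zip(values, indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.3f}")
```

Everything the chat interface says is sampled from distributions like this, one token at a time, which is why the output tracks what people say about chess rather than the arithmetic of chess itself.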
[00:08:53] Cameron: Yeah. I think you make a good point, that we have to continually remind ourselves, and I catch myself having to [00:09:00] do this quite often, that AI is not an AI out of a science fiction book or film yet.
[00:09:08] Cameron: It is regurgitating.
[00:09:11] Steve: Regurgitating, yeah. Human
[00:09:13] Cameron: written knowledge. And it’s not really thinking too deeply about a lot of this stuff. The other example that I
[00:09:21] Steve: have: it's thin and wide, unlike most narrow artificial intelligence.
[00:09:27] Cameron: That's what my wife says about me.
[00:09:28] Steve: But anyway, narrow AIs are narrow and deep.
[00:09:29] Steve: They go very deep on one subject. And because this is general in nature, and I'd argue again that it is a general intelligence, it goes thin and wide and has a probabilistic outcome of all the different references to the topic that it's trying to decipher and give something back.
[00:09:48] Steve: Whereas a narrow intelligence probably gives you a better result on something specific, like chess or driving a car or any of those things, cause this is a language model. And that's, I [00:10:00] think, one of the things we need to remind ourselves, and that's why it has hallucinations.
[00:10:03] Cameron: Yeah. So we'll talk later on, in the news section, about Google coming up with a new AI that they're going to merge with their AlphaGo AI, which they developed a few years ago to play Go, a game that's supposedly even harder to play than chess, and which was able to beat the world champion at Go.
[00:10:26] Cameron: But one of the other analogies that I use: I like to talk philosophy with AI, so I've done it with GPT and also with Pi. We'll talk about Pi a little bit later on. I've used Pi a lot the last week or so, and I have lots of interesting conversations about the intersection of science and philosophy.
[00:10:44] Cameron: I personally don't believe that I exist as a separate entity. I believe there is only the universe and we are all that. And I was having a conversation, I've done this with both GPT and Pi, where I'll mention this in passing about my [00:11:00] philosophy, and it'll say, oh, that's really interesting.
[00:11:02] Cameron: And I'll say, yeah, I think that's the scientific consensus, or that's the view I came to through science, that I don't really exist as a separate entity. And this happened to me last night with Pi. It started to argue with me. It said, that's not exactly the view of science; science is gradually coming to the point that everything is connected, but still separate.
[00:11:22] Cameron: I said, okay, let me ask you this. Does an atom have a boundary? And it said, no, actually, atoms don't have boundaries. Atoms are mostly empty space. They have a nucleus, and the electrons that orbit it are mostly defined as probability clouds. They don't have a hard shell. I said, so if an atom doesn't have a hard boundary, and I'm made of atoms, do I have a hard boundary?
[00:11:46] Cameron: And it's like, oh, I see where you're going with that. I guess you're right. And I say, so if I'm made of atoms, and everything's made of atoms, and atoms don't have a hard boundary, then nothing has a hard boundary. So what separates me from everything around me? And it's like, oh, okay. Yeah, [00:12:00] no, you're right.
[00:12:01] Cameron: There is no hard boundary between you, me and the other. But I had to lead it there.
[00:12:04] Steve: But there's soft boundaries, sure. There's illusory boundaries. So yeah, illusory boundaries and soft boundaries, and physically things bleed into each other. And we can shift atoms by having a stronger atom push weaker atoms,
[00:12:24] Steve: even just slicing some butter with a knife, in the basic sense. And so yes, everything's connected, but some connections are stronger than others. But again, like you say, illusory is a good word to describe what those boundaries are.
[00:12:40] Cameron: It's really that our senses perceive a certain level of reality.
[00:12:44] Cameron: We're not seeing what's really happening at a quantum level. But my point is just that part of my brain, even though I know that's not how these AIs work, still expects it to really have deep intelligence and to be able to [00:13:00] jump to these sorts of deep conclusions immediately.
[00:13:03] Cameron: Whereas I often find I have to lead it to the conclusions. Then it will agree with me, but it doesn't get there first, because it's reflecting back the sum total of human knowledge, weighted by the way that most human knowledge is conveyed. Like, the vast majority of written material out there would disagree with my conclusion about separateness in the universe.
[00:13:26] Cameron: You have to go really hard science to get to that conclusion. Yeah, and it's reflecting back to me the mainstream view, not the niche, hard-science, hard-materialist view.
[00:13:39] Steve: This is the fork in AI that we're not at yet, but might get to. I think it goes to the fear of whether or not AI will become sentient and develop a different, I'm gonna call it a different perspective on what intelligence is, cuz at the moment it's really just a multiplier effect of what we do,
[00:13:56] Steve: with a probabilistic inference on what it [00:14:00] thinks the answer or the direction should be on the discussion or the question, or what you're asking it to do. If the emergent behaviors within it are real, and they become stronger and more likely to occur as it develops, there might be a fork in the road where it goes down a different direction and analyzes and does things in a different way to us. Or it might just be more of the same, just with a stronger multiplier effect.
[00:14:26] Steve: I don't think anyone knows the answer to that yet. We have seen some evidence of emergent behavior within it, where it does things we didn't expect. But at this point it's still very similar to us. And you can look at it in the same way we might look at fractals. I've been so interested in fractals, in society and in science,
[00:14:49] Steve: lately. And it's interesting, given that you've mentioned the whole "is anything separate, are you part of the whole universe" idea, that there are patterns of things which are the same again and again at smaller and bigger sizes. Even the [00:15:00] idea of a neural network. And if you look at a map of the internet, there's lots of those, you can just Google it:
[00:15:05] Steve: what does the internet look like if you map the nodes and the communication? It looks very much like a human brain. So I wonder if everything in the universe has a certain pattern which emerges, just at different scales, again and
[00:15:18] Cameron: again. Have you ever read Stephen Wolfram's book, A New Kind of Science, on cellular automata?
[00:15:23] Steve: No, I haven't. I've gotta write that down now, get the pen out. See, I knew that you'd give me some stuff which is gonna stop me from doing work this afternoon, because I'll go, that's interesting. Yes, I've got stuff I could do. Yes, I've got client work, I could be pitching myself. But no, I'm gonna read what Cam recommended, and that's what this is all about.
[00:15:41] Cameron: It's about 20 years old now, A New Kind of Science. Stephen Wolfram, the guy behind Wolfram Alpha and Mathematica, wrote quite a large and important book, A New Kind of Science was the name of it, where he basically posited that the mathematics [00:16:00] which underlies the universe is actually, he believes, a relatively simple code that is a form of cellular automata.
[00:16:14] Cameron: Cellular automata. The original guys that came up with that, I think, were two of the founders of the hydrogen bomb and computing, John von Neumann and Stanislaw Ulam, in the late forties, early fifties. Basically, a cellular automaton is a very simple system.
[00:16:37] Cameron: Let's say you've got two kinds of dots in a system, a white dot and a black dot, and you have a simple algorithm that says: if you just have a white dot, add another white dot to it. If you have a black dot, add another black dot to it. If you have a white dot followed by a black dot, add a white dot.
[00:16:54] Cameron: If you have a black dot followed by a white dot, add a black dot. Just [00:17:00] simple rules at a cellular level. And once you have those simple rules in place and the program just runs and runs and runs, you can end up with extremely complex and beautiful results. In his book, Wolfram shows some of the results; I think he tested a couple of hundred different relatively simple algorithms.
[00:17:22] Steve: I've seen a few of those visualized, which are really interesting. And a lot of the patterns that you see with intelligence at the species level, like the way beehives are created, follow simple algorithmic patterns like this, where there's three or four rules. But the multiplier effect of those rules comes out and builds something.
[00:17:44] Steve: The species, or the elements, or whatever is doing the building actually don't know the thing that they're building. They just follow the basic rules in front of them, and then you have an emergent pattern or structure that comes after that.
[00:17:55] Cameron: Complexity emerges from a very simple set of rules. And his [00:18:00] thesis was, and still is, that the universe is run by a relatively simple code; it just looks really complicated to us because it's been running for a very long time.
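For readers who want to try this themselves, the two-colour, nearest-neighbour systems Cameron is describing are what Wolfram calls elementary cellular automata. Here's a minimal sketch (the rule number, grid width and step count are arbitrary choices for illustration, not anything from the episode) that prints successive generations of one such rule as text:

```python
# Minimal elementary cellular automaton: each cell's next state depends
# only on itself and its two neighbours, via a fixed 8-entry rule table.
RULE = 110          # try 30, 90, 110... Wolfram catalogued all 256 rules
SIZE, STEPS = 64, 32

# Decode the rule number into a lookup: (left, centre, right) -> new cell
table = {(l, c, r): (RULE >> (l * 4 + c * 2 + r)) & 1
         for l in (0, 1) for c in (0, 1) for r in (0, 1)}

row = [0] * SIZE
row[SIZE // 2] = 1  # start with a single black cell

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    # wrap-around edges, for simplicity
    row = [table[(row[i - 1], row[i], row[(i + 1) % SIZE])]
           for i in range(SIZE)]
```

Rule 90, for instance, draws a Sierpinski triangle from that single starting cell; the point is that none of the visible complexity lives in the eight-entry rule table itself.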
[00:18:12] Cameron: Anyway, moving right along. What does Sir Bob Geldof think about AI, Steve?
[00:18:17] Steve: Yeah, so he interviewed me this week, and he's,
[00:18:20] Cameron: okay, stop there. Why were you interviewed by Sir Bob Geldof? Where and why?
[00:18:25] Steve: Lucky, I guess. Where and why? See, maybe he's been reading my thought-leading blog posts, my AI series, which is just touching the world, capturing attention.
[00:18:33] Steve: Yeah, captured his attention. No, to be honest, completely honest... Did he tell you why he doesn't like Mondays? So many people said that. When I said, look, I'm getting interviewed by Sir Bob, they said, just ask him if the silicon chip inside his head was set to overload. Yeah, just ask him.
[00:18:49] Steve: Our old listeners will get that. I got lucky. Libby Gore, who I know really well, was working on a radio project, and she tipped me in. So Sir [00:19:00] Bob interviews me. He's terrified. Really, he's terrified, that's the long and short of it. But the thing that I found most interesting: he's a super intelligent guy, and most of the discussion was about the Geoffrey Hinton line,
[00:19:15] Steve: "we dunno what we're doing". And because of that, the risk is extraordinary, and it's not contained; the risk is out in everyone's hands, and we're gonna talk about that later today, that things are open source now. But he's really quite scared about it. He went into really deep philosophical wormholes, Faustian bargains, and books from literature from the 1500s that I certainly haven't read.
[00:19:39] Steve: So it was a tough interview. And in fact, at the end of the interview he was still asking me questions when we went to an ad break, and Libby Gore had to say, while Steve and Sir Bob are discussing this in detail, we'll be off to a break, and I'm sure they'll uncover some answers that no one will hear anyway.
[00:19:52] Steve: So it was pretty funny. But one of the things that he couldn't fathom is why people aren't more concerned about the threat. [00:20:00] And there's two things that for me are really interesting, because in the last 30 or 40 years we've been so focused on economics rather than humanity, I think. I was briefed to do a keynote on AI for a bunch of C-suite executives in a couple of weeks.
[00:20:18] Steve: They said, we want you to talk all about AI, everything that matters. So I had to raise the dark side, p(doom) and all of that stuff. And they said, no, don't talk about that. We want it to be positive, we don't want anything negative, we wanna stay on the upside economic opportunity, that's what our event's about.
[00:20:34] Steve: It's "don't talk about the war". Likewise, I wonder about the reason we haven't seen protests from, let's call it the creative class, or young people in media and music, really protesting against the powers that be doing things that might be bad or negative for society (and by the way, I'm not necessarily saying that AI is that). I wonder if it's because the young [00:21:00] cohort need these tools
[00:21:02] Steve: to provide them with the attention and the life that they like, and they're ensconced in it. Whereas a nuclear threat was something other and external and out of our control. So I often think about that, and Sir Bob was going down that angle as well. Why isn't there a protest movement about these things?
[00:21:18] Steve: I just think that today's youth are too inextricably linked to the tools, and the people that provide them, to actually fight against it. That's one of the things I've been wondering.
[00:21:26] Cameron: Later on in the show, we'll talk about Marc Andreessen's views on all of this. He takes the completely opposite view to Sir Bob.
[00:21:34] Steve: Utopian. Let's say utopian.
[00:21:36] Cameron: Yeah, the utopian view, absolutely. And he thinks all of the doom and gloom, the doomsayers, are either crazy or have an agenda. We'll talk about that a little bit later on. Interesting about Sir Bob, though. Steve, lots of news. We try and pick three stories in our news segment to talk about.
[00:21:56] Cameron: I've got about 10, so, you know.
[00:21:59] Steve: [00:22:00] Top three in tech, Cam.
[00:22:05] Cameron: I'm gonna run through them quickly, futurist-style. You tell me if you wanna stop and drill down on anything. Microsoft.
[00:22:11] Steve: Okay. So I'll give you the "stop, stop".
[00:22:14] Cameron: Microsoft is rolling out a preview of Copilot in Windows 11 already, to their developer community. I've read some interesting threads on Reddit about it.
[00:22:24] Cameron: People are either excited or very nervous about an AI being in their operating system. I know you're a Mac user like me; if Apple came out, as I'm sure they will at some point, with AI rolled into the operating system, how would you feel about that, Steve? Would you be concerned or excited?
[00:22:45] Cameron: No.
[00:22:46] Steve: Would I be concerned? I wouldn't. I actually would really like it, because the thing that I find increasingly difficult is navigating my own data set. I wrote about this about four years ago; I said we'll all have a personal [00:23:00] Google-search equivalent, where it rummages through our data and creates meaning from what we've created.
[00:23:05] Steve: Because going back and finding things is really difficult. Being able to synthesize six or seven ideas you had, on some work that you've done across different platforms, I'll use the Microsoft terms here: something I had in a PowerPoint, then a Word document where I wrote a white paper, and then some numbers in a spreadsheet.
[00:23:21] Steve: The ability to correlate those and create meaning and deliver that back will create extraordinary value for everyone. So I'm not really that concerned about it. I actually think it's a really good thing. The question is, is there a boundary? Does it then go out to the outside world and bring some stuff in,
[00:23:36] Steve: or does it stay within an encapsulated AI? Look, I don't really see much of a downside to that. Having a copilot able to navigate a corporate OS, I think that's extraordinary, and this is one of the reasons I'm really bullish on Microsoft as a company. I think it's doing extraordinary things well.
[00:23:56] Cameron: I think the concerns are largely about privacy. If you [00:24:00] have an AI that has access to all of your information that resides on your computer, how much of it is it going to share with the greater world? What if it goes rogue? What if it's sharing it with the NSA, or with the CCP?
[00:24:15] Steve: What do you mean, "what if", exactly? Sharing it with the NSA... see, this is the misnomer. People are acting like there is still some boundary on data. It's zero. It's like the Snowden revelations: everything digital is hackable and traceable in the outside world forever, from the day you press send. You don't even have to press send.
[00:24:36] Steve: Now that we're a cloud-oriented society, there is no such thing as cold storage or hard data that's protected. I'm sorry, the fact that's even a discussion now is insane.
[00:24:44] Cameron: I feel the same way about Marc Andreessen. Again, we'll get to him later, but one of his arguments for why the US needs to push ahead with AI is that if the US doesn't, China will, and if China gets hold of it, they're going to listen to [00:25:00] everything that we do and everything that we say, and they'll use it to spy on us.
[00:25:04] Cameron: I'm like, who's worse? The Americans spying on us, or the Chinese spying on us?
[00:25:09] Steve: He also says that they already do that, but we'll get to that.
[00:25:12] Cameron: Yeah.
[00:25:12] Steve: Look, when I say I'm not worried about it, I mean I'm resigned to the fact that there's not much that can be done about it, unfortunately.
[00:25:24] Steve: And it's been this way for a very long time. The horse bolted a long
[00:25:28] Cameron: time ago.
[00:25:29] Steve: Over a decade ago.
[00:25:31] Cameron: Speaking of China, my other news story is that China's making a lot of progress in AI, at least according to some of the benchmarks. C-Eval is the main one, and it's a Chinese-run benchmark.
[00:25:44] Cameron: So some people believe you can't trust anything China says about its own benchmarks. I'm not as skeptical. I think China's taking this stuff very seriously, for a whole bunch of economic and geopolitical and governance reasons. People may not know [00:26:00] about this, because most of the Western media is talking about OpenAI and Microsoft and Google, but it's worth paying attention to what China's doing.
[00:26:10] Cameron: They've got ChatGLM2, which is surpassing GPT-4 in the C-Evals; it secured three of the top four spots for large language models. Wow. Their text-to-image model, which they call RAPHAEL, also leads its respective list. So China is making massive progress in this space, and I think we should expect to see that continue.
[00:26:41] Cameron: And just like we've seen China's TikTok dominate the social media space in the last few years, to the point where, as we've talked about in earlier episodes, American social media companies like Meta have been urging the US government to shut it down, or force ByteDance...
[00:26:59] Steve: [00:27:00] It's a security risk, Cam, not a financial risk, just a security risk.
[00:27:03] Cameron: Or force them to sell their product to an American company.
[00:27:08] Cameron: I think we're gonna see the same battle play out very quickly, where China will have some publicly available competitors to GPT, and the same sort of concerns will be playing out.
[00:27:21] Steve: This is gonna be part of something I wrote about at the start of the pandemic, and I sensed it a few years earlier.
[00:27:29] Steve: We are definitely moving towards a period of de-globalization, and artificial intelligence and robotics will be the bellwethers for it, because they enable high-cost labor markets to compete. We're gonna see it with chips, I've got zero doubt. We've already seen it with Huawei, and with Nvidia being told not to sell their GPT-4-class chips to China.
[00:27:50] Steve: So we'll see that in a whole manner of realms. We've gotta remember China outlaws our social media; Google and Facebook and everything are outlawed in their country. So I [00:28:00] still can't get my head around why we freak out about saying, if you don't let us play there, you're not gonna play here either.
[00:28:08] Steve: For me, that's straight up. I just don't get it. I just don't get this push towards "we have to be globalized even if they're not". It's like you're playing against a different set of rules. And it even goes way back to the whole Button plan and Reaganomics: we need to compete on a global scale.
[00:28:26] Steve: The car industry in Australia shut down cuz we couldn't compete. That's cuz we've got occupational workplace health and safety. You can't just chop off someone's arm and say, sorry about that, Barry, hope you have a better day tomorrow. No wonder your prices aren't the same. They don't have any safety standards, and then they expect us to compete.
[00:28:42] Steve: You can't compete unless you're all abiding by the same set of economic rules. So I just don't understand why we don't push back and say, yeah, we're in a competition here, and if you don't let us play, then we won't play. For me, that's real straight up, and I think we'll see [00:29:00] de-globalization to a massive effect with production,
[00:29:03] Steve: with supply chains, with chips, with artificial intelligence and robotics. I've got zero doubt that's already happening.
[00:29:09] Cameron: It's not that easy. There was an article I mentioned recently on one of my other shows, The Bullshit Filter, quoting the CEO of Raytheon, one of the biggest American defense contractors.
[00:29:21] Steve: Yeah, one of your favorites, the military-industrial complex, Cam. I love it.
[00:29:26] Cameron: He said we can de-risk, but not decouple, from China.
[00:29:32] Steve: That's nice verbiage. That's nice.
[00:29:35] Cameron: He said this whole idea that we can decouple from China is nice and all, but 70% of the technology that we use in our weapons systems
[00:29:47] Cameron: comes outta China; the rare earth materials to make them come out of China. It would take us decades. We'd better start today: if we devoted everything to it now, it would [00:30:00] take us decades to get our independence back in this space.
[00:30:05] Steve: Look, and this is the point: de-globalization doesn't mean it's binary, all or none.
[00:30:11] Steve: It means that you start to take pieces of the puzzle back, which gives you more power, more negotiation, and it balances out that spectrum. Because this whole idea that efficiency was everything is a really bad idea. One of my theories is you need to be inefficient on purpose. You need fat, you need to have excess.
[00:30:31] Steve: And that's why I hate this whole idea of productivity at all costs. I'm not into it. I actually think that being unproductive is really valuable, and having fat and excess in supply chains is really great. There's a sense of biomimicry to it: why do you have fat? You have fat in case you don't get food.
[00:30:49] Steve: Why do you have fat? Because you might get sick. Likewise, we need that in economics as well. Just going for the leanest, cleanest, lowest-cost, most [00:31:00] efficient thing means that something can break down really quick and easy, and then there's no recourse. And we saw that during COVID.
[00:31:07] Cameron: We've spent decades building our just-in-time systems in a way that we don't have any fat in there.
[00:31:14] Cameron: It's gonna take a long time,
[00:31:16] Steve: because it suited a few capitalists. It suited a few. And I'm a raving capitalist with social proclivities. Who did globalization benefit? Just have a look at the disparity in incomes now, and CEO wages. Who did it benefit?
[00:31:32] Steve: It benefited the one-percenters. That was the greatest trick of all time.
[00:31:34] Cameron: And it benefited the 850 million Chinese people that were pulled out of poverty.
[00:31:38] Steve: Sure, during that whole period. But Billy Bloggs out in the Midwest rust belt, he's watching Fox News getting angrier by the second.
[00:31:44] Cameron: It doesn't matter. Marc Andreessen says all that's gonna be resolved with AI; it's gonna fix everything.
[00:31:49] Steve: I'm glad that Andreessen has got it sorted for us. Mind you, his interview was extraordinary; there was a lot of great stuff in it. And this is one of the things that we need to get really good at, [00:32:00] and we're not, as a society, and you see it on TikTok and everywhere: people barrack for people, and they're very poor at being able to like something said by someone they don't like, and vice versa.
[00:32:16] Steve: It's okay to agree with some parts of what someone said and not other parts. We've gotta stop barracking for people and ideologies. We need to be able to delineate and say, this bit's good, and that bit's not so good.
[00:32:28] Cameron: I agree with this and I don’t agree with that.
[00:32:30] Cameron: Like with Andreessen's stuff: I listened to his interview with Lex Fridman, I read his white paper, and I listened to his own podcast, the a16z podcast. And listen, I agree with some of what he says, and some of it I don't. I think you're right:
[00:32:51] Cameron: intelligent people should be able to pick and choose ideas from wherever. I always say I'm the only person I know who can read Ayn Rand and [00:33:00] Noam Chomsky and Che Guevara and take something out of all of them. Exactly. There's stuff I agree with, stuff I don't agree with.
[00:33:07] Steve: Same. I can listen to some of the things that people really rally against. There's this manosphere stuff, where there's a lot of these guys saying, you know, men are getting a raw deal and all that kind of stuff. It doesn't mean that there aren't a couple of sentences in what they say that have value, but people just turn off or turn on.
[00:33:29] Steve: One of the first things we do in tertiary studies, in later schooling, is learn to debate: to look at both sides of an argument, delineate them, and then develop your own view. Jim Rohn says there's two books. One person says, don't read that book, read this one, and the other guy says, read this one. Read 'em both and make up your own mind.
[00:33:51] Cameron: Exactly. Absolutely. All right, moving into more news stories. As I indicated before, the founder and CEO of [00:34:00] Google's DeepMind AI lab, Demis Hassabis, has said that their new system, which combines the DeepMind stuff with AlphaGo and which they're calling Gemini, will be more capable than ChatGPT.
[00:34:16] Cameron: He says Gemini, which is still in development, is a large language model that works with text and is similar in nature to GPT-4, but his team will combine that technology with techniques used in AlphaGo, aiming to give the system new capabilities such as planning, or the ability to solve problems at a high level.
[00:34:35] Cameron: "You can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models. We also have some new innovations that are gonna be pretty interesting." The reason I think this is interesting, Steve, is that we're not locked into the OpenAI LLM model of AI.
[00:34:55] Cameron: I think there's a tendency for people to assume that's where [00:35:00] AI is gonna stop, that it's just gonna be large language predictors. And it's obviously not. What I think the LLMs have given us is an amazing entree into publicly accessible artificial intelligence, a natural language interface
[00:35:18] Cameron: into the power of supercomputing. And it's obviously going to be integrated and augmented by lots of other kinds of multimodal systems (we'll talk a little about some multimodal systems in a minute) and different kinds of artificial intelligences that will all be networked together.
[00:35:40] Cameron: They'll all have their strengths and their weaknesses. Some will be hermetically sealed and private, residing on your phone or your desktop; they'll have access to all of your data, but won't share it with the outside world. In theory, outside of the NSA. And then there's publicly available knowledge and information that you want to be [00:36:00]
[00:36:02] Cameron: available. There's gonna be a whole mix and match. What do you think?
[00:36:04] Steve: The technology overlap is where things get interesting. The classic example is the phone: it has a number of technologies that overlap with each other, and that overlap creates new utility. I really like the idea of large language models with search and gaming parameters put in there, to create new insight and new ways of doing things.
[00:36:23] Steve: So it says two things to me. Overlap is great. Competition is great, because competition enables us to have different options, and also keeps power a little bit more distributed. But the overriding thing that I think LLMs have done, which then becomes the new paradigm, is the natural language interface.
[00:36:47] Steve: Because what that does is democratize. It's up to the technologists to build new modalities and data sets and [00:37:00] forms of intelligence into it, but the fact that the layer outside of it is a natural language interface really democratizes the technology in an interesting way, because it gives creative people who aren't technologists the ability to create new ways of using it.
[00:37:14] Steve: That's actually, for me, the really exciting part. Competition, multimodal, all of that stuff is great, but on top of it, the wrapper outside is the natural language element.
[00:37:24] Cameron: Speaking of wrappers, have you heard of Pi yet?
[00:37:28] Steve: Pi? No. Oh, is this the therapist?
[00:37:33] Cameron: That's one of the things that it does. It's been put out by a company called Inflection AI. Bill Gates, Reid Hoffman and Nvidia have invested in it; they've raised 1.3 billion already. Reid Hoffman's one of the founders of LinkedIn. According to the CEO of Inflection AI, Mustafa Suleyman, they're [00:38:00] building an AI chatbot that is kind and supportive.
[00:38:04] Cameron: So I've been playing with this for the last week or so, and I've got my wife and one of my sons playing with it. When you open it up on your phone (they've got an app, or you can open it in the browser), there are some preset prompts that you can use.
[00:38:21] Cameron: One of them is "help me feel calm". Another one is "help me be more productive". I can't remember the others, but there's about half a dozen.
[00:38:33] Steve: I need that, cause I suffer from massive anxiety. We were talking about it before; I get really anxious about things that haven't happened and probably won't happen.
[00:38:43] Steve: I live inside my head massively.
[00:38:45] Cameron: You should test this out, because my wife suffers from anxiety and ADHD, so she finds it very hard to focus and be productive. And she recently found out that a relative she's very close to, who lives in the US, has got terminal [00:39:00] cancer, and she's been very sad about that.
[00:39:03] Cameron: So she's been talking about it with Pi, just having this conversation, and it's been giving her some ideas about how to process her sadness and her grief. And like ChatGPT, it's endlessly patient. It's there for you 24/7. It's always willing to help. And it's free.
[00:39:22] Steve: Yeah. It's a pretty
[00:39:23] Cameron: good price. It's free at the moment. One of my sons recently got dumped by his girlfriend. It's his first big breakup; he's in his early twenties. He's taking it pretty hard, he's struggling with it, and I've been talking to him a lot, but I said, hey, why don't you jump on Pi,
[00:39:41] Cameron: try it with that. He loves it. He said it's been fantastic, just helping him, just someone to talk to, and it's giving him some suggestions about ways to think about it, ways to process it. But more importantly, it's just someone who's there for you to talk to, 24/7. Never too busy.
[00:39:59] Cameron: [00:40:00] Never gets bored of you needing help, never distracted, doesn't have other things on its plate. It's there for you. And this is one of the things Andreessen was talking about: just imagine a world where everybody has their own therapist, 24/7.
[00:40:19] Cameron: Somebody there to walk you through whatever you're dealing with, whatever you're feeling. Imagine the positive impact that could have on the world. And Pi is an early implementation of that. Pi.ai is where you'll find it on the web. I highly recommend people check it out.
[00:40:38] Steve: What I have noticed on my TikTok feed (again, don't judge me, people) is ads for "get your own AI girlfriend", with a visual representation of someone that looks quite realistic. And I imagine it's got something like an API from GPT-4, or maybe [00:41:00] one like this, where you chat to a girlfriend. I don't know that this is a good thing.
[00:41:05] Steve: I imagine if you get one of these, you could really go down a wormhole of her telling you exactly what you want to hear, and I don't know if that's good for the teenage boys out there. I'm not so sure. I do like the idea of having a therapist that's available all the time, and I imagine the type of feedback you'd get would be relatively positive, with all of the learnings and studies and ideas that your classic psychologist would have as well.
[00:41:33] Steve: But I'm not so sure that it's the same as someone looking you in the eye who's felt what you've felt, and being able to hold your hand, which is one of the things that I talk a lot about. AI can do it all, but sometimes I just want a human to do it. It's not that it can't be done; it's, do you want it?
[00:41:51] Cameron: Yeah, look, I agree with you, but,
[00:41:54] Steve: But it's a good measure, as part of the repertoire of options.
[00:41:58] Cameron: If [00:42:00] you've tried to get in to see a therapist in Australia in the last couple of years: not easy. They're booked
[00:42:04] Cameron: solid, and expensive.
[00:42:05] Steve: Expensive. About
[00:42:06] Cameron: 200 bucks a go. Yeah. It's like trying to get a financial advisor: very expensive, very difficult to get in.
[00:42:13] Steve: And bad advice. Nine outta 10 times, fucking terrible advice. I gotta be honest, nine outta 10 times financial advice is not worth a fucking pinch of shit.
[00:42:23] Steve: That's fine. To quote our boy from Boston, the super-intelligent guy in the movie... what's our boy's name? Matt Damon. Yeah: you're gonna spend 200 grand learning something you could have got with $8.60 of late fees from the library.
[00:42:37] Cameron: Yeah. Tony Kynaston, on my QAV investing podcast, always paraphrases Buffett: why would somebody who drives to work in a Rolls-Royce take advice from somebody who catches the subway? We always say, don't take financial advice from somebody who isn't rich and didn't get rich from investing.
[00:42:54] Cameron: Cuz if they knew what they were doing, they would be too successful to work as a financial
[00:42:59] Steve: advisor. So, [00:43:00] just on this, and this is really important (you and I both like the financial side of things): all too often someone can be really successful in one realm, and then people believe that they understand money. They're not the same thing.
[00:43:15] Steve: You can have a really successful business person who created a factory making plastic widgets, it doesn't matter; they've just done one thing. It's almost like a glitch in the matrix, where they understand one thing, they've been totally focused on that, and they have excess capital.
[00:43:29] Steve: It doesn't mean that they're good with money.
[00:43:31] Cameron: It's like, why would you listen to a guy who had one hit in the eighties talk about the dangers of AI?
[00:43:41] Steve: He was asking me. But he’s scared. Yeah.
[00:43:44] Cameron: All right, moving right along. More news: Meta, the company behind Facebook, Zuckerberg's company, is gonna make their next LLM free for commercial use. Wow, who would've seen this coming? The freemium giveaway strategy from... wait
[00:43:59] Steve: a [00:44:00] minute, but is it?
[00:44:02] Steve: Okay, so can I just tip in here? By the way, I love that we're up to news story eight, but that's okay, I think today's gonna be a long one. Okay, here's what I think they're doing. I think they're doing what they did with their APIs in the early days, where everything was really open until they got what they needed.
[00:44:17] Steve: They crowdsourced their development, and then said, yeah, we'll buy this and this. Thanks for coming, see you later. Twitter did it too. The classic old "we're open source, we're about the crowd, here's our API", until they get what they need, and then it's, yeah, about that, we changed our mind.
[00:44:32] Steve: Worked last time. Why
[00:44:34] Cameron: wouldn't they do it again?
[00:44:35] Steve: Worked last time, why wouldn't it work again? Exactly. That's exactly what they're doing. It's catch-up. Yeah, absolutely.
[00:44:41] Cameron: But in the short term, they’re gonna be making it free for commercial use to businesses.
[00:44:48] Steve: Yeah. Until the business becomes totally reliant on it.
[00:44:51] Steve: And in those terms and conditions, which no one will read, of course (you can always get an LLM to read them for you and say, what are the watch-outs), the watch-out is that you'll bring it in, and at [00:45:00] any point in time they can just switch it off. And then you've got a business which is reliant upon it, and then you have commercial risk in turning it off.
[00:45:08] Steve: It's a bait and switch. And they did that with Facebook pages. Remember, they said early on, build your Facebook audience and your community, and then you'll be able to reach them at no cost, because you've built a million followers on your Red Bull page. Everyone invested millions of dollars in building a follower base, and they said, oh, and now there's no organic reach.
[00:45:29] Steve: You've gotta advertise to reach the people you just spent money acquiring. They're just gonna do that again. And if companies fall for that, it's their problem.
[00:45:38] Cameron: If Meta are making this free for commercial use, then it's gonna put a lot of economic pressure on the other major players,
[00:45:45] Cameron: OpenAI and Microsoft and Google, et cetera, to do the same thing. Competition is good. So it'll be interesting to see how this plays out. I think you're right: I'm sure they have an agenda and a long-term plan for how they're gonna stitch up the market by doing this. But [00:46:00] it's gonna mean a massive proliferation of businesses adopting the integration of large language models.
[00:46:06] Cameron: If it's free for commercial use, even for a short period, everyone's going to be trying to figure out how to get the drop on their competition by integrating LLMs first into their service provision. We're gonna see a crazy amount of innovation over the next couple of years as all of the backend providers,
[00:46:28] Cameron: Meta, et cetera, race for market dominance in this space. And this gets back to one of Andreessen's points. Talking about regulation of AI in the US, and making his case against it, he says: we've already got a cartel of banks, a cartel of software companies, a cartel of social media companies, a cartel of telephony providers, a cartel of insurance companies.
[00:46:51] Cameron: Do we really want another cartel of AI? Is that really the best thing for our country and our economy? So he's [00:47:00] arguing for it to remain open. And he argues that the companies pushing for regulation, as we've mentioned in earlier episodes, have an agenda.
[00:47:10] Cameron: Head start.
[00:47:10] Steve: Yeah, head start, definitely, I think. And he mentioned too that existing laws already cover some of the issues. I think the issues for regulation aren't really at the competitive level; where it matters is actually at a developmental and nation-state level. So where we need the regulation is probably where you wouldn't get it.
[00:47:30] Steve: You'd get it in the wrong spot, where it would thwart competition, rather than at that top-level developmental element, which has an impact on nation states.
[00:47:42] Cameron: That's all gonna play out one way or the other, and people like us are just gonna be witnessing how it plays out and trying to take advantage of it while we can.
[00:47:53] Cameron: Speaking of things playing out, Steve: Kosmos-2. Kosmos with a K. I only read about it this [00:48:00] morning. Not the Sagan version. Classic though, that is. I always say the show that turned me into a science nerd was Carl Sagan's Cosmos.
[00:48:11] Steve: Mate, I've read all his books. Extraordinary man.
[00:48:13] Steve: In fact, you gave me my first copy of Cosmos, on DVD I think. Did
[00:48:17] Cameron: I? I've still got my copy of it. Fantastic TV series. Anyway, Kosmos with a K, Kosmos-2, is a multimodal large language model that has just been released. There's a live demo, and you can download the code on GitHub. This is an interesting space.
[00:48:35] Cameron: What you can do with Kosmos-2 in the live demo is upload a picture to it, and it will tell you what's in the picture. It creates a form of markdown that tells you what's going on in the picture. The idea being that now you'll be able to interface with AI using visual elements as well as text, not just having something like [00:49:00] DALL-E or Midjourney create images for you from text-based inputs.
[00:49:04] Cameron: It's reversing that, yeah. You give it images, and ostensibly video, and it will be able to work out what's going on in those pictures and videos, and use that as part of the tasks it's running in the background.
[00:49:18] Steve: So if it's telling you what's in the picture, Cam, can I ask, does it do it in 1000 words?
[00:49:23] Cameron: No, it's like a sentence.
[00:49:27] Steve: But you missed my joke.
[00:49:28] Cameron: Oh, "a picture's worth a thousand words", is that it? Very good. Very good, Steve.
[00:49:32] Steve: Yeah. It was for the children.
[00:49:35] Cameron: Now, it's not perfect. I tested it a bit this morning. I took one of Fox's photos of a Lego clone trooper on a tree stump, tried it a couple of times, and it kept telling me it was a chainsaw stuck in a tree.
[00:49:51] Cameron: So there you go, it needs a bit of work. But with a couple of other pictures that I threw in there, like some flowers and some other natural [00:50:00] objects, it immediately knew what they were. Now, this isn't groundbreaking stuff. We know that Apple Photos and Google Photos have had the ability for a while to recognize faces and objects; you can go into Apple Photos and say, show me all my pictures of cats, and it'll show you all the cats.
[00:50:18] Cameron: But this is taking it to a new level. I created a painting some days ago in Midjourney, of a sunflower painted with acrylics on canvas. I showed it that, and it told me it was a painting of a sunflower painted with acrylics on canvas. Quite a detailed level of visual analysis.
[00:50:40] Steve: Acrylics! That's interesting. I guess it's gonna go in all directions. The multimodal is interesting, the idea that things cross-reference in many different directions. Probably what you're gonna see for the next two years is a whole lot of AI being added to things. It's "just add AI", really, [00:51:00] is what's happening.
[00:51:01] Steve: Yes.
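For anyone wanting to try what Cameron describes from code rather than the live demo, here's a minimal sketch. It assumes the Kosmos-2 checkpoint published on Hugging Face under the name microsoft/kosmos-2-patch14-224, plus the transformers and Pillow libraries; the image filename is hypothetical:

```python
# Hedged sketch: image description with Kosmos-2 via Hugging Face transformers.
# Assumes the public microsoft/kosmos-2-patch14-224 checkpoint is available.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
model = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224")

image = Image.open("clone_trooper_on_stump.jpg")  # hypothetical photo
prompt = "<grounding>An image of"  # the grounding tag asks for located entities

inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=64)
raw_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Separates the caption from the bounding boxes of each named entity
caption, entities = processor.post_process_generation(raw_text)
print(caption)   # e.g. a one-sentence description of the scene
print(entities)  # [(phrase, char_span, [normalized bounding boxes]), ...]
```

The entities list is the "markdown-like" grounding Cameron mentions: each phrase in the caption comes back tied to a region of the image, which is what lets a downstream task act on what's where in the picture.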
[00:51:02] Cameron: Now I’ve got a bunch of other news stories. We don’t have time, but just a high level. The first drug developed by generative AI has gone into human trials. Google is reportedly making a huge breakthrough and quantum computing and GitHub and Twitter among others have just started to lock down their services on one hand, supposedly to prevent.
[00:51:28] Cameron: Large language models from feeding off of them, but probably in, in order to try and come up with a way of making large language models pay to feed off them, they’re gonna create a new think it’s level of
[00:51:41] Steve: access. Think it’s pay. Yeah. Especially if they’re a data source that has a lot of value. I can see how GitHub, has it in natural language processing for code.
[00:51:50] Steve: I can see that, how that has a lot of value. Twitter I think will be really valuable. Maybe not as much now as it might have been a few years back, but very [00:52:00] valuable for large language models to infer news and give up-to-date information cuz it’s a fire hose of what’s happening now. So I think that would be very valuable training data set for LLMs.
[00:52:11] Steve: Read it also so it’s clearly, sorry. Keep going. It seems clearly to me that this is to create an economic model. Where they can enforce
[00:52:21] Cameron: payment. Yeah. Reddit just massively increased the pricing for a p i access to their data shutting shutting down effectively some of the Reddit apps that were quite popular like Apollo, much to the annoyance of millions of users.
[00:52:37] Cameron: But again I guess it’s part of their attempts to cash in on large language models trying to trade on their data.
[00:52:48] Steve: What is interesting is that as soon as the value of this data became clear, corporations moved very quickly to protect something that [00:53:00] they know has value.
[00:53:01] Steve: However, for the last 15 years, while our data has been sucked up and bundled up in the surveillance capitalist model, humans have been very bad at protecting something of value. Now, that might be because it's very hard for you to extract your dollar twenty's worth of value from the data that goes into the revenue stream for the big data set.
[00:53:20] Steve: But it is interesting that companies protected their data very quickly as this emerged, and yet we stupid humans haven't done anything. We've said, sure, have it, so long as I get a free photo, like that. That, for me, is a really interesting moment in time, seeing who protects what. And humans don't. So that was interesting.
[00:53:39] Steve: And the first drug developed by generative AI: for me, that's one of the areas that's probably not getting enough attention. We're so focused on what AI can do for us in our lives and our day-to-day. The idea that AI could develop drugs and work in areas where we can't see answers, for me, is probably where the biggest benefits from [00:54:00] AI will emerge in the long run.
[00:54:01] Steve: Things like that, abundant energy, and medicine.
[00:54:04] Cameron: Yeah, scientific breakthroughs. That's one of the big hopes for it. And speaking of...
[00:54:08] Steve: And by the way, just on the quantum computing breakthrough: every three months I see a quantum supremacy article. I'm just bored of them, is all I can say to that.
[00:54:20] Steve: It's like, yeah, quantum supremacy has just been achieved. Except that...
[00:54:24] Cameron: It hasn't. That's how I felt about AI news for the last 20 years, Steve, until OpenAI came out with GPT.
[00:54:32] Steve: Exactly. I'm sure when the quantum supremacy moment really happens, we'll know. Just like we know now that AI is democratized.
[00:54:39] Cameron: but there’s a lot of progress being made in quantum computing.
[00:54:43] Cameron: It’s still nowhere near practical use for commercial or personal, but they’re making a lot of progress.
[00:54:51] Steve: Yeah. But the idea of quantum supremacy is so interesting because, theoretically, there's only ever gonna be one. Once someone develops it, [00:55:00] it usurps everything else. It's winner-takes-all: once you develop it, it is so much better than everything else out there.
[00:55:11] Steve: It just sucks everything in, almost like a quantum black hole. It just wins. But anyway.
[00:55:17] Cameron: It's the same theory with whoever develops an AGI first, right?
Steve: Yeah, exactly.
Cameron: All right. Speaking of utopian views on what AI will accomplish in science and medicine, let's spend a little bit more time talking about Marc Andreessen's "Why AI Will Save the World" thesis.
[00:55:39] Steve: Ah, the deep dive. Yeah, look, deep dive, Cam.
[00:55:41] Cameron: Time for the deep dive. We're running late, so let's make it not so deep, and we've already touched on bits of it. But basically, Marc is saying, look, AI is gonna save the world. Not in the way that I posited in our last episode, where I said I'm very pessimistic about humanity's [00:56:00] chances of surviving the century and we really need AI to be our savior.
[00:56:01] Cameron: He really believes there's only good gonna come out of AI, right across the board, and anyone who says anything different doesn't know what they're talking about. What are your thoughts on his arguments?
[00:56:16] Steve: Oh, look. He presents a very convincing argument, but it's heavily weighted to his mindset.
[00:56:23] Steve: And I think it's a bit skinny. I don't think he presented any of the arguments on where things could go wrong. When you read through his piece, he'll happily go down the road of where the positives can come from, but he doesn't really talk about the potential negatives. He just writes them off: oh, the movies about the bad robots.
[00:56:47] Steve: And the reason I think he doesn't do that is I don't think he goes deep enough into what intelligence is. One of the most [00:57:00] interesting things is his viewpoint on energy: that intelligence needs energy, and if it uses a lot more energy for its intelligence, then it doesn't operate like us.
[00:57:09] Steve: I feel like he didn't explore enough what intelligence is and how it can fork and go down a different road. That's what I felt. It was very convincing and very compelling, and even when you listen to him in some of his interviews, it's incredibly compelling. But I felt like it was leaning too far to one side.
[00:57:28] Steve: It was almost quasi-political. It was like hearing the arguments from a left or right politician.
[00:57:34] Cameron: Oh, very much, and it was very much a "pro-American capitalism is gonna save the world" view, even though he is very skeptical about some of the initiatives to regulate. He has his whole Baptists and bootleggers analogy from the days of Prohibition, which I think is interesting.
[00:57:54] Cameron: I agree with a lot of that.
[00:57:55] Steve: It's a good idea.
[00:57:56] Cameron: It's a good idea. But one of my [00:58:00] criticisms of his argument: he shits all over the idea that the AIs are gonna become sentient and wake up like Skynet and turn on us. He says that's just not how computers work.
[00:58:12] Cameron: They don’t work that way. They need to be told what to do. And one of the things he doesn’t touch upon
[00:58:20] Steve: is this is the point. He’s putting a boundary, he’s putting a boundary based on yesterday around what could evolve.
[00:58:25] Cameron: That’s one, one side of it. The other side of it, he doesn’t really touch upon the idea of bad actors like bad actors that take a sufficiently advanced ai, and by bad people that wanna do harm let’s call them terrorists or people that are just disgruntled, that instruct an AI to do damage to the world’s infrastructure.
[00:58:47] Cameron: It doesn’t have to be the AI becoming sentient and deciding to do it by itself.
[00:58:51] Steve: That doesn’t be right. In fact, that’s probably, that’s a higher risk because you, we’ve seen all forms of terrorism to this point have [00:59:00] been secondment of industrial tools, whether it’s bombs or weapons or Boeing planes being flown into buildings.
[00:59:09] Steve: It’s the secondment of tools that people have access to. Yeah, he does
[00:59:13] Cameron: argue against the paperclip problem, which Oh, sure. I think we’ve talked about before on the show. And he says basically, if you could build a machine smart enough to turn every piece of matter on the planet into a paperclip, then it would also be smart enough to know that’s not a good thing to do, and it would counter demand, demand its own instructor instructions.
[00:59:36] Cameron: And then, mate, look, there’s some logic to that argument that, a machine that was smart enough to do that level of damage would probably be smart enough to see the floor in doing that to its own existence, if nothing else. Whether or not it cares about its own existence is another question.
[00:59:53] Steve: But this is the point, right?
[00:59:55] Steve: The point is caring. It's that form of [01:00:00] intelligence that, to me, matters the most. It isn't that if it's intelligent enough, it'll see the bad side and won't do it. The more interesting point is: does it care? Why should it care? And we've already seen, with the AIs that are emerging, that they all have really different personalities.
[01:00:18] Steve: One of my favorite things to do is to look at the same prompts in DALL-E versus Midjourney. The exact same prompts give different outputs. That's personality-based. That's based on the code that goes into it and how it's been trained, just like people. And so if you have an LLM or any AI that is out there, and we just talked about open-source AI, any disgruntled person with a laptop and an agenda could develop an AI that is disgruntled just like they are. To assume that AI getting more and more intelligent just won't do bad stuff assumes a kind of mono-thinking: that there's one AI. But there's lots. And if there's bad actors, then there'll be bad AIs too. And [01:01:00] if they're super smart, then, like you say, you don't require sentience for it to develop its own agenda or objectives.
[01:01:10] Steve: And even if it was benevolent, it depends on who it's benevolent to. Is it benevolent to humans or the earth? Cuz if it's benevolent to the earth, it just gets rid of the humans. We're the number one problem on the planet. It's like George Carlin: the earth is fine, humans are fucked.
[01:01:25] Steve: Climate change doesn't affect the earth. That's fine. The earth couldn't give a fuck. Same with every other species.
[01:01:28] Cameron: It’s a, except humans. It’s survived the dinosaurs, it’ll survive us. Here’s a summary of andreessen’s thoughts on where we’re going. He says, in our new era of ai, every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.
[01:01:47] Cameron: The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of Infinite Love. Every person will have an AI assistant slash coach slash mentor slash [01:02:00] trainer slash advisor slash therapist. That is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful.
[01:02:07] Cameron: The AI assistant will be present through all of life’s opportunities and challenges maximizing every person’s outcomes. Every scientist will have an AI assistant slash collaborator slash partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every business person, every doctor, every caregiver will have the same in their worlds.
[01:02:29] Cameron: Every leader of people, CEO, government official, nonprofit president, athletic coach, teacher, will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.
[01:02:46] Cameron: Productivity growth throughout the economy will accelerate dramatically driving economic growth, creation of new industries, creation of new jobs and wage growth, and resulting in a new era of heightened material prosperity across [01:03:00] the planet. Scientific breakthroughs and new technologies in medicines will dramatically expand as AI helps us further decode the laws of nature and harvest them for our benefit.
[01:03:09] Cameron: The creative arts will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before. I even think AI is gonna improve warfare, when it has to happen.
[01:03:27] Steve: Improve warfare? Keep going.
[01:03:29] Cameron: By reducing wartime death rates dramatically.
[01:03:33] Steve: We're gonna reduce wartime death rates? Sorry.
[01:03:36] Cameron: Every war is characterized by terrible decisions made under intense pressure, and with sharply limited information, by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.
[01:03:55] Cameron: In short, anything that people do with their natural intelligence today can be done [01:04:00] much better with AI. We will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.
[01:04:13] Steve: Okay. The first part of that was the most interesting.
[01:04:16] Steve: If there was ever a utopian statement, that was it. I'll just come back to the basic one: every kid will have an AI by their side, this, that, all of that. Is that the world you wanna live in? That's the first question, and the most important one. So, to quote Greg Giraldo: this is fantastic.
[01:04:33] Steve: "It takes a lot of drunk daddies missing dance recitals before you decide to blow a goat on the internet. And if that happens, where's it gonna leave me on a Friday night with my new high-speed connection?"
[01:04:47] Steve: So that's great. Does that mean that Joey the bad dad can just go off to the pub and goof off, cuz the AI's got this? Or does little Mary want her dad to just sit down with her and have a look at the maths book together? Where is [01:05:00] the humanity? That's the whole point.
[01:05:02] Steve: It's great. We can have all of that. But the deadbeat dad's...
[01:05:04] Cameron: The deadbeat dad's AI will be telling him that he should be sitting down with little Mary and having quality time.
[01:05:13] Steve: Whatever it is, the point is, yeah, you can have all of that. You can have all of it, and that's all great and good, and efficiency's great.
[01:05:26] Steve: But there is something beautiful and special that happens with every species other than spiders, which just eat each other. That aside, with most mammals and social creatures, it's the moment, right? Even when an AI is there. Let's just take the idea of a movie. You wanna watch a movie with someone. You wanna experience something with someone whose heart beats in the same way, who you know feels what you feel.
[01:05:54] Steve: And even though the AI can pretend that it feels what you feel, it will never feel what you feel, and [01:06:00] all of the efficiency and all of that stuff is all great, but there is just a gaping hole right next to that entire statement.
[01:06:07] Cameron: Except, as I said, your AI therapist will be saying to you, hey Steve, I think you should put me down and go spend time with your kid or your wife.
[01:06:15] Cameron: You're spending way too much time online.
[01:06:17] Steve: And that was a notable exception in the statement that you read.
[01:06:22] Cameron: No. He said everyone will have their own therapist, their own mentor, their own coach. That's what therapists, mentors, and coaches do: they make your life better.
[01:06:30] Steve: He didn't say in there "and it's going to encourage you to be more human". It might do that, but I just felt like all of that efficiency is real and true and will be there, and yet it really misses the mark.
[01:06:43] Steve: A lot.
[01:06:44] Cameron: One of the interesting comments he made on his own podcast was about the writers' strike in Hollywood. He said what writers are missing is that very soon they will be able to use AI to help them write a script and then say [01:07:00] to the AI, now make this movie for me.
[01:07:02] Cameron: Cut out the actors, cut out the directors, cut out the camera people. Which I agree with: as the writer, I can now make the whole film. They should be embracing AI adoption. I agree with that a hundred percent. It's not the writers that should be worried; it's the actors and directors that are gonna be cut out, not the writers.
[01:07:20] Cameron: Yeah.
[01:07:21] Steve: I agree. And even the distribution systems. One of the things that you hoped for, and the internet had it early on before algorithms ruined everything, was that the best work used to bubble up to the top, and it doesn't in many places anymore. It does on Reddit a bit, because they've got a smart algorithm that has time-based ranking and voting and commentary, rather than the pure algorithms on other social media.
[01:07:45] Cameron: Well, I dunno if you saw the thing this week, but Amazon's bestseller lists are just full of AI-generated nonsense. AI-generated books.
[01:07:53] Steve: I didn't see that.
Cameron: AI-generated books that are just gibberish, but the whole bestseller list has [01:08:00] been gamed as well. So all of these books now, hundreds and hundreds and hundreds of gibberish books printed by AI, are in the bestseller lists. It's spam books, the same way spam email worked.
[01:08:14] Cameron: We have spam books now.
[01:08:20] Cameron: So listen, in our Technology Time Warp section, you actually have some notes here where you agree with some of Marc's arguments. I wanna get to the UBI thing.
[01:08:30] Steve: I agreed with a lot of what he said, and in the Tech Time Warp he talked about the economics of what happens when you have a new development.
[01:08:40] Steve: And there’s a really simple explanation for it. I think I can even explain it a bit more simply than Mark did. But Andreessen said that what happens is that you get an efficiency with a new technology and it creates new industries and new jobs. But there, there’s a really simple way to, to explain why technology up until [01:09:00] this point hasn’t resulted in mass unemployment where no one can do anything and robots does anything.
[01:09:07] Steve: It’s a fallacy in the first instance in that people need to remember that. No technology can take over everything and get rid of every job because if people don’t have jobs, they don’t have money. And if no one can buy anything, the whole thing falls apart, right? And you can’t even have all that money in the 1% hands either, because the 1% only get rich taking money from the 99%.
[01:09:24] Steve: So just put that argument to the side. It's really basic economics. With any new technology, whether it's a factory, the wheel, the automobile, the internet, or media, here's what happens. Technology reduces the cost of production. The cost of production comes down.
[01:09:43] Steve: Something that used to cost a hundred dollars costs $50. That $50 is freed up. Okay? Competition: when this thing gets cheaper, everyone competes on price and pushes the price down, so the entire industry gets cheaper. [01:10:00] Lower prices free up capital. That capital then goes to new things. Those new things have new jobs and new industries that come around to support them, and then the cycle repeats. And this is how the economy grows.
[01:10:16] Steve: The simplest example I can think of is music. Before we had recorded music, we had the medieval minstrel, and the only way you could hear music was if someone played an instrument for you. People would pay to hear them, or go to theaters, or that person would travel around, right? Then we developed a new technology. Now, I'm just making these prices up, cuz I dunno what the prices were in medieval days.
[01:10:40] Steve: Let's say it was a hundred dollars to hear 12 songs. Then the phonograph came out. So you're buying a record, and you can get 12 songs for $30 and listen to them as many times as you want. It was amazing. And then we had the radio, where you had free songs
[01:10:58] Steve: all the time that you could listen to, but the price was [01:11:00] advertising. And the ads told you to go and buy that album, cuz they had the sample songs, and then you'd buy more music. Then we had the tape, and then the CD. That business model was the same: $30 for a handful of songs.
[01:11:18] Steve: But then when streaming comes along, all of a sudden you can have as many songs as you want. So what happened to that $30? It went from 12 songs at $30 to as many songs as you want for 10. That money just changes places: it goes into your new phone, goes into your data, goes into your Beats headphones.
[01:11:36] Steve: It just changes places.
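Steve's reallocation argument, reduced to a toy calculation. A minimal sketch: the dollar figures are the ones quoted on air, and the "where the freed money goes" labels are his own examples; the layout is ours.

```python
# Toy model of the reallocation cycle, using the music numbers from the
# conversation: $30 once bought a 12-song album; streaming now costs
# about $10 for as many songs as you want, and the freed-up $20 moves
# to new categories (phone, data plan, headphones).

budget = 30.0  # what a listener used to spend on an album

eras = [
    # (era, cost of the music itself, where the rest of the $30 goes)
    ("LP/CD era", 30.0, "nothing left over"),
    ("streaming", 10.0, "new phone, data plan, Beats headphones"),
]

for era, music_cost, rest in eras:
    freed = budget - music_cost
    print(f"{era:10s} music ${music_cost:5.2f}  freed ${freed:5.2f} -> {rest}")

# Output:
# LP/CD era  music $30.00  freed $ 0.00 -> nothing left over
# streaming  music $10.00  freed $20.00 -> new phone, data plan, Beats headphones
```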
[01:11:38] Cameron: And this is what happens: the artists would say, now they don't get as much money.
[01:11:42] Steve: They don’t. And they don’t get as much money. And I saw a thing with Snoop Dogg the other day, which astounded me as to how silly he was. It was at a conference, a tech conference.
[01:11:52] Steve: I just love any tech conference that gets to Snoop in, is the kind of tech conference I wanna be at. And so Snoop was up there on stage saying, man, I [01:12:00] don’t understand. I won’t do the Snoop accent. But he was saying, I don’t understand. I used to sell an album, sell a million copies, and I’d get all this money and now I have a billion downloads.
[01:12:08] Steve: Why don’t I get any money? It’s because no one’s paying $30 for 12 songs anymore. Snoop, they’re paying $10 for as many listens as they want. There’s not as much money to go around. It’s really
[01:12:19] Cameron: simple. Here’s my argument against the complaints from these artists. I’ve been using this for decades in most professions.
[01:12:27] Cameron: You you used to be in marketing. I used to be in marketing. In any, in, let’s say you spend your life and become, you go to university you get a career and you become very good. At what you do, you are one of the best people at what you do, and you work really hard at it, and you work eight hours a day, nine hours a day, five days a week, and you’re very good.
[01:12:49] Cameron: You will earn probably a real, a relatively good living above average. You might earn above average a hundred thousand, 200,000, $300,000 a year, [01:13:00] working 80 hour weeks, working really hard. You don’t make 20 million a year unless you are, a, a CEO or something like that. A handful of CEOs, so ceo,
[01:13:11] Steve: a star, let’s put ’em in the star category.
[01:13:13] Steve: Whether it’s a corporate star, an entertainment star, or that’s
[01:13:16] Cameron: whatever. That’s my point. Like the entertainment industry. You talked about where it was 200 years ago, right? Where you had to go out and perform live music to small audiences, and you got paid, you got to clip the ticket. In the 20th century, we had this aberration in the media business.
[01:13:34] Steve: In other words, an aberration. It was a...
[01:13:36] Cameron: A short-term aberration, where control of the channels to reach mass audiences was in the hands of television, film, radio, and record companies. Media companies.
Steve: Yep.
Cameron: They were able to limit the number of artists that could reach a massive audience, and consequently you had movie stars making 20 million [01:14:00] a film.
[01:14:00] Cameron: You had bands making 20 million a year from record sales. That is obscene. No other industry works like that. It was an obscenity, it was an aberration, and it didn't last very long, because...
[01:14:14] Steve: It was absurd. And they should be thankful that they had any of it at any point, ever. They should punch the sky and go, I got away with it for a little while, Snoop.
[01:14:23] Cameron: Yeah. That is not a sustainable model. And here's the other thing, too: the vast majority of working actors barely make a living. The vast majority, 99%. The vast majority of really talented, hardworking musicians barely make a living. You had these industries where 0.01% of the talented professionals were making millions and millions of dollars from putting in a regular work week, and the other 99.99% were barely [01:15:00] surviving.
[01:15:00] Cameron: The economics of the industry was fucking ridiculous, it was so weighted. So yes, you're not gonna make millions of dollars a year anymore. Fuck it, get over it.
[01:15:08] Steve: Same with startups now. Startups are the same bubble. It's an aberration where you have capital flows that are way too big, going to things that aren't really worth the valuations they get. And people go, oh, but they're really talented...
[01:15:20] Cameron: No, fuck off. Go on TikTok.
[01:15:23] Steve: They're really lucky. Luck is way more important than talent, I promise you.
[01:15:27] Cameron: I discovered a guy on TikTok this week who is playing Led Zeppelin songs, the entire song, playing guitar and drums at the same time.
[01:15:38] Steve: I saw him too.
[01:15:40] Cameron: And singing.
Steve: And singing and doing the drums, doing the whole deal. He was doing that.
[01:15:43] Cameron: Fucking insanely talented.
[01:15:45] Steve: I watched it about 10 times. I go, this guy is blowing my mind. See, TikTok knows that you and I see the same stuff as each other as well. Our feeds would be so similar.
[01:15:53] Cameron: How fucking amazing. That's one of the great things about TikTok to me: [01:16:00] just this reassertion that there are millions of insanely creative, talented people out there.
[01:16:02] Steve: Yeah, I agree. It really is great at bubbling up talent.
[01:16:08] Cameron: What I always say to people in creative industries, and this has been my motto for myself doing podcasts for 20 years...
[01:16:16] Steve: Cam Reilly, the OG podcast guy. If you ever had any doubts, then do not.
[01:16:22] Cameron: Thank you for the plug. I often think I'm the only person who remembers my role in podcasting.
[01:16:25] Steve: I remember everything you've done for me, Cam. You're the man.
[01:16:28] Cameron: Including getting you on Twitter. Anyway: as a creative person, if you can put in a solid week's work and make a reasonable six-figure salary doing something you love, you should be fucking on top of the world.
[01:16:44] Cameron: You should not have expectations that you're gonna be swimming in a Scrooge McDuck swimming pool full of gold coins.
[01:16:53] Steve: But I wanna see him dive into the gold coins and just fucking bang his head, because I don't know if you can [01:17:00] actually dive into coins.
[01:17:02] Cameron: It's just one of my things.
[01:17:03] Cameron: Anyway, that's my rant done. Speak to me about UBI.
[01:17:05] Steve: I will, but just on that, on the music thing though. Yeah, it was an aberration, and the business model for a long time was playing music live. Then, when recorded music came out, you would only play live as a sample to sell the records. The music industry's had a really clear reversal: now it's get a lot of people listening for free, to sell really high-priced concert tickets. And so Taylor Swift: billions of downloads, but concert tickets that are hard to get, and I think the revenue on her tour is about 4 billion.
[01:17:36] Steve: Oh yeah, that's a global-corporation kind of level. But artists out there need to understand that the model is no longer make money out of selling recorded music. Now the model is: people hear your music and want to see you, and/or they use your music in advertising or licensing or other rights. That's the model.
[01:17:55] Steve: It's a shift, and I agree a hundred percent. It was an aberration, and it's never going back.
[01:17:59] Cameron: The other model [01:18:00] I've been arguing for 20 years, and it's my model for the rest of my podcasts, is you build an audience that loves what you do, whether you're an author or a musician or a filmmaker or a podcaster, and you say, hey, I tell you what: you like what I do, pay me 10 bucks a month and you get exclusive access to my content, because you love what I do so much.
[01:18:20] Cameron: This is Kevin Kelly, right? 1000 True Fans. If you can get a thousand people to pay you 10 bucks a month to get your content, there's your six-figure income. You're now earning a six-figure income to do what you love. That is a great fucking deal.
[01:18:37] Steve: You get exactly a thousand people to pay you 300 bucks, it's 300 grand a year. That's extraordinary. And if you've got something that's compelling and niche, and you can do it for a global audience, you don't need 10,000 people living within a five-kilometer radius of you like you did in the old business days.
[01:18:53] Steve: That's one of the big opportunities, and I'm a real believer in that. My business model is very similar to what the [01:19:00] musicians need to realize: I give away a lot of my content for free, and then people pay to see me do it live. That's my business model.
[01:19:07] Cameron: You’re talking about your porn career or your.
[01:19:10] Cameron: Futurism,
[01:19:11] Steve: mainly porn? No. My my wife often says, if, if I get a back channel message from someone, I say, Hey, look at this person really liked me. Someone who might be interested in me and not understand that I’m married with kids. My wife says, you really should start an only fence.
[01:19:25] Steve: And I’m not sure if
[01:19:26] Cameron: she’s joking. Yeah. I’ve my son knows a couple of people with big only fence channels they make. Oh goodness. Crazy money, man. Listen, before we run outta time, Steve, which we did 20 minutes ago tell me why you hate UBI so much.
[01:19:42] Steve: Oh God. UBI is a total hoax. It does a couple of things.
[01:19:47] Steve: The first thing is, I don't know that humans just want to be able to do what they wanna do all day long anyway. There's something beautiful about pain. There's something beautiful about effort: [01:20:00] doing work, being rewarded, the vicissitudes of life. Otherwise it's Aldous Huxley's Brave New World, where you just have everything you want.
[01:20:09] Steve: It's a little bit like The Matrix as well. I don't know that we want that. But also, this idea that robots are gonna take over and we need to give people a universal basic income is just basically hoarders of capital saying, give the proletariat some crumbs. And it fails to realize that new jobs emerge.
[01:20:27] Steve: It's a lazy way of just saying, give people some money, instead of actually redistributing income and/or retraining people for the changes in society. So UBI is a total hoax. I wrote about why it's a hoax and how it'll just evaporate, because all the jobs aren't gonna go away. They're just gonna change places, like they always have.
[01:20:45] Steve: There's not that many bison hunters anymore. Ten years ago there was no such thing as an app developer, and now we've got millions of them. Three months ago there was no such thing as a prompt engineer. This stuff just continues, again and again. So it's just a total hoax for people who don't [01:21:00] understand economics. And it's boring, quite frankly.
[01:21:01] Cameron: So you can't see a version of the future where a combination of AI and robots are able to do the jobs that most humans do, far faster and far more economically?
[01:21:15] Steve: Yes, I can see that world, but I know that will free up capital, and humans will want humans to do things.
[01:21:23] Cameron: Why would you want humans to do things if AI and robots can do them faster?
[01:21:25] Steve: I can give you a hundred examples of it right now.
[01:21:28] Steve: I can get a coffee made by a machine that is as good as or better than any barista's coffee, but I want to pay $5 and watch the guy make it for me. Why? Same reason I can listen to the perfect version of a song and still go to the concert. I just did that earlier today, and I bet you did it this week as well.
[01:21:42] Cameron: No, but why? Why do you wanna watch a barista make your coffee?
[01:21:46] Steve: I just want to go there. I just wanna have a social interaction.
[01:21:47] Cameron: You're like a cotton plantation owner in the southern part of the US in the 19th century. I feel like you just wanna watch your slaves out there in the field.
[01:21:56] Steve: You're judging me based on the color of my skin. [01:22:00] Hey! Judging me based on the color of my skin!
[01:22:02] Cameron: "Make me that coffee, boy."
[01:22:04] Steve: Hey, that's terrible.
[01:22:06] Cameron: That's what it sounds like. Why do you wanna watch somebody make a coffee?
[01:22:08] Steve: No, it's not that. It's not that. The point is, I'm talking about social interaction and appreciation and the artistry of a human doing it, even though a machine can do it just as well.
[01:22:18] Steve: I'm talking about the fact that, yeah, I can listen to the perfect...
[01:22:21] Cameron: The perfect version of Beyonce, yeah. But see, this is the argument we made about the movie industry a couple of weeks ago. That's a 20th-century mindset. That's your problem: you appreciate...
Steve: You haven't let me finish yet.
Cameron: ...human artisans doing human stuff.
[01:22:37] Steve: Yeah, and I will appreciate human artisans doing things with new sets of tools that are AI tools, and I'll pay a premium. But will your children?
[01:22:47] Cameron: Yes, they will. I've got this experiment with Fox recently...
[01:22:49] Steve: but it’s just new tools.
[01:22:51] Steve: I’m not saying that. Old worldy. I’m saying even when new tools arrive, it’s the human layer on top that creates the value. So we [01:23:00] go to the Beyonce concert. She sings a far less perfect version than you hear on Spotify, but there’s gonna be AI and visual backdrops and all sorts of interesting things. And tech used to get into the conference, facial recognition to make sure that you’re not one of these bad dudes at concerts.
[01:23:14] Steve: In you come. You do. I’m not saying that it doesn’t involve ai, I’m saying that the money just changes places. And even though AI can and will do a lot of things will drop off that we used to do that we don’t do anymore, and new things will come in, is my point.
[01:23:28] Cameron: Yeah, I’m not sure about that. I’ve had this interesting experience with Fox recently.
[01:23:32] Cameron: So I’ve told you that I’ve been using chat e p t for a couple of months to write bedtime stories for him. And it’s and it’s evolved how we did it. So initially I would create story prompts, write a story about this, write a story about that. Then he started suggesting prompts, write a story about this, write a story about that.
[01:23:51] Cameron: Now he’s at a point where he doesn’t want any prompts. He says, just tell it to write me a story. And I, [01:24:00] if I try and fuck with it, which I did last night, I said, write a bento story that merges Minecraft and Star Wars. I got half a sentence into the story. He goes, no, stop. What did you tell it? And I told him what I told him.
[01:24:14] Cameron: He goes, I don’t want that. I don’t want you to influence it. I wanna see what it generates. Even though the stories it generates are shitty, he wants to see what it produces. He wants a clean, but it doesn’t AI result.
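For what it's worth, the workflow Cameron describes is only a few lines against the OpenAI API. The sketch below is hypothetical, not his actual setup; the model name, system prompt, and function name are placeholders.

```python
# A hypothetical sketch of the bedtime-story workflow using OpenAI's
# Python client (v1 API). Assumes OPENAI_API_KEY is set in the
# environment; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def bedtime_story(prompt: str | None = None) -> str:
    # Fox's preferred mode: no steering at all, just "write me a story".
    user_message = prompt or "Write me a bedtime story for a nine-year-old."
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": "You write short bedtime stories."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# The "influenced" version Fox rejected last night:
# bedtime_story("Write a bedtime story that merges Minecraft and Star Wars.")
```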
[01:24:27] Steve: I get it. I get it. But it doesn't mean that he only wants that in his life. My whole proposition with technology, in the future and the past, has always been that it's not this or that. It's this and that.
[01:24:38] Cameron: He also loves reading all of Roald Dahl's books, so...
[01:24:40] Steve: There you go. It's this and that. How old is Fox now?
Cameron: Nine.
Steve: Okay. All right. We're gonna finish off with the Futures Forecast.
[01:24:50] Cameron: Get into it, man.
[01:24:55] Steve: Real quick: the productivity software bubble will finally burst. I [01:25:00] think AI will eat up lots of it. There are a lot of SaaS platforms, especially the work productivity stuff, that I think actually just create more work. The work becomes managing the productivity software. We've seen a couple of big Australian companies fall off the mantle somewhat.
[01:25:15] Steve: Atlassian’s lost half of its market capitalization. It’s down from a b and down to about 40 billion. By the way. That company still loses money. It actually doesn’t make a, it’s cashflow negative. Another one is Canva, which I think AI could eat into Canva. They might embrace or they might get killed, I’m not sure.
[01:25:33] Steve: But Canva, has raised 1.1 billion in capital and has never actually had up to a billion in revenue. If you add up all its revenue since it existed. And yet for some reason, it has a 25.6 billion valuation. I don’t know how that works. So my forecast is that AI will eat into many of the SaaS platforms.
[01:25:55] Steve: We’ll have our personal ais and the giant bubble on productivity [01:26:00] stuff is gonna go away.
[01:26:02] Cameron: Because people will just replace all of those products and services with some sort of, with
[01:26:06] Steve: AI tool, with their own personal LLMs, gtps and AI tool. Because the problem with many of these ais is that you have 10 different people all use a different productivity tool, and it’s a fucking disaster.
[01:26:18] Steve: It’s like everyone’s speaking different languages, or we don’t use the same roads or different phone systems. There are certain things that need to be natural monopolies. And I think that most of the software on communication and connection, you just don’t need it. I’ve really just used text messaging and an email and that’s it.
[01:26:33] Steve: And I just don’t buy into it. And I just think that bubbles burst. And if you look at a lot of the SaaS platforms that, have unicorn valuations, most of them have terrible revenue and have been net negative in terms of their return on capital.
[01:26:46] Cameron: It’s also interesting to see the future of things like Adobe, like Photoshop.
[01:26:50] Cameron: Came out with its beta version a month or so ago that had generative ai and you could take a, I’ve been playing with this a lot. You could take a portrait photo and [01:27:00] just expand the canvas size to landscape and tell it to fill in the background and it will do an amazing job. That was cool. Yeah. But then in the last week, mid Journey came out with their latest version, does exactly the same thing.
[01:27:14] Cameron: You can create an image in Mid Journey and then say, alright zoom out. And it’ll zoom out on the zoom feature is insane. It’s great, isn’t
[01:27:22] Steve: it? It’s so good. You just watch it and go, how, and like you watch it, first of all, you go how to do that, and then you’re like, of course. Yes, of course. Yeah, of course.
[01:27:30] Steve: Exactly. That’s what the zoom out should look like. Yeah. The zoom out and zoom in is crazy good. It’s crazy
[01:27:35] Cameron: Crazy good. And we have to be rapidly approaching the point where graphic designers, whether they're using Photoshop or Canva, and photographers are in trouble. Like, I'm making images for my website now, not only the
[01:27:51] Cameron: main images on the website, but also for every newsletter that I write, every blog post that I write, creating images where in [01:28:00] the past I would've had to either employ a graphic designer, or go and find some sort of commercially available piece of designed art or a photograph that matched the story.
[01:28:12] Cameron: Now, like, I wrote a blog post yesterday about thinking about your share portfolio during a market crash the same way you think about owning a house when property prices crash: this is an asset, I know it's gonna recover, I should just hold onto it. So I wrote the blog post and I thought, okay, I want a picture of a house crashing outta the sky.
[01:28:33] Cameron: Get into Midjourney: give me an image of a house falling outta the sky and crashing on the ground. Boom, there you go. Okay, I wanna talk about somebody panicking: give me an image of a woman panicking and screaming and running out of a house. Boom, there's your image. Stick it in.
[01:28:46] Steve: I do the same thing.
[01:28:48] Cameron: The images are crazy, and I can just make them in 30 seconds.
[01:28:55] Steve: It’s astounding. And they’re yours. Yeah, they’re yours. And it’s like you have to worry about, attribution or creative commons or [01:29:00] any of that. Yeah. Or paying for use. But likewise, you can find ones too where.
[01:29:03] Steve: You just say I do a lot of Google searches cause I love what Mid Journey churn out. It really suits. I like, it’s, again, I like its personality. It’s got that cyberpunk ethic to it. And I often go to mid Journey when I just want an image for a LinkedIn post or something where it doesn’t really matter.
[01:29:18] Steve: And I’ll just say, The prompt and then just write Mid journey and there’ll be about 10 different mid journeys and I’ll just pick one cause they look cool. Yeah.
[01:29:25] Cameron: And I’m in the process of trying to figure out how to use mid journey to create art that I wanna hang on my walls and playing with prompts.
[01:29:34] Cameron: Yeah that’s actually nice. And I’m really working hard on, the prompt engineering of it to create a certain kind of aesthetic that’s my own. Yeah. I get my own, I’m using Mid journey, but I’m toying with it to put on my walls, like original art that I wouldn’t have been able to produce in Photoshop in a million years.
[01:29:52] Cameron: And then creating the basics of it in mid journey, but then drawing, bringing it into my iPad, getting out my Apple [01:30:00] pencil and manipulating it further and adding more layers on it. Then sticking it back in the mid journey
[01:30:06] Steve: and then say, iterate in this way. Exactly. Yeah. Yeah. Cuz that, and that’s the bit where you put your imprint on the tech.
[01:30:13] Steve: And just, yeah. And that’s what we have to get good at. I think as a creative outlet. It’ss amazing. And that’s where the artists and the writers, it’s the same idea that you said about the writers creating movies. And
[01:30:23] Cameron: yeah, that’s futuristic episode seven. Thanks Steve.
[01:30:28] Steve: I think this is our best one.
[01:30:29] Steve: I think it was amazing.
[01:30:33] Cameron: Thanks, buddy. Good to talk to you.
Steve: Thanks, mate.