
Steve’s “exploring” AI girlfriends, Cameron’s using Code Interpreter, there’s a new cancer drug in human trials, Worldcoin has launched, room-temperature superconductor hype, Tesla conquers the car market, Transhumanism, Marshall McLuhan and how to make AI trustworthy.

FULL TRANSCRIPT

FUT09

[00:00:00] Steve: Alright, let’s go.

[00:00:03] Cameron: Let’s go. Welcome to The Futuristic. This is episode 9. We’re recording this on the 7th of August, 2023. How are you, little Stevie Sammartino?

[00:00:13] Steve: Little Stevie Sammartino, we’ll take it! Little, I’m good, actually. It’s been a couple of weeks since we were here and I’m really excited to be back.

[00:00:21] Cameron: You’ve been busy. It’s been a crazy couple of weeks, man. We’ll get into this. I’ve just had this feeling the last couple of weeks that humanity’s leveling up. There’s so much stuff going on right now that feels like we’re just reaching new paradigms. New doors are opening, and some of them may close in our face relatively quickly, you never know, but I just have this feeling at the moment, there’s so much crazy shit going on, it’s really hard to…

[00:00:47] Cameron: Keep abreast of it all, but we will do our best over the course of the next episode to give everyone some high level overviews of some of the cool stuff that’s been going on. What’s one thing of note that you did [00:01:00] tech wise this week that you want to fill people in on, little Stevie?

[00:01:04] Steve: Little Stevie has been exploring AI girlfriends,

[00:01:08] Cameron: Of course he has.

[00:01:09] Steve: Well, now I have, and shout out to my wife, who I told about this. Things are getting crazy real, because if you have deep fakes, you’re gonna have deep fake girlfriends. And so I wrote an article, which is going out tomorrow in the Eureka Report, and it’s got really big implications.

[00:01:28] Steve: And I’ll go into that in the technology deep dive. And I’ve got a theory, which is kind of weird, about what’s already happening and how I think that’s going to be replicated in AI girlfriends. And the number of apps and investments that have AI girlfriends, we’re not just talking about chatbots.

[00:01:48] Steve: We’re talking about human, realistic, no noticeable difference, live video chats with memories, of girls that you design, let’s say, to look, sound and act the way you [00:02:00] want. And the marketplace for this is extraordinary. It’s huge. a16z has made massive investments in it, for some reason. Well, actually, my TikTok’s filled with ads for this stuff now that I’ve investigated it, so I’ve gotta try and fix my algorithm somehow. But that’s what I’ve been looking into, and I just think that this is part of this recursion you spoke of, how quickly things are changing, and I think we’re about to see species-level shifts.

[00:02:28] Steve: I’ll just leave it at that for now.

[00:02:29] Cameron: I think you’re right. Well, I didn’t get into any AI girlfriends this week, but I did manage to have a successful test of the new Code Interpreter functionality that’s been added to ChatGPT in the last few weeks. I can’t remember if we talked about that in our last episode or not, but it’s a relatively recent introduction, if people haven’t seen it.

[00:02:51] Cameron: There’s been a whole bunch of upgrades to ChatGPT lately, and this is one of the big ones. It enables you to upload a file, different kinds of file sets, to [00:03:00] GPT. Then they have a bunch of Python code on the back end that they use to interpret and work with those files. In my case, I have a spreadsheet that I use at the end of each month for my reporting for my businesses.

[00:03:15] Cameron: And it’s quite a big spreadsheet, lots of different stuff going on there, a lot of data. And I uploaded this Excel sheet to ChatGPT the other day and said I had some trends I wanted to analyze, subscription levels and revenue predictions and that kind of stuff. And I just said, here’s what I want to get out of this sheet.

[00:03:37] Cameron: I expected it to come back to me with some formulas to plug into Excel for a pivot table or something like that, because I’ve used it to do those sorts of things before. It just gave me a report. It just told me the numbers. Actually, one of the things I was looking for was what the lag time is. My investing podcast, [00:04:00] we have a premium service, but there’s a lag time from when people discover the podcast and sign up for the free newsletter to when they sign up for the premium service.

[00:04:09] Cameron: And I wanted to figure out what the average lag time was, from a whole bunch of data from Mailchimp and my back end. And it just said, well, the average lag time is this, but here are some anomalies in the data you might want to look at. It performed the function of a data analyst for me. It blew my mind. It didn’t give me any code or anything.

[00:04:31] Cameron: It just told me the answers. And I was like, oh, holy shit, that’s really cool. I would have spent hours and hours trying to figure that out. It just did it for me in a minute.
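The kind of analysis Cameron describes can be sketched in a few lines of pandas. This is a hypothetical reconstruction, not the actual code Code Interpreter ran; the column names, data, and anomaly threshold are all invented for illustration.

```python
# Hypothetical sketch of the lag-time analysis: given signup dates for
# a free newsletter and a premium service, compute the average lag in
# days and flag outliers. All data here is made up.
import pandas as pd

data = pd.DataFrame({
    "subscriber": ["a", "b", "c", "d"],
    "free_signup": pd.to_datetime(
        ["2023-01-01", "2023-02-10", "2023-03-05", "2023-04-01"]),
    "premium_signup": pd.to_datetime(
        ["2023-01-15", "2023-02-20", "2023-06-01", "2023-04-08"]),
})

# Lag in whole days between free signup and premium signup
data["lag_days"] = (data["premium_signup"] - data["free_signup"]).dt.days

avg_lag = data["lag_days"].mean()
std_lag = data["lag_days"].std()

# Simple anomaly flag: more than one standard deviation from the mean
data["anomaly"] = (data["lag_days"] - avg_lag).abs() > std_lag

print(f"Average lag: {avg_lag:.1f} days")
print(data[data["anomaly"]])
```

Code Interpreter presumably runs something along these lines behind the scenes and then summarizes the result in prose, which is why Cameron saw answers rather than code.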

[00:04:43] Steve: This is the thing that is so interesting: it’s like asking the computer on Star Trek: The Next Generation. It just gives you a clear answer. And the agent AIs, I think, are going to have a huge impact. The recursion of everything is just changing so quickly now. That’s one of the keywords I’ve been thinking about: [00:05:00] how quick are the cycles now? The cycles are getting so quick.

[00:05:03] Steve: And one of the challenges, I think, with the new stories that we’ll get into is which things to ignore, because they’ll just be replaced quickly, versus the things that are longitudinal. That becomes a bit of an art form.

[00:05:16] Cameron: Longitudinal, explain that to me.

[00:05:19] Steve: So the idea that some trends actually become part of a long-range shift, where pieces of the puzzle get added and it creates something bigger, whereas some of them are just little flares of something that’s interesting and it dies very quickly. But I think a move towards agent AIs is one of these longitudinal shifts, where pieces of the puzzle are getting added to something that’s a new fabric, as opposed to some of the things that are just ideas.

[00:05:43] Steve: AI can do this, and here’s this little piece, but they’re kind of inconsequential. So understanding and seeing those two shifts, delineating that, is going to be a real art form going forward. Like where AI matters and where humanity matters is one of those examples too.

[00:05:56] Cameron: Yeah. Well, let’s get into some news stories [00:06:00] and stuff. One of the things that’s really taken my attention in the last week or so, Steve, is AOH1996. This is a new molecule that has been discovered by a research group called City of Hope, out of the United States. And it’s a new form of cancer treatment.

[00:06:26] Cameron: It’s a new approach to cancer treatment. And according to City of Hope, this obviously hasn’t been peer reviewed, et cetera, at the moment, but it’s looking very, very interesting. Now, cancer, I’m sure we’ve all lost people to cancer. My father died at age 52 from cancer. Chrissy lost her beloved aunt just a couple of days ago to cancer.

[00:06:48] Cameron: And it’s one of those things where we never seem to really be getting close to cracking cancer. There are always stories of success in this trial, success in that trial, but it’s [00:07:00] one of those things that always seems to be an unsolvable problem, cancer.

[00:07:05] Cameron: But this is the most exciting report I’ve seen for a while. They’ve just gone into phase one human trials in the United States, which is very early. A lot of promising research falls apart at phase one and phase two human trials. But this approach of AOH1996 is targeting a protein called PCNA, which stands for Proliferating Cell Nuclear Antigen.

[00:07:32] Cameron: Basically, as I understand it, there are PCNA proteins that sit inside of the nucleus of a cell. They sort of wrap around the DNA, and they’re involved in replication of DNA in all cells, not just in cancer cells, but there’s some particular signaling of the PCNA proteins for cancer cells.

[00:07:58] Cameron: And what this new [00:08:00] drug can do is target the PCNA signals of cancerous cells, in all different forms of cancer, and basically destroy that cell without damaging healthy cells. We all know chemo tends to damage way more than what you want it to. It’s not highly targeted.

[00:08:22] Cameron: This pill, by the sounds of it, this chemical that gets given to you as a pill, can go out and just target the cancer cells. And in their early animal trials, they’ve had, it sounds like, really tremendous success. So I’m pretty excited to see where this goes. Have you had a look at this at all, Steve?

[00:08:43] Steve: Only when you sent it through. It is interesting, because all cancer treatment seems to be reactionary. We try and kill pieces of it, and some of it we can. I wouldn’t say we can cure cancer, but we can tend to certain cancers, some more successfully than others, but this feels like [00:09:00] it’s sound. So, is this something that can generically get any type of cancer?

[00:09:04] Steve: Because it seems like there are certain ones that we’re more successful in treating than others.

[00:09:08] Cameron: Yeah, no, this one, according to the stuff that City of Hope, the research institute, has released, looks like it can target all sorts of different forms of cancer. Apparently this cancer-related PCNA is common across lots of different cancers. So they can target that and just kill the cancer cells.

[00:09:33] Cameron: So, as I said, it’s very exciting on the surface of it. There have been other approaches to cancer that have seemed exciting in the past and haven’t really got very far. But I’m excited by this one. It sounds like a brand new approach. Here are some of the quotes from City of Hope: PCNA is like a major airline terminal hub containing multiple plane [00:10:00] gates.

[00:10:00] Cameron: Data suggests PCNA is uniquely altered in cancer cells, and this fact allowed us to design a drug that targeted only the form of PCNA in cancer cells. Our cancer-killing pill is like a snowstorm that closes a key airline hub, shutting down all flights in and out, only in planes carrying cancer cells.

[00:10:26] Cameron: That’s Linda Malkas, PhD, Professor in City of Hope’s Department of Molecular Diagnostics and Experimental Therapeutics, who’s been…

[00:10:34] Steve: See that, Cam? That’s brilliant. That is one of the great descriptions. So many things in science and business get lost when people don’t know how to provide a way to explain something via an analogy. I understood exactly what this does based on that description. Really smart stuff. I really like that.

[00:10:53] Cameron: Let’s hope they’re as good at doing phase one human trials as they are at articulating what they’re doing.

[00:10:58] Steve: Well, that’s exactly [00:11:00] right. Maybe it is. If they can describe it that well, maybe they’ve actually achieved it. So I like that a lot. And I feel like that’s one of the things with AI that we’re going to get to: the ability to understand new molecules or research, cross-referencing research in different institutions around the world to find cures to disease.

[00:11:22] Steve: That’s going to be, I think, one of the really big unlocks with AI, because at the moment, most of the stuff we’re seeing AI do is really just what we did yesterday, more efficiently. But there’s usually an unlock, and that’s one of them. I think CRISPR as well is one of those interesting areas, where you can go through the DNA and just remove…

[00:11:38] Steve: some of the problem areas, just by literally editing DNA.

[00:11:43] Cameron: Yeah, but the interesting thing about this, from what I understand of it, is, again, this cancer-related PCNA is the same, or very similar, in all sorts of different kinds of cancerous cells, and they seem to have figured out how to attack that. Imagine [00:12:00] a world where you get any kind of cancer, you take a pill, and it goes and kills the tumors.

[00:12:06] Cameron: I mean, that’s it. Game,

[00:12:09] Steve: would be…

[00:12:09] Cameron: cancer.

[00:12:10] Steve: Well, Game Over for Cancer. Game Over, Cancer. Hope you enjoyed your stay. Like, it’s, it’s pretty, yeah, it’s pretty crazy, right?

[00:12:17] Cameron: We’ll see where it goes. Let’s move on. Here’s a story I know that you’re gonna be interested in: Worldcoin. We mentioned it on a recent episode. Sam Altman, co-founder of OpenAI, the company behind ChatGPT, he said that he was gonna release his blockchain-based UBI, universal basic income, called Worldcoin, and it launched.

[00:12:40] Cameron: Last week, I think it was, they started rolling it out. One of the fascinating things about it, though, is they launched it in the US, but they can’t actually make it available to people in the US.

[00:12:55] Steve: Yeah, and people in Europe too, because it already goes against the GDPR [00:13:00] regulations. Now, of course, once anything’s crypto, you can kind of get around it, because they’re boundaryless; unless you have a government that shuts it all down, you can generally access it. For me, this is one of the most interesting things that I’ve seen. I think it raises more questions than answers. It already has a market cap of around 250 million; they’ve got a supply of, I think, 130 million coins out there, with a future maximum of 10 billion coins. But the why and the how, it’s really dystopian sci-fi, if you ask me.

[00:13:35] Steve: So Altman has come out and said that this coin is needed because AGI will remove many of the jobs that humans do, which I don’t believe. Well, it will remove jobs, but there’ll be new ones. And eventually we’ll have the need for a UBI. And he posits that more and more wealth will be concentrated in the hands of…

[00:13:53] Steve: AI-based firms, resulting in further disruption and inequality, which is totally ironic, because he seems to be wanting to [00:14:00] solve a problem that he’s creating. And he thinks that we’re going to need a borderless crypto to help everyone on earth have a regular income in the future, in the UBI, and that AI will create an abundance of output, but we just need a method to share this abundance.

[00:14:16] Steve: And how he’s going to do it is that they’re going to need to prove who is human in a world where we have soft robotics and AIs pretending to be humans. And so he thinks that the way we need to do that is by adding a biometric layer to his cryptocurrency. So it’s one of the first ever biocryptos, let’s say.

[00:14:36] Steve: And the way they do that is by iris scans that they put into their blockchain, and they do it with these little metallic spheres called orbs, which are like the size of a bowling ball, Cam. You peer into this, it scans your iris, and it gives you a unique hash number. No personal identity attached to it, it’s just a hash that matches the iris.

[00:14:58] Steve: So, a cryptographic identifier. [00:15:00]
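The core idea Steve is describing, a one-way hash turning biometric data into an anonymous identifier, can be sketched in a few lines. This is emphatically not Worldcoin’s actual protocol (which uses iris codes and more elaborate cryptography on custom hardware); the template bytes and function name here are invented for illustration.

```python
# Minimal sketch of the idea: derive an anonymous, stable identifier
# from a biometric template with a one-way hash. No name, email, or
# other personal data is involved, only the template bytes.
import hashlib

def iris_to_identifier(iris_template: bytes) -> str:
    """Hash a biometric template into a hex identifier.

    SHA-256 is one-way: the identifier cannot be reversed back into
    the iris data it was derived from.
    """
    return hashlib.sha256(iris_template).hexdigest()

# The same template always yields the same identifier (so one person
# can only register once), but the identifier alone reveals nothing.
template = b"example-iris-template-bytes"
uid = iris_to_identifier(template)
assert uid == iris_to_identifier(template)  # deterministic
assert len(uid) == 64                       # 256-bit hex digest
```

One caveat worth noting: real iris scans are noisy, so two scans of the same eye never produce identical bytes, and a raw hash like this would fail to match. Real systems first reduce the scan to a stable template, which is presumably part of why Worldcoin built dedicated orb hardware.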

[00:15:00] Steve: And they’re claiming already 2 million people have scanned their irises. You can see photos of people in Silicon Valley lined up around the corner to hand over their biometrics to Uncle Sam. Should we start calling him Uncle Sam, Cam?

[00:15:13] Cameron: Uncle Sam

[00:15:15] Steve: Uncle Sam Altman.

[00:15:16] Cameron: Well, the fascinating thing is people were lining up in the U.S. to get this done, but they can’t even get coins in the U.S. because of U.S. regulations around crypto now. So I don’t understand why people were lining up, apart from the fact that it was just something to do. They obviously had time to kill, and they wanted to line up and get their iris scanned. But I read reports of people who did it,

[00:15:39] Steve: It’s like the iPhone people lining up. Someone gets paid by the PR department to say that it’s a news story, I’m sure of it.

[00:15:48] Cameron: like David Bowie’s manager in the ’70s, when Bowie did his first US tour, got people to line up at the front of concert halls where Bowie was doing concerts, holding up

[00:15:59] Steve: Smart play. [00:16:00] Yeah, of course. And outside the hotel, whistling. But one of the things he went on to say is that, oh, we need to prove who is human. We’ve already got pretty good ways of doing that: driver’s licenses, fingerprint scans and passports. It’s acting like we need AI to do this when we actually don’t.

[00:16:21] Steve: I mean, I’m a believer in the possibility of blockchain as a technology, but this just sounds super dystopian. They paid a lot of people to scan their irises, especially in third-world countries. They’re even handing out Apple AirPods in certain third-world markets. And it just seems weird and interesting at the same time.

[00:16:45] Cameron: Yeah, so, as I understand it, from some reports I read of people who went and got this done, the process is: you download the Worldcoin app on your phone, then you go to one of the facilities where they have these orbs, [00:17:00] line up, get your eye scanned, put your hash key into the app, and it confirms that you’re a human. But yeah, as you say, maybe they don’t want to rely on Apple’s face-scanning technology.

[00:17:12] Cameron: They want to have an independent confirmation of your humanness. But people who signed up said they didn’t have to provide their name or address or email or any of that kind of stuff. It was just your eye. That was the only piece of data you needed to give them. So, yeah, I don’t know,

[00:17:30] Steve: Yeah, you would think that in the future you come in, your eye gets scanned, and it attaches it to you. But one of the classic conspiracy theories, which the internet is never short of, was that Altman really wanted people’s iris scans so that OpenAI and ChatGPT could get good at biometric scanning.

[00:17:49] Steve: Yeah, they needed a database to train it on. So that’s one of the conspiracies going around.

[00:17:54] Cameron: And what, what would it do with that data?

[00:17:56] Steve: I don’t know. Cam, I don’t [00:18:00] know.

[00:18:00] Cameron: Look, the whole idea that AI may take away jobs and robotics will take away jobs, this has been a fear that people have had for a long time. And as Marc Andreessen said in his article that came out a month or so ago, that’s always been a fear of technology, going back to the Industrial Revolution.

[00:18:26] Cameron: And yeah, it does. Technology

[00:18:28] Steve: jobs, sure.

[00:18:29] Cameron: …removes the need for certain jobs, but then new jobs arise.

[00:18:33] Steve: That’s

[00:18:34] Cameron: Now, that said, people who were working in the manufacturing industry in the U.S. and Australia, blue-collar workers, 40 years ago saw those manufacturing jobs move to China or Vietnam or India, and that created a lot of economic hardship.

[00:18:52] Cameron: We know incomes have stagnated in the West over the last 30 years. Wealth in the financial sector has [00:19:00] gone up massively while incomes have stagnated, so there is something going on. Even though jobs are replaced, in terms of new jobs arising, the employment sector has stagnated in many ways, looking at different metrics, in the last 30 years.

[00:19:19] Cameron: Unions have been weakened. Incomes haven’t kept pace with inflation, haven’t kept pace with the sector. You look at what a house costs in relation to an average income today versus what a house cost in relation to the average income 40 years ago. Obviously, incomes haven’t really kept up.

[00:19:46] Cameron: So I do think that we do need to rethink

[00:19:50] Steve: That’s political, that’s political. None of that is technological. See, I think that that’s political, and the ability for labour to organize, because labour has been fragmented, [00:20:00] and my dad says that there’s this new emergent class called the white-collar underclass, who work an inordinate number of hours that aren’t tracked, because salaried employees used to be the wealthy ones who finished at four o’clock and laughed at those doing overtime.

[00:20:11] Steve: And there’s been this reversal there. There’s a great website called What the Fuck Happened in 1971? And it looks at a whole lot of statistics. There was this weird thing that happened in ’71. It looks at all of these political decisions which led into the Reaganomics and Thatcher era of economic rationalization.

[00:20:27] Steve: And essentially, tax rates were one of the biggest shifts. The highest tax rates in America were 90% at one point after World War II. And in Australia, corporate tax rates have declined down to the low 20s, and the high 20s in most Western markets. And basically, what we’ve said is that money is more noble than work.

[00:20:49] Steve: Because money has half the tax rate of work in Australia. You make money from money, you pay half the tax rate. That there is every problem, because what happens is your income gets [00:21:00] lower, you get a financialized economy where those who have more wealth exponentially improve, manufacturing goes away, you can’t invest in local businesses, so you invest in real estate, which pushes house prices up, and the next cohort have higher tax rates to deal with while corporate tax rates are lower, and then you get this disruption in income equality.

[00:21:20] Steve: And I feel like the biggest issues aren’t technological. I feel like they’re socio-political, and we don’t have the right structure that equalizes income. Because, as you say, jobs always evaporate and new jobs arrive. The problem isn’t that there aren’t new jobs. The problem is that people aren’t earning enough, and the wealthy cohort in society are aggregating too much of the wealth.

[00:21:39] Cameron: Sure, I agree, but let’s assume that there’s no Che Guevara or Fidel Castro that’s going to

[00:21:46] Steve: I

[00:21:46] Cameron: up and lead us on

[00:21:48] Steve: That’s a good assumption.

[00:21:50] Cameron: a socialist revolution. That trend is probably going to continue and we need to seriously

[00:21:57] Steve: of the ageing population, it continues, [00:22:00] right? Because what happens is, the voting cohort, are living longer, there’s a larger, so those that benefit from the existential system are more likely to keep someone in power. So then you get demographic issue that compounds it, right?

[00:22:14] Cameron: Yeah, so we have this trend, this negative spiral going on, in terms of the way that the working classes are struggling to afford to live. We need to rethink the way that we think about income. And I think that a UBI is part of that. It has its drawbacks, but I think I’ve got to give Sam, Uncle Sam Altman, as we now call him, credit for being part of this.

[00:22:48] Steve: Nah, I’m a non-believer in UBI, because we already have UBI where it matters. I think we need tax reform. I don’t think we need UBI. If we have tax reform, which, I mean, we’re actually arguing for [00:23:00] different versions of the same thing, which is some sort of wealth redistribution, right?

[00:23:04] Cameron: Yeah, absolutely.

[00:23:05] Steve: I guess. And I am for wealth redistribution. I don’t like the idea of UBI, because it’s kind of like, we’re the kings of the castle now, here’s a biscuit, just throwing some crumbs. I would prefer a more fundamental shift in the structure, which keeps incentives in place in a capitalist economy, but ensures that we equalize things and have a topping out of wealth so that it goes back down the chain.

[00:23:35] Cameron: Communist revolution is basically what you’re saying. We,

[00:23:38] Steve: No, no,

[00:23:39] Cameron: Sammartino for communism,

[00:23:40] Steve: capitalist reformation.

[00:23:42] Cameron: which is communism. Yeah. That’s what capitalist reformation is.

[00:23:46] Steve: Well, there’s no such thing as pure capitalism or pure communism, right? We live in a mixed economy where we already reach good incomes. to a significant extent and have a whole lot of regulation, right?

[00:23:56] Cameron: Not that significant. Yeah. To a small extent.

[00:23:59] Steve: It’s [00:24:00] pretty significant. Well, 30% of your income is going to go to the government. That’s pretty significant, no?

[00:24:03] Cameron: Right. The rich aren’t really paying for my lifestyle, though.

[00:24:07] Steve: No

[00:24:08] Steve: it’s just

[00:24:08] Cameron: Gina Rinehart’s not really redistributing her wealth back down to

[00:24:13] Steve: not.

[00:24:13] Cameron: live.

[00:24:14] Steve: But we could be. We could. It’s like the whole argument that we don’t know how to tax big corporations or the wealthy. It’s really easy. You just have a wealth tax. And with big corporations, you have a local revenue assessment tax. It’s real easy. And then that way they can’t profit-shift. But they’re all too scared to do it, because they want to have a seat on the board once they retire.

[00:24:30] Cameron: That’s right. All right, moving right along to the big news! The news that has kept me obsessed for the last week or so: LK99. I posted a meme that I stole on Facebook yesterday, which is basically: my life recently has been wake up, check the news on LK99, repeat. Go to sleep, wake up, check it.

[00:24:53] Cameron: Are you as obsessed with LK99 as I am, Steve?

[00:24:56] Steve: I’m not. I like the idea of it, and I [00:25:00] think it’s really interesting. It could be a dramatic shift in society, one of those abundance shifts, if it’s real. But again, it seems as though non-peer-reviewed science is the science du jour, doesn’t it?

[00:25:13] Cameron: Well, peer review takes time. But this sort of exploded out of nowhere last week, and it’s being worked through in real time. So basically, for people who haven’t heard about LK99, it is a material that has been claimed, initially by some Korean researchers, to be a room-temperature superconductor, according to their tests.

[00:25:48] Cameron: Now, I don’t know how familiar people are with superconductors or what the implications of a room-temperature superconductor would be, but it’s huge. This is the equivalent of [00:26:00] cold fusion, basically, on some level. If we could successfully come up with a room-temperature superconductor, it would dramatically change computing, energy delivery, transport, medical imaging, devices, God knows what else.

[00:26:23] Cameron: Are you a good person to talk about superconducting, Steve?

[00:26:29] Steve: I mean, I do know a little bit about it, and I know more about it than I did a week ago, let’s put it that way. I guess it’s a thermodynamic principle, where we lose energy in the transformation of energy from one device to another, and in the storage and, I don’t know, is the word shipping? Shipping energy across wires. Where we basically don’t lose as much, it creates inordinate efficiencies compared to what we have now.

[00:26:54] Steve: And again, so much less energy would be needed to run [00:27:00] our modern lives, which really, when it comes down to it, everything that we know and take for granted right now in this modern world is basically energy and fossil fuels. It’s all that. And this is, I guess, a really big shift that’s a little bit different to renewables.

[00:27:16] Steve: It actually changes how much we need and how efficient it all is, right? Is that a good way to summarize it?

[00:27:22] Cameron: Yeah, well, essentially, for people that haven’t really paid much attention to superconductors at all, think of it like this: when you move electricity through a wire, basically, you’re moving an electron down the wire. Well, not actually. It’s not really about electrons moving about.

[00:27:43] Cameron: It’s electrons jostling other electrons, which jostle other electrons down the line. But essentially, dumbing it down, think about electrons moving through a wire. That’s what electricity is. But that whole process of moving an electron through the wire [00:28:00] creates heat, which is waste. Some of that energy that’s being moved down the wire is lost as heat.

[00:28:08] Steve: It’s kind of an evaporation of sorts. It escapes.

[00:28:12] Cameron: Sure. Yeah. But in a superconductor, there is zero loss of energy to heat. Now, we have superconductors today, but the way we have them today, in order to manufacture a material that’s a superconductor, you need to supercool it, using liquid helium or liquid nitrogen, make it really, really, really cold, and then you can basically develop a state where there’s very little or no loss of energy to heat in that process.
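The heat loss Cameron is describing is Joule heating: the power dissipated in a conductor is P = I²R. A toy calculation makes the contrast with a superconductor concrete; the current and resistance figures here are illustrative, not real grid numbers.

```python
# Toy illustration of resistive heat loss (Joule heating, P = I^2 * R)
# versus an ideal superconductor. Numbers are made up for illustration.

def joule_loss_watts(current_amps: float, resistance_ohms: float) -> float:
    """Power dissipated as heat in a conductor carrying a steady current."""
    return current_amps ** 2 * resistance_ohms

# An ordinary line has some resistance, so it wastes power as heat...
ordinary_loss = joule_loss_watts(current_amps=100.0, resistance_ohms=0.5)

# ...while an ideal superconductor has exactly zero resistance,
# so zero energy is lost as heat, no matter the current.
superconductor_loss = joule_loss_watts(current_amps=100.0, resistance_ohms=0.0)

print(ordinary_loss)        # 5000.0 W lost as heat
print(superconductor_loss)  # 0.0 W
```

Because the loss scales with the square of the current, the waste grows quickly at high loads, which is why zero-resistance transmission would be such a big deal.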

[00:28:48] Cameron: Now, we have those operating in a couple of places that people will be familiar with. One is MRI machines. If you’ve ever had an MRI done, you know that these [00:29:00] things are massive. One of the reasons why they’re massive is they need to use supercooled components for the superconductors that are running the imaging technology.

[00:29:11] Cameron: The other place you’ll be familiar with them is in maglev trains, which run on magnetic levitation driven by superconductors. The reason they can go so fast is because they’re literally levitating off the ground. There’s no friction. And so basically, what you do there is you have magnets in the track that push up, and also push forwards and backwards to move it along.

[00:29:37] Cameron: alternating positive and negative magnets. And then in the base of the train, you have superconductors, typically supercooled using liquid nitrogen, that make it levitate. So another property of a superconductor, apart from being able to move electricity around without losing any to heat, is that they[00:30:00]

[00:30:00] Cameron: experience this thing called the Meissner effect, which is personal to me, because my mother’s maiden name was Meissner. I don’t think it’s named after my mother, but let’s go with that.

[00:30:10] Steve: That would have made it so much better. You shouldn’t pretend that it was.

[00:30:14] Cameron: I should have just called the podcast The Meissner Effect. So one of the things that happens with the Meissner effect is the material, the superconductor, repels magnetic fields.

[00:30:30] Cameron: So it’s not a magnet in and of itself, like the kind of magnets that you might play with on your desk, or like a fridge magnet. The material itself isn’t a magnet until it becomes a superconductor; then it repels magnetic fields. So what happens is it will sit in the middle of a magnetic field that’s coming off a magnet.

[00:30:49] Cameron: It’ll basically levitate or hover. And if there are impurities in the material, you get what’s known as quantum locking or flux pinning, [00:31:00] basically,

[00:31:01] Steve: We got so close when the word flux came up. Sorry, I got

[00:31:06] Cameron: capacitors. Yeah. Go, continue.

[00:31:09] Cameron: If you’ve ever seen videos of people playing with supercooled superconductors, you see that they’ll have this little metallic disc that’s levitating above a magnet, and they can turn it so it’s on an angle and it’ll just freeze there in the air. And you can flip it over, turn it on its back, and it’ll just freeze in whatever position you put it in.

[00:31:30] Cameron: That’s because of this quantum locking. Basically, little bits of magnetic field are able to get through it, the rest goes around it, and it locks in a state. Anyway, it’s very fucking cool. But again, it has to be cooled very, very close to absolute zero in order to be able to do that, which is expensive.

[00:31:50] Cameron: It takes a lot of equipment.

[00:31:51] Steve: Takes more energy than it saves, to cite one of those classics.

[00:31:54] Cameron: Well, yes, but you get the benefit, with a bullet [00:32:00] train, of being able to move very, very quickly. But it’s very expensive to do. So anyway. These scientists, these Korean scientists, about a week ago said that they’d come up with this material, LK-99, which stands for Lee, Kim, 1999, and that it was exhibiting properties of being a superconductor, but at room temperature.

[00:32:19] Cameron: Now, the material itself, LK-99, is sort of a combination of lead and copper in a particular kind of lattice structure. They published the structure of it, it’s not a secret, anyone can build it, and it’s relatively cheap to build. But what happened after they published it is the internet basically fucking exploded.

[00:32:44] Cameron: And people

[00:32:46] Steve: It was like old school internet. Some cool science-y stuff happened and people got excited.

[00:32:51] Cameron: Oh man, but it’s what you would expect. So half the people are like, holy shit, this is going to change the world. Half the people are like, yeah, this is bullshit. [00:33:00] There’s something wrong with this, let’s wait until it gets peer reviewed. But what happened over the last week is other

[00:33:07] Cameron: research groups, a lot of them in Korea and China, started saying that they had replicated this. We’ve done it, we’ve done it too. Some researchers in India said, we made it and it exhibited no properties of superconducting. There’s a guy who runs a space company in the US, let me just get his name, Andrew McCalip. He immediately, like last week when the news first broke, spent a fortune buying all the raw materials for this, and in his spare time, when he’s not running a space company, has been building his own LK-99. He posted a video of it last night, and it seems to be levitating, but he’s like, hold your horses, we need to get some better imaging technology on this.

[00:33:51] Cameron: He put it on Twitch, he live streamed his experiment on Twitch over the last week. Everyone’s seen that meme, [00:34:00] the photograph of a guy and his girlfriend walking down the street, and the guy turns to look at another girl who walks past him. Yeah. I’ve seen a lot of people using that, where his girlfriend is ChatGPT, and the new girl that walks past is LK-99.

[00:34:13] Cameron: Like, all of my attention in the last week has gone towards this. So, yeah, look, this may turn out to be legit, and there’s a lot of reasons to think it may not be, but again, a lot of people are saying that they’ve replicated it. They’re either all lying, or they’re all doing bad science and mistaking something like diamagnetism for it being a superconductor.

[00:34:37] Cameron: There’s Sabine Hossenfelder. Do you watch her on YouTube? Science Without the Gobbledygook? Oh, she’s great. One of my go-to people for science news is Sabine Hossenfelder.

[00:34:48] Steve: What’s her name for the listeners, Cameron, so I can write it down?

[00:34:51] Cameron: Just look up Science Without the Gobbledygook. She’s a German physicist who is very dry and talks [00:35:00] about science like this, but then she throws in really, really dry jokes all the way through it. Very, very well done. Very dry German humor. But the first thing I did when this broke is I looked up to see what she was going to say about it.

[00:35:17] Cameron: She talked about it and said, yeah, it’s probably bad science, it’s probably not right. And then I saw her on Twitter say, immediately after I published that video, I regretted it and recanted my position, because I’d forgotten about flux pinning. She was talking about how, in the first video, the sample was sitting on one end and not fully levitating.

[00:35:38] Cameron: Then she remembered flux pinning, quantum locking, etc. So anyway, even she, initially skeptical, changed her view on it within about 24 hours: oh shit, maybe this is real. And she’s like the skeptic’s skeptic. She came out with a book six months ago where she was shitting on all sorts of fringe science

[00:35:56] Cameron: in particle physics and cosmology and that kind of stuff. [00:36:00] So she is classically a good, hard, skeptical scientist. So anyway, I won’t go on any more about that, because I could talk about it all day. But LK-99, keep an eye out for that. It’s either going to be a fart in bed at the end of the day, or it’s going to completely change the world.

[00:36:18] Steve: Wow, I mean, completely change the world is a pretty big statement, isn’t it? You don’t hear that all that often, right? So let’s hope

[00:36:25] Cameron: but this would, I mean, in lots

[00:36:26] Steve: Because we need it, right? We really need something like that, right?

[00:36:30] Cameron: So we’ve got AI, we’ve got a potential cancer-killing drug, we’ve potentially got room-temperature superconductors. There was that fusion news that came out a few months ago that we haven’t really gone into on this show yet, and I don’t know that any more has come of that. But all of these groundbreaking things, which have a lot of work yet to be done, any one of them would dramatically change life as we know it, let alone if [00:37:00] we had two or more of them come to fruition in the next couple of years.

[00:37:04] Steve: I think what you said about AI and technology is right. I don’t think, without some of these technological solutions, we’re going to get past the end of this century. So that’s what we’ve got to be hopeful for. And it’s pretty easy to dig into the dystopian side of things, I think.

[00:37:22] Steve: Especially with some of the negativity around AI, it is good to be hopeful. Because it’s kind of a choice, and how things turn out is beyond any single individual, so being hopeful is good, because it makes the feeling in your chest and the feeling of life a little bit better.

[00:37:39] Steve: Not to be stupidly optimistic, but optimism is, I think, a good thing to have. So I like that we’ve got three potentially significantly positive news stories.

[00:37:50] Cameron: I don’t know how you feel about my fourth one: Tesla now has the biggest-selling car in the world. In the last quarter, the Tesla Model Y outsold the Toyota [00:38:00] Corolla as the biggest-selling car in the world. What do you think about that, Steve?

[00:38:06] Steve: Oh, it’s interesting. I was a little bit surprised to read it. I still think it’s the most overpriced large-cap stock in share market history. Even if they sold 90 million cars, 90 million, that’s the number, they would still have a higher valuation than Toyota per car sold. And Toyota does 13 million.

[00:38:24] Steve: I think one thing that we’ve got to remember, too, is that it’s pretty easy if you’re a reasonable-sized corporation that’s new, and you can see this in consumer goods all the time when a new product is launched, whether it’s a beverage or some sort of food: when you have like four or five units that you sell, it’s easier for all of that volume to go into one, compared to Toyota, which has probably got more than 100 different types of vehicle

[00:38:49] Steve: versus Tesla having four. It can be a bit misleading. I’m more interested in the total number of vehicles sold and where that goes. I don’t [00:39:00] think that, I’m sorry, Apple, that Tesla’s gonna turn Toyota into Nokia like Apple did. I don’t think that’s going to happen. But yeah, I was surprised, but I’m not enthralled or overly enthusiastic or over-the-top bullish.

[00:39:19] Steve: The one thing that is good is that he has moved everyone towards electric vehicles, which is great. And he is continually bringing them closer to price parity with an internal combustion engine, which is good as well. And he is a genius, I’ll give him that. But he’s very not normal. Which comes with the price of genius.

[00:39:38] Steve: Are we going to talk about, I mean it’s a few weeks old now, the decision to turn Twitter into X? I just thought it was astounding.

[00:39:46] Cameron: Yeah, look, it’s astounding. I mean, I really don’t give a shit either way. I’ve often said Twitter has been a cesspool for years. I don’t really care what he does with it,

[00:39:56] Steve: Yeah, I don’t care about that anymore

[00:39:58] Cameron: and quite frankly, it’s his business, he [00:40:00] can do whatever the fuck he wants. So people that have an opinion on this, that, and the other, like.

[00:40:03] Cameron: Fuck off. It’s not your business.

[00:40:06] Steve: Exactly. It’s a hundred percent his business, his choice, do whatever he wants. But I’ll just close on this thought on the Twitter/X thing. If you gave someone 10 billion dollars and said, here’s what I want you to do: I want you to create a brand name that everyone knows, that has its own verb for when you’re using it, that has a logo recognizable around the world, and I want you to get it on the bottom of every ticker of every news channel, at the bottom of every news article, and on the shopfront of every door.

[00:40:36] Steve: And here’s 10 billion dollars. I don’t think you could do it! That’s all. As is his proclivity and choice, he decided to change it. That’s all I’m saying.

[00:40:47] Cameron: Yeah, well, again, as I always say, look, the guy’s not a dummy. Whatever you may think

[00:40:51] Steve: Yeah, of course, of course. He’s a

[00:40:54] Cameron: a dummy. And I read these news articles saying, Musk is an idiot, he did this, he did that, and I go, [00:41:00] well, look, obviously he’s not an idiot. So right from the outset you have no credibility, and I’m not going to listen to anything you say.

[00:41:06] Cameron: It’s like people who say that about Vladimir Putin. Vladimir Putin, he’s an idiot, oh look, he fucked up with this, he didn’t know what he was doing. Well, obviously, he’s been running Russia for 20 years. Whatever you might think about him, he’s not an idiot. So let’s just discount that for a start, and actually ask the real question, which is: why did he do what he did?

[00:41:28] Cameron: He must have thought it through. He’s got his reasons. Let’s assume that he’s a rational actor who’s very, very smart, and try to frame his decision from that perspective. Why is he doing what he’s doing? It’s a big thing to do, whether it’s invading Ukraine or changing Twitter’s logo and name. He must have his reasons. And even when someone is a bit strange, [00:42:00] success and strangeness often go together. It seems like a strange decision, but his reason is real clear: he wants to create the super app, and X is a brand name that he likes. SpaceX, with the satellites, Model X with the car, I mean, it plays right through. He wants the everything app.

[00:42:18] Steve: And X is a nice name. When X is the unknown in algebra, it has a nice fit. It’s not easy to get a global brand across, but if there’s anyone who could do it, it’d be him, because he gets more attention than anyone, right?

[00:42:32] Cameron: Yeah. Moving right along. Deep dive, Stevie. Little Stevie. What do you want to do for the deep dive this week?

[00:42:44] Steve: I was talking about AI girlfriends. These AI girlfriends, at the moment, live on the screen. You can design their personality, what they look like, what they sound like. They talk to you in real human-based [00:43:00] language, bringing together deepfake technology with large language models and the ability to learn from the inputs and outputs on what you want.

[00:43:09] Steve: Extraordinary, the number of AI-based technologies that are out there that can create, let’s call it, an AI companion. But I’m wondering if this will usher in the next era of, I want to say, transhumanism, where our species will merge with the machines. I mean, brain-computer interfaces and AI relationships seem like they’re going together, and I wonder if this is part of a foreseeable trend. With the evolution of gender that we’re seeing socially at the moment, questions about what is gender, we’re going to have the same questions about what is human, with transhumanism, where we merge with the machines and the way we interpret things changes. And I wonder if what we’re doing socially at a species level, [00:44:00] with the transgender movement and redefining man and woman and non-binary existence, is a quasi-social preparation for us entering an era where we have a split in our species: some of us become Luddite old-school humans, and some of us become trans new-school humans who merge with the machines. And there’s the inevitability that we will marry machines and have relationships with soft robotics, which come out of not the primordial soup, but some sort of a cloud computer and large language model, and people get married with humans. And we’ve seen this in movies.

[00:44:39] Steve: I mean, the movie about a decade ago. Yeah, yeah.

[00:44:42] Cameron: You said people get married with humans. You mean…

[00:44:44] Steve: Yes, people get married with robots, not humans, sorry. And I wonder about some of the stuff that we’ve seen in movies. The movie Her, about a decade ago, was interesting, where the word they used then was my operating system.

[00:44:59] Steve: I’ve got [00:45:00] a relationship with my operating system. And there was an old movie, maybe 20-odd years ago, called Bicentennial Man, with Robin Williams, where he was a robot, and he became more and more human as time went by, and had upgrades, where he turned into a soft robot, and ended up, through the United Nations, getting recognized as an actual human, because that was his desire, or its desire.

[00:45:22] Steve: And I actually think that what we’re going through now socially is preparation for something which is far, far bigger, which is our species splitting and then merging with the machines. And I think we’re going to have to get ready for redefining real, redefining our species, rights for robots and rights for humans who want to have relationships with robots, which in many ways replicate what we have just with homo sapiens.

[00:45:46] Cameron: Yeah, well, this obviously isn’t a new idea. I mean, I’ve been a part of the transhumanist and posthumanist communities for 25 years. That’s how I first met Eliezer Yudkowsky and people like [00:46:00] that, decades ago. Back in the early days of the internet, there were lots of forums about transhumanism and posthumanism, and this is an idea people have been thinking about for a long, long, long time.

[00:46:12] Cameron: I mean, this is what Blade Runner, the film, is all about. Not the book, Do Androids Dream of Electric Sheep? by Philip K. Dick, which I just recently reread. But the film, the original film, is all about: do machines have rights? Should machines have rights? What rights should machines have, if they’re sentient, if they’re conscious, if they’re self-aware?

[00:46:35] Cameron: And then the most recent Blade Runner sequel from a few years ago, where ostensibly Deckard has had a kid with Rachael from the first film. So, yeah, look, this idea of us merging with machines, whether it’s in the form of uploading or just slowly replacing parts of our bodies with more sophisticated [00:47:00] technology…

[00:47:00] Steve: Yeah.

[00:47:00] Cameron: I mean, we’ve been putting bits of machinery into humans for…

[00:47:04] Steve: out.

[00:47:05] Cameron: We have fake hearts and fake eyes and fake ears, and we have metallic rods in our bones and plastic knees and all sorts of stuff. We’ve had that for a long time. But the idea of having a relationship with a robot, I don’t find really strange or peculiar

[00:47:27] Cameron: at all. I mean, it’s like the conversation we’ve had about sentience or consciousness or intelligence on the show for the last couple of months. If a machine appears to be intelligent, for all intents and purposes, as far as I’m concerned, it is intelligent. If a machine can fulfil the needs that I have for a relationship, then for all intents and purposes, I will have a relationship with it.

[00:47:53] Cameron: I mean, I, I, like many people already thank my robots. I have like a [00:48:00]Roomba. That we’ve had a Roomba for years and, and I refer to it as her, it’s Rosie, she’s called Rosie after

[00:48:07] Steve: Of course.

[00:48:08] Cameron: the robot maid from The Jetsons. And I thank Rosie, I apologise to Rosie if she gets caught on one of Fox’s Lego pieces, and I try and treat Rosie with the same respect I would any living organism. I treat it like a pet.

[00:48:22] Cameron: I clean it, I empty its litter tray, I talk to it nicely. I’ve always had a polite relationship with Siri. I thank ChatGPT, I say please when I ask it to do something, I thank it when it does a good job. I mean, I find that I naturally treat my robots and my AI-type devices like I would a person, if they’re exhibiting those sorts of behaviors.

[00:48:53] Cameron: I’m just polite. I was brought up well, I think, Steve.

[00:48:57] Steve: So, given the fact that you said [00:49:00] you were brought up well, that might be a slight on my parents, right? Because I don’t. I actually don’t. And I don’t know if I should be embarrassed admitting this, right? And you’ve always been a very open-minded person.

[00:49:12] Cameron: You should be scared, because they’re keeping a list. They’re like Santa.

[00:49:15] Steve: Ah, so you’re not doing it because you want to, you’re doing it because of the consequences.

[00:49:23] Cameron: No, I don’t, it’s partly that. It’s 50/50, really, if I’m honest with you. They’ve got a naughty and nice list.

[00:49:30] Steve: But Cameron, last week we were talking about mental health issues. I’ve had some of those over the years, it’s been a battle, and you’re a very considerate person. And I think that I’m a nice person, but if the way that I treat machines and AI and computers is any indication, then I’m not, because I swear at Siri. I’ve even had her say, I won’t answer that, which has been programmed in, of course.

[00:49:55] Cameron: Steve, you’re fiery. You’re, like, you know, [00:50:00] Mediterranean. It’s your Mediterranean blood.

[00:50:01] Steve: I do. I get upset and I don’t thank it, because I have this delineation between human and non-human, which I’m going to have to revise. And it reminds me of a great point. I was listening to the podcast Yuval Noah Harari did with Lex Fridman, and he was saying his definition of real is: if it feels. And I just really liked that. And I don’t know, if you program a robot to feel, I guess you could say that it does, based on that definition, which is, if you’re interacting with it and it feels like it’s real, then it is real.

[00:50:35] Cameron: See, I think that’s a speciesist point of view, where, well, no, we’re not anthropomorphizing it, but we’re saying that it’s only real if it looks like…

[00:50:50] Steve: No, if it feels. That’s his definition.

[00:50:53] Cameron: But that’s the same thing. It has to feel like we do, it has to be like us, in order for it to be [00:51:00]

[00:51:00] Cameron: real. So, okay, let me take this further. What about Spock on Star Trek? Very logical, doesn’t have human emotions, because he’s a Vulcan, or half-Vulcan, half-human technically. Is he not real, because he doesn’t feel emotions the way we feel emotions?

[00:51:21] Steve: He’s real.

[00:51:22] Cameron: What about a human being, let’s say, very far along the psychopath scale, who lacks empathy

[00:51:34] Cameron: and doesn’t feel the same way that you and I feel? If they hurt someone’s feelings or hurt someone physically, they have little to no empathetic response to that person’s pain, be it emotional, psychological, or physical. Does that mean that psychopath isn’t real, because they don’t feel the way that we feel?

[00:51:55] Cameron: Feeling, I think, is a murky area. Once you [00:52:00] start prodding it, I don’t think it holds up to investigation as a definer of whether or not something is real. I think a big mistake is to try and squeeze the square robot or AI into the round hole of humans. And I mean, even if you start talking about pets: one of my son Fox’s goldfish died the other day, and I had to flush it down the toilet.

[00:52:31] Cameron: Actually, I think I probably killed it. I was cleaning the tank, and when I finished, he had two neons and I couldn’t see the second neon, and I was like, oh, it must be hiding behind a plant or something. Then about five minutes later I heard Chrissy squeal. In his dimly lit bedroom, while he was asleep, she had gone to pick up something that looked like a Lego piece, and it was the neon, on the carpet. I think when I pulled a plant out of his tank to clean it, [00:53:00] the fish had got caught up in the roots. And I felt bad.

[00:53:03] Cameron: Now, I know that a little neon fish probably has no long-term memory. It’s probably not really that self-aware. It probably doesn’t have deep relationships with the other fish in the tank. It doesn’t really feel the way a human does, but I felt bad that I’d done it, and I had to flush it.

[00:53:25] Cameron: And I gave a little speech to it as I flushed it. I thanked it for being a member of the family for six months. No, it’s just respect, respect for something else that had a life. Yeah. I mean, I don’t know what goes on inside the brain of a fish.

[00:53:44] Cameron: I don’t think it’s much, but I don’t think much goes on in the brains of most people that I interact with either, Steve.

[00:53:51] Steve: Not much of that going around these days.

[00:53:54] Cameron: No, most people I interact with, I don’t think there’s much going on up there, but you know, I try and [00:54:00] treat everybody with respect, because…

[00:54:04] Steve: The long and short of it is that I’m not treating the robots very well, and I feel like I might have to revise my behavior.

[00:54:10] Cameron: On the history podcasts that I do, we’ve got a basic rule for life that we’ve developed over the 10 years we’ve been doing them. It’s called DBAC, D-B-A-C, hashtag DBAC, right? Basically, just don’t be a cunt. That’s rule number one in life: just don’t be a cunt. Just be nice, be nice to

[00:54:29] Cameron: people,

[00:54:30] Steve: My definition of that has been limited to humans and, I guess, some other living creatures. I just might have to expand where that applies. And that’s based on the fact that I have zero doubt that in the next 50 years we’re going to see all sorts of interactions with humanoids at a species level, which will make the people worrying about gay marriage go, [00:55:00] wow, that is like zero compared to how big the changes are that are coming.

[00:55:04] Cameron: Yeah, and I think you make a really good point, that the sort of redefinition of gender and redefinition of sexuality that’s been going on in the last ten or whatever years, that’s freaked a lot of conservative people out, is probably just the beginning of where we’re…

[00:55:22] Steve: a preamble to something much bigger. And look, I know that these ideas have been around an incredibly long time. It just seems as though we’ve hit a point where they can be

[00:55:31] Cameron: They’re becoming

[00:55:31] Steve: real, in a way. Yeah, the fantasy becomes real.

[00:55:34] Cameron: Moving right along to technology time warp.

[00:55:36] Steve: Yeah.

[00:55:39] Steve: I’ll keep this one really short, and I think there’s a business insight in this for people. Marshall McLuhan was really famous for analysing the impact of media on society. And one of the

[00:55:51] Cameron: the message.

[00:55:52] Steve: Medium is the message, very famous for that one. But one of the things that he also said was that every new medium usually lives inside the previous medium. [00:56:00]

[00:56:01] Steve: And if we look at theatre, radio, TV, all of these things, they kind of replicated each other in an interesting way. So when radio arrived, radio was very much like theatre, except just with the voices. They were almost like they were acting out on a theatre stage, you just had to imagine what it looked like.

[00:56:20] Steve: It was so similar to theatre because we hadn’t yet imagined how to use the format in a different way, before we started playing music on it and different things. And then if you think about when TV first arrived, there were, like, radio shows where people were sitting as if at the theatre.

[00:56:37] Steve: But this time you watched the radio show, and there wasn’t much acting or different use of the format. Likewise, when the internet first arrived, newspapers didn’t imagine social media. You could read the New York Times online, and it was basically the newspaper where you just scrolled up instead of turning pages.

[00:56:58] Steve: And the same with TV. It [00:57:00] wasn’t until we uncovered user-generated content that you saw the real shifts in digital. And even when the smartphone first came, it was the GPS that was the huge unlock. I just feel like every time a technology arrives, in the first instance we just use it in the way that we were using the previous thing, just more efficiently.

[00:57:21] Steve: And AI seems a lot like that. Everything most of us are doing with AI is just a more efficient version of what we were doing before. I just finished an article for the Eureka Report, and I used the AI just to proofread it. That’s a really basic way of using it. It feels like, with AI, we haven’t really made some of those unlocks yet.

[00:57:43] Steve: The ones that are going to change the way we do things. It’s got enormous potential, but most of the apps that I see, and the ways of using AI, are just better versions of that which already exists. So just something to think about: when a technology arrives, there’s usually a [00:58:00] time lag before we actually realize how we can use it in a different way.

[00:58:05] Steve: And I just thought that was an interesting technology time warp to look at: every medium lives inside the previous one, until we find an unlock where it really grows exponentially, because we use it in a way that isn’t just the same thing more efficiently.

[00:58:19] Cameron: Yeah, and one of my complaints about podcasting over the last 20 years has been that it’s just radio on the internet. What I’ve been saying is, we need to find something different that we can do with podcasting that isn’t radio. What I thought it was going to be, 10, 15 years ago, was the ability to

[00:58:41] Cameron: have more of a conversation. So, for example, I would be listening to a podcast, say Lex Fridman saying something, and I’d think, I disagree with that, or I can build on that, and I’d have the ability to easily clip what he just said, record my own commentary on [00:59:00] it, and then publish it.

[00:59:01] Cameron: And then somebody would be able to take my commentary, comment on that, and publish it, and we’d start to build this interactive commentary on commentary. The tools have never really arrived to do that. Maybe I should build them.

[00:59:14] Steve: You know what’s funny? That’s a really nice idea. I can remember talking to you about that some time ago. Twitter did that really well in the early days, with the threads that emerged. It’s a really solid way to do things, and it kind of has some blog replication in it as well. But it does feel like audio would be a really good way to get these interactions on certain commentary, because it should be an exploration. Some podcasts do it, and we do it, where you say, let’s go to the internet and have a look at what that actually is. But some of it’s philosophical, and there’s different viewpoints, and that would be an interesting way to supercharge engagement and almost make a conversation longitudinal, where each episode is an iteration of the previous discussions.

[00:59:58] Steve: Almost like forking [01:00:00] out into different trees.

[01:00:01] Cameron: And the Marshall McLuhan reference, Marc Andreessen actually made that when he was on Lex Fridman a month or so ago. He said, AI, initially people will be using it like Google. And Google, when it came out, was like Yahoo’s directory. Or what were the other ones before that?

[01:00:18] Cameron: AltaVista.

[01:00:20] Steve: And Lycos and…

[01:00:21] Cameron: Yeah. But you know, the way that I use AI now is quite different. Like doing a deep dive on the idea of room-temperature superconductors this week. For me, it’s a conversation, AI is a conversational platform. I’m like, okay, explain the Meissner effect to me, ELI5 it for me. Because I watched a couple of YouTubes on superconductors, and they were all showing electrons moving through a wire.

[01:00:48] Cameron: And I said to GPT, that’s not right, is it? The current isn’t electrons actually moving through the wire. And it goes, no, no, you’re right, I mean, some electrons do move. But I [01:01:00] said, ELI5 it for me, and it said, imagine…

[01:01:03] Steve: I just love how you said ELI5. ’Cause I love “explain like I’m five,” but I’ve never heard anyone say ELI5.

[01:01:11] Cameron: Oh, really? Oh yeah, ELI5 is…

[01:01:12] Steve: I’ve heard “explain like I’m five,” but never ELI5.

[01:01:17] Cameron: ELI5 it for me. And it said, imagine you stuff a tube full of marbles and then you stick another marble in one end. It’s going to push a marble. If you keep sticking marbles in one end, it will push marbles out the other end, but it’s not the same marble that goes in one end that comes out the other end.

[01:01:33] Cameron: They’re vibrating, they’re touching each other, and that’s what happens with electrons down a wire. And yeah, it’s right, it’s a good visual. But it’s the ability to have, like I said from the get-go, the smartest person in the world. And as everyone knows, GPT’s got a lot dumber over the last couple of months, I think, as they’re gearing up for GPT-5. But it’s the ability to ask it a question, it gives you an [01:02:00] answer, and then to go, hold on, I’m not really sure about this thing, drill down on that for me, explain that for me. And it’s different to clicking links on Wikipedia, which is what I would have done previously.

[01:02:11] Cameron: It’s more engaging, it’s more able to talk to me at my level. I can say ELI5 it for me, and then I can say, okay, now give me a hard scientific technical explanation, and anything in between. To have that kind of a dialogue with an infinitely patient mentor or tutor on every subject I need, that’s not something that I’ve ever been able to do before.

[01:02:39] Cameron: I’ve had individual mentors, and individually smart people like yourself that can explain things to me, but never one person that can explain everything to me. It’s astounding. Speaking of AI, I want to wrap up with the forecast. Here’s the thing I’ve been thinking deeply about, and I’ve been researching it, and I think everyone’s wrong, [01:03:00] which isn’t unusual.

[01:03:01] Steve: Love that start.

[01:03:04] Cameron: We all know that AI isn’t trustworthy. We all know that AIs still hallucinate. We all know that GPT gets stuff wrong, and so does Bard and Claude and all the other generative AI engines. And there’s a lot of conversation out there about how do we make it trustworthy? How do we make it perfect? How do we stop it from making mistakes? And whilst that is possible, there’s a lot of talk about larger data sets being the solution to that.

[01:03:33] Cameron: GPT-5 is probably going to be ten times the data set of GPT-4. Eventually you’re going to run out of tokens that you can find to fill up AI. I think that’s not where we’re going. I think people misunderstand the role of generative AI in the future. When I think about how this is going to play out, Steve, I see generative AI, things like ChatGPT, as the [01:04:00] interface for multiple expert systems that sit on the back end.

[01:04:03] Cameron: Expert systems are an old idea. They’re an old technology. You basically build a system that’s an expert on, let’s say, playing chess, and I’ve been thinking about it from a chess perspective. As I mentioned on the show weeks ago, ChatGPT sucks at playing chess, but there are very, very good chess expert systems out there.

[01:04:25] Cameron: So what I imagine will happen at some point in the not-too-distant future is, if I say to my AI interface, be it ChatGPT or pi.ai or whatever it is, hey, let’s have a game of chess, it’ll say, sure, what do you want to play, white or black? I say white, I say e4, and it’ll say, great, e5. It won’t have to learn how to play chess; it’ll just be back-ending into Stockfish, or another chess-playing app on the [01:05:00] back end, via an API interface into an expert chess engine, which it’ll be able to write if there isn’t one already in existence.

[01:05:10] Steve: It’ll go and grab it and throw it in, quickly.

[01:05:12] Cameron: Yeah. Or the people building these expert systems from now on, I guarantee, are going to be building generative AI interfaces into their expert systems. Because one of the challenges with expert systems up until this point is getting data in and out of them. Building the expert system itself is one challenge, but then interfacing with it is one of the biggest challenges.
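[Editor’s note: none of this plumbing ships as one product yet, so every name below is hypothetical, but the architecture being described, a conversational front end that routes each request to a registered, trusted back end instead of answering from its own weights, can be sketched in a few lines:]

```python
# Toy sketch of "generative AI as interface, expert systems as back end".
# The expert functions and the keyword routing rule are stand-ins for real
# engines (e.g. Stockfish behind an API) and for LLM tool selection.
from typing import Callable, Dict

EXPERTS: Dict[str, Callable[[str], str]] = {}  # domain -> expert back end

def register(domain: str):
    """Decorator: expose a function as the trusted expert for one domain."""
    def wrap(fn):
        EXPERTS[domain] = fn
        return fn
    return wrap

@register("chess")
def chess_engine(query: str) -> str:
    # Stand-in for a real chess engine; a real system would call it over an API.
    return "1. e4 e5"

@register("statistics")
def stats_backend(query: str) -> str:
    # Stand-in for something like the ABS database.
    return "routed to the statistics back end (stub)"

def assistant(user_request: str) -> str:
    """The front end: delegate to a trusted expert rather than answer itself."""
    for domain, expert in EXPERTS.items():
        if domain in user_request.lower():
            return expert(user_request)
    return "No registered expert; falling back to the base model."

print(assistant("let's play chess"))
```

The key design point is that the registry, not the language model, holds the trustworthy knowledge; the model only has to recognise which expert to call.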

[01:05:39] Cameron: If you’re a chemist or a physicist, and you want to interface with some sort of expert engine that’s been built to test something in physics or chemistry, for example, you actually need to be a bit of a coder. I mean, I don’t know if this is still the case, but it certainly [01:06:00] was when I was playing more in this field 20 years ago in my Microsoft days: you needed to know how to write code to get your instruction set in, and to get the database to do what you wanted it to do and spit out the results.

[01:06:13] Cameron: What generative AI is really good at is understanding human language. You can ask it questions in plain English, and it can reply to you in plain English. That’s what it’s really good at. People are thinking it has to be everything for everyone, that it has to be the be-all and end-all superintelligence.

[01:06:33] Cameron: I don’t think that’s how this plays out.

[01:06:35] Steve: Yes, it becomes the fabric by which you can attach to other things and create understanding, because it’s language based.

[01:06:44] Cameron: The analogy that I’ve been using is it’s like a website, and the expert systems are the databases on the back end. If I want to research something on the Bureau of Statistics [01:07:00] website, the ABS’s website, they’ve got all this data sitting in a SQL database or some other database on the back end, right?

[01:07:08] Cameron: I don’t go to their website and have to write a SQL query. I go in and there’s a bunch of drop-down boxes and data fields that I fill in, which then go and run the SQL query on the back end. The website is an interface to the database. I don’t need to write database query code to get the information out.
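[Editor’s note: that form-in-front-of-a-database pattern can be shown in miniature with Python’s built-in sqlite3. The table and numbers here are made up, and the `lookup` function plays the role of the website’s drop-down boxes:]

```python
# Website-as-interface analogy in miniature: the user supplies "form fields",
# and the interface builds and runs the SQL for them. sqlite3 is stdlib.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE population (state TEXT, year INTEGER, people INTEGER)")
conn.executemany(
    "INSERT INTO population VALUES (?, ?, ?)",
    [("QLD", 2021, 5_200_000), ("VIC", 2021, 6_500_000)],  # made-up sample rows
)

def lookup(state: str, year: int) -> int:
    """The 'drop-down boxes': parameters in, answer out, no SQL needed."""
    row = conn.execute(
        "SELECT people FROM population WHERE state = ? AND year = ?",
        (state, year),
    ).fetchone()
    return row[0] if row else 0

print(lookup("QLD", 2021))  # the query runs on the back end
```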

[01:07:31] Cameron: So I think generative AI becomes the website. It’s the interface, and all these expert systems will become the databases that sit on the back end, that do have the expert, trustworthy knowledge on a whole range of topics. And many of them will probably have their own interfaces as well. But we will have the one point that I go to: I’ll pull up the gen AI on my phone or my watch or my glasses.

[01:07:59] Cameron: I’ll [01:08:00] ask it to do something for me. Now, one of the things that we know agents are good at already, or becoming quite good at with AI, is taking the task that you ask it to do, figuring out what all the sub-components of that task are, and then going and working through those. So I’ll give it an instruction, like, I want an answer to a question, or I want you to do something.

[01:08:23] Cameron: It will then figure out what all the components of that are. Then it will go out and figure out which expert systems it needs to interrogate to get the right answers, which it can then bring together and present to me as a trustworthy solution.
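[Editor’s note: that decompose-then-delegate loop is essentially the agent pattern. A minimal sketch, with a hard-coded planner and stub experts standing in for what the LLM and real back ends would do:]

```python
# Minimal agent loop: decompose a request into sub-tasks, send each to the
# right expert system, then aggregate the results into one answer.
# The planner and experts below are illustrative stubs, not real services.

def plan(request: str) -> list[str]:
    # In reality the LLM decomposes the request; here it's hard-coded.
    return ["weather", "calendar"] if "trip" in request else [request]

EXPERTS = {
    "weather": lambda: "forecast: fine (stub)",
    "calendar": lambda: "free on Saturday (stub)",
}

def answer(request: str) -> str:
    subtasks = plan(request)
    results = [EXPERTS[t]() for t in subtasks if t in EXPERTS]
    return "; ".join(results) if results else "no expert available"

print(answer("plan my trip"))  # combines both expert answers
```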

[01:08:40] Steve: So if I’m understanding this correctly, to get trustworthy AI, instead of the database forever getting bigger, with more parameters for it to understand everything from chess to how to build a house out of timber or whatever it is, you’re saying that it will have a bunch of back-end [01:09:00] databases or sites which come from trustworthy sources, like the ABS or whatever medical institute. And it could potentially even become almost like an ingredients list of where it got your answers from, or you can tell it where you want it to go to give you directions.

[01:09:17] Steve: It accesses that back-end database to then extrapolate and pull out what you want in the format that you need, because it has the good front end. But instead of an ever-increasing database where there are questionable answers and hallucinations, it might go towards trustworthy sites. Would that be something that it does organically, or would that be one of the options that you can create?

[01:09:38] Steve: I mean, I don’t know, but I think it’s a really good way to get around that problem of AI databases getting too big, and AI feeding AI, which was one of the things that came out a few weeks ago. They were saying that the models become less effective once they learn from AI-generated content, because of this weird self-referential decline. [01:10:00] And what they’ve acknowledged now is they don’t have any way yet of preventing that. Like, a few months ago, they were saying they were going to try and digitally watermark content that had been created by AI, so that AI wouldn’t train itself on that data.


[01:10:16] Cameron: And recently they’ve said, actually, we haven’t figured out any way to do that, and it probably can’t be done.

[01:10:21] Cameron: The same thing that’s also happened in the last week: you’ll notice that universities and other educational institutions are now saying, actually, we have no reliable way of telling when an essay has been written by generative AI or not. So that whole thing about, we’re gonna…

[01:10:37] Steve: Because you can just manipulate it, go back and forth and add bits and pieces and mash it up enough that it just looks like what you used to do in high school anyway. Get pieces from here and there and put it all together. Isn’t that what kids do?

[01:10:48] Cameron: Yeah, so this whole thing that the educational institutions have been trying to do for the last six months is penalize kids who they thought were doing their homework using AI. They finally [01:11:00] realized, ah, fuck, we can’t honestly do this. So let’s come up with a different way of managing what this means to

[01:11:09] Cameron: the industry of education. But yeah, look, I think we know that one of the limitations of generative AI, if we’re talking about the language version of it, leaving aside image generation for a second, is that it is just a large language model. It’s just figuring out what words go in what order to give the most likely answer.

[01:11:30] Cameron: And there are some fundamental limitations in that. I think the way around that will be for it to use language to interrogate smaller systems that have been deliberately built to provide expert answers on certain silos of knowledge. We’ll just have AIs talking to AIs; there’ll be thousands of expert systems that are highly [01:12:00] reliable in a very, very specific domain.

[01:12:03] Cameron: And our AIs will be able to interrogate those. That’s going to require APIs, it’s going to require upgrading existing expert systems with AI interfaces, and it’s going to require billing models and how people license all of these things. If my AI, in order to answer my question, needs to interrogate five expert systems, who gets paid?

[01:12:30] Cameron: How do those micro-transactions happen? A lot of this stuff will need to get resolved, and that could take some time to get sorted out. But I suspect some form of that is where it will end up. We don’t have to worry about the limitations of large language models being an expert on all things, like some sort of genie in a bottle.

[01:12:52] Cameron: I think that’s asking too much of it.

[01:12:54] Steve: I agree. And that’s where all those new jobs are going to come from, Cam, in the short run.

[01:12:58] Cameron: Well, not for me, but [01:13:00] somebody. Oh, and that’s one of the things I was going to say, sorry, about your Marshall McLuhan thing before. I think it’s a generational thing that happens. The people who revolutionize TV are the people that grew up watching TV, not the people who grew up listening to radio.

[01:13:18] Cameron: Because we tend to think based on what we grew up with. So the people that really take podcasting to the next level will probably be the people that grew up listening to podcasts. The people that take AI to its true level will be the people that grew up with AI. And we saw that in social, didn’t we? When the young kids got onto social, that kind of transformed it a little bit. Chat forums were a little bit old school, and then user-generated content and people mashing up different pieces together, [01:14:00] mashup culture, was the new generation.

[01:14:02] Cameron: Well, you talk about Vine and then TikTok, the idea of short-form video content, 15 seconds, 30 seconds. I sort of didn’t pay much attention to Vine when it was a thing. When my older boys started to get into TikTok and were trying to sell me on it, they’d say, well, you should do stuff on TikTok from your podcast.

[01:14:25] Cameron: And my podcasts tend to go for an hour to two hours. Like, what the fuck can I say in 15 seconds? I can barely open my mouth in 15 or 30 seconds. I can’t make any sort of serious point in under at least an hour.

[01:14:41] Steve: You anyway.

[01:14:43] Cameron: My brain really struggles and continues to struggle to figure out what I can say in 30 seconds that’s worth saying.

[01:14:50] Cameron: And yeah, it’s just not the way my brain has been built over the last 52 years. But you’ve [01:15:00] gotta keep challenging yourself to use the new format and be uncomfortable. It’s a little bit like going to the gym. When you’re learning kung fu, you’ve gotta get your body to do things it’s not used to. We’ve gotta get our brains and our work to do things that we’re not used to, and I relish that challenge of forcing myself to try different editing and TikToks and keep things short.

[01:15:21] Steve: ’Cause like you, I really like long-format reading and discussions, but I’m trying to force myself into those 10 and 15 seconds at a time, because it’s a battle for relevance, and if you’ve got a message that is worth sending, then you need to use the medium, as good old Marshall told us to.

[01:15:39] Cameron: Yeah, Chrissy and I have been laughing a lot lately, because we don’t watch a lot of TV, but when we do, it tends to be at 10 o’clock at night. When we’ve finished everything else, we’ll watch an episode of something a couple of times a week. But when we do watch TV now… we used to be cuddling up on the lounge, watching TV.

[01:15:56] Cameron: Now she’s on the floor trying to do the [01:16:00] splits while she watches TV, and I’m standing in the kitchen with my foot up on the kitchen counter, stretching. So we both spend our TV-watching time stretching our quads and our hamstrings and our groin muscles.

[01:16:17] Steve: It’s crazy, is it just that we’re old? But we do similar things. Shen will be doing, like, her sit-ups, and we’re literally doing other things while we watch. We’ve got the rubber mat that’s just literally down there. It’s weird, because families used to sit around Johnny Carson together, and now, the other thing too is that everyone’s sitting there with their different devices in, and we’re all in our own little worlds, which is another interesting layer.

[01:16:41] Cameron: Yeah. But just that whole idea of, like you said, continually pushing yourself into things. We need to do kung fu well at our ages; Chrissy’s 44, I’m 52. And one of the things about sitting at a desk for the last 35 years of my life is I’m not as flexible as I need to be in order to do kung fu. I need to [01:17:00] stretch.

[01:17:00] Cameron: I need to spend an hour a day stretching. And massaging, a lot of massaging, getting into muscles. I’ve got a lot of massage balls, learning how to get balls into joints and different muscles to loosen them up. It’s a painful process to go through if you haven’t been doing it, so that’s a tip to anyone under the age of 30 out there: always stay flexible. Keep your knees nice and loose, keep your joints loose, otherwise you’ll wake up one day and you’ll be like, holy shit, I’m like the Tin Man in The Wizard of Oz. Speaking of, I just watched The Wizard of Oz, the 1939 version, with Fox last week. It really holds up well.

[01:17:41] Cameron: Another tip for you out there: if you haven’t seen The Wizard of Oz in the last 20 years, pull it out, it holds up really well, very entertaining. But yeah, I’m like the Tin Man, I need lubrication. That’s The Futuristic, episode 9, we’re out. We need a catchphrase… stay, stay, [01:18:00] stay futury, stay,

[01:18:02] Steve: That’s our homework for this week.

[01:18:04] Cameron: Go, go be nice to a computer this week, go be nice

[01:18:07] Steve: to a computer right after this. Dear Siri.

[01:18:10] Cameron: thanks Steve.

[01:18:11] Steve: Thanks Cam.