
Steve and Cam catch up on the latest FUTURISTIC news, including Elon’s new gaslighting AI Grok 3 writing porn, Microsoft’s not-totally-made-up Majorana 1 quantum computing chip, the fact that the world now has 1,328 LLMs, and Zuck being able to turn your thoughts into text. What could go wrong?

FULL TRANSCRIPT

FUT 36 Audio

[00:00:00] Cameron: Welcome to Futuristic. Steve Sammartino, uh, first of his name. Uh, it is Futuristic episode 36. We’re recording this on the 21st of February 2025. I think it’s been two weeks since we’ve spoken, and a lot has happened in two weeks, Steve.

[00:00:21] Cameron: How have you been, buddy?

[00:00:23] Steve: Good. Busy, traveling around. Did one bit of interesting work, Cameron. We can’t talk about a great deal of it now, online, but let me just say this. It’s right up your alley. I did some work with the National Security Apparatus on evolving risks involving technology.

[00:00:43] Cameron: Well, that’s

[00:00:44] Steve: a good conversation.

[00:00:45] Steve: And it was very mind-opening in a whole lot of areas. Mind-opening. And, uh, I think the net takeaway was that the biggest risks aren’t the behavior of the people. And this is not a secret. The biggest risks are the behavior of the corporations and the lack of regulation around them, which then facilitates all other risks, which has been clear for a really long time.

[00:01:16] Steve: And the lag effect geopolitically on that within this security realm is pretty apparent anyway. So hopefully that landed, and they were, I think, grateful to get some new understanding. Let’s just leave it at that, because they’re listening, Cameron. I think we know that, and it’s all there to be had. And I had to go through a significant, uh, vetting process to

[00:01:42] Cameron: do the work.

[00:01:44] Cameron: Hold on. You do a podcast with me and you still got approved? I couldn’t believe it. They dropped you. Drop that ball.

[00:01:51] Steve: Please don’t, Cameron. No, I think that’s why they got me. I was in a Hercules. I was like, no.

[00:01:59] Cameron: No, really? They’ve got you because now they’ve got you to spy on me. I know how this works. I can’t talk about these things.

[00:02:05] Cameron: Embedded. Embedded. Yeah. Well, uh, that sounds exciting, Steve. Um, I think they’re shuffling deck chairs on the Titanic. I knew you would say

[00:02:17] Steve: something like that. But by the way, do you remember once that you had on your bio on Twitter or somewhere, it said, twisting the nipple of industrial complexes since 1987?

[00:02:27] Steve: That said something like that, didn’t it?

[00:02:30] Cameron: I don’t know. Possibly. I can’t believe you don’t remember it. I remember it. I don’t remember. Yeah. Um, I finished my big coding project, uh, I think since the last time we spoke, uh, last week. The QAV checklist, I fully automated it. The thing that took me months, on and off, to do.

[00:02:50] Cameron: What a couple of years ago would take me four hours to do now takes literally one click of a button. Then it runs for four hours overnight. I click it on a Friday night, I wake up on a Saturday morning, and I’ve got an entire buy list with hundreds of companies analyzed and, uh, scored and in a spreadsheet.

[00:03:14] Cameron: It’s, um, it was a good feeling to finally finish that, get it code complete.

[00:03:21] Steve: Just on that topic, the coding thing, remember I did an AI portfolio? I tricked ChatGPT into creating a portfolio, it wouldn’t give me one, and then I asked it about Warren Buffett and his principles, and which sectors, uh, those principles follow, who’s doing well in those sectors, who’s likely to, and it gave me a portfolio.

[00:03:40] Steve: Well, two years have passed, and I did an assessment of the portfolio, I’ll send it through. And I compared it to the S&P 500. It didn’t outperform, it underperformed the S&P 500. But interestingly, all of the stocks that it picked within those sectors were the best performers in their sectors.

[00:04:01] Steve: Now the reason the portfolio underperformed the S&P 500 was because the tech sector did so well and it only gave me 40 percent of my stocks in the tech sector.

[00:04:12] Cameron: Well, I don’t know about that, because So, the stocks it

[00:04:15] Steve: gave me, the stocks it gave me, like it gave me some banking ones, some retail, some infrastructure, and, and all, and the companies that it gave me beat their competitors in that sector.

[00:04:25] Steve: But the overall portfolio, because only 40 percent of it was weighted in tech, underperformed the S&P 500, because tech is such a big chunk of the S&P 500 now.
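Steve’s point here is just weighting arithmetic, and it can be sketched in a few lines of Python. All the numbers below are invented for illustration, they are not the actual figures from his portfolio or the index:

```python
# Hypothetical illustration: why a portfolio of sector-best stocks can
# still trail the S&P 500 if it underweights the sector doing the
# heavy lifting. All returns and weights here are made up.

def weighted_return(weights, returns):
    """Portfolio return as the weighted average of sector returns."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[s] * returns[s] for s in weights)

# Assumed sector returns over the period (tech dominates).
returns = {"tech": 0.60, "banking": 0.10, "retail": 0.08, "infra": 0.06}

# AI portfolio: only 40% tech, even though its picks beat sector peers.
ai_weights = {"tech": 0.40, "banking": 0.25, "retail": 0.20, "infra": 0.15}

# Index proxy: heavily tech-weighted, like today's S&P 500.
index_weights = {"tech": 0.70, "banking": 0.15, "retail": 0.10, "infra": 0.05}

print(f"AI portfolio: {weighted_return(ai_weights, returns):.1%}")  # 29.0%
print(f"Index proxy:  {weighted_return(index_weights, returns):.1%}")  # 44.6%
```

Even if every individual pick beats its sector peers, the portfolio’s return is dominated by how much weight sits in the best-performing sector.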

[00:04:35] Cameron: But the S&P 500, I mean, the results have been good, but not Mag 7 good. I’ve got a US portfolio that I run for QAV. It’s doing three times the S&P 500 over the last 18 months I’ve been running it.

[00:04:51] Cameron: That’s really good over 18 months. And we have no tech stocks. We’ve got, uh, shipping, banking and finance, um, Land’s End, a retailer. Um, you know, the best that’s done, up like 300 percent in the last year, is a company called Willis Lease Finance Company that we bought into. Um, but no tech stocks, because the tech stocks are way too overpriced for a value investor like us.

[00:05:22] Cameron: But the S&P’s up about 30% over that period of time. Our portfolio is doing about 100 percent

[00:05:27] Steve: over that period of time. They’re both extraordinary. Of course, you know, the long-term average of the S&P is 11 percent. Yeah, it’s been a bonkers couple of years, but we digress.

[00:05:38] Cameron: So, um, big news, uh, I guess this week, Steve, is Grok 3 launched.

[00:05:44] Cameron: For the people who don’t know, that’s X, formerly known as Twitter, xAI’s, uh, AI, Elon Musk’s AI, let’s just put it that way. This is version 3. They built the data center for this extremely quickly, and built the whole thing extremely quickly. And, you know, when it launched a few days ago, Elon was out there tweeting saying that it beat all of the state-of-the-art models.

[00:06:15] Cameron: People then got into it and reviewed it, and the feedback that I’ve seen has been pretty good. Pretty good, I gotta

[00:06:23] Steve: say. It did exclude, though, it did exclude some of the models in the assessment tests. Like OpenAI’s o3 was omitted in certain areas, there were certain things that were omitted. And when you put those back in, and I looked on Reddit, then it didn’t beat all of them, but it did really, really well on average.

[00:06:41] Cameron: Doing really well. And, uh, you know, the way that Elon is positioning it is that, A, it’s very, very smart, but B, it’s not as woke. Let’s say that it’s not as locked down. I hate that term. It’s not, uh, it doesn’t have the same sort of guardrails. That’s a better way to say it. It’s not as

[00:07:05] Steve: regulated. It’s a little bit less regulated.

[00:07:07] Steve: I think that’s a better way to frame it because the reality of the world is that large chunks of the world, certainly when it comes to thought, wishes, behavior, what we write, the internet is filled with porn and all sorts of stuff. So it’s a little bit more of a reality engine, but I did like what he said.

[00:07:24] Steve: The quote was, it’s configured for maximum truth-seeking. Which sounded like something out of a comic book in 1965. Configured for maximum truth-seeking, Grok 3.0, the AI that you can rely upon for the world as it is, or as you want it to be, in a non-woke nature.

[00:07:44] Cameron: According to the Grok website on x.ai, uh, trained on our Colossus Supercluster with 10 times, Colossus Supercluster?

[00:07:54] Cameron: Geez, that was a colossal supercluster, wasn’t it, Grok? That doesn’t happen again. With 10 times the compute of previous state-of-the-art models, Grok 3 displays significant improvements in reasoning, mathematics, coding, world knowledge, and instruction-following tasks. Grok 3’s reasoning capabilities, refined through large-scale reinforcement learning, allow it to think for seconds to minutes, correcting errors, exploring alternatives, and delivering accurate answers.

[00:08:25] Cameron: Has leading performance across both academic benchmarks and real-world user preferences, et cetera, et cetera. Now I put it to the test, uh, Steve. Now, they also have a, you know, OpenAI launched their deep research a couple of weeks ago, and then, uh, everyone’s got their own deep research model now.

[00:08:47] Cameron: You gotta go deep, Cameron. It’s all about how deep you can get. It’s all about deep. I’ve always said that. I’ve always

[00:08:52] Steve: said that.

[00:08:53] Cameron: Depth is key. They’re all calling them the same thing. Deep research, or I think X calls theirs deep search. So I’ve got a standard test that I’m using for all of these. Um, uh, which is where I give it a list of company names, Australian company names.

[00:09:09] Cameron: I say, here’s what I want you to do. I want you to find the official website for this company. On that website, find their most recent annual report. Open it, read it, and then go to the auditor section and see if there’s anything in the auditor’s section that gives me cause for concern, what’s known as a qualified audit.

[00:09:34] Cameron: Which for people who don’t know is not a good audit. That’s a bad audit. A qualified audit is one where the auditor says we’re qualifying our opinion here because we have some serious concerns about the state of the audit. Or conflict of interest because we’re

[00:09:47] Steve: also the people that help them with their tax avoidance strategies as well as doing the auditing because that’s the way the world works Cameron.

[00:09:53] Steve: They don’t call

[00:09:54] Cameron: that out, no. Um, they should, because that’s what they do. Anyway, I’ve tried this. I haven’t tried it on OpenAI’s deep research yet, because I don’t have access to it, but I’ve tried it on Perplexity’s version, which is based on DeepSeek. I’ve tried it on Grok’s version. I’ve tried it on the Hugging Face deep research product.

[00:10:16] Cameron: I’ve tried it with o1 and o3, just the basic level on OpenAI, and they’ve all failed. But the fascinating thing for me with Grok 3 when I tried it yesterday was it did it. It gave me a table, I said, spit the output into a table and, you know, give me just a yes or a no for qualified audits. It gave me a table and it gave me links to all of the annual reports.

[00:10:44] Cameron: None of the links worked. And by the way, that’s true across the board. Whenever I ask them to do this, none of, none of the links to the annual reports work. They tend to find the company’s website okay. Sometimes they’ll give me like a YouTube video or something, but They can’t get the links to the annual reports.

[00:11:00] Cameron: But with Grok 3, I said, no, no, that’s, you’re wrong. That, it was one particular company. I said, no, that’s, that’s, that’s wrong. And he goes, no, it’s not. I said, okay, well, quote, quote me the section from the annual report, from the auditor’s statement that says there’s a qualifier there. And it gave me this long quote.

[00:11:17] Cameron: That sounded like an auditor quote, and it said it’s on page 47. I opened up the annual report, I go to page 47, it’s not there. The auditor’s statement’s on, like, page 250. Made it up, made it up. And it said the auditor was BDO, and it wasn’t. Their auditor’s Pitcher Partners.

[00:11:35] Cameron: I said, hold on here. And I gave it a, I said, that’s not on page 47. It said, yes, it is. So I gave it a screenshot of page 47. That’s why it says

[00:11:42] Steve: here for the listeners. I just want to, I just want to tell everyone what it said, uh, what you said. It argued with me. Cam has on our show notes, it argued with me. I love that so much.

[00:11:53] Steve: I gave it a screenshot. AIs are getting very human. They’re getting so

[00:11:57] Cameron: human. They argue with people. It was getting pissy with me too. I gave it a screenshot of page 47 and it said, oh, okay, sorry, I was wrong, it’s on page 59. So I gave it a screenshot of page 59, I go, it’s not on page 59 either, and it goes, oh, okay, it’s on page blah, blah, blah.

[00:12:11] Cameron: And I’m like, no, it’s not, and their auditor’s not even BDO, it’s Pitcher Partners. But it was arguing with me, going, look, I don’t know why you think I’m wrong, but obviously I’m not wrong. You’re wrong. It was very Elon Musk-y, actually. It’s what I imagine having a conversation with Elon is probably like.

[00:12:29] Cameron: He goes into

[00:12:30] Steve: the meetings when he does a few of those things and he argues with him and just gets put into the training. Probably.

[00:12:36] Cameron: Yeah. Yeah. That’s crazy. That was fascinating that it, you know, I’m used to hallucinations and that doesn’t surprise me, but it was, you know, going all out to gaslight me. It

[00:12:49] Steve: really was going hard.

[00:12:50] Steve: This is really interesting, Cam, and I’ll tell you why. He was espousing the virtues that his is the only AI that has access to the Twitter firehose, and here is the outcome of the Twitter firehose on an AI: it will argue with you. Because, I don’t know if you’ve been on X lately, but I’ll tell you what, it becomes as it’s trained.

[00:13:14] Cameron: Yeah, it’s the dumpster fire

[00:13:16] Steve: AI. Right. But also, as we’ve spoken about quite a few times, AI models will develop personalities, because there is this sense of biomimicry, and the training is really important. And it’s not a classic symbolic code, if-this-then-that protocol. It’s a little bit different.

[00:13:35] Steve: It’s connectionist. And so the type of connections influence the training, which influence the output and personality.

[00:13:42] Cameron: Yeah, I think you’re right. Anyway, some people have been putting it to different tests than mine. I saw somebody on Reddit wrote, Grok3 writes a ridiculously explicit erotic story appealing to male sensibilities.

[00:13:58] Cameron: Um, oh, what? The post has been removed?

[00:14:02] Steve: Yeah, I was heartbroken because I clicked on it going through the show notes to read it and I was devastated, but I’ve read the comments, which gave me some good insights to what it was, but, uh,

[00:14:15] Cameron: Oh, I read it last night. Um, and give us the cliff

[00:14:20] Steve: notes.

[00:14:21] Cameron: It was, it was, uh, um, a real, uh, erotic story, you know, it was, um, it was, uh, straight up porn.

[00:14:35] Cameron: It was erotica. Um, now the reason this is interesting for people who don’t know is that because all of the other AIs won’t do that for you or wouldn’t up until yesterday. So they were all, we’ll get to that. They were all guardrailed. Uh, if you tried to ask it to do something that was erotica, it’d go, sorry, can’t do that.

[00:14:58] Cameron: Um, you know, if you try and get an image generator to create any nudity, most of them will say, sorry, can’t do that. Uh, anything that’s, um, yeah, any expression of sexuality or violence or anything like that, sorry, can’t do that. They go overboard, you know, with OpenAI’s stuff too. Any, like, fictional stories about real celebrities or politicians, quite often they’ll go, can’t do that.

[00:15:26] Cameron: They’re really locked down to try and make sure they don’t get into trouble. But Grok came out and went, nah, there’s just no guardrails, it’ll do anything about anything, unless it’s criticising Elon, I’m sure it probably bails on that. But then OpenAI came out yesterday and said they are removing, or have removed, content warning messages from ChatGPT.

[00:15:52] Cameron: In an effort to enhance user experience while maintaining essential safety measures, also known as, well, the gloves are off now and, um, we all need to follow Elon’s lead. Wow.

[00:16:06] Steve: Yeah. Translated, to me that was: competitive threat, fuck it, let’s do it too. Like, straight up. And especially with DeepSeek, they’re not the only game in town now. And even though I still think a very large majority, and I sample two to five hundred people every week, they all come back to me, and no one uses anything but OpenAI and ChatGPT in the general corporate populace.

[00:16:33] Steve: So, but as you know, that can change pretty quick. Google was out there for a couple of years and everyone was using Lycos and AltaVista and Answers and Yahoo, and then Google took over. Um, so it could be time, but I still think they’ve got the lead in terms of brand perception and brand awareness. Uh, but they’re certainly sensing the competitive threat and that’s a clear play, isn’t it?

[00:16:55] Steve: A response to, to competition. But, but I just, I just love it that Musk has announced like you can get politically charged, racially charged, offensive content on demand. It’s crazy. Go for it. I wonder if there are, I think, some things that OpenAI does which are far too conservative in terms of the guardrails where it’s a bit of a dragnet policy.

[00:17:18] Steve: I was doing a presentation this week and I wanted to get a cartoon like picture of Tech CEOs in superhero outfits and OpenAI wouldn’t do it. So I had to go to some of the other locations to get it. And they’re all fine with that. And given that they’re public figures, there’s a zillion images of them.

[00:17:40] Steve: Photoshop’s been doing it for years in magazines. It just seems far too conservative an approach. And I do wonder how far they will, uh, move that up, but I do think it is far too restrictive at the moment.

[00:17:52] Cameron: I agree, and I, you know, I think the last couple of years they’ve been playing a cautious game, early days of AI, but as, as things progress, I think all of these guardrails will probably come off for reasons of competition and nothing else.

[00:18:06] Cameron: And in terms of competition too, we should also point out that Grok 3 is currently freely available to everyone, um, if you go to the X website. Uh, it is supposedly going to be premium only, but as I understand it, you’ve just got to be a premium Twitter member, which is about 20, 30 bucks a month, depending on where you are on the exchange rate.

[00:18:29] Cameron: So versus 200 bucks a month for OpenAI’s top-level model. Big price difference. So we’re going to see a lot of price competition here. Of course, the difference is that Twitter already has a revenue stream. Now, it’s also apparently going broke, but, uh, X has a revenue stream already. Kind of reminds me of Netscape versus Microsoft Internet Explorer in the 90s.

[00:18:56] Cameron: Uh, Microsoft didn’t need to make money out of IE. We’ve talked about that before. OpenAI has one revenue stream and they need to figure out how to monetize it. And this has always, to me, seemed to be the weakness of OpenAI: as competition ramps up, their ability to charge premium dollars for this is going to depend a lot on how far ahead they are and what their unique value proposition is, and that’s going to get harder and harder, I think.

[00:19:31] Steve: My view is that OpenAI is Microsoft in disguise. And the only reason Microsoft doesn’t own more than 49 percent, but yet has the controlling interest in votes, is simply because of antitrust. So OpenAI in my mind is Microsoft, and it’s already being put into the Microsoft Office suite through Copilot and other things.

[00:19:50] Steve: And I just think that it eventually, when the timing’s right, it’ll be like, Oh yeah, that, yeah, we always were going to put a little bit like. Zuckerberg. Oh, we could never integrate WhatsApp with the rest of our stuff. They’re just separate. Let’s start. Don’t worry about that. They’re two different things.

[00:20:04] Steve: We just, we’re a conglomerate who just owns them. And then all of a sudden it’s all, that’s going to happen.

[00:20:09] Cameron: It’s Rupert Murdoch’s old buying a newspaper and saying, no, we don’t want to touch anything that you do. We love what you do. It’s great. We want to keep, we

[00:20:17] Steve: bought it because we love what you do.

[00:20:19] Steve: That’s the reason. And we want to keep them separate, because we believe a diverse ecosystem serves us all in an information sense. And we’ve always said that. A year later, everyone gets

[00:20:32] Cameron: fired. Well, speaking of Microsoft, we’ll get back to LLMs, but speaking of Microsoft, um, they launched something yesterday called the Microsoft Majorana 1 chip.

[00:20:46] Steve: Say that again please Cameron. That was beautiful.

[00:20:48] Cameron: Majorana, named after, uh, Ettore Majorana. Now, he was an Italian physicist in the early part of the 20th century, who disappeared in 1938. Please tell me they never found him, because this just makes this podcast the best one we’ve ever done. They never found him.

[00:21:10] Cameron: And they never, no one knows what happened to him. Someone knows, Cameron. Yeah, well, someone knew, I dunno if they’re still around, it was 1938. He was 32 years old. Um, he came up with this theory about fermions, all matter as made of fermions. He posited a theory for a fermion that was its own antiparticle, which got known as Majorana particles, and then he disappeared.

[00:21:41] Cameron: He wrote some letters apparently to friends saying, don’t try and find me. And then just disappeared and no one knows what happened to him. 32 years old, Italian physicist, living in Italy, boom, gone, no one knows where or why or whatever. Anyway! Maybe he

[00:22:00] Steve: uncovered the code of the universe. Yeah. The meaning of the universe, the meaning of life, which, by the way, was what Grok had chosen to put in as the first question in its demo.

[00:22:12] Steve: Which is 42. It didn’t say 42, unfortunately. You said it was a copy.

[00:22:17] Cameron: So it’s still working on it. Yeah. Yeah. It’s

[00:22:19] Steve: still working

[00:22:19] Cameron: on it. Still thinking. So Microsoft launched this thing. Now I read the sort of Microsoft press release, uh, overview of this. And my initial reaction was, Oh, is it April 1st? Cause this is obviously a bullshit story.

[00:22:38] Cameron: And I actually went into, uh, you know, GPT or something and said, read this, and now tell me, is this a joke? Is this a joke story? And it came back and went, no, definitely not a joke. This is definitely real. That’s how bullshitty this thing sounded to me. It just sounded like they were just making words up.

[00:23:03] Cameron: Like, here’s something from one of the, uh, overviews I read. This chip leverages a topological core architecture and introduces the world’s first Topoconductor, a breakthrough material enabling control over Majorana particles to create stable, scalable qubits. And I was like, fuck off. You’re just making words up now to sound cool.

[00:23:28] Cameron: And it’s been really interesting. Like the, so apparently this is real and I’ve seen physicists on TikTok and on Reddit, uh, talking about this and they’re all like, Seriously, what the fuck just happened? Because, apparently, physicists have been trying to work this out.

[00:23:48] Steve: I also said what the fuck just happened, but my what the fuck was like, What the fuck is all this?

[00:23:53] Steve: It wasn’t what the fuck just happened, it was like, I’m looking at what’s a topological core, I’m typing it into ChatGPT, Can you summarize what this is? I mean, yeah.

[00:24:04] Cameron: So, apparently, physicists have been trying to find Majorana particles for decades, and figure out if they exist, and if they do, how could you use them, and then Microsoft just came out, out of nowhere.

[00:24:21] Cameron: And said, Oh yeah, we found them. And not only did we find them, we’ve built a chip that knows how to build a quantum computer architecture on top of them in a stable, scalable way. That’s, well, that’s

[00:24:34] Steve: the question. Is it stable and scalable? Because for the listeners, the core challenge with quantum computing has been temperature, they need to be near absolute zero, and even just vibrations,

[00:24:46] Steve: I think we’ve spoken about in the show before, small things can topple it over and they’re non functional. Uh, so that’s, I guess, it’s not just the architecture that you can build it and scale it, that it’s stable. Cause that’s been the real issue, hasn’t it? Stability.

[00:25:02] Cameron: That’s one of the things, and that’s apparently one of the reasons why using Majorana particles

[00:25:10] Cameron: Uh, is important, because they are stable, because they are their own antiparticle, and you can stabilize quantum computation using these, and they reckon they’ll be able to scale it up to putting a million qubits on a chip. Here’s something else from the press release: the world’s first Topoconductor, this revolutionary class of materials, enables us to create topological superconductivity, a new state of matter.

[00:25:38] Cameron: That previously existed only in theory. The advance stems from Microsoft’s innovations in the design and fabrication of gate-defined devices that combine indium arsenide, a semiconductor, and aluminium, a superconductor. When cooled to near absolute zero and tuned with magnetic fields, these devices form topological superconducting nanowires with Majorana zero modes, MZMs, at the wires’ ends.

[00:26:09] Cameron: It was at that point that I was like, fuck off. You’re just making shit up now. This isn’t real.

[00:26:14] Steve: I was watching the video and the video is crazy. It looks like a simple little, the thing that’s crazy about the chip is it looks simple and you think, oh, wow, I can just plug that in. It’s, it’s like the old Pentium days.

[00:26:25] Steve: You just plug that in and it’s all so simple. Yeah. Sounds incredibly complex. Sounds like a massive curve jump. One of the things that is so interesting, if we circle back to the fact that they built the world’s biggest supercomputer to power Grok 3, is that it’s this kind of curve jump that means Moore’s Law continues, if there is such a thing, and we go on to something that has incredible computation, like, not even 10x or 1x or doubling.

[00:26:59] Steve: It’s like, isn’t it almost infinite if you have a quantum computer and its capability because of superposition?

[00:27:06] Cameron: Yes. And like, there’s a lot of questions about quantum computing. I mean, there’s a lot of money being spent on it. Obviously a lot of these big corporations, Google, Microsoft, et cetera, believe that there is a huge future in this.

[00:27:21] Cameron: There’s still questions about how scalable it will ever be, and if they can scale it, how useful it is. You know, there are some things that they think it’ll be very, very good for, like, well, science. Uh, you know, the official Microsoft video that came out to announce this is about 15 or 20 minutes long.

[00:27:41] Cameron: I watched it last night. They’re saying, look, you know, standard computers can deal with, like, the behavior of 10 electrons, and you can get a supercomputer that can work with theorizing up to 20 electrons. But as soon as you get to the behavior of 30 or 40 electrons, even supercomputers today can’t handle it.

[00:28:03] Cameron: This guy from Microsoft said if you had a supercomputer the size of the planet Earth, it still would take millions of years to be able to come up with all of the variations for just 30 or 40 electrons. Yeah, because it’s massive sorts of numbers, but quantum computers in theory would be able to do this, which opens up all sorts of opportunities for scientific research and climate change model prediction, etc, etc.
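The scaling claim is easy to sanity-check with back-of-the-envelope arithmetic. Treating each electron as one qubit (a simplification for illustration), a classical simulation has to store 2^n complex amplitudes, so memory doubles with every particle you add:

```python
# Back-of-the-envelope: memory needed to store the full quantum state
# of n two-level systems (qubits) on a classical computer.
# The state vector has 2**n complex amplitudes; at 16 bytes per
# complex128 amplitude, the total doubles with every qubit added.

def state_vector_bytes(n: int) -> int:
    """Bytes needed for a dense 2**n complex128 state vector."""
    return (2 ** n) * 16

for n in (10, 20, 30, 40):
    gib = state_vector_bytes(n) / 1024**3
    print(f"{n} qubits: {gib:,.6f} GiB")
# 30 qubits already needs 16 GiB; 40 qubits needs 16 TiB.
```

By around 50 qubits the dense state vector passes the memory of the largest supercomputers, which is the flavor of the "planet-sized computer" claim above.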

[00:28:31] Cameron: So anyway, um, I’m definitely no expert on quantum computers or on particle physics, but this idea of Majorana zero modes is really interesting. Without getting too technical, it means that when two Majorana particles come close together, instead of behaving like normal particles, where you have a particle and an antiparticle that come together and annihilate each other, you get two Majorana particles together.

[00:29:01] Cameron: And because they are their own antiparticle, whatever that means, they combine in a special way to form a new kind of particle that they call a Majorana zero mode. So it’s its own antiparticle, and it has the ability to be stable and yet still operate like a quantum computer. So anyway, the bottom line with this is they can’t do anything with it right now, but Microsoft are saying that they’re convinced that this architecture will enable them in the next few years to build very large, stable quantum computing devices, which will still need to be, you know, temperature controlled and all this kind of stuff.

[00:29:51] Cameron: And will initially, if it’s successful, have very specific applications. They do talk about AI in their video and, you know, the idea that AIs will be able to use these in some way and massively scale up AI, etc, etc. But the bottom line is it came out of nowhere. And here’s, you know, this is sort of what we’ve been talking about for the last couple of years.

[00:30:15] Cameron: We are now at this period where, with these sorts of things, like cold fusion, AI, quantum computing, we should expect Black Swans to be happening on an increasingly regular basis, where all of a sudden,

[00:30:37] Steve: Potentially circumvent massive infrastructure investments because it’s like, Oh yeah, I’ve got that printing press.

[00:30:43] Steve: I don’t need any more kind of thing. Yeah.

[00:30:45] Cameron: Yeah. These things are going to come along and just rewrite the rules of, of, you know, what it means to be, um, a human in the 21st century.

[00:30:59] Steve: I’d be curious

[00:31:00] Cameron: to know. Rapidly.

[00:31:01] Steve: The influence on finding these Majorana particles and the zero mode and all of that, whether it was found or uncovered by AI.

[00:31:12] Cameron: They didn’t get into that. And in fact, they were very, very light on details. That’s what I’m like,

[00:31:17] Steve: how do you come up with it? And it could well be that now we’re starting to get, cause you’ve got to remember, they’re going to be a few steps ahead, uh, of where we are, uh, or what’s publicly available in terms of AIs and who knows what background breakthroughs are being made.

[00:31:34] Steve: In the infrastructure of possibilities itself. So the AI is helping invent the new ways to facilitate the AI.

[00:31:42] Cameron: Yeah, without a doubt, that’s, you know, what’ll be happening. And just like the explosion we’ve seen in AI, you know, or, um, deep reasoning recently, once it is known that these things can be done. I always take it back to fission, nuclear fission. Once the German scientists in the 1930s figured out how to do nuclear fission, that it was possible,

[00:32:16] Cameron: Then all of a sudden the Americans went, Oh shit, it’s possible. They did a speed end run around Hitler, built the first nuclear weapon. Once it was proven that you could build a nuclear weapon, within years, everyone had nuclear weapons. You were pointed

[00:32:29] Steve: out because you, yeah, it’s like the four minute mile, right?

[00:32:32] Steve: It can be done. Your heart’s not going to explode.

[00:32:35] Cameron: Yeah. So it’s the same, but once these things get done, everyone, all the rest of the scientists go, Oh shit, we can do that. Okay. Let’s figure out how they did it, right? So, uh, I expect to see a lot of stories over the next year or two about this kind of topological core architecture and Majorana particles and quantum computing.

[00:32:59] Cameron: Who knows? May lead nowhere, but if, if it happens, it could be the thing that takes AI, or cold fusion, or nanotech, whatever it is, to the next level. These things are going to build on each other, and we’re in the knee of the curve where things are going to get fucking crazy in the next few years. Let’s go crazy, delete it.

[00:33:23] Cameron: I was talking to Tony on QAV earlier this week about the Fox, uh, the Murdoch, uh, family legal battles. They’re fighting over Fox News and the future of Fox News. And I said to him, they’re, they’re rearranging the deck chairs on the Titanic, man, because within a year or two, no one’s going to give a shit about CNN or Fox News or MSNBC.

[00:33:44] Cameron: It’s going to be, what does my AI tell me happened today?

[00:33:48] Steve: No, one’s going to

[00:33:48] Cameron: get

[00:33:49] Steve: it from somewhere in a weird way. The news agencies, uh, you know, AAP and Reuters could, could do well if they plug that stuff in. Uh, but I also think that Netflix and the streamers will have 24/7 CNN channels. It’s a deck, it’s a desk.

[00:34:10] Steve: You have a desk in each city, uh, with a background, you cross to the same news reports from all around the world, all you do is have a desk, and here’s the hack, here’s the beauty of the hack. Increasingly, countries are demanding that you have a certain amount of local content, and that’s how they get it.

[00:34:27] Steve: You have a desk there with a 24 hour channel with the news and it’ll be based on how many viewing hours in Australia are local based content. I think that’s what they’ll do. They’ll have morning shows and local based news and you’ll put that on in the morning instead of the Today Show. You know, the morning show, and you’ll just have that on in the background?

[00:34:45] Steve: I would. You know, for news like

[00:34:46] Cameron: a CNN. All of the news will be AI generated and it’ll mostly be hallucinated and just made up and no one will know the difference. It’s AI generated, hallucinated, nobody knows the difference. That’s, there’s a rap in that somewhere, Cameron. The thing, the thing that’s been sort of slightly amusing me in the last week or two, now that Trump is pushing Zelensky to do a deal with Putin.

[00:35:12] Cameron: And I’m watching all of these people losing their shit online over the fact that, you know, they’re saying Trump is just parroting Putin’s propaganda about the war, et cetera, et cetera. The ironic thing is the people saying that are actually parroting America’s propaganda about the war.

[00:35:34] Cameron: Um, Trump’s not right either, but. Uh, none of them understand the real reasons behind the war and because they’ve just believed all the bullshit that they’ve been fed by the propaganda engines, which are the news networks in the U. S. and the U. S. government and our government and our news networks that just, you know, are on the same script.

[00:35:58] Cameron: So it doesn’t matter if AI just makes it all up is my point. No one’s going to know. The point is that all news is made up

[00:36:04] Steve: anyway. News is someone’s opinion, right?

[00:36:07] Cameron: What about when the AIs are on all of our devices and are reading all of our emails? Won’t they just be the news service? It’ll be going, Hey, guess what Steve did today?

[00:36:15] Cameron: You know, cause that’s the news, right?

[00:36:19] Steve: I don’t know, but there is another theory. Good mate of mine, Nick Hodges, says that there’ll end up being an AI sphere. And the sphere is just the AIs talking to the AIs doing AI-y things. And we just get down here and just eat food and do normal stuff. And they’re just up there, just sort of spy versus spy kind of AI-ness in like this kind of like ozone of AI.

[00:36:41] Steve: Just lives a layer above us. It’s not too far from what might happen. How is Nick? I haven’t spoken to Nick for years. Catching up with him on Monday week. We were going to try and catch up today, but we couldn’t make it work, so. Tell

[00:36:54] Cameron: him I said hi.

[00:36:55] Steve: I will. I sure will.

[00:36:58] Cameron: Um, alright. Let’s, uh, keep going. I’ve got a story here about China, Steve.

[00:37:04] Cameron: Um, China, according to ChinaDaily.com, China is now home to more than one third of the world’s LLMs, according to the China Academy of Information and Communications Technology. Well, we’re not talking about propaganda coming from different channels. Did you plan this, Cameron? I thought you’d like this. Um, if you have, if you have data to refute this, please give it to me.

[00:37:32] Cameron: But this is what I thought, I thought, does it include open source? Okay, the point doesn’t matter about where China sits. The point is this next line. The number of LLMs worldwide has reached 1,328, with 36 percent from China, the second largest after the US, the Academy said. Um, now, here we go. I’m going to take this question and I’m going to give it to Perplexity.

[00:38:03] Cameron: And I’m going to ask, the number of LLMs worldwide, while we’re here, I’ll ask ChatGPT.

[00:38:10] Cameron: Perplexity says the exact number is difficult to pinpoint due to rapid growth and development in the field, and doesn’t even take a crack at giving me a number, just says the market’s expanding rapidly, blahdy blahdy blah. ChatGPT says, as of July 2024, there are 1,328 large language models worldwide, with China contributing 36 percent of them, and it quotes, uh, a Chinese government website.

[00:38:44] Cameron: The, the global LLM market was valued at approximately $5.6 billion in 2024 and is projected to grow at a compound annual growth rate of 36.9 percent until 2030.

[00:38:56] Steve: Um, notably, is that the LLMs’ revenue, not the valuations?

[00:39:01] Cameron: Um, I don’t know. Yeah, I guess valuations would be way more than that. You must be right. Yeah, way more.
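As a quick back-of-the-envelope check on that quoted projection (assuming, as the article implies, that the 36.9 percent CAGR compounds annually from the 2024 figure), the implied 2030 market size works out like this:

```python
# Sanity check on the quoted projection: a $5.6B market in 2024 growing
# at a 36.9% compound annual growth rate (CAGR) through 2030.
value_2024 = 5.6          # USD billions, as quoted
cagr = 0.369              # 36.9% per year
years = 2030 - 2024       # six years of compounding

projected_2030 = value_2024 * (1 + cagr) ** years
print(f"Implied 2030 market size: ${projected_2030:.1f}B")  # roughly $37B
```

So the projection, taken at face value, implies the market grows more than sixfold in six years.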

[00:39:09] Cameron: Anyway, that’s like, just getting to the point of the explosion stuff that I was talking about before. You know, there are 1300 plus LLMs out there today and an increasing number of them are, Uh, right up there, like you’ve got, I don’t think I

[00:39:24] Steve: can name more than a dozen.

[00:39:26] Cameron: No, me either, maybe even less, but you know, you’ve got out of the US, you know, we’ve got Anthropic, we’ve got OpenAI, so we’ve got Claude, the various Anthropic models, you’ve got OpenAI models, you’ve got Grok, you’ve got Google’s models, right?

[00:39:41] Cameron: Gemini. Llama, Facebook. You’ve got Llama out of Facebook. Then you’ve got, and there are various versions that they have of that in their different apps. Then you’ve got Mistral out of France, you’ve got DeepSeek and Qwen, the Alibaba one, out of China, and I see Tencent has just rolled DeepSeek into all of their apps over there too, but that’s just still DeepSeek.

[00:40:10] Cameron: So, uh, and you’ve got Perplexity too, uh, in the US, and, uh, God, there’s a bunch. Hugging Face has all of their stuff that they provide as well, different models. Plus you’ve got people that have got things like OpenRouter that are, like, layers on top of the AIs that you can access. I don’t know if that’s included in that number.

[00:40:30] Cameron: But if you go into OpenRouter or, uh, Hugging Face, and you look at all of the models that they have available for you to play with, there’s just an insane amount of proliferation. And they’re all gonna be relatively as good as each other. They’re all gonna get really close

[00:40:48] Steve: to each other.

[00:40:49] Cameron: And some are going to be free, and some are going to have different business models, and there’s going to be this massive experimentation of business models, and some are going to be advertising based, and some are going to be, you know, I don’t know, premium, freemium models, subscription models.

[00:41:03] Cameron: Some might also

[00:41:04] Steve: be SaaS-ish. I mean, we did talk about the idea that a really good general AI should be able to, you know, absorb and kill a whole bunch of SaaS because you won’t need it. But then you might find people get really good at developing AIs that are good at really specific industry-based things, really well trained.

[00:41:24] Cameron: Yeah, yeah, go really deep niche, and Google’s already doing a lot of that stuff with AlphaFold, et cetera, et cetera. But just expect a world of just a proliferation of AIs.

[00:41:39] Cameron: Well, I like

[00:41:40] Steve: that a lot, because I think that the last thing we need is, you know, three or four, an oligopoly of powerful companies, because we’ve seen where that led with big tech, and it led to some pretty nefarious results. But it does remind me a little bit about social media. Do you remember Web 2.0?

[00:42:00] Steve: We used to have those charts of here’s all the Web 2.0 companies, and it was all these logos, and it was all so exciting and open and the end of Murdoch, and then we ended up with, you know, just a bunch of big tech companies that were Murdoch-ish in many ways, right? I mean, I think they ended up being that way.

[00:42:18] Cameron: Yeah. Consolidation. Yeah. They figured out how to consolidate it and kill off all of the experimentation at the low levels and, well, buy them out and kill them and all of that kind of stuff.

[00:42:31] Cameron: Well, you know, it’s an interesting model, um, without wanting to get sidetracked too much, but, um, you know, I, I’ve done a lot of shows about, you know, how Christianity in the US went from being very sort of lefty in the early part of the 20th century to being far, far right by the end of the 20th century.

[00:42:52] Cameron: It

[00:42:52] Steve: is far right. It’s all about what you’re not allowed to do rather than helping out someone who needs a helping hand.

[00:42:58] Cameron: And what happened was, in the 50s, a bunch of rich guys in LA started funding a church in LA whose pastor was on board in talking about how Christianity and Capitalism went together and Socialism was evil.

[00:43:17] Cameron: And then he started recruiting other pastors around the U. S. and basically saying, if you get on board the capitalism train, because before then, Christianity in the U. S. was mostly socialist, it was about taking care of the poor, taking care of the weak, you know, the rich were bad guys, you know, there should be more equitable distribution of wealth, etc.,

[00:43:36] Cameron: etc. Then, then, and then FDR brought in the New Deal during the Depression. The capitalists lost their shit, and socialism was becoming more popular right around the Western world anyway. So they, they sort of redefined Christianity to be pro capitalism and pro American. Yeah, because it is

[00:43:57] Steve: antithetical to a lot of the It’s, I didn’t know that was such a

[00:44:05] Cameron: strategic, implemented plan.

[00:44:08] Cameron: It was. There’s been lots of studies done on it. And then what happened was the pastors that got on board got shit tons of money from rich people. Then they could build bigger churches with nicer seating and air conditioning and heating and all the flashy things and all the people. And then the bosses.

[00:44:28] Cameron: of that area went to the new church where they were funding and putting all the money in. So their employees, if you wanted to be good with the boss, you had to go to the same churches your boss went to. So people left the old churches that didn’t get on board and went to the new churches. So the old churches went broke and the new capitalist churches prospered and you get prosperity gospel out of that.

[00:44:49] Cameron: That’s where it all came from.

[00:44:50] Steve: I think the core lesson here is. Air conditioning is far more powerful than people think strategically. I’ve always said that about air conditioning. I think cool air blowing a breeze during the Central and Midwestern America is a great strategy.

[00:45:05] Cameron: Live in Queensland.

[00:45:07] Steve: Queensland. If I don’t

[00:45:08] Cameron: have air conditioning, I get, you know, I get, I get murderous. Ask Chrissy. There you go. Air conditioning. Anyway, moving right along. Right up there with plastics. February 7th, 2025. Headline: Using AI to decode language from the brain and advance our understanding of human communication.

[00:45:30] Cameron: This is on Meta’s website. My son Taylor was down in Melbourne yesterday speaking at Meta, as he was the guest of Meta. Oh, I wonder if they scanned his brain while he was there. Here’s what the article says, over the last decade, the Meta Fundamental Artificial Intelligence Research Lab, FAIR is the acronym, in Paris has been at the forefront of advancing scientific research.

[00:46:00] Cameron: We’ve led breakthroughs in medicine, climate science, and conservation, and have kept our commitment to open and reproducible science. As we look to the next decade, our focus is on achieving advanced machine intelligence and using it to power products and innovation for the benefit of Mark Zuckerberg.

[00:46:15] Cameron: No, sorry, everyone. Today, in collaboration with the Basque Center on Cognition, Brain and Language, a leading interdisciplinary research center in San Sebastian, Spain, we’re excited to share two breakthroughs that show how AI can help advance our understanding of human intelligence, leading us closer to AMI.

[00:46:34] Cameron: Blah, blah, blah, blah, blah. We’re sharing research that successfully decodes the production of sentences from non-invasive brain recordings, accurately decoding up to 80 percent of characters and thus often reconstructing full sentences solely from brain signals. Wow. In a second study, we’re detailing how AI can also help us understand these brain signals, clarifying how the brain effectively transforms thoughts into a sequence of words.

[00:47:12] Steve: You know who’s really upset about this? The CIA. And the reason that they’re upset is that they could waterboard people to try and get the truth out and now they can just read it so they can’t be as cruel. So they’re upset! A lot of the hardcore Waterboarding.

[00:47:28] Cameron: I trained on waterboarding for years. And now you’re taking waterboarding

[00:47:32] Steve: away of what you’re telling me.

[00:47:33] Steve: I can just read Harris thoughts. Yeah, crazy. Waterboarders. Gives a whole lot of meaning to the lie detector. I mean, this is next level lie detector. We, we would have found out that Seinfeld was watching Melrose Place. I mean, he was, and that’s for the old Seinfeld fans out there. Maybe there’s none. We don’t know.

[00:47:54] Steve: But the ability to To decode someone’s

[00:47:57] Cameron: thoughts. I mean, well, this is what Neuralink does. We’ve talked about this before. This is literally how Neuralink works, but it’s putting a chip inside your head to measure the brainwaves. This one, you’re sitting in a, in a, um, chair that has like a massive helmet on your head, like, uh, you know, an EEG

[00:48:22] Steve: scanning thing.

[00:48:23] Steve: Your brain scan.

[00:48:24] Cameron: Yeah, they call it an MEG, and it is magnetic

[00:48:30] Steve: something,

[00:48:32] Cameron: uh, yes, MEG stands for magnetoencephalography. Encephalography.

[00:48:39] Steve: That’s right, encephalography. Magneto, that’s right. You took the words right out of my mouth, Cameron.

[00:48:45] Cameron: And it’s, it’s basically doing brain scans. So they’re scanning your brain, and then they take the brain signals and they put them into an AI, and then the AI figures out what it is.

[00:48:55] Cameron: Let’s get futuristic,

[00:48:56] Steve: Cam. This is where I want to go. So it’s there and it’s amazing. Here’s a

[00:48:59] Cameron: good name for a podcast.

[00:49:00] Steve: It’s a great name for one. Here’s what I wonder and think. We know that the resolution of technologies increases and constantly gets better. What if we could have the equivalent of an MEG where you don’t have to have a helmet?

[00:49:16] Steve: You could walk through a tunnel while you’re about to board a plane and it could read your brain, or you walk into your job in the corporation or you’re in the boardroom, and we could have it at an ambient level, which, you would have to imagine, within 10 years, given we’re getting close to the singularity, would be true.

[00:49:36] Steve: Then, is the ultimate enclave of privacy gone? Like, your thoughts can be read. Like, if the resolution of the scanning gets good enough, this is assuming that it’s true, that it can translate 80 percent of your thoughts, which is enough. I’ll tell you what, 10 percent of my thoughts are enough to get me in trouble, let’s just say that.

[00:50:00] Steve: But 80%? And then if you have some sort of an ambient tool which can scan that, like, where do we go, Cam?

[00:50:07] Cameron: Yeah, I mean, there are various science fiction sounding ways that this can play out or, you know, or it could be flying cars too, right? We, we, we might never get close to that because it’s just the, um, technology can’t get there or the investments to get there won’t be made or the legislative or regulatory issues get in the way.

[00:50:31] Cameron: Although if Elon has his way, there will be no more regulation anywhere in the world. Um, you know, I think it reminds me of Cory Doctorow’s book, Down and Out in the Magic Kingdom. Don’t know if you ever read that. I haven’t read that one, but I’ve read a few of his books. Uh, I remember in that book, when people went to bed, the bedhead would read, would scan your brain.

[00:50:55] Cameron: Wow. So you had a backup of your brain being made every night when you went to sleep. And then if you got hit by a truck the next day, and I think that happens to the main character, you get, uh, rebuilt. Uh, from your last brain scan. So they can basically rebuild you based on your, your backup. You know, you can back up your thoughts, your memories, that kind of stuff.

[00:51:23] Cameron: Anyway, the techno, the way this works, without spending too much time on it. Um, they say: we used AI to help interpret the MEG signals while participants typed sentences. By taking 1,000 snapshots of the brain every second, we can pinpoint the precise moment where thoughts are turned into words, syllables, and even individual letters.

[00:51:47] Cameron: So it kind of

[00:51:47] Steve: reverse engineered it by having enough practice. It could get the pattern associated with which part of the brain fires at which point when you’re going to do the letter L or think about the word long or whatever it is.

[00:51:59] Cameron: Yeah, and I imagine it works in a similar way to reinforcement learning with LLMs, right?

[00:52:04] Cameron: So, it says, what do you think these thoughts are? What do you, what do you think the sentence was that these thoughts were trying to produce? No, no, no, no. That’s wrong, wrong, wrong, wrong. Yes. That’s right.

[00:52:15] Steve: Yeah. Got it. And then it

[00:52:16] Cameron: goes, oh, okay. So that brain signal is fart. And that brain signal is whatever, two fingers.
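The feedback loop Cameron describes can be sketched as a toy decoder. To be clear, this is an illustrative assumption, not Meta’s actual pipeline (which trains deep networks over hundreds of MEG sensor channels): here, simulated signal vectors for a tiny character set are averaged into one template per character, and decoding just picks the nearest template.

```python
# Toy sketch of signal-to-character decoding. The character set, the
# simulated signals, and the nearest-centroid approach are all
# illustrative assumptions, not Meta's method.
import random

random.seed(0)
CHARS = ["f", "a", "r", "t"]

def signal_for(char, noise=0.3):
    """Simulate a 'brain signal' vector: one strong channel per character plus noise."""
    base = [1.0 if CHARS[i] == char else 0.0 for i in range(len(CHARS))]
    return [x + random.gauss(0, noise) for x in base]

# "Training" phase: collect labelled (signal, character) pairs while the
# participant types, then average them into a centroid (template) per character.
centroids = {}
for char in CHARS:
    samples = [signal_for(char) for _ in range(200)]
    centroids[char] = [sum(col) / len(samples) for col in zip(*samples)]

def decode(signal):
    """Predict the character whose centroid is nearest the incoming signal."""
    def dist(c):
        return sum((s - m) ** 2 for s, m in zip(signal, centroids[c]))
    return min(CHARS, key=dist)

decoded = "".join(decode(signal_for(c)) for c in "fart")
print(decoded)
```

The real system replaces the hand-built centroids with a learned model and the four clean channels with noisy whole-head recordings, but the principle is the same: enough labelled examples let the model associate each firing pattern with the character being typed.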

[00:52:24] Cameron: Um, so all minutes, all minutes, all minutes. Okay. Uh, that’s my last story. You got a story you want to finish with then?

[00:52:33] Steve: No, I think we’re going to do that next time. Next time we’re going to tap into. Humanoid robots, they’re getting next level where a lot of them now have bones and liquid to move them. I think that’s going to be a really big shift.

[00:52:49] Steve: So I think we, we talk about humanoids. Humanoids are, I think, becoming a really big thing. And I saw one recently that sells for as little as $67,000, and I really was tempted to buy it to get it to mow the lawns. I mean, I just, I could send it out 24 hours a day mowing lawns for me.

[00:53:06] Cameron: I know how that plays out, because I’ve seen Battlestar, and all of this will happen again, as they say.

[00:53:16] Cameron: Alright, so it’s a big, there’s a lot of stuff happening out there, you know, pay attention folks, and think about the implications, because that’s, you know, this stuff is all gonna happen faster than you think. Alright, I’ll let you go. Thank you buddy, good to chat, have a good week.