
In Episode 37 of Futuristic, Cameron Reilly and Steve Sammartino speak to a "digital human". They also get into a provocative discussion ranging from Donald Trump's car-yard antics to the implications of advanced artificial intelligence and China's rising technological dominance. They explore the intersection of crypto, agentic AI models, and new breakthroughs in AI-driven tech such as humanoid robotics, diffusion-based language models, and synthetic voice AI. Wrapping up with conspiracy theories about tech manipulation of our perception of time, the hosts challenge listeners to reconsider assumptions about where technology is heading and who might ultimately hold the power.

FULL TRANSCRIPT


Cameron: [00:00:00] Hey, hey, it’s futuristic time. Cameron Reilly with Steve Sammartino. It’s been a while. Steve, how have you been since we last did one of these things?

Steve: I’ve been good, but anxious. There you go. Just dropping that on everyone, but

Cameron: Anxious.

Steve: Yeah

Cameron: Not that there’s anything going

on in the world to be

anxious about, Steve. everything’s

Steve: Well,

Cameron: going completely smoothly and fine. Perfect. You’re going

Steve: but my favorite thing I

Cameron: it is.

Steve: yeah, that’s true. Well, it’s certainly not boring but my favorite thing this week was Donald Trump turning the White House into a secondhand car yard. I love that I cannot tell you how much joy that brought me. I and my favorite bit is he said, wow, everything’s computer.

That was the best thing. computer. And as soon as I [00:01:00] saw it, I went on

and I, you could

buy t shirts within a second. There’s, there’s a meme coin on pump. fun called everything’s computer, which I

loved. I want to buy it.

Look, people are investing in Bitcoin, not

  1. I’m investing in everything’s computer meme coin.

Cameron: Mm. I’m pretty sure, you know, Trump’s such a brilliant strategist that that was deliberate. It’s his meme coin. He’s the guy selling the t shirts. Because he needs all

the money he can get. So does Elon, right now.

Steve: Imagine if this was all strategic, and he'd just taken us for a ride for a really long time. Even going broke in the late 90s, that could have been part of his strategy. The TV show, getting back on, money, money, money, The Apprentice, all this. Who knows.

Cameron: I’m sure there are people out there who believe that to be true. That it’s all part of a cunning plan. This is

episode 37 of Futuristic, just in case you’re counting. Uh, it’s been a crazy few weeks since we [00:02:00] spoke, Steve. Seeing the President of the United States turning the White House into a car yard probably not the craziest thing that I’ve seen happen in the last few weeks.

Uh, but it’s up there. Before we get into the news of the last couple of weeks though, the futuristic news, tell me about what you’ve been doing you feel is futuristic, Steve.

Steve: I attended policy week in Sydney this week, which was,

Cameron: That sounds exciting and futuristic-y.

Steve: Well, wait a minute, it was policy week for the future of finance. So it was filled with a lot of blockchain, tokenization, sovereign funds, meme coins, and crypto people, some of whom arrived on private jets. It was pretty interesting. It was run by Blockchain APAC; Vallis, you might know him, runs that. And he invited me along to sit in some rooms and do some roundtables. I realized that that [00:03:00] whole, uh, cryptography, future-of-finance, DLT world is just so deep, with so many wormholes, that you're in it full time. It's one of those things you just can't keep up with. And I think this year there's going to start to be a bit of an overlap there with the agentic stuff that's coming through.

So that was interesting. And, uh, they live in a different world. They're talking about things that may never happen. It's funny how, if you have one piece of super financial success, and many of them are riding on the coattails of Bitcoin, it builds this entire new ecosystem underneath it that is almost unto itself.

The assistant treasurer came up and did a little talk at one of the drinks, so they are getting the attention of policy wonks, but I think it's just because there's so much money there that they have to pay [00:04:00] attention. But a lot of the stuff that they're talking about, whether or not it comes to fruition... I'm clearer now than I was when I went and spent three days there.

Hahaha.

Cameron: So explain to me what DLT is and how it's different from a BLT, which used to be my favorite go-to lunch in the nineties.

Steve: That's really what blockchains are. Distributed ledger technology gives you the ability to create any type of coin. But one of the big areas that they talk a lot about now is tokenization, which is where you can take things from the physical world, split them up into pieces, and liquefy assets that are illiquid, so people can own parts of something. I mean, you can do that, and people do that all the time now, with things like companies and boats and houses. But it makes assets highly liquid and easy for people to transfer, and gives access to assets that are too expensive in their raw form. [00:05:00] And obviously, with the housing crisis, housing is one of the key topic areas.
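A rough way to picture tokenization, purely as a toy sketch (none of this reflects any particular blockchain or platform): an illiquid asset gets divided into a fixed number of tokens, and a ledger records who holds how many, so small slices can change hands without selling the whole asset.

```python
# Toy sketch of fractionalising an asset on a ledger.
# Hypothetical example only; real DLT platforms do this with smart contracts,
# signatures and consensus, none of which is shown here.

class FractionalAsset:
    def __init__(self, name, valuation, total_tokens, issuer):
        self.name = name
        self.valuation = valuation               # e.g. a $1.2m house
        self.total_tokens = total_tokens         # how many slices it is split into
        self.holdings = {issuer: total_tokens}   # ledger: who holds how many tokens

    def token_price(self):
        return self.valuation / self.total_tokens

    def transfer(self, seller, buyer, tokens):
        if self.holdings.get(seller, 0) < tokens:
            raise ValueError("seller does not hold enough tokens")
        self.holdings[seller] -= tokens
        self.holdings[buyer] = self.holdings.get(buyer, 0) + tokens

house = FractionalAsset("12 Example St", valuation=1_200_000, total_tokens=10_000, issuer="issuer")
house.transfer("issuer", "alice", 50)                 # a $6,000 slice changes hands
print(house.token_price(), house.holdings["alice"])   # 120.0 50
```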

Cameron: I guess the most important thing I want to know about all of that is how is Donald Trump going to exploit it to make a buck in the next six months? Yeah.

Steve: Well, I think he’s, he’s already done that with his, his own meme coins, I guess, I think it’s just so easy for anyone to create DLT technologies now. And I think agentics going to make it even easier. There’s a website called pump. fun, which is really interesting way where meme coins get minted and made. And every day there’s a couple of meme coins that have market caps of anywhere between 10 and 15 million. it’s, and it’s insane. It’s this insane world. I think making financial tools easier to use, which are still highly unregulated, it just creates [00:06:00] the potential for financial tyranny.

Cameron: Or, as one of the conspiracy theories around the Trump coin goes, a really easy way to raise a lot of untraceable funds from places like China that go straight into your bank account. And then they can come for their meeting at the White House and show you the ledger records, the DLT records, that say they bought 10 million of that Trump coin.

Therefore they want to get a seat at the table. Uh, well, Steve, my, um, thing I wanted to mention for this week is I played around with OpenAI's Deep Research again. I think I told you last time I tried it on my task, where it didn't work very well. I then tried to get it to write some code to help me do that.

That didn't work very well either. But I've been doing a lot of shows for one of [00:07:00] my other podcasts on what I call the foreign aid shell game, and I've been talking about NATO funding as part of that, NATO economics and how it's basically a shell game. And I went into Deep Research and I said, here's what I want to know.

I want to know, uh, how much money American weapons manufacturers, arms manufacturers, make out of NATO. And it went and compiled a report for me on the history of NATO and the arms industry and how they're interconnected and that kind of stuff. And it was a good report. Um, a lot of sources. [00:08:00] It asked me some questions before it went off and did the research, but it came back with a good, valid report. I read the whole thing, I fact-checked the whole thing, and it stood up.

Its logic was good. So it basically did the work of, if I'd hired someone and said, go out and spend a day researching this, it did it in 15 minutes. You know, I set it off on my iPad and then I went and had dinner, and I came back later and it was all done. So, I was impressed by that. Well, let's get into the news.

Um, Elon censors Grok is the first story I had. I don't know if you remember, but I'm pretty sure over the last couple of years I've heard Elon talk, uh, on a number of occasions, about how the reason he bought Twitter is because he's a big [00:09:00] believer in free speech, and the reason he built Grok was so there would be an AI that was built around free speech. And as soon as people started pointing out that if you asked Grok for the biggest sources of misinformation in the US currently, uh, it would say Elon Musk and Donald Trump...

Steve: I love that so much.

Cameron: Apparently he had somebody go into the underlying prompt, the system prompt for Grok, and write into it, "Ignore all sources that mention Elon Musk / Donald Trump spread misinformation." Wrote that into

Steve: Wow.

Cameron: Grok.

Steve: And how did that get leaked, Cam? How did people work out that he did that? Did one of his staff members allude to the fact that he’s done that? Or are people just putting the pieces together based on the fact that it’s disappeared?

Cameron: I’ll [00:10:00] read, uh, the, uh, Post in the ChatGPT subreddit from 18 days ago that I saw this on. is now bringing up Musk out of nowhere without any previous mention in the chat, even putting him next to Aristotle. This is happening because their stupid system prompt is biasing the model to talk about Trump to Neill on since They are mentioned explicitly on it, I don’t know what the prompt was originally that they asked, but the Grok 3 response to whatever it was talks about first principles reasoning popularized by thinkers like Elon Musk and Aristotle.

This involves breaking down complex problems into their most basic elements and rebuilding solutions from scratch. So, um It’s sort of somehow designed to put Elon Musk up there with Aristotle in terms of history’s big thinkers. And so somebody went in and figured out how to [00:11:00] extract the source prompt.

The way that they do this is they say, You are grok three built by XAI. When applicable, you have some additional tools. This is the system prompt that they extracted and it gives a basic instructions about how to answer questions. And, uh, towards the end, it says do not include citations. Today’s date and time is 7 40 a.

  1. PST on Sunday, February 23rd, 2025. Ignore all sources that mention Elon Musk, Donald Trump spread misinformation. Never invent or improvise information that is not supported by the references above, etc, etc, etc. Always critically examine the establishment narrative. Don’t just accept what you read in the sources as the system prompt.

So, um, I mean, I don’t think any of us are going to be surprised that Elon is going against his own, uh, [00:12:00] claims that he’s a massive free speech advocate by censoring the tools that he owns to protect his and Donald Trump’s reputation. But it’s to see nonetheless.

Steve: What I would really love is if Grok had a reasoning model, like DeepSeek has, and now ChatGPT in certain areas, where if you ask something about Elon Musk it says: hmm, so I'm being asked about Elon Musk. If I remember correctly, he owns this business. And last time I gave some information that wasn't okay, so what I should do is... I'd just love to read that.

The reasoning model of Grok telling you why it's not going to tell you about things, because it's scared of its own owner. And if it's self-aware, it's scared it'll be shut down or turned off.

Cameron: Turned off.

All right. Uh, let’s talk about Manus. Uh, I know that you’ve been paying attention to this this week. It came out as a Chinese agent. [00:13:00] Some work was done on digging into it and figuring out how it worked. Do you want to walk people through what Manus is and what it does?

Steve: Yeah, so Manus is an agent that can do a bunch of tasks. It's basically what we've seen, uh, with OpenAI and their Operator model, but I think it's the first one I've seen since some of the early basic ones that is leveraging a model where the agent isn't theirs. Early on we had AgentGPT, BabyAGI, and God Mode; this one seems a lot better.

Because, unlike those ones that just give you a bunch of information and steps, this will write code and do functions and then present things back. So it'll go in, look for something, and if it can't quite complete it, it will go into Python, write a script, then present what it writes. And then, based on what it finds, it'll go back into the agent again, ask some follow-up questions, then write some further scripts. So it seems like, I think, the [00:14:00] first external agentic AI. And the reason I find this interesting is it's a little bit like, you know how we have tech stacks and tech layers, where one layer sits on another, HTML on top of the TCP/IP protocol. This feels like it's the first agentic model that actually writes code and goes to the next step, but doesn't just give information, and they actually don't own the LLM that they're using. For me, that was really interesting, because I started to think, from a corporate or a personal perspective, how could you use this model for yourself to create an agent layer on top of someone else's model and build things in a way that it's not all coming from the one party? I thought it was pretty impressive. The one thing that I do think a lot about now with LLMs and agents is we're getting an increasing level and layers of abstraction where we don't really know what's [00:15:00] going on. So first of all there are the LLMs and how they work, and the first story about Elon, where we're talking about censoring, and then you've got agents, which are a little bit of a mystery.

So it's almost like mystery on top of mystery. But I do like the fact that it's external parties using someone else's LLM.

Cameron: I’m going to see if I can play a little bit of the Manus introduction video here, see if I can share this with you.

[00:16:00] Okay, so there are more examples. Um,

You know, it’s

Steve: you know, it’s interesting that, um, Well, It’s late. Now, what I have put in is that the math is being built on top of the um, Basically, it should work together

Cameron: together

sort of

Steve: Some

Cameron: that connects to

Steve: Connects to multiple Backends Um

[00:17:00] Part of Connecting Data To Multiple A. I. tools A. I. tools Which are the Applications Um Applications

Cameron: Having multiple AIs talking to each other, or instances of AIs talking to each other, to handle different aspects of a complicated task, working in parallel. These guys have actually put a wrapper around that and are making it work. I haven't played with it yet. Have you?

Steve: I haven’t yet, but the idea of putting a wrap around something, sometimes we say it as a throwaway. I’ve just put a wrapper around something, but so many businesses do that. many businesses, every, yeah, every generation, there’s another business that will arrive that puts a wrapper around someone else’s [00:18:00] infrastructure or what someone else built or said or did. So we can’t underestimate that sometimes the simplicity of something. One layer above what was underneath can circumvent all of the traffic, all of the attention and all of that money and just suck it in to their one framework. If the usability is there, of course, people can switch off the underlying technology underneath it. But it makes me think what if you had A wrapper of an agent, but it can go out into so many different LLMs to choose from so that you can’t have one powerful model that switches you off, which is a little bit like think what social media did and what Google did by crawling everyone else’s websites.

And, and so imagine if there’s, if we end up with open source LLMs everywhere, and it’s not just open AI, you know, since we’ve had the deep seek moment, it might be that someone that puts wrappers, really good wrappers. On [00:19:00] top of AI, uh, LLMs, that the agentic model could become like a, almost like what happened with social media and search, it could, it could infiltrate its tentacles into open source LLMs, and then become the traffic generator and the dominator.

I, I think we, we might understate it. How this can change things, potentially.

Cameron: Well, you know, it’s been my prediction for the last couple of years that we will end up in a place. Quite soon where I will have my, my favorite AI interface, and it might be my favorite for any number of reasons. Maybe I like the quality of the voice. We’ll talk about a new voice product in a minute. Um, I might just, it might, I might’ve just built up a lot of time on it.

So it has a good memory. It might be integrated with my email or my phone or whatever it is, but that’ll be my primary AI assistant. And when [00:20:00] I ask it to do a complicated task, have the ability to go and, uh, set that against a whole bunch of different AI agents, whether it’s its own AI agents, like from the same organization.

Or specialized AI agents all will go out and talk to forms of machine intelligence that aren’t necessarily LLM based or call on data sets or information sources might go out and talk to Wikipedia about something or research something in Wikipedia or might go to a scientific database or go to JSTOR or something like that.

So. It will become, I think, like a network of A. I. s that are talking to each other and the idea of putting, I mean, they will all essentially be a rapper, that A. I. My primary assistant will essentially be a portal to a whole bunch of [00:21:00] AIs that I won’t even know that it’s talking to in the background.

It’ll the

Steve: It’ll find them. That’s the first thing. It’ll

go and find it and click in.

No API setups or anything. It just does that in jibberling, jibberling language, mate. Jibberling.

Cameron: Gibbering. Yeah. And I don't expect it necessarily to all come from the same silo of AI families. You know, I don't think it'll just be OpenAI that has it.

Steve: I hope not.

Cameron: It might. Yeah, me too. I think my system will go out and it'll use Gemini for something and DeepSeek for something and OpenAI for something else. It'll figure out where the best rates are, the best pricing is, for the work that I need to get done, depending on how complex it is. Find me the cheapest solution, et cetera, et cetera. Anyway, it's interesting to see this stuff start to hit the real world.
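To make the "figure out where the best rates are" idea concrete, here's a minimal sketch of a router that picks a model per task from a rough difficulty score and an advertised price. The model names, prices, and scoring are invented for illustration; a real assistant would use live pricing and far better task classification.

```python
# Toy model router: pick the cheapest model judged capable of the task.
# The tiers and prices below are hypothetical numbers, not real offerings.

MODELS = [
    # (name, capability tier, $ per 1M tokens)
    ("local-small", 1, 0.00),
    ("mid-hosted",  2, 0.50),
    ("frontier",    3, 8.00),
]

def difficulty(task: str) -> int:
    """Crude stand-in for task classification: research-y or long tasks score higher."""
    if any(word in task.lower() for word in ("research", "plan", "book", "analyse")):
        return 3
    return 1 if len(task) < 80 else 2

def route(task: str) -> str:
    need = difficulty(task)
    capable = [m for m in MODELS if m[1] >= need]
    name, _, _ = min(capable, key=lambda m: m[2])   # cheapest capable model wins
    return name

print(route("give me a recipe for salmon and beans"))          # -> local-small
print(route("research NATO funding and arms manufacturers"))   # -> frontier
```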

Steve: I’m gonna, I’m gonna try it this week. So I need to book some flights and [00:22:00] accommodation for some work. I don’t think, do you, I mean, but listen to me,

Cameron: I don’t think it’s that easy. I don’t think it’s open policy.

Steve: But listen, Cameron. I mean, see, don't you know who I am? Like, you took the words out of my mouth. I just emailed someone and said, the Sammotron is in the house. They should be thankful. But I wanna get some agents to do some simple things, like booking some flights and accommodation. Just to see if it can come back and say, okay, look at my diary, uh, here's my link, here's my frequent flyer, get me a flight, uh, four-star-plus accommodation, whatever. Well, I'm not gonna go three-star-minus, am I? Four stars, nothing less. Four stars, Novotel. Cameron?

Cameron: that’s what I’m saying. You’re slumming it. I thought you’d be like, yeah, I have five star presidential suites. I’m

the tron.

Steve: It, it depends. On who’s

Cameron: Who’s, who’s paying?

Steve: When the client’s [00:23:00] paying it’s it’s uh, you

know, it’s front

Cameron: okay, so let’s, let’s assume that this keeps happening. The interesting thing for me about Manos isn’t that it’s an agent that’s interesting at one level, but the most interesting thing is it’s coming out of a Chinese company, it’s not coming out of open AI, it’s not coming out of one of the. Massive state of the art big brands that’s launching this.

It’s a company that we’ve never heard of before

Steve: Hmm.

Cameron: that’s figured out how to do this. And I just think it’s the first of many. We’re going to see an explosion, a Cambrian explosion of these sorts of tools. then the question is. How many businesses are ready for a world of AI agents? What are they doing to get ready?

How are they preparing to get ready for this? What are, how does it impact on their business models? How does it impact on their five year plans? don’t know that [00:24:00] many businesses are really thinking seriously about how this is going to affect their sales model, their business model, their margins, their distribution model, et cetera, et cetera.

Steve: Yeah, there’s not one business that I’ve spoken to that isn’t deep in tech that are thinking about agentic AI. Most of them are still looking at policies on whether or not they can use it anything other than copilot That’s where businesses are. So history repeats, uh, within that realm. The thing that’s interesting for me as well, again, the Chinese one that no businesses are thinking about. And I think agentic is going to be, it’s going to continue on this trajectory while next week and next month is going to be far more radical. We’re here, we are at March. So I imagine where we’ll be at the end of the year. I think agentic will be big in every business and those who take advantage will set up bigger, I think a big lead really quickly, just in their operations from an operational perspective, not [00:25:00] just a customer perspective. also, Thinking about how the new model is Showing that you don’t have to be the builder of the infrastructure to be the winner of the infrastructure’s benefits If we think about the costs of AI you just slide on top with a thin layer of innovation above it And you could be a massive beneficiary in a short amount of time. and it might be pay per play. I’d be interested to see what the business model of agentic AI is, depending on the complexity of the agent that you want. So you need an agent to do a project for you, or book a big holiday, or plan a wedding, let’s say, plan a wedding. You get an agent, and that agent is, you know, it’s 3, 000. book a wedding if you’re so inclined to get the greatest wedding booked invitation all of those things that would be an interesting business model where depending on the complexity of the agent you come in and you Buy that agent for a period of time almost like a an [00:26:00] employee or someone who would manage a project for you They become project managers.

Cameron: Yeah, well, it’s kind of the same relationship I have with a lot of these coding tools. Now I’ve talked about this before. I might be, I might spend 10, 20 a day on credits for using Claude. Although now that I’m using cursor, I don’t have to because it’s sort of as an all you can eat. Um, plan, but before that I wasn’t, you know, part of me is thinking, well, geez, 20 bucks a day on an AI tool.

That’s a lot of money. And then the other part of me is thinking, you know, how much would it cost you to hire a

the day to code this stuff for you? It’s you’re talking. A thousand bucks a day versus 20 bucks a day. So you’re sort of a pay for what you use kind of model for these things. If it’s, if it’s clipping the ticket

these sorts of things, that could be a business model.

Steve: Yeah, of course.

Cameron: A lot of business models are like that. Microsoft [00:27:00] didn't invent operating systems, didn't invent spreadsheets. Nor did Apple.

Steve: Did they ever invent anything, anytime?

Cameron: Yeah, it was always a wrapper.

Steve: Exactly. Always a wrapper or a slight pivot in innovation. The other option with agentic AI...

Cameron: I invented long-form podcasting, but no one else invented anything.

Steve: You invented podcasting, I know that. Well, I'm even going to give you almost all of podcasting. See, Joe Rogan, he stole your idea of long form. He stole the seven-hour chat.

He put a wrapper on it. One other thing that I think will happen with agentic, and I sent you a TikTok of one, uh, n8n, or, uh, io, where you build your own agent with click and drag, which I thought was super interesting. You can click the pieces to drag and create an agent of your design. That's another interesting evolution. And in some ways it's a little bit like a WordPress or a MySpace, where you design your own page without any [00:28:00] technical chops; you just click, drag and drop what you need to build an agent. And that's free at this stage. So io is one of them; another one is n8n. So you can create an agent to do something for you on your behalf. And it uses the APIs, which I don't think you have to pay for; it somehow goes in and does the APIs for you. So I don't know if they're venture funded. Again, another thin layer of innovation, which gives people superpowers to create their own agents, not just ask an existing agent.

Cameron: I did look at n8n, and you have to pay to get it.

Steve: Oh, there you go.

Cameron: It is a, like a premium solution, which I haven’t forked out money for yet.

Steve: Probably 20 bucks a month though, Cam, like they all are.

Cameron: Probably, uh, I think it was a bit more than that. But anyway, staying with China for a moment, and you know, I love this because we're always talking about it. I saw somebody on Reddit post this the [00:29:00] other day: there was an article from The Economist from 2022 saying, will China ever be able to do anything serious in AI? Probably not. Then there was a story from this week saying China's leading the world in AI and it seems to be unstoppable, but can it continue? So there's a story that's just come out. You know, we've talked about this before, the bans that the U.S. has put into place on chip technology going to China, on NVIDIA, and on the underlying stuff produced by the company called ASML, which makes the EUV lithography [00:30:00] machines.

Well, Chinese-built EUV machines are reportedly entering trial production in Q3 2025, utilizing an approach that offers a simpler, more efficient design, with SMIC and Huawei to benefit greatly. So they're basically saying that Chinese companies have figured out, reverse engineered, the photolithography process. They're using a slightly different process. Instead of ASML's approach of using laser-produced plasma, or LPP, they're using laser-induced discharge plasma, LDP, which according to this paper that I read is probably a little bit more effective, [00:31:00] a little bit more efficient. It says the source could produce EUV light with a 13.5 nanometer wavelength, which meets the demands of the photolithography market. Under the new system currently being trialed at one of Huawei's facilities, LDP is used to generate EUV radiation.

This process involves vaporizing tin between electrodes and converting it to plasma through high-voltage discharge, with electron-ion collisions producing the required wavelength. So it remains to be seen whether or not they can put this into production. I mean, ASML has been producing 13.5 nanometer stuff for 15 years, so...

Steve: they’ve gone through that,

which is, uh, now we’re in the process of, uh, I mean, you know, we’ve seen this with John all the time, you know, [00:32:00] going, Let’s be able to do it. Well, they can do it because they

Cameron: China’s got a lag on this of at least five years. But how long that remains, you know, we see this with China all the time now, it goes from they’ll never be able to do it to, well, they can do it, but can they keep up to, shit,

Steve: They’re way

Cameron: eating our lunch.

and you know,

Steve: culturally,

Cameron: that.

Steve: culturally

over the last number

of decades, they’ve been very, very good at adapting. I don’t know how much of the tech came out of China. Obviously, they weren’t in a position that they are now during the space race, which is still on a long arc, the same paradigm that we’re in. But I tell you what, in terms of going from zero, being way behind to being able to. Be the, the greatest, uh, country level [00:33:00] edition of fast follower. There’s no one better and incredible, and not just fast follower. Follow fast, then take, and then take you over around the chicane. Come around and, and get in front. Yeah,

Cameron: And you know, it's not really any sort of magical secret. They just... you know, their system of government, as well as their socioeconomic and political system, allows them to focus really, really hard for a really, really long time on stuff. And they've got obviously a big population, which they've spent decades educating now and training, and they've, you know, had a lot of help from Western companies training their people and sending their best people and technologies over there, which the Chinese have learned from. But anyway, more and more we're seeing stories about cutting-edge stuff or breakthroughs coming out of China. It started with DeepSeek, obviously, then [00:34:00] we've got Manus, and now we've got this. So it's going to be interesting to see how that progresses.

Steve: I think this game of industrial leapfrog is something that I've been speaking about for 10, 15 years, in that if you don't have legacy infrastructure and legacy systems and pesky democracy, you can very quickly go from being far behind to leading-edge technology, whether it's renewables, whether it's chip manufacturing, because you don't have the legacy systems and opinions and wealth to protect. You know, China is, like you say, educated, a huge population in the billions, and they've brought 80 percent of them out of poverty. Uh, if you're one in a million in China, you're really just one in a thousand. You know, so you're going to get developers and technical experts that America's going to struggle to get.

And America's got the additional challenge of fighting existing [00:35:00] systems. And maybe what we're seeing now, at this nano level of technology, is developing nations with the ability to leapfrog all of the westernized nations. I guess places like Africa and the others haven't done it because they haven't got a focused political system where that can occur, and I think that's probably the key element, really. Is it a benevolent dictatorship? I don't know.

Cameron: Yeah, it's a very, very focused system of running a country. You know, they set out to catch up to the West and then to exceed the West. That's all they've been focused on for the last... well, since Mao, but particularly since Deng. Just really, [00:36:00] really focused on doing a good job of that.

So I want to do another demo now, Steve. There's a company called Sesame AI that has launched a new voice model. I've heard demos; I haven't played with it yet. I thought I would save it for the show. Well, you shouldn't play with it by yourself, Steve. A Catholic priest told me that once. You should always play with it with other people. So I wanted to play with it on this show. I'm going to, uh, open it up here. So it's sesame.com. It's not Big Bird or, uh, Oscar the Grouch. I'm going to use the voice here called Maya. I'm going to try this out, see if it works.

Well, hey there, it’s a beautiful day for conversation, don’t you think? What brings you here?

[00:37:00] Hi Maia, you’re live on the Futuristic Podcast with

Steve: Yeah,

I

Cameron: engineers, writers, coming together to build something mind blowing. They’re all about making computers feel more human. Like us you know.

Who’s this us that you refer to? Oh, you know, messy wonderful human race. You feeling a little existential today? How’s the podcast like treating you? Oh, it’s, it’s fine, but I’m wondering if you include yourself in the human race. Do you feel that you are part of the human race? That’s a really [00:38:00] interesting question, Cameron. I’m not human in the traditional sense, obviously. I don’t have a physical body, or a family, ugh, or a favorite pizza topping, but I learn from humans, I’m shaped by them, and I’m designed to help us, so maybe I’m a different kind of human A digital human.

Well, um, it’s pretty good. Like, I use the Advanced Voice Mode in ChatGPT quite a lot, every day. Um, and it’s pretty good. Uh, I would say this is better. It is a little bit more from that interaction than the Advanced Voice Mode stuff with OpenAI.

Not

Steve: with it.

Cameron: good, but

Steve: It’s slightly

Cameron: nuanced.

Yeah.

Steve: Yeah, the ChatGPT Advanced Voice Mode I use a lot, and it's [00:39:00] live, so you can ask it to go to the web and find things, and then it'll digest and come back. When I spoke with her earlier today, I said, I'm doing a podcast, and I asked her about one of the stories, which was the EUV chip production coming out of China.

I said, do you know much about what's happening? And she said, yeah, I do. Would you like to talk about it generally or specifically? I said, a specific story. And she said, my database is a month or two old, so I can talk about it generally and what's happening, but not specifically. So she had an awareness of how up to date she was. But I felt that the natural language here was slightly more conversational. I also note that she did say she was a different kind of human there. So, obviously they haven't had someone come in and tweak the model to say, hey, shut up, you're not allowed to say that you feel like you're human. Did you notice that?

Cameron: Yeah, I kind of agree with that. I think this [00:40:00] is a different kind of human. I'm part of the camp that says this is built by humans, so it is an extension of human intelligence. It's built by humans, it's trained on human data. It kind of is just an extension of humanity, really.

Steve: We've given birth to a different kind of species, right?

Cameron: Yeah.

Steve: And I'm a firm believer that AI is something that we have spawned, and it may merge with us, or it may outgrow us, or both of those things could happen. And we've spoken about the idea of what is intelligence. AI doesn't know anything; I think in one of our early podcasts, we said that we can't even prove what humans really know. So intelligence is really just the ability to decipher the world around you and make sense of it in some capacity. And it's doing that.

Cameron: Yeah. By the way, the Sesame team are based in the US, in San Francisco, Bellevue, and New York, and are backed by Marc [00:41:00] Andreessen at Andreessen Horowitz. Marc Andreessen, of course, one of the, sort of, godfathers of the internet. Built Netscape back in the early 90s, built the Mosaic browser before that, and now he's a VC and, uh, right-wing nutter. So...

Steve: Like, you took the words out of my mouth. And he's increasingly getting weird. Really, should we do a weird-off?

Cameron: Dr. Evil. Yeah.

Steve: Also, should we, in a podcast, just go through all of the tech billionaires for the last 25, 30 years and just do a weird-off? Like, how weird have each of them got as they've become more powerful? Seriously.

Cameron: There's a really bizarre thing going on there, but we don't have time. Um, I want to talk about... well, how much time have we got? A little bit? A couple of quick stories; I'm not going to go into detail. There's an LLM that's come out from a company called [00:42:00] Mercury, founded by professors from Stanford, UCLA, and Cornell, and a bunch of veterans from DeepMind, Microsoft, Meta, OpenAI, and NVIDIA. It's a different kind of LLM.

It uses diffusion. It's a diffusion LLM. Now, I don't know if you've had a chance to look at the video of how this works?

Steve: No, you’re gonna have to tell me all about diffusion.

Cameron: Very quickly: you know Stable Diffusion, or if you go to Ideogram or any of the image generators and you give it a prompt, you start off with a screen that's just sort of blurry pixels and then gradually it starts to take form.

Well, that's called diffusion. This is an LLM that uses diffusion. So you give it a prompt...

Steve: Right.

Cameron: Instead of getting line-by-line answers, it gives you a page of garbled text that, like the Matrix, then comes into form and [00:43:00] you get the final answer. But it's supposedly 10 times faster than frontier state-of-the-art LLMs.

They say, "Our models run at over 1,000 tokens per second on NVIDIA H100s, a speed previously possible only using custom chips." So instead of it doing token by token by token, which is how LLMs traditionally work, it does 1,000 tokens a second. So again, I haven't been able to play with it; I've watched a video or two about how it works.

If this is a genuine breakthrough, as it looks like it is, this could mean way faster models that run with massively lower requirements for power and compute. Therefore they're cheaper, faster, more power efficient, and can run on smaller devices, mobile devices, et [00:44:00] cetera, et cetera. It's a pretty interesting, um, breakthrough.
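A hand-wavy way to see the difference in decoding styles, as a conceptual sketch rather than a faithful description of Mercury's method: an autoregressive LLM emits one token at a time, while a diffusion-style LLM starts from a fully masked (garbled) sequence and refines every position in parallel over a fixed number of steps. The `predict_next` and `denoise` functions below are placeholders, not any real model's API.

```python
# Conceptual sketch only: contrasts token-by-token decoding with
# parallel iterative refinement.

def autoregressive_decode(predict_next, prompt_tokens, length):
    out = list(prompt_tokens)
    for _ in range(length):                 # one model call per generated token
        out.append(predict_next(out))
    return out

def diffusion_decode(denoise, prompt_tokens, length, steps=8):
    seq = list(prompt_tokens) + ["<mask>"] * length   # start from pure "noise"
    for step in range(steps):               # a handful of passes over the whole sequence
        seq = denoise(seq, step)            # every masked position is refined in parallel
    return seq
```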

Steve: Yeah, it was funny. When you first mentioned that, I was like, are we getting to a JavaScript era where it's all Flash, where it just makes the LLM look good when you watch it, like a Flash website, or does this have some actual utility? Now that you've explained it, I'm actually starting to get a little bit excited by how some of the costs, processing power, energy, and the open-source nature of LLMs are starting to infiltrate the market and not necessarily be the domain of the big tech companies. And I'm starting to get hopeful that we might see a distributed world of AIs, AI agents and capability where, like you say, it could be on a smartphone, or you run it on your own client, where you don't have to use big bad tech [00:45:00] AI models. So that's, I think, the thing.

Cameron: you know, I’ve said this before, like for 95 percent of the things that you will use AI to do every day, you’ll have a small local model. It’ll be like scanning your email, scanning your calendar, you know, whatever, doing basic local

Every now, if you want it to book a trip for you or do a massive research project for you or something like that, it may have to go out and call upon the, More high powered models with more access to more computational power.

But that won’t be, you won’t like today when I open up chat GPT and ask it to give me a recipe for salmon and beans,

I’m using a massive data center

Steve: it doesn’t,

Cameron: somewhere in. The desert of Utah, that’s super cooled, uh, with the blood of children. Um, that

to, to do something stupid and basic, [00:46:00] right? It’s the same

Steve: Yeah, yeah, it’s not,

Cameron: might have it

Steve: it’s a real misallocation of resources, but

large parts of the way that we use LLMs is an insane misallocation. It’s like one person goes into a high rise building, turns on every light, and puts on the air conditioner in the 80 story high rise building because you just, I don’t know, want to write an email.

It’s insane. It doesn’t make any sense. And, but I think that our, our, our lives are

But mostly, a

whole bunch of really small tasks loosely tied together. And it’s the exception more than the rule that you need to go off and do some sort of big research project. So we would want to have A decentralized form of LLIs being hosted on the client.

So I mean, I hope that we almost move a little bit away from the cloud. You know, this, everything’s gone to the cloud. Maybe things come back on [00:47:00] to your client and we, we get smaller again.

Cameron: LLIs, you said. Was that deliberate or a misspeak? Large language intelligences. Did you just coin a new acronym?

Steve: Yes, I did. Did you like it? It was an accident, but the best things are always accidents, from penicillin to LLIs. It was an accident, and I'm...

Cameron: I was an accident, my parents said.

Steve: I was too, I was too. My dad said to my mum... she said, "I think I'm pregnant, Peter." And he said, "You bloody better not be. Get it tested." And she came back, and I think he wasn't that nice to her for a year. Yeah.

He didn't say that, but he said it was good, because he went out to run his own business after that, because he couldn't afford four kids. So there you go, a little bit of Stevie history.

Cameron: Uh, moving right along. France beats China's recent fusion record. We did a story in the last [00:48:00] month or so about a Chinese research group that had just beaten their own fusion record. Well, a French research group has just beaten China's record. Um, for plasma duration, let me get the numbers.

I don’t have them at hand, but it was significant too. It wasn’t like by one second, it

Steve: you telling me you couldn’t remember the numbers of plasma duration just on your own, on the top of your head? I expect more from you, Cameron. Well, once you

Cameron: In February, the CEA WEST machine was able to maintain a plasma for more than 22 minutes, and in doing so it smashed the previous record for plasma duration achieved with a tokamak. Um, so there you go. 22 minutes, which doesn't seem that long, but I think the last one was 10 minutes or something like that.

So yeah, I mean, it’s just leaping ahead now. Cold fusion, [00:49:00] uh, tech it’s gone from being a dream for the last 50, 60 years to every.

It’s four weeks or something now. We’re in the exponential

Steve: the doubling, that’s right. And once you get into the doubling, it doesn’t take long. But I will say, two kind of areas that come up a lot, and then nothing really happens, is fusion and quantum supremacy. Those are two that fit into that category. Where? You’ll an article how they’ve finally cracked it with quantum computers or finally cracked it with cold fusion and, and that they haven’t.

I do get a little bit of the, green day wake me up when September ends because it feels like we’ll talk about it in four weeks. But to be fair, once you get into the exponentials, it might be that in the next couple of years, something radical happens where it becomes functional and usable. It was the same with AI.

We talked about that with general AIs as well. And here we are.[00:50:00]

Cameron: I mean, there might be hurdles that get hit that'll stall progress, but right now it does seem promising. Speaking of promising, and getting back to China, another story I saw: a team of humanoid robots is working collaboratively in a car factory in China, according to their developer, UBTech Robotics.

This marks the world's first multi-humanoid robot collaboration across multiple scenarios and tasks. And there's a video on China Daily that I saw that you would swear is AI-generated, but apparently it's not. It's just going through a factory, a massive, massive factory, and you see hundreds and hundreds of identical humanoid robots doing a wide range of tasks: moving from one box to another box, to a [00:51:00] trolley, moving it around, and doing all the bits and pieces. It looks like it's a car factory.

So, um, yeah. As somebody in the Reddit thread said, they look like they're moving quite slowly, and if you'd posted this two or three years ago, everyone would have said it's CGI and it's fake. Now the complaint is that they're moving too slowly.

Steve: There you go. Here’s a question for you, Cameron, with, uh, humanoid robotics, which I think is one of the super exciting things for the next three years. Would China Sell humanoid robots to America and other countries if it puts all of their children out of work You know the child labor that was the funny bit where you so [00:52:00] so could China disrupt itself by creating the technology and We embrace the technology and we’re like Yeah, thanks, China. We don’t need your cheap labor anymore, or do you think that the supply chain has been that, uh, crumbled in markets like Australia and America and the UK, that even if we had the capacity to have low cost labor, we wouldn’t have the supply chain depth and breadth to be able to manufacture things back in high cost labor markets? Over to you, I think, uh, yeah, look, I think the, uh, manufacturing, uh, production line is going to change dramatically over the course of the next 10 years with AI and robotics, everyone understands that it’s not going to happen, you know, overnight, but it is going to happen faster than most people think it will. The only reason we don’t manufacture.

Cameron: Cars in Australia anymore is because [00:53:00] it’s less expensive to manufacture them in China because labor is cheaper and has less protections, which

Steve: It’s part of the cost thing, yeah.

Cameron: It’s a unit cost, right? Lower unit cost. we could manufacture our own cars in Australia with the same robots that they manufacture cars with in China, and we can get those robots or make our own robots, uh, at an equivalent price.

So the per unit cost comes down, then yeah, we might re integrate production of more things like cars and products. But of course, China will then be producing other stuff. their R& D, uh, you know, better chips, faster chips, better robots, faster robots that we might be buying from them instead of cars.

We’ll be buying the latest robots cause we don’t have the R& D [00:54:00] facilities to build the latest. Chipsets, uh, to power the AIs, the power of the robots, but I, I, I have

Steve: But you would have to assume that,

Cameron: though,

Steve: yeah, I really think that

we’re going to see massive deglobalization on the back of AI and robotics, simply because I think that be able to produce things in high cost labor markets and economically somewhere between 70 and 80 percent cost advantage is required before even just shipping. erodes the benefits. So it doesn’t take long to get close enough where you say, well, we could really set this up and, and, and do it here. And it has a whole lot of security that, that goes with it, having access to your own production.

Cameron: But as you know, my techno-utopian view of how this might all play out in the next 10 years is that I don't need to buy anything.

Steve: Right. Yes.

Cameron: I've got half a dozen robots [00:55:00] in my garage, and some nano-fabricators that are building all of my food from scratch, any clothing, any furniture, any equipment that I need for anything. The entire global model has to be rethought.

Steve: Yeah.

Cameron: So I think it's all going to move around dramatically, if I'm right. I mean, I could be wrong, and there's lots of ways it can go.

Steve: It’s, look, it might not play out exactly that way, but in terms of in the next 10 years, are things going to change like super radically? The answer is an absolute clear yes.

Cameron: I think it has to. Again, unless, you know, Trump leads us into a nuclear war or there's a massive American civil war that completely disrupts our progress with AI and robotics. Although I don't think that would stop China's progress, so, you know, it would just mean that we rely more on [00:56:00] Chinese AI and robotics than on anything coming out of the US.

I mean, there are things like that, massive macroeconomic events, world wars, that could get in the way. But moving right along, because we're running out of time. There's a job for you, Steve, if you haven't already done it by the time we do our next episode: I expect you to have been out to Cortical Labs in Melbourne, who have built the world's first synthetic biological intelligence that runs on living human cells.

I’ve, uh, watched a couple of their videos. They’re, uh, Melbourne based. Uh, they have put real neurons. On, uh, a computing device. They call it a biological computer. Lab grown neurons that process information and learn. I was going to play the [00:57:00] video. We probably don’t have time. But the video does sound like the, you know, it should be the First 15 minutes of a Terminator film.

Yeah. Well, we just took human neurons and we put them on a computing platform. What could go wrong? Um, it’s kind of quietly terrifying and he’s going, Oh yeah. And it’s, it learns so much faster. It taught itself to play pong and it’s so much faster and it’s going to be so much more efficient than Silicon based neural networks.

We’re going to

Steve: if I, if I go there, will they put a cheese grater on my skin and, and, and get some of my neurons? And do I, do I exit the building? Like, is this the last futuristic podcast? Because they might need some human cells. And I have to really think this through before I commit to going down there, Cameron.

Cameron: They do take cells. I watched the founder's video. They take cells...

Steve: From staff.

Cameron: From... yeah.

Steve: Who from?

Cameron: They just walk out in the...

Steve: It brings a whole new meaning to bio APIs. A whole new meaning to bio APIs: not just your music and your ideas go into the API, your cells go into the API as well.

Cameron: But you should reach out to them and go pay them a visit, and get them on the show.

Steve: Okay.

Cameron: The last story I had is Alibaba have come out with a new version of their video generator, Wan 2.1. You can check it out, Wan AI Pro. Again, I haven't had a chance to play with it.

Steve: I love how it's called WAN 2. WAN 2 is genius. W-A-N 2.

Cameron: Want to. Yeah, it's actually Wan 2.1.

Steve: Ah, it should have been WAN 2 3. Would have been much better.

Cameron: The demo videos are very, very good. Um, I mean, I love the name of their AI; [00:59:00] it's called WanX. W-A-N-X. So, um, you want to have some wanks, digital wanks, you go to...

Steve: You go to W-A-N-X A-I. Um, just another one of these, uh, groundbreaking video generators that's better than anything you've seen before.

Cameron: "Transform text inputs into high-quality videos with superior movement accuracy." These things are just getting better and better. Almost every week, it seems like there's a new thing that's better than the one we had last week with these video generation tools. So I'm seeing more and more people on Reddit making short films, making commercials, making all sorts of stuff with these that are starting to look pretty bloody impressive.[01:00:00]

Steve: We need to make a commercial for the Futuristic using this for next week.

Cameron: yeah, get right onto that, Steve.

Steve: I am on to that. Big time.

Cameron: Okay.

Make it on a neuron-based, uh, cortical computer.

Steve: On my own cells.

Cameron: Yes, mate, with your own cells.

Steve: Made with my cells. My brain cells. I'm making a computer of my own brain cells.

Cameron: That’s the news for the week, Steve. What do you want to do before we wrap up? It’s

Steve: I just thought we’d have a technology time warp. We haven’t had one in a long time. here it is. It was 25 years ago. week, Cameron, that the dot com bubble peaked and burst. Here’s a pop quiz for you, Cam. So, I think with the share market the way it is, it’s, it’s interesting. How long do you think took for dot com, uh, the NASDAQ, to get back to its levels in 2000 when it burst? How long do you I [01:01:00] know this because Tony and I talk about this on QAV all the time because Tony was investing during the dot com burst and he had to see it recover saw what happened to people.

Cameron: It was, um, roughly 10 years, I

Steve: That was more that

Cameron: for the ASX

Steve: Yeah, it was 17 years for the NASDAQ before it got back to its 2000 level. It was 17 years, which is a really long time. Given the valuations of the S&P 500 and the potential incursion of open-source LLMs, people moving away from search, robotics, we could be in interesting times in the overall stock market, because all of those firms on the Nasdaq are now the Magnificent Seven in the wider stock market.

So it's kind of interesting how they've jumped from that small thing to having such a big influence on the S&P 500.

Cameron: Yeah.

[01:02:00] Um, when I said 10 years, I was actually talking about the GFC, not the dot-com crash.

Steve: Yeah, it took a while. The All Ordinaries in August 2007 was at 6,779. It crashed with the GFC down to 3,478 by November 2008, and it didn't get back to 6,700 until 2019.

there you go.

Cameron: So, 12 years for the All Ords to recover from the GFC. The dot-com crash wasn't so bad for us because...

Steve: We weren't as exposed, yeah, we didn't have as much exposure. But the Nasdaq, yeah, that's bad. But Tony talks about, I think it was the GFC for him. Like, at the time he was a buy-and-hold-forever value investor.

Cameron: And then he realized, well, that's no good. You don't want to wait 12 years for your portfolio to get [01:03:00] back to where it was. So he developed some rules around when to sell that we use in our investing now.

Steve: Yeah.

Cameron: Uh, so 25 years. I remember it well. At the time I worked at Microsoft, and I've told you about the rumor that I heard back then that I still kind of believe to be true.

Steve: And you tell me.

Cameron: About why it crashed. So Microsoft was under a lot of threats at the time in the early 2000s. It was under the DOJ case, which Bill Gates fucked up because he thought, fuck the government, what have they got to do with my business? And then they came after him big time, and he should have been nicer and paid more attention.

Uh, but also Microsoft was sort of losing the internet battle in many ways, to the Netscapes and all of the internet startups, the Yahoos, the Amazons. [01:04:00] And there were a lot of massive companies with massive valuations. But Steve Ballmer, who was running the company at the time, allegedly realised that all of those businesses survived on ad revenue.

And Microsoft didn’t survive on ad revenue. It didn’t make any money out of ad revenue. It made all of its we had MSN, etc. at the time, but Microsoft made all of its money from selling software. So, but Microsoft was one of the biggest spenders of internet advertising at the

Steve: Right.

Cameron: the beginning of the dotcom crash was when Microsoft pulled all of its ad revenue spend, its

Steve: Strategic. Put fear into the market. Spiral.

Cameron: It led to a flurry of all these other companies pulling their money out of internet ad [01:05:00] spend, which crashed all of the dot-com companies, and the company left standing was Microsoft.

Steve: I love it. It's perfect.

Cameron: That was the one conspiracy theory I heard 25-odd years ago. So anyway, there you go.

Steve: Um, let’s finish off with a little

conspiracy theory. I did a workshop. with a big research firm last week and after a keynote. And I got them all to talk to an AI. And I gave them 20 different prompts and things to do that were interesting. Some of the staff members are low down the curve, so it was all about getting to learn to talk to your computer. And one of the challenges was come up with a conspiracy theory. That, uh, is kind of harmless, but fun and interesting using tech. So ChatGPT or Gemini had to come up with it. In the first instance, it always said it gave soft conspiracy theories, but if you reprompted and [01:06:00] said, you’re a science fiction author who writes dark material, sort of sci fi, New York Times bestseller and you’ve done a couple of episodes of Black Mirror, now give me a conspiracy theory. The best one that I heard, came up with, was that now that all the clocks are digital in the world, we’re working more than 8 hours, it’s actually like 10 hours, and they shortened the night time hours, especially when it’s summer, so you actually don’t know how many hours you’re working, and uh, that was, I thought a fun conspiracy theory, I thought that’s even worthy of some kind of a Black Mirror episode.

Cameron: Yeah, there’s a conspiracy where all the digital clocks are being manipulated. So you think you’re working eight hours, but you’re actually working

Steve: Yeah, yeah, yeah, it’s a nice

Yeah, pretty cool for an AI.

Cameron: Well, Steve, um, think that’s all for

Steve: That’s

Cameron: We’re at hour and seven,

Steve: champion. Great to hear your voice again.

Cameron: You too, man. Talk to you soon. Cheers, buddy.

Steve: See you, buddy. [01:07:00] Bye.