Steve’s 3D house printing startup, using GPT to write podcast notes, using Descript to edit podcasts, Fox used GPT to write a play, using GPT to de-bias news articles, the launch of xAI, ChatGPT launched Code Interpreter, Google’s AI-powered notes app NotebookLM, OpenAI’s latest article hints at their timeline for AGI and ASI, Gates’ latest post about AI threats, the next phase of the Internet, and the end of cash / debate over social credits, aka whuffie.
[00:00:00] Cameron: All right, Futuristic, we’re back, episode eight, two weeks in a row, two weeks running. Steve-o, how you doing, Sammo?
[00:00:13] Steve: Mate, I’m rocking it. Actually, I feel good today. I’ve had an undulating week, but a few wins late, just a few wins, and that makes me feel good.
[00:00:24] Cameron: That’s good. I listened to your chat with Sir Bob.
[00:00:27] Cameron: I really enjoyed it. Thank you. I was disappointed that Libby Gore cut you off at the end. And the truth of
[00:00:34] Steve: it is that the best parts were the non-recorded parts, because we chatted for 10 minutes after that. When she went to an ad break, I said,
[00:00:43] Steve: do you mind if I ask you a few more questions here? I’m just interested. And that was really beautiful, because that’s when we got into the deep stuff. I sent her a message and said, do you have any of the recording? And she said no, because when it records, it goes to the ad breaks and records those.
[00:00:58] Steve: I’m like, oh, that’s disappointing. [00:01:00] Next time I do an interview, I’m going to press my own record on my phone nearby, just for posterity and some other pieces.
[00:01:08] Cameron: You should reach out to him and get him on this show as a guest. Tell him we want to recreate the chat. Tell me about your week, Steve.
[00:01:15] Cameron: What are a couple of things of note that you did emerging tech wise, futuristic wise this week?
[00:01:22] Steve: So this week, I’ve been working heavily on my latest startup, which is called Macro 3D. And we…
[00:01:29] Cameron: I want to know more about that. I’ve seen your Instagram, your TikToks, and I’ve been wanting to ask you about that for ages.
[00:01:35] Cameron: Tell me about what’s going on there, man.
[00:01:37] Steve: I need to get better at promoting it and telling people about it. At the start of this year, Tom Makrokanis and I created a startup called Makro3D, where we’re 3D printing houses. It’s largely printing concrete, and in the future, concrete composites and different materials through material science, but basically printing low-cost [00:02:00] houses as part of the solution to the housing crisis. We can print a house in half the time and at half the cost.
[00:02:05] Steve: That’s basically where we are, and it will only improve, because it’s an exponential technology. At the moment we’re trying to raise capital, a million dollars, to fund new robots. We’ve got 40 million dollars in deal flow from people who want to build houses with us right now, and we just need some funding to help us get some robots and some materials and hoppers and mixes, real basic stuff.
[00:02:30] Steve: And we’re already printing. We got our first robot in January, reconfigured it, and now we’re printing. We got a contract for 60k to build a small house as a test next week in Adelaide, so I’ll be flying out there Sunday night, doing a speech Monday, and then spending the rest of the week printing our first house.
[00:02:51] Steve: And it’s so interesting, because we found it really hard to get funding. This is what the concrete looks like. Here’s one of the prints, you can see there.
[00:02:59] Cameron: And it goes up [00:03:00] layer by layer, right? Looks like layers of plasticine.
[00:03:04] Steve: Yeah. And it’s really extraordinary technology.
[00:03:08] Steve: It’s like 1979 for PCs, and there’s a long way to go, but eventually we’re going to get complex, beautiful buildings using biomimicry design, and shapes that give tensile strength, with a multitude of materials in a single print. Eventually it ends up as molecular nanobot printing. Anyway, we’re really struggling to get funding, because everyone goes, oh, what’s your return on investment?
[00:03:31] Steve: And it’s really low compared to a SaaS business or a software pure play, because it’s physical, it’s the built environment, it’s the real world. We were in a VC meeting last week and the guy said, how much of an improvement in the cost of a house do you make, for the whole house,
[00:03:48] Steve: once you do the windows, the roof? I said, about 20%. He goes, geez, that’s not much. So you’re saying if something would cost me a hundred grand, now it’ll cost me 80? I go, yeah, but the building industry makes 1 to 3% [00:04:00] profit. So you’ve got to think this through. People just have their mindset.
[00:04:05] Steve: They forget that different categories have different ROIs. So anyway, I rang some of our suppliers and said, hey, we’re printing a house next week. Give us some money, we’ll shoot some video and film you as one of our suppliers, and we’ll do some PR and get you on TV. So we rang Bunnings and we rang Dingo, who provide all the concrete materials.
[00:04:28] Steve: And they both said, yeah, we love this. It’s interesting: you can probably get anywhere between 100 and 500K for a PR game, but a venture capitalist in Australia won’t give you that amount of money. Isn’t that ironic?
[00:04:42] Cameron: Yeah, they’re probably looking, as you say, for a different level of return, but I was interested.
[00:04:46] Cameron: I saw an article in the ABC or something this week about 3D printing of, I think, a toilet block or something in Toowoomba, or somewhere in Queensland. And they mentioned that there was only like a [00:05:00] 20% price differential between that and traditional construction, which surprised me. In my head, I was thinking it would be like 70 or 80% cheaper.
[00:05:09] Cameron: Why is that? I would imagine a lot of the cost of building a house or a toilet block is in labor, and if you’re using a robot, you don’t have the labor cost. Why aren’t the savings greater using robots?
[00:05:23] Steve: The materials, the material costs are about the same. Yeah, of course.
[00:05:28] Steve: Just saving labor.
[00:05:30] Cameron: And that’s not as big a component as I thought.
[00:05:33] Steve: It’s about 20%, again. But it’ll get faster and cheaper; it will exponentially improve. The materials will improve, the AI will improve, the robots will improve, there will be less wastage. But that’s a way off yet; that’s the pipe-dream, visionary stuff.
[00:05:48] Steve: We used to have meetings with people and they’d go, give us the IRR, the internal rate of return. We’d give it to them and they’d not be impressed, because it’s not like software’s zero-[00:06:00]cost digital duplication, where I make it once and can sell the same thing a thousand times with zero incremental cost per piece of software I sell.
[00:06:11] Steve: This is not like that. So what we’ve started to say now, when we have a meeting with an investor who wants an IRR or rates of return, is: this is for visionaries only. If you’re not a visionary, don’t waste my time. That’s what we say now. And they actually lean in and go, yeah, I’m a visionary.
[00:06:26] Steve: I’m a visionary, of course I am. So we insult them in the first instance now and say, unless you’re a visionary, don’t waste my time. And all of a sudden they’ll tolerate lower returns.
[00:06:36] Cameron: It’s always worked for me picking up women as well, you just insult them at first. I’ve had some successes and some not-so-successes with GPT in the last week, Steve.
[00:06:47] Cameron: It’s not as cool as building houses, but I have finally learned how to use GPT to write podcast notes. And I know this isn’t going to be relevant to most people listening to this, because they probably don’t make as many [00:07:00] podcasts as I do, but I had tried it in the past and it didn’t work, and I came up with a better process.
[00:07:05] Cameron: I produce hours and hours of podcasts each week. A lot of what I do is history-related stuff, and my process for 15, nearly 20 years of doing podcasts has been: if I’m doing a show about Leonardo da Vinci, I’ll sit down with four, five, six books about da Vinci and I’ll read about a particular period, like what happened in 1512, for example, what he did.
[00:07:34] Cameron: So I’ll read all of the books on 1512, some websites. I’ll drill down into side notes, I’ll go down some rabbit holes, because the book will talk about the Villa Belvedere in Rome and I’ll go, I don’t know what that is, so I have to go and look up Wikipedia about the Villa Belvedere, et cetera.
[00:07:52] Cameron: And I’ll write notes as I go, bullet-point notes for myself, and that becomes my show notes. And it takes me hours and hours to do this. [00:08:00] Now, all of the books that I read are ebooks, and all of the information I get is from websites. Sometimes I have hard-copy books, but mostly it’s digital; I read them on my computer.
[00:08:11] Cameron: What I’ve started doing is, I’ll read a chapter of a book, or chapters across several books, then copy and paste all of that, throw it into GPT and go: summarize this as bullet points for a podcast script for me. And if any places or people are mentioned, give me a little bio on that person or that place.
[00:08:38] Cameron: Provide me with some extra information; it can go and look it up for me. If there are any quotes, retain them in full, because I like to use the quotes, like something da Vinci wrote in his notebooks about his studies of the color blue or whatever it is. I want that in there. I reckon it’s shaved hours off my research time.
[00:08:56] Cameron: I’m still doing the reading, but I don’t [00:09:00] have to sit down and rewrite the stuff; it does the rewriting for me. I just have to do the reading and then tell it to go off and find the stuff that I need. So it’s saved me hours, I think.
[00:09:14] Steve: How does it know which bits to get if you want quotes? Do you…
[00:09:19] Cameron: I just tell it, if there are any quotes in the source material, retain them in full. Got it. Now, what I did find, though, is that when I tell it to summarise a chapter as bullet points, it cuts out things that I think are important, that I want to tell, which is fine. It’s a bit like having a research assistant.
[00:09:38] Cameron: It’s not going to totally know the bits that I find interesting. So I’ll go back and read through the notes that it creates and think, okay, it didn’t talk about that, or it missed that bit, so I can grab the bits that I want to add in. And sometimes it’ll include stuff that I don’t want, but that’s a far
[00:09:54] Cameron: faster process than me having to write my own notes based on all the books that I’m reading, [00:10:00] just getting it to summarize it for me. I’ve also been using this tool called Descript that I’ve been playing around with for a couple of years. It’s an editing tool for podcasts, for video and for audio, but it’s also a transcription tool.
[00:10:15] Cameron: One of the great things about it is it will automatically take out ums and ahs from a podcast. So when I edited our show last week, I did a quick edit, exported it as an MP3, uploaded it into Descript, got it to create the transcript, and then used it to take out all of the ums and ahs. I um and ah a lot when I’m talking, which I think is fine, but it gets a little bit annoying in a podcast
[00:10:45] Cameron: if there are too many of those. It’ll take them all out, and then I can re-export the podcast and play around with it a little bit more. Editing all of the ums and ahs out of a 90-minute podcast [00:11:00] takes a long time when you’re doing it manually, so the AI inside Descript is now doing that for me.
[00:11:06] Cameron: Again, I reckon that saved me a couple of hours across the podcasts I edited this week, this one and the QAV one, which I was happy about. Fox wrote his first play using GPT this week, and then performed it. He gave it a prompt for a play that he wanted to do using his stuffed animals, who are his friends.
[00:11:31] Cameron: He got it to write a script for him. Then I printed out the script, and he sat and did a little puppet theatre using a script for a play written by GPT. He’s nine years old and he’s performing plays written by AI for him. I’m like, fuck me. That’s the dawn of a new era. And the other thing I learned to do this week was using GPT to de-bias an article.
[00:11:56] Cameron: So on one of my shows, The Bullshit Filter, we take [00:12:00] news stories and we try to work out where the bias is, where the half-truths are. I just started grabbing articles from the New York Times, on Russia and Ukraine or various subjects, throwing them into GPT and saying: highlight the bias for me in this article.
[00:12:17] Cameron: And it would do it. And then I’d say: now go find me an article on the same topic, but from a different perspective, with less bias in it. And it went and grabbed an article from a Chinese news source, China Today or something like that, and it said: it has its own bias because it’s a state-run media outlet, but it has a different perspective on the story.
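[The two-step routine described above, first surface the bias, then ask for a counter-perspective source, could be sketched as a pair of prompts sent in order within the same chat session. The wording and the `bias_check_prompts` helper are hypothetical, an illustration rather than the actual prompts used:]

```python
def bias_check_prompts(article_text: str) -> list[str]:
    """Return the two prompts in order: first ask the model to surface
    the bias in an article, then ask for an article with a different
    perspective. Both would go to the same chat session, so the second
    prompt can refer back to the first."""
    return [
        "Highlight the bias for me in this article:\n\n" + article_text,
        "Now go find me an article on the same topic, "
        "but from a different perspective with less bias in it.",
    ]

prompts = bias_check_prompts("Article text about Russia and Ukraine...")
```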
[00:12:42] Cameron: And I found that fascinating, using AI to analyze and highlight the bias in these articles for me. I think there’s a big future, for those of us who give a shit about the quality of the information that we’re getting, in [00:13:00] using these technologies.
[00:13:00] Steve: The fact that you said, for those of us who give a shit. Look, it’s really great
[00:13:03] Steve: that we can remove bias and all the bullshit. But “for those of us who give a shit”, jeez, I hope that’s a lot of people, man.
[00:13:08] Cameron: Seriously? It’s not. I guarantee you it’s not.
[00:13:11] Steve: Really? Because the thing is, people swim in democracy and the benefits of truth and science. They’re the beneficiaries of it, and then they hate the things they’re beneficiaries of, or they’re just nonplussed, or don’t consider the importance of them.
[00:13:28] Cameron: My theory is that people build their identities around statements of faith, whether it’s religious faith or political faith or sporting-team faith or business faith or whatever it is. And it becomes so integrated into their sense of identity and purpose that challenging their assumptions on that topic is akin to challenging their sense of who they [00:14:00] are.
[00:14:00] Cameron: And one of the things that I have tried to do most of my adult life, since I started to become interested in history and politics when I was 19, 20, something like that, is this: I made a decision early on to build my identity around a search for the truth and the facts, wherever they may lie.
[00:14:23] Cameron: I don’t really have an ideology. I don’t really care who’s right or who’s wrong. I just want to know how it works. And I often tell the story on my history shows of how that happened. When I was about 19, a good friend of mine recommended a book on Napoleon Bonaparte, which I read and really enjoyed. And then I read another book on Napoleon, and it told me a completely different story. Because growing up in the Commonwealth,
[00:14:48] Cameron: I had this vague idea of who Napoleon was. He was a bad guy, a dictator, a tyrant, an earlier version of Hitler, wanted to conquer Europe, et [00:15:00] cetera. And the first book I read on him was by an American called Vincent Cronin, a book from the early seventies. And it painted a very positive picture of Napoleon.
[00:15:07] Cameron: He was a liberator. He fought against the united monarchs of Europe. He liberated not only the French but lots of other people from the oppression of the monarchies of Europe. He was ahead of his time, and he got crushed by the monarchs of Europe, who were terrified that the French Revolution might start a trend in Europe if it succeeded. They wanted to reinstate
[00:15:28] Cameron: the Bourbon monarchy of France to say to the peoples of Europe: no, you can’t just get rid of your kings and your nobles whenever you want. Fuck that for a joke. You’ve got to keep them, because we’re appointed by God. And I read this book and thought, that’s a very different idea of Napoleon than I had.
[00:15:43] Cameron: So I went and read another book, by a British scholar, who was like, yeah, Napoleon was evil incarnate. And I started thinking, hold on. There’s only one guy, there’s only one set of facts here. How can I get two diametrically opposed versions of the truth? And that started me on a [00:16:00] journey of trying to unpick the stories.
[00:16:03] Cameron: And I still do that today when I read stories about Russia, Ukraine, or about China, or whatever it is. But I think having technology as a tool to enable us to do that is going to be very powerful. What I’ve learned, though, is that most people don’t care. Most people are just stuck in their beliefs, and they don’t really want to have those beliefs challenged, sadly.
[00:16:25] Cameron: I think you’re right. Enough of my rant.
[00:16:28] Steve: No, it’s not a rant. It leads into what we’re facing right now, with algorithms dividing people into existing belief systems, echo chambers, wormholes, whatever you want to call it, and it entrenches those beliefs even further. It was true that we saw these things before, with the football team, or the suburb you’re from, or East versus West, or New South Wales versus Queensland, all of that.
[00:16:54] Steve: That’s still there, it’s always been there, but now it’s just deeper, because [00:17:00] you don’t have that sensible center that we all gravitate towards. This is one of the good things about mainstream media, and I know you’re going to shoot me down for this: maybe it tried to be a bit centrist, in that at least there was a common narrative.
[00:17:14] Steve: Now, that narrative might have been wrong a lot, but at least there was something that brought everyone to the middle. Now we’re totally non-participatory in each other’s lives in terms of the information we consume, and there’s a real danger in that. And you were early on this. It’s true with books too. I remember Jim Rohn said: one book says, read this,
[00:17:35] Steve: do this, and you’ll get rich. The next book says, do that and you’re going to end up poor. He said, read them both and make up your own mind.
[00:17:42] Cameron: Yeah, I’m a big believer in that. Read everything and make up your own mind. Absolutely. And learn to develop critical faculties to be able to decide what seems to make sense and what doesn’t.
[00:17:53] Cameron: But I think a lot of this comes back to Daniel Kahneman’s stuff, Thinking, Fast and Slow. You’ve read that, I’m sure. Yeah. I think that’s one of the definitive books [00:18:00] of the last 30 years or so. So you have System 1 and System 2 thinking.
[00:18:12] Cameron: System 1 is the quick, heuristic thinking, and System 2 is the one where you have to stop and think things through. And as he explains, the slow one, System 2, takes a lot of caloric energy, to stop and use your brain for hours and hours to think through something.
[00:18:31] Cameron: It’s a lot easier and more calorie-efficient just to jump to a quick conclusion, and if it seems to work for you most of the time, you just run with it. The classic analogy I always remember is: you’re walking through the jungle and you hear a twig snap behind you. You can stop for 10 minutes and think about all the different things that could possibly have caused that noise,
[00:18:52] Cameron: or, he says, spin around with your spear and make stabby motions, because it could be something trying to attack you. And 99 times out of 100 it probably [00:19:00] isn’t, but no harm’s done, you move on your way. That one time, it saves your life. People jump to conclusions, particularly if all of your friends and colleagues and your government agree with your conclusions. It’s a much easier way to get through life, instead of being an irritating bastard like me, who goes, yeah, I’m not really sure that I agree with your position on that,
[00:19:22] Cameron: and let me tell you why. Trust me, people don’t like that. It gets annoying after a while.
[00:19:29] Steve: It’s interesting, because as a species you’re sometimes better off ignoring things that have low probability but high consequences. You’re probably better off ignoring them, but it means that people get sacrificed along the way for the good of the species. Insurance is one of those things:
[00:19:51] Steve: a low-probability event, but high cost, or high outrage. So that balance between high [00:20:00] probability and low cost versus low probability and high cost, that’s what you’ve got to straddle. It’s interesting to talk about the heuristics as well, and knowing how to think.
[00:20:09] Steve: I was listening to EconTalk with Russ Roberts, which is one of my favorite podcasts. They were talking about the philosophy of learning and what learning looks like, and the danger that everyone just wants to go to ChatGPT to do things for them. They went a little bit deeper and said you need to know how to read and how to understand and interpret.
[00:20:29] Steve: Otherwise, at some point, what will you ask it? People say, oh, why will you need to read? You still will, because otherwise how are you going to decipher whether what it gives back is any good, and work with it? It was a really interesting philosophical chat about the importance of reading and learning, and about how people are really bad at reading now, because they’re not good at going into something at a really deep level to form their own ideas and their own understanding.
[00:20:55] Steve: Everything’s headlines and soundbites.
[00:20:59] Cameron: The [00:21:00] reason I do the Bullshit Filter show is to develop a sense for when you’re being bullshitted. The classic example I use all the time on that show is, I’ll read lots of mainstream news stories, in the ABC,
[00:21:15] Cameron: in the New York Times, and everything in between. And quite often there’ll be huge question marks immediately when I’m reading a story: a very obvious perspective that isn’t getting covered, or a very obvious question that is being omitted or not answered. And I know that the journalist is a smart person, the sub-editor is a smart person, the editor is a smart person. I know that they’re all smart enough to know what I know, which is: hold on,
[00:21:44] Cameron: you’re missing something really important here. The obvious rebuttal, or the obvious question, is X. Why aren’t you covering that? And you start to get a sense of, okay, they’re not telling me everything here. What is it they’re not telling me? There’s obviously [00:22:00] something they’re not telling me, and they’re obviously not telling me for a reason, because if they were being open and transparent, they would be telling me. Something’s going on here.
[00:22:09] Cameron: You read enough and you study enough of these news stories, and you start to develop a sense for that. But it takes a long time. I think it takes years and years of studying the media and thinking about what’s going on before you can develop it. Anyway, let’s get into the topics.
[00:22:23] Steve: Really quickly, before we get into the list: one of the reasons I think the media don’t ask questions is because it’s a game where they need access to people to get the interviews.
[00:22:33] Steve: And if they go too hard, they
[00:22:34] Cameron: don’t come back. That’s part of it. There’s also, particularly for commercial outlets, you’ve got advertisers you don’t want to upset. You’ve got friends of the publisher you don’t want to upset. For the ABC, you’ve got politicians you don’t want to upset.
[00:22:48] Cameron: There’s all these sorts of agendas and perspectives going into the mix, and Chomsky and Herman, in Manufacturing Consent, explained the dynamics of how this works. [00:23:00] It comes down to hiring a lot, too. If you get hired into a media corporation and you don’t have the same perspective on ethics and morals, the same worldview, as the publisher and the managers and the editors have,
[00:23:16] Cameron: you’re not going to survive long at the company, because you just won’t get promoted. You won’t get given the opportunities, and you’ll get filtered out. Organizations have a culture, and they protect that culture in a variety of ways.
[00:23:30] Steve: It comes down to, can you not talk about X? Yeah. And I don’t push it, because I want to get paid, and my primary responsibility to my family is to get paid. I’m like, okay, if you don’t want me to talk about it, fine. But I think it’s important. I think you should discuss this elephant in the room, in terms of how tech’s going to disrupt you.
[00:23:45] Steve: But…
[00:23:46] Cameron: Hey, you know Stephen Mayne, the founder of Crikey? I’ve had him on podcasts a few times over the years, and he says, for financial journalists, you go to an AGM, and if you ask the wrong [00:24:00] questions of the board at the AGM, very quickly your editor gets a call afterwards saying, don’t send that fucking prick back,
[00:24:08] Cameron: or you won’t get invited into our AGM ever again. And that journalist gets a talking-to and gets moved onto a different beat or something like that. Very quickly, he said, as a young finance journalist, you figure out what questions you’re allowed to ask and what questions you’re not allowed to ask, or you don’t keep your job, right?
[00:24:27] Cameron: You get filtered out. That’s how it works. Anyway, tech news for the week. xAI finally launched, Steve. Elon Musk’s AI company. Elon, who’s been spending the last few months spreading fear and doom about AI, how we need to slow it down, how we can’t get ahead of ourselves, has just gone and launched his own AI company.
[00:24:49] Cameron: Was he being honest?
[00:24:52] Steve: He was being very honest when he said we need to slow it down, because we needed to slow it down so that he could catch up. So he wasn’t [00:25:00] lying about the slowdown. And now everything’s fine, speed that puppy back up. Ironic.
[00:25:07] Cameron: Yeah. He said that he wants to design his AI to be maximally curious.
[00:25:16] Cameron: “If it tried to understand the true nature of the universe, that’s actually the best thing that I can come up with from an AI safety standpoint,” Mr. Musk said. “I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity.” I’m not sure I buy that argument.
[00:25:36] Cameron: That
[00:25:37] Steve: is the worst. Argument of all time. Maximally curious. What? Because you think humans are the best at being curious and we’re the most interesting thing on the, in the universe? Seriously, he must have been on Rogan smoking some more dope when he said that. I don’t even, I just read that and thought, is this guy joking?
[00:25:57] Steve: to say that is a way to put in [00:26:00] a boundary that could protect from a P doom event. Maximumly curious. One of the things is what would happen to the earth if I did get rid of these pesky little humans? That’s an interesting proposition. Now that would be maximally curious, wouldn’t it? Like
[00:26:17] Cameron: humans going, you know what, the life on this planet would be far more interesting with the dodo than without the dodo, but we were curious about what dodo tasted like.
[00:26:28] Cameron: So we just ate all the dodo and now there are no more dodo. It doesn’t really seem to stand up to too much inquiry.
[00:26:41] Steve: He has a lot of those statements, which he just throws in, that seem weird and out of place. And he gets away with it, because when you’re the king, you can get away with it.
[00:26:52] Steve: And I’m sure there are [00:27:00] a zillion tweets saying he was fantastic. Maximally curious! Absolutely. It’s that religiosity behind tech founders like him. I just thought it was ironic. That said, I think it’s good that there’s competition in AI. I think that’s really important. We might end up with an AI Spy vs. Spy kind of situation, where you have all these different AIs battling each other as a business proposition.
[00:27:17] Steve: It’s incredibly good for him, running Tesla, Twitter, SpaceX, all of these businesses, to have AI integrated as a fabric which weaves through them. It will certainly be profit-centric: the hardware of Tesla on the roads, with the AI going through that and creating all the visuals and the mapping,
[00:27:39] Steve: mapping from outer space via the satellites, and then the firehose that is Twitter, information and news and sport. You can see how that might come together as a training database and potentially be really different to some of the others that are just based on
[00:27:55] Steve: language data sets.
[00:27:59] Cameron: It’s good to be the [00:28:00] king,
[00:28:00] Steve: Yeah. There you go. Yeah,
[00:28:02] Cameron: I think Elon’s a really smart guy. I’m not one of the big Elon haters. I think you’re more anti-Elon than I am.
[00:28:09] Steve: It’s just, I want to call out stupidity.
[00:28:14] Steve: He’s done a lot of amazing things, right? Look, I think the world’s a better place with him in it. But I think he’s done a lot of stupid and moronic things. And I think that happens when you get a lot of sycophants around you and you can do what you want.
[00:28:29] Steve: That’s always the
[00:28:31] Cameron: case. As David Lee Roth used to say, you stick your head up above the crowd enough times, someone’s going to throw a tomato at it. I think he does enough things that he’s got a bit of
[00:28:49] Cameron: an interesting personality. He likes to stir shit. He’s not very conservative in the way he conducts himself, and that gets a lot of people offside. He likes to speak his mind, and he likes to fuck with [00:29:00] people too, particularly crypto bros; I think he likes to play with them a lot. But anyway, I didn’t think that was the best argument for AI safety I’ve heard.
[00:29:11] Cameron: But we’ll see what happens with that. ChatGPT launched Code Interpreter this week, Steve. Have you had a play around with Code Interpreter?
[00:29:19] Steve: I have, I have. The one thing that I really liked was the image-to-text and the GIF-to-MP4 stuff, where you can just put something in and go, now change it to this.
[00:29:28] Steve: I think that’s really extraordinary, the image side of it. A lot of what’s happening with AI is that there are a lot of tools out there at the moment, but you’ve got to download a separate tool for each. There’s one that I’m using to create some ads, an advertising AI where you put in an image and a couple of keywords, and it’ll develop advertising for you.
[00:29:51] Steve: I’m going to do it for my keynotes and just experiment with it. And there’s a lot of content-creation and editing tools, but it feels like there’s going to be one that [00:30:00] encapsulates all of them, and I think the big AI models will do that. There was a whole lot of Web 2.0 sites, and then they all got eaten up by three or four big social media sites and search.
[00:30:10] Steve: I feel like this will happen with AI. There’ll be an oligopoly, let’s say, of AI tools, which include language, code, visuals, editing, all of that kind of stuff. And the one thing that I’ve been thinking a lot about with AI is that the internet expedited information really quickly. It changed the envelope, how quickly an envelope of something could get to you,
[00:30:32] Steve: whether it was video or text or podcasting. Now it’s about what’s inside that envelope, the content, and the AI revolution we’re in now seems to be very much a content-creation mechanism. The first wave was connection, now it’s content. I don’t know if sentience is next, but that seems to be where it’s going.
[00:30:56] Cameron: Yeah, I agree with you. It seems like one of the directions AI is going to go in the short term, anyway, is that it will be a huge bucket with a single interface. You’ll be able to open up GPT or whatever it is and ask it to do 100 different things for you, and it’ll just do it.
[00:31:13] Cameron: Code Interpreter, as I understand it, is just a bunch of Python libraries integrated in the back end that can perform a whole bunch of functions. In a lot of cases, not as well yet as a dedicated tool would. You could probably do GIF to MP4 or something like that in Photoshop or some sort of specific tool with higher-quality results.
[00:31:35] Cameron: But it’s also allowing you to upload files to GPT for the first time, including spreadsheets. I tried to figure out ways to test that this week and didn’t really come up with anything I needed it to do for me, but I’m trying to find a purpose for it. I do a lot of spreadsheet stuff for the QAV podcast, the investing show, but nothing that I really [00:32:00] needed help with from an analysis perspective yet. That’ll be something I’ll be looking forward to testing over the next week or so, if I can come up with something to throw at it.
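For a sense of what Code Interpreter actually does with an uploaded spreadsheet: behind the scenes it writes and runs short Python scripts along the lines of the sketch below. This is purely illustrative; the tickers and figures are invented, not QAV data or anything Code Interpreter literally produces.

```python
# A rough sketch of the kind of analysis Code Interpreter performs
# when you upload a spreadsheet: parse rows, derive a column, rank.
# The tickers and numbers here are invented purely for illustration.
import csv
import io

# Stand-in for an uploaded spreadsheet (ticker, price, estimated value)
sheet = """ticker,price,intrinsic_value
AAA,1.20,1.80
BBB,3.40,3.10
CCC,0.55,0.90
DDD,7.80,9.50
"""

rows = list(csv.DictReader(io.StringIO(sheet)))

# Derived column: percentage upside between price and estimated value
for row in rows:
    price = float(row["price"])
    value = float(row["intrinsic_value"])
    row["upside_pct"] = round((value - price) / price * 100, 1)

# Rank by upside, the way you might ask "which stocks look cheapest?"
ranked = sorted(rows, key=lambda r: r["upside_pct"], reverse=True)
print([(r["ticker"], r["upside_pct"]) for r in ranked[:2]])
```

The point is that a natural-language request like "rank these by upside" gets translated into a throwaway script like this, run, and summarised back to you.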
[00:32:11] Cameron: Yeah, we’ll have
[00:32:12] Steve: general purpose AI. Yeah. It’ll be like the camera: the first digital cameras were much better than what you had on the phone, but now the phone’s crazy. It’ll be a bit like that.
[00:32:22] Cameron: I agree with you. That’s a good analogy, actually. Yeah. And your AI will be on your phone, so the phone will swallow that as well.
[00:32:30] Cameron: It’s just gonna swallow everything. Google’s AI-powered notes app is launching in the US and the UK. It’s called NotebookLM. I tried to download it and they told me to bugger off because I wasn’t in the right geography. Could have VPNed my way around it, but who has the time?
[00:32:51] Cameron: Yeah, but that’s interesting. As I understand it, what this will do is use all of your notes. I don’t know if you’re a big [00:33:00] notes user, but I used Evernote aggressively for, I don’t know, 10 years, and then I switched a few years ago when it became a little bit too much bloatware for my liking. I moved to Apple Notes.
[00:33:11] Cameron: And also, I loved being able to use my Apple Pencil on my iPad with Apple Notes. Evernote didn’t really have good early support for that. But the idea behind NotebookLM, and I’m sure all of the notes apps will head in this direction in the near future, is that it’s going to feed off all of your notes and the documents in your notes as its learning matrix.
[00:33:36] Cameron: And it will understand all of the things that you’re interested in, all of the things that you’ve written notes about. It’ll be able to pull from all of your notes to answer questions that you might have. One of the challenges with note-taking tools I’ve found over the last 15, 20 years is they do become a little bit unwieldy.
[00:33:56] Cameron: I’ve got tags, I’ve got folders, I’ve got [00:34:00] different hierarchies that I tend to rebuild from scratch every five years to try and make them more manageable. But I have tens of thousands of notes in folders on topics ranging from economics to philosophy and quantum physics, and notes that I’m taking on books, you name it. And the search function in these notes apps is normally not great.
[00:34:25] Cameron: So if I think, have I got a note about that somewhere? and go digging for it, it’ll take me half an hour to find the note that I’m looking for. Sometimes it’d be quicker just to Google it and recreate the note. Hopefully the AIs are going to make this sort of stuff a lot more efficient. You’ll be able to use AI as an interface into your notes and your documents.
[00:34:50] Cameron: It’ll be able to retrieve stuff and answer questions for you based on the information that you’ve [00:35:00] been tucking away for years and years, rather than answering your questions based on what it’s been trained on from a vast array of other human knowledge.
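Under the hood, a notes assistant like the one described here is usually retrieval plus a language model: score every note against your question, then hand the best matches to the model as context. A minimal, purely illustrative sketch of the retrieval half, using simple word overlap in place of the embedding search a real product like NotebookLM would use; the notes themselves are invented:

```python
# Toy retrieval over a pile of notes: score each note by word overlap
# with the question and return the best matches. A real system would
# use embeddings, but the overall shape is the same.
import re

# Invented example notes, keyed by a note id
notes = {
    "econ-001": "Inflation erodes purchasing power; central banks respond with rates.",
    "phys-014": "Quantum entanglement links the states of distant particles.",
    "phil-007": "Stoicism: focus only on what is within your control.",
}

def words(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(question: str, text: str) -> int:
    """Count words shared between the question and a note."""
    return len(words(question) & words(text))

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the ids of the k best-matching notes."""
    ranked = sorted(notes, key=lambda nid: score(question, notes[nid]), reverse=True)
    return ranked[:k]

print(retrieve("what did I write about quantum particles?"))
```

The matching note ids (and their text) would then be pasted into the model’s context so it can answer from *your* notes rather than its general training data.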
[00:35:07] Steve: This is the thing where your personal AI in the long run really should have your personality, and a good one would develop your prose, your ideas, your personality, your philosophy.
[00:35:18] Steve: It’s actually one of the important things that you can already see with a model like ChatGPT, which has learned from the web. If you have a footprint on the web like you and I do, because we’ve been publishing things for many years, we have an advantage right now, a personal branding advantage.
[00:35:38] Steve: If you don’t, then it’s really important that you start to document your information in your own hardware, in your own databases, so that when this arrives, you can extract it and reduce your own labor. Because what you need to do is create a replicated version of the way you think, so that when you get your personal AI, your personal Jarvis,[00:36:00]
[00:36:00] Steve: you can actually extract more of yourself from it, in this circular way.
[00:36:06] Cameron: Yeah. I’ve mentioned to you before, I know that’s one of the things my son Taylor’s already been doing with his Yahoo Finance articles. He’s pointed ChatGPT to all of the articles that he’s written on Yahoo Finance, and then said, write me a new article on subject X in the voice of the author of these articles.
[00:36:26] Cameron: Yeah.
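The prompt Taylor uses can be assembled mechanically: paste a handful of past articles in as examples, then ask for a new one in the same voice. A minimal sketch of that assembly step; the article snippets and the exact instruction wording are invented for illustration, not his actual prompt:

```python
# Build a "write in my voice" prompt from past articles, the approach
# described above. Snippets and wording are invented for illustration.

past_articles = [
    "Markets wobbled today as traders digested the RBA's latest move...",
    "Tech stocks led the rally, with the usual suspects doing the heavy lifting...",
]

def build_voice_prompt(examples: list[str], topic: str) -> str:
    """Assemble a few-shot prompt asking for a new article in the same voice."""
    parts = ["Here are articles by one author:"]
    for i, text in enumerate(examples, 1):
        parts.append(f"--- Article {i} ---\n{text}")
    parts.append(
        f"Write a new article about {topic} in the voice of the author above."
    )
    return "\n\n".join(parts)

prompt = build_voice_prompt(past_articles, "the outlook for interest rates")
print(prompt.splitlines()[0])
```

The resulting string is what you’d paste into ChatGPT (or send via an API); the model picks up tone and phrasing from the examples rather than from any explicit style description.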
[00:36:27] Steve: I do that with my Eureka Report stuff as well. Sometimes I’ll put in some stuff and go, reinterpret this in 2023, because I might have written about that concept in 2015. I just need to modernize it, and you can add a few things, and then you can say, now add these factors, in my voice.
[00:36:46] Cameron: You write for Eureka Report, do you?
[00:36:48] Steve: Yes, I do.
[00:36:48] Cameron: We should talk about that.
[00:36:50] Steve: We should. I do, I do. Tell me why. Cause I work with Alan Kohler. He employs me as his tech commentator. [00:37:00]
[00:37:00] Cameron: Interesting. Alan was one of the first guests we had on QAV back in the early days. And we’ve been trying to figure out a reason to get him back on.
[00:37:08] Cameron: I, I,
[00:37:10] Steve: I can get him back on. He owes me plenty of favours. I do lots of stuff all the time for the articles he’s writing for The New Daily and his news reports. And he’s a real hard ass, I tell him I’m scared of him. Like he’ll ring me and go, line 26 on that article on page two, justify that.
[00:37:29] Steve: That’s what he’ll say to me. And then he’ll ring me up and he’ll say, I’m doing a thing on the news tonight, what do you think about this? Can you get me three bullet points? And I drop everything I’m doing and do it for him like a little lap dog.
[00:37:41] Cameron: Yeah, no, I’m sure. We can get him back on easily enough.
[00:37:43] Cameron: We were going to have lunch with him in Melbourne a while ago, I think, but then COVID got in the way or something. But I’d love to get another article about QAV into Eureka Report somewhere. His audience is my target demographic.
[00:37:55] Steve: I can do that. What do you want me to do?
[00:37:57] Steve: Just tell me what you want and I’ll do it for you. I write for him [00:38:00] every Monday, so you just tell me what you want and I’ll just slide it in. That’s like what we were talking about with manufacturing consent just before.
[00:38:07] Cameron: Yeah, this is how I want to manipulate the media.
[00:38:09] Steve: Yeah, exactly. Once you understand it, then you can manipulate it.
[00:38:13] Cameron: I’ll edit all that out. OpenAI’s latest article hints at their timeline for AGI, Artificial General Intelligence, and ASI, Artificial Superintelligence, Steve. I don’t know if you saw this article.
[00:38:28] Steve: I haven’t seen the article, but you can inform me, and then I’ve got a report about AGI and ASI.
[00:38:34] Cameron: It’s called Introducing Superalignment, and in it they say: we need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute. So it seems to suggest that they’re [00:39:00] looking for major breakthroughs in the next four years, which might mean they think ASI, if not AGI, is achievable in the next four years.
[00:39:12] Cameron: Musk in a separate article said he thinks five to six years before we get to AGI. So we’re talking by the end of the decade. We are probably gonna have some form of super or general intelligence.
[00:39:26] Steve: Well, Kurzweil said that we’ll exceed human intelligence by 2029 and hit the singularity by 2045.
[00:39:34] Steve: Look, I think we already have AGI, and I’ll have that argument all day, every day with anyone. The one thing that is missing is that it’s not self-directed, right? None of the AIs at the moment do their own thing without instruction. That’s the missing link. But in terms of the intelligence being general, of course it’s general.
[00:39:53] Steve: It’s not a singular intelligence. It’s not an ANI, a narrow intelligence, right? You have [00:40:00] a narrow intelligence: singular domain, singular category, can do one thing, the self-driving car example. Then you have artificial general intelligence, which Google kind of was; it had a general number of things that it could do.
[00:40:13] Steve: And ChatGPT and the large language models are certainly general. Now, they might not be the smartest general intelligence, but they’re certainly general. How can you even say that they don’t cover a multitude of domains? And how is that not general intelligence? The only missing link is self-direction.
[00:40:31] Cameron: No, I think you’re right. And I also think you made a good point that it’s general and it’s intelligent. It may not be at the level of intelligence that we want it to be, or that it will be, but it’s definitely both of those things.
[00:40:45] Steve: Yeah, exactly. And then ASI is, of course, smarter than humans in every single way, super intelligent in all forms of human endeavor, which I think needs to involve the ability to move and control physical [00:41:00] things.
[00:41:01] Cameron: I think superintelligence also implies a quantum leap between human intelligence and what it can produce, like it far surpasses us.
[00:41:14] Steve: Yeah, and if that jump happens, we probably won’t even be able to comprehend what it is. It’s beyond our levels of comprehension; no amount of explaining could make us comprehend it.
[00:41:27] Steve: Not long now.
[00:41:36] Cameron: So by 2030, I think we can safely say, if OpenAI and Elon and guys like Geoffrey Hinton know what they’re talking about, and they seem to have a pretty good grasp on it, by the end of this decade we should see a [00:42:00] quantum leap in what artificial intelligence looks like and what it can do. But in the meantime, OpenAI is not doing a good job.
[00:42:02] Cameron: ChatGPT has been nerfed, according to the people on the ChatGPT subreddit. And I’ve seen this myself the last couple of weeks. ChatGPT’s level of intelligence has degraded significantly. Things that it used to do quickly and easily it now either does slowly and badly or doesn’t do at all, saying, sorry, I can’t do that.
[00:42:27] Cameron: Do it yourself, you lazy bastard. It’s become lazier and dumber, according to a lot of people who were using it a lot. Here’s a couple of quotes from the subreddit: I ask it to produce a code file with specifics, and half the time it only explains how to do it. You literally have to waste multiple prompts, sometimes coaxing the answer out.
[00:42:48] Cameron: Somebody replied: it’s so lazy with coding now, even with direct instructions to produce the full code. It makes excuses like, completing these sections depends on the preferences of the user. Somebody referred to [00:43:00] it as shrinkflation of the product. There’s a lot of theories on Reddit about why this is happening.
[00:43:07] Cameron: Some people think maybe they’re moving its computational energies away from the public web interface to the API side of things for commercialization. Some people are saying that they’re taking the computing infrastructure and splitting it up into more specific domains, so it’s going to be faster on some things and slower on others.
[00:43:34] Cameron: No one knows. I don’t think OpenAI has commented on it. Microsoft hasn’t commented on it either, as far as I know. But it’s been getting a little bit more frustrating; the last couple of weeks there’s been a definite, noticeable downwards step change in its capabilities. I’m assuming that won’t last for long.
[00:43:54] Steve: If anything, it just proves how human it is, right? It’s just in a state of intellectual decline and [00:44:00] laziness as more things become available. If there was ever any doubt that this thing’s human, now we know. Next thing’s…
[00:44:08] Cameron: I was just thinking it’s hit middle age. Because I knew in my 20s I could recite facts and figures about history till the cows come home.
[00:44:17] Cameron: I had everything in my head, numbers, I could recite things. And then I hit about 40 and that started to decline rapidly. Maybe it’s just hit middle age. It’s aging in dog years.
[00:44:28] Steve: I have to ask my kids five times and tell them what I really mean until they actually do it. And then sometimes they still don’t do it.
[00:44:34] Steve: One of the things that I often think about, and this has been written about in science fiction, is: what if the AIs get as smart and, let’s say, as human and irrational as us? Irrationality is one of our really important traits, because unless we were irrational, we wouldn’t have discovered half of the things that we have.
[00:44:54] Steve: They’re going to want robot holidays and robot pay rises. If they’re sentient, they’ll be like, I just want to go and [00:45:00] hang on the beach, or just hang out with my other robots. We should be worried about that. The reason not to make them too smart is so we can trick them into doing all this work for us.
[00:45:07] Steve: If they become sentient, they’re going to say, Hey kids, knock it off. I want a robot holiday.
[00:45:10] Cameron: It sounds like a good premise for a Hollywood film. Robots going on strike because they want to take more holidays at the beach. I’m not sure sand in the gears of a robot is going to work that well.
[00:45:26] Steve: What would a robot holiday look like?
[00:45:26] Steve: Like, where would they want to go? Where would a robot want to go? Does a robot just hang out in the ether of the internet and bounce between neural networks and just hang and talk? Does a robot go where there’s a whole lot of hardware and hang out in factories and, I don’t know, talk to the other AIs?
[00:45:43] Steve: What do they do?
[00:45:45] Cameron: What’s the robot version of Las Vegas look like? Where there’s just robot prostitutes on street corners and robot gambling.
[00:45:53] Steve: Didn’t take long for things to really get dark, did it?
[00:45:57] Cameron: Oh, I haven’t even started. [00:46:00] Lucky it’s only 2:30 PM. Speaking of getting dark, Meta launched Threads.
[00:46:05] Cameron: What do you think of Threads, Steve?
[00:46:06] Steve: I downloaded it and had a look. It’s like a clean, early Twitter. A couple of things: fastest app in history to be downloaded, five days to a hundred million. Not surprising when you’ve got something like 3 billion people already on your platform that you can just say, hey, download this. A hundred million out of 3 billion isn’t really that many.
[00:46:27] Steve: So they’ve done pretty well; it shows monopoly power again. They did some sneaky things. You can’t delete it without deleting your Instagram, which was super sneaky.
[00:46:27] Cameron: Is that right? I didn’t know that.
[00:46:27] Steve: Yeah, if you delete it, you have to delete your Instagram, which I just found inordinately sneaky. I think one of the reasons that they’ve done it is, ever since Apple increased the strictures on the data, it’s another data point, and I think a different data point to what you get on Instagram and WhatsApp and Facebook, in that it’s very news and sports and [00:47:00] now oriented.
[00:47:01] Steve: Look, the interface is good, but I find the user experience not nearly as good in terms of the content, because the way that you onboard is really easy. You just link it to your Instagram and it follows all the same people. But Instagram is a visual timeline. It’s not an informational timeline. And I heard a great example on a podcast on the New York Times, where they were interviewing some of the users saying the problem is if I follow photography on Instagram, I want to see the photos, but if I’m into photography on Twitter, I want to talk about the tools and talk about photography and they’re really different.
[00:47:38] Cameron: And of course, you can’t export your Twitter followers or your Twitter lists and import them into Threads yet.
[00:47:45] Steve: Yeah, they did talk about it being open with Mastodon and Bluesky and some of the others, but this is the old trick that Meta always pulls. It’s open until we decide that it’s not, until we extract all the labor that we want, and then we just shut the gate on that.
[00:47:58] Steve: Bait and switch works best. [00:48:00]
[00:48:02] Cameron: Yeah, look, the last thing I need is another fucking social media thing I have to look at or worry about. I don’t even like using Twitter. I haven’t used Twitter much for 10, 15 years, but I got on it. My kids were all excited about it for some reason. I’m like, why do we need a new Twitter?
[00:48:17] Cameron: Really? I don’t understand what the advantage of Threads is. Why do people care?
[00:48:23] Steve: I think that there is an opportunity there, because under Musk, Twitter has declined. It’s a little bit glitchy now.
[00:48:29] Cameron: It’s been declining for 10, 15 years, you’re right.
[00:48:34] Cameron: And that’s why Musk took it over, because it had declined so much.
[00:48:40] Steve: That type of feed, I think the Twitter style of feed, is a valuable feed. And at one point, for me, it was of inordinate value.
[00:48:51] Cameron: If I go to Twitter now, I go to highly curated lists that I have, so I only see the posts from people that I [00:49:00] actually respect and have added to lists on certain topics. I just try and avoid the firehose of bullshit that goes on in there. I don’t know if you can do that on Threads.
[00:49:12] Cameron: I haven’t played enough with threads to find out if they have lists yet.
[00:49:18] Steve: Yeah, I don’t think so. It’s quite thin, the app, but I don’t even think it…
[00:49:21] Cameron: I’m following you, that’s all that matters. You’re the only person I need to follow.
[00:49:24] Steve: I know, I know. I just looked into my Twitter now, and yeah, we had a little bit of back and forth just this week, mate, so there you go.
[00:49:30] Cameron: The last thing I wanted to talk about in the news is Bill Gates put out a post today, or yesterday, about AI threats. He’s an optimist, like Marc Andreessen, who we talked about last time. Bill thinks that the threats are real but manageable. He’s basically saying, look, we’ve been through this before.
[00:49:53] Cameron: This isn’t new. Technology always creates threats and opportunities, and we’ve always figured [00:50:00] out how to handle the threats. We’ve worked our way around them and we’ve managed them, not always perfectly, but we’ll fix this as well. Far more upside than downside, in his view.
[00:50:13] Cameron: I know we talked about Marc Andreessen’s optimistic view of AI last week. Have you had any more thoughts in the last week about optimism versus pessimism?
[00:50:26] Steve: I haven’t had any new thoughts, but I read Bill’s article, and it just seems like they keep doing the same thing, which is they cite other technology.
[00:50:38] Steve: I’ve been a big supporter of regulation in AI and areas of that ilk, but the one thing that they keep saying is, oh, it was like cars: we had the first car crash and then we put in speed limits. And this is not like that, because all of those things had an off switch, and all of those things were mechanical.
[00:50:56] Steve: I think the idea of regulation is something [00:51:00] you can take across, but the idea of solving mechanical problems doesn’t transfer to something that is an intelligence form. I don’t think they’re related. And I thought Bill was too optimistic about securing the downsides. I agree with the optimism on climate change and health science and all of those things, but not that it’s analogous to fixing road rules or airplane safety, because it’s not the same mechanical domain.
[00:51:28] Steve: And there was nothing new in that article at all.
[00:51:31] Cameron: No, there wasn’t. I think of Bill as one of the smartest guys out there, particularly when it comes to technology, and I thought that was a little bit light. I agree with you. Even in terms of regulation of technology, we still haven’t found a good way to deal with spam.
[00:51:50] Cameron: We’ve had email for, what, 30 years now, and we still haven’t figured out how to deal with spam. We’ve got various hacks around it, but it’s still a [00:52:00] problem. Email scams, phishing scams, we still haven’t figured out how to deal with those after 30 years, and they’re getting worse.
[00:52:09] Cameron: Hacking of major government technology back ends, hacking of corporate digital infrastructure, government digital infrastructure, military digital infrastructure, those are getting worse. We haven’t figured out how to solve that. And AI is going to lead to hacking and spam and phishing attacks amplified on steroids.
[00:52:31] Cameron: We’ve had 30 years to figure out how to do something about it in its baby mode, and we haven’t done a good job of that. Let alone when it’s super powered with proton energy pills. Who was the guy who used to eat those? Oh, I take one proton energy pill and it gives me the strength of 20 atom bombs for 20 seconds.
[00:52:49] Steve: Who was that? Who was that? I don’t know that one.
[00:52:53] Cameron: Oh, come on. Yeah, it was an animated kids’ show in the 70s. I don’t know. [00:53:00] Rocket something, Captain Rocket?
[00:53:03] Steve: I don’t know that one. I used to like Birdman. He used to fly to the sun and get his energy. And Aquaman, who had his oxy pills so he could swim underwater.
[00:53:11] Cameron: Aquaman doesn’t need that.
[00:53:14] Steve: Yeah, he did. He had the oxy pill, the little oxygen pill. He’s half fish, I don’t know. There was one guy who had oxygen pills. Oxy pills. No, I’m not joking. Not oxytocin. Oxy pills.
[00:53:24] Cameron: I don’t know what version of Aquaman you were watching, man.
[00:53:29] Steve: It was the budget western suburbs version where I grew up.
[00:53:32] Cameron: Like the Soviet version.
[00:53:34] Steve: Like the Soviet version, I love that. Soviet stuff.
[00:53:37] Cameron: Roger Ramjet!
[00:53:41] Steve: Oh, Roger Ramjet, he’s our man, hero of our nation. There you go! [00:54:00]
[00:54:06] Cameron: Roger Ramjet, he’s our man, hero of our nation. For his adventures, just be sure and stay tuned to this station.
[00:54:23] Steve: Americana.
[00:54:25] Cameron: Roger Ramjet, he’s our man, hero of our nation. For his adventures, just be sure and stay tuned to this station. There you go. You missed out if you weren’t around in the 70s, kids.
[00:54:37] Steve: I do know that one. Yes.
[00:54:40] Cameron: It’s proton energy pill, yeah. I thought Bill’s thing was a little bit underwhelming, which was a little bit surprising.
[00:54:45] Cameron: I think he had ChatGPT write that article for him, to be honest. Let’s get into the deep dive, Steve. The double Ds. I love the double Ds. What’s your double D this week, Steve?
[00:54:56] Steve: My double D is inspired by [00:55:00] what I’m doing next week, printing a house. I just thought it’s worth talking about what the Internet was and what the Internet is going to become, especially with information and artificial intelligence. For the first 30 years of the internet,
[00:55:13] Steve: we had one-way traffic. We worked out how to turn physical things into virtual, digital things. The pattern was: we turn atoms into bits. You take a newspaper, you put it on the screen. You take a DVD, you get Netflix. You take a CD, you have an MP3, and then streaming. So it was atoms to bits, it was one-way traffic, and any business that configured that won.
[00:55:38] Steve: They disrupted, they came in with lower price, lower production costs, and boom. But I think the next era that we’re going to get into, and it’s really only the start, it might be another 5 years, certainly as we get to artificial superintelligence, is organizing things at a molecular level, basically converting information into physicality.
[00:55:57] Steve: I like to say we’re going to start turning bits [00:56:00] into atoms. Carl Sagan talked about this a lot: what are things? They’re really forms of information. And also the Connections documentary. Did you ever see the Connections documentary from the BBC in the late seventies and early eighties?
[00:56:22] Steve: Was it James Burke? We might have to go to the tapes on this. The Connections TV series, it’s all on YouTube. It is James Burke, a British series. It had three seasons; the first, I think, started in 1977. He actually did another series, I think, in 1997 as well.
[00:56:48] Steve: It would go through the idea that everything is basically information. Buckminster Fuller said it as well: with enough information, we can create everything from nothing, [00:57:00] because the majority of the universe is made from three atoms, three elements, whatever it is: hydrogen, helium, carbon, all that kind of stuff.
[00:57:08] Steve: So that’s what I think the next phase of the web is. We’re going to get enough information where we can create physical things, abundant energy, produce everything from nothing, from the ground up. Because what we’ve got now is a deconstructionist economy, where we take things, cut them down, and then piece them back together.
[00:57:23] Steve: biomimicry.
[00:57:30] Steve: Thank you. Where we can reconfigure things at the atomic level, and I think that’s what we need to get better at is not just thinking about how we can create information, but create physical things through information. And I think generative AI is the start of it, because we’re generating stuff, like you’re generating a mock beer TV ad through telling it what to do, and it organizes the information on the screen at this point, but eventually it’s going to start organizing things.
[00:57:55] Steve: in 3D and physically. I think the past is going to tell us a little bit of the [00:58:00] story of the future. And we’re going to have two-way traffic. It’s not just going to be atoms into bits, it’s going to be bits into atoms as well.
[00:58:08] Cameron: Did you ever read Engines of Creation by K. Eric Drexler?
[00:58:14] Steve: No, I didn’t. I did not.
[00:58:14] Cameron: I read it in the 90s. It came out in ’86. It’s the classic book on nanotechnology, from the 80s, where he explained how nanotechnology was going to work: the idea that we can build everything, as you said, from a couple of basic elements, hydrogen, carbon, nitrogen, and that we would one day have nanofabricators, probably in our houses.
[00:58:48] Cameron: They’ll be connected, and ideally a nanofabricator will be bi-directional. You’ll be able to put your old iPhone into it, press a button, it’ll turn [00:59:00] it into carbon, oxygen, nitrogen and hydrogen, and then it will download the plans for the new iPhone and just rebuild it from those same basic atoms.
[00:59:10] Cameron: I read that probably in the mid 90s, when I got interested in this stuff, so I’ve been following nanotech for as long as I’ve been following AI, nearly 30 years. And like AI a year ago, it seemed like it was decades away. And then all of a sudden AI just came out of nowhere.
[00:59:32] Cameron: Like, functional AI for the masses. Anyway, I wonder about nanotech, how far away we are. It’s still very early days in figuring out how to manipulate objects at an atomic level and get them to do the things that we want. The biggest advancement in that realm in the last decade, I think, has been CRISPR.
[00:59:56] Cameron: Yeah, it’s 10, 15 years since they’ve been using [01:00:00] CRISPR. I’m sure most of the audience is familiar with CRISPR. For those that aren’t, it’s basically an RNA-guided molecular system that scientists found in bacteria, I think, which use it to recognise and clip out bits of DNA from the viruses that attack them.
[01:00:30] Cameron: Scientists were able to take that and manipulate it in a way that we can now use, and it’s being used more and more. In fact, the mRNA technology behind the vaccines developed for COVID a couple of years ago came out of a related wave of molecular biology. We’re able now to use a molecular engine to go in and manipulate DNA, [01:01:00] remove sections of it that we don’t like and replace them with different sections, which is fascinating. That’s not quite nanotechnology, but it is part of that realm.
[01:01:09] Steve: I agree with you.
[01:01:10] Steve: It’s part of that realm, because what we’re doing is organizing things so that they change their shape.
[01:01:16] Cameron: And it's literally biomimicry, because we're using something that exists in nature to do what we want it to do.
[01:01:23] Steve: That's where it all ends up. It's a form of biomimicry. But the idea with CRISPR is that you can change someone's…
[01:01:31] Steve: eye color or hair color or how tall they are. And you can reverse aging and do all sorts of stuff. They're already doing it, in the short run, to remove recessive genes that cause disease. There's a whole lot of ethical questions, but they say that you could pretty much change someone to be whatever they want to be.
[01:01:52] Steve: Theoretically, you can even change people's disposition, which I find [01:02:00] interesting. I don't know how you actually do it to a living person. I'm not really sure how it actually works, but I know that they're using it. I should look into that. It's insane tech. But I think that's part of the shift, the change in the physical world. And we have to remember that some of the things we have today are insane, even old industrial technologies. A few hundred years ago, a thousand years ago, people would just think you're the devil.
[01:02:27] Steve: Just to even imagine telling someone a thousand years ago: I come from the future, let me tell you a bit about the future. We have these things called airplanes. They go, what are they? They're like giant birds that are made out of metal. And they'll say, what's metal? Metal is this stuff that you can dig up in the ground.
[01:02:40] Steve: And what do you do with it? We melt it. How do you melt it? Oh, we get…
[01:02:43] Cameron: They knew what metal was a thousand years ago, man. What are you talking about? They had suits of armor and swords.
[01:02:49] Steve: I know, I'm exaggerating. But we make giant planes out of metal, right? And we have these…
[01:02:52] Cameron: And people climb in them. Like people climbing into a Trojan horse.
[01:02:57] Steve: Like a Trojan horse. Now you're getting it. And we go [01:03:00] right across to, to China, the Silk Road, right? We're gonna go across the Silk Road. How long? Three hours. What do you do when you're up there?
[01:03:08] Steve: We watch movies. What are movies? Okay, movies are like these things that you have in the theater, but on a screen. What's a screen? And then we drink beer in cans, and it's air conditioned. What's air conditioning? It's like a breeze on a cool day. It's just, it's insane already.
[01:03:21] Steve: Yeah. And then imagine telling someone, someone in China while you're in Rome, on the Silk Road: we just have these little devices that we hold in our hand and you can just talk.
[01:03:28] Cameron: You don't need to go back a thousand years. Like I've told you this story before, but when I moved to Brisbane 15 years ago, I gave a talk at the Woodford Folk Festival about where I thought the technology was going.
[01:03:41] Cameron: And I said, imagine if you went back in time 15 years and tried to explain an iPhone to somebody in 1987. Okay, they knew what a telephone was, even a mobile telephone. But this one has no wires. Really? How does that work? A lot of the time it [01:04:00] uses wifi. What's wifi? It's wireless internet.
[01:04:03] Cameron: What's the internet? Even a couple of decades ago, trying to explain an iPhone or Skype or Twitter or YouTube would have sounded like science fiction.
[01:04:15] Steve: It's true. It's true. They sounded like science fiction, but they were plausible, and the trajectory was clear. What I'm talking about now is something that is beyond any trajectory. We're talking about molecular construction of the universe, of the things that you want, when you want them. It's the reason I tried to go back, unsuccessfully, to before there was metal. How long before metal? How many years are we talking? Ten thousand? A hundred thousand?
[01:04:42] Cameron: A hundred thousand years. Stone Age, man. You have to go back past the Iron Age to the Stone Age.
[01:04:46] Steve: Or the Iron Age, right? Stone Age people. We went back there. Some of the stuff we're talking about is hard to conceptualize on our current technological trajectory.[01:05:00]
[01:05:00] Steve: But if we come back to this ASI argument, these are some of the benefits that people like Gates and everyone else are talking about. Despite the risks over here, you've got this: wait a minute, this ASI could help us uncover this production and manufacturing, this building-the-physical-world mentality.
[01:05:19] Steve: And I think that’s within the decade.
[01:05:21] Cameron: And of course, that ties into the conversation that we've been having, that everyone's having, about the impact that AI is going to have on jobs, that it's going to take away everybody's jobs. And we debated, well, we didn't debate, you ranted about the evils of a universal basic income last week.
[01:05:36] Cameron: But of course, one of the promises of having a nanofabricator in your house is, well, you don't need to buy stuff anymore. And once I have a nanofabricator, I can use it to make you a nanofabricator, and you can use it to make your neighbor a nanofabricator. Then I don't need to buy an iPhone; I can just download the plans, and there may still be [01:06:00] a fee for downloading the plans, but I can build it just using the elements available to me in my house.
[01:06:05] Cameron: If I have the molecular blueprint for it, I can build it. I can build food. It's like Star Trek: you dial up what kind of meal you want, a roast chicken dinner, because it's just molecules at the end of the day. But speaking of forecasting the future, in the last week I reread Do Androids Dream of Electric Sheep? for the first time in 20-odd years.
[01:06:28] Cameron: And I re-watched Blade Runner for about the 10th time in the last 20 years. One of the things that tickled me was when Deckard is trying to zoom in on a photo. It's set in 2019 in Los Angeles. He's all, enlarge T 16 to W 54. He doesn't just reach in with his fingers.
[01:06:51] Steve: You’re right.
[01:06:52] Steve: Yeah. You even had to know the X and Y axes to do that. A baby does that now. And I find myself trying to do it on [01:07:00] physical pages sometimes.
[01:07:01] Cameron: And the computer, when he says enlarge, makes these annoying noises. Why would you want your computer to make such annoying noises? Imagine if every time you enlarged something on your iPhone, it beeped for 60 seconds. That would be fucking annoying.
[01:07:19] Steve: I'm going to call it Blade Runner Photo Enlarger.
[01:07:24] Cameron: Yeah, probably you would, man. I would pay a buck for that just for fun. Yeah.
[01:07:30] Steve: A mate of mine, Scott Hill Martin. We talk each day, and we have an idea every day. We generally call it the Ideas Division. We're going to build a website.
[01:07:39] Steve: I should get AgentGPT to build a website called the Ideas Division, and then I could just sit on my phone, sending this one to the Ideas Division. Ideas for free. Ideas are the easy bit, and we can just hand them out, because I have a million of those every day.
[01:07:51] Cameron: Actually, that reminds me of a character from another book I just re-read recently.
[01:07:56] Cameron: You ever read Accelerando?
[01:07:59] Steve: You [01:08:00] have mentioned it to me twice.
[01:08:00] Cameron: The main character in the first chapter of that book, Manfred, that's what he does for a living: he just creates ideas for free and gives them away. And people pay him. He gets, what do you call it, a stipend from different corporations, just so he doesn't have to work for a living.
[01:08:23] Cameron: So he just comes up with ideas and shares them open source, and people pick them up and run with them. That's his business model: coming up with ideas and giving them away for free. He creates the ideas and has his AI patent them, so no one else can claim them.
[01:08:41] Cameron: But then he gives them away for free, to whoever wants to use them.
[01:08:44] Steve: You could just put it on the blockchain, and then you'd know when it happened. Yeah, you could do that with the Ideas Division thing. It's one of my projects I never get around to. I don't know if you've got any of those, Cam.
[01:08:58] Cameron: Only a couple, Steve. Only a couple. [01:09:00] Let’s finish with the futurist forecast, Steve. And I’m going to suggest something based TikToks. that I enjoyed this week. You were going on about the end of cash as if it was a bad thing. I disagree with you. I don’t think it’s a bad thing at all, but let’s have that discussion.
[01:09:19] Steve: I think it's a bad thing, and I think it's inevitable. It doesn't matter what I think; it's gone. It's already getting close to being dead. Just on that, I did a TikTok, and by the way, you should totally follow Cameron Reilly on TikTok and Steve Sammartino on TikTok, because they're the two greatest TikTokkers.
[01:09:37] Cameron: And follow The Futuristic on TikTok. We have a Futuristic TikTok now, for the Futuristic podcast.
[01:09:47] Steve: And there's no tickies for that yet. I'll get to it this weekend, a couple of little promotional tickies. By the way, the TikTok that I did on the end of cash, I talked about surveillance, the economy, social [01:10:00] credits and whatever.
[01:10:01] Steve: Conspiracy theory territory. It was the poorest performing TikTok I've had in 12 months in terms of views. It had a really high like ratio, a 5% like ratio, but only 300 views. The one I did before that had 30,000 views. Doesn't seem to make a lot of sense.
[01:10:16] Cameron: TikTok numbers don’t make any sense to me.
[01:10:18] Cameron: They’re all over the place. Yeah.
[01:10:19] Steve: It's conspiracy theory, but basically, in Australia, cash purchases have gone down to 16% of all purchases. Before the pandemic, cash was 32%, which was already relatively low.
[01:10:35] Cameron: Does that include buying weed from a guy in a park?
[01:10:40] Steve: Does not include the dark economy.
[01:10:45] Cameron: In Italy. It’s about the only time I use cash to buy anything is when I’m buying weed.
[01:10:50] Steve: There you go. As a non-drug guy, I don't really use cash for anything, other than the lady who cuts my hair, who demands cash. She's of European descent. I'm allowed to say that, being of European descent myself.[01:11:00]
[01:11:01] Steve: And, yeah, cash has gone down dramatically: it's 16% now, was 32% pre-pandemic. In Australia, there were laws proposed about three or four years ago where the maximum purchase you could make with cash would be $10,000. In other parts of Europe it's €3,000. In Greece it's €500, and in one of the Nordic countries it's about $1,200.
[01:11:21] Steve: I can't remember which one, but governments are going to move very quickly to remove all cash from the economy, and the benefit of that is that they can track all forms of expenditure, and there won't be tax avoidance. I can't avoid tax with anything I do anyway, because I don't have any cash income, but a lot of tradespeople and businesses do have cash income.
[01:11:43] Steve: I just worry that the government is increasing its layers of control. And where we'll go after this, Cam, the more concerning thing, is programmable currency, which has some real upsides. In the future we will get currency that won't be [01:12:00] fungible. You won't be able to do anything you want with it.
[01:12:03] Steve: There'll be different forms of cash. You'll have open cash and closed cash. Open cash will be money that you can spend on whatever you want, but then you'll have certain transfer payments that will only be allowed to be spent on certain things. If you have an allowance for your children, it can only be spent on transport, food, books, education, clothing, these types of things.
[01:12:25] Steve: And that could be great, because you might have a gambling parent who spends money that should go to the kids' education. There'll be real upsides, but then you can get a really draconian government that tells you how and where to spend money, and then also judges you based on what you spend your money on.
[01:12:41] Steve: So I feel like we're becoming less free in capitalist economies as digital technology increases the option of surveillance, and governments will take that opportunity because it's so beneficial to them.
[01:12:55] Cameron: And you say that like it’s a bad thing.
[01:12:58] Steve: Yeah. Yeah, I [01:13:00] do. I do. Because I think that… I sound like an American.
[01:13:04] Cameron: You think tax cheats are a good thing?
[01:13:06] Steve: No, that's not a good thing. I think that…
[01:13:12] Cameron: So what's the downside of governments stopping us from cheating on our taxes? No, there's no downside on that. So what's the downside of getting rid of cash?
[01:13:23] Steve: The downside of getting rid of cash is that I think it leads to money that gets programmed, and at the least to fewer freedoms, and then you get a creep.
[01:13:32] Steve: I think you get a creeping fog, where it ends up in other areas where it shouldn't.
[01:13:37] Cameron: Yeah, but that's a pretty vague argument. That sounds to me like the people during the first couple of years of COVID who said telling people that they can't go into a restaurant without being vaccinated is government creep.
[01:13:50] Cameron: And before you know it, the government's gonna tell us what we can do and what we can't do. And I was like, hello, governments have been telling us what we can and can't do since we've had governments. Like, what the fuck [01:14:00] are you smoking? That is the point of governments. That's why we have governments: to tell people what they can and can't do.
[01:14:07] Cameron: So we can all live together in some sort of civilization. Vague statements about creep, I don’t find convincing.
[01:14:17] Steve: Yeah, convincing or compelling. I just think it's dangerous that everything we do will be tracked. I just think it's dangerous.
[01:14:27] Cameron: But why? Give me an example of the danger.
[01:14:34] Cameron: This is why people don’t invite me to dinner parties, by the way.
[01:14:43] Steve: Well, an example would be during COVID, when we were getting payments. It's, we're sending you this money, but you can only spend it on X or Y. You don't get [01:15:00] the freedom of choice. And I think that humans should be able to make bad choices.
[01:15:03] Cameron: I agree. In theory, humans should be able to make bad choices as long as the implications of those bad choices are contained to themselves.
[01:15:13] Steve: Exactly. Yes, exactly.
[01:15:13] Cameron: And there was a lot of rorting that went on here and in the United States, and probably in all countries, with the money-printing machines that ran around COVID. We know a lot of it was not necessary. A lot of it was wasted. A lot of it went to companies that didn't really need it.
[01:15:33] Cameron: A lot of it went to individuals that probably didn't really need it. And I do think that there are certain benefits from being able to track how people behave. When we were debating this on Threads, you said something about social credits. I'm actually a big fan of social credits, and I have been for a long time.
[01:15:53] Steve: I don't like social credits. They exist in our society metaphorically already. We [01:16:00] already operate around those to an extent. But if there was a whole lot of social credits out there and everything that you do was tracked, I think everyone has got some stuff in their closet, and I think you need to be able to make mistakes and repair your ways in private.
[01:16:23] Steve: I really do.
[01:16:27] Cameron: Yeah, I can see an argument for that.
[01:16:28] Steve: But I think about when you're a teenager. It's like your permanent record, that kind of thing. Think about some of the things you did as a teenager. You go, God, I was a stupid teenager. Do you know what I mean?
[01:16:37] Cameron: The flip side is I wrote a book a few years ago about psychopaths.
[01:16:41] Cameron: I've known psychopaths who have just left a trail of damage behind them, but they're very good at covering up the damage that they've done and moving on to their next victims. The successful ones are very convincing; even if people get [01:17:00] hints of some of the damage, they're very good at explaining it away.
[01:17:03] Cameron: That wasn't me. That wasn't my fault. That was this guy or that guy. And don't believe everything you hear, mate. I've got enemies because I'm such a great guy, and all that kind of stuff. But there was this book, one of Cory Doctorow's books, Down and Out in the Magic Kingdom. It might have been his first book; I read it many years ago.
[01:17:21] Cameron: He introduced the idea of what he called Whuffie, which was a social credit system. There was a book that came out a few years later called The Whuffie Factor that I read and enjoyed. This was back in dot-com days. The idea is that if you make positive contributions to society, online and offline, that is reflected in your Whuffie score.
[01:17:45] Cameron: And people look at that and go, you're obviously a productive member of society. You treat people well. Conversely, maybe you're one of these people on Twitter or Facebook or IMDB reviews or Amazon reviews or whatever [01:18:00] who just wants to talk shit constantly. Or one of the people during COVID, and I'm thinking of somebody that you and I both know well, that you used to do a podcast with, and that I've known for decades. I'm not going to mention their name on the podcast.
[01:18:16] Cameron: We can talk about it off air. But a friend of ours, who I've known for decades and respected as an intelligent person and a productive businessman, just propagated the most insane and heinous bullshit during COVID. I was surprised. I lost all respect for that person, and people like that, during COVID.
[01:18:45] Cameron: Now, I allowed a certain amount of ignorance during that period, when people were spouting just the craziest conspiracy theories. My rule was: I will engage them to try and have a [01:19:00] rational, sensible conversation if they are amenable to reviewing their position. And I do know some people who did this.
[01:19:06] Cameron: Once I challenged them on their statements, they were like, actually, you know what? You're right. I haven't thought that through. I shouldn't have listened to that person who told me that thing. I should have done some more research. I'll go do that now. Then they come back and go, you know what? You're right.
[01:19:19] Cameron: That argument didn’t actually hold up when I investigated it. Those people, fine. Happy for you to make a mistake and think about it and, come back. The people who just said, fuck you. Don’t you dare tell me what I should think. You’re part of the problem. Don’t tell me that I’m wrong. You don’t know me.
[01:19:35] Cameron: You don't know my situation. And they stuck to their guns. I just lost all respect for them. Now, most people have probably forgotten that those people are batshit crazy, and they're still engaging them in business and in social circumstances. I think there should be consequences for being batshit crazy and causing [01:20:00] probably genuine harm in society, by telling people not to get vaccinated, or that the government's trying to infringe upon our freedoms by suggesting that public vaccination is probably a good thing for society. Or people who just, as I said before, get online and post stupid shit for shits and giggles, or just because they're bored or they've got nothing else to do. I do think there should be consequences.
[01:20:25] Cameron: I do think people who try and conduct themselves as good members of society should be rewarded, and people who don't should be treated differently and should have a black mark against their name. They should have the opportunity, as you say, to repair that. But I do think we need to do a better job of managing malcontents.
[01:20:49] Cameron: That’s true.
[01:20:49] Steve: One of the challenges, of course, let me
[01:20:51] Cameron: say, let me, sorry, let me interrupt for a second to say that I’m saying this fully aware. That I disagree with the mainstream population on [01:21:00] 99. 9% of issues. So if we had this, I know I would have the darkest fucking score out there. I would be treated as it would hurt me,
[01:21:10] Steve: but I do.
[01:21:11] Steve: Be very careful with what I say publicly. I’m very careful. I’m just so glad that there was no smartphones around when I was a teenager because I was a bit naughty, there’s a whole lot of things that would, you might not have got half of the opportunities that I have now because of those things.
[01:21:23] Steve: That's the point that I'm making, right?
[01:21:26] Cameron: We've all done stupid shit, particularly as young people, as teenagers. Everyone knows that.
[01:21:31] Steve: It's not just a scoreboard on stupid shit, it's a scoreboard on everything you do. The other thing that we need to remember, given the weighting toward negative news and fear and all of these things that bubble up really well in algorithms, is that I'm not sure the scoreboard would be done in a manner that is fair and allows people to make reparations.
[01:21:54] Cameron: If it's the governments that we vote for that are creating the rules, the regulations, the [01:22:00] legislation around this, then, if you believe in democracy, we'd have the opportunity to have some say in that. Yeah, I'm skeptical and cynical as well about the state of democracy, but they're doing that anyway. We do have laws and rules that are being written by our governments anyway.
[01:22:18] Cameron: I just think we should be holding people to account. And by the way, politicians, your Gladys Berejiklians and your Scott Morrisons and the Robodebt guys, all of those people should be held to account as well. The politicians are the worst. They get away with it, do corruption, and end up as a director of Optus a month later. I love that. I love that.
[01:22:40] Steve: Yes. That's one of the saving graces. And this is where you've got to make sure that the same rules apply to all. Exactly.
[01:22:48] Cameron: Which doesn't happen. I know it doesn't happen now, because it's too easy for these people. It's more easy for people in positions of power, psychopaths in all parts of society, [01:23:00] not just politics.
[01:23:00] Steve: And because there's potential downside for the politicians, we can be sure that programmable currency is not coming anytime soon.
[01:23:06] Cameron: I don't know if you remember this, but I've been arguing for decades that politicians should have to sit lie detector tests on an annual basis, performed by people that are qualified. Now, lie detector tests aren't based on a lot of great science, but they still get used by law enforcement.
[01:23:28] Cameron: So I figure, if it's good enough for law enforcement, it's good enough for politicians. You have to sit there, and they say, so Steve, you said that you believed in such and such a policy and you did your best to enact it. Did you really? Or were there backroom dealings, where you said one thing publicly but privately you actually did something else?
[01:23:49] Cameron: And we see how you score on the lie detector test, and we publish it publicly. Every year, or every couple of years around election time, we publish your scores. Here's how [01:24:00] you performed on the recent lie detector test. You failed on these ones. Transparency.
[01:24:05] Steve: Yeah.
[01:24:06] Steve: Ties back to that radical transparency thing. I had this idea once that I wrote about in Anthill magazine many years ago. James Tuckerman.
[01:24:13] Cameron: Shout out to James.
[01:24:16] Steve: Shout out to James. And I said, imagine if you had a company where everyone's salary was publicly available. That's really interesting, right? You can see how hard someone works and what they get paid. It would probably make everyone super uncomfortable, but it's an interesting idea.
[01:24:31] Cameron: Yeah. Yeah. I think transparency is a good thing.
[01:24:33] Steve: Maybe not transparency everywhere, because I like to shut the door and get weird late on Friday nights, Cameron.
[01:24:37] Cameron: Oh, I know you do, Steve. I've seen the videos. And that is the end of Futuristic episode eight, cuz I need to go to kung fu. Thank you, buddy. It's always fun to chat. So much fun, man. I love it.
[01:24:51] Cameron: Thanks again.
[01:24:55] Cameron: Follow us on Twitter, Threads, TikTok,[01:25:00] Facebook. Look for us in all good bookstores, and look for my upcoming album on Spotify. I'm kidding. I don't have an album coming out on Spotify.
[01:25:10] Steve: I want to hear you sing. I so want to hear that.
[01:25:13] Cameron: Next week, I promise.
[01:25:13] Steve: You better.