On this episode we talk about how GPT has a model of the world; ChatGPT Voice, Vision, and DALL-E 3 integration; Meta releases AI personas and assistants; Hollywood writers strike an AI deal; Now Everyone's an Expert; Neuralink in your brain; The AI Oligopoly.
[00:00:00] Cameron: Welcome back to the Futuristic Episode 15, recording this on the 20th of October, 2023.
[00:00:47] Cameron: Steve O, how are you, mate?
[00:00:49] Steve: I’m going really well. I’ve had a few wins
[00:00:51] Steve: this week. I need
[00:00:53] Steve: those. I need those. They’re always financial at this age. Once you’ve got grey hair, nearly all wins are financial or health [00:01:00] related. They’re the two things. It’s like, have I got enough money to live on when I die? And are my arms and legs and knees breaking or not?
[00:01:06] Steve: That’s basically it. If you can get through those two things, then you’re
[00:01:09] Steve: fine.
[00:01:09] Cameron: You must not be relying on the markets then, because it's been a shit show of late.
[00:01:15] Steve: You ready? Let me give you, let me give you my financial advice on any market and this never, this, this never changes and always remains true. I don’t care. I don’t have to. Real investors don’t care what the market does because they make investments that are based on long term cash flows, which are irrelevant of the whims of Mr.
[00:01:32] Steve: Market. Or Ms Market, or whichever fucking pronoun you want to
[00:01:37] Steve: use in 2023. But
[00:01:40] Cameron: Yeah, I keep telling our, uh, subscribers that on QAV, ignore Mr. Market, just keep playing by the rules. But, you know, it's one thing for people to know that intellectually. I think it's another thing for them to see their portfolio go down by 25%.
[00:01:56] Steve: This is the point, right? The only reason you need to worry about your portfolio going down by [00:02:00] 20 percent is if you're getting ready for an exit
[00:02:04] Steve: or you shouldn’t have invested. Because I don’t look,
[00:02:07] Cameron: Hmm,
[00:02:08] Steve: I never
[00:02:09] Steve: look.
[00:02:09] Cameron: You don't have to though, you're not an active trader, right? You're just in ETFs, or an ETF. Yeah, we're active traders, so we need to monitor it to an extent, because we have rules about when to sell and when to buy. But, you know, for me now, it's an, uh, automatic discipline. I don't really care. I know that it's going down.
[00:02:29] Cameron: I know it’ll turn around, it’ll come back up. So, uh, kind of emotionally divorced from the whole thing.
[00:02:35] Cameron: But, uh, I think some people find that
[00:02:38] Cameron: difficult.
[00:02:39] Steve: Okay, so I met one of your fans on Wednesday night from
[00:02:42] Steve: QAV.
[00:02:43] Cameron: Oh yeah, who was
[00:02:44] Cameron: that?
[00:02:44] Steve: It’s Ben Quirisa.
[00:02:49] Cameron: Don’t know, Ben. I don’t
[00:02:50] Cameron: think,
[00:02:51] Steve: He listens to it avidly. As a young investor, and he came up to me and he said, uh,
[00:02:58] Steve: I’m like, where do I know [00:03:00] this guy from? And it was QAV. He said, you’re on QAV. I said, yeah, it was a while ago with, with Cam and it’s Tony, isn’t it?
[00:03:08] Cameron: Yeah. The Sammartino method. That's it. You're the man. Tony mentions you
[00:03:13] Steve: That method was way before. That was on G'day World.
[00:03:16] Cameron: Yeah. But you also talked about
[00:03:17] Cameron: it on QAV
[00:03:18] Steve: Oh, did I?
[00:03:18] Steve: Okay.
[00:03:19] Cameron: Yeah. Yeah. Tony mentions that all the time. It's a good starting point for investors, you know.
[00:03:23] Steve: it is. It is. Look, I, um,
[00:03:26] Steve: look, no doubt. I’ve never really done trading. I just, I haven’t got the stomach for it.
[00:03:32] Cameron: Well, speaking of having the stomach for it. The future is coming, Steve. Uh, I got, and I know you got, 'cause I told you to go get it: GPT with Vision, Voice, and DALL-E 3 in the last week. We didn't have a chance to record last week, but I've been playing with them for a good week. Uh, don't know about you, man, but it's.
[00:03:55] Cameron: On one hand, it’s blowing my mind. On the other hand, I feel like I really haven’t [00:04:00] had the opportunity to dig into it. I haven’t taken enough photos and stuck it into it to get visual help, but I have been using the voice component of it a lot. Particularly in the car with Fox, in the car, like if we’re on a drive somewhere, often to Kung Fu or something like that.
[00:04:20] Cameron: The other day he had a question about the Ice Age. He’s been learning about the Ice Age at school. So we just have this conversation with GPT in the car about Ice Age and how much of Australia was covered in ice and what some of the flora and fauna was like. And just having this conversation with an AI as you’re driving around in your car.
[00:04:38] Cameron: Uh, it's been mind-blowing. I'm still kind of, I feel like we're living in a science fiction movie every day.
[00:04:47] Steve: It really has arrived. It's one of those moments. It really is an arrival moment, a magic moment. I always loved the idea that when you have a tech revolution, it feels indistinguishable from magic. Right. [00:05:00] And magic is when something that you think either shouldn't be possible, or just involved heavy labor, is now just automated.
[00:05:08] Steve: That's that moment of magic. And we're deep in that magic moment. I haven't played with it as much as I should have, but I'm even astounded. And I've done quite a few demos this week, because very few people, I don't know if it's that they don't have access or they just haven't tried it, very few people have seen the ability to analyze an image.
[00:05:28] Steve: And I, um, I've done some image generation, but I've really been interested in analyzing images, because I think it becomes really interesting to see what its first opening gambit is. Right. And I just did my blog post, which will go out in 20 minutes. Um, the opening gambit of an image description is really generic, but if you push it, it'll get more specific.
[00:05:55] Steve: So I did one of me, uh, on a plane [00:06:00] this week. I flew to Sydney, and I did one of me landing in Melbourne, out the window, uh, you could see a bit of the wing and a bit of where I was. And it comes back with a generic answer: this is someone flying in a plane, they seem to be in a seat looking out the window. It just kind of was pretty generic. And I said, tell me where it is, and it said, it looks like it could be Melbourne, because of the ports and the shape of the Yarra River, which has the port, which is next to the city.
[00:06:28] Steve: And, um, and the weather feels conducive. And I said, tell me about the plane. And it says, I can't really tell what the plane is, but it looks like it's Qantas. And it could be a Boeing 737 or a 787, or an Airbus A320 or Airbus A330, all of which it operates. And I was disappointed, because it should know that it's a 737.
[00:06:52] Steve: I said, have a look at the wingtip and tell me which one it's most like. It said, that wingtip looks like it's from a Boeing 737 [00:07:00] because it's a single engine plane, it's the only one that has a wingtip that's singular, that goes up, that's long enough to paint a logo on; looks like a Qantas plane. And so then I was like, see, you did know. It's like when you ask a kid and you say, hey, you know the answer to this, you've got to push it.
[00:07:14] Steve: And sometimes you’ve got to push it maybe to different parameters within its database to actually know about it. And so I pushed it three times and got three different types of answers. It started to make me think, well, what if, what if I saw an image of something as a starting point, as a fulcrum, and then said, well, okay, if Melbourne is built on a port, tell me about its industrial history of those ports.
[00:07:34] Steve: Or drill down, or what are some photos that might have been taken within this geography if you used, uh, no, I really like this idea of geolocating.
[00:07:42] Steve: And it can tell you about that area and sort of peel back the layers of the onion. So that was one thing that I kind of got into the wormhole of this week.
[00:07:50] Steve: And then I kept doing that. I took one of the cafe and it said, this looks like a local cafe, the barista looks happy, he's doing this. And then I said, well, tell me about the machine. And then, tell me about the type of beans. [00:08:00] And it actually has those. But it looks at the photo generally, and then if you ask it to focus on a specific part of that picture, it starts to take you on a different story trajectory. And I thought that was super interesting.
[00:08:13] Cameron: You know, you just reminded me of a
[00:08:14] Cameron: conversation I had with a vice president of Microsoft, probably circa 98, 99, and I remember telling him that I had this, uh, dream of being able to travel around. Europe, for example, and hold my mobile phone up to a statue or a building in the forum in Rome or something like that, and take a photo of it on my phone and say, tell me more about this sculpture, this painting, this relic, this building.
[00:08:51] Cameron: What is it? Where am I? Why was it important? And have it tap into a database and be able to tell me everything about it. I said, that should [00:09:00] be possible one day. And he looked at me like I was crazy. And I think we're there now, right? You'll be able to do that with GPT. You'll be able to take a photo of a thing on your travels, and it'll tell you about the history of the place, and it'll drill down as much as you want.
[00:09:13] Cameron: We finally, it's taken us 25 years, but we've hit that point. I've tested it a bit. I uploaded a photo to it. I took a photo out of an art book I had sitting on my coffee table, a Brett Whiteley art book. I took a photo of one of Whiteley's Sydney Harbour paintings, and I said, can you guess the artist?
[00:09:34] Cameron: And it came back first of all and said it thought it was Salvador Dali because a lot of the figures were sort of, you know, warped and manipulated and, you know, dilated and that kind of stuff. And I said, no, guess again. And didn’t tell it anything more. And its second guess was Brett Whiteley. So it took it two guesses to guess that it was a Brett Whiteley [00:10:00] painting.
[00:10:00] Cameron: And look, uh, you know, I’m still of the view that this is a. Early beta tool that we’re playing with here, you know, the fact that it can do any of this
[00:10:09] Cameron: shit amazes the hell out of me. The fact that I’m having conversations with a computer that’s talking to me in English language and it’s saying ah and um and sighing and taking a deep breath and it’s just incredible.
[00:10:25] Cameron: I’m not, I see people
[00:10:26] Cameron: complaining that it’s not perfect, giving perfect answers and I’m like, are
[00:10:30] Cameron: you fucking
[00:10:31] Steve: Louis C.K. It's Louis C.K.
[00:10:33] Cameron: Yes,
[00:10:34] Steve: Everyone's on a plane and complaining that, you know, the seats don't go back far, it's a little bit constricting, the food wasn't that great. You're flying through the sky at 800 kilometers an hour, 30,000 feet above the ground. You're watching a movie and sipping an imported wine, whinging in air-conditioned comfort? Give me a break.
[00:10:53] Cameron: Yeah. Or you can,
[00:10:54] Steve: Look, you know when something’s amazing, as soon as people start to complain about the slight vagaries associated with something that is [00:11:00] pure
[00:11:00] Steve: magic?
[00:11:01] Cameron: Yeah, that's how I feel. I feel it's pure magic. Uh, alright, let's get into, um, one thing of note we did this week. Uh, I'll kick it off. Where was I? I was getting my shoulder put back in by my physio last week, after an accident at Kung Fu. And I was lying there with a heat pack on, waiting for her to crack it back into place.
[00:11:22] Cameron: And I was thinking about a process that I've had to do every week for the last few years, where I had a freelancer in New York who would process some share-related information for me. She'd have to look up some sell prices and some technical data on a couple of hundred stocks. Usually it would take her the weekend to do it for me.
[00:11:46] Cameron: I'd send her my list on a Friday night, and by Sunday night she'd get it back to me. It'd take her three or four hours to do. I was thinking, I wonder if I can automate that, uh, using a Python script, get GPT to build it for me. Came home [00:12:00] from my physio appointment. Within half an hour, I had the script written.
[00:12:06] Cameron: Uh, it took me a couple of hours to debug it and also get it to do a NASDAQ and a New York Stock Exchange version so I could cover three markets. But now it’s built. Now it basically takes half an hour, because there’s lots of stocks and it’s a little bit slow, but it takes half an hour to do what it took a human three or four hours to do.
[00:12:30] Cameron: And I can run it any time I want during the week. I don't have to ask somebody to do it and to fit it into a schedule and all that kind of stuff. Um, it just blows my mind. Every time I can get GPT to write me some code that will automate a process that used to be manual, I feel like a coder. I feel like a code god.
[00:12:57] Steve: You are. You're coding in English now.
[00:12:59] Cameron: Yeah, [00:13:00] I just tell it what I need in English and it does
[00:13:02] Cameron: it for me. It
[00:13:02] Steve: We’re all coders now. It’s
[00:13:03] Cameron: machine
[00:13:04] Cameron: speak.
[00:13:05] Steve: welcome, welcome to the new skills that you’ve got. PhD in every
[00:13:08] Steve: subject.
[00:13:09] Cameron: Yeah, wow. Yeah, with an assistant on your shoulder, Jiminy Cricket on your shoulder.
[00:13:14] Steve: Well, yeah, you really have. And, um, I haven't done as much of that as I want to. I think over the summer I'm going to be playing a lot with this. December, January are pretty quiet for me, but I think I'm going to really get into the wormhole of developing some technology, using the technology.
[00:13:30] Steve: Mostly I’ve been using it, um, just for a few little experiments and little hacks here and there. And automating what I do, but what I want to do is do things that I can’t do with it. Because everyone at the moment seems to be automating what they do because we’re so busy doing our jobs, right? That’s what we do, doing our work.
[00:13:48] Steve: But the really interesting stuff is when you go, well, what if I could use it to do this thing that I've never been able to do? And it sort of puts you on a new path. That's, that's
[00:13:56] Cameron: got any ideas for things you can’t do? [00:14:00]
[00:14:00] Steve: Mostly development. One of the things I want to try and do is build a script that can turn, uh, CAD into code, and that's for the 3D printing stuff. Because what we're doing, we want to do the world's first building built entirely from AI.
[00:14:14] Steve: And we call it project C5, which is command
[00:14:18] Steve: to, I think I told you this, didn’t I? Yeah.
[00:14:20] Cameron: this
[00:14:20] Cameron: last time. Yeah.
[00:14:21] Steve: Yeah.
[00:14:21] Cameron: I added a few more C’s to it
[00:14:23] Cameron: from memory.
[00:14:23] Steve: Yeah. Yeah.
[00:14:24] Steve: There's a few more now, as many C's as possible. Um, but some of the stuff that we want to do, the tools aren't there, and we're like, well, we'll just use the AI to build the thing that bridges that gap.
[00:14:33] Steve: So a lot of bridging stuff, um, I want to do. And I want to automate, um, creating some content around what I do, so that it automatically spits out stuff that I want in the way that I do it. So there's quite a few voice-to-video tools, but they come out really terrible and contrived. And I think I could essentially just tap in and write some code, then plunk that down and get it to learn from the type of images [00:15:00] that I use, where it can feed in from my database.
[00:15:02] Steve: And I'll give it a link to my Dropbox file so it can learn from that. So that's what I want to try and do over the summer: actually use some of the APIs, or even develop code, and then get it to feed from my own personal database and give it access, so that it starts to sound a bit more like me
[00:15:19] Cameron: Fantastic.
[00:15:19] Steve: output that I want.
[00:15:21] Cameron: You'll have to let us know how that goes. I watched a bunch of interviews over the last couple of weeks with Ilya Sutskever, the Chief Scientist at OpenAI, talking about how they got to where they are today and his forecast for where they go in the future. He's a really interesting guy. I think he's going to be remembered
[00:15:44] Cameron: in the future as, um, I don't know, as important as Bill Gates, Steve Jobs, who knows, uh, Oppenheimer maybe, yeah. Uh, but he's saying he thinks [00:16:00] there's a lot of runway left for the current model of LLMs to get us closer to AGI. Uh, he said we will eventually need a new model, but there's a lot of space left for improvements with the current models they're using, the current transformer models. Although I do note this week that OpenAI has announced they've, I think, shut down their latest development project, Arrakis, which was to find a newer, cheaper, faster, more efficient model, and was being funded by Microsoft. Apparently it wasn't as efficient as they hoped it would be, so they've killed that off. But, uh, fascinating guy to listen to. I don't know, have you watched any interviews with Ilya?
[00:16:51] Steve: I’ll have to tune into some of those. I’ve seen a heap with Sam Altman, but, um, and some of the other developers, there was a couple
[00:16:58] Steve: going around with some of the team, but I’ve never [00:17:00] seen any with him.
[00:17:01] Cameron: Yeah, there’s the
[00:17:02] Cameron: CTO, she’s done a few
[00:17:03] Steve: Yeah, that’s the one that I’ve seen. I forget her name.
[00:17:06] Cameron: Ilya, who's the Chief Scientist, his story is fascinating too. I think he's Russian by birth, and he was in Israel, and, um, when he was a teenager, I think like in grade eight, he went to the Open University, because he was obviously doing well at school. And then he ended up in Toronto working for Geoffrey Hinton. And then he ends up at Google DeepMind. And he said, you know, in AI research, most of the breakthroughs have been done by universities, in academic settings, and they usually had very small teams with very small budgets, working away and stuff.
[00:17:51] Cameron: And he had this dream about one day being able to build an AI company that had a lot of money and a lot of resources, and you could build a big team and you [00:18:00] could really throw a lot at it. But he thought, uh, you know, it was sort of a daydream. I’m never going to be able to do that because I know nothing about running a business or raising capital or any of that kind of stuff.
[00:18:10] Cameron: And then one day he got a cold email out of the blue inviting him to dinner with Elon Musk and Sam Altman.
[00:18:18] Steve: No
[00:18:18] Cameron: And so he went and met with them, and they said, we want to build an AI company and we want you to be the chief scientist. So it was like a crazy story. And he talks about what the original thing was that they were doing with OpenAI, which was looking at Amazon reviews and trying to figure out how to predict the next letter in an Amazon review. And then they were trying to figure out how to, uh, determine the sentiment of an Amazon review based on the words. That's how it started.
[00:18:54] Cameron: But he has this interesting quote that I was planning on [00:19:00] playing, but, um, I'll see if I can. Well, no, I won't play it. I'll just paraphrase it. He said something to the effect of: what people think is happening with ChatGPT is that it's just guessing the next word that needs to be said. He said, but it turns out that in order to do that successfully, it needs to develop a model of how the world works. It needs to understand how the world works, how people think, motivations, desires, intentions, all that kind of stuff. In order to be able to successfully predict the next word, it needs to understand the world. And so that's what it's doing, in his opinion: it understands the world. Not perfectly, it still makes mistakes, it's still not all the [00:20:00] way there, but that's what they ended up doing by basically teaching it to predict words. They, in fact, gave it, um, an understanding of a world model.
[00:20:11] Steve: How would they give it an understanding of the
[00:20:13] Steve: world? Because what do you do to do that? Cause I did hear some really interesting explanations of they just put data together and you, you ask it, um, what’s this? And it’ll think. It’ll say it’s an animal, and it narrows it down, it gets narrower and narrower and more accurate as time goes by, as the neural net just sees more and more examples of the same things.
[00:20:36] Cameron: yeah, well, look, I tell you what, bear with me. And I will find the actual quote, so I don’t have to, um, paraphrase it. Because it’s really, it’s really interesting, uh, what he has to say on this.
[00:20:50] Cameron: You know, there's a misunderstanding that ChatGPT is, uh, in itself just one giant [00:21:00] large language model. There's a system around it that's fairly complicated. Could you explain
[00:21:06] Cameron: briefly for the audience the fine-tuning of it, the reinforcement learning of it, the, you know, the various surrounding systems that allow you to keep it on rails and give it knowledge, and so on, so forth? Yeah, again. So the way to think about it is that when we train a large neural network to accurately predict the next word in lots of different texts from the internet, what we are doing is that we are learning a world model.
[00:21:45] Cameron: It looks like we are learning this. It may look on the surface that we are just learning statistical correlations in text, but it turns out that to just learn the statistical correlations in text, [00:22:00] to compress them really well, what the neural network learns is some representation of the process that produced the text.
[00:22:10] Cameron: This text is actually a projection of the world. There is a world out there and it has a projection on this text. And so what the neural network is learning is more and more aspects of the world, of people, of the human conditions, their, their, their hopes, dreams, and motivations, their interactions and the situations that we are in.
[00:22:35] Cameron: And the neural network learns a compressed, abstract, usable representation of that. This is what’s being learned from accurately predicting the next word. And furthermore, the more accurate you are at predicting the next word, the higher fidelity, the more resolution you get in this process. So that’s what the pre training stage does.[00:23:00]
[00:23:00] Cameron: But what this does not do is specify the desired behavior that we wish our neural network to exhibit. You see, a language model, what it really tries to do… is to answer the following question. If I had some random piece of text on the internet which starts with some prefix, some prompt, what will it complete to?
[00:23:28] Cameron: If you just randomly ended up on some text from the internet. But this is different from, well, I want to have an assistant which will be truthful, that will be helpful, that will follow certain rules and not violate them. That requires additional training. This is where the fine-tuning and the reinforcement learning from human teachers and other forms of AI assistance come in.
[00:23:52] Cameron: It's not just reinforcement learning from human teachers, it's also reinforcement learning from human and AI collaboration. Our teachers are working together with [00:24:00] an AI to teach our AI to behave. But here we are not teaching it new knowledge. This is not what's happening. We are communicating with it.
[00:24:11] Cameron: We are communicating to it what it is that we want it to be. And this process, this second stage, is also extremely important. The better we do the second stage, the more useful, the more reliable this neural network will be. So the second stage is extremely important too. In addition to the first stage of the learn everything, learn everything, learn as much as you can about the world from the projection of the world, which is text.
[00:24:40] Cameron: ChatGPT came out just a few months ago. So this is an interview, by the way, with Jensen Huang, the CEO of NVIDIA,
[00:24:48] Cameron: who makes the chips, obviously, that it runs on.
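The pre-training objective Sutskever describes can be made concrete with a deliberately tiny stand-in: a bigram counter that "trains" only on which word follows which. Even at this crude level, predicting the next word forces the model to absorb structure from the text it reads; the real models do the same job with neural networks at vastly higher fidelity, which is where the world-model claim comes from.

```python
# Toy illustration of next-word prediction: not how ChatGPT works
# internally, just the same training objective at its simplest.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for every word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

if __name__ == "__main__":
    model = train_bigram("the cat sat on the mat and the cat slept")
    # "cat" follows "the" twice in the corpus, "mat" only once.
    print(predict_next(model, "the"))  # → cat
```

The gap between this and a large language model is exactly Sutskever's point: counting pairs captures almost nothing, so to predict well across the whole internet, a network has to learn a compressed representation of whatever process produced the text.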
[00:24:52] Steve: Cam, that was great. It was such a better way to describe it, because language has to be an approximation of a [00:25:00] worldview. Because as a species, we're using language to describe what to do, what we did, where we are, what we think. So it can't not be an approximation. And I think that is a far better description than the really dismissive line that gets thrown around: it just predicts the next word. It's a far better way to describe what it does. And at the end, I found it interesting too, that he says it's not new thinking. It might have more power and be able to do more of what humans can do, at a higher fidelity, in a more powerful way, and more than an individual can do.
[00:25:32] Steve: It really reminds me a little bit of powered flight. It's an approximation of the way wings work, and even though we ended up with fixed-wing aircraft, it's different to nature. But it's an approximation of the way birds work and the way flight occurs, just in a different way, and powered flight is a little bit more powerful as well.
[00:25:53] Steve: I thought that was a really great way to describe it, because I don't think that language could be anything but a [00:26:00] worldview. And after I heard that, I know exactly what you meant, but before I heard it, I didn't really understand what you were saying about how it builds a worldview. It's almost as though that idea of predicting the next word is by the by, it's almost not what it does.
[00:26:16] Cameron: Well, I think that is what it does, but in order to do that successfully, as they've trained it, both in the pre-training stage and in the reinforcement stage, it needs to develop somehow, as an emergent property, an accurate model of the world.
[00:26:38] Cameron: Otherwise it can't predict the next word correctly. It needs to understand how people think, how people act, how the world works, in order to be able to predict that word successfully. Anyway, it's really fascinating stuff. I've listened to hours and hours and hours of Ilya talking over the last couple of weeks.
[00:26:58] Cameron: I find him a really [00:27:00] fascinating guy, an incredible brain. And, um, you know, definitely somebody who's going to be, for better or for worse, shaping the world that we live in over the next few years. Just want to talk about, uh, ChatGPT Voice, Vision, and DALL-E 3 a little bit before we move on. Um, I've played around with DALL-E 3 a lot, the image generator in it.
[00:27:23] Cameron: One of the things that it can do that I've struggled with in Midjourney and Stable Diffusion is hands. It can do hands, no problem. That's a big breakthrough. Initially when I played with it, it could do words pretty effectively too, but more recently I've really found that it's struggling with words again, for some reason.
[00:27:44] Steve: What do you mean by struggling with words?
[00:27:46] Cameron: If you ask it to create an image with certain text in the image, it cannot get it right.
[00:27:51] Steve: Oh yes, yeah. It's really bad at that. It's still really bad. It's even bad if you write the word in [00:28:00] the image of what you want. So I'll give you an example. For my keynotes, I wanted to get like a new header for my name, for when I go up there: Steve Sammartino. And the prompt was something like, um, put the header "Steve Sammartino" in neon pink with some futuristic-looking stuff in the background, and make it look cool. Um, and it, uh, it couldn't spell my name right on some of them,
[00:28:24] Cameron: Right.
[00:28:25] Steve: It would spell it wrong.
[00:28:26] Cameron: Yeah. Yeah. It tends to.
[00:28:28] Steve: I wrote it there. I'm like, this word, this, here it is.
[00:28:31] Cameron: And even if you ask it, what's the text I asked you to put in? It'll tell you the correct text in the chat box, but then it can't do it in the image. And often with me, if I put a sentence in, like a phrase or something like that, it'll get most of the words right,
[00:28:46] Cameron: but one or two will just be misspelled. And I'll say, do it again, do it again, do it again. What's the text? Yep, that's the text. Do it again. Can't get it right. Anyway, have you seen the video of the guy doing the [00:29:00] obvious thing, which I hadn't done, but he did? Getting GPT Voice on two phones to have a conversation with itself?
[00:29:05] Steve: No, but that's a fun idea. I have seen the infinite loop with Hey Alexa, Hey Siri, Hey Google, where it just goes around in circles for 12 hours. The thing I love most about that is that someone recorded it for 12 hours and put it on YouTube.
[00:29:22] Cameron: Yeah, but the idea of having the AI talking to itself and having a conversation. It's a short video, it's limited, but when you think about that for a minute, AIs talking to each other, having conversations, exploring the world together, you think, wow, that's a preview of what's to come.
[00:29:47] Steve: It does, it is a preview of what's to come. It's kind of like Spy vs. Spy. Um, some people say that eventually we exit the internet, and the internet becomes a place where only bots go. And [00:30:00] it's like an ether, if you can imagine it, like a fog that lives above us, and the AIs go up there and they do their AI stuff to sort out the things that we require in their AI world.
[00:30:13] Steve: And in some ways, the infrastructure of the internet kind of does that already, where it shares information and sends it back and forth and we’re just sort of on the receiving end and this idea of AIs dealing with AIs. You know, we see it with, uh, high speed trading, uh, algorithms versus algorithms in the share market.
[00:30:33] Steve: I feel like this is almost a natural kind of evolution of where this will go. And the idea of two chatbots talking to each other on the phone is the shape of things to come. In social media, we already see it: it's bot versus bot on who can get the most likes. And that even comes back to the idea of dead internet theory, where X percent of the traffic on the internet
[00:30:54] Steve: isn't people, it's just bots talking to bots. But I think that there's a non-zero [00:31:00] probability that we will exit the digital world, other than having ambient and edge computing, where we talk to it and put something into that world, and it configures requirements and the bots sort it all
[00:31:12] Steve: out and then bring it back to us.
[00:31:14] Cameron: Mm, yeah. I think that's the inevitable end point for this stuff, where we have agents that go off and do everything that we might want to do, and it just gives us the outcome that we want. Um, and speaking of that, Meta have released previews of their AI
[00:31:35] Cameron: personas and assistants.
[00:31:39] Steve: Yeah, that was really interesting. I love some of the commentary around it: what is a soul worth? Well, apparently $5 million is the number; we've determined that now. What do you sell your soul for? A lot of celebs... uh, Snoop Dogg laid down and got his $5 million to get the Snoop up there. Uh,
[00:31:55] Cameron: Well,
[00:31:57] Steve: again, interesting, this idea of having
[00:31:59] Steve: [00:32:00] proxies, uh, serve us.
[00:32:02] Steve: I note that the PR announcement, or the media announcement, said: and soon they'll be available to you, for you. I mean, there are a lot of really interesting financial, business and ethical questions around this, but it always felt like it was inevitable. Um, and I thought that the fake Drake thing was kind of the start of where might this go.
[00:32:25] Steve: And I think we might've spoken about that in one of the first podcasts that we did, but who gets to own your bio footprint? Is that something that you hand over?
[00:32:36] Steve: Yeah. Well, not mine. I mean, it’s,
[00:32:38] Steve: it’s, it’s, and people will do it and they’ll do it for free.
[00:32:42] Cameron: So here’s the, uh, takeaway
[00:32:44] Cameron: from Facebook's, um, or Meta's press release, 27th of September: We're introducing Meta AI in beta, an advanced conversational assistant that's available on WhatsApp, Messenger, and Instagram, and is coming to Ray-Ban Meta smart glasses and Quest [00:33:00] 3. Meta AI can give you real-time information and generate photorealistic images from your text prompts in seconds to share with friends.
[00:33:07] Cameron: Available in US only. So that’s their version of ChatGPT. Then they say, we’re also launching 28 more AIs in beta with unique interests and personalities. Some are played by cultural icons and influencers, including Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka. Now, I looked at this list of personalities that they have, and I think this is a sign of my age, Steve.
[00:33:35] Cameron: I recognized,
[00:33:36] Cameron: like, three names
[00:33:38] Cameron: on
[00:33:38] Cameron: this list. Um,
[00:33:40] Steve: I don't recognize all of them. Half of
[00:33:42] Steve: them, yeah.
[00:33:43] Cameron: Snoop Dogg, yes. Uh, Paris Hilton.
[00:33:47] Steve: Yes.
[00:33:49] Cameron: Roy Choi, the chef, I've heard of. The rest of them... well, Tom Brady I
[00:33:52] Cameron: think is a
[00:33:52] Cameron: sports guy in the
[00:33:53] Steve: Yeah. He's an American
[00:33:55] Cameron: Sam Kerr is on there, bizarrely,
[00:33:58] Cameron: Um, [00:34:00] MrBeast I’ve heard of, Kendall Jenner, I have no idea who that is, I know that she’s one of the Jenner clan, but apart from that I have no idea.
[00:34:08] Cameron: Chris Paul, Dwyane Wade, Izzy Adesanya, uh, LaurDIY. I'm like,
[00:34:14] Cameron: who? These people are getting paid $5 million to have their personality taken, and I don't even know who they
[00:34:19] Cameron: are. That’s how
[00:34:20] Cameron: out of touch I must be
[00:34:22] Steve: Well, it’s not that you’re out of touch. We don’t have a shared narrative
[00:34:25] Steve: anymore, right? There are a couple of people that cross the chasm, like MrBeast, who is arguably the world's biggest star, you know, 200 million subscribers. Five times the size of Johnny Carson or whoever was the biggest guy in the world. Seinfeld's last episode, 70 million people, whatever it was; he gets that every week, right?
[00:34:43] Steve: So, uh, but we don’t have shared narratives, right? We have wormholes. And there’ll be a few that become global superstars on, on tech platforms. And you know, that that’s like the, you know, Jake Paul and Mr. Beast and these kinds of people and the, the [00:35:00] Jenners and the Kardashians. But, but largely we go into our own wormholes and I don’t think it’s an out of date thing.
[00:35:07] Steve: I think the world is so big and so globalized now you can have what we call global
[00:35:11] Steve: niches. And so on the flip side,
[00:35:14] Steve: you’re, you’re, you’re well.
[00:35:15] Cameron: You can build a business model out of global
[00:35:16] Cameron: niches.
[00:35:17] Steve: Yeah, exactly. Global niches, right? Because in Australia, with a niche, you can't make a living out of it. You know, in America, it's like, our biggest market could be a 5 or 10 percent niche in the U.S.,
[00:35:29] Steve: or a global market. But you're really well versed on who's running the technology that's changing the world. You were telling me about, um, Ilya Sutskever, is that it? Sutskever. Today, I didn't know about him, right? And so it's really interesting. We shouldn't be hard on ourselves about who we know.
[00:35:50] Steve: And I used to try and keep up too, with a whole lot of things going way back on tech and everything. It’s like, I don’t need to know, because what I’ll do now is I’ll just ask someone that does know, or ask the AI [00:36:00] when I do need to know, because I don’t want to fill up my human filing cabinet with inconsequential
[00:36:05] Steve: shit that I’ve got to carry around with me trying to remember
[00:36:07] Steve: everything.
[00:36:08] Cameron: it’s the Henry Ford
[00:36:09] Cameron: Defense.
[00:36:10] Steve: What is it? Exactly. It’s the same thing. It’s knowing
[00:36:12] Steve: where to look or who to ask. You don’t need to carry it
[00:36:14] Steve: now.
[00:36:16] Cameron: So for people who haven't seen these things: basically, they've taken the personalities of these influencers and celebrities, and they're building AI personalities and avatars around them that you can have conversations with. They're playing other people, though. Like, Snoop Dogg is the Dungeon Master:
[00:36:33] Cameron: Choose Your Own Adventure with the Dungeon Master. MrBeast is Zach, the big brother who will roast you because he cares. Paris Hilton is Amber, detective partner for solving whodunnits. So they're basically entertainers. They're actors who are licensing their image to Meta, creating AI avatar characters that they play.
[00:36:56] Cameron: But it’s, um, it’s a, it’s a, you know, [00:37:00] bizarre
[00:37:01] Cameron: development that I think is going to
[00:37:03] Cameron: be huge.
[00:37:04] Steve: Here's the thing: this is going to be so, so big. I, um, presented this week, and they asked me where some of tomorrow's jobs will be. I said, well, you can imagine there's going to be an app store for AIs, but there are going to be bio AIs. So you'll be able to download Cam Reilly and have a conversation with Cam Reilly, and Cam Reilly becomes your mate.
[00:37:25] Steve: I mean, we've had parasocial relationships, which is what the entire idea of television was based on. You buy dog food from whoever's got their dog on TV, or buy this car because you feel like you have a relationship with Johnny Carson or whoever the celebrity of the day is. That parasocial thing, because
[00:37:45] Steve: apparently your frontal cortex can't tell the difference between an inferred one-way relationship and a two-way relationship. Well, we're going to go into a really deep wormhole where people's best friends are celebrities that they've never met, that they have deep, intimate relationships with. And you'll be able to [00:38:00] download, you know, the MrBeast API and just chat with MrBeast whenever you want.
[00:38:05] Steve: Phone conversations, driving in the car. It won't be a podcast with MrBeast, it'll be: I had a half-hour chat with MrBeast today when I was driving to work. I was talking to Kim Kardashian about fashion and what have you. And they'll be getting a licensing fee for that. And if you've got a micro following, you might be able to chat with some of your fans from, you know, the, uh, Julius Caesar podcast, or the historians, and just go deep on history.
[00:38:30] Steve: And they're chatting with Cam Reilly because you've got enough deep content to do it. And it points to one thing: if you haven't done it yet, the deeper you can publish your content, the bigger chance you've got for revenue later, when you can have
[00:38:45] Steve: the AI proxy of yourself in whatever, you know, biological format that evolves into.
[00:38:48] Cameron: Assuming that
[00:38:49] Cameron: people care about
[00:38:50] Cameron: talking to you as a
[00:38:51] Cameron: personality and
[00:38:52] Steve: Well, again, you might be able to do it as a global niche,
[00:38:57] Steve: right? Um, but, uh, [00:39:00]
[00:39:00] Cameron: Or the AIs just absorb, the AIs just absorb all of your
[00:39:05] Cameron: information and push it back out as a
[00:39:08] Cameron: completely different
[00:39:09] Steve: Well, no, wait a minute, but you might be able
[00:39:11] Steve: to say: this is what I know, and add this database to my knowledge bank so that I know more than that. So if you can build a big enough brand, you could then plug it in and say, well, people love me as a person. And now, if you're a physicist, I can talk physics with you, because I just download all the physics stuff and it sounds and looks like you.
[00:39:31] Cameron: Mm.
[00:39:32] Steve: I mean, you've just got to open your imagination. The other thing, though, is real sad: you could get busy parents who aren't spending enough time with their kids outsourcing their parenting, uh, to an AI version of themselves. Which, hey, we've seen before when it comes to cooking and cleaning and looking after children.
[00:39:48] Steve: So I would expect that that would happen
[00:39:51] Cameron: My parents outsourced my, um, development
[00:39:54] Cameron: to Doctor Who, the goodies, Monkey and Kenny Everett,
[00:39:59] Steve: And [00:40:00] nothing went wrong. Look what we have here today with Cam Reilly.
[00:40:05] Cameron: Speaking of people licensing stuff out, uh, the Writers' Strike got resolved in the last week or so, and I read through the agreement, paying particular attention to the terms around AI, because, as I said on the show a while back, I think this is going to go down in history as one of the first collective bargaining agreements that invoke artificial intelligence and try to protect
[00:40:34] Cameron: an industry and a sector against AI taking their jobs. And I thought their terms were... like, there was an effort there, but it was pretty weak and pretty stupid. Um, there was some effort to write some legal language that says anything written by an AI won't be considered source material. If [00:41:00] writers are being asked to use a script prepared by AI, they have to be paid a full fee, uh, et cetera, et cetera.
[00:41:08] Cameron: But it just seems like it's really easy to get around. It says: the new agreement affirms that AI is not considered a writer; anything it generates can't be considered literary material, assigned material, or source material. It does leave room for writers to use artificial intelligence as a tool, provided the production company consents, but writers can't be compelled to use AI to create material, and the company must disclose if they give a writer AI-generated material to work with.
[00:41:38] Cameron: When it comes to using writers work to train AI models, it gets a little more complex. The agreement essentially acknowledges the uncertainty surrounding the current legal landscape and reserves the right of writers to assert that training AI using their writing is prohibited. Now, here’s how I think it will play out relatively quickly.
[00:41:59] Cameron: You will have [00:42:00] production companies that use AI, they, first of all, they train AI on every Academy Award nominated script in the last 70 years to develop a model for how Academy Award winning scripts sound, you know, what the structure is. How the dialogue is written, all of that kind of stuff. Then they hire no name Johnny Nobody writers, kids off the street, who aren’t members of the WGA and who, you know, want to get paid some money.
[00:42:39] Cameron: Listen, kid, come here. We'll pay you $50,000, uh, $10,000, to say that you wrote this script. And we then go turn it into a film. Maybe we'll give you points on the upside. Maybe we'll, you know, sweeten it for you in some way, shape or form. And the [00:43:00] WGA, you know, if they try and sue them, they'll just say, well, prove it.
[00:43:04] Cameron: Prove that we used AI to write this script. How are you going to prove that? How are you going to prove that we trained our AI on 70 years' worth of Academy Award nominated scripts?
[00:43:15] Steve: The WGA is only relevant if you're using any of their people, and you can just circumvent them entirely.
[00:43:23] Steve: It's just irrelevant. You can just circumvent them. And I, I
[00:43:28] Cameron: Yeah, good point. I mean, they have things in place, they have agreements with production companies to say that they will only hire WGA writers, but that doesn't stop a brand new company
[00:43:41] Cameron: coming out.
[00:43:42] Steve: Exactly, or a
[00:43:43] Steve: subsidiary of an existing company. Because the agreement deals with, you know, whoever;
[00:43:48] Steve: it deals with DreamWorks, but this is DreamWorks X, a different company, so, you know, different procedures. So, congrats.
[00:43:54] Cameron: Yeah. I think that’d be Elon Musk’s, uh, uh,
[00:43:58] Cameron: film company,
[00:43:59] Steve: [00:44:00] Well, I just don’t see how this is possible. They’ve got a classic supply and demand problem. They’ve got way too many writers who want to do it. You’ve got automation.
[00:44:10] Steve: It’s just, this is just
[00:44:12] Steve: unwinnable,
[00:44:13] Cameron: And most writers, you know, never make a living out of writing. Most members of the
[00:44:20] Cameron: WGA don’t make a living from writing. Exactly. So production company comes along and says, we’ll pay you, you know,
[00:44:27] Cameron: money to put your name on something and say, you wrote
[00:44:31] Cameron: it.
[00:44:31] Steve: There you go,
[00:44:32] Steve: you can do it. Yeah. I think
[00:44:34] Steve: that this is just wrapping paper, and none of it has any real consequence. Uh, AI will find leaks. It'll go around it. You'll put people's names on it. You'll give money to people who aren't making money. You're going to use the tool. It's like just saying: look, we've discovered electricity.
[00:44:53] Steve: Don't use
[00:44:53] Steve: it. It's just not going to happen. Now, of course.
[00:44:57] Cameron: Hey, Steve, um, you run a [00:45:00] movie studio. I've got this great script that a young guy in China just wrote. Uh, here it is. We bought the rights to it, um, for $50,000. He's, I don't know, he's somewhere in China, but we've got the rights to it, and you want to make it; it's going to make, you know, a billion dollars at the box
[00:45:21] Cameron: office. And we got it for
[00:45:24] Cameron: 10,000
[00:45:25] Cameron: bucks. Do you want to make
[00:45:26] Cameron: it? Who's the writer? Some guy in
[00:45:28] Cameron: China. I don't
[00:45:29] Steve: It doesn't say, though,
[00:45:31] Steve: that they're not allowed to use AI
[00:45:33] Steve: on its own,
[00:45:35] Cameron: Then they’re allowed to use it, but they’re not allowed to call it a literary creation.
[00:45:41] Cameron: If it’s written by AI and
[00:45:43] Cameron: you can prove that,
[00:45:45] Cameron: uh,
[00:45:46] Steve: A literary creation. The only thing that they care about is the creation of dollars at the box office or at Netflix or wherever they do it. Literary schmitterary. No one cares, mate. I reckon the Academy Awards are a bit of a hoax anyway. I don’t know how much people really care about that [00:46:00] anyway.
[00:46:01] Steve: By the way, just while I’m on the Academy Awards, the fact that they don’t have a best comedy
[00:46:05] Steve: is insane.
[00:46:09] Cameron: All right, agreed. You've got a, uh, you've got a hard stop in eight minutes, Steve. What do you want to talk about? What do you want to wrap
[00:46:15] Cameron: up with?
[00:46:16] Steve: Well, the double deep dive, you know, the deep dive, the double-D. Uh, everyone's an expert again, right? Whenever tech gets good, it democratizes the process, right? As soon as you get a car, you're a race car driver. Right? As soon as you get TV in your lounge room, everyone, everyone is an actor.
[00:46:36] Steve: I could be an actor. I could do that. I could do that. When the internet comes out and we get on social media, I’m a social media expert. And we’ve got that there again. Now with AI, one of the things that I’ve just noticed on LinkedIn, because it’s the world that I live in, everyone’s an AI expert now.
[00:46:54] Steve: Everyone is talking about it. I saw the funniest video ever, that someone sent me, where a guy walks onto stage, [00:47:00] obviously some sort of a comedy skit, at a big conference, and he says: AI! And then everyone clapped and he just walked off stage. So I think, um, one of the things that's super interesting is that when a technology truly arrives and it becomes a moment, everyone becomes an expert in that thing, and they start using it.
[00:47:23] Steve: And I think that's generally good. I think that's kind of what levels up the world: that something isn't complex and limited to the fortunate few, or the educated few that have access to that wizardry while we're sort of looking on from the outside. The fact that we're talking about you coding in Python and getting some scripts done, and I'm going to do some more of that.
[00:47:44] Steve: This, I think, is really good. And I don't think there's been much of that lately. I just hope that some of the revenue gets associated with these new streams, because streams are going to dry up, like writers', and there are going to be new revenue [00:48:00] streams, like bio APIs or your likeness. I just hope that enough of that liquid, that cash, flows into people's hands, because we've entered a moment in time where everyone can do it and use it.
[00:48:14] Steve: But I do hope that it democratizes to the point where we share in the upside. But I’ve got something in my futurist forecast for that, that we’ll come back to. So that’s my double deep dive. You know it’s arrived when
[00:48:25] Steve: everyone’s an expert.
[00:48:28] Cameron: And that’s a good time to launch Futuristic Consulting. If you, uh, need people to help you navigate the world of AI, call Cam and Steve.
[00:48:36] Cameron: We'll come and set you
[00:48:37] Cameron: straight. We'll teach your teams
[00:48:39] Cameron: how to get the most out of it.
[00:48:40] Steve: We’re renaissance people who understand the vagaries and the revenue streams. The tech is the easy bit.
[00:48:49] Cameron: I’ve made a, I’ve made a career out
[00:48:51] Cameron: of being one month ahead
[00:48:53] Cameron: of everybody else.
[00:48:56] Steve: Chapter 2 of the teacher's
[00:48:58] Steve: handbook.
[00:48:59] Cameron: [00:49:00] Yeah, just one month ahead. I knew what HTTP and HTML were before most people did. TCP/IP,
[00:49:07] Cameron: podcasting, just got to be one month ahead of everybody else.
[00:49:10] Cameron: That’s the way you
[00:49:11] Steve: In fact, when I do keynotes, I've found that if I go more than maybe one, two or three years ahead, it might as well be a science fiction movie, even if the trajectory is quite predictable, and it often is with where technology will go. Um, they don't want to hear it because they can't use
[00:49:27] Steve: it the moment they leave the room.
[00:49:29] Cameron: That's right. Yeah. Well, I had a conversation with my niece from Utah when she was here last week about Neuralink in your brain. I was talking about where it's going to get to: we'll have a chip in our brain that just connects us with AI. And she said, I'd never do that. And I pointed out that, well, maybe not, but I've been around long enough to remember when
[00:49:54] Cameron: people said, I’ll never own a
[00:49:55] Cameron: mobile phone
[00:49:57] Cameron: because I don’t want
[00:49:58] Steve: I was one of them, I think.
[00:49:59] Cameron: Were you [00:50:00]
[00:50:00] Cameron: early?
[00:50:01] Steve: well, I don’t really need it. I don’t want to be hassled.
[00:50:03] Cameron: Exactly. Early nineties, I had a mobile phone and people would say, I’ll never get one. I said, why not? I don’t want people to be able to contact me 24 seven. I’d say, you know, it’s got an off button, right? You can turn it off. Oh yeah, I guess so. I was around when people said I’d never use the internet. I was around when people said to me, this company will never need email because we have perfectly good phones on every desk.
[00:50:27] Cameron: I was around when people said, I’ll never put my credit card on a website into a website because of security concerns. I’ve heard it all. I’ve heard it all over 30 years of playing around with technology and here’s how I think it will play out. Early adopters will get chipped, as they always do. There are always early adopters that are willing to take the risks to play with the new tech and get an advantage.
[00:50:52] Cameron: There will be some setbacks, but then those setbacks will be figured out. And the people that are the early [00:51:00] adopters will have an enormous competitive advantage in the marketplace. If you have AI in your brain, you're gonna be able to do shit that regular humans, normies, won't be able to do. Others will be forced to follow because of the competitive, you know, disadvantage that they're at.
[00:51:18] Cameron: You won’t be able to get or keep your job. Unless
[00:51:22] Cameron: you are chipped very, very soon
[00:51:25] Cameron: into this
[00:51:25] Steve: Well, tell me about that. Like, really, you won’t be able to get or keep your job unless you’re chipped. You’re talking about
[00:51:30] Steve: unless you’re enhanced, an enhanced human.
[00:51:32] Cameron: Yeah. Unless you have the AI in your brain, why am I going to hire somebody? That's like hiring somebody that's illiterate today. Or somebody who says, oh, I have religious
[00:51:43] Steve: Literally.
[00:51:44] Cameron: I don’t use a telephone. I mean, you’re just not going to hire those people. I mean, they’re, they’re basically illiterate.
[00:51:51] Cameron: If you don't have the internet, sorry, the AI, inside of your head, you're not going to be able to perform the functions that you're [00:52:00] required to perform. It'll be written in the job spec: must be chipped to get this job. And there will be some people, no doubt, that will be Luddites. And people who don't know the history forget that the Luddites were originally objecting to
[00:52:14] Cameron: new technology that was taking away their jobs. Basically, uh, knitting machines were coming along; the industrial revolution was introducing knitting machines. Um, Ludd put together the movement to try and stop the introduction of new technology that was going to put all these people, these knitters, out of jobs.
[00:52:37] Cameron: It didn’t really work, right? It slowed things down maybe briefly, they smashed machines and torched buildings, but it didn’t work. You can’t stop this kind of technological trend, so anyone who says I’d never get a chip in my brain, well, if you don’t,
[00:52:52] Cameron: somebody else will, and good luck with that.
[00:52:56] Steve: I think, yeah. If they work,
[00:52:59] Steve: if [00:53:00] they're safe, I mean, I absolutely would. If I could enhance my cognitive ability... I mean, we're already doing it, right? And if I could do it more conveniently, then I think I would. I think I definitely would, actually.
[00:53:15] Steve: The only concern I have is safety.
[00:53:18] Cameron: Yeah, and they’ll be worked
[00:53:19] Steve: would you have an off
[00:53:19] Steve: switch?
[00:53:21] Steve: Hey guys, listen, if I'm a bit slow over dinner, I'm just running organic tonight, right? You could have organic. And when you go to meet someone on a date, you could have a new dating app which is organics only. So you've got to turn off, so you come in organic, you know. But if you have upgrades, then... and I think there have been a number of science fiction films and books where people who have more money get the better upgrades.
[00:53:47] Steve: And so it just further entrenches, you know, the economic advantage of some, where AI doesn't democratize but sort of delineates.
[00:53:56] Cameron: That’s a possible outcome too. [00:54:00]
[00:54:00] Cameron: Got time to do another segment or do
[00:54:02] Cameron: you got to run?
[00:54:02] Steve: Yeah,
[00:54:03] Steve: I can do it real quick. Well, the other one, just the Futurist Forecast. I mean, I think the complexity of the AI marketplace... it's so big and varied now: more than 7,000 new tools this month again. And every month, it seems. It's actually leveled out at around about 7,000; in the last three months it's been about 7,000 new tools a month.
[00:54:25] Steve: It's one of the weird things I look at for some reason. But it's really confusing, and we've talked about how keeping up is really hard at the moment, and no one can. That was really evident, um, this week when I was out and about. In fact, just really quickly… I was in, uh, Griffith this week doing a talk in country New South Wales, and I did some work with guys who run IoT irrigation systems.
[00:54:51] Steve: And I met with some business people there who are incredibly advanced on technology and edge computing and a whole lot of stuff, because, as you know, agriculture is actually quite a technical [00:55:00] business. Um, but then I was in a room with 300 people and only two of them had used ChatGPT. And really, I nearly fell off my chair.
[00:55:10] Steve: And we've often talked about cities versus country areas. I mean, the divide is kind of there. I think it's real,
[00:55:19] Cameron: I think the Voice referendum drove that home if
[00:55:22] Cameron: nothing else.
[00:55:23] Steve: Yeah. Um, but I think that we're going to end up really quickly with a few big tech players in AI. It looks like it might be the same ones, maybe plus one or two. Maybe it's OpenAI and a couple of other companies, I don't know. But it seems like, especially with the integration of ChatGPT now having visuals with Dall-E and going back and forth, voice, visuals and LLMs are all starting to be integrated.
[00:55:51] Steve: Cause there were a lot of different social media forums as well, blogging, different things in little places when that first came out. And then they all just got sucked [00:56:00] up into, you know, the oligopoly of big tech. And like Cory Doctorow says, the internet became five giant websites with screenshots of the other four, or whoever he stole that off.
[00:56:09] Steve: I think we mentioned that last time. But I think we're going to see the complexity of AI really get sucked in, with the AIs becoming more general, not just LLMs, not just images. And I think where ChatGPT is going is where we'll see all of the others. It seems to be happening really quick.
[00:56:31] Steve: Like we’re only like nine months in.
[00:56:34] Steve: And, and they’re already coming into that one
[00:56:36] Steve: location.
[00:56:38] Cameron: I actually would argue for the opposite of this. Because the problem that we had with the internet, uh, 1.0 and even 2.0 was: I have a limited amount of attention, a limited amount of time in the day that I can research stuff that I want, and go through the learning curve of using a new [00:57:00] platform or a new tool. But
[00:57:06] Cameron: when I have an AI assistant that's doing all of my research for me, I can tell it: keep an eye out for new tools. Look, there's 7,000 new AI tools hitting the market every month. You know what I'm interested in. You know the sort of things that I do every day. If you see a tool that you think could help me be more productive or happier or whatever, bring it to my attention.
[00:57:30] Cameron: It's my research assistant now that will do all that. It doesn't matter who it is or where it is. Like, only show me new tools that have got a four-and-a-half-star user review rating on independent user review sites that haven't been AI-botted. And, uh, you know, just go and do all of that work for me.
[00:57:49] Cameron: I don’t need these things to be aggregated in four or five sites when I have an AI assistant that’s doing all the research for
[00:57:57] Cameron: me. It is my aggregator, not [00:58:00] Facebook
[00:58:01] Cameron: or
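[Editor's note: the assistant workflow Cameron describes is, at its core, a filter over a firehose of releases: relevance to his interests, a minimum independent-review rating, and a guard against botted listings. A minimal sketch; every tool name, category and threshold below is a hypothetical illustration, not a real product:]

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    category: str
    rating: float      # average rating on independent review sites, 0-5
    review_count: int  # guard against thinly-reviewed or botted listings

# Hypothetical user profile: interests the assistant has learned over time.
INTERESTS = {"writing", "podcasting", "automation"}
MIN_RATING = 4.5       # Cameron's "four and a half star" bar
MIN_REVIEWS = 50       # arbitrary floor to filter low-signal listings

def worth_my_attention(tool: Tool) -> bool:
    """Apply the filters: relevant category, well-rated, enough real reviews."""
    return (tool.category in INTERESTS
            and tool.rating >= MIN_RATING
            and tool.review_count >= MIN_REVIEWS)

# Of ~7,000 monthly releases, only the few passing every filter surface.
monthly_releases = [
    Tool("ScriptPal", "writing", 4.8, 120),
    Tool("ClipBot", "video", 4.9, 300),      # wrong category: filtered out
    Tool("PodTrim", "podcasting", 4.2, 80),  # rating too low: filtered out
]
picks = [t.name for t in monthly_releases if worth_my_attention(t)]
print(picks)  # ['ScriptPal']
```

The point of the sketch is that the aggregation lives in the user's own filter function rather than in any one platform's front page, which is the distinction Cameron is drawing against the big-site oligopoly.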
[00:58:01] Steve: I hope you're right. I actually would love there to be global niches for the different tools that you might need. Uh, I mean, for me, I'm really interested in video, and conversion to video, and language: taking my work into different formats, where I do one format and it creates the other four different types of formats for me.
[00:58:24] Steve: And I know there are a lot of those out there now. None of them are really at the level that I'd like, but I love that idea of using AI to keep the market open and wide. Like what the early blogosphere was, before you got to a point where you can blog, but it's really, really hard to have your own audience.
[00:58:41] Steve: I mean, it's still there, and we've still got that, and you and I do that, and some blogs I read are independent. So I hope that you're right. But I get a sense that just the simplicity of going to one
[00:58:51] Steve: place and just accepting that,
[00:58:54] Steve: uh,
[00:58:55] Cameron: AI is the one place
[00:58:56] Cameron: is my point.
[00:58:57] Cameron: Your AI assistant, that, that you’ll get [00:59:00] locked in with
[00:59:00] Cameron: that
[00:59:01] Steve: well, it could be one giant
[00:59:02] Steve: website
[00:59:02] Steve: then. Who is it? Is it Apple? Or is it OpenAI? Or is it
[00:59:05] Steve: Microsoft? Or
[00:59:07] Cameron: Like, I hope, you know, as an Apple fanboy, I hope Apple starts to play in this space. But right now, for me, it's ChatGPT, and my loyalty is there.
[00:59:17] Cameron: You know, I’ll have a relationship with my AI personal assistant, and it will do everything else for me. And that loyalty might get locked in if it’s OpenAI or if it’s Apple, whatever.
[00:59:29] Cameron: That might be hard to, um, Get me out of that relationship because it’ll know so much about me. If it’s been my assistant for a year or two, it’s going to understand me better than even I understand myself. That may be difficult to quickly replicate. There may be, you know, um, hurdles for me to migrate from one to another.
[00:59:52] Cameron: Although I should be able to tell my new AI: talk to my old AI and find out everything that it knows about me. Whether or not it's prepared to export [01:00:00] its knowledge base about you depends on how much interoperability there is. But I think it will then be the interface for me with the rest of the world. It doesn't matter what little small dark corner of the internet something that's important to me resides on.
[01:00:16] Cameron: It will know about it and bring it to my attention. I have a bunch of filters that it’ll be working with and anything that gets through those filters, it will bring to my
[01:00:26] Cameron: attention. It’s my dream anyway, Steve.
[01:00:29] Steve: Well, that’s what it’s all about at The Futuristic, it’s dream
[01:00:33] Steve: time.
[01:00:34] Cameron: And after the failure of the Voice referendum, and Israel-Gaza, and Russia-Ukraine, I'm saying I think the human race is done now,
[01:00:42] Cameron: let’s bring on the AI
[01:00:44] Cameron: overlords as quickly as possible.
[01:00:46] Steve: And on that note,
[01:00:49] Cameron: Talk to you next time,
[01:00:50] Cameron: buddy.
[01:00:51] Steve: see ya mate.
[01:00:52] Cameron: Happy AI, everybody!