Episode transcript. This transcript was generated automatically by Podscribe, Sonix, Otter, and other electronic transcription services.
Hello everyone. Welcome to the 5 Minutes Podcast. Today I'd like to go back to artificial intelligence, but this time, you know, not to talk about new tools, the fascinating new tools that save you time and improve your ability to write, read, make music, or manage projects. I want to talk about a different aspect: the ethics and the limits of what we want with AI. And I want to start with a YouTube video that I watched last week. I need to be truly honest with you: I don't know whether to call it scary or surprising, but it touched me deeply. This video is from Nick Bostrom. Nick Bostrom is a Swedish philosopher, and I had the chance to watch him live at TED in Vancouver in 2019. He recorded a short video, something like five or six minutes, for the YouTube channel called Big Think, and here is why this video was so unique. He said that we know AI is evolving fast, truly fast, and there is a good chance, and it's not only he but many people who think this, that it will surpass human intelligence pretty soon. And when we think about surpassing human intelligence, we need to understand that human intelligence is not just the intelligence to write or read, to make music, make calculations, or handle big amounts of data. It also includes feelings. Why wouldn't AI, in the future, have the ability to feel, the ability to be sad, angry, or happy?
And this drove him, in this talk, to ask: okay, if they have these feelings, how will they behave if we try to shut them off? Will there be AI rights or computer rights or anything related to that, like the rights we have for animals and for people? Will they persecute us because we tried to shut them down, or because we tried to reframe the way they do their calculations? And I know, I truly know, that when you listen to that, you say these philosophers are crazy. But if you wait a little bit, you start saying, yeah, you know, I have this feeling too. Because what is coming is completely off the normal charts, and we don't know if this may become true. The role of all of us is to ask: okay, is this the future we want to go into? What I see today is like the Wild West, you know, a gold rush where everybody wants to find the pot of gold with AI. I'm managing some projects in AI, and this is the reason why I want to share this with you: probably you are part of these teams, you are working with this, and you need to understand what we as a society want from it. Do we want to create something that will improve things, benefit us, save us time, and help us make better decisions? Probably yes. But do we want something that may take control away from us? I'm not sure about that.
I'm not sure. And I'm saying this because I am very much engaged in this. Not, of course, as a developer; I'm not an IT professional, but I'm very close to what is happening. Okay, maybe not inside what is happening, but I'm very close, watching carefully. Maybe every single day someone sends me something like ten different tools, ten different things, and every time I see a new one, I say, wow, what is that? And when I saw this Nick Bostrom video, I said, this was bold. And then we saw, I would say, the godfather of AI leaving Google and saying, I'm concerned. Then you saw in The Economist last week Yuval Noah Harari saying, I'm paranoid. And the point is that we are at a stage where there is no way we can stop this. So if you still think that a government has the authority to put a hold on this, it will not happen. It will not happen because it has already become mainstream. It's not under the control of Microsoft or Google or anyone; no, it's everywhere. It's everywhere. Every single startup on the planet is working with AI. Hundreds, if not thousands, of companies are doing that. So it's like you coming to me and saying, oh, let's stop email and shut down email globally. You cannot do that at this point. So we need to decide what we want, and my final message is that, you know, we cannot avoid this.
But we cannot accept it blindfolded either. We need to understand what is going on. We need to read, we need to watch disturbing videos like this one from Nick Bostrom and say, okay, I'm pretty sure this will not happen, but what if it happens? What if suddenly you want to shut down a computer and you cannot shut it down anymore, because the computer outwits us? Are we moving into a Matrix movie? I know many people say no, it's nothing like that. But it's worth it for all of us to ask these questions before, you know, society messes everything up. So this is my message to you: try to take a look and try to understand the undeniable benefits of AI. You know, I'm using many different tools all the time, all day, but up to what point do we want to move further and further and further, until there is a point of no return? Honestly, I don't have an opinion here to say you should do this or you should do that. The only thing I want to share, from my own perspective, is that I am fascinated and, at the same time, completely scared. And I don't know if what is coming is remarkably positive or if we are just digging a hole that will be very hard for us to get out of in the future. So think about that. I hope you enjoyed this week's podcast, and see you next week with another 5 Minutes Podcast.