Episode transcript

The transcript is generated automatically by Podscribe, Sonix, Otter, and other electronic transcription services.
Hi, everyone, here is Ricardo Vargas, and this is the 5 Minutes Podcast. Today, I would like to talk about the use of AI in risk management, specifically in the identification of risks. If we go back and try to analyze how we identify risks, usually, what do we do? We do brainstorming, we use the nominal group technique, and we get our people together to discuss potential risks. And most of the time, we are evaluating risks based on our previous experience, right? It's our experience, what we did in the past, that shapes the way we will see risks in the future. For example, I discussed this when I saw the risk report from the World Economic Forum this year: two years ago, the top risk was a pandemic. Now, the top risks are AI technology, misuse of technology and fake news, together with global crises like wars. Why? Because we have a short memory. But AI does not have a short memory. AI analyzes and combines billions of data points to produce insights.

I always remember, when I was starting to study big data and analytics, people saying: if, in a supermarket, you put diapers close to, for example, beer or another alcoholic beverage, you will sell more beverages and more diapers. But honestly, if we think rationally, it's very hard for us to see a clear logic between putting these two items together and the sales. Of course, you can say, OK, I will put sparkling water close to still water. That makes sense, right? They are both water, they are complementary in some ways, people want one or the other, so if they are together, there is a good chance we will sell more. But between diapers and beer, there is no relationship we can assume; we can try to guess one, but technically there is no relationship. So what happened? It's data. When machines analyze billions of data points, they notice correlations that we are unable to identify.

This, for example, is why some people are so afraid of the misuse of AI: AI can recognize patterns in ways we cannot. For example, if you are doing an interview, from the way you blink your eyes AI can predict, with good accuracy, your political behavior, your religion, or your sexual orientation. And I am mentioning these because they are very polemic topics; that is the bad side. But if we use the same capability to analyze potential risks to our project, it can be excellent, because it can highlight risks that we are not able to see by ourselves. We are not able because we never saw anything similar, and we are not able to process quintillions of things at the same time, as machine learning algorithms are able to do. And this is the trick, and I'm using this all the time.

And I know many of you may say: Ricardo, I use ChatGPT to ask a question, or, for example, I use PMOtto to ask a question, and the answer is generic. I call this a pasteurized answer. When you pasteurize milk, you heat it and then cool it very fast, and you kill all the bad things in the milk, but you also kill all the good things, the bad and the good bacteria. Why am I saying this? When you ask a pasteurized question like "I'm building a house, what are the risks?", come on, you don't need ChatGPT or PMOtto for that, right? What you can do is simply one thing.
You can write down the answers that ChatGPT will give you, because it will probably say: delays in the construction, rework, licensing, problems with the quality of materials. These are pasteurized answers. For you not to get pasteurized answers, you need to write the right prompt. For example: I'm doing the plumbing of my house, all the equipment and the teams are on site, and I need to identify what kind of potential risks I will face while installing the plumbing in the walls and floor that could delay my project, cause additional expenditures, or compromise the safety of the workplace. If you use this prompt, you will see that the answer is very, very different. And if you can combine this with project management content, then it becomes really powerful.

For example, today I'm using a combination of PMOtto with Perplexity AI. By the way, I'm not the owner of Perplexity AI; I'm the founder of PMOtto, just as a disclaimer. And I'm enjoying it a lot, because what I do is take the information that comes out of the PMOtto prompt and check it on Perplexity AI, against the specific websites and the specific searches I can use on this topic, for example, identifying risks. I'm doing this several times a week, and it works nicely and gives you a list that would require a lot of effort for you to produce by yourself.

One final comment: where is the human in all of this? My advice is that you can generate an initial brainstorming this way, and this is how you should start the process. Instead of starting with people, you start with the algorithm and generate a long list of risks. Then you share this with the team and, instead of the team doing the brainstorming by themselves, they review this list and explore some additional topics. So you can combine the power of AI with the experience of the humans in the room, and with that you will create a much more powerful risk list that turns into a much more precise ability to respond to these risks. Think about that, and see you next week with another 5 Minutes Podcast.
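Below is a minimal sketch of the difference between a generic, "pasteurized" prompt and the specific plumbing prompt described above, assuming the OpenAI Python client as the interface. The model name, the helper function, and the exact wording are illustrative assumptions only; the same two prompts could just as well be pasted directly into ChatGPT, PMOtto, or Perplexity AI.

```python
# Sketch: a generic ("pasteurized") prompt vs. a specific risk-identification prompt.
# Assumes the OpenAI Python client; the model name and wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

generic_prompt = "I'm building a house. What are the risks?"

specific_prompt = (
    "I'm doing the plumbing of my house. All the equipment and the teams are on site. "
    "Identify the potential risks I will face while installing the plumbing in the walls "
    "and floor that could delay the project, cause additional expenditures, or compromise "
    "the safety of the workplace. For each risk, give a short cause and a suggested response."
)

def identify_risks(prompt: str) -> str:
    """Send one prompt and return the model's draft risk list."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; use whichever model or tool you have access to
        messages=[
            {"role": "system", "content": "You are a project risk management assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# The first answer tends to be generic (delays, rework, licensing); the second is scoped
# to the plumbing work and becomes a usable draft list for the team's review session.
print(identify_risks(generic_prompt))
print(identify_risks(specific_prompt))
```

The output of the specific prompt is meant as a starting point, not a finished risk register: the idea is to hand this draft list to the team so they can review it, discard what does not apply, and add the risks only their experience can surface.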