Hello, everyone. Here is Ricardo Vargas, and this is the 5 Minutes Podcast. Today I want to make a provocation: is artificial intelligence actually creating more risks than it is reducing in our projects? We hear every single day that AI is the solution to monitor schedules, predict delays, identify bottlenecks, and even anticipate contractual failures. And all of this is true. AI can help reduce operational and financial risks and even catch human errors. But at the same time, we need to face the other side of the coin: AI also creates new risks, many of them invisible, and we don't notice them until we are already using these tools.

The first one is data bias. If the data set used to train the AI is biased, the output will also be biased. For example, if you train on a data set that is completely paranoid about risks, what happens when you generate a risk register from it? You get a huge number of risks, because pretty much everything is flagged as a risk; the data set is completely biased. And of course, the other extreme is the same: if you use a data set that considers almost nothing a risk, you ask, "What are the risks of building a nuclear power plant?" and the answer is something like, "Maybe it will fail," which is nonsense.

Another risk, and for me probably the one that concerns me the most, is blind trust. Many professionals take AI predictions and recommendations as absolute truth, putting critical thinking aside. People treat AI like an oracle that can predict pretty much everything and never makes mistakes, and this is not true. Please trust me, I have seen so many project managers doing exactly that: they generate a WBS using AI and don't even read it. They treat it as the final result and as the basis for the schedule and the scope of the project. This is not true, and it is extremely risky.

Another important point is technology dependency. What happens if the AI fails, or if the provider changes its business model or simply shuts down the service? Many AI tools are entering the market, and at the same time many are simply disappearing. If this happens and your project is deeply connected to one of these tools, the project may stall.

And of course, last but not least, there are the ethical and legal aspects. Who is responsible when AI-supported decisions lead to financial loss or project failure? Suppose you build a risk register relying 100% on AI, with blind trust, and the AI does not identify a risk. That risk happens and causes the complete collapse of your project. Who failed? This is very important.

Of course, I want to highlight that project managers cannot treat AI as a villain. It is not, and I don't want you to think that is what I'm saying. All of you know I love AI. I think it is a fantastic thing, probably one of the greatest things my generation has had the opportunity to see. But we also cannot fool ourselves into thinking that it will solve everything, at least not the AI we know today. As of 2025, I cannot say whether AI will evolve in such a dramatic way that we will trust it, for example, to fly an airplane with 500 people on board without a human pilot. But at the present moment, we cannot do that.
The right approach is to use AI as a powerful ally, but always with human supervision, which is what we call human in the loop. This means reviewing AI suggestions, creating fallback plans in case the technology fails, and reinforcing a culture of accountability. AI can support decision-making, but the ultimate responsibility will always be ours, as humans. Artificial intelligence is neither the absolute savior nor the root of our problems. It is up to us to find the balance, leveraging the efficiency and predictability it brings while not letting technological enthusiasm blind us into thinking we have found a marvelous oracle that will solve everything, because with the current technology, this is not true. AI is an enhancement of our human capability, not a replacement, at least for the present moment. Think about that. I hope you enjoyed this podcast, and see you next week with another 5 Minutes Podcast.