Episode transcript. This transcript is generated automatically by Podscribe, Sonix, Otter, and other electronic transcription services.
Hi everyone. Welcome to the 5 Minutes Podcast. Today, I'd like to talk about why AI projects have so much volatility and why we need to think differently. When we talk about artificial intelligence, we need to understand four main points. The first is technology. Technology is advancing so fast that something that is cutting-edge today is outdated tomorrow. Let me give you an example: prompt engineering. Prompt engineering was a super hot topic one year ago. How do you craft your prompt? And I need to tell you, with ChatGPT o1-preview and these latest models, they are becoming so, I would say, smart, for lack of a better word, that you do not need to craft a perfect prompt to get a very decent answer. Maybe in the future, you will just think, and the answer will come. Another aspect of technology is integration, because most of the time you are using APIs and connections to AI engines, and these connections change as often as we breathe. You know, ten times a day, every single day, you need to reconnect, and these changes disrupt your ongoing workflow. The second point is the experimental nature of these projects. While I'm recording this podcast, they are doing research experiments, so we do not know exactly when something will become public: will it be tomorrow, in a month, or in three months? This experimental nature adds to the complexity of the project.
So we need to understand that we are not constructing a building where we know exactly how everything is built; we are building something where we do not even know if the next brick will exist. The third point is regulatory and ethical considerations. Just today, as I'm recording this podcast, the California governor vetoed a bill that aimed to regulate some aspects of AI. I'm not discussing whether this is good or not, but think about what it means when you are developing an AI project: imagine that your constitution and your laws change every single day. Something that you are doing today may become illegal tomorrow, and something that you think is illegal today may become legal tomorrow. We don't know if this will be restricted or not. And second, how will the public perceive AI tools: as something that will be bad for society or good for society? All these regulatory and ethical considerations are evolving, and evolving very fast, and we do not know what the law will be. For example, in the chemical industry, the law is there, and the law may change, but it does not change at the pace we see today with AI. And last but not least is the scarcity of talent. There is high demand for expertise.
And the learning curve is steep. For example, many times I start a week talking to someone about an AI project and end up finishing my week talking to someone different, because the first person just left. Look what happened with OpenAI: pretty much everybody we knew there from last year is not there anymore. Why? Because other companies are recruiting them to develop new projects. So you need to understand that this becomes a true volatility factor for your project. And what can you do to manage this? What would be my three pieces of advice to all of you? One: embrace agile. Embrace iterative development, flexibility, and continuous feedback. Shorten the project life cycle; deliver and experiment more. This agile approach will help you a lot, because you do not have the full picture of how the future will be. Two: invest in robust data management. With AI, you are producing such a vast amount of data that you need to ensure you have quality data and scalable infrastructure, right? Did you see the data centers they are building? For these companies, building a data center is almost like you going out and buying a new MacBook, except they are spending hundreds of billions to construct them. And you need to create and establish governance policies for this data.
And last but not least, three: build a multidisciplinary and resilient team. Think about how you can retain talent and how you can create a continuous learning culture. For example, what I was learning and studying in AI one year ago is different today; of course, what I learned is still very valid, but I need to understand how it evolves. Also, think about cross-functional collaboration. For all AI projects, you need to have domain experts, technology experts, legal advisors, people working on communication and ethics, and people addressing technical and non-technical challenges. No single person is able to do everything; you need cross-functional collaboration. So, in the end, what do you need? You need to understand that AI projects are, sorry, for lack of a better word, different animals. They are not the way you were used to doing projects. And second, the only way to manage this is by embracing agile, robust data management, and a multidisciplinary and resilient team, because there is a good chance that if your project lasts three months, you will start with ten people and finish with one, because the other nine are working at another company or on their own projects. Always think about that. Okay, sorry for speaking a little bit fast today; it was a very busy day, and I wish you a wonderful week. See you next week with another 5 Minutes Podcast.