Episode transcript The transcript is generated automatically by Podscribe, Sonix, Otter and other electronic transcription services.
Hi everyone. Welcome to the 5 Minutes Podcast. Recently, Stanford University released the AI Index Report 2024. It's a very dense document, more than 500 pages, but it's an excellent tool for us to understand and look for specific insights on what is going on in AI. For those who are deeply inside the field, maybe you will see things you already saw, or some of the trends you already know. But for those, for example, working in projects that are not using AI as a working tool every single day, every single minute, it may be worth taking a look, mainly at the top ten takeaways. But I want to give you a disclaimer: I did not read the 502 pages. Okay, and you may ask me, Ricardo, how did you do that? First, I read all the summaries and the takeaways, and then I read some of the chapters. I am taking a look at the things that are more related to my personal interest: the impact on the workforce, the impact on how we do our work, and probably a little bit less on the technical side. And I used ChatGPT. I uploaded the document and asked several questions, trying to make sense of this report and how it would impact my job. And one of the questions was: okay, I am a project manager.
Which of the top ten takeaways should I be more aware of, or pay attention to, so it's worth it? I will not give you the answer right now; you will have to listen. However, the use of AI to analyze this document is extremely interesting. But let's go through the top ten takeaways quickly. First, and this is very, very nice: AI beats humans on some tasks, but not all. Many people believe that AI is far behind human intelligence, and other people think that AI has already surpassed us. So what is happening now? What is today's status? AI beat humans in several tasks, like image classification, for example, visual reasoning, and English understanding. But remember, when we go towards sharper and more precise answers, like mathematics, engineering, structural design, or complex problem-solving, it is not there yet. Maybe it will be, but it's not there yet. Second: industry is still dominating. Industry does far more AI research than academia; companies like Google, OpenAI, and Perplexity, as well as other companies in the industry, produced 51 notable machine learning models, against only 15 from academia. And this probably has a reason. The reason is takeaway number three: frontier models are way more expensive. AI is expensive, but when you go towards the frontier, to improve and create, I would say, from excellent to extremely excellent models, then prices skyrocket.
OpenAI's GPT-4 cost an estimated $78 million in computing alone, and Google's Gemini Ultra, $191 million. So it's a massive investment for a single model, and this, of course, is not an investment that academia is ready to make in most cases. Number four, regarding countries: this is, I would say, obvious. The US is leading, followed by the EU and China, but the US is far ahead: 61 notable AI models from US-based institutions, against 21 from the European Union and 15 from China. And number five: responsible evaluation of large language models is really lacking. What is happening is that there are still massive problems of cognitive bias, fake news, and wrong answers, and this is something that requires a lot of investment. It's not easy to sort out. Okay, so companies are working on that, but we seriously lack this concept of responsibility to avoid prejudice and discrimination in the answers these models give. Number six: the investments are skyrocketing, $25.2 billion in generative AI. Pretty much every single startup has AI in the name. Every single startup playing in this field, especially generative AI, is receiving massive amounts of investment. And there are many, many of them. If we think about technology projects, I would guess, and I don't have this figure, it is not in the report, that at least 85 to 90% of them are AI-related investments.
Number seven: the data shows that AI makes workers more productive and leads to higher-quality work. And this is very nice. This is why, for example, I started this episode by saying that the use of AI is making us more productive and improving the quality of our work. Of course, there is just one line in this number seven that we need to think about: sometimes it makes us so productive that there is a risk people start to become lazy, and without oversight, AI can actually diminish our performance. And this, of course, I'm not talking about an apocalyptic future, but I'm saying that maybe in the future we will see people without the ability to even evaluate the results. It's something like using a calculator when you don't know the answer. The calculator is then not just speeding you up; it is solving something you have no clue about. Imagine that you don't know how much two plus two is; you only know by pressing two plus two equals on the calculator. It's the same. You lose this ability, and this is very serious. This is one of the topics I'm most interested in regarding the side effects of AI.
Number eight: scientific progress. And this is just amazing; this is the part I like most. AI is accelerating it even further: the development of medicine, for example protein synthesis, or MRI-based diagnosis. Oh, God, this is just incredible, because with the use of AI you can synthesize proteins using computers at lightning speed. And this is accelerating scientific progress to develop new medicines, new vaccines, and new diagnostic processes, for example. This, I think, will be one of the greatest accomplishments of the use of AI. Number nine: regulation. We still lack regulation, but it improved dramatically in one year; many new AI regulations were made, and the EU released its regulation recently. So it's improving. Is it there yet? No, but it's much less scary in terms of regulation than one year ago. So this is a great point. If we want to combat fake news, or discuss copyright, privacy, or cybersecurity, this type of regulation will help this progress a lot. And last but not least, number ten: people are becoming more and more aware of the impact of AI, and they are becoming more nervous. And this is probably what I talk about most. For example, I love technology.
I started using AI and AI models in projects, for example in portfolio management, more than ten years ago. I have a paper on neural networks that I published at PMI in 2015, and it is amazing to see the potential impact. But at some point, people need to understand that the impact will happen and that we need to learn. Even for project managers, for those who think the life of the project manager will be just wonderful with AI, my answer is yes, if you know how to use it, because if you don't, this could be a nightmare and potentially even the end of your career. So think about that. This is why, for example, I published this episode and try to talk about AI: I think I have a responsibility to make sure people are aware. Remember: this document is absolutely free, so just download it. If you don't want to read it, no problem: upload it to ChatGPT and play around. Ask questions, ask about your concerns, and see what comes out of that document to help you understand it. But at least read the first pages to gain some insights. This is exactly what I tried to summarize for you today. Okay, I hope you enjoyed this podcast, and see you next week with another five-minute podcast.