Episode transcript. The transcript is generated automatically by Podscribe, Sonix, Otter, and other electronic transcription services.
Hi, everyone. Here is the 5 Minutes Podcast. Today, I'd like to discuss a question I received in a webinar on AI that we recently attended. The question was about, I would say, the preconceived idea that companies, organizations, and projects have about tools like ChatGPT and other generative AI tools, treating them as some sort of cheating, something people should not do: if you do anything with the support of AI, it means you are not really doing it yourself.

This reminds me of February last year. My daughter was at university, and they sent a memo to all students saying that ChatGPT was absolutely forbidden, that people could not use it, and that if they discovered you using it, you would be expelled, you would be punished, and so on. She came home, because she knew I was working with these tools, and asked, "Dad, what should I do? Is it illegal?" And I said, "Look, everything is new. Just wait, because things will change." Two months later, my daughter was taking classes on the ethical use of ChatGPT. Do you see how fast it changed?

So, what I'm saying is, first, the use of generative AI is probably one of the biggest technological improvements our society has seen in recent years. It's amazing. Your ability, for example, to create a photo from a text prompt, your ability, very soon, to create a video from a text prompt, to generate ideas, to do an initial round of brainstorming: this is all fantastic. So, where is the borderline between fair, normal, very positive use and cheating? Cheating is when you were supposed and expected to do something yourself, but you use a tool to do it and then take the credit and ownership of that product.

Let me give you two examples. When I was preparing this podcast, I put down some ideas I wanted to talk about, and I asked ChatGPT: can you suggest some titles for this podcast? I told it I wanted something short, that I wanted to mention cheating, what right and proper use is, and generative AI. It suggested a few names, I took the one I liked most, made some minor tweaks, and I'm using it. This is normal, fair usage, because I'm just getting ideas, and because, for example, my English is not perfect, not at all, so this helps me communicate better and deliver my message to all of you who are listening.

In the second case, imagine that I go to ChatGPT and ask it to write an article called "How AI Will Transform Project Management" that I will send to Harvard Business Review (HBR). ChatGPT delivers three, four, five, six pages of content. I copy it, paste it into a Word document, open my email, and send it to the editor of HBR, saying this is my new article. Do you see the difference? The second case is cheating, because I am not the author. Who owns the copyright? First of all, nobody, because it is AI-generated content. But the one who created it was entirely the AI.

So, most of the time, for example, when I'm writing a report for a project and the text is not so clear, or I'm not very happy with the tone, I go to tools like ChatGPT, Grammarly, or QuillBot and say: can you make this message a little softer, a little harder, less formal, more formal? Or can you suggest things I should add to make the content richer? Do you see? This is the positive side. For example, look at LinkedIn today.
I would guess that more than half of the posts we see on LinkedIn are AI-generated, and the same rule applies. If all you are doing is generating content and posting it, don't do it. Why? Because, in the end, it's not you; you are not behind that content, intellectually speaking. Sometimes I see people I know who do not have the English fluency to write that kind of text, and then they publish an article as if they were a native speaker with a PhD in literature. It's not you, and on LinkedIn we want to see you. On a project, it's absolutely the same.

Very soon, we will see multimodal AI helping you transform a chart, for example, a burndown chart in agile, into a video using tools like Sora, which will be available soon. That's fantastic; that's an absolutely perfect use. But you are the one driving the data; you are the one leading that project. If you then say, "Oh, I did everything by hand," that's not right. It's like using DALL-E to create an image and then saying you drew that image yourself; it's not true.

So, this is the borderline. Every time, you just need to do one thing: put yourself in someone else's shoes. Read what you produced and ask: would I be happy if I knew that this article was written by AI and not by John, Michael, or Ricardo? And if you still have doubts, one thing I learned about ethics is this: if something raises a concern, stop and think about it. Never go ahead with something that is unclear; never go beyond the limit where you know ChatGPT is just helping you work less and cheat more. What we want is to use ChatGPT to help you convey better ideas, more creativity, and more insightful content, content that you, as an individual, already have. That is exactly the best use of this kind of tool.

I hope you enjoyed this podcast. This podcast was not generated by AI. See you next week with another 5 Minutes Podcast.