Episode transcript. The transcript is generated automatically by Podscribe, Sonix, Otter, and other electronic transcription services.
Hi everyone, here is Ricardo Vargas, and this is the 5 Minutes Podcast. Today, I'd like to share with you a recent article that I published with my dear friend André Barcauí in the London School of Economics Business Review about artificial emotional intelligence, or AEI. This is a very relevant topic, because most of the time, when we talk about artificial intelligence, what do we expect? We expect something that will streamline our workflow and make our job easier, for example, preparing reports and charts or analyzing information. But there is a very relevant application that comes with a massive improvement and a massive amount of challenges: the use of AI to capture emotions and to gain deeper insight into the parts of people's behavior that are not expressed in words or sentences, or in very clear facial gestures like a smile or a sad expression, but that are very subtle.

This is what I want to talk to you about today. For example, there is a website that is always very intriguing to me: hirevue.com. It is one of the biggest platforms for hiring people, and it is a completely virtual platform where you are interviewed by an avatar. Being interviewed by an avatar means the avatar is able to capture, for example, the way you blink your eyes, the way you smile, very tiny movements, or small changes in the way you talk and the words you use, and it can infer things from that. This becomes extremely powerful.

Starting with the good side: if you use this in the hiring process, there is a bigger chance that you will find the right emotional fit, for example, for your next project or for the team member you need for a specific project, because you understand not only the courses that he or she took but also the underlying emotions. In health care it is the same: if you are working on projects in medicine or hospital treatment, you can understand, for example, psychological behaviors that will guide you to the best medicine or the best clinical approach for that specific patient. In education it is the same. I was educated with a one-size-fits-all approach; I was one of 45 students sitting in chairs in a school, a professor taught whatever he or she wanted to teach, and it was my responsibility to learn the best way I could. With this, the professor and the school can tailor the teaching: am I more visual, or am I more of a listener? They can adapt based on that, using not a human evaluation but an AI evaluation. And I can use this in negotiations as well: if I am negotiating with someone, I can use AI to capture the mood and the meaning underlying the words, and this will improve my ability to negotiate.

In a more extreme way, there is the website Character.AI, and this website is very surprising to me, because on it you can have a chat with Albert Einstein, you can have a chat with President Putin, you can have a chat with Brazilian President Lula, with former Brazilian President Bolsonaro, or with the current president of the US, Joe Biden.
You cannot exactly talk with them, but you can chat with them, and these large language models use, for example, speeches, texts, and all the available knowledge to create, I would say, a Biden version of an answer that is probably different from an Albert Einstein version. This website is the second most popular AI website in the world, behind only ChatGPT, so it is very powerful. Character.AI also creates virtual connections, and this may help people who feel lonely and are trying to connect to someone; this virtual environment helps them do that.

But I cannot close this podcast without the biggest highlight of the article, and the biggest highlight I want to share with you is the ethical implications. Human emotions are tricky; they are not very easy to spot, and people are not very comfortable with the idea that, for example, on a Zoom call, Zoom scans the mood and says, "Oh, this is the time for you to ask for a raise, because the mood of your boss right now is very open to this request," or the opposite. This will happen, but up to what point? What about personal privacy? Up to what point will bias and discrimination not flow through this same channel? And this is how we finished the article and how I want to finish this podcast, because the ethical implications of this are massive. Imagine if I can discover your sexual orientation, your religious beliefs, or your ethnicity just by analyzing the way you put words together or just by analyzing your accent; I can then discriminate against you compared with other candidates because you are part of a minority group. Imagine if this becomes a reality with the power of AI; it can create side effects that really challenge us as a society.

So this is why it is so important to understand that every time we make big, big improvements in AI on this emotional side, we bring a lot of great things, but we also bring a lot of challenges on the ethical side that we need to be completely aware of. I hope you enjoy the podcast and also the article, which was published on May 20, 2024, in the London School of Economics Business Review. And always think of this as a two-sided knife: the same knife that cuts and prepares food is the same knife that can create a lot of challenges and trouble for other people if you misuse it. So see you next week with another 5 Minutes Podcast.