Leyla Isik, a professor of cognitive science at Johns Hopkins University, is also a senior scientist on a new study looking at how good AI is at reading social cues. She and her research team took short videos of people doing things — two people chatting, two babies on a playmat, two people doing a synchronized skate routine — and showed them to human participants. Afterward, participants were asked questions like: Are these two communicating with each other? Is it a positive or negative interaction? Then, the researchers showed the same videos to over 350 open-source AI models. (Which is a lot, though it didn't include all the latest and greatest ones out there.) Isik found that the AI models were a lot worse than humans at understanding what was going on. Marketplace's Stephanie Hughes visited Isik at her lab at Johns Hopkins to discuss the findings.