Podcast/Video Interviews by Stephen Ibaraki

A Chat with Matyas 'Maty' Bohacek: Student at Stanford University working on research at the intersection of AI and digital forensics; specialized in AI for accessibility; developed a novel architecture for sign language recognition; co-founded and exited Verifee, an AI-based app for disinformation analysis; frequent speaker on the broader implications of his research and AI's societal impact.

This week, Stephen Ibaraki has an exclusive interview with Matyas 'Maty' Bohacek.
Maty began his collaboration with Professor Farid as a high school sophomore, developing a detector of deepfakes of President Zelenskyy of Ukraine. This detector was later extended into state-of-the-art detectors of world leader deepfakes and lip-sync deepfakes. He also created a deepfake of Anderson Cooper, which opened a primetime news show on CNN, highlighting both the advantages and the risks of AI technologies. More recently, as part of a six-month research internship at Google, Maty extended his work into text-to-video deepfakes (e.g., Sora, Runway, Veo), creating a new benchmark dataset and a detector to address these free-form deepfakes. His current research focuses on AI model poisoning and collapse - how AI systems degrade when recursively trained on their own outputs - and training data attribution, determining whether a specific data point, such as an image, was used to train an AI model.

Prior to his work on AI and media forensics, Maty specialized in AI for accessibility. He developed a novel architecture for sign language recognition that set a new state of the art in both accuracy and computational efficiency. He later improved the model's few-shot learning capabilities, dramatically reducing the amount of training data required. Maty also embedded this AI system into educational applications now used in college sign language courses, most recently at Tulane University.

Outside of research, Maty co-founded and exited Verifee, an AI-based app for disinformation analysis in Central and Eastern Europe, backed by Google. He has delivered selected lectures on artificial intelligence in journalism at Charles University in Prague and has been an invited speaker for NYU's AI, Misinformation, and Policy series and Stanford's DATASCI 194D course. Maty frequently speaks on the broader implications of his research and AI's societal impact. Recently, he addressed the United Nations ECOSOC Conference, highlighting AI's potential to foster accessibility; the Stanford Online Trust and Safety Conference, where he outlined the growing ease of deepfake production; the EU Disinfo 2024 Conference in Riga, discussing strategies for deepfake mitigation; and the Future Port Youth Conference in Prague, where he presented on the future of AI deception beyond contemporary deepfakes.

Maty has appeared on the Forbes 30 Under 30 list in Central Europe. He has received the Innovator of the Year award in Czechia, the Czech AI Personality Award as Discovery of the Year, the Moonshot Award (supported by the Avast Foundation and the Aspen Institute), and the "Ceske Hlavicky" Young Scientist Award from the Czech Ministry of Education. He is also a two-time Apple WWDC Scholar. His research has been published in Science and the Proceedings of the National Academy of Sciences, among other venues, and his work has been featured by CNN, New Scientist, Nature, and (the European) Forbes. In addition, Maty has contributed opinion pieces to WIRED and (the European) Forbes.