CySafe Lab: Data-Driven Cybersafety
About the Lab
CySafe Lab develops data-driven methods to study, measure, and mitigate harms on online platforms. We work at the intersection of computer security, machine learning, and computational social science, and we build the methods, datasets, and infrastructure that make platforms accountable to the public.
The lab is based at TU Delft’s Faculty of Technology, Policy, and Management (TPM), with some members at the Max Planck Institute for Informatics. Our work has been published at top venues across computer security and computational social science (e.g., ACM CCS, USENIX Security, ACM CHI, AAAI ICWSM, ACM IMC, ACM CSCW), and has been recognized with multiple Best Paper Honorable Mention Awards.
What We Work On
- Online harms and platform accountability — measuring harmful content, recommender system effects, and platform interventions at scale.
- AI-driven cybersecurity and cybersafety — applying machine learning to detect, model, and mitigate emerging threats.
- Hate speech, illegal content, and content moderation — auditing moderation systems and developing fairer, more transparent approaches.
- Generative AI harms — characterizing new risks introduced by generative AI systems and their misuse online.
- Online safety for minors and vulnerable populations — studying how at-risk groups are exposed to harmful content and design patterns.
- Platform measurements and data access — building public datasets, data-donation methods, and infrastructure for independent platform research.
Collaborations & Impact
CySafe Lab partners with the institutions that translate research into action. We contribute to European platform governance through ongoing engagement with the European Commission’s European Centre for Algorithmic Transparency (ECAT) and the implementation of the Digital Services Act (DSA), and we collaborate with Dutch public-interest organizations including ATKM (the Dutch Authority on Online Terrorist Content and Child Sexual Abuse Material), ECP (Platform voor de InformatieSamenleving), and trusted flaggers such as Offlimits. Our research is supported by funders including NWO and Google Trust & Safety, and lab members regularly engage with regulators, journalists, and civil society on the questions our work addresses.
Current PhD Researchers
I am lucky to collaborate with and advise the following PhD researchers!
- Vahid Rahimzadeh – AI-driven Cybersecurity and Cybersafety
- Nicolò Fontana – Hate Speech, Illegal Content, and Content Moderation
- Breno Matos – Emerging online harms in the era of Generative AI
- Moonis Ali – Online harms related to minors
Past PhD Researchers
In the past, I had the privilege of collaborating with and advising the following PhD researchers.
- Mohamad Hoseini – Large-scale measurements of online platforms
