Security, surveillance, AI, and public values: for Marc Schuilenburg, these are not abstract academic concepts. They are urgent societal challenges that demand critical, interdisciplinary, and engaged responses. As Professor of Digital Surveillance at Erasmus School of Law, he focuses on reaching beyond the academic world and making direct social impact: ‘If you want to influence technology, you need to be in the room before the tool is operational.’

Can you tell us a little more about your academic background and expertise?
I studied both philosophy and law. After graduation, I worked for six years at the Public Prosecution Service, where I trained to become a public prosecutor. Although I had a clear career path there, I was increasingly drawn to academic work: reading, writing, and asking fundamental questions. I eventually pursued a PhD in social sciences at VU Amsterdam and have since held academic positions. Today, I am Professor of Digital Surveillance at Erasmus University Rotterdam, a position supported one day a week by TNO.
Why is an interdisciplinary approach to research so important to you?
Because reality itself is interdisciplinary. It is messy, entangled, and always shifting. You cannot analyse something so fluid through the lens of a single discipline. If you try, you end up with rigid thinking and fixed categories that do not match reality. I have always tried to resist that. My thinking is process-oriented; it starts from the idea that reality is constantly escaping in all directions. That means we need theoretical tools that reflect this constant flow.
As Professor of Digital Surveillance, you focus on public safety and AI tools. What makes this topic so urgent today?
Our society has become fully digital, and we have now entered a phase where AI takes things even further. AI systems do not just follow rules; they make decisions and give advice. They are no longer tools fully controlled by humans but are increasingly becoming self-regulating actors. That makes them unpredictable and difficult to regulate and hold accountable.
Ten years ago, I wrote one of the first articles on predictive policing. Back then, AI felt distant. Today, it is everywhere and developing at a pace that still surprises me. With AI-generated deepfakes and autonomous systems that integrate AI, sensor technology, and connectivity, even I struggle to separate truth from fiction.
Your research into AI focuses on themes like ‘care’ and ‘public values’. What’s missing in current implementations?
The narrative around AI is dominated by efficiency and safety. Both are important public values. But we often overlook others: transparency, accountability, non-discrimination, and privacy. In my book Making Surveillance Public, I argue that these values need more attention in AI design.
Here, the key is timing. Decisions about which data sets to use, which algorithms to build, and how AI tools function are made at the very beginning of research and development by public and private parties. If you want to influence the outcomes of technology, you need to be in the room before the design begins. That is why, as an academic, I try to be involved at the earliest stages of AI and digital tools.
In your book Making Surveillance Public, you raise fundamental questions about AI and who benefits from it. What surprised you most while writing it?
I was most surprised by how normalised digital surveillance has become. People willingly monitor themselves using smartwatches, video doorbells, fitness trackers, and cars with self-driving features, such as Teslas. We rarely recognise this as surveillance. At the same time, public values are under pressure. AI is sold as making society safer and more efficient, but we rarely see the democratic costs. As a result, presumed technical and economic advantages prevail, while sociological aspects such as ‘power’, ‘knowledge’ and ‘experience’ remain underexposed.
These observations led me to explore the meaning of ‘public’ in Making Surveillance Public in three ways:
- Surveillance we perform on ourselves, through Apple Watches, Fitbits, Amazon Ring doorbells, and electric cars. We no longer merely tolerate surveillance; worse, we pay for it. We buy into the very systems that discipline and control us, simply because they are fast, shiny, and whisper promises of status and self-actualisation.
- Public values are at risk: there is an increasing emphasis on safety and efficiency at the expense of non-discrimination, privacy, transparency, and accountability.
- The need for a public – from scientists and nature to the victims of algorithms – to be involved in the public debate and the design of AI tools. ‘Making surveillance public’ is then a matter of collecting the voices of all those affected by a surveillance issue. In my opinion, more attention should be paid to the way each technology works as a template, in which the knowledge of particular individuals, such as the data professionals I call the ‘coding elite’, gains the upper hand while the knowledge of others is forgotten or dismissed as worthless.
That last point changed how I work. I now try to be “in the room” when decisions are made, whether through my role as Professor of Digital Surveillance, through my work with TNO, or as a member of the national police’s Scientific Advisory Board.
You are very active in the public debate around surveillance and technology. How do you see the role of academics in that conversation?
Personally, I find that writing academic papers is no longer enough. Most academic publications are read by only 1.2 people on average. If you really want to have an impact, you must step outside the academic bubble and engage with those designing and deploying these tools.
That is why I collaborate closely with policymakers, developers, and institutions like the police. I know there is a risk of being used as academic “window dressing.” But I would rather take that risk than stand on the sidelines and criticise from a distance.
What do you hope to achieve through your work?
I just want to keep my thinking sharp, stay critical, and help others ask better questions. Every four to five years, I deliberately shift my scientific focus to a new theme. This has led to highly acclaimed books such as Hysteria, The Securitization of Society, and Mediapolis. It is a way of staying original and not getting stuck.
And through my teaching, I hope to pass that mindset on. My master’s seminars are a space where I share my research in progress. I see teaching as a form of accountability. Students should know what I’m working on now, even if it is not finished.
Is there a book, film, or podcast that inspires you in your field?
Yes, though it is not recent. Some of the most insightful portrayals of surveillance are from the 1970s. The Conversation (1974) by Francis Ford Coppola was made shortly after the Watergate scandal and remains highly relevant today. Gene Hackman plays a surveillance expert who knows, but does not want to know, what dirty jobs he is being used for. It is a must-see film on privacy and personal responsibility.