AI & Resistance: why saying ‘No’ is sometimes the boldest move

In the evolving discourse on AI, there is often a default assumption that we must ‘do something’ with AI. But is that really true? In early July, Inge Janse (investigative journalist for the Centre for BOLD Cities) argued the contrary at a meet-up of the Future Society Lab. During his talk, he presented five reasons why we should resist AI, or at least consider resisting it. The result was a challenging plea to reimagine AI not as a goal but as a tool in service of humanity.

By Mirte van der Sangen

Inge Janse’s talk was part of a larger program organised by the Future Society Lab in early July, approaching AI from four different angles: society, resilience, AI & digital humans, and resistance. Alongside Inge Janse, Maaike Harbers (Hogeschool Rotterdam), Maurice de Beer (Veiligheidsregio Rotterdam-Rijnmond) and Rebecca Moody (Erasmus University) shared their knowledge and expertise on these topics with an audience that ranged from researchers and policymakers to independent technology developers.

From “Yes, if…” to “No, unless”

The Centre for BOLD Cities positions itself as a critical voice in the digital transition. Janse urges his audience to rethink their starting point. Instead of jumping on the AI bandwagon with “yes, if it works,” he suggests adopting the stance “no, unless it truly aligns with and supports our mission.”

This fundamental pivot is more than semantics. It reframes AI as a tool, not a goal. Using this approach as a jumping-off point, Janse presents five reasons why proactive resistance against the uncritical adoption of AI is necessary.

  1. AI is not inevitable
    There’s a flawed logic embedded in the popular narrative: that we must do something with AI, just as we had to engage with the internet or social media. Janse debunks this, linking it to the common organisational reflex to treat technology as an end in itself. But AI does not fix problems. It often addresses symptoms, not root causes.

    Moreover, as the saying goes, “garbage in, garbage out.” If the inputs to AI systems are biased or flawed, the outcomes will be too. Therefore, any engagement with AI must begin not with data or algorithms, but with a clear understanding of our mission and values.
     
  2. AI is not neutral
    The second assertion in Janse’s presentation is that AI is never neutral. It reflects the values, incentives, and prejudices of those who build it. AI, he notes, is the product of historical forces, such as capitalism, institutionalised academia and social inequality.

    The choice for AI is a choice to accept the systems and ideologies in which it is embedded. Janse therefore advocates for discussion before implementation, as AI should follow ethics, not dictate them.
     
  3. AI narrows the world to data
    “If all you have is a hammer, everything looks like a nail,” Janse states to illustrate how AI becomes a default tool. When we start to view the world solely through the lens of data, we lose everything of value that cannot be captured in metrics.

    Reducing societal challenges to data points risks ignoring nuance, emotion, and relationships. These elements are crucial for good governance and public service. AI offers one version of the truth, but never the whole picture.
     
  4. AI reduces us
    Janse warns against the currently standard, yet outdated, way AI is implemented. Instead of exploring how the novelty of AI could truly transform society, we use AI to carry out tasks and procedures already well performed by humans.

    More concerning is how this distances us from our jobs, our routines, and our identities. This trend creates what Janse calls the “humanless life syndrome,” in which we increasingly remove all human aspects from processes.

    To counteract this, we must intentionally define roles where humans are irreplaceable. AI should be used for what it excels at, such as calculation and pattern recognition. The rest should remain with what we uniquely excel at: all that makes us human.
     
  5. AI forgets the world
    Janse leaves the audience with a philosophical point. AI functions in isolated domains with clear parameters. However, society does not function in isolation. Real-life problems transcend sectors, involve multiple stakeholders, and are rooted in complexity.

    In contrast, AI erases this complexity. Janse paints a picture of a future in which digital avatars represent citizens, replacing lived experience with simulation. To resist this, Janse insists, we must always centre the real world.

AI as your assistant, you as the authority

Janse does not reject AI but calls for a repositioning of our relationship with it. “You are the boss,” he says, “AI is the helper.” Any implementation must begin with human intention, not technical potential. 

His insistence on prioritising human intention becomes even more apparent when he faces questions from the audience.

One critical listener asks, “What if you cannot avoid using AI because your organisation already does, or it is imposed externally?” Janse maintains that we should reflect on our use of AI: “This is not a call to never use AI. It is a call to avoid using it simply because it exists. If you have a choice, make it consciously, not out of fear of missing out or just to experiment. Start with a real-world objective, and only then consider what role AI might play.”

Another audience member wonders, “How do you deal with the opposite form of resistance: people who refuse to work with AI out of fear it will devalue their expertise?” Janse responds firmly, “That is not an AI problem but an organisational one. If someone can be replaced by AI so easily, we should ask why that person’s role exists in its current form. The key is to focus on what makes people unique and to design roles that leverage that.”

For professionals navigating AI's rapid advance, Janse’s insights are both a warning and a guide. The answer is not always resistance, but resistance should always be part of the process. To prevent a future devoid of humanity, we must ensure that every technological decision is rooted in our ethics and values.

More information:
Visit the Future Society Lab