Brussels / 31 January & 1 February 2026


FOSS in times of war, scarcity and (adversarial) AI


We need to talk about war. And we need to talk about companies building bots that propose to rewrite our source code. And about the people behind both, and how we preserve what is great about FOSS while avoiding disruption. How do geopolitical conflicts on the one hand and the risk of bot-generated (adversarial) code on the other influence the global community working together on Free and Open Source software?


The immense corpus of free and open source software created by a global community of researchers and engineers, developers, architects and designers is an impressive achievement of human collaboration at an unprecedented scale. An even bigger circle of users, translators, writers, creatives, civil society advocates, public servants and private sector stakeholders has helped to further develop and spread this technological Gesamtkunstwerk far and wide - with the help of the internet and the web. With individual freedoms and user empowerment at its center, these jointly created digital public goods have removed many economic and societal barriers for a large part of the world's population. Users are not just allowed to benefit from technology: each and every user can in principle actively help shape it. On top of the FOSS ecosystem, our global economy has been propelled to unprecedented levels.

Much of this incredible growth was achieved within a (relatively) calm geopolitical situation, in the wake of the Cold War, which ended in the very same year that also saw the genesis of the World Wide Web at CERN in Switzerland. Economists, philosophers and other observers at the time spoke of the 'end of history' and expected no more big conflicts at the superpower level. We could now globalise the economy and all work together. The flood of innovation taking place all around us promised a bright future for all, with room for altruism and collaboration. In retrospect it certainly was an ideal situation for an optimistic and constructive global movement like the FOSS community to take the helm.

But apart from the fact that under the surface that narrative was already flawed (with some actors like the USA having a double agenda, as the Snowden and Shadow Brokers revelations exposed), history didn't end. To some ironic extent we are now becoming victims of our own success. In recent years we've seen geopolitical stability punctured by war efforts leveraging low-cost technology that includes heaps of FOSS. Social media powered by FOSS infrastructure promote disinformation and have successfully stirred large-scale polarisation. Within some of the largest and most populous countries on the planet, authoritarian regimes have successfully used technology to break opposition in a new race towards totalitarianism. While for instance Europe has tried to regulate 'dual use' technology, 'any use' technology (which our libre licenses guarantee) has escaped our attention. Even in countries which had stable non-authoritarian regimes there is a visible technology-assisted relapse towards anti-democratic movements. On the back of a tech stack which consists of FOSS with a thin crust of proprietary special sauce, unprecedented private capital (sometimes referred to as 'hypercapitalism') is interfering with global politics at an alarming rate. Apart from the direct democratic imbalance, the resulting oligarchy is giving rise to overt nepotism, corruption and a new global protectorate for predatory business models and unethical extractive behaviour. Expecting peace in cyberspace any time soon is probably naive, and free and open source technology stands to make up a significant part of the battleground.

At the same time we are facing other challenges, such as climate change and an imminent scarcity of non-renewable resources. We have more people living on the surface of the planet than ever before, and they are consuming more raw materials and more energy than ever. This won't go on indefinitely. And right at that point we see an army of next generation Trojan horses galloping through the gates of our global commons villages, accelerating our use of both. Generative pre-trained transformers (also known as Large Language Models) kindly offer to take cumbersome and boring coding work off our hands. They can liberate us from responsibility and allow us to do other things or move even faster.

But is it really wise to accept this apparent gift, or should we be a little more suspicious? Just as it has proven way too easy for AI to poison the web with fake content, our software supply chain is vulnerable to manipulation. The attack surface is immense. Due to the inherent complexity of software, manipulation is easier to achieve and harder to detect before it is too late. While many talented and committed people have spent years reverse engineering binary blobs to avoid the associated risks, those blobs were at least isolated and clearly marked. AI is the ultimate black box and it introduces significantly more uncertainty: it rewrites the truth from the inside.

AI in its current form has no actual sense of truth or ethics. As with Russian roulette, once in a while the models completely bork up and create phantom code and real risk - and that is even in a best-case scenario, without assuming malicious intent and manipulation from the outside. In an adversarial scenario (and this adversity can come from traditional nation state actors with non-aligned interests, but also from corporate or even private individuals with some determination - as Cambridge Analytica illustrated so vividly), manipulation only requires subtle changes. At the frantic scale at which any available learning content is ingested from the internet these days, one can expect targeted adversarial training that manipulates specific code with subtle triggers to go unnoticed.

As a community we have spent billions of hours of careful coding and software engineering to make free and open source technology as trustworthy as it is today. Geopolitical conflict is an incentive to hollow out that trust. AI is an additional leap of faith, and if you look at the forces driving its adoption and their interests, are we really sure those black boxes are safe to invite into our trusted code base? It is clear that the end game of AI coding is not a healthy FOSS ecosystem, but its total displacement. The threats of machine-crafted and human-crafted malicious code in wartime FOSS are equally realistic. Perhaps we can find a middle ground, where we combine some AI and human skill - and add enough checks and balances, and a variety of assurances through compartmentalisation, formal and symbolic proofs and other traditional means of quality assurance.

This talk is an open exploration of some of the challenges the FOSS community will face in the years ahead, working towards a hopeful notion of maximally defendable FOSS.

Speakers

Michiel Leenaars

Links