Brussels / 3 & 4 February 2024


Reducing the risks of open source AI models and optimizing upsides


Leaders in AI development from OpenAI, DeepMind and Anthropic signed the following statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

What exactly is that risk, where does it come from, and how can the open source community work on AI so that it is as beneficial as possible while avoiding these risks?

Jonathan Claybrough, software engineer from the European Network for AI Safety, will briefly introduce the topic and the main sources of risk in about ten minutes, then open the floor to an expert panel of speakers on AI governance and open source. We plan to interact heavily with the audience so that the open source community gets represented in AI governance. The panel experts will be Alexandra Tsalidis from the Future of Life Institute's policy team and Felicity Reddel from the foresight team at ICFG. Stefania Delprete, a data scientist with extensive experience in the open source community (Python and Mozilla), will moderate the session.

Key points of the presentation:
- Current vulnerabilities you expose yourself to when using AI models (low robustness, hallucinations, trojans, ...)
- Open weights of AI models don't bring the guarantees of open source (you can't read the code, debug, or modify it precisely)
- Steps to reduce user (developer) risk (model cards, open datasets)
- Steps to reduce misuse risk (capability evaluations, scoped applications, responsible release)
Expert panel debate questions:
- What are the downside risks of unrestricted open source AI proliferation?
- What would governance of open source AI models that leads to good outcomes look like?
- How can the open source community contribute to AI Safety?

Speakers

Stefania Delprete
Jonathan Claybrough
Felicity Reddel
