Brussels / 3 & 4 February 2024


Fortify AI against regulation, litigation and lobotomies

AI models, particularly LLMs, readily and sometimes unwittingly plagiarize content (what constitutes acceptable use is not yet fully resolved), exposing their users to copyright claims and other forms of litigation. An imminent danger to society also lies in how easily models can be poisoned or lobotomized to propagate disinformation. Sources of bias are likewise difficult to uncover and address. The latter is of particular legislative concern; see emerging regulations such as the EU's AI Act, a prototype for what's to come. Without data traceability backed by clear license expressions and provenance, AI models are easy targets. The open source community is particularly vulnerable and defenseless, and without a preemptive strategy the legal risks are untenable. We expect an avalanche of litigation in response to AI-driven disruption of the business of established information providers and creatives. The talk focuses not only on the legal and moral issues at hand but also on the means to address them through traceability and model governance.


Edward C. Zimmermann