Brussels / 3 & 4 February 2024

Introducing 'Refiners' – A Micro-Framework for Seamless Integration of Adapters in Neural Networks


Check out our docs!

In the evolving landscape of artificial intelligence, we're witnessing a significant shift in deep-learning design patterns, particularly with the advancement of foundational models. A key innovation in this area is the concept of 'adapters' – small yet powerful add-ons that enhance an existing neural network with new capabilities without retraining its core layers.
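To make the adapter idea concrete, here is a minimal, hypothetical sketch in plain PyTorch (not the Refiners API) of one well-known adapter family, LoRA-style low-rank adapters: the original linear layer is frozen, and a small trainable add-on learns the new behaviour on top of it. Class and variable names below are illustrative only.

```python
import torch
from torch import nn


class LinearWithLoRA(nn.Module):
    """A frozen linear layer augmented by a small low-rank adapter."""

    def __init__(self, base: nn.Linear, rank: int = 4) -> None:
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # the core layer is never retrained
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # the adapter starts out as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus a small learned correction from the adapter.
        return self.base(x) + self.up(self.down(x))


frozen = nn.Linear(128, 128)
adapted = LinearWithLoRA(frozen, rank=8)
out = adapted(torch.randn(2, 128))  # only the adapter's parameters are trainable
```

Only the adapter's parameters carry gradients, which is what makes adapters cheap to train and easy to swap in and out of a foundational model.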

However, this innovation brings a hefty software engineering challenge, especially when it comes to creating an open-source library that encompasses a diverse range of these adapters. Neural networks, by their nature, are built using frameworks that procedurally encode their operations. Introducing a new adapter therefore often requires a substantial rewrite or rethinking of the existing code base.

To address this challenge, we've developed 'Refiners' – a streamlined micro-framework built on top of PyTorch, one of the most popular machine learning frameworks. The goal of Refiners is to serve as a comprehensive collection of foundational models and their respective adapters. What sets it apart is its primary design pattern: every neural network is modelled as a 'Chain' of basic layers, neatly nested one after the other. On top of this, the 'Context API' makes it possible to express complex models, including those that don't follow a simple linear structure. Together, these two ideas simplify the integration of new adapters, letting developers build and enhance neural networks without overhauling their entire code base each time. This approach saves time and opens up new possibilities for innovation in AI development.
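As a rough illustration of these two ideas, the sketch below uses plain PyTorch rather than the actual Refiners API: a chain runs its children in order, a shared context dict stands in for the Context API, and a skip connection is expressed inside a flat chain. Names such as `SimpleChain`, `StoreResidual`, and `AddResidual` are invented for this example.

```python
import torch
from torch import nn


class SimpleChain(nn.ModuleList):
    """Runs its children in order; layers flagged with `uses_context`
    can read from and write to a shared context dict."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        context: dict = {}
        for layer in self:
            x = layer(x, context) if getattr(layer, "uses_context", False) else layer(x)
        return x


class StoreResidual(nn.Module):
    uses_context = True

    def forward(self, x: torch.Tensor, context: dict) -> torch.Tensor:
        context["residual"] = x  # stash the activation for a later layer
        return x


class AddResidual(nn.Module):
    uses_context = True

    def forward(self, x: torch.Tensor, context: dict) -> torch.Tensor:
        return x + context["residual"]  # a skip connection expressed inside a flat chain


model = SimpleChain([
    StoreResidual(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    AddResidual(),
])
# Adapting the model becomes a structural edit on the chain, not a forward() rewrite:
model.insert(2, nn.LayerNorm(64))
y = model(torch.randn(2, 64))
```

Because the network is declared as data (a chain of layers) rather than as procedural forward code, an adapter can be patched in by inserting or replacing elements of the chain, which is the property Refiners builds on.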

Speakers

Benjamin Trom
