Building Your (Local) LLM Second Brain
- Track: Low-level AI Engineering and Hacking
- Room: UB2.252A (Lameere)
- Day: Sunday
- Start: 11:50
- End: 11:55
- Video only: ub2252a
LLMs are hotter than ever, but most available LLM-based solutions require you to use models trained on data of unknown provenance, send your most important data off to corporate-controlled servers, and consume prodigious amounts of energy every time you write an email.
What if you could design a “second brain” assistant with OSS technologies that lives on your own laptop?
We’ll walk through the OSS landscape, discussing the nuts and bolts of combining Ollama, LlamaIndex, OpenWebUI, Autogen and Granite models to build a fully local LLM assistant. We’ll also discuss some of the particular complexities involved when your solution involves a local quantized model versus one that’s cloud-hosted.
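As a taste of what a local setup like this involves, here is a minimal sketch (not from the talk itself) of building a request for a local Ollama server's `/api/chat` endpoint; the model name `granite3-dense` is illustrative, and the commented-out send step assumes Ollama is running on its default port:

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body that Ollama's POST /api/chat endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response instead of chunks
    }

payload = build_chat_request("granite3-dense", "Summarize my meeting notes.")

# To actually send it (requires a running local Ollama server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/chat",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# reply = json.loads(urllib.request.urlopen(req).read())
```

Because everything speaks to `localhost`, your prompts and documents never leave the machine — the core appeal of the fully local stack the talk describes.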
Speakers
Olivia Buzek