Docker launched "Docker Model Runner" to run LLMs through llama.cpp with a single "docker model" command. In this episode Bret walks through examples and useful use cases for running LLMs this way. He breaks down the internals: how it works, when you should (and shouldn't) use it, and how to get started with Open WebUI for a private, ChatGPT-like experience.
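If you want to try it while listening, the basic CLI flow looks roughly like this (a hedged sketch; the model name is just an example, and exact commands or flags may differ, so check the Model Runner docs linked below):

```
# Pull a model from Docker Hub's ai/ namespace (packaged as an OCI artifact)
docker model pull ai/smollm2

# See which models you've pulled locally
docker model list

# Run a one-shot prompt, or omit the prompt for an interactive chat
docker model run ai/smollm2 "Explain what an OCI artifact is in one paragraph."
```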
★Topics★
Model Runner Docs
Hub Models
OCI Artifacts
Open WebUI
My Open WebUI Compose file
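Bret's actual Compose file is linked above; the snippet below is only a minimal sketch of the idea, pointing Open WebUI at Model Runner's OpenAI-compatible API. The internal endpoint URL is an assumption based on the Model Runner docs, so verify it against the current documentation before relying on it.

```yaml
# compose.yaml — minimal sketch, not Bret's file
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"              # browse to http://localhost:3000
    environment:
      # Assumed Model Runner endpoint reachable from containers; confirm in the docs
      OPENAI_API_BASE_URL: http://model-runner.docker.internal/engines/v1
      OPENAI_API_KEY: unused     # Model Runner doesn't need a real key; a placeholder keeps Open WebUI happy
    volumes:
      - open-webui:/app/backend/data   # persist chats and settings

volumes:
  open-webui:
```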
You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news!
Grab the best coupons for my Docker and Kubernetes courses.
Join my cloud native DevOps community on Discord.
Grab some merch at Bret's Loot Box
Homepage bretfisher.com