I'm away for half a day to go take a competitive exam, and @Taflelli pushes the translation to 72%!
Thank you, tanemmirt!
#OpenWebUI 0.6.22
Introduced support for the #Kabyle (#Taqbaylit) language and refined and expanded the Chinese translations, broadening the platform's linguistic coverage.
https://github.com/open-webui/open-webui/releases/tag/v0.6.22
ai: Open WebUI -1 problems, more ram, $200 plans and ComfyUI URLs
OK, some miscellaneous but valuable comments on the AI journey
Open WebUI model parameters redux
The default context length and other parameters in Open WebUI are really low, like a 2K context window. This doesn't make any sense in the era of AI coding. Some better defaults help a bit, which we've covered before…
https://tongfamily.com/2025/08/04/ai-open-webui-1-problems-more-ram-200-plans-and-comfyui-urls/
#ai #comfyui #Geek #openwebui
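For reference, that 2K figure is Ollama's default `num_ctx`, and it can be overridden per request. A minimal sketch via the Ollama REST API; the host, model name, and the 8192 value are assumptions for illustration, not settings from the post:

```python
import requests

# Ask a locally served Ollama model for a completion while raising the
# context window from the 2K default to 8K for this one request.
resp = requests.post(
    "http://localhost:11434/api/generate",   # default Ollama endpoint (assumption)
    json={
        "model": "llama3.1",                  # hypothetical model name
        "prompt": "Summarize this repository's build instructions.",
        "stream": False,
        "options": {"num_ctx": 8192},         # overrides the low default context length
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Open WebUI exposes an equivalent context-length knob in its per-model advanced parameters, so the same fix can be applied from the UI instead.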
@treibholz Don't get me wrong - I use #AI myself a lot, and I have #Ollama, #OpenWebUI, and #Goose installed locally. I attend agentic coding meetups to discuss the latest developments and how to stay broke by burning your Claude tokens, fighting about whether you should keep your agents on a leash or YOLO your way into production. Nevertheless, I fear we risk losing more than we gain if AI replaces what makes human interaction rewarding. Social animals need the struggle (and the joy) to grow.
Running #OpenWebUI in Docker with #Ollama models: an instant AI chatbot experience with zero cloud snooping that keeps everything where it belongs. Who knew privacy could be this much fun? (Now I just wish I had an M4 Ultra with 128GB of RAM…)
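A minimal sketch of that kind of setup using the Docker SDK for Python, roughly equivalent to the usual docker run one-liner; the port mapping, volume name, and container name are common defaults treated here as assumptions:

```python
import docker

client = docker.from_env()

# Launch Open WebUI, publishing its internal port 8080 on localhost:3000
# and persisting chat data in a named volume. Ollama is assumed to already
# be running on the host at its default port 11434.
client.containers.run(
    "ghcr.io/open-webui/open-webui:main",
    name="open-webui",
    detach=True,
    ports={"8080/tcp": 3000},
    volumes={"open-webui": {"bind": "/app/backend/data", "mode": "rw"}},
    extra_hosts={"host.docker.internal": "host-gateway"},  # lets the container reach the host's Ollama
    restart_policy={"Name": "always"},
)
```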
@Gina If I had to choose, I'd probably go with #Ollama (which has been mentioned several times already). It's licensed under the MIT license, and the models are about as close to open source as you can get. When I play with LLMs, it's what I use: it runs locally and has an API that can be used to integrate with other things. I also have #OpenWebUI to make things prettier. Both can run locally, though OpenWebUI can integrate with cloud LLMs too. Of course, tomorrow everything could change.
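That local API integration can be as small as this sketch with the official ollama Python client; the model name is an assumption:

```python
import ollama

# Chat with a locally pulled model through Ollama's API; nothing leaves the machine.
response = ollama.chat(
    model="llama3.1",  # hypothetical: any model you've pulled with `ollama pull`
    messages=[{"role": "user", "content": "Give me three test ideas for a CSV parser."}],
)
print(response["message"]["content"])
```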
#nixos roadblock getting an m.2 #google Coral TPU up; no more steam left this Sunday morning.
Swapping for #Ubuntu and Docker so I can start testing with #OpenWebUI.
I have not been defeated yet though! Once my PCIe to m.2 E-key adapter shows up, I will attempt to get the device working with NixOS once again. I want the device on the main app server.
I basically have a DIY Perplexity setup running in OpenWebUI (which is running politely alongside Plex). I'm using Mistral-Large with web search via SearXNG and the system prompt that Perplexity uses for their Sonar Pro searches.
And since OpenWebUI has an OpenAI-compatible API, I can connect to it from this GPTMobile app on my phone and query my custom search assistant model.
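Because the endpoint speaks the OpenAI protocol, any standard OpenAI client can talk to it. A minimal sketch assuming Open WebUI on localhost:3000, an API key generated in its settings, and a hypothetical custom model named "search-assistant":

```python
from openai import OpenAI

# Point the standard OpenAI client at Open WebUI's OpenAI-compatible API.
client = OpenAI(
    base_url="http://localhost:3000/api",  # Open WebUI host and port (assumption)
    api_key="sk-...",                      # key created in Open WebUI's account settings
)

reply = client.chat.completions.create(
    model="search-assistant",  # hypothetical custom model defined in Open WebUI
    messages=[{"role": "user", "content": "What changed in Open WebUI 0.6.22?"}],
)
print(reply.choices[0].message.content)
```

Any client that lets you set a custom base URL and key, such as a mobile chat app, can use the same endpoint.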
I set up #OpenWebUI on one of my more powerful servers, and it is fantastic. I'm running a couple of smaller local Llama models, and I've hooked up my Anthropic and OpenRouter API keys to get access to Claude and a bunch of other models, including Mistral, DeepSeek, and more. I also linked my Kagi search API key to give web search capabilities to the models that don't have a web index. I will probably downgrade my Kagi Ultimate subscription to Professional since I no longer need their Assistant.
https://fediverse.tv/videos/watch/dd4f78c4-b66d-40fa-9b56-e26b8bd0d892
I made a video about my interest in large language models and their application to writing promotional scripts, especially for the @rikylinux_ar channel about the La capi project, using #Ollama and Open WebUI to create content...