veganism.social is one of the many independent Mastodon servers you can use to participate in the fediverse.

#ollama

8 posts · 7 participants · 0 posts today

Introducing MoonPiLlama

Adam Jenkins has made a YouTube video showing how to install #MoodleBox and Ollama on a Raspberry Pi 4. MoodleBox is a custom distribution of #Moodle built specifically for the Raspberry Pi, but the Moodle part aside, the video includes good information on how to get Ollama and Moodle working together in general. It also includes gratuitous use of a yellow rubber duck.
(Yes, Pi 4, not Pi 5.)

youtube.com/watch?v=KqQfzhJJFP
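As a rough sketch of the kind of setup the video covers — this assumes Ollama's official install script (which supports Linux on ARM64) and uses a small model as an example; the exact model the video installs may differ:

```shell
# Install Ollama via the official convenience script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a small model; anything much larger than ~3-4B parameters
# will struggle in a Pi 4's 4-8 GB of RAM
ollama pull phi3:mini

# Chat interactively, or let Moodle talk to the local API on port 11434
ollama run phi3:mini "Hello from a Raspberry Pi"
```

Once the server is running, other software on the Pi (Moodle included) can reach it at `http://localhost:11434`.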

@nothingfuture I haven't dug too deeply into it (yet), but you can find more information about the training data that was used for the models they list in the Google Doc.

For instance, huggingface.co/microsoft/Phi-3 was "trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties."

#nercomp25 #AI #ollama

huggingface.co/microsoft/Phi-3-medium-128k-instruct · Hugging Face

Last night I was up until 2 AM trying to get #truenas #amd drivers installed inside of a #docker #container so that #ollama would actually use the #gpu. I was so close. It sees the GPU, it sees it has 16 GB of RAM, then it uses the #cpu.

TrueNAS locks down the file system at the root level, so if you want to do much of anything, you have to do it inside of a container. So I made a container for the #rocm drivers, which btw comes to like 40 GB in size.

It's detecting the GPU, but I don't know if the ollama container is missing some commands it may need, e.g. rocminfo.
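For comparison, Ollama publishes a dedicated ROCm image that bundles the runtime libraries, so the container only needs the two standard AMD device nodes passed through. A minimal sketch (generic ROCm device paths, not anything TrueNAS-specific):

```shell
# Run Ollama's ROCm image with the AMD GPU device nodes exposed.
# /dev/kfd is the ROCm compute interface, /dev/dri the render nodes.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm

# Check whether the GPU was actually picked up at startup
docker logs ollama 2>&1 | grep -i -e rocm -e amdgpu
```

If the logs still show a CPU fallback, that usually points at the host kernel driver (amdgpu) rather than anything missing inside the container.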

Another alternative, one I don't really want, is to install either #debian or Windows as a VM — Windows, because I previously tested the application running locally in Windows on this machine and it was super fast. It isn't ideal in terms of RAM usage, but I may be able to run the models more easily with the #windows drivers than the #linux ones.

But anyway, last night was too much of #onemoreturn for a weeknight.

Need to classify products by description? 🤔 (Un)Perplexed Spready + AI can do it! Use `=ASK_LOCAL2` to compare product features, packaging and more. It's like having an AI assistant in your spreadsheet! 🤩 Check it out: matasoft.hr/qtrendcontrol/inde

#AIinSpreadsheets #DataDriven #Efficiency #Innovation #AI #Ollama #SpreadsheetAI #DataAnalysis #Privacy #Spreadsheets #Productivity #LLM #AutomationRevolution #StandardizationMadeEasy #CustomerInsights #ProductAnalysis #SentimentAnalysis #MDM

Hongkiat: Running Large Language Models (LLMs) Locally with LM Studio. “Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline availability. However, these models can be resource-intensive and require proper optimization to run efficiently. In this article, we will walk you through optimizing your […]

https://rbfirehose.com/2025/03/25/hongkiat-running-large-language-models-llms-locally-with-lm-studio/
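As a minimal sketch of what "running locally" looks like in practice: Ollama serves an HTTP API on `localhost:11434`, and the payload below targets its `/api/generate` endpoint. The model name and the `num_ctx` value here are illustrative — shrinking the context window is one of the simpler memory optimizations the article's theme points at:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str, num_ctx: int = 2048) -> dict:
    """Build a payload for Ollama's /api/generate endpoint.

    A smaller num_ctx trims the KV cache and so the memory footprint;
    stream=False returns one JSON object instead of a token stream.
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_ctx": num_ctx},
    }

payload = build_request("phi3", "Summarize: LLMs can run locally.")
body = json.dumps(payload)
# With a local Ollama server running, send `body` as the POST data to
# OLLAMA_URL (e.g. via urllib.request) and read the JSON response.
```

No cloud round-trip, no API key — which is where the privacy and cost advantages the article mentions come from.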