veganism.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Veganism Social is a welcoming space on the internet for vegans to connect and engage with the broader decentralized social media community.


#ceph

3 posts · 3 participants · 0 posts today

With #Proxmox you can virtualize workloads on a range of platforms. In his #SLAC talk, our consultant Robert Sander presents the current feature set, with a focus on the automation options available via API and CLI.

Robert, a genuine #Ceph pro, is also part of our Linux conference with a second talk, „Der Ceph Orchestrator - Container für Storage", in which he covers, among other things, how the orchestrator fundamentally works.

🎟️ slac-2025.de

I am getting SO tired of Longhorn: nothing ever works, volumes do not attach, and when they do, the container doesn't detect them and it requires manual intervention.

If only Ceph had S3 sync compatibility, it would be a no-brainer, but at this point, I might as well try it out...

Continued thread

I was in grade school when my mom first set up a RAID box in our house (where she ran her business as a consultant). It was a relatively small thing, but she was doing consulting work on storage systems and I got to play with hardware RAID cards, which was a lot of fun (I mean, I was ten and I was getting to play with a brand new Macintosh Plus, cutting-edge PCs, and anything else she could convince a customer to buy for her).

The first time we lost a drive, she and I spent hours trying to puzzle out how to recover it. There is a big difference between the theory of how RAID works and actually sitting at a table ten minutes before school, watching it slowly jump from 3% recovered to 4%. I mean, it felt like the slowest thing in the world, since she was in the middle of a project and we needed the files.

The first thing I did when I got home was rush over to see that it was only 80-something percent. That put me in a sour mood. :) It wouldn't be done for another couple of hours, but then it worked! It finished about a half hour after she came home and we interrupted dinner to check it out.

That was cool.

It wasn't until a few months later that I found out where it didn't work. The house didn't exactly have clean power, and 80s technology wasn't exactly as reliable as it is today, so we lost another drive. But in the middle of the RAID 5 recovery, we lost a third drive.

And that is when I realized the heartbreak of trying to fix something that couldn't be fixed. Fortunately, it was only a small project then, and we were able to recover most of it from memory and the files we did have.

We ended up upgrading the house to a 200 amp service, and then I got some penalty chores helping my dad run new electrical lines to her office so she could have better power and we stopped losing drives, but that's a different aspect of my childhood.

But it turned out to be a good lesson: drives will fail. It doesn't matter how big they are, how much you take care of them, or anything else. It also taught me that RAID is ultimately fragile. It handles "little" failures, but there is always a bigger failure.

Plus, history has strongly suggested that when my mother or I get stressed, computers have a tendency to break around us. Actually, after the derecho and the stunning series of bad luck I had for three years, I'd say high levels of stress around me cause things to break; I have forty years of history to back that. Hard drives are one of the first things to go around me, which has given me a lot of interest in resilient storage systems, because having the family bitching about Plex not being up is a good way to keep being stressed out. :D

I think that is why I gravitated toward Ceph and SeaweedFS. Yeah, they are fun, but the distributed network is a lot less fragile than a single machine running a RAID. When one of my eight-year-old computers dies, I'm able to shuffle things around and pull it out. When technology improves or I get a few-hundred-dollar windfall, I get a new drive.

It's also my expensive hobby. :D Along with writing.

And yet, cheaper than LEGO.

d.moonfire.us · Entanglement 2021

Guess it's time for a new #introduction, post-instance move.

Hi! I'm Crabbypup, or just 'crabby', though only in name most days.

I'm a Linux flavored computer toucher from Kitchener-Waterloo.

I tend to share stuff about the region, open source software in general, and #linux in particular.

I like to tinker in my #homelab, where I run #proxmox, #ceph, and a bunch of other #selfhosted services including #homeassistant.

I'm a rather inconsistent poster, but I'm glad to be here.

Hmm, our home-built #ISCSI box hosts our entire test environment, i.e. the disks for the VMs under #proxmox. It urgently needs to be replaced, because every time there's problem X (power / HW / reboot) I have to hold my breath wondering whether it will come back up. Now that the prod Ceph has been replaced, I was finally able to finish my test #Ceph, but the Ceph #Performance is really lousy. Rados bench sits in the three-digit range for MB/s.
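
For anyone wanting to reproduce that kind of number, Ceph's built-in benchmark is enough. A minimal sketch, assuming a throwaway pool called testbench that you can delete afterwards:

# create a scratch pool just for benchmarking (name is an example)
ceph osd pool create testbench 32
# 30 seconds of 4 MB object writes; keep the objects so read tests can follow
rados bench -p testbench 30 write --no-cleanup
# sequential read benchmark against the objects written above
rados bench -p testbench 30 seq
# throw the scratch pool away again
ceph osd pool rm testbench testbench --yes-i-really-really-mean-it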

Cursed homelab update:

I learned a _lot_ about Rook and how it manages PersistentVolumes today while getting my PiHole working properly. (Rook is managed Ceph in Kubernetes)

In Kubernetes, the expectation is that your persistent volume provider has registered a CSI driver (Container Storage Interface) and defined StorageClasses for the distinct "places" where volumes can be. You then create a volume by defining a PersistentVolumeClaim (PVC) which defines a single volume managed by a StorageClass. The machinery behind this then automatically creates a PersistentVolume to define the underlying storage. You can create PersistentVolumes manually, but this isn't explored much in the documentation.

In Rook, this system is mapped onto Ceph structures using a bunch of CSI drivers. The default configuration defines StorageClasses for RBD images and CephFS filesystems. There are also CSI drivers for RGW and NFS backed by CephFS. You then create PVCs the normal way using those StorageClasses and Rook takes care of creating structures where required and mounting those into the containers.
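
For concreteness, a dynamically provisioned claim looks roughly like this. The StorageClass name rook-cephfs, the claim name, and the namespace are assumptions taken from Rook's example manifests, not from any particular cluster:

kubectl apply -f - <<'EOF'
# A PVC against a Rook-managed StorageClass; the CSI driver then creates
# the matching PersistentVolume (and the CephFS subvolume behind it).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-config            # hypothetical claim name
  namespace: default
spec:
  accessModes:
    - ReadWriteMany              # CephFS supports RWX; RBD images are usually RWO
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs  # assumed name from Rook's example manifests
EOF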

However, there's another mechanism which is much more sparsely mentioned and isn't part of the default setup: "static provisioning". You see, Ceph clusters are used to store stuff for systems that aren't Kubernetes, and people tend to organise things in ways that the "normal" CSI driver + StorageClass + PVC mechanism can't understand and shouldn't manage. So if you want to share that data with some pod, you need to create specially structured PersistentVolumes to map those structures into Kubernetes.

Once you set up one of these special PersistentVolumes and attach it to a pod using a PVC, you effectively get a "traditional" cephfs volume mount, but using Rook's infrastructure and configuration, so all you need to specify is the authentication data and the details for that specific volume and you're done.
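
The static variant has roughly the following shape. Every name, path, and secret here is a placeholder, and the exact volumeAttributes keys should be checked against the ceph-csi static-provisioning docs for the version in use:

kubectl apply -f - <<'EOF'
# A hand-written PV that maps an existing CephFS directory into Kubernetes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-share                       # hypothetical
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Ti                          # informational for static volumes
  persistentVolumeReclaimPolicy: Retain   # deleting the PV leaves the data alone
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com # "<operator namespace>.cephfs.csi.ceph.com"
    volumeHandle: media-share             # any unique string for a static volume
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs                        # the existing CephFS filesystem
      rootPath: /media                    # the pre-existing directory to expose
      staticVolume: "true"
    nodeStageSecretRef:
      name: cephfs-static-user            # secret holding userID/userKey
      namespace: rook-ceph
EOF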

The only real complication is that you need a separate secret for this, but chances are you're referencing things in separate places from the "normal" StorageClass stuff and giving Rook very limited access to your storage, so this isn't a big deal.

So circling back around to the big question I wanted answers for: Does Rook mess with stuff it doesn't know about in a CephFS filesystem?

No.

If you use the CSI driver + StorageClass mechanism it will only delete stuff that it creates itself and won't touch anything else existing in the filesystem, even if it's in folders it would create or use.

If you use a static volume, then you're in control of everything it has access to and the defaults are set so that even if the PersistentVolume is deleted, the underlying storage remains.

So now onto services that either should be using CephFS volumes or need to access "non-kubernetes" storage, starting with finding a way to make Samba shares in a container.

#ceph #rook #homelab

New blog post: blog.mei-home.net/posts/k8s-mi

I like to think that many of my blog posts are mildly educational, perhaps even helping someone in a similar situation.

This blog post is the exception. It is a cautionary tale from start to finish. I also imagine that it might be the kind of post someone finds on page 14 of google at 3 am and names their firstborn after me.

ln --help · Nomad to k8s, Part 25: Control Plane Migration. Migrating my control plane to my Pi 4 hosts.

Find out why #Ceph is our CTO's preferred storage solution!

In our latest article, Thibaut Démaret, CTO of Worteks, explores the many advantages of Ceph, a flexible and high-performance #OpenSource storage solution.

👀 Read the full article to learn more and discover why Ceph, c'est bien (Ceph is great)!

worteks.com/blog/Ceph-c-est-bi

@ow2 @OpenInfra @opensource_experts @osxp_paris

Replied in thread

One observation from today’s test that I need to figure out:

The Rook operator removed custom labels from the ceph-exporter and csi-provisioner deployments when it was restarted. The annotations were untouched. Need to work out whether this is by design or not…

Would it matter if these #rook #ceph deployments are not scaled down?

Continued thread

The cluster is back now. No data loss occurred. Everything surprisingly kept running or came back very fast.

Also, I now know how to do manual surgery on a Ceph monmap.
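
For the record, the rough shape of that surgery on a non-cephadm deployment (following the Ceph docs on editing monmaps) is to stop a monitor, extract and edit its map offline, then inject it back. The monitor name and address here are purely illustrative:

# stop the monitor you are about to operate on
systemctl stop ceph-mon@mon1

# extract the current monmap from the stopped monitor's store
ceph-mon -i mon1 --extract-monmap /tmp/monmap

# inspect it, drop the bad entry, add the corrected one
monmaptool --print /tmp/monmap
monmaptool --rm mon1 /tmp/monmap
monmaptool --add mon1 192.168.1.10:6789 /tmp/monmap

# inject the edited map and bring the daemon back
ceph-mon -i mon1 --inject-monmap /tmp/monmap
systemctl start ceph-mon@mon1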

On the positive side: My MONs are now all located on the right host.

And if I wasn't such an impatient git sometimes, this could have certainly been accomplished without shaving off 5 years from my life expectancy.

Now excuse me while I return right back to my fainting couch.

2/2