#cache

Well, great, WordPress – or rather Jetpack Boost. If you use your home-grown Critical CSS tweak, the menu stops working.
If the menu in your WordPress Twenty Twenty-Five theme suddenly stops being clickable, the culprit could be the feature

Optimize critical CSS loading in the P
blog.lxkhl.com/na-toll-wordpre
#Cache #Jetpack #Theme #TwentyTwentyFive #Wordpress
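
If the menu needs to work again right away, the quickest fix is toggling off the critical CSS module in the Boost dashboard; failing that, a blunt stopgap via WP-CLI (a sketch assuming WP-CLI is installed and run from the WordPress root, with the standard plugin slug):

# stopgap: deactivate Jetpack Boost entirely until the critical-CSS bug is resolved
$ wp plugin deactivate jetpack-boost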

🐘 Mastodon Account Archives 🐘

TL;DR Sometimes Mastodon account backup archives fail to download via browser, but will download via fetch with some flags in the terminal. YMMV.

the following are notes from recent efforts to get around browser errors while downloading an account archive link.

yes, surely most will not encounter this issue, and that's fine. there's no need to add a "works fine for me"; if this does not apply to your situation, that's fine too. however, if one does encounter browser errors (there were several unique ones and I don't feel like digging them out of the logs), the notes below may help.

moving on... after some experimentation with discarding the majority of the URL's dynamic parameters, I have it working on the cli as follows:

» \fetch -4 -A -a -F -R -r --buffer-size=512384 --no-tlsv1 -v ${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256

the primary download URL (everything before the query initiator "?") has been substituted as ${URL_PRE_QMARK}, and I kept only Amazon's algorithm parameter; the rest of the URL (notably including the "expire" tag) seems to be unnecessary.
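
for reference, a minimal shell sketch of that substitution, using the same placeholders as the breakdown below (the full signed URL here is a placeholder):

» FULL_URL="https://${SERVER}/${MASTO_DIR}/backups/dumps/${TRIPLE_LAYER_SUBDIRS}/original/archive-${FILE_DATE}-${SHA384_HASH}.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=..."
» URL_PRE_QMARK="${FULL_URL%%\?*}"   # drop everything from the first "?" onward
» \fetch -4 -A -a -F -R -r --buffer-size=512384 --no-tlsv1 -v "${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256"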

IIRC the reasoning there is that the CDN defaults to computationally inexpensive front-line cache management, where the expiry information is embedded in the URL rather than looked up in metrics internal to the CDN clusters.

shorter version: dropping all of the params except the hash algo will initiate a fresh zero-cached hit at the edge, though that has likely been cached on the second/non-edge layer due to my incessant requests after giving up on the browser downloads.

increasing the buffer size and forcing IPv4 help with some firewall rules on my router's side, which may or may not be of benefit to others.

- Archive directory aspect of URL: https://${SERVER}/${MASTO_DIR}/backups/dumps/${TRIPLE_LAYER_SUBDIRS}/original/
- Archive filename: archive-${FILE_DATE}-${SHA384_HASH}.zip

Command:

» \fetch -4 -A -a -F -R -r --buffer-size=512384 --no-tlsv1 -v ${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256

Verbose output:

resolving server address: ${SERVER}:443
SSL options: 86004850
Peer verification enabled
Using OpenSSL default CA cert file and path
Verify hostname
TLSv1.3 connection established using TLS_AES_256_GCM_SHA384
Certificate subject: /CN=${SERVER}
Certificate issuer: /C=US/O=Let's Encrypt/CN=E5
requesting ${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256
remote size / mtime: ${FILE_SIZE} / 1742465117
archive-${FILE_DATE}-${SHA384_HASH}.zip 96 MB 2518 kBps 40s
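
a quick sanity check on the result, assuming unzip is available (this only verifies the zip structure, not the hash embedded in the filename):

» unzip -t archive-${FILE_DATE}-${SHA384_HASH}.zip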

@stefano looks to be working now :)

Oh damn. I just discovered that the cache plugin breaks the map display on my blog.

When I'm logged in, I can see all the POI markers on the map and the route's elevation profile, and I can download the GPX file; without logging in, there's only the map with the route drawn on it.

I'll have to tinker with it again, or disable caching entirely, since it doesn't save the blog from the FediDDoS anyway and all the remaining traffic is negligible.

👑 Cache is King: Smart Page Eviction with eBPF

arxiv.org/abs/2502.02750

arXiv.org — Cache is King: Smart Page Eviction with eBPF

The page cache is a central part of an OS. It reduces repeated accesses to storage by deciding which pages to retain in memory. As a result, the page cache has a significant impact on the performance of many applications. However, its one-size-fits-all eviction policy performs poorly in many workloads. While the systems community has experimented with a plethora of new and adaptive eviction policies in non-OS settings (e.g., key-value stores, CDNs), it is very difficult to implement such policies in the page cache, due to the complexity of modifying kernel code. To address these shortcomings, we design a novel eBPF-based framework for the Linux page cache, called cachebpf, that allows developers to customize the page cache without modifying the kernel. cachebpf enables applications to customize the page cache policy for their specific needs, while also ensuring that different applications' policies do not interfere with each other and preserving the page cache's ability to share memory across different processes. We demonstrate the flexibility of cachebpf's interface by using it to implement several eviction policies. Our evaluation shows that it is indeed beneficial for applications to customize the page cache to match their workloads' unique properties, and that they can achieve up to 70% higher throughput and 58% lower tail latency.
#linux #kernel #cache

After the server choked for a while with my last few blog posts (most likely because of federation), I swapped the cache plugin from WP Super Cache to the supposedly better LS Cache, hoping that requests from fedi instances fetching a post's preview card would no longer bog down the site.

The site did indeed run briskly after the plugin swap, so I hoped the next publication would go fine, but when push came to shove it turned out that, unfortunately, it made no difference.

I'd dig into it now, but the admin panel is also returning 500s and 503s. I'll have to wait out the onslaught and look for a solution later.
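
One thing worth checking once things calm down is whether LS Cache actually serves those fediverse preview fetches from cache. A sketch using a Mastodon-style user agent (the post URL is a placeholder; LiteSpeed Cache normally reports hits in an x-litespeed-cache response header):

# request a post the way a Mastodon instance would, and inspect the cache header
$ curl -sI -A "http.rb/5.1.1 (Mastodon/4.2.0; +https://example.social/)" https://example.blog/sample-post/ | grep -i x-litespeed-cache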

Despite the #mastodon media #cache duration option being set to 14 (now 7) days, system/cache/accounts/ is 50gb. Blegh. Are there really that many different people tooting on my timeline?

$ du -sh *
18G avatars
36G headers

`tootctl media remove-orphans` cleared only 77MB.

`tootctl media remove --prune-profiles --days 90 --dry-run` claims to remove 36GB. That's all strangers I haven't fingered in more than 3 months? So retoots/boosts/replies/etc?
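
If the dry-run numbers look plausible, the real run is the same command without the flag; a sketch assuming a standard install (live directory at /home/mastodon/live, production environment):

# remove cached remote profile media untouched for 90+ days
$ cd /home/mastodon/live
$ RAILS_ENV=production bin/tootctl media remove --prune-profiles --days 90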

For the love of god if you're trying to convince your employer or an organization to "come over" to the #Fediverse, do NOT under any circumstances suggest that they set up a #Mastodon #instance!

Mastodon is the ONLY Fediverse platform that, by default, forces #server #admins to #cache, #copy, and #proxy all #media that passes through its server. This means that not only are server admins paying to host the media their users #upload, but they have to pay to host the media everyone else on the fucking fediverse uploads as well.

Other platforms offer this feature, but Mastodon is the only one that has it turned on by default.

This results in Mastodon server admins having to shell out thousands of dollars each month in #S3 hosting costs for no reason whatsoever.

There are much better alternative instance platforms than Mastodon.