veganism.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Veganism Social is a welcoming space on the internet for vegans to connect and engage with the broader decentralized social media community.

#llvm

5 posts · 5 participants · 0 posts today

One of the reasons I'm still using GitHub for a lot of stuff is the free CI, but I hadn't really realised how little that actually costs. For #CHERIoT #LLVM, we're using Cirrus-CI with a 'bring your own cloud subscription' thing. We set up ccache backed by a cloud storage thing, so incremental builds are fast. The bill for last month? £0.31.

We'll probably pay more as we hire more developers, but I doubt it will cost more than £10/month even with an active team and external contributors. Each CI run costs almost a rounding-error amount, and that's doing a clean (+ ccache) build of LLVM and running the test suite. We're using Google's Arm instances, which have amazingly good price:performance (much better than the x86 ones), for all CI, and build just the x86-64 releases on x86-64 hardware (we do x86-64 and AArch64 builds to pull into our dev container).

For personal stuff, I doubt the CI that I use costs more than £0.10/month at this kind of price. There's a real market for a cloud provider that focuses on scaling down more than on scaling up and makes it easy to deploy this kind of thing (we spent far more money on the developer time to figure out the nightmare GCE web interface than we've spent on the compute. It's almost as bad as Azure and seems to be designed by the same set of creatures who have never actually met a human).

📈 Ah, the old "Calculate Throughput with LLVM's Scheduling Model" routine—because nothing screams weekend fun like diving into compiler internals and #microarchitecture performance analysis! 🤓 Just remember, when life gives you throughput, measure it in #IPC and don't forget to bring your inverse throughput for extra giggles. 😂
myhsu.xyz/llvm-sched-interval- #CalculateThroughput #LLVM #SchedulingModel #CompilerInternals #PerformanceAnalysis #HackerNews #ngated

Min Hsu's Homepage · Calculate Throughput with LLVM's Scheduling Model: Compiler, uArch, and a little bit of... jigsaw puzzle?

Is there a way to get #Apple #Clang to warn on aggregate initialization w/ designated initializers in the wrong order? In C++ it's supposed to be illegal to aggregate-initialize a struct in any other order than declaration order, but Clang never warns on that so I don't catch it until CI complains or I try to compile the code on a Linux machine

Replied to Nick W.

I'm wondering why FreeBSD uses #LLVM. Not as in where and why specifically they use clang/etc., but this in particular: what's the point of using it if the FreeBSD core team won't even agree on an X11-based GNOME or XFCE installer that works on amd64? Not everyone wants a headless FreeBSD server or just a remote non-GUI file server, guys!

"On FreeBSD, LLVM (specifically Clang/LLVM) serves as the primary compiler toolchain, including the compiler (Clang), linker (LLD), and debugger (LLDB), used for building the FreeBSD operating system, kernel, and various software packages from the FreeBSD Ports Collection"

🌘 LLVM Fortran Levels Up: Goodbye flang-new, Hello flang!
➤ Flang's decade-long journey and the road ahead
blog.llvm.org/posts/2025-03-11
LLVM's Fortran compiler, Flang, has been under development since 2020 and is officially renamed to flang in LLVM 20, marking an important step forward. Flang's history spans nearly a decade, including several rewrites and the adoption of MLIR. Fortran continues to matter because of its wide use in scientific computing, and its compilers and tooling have seen notable progress recently.
+ "The rename of Flang is a real step forward; hoping it brings more innovation to the Fortran ecosystem!"
+ "It's great news for scientific computing that Fortran is getting a new lease on life!"
#LLVM #Fortran #Compiler

The LLVM Project Blog · LLVM Fortran Levels Up: Goodbye flang-new, Hello flang!
LLVM has included a Fortran compiler "Flang" since LLVM 11 in late 2020. However, until recently the Flang binary was not flang (like clang) but instead flang-new. LLVM 20 ends the era of flang-new.

Performance of the #Python 3.14 tail-call interpreter · About a month ago, the #CPython project merged a new implementation strategy for its bytecode interpreter. The initial headline results were very impressive, showing a 10-15% performance improvement on average across a wide range of benchmarks on a variety of platforms.

Unfortunately, as I will document in this post, these impressive performance gains turned out to be primarily due to inadvertently working around a regression in #LLVM 19. When benchmarked against a better baseline (such as GCC, clang-18, or LLVM 19 with certain tuning flags), the performance gain drops to 1-5% or so, depending on the exact setup.

blog.nelhage.com/post/cpython-

Made of Bugs · Performance of the Python 3.14 tail-call interpreter
A deep dive into the performance of Python 3.14's tail-call interpreter: how the performance results were confounded by an LLVM regression, the surprising complexity of compiling interpreter loops, and some reflections on performance work, software engineering, and optimizing compilers.

Healthy Competition With #GCC15 vs. #LLVM #Clang20 Performance On #AMD #Zen5
With some codebases/workloads there can be strong advantages at times for one compiler over the other, but at a high level, #GCC and Clang #compiler performance is extremely tight with recent versions on modern #x86_64 hardware.
phoronix.com/review/clang20-gc
