

#cuda


NVIDIA finally joins the 21st century by adding #Python support to #CUDA, because who needs cutting-edge tech when you can just catch up with 2006? 🕰️ Meanwhile, The New Stack is begging you to re-subscribe like a clingy ex who just can't take a hint. 📧💔
thenewstack.io/nvidia-finally- #NVIDIA #TheNewStack #TechNews #Subscribe #HackerNews #ngated

The New Stack · NVIDIA Finally Adds Native Python Support to CUDA
For years, NVIDIA’s CUDA software toolkit for GPUs didn't have native Python support. But that’s now changed.

One of the best interviews on AI and GPUs I've ever seen was posted earlier today. Jensen Huang is really super smart in my opinion, and I think this interview is definitely worth watching. I can understand how it was a person like him who turned NVIDIA into a company with a market cap of more than 1 trillion USD.

Jensen Huang on GPUs - Computerphile
youtube.com/watch?v=G6R7UOFx1bw

#NVIDIA #GPU #AI

Just got my RSS reader YOShInOn building with uv and running under WSL2 with the CUDA libraries, despite a slight version mismatch... All I gotta do is switch it from ArangoDB (terrible license) to Postgres, and it might have a future... With sentence_transformers running under WSL2 I might even be able to deduplicate the million images in my Fraxinus image sorter
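Embedding-based deduplication like this can be sketched in plain NumPy once the embeddings exist (the sentence_transformers `encode()` call is elided here); the function name and the 0.95 threshold are illustrative choices, not anything from YOShInOn:

```python
import numpy as np

def find_near_duplicates(embeddings: np.ndarray, threshold: float = 0.95):
    """Return index pairs (i, j), i < j, whose cosine similarity exceeds threshold.

    `embeddings` is an (n, d) array, e.g. from sentence_transformers' encode().
    This builds the full n x n similarity matrix -- fine for thousands of items;
    a million images would need chunking or an ANN index instead.
    """
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)  # normalize rows
    sims = unit @ unit.T                             # cosine similarities
    i, j = np.where(np.triu(sims, k=1) > threshold)  # upper triangle only
    return list(zip(i.tolist(), j.tolist()))

# Tiny demo with hand-made vectors: rows 0 and 1 are nearly parallel.
vecs = np.array([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]])
print(find_near_duplicates(vecs))  # [(0, 1)]
```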

Replied to Giuseppe Bilotta

Even now, Thrust as a dependency is one of the main reasons why we have a #CUDA backend, a #HIP / #ROCm backend and a pure #CPU backend in #GPUSPH, but not a #SYCL or #OneAPI backend (which would allow us to extend hardware support to #Intel GPUs). <doi.org/10.1002/cpe.8313>

This is also one of the reasons why we implemented our own #BLAS routines when we introduced the semi-implicit integrator. A side-effect of this choice is that it allowed us to develop the improved #BiCGSTAB that I've had the opportunity to mention before <doi.org/10.1016/j.jcp.2022.111>. Sometimes I do wonder if it would be appropriate to “excorporate” it into its own library for general use, since it's something that would benefit others. OTOH, this one was developed specifically for GPUSPH and it's tightly integrated with the rest of it (including its support for multi-GPU), and refactoring it into a library like cuBLAS is

a. too much effort
b. probably not worth it.

Again, following @eniko's original thread, it's really not that hard to roll your own, and probably less time consuming than trying to wrangle your way through an API that may or may not fit your needs.

6/
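For readers curious what "rolling your own" looks like at its simplest, here is a minimal unpreconditioned BiCGSTAB in plain Python/NumPy. This is a sketch of the textbook algorithm (van der Vorst 1992), not GPUSPH's improved multi-GPU implementation:

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, maxiter=200):
    """Unpreconditioned BiCGSTAB for A x = b (textbook form)."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    r_hat = r.copy()                 # arbitrary fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros(n)
    for _ in range(maxiter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:  # converged on the half-step
            return x + alpha * p
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
        rho = rho_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = bicgstab(A, b)
print(np.allclose(A @ x, b))  # True
```

The whole thing is a handful of dot products, AXPYs and matrix-vector products, which is exactly why hand-rolling the underlying BLAS-like kernels is feasible.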

AMD YOLO: because why not base your entire #business #strategy on a meme? 🚀🎉 Thanks to AMD's cultural enlightenment, they're now #shipping #boxes faster than philosophical musings on singularity! 🤯 Who knew rewriting a stack could be as easy as beating #NVIDIA at their own game? Just don't tell CUDA—it might get jealous! 😜
geohot.github.io//blog/jekyll/ #AMD #YOLO #meme #CUDA #competition #HackerNews #ngated

the singularity is nearer · AMD YOLO
AMD is sending us the two MI300X boxes we asked for. They are in the mail.

Hot Aisle's 8x AMD #MI300X server is the fastest computer I've ever tested in #FluidX3D #CFD, achieving a peak #LBM performance of 205 GLUPs/s, and a combined VRAM bandwidth of 23 TB/s. 🖖🤯
The #RTX 5090 looks like a toy in comparison.

MI300X beats even Nvidia's GH200 94GB. This marks a very fascinating inflection point in #GPGPU: #CUDA is not the performance leader anymore. 🖖😛
You need a cross-vendor language like #OpenCL to leverage its power.
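As a sanity check on those figures: LBM is memory-bandwidth-bound, so GLUPs/s translate almost directly into memory traffic. A back-of-the-envelope estimate, assuming roughly 77 bytes moved per cell update (FluidX3D's documented figure for D3Q19 with FP16 DDF storage; ~153 B with FP32 — treat both as assumptions here):

```python
# Back-of-the-envelope: memory traffic implied by an LBM performance figure.
glups = 205e9          # lattice-cell updates per second (205 GLUPs/s)
bytes_per_update = 77  # assumed: D3Q19 with FP16 DDF storage, ~77 B/cell

traffic_tbs = glups * bytes_per_update / 1e12
print(f"{traffic_tbs:.1f} TB/s")
```

That lands around 15.8 TB/s of sustained traffic, comfortably consistent with the 23 TB/s combined VRAM bandwidth of the 8x MI300X node.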

FluidX3D on #GitHub: github.com/ProjectPhysX/FluidX