Dr. Moritz Lehmann<p>I made this <a href="https://mast.hpc.social/tags/FluidX3D" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>FluidX3D</span></a> <a href="https://mast.hpc.social/tags/CFD" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CFD</span></a> simulation run on a Frankenstein zoo of 🟥AMD + 🟩Nvidia + 🟦Intel <a href="https://mast.hpc.social/tags/GPU" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GPU</span></a>s! 🖖🤪<br><a href="https://www.youtube.com/watch?v=_8Ed8ET9gBU" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">youtube.com/watch?v=_8Ed8ET9gB</span><span class="invisible">U</span></a></p><p>The ultimate SLI abomination setup:<br>- 1x Nvidia A100 40GB<br>- 1x Nvidia Tesla P100 16GB<br>- 2x Nvidia A2 15GB<br>- 3x AMD Instinct MI50<br>- 1x Intel Arc A770 16GB</p><p>I split the 2.5B cells into 9 domains of 15GB each - the A100 takes 2 domains, the other GPUs 1 domain each. The GPUs communicate over PCIe via <a href="https://mast.hpc.social/tags/OpenCL" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenCL</span></a>.</p><p>Huge thanks to Tobias Ribizel from TUM for the hardware!</p>
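For scale, here is a back-of-the-envelope sketch of that decomposition. The 55 bytes/cell figure is FluidX3D's documented footprint with FP16 memory compression and is an assumption here (the post doesn't state the memory mode), as is the 16 GB MI50 VRAM size, which the post leaves unlisted:

```python
# Sketch: splitting 2.5 billion LBM cells into 9 equal domains across
# a mixed AMD/Nvidia/Intel GPU pool, as described in the post.
BYTES_PER_CELL = 55            # FluidX3D with FP16 storage (assumption)
total_cells = 2_500_000_000
domains = 9

domain_gb = total_cells * BYTES_PER_CELL / domains / 1e9  # ~15.3 GB/domain

# VRAM (GB) and domain count per GPU, per the post: the A100 hosts
# 2 domains, every other GPU hosts 1.
gpus = {
    "Nvidia A100":       (40, 2),
    "Nvidia Tesla P100": (16, 1),
    "Nvidia A2 #1":      (15, 1),
    "Nvidia A2 #2":      (15, 1),
    "AMD MI50 #1":       (16, 1),  # 16 GB variant assumed
    "AMD MI50 #2":       (16, 1),
    "AMD MI50 #3":       (16, 1),
    "Intel Arc A770":    (16, 1),
}

for name, (vram_gb, n) in gpus.items():
    print(f"{name}: ~{n * domain_gb:.1f} GB of {vram_gb} GB VRAM")
```

Equal-sized domains keep the per-step work balanced; the faster A100 simply absorbs two of them, and neighbouring domains exchange halo layers over PCIe each time step.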