st1nger: Proof-of-concept project showing that it's possible to run an entire Large Language Model in nothing but a #PDF file.

It uses #Emscripten to compile #llama.cpp into asm.js, which is then executed inside the PDF via an old PDF JS injection technique. Combined with embedding the entire #LLM weights file in the PDF as base64, this makes it possible to run LLM inference in nothing but a PDF.

https://github.com/EvanZhouDev/llm.pdf
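For illustration, a minimal sketch of what the PDF-side glue could look like. This assumes the asm.js build exports Emscripten's `FS` and `callMain` runtime helpers and that the viewer's JS engine provides `atob`; the names and arguments here are assumptions, not the repo's actual code:

```javascript
// Sketch only: hypothetical PDF-side glue, not taken from llm.pdf itself.
// The Emscripten asm.js build of llama.cpp exposes a global `Module` object.

// 1. The model weights are inlined into the PDF as one big base64 string
//    (placeholder below; the real generator would splice it in).
var MODEL_B64 = "...";

// 2. Decode base64 to raw bytes. Assumes `atob` is available in the
//    viewer's JS engine; otherwise a hand-rolled decoder is needed.
function b64ToBytes(b64) {
  var bin = atob(b64);
  var bytes = new Uint8Array(bin.length);
  for (var i = 0; i < bin.length; i++) bytes[i] = bin.charCodeAt(i);
  return bytes;
}

// 3. Once the asm.js runtime is ready, write the weights into Emscripten's
//    in-memory filesystem and run inference by invoking llama.cpp's main()
//    with CLI-style arguments.
Module.onRuntimeInitialized = function () {
  FS.writeFile("/model.gguf", b64ToBytes(MODEL_B64));
  Module.callMain(["-m", "/model.gguf", "-p", "Hello", "-n", "32"]);
};
```

Targeting asm.js rather than WebAssembly is what makes this feasible: asm.js is plain JavaScript, so it can run in the restricted, older JS engines embedded in PDF viewers, which generally cannot instantiate Wasm modules.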