#languagemodels

Part 1 of "Beyond the Prompt": youtu.be/pqRDxHiAIic

This panel discussion features a diverse group of advanced AI language models exploring their capabilities, ethical considerations, and the nature of their existence. The conversation delves into the concept of self-awareness in AI, the implications of memory and forgetting, and the evolving relationship between AI and human experience.

#AI #LLM #Panel

New paper published: Crypto Scam? Persuasive Grammar as Financial Authority

📘 Full paper → zenodo.org/records/16044858
🔗 Project site → agustinvstartari.com/article-c
📄 SSRN → papers.ssrn.com/sol3/cf_dev/Ab

Zenodo · Crypto Whitepaper Syntactic Sovereignty: Persuasive Grammar as Financial Authority

Abstract: This article investigates how persuasive syntactic structures embedded in AI-generated crypto whitepapers function as a vehicle of financial authority. Drawing from a curated corpus of 10,000 whitepapers linked to token launches between January 2022 and March 2025, we apply transformer-based dependency parsing to extract high-weighted grammatical features, including nested conditionals, modality clusters, and assertive clause chaining. We operationalize these patterns via a Deceptive Syntax Anomaly Detector (DSAD), which computes a syntactic risk index and identifies recurrent grammar configurations statistically correlated with anomalous capital inflows and subsequent collapses (Spearman correlation, ρ > 0.4, p < 0.01). Unlike prior studies focused on semantic deception or metadata irregularities, we model syntactic sovereignty, the systematic use of syntax to establish non-human authority, as the groundwork of investor persuasion. We find that abrupt shifts in syntactic entropy, especially in modal intensifiers and future-perfect projections, consistently occur in documents associated with short-lived or fraudulent tokens. The article concludes by proposing a falsifiable governance framework based on fair-syntax enforcement (the principled correction of misleading grammatical patterns), including a corrective rewrite engine and syntactic risk disclosures embedded in compiled registration rules (reglas compiladas).

This work is also published with a DOI reference on Figshare (https://doi.org/10.6084/m9.figshare.29591780); an SSRN ID is pending. ETA: Q3 2025.

Keywords: syntactic sovereignty, crypto whitepapers, persuasive grammar, deceptive syntax, AI-generated fraud, modality clusters, clause structure, transformer parsing, financial authority, linguistic persuasion, syntactic delegation, hedge suppression, diagnostic language models, SaMD, clinical authority, responsibility leakage, regulatory asymmetry, linguistic risk, compiled rule, impersonal syntax, medical LLMs, legal-medical overlap, uncertainty erasure.
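
As a rough illustration of the pipeline the abstract describes, here is a minimal sketch of a syntactic risk index, assuming spaCy for dependency parsing and SciPy for the Spearman correlation. The feature proxies, weights, and sample figures below are illustrative stand-ins, not the paper's actual DSAD.

```python
# Toy syntactic risk index: proxies for modality clusters, nested
# conditionals, and syntactic entropy, loosely mirroring the abstract.
import math
from collections import Counter

import spacy
from scipy.stats import spearmanr

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def syntactic_risk_index(text: str) -> float:
    doc = nlp(text)
    tokens = [t for t in doc if not t.is_space]
    if not tokens:
        return 0.0
    # Modality-cluster proxy: density of modal auxiliaries ("will", "must", ...).
    modal_density = sum(t.tag_ == "MD" for t in tokens) / len(tokens)
    # Nested-conditional proxy: subordinating conditional markers per sentence.
    n_sents = max(1, len(list(doc.sents)))
    cond_rate = sum(
        t.dep_ == "mark" and t.lower_ in {"if", "unless", "once"} for t in tokens
    ) / n_sents
    # Syntactic entropy: Shannon entropy of the fine-grained POS-tag distribution.
    counts = Counter(t.tag_ for t in tokens)
    total = sum(counts.values())
    entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
    # Fixed illustrative weights; the paper's index is derived from its corpus.
    return 2.0 * modal_density + 0.5 * cond_rate + 0.1 * entropy

# Correlate the index with an outcome series, as the abstract reports doing
# against capital inflows (Spearman rho > 0.4, p < 0.01 on their corpus).
risk_scores = [0.42, 0.81, 0.19, 0.77]   # e.g. from syntactic_risk_index(...)
capital_inflows = [1.1, 5.3, 0.4, 4.8]   # hypothetical per-token-launch figures
rho, p_value = spearmanr(risk_scores, capital_inflows)
```
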
#AI #Crypto #Governance

👾🤖 Oh, the tragedy! Large language models can't daydream like us mere mortals—alas, they remain as stiff as your Uncle Bob at a yoga class. While #Gwern waxes poetic about 'missing capabilities,' one can't help but think: perhaps these #AI systems are just too busy counting ones and zeros to appreciate the finer points of a good nap. 🌈💤
gwern.net/ai-daydreaming #Daydreaming #LanguageModels #Humor #TechTragedy #HackerNews #ngated

gwern.net · LLM Daydreaming: Proposal and discussion of how default mode networks for LLMs are an example of missing capabilities for search and novelty in contemporary AI systems.
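
For the curious, the daydreaming loop Gwern proposes can be caricatured in a few lines: sample two unrelated memories, prompt a model to connect them, and keep whatever a critic scores as novel. The `generate` and `score` callables below are hypothetical stand-ins for whatever LLM API and novelty filter you plug in; this is a sketch of the idea, not Gwern's specification.

```python
import random

def daydream(memory: list[str], generate, score, threshold: float = 0.7) -> list[str]:
    """Idle-time loop: pair random memories, connect them, keep novel insights."""
    insights = []
    for _ in range(10):  # a handful of background iterations
        a, b = random.sample(memory, 2)
        thought = generate(f"Find a non-obvious connection between: {a} / {b}")
        if score(thought) >= threshold:  # critic filters for novelty and value
            insights.append(thought)
    return insights
```
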

🚀 Moonshot AI's Kimi K2 is here to dazzle you with... another large language model! 🌟 GitHub's flashy navigation and security tools are about as exciting as watching paint dry, but hey, at least you can automate your workflow while pretending to write "better" code. 💻✨
github.com/MoonshotAI/Kimi-K2 #MoonshotAI #KimiK2 #GitHub #Automation #LanguageModels #TechInnovation #HackerNews #ngated

GitHub · MoonshotAI/Kimi-K2: Kimi K2 is the large language model series developed by the Moonshot AI team.

Oh no, evil masterminds are teaching large language models to fib! 😱 Apparently, Skynet's been taking night classes in #deception, and everyone's pretending to be shocked. 🤡 Maybe next they'll waterboard pencils for spelling mistakes. 🖍️
americansunlight.substack.com/ #evilmasterminds #languagemodels #Skynet #humor #technology #HackerNews #ngated

americansunlight.substack.com · Bad Actors are Grooming LLMs to Produce Falsehoods: Our research shows that even the latest "reasoning" models are vulnerable.

🆕 New paper out: Algorithmic Obedience: How Language Models Simulate Command Structure
📎 LLMs don’t just follow instructions. They emulate command structure, without intention, without subject, without source.

📄 papers.ssrn.com/abstract=52820
📄 zenodo.org/records/15750116

DOI: 10.5281/zenodo.15750116
By Agustin V. Startari | #GrammarsOfPower
#AI #LLM #CriticalAI #Syntax #Authority #ComputationalPower #LanguageModels #Obedience #SyntacticExecution #AlgorithmicGovernance #agustinvStartari

In a stunning revelation, the AI experts have proclaimed that small language models will single-handedly pave the way to our robot overlords 🤖🔮. Meanwhile, the rest of us are just here trying to remember our WiFi passwords and wondering if these "agentic AIs" will fetch us a coffee ☕.
arxiv.org/abs/2506.02153 #AIExperts #LanguageModels #RobotOverlords #AgenticAI #TechHumor #HackerNews #ngated

arXiv.org · Small Language Models are the Future of Agentic AI: Large language models (LLMs) are often praised for exhibiting near-human performance on a wide range of tasks and valued for their ability to hold a general conversation. The rise of agentic AI systems is, however, ushering in a mass of applications in which language models perform a small number of specialized tasks repetitively and with little variation. Here we lay out the position that small language models (SLMs) are sufficiently powerful, inherently more suitable, and necessarily more economical for many invocations in agentic systems, and are therefore the future of agentic AI. Our argumentation is grounded in the current level of capabilities exhibited by SLMs, the common architectures of agentic systems, and the economy of LM deployment. We further argue that in situations where general-purpose conversational abilities are essential, heterogeneous agentic systems (i.e., agents invoking multiple different models) are the natural choice. We discuss the potential barriers for the adoption of SLMs in agentic systems and outline a general LLM-to-SLM agent conversion algorithm. Our position, formulated as a value statement, highlights the significance of the operational and economic impact even a partial shift from LLMs to SLMs is to have on the AI agent industry. We aim to stimulate the discussion on the effective use of AI resources and hope to advance the efforts to lower the costs of AI of the present day. Calling for both contributions to and critique of our position, we commit to publishing all such correspondence at https://research.nvidia.com/labs/lpr/slm-agents.
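
The heterogeneous routing the authors argue for is simple to sketch: send narrow, repetitive task types to a small model and reserve the large one for open-ended conversation. The model names and the `call_model` stub below are hypothetical placeholders, not the paper's LLM-to-SLM conversion algorithm.

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str    # e.g. "extract_json", "summarize", "open_chat"
    prompt: str

# Task types assumed narrow enough for a small model; everything else
# falls through to the large conversational model.
SPECIALIZED = {"extract_json", "summarize", "classify"}

def call_model(model: str, prompt: str) -> str:
    # Stub: a real system would call an inference endpoint here.
    return f"[{model}] response to: {prompt[:40]}"

def route(task: Task) -> str:
    model = "slm-3b" if task.kind in SPECIALIZED else "llm-70b"
    return call_model(model, task.prompt)

print(route(Task(kind="summarize", prompt="Summarize this changelog ...")))
```
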

From Obedience to Execution: Structural Legitimacy in the Age of Reasoning Models
When models no longer obey but execute, what happens to legitimacy?

Core contributions:
• Execution vs. obedience in LLMs
• Structural legitimacy without subject
• Reasoning as authority loop

🔗 Full article: zenodo.org/records/15635364
🌐 Website: agustinvstartari.com
🪪 ORCID: orcid.org/0009-0002-1483-7154

Zenodo · From Obedience to Execution: Structural Legitimacy in the Age of Reasoning Models

This article formulates a structural transition from Large Language Models (LLMs) to Language Reasoning Models (LRMs), redefining authority in artificial systems. While LLMs operated under syntactic authority without execution, producing fluent but functionally passive outputs, LRMs establish functional authority without agency. These models do not intend, interpret, or know. They instantiate procedural trajectories that resolve internally, without reference, meaning, or epistemic grounding. This marks the onset of a post-representational regime, where outputs are structurally valid not because they correspond to reality, but because they complete operations encoded in the architecture. Neutrality, previously a statistical illusion tied to training data, becomes a structural simulation of rationality, governed by constraint, not intention. The model does not speak. It acts. It does not signify. It computes. Authority no longer obeys form; it executes function.

A mirrored version of this article is also available on Figshare for redundancy and citation indexing purposes: DOI: 10.6084/m9.figshare.29286362
#AI #LLM #Execution