#explainableai


ML query language, by ML, caveat emptor (it's really LM)


Introducing MLQL – a query language for ML models. Think SQL, but for predictions, explanations, and comparisons. Example:
PREDICT churn USING model churn_model WHERE region="EU" EXPLAIN top_features.
Supports raw input, filters, explanations like SHAP. Stop treating ML like a black box.
#ML #AI #DevTools #ExplainableAI #MastodonDelendaEst
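A query like the MLQL example above could be sketched in plain Python. Everything here is an illustrative assumption — the toy model, its weights, and the feature names are not part of MLQL, and the |weight × value| ranking is only a crude stand-in for real SHAP values:

```python
import math

# Hypothetical sketch of what a query like
#   PREDICT churn USING model churn_model WHERE region="EU" EXPLAIN top_features
# could do under the hood. Model, weights, and features are made up for the demo.

WEIGHTS = {"tenure": -0.8, "support_calls": 0.6, "monthly_fee": 0.3}

def predict(row):
    """Toy linear model squashed through a sigmoid: P(churn)."""
    z = sum(w * row[f] for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def top_features(row, k=2):
    """Rank features by |weight * value| -- a crude stand-in for SHAP."""
    contrib = {f: w * row[f] for f, w in WEIGHTS.items()}
    return sorted(contrib, key=lambda f: abs(contrib[f]), reverse=True)[:k]

rows = [
    {"region": "EU", "tenure": 1.0, "support_calls": 3.0, "monthly_fee": 2.0},
    {"region": "US", "tenure": 5.0, "support_calls": 0.0, "monthly_fee": 1.0},
]

# WHERE region="EU": filter, then predict and explain each matching row
for row in (r for r in rows if r["region"] == "EU"):
    print(round(predict(row), 3), top_features(row))
```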


"Finally, AI can fact-check itself. One large language model-based chatbot can now trace its outputs to the exact original data sources that informed them.

Developed by the Allen Institute for Artificial Intelligence (Ai2), OLMoTrace, a new feature in the Ai2 Playground, pinpoints data sources behind text responses from any model in the OLMo (Open Language Model) project.

OLMoTrace identifies the exact pre-training document behind a response — including full, direct quote matches. It also provides source links. To do so, the underlying technology uses a process called “exact-match search” or “string matching.”

“We introduced OLMoTrace to help people understand why LLMs say the things they do from the lens of their training data,” Jiacheng Liu, a University of Washington Ph.D. candidate and Ai2 researcher, told The New Stack.

“By showing that a lot of things generated by LLMs are traceable back to their training data, we are opening up the black boxes of how LLMs work, increasing transparency and our trust in them,” he added.

To date, no other chatbot on the market provides the ability to trace a model’s response back to specific sources used within its training data. This makes the news a big stride for AI visibility and transparency."

thenewstack.io/llms-can-now-tr

The New Stack · Breakthrough: LLM Traces Outputs to Specific Training Data
Ai2’s OLMoTrace uses string matching to reveal the exact sources behind chatbot responses
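The "exact-match search" idea the article describes can be illustrated with a toy version. The real OLMoTrace matches output spans against trillions of pre-training tokens with specialized indexes; this sketch just scans a tiny in-memory corpus for long verbatim word spans, and the corpus text and `min_words` threshold are invented for the demo:

```python
# Toy illustration of exact-match search (string matching) over training data.
# Overlapping suffix spans are all reported; a real system would merge them
# and use an index rather than a linear scan.

def trace_spans(response, corpus, min_words=4):
    """Return (span, doc_id) pairs where a run of >= min_words consecutive
    words from the response appears verbatim in a corpus document."""
    words = response.split()
    hits = []
    for start in range(len(words)):
        end = start + min_words
        best = None
        # Greedily grow the span while it still matches some document.
        while end <= len(words):
            span = " ".join(words[start:end])
            docs = [i for i, d in enumerate(corpus) if span in d]
            if not docs:
                break
            best = (span, docs[0])
            end += 1
        if best:
            hits.append(best)
    return hits

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "exact match search finds verbatim quotes in training data",
]
print(trace_spans("it finds verbatim quotes in training data", corpus))
```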

🧬Can we trust AI in bioinformatics if we don’t understand how it makes decisions?

As AI becomes central to bioinformatics, the opacity of its decision-making remains a major concern.

🔗 Demystifying the Black Box: A Survey on Explainable Artificial Intelligence (XAI) in Bioinformatics. Computational and Structural Biotechnology Journal, DOI: doi.org/10.1016/j.csbj.2024.12

📚 CSBJ: csbj.org/

🎉 That’s a wrap! The SAIL Spring School 2025 at Bielefeld University was an inspiring event, bringing together young researchers to explore AI evaluation beyond accuracy & precision.

🍕 A highlight: our poster & pizza session – Congrats to Kathrin Lammers & Thorben Markmann for winning Best Poster Awards! 👏

A big thank you to all speakers, participants & organizers! 🤝 See you at the next SAIL Spring School 2026 in Paderborn! 🚀

Applications are now open for the 2025 International Semantic Web Research Summer School (#ISWS2025) in Bertinoro, Italy, June 8-14, 2025.
Topic: Knowledge Graphs for Reliable AI
Application Deadline: March 25, 2025
Webpage: 2025.semanticwebschool.org/

Great keynote speakers: Frank van Harmelen (VU), Natasha Noy (Google), Enrico Motta (KMI)

#semanticweb #knowledgegraphs #AI #generativeAI #responsibleAI #explainableAI #reliableAI @albertmeronyo @AxelPolleres @lysander07

[New publication] We investigated whether AI can detect subtle morphological differences between different populations from simple smartphone photos. The results show: yes, even against complex image backgrounds, the AI can reliably identify characteristic features of the leaves. #explainableAI #XAI #FloraIncognita

More details: floraincognita.de/neue-publika

@tu_ilmenau
@MPI_BGC

This Friday at 12.15 CEST I'll be hosting a talk by computer scientist Kary Främling, in which he will present his work on Explainable AI techniques for producing explanations useful to a variety of stakeholders, rather than only to AI experts.

The talk is hybrid, so even if you are not currently enjoying the very colourful (and very wet) autumn in Umeå, you can join nonetheless!

More information here: umu.se/en/events/fraiday-socia

All welcome!

www.umu.se · frAIday: Social Explainable AI - What is it and how can we make it happen?