
#naturallanguageprocessing

Replied in thread

After the conference & day trip, it was time for meetings. We did plenty! I took part in delegation meetings with #AcademiaSinica and #NTU (National Taiwan Univ.), but there were many more. We learned about exchange opportunities, memoranda were signed, and we presented each other's research priorities and focus areas, and found lots of overlap! #Biodiversity #WorldLiterature #Sinology #DigitalHumanities #ArtificialIntelligence #NaturalLanguageProcessing, to name just a few. #tcdh #UTrier

Everyone should have a pet-OCD, and my obsession is esoteric encoding schemes. That kept me awake last night, and now I'm giving in.

Part 1 of a handful; from idea to product, I present to you "How to convert arbitrary binary data into English sentences and back again."

Part 2 is in the making.

blog.ynfonatic.de/software/202

Alexander W. Janssen’s blog · A PGP Words variant creating natural sentences - part 1: This is part 1 of a series of articles I am planning to write. In this part I will outline the problem, propose a solution, and define the requirements.
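To make the idea concrete, here is a minimal Python sketch of a byte-to-word codec in the spirit of the PGP word list. The vocabulary and function names are placeholders of mine, not the scheme from the blog post, which goes further and builds grammatical English sentences:

# Minimal byte<->word codec sketch. WORDS is a placeholder vocabulary;
# a real scheme (like PGP words) uses a curated list of 256 distinct,
# phonetically distant words.
WORDS = [f"word{i:03d}" for i in range(256)]
INDEX = {w: i for i, w in enumerate(WORDS)}

def encode(data: bytes) -> str:
    # Map each byte to its word and join them into a "sentence".
    return " ".join(WORDS[b] for b in data)

def decode(text: str) -> bytes:
    # Reverse lookup: each word maps back to exactly one byte value.
    return bytes(INDEX[w] for w in text.split())

payload = b"\x00\xffhi"
assert decode(encode(payload)) == payload

Round-tripping works because the mapping between byte values and words is a bijection; the hard part the blog series tackles is making the output read as natural English rather than a flat word list.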

#Introduction

Hi everyone! It's ALTA here - the Australasian Language Technology Association.

We run an @aclmeeting-aligned workshop in Australia or Aotearoa New Zealand every year, focused on #nlp #NaturalLanguageProcessing.

This year's workshop will be held at @ANUResearch in the beautiful city of #Canberra #CBR on #Ngunnawal land, in early December 2024.

We're a pretty friendly and easygoing bunch of folks, and we'd love to connect!

Continued thread

... cont'd 5/5

Schaeffer et al. argue that these emergent abilities are an artifact of metric choice: under linear or continuous metrics, model performance improves smoothly and predictably with scale.

Are Emergent Abilities of Large Language Models a Mirage?
arxiv.org/abs/2304.15004

Prompt engineering and chain-of-thought reasoning:
en.wikipedia.org/wiki/Prompt_e
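As a quick illustration of chain-of-thought prompting (a hypothetical prompt of mine, not taken from the linked article): the prompt embeds a worked example whose answer spells out intermediate steps, nudging the model to continue in the same style.

# Hypothetical chain-of-thought prompt: the in-context example shows
# step-by-step reasoning before its final answer, so the model tends
# to produce reasoning steps for the new question too.
prompt = (
    "Q: A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
    "A: 12 pens is 4 groups of 3 pens. Each group costs $2, "
    "so 4 * 2 = $8. The answer is $8.\n\n"
    "Q: A train travels 60 km in 45 minutes. What is its speed in km/h?\n"
    "A:"  # the model continues with its own step-by-step reasoning
)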

arXiv.org · Are Emergent Abilities of Large Language Models a Mirage?
Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher's choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance. We present our alternative explanation in a simple mathematical model, then test it in three complementary ways: we (1) make, test and confirm three predictions on the effect of metric choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abilities; (2) make, test and confirm two predictions about metric choices in a meta-analysis of emergent abilities on BIG-Bench; and (3) show how to choose metrics to produce never-before-seen, seemingly emergent abilities in multiple vision tasks across diverse deep networks. Via all three analyses, we provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models.
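The paper's core argument lends itself to a toy numerical illustration (my own sketch, assuming per-token accuracy grows smoothly with scale): a nonlinear metric such as exact match over an L-token answer turns that smooth curve into an apparently sharp "emergent" jump.

# Toy illustration of metric-induced "emergence" (assumption: per-token
# accuracy p rises smoothly and linearly as the model scales up).
# Exact match on an L-token answer requires all L tokens to be correct,
# so the metric is p**L: nearly zero for most scales, then a sharp jump
# as p approaches 1, even though p itself changes smoothly.
L = 10  # hypothetical answer length in tokens
for step in range(1, 11):
    p = step / 10                 # smooth per-token accuracy: 0.1 .. 1.0
    exact_match = p ** L          # nonlinear, all-or-nothing metric
    print(f"per-token accuracy {p:.1f} -> exact match {exact_match:.4f}")

Running this shows exact match sitting near 0.0000 until per-token accuracy passes roughly 0.7, then shooting upward, which is the kind of sharp transition the paper argues gets misread as an emergent ability.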