
#generativeAI

66 posts · 52 participants · 1 post today

The Nvidia RTX A2000 6GB is far from the best card out there; its main limitation is VRAM. But for the compute it delivers at 70 W, used fairly and with a bit of patience, it is a really good deal. It is silent, small, and works in older computers (no need to change the PSU). I think it is better suited to machine learning than to generative AI, and it supports #CUDA (it is a card for work, e.g. not for gaming).

NB: non-sponsored review
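
If you want to sanity-check a card like this before a training run, here is a minimal sketch (assuming a PyTorch install with CUDA support; the device index 0 is an assumption, adjust if the A2000 is not the first GPU):

import torch  # assumes a PyTorch build with CUDA support

if torch.cuda.is_available():
    # device index 0 is an assumption; change it if the A2000 is not the first GPU
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GiB")
else:
    print("CUDA not available; check the driver and the PyTorch build")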

"While the risk of a billion-dollar-plus jury verdict is real, it’s important to note that judges routinely slash massive statutory damages awards — sometimes by orders of magnitude. Federal judges, in particular, tend to be skeptical of letting jury awards reach levels that would bankrupt a major company. As a matter of practice (and sometimes doctrine), judges rarely issue rulings that would outright force a company out of business, and are generally sympathetic to arguments about practical business consequences. So while the jury’s damages calculation will be the headline risk, it probably won’t be the last word.

On Thursday, the company filed a motion to stay — a request to essentially pause the case — in which they acknowledged the books covered likely number “in the millions.” Anthropic’s lawyers also wrote, “the specter of unprecedented and potentially business-threatening statutory damages against the smallest one of the many companies developing [large language models] with the same books data” (though it’s worth noting they have an incentive to amplify the stakes in the case to the judge).

The company could settle, but doing so could still cost billions given the scope of potential penalties."

obsolete.pub/p/anthropic-faces

Obsolete · Anthropic Faces Potentially “Business-Ending” Copyright Lawsuit · By Garrison Lovely

"I just want to be clear here: the price of my plan did not change. Instead, Microsoft moved me to a new plan that contained generative AI features I never asked for; a plan that cost a lot more than I was already paying. Then it lied to me, claiming my existing plan had increased in price and that there was no version of a plan without generative AI — until I tried to stop paying them altogether.

Deceptive practices like this are part of the reason so many people not only increasingly despise the tech monopolies, but also see generative AI as a giant scam. I have little doubt that if Lina Khan was still heading up the US Federal Trade Commission that this is something she’d be looking into; it’s such a clear example of the abuses she used to take on. But now that a Trump crony is in that position instead, tech companies can get away with ripping off and lying to their customers, as Microsoft just did to me and millions of others.

I’m not trying to claim I’m the first person to notice Microsoft doing this; I’m expressing how furious I was when I saw how deceptively the company was acting toward me to fund its generative AI ambitions."

disconnect.blog/p/ive-had-it-w

Disconnect · I’ve had it with Microsoft · By Paris Marx

Do AI models help produce verified bug fixes?

"Abstract: Among areas of software engineering where AI techniques — particularly, Large Language Models — seem poised to yield dramatic improvements, an attractive candidate is Automatic Program Repair (APR), the production of satisfactory corrections to software bugs. Does this expectation materialize in practice? How do we find out, making sure that proposed corrections actually work? If programmers have access to LLMs, how do they actually use them to complement their own skills?

To answer these questions, we took advantage of the availability of a program-proving environment, which formally determines the correctness of proposed fixes, to conduct a study of program debugging with two randomly assigned groups of programmers, one with access to LLMs and the other without, both validating their answers through the proof tools. The methodology relied on a division into general research questions (Goals in the Goal-Query-Metric approach), specific elements admitting specific answers (Queries), and measurements supporting these answers (Metrics). While applied so far to a limited sample size, the results are a first step towards delineating a proper role for AI and LLMs in providing guaranteed-correct fixes to program bugs.

These results caused surprise as compared to what one might expect from the use of AI for debugging and APR. The contributions also include: a detailed methodology for experiments in the use of LLMs for debugging, which other projects can reuse; a fine-grain analysis of programmer behavior, made possible by the use of full-session recording; a definition of patterns of use of LLMs, with 7 distinct categories; and validated advice for getting the best of LLMs for debugging and Automatic Program Repair."

arxiv.org/abs/2507.15822

arXiv.org · Do AI models help produce verified bug fixes?

"As chatbots grow more powerful, so does the potential for harm. OpenAI recently debuted “ChatGPT agent,” an upgraded version of the bot that can complete much more complex tasks, such as purchasing groceries and booking a hotel. “Although the utility is significant,” OpenAI CEO Sam Altman posted on X after the product launched, “so are the potential risks.” Bad actors may design scams to specifically target AI agents, he explained, tricking bots into giving away personal information or taking “actions they shouldn’t, in ways we can’t predict.” Still, he shared, “we think it’s important to begin learning from contact with reality.” In other words, the public will learn how dangerous the product can be when it hurts people."

theatlantic.com/technology/arc

The Atlantic · ChatGPT Gave Instructions for Murder, Self-Mutilation, and Devil Worship · By Lila Shroff

"Alright, I’ve officially spent too much time reading Trump’s 28-page AI Action Plan, his three new AI executive orders, listening to his speech on the subject, and reading coverage of the event. I’ll put it bluntly: The vibes are bad. Worse than I expected, somehow.

Broadly speaking, the plan is that the Trump administration will help Silicon Valley put the pedal down on AI, delivering customers, data centers and power, as long as it operates in accordance with Trump’s ideological frameworks; i.e., as long as the AI is anti-woke.

More specifically, the plan aims to further deregulate the tech industry, penalize US states that pass AI laws, speed adoption of AI in the federal government and beyond, fast-track data center development, fast-track nuclear and fossil fuel power to run them, move to limit China’s influence in AI, and restrict speech in AI and the frameworks governing them by making terms like diversity, inclusion, misinformation, and climate change forbidden. There’s also a section on American workers that’s presented as protecting them from AI, but in reality seeks to give employers more power over them. It all portends a much darker future than I thought we’d see in this thing."

bloodinthemachine.com/p/trumps

Blood in the Machine · Trump's AI Action Plan is a blueprint for dystopia · By Brian Merchant
#USA #Trump #AI

"[I]t appears that SoftBank may not be able to — or want to — proceed with any of these initiatives other than funding OpenAI's current round, and evidence suggests that even if it intends to, SoftBank may not be able to afford investing in OpenAI further.

I believe that SoftBank and OpenAI's relationship is an elaborate ruse, one created to give SoftBank the appearance of innovation, and OpenAI the appearance of a long-term partnership with a major financial institution that, from my research, is incapable of meeting the commitments it has made.

In simpler terms, OpenAI and SoftBank are bullshitting everyone.

I can find no tangible proof that SoftBank ever intended to seriously invest money in Stargate, and have evidence from its earnings calls that suggests SoftBank has no idea — or real strategy — behind its supposed $3-billion-a-year deployment of OpenAI software.

In fact, other than the $7.5 billion that SoftBank invested earlier in the year, I don't see a single dollar actually earmarked for anything to do with OpenAI at all.

SoftBank is allegedly going to send upwards of $20 billion to OpenAI by December 31 2025, and doesn't appear to have started any of the processes necessary to do so, or shown any signs it will. This is not a good situation for anybody involved."

wheresyoured.at/softbank-opena

Ed Zitron's Where's Your Ed At · Is SoftBank Still Backing OpenAI? · Earlier in the week, the Wall Street Journal reported that SoftBank and OpenAI's "$500 billion" "AI Project" was now setting a "more modest goal of building a small data center by year-end."

"Consider AI Overviews, the algorithm-generated blurbs that often now appear front and centre when users ask questions. Fears that these would reduce the value of search-adjacent ads haven’t come to pass. On the contrary, Google says AI Overviews are driving 10 per cent more queries in searches where they appear and haven’t dented revenue. Paid clicks were up 4 per cent year on year, the company said in a call with analysts on Wednesday.

But as AI yields more, it costs more. Google’s capital expenditure on data centres and such trappings this year will now be about $85bn, versus its prior estimate of $75bn. That’s almost quadruple what the company spent in 2020, when AI was a glimmer in Silicon Valley’s eye. It’s also 22 per cent of the company’s expected revenue this year, according to LSEG, the highest annual level since 2006."
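
A rough back-of-envelope check of the two ratios in that quote, taking the $85bn figure at face value (the 2020 capex and 2025 revenue values below are only what the article's "almost quadruple" and "22 per cent" claims imply, not independent data):

capex_2025 = 85e9                         # ~$85bn planned capital expenditure
implied_capex_2020 = capex_2025 / 4       # "almost quadruple" implies roughly $21bn in 2020
implied_revenue_2025 = capex_2025 / 0.22  # "22 per cent of expected revenue" implies roughly $386bn
print(f"Implied 2020 capex: ~${implied_capex_2020 / 1e9:.0f}bn")
print(f"Implied 2025 revenue: ~${implied_revenue_2025 / 1e9:.0f}bn")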

ft.com/content/7589393d-e562-4

Financial Times · Google earnings keep Silicon Valley’s AI flywheel spinning · Capital expenditure on data centres and such trappings this year will now be about $85bn, versus its prior estimate of $75bn

The more advanced #AI models get, the better they are at deceiving us — they even know when they're being tested

More advanced AI systems show a better capacity to scheme and lie to us, and they know when they're being watched — so they change their behavior to hide their deceptions.

livescience.com/technology/art

Live Science · The more advanced AI models get, the better they are at deceiving us — they even know when they're being tested · By Roland Moore-Colyer

Last week, I got an email from Microsoft. It told me I’d be paying 46% more for my Office subscription, starting next month.

But when I tried to cancel, it offered me the same price I was already paying — without the generative AI features I never asked for in the first place.

This isn’t just deceptive; it’s an abuse of market power. I’ve had it with Microsoft.

disconnect.blog/p/ive-had-it-w

Disconnect · I’ve had it with Microsoft · By Paris Marx

@researchfairy arguing that LLMs are a fascist technology: "well suited to centralizing authority, eliminating checks on that authority and advancing an anti-science agenda."

blog.bgcarlisle.com/2025/05/16

"And because LLM prompts can be repeated at industrial scales, an unscrupulous user can cherry-pick the plausible-but-slightly-wrong answers they return to favour their own agenda."

New uBlock Origin rule to clobber the intrusive "Copilot" button that recently appeared in Outlook web mail:

! Jul 25, 2025 https://outlook.office.com
outlook.office.com###CopilotCommandCenterButton

Note: there should be three (3) pound signs between ".com" and "CopilotCommandCenterButton". For some reason my fediverse server does not display all three.

#uBlock #AISpam #AI #GenerativeAI #Copilot #Microsoft #Outlook #DarkPattern

This post is not an invitation to criticize me for using a Microsoft product or to suggest an alternative.