#assistivetechnology

Anyone tried an HDMI industrial camera on a lever-arm stand with a suitable lens as an #ElectronicMagnifier, with the image displayed on a suitable TV or monitor?

I can see everything I want on AliExpress - except for the lens.

Which lens is going to let me view around an A4 / foolscap area?

Most of these setups are being used as microscopes, with lenses giving around 150X magnification - far too much for reading a book.

aliexpress.com/item/1005005217
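
A rough thin-lens estimate can answer the lens question. The numbers below are my assumptions, not specs from the listing: a typical 1/2.8-inch sensor about 5.2 mm wide, and a working distance of about 400 mm from lens to page on the lever arm.

```latex
% Thin-lens field-of-view estimate (valid for d >> f):
%   W = w d / f,  where w = sensor width, d = working distance, W = field width
% Assumed: w = 5.2 mm (typical 1/2.8" sensor), d = 400 mm,
% W = 297 mm (long side of an A4 page). Solving for the focal length:
f = \frac{w\,d}{W} = \frac{5.2 \times 400}{297} \approx 7\ \text{mm}
```

So on those assumptions, a wide-angle C/CS-mount lens in the roughly 6-8 mm range should frame a full A4 page at lever-arm height, rather than the high-magnification zooms these cameras are usually bundled with; a longer working distance needs a proportionally longer focal length.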

Jenine Stanley, Aira’s Director of Customer Communications, and Everette Bacon, Chief of Blindness Initiatives, were featured on the latest episode of @hadley Presents: A Conversation with the Experts!

🌟 Episode 129: The Aira App – On-Demand, Expert Assistance
In this episode, Jenine and Everette dive into how the Aira app empowers people with vision loss by providing real-time, professional assistance—anytime, anywhere. Using just your smartphone camera, Aira connects users to live agents ready to help navigate daily tasks with confidence.

🎙️ Hadley is an incredible resource offering accessible training for people who are blind or have low vision—especially those in rural areas—through virtual and mail-based programs.

🔗 Tune in now: hadleyhelps.org/podcasts/hadle

Sam from The Blind Life made the best out of CSUN planning by using ally, the most accessible AI assistant. From conference schedules to picking out a coffee, navigating the hustle and bustle was a breeze. Lead your day with more confidence. Customize & meet your ally today: download the ally app or go to ally.me?utm_source=social&utm_

Hello #Blind and #VisuallyImpaired community! 👋
I'm having trouble signing PDF documents with a digital certificate using my #screenreader (NVDA on Windows). I can do it in Adobe Reader but it's quite cumbersome and requires sighted assistance.
Does anyone have a more accessible workflow or software recommendation for signing PDFs with a digital certificate using the keyboard and a screen reader? Any tips or advice would be greatly appreciated!
Could you please #Boost this so it reaches more people? Thank you in advance! 🙏 #Accessibility #NVDA #PDF #DigitalSignature #AssistiveTechnology @NVAccess
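
One fully keyboard-driven route is to sign from the terminal with the open-source pyHanko Python library, which works the same under a screen reader since there is no visual signing dialog. A minimal sketch, assuming a PKCS#12 certificate file (file names and the passphrase below are placeholders):

```python
# Minimal sketch: sign a PDF with a digital certificate using pyHanko
# (pip install pyHanko). Runs entirely in the terminal, so it is
# screen-reader friendly. File names and passphrase are placeholders.
from pyhanko.pdf_utils.incremental_writer import IncrementalPdfFileWriter
from pyhanko.sign import signers

# Load the certificate and private key from a PKCS#12 (.p12/.pfx) bundle.
signer = signers.SimpleSigner.load_pkcs12(
    'my-certificate.p12', passphrase=b'my-passphrase'
)

with open('document.pdf', 'rb') as inf:
    writer = IncrementalPdfFileWriter(inf)
    with open('document-signed.pdf', 'wb') as outf:
        signers.sign_pdf(
            writer,
            signers.PdfSignatureMetadata(field_name='Signature1'),
            signer=signer,
            output=outf,
        )
```

pyHanko also ships a pyhanko command-line tool for the same job, which avoids writing any Python at all.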

Honored. Inspired. Driven.

Last week at #CSUN2025, we had the incredible opportunity to connect, learn, and grow alongside some of the most innovative minds in accessibility and assistive technology. As a fully remote team, gathering in person is always meaningful, but being part of conversations that shape the future of our technology makes it even more special.

We’re grateful for the chance to hear from industry leaders, learn from fellow innovators, and connect with Explorers (both new and those who have been here from the start) who shape the work we do every day. Having a seat at this table is an honor, and we’re more committed than ever to making Aira the best it can be.

Thank you, Center on Disabilities at CSUN, for another unforgettable year!

Balancing Privacy and Assistive Technology: The Case for Large Language Models

In today’s digital world, the tension between privacy and technology is more pronounced than ever. I’m deeply concerned about the implications of surveillance capitalism—especially the spyware embedded in our devices, cars, and even our bodies. This pervasive technology can lead to a loss of autonomy and a feeling of being constantly monitored. Yet, amidst these concerns, assistive technology plays a critical role, particularly for those of us with neurological impairments.

I recently read a thought-provoking post by @serge that highlighted the importance of sharing perspectives on this issue.

babka.social/@serge/1137542699

With the rise of large language models (LLMs) like ChatGPT, we’re seeing a shift toward more accessible and user-friendly technology. Local LLMs offer a viable alternative to big tech solutions, often running on ordinary laptops or even compact devices like a Raspberry Pi. For many, including myself, LLMs are invaluable tools that enhance communication, summarize information, transcribe voice, facilitate learning, and help manage tasks that might otherwise feel overwhelming. They can help strike the right emotional tone in our writing and assist in understanding complex data, capabilities that are especially crucial for those of us facing neurological challenges.
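
To make "local" concrete, here is a minimal sketch of the kind of tone help I mean, assuming the llama-cpp-python bindings and a chat-tuned GGUF model already downloaded to disk (the model file name is a placeholder). Nothing leaves the machine:

```python
# Minimal sketch: a fully local assistant call with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder;
# any chat-tuned GGUF model downloaded to disk will do.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-3.2-3b-instruct.Q4_K_M.gguf",
            n_ctx=4096, verbose=False)

# Ask the local model to soften the tone of a terse draft message.
resp = llm.create_chat_completion(messages=[
    {"role": "system",
     "content": "Rewrite the user's draft in a warm, professional tone."},
    {"role": "user",
     "content": "need the report by fri or we miss the deadline. fix it."},
])
print(resp["choices"][0]["message"]["content"])
```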

While the goal of eliminating surveillance capitalism is commendable, banning technology outright isn’t the answer. We must recognize the significance of LLMs for individuals with disabilities. Calls to remove these technologies can overlook their profound impact on our lives. For many, LLMs are not just tools; they are lifelines that enable us to engage with the world more fully. Removing access to these resources would only isolate individuals who already face significant barriers. Instead, we should focus on utilizing local LLMs and other privacy-focused alternatives.

This situation underscores the need for a nuanced approach to the intersection of privacy and assistive technology. Open-source projects like Piper, a neural text-to-speech engine that runs entirely on-device, show how locally run voice models can be made accessible to everyone, even on low-cost hardware. Advocating for privacy must go hand in hand with considering the implications for those who rely on these technologies for daily functioning. Striking a balance between protecting individual privacy and ensuring access to vital assistive tools is not just necessary; it’s imperative.
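
For illustration, a minimal sketch of driving a locally installed Piper voice from Python via its command-line tool; the voice model name is a placeholder for whichever .onnx voice you download:

```python
# Sketch: fully on-device speech synthesis with Piper
# (github.com/rhasspy/piper), called via its command-line tool.
# The voice model is a placeholder; any downloaded .onnx voice works.
import subprocess

text = "This speech was generated entirely on this device."
subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx",
     "--output_file", "speech.wav"],
    input=text.encode("utf-8"),  # Piper reads the text from stdin
    check=True,
)
```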

In conclusion, LLMs represent a promising avenue for assisting individuals with neurological impairments. By embracing local and open-source solutions, we can protect our privacy while ensuring that everyone has access to the tools they need to thrive. The conversation around privacy and technology must continue, focusing on inclusivity and empowerment for all.

I use SpeechNotes installed locally all the time, and I’d love to hear how you use LLMs as assistive technology! Do you run your LLM locally? Share your experiences!

I'm a big privacy advocate. I don't want spyware on my device, in my car, on my body, etc.

I completely understand concerns about surveillance capitalism and all the hidden dangers it brings.

I also have neurological impairments. Some activities are hard for me, such as writing, getting the right emotional tone in my speech, initiating certain tasks, and being able to understand certain types of information.

To that end, assistive technology has been a big part of my life. When I was a teenager, I had a laptop to help me write, back when laptops were still a novelty. Other kids, and even teachers, said typing and spell check were a crutch.

Today, LLMs have been transformational. I use them to help strike the tone I need in communication, to learn certain concepts, and help me with tasks that would otherwise take me days.

Let's get rid of surveillance capitalism, but when people say they want LLMs gone entirely, they're talking about taking away a lifeline for me.

A Day with JAWS 2035: When Your Screen Reader Scripts Itself

The morning light filters through your smart windows, casting a warm glow across the room. Your ambient AI assistant hums gently, “Good morning, Lottie. Would you like to prepare your workspace for the day?”

“Yes, please,” you say, stretching as the AI readies your home office. The blinds adjust automatically, leaving just enough sunlight to boost your energy without causing glare on your neuro-linked glasses. You smile, reflecting on the advances in technology since the days of fiddling with manual screen reader settings and customized scripts. Those days feel like a distant memory, thanks to JAWS’ AI-powered self-scripting feature—your personal assistant that knows exactly how to handle your work routine.

“Let’s get started,” you say, and JAWS springs to life, adjusting the audio tone to your preferred voice—smooth, confident, efficient. As your desktop computer powers on, JAWS begins analysing the applications you’ve opened, sensing your usual email, project management software, and a new program you’ve recently started exploring.

JAWS’ Real-Time Autonomous Scripting: A Custom Fit

“Good morning, Lottie. I’ve detected a new application in use: ResearchHub. Would you like me to generate an initial script for it?” JAWS asks in a gentle tone, its voice coming through the bone conduction implant in your ear.

You nod. “Yes, go ahead and script it.” This isn’t just any regular software; ResearchHub is dense, designed for researchers and developers with an intricate layout. In the past, navigating such software would have required hours of manually creating scripts or waiting for accessibility support. But today, JAWS’ AI-driven self-scripting feature allows it to analyse this program’s unique design and build custom commands as you go.

“Noted. I’ll adapt based on your usage patterns,” JAWS replies, instantly highlighting an unlabelled menu item. “I’ve labelled this as ‘Data Analysis.’ Would you like a shortcut assigned for quick access?”

“Absolutely,” you reply. Moments later, JAWS has created a keystroke, Control-Shift-D, which will take you directly to the Data Analysis section.

As you dive into your tasks, JAWS continues observing your interactions, quietly scripting shortcuts and macros that save you time with each click. You switch over to an email thread about your latest project, and JAWS dynamically adjusts, making sure to read each new message aloud with just the right level of detail. It’s responsive, intuitive, and seems to understand the flow of your work better than ever.

Adaptive Behaviour Learning: Anticipating Your Needs

JAWS has learned over time what works best for you—like knowing when you prefer concise summaries over detailed descriptions or when to read full email threads aloud. Today, though, as you work through complex calculations in ResearchHub, JAWS picks up on repeated actions, noting your frequent need to access specific data fields.

Without you having to prompt it, JAWS speaks up, “Lottie, I’ve noticed you’re navigating back and forth to the Analysis Settings panel. Would you like me to create a macro for this?”

“Yes, that’d be great,” you reply, surprised at how quickly JAWS anticipates these needs. It assigns a simple command, Control-Alt-S, making it even easier for you to access the settings. With each task, JAWS quietly observes, creating personalized shortcuts and learning how to refine your workflow without interrupting your focus.

Your screen reader feels less like a tool and more like an assistant that adapts to your habits, reducing unnecessary actions and helping you move seamlessly between applications. You take a moment to appreciate the leap from manually scripting these shortcuts to having them generated in real-time, tailored perfectly to your unique style.

Dynamic Accessibility Adjustment: Visual Recognition on the Fly

Halfway through the day, you open a report in a new format. The document is packed with complex graphics, diagrams, and untagged elements—historically a nightmare for accessibility. But JAWS, equipped with advanced AI-powered visual recognition capabilities, is ready.

“Diagram detected: This appears to be a bar graph comparing quarterly performance,” JAWS announces, automatically analysing the content. “Would you like a detailed audio description, or should I just provide the key values?”

“Let’s go with the key values,” you respond, eager to save time. In seconds, JAWS summarizes the data, translating it into accessible content without needing additional third-party support. When you encounter unlabelled buttons in another application, JAWS instantly identifies them and provides real-time labels, adjusting the accessibility on the fly.

The thought crosses your mind how revolutionary this is. You’ve moved past needing someone else to make documents or software accessible for you. Instead, your screen reader adapts and scripts the solution independently, as if it’s actively learning how best to support you.

A Collaborative Community of Scripts

As the day wraps up, JAWS asks, “Lottie, would you like to share the custom scripts I created for ResearchHub with the community repository? Other users might find them useful.”

“Yes, please,” you reply. Knowing that the scripts you and JAWS have tailored today could now benefit others brings a sense of community to your day. In the past, each user’s customization stayed personal, but today, JAWS’ community sharing feature allows anonymized scripts to be uploaded to a shared repository, where other users can download them for similar applications. This feature isn’t just a convenience—it’s a small way to contribute to something larger than yourself.

You smile, thinking about the ripple effect of this community effort. As JAWS users across industries contribute their self-generated scripts, the database grows, improving access for everyone.

Reflecting on Progress: A New Kind of Independence

As you finish your work, JAWS reads aloud your notifications, wrapping up your day with a recap. You reflect on how far technology has come since those early days of assistive devices. Back then, using a screen reader required you to work around its limitations, painstakingly scripting or finding ways to access inaccessible software. Today, your screen reader does the heavy lifting, allowing you to focus on your work without the constant barrier of inaccessible content.

Looking back, you remember those initial frustrations, the hours spent tinkering with manual scripts, and the reliance on tech support for inaccessible programs. Now, JAWS’ AI-powered self-scripting has not only given you more control but also reinforced your independence. It’s not just a tool—it’s a partner in productivity.

As you power down, you realize that technology has not replaced your determination; it has amplified it. JAWS has become a proactive assistant, predicting your needs, adjusting to your habits, and making the inaccessible accessible. With the day’s tasks complete, you feel a renewed sense of autonomy—knowing that the tools at your fingertips truly work for you, enhancing not just your productivity but your entire work experience.

The screen fades to black, and the AI’s voice recedes, leaving you with a quiet appreciation for a world where technology supports your strengths, not your limitations.

Hello, caneandable.social! I'm Lanie, a 33-year-old #Christian woman from Pipe Creek, TX. I'm #aroace, #TotallyBlind, #ActuallyAutistic, and living with multiple #ChronicIllnesses. I've just moved over from tweesecake.social, so I'm excited to meet new people and reconnect with familiar faces!

A bit about me:
- I live with my mom and stepdad, who are my caregivers
- I have a Miniature Pinscher named Squeaker
- Currently studying programming on freeCodeCamp.org and codecademy.com, and Braille proofreading through the NFB
- I work as a usability tester and aspire to become an #accessibility consultant
- My goal is to create a nonprofit for people with multiple #disabilities

My interests include:
- #Gaming (especially accessible games like incremental/idle games, word games, puzzles, RPGs, roguelikes, and MUDs)
- #Technology and #Cybersecurity
- #RareDiseases and #DisabilityRights
- Reading (sci-fi, fantasy, thrillers, mysteries, and nonfiction)
- Swimming
- Gardening (planning to start soon!)

I'm passionate about #accessibility and creating a more inclusive digital world. I run online groups for people with multiple disabilities and am active in the #DisabilityCommunity.

Some of my #health conditions include occipital neuralgia, intracranial hypertension, Empty Sella Syndrome, fibromyalgia, hidradenitis suppurativa, GERD, gastroparesis, IBS, sleep apnea, migraines, and non-24-hour sleep-wake disorder.

I use various devices to navigate the digital world, including a Windows computer, iPhone, Android tablet, Fire TV, Echo Dot, Apple Watch, wireless headphones, and a Braille display. #AssistiveTechnology

I'm always eager to connect with others who share similar experiences or interests. Feel free to reach out, especially if you're into #AccessibleTech, #DisabilityAdvocacy, or if you just want to chat about books, games, or life with multiple disabilities!