#machinelearning

125 posts · 104 participants · 17 posts today

🚨 OpenAI Week is HERE! 🚨

OpenAI is preparing to launch GPT-4.1 nano, alongside o4-mini and o3. Could this be the open-source model Sam Altman hinted at?

A smaller, more efficient GPT-4 could make AI more accessible, sparking new innovations for developers and researchers. Can't wait to see how this unfolds!

#AI #OpenAI #GPT4

🤖 Last Friday at #ITC @utwente we explored the future of #AI #MachineLearning and #DeepLearning in #QGIS. Inspiring presentations on plugins and tools, and valuable discussions and insights for the future of this technology in QGIS. Thanks to all participants, and especially to our international guests Matthias Kuhn and Ivan Ivanov of @opengisch.

Thanks to Rosa Aguilar for organizing, and to @eScienceCenter for their support!

🚀 Behold, the groundbreaking revelation that neural networks can be trained without #backpropagation or forward-propagation! 😲 Why bother with actual #science when you can just wave your hands and hope for the best? 🤦‍♂️ Thank you, #arXiv, for this enlightening display of 🤡 #innovation.
arxiv.org/abs/2503.24322 #neuralnetworks #machinelearning #HackerNews #ngated

arXiv.org · NoProp: Training Neural Networks without Back-propagation or Forward-propagation

The canonical deep learning approach for learning requires computing a gradient term at each layer by back-propagating the error signal from the output towards each learnable parameter. Given the stacked structure of neural networks, where each layer builds on the representation of the layer below, this approach leads to hierarchical representations. More abstract features live on the top layers of the model, while features on lower layers are expected to be less abstract. In contrast to this, we introduce a new learning method named NoProp, which does not rely on either forward or backwards propagation. Instead, NoProp takes inspiration from diffusion and flow matching methods, where each layer independently learns to denoise a noisy target. We believe this work takes a first step towards introducing a new family of gradient-free learning methods, that does not learn hierarchical representations -- at least not in the usual sense. NoProp needs to fix the representation at each layer beforehand to a noised version of the target, learning a local denoising process that can then be exploited at inference. We demonstrate the effectiveness of our method on MNIST, CIFAR-10, and CIFAR-100 image classification benchmarks. Our results show that NoProp is a viable learning algorithm which achieves superior accuracy, is easier to use and computationally more efficient compared to other existing back-propagation-free methods. By departing from the traditional gradient based learning paradigm, NoProp alters how credit assignment is done within the network, enabling more efficient distributed learning as well as potentially impacting other characteristics of the learning process.
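For readers skimming the abstract, here is a minimal sketch of the idea it describes: each block is trained independently to denoise a noised copy of a target embedding, so no error signal ever crosses block boundaries. Everything concrete below (MLP blocks, the linear noise schedule, fixed random label codes, the naive inference loop) is an assumption made for illustration, not the paper's actual setup.

```python
# Minimal sketch of per-layer denoising training in the spirit of NoProp.
# Assumptions (not from the paper): MLP blocks, a linear noise schedule,
# fixed random label embeddings, and a simplified inference loop.
import torch
import torch.nn as nn

T, num_classes, d, x_dim = 5, 10, 32, 784

# Fixed random target code per class (assumption; the paper's targets differ).
label_codes = torch.randn(num_classes, d)

# One independent block per denoising step: (x, noisy code) -> denoised code.
blocks = nn.ModuleList(
    nn.Sequential(nn.Linear(x_dim + d, 128), nn.ReLU(), nn.Linear(128, d))
    for _ in range(T)
)
opt = torch.optim.Adam(blocks.parameters(), lr=1e-3)
alphas = torch.linspace(0.1, 0.9, T)  # fraction of signal kept at each step

def train_step(x, y):
    """Each block gets a purely local denoising loss; no gradient crosses blocks."""
    u_y = label_codes[y]                      # clean target codes, shape (B, d)
    loss = 0.0
    for t in range(T):
        a = alphas[t]
        # The block's input representation is fixed beforehand to a noised
        # version of the target -- no forward pass through other blocks needed.
        z_noisy = a.sqrt() * u_y + (1 - a).sqrt() * torch.randn_like(u_y)
        pred = blocks[t](torch.cat([x, z_noisy], dim=-1))
        loss = loss + ((pred - u_y) ** 2).mean()
    opt.zero_grad()
    loss.backward()                           # gradients stay inside each block
    opt.step()
    return float(loss)

@torch.no_grad()
def infer(x):
    """Start from noise, apply the blocks in sequence, then decode by nearest
    label code (a simplification of the paper's inference rule)."""
    z = torch.randn(x.shape[0], d)
    for t in range(T):
        z = blocks[t](torch.cat([x, z], dim=-1))
    return torch.cdist(z, label_codes).argmin(dim=-1)

# Smoke test on random data standing in for flattened 28x28 images.
x, y = torch.randn(8, x_dim), torch.randint(0, num_classes, (8,))
print(train_step(x, y), infer(x).tolist())
```

The sketch only shows why the approach is "gradient-free" across layers rather than within them; it says nothing about whether the paper's accuracy or efficiency claims hold up.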