#computation


The Fourier Transform is a mathematical operation that transforms a function of time (or space) into a function of frequency. It decomposes a complex signal into its constituent sinusoidal components, each with a specific frequency, amplitude, and phase. This is particularly useful in many fields, such as signal processing, physics, and engineering, because it allows for analysing the frequency characteristics of signals. The Fourier Transform provides a bridge between the time and frequency domains, enabling the analysis and manipulation of signals in more intuitive and computationally efficient ways. The result of applying a Fourier Transform is often represented as a spectrum, showing how much of each frequency is present in the original signal.

\[\Large\boxed{\boxed{\widehat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\ e^{-i 2\pi \xi x}\,\mathrm dx, \quad \forall\xi \in \mathbb{R}.}}\]

Inverse Fourier Transform:
\[\Large\boxed{\boxed{ f(x) = \int_{-\infty}^{\infty} \widehat f(\xi)\ e^{i 2 \pi \xi x}\,\mathrm d\xi,\quad \forall x \in \mathbb R.}}\]
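As a concrete illustration (my sketch, not part of the original post), here is how the discrete version of this transform can decompose a synthetic two-tone signal using NumPy's FFT; the sampling rate and test frequencies are assumed values chosen for clarity:

```python
import numpy as np

# Minimal sketch: sample a signal built from two sine waves (5 Hz and 20 Hz),
# then use the discrete Fourier transform to recover those frequencies.
fs = 1000                            # assumed sampling rate in Hz
t = np.arange(0, 1, 1 / fs)          # one second of samples
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

spectrum = np.fft.rfft(signal)                    # DFT of a real-valued signal
freqs = np.fft.rfftfreq(len(signal), 1 / fs)      # frequency of each bin
amplitudes = np.abs(spectrum) / len(signal) * 2   # rescale to sine amplitudes

# The two largest peaks sit at 5 Hz and 20 Hz, matching the components above.
peaks = freqs[np.argsort(amplitudes)[-2:]]
print(np.sort(peaks))   # -> [ 5. 20.]
```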

The equation allows us to listen to mp3s today. Digital Music Couldn’t Exist Without the Fourier Transform: bit.ly/22kbNfi


@cryptadamist One of the reasons has to do with the nature of computation itself. #AI #algorithms often have high computational complexity (a technical term describing their time and space requirements). AI demands computing resources that are intrinsically (mathematically) difficult to provide in “conventional” computers. It pushes the boundaries of #computation, which are partially imposed by the design and engineering of computer systems. 2/n

The #Public #Cloud

I’m more and more inclined to think a public cloud is the only humane response to the Techno-Feudalists/fascists of Silicon Valley & beyond.

There will be so many #bots, so much #scam activity, that each of us will have to run #machinelearning models just to survive the new post-#ai age.

#Digital #citizens should demand secure #computation cycles as a natural right, because that is the world that is about to begin.

It is the only answer to #LuxurySurveillance #dataheist #capitalism: Run your own.

The discovery of the fifth Busy Beaver number highlights the boundaries of computation itself. This elusive function points to questions that no machine, however advanced, can settle in general. As we push the limits of what can be calculated, we confront deeper questions: are there problems forever beyond the reach of algorithms?
#Computation #Mathematics #LimitsOfKnowledge
quantamagazine.org/amateur-mat
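To make the idea concrete (this sketch is mine, not from the linked article), here is a tiny Turing-machine simulator running the known two-state busy beaver champion, which halts after 6 steps with 4 ones on the tape; pinning down BB(5) required analysing millions of such machines:

```python
from collections import defaultdict

# Minimal sketch: simulate the 2-state busy beaver champion machine.
# Transition table: (state, symbol read) -> (symbol to write, head move, next state)
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, max_steps=10_000):
    tape = defaultdict(int)        # unbounded tape, initially all zeros
    head, state, steps = 0, "A", 0
    while state != "HALT" and steps < max_steps:
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run(RULES)
print(steps, ones)   # -> 6 4: halts after 6 steps, leaving 4 ones on the tape
```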

"Claude Shannon's 1950 paper Programming a Computer for Playing Chess was a seminal work in the field of Artificial Intelligence [...] In this paper, Shannon employs the Minimax Algorithm, which takes as its premise that your opponent will always choose the move that is best for them, and worst for you."

The Unimax Algorithm is a cooperation-centric alternative to this foundational yet adversarial paradigm in #computation.

unimax.run/


“Dynamicland is a way for real people in the real world to explore ideas together, not just with words and pictures, but with computation.
But, for us, computation doesn't mean scrolling around in screens. It means working out ideas in the real world.”

#technology #computation

dynamicland.org/


What is the relationship between information, causation, and entropy?

The other day, I was reading a post from Corey S. Powell on how we are all ripples of information. I found it interesting because it resonated with my own understanding of information (i.e. it flattered my biases). We both seem to see information as something active rather than passive. In my case I see it fundamentally related to causation itself, more specifically a snapshot of causal processing. Powell notes that Seth Lloyd has an excellent book on this topic, so I looked it up.

Lloyd’s 2006 book is called Programming the Universe, which by itself gives you an idea of his views. He sees the entire universe as a giant computer, specifically a quantum computer, and much of the book is about making a case for it. It’s similar to the “it from qubit” stance David Chalmers explores in his book Reality+. (I did a series of posts on Chalmers’ book a while back.)

One of the problems with saying the universe is a computer is that it invites an endless metaphysical debate, along with narrow conceptions of “computer” that lead people to ask things like what kind of hardware the universe might be running on. I’ve come to think a better strategy is to talk about the nature of computation itself. Then we can compare and contrast that nature with the universe’s overall nature, at least to the extent we understand it.

Along those lines, Chalmers argues that computers are causation machines. I think it helps to clarify that we’re talking about logical processing, which is broader than just calculation. I see logical processing as distilled causation, specifically a high degree of causal differentiation (information) at the lowest energy levels currently achievable, in other words, a high information to energy ratio.

The energy point is important, because high causal differentiation tends to be expensive in terms of energy. (Data centers are becoming a major source of energy consumption in the developed world, and although the brain is far more efficient, it’s still the most expensive organ in the body, at least for humans.)
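To put rough numbers on that expense (my figures, not the author's): Landauer's principle sets the minimum energy to erase one bit at kT·ln 2, and comparing that floor to an assumed ballpark for real hardware shows how far the information-to-energy ratio can still improve:

```python
import math

# Rough illustration: Landauer's limit vs. an assumed modern-hardware figure.
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K
landauer_j_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J to erase one bit

# Hypothetical ballpark for scale only: a ~100 W processor performing
# ~1e18 low-level bit operations per second (assumed, not measured figures).
assumed_j_per_bit_op = 100 / 1e18

print(f"Landauer floor:   {landauer_j_per_bit:.2e} J/bit")
print(f"Assumed hardware: {assumed_j_per_bit_op:.2e} J/bit-op "
      f"(~{assumed_j_per_bit_op / landauer_j_per_bit:,.0f}x above the floor)")
```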

Which is why computational systems always have input/output interfaces that reduce the energy levels of incoming effects from the environment to the levels of their internal processing, and amplify the energy of outgoing effects. (Think keyboards and screens for traditional PCs, or sense organs and muscles for nervous systems.)

Of course, there’s no bright line, no sharp threshold in the information / energy ratio where a system is suddenly doing computation. As a recent Quanta piece pointed out, computation is everywhere. But for most things, like stars, the magnitude of their energy level plays a much larger role in the causal effects on the environment than their differentiation.

However, people like Lloyd or Chalmers would likely point out that the energy magnitude is itself a number, a piece of information, one that has computational effects on other systems. In a simulation of that system, the simulation wouldn’t have the same causal effects on other physical systems as the original, but it would within the environment of the simulation. (Simulated wetness isn’t wet, except for entities in the simulation.)

Anyway, the thing that really caught my eye with Lloyd was his description of entropy. I’ve covered before my struggles with the customary description of entropy as the amount of disorder in a system. Disorder according to who? As usually described, it leaves the question of how much entropy a particular system has as observer dependent, which seems problematic for a fundamental physics concept. My reconciliation of this is to think of entropy as disorder for transformation, or in engineering terms: for work.

Another struggle has been the relationship between entropy and information. I’ve long wanted to say that entropy and information are closely related, if not the same thing. That seems like the lesson from Claude Shannon’s theory of information, which uses an equation similar to Ludwig Boltzmann’s for entropy. Entropy is a measure of the complexity in a system, and higher values result in a system’s energy gradients being fragmented, making much of the energy in the system unavailable for transformation (work), at least without adding additional energy into the system.
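For reference (standard textbook forms, added here for comparison), Shannon’s entropy of a probability distribution and the Gibbs form of Boltzmann’s entropy differ essentially only in the constant and the base of the logarithm:

\[ H = -\sum_i p_i \log_2 p_i \quad\text{(Shannon)}, \qquad S = -k_B \sum_i p_i \ln p_i \quad\text{(Gibbs–Boltzmann)}. \]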

However, people like Sean Carroll often argue that a high entropy state is one of low information, although Carroll does frequently note that there are several conceptions of “information” out there. His argument makes sense for what is often called “semantic information”, that is, information whose meaning is known and useful to some kind of agent. The equivalence seems to hold more for “physical information”, the broader concept of information as generally used in physics (and which causes hand-wringing over the possibility of black holes losing it).

Lloyd seems to be on the same page. He sees entropy as information, although he stipulates that it’s hidden information, or unavailable information (similar to how energy is present but unavailable). But this again seems to result in entropy being observer dependent. If the information is available to you but not me, does that mean the system has higher entropy for me than it does for you? If so, then computers are high entropy systems since none of us have access to most of the current information in the device you’re using right now.

My reconciliation here is to include the observer as part of the accounting. So if a system is in a highly complex state, one you understand but I don’t, then the entropy of the you + system combo under consideration is lower than the entropy of the me + system combo. In other words, your knowledge, the correlations between you and the system, makes the combined you + system more ordered for transformation than the me + system combo. At least that’s my current conclusion.

But that means for any particular system considered in isolation, the level of entropy is basically the amount of complexity, of physical information it contains. That implies that the ratio I was talking about above, of information to energy, is also of entropy to energy. And another way to refer to these computational systems, in addition to information processing systems, is as entropy processing systems, or entropy transformers.

This might seem powerfully counterintuitive because we’re taught to think of entropy as bad. Computational systems seem to be about harnessing their entropy, their complexity, and making use of it. And we have to remember that these aren’t closed systems. As noted above, they’re systems that require a lot of inbound energy. It’s that supply of energy that enables transformation of their highly entropic states. (It’s worth noting that these systems also produce a lot of additional entropy that requires energy to be removed, such as waste heat or metabolic waste.)

So computers are causation machines and entropy transformers. Which kind of sounds like the universe, but maybe in a very concentrated form. Viewing it this way keeps us more aware of the causal relations not yet captured by current conventional computers. And the energy requirements remind us that computation may be everywhere, but the useful versions only seem to come about from extensive evolution or engineering. As Chalmers notes in his book, highly computational systems don’t come cheap.

What do you think? Are there differences between physical information and entropy that I’m overlooking? And how would you characterize the nature of computation? Does a star, rock, or hurricane compute in any meaningful sense? What about a unicellular organism?


https://selfawarepatterns.com/2024/07/28/entropy-transformers/

Advancing Digital Earth Modeling - Hexagonal Multi-Structural Elements In Icosahedral DGGS For Enhanced Geospatial Data Processing
--
doi.org/10.1016/j.envsoft.2023 <-- shared paper
--
en.wikipedia.org/wiki/Discrete <-- DGG wiki page
--
[the math is way over my head, hence the wiki page link, but a good read nonetheless]
“HIGHLIGHTS:
• Hexagonal multi-structural elements enhance Earth's surface modeling precision.
• Integration of indexing and conversion rules improves geospatial data computation.
• DGGRID implementation shows increased precision in raster and vector data modeling.
• Addresses limitations in existing software for Earth observation data.
• Pioneering approach expands geospatial data processing applications…"
#GIS #spatial #mapping #DiscreteGlobalGrid #DGG #DGGS #indexing #conversion #rules #computation #Hexagonal #DGGRID #raster #vector #data #model #modeling #earthobservation #remotesensing #grid #vertices #edges #icosahedral #projections #coordinates #representation
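The paper's indexing and conversion rules are beyond a short example, but as a generic illustration of hexagonal-cell bookkeeping (my sketch, unrelated to the DGGRID implementation or the authors' method), here is the standard cube-coordinate rounding used to snap a planar point to its containing hexagon:

```python
# Generic illustration of hexagonal-cell indexing (not the paper's DGGS method):
# convert a planar point to axial hex coordinates via cube rounding.
import math

def point_to_hex(x, y, size):
    """Return the axial (q, r) index of the pointy-top hexagon containing (x, y)."""
    q = (math.sqrt(3) / 3 * x - 1 / 3 * y) / size   # fractional axial coordinates
    r = (2 / 3 * y) / size
    return cube_round(q, r)

def cube_round(q, r):
    """Round fractional axial coordinates to the nearest whole hexagon."""
    s = -q - r                                      # cube constraint: q + r + s = 0
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs                               # fix the coordinate with the largest error
    elif dr > ds:
        rr = -rq - rs
    return rq, rr

print(point_to_hex(10.0, 4.0, size=1.0))   # -> (4, 3) on a unit-size grid
```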