Intelligence Maximizes Entropy?

A new idea linking intelligence to entropy is giving me something to think about.

“[…] intelligent behavior emerges from the ‘physical process of trying to capture as many future histories as possible.’”

This sounds familiar. I have been calling my cosmological model the Sole Entropy Well, or Negentropic Monopoly, in which all signals (experiences) are diffracted from a single eternal experience, the content of which is the capacity to experience. I think this is the same principle that the paper calls “causal entropic forces”, except in reverse. I wrote recently about how intelligence is rooted in public space while wisdom is about private time.

I think that causal entropic forces are about preserving a ‘float’ of high entropy on top of time. It’s like juggling: you want to suspend as many potentials as you can at “a” time and compensate for any potential threats before they can happen “in” time. Behind the causal entropic force, it seems to me, there must always be a core which is not entropic. That which seeks to entropically harness the future is itself motivated by the countervailing force: to escape, itself, the harness of entropy.
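
For reference, the “single equation” behind all of this, as I read the paper, is a causal entropic force pointing up the gradient of the entropy of possible future paths:

```latex
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \Big|_{X_0},
\qquad
S_c(X, \tau) = -k_B \int P\big(x(t) \mid x(0)\big) \, \ln P\big(x(t) \mid x(0)\big) \, \mathcal{D}x(t)
```

Here S_c is the entropy taken over whole future trajectories x(t) out to a time horizon τ, and T_c is a “causal path temperature” that sets how strongly the system is driven toward states with the most open futures.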

None of this, however, addresses the Hard Problem. To the contrary, if this model is correct, then it is even more difficult to justify the existence of aesthetic sense, since all of the public effects of intelligence can be explained by thermodynamics.

Article: “A single equation grounded in basic physics principles could describe intelligence and stimulate new insights in fields as diverse as finance and robotics, according to new research. 

Alexander Wissner-Gross, a physicist at Harvard University and the Massachusetts Institute of Technology, and Cameron Freer, a mathematician at the University of Hawaii at Manoa, developed an equation that they say describes many intelligent or cognitive behaviors, such as upright walking and tool use. 
 
The researchers suggest that intelligent behavior stems from the impulse to seize control of future events in the environment. This is the exact opposite of the classic science-fiction scenario in which computers or robots become intelligent, then set their sights on taking over the world. 
 
The findings describe a mathematical relationship that can “spontaneously induce remarkably sophisticated behaviors associated with the human ‘cognitive niche,’ including tool use and social cooperation, in simple physical systems,” the researchers wrote in a paper published today in the journal Physical Review Letters.  
 
“It’s a provocative paper,” said Simon DeDeo, a research fellow at the Santa Fe Institute, who studies biological and social systems. “It’s not science as usual.”
 
Wissner-Gross said the research was “very ambitious” and cited developments in multiple fields as its major inspirations. 
 
The mathematics behind the research comes from thermodynamics, the theory of how heat energy can do work and diffuses over time. One of its core concepts is entropy, the tendency of systems to evolve toward greater disorder. The second law of thermodynamics states that in any isolated system, entropy tends to increase. A mirror can shatter into many pieces, but a collection of broken pieces will not reassemble into a mirror.
 
The new research proposes that entropy is directly connected to intelligent behavior.
 
“[The paper] is basically an attempt to describe intelligence as a fundamentally thermodynamic process,” said Wissner-Gross.
 
The researchers developed a software engine, called Entropica, and gave it models of a number of situations in which it could demonstrate behaviors that greatly resemble intelligence. They patterned many of these exercises after classic animal intelligence tests.  
 
In one test, the researchers presented Entropica with a situation where it could use one item as a tool to remove another item from a bin, and in another, it could move a cart to balance a rod standing straight up in the air. Governed by simple principles of thermodynamics, the software responded by displaying behavior similar to what people or animals might do, all without being given a specific goal for any scenario.
 
“It actually self-determines what its own objective is,” said Wissner-Gross. “This [artificial intelligence] does not require the explicit specification of a goal, unlike essentially any other [artificial intelligence].”
 
Entropica’s intelligent behavior emerges from the “physical process of trying to capture as many future histories as possible,” said Wissner-Gross. Future histories represent the complete set of possible future outcomes available to a system at any given moment.
 
Wissner-Gross calls the concept at the center of the research “causal entropic forces.” These forces are the motivation for intelligent behavior. They encourage a system to preserve as many future histories as possible. For example, in the cart-and-rod exercise, Entropica controls the cart to keep the rod upright. Allowing the rod to fall would drastically reduce the number of remaining future histories, or, in other words, lower the entropy of the cart-and-rod system. Keeping the rod upright maximizes the entropy. It maintains all future histories that can begin from that state, including those that require the cart to let the rod fall.
 
“The universe exists in the present state that it has right now. It can go off in lots of different directions. My proposal is that intelligence is a process that attempts to capture future histories,” said Wissner-Gross.
 
The research may have applications beyond what is typically considered artificial intelligence, including language structure and social cooperation.
 
DeDeo said it would be interesting to use this new framework to examine Wikipedia, and research whether it, as a system, exhibited the same behaviors described in the paper.
 
“To me [the research] seems like a really authentic and honest attempt to wrestle with really big questions,” said DeDeo.
 
One potential application of the research is in developing autonomous robots, which can react to changing environments and choose their own objectives.
 
“I would be very interested to learn more and better understand the mechanism by which they’re achieving some impressive results, because it could potentially help our quest for artificial intelligence,” said Jeff Clune, a computer scientist at the University of Wyoming.
 
Clune, who creates simulations of evolution and uses natural selection to evolve artificial intelligence and robots, expressed some reservations about the new research, which he suggested could stem from differences in jargon between fields.
 
Wissner-Gross said he expects to work closely with people in many fields to help them understand how their fields informed the new research, and how its insights might be useful to them.
 
The new research was inspired by cutting-edge developments in many other disciplines.  Some cosmologists have suggested that certain fundamental constants in nature have the values they do because otherwise humans would not be able to observe the universe. Advanced computer software can now compete with the best human players in chess and the strategy-based game called Go. The researchers even drew from what is known as the cognitive niche theory, which explains how intelligence can become an ecological niche and thereby influence natural selection.
 
The proposal requires that a system be able to process information and predict future histories very quickly in order to exhibit intelligent behavior. Wissner-Gross suggested that the new findings fit well within an argument linking the origin of intelligence to natural selection and Darwinian evolution: that nothing besides the laws of nature is needed to explain intelligence.
 
Although Wissner-Gross suggested that he is confident in the results, he allowed that there is room for improvement, such as incorporating principles of quantum physics into the framework. Additionally, a company he founded is exploring commercial applications of the research in areas such as robotics, economics and defense.
 
“We basically view this as a grand unified theory of intelligence,” said Wissner-Gross. “And I know that sounds perhaps impossibly ambitious, but it really does unify so many threads across a variety of fields, ranging from cosmology to computer science, animal behavior, and ties them all together in a beautiful thermodynamic picture.”
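
To make the cart-and-rod example above concrete for myself, here is a minimal toy sketch of the principle in Python (my own illustration, not the authors’ Entropica code, and every name in it is my own invention): for each candidate move, the agent samples random futures and picks the move whose sampled end states are most spread out, i.e. which keeps the most future histories open.

```python
import math
import random
from collections import Counter

# Toy stand-in for a causal entropic force: a walker on a line with
# absorbing "cliffs" at both ends. Falling off a cliff freezes the
# walker and collapses its remaining future histories.
CLIFF_LEFT, CLIFF_RIGHT = 0, 10   # absorbing positions
HORIZON = 8                       # how many steps ahead we sample
SAMPLES = 500                     # random rollouts per candidate move

def rollout(pos):
    """One random future: drift until the horizon or a cliff."""
    for _ in range(HORIZON):
        if pos in (CLIFF_LEFT, CLIFF_RIGHT):
            break                 # absorbed: this history is frozen
        pos += random.choice((-1, 1))
    return pos

def future_entropy(pos):
    """Shannon entropy of end positions over sampled futures."""
    ends = Counter(rollout(pos) for _ in range(SAMPLES))
    total = sum(ends.values())
    return -sum((n / total) * math.log(n / total) for n in ends.values())

def choose_move(pos):
    """Pick the move whose successor keeps the most futures open."""
    return max((-1, +1), key=lambda d: future_entropy(pos + d))

pos = 2                           # start near the left cliff
for step in range(6):
    pos += choose_move(pos)
    print(f"step {step}: position {pos}")
```

Nothing in the code says “avoid the cliff”, yet the walker drifts toward the middle, because positions near a cliff have fewer reachable futures. That, in miniature, is the self-determined objective the article describes.
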
  1. April 23, 2013 at 11:15 pm

    Hi there,
    I wanted to drop by and thank you for following me on http://www.postsofhypnoticsuggestion.wordpress.com, it’s much appreciated. I really like what you’re doing here, so am now following you too.
    Wishing you all the best
    Tony

    PS If you wanted to, you could also follow me on Twitter at http://www.twitter.com/tbtalks. You’d be very welcome.

    • April 24, 2013 at 12:34 am

      Thanks Tony, I’ve got you on the Tweeter now too. Glad you’re here.

  2. May 9, 2013 at 6:27 pm

    I don’t see how causal entropy is really “thermodynamic” in the usual sense, though the authors of the paper sure want you to think it is. I’m probably understanding it wrong, but causal entropy strikes me as counter to general thermodynamics, since it claims, I think, that a system will attempt to evolve toward states with greater future thermodynamic freedom (more possible futures) even if that evolution means its current state becomes less thermodynamically entropic in absolute terms. Is that about right? I mean, isn’t that why the concept of “causal” entropy has to be introduced in the first place, over and above just plain “entropy”?

    There is some kind of linguistic/conceptual flimflammery going on in this study but I can’t quite figure out what it is. (Which doesn’t mean it’s wrong, just that the terms and meanings aren’t entirely transparent.) It sure got lots of press though.

    • May 9, 2013 at 7:47 pm

      Yes, I agree. The word entropy has a lot of different uses, and each of them seems a little shaky in its own right. I come to physics ‘through the back door’, as it were, from an interest in consciousness, trying to get a general understanding without having the natural aptitude. Instead of trying to actually solve or even parse the equations, I spend a lot of time trying to get to the bottom of the basic terms: what is force? what is energy? time? etc. Entropy is the loopiest of all of them, which I think is because the role that perception/detection/description plays is not properly considered. Thinking about it invariably leads me to getting lost in whether what I am thinking of should be called low entropy or high entropy.

      Take causal entropy, for example. The general idea seems to be that some high-entropy state wants to remain high, i.e. to preserve its own freedom from its first-person perspective and its unpredictability from the third-person perspective. The intelligence then wants to ‘capture possible futures’, i.e. to negate with certainty various possibilities that it can anticipate…which is really a lowering of causal entropy in the third-person environment. What does the intelligent agent do? It crystallizes its environment with technology and habituation. Intelligence domesticates the environment, projecting negentropy into what it deliberately changes, but, as we see, that is achieved at the cost of producing entropic consequences (pollution, heat, environmental exhaustion).

      Then you’ve got the auto-domesticating feedback of living in an overly civilized environment, which is hard to quantify in entropy terms…as I sit on my ass, doing the same commute to work every day, being very predictable…am I intelligent? Am I in a lower entropy state than a hunter-gatherer? Hunter-gatherers have their routines too. It’s all very sketchy if we really try to apply this kind of purely physical-quantitative model, not to mention that it doesn’t touch the hard problem at all.

      With the experiential entropy concept, I was trying to get at the idea of a whole other axis of information entropy, so that it’s not just a one-dimensional measure of how objectively clear information is, but of how important that information feels to us, whether it is clear or not. There would be a two-value matrix instead, so that classical information entropy becomes the Y axis and refers to flat computability, with high-entropy values being like the letter ‘e’ and low-entropy values being like the letter ‘q’, in the sense that ‘e’ could come before anything but ‘q’ is usually followed by ‘u’. The X axis becomes the emotional distance, or aesthetic saturation value. In this case, high X entropy is about delirium or ecstatic visions, poetry, psychedelia, etc. Experiences which transgress boundaries of self-other, now-then, here-there… crazy stuff. The low X entropy is all public, empirical phenomena. Practical, but also mechanistic, unfeeling, autistic (autistic in the sense of under-signifying social subtleties).
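
      If it helps, that Y-axis idea is easy to show in code (just my own toy example, nothing from the paper): measure how predictable the next letter is after ‘e’ versus after ‘q’.

```python
import math
from collections import Counter

# Tiny sample text; any longer English text works better.
text = ("the quick brown fox jumps over the lazy dog and quietly "
        "queues for quite some time near the quiet quay")

def next_letter_entropy(text, letter):
    """Shannon entropy (bits) of the character that follows `letter`."""
    followers = Counter(b for a, b in zip(text, text[1:]) if a == letter)
    total = sum(followers.values())
    return -sum((n / total) * math.log2(n / total)
                for n in followers.values())

print("after 'e':", round(next_letter_entropy(text, 'e'), 2), "bits")  # high
print("after 'q':", round(next_letter_entropy(text, 'q'), 2), "bits")  # 0.0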

      I’m not sure if that maps to causal entropy exactly, but I think they both relate to the Intelligence vs. Wisdom concept. Pure intelligence with no wisdom is autistic…except in its own eyes, where nothing is missing and wisdom seems superfluous and sentimental. It “does not compute”, because wisdom is based on experience through time rather than body configurations across space. Applying causal entropy to consciousness seems like trying to compress the entire X axis of experiential entropy into the Y axis of information entropy, so that there is still no recognition of aesthetic value or feeling, but there is an added sense of dynamism within the single Y axis. It’s more of a game-theory model using the terms of physics, looking only from the lowest-X-value perspective. It seems possibly true on that functional level: intelligence imports uncertainty from the future and uses it to blur itself internally, giving it the freedom to export certainty into the present. Of course, if that were all there was to it, there would be no need for consciousness at all, but still, a promising idea.

      I think that it feels more natural to apply this model to individuals, societies, and nature.

  3. May 9, 2013 at 6:39 pm

    BTW, I love how they show the development of upright walking as an example of causal entropy in action. I think this is even more profound as a metaphor for all of reality. Sense/Consciousness is trying to stand ever more upright, increasing its freedom, expanding its own capabilities of experience. As the entropy of the world increases, so does consciousness’s causal power. It’s a journey toward an infinitely-ahead singularity in which all possible states of consciousness would be available to the system as a possible future. A state with infinite causal entropy is actually a state with no entropy at all. (Or am I just sounding loony?)
