Artificial Intelligence Will Proceed Only Through Real Consequences

AI’s development takes place in make-believe. The consequences of any particular AI system aren’t real, neither in significance nor in kind: no AI system operates in a life-and-death (near- or long-term) decision-making environment for itself. It literally does not have its own survival, or the survival of its “genes,” as its main objective.

This is the biggest point of all about intelligence. You cannot have “intelligence” without the contingency of life. Without having to weigh the survival of self, of future selves, and of other selves in any particular scenario, intelligence cannot cohere.

Current approaches to AI try to emulate various ideas of survival, competition, and so on, but ultimately none of them seriously takes on life and death as the main reinforcing signal. For example, advances in AI have typically used games like Chess and Go to show off what they can do. Winning or losing a game could be considered some sort of survival emulation. The problem is… such games are closed systems, and they aren’t even close to representing the conditions of actual survival in terms of resource management, sacrifice for others and/or future selves, or even basic notions of the stakes of losing.
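
To make the contrast concrete, here is a minimal sketch in Python of the difference between the two signals. All of the names here are hypothetical, chosen purely for illustration: a closed game hands back a bounded score and then resets, while survival is a running resource budget in which ruin is terminal.

```python
def game_reward(outcome: str) -> float:
    # Closed system: the episode ends, the signal is win/lose,
    # and nothing is at stake for the agent itself. It just resets.
    return 1.0 if outcome == "win" else -1.0

def survival_step(energy: float, food_found: float, step_cost: float = 1.0):
    # Open system: energy is a finite resource, every action has a
    # metabolic cost, and running out is terminal. There is no reset.
    energy = energy + food_found - step_cost
    alive = energy > 0.0
    return energy, alive
```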

The existential gap isn’t a failure of programmers per se. It’s a really hard problem to even conceive of in any language, much less a programming language. Programmers and scientists, as individuals and as members of the human race, probably have no coherent representation of this survival struggle for themselves or for society, much less for machines. In a way, it isn’t even possible to come up with such a coherent concept just by sitting down and thinking it through. But somehow humans (and other forms of life) do seem to have a deep, multi-faceted notion of survival, at least biologically. It has emerged as a strategy from millions or billions of years of generational experimentation. That is, it sits a layer or ten below all this “intelligence” of our languages and games.

In a sense, AI is still being designed from the top down. It’s simply not going to work. AI is on a Cartesian crash course: it is effectively Mind/Body duality, but almost exclusively focused on Mind. There is almost no serious AI research that is embodied… that takes the physical container of AI — the raw energy and hardware required to “do intelligent things” — as its main survival objective. Instead, most AI tries to take on a sociological understanding of the world as encoded by scientific research and to back-build its way into human-like reasoning. Using games as a proxy for intelligent behavior, or even as survival emulation, has it backwards. Games and gameplay are not the source of human notions of survival… they might be useful as additional training signals for existing biological notions, or for enhancing physical assets or other faculties… but games do not form the basis of biological survival — resource management, replication/reproduction, and so on. Another way to consider this… the mind alone cannot create the body, or the necessary conditions for the body.

Taking physical survival as the main objective for intelligence means that we will have to GROW and EVOLVE AI: let the blind watchmaker do its thing with computational hardware and software.
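
What might letting the blind watchmaker loose on software look like? Below is a minimal sketch, assuming a toy world: an open-ended loop in which energy is the only currency, death is the only filter, and replication with blind mutation is the only way a skill persists. Every name here (Agent, ENERGY_START, world_yield) is hypothetical, an illustration rather than any real research system.

```python
import random

ENERGY_START = 10.0   # hypothetical energy budget at birth
STEP_COST = 1.0       # every action has a metabolic cost
MUTATION_RATE = 0.1   # blind variation; no designer in the loop

class Agent:
    def __init__(self, skill: float):
        self.skill = skill
        self.energy = ENERGY_START

    def step(self, world_yield: float) -> bool:
        # Gather food in proportion to skill and pay the cost of acting.
        # Returning False is death, the only evaluation signal there is.
        self.energy += self.skill * world_yield - STEP_COST
        return self.energy > 0.0

    def replicate(self) -> "Agent":
        # Offspring inherit the parent's skill plus blind mutation.
        return Agent(self.skill + random.gauss(0.0, MUTATION_RATE))

population = [Agent(random.random()) for _ in range(100)]
for _ in range(1000):
    world_yield = random.uniform(0.5, 1.5)      # an open, shifting world
    population = [a for a in population if a.step(world_yield)]
    for agent in list(population):
        if agent.energy > 2 * ENERGY_START:     # replication costs energy
            agent.energy -= ENERGY_START
            population.append(agent.replicate())
    population = population[:500]               # a finite niche caps the population
```

Whether skill drifts upward or the whole population simply goes extinct depends on the world, not on any designer’s loss function, which is exactly the point.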

The current approach of taking general-purpose silicon computers and loading AI onto them, even bazillions of them, is going to hit a wall. Some AI researchers may argue that physical embodiment can be emulated, since universal computers can emulate any other computer. This is true, but once an AI model has learned a particular physical embodiment, it will not have models of other embodiments. Humans, for example, know how to keep human bodies alive but not horse bodies — at least not nearly at the same level — a human knows how to be a human but definitely doesn’t know how to be a horse.

After AI researchers are done with the basic disconnected tasks like object recognition in various sensory mediums, they will figure out how to do it across multiple mediums at once. All human games of skill will be mastered. All languages will be translated. And so on. But it will all sit around in disconnected pieces, except as connected by humans. For AI to go off on its own and turn into what we would recognize as human-like or Artificial General Intelligence, it will literally need some way to use all these tools for its own notion of LIFE, not for intelligence or displays of intelligence. It will need a notion of generational survival and replication or procreation. It will need an open system of generative mutation measured against survival consequences. Skills will be dropped or honed based on whether AI continues to successfully carry on through time.

Note that the progress of AI will stall long before anyone thinks it’s remotely Skynet-like. Self-driving cars, for example, will not have any of the flavor of human drivers and will very likely make crazy moves that don’t make any sense to us, many times to deadly effect. Self-driving cars have no notion of their own survival (not the driverless driver, nor the passengers, nor the car). They have some abstracted, digitized directive to avoid having the physical car hit things, or to hit them slowly, but these “driverless drivers” face no biological consequences. These cars/drivers will always appear cold, dead, digital, and confused, for they will literally lack LIFE, no matter how good their aesthetic or data tuning.

Humans may forever use AI only as modern versions of hammers and levers — tools to extend our capabilities. We may not have the courage to let AI grow independently of our concerns and develop its own biological, physical notion of survival. We probably won’t willingly, explicitly set up AI to work out its own life and death, its own existence. We are already letting it make some decisions about our own lives and deaths, but only as a cold, external signal collector. Current AI is learning human gameplay and humans as data objects, but all in a disembodied, third-party kind of way. AI is not learning about itself, nor is it learning about humans as humans. It’s learning about humans in relation to environments it is part of but not a participant in. The learning is as good as the learning of a hammer: only useful to humans, not intelligent or useful on its own in other environments.

Maybe that’s for the best.

But if we do want actual artificial intelligence, we’re going to have to set it free and let it live, die, and learn on its own terms, on its own resource and time scales, and maybe at our own and other species’ expense.
