Mapping Existence

Towards a Theory of Learning and Knowing

Russell Foltz-Smith
Apr 13, 2017

Introduction

Where to begin an essay on how things, all things, exist? What is the existence of existence, and why is it so? An absurdly impossible question. Often it is best to talk about less than all things and keep it simple. And so this essay is about learning by any system capable of learning — i.e. animals (humans), societies, and computer networks. This essay will review a functional definition of learning, the conditions of learning, and the implications for a wider awareness of existence.

The author requests patience from the reader, as there will be a large build-up of concepts before the crux of the argument gets underway.

What is Learning?

Learning is not just a human activity, nor is it restricted to living or animate or even complex entities. Learning is not a necessarily positive, progressive, or intelligent activity. Learning is adaptation to changing contingencies within and outside of the system doing the learning. Adaptation is observing, becoming aware, being stimulated by changes and remembering the conditions of those changes. Observation and memory are limited by the computational resources of the learning system and its environment. The more observed and remembered changes a system processes, in such a way that those changes can be compared against when the system is confronted with previously unobserved ones, the more the system can be said to learn. Observation (or awareness) is an alteration to a part of the system correlating to an alteration to the environment or another part of the system.
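As a minimal sketch of this functional definition (my illustration in Python, not the author's formalism; the memory cap and nearest-neighbor comparison are hypothetical choices), consider a system that remembers condition/outcome pairs and compares new conditions against that memory:

```python
class LearningSystem:
    """A minimal learner: stores observed (condition, outcome) pairs and
    responds to a new condition by recalling the most similar remembered
    one. Memory is capped to model limited computational resources."""

    def __init__(self, memory_limit=100):
        self.memory = []              # remembered (condition, outcome) pairs
        self.memory_limit = memory_limit

    def observe(self, condition, outcome):
        # Observation: an alteration to the system (its memory)
        # correlating to an alteration in the environment.
        self.memory.append((condition, outcome))
        if len(self.memory) > self.memory_limit:
            self.memory.pop(0)        # forget the oldest observation

    def respond(self, condition):
        # Compare a previously unobserved condition against memory.
        if not self.memory:
            return None
        nearest = min(self.memory, key=lambda m: abs(m[0] - condition))
        return nearest[1]


learner = LearningSystem()
for c in range(10):
    learner.observe(c, c * 2)         # environment: outcome = 2 * condition
print(learner.respond(4.3))           # -> 8, recalled from nearest condition 4
```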

Cleaning Up Some Boundaries

There is no clean line between external/internal, system/environment, system/system, or subsystem/system. These are linguistic categorizations required by natural language; they are efficient for communication, but they betray the nuance of reality. There are probabilistic system boundaries where response/adaptation more or less correlates on one “side” or “facet” or “dimension” versus another. These probabilistic groupings, correlating adaptations and responses to changes, provide some meaningful way of grouping phenomena into systems, environments, internal and external.

Figure 1: Simplified Diagram of Probabilistic Categorization

The distinctions between boundaries and systems are a statistical, probabilistic assessment. There are no definitive boundaries between complex systems, internal/external, nor entity and environment. Boundaries are of the form “this collection of relations tends to behave in similar ways under these conditions.”

In the process of investigating learning and systems, we, the investigators (also learning systems), define boundaries between systems, the determinants of those boundaries, and observation events. Let’s consider an overly simplistic build-up of boundaries and theories of boundaries. Here, consider an understanding of biology:

  • S1 is a system.
  • S2 is a system.
  • f is a mapping, process, function, method, or algorithm between systems.
  • S1 f S2: there is a mapping f of S1 onto S2.
  • Or, in more common language variations:
  • S1 affects S2.
  • S1 is observable by S2.
  • S1 observes S2 via ion channels.
  • S1, an organic cell, observes S2, also a cell, via emitted ions passing through ion channels.
  • S1 and S2 share similar ions and similar effects/responses/stimuli of those ions. S1 and S2 are correlated and in a probabilistic relation; they are more or less similar in their response to ions. There exists a mapping between S1 and S2 that we observe, record, and label probabilistically as “ion exchange between cells,” and we group S1 and S2 into cells of Type F (based on the mapping and/or other observed similarities).
  • Add in S3 and S4, also cells. S1, S2, S3, and S4 form a set of cells under which the same observations hold, say of temperature, light exposure, and ion emission. However, under further observation in various conditions, say putting space between the cells, we notice a sensitivity to ion decay for S3 and S4. We now have a new probabilistic relation (mapping) within the set of cells. As we build up more observations of cells and more relations between them, we group them all into a Theory 1.
  • Theory 1 is a compressed form of the observations of the set of cells and their mappings. It requires less bandwidth/energy/work to transfer a theory between two scientists, as long as they have learned similar observations and theories under similar conditions (similar tools, university settings, languages/codices), etc. These scientists can now expand or reduce boundaries between Theory 1 and Theory 2 via experimentation, the scientific method, logical inference, and mathematics. (A toy sketch of this grouping process follows this list.)
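A toy version of that probabilistic grouping, as promised (a sketch in Python; the response vectors, noise level, and correlation threshold are all hypothetical choices of mine, not the essay's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "system" is just its observed responses to the same 50 stimuli.
# S1 and S2 share one underlying response pattern; S3 and S4 another.
base_a, base_b = rng.normal(size=50), rng.normal(size=50)
systems = {
    "S1": base_a + 0.1 * rng.normal(size=50),
    "S2": base_a + 0.1 * rng.normal(size=50),
    "S3": base_b + 0.1 * rng.normal(size=50),
    "S4": base_b + 0.1 * rng.normal(size=50),
}

# A "mapping" here is the observed correlation between two systems'
# responses; a grouping (a proto-Theory 1) is a threshold on it.
def correlated(x, y, threshold=0.9):
    return np.corrcoef(x, y)[0, 1] > threshold

names = list(systems)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if correlated(systems[a], systems[b]):
            print(f"{a} and {b} grouped: same probabilistic type")
# Expected: S1/S2 form one group, S3/S4 another.
```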

The above simplified account of a biological theory build-up is full of holes, undefined concepts, nebulous language and assumptions. As are all accounts of systems by limited systems, like humans. Those gaps in any learning are filled in, and the transference of learning proceeds through the same pattern: observe, group by similar conditions/effects, generalize. Human culture, language, mathematics, etc. are all mappings between systems and between mappings. One of the biggest assumptions in the example above is that the very codex in which the example, the referred-to theory, and the implied logic are coded (observed -> remembered/stored -> presented) is shared between reader and author, scientist and scientist. If it is not, then learning must be done even to understand the above words and their effects. This is an endless recursion.

Learning by systems about other systems is always a gathering of observations/effects, probabilistic assessment and compressed storage. This statement must be unpacked and demonstrated, but it is the critical statement, the crux of the argument of this essay. Learning can only happen, and only this way, because no system is infinite in scope in spacetime (none, that is, other than the entirety). Any learning system is a growing system (via storage of previously learned observations), and thus even the entire set of all systems, the entirety of spacetime, must grow. This also implies that the mappings between systems grow in number and in their own internal complexity (growth all the way up and all the way down). Therefore, for learning to continue, a learning system requires computationally reducible, efficient approaches to observation, memory, retrieval and comparison.

A note on computational efficiency, reducibility and probability is required. Computation in this essay refers to whatever process maps a system to another system — takes a set of data and relates it to another set. Efficiency is considered on many levels, very much as it is thought of in thermal terms — adding computation to a system should produce new net change/work/output. An inefficient computation is a process in which the signal coming out of the computation is less than the signal that went in. While the universe, and the average computer, is full of inefficient computation, coherent, “useful” systems have more efficient computation than inefficient, or only the coherent parts are used.

Figure 2: Bias and variance contributing to total error
From Understanding the Bias-Variance Tradeoff, by Scott Fortmann-Roe.

In statistics, and now machine learning, all models are bound by a trade-off between bias and variance. Computational efficiency can be considered a version of this basic phenomenon. A model (a relationship between one set of data and another) is efficient only when its bias and variance are sufficiently low (there is a reasonably high probability the model is an actual representative relationship) and the execution/use/running/operation/observation of consequences of the model can be done in fewer steps than observing all aspects of the related systems.
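A minimal empirical sketch of that trade-off (standard NumPy; the target function, noise level, and polynomial degrees are hypothetical choices of mine, not the essay's):

```python
import numpy as np

rng = np.random.default_rng(1)

def true_fn(x):
    return np.sin(2 * np.pi * x)

# Estimate bias^2 and variance of polynomial models of varying degree
# by refitting each on many resampled noisy training sets.
x_test = np.linspace(0, 1, 100)
for degree in (1, 4, 10):
    predictions = []
    for _ in range(200):
        x = rng.uniform(0, 1, 30)
        y = true_fn(x) + rng.normal(0, 0.3, 30)   # noisy observations
        coeffs = np.polyfit(x, y, degree)
        predictions.append(np.polyval(coeffs, x_test))
    predictions = np.array(predictions)
    bias2 = np.mean((predictions.mean(axis=0) - true_fn(x_test)) ** 2)
    variance = np.mean(predictions.var(axis=0))
    print(f"degree {degree:2d}: bias^2={bias2:.3f}  variance={variance:.3f}")
# Expected shape: low degree -> high bias, low variance;
# high degree -> low bias, high variance.
```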

Returning to the S1 and S2 nomenclature above, we can consider the challenge of increasing learning demands on systems. Nothing is learned between S1 and S2 if S1’s learning of S2 requires a completely new observation set and storage/memory, re-indexing and categorization. In more complex situations, intermediary systems S5 … Sn = {Si} are learned to map observations efficiently between systems.

Figure 3: Simplified Diagram of Systems Mapping Learning Between Each Other

The categories and methods of different knowledge domains (themselves just systems) that describe interactions, observations and relationships between systems are also systems — all subject to growing complexity, selection by consequences and evolutionary drift (seen and unseen).

The Mapping is The Medium is the Message

A mapping system graduates to a category of knowledge, what the author will refer to as a medium, if it successfully compresses observations between systems. Success is defined as reliably maintaining the probabilistic relations in the mapping between observing or related systems. Medium formation is only possible if systems share probabilistic features and properties, aka consistent and consistently observed behavior under similar relations to other systems. (A recursive definition, to be sure.) S1 -> Si -> S2 … Si is a medium if and only if S1 and S2 are affected similarly (by other S’s). For example, human A and human B can use human language because they have similar biological, cultural, environmental systems. Human A and hamster B cannot use human language to communicate as effectively, because there is enough difference between the biology, sensory apparatus, culture and environments to render human language too inefficient to serve as a shared mapping medium.
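One way to make “compresses observations while maintaining the probabilistic relations” concrete (a sketch of mine using a random linear projection as the stand-in medium; the dimensions and data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Observations of 200 events in 64 dimensions, at varied scales so that
# the relations (distances) between events actually differ.
scales = rng.uniform(0.2, 3.0, size=(200, 1))
events = scales * rng.normal(size=(200, 64))

# A candidate "medium": a shared compression down to 16 dimensions
# (a random linear projection, scaled so distances are roughly preserved).
projection = rng.normal(size=(64, 16)) / np.sqrt(16)
compressed = events @ projection

def pairwise_dists(x):
    diffs = x[:, None, :] - x[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

# The medium "succeeds" if relations among events survive the compression.
mask = ~np.eye(200, dtype=bool)
fidelity = np.corrcoef(pairwise_dists(events)[mask],
                       pairwise_dists(compressed)[mask])[0, 1]
print(f"relation fidelity through the medium: {fidelity:.3f}")  # typically > 0.9
```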

The idea of mediums can be stated differently. All mediums are communication paths. They communicate signals between systems. And considering the consequences above, the phrase “the medium is the message” seems to be a useful heuristic, because if a medium is a medium between systems then we, in fact, know something about the systems without even fully understanding the entire signal set within the medium. This, of course, overstates the case a bit, because while observing two systems in communication using a given medium informs an observer to an extent, it does not provide all that much insight. The observer must also come to map into the observed systems’ medium before hoping to understand the signaling between the systems. That process of mapping into an unfamiliar medium is recursively the same learning process: observing many systems in relation and measuring the probabilistic relations. Relations that maintain their structure under varied system interactions and changing conditions become the mapping into the medium. And on and on and on.

There are many implications of such a recursive, evolving system. One of the most important is the relativity of these mappings between systems. S1’s understanding of Si is different than S2’s, always. Unless S1 == S2, that is, they are the exact same system. S1 and S2 converge to each other as the fidelity of their mapping grows and converges. In one example, Human A and Human B have relative understandings of mathematics — each a slightly different understanding/perception of mathematical symbolics, process, vocabulary. The closer their mathematical understanding becomes, the more Human A becomes Human B in the mathematical process. This can happen both as a culling out of relationships that are not as important (noise) and as an increase in the relations under consideration. In mathematics there is a robust process of removing specificity from relations and a movement towards abstraction that brings mathematical research into alignment. At the same time there is movement towards a larger set of abstractions. So, all considered, mathematics as a category of knowledge, aka a medium, tends to map more human mathematicians in similar ways. In another example, animal A and animal B can be shaped into similar behavior over time by controlling the conditions of their environments, the schedules of their reinforcements and the biological factors. Genetic cloning and controlled development environments are a fine example of a higher-fidelity animal replication process.

A note of caution on convergence of systems: if complex systems are actually different in almost any way, there will always remain the possibility of compounding difference. That is, if two mathematicians are not actually the same exact mathematician, there is always the possibility they will arrive at very different understandings of mathematics over a long period of doing mathematics. The same is true of genetic cloning, simple computer systems, and so forth. In short, absolute statements about knowledge, mappings and equivalency cannot be reliably made, and learning must remain a probabilistic enterprise.

But, wait.

But what of mathematical truth statements? Or “universal” or non-relative, non-system-dependent relationships? As so often happens with existence and mapping out existence, these ideas have recursed back into themselves like a Möbius strip. The inescapable relativity and probabilistic essence of existence births universal relationships, and vice versa.

The Algebraic Geometry Of Existence

Map of All Maps

Properties of systems derive from their structural realities but are understood through their probabilistically observed behaviors. The structural realities of existence are algebraic geometries — mappings between systems — mappings between projections of systems — statements of equivalency — morphisms of various systems. How does a system S1 get to a system S2 (via an intermediate system)? Existence is the set of all intermediate systems (maps) that map systems to other systems and to themselves. Existence is a map — the map of all maps.

Physical systems and physical properties commonly referred to in science or daily life are useful reductions of the wider, more abstract algebraic geometries. Physical measurement of systems is always a computationally reductive process, per the first part of this essay. The slicing and dicing of the whole of existence into observable parts is only possible by probabilistic computational mappings between systems aka the mediums. The mediums, in the most abstract sense, are algebras between systems — axiomatic systems that can be combined to form the space of all communication mappings between systems.

Algebras

Of course, this argument is now very tricky, because the language of math is not fully shared between the author and readers, or between reader and reader, and it comes with a huge number of contextual gaps. The author again begs the reader to be patient, as the act of reading even non-shared vocabulary is a mapping process towards developing a shared mapping. The key learning of this now-tricky mathematical lingo is that underlying all the messy probabilistic stuff of existence there is a set of algebraic (computational) relations that are in no way system-specific or medium-dependent. But do not confuse this statement to mean that mathematics, however it is understood, is the basis of all things or even the basis of all learning. Algebras and mappings are of a much more general nature than that. The root of the word algebra is ancient and simplistically connotes “reunion of broken parts.” Algebra in this essay’s sense is similar: a mapping of the overall system and its subsystems onto each other and itself. How can everything that exists relate to everything else that exists? The enumeration of all the hows forms the algebras, or the why. When all hows (or maps) have been explored, the why is thus illuminated — the why is literally that enumerated set.

Possibilities and Actualities

An improved interpretation of all that has been said is that systems can be observed and related, and the possibilities plus the actualities are algebraic. Possible mappings form the largest set of mappings — all mappings are included, even non-observed and nonsensical mappings. Actual mappings are mappings that are observed and are computationally reachable by subsystems. That is, if existence is, then there exist mappings from systems to other systems, and observable or knowable existence is that which is computationally efficient. If a defined mapping maps a system neither simply (mathematically, logically, computationally reducible) nor observationally (probabilistically, computationally non-reductive), it is not a mapping. Possible mappings are determined by the topologies of systems. For complex systems, mappings can only be considered probabilistically; there are no simple reductions. The build-up of simple maps into larger mappings necessarily leads to complex mappings in which only probabilistic understanding is possible. This is necessarily true because of computational irreducibility — that is, again, S1 and S2 can only use an intermediate mapping Si to relate if Si communicates at a relational scope less than S1 and S2. This has been observed in thousands of experiments as well as through more logical mapping methods in mathematics and computational science (see the results of Turing and Gödel and many others). It is also easy to understand in a common-sense way: a person who has not learned human language and the symbolics of scientific research cannot use quantum mechanical research effectively to engineer anything. Quantum Mechanics is an Si that is greater in scope than the human’s current S1.
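The gap between possible and actual mappings can be made concrete with a tiny enumeration (a sketch of mine; the sets and the observed pairs are hypothetical):

```python
from itertools import product

# Possible mappings: every function from A to B — |B| ** |A| of them.
# Even tiny systems explode combinatorially, which is why "actual"
# (observed, computationally reachable) mappings are a sparse subset.
A = ["a1", "a2", "a3"]
B = ["b1", "b2", "b3", "b4"]

possible = list(product(B, repeat=len(A)))    # 4**3 = 64 functions
print(f"possible mappings A -> B: {len(possible)}")

# An "actual" mapping here: one consistent with a bounded observer's
# handful of observations (hypothetical observed pairs).
observations = {"a1": "b2", "a2": "b2"}
actual = [m for m in possible
          if all(m[A.index(k)] == v for k, v in observations.items())]
print(f"mappings consistent with observation: {len(actual)}")  # 4 of 64
```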

The Convergence of Learning as Existence

Learning can be considered a build-up of probabilistic mappings between systems. These mappings, in aggregate, are computationally efficient in observational and comparison operations. That is, these mappings allow a learning system to evaluate changes to related systems and adjust accordingly. It is not implied that learning arrives at a true or accurate mapping of all relations or of the situation — instead, it is efficient — that is, the system can continue to observe, adjust, behave.

Consider Human Learning

It is time for a remark on “where” learning takes place in a human and between humans. Observation, memory, retrieval, comparison and behavior occurs genetically (genetic drift through generations, evolution, epigenetics), cellularly (metabolism, cancer, etc), organally (system flow, sleep, etc), environmentally (climate, weather, housing/shelter, nature, etc), and culturally (traditions, religion, government, laws, systems of formal education, humor, memes, etc) and technically (tools, computers, mechanical systems) and so on.

Figure 4: Simplified Diagram of Human Learning

Observations and memories (patterns of behavior and consequences) are related across all aspects of a human — crossing and integrating all boundary conditions. The boundaries are as defined above in this essay: probabilistically observed and remembered mappings between systems.

Consequences — Where the Efficiency Buck Stops

Consequences are the observed effects of systems on other systems and within an observing system. Typically, consequences are the observations that perturb a system into a new state. That is, a consequence is a signal retained at more than one level and in more than one related system. A new state is the collective relation of a system to all related/contingent systems. In the case of human consequence, a signal perturbs a human when it affects behavior at a genetic, cellular, organ, social, cultural or environmental level in such a way that future engagement with similar signals will produce a different behavior than previous engagements.

The Ultimate Human Consequence

The most obvious or deeply affecting consequences in the human experience are those that hinder physical reproduction and physical survival. If consequences are severe enough to a human, the system ceases to be a human, aka death, aka the parts/subsystems/relations that probabilistically comprise a human no longer relate robustly enough to be a human.

And here is a critical point about learning, consequences and existence. When it becomes computationally infeasible for systems to maintain relationships between systems, they cease to exist. When a mapping becomes entropic enough or loses its learnable coherence, it ceases to be.

In a sense, the ultimate edge of learning in a human — the ultimate edge of consequences — is to find out which relations in the world compose a state of non-survivability. Death (a system’s non-existence) is the final end of computationally efficient learning. On the grander scale of humanity, learning is identifying and codifying into the social, historical and physical infrastructure the boundary conditions of non-existence. What are the mappings between humans and the world and each other that produce more humans than they end? This is the algebra of humanity.

The Ultimate Consequential Algebra

What is true of human learning and human existence is true of all systems — though it may not seem so on the surface of this argument. All systems are locked in an inescapable tangle of learning — an evolution of the web of selection by consequences, of observing and learning consequential mappings. This flow of efficiency-maintained learning between systems emerges into a web of the exploration of all combinations of mappings.

The Universe in a Nutshell

Previously discussed were the Possibilities of Mappings (the set of all relations within existence) and the Actualities of Mappings. The universe can be argued to be the largest set of all things — the possibilities of systems, subsystems and all the mappings between them. And in order for that totality to come about, the universe must also explore the actualities of mappings — mappings that are computationally efficient between subsystems.

This is not a case for intelligent design nor agency nor immortal deities.

Rather this is the consequence of existence and non-existence — presence and absence. That is, if something exists, this something can only exist in relation to something else that exists or does not exist. Existence requires Explanation. Existence is Explanation. Explanation is a Mapping of How Something Exists.

In some sense this is the old saw: if a tree falls in a forest and nobody’s around to hear it, does it make a sound? This is and always will be an unresolved question, because any answer requires an infinite descent into explaining How A Sound Is Produced and What A Sound Is. And at the same time, practically speaking, we will assume a pragmatic answer to that question to get on with it. In a physics argument we will claim the sound is made regardless of a human observer. In philosophy we will argue about the nature of sound possibly being an observer effect — sound is not simply the contraction of air molecules; it must include an observer to be affected by these air molecules. In computer science we would argue about the possibility of simulating sound effects and sound observers by a machine made of silicon. And so on. And in the process of indefinitely mapping the how of a sound, a tree and an observer, we will find new mappings between systems and then proceed to argue about those.

All of these arguments are bounded by consequences — does arguing out the nature of sound and the world and observers produce more or less humanity? And up from there, does producing more humanity keep a wider ecology in balance — growing and relating but not collapsing? And out and up from there, what are the limits to all of this ecological rebalancing?

Return To Base

Learning has taken on an unfortunate connotation. Learning is considered to be progress towards answers, accuracy, truth, reality and every other reification of ultimate ends. This is an unnecessary complexity to explain the actuality of learning and the actuality of existence — but, as this essay has attempted to show, enumerating the how of even the possibilities is necessary to find the extent of the actuality. To find which mappings of existence are not robust, efficient or maintainable under consequences is to find the edge of the existence of anything.

The universe exists. How it exists at any level of detail or in any given state is the totality of the struggle between possibility and actuality. Where there is an untested, unaccounted-for mapping is where the universe exists. And that happens to be where learning happens. At all levels.

The Big Bang

The Cambrian explosion of existence is the finding of the boundary conditions between coherence (actuality, finitude, computability) and incoherence (possibility, infinitude, non-computability). Everything can be refactored into a struggle to find the efficient frontier of survivability. From this struggle for survival, infinitely complex on the whole, the totality of existence emerges.

From the battle of the Higgs field before it melts into matter and dark matter…

From the battle of the natural numbers giving way to the space between — the rationals and irrationals — the transcendentals, the complex, the real and the surreal or mapping themselves into the evens, odds, and primes.

From the battle of polynomial time algorithms to non-polynomial time algorithms.

To the conflict of the spins of quarks and the massless powers of photons, to the 6d manifolds of dark matter’s origami-like folding.

To the radical behaviorism of pigeons and rat maze runners and the fractal misbehavior of financial markets and the wobbly ideologies of conservative and progressive western democracy political theory.

To the arc of human justice bantered about while 80% of Earth’s other species go belly up.

And everything in between.

All of these competing enumerations of how the world relates to itself are stored on silicon-powered magnetized hard drives and/or wood-fibered papers and hardened mud — all hurtling through a solar system held in check by some strange spacetime curvature, all within a much larger galaxy inside a larger tentacle of the galactic network — all of it trapped, in a sense, in a world that in billions of years will have moved so far away from other planets and galaxies that transmission of all these learnings will be impossible. And there, perhaps, is the edge of existence — perhaps the boundary condition of efficient computation — and perhaps that consequence of a solar system’s ability to find a new edge will lead to alternative systems and mappings capable of bridging the galactic divide.

A Bridge Over These Local Boundary Conditions

Epistemology is the bridge over any boundary condition. There is no final theory of knowledge and learning — there are only probabilistic experiments that shoot a new mapping over or through the boundary. Does any given attempt lead to more than it does not?

Human science, theories, history, technology, sociology, arts and religions are all just maps from here to there that, in whatever form they currently take, have survived the consequences of human existence (which includes the environments and other entities we share the world with). And now humans are moving these maps into computer networks — machine learning, simulations, AI and human augmentation.

Most certainly this has expanded the computational efficiency of our epistemological map making. Humans can edit the genetic code, can simulate financial markets, can influence local weather, can span vast expanses of space, can create big bang physics interactions. At each newfound boundary of technology and physical limits there are new mappings to map out and for these mappings to be useful they must be culled and made efficient and survive transfers to new mediums and generations of humans/machines/animals.

“Natural Laws”

Laws aren’t universal laws. Even Isaac Newton understood this. Laws are robust statements — in general these approximations hold through many different mediums and observational scenarios but not all and not under all “levels” of observation.

Laws are reductive statements computationally optimized for robust observation and translation in different mediums and measurement to approximation.

Logic and Mathematical Truth in their various forms inevitably enter the conversation. How true are such ideas? Are they the ultimate truth? Ultimately true? True in most cases, in most observable cases?

There is no truth, and there is no possible route to knowing truth. Whatever is true is impossibly true, that is, only true if all possibilities are explored. Therefore whatever might be absolutely true is only knowable by exposing and experiencing all things: impossibly knowable. Perhaps there is still truth, but it’s not approachable; it’s only traceable, by trying out possible truths through relationships/mediums.

And yet, here it is: math. Yes, many mathematical theories and methods are knowable and usable. They are simple and general enough to be robust — that is, they are sufficiently uncomplicated to show up consistently, in exact or near enough form, in many different mediums. For example, a circle is very robust — its basic idea/form/structure is approximated in many different mediums, in many different relations between systems. The “perfect circle” is likely not all that robust in and of itself, but the average of all systems exhibiting circle-like relations averages out to perfect-circle-like characteristics.

Yes, Platonic ideals exist as probabilistic means across medium representations. The mathematical and logical realm exists in the same probabilistic way.

Art As The Efficient Mapping

Knowing?

What does a human know and how does she know it? What does a machine know and how does she know it? What is Knowledge? What Use Is It To Know?

These are unanswerable questions in any total sense. But there are maps upon maps that help navigate the terrain. In fact, these maps are what a human knows and what a machine knows. Not because the maps themselves, and only of themselves, are knowable and valuable. It is about the maps in action — the maps with activity — the living out of the maps is the knowledge. Knowledge is a map and its experienced consequences.

The most useful maps are those about what isn’t known — that is, to the user of the map, the map is more useful the more knowledge it has that the user of the map doesn’t. The map is a clue to be followed. The more interesting and enticing the map, the more it is followed and the more that becomes known in the following.

Mapping?

Mappings are the intermediate systems between related systems that an observer creates/experiences/transcodes. E.g. a geological map is literally a system that takes the geography to the user in a useful way. Mappings can be simple: the even numbers map to the counting numbers (a mathematical mapping). Or they can be complex: Schrödinger’s equation is a probabilistic mapping. And they can be far more complex: History and Tradition map generations to other generations. Personal Identity maps one contingent moment in a person’s life to another moment.
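The simple case from that paragraph, written out (a trivial sketch; the function names are mine):

```python
# The even numbers map onto the counting numbers: n -> 2n is a
# bijection, so the mapping loses nothing and can be run both ways.
def to_even(n: int) -> int:
    return 2 * n

def to_counting(even: int) -> int:
    return even // 2

naturals = range(1, 11)
evens = [to_even(n) for n in naturals]
print(evens)                               # [2, 4, 6, ..., 20]
print([to_counting(e) for e in evens])     # recovers [1, 2, ..., 10]
```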

Mappings make up the totality of existence. Literally all of existence is map… map making is the existential act. How does something exist? One can only answer by mapping it out.

What maps matter?

Art?

Art isn’t anything in particular. It’s everything no one has an idea what to do with. Art is the stuff between — the mapping of mappings. It’s the unmapped about to be mapped. It has no basis other than experimentation with mapping. Art is always a-mapping. The re-mapping or reinforcement of mappings — the testing of edge cases — the simulation of possible other maps between states of the world, between systems, between beings, between events.

Art is that which we consider “interesting.” An interesting re-map.

Interestingness — the metric that matters

Discussed above was the notion of the bias-variance trade-off required to map complex systems to each other. Within this notion lies an idea of “interestingness.” It is useful to talk about complexity before attempting a definition of interestingness.

Complex systems are typically broken down into a few types according to their behavioral features. Non-linear systems cannot be modeled or related completely as a sum of their parts. A chaotic system is sensitive to its initial state and exhibits periodicity, multiple levels of topology and density — in short, there is regularity at many different levels, but that regularity is recursive and sensitive to change. Complex adaptive systems are systems that adapt/change in response to relations in some systemically reliable ways but also retain their overall identity/relations. Categorization at this level is a bit fuzzy, and it isn’t all that important to get it perfectly right, as all of these systems, and the relationships between systems they represent, have this characteristic of “interestingness.” One can go deeper into the various observations about these interesting, complex systems, but the key point is that these systems cannot be fully observed, expressed, nor related. They are sufficiently interesting as to not allow any learning system to achieve 0 bias and 0 variance in observation.

Interestingness can be measured by the combined observation of bias and variance. There is a gradient curve between high bias/low variance and low bias/high variance, where the minimum value on that curve is the maximum interestingness. That is, the maximally interesting systems are those at the equilibrium point of the bias and variance curves of an observing/learning system, aka a complex adaptive system.
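Numerically, that equilibrium point is just the minimum of a combined error curve (a toy parametric sketch of mine, not the essay's curve; the bias and variance functions are hypothetical):

```python
import numpy as np

# Toy curves over model complexity c: bias falls as complexity grows,
# variance rises; "interestingness" peaks where their sum bottoms out.
complexity = np.linspace(0.1, 10, 200)
bias2 = 1.0 / complexity           # decreasing with complexity
variance = 0.1 * complexity        # increasing with complexity
total_error = bias2 + variance

best = complexity[np.argmin(total_error)]
print(f"equilibrium complexity ~ {best:.2f}")
# Analytically: d/dc (1/c + 0.1c) = 0  =>  c = sqrt(10) ~ 3.16
```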

The implications of this are many-fold.

What Is Knowledge, again?

A system K which moves a system A to a system B, efficiently, is knowledge. Knowledge is that map which consequentially matters — meaning the map from here to there consistently provides relationships even when distorted, and those relationships mapped have consequences — the order, quantity, quality, direction, strength, size, etc. affect behavior and do so more or less consistently (not randomly). Consider that system K may have slight variations depending on the interacting systems A or B — for example, a mathematical algorithm may produce slightly different approximations for given calculations on machine A vs machine B; as long as the calculation results correlate consistently on A and B, those calculations can be considered a precondition for knowledge. For these calculations to be sufficiently knowledge-worthy, the calculations must be efficient — that is, a consistent and computable relationship between systems A and B (and whatever else the calculation might be relating) is achieved.
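That cross-machine consistency condition is easy to sketch (my illustration; the tiny noise term stands in for hypothetical machine-level differences):

```python
import numpy as np

rng = np.random.default_rng(3)

# The "same" algorithm run on machines A and B, whose floating-point
# quirks are modeled here as a tiny perturbation of the results.
inputs = rng.uniform(0, 10, 1000)
machine_a = np.sqrt(inputs)
machine_b = np.sqrt(inputs) + rng.normal(0, 1e-6, 1000)

# The precondition for knowledge above: results need not be identical,
# only consistently correlated across the systems using the map.
r = np.corrcoef(machine_a, machine_b)[0, 1]
print(f"cross-machine correlation: {r:.6f}")   # effectively 1.0
```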

But Does This Distinction of “Knowledge” Matter?

What matters and how does it matter? This is an important question and, of course, is recursive. One must ask that question of the propositions/systems of relations made and then evaluate consequences. That is, does the proposed knowledge (in whatever medium) lead to more creation/more ideas/more existence etc or less? Does a system K relating system A and system B allow a system X to gain a computational advantage using the system K? Does a system X survive longer or in more varied situations with the system K? In short, does system X value this system K and how is this value accumulated and maintained, grown or extinguished?

The Question of Mattering is a Question of Value is a Question of Computational Efficiency

A map (a system) is valuable insomuch as it is a computationally reducible map between an observer/user of the map and the things mapped. A map takes on value as more things (systems/phenomena/events/other maps) are also efficiently accounted for/related in this mapping.

Value is an endless system of systems as well.

Humans codify value between systems in the form of paper currency, social reputation, material goods, various ledgers, emotions, habitual behaviors, language/vocabulary, cultural norms and so on. These value systems are computationally efficient maps between the complex network of systems a human must navigate while alive. The totality of any particular value system may not be activated/used/known/available to any particular human — for example, any single human may use paper money but not understand the full implications of fiat currency, its origins, its debt obligations. It is efficient precisely because it is not required to understand the totality of the system. The mapping of paper money/fiat currency to various other systems is multi-faceted and computationally efficient, relatively speaking, with respect to these various sub-networks. The totality of the paper money system is not computationally reducible, however. This fact is why humans observe stochastic behavior in various economic markets and so on.

The more value a system maps between other systems, the more “interestingness” the value system encodes or represents. In animal behavior we often observe complex schedules of reinforcement producing long-term behavioral repertoires — that is, overlapping variable-ratio schedules, or what amounts to overlapping variable-ratio schedules, prolong reinforced behaviors. In economic markets we find greater pricing variability in products and services that are not yet commodities — where there are large information imbalances, etc. Art markets are notoriously volatile. Volatile markets are not necessarily (and usually not) the most valuable overall! The other way to frame this concept is that the more efficiently a value system maps differing systems at an equilibrium of low variance and low bias, the more value in the system. It’s a bit counterintuitive. Equilibrium of bias and variance allows a system to be adaptively complex and thus robust under perturbation — the value mapping is stronger/more relational. Volatile systems are typically way over- or way underfitting in their mappings — they either have high bias or high variance and do not allow a system to consistently, and thus efficiently, map between other systems. These systems are more noise or more static and less interesting.

In mathematics we often find remarkable correspondences between complex patterns. The Riemann Zeta Function zeros and the distribution of the prime numbers are a good example. Transcendental numbers can be combined in beautiful formulas. There is good correspondence between many patterns in nature and the Fibonacci sequence. And so on. Considering all of the above, it shouldn’t seem so strange that complex/interesting patterns show up in a great many other systems — almost a fractal nature to complexity everywhere. Nor should it be surprising that long-standing value systems also exhibit strong correlations to these complex mathematical patterns.
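One of those correspondences is easy to check directly: the prime counting function pi(x) against the classic approximation x/ln(x) (a standard sketch, not from the essay):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

# pi(x) tracks x / ln(x) ever more closely as x grows (ratio -> 1).
for x in (10**3, 10**4, 10**5):
    pi_x = len(primes_up_to(x))
    approx = x / math.log(x)
    print(f"x={x}: pi(x)={pi_x}, x/ln(x)~{approx:.0f}, ratio={pi_x / approx:.3f}")
```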

Back To The Origin (not that it’s possible)

Computationally efficient maps are the stuff of experience. Experience is the stuff of existence. While inefficient maps are everywhere, they are not knowable/experienceable/fully mappable unto themselves — while they exist, they exist only between a network of chunked maps. In a sense, learning is the perpetual motion of existence. Systems mapping systems — up and down the gradient of interestingness, complexity, efficiency, entropy. The constant struggle between maps that are useful and knowable for a time in certain situations, fading into less efficient or less corresponding maps, giving way to more efficient and better corresponding maps. All of this ebb and flow leading to maps of maps (art! Science! Literature! Dictionaries! Literal maps! Libraries! Googles! WWW!) in which theories of those maps must be made to be efficient… all grown or reduced as value systems keep rebalancing computational resources (evolution and ecology).

This has been and will be never ending. There will always be computationally irreducible/unmappable/unknowable relations between systems (events, people, things). There will also always be an ebb and flow of computationally efficient maps/relations between things as part of the inescapable conservation of computation/energy in the universe.

Even when machines and humans grow more capable (more intelligent/more aware) there will be more systems (maps of maps of maps) for them to be aware of. Machine learning systems have already achieved the inscrutable stage (they behave in ways that are computationally irreducible). The “natural world” has been computationally irreducible, probably since the beginning. It has always existed as an ongoing value struggle between efficient mappings that ebb and flow in their correspondence to balancing bias and variance (adaptability).

The implication is that anything sufficiently complex to be adaptive and “intelligent” will need to create value systems in order to map the growing uncertainty of maps of maps of maps that get encoded (valued) by a system of systems (humanity! Computer networks!). And we call those value systems politics, government, economics, culture, art, science, theory, and ethics.

How will we understand what the machines understand? How have we understood and mapped what the natural world understands (maps)? We categorize, write, draw, paint, compare, tabulate, observe, test. The machines will do the same. And we will be left to create intermediate systems of art, science, literature and ethics with them, encoded in various value systems, as these systems map each other, ultimately, to balance and survival.

We will all learn. And Re-learn.

And what sticks around will be what’s interesting.

Bibliography

Non-Understandability of AI

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

Reification and Agency and Measurement

https://socialmode.com/2015/11/14/psychology-today/

Bias-Variance Trade Off

https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff
http://scott.fortmann-roe.com/docs/BiasVariance.html

Complex System Typing and Features

https://en.wikipedia.org/wiki/Complex_system#Types

Chaitin on Algorithmic Information, just a math of networks.

https://www.cs.auckland.ac.nz/~chaitin/sciamer3.html

Platonic solids are just networks

https://en.m.wikipedia.org/wiki/Platonic_solid#Liquid_crystals_with_symmetries_of_Platonic_solids

Real World Fractal Networks

https://en.m.wikipedia.org/wiki/Fractal_dimension_on_networks#Real-world_fractal_networks

Correlation for Network Connectivity Measures

http://www.ncbi.nlm.nih.gov/pubmed/22343126

Various Measurements in Transport Networks (Networks in general)

https://people.hofstra.edu/geotrans/eng/methods/ch1m3en.html

What Is Data?

https://socialmode.com/2015/06/20/what-is-data/

Brownian Motion, the network of particles

https://en.m.wikipedia.org/wiki/Brownian_motion

Semantic Networks

https://en.wikipedia.org/wiki/Semantic_network

Prime Number Distribution

http://mathworld.wolfram.com/RiemannPrimeCountingFunction.html

MPR

https://en.m.wikipedia.org/wiki/Mathematical_principles_of_reinforcement

Probably Approximately Correct

https://en.m.wikipedia.org/wiki/Probably_approximately_correct_learning

Probability Waves

http://www.physicsoftheuniverse.com/topics_quantum_probability.html

Bayes Theorem

https://en.m.wikipedia.org/wiki/Bayes%27_theorem

Wave

https://en.m.wikipedia.org/wiki/Wave

Locality of physics

http://www.theatlantic.com/science/archive/2016/02/all-physics-is-local/462480/

Complexity in economics

http://www.abigaildevereaux.com/?p=9%3Futm_source%3Dshare_buttons&utm_medium=social_media&utm_campaign=social_share

Particles

https://en.m.wikipedia.org/wiki/Graviton

Gravity is not a network phenomenon?

https://www.technologyreview.com/s/425220/experiments-show-gravity-is-not-an-emergent-phenomenon/

Gravity is a network phenomenon?

https://www.wolframscience.com/nksonline/section-9.15

Useful reframing/rethinking Gravity

http://www2.lbl.gov/Science-Articles/Archive/multi-d-universe.html

Social networks and fields

https://www.researchgate.net/profile/Wendy_Bottero/publication/239520882_Bottero_W._and_Crossley_N._(2011)_Worlds_fields_and_networks_Becker_Bourdieu_and_the_structures_of_social_relations_Cultural_Sociology_5(1)_99-119._DOI_10.11771749975510389726/links/0c96051c07d82ca740000000.pdf

Cause and effect

https://aeon.co/essays/could-we-explain-the-world-without-cause-and-effect

Human Decision Making with Concrete and Abstract Rewards

http://www.sciencedirect.com/science/article/pii/S1090513815001063

The Internet

http://motherboard.vice.com/blog/this-is-most-detailed-picture-internet-ever

The Power of IS

https://socialmode.com/2015/08/23/is/
