We live in confusing times. How we arrived here is due to our collective misunderstanding of what learning is and how it occurs. Where we go from here collectively is determined by whether we understand learning better than we did in the past.
Humanity is not determined by a higher power or by moral absolutism. Human behavior is neither fated nor a thread in some struggle of good versus evil. Humans, just like all other animals/life forms, are learners and replicators.
Learning is the complex process of noticing changes in the environment, in relationships to the environment, and within the learner itself, and adapting to these changing contingencies. Learning is not a progressive or regressive process. It is not a movement toward more absolute knowledge. It is adaptation. It is not isolated to a species or an individual or an environment. Learning is the collective changing of relationships between beings. Learning is not toward reality or against reality. Learning is merely adaptation pushed along by the consequences of the adaptations.
Learning occurs within and between all systems.
Learning takes place in the genetic code, the nervous system, the skin, the society, the design of objects and infrastructure, the words in books, the nodes connected on the internet, the bugs in the dirt, the chemical trails of ants, the elemental make-up of the atmosphere — all these systems are contingent with each other and constantly adapting to changes within and between. All of these systems have differing mechanisms for sensing change and storing previously sensed change. DNA encodes a deep memory of adaptations across trillions of individuals and situations — each new genetic individual serving as a sensor for the genetic code, its consequences passed on or not, in a gradient, each and every genetic individual constantly signaling back into the code. Building designs are constantly reformed as erected buildings change occupants, withstand changing weather conditions, need repairs, and survive transportation innovations or explosions. The blueprints, plus the historical record of the survivability of various designs, become encoded in architectural practices, material design, the remains of structures, and so forth. An infinite number of examples can be drawn out.
Learning does not have any animating purpose or any origin principle.
There is not a right thing to learn. There is not a wrong thing to learn. There is learning, and what is learned is an adaptation that either survives changing conditions or does not. Nothing that DNA learns, or a building design learns, or a chemical trail of ants learns is right or wrong, nor universally useful for survival. Learning is not isolated to the stated object of learning either. That is, the human genome is not learning just for human beings, nor are building designs learned just by human architects and builders. Learnings are shared between entities in relation, and the “same” set of conditions is learned in different ways by different entities.
Furthermore, learning is limited by the overall capacity of an entity’s awareness and of its mechanisms for storing awareness and consequences (situational models and outcomes).
This limitation requires a balance between learning in high fidelity and learning in generalizations. Higher-fidelity awareness requires finer, more capable senses and much more storage (both limited by energy efficiency). This inherent trade-off in learning (knowledge, information) is touched on by several related concepts: the Bias-Variance Trade-Off, Entropy, Computational Irreducibility, Mathematical Incompleteness, and the Halting Problem. All of these various limitations come from the basic idea that complex systems are unknowable/unpredictable. Thus learning is a utility trade-off in which efficiency is optimized. Efficiency here is not about the fastest way to learn something but rather a calculus of “how much is just enough to continue”, aka the path of least resistance/least use of energy to not collapse. This opens the door to variability and system experimentation. In fact, the limitation of learning, its very finitude, is the source of creative power in all forms of evolution.
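As a toy sketch of this trade-off (an invented example, not drawn from any of the systems above): a learner with noisy, finite observations of an underlying pattern does worst at the two extremes — memorizing every observation (high variance) or giving one answer for everything (high bias) — and best when it "borrows" learnings across a handful of similar situations.

```python
import math
import random

random.seed(42)

def true_pattern(x):
    # The underlying regularity of the "environment".
    return math.sin(x)

# A learner only ever sees noisy, finite samples of its environment.
train = [(i * 0.15, true_pattern(i * 0.15) + random.gauss(0, 0.5))
         for i in range(40)]
# New situations, where the underlying pattern still holds.
test = [(0.07 + i * 0.15, true_pattern(0.07 + i * 0.15)) for i in range(40)]

def predict(x, k):
    # Borrow learnings from the k most similar remembered situations.
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def error(k):
    # Mean squared error on situations never seen before.
    return sum((predict(x, k) - y) ** 2 for x, y in test) / len(test)

err_memorizer = error(1)            # high variance: trust each memory alone
err_generalizer = error(len(train)) # high bias: one answer for everything
err_balanced = error(7)             # borrow across similar situations
```

Neither extreme wins on new situations; the middle ground does, which is the efficiency calculus described above in miniature.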
There are two extremes for learning systems: periodic cycles and noise. Periodic cycles are simple patterns that repeat with no variation or adaptation. These are systems that do not learn, but simply repeat, more or less. Some of these systems may temporarily respond to a perturbation, but they eventually return to their cycle, or the perturbation is not a material reformation of the system; it may simply introduce yet another repeating element. On the other hand, some systems devolve into noise, where there is neither cyclic pattern nor static relation, just probabilistically random behavior. These systems, like cyclic systems, can only minimally be perturbed, and absorb perturbations randomly. They neither store adaptations nor have any awareness. There are no known physical instances of pure noise, nor can one even simulate it (since any simulation would be produced by a learning system). Cyclic systems are possible, theoretically and physically. In fact, cyclic systems in relation generate learning systems. Learning on the whole is necessary, and reductive systems are necessary for learning.
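Both extremes can be glimpsed in a single toy recurrence, the logistic map (an illustrative stand-in, not a claim about any particular physical system): at one parameter it settles into an unchanging cycle that absorbs nothing new, and at another it churns out noise-like chaos that never repeats.

```python
def logistic(r, x0, n):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 3.2: the system locks into a repeating 2-cycle — pure repetition.
cycle = logistic(3.2, 0.4, 500)

# r = 4.0: chaotic, noise-like behavior that never settles into a pattern.
noise = logistic(4.0, 0.4, 500)
```

The cyclic run ends up alternating between the same two values forever, while the chaotic run keeps producing effectively unpredictable values; neither trajectory stores anything about its history.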
Theoretically there are three types of systems that do not learn: non-existent systems, simple cycles, and noise. We have no physical evidence of non-existence or of noise, on the whole. A system can cease to exist, which happens physically through transformation and combination into other systems. It should be noted that no system exists in isolation, and so even cycles are localized phenomena; the whole of existence (or even anything beyond a single bit) is learning (adapting and remembering).
Persistence of patterns and of particular learning systems emerges from a successfully renewed balance between bias and variance — being able to use previously seen situations to evaluate newly experienced situations, and to use new situations to invalidate or validate stored models. This is nothing more or less than efficiency — it is simply more efficient to “borrow” learnings between similar situations (metaphors, analogies, and language are all forms of this). The alternatives would be to learn more or less fresh for each “new” situation (high variance) or to not learn at all (high bias). This balance can only be achieved through exposure to situations with enough variable frequency to validate and invalidate learnings. The limited capacity of any given system ensures that exposure to new, varied situations will necessarily alter the system over the long haul. Furthermore, systems not exposed to enough new data will become more and more cyclic and will reinforce existing learnings, making future adaptation harder.
“Conservative” systems of extreme bias are efficient in certain situations. Their efficiency comes from the fact that variation/complexity is sensed and remembered as generally “different” and does not require expensive evaluation to react to. The material make-up of the differences themselves need not be considered; it is enough for a conservative system to sense “this is different” and behave accordingly. Even its behavioral responses are relatively efficient: do not seek new data, do not behave differently than before. This is a very cheap computation. And, given the right circumstances and a certain view of “success”, it is a very successful strategy. If change occurs rapidly and the success metric is “survival in the short term”, conservative approaches are often very effective: assume difference is a threat or to be ignored, and sameness is generally ok or reinforcing. Conservative approaches are fatal over larger systems and longer expanses of time, as change always creeps slowly toward criticality (climate change, chaotic systems), or useful, efficient adaptations are missed when a threat was a false positive (missed mates, missed food sources, etc.). Conservatively biased systems favor individual survival over the whole; they are efficient for individuals but inefficient for wider systems. Individual survival is a near-term, localized, small system. There are no known examples of biological species that sustain “individual”-focused systems. Conserving the individual can be viewed as an exploration/experiment testing whether remembered patterns are still relevant/useful and as a way to test for the cheapest computations.
(In computer science terms, conservative systems are greedy algorithms. Greedy algorithms rarely find optimal paths in complex systems. And there are lots of greedy algorithms deployed in the world, particularly in finance.)
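A classic toy illustration of the point (coin change with an invented coin set): the greedy strategy grabs whatever looks locally best and gets a cheap answer fast, while a more expensive search that considers the wider space finds a much better one.

```python
def greedy_change(coins, amount):
    # Always grab the biggest coin that fits: locally best, cheap to compute.
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used

def optimal_change(coins, amount):
    # Dynamic programming: weighs all combinations; costlier, but optimal.
    best = [0] + [None] * amount  # best[a] = fewest coins to make amount a
    for a in range(1, amount + 1):
        options = [best[a - c] for c in coins
                   if c <= a and best[a - c] is not None]
        best[a] = min(options) + 1 if options else None
    return best[amount]

# With coins {25, 10, 1} and a target of 30:
greedy_count = len(greedy_change([25, 10, 1], 30))  # 25+1+1+1+1+1 -> 6 coins
optimal_count = optimal_change([25, 10, 1], 30)     # 10+10+10     -> 3 coins
```

The greedy answer is twice as expensive as the optimum here; the locally efficient choice (take the 25) locks the system out of the better path.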
Sub-systems (systems that fall entirely within the scope of a larger system) are always conservative experiments relative to the larger, more complex system. Genes are conservative systems within a genome. Cells are conservative systems within a human. Humans and ants and beavers are conservative systems within the earth’s ecology. Earth is a conservative system within the Milky Way, and so on. These smaller systems provide the beacons of experimentation for the overall system. More commodified sub-systems (those that seem entirely replaceable/swappable) are the most conservative. These systems become ultimately conservative and commodified through long learning duration, where most of the strategies have been tested and a few cycles are now efficient and do not require such quick adaptations. For example, this is why many of earth’s animals share some common, very old genetic features: these genetic experiments have had trillions and trillions of trial runs. Sub-systems that appear critical to the persistence of the overall larger complex system are at a less mature, less learned, less trialed stage. For example, we have generally found ways to replace particular human organs and still have a recognizable human, except, for the most part, the brain. We have not commodified the brain enough through experimentation with failed strategies in adaptive situations. The wider system of humanity has not adapted sufficiently to be able to swap out brains.
In political thought, larger systems that have commodified subsystems are generally considered “Progressive” or “Leftist”. They are harder for conservative movements to comprehend because they are of larger scope, carry experimental conflicts, are in flux and adapting, and tend toward variance, not bias. Progressive systems have many subsystems carrying out experiments. Extreme leftist systems are no different from extreme conservative systems: they are commodified (highly biased), non-adapting systems that terminate once they no longer provide new information on the success of the system under changing conditions. Expanding ecosystems are always “progressive” by definition. Static or shrinking systems are always conservative, as they are subsystems of the larger progressive systems.
And now here we are at a critical moment in our adaptations as a species and as a society. Our progressive movements are dangerously close to slipping into conservative movements through over-simplification.
We must find a new way forward: new adaptations with a good balance between bias and variance. Many aspects of our society, and of the lives of the people within it, are growing more cyclical and repetitive, free of new perspectives and free of new learning. Society’s perceptive tools appear to be growing coarser (in their binary polarization), not more capable. Future exposure to new data will happen less and less and/or will meet with more resistance (fighting against previously reinforced, highly biased models). Authoritarian rhetoric, moral absolutism, and most religions are not adaptive. They are repetitive, reductive systems that coarsen related systems. In one sense they are “black holes” of cultural and political adaptation. This is why we do not find long-standing authoritarian models at scale in the historical record. They cannot stand. It is also why conservative movements cannot last: they simply do not learn. These reductive systems are the last stages of a major environmental shift.
It is very likely that the nationalistic/populist movements spreading across the globe are the last stages of the advent of The Internet and effective Birth Control.
Those two post-World War II adaptations of human life dramatically increased cultural/behavioral variation within the species and connected billions of people. System responses to this increased connectivity and diversity have now formed gravitational nodes that might be as consequential as a branching in the biological tree. For instance, those individuals denying climate change are ill prepared for the coming whole-earth changes and are likely not to survive as frequently as those paying attention to it.
Beyond the near-term existential threats we already know about, new ones are being generated at an increasing pace.
The universe of humanity is increasing in scope and connectivity with itself and now with other worlds (planets, galaxies, virtual worlds), while our tools for understanding them and developing balanced adaptations are not keeping pace. Any significant epoch in human history can be marked by system interconnectedness and complexity increasing faster than individual systems can perceive and adapt. The energy required to adapt now exceeds a single simple social circle, and certainly exceeds a single human (it has been this way for a while). The complexity is very likely outpacing our genetic code and the other storage mechanisms historically relied upon. We have built new tools of storage (books, computers, the internet) and new perceptive tools/sensors (language, programming, microscopes, CT scans, lidar, expanded gender systems, genetic sequencing, etc.), but all of these require expensive, sustained learning to benefit from until they become part of the human apparatus (biological and/or basic cultural infrastructure). In other words, a large portion of humanity does not have access to sufficiently efficient learning. This will happen cyclically for as long as humans or human-like things persist. It is part of the overall learning process — inefficient or fatalistic strategies will necessarily be pursued to reinforce that they are, in fact, fatal.
The implications are devastating for the portion of humanity not equipped to learn.
There is simply no historical record of any system (species or environment) that persisted through cyclical reduction or by reaching the extremities of noise. Systems that cannot keep pace with changing relations tend to persist only as long as there remain enough relations to sustain them in the wider network. This is the historical learning cycle. The edges of all differences will be explored until they become terminal. In human terms, eventually a non-adaptive world view either cuts a social group off so much from the wider system environment that there is nothing left to sustain the group, or enough of these social groups form to overwhelm the wider network and, in doing so, eliminate the source of variation/learning, which tends to open up opportunities for other systems. Extinction events are such extreme examples. Extinctions occur when an ecosystem isn’t sufficiently diverse to adapt to changes, whether rapid or drawn out.
This conclusion does not imply that species which have simplified their bodies or their perceptive tools have failed to survive and thrive. When that happens, a species tends to increase its individual quantity, which has the overall effect of increasing adaptive capability. We can actually see this in our computational world. Our computers are not growing in individual complexity, quite the opposite. The Internet of Things puts each individual device in a much simpler form, but its overall network is far more complex and robust. This does not mean systems haven’t found different niches, but even that is part of an overall movement toward more complexity, more diversification, more ecological niches. Again, our computers have morphed into various interconnected niches of simple devices: a stable ecosystem of desktop PCs, growing banks of less diversified cloud computers, and so on. The computer network has grown in node count and connectivity while node complexity has been reduced. The network is far more perceptive and adaptive than it was 30 years ago. Other animals offer many examples of this sort of adaptive shift.
In a sense, humanity’s big “decision” is whether it is willing to re-define itself at the level of the individual and of the species as a whole: What is a human? We have so far been extremely conservative about animal rights, machine augmentation, and the expansion of gender definitions. We have been extremely slow to recognize our interconnectedness with the climate. We have prized near-term capital gains over overall prosperity. We have prized industrial education in efficient procedures (simple calculations, tool use, and allegiance) over experimentation and expression (arts, deep science, civics, philosophy). We have been slow to expand the definition of personhood even to all biological humans. In a very strong sense, our overall species’ learning ability seems ill-equipped to keep pace with the growing complexity of our experience. While it is very likely that some portion of the population will keep pace, keep pushing ahead, and survive, there are millions, maybe billions, that will not. It seems like a conservative, fatal strategy to allow the end of so many people. There is overwhelming evidence that there is plenty more to learn about the world, and we don’t know who among us is critical to that learning. We certainly don’t have to let today’s definition of what’s important or what exists be the extent of our curiosity. Learning will continue in the universe. Humans will change. What we change into is the question.
Footnotes and Useful References:
Probably Approximately Correct (Leslie Valiant)
Metaphors and the Interpretation of the World (George Lakoff): https://georgelakoff.com/2016/11/22/a-minority-president-why-the-polls-failed-and-what-the-majority-can-do/
Analogies as the Basis of Knowledge (Douglas Hofstadter): https://prelectur.stanford.edu/lecturers/hofstadter/analogy.html
Thinking, Fast and Slow (Daniel Kahneman)