Time is a label we use to mark our awareness of configurations of the universe (the world around us). Time does not have its own physical existence, not even as a dimension of space-time.
It may be of value to some readers to check out other reconsiderations of time before reading mine, among them Barbour’s The End of Time (which introduces a lot of lingo I borrow).
Additionally, basic notions of Map Projections and Projective Geometry (and here) are extremely useful for understanding the following “essay.” Of particular importance is the notion of relativity and invariants. Sorting through issues of time and reality is really a classification and justification of what we consider a relative property of whatever we’re looking at and what is invariant across all the things we might observe. I encourage the reader to follow the links I provide as I provide them, to get a better sense of how these things all interplay and for deeper definitions where I have not provided them.
Configurations and Complexity
The universe and everything in it can be reconsidered as a configuration of everything. Each configuration of everything is more or less different from each other configuration. The quantity of difference between one configuration and another is a measure of complexity. (Formally, the current state of the art in complexity measures is Kolmogorov complexity: https://en.m.wikipedia.org/wiki/Kolmogorov_complexity.)
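To make “quantity of difference” slightly more concrete: Kolmogorov complexity itself is uncomputable, but compressed size gives a usable upper bound on it. Below is a minimal sketch, assuming zlib compression as the stand-in for complexity and the normalized compression distance as the difference measure; the example byte strings are invented for illustration:

```python
import zlib

def complexity(data: bytes) -> int:
    # Approximate Kolmogorov complexity by compressed length (an upper bound).
    return len(zlib.compress(data, 9))

def complexity_distance(a: bytes, b: bytes) -> float:
    # Normalized compression distance: roughly, how much new information
    # is needed to get from one configuration to the other.
    ca, cb, cab = complexity(a), complexity(b), complexity(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Two identical repetitive configurations: the map between them is tiny.
identical = complexity_distance(b"abcabcabc" * 50, b"abcabcabc" * 50)

# A repetitive configuration vs. an unrelated one: a much bigger map.
different = complexity_distance(b"abcabcabc" * 50, bytes(range(256)) * 2)
```

Nearly identical configurations score near 0 and unrelated ones near 1, which matches the intuition that the "time" between similar configurations is small.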
Information is the instructions one configuration needs to become another configuration. This information can be considered a map. Maps exist within the whole of everything, and thus are themselves part of any configuration, and further thus are a subset of a configuration. That is, maps from one configuration to another configuration are of less extent than the configurations themselves. This implies only the entirety of everything can be a map of everything to everything else, i.e., the map of all maps. Maps less than the map of all maps necessarily take a subset of a configuration of everything to another configuration (“local” maps).
Regional Complexity Similarity
Local maps are the ontological stuff of existence. Specifically: the universe is just raw existence — bits, if you will. The mapping of bits to other bits is definable (categorizable, measurable, observable) existence. A galaxy is a map of a small region of the entire universe configuration. A solar system is a smaller regional map within a galaxy. A planet is a yet smaller regional map within the solar system. And so on. These regions are demarcated by their similarity — the relative complexity distance between them. This is an important concept. Regional maps that are very close in complexity relative to each other, and relative to the overall configurations of the universe, can be usefully considered part of the same system. Regional maps as systems will have very similar characteristics (in popular language, these systems will behave similarly). The reader may work out how this mapping concept plays out microscopically, genetically, cosmologically, as computer programs, etc., at all “scales.”
Knowledge as Names and Consequences
Our human sciences are mapping exercises within specific categorizations of maps. Biology deals with different categories of maps than physics, etc. Though obviously these mapping systems have crossover at the edges of their categories — at the point at which the maps start to be sufficiently different in complexity scope within a category. Of course this basic concept is not limited to the hard sciences. Art, literature, myth, and traditions all have this same shape to them. Learning is precisely the process of finding the edge of a category of maps — the maps between one region and another. Knowledge is a map of categorical maps — the taxonomy of maps. Bluntly, knowledge is the taxonomy defined by labeling a map and its likely consequences, e.g., “that is a bear and it is likely to attack me.” Whereas “that is a bear” is an insufficient classification of a configuration and is not yet a map.
Time has been lost in this essay. Now I recover the point. It should be obvious that this mapping exercise becomes impossibly difficult. There’s simply no finite way of definitively mapping any particular human alive today back to the Big Bang. No way to say: here’s how this all arrived here. Any map we would generate is a series of regional maps that are incomplete mappings of the totality. This is where computational irreducibility, the infinite and the infinitesimal, and incompleteness show up. Very clearly: any “map” between configurations of the universe that is more complex than the configurations themselves is not a map. All information and knowledge (maps) must be less complex than the things they map. (For a fun sidebar, this is where Russell and Whitehead ran into trouble, where the theory of types falls over, and where Russell’s paradox comes from.)
We can now reintroduce time. Time is the measure of complexity between configurations. Time is the map. The more complex the map between things, the more time “passes,” as measured by whatever clocks you choose (atomic, hands and gears, radioactive decay). The relativity-of-time concerns from general and special relativity are found here too. I’ll list some resources for those who’d like to verify or refute these interpretations of relativity and time.
Spacetime in detail: http://www.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/spacetime/
Light cones and causality:
A bit about light speed:
Please see green checked answer here for a useful discussion: https://physics.stackexchange.com/questions/192891/if-all-motion-is-relative-how-does-light-have-a-finite-speed
Information and the speed of light: http://curious.astro.cornell.edu/physics/107-the-universe/cosmology-and-the-big-bang/terminology/651-wait-i-m-still-confused-why-information-can-t-travel-faster-than-the-speed-of-light-beginner
Challenging the speed of light:
And for the adventurous this in its entirety is worth a read:
Causality’s Contingency and Breakdown
As many of the linked essays discuss, we’re back to the age-old consideration of causality. Information is often considered the stuff of causality. And the idea of light cones (electromagnetic wave cones) is one of causality and information and maps. That’s a useful way to think about maps, particularly maps of the ontology of observable existence. The implications here are manifold: not all maps are causal in the cause-and-effect sense, and not all maps are causally the same to all observers (a key insight from Einstein et al.). It is trivial to devise and observe maps of something to nonsense or noise or a random state of affairs. These nonsensical or noisy maps are very important, as they are the stuff of black holes, singularities, Big Bangs, heat deaths… randomness, and ultimately — ex nihilo — the engine of existence.
The Path of Least Resistance or Computational Efficiency
(This section borrows concepts from The Great Unknown, pages 264–300, by Marcus du Sautoy.)
Light (information, EM waves, gravity) follows the shortest path between two points. (This is the abstract concept of a “straight line,” though the term “straight” conjures the wrong idea. The shortest path really should be the least computationally expensive, smallest, or least complex map.) In terms of light, that means light follows a path that minimizes distance against maximized time (in spacetime) — this is odd, but it is due to the idea that spacetime is warped around massive objects and clocks move slower near a center of gravity (time passes slower…). Reframing that in this mapping conception, the map is smaller, less complex near the center of gravity — which is due to there being fewer competing maps (regional maps of other massive objects attempting to map a clock into their own spacetime geometry).
Arguments about how the universe began in a low state of entropy and will likely end in a high state of entropy rage on. But reconsidering time as complexity maps renders these arguments irrelevant. Total entropy and no entropy are both exactly ZERO COMPLEXITY, so zero time. The beginning and end of time of any system is exactly 0 AND infinite time — 0 and infinite complexity. With zero (infinite) complexity there is no chance for coherent observation; there’s not even the possibility of an observer.
An observable configuration of the universe requires observers (mappers) and observations (maps) that are less complex than the whole of the universe. Without less complex configurations making up the whole, information could not exist and could not “move” between regions/maps/systems.
As one way of exploring this, consider the idea of rendering an image of something like a cellular automaton with total fidelity. It is a nontrivial thing to do. Please read this on the subject. This trouble comes about because, like fractals and transcendental numbers and certain geometric concepts, there is no computable (no finite) way to map the totality of a sufficiently complex cellular automaton. Without a finite way of mapping a cellular automaton, we are forced to render partial maps of these systems. There are a couple of strategies. One is to render a map with missing information and let the observer of the map fill in the missing information using statistical tools (human brains and sensory organs mostly do this… consider optical illusions, compression in music formats, etc.). The other strategy is to supply the program and let the observer run the program. The fidelity of a map under this strategy depends on the generator of the map (the program) and the observer of the generated map sharing the same encoding/decoding/transcoding capability and logic. A simple example is how Macs and PCs sometimes misformat shared Microsoft Word documents (the decoders are slightly different). Genetics done by DNA/RNA also operates on this program-and-encoding strategy and has the same problem as the Mac/PC integration.
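As a toy sketch of this — assuming an elementary cellular automaton (rule 30, whose evolution behaves chaotically) stands in for a sufficiently complex system, and compressed size stands in for map size — a full-fidelity render of the chaotic rule needs a far larger map than a static configuration of the same extent:

```python
import zlib

def step(cells, rule=30):
    # One generation of an elementary cellular automaton on a ring:
    # each cell's next state is the rule bit selected by its neighborhood.
    n = len(cells)
    return [(rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Run rule 30 from a single live cell and record every generation.
cells = [0] * 101
cells[50] = 1
history = []
for _ in range(100):
    history.append(bytes(cells))
    cells = step(cells)
full_render = b"".join(history)

# A static configuration of the same extent, for comparison.
static_render = bytes(101) * 100

chaotic_map = len(zlib.compress(full_render, 9))   # large "map"
static_map = len(zlib.compress(static_render, 9))  # tiny "map"
```

The static history compresses to almost nothing, while the chaotic one resists compression: the only compact description of rule 30's totality is, in effect, the program itself.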
Map making and sharing literally could not work at any level of the universe without a reductive or compressive approach involving fluctuations in the generation and/or observation process. If mapping were a 100% perfect replication process there would be zero complexity, zero time, zero information, and all maps would be functionally equivalent to the map of all maps. For one, a 100% perfect replication process would require infinite resources and thus infinite complexity. Second, an observer can only observe maps at the same or lower complexity than itself (and any tools it augments itself with).
Limited, Mutating Maps Are The Creative Engine
Misalignment, whether through map making with “missing” information that observers statistically impute or through generator-observer transcoder differences, is the engine of existential creativity in the universe. Evolution proceeds through genetic descent with modification (mutations) that then gets consequentially sorted and sifted (generational survival, etc.). Animal behavior proceeds through schedule descent with modification (the radical-behaviorism ideas of schedules, reinforcers and overlapping contingencies, habituation, etc.). Human knowledge and culture proceed through reproducibility with variation. In all these examples, the modification or mutation of mapping observations by observers creates new maps to be tested for correlations to various other maps.
This ever-unfolding, fractal-like nature of map rendering, map observing, and map testing is where we all find the expanse of existence. In fact, this process is the ONLY POSSIBLE WAY for the entire universe to exist. A static map or configuration would have no way of finding all its variable, repeating, and invariant structures. A completely random map would have no basis for any particular structure, variant or invariant. And a completely deterministic map would be functionally equivalent to a static map. More convincingly, our own common observations and our scientific experiments simply do not suggest that static, completely random, or fully determined maps make up reality; quite the opposite.
Computational Efficiency and Computational Conservation
The overall configuration, the universe itself, does conserve the totality. What is conserved is complexity, and by proxy computation, a.k.a. map rendering and map observing. Why this happens is due to the nature of observation itself. Per the above, observers must use maps no more complex than themselves (and their tools). The more complex an observer becomes, the more of the configuration of the whole it involves. For instance, while humans have grown incredible measurement tools like GPS, the Large Hadron Collider, deep space rockets, and planetary rovers, the resources required to build and operate these very much level out, or conserve, the total computation involved. That is, the extent of what the LHC measures requires engineering and computing resources that are very, very hard to sustain. One could argue that an average telescope provides much more overall mapping ability for any particular human than the LHC ever will. The maps the LHC makes and requires are incredibly relative and not generally useful to the majority of humans.
This is largely due to some hard realities about bias-variance trade-offs in any and all learning (machine, human, or animal) and the issue of finite computing/memory/map making. That is, any map of the world is necessarily a blend of low to high bias and low to high variance. I cover this extensively in another essay. The maps most likely to be useful in more situations are those with a sustainable blend of low bias and low variance — maps that require overturning biases with new observations (new map features), and maps with missing information (typically open frontiers, which allow for exploration).
Various Practical Implications for Humans
Tying all of this together, we find why some things in humanity seem to be more invariant or more universal than other things. At this point we can evaluate which human activities, knowledge, traditions, sciences, modes of thinking, and communication approaches seem to hang around and apply to most of us.
Almost all humans are prolific at understanding the various states of the human face. Almost all humans share interpretations of a smile, frown, grimace, and laughter. These are, for humans, computationally efficient maps. All humans learn behavior and language via a mix of genetic endowment, epigenetics, fixed action patterns in the body, and environmental reinforcement — all via selection by consequences through changing configurations of the world. All humans forage, in other words. In fact, all living things forage within their local regions. And when their foraging forces them into new map making, they evolve or die off, making way for other map makers.
What we find if we keep doing this exercise is that it is not “truth” or “math” or whatever absolute that keeps showing up as more prevalent in the human and animal record, but that which is computationally efficient at keeping maps evolving and in a flux of bias-variance trade off. Maps that bias too much or have too much variance cause the makers and users of such maps to become less relevant. And so map making and the maps themselves hover around a computational efficiency mean.
Is This Conceptual Reframing of Time as Complexity, and the Observable Universe as Maps, Necessary or Merely Useful?
The utility of any map making activity, a.k.a. knowledge seeking, a.k.a. foraging, is determined by the consequences. If this approach to thinking about the universe produces efficiency for me, the author, or you, the reader, then sure, it’s useful; and depending on how many maps this map clears up, it may even become necessary within a certain expansive region of the universe.
Is it necessary overall? No, no map is. Any given knowledge approach is more or less part of the whole, necessary for its part of the whole, but not the definition or even the possibility of the definition of the whole.
I suppose it’s how I fill the time or make time… or rather, balance time. Or rather, maintain enough complexity in my life to keep on making maps.