A microcosm (or perhaps a synecdoche) is often the best way to explain truly massive ideas. So we’ll talk here in terms of systems, letting one of the universe’s many systems stand for all of them: learning about animals or computer networks, for example, in order to get a sense of how all systems might work.
First, let’s define learning. Learning is not done only by humans, and it is value neutral: neither bad nor good. Learning is adapting. Adapting is becoming aware of changes in the surrounding environment and remembering them. The more something adapts, the more it learns.
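That definition can be sketched in a few lines of code. This is only a toy illustration under my own assumptions: the class name, the temperature readings, and the idea of counting remembered changes are all invented here, not part of the account above.

```python
# Toy sketch: a "learning system" that adapts by noticing and remembering
# changes in its environment. All names here are illustrative inventions.
class LearningSystem:
    def __init__(self):
        self.memory = []  # remembered environmental changes

    def adapt(self, observation):
        # Become aware of a change and remember it; unchanged
        # conditions are not new adaptations.
        if not self.memory or observation != self.memory[-1]:
            self.memory.append(observation)

    def amount_learned(self):
        # "The more something adapts, the more it learns."
        return len(self.memory)

s = LearningSystem()
for reading in [20, 20, 21, 23, 23, 19]:  # e.g. temperature readings
    s.adapt(reading)
print(s.amount_learned())  # 4: only the changes were remembered
```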
We differentiate individual systems within the maze of systems that make up the universe, but these distinctions are just constructs of linguistic convenience. There are no clean lines between systems; we group them by how they respond to changes, allowing for some expected variation (often called “probabilism”).
Take biology, for example: consider two systems, call them S1 and S2, both of which are organic cells. Both S1 and S2 respond to similar stimuli, such as ions (which are mobile because of their electrical charge).
We can observe and record the ion exchange between cells (again, understanding that there will be some variation in this process between the two cells or systems). We can call this process “mapping.” Once that’s done (although the process is never really done; we’ll have to carve out some arbitrary amount of time in which the mapping occurs), we can group both cells, S1 and S2, under a larger rubric, which we’ll call Type F.
Upon adding new cells S3 and S4, we note that they respond differently to ion exposure. This leads to a new mapping among the set of cells {S1, S2, S3, S4} (call it Si). As we add and observe cells or systems, we can group them into Theory 1, which is easier to share with other observers (scientists) than a complex set of detailed data. Think bullet points rather than blocks and blocks of images, charts, and other symbols that make up the data. Theory 1 now deals with cells that behave similarly enough to be grouped together, and it can now be compared, let’s say, to those in Theory 2.
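The mapping-and-grouping process above can be sketched as a small program. Everything specific here is an assumption of mine: real cells would be measured rather than computed, and the doses, parameters, and tolerance value are placeholders chosen only to make the grouping visible.

```python
# Illustrative sketch of "mapping" cells by their responses and grouping
# those whose responses agree within expected variation.
def respond(cell_sensitivity, ion_dose):
    # Stand-in for an observed response; a real mapping would record
    # measurements, not compute them.
    return cell_sensitivity * ion_dose

def group_by_response(cells, doses, tolerance=1.0):
    # Map each cell's response profile, then group profiles that agree
    # within the tolerance (the "expected variation").
    groups = []
    for name, sensitivity in cells.items():
        profile = [respond(sensitivity, d) for d in doses]
        for group in groups:
            if all(abs(a - b) <= tolerance
                   for a, b in zip(profile, group["profile"])):
                group["members"].append(name)
                break
        else:
            groups.append({"profile": profile, "members": [name]})
    return groups

# S1 and S2 behave similarly; S3 and S4 respond differently.
cells = {"S1": 1.0, "S2": 1.1, "S3": 3.0, "S4": 3.2}
for group in group_by_response(cells, doses=[1, 2, 3]):
    print(group["members"])  # ['S1', 'S2'] then ['S3', 'S4']
```

The two groups that emerge play the role of Type F and its counterpart: shareable summaries (bullet points) standing in for the full response data.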
To really use the account above, people need to instinctively fill in the gaps and inconsistencies that naturally exist among the maps. All learning does this, whether in culture, language, mathematics, or elsewhere. Put simply, some assumptions must be made; every single detail of a system cannot be observed or shared, so guesses must be made in order for the information to be usable.
But learning systems grow endlessly. Any attempt to map one would merely provide a snapshot in time and would not capture the full scope of the ever-growing, ever-evolving system. You’ve got to reduce a system in order to investigate it.
(Here’s a potentially useless digital analogy: to work efficiently on an image file on your MacBook that is, let’s say, 1,000 terabytes, utterly enormous, you’ll need to reduce the file size by first reducing the resolution. You simply can’t work on a 1,000 TB file on a run-of-the-mill laptop. Reducing the resolution “deletes,” or ignores, pixels that fundamentally exist in the image, and the file is no longer “whole.” But to do any work at all with the image, you’ve got to employ this crude reduction process.)
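The crude reduction in the analogy can be shown directly. This is a minimal sketch, assuming the simplest possible method (averaging 2x2 blocks of pixel values); real image tools use more sophisticated resampling, and the tiny grid of numbers below stands in for an actual image file.

```python
# Shrink an "image" (a grid of pixel values) by averaging blocks of pixels.
# Detail is discarded so the result becomes workable: the reduced file is
# no longer "whole," but it can actually be handled.
def downsample(image, factor=2):
    height, width = len(image), len(image[0])
    small = []
    for i in range(0, height, factor):
        row = []
        for j in range(0, width, factor):
            block = [image[y][x]
                     for y in range(i, min(i + factor, height))
                     for x in range(j, min(j + factor, width))]
            row.append(sum(block) / len(block))  # one pixel replaces many
        small.append(row)
    return small

image = [[10, 12, 200, 202],
         [11, 13, 201, 203],
         [50, 52,  90,  92],
         [51, 53,  91,  93]]
print(downsample(image))  # [[11.5, 201.5], [51.5, 91.5]]
```

Each output pixel remembers only the average of the four it replaced; the rest is gone, which is exactly the trade the analogy describes.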
Keeping the above definition of a learning system in mind, it’s crucial to recognize that new, observable systems are created when systems interact. For example, the interaction between S1 and S2 is itself a system of learning, which, like all other systems, grows and changes continuously. We’ll call this phenomenon a “medium.”
To be sure, medium formation is only possible if the systems are affected similarly (S1 by S2, or by any other S). Put another way, Human A and Human B can use human language because of the systems they have in common (cultural, biological, environmental, etc.). However, Human A cannot use human language to communicate with Hamster B (nor, for that matter, can Hamster B use hamster language to communicate with Human A).
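One rough way to picture a medium is as the overlap between two systems' signal sets. The sketch below assumes exactly that simplification, and the particular signals ("speech," "scent," and so on) are my own invented examples, not claims about how communication actually decomposes.

```python
# Hedged sketch: a "medium" modeled as the set of signals two systems share.
# Communication is possible only over that overlap.
def medium(system_a, system_b):
    return system_a & system_b  # set intersection

human_a   = {"speech", "gesture", "writing"}
human_b   = {"speech", "gesture", "writing"}
hamster_b = {"scent", "squeak", "gesture"}

print(sorted(medium(human_a, human_b)))    # the full shared set: a rich medium
print(medium(human_a, hamster_b))          # {'gesture'}: a very thin medium
```

The humans share everything, so their medium is rich; the human and the hamster share almost nothing, so barely any medium forms between them.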
If we can pinpoint mediums, then we can say we know something about the systems within them without fully understanding the entire signal set, that is, the map of activity among the systems.
Still, it’s necessary to remain aware that each system has its own means of mapping the other systems it comes in contact with. For example, Human A and Human B each has her own relative understanding of mathematics, but mathematics itself generally tends to replace specificity with abstraction in order to align research. One way of understanding this is to see mathematics as a medium that tends to blend humans together by controlling the conditions of their environments. (Somehow this relates to gene cloning, but I don’t know enough about gene cloning to explain it.) Of course, it’s important to be cautious: differences among systems can compound, leading to differences of understanding over time.