small observations about visual art, primarily drawing/painting
when you are rendering a “scene” from our normal 4d spacetime onto a “2d” picture plane you’re actually NOT doing that!
first off… human perception of 4d spacetime is a bit more complicated than literally 3d of space and 1d of time.
let us consider just light rays/photon streams, space and objects, and how they get mapped to “values” on a canvas/paper/picture plane. objects in space, their distance and orientation, are all about how many light rays/photons reach you and how far away they are. that is… objects closer to you and the planes of an object facing your eyes appear brighter to you because more photons are hitting your eyeballs (and your skin, cause you “feel” them too). objects/planes that face away or sit further back in space have fewer photons headed directly at you, so they are “darker” aka less energetic.
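this intuition (closer and more face-on → brighter) is roughly what a Lambertian shading model captures. a minimal sketch, with made-up numbers and a hypothetical `apparent_brightness` helper, just to make the two factors concrete:

```python
import math

def apparent_brightness(intensity, distance, angle_deg):
    """Photon flux reaching the eye from a diffuse surface patch:
    it falls off with the square of distance (inverse-square law)
    and with the cosine of the angle between the surface normal
    and the viewing direction (Lambert's cosine law)."""
    cos_term = max(0.0, math.cos(math.radians(angle_deg)))
    return intensity * cos_term / distance ** 2

# the same patch: facing you head-on and close by,
# vs. angled away and farther back in space
near_facing = apparent_brightness(100.0, 1.0, 0.0)   # bright
far_angled  = apparent_brightness(100.0, 3.0, 60.0)  # darker
```

none of this is how paint handles it, of course; it’s just the photon-count story from the paragraph above written as arithmetic.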
When you set out to map that effect onto a 2d picture plane what you are actually doing is adding CHEMICALS to the picture plane to ABSORB more light photons aka reflect LESS of them for objects facing away from you or further back in space. that is… you literally make the picture plane DARKER / more light absorbent. You are not “diagramming” the scene. you are literally chemically absorbing or reflecting light in a more or less isomorphic way.
When we do digital “picture making” the same idea is present; it’s just a matter of whether the computer screen EMITS more photons at a pixel or not….
side note: human vision actually does all sorts of inversions and flips etc. nothing in spacetime is as you see it nor as it is “emitted”. there is simply CORRESPONDENCE.
side note 2: the picture plane is also sitting within 4d reality. it does not EVER exist outside that layered dimensionality. the optical illusions aren’t optical at all. they are literally just data-ignoring/channel-jamming things. as you practice art you learn to see 4d “reality” at the same time as “the reality of the rendered picture plane”. you literally experience the CORRESPONDENCE.
This same physical/chemical mapping can be done for different “facets” of perceived “reality scenes”. Why i care to keep re-learning this and spelling it out for anyone who cares is that we tend to over-index on what our senses are doing as “reality” and think the picture plane is either a good or bad depiction of that reality. it can never be a good or bad depiction… it is 100% part of the wider reality AND more or less correspondent….
and what’s crazy to think about… a rendered picture grows out of correspondence with the perceived scene once that scene has changed/dissolved/disappeared. e.g. the mona lisa is not a painting of Mona Lisa anymore. There’s no correspondence to that scene anymore other than this painting. The painting is its OWN scene and its OWN chemical map of itself.
side note 3: art imitates life, life imitates art. paintings/drawings often (i’d argue almost always) end up creating new correspondence to the world by shaping people to re-create rendered scenes in “perceived 4d”. in fact, this is why painting/drawing are so pervasive in homo sapiens across spacetime. art is a very effective compression/decompression akin to DNA/protein synthesis. as much as we think writing and computer programs etc are good, they aren’t even close to what simple pictures are able to compress/decompress (as in, a picture is worth a thousand words)
side note 4: this is why all perceived advances in AI will be primarily visual mappings. it’s not because they are visual but because “vision” has the necessary dimensionality (a lot) to encapsulate better correspondence between phenomena in different modalities (4d spacetime to computer programs and data to picture planes to weird mathematical objects to genetics). language cannot be the basis of AI because language is a very bad compression of modalities.
this is now a blog post.