Message #3241

From: Eduard Baumann <ed.baumann@bluewin.ch>
Subject: Re: [MC4D] Visualizing Hyperobjects
Date: Tue, 24 Nov 2015 14:12:31 +0100

Wow, I like such explanations.
Best regards
Ed

----- Original Message -----
From: Melinda Green melinda@superliminal.com [4D_Cubing]
To: 4D_Cubing@yahoogroups.com
Sent: Tuesday, November 24, 2015 4:38 AM
Subject: Re: [MC4D] Visualizing Hyperobjects



Hello Chris,

I’m glad that you decided to join the discussion!

The closest I’ve come to truly perceiving something in 4D comes from the 4D shift-dragging rotation in MC4D. We get so used to seeing these projections into 3D that it’s easy to forget about their 4D nature. We all know that the puzzle consists of eight 3D faces, but we forget that they bound a rectilinear region of 4-space. It’s analogous to how solvers of the 3D twisty puzzles are very aware of the patterns of the stickers on the outside of their favorite objects and are only occasionally aware of the volume and mechanism inside. Looked at this way, the Rubik’s cube is really a 2D puzzle, and MC4D is really only 3D. That gives us plenty of opportunity to perceive the true nature of this puzzle, but not of the object itself. The 3D "skin" of this object completely fills my 3D mind, but where is the 4D volume that it bounds? When I’m shift-dragging in MC4D I sometimes get glimpses of a kind of finite cubic hollowness, but it quickly vanishes. The good thing is that the experience is repeatable.
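
For the curious, here is a rough numpy sketch of the projection idea: rotate the corners of a tesseract in a 4D plane, then flatten them into 3D with a simple perspective divide along w. It is just an illustration of why the shift-drag changes the 3D shadow, not MC4D’s actual code.

    # Toy illustration (not MC4D's code): rotate the 16 vertices of a tesseract
    # in the z-w plane, then project them into 3D with a perspective divide by w.
    import numpy as np
    from itertools import product

    vertices = np.array(list(product([-1.0, 1.0], repeat=4)))  # 16 corners of the tesseract

    def rotate_zw(points, angle):
        """Rotate in the z-w plane; the kind of rotation a 4D shift-drag performs."""
        c, s = np.cos(angle), np.sin(angle)
        rot = np.eye(4)
        rot[2, 2], rot[2, 3] = c, -s
        rot[3, 2], rot[3, 3] = s, c
        return points @ rot.T

    def project_to_3d(points, eye_w=3.0):
        """Perspective projection from 4D to 3D: scale x, y, z by distance along w."""
        scale = eye_w / (eye_w - points[:, 3])
        return points[:, :3] * scale[:, None]

    for angle in (0.0, 0.3, 0.6):
        shadow = project_to_3d(rotate_zw(vertices, angle))
        print(f"angle {angle:.1f}: first vertex lands at {np.round(shadow[0], 3)}")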

The only other thing I want to say is that visualization has little to do with vision. I’m glad that you pointed out how motion parallax allows stereo vision with only a single eye. One eye should be enough to "see" in any number of dimensions, but notice that you can still visualize just fine without any eyes. If you close your eyes and think about a model car, you can perceive it from any angle just by thinking about it. Actually, that’s a trick that does take some effort, and some people are not very good at it, I think because it specifically involves vision. It’s the mental equivalent of projecting a 3D object into a 2D picture. The result is that you can only look at one projection at a time. Not exactly the "all directions at once" you are hoping for.

But consider this. Imagine holding that model car in your hands with your eyes closed. You can now be completely aware of all of its features at the same time! Even people who were blind from birth are quite able to visualize in 3D. It’s as if we all have a kind of stage inside our minds that we can fill with an awareness of whatever we like, including ourselves. Especially including ourselves. That’s how we measure and judge the distance and speed and potential danger of cars and tigers and such in our environment. It’s crucial for most species to have this sort of awareness, and that’s why it evolved.

As llamaonacid said (what is your name, please?), AI can be programmed to do similar things in any number of dimensions. It’s helpful to realize that computers and software also evolve. Their evolution is not driven by natural selection, but market forces still drive a very real form of evolution. In all cases, evolution is a form of optimization towards various goals where only the fittest survive. Evolution requires goals. We evolved over billions of years to survive in a 3D world, so that’s not just what we’re good at, it’s also a large part of what we are. Computers are evolving against some very different goals. One of the biggest uses of computer power today is linear programming for resource allocation. These systems naturally deal with objects involving millions of dimensions, and have gotten very good at it. I can imagine a future in which they become self-aware, but I can’t imagine how I could ever become like them, or how they could become like me, because we are simply evolved for different things. I can, on the other hand, imagine something about the lives of future robots and self-driving cars, because they are evolving to function in some of the same environments in which we evolved. Notice also how it would be next to impossible to teach a linear programming computer to drive a car, or for a self-driving car to crunch linear programming problems. It would be analogous to trying to pull an umbrella through a chain-link fence. With enough effort, you might actually manage it, but you probably wouldn’t even call it an umbrella at that point.
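
Just to make the resource-allocation idea concrete, here is a toy example using scipy. It is purely illustrative: the real systems solve problems with millions of variables, not two.

    # Toy resource-allocation LP (illustrative only): pick how many units of two
    # products to make, given limited machine time and labor, to maximize profit.
    from scipy.optimize import linprog

    profit = [-3.0, -5.0]            # linprog minimizes, so negate the per-unit profits
    resources = [[1.0, 2.0],         # machine-hours used per unit of each product
                 [3.0, 1.0]]         # labor-hours used per unit of each product
    available = [14.0, 18.0]         # machine-hours and labor-hours available

    result = linprog(profit, A_ub=resources, b_ub=available,
                     bounds=[(0, None), (0, None)])
    print("units to make:", result.x, "profit:", -result.fun)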

-Melinda

On 11/23/2015 5:13 AM, Chris cpw@maine.rr.com [4D_Cubing] wrote:

The closest thing I ever had to a religious experience was when I visualized a hypersphere, at least as much as I'm capable of visualizing anything.  Haven't been able to duplicate the experience.

So, on the one hand, that's still a "no" because my ability to visualize is kind of . . . not there.  I've never found the words to describe what it is like.  It's almost seeing, but not quite there.  On the other hand, to the extent that I can "see" anything in my mind's eye, I did see it that one time.  On Zaphod Beeblebrox's third hand, an n-dimensional sphere is always going to be the easiest thing to visualize.  It looks the same from every angle, so if it isn't textured or colored, there's no complexity; looking at a sphere always shows you a circle, and looking at a circle if you're in a 2D environment always shows you a line of the same length.  Shading is probably different though, otherwise how do you know from a single angle that it's really an n-sphere and not an [n minus 1] sphere?  (E.g. how do you know it's a sphere and not a circular disk?)
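
To make the shading point concrete, here is a tiny numpy illustration of the one-dimension-down case: a unit sphere and a flat disk present the same circular silhouette, but lit head-on the sphere's brightness falls off toward the rim while the disk stays uniform. That falloff is the cue that something curved is hiding behind the circle, and the same argument lifts one dimension up.

    # Shading across the same circular silhouette: curved sphere vs flat disk.
    import numpy as np

    samples = np.linspace(0.0, 0.99, 5)            # distance from the silhouette's center

    sphere_brightness = np.sqrt(1.0 - samples**2)  # Lambertian falloff of a unit sphere lit head-on
    disk_brightness = np.ones_like(samples)        # a flat disk lit head-on is uniformly bright

    for r, s, d in zip(samples, sphere_brightness, disk_brightness):
        print(f"r={r:.2f}  sphere={s:.2f}  disk={d:.2f}")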

Combine the simplicity of a hypersphere with the fact that nothing I "visualize" actually quite reaches the level of real visualization (I'm in envy of those who can truly visualize) and we're back to Melinda's answer of, "No."  (Plus it only happened once.)

It is a bit more complex than simply that we're coded for 3D, though.  We are, but what that means is more complicated than it initially sounds.  We don't see in three dimensions.  We see in two dimensions, twice, from slightly different angles, and use the differences to determine three dimensional structure.  These days you can do the same thing with two pictures from different positions and a computer program; it's called "structure from motion" (the "motion" being moving the camera from angle one to angle two, and then probably other angles as well, because why stop at two?).
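
Here is a minimal numpy sketch of that difference-to-structure step, assuming the two camera positions are already known. A real structure-from-motion pipeline also has to estimate the cameras themselves; this only does the final triangulation.

    # Linear triangulation: given the same point seen in two images taken from
    # known camera positions, recover its 3D location from the two 2D projections.
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Solve for the 3D point whose projections through P1 and P2 are uv1 and uv2."""
        u1, v1 = uv1
        u2, v2 = uv2
        A = np.array([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]                     # back from homogeneous coordinates

    # Two cameras with identical intrinsics; the second is shifted one unit sideways.
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

    point = np.array([0.2, -0.1, 4.0, 1.0])     # a point 4 units in front of the cameras
    uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
    uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
    print("recovered:", np.round(triangulate(P1, P2, uv1, uv2), 3))  # ~ [0.2, -0.1, 4.0]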

If you close one eye (or only have the one, or don't have two working together normally), though, even though what you're seeing is a single two dimensional image, you're still seeing three dimensional objects, and any movement allows the same kind of difference-to-structure as having the two standard offset eyes open (though our brains aren't nearly as adept at that compared to just having two eyes open and working in concert).

Applying the same principles to four dimensional objects, an eye evolved for a 4D world would return a three dimensional "image", and you'd use two such 3D viewing eyes, offset of course, to get the 4D structure.  But, as with our 2D viewing eyes, closing one of them wouldn't mean you're not looking at 4D objects anymore.
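
A back-of-the-envelope numpy sketch of that idea, using a made-up 4D pinhole model: each hypothetical eye projects a 4D point onto a 3D "retina" by dividing by the w coordinate, and the shift between the two retinal images (the disparity) gives w back, exactly like ordinary stereo depth.

    # Two hypothetical 4D eyes, offset along x, each projecting onto a 3D retina.
    import numpy as np

    focal = 1.0
    baseline = 0.5                              # offset between the two eyes along x

    def retina_image(point4d, eye_x):
        """Project a 4D point onto the 3D retina of an eye at (eye_x, 0, 0, 0)."""
        x, y, z, w = point4d
        return focal * np.array([x - eye_x, y, z]) / w

    p = np.array([1.0, 0.5, -0.2, 3.0])         # some 4D point, w = 3 "in front of" the eyes
    left = retina_image(p, 0.0)
    right = retina_image(p, baseline)

    disparity = left[0] - right[0]              # shift between the two 3D images along x
    recovered_w = focal * baseline / disparity  # same formula as 2D stereo depth
    print("true w:", p[3], "recovered w:", round(recovered_w, 6))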

So to really visualize in 4D what is needed is to be able to visualize a 3D space as seen from every possible angle at once.  As far as I know, no one can do that.  Might be why my pseudo-religious mathematical experience was a hypersphere: every possible angle of a sphere returns "looks like a circle from this angle too."  But, even with something that simple, I only ever did it once.  And that's not because of a lack of trying; I simply can't duplicate the experience.

But if you could, somehow, do that (see 3D from every possible angle), then it would be the same as 4D viewing without depth perception.

Seeing in 4D requires seeing in true 3D, and human beings can't actually do that.  We compare two two dimensional images in a brain designed to gather depth information from the differences between them and see in a sort of 2.5-D.  We're not just hard-coded for a 3D world, we're hard-coded for viewing that world from a single position (with the only wiggle room being the distance from one eye to the other, said wiggle room being used to determine depth.)

- Chris Witham, who has been meaning to say, "Hello," for years.
  (chris the cynic)

ps Hi everyone.

On 11/22/2015 9:35 PM, llamaonacid@gmail.com [4D_Cubing] wrote:

  Is the human brain capable of actually visualizing hyperobjects and has anyone here been capable of doing so?