Message #4215

From: programagor@gmail.com
Subject: Re: Spherical distortion on 4D to 3D cameras
Date: Tue, 13 Aug 2019 18:09:15 +0000

Right, I think there has been some confusion.


First of all, Roice, what you said is a great idea! I’ll try to implement that soon.


Second of all, this is what I actually did:
I placed a 4-camera at (-1.25,0,0,0) or so, and a 4-cube with vertices at (±1,±1,±1,±1).
The 4-camera's outputs are the angles XY, XZ, and XW between the camera and a point.
These angles are then used directly as XYZ coordinates for the next step.
These are fed into this function (link in my first mail):
sx = x * sqrtf(1 - y * y / 2 - z * z / 2 + y * y * z * z / 3);
sy = y * sqrtf(1 - z * z / 2 - x * x / 2 + z * z * x * x / 3);
sz = z * sqrtf(1 - x * x / 2 - y * y / 2 + x * x * y * y / 3);

This is allegedly fancier than simple normalisation because it is supposed to distribute points more uniformly over the sphere's surface. It takes an object very similar to what we are used to from Melinda's MC4D, and distorts it as you can see in my picture.
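To make the flow concrete, here is roughly what it looks like in code. This is only a sketch: the atan2f angle computation and the names are my stand-ins, not necessarily the exact code, and the last three lines are the cube-to-sphere function quoted above.

#include <math.h>

/* 4-camera at cam = (-1.25, 0, 0, 0); p is a 4D point.
   The XY, XZ, XW angles are taken in the planes through the X axis
   (a guess at the exact formula), then used directly as x, y, z. */
void project_point(const float cam[4], const float p[4],
                   float *sx, float *sy, float *sz)
{
    float dx = p[0] - cam[0];
    float x = atan2f(p[1] - cam[1], dx);  /* XY angle */
    float y = atan2f(p[2] - cam[2], dx);  /* XZ angle */
    float z = atan2f(p[3] - cam[3], dx);  /* XW angle */

    /* 3D cube-to-sphere mapping from the linked page */
    *sx = x * sqrtf(1 - y*y/2 - z*z/2 + y*y*z*z/3);
    *sy = y * sqrtf(1 - z*z/2 - x*x/2 + z*z*x*x/3);
    *sz = z * sqrtf(1 - x*x/2 - y*y/2 + x*x*y*y/3);
}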
I suspect the reason the center cell is barely distorted is that the distortion function is designed for surface uniformity, not volume or angle uniformity. But that's just speculation on my part.
Anyway, as you can see, the function is 3D, not 4D. It operates on the completed 3D projection of the 4D point.


Now to get back to what you said.
I will do the following changes to my flow:
In the first step, even before the 4-camera output is computed for a point, that point is transformed, either by normalisation or by some fancier function (the normalisation version is sketched at the end of this mail). This transformation takes all points from the surface of a 4-cube (that's where the stickers are anyway) and maps them onto the surface of a 4-sphere.
This is now a true 4D function, unlike in my original implementation.
Then the angles (XY, XZ, XW) are computed between the 4-camera and the transformed point. After that, the previous 3D formula could be applied again, but I'll see whether that's still necessary.
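For the first step, the plain-normalisation version of the transformation is trivial; something like this (just a sketch, and a fancier cube-to-sphere mapping could be dropped in here later):

#include <math.h>

/* Map a point on the surface of the 4-cube onto the surface of the
   unit 4-sphere by plain normalisation. */
void map_to_4sphere(const float p[4], float out[4])
{
    float r = sqrtf(p[0]*p[0] + p[1]*p[1] + p[2]*p[2] + p[3]*p[3]);
    for (int i = 0; i < 4; i++)
        out[i] = p[i] / r;
}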