Moving Camera in 3D Space - Demo 16

Purpose

Make a moving camera in 3D space. Use Ortho to transform a rectangular prism, defined relative to camera space, into NDC.

Camera

Camera space with ortho volume

Problem purposefully put in

When running this demo and moving the viewer, parts of the geometry will disappear. This is because the geometry gets “clipped out”, as it falls outside of NDC (-1 to 1 on all three axes). We could fix this by making a bigger ortho rectangular prism, but that won’t solve the fundamental problem.

This doesn’t look the way a 3D application should, where objects further away from the viewer appear smaller. This will be fixed in demo17.

Demo 16

Demo 16, which looks like trash

How to Execute

On Linux or on MacOS, in a shell, type “python src/demo16/demo.py”. On Windows, in a command prompt, type “python src\demo16\demo.py”.

Move the Paddles using the Keyboard

Keyboard Input   Action
--------------   -------------------------------------------
w                Move Left Paddle Up
s                Move Left Paddle Down
k                Move Right Paddle Down
i                Move Right Paddle Up
d                Increase Left Paddle’s Rotation
a                Decrease Left Paddle’s Rotation
l                Increase Right Paddle’s Rotation
j                Decrease Right Paddle’s Rotation
UP               Move the camera forward
DOWN             Move the camera backward
LEFT             Rotate the camera to the left
RIGHT            Rotate the camera to the right
PAGE UP          Rotate the camera to look up
PAGE DOWN        Rotate the camera to look down
q                Rotate the square around its center
e                Rotate the square around paddle 1’s center

Description

Before starting this demo, run mvpVisualization/modelvieworthoprojection/modelvieworthoprojection.py, as it shows graphically all of the steps in this demo. In the GUI, take a look at the camera option buttons; once the camera is placed and oriented in world space, use the buttons to change the camera’s position and orientation. This demonstrates what we have to do to move the camera in a 3D scene.

There are new keyboard inputs to control the moving camera. As you would expect in a first-person game, UP moves the camera forward (-z), DOWN moves the camera backwards (+z), LEFT rotates the camera as would happen if you rotated your body to the left, and likewise for RIGHT. PAGE UP and PAGE DOWN rotate the camera to look up or to look down.

To enable this, the camera is modeled with a data structure that has a position in x, y, z relative to world space, and two rotations (one around the camera’s x axis, and one around the camera’s y axis).

To position the camera, you would:

  1. translate to the camera’s position, using its actual position values in world-space coordinates.

  2. rotate around the local y axis

  3. rotate around the local x axis

To visualize this, run “python mvpVisualization/modelvieworthoprojection/modelvieworthoprojection.py”

The ordering of 1) before 2) and 3) should be clear, as we are imagining a coordinate system that moves, just like we do for the model-space to world-space transformations. The ordering of 2) before 3) is very important, as two rotations around different axes are not commutative, meaning that you can’t change the order and still expect the same results (https://en.wikipedia.org/wiki/Commutative_property).
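
To see the non-commutativity numerically, here is a small standalone sketch (using plain tuples instead of the demo’s Vertex class, so it can be run on its own):

import math


def rotate_x(v, angle_in_radians):
    # rotate (x, y, z) around the x axis
    x, y, z = v
    return (x,
            y * math.cos(angle_in_radians) - z * math.sin(angle_in_radians),
            y * math.sin(angle_in_radians) + z * math.cos(angle_in_radians))


def rotate_y(v, angle_in_radians):
    # rotate (x, y, z) around the y axis
    x, y, z = v
    return (z * math.sin(angle_in_radians) + x * math.cos(angle_in_radians),
            y,
            z * math.cos(angle_in_radians) - x * math.sin(angle_in_radians))


v = (0.0, 0.0, -1.0)
a = math.radians(45.0)
print(rotate_x(rotate_y(v, a), a))  # y first, then x: (-0.707..., 0.5, -0.5)
print(rotate_y(rotate_x(v, a), a))  # x first, then y: (-0.5, 0.707..., -0.5)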

Try this. Rotate your head to the right a little more than 45 degrees. Now rotate your head back a little more than 45 degrees.

Now, reset your head (glPopMatrix, which we have not yet covered). This time, rotate your head back 45 degrees first. Once it is there, rotate your head (not your neck) 45 degrees to the right. The result is different, and quite uncomfortable!

We rotate the camera around the y axis first, then around the relative x axis, for the same reason.

src/demo16/demo.py
        # paddle1_vertex_in_world_space: Vertex = paddle1_vertex_in_camera_space.rotate_x(camera.rot_x) \
        #                                   .rotate_y(camera.rot_y) \
        #                                   .translate(camera.position_worldspace)

(Remember, read it from the bottom up, just like the model-space to world-space transformations in the previous demos.)

Back to the point: we are envisioning the camera relative to world space by making a moving coordinate system (composed of an origin, 1 unit in the “x” axis, 1 unit in the “y” axis, and 1 unit in the “z” axis), where each subsequent transformation is relative to the previous coordinate system. (This way of thinking is beneficial because it allows us to think about only one coordinate system at a time, and to forget how we got there, similar to a Markov process: https://en.wikipedia.org/wiki/Markov_chain.)

But this way of thinking only works when we are placing the camera into its position/orientation relative to world space, which is not what we actually need to do. We don’t need to place the camera. We need to move every already-plotted object in world space towards the origin and orientation of NDC. Look at the following graph:

Demo 16 transformation graph

We want to take the model-space geometry from, say, Paddle 1’s space, to world space, and then to camera space. Going from world space to camera space goes in the opposite direction of the arrow, and therefore requires an inverse operation, because to plot data we go from model space to screen space on the graph.

Given that the inverse of a sequence of transformations is the sequence backwards, with each transformation inverted, that is what we must do to get from world space to camera space.

The inverted form is

src/demo16/demo.py
        paddle1_vertex_in_camera_space: Vertex = paddle1_vertex_in_world_space.translate(-camera.position_worldspace) \
                                          .rotate_y(-camera.rot_y) \
                                          .rotate_x(-camera.rot_x)
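
As a sanity check that this really is the inverse: applying the placement transformation shown earlier, followed by the inverted form, returns the original camera-space vertex. A minimal sketch, assuming the demo’s Vertex and Camera objects:

v_camera = Vertex(x=1.0, y=2.0, z=3.0)
# place: camera space -> world space (the commented-out placement code)
v_world = v_camera.rotate_x(camera.rot_x) \
                  .rotate_y(camera.rot_y) \
                  .translate(camera.position_worldspace)
# un-place: world space -> camera space (the inverted form)
v_round_trip = v_world.translate(-camera.position_worldspace) \
                      .rotate_y(-camera.rot_y) \
                      .rotate_x(-camera.rot_x)
# v_round_trip equals v_camera, up to floating-point error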

Other things added: rotations around the x axis, y axis, and z axis (https://en.wikipedia.org/wiki/Rotation_matrix).
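
These rotation methods follow the standard rotation matrices from the linked article. Here is a sketch of how they can be implemented on Vertex (an illustration based on those formulas; see src/demo16/demo.py for the demo’s actual definitions):

import math
from dataclasses import dataclass


@dataclass
class Vertex:
    x: float
    y: float
    z: float

    def rotate_x(self, angle_in_radians: float) -> "Vertex":
        # x stays fixed; y and z rotate within the yz plane
        return Vertex(x=self.x,
                      y=self.y * math.cos(angle_in_radians) - self.z * math.sin(angle_in_radians),
                      z=self.y * math.sin(angle_in_radians) + self.z * math.cos(angle_in_radians))

    def rotate_y(self, angle_in_radians: float) -> "Vertex":
        # y stays fixed; z and x rotate within the zx plane
        return Vertex(x=self.z * math.sin(angle_in_radians) + self.x * math.cos(angle_in_radians),
                      y=self.y,
                      z=self.z * math.cos(angle_in_radians) - self.x * math.sin(angle_in_radians))

    def rotate_z(self, angle_in_radians: float) -> "Vertex":
        # z stays fixed; x and y rotate within the xy plane
        return Vertex(x=self.x * math.cos(angle_in_radians) - self.y * math.sin(angle_in_radians),
                      y=self.x * math.sin(angle_in_radians) + self.y * math.cos(angle_in_radians),
                      z=self.z)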

Code

The camera now has two angles as instance variables.

src/demo16/demo.py
@dataclass
class Camera:
    position_worldspace: Vertex = field(
        default_factory=lambda: Vertex(x=0.0, y=0.0, z=15.0)
    )
    rot_y: float = 0.0
    rot_x: float = 0.0

Since we want the user to be able to control the camera, we need to read the input.

src/demo16/demo.py
def handle_inputs() -> None:
...

Left and right rotate the viewer’s horizontal angle; page up and page down change the vertical angle.

src/demo16/demo.py
    if glfw.get_key(window, glfw.KEY_RIGHT) == glfw.PRESS:
        camera.rot_y -= 0.03
    if glfw.get_key(window, glfw.KEY_LEFT) == glfw.PRESS:
        camera.rot_y += 0.03
    if glfw.get_key(window, glfw.KEY_PAGE_UP) == glfw.PRESS:
        camera.rot_x += 0.03
    if glfw.get_key(window, glfw.KEY_PAGE_DOWN) == glfw.PRESS:
        camera.rot_x -= 0.03

The up arrow and down arrow make the user move forwards and backwards. Unlike the camera-space to world-space transformation, for this movement code we don’t rotate around the x axis. This is because users expect to simulate walking on the ground, not flying through the sky; i.e., we want forward/backwards movement to happen relative to the XZ plane at the camera’s position, not relative to camera space. (A contrasting “fly” variation is sketched after the code below.)

src/demo16/demo.py
    if glfw.get_key(window, glfw.KEY_UP) == glfw.PRESS:
        forwards_camera_space = Vertex(x=0.0, y=0.0, z=-1.0)
        forward_world_space = forwards_camera_space.rotate_y(camera.rot_y) \
                                                   .translate(camera.position_worldspace)
        camera.position_worldspace.x = forward_world_space.x
        camera.position_worldspace.y = forward_world_space.y
        camera.position_worldspace.z = forward_world_space.z
    if glfw.get_key(window, glfw.KEY_DOWN) == glfw.PRESS:
        forwards_camera_space = Vertex(x=0.0, y=0.0, z=1.0)
        forward_world_space = forwards_camera_space.rotate_y(camera.rot_y) \
                                                   .translate(camera.position_worldspace)
        camera.position_worldspace.x = forward_world_space.x
        camera.position_worldspace.y = forward_world_space.y
        camera.position_worldspace.z = forward_world_space.z
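
For contrast, if we did want flying movement, where forward follows the full direction the camera is looking (including its pitch), we would apply the x rotation as well, mirroring the camera placement ordering. A hypothetical variation, not in the demo:

    if glfw.get_key(window, glfw.KEY_UP) == glfw.PRESS:
        forwards_camera_space = Vertex(x=0.0, y=0.0, z=-1.0)
        # hypothetical "fly" mode: include rot_x, so moving forward
        # also gains or loses altitude when looking up or down
        forward_world_space = forwards_camera_space.rotate_x(camera.rot_x) \
                                                   .rotate_y(camera.rot_y) \
                                                   .translate(camera.position_worldspace)
        camera.position_worldspace.x = forward_world_space.x
        camera.position_worldspace.y = forward_world_space.y
        camera.position_worldspace.z = forward_world_space.z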

Ortho is the function call that shrinks the viewable region, defined relative to camera space, down to NDC, by moving the center of the rectangular prism to the origin and scaling by the inverse of half its width, height, and depth.

src/demo16/demo.py
    def ortho(self: Vertex,
              left: float,
              right: float,
              bottom: float,
              top: float,
              near: float,
              far: float,
              ) -> Vertex:
        midpoint = Vertex(
            x=(left + right) / 2.0,
            y=(bottom + top) / 2.0,
            z=(near + far) / 2.0
        )
        length_x: float
        length_y: float
        length_z: float
        length_x, length_y, length_z = right - left, top - bottom, far - near
        return self.translate(-midpoint) \
                   .scale(2.0 / length_x,
                          2.0 / length_y,
                          2.0 / (-length_z))

We will make a wrapper function camera_space_to_ndc_space_fn which calls ortho, setting the size of the rectangular prism.

src/demo16/demo.py
    def camera_space_to_ndc_space_fn(self: Vertex) -> Vertex:
        return self.ortho(left=-10.0,
                          right=10.0,
                          bottom=-10.0,
                          top=10.0,
                          near=-0.1,
                          far=-30.0)
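
To make the mapping concrete, here is a worked example using those values (left=-10.0, right=10.0, bottom=-10.0, top=10.0, near=-0.1, far=-30.0), assuming the demo’s Vertex class:

# the midpoint of the viewable region is (0.0, 0.0, -15.05), and the
# lengths are length_x=20.0, length_y=20.0, length_z=-29.9
v_camera = Vertex(x=10.0, y=-10.0, z=-15.05)
v_ndc = v_camera.camera_space_to_ndc_space_fn()
# x: (10.0   -   0.0)  * (2.0 / 20.0) ->  1.0
# y: (-10.0  -   0.0)  * (2.0 / 20.0) -> -1.0
# z: (-15.05 - -15.05) * (2.0 / 29.9) ->  0.0
# v_ndc is Vertex(x=1.0, y=-1.0, z=0.0), on the boundary of NDC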

Event Loop

The amount of repetition in the code below is starting to get brutal, as there’s too much detail to think about and retype for every object being drawn, and we’re only dealing with 3 objects. The author put this repetition into the book on purpose so that, when we start using matrices later, the reader will fully appreciate what matrices solve for us.

src/demo16/demo.py
while not glfw.window_should_close(window):
...

Paddle 1

src/demo16/demo.py
    glColor3f(paddle1.r, paddle1.g, paddle1.b)
    glBegin(GL_QUADS)
    for paddle1_vertex_in_model_space in paddle1.vertices:
        paddle1_vertex_in_world_space: Vertex = paddle1_vertex_in_model_space.rotate_z(paddle1.rotation) \
                                         .translate(paddle1.position)
        # paddle1_vertex_in_world_space: Vertex = paddle1_vertex_in_camera_space.rotate_x(camera.rot_x) \
        #                                   .rotate_y(camera.rot_y) \
        #                                   .translate(camera.position_worldspace)
        paddle1_vertex_in_camera_space: Vertex = paddle1_vertex_in_world_space.translate(-camera.position_worldspace) \
                                          .rotate_y(-camera.rot_y) \
                                          .rotate_x(-camera.rot_x)
        paddle1_vertex_in_ndc_space: Vertex = paddle1_vertex_in_camera_space.camera_space_to_ndc_space_fn()
        glVertex3f(paddle1_vertex_in_ndc_space.x, paddle1_vertex_in_ndc_space.y, paddle1_vertex_in_ndc_space.z)
    glEnd()

Square

The square should not be visible when hidden behind paddle 1, as we translated it by -1 in the z direction.

src/demo16/demo.py
    glColor3f(0.0, 0.0, 1.0)
    glBegin(GL_QUADS)
    for model_space in square:
        paddle_1_space: Vertex = model_space.rotate_z(square_rotation) \
                                            .translate(Vertex(x=2.0,
                                                              y=0.0,
                                                              z=0.0)) \
                                            .rotate_z(rotation_around_paddle1) \
                                            .translate(Vertex(x=0.0,
                                                              y=0.0,
                                                              z=-1.0))
        world_space: Vertex = paddle_1_space.rotate_z(paddle1.rotation) \
                                            .translate(paddle1.position)
        # world_space: Vertex = camera_space.rotate_x(camera.rot_x) \
        #                                   .rotate_y(camera.rot_y) \
        #                                   .translate(camera.position_worldspace)
        camera_space: Vertex = world_space.translate(-camera.position_worldspace) \
                                          .rotate_y(-camera.rot_y) \
                                          .rotate_x(-camera.rot_x)
        ndc_space: Vertex = camera_space.camera_space_to_ndc_space_fn()
        glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
    glEnd()

Paddle 2

src/demo16/demo.py
    glColor3f(paddle2.r, paddle2.g, paddle2.b)
    glBegin(GL_QUADS)
    for paddle2_vertex_model_space in paddle2.vertices:
        paddle2_vertex_world_space: Vertex = paddle2_vertex_model_space.rotate_z(paddle2.rotation) \
                                                                       .translate(paddle2.position)
        # paddle2_vertex_world_space: Vertex = paddle2_vertex_camera_space.rotate_x(camera.rot_x) \
        #                                                                 .rotate_y(camera.rot_y) \
        #                                                                 .translate(camera.position_worldspace)

        paddle2_vertex_camera_space: Vertex = paddle2_vertex_world_space.translate(-camera.position_worldspace) \
                                                                        .rotate_y(-camera.rot_y) \
                                                                        .rotate_x(-camera.rot_x)

        paddle2_vertex_ndc_space: Vertex = paddle2_vertex_camera_space.camera_space_to_ndc_space_fn()
        glVertex3f(paddle2_vertex_ndc_space.x, paddle2_vertex_ndc_space.y, paddle2_vertex_ndc_space.z)
    glEnd()