Lambda Stack - Demo 18

Purpose

Remove repetition from the coordinate transformations: the previous demos applied very similar transformation sequences, especially from camera space to NDC space. Each edge of the graph of objects should be specified only once per frame.

Figure: the full Cayley graph.

Noticing in the previous demos that the lower parts of the transformation chains share a common pattern, we can create a stack of functions for later application. Before drawing geometry, we add any needed functions to the top of the stack, apply all of the functions in the stack to our model-space data to get NDC data, and, before we return to the parent node, we pop the functions we added off of the stack, ensuring that we return the stack to the state that the parent node gave us.

To explain in more detail —

What’s the difference between drawing paddle 1 and the square?

Here is paddle 1’s code:

src/demo17/demo.py
    # draw paddle 1
    glColor3f(paddle1.r, paddle1.g, paddle1.b)

    glBegin(GL_QUADS)
    for paddle1_vertex_in_model_space in paddle1.vertices:
        paddle1_vertex_in_world_space: Vertex = paddle1_vertex_in_model_space.rotate_z(paddle1.rotation) \
                                                                             .translate(paddle1.position)
        # paddle1_vertex_in_world_space: Vertex = paddle1_vertex_in_camera_space.rotate_x(camera.rot_x) \
        #                                                                       .rotate_y(camera.rot_y) \
        #                                                                       .translate(camera.position_worldspace)
        paddle1_vertex_in_camera_space: Vertex = paddle1_vertex_in_world_space.translate(-camera.position_worldspace) \
                                                                              .rotate_y(-camera.rot_y) \
                                                                              .rotate_x(-camera.rot_x)
        paddle1_vertex_in_ndc_space: Vertex = paddle1_vertex_in_camera_space.camera_space_to_ndc_space_fn()
        glVertex3f(paddle1_vertex_in_ndc_space.x, paddle1_vertex_in_ndc_space.y, paddle1_vertex_in_ndc_space.z)
    glEnd()

Here is the square’s code:

src/demo17/demo.py
    glColor3f(0.0, 0.0, 1.0)
    glBegin(GL_QUADS)
    for model_space in square:
        paddle_1_space: Vertex = model_space.rotate_z(square_rotation) \
                                            .translate(Vertex(x=2.0,
                                                              y=0.0,
                                                              z=0.0)) \
                                            .rotate_z(rotation_around_paddle1) \
                                            .translate(Vertex(x=0.0,
                                                              y=0.0,
                                                              z=-1.0))
        world_space: Vertex = paddle_1_space.rotate_z(paddle1.rotation) \
                                            .translate(paddle1.position)
        camera_space: Vertex = world_space.translate(-camera.position_worldspace) \
                                          .rotate_y(-camera.rot_y) \
                                          .rotate_x(-camera.rot_x)
        ndc_space: Vertex = camera_space.camera_space_to_ndc_space_fn()
        glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
    glEnd()

The only difference is the square’s transformation from its model space to paddle 1’s space. Everything else is exactly the same. In a graphics program, because the scene is a hierarchy of objects defined relative to one another, it is unwise to repeat this much of the transformation sequence. If we changed how the camera operates, or switched from a perspective projection to an orthographic one, it would require many code changes. I also don’t like reading code from the bottom up: code doesn’t execute that way, and I want to read from top to bottom.

When reading the transformation sequences in the previous demos from the top down, the transformation at the top is applied first and the transformation at the bottom is applied last, with the intermediate results method-chained together (look above for a reminder).

With a function stack, the function at the top of the stack (f5) is applied first; its result is then given as input to f4 (second on the stack), and so on down to f1, which was the first function placed on the stack and is therefore the last to be applied (Last In, First Applied: LIFA).

             |-------------------|
(MODELSPACE) |                   |
  (x,y,z)->  |       f5          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f4          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f3          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f2          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f1          |-->  (x,y,z) NDC
             |-------------------|

So, in order to ensure that the functions in a stack will execute in the same order as all of the previous demos, they need to be pushed onto the stack in reverse order.

This means that from model space to world space, we can now read the transformations FROM TOP TO BOTTOM!!!! SUCCESS!

Then, to draw the square relative to paddle 1, those six transformations will already be on the stack, so we only push the differences (the square’s transformations relative to paddle 1), and then apply the stack to the square’s model-space data.
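Before looking at the real code, here is a small self-contained sketch of that pattern. The numbers stand in for vertices and the lambdas stand in for the real transformation functions; none of these names appear in demo 18.

from typing import Callable, List

stack: List[Callable[[float], float]] = []

def apply_stack(x: float) -> float:
    # the top of the stack is applied first, the bottom last (LIFA)
    for fn in reversed(stack):
        x = fn(x)
    return x

stack.append(lambda x: x * 10)  # pretend: camera space to NDC, pushed first, applied last
stack.append(lambda x: x + 1)   # pretend: world space to camera space
stack.append(lambda x: x + 2)   # pretend: paddle 1 space to world space
print(apply_stack(5))           # a "vertex" of paddle 1: ((5 + 2) + 1) * 10 = 80

stack.append(lambda x: x + 3)   # pretend: square space to paddle 1 space (the only difference)
print(apply_stack(5))           # a "vertex" of the square: (((5 + 3) + 2) + 1) * 10 = 110
stack.pop()                     # restore the stack to the state paddle 1's node was given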

How to Execute

On Linux or on MacOS, in a shell, type “python src/demo18/demo.py”. On Windows, in a command prompt, type “python src\demo18\demo.py”.

Move the Paddles using the Keyboard

Keyboard Input   Action
w                Move Left Paddle Up
s                Move Left Paddle Down
k                Move Right Paddle Down
i                Move Right Paddle Up
d                Increase Left Paddle’s Rotation
a                Decrease Left Paddle’s Rotation
l                Increase Right Paddle’s Rotation
j                Decrease Right Paddle’s Rotation
UP               Move the camera up, moving the objects down
DOWN             Move the camera down, moving the objects up
LEFT             Move the camera left, moving the objects right
RIGHT            Move the camera right, moving the objects left
q                Rotate the square around its center
e                Rotate the square around paddle 1’s center

Description

Function stack: internally it holds a list, where index 0 is the bottom of the stack. In Python, functions can be stored in variables like any other object, and we will be storing functions that transform one Vertex into another, applied through the “modelspace_to_ndc” method.

src/demo18/demo.py
@dataclass
class FunctionStack:
    stack: List[Callable[[Vertex], Vertex]] = field(default_factory=lambda: [])

    def push(self, o: object):
        self.stack.append(o)

    def pop(self):
        return self.stack.pop()

    def clear(self):
        self.stack.clear()

    def modelspace_to_ndc(self, vertex: Vertex) -> Vertex:
        v = vertex
        for fn in reversed(self.stack):
            v = fn(v)
        return v


fn_stack = FunctionStack()

There is an example at the bottom of src/demo18/demo.py

src/demo18/demo.py
def identity(x):
    return x


def add_one(x):
    return x + 1


def multiply_by_2(x):
    return x * 2


def add_5(x):
    return x + 5

Define four functions, which we will compose on the stack.

Push identity onto the stack; we will never pop it off of the stack.

src/demo18/demo.py

fn_stack.push(identity)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x = 1

src/demo18/demo.py

fn_stack.push(add_one)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x + 1 = 2

src/demo18/demo.py

fn_stack.push(multiply_by_2)  # (x * 2) + 1 = 3
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))

src/demo18/demo.py

fn_stack.push(add_5)  # ((x + 5) * 2) + 1 = 13
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))

src/demo18/demo.py

fn_stack.pop()
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # (x * 2) + 1 = 3

src/demo18/demo.py

fn_stack.pop()
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x + 1 = 2

src/demo18/demo.py

fn_stack.pop()
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x = 1
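At its fullest (right after add_5 was pushed), the stack is equivalent to this nested call, with the bottom of the stack as the outermost, last-applied function:

assert identity(add_one(multiply_by_2(add_5(1)))) == 13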

Event Loop

src/demo18/demo.py
while not glfw.window_should_close(window):
    ...

In previous demos, camera_space_to_ndc_space_fn was always the last function called in the method-chained pipeline. Put it on the bottom of the stack by pushing it first, so that “modelspace_to_ndc” calls this function last. Each subsequent push will add a new function to the top of the stack.

\vec{f}_{c}^{ndc}

src/demo18/demo.py
    fn_stack.push(lambda v: v.camera_space_to_ndc_space_fn())  # (1)

Unlike previous demos, in which we read the transformations from model space to world space backwards, this time, because the transformations are on a stack, the functions on the stack can be read forwards, where each operation translates, rotates, or scales the current space.

The camera’s position and orientation are defined relative to world space like so, read top to bottom:

\vec{f}_{c}^{w}

src/demo18/demo.py
    # fn_stack.push(
    #     lambda v: v.translate(camera.position_worldspace)
    # )
    # fn_stack.push(lambda v: v.rotate_y(camera.rot_y))
    # fn_stack.push(lambda v: v.rotate_x(camera.rot_x))

But since we need to transform from world space to camera space, these transformations must be inverted: reverse their order and negate their arguments.

Therefore, the transformations that put world space into camera space are:

\vec{f}_{w}^{c}

src/demo18/demo.py
    fn_stack.push(lambda v: v.rotate_x(-camera.rot_x))  # (2)
    fn_stack.push(lambda v: v.rotate_y(-camera.rot_y))  # (3)
    fn_stack.push(lambda v: v.translate(-camera.position_worldspace))  # (4)
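As a sanity check (a sketch, not code from demo 18), composing the camera-placement transformations above with these inverted ones gives back the original vertex, up to floating-point error:

v = Vertex(x=1.0, y=2.0, z=3.0)
placed_in_world = v.rotate_x(camera.rot_x) \
                   .rotate_y(camera.rot_y) \
                   .translate(camera.position_worldspace)
back_to_start = placed_in_world.translate(-camera.position_worldspace) \
                               .rotate_y(-camera.rot_y) \
                               .rotate_x(-camera.rot_x)
# back_to_start has (approximately) the same x, y, z as v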

draw paddle 1

As with the camera, because the transformations are on a stack, the functions from paddle 1’s model space to world space can be read forwards, where each operation translates, rotates, or scales the current space.

\vec{f}_{p1}^{w}

src/demo18/demo.py
    fn_stack.push(lambda v: v.translate(paddle1.position))  # (5) translate the local origin
    fn_stack.push(lambda v: v.rotate_z(paddle1.rotation))  # (6) rotate around the local z axis

For each of the model-space coordinates, apply all of the functions on the stack, from top to bottom. This results in coordinates in NDC space, which we can pass to glVertex3f.

src/demo18/demo.py
    glColor3f(paddle1.r, paddle1.g, paddle1.b)

    glBegin(GL_QUADS)
    for paddle1_vertex_in_model_space in paddle1.vertices:
        paddle1_vertex_in_ndc_space = fn_stack.modelspace_to_ndc(paddle1_vertex_in_model_space)
        glVertex3f(
            paddle1_vertex_in_ndc_space.x,
            paddle1_vertex_in_ndc_space.y,
            paddle1_vertex_in_ndc_space.z,
        )
    glEnd()

draw the square

Since the function stack already transforms paddle 1’s space to NDC, and since the blue square is defined relative to paddle 1, just add the square’s transformations relative to paddle 1 before the blue square is drawn. Draw the square, and then remove these four transformations from the stack (done below).

\vec{f}_{s}^{p1}

src/demo18/demo.py
    glColor3f(0.0, 0.0, 1.0)

    fn_stack.push(lambda v: v.translate(Vertex(x=0.0, y=0.0, z=-1.0)))  # (7)
    fn_stack.push(lambda v: v.rotate_z(rotation_around_paddle1))  # (8)
    fn_stack.push(lambda v: v.translate(Vertex(x=2.0, y=0.0, z=0.0)))  # (9)
    fn_stack.push(lambda v: v.rotate_z(square_rotation))  # (10)

    glBegin(GL_QUADS)
    for model_space in square:
        ndc_space = fn_stack.modelspace_to_ndc(model_space)
        glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
    glEnd()

Now we need to remove functions from the stack so that the lambda stack again converts from world space to NDC. This will allow us to add only the transformations from world space to paddle 2’s space onto the stack.

src/demo18/demo.py
    fn_stack.pop()  # pop off (10)
    fn_stack.pop()  # pop off (9)
    fn_stack.pop()  # pop off (8)
    fn_stack.pop()  # pop off (7)
    fn_stack.pop()  # pop off (6)
    fn_stack.pop()  # pop off (5)

Since paddle 2’s model space is independent of paddle 1’s space, only the view and projection functions, (1) through (4), are left on the stack.
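Matching every push with a pop by hand is error-prone. One possible convenience, not used in this demo, is a small context manager that pops automatically when a drawing block ends; “pushed” here is a hypothetical helper, sketched against the FunctionStack above:

from contextlib import contextmanager

@contextmanager
def pushed(stack: FunctionStack, *fns):
    # push a group of functions, and guarantee they are popped when
    # the with-block ends, even if drawing raises an exception
    for fn in fns:
        stack.push(fn)
    try:
        yield
    finally:
        for _ in fns:
            stack.pop()

# usage sketch:
# with pushed(fn_stack,
#             lambda v: v.translate(paddle1.position),
#             lambda v: v.rotate_z(paddle1.rotation)):
#     ...  # draw paddle 1 and anything defined relative to it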

draw paddle 2

\vec{f}_{p2}^{w}

Paddle 2 is drawn just like paddle 1: push its model-space-to-world-space transformations onto the stack, apply the stack to its vertices, and draw.
src/demo18/demo.py
    glColor3f(paddle2.r, paddle2.g, paddle2.b)

    fn_stack.push(lambda v: v.translate(paddle2.position))  # translate to paddle 2's position
    fn_stack.push(lambda v: v.rotate_z(paddle2.rotation))  # rotate around paddle 2's local z axis

    glBegin(GL_QUADS)
    for paddle2_vertex_in_model_space in paddle2.vertices:
        paddle2_vertex_in_ndc_space = fn_stack.modelspace_to_ndc(paddle2_vertex_in_model_space)
        glVertex3f(
            paddle2_vertex_in_ndc_space.x,
            paddle2_vertex_in_ndc_space.y,
            paddle2_vertex_in_ndc_space.z,
        )
    glEnd()

Remove all functions from the function stack, as the next frame will set them again. clear empties the list, since the stack will be repopulated on the next iteration of the event loop.

src/demo18/demo.py
    fn_stack.clear()  # done rendering everything, just go ahead and clean 1-6 off of the stack

Swap buffers and execute another iteration of the event loop

src/demo18/demo.py
    glfw.swap_buffers(window)

Notice that in the above code, adding functions to the stack creates a shared context for transformations, and before we call “glVertex3f” we always call “modelspace_to_ndc” on the model-space vertex. In Demo 19, we will be using OpenGL 2.1 matrix stacks. Although we don’t have the code for the OpenGL driver, given that we pass model-space data directly to “glVertex3f”, it should be clear that the OpenGL implementation must fetch the model-space-to-NDC transformations from the ModelView and Projection matrix stacks.
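For comparison, here is a rough sketch of the kind of OpenGL 2.1 fixed-function calls that take over the role of the lambda stack in Demo 19 (glRotatef expects degrees, so the radian angles used in these demos would need converting; the details here are only illustrative):

glMatrixMode(GL_MODELVIEW)
glPushMatrix()                  # save the parent's transformation
glTranslatef(paddle1.position.x, paddle1.position.y, paddle1.position.z)
glRotatef(math.degrees(paddle1.rotation), 0.0, 0.0, 1.0)
# ... pass model-space data directly to glVertex3f ...
glPopMatrix()                   # restore the parent's transformation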