Lambda Stack - Demo 18

Purpose

Remove repetition in the coordinate transformations. The previous demos used very similar transformation sequences, especially from camera space to NDC space. Each edge of the graph of objects should be specified only once per frame.

Figure: the full Cayley graph of the scene (introduced in Demo 12).

Noticing in the previous demos that the lower parts of the transformation sequences share a common pattern, we can create a stack of functions for later application. Before drawing geometry, we push any needed functions onto the top of the stack, then apply every function in the stack to our model-space data to get NDC data. Before returning to the parent node, we pop the functions we added off of the stack, so that the stack is returned to the state the parent node gave us.
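As a sketch of that idea (the Node class and draw_vertex callback are hypothetical names, not from the demo; the demo below does this "by hand" rather than with a scene-graph class, and fn_stack is the FunctionStack defined later in this demo), a recursive traversal might look like:

# Minimal sketch of the push / draw / recurse / pop pattern described above.
from dataclasses import dataclass, field
from typing import Any, Callable, List


@dataclass
class Node:
    transformations: List[Callable]          # this node's edge of the graph, read top to bottom
    vertices: List[Any] = field(default_factory=list)
    children: List["Node"] = field(default_factory=list)


def render(node: Node, fn_stack, draw_vertex: Callable) -> None:
    for fn in node.transformations:          # add this node's functions to the top of the stack
        fn_stack.push(fn)
    for vertex in node.vertices:             # model space -> NDC using everything on the stack
        draw_vertex(fn_stack.modelspace_to_ndc(vertex))
    for child in node.children:              # children reuse everything already pushed
        render(child, fn_stack, draw_vertex)
    for _ in node.transformations:           # restore the stack to the parent's state
        fn_stack.pop()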

To explain in more detail —

What’s the difference between drawing paddle 1 and the square?

Here is the code for paddle 1:

src/demo17/demo.py
# draw paddle 1
glColor3f(paddle1.r, paddle1.g, paddle1.b)

glBegin(GL_QUADS)
for paddle1_vertex_in_model_space in paddle1.vertices:
    paddle1_vertex_in_world_space: Vertex = paddle1_vertex_in_model_space.rotate_z(paddle1.rotation) \
                                                                         .translate(paddle1.position)
    # paddle1_vertex_in_world_space: Vertex = paddle1_vertex_in_camera_space.rotate_x(camera.rot_x) \
    #                                                                       .rotate_y(camera.rot_y) \
    #                                                                       .translate(camera.position_worldspace)
    paddle1_vertex_in_camera_space: Vertex = paddle1_vertex_in_world_space.translate(-camera.position_worldspace) \
                                                                          .rotate_y(-camera.rot_y) \
                                                                          .rotate_x(-camera.rot_x)
    paddle1_vertex_in_ndc_space: Vertex = paddle1_vertex_in_camera_space.camera_space_to_ndc_space_fn()
    glVertex3f(paddle1_vertex_in_ndc_space.x, paddle1_vertex_in_ndc_space.y, paddle1_vertex_in_ndc_space.z)
glEnd()

Here is the square’s code:

src/demo17/demo.py
glColor3f(0.0, 0.0, 1.0)
glBegin(GL_QUADS)
for model_space in square:
    paddle_1_space: Vertex = model_space.rotate_z(square_rotation) \
                                        .translate(Vertex(x=2.0,
                                                          y=0.0,
                                                          z=0.0)) \
                                        .rotate_z(rotation_around_paddle1) \
                                        .translate(Vertex(x=0.0,
                                                          y=0.0,
                                                          z=-1.0))
    world_space: Vertex = paddle_1_space.rotate_z(paddle1.rotation) \
                                        .translate(paddle1.position)
    camera_space: Vertex = world_space.translate(-camera.position_worldspace) \
                                      .rotate_y(-camera.rot_y) \
                                      .rotate_x(-camera.rot_x)
    ndc_space: Vertex = camera_space.camera_space_to_ndc_space_fn()
    glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
glEnd()

The only difference is the square’s transformation from model space to paddle 1 space. Everything else is exactly the same. In a graphics program, because the scene is a hierarchy of relative objects, it is unwise to have this much repetition in the transformation sequences, especially if we might change how the camera operates, or switch from a perspective projection to an orthographic one; that would require a lot of code changes. I also don’t like reading code from the bottom up. Code doesn’t execute that way; I want to read it from top to bottom.

When reading the transformation sequences in the previous demos from the top down, the transformation at the top is applied first and the transformation at the bottom is applied last, with the intermediate results method-chained together. (Look above for a reminder.)

With a function stack, the function at the top of the stack (f5) is applied first; its result is then given as input to f4 (second on the stack), and so on down to f1, which was the first function to be placed on the stack and, as such, is the last to be applied. (Last In, First Applied - LIFA)

             |-------------------|
(MODELSPACE) |                   |
  (x,y,z)->  |       f5          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f4          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f3          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f2          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f1          |-->  (x,y,z) NDC
             |-------------------|

So, in order to ensure that the functions in a stack will execute in the same order as all of the previous demos, they need to be pushed onto the stack in reverse order.

This means that from model space to world space, we can now read the transformations FROM TOP TO BOTTOM!!!! SUCCESS!

Then, to draw the square relative to paddle 1, those six transformations will already be on the stack, so we only push the differences, and then apply the stack to the square’s model-space data.

How to Execute

On Linux or on MacOS, in a shell, type “python src/demo18/demo.py”. On Windows, in a command prompt, type “python src\demo18\demo.py”.

Move the Paddles using the Keyboard

Keyboard Input      Action
--------------      ------
w                   Move Left Paddle Up
s                   Move Left Paddle Down
k                   Move Right Paddle Down
i                   Move Right Paddle Up
d                   Increase Left Paddle’s Rotation
a                   Decrease Left Paddle’s Rotation
l                   Increase Right Paddle’s Rotation
j                   Decrease Right Paddle’s Rotation
UP                  Move the camera up, moving the objects down
DOWN                Move the camera down, moving the objects up
LEFT                Move the camera left, moving the objects right
RIGHT               Move the camera right, moving the objects left
q                   Rotate the square around its center
e                   Rotate the square around paddle 1’s center

Description

Function stack. Internally it has a list, where index 0 is the bottom of the stack. In Python, functions are objects that can be stored in variables and in lists, and we will be storing functions that transform one Vertex into another Vertex, applying them through the “modelspace_to_ndc” method.
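For instance (a toy example, not from the demo), functions can be stored in a list and called later:

# Functions are values in Python; they can be stored in variables or lists
# and called later.
def add_ten(x):
    return x + 10

fns = [add_ten, lambda x: x * 3]
result = fns[1](fns[0](2))  # apply add_ten first, then multiply by 3 -> 36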

src/demo18/demo.py
@dataclass
class FunctionStack:
    stack: List[Callable[[Vertex], Vertex]] = field(default_factory=lambda: [])

    def push(self, o: object):
        self.stack.append(o)

    def pop(self):
        return self.stack.pop()

    def clear(self):
        self.stack.clear()

    def modelspace_to_ndc(self, vertex: Vertex) -> Vertex:
        v = vertex
        for fn in reversed(self.stack):
            v = fn(v)
        return v


fn_stack = FunctionStack()
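As an aside, “modelspace_to_ndc” is just right-to-left function composition over the list. An equivalent formulation (a sketch, not how the demo writes it) uses functools.reduce over the reversed stack:

import functools
from typing import Callable, List


def modelspace_to_ndc_via_reduce(stack: List[Callable], vertex):
    # Fold the vertex through the functions, starting from the top of the stack
    # (the end of the list) down to the bottom (index 0).
    return functools.reduce(lambda v, fn: fn(v), reversed(stack), vertex)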

There is an example at the bottom of src/demo18/demo.py

src/demo18/demo.py
def identity(x):
    return x


def add_one(x):
    return x + 1


def multiply_by_2(x):
    return x * 2


def add_5(x):
    return x + 5

Define four functions, which we will compose on the stack.

Push identity onto the stack; we will never pop it off of the stack.

src/demo18/demo.py
fn_stack.push(identity)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x = 1
src/demo18/demo.py
fn_stack.push(add_one)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x + 1 = 2
src/demo18/demo.py
fn_stack.push(multiply_by_2)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # (x * 2) + 1 = 3
src/demo18/demo.py
fn_stack.push(add_5)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # ((x + 5) * 2) + 1 = 13
src/demo18/demo.py
fn_stack.pop()
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # (x * 2) + 1 = 3
src/demo18/demo.py
fn_stack.pop()
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x + 1 = 2
src/demo18/demo.py
fn_stack.pop()
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x = 1
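Written out as a single expression, the fully loaded stack (right after add_5 was pushed) computes the nested composition below, which is why the printed value for x = 1 was 13:

# Top of the stack is applied first: add_5, then multiply_by_2, then add_one,
# then identity, i.e. ((x + 5) * 2) + 1.
x = 1
assert identity(add_one(multiply_by_2(add_5(x)))) == ((x + 5) * 2) + 1 == 13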

Event Loop

src/demo18/demo.py
while not glfw.window_should_close(window):
...

In previous demos, camera_space_to_ndc_space_fn was always the last function called in the method-chained pipeline. Put it on the bottom of the stack by pushing it first, so that “modelspace_to_ndc” calls this function last. Each subsequent push will add a new function to the top of the stack.

\vec{f}_{c}^{ndc}

src/demo18/demo.py
fn_stack.push(lambda v: v.camera_space_to_ndc_space_fn())  # (1)

Unlike in previous demos, in which we read the model-space to world-space transformations backwards, this time, because the transformations are on a stack, the functions on the model stack can be read forwards, where each operation translates/rotates/scales the current space.

The camera’s position and orientation are defined relative to world space like so, read top to bottom:

\vec{f}_{c}^{w}

src/demo18/demo.py
# fn_stack.push(lambda v: v.translate(camera.position_worldspace))
# fn_stack.push(lambda v: v.rotate_y(camera.rot_y))
# fn_stack.push(lambda v: v.rotate_x(camera.rot_x))

But since we need to transform world space into camera space, these transformations must be inverted: reverse the order and negate the arguments.

Therefore, the transformations that put world space into camera space are:

\vec{f}_{w}^{c}

src/demo18/demo.py
fn_stack.push(lambda v: v.rotate_x(-camera.rot_x))  # (2)
fn_stack.push(lambda v: v.rotate_y(-camera.rot_y))  # (3)
fn_stack.push(lambda v: v.translate(-camera.position_worldspace))  # (4)
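To see that reversing the order and negating the arguments really does invert the camera’s placement, here is a small standalone check (not part of the demo; it assumes the Vertex class from src/demo18/demo.py, whose rotate_x, rotate_y, translate, and unary negation are used throughout this page, and the import path is an assumption):

import math

from demo import Vertex  # assumed import; Vertex is defined in src/demo18/demo.py

v_in_camera_space = Vertex(x=1.0, y=2.0, z=3.0)
rot_x, rot_y = 0.3, -0.7
eye = Vertex(x=5.0, y=0.0, z=10.0)

# camera space -> world space: the camera's placement, read top to bottom
v_in_world_space = v_in_camera_space.rotate_x(rot_x).rotate_y(rot_y).translate(eye)
# world space -> camera space: reversed order, negated arguments
v_back = v_in_world_space.translate(-eye).rotate_y(-rot_y).rotate_x(-rot_x)

assert math.isclose(v_back.x, v_in_camera_space.x, abs_tol=1e-9)
assert math.isclose(v_back.y, v_in_camera_space.y, abs_tol=1e-9)
assert math.isclose(v_back.z, v_in_camera_space.z, abs_tol=1e-9)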

draw paddle 1

Again, because the transformations are on a stack, the model-space to world-space functions can be read forwards, where each operation translates/rotates/scales the current space.

\vec{f}_{p1}^{w}

src/demo18/demo.py
fn_stack.push(lambda v: v.translate(paddle1.position))  # (5) translate the local origin
fn_stack.push(lambda v: v.rotate_z(paddle1.rotation))  # (6) rotate around the local z axis

For each of the model-space coordinates, apply all of the functions on the stack from top to bottom. This results in coordinate data in NDC space, which we can pass to glVertex3f.

src/demo18/demo.py
glColor3f(paddle1.r, paddle1.g, paddle1.b)

glBegin(GL_QUADS)
for paddle1_vertex_in_model_space in paddle1.vertices:
    paddle1_vertex_in_ndc_space = fn_stack.modelspace_to_ndc(
        paddle1_vertex_in_model_space
    )
    glVertex3f(
        paddle1_vertex_in_ndc_space.x,
        paddle1_vertex_in_ndc_space.y,
        paddle1_vertex_in_ndc_space.z,
    )
glEnd()

draw the square

Since the model stack is already in paddle 1’s space, and since the blue square is defined relative to paddle 1, just add the transformations relative to it before the blue square is drawn. Draw the square, and then remove these four transformations from the stack (done below).

\vec{f}_{s}^{p1}

src/demo18/demo.py
glColor3f(0.0, 0.0, 1.0)

fn_stack.push(lambda v: v.translate(Vertex(x=0.0, y=0.0, z=-1.0)))  # (7)
fn_stack.push(lambda v: v.rotate_z(rotation_around_paddle1))  # (8)
fn_stack.push(lambda v: v.translate(Vertex(x=2.0, y=0.0, z=0.0)))  # (9)
fn_stack.push(lambda v: v.rotate_z(square_rotation))  # (10)

glBegin(GL_QUADS)
for model_space in square:
    ndc_space = fn_stack.modelspace_to_ndc(model_space)
    glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
glEnd()

Now we need to remove functions from the stack so that the lambda stack will convert from world space to NDC. This will allow us to add just the transformations from world space to paddle 2 space onto the stack.

src/demo18/demo.py
fn_stack.pop()  # pop off (10)
fn_stack.pop()  # pop off (9)
fn_stack.pop()  # pop off (8)
fn_stack.pop()  # pop off (7)
fn_stack.pop()  # pop off (6)
fn_stack.pop()  # pop off (5)

Since paddle 2’s model space is independent of paddle 1’s space, only the view and projection functions, (1) through (4), are left on the stack.
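As a hypothetical sanity check (not in the demo), we could assert the stack depth at this point:

# Only the camera-space-to-NDC function (1) and the world-space-to-camera-space
# functions (2)-(4) should remain on the stack.
assert len(fn_stack.stack) == 4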

draw paddle 2

\vec{f}_{p2}^{w}

Following the same pattern as paddle 1, push paddle 2’s model-space to world-space transformations, set its color, and draw its vertices using the stack.

src/demo18/demo.py
fn_stack.push(lambda v: v.translate(paddle2.position))  # translate to paddle 2's position
fn_stack.push(lambda v: v.rotate_z(paddle2.rotation))  # rotate around paddle 2's local z axis

glColor3f(paddle2.r, paddle2.g, paddle2.b)

glBegin(GL_QUADS)
for paddle2_vertex_in_model_space in paddle2.vertices:
    paddle2_vertex_in_ndc_space = fn_stack.modelspace_to_ndc(
        paddle2_vertex_in_model_space
    )
    glVertex3f(
        paddle2_vertex_in_ndc_space.x,
        paddle2_vertex_in_ndc_space.y,
        paddle2_vertex_in_ndc_space.z,
    )
glEnd()

Remove all functions from the function stack, as the next frame will set them again. clear makes the list empty; the list (stack) will be repopulated on the next iteration of the event loop.

src/demo18/demo.py
fn_stack.clear()  # done rendering everything, just go ahead and clean 1-6 off of the stack

Swap buffers and execute another iteration of the event loop

src/demo18/demo.py
glfw.swap_buffers(window)

Notice in the code above that adding functions to the stack creates a shared context for transformations, and that before we call “glVertex3f” we always call “modelspace_to_ndc” on the model-space vertex. In Demo 19, we will be using OpenGL 2.1’s matrix stacks. Although we don’t have the code for the OpenGL driver, you will see that we pass model-space data directly to “glVertex3f”, so it should be clear that the OpenGL implementation must fetch the model-space to NDC transformations from its ModelView and Projection matrix stacks.
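As a rough preview (a sketch under assumptions, not the actual Demo 19 code), the same push/draw/pop pattern expressed with OpenGL 2.1’s built-in matrix stacks might look like the following. It uses the demo’s paddle1 object, and note that glRotatef takes degrees, whereas the demos’ rotate_* methods take radians.

import math

from OpenGL.GL import (GL_MODELVIEW, GL_QUADS, glBegin, glEnd, glMatrixMode,
                       glPopMatrix, glPushMatrix, glRotatef, glTranslatef,
                       glVertex3f)

glMatrixMode(GL_MODELVIEW)
glPushMatrix()  # save the current ModelView matrix
glTranslatef(paddle1.position.x, paddle1.position.y, paddle1.position.z)
glRotatef(math.degrees(paddle1.rotation), 0.0, 0.0, 1.0)  # rotate around the local z axis
glBegin(GL_QUADS)
for v in paddle1.vertices:
    # model-space data goes straight to glVertex3f; OpenGL applies the
    # ModelView and Projection stacks for us
    glVertex3f(v.x, v.y, v.z)
glEnd()
glPopMatrix()  # restore the parent's matrix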