Lambda Stack - Demo 18

Purpose

Remove repetition in the coordinate transformations, as previous demos had very similar transformations, especially from camera space to NDC space. Each edge of the graph of objects should only be specified once per frame.

(Figure: Demo 12, full Cayley graph of the coordinate-space transformations.)

Noticing in the previous demos that the lower parts of the transformations have a common pattern, we can create a stack of functions for later application. Before drawing geometry, we add any functions to the top of the stack, apply all of our functions in the stack to our modelspace data to get NDC data, and before we return to the parent node, we pop the functions we added off of the stack, to ensure that we return the stack to the state that the parent node gave us.
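The push/draw/pop discipline described above can be sketched with plain Python, independent of the demo's types; Node, apply_stack, and draw_node are hypothetical names, and plain numbers stand in for vertices:

```python
# A minimal, self-contained sketch of the pattern described above; Node,
# apply_stack, and draw_node are hypothetical names, not the demo's API.
class Node:
    def __init__(self, transforms, children=()):
        self.transforms = transforms      # this node's edge of the graph
        self.children = list(children)


def apply_stack(stack, v):
    # the top of the stack (last pushed) is applied first
    for fn in reversed(stack):
        v = fn(v)
    return v


def draw_node(stack, node, drawn):
    for fn in node.transforms:            # push this node's functions
        stack.append(fn)
    drawn.append(apply_stack(stack, 1))   # "draw": apply the whole stack to a vertex
    for child in node.children:
        draw_node(stack, child, drawn)    # children reuse the shared lower stack
    for _ in node.transforms:
        stack.pop()                       # pop, restoring the parent's stack


# usage: a root node with one child
child = Node([lambda x: x + 1])
root = Node([lambda x: x * 10], [child])
drawn = []
draw_node([], root, drawn)   # drawn becomes [10, 20]
```

Each node pushes only its own edge of the graph, and the pops guarantee the parent's stack is unchanged when the recursion returns.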

To explain in more detail —

What’s the difference between drawing paddle 1 and the square?

Here is the code for paddle 1:

    # draw paddle 1
    glColor3f(paddle1.r, paddle1.g, paddle1.b)

    glBegin(GL_QUADS)
    for model_space in paddle1.vertices:
        world_space: Vertex = model_space.rotate_z(paddle1.rotation) \
                                         .translate(tx=paddle1.position.x,
                                                    ty=paddle1.position.y,
                                                    tz=0.0)
        # world_space: Vertex = camera_space.rotate_x(camera.rot_x) \
        #                                   .rotate_y(camera.rot_y) \
        #                                   .translate(tx=camera.position_worldspace.x,
        #                                              ty=camera.position_worldspace.y,
        #                                              tz=camera.position_worldspace.z)
        camera_space: Vertex = world_space.translate(tx=-camera.position_worldspace.x,
                                                     ty=-camera.position_worldspace.y,
                                                     tz=-camera.position_worldspace.z) \
                                          .rotate_y(-camera.rot_y) \
                                          .rotate_x(-camera.rot_x)
        ndc_space: Vertex = camera_space.camera_space_to_ndc_space_fn()
        glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
    glEnd()

Here is the square’s code:

    glColor3f(0.0, 0.0, 1.0)
    glBegin(GL_QUADS)
    for model_space in square:
        paddle_1_space: Vertex = model_space.rotate_z(square_rotation) \
                                            .translate(tx=20.0,
                                                       ty=0.0,
                                                       tz=0.0) \
                                            .rotate_z(rotation_around_paddle1) \
                                            .translate(tx=0.0,
                                                       ty=0.0,
                                                       tz=-10.0)
        world_space: Vertex = paddle_1_space.rotate_z(paddle1.rotation) \
                                            .translate(tx=paddle1.position.x,
                                                       ty=paddle1.position.y,
                                                       tz=0.0)
        camera_space: Vertex = world_space.translate(tx=-camera.position_worldspace.x,
                                                     ty=-camera.position_worldspace.y,
                                                     tz=-camera.position_worldspace.z) \
                                          .rotate_y(-camera.rot_y) \
                                          .rotate_x(-camera.rot_x)
        ndc_space: Vertex = camera_space.camera_space_to_ndc_space_fn()
        glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
    glEnd()

The only difference is the square's model-space to paddle-1-space transformation. Everything else is exactly the same. Because the scene in a graphics program is a hierarchy of relative objects, it's unwise to put this much repetition in the transformation sequence: if we changed how the camera operates, or switched from a perspective to an orthographic projection, it would require many code changes. And I don't like reading from the bottom of the code up. Code doesn't execute that way. I want to read from top to bottom.

When reading the transformation sequences in the previous demos from the top down, the transformation at the top is applied first and the transformation at the bottom is applied last, with the intermediate results method-chained together (look above for a reminder).

With a function stack, the function at the top of the stack (f5) is applied first; its result is then given as input to f4 (second on the stack), and so on down to f1, which was the first function placed on the stack and is therefore the last to be applied (Last In, First Applied - LIFA).

             |-------------------|
(MODELSPACE) |                   |
  (x,y,z)->  |       f5          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f4          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f3          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f2          |--
             |-------------------| |
                                   |
          -------------------------
          |
          |  |-------------------|
          |  |                   |
           ->|       f1          |-->  (x,y,z) NDC
             |-------------------|

So, in order to ensure that the functions in a stack will execute in the same order as all of the previous demos, they need to be pushed onto the stack in reverse order.
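The reverse-order push can be checked with a tiny sketch using plain numbers; f_first and f_last are hypothetical stand-ins for the transformation functions:

```python
# Pushing in reverse order reproduces a method chain; f_first and f_last are
# hypothetical stand-ins for the demo's rotate/translate functions.
def f_first(x):
    return x + 1      # applied first in the chained pipeline


def f_last(x):
    return x * 10     # applied last in the chained pipeline


chained = f_last(f_first(3))  # (3 + 1) * 10 = 40

stack = []
stack.append(f_last)   # pushed first -> bottom of the stack -> applied last
stack.append(f_first)  # pushed last -> top of the stack -> applied first

v = 3
for fn in reversed(stack):  # same application order as modelspace_to_ndc
    v = fn(v)
assert v == chained == 40
```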

This means that from modelspace to world space, we can now read the transformations FROM TOP TO BOTTOM!!!! SUCCESS!

Then, to draw the square relative to paddle 1, those six transformations will already be on the stack, so we only push the differences, and then apply the stack to the square's modelspace data.

How to Execute

On Linux or on macOS, in a shell, type "python src/demo18/demo.py". On Windows, in a command prompt, type "python src\demo18\demo.py".

Move the Paddles using the Keyboard

Keyboard Input    Action
--------------    ------
w                 Move Left Paddle Up
s                 Move Left Paddle Down
k                 Move Right Paddle Down
i                 Move Right Paddle Up
d                 Increase Left Paddle's Rotation
a                 Decrease Left Paddle's Rotation
l                 Increase Right Paddle's Rotation
j                 Decrease Right Paddle's Rotation
UP                Move the camera up, moving the objects down
DOWN              Move the camera down, moving the objects up
LEFT              Move the camera left, moving the objects right
RIGHT             Move the camera right, moving the objects left
q                 Rotate the square around its center
e                 Rotate the square around paddle 1's center

Description

Function stack. Internally it has a list, where index 0 is the bottom of the stack. In Python we can store any object in a variable, and here we will be storing functions which transform a vertex to another vertex, applied through the "modelspace_to_ndc" method.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class FunctionStack:
    stack: List[Callable[[Vertex], Vertex]] = field(default_factory=list)

    def push(self, fn: Callable[[Vertex], Vertex]) -> None:
        self.stack.append(fn)

    def pop(self) -> Callable[[Vertex], Vertex]:
        return self.stack.pop()

    def clear(self) -> None:
        self.stack.clear()

    def modelspace_to_ndc(self, vertex: Vertex) -> Vertex:
        v = vertex
        # apply the functions from the top of the stack (last pushed) down
        for fn in reversed(self.stack):
            v = fn(v)
        return v


fn_stack = FunctionStack()

There is an example at the bottom of src/demo18/demo.py:

def identity(x):
    return x


def add_one(x):
    return x + 1


def multiply_by_2(x):
    return x * 2


def add_5(x):
    return x + 5


Define four functions, which we will compose on the stack.

Push identity onto the stack; we will never pop it off of the stack.

fn_stack.push(identity)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x = 1
fn_stack.push(add_one)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x + 1 = 2
fn_stack.push(multiply_by_2)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # (x * 2) + 1 = 3
fn_stack.push(add_5)
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # ((x + 5) * 2) + 1 = 13
fn_stack.pop()
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # (x * 2) + 1 = 3
fn_stack.pop()
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x + 1 = 2
fn_stack.pop()
print(fn_stack)
print(fn_stack.modelspace_to_ndc(1))  # x = 1

Event Loop

    while not glfw.window_should_close(window):
        ...

In previous demos, camera_space_to_ndc_space_fn was always the last function called in the method-chained pipeline. Put it on the bottom of the stack by pushing it first, so that "modelspace_to_ndc" calls it last. Each subsequent push will add a new function to the top of the stack.

\vec{f}_{c}^{ndc}

    fn_stack.push(lambda v: v.camera_space_to_ndc_space_fn())  # (1)

Unlike in previous demos, in which we read the transformations from model space to world space backwards, this time, because the transformations are on a stack, the functions on the stack can be read forwards, where each operation translates/rotates/scales the current space.

The camera’s position and orientation are defined relative to world space like so, read top to bottom:

\vec{f}_{c}^{w}

    # fn_stack.push(
    #     lambda v: v.translate(tx=camera.position_worldspace.x,
    #                           ty=camera.position_worldspace.y,
    #                           tz=camera.position_worldspace.z)
    # )
    # fn_stack.push(lambda v: v.rotate_y(camera.rot_y))
    # fn_stack.push(lambda v: v.rotate_x(camera.rot_x))

But, since we need to transform world space to camera space, these must be inverted, by reversing the order and negating the arguments.

Therefore the transformations to put world space into camera space are:

\vec{f}_{w}^{c}

    fn_stack.push(lambda v: v.rotate_x(-camera.rot_x))  # (2)
    fn_stack.push(lambda v: v.rotate_y(-camera.rot_y))  # (3)
    fn_stack.push(lambda v: v.translate(tx=-camera.position_worldspace.x,
                                        ty=-camera.position_worldspace.y,
                                        tz=-camera.position_worldspace.z))  # (4)
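As a sanity check on "reverse the order, negate the arguments", here is a small sketch with hypothetical 2D rotate and translate helpers (not the demo's Vertex methods): applying the placement transforms and then the negated, reversed ones returns the original point.

```python
import math

# Hypothetical 2D stand-ins for the book's rotate/translate, used only to
# check that reversing the order and negating the arguments inverts a
# placement transform.
def rotate(p, angle):
    x, y = p
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))


def translate(p, tx, ty):
    return (p[0] + tx, p[1] + ty)


p = (3.0, 4.0)
# forward: rotate, then translate (placing the camera in the world)
placed = translate(rotate(p, 0.5), 2.0, -1.0)
# inverse: translate by the negated amounts, then rotate by the negated angle
restored = rotate(translate(placed, -2.0, 1.0), -0.5)
assert abs(restored[0] - p[0]) < 1e-9
assert abs(restored[1] - p[1]) < 1e-9
```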

draw paddle 1

Unlike in previous demos, in which we read the transformations from model space to world space backwards, because the transformations are on a stack, the functions on the stack can be read forwards, where each operation translates/rotates/scales the current space.

\vec{f}_{p1}^{w}

    fn_stack.push(lambda v: v.translate(tx=paddle1.position.x,
                                        ty=paddle1.position.y,
                                        tz=0.0))  # (5) translate the local origin
    fn_stack.push(lambda v: v.rotate_z(paddle1.rotation))  # (6) rotate around the local z axis

For each of the modelspace coordinates, apply all of the procedures on the stack from top to bottom. This results in coordinate data in NDC space, which we can pass to glVertex3f.

    glColor3f(paddle1.r, paddle1.g, paddle1.b)

    glBegin(GL_QUADS)
    for model_space in paddle1.vertices:
        ndc_space = fn_stack.modelspace_to_ndc(model_space)
        glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
    glEnd()

draw the square

Since the function stack is already in paddle 1's space, and since the blue square is defined relative to paddle 1, just add the transformations relative to it before the blue square is drawn. Draw the square, and then remove these four transformations from the stack (done below).

\vec{f}_{s}^{p1}

    glColor3f(0.0, 0.0, 1.0)

    fn_stack.push(lambda v: v.translate(tx=0.0, ty=0.0, tz=-10.0))  # (7)
    fn_stack.push(lambda v: v.rotate_z(rotation_around_paddle1))  # (8)
    fn_stack.push(lambda v: v.translate(tx=20.0, ty=0.0, tz=0.0))  # (9)
    fn_stack.push(lambda v: v.rotate_z(square_rotation))  # (10)

    glBegin(GL_QUADS)
    for model_space in square:
        ndc_space = fn_stack.modelspace_to_ndc(model_space)
        glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
    glEnd()

Now we need to remove functions from the stack so that the lambda stack will convert from world space to NDC. This will allow us to just add the transformations from world space to paddle 2 space onto the stack.

    fn_stack.pop()  # pop off (10)
    fn_stack.pop()  # pop off (9)
    fn_stack.pop()  # pop off (8)
    fn_stack.pop()  # pop off (7)
    fn_stack.pop()  # pop off (6)
    fn_stack.pop()  # pop off (5)

Since paddle 2's modelspace is independent of paddle 1's space, leave only the view and projection functions, (1) - (4), on the stack.

draw paddle 2

\vec{f}_{p2}^{w}

    fn_stack.push(lambda v: v.translate(tx=paddle2.position.x,
                                        ty=paddle2.position.y,
                                        tz=0.0))  # (5)
    fn_stack.push(lambda v: v.rotate_z(paddle2.rotation))  # (6)

    glColor3f(paddle2.r, paddle2.g, paddle2.b)

    glBegin(GL_QUADS)
    for model_space in paddle2.vertices:
        ndc_space: Vertex = fn_stack.modelspace_to_ndc(model_space)
        glVertex3f(ndc_space.x, ndc_space.y, ndc_space.z)
    glEnd()

Remove all functions from the function stack, as the next frame will set them again. clear empties the list; the stack will be repopulated on the next iteration of the event loop.

    fn_stack.clear()  # done rendering everything, just go ahead and clean 1-6 off of the stack

Swap buffers and execute another iteration of the event loop

    glfw.swap_buffers(window)

Notice in the above code that adding functions to the stack creates a shared context for transformations, and before we call "glVertex3f", we always call "modelspace_to_ndc" on the modelspace vertex. In Demo 19, we will use OpenGL 2.1 matrix stacks. Although we don't have the code for the OpenGL driver, given that we pass modelspace data directly to "glVertex3f", it should be clear that the OpenGL implementation must fetch the modelspace-to-NDC transformations from the ModelView and Projection matrix stacks.