Cycle 3: Puzzlr Final

Introduction

The ending of this project was particularly frustrating because I ended up getting COVID the last week of classes. Yes, if you’re reading this 20 years in the future, COVID was a terrible virus that crippled most of the planet in 2020, and we are now living with it every day. Because of this, I was not able to present my final prototype to the class, so part of this documentation post includes a short video demo of how my game works!

The final project was a great prototype that I am very proud of. After refining some of the cable setups and improving the stability of the Makey-Makey board, I was able to get most of the contacts working. Because of the way I glued and reinforced the wires, some of the contacts cross over, so some activate more than one “keystroke” on the Makey-Makey. Combining this with the digital experience I created in Isadora yielded a pleasing result and an excellent insight into how different microcontroller boards work.

Reflecting on the Build Process

Building and testing Puzzlr was no easy process. It involved a lot of iteration through different laser-cut prototypes and testing scenarios. I learned how to wire up and configure the Makey-Makey within Isadora, and had to design a circuit system that would work with the puzzle’s style.

The first wooden prototype allowed me to test the setup and whether the puzzle would be hard for others to solve. It turned out to be an excellent learning moment: I ended up adjusting the thickness to actually fit the wires, and even learned that I had to flip the puzzle pieces to get the correct engravings.

The second and final prototype involved laser-cutting the puzzle out of white acrylic. Then I sanded everything down, cleaned it up, and put it together with duct tape. I would have glued it together, but I needed constant, easy access to the wire housing to make adjustments and improvements. Another hurdle at this stage was the circuit clips that hold the wires together: they were initially too tall and poked through the holes, so I had to sand them down until they sat flush with the upper level. Using tin foil, foil tape, glue, and duct tape, I was able to put everything together and complete the circuit.

How the Game Works

Here is a picture of the actual schematic I created for the puzzle board. The goal of the game is simple: get the pieces onto the board and into the right spots. Each puzzle piece has a piece of metal foil tape underneath it that corresponds to the black squares on the schematic. When a piece makes contact with the board, it completes a circuit by closing the loop between the ground and input wires. Each contact corresponds to a certain input on the Makey-Makey and, when activated, relays a keystroke to the computer.
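To make the contact-to-keystroke chain concrete, here is a minimal Python sketch of the computer-side logic. This is purely illustrative: the real version is an Isadora patch (listening with Keyboard Watcher actors), the `pynput` listener below is a stand-in, and the letter keys assigned to the six contacts are invented.

```python
# Hypothetical sketch: Makey-Makey contacts arrive as ordinary keystrokes.
import time
from pynput import keyboard

PIECE_KEYS = {'w': 0, 'a': 1, 's': 2, 'd': 3, 'f': 4, 'g': 5}  # assumed mapping
last_seen = {}  # piece index -> last time its contact was closed

def on_press(key):
    try:
        piece = PIECE_KEYS.get(key.char)
    except AttributeError:
        return  # a special key, not one of the board's contacts
    if piece is not None:
        last_seen[piece] = time.time()  # contact closed: circuit complete

def is_placed(piece, grace=3.0):
    # A piece counts as "placed" until a few seconds after its contact
    # last closed, mirroring the delayed fade-out on the game screen.
    return piece in last_seen and time.time() - last_seen[piece] < grace

listener = keyboard.Listener(on_press=on_press)
listener.start()
```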

Starter Screen

Instruction Screen

Game screen. As pieces get placed on the board, they begin appearing here. If a piece is removed from its contact, it disappears a few seconds after the input signal stops.

Win screen

Video Demo

Reflecting on the Project

I really enjoyed this exploratory project because it allowed me to experiment with something I was considering for my thesis. The Makey-Makey system is really unique but has its issues with more robust and complex applications. If I were to change anything about this project, I would have spent some time learning how to solder so the connections could be a little stronger. I will be applying the things I learned during this project when creating my own microcontroller input methods for my thesis in the future!


Cycle 3 documentation – Dynamic Cloth

After Cycle 2, the main things I worked on were constraining the Kinect input numbers to ranges that suited the type of movement I wanted, and polishing the interactions based on that. I made the shapes ‘more sensitive’ in the Z-axis, closer to how they were in Cycle 1, but using the skeleton data instead of just brightness inputs (though I still used brightness for subtle horizontal and vertical movement). I also experimented with placing some pre-recorded animations from Cinema 4D in the background, made the transitions between scenes smoother, and made the materials in 3ds Max less shiny.
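To illustrate what “controlling the numbers in ranges” means in practice, here is a hypothetical Python version of the clamp-and-rescale step (in Isadora this is done with actors such as Limit-Scale Value; all the numbers below are invented for the example):

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Clamp a raw sensor value to [in_min, in_max], then rescale it
    to [out_min, out_max]."""
    value = max(in_min, min(in_max, value))
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

# Example: a skeleton Z reading between 1200 and 3500 (closer..farther)
# drives a cloth scale between 100 and 20, so stepping toward the
# sensor makes the model grow (illustrative numbers only).
scale = map_range(2000, 1200, 3500, 100, 20)
```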

The transitions required a workaround that Alex helped me with. Initially I tried simply setting a 3-second delay on the Jump actor, but I was getting weird glitches during that delay, like the next scene temporarily showing up in the wrong position. So I ended up putting an empty scene between each pair of interactive scenes and setting a delay there, which makes it look like a smooth transition is happening between scenes.
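As a toy model, the workaround behaves like the sequencer sketched below (the scene names and timings are placeholders, not values from my patch):

```python
import time

SCENES = ["cloth_1", "cloth_2", "cloth_3", "cloth_4"]  # placeholder names

def show(scene):
    print("now showing:", scene)  # stand-in for jumping to an Isadora scene

for scene in SCENES:
    show(scene)
    time.sleep(60)   # each interactive scene runs for about a minute
    show("blank")    # the empty buffer scene hides the glitchy jump...
    time.sleep(3)    # ...and the 3-second pause reads as a smooth transition
```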

I’m happy with how the Cycle 3 presentation went (other than Isadora crashing twice), and I got a lot of interesting and useful feedback. It was also very enjoyable seeing everyone engage with the projections. The feedback included: experimenting with adding texture to the materials and seeing how that affects the experience and perception of the models; potentially introducing a physical element, like touching a real piece of fabric rigged with something like a Makey-Makey to trigger transitions between scenes; and tracking more types of user movement instead of focusing mainly on depth in one direction. One comment noted that the animations felt a bit disconnected from the main interactive models in the foreground, since the animations faded into the background, which I definitely agreed with. For the next iteration I would make the animations interactive too, get better at materials in 3ds Max, and experiment with more texture, since I liked that suggestion. I would apply all of these suggestions except the physical cloth, since my main goal with this project was to experiment with movement and body tracking as one method of interaction I could explore in virtual environments. With that said, I am very happy with the takeaways from this class and the whole Cycle 1-3 process, including learning Isadora, getting more comfortable using the Motion Lab, and trying out a new type of project and experience.

I also learned a lot about designing in and around a physical space, since that is something I usually don’t think much about. I learned about important considerations when using physical space and more free-form movement, especially when Alex ran backward and collided with the Kinect. That prompted me to think about placing the sensor in front of the projection in the next iteration, like I initially had in mind, and moving the projections up higher so that the sensor is not right in front of them.

Video compilation of everyone’s experience from Alex’s 360 videos:

Another useful piece of feedback was that participants expected, and wished, that they could use their hands more, since that was the main tendency when first seeing cloth on the screen; a lot of participants wished they could manipulate it with their hands the way they would in real life. I think this would also be very interesting to explore in the next iteration, playing with tracking hand movements and the distance between hands to drive the number ranges.

As I mentioned during the discussion, I have been experimenting with this in a different project using the Leap Motion we checked out. I created a virtual environment in the game engine Unity and used physics properties on interactive materials to make them responsive to Leap Motion hand tracking, which lets participants push, pull, and move cloth and various other objects in the virtual space. I wanted to share a short screen recording of that here too:


Mollie Wolf Cycle 2: The WILDS – Dancing w/ Cody

For Cycle 2, I began experimenting with another digital ecosystem for my thesis installation project. I started with a shot I have of one of my collaborators, Cody Brunelle-Potter, dancing, gesturing, and casting spells on the edge of a log overlooking a mountainside. As they do so, I (holding the camera) am slowly walking toward them along the log. I was rewatching this footage recently with the idea of using a depth camera to play the footage forward or backward as you walk, allowing your body to mimic the perspective of the camera, moving toward Cody or away from them.

I wasn’t exactly sure how to make this happen, but my first idea was an Isadora patch that regularly recorded how far someone was from an Xbox Kinect and constantly compared their current location to where they were a moment ago. Whether the difference between those two numbers was positive or negative would then tell the video to play forward or backward.

I explained this idea to Alex; he agreed it was a decent one and helped me figure out which actors to use. We began with the OpenNI Tracker, which has many potential ways to track data using the Kinect. We turned many of the trackers off, because I wasn’t interested in creating any rules about what people were doing, just where they were in space. The Kinect senses depth by bouncing a laser off objects; how bright the light is when it bounces back tells the camera whether the object is close (bright) or far (dim). So the video data that comes from the Kinect is grayscale, based on this brightness (closer objects read as white, farther ones as black). To get a number from this data, we used a Calc Brightness actor, which outputs a steadily changing value corresponding to the brightness of the video. Then we used Pulse Generator and Trigger Value actors to record this number at regular intervals. Finally, we used two Comparator actors: one that checked whether the recorded number was less than the current brightness from the Calc Brightness actor, and one that checked whether it was greater. These Comparators each triggered Trigger Value actors that set the speed of the Movie Player playing the footage of Cody to 1 or -1 (meaning it would play forward or backward at normal speed).
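Boiled down to code, the comparator logic works roughly like the loop below. This is an illustrative Python paraphrase, not the actual actors; `read_kinect_brightness` and `set_movie_speed` are hypothetical stand-ins for the Calc Brightness output and the Movie Player’s speed input.

```python
import time

SAMPLE_INTERVAL = 0.25  # how often the Pulse Generator fires (assumed value)
previous = None

while True:
    brightness = read_kinect_brightness()  # hypothetical: Calc Brightness value
    if previous is not None:
        if brightness > previous:
            set_movie_speed(1)    # brighter = closer, so play forward
        elif brightness < previous:
            set_movie_speed(-1)   # dimmer = farther, so play backward
    previous = brightness         # the Trigger Value snapshot
    time.sleep(SAMPLE_INTERVAL)
```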

Once this basic structure was set up, quite a bit of fine tuning was needed. Many of the other actors you see in these photos were used to experiment with that fine tuning. Some of them are connected and some are not; some are even connected but not currently doing anything to the data (the Calculator, for example). At the moment, I am using a Float to Integer actor to turn the brightness value into whole numbers (as opposed to one with four decimal places). This makes the system less sensitive, which was a goal, because initially the video would jump between forward and backward while a person was just standing still, breathing. Additionally, I am using a Smoother in two locations: one before the data reaches the Trigger Value and Comparator actors, and one before the data reaches the Movie Player. In both cases, the Smoother creates a gradual increase or decrease between values rather than jumping between them. The first helps the sensed brightness change steadily (or smoothly, if you will); the second helps the video slow to a stop and then speed up into reverse, rather than jumping to reverse, which felt glitchy originally. As I move this into Urban Arts Space, where I will ultimately be presenting the installation, I will need to fine tune quite a bit more, which is why I have left the other actors around as additional things to try.
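The two taming tricks have simple numeric analogues. Here is a hypothetical Python sketch of what the Float to Integer and Smoother actors each do to the signal (the 0.1 easing rate is an invented parameter):

```python
def float_to_integer(value):
    # Truncating the decimals hides tiny fluctuations (breathing, sensor
    # noise) from the comparators downstream.
    return int(value)

class Smoother:
    """Eases toward each new target instead of jumping to it."""
    def __init__(self, rate=0.1):
        self.rate = rate     # fraction of the remaining gap closed per update
        self.current = 0.0

    def update(self, target):
        self.current += (target - self.current) * self.rate
        return self.current

# One Smoother steadies the brightness before the comparators; another
# lets the movie speed glide from 1 through 0 to -1 instead of snapping.
speed_smoother = Smoother(rate=0.1)
```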

Once things were fine tuned and functioning relatively well, I had some play time with it. I noticed that I almost instantly had the impulse to dance with Cody, mimicking their movements. I also knew that depth was what the camera was registering, so I played a lot with moving forward and backward at varying times and speeds. Reflecting on my physical experimentation, I realized I was learning how to interact with the system. I intuitively changed my speed and length of step to ones the system more readily registered, so that I could more fluidly feel a responsiveness between myself and the footage. I wondered whether my experience would be common, or if I as a dancer have particular practice noticing how other bodies respond to my movement and subtly adapting what I’m doing in response to them…

When I shared the system with my classmates, I rolled out a rectangular piece of astro turf in the center of the Kinect’s focus (almost like a carpet runway pointing toward the projected footage of Cody). I asked them to remove their shoes and take turns, one at a time. I noticed that collectively, over time, they also began to learn and adapt to the system. For them it wasn’t just individual learning but collective learning, because they were watching each other. Some of them tried to game-ify it, almost as though it were a puzzle with an objective (often assuming it was more complicated than it was). Others (mostly the dancers) had the inclination to dance with Cody, as I had. Even though I watched their bodies learn the system, none of them ever quite felt like they ‘figured it out.’ Some seemed unsettled by this and others not so much. My goal is for people to experience a sense of play and responsiveness between themselves and their surroundings, not a game with rules to figure out.

Almost everyone said that they enjoyed standing on the astro turf – that the sensation brought them into their bodies, and that there was some pleasure in the feeling of stepping and walking on the surface. Along these lines, Katie suggested a diffuser with pine oil to further extend the embodied experience (something I am planning to do in several of the digital ecosystems throughout the installation). I’m hoping that prompting people into their sensorial experience will help them enter the space with a sense of play, rather than needing to ‘figure it out.’

I am picturing this specific digital ecosystem in a small hallway or corner of Urban Arts Space, because I would rather this feel like an intimate experience with the digital ecosystem than a public performance with others watching. As an experiment with this hallway idea, I played with the zoom of the projector, making the image smaller or larger as my classmates used the system. Right away, my classmates and I noticed that we much preferred the full size of the projection (which is MUCH wider than a hallway). So now I have my next predicament: how to make the image large enough to feel immersive in a narrow hallway (meaning it will need to wrap onto multiple walls).


Cycle 2: Puzzlr Progress

Introduction

Not too much has changed in the actual design of my project since the Cycle 1 check-in. I’m still working on a unique and fun way to create physical input methods that match a digital experience. My overall goal is to apply the knowledge I gain from this project to my final thesis. So far there have been a few hiccups and issues, but I’m incredibly happy with the progress I’ve made and I look forward to finishing the project soon.

Progress Update

Most of the photos in this Cycle 2 post are work-in-progress shots of me laser cutting the final pieces, sanding them, and building up the puzzle system and circuits. The last few weeks have mostly been spent refining and building my puzzle board, as well as experimenting with some simple Isadora elements.

The final puzzle board was laser cut from white acrylic and assembled using duct tape and super glue. The new board design came about because of a lack of space on the first prototype; this version is much more stable, and includes a bottom level for the circuits, a mid plate where the puzzle pieces sit, and a border to help keep the puzzle pieces in. I also created a simple circuit system where the two wires meet under each contact point but are separated by a plug-like shaft. Each side extends up into the board and splits into two contact points; the puzzle pieces themselves have metal foil underneath, which completes the circuit when a piece is placed correctly.

Next Steps

As the project nears its end, I’m currently refining the circuit setup and improving the connection quality. I’m planning to include a simple visual component that will give the player progress feedback. Since that is an important part of my investigation into hand-eye coordination, I will be prioritizing the Isadora interface to get at least some feedback from that area of focus.


cycle 2: simultaneous seeing : the digital self is real

Cycle 2 Documentation

11.22.2022

Katie O’Loughlin

For cycle two, I worked on creating a malleable environment that was affected by the choices people made within the space. I continued to play with feedback loops, although they weren’t the main focus anymore because of the variety of other choices that could be made. I also continued to think about the impact and effect of seeing and being seen, of framing a body other than your own, and of experiencing the digital image of your body as something less familiar, although not necessarily less real.

In the space, there were three cameras, each attached to a live video feed projected in the space. One feed was projected onto the large flat scrim via a capture card, one onto the large, curved scrim via an NDI Watcher, and one camera was hidden behind the curtains, projecting directly through a pico projector onto a small box. I placed the pico projector in the corner of the space to play with scale. Where it was located, it hid the person from the rest of the play space, offering a break from what could be chaotic experimentation.

The main area was carved into a circular-ish space with two curtains pulled around the curved track. The back scrim and one of the circle scrims had the two different live feeds playing on them. People were allowed to pick up both cameras and choose how to frame the room and the people in it. In the middle of the space there was a table with a magnifying glass, some shiny emergency blankets, and some plastic Fresnel lenses that warped and focused objects/people as you looked through them. These items were objects for the participants to play with to filter the images on the screens and change how they were viewing the space.

This cycle definitely didn’t have an end goal – there was nothing I was secretly trying to get the participants to realize. My research is invested in shifting perspective and understanding how perception affects experience. I am curious about how humans can be many things at once, including perceived as many things at once. I find myself invested in discovering how to hold multiple truths at once. As I watched the participants maneuver through the space, filter the images, and choose different framings, I was really interested in seeing the similarities and differences between the image on the screen, and the person I was seeing right in front of me. All of this work is really making me consider how integrated our digital lives are in society right now, and how we have a lot of agency in how we present ourselves, and others, to the world on digital platforms.

How does how we frame ourselves and our world affect others’ perceptions as they look into our world? What does real and fake mean in this day and age? If our digital selves are a representation of our identity, what is the impact on our own perception of self? How much choice do we get in how other people see us, or experience us? How carefully are we holding the idea that how we perceive someone else changes our reality of them, which in turn may change theirs as well?

I like giving participants agency in this work, to make choices and hold their own responsibility. As I work with the digital body, I continue to be aware of the power structures, hierarchies, and delicate spaces that arise easily within this topic. One of the aspects of this cycle that I found really enjoyable was seeing how all the participants interacted with each other much more than in cycle one, and getting to see the interconnectedness between their choices and how they impacted the space as a whole.

footage taken by Alex O and Mollie W

Cycle 2 documentation

Since Cycle 1, I have used Cinema 4D to create the final 3D cloth models for the installation, set up the Kinect and Isadora in the Motion Lab, experimented with projection spots, learned how to project in the first place, and modified the Isadora patch based on the Motion Lab environment. One of the main changes was to have 4 separate scenes, each lasting at least a minute. A big part of this process was optimizing the models in 3ds Max, since the program has a maximum number of polygon faces that can be exported and my original models were much bigger than that:

At the time of the Cycle 2 presentation, my visuals were still in progress, since I am learning how to make materials in 3ds Max, the program I have to use because its format is the only one Isadora supports. But my vision is for all the materials to be non-shiny, like the first two scenes

…which was also the feedback I got from the critique – scene number 2 was the most visually pleasing one, and I have to figure out how to edit the shiny materials on the other objects (scenes 3 and 4) this week.

During Cycle 2 I decided I want the projection to be on the main curtain at the front of the Motion Lab and I liked the scale of the projected models, but I need to remove the Kinect-colored skeletons from the background and have the background just be black.

The feedback from the critique also included experimenting further with introducing more forms of movement to the cloth. I had already tried that, but it was laggy and patchy, so I think once I learn to control the skeleton numbers and outputs better, I can use them to expand the ways the models move. Then I’ll experiment with having them move a little horizontally and vertically on the projection instead of just scaling along the Z-axis.

Next steps:
My main next step is to keep working on the Isadora patch, since it is really confusing to figure out which numbers from the skeleton tracking outputs are best to use. I’m thinking I might switch back to brightness/darkness inputs for some scenes, since I liked how much more sensitive the cloth models were with those, but I will first experiment with using the skeleton data more efficiently. I am also going to polish the layout and materials of the 3rd and 4th scenes; I think I’m happy with how the first and second scenes look, they just need some interaction refining. On Tuesday I am also going to work on setting up the Kinect much further from the computer, in front of the participants.

I am also going to render some animations I have of these same cloth models and try importing them into the Isadora patch in addition to the interactive models to see how that combination looks in the projections.


Cycle 2 – Perspective drawing tool

I decided to move from the small handheld book form to the Motion Lab for my next cycle. I have been working with my own students in the Motion Lab, and I decided to explore the possibility of designing tools for teaching drawing that involve observation, collaboration, and movement. My initial plan used 2-point perspective as a starting point. Teaching 2-point perspective is typically a guided, step-by-step drawing process, most often experienced with a pencil and a ruler. A teacher guides students through drawing a horizon line and two points, and then demonstrates how to draw a simple dimensional cube, often using a document camera to project a larger live-capture image of the teacher’s actions. Once students understand the basic process, they are set free to create on their own.

I scaled this process up and envisioned the Motion Lab as a large-format document camera. I set up a drawing space in the center of the room, directly under a birds-eye-view camera. In the image below, you can see that I projected the live feed onto the main projection screen and applied a TT Edge Detect filter in Isadora. In the photo, students are using ropes, each tied to one of the points on our horizon line, which bisects the image.
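The visual result is similar to a standard edge-detection pass. For reference, here is a minimal OpenCV sketch of the same idea; this is an analogy, not Isadora’s actual TT Edge Detect implementation, and the Canny thresholds are arbitrary:

```python
import cv2

cap = cv2.VideoCapture(0)  # stand-in for the birds-eye-view camera feed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # white outlines on black, like the projection
    cv2.imshow("projection", edges)
    if cv2.waitKey(1) == 27:          # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```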

The patch I created was fairly simple; see below:

Isadora Patch

Playing with two camera views layered and both processed through TT Edge Detect

After the initial explorations with the single Video In Watcher, I layered in another feed from a camera on a tripod that was pointed at a whiteboard. This is when participants began to play even more with what was happening on each screen and how the feeds interacted in the projection.

Layering the birds-eye view and the tripod camera; shifting scale by zooming in and turning the camera view

For my final 3rd cycle, I want to continue to build on and play with this tool. My initial idea for the next phase is to focus on observational drawing and the figure/body in motion: maybe shifting the output 90 degrees so the drawers can face the large projection screen head-on, and using two cameras again, but with one directed at a body moving through space so that those drawing can try to capture the movement and gestures on the page. I also want to add in the live drawing tool and see how that adds another layer of complexity and interest to the projected image.


Cycle 1 – Ghosts in the Landscape – Mollie Wolf

For cycle 1, I decided to attempt an interactive portion of my installation. My initial idea was to use an Astra or Kinect depth sensor to make the background invisible and layer live-capture footage of audience members into a video of a landscape.

When I asked Alex for help with background subtraction, he suggested I use a webcam instead. We found an example background-subtraction patch that Mark had posted on the TroikaTronix forum, so I started there.

He essentially used an Effects Mixer with a Threshold actor to read the data from a Video In Watcher and identify when a person had entered the frame, by watching for a change in light compared to a freeze-frame grab of the same image. Then he used an Add Alpha Channel actor to mask anything that hadn’t changed (i.e., the background).
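In rough code terms, that approach amounts to diffing each frame against a stored reference and keying out whatever hasn’t changed. Here is a simplified OpenCV sketch of the idea (my paraphrase of the concept, not Mark’s patch; the threshold of 30 is arbitrary):

```python
import cv2

cap = cv2.VideoCapture(0)
ok, background = cap.read()                 # the "freeze frame" grab
background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    change = cv2.absdiff(gray, background)  # where has the light changed?
    _, mask = cv2.threshold(change, 30, 255, cv2.THRESH_BINARY)
    # In Isadora, an Add Alpha Channel actor would use this mask to make
    # unchanged pixels transparent; here we simply black them out.
    person_only = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("keyed", person_only)
    if cv2.waitKey(1) == 27:
        break
```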

Here are photos of my version of Mark’s background subtraction patch.

When I was first using this patch, it was a little finicky – sometimes only showing portions of the body, flickering images of the background, or splotchy images. I noticed that it had a hard time with light colors (if the person was wearing white, or if their skin was reflecting the light in the room). I tried to fix this by adjusting the threshold value and adding a Gaussian Blur to the image, but it was still more finicky than I’d like.

When I brought this issue up to Alex, he suggested I use a Difference actor instead to do something similar. With the Difference actor, however, it mostly picks up the outlines of moving bodies rather than the whole image of a person. I decided that I actually liked this better, aesthetically anyway – it made for ghost-like images projected into the landscape, rather than video images of the people.
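The Difference actor behaves more like consecutive-frame differencing: instead of comparing against a stored background, each frame is compared with the one just before it, so only motion survives. A hedged OpenCV analogy of that change:

```python
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
previous = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ghost = cv2.absdiff(gray, previous)  # only pixels that just moved light up
    previous = gray                      # so a still body fades to black
    cv2.imshow("ghosts", ghost)
    if cv2.waitKey(1) == 27:
        break
```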

Here are photos of my current patch, using a Difference actor. (The top line with the Flip, Freeze, Effects Mixer, etc., is still there from the previous attempt, but its projector is not active, so you can ignore it.)

I think this method works much better because it no longer depends on the light in the room, just on motion. (Which is much better for me, considering that I’m practicing all of this in the Motion Lab, with black walls and floors, but will be presenting in Urban Arts Space, with white walls and concrete floors.) Conceptually, I like this version more as well, as it is an abstraction of people’s bodies and actions rather than a direct representation of them.

As a last minute experimentation, I laid out some platforms in the space and roughly lined them up with where there were mounds/hills in the landscape – that way if someone stood on the platform, it would look like their corresponding ghost image had climbed up the hill.

When I presented this version to the class, I was pleased to see how playful everyone became. With it just being an outline of their bodies, they were not looking at an image of themselves, and so there didn’t seem to be any amount of self-consciousness that I would expect if everyone were looking in a mirror, for example. People seemed to have genuine delight in figuring out that they could move through the landscape. 

One interesting thing about the Difference actor is that non-moving bodies blend into the background (with no motion, there is no difference, frame by frame). So when someone decides to sit or stand still rather than moving around, their ghost image disappears. While this is an inevitable aspect of the Difference actor, I kind of like the metaphor behind it – you have to actively, physically engage in order to have an effect on your environment.

We spent some time with the camera in a few different locations (to the side of or in front of people). As we discussed together, we came to a consensus that it was more magical and interesting when the camera was to the side. It was less obvious how to move (i.e., not as easy as looking into a mirror), which added to the magic. And having the camera to the side also meant that the landscape image was not obscured from view by anything; people only knew where the camera was if they went searching for it.

Here is a video of my peers interacting with the patch/system.


cycle 1: infinite seeing : feedback loops

Cycle 1 Documentation: Katie O’Loughlin

10.27.2022

This first cycle took place in the Motion Lab and was my first experiment with multiple feedback loops happening simultaneously. Originally, I had wanted to try a double feedback loop, with the scrims in a circle, two cameras facing each other, and a projector above each camera. But, as it goes, one hand-held camera’s capture card decided to quit working, and I was left with only one hand-held camera, eliminating that idea. I mentally scrolled through the resources I did have, and the ones I felt confident working with on my own: the single hand-held camera, a top-down camera, a top-down projector, and the regularly used circle and front projectors.

So, instead of facing feedback loops, I put the front projector on a standard loop, and then put the top-down camera and projector on their own loop, projected onto a white sheet laid out on the floor. In addition to the floor, I also projected the top-down feed onto a hanging scrim in the space, at a 90-degree angle to the front scrim. My goal was partially met: there were two feedback loops happening, but since they were not facing each other, neither loop picked up the other within its own, which was what I was curious about.

The top-down setup was a unique feedback loop: both the projector and the camera were shooting straight down, but the camera’s frame was turned at a 90-degree angle to the projector’s. Instead of the feedback loop being a replica, it ended up being a rotated loop.
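Out of curiosity, the rotated loop is easy to model numerically: treat the projected image as an array, and on each pass let the “camera” see a quarter-turn of it before the “projector” re-displays it slightly dimmed. A toy NumPy sketch (the image size, decay factor, and step count are all made up):

```python
import numpy as np

frame = np.zeros((480, 480), dtype=np.float32)
frame[200:280, 140:220] = 1.0   # a bright "body" enters the space

for step in range(8):
    seen = np.rot90(frame)               # camera frame is a quarter-turn off
    frame = np.clip(0.85 * seen, 0, 1)   # projector re-displays it, dimmed
# Every four passes the echo returns to its original orientation, which is
# why the loop reads as a rotating trail rather than a static replica.
```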

Once people entered the room, it was interesting to see how they interacted with the loops, as they could clearly see their own impact on the image. There was a definite sense of play in their interactions, which I want to hold onto if possible. Once everyone had landed in the space, I manipulated the front projector’s image in Isadora, mostly by rotating it and zooming in and out. The rotation of the image caused a spiral in the loop, which would turn into a square at a certain angle. I enjoyed the idea that both the person in the space and the person behind the computer could choose to change the image.

I mentioned to Alex that I was realizing just how impactful lighting is for this work, as it will totally blow out an image’s loop if angled or intensified in a specific way. After I said this, he suggested I turn off all the lights and the front projector, since the front projector was at a much higher resolution than the down projector and the whole top-down setup was getting lost in comparison. When I did, the light from the down projector was enough to create the feedback loop on the floor, and at a much brighter contrast, making it easier to see the results of the loop.

As people began to play with that loop, we realized that they were creating an analog version of the loop because the camera would catch the movement, create a loop, and then continue to see its own loop even after the person moved away from the camera. It created an amoeba-looking shape that would stay for a while and then eventually burst into static before it lost the loop.

I was intrigued by this result, as the top-down setup was my back-up plan, one I wasn’t too excited about. When I had it turned on alongside the other loop, that continuous feedback was getting blown out by the surrounding light. Once we could see it more clearly, the loop got longer and more complex, and we could see the reaction to our bodies’ movement on the scrim.

One piece of feedback I received was that there were many “frames” within the experience. Looking at the space, both hanging scrims were rectangles, the cloth on the ground was a square, and the loops were also repeating the rectangular frame within themselves. The camera had its rectangular viewfinder open, and the scrims were at a 90-degree angle to each other. I appreciated the idea that everything felt like a camera frame, as how we are seen through a camera is a facet of my research. That being said, I’m glad it was pointed out, because I’m not sure I want such a square space. I wish I could have played with the circle projectors and scrims, but will try to implement them in cycle two.

I’m glad I showed what I did, because new aspects of the feedback loop showed up, and it gave me a bit more motivation to move into the next cycle. My goals for cycle two are to set up the wireless router and get the NDI Watcher working between my phone and Isadora. That way, people can play with moving a camera around in the feedback loop.


Cycle 1: Puzzlr Prototype

Project Brief

For the final project, I’ve decided to create a fun and challenging puzzle that will stimulate the player in a variety of ways through 3 distinct play modes. The puzzle consists of six pieces arranged in a hexagonal shape. The goal of each round will be to complete the puzzle while navigating changes in the sensory elements of the game. My research thread focuses on microcontroller inputs and the conceptualization of touch, so I see this project as a way to explore creating micro-interaction systems through physical objects.

The first round of the puzzle will be basic; the player just has to complete the puzzle by getting all the pieces onto the board in the correct configuration. The contacts on the bottom of the pieces and on the board itself will give a slight hint as to how to approach the puzzle (more on that later). There will be both visual and auditory feedback informing the player about their progress.

The second round will involve covering the player’s hands so that they can only use the visual/auditory feedback from the system rather than looking at their hands. The pieces will also include unique textures that are mirrored on the screen to guide the player on which textures go where on the board.

The third and final round will involve blindfolding the player so they will have to rely on the textural elements of the pieces coupled with the auditory feedback. In this section, the auditory feedback for each piece will be unique so players can keep track of what pieces go where. This final level is intended to be very difficult and will be used mostly as an experiment.

First version of the schematic

Production & First Prototype

For the first phase of iteration, I created a schematic and then used it to laser cut a preliminary model of the puzzle and circuit housing. After laser cutting and setting everything up, I realized some parts of the first prototype needed to change:

  1. The engravings on the bottom plate are too narrow and can’t house the wires and cables necessary for the puzzle piece contacts.
  2. The engravings on the bottom of the puzzle pieces are on the wrong side because I forgot to flip them before printing.
  3. The circuit layout is confusing and requires more thought about how the ground/input wires will be set up on the final board.

After thinking about those changes, I modified the schematic and created a new one featuring a new build layout, a new circuit component, and a new circuit layout. I decided to create two levels for the puzzle: one for the circuits and one where the puzzle pieces sit. This allows more room for cables and components under the puzzle, and ensures a clean look on top. The new circuit component is a little adapter (in the middle of the schematic) that serves as a bridge between the circuit housing and the top of the puzzle. The component takes the ground and input cables and attaches them to two pieces of foil sticking out at the top; each puzzle piece then has a piece of foil underneath that, when placed on the contact, completes the circuit.

Schematic V2
New Circuit Layout

Workspace

Next Steps

The next step in the project is to make the second prototype and get the circuit laid out and working for the next cycle!