Pressure Project 1
Posted: February 13, 2025 Filed under: Uncategorized

For pressure project one I was inspired by the original Isadora template to keep things simple and looping indefinitely, as opposed to creating a narrative. My original instinct was to animate the patch with random positions and shape patterns so that it was constantly changing and held the viewer’s attention. Thankfully, Isadora’s license was not valid on my first attempt, so I was unable to save my work. On my second attempt, I challenged myself to be a bit more intentional about my choices and to work within the scope of the assignment. Knowing that we had the option to use a camera input as a sensor, I began the process of adding some video feedback and interaction to the patch.
Adding the video to the squares was easy enough. From my previous experience with Isadora, I already knew how to add a new user input to the user actors, so it was a simple process. However, I quickly realized that the incoming aspect ratio was rectangular while the shapes we were given were square. In the spirit of simplicity and symmetry, I chose to crop the video to a square. This gave the image a border, and combining the projectors created the effect of a color filter, which I thought was a bit retro and fun. This basic foundation gave me the inspiration for the rest of the patch.
The square image with a thick border reminded me of a photo booth, and I set off to build an interactive experience that could capture that magic. I wanted to essentially “take a snapshot” at the same time as the colored square appeared. I found the freeze actor worked perfectly for this. Adding the same trigger as the projector activation synced everything up nicely.
I wanted to refine the images a bit more by changing the way they appeared and disappeared. For this, I wanted to create the effect of turning over a card or picture to reveal what is underneath. I had never tried this in Isadora before, so I experimented a bit before I found the 3D projector. The settings on this actor are quite different from the normal projector, but I was quickly able to figure out what most of the options did. To rotate the image, I added a ramp to the Y rotation so it would flip into place when the projector became active. This worked as you might expect, but I discovered there were some artifacts and lines that would tear into the image as the effect was taking place. I played around with the blend modes and layer heights, but nothing seemed to work. Finally I found the “depth test” setting, which made the transition smooth and clean. I would have liked to flip the image back over when it disappeared, but I ran out of time, and figuring out the timing and trigger delays was not as important to me as getting a good image.
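As a rough illustration of that ramp idea, here is a small Python sketch rather than the actual Isadora actors; the half-second duration and 90-degree start angle are assumptions, not values from my patch.

```python
# A minimal sketch of the card-flip: a ramp drives the Y rotation from a start
# angle down to 0 over a fixed duration, so the image appears to turn into place.
# Duration, start angle, and frame rate are placeholders, not patch values.

def flip_rotation(elapsed_s, duration_s=0.5, start_deg=90.0, end_deg=0.0):
    """Return the Y-rotation angle for a linear ramp from start_deg to end_deg."""
    t = min(max(elapsed_s / duration_s, 0.0), 1.0)  # clamp progress to [0, 1]
    return start_deg + (end_deg - start_deg) * t

if __name__ == "__main__":
    for frame in range(16):                 # roughly half a second at 30 fps
        elapsed = frame / 30.0
        print(f"{elapsed:.2f}s -> {flip_rotation(elapsed):5.1f} deg")
```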
One of my goals for this project was also to explore new actors and refine my methods of using them. I have used the eyes++ actor many times, but with limited success. Recently I have tried filtering the input with a chroma key to isolate faces, which has worked fairly well. This technique let me essentially “zoom in” on the user’s face when taking the “photo.” I had to guess which actors to use to crop and center the image on the “Blob,” but I was able to get something that worked reasonably well.
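The crop-and-center step boils down to something like the sketch below, written in Python as a stand-in for the actor chain; the frame and crop sizes are placeholders, not values from the patch.

```python
# A sketch of the "zoom in on the face" idea: given a tracked blob's center and
# a desired square size, compute a crop rectangle clamped to the frame bounds.

def centered_crop(blob_x, blob_y, crop=240, frame_w=640, frame_h=480):
    """Return (left, top, right, bottom) of a square crop centered on the blob."""
    half = crop // 2
    left = min(max(blob_x - half, 0), frame_w - crop)   # keep crop inside frame
    top = min(max(blob_y - half, 0), frame_h - crop)
    return left, top, left + crop, top + crop

print(centered_crop(600, 50))   # blob near the top-right corner -> (400, 0, 640, 240)
```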
When it was time to present, I quickly realized the camera tuning I had done in my office at home was not producing the same results in the classroom. I was able to frantically change some settings to get it working again just before our presentations began. For the presentation itself, I chose to display the stage on the classroom TV. This was similar to how I had programmed everything in my office with a second display. I was excited to see how others would react, and I was pleased that most people seemed to enjoy the experience. One thing I did notice was that at some point people started to back away from the screen. This was partially to give others space, but I think it was also because the camera was essentially following them and they didn’t want their picture taken. For a future iteration I might try to limit the range of the camera so it only interacts with people within a certain distance.
Overall, I enjoyed working on this project and I’m happy I was able to keep most of the original programming intact.



Pressure Project 1: Building a Galaxy
Posted: February 4, 2025 Filed under: Uncategorized

Deviating from the original self-generating patch to create something unrecognizable was a process of playing with the shape and wave actors tucked inside of each 50-50 box. Although my patch made several leaps from the original source, this was ultimately an exercise in unpredictability, not only of pattern but of challenge. The process of experimenting, problem-solving, and making creative choices based on trial and error allowed me to develop something unique. Each step in the project introduced new discoveries, frustrations, and moments of inspiration that shaped the final product.
I worked on this project in intervals that felt manageable for me. Whenever I could pop the thumb drive in, as long as I wasn’t becoming frustrated, I could keep trucking forward. The moment I’d hit a wall was when I found value in stepping away and coming back with a fresh perspective. Allowing myself space to breathe through the creative process kept me from overworking certain ideas or becoming too attached to one solution. I created drafts at the following intervals: 1 hour, 1.5 hours, 3.25 hours, 4.5 hours, and 5 hours. Each session built upon the last, adding layers of depth and refinement to the patch.
My first hour consisted largely of two things: playing around with different shapes and patterns that were visually appealing to me and organizing the 50-50 boxes onto virtual stages to ensure I was adjusting the correct parameters on the patches. This initial exploration allowed me to get comfortable with the software and begin to establish an aesthetic direction. I decided to take the second 50-50 box and duplicate these hexagon shapes, as evident in the video. I considered having a blinking hexagon of another color travel through the lines of hexagons to give the illusion of movement. Initially, I intended to duplicate this pattern across the whole screen, but as the hour passed, I realized this approach would be too meticulous for what I was looking to accomplish in the given timeframe.
For the first 50-50 box, I experimented with some video actors. I inserted the explode actor between the shape and projector while also adding a wave generator to the vertical position, giving the illusion of a bouncing ball. This small animation gave me my first taste of how dynamic movement could be implemented within the patch. The interplay between controlled movement and randomization became an interesting area to explore.
The next 30 minutes would get interesting as the black void of the stage became strikingly apparent to me. I wanted to texture the space a bit to avoid a completely flat background. I found a background color actor that fixed this problem, but I didn’t like how flat it felt. To enhance the visual complexity, I used the explode actor to create a grain-like texture behind my shape actors. Additionally, I decided to see if I had any audio on my computer to throw in for inspiration. I landed on an ’80s-style synth instrumental. The combination of this music with the textured background inspired me to create an outer-space-style scene. Wanting to reinforce this theme, I focused on making shapes appear to be floating or traveling. I took my bouncing ball from before and added a wave generator to the horizontal parameters, which gave the illusion of flight. However, at this point, I noticed that the projector crop was cutting off the shape along the horizontal base, creating an unexpected limitation that I would need to address later.
In the fourth 50-50 box, which I renamed Box 4, I added a wave generator and a limit scale value actor to the facets parameter of the shape. I decided to limit these values to between 3 and 6 to keep the shape sharp and prevent it from becoming too rounded. Additionally, I thought it would be fun to apply the same actors to the projector zoom, but this time keeping the values between 100 and 400. This gave the illusion that the shapes were not only increasing in facets but also in size. The unexpected interplay of these parameters created a more organic transformation, making the visuals feel dynamic rather than rigid.
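The way I think about the limit scale value step can be sketched in Python (a stand-in for the actor, not Isadora itself; the wave value of 75 is just an example input):

```python
# Rescale a wave generator value (assumed to run 0-100) into a parameter's range.

def limit_scale(value, in_min=0.0, in_max=100.0, out_min=0.0, out_max=1.0):
    """Linearly rescale value from [in_min, in_max] into [out_min, out_max]."""
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

wave = 75.0                                           # example wave output
facets = limit_scale(wave, out_min=3, out_max=6)      # shape facets: 3 to 6
zoom = limit_scale(wave, out_min=100, out_max=400)    # projector zoom: 100 to 400
print(round(facets, 2), round(zoom, 1))               # -> 5.25 325.0
```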
Over the next hour and 45 minutes, I focused on giving my stage the actors and parameters that made the piece feel not only like space but like something alive. I experimented with my background color and explode actors to create movement, until I discovered an actor called Video Noise that resolved this beautifully. I also added a subtle stage background actor to adjust the color beneath the noise.
Additionally, as I became more comfortable utilizing Inside Range actors, I decided to base some sort of cue off the music. I connected an Inside Range actor to the position parameter of the movie player, which tracked the number of seconds in the song. Unsure if I could maintain attention for more than 30 seconds, I aimed for something around the 20-second mark. I set my low at 20 and my high at 21, which would then trigger a wave generator. I connected the sawtooth wave generator to two limit scale value actors—one to set the scale position and another to set the vertical position of a shape actor that I envisioned as a planet. I originally attempted to create something that resembled Saturn with a ring around it, but eventually, I realized I was spending too much time refining this one parameter. I ultimately settled on creating dimension via a line size.
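The logic of that cue is roughly the following Python sketch; the one-shot latch is my own simplification for illustration, and in the patch the trigger actually feeds the sawtooth wave generator and the two limit scale value actors.

```python
# Fire once when the movie player's position enters the 20-21 second window.

class InsideRange:
    def __init__(self, low=20.0, high=21.0):
        self.low, self.high = low, high
        self.fired = False

    def update(self, position_s):
        inside = self.low <= position_s <= self.high
        if inside and not self.fired:
            self.fired = True
            return True              # rising edge: start the planet's rise
        return False

cue = InsideRange()
for t in [19.5, 20.2, 20.8, 22.0]:    # example playback positions in seconds
    if cue.update(t):
        print(f"trigger at {t}s")     # -> trigger at 20.2s
```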
Over the next hour and fifteen minutes, I primarily focused on two elements. First, I organized my patch, as it was now becoming necessary due to its increasing complexity. Second, I worked on establishing a “night sky” transition. I copied the base 50-50 box and tried to create an explosion that would become the primary layer. Initially, I tried to trigger this effect four seconds after the initial planet would rise, but that didn’t work. I then tried adding an Enter Scene trigger with a trigger delay set for about 24 seconds. This was when I realized the hurdles of real-time rendering, so I created a second scene, which was blank, to flip back and forth between and determine whether my actors were behaving as intended.
My final 30 minutes focused on going back to basics and trusting what was working. I scrapped my night sky idea but repurposed the box to return to the galaxy tear concept. I took the night sky and created a shape that resembled the planet. I then used the explode actor and a random wave generator to trigger varying horizontal ranges, creating the illusion of a dying planet.
At this point, the rapid shape actor I had developed in the first hour and a half was feeling stale. To add texture, I introduced the dots actor. However, I wanted to maintain an unpredictable pattern, so I connected the established Inside Range actor to a toggle actor that would turn the dots actor on and off whenever the range from the random wave generator fell between 50 and 100. This was one of my proudest moments, as it allowed me to create something far from the original patch.
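A simplified stand-in for that chain (random wave generator, Inside Range set to 50-100, Toggle, Dots) might look like this in Python; the wave values are pseudo-random stand-ins, and toggling only on entry into the range is my own simplification.

```python
import random

# Flip the Dots actor's on/off state each time the random wave enters 50-100.
random.seed(7)
dots_on = False
was_inside = False
for step in range(8):
    wave = random.uniform(0, 100)       # stand-in for the random wave generator
    inside = 50 <= wave <= 100          # Inside Range: 50-100
    if inside and not was_inside:       # rising edge into the range
        dots_on = not dots_on           # Toggle flips the Dots actor
    was_inside = inside
    print(f"wave={wave:5.1f}  dots={'on' if dots_on else 'off'}")
```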
At this point, my work was done. I was proud of myself for creating a fun visual that accomplished my goal of creating an outer-space-like scene. This project was a great exercise in evaluation—identifying which actors and elements were useful and which needed to be discarded. Some core ideas were thrown out only to return in new forms, such as the galaxy tear transforming into a dying planet.
During my presentation, I was nervous about whether the piece would hold interest for more than 30 seconds. However, my peers provided valuable feedback. Due to the planet shifting around 20 seconds in, they expected more to happen. This small movement not only shifted expectations but also broke the pattern completely. Additionally, they mentioned that the project felt like the start screen of a video game, meaning my music choice and visuals were in harmony. In the future, I would tackle one box at a time rather than jumping around, which would improve organization and efficiency.
Pressure Project 1
Posted: February 4, 2025 Filed under: Uncategorized

For Pressure Project 1, my key strategy was to find a story and see if I could successfully shape abstract visuals to convey that story, or some semblance of a story line with a clear beginning, middle, and end (Figure 1). I recognize that I struggle to make art that doesn’t tell a story (and struggle even more, perhaps, to make up that story myself, being an artist who makes art that expresses stories created by other people), so initial attempts to create something visually interesting without a central framework around which to form it did not go very far for me. I was able to start to brainstorm ways I might alter the initial patch to at least create visual interest and then used those initial alterations to build other iterations. I ended up working from the center of my scenes outward, alternating working toward the beginning and then toward the end without a clear plan but rather as ideas came to me or I thought of ways to advance the visual story. Working in first one direction and then the other gave me something to work toward (I won’t say goal!) and allowed me to explore how to get from point B to points A and C respectively. I wanted to challenge myself to explore actors I hadn’t had a real opportunity to implement before, as well as try to get a stronger handle on how the ones in the initial patch were functioning. I think I only minorly succeeded in this endeavor, but I did feel that, overall, I was able to gain some amount of facility with the tools I was using.

Figure 1
I played a lot with the Wave Generators (Figure 2) and the User Actors, mostly trying to get a better handle on the latter. At one point, this somewhat inadvertently led to me recreating the initial patch from scratch with some alterations, but I’m chalking that up to valuable time spent playing and learning (Figures 3 and 4). I also tried out the Movie Player actor and started messing around with the Play Start and Play Length fields. I also dipped my toe into the effects, like Explode, and know that I have a lot more exploring to do there.

Figure 2

Figure 3

Figure 4
The main challenge for me, I think, is just that I am very much a novice with this type of work and, as I said before, struggle to design if I don’t have a clear framework (i.e. a story) around which to base my work. I’m struggling, in general, to put the pieces together and remember how things work, but spending my time breaking and repairing some of the things in the initial patch helped with that a bit. This speaks less to this project in particular and more to a larger issue, but it also took me a decent amount of time to find assets such as music and video to use, once I decided to use them, and I had to settle for watermarked material, which was fine if a little annoying (Figure 5). Related to all this, at some point I had to just decide to use this as practice and play to my beginner-level strengths. My challenge here and moving forward will be to be satisfied with where I am, skills-wise, and to find ways to create things with those limited resources for now, growing the resources as I can but also knowing that it’s ok to create within what you have.
While my project went in a wildly different direction than those of my classmates, I think it was relatively well-received. After staring at it over and over again, I had become concerned with the pacing, but based on feedback it seemed that, while it was a little slow, the pace ultimately supported the arc of the project if you didn’t know how it was going to play out. That was my aim, but it had become hard to discern whether that was being achieved after watching it repeatedly for the last hour or so of my time spent on the project. I appreciate, too, that I have a ways to go in terms of presentation and how to do it in a way that looks more finished; I had no idea that you could make the window bigger and block out the scenes! Additionally, while I did finally figure out how to make a video of my presentation (below), I cannot figure out how to also make it have sound. I assume there is a way, but maybe there isn’t… something to explore, I suppose!
All-in-all, I was pleased with how my project turned out. I think I set attainable aims for myself and achieved those, so I’m also quite pleased with myself for being able to realistically assess my abilities/resources. I think that will serve me quite well as we continue through the semester.

Figure 5
Pressure Project 1 – Interactive Exploration
Posted: February 4, 2025 Filed under: Uncategorized | Tags: Interactive Media, Isadora, Pressure Project

For this project, I wanted to prioritize joy through exploration. I wanted to create an experience that allowed people to try different movements and actions to see if they could “unlock” my project, so to speak. To do this, I built motion and sound sensors into my project that would trigger the shapes to do certain actions.

Starting this project was difficult because I didn’t know what direction I wanted to take it, but I knew I wanted it to have some level of interactivity. I started off small by adding a User Input actor to adjust the number of facets on each shape, then a Random actor (with a Limit-Scale Value actor) to simply change the size of the shapes each time they appeared on screen. Now it was on.
I started building my motion sensor, which involved a pretty heavy learning curve because I could not open the file from class that would have told me which actors to put where. I did a lot of trial-and-error and some research to jog my memory and eventually got the pieces I needed, and we were off to the races!

A mockup diagram of which section of the motion sensor is attached to each shape. The webcam is mirrored, so it is actually backwards, which made it difficult to keep track of which portion of the sensor was attached to which shape.
Figuring out the motion sensor from scratch was just the tip of the iceberg; I still needed to figure out how to implement it. I decided to divide the picture into six sections, so each section triggered the corresponding shape to rotate. Figuring out how to make the rotation last the right amount of time was tricky, because the shapes were only on-screen for a short, inconsistent amount of time and I wanted the shapes to have time to stop rotating before fading. I plugged different numbers into different inputs of a Wave Generator and Limit-Scale Value actor to get this right.
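The section-to-shape mapping amounts to something like the sketch below (Python standing in for the crop actors; the frame size and grid layout are placeholders):

```python
# Map a motion coordinate to one of six grid cells (3 columns x 2 rows), so each
# cell can drive its own shape's rotation. If the webcam is mirrored, use
# frame_w - x for the column lookup.

def motion_cell(x, y, frame_w=640, frame_h=480, cols=3, rows=2):
    """Return a cell index 0-5, counted left to right, top to bottom."""
    col = min(int(x / frame_w * cols), cols - 1)
    row = min(int(y / frame_h * rows), rows - 1)
    return row * cols + col

print(motion_cell(100, 100))   # -> 0 (top-left cell)
print(motion_cell(620, 470))   # -> 5 (bottom-right cell)
```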
Then it was time to repeat this process five more times. Because each shape needed a different section for the motion detector, I had to crop each one individually (making my project file large and woefully inefficient). I learned the hard way how each box interacts and that not everything can be copied to each box as I had previously thought, causing me to have to go back a few times to fix/customize each shape. (I certainly understand the importance of planning out projects now!)

I had some time left after the motion sensor was done and functional, so I revisited an idea from earlier. I had originally wanted the motion sensor to trigger the shapes to explode, but realized that would likely be overwhelming, and my brain was melting trying to get the Explode actor plugged in right to make it work. Thus, I decided on an audio sensor instead. Finding the sweet spot to set the value at to trigger the explosion was difficult, as clapping and talking loudly were very close in value, so it is not a terribly robust sensor, but it worked well enough, and I was able to figure out where the Explode actor went.
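At its core, the sound trigger is a threshold check; the sketch below (plain Python, with invented threshold values) also adds a re-arm step so one clap only fires once, which is my own embellishment rather than how the patch was wired.

```python
# Fire the explosion when the sound level rises above a threshold, then wait for
# it to drop back below a lower bound before allowing another trigger.

def make_sound_trigger(fire_above=0.8, rearm_below=0.5):
    armed = True
    def update(level):
        nonlocal armed
        if armed and level >= fire_above:
            armed = False
            return True          # trigger the Explode actor
        if level <= rearm_below:
            armed = True         # level has settled; allow the next trigger
        return False
    return update

trigger = make_sound_trigger()
for level in [0.2, 0.85, 0.9, 0.3, 0.82]:     # example sound levels
    print(level, trigger(level))              # fires at 0.85 and again at 0.82
```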
I spent a lot of time punching in random values and plugging actors into different inputs to figure out what they did and how they worked in relation to each other. Exploration was not just my desired end result; it was a part of the creative process. For some functions, I could look up how to make them work, such as which actors to use and where to plug them in. But other times, I just had to find the magic values to achieve my desired result.
This meant utilizing virtual stages as a way to previsualize what I was trying to do, separate from my project to make sure it worked right. I also put together smaller pieces to the side (projected to a virtual stage), so I could get that component working before plugging it into the rest of the project. Working in smaller chunks like this helped me keep my brain clear and my project unjumbled.

I worked in small chunks and took quick breaks after completing a piece of the puzzle, establishing a modified Pomodoro Technique workflow. I would work for 10-20 minutes, then take a few minutes to check notifications on my phone or refill my water bottle, because I knew trying to get it done in one sitting would be exhausting and block my creative flow. Not holding myself to a strict regimen to complete the project allowed me the freedom to have fun with it and prioritize discovery over completion, as there was no specific end goal. I think this creative freedom and flexibility gave me the chance to learn about design and creating media in a way I could not have with a set end result to achieve because it gave me options to do different things.
If something wasn’t working for me, I had the option to choose a new direction (rotating the shapes with the motion sensor instead of exploding them). After spending a few hours with Isadora, I gained confidence in my knowledge base and skill set that allowed me to return to abandoned ideas and try them again in a new way (triggering explosions with a sound sensor).
I wasn’t completely without an end goal. I wanted to create a fun interactive media system that allowed for the discovery of joy through exploration. I wanted my audience to feel the same way playing with my project as I did making it. It was incredibly fulfilling watching a group of adults giggle and gasp as they figured out how to trigger the shapes in different ways, and I was fascinated watching the ways in which they went about it. They had to move their bodies in different ways to trigger the motion sensors and make different sounds to figure out which one triggered the explosions.
Link to YouTube video: https://youtu.be/EjI6DlFUof0
Bumping Emily’s Post
Posted: January 15, 2025 Filed under: Uncategorized

I was drawn to Emily’s work for the distilled but eye-catching imagery in the working stills. Emily’s process, as described, ties nicely with our recent work with loop and generator actors. I’d love to challenge myself to create 3-dimensional shapes like the ones in the stills.
Cycle 3: PALIMPSEST
Posted: December 11, 2024 Filed under: Uncategorized

As I ponder the 3rd iteration of this cyclical project, I am reminded of what an incredibly valuable resource time is in the RSVP cycle. I have struggled with feeling so close to my MFA project, both in choreography and projection design, that there have been moments where I have been unable to see what is actually occurring. A huge moment of learning occurred for me following our Thanksgiving break. With the gift of a few days away from OSU, I found myself able to view my choreography and projection design with fresher eyes and some objectivity. Within a very quick turnaround, I was able to determine some factors that made a tremendous impact on this cycle.
In this cycle, performance and valuaction were deeply important. I was working towards a Dec 5th showing for all faculty in the Dance Dept, as well as our Cycle 3 presentations on Dec 9th. I wanted to create and show at least 10 minutes of choreography and projection design accompanied by some music, lighting, and costuming ideas so that my advisors and peers could have a sense of the world I was trying to communicate and access through this piece.
My initial goals for this cycle were to slow down the projections and create some more negative space on the floor for my dancers to move through and emulate an interactive relationship with the projection. Through feedback from faculty, peers, and other artists, I realized that the projection on the floor had a very strong, definitive rectangular edge that actively shrank a lot of the dancing space. This meant that whenever a dancer stepped out of the perimeter of the rectangle, it looked like a mistake in the choreography. To combat this, I initially attempted to use a crescent or circle shape as a mask in Isadora, but still felt like the shape was far too crisp. Due to time constraints, I was unable to finish a version of the projection that utilized organic shapes for the Dec 5th showing in the Dance Dept.
In the Dec 5th showing, my cohort had decided to use the black marley in the Barnett which meant that this would be my first time seeing the projections on a black floor. In all honesty, I was quite delighted by the black floor—there was a sense of depth and texture that the black floor offered that was not quite accessible when compared to the white floor. This piece, currently entitled PALIMPSEST, references a manuscript where text is effaced and then new text is written upon it. I view each dancer as deeply important to the layering of the piece. I intend that PALIMPSEST communicates a meditation on small nuanced intercultural experiences that draw from my Taiwanese-American diasporic worldview.
Below is the video of the Dec 5th showing.
Following the Dec 5th showing, I was finally able to take the time to figure out how to use an organic shape as a mask. I wanted to use an image of a banyan tree from Taiwan. This felt like a little Easter egg for myself—I frequently imagine the movement of my dancers as hybrid bird-banyan trees. With support from Annelise, I figured out how to use the banyan tree as a mask in Isadora and was deeply surprised at how the organic shape transformed the possibilities of the projection.

In our Cycle 3 presentation on Dec 9, I showed a new version of the projection that played with more negative space and utilized the banyan tree mask. I also began to play with adding some more blackouts in the projection so that I could imagine what the choreography could look like without any projection. I am imagining that for my 20ish minute piece, projection will only be present for at most half to two-thirds of the piece.
In our feedback during cycle 3, I was glad to hear that the black floor resonated with my peers and that the banyan tree mask created a feeling that the projection was another dancer. People used words like “desert,” “tension,” “moss,” and “mind of its own” to describe the projection in collaboration with the music. I do feel that these words reflect where I desire to go forward in this project. I would like to be able to envision the projection as another dancer in the space. Returning to the RSVP cycle, I want to acknowledge that I am at the stage of making where I deeply need some time away before returning to see what I have actually made. As I look towards the next cycles leading up to my MFA project performance at the end of February 2025, I desire to return to solidifying what the choreography is so that I can make more informed choices about when the projections might be dancing in the space with the other performers.
Cycle 3
Posted: December 11, 2024 Filed under: Uncategorized

Naiya Dawson
For cycle three I presented the patches and videos I created in Isadora in the Motion Lab. I used two scrims and four projectors to project my videos four different ways. Presenting in the Motion Lab allowed me to see which projection was the most interesting, which editing techniques looked good, and which didn’t work or could change. If I were to do cycle 4 and beyond, I would want to play with more floor projection and the videos with the difference actor effect. I would also want to have a dancer in the space and create the live drawings I used to match up with the live dancing.
Cycle 3
Posted: December 11, 2024 Filed under: Uncategorized

Resources
Motion Lab
- Projectors and screens
- Visual Light Cameras
- Overhead depth sensor
Castle
Blender
Isadora
Adobe Creative Suite
Poplar Dowel Rod
MaKey MaKey
Arduino Board
Accelerometer/Gyroscope
Bluetooth card
Score
Guests are staged in the entryway of the Motion Lab. They are met by the “magical conductor” who invites them to come in with a wish in their hearts.
As guests file into the completely dark room housing only an unlit castle the song “When You Wish Upon a Star” from the Pinocchio soundtrack begins to play.
Projections onto the castle are delayed by approximately 10 seconds. Projections start slowly, progressively showcasing the full range of mapped projection capabilities (i.e. isolating each element of the castle).
Stars shoot across the front of the castle corresponding to the first two mentions of “when you wish upon a star.”
The projections steadily increase in intensity.
As the novelty of the intensified projections wears off, the magical conductor enters the scene to demonstrate the capabilities of the magic wand, introducing a new dimension to the experience.
The magical conductor offers the magic wand to guests who can try their hand at interacting with the castle.
The music fades along with the projections.
The Pepper’s Ghost emerges from the castle entrance, thanks guests for coming, and fades out.

This score outlines the behavior of my relatively low-tech approach to creating a magic wand. As a physical object, the magic wand required only a poplar dowel rod and a little bit of imagination.

Isadora Score for Cycle 3 iteration. Entering the scene activates the control scene, which controls magic wand functionality and provides the manual star control. Entering the scene also initiates a timer to launch the first star around the first mention of “Wish Upon a Star”. The Sound level watcher ++ creates a connection between the music and the projection’s brightness.
Valuaction
Embrace the bleed – I spent hours during cycles 1 and 2 trying to tighten my projection mapping and reduce bleed around the edges. During our feedback session for Cycle 2, one of my classmates pointed out that the bleed outlining the castle on the main projection screen actually looked pretty cool. I immediately recognized that they were right: the silhouette of the castle was a striking image. It took time for me to fully embrace this idea.
Simplify to achieve balance – I started to hear Alex’s voice ringing in my ears over the course of this project. This was the most critical lesson for me to take away from this course. I dream big. Probably too big. I always want to create magic in my projects, and I have an impulse to keep improving and expanding.
During cycle 3 I made several conscious simplifications to achieve balance, i.e. to deliver a magical experience within the allotted time.
I removed the stone facade projection from the castle. This was a neat effect, but the lines in the stone made imperfections in the mapping readily apparent. The payoff for the audience was marginal and diminished quickly. Removing this element allowed investment in features that made the experience more immersive and engaging.
I simplified the function of the magic wand (which had already been simplified several times from the original concept of a digital fireworks cannon). Prior to the final iteration, the concept for the wand was to embed a small Arduino chip, an accelerometer, and a battery to capture movements and trajectories, sending signals to the computer over Bluetooth that would trigger events in Isadora. The simplified concept used the Motion Lab’s depth sensor to detect activity in a range over the top of the castle. The active areas were cropped to isolate the effect to specific areas of the castle. An auxiliary control was added to the Isadora interface with a button that enabled the operator to trigger a shooting star when guests tried to perform magic tricks that were not programmed into the system.
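The simplified wand logic comes down to testing depth-sensor activity against a few cropped regions, roughly like the Python sketch below; the region names and coordinates are placeholders, not measurements from the Motion Lab.

```python
# Each activity point from the overhead depth sensor is tested against cropped
# regions corresponding to parts of the castle; a hit triggers that region's effect.

REGIONS = {
    "left_tower":  (0, 0, 200, 300),      # (x0, y0, x1, y1) in sensor coordinates
    "gate":        (200, 0, 440, 300),
    "right_tower": (440, 0, 640, 300),
}

def hit_regions(points):
    """Return the names of regions containing at least one activity point."""
    hits = set()
    for x, y in points:
        for name, (x0, y0, x1, y1) in REGIONS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                hits.add(name)
    return hits

print(hit_regions([(120, 150), (500, 40)]))   # -> {'left_tower', 'right_tower'}
```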

Early concepts for what would become the Magic Wand called for a blaster. At various times the blaster was to house some combination of a Nintendo Wii remote, a phone using Data OSC, or an Arduino board with a gyroscope/accelerometer and Bluetooth card. Other conceptual drawings outline the operation whereby a pull rope, connected to a PVC pipe on an elastic band inside the blaster, would close a circuit to send a signal to the computer to fire a digital firework along the trajectory of the blaster. (More drawings will be uploaded upon retrieval of my notebook from the Motion Lab.)
I simplified the castle, removing several intended features including multiple towers, parapets, and roofs. These features would have been visually striking, and several towers were already cut and shaped, but each additional tower increases complexity and time costs. Truthfully, if I had not broken the castle twice (once on the night before final installation in the Motion Lab and again on the way into the building the morning of final installation), I likely would have added a few additional towers. It’s not clear whether this would have substantially improved the final experience.
The most difficult simplification was cancelling plans to include a Pepper’s ghost in the doorway – this was the hardest to part with because the potential payoff to guests was huge. Other features like windows with waving character silhouettes were also difficult to cut because they would have improved upon the magic of the experience.
Manage the magic – I created gigabytes of interesting visualizations, 3D models with reflective glass and metal textures, and digital fireworks for this project, and it was very tempting to bring everything in and have the experience run at full intensity throughout, but this approach burns out too quickly. People need time to adapt and recover in order for escalations to be effective. Metering out the experience requires a measure of discipline.

This score shows an emotional journey map of the planned experience prior to dropping plans for the Pepper’s Ghost in the week before the performance. Ultimately, removing the ghost still allowed for escalation of emotional intensity and a meaningful journey, whereas diverting time to building it would have put project completion in jeopardy.
Simplify the projection mapping – Prior efforts to simplify my approach to projection mapping had been unsuccessful and at this point multiple projectors could no longer be avoided. I decided to change my approach to do much of the mapping in the media itself to reduce the amount of mapping necessary at installation.
I used multiple approaches to support this effort with the expectation that some might fail, but that diversification would ultimately lead to the greatest chance of success. First I took photos of the castle from the projectors’ vantage point. Next I measured the distance, rotation, and orientation of the projectors in relation to the castle to model the space in Adobe After Effects. Finally I built a 3D model of the castle in Blender, hoping that I would be able to import the OBJ files into Isadora and use my projections as textures. This was a pipe dream that did not work, but I did manage to create something interesting that I couldn’t figure out how to actually use in time for the final presentation.


A cool effort that turned out to be only marginally useful in the end. Given more time I would love to use this OBJ file to reflect things in the environment like fireworks that could create a really interesting and cohesive experience.

I used images of the castle to help build digital models.




Early attempts to model the castle using Adobe After Effects and position cameras to correspond with the projector positioning were time-consuming and came close to approximating the environment, but were ultimately unsuccessful. Using the measurements taken from the Motion Lab helped close in on actual positioning, but the angles and exact positioning proved elusive.


The most useful asset proved to be line tracings from the perspective of the Motion Lab projectors that I could pull into After Effects to create graphics that were very close to the actual dimensions of the castle. These were much easier to map onto the castle and only required minor distortions to account for imperfections in my photography approach.
Performance
I think that many of my theories were confirmed during the performance. The flow worked well. Nobody mentioned the removal of the digital stone facade. Classmates enjoyed playing with the magic wand even during the feedback session. The silhouette of the castle on the main projector made for a striking visual and gave an impression of being larger-than-life. The magical conductor role was looked upon favorably. Obviously the Pepper’s ghost was not mentioned because it was a secret closer, but I suspect that if well executed it would have fit well with the overall theming and magic.
One major oversight that would have been embarrassingly simple to execute is the addition of an audio indicator for successful magic wand interactions. I added these in post to demonstrate how it might have changed the experience.
What could have been…