Body as Reservoir of Secrets, DEMS Cycle 2
Posted: December 12, 2023 | Filed under: Uncategorized

For cycle 2, the idea started to take shape in a more concrete manner. I began developing the costume using coveralls bought on Amazon, which I tailored myself for a better fit.
The costume included the following feature-extraction functionality:
- On the coveralls I stitched patches of dark, digitally printed fabric. This increases the overall contrast of the figure, which helps the Eyes++ actor and webcam track movements better.
- A contact microphone was housed in the chest pocket, running up to my neck. An additional elastic band was devised to apply pressure and hold the contact microphone in place.
- Two iPhone gyroscopes were housed in blue/purple pockets with elastic openings on the arm and calf of the costume.
- A microphone was blown into whenever a scene change was desired, making breath an activating input.
Challenges/Feedback
- The composition looked too busy, given that more than 10 variables were being fed into Isadora.
- My computer’s graphics card seems to be struggling, and Isadora crashes easily.
- A classmate mentioned that the costume looked like an astronaut’s, which is not what I intended.
- I had a headache after wearing the neck piece with the contact mic for 20 minutes because it was restricting blood flow.
Observations/On-going curiosities
- The class seemed very interested in mapping/figuring out which visual element was coupled with which sensor.
- The class seemed interested in viewing the live generated graphics as a video piece, when I had intended the graphics primarily to be viewed as prints. I am open to the possibility of the video being screened alongside the painting performance.
Cycle 2
Posted: November 28, 2023 | Filed under: Uncategorized
For Cycle 1 I achieved connecting Isadora and Arduino and made just a simple Arduino project, blinking LEDs.
So I set my goal for Cycle 2: trying to operate motor(s) from Isadora, using a webcam as a motion sensor.
I first tried a servo motor. Fortunately, the Arduino actor in Isadora already has a feature to control servos, so I could easily connect them.
I set up the webcam as a motion sensor, like we had played with in PP3, and made an instant “interactive servo robot” whose arm chases a viewer’s motion.
I also tried a stepper motor. It was a little tricky for me, as the Arduino actor in Isadora doesn’t have a feature for operating steppers.
So I re-checked a tutorial Arduino sketch for controlling a stepper motor and recreated the same workflow in Isadora (blinking pins Low/High to make the magnetic motor spin).
Its motion was not smooth, but it worked!
I probably need to adjust the interval time between the Low/High pulses, or some other things, to operate a stepper motor smoothly, but I decided to use servo motors for Cycle 3, as I should focus on building a final output now (and servos are easier to control for my project).
Still, operating a stepper motor from Isadora taught me many things, like the idea of translating written code into an Isadora chart.
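As a rough sketch of that translated workflow outside Isadora: a stepper spins because the coil pins are blinked Low/High in a fixed sequence. The sequence table and pin count below are generic assumptions for a 4-wire stepper, not my actual wiring, but they show the idea of the chart.

```python
import time

# Full-step sequence for a generic 4-wire stepper: each row is the
# High/Low (1/0) state of the four coil pins for one step.
STEP_SEQUENCE = [
    (1, 0, 1, 0),
    (0, 1, 1, 0),
    (0, 1, 0, 1),
    (1, 0, 0, 1),
]

def step_states(num_steps):
    """Yield the pin states for num_steps steps, cycling through the sequence."""
    for i in range(num_steps):
        yield STEP_SEQUENCE[i % len(STEP_SEQUENCE)]

def run_motor(num_steps, interval_s=0.005, write_pins=print):
    """Blink the pins through the sequence. interval_s is the Low/High
    interval to tune: too short and the motor stalls, too long and it's jerky."""
    for states in step_states(num_steps):
        write_pins(states)
        time.sleep(interval_s)
```

In Isadora the same loop becomes a chain of pulse generators and trigger values feeding the Arduino actor's digital outputs.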
My Cycle 3, the final output, will be a small interactive robot using a webcam, Arduino, servos, and Isadora.
Feedback from the class during Cycle 2 inspired me to make a small but very organic interactive robot. As I usually make spatial or sculptural-scale works, I had considered my small Arduino test project (like a tiny servo) just a mock-up, but everyone’s reaction suggested to me that a small-scale thing can have its own communicativeness that a large sculpture doesn’t. This will be my direction for Cycle 3.
Lawson: Cycle 1
Posted: November 14, 2023 | Filed under: Nico Lawson, Uncategorized | Tags: cycle 1, dance, Interactive Media, Isadora

My final project is as yet untitled. This project will also be part of my master’s thesis, “Grieving Landscapes,” which I will present in January. The intention is that it will be part of the exhibit installation that audience members can interact with, and that I will also dance in/with during the performances. My goal is to create a digital interpretation of “water” that is projected into a pool of silk flower petals and can then be interacted with, including casting shadows and reflecting the person who enters the pool.
In my research into the performance of grief, water and washing have come up often. Water holds significant symbolism as a spirit world, a passage into the spirit world, the passing of time, change and transition, and cleansing. Water and washing also hold significance in my personal life. I was raised as an Evangelical Christian, so baptism was a significant part of my emotional and spiritual formation. In thinking about how I grieve my own experiences, baptism has reemerged as a means of taking back control over my life and how I engage with the changes I have experienced over the last several years.
For cycle 1, I created the Isadora patch that will act as my “water.” Rather than attempting to create an exact replica of physical water, I want to emphasize the spiritual quality of water: unpredictable and mysterious.
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/11/Screenshot-2023-11-14-221628-1024x538.png)
To create the shiny, flowing surface of water, I found a water GLSL shader online and adjusted its color until it felt suitably blue: ghostly but bright, though not so bright as to outshine the reflection generated by the webcam. To emphasize the spiritual quality of the digital emanation, I decided that I did not want the patch to constantly project the webcam’s image. The GLSL shader became the “passive” state of the patch. I used difference, calculate brightness, and comparator actors with activate and deactivate scene actors to form a motion sensor that detects movement in front of the camera. When movement is detected, the scene with the webcam projection is activated, projecting the participant’s image over the GLSL shader.
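That difference → calculate brightness → comparator chain boils down to frame differencing against a threshold. A minimal pure-Python sketch of the same logic (frames here are flat lists of grayscale values, and the threshold is an assumption to be tuned for the room):

```python
def mean_abs_difference(frame_a, frame_b):
    """Average per-pixel brightness difference between two grayscale
    frames, like the difference + calculate brightness actors."""
    assert len(frame_a) == len(frame_b)
    total = sum(abs(a - b) for a, b in zip(frame_a, frame_b))
    return total / len(frame_a)

def motion_detected(frame_a, frame_b, threshold=10.0):
    """The comparator step: fire the activate-scene side only when
    the difference brightness exceeds the threshold."""
    return mean_abs_difference(frame_a, frame_b) > threshold
```

Recalibrating for a darker environment then just means lowering `threshold` until real movement reliably crosses it without sensor noise doing so.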
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/11/Screenshot-2023-11-14-221711-1024x541.png)
To imitate the instability of reflections in water I applied a motion blur to the reflection video. I also wanted to imitate the ghostliness of reflections in water, so I desaturated the image from the camera as well.
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/11/Screenshot-2023-11-14-221744-1024x533.png)
To emphasize the mysterious quality of my digital water, I used an additional motion sensor to deactivate the reflection scene. If the participant stops moving or moves out of the range of the camera, the reflection image fades away like the closing of a portal.
The patch itself is very simple: two layers of projection and a simple motion detector. What matters to me is the way this patch will eventually interact with the materials, and how the materials will influence the way the participant then engages with the patch.
For cycle 2, I will projection map the patch to the size of the pool, calibrating it for an uneven surface. I will determine what type of lighting I need to support the web camera, and the appropriate placement of the camera for a recognizable reflection. I will also need to recalibrate the comparator for a darker environment to keep the motion sensor functioning.
Lawson: PP3 “Melting Point”
Posted: November 14, 2023 | Filed under: Nico Lawson, Uncategorized | Tags: Interactive Media, Isadora, Pressure Project

For Pressure Project 3, we were tasked with improving our previous project inspired by the work of Chuck Csuri, making it suitable to be exhibited in a “gallery setting” for the ACCAD Open House on November 3, 2023. I was really happy with the way my first iteration played with the melting and whimsical qualities of Csuri’s work, so I wanted to turn my attention to the way my patch could also act as its own “docent” to encourage viewer engagement.
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/10/IsadoraTemp23102415064804-1024x576.jpg)
First, rather than wait until the end of my patch to feature the two works that inspired my project, I decided to make my inspiration photos the “passive” state of the patch. Before approaching the web camera and triggering the start of the patch, my hope was that the audience would be curious and approach the screen. I improved the sensitivity of the motion sensor aspect of the patch so that as soon as a person began moving in front of the camera, the patch would begin running.
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/10/IsadoraTemp23102415060002-1024x576.jpg)
When the patch begins running, the first scene the audience sees is this explanation. Because I am a dancer and the creator of the patch, I am intimately familiar with the types of actions that make the patch more interesting. However, audience members, especially those without movement experience, might not know how to move with the patch from only the effects on the screen. My hope was that including instructions for the type of movement that best interacts with the patch would increase the likelihood that a viewer would stay and engage with it for its full duration. For this reason, I also told the audience the length of the patch so they would know what to expect. An additional improvement was shortening the length of the scenes to keep viewers from getting bored.
Update upon further reflection:
I wish I had removed or altered the final scene, in which the facets of the kaleidoscope actor were controlled by the sound level watcher. After observing visitors to the open house, and using the patch at home where I had control over my own sound levels, I found it difficult to get the volume high enough for the facets to change frequently enough to attract audience members’ attention and let them intuit that their volume impacted what they saw on screen. For this reason, people would leave my project before the loop was complete, seeming confused or bored. For simplicity, I could have removed the scene. I also could have used an inside range actor to lower the threshold for increasing the facets and spark audience attention.
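As a sketch of what that lowered-threshold mapping could look like: the sound level watcher's output is clamped into a range whose floor sits low enough for ordinary room volume to move the facets. Every number here (the range endpoints and facet counts) is an assumption to be tuned, not a value from the actual patch.

```python
def facets_for_level(sound_level, low=0.1, high=0.4,
                     quiet_facets=3, loud_facets=12):
    """Map a 0.0-1.0 sound level to a kaleidoscope facet count.
    The inside-range window [low, high] is deliberately narrow and low,
    so quiet rooms still visibly change the image."""
    if sound_level < low:
        return quiet_facets
    if sound_level > high:
        return loud_facets
    # Linear ramp inside the range.
    t = (sound_level - low) / (high - low)
    return round(quiet_facets + t * (loud_facets - quiet_facets))
```

With `high` at 0.4 instead of near 1.0, conversational noise already sweeps through most of the facet range.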
Cycle 1: Blinking Arrows
Posted: November 12, 2023 | Filed under: Uncategorized

Going into Cycle 1, I knew that I wanted to make a game of some kind, and that I wanted to use an alternative control scheme using the Makey Makey. I was also drawing inspiration from retro games like Snake and Simon/Simon Says. Aside from this, I didn’t have any idea what I wanted my final project to look like, so I decided to treat Cycle 1 as a pressure project: I would play around with Isadora and my ideas so far, and whatever came from it is what I’d demo for class.
My first step was just figuring out what exactly I wanted my project to look like at the end of Cycle 3, so I decided to just look at some retro games that I could potentially recreate in Isadora within the scope of the class. This research brought me to Dance Dance Revolution (not quite retro but close enough) and the idea of a rhythm game, the mechanics of which felt simple enough to produce before the end of the semester.
I started my patch with a rectangle in each corner of the stage, each labeled with a direction (up, down, left, right). I would be using the Makey Makey as my controller (I will decide the actual controls in a later cycle), but these would correspond to the temporary keyboard input for each rectangle’s action. The rectangle UI didn’t feel very intuitive to me, and I didn’t like how it looked, so I scrapped those and changed them to arrows. At this point, I moved on to creating the game mechanics, starting with getting the arrows to blink based on the user’s input. Below is a picture of the patch for just one of the arrows to demonstrate how I got this to work.
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/11/Screen-Shot-2023-11-12-at-9.41.53-AM-1024x640.png)
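The blink mechanic in that patch can be sketched as a tiny key-to-arrow state machine. The key bindings below are placeholder assumptions standing in for the eventual Makey Makey inputs:

```python
# Hypothetical temporary key bindings (the real controls come in a later cycle).
KEY_TO_ARROW = {"w": "up", "s": "down", "a": "left", "d": "right"}

def make_states():
    """One on/off blink state per arrow, all starting dark."""
    return {arrow: False for arrow in KEY_TO_ARROW.values()}

def handle_key(key, states):
    """Light the arrow bound to this key; in Isadora a trigger delay
    would switch it back off after the blink duration. Unbound keys
    are ignored and return None."""
    arrow = KEY_TO_ARROW.get(key)
    if arrow is not None:
        states[arrow] = True
    return arrow
```

Each arrow in the actual patch is a copy of this one-key chain, which is why the screenshot shows only a single arrow's wiring.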
During the in-class demo, I mentioned wanting to use a source of input simple enough that a baby could use it, drawing inspiration from my baby nephew. Someone suggested I use a children’s xylophone as my controller, and I really liked the idea, so I will be exploring children’s toys as input methods, most likely in Cycle 3. For Cycle 2, I will be continuing to work on the mechanics of my game.
Cycle 1: Getting Wires Crossed
Posted: November 7, 2023 | Filed under: Uncategorized

For my cycle 1, I decided to tackle a basic interactive element I want my cycle 3 to have: a stove. I started my Isadora patch with a top-down view of an electric stove, where I would connect red shapes that slowly fade in and out based on user interaction. I initially tested with a keyboard watcher, but knew I would eventually attach a Makey Makey to my makeshift stovetop. I used four user actors as the “burners” and used gates to prevent them from being continually turned on and off. Below is a screenshot of one of the burner’s patches.
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/11/image-1-1024x533.png)
I also knew going into this process that I want my cycle 3 to be a house-like experience with odd elements thrown in to encourage interaction. I’ve always been interested in the idea of a “normal house, but move it slightly to the left.” I’m still deciding whether to connect some sort of story to it, but for the moment I knew something had to happen if the user interacted with the stove a certain way. I made it so a pit “opens up” on the stovetop if the user turns all the burners on within 2 seconds. I used the simultaneity actor within Isadora to achieve this.
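The simultaneity check amounts to timestamp bookkeeping: record when each burner was last switched on and see whether all four fall inside the window. The 2-second window is from the patch; the burner names below are illustrative assumptions.

```python
def pit_opens(burner_on_times, window_s=2.0, num_burners=4):
    """burner_on_times maps burner name -> time in seconds it was last
    turned on. The pit opens only if every burner has been switched on
    and all the on-times fall within window_s of each other, like
    Isadora's simultaneity actor."""
    if len(burner_on_times) < num_burners:
        return False
    times = burner_on_times.values()
    return max(times) - min(times) <= window_s
```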
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/11/DAF235EC-B1FE-44CA-83D5-AF62271EA833_1_105_c.jpeg)
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/11/stove-back-cycle-1-real-1024x770.png)
Here are a few pictures I took when I first connected the Makey Makey to the patch. The performance in class was a great experience, and I deeply appreciated the feedback others provided. I especially took note that the framing of having physical stove dials helped make the experience more interactive and special, so I hope to include more physical interaction in my next two cycles.
Body as Reservoir of Secrets, DEMS Cycle 1
Posted: October 31, 2023 | Filed under: Uncategorized

I see the body as a reservoir of ancestral secrets. I have been coaxing out/imaging these secrets as an abstract painter, through gestures and paint. However, gesture is only one vector in this exploration. The body generates streams of data in addition to gesture that can be harnessed and imaged, such as breath, heartbeat, and voice.
I am devising a system that extracts these features and maps them to the movement/color/scale of visual elements, triggering video or image files. These visual elements come together in compositions that will then have video and print outputs, via the capture stage to image and video actors.
In Cycle 1, I mostly concentrated on figuring out Isadora-OSC/gyroscope communications and tuning hardware like webcams and lavalier microphones for feature extraction. The outcome was somewhat successful, but I generated too many input data streams and lacked variables to plug these inputs into. Feature extraction for heartbeat is not very effective using a lavalier microphone.
For Cycle 2, I intend to:
- Focus on creating more variables to plug the generated inputs into.
- Develop greater flexibility in the visuals generated, so that there is more variance in the composition; it can range from very simple to very complex.
- Include a contact microphone for heartbeat extraction.
- Start to integrate the costume into the mix.
For Cycle 3, I intend to:
- Develop a clarified live performance dimension to the work
- Figure out how to feed the captured JPEGs to a printer.
Reflection on Cycle 1
Posted: October 31, 2023 | Filed under: Uncategorized

I set my goal in this class for the end of the semester as “connecting Isadora and Arduino” and “using a webcam as a sensor to operate a motorized object.”
The first step, my goal for Cycle 1, was simply connecting Arduino to Isadora.
I researched on the web (there are some helpful articles and forums) and found that a firmware protocol called “Firmata” can connect Arduino to many kinds of software, including Isadora.
This article explains what Firmata is and how to get/use it.
https://www.instructables.com/Arduino-Installing-Standard-Firmata/
We normally need to write code to operate Arduino, but with Firmata, I can send output signals to, or receive inputs from, Arduino without a coding process.
I downloaded a protocol sketch called “StandardFirmata” and uploaded it to my Arduino.
Here’s the test of Firmata: by clicking High/Low in the Firmata window, I can turn an LED connected to Arduino on and off.
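Under the hood, Firmata is just a small byte protocol over serial, which is how clicking High/Low works without any sketch code of mine. As an illustration (I did not run this against the board), this is how a StandardFirmata digital-write message for an LED on pin 13 is laid out:

```python
DIGITAL_MESSAGE = 0x90  # Firmata command byte for writing a digital port

def digital_write_message(pin, value, port_state=0):
    """Build the 3-byte Firmata message that sets one digital pin.
    Pins are grouped into 8-pin ports; port_state is the current
    bitmask of that port, updated here and returned for reuse."""
    port = pin // 8
    bit = pin % 8
    if value:
        port_state |= 1 << bit
    else:
        port_state &= ~(1 << bit)
    message = bytes([
        DIGITAL_MESSAGE | port,    # command + port number
        port_state & 0x7F,         # low 7 bits of the port mask
        (port_state >> 7) & 0x7F,  # remaining high bit(s)
    ])
    return message, port_state
```

The Arduino Firmata Actor in Isadora is, in effect, composing and parsing messages like this for me.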
Then, I downloaded the “Arduino Firmata Actor,” an Isadora plugin for Arduino.
https://community.troikatronix.com/topic/7176/arduino-firmata-actor
Now I’m ready to connect the two.
I opened Isadora and set up Arduino Firmata Actor (and also some port connection settings) to connect to my Arduino.
Here, I can turn on/off LEDs on Isadora window (so I’m sending output signals from Isadora to Arduino).
Another test: an input signal from a photoresistor (a sensor that detects the brightness of light) connected to Arduino is received by Isadora (sending signals from Arduino to Isadora).
I connected this input to the width of a square, so it interacts with light.
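Connecting a raw sensor reading to a stage parameter is essentially a range-mapping step. A minimal sketch of that mapping (the 0-1023 analog range is Arduino's 10-bit ADC; the 0-100 output range is an arbitrary assumption for the square's width):

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a raw sensor reading into a stage parameter range,
    clamping out-of-range readings to the ends."""
    t = (value - in_min) / (in_max - in_min)
    t = min(max(t, 0.0), 1.0)  # clamp so noise can't overshoot
    return out_min + t * (out_max - out_min)
```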
This very simple connection test was a very big step for me. The two big magics (Isadora and Arduino) are now in sync!
The next step (Cycle 2) will be connecting a servo motor (and/or a stepper motor?) and trying to operate it from Isadora. Then I can connect it to a webcam as a sensor, just like we did in PP2 and PP3.
Cycle 1: Connecting MaxMSP with Isadora (OSC)
Posted: October 29, 2023 | Filed under: Arvcuken Noquisi, Uncategorized

Hello again.
For cycle 1 I decided to make a proof-of-concept test to get MaxMSP and Isadora to work together via OSC. I plan on using MaxMSP for live audio input, which then gets transmitted to Isadora to impact visual output. I plan on running MaxMSP on one computer and Isadora on another, meaning I will have to use OSC over a router network so that the two computers can communicate with each other.
I first needed to know how easy/difficult it would be to make these two software work together.
To start, I pulled a MaxMSP “Insta-theremin” patch from the internet. This patch creates an audio signal based on the computer mouse location (x-axis: pitch, y-axis: amplitude).
It took a lot of googling to figure out which MaxMSP objects and connections are necessary to send OSC. I considered using packages such as odot, but eventually got the “udpsend” object to work without complications. I did not know that the OSC address had to be specifically /isadora/# for non-TouchOSC software to work with Isadora, but once I understood that, it was very easy to transmit MaxMSP input to Isadora.
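To see where that /isadora/# address actually lives, it helps to look at the raw packet udpsend puts on the wire. A stdlib-only Python sketch of the OSC 1.0 encoding for one float on channel 1 (the host/port in the comment are placeholders; Isadora's listening port is set in its preferences):

```python
import struct

def osc_pad(b):
    """Null-terminate and pad to a multiple of 4 bytes, per OSC 1.0."""
    b = b + b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_float_message(address, value):
    """Encode a single-float OSC message: padded address string,
    padded type-tag string ',f', then a big-endian float32."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

packet = osc_float_message("/isadora/1", 0.5)
# send with: socket(AF_INET, SOCK_DGRAM).sendto(packet, (host, port))
```

The trailing number in /isadora/# is what Isadora's OSC listener uses as the channel, which is why other address names silently go nowhere.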
Here is a video of the patch at work (may be loud!):
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/10/image-2-1024x576.png)
On the Isadora side, I used OSC listeners connected to “color maker RGBA” actors and the rotation input of a shape actor; I wanted multiple inputs so that I could instantly see whether the MaxMSP input was truly working. I also had a camera input mixed in with the shape, just to add a bit of complexity to the patch. I had two OSC input channels for the two axes of the theremin. The x-axis (pitch) controls color and rotation, while the y-axis (amplitude) enables and disables the “effect mixer” actor’s bypass (turning the camera input on and off). This made it very easy to tell whether Isadora was following my mouse location.
Though the stream of numbers coming into Isadora looks delayed at times, I could tell from the stage output that there was essentially no latency over the localhost system. For my next cycle I will have to 1) test whether MaxMSP and Isadora can communicate with each other across computers on a routing network, and 2) start working on a more interactive audio input in MaxMSP, either using granular synthesis with microphone input or a sequencer that can be altered and changed by human input.
PP2/PP3: Musicaltereality
Posted: October 29, 2023 | Filed under: Arvcuken Noquisi, Pressure Project 2, Pressure Project 3, Uncategorized

Hello. Welcome to my Isadora patch.
![](https://dems.asc.ohio-state.edu/wp-content/uploads/2023/10/image-1-1024x596.png)
This project is an experiment in conglomeration and human response. I was inspired by Charles Csuri’s piece Swan Lake – I was intrigued by the essentialisation of human form and movement, particularly how it joins with glitchy computer perception.
I used this pressure project to extend the ideas I had built from our in-class sound patch work from last month. I wanted to make a visual entity which seems to respond and interact with both the musical input and human input (via camera) that it is given, to create an altered reality that combines the two (hence musicaltereality).
So here’s the patch at work. I chose Matmos’ song No Concept as the music input, because it has very notable rhythms and unique textures which provide great foundation for the layering I wanted to do with my patch.
Photosensitivity/flashing warning – this video gets flashy toward the end
The center dots are a constantly-rotating pentagon shape connected to a “dots” actor. I connected frequency analysis to dot size, which is how the shape transforms into larger and smaller dots throughout the song.
The giant bars on the screen are a similar setup to the center dots. Frequency analysis is connected to a shapes actor, which is connected to a dots actor (with “boxes” selected instead of “dots”). The frequency changes both the dot size and the “src color” of the dot actor, which is how the output visuals are morphing colors based on audio input.
The motion-tracking rotating square is another shapes-dots setup which changes size based on music input. As you can tell, a lot of this patch is made out of repetitive layers with slight alterations.
There is a slit-scan actor which is impacted by volume. This is what creates the bands of color that waterfall up and down. I liked how this created a glitch effect, and directly responded to human movement and changes in camera input.
There are two difference actors: one of them is constantly zooming in and out, which creates an echo effect that follows the regular outlines. The other difference actor is connected to a TT edge detect actor, which adds thickness to the (non-zooming) outlines. I liked how these add confusion to the reality of the visuals.
All of these different inputs are then put through a ton of “mixer” actors to create the muddied visuals you see on screen. I used a ton of “inside range,” “trigger value,” and “value select” actors connected to these different mixers in order to change the color combinations at different points in the music. Figuring this part out (how to actually control the output and sync it to the song) took the majority of my time for pressure project 3.
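The inside range → trigger value chain is essentially binning a value: whichever range the current level falls inside decides which mixer preset fires. A hedged sketch of that idea (the range boundaries and preset names are made-up placeholders, not the patch's real values):

```python
def pick_mix(level, bins=((0.0, 0.2, "mix_a"),
                          (0.2, 0.6, "mix_b"),
                          (0.6, 1.0, "mix_c"))):
    """Inside range + trigger value in one step: return the mixer
    preset whose [low, high) range contains the current level."""
    for low, high, name in bins:
        if low <= level < high:
            return name
    return bins[-1][2]  # fall back to the loudest preset
```

Syncing this to the song then becomes a matter of choosing bin boundaries that line up with the track's dynamics.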
I like the chaos of this project, though I wonder what I can do to make it feel more interactive. The motion-tracking square is a little tacked-on, so if I were to make another project similar to this in the future I would want to see if I can do more with motion-based input.