Cycle 2 – Perspective drawing tool
Posted: November 22, 2022 Filed under: Uncategorized

I decided to move from the small handheld book form to the motion lab for my next cycle, since I have been working with my own students there. I decided to explore the possibility of designing tools for teaching drawing that involve observation, collaboration, and movement in the process. My initial plan used 2-point perspective as a starting point. Teaching 2-point perspective is typically a guided, step-by-step drawing process, most often experienced with a pencil and a ruler. A teacher guides students through drawing a horizon line and two vanishing points, then demonstrates how to draw a simple dimensional cube, often using a document camera to project a larger live-capture image of the teacher's actions. Once students understand the basic process, they are set free to create on their own.
I scaled this process up and envisioned the motion lab as a large-format document camera. I set up a drawing space in the center of the space, directly under a bird's-eye-view camera. In the image below you can see that I projected the live feed on the main projection screen and applied a TT edge detect filter in Isadora. In the photo you can see students using ropes, each tied to one of the points on our horizon line that bisects the image.
The patch I created was fairly simple, see below:
After the initial explorations with the single video in watcher, I layered in another feed from a camera on a tripod directed at a whiteboard. This is when participants began to play even more with what was happening on each screen and how the two feeds interacted in the projection.
For my final 3rd cycle, I want to continue to build on and play with this tool. My initial idea for pushing it into another phase is to focus on observational drawing and the figure/body in motion, perhaps shifting the output 90 degrees so the drawers can face the large projection screen head-on. I would use two cameras again, but one would be directed at a body moving in space, and those drawing would try to capture the movement and gestures on the page. I also want to add in the live drawing tool and see how that adds another layer of complexity and interest to the projected image.
Cycle 1 – Ghosts in the Landscape – Mollie Wolf
Posted: November 2, 2022 Filed under: Uncategorized

For cycle 1, I decided to attempt an interactive portion of my installation. My initial idea was to use an Astra or Kinect with a depth sensor to make the background invisible and layer live-capture footage of audience members into a video of a landscape.
When I asked Alex for help with background subtraction, he suggested I use a webcam instead. We found an example patch that Mark had posted on the TroikaTronix forum for background subtraction, so I started there.
He essentially used an effect mixer with a threshold actor to read the data from a video in watcher and identify when a person had entered the frame, by watching for a change in light compared to a freeze-frame grab of the same image. Then he used an add alpha channel actor to mask anything that hadn't changed (i.e., the background).
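Outside Isadora, the core logic of that patch can be sketched in a few lines of Python. This is only an illustrative model, not the actual patch: the threshold value and the tiny 2x2 "frames" are placeholders. Pixels that differ enough from the frozen reference frame are kept; everything else is masked out as background.

```python
# Sketch of threshold-based background subtraction against a frozen reference
# frame. Frames are modeled as 2D lists of grayscale values (0-255).

def subtract_background(frame, reference, threshold=30):
    """Return a mask: True where the live frame differs enough from the
    frozen reference (i.e. where a person has entered)."""
    return [
        [abs(p - r) > threshold for p, r in zip(row, ref_row)]
        for row, ref_row in zip(frame, reference)
    ]

def apply_alpha(frame, mask):
    """Keep foreground pixels; make unchanged background transparent (None)."""
    return [
        [p if keep else None for p, keep in zip(row, mask_row)]
        for row, mask_row in zip(frame, mask)
    ]

reference = [[100, 100], [100, 100]]   # freeze-frame of the empty room
live      = [[100, 200], [100, 100]]   # a bright figure enters top-right

mask = subtract_background(live, reference)
composited = apply_alpha(live, mask)
```

The finickiness described below follows directly from this model: a white shirt or light-reflecting skin can sit close to the reference brightness, so the difference never crosses the threshold and the pixel is wrongly masked.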
Here are photos of my version of Mark’s background subtraction patch.
When I first used this patch, it was a little finicky: sometimes only showing portions of the body, flickering images of the background, or showing splotchy images. I noticed that it had a hard time with light colors (if the person was wearing white, or if their skin was reflecting the light in the room). I tried to fix this by adjusting the threshold number and by adding a gaussian blur to the image, but it was still more finicky than I'd like.
When I brought this issue up to Alex, he suggested I use a difference actor instead to do something similar. With the difference actor, however, the patch mostly picks up the outline of bodies rather than the whole image of a person. I decided that I actually liked this better, aesthetically anyway: it projected ghost-like images into the landscape rather than video images of the people.
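The key distinction is what the difference is taken against. A rough Python sketch (illustrative values, not the actor's actual internals): where the earlier approach compared against a fixed reference frame, a difference actor compares each frame to the previous one, so only motion registers. In real footage, the interior of a uniformly colored body barely changes from one frame to the next, so mostly the moving outline survives.

```python
# Sketch of frame-to-frame differencing, which responds to motion rather
# than to how a frame compares with an empty-room reference.

def frame_difference(current, previous, threshold=20):
    """Return the per-pixel difference from the previous frame, zeroed
    wherever the change is below the threshold."""
    return [
        [min(255, abs(c - p)) if abs(c - p) > threshold else 0
         for c, p in zip(crow, prow)]
        for crow, prow in zip(current, previous)
    ]

still = [[50, 50], [50, 50]]
moved = [[50, 180], [50, 50]]   # a figure moves into one pixel

ghost  = frame_difference(moved, still)   # motion shows up as a bright trace
frozen = frame_difference(moved, moved)   # same frame twice: nothing registers
```

The `frozen` case is also why a person who stops moving vanishes from the projection: with no change between consecutive frames, the difference is zero everywhere.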
Here are photos of my current patch, using a difference actor. (The top line with the flip, freeze, effect mixer, etc. is still there from the previous attempt, but its projector is not active, so you can ignore it.)
I think this method worked way better because it was no longer dependent on light in the room, and instead just on motion. (Which is way better for me, considering that I’m practicing this all in the Motion Lab with black walls and floors, but will be presenting it in Urban Arts Space with white walls and concrete floors). Conceptually, I like this version more as well, as it is an abstraction of people’s bodies and action, rather than a direct representation of it.
As a last minute experimentation, I laid out some platforms in the space and roughly lined them up with where there were mounds/hills in the landscape – that way if someone stood on the platform, it would look like their corresponding ghost image had climbed up the hill.
When I presented this version to the class, I was pleased to see how playful everyone became. With it just being an outline of their bodies, they were not looking at an image of themselves, and so there didn’t seem to be any amount of self-consciousness that I would expect if everyone were looking in a mirror, for example. People seemed to have genuine delight in figuring out that they could move through the landscape.
One interesting thing about the difference actor is that non-moving bodies blend into the background (with no motion, there is no difference, frame by frame). So when someone decides to sit or stand still rather than moving around, their ghost image disappears. While this is an inevitable aspect of the difference actor, I kind of like the metaphor behind it: you have to actively, physically engage in order to have an effect on your environment.
We spent some time with the camera in a few different locations (to the side of or in front of people). As we discussed it together, we came to a consensus that it was more magical/interesting when the camera was to the side. It was less obvious how to move (i.e., not as easy as looking into a mirror), which added to the magic. Having the camera to the side also meant that the landscape image was not obscured from view by anything; people only knew where the camera was if they went searching for it.
Here is a video of my peers interacting with the patch/system.
cycle 1: infinite seeing : feedback loops
Posted: October 31, 2022 Filed under: Uncategorized

Cycle 1 Documentation: Katie O’Loughlin
10.27.2022
This first cycle took place in the Motion Lab and was my first experiment with multiple feedback loops happening simultaneously. Originally, I had wanted to try a double feedback loop, which had the scrims in a circle, two cameras facing each other, and two projectors above each camera. But, as it goes, one hand-held camera capture card decided to quit working, and I was left with only one hand-held camera, eliminating the possibility of that idea. I mentally scrolled through the resources I did have, and ones I felt confident working with on my own, which included the single hand-held cam, a top-down cam, a top-down projector, and the regularly used circle and front projectors.
So, instead of facing feedback loops, I put the front projector on a standard loop, and then the top-down camera and projector on their own loop that was projected onto a white sheet laid out on the floor. In addition to the floor, I also projected the top-down feed onto a hanging scrim in the space, which was at a 90-degree angle to the front scrim. My goal was partially met, as there were two feedback loops happening, but as they were not facing each other they weren’t picking up the other loop within their own, which was what I was curious about.
The top-down set up was a unique feedback loop, as the frame of the projector was shooting down, and the frame of the camera was turned at a 90-degree angle and shooting down. Instead of that feedback loop being a replica, it ended up being a rotated loop.
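One way to picture why the loop comes back rotated: on each pass through the system, the camera re-captures the projected image turned 90 degrees relative to the projector's frame. A small Python sketch of that geometry (the 2x2 grid is just a stand-in for a video frame):

```python
# Sketch of a rotated feedback loop: each pass through projector-and-camera
# applies a 90-degree rotation to the captured frame.

def rotate90(img):
    """Rotate a 2D grid 90 degrees clockwise (the camera's frame turned
    relative to the projector's)."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]

once = rotate90(img)     # the image after one pass through the loop

four = img
for _ in range(4):
    four = rotate90(four)  # four passes return the frame to its start
```

Because four quarter-turns compose to the identity, the loop keeps re-rotating copies of itself into a spiraling pattern rather than producing a simple replica.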
Once people entered the room, it was interesting to see how they interacted with the loops, as they could clearly see their own impact on the image. There was a definite sense of play in their interaction, which I want to hold onto if possible. I manipulated the front projector’s image in Isadora once everyone had landed in the space, mostly by rotating the image and zooming it in/out. The rotation of the image caused a spiral in the loop which would turn into a square at a certain angle. I enjoyed the idea that both the person in the space and the person behind the computer could choose to change the image.
I mentioned to Alex that I was realizing just how impactful lighting is for this work, as it will totally blow out an image’s loop if angled or intensified in a specific way. After I said this, he suggested I turn off all the lights and the front projector, since the front projector was at a much higher resolution than the down projector and the whole top-down setup was getting lost in comparison. When I did, the light from the down projector was enough to create the feedback loop on the floor, and at a much brighter contrast, making it easier to see the results of the loop.
As people began to play with that loop, we realized that they were creating an analog version of the loop because the camera would catch the movement, create a loop, and then continue to see its own loop even after the person moved away from the camera. It created an amoeba-looking shape that would stay for a while and then eventually burst into static before it lost the loop.
I was intrigued by this result, as the top-down was my back-up plan that I wasn’t too excited about. Because I had it turned on alongside the other loop, its continuous feedback had been getting blown out by the surrounding light. Once we could see it more clearly, the loop got longer and more complex, and we could see the reaction to our bodies’ movement on the scrim.
One piece of feedback I received was that there were many “frames” within the experience. Looking at the space, both hanging scrims were rectangles, the cloth on the ground was a square, and the loops were also looping the rectangular frame within themselves. The camera had its rectangular viewfinder open, and the scrims were at a 90-degree angle from each other. I appreciated the idea that it felt like everything was a camera frame, as how we are seen through a camera is a facet of my research. That being said, I’m glad it was pointed out because I’m not sure I want such a square space. I wished I could have played with the circle projectors and scrims but will try to implement them into cycle two.
I’m glad I showed what I did because new aspects of the feedback loop showed up. It gave me a bit more motivation to move forward into the next cycle. My goals for cycle two are to set up the wireless router and get the NDI Watcher working through my phone and Isadora. That way, people can play with moving a camera around in the feedback loop.
Cycle 1: Puzzlr Prototype
Posted: October 27, 2022 Filed under: Uncategorized

Project Brief
For the final project, I’ve decided to create a fun and challenging puzzle that will stimulate the player in a variety of ways through 3 distinct play modes. The puzzle consists of six pieces arranged in a hexagonal shape. The goal of each round will be to complete the puzzle while navigating changes in the sensory elements of the puzzle. My research thread focuses on microcontroller inputs and the conceptualization of touch, so I see this project as a way to explore creating micro-interaction systems through physical objects.
The first round of the puzzle will be basic; the player just has to complete the puzzle by getting all the pieces onto the board in the correct configuration. The contacts on the bottom of the pieces and the board itself will give a slight hint as to how to approach the puzzle (more on that later). There will be both a visual and auditory feedback component informing the player about their progress.
The second round will involve covering the player’s hands so that they can only use the visual/auditory feedback from the system rather than looking at their hands. The pieces will also include unique textures that will be mirrored on the screen to guide the player on which textures go where on the board.
The third and final round will involve blindfolding the player so they will have to rely on the textural elements of the pieces coupled with the auditory feedback. In this section, the auditory feedback for each piece will be unique so players can keep track of what pieces go where. This final level is intended to be very difficult and will be used mostly as an experiment.
Production & First Prototype
For the first phase of iteration I created a schematic and then used it to laser cut a preliminary model of the puzzle and circuit housing. After laser cutting and setting everything up, I realized some parts of the first prototype needed to change:
- The engravings on the bottom plate are too narrow and can’t house the wires and cables necessary for the puzzle piece contacts.
- The engravings on the bottom of the puzzle pieces are on the wrong side because I forgot to flip them before printing.
- The circuit layout is confusing and requires more thought about how the ground/input wires will be set up in the final board.
After thinking about those changes, I modified the schematic and created a new one featuring a new build layout, a new circuit component, and a new circuit layout. I decided to give the puzzle two levels: one for the circuits and one to put the puzzle pieces on. This allows more room for cables and components under the puzzle and ensures a clean look on top. The new circuit component is a little adapter (in the middle of the schematic) that will serve as a bridge between the circuit housing and the top of the puzzle. The component takes the ground and input cables and attaches them to two pieces of foil sticking out at the top; each puzzle piece will have a piece of foil underneath that, when placed on the contact, completes the circuit.
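As a sketch of how the board's side of this could read out, here is some illustrative Python. The pin numbers, the six piece names, and the `read_pin` function are all assumptions for the sketch, not details from the actual build; the point is only that a bridged foil contact reads as a closed circuit on one input per position.

```python
# Hypothetical readout logic for the hexagonal puzzle board: each position's
# foil contact closes one microcontroller input when its piece is seated.

PIECE_INPUT_PINS = {"A": 2, "B": 3, "C": 4, "D": 5, "E": 6, "F": 7}  # assumed

def puzzle_state(read_pin):
    """read_pin(pin) -> True when the foil bridge closes that circuit."""
    return {piece: read_pin(pin) for piece, pin in PIECE_INPUT_PINS.items()}

def is_complete(state):
    """The round ends when every position's circuit is closed."""
    return all(state.values())

# Simulate pins 2-4 closed (pieces A-C placed, D-F still missing):
closed = {2, 3, 4}
state = puzzle_state(lambda pin: pin in closed)
```

On a real microcontroller, `read_pin` would be replaced by the board's own digital-input call, and the same per-piece state could drive the visual and auditory feedback described above.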
Next Steps
The next step in the project is to make the second prototype and get the circuit laid out and working for the next cycle!
Cycle 1: Book to map?
Posted: October 27, 2022 Filed under: Uncategorized

Exploring the book form and text that is interactive and triggers audio. Questioning the form. Leaning towards a map . . .
How do I design an experience for someone to engage with my research in a way that breaks from the typical dissertation format and reading printed text on a page?
Should the experience be intimate? Like our experience reading a book where it is just us and the text? Or, relational to both the text (object), ideas, researcher and reader?
What structure could facilitate a non-linear engagement with the “text” that is different for each individual?
How does one experience the researcher’s voice?
I considered a book and an accordion-style book but these seem limiting. Is a map an option? What are the aesthetics? How do I include layers? How does the experience facilitate connections across distance? A distancing from the research itself and the researcher?
What do I like about the book format?
It relates to a way we come to know things through reading.
A physical book must be held in one’s hand.
A book has weight. Value. Symbolic of knowledge.
What do I like about a map? The process of reading a map?
It still has boundaries and borders, but can be entered from any point? Could include a hierarchy using scale and volume.
It can still be held; however, can an interactive map be hand-held in a convenient way (other than a digital map on our phone)? I like the idea of the reader following along with their finger to “trace” the lines of text, which also triggers and “generates” the text.
What about a wayfinding format? Like those trailhead maps? Something tactile with “triggers” built in that combine reading with audio that fills in the spaces between maybe? As you follow from one point on a “path” to another, you make a connection and the audio plays and leads you there.
Tactile topographic maps. . .
If this and this, then this. . .
Point A (Idea A) draw line to point B (Idea B), space between how A relates to B (audio).
Trigger through touch. Running your finger over the text or connection line. Touching on two points to draw a connection between them.
Recording: Idea A, connection, Idea B. Following the route. Using the text to draw the line between them and make the connection.
Still pre-determined by the researcher, who is in control of what the ideas are and how they relate/connect. The reader is in control of deciding what they are interested in making connections between. I could imagine what this might look like in a VR space, potentially, but what about a physical space?
How can a standard, university-defined, formatted dissertation become more experiential?
What if the “texts” were not just printed text and audio, but also video and photographic artifacts?
Aesthetics and materials, topography, cast tracing paper, Plexi, watercolor, cut-out text, wires visible? Connections and “behind-the-scenes” transparent, process exposed, making thinking visible, translations, sound and video/photo projection on the “map”? Is there a backtrack? Could there be subtle changes to the backtrack that are affected by the “routing” on the map?
Is it on a wall, positioned on an angle much like a podium at a height about waist-high?
Intermedia map that involves human touch, listening, hearing, and viewing, prompts one to speak?
Is a dimensional, topographic map a good solution? The lines between could be graphite or could be the actual wire that is couched down with embroidery floss. The wires could be “embedded” into the layers of the tracing paper casting so they could be hinted at, and visible, but also underneath the surface. How do you ground the viewer in this case? What if they had to hold a “fake” pen/pencil that connected to the ground and use that to trace? I prefer the idea of using their finger directly to trace though. Maybe they are asked to touch a “key” of some sort as they are also then touching a point and following it with their finger.
How could each point be wired up with a way for a finger to complete the whole circuit?
Some feedback: My next step is to find a simple story to “map” and test out the form. Does the map form work? Physical hyperlinks to audio, video, and image files. Connections to braille, reading in different ways, audiobooks, how we experience making connections between things, our own personal experiences, choose-your-own-adventure.
Cycle 1 documentation – Dynamic Cloth – prototype
Posted: October 27, 2022 Filed under: Uncategorized

Since my research is based on how shared mixed-reality experiences help us relate to virtual worlds, I wanted to use this project to create an experience where users can collaboratively affect and respond to digital cloth shapes through body-movement tracking in the Motion Lab. I love creating experiences and environments that blend physical and virtual worlds, so I thought this would be a good way to explore how physical surroundings and inputs impact virtual objects, and an interesting way to explore how interactive technology creates performative spaces.
Since at the time (and now) I was still a beginner in Isadora, I didn’t really have an idea how to go about doing this, and I didn’t know whether Isadora was even the right software to use or whether I should just be using Kinect in a game engine. My goal is to have the users affect the virtual cloth in real time; not knowing how to do this, I initially thought one option could be pre-rendering the cloth simulations and then using the Kinect inputs to trigger different animations to create this dynamic effect. However, after learning how to import 3D models into Isadora and affect lighting, I realized that I would be able to trigger real-time changes to 3D shapes without using pre-made animations. I might still use animations if I think they can benefit the experience, but after the progress I’ve made so far, I have a sense of how to make this work in real time.
After deciding I needed Kinect and Isadora for this experience, I got some help from Felicity installing the drivers for Kinect on the computer in the classroom so I could begin working on an early prototype. After that was set up, I first learned how to import 3D models into Isadora, because I hadn’t figured that out during PP1. I imported a placeholder cloth model I made a few months ago and used it to begin figuring out how to cause dynamic changes with Kinect input. Initially, I hooked the Kinect tracking input to the 3D light orientation, and I was already happy with where it was going, since it felt like I was casting a shadow on a virtual object through my movement. But this was just a simple starting point:
After this, I wanted to test changing the depth of the shape through position and motion, so I thought a good initial approach would be plugging the movement inputs into the shape’s size along the y-axis, to make it seem like the object is shrinking and expanding:
I took this approach to the previous file and currently I have the Kinect input impacting the lighting, orientation, and y-axis size of the placeholder cloth shapes. In the gif below I plugged in the movement inputs to the brightness calculator, and when I’m further away from Kinect and when more light is being let in, the shapes expand along the y axis, but when I get closer and it gets darker, the shapes flatten down, which feels like putting pressure on them through movement:
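The kind of mapping at work here, whatever the specific Isadora actors, is a clamped linear scale from a Kinect reading to a shape parameter. A Python sketch, with placeholder input and output ranges (the distances and scale values are illustrative, not from my patch):

```python
# Sketch of a clamped linear mapping from a Kinect distance reading to the
# cloth shape's y-axis scale: far away -> tall, up close -> flattened.

def scale_value(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] to [out_min, out_max],
    clamping at the ends so extreme readings don't blow up the shape."""
    value = max(in_min, min(in_max, value))
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

# Assumed ranges: 0.5-3.0 meters from the sensor maps to a 0.1-1.0 y-scale.
far_scale  = scale_value(3.0, 0.5, 3.0, 0.1, 1.0)  # far: shapes expand
near_scale = scale_value(0.5, 0.5, 3.0, 0.1, 1.0)  # close: shapes flatten
```

The same mapping shape, with different output ranges, can drive the light orientation and brightness inputs at once, which is how one movement stream can affect several properties of the cloth simultaneously.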
I’m happy that I’m figuring out how to do what I want, but I want this to be a shared experience, with multiple users’ data influencing the shapes simultaneously, so the next step is to transition from the computer where I made the prototype to the Motion Lab, where I want the experience to be. Currently, Isadora on the Motion Lab computers needs to be updated to the version we are using in class, so I will remind Felicity about that. After the setup is done again, with both Isadora and Kinect, I will keep working on this in the Motion Lab and modifying the patch based on that environment, since they are going to be interdependent. I also finally managed to renew my Cinema 4D license yesterday, so this upcoming week I want to make the final models (and animations?) for this project and replace the current placeholders.
Feedback from the presentation:
I appreciated the feedback from the class, and the patch I currently have seemed to produce positive reactions. The comments I got included that it was intriguing to see how responsive and sensitive the 3D cloth model was. It was nice to see that a lot of people wanted to interact with it in different ways. I realized I need to think more about the scale of the projection since it can impact how people perceive and engage with it.
Apollo 11 Launch Sound Experience
Posted: October 25, 2022 Filed under: Uncategorized

Preparing For Launch
The Moon. Arguably one of the greatest achievements in the history of space travel. It’s wild to think that people have been on that little circle of cheese in the sky, and even wilder that we’re going back. Space has always been a huge passion for me. So much so that one of my most cherished childhood possessions was my little season pass to the Burke Baker planetarium at the Houston Museum of Natural Science; whenever there was a new film or experience, my mom and I would go and spend the day gazing at the stars.
My love for science and space travel continued into high school, where I have vivid memories of afternoons with my science teacher Dr. Cote, talking about rockets and the latest NASA news; we even set up a watch party when the Curiosity rover landed back in 2012. All of these events and interactions with the wonders of space have stayed with me my whole life. So, for the sound experience project, I chose the monumental event of the Apollo 11 Moon landing.
Mission Log
As soon as I started the project I knew I wanted to use a piece from the Interstellar soundtrack. It is hands down my favorite movie of all time, and I absolutely love the soundtrack created by Hans Zimmer. The core style of the soundtrack gives me chills every time I hear it, and it sparks a truly unique sense of wonder and curiosity.
The problem I now faced was which song to use. If you’ve seen Interstellar, then you’re probably picturing the film in your head, thinking of some core moments throughout the movie. As I did this myself, I remembered that the track I play most often is Cornfield Chase. This is the scene where Cooper, Murph, and Tom are chasing a mysterious drone through rows and rows of corn. Take a listen below:
With my song selected, I got right to work collecting sound artifacts relevant to the launch. I knew that NASA kept audio recordings of a lot of famous missions, so I started downloading and trimming to get the very best bits.
I also wanted a sort of preface to the mission recordings, something that could spark wonder and simulate that same fuzzy feeling I get when I watch the film and think about space. After perusing the events that led up to the launch, I remembered that John F. Kennedy’s Rice University speech would be perfect for the job. I trimmed the speech and extracted this memorable line:
A couple more audio bits were collected to form a short chronological story into the Apollo 11 launch. My goal was for listeners to feel like they were progressing through this historic event, just as humanity progressed after JFK’s speech. I cut together countdown audio as well as some other famous lines from the astronauts themselves.
But instead of giving you more tidbits, listen to the full experience for yourself. Make sure you’re sitting down because you may get some intense goosebumps.
Mission Successful
I really loved the outcome of this project. It helped me experiment with audio like I never had before. I also have a strong interest in accessibility so getting to do a project like this helped me flex my universal design muscles. In addition to the audio experience, I also created a visual component because I wanted to include some enhancements to the audio if people chose to view it. Sorry in advance for the weird aspect ratio!
I wouldn’t change anything about my final product. I love this audio experience and I plan on posting it somewhere to share with the other kids that have a season pass to the Planetarium. Experiences like this are what sparked my love for space, and I can only hope to do the same for others.
PP3 Sound project
Posted: October 25, 2022 Filed under: Uncategorized

For this project, my first idea was to work with a story that I always remember first when thinking of books and stories that stuck with me when I was younger. The story I chose to represent through sound is from the book Palle Alone in the World by the Danish writer Jens Sigsgaard. Since I never read this book in English, I know it as “Pale sam na svijetu,” and I always remembered the look of this cover because I liked the illustration style in the book:
Since my perception and understanding of the book has always been based on the visual, I thought it would be interesting to imagine what the events in the book would sound like. I always associated the book with happy memories, but just thinking about recreating it through sound, I could tell that it would probably sound kind of daunting, portraying an overall stressful experience. In the book, the boy Palle discovers he is totally alone in the world, so he goes on doing whatever he wants without any restrictions: he tries driving cars (and even crashes one), he steals money from the bank, he eats all the food he can in the grocery store… The approach I took in the project was to depict the sounds of his actions and experiences in the order that they occurred, condensed into 3 minutes, and I also overlaid some slow piano music to create a dreamlike mood, since at the end of the book we find out that this was all just a dream.
I used the sound level watcher in Isadora to listen to the sound of the piece and used that to distort the picture of the book cover. I did this because, as I was working on this project and listening to what happens in the book, my perception of the book, and of how it would feel to be Palle, started to change.
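A rough sketch of the idea in Python: measure the loudness of the incoming audio (roughly what a sound level watcher reports) and map it to a distortion amount for the image. The function names, sample values, and ranges are illustrative, not taken from the patch:

```python
# Sketch of driving an image-distortion parameter from audio loudness.

def rms(samples):
    """Root-mean-square loudness of a buffer of samples in [-1.0, 1.0]."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def distortion_amount(level, max_level=1.0, max_distort=50.0):
    """Map a loudness level to a distortion amount, capped at max_distort."""
    return min(level / max_level, 1.0) * max_distort

quiet = [0.0, 0.1, -0.1, 0.0]     # soft passage: little distortion
loud  = [0.8, -0.9, 0.7, -0.8]    # a car crash in the story: heavy distortion
```

Louder moments in the story push the cover image further from its original form, which is what makes the visual feel tied to the events rather than just playing alongside them.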
Just the visual:
On the day we were showing the projects, I couldn’t use my Isadora file on the Motion Lab computers because I was using some effects that I had previously installed as plugins on the classroom computer, so I didn’t end up showing the visual portion during the experience. I wish I could have, but it was still very interesting to hear people’s reactions even without the visual. The visual was quite abstract if you didn’t know it was a book, but I think the picture does provide some context of “children’s book,” and it also gives the piece further flow, since it is constantly moving. Maybe I thought this because I personally prefer to have something to look at, but I realized that’s not a universal preference. Based on the comments I got, a lot of people understood the moods and the narrative I was trying to convey, which was good to hear. The comments included observations that the events are linear, occurring in a specific sequence and happening right after one another, not in a way you would normally expect, but still feeling continuous; interpreting the sound through a child’s perspective; a feeling of uneasiness; and getting invested in some sounds more than others, like the sound of eating, walking on grass, or unwrapping a chocolate bar. Another interesting aspect of the experience was being able to add and manipulate the lighting as the audience listened, which I hadn’t thought of before, because I initially wasn’t planning to show it in the Motion Lab but decided to after hearing how immersive the other sounds were in there.
I also remember thinking of this book sometimes when Covid first started when I had a very bad experience being stuck in a house with toxic and insane roommates and not being able to see my friends, so for me this book also relates to this time period.
PP2 – QuakenShake – Katie O
Posted: October 25, 2022 Filed under: Uncategorized

For Pressure Project Two, the assignment was to choose a moment that was culturally impactful for you and tell its story 99% through sound. I chose to do some research on the 1964 earthquake that hit Alaska, which my mom lived through, and pair it with the recent 2018 earthquake that hit right outside of Anchorage. I found clips from the news sources that covered both earthquakes and bounced back and forth between the two as they described the details of each quake and the impact it had on the land and the community.
I put up a variety of photos from both events that were collaged together, showing the buildings and roads that had been destroyed. I felt curious about how distant humanity has become to natural disaster events as we see many of them in the media but do not necessarily experience them ourselves. I remember not thinking that hard about the 1964 earthquake while my mom described her experience, but once I lived through the 2018 earthquake, I began to see the 1964 in a different light. My empathy grew.
Although the recordings were from two different news sources, the audience said they couldn’t necessarily tell until closer to the end that the piece was bouncing between the two different earthquake experiences. I’m guessing a visual would have supported that, but the audience seemed to eventually put it together just from the image of the old cars and the grainy quality.
I received feedback that, although I mentioned my mom’s and my story at the beginning of the experience, they would have liked to hear more, and possibly to end with our story as well. I had plans and notes to give more information, but to be honest and human, I couldn’t quite handle being on a microphone that day. I can also feel a sensitivity to feedback in class right now, which I’m hoping will shift as I continue to get to know the class better. I knew I would be able to hold critique of the piece, but I did not feel confident I could hold critique of my own voice, and I wasn’t sure how deep the group would go that day.
I’m glad we did this project. I like sound and don’t give it the time I wish I did, so I’m glad I was pushed into it. I think we covered some important aspects of experience while creating and participating in this project.
PP2: Inviting Intimacy through Sonic Storytelling – Mollie Wolf
Posted: October 25, 2022 Filed under: Uncategorized For this pressure project, I wanted to work toward my project for Cycle 1, building the sonic storytelling element I’ve been imagining. The idea I have in mind is some sort of cordoned-off/walled-off space that feels private. Perhaps there will be a comfy chair, perhaps a small screen playing a personal film for one person at a time, perhaps it will just be an area surrounded by plants, with only the sonic storytelling happening. The point is to create a sense of intimacy: a moment in which one audience member at a time can experience something between themselves and their real/imagined environment.
I used a few different sound recordings I have of Frankie Tan (my friend/collaborator, with whom I traveled to Malaysia this summer) telling a story she wrote about herself and her relationship to the jungle in Penang. I decided that I wanted the sense of intimacy to increase as time goes on, so I started with a recording of her speaking aloud; slowly, by the time we reach the end of the story, she has transitioned into whispering to her listener. My plan is to play Frankie’s voice/story on a small, local speaker that only the one listener can hear, so that this truly is a moment that they alone get to experience.
Here is a recording of Frankie’s story (I started this audio at 3:57 when I did the presentation).
Then I looped a sound recording I have of the Penang jungle at night to play throughout a larger space, to really surround the listener (the individual one, as well as others in surrounding areas) with the sounds of the jungle that Frankie’s story references.
Here is a recording of the Penang jungle.
When I presented the story to my peers, I didn’t let them know about my goal of intimacy, and I presented it to all of them at once. I placed the Bluetooth speaker with Frankie’s story near where my audience was sitting, and played the jungle sounds through the whole room.
Here is a recording of the two playing together, in the same space.
I was pleased that the sense of intimacy was apparent to my audience. Some of them mentioned wanting to be closer to the speaker and feeling the impulse to ‘lean in.’ One of my peers said that the content of the story felt like an intimate conversation between this person (Frankie’s character, Noon) and the forest.
Alex mentioned that there was not only intimacy but also tension; he noticed the word ‘hate.’ I appreciate this as well, because it is purposeful. So much of my thesis project in general is about this: the concept of ‘the wild,’ the simultaneous allure and repulsion that we feel toward the natural world, and the behaviors and concepts we have been socialized into that create a distance between ourselves and nature. There is a love/hate present. There is an internal struggle for the Western body between desire, responsibility, and ignorance when it comes to ‘the wild,’ so absolutely yes, the tension in this story is purposeful. Intimacy does not mean a lack of tension.
Feedback from my peers that I want to keep in mind is that it was confusing, hard to follow, or distracting when I had Frankie’s voice layered over itself. I wonder whether there is a way I can play with it more, so it feels like it’s okay that you’re not catching every word, or whether I should just not layer her voice at all…