PP3
Posted: November 9, 2018 | Filed under: Uncategorized

Task: For our third pressure project, we were tasked with creating a mystery that did not use a keyboard or keypad. I decided on a basic interactive 360 photo mystery. I found this task to be a good exploration of 360 photos, including how they work in the context of Google Maps and how 360 images of public places that have been placed on Google Maps can be called up. I also learned a bit about text-to-speech, since the experience has no written dialogue, only spoken audio.
Process: For this project, I started with a map of California. I wanted to take the user on a scavenger hunt up and down the coast with the “mystery” being a theft taking place at the Getty Museum in Los Angeles. I also wanted the user to get a sense of the environment.
I started by drawing the cities, environments, suspects, and the items that would be used in discovering the thief. I wanted the scavenger hunt to take the user to cities and landmarks in CA.
Once I had the ideas written out, I found the images of the environments I was going to use. The environments helped dictate which pictures of individuals would be used and represented. After I had all the images together, I wrote some dialogue for the suspects. Once I had all the elements, I brought them into Unity and linked them up using some basic scripting and a 360 template.
Feedback: I believe my project was well received. Because only one person could go at a time, only two classmates were able to try the experience. However, they really seemed to enjoy the project and kept telling me that they felt like they had been transported to another place. This was my intention, so I was very pleased with that response. I also asked for feedback on the text-to-speech, and the users seemed to love it. This may be something I include in my thesis moving forward, since text bubbles seem to disrupt the flow of the experience.
If I did it over again, I would replace the photos with basic models and use a simple command to make each suspect's face (the photo) appear. I believe this would be interesting: having to decide whether to look at a person's face in a public area could have created some intrigue within the scene itself.
Schematic and Schedule
Posted: November 8, 2018 | Filed under: Uncategorized

11/1 | Laboratory for Cycle 2: Set up the other computer. Intro to the Wiimote. Try the regular (not short-throw) projector. Weekend: make patches with the Wiimote; buy and prepare materials for hanging the window.
11/6 | Presentation of ideation and current state of prototypes: Try patches with the window, hang window bars, map projections.
11/8 | Laboratory for Cycle 2: Figure out dimensions in Final Cut for projections to land on window locations. Continue figuring out Wiimote/mapping projections as necessary. Weekend: edit video (maybe shoot more?) and set it up in patches.
11/13 | Last-second problem solving: Continue figuring out Wiimote/mapping projections as necessary.
11/15 | Cycle 2 Performance
11/20 | Critique
11/22 | TURKEY TIME
11/27 | Laboratory for Final Cycle: Figure out how the computers communicate (if necessary for Makey Makey signals to reach patches).
11/29 | Rehearsal of public performance: Set up the Makey Makey with wires for the window hangers. Program triggers with the Makey Makey.
12/4 | Last-second problem solving
12/6 or 12/7 | Student choice: public performance at class time or at the final time
12/? | Official final time: Friday, Dec 7, 4:00pm-5:45pm
Pressure Project 2
Posted: November 8, 2018 | Filed under: Uncategorized

PP2 was to create an interactive fortune teller in 9 hours. Similar to last time, I challenged myself to begin and end the entire process within that nine-hour limit (including the brainstorming phase). Unlike the first pressure project, I went a route I assumed would be an attainable challenge, but instead realized the error of this assumption.
My original idea was to use a program called Max, a visual programming language for music and multimedia developed and maintained by the San Francisco-based software company Cycling ’74. The program is similar to Isadora, but I was more familiar with how to use it and various sensors. I wanted to try to create a “fortune teller” that could use some sort of sensor to tell your fortune.
When I had spent too much time trying to figure out Max, I decided to turn to the Arduino, which I was familiar with and thought could work to create a physical fortune teller machine, similar to the little 20 Questions game I had as a child. I also knew that Arduinos have tons of tutorials and help guides that might assist in case I got stuck. It also helped that I had all the equipment from a previous class to get started (working by the light of a laptop):
My plan seemed straightforward: use an LCD screen to display questions to a user. Two small buttons would correspond to Yes and No answers to these questions. The order in which a user answers would, after a few questions, generate a custom fortune. I began by wiring my Arduino and LCD screen, using a helpful preset sketch and its example code. This part was a success!
Next, I attempted to modify a few sketches I found online: one was a simple fortune teller, and the other handled branching of the form “if this order of yes and no answers (ex: yes, no, no, no, yes), then do this (in my case, read a fortune).”
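The branching I was trying to get out of those merged sketches amounts to mapping an ordered sequence of yes/no answers to a fortune. Here is a minimal sketch of that logic in Python for clarity (the fortunes and the three-question count are hypothetical, not the contents of my actual Arduino code):

```python
# Decision logic for a button-driven fortune teller: the ordered yes/no
# answers index into a table of fortunes. (Illustrative fortunes only.)

FORTUNES = {
    (True, True, True): "Great success awaits you.",
    (True, True, False): "A surprise is around the corner.",
    (True, False, True): "Patience will be rewarded.",
    (True, False, False): "Beware of falling Arduinos.",
    (False, True, True): "An old friend will reappear.",
    (False, True, False): "Trust your first instinct.",
    (False, False, True): "Good news arrives soon.",
    (False, False, False): "Try again after the timer resets.",
}

def tell_fortune(answers):
    """Map a sequence of Yes/No button presses to a fortune."""
    return FORTUNES[tuple(answers)]

print(tell_fortune([True, False, True]))  # -> Patience will be rewarded.
```

On the Arduino, the same idea would record each button press into an array and use it to select which fortune to print to the LCD.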
This is where I ran into issues. When I combined the code, text no longer showed up on the screen and my LED stopped turning on. Then, the LED died completely. And when I tried to build a box so the machine would be housed in something nice, I dropped the Arduino and dislodged the wiring.
Then I ran out of time.
Pressure Project 3
Posted: November 4, 2018 | Filed under: Uncategorized

Our third pressure project included these guidelines:
- Create a 3-minute experience
- The user must touch something apart from the keyboard and receive a response
- Must include sound
- The user must move in a large environment, and a hidden mystery must be revealed
Though I tried to meet all the guidelines, I based my experience on “the user must touch something apart from the keyboard and receive a response” and “the user must move in a large environment.” I wanted to encourage physicality and adrenaline in the user experience. Inspired by simple video games, I created a game in which the user walked on a path made of aluminum foil and attempted to tag bananas to earn points.
The objectives of the game were as follows: keep contact with the path, touch all the bananas, beat the clock! Level 2 had the user perform these tasks backwards.
I intentionally placed the bananas far from the designated path so the user would have to stretch and reach to earn the points. Additionally, I included an element of time so that the user would have a sense of urgency in his or her movement. I was very curious about the human embodiment of a game that was inspired by a 2-D experience I played as a child.
As a whole, the experience was a fun time for everyone involved. I enjoyed watching the users try and beat the tasks and I think the users had a fun time playing the ridiculous game. My colleagues explained that this experience felt most like a game and that they felt a strong urgency to win.
Though it was a fun time for everyone involved, difficulties definitely arose in the system setup (see video footage below). The path kept detaching from the floor, forcing users to start over for no reason. Looking back, I should have taped the path to the floor so it wouldn’t move around as much. I practiced the game on carpet and didn’t take into account the environment in which we would actually be playing it.
I really enjoyed this project and it definitely inspired certain elements of my final project game I am constructing!
Pressure Project 2
Posted: October 7, 2018 | Filed under: Uncategorized

For my pressure project, I chose to make a simple webpage that would give you a new fortune every time you clicked on a fortune cookie emoji.
I chose to do it like this because I wanted more experience with the particular web framework I used, and I wanted to make an interesting web experience in general.
Overall, I don’t think I scoped this project out enough; I wish I had done more than what the final project ultimately ended up being, but problems with getting a webpage up and running in the first place were an impediment. I did have a lot of fun writing some of the fortunes, though. There are a total of 81 unique fortunes that can be viewed: some I got from sites, others I changed a bit to better fit the tone of the project, and others I wrote from scratch. Given more time, I would have liked to include inputs from the user other than clicking, such as giving fortunes tailored to a name or picking from several fortune cookies on the screen.
If you’d like to read the fortunes in the project as a list, check out my GitHub. The generation of the fortunes is in db/seeds.rb.
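The core interaction, serving a random fortune on each click, boils down to a random pick from the seeded list. A rough sketch of that logic, shown here in Python (the real project is a web app whose 81 fortunes live in db/seeds.rb; these sample fortunes and the no-immediate-repeat rule are my own illustration):

```python
import random

# Stand-in for the fortunes seeded in db/seeds.rb (samples are invented).
FORTUNES = [
    "You will find what you seek in an unexpected place.",
    "A cookie holds more wisdom than it appears to.",
    "Refreshing the page will not change your destiny. Clicking might.",
]

def new_fortune(previous=None):
    """Return a random fortune, avoiding an immediate repeat of the last one."""
    choices = [f for f in FORTUNES if f != previous]
    return random.choice(choices)

fortune = new_fortune()
```

Each click on the cookie emoji would call something like `new_fortune` server-side and render the result.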
Pressure Project #2
Posted: October 5, 2018 | Filed under: Uncategorized

For Pressure Project #2, we were assigned to create an automated fortune-telling machine using any medium we felt comfortable with. Still in the learning stages of Isadora, I decided to use that platform to explore and develop. I had many goals for this project in order to challenge myself intellectually and artistically. My main goal was not just to create a functioning, responsive system, but rather a complex sensory experience that created a deliberate atmosphere. Naturally, I went with a ’50s-esque carnival theme. I portrayed this theme through old film clips of San Francisco and a creepy soundscape that played throughout the user experience. My colleagues commented that they enjoyed these features of the system because they enhanced the mood of the experience and created a curious atmosphere.
I also wanted the computer fortune-teller to be tricky, clever, and mysterious. I tried to create a character within the system for the user to engage with by providing ambiguous instructions that could be interpreted many ways. I was curious to see whether the user would be able to solve the riddles presented to them in order to receive their fortune. I also threw in some patterns that I hoped the user would eventually catch on to. My last goal was to utilize a wide range of triggers to keep the system moving forward: I used mouse watcher triggers, keyboard triggers, and voice and motion triggers intermittently throughout the system. The vocal triggers were the hardest to manage, because every person's clap or snap comes in at a different volume level. With this in mind, I made a pretty large inside range to catch a sound trigger. However, this sometimes caused my computer to pick up unintentional sounds, while an intentional sound occasionally still didn't fall within the range I allocated. This is still an area I need a lot of work in!
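The tuning problem described above is essentially an inside-range test on the measured sound level: widening the range catches more claps but also admits more noise. A toy version of that trade-off in Python (the bounds are illustrative, not the values from my Isadora patch):

```python
# Inside-range sound trigger: fire only when the measured level falls
# between the low and high bounds. A wide range accommodates quiet and
# loud claps, but also admits more unintentional sounds.
# (Bounds are illustrative.)

LOW, HIGH = 0.25, 0.95

def sound_triggered(level):
    """Return True when the measured sound level is inside the range."""
    return LOW <= level <= HIGH
```

With these bounds, a mid-level clap (0.5) fires the trigger, while faint room noise (0.1) and a clipping peak (1.0) both fall outside the range.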
Every person who tried the system had a different experience. Some users immediately caught on to the patterns and loose instructions; others took longer to recognize that they were in a loop before they were able to get themselves to the next phase of the system. It was also interesting to use the device in a group setting. When the people watching figured out the tricks of the system, they experienced the uneasy feeling of keeping a secret so as not to ruin it for the user or future users. This uneasiness added to the overall mood of the system.
I really enjoyed this pressure project and am looking forward to how I can expand some of these goals in to my future work in the class and beyond!
Pressure Project 2 (A Walk in the Woods)
Posted: October 4, 2018 | Filed under: Uncategorized

I wanted to get away from the keyboard for this assignment and make something that asked people to move and make noise. I considered making some sort of mock fight that got people to dodge or gesture to certain sides of their body, but I didn’t want to make anything violent. Serendipitously, I was at an outdoor gathering while I pondered what to make, and this party was highly attended by bees. So, it began: a walk in the woods. I thought I would tell people that things (like bees) were coming at them from a certain angle and then ask them to either dodge or gesture them away. I thought I would use the Crop actor to crop portions of what my camera was seeing and read where the movement was to trigger the next scene.
Less serendipitously (in some ways), I was at this party because my partner’s sister’s wedding was the day before, and I was in the wedding party. This meant that I had 2 travel days, a rehearsal dinner, an entire day of getting ready for a wedding (who knew it could take so long?), and the day-after party. WHEN WAS I SUPPOSED TO DO MY WORK?!
So, to simplify, I decided not to worry about cropping, and to trigger through movement, lack of movement, sound, or lack of sound. Then the fun began: making it work.
I decided to start with a training that would give the user hints, instructions, and practice for how to continue. This also gave me a chance to figure out the patches without worrying about the content of the story. Using Text Draw, I instructed the user to stand up and told them that they would practice a few things. Then, more words came in to tell the user to run. I created a user actor that would delay text, because I planned to use this in multiple scenes. The user actor runs an Enter Scene Trigger into a Trigger Delay into a Toggle actor that toggles the Projector's activity between on and off. Since I set the Projector to initialize "off" and the Enter Scene Trigger only sends one trigger, the Projector's activity toggles from "off" to "on." Then, I connected user input actors to "text" in the Text Draw and "delay" in the Trigger Delay actor. This lets me change the text and the amount of time before it appears every time I use this user actor.
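That chain (projector initialized off, a single delayed trigger toggling it on) can be modeled in a few lines of Python; the function names and the injectable clock are my own, since Isadora wires all of this visually:

```python
import time

def make_text_delay(text, delay, clock=time.monotonic):
    """Model of the Text Delay user actor: returns a function that yields
    no text until `delay` seconds after 'entering the scene', then the text.
    The 'projector' starts off and toggles on exactly once."""
    scene_entered = clock()   # Enter Scene Trigger fires on creation
    state = {"on": False}     # Projector initializes "off"

    def visible_text():
        if not state["on"] and clock() - scene_entered >= delay:
            state["on"] = True       # Toggle flips off -> on, one time only
        return text if state["on"] else ""

    return visible_text
```

The `text` and `delay` parameters mirror the user inputs exposed on the user actor, so each scene can supply its own values.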
I also made user actors that trigger with sounds and with movement. I’ll walk through each of these below.
Any scene that prompted them to run or stay still used a user actor that I called "Difference Jump++." Here, I used the Difference actor and the Calc Brightness actor to measure how much movement takes place. (Note: the light in the space really mattered, and this may have been part of what made this not function as planned. However, because this was in a user actor, I could go into the space before showing it, change the values in the Inside Range actor, and have those values change in EVERY SCENE! I was pretty proud of this, but it still wasn't working quite right when I showed it.)
I used the Gate actor because I wanted to use movement to trigger this scene, but the scene starts by asking the user to stand up, so I didn't want movement to trigger the Jump++ actor until they were set up. So, I set this up similarly to the Text Delay user actor, using the Gate actor to block the connection between Calc Brightness and Inside Range until 17 seconds into the scene. (17 seconds was too long, and something on the screen showing a countdown would have helped the user know what was going on.)
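The Difference Jump++ chain, Difference into Calc Brightness into a gated Inside Range, can be approximated as mean absolute pixel difference behind a time gate. A sketch in Python under assumed threshold values (the real ones lived in the Inside Range actor and were tuned per space):

```python
# Frame-difference movement trigger with a time gate. Frames are flat
# lists of pixel brightness values; all thresholds are illustrative.

GATE_DELAY = 17.0                      # Gate actor opens 17 s into the scene
INSIDE_LOW, INSIDE_HIGH = 20.0, 255.0  # Inside Range bounds

def movement_triggered(prev_frame, frame, elapsed):
    """True when enough pixel change is measured after the gate opens."""
    if elapsed < GATE_DELAY:           # Gate blocks triggers during setup
        return False
    # Difference actor -> Calc Brightness: mean absolute pixel difference
    diffs = [abs(a - b) for a, b in zip(prev_frame, frame)]
    brightness = sum(diffs) / len(diffs)
    return INSIDE_LOW <= brightness <= INSIDE_HIGH
```

This also shows why the room light mattered: ambient changes shift the difference brightness, so the Inside Range bounds have to be re-tuned per space.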
So, with this, the first scene was built (showing the dancer that their running action can trigger changes). The next scene trained the user that their voice can also trigger changes. For this, I built a user actor that I called "Sound Jump++." It functions pretty similarly to the Difference Jump++ user actor.
So, the trigger for most scenes is either movement, stillness, sound, or silence. I've explained how movement and sound trigger the next scene, but stillness and silence are the absence of movement and sound. So, in addition to including a Difference Jump++ user actor and/or a Sound Jump++ user actor, I had an Enter Scene Trigger to a Trigger Delay to a Jump++ actor. If the user had not triggered a scene change by the time the Trigger Delay fired, it was assumed that the user had chosen stillness and/or silence.
Then, which scene we jumped to depended on how the scene change was triggered.
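Putting the pieces together, each scene's routing (movement, sound, or a timeout interpreted as stillness/silence) can be sketched as a small dispatch function; the scene names and timeout value here are invented for illustration:

```python
# Scene dispatch: explicit triggers win; if the Trigger Delay fires first,
# the user is assumed to have chosen stillness/silence. Names are invented.

def next_scene(moved, made_sound, elapsed, timeout=10.0):
    """Pick the next scene from how the current scene was triggered."""
    if moved:
        return "dodged_the_bees"
    if made_sound:
        return "shouted_at_the_bear"
    if elapsed >= timeout:      # Trigger Delay fired with no user trigger
        return "stood_perfectly_still"
    return None                 # stay in the current scene and keep waiting
```

In Isadora, each branch corresponds to a different Jump++ actor firing, so a mis-set Inside Range sends every run down the same branch, which matches the repeated storyline seen in class.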
Next, I came up with a storyline involving bears, bees, robots, mythical creatures, airlift rescues, and search and rescue units, and I downloaded videos and sounds to represent these events. Unfortunately, either things were jumping to the wrong scenes or the Inside Range values were set incorrectly, because the "choose your own adventure" story didn't function as intended, and we kept getting the same storyline when we tried it in class. I learned a lot in the process of building it, though!
Robot Gender Assumptions
Posted: October 3, 2018 | Filed under: Uncategorized

https://www.wired.com/story/robot-gender-stereotypes/?mbid=synd_digg
Stop Pretending
Posted: October 3, 2018 | Filed under: Uncategorized

https://www.pcmag.com/news/364132/california-law-bans-bots-from-pretending-to-be-human