Cycle 2 – Hand-Tracking Algorithm

I spent the majority of the time leading up to this cycle devising a hand-tracking algorithm that could efficiently and robustly track the positions of the fingers of one hand. Here are the approaches I tried:

Approach 1: Get the position of the palm by polling the Leap Motion controller, then subtract it from each finger's position to get that finger's position relative to the palm.

Pros: Consistent; handles hands positioned far from the Leap Motion controller well

Cons: Gives bad data at certain orientations

Approach 2: Compute the differences between the finger positions themselves. These deltas give each finger's position relative to the other fingers.

Pros: Works well at most hand orientations

Cons: Less reliable when the fingers are close together

Approach 3: Check whether each finger is extended and generate a unique code for the hand on each frame. This is the approach I went with. It is much easier to program because it abstracts away a lot of the calculations involved, and it ended up being arguably the most robust of the three since it relies only on the Leap Motion API. A sketch of the idea follows the pros and cons below.

Pros: Very robust, easy to program, works well at most orientations

Cons: Hand signals have to be kept simpler to use this approach effectively.
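To make this concrete, here is a minimal sketch of the extension-code approach using the Leap Motion JavaScript bindings (leapjs). The project itself was written in C# against the Leap Motion SDK, so the setup and the example signal table below are illustrative, not the project's code.

var Leap = require('leapjs');

// Encode the hand as a 5-bit code: bit i is set when finger i is extended.
// In the Leap API, finger.type runs from 0 = thumb to 4 = pinky.
function extensionCode(hand) {
	var code = 0;
	hand.fingers.forEach(function (finger) {
		if (finger.extended) {
			code |= 1 << finger.type;
		}
	});
	return code;
}

// Example signal table: each code maps to one hand signal.
var SIGNALS = {
	0x1F: 'star',     // all five fingers extended
	0x00: 'fist',     // no fingers extended
	0x02: 'uno',      // index only
	0x0E: 'trident',  // index, middle, and ring
	0x11: 'crescent', // thumb and pinky
	0x12: 'stag'      // index and pinky
};

Leap.loop(function (frame) {
	if (frame.hands.length === 1) {
		var name = SIGNALS[extensionCode(frame.hands[0])];
		if (name) {
			console.log('Detected signal: ' + name);
		}
	}
});

Because each frame reduces to a single small integer, the per-frame comparison is cheap enough to keep up with the controller's full frame rate.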


Cycle 2 “Tech Rehearsal”

In my Cycle 2 showing, I set aside the fun part we had in Cycle 1, where audience members danced to the music on stage and reacted to the lighting. In this cycle I tried a slightly more serious run, asking people to become my crew members and help me trigger the lighting with their movement.

In this run I called them "helpers" instead of "lighting designers," which makes more sense to me. Still, I don't think audience members hold only one title when they join my project. They can be performers moving on the stage while also acting as crew members who trigger the lighting through their movement tasks. I think I should really blur their title rather than give them a single position that contains various jobs.

Also, with Alex's help, I added an OpenNI actor so that the Kinect recognizes people's shape and can see where their arms, legs, and torso are moving. With this, I designed a clapping-over-head movement: when the Kinect sees both hands reach the same height, it triggers the cue. This really helped me develop more movements for audience members rather than just having them appear or not appear in the area the Kinect can see.
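The trigger itself is a simple comparison. Here is an illustrative sketch of the logic, assuming the tracker's joint positions arrive as plain coordinates; in the actual patch this is wired with Isadora actors rather than code, and the joint names and tolerance below are assumptions.

// Fire the cue when both hands are above the head and at roughly the same height.
function clapOverHead(skeleton) {
	var tolerance = 0.05; // how closely the two hand heights must match, in tracker units
	return skeleton.leftHand.y > skeleton.head.y &&
	       skeleton.rightHand.y > skeleton.head.y &&
	       Math.abs(skeleton.leftHand.y - skeleton.rightHand.y) < tolerance;
}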

After this cycle I will develop more movements and really blur the audience member's role, so that they are a lighting helper and a performer at the same time.


DroidCo. Final

For the final stage of my project, I finished the display and game logic of DroidCo. As a final product, DroidCo. allows two players to use OSC to “write code” (by pressing a predetermined selection of buttons) that their droids will execute to compete. The goal of the game is to convert every droid on the grid to your team.

The code input screen, with the option to use the same code from the previous round or rewrite new code
The actual game screen, which shows the droids moving about and displays the current turn
The victory screen! This scene plays after a team has converted all the droids or turn 350 has been reached.
function main()
{
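	// Inputs from Isadora's javascript actor: arguments[0] is the JSON-encoded
	// worldState string, and arguments[1] is the index of the droid to draw.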
	var worldState = JSON.parse(arguments[0]);
	var droid = arguments[1];
	var startIndex = (droid * 6) + 2;
	var team = worldState[startIndex + 1];
	var x = worldState[startIndex + 2];
	var y = worldState[startIndex + 3];
	var dir = worldState[startIndex + 4];
	var red;
	var blue;
	var hor;
	var ver;
	var rot;
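	// Pick the red/blue colour values for this droid based on its team.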
	switch (team){
		case 1:
			blue = 50;
			red = -100;
			break;
		case 2:
			blue = -100;
			red = 50;
			break;
	}
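	// Map the droid's grid column (0-9) to a horizontal stage position (-45 to 45).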
	switch (x){
		case 0:
			hor = -45;
			break;
		case 1:
			hor = -35;
			break;
		case 2:
			hor = -25;
			break;
		case 3:
			hor = -15;
			break;
		case 4:
			hor = -5;
			break;
		case 5:
			hor = 5;
			break;
		case 6:
			hor = 15;
			break;
		case 7:
			hor = 25;
			break;
		case 8:
			hor = 35;
			break;
		case 9:
			hor = 45;
			break;
	}
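	// Map the droid's grid row (0-9) to a vertical stage position (45 at the top down to -45).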
	switch (y){
		case 0:
			ver = 45;
			break;
		case 1:
			ver = 35;
			break;
		case 2:
			ver = 25;
			break;
		case 3:
			ver = 15;
			break;
		case 4:
			ver = 5;
			break;
		case 5:
			ver = -5;
			break;
		case 6:
			ver = -15;
			break;
		case 7:
			ver = -25;
			break;
		case 8:
			ver = -35;
			break;
		case 9:
			ver = -45;
			break;
	}
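	// Convert the droid's facing direction into the rotation value the display uses.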
	switch(dir){
		case 0:
			rot = 180;
			break;
		case 90:
			rot = 270;
			break;
		case 180:
			rot = 0;
			break;
		case 270:
			rot = 90;
			break;
	}
	return [red, blue, hor, ver, rot];
}

Above is the code for the “droid parser” which converts each droid’s worldState data into values that Isadora can understand to display the droid.

Overall, the game was a major success. It ran with only minor issues, and I accomplished most of what I wanted to do with this project. Still, the code was buggier than I expected (I underestimated the strain new users could put on it), and I ran into several problems with players’ code not executing exactly as they wrote it. These bugs are definitely caused by the string builder (Isadora does not handle strings very well), but I think I can work out the kinks.


Cycle 2 – DroidCo. Interface Development

For Cycle 2, I finished the development of the OSC interface for DroidCo. The interface consisted of a set of buttons that sent keywords to Isadora, which were then used to generate the virtual code for each team’s droids to run.

The interface (available in both an iPhone/iPod version and an iPad version) consisted of two pages. The first contained the END command to complete code blocks, IF and WHILE to create conditional blocks, action keywords to trigger turn actions, and a delete button to remove lines of code. The second page contained all the conditionals to be used with the IF and WHILE blocks.

To process these OSC inputs, I created a string-builder JavaScript program. It takes the next OSC input and appends it both to a code string (the string the compiler from Cycle 1 uses to create the virtual code) and to a display string that shows the user their code with proper indentation. Below is the code for this program.

function main(){
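	// Inputs from Isadora: the code string built so far, the display string shown
	// to the player, the incoming OSC keyword, and the current block depth ("ends").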
	var codeString = arguments[0];
	var displayString = arguments[1];
	var next = arguments[2];
	var ends = arguments[3];
	var i;
	switch(next){
		case "END ":
			codeString = codeString.concat(next);
			ends = ends - 1;
			for (i = 0; i < ends; i++){
				displayString = displayString.concat("\t");
			}
			displayString = displayString.concat(next);
			displayString = displayString.concat("\n");
			break;
		case "IF":
			codeString = codeString.concat(next);
			for (i = 0; i < ends; i++){
				displayString = displayString.concat("\t");
			}
			displayString = displayString.concat(next);
			ends = ends + 1;
			break;
		case "WHILE":
			codeString = codeString.concat(next);
			for (i = 0; i < ends; i++){
				displayString = displayString.concat("\t");
			}
			displayString = displayString.concat(next);
			ends = ends + 1;
			break;
		case "move ":
			codeString = codeString.concat(next);
			for (i = 0; i < ends; i++){
				displayString = displayString.concat("\t");
			}
			displayString = displayString.concat(next);
			displayString = displayString.concat("\n");
			break;
		case "turn-left ":
			codeString = codeString.concat(next);
			for (i = 0; i < ends; i++){
				displayString = displayString.concat("\t");
			}
			displayString = displayString.concat(next);
			displayString = displayString.concat("\n");
			break;
		case "turn-right ":
			codeString = codeString.concat(next);
			for (i = 0; i < ends; i++){
				displayString = displayString.concat("\t");
			}
			displayString = displayString.concat(next);
			displayString = displayString.concat("\n");
			break;
		case "skip ":
			codeString = codeString.concat(next);
			for (i = 0; i < ends; i++){
				displayString = displayString.concat("\t");
			}
			displayString = displayString.concat(next);
			displayString = displayString.concat("\n");
			break;
		case "hack ":
			codeString = codeString.concat(next);
			for (i = 0; i < ends; i++){
				displayString = displayString.concat("\t");
			}
			displayString = displayString.concat(next);
			displayString = displayString.concat("\n");
			break;
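		// "back" removes the last keyword from both strings, acting as an undo.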
		case "back":
			if(codeString != "BEGIN "){
				if(codeString.substring(codeString.length - 4) == "END "){
					ends = ends + 1;
				}
				codeString = codeString.substring(0, codeString.lastIndexOf(" "));
				codeString = codeString.substring(0, codeString.lastIndexOf(" ")+1);
				displayString = displayString.substring(0, displayString.lastIndexOf(" "));
				displayString = displayString.substring(0, displayString.lastIndexOf(" ")+2);
			}
			break;
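		// Any other keyword (e.g. a conditional from page two) is appended as-is
		// and finishes the current display line.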
		default:
			codeString = codeString.concat(next);
			displayString = displayString.concat(next);
			displayString = displayString.concat("\n");
			break;
		}	
	var ret = new Array(codeString, displayString, ends);
	return ret;
}


Cycle 3 – One-Handed Ninjutsu

This project was inspired by an animated TV show that I used to watch in middle school: Naruto. In Naruto, the protagonists have special abilities that they activate by making certain hand signals in quick succession. Example:

Almost all of the abilities in the show require the use of two hands. Unfortunately, the Leap Motion controller I used for this project did not perform well when two hands were in view, and it would have been extremely difficult for it to distinguish between two-handed signals. However, I feel the Leap Motion was still the best tool for hand tracking thanks to its impressive 60-frames-per-second tracking, which was quite robust with one hand in view.

Some more inspirations:

I managed to program 6 different hand signals for the project:

Star, Fist, Stag, Trident, Crescent, and Uno

Star – All five fingers extended

Fist – No fingers extended (like a fist)

Uno – Index finger extended

Trident – Index, Middle, Ring extended

Crescent – Thumb and pinky extended

Stag – Index and pinky extended

The Jutsu that I programmed are as follows:

Fireball Jutsu – star, fist, trident, fist

Ice Storm Jutsu – uno, stag, fist, trident

Lightning Jutsu – stag, crescent, fist, star

Dark Jutsu – trident, fist, stag, uno

Poison Jutsu – uno, trident, crescent, star
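Recognizing a jutsu then comes down to comparing the last few detected signals against these sequences. Here is an illustrative sketch of that matching step in JavaScript; the actual game is written in C# with MonoGame, so the structure below is an assumption rather than the project's code.

// Jutsu definitions: each is a sequence of four hand signals.
var JUTSU = {
	'Fireball Jutsu':  ['star', 'fist', 'trident', 'fist'],
	'Ice Storm Jutsu': ['uno', 'stag', 'fist', 'trident'],
	'Lightning Jutsu': ['stag', 'crescent', 'fist', 'star'],
	'Dark Jutsu':      ['trident', 'fist', 'stag', 'uno'],
	'Poison Jutsu':    ['uno', 'trident', 'crescent', 'star']
};

var recent = []; // the most recently detected signals, oldest first

// Call once per newly detected signal (ignoring repeats of the same signal).
function onSignal(name) {
	recent.push(name);
	if (recent.length > 4) {
		recent.shift(); // keep only the last four signals
	}
	for (var jutsu in JUTSU) {
		if (JUTSU[jutsu].join() === recent.join()) {
			recent = []; // the sequence has been used up
			return jutsu;
		}
	}
	return null;
}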

The first big challenge for this project was devising a hand-signal algorithm that was robust enough to rely on yet efficient enough not to interfere with the program’s high frame rate.

The second was animations and art. Until you start working on a game, I don’t think you realize how much work goes into animating things and how laborious the process is. That was a big time sink for me, and if I had more time I definitely would have improved the animation quality.

Here it is in action: https://dems.asc.ohio-state.edu/wp-content/uploads/2019/12/IMG_8437.mov

Some pics:

Ice Jutsu
Poison Jutsu

You can find the source code here: https://github.com/Harmanjit759/ninjaGame

NOTE: You must have MonoGame, the Leap Motion SDK, and the C++ 2011 redistributable installed on your machine to be able to test out the program.


Cycle 3 – Aaron Cochran

For my final project, I achieved blob detection and responsive projection mapping onto a grid.

As seen in the video below, I set out to create a prototype that combines the games checkers and Minesweeper. Scott Swearingen often talks about the concepts of public and private information in gameplay. In poker, for example, there is public information about how much each player is betting and what is on the board, while each player holds private information about what is in their hand. These factors influence the players’ decision making.

This prototype acts as a proof of concept for a game that uses projection mapping to introduce “private information” held by the game itself, changing the players’ strategy as a result. By combining checkers and Minesweeper, a level of randomness is added that disrupts strategy: pieces can be “blown up” by the game rather than “captured” by the players.


Cycle 2 – Aaron Cochran

By the time we arrived at Cycle 2, I had abandoned the library I was using to attempt projection mapping. Using tutorials from Dan Shiffman, I was able to develop rudimentary blob tracking with very limited interactivity.

The final deliverable I was able to bring is visible in the video below. The Kinect was able to detect a blob and determine whether it was in the top or bottom of the screen. No projection was involved in this stage.


Cycle 1 – Aaron Cochran

In Cycle 1, I tried to connect the Kinect 2 and Processing to create a trackable projection mapping setup. I was able to sync the Kinect and projector using the library I chose, but I did not yet know how to make use of the tracking data.


Pressure Project 3: Thumbnail Generator

Resources

I wanted to complete a project primarily using p5.js and/or Processing, both to refresh my knowledge of these tools and to work in an environment that felt more comfortable to me than Isadora. Little did I realize how little I knew about connecting Processing to external interfaces.

Score

My goal was to create a system into which you could enter one thumbnail drawing, and it would generate a series of iterations on that original drawing by manipulating width and height and skewing the bounding quadrilateral (e.g. reducing the width of the bottom edge while maintaining the width of the top).
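As a rough illustration of the idea, here is a minimal p5.js sketch that draws a row of variations on a single quadrilateral by changing its width, height, and bottom-edge skew; the proportions and step sizes are placeholder assumptions, not values from my prototypes.

// Draw one row of thumbnail variations on a simple quadrilateral.
function setup() {
	createCanvas(800, 200);
	noLoop();
}

function draw() {
	background(255);
	var cols = 6;
	for (var i = 0; i < cols; i++) {
		var x = 20 + i * 130;
		var w = 100 * (1 - i * 0.1);    // shrink the width a little each step
		var h = 150 * (0.6 + i * 0.08); // grow the height a little each step
		var skew = i * 8;               // pull the bottom corners inwards
		// The top edge keeps the full width; the bottom edge is narrowed by 'skew'.
		quad(x, 20, x + w, 20, x + w - skew, 20 + h, x + skew, 20 + h);
	}
}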

Valuaction

In my first cycle, I followed a tutorial to manipulate a single image into a grid of pictures.

In the second, I focused on modifying proportions between images.

Then I created two prototypes.

Performance

In the end I created three prototypes. testThumbs created a row of cylinders whose height-to-width proportion was varied.

thumbnails.pde created a sequential grid with no randomization.

My final prototype was made in Illustrator to exemplify what I hoped for in the end.


Final Showing – Design Your Own Relaxation Environment

I constructed an individual “design your own relaxation environment” in which one participant at a time could choose from given music genres, background scenes, and background sounds on a Touch OSC interface I designed, before entering a short guided meditation. My goal was to give students and faculty on campus a place of escape from the stresses of college, work, life, etc. by guiding them through a calming meditation. I wanted the experience to feel safe and cozy, but most of all personal, which is why I wanted the participant to make choices about the vibe of their relaxation environment.

Here I am explaining my project to an observer. Behind me you can see my “tent” with a projection hinting someone is inside!
Inside the tent was an oversized cushion, blankets, pillows, and a stuffed animal.
Super comfy!

Inside the tent, participants interacted using an iPod touch, following a program I created with Touch OSC. This simplified their decision making by giving them only a few buttons and having them pick their favorite from those few options.

I also found it important to ask the participants how they felt before and after the experience. I hoped that by naming their state of being before the program, they would notice an improvement in mood by the end.

However, while my interface looks simple, my Isadora program is much more complex.

The trickiest aspect of working with Touch OSC and Isadora is getting Isadora to send messages back to Touch OSC. I didn’t want the participant to do any work other than make choices about their relaxation environment, which means that, for their interface to progress automatically to the next page, I needed Isadora to send it a message to move on. Alex and I worked on this extensively, and eventually resolved it by using the ‘OSC Multi Transmit’ actor and ensuring the incoming port number in Touch OSC matches the port set on that actor. In this case I used port 9999, seen in the top right corner of the left image. (Also refer to my Cycle 2 post for more on problem-solving this task!)
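All of that wiring lives inside Isadora, but as a rough outside-of-Isadora illustration, this is what the feedback message could look like if sent from Node with the node-osc package; the device IP and the page address are assumptions that depend on the Touch OSC layout, not values from the project.

const { Client } = require('node-osc');

// The iPod touch running Touch OSC listens on port 9999 (its incoming port).
const client = new Client('192.168.0.101', 9999); // device IP is an assumption

// Sending a message addressed with a page's name asks Touch OSC to jump to
// that page, so the participant's interface advances on its own.
client.send('/2', 1, () => {
	client.close();
});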

Overall, most of my participants gave positive feedback and encouraged me to take this idea further by creating a more established relaxation environment somewhere on campus. I felt my final showing for DEMS was a very successful prototype. I think my program could become more robust by running it on a machine with a better processor; my personal laptop crashed multiple times while running the program and disrupted some participants mid-meditation.

I would be interested in establishing this idea somewhere on campus because it truly promotes positive well-being. Especially in such a stressful environment, taking care of one’s mental and emotional state is crucial. This program would teach people that it’s okay to need a break sometimes, so why not take a break in a place you can personalize?!