Thursday 19 May 2011

IDAT 210: Digital artefact.

Over the course of this module we have been exposed to a variety of different visualisations and interactive technologies, and it has been great to get hands-on with them. For a long time, though, the brief was not clear to me and I was struggling with what I could take from it all. Eventually, though, the brief to 'make a digital artefact' became clear.

First thoughts...

I've been looking for a way to incorporate my love for music into a project and thought this would be the perfect choice, as the brief was open. My initial thought was that I wanted to create an application of sorts that changed the way people think about underground music. Music being my biggest passion, it annoys me when I see people week in, week out going to places like the SU for what is the worst music ears can be exposed to, when there are nights that play music with thought, emotion and soul behind it. It baffles me as to why people do it to themselves.

So I started thinking about why I love music, and it's because it has the ability to change my mood and emotions. However, I choose to listen to particular songs based on this, so while I might choose to listen to 'Booka Shade - In White Rooms', someone who regularly goes to the SU might choose 'Lady Gaga - Poker Face' (heaven forbid).

So I got thinking: instead of a state of mind being reflected in the music we choose, can it work in reverse? Music Mirrors Mind was born.

Can Music Mirror the Human Mind?

At this point I knew I wanted to make something whose output was the playback of music based upon an input of data, but what could this data be? How could I capture human emotions in the form of data? I had been exposed to the Nexus 10 kit in tutorials, which allows human biometric data to be used, and I was keen to use it. However, watching a coursemate apply the kit made me realize that you had to stand still and remain calm, so the measured emotional state would always read the same. Ideally I wanted to use brainwave data directly, but this just wasn't feasible at the time; I plan to extend my concept further soon and use it. So at this point I was still searching for an adequate user input.

On top of this I needed a vast amount of psychological research into the human mind, which is available in my ReadMe. For half of this project I felt as if I was a psychology student.


My idea.

Watching my housemate play a game on his HTC Desire one night caught my eye. The game required him to move the phone in different directions and at different speeds based on how he wanted the game to react. So a smooth, slow movement caused the same on screen, while a violent shake translated into fast speed (and an angry user). This is when I first noticed that in this modern day we have an emotional connection to technology, and in this case the connection was made via accelerometer technology!

I wanted to see just how far this emotional connection could be pushed, and so applied it to my artefact. I began thinking about how I could use accelerometers to provide the input for my project, which was no easy task.

So at this point my concept was this...

INPUT: Gain user data from the accelerometers on a smartphone.

VISUALIZATION: Reflect this data in the form of shapes and colours, similar to music visualizations.

INTERACTIVITY: Allow the user to explore the genres of underground music based on this data.

OUTPUT: Music played.


Getting the data.

I wanted Music Mirrors Mind to be web based so that anyone could use it. However, when I take this project further I will make it a full-blown application available on app stores (hopefully).

So I had two options...

1. Take advantage of new HTML5 technology by using the tutorial provided here...

OR
2. Use an older method, combining Adobe Flash with a user-created plugin called Flosc.

'flosc is a communication gateway, written by ben chun, that allows macromedia flashMX to talk with any software that can understand UDP data. while this includes many sound programming tools, the options are limited only by your curiosity.'

I went for the second method, which became an absolute headache to get working until I got hold of Flash CS5.

Flash CS5 natively allows you to pull in accelerometer data, use it however you wish, and then test it on a phone emulator in Adobe Device Central.

So here was the first bit of code written...

import flash.sensors.Accelerometer;
import flash.events.AccelerometerEvent;
import flash.geom.ColorTransform;

// The sensor object, plus three variables to hold the scaled axis readings
var accel:Accelerometer = new Accelerometer();
var accelXpos:int;
var accelYpos:int;
var accelZpos:int;


This allowed the data describing how the phone is being moved to be turned into three separate variables. Brilliant.
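One thing the snippet above doesn't show is checking that the device actually has an accelerometer before relying on it, and wiring the sensor up to an update handler (the update function appears further down). A small sketch of how that looks, with the 100ms polling interval being my own assumption:

// Only wire up the sensor if the device actually has one;
// otherwise the artefact can stay on its neutral screen.
if (Accelerometer.isSupported) {
	accel.setRequestedUpdateInterval(100); // assumed interval, in milliseconds
	accel.addEventListener(AccelerometerEvent.UPDATE, update);
}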


What I realized at this point was that although there is a clear connection between state of mind and accelerometer data, it is quite limited: the phone only reports movement along three axes. So, based upon this, I did my research into states of mind and emotions and narrowed them down to three categories with which the user could interact.


1. Calm / Hard working / At peace
2. Happy / Creative
3. Excited/ Energetic / Angry

I must admit, representing these states as a visualization on a phone based upon how it is being moved was some challenge, and I was a bit skeptical as to whether I would actually get it to work. However, I persisted, and by this point was treating my project as more of an experiment, posing the question:

How far can our emotional relationship with technology be pushed? Specifically in this case with music.


Manipulating the data.

Upon first testing in Device Central I realized that the values were too small to manipulate effectively. So I scaled them up, making it easier to define three separate visual paths for the user.


function update(e:AccelerometerEvent):void {
	// Scale the raw readings (roughly -1 to 1) up to whole numbers
	accelXpos = int(e.accelerationX * 100);
	accelYpos = int(e.accelerationY * 100);
	accelZpos = int(e.accelerationZ * 100);
}

After this I was able to define three separate paths easily, like this...


// Calm branch
if (accelXpos > -30 && accelXpos < 30) {
	.......................................
}

// Happy branch
if (accelXpos < -30 || accelXpos > 30) {
	......................................
}

// Erratic branch
if (accelZpos > 50 || accelZpos < -50) {
	.............................................
}

Inside each of these if statements sit different buttons and visuals, allowing the user to explore the realms of underground music, and the psychological attachment to it, via their own data.
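To give a flavour of what sits inside a branch, here is an illustrative sketch (not my exact project code; 'calmVisual' and the hex value are placeholders) showing how the ColorTransform import from earlier can tint the visuals towards a state-of-mind palette:

import flash.display.MovieClip;
import flash.geom.ColorTransform;

var calmVisual:MovieClip = new MovieClip(); // stand-in for a library symbol holding the calm artwork
var calmTint:ColorTransform = new ColorTransform();
calmTint.color = 0x6FA8DC; // a muted blue, standing in for the researched calm palette
calmVisual.transform.colorTransform = calmTint;
addChild(calmVisual);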


Designing Music Mirrors Mind.

After a huge amount of research into colour, shape association, state of mind and music, I was able to develop artwork that reflected each of the three categories. The stipulations were that the user must be able to explore different music based upon their accelerometer data, and then click buttons that correspond to the matching genres. These buttons must only include colours and shapes that match not just the genre but also that particular state of mind.

Here is a snap of each one in its full explored state...

The first screen shows the logo and neutral colours, as no data has been utilized yet. When the button is pressed the accelerometer data is pulled in, so based upon how the phone is being moved the user will get one of three screens.

First screen.




After the grey button is clicked, it changes colour to represent the data it has been given, as seen below. The visualizations also completely change, and a new button is added at the bottom of the screen in a shape that represents that state of mind. After this secondary button is clicked, three new ones appear, each representing a different genre of music and containing three songs, all of which accurately represent the state of mind.
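As a rough illustration of how the secondary button reveals the genre buttons (the instance names here are placeholders rather than my actual symbols):

import flash.events.MouseEvent;

// 'calmShapeBtn' and the three genre buttons are assumed stage instances
calmShapeBtn.addEventListener(MouseEvent.CLICK, showCalmGenres);

function showCalmGenres(ev:MouseEvent):void {
	// The three genre buttons for this state of mind become visible
	minimalBtn.visible = true;
	deepHouseBtn.visible = true;
	ambientBtn.visible = true;
}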


Each button is a genre of music with three different tracks. When one is clicked, the music playback controls appear. At first I thought I was going to have to code a music player for each button, but this would have made my artefact a lot slower and very buggy, not to mention a right effort to code nine times.

So instead I have one player and an XML playlist for each button that dynamically changes the audio. Like this...

This is the button for the minimal genre.

function minimalBtnClick(ev:MouseEvent):void {
	// Load this genre's playlist; processXML then swaps the player's tracks
	var myXMLLoader:URLLoader = new URLLoader();
	myXMLLoader.addEventListener(Event.COMPLETE, processXML);
	myXMLLoader.load(new URLRequest("minimal_playlist.xml"));
}

This is used for each button/genre, and the playlist filename ("minimal_playlist.xml" above) is the only part that changes.
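For reference, here is a rough sketch of what a processXML handler along these lines could look like. The playlist format, the @url attribute and the single shared SoundChannel are illustrative assumptions, not my exact code:

import flash.events.Event;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLLoader;
import flash.net.URLRequest;

var channel:SoundChannel;

function processXML(e:Event):void {
	// Parse the loaded playlist (assumed format: <playlist><track url="..."/>...</playlist>)
	var playlist:XML = new XML(URLLoader(e.target).data);
	// Stop whatever is playing before swapping playlists
	if (channel) {
		channel.stop();
	}
	// Play the first track; artist and track names would feed the on-screen labels
	var firstUrl:String = String(playlist.track[0].@url);
	var snd:Sound = new Sound(new URLRequest(firstUrl));
	channel = snd.play();
}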

Final thoughts, bug report and further development.

The design and technical aspects of MMM proved a real challenge, especially since a lot of psychology was involved and it is all open to opinion and interpretation, but it does work to a good degree.

However, the playback of audio on an HTC device is extremely laggy. This is probably because the sound files are on my server and have to be downloaded, and it might also be due to the fact that the artist and track names are pulled in from .xml documents.
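One fix I could try is asking Flash to buffer more audio before playback starts, via SoundLoaderContext. A sketch, where the five-second buffer and the URL are placeholders:

import flash.media.Sound;
import flash.media.SoundLoaderContext;
import flash.net.URLRequest;

// Buffer five seconds of audio before playback begins, which should
// smooth out streaming of the server-hosted MP3s.
var context:SoundLoaderContext = new SoundLoaderContext(5000);
var snd:Sound = new Sound(new URLRequest("http://myserver.example/track.mp3"), context);
snd.play();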

So for now Music Mirrors Mind stays in an experimental stage that is open to opinion.
While this has been a university-based project, I have big plans for my ideas concerning Music Mirrors Mind, as there is currently nothing like it. Stay tuned.



Ben Quinney.