Thursday 19 May 2011

IDAT 210: Digital artefact.

Over the course of this module we have been exposed to a variety of different visualisations and interactive technologies, and it has been very cool to get a hands-on approach with them. However, for ages the brief was not clear to me and I was struggling as to what I could take from it all. Eventually, though, the brief to 'make a digital artefact' became clear.

First thoughts...

I've been looking for a way to incorporate my love for music into a project, and thought this would be the perfect choice as the brief was open. My initial thoughts were that I wanted to create an application of sorts that changed the way people think about underground music. Music being my biggest passion, it annoys me when I see people week in, week out going to places like the SU for what is the worst music ears can be exposed to, when there are nights that play music with thought, emotion and soul behind it. It baffles me as to why people do it to themselves.

So I started thinking about why I love music, and it's because it has the ability to change my mood and emotions. However, I choose to listen to particular songs based on this, so while I might choose to listen to 'Booka Shade - In White Rooms', someone who regularly goes to the SU might choose 'Lady Gaga - Poker Face' (heaven forbid).

So I got thinking: instead of music reflecting an emotion or state of mind, can it work in reverse? Music Mirrors Mind was born.

Can Music Mirror the Human Mind?

At this point I knew I wanted to make something whose output was the playback of music based upon an input of data, but what could this data be? How could I capture human emotions in the form of data? I had been exposed to the NeXus-10 kit in tutorials, which allows human biometric data to be used, and was keen to use it. However, watching a course mate apply the kit made me realise that you had to stand still and remain calm, so a person's measured emotion would always be just that. Ideally I wanted to use brainwave data directly, but this just wasn't feasible at the time; I plan to extend my concept further soon and use it. So at this point I was still searching for an adequate user input.

On top of this I needed a vast amount of psychological research into the human mind, available in my ReadMe. For half of this project I felt as if I was a psychology student.


My idea.

Watching my housemate play a game on his HTC Desire one night caught my eye. The game required him to move the phone in different directions and at different speeds based on how he wanted the game to react. A smooth, slow movement caused the same on screen; a violent shake reflected in the form of fast speed and an angry user. This is when I first noticed that in this modern day we have an emotional connection to technology, and in this case it was connected via accelerometer technology!

I wanted to see just how far this emotional connection could be pushed, and thus applied it to my artefact. I began thinking about how I could use accelerometers to provide the input for my project, which was no easy task.

So at this point my concept was this...

INPUT: Gain user data from the accelerometers on a smartphone.

VISUALIZATION: Reflect this data in the form of shapes / colours similar to that of music visualizations.

INTERACTIVITY: Allow the user to explore the genres of underground music based on this data.

OUTPUT: Music played.


Getting the data.

I wanted Music Mirrors Mind to be web based so that anyone could use it. However, when I go ahead and further this project I will make it a full-blown application available on app stores (hopefully).

So I had two options...

1. Take advantage of new HTML5 technology by using the tutorial provided here...

OR
2. Use an older method, combining Adobe Flash and a user-created gateway called Flosc.

'flosc is a communication gateway, written by ben chun, that allows macromedia flashMX to talk with any software that can understand UDP data. while this includes many sound programming tools, the options are limited only by your curiosity.'

I went for the second method, which became an absolute headache to get working until I got hold of Flash CS5.

Flash CS5 natively allows you to pull in accelerometer data, use it how you wish, and then test it on a phone emulator in Adobe Device Central.

So here was the first bit of code written...

import flash.sensors.Accelerometer;
import flash.events.AccelerometerEvent;
import flash.geom.ColorTransform;

// The device's accelerometer, plus a variable for each axis of movement
var accel:Accelerometer = new Accelerometer();
var accelXpos:int;
var accelYpos:int;
var accelZpos:int;

// Fire the update function (defined below) every time new readings arrive
accel.addEventListener(AccelerometerEvent.UPDATE, update);


This allowed the phone's movement data to be turned into three separate variables. Brilliant.


What I realised at this point was that although there is a clear connection between state of mind and the accelerometer, it is quite limited: there are really only three axes along which you can move the phone. So I did my research into states of mind and emotions, and narrowed them down to three categories with which the user could interact.


1. Calm / Hard-working / At peace
2. Happy / Creative
3. Excited / Energetic / Angry

I must admit, representing these states as a visualisation on a phone based upon how it is being moved was some challenge, and I was a bit sceptical as to how I would actually get it to work. However, I persisted, and by this point treated my project as more of an experiment, posing the question:

How far can our emotional relationship with technology be pushed? Specifically, in this case, with music.


Manipulating the data

Upon first testing in Device Central I realised that the values were too small to manipulate effectively. So I coded this to scale them up, making it easier to define three separate visual paths for the user.


function update(e:AccelerometerEvent):void {
    // Scale the raw readings (roughly -1 to 1 per axis) up to whole numbers
    accelXpos = e.accelerationX * 100;
    accelYpos = e.accelerationY * 100;
    accelZpos = e.accelerationZ * 100;
}

After this I was able to define three separate paths easily, like this...


// Calm branch
if (accelXpos > -30 && accelXpos < 30) {
    .......................................
}

// Happy branch
if (accelXpos < -30 || accelXpos > 30) {
    ......................................
}

// Erratic branch
if (accelZpos > 50 || accelZpos < -50) {
    .............................................
}

Inside each of these if statements sit different buttons and visuals, allowing the user to explore the realms of underground music, and the psychological attachment to it, via their own data.
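
As a taste of what sits inside a branch: the ColorTransform import at the top of the code is what drives the button recolouring described below. Here is a minimal sketch of how the calm branch might tint its button, assuming a button instance named calmBtn and a colour value picked purely for illustration (neither is the project's exact code).

// Sketch only: tint an assumed button instance (calmBtn) to a calm palette
var calmTint:ColorTransform = new ColorTransform();
calmTint.color = 0x4A90B8; // soft blue, illustrative value
calmBtn.transform.colorTransform = calmTint;
calmBtn.visible = true;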


Designing Music Mirrors Mind.

After a huge amount of research into colour and shape association, state of mind and music, I was able to develop artwork that reflected each of the three categories. The stipulations were that the user must be able to explore different music based upon their accelerometer data, by clicking buttons that resonate with the corresponding genres. These buttons must only include colours and shapes that match the genre, but also that particular state of mind.

Here is a snap of each one in its full explored state...

The first screen shows the logo and neutral colours, as no data has been utilised yet. When the button is pressed the accelerometer data is pulled in, so based upon how the phone is being moved the user will get one of three screens.

First screen.




After the grey button is clicked, it will change colour to represent the data it has been given, as seen below. The visualisations also completely change, and a new button is added at the bottom of the screen in a shape that represents that state of mind. After this secondary button is clicked, three new ones appear, each representing a different genre of music and containing three songs each, all of which accurately represent the state of mind.


Each button is a genre of music with three different tracks. When one is clicked, the music playback controls appear. At first I thought I was going to have to code a music player for each button, but this would have made my artefact a lot slower and very buggy, not to mention that it would have been a right effort to code nine times.

So instead I have one player, and an XML playlist for each button that dynamically changes the audio. Like this...

This is the button for the Minimal genre.

// Load the Minimal playlist, then hand it to the shared player
function minimalBtnClick(ev:MouseEvent):void {
    var myXMLLoader:URLLoader = new URLLoader();
    myXMLLoader.addEventListener(Event.COMPLETE, processXML);
    myXMLLoader.load(new URLRequest("minimal_playlist.xml"));
}

This is used for each button/genre, and the playlist filename ("minimal_playlist.xml") is the only part that changes.
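
The processXML handler is what makes the single shared player possible. Below is a minimal sketch of how it might look; the playlist format (track nodes with url, artist and title attributes) and the Sound/SoundChannel handling are my assumptions for illustration, not the project's exact code.

import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLRequest;
import flash.events.Event;

var channel:SoundChannel;

function processXML(e:Event):void {
    // Parse whichever playlist was just loaded (format assumed)
    var playlist:XML = new XML(e.target.data);
    var firstTrack:XML = playlist.track[0];

    // Stop the current song before swapping genres
    if (channel != null) {
        channel.stop();
    }

    // Stream the new track and log its details
    var song:Sound = new Sound(new URLRequest(String(firstTrack.@url)));
    channel = song.play();
    trace(firstTrack.@artist + " - " + firstTrack.@title);
}

Because every button feeds a different XML file into the same handler, swapping genre is just a matter of which playlist gets requested.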

Final thoughts, bug report and further development.

The design and technical aspects of MMM proved a real challenge, especially since a lot of psychology was involved and it is all open to opinion and interpretation, but it does work to a good degree.

However, the playback of audio on an HTC device is extremely laggy. This is because the sound files are on my server and have to be downloaded, and it may also be due to the fact that the artist name and track name are pulled in from XML documents.

So for now Music Mirrors Mind stays in an experimental stage that is open to opinion.
While this has been a university-based project, I have big plans for my ideas concerning Music Mirrors Mind, as there is currently nothing like it. Stay tuned.



Ben Quinney.








Tuesday 8 March 2011

Sex-Ray Specs: an augmented reality adventure.

The problem?
In the modern world, making a meaningful connection for a relationship is hard, and limited to close friends, the workplace, social events such as nightclubs, or dating websites. In these situations contact is based upon physical attraction and rarely on soulful connections such as interests and personality. This restricts the way in which we meet the opposite sex, and even the number of people we can engage with.

Solution


To enable people to connect in a meaningful way via a futuristic wearable accessory (i.e. sunglasses) through which people can access data about those around them, thus not confining the way in which we as humans enter loving relationships.
Controlled by eye-gesture recognition, the user would control what they see and do on the interface to display and manipulate other people's data. The data for each person would be powered by social media websites such as Facebook, dating websites and, of course, our own site, which users sign up to.

The device would use facial recognition software, along with the GPS location of individual mobile phones linked to a specific person, to recognise people on the database or internet profiles, and display an aura around that person when the wearer looks at them whilst wearing 'Sex-Ray Specs'. These 'auras' are a visual representation of a compatibility match between you and the person you are looking at, based on a number of criteria like interest matching, physical attraction, etc. This allows people to see visually whether they would match before even speaking to them.


If a person is selected, biometric information is augmented onto the wearer's viewing space, e.g. pupil dilation or heart rate. As well as this, text-based information about the selected person is shown, such as likes, dislikes, favourite type of music, etc.


At this point the wearer can see if the selected person would be an ideal match. They can then obviously go and speak to the person, or send them an automated personal message to make contact.

Monday 14 February 2011

Dino Data: an HTML5 experience.




Brief.

The brief for this project was to create a flashy website utilising many of the new features available in HTML5. In order to achieve this we have created a children's dinosaur information website with a 'Top Trumps'-style interactive experience, and other fun activities, that are flashy, fun and meaningful.

Introduction.

It was after heavily researching all the new elements available in HTML5 that we decided making a fun interactive website aimed at children would be the best way to incorporate as many elements as possible. We picked the theme of dinosaurs, as it's a topic children of that age are generally interested in.
We mocked up the example below to illustrate the overall look and feel we hoped to accomplish with the final product.



How we went about it.



With a concept image of our idea to work from, it was easier to focus on which elements of HTML5 we were going to need.
In relation to the brief, and the target audience of children that we chose, the website was to be made as 'flashy' as possible.

To start with, we used Adobe Illustrator to create background and dinosaur images that appeal to children. The main background image is simply set in the document.

HTML5 allows for audio and video to be incorporated in a much easier fashion.
The background music is played using the new audio tag, which is auto-played when the website first loads up, and the 3D video is played using the video tag and styled using JavaScript.
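
For example, the markup for both boils down to a couple of tags (the filenames here are placeholders, not our actual assets):

<!-- Background music, auto-played when the page loads -->
<audio src="dino_theme.mp3" autoplay loop></audio>

<!-- The 3D dinosaur video, given an id so JavaScript can style it -->
<video id="dinoVideo" src="dino_3d.mp4" controls></video>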

To give the user a more personal experience we included the use of local storage, so that the website would remember the user's name and how many times they had visited the site, to create a basic user profile.
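
A minimal sketch of how such a profile can work is below; the key names, the prompt and the greeting element are assumptions for illustration, not our exact code:

<script>
  // Remember the visitor's name (key names are assumed)
  var name = localStorage.getItem("dinoUserName");
  if (!name) {
    name = prompt("What's your name?");
    localStorage.setItem("dinoUserName", name);
  }

  // Count how many times they have visited
  var visits = Number(localStorage.getItem("dinoVisits") || 0) + 1;
  localStorage.setItem("dinoVisits", visits);

  // Greet them with their basic profile (assumes an element with id="greeting")
  document.getElementById("greeting").textContent =
      "Welcome back, " + name + "! This is visit number " + visits + ".";
</script>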

We have used the new canvas element to display graphics multiple times: first to create the Top Trumps section at the bottom, drawing the dino cards to the page, and secondly to draw the dinosaur stickers in the scene creation game. The images are scalable and moveable thanks to JavaScript.
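
Drawing a card to a canvas follows the same pattern each time; a minimal sketch (the image path, sizes and text are illustrative, not our actual assets):

<canvas id="dinoCard" width="200" height="300"></canvas>
<script>
  var ctx = document.getElementById("dinoCard").getContext("2d");
  var img = new Image();
  img.onload = function () {
    // drawImage's width/height arguments are what make the images scalable
    ctx.drawImage(img, 0, 0, 200, 220);
    ctx.font = "16px sans-serif";
    ctx.fillText("T-Rex", 10, 250);
  };
  img.src = "images/t-rex.png";
</script>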

We have used time tags with datetime attributes in the basic information about dinosaurs to create a site that is in tune with the idea of the semantic web. By giving certain parts of the information, e.g. '100 million years ago', these tags, the text conforms to the ideologies of the semantic web.
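
The general pattern pairs human-readable text with a machine-readable value (prehistoric spans have no valid datetime form, so this example uses an ordinary date to show the idea):

<p>This site was built in <time datetime="2011-02">February 2011</time>.</p>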

Finally, we used meter tags to provide a rating system for each dinosaur card. Aside from the aesthetic purpose, we included meter tags because they make the data semantic.
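
A rating on a card then looks something like this (the stat and values are illustrative):

Ferocity: <meter value="8" min="0" max="10">8 out of 10</meter>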

Final product.




Our final product demonstrates a wide variety of elements available with the release of HTML5. The information on the website could be expanded further, but this is a good example of how the site works. There is an ever-growing list of elements available in HTML5, but if we were to utilise them all it would be to the final site's detriment.

Link to project

www.busy-signal.co.uk/DinoData.html

Our final product is available by clicking here. While it can be viewed in Safari, it is best viewed in Google Chrome so that all the elements can be utilised.

Wednesday 26 January 2011

Visualisation as Art.

I don't think anyone would dispute the fact that visualisation is art, especially when it engages more than one of the human senses.
I have an obsession with music visualisation, because it adds to the euphoric experience when you go and see live music. It can tell a story, engage emotion and even disorientate the senses.

Below is Chris Cunningham's visualisation for one of Aphex Twin's tracks. The visualisation for this song is powerful, energetic and very artistic. There are underlying emotions, themes and narratives in this video that can be interpreted differently depending on the viewer. To me, that is art.





Chris Cunningham and Aphex Twin


In this video you can see various lighting arrangements and visual displays that have been chosen and carefully designed for a reason. Matching the theme, tempo and energy of the music, they give the viewer an overpowering visual experience.




Carl Cox live, Ibiza 2009

Tuesday 25 January 2011

HTML5 Proposal

We decided to base our HTML5 website on dinosaurs, and therefore make it a fun interactive environment for children. The mock-up below shows a simple layout of the way the site may look. There are five main HTML5 elements that we intend to include: embedded audio, which will autoplay on the load of the page; embedded video, which will be altered by a 3D effect; the canvas element, which will be used to display different dino fact cards; linked to this, all data will be 'tagged' to make the page more semantic; and finally we intend to use the local storage element to build up and display unique user profiles of favourite dinos etc.




Most of the elements we intend to use can be demoed at w3schools; a good example of what we wish to do with our embedded video can be found here, and an example of fact cards on the canvas element can be found here.

Sunday 16 January 2011

Final Blog Conclusion.

Even though After Effects has not been as reliable as I thought it would be, and many hours have been spent searching for solutions to errors, and the dome correction plugin wouldn't work, I'm very proud of myself and my project.

I set out to create an animation with a meaningful cultural narrative that animated suitably to the progression of the audio, and that's exactly what it does. After testing on the inflatable dome I can also see that my animation creates a 3D environment from nothing but 2D animations, something I didn't think I would achieve.

Looking back, I wish I could have created my project in Blender, but I struggled so much, and I think I would have had to come up with a much more simplified idea, and I just didn't want to do that.

Obviously my idea and project are open to opinion, but I achieved what I set out to create, and I hope the audience enjoy watching it as much as I enjoyed creating it.

Blog 11. Question time.

Why didn't I use Blender?

When using Blender I felt I just couldn't get to grips with the simplest of tutorials, and I knew it would hinder my idea greatly. Also, I had not seen any current projects where someone had related audio and video effectively with a 3D package. My idea was also very abstract and colourful when I thought about it, and that's something Blender couldn't give me in such a short time, with such a steep learning curve.



Why did I use After Effects?


Very easy to get to grips with, and much more suited to my project than Blender. With a vast array of plug-ins and tutorials, it seemed silly not to choose After Effects. While I did run into small errors here and there, they were easily solved with the help of an overwhelming user community on the web.

What would I do differently?

I would have loved to have spent a lot more time fine-tuning my project, but the deadline was a factor, obviously.
In the time we had, I managed to teach myself After Effects and produce an animation which I'm very proud of.

Although it is not dome-corrected, due to the plugin refusing to work with my animation, I think I've done a good job of creating a 3D immersive animation that is visually exciting.