Wednesday 15 December 2010

IDAT 204 - My killer application.

http://busy-signal.co.uk/mediaindex.html
Note: it does take a while to load and may require a refresh!


My Killer App

The current web is designed for human readability and requires human interaction to perform any desired task. The semantic web would take the web to a new form of interaction by presenting data in a way that is readable and understood by computers as well as humans. This would allow machines to perform tasks such as searching without a human operator. The semantic web does not change the web as we know it; it extends it, by adding new forms of data and tags, in the form of XML, so that documents become readable by both humans and computers.
The problem

Our society has been in a media frenzy for the past ten years. With new technologies becoming available, an overwhelming volume of content has built up: 24 hours of video are uploaded to YouTube every minute alone. With this in mind, it can be a hard and lengthy process to find the exact media file you are looking for, be it audio, video or text, especially if you don't know the title or producer. For example, if you are trying to explain to a friend a film you have seen but have no idea of the title, it is obviously going to be very hard to share the media you wish to. The same problem is true of information in general; although Wikipedia is considered by many to be the main index of information on the web, my application is a similar service but for media-related files.
The solution
“A killer application that automatically tags current and future media content.”

My application would use software to scan any media uploaded to the net via a unique API. This API could be included in any upload service on any site, such as YouTube. After the media file has been scanned, it would convert any dialogue into searchable text. The person who uploads the file can then tag it with unique tags. In addition, any viewer can add tags to the selected content, creating quicker and more efficient searches. So instead of tagging just title, genre etc., which is possible on most video sites now, the person will tag things such as location, actor name or song pitch. Therefore my application is not a database; it is a search engine.
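As a rough illustration only (these tag names are hypothetical, not a finalised schema), the extra tags attached to a music clip might look something like this:

    <post>
        <title>Title: Example Song</title>
        <mediatype>Media Type: Music</mediatype>
        <tags>
            <location>Berlin</location>
            <artist>Example Artist</artist>
            <pitch>A minor</pitch>
        </tags>
    </post>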

Basically, the user inputs a selection of text, be it an excerpt from a song, film or book, and selects the media type to narrow down the search. So if the user has typed in a line from a song such as “then we kiss”, upon clicking search the software will scan the user-generated tags and XML documents of the media for the related text. Based upon this search it will select what it believes is the most relevant video, song or book, depending on what media category was selected on the first screen. In addition, it will start the media file at the exact point at which the text appears. For example, in the second screenshot, after having typed in “then we kiss” the video has started at 0.18. It knows to start here via the XML document, which has enabled the file to become searchable text.

So here is an example of the XML for any given media file (Produced by: Oliver Koletzki, Title: Hypnotized):


<?xml version="1.0" ?>
    <post>
        <title>Produced By: Oliver Koletzki  Title: hypnotized</title>
        <author>Media Type: Music</author>
        <time>Time: 0.18</time>
        <message>
            I catch your eyes, try not to smile
            I track your style, I feel your vibes
            We have a drink then go outside
            Talk for a while and then we kiss
        </message>
    </post>
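To make this concrete, a search against XML like the example above could work roughly like the sketch below in the prototype's language, ActionScript 3.0 with E4X (the file name media.xml and the exact parsing are my own assumptions, not final code):

    import flash.net.URLLoader;
    import flash.net.URLRequest;
    import flash.events.Event;

    // Load the XML document that describes the media file (file name is illustrative).
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onXmlLoaded);
    loader.load(new URLRequest("media.xml"));

    function onXmlLoaded(e:Event):void {
        var post:XML = new XML(loader.data);
        var query:String = "then we kiss";

        // Scan the dialogue/lyrics held in <message> for the search text.
        if (post.message.toString().toLowerCase().indexOf(query.toLowerCase()) != -1) {
            // Strip the "Time: " label to get the start point, e.g. "0.18".
            var startPoint:String = post.time.toString().replace("Time: ", "");
            trace("Match: " + post.title + " - start playback at " + startPoint);
        }
    }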


Starting to make Media Index

So as you can see, I have mocked up some basic images of how I want the interface to look. I have kept the design minimal yet aesthetically pleasing.
The reason for this is inspiration from Google's incredible success in creating an easy-to-use service. To make this prototype functional I will be using Flash (ActionScript 3.0) and XML. So, like many others, I set off to gain a deeper understanding of both languages via tutorials.
While the tutorials I found helped, I quickly learnt that showing the full potential of my prototype was going to be hard. Pulling in video via XML was a very tricky process, so I kept it simple by embedding the video in the Flash file. The only drawback of doing it this way was that I now only knew how to import a single media type. The original plan was to have an example of a book, song and film all being pulled in from XML. However, the chosen file fully demonstrates how my product works.
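For the record, this is roughly what pulling a video in from an XML-supplied path would look like in ActionScript 3.0 (a sketch only; the file name is made up, and my actual prototype embeds the video instead):

    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    // Path that would normally be read from the loaded XML document.
    var videoPath:String = "clip.flv";

    var connection:NetConnection = new NetConnection();
    connection.connect(null); // progressive download, no media server needed

    var stream:NetStream = new NetStream(connection);
    stream.client = { onMetaData: function(info:Object):void {} }; // swallow metadata callbacks

    // Frame-script style: assumes this runs on the timeline of the Flash file.
    var video:Video = new Video(480, 270);
    video.attachNetStream(stream);
    addChild(video);

    stream.play(videoPath);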






So as you can see from this screenshot, the design has changed slightly. Instead of a drop-down box I now have radio buttons called Title, Media Type and Message. The user will select each one depending on how much information they have about the media file they are trying to find. For example, if they know the producer of a film and a small passage of dialogue from it, they will select those two buttons and search on one of them. However, all three can be selected to search all of the tags contained in the XML. The result will depend on which buttons were selected and, if nothing is typed or the search request is less than 6 characters, the user will see an error message. I have done this so that my application gets the best search results.
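The logic behind those buttons is roughly the following sketch (ActionScript 3.0; the variable names are mine, and the media type happens to live in the <author> tag of my example XML):

    // Which radio buttons the user has selected (set by the interface).
    var searchTitle:Boolean = true;
    var searchMediaType:Boolean = false;
    var searchMessage:Boolean = true;

    function runSearch(query:String, post:XML):void {
        // Reject empty or very short searches so results stay relevant.
        if (query == null || query.length < 6) {
            trace("Error: please type at least 6 characters.");
            return;
        }

        var q:String = query.toLowerCase();
        var match:Boolean = false;

        // Only scan the tags the user selected.
        if (searchTitle && post.title.toString().toLowerCase().indexOf(q) != -1) match = true;
        if (searchMediaType && post.author.toString().toLowerCase().indexOf(q) != -1) match = true;
        if (searchMessage && post.message.toString().toLowerCase().indexOf(q) != -1) match = true;

        trace(match ? "Result: " + post.title : "No result found.");
    }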

The result box gives the user vital information on the file: Title, Media Type and timeframe. The timeframe result is key to my application. The result will depend on what the user searched for. So, for example, if the words from a song are 2 minutes and 20 seconds into the file, the video will start at that point. My application knows to do this because of the unique Time tags given to each file automatically.
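With a NetStream-based player like the one sketched earlier, jumping to that point is a single call (the 140 seconds here just stands for 2 minutes 20 seconds, parsed from the <time> tag):

    import flash.net.NetStream;

    // Jump the playing video to where the searched text occurs.
    function jumpTo(stream:NetStream, seconds:Number):void {
        stream.seek(seconds); // NetStream.seek takes an offset in seconds
    }

    // e.g. lyrics found 2 minutes 20 seconds into the file:
    // jumpTo(stream, 140);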
I also realised that any given search will more than likely give multiple results, especially if the dialogue is the same for two different media types, e.g. a movie adaptation of a book. With this in mind I have added a ‘Nearest Media Types’ section, a lot like YouTube’s suggested videos, but mine contains all three media types from many different sources.

Why is Media Index Semantic?
Currently computers can only search media by title and certain tags that a human gives it. Tagging does make for an effective search, so that will remain in place with Media Index, but a bit more fine-tuned; as mentioned, new tags such as location, actor name or song pitch will be added automatically.
On top of this, my application relies on software not available yet. It will convert any media file into a searchable text document the very moment it is uploaded to the web. So if a user uploads a video to YouTube, the software will scan the data and create an XML file searchable via my application.
However, nothing will be hosted on my site, because that would make it a database and that is possible now. Instead my application is a search engine for media files, like Wikipedia is for information.



What makes Media Index a Killer app?

A killer application, to me, is something which dominates its market and makes money. My application will do both. When the user gets the desired result, e.g. a music file, links will appear showing the three cheapest prices for the product from three different sources. This is something YouTube has incorporated, making revenue by providing a link for the user to purchase such media files. So, by sending vast amounts of traffic to many different sites, it will provide sales for each, and my application will take a certain amount of the money from each sale.

Conclusion

Here is the final and finished prototype: www.busy-signal.co.uk/mediaindex.html