At NAB (the National Association of Broadcasters show) this week, Akamai and NBA.com are showing off a live video app that Dreamsocket developed for the Final Four. The app they are showing is basic: a video mosaic that lets you switch camera feeds during a live event by clicking sections of the mosaic or buttons. I developed that app in a blink, but what I’d really like to show is the prototype, which actually used a single mosaic feed that it spliced and then blitted to the screen.
So what do I mean by blitting video? For those not familiar with the concept: instead of letting the video simply “show” on the screen, you capture the video’s pixels, cut sections out of them, and then render those sections where and how you want. In the case of the NBA mosaic, the mosaic was a single video containing 4 different views. The broadcaster combined all 4 into the same picture and broadcast that as one video (you see this on the news sometimes). When I received the video, before I “showed” it, I cut it up and placed the pieces where I wanted them. This let the mosaic act as 4 interactive elements that the user could zoom in and out of in real time. And since I could make as many duplicates of each video section as I wanted, I was also able to use each section inside the actual buttons, giving real-time previews of every view.
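To make that concrete, here is a minimal sketch of the blitting technique in ActionScript 3. This is not the production NBA code; the 640x480 mosaic size, the 2x2 layout, and the `video` variable (a `Video` object already attached to the mosaic `NetStream`) are all assumptions for illustration.

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.events.Event;
import flash.geom.Point;
import flash.geom.Rectangle;
import flash.media.Video;

// Off-screen buffer that holds one full frame of the mosaic.
var snapshot:BitmapData = new BitmapData(640, 480, false, 0x000000);
var quads:Array = [];

// One Bitmap per camera view; each can be positioned, scaled, or
// duplicated independently (e.g. reused as a live button preview).
for (var i:int = 0; i < 4; i++) {
    var quad:Bitmap = new Bitmap(new BitmapData(320, 240, false, 0x000000));
    quad.x = (i % 2) * 330;      // simple 2x2 layout with a small gutter
    quad.y = int(i / 2) * 250;
    addChild(quad);
    quads.push(quad);
}

addEventListener(Event.ENTER_FRAME, blit);

function blit(e:Event):void {
    // Grab the current mosaic frame. This throws a SecurityError unless
    // the server grants videoSampleAccess (see below).
    snapshot.draw(video);

    // Carve the frame into quadrants and blit each into its own Bitmap.
    for (var i:int = 0; i < 4; i++) {
        var src:Rectangle = new Rectangle((i % 2) * 320, int(i / 2) * 240, 320, 240);
        quads[i].bitmapData.copyPixels(snapshot, src, new Point(0, 0));
    }
}
```

Since every quadrant ends up as ordinary `BitmapData`, zooming a view is just a matter of scaling its `Bitmap`, and a button preview is just a second `Bitmap` pointed at the same pixels.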
So technically and visually, the concept is pretty cool. More interestingly, it has some practical benefits as well. The only other way to achieve this interactivity is to open all 4 feeds at once, which is expensive for the server, the bandwidth, and the client machine (in other words, not feasible). By opening a single feed, you consume the bandwidth of only one stream, you never hit the delay of switching streams, your feeds are never out of sync, and you have full control over manipulating the entire mosaic.
Are there downfalls? Yes, there are. For one, this is only possible with video you have the rights to script against. In the case of FMS, you need FMS3, and the server must allow you to capture the video’s pixels, either in its application script (p_client.videoSampleAccess = "/";) or via its configuration. The other downfall is that because you have 4 videos in one, each individual video’s quality is lower, since you are scaling up a smaller picture.
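For reference, granting that access in the application script looks roughly like this. This is a minimal Server-Side ActionScript sketch for a hypothetical FMS3 main.asc, not a complete application:

```actionscript
// main.asc (Server-Side ActionScript) on FMS3.
application.onConnect = function (p_client) {
    // Let this client call BitmapData.draw() on streams under this path.
    // "/" opens up everything; a narrower path restricts which streams
    // the client can sample.
    p_client.videoSampleAccess = "/";
    this.acceptConnection(p_client);
};
```

Without this (or the equivalent server configuration), the client-side draw() call simply throws a security error and you get no pixels to blit.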
All that said, this is an interesting concept that you could take in a lot of different directions. I’d love to hear people’s thoughts and ideas.