Distributing Live VR Video Content - The Reality behind the Experience

Blog Post created by charliekraus on Jun 6, 2016

Virtual Reality (VR) is an exciting concept with huge possibilities for altering how we experience many forms of entertainment. In a blog following the recent National Association of Broadcasters show, I discussed three VR demos I experienced that proved that, with proper filming and viewing technology, the results are quite amazing. VR broadcasts have already occurred at high-profile sporting events such as the March Madness basketball tournament and the Masters Golf event, as well as for music and entertainment content. Viewing these through a VR headset can make you feel as if you are on the sidelines of a game or in the front row at a live performance by your favorite band.

Given how immersive a good VR experience is with the ability to have a 360 degree view by swiveling your head, I wondered about the bandwidth required for all the video streaming data for a broadcast. Let’s take a brief look under the covers at VR streaming and what it means for network service providers.

Streaming Virtual Reality across Distance

The process of capturing VR images, broadcasting them over a network, and displaying them to viewers is an end-to-end ecosystem.

VR cameras have to capture multiple views of a scene, and to do this they have multiple lenses arranged in a circle like a carousel, with many also adding lenses aimed up and down for full surround image capture, like the example below.

[Image: a multi-lens VR camera]

With so many cameras in play, a bit of math yields a scary aggregate bitrate that would potentially have to be transmitted from a venue to the broadcast facility, encoded, and broadcast to viewers. Let's look at a simple case first: a single viewer watching a live VR video of a scene. The immersive experience of VR comes from the headset presenting the view in whatever direction the viewer looks; whether they rotate their head side to side, look up, or look down, the image changes to match. In this case, the technology directs the camera to send only the portion of the image corresponding to where the viewer is looking. An app in the VR headset reconstructs the video and 3D geometry in real time, fast enough that there is no video lag. Seems simple enough.

The next level of complication is how the headset app reconstructs the image without video lag. What actually happens is that the views adjacent in every direction to where the viewer is looking are also transmitted, so when the viewer moves their head, the image is already available for the app to display. This obviously gets more complex if the viewer moves their head rapidly in several directions in succession.
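The idea of fetching the facing view plus its neighbors can be sketched in a few lines of code. This is an illustrative simplification, not how any particular headset works: it assumes the sphere of views is divided into tiles 30 degrees wide in yaw and 30 degrees tall in pitch, and the names here are invented for the example.

```python
# Sketch of viewport-adaptive view selection (illustrative only).
# Assumption: the 360-degree sphere is split into 30x30-degree tiles,
# giving 12 yaw columns and 6 pitch rows (72 tiles total).

YAW_STEP, PITCH_STEP = 30, 30

def tiles_to_fetch(yaw_deg, pitch_deg):
    """Return the tile the viewer is facing plus its eight neighbors,
    so a quick head turn finds the adjacent view already buffered."""
    center_yaw = int(yaw_deg // YAW_STEP)
    center_pitch = int(pitch_deg // PITCH_STEP)
    tiles = set()
    for dy in (-1, 0, 1):
        for dp in (-1, 0, 1):
            yaw_idx = (center_yaw + dy) % (360 // YAW_STEP)  # wrap around horizontally
            pitch_idx = max(0, min(180 // PITCH_STEP - 1,
                                   center_pitch + dp))        # clamp at the poles
            tiles.add((yaw_idx, pitch_idx))
    return tiles

# Looking straight ahead (pitch 90 = the horizon in this scheme),
# the player fetches 9 of the 72 tiles instead of all of them.
print(len(tiles_to_fetch(0, 90)))  # 9
```

The payoff is the same one the paragraph above describes: only a small neighborhood of views travels over the network at any moment, instead of the full sphere.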

What happens with a live VR sporting event TV broadcast, like at the Masters Golf Tournament earlier this year? With huge numbers of viewers watching, all camera images have to be broadcast to users so they can each choose their preferred view of the action. That is a lot of data.

This is all well and good for a live broadcast. A common use case cited for VR is users taking virtual tours of resort destinations, museums, famous sites, etc. Now we have video-on-demand VR. This means a VR video file containing the content of all the cameras together, so the viewer can look in any direction and see the appropriate image. I'll spare you the math, but depending on resolution and frame rate such files run into the gigabytes: even at 56 Mbps, one minute of video is roughly 420 MB, and five minutes over 2 GB! So users aren't going to download a lot of VOD VR content, and hosting a lot of this will mean large storage expenditures. In the case of live video, we have bandwidth to think about. In a keynote presentation at Streaming Media East in May, Josh Courtney of SkyVR made the case that 4K video will be a requirement for the experience audiences will expect, and note that there are separate left- and right-eye 4K streams. Depending on encoding and format, the required bandwidth will be 20 Mbps to 56 Mbps!
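The file-size math above is simple enough to check yourself. The only subtlety is the bits-to-bytes conversion: bitrates are quoted in megabits per second, file sizes in gigabytes.

```python
# Back-of-the-envelope VR streaming numbers (Mbps = megabits per second).

def file_size_gb(bitrate_mbps, minutes):
    """Size of a stream recorded at a constant bitrate, in gigabytes."""
    bits = bitrate_mbps * 1e6 * minutes * 60
    return bits / 8 / 1e9  # 8 bits per byte, 1e9 bytes per GB

# At the 56 Mbps upper bound quoted above:
print(round(file_size_gb(56, 1), 2))  # 0.42 GB for one minute
print(round(file_size_gb(56, 5), 2))  # 2.1 GB for five minutes
```

Scale that across a catalog of virtual tours and the storage-cost concern becomes obvious.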

The implications of this are far-reaching. How will global network providers scale to handle the bandwidth demands? Is there a business case to justify the investment? Will new video compression algorithms reduce the requirements? Today, the best compression algorithms reduce a 4K stream to about 2.5x the bitrate of a 1080p stream. That's not good enough. Encoding techniques are progressing and could help reduce the amount of data that has to be transmitted to each user.
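To see what that 2.5x figure implies, a quick calculation helps. The 1080p bitrate below is an assumed illustrative figure, not a number from this post:

```python
# 4K has four times the pixels of 1080p...
pixels_1080 = 1920 * 1080
pixels_4k = 3840 * 2160
print(pixels_4k / pixels_1080)  # 4.0

# ...but the best codecs deliver it at only 2.5x the 1080p bitrate.
assumed_1080_mbps = 8                      # illustrative 1080p bitrate
best_4k_mbps = assumed_1080_mbps * 2.5
print(best_4k_mbps)                        # 20.0 Mbps for one 4K stream
# Separate left- and right-eye streams double that again, which is
# why per-viewer VR bandwidth lands in the tens of megabits per second.
```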

The issues raised here are similar to what is happening with 4K TV, so the VR world is in good company. A lot of large organizations have a stake in all this, and large investments in R&D are being put to work to come up with solutions. Certainly a lot to watch unfold in the next few years.

To learn more about encoding research, the article "Next-generation video encoding techniques for 360 video and VR" does a great job of covering the engineering challenges and solutions.