This summer, sports fans around the world hit pause on their daily lives to tune into the 2014 World Cup Brazil™. While the trip to Brazil was not possible for many, the opportunity to view at home (or the office!) and consume complementary content on the second screen was a popular choice.
Viewers cheering on via computers and connected devices were offered a new perspective this year by being able to review multiple camera angles as the games were being played. To provide this multi-screen action review and a wide range of other second screen content to viewers from afar, EVS, a global provider of live video production systems, leveraged Brightcove’s Zencoder cloud-based encoding service to power transcoding for its C-Cast content delivery solution. C-Cast is a multimedia distribution platform that allows broadcasters to instantly deliver complementary content to viewers on web and mobile devices during live events. Host Broadcast Services (HBS), the main provider of broadcast and interactive services for the 2014 FIFA World Cup, managed the implementation and integration of C-Cast to deploy second-screen experiences.
“We took second screen content to the next level by allowing broadcasters to deliver multi-camera action review, key highlights browsing and review, stats-related content and player, team and personality tracking,” said Nicolas Bourdon, SVP Marketing at EVS. “This completely enhanced the at-home viewing experience and judging by the stats, viewers were pleased and eager to view the World Cup this way.”
Reaching the magnitude of viewers that the World Cup attracts with a high-quality viewing experience is no easy feat. With the games over and the numbers crunched, we’re happy to share some stats directly from Zencoder on the scale of video that powered the World Cup on the second screen:
Recording, uploading and transcoding video straight from the browser
Video on the web continues to grow by leaps and bounds. If you want a cheap-but-apt analogy, it’s like a tween: it already knows a lot, but still has substantial growth ahead. It’s becoming too cool for plugins like Flash and Silverlight, but a lot of the shiny and exciting APIs on the edge of the spec still need to mature a little.
Last year at WebExpo I gave a talk about the future of video on the web, as well as a brief foray into new and exciting features that were on the way. Some of the things I talked about, like Encrypted Media Extensions, were, for the most part, still a pipe dream with few to no working implementations. Now, just a year later, nearly all of the things discussed in that talk have meaningful implementations in most modern browsers.
I recently gave a talk at DeveloperWeek about how the future of video on the web has arrived. During the talk we walked through real-world examples of each “future” feature and, amazingly, how each of them can be (cautiously) used in the wild today.
One of those features was WebRTC and its constituent APIs. We built a simple example of using getUserMedia to request a user’s webcam and display the stream in a video element. To take this a step further, let’s extend that example to record content, then send it off for transcoding, directly from the browser.
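A minimal sketch of that flow, assuming a browser with MediaRecorder support; the `uploadUrl` endpoint is hypothetical and stands in for wherever you stage the file (from there it could be handed to Zencoder as an input like any other):

```javascript
// Sketch: record from the webcam with getUserMedia + MediaRecorder,
// then POST the resulting blob to a (hypothetical) upload endpoint.
async function recordAndUpload(uploadUrl, durationMs) {
  // Ask the user for camera + microphone access.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  const stopped = new Promise((resolve) => (recorder.onstop = resolve));

  recorder.start();
  await new Promise((r) => setTimeout(r, durationMs)); // record for durationMs
  recorder.stop();
  await stopped;

  // Assemble the recorded chunks into a single file and upload it.
  const blob = new Blob(chunks, { type: 'video/webm' });
  const form = new FormData();
  form.append('file', blob, 'recording.webm');
  return fetch(uploadUrl, { method: 'POST', body: form });
}
```

Once the file lands somewhere Zencoder can reach (S3, for example), kicking off a transcode is just a normal job request against that URL.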
From a reliability perspective, one of the great things about VOD transcoding is the ability to simply try again. If anything goes wrong during the transcoding process, we can re-run the job from the beginning. The worst effect of a hiccup is that a job takes a little longer, which is unfortunate, but the end result is a transcoded file delivered to the customer and all is well.
During a live event, however, we don’t have the luxury of simply trying again. Since these events are transcoded in real time, we only have one chance to deliver a perfect transcode to end-users; a blip anywhere along the way can mean an interruption. No matter how reliable the system is, you never know when something like a power supply or network card is going to fail, so redundancy is crucial for high profile events.
To account for this, we recently announced redundant transcoding for live streams. In a nutshell, if redundant_transcode is enabled, and at least one secondary_url is specified, Zencoder will transparently create a new job using any secondary URLs specified in the outputs of the original request. This job has all the same settings as the original, with the important distinction of being run in the nearest transcoding region of an entirely different cloud provider.
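As an illustration, a live job request enabling the feature might look something like this (the `redundant_transcode` and `secondary_url` fields are the ones described above; the other settings and URLs are purely illustrative):

```json
{
  "live_stream": true,
  "redundant_transcode": true,
  "outputs": [
    {
      "label": "hls_720p",
      "url": "rtmp://primary.example.com/live/stream",
      "secondary_url": "rtmp://backup.example.com/live/stream"
    }
  ]
}
```

If the primary job hits trouble, the redundant job on the second provider is already running against the secondary URL, so the stream keeps going.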
Last night, at 6:08pm EDT, the Zencoder service went offline due to a database failure. We began working on the problem immediately, but unfortunately our primary approach to solving the problem was unsuccessful, and the secondary approach took an extended period to implement. In total, the service was unavailable for six hours and 18 minutes.
Here is a detailed description of what happened, why, and why it will never happen again.
The engineering team at Zencoder conducted a review to assess the impact of the CVE-2014-0160 vulnerability, nicknamed “Heartbleed,” on our systems. This serious vulnerability allows an attacker to read up to 64 kB of memory from a connected client or server. Even though many servers running on the internet were exposed, Zencoder’s public-facing servers were not affected.
Some of our internal servers, which are not accessible by customers, were vulnerable. We fixed this vulnerability as of 8:00PM PST on Monday, Apr 7th.
Today, we are excited to unveil our support for advanced encoding formats. Zencoder will now support HEVC, MPEG-TS, JPEG 2000 and AVC-Intra, bringing the power of cloud encoding to broadcast and professional workflows. Additionally, Zencoder will also support the MXF container format for conformity with DPP guidelines for European broadcasters.
With today’s announcement, Zencoder is applying the elastic scalability of the cloud to solve encoding challenges for a new segment of the production workflow. Just as Zencoder empowered publishers to rapidly transcode video to support the fragmented Web and mobile ecosystem, the service is now enabling broadcasters and professional content providers to scale their media processing operations in the cloud.
Zencoder’s support for advanced encoding formats is critically important as the OTT and streaming landscape continues to expand. The market is disparate and demands that content providers:
Create a video rendition for every streaming outlet
Create files for endpoints for international subsidiaries and distribution partners
Provide encoded libraries of content to support licensing deals
All of these initiatives can be costly when relying on on-premises encoders. Cloud encoding for these workflows alleviates cost concerns and helps publishers better plan for capacity.
For our customers in the broadcast realm and within other professional workflows, support for H.265 (HEVC) and MPEG-DASH is particularly important. Because H.265 promises to deliver high-quality video at lower bitrates – ideal for 4K/Ultra HD – broadcasters are hyper-focused on support for this standard. Additionally, MPEG-DASH will serve as an international standard that could unify cross-platform workflows. We are very excited to provide premium publishers with seamless encoding solutions that simplify incredibly complex workflows.
Advanced Encoding Formats are now in beta. Contact us to learn more about participating in this beta program.
If you missed it, Casey Wilms did a three-part series on the blog called Manifest Destiny. I liked it so much I started playing around with hacking together manifests programmatically and decided to do a screencast on the basics. The screencast reiterates some of what Casey covers: the basics of an HLS manifest and how to go about manually editing one. We finish by writing a short Node.js script that takes in multiple manifests and concatenates them before writing out a new m3u8 we can watch.
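The core of that script can be sketched like this — a deliberately naive concatenation that assumes the input media playlists use compatible codecs and segment settings, with only minimal parsing:

```javascript
// Sketch: naively concatenate HLS media playlists into one VOD playlist.
// Assumes the inputs are compatible (same codecs, similar segment durations).
function concatManifests(manifests) {
  const segments = [];
  let targetDuration = 0;

  for (const text of manifests) {
    const lines = text.trim().split('\n');
    for (let i = 0; i < lines.length; i++) {
      const line = lines[i];
      if (line.startsWith('#EXT-X-TARGETDURATION:')) {
        // The combined playlist needs the largest target duration seen.
        targetDuration = Math.max(targetDuration, parseInt(line.split(':')[1], 10));
      } else if (line.startsWith('#EXTINF:')) {
        // Keep each EXTINF tag together with the segment URI on the next line.
        segments.push(line, lines[++i]);
      }
    }
  }

  return [
    '#EXTM3U',
    '#EXT-X-VERSION:3',
    `#EXT-X-TARGETDURATION:${targetDuration}`,
    '#EXT-X-MEDIA-SEQUENCE:0',
    ...segments,
    '#EXT-X-ENDLIST',
  ].join('\n');
}
```

A real script would also rewrite relative segment URIs against each manifest’s base URL and insert `#EXT-X-DISCONTINUITY` tags at the joins, but the shape of the problem is the same.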
You might not believe it, but working at a company like Brightcove means you think about video quite a bit. For a service like Zencoder, for instance, we constantly need to keep in mind what formats our customers need now and down the road, along with the devices they will want to support. The world of video is big and a lot of the work we do affects others in our space (and vice versa), making it important for those of us working in this space to communicate. Because of this, we’re supporting the SF Video Technology Meetup, and we’d love for you to join us.
Last Tuesday was the 5th meeting, and Robert Scott, the platform engineering manager at Inkling, spoke to us about Inkling’s platform and how they handle video. He walked us through the Inkling service itself from both an editor’s and a user’s perspective, then went into the process they went through before deciding on a transcoding service. The group is usually pretty vocal, so Inkling’s use of signed CloudFront URLs with HTTP Live Streaming led to a great discussion on the practice.
Excellent presentations from last year included J Sherwani and Faraz Khan from Screenhero, who talked about H.264 vs VP8 for low-latency video streaming, and Jon Gubman from Funny or Die, who followed up with a presentation on Funny or Die’s switch to HTML5 video from Flash and the gotchas they uncovered along the way. Robert Scott started the new year on an excellent note, so we’d like to continue the trend in the upcoming months, but the group needs your help.
This month’s meetup (February 25th) is going to focus on HTTP streaming. The talks will follow more of a lightning-talk format, with Anton Kast (Video Architect at Yahoo) giving an overview of DASH, and myself (Matt McClure) talking about dynamically generating HLS manifests. One more talk may be added before the meetup, so keep an eye out.
Interested in talking? Please get in touch! Anyone is welcome to take the floor. Talks don’t have to be long (anywhere from 5 to 10 minutes) and can consist of anything from technical demos to presentations (or both). The general rule of thumb is that a talk should be interesting to engineers who work with video, but beyond that we’re open to any proposals.
If you’re interested in coming, please join the Meetup so we can get in touch about updates (and make sure to buy enough food and drink). Hope to see you there!
Brightcove is sponsoring the 2014 TV Hackfest February 5th and 6th at the Moscone Center. We think the web is the future of TV, but sometimes content still needs a little help getting to every device. We help developers build these applications by providing fast, scalable, and easy to use video or audio transcoding that allows them to deliver media to any browser, mobile or OTT device.
We’re awarding prizes for both best use of the Zencoder API and best use of Video.js. Both work well with other APIs and can be used in conjunction with other sponsoring services. For the team that does the coolest work with the Zencoder API, we’re giving each team member a Twine + Breakout Board. If you’re not familiar with Twine, it’s a little sensor box that can talk to the web, similar to If This Then That for the real world. The breakout board lets you extend Twine’s built-in sensors with your own digital input. They’ve got some neat how-tos if you get to take one of these babies home and need a little inspiration.
If you’re playing back video via HTML5 in the browser, you’ll need a player! If you’re not familiar, we help maintain and contribute to Video.js, a free, open source HTML5 video player. It’s dead simple to customize and there’s a growing list of plug-ins that provide everything from YouTube playback to playlists to stereo panning. For the best use of Video.js, we’re giving each team member a 3 pack of San Francisco’s famous Blue Bottle Coffee.
We’re providing free encoding minutes to contestants, and we’re always available to answer any questions about the API or Video.js. As always, all of our documentation is available online, but you can also look for Matt, the guy in the Zencoder hoodie, or ping him on Twitter and he’ll be happy to help.
With the recent addition of support for both Google Cloud Storage (GCS) and Google Compute Engine (GCE), Zencoder is breaking new ground in becoming “cloud-agnostic” — enabling customers to have control over where and how their content is processed, at scale.
Google Cloud Storage joins Amazon S3 and Rackspace Cloud Files as a supported cloud storage provider in Zencoder. Like the s3:// or cf:// protocols, you can also construct input and output URLs with the gcs:// protocol to access content that is stored in Google Cloud Storage. With a bit of initial setup, you’ll be able to ingest content from your GCS buckets and push transcoded renditions (including adaptive HTTP Live Streaming) to GCS.
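As a sketch, a job request reading from and writing to GCS with the gcs:// protocol might look like the following (the bucket and file names are purely illustrative):

```json
{
  "input": "gcs://my-bucket/sources/master.mov",
  "outputs": [
    {
      "label": "mp4_high",
      "url": "gcs://my-bucket/renditions/high.mp4"
    },
    {
      "label": "hls",
      "type": "segmented",
      "url": "gcs://my-bucket/renditions/hls/playlist.m3u8"
    }
  ]
}
```

Aside from the protocol prefix, the request looks just like an S3 or Cloud Files job, so switching storage providers is mostly a matter of swapping URLs.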
We’re also pleased to announce that Zencoder VOD transcoding jobs can now be run on Google Compute Engine, which was recently made generally available. By using both Google Compute Engine and Google Cloud Storage, you can take advantage of Google’s massive network to power your video transcoding and delivery workflow.
Below, we’ve put together a quick guide on configuring Zencoder to take advantage of both GCS and GCE. If you have any questions, feel free to get in touch with our support team at firstname.lastname@example.org.