Transcoding and Live Streaming with Zencoder, S3 + CloudFront

We recently published a quick-start guide detailing how to publish an HLS live stream to S3 for testing purposes. If you’d like to take it a step further, you can use CloudFront as a CDN with no additional commitment. This guide assumes you’ve already gone through the Live to S3 quickstart guide and have that working.

CloudFront Setup

First we need to set up our CloudFront distribution. Log into your AWS console and go to your CloudFront dashboard. Click “Create Distribution” in the top left corner.

Create Distribution

You should now see two delivery method options. You might be tempted to select the “Streaming” option, but that’s specifically for the RTMP protocol. What we want in this case is “Download”, so make sure that’s selected and click “Continue”.

Delivery Method

When you click the input box for “Origin Domain Name”, you will see a drop-down menu of your available S3 buckets. Select the one you used in the Live to S3 quickstart guide, and the “Origin ID” field will be populated automatically. “Restrict Bucket Access” lets you require that viewers reach your bucket only through CloudFront. Because we’ll want to test both access paths later, leave this disabled.

You could leave the rest of the settings untouched, but just for fun I went ahead and set up a CNAME for mine. If you have a domain name available, you can specify a subdomain such as “live.yourawesomedomain.com” in “Alternate Domain Names (CNAMEs)” under Distribution Settings. Don’t worry about creating the DNS record yet; just enter the subdomain you plan to use, and CloudFront will provide the distribution domain name in the next step. I also created a simple landing page as index.html, uploaded it to my S3 bucket, and specified it as the “Default Root Object”, so someone who goes directly to live.yourawesomedomain.com sees that page instead of an S3 permissions error. Once again, completely optional.

Distribution Settings

When you’re ready, click “Create Distribution”. You’ll be taken back to your CloudFront dashboard where you should see your brand new distribution being created. While you’re waiting, if you specified a CNAME earlier you can go ahead and set that up. Simply set up a CNAME that points to the Domain Name listed for your new distribution. After a minute or two you’ll see the status go from “InProgress” to “Deployed”, at which point you’re ready to start streaming.

In Progress

Once your distribution is done deploying, if you created a CNAME and specified a default root object, you should be able to see that page by visiting your custom domain. At this point, it’s time to set up a new stream. The only change we need to make to the original Zencoder request is an additional Cache-Control header. Its max-age needs to be, at most, half the segment length (the Zencoder default is 10 seconds) for each rendition; otherwise viewers will run into all sorts of issues with stale, cached manifests. Your whole request should now look like this:

{
  "live_stream": true,
  "outputs": [
    {
      "label": "hls_300",
      "size": "480x270",
      "video_bitrate": 300,
      "url": "s3://YOUR_S3_BUCKET/awesomeness_300.m3u8",
      "credentials": "s3",
      "type": "segmented",
      "live_stream": true,
      "headers": {
        "x-amz-acl": "public-read",
        "Cache-Control": "max-age=4"
      }
    },
    {
      "label": "hls_600",
      "size": "640x360",
      "video_bitrate": 600,
      "url": "s3://YOUR_S3_BUCKET/awesomeness_600.m3u8",
      "credentials": "s3",
      "type": "segmented",
      "live_stream": true,
      "headers": {
        "x-amz-acl": "public-read",
        "Cache-Control": "max-age=4"
      }
    },
    {
      "label": "hls_1200",
      "size": "1280x720",
      "video_bitrate": 1200,
      "url": "s3://YOUR_S3_BUCKET/awesomeness_1200.m3u8",
      "credentials": "s3",
      "type": "segmented",
      "live_stream": true,
      "headers": {
        "x-amz-acl": "public-read",
        "Cache-Control": "max-age=4"
      }
    },
    {
      "url": "s3://YOUR_S3_BUCKET/master.m3u8",
      "credentials": "s3",
      "type": "playlist",
      "streams": [
        {
          "bandwidth": 300,
          "path": "awesomeness_300.m3u8"
        },
        {
          "bandwidth": 600,
          "path": "awesomeness_600.m3u8"
        },
        {
          "bandwidth": 1200,
          "path": "awesomeness_1200.m3u8"
        }
      ],
      "headers": {
        "x-amz-acl": "public-read",
        "Cache-Control": "max-age=4"
      }
    }
  ]
}
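The max-age=4 above follows the half-segment rule: with 10-second segments, anything under 5 seconds keeps clients from holding onto a stale manifest for more than one segment’s worth of time. As a quick sketch (the helper name is my own, not part of the Zencoder API):

```python
def manifest_max_age(segment_seconds: int = 10) -> int:
    """Pick a Cache-Control max-age strictly under half the HLS segment
    duration. Hypothetical helper for illustration only."""
    half = segment_seconds // 2
    # Shave a second off to stay safely under the half-segment ceiling.
    return max(1, half - 1)

print(f"Cache-Control: max-age={manifest_max_age(10)}")  # max-age=4
```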

Just like in the original guide, copy and paste the request into the Request Builder. After you’ve replaced every instance of YOUR_S3_BUCKET with the bucket you used as the origin for your CloudFront distribution and, if applicable, replaced “s3” with your specific credential name, click “Execute”. Like before, you’ll see a response appear below the Request Builder containing the stream URL and name you’ll need for Flash Media Live Encoder.
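If you’d rather skip the Request Builder, you can POST the same JSON to Zencoder’s jobs endpoint yourself. A minimal sketch using only Python’s standard library; the helper name is mine, and YOUR_API_KEY is a placeholder for your own key:

```python
import json
import urllib.request

def build_zencoder_job(api_key: str, payload: dict) -> urllib.request.Request:
    """Prepare (but don't send) a job-creation request for the Zencoder API."""
    return urllib.request.Request(
        "https://app.zencoder.com/api/v2/jobs",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Zencoder-Api-Key": api_key,
        },
        method="POST",
    )

# To actually submit the job:
# req = build_zencoder_job("YOUR_API_KEY", job_payload)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))  # response includes the stream URL and name
```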


Once you start publishing a stream to the provided endpoint, you should be able to watch it from both your new CloudFront URL and directly from your bucket. If your bucket was “all-the-streams” and the CNAME you set up was “live.yourawesomedomain.com”, you could start watching the stream from http://all-the-streams.s3.amazonaws.com/master.m3u8 or http://live.yourawesomedomain.com/master.m3u8. If you didn’t set up a CNAME, you can just use the domain name of the CloudFront distribution, which would look something like this: d3gw31barqn5pc.cloudfront.net/master.m3u8.
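The playback URLs follow a predictable pattern, so they’re easy to generate. A tiny sketch using the example names from above (the function is purely illustrative):

```python
def playback_urls(bucket: str, master: str = "master.m3u8", cname: str = "") -> list:
    """List the URLs a viewer could use: direct-from-S3, plus the
    CloudFront CNAME if one was set up."""
    urls = ["http://%s.s3.amazonaws.com/%s" % (bucket, master)]
    if cname:
        urls.append("http://%s/%s" % (cname, master))
    return urls

print(playback_urls("all-the-streams", cname="live.yourawesomedomain.com"))
```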

When you’re finished, make sure to disconnect your encoder. On Flash Media Live Encoder, this is done by clicking the “Stop” button, but we’ll automatically end a stream if the encoder doesn’t reconnect within the reconnect time specified in the request (the default is 30 seconds).

Notes

S3 follows an eventual consistency model, meaning that given enough time, all updates can be expected to propagate throughout the system. For our live streaming use case, this means that there’s a chance that a segment might not be immediately available to viewers, which would cause the video to skip until it got to another available segment. Anecdotally, this has not been an issue during testing, but if there are lives at stake over the dependability of your live stream you may want to use EC2 instead of S3 to guarantee immediate availability.
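If a segment or manifest request does fail because of propagation lag, one player-side mitigation is to retry briefly before giving up. A generic sketch with an injected fetch function (all names are mine, purely illustrative):

```python
import time

def fetch_with_retry(fetch, attempts: int = 3, delay: float = 0.5):
    """Call fetch() up to `attempts` times (attempts >= 1), sleeping
    `delay` seconds between tries; re-raise the last error on failure."""
    last_err = None
    for i in range(attempts):
        try:
            return fetch()
        except OSError as err:  # urllib's URLError is an OSError subclass
            last_err = err
            if i < attempts - 1:
                time.sleep(delay)
    raise last_err
```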
