Today’s Alternatives to RTMP for Getting Your Streams to the Internet

Originally developed as a proprietary protocol by Macromedia (now Adobe), RTMP was designed for real-time streaming of video, audio, and data between a server and Flash player. Although Adobe has announced that it will no longer support Flash, RTMP remains a commonly used protocol for live streaming within production workflows. In this post we’ll explore some of the challenges that RTMP presents for contribution streaming in “the first mile” – getting your stream from the live event to the cloud for distribution via CDN – and a few alternative protocols worth considering.

The reality is that RTMP simply isn’t scaling for today’s contribution challenges. Here’s why:

  • Support for RTMP is waning, and CDNs are decommissioning RTMP entry points in favor of HLS and MPEG-DASH.
  • RTMP does not support HEVC-encoded streams.
  • RTMP does not elegantly support advanced resolutions, partly because of the lack of HEVC support and partly because bandwidth limitations keep RTMP from being used at high bitrates.
  • Historically, RTMP has been difficult to provision through firewalls.
  • RTMP has issues with security, multiple language support, and ad insertion.

If RTMP has so many challenges, what are some of the alternative protocols available for transporting your streams to your CDN? Nowadays, your choice is really between HTTP-based protocols such as HTTP Live Streaming (HLS) and MPEG-DASH, or Secure Reliable Transport (SRT), the open source protocol.

Secure Reliable Transport

First, a quick primer for those who may not be familiar with SRT. Originally developed by Haivision, SRT is an open source standard used by thousands of organizations globally in a wide range of industry applications, from IP cameras, encoders, and decoders to gateways, OTT platforms, and CDNs. Adoption of the SRT protocol and membership of the SRT Alliance continue to go from strength to strength (now at over 200 members and counting), with industry giants like Avid, Brightcove, Deluxe, Global, Imagine, LTN, MediaKind, Microsoft, Net Insight, Red Bee and Telestream endorsing the protocol.

Let’s take a deeper dive into some of the fundamental differences between RTMP, HTTP-based protocols, and SRT.

Continuous vs. Fragmented Streaming

RTMP is a continuous streaming technology. Packets containing the fundamental stream structure of MPEG I-frames and P-frames are sent from the source to the destination as they are created. HTTP-based protocols (HLS and MPEG-DASH) rely on those streams being packaged as fragments (also called segments or chunks) and then sent across the network as file segments. Before leaving an encoding platform, a number of sequences of I-frames and P-frames are collected and packaged together as a file for transmission. This packaging process, although it accommodates the ubiquity of file transfers, slows down your stream and dramatically increases end-to-end latency. SRT, like RTMP, is a continuous streaming protocol designed for high-performance transmission and bandwidth optimization.
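
To put rough numbers on this, here is a back-of-envelope sketch (the figures are assumptions for illustration, not measurements): with two-second segments and a player that buffers three segments before starting playback, segmented delivery alone adds several seconds that a continuous protocol never incurs.

    # Back-of-envelope latency added by segmented delivery (assumed figures, not measurements).
    segment_duration_s = 2.0    # assumed segment length
    segments_buffered = 3       # assumed player buffer depth before playback starts
    packaging_delay_s = segment_duration_s                      # a segment must be complete before it can leave the encoder
    playback_buffer_s = segment_duration_s * segments_buffered
    print(f"Added latency of roughly {packaging_delay_s + playback_buffer_s:.0f} seconds")   # ~8 s with these figures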

Packet Loss

TCP/IP

Like RTMP, the HTTP-based protocols (including HLS and MPEG-DASH) rely on TCP/IP for handshaking and for identifying and replacing packets that are lost in transmission. TCP/IP demands a very tight sender-receiver relationship so that, regardless of network conditions, all lost packets are detected and retransmitted. Large buffers at either end of the stream workflow hold packets that may need to be retransmitted in the face of packet loss. The main problem with TCP/IP-based transmission for video streaming, however, is that retransmitting packets takes a very long time, adding dramatically to the latency of the video. Here’s how it works: the receiver acknowledges every packet, the sender maintains a large buffer of everything it has sent just in case, and any packet that fails to arrive within a particular time window is retransmitted.
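
As a rough sketch of that cost (the round-trip time below is an assumption), recovering a single lost packet takes on the order of one round trip to notice the gap and another to receive the resend, and because TCP delivers bytes strictly in order, every packet behind the missing one waits too:

    # Approximate stall caused by one lost packet over TCP (assumed round-trip time).
    rtt_s = 0.100               # assumed sender-receiver round-trip time
    detection_s = rtt_s         # best case: the gap is noticed via duplicate ACKs after about one RTT
    retransmission_s = rtt_s    # the packet is resent and its arrival awaited
    stall_s = detection_s + retransmission_s
    print(f"One loss can stall in-order delivery by roughly {stall_s * 1000:.0f} ms or more")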

A second major problem with TCP/IP is that, because of all the handshaking (and the related congestion control algorithms that tend to throttle down bandwidth), RTMP and HTTP streaming have a very difficult time with bandwidth utilization. Adequate for lower-bandwidth transmission over a few hops, TCP/IP-based transmission reaches its maximum bandwidth very quickly (sometimes in the 1.5 to 2 Mbps range), and once there the bandwidth starts to choke. In essence, both TCP/IP and stream fragments are your enemy when trying to establish a high-performance video stream.
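
A classic rule of thumb, the Mathis et al. approximation, makes that ceiling concrete: steady-state TCP throughput is bounded by roughly MSS / (RTT × √loss). The link figures below are assumptions, but they show how quickly a long path with even modest loss runs out of headroom:

    import math

    # Mathis et al. approximation of steady-state TCP throughput (assumed link figures).
    mss_bytes = 1460       # typical TCP maximum segment size
    rtt_s = 0.150          # assumed long-haul round-trip time
    loss_rate = 0.005      # assumed 0.5% packet loss
    ceiling_bps = (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))
    print(f"Throughput ceiling of roughly {ceiling_bps / 1e6:.1f} Mbps")   # about 1.1 Mbps with these figures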

UDP 

The UDP protocol works similarly to TCP, but without the error checking that slows things down. With UDP, “datagrams” (which are essentially packets) are sent to the recipient, and the sender does not wait to confirm that each one arrived; it simply sends the next. Sometimes referred to as a “best effort” service, UDP gives the recipient no guarantee of receiving every packet and no way to ask for a missed packet again. The upside of shedding all this processing overhead is that UDP is fast, which is why it is frequently used for time-sensitive applications such as online gaming or live broadcasts, where perceived latency is more critical than packet loss. UDP is a great protocol for streaming over pristine networks, like a LAN or dedicated private WAN connections such as MPLS.
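
A minimal sketch of that fire-and-forget behavior in Python (the destination address is a placeholder):

    import socket

    # Minimal UDP sender: no handshake, no acknowledgements, no retransmission.
    DEST = ("192.0.2.10", 5000)    # hypothetical receiver address
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(10):
        payload = seq.to_bytes(4, "big") + b"\x00" * 1316    # 1316 bytes holds 7 MPEG-TS packets, a common payload size
        sock.sendto(payload, DEST)    # returns immediately; delivery is never confirmed
    sock.close()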

SRT 

Haivision created SRT to deliver the performance of UDP over lossy networks with a high performance sender / receiver module – one that does not choke the network with useless acknowledgements so it can scale and maximize the available bandwidth. With video, timing is everything, so SRT guarantees that the packet cadence (compressed video signal) that enters the network is identical to the one that is received at the decoder, dramatically simplifying the decode process. SRT is designed for low latency video stream delivery over dirty networks.
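
Conceptually, the trick is a fixed latency budget on the receive side. The toy model below is only a sketch of the idea, not Haivision’s implementation: each packet is released to the decoder at its original send time plus the budget, which preserves the sender’s cadence, and a lost packet is re-requested only while a resend could still arrive in time; anything later is skipped so playback never stalls.

    # Toy model of a fixed receive-latency window (conceptual sketch, not SRT's actual code).
    LATENCY_BUDGET_S = 0.120    # assumed configured link latency

    def release_time(send_ts: float) -> float:
        """When the receiver hands a packet to the decoder, preserving the sender's cadence."""
        return send_ts + LATENCY_BUDGET_S

    def worth_retransmitting(send_ts: float, now: float, rtt_s: float) -> bool:
        """Ask for a lost packet again only if the resend can still beat its deadline."""
        return now + rtt_s < release_time(send_ts)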

SRT provides some additional features specifically designed for the video streamer. It natively provides independent AES encryption, so stream security is managed at the link level. It also lets users easily traverse firewalls throughout the workflow by supporting both caller and listener modes, so either end of the link can initiate the connection (in contrast to RTMP and HTTP, which support only a single mode). SRT can also bring together multiple video, audio, and data streams within a single SRT stream to support highly complex workflows without burdening network administrators.
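
For a sense of how these options surface in practice, libsrt-based tools such as srt-live-transmit and GStreamer’s SRT elements accept them as URL parameters; the host, port, and passphrase below are placeholders:

    # Illustrative SRT URLs (placeholder host, port, and passphrase).
    # A listener waits for the connection; a caller dials out, which is what lets an
    # encoder behind a firewall reach its destination without inbound firewall rules.
    # latency is in milliseconds here; pbkeylen=32 selects AES-256. Check your tool's
    # documentation, as option names and units can vary between implementations.
    listener_url = "srt://0.0.0.0:9000?mode=listener&latency=120&passphrase=CHANGE_ME&pbkeylen=32"
    caller_url = "srt://ingest.example.com:9000?mode=caller&latency=120&passphrase=CHANGE_ME&pbkeylen=32"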

Within the sender / receiver module, SRT can detect network performance with regard to latency, packet loss, jitter, and available bandwidth. Advanced SRT integrations can use this information to guide stream initiation, or even to adapt the endpoints to changing network conditions, as we have done with Network Adaptive Encoding on Haivision’s Makito X series of video encoders.
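
As a conceptual sketch of that feedback loop (not the Makito X algorithm itself), an adaptive encoder might back off quickly when the link reports loss or shrinking headroom, and creep back up when conditions recover:

    # Conceptual sketch of network-adaptive encoding (not Haivision's actual algorithm).
    def next_bitrate(current_bps: float, loss_rate: float, available_bps: float,
                     floor_bps: float = 500_000) -> float:
        """Pick the next encoder bitrate from link statistics reported by the transport."""
        if loss_rate > 0.01 or current_bps > 0.8 * available_bps:
            return max(floor_bps, current_bps * 0.85)          # back off quickly under stress
        return min(0.8 * available_bps, current_bps * 1.02)    # recover slowly when the link is clean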

Saving the best for last, SRT is payload agnostic, open source, and royalty free. With RTMP fading away, and with DASH and HLS best suited to stream distribution from your CDN to desktops and mobile devices, SRT is the clear winner for “first mile” contribution streaming.

RTMP vs. SRT

We recently put RTMP and SRT to the test to see exactly how they compare. To see the results and discover how the two protocols stack up against each other in latency and bandwidth tests, download our white paper, RTMP vs. SRT.
