In a recent blog about the current state of WebRTC, I mentioned that readers should check out an excellent white paper about seven situations in which WebRTC would need server-side media processing. Written by Tsahi Levent-Levi, a leading blogger at BlogGeek.me and WebRTC activist, the white paper gives a great overview and unique point of view about why media servers matter.
In the next few weeks, I will explore some of the seven reasons from Tsahi’s white paper. Today we’ll dive into reason number six: processing of the media stream. You may be wondering why we would need a media server to do this. After all, smartphones are basically little computers—don’t they do all of the media processing?
No, they don’t. They can do some of it, but not all of it. Even when smartphones can do the processing, it’s better to use a cloud-based media server simply to preserve the device’s battery power. We all know that battery resources are golden.
Tsahi goes over a few use cases in the white paper, including one he calls “scale and compose multiple streams and media types into a single coherent view.” In its simplest form, this could mean text overlay: adding text to an existing video stream, perhaps for advertising purposes. Another use case involves having the media server analyze surveillance video streams to detect suspicious motion. An application would need to detect the motion and then trigger the appropriate response, which in this case might mean sending the media stream to a security guard’s mobile phone. Because that delivery involves transcoding, a media server would need to be involved.
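The white paper doesn’t prescribe a particular algorithm for the surveillance scenario, but a minimal frame-differencing sketch illustrates the kind of per-frame work a server-side media pipeline would take on. Everything here is an illustrative assumption: the function name, the threshold values, and the simulated frames are not from the white paper.

```python
import numpy as np

# Illustrative sketch only: detect motion between two grayscale video
# frames by counting how many pixels changed significantly.
# Threshold values below are arbitrary assumptions for the example.

MOTION_THRESHOLD = 0.02  # fraction of pixels that must change to flag motion


def motion_detected(prev_frame: np.ndarray, curr_frame: np.ndarray,
                    pixel_delta: int = 25) -> bool:
    """Return True if enough pixels changed between two grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_delta)
    return changed / diff.size > MOTION_THRESHOLD


# Simulated 64x64 grayscale frames: a static scene, then one where a
# bright region (a "moving object") appears.
static = np.zeros((64, 64), dtype=np.uint8)
moved = static.copy()
moved[10:30, 10:30] = 200

print(motion_detected(static, static))  # False: nothing changed
print(motion_detected(static, moved))   # True: ~10% of pixels changed
```

In a real deployment this comparison would run on decoded frames inside the media server, which would then transcode and forward the stream when motion is flagged; on a battery-powered handset, doing this for every frame would be exactly the kind of drain the post warns about.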
To read about more examples like this, check out Tsahi’s white paper, “Seven Reasons for WebRTC Server-Side Processing.”