The Transformation of Transcoding


Workflow Changes
I’ll also briefly mention a few workflow changes that are emerging.

WATCH FOLDERS: SO YESTERDAY
Speed is queen to quality’s king. But today’s best transcoding solutions also include a jack of all trades: analytics.

In order to eliminate human or high-touch bottlenecks, top-shelf transcoding solutions offer analysis, transcoding, and distribution. The latter two have been part of desktop transcoding for more than a decade, but even today's desktop transcoding solutions lack the analysis function that most mid- and high-level transcoding solutions offer: the ability to move beyond rigidly defined watch folders to ad hoc analysis of content, with subsequent transcoding based on the analysis outcome.

If that sounds confusing, here’s an attempt at a simple explanation of the difference between watch folders and analysis:
Watch folders have been used, from the outset, to perform a particular set of transcoding functions. The traditional approach was to set up a watch folder named “web” and define the transcoding to convert any file added to the folder into one or more web formats.

To work properly, though, all original content added to a single watch folder had to share similar traits: a supported codec from which to transcode the output, a similar pixel size, a similar frame rate, and either interlaced or progressive content (but not both).
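That homogeneity requirement can be sketched as a simple pre-flight check. The trait names and metadata dicts below are hypothetical; in practice the values would come from a media probe such as ffprobe:

```python
# Sketch: verify that clips destined for one watch folder share the traits
# the folder's transcode preset expects (codec, pixel size, frame rate, and
# scan type). The metadata dicts are invented for illustration.

REQUIRED_TRAITS = ("codec", "width", "height", "fps", "scan")

def compatible(clips):
    """Return True if every clip shares the same value for each required trait."""
    if not clips:
        return True
    reference = {t: clips[0][t] for t in REQUIRED_TRAITS}
    return all(all(c[t] == reference[t] for t in REQUIRED_TRAITS) for c in clips)

batch = [
    {"codec": "dv",   "width": 720,  "height": 480, "fps": 29.97,  "scan": "interlaced"},
    {"codec": "dv",   "width": 720,  "height": 480, "fps": 29.97,  "scan": "interlaced"},
    {"codec": "h264", "width": 1280, "height": 720, "fps": 23.976, "scan": "progressive"},
]
print(compatible(batch))      # mixed batch -> False
print(compatible(batch[:2]))  # homogeneous batch -> True
```

A check like this is exactly what classic watch folders lacked: nothing stopped a user from dropping a mismatched file into the folder.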

Mixing and matching content in the same watch folder was a near guarantee of job failure, which often affected all of the remaining original files in the folder: if the failure occurred on the topmost original video file, a transcoding job might shut down immediately. Even on systems that skipped failed jobs and continued transcoding the remaining video files, the task of manually finding and deleting the errant output files was daunting for even medium-level workloads.

Enter the analysis segment, accompanied by a user-friendly graphical user interface (GUI), of which Inlet’s Armada and Telestream’s Vantage are prime examples. In today’s robust transcoding systems, a GUI can be used to lay out a visual workflow of files from one or multiple watch folders.

Do you need to analyze original files to see if they are progressive or interlaced content? Add a box in the visual workflow that is linked to two other boxes, each with a possible outcome (progressive or interlaced).

Want to further parse the progressive content into 24 frames per second (fps) versus 25 fps? Add a second pair of boxes to the visual workflow, one for 24 fps content and another for 25 fps content.

Finally, add one transcode step for each of the decision boxes, as well as a location box for each of the output files, and suddenly the watch folder can handle a much wider variety of content, with intelligence built into the workflow.
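The decision-box workflow described above can be sketched as a routing function. The preset names and clip metadata are hypothetical placeholders, not anything from a specific product:

```python
# Sketch of the visual workflow: branch on scan type first, then frame rate,
# and return the transcode preset (the "box") each clip should flow into.

def route(clip):
    """Mimic the analysis boxes: interlaced vs. progressive, then 24 vs. 25 fps."""
    if clip["scan"] == "interlaced":
        return "deinterlace-then-web"
    if clip["fps"] == 24:
        return "web-24p"
    if clip["fps"] == 25:
        return "web-25p"
    return "review-manually"  # anything the workflow has no box for

clips = [
    {"name": "promo.mov", "scan": "progressive", "fps": 24},
    {"name": "news.mxf",  "scan": "interlaced",  "fps": 25},
    {"name": "pal.mov",   "scan": "progressive", "fps": 25},
]
for clip in clips:
    print(clip["name"], "->", route(clip))
```

Each `if` branch corresponds to one pair of boxes in the GUI; adding another analysis step is just another branch.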

LOGS AND PARTIAL JOB SUBMISSIONS
Once the workflow is run and the files are distributed, all transcoding systems will generate log files. Some solutions offer an easy way to parse these log files into CSV or other manipulable reports, gathering information from the database and presenting it in a report format. Others are stuck in a mode where the log files are XML only—great for integrating into a third-party billing solution but not so easy for the average user to gather data from.
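For users stuck with XML-only logs, flattening them into CSV is straightforward with standard tooling. The element and attribute names below are invented for illustration; every transcoder's log uses its own schema:

```python
# Sketch: flatten a hypothetical XML job log into a CSV report using only
# the Python standard library (xml.etree.ElementTree and csv).

import csv
import io
import xml.etree.ElementTree as ET

LOG_XML = """
<jobs>
  <job id="101" source="promo.mov" preset="web-24p" status="complete" seconds="84"/>
  <job id="102" source="news.mxf" preset="deinterlace-then-web" status="failed" seconds="12"/>
</jobs>
"""

def log_to_csv(xml_text):
    """Turn each <job> element's attributes into one CSV row."""
    rows = [job.attrib for job in ET.fromstring(xml_text)]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["id", "source", "preset", "status", "seconds"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

print(log_to_csv(LOG_XML))
```

The resulting CSV opens directly in a spreadsheet, which is the kind of report-friendly output the better solutions generate for you.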

All GUI-based systems that were tested, though, have the ability to visually alert the user to particular failures, whether at the analysis, transcoding, or distribution step. Vantage, announced at NAB and now shipping, also has the ability to restart a failed job identified in the middle of an otherwise complete workflow: should only one transcode in a multiple-transcode scenario fail, a user can restart just that one transcode without having to run the entire job again.
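The idea behind partial resubmission can be sketched in a few lines. The job structure and `transcode` callback here are hypothetical, not Vantage's actual API:

```python
# Sketch of partial-job resubmission: rerun only the outputs that failed,
# leaving completed outputs untouched.

def resubmit_failures(job, transcode):
    """Rerun each failed output through `transcode`; completed ones are kept."""
    for output, status in job.items():
        if status == "failed":
            job[output] = transcode(output)
    return job

job = {"web-24p": "complete", "mobile-3gp": "failed", "hd-mp4": "complete"}
fixed = resubmit_failures(job, lambda output: "complete")
print(fixed)  # only mobile-3gp was rerun
```

The point is the bookkeeping: because the system tracks status per output rather than per job, it can resubmit one transcode instead of the whole workflow.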

CPU TAX
When it comes to scaling up transcoding resources, one consistent finding in all tests was that adding an additional transcoding node did not double the throughput, nor did adding two nodes triple the throughput. This was more apparent with short content durations; as content lengths moved out beyond 5 minutes, though, the majority of solutions were able to use raw computing power to reach a break-even point at which content was then transcoded in real time for almost all of the multiple outputs.

Several solutions have been proposed to scale enterprise- and carrier-class transcoding products to a point where all content is generated in faster-than-real-time transcoding. These solutions involve ways to eliminate the heavy “CPU tax” from overhead processes inherent to general purpose operating systems, and they hold promise for a linear upswing in throughput on purpose-built transcoding systems.
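The sub-linear scaling described above can be modeled back-of-the-envelope with Amdahl's law, treating the "CPU tax" as a non-parallelizable fraction of each job. The 10% overhead figure is an assumption for illustration, not a measured value from the tests:

```python
# Amdahl's-law model of transcoding-node scaling: if a fixed fraction of
# each job is serial overhead ("CPU tax"), extra nodes give diminishing
# returns. The 10% overhead here is an illustrative assumption.

def speedup(nodes, overhead=0.10):
    """Speedup over one node given a serial-overhead fraction."""
    return 1.0 / (overhead + (1.0 - overhead) / nodes)

for n in (1, 2, 3, 4):
    print(f"{n} node(s): {speedup(n):.2f}x throughput")
```

With 10% overhead, a second node yields roughly 1.8x rather than 2x, which matches the observed pattern; eliminating the overhead is what would restore the linear upswing those purpose-built systems promise.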
