Streaming video to iPad / iPod / iPhone in Bash - cheap and cheerful

    Hello, dear Habr readers!

    In this short article I want to share my experience of building an online broadcasting system for the devices of a certain "fruit company" :).




    To let mobile users fully enjoy streaming video, Apple proposed a rather simple approach: the video stream is cut into small segments, which the device plays one after another, giving the user the illusion of continuous video.

    The segments can be served over both HTTP and HTTPS - it is enough to write them to a directory on any web server in a timely manner and to update the playlist that describes them.

    Although the segments are delivered over a protocol with no real-time transport control (unlike RTSP / RTP / RTMP), this approach has several advantages: even a schoolkid can build a distributed system for serving static content, and (in my opinion, the main benefit) there is no need to jump through hoops to get those protocols working through NAT or a proxy.

    The Apple developer documentation has a picture that clearly explains how this works (although the iPad itself is not drawn in it):

    [image: diagram of Apple's HTTP Live Streaming architecture]

    The most important requirements in this approach are that the server responsible for converting the video, first, keeps up with real time (transcoding faster than 25 frames per second), and second, has a good, stable connection to the nodes that serve the static content.
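A quick back-of-the-envelope check of what "keeping up" implies (the 10-second slice length and 800 kbps bitrate are the figures used later in this article):

```shell
# A 10 s slice must be encoded in under 10 s of wall-clock time, and its
# size is roughly duration * bitrate / 8.
SLICE_SECONDS=10
VIDEO_KBPS=800
SIZE_KB=$(( SLICE_SECONDS * VIDEO_KBPS / 8 ))  # kilobytes of video payload
echo "${SIZE_KB} KB per slice"                 # ~1 MB of video; audio and
                                               # MPEG-TS overhead add the rest
```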

    When one of our customers (Jurnal TV, a television channel well known in Moldova and Romania) asked us to implement such a broadcasting system for iPhone / iPad / iPod on the MDX network (a high-speed national network to which all providers are connected and within which traffic is unmetered), we had a choice:
    1. Use a ready-made system (I will not name the vendors because of NDAs) costing from 10,000 euros and up to the horizon (depending on the bells and whistles included in the software): a hardware-software complex consisting of one server plus software that distributes static content to end nodes (edge servers) - which, of course, are not included in the price.
    2. Implement such a system ourselves, especially since we had several spare diskless servers - with very fast processors and plenty of RAM - of the kind we use for regular webcasting (with VLC, also over HTTP, by the way - if it's interesting, I'll write about that too).
    Since we are not looking for easy ways, and it made no sense for the client to spend big money on a new system, we chose the second option.


    What we had:
    1. Unlimited access to the video signal in any form we wanted; we chose SDI.
    2. An SDI-to-DV converter, which the system sees as an ordinary IEEE 1394 device, better known as FireWire.
    3. A diskless server with a 4-core Xeon on board, running Ubuntu Maverick.


    In short, the system's algorithm is as follows:
    1. Capture a video slice 10 seconds long (per Apple's recommendations).
    2. Convert it to the required format (H.264/MPEG-4 video in an MPEG-2 transport stream container).
    3. Update the playlist.
    4. Go back to step 1.
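The loop above can be sketched as a tiny self-contained simulation. All paths here are temporary stand-ins, and a simple `mv` replaces the real conversion step; the actual script further down uses dvgrab and ffmpeg:

```shell
#!/bin/bash
# Minimal simulation of the four steps: slices arrive, get "converted",
# and a sliding-window playlist is regenerated after each one.
RAW=$(mktemp -d)   # stands in for the raw-slice directory
OUT=$(mktemp -d)   # stands in for the converted-slice directory
M3U="$OUT/live.m3u"
WINDOW=3           # how many segments the playlist advertises
SEGMENTS=()        # all converted segments currently on disk

update_m3u() {
    local len=${#SEGMENTS[@]}
    local first=$(( len > WINDOW ? len - WINDOW : 0 ))
    {
        echo "#EXTM3U"
        echo "#EXT-X-TARGETDURATION:10"
        echo "#EXT-X-MEDIA-SEQUENCE:$first"
        local i
        for (( i = first; i < len; i++ )); do
            echo "#EXTINF:10,"
            echo "${SEGMENTS[$i]}"
        done
    } > "$M3U"
}

# Simulate five 10-second slices arriving and being "converted" (mv).
for n in 0 1 2 3 4; do
    touch "$RAW/slice-$n.dv"                 # step 1: a raw slice appears
    mv "$RAW/slice-$n.dv" "$OUT/live-$n.ts"  # step 2: "convert" it
    SEGMENTS+=("live-$n.ts")
    update_m3u                               # step 3: refresh the playlist
done                                         # step 4: repeat

cat "$M3U"
```

After the fifth slice the playlist lists only `live-2.ts` through `live-4.ts`, with `#EXT-X-MEDIA-SEQUENCE:2` telling the player how many segments have already slid out of the window.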


    Now, how these steps were implemented.

    We decided to capture video slices of the required duration with the dvgrab utility - it had already proved itself in round-the-clock operation in the video archive system of the same TV channel. Naturally, the 10-second raw slices are written straight to RAM, onto a RAM disk: 10 seconds of uncompressed video takes 35 megabytes, while a compressed fragment takes about 1.2 megabytes at a bitrate of 800 kbps.
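For illustration, the capture setup might look something like the following. The paths and tmpfs size are assumptions, and dvgrab flag semantics vary between versions, so treat this as a sketch and check your man page:

```shell
# Assumed setup: a tmpfs RAM disk for the raw slices (hypothetical size).
mkdir -p /tmp/DV /tmp/MP4
mount -t tmpfs -o size=512m tmpfs /tmp/DV

# Capture DV over FireWire, starting a new file every 250 frames
# (10 seconds at PAL's 25 fps); dvgrab appends a sequence number
# to the given basename.
dvgrab --format raw --frames 250 /tmp/DV/live-
```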

    The slices are converted with ffmpeg - it, too, long ago earned a permanent place in the same TV archive system thanks to its versatility. As the codec we use x264, a free implementation of H.264.
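A note for readers trying this today: the `-vpre` preset files used in the script below were removed from ffmpeg long ago. A roughly equivalent modern invocation (an untested sketch with hypothetical file names, keeping the script's parameters) would be:

```shell
# Hypothetical modern equivalent of the script's conversion step:
# -vpre preset files became -preset / -profile:v, the old -deinterlace
# switch became the yadif filter, and stream-specific bitrates use -b:v/-b:a.
ffmpeg -y -i /tmp/DV/live-0001.dv \
    -c:a aac -ac 1 -ar 48000 -b:a 96k \
    -c:v libx264 -profile:v baseline -preset fast \
    -b:v 800k -maxrate 800k -bufsize 800k -g 5 -keyint_min 5 \
    -s 512x256 -aspect 16:9 -vf yadif \
    -f mpegts /tmp/MP4/live-0.ts
```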

    The system itself - which watches for new video fragments to arrive, starts the conversion, and updates the playlist (the playlist presents a sliding "window": only 3 fragments are listed in it, while 10 are kept on disk) - was written in Bash.
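For illustration, the generated live.m3u for such a three-segment window looks roughly like this (segment names follow the script's `live-N.ts` pattern; the media sequence number tells the player how far the window has slid):

```text
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:4
#EXTINF:10,
http://istream.jurnaltv.md/live/live-4.ts
#EXTINF:10,
http://istream.jurnaltv.md/live/live-5.ts
#EXTINF:10,
http://istream.jurnaltv.md/live/live-6.ts
```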

    Here is the code. It may not be optimal - there is room for tuning and modification (for example, you could produce 2-3 streams at different bitrates) - but it works, and with this approach no separate segmenter utility is needed. Unfortunately, the stream is available only to those connected to MDX, i.e. only to users in Moldova, but judging by feedback from the thousand-odd users of the service, they enjoy "carrying a small TV around with them".

    #!/bin/bash
    #set -x

    VIDEO_FILES=( ); # array to store all available *.ts files at the moment
    VIDEO_FILES_MAX=10; # how many elements can be stored in $VIDEO_FILES array
    LIST_LEN=0; #*.ts list length

    VIDEO_WINDOW=""; # current video files window (not used below)
    VIDEO_WINDOW_LEN=3; # how many files we are storing in the window

    LAST_CONVERTED=0; # ID of last converted video slice

    RAW_SLICES_PATH="/tmp/DV/"; # where to look for raw video slices
    MP4_SLICES_PATH="/tmp/MP4/"; # where to place converted chunks
    MP4_SLICES_WEBPATH="http://istream.jurnaltv.md/live/"; # web path from the user's POV
    SLICE_DURATION=10; # seconds, 10-15 seconds recommended by Apple
    M3U_FILE_NAME="/tmp/MP4/live.m3u"; # full path to the m3u index file

    FFMPEG_CMD="/usr/local/bin/ffmpeg -y -i ";

    update_m3u() {
        # updating number of elements
        LIST_LEN=${#VIDEO_FILES[@]};
        echo "Number of elements in array is: $LIST_LEN ";
        echo -n "(";
        for slice in ${VIDEO_FILES[@]}
        do
            echo -n "${slice} ";
        done
        echo ")";
        echo;
        # getting last $VIDEO_WINDOW_LEN files from array
        let LAST_IDX=LIST_LEN-VIDEO_WINDOW_LEN;
        if [ $LAST_IDX -le 0 ]
        then
            LAST_IDX=0;
        fi
        echo "Last index we must use is $LAST_IDX";
        # recreating m3u file
        # getting slice id from $LAST_CONVERTED
        SLICE_ID=0;
        let SLICE_ID=LAST_CONVERTED-VIDEO_WINDOW_LEN;
        if [ $SLICE_ID -le 0 ]
        then
            SLICE_ID=0;
        fi
        echo "------------- DUMP START ------------- ";
        echo "#EXTM3U" > $M3U_FILE_NAME;
        echo "#EXT-X-TARGETDURATION:$SLICE_DURATION" >> $M3U_FILE_NAME;
        echo "#EXT-X-MEDIA-SEQUENCE:$SLICE_ID" >> $M3U_FILE_NAME;
        i=$LAST_IDX;
        while [ $i -lt $LIST_LEN ]; do
            echo "#EXTINF:${SLICE_DURATION}," >> $M3U_FILE_NAME;
            echo "${MP4_SLICES_WEBPATH}${VIDEO_FILES[${i}]}" >> $M3U_FILE_NAME;
            let i++;
        done
        echo "------------- DUMP END ------------- ";

        # if array length is greater than $VIDEO_FILES_MAX - remove first
        # element and compact array: array=( "${array[@]}" )
        if [ $LIST_LEN -ge $VIDEO_FILES_MAX ]
        then
            echo "Packing array by removing first element";
            echo ${MP4_SLICES_PATH}${VIDEO_FILES[0]};
            rm -f ${MP4_SLICES_PATH}${VIDEO_FILES[0]};
            unset VIDEO_FILES[0];
            VIDEO_FILES=( "${VIDEO_FILES[@]}" );
        fi
        echo "-------";
    }

    # gracefully handle SIGTERM
    on_sigterm() {
    echo "Got sigterm, exiting!";
    RUN="0";
    }

    trap 'on_sigterm' TERM

    # cleanup source and converted folders
    rm -f ${RAW_SLICES_PATH}*.dv;
    rm -f ${MP4_SLICES_PATH}*.ts;

    # forever do
    # convert video
    # move to MP4
    # erase original
    # add converted to the tail of array
    # update live.m3u file for $VIDEO_WINDOW_LEN files
    # if array len > $VIDEO_FILES_MAX
    # then remove the first element from the array and compact it
    # forever end

    RUN="1";
    raw_slice="";

    while [ "$RUN" -eq 1 ]; do
        # getting oldest file from the list of slices
        raw_slice=`ls -tr ${RAW_SLICES_PATH} | head -1`;
        if [ "$raw_slice" != "" ];
        then
            # make sure dvgrab has finished writing the file
            OPEN_FLAG=`lsof | grep "$raw_slice" | wc -l`;
            if [ $OPEN_FLAG -eq 0 ];
            then
                # converting video
                echo "Converting ${raw_slice}" >> /tmp/istream.txt
                #sleep 6; # simulating transcoding delay
                mp4_slice="live-${LAST_CONVERTED}.ts";
                $FFMPEG_CMD ${RAW_SLICES_PATH}${raw_slice} -acodec libfaac -ac 1 -ar 48000 -ab 96k -vcodec libx264 -vpre baseline -vpre fast -vpre ipod640 -b 800k -g 5 -async 25 -keyint_min 5 -s 512x256 -aspect 16:9 -bt 100k -maxrate 800k -bufsize 800k -deinterlace -f mpegts ${MP4_SLICES_PATH}${mp4_slice}
                rm -f ${RAW_SLICES_PATH}${raw_slice}
                LIST_LEN=${#VIDEO_FILES[@]};
                VIDEO_FILES[${LIST_LEN}]=$mp4_slice;
                # generating m3u file
                let LAST_CONVERTED++;
                update_m3u;
            else
                sleep 1; # sleep one second
                echo "Waiting for file to be closed!";
            fi
        else
            sleep 1; # sleep one second
            echo "Sleeping!";
        fi
    done







    I will be happy to answer community questions.

    PS Thanks to our office manager Tatyana for agreeing to pose with the tablet, and to the marketing director for working as a photographer :).

