A video scroller and understanding how time is represented in Objective-C



Hello, Habr!

In this article I want to share my experience working with video in one of my recent iOS projects. I won't go into details; I will describe only one task that I could not solve by searching Habr, GitHub, and the rest of the Internet. The task was as follows: make a scroller for video, and not a simple one, but one like in the standard iOS 7 gallery.



The standard MPMoviePlayerViewController component was used to play the video, and it already supports seeking to any position, so the main task was to extract frames from the video at regular intervals and lay them out on a UIView so that each one sits approximately under the corresponding position in the video. Running a little ahead, I will say that along the way I had to solve a couple more problems: stuttering when generating images from video on the iPad, and different slider lengths in the vertical and horizontal orientations of the device.

So, for starters, we need to figure out how to get images out of the video, and AVAssetImageGenerator will help us with this. This class exists specifically to extract an image from an arbitrary point in a video. We will assume that our test file is located in the app's Documents folder and is called test.mov:

NSString *filepath = [NSString stringWithFormat:@"%@/Documents/test.mov", NSHomeDirectory()];
NSURL *fileURL = [NSURL fileURLWithPath:filepath];


An example of using AVAssetImageGenerator:

// create an asset from the file URL
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:fileURL options:nil];
// create the image generator
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
// the time of the frame we want: 1/2, i.e. half a second into the video
CMTime time = CMTimeMake(1, 2);
// copy the frame; per the Copy rule, we own the returned CGImageRef
CGImageRef imageRef = [generator copyCGImageAtTime:time actualTime:nil error:nil];
// wrap it in a UIImage and release the CGImageRef to avoid a leak
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);


I hadn't come across CMTime before, and in order to divide the time into equal intervals, it would not hurt to understand what this data structure is.

CMTimeMake takes two arguments: value and timescale. I have read the official documentation and want to explain in simple words what these arguments mean.
First, timescale is the number of parts each second is divided into. With this argument you specify the precision with which a point in time can be addressed. For example, if timescale is 10, you can address time with an accuracy of 1/10 of a second.
In turn, value specifies the desired moment in units of the given timescale. For example, we have a video 60 seconds long and a timescale of 10; to get to the 30-second mark, value should be 300.
To better understand the representation of time with CMTime, note that the number of seconds at a given moment is value / timescale. In the previous example, 30 seconds is 300/10. Once converting time from seconds to CMTime and back is clear, there should be no problems with this structure.
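To make the conversion concrete, here is a small illustration using the CoreMedia helpers CMTimeGetSeconds and CMTimeMakeWithSeconds (the numbers are taken from the example above):

// 300 / 10 = 30 seconds
CMTime time = CMTimeMake(300, 10);
// convert CMTime to seconds: 30.0
Float64 seconds = CMTimeGetSeconds(time);
// and back from seconds to a CMTime with a timescale of 10
CMTime sameTime = CMTimeMakeWithSeconds(seconds, 10);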

Moving on: now we need to know the duration of the video. It's quite simple, because the asset object created earlier already has the property we need.

CMTime duration = asset.duration;


Well, we now have everything we need to cut the video into a bunch of images. The question is how many of them are needed in portrait and landscape orientation. The first thing to pay attention to is the height of the scroller in the standard gallery on the iPhone and iPad: it is almost the same, only the width differs. It is not hard to guess that the number of images equals the width of the slider divided by the width of one image. I decided to make square thumbnails of 29x29 points. There is one subtle point here: the generator's maximum size is specified in pixels, not points, so on a Retina screen the value will be 58x58.

generator.maximumSize = CGSizeMake(58.0, 58.0);
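For illustration, the thumbnail count could be derived directly from the slider width like this (a sketch; sliderWidth is a hypothetical variable, and in practice I simply fixed the counts in defines, as shown below):

// a sketch: how many 29-point-wide thumbnails fit into the slider
CGFloat sliderWidth = 290.0; // hypothetical slider width in points
NSInteger thumbnailsCount = (NSInteger)(sliderWidth / 29.0);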


For simplicity and convenience, I put the number of images into defines:

#define iPad (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
#define ThumbnailsCountInPortrait (iPad ? 25 : 10)
#define ThumbnailsCountInLandscape (iPad ? 38 : 15)
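A quick sketch of picking the right count for the current orientation (assuming, as was common on iOS 7, that we check the status bar orientation):

// choose the thumbnail count for the current interface orientation
UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
NSInteger count = UIInterfaceOrientationIsLandscape(orientation)
                  ? ThumbnailsCountInLandscape
                  : ThumbnailsCountInPortrait;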


Now everything is ready for generating the images. I made two separate arrays, because the frames pulled from the video differ between portrait and landscape orientation.

NSMutableArray *portraitThumbnails = [NSMutableArray array];
NSMutableArray *landscapeThumbnails = [NSMutableArray array];
// generate portrait thumbnails
for (NSInteger i = 0; i < ThumbnailsCountInPortrait; i++) {
    CMTime time = CMTimeMake(duration.value / ThumbnailsCountInPortrait * i, duration.timescale);
    CGImageRef oneRef = [generator copyCGImageAtTime:time actualTime:nil error:nil];
    [portraitThumbnails addObject:[UIImage imageWithCGImage:oneRef]];
    // we own the copied CGImageRef, so release it to avoid a leak
    CGImageRelease(oneRef);
}
// generate landscape thumbnails
for (NSInteger i = 0; i < ThumbnailsCountInLandscape; i++) {
    CMTime time = CMTimeMake(duration.value / ThumbnailsCountInLandscape * i, duration.timescale);
    CGImageRef oneRef = [generator copyCGImageAtTime:time actualTime:nil error:nil];
    [landscapeThumbnails addObject:[UIImage imageWithCGImage:oneRef]];
    CGImageRelease(oneRef);
}


I don't think it is worth explaining here how to lay the resulting images out in a row on a UIView, let alone how to pick them from the right array for each device orientation. There is really nothing complicated about it, and all of it can be seen in the finished example; still, a minimal sketch follows below.
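A minimal layout sketch (not the exact code from the example; thumbnailsView and thumbnails are assumed names):

// lay the thumbnails out side by side on a plain UIView
CGFloat x = 0.0;
for (UIImage *thumb in thumbnails) {
    UIImageView *imageView = [[UIImageView alloc] initWithImage:thumb];
    imageView.frame = CGRectMake(x, 0.0, 29.0, 29.0);
    [thumbnailsView addSubview:imageView];
    x += 29.0;
}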

Lastly, I would like to talk about how I solved the performance problem. Because the slider is initialized when the controller loads, the push animation to that controller stutters. The simplest solution is dispatch_async. This extremely useful function executes the contents of a block asynchronously on a background queue, without slowing the application down.

Usage example:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // heavy work: generate the thumbnails on a background queue
    [videoScroller initializeThumbnails];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI work must happen on the main queue
        [videoScroller loadThumbnails];
    });
});


I think it's clear that videoScroller is our object, which prepares its data in the background and then loads it; the nested dispatch to the main queue is needed because UIKit may only be touched from the main thread.

The complete working example is available here: https://github.com/iBlacksus/BLVideoScroller

P.S.
This is my first article; if it turns out to be interesting to Habr readers, I am ready to keep sharing my experience. In particular, I plan to write an article on creating a slider that lets you pick a text color from an arbitrary palette, which is just an image.
