
Slit photography: an implementation in bash (ffmpeg + imagemagick)
I don't remember what or why I was searching the Internet a few days ago, but I came across an interesting article with unusual photos, and later another article that described a Python implementation of the algorithm for creating such photos. After reading them I got interested in the topic and decided to spend the evenings of the May holidays productively, namely by implementing the algorithm for "converting" video into a slit photo. Not in Python, though, but with improvised means in bash. But first things first.
What is a slit photo
A slit photo is a kind of photograph that captures not one event at one particular moment in time, but many moments at once. It works because a slit camera takes frames one pixel wide (this is the "slit") and glues them into a single photo. That sounds a bit confusing, and it is hard to imagine what such a picture looks like. The most intelligible explanation for me was a comment on one of the articles mentioned above from the user Stdit:

After that, everything becomes clear.
Example for clarity:

Slit photography algorithm
- Split the video into a sequence of images.
- Crop each image to a strip one pixel wide at a given horizontal offset (the slit).
- Glue the resulting strips into a single image.
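For example (the numbers are mine, purely for illustration): a 1920x1080 clip of 300 frames with the slit at x = 960 produces 300 strips of 1x1080 px, which are glued into a 300x1080 px photo.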
It sounds straightforward and simple.
Given
- Xiaomi Yi Camera
- A desire to figure out how this works and take a few unusual photos
- A couple of evenings of free time
Solution
The first and simplest thing that comes to mind is to write a bash script that processes the video and the resulting images according to the steps of the algorithm described above. To implement the plan I needed ffmpeg and imagemagick. In simplified form, the script in pseudo-bash looks like this:
# Split the video into one PNG image per frame
ffmpeg -i "$videoFile" frame-%d.png
# Crop a one-pixel-wide vertical strip from each frame
for ((i = 1; i <= framesCount; i++)); do
    convert frame-$i.png -crop 1x${frameHeight}+${slitShift}+0 slit-$i.png
done
# Glue the strips side by side into one photo
montage "slit-%d.png[1-$framesCount]" -tile ${framesCount}x1 -geometry +0+0 "$outputImage"
Let's see what happens here.
- First, the ffmpeg utility splits the video into a sequence of images named frame-1.png ... frame-n.png.
- Second, the convert utility from the imagemagick package crops each image (the -crop switch) so that width == 1 px and height == the image height, at the given horizontal offset of the slit, and saves the result to files named slit-1.png ... slit-n.png.
- Third, the montage utility from the imagemagick package glues the resulting strips into one photo. The -tile switch says to arrange them on a "framesCount across, 1 down" grid, that is, to collect all the strips into a single row.
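One thing the pseudo-script glosses over is where framesCount and frameHeight come from. A simple way to obtain them after the ffmpeg step (a sketch of mine; the actual script may do it differently) is to count the extracted frames and query the first one with imagemagick's identify utility:
framesCount=$(ls frame-*.png | wc -l)             # number of frames ffmpeg extracted
frameHeight=$(identify -format "%h" frame-1.png)  # frame height in pixels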
Result
Over a couple of evenings a script was written that takes a video file as input and produces a photo as output. In theory, you can feed it video in any format that ffmpeg supports, and the output file can be in any format that imagemagick supports.
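As an aside, the long --key=value options the script accepts (shown below) can be parsed in plain bash roughly like this (a minimal sketch of mine; the actual script in the repository may handle its arguments differently):
# Collect --input=..., --output=... and --slit-shift=... values
for arg in "$@"; do
    case "$arg" in
        --input=*)      input="${arg#*=}"     ;;
        --output=*)     output="${arg#*=}"    ;;
        --slit-shift=*) slitShift="${arg#*=}" ;;
        *) echo "unknown option: $arg" >&2; exit 1 ;;
    esac
done
Here ${arg#*=} strips everything up to and including the first "=", leaving just the value.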
Using the script is very simple:
./slitcamera.sh --input=test.avi --output=test.png --slit-shift=100
where input is the video file to process, output is the name of the resulting file, and slit-shift is the horizontal offset of the slit. For a quick first test I did not shoot any video myself but downloaded the first one I came across on YouTube and "fed" it to the script. The next day I took my Xiaomi Yi for a walk and shot a few clips of my own. Here is what came of it:


My native Sea of Azov (the photo is assembled from a 1920x1080 video, 31 seconds long, shot at 60 fps)
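(The width of a slit photo equals the number of frames: 31 s x 60 fps = 1860 one-pixel columns, so this image is about 1860x1080 px.)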


And these photos are assembled from a 1280x720 video, 16 seconds long, shot at 120 fps. Pay attention to the background of the second photo: it is not static. A moving Ferris wheel was in the background.
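(Here the shorter clip is offset by the higher frame rate: 16 s x 120 fps = 1920 frames, so these photos come out even wider, about 1920x720 px.)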
You can view and download the script in my repository on GitHub. Suggestions, criticism and pull requests are welcome.