
I still love graphics programming

About a year ago, I wrote a story about one of my bicycles, which I called “I love graphics programming”. In that story I tried to show the development process “on the romantic side”, joking a bit at my own expense and claiming that everything is fun and funny when you program graphics. I told the story only from the “Wow! Stripy ...” side, and now, almost a year later, I have decided to share how it all actually worked and how it ended. I want to warn you right away that this is still a story about bicycles. It is not a story about revolutionary technology or super-clever solutions. It is a story about how I deliberately wrote a bicycle for my own pleasure.
The story is again a bit messy, so for everyone who does not like Android, C++, live wallpapers, Minecraft, bicycles, or a stream of consciousness loosely attached to the topic, I warn you right away that the content of this post may upset you; continue reading at your own risk.
The previous story
For those who are not in the know and do not want to read the previous article, here it is in a nutshell: I wrote a live wallpaper for Android in the style of Minecraft. In essence, it is just a Minecraft-style landscape image that you can scroll with your finger and that has a day/night cycle.
A bit about bicycles
In my last story I made a digression to talk about a game I had written many years earlier out of nostalgia, and in this story I decided not to strive for originality and to start the same way.
In 2006 I was the proud owner of a PDA. The VGA screen and all the other goodies of that device offered quite broad possibilities, and at the time I wrote several applications for Windows Mobile.
One of the applications I wrote was a game that in my childhood was called “Eggplant” and that consumed a lot of childhood time and nerves. Without going into details, I took a set of images from an emulator and built a game on top of them:

But I am telling you this not to advertise the PDA, and not only out of nostalgia. This is a story about bicycles, about bicycles that are created intentionally.
At that time I already had enough software development experience to write such a simple game in less than a day. As I said, the graphics and sound were taken from the emulator, the logic is terribly simple, and displaying 10 sprites posed no problem at all. Perhaps the fact that the implementation was too simple was the main problem for me. After all, I was writing this game for myself, not for sale, not for anyone else, which means I had to get something more out of it than a finished game. The process itself was supposed to be fun.
Here bicycles came to the rescue: my own image format for the sprites that had to be displayed on the screen (over the background image), and my own process for outputting those sprites with a white-noise effect. Everything was implemented through a framebuffer, i.e. through an array of pixels. The task now took about 4 days and became much more interesting.
To quote myself from a previous article:
I wrote it, it worked exactly as I remembered, I played it once and quit, because I had already “played enough” while debugging it. But for me it has become a kind of tradition...
You see, every year on December 31 my friends and I go to the bathhouse. This is our tradition. From time to time I implement things for myself in order to, as they say, “stretch the brain”, and this process gives me a lot of pleasure. So let me answer right away those who like to scream in the comments: “Ugh, why do it this way?! You obviously need Perlin noise and fractal generation of regions!” Yes, for everything I describe in this story there are established methods and tools, and I know this very well. My implementation has nothing to do with ignorance of ready-made solutions; the whole point is implementing my own.
Back to the topic
I started with a simple live wallpaper application that just painted the screen a specific color. I will not describe how to do this, because that part is easily found in search engines. Drawing was handled by a “draw” method that did something like this:

Here the first performance troubles began...
SDK Features
Initially, the question was how to get an image out of JNI, and the most logical solution seemed to be an array of int representing a bitmap with 32 bits per pixel (32bpp). This approach was also suggested by the fact that Java offers a method for outputting such an array to a canvas:

My logic was this: an array of pixels can also be obtained from a Bitmap object, but that would surely take longer than simply passing the array and rendering it with a method made specifically for that purpose.
Then Android had a surprise waiting for me: the drawBitmap method took about 21ms to draw a 720x1280 image, so if I wanted to draw 30 frames per second (even without any delays, taking up all the processor time), my implementation would have to fit into the remaining 12ms (and that does not even account for the fact that drawing itself is not the only thing that takes time). This state of affairs definitely did not suit me, so I had to experiment and look for other options. The solution was to pass the Bitmap object into JNI, where the drawing method processed it as follows:

So, passing an object of type Bitmap, getting information about it (AndroidBitmap_getInfo), getting the array of pixels (AndroidBitmap_lockPixels / AndroidBitmap_unlockPixels) and the JNI call itself (without my rendering) now took no more than 1ms. At this point the problem of transferring the image to and from JNI was solved. I found nothing in the documentation about why the drawBitmap overload that takes an array of pixels takes so long, but we can assume that in that case a Bitmap object is created just before drawing.
Some overhead remained nonetheless: getting and releasing a Canvas object on every draw takes about 1-2ms, and outputting the Bitmap itself with the drawBitmap method takes another 3-4ms:

In total, about 5-6ms goes to these additional operations; there was nothing more I could do about it, so I had to come to terms with it.
This is perhaps the only interesting technical aspect on the Java side, so the rest of the story is about JNI and the implementation of the landscape generation algorithms.
Skyline and Cycling
The landscape itself was displayed cyclically, i.e. the image is a closed tape that can be scrolled endlessly in any direction (horizontally). From the point of view of the algorithm, it is possible to generate a landscape of any length, but the length is limited so as not to occupy too much memory.
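The closed tape amounts to nothing more than modular indexing. A minimal sketch (function and parameter names are my own, not from the original code):

```cpp
#include <cassert>

// Wrap an x coordinate onto the closed tape of a given width.
// Handles coordinates that are at most one map-width out of range,
// which is all a local generation step ever produces.
int wrapX(int x, int width) {
    if (x < 0)      x += width;
    if (x >= width) x -= width;
    return x;
}
```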
From the point of view of implementation, everything is very simple:
If a region being generated has points that extend beyond the map horizontally, those points are simply wrapped to the other side of the map (for x < 0, x += width; for x >= width, x -= width). A little more interesting is the horizon line: the horizon must be random, but the start and end points must meet, so that there is no such effect:

To solve the problem, the following algorithm was written:
- Randomly select the coordinate of the starting point.
- The next point can have one of the following offsets relative to the previous point: {-2, -1, 0, 1, 2}. For each offset we check whether it is still possible to return to the starting point in the remaining number of transitions; any offset that would make the return impossible is removed from the list.
- A random offset value is selected from the list.
- Repeat from step two until we come back to the starting point.
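The steps above can be sketched roughly like this (my reconstruction, not the original code; the feasibility check assumes the maximum step of ±2 per transition, counting the final wrap-around transition back to the start):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Closed-loop horizon walk: each point differs from the previous one by
// an offset from {-2,-1,0,1,2}; an offset is allowed only if the steps
// that remain (at +/-2 height per step) still suffice to return to the
// starting height, so the tape joins up seamlessly.
std::vector<int> generateHorizon(int width, int startY, unsigned seed) {
    std::srand(seed);
    std::vector<int> h(width);
    h[0] = startY;
    const int offsets[] = {-2, -1, 0, 1, 2};
    for (int i = 1; i < width; ++i) {
        int allowed[5], n = 0;
        int stepsLeft = width - i;  // transitions left, incl. the wrap-around one
        for (int off : offsets) {
            int y = h[i - 1] + off;
            if (std::abs(y - startY) <= 2 * stepsLeft)
                allowed[n++] = off;  // the start is still reachable
        }
        h[i] = h[i - 1] + allowed[std::rand() % n];
    }
    return h;
}
```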
In practice, this approach can generate both flat terrain and mountains. Truth be told, I know the algorithm is not the most optimal and that I should have started from distorting a straight line, but at the time the solution seemed sufficient.
Caves and other elements
One of the main tasks was the generation of caves and of various blocks (coal, sand, etc.). There are quite a few algorithms for this problem, but I decided to write everything myself, however it came to mind.
The implementation was similar to the implementation of the flood fill algorithm, only with some element of randomness:
- Select a random point, set the “weight” for this point (random number).
- Check each neighboring point (is it outside the map? has it been processed before?) and process the available ones, reducing their “weight” randomly.
The “weight” of a neighboring point is a random number ranging from half the parent point's “weight” to the full parent “weight”. Additionally, to create more random regions, a condition of “extra randomness” was added to the generator: with a 20% probability a child point is not considered at all.
The process of traversing points can be quite clearly shown using animation:

A slightly different animation that shows the process of reducing the “weight” from the center:

At its core the algorithm is recursive, but the implementation uses a queue: the starting point is added to the processing queue, and the loop runs while there are points in the queue; available neighboring points with a weight greater than 0 are added to it. In general, this is the standard way of avoiding excessive call-stack depth.
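Putting the pieces together, the queue-based randomized fill might look like this (a reconstruction under my own names and grid layout; the weight range and the 20% skip are taken from the description above):

```cpp
#include <cassert>
#include <cstdlib>
#include <queue>
#include <vector>

struct Cell { int x, y, weight; };

// Randomized flood fill: children inherit a random weight in
// [parent/2, parent]; cells off the map, already processed, or hit by
// the 20% "extra randomness" skip are not expanded. Returns a mask of
// carved cells.
std::vector<std::vector<bool>> carveRegion(int w, int h,
                                           int sx, int sy, int startWeight) {
    std::vector<std::vector<bool>> carved(h, std::vector<bool>(w, false));
    std::queue<Cell> q;
    q.push({sx, sy, startWeight});
    carved[sy][sx] = true;
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!q.empty()) {
        Cell c = q.front(); q.pop();
        for (int d = 0; d < 4; ++d) {
            int nx = c.x + dx[d], ny = c.y + dy[d];
            if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue; // off the map
            if (carved[ny][nx]) continue;                         // already processed
            if (std::rand() % 100 < 20) continue;                 // 20% skip
            // random weight between half and full parent weight
            int nw = c.weight / 2 + std::rand() % (c.weight / 2 + 1);
            if (nw <= 0) continue;                                // region dies out here
            carved[ny][nx] = true;
            q.push({nx, ny, nw});
        }
    }
    return carved;
}
```

Because a cell is marked as carved before it is re-enqueued, the queue never holds a cell twice and the loop terminates even on weights that decay slowly.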
This algorithm is executed in several passes - it creates several starting points from which the child points are processed. As a result, the regions interconnect randomly, creating more complex regions. Here is an example of creating five regions (the points of each region are marked with numbers from one to five, the starting point of the region has a red frame):

This algorithm is the basis for most of the landscape; only the number of starting points (points from which separate regions grow), the initial “weight” of the starting points, and the position of the starting points change (the latter so that certain regions are created only at certain depths, for example only near the bottom).
The starting points of the regions were also used to place lighting elements (torches). In an early version the torches sat exactly at the starting points, but then I decided to lower them “to the ground”: if the starting point of a region was more than two blocks above the “floor”, the torch was placed exactly two blocks above the floor. This approach made the lighting more interesting (large parts of a region could remain poorly lit).
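The lowering rule is a one-liner; a sketch (my reconstruction, assuming y grows downward so the floor has the larger y):

```cpp
#include <cassert>

// Torch placement: start at the region's seed point; if it hangs more
// than two blocks above the cave floor, lower it to exactly two blocks
// above the floor. Assumes y grows downward (floorY > seedY).
int torchY(int seedY, int floorY) {
    if (floorY - seedY > 2)
        return floorY - 2;
    return seedY;
}
```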
Performance again
Throughout development I struggled with performance. Quite a lot had to be redone when testing showed that drawing took too much time. I did not come up with anything truly original; I only implemented detection of changed areas, so that drawing involved only the region that actually needed to be redrawn, while the rest was taken from the previous frame. For example, when scrolling, most of the image remained unchanged, and only the strip that had to be added was rendered (in the image, the blocks that need to be drawn are marked in red):

The algorithm handled not only scrolling but also the areas whose lighting changed over time:

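The bookkeeping behind this can be sketched with a single dirty rectangle (a deliberate simplification of the idea, not the original code, which tracked changes per block):

```cpp
#include <algorithm>
#include <cassert>

// Dirty-rectangle tracking: changed blocks are merged into one bounding
// rectangle, and only that rectangle is redrawn on the next frame; the
// rest is reused from the previous image.
struct DirtyRect {
    int x0 = 0, y0 = 0, x1 = -1, y1 = -1;  // empty when x1 < x0
    bool empty() const { return x1 < x0; }
    void mark(int x, int y) {              // grow the rect to cover (x, y)
        if (empty()) { x0 = x1 = x; y0 = y1 = y; return; }
        x0 = std::min(x0, x); y0 = std::min(y0, y);
        x1 = std::max(x1, x); y1 = std::max(y1, y);
    }
    void clear() { x0 = y0 = 0; x1 = y1 = -1; }  // after redrawing
};
```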
Pseudo 3d
The “pseudo-volume” was also created quite simply: the viewing angle is always fixed, so in addition to the front face you can see only the side (left) face and the top face. For each type of block, 5 images were generated from the textures, which together created the illusion of volume:

In total, 5 different block arrangement options can be distinguished (the block to be drawn is marked in black, adjacent blocks in gray, and places with no neighboring block in white):

In each of the options, it is necessary to draw a certain set of images:
1. a, b, c, d1, d2
2. a, c, d2
3. a, c
...
An example of a drawing where block segments are marked with different colors:

The visible faces of each block were calculated at the landscape generation stage, so at draw time only what was actually needed had to be drawn.
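The core of that precomputation reduces to a neighbour test; a sketch (my reconstruction of the rule implied by the fixed camera angle, not the original code, which also combined the faces into the 5 arrangement options):

```cpp
#include <cassert>

struct Faces { bool front, top, left; };

// With a fixed viewing angle only the front, top, and left faces can
// ever show; the top and left show only when the corresponding
// neighbouring cell is empty.
Faces visibleFaces(bool hasBlockAbove, bool hasBlockLeft) {
    Faces f;
    f.front = true;            // the front face is always drawn
    f.top   = !hasBlockAbove;  // visible when nothing sits on the block
    f.left  = !hasBlockLeft;   // visible when the left neighbour is empty
    return f;
}
```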
A bit about random numbers
To generate “random” numbers, I wrote a class that produces a sequence of numbers based on a given string and an additional parameter. For example, when generating caves, the combination of the seed and “caves” was used; for trees, the seed and “trees”. This approach made it possible to turn off the generation of a certain feature (for example, not to generate coal or iron blocks) without affecting the appearance of other elements based on random numbers.
I built the class on the CRC32 checksum algorithm, because it can turn arbitrary data into a number and I had an implementation at hand.
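The idea can be sketched as follows (my reconstruction: the CRC32 routine is the standard reflected variant, and xorshift32 stands in for whatever generator the original used):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Standard reflected CRC-32 (poly 0xEDB88320, init/final 0xFFFFFFFF).
uint32_t crc32(const std::string& data) {
    uint32_t crc = 0xFFFFFFFFu;
    for (unsigned char byte : data) {
        crc ^= byte;
        for (int i = 0; i < 8; ++i)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

// A "named" random stream: the world seed plus a purpose string
// ("caves", "trees", ...) is hashed with CRC32 to seed a PRNG, so each
// purpose gets an independent, reproducible sequence.
struct NamedRandom {
    uint32_t state;
    NamedRandom(uint32_t worldSeed, const std::string& purpose)
        : state(crc32(std::to_string(worldSeed) + purpose)) {
        if (state == 0) state = 1;  // xorshift32 needs a nonzero state
    }
    uint32_t next() {  // xorshift32 step
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        return state;
    }
};
```

Turning off, say, coal generation then simply means not consuming the “coal” stream; the “caves” and “trees” streams are untouched and produce exactly the same landscape as before.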
Briefly about the results
Application installs for the year: 16553 (+81 paid):

The total income from the application amounted to 2794.91 rubles. In general, if I had to make a living writing bicycles, I would have starved to death by now.
About the development process I can say that it was very interesting to solve problems we do not usually encounter at work. Of course, I could have taken one of the ready-made engines and built something similar on top of it, with honest 3D and without the perversions described here, but once again I hope this pushes someone else to create something with their own hands rather than only with ready-made frameworks and libraries.
Thank you all for your attention!