The True Image 2017 release and its development: all the hardcore features are in place
We recently had something to celebrate: we released True Image 2017, which we had been working on for a year. There are a lot of changes, but the first thing that catches your eye is the “casual”, minimalist design. Our first releases were tools for advanced users and system administrators, but over the last few years the product has become popular worldwide with very different kinds of people, including those who can barely tell the monitor from the system unit.
So our approach is to keep all the good old hardcore (the flexible settings and the many non-trivial tools) while focusing on simplifying the interface.

All the settings are in place.
In this release we learned how to back up iOS and Android devices locally to your desktop and how to back up a Facebook profile (thanks to the user Masha Bucket), reworked the archive architecture, and more. I’ll walk through the main features and the difficulties we ran into while developing them.

Front and center: the main actions, with obviously good defaults
Traditional desktop backup
The mode we historically started with, sector-by-sector backup, is naturally still there in the 2017 release. A quick reminder of how it works: you take a full physical copy of the hard drive and then work with that copy however you like. This is very convenient for data recovery (so you don’t corrupt anything on a barely breathing disk). You can also mount such disks (image files, to be precise) directly in Acronis True Image and see them as separate read-only disks in Windows Explorer.
A traditional backup works a bit differently, on an incremental scheme: first a full image is taken (all files on the hard disk, minus the exclusions), then only the differences are added to the backup. The schedule and all the details are very flexible, and for “casual” users there are sensible defaults.
Of course, we still suggest making an emergency disk or flash drive to boot from. What we actually write to the USB stick is a Linux image with recovery tools (on a Mac, the “native” standard disaster recovery tools plus our utility go there instead). You can use it as a Next-Next-Finish tool without any special knowledge, or switch screens and do something more advanced.

Select boot branch

Recovery utility
The Russian version keeps the full driver story: recovering drivers and rolling them onto a new system (in the USA there are legal difficulties with this). The problem is this: if you take a backup from XP, Vista or Win7 and then bring it to a new computer (with different hardware), nothing will simply start. You would need to reinstall all the drivers and effectively reconfigure the whole OS. So about 8 years ago we dug very deep into reverse-engineering Windows internals and wrote our own driver installer. Now you just move to the new configuration, run a recovery and point to where the drivers for the changed hardware are. We have a separate utility for this; it comes with the bundle (that is, it’s free for users of the 2017 release), but you need to download it separately.
Another challenge is MBR/EFI migration. We have a scenario where we can take an OS installed on an MBR disk and create an EFI boot entry for it, so that it boots normally and supports large disks. This choice now comes up for users who want to move an old OS onto a new machine. Here are the low-level details of the different conversion branches:
| Source (archive) \ Target | BIOS-booted system, target HDD < 2^32 logical sectors | BIOS-booted system, target HDD > 2^32 logical sectors | EFI-booted system, target HDD < 2^32 logical sectors | EFI-booted system, target HDD > 2^32 logical sectors |
|---|---|---|---|---|
| MBR, non-UEFI-capable OS (32-bit Windows, or 64-bit Windows prior to Windows Vista SP1) | No changes in bootability or disk layout (1) | 1. Use as a non-system GPT disk; not available on a Windows XP x32 host (7). 2. Migrate as is, warning that the user is unable to use the entire disk space; if the OS in the source is Windows XP, warn that the system may not be bootable (8) | Migrate as is, warning that the system will not be bootable in UEFI (14) | 1. Use as a non-system GPT disk (20). 2. Migrate as is, warning that the user is unable to use the entire disk space and that the system will not be bootable in UEFI (21) |
| MBR, UEFI-capable OS (Windows x64: Windows Vista SP1 or later) | No changes in bootability or disk layout (2) | Leave MBR, warning that the user is unable to use the entire disk space; prompt to switch the system to EFI, reboot from the media and restart the operation in order to use the entire disk space with GPT (9) | Convert the disk to GPT layout, fix Windows bootability (15) | Convert the disk to GPT layout, fix Windows bootability (22) |
| MBR, no OS or non-Windows OS | 1. Leave MBR (3). 2. Convert to GPT, warning that the disk will be converted to GPT and must be non-system; not available on a Windows XP x32 host (4) | 1. Leave MBR, warning that the user is unable to use the entire disk space (10). 2. Convert to GPT, warning that the disk will be converted to GPT and must be non-system; not available on a Windows XP x32 host (11) | 1. Leave MBR (16). 2. Convert to GPT, warning that the disk will be converted to GPT and must be non-system (17) | 1. Leave MBR, warning that the user is unable to use the entire disk space (23). 2. Convert to GPT, warning that the disk will be converted to GPT (24) |
| GPT, UEFI-capable OS (Windows x64: Windows Vista SP1 or later) | No changes in bootability or disk layout, warning that the system will not be bootable in BIOS (5) | No changes in bootability or disk layout, warning that the system will not be bootable in BIOS (12) | No changes in bootability or disk layout (18) | No changes in bootability or disk layout (25) |
| GPT, no OS or non-Windows OS | No changes in bootability or disk layout (6) | No changes in bootability or disk layout (13) | No changes in bootability or disk layout (19) | No changes in bootability or disk layout (26) |
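To make the matrix easier to read as logic, here is a minimal sketch of how such a decision table could be encoded. This is purely illustrative and simplified (the option names, warning strings and the omission of a few XP-specific caveats are my own choices, not product code):

```python
# Illustrative sketch only: encodes the conversion matrix above as a function.
BIG_DISK = 2**32  # logical sectors; above this, MBR cannot address the whole disk

def migration_options(src_layout, src_os, target_firmware, target_sectors):
    """src_layout: 'MBR' | 'GPT'
       src_os: 'non_uefi_windows' | 'uefi_windows' | 'none'
       target_firmware: 'BIOS' | 'EFI'
       Returns a list of (action, warnings) pairs, roughly following the table."""
    big = target_sectors >= BIG_DISK
    opts = []

    if src_layout == 'GPT':
        warn = (['system will not be bootable in BIOS']
                if target_firmware == 'BIOS' and src_os == 'uefi_windows' else [])
        return [('keep layout, no bootability changes', warn)]

    # From here on the source is MBR.
    if src_os == 'uefi_windows' and target_firmware == 'EFI':
        return [('convert disk to GPT, fix Windows bootability', [])]

    if src_os == 'uefi_windows':              # BIOS target
        if not big:
            return [('no changes', [])]
        return [('leave MBR', ['cannot use the entire disk space',
                               'switch the system to EFI and redo the operation to use GPT'])]

    if src_os == 'non_uefi_windows':
        if target_firmware == 'BIOS' and not big:
            return [('no changes', [])]
        if big:
            opts.append(('use as non-system GPT disk', []))
        warns = ['cannot use the entire disk space'] if big else []
        if target_firmware == 'EFI':
            warns.append('system will not be bootable in UEFI')
        opts.append(('migrate as is', warns))
        return opts

    # No OS, or a non-Windows OS, on an MBR source.
    warns = ['cannot use the entire disk space'] if big else []
    opts.append(('leave MBR', warns))
    opts.append(('convert to GPT', ['disk will be converted to GPT (non-system use)']))
    return opts

# Example: a UEFI-capable Windows from an MBR archive onto a large disk on an EFI machine.
print(migration_options('MBR', 'uefi_windows', 'EFI', 6 * 10**9))
```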
Naturally, since we have taken the low level apart so thoroughly, you can set up an emergency tool right on the local computer. We create a new EFI boot entry and write our modified compact Linux and recovery tools to it. If something breaks, you can try booting the same computer from that alternative entry and restore from the backup. This protects lazy users who don’t want to plug an external hard drive into their computer for scheduled backups (and don’t back up to the cloud). Of course, it won’t save you from serious hardware failures, so a separate medium with your data is still important.
Another interesting thing is auto-mapping. We already had it, but in this release it has become much smarter. The point is that recovery can now be done from almost any medium, and there may even be several such media at once: a network share, a local copy, something else. If the system has changed significantly since the last backup iteration (say, you are restoring a year-old backup, which happens, for example, when setting up a new workplace in some companies), then you need to figure out correctly where and how to restore it. A typical example is a different hard disk layout. Another case: the backup is old, but you don’t want to wait for a full recovery onto this computer, so you can try excluding whatever was clearly not damaged, just to avoid waiting a couple of hours.

Network backup
And yes, just in case, I’ll add that all the old use cases, like “on my grandmother’s computer everything gets backed up straight to the flash drive stuck in the back” or “on my virtual machine you can restore the backup of my girlfriend’s home computer while I have it for the weekend”, are still there.

Folder sync
Mobile backup
We pretty thoroughly reworked the whole mobile backup technology in the new release. The main point: we understand very well that not everyone wants to hand a copy of their data to somebody else’s cloud, so we added a local backup option. The old cloud option is still available, but local backup turned out to be very, very popular. We had server code from our corporate products (built exactly for the paranoid and for backing up corporate devices inside the network), and we decided to reuse it. To make things clearer, let me first describe how the new scheme works:
- You come home with your phone (more precisely, you end up on the same Wi-Fi network as the desktop running the base version of Acronis True Image).
- The app on the phone wakes up (via iOS background scheduling or from the Android background) and checks whether it can reach a computer on the current network. To do that, the phone starts looking for its paired host and establishes an SSL connection to it.
- After that, an incremental backup begins: everything that has changed over the past day is split into parts and uploaded in short chunks. We assume the user spends a lot of time on the home network, so we rely on the background facilities of both operating systems: on iOS, for example, we can open a 30-second “window”, upload a short chunk, and then wait for the next window.
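Roughly, that chunking loop looks like the sketch below. This is my own simplified illustration, not the actual mobile code: the chunk size, the `transport`/`state` interfaces and the priority order are assumptions. The idea it shows is that contacts are queued first, every chunk is acknowledged, and an interrupted upload simply resumes from the last acknowledged offset in the next background window.

```python
import hashlib

CHUNK_SIZE = 512 * 1024          # assumed chunk size, small enough for a 30-second window
PRIORITY = {'contacts': 0, 'photos': 1, 'videos': 2, 'other': 3}   # contacts go first

def pending_items(changed_since_last_backup):
    """Sort changed items so that contacts are uploaded before heavy media."""
    return sorted(changed_since_last_backup, key=lambda item: PRIORITY.get(item['kind'], 99))

def upload_in_window(item, transport, state):
    """Upload one item chunk by chunk; stop whenever the OS closes the background window.

    `transport.window_open()`, `transport.send_chunk()` and `state` (a persisted dict of
    acknowledged offsets) are hypothetical interfaces used only for this sketch."""
    offset = state.get(item['id'], 0)            # resume from the last acknowledged chunk
    data = item['data']
    while offset < len(data):
        if not transport.window_open():          # e.g. the 30-second iOS background slot ended
            return False                         # we'll continue next time, nothing is lost
        chunk = data[offset:offset + CHUNK_SIZE]
        ok = transport.send_chunk(item['id'], offset, chunk,
                                  digest=hashlib.sha256(chunk).hexdigest())
        if not ok:                               # network dropped: keep the offset, retry later
            return False
        offset += len(chunk)
        state[item['id']] = offset               # persist progress so a 24-hour gap is harmless
    return True
```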
There were difficulties almost everywhere. First of all, of course, traffic, background activation and power consumption on recent versions of iOS and Android. That part is already relatively well understood, and there were ready-made recipes. After the first full backup we are usually talking about a few megabytes of new photos, a couple of kilobytes of contacts and other user data. We rewrote the upload code: chunks no longer break, and they upload fine even after a 24-hour gap in connectivity. Contacts are now uploaded first, with priority. At the same time we updated the cloud upload (for when the backup isn’t local): contacts are now visible as soon as they are received, without even waiting for the chunk to finish.
Establishing the connection between the phone and the desktop was not trivial. We tried many approaches before settling on QR codes: you photograph the QR code on the desktop screen with the app. Even this version was quite tricky: at first we embedded a full SSL certificate directly into the visual code, and not all scanner libraries could read it. We had to shrink the QR code considerably and introduce a new key-exchange protocol.
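We haven’t published the exact protocol, but the general idea of putting a short fingerprint rather than the whole certificate into the QR payload can be sketched roughly like this. The payload format, port and helper names are assumptions for illustration only; the `qrcode` package mentioned in the comment is an ordinary third-party library, not part of the product:

```python
import hashlib, json, socket, ssl

def make_pairing_payload(cert_pem: str, host: str, port: int) -> str:
    """Desktop side: pack only a short certificate fingerprint (not the whole
    certificate) into the QR payload, keeping the code small and easy to scan."""
    der = ssl.PEM_cert_to_DER_cert(cert_pem)
    fingerprint = hashlib.sha256(der).hexdigest()
    return json.dumps({"host": host, "port": port, "fp": fingerprint})
    # The desktop would then render this string as a QR image, e.g. with the
    # third-party `qrcode` package: qrcode.make(payload).save("pair.png")

def connect_pinned(payload: str) -> ssl.SSLSocket:
    """Phone side: scan the QR, connect over TLS and pin the fingerprint.
    Verification is done manually because the desktop uses a self-signed certificate."""
    info = json.loads(payload)
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE          # we verify by fingerprint instead
    raw = socket.create_connection((info["host"], info["port"]))
    tls = ctx.wrap_socket(raw)
    der = tls.getpeercert(binary_form=True)
    if hashlib.sha256(der).hexdigest() != info["fp"]:
        tls.close()
        raise ssl.SSLError("certificate fingerprint does not match the scanned QR code")
    return tls
```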
Discovery itself runs in several ways at once, so even if you run the operating system inside a virtual machine (as we like to do), the phone will still find its pair. A common home case is the phone on Wi-Fi and the laptop on a wired LAN connection. Resolution goes by IPv4, Bonjour services and hostname. Unfortunately, Russian Yota users are still left out: their “provider” modems that hand out Wi-Fi put devices behind NAT, where discovery is difficult.
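On the desktop side, the Bonjour part of that discovery boils down to advertising a service on the local network; the phone then browses for it. A minimal sketch with the third-party `zeroconf` package might look like the following. The service type, port and property names are made up for the example, not our real identifiers:

```python
import socket
from zeroconf import ServiceInfo, Zeroconf

SERVICE_TYPE = "_ti-backup._tcp.local."      # hypothetical service type, not the real one

def advertise(port: int = 44300) -> Zeroconf:
    """Register the desktop as a backup target on the local network (Bonjour/mDNS)."""
    hostname = socket.gethostname()
    ip = socket.gethostbyname(hostname)       # good enough for a sketch; real code would
                                              # enumerate all interfaces (Wi-Fi and wired LAN)
    info = ServiceInfo(
        SERVICE_TYPE,
        f"{hostname}.{SERVICE_TYPE}",
        addresses=[socket.inet_aton(ip)],
        port=port,
        properties={"host": hostname},        # lets the phone double-check by hostname too
    )
    zc = Zeroconf()
    zc.register_service(info)
    return zc                                 # keep it alive; call zc.close() on shutdown
```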
Reusing the server code is a story in itself. It started life as a Linux server that could in theory be ported to, say, Windows. The component’s whole logic is built around a control server that issues commands. With corporate backup we therefore deploy a micro-server that acts as the command center for its subnet. In the home case this “nano-server” is, so to speak, emulated on the desktop. More precisely, the component simply sends requests, and for a local backup the desktop itself provides the answers (for cloud backup, the answers come from the remote server). The component is needed, among other things, so that the console knows what has already been backed up and what hasn’t.
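Conceptually, the desktop only has to answer the same small set of questions the corporate control server would answer, such as “what do you already have for this device?”. A toy illustration of that idea; the endpoint path, port and response shape are invented for the example and are not the real protocol:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# What the mobile component needs to know before an incremental run: which items
# the target already holds. In the cloud case a remote control server answers this;
# in the local case the desktop answers it itself.
ALREADY_BACKED_UP = {"photos": ["IMG_0001.jpg"], "contacts_version": 17}

class LocalCommandServer(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/backed-up":            # invented endpoint name
            body = json.dumps(ALREADY_BACKED_UP).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 44300), LocalCommandServer).serve_forever()
```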
Recovery also goes through the mobile app, but is started explicitly from its GUI. And, of course, most of the data can be migrated between different devices: if you are moving from iOS to Android, for example, carrying your photos over is the simplest case.
Facebook profile backup
If your page simply gets hijacked, that is generally not a big problem: most of the time it can be restored through the regular channels. But if Facebook unexpectedly bans you, you are unlikely to get anything back. Same if your page dies because of a technical glitch on their side. Sometimes the user accidentally deletes something themselves (a single photo, say) and really wants it back. In short, many people asked whether it was possible to back up a Facebook profile. Yes, it is. Now we do it.
When you start the profile copy tool from the desktop, a Facebook page opens where you grant a special application access to your data. This application is the gateway to the Facebook API, essentially a token store for the backup job. After that you can restrict your profile’s “visibility” through the API on the Facebook side, and we can start copying.
Not everything can be copied. You can’t, for example, just grab and save the friend graph; Facebook protects its main asset very carefully (unlike a lot of other personal data). But it is possible to pick up all the photos, the whole wall, all the messages and so on. Photos and videos can be quickly restored if something happens: the API lets you upload them retroactively without showing them in the news feed, but of course they won’t have their likes and comments. That data can’t be pushed back at all, which is, on the whole, understandable. Your Timeline isn’t restored either: unlike photos, all posts will be dated at the time of the re-upload. Your likes are saved in the backup, but they can’t be restored either.
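For photos, the copying itself is ordinary Graph API paging. Here is a simplified sketch of the kind of request loop involved; the API version, the field list and the use of the `requests` library are my choices for the example, and the real product additionally handles errors, rate limits and the actual media download:

```python
import requests

GRAPH = "https://graph.facebook.com/v2.8"   # API version current around this release

def fetch_uploaded_photos(token):
    """Walk the user's uploaded photos page by page and yield source URLs."""
    url = f"{GRAPH}/me/photos"
    params = {"type": "uploaded",
              "fields": "id,created_time,images",
              "access_token": token}
    while url:
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        for photo in page.get("data", []):
            # `images` holds several renditions; take the largest one for the backup
            best = max(photo["images"], key=lambda img: img["width"])
            yield photo["id"], photo["created_time"], best["source"]
        url = page.get("paging", {}).get("next")    # absolute URL of the next page
        params = None                               # `next` already carries the query string
```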
The Facebook API turned out to be quite buggy, or rather rich in extremely illogical quirks. It was never designed for large jobs: typically an application makes one or two requests for something very specific, not for everything at once. The upshot is that plain incremental backup is very hard to do. For example, it is nearly impossible to quickly get the difference between the current state and the state on a known earlier date. There are no standard methods for that, and there won’t be; this seems to be part of Facebook’s security policy. We had to think hard about organizing the algorithms within the existing limits. In the end we dug around a lot, but found a balance. The second issue is token restrictions. Even the longest-lived token isn’t eternal; it lives 60 days. By Facebook’s logic, after 60 days you have to go back to the dashboard and press the button again. Regular web tokens are renewed by user activity: when you visit your profile with cookies that reference such a token, it gets refreshed. We work without a live visit, in a separate session. Fortunately, we are not a SmartTV manufacturer: their users have to get a code on a phone or desktop and hammer it into the TV with the remote every two months. We just have one button.
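The 60-day token in question is Facebook’s “long-lived” token, obtained by exchanging the short-lived one received during authorization. Roughly, the call looks like the sketch below; handling of the app id and secret of course happens server-side, this only shows the shape of the standard exchange request:

```python
import requests

def exchange_for_long_lived_token(app_id, app_secret, short_lived_token):
    """Swap a short-lived user token for a ~60-day one via the standard Graph API call."""
    resp = requests.get(
        "https://graph.facebook.com/v2.8/oauth/access_token",
        params={
            "grant_type": "fb_exchange_token",
            "client_id": app_id,
            "client_secret": app_secret,
            "fb_exchange_token": short_lived_token,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]   # expires in about 60 days; after that the user
                                         # has to press the button in the dashboard again
```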
Testing was a lot of fun. Facebook lets you create virtual test users that are not visible on the main network. Unfortunately, these “mannequins” cannot like each other, cannot comment, and have many other restrictions. So we created our own “live” users, the main of whom, over time, became Masha Bucket (you can still find her on the network, though we have cleaned up the profile). In the end we worked with the profiles of our top managers: we needed really “bloated” profiles of people who had been living an active online life for many years and still do. As a result, one of our employees started measuring backup volume that way, and a ticket note could read: “We took data on 50 Ivanovs from Facebook, and only 48 arrived.”
Interesting things in the changelog
Since this is a post about the release rather than about development details, I will leave a lot of interesting implementation material overboard; my colleagues and I will try to cover the hardcore details later. Meanwhile, let me note a few other interesting changes.
The first interesting story is exclusions. A normal backup scenario, for example, does not capture the system shadow copy: it simply makes no sense. To keep backups light (which matters especially for those who upload over the network), we did a lot of work on exclusions. Besides temporary folders (except the browser cache, which turned out to be unexpectedly important), system logs, crash dumps and other system artifacts, we also exclude anything that has its own recovery mechanism. For example, unless you explicitly say otherwise, we will not back up Dropbox, Yandex.Disk, Google Drive, OneDrive, libraries like iTunes, or antivirus quarantines. To do this we had to reverse-engineer all of these tools: find their settings, determine from their configuration what is present on the system, and decide what to take and what to skip. Yandex.Disk gave us the longest chase: different versions store their settings in completely different ways. When we first dealt with it in the previous release it kept its config in an XML file, then it moved the settings into an SQLite database, then changed the storage format again... We chased these settings around and tried to work out the logic behind every exclusion.
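Internally this kind of logic is just a rule set: static glob patterns plus a list of detectors that figure out whether a given product is installed and where its synced folder lives. A condensed sketch follows; the paths, patterns and config format shown are examples of the kind of places these clients keep their settings, not a guaranteed or complete list:

```python
import fnmatch
import json
import os

# Static exclusions that need no detection: temp files, logs, dumps, swap...
STATIC_EXCLUDE_PATTERNS = [
    "*\\Temp\\*", "*.tmp", "*\\Windows\\Memory.dmp", "*\\pagefile.sys", "*\\hiberfil.sys",
]

def dropbox_folder():
    """Example detector: Dropbox keeps its sync-folder path in a small JSON config.
    The exact location varies between versions; this path is an assumption for the sketch."""
    cfg = os.path.expandvars(r"%APPDATA%\Dropbox\info.json")
    if not os.path.exists(cfg):
        return None
    with open(cfg, encoding="utf-8") as f:
        info = json.load(f)
    return info.get("personal", {}).get("path")

def build_exclusions():
    """Combine static patterns with whatever cloud-client folders we managed to detect."""
    patterns = list(STATIC_EXCLUDE_PATTERNS)
    for detector in (dropbox_folder,):        # real code has one detector per product
        folder = detector()
        if folder:
            patterns.append(os.path.join(folder, "*"))
    return patterns

def is_excluded(path, patterns):
    return any(fnmatch.fnmatch(path, p) for p in patterns)
```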
Exclusions are also needed in the opposite direction. For example, when bringing an old archive up on a new system, you must not restore the old OS’s drivers, the swap file, and so on. So from release to release we maintain these rule sets.

We also exclude the backup’s own files from the backup if they are stored on the same drive; otherwise we would get recursion. At the same time we have a new storage type, an archive of rarely needed files (a compressed version of those “Parse1”, “Parse2” folders and so on that the user creates and keeps out of nostalgia), and those we do include in the backup.
The archive format in which backup data is stored has changed a lot. Over this year and the last we came up with many ideas for optimizing it and making it more accessible (in the end we can “transparently” mount disks from archive files and give web access to the cloud copy). Local backups are now stored in the second version of the archive format, with substantial architectural refinements, while mobile backups already use the third version, with a completely different architecture. I think next year we will switch to the third version everywhere. By and large, our archive is a self-contained file system of its own. For example, up to 20 versions of one file (with or without deduplication) can live inside it, and there is a built-in mechanism for deleting old versions. There is an analogue of a garbage collector: you can “clean” the archive, freeing space inside it (while the size of the archive file itself does not change). As in classic file systems, deleted files inside the archive are not physically erased; for efficiency they are just marked as free to overwrite. Because of the archive’s dictionary structure, a full delete also requires deleting the corresponding dictionary sections. We also have our own internal archive defragmentation, but we never run it without an explicit user command (just as we never move data at deletion time). The point is that any movement of data creates a risk of loss. A typical problem is the “USB garland”, when the user connects a drive through several adapters (we see this especially often with owners of small laptops). Deleting a file inside the archive while resizing the archive means moving all the data sector by sector; on a long “garland” this leads to losses, and we have had several dozen such cases in support.
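To give a feel for the “file system inside a file” idea, here is a toy model of that behavior: versions are appended, deletion only writes a tombstone, and space is actually reclaimed only by an explicit compaction pass. All structures here are invented for illustration; the real archive format is binary and far more involved:

```python
MAX_VERSIONS = 20   # the real archive keeps up to 20 versions of a file

class ToyArchive:
    """Append-only archive: deleting marks records dead, explicit compact() reclaims space."""

    def __init__(self):
        self.records = []                 # list of (name, version, payload, alive)

    def put(self, name, payload):
        version = sum(1 for n, _, _, alive in self.records if n == name and alive) + 1
        self.records.append((name, version, payload, True))
        # Retire the oldest version once the per-file limit is exceeded (tombstone only).
        alive = [i for i, r in enumerate(self.records) if r[0] == name and r[3]]
        if len(alive) > MAX_VERSIONS:
            self._tombstone(alive[0])

    def delete(self, name):
        """Mark every version of the file as dead; no data is moved, because moving
        data risks loss on flaky media ("USB garlands")."""
        for i, (n, _, _, alive) in enumerate(self.records):
            if n == name and alive:
                self._tombstone(i)

    def _tombstone(self, index):
        n, v, payload, _ = self.records[index]
        self.records[index] = (n, v, payload, False)

    def compact(self):
        """Explicit, user-initiated pass that actually drops dead records."""
        self.records = [r for r in self.records if r[3]]
```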
We also worked on the recovery of individual files; in particular, they can now be found very quickly in a local archive.
Where to get the release
Here is the page of the Russian release. As before, you can buy either a subscription (which matters for those who back up to our cloud) or the “once and for all” version, limited to 5 free gigabytes in the cloud (an option for those who back up to the local network or to hard drives in a drawer). Licenses are available for 1, 3 or 5 devices (Mac or Windows), plus an unlimited number of mobile devices.
Well, to summarize. The new release can be used for full backups to any local or network media or to the cloud, for synchronizing data between computers, for a number of system tasks (in particular, Try&Decide, a favorite of many, is still in place), for Facebook, for backing up iOS and Android mobile devices (a desktop version is planned for Windows tablets), for quickly mounting backups as virtual disks, for restoring and migrating the system, and for quickly finding individual data in an archive.
My colleagues and I will try to tell you more later about the design, the mobile development and the low-level work, each about our own part.