Approaches to Versioning Database Changes

It is much better to remove disciplinary restrictions by extending the tooling.
— the author of this article


Introduction


When developing an information system — that is, a program aimed at storing, working with, processing, analyzing, and visualizing the data of some database — one of the cornerstones is the task of developing the database itself. When I first started thinking about this, it seemed that whatever you did, it would still come out crooked.


Over five years of developing several corporate information systems, I kept raising and trying to solve the question of how to make one or another aspect of database development convenient. I looked for tools that help do things with the database, and for methodologies. Surprisingly, there is little prior work in this area. And in every approach the flaws show immediately: this is impossible, that will be inconvenient, here there are too many disciplinary rules (see the epigraph)... In this article I have tried to gather the approaches I consider the most effective, and one more, in addition to those gathered, I will present as the crown of my search — the one I consider the most "bulletproof".


Evolution of the problem


Let me give an example of how the task of developing a database can turn into a problem and evolve. If you want to jump straight to the global problem statement with all the features, you can skip this section.


  1. There is one database instance — it is prod — users work on it, and programmers look at and change both the data and the schema right there. It seems convenient: everything is very simple, nothing extra, Occam's razor and all that.
  2. Hmm, but the programmers work directly on prod — what if they do something wrong? Lock data, or delete something important? Or simply load the server while hunting for an optimization? Any way out of such a situation, once it has arisen, will be bad for the users. No, development should be separated from prod into its own dev database. Which raises the question: how do we transfer changes from dev to prod?
  3. Hmm, and how do we keep track of when and what changed in the database? We track the application code, but what about the code of stored procedures and everything else in the database? Will we script the changes with alter statements? And how do we version create? After all, it cannot be rolled up repeatedly, unlike alter.
  4. Hmm, all the programmers sit on one dev server, which is also the test server for users. That is, testing users can stumble on something the programmer has not yet finished and that should not be visible to them yet. Or on something simply broken. Dev should be separated from test.
    (in fact, in many information systems this problem can be largely ignored)
  5. But the same goes for the programmers themselves — if two programmers change the database, artifacts appear. One changes something the application is not ready for yet, and the project stops starting for everyone. Each programmer should have his own database instance, so that everyone has his own sandbox. And what if the database is very large?
  6. Oops, how do we now transfer all the changes to prod? If we use schema comparison, the database of one programmer will erase the changes of another. And how do we transfer the changes of one programmer to another programmer's instance? Restore a backup from prod for each programmer? And then how does he roll his own changes onto that backup? No, nonsense — scripts have to play a role here somehow.
  7. Wait, what about the data that should be the same across all instances (reference tables, including persistent ones)? Will it travel with backups? Or do we still script insert's or merge's and version those scripts? But how do we run them — always all of them, or only the ones that changed? And how do we regulate the order?
  8. But it seems there are also data and procedures that should differ between instances! Settings, replication, communication with other services. Sharding. What do we do with them? Transfer a backup and then a change script of its own for each instance? Possible, but not all differences lend themselves to this — for example, memory-optimized tables cannot be converted into ordinary tables, not to mention that restoring a backup containing them may simply be impossible when the test server or a programmer's instance does not have enough memory.
    By the way, I ran into this when I had to make memory-optimized tables of 20 GB, and on top of that prod had a whole synchronization subsystem with another system that should not exist on the other instances.
  9. Mmm, how do we make it easy to bring up a whole new DB instance right away? What if all the settings of both the database and SQL Server were scripted and versioned?
    This is convenient when each programmer has his own SQL Server rather than one server with a bunch of databases. Or with sharding. Or when a new test environment is needed for a big business feature.

Some points of this evolution may seem superfluous — inventing problems for yourself — but once they are solved with a convenient tool, database development becomes almost as simple and painless as developing ordinary application code.


Having put together all the database development problems I have seen while building information systems, I arrived at the following global problem statement as I see it.


Complete universal problem statement


Tasks and problems that may arise when developing a database:


  1. It is necessary to version:
    A. The database schema. That is, you need to be able to save the history of schema changes in some form.
    This is the most basic goal, on which all other capabilities hang. Versioning the database schema is the cornerstone of database development. There are two ways: you can save the stream of changes to all database objects in one global stream, or you can version each specific object separately. I believe (see item 6) that the second way is, in the end, much more practical.
    B. Some of the data: dynamic reference tables and persistent (static) reference tables.
    This goal is important if the application code depends on this data (then the data is persistent) — that is, if some table corresponds to an enum in the code. Otherwise the goal is secondary, but still very useful for controlling additions and changes to static (and not so static) data, as well as for automating the deployment of database changes across different environments (item 2) — if there is more than one. A sketch of such a versioned-data script is given right after this list.
    C. Data conversion scripts (migration scripts). They can be tied either to a schema change or to a business task. For such changes order matters, and sometimes re-execution is required.
    Not every information system will have such scripts. But if they exist, this goal is mandatory.
    D. Database settings.
    Examples of important database settings that should be versioned: whether service broker is enabled (it is needed for SqlDependency), change tracking (it can be used for more complex caches), the read_committed_snapshot option, the database user required by the application, error alerts, support for memory-optimized tables. The benefit of saving and versioning database settings is that you can easily create a new database that works exactly like the main one, that changes can be reviewed and agreed upon in a convenient form, and that you simply have a history of what was done, when, and by whom.
    E. Some other SQL Server objects: jobs, certain server settings, alerts, logins, linked servers (for integration with other systems).
    This may be necessary when the test environments must repeat prod completely, including the processing done in jobs — which seems like a perfectly normal situation.
    Besides, if all the settings, jobs, and the rest are scripted, you can easily bring up a new environment: run one script — and a whole new SQL environment is fully working.
  2. Support for many environments is needed: one prod (or maybe more than one! — for example, with sharding, or when the product is boxed and each client has his own database), and many test ones — not just two test environments (prod-test, dev-test), but also an environment per programmer, or per large feature.
    A. Support for many environments means it should be possible to transfer changes from one environment to any other without spoiling it and without overwriting the changes it may already have.
    That is, if each programmer has his own DB sandbox, a programmer's changes must be transferable not only to prod but also to all other instances — even if another programmer has already done something there too.
    B. It happens that the schema differs slightly between environments — for example, a test environment needs additional objects (views, stored procedures), or, conversely, a piece of the database (integration with another system, etc.) must be excluded from some test environment. Likewise the case when one instance has memory-optimized tables while on another they would not fit into memory and must be ordinary tables.
    C. The versioned data may also differ between environments — for example, application launch settings and access to other systems, or test data for test environments.
    You can draw an analogy with different programmers, or different publish hosts of the application, needing different app.config files — which is achieved with the Visual Studio app.config transformation feature. Similarly, something in the database may differ between instances.
    D. The same goes for database and server settings. For example, prod / prod-test need all the jobs, but the programmers' local databases do not. Support for memory-optimized tables is needed on prod but may be absent elsewhere.
    E. When restoring a backup from prod onto another instance, you need to be able to bring the restored database into line with the target environment.
    Sometimes this is impossible (again — memory-optimized tables), but when it is possible, it is a very convenient feature when you need to transfer all the data from one DB instance to another, which is easiest to do by transferring a backup.
  3. Tasks related to the version of the database:
    A. When a programmer makes changes to the database and not all of them must reach prod, you need to be able to leave out the unnecessary ones — made for his own test purposes — and transfer to prod (and the other instances) only what is needed.
    It is convenient for the programmer to keep whatever he wants in his sandbox, while transferring to the other instances exactly what needs to be transferred. That is, if he made some view, procedure, or even table that only he needs, it should not end up on prod.
    This item is almost equivalent to 2bc, but here it is understood that there may be database objects that you simply do not version and just create manually in your sandbox.
    B. Rolling up to a specific version of the database (later than the current one on the selected environment).
    This is sometimes needed if the database is developed separately from the application. If database changes go along with application changes, then there should simply be no divergent future versions of the database. That is, the approach should be "not ready / breaks things / not supported by the application — do not commit, at least not to the shared branch of the repository". And any database changes must be backward compatible: for example, deleting or renaming a column must be accompanied by a corresponding change in the application, while adding one need not be. In article 3 (Evolutionary Database Design) the author calls such changes destructive — I agree with that term.
    C. Rollback to an old version, by analogy with a simple git checkout in ordinary application programming, is not possible in the general case: if there is data in the tables, it cannot be converted back to an old (that is, arbitrary) schema.
  4. Sometimes it is useful to be able to create a clean database with a given schema and reference data (or without it) — for tests, for example.
    A. With test automation (continuous integration), you can create a fresh clean database for the test run, either filled with random data (I do not know of free tools for this; redgate and devart have paid ones) or with a subset of production data (Jailer can help with this).
    In practice, you only get real benefit from this if the process is as simple and quick as a snap of the fingers — otherwise nobody will do it. After all, you can always use a regular test database.
  5. If a change has already been rolled up, ensure in the normal mode that it is not rolled up again, or make the changes idempotent.
    In practice, the rule "make all changes idempotent" demands too much discipline. It is much better to remove disciplinary restrictions by extending the tooling.
  6. For each database object, you need to be able to see the history of its changes.
    A. For control, you need to be able to see which changes will be rolled up as concrete SQL scripts ("cumulative scripts").
    B. Code review is highly desirable. Moreover, an explicit alter statement is preferable to a comparison of create table statements (from which a diff is generated and then rolled up), because it is better to control the actions performed on the database rather than declarations. And for procedures and similar objects, you must be able to see a diff of the body.


  7. The tools used may require licenses. With the shared-db approach (which can cause problems and conflicts of changes — see item 2) one license is enough (when only one specially designated person rolls up the changes), while with the approach where each programmer has his own database, every programmer needs a license.
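To illustrate items 1b and 5: a versioned reference table is usually kept as an idempotent merge script that can be re-rolled safely. A minimal T-SQL sketch — the dbo.OrderStatus table and its columns are invented for illustration:

    -- Re-runnable script for a versioned reference table (hypothetical dbo.OrderStatus)
    merge dbo.OrderStatus as target
    using (values
        (1, N'New'),
        (2, N'Paid'),
        (3, N'Shipped')
    ) as source (Id, Name)
    on target.Id = source.Id
    when matched and target.Name <> source.Name then
        update set Name = source.Name
    when not matched by target then
        insert (Id, Name) values (source.Id, source.Name)
    when not matched by source then
        delete;

Every run brings the table to exactly the versioned state, so the script can be rolled up any number of times.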

In my practice, I was guided by the points:


  • 1 and 2 - from cover to cover;
  • 3a;
  • I did not use 3b, because in my projects the database and the application always formed a single whole and were developed entirely together;
  • 4 — I tried to use it, but tests as such never took root with me, because they require extra resources, or a paradigm shift towards TDD;
  • 5 - required;
  • 6 - important, although rather rare;
  • licenses were not required because the tool for the chosen approach is free.

So I consider all points to be mandatory except 3b and 4.


The approaches


I came to the conclusion that it makes sense to distinguish the following approaches:


  1. Comparison of schemes of the target database and source database.
  2. Comparison of the scripted schema (and data) with the target database.
  3. Based on sequential (incremental) manual SQL scripts.
  4. Based on manual independent SQL scripts, the structure of which repeats the database schema.

In this list, the approaches are sorted by increasing utility.


Similar articles


  1. https://habrahabr.ru/post/121265/ — briefly describes approaches 2 and 3. It also mentions an approach based on a sequence of idempotent changes, but I discard it because keeping scripts idempotent becomes too complex once there are many of them. Besides, simply running 1000 scripts, even if they end up changing nothing, takes time (and log file space). There should be an approach of "if the change has already been rolled up, do not repeat it" (item 5).
  2. https://habrahabr.ru/post/258005/ — a combination of approaches 3 and 1, based on redgate SQL Source Control and redgate SQL Compare (the article says little about working with the database itself; it is mostly about love for Atlassian). As I understand it, on commit the scripts are first rolled onto the QA database, and then the changes are transferred to prod by schema comparison.
  3. https://habrahabr.ru/post/312970/ — a good long article; the approach is very similar to the previous one. They use CI so that database changes are rolled onto the QA database on every commit, and a cumulative script of database changes is produced as a build artifact for rolling out to prod. The point of this artifact is not very clear to me if the scripts themselves are in the commit. The quintessence is in the picture.

In general, I would be extremely wary of the idea of automating the roll-up of scripts on commit — sometimes commits turn out to be unprepared or incomplete. The disciplinary rule "commit only finished code to master" works very poorly in practice. It is better to avoid it by improving the tools — that is what the class of continuous integration tools is for (for example, TeamCity from JetBrains, or the completely free Jenkins). I believe database scripts should be rolled up exclusively deliberately, by a human programmer, and only at the right moments in time — which should not be tied to the commit.


1 Comparison of the schemes of the target database and the source database


Tool


Redgate SQL Compare. There is also http://compalex.net/, but it only works with PHP. There are other schema comparison tools as well.


Methodology


In addition to the prod database (the target database), a dev database is created (the source database).


Changes are made to the source database in some way. Note that this source database is not a test database in the usual sense, because you cannot do just anything with it — it is understood that all changes (at least all schema changes) must be transferred to the target database. These changes can additionally be scripted, but the scripts are not used afterwards — if you were to roll them anywhere, the whole essence of the approach would disappear and the schema comparison would become meaningless. Such scripts can only serve as a change history — one that may diverge from reality, because you can change something visually in Management Studio (or another database GUI) and forget to script it, or script it incorrectly. Then, at deployment time, a diff script is generated (by the tool) and rolled up, transferring the changes to the target database.


pros


  • For those who are afraid of writing SQL: changes to the source database can be made with visual tools (on the other hand, I do not consider this good practice).
  • Scripts need to be written only for data conversion (then again, I believe scripts should be the programmer's starting point, and one should not move away from them).

Minuses


  • Everything related to multiple environments (item 2) is impossible.
    If I make changes in my own dedicated test DB sandbox, I must transfer them to the source database for prod. If that is done in the same way (by comparison), it opens a hole: random or personal changes I made in my own database get transferred too (I am allowed to do whatever I want in my sandbox, right?). After all, only the needed changes are scripted, but everything gets transferred. On top of that, I will overwrite the changes that another programmer is transferring.
  • If a second environment differs in anything at all, it needs its own source database. That is, if some settings or data in the test environment differ from prod, you either need a second source database or have to solve the problem some other way.
  • No control over the change-transfer scripts — whatever the tool generates is what gets rolled up. Sometimes that is bad, and the specifics have to be taken into account. For example, some changes require re-creating the table, which must be done either extremely carefully or not at all (if the table is heavily used or very large).
    If you take this diff script, inspect it and edit it, then the whole essence of schema comparison disappears again — you might as well write the scripts in the first place.
  • The scripted history of object changes may differ from the real one. Because the tool compares the database schemas, not the scripts, it is possible (by accident, of course) to change something and not put it into the scripts, or to script it not quite correctly.
  • In any case, additional effort and tools are needed to support:
    - migration scripts (data conversion),
    - versioning of reference tables and persistent data.
  • The tool does not carry over database settings or SQL Server settings.
  • If someone else rolled something onto the source database, it will be dragged along during deployment as well.
  • The target database depends on another database — in the sense that prod must go hand in hand with its source.
  • There are no good free tools, and redgate SQL Compare is expensive. Moreover, you need a tool license on every host from which migration to any target database will be performed.

Problems the approach does not solve


  • 1b — it is assumed that SQL Data Compare is used for that; 1c; 1d; 1e.
  • For item 2 — additional and very significant effort. It is easier to abandon multiple environments than to maintain them.
  • 3a — the tool transfers changes either for all objects or for selected ones. That is, at deployment time you have to look through the list of all objects and untick the unnecessary ones. In principle this solves the problem, but... nobody will actually do that. Again the rule: it is much better to remove disciplinary restrictions by extending the tooling. Something like a .gitignore file is missing.
  • 6 — additional effort is needed, and the history of changes may diverge from the real one.

2 Comparison of the scripted scheme (and data) with the target database


Tool


I do not know of a tool that can compare two database schemas not by the databases themselves but by their scripts, so this approach is theoretical. But if such a tool were found or written, the approach would seem quite good — especially if the tool could also compare data.


Methodology


Here the role of the source database would be played by a directory of scripts that completely create the database — both the schema and the versioned data (reference tables, including persistent ones). That is, the programmer changes these scripts and launches a tool that compares the whole directory with the target database and produces a diff script, which is either saved for code review or rolled up right away.
Since no such tool exists, one can only fantasize about how it might also compare data and the settings of the database and SQL Server.


pros


  • The history of changes of each database object (item 6) is handled perfectly.
  • The sandbox problem (item 3a) is handled perfectly, as in the other script-based approaches: if a change is not needed elsewhere, simply do not script it.

Minuses


  • As in approach 1, everything related to multiple environments (item 2) is impossible.
  • As in approach 1, there is no control over the scripts being rolled up.
  • The same problem with migration scripts.

Problems the approach does not solve


  • 1c.
  • 1d, 1e — it is unclear what tool would handle these.
  • For item 2 — additional and very significant effort. It is easier to abandon multiple environments than to maintain them.

3 Based on sequential (incremental) manual SQL scripts


Tool


Flyway. There may be alternatives (https://github.com/lecaillon/Evolve — I am not ready to vouch for this tool, but it seems to do something similar).


Methodology


The methodology of this approach is the simplest — enchantingly simple. As the need arises, SQL change scripts are written — arbitrary ones, both for changing the schema and for changing the data; it does not matter what kind of scripts. The files are numbered and added to a folder. At the right moment, a tool is launched that rolls the new (that is, not yet executed) script files onto the selected database in numbering order. It records what has been rolled up in a special table, so a script will not be executed twice.
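A minimal sketch of how this looks with Flyway (the file names, database, and credentials here are hypothetical):

    sql\V001__create_orders_table.sql
    sql\V002__add_index_orders_customer.sql
    sql\V003__fill_orderstatus_reference.sql

    rem roll everything not yet applied onto the target database
    flyway -url=jdbc:sqlserver://localhost;databaseName=MyDb -user=deploy -password=*** migrate

Flyway records the applied versions in its own history table, so each file runs exactly once.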


This is how Qiwi works — or how it worked when I took part in developing a payment system there. Only there it was done without tools: the tool was replaced by disciplinary rules. Several QA employees watch a dedicated git repository and roll up new scripts, first onto the test database — checking whether anything broke — and then, if everything is fine, onto prod.


pros


  • Extraordinarily simple. You write a script, put it in the folder, and that is it. I think that is why this approach is the most common.
  • Full (absolutely complete) control over the changes being rolled up. No script generation.
  • In particular, you can keep data in the scripts (1b), migration scripts (1c), and the settings of both the database and SQL Server (1de).
  • You can roll up to a specific version (3b).
  • The problem of many environments is solved, but only without differences between them (2a).

Minuses


  • It is impossible to extract the history of changes of a single object: the alter's for one object are scattered across many scripts, many files.
  • When two programmers work in parallel, they can create scripts with the same numbers.
  • Since the approach is script-centric, some features are missing:
    - I would like some scripts to be rolled up when they change. That is, you add a line to the script of a versioned reference table, and it gets executed, and a row appears in the table. In this form you can store the history of data changes (see approach 4).
    - I would like some scripts to be rolled up on every deploy — for example, some kind of cleanup, or loading persistent reference data that must be versioned (and therefore cannot be entered into the database by hand).
  • It is extremely hard to make sense of a sequence of arbitrary scripts. Table creation, their alter's, additions of rows to reference tables, migrations — all scattered almost randomly through one sequence. I would like alphabetical sorting of files and different folders — in a word, I want to see the database structure in the scripts. You can, of course, improvise: make a bunch of folders, write a huge bat file that launches the tool over those folders in the right order... And that is the beginning of the next approach, the 4th.

Problems the approach does not solve


  • 2bcde,
  • 6b.

4 Based on manual independent SQL scripts, the structure of which repeats the database schema


Tool


Liquibase. There may be alternatives (redgate SQL Source Control, https://www.quora.com/What-are-the-alternatives-to-LiquiBase — but I do not know exactly how they work).


Methodology


Idea


To create and change the schema, we create one file per database object containing the script responsible for that object — thus, by versioning the files, we get a history of changes for each object. We put these files into folders that repeat the structure of the database. Since the execution order of scripts matters, we introduce control files that specify the roll-up sequence; the tool keeps these control files quite simple to write and decides which change to roll up and which to skip because it has already been rolled up or is filtered out. In addition, if something must differ between database instances, we introduce variables whose values the tool substitutes, adapting the scripts to each instance as needed. We can also attach filters to scripts, so that, depending on the context ("schema changes only", "full deploy with data migrations", and so on), only part of the changesets is executed.


To change a table, you add a script (changeset) with an alter, create index, or other statement to the file that contains its create, as in the sketch below. Alternatively, you can modify the existing changeset if it can be made re-rollable.
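A minimal sketch of such a per-table file, written as a Liquibase "formatted SQL" changelog (the table and column names are hypothetical):

    --liquibase formatted sql

    --changeset dev1:orders_create
    create table dbo.Orders (
        OrderId int not null primary key,
        CustomerId int not null
    );

    --changeset dev1:orders_add_createdat
    alter table dbo.Orders add CreatedAt datetime2 not null default sysutcdatetime();

    --changeset dev1:orders_ix_customer
    create index IX_Orders_CustomerId on dbo.Orders (CustomerId);

Each changeset runs once; new changes to the same table are appended as new changesets at the end of the same file.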


To change a procedure / function / trigger / view, you change the code in the file corresponding to that object. To make such a script re-rollable, you create the object with an empty body in the first changeset and put an alter statement with the desired body in the second (older SQL Server versions have no create or alter statement; it appeared only in SQL Server 2016 SP1). Then the first changeset is executed only once, and the second whenever it changes.
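A minimal sketch of such a per-procedure file (again formatted SQL; the procedure name, body, and the dbo.Orders columns are hypothetical):

    --liquibase formatted sql

    --changeset dev1:usp_GetOrders_stub splitStatements:false
    create procedure dbo.usp_GetOrders as begin set nocount on; end

    --changeset dev1:usp_GetOrders_body runOnChange:true splitStatements:false
    alter procedure dbo.usp_GetOrders
    as
    begin
        set nocount on;
        select OrderId, CustomerId, CreatedAt from dbo.Orders;
    end

The stub changeset runs once; the alter changeset is marked runOnChange:true, so editing the body and re-running the deploy updates the procedure.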


Finally, for the deploy itself, we make bat file(s) that launch the tool with the desired context and settings. The desired deploy is then performed simply by running the corresponding bat file.


You can have the run logs (which changesets were executed and how long they took) saved to files, adding those files to .gitignore.


Files


We make the following folder structure:


- <DB name>


  • CLR_Objects
    There will be one sql file per CLR assembly. In this file: first drop function, then drop assembly, then create assembly, then create function. The assembly binaries themselves have to be embedded in the script as a hex (varbinary) literal, but you can write a tiny helper that automates the conversion (a sketch of the resulting SQL file is given after this folder list):
    Clipboard.SetText("0x" + string.Join("", File.ReadAllBytes(path).Select(b => b.ToString("x2"))));
  • Tables
    There will be a folder for each table, containing separate files: one that creates the table and, for versioned reference tables, one with the merge script for its data.
    - <table name1>
    - <table name2>
    - ...
  • Types
    There will be 1 sql file for each type.
  • Functions
    There will be 1 sql file for each function.
  • Views
    There will be 1 sql file for each view.
  • Stored_Procedures
    There will be 1 sql file for each stored procedure.
  • Triggers
    There will be 1 sql file for each trigger.
  • Migration_Scripts
    There will be a sequence of data modification scripts.

- core
There will be the tool itself and its configuration files.
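As promised in the CLR_Objects item above, here is a minimal sketch of such a file (the assembly, function name, and signature are hypothetical; the hex literal is a placeholder for the full dll dump produced by the helper):

    -- Re-creates one CLR assembly and its function (hypothetical names)
    if object_id('dbo.clr_Normalize') is not null
        drop function dbo.clr_Normalize;
    if exists (select 1 from sys.assemblies where name = 'MyClrLib')
        drop assembly MyClrLib;
    create assembly MyClrLib
        from 0x4D5A90000300 /* ...full hex dump of the dll goes here... */
        with permission_set = safe;
    create function dbo.clr_Normalize(@s nvarchar(max)) returns nvarchar(max)
        as external name MyClrLib.[MyClrLib.Functions].Normalize;

The script is written to be re-rollable, so in Liquibase terms the whole file can be a single changeset marked runOnChange.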


As you can see, the idea of these folders is to repeat the database structure and to map each object to its own single file.


In the top-level folder <DB name> there will be the following files:


  1. Database creation script (create database, alter database — settings, users, roles).
    create_db.sql
  2. File(s) with scripts that modify the settings of SQL Server and of the database itself (sp_configure, sp_add_job and its companions, sysmail_add_account_sp and its companions, alter database, etc.). Job scripts, though, can be kept in separate files for convenience.
    config_db.sql
  3. File with the roll-up order of the tables. The tool's includeAll command executes scripts in alphabetical order, which would create the tables incorrectly because of the links between them, so we need a file that fixes the order in which tables are rolled up.
    tables.xml
  4. File with the roll-up order of the views. Views can depend on each other, so their order matters.
    views.xml
  5. File with the roll-up order of the data modification scripts. For data migrations, order also matters.
    migrations.xml
  6. File with the tool's commands for creating the database schema. It contains links to files 1 and 2 (creation and configuration of the database and server), and then to the folders CLR_Objects, Tables (more precisely, the tables.xml file), Types, Functions, Views (more precisely, the views.xml file), Stored_Procedures, Triggers — in exactly that order. In this file you can also set the values of variables that scripts depend on, if different database instances must differ (a minimal sketch of such a file is given right after this list).
    master_Scheme.xml
  7. File with all the tool commands: first a link to the previous file that creates the schema, then to the additional actions — conversion / migration / data cleanup (a link to the Migration_Scripts folder, or rather to its migrations.xml file).
    master.xml
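A minimal sketch of what master_Scheme.xml could look like (the paths and the variable are illustrative):

    <databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">

        <!-- value differs per environment; can be overridden from the deploy bat file -->
        <property name="memoryOptimized" value="false"/>

        <include file="create_db.sql" relativeToChangelogFile="true"/>
        <include file="config_db.sql" relativeToChangelogFile="true"/>
        <includeAll path="CLR_Objects/" relativeToChangelogFile="true"/>
        <include file="tables.xml" relativeToChangelogFile="true"/>
        <includeAll path="Types/" relativeToChangelogFile="true"/>
        <includeAll path="Functions/" relativeToChangelogFile="true"/>
        <include file="views.xml" relativeToChangelogFile="true"/>
        <includeAll path="Stored_Procedures/" relativeToChangelogFile="true"/>
        <includeAll path="Triggers/" relativeToChangelogFile="true"/>
    </databaseChangeLog>

master.xml then consists simply of an include of this file followed by an include of migrations.xml.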

Note that for creating / modifying stored procedures, triggers, functions, and types the order does not matter. Therefore the tool's includeAll command, which rolls them up in alphabetical order, is enough.


How to use


For each use case you create a bat file that launches the tool with the appropriate context — for example, deploy_local_scheme.bat, deploy_local_full.bat, deploy_prod_scheme.bat, deploy_prod_full.bat, deploy_<programmer>.bat, etc. In one project I had 18 such files — there was a whole system of data migrations, and it was necessary to regulate when each migration was executed.


In addition to the context, the bat-file must contain a connection string and the name of the tool command.
Possible commands:


  • update. The main command — executes all pending changesets.
  • status. Shows which changesets will be rolled up.
  • updateSQL. Outputs the full SQL script assembled from all the changesets that would be rolled up.
  • changelogSync. Marks new changesets as executed without rolling them up. Needed when a change has been rolled up manually; use it as rarely as possible.

For the convenience of viewing execution logs, I also output them to a text file:
> %outputfilename% 2>&1
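A minimal sketch of such a bat file (the connection string, paths, and context are illustrative, and the exact parameter names may differ between Liquibase versions):

    @echo off
    set outputfilename=deploy_local_full.log
    core\liquibase ^
      --changeLogFile=master.xml ^
      --url="jdbc:sqlserver://localhost;databaseName=MyDb" ^
      --username=deploy --password=*** ^
      --contexts=full ^
      update > %outputfilename% 2>&1

Swapping update for status or updateSQL turns the same file into a dry run instead of an actual deploy.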


Changesets can be marked with attributes:


  • runOnChange — when set to true, instructs the tool to roll the changeset up again whenever it changes (if it is false, then changing an already-rolled changeset produces an error of the form "the executed changeset has been modified");
  • runAlways — tells the tool to roll the changeset up on every deploy.
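A short formatted-SQL sketch of both attributes (the table names are hypothetical):

    --liquibase formatted sql

    --changeset dev1:orderstatus_data runOnChange:true splitStatements:false
    merge dbo.OrderStatus as t
    using (values (1, N'New'), (2, N'Paid')) as s (Id, Name)
    on t.Id = s.Id
    when matched then update set Name = s.Name
    when not matched by target then insert (Id, Name) values (s.Id, s.Name);

    --changeset dev1:cleanup_staging runAlways:true
    delete from dbo.ImportStaging;

The first changeset is re-rolled every time the merge script is edited; the second runs on every deploy.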

When you need to change the schema in a way that breaks some data modification (migration) scripts — for example, renaming a table or columns, or deleting something — you write the corresponding alter or sp_rename into the file of that table and adjust the affected scripts accordingly. For the one-time scripts among them, you then have to make sure the tool does not raise the error that an already-rolled changeset has changed. This is achieved in two ways: either with the changelogSync command, or by manually updating the corresponding row in the tool's table, setting the new md5 sum there — the tool itself will tell you the expected value.
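A sketch of the manual variant (the changeset id and author are hypothetical; DATABASECHANGELOG is Liquibase's tracking table, and setting MD5SUM to NULL makes the tool quietly recompute it on the next run):

    update DATABASECHANGELOG
    set MD5SUM = null          -- or the new value reported by the tool
    where ID = 'migration_42_orders_rename'
      and AUTHOR = 'dev1';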


pros


  • Full (absolutely complete) control over the changes being rolled up — the same as in approach 3.
  • As in approach 2, it is very convenient to conduct code reviews and to look at the history of changes of every object.
  • Differences between environments (2bcde) are supported via variables. That is, you can make a per-environment bat file that sets the variable values used in the scripts, or mark changesets with labels that are filtered in or out depending on the environment.
  • Even a clean database can be created (item 4).

Minuses


  • The initial setup is complicated. But that is only an entry barrier: in day-to-day use there is no complexity — you just add a changeset script to the right file, and that is it.
  • In long-term development, when one table has to be changed many times, changesets with alter table statements will in some cases pile up. In most cases this will not happen, though, if you always write re-rollable changesets. This is a minus compared to approach 2 (where you can edit the create statements), but an improvement over approach 3, where in general all changes simply accumulate linearly.
  • The tool is maintained and developed rather poorly — it is written by one person as a hobby. But today's functionality is enough, and the tool is simple: you could write a similar one yourself in a month.

Problems the approach does not solve


Only item 3b, which in my view is optional. Victory.

