Zabbix: Backing up a small database

We'll skip the long introduction about the need for backups: we all know they have to be made. Anyone who actively uses Zabbix eventually thinks about how to restore the database if it gets corrupted or has to be moved to a new server. The best option for this is, of course, replication, but not every organization can afford it. I will show how we solved the Zabbix backup problem; the details follow below.

Warning: I do not consider the scheme presented below ideal. The article is not meant as a model to copy, but is written in the hope of getting constructive criticism and useful tips for improvement.

When I first faced the need to back up the Zabbix database, I did not look far and reached for the proven tool: mysqldump. It had served well with the databases of other web services such as Redmine, GLPI and Drupal, but for Zabbix it turned out to be completely unsuitable. Backups were made and no errors occurred, but then a rainy day came when a backup had to be restored. Restoring a relatively small database from the dump took about two days. Considering that the database at the time was, you might say, still in its infancy, the downtime would only grow in the future. That was totally unacceptable. That is when I turned to Percona XtraBackup.

Percona XtraBackup is an open-source tool that backs up MySQL databases without blocking them. Open Query, mozilla.org and Facebook use it, which suggests it is not some half-baked hobby project. I note that in my case it was not possible to run XtraBackup alongside a running zabbix-server, due to the low performance of the disk subsystem: errors occasionally occurred because XtraBackup could not keep up with the data Zabbix was actively writing. So I decided to stop the zabbix-server while the backup is being created. Today that takes about 6 minutes a day. Since all the especially critical triggers are tied to SNMP traps, which will still be processed once the zabbix-server starts again, this downtime is quite acceptable for me.

Then I thought about what an ideal backup would look like for me, and realized that the best backup is the one you do not worry about while still knowing it is running. I managed to achieve that result: I do not worry about backups. I do not log in to the server every morning with my fingers crossed to check the log, the file sizes and the remaining space on the partition, yet I know the backups are being made, because I receive all the necessary information by e-mail. This is done by a fairly simple script, which I want to share with the public. Perhaps it has problems I do not even suspect?

E-mail notifications are already configured in our Zabbix; ssmtp is used for this. There are plenty of how-tos on the Internet, and it is a separate topic anyway, so I will not dwell on it. Apart from Percona and ssmtp, nothing exotic is used: tar, gzip, sed and find are found in every distribution. For reliability, the backup files are duplicated to a remote server over NFS.
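For reference, a typical ssmtp setup boils down to a few lines in /etc/ssmtp/ssmtp.conf; the host names and credentials below are placeholders, not values from this setup:

```
# /etc/ssmtp/ssmtp.conf -- placeholder values, adjust for your relay
root=admin@host.com              # who receives mail for system users
mailhub=mail.host.com:587        # SMTP relay that actually sends the mail
AuthUser=zabbixsender@host.com
AuthPass=secret
UseSTARTTLS=YES
hostname=zabbix.host.com
FromLineOverride=YES             # let the script set its own From: header
```

FromLineOverride=YES matters here, because the script below builds its own To:/From:/Subject: headers with sed and hands the whole message to mail -t.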

The system on which it is used:


The script itself
#!/bin/bash
DAY=`date +%Y%m%d`
LOGFILE=/var/log/zabbix_backup.log
# E-mail address the report is sent to
EMAIL=admin@host.com
# The log file and the mail file are kept separate: the log is not overwritten
# every day and is periodically compressed or removed by logrotate
MAILFILE=/tmp/mailfile.tmp
# Directory holding all backups
BK_GLOBAL=/home/zabbix/backups
# Directory for the current backup
BK_DIR=$BK_GLOBAL/Zabbix_$DAY
#
# set_date stores the current date and time for log entries
set_date ()
{
DT=`date "+%y%m%d %H:%M:%S"`
}
#
mkdir $BK_DIR
set_date
echo -e "$DT Starting ZABBIX database backup" > $MAILFILE
service zabbix-server stop 2>>$MAILFILE
innobackupex --user=root --password=qwerty --no-timestamp $BK_DIR/xtra 2>&1 | tee /var/log/innobackupex.log | egrep "ERROR|innobackupex: completed OK" >>$MAILFILE
innobackupex --apply-log --use-memory=1000M $BK_DIR/xtra 2>&1 | tee -a /var/log/innobackupex.log | egrep "ERROR|innobackupex: completed OK" >>$MAILFILE
# Percona XtraBackup has an unpleasant habit of printing a lot of extra output.
# tee and egrep keep only the relevant lines in the mail; note tee -a on the
# second call, so it does not overwrite the log from the first one.
service zabbix-server start 2>>$MAILFILE
set_date
echo -e "$DT Database backup finished" >> $MAILFILE
set_date
echo -e "$DT Starting archiving" >> $MAILFILE
cd $BK_DIR
tar -cf $BK_DIR/zabbix_db_$DAY.tar ./xtra 2>>$MAILFILE
rm -rf $BK_DIR/xtra
cd /usr/share
tar -cf $BK_DIR/zabbix_files_$DAY.tar ./zabbix 2>>$MAILFILE
cd /etc
tar -cf $BK_DIR/zabbix_etc_$DAY.tar ./zabbix 2>>$MAILFILE
cd /
gzip $BK_DIR/zabbix_db_$DAY.tar 2>>$MAILFILE
gzip $BK_DIR/zabbix_files_$DAY.tar 2>>$MAILFILE
gzip $BK_DIR/zabbix_etc_$DAY.tar 2>>$MAILFILE
set_date
echo -e "$DT Archiving finished" >> $MAILFILE
# gzip has already replaced the .tar files with .tar.gz, nothing left to remove
set_date
echo -e "$DT Mounting the NFS directory" >> $MAILFILE
mount 192.168.1.30:/home/backups /mnt/nfs 2>>$MAILFILE
set_date
echo -e "$DT Started copying files over the network" >> $MAILFILE
mkdir /mnt/nfs/Zabbix_$DAY
cp $BK_DIR/zabbix_db_$DAY.tar.gz /mnt/nfs/Zabbix_$DAY 2>>$MAILFILE
cp $BK_DIR/zabbix_files_$DAY.tar.gz /mnt/nfs/Zabbix_$DAY 2>>$MAILFILE
cp $BK_DIR/zabbix_etc_$DAY.tar.gz /mnt/nfs/Zabbix_$DAY 2>>$MAILFILE
set_date
echo -e "$DT Finished copying files over the network" >> $MAILFILE
echo -e "$DT Removing old archives" >> $MAILFILE
find $BK_GLOBAL/* -type f -ctime +30 -exec rm -f {} \; 2>>$MAILFILE
find /mnt/nfs/* -type f -ctime +30 -exec rm -f {} \; 2>>$MAILFILE
find $BK_GLOBAL/* -type d -empty -delete 2>>$MAILFILE
find /mnt/nfs/* -type d -empty -delete 2>>$MAILFILE
set_date
echo -e "$DT Old archives removed" >> $MAILFILE
echo -e "$DT Unmounting the NFS directory\n" >> $MAILFILE
umount /mnt/nfs 2>>$MAILFILE
cat $MAILFILE >> $LOGFILE
sed -i -e "1 s/^/Subject: Zabbix backup log $DAY\n\n/;" $MAILFILE 2>>$LOGFILE
sed -i -e "1 s/^/From: zabbixsender@host.com\n/;" $MAILFILE 2>>$LOGFILE
sed -i -e "1 s/^/To: $EMAIL\n/;" $MAILFILE 2>>$LOGFILE
echo -e "\nContents of $BK_DIR:\n" >> $MAILFILE
ls -lh $BK_DIR >> $MAILFILE
echo -e "\nHard disk usage:\n" >> $MAILFILE
df -h >> $MAILFILE
cat $MAILFILE | mail -t 2>>$LOGFILE

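The article does not show how the script is scheduled; a nightly crontab entry along these lines would do (the script path and the 02:30 start time are my assumptions):

```
# /etc/crontab fragment: run the backup nightly at 02:30 as root
30 2 * * * root /usr/local/bin/zabbix_backup.sh
```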

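The script's comments mention that /var/log/zabbix_backup.log is compressed or removed by logrotate. A minimal rule for that could look like this (the retention settings are my choice, not from the article):

```
# /etc/logrotate.d/zabbix_backup -- hypothetical retention settings
/var/log/zabbix_backup.log {
    monthly
    rotate 6
    compress
    missingok
    notifempty
}
```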
Files from the /etc/zabbix and /usr/share/zabbix directories are copied in order to preserve the settings and the additional scripts used by Zabbix.
The text of the letter sent by the script


To restore the database from the resulting copy, it is enough to stop mysqld, replace the data directory with the one extracted from the archive, and start the MySQL daemon again. Estimate for yourself how long that takes; I am sure it is far less than two days.
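To make the directory-swap restore concrete, here is a sketch. The production commands (service names, /var/lib/mysql datadir) are assumptions and are shown commented out; the swap itself is demonstrated on throwaway directories so the snippet is safe to run:

```shell
# Sketch of the restore: stop MySQL, swap the datadir, start MySQL.
set -e
WORK=$(mktemp -d)                       # stand-in for /var/lib
mkdir -p "$WORK/mysql" "$WORK/xtra"
echo corrupted > "$WORK/mysql/ibdata1"  # the damaged datadir
echo restored  > "$WORK/xtra/ibdata1"   # datadir extracted from zabbix_db_*.tar.gz
# service mysql stop                    # (in production) stop the daemon first
mv "$WORK/mysql" "$WORK/mysql.broken"   # keep the damaged datadir just in case
mv "$WORK/xtra"  "$WORK/mysql"          # the prepared XtraBackup copy becomes the datadir
# chown -R mysql:mysql "$WORK/mysql"    # (in production) MySQL must own its datadir
# service mysql start
cat "$WORK/mysql/ibdata1"               # prints: restored
```

Since the copy was already prepared with innobackupex --apply-log during backup, no further log replay is needed before starting MySQL.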
May your backups always work and all your systems run stably.
