
read_buffer_size can break data replication
Translation of Miguel Angel Nieto's latest article, "read_buffer_size can break your replication".
There are some variables that can affect replication and sometimes cause a lot of trouble. In this post, I am going to talk about the read_buffer_size variable, and how this variable together with max_allowed_packet can break your replication.
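Before reproducing the problem, it is worth checking what both servers are currently using; the current values can be read at runtime with standard SHOW VARIABLES syntax (the prompt style follows the examples below):
MasterA (test) > SHOW GLOBAL VARIABLES WHERE Variable_name IN ('max_allowed_packet', 'read_buffer_size');
With the settings used in this test, that would show 33554432 (32M) for max_allowed_packet and 104857600 (100M) for read_buffer_size.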
Suppose we have a master-master replication setup with the following settings:
max_allowed_packet = 32M
read_buffer_size = 100M
To break replication, I am going to load about 4.5 million rows using LOAD DATA INFILE:
MasterA (test) > LOAD DATA INFILE '/tmp/data' INTO TABLE t;
Query OK, 4510080 rows affected (26.89 sec)
After some time, SHOW SLAVE STATUS on MasterA gives the following output:
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master;the first event 'mysql-bin.000002' at 74416925, the last event read from './mysql-bin.000004' at 171, the last byte read from './mysql-bin.000004' at 190.'
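The post does not show how the table or the /tmp/data file were created. A minimal way to produce a comparable load, assuming a simple single-column table (both the schema and the generator below are illustrative assumptions, and the resulting file size will differ from the original run):
masterA> seq 1 4510080 > /tmp/data    # one integer per line, ~4.5 million rows
MasterA (test) > CREATE TABLE t (id INT NOT NULL);  # hypothetical schema, not from the article
MasterA (test) > LOAD DATA INFILE '/tmp/data' INTO TABLE t;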
Very strange! We loaded the data on MasterA, and yet replication broke on MasterA itself, with an error about max_allowed_packet. The next step is to examine the binary logs on both masters:
MasterA:
masterA> mysqlbinlog data/mysql-bin.000004 | grep block_len
#Begin_load_query: file_id: 1 block_len: 33554432
#Append_block: file_id: 1 block_len: 33554432
#Append_block: file_id: 1 block_len: 33554432
#Append_block: file_id: 1 block_len: 4194304
#Append_block: file_id: 1 block_len: 33554432
#Append_block: file_id: 1 block_len: 10420608
There are no blocks larger than max_allowed_packet (33554432). MasterB:
masterB> mysqlbinlog data/mysql-bin.000004 | grep block_len
#Begin_load_query: file_id: 1 block_len: 33555308
#Append_block: file_id: 1 block_len: 33555308
#Append_block: file_id: 1 block_len: 33555308
#Append_block: file_id: 1 block_len: 4191676
#Append_block: file_id: 1 block_len: 33555308
#Append_block: file_id: 1 block_len: 10419732
Have you noticed the difference? 33555308 is larger than max_allowed_packet (33554432): MasterB wrote its blocks 876 bytes over the safe limit. When MasterA then tries to read that binary log from MasterB, replication breaks because the packets are too large. And no, replicate_same_server_id is not enabled.
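The 876-byte overage is easy to verify from the listings above; simple SELECT arithmetic (added here, not part of the original post) confirms that MasterB's large blocks exceed the 32M limit:
MasterA (test) > SELECT 32 * 1024 * 1024 AS max_allowed_packet;  -- 33554432
MasterA (test) > SELECT 33555308 - 33554432 AS overflow_bytes;   -- 876 bytes over the limit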
What is the connection between read_buffer_size and this bug?
Again, it is easier to show with an example than to explain in words. Let's set new values:
max_allowed_packet = 32M
read_buffer_size = 16M
We run LOAD DATA INFILE again, and now the mysqlbinlog output on both servers shows:
#Begin_load_query: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 14614912
The maximum block size is derived from read_buffer_size, so the blocks can now never be larger than max_allowed_packet. Therefore, it is worth remembering that a read_buffer_size greater than max_allowed_packet can break replication while importing data into MySQL. This bug exists in all versions from 5.0.x up to the latest 5.5.25, and the easiest way to work around it is not to set read_buffer_size higher than max_allowed_packet. It seems that bug #30435 has not actually been fixed.
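As a cross-check, both runs transferred exactly the same amount of data; only the chunking changed. Summing the block lengths from the two MasterA listings (arithmetic added here, not shown in the original post):
MasterA (test) > SELECT 4 * 33554432 + 4194304 + 10420608 AS first_run_bytes;  -- 148832640
MasterA (test) > SELECT 8 * 16777216 + 14614912 AS second_run_bytes;           -- 148832640, the same total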
And do not forget that large values of read_buffer_size do not improve performance (this topic is covered in a separate article, also available in translation).
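To close, a my.cnf fragment along the lines of the workaround above (a sketch; adjust the sizes to your workload, keeping read_buffer_size at or below max_allowed_packet):
[mysqld]
# read_buffer_size must not exceed max_allowed_packet, otherwise
# LOAD DATA INFILE can write binlog blocks larger than the limit
max_allowed_packet = 32M
read_buffer_size = 16M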