Added options --auto-increment-increment and --auto-increment-offset.

This allows one to set up master <-> master replication with non-conflicting auto-increment series.
Cleaned up binary log code to make it easier to add new state variables.
Added simpler 'upper level' logic for artificial events (events that should not cause cleanups on slave).
Simplified binary log handling.
Changed how auto_increment works with SET INSERT_ID=# to make it more predictable: the inserted rows in a multi-row statement are now numbered independently of the existing rows in the table. (Before, only InnoDB did this correctly.)
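The two new variables define an arithmetic series per server. As an illustrative sketch (the helper name below is made up; the real computation this commit adds lives in sql/handler.cc's update_auto_increment()), the next generated value is the smallest member of the series offset, offset+increment, offset+2*increment, ... that is larger than the current maximum:

```python
def next_auto_increment(current_max: int, increment: int, offset: int) -> int:
    """Sketch of the new series logic: smallest value > current_max that
    lies on the series offset, offset + increment, offset + 2*increment, ...
    (Hypothetical helper; mirrors the handler.cc computation.)"""
    if increment == 1:
        return current_max + 1
    # Round (current_max - offset) up to the next multiple of increment.
    return ((current_max + increment - offset) // increment) * increment + offset

# Two masters with increment=2 and offsets 1 and 2 draw from disjoint
# series (1,3,5,... and 2,4,6,...), so their inserts never collide
# when each replicates from the other.
```
With auto_increment_increment=100 and auto_increment_offset=10 (as in the new rpl_auto_increment test) an empty table yields 10, then 110 after 10, and 310 after a user-supplied 250.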




mysql-test/r/mix_innodb_myisam_binlog.result:
  Disabled the End_log_pos column in 'show binlog events' output, as it now differs from before
mysql-test/t/mix_innodb_myisam_binlog.test:
  Disabled the End_log_pos column in 'show binlog events' output, as it now differs from before
sql/ha_berkeley.cc:
  Changed prototype for get_auto_increment()
sql/ha_berkeley.h:
  Changed prototype for get_auto_increment()
sql/ha_heap.cc:
  Changed prototype for get_auto_increment()
sql/ha_heap.h:
  Changed prototype for get_auto_increment()
sql/ha_innodb.cc:
  Change how auto-increment is calculated.
  Now the auto-increment logic is done in 'update_auto_increment()' to ensure that all handlers have the same auto-increment usage
sql/ha_innodb.h:
  Changed prototype for get_auto_increment()
sql/ha_myisam.cc:
  Changed prototype for get_auto_increment()
sql/ha_myisam.h:
  Changed prototype for get_auto_increment()
sql/ha_ndbcluster.cc:
  Changed prototype for get_auto_increment()
sql/ha_ndbcluster.h:
  Changed prototype for get_auto_increment()
sql/handler.cc:
  Remove some usage of current_thd
  Changed how auto_increment works with SET INSERT_ID to make it more predictable
  (Now we should generate the same auto-increment series on a slave, even if the table has rows that were not on the master.)
  Use auto_increment_increment and auto_increment_offset
sql/handler.h:
  Changed prototype for get_auto_increment()
sql/log.cc:
  Remove usage of 'set_log_pos()' to make code simpler. (Now log_pos is set in write_header())
  Use 'data_written' instead of 'get_event_len()' to calculate how much data was written in the log
sql/log_event.cc:
  Simple optimizations.
  Remove cached_event_len (not used variable)
  Made comments fit into 79 chars
  Removed Log_event::set_log_pos(). Now we calculate log_pos in write_header().
  Renamed write_data() to write() as the original write() function was not needed anymore.
  Call writing of event header from event::write() functions. This made it easier to calculate the length of an event.
  Simplified 'write_header' and remove 'switches' from it.
  Changed all write() functions to return 'bool'. (The previous return values were not consistent)
  Store auto_increment_increment and auto_increment_offset in binary log
  Simplified how Query_log_events are written and read. Now it's much easier to add new status variables for a query event to the binary log.
  Removed some old MySQL 4.x code to make it easier to grep for functions used in 5.0
sql/log_event.h:
  Changed return type of write() functions to bool. (Before we returned -1 or 1 for errors)
  write_data() -> write()
  Added 'data_written' member to make it easier to get length of written event.
  Removed 'cached_event_len' and 'get_event_len()'
  Added usage of auto_increment_increment and auto_increment_offset
  Added 'artificial_event' to Start_log_event_v3, to hide the logic that log_pos=0 in the binary log is used as a flag for an artificial event.
sql/mysqld.cc:
  Added options --auto-increment-increment and --auto-increment-offset
sql/set_var.cc:
  Added variables auto_increment_increment and auto_increment_offset
sql/slave.cc:
  Changed errors -> warnings & information (in error log)
sql/sql_class.cc:
  Added THD::cleanup_after_query(). This makes some code simpler and allows us to clean up 'next_insert_id' after query
sql/sql_class.h:
  Added new auto_increment_xxx variables
  Moved some functions/variables in THD class
sql/sql_help.cc:
  Removed compiler warning
sql/sql_insert.cc:
  Call 'restore_auto_increment()' if a row was not inserted.
  This makes it easier for the handler to reuse the last generated auto-increment value that was not used (for example, in case of a duplicate key)
sql/sql_parse.cc:
  Use cleanup_after_query()
sql/sql_prepare.cc:
  Use cleanup_after_query()
sql/sql_table.cc:
  R
unknown 2004-09-15 22:10:31 +03:00
parent b15004a800
commit ffc0d185da
30 changed files with 1176 additions and 817 deletions

--- a/mysql-test/r/mix_innodb_myisam_binlog.result
+++ b/mysql-test/r/mix_innodb_myisam_binlog.result

@@ -8,10 +8,10 @@ insert into t2 select * from t1;
 commit;
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 157 use `test`; BEGIN
-master-bin.000001 157 Query 1 176 use `test`; insert into t1 values(1)
-master-bin.000001 238 Query 1 183 use `test`; insert into t2 select * from t1
-master-bin.000001 326 Query 1 389 use `test`; COMMIT
+master-bin.000001 95 Query 1 # use `test`; BEGIN
+master-bin.000001 157 Query 1 # use `test`; insert into t1 values(1)
+master-bin.000001 238 Query 1 # use `test`; insert into t2 select * from t1
+master-bin.000001 326 Query 1 # use `test`; COMMIT
 delete from t1;
 delete from t2;
 reset master;
@@ -23,10 +23,10 @@ Warnings:
 Warning 1196 Some non-transactional changed tables couldn't be rolled back
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 157 use `test`; BEGIN
-master-bin.000001 157 Query 1 176 use `test`; insert into t1 values(2)
-master-bin.000001 238 Query 1 183 use `test`; insert into t2 select * from t1
-master-bin.000001 326 Query 1 391 use `test`; ROLLBACK
+master-bin.000001 95 Query 1 # use `test`; BEGIN
+master-bin.000001 157 Query 1 # use `test`; insert into t1 values(2)
+master-bin.000001 238 Query 1 # use `test`; insert into t2 select * from t1
+master-bin.000001 326 Query 1 # use `test`; ROLLBACK
 delete from t1;
 delete from t2;
 reset master;
@@ -41,13 +41,13 @@ Warning 1196 Some non-transactional changed tables couldn't be rolled back
 commit;
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 157 use `test`; BEGIN
-master-bin.000001 157 Query 1 176 use `test`; insert into t1 values(3)
-master-bin.000001 238 Query 1 174 use `test`; savepoint my_savepoint
-master-bin.000001 317 Query 1 176 use `test`; insert into t1 values(4)
-master-bin.000001 398 Query 1 183 use `test`; insert into t2 select * from t1
-master-bin.000001 486 Query 1 186 use `test`; rollback to savepoint my_savepoint
-master-bin.000001 577 Query 1 640 use `test`; COMMIT
+master-bin.000001 95 Query 1 # use `test`; BEGIN
+master-bin.000001 157 Query 1 # use `test`; insert into t1 values(3)
+master-bin.000001 238 Query 1 # use `test`; savepoint my_savepoint
+master-bin.000001 317 Query 1 # use `test`; insert into t1 values(4)
+master-bin.000001 398 Query 1 # use `test`; insert into t2 select * from t1
+master-bin.000001 486 Query 1 # use `test`; rollback to savepoint my_savepoint
+master-bin.000001 577 Query 1 # use `test`; COMMIT
 delete from t1;
 delete from t2;
 reset master;
@@ -67,14 +67,14 @@ a
 7
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 157 use `test`; BEGIN
-master-bin.000001 157 Query 1 176 use `test`; insert into t1 values(5)
-master-bin.000001 238 Query 1 174 use `test`; savepoint my_savepoint
-master-bin.000001 317 Query 1 176 use `test`; insert into t1 values(6)
-master-bin.000001 398 Query 1 183 use `test`; insert into t2 select * from t1
-master-bin.000001 486 Query 1 186 use `test`; rollback to savepoint my_savepoint
-master-bin.000001 577 Query 1 176 use `test`; insert into t1 values(7)
-master-bin.000001 658 Query 1 721 use `test`; COMMIT
+master-bin.000001 95 Query 1 # use `test`; BEGIN
+master-bin.000001 157 Query 1 # use `test`; insert into t1 values(5)
+master-bin.000001 238 Query 1 # use `test`; savepoint my_savepoint
+master-bin.000001 317 Query 1 # use `test`; insert into t1 values(6)
+master-bin.000001 398 Query 1 # use `test`; insert into t2 select * from t1
+master-bin.000001 486 Query 1 # use `test`; rollback to savepoint my_savepoint
+master-bin.000001 577 Query 1 # use `test`; insert into t1 values(7)
+master-bin.000001 658 Query 1 # use `test`; COMMIT
 delete from t1;
 delete from t2;
 reset master;
@@ -89,10 +89,10 @@ get_lock("a",10)
 1
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 157 use `test`; BEGIN
-master-bin.000001 157 Query 1 176 use `test`; insert into t1 values(8)
-master-bin.000001 238 Query 1 183 use `test`; insert into t2 select * from t1
-master-bin.000001 326 Query 1 391 use `test`; ROLLBACK
+master-bin.000001 95 Query 1 # use `test`; BEGIN
+master-bin.000001 157 Query 1 # use `test`; insert into t1 values(8)
+master-bin.000001 238 Query 1 # use `test`; insert into t2 select * from t1
+master-bin.000001 326 Query 1 # use `test`; ROLLBACK
 delete from t1;
 delete from t2;
 reset master;
@@ -100,8 +100,8 @@ insert into t1 values(9);
 insert into t2 select * from t1;
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 176 use `test`; insert into t1 values(9)
-master-bin.000001 176 Query 1 264 use `test`; insert into t2 select * from t1
+master-bin.000001 95 Query 1 # use `test`; insert into t1 values(9)
+master-bin.000001 176 Query 1 # use `test`; insert into t2 select * from t1
 delete from t1;
 delete from t2;
 reset master;
@@ -110,17 +110,17 @@ begin;
 insert into t2 select * from t1;
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 177 use `test`; insert into t1 values(10)
-master-bin.000001 177 Query 1 265 use `test`; insert into t2 select * from t1
+master-bin.000001 95 Query 1 # use `test`; insert into t1 values(10)
+master-bin.000001 177 Query 1 # use `test`; insert into t2 select * from t1
 insert into t1 values(11);
 commit;
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 177 use `test`; insert into t1 values(10)
-master-bin.000001 177 Query 1 265 use `test`; insert into t2 select * from t1
-master-bin.000001 265 Query 1 327 use `test`; BEGIN
-master-bin.000001 327 Query 1 347 use `test`; insert into t1 values(11)
-master-bin.000001 409 Query 1 472 use `test`; COMMIT
+master-bin.000001 95 Query 1 # use `test`; insert into t1 values(10)
+master-bin.000001 177 Query 1 # use `test`; insert into t2 select * from t1
+master-bin.000001 265 Query 1 # use `test`; BEGIN
+master-bin.000001 327 Query 1 # use `test`; insert into t1 values(11)
+master-bin.000001 409 Query 1 # use `test`; COMMIT
 alter table t2 engine=INNODB;
 delete from t1;
 delete from t2;
@@ -131,10 +131,10 @@ insert into t2 select * from t1;
 commit;
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 157 use `test`; BEGIN
-master-bin.000001 157 Query 1 177 use `test`; insert into t1 values(12)
-master-bin.000001 239 Query 1 183 use `test`; insert into t2 select * from t1
-master-bin.000001 327 Query 1 390 use `test`; COMMIT
+master-bin.000001 95 Query 1 # use `test`; BEGIN
+master-bin.000001 157 Query 1 # use `test`; insert into t1 values(12)
+master-bin.000001 239 Query 1 # use `test`; insert into t2 select * from t1
+master-bin.000001 327 Query 1 # use `test`; COMMIT
 delete from t1;
 delete from t2;
 reset master;
@@ -156,9 +156,9 @@ rollback to savepoint my_savepoint;
 commit;
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 157 use `test`; BEGIN
-master-bin.000001 157 Query 1 177 use `test`; insert into t1 values(14)
-master-bin.000001 239 Query 1 302 use `test`; COMMIT
+master-bin.000001 95 Query 1 # use `test`; BEGIN
+master-bin.000001 157 Query 1 # use `test`; insert into t1 values(14)
+master-bin.000001 239 Query 1 # use `test`; COMMIT
 delete from t1;
 delete from t2;
 reset master;
@@ -176,8 +176,8 @@ a
 18
 show binlog events from 95;
 Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 95 Query 1 157 use `test`; BEGIN
-master-bin.000001 157 Query 1 177 use `test`; insert into t1 values(16)
-master-bin.000001 239 Query 1 177 use `test`; insert into t1 values(18)
-master-bin.000001 321 Query 1 384 use `test`; COMMIT
+master-bin.000001 95 Query 1 # use `test`; BEGIN
+master-bin.000001 157 Query 1 # use `test`; insert into t1 values(16)
+master-bin.000001 239 Query 1 # use `test`; insert into t1 values(18)
+master-bin.000001 321 Query 1 # use `test`; COMMIT
 drop table t1,t2;

--- /dev/null
+++ b/mysql-test/r/rpl_auto_increment.result

@@ -0,0 +1,185 @@
stop slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
start slave;
create table t1 (a int not null auto_increment,b int, primary key (a)) engine=myisam auto_increment=3;
insert into t1 values (NULL,1),(NULL,2),(NULL,3);
select * from t1;
a b
12 1
22 2
32 3
select * from t1;
a b
12 1
22 2
32 3
drop table t1;
create table t1 (a int not null auto_increment,b int, primary key (a)) engine=myisam;
insert into t1 values (1,1),(NULL,2),(3,3),(NULL,4);
delete from t1 where b=4;
insert into t1 values (NULL,5),(NULL,6);
select * from t1;
a b
1 1
2 2
3 3
22 5
32 6
select * from t1;
a b
1 1
2 2
3 3
22 5
32 6
drop table t1;
set @@session.auto_increment_increment=100, @@session.auto_increment_offset=10;
show variables like "%auto%";
Variable_name Value
auto_incrememt_increment 100
auto_increment_offset 10
create table t1 (a int not null auto_increment, primary key (a)) engine=myisam;
insert into t1 values (NULL),(5),(NULL);
insert into t1 values (250),(NULL);
select * from t1;
a
5
10
110
250
310
insert into t1 values (1000);
set @@insert_id=400;
insert into t1 values(NULL),(NULL);
select * from t1;
a
5
10
110
250
310
400
410
1000
select * from t1;
a
5
10
110
250
310
400
410
1000
drop table t1;
create table t1 (a int not null auto_increment, primary key (a)) engine=innodb;
insert into t1 values (NULL),(5),(NULL);
insert into t1 values (250),(NULL);
select * from t1;
a
5
10
110
250
310
insert into t1 values (1000);
set @@insert_id=400;
insert into t1 values(NULL),(NULL);
select * from t1;
a
5
10
110
250
310
400
410
1000
select * from t1;
a
5
10
110
250
310
400
410
1000
drop table t1;
set @@session.auto_increment_increment=1, @@session.auto_increment_offset=1;
create table t1 (a int not null auto_increment, primary key (a)) engine=myisam;
insert into t1 values (NULL),(5),(NULL),(NULL);
insert into t1 values (500),(NULL),(502),(NULL),(NULL);
select * from t1;
a
1
5
6
7
500
501
502
503
504
set @@insert_id=600;
insert into t1 values(600),(NULL),(NULL);
ERROR 23000: Duplicate entry '600' for key 1
set @@insert_id=600;
insert ignore into t1 values(600),(NULL),(NULL),(610),(NULL);
select * from t1;
a
1
5
6
7
500
501
502
503
504
600
610
611
select * from t1;
a
1
5
6
7
500
501
502
503
504
600
610
611
drop table t1;
set @@session.auto_increment_increment=10, @@session.auto_increment_offset=1;
create table t1 (a int not null auto_increment, primary key (a)) engine=myisam;
insert into t1 values(2),(12),(22),(32),(42);
insert into t1 values (NULL),(NULL);
insert into t1 values (3),(NULL),(NULL);
select * from t1;
a
1
3
11
21
31
select * from t1;
a
1
2
3
11
12
21
22
31
32
42
drop table t1;

--- a/mysql-test/t/mix_innodb_myisam_binlog.test
+++ b/mysql-test/t/mix_innodb_myisam_binlog.test

@@ -25,6 +25,7 @@ insert into t1 values(1);
 insert into t2 select * from t1;
 commit;
+--replace_column 5 #
 show binlog events from 95;
 delete from t1;
@@ -37,6 +38,7 @@ insert into t2 select * from t1;
 # should say some changes to non-transactional tables couldn't be rolled back
 rollback;
+--replace_column 5 #
 show binlog events from 95;
 delete from t1;
@@ -51,6 +53,7 @@ insert into t2 select * from t1;
 rollback to savepoint my_savepoint;
 commit;
+--replace_column 5 #
 show binlog events from 95;
 delete from t1;
@@ -67,6 +70,7 @@ insert into t1 values(7);
 commit;
 select a from t1 order by a; # check that savepoints work :)
+--replace_column 5 #
 show binlog events from 95;
 # and when ROLLBACK is not explicit?
@@ -87,6 +91,7 @@ connection con2;
 # so SHOW BINLOG EVENTS may come before con1 does the loggin. To be sure that
 # logging has been done, we use a user lock.
 select get_lock("a",10);
+--replace_column 5 #
 show binlog events from 95;
 # and when not in a transaction?
@@ -97,6 +102,7 @@ reset master;
 insert into t1 values(9);
 insert into t2 select * from t1;
+--replace_column 5 #
 show binlog events from 95;
 # Check that when the query updating the MyISAM table is the first in the
@@ -108,10 +114,12 @@ reset master;
 insert into t1 values(10); # first make t1 non-empty
 begin;
 insert into t2 select * from t1;
+--replace_column 5 #
 show binlog events from 95;
 insert into t1 values(11);
 commit;
+--replace_column 5 #
 show binlog events from 95;
@@ -129,6 +137,7 @@ insert into t1 values(12);
 insert into t2 select * from t1;
 commit;
+--replace_column 5 #
 show binlog events from 95;
 delete from t1;
@@ -140,6 +149,7 @@ insert into t1 values(13);
 insert into t2 select * from t1;
 rollback;
+--replace_column 5 #
 show binlog events from 95;
 delete from t1;
@@ -154,6 +164,7 @@ insert into t2 select * from t1;
 rollback to savepoint my_savepoint;
 commit;
+--replace_column 5 #
 show binlog events from 95;
 delete from t1;
@@ -170,6 +181,7 @@ insert into t1 values(18);
 commit;
 select a from t1 order by a; # check that savepoints work :)
+--replace_column 5 #
 show binlog events from 95;
 drop table t1,t2;

--- /dev/null
+++ b/mysql-test/t/rpl_auto_increment-slave.opt

@@ -0,0 +1 @@
--auto-increment-increment=10 --auto-increment-offset=2

--- /dev/null
+++ b/mysql-test/t/rpl_auto_increment.test

@@ -0,0 +1,104 @@
#
# Test of auto_increment with offset
#
source include/have_innodb.inc;
source include/master-slave.inc;
create table t1 (a int not null auto_increment,b int, primary key (a)) engine=myisam auto_increment=3;
insert into t1 values (NULL,1),(NULL,2),(NULL,3);
select * from t1;
sync_slave_with_master;
select * from t1;
connection master;
drop table t1;
create table t1 (a int not null auto_increment,b int, primary key (a)) engine=myisam;
insert into t1 values (1,1),(NULL,2),(3,3),(NULL,4);
delete from t1 where b=4;
insert into t1 values (NULL,5),(NULL,6);
select * from t1;
sync_slave_with_master;
select * from t1;
connection master;
drop table t1;
set @@session.auto_increment_increment=100, @@session.auto_increment_offset=10;
show variables like "%auto%";
create table t1 (a int not null auto_increment, primary key (a)) engine=myisam;
# Insert with 2 insert statements to get better testing of logging
insert into t1 values (NULL),(5),(NULL);
insert into t1 values (250),(NULL);
select * from t1;
insert into t1 values (1000);
set @@insert_id=400;
insert into t1 values(NULL),(NULL);
select * from t1;
sync_slave_with_master;
select * from t1;
connection master;
drop table t1;
#
# Same test with innodb (as the innodb code is a bit different)
#
create table t1 (a int not null auto_increment, primary key (a)) engine=innodb;
# Insert with 2 insert statements to get better testing of logging
insert into t1 values (NULL),(5),(NULL);
insert into t1 values (250),(NULL);
select * from t1;
insert into t1 values (1000);
set @@insert_id=400;
insert into t1 values(NULL),(NULL);
select * from t1;
sync_slave_with_master;
select * from t1;
connection master;
drop table t1;
set @@session.auto_increment_increment=1, @@session.auto_increment_offset=1;
create table t1 (a int not null auto_increment, primary key (a)) engine=myisam;
# Insert with 2 insert statements to get better testing of logging
insert into t1 values (NULL),(5),(NULL),(NULL);
insert into t1 values (500),(NULL),(502),(NULL),(NULL);
select * from t1;
set @@insert_id=600;
--error 1062
insert into t1 values(600),(NULL),(NULL);
set @@insert_id=600;
insert ignore into t1 values(600),(NULL),(NULL),(610),(NULL);
select * from t1;
sync_slave_with_master;
select * from t1;
connection master;
drop table t1;
#
# Test that auto-increment works when slave has rows in the table
#
set @@session.auto_increment_increment=10, @@session.auto_increment_offset=1;
create table t1 (a int not null auto_increment, primary key (a)) engine=myisam;
sync_slave_with_master;
insert into t1 values(2),(12),(22),(32),(42);
connection master;
insert into t1 values (NULL),(NULL);
insert into t1 values (3),(NULL),(NULL);
select * from t1;
sync_slave_with_master;
select * from t1;
connection master;
drop table t1;
# End cleanup
sync_slave_with_master;

--- a/sql/ha_berkeley.cc
+++ b/sql/ha_berkeley.cc

@@ -2089,9 +2089,9 @@ ha_rows ha_berkeley::records_in_range(uint keynr, key_range *start_key,
 }
-longlong ha_berkeley::get_auto_increment()
+ulonglong ha_berkeley::get_auto_increment()
 {
-  longlong nr=1; // Default if error or new key
+  ulonglong nr=1; // Default if error or new key
   int error;
   (void) ha_berkeley::extra(HA_EXTRA_KEYREAD);
@@ -2140,7 +2140,7 @@ longlong ha_berkeley::get_auto_increment()
     }
   }
   if (!error)
-    nr=(longlong)
+    nr=(ulonglong)
      table->next_number_field->val_int_offset(table->rec_buff_length)+1;
   ha_berkeley::index_end();
   (void) ha_berkeley::extra(HA_EXTRA_NO_KEYREAD);

--- a/sql/ha_berkeley.h
+++ b/sql/ha_berkeley.h

@@ -153,7 +153,7 @@ class ha_berkeley: public handler
     int5store(to,share->auto_ident);
     pthread_mutex_unlock(&share->mutex);
   }
-  longlong get_auto_increment();
+  ulonglong get_auto_increment();
   void print_error(int error, myf errflag);
   uint8 table_cache_type() { return HA_CACHE_TBL_TRANSACT; }
   bool primary_key_is_clustered() { return true; }

--- a/sql/ha_heap.cc
+++ b/sql/ha_heap.cc

@@ -484,7 +484,7 @@ void ha_heap::update_create_info(HA_CREATE_INFO *create_info)
     create_info->auto_increment_value= auto_increment_value;
 }
-longlong ha_heap::get_auto_increment()
+ulonglong ha_heap::get_auto_increment()
 {
   ha_heap::info(HA_STATUS_AUTO);
   return auto_increment_value;

--- a/sql/ha_heap.h
+++ b/sql/ha_heap.h

@@ -62,7 +62,7 @@ class ha_heap: public handler
   int write_row(byte * buf);
   int update_row(const byte * old_data, byte * new_data);
   int delete_row(const byte * buf);
-  longlong get_auto_increment();
+  ulonglong get_auto_increment();
   int index_read(byte * buf, const byte * key,
                  uint key_len, enum ha_rkey_function find_flag);
   int index_read_idx(byte * buf, uint idx, const byte * key,

--- a/sql/ha_innodb.cc
+++ b/sql/ha_innodb.cc

@@ -1627,8 +1627,6 @@ ha_innobase::open(
 		}
 	}
-	auto_inc_counter_for_this_stat = 0;
-
 	block_size = 16 * 1024;	/* Index block size in InnoDB: used by MySQL
				   in query optimization */
@@ -2198,7 +2196,7 @@ ha_innobase::write_row(
 	longlong dummy;
 	ibool incremented_auto_inc_for_stat = FALSE;
 	ibool incremented_auto_inc_counter = FALSE;
-	ibool skip_auto_inc_decr;
+	ibool skip_auto_inc_decr, auto_inc_used= FALSE;
 	DBUG_ENTER("ha_innobase::write_row");
@@ -2260,98 +2258,13 @@ ha_innobase::write_row(
 			prebuilt->sql_stat_start = TRUE;
 		}
-		/* Fetch the value the user possibly has set in the
-		autoincrement field */
-
-		auto_inc = table->next_number_field->val_int();
-
-		/* In replication and also otherwise the auto-inc column
-		can be set with SET INSERT_ID. Then we must look at
-		user_thd->next_insert_id. If it is nonzero and the user
-		has not supplied a value, we must use it, and use values
-		incremented by 1 in all subsequent inserts within the
-		same SQL statement! */
-
-		if (auto_inc == 0 && user_thd->next_insert_id != 0) {
-			auto_inc = user_thd->next_insert_id;
-			auto_inc_counter_for_this_stat = auto_inc;
-		}
-
-		if (auto_inc == 0 && auto_inc_counter_for_this_stat) {
-			/* The user set the auto-inc counter for
-			this SQL statement with SET INSERT_ID. We must
-			assign sequential values from the counter. */
-			auto_inc_counter_for_this_stat++;
-			incremented_auto_inc_for_stat = TRUE;
-			auto_inc = auto_inc_counter_for_this_stat;
-			/* We give MySQL a new value to place in the
-			auto-inc column */
-			user_thd->next_insert_id = auto_inc;
-		}
-
-		if (auto_inc != 0) {
-			/* This call will calculate the max of the current
-			value and the value supplied by the user and
-			update the counter accordingly */
-			/* We have to use the transactional lock mechanism
-			on the auto-inc counter of the table to ensure
-			that replication and roll-forward of the binlog
-			exactly imitates also the given auto-inc values.
-			The lock is released at each SQL statement's
-			end. */
-			innodb_srv_conc_enter_innodb(prebuilt->trx);
-			error = row_lock_table_autoinc_for_mysql(prebuilt);
-			innodb_srv_conc_exit_innodb(prebuilt->trx);
-			if (error != DB_SUCCESS) {
-				error = convert_error_code_to_mysql(error,
-								user_thd);
-				goto func_exit;
-			}
-			dict_table_autoinc_update(prebuilt->table, auto_inc);
-		} else {
-			innodb_srv_conc_enter_innodb(prebuilt->trx);
-			if (!prebuilt->trx->auto_inc_lock) {
-				error = row_lock_table_autoinc_for_mysql(
-					prebuilt);
-				if (error != DB_SUCCESS) {
-					innodb_srv_conc_exit_innodb(
-						prebuilt->trx);
-					error = convert_error_code_to_mysql(
-						error, user_thd);
-					goto func_exit;
-				}
-			}
-			/* The following call gets the value of the auto-inc
-			counter of the table and increments it by 1 */
-			auto_inc = dict_table_autoinc_get(prebuilt->table);
-			incremented_auto_inc_counter = TRUE;
-			innodb_srv_conc_exit_innodb(prebuilt->trx);
-			/* We can give the new value for MySQL to place in
-			the field */
-			user_thd->next_insert_id = auto_inc;
-		}
-
-		/* This call of a handler.cc function places
-		user_thd->next_insert_id to the column value, if the column
-		value was not set by the user */
+		/*
+		  We must use the handler code to update the auto-increment
+		  value to be sure that increment it correctly.
+		*/
 		update_auto_increment();
+		auto_inc_used= 1;
 	}
 	if (prebuilt->mysql_template == NULL
@@ -2366,42 +2279,37 @@ ha_innobase::write_row(
 	error = row_insert_for_mysql((byte*) record, prebuilt);
+	if (error == DB_SUCCESS && auto_inc_used) {
+		/* Fetch the value that was set in the autoincrement field */
+		auto_inc = table->next_number_field->val_int();
+		if (auto_inc != 0) {
+			/* This call will calculate the max of the current
+			value and the value supplied by the user and
+			update the counter accordingly */
+			/* We have to use the transactional lock mechanism
+			on the auto-inc counter of the table to ensure
+			that replication and roll-forward of the binlog
+			exactly imitates also the given auto-inc values.
+			The lock is released at each SQL statement's
+			end. */
+			error = row_lock_table_autoinc_for_mysql(prebuilt);
+			if (error != DB_SUCCESS) {
+				error = convert_error_code_to_mysql(error, user_thd);
+				goto func_exit;
+			}
+			dict_table_autoinc_update(prebuilt->table, auto_inc);
+		}
+	}
 	innodb_srv_conc_exit_innodb(prebuilt->trx);
-	if (error != DB_SUCCESS) {
-		/* If the insert did not succeed we restore the value of
-		the auto-inc counter we used; note that this behavior was
-		introduced only in version 4.0.4.
-		NOTE that a REPLACE command handles a duplicate key error
-		itself, and we must not decrement the autoinc counter
-		if we are performing a REPLACE statement.
-		NOTE 2: if there was an error, for example a deadlock,
-		which caused InnoDB to roll back the whole transaction
-		already in the call of row_insert_for_mysql(), we may no
-		longer have the AUTO-INC lock, and cannot decrement
-		the counter here. */
-		skip_auto_inc_decr = FALSE;
-		if (error == DB_DUPLICATE_KEY
-		    && (user_thd->lex->sql_command == SQLCOM_REPLACE
-			|| user_thd->lex->sql_command
-			   == SQLCOM_REPLACE_SELECT)) {
-			skip_auto_inc_decr= TRUE;
-		}
-		if (!skip_auto_inc_decr && incremented_auto_inc_counter
-		    && prebuilt->trx->auto_inc_lock) {
-			dict_table_autoinc_decrement(prebuilt->table);
-		}
-		if (!skip_auto_inc_decr && incremented_auto_inc_for_stat
-		    && prebuilt->trx->auto_inc_lock) {
-			auto_inc_counter_for_this_stat--;
-		}
-	}
 	error = convert_error_code_to_mysql(error, user_thd);
 	/* Tell InnoDB server that there might be work for
@@ -2412,6 +2320,7 @@ func_exit:
 	DBUG_RETURN(error);
 }
+
 /******************************************************************
 Converts field data for storage in an InnoDB update vector. */
 inline
@@ -5217,7 +5126,7 @@ initialized yet. This function does not change the value of the auto-inc
 counter if it already has been initialized. Returns the value of the
 auto-inc counter. */
-longlong
+ulonglong
 ha_innobase::get_auto_increment()
 /*=============================*/
 			/* out: auto-increment column value, -1 if error
@@ -5230,10 +5139,10 @@ ha_innobase::get_auto_increment()
 	if (error) {
-		return(-1);
+		return(~(ulonglong) 0);
 	}
-	return(nr);
+	return((ulonglong) nr);
 }
 /***********************************************************************

--- a/sql/ha_innodb.h
+++ b/sql/ha_innodb.h
@@ -164,7 +164,7 @@ class ha_innobase: public handler
 	THR_LOCK_DATA **store_lock(THD *thd, THR_LOCK_DATA **to,
 				   enum thr_lock_type lock_type);
 	void init_table_handle_for_HANDLER();
-	longlong get_auto_increment();
+	ulonglong get_auto_increment();
 	uint8 table_cache_type() { return HA_CACHE_TBL_ASKTRANSACT; }
 	static char *get_mysql_bin_log_name();
 	static ulonglong get_mysql_bin_log_pos();

--- a/sql/ha_myisam.cc
+++ b/sql/ha_myisam.cc
@@ -1520,8 +1520,12 @@ int ha_myisam::rename_table(const char * from, const char * to)
 }
 
-longlong ha_myisam::get_auto_increment()
+ulonglong ha_myisam::get_auto_increment()
 {
+  ulonglong nr;
+  int error;
+  byte key[MI_MAX_KEY_LENGTH];
+
   if (!table->next_number_key_offset)
   {						// Autoincrement at key-start
     ha_myisam::info(HA_STATUS_AUTO);
@@ -1531,19 +1535,16 @@ longlong ha_myisam::get_auto_increment()
   /* it's safe to call the following if bulk_insert isn't on */
   mi_flush_bulk_insert(file, table->next_number_index);
 
-  longlong nr;
-  int error;
-  byte key[MI_MAX_KEY_LENGTH];
-
   (void) extra(HA_EXTRA_KEYREAD);
   key_copy(key,table,table->next_number_index,
            table->next_number_key_offset);
   error=mi_rkey(file,table->record[1],(int) table->next_number_index,
                 key,table->next_number_key_offset,HA_READ_PREFIX_LAST);
   if (error)
-    nr=1;
+    nr= 1;
   else
-    nr=(longlong)
-      table->next_number_field->val_int_offset(table->rec_buff_length)+1;
+    nr= ((ulonglong) table->next_number_field->
+         val_int_offset(table->rec_buff_length)+1);
   extra(HA_EXTRA_NO_KEYREAD);
   return nr;
 }

--- a/sql/ha_myisam.h
+++ b/sql/ha_myisam.h
@@ -111,7 +111,7 @@ class ha_myisam: public handler
   int create(const char *name, TABLE *form, HA_CREATE_INFO *create_info);
   THR_LOCK_DATA **store_lock(THD *thd, THR_LOCK_DATA **to,
 			     enum thr_lock_type lock_type);
-  longlong get_auto_increment();
+  ulonglong get_auto_increment();
   int rename_table(const char * from, const char * to);
   int delete_table(const char *name);
   int check(THD* thd, HA_CHECK_OPT* check_opt);

--- a/sql/ha_ndbcluster.cc
+++ b/sql/ha_ndbcluster.cc
@@ -3099,19 +3099,18 @@ int ndbcluster_drop_database(const char *path)
 }
 
-longlong ha_ndbcluster::get_auto_increment()
+ulonglong ha_ndbcluster::get_auto_increment()
 {
+  int cache_size;
+  Uint64 auto_value;
   DBUG_ENTER("get_auto_increment");
   DBUG_PRINT("enter", ("m_tabname: %s", m_tabname));
-  int cache_size=
-    (rows_to_insert > autoincrement_prefetch) ?
-    rows_to_insert
-    : autoincrement_prefetch;
-  Uint64 auto_value=
-    (skip_auto_increment) ?
-    m_ndb->readAutoIncrementValue((NDBTAB *) m_table)
-    : m_ndb->getAutoIncrementValue((NDBTAB *) m_table, cache_size);
-  DBUG_RETURN((longlong)auto_value);
+  cache_size= ((rows_to_insert > autoincrement_prefetch) ?
+               rows_to_insert : autoincrement_prefetch);
+  auto_value= ((skip_auto_increment) ?
+               m_ndb->readAutoIncrementValue((NDBTAB *) m_table) :
+               m_ndb->getAutoIncrementValue((NDBTAB *) m_table, cache_size));
+  DBUG_RETURN((ulonglong) auto_value);
 }

--- a/sql/ha_ndbcluster.h
+++ b/sql/ha_ndbcluster.h
@@ -204,7 +204,7 @@ class ha_ndbcluster: public handler
   int key_cmp(uint keynr, const byte * old_row, const byte * new_row);
   void print_results();
 
-  longlong get_auto_increment();
+  ulonglong get_auto_increment();
   int ndb_err(NdbConnection*);
   bool uses_blob_value(bool all_fields);

--- a/sql/handler.cc
+++ b/sql/handler.cc
@@ -111,7 +111,7 @@ TYPELIB tx_isolation_typelib= {array_elements(tx_isolation_names)-1,"",
 enum db_type ha_resolve_by_name(const char *name, uint namelen)
 {
-  THD *thd=current_thd;
+  THD *thd= current_thd;
 
   if (thd && !my_strcasecmp(&my_charset_latin1, name, "DEFAULT")) {
     return (enum db_type) thd->variables.table_type;
   }
@@ -142,6 +142,7 @@ const char *ha_get_storage_engine(enum db_type db_type)
 enum db_type ha_checktype(enum db_type database_type)
 {
   show_table_type_st *types;
+  THD *thd= current_thd;
 
   for (types= sys_table_types; types->type; types++)
   {
     if ((database_type == types->db_type) &&
@@ -161,8 +162,8 @@ enum db_type ha_checktype(enum db_type database_type)
   }
 
   return
-    DB_TYPE_UNKNOWN != (enum db_type) current_thd->variables.table_type ?
-    (enum db_type) current_thd->variables.table_type :
+    DB_TYPE_UNKNOWN != (enum db_type) thd->variables.table_type ?
+    (enum db_type) thd->variables.table_type :
     DB_TYPE_UNKNOWN != (enum db_type) global_system_variables.table_type ?
     (enum db_type) global_system_variables.table_type :
     DB_TYPE_MYISAM;
@@ -946,7 +947,7 @@ int handler::read_first_row(byte * buf, uint primary_key)
 void handler::update_timestamp(byte *record)
 {
-  long skr= (long) current_thd->query_start();
+  long skr= (long) table->in_use->query_start();
 #ifdef WORDS_BIGENDIAN
   if (table->db_low_byte_first)
   {
@@ -958,42 +959,165 @@ void handler::update_timestamp(byte *record)
   return;
 }
 
 /*
-  Updates field with field_type NEXT_NUMBER according to following:
-  if field = 0 change field to the next free key in database.
+  Generate the next auto-increment number based on increment and offset
+
+  In most cases increment= offset= 1, in which case we get:
+  1,2,3,4,5,...
+  If increment=10 and offset=5 and previous number is 1, we get:
+  5,15,25,35,...
+*/
+
+inline ulonglong
+next_insert_id(ulonglong nr, struct system_variables *variables)
+{
+  nr= (((nr+ variables->auto_increment_increment -
+         variables->auto_increment_offset)) /
+       (ulonglong) variables->auto_increment_increment);
+  return (nr* (ulonglong) variables->auto_increment_increment +
+          variables->auto_increment_offset);
+}
+/*
+  Updates columns with type NEXT_NUMBER if:
+  - If column value is set to NULL (in which case
+    auto_increment_field_not_null is 0)
+  - If column is set to 0 and (sql_mode & MODE_NO_AUTO_VALUE_ON_ZERO) is not
+    set. In the future we will only set NEXT_NUMBER fields if one sets them
+    to NULL (or they are not included in the insert list).
+
+  There are two different cases when the above is true:
+
+  - thd->next_insert_id == 0  (This is the normal case)
+    In this case we set the column for the first row to the value
+    next_insert_id(get_auto_increment(column)) which is normally
+    max-used-column-value + 1.
+
+    We call get_auto_increment() only for the first row in a multi-row
+    statement. For the following rows we generate new numbers based on the
+    last used number.
+
+  - thd->next_insert_id != 0.  This happens when we have read a statement
+    from the binary log or when one has used SET LAST_INSERT_ID=#.
+
+    In this case we will set the column to the value of next_insert_id.
+    The next row will be given the id
+    next_insert_id(next_insert_id)
+
+    The idea is that the generated auto_increment values are predictable
+    and independent of the column values in the table.  This is needed to
+    be able to replicate into a table that already has rows with a higher
+    auto-increment value than the one that is inserted.
+
+    After we have already generated an auto-increment number and the user
+    inserts a column with a higher value than the last used one, we will
+    start counting from the inserted value.
+
+    thd->next_insert_id is cleared after it's been used for a statement.
 */
 
 void handler::update_auto_increment()
 {
-  longlong nr;
-  THD *thd;
+  ulonglong nr;
+  THD *thd= table->in_use;
+  struct system_variables *variables= &thd->variables;
   DBUG_ENTER("handler::update_auto_increment");
-  if (table->next_number_field->val_int() != 0 ||
+
+  /*
+    We must save the previous value to be able to restore it if the
+    row was not inserted
+  */
+  thd->prev_insert_id= thd->next_insert_id;
+
+  if ((nr= table->next_number_field->val_int()) != 0 ||
       table->auto_increment_field_not_null &&
-      current_thd->variables.sql_mode & MODE_NO_AUTO_VALUE_ON_ZERO)
+      thd->variables.sql_mode & MODE_NO_AUTO_VALUE_ON_ZERO)
   {
+    /* Clear flag for next row */
     table->auto_increment_field_not_null= FALSE;
+    /* Mark that we didn't generate a new value */
     auto_increment_column_changed=0;
+
+    /* Update next_insert_id if we have already generated a value */
+    if (thd->clear_next_insert_id && nr >= thd->next_insert_id)
+    {
+      if (variables->auto_increment_increment != 1)
+        nr= next_insert_id(nr, variables);
+      else
+        nr++;
+      thd->next_insert_id= nr;
+      DBUG_PRINT("info",("next_insert_id: %lu", (ulong) nr));
+    }
     DBUG_VOID_RETURN;
   }
   table->auto_increment_field_not_null= FALSE;
-  thd=current_thd;
-  if ((nr=thd->next_insert_id))
-    thd->next_insert_id=0;			// Clear after use
-  else
-    nr=get_auto_increment();
-  if (!table->next_number_field->store(nr))
+  if (!(nr= thd->next_insert_id))
+  {
+    nr= get_auto_increment();
+    if (variables->auto_increment_increment != 1)
+      nr= next_insert_id(nr-1, variables);
+    /*
+      Update next row based on the found value. This way we don't have to
+      call the handler for every generated auto-increment value on a
+      multi-row statement
+    */
+    thd->next_insert_id= nr;
+  }
+
+  DBUG_PRINT("info",("auto_increment: %lu", (ulong) nr));
+
+  /* Mark that we should clear next_insert_id before next stmt */
+  thd->clear_next_insert_id= 1;
+
+  if (!table->next_number_field->store((longlong) nr))
     thd->insert_id((ulonglong) nr);
   else
     thd->insert_id(table->next_number_field->val_int());
+
+  /*
+    We can't set next_insert_id if the auto-increment key is not the
+    first key part, as there is no guarantee that the first parts will be in
+    sequence
+  */
+  if (!table->next_number_key_offset)
+  {
+    /*
+      Set next insert id to point to next auto-increment value to be able to
+      handle multi-row statements
+      This works even if auto_increment_increment > 1
+    */
+    thd->next_insert_id= next_insert_id(nr, variables);
+  }
+  else
+    thd->next_insert_id= 0;
+
+  /* Mark that we generated a new value */
   auto_increment_column_changed=1;
   DBUG_VOID_RETURN;
 }
-longlong handler::get_auto_increment()
+/*
+  restore_auto_increment
+
+  In case of error on write, we restore the last used next_insert_id value
+  because the previous value was not used.
+*/
+
+void handler::restore_auto_increment()
 {
-  longlong nr;
+  THD *thd= table->in_use;
+  if (thd->next_insert_id)
+    thd->next_insert_id= thd->prev_insert_id;
+}
+
+
+ulonglong handler::get_auto_increment()
+{
+  ulonglong nr;
   int error;
 
   (void) extra(HA_EXTRA_KEYREAD);
@@ -1014,8 +1138,8 @@ longlong handler::get_auto_increment()
   if (error)
     nr=1;
   else
-    nr=(longlong) table->next_number_field->
-      val_int_offset(table->rec_buff_length)+1;
+    nr=((ulonglong) table->next_number_field->
+        val_int_offset(table->rec_buff_length)+1);
   index_end();
   (void) extra(HA_EXTRA_NO_KEYREAD);
   return nr;

--- a/sql/handler.h
+++ b/sql/handler.h
@@ -404,7 +404,8 @@ public:
   */
   virtual int delete_all_rows()
   { return (my_errno=HA_ERR_WRONG_COMMAND); }
-  virtual longlong get_auto_increment();
+  virtual ulonglong get_auto_increment();
+  virtual void restore_auto_increment();
   virtual void update_create_info(HA_CREATE_INFO *create_info) {}
 
   /* admin commands - called from mysql_admin_table */

--- a/sql/log.cc
+++ b/sql/log.cc
@@ -366,12 +366,11 @@ bool MYSQL_LOG::open(const char *log_name, enum_log_type log_type_arg,
     Format_description_log_event s(BINLOG_VERSION);
     if (!s.is_valid())
       goto err;
-    s.set_log_pos(this);
     if (null_created_arg)
       s.created= 0;
     if (s.write(&log_file))
       goto err;
-    bytes_written+= s.get_event_len();
+    bytes_written+= s.data_written;
   }
   if (description_event_for_queue &&
       description_event_for_queue->binlog_version>=4)
@@ -386,24 +385,24 @@ bool MYSQL_LOG::open(const char *log_name, enum_log_type log_type_arg,
       has been produced by
       Format_description_log_event::Format_description_log_event(char*
       buf,).
-      Why don't we want to write the description_event_for_queue if this event
-      is for format<4 (3.23 or 4.x): this is because in that case, the
-      description_event_for_queue describes the data received from the master,
-      but not the data written to the relay log (*conversion*), which is in
-      format 4 (slave's).
+      Why don't we want to write the description_event_for_queue if this
+      event is for format<4 (3.23 or 4.x): this is because in that case, the
+      description_event_for_queue describes the data received from the
+      master, but not the data written to the relay log (*conversion*),
+      which is in format 4 (slave's).
     */
     /*
-      Set 'created' to 0, so that in next relay logs this event does not trigger
-      cleaning actions on the slave in
+      Set 'created' to 0, so that in next relay logs this event does not
+      trigger cleaning actions on the slave in
       Format_description_log_event::exec_event().
-      Set 'log_pos' to 0 to show that it's an artificial event.
     */
     description_event_for_queue->created= 0;
-    description_event_for_queue->log_pos= 0;
+    /* Don't set log_pos in event header */
+    description_event_for_queue->artificial_event=1;
 
     if (description_event_for_queue->write(&log_file))
       goto err;
-    bytes_written+= description_event_for_queue->get_event_len();
+    bytes_written+= description_event_for_queue->data_written;
   }
   if (flush_io_cache(&log_file) ||
       my_sync(log_file.file, MYF(MY_WME)))
@@ -881,22 +880,18 @@ int MYSQL_LOG::purge_logs(const char *to_log,
   while ((strcmp(to_log,log_info.log_file_name) || (exit_loop=included)) &&
          !log_in_use(log_info.log_file_name))
   {
-    ulong file_size;
-    LINT_INIT(file_size);
+    ulong file_size= 0;
     if (decrease_log_space) //stat the file we want to delete
     {
       MY_STAT s;
+
+      /*
+        If we could not stat, we can't know the amount
+        of space that deletion will free. In most cases,
+        deletion won't work either, so it's not a problem.
+      */
       if (my_stat(log_info.log_file_name,&s,MYF(0)))
         file_size= s.st_size;
-      else
-      {
-        /*
-          If we could not stat, we can't know the amount
-          of space that deletion will free. In most cases,
-          deletion won't work either, so it's not a problem.
-        */
-        file_size= 0;
-      }
     }
     /*
       It's not fatal if we can't delete a log file ;
@@ -1069,9 +1064,8 @@ void MYSQL_LOG::new_file(bool need_lock)
   */
   THD *thd = current_thd; /* may be 0 if we are reacting to SIGHUP */
   Rotate_log_event r(thd,new_name+dirname_length(new_name));
-  r.set_log_pos(this);
   r.write(&log_file);
-  bytes_written += r.get_event_len();
+  bytes_written += r.data_written;
 }
 
 /*
   Update needs to be signalled even if there is no rotate event
@@ -1130,7 +1124,7 @@ bool MYSQL_LOG::append(Log_event* ev)
     error=1;
     goto err;
   }
-  bytes_written += ev->get_event_len();
+  bytes_written+= ev->data_written;
   DBUG_PRINT("info",("max_size: %lu",max_size));
   if ((uint) my_b_append_tell(&log_file) > max_size)
   {
@@ -1376,7 +1370,6 @@ COLLATION_CONNECTION=%u,COLLATION_DATABASE=%u,COLLATION_SERVER=%u",
                          (uint) thd->variables.collation_database->number,
                          (uint) thd->variables.collation_server->number);
       Query_log_event e(thd, buf, written, 0);
-      e.set_log_pos(this);
       if (e.write(file))
         goto err;
     }
@@ -1392,7 +1385,6 @@ COLLATION_CONNECTION=%u,COLLATION_DATABASE=%u,COLLATION_SERVER=%u",
                            thd->variables.time_zone->get_name()->ptr(),
                            "'", NullS);
       Query_log_event e(thd, buf, buf_end - buf, 0);
-      e.set_log_pos(this);
       if (e.write(file))
         goto err;
     }
@@ -1401,21 +1393,18 @@ COLLATION_CONNECTION=%u,COLLATION_DATABASE=%u,COLLATION_SERVER=%u",
     {
       Intvar_log_event e(thd,(uchar) LAST_INSERT_ID_EVENT,
                          thd->current_insert_id);
-      e.set_log_pos(this);
       if (e.write(file))
         goto err;
     }
     if (thd->insert_id_used)
     {
       Intvar_log_event e(thd,(uchar) INSERT_ID_EVENT,thd->last_insert_id);
-      e.set_log_pos(this);
       if (e.write(file))
         goto err;
     }
     if (thd->rand_used)
     {
       Rand_log_event e(thd,thd->rand_saved_seed1,thd->rand_saved_seed2);
-      e.set_log_pos(this);
       if (e.write(file))
         goto err;
     }
@@ -1431,7 +1420,6 @@ COLLATION_CONNECTION=%u,COLLATION_DATABASE=%u,COLLATION_SERVER=%u",
                              user_var_event->length,
                              user_var_event->type,
                              user_var_event->charset_number);
-        e.set_log_pos(this);
         if (e.write(file))
           goto err;
       }
@@ -1443,7 +1431,6 @@ COLLATION_CONNECTION=%u,COLLATION_DATABASE=%u,COLLATION_SERVER=%u",
       p= strmov(strmov(buf, "SET CHARACTER SET "),
                 thd->variables.convert_set->name);
       Query_log_event e(thd, buf, (ulong) (p - buf), 0);
-      e.set_log_pos(this);
       if (e.write(file))
         goto err;
     }
@@ -1452,7 +1439,6 @@ COLLATION_CONNECTION=%u,COLLATION_DATABASE=%u,COLLATION_SERVER=%u",
 
     /* Write the SQL command */
 
-    event_info->set_log_pos(this);
     if (event_info->write(file))
       goto err;
 
@@ -1632,7 +1618,6 @@ bool MYSQL_LOG::write(THD *thd, IO_CACHE *cache, bool commit_or_rollback)
       master's binlog, which would result in wrong positions being shown to
       the user, MASTER_POS_WAIT undue waiting etc.
     */
-    qinfo.set_log_pos(this);
     if (qinfo.write(&log_file))
       goto err;
   }
@@ -1658,7 +1643,6 @@ bool MYSQL_LOG::write(THD *thd, IO_CACHE *cache, bool commit_or_rollback)
                           commit_or_rollback ? "COMMIT" : "ROLLBACK",
                           commit_or_rollback ? 6 : 8,
                           TRUE);
-    qinfo.set_log_pos(this);
     if (qinfo.write(&log_file) || flush_io_cache(&log_file) ||
         sync_binlog(&log_file))
       goto err;
@@ -1894,9 +1878,8 @@ void MYSQL_LOG::close(uint exiting)
       (exiting & LOG_CLOSE_STOP_EVENT))
   {
     Stop_log_event s;
-    s.set_log_pos(this);
     s.write(&log_file);
-    bytes_written+= s.get_event_len();
+    bytes_written+= s.data_written;
     signal_update();
   }
 #endif /* HAVE_REPLICATION */

[File diff suppressed because it is too large]

--- a/sql/log_event.h
+++ b/sql/log_event.h
@@ -139,7 +139,7 @@ struct sql_ex_info
         field_term_len + enclosed_len + line_term_len +
         line_start_len + escaped_len + 6 : 7);
   }
-  int write_data(IO_CACHE* file);
+  bool write_data(IO_CACHE* file);
   char* init(char* buf,char* buf_end,bool use_new_format);
   bool new_format()
   {
@@ -231,7 +231,7 @@ struct sql_ex_info
 #define Q_FLAGS2_CODE		0
 #define Q_SQL_MODE_CODE		1
 #define Q_CATALOG_CODE		2
+#define Q_AUTO_INCREMENT	3
 
 /* Intvar event post-header */
@@ -387,8 +387,10 @@ typedef struct st_last_event_info
   uint32 flags2;
   bool sql_mode_inited;
   ulong sql_mode;		/* must be same as THD.variables.sql_mode */
+  ulong auto_increment_increment, auto_increment_offset;
   st_last_event_info()
-    : flags2_inited(0), flags2(0), sql_mode_inited(0), sql_mode(0)
+    :flags2_inited(0), flags2(0), sql_mode_inited(0), sql_mode(0),
+     auto_increment_increment(1),auto_increment_offset(1)
   {
     db[0]= 0; /* initially, the db is unknown */
@@ -407,13 +409,14 @@ class Log_event
 {
 public:
   /*
-    The offset in the log where this event originally appeared (it is preserved
-    in relay logs, making SHOW SLAVE STATUS able to print coordinates of the
-    event in the master's binlog). Note: when a transaction is written by the
-    master to its binlog (wrapped in BEGIN/COMMIT) the log_pos of all the
-    queries it contains is the one of the BEGIN (this way, when one does SHOW
-    SLAVE STATUS it sees the offset of the BEGIN, which is logical as rollback
-    may occur), except the COMMIT query which has its real offset.
+    The offset in the log where this event originally appeared (it is
+    preserved in relay logs, making SHOW SLAVE STATUS able to print
+    coordinates of the event in the master's binlog). Note: when a
+    transaction is written by the master to its binlog (wrapped in
+    BEGIN/COMMIT) the log_pos of all the queries it contains is the
+    one of the BEGIN (this way, when one does SHOW SLAVE STATUS it
+    sees the offset of the BEGIN, which is logical as rollback may
+    occur), except the COMMIT query which has its real offset.
   */
   my_off_t log_pos;
   /*
@@ -422,21 +425,24 @@ public:
   */
   char *temp_buf;
   /*
-    Timestamp on the master(for debugging and replication of NOW()/TIMESTAMP).
-    It is important for queries and LOAD DATA INFILE. This is set at the event's
-    creation time, except for Query and Load (et al.) events where this is set
-    at the query's execution time, which guarantees good replication (otherwise,
-    we could have a query and its event with different timestamps).
+    Timestamp on the master (for debugging and replication of
+    NOW()/TIMESTAMP). It is important for queries and LOAD DATA
+    INFILE. This is set at the event's creation time, except for Query
+    and Load (et al.) events where this is set at the query's
+    execution time, which guarantees good replication (otherwise, we
+    could have a query and its event with different timestamps).
   */
   time_t when;
   /* The number of seconds the query took to run on the master. */
   ulong exec_time;
+  /* Number of bytes written by write() function */
+  ulong data_written;
+
   /*
     The master's server id (is preserved in the relay log; used to prevent from
     infinite loops in circular replication).
   */
   uint32 server_id;
-  uint cached_event_len;
 
   /*
     Some 16 flags. Only one is really used now; look above for
@@ -453,26 +459,25 @@ public:
   Log_event();
   Log_event(THD* thd_arg, uint16 flags_arg, bool cache_stmt);
   /*
-    read_log_event() functions read an event from a binlog or relay log; used by
-    SHOW BINLOG EVENTS, the binlog_dump thread on the master (reads master's
-    binlog), the slave IO thread (reads the event sent by binlog_dump), the
-    slave SQL thread (reads the event from the relay log).
-    If mutex is 0, the read will proceed without mutex.
-    We need the description_event to be able to parse the event (to know the
-    post-header's size); in fact in read_log_event we detect the event's type,
-    then call the specific event's constructor and pass description_event as an
-    argument.
+    read_log_event() functions read an event from a binlog or relay
+    log; used by SHOW BINLOG EVENTS, the binlog_dump thread on the
+    master (reads master's binlog), the slave IO thread (reads the
+    event sent by binlog_dump), the slave SQL thread (reads the event
+    from the relay log). If mutex is 0, the read will proceed without
+    mutex. We need the description_event to be able to parse the
+    event (to know the post-header's size); in fact in read_log_event
+    we detect the event's type, then call the specific event's
+    constructor and pass description_event as an argument.
   */
   static Log_event* read_log_event(IO_CACHE* file,
                                    pthread_mutex_t* log_lock,
                                    const Format_description_log_event *description_event);
   static int read_log_event(IO_CACHE* file, String* packet,
                             pthread_mutex_t* log_lock);
-  /* set_log_pos() is used to fill log_pos with tell(log). */
-  void set_log_pos(MYSQL_LOG* log);
   /*
-    init_show_field_list() prepares the column names and types for the output of
-    SHOW BINLOG EVENTS; it is used only by SHOW BINLOG EVENTS.
+    init_show_field_list() prepares the column names and types for the
+    output of SHOW BINLOG EVENTS; it is used only by SHOW BINLOG
+    EVENTS.
   */
   static void init_show_field_list(List<Item>* field_list);
 #ifdef HAVE_REPLICATION
@@ -494,7 +499,7 @@ public:
   }
 #else
   Log_event() : temp_buf(0) {}
-    // avoid having to link mysqlbinlog against libpthread
+  /* avoid having to link mysqlbinlog against libpthread */
   static Log_event* read_log_event(IO_CACHE* file,
                                    const Format_description_log_event *description_event);
   /* print*() functions are used by mysqlbinlog */
@@ -512,13 +517,17 @@ public:
     my_free((gptr) ptr, MYF(MY_WME|MY_ALLOW_ZERO_PTR));
   }
-  int write(IO_CACHE* file);
-  int write_header(IO_CACHE* file);
-  virtual int write_data(IO_CACHE* file)
-  { return write_data_header(file) || write_data_body(file); }
-  virtual int write_data_header(IO_CACHE* file __attribute__((unused)))
+  bool write_header(IO_CACHE* file, ulong data_length);
+  virtual bool write(IO_CACHE* file)
+  {
+    return (write_header(file, get_data_size()) ||
+            write_data_header(file) ||
+            write_data_body(file));
+  }
+  virtual bool is_artificial_event() { return 0; }
+  virtual bool write_data_header(IO_CACHE* file)
   { return 0; }
-  virtual int write_data_body(IO_CACHE* file __attribute__((unused)))
+  virtual bool write_data_body(IO_CACHE* file __attribute__((unused)))
   { return 0; }
   virtual Log_event_type get_type_code() = 0;
   virtual bool is_valid() const = 0;
@@ -535,17 +544,10 @@ public:
     }
   }
   virtual int get_data_size() { return 0;}
-  int get_event_len()
-  {
-    /*
-      We don't re-use the cached event's length anymore (we did in 4.x) because
-      this leads to nasty problems: when the 5.0 slave reads an event from a 4.0
-      master, it caches the event's length, then this event is converted before
-      it goes into the relay log, so it would be written to the relay log with
-      its old length, which is garbage.
-    */
-    return (cached_event_len=(LOG_EVENT_HEADER_LEN + get_data_size()));
-  }
+  /*
+    Get event length for simple events. For complicated events the length
+    is calculated during write()
+  */
   static Log_event* read_log_event(const char* buf, uint event_len,
                                    const char **error,
                                    const Format_description_log_event
@@ -592,32 +594,32 @@ public:
   uint16 error_code;
   ulong thread_id;
   /*
     For events created by Query_log_event::exec_event (and
     Load_log_event::exec_event()) we need the *original* thread id, to be able
     to log the event with the original (=master's) thread id (fix for
     BUG#1686).
   */
   ulong slave_proxy_id;
   /*
     Binlog format 3 and 4 start to differ (as far as class members are
     concerned) from here.
   */
   int catalog_len;                     // <= 255 char; -1 means uninited
   /*
     We want to be able to store a variable number of N-bit status vars:
-    (generally N=32; but N=64 for SQL_MODE) a user may want to log the number of
-    affected rows (for debugging) while another does not want to lose 4 bytes in
-    this.
+    (generally N=32; but N=64 for SQL_MODE) a user may want to log the number
+    of affected rows (for debugging) while another does not want to lose 4
+    bytes in this.
     The storage on disk is the following:
     status_vars_len is part of the post-header,
     status_vars are in the variable-length part, after the post-header, before
     the db & query.
     status_vars on disk is a sequence of pairs (code, value) where 'code' means
-    'sql_mode', 'affected' etc. Sometimes 'value' must be a short string, so its
-    first byte is its length. For now the order of status vars is:
+    'sql_mode', 'affected' etc. Sometimes 'value' must be a short string, so
+    its first byte is its length. For now the order of status vars is:
     flags2 - sql_mode - catalog.
     We should add the same thing to Load_log_event, but in fact
     LOAD DATA INFILE is going to be logged with a new type of event (logging of
@@ -643,6 +645,7 @@ public:
   uint32 flags2;
   /* In connections sql_mode is 32 bits now but will be 64 bits soon */
   ulong sql_mode;
+  ulong auto_increment_increment, auto_increment_offset;
 
 #ifndef MYSQL_CLIENT
@@ -667,14 +670,8 @@ public:
     }
   }
   Log_event_type get_type_code() { return QUERY_EVENT; }
-  int write(IO_CACHE* file);
-  int write_data(IO_CACHE* file);   // returns 0 on success, -1 on error
+  bool write(IO_CACHE* file);
   bool is_valid() const { return query != 0; }
-  int get_data_size()
-  {
-    /* Note that the "1" below is the db's length. */
-    return (q_len + db_len + 1 + status_vars_len + QUERY_HEADER_LEN);
-  }
 };
 
 #ifdef HAVE_REPLICATION
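The (code, value) status-var layout described in the Query_log_event comment above can be sketched in isolation. This is an illustrative sketch, not the server's serialization code: the code constants and the little-endian packing below are assumptions made for the example.

```cpp
#include <cassert>
#include <vector>

// Hypothetical status-var codes (the real values live in log_event.h).
enum { Q_FLAGS2_CODE= 0, Q_SQL_MODE_CODE= 1 };

// Append one (code, value) pair: a 1-byte code followed by a
// fixed-width little-endian value.
static void pack_ulong(std::vector<unsigned char> &out,
                       unsigned char code, unsigned long val, int bytes)
{
  out.push_back(code);                          // 1-byte status-var code
  for (int i= 0; i < bytes; i++)                // little-endian value bytes
    out.push_back((unsigned char) (val >> (8 * i)));
}

// Read back a fixed-width little-endian value (buf points just past the code).
static unsigned long unpack_ulong(const unsigned char *buf, int bytes)
{
  unsigned long val= 0;
  for (int i= 0; i < bytes; i++)
    val|= (unsigned long) buf[i] << (8 * i);
  return val;
}
```

A reader walks the sequence by switching on each code byte and consuming the fixed (or length-prefixed) value that follows, which is how variable sets of status vars can be added without changing the post-header.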
@@ -713,7 +710,7 @@ public:
   int get_data_size();
   bool is_valid() const { return master_host != 0; }
   Log_event_type get_type_code() { return SLAVE_EVENT; }
-  int write_data(IO_CACHE* file );
+  bool write(IO_CACHE* file);
 };
 
 #endif /* HAVE_REPLICATION */
@ -804,8 +801,8 @@ public:
{ {
return sql_ex.new_format() ? NEW_LOAD_EVENT: LOAD_EVENT; return sql_ex.new_format() ? NEW_LOAD_EVENT: LOAD_EVENT;
} }
int write_data_header(IO_CACHE* file); bool write_data_header(IO_CACHE* file);
int write_data_body(IO_CACHE* file); bool write_data_body(IO_CACHE* file);
bool is_valid() const { return table_name != 0; } bool is_valid() const { return table_name != 0; }
int get_data_size() int get_data_size()
{ {
@@ -830,23 +827,26 @@ extern char server_version[SERVER_VERSION_LENGTH];
   is >4 (otherwise if ==4 the event will be sent naturally).
 
 ****************************************************************************/
 class Start_log_event_v3: public Log_event
 {
 public:
   /*
-    If this event is at the start of the first binary log since server startup
-    'created' should be the timestamp when the event (and the binary log) was
-    created.
-    In the other case (i.e. this event is at the start of a binary log created
-    by FLUSH LOGS or automatic rotation), 'created' should be 0.
-    This "trick" is used by MySQL >=4.0.14 slaves to know if they must drop the
-    stale temporary tables or not.
-    Note that when 'created'!=0, it is always equal to the event's timestamp;
-    indeed Start_log_event is written only in log.cc where the first
-    constructor below is called, in which 'created' is set to 'when'.
-    So in fact 'created' is a useless variable. When it is 0
-    we can read the actual value from timestamp ('when') and when it is
-    non-zero we can read the same value from timestamp ('when'). Conclusion:
+    If this event is at the start of the first binary log since server
+    startup 'created' should be the timestamp when the event (and the
+    binary log) was created. In the other case (i.e. this event is at
+    the start of a binary log created by FLUSH LOGS or automatic
+    rotation), 'created' should be 0. This "trick" is used by MySQL
+    >=4.0.14 slaves to know if they must drop the stale temporary
+    tables or not.
+
+    Note that when 'created'!=0, it is always equal to the event's
+    timestamp; indeed Start_log_event is written only in log.cc where
+    the first constructor below is called, in which 'created' is set
+    to 'when'. So in fact 'created' is a useless variable. When it is
+    0 we can read the actual value from timestamp ('when') and when it
+    is non-zero we can read the same value from timestamp
+    ('when'). Conclusion:
     - we use timestamp to print when the binlog was created.
     - we use 'created' only to know if this is a first binlog or not.
     In 3.23.57 we did not pay attention to this identity, so mysqlbinlog in
@@ -856,6 +856,12 @@ public:
   time_t created;
   uint16 binlog_version;
   char server_version[ST_SERVER_VER_LEN];
+  /*
+    artificial_event is 1 in the case where this is a generated event that
+    should not cause any cleanup actions. We handle this in the log by
+    setting log_event == 0 (for now).
+  */
+  bool artificial_event;
 
 #ifndef MYSQL_CLIENT
   Start_log_event_v3();
@ -872,14 +878,16 @@ public:
const Format_description_log_event* description_event); const Format_description_log_event* description_event);
~Start_log_event_v3() {} ~Start_log_event_v3() {}
Log_event_type get_type_code() { return START_EVENT_V3;} Log_event_type get_type_code() { return START_EVENT_V3;}
int write_data(IO_CACHE* file); bool write(IO_CACHE* file);
bool is_valid() const { return 1; } bool is_valid() const { return 1; }
int get_data_size() int get_data_size()
{ {
return START_V3_HEADER_LEN; //no variable-sized part return START_V3_HEADER_LEN; //no variable-sized part
} }
virtual bool is_artificial_event() { return artificial_event; }
}; };
/* /*
For binlog version 4. For binlog version 4.
This event is saved by threads which read it, as they need it for future This event is saved by threads which read it, as they need it for future
@@ -912,18 +920,12 @@ public:
                                const Format_description_log_event* description_event);
   ~Format_description_log_event() { my_free((gptr)post_header_len, MYF(0)); }
   Log_event_type get_type_code() { return FORMAT_DESCRIPTION_EVENT;}
-  int write_data(IO_CACHE* file);
+  bool write(IO_CACHE* file);
   bool is_valid() const
   {
     return ((common_header_len >= ((binlog_version==1) ? OLD_HEADER_LEN :
                                    LOG_EVENT_MINIMAL_HEADER_LEN)) &&
             (post_header_len != NULL));
   }
-  int get_event_len()
-  {
-    int i= LOG_EVENT_MINIMAL_HEADER_LEN + get_data_size();
-    DBUG_PRINT("info",("event_len=%d",i));
-    return i;
-  }
   int get_data_size()
   {
@@ -944,6 +946,7 @@ public:
   Logs special variables such as auto_increment values
 
 ****************************************************************************/
+
 class Intvar_log_event: public Log_event
 {
 public:
@@ -967,10 +970,11 @@ public:
   Log_event_type get_type_code() { return INTVAR_EVENT;}
   const char* get_var_type_name();
   int get_data_size() { return 9; /* sizeof(type) + sizeof(val) */;}
-  int write_data(IO_CACHE* file);
+  bool write(IO_CACHE* file);
   bool is_valid() const { return 1; }
 };
 
 /*****************************************************************************
   Rand Log Event class
@ -981,6 +985,7 @@ public:
waste, it does not cause bugs). waste, it does not cause bugs).
****************************************************************************/ ****************************************************************************/
class Rand_log_event: public Log_event class Rand_log_event: public Log_event
{ {
public: public:
@@ -1003,10 +1008,11 @@ class Rand_log_event: public Log_event
   ~Rand_log_event() {}
   Log_event_type get_type_code() { return RAND_EVENT;}
   int get_data_size() { return 16; /* sizeof(ulonglong) * 2*/ }
-  int write_data(IO_CACHE* file);
+  bool write(IO_CACHE* file);
   bool is_valid() const { return 1; }
 };
 
 /*****************************************************************************
   User var Log Event class
@@ -1018,6 +1024,7 @@ class Rand_log_event: public Log_event
   written before the Query_log_event, to set the user variable.
 
 ****************************************************************************/
+
 class User_var_log_event: public Log_event
 {
 public:
@@ -1044,16 +1051,11 @@ public:
   User_var_log_event(const char* buf, const Format_description_log_event* description_event);
   ~User_var_log_event() {}
   Log_event_type get_type_code() { return USER_VAR_EVENT;}
-  int get_data_size()
-  {
-    return (is_null ? UV_NAME_LEN_SIZE + name_len + UV_VAL_IS_NULL :
-            UV_NAME_LEN_SIZE + name_len + UV_VAL_IS_NULL + UV_VAL_TYPE_SIZE +
-            UV_CHARSET_NUMBER_SIZE + UV_VAL_LEN_SIZE + val_len);
-  }
-  int write_data(IO_CACHE* file);
+  bool write(IO_CACHE* file);
   bool is_valid() const { return 1; }
 };
 
 /*****************************************************************************
   Stop Log Event class
@ -1090,6 +1092,7 @@ public:
This will be depricated when we move to using sequence ids. This will be depricated when we move to using sequence ids.
****************************************************************************/ ****************************************************************************/
class Rotate_log_event: public Log_event class Rotate_log_event: public Log_event
{ {
public: public:
@@ -1121,22 +1124,18 @@ public:
     my_free((gptr) new_log_ident, MYF(0));
   }
   Log_event_type get_type_code() { return ROTATE_EVENT;}
-  int get_event_len()
-  {
-    return (LOG_EVENT_MINIMAL_HEADER_LEN + get_data_size());
-  }
   int get_data_size() { return ident_len + ROTATE_HEADER_LEN;}
   bool is_valid() const { return new_log_ident != 0; }
-  int write_data(IO_CACHE* file);
+  bool write(IO_CACHE* file);
 };
 
 /* the classes below are for the new LOAD DATA INFILE logging */
 
 /*****************************************************************************
   Create File Log Event class
 ****************************************************************************/
 
 class Create_file_log_event: public Load_log_event
 {
 protected:
@@ -1187,13 +1186,13 @@ public:
             4 + 1 + block_len);
   }
   bool is_valid() const { return inited_from_old || block != 0; }
-  int write_data_header(IO_CACHE* file);
-  int write_data_body(IO_CACHE* file);
+  bool write_data_header(IO_CACHE* file);
+  bool write_data_body(IO_CACHE* file);
   /*
     Cut out Create_file extentions and
     write it as Load event - used on the slave
   */
-  int write_base(IO_CACHE* file);
+  bool write_base(IO_CACHE* file);
 };
@@ -1202,6 +1201,7 @@ public:
   Append Block Log Event class
 
 ****************************************************************************/
+
 class Append_block_log_event: public Log_event
 {
 public:
@@ -1209,14 +1209,15 @@ public:
   uint block_len;
   uint file_id;
   /*
-    'db' is filled when the event is created in mysql_load() (the event needs to
-    have a 'db' member to be well filtered by binlog-*-db rules). 'db' is not
-    written to the binlog (it's not used by Append_block_log_event::write()), so
-    it can't be read in the Append_block_log_event(const char* buf, int
-    event_len) constructor.
-    In other words, 'db' is used only for filtering by binlog-*-db rules.
-    Create_file_log_event is different: its 'db' (which is inherited from
-    Load_log_event) is written to the binlog and can be re-read.
+    'db' is filled when the event is created in mysql_load() (the
+    event needs to have a 'db' member to be well filtered by
+    binlog-*-db rules). 'db' is not written to the binlog (it's not
+    used by Append_block_log_event::write()), so it can't be read in
+    the Append_block_log_event(const char* buf, int event_len)
+    constructor. In other words, 'db' is used only for filtering by
+    binlog-*-db rules. Create_file_log_event is different: its 'db'
+    (which is inherited from Load_log_event) is written to the binlog
+    and can be re-read.
   */
   const char* db;
@@ -1237,15 +1238,17 @@ public:
   Log_event_type get_type_code() { return APPEND_BLOCK_EVENT;}
   int get_data_size() { return block_len + APPEND_BLOCK_HEADER_LEN ;}
   bool is_valid() const { return block != 0; }
-  int write_data(IO_CACHE* file);
+  bool write(IO_CACHE* file);
   const char* get_db() { return db; }
 };
 
 /*****************************************************************************
   Delete File Log Event class
 ****************************************************************************/
+
 class Delete_file_log_event: public Log_event
 {
 public:
@@ -1269,15 +1272,17 @@ public:
   Log_event_type get_type_code() { return DELETE_FILE_EVENT;}
   int get_data_size() { return DELETE_FILE_HEADER_LEN ;}
   bool is_valid() const { return file_id != 0; }
-  int write_data(IO_CACHE* file);
+  bool write(IO_CACHE* file);
   const char* get_db() { return db; }
 };
 
 /*****************************************************************************
   Execute Load Log Event class
 ****************************************************************************/
+
 class Execute_load_log_event: public Log_event
 {
 public:
@@ -1300,10 +1305,11 @@ public:
   Log_event_type get_type_code() { return EXEC_LOAD_EVENT;}
   int get_data_size() { return EXEC_LOAD_HEADER_LEN ;}
   bool is_valid() const { return file_id != 0; }
-  int write_data(IO_CACHE* file);
+  bool write(IO_CACHE* file);
   const char* get_db() { return db; }
 };
 
 #ifdef MYSQL_CLIENT
 class Unknown_log_event: public Log_event
 {


@@ -4068,7 +4068,8 @@ enum options_mysqld
   OPT_DEFAULT_TIME_ZONE,
   OPT_OPTIMIZER_SEARCH_DEPTH,
   OPT_OPTIMIZER_PRUNE_LEVEL,
-  OPT_SQL_UPDATABLE_VIEW_KEY
+  OPT_SQL_UPDATABLE_VIEW_KEY,
+  OPT_AUTO_INCREMENT, OPT_AUTO_INCREMENT_OFFSET
 };
@@ -4087,6 +4088,16 @@ struct my_option my_long_options[] =
 #endif /* HAVE_REPLICATION */
   {"ansi", 'a', "Use ANSI SQL syntax instead of MySQL syntax.", 0, 0, 0,
    GET_NO_ARG, NO_ARG, 0, 0, 0, 0, 0, 0},
+  {"auto-increment-increment", OPT_AUTO_INCREMENT,
+   "Auto-increment columns are incremented by this",
+   (gptr*) &global_system_variables.auto_increment_increment,
+   (gptr*) &max_system_variables.auto_increment_increment, 0, GET_ULONG,
+   OPT_ARG, 1, 1, 65535, 0, 1, 0 },
+  {"auto-increment-offset", OPT_AUTO_INCREMENT_OFFSET,
+   "Offset added to Auto-increment columns. Used when auto-increment-increment != 1",
+   (gptr*) &global_system_variables.auto_increment_offset,
+   (gptr*) &max_system_variables.auto_increment_offset, 0, GET_ULONG, OPT_ARG,
+   1, 1, 65535, 0, 1, 0 },
   {"basedir", 'b',
    "Path to installation directory. All paths are usually resolved relative to this.",
    (gptr*) &mysql_home_ptr, (gptr*) &mysql_home_ptr, 0, GET_STR, REQUIRED_ARG,
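The two options added above give each server its own non-overlapping series: values offset, offset+inc, offset+2*inc, and so on, so two co-masters with the same increment but different offsets never generate the same auto-increment value. A minimal sketch of that arithmetic (a hypothetical helper, not the handler.cc implementation):

```cpp
#include <cassert>

// Smallest member of the series {offset, offset+inc, offset+2*inc, ...}
// that is strictly greater than 'current'.
static unsigned long next_auto_inc(unsigned long current,
                                   unsigned long inc, unsigned long offset)
{
  if (current < offset)
    return offset;                     // the series has not started yet
  return offset + ((current - offset) / inc + 1) * inc;
}
```

With auto-increment-increment=2, a master started with offset 1 produces 1, 3, 5, ... while its co-master with offset 2 produces 2, 4, 6, ..., which is the non-conflicting master <-> master setup this changeset targets.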


@@ -127,6 +127,11 @@ static byte *get_warning_count(THD *thd);
   alphabetic order
 */
 
+sys_var_thd_ulong sys_auto_increment_increment("auto_increment_increment",
+                                               &SV::auto_increment_increment);
+sys_var_thd_ulong sys_auto_increment_offset("auto_increment_offset",
+                                            &SV::auto_increment_offset);
 sys_var_long_ptr sys_binlog_cache_size("binlog_cache_size",
                                        &binlog_cache_size);
 sys_var_thd_ulong sys_bulk_insert_buff_size("bulk_insert_buffer_size",
@ -476,6 +481,8 @@ sys_var_const_str sys_license("license", STRINGIFY_ARG(LICENSE));
sys_var *sys_variables[]= sys_var *sys_variables[]=
{ {
&sys_auto_is_null, &sys_auto_is_null,
&sys_auto_increment_increment,
&sys_auto_increment_offset,
&sys_autocommit, &sys_autocommit,
&sys_big_tables, &sys_big_tables,
&sys_big_selects, &sys_big_selects,
@ -624,6 +631,8 @@ sys_var *sys_variables[]=
*/ */
struct show_var_st init_vars[]= { struct show_var_st init_vars[]= {
{"auto_incrememt_increment", (char*) &sys_auto_increment_increment, SHOW_SYS},
{"auto_increment_offset", (char*) &sys_auto_increment_offset, SHOW_SYS},
{"back_log", (char*) &back_log, SHOW_LONG}, {"back_log", (char*) &back_log, SHOW_LONG},
{"basedir", mysql_home, SHOW_CHAR}, {"basedir", mysql_home, SHOW_CHAR},
#ifdef HAVE_BERKELEY_DB #ifdef HAVE_BERKELEY_DB


@@ -220,9 +220,10 @@ static byte* get_table_key(TABLE_RULE_ENT* e, uint* len,
     look_for_description_event
                       1 if we should look for such an event. We only need
                       this when the SQL thread starts and opens an existing
-                      relay log and has to execute it (possibly from an offset
-                      >4); then we need to read the first event of the relay
-                      log to be able to parse the events we have to execute.
+                      relay log and has to execute it (possibly from an
+                      offset >4); then we need to read the first event of
+                      the relay log to be able to parse the events we have
+                      to execute.
 
   DESCRIPTION
     - Close old open relay log files.
@@ -333,8 +334,8 @@ int init_relay_log_pos(RELAY_LOG_INFO* rli,const char* log,
     while (look_for_description_event)
     {
       /*
-        Read the possible Format_description_log_event; if position was 4, no need, it will
-        be read naturally.
+        Read the possible Format_description_log_event; if position
+        was 4, no need, it will be read naturally.
       */
       DBUG_PRINT("info",("looking for a Format_description_log_event"));
@@ -373,9 +374,9 @@ int init_relay_log_pos(RELAY_LOG_INFO* rli,const char* log,
         Format_desc (of slave)
         Rotate (of master)
         Format_desc (of master)
-        So the Format_desc which really describes the rest of the relay log is
-        the 3rd event (it can't be further than that, because we rotate the
-        relay log when we queue a Rotate event from the master).
+        So the Format_desc which really describes the rest of the relay log
+        is the 3rd event (it can't be further than that, because we rotate
+        the relay log when we queue a Rotate event from the master).
         But what describes the Rotate is the first Format_desc.
         So what we do is:
         go on searching for Format_description events, until you exceed the
@@ -424,7 +425,7 @@ err:
 /*
-  Init functio to set up array for errors that should be skipped for slave
+  Init function to set up array for errors that should be skipped for slave
 
   SYNOPSIS
     init_slave_skip_errors()
@@ -505,26 +506,11 @@ void st_relay_log_info::inc_group_relay_log_pos(ulonglong log_pos,
     the relay log is not "val".
     With the end_log_pos solution, we avoid computations involving lengthes.
   */
-  DBUG_PRINT("info", ("log_pos=%lld group_master_log_pos=%lld",
-                      log_pos,group_master_log_pos));
+  DBUG_PRINT("info", ("log_pos: %lu  group_master_log_pos: %lu",
+                      (long) log_pos, (long) group_master_log_pos));
   if (log_pos) // 3.23 binlogs don't have log_pos
   {
-#if MYSQL_VERSION_ID < 50000
-    /*
-      If the event was converted from a 3.23 format, get_event_len() has
-      grown by 6 bytes (at least for most events, except LOAD DATA INFILE
-      which is already a big problem for 3.23->4.0 replication); 6 bytes is
-      the difference between the header's size in 4.0 (LOG_EVENT_HEADER_LEN)
-      and the header's size in 3.23 (OLD_HEADER_LEN). Note that using
-      mi->old_format will not help if the I/O thread has not started yet.
-      Yes this is a hack but it's just to make 3.23->4.x replication work;
-      3.23->5.0 replication is working much better.
-    */
-    group_master_log_pos= log_pos -
-      (mi->old_format ? (LOG_EVENT_HEADER_LEN - OLD_HEADER_LEN) : 0);
-#else
     group_master_log_pos= log_pos;
-#endif /* MYSQL_VERSION_ID < 5000 */
   }
   pthread_cond_broadcast(&data_cond);
   if (!skip_lock)
@@ -612,7 +598,8 @@ int purge_relay_logs(RELAY_LOG_INFO* rli, THD *thd, bool just_reset,
     goto err;
   }
   if (!just_reset)
-    error= init_relay_log_pos(rli, rli->group_relay_log_name, rli->group_relay_log_pos,
+    error= init_relay_log_pos(rli, rli->group_relay_log_name,
+                              rli->group_relay_log_pos,
                               0 /* do not need data lock */, errmsg, 0);
 
 err:
@@ -880,8 +867,8 @@ static TABLE_RULE_ENT* find_wild(DYNAMIC_ARRAY *a, const char* key, int len)
     second call will make the decision (because
     all_tables_not_ok() = !tables_ok(1st_list) && !tables_ok(2nd_list)).
 
-  Thought which arose from a question of a big customer "I want to include all
-  tables like "abc.%" except the "%.EFG"". This can't be done now. If we
+  Thought which arose from a question of a big customer "I want to include
+  all tables like "abc.%" except the "%.EFG"". This can't be done now. If we
   supported Perl regexps we could do it with this pattern: /^abc\.(?!EFG)/
   (I could not find an equivalent in the regex library MySQL uses).
@@ -1390,7 +1377,7 @@ static int get_master_version_and_clock(MYSQL* mysql, MASTER_INFO* mi)
   else
   {
     mi->clock_diff_with_master= 0; /* The "most sensible" value */
-    sql_print_error("Warning: \"SELECT UNIX_TIMESTAMP()\" failed on master, \
+    sql_print_warning("\"SELECT UNIX_TIMESTAMP()\" failed on master, \
 do not trust column Seconds_Behind_Master of SHOW SLAVE STATUS");
   }
   if (master_res)
@ -2151,7 +2138,7 @@ file '%s')", fname);
goto errwithmsg; goto errwithmsg;
#ifndef HAVE_OPENSSL #ifndef HAVE_OPENSSL
if (ssl) if (ssl)
sql_print_error("SSL information in the master info file " sql_print_warning("SSL information in the master info file "
"('%s') are ignored because this MySQL slave was compiled " "('%s') are ignored because this MySQL slave was compiled "
"without SSL support.", fname); "without SSL support.", fname);
#endif /* HAVE_OPENSSL */ #endif /* HAVE_OPENSSL */
@@ -2569,17 +2556,16 @@ int st_relay_log_info::wait_for_pos(THD* thd, String* log_name,
   ulong init_abort_pos_wait;
   int error=0;
   struct timespec abstime; // for timeout checking
-  set_timespec(abstime,timeout);
+  const char *msg;
   DBUG_ENTER("wait_for_pos");
-  DBUG_PRINT("enter",("group_master_log_name: '%s'  pos: %lu  timeout: %ld",
-                      group_master_log_name, (ulong) group_master_log_pos,
-                      (long) timeout));
+  DBUG_PRINT("enter",("log_name: '%s'  log_pos: %lu  timeout: %lu",
+                      log_name->c_ptr(), (ulong) log_pos, (ulong) timeout));
 
+  set_timespec(abstime,timeout);
   pthread_mutex_lock(&data_lock);
-  const char *msg= thd->enter_cond(&data_cond, &data_lock,
-                                   "Waiting for the slave SQL thread to "
-                                   "advance position");
+  msg= thd->enter_cond(&data_cond, &data_lock,
+                       "Waiting for the slave SQL thread to "
+                       "advance position");
   /*
     This function will abort when it notices that some CHANGE MASTER or
     RESET MASTER has changed the master info.
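The hunk above moves set_timespec() so the absolute deadline is computed after the DBUG bookkeeping but before the lock is taken; the wait itself then loops on a condition variable against that one deadline. A standalone sketch of the pattern (hypothetical struct and helper, not the server code):

```cpp
#include <pthread.h>
#include <time.h>
#include <errno.h>
#include <cassert>

// Toy stand-in for the replication position shared between threads.
struct waiter
{
  pthread_mutex_t lock;
  pthread_cond_t  cond;
  unsigned long   pos;     // advanced (and signalled) by another thread
};

// Returns true if 'target' was reached before the timeout expired.
static bool wait_for_target(waiter *w, unsigned long target, long timeout_sec)
{
  struct timespec abstime;
  clock_gettime(CLOCK_REALTIME, &abstime);
  abstime.tv_sec+= timeout_sec;             // absolute deadline, computed once

  pthread_mutex_lock(&w->lock);
  int err= 0;
  while (w->pos < target && err != ETIMEDOUT)   // re-check after every wakeup
    err= pthread_cond_timedwait(&w->cond, &w->lock, &abstime);
  bool reached= (w->pos >= target);
  pthread_mutex_unlock(&w->lock);
  return reached;
}
```

Because pthread_cond_timedwait() takes an absolute time, spurious wakeups simply re-enter the wait with the same deadline; recomputing the deadline inside the loop would instead extend the timeout on every wakeup.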
@@ -2635,6 +2621,12 @@ int st_relay_log_info::wait_for_pos(THD* thd, String* log_name,
     bool pos_reached;
     int cmp_result= 0;
 
+    DBUG_PRINT("info",
+               ("init_abort_pos_wait: %ld  abort_pos_wait: %ld",
+                init_abort_pos_wait, abort_pos_wait));
+    DBUG_PRINT("info",("group_master_log_name: '%s'  pos: %lu",
+                       group_master_log_name, (ulong) group_master_log_pos));
+
     /*
       group_master_log_name can be "", if we are just after a fresh
       replication start or after a CHANGE MASTER TO MASTER_HOST/PORT
@@ -2941,8 +2933,8 @@ server_errno=%d)",
   /* Check if eof packet */
   if (len < 8 && mysql->net.read_pos[0] == 254)
   {
-    sql_print_error("Slave: received end packet from server, apparent\
- master shutdown: %s",
+    sql_print_information("Slave: received end packet from server, apparent "
+                          "master shutdown: %s",
                     mysql_error(mysql));
     return packet_error;
   }
@@ -3261,14 +3253,14 @@ slave_begin:
   thd->proc_info = "Connecting to master";
   // we can get killed during safe_connect
   if (!safe_connect(thd, mysql, mi))
-    sql_print_error("Slave I/O thread: connected to master '%s@%s:%d',\
+    sql_print_information("Slave I/O thread: connected to master '%s@%s:%d',\
  replication started in log '%s' at position %s", mi->user,
                     mi->host, mi->port,
                     IO_RPL_LOG_NAME,
                     llstr(mi->master_log_pos,llbuff));
   else
   {
-    sql_print_error("Slave I/O thread killed while connecting to master");
+    sql_print_information("Slave I/O thread killed while connecting to master");
     goto err;
   }
@ -3301,7 +3293,7 @@ connected:
sql_print_error("Failed on request_dump()"); sql_print_error("Failed on request_dump()");
if (io_slave_killed(thd,mi)) if (io_slave_killed(thd,mi))
{ {
sql_print_error("Slave I/O thread killed while requesting master \ sql_print_information("Slave I/O thread killed while requesting master \
dump"); dump");
goto err; goto err;
} }
@ -3325,7 +3317,7 @@ dump");
} }
if (io_slave_killed(thd,mi)) if (io_slave_killed(thd,mi))
{ {
sql_print_error("Slave I/O thread killed while retrying master \ sql_print_information("Slave I/O thread killed while retrying master \
dump"); dump");
goto err; goto err;
} }
@ -3338,7 +3330,7 @@ reconnecting to try again, log '%s' at postion %s", IO_RPL_LOG_NAME,
if (safe_reconnect(thd, mysql, mi, suppress_warnings) || if (safe_reconnect(thd, mysql, mi, suppress_warnings) ||
io_slave_killed(thd,mi)) io_slave_killed(thd,mi))
{ {
sql_print_error("Slave I/O thread killed during or \ sql_print_information("Slave I/O thread killed during or \
after reconnect"); after reconnect");
goto err; goto err;
} }
@ -3360,7 +3352,7 @@ after reconnect");
if (io_slave_killed(thd,mi)) if (io_slave_killed(thd,mi))
{ {
if (global_system_variables.log_warnings) if (global_system_variables.log_warnings)
sql_print_error("Slave I/O thread killed while reading event"); sql_print_information("Slave I/O thread killed while reading event");
goto err; goto err;
} }
@ -3397,20 +3389,20 @@ max_allowed_packet",
if (io_slave_killed(thd,mi)) if (io_slave_killed(thd,mi))
{ {
if (global_system_variables.log_warnings) if (global_system_variables.log_warnings)
sql_print_error("Slave I/O thread killed while waiting to \ sql_print_information("Slave I/O thread killed while waiting to \
reconnect after a failed read"); reconnect after a failed read");
goto err; goto err;
} }
thd->proc_info = "Reconnecting after a failed master event read"; thd->proc_info = "Reconnecting after a failed master event read";
if (!suppress_warnings) if (!suppress_warnings)
sql_print_error("Slave I/O thread: Failed reading log event, \ sql_print_information("Slave I/O thread: Failed reading log event, \
reconnecting to retry, log '%s' position %s", IO_RPL_LOG_NAME, reconnecting to retry, log '%s' position %s", IO_RPL_LOG_NAME,
llstr(mi->master_log_pos, llbuff)); llstr(mi->master_log_pos, llbuff));
if (safe_reconnect(thd, mysql, mi, suppress_warnings) || if (safe_reconnect(thd, mysql, mi, suppress_warnings) ||
io_slave_killed(thd,mi)) io_slave_killed(thd,mi))
{ {
if (global_system_variables.log_warnings) if (global_system_variables.log_warnings)
sql_print_error("Slave I/O thread killed during or after a \ sql_print_information("Slave I/O thread killed during or after a \
reconnect done to recover from failed read"); reconnect done to recover from failed read");
goto err; goto err;
} }
@ -3472,7 +3464,7 @@ log space");
// error = 0; // error = 0;
err: err:
// print the current replication position // print the current replication position
sql_print_error("Slave I/O thread exiting, read up to log '%s', position %s", sql_print_information("Slave I/O thread exiting, read up to log '%s', position %s",
IO_RPL_LOG_NAME, llstr(mi->master_log_pos,llbuff)); IO_RPL_LOG_NAME, llstr(mi->master_log_pos,llbuff));
VOID(pthread_mutex_lock(&LOCK_thread_count)); VOID(pthread_mutex_lock(&LOCK_thread_count));
thd->query = thd->db = 0; // extra safety thd->query = thd->db = 0; // extra safety
@ -3623,7 +3615,7 @@ slave_begin:
rli->group_master_log_name, rli->group_master_log_name,
llstr(rli->group_master_log_pos,llbuff))); llstr(rli->group_master_log_pos,llbuff)));
if (global_system_variables.log_warnings) if (global_system_variables.log_warnings)
sql_print_error("Slave SQL thread initialized, starting replication in \ sql_print_information("Slave SQL thread initialized, starting replication in \
log '%s' at position %s, relay log '%s' position: %s", RPL_LOG_NAME, log '%s' at position %s, relay log '%s' position: %s", RPL_LOG_NAME,
llstr(rli->group_master_log_pos,llbuff),rli->group_relay_log_name, llstr(rli->group_master_log_pos,llbuff),rli->group_relay_log_name,
llstr(rli->group_relay_log_pos,llbuff1)); llstr(rli->group_relay_log_pos,llbuff1));
@ -3661,7 +3653,7 @@ the slave SQL thread with \"SLAVE START\". We stopped at log \
} }
/* Thread stopped. Print the current replication position to the log */ /* Thread stopped. Print the current replication position to the log */
sql_print_error("Slave SQL thread exiting, replication stopped in log \ sql_print_information("Slave SQL thread exiting, replication stopped in log \
'%s' at position %s", '%s' at position %s",
RPL_LOG_NAME, llstr(rli->group_master_log_pos,llbuff)); RPL_LOG_NAME, llstr(rli->group_master_log_pos,llbuff));
@ -4373,7 +4365,7 @@ Error: '%s' errno: %d retry-time: %d retries: %d",
if (reconnect) if (reconnect)
{ {
if (!suppress_warnings && global_system_variables.log_warnings) if (!suppress_warnings && global_system_variables.log_warnings)
sql_print_error("Slave: connected to master '%s@%s:%d',\ sql_print_information("Slave: connected to master '%s@%s:%d',\
replication resumed in log '%s' at position %s", mi->user, replication resumed in log '%s' at position %s", mi->user,
mi->host, mi->port, mi->host, mi->port,
IO_RPL_LOG_NAME, IO_RPL_LOG_NAME,
@ -4556,12 +4548,12 @@ Log_event* next_event(RELAY_LOG_INFO* rli)
/* /*
Relay log is always in new format - if the master is 3.23, the Relay log is always in new format - if the master is 3.23, the
I/O thread will convert the format for us. I/O thread will convert the format for us.
A problem: the description event may be in a previous relay log. So if the A problem: the description event may be in a previous relay log. So if
slave has been shutdown meanwhile, we would have to look in old relay the slave has been shutdown meanwhile, we would have to look in old relay
logs, which may even have been deleted. So we need to write this logs, which may even have been deleted. So we need to write this
description event at the beginning of the relay log. description event at the beginning of the relay log.
When the relay log is created when the I/O thread starts, easy: the master When the relay log is created when the I/O thread starts, easy: the
will send the description event and we will queue it. master will send the description event and we will queue it.
But if the relay log is created by new_file(): then the solution is: But if the relay log is created by new_file(): then the solution is:
MYSQL_LOG::open() will write the buffered description event. MYSQL_LOG::open() will write the buffered description event.
*/ */
@ -4715,8 +4707,8 @@ Log_event* next_event(RELAY_LOG_INFO* rli)
{ {
#ifdef EXTRA_DEBUG #ifdef EXTRA_DEBUG
if (global_system_variables.log_warnings) if (global_system_variables.log_warnings)
sql_print_error("next log '%s' is currently active", sql_print_information("next log '%s' is currently active",
rli->linfo.log_file_name); rli->linfo.log_file_name);
#endif #endif
rli->cur_log= cur_log= rli->relay_log.get_log_file(); rli->cur_log= cur_log= rli->relay_log.get_log_file();
rli->cur_log_old_open_count= rli->relay_log.get_open_count(); rli->cur_log_old_open_count= rli->relay_log.get_open_count();
@ -4745,8 +4737,8 @@ Log_event* next_event(RELAY_LOG_INFO* rli)
*/ */
#ifdef EXTRA_DEBUG #ifdef EXTRA_DEBUG
if (global_system_variables.log_warnings) if (global_system_variables.log_warnings)
sql_print_error("next log '%s' is not active", sql_print_information("next log '%s' is not active",
rli->linfo.log_file_name); rli->linfo.log_file_name);
#endif #endif
// open_binlog() will check the magic header // open_binlog() will check the magic header
if ((rli->cur_log_fd=open_binlog(cur_log,rli->linfo.log_file_name, if ((rli->cur_log_fd=open_binlog(cur_log,rli->linfo.log_file_name,
@ -4772,7 +4764,11 @@ event(errno: %d cur_log->error: %d)",
} }
} }
if (!errmsg && global_system_variables.log_warnings) if (!errmsg && global_system_variables.log_warnings)
errmsg = "slave SQL thread was killed"; {
sql_print_information("Error reading relay log event: %s",
"slave SQL thread was killed");
DBUG_RETURN(0);
}
err: err:
if (errmsg) if (errmsg)

View File

@@ -160,8 +160,8 @@ bool foreign_key_prefix(Key *a, Key *b)
 THD::THD()
   :user_time(0), global_read_lock(0), is_fatal_error(0),
-   last_insert_id_used(0),
-   insert_id_used(0), rand_used(0), time_zone_used(0),
+   rand_used(0), time_zone_used(0),
+   last_insert_id_used(0), insert_id_used(0), clear_next_insert_id(0),
    in_lock_tables(0), bootstrap(0), spcont(NULL)
 {
   current_arena= this;
@@ -496,6 +496,24 @@ bool THD::store_globals()
 }

+/* Cleanup after a query */
+
+void THD::cleanup_after_query()
+{
+  if (clear_next_insert_id)
+  {
+    clear_next_insert_id= 0;
+    next_insert_id= 0;
+  }
+  /* Free Items that were created during this execution */
+  free_items(free_list);
+  /*
+    In the rest of code we assume that free_list never points to garbage:
+    Keep this predicate true.
+  */
+  free_list= 0;
+}
+
 /*
   Convert a string to another character set
@@ -1461,8 +1479,8 @@ void Statement::end_statement()
   lex_end(lex);
   delete lex->result;
   lex->result= 0;
-  free_items(free_list);
-  free_list= 0;
+  /* Note that free_list is freed in cleanup_after_query() */
   /*
     Don't free mem_root, as mem_root is freed in the end of dispatch_command
     (once for any command).

View File

@@ -373,6 +373,7 @@ struct system_variables
   ulonglong myisam_max_sort_file_size;
   ha_rows select_limit;
   ha_rows max_join_size;
+  ulong auto_increment_increment, auto_increment_offset;
   ulong bulk_insert_buff_size;
   ulong join_buff_size;
   ulong long_query_time;
@@ -835,6 +836,8 @@ public:
     generated auto_increment value in handler.cc
   */
   ulonglong next_insert_id;
+  /* Remember last next_insert_id to reset it if something went wrong */
+  ulonglong prev_insert_id;
   /*
     The insert_id used for the last statement or set by SET LAST_INSERT_ID=#
     or SELECT LAST_INSERT_ID(#). Used for binary log and returned by
@@ -889,6 +892,9 @@ public:
   /* for user variables replication*/
   DYNAMIC_ARRAY user_var_events;

+  enum killed_state { NOT_KILLED=0, KILL_CONNECTION=ER_SERVER_SHUTDOWN, KILL_QUERY=ER_QUERY_INTERRUPTED };
+  killed_state volatile killed;
+
   /* scramble - random string sent to client on handshake */
   char scramble[SCRAMBLE_LENGTH+1];
@@ -896,22 +902,10 @@ public:
   bool locked, some_tables_deleted;
   bool last_cuted_field;
   bool no_errors, password, is_fatal_error;
-  bool query_start_used,last_insert_id_used,insert_id_used,rand_used;
-  bool time_zone_used;
+  bool query_start_used, rand_used, time_zone_used;
+  bool last_insert_id_used, insert_id_used, clear_next_insert_id;
   bool in_lock_tables;
   bool query_error, bootstrap, cleanup_done;
-  enum killed_state { NOT_KILLED=0, KILL_CONNECTION=ER_SERVER_SHUTDOWN, KILL_QUERY=ER_QUERY_INTERRUPTED };
-  killed_state volatile killed;
-  inline int killed_errno() const
-  {
-    return killed;
-  }
-  inline void send_kill_message() const
-  {
-    my_error(killed_errno(), MYF(0));
-  }
   bool tmp_table_used;
   bool charset_is_system_charset, charset_is_collation_connection;
   bool slow_command;
@@ -951,6 +945,7 @@ public:
   void init_for_queries();
   void change_user(void);
   void cleanup(void);
+  void cleanup_after_query();
   bool store_globals();
 #ifdef SIGNAL_WITH_VIO_CLOSE
   inline void set_active_vio(Vio* vio)
@@ -1070,6 +1065,14 @@ public:
   }
   inline CHARSET_INFO *charset() { return variables.character_set_client; }
   void update_charset();
+
+  inline int killed_errno() const
+  {
+    return killed;
+  }
+  inline void send_kill_message() const
+  {
+    my_error(killed_errno(), MYF(0));
+  }
 };

 /* Flags for the THD::system_thread (bitmap) variable */

View File

@@ -640,7 +640,7 @@ int mysqld_help(THD *thd, const char *mask)
   uint mlen= strlen(mask);
   MEM_ROOT *mem_root= &thd->mem_root;

-  if (res= open_and_lock_tables(thd, tables))
+  if ((res= open_and_lock_tables(thd, tables)))
     goto end;
   /*
     Init tables and fields to be usable from items

View File

@@ -311,8 +311,6 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list,
       else
 #endif
 	error=write_record(table,&info);
-      if (error)
-	break;
       /*
 	If auto_increment values are used, save the first one
 	for LAST_INSERT_ID() and for the update log.
@@ -323,6 +321,8 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list,
       {					// Get auto increment value
 	id= thd->last_insert_id;
       }
+      if (error)
+	break;
       thd->row_count++;
     }
@@ -638,9 +638,10 @@ int write_record(TABLE *table,COPY_INFO *info)
   {
     while ((error=table->file->write_row(table->record[0])))
     {
+      uint key_nr;
       if (error != HA_WRITE_SKIP)
 	goto err;
-      uint key_nr;
+      table->file->restore_auto_increment();
       if ((int) (key_nr = table->file->get_dup_key(error)) < 0)
       {
 	error=HA_WRITE_SKIP;			/* Database can't find key */
@@ -733,6 +734,7 @@ int write_record(TABLE *table,COPY_INFO *info)
     if (info->handle_duplicates != DUP_IGNORE ||
 	(error != HA_ERR_FOUND_DUPP_KEY && error != HA_ERR_FOUND_DUPP_UNIQUE))
       goto err;
+    table->file->restore_auto_increment();
   }
   else
     info->copied++;

View File

@@ -1571,8 +1571,7 @@ bool dispatch_command(enum enum_server_command command, THD *thd,
 	check_grant(thd, SELECT_ACL, &table_list, 2, UINT_MAX, 0))
       break;
     mysqld_list_fields(thd,&table_list,fields);
-    free_items(thd->free_list);
-    thd->free_list= 0;	 /* free_list should never point to garbage */
+    thd->cleanup_after_query();
     break;
   }
 #endif
@@ -4520,6 +4519,7 @@ void mysql_parse(THD *thd, char *inBuf, uint length)
     }
     thd->proc_info="freeing items";
     thd->end_statement();
+    thd->cleanup_after_query();
   }
   DBUG_VOID_RETURN;
 }
@@ -4546,10 +4546,12 @@ bool mysql_test_parse_for_slave(THD *thd, char *inBuf, uint length)
       all_tables_not_ok(thd,(TABLE_LIST*) lex->select_lex.table_list.first))
     error= 1;			/* Ignore question */
   thd->end_statement();
+  thd->cleanup_after_query();
   DBUG_RETURN(error);
 }
 #endif

 /*****************************************************************************
 ** Store field definition for create
 ** Return 0 if ok

View File

@@ -1628,8 +1628,7 @@ int mysql_stmt_prepare(THD *thd, char *packet, uint packet_length,
   thd->restore_backup_statement(stmt, &thd->stmt_backup);
   cleanup_items(stmt->free_list);
   close_thread_tables(thd);
-  free_items(thd->free_list);
-  thd->free_list= 0;
+  thd->cleanup_after_query();
   thd->current_arena= thd;

   if (error)
@@ -1856,12 +1855,7 @@ void mysql_stmt_execute(THD *thd, char *packet, uint packet_length)
     cleanup_items(stmt->free_list);
     reset_stmt_params(stmt);
     close_thread_tables(thd);           /* to close derived tables */
-    /*
-      Free items that were created during this execution of the PS by
-      query optimizer.
-    */
-    free_items(thd->free_list);
-    thd->free_list= 0;
+    thd->cleanup_after_query();
   }
   thd->set_statement(&thd->stmt_backup);
@@ -1969,13 +1963,8 @@ static void execute_stmt(THD *thd, Prepared_statement *stmt,
   reset_stmt_params(stmt);
   close_thread_tables(thd);             // to close derived tables
   thd->set_statement(&thd->stmt_backup);
-  /* Free Items that were created during this execution of the PS. */
-  free_items(thd->free_list);
-  /*
-    In the rest of prepared statements code we assume that free_list
-    never points to garbage: keep this predicate true.
-  */
-  thd->free_list= 0;
+  thd->cleanup_after_query();

   if (stmt->state == Item_arena::PREPARED)
   {
     thd->current_arena= thd;

View File

@@ -3389,6 +3389,7 @@ copy_data_between_tables(TABLE *from,TABLE *to,
 	to->file->print_error(error,MYF(0));
 	break;
       }
+      to->file->restore_auto_increment();
      delete_count++;
    }
    else