Only write full transactions to binary log.
A lot of new functions for BDB tables.
Fix for DROP DATABASE on Windows.
Default server_id variables.

Docs/manual.texi: Update of BDB info + Changes
configure.in: Added test of readlink
include/mysqld_error.h: Added new error message
sql/ha_berkeley.cc: Added storing of status, CHECK, ANALYZE and OPTIMIZE TABLE
sql/ha_berkeley.h: Added storing of status, CHECK, ANALYZE and OPTIMIZE TABLE
sql/handler.cc: Only write full transactions to binary log
sql/hostname.cc: Cleanup
sql/log.cc: Only write full transactions to binary log
sql/log_event.h: Only write full transactions to binary log
sql/mf_iocache.cc: Changes to be able to use IO_CACHE to save statements in a transaction
sql/mysql_priv.h: New variables
sql/mysqld.cc: Only write full transactions to binary log; added default values for server_id; lots of new BDB options
sql/share/*/errmsg.sys (czech, danish, dutch, english, estonian, french, german, greek, hungarian, italian, japanese, korean, polish, portuguese, russian, slovak, spanish, swedish): Added new error message
sql/share/*/errmsg.txt (all of the above plus norwegian-ny, norwegian, romanian): Added new error message
sql/share/swedish/errmsg.OLD: Added new error message
sql/sql_base.cc: Cleanup
sql/sql_class.cc: Only write full transactions to binary log
sql/sql_class.h: Added error handling of failed writes to logs
sql/sql_db.cc: Fix for DROP DATABASE on Windows
sql/sql_delete.cc: Only write full transactions to binary log
sql/sql_insert.cc: Only write full transactions to binary log
sql/sql_load.cc: Only write full transactions to binary log
sql/sql_parse.cc: End transaction at DROP, RENAME, CREATE and TRUNCATE
sql/sql_table.cc: Fixes for ALTER TABLE on BDB tables for Windows
sql/sql_update.cc: Only write full transactions to binary log
sql/sql_yacc.yy: AGAINST is no longer a reserved word
support-files/my-huge.cnf.sh: Changed to use binary log
support-files/my-large.cnf.sh: Changed to use binary log
support-files/my-medium.cnf.sh: Changed to use binary log
support-files/my-small.cnf.sh: Changed to use binary log
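The "only write full transactions to binary log" change can be pictured with a small sketch (hypothetical names; this is not the actual sql/log.cc code): each thread buffers the statements of an open transaction in a per-thread cache, and the shared binary log only ever receives the whole cache at COMMIT, while ROLLBACK discards it.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative sketch of the per-thread binlog cache: statements issued
// inside a transaction are buffered and copied to the shared log only at
// COMMIT, so the binary log never contains a partial transaction.
struct BinlogCache {
  std::vector<std::string> pending;   // statements of the open transaction
  std::vector<std::string>& binlog;   // the shared binary log

  explicit BinlogCache(std::vector<std::string>& log) : binlog(log) {}

  void write(const std::string& stmt) { pending.push_back(stmt); }

  void commit() {                     // flush the whole transaction at once
    binlog.insert(binlog.end(), pending.begin(), pending.end());
    pending.clear();
  }

  void rollback() { pending.clear(); }  // nothing reaches the binlog
};
```

In the real server the cache is an IO_CACHE (see the sql/mf_iocache.cc change) that spills to a temporary file when it outgrows binlog_cache_size.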
This commit is contained in:
parent
e5c585861e
commit
29907fc5a4
202	Docs/manual.texi
@@ -712,7 +712,7 @@ Solving some common problems with MySQL
 * Log Replication:: Database replication with update log
 * Backup:: Database backups
 * Update log:: The update log
-* Binary log::
+* Binary log:: The binary log
 * Slow query log:: Log of slow queries
 * Multiple servers:: Running multiple @strong{MySQL} servers on the same machine

@@ -9117,6 +9117,32 @@ bin\mysqld-nt --remove # remove MySQL as a service
 By invoking @code{mysqld} directly.
 @end itemize
 
+When the @code{mysqld} daemon starts up, it changes directory to the
+data directory. This is where it expects to write log files and the pid
+(process ID) file, and where it expects to find databases.
+
+The data directory location is hardwired in when the distribution is
+compiled. However, if @code{mysqld} expects to find the data directory
+somewhere other than where it really is on your system, it will not work
+properly. If you have problems with incorrect paths, you can find out
+what options @code{mysqld} allows and what the default path settings are by
+invoking @code{mysqld} with the @code{--help} option. You can override the
+defaults by specifying the correct pathnames as command-line arguments to
+@code{mysqld}. (These options can be used with @code{safe_mysqld} as well.)
+
+Normally you should need to tell @code{mysqld} only the base directory under
+which @strong{MySQL} is installed. You can do this with the @code{--basedir}
+option. You can also use @code{--help} to check the effect of changing path
+options (note that @code{--help} @emph{must} be the final option of the
+@code{mysqld} command). For example:
+
+@example
+shell> EXECDIR/mysqld --basedir=/usr/local --help
+@end example
+
+Once you determine the path settings you want, start the server without
+the @code{--help} option.
+
 Whichever method you use to start the server, if it fails to start up
 correctly, check the log file to see if you can find out why. Log files
 are located in the data directory (typically
@@ -9146,32 +9172,6 @@ the old Berkeley DB log file from the database directory to some other
 place, where you can later examine these. The log files are named
 @file{log.0000000001}, where the number will increase over time.
 
-When the @code{mysqld} daemon starts up, it changes directory to the
-data directory. This is where it expects to write log files and the pid
-(process ID) file, and where it expects to find databases.
-
-The data directory location is hardwired in when the distribution is
-compiled. However, if @code{mysqld} expects to find the data directory
-somewhere other than where it really is on your system, it will not work
-properly. If you have problems with incorrect paths, you can find out
-what options @code{mysqld} allows and what the default path settings are by
-invoking @code{mysqld} with the @code{--help} option. You can override the
-defaults by specifying the correct pathnames as command-line arguments to
-@code{mysqld}. (These options can be used with @code{safe_mysqld} as well.)
-
-Normally you should need to tell @code{mysqld} only the base directory under
-which @strong{MySQL} is installed. You can do this with the @code{--basedir}
-option. You can also use @code{--help} to check the effect of changing path
-options (note that @code{--help} @emph{must} be the final option of the
-@code{mysqld} command). For example:
-
-@example
-shell> EXECDIR/mysqld --basedir=/usr/local --help
-@end example
-
-Once you determine the path settings you want, start the server without
-the @code{--help} option.
-
 If you get the following error, it means that some other program (or another
 @code{mysqld} server) is already using the TCP/IP port or socket
 @code{mysqld} is trying to use:
@@ -9222,6 +9222,10 @@ This will not run in the background and it should also write a trace in
 @file{\mysqld.trace}, which may help you determine the source of your
 problems. @xref{Windows}.
 
+If you are using BDB (Berkeley DB) tables, you should familiarize
+yourself with the different BDB specific startup options. @xref{BDB start}.
+
 
 @node Automatic start, Command-line options, Starting server, Post-installation
 @subsection Starting and Stopping MySQL Automatically
 @cindex starting, the server automatically
@@ -9747,6 +9751,10 @@ Version 3.23:
 
 @itemize @bullet
+@item
+If you do a @code{DROP DATABASE} on a symbolic linked database, both the
+link and the original database are deleted. (This didn't happen in 3.22
+because configure didn't detect the @code{readlink} system call.)
 @item
 @code{OPTIMIZE TABLE} now only works for @strong{MyISAM} tables.
 For other table types, you can use @code{ALTER TABLE} to optimize the table.
 During @code{OPTIMIZE TABLE} the table is now locked from other threads.
@@ -17464,7 +17472,9 @@ DROP DATABASE [IF EXISTS] db_name
 @end example
 
 @code{DROP DATABASE} drops all tables in the database and deletes the
-database. @strong{Be VERY careful with this command!}
+database. If you do a @code{DROP DATABASE} on a symbolic linked
+database, both the link and the original database are deleted. @strong{Be
+VERY careful with this command!}
 
 @code{DROP DATABASE} returns the number of files that were removed from
 the database directory. Normally, this is three times the number of
@@ -18261,10 +18271,13 @@ Deleted records are maintained in a linked list and subsequent @code{INSERT}
 operations reuse old record positions. You can use @code{OPTIMIZE TABLE} to
 reclaim the unused space and to defragment the data file.
 
-For the moment @code{OPTIMIZE TABLE} only works on @strong{MyISAM}
-tables. You can get @code{OPTIMIZE TABLE} to work on other table types by
-starting @code{mysqld} with @code{--skip-new} or @code{--safe-mode}, but in
-this case @code{OPTIMIZE TABLE} is just mapped to @code{ALTER TABLE}.
+For the moment @code{OPTIMIZE TABLE} only works on @strong{MyISAM} and
+@code{BDB} tables. For @code{BDB} tables, @code{OPTIMIZE TABLE} is
+currently mapped to @code{ANALYZE TABLE}. @xref{ANALYZE TABLE}.
+
+You can get @code{OPTIMIZE TABLE} to work on other table types by starting
+@code{mysqld} with @code{--skip-new} or @code{--safe-mode}, but in this
+case @code{OPTIMIZE TABLE} is just mapped to @code{ALTER TABLE}.
 
 @code{OPTIMIZE TABLE} works the following way:
 @itemize @bullet
@@ -18277,7 +18290,7 @@ If the statistics are not up to date (and the repair couldn't be done
 by sorting the index), update them.
 @end itemize
 
-@code{OPTIMIZE TABLE} is equivalent to running
+@code{OPTIMIZE TABLE} for @code{MyISAM} tables is equivalent to running
 @code{myisamchk --quick --check-changed-tables --sort-index --analyze}
 on the table.
 
@@ -18294,11 +18307,12 @@ CHECK TABLE tbl_name[,tbl_name...] [option [option...]]
 option = QUICK | FAST | EXTEND | CHANGED
 @end example
 
-@code{CHECK TABLE} only works on @code{MyISAM} tables and is the same thing
-as running @code{myisamchk -m table_name} on the table.
+@code{CHECK TABLE} only works on @code{MyISAM} and @code{BDB} tables. On
+@code{MyISAM} tables it's the same thing as running @code{myisamchk -m
+table_name} on the table.
 
-Check the table(s) for errors and update the key statistics for the table.
-The command returns a table with the following columns:
+Checks the table(s) for errors. For @code{MyISAM} tables the key statistics
+are updated. The command returns a table with the following columns:
 
 @multitable @columnfractions .35 .65
 @item @strong{Column} @tab @strong{Value}
@@ -18325,6 +18339,9 @@ The different check types stand for the following:
 @item @code{EXTENDED} @tab Do a full key lookup for all keys for each row. This ensures that the table is 100 % consistent, but will take a long time!
 @end multitable
 
+Note that for BDB tables the different check options don't affect the
+check in any way!
+
 You can combine check options as in:
 
 @example
@@ -18423,7 +18440,9 @@ ANALYZE TABLE tbl_name[,tbl_name...]
 @end example
 
 Analyze and store the key distribution for the table. During the
-analyze the table is locked with a read lock.
+analyze the table is locked with a read lock. This works on
+@code{MyISAM} and @code{BDB} tables.
 
 This is equivalent to running @code{myisamchk -a} on the table.
 
 @strong{MySQL} uses the stored key distribution to decide in which order
@@ -20108,16 +20127,15 @@ If @code{key_reads} is big, then your @code{key_cache} is probably too
 small. The cache hit rate can be calculated with
 @code{key_reads}/@code{key_read_requests}.
 @item
-If @code{Handler_read_rnd} is big, then you probably have a lot of queries
-that require @strong{MySQL} to scan whole tables or you have joins that don't use
-keys properly.
+If @code{Handler_read_rnd} is big, then you probably have a lot of
+queries that require @strong{MySQL} to scan whole tables or you have
+joins that don't use keys properly.
 @item
 If @code{Created_tmp_tables} or @code{Sort_merge_passes} are high then
 your @code{mysqld} @code{sort_buffer} variables is probably too small.
 @item
 @code{Created_tmp_files} doesn't count the files needed to handle temporary
 tables.
-@item
 @end itemize
 
 @node SHOW VARIABLES, SHOW PROCESSLIST, SHOW STATUS, SHOW
@@ -20143,6 +20161,7 @@ differ somewhat:
 | bdb_home | /usr/local/mysql/data/ |
 | bdb_logdir | |
 | bdb_tmpdir | /tmp/ |
+| binlog_cache_size | 32768 |
 | character_set | latin1 |
 | character_sets | latin1 |
 | connect_timeout | 5 |
@@ -20239,7 +20258,7 @@ cache.
 @item @code{bdb_home}
 The value of the @code{--bdb-home} option.
 
-@item @code{bdb_lock_max}
+@item @code{bdb_max_lock}
 The maximum number of locks (1000 by default) you can have active on a
 BDB table. You should increase this if you get errors of type @code{bdb:
 Lock table is out of available locks} or @code{Got error 12 from ...}
@@ -20249,9 +20268,17 @@ a lot of rows to calculate the query.
 @item @code{bdb_logdir}
 The value of the @code{--bdb-logdir} option.
 
+@item @code{bdb_shared_data}
+Is @code{ON} if you are using @code{--bdb-shared-data}.
+
 @item @code{bdb_tmpdir}
 The value of the @code{--bdb-tmpdir} option.
 
+@item @code{binlog_cache_size}
+The size of the cache to hold the SQL
+statements for the binary log during a transaction. If you often use
+big, multi-statement transactions you can increase this to get more
+performance. @xref{COMMIT}.
+
 @item @code{character_set}
 The default character set.
 
@@ -20390,6 +20417,11 @@ wrong) packets. You must increase this value if you are using big
 @code{BLOB} columns. It should be as big as the biggest @code{BLOB} you want
 to use.
 
+@item @code{max_binlog_cache_size}
+If a multi-statement transaction
+requires more than this amount of memory, one will get the error
+"Multi-statement transaction required more than 'max_binlog_cache_size'
+bytes of storage".
+
 @item @code{max_connections}
 The number of simultaneous clients allowed. Increasing this value increases
 the number of file descriptors that @code{mysqld} requires. See below for
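The max_binlog_cache_size limit described above can be sketched in a few lines (hypothetical names; not the actual server code): the transaction cache grows with each buffered statement and reports an error once it would exceed the configured maximum, which is when the server raises ER_TRANS_CACHE_FULL.

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the max_binlog_cache_size check: a transaction cache that
// refuses to grow past the configured maximum, mirroring the
// "Multi-statement transaction required more than
// 'max_binlog_cache_size' bytes of storage" error.
struct TransCache {
  std::size_t used = 0;
  std::size_t max_size;                 // max_binlog_cache_size

  explicit TransCache(std::size_t m) : max_size(m) {}

  // returns false when the cache would exceed the maximum;
  // the caller would then abort the transaction with ER_TRANS_CACHE_FULL
  bool add(std::size_t stmt_bytes) {
    if (used + stmt_bytes > max_size)
      return false;
    used += stmt_bytes;
    return true;
  }
};
```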
@@ -21014,6 +21046,21 @@ table you will get an error (@code{ER_WARNING_NOT_COMPLETE_ROLLBACK}) as
 a warning. All transactional safe tables will be restored but any
 non-transactional table will not change.
 
+If you are using @code{BEGIN} or @code{SET AUTO_COMMIT=0}, you
+should use the @strong{MySQL} binary log for backups instead of the
+old update log; the transaction is stored in the binary log
+in one chunk, during @code{COMMIT}, to ensure that @code{ROLLBACK}:ed
+transactions are not stored. @xref{Binary log}.
+
+The following commands automatically end a transaction (as if you had done
+a @code{COMMIT} before executing the command):
+
+@multitable @columnfractions .33 .33 .33
+@item @code{ALTER TABLE} @tab @code{BEGIN} @tab @code{CREATE INDEX}
+@item @code{DROP DATABASE} @tab @code{DROP TABLE} @tab @code{RENAME TABLE}
+@item @code{TRUNCATE}
+@end multitable
+
 @findex LOCK TABLES
 @findex UNLOCK TABLES
 @node LOCK TABLES, SET OPTION, COMMIT, Reference
@@ -22511,11 +22558,12 @@ BDB tables:
 @item @code{--bdb-home=directory} @tab Base directory for BDB tables. This should be the same directory you use for --datadir.
 @item @code{--bdb-lock-detect=#} @tab Berkeley lock detect. One of (DEFAULT, OLDEST, RANDOM, or YOUNGEST).
 @item @code{--bdb-logdir=directory} @tab Berkeley DB log file directory.
-@item @code{--bdb-nosync} @tab Don't synchronously flush logs.
+@item @code{--bdb-no-sync} @tab Don't synchronously flush logs.
 @item @code{--bdb-recover} @tab Start Berkeley DB in recover mode.
+@item @code{--bdb-shared-data} @tab Start Berkeley DB in multi-process mode (Don't use @code{DB_PRIVATE} when initializing Berkeley DB)
 @item @code{--bdb-tmpdir=directory} @tab Berkeley DB tempfile name.
 @item @code{--skip-bdb} @tab Don't use berkeley db.
-@item @code{-O bdb_lock_max=1000} @tab Set the maximum number of locks possible. @xref{SHOW VARIABLES}.
+@item @code{-O bdb_max_lock=1000} @tab Set the maximum number of locks possible. @xref{SHOW VARIABLES}.
 @end multitable
 
 If you use @code{--skip-bdb}, @strong{MySQL} will not initialize the
@@ -22526,13 +22574,17 @@ Normally you should start mysqld with @code{--bdb-recover} if you intend
 to use BDB tables. This may, however, give you problems when you try to
 start mysqld if the BDB log files are corrupted. @xref{Starting server}.
 
-With @code{bdb_lock_max} you can specify the maximum number of locks
+With @code{bdb_max_lock} you can specify the maximum number of locks
 (1000 by default) you can have active on a BDB table. You should
 increase this if you get errors of type @code{bdb: Lock table is out of
 available locks} or @code{Got error 12 from ...} when you do long
 transactions or when @code{mysqld} has to examine a lot of rows to
 calculate the query.
 
+You may also want to change @code{binlog_cache_size} and
+@code{max_binlog_cache_size} if you are using big multi-line transactions.
+@xref{COMMIT}.
+
 @node BDB characteristic, BDB TODO, BDB start, BDB
 @subsection Some characteristics of @code{BDB} tables:
 
@@ -22578,6 +22630,10 @@ tables. In other words, the key information will take a little more
 space in @code{BDB} tables compared to MyISAM tables which don't use
 @code{PACK_KEYS=0}.
+@item
+There are often holes in the BDB table to allow you to insert new rows
+between different keys. This makes BDB tables somewhat larger than
+MyISAM tables.
 @item
 @strong{MySQL} performs a checkpoint each time a new Berkeley DB log
 file is started, and removes any log files that are not needed for
 current transactions. One can also run @code{FLUSH LOGS} at any time
@@ -22585,6 +22641,17 @@ to checkpoint the Berkeley DB tables.
 
 For disaster recovery, one should use table backups plus MySQL's binary
 log. @xref{Backup}.
+@item
+The optimizer needs to know an approximation of the number of rows in
+the table. @strong{MySQL} solves this by counting inserts and
+maintaining this in a separate segment in each BDB table. If you don't
+do a lot of @code{DELETE} or @code{ROLLBACK}:s this number should be
+accurate enough for the @strong{MySQL} optimizer, but as @strong{MySQL}
+only stores the number on close, it may be wrong if @strong{MySQL} dies
+unexpectedly. It should not be fatal even if this number is not 100 %
+correct. One can update the number of rows by executing @code{ANALYZE
+TABLE} or @code{OPTIMIZE TABLE}. @xref{ANALYZE TABLE}. @xref{OPTIMIZE
+TABLE}.
 @end itemize
 
 @node BDB TODO, BDB errors, BDB characteristic, BDB
@@ -25367,7 +25434,8 @@ server-id=<some unique number between 1 and 2^32-1>
 @end example
 
 @code{server-id} must be different for each server participating in
-replication.
+replication. If you don't specify a server-id, it will be set to
+1 if you have not defined @code{master-host}, else it will be set to 2.
 
 @item Restart the slave(s).
 
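The server-id defaulting rule documented above is simple enough to state as a one-line function (an illustrative sketch, not the actual sql/mysqld.cc code): a server without master-host configured (a master) defaults to 1, a server with master-host configured (a slave) defaults to 2.

```cpp
#include <cassert>

// Sketch of the documented server-id default in 3.23.29: masters get 1,
// slaves (servers with master-host set) get 2. The function name is
// illustrative only.
long default_server_id(bool master_host_set) {
  return master_host_set ? 2 : 1;
}
```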
@@ -26341,6 +26409,7 @@ like this:
 Possible variables for option --set-variable (-O) are:
 back_log current value: 5
 bdb_cache_size current value: 1048540
+binlog_cache_size current value: 32768
 connect_timeout current value: 5
 delayed_insert_timeout current value: 300
 delayed_insert_limit current value: 100
@@ -26352,6 +26421,7 @@ key_buffer_size current value: 1048540
 lower_case_table_names current value: 0
 long_query_time current value: 10
 max_allowed_packet current value: 1048576
+max_binlog_cache_size current value: 4294967295
 max_connections current value: 100
 max_connect_errors current value: 10
 max_delayed_threads current value: 20
@@ -33323,7 +33393,8 @@ and the crash.
 @node Binary log, Slow query log, Update log, Common problems
 @section The Binary Log
 
-In the future we expect the binary log to replace the update log!
+In the future the binary log will replace the update log, so we
+recommend that you switch to this log format as soon as possible!
 
 The binary log contains all information that is available in the update
 log in a more efficient format. It also contains information about how long
@@ -33369,6 +33440,20 @@ direct from a remote mysql server!
 @code{mysqlbinlog --help} will give you more information of how to use
 this program!
 
+If you are using @code{BEGIN} or @code{SET AUTO_COMMIT=0}, you must use
+the @strong{MySQL} binary log for backups instead of the old update log.
+
+All updates (@code{UPDATE}, @code{DELETE} or @code{INSERT}) that change
+a transactional table (like BDB tables) are cached until a @code{COMMIT}.
+Any updates to a non-transactional table are stored in the binary log at
+once. Every thread will on start allocate a buffer of
+@code{binlog_cache_size} to buffer queries. If a query is bigger than
+this, the thread will open a temporary file to handle the bigger cache.
+The temporary file will be deleted when the thread ends.
+
+@code{max_binlog_cache_size} can be used to restrict the total size used
+to cache a multi-statement transaction.
+
 @cindex slow query log
 @cindex files, slow query log
 @node Slow query log, Multiple servers, Binary log, Common problems
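The support-files my-*.cnf examples were changed in this commit to use the binary log. A minimal my.cnf fragment in that spirit might look like the following (a sketch: the option names follow the 3.23 conventions shown in this manual, and the sizes are illustrative, not the values shipped in the example files):

```ini
[mysqld]
# enable the binary log (replaces the old update log)
log-bin
# must be unique per server participating in replication
server-id=1
# per-thread cache for transaction statements headed for the binlog
set-variable = binlog_cache_size=32768
# upper bound before ER_TRANS_CACHE_FULL is raised
set-variable = max_binlog_cache_size=4194304
```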
@@ -39275,6 +39360,27 @@ though, so Version 3.23 is not released as a stable version yet.
 @appendixsubsec Changes in release 3.23.29
 @itemize @bullet
+@item
+Renamed variable @code{bdb_lock_max} to @code{bdb_max_lock}.
+@item
+Changed the default server-id to 1 for masters and 2 for slaves
+to make it easier to use the binary log.
+@item
+Added @code{CHECK}, @code{ANALYZE} and @code{OPTIMIZE} of BDB tables.
+@item
+Store in BDB tables the number of rows; this helps to optimize queries
+when we need an approximation of the number of rows.
+@item
+@code{DROP TABLE}, @code{RENAME TABLE}, @code{CREATE INDEX} and
+@code{DROP INDEX} are now transaction endpoints.
+@item
+Added option @code{--bdb-shared-data} to @code{mysqld}.
+@item
+Added variables @code{binlog_cache_size} and @code{max_binlog_cache_size} to
+@code{mysqld}.
+@item
+If you do a @code{DROP DATABASE} on a symbolic linked database, both
+the link and the original database are deleted.
 @item
 Fixed that @code{DROP DATABASE} works on OS/2.
 @item
 Fixed bug when doing a @code{SELECT DISTINCT ... table1 LEFT JOIN
configure.in
@@ -1239,7 +1239,7 @@ AC_CHECK_FUNCS(alarm bmove \
 chsize ftruncate rint finite fpsetmask fpresetsticky\
 cuserid fcntl fconvert poll \
 getrusage getpwuid getcwd getrlimit getwd index stpcpy locking longjmp \
-perror pread realpath rename \
+perror pread realpath readlink rename \
 socket strnlen madvise mkstemp \
 strtol strtoul strtoull snprintf tempnam thr_setconcurrency \
 gethostbyaddr_r gethostbyname_r getpwnam \
include/mysqld_error.h
@@ -197,4 +197,5 @@
 #define ER_CRASHED_ON_USAGE 1194
 #define ER_CRASHED_ON_REPAIR 1195
 #define ER_WARNING_NOT_COMPLETE_ROLLBACK 1196
-#define ER_ERROR_MESSAGES 197
+#define ER_TRANS_CACHE_FULL 1197
+#define ER_ERROR_MESSAGES 198
sql/ha_berkeley.cc
@@ -37,12 +37,15 @@
    transaction ?)
  - When using ALTER TABLE IGNORE, we should not start a transaction, but do
    everything without transactions.
  - When we do rollback, we need to subtract the number of changed rows
    from the updated tables.

  Testing of:
  - ALTER TABLE
  - LOCK TABLES
  - CHAR keys
  - BLOBS
+ - Mark tables that participate in a transaction so that they are not
+   closed during the transaction. We need to test what happens if
+   MySQL closes a table that is updated by a not committed transaction.
*/


@@ -58,19 +61,27 @@
 #include <hash.h>
 #include "ha_berkeley.h"
+#include "sql_manager.h"
 #include <stdarg.h>
 
 #define HA_BERKELEY_ROWS_IN_TABLE 10000 /* to get optimization right */
 #define HA_BERKELEY_RANGE_COUNT   100
 #define HA_BERKELEY_MAX_ROWS      10000000 /* Max rows in table */
+/* extra rows for estimate_number_of_rows() */
+#define HA_BERKELEY_EXTRA_ROWS    100
+
+/* Bits for share->status */
+#define STATUS_PRIMARY_KEY_INIT 1
+#define STATUS_ROW_COUNT_INIT   2
+#define STATUS_BDB_ANALYZE      4
 
 const char *ha_berkeley_ext=".db";
-bool berkeley_skip=0;
-u_int32_t berkeley_init_flags=0,berkeley_lock_type=DB_LOCK_DEFAULT;
+bool berkeley_skip=0,berkeley_shared_data=0;
+u_int32_t berkeley_init_flags= DB_PRIVATE, berkeley_lock_type=DB_LOCK_DEFAULT;
 ulong berkeley_cache_size;
 char *berkeley_home, *berkeley_tmpdir, *berkeley_logdir;
 long berkeley_lock_scan_time=0;
 ulong berkeley_trans_retry=5;
-ulong berkeley_lock_max;
+ulong berkeley_max_lock;
 pthread_mutex_t bdb_mutex;
 
 static DB_ENV *db_env;
@@ -86,11 +97,13 @@ TYPELIB berkeley_lock_typelib= {array_elements(berkeley_lock_names),"",
 static void berkeley_print_error(const char *db_errpfx, char *buffer);
 static byte* bdb_get_key(BDB_SHARE *share,uint *length,
                          my_bool not_used __attribute__((unused)));
-static BDB_SHARE *get_share(const char *table_name);
-static void free_share(BDB_SHARE *share);
+static BDB_SHARE *get_share(const char *table_name, TABLE *table);
+static void free_share(BDB_SHARE *share, TABLE *table);
+static void update_status(BDB_SHARE *share, TABLE *table);
 static void berkeley_noticecall(DB_ENV *db_env, db_notices notice);
 
 
 /* General functions */
 
 bool berkeley_init(void)
@@ -121,14 +134,14 @@ bool berkeley_init(void)
 
   db_env->set_cachesize(db_env, 0, berkeley_cache_size, 0);
   db_env->set_lk_detect(db_env, berkeley_lock_type);
-  if (berkeley_lock_max)
-    db_env->set_lk_max(db_env, berkeley_lock_max);
+  if (berkeley_max_lock)
+    db_env->set_lk_max(db_env, berkeley_max_lock);
 
   if (db_env->open(db_env,
                    berkeley_home,
                    berkeley_init_flags | DB_INIT_LOCK |
                    DB_INIT_LOG | DB_INIT_MPOOL | DB_INIT_TXN |
-                   DB_CREATE | DB_THREAD | DB_PRIVATE, 0666))
+                   DB_CREATE | DB_THREAD, 0666))
   {
     db_env->close(db_env,0);
     db_env=0;
@@ -336,7 +349,7 @@ berkeley_key_cmp(TABLE *table, KEY *key_info, const char *key, uint key_length)
       if (*key != (table->record[0][key_part->null_offset] &
                    key_part->null_bit) ? 0 : 1)
         return 1;
-      if (!*key++)                      // Null value
+      if (!*key++)                              // Null value
         continue;
     }
     if ((cmp=key_part->field->pack_cmp(key,key_part->length)))
@@ -345,7 +358,7 @@ berkeley_key_cmp(TABLE *table, KEY *key_info, const char *key, uint key_length)
     key+=length;
     key_length-=length;
   }
-  return 0;
+  return 0;                                     // Identical keys
 }
 
 
@@ -387,7 +400,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
   }
 
   /* Init table lock structure */
-  if (!(share=get_share(name)))
+  if (!(share=get_share(name,table)))
   {
     my_free(rec_buff,MYF(0));
     my_free(alloc_ptr,MYF(0));
@@ -397,7 +410,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
 
   if ((error=db_create(&file, db_env, 0)))
   {
-    free_share(share);
+    free_share(share,table);
     my_free(rec_buff,MYF(0));
     my_free(alloc_ptr,MYF(0));
     my_errno=error;
@@ -413,7 +426,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
                                2 | 4),
                               "main", DB_BTREE, open_mode,0))))
   {
-    free_share(share);
+    free_share(share,table);
     my_free(rec_buff,MYF(0));
     my_free(alloc_ptr,MYF(0));
     my_errno=error;
@@ -459,7 +472,6 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
       }
     }
   }
-
   /* Calculate pack_length of primary key */
   if (!hidden_primary_key)
   {
@@ -470,12 +482,9 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
       ref_length+= key_part->field->max_packed_col_length(key_part->length);
     fixed_length_primary_key=
       (ref_length == table->key_info[primary_key].key_length);
+    share->status|=STATUS_PRIMARY_KEY_INIT;
   }
   else
   {
-    if (!share->primary_key_inited)
-      update_auto_primary_key();
   }
+  get_status();
   DBUG_RETURN(0);
 }
|
||||
@ -491,7 +500,7 @@ int ha_berkeley::close(void)
|
||||
if (key_file[i] && (error=key_file[i]->close(key_file[i],0)))
|
||||
result=error;
|
||||
}
|
||||
free_share(share);
|
||||
free_share(share,table);
|
||||
my_free(rec_buff,MYF(MY_ALLOW_ZERO_PTR));
|
||||
my_free(alloc_ptr,MYF(MY_ALLOW_ZERO_PTR));
|
||||
if (result)
|
||||
@@ -632,8 +641,8 @@ void ha_berkeley::unpack_key(char *record, DBT *key, uint index)
   This will never fail as the key buffer is pre allocated.
 */
 
-DBT *ha_berkeley::pack_key(DBT *key, uint keynr, char *buff,
-                           const byte *record)
+DBT *ha_berkeley::create_key(DBT *key, uint keynr, char *buff,
+                             const byte *record, int key_length)
 {
   bzero((char*) key,sizeof(*key));
 
@ -647,11 +656,11 @@ DBT *ha_berkeley::pack_key(DBT *key, uint keynr, char *buff,
|
||||
KEY *key_info=table->key_info+keynr;
|
||||
KEY_PART_INFO *key_part=key_info->key_part;
|
||||
KEY_PART_INFO *end=key_part+key_info->key_parts;
|
||||
DBUG_ENTER("pack_key");
|
||||
DBUG_ENTER("create_key");
|
||||
|
||||
key->data=buff;
|
||||
|
||||
for ( ; key_part != end ; key_part++)
|
||||
for ( ; key_part != end && key_length > 0; key_part++)
|
||||
{
|
||||
if (key_part->null_bit)
|
||||
{
|
||||
@ -666,6 +675,7 @@ DBT *ha_berkeley::pack_key(DBT *key, uint keynr, char *buff,
|
||||
}
|
||||
buff=key_part->field->pack(buff,record + key_part->offset,
|
||||
key_part->length);
|
||||
key_length-=key_part->length;
|
||||
}
|
||||
key->size= (buff - (char*) key->data);
|
||||
DBUG_DUMP("key",(char*) key->data, key->size);
|
||||
@ -729,8 +739,8 @@ int ha_berkeley::write_row(byte * record)
|
||||
|
||||
if (table->keys == 1)
|
||||
{
|
||||
error=file->put(file, transaction, pack_key(&prim_key, primary_key,
|
||||
key_buff, record),
|
||||
error=file->put(file, transaction, create_key(&prim_key, primary_key,
|
||||
key_buff, record),
|
||||
&row, key_type[primary_key]);
|
||||
}
else
@@ -742,8 +752,8 @@ int ha_berkeley::write_row(byte * record)
if ((error=txn_begin(db_env, transaction, &sub_trans, 0)))
break;
DBUG_PRINT("trans",("starting subtransaction"));
if (!(error=file->put(file, sub_trans, pack_key(&prim_key, primary_key,
key_buff, record),
if (!(error=file->put(file, sub_trans, create_key(&prim_key, primary_key,
key_buff, record),
&row, key_type[primary_key])))
{
for (keynr=0 ; keynr < table->keys ; keynr++)
@@ -751,8 +761,8 @@ int ha_berkeley::write_row(byte * record)
if (keynr == primary_key)
continue;
if ((error=key_file[keynr]->put(key_file[keynr], sub_trans,
pack_key(&key, keynr, key_buff2,
record),
create_key(&key, keynr, key_buff2,
record),
&prim_key, key_type[keynr])))
{
last_dup_key=keynr;
@@ -783,6 +793,8 @@ int ha_berkeley::write_row(byte * record)
}
if (error == DB_KEYEXIST)
error=HA_ERR_FOUND_DUPP_KEY;
else if (!error)
changed_rows++;
DBUG_RETURN(error);
}

@@ -838,7 +850,7 @@ int ha_berkeley::update_primary_key(DB_TXN *trans, bool primary_key_changed,
{
// Primary key changed or we are updating a key that can have duplicates.
// Delete the old row and add a new one
pack_key(&old_key, primary_key, key_buff2, old_row);
create_key(&old_key, primary_key, key_buff2, old_row);
if ((error=remove_key(trans, primary_key, old_row, (DBT *) 0, &old_key)))
DBUG_RETURN(error); // This should always succeed
if ((error=pack_row(&row, new_row, 0)))
@@ -893,10 +905,10 @@ int ha_berkeley::update_row(const byte * old_row, byte * new_row)
}
else
{
pack_key(&prim_key, primary_key, key_buff, new_row);
create_key(&prim_key, primary_key, key_buff, new_row);

if ((primary_key_changed=key_cmp(primary_key, old_row, new_row)))
pack_key(&old_prim_key, primary_key, primary_key_buff, old_row);
create_key(&old_prim_key, primary_key, primary_key_buff, old_row);
else
old_prim_key=prim_key;
}

@@ -921,7 +933,7 @@ int ha_berkeley::update_row(const byte * old_row, byte * new_row)
if ((error=remove_key(sub_trans, keynr, old_row, (DBT*) 0,
&old_prim_key)) ||
(error=key_file[keynr]->put(key_file[keynr], sub_trans,
pack_key(&key, keynr, key_buff2,
create_key(&key, keynr, key_buff2,
new_row),
&prim_key, key_type[keynr])))
{
@@ -980,7 +992,7 @@ int ha_berkeley::remove_key(DB_TXN *sub_trans, uint keynr, const byte *record,
error=key_file[keynr]->del(key_file[keynr], sub_trans,
keynr == primary_key ?
prim_key :
pack_key(&key, keynr, key_buff2, record),
create_key(&key, keynr, key_buff2, record),
0);
}
else
@@ -997,7 +1009,7 @@ int ha_berkeley::remove_key(DB_TXN *sub_trans, uint keynr, const byte *record,
if (!(error=cursor->c_get(cursor,
(keynr == primary_key ?
prim_key :
pack_key(&key, keynr, key_buff2, record)),
create_key(&key, keynr, key_buff2, record)),
(keynr == primary_key ?
packed_record : prim_key),
DB_GET_BOTH)))
@@ -1046,7 +1058,7 @@ int ha_berkeley::delete_row(const byte * record)

if ((error=pack_row(&row, record, 0)))
DBUG_RETURN((error));
pack_key(&prim_key, primary_key, key_buff, record);
create_key(&prim_key, primary_key, key_buff, record);
if (hidden_primary_key)
keys|= (key_map) 1 << primary_key;

@@ -1078,7 +1090,9 @@ int ha_berkeley::delete_row(const byte * record)
if (error != DB_LOCK_DEADLOCK)
break;
}
DBUG_RETURN(0);
if (!error)
changed_rows--;
DBUG_RETURN(error);
}

@@ -1090,7 +1104,7 @@ int ha_berkeley::index_init(uint keynr)
dbug_assert(cursor == 0);
if ((error=file->cursor(key_file[keynr], transaction, &cursor,
table->reginfo.lock_type > TL_WRITE_ALLOW_READ ?
0 : 0)))
DB_RMW : 0)))
cursor=0; // Safety
bzero((char*) &last_key,sizeof(last_key));
DBUG_RETURN(error);
@@ -1336,7 +1350,7 @@ void ha_berkeley::position(const byte *record)
memcpy_fixed(ref, (char*) current_ident, BDB_HIDDEN_PRIMARY_KEY_LENGTH);
}
else
pack_key(&key, primary_key, ref, record);
create_key(&key, primary_key, ref, record);
}

@@ -1345,9 +1359,18 @@ void ha_berkeley::info(uint flag)
DBUG_ENTER("info");
if (flag & HA_STATUS_VARIABLE)
{
records = estimate_number_of_rows(); // Just to get optimisations right
records = share->rows; // Just to get optimisations right
deleted = 0;
}
if ((flag & HA_STATUS_CONST) || version != share->version)
{
version=share->version;
for (uint i=0 ; i < table->keys ; i++)
{
table->key_info[i].rec_per_key[table->key_info[i].key_parts-1]=
share->rec_per_key[i];
}
}
else if (flag & HA_STATUS_ERRKEY)
errkey=last_dup_key;
DBUG_VOID_RETURN;
@@ -1424,6 +1447,7 @@ int ha_berkeley::external_lock(THD *thd, int lock_type)
}
}
transaction= (DB_TXN*) thd->transaction.stmt.bdb_tid;
changed_rows=0;
}
else
{
@@ -1437,6 +1461,7 @@ int ha_berkeley::external_lock(THD *thd, int lock_type)
current_row.data=0;
}
}
thread_safe_add(share->rows, changed_rows, &share->mutex);
current_row.data=0;
if (!--thd->transaction.bdb_lock_count)
{
@@ -1607,6 +1632,142 @@ ha_rows ha_berkeley::records_in_range(int keynr,
DBUG_RETURN(rows <= 1.0 ? (ha_rows) 1 : (ha_rows) rows);
}

longlong ha_berkeley::get_auto_increment()
{
longlong nr=1; // Default if error or new key
int error;
(void) ha_berkeley::extra(HA_EXTRA_KEYREAD);
ha_berkeley::index_init(table->next_number_index);

if (!table->next_number_key_offset)
{ // Autoincrement at key-start
error=ha_berkeley::index_last(table->record[1]);
}
else
{
DBT row;
bzero((char*) &row,sizeof(row));
uint key_len;
KEY *key_info= &table->key_info[active_index];

/* Reading next available number for a sub key */
ha_berkeley::create_key(&last_key, active_index,
key_buff, table->record[0],
table->next_number_key_offset);
/* Store for compare */
memcpy(key_buff2, key_buff, (key_len=last_key.size));
key_info->handler.bdb_return_if_eq= -1;
error=read_row(cursor->c_get(cursor, &last_key, &row, DB_SET_RANGE),
table->record[1], active_index, &row, (DBT*) 0, 0);
key_info->handler.bdb_return_if_eq= 0;
if (!error && !berkeley_key_cmp(table, key_info, key_buff2, key_len))
{
/*
Found matching key; Now search after next key, go one step back
and then we should have found the biggest key with the given
prefix
*/
(void) read_row(cursor->c_get(cursor, &last_key, &row, DB_NEXT_NODUP),
table->record[1], active_index, &row, (DBT*) 0, 0);
if (read_row(cursor->c_get(cursor, &last_key, &row, DB_PREV),
table->record[1], active_index, &row, (DBT*) 0, 0) ||
berkeley_key_cmp(table, key_info, key_buff2, key_len))
error=1; // Something went wrong
}
}
nr=(longlong)
table->next_number_field->val_int_offset(table->rec_buff_length)+1;
ha_berkeley::index_end();
(void) ha_berkeley::extra(HA_EXTRA_NO_KEYREAD);
return nr;
}

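The cursor dance above (DB_SET_RANGE, then DB_NEXT_NODUP, then DB_PREV) finds the largest key that shares a given prefix, which is how the next auto-increment value for a sub key is located. A minimal sketch of the same idea against an ordered container, with `std::map` standing in for the B-tree (names and the prefix-bump trick are illustrative, and the sketch assumes a non-empty prefix whose last byte is below 0xFF):

```cpp
#include <cassert>
#include <map>
#include <string>

// Return the largest key that starts with `prefix`, or "" if none.
// Mirrors the BDB cursor steps: seek past the whole prefix range
// (the DB_SET_RANGE + DB_NEXT_NODUP pair), then step back one entry
// (DB_PREV) and check the prefix still matches.
static std::string last_key_with_prefix(const std::map<std::string, int> &tree,
                                        const std::string &prefix)
{
  std::string past = prefix;
  past.back()++;                      // first string sorting after the range
  auto it = tree.lower_bound(past);   // position just past all prefixed keys
  if (it == tree.begin())
    return "";                        // everything sorts after the prefix
  --it;                               // the "DB_PREV" step
  return it->first.compare(0, prefix.size(), prefix) == 0 ? it->first : "";
}
```

As in the handler code, the final prefix comparison is what detects that no key with the prefix existed at all.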
/****************************************************************************
Analyzing, checking, and optimizing tables
****************************************************************************/

static void print_msg(THD *thd, const char *table_name, const char *op_name,
const char *msg_type, const char *fmt, ...)
{
String* packet = &thd->packet;
packet->length(0);
char msgbuf[256];
msgbuf[0] = 0;
va_list args;
va_start(args,fmt);

my_vsnprintf(msgbuf, sizeof(msgbuf), fmt, args);
msgbuf[sizeof(msgbuf) - 1] = 0; // healthy paranoia

DBUG_PRINT(msg_type,("message: %s",msgbuf));

net_store_data(packet, table_name);
net_store_data(packet, op_name);
net_store_data(packet, msg_type);
net_store_data(packet, msgbuf);
if (my_net_write(&thd->net, (char*)thd->packet.ptr(),
thd->packet.length()))
thd->killed=1;
}

int ha_berkeley::analyze(THD* thd, HA_CHECK_OPT* check_opt)
{
DB_BTREE_STAT stat;
uint i;

for (i=0 ; i < table->keys ; i++)
{
file->stat(key_file[i], (void*) &stat, 0, 0);
share->rec_per_key[i]= stat.bt_ndata / stat.bt_nkeys;
}
/* If hidden primary key */
if (hidden_primary_key)
file->stat(file, (void*) &stat, 0, 0);
pthread_mutex_lock(&share->mutex);
share->rows=stat.bt_ndata;
share->status|=STATUS_BDB_ANALYZE; // Save status on close
share->version++; // Update stat in table
pthread_mutex_unlock(&share->mutex);
update_status(share,table); // Write status to file
return ((share->status & STATUS_BDB_ANALYZE) ? HA_ADMIN_FAILED :
HA_ADMIN_OK);
}

int ha_berkeley::optimize(THD* thd, HA_CHECK_OPT* check_opt)
{
return ha_berkeley::analyze(thd,check_opt);
}

int ha_berkeley::check(THD* thd, HA_CHECK_OPT* check_opt)
{
char name_buff[FN_REFLEN];
int error;
fn_format(name_buff,share->table_name,"", ha_berkeley_ext, 2 | 4);
if ((error=file->verify(file, name_buff, NullS, (FILE*) 0,
hidden_primary_key ? 0 : DB_NOORDERCHK)))
{
print_msg(thd, table->real_name, "check", "error",
"Got error %d checking file structure",error);
return HA_ADMIN_CORRUPT;
}
for (uint i=0 ; i < table->keys ; i++)
{
if ((error=file->verify(key_file[i], name_buff, NullS, (FILE*) 0,
DB_ORDERCHKONLY)))
{
print_msg(thd, table->real_name, "check", "error",
"Key %d was not in order",error);
return HA_ADMIN_CORRUPT;
}
}
return HA_ADMIN_OK;
}

/****************************************************************************
Handling the shared BDB_SHARE structure that is needed to provide table
locking.
@@ -1619,19 +1780,21 @@ static byte* bdb_get_key(BDB_SHARE *share,uint *length,
return (byte*) share->table_name;
}

static BDB_SHARE *get_share(const char *table_name)
static BDB_SHARE *get_share(const char *table_name, TABLE *table)
{
BDB_SHARE *share;
pthread_mutex_lock(&bdb_mutex);
uint length=(uint) strlen(table_name);
if (!(share=(BDB_SHARE*) hash_search(&bdb_open_tables, table_name, length)))
{
if ((share=(BDB_SHARE *) my_malloc(sizeof(*share)+length+1,
if ((share=(BDB_SHARE *) my_malloc(sizeof(*share)+length+1 +
sizeof(ha_rows)* table->keys,
MYF(MY_WME | MY_ZEROFILL))))
{
share->table_name_length=length;
share->table_name=(char*) (share+1);
strmov(share->table_name,table_name);
share->rec_per_key= (ha_rows*) (share+1);
if (hash_insert(&bdb_open_tables, (char*) share))
{
pthread_mutex_unlock(&bdb_mutex);
@@ -1647,11 +1810,14 @@ static BDB_SHARE *get_share(const char *table_name)
return share;
}

static void free_share(BDB_SHARE *share)
static void free_share(BDB_SHARE *share, TABLE *table)
{
pthread_mutex_lock(&bdb_mutex);
if (!--share->use_count)
{
update_status(share,table);
if (share->status_block)
share->status_block->close(share->status_block,0);
hash_delete(&bdb_open_tables, (gptr) share);
thr_lock_delete(&share->lock);
pthread_mutex_destroy(&share->mutex);
@@ -1660,20 +1826,124 @@ static void free_share(BDB_SHARE *share)
pthread_mutex_unlock(&bdb_mutex);
}

/*
Get status information that is stored in the 'status' sub database
and the max used value for the hidden primary key.
*/

void ha_berkeley::update_auto_primary_key()
void ha_berkeley::get_status()
{
pthread_mutex_lock(&share->mutex);
if (!share->primary_key_inited)
if (!test_all_bits(share->status,(STATUS_PRIMARY_KEY_INIT |
STATUS_ROW_COUNT_INIT)))
{
(void) extra(HA_EXTRA_KEYREAD);
index_init(primary_key);
if (!index_last(table->record[1]))
share->auto_ident=uint5korr(current_ident);
index_end();
(void) extra(HA_EXTRA_NO_KEYREAD);
pthread_mutex_lock(&share->mutex);
if (!(share->status & STATUS_PRIMARY_KEY_INIT))
{
(void) extra(HA_EXTRA_KEYREAD);
index_init(primary_key);
if (!index_last(table->record[1]))
share->auto_ident=uint5korr(current_ident);
index_end();
(void) extra(HA_EXTRA_NO_KEYREAD);
}
if (! share->status_block)
{
char name_buff[FN_REFLEN];
uint open_mode= (((table->db_stat & HA_READ_ONLY) ? DB_RDONLY : 0)
| DB_THREAD);
fn_format(name_buff, share->table_name,"", ha_berkeley_ext, 2 | 4);
if (!db_create(&share->status_block, db_env, 0))
{
if (!share->status_block->open(share->status_block, name_buff,
"status", DB_BTREE, open_mode, 0))
{
share->status_block->close(share->status_block, 0);
share->status_block=0;
}
}
}
if (!(share->status & STATUS_ROW_COUNT_INIT) && share->status_block)
{
share->org_rows=share->rows=
table->max_rows ? table->max_rows : HA_BERKELEY_MAX_ROWS;
if (!file->cursor(share->status_block, 0, &cursor, 0))
{
DBT row;
char rec_buff[64],*pos=rec_buff;
bzero((char*) &row,sizeof(row));
bzero((char*) &last_key,sizeof(last_key));
row.data=rec_buff;
row.size=sizeof(rec_buff);
row.flags=DB_DBT_USERMEM;
if (!cursor->c_get(cursor, &last_key, &row, DB_FIRST))
{
uint i;
share->org_rows=share->rows=uint4korr(pos); pos+=4;
for (i=0 ; i < table->keys ; i++)
{
share->rec_per_key[i]=uint4korr(pos); pos+=4;
}
}
cursor->c_close(cursor);
}
cursor=0; // Safety
}
share->status|= STATUS_PRIMARY_KEY_INIT | STATUS_ROW_COUNT_INIT;
pthread_mutex_unlock(&share->mutex);
}
pthread_mutex_unlock(&share->mutex);
}

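The 'status' record read above is a packed byte string: a 4-byte row count followed by one 4-byte rec_per_key value per index, stored little-endian via the int4store/uint4korr macros. A small self-contained sketch of that record format, with plain functions standing in for the MySQL macros (names here are illustrative, not the real macro definitions):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-ins for MySQL's int4store/uint4korr: 4 bytes, little-endian.
static void int4store_le(unsigned char *pos, uint32_t v)
{
  pos[0] = (unsigned char) (v);
  pos[1] = (unsigned char) (v >> 8);
  pos[2] = (unsigned char) (v >> 16);
  pos[3] = (unsigned char) (v >> 24);
}

static uint32_t uint4korr_le(const unsigned char *pos)
{
  return (uint32_t) pos[0] | ((uint32_t) pos[1] << 8) |
         ((uint32_t) pos[2] << 16) | ((uint32_t) pos[3] << 24);
}

// Pack a status record: row count first, then rec_per_key per index,
// the same layout get_status() walks with pos += 4 steps.
static std::vector<unsigned char> pack_status(uint32_t rows,
                                              const std::vector<uint32_t> &rpk)
{
  std::vector<unsigned char> buf(4 + 4 * rpk.size());
  unsigned char *pos = buf.data();
  int4store_le(pos, rows); pos += 4;
  for (uint32_t v : rpk) { int4store_le(pos, v); pos += 4; }
  return buf;
}
```

Storing in a fixed byte order keeps the status sub-database portable between machines of different endianness, which is why the macros are used instead of a raw memcpy of the integers.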
static void update_status(BDB_SHARE *share, TABLE *table)
{
DBUG_ENTER("update_status");
if (share->rows != share->org_rows ||
(share->status & STATUS_BDB_ANALYZE))
{
pthread_mutex_lock(&share->mutex);
if (!share->status_block)
{
/*
Create sub database 'status' if it doesn't exist from before
(This '*should*' always exist for table created with MySQL)
*/

char name_buff[FN_REFLEN];
if (db_create(&share->status_block, db_env, 0))
goto end;
share->status_block->set_flags(share->status_block,0);
if (share->status_block->open(share->status_block,
fn_format(name_buff,share->table_name,"",
ha_berkeley_ext,2 | 4),
"status", DB_BTREE,
DB_THREAD | DB_CREATE, my_umask))
goto end;
}
{
uint i;
DBT row,key;
char rec_buff[4+MAX_KEY*sizeof(ulong)], *pos=rec_buff;
const char *key_buff="status";

bzero((char*) &row,sizeof(row));
bzero((char*) &key,sizeof(key));
row.data=rec_buff;
key.data=(void*) key_buff;
key.size=sizeof(key_buff);
row.flags=key.flags=DB_DBT_USERMEM;
int4store(pos,share->rows); pos+=4;
for (i=0 ; i < table->keys ; i++)
{
int4store(pos,share->rec_per_key[i]); pos+=4;
}
row.size=(uint) (pos-rec_buff);
(void) share->status_block->put(share->status_block, 0, &key, &row, 0);
share->status&= ~STATUS_BDB_ANALYZE;
}
end:
pthread_mutex_unlock(&share->mutex);
}
DBUG_VOID_RETURN;
}

/*
@@ -1683,14 +1953,7 @@ void ha_berkeley::update_auto_primary_key()

ha_rows ha_berkeley::estimate_number_of_rows()
{
ulonglong max_ident;
ulonglong max_rows=table->max_rows ? table->max_rows : HA_BERKELEY_MAX_ROWS;
if (!hidden_primary_key)
return (ha_rows) max_rows;
pthread_mutex_lock(&share->mutex);
max_ident=share->auto_ident+EXTRA_RECORDS;
pthread_mutex_unlock(&share->mutex);
return (ha_rows) min(max_ident,max_rows);
return share->rows + HA_BERKELEY_EXTRA_ROWS;
}

#endif /* HAVE_BERKELEY_DB */

@@ -27,11 +27,13 @@

typedef struct st_berkeley_share {
ulonglong auto_ident;
ha_rows rows, org_rows, *rec_per_key;
THR_LOCK lock;
pthread_mutex_t mutex;
char *table_name;
DB *status_block;
uint table_name_length,use_count;
bool primary_key_inited;
uint status,version;
} BDB_SHARE;

@@ -49,7 +51,8 @@ class ha_berkeley: public handler
BDB_SHARE *share;
ulong int_option_flag;
ulong alloced_rec_buff_length;
uint primary_key,last_dup_key, hidden_primary_key;
ulong changed_rows;
uint primary_key,last_dup_key, hidden_primary_key, version;
bool fixed_length_row, fixed_length_primary_key, key_read;
bool fix_rec_buff_for_blob(ulong length);
byte current_ident[BDB_HIDDEN_PRIMARY_KEY_LENGTH];
@@ -58,7 +61,8 @@ class ha_berkeley: public handler
int pack_row(DBT *row,const byte *record, bool new_row);
void unpack_row(char *record, DBT *row);
void ha_berkeley::unpack_key(char *record, DBT *key, uint index);
DBT *pack_key(DBT *key, uint keynr, char *buff, const byte *record);
DBT *create_key(DBT *key, uint keynr, char *buff, const byte *record,
int key_length = MAX_KEY_LENGTH);
DBT *pack_key(DBT *key, uint keynr, char *buff, const byte *key_ptr,
uint key_length);
int remove_key(DB_TXN *trans, uint keynr, const byte *record,
@@ -79,8 +83,9 @@ class ha_berkeley: public handler
HA_KEYPOS_TO_RNDPOS | HA_READ_ORDER | HA_LASTKEY_ORDER |
HA_LONGLONG_KEYS | HA_NULL_KEY | HA_HAVE_KEY_READ_ONLY |
HA_BLOB_KEY | HA_NOT_EXACT_COUNT |
HA_PRIMARY_KEY_IN_READ_INDEX | HA_DROP_BEFORE_CREATE),
last_dup_key((uint) -1)
HA_PRIMARY_KEY_IN_READ_INDEX | HA_DROP_BEFORE_CREATE |
HA_AUTO_PART_KEY),
last_dup_key((uint) -1),version(0)
{
}
~ha_berkeley() {}
@@ -123,6 +128,10 @@ class ha_berkeley: public handler
int reset(void);
int external_lock(THD *thd, int lock_type);
void position(byte *record);
int analyze(THD* thd,HA_CHECK_OPT* check_opt);
int optimize(THD* thd, HA_CHECK_OPT* check_opt);
int check(THD* thd, HA_CHECK_OPT* check_opt);

ha_rows records_in_range(int inx,
const byte *start_key,uint start_key_len,
enum ha_rkey_function start_search_flag,
@@ -135,7 +144,7 @@ class ha_berkeley: public handler
THR_LOCK_DATA **store_lock(THD *thd, THR_LOCK_DATA **to,
enum thr_lock_type lock_type);

void update_auto_primary_key();
void get_status();
inline void get_auto_primary_key(byte *to)
{
ulonglong tmp;
@@ -144,11 +153,12 @@ class ha_berkeley: public handler
int5store(to,share->auto_ident);
pthread_mutex_unlock(&share->mutex);
}
longlong ha_berkeley::get_auto_increment();
};

extern bool berkeley_skip;
extern bool berkeley_skip, berkeley_shared_data;
extern u_int32_t berkeley_init_flags,berkeley_lock_type,berkeley_lock_types[];
extern ulong berkeley_cache_size, berkeley_lock_max;
extern ulong berkeley_cache_size, berkeley_max_lock;
extern char *berkeley_home, *berkeley_tmpdir, *berkeley_logdir;
extern long berkeley_lock_scan_time;
extern TYPELIB berkeley_lock_typelib;

@@ -191,8 +191,7 @@ int ha_autocommit_or_rollback(THD *thd, int error)
{
DBUG_ENTER("ha_autocommit_or_rollback");
#ifdef USING_TRANSACTIONS
if (!(thd->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)) &&
!thd->locked_tables)
if (!(thd->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)))
{
if (!error)
{
@@ -211,6 +210,16 @@ int ha_commit_trans(THD *thd, THD_TRANS* trans)
{
int error=0;
DBUG_ENTER("ha_commit");
#ifdef USING_TRANSACTIONS
/* Update the binary log if we have cached some queries */
if (trans == &thd->transaction.all && mysql_bin_log.is_open() &&
my_b_tell(&thd->transaction.trans_log))
{
mysql_bin_log.write(&thd->transaction.trans_log);
reinit_io_cache(&thd->transaction.trans_log,
WRITE_CACHE, (my_off_t) 0, 0, 1);
thd->transaction.trans_log.end_of_file= max_binlog_cache_size;
}
#ifdef HAVE_BERKELEY_DB
if (trans->bdb_tid)
{
@@ -224,13 +233,16 @@ int ha_commit_trans(THD *thd, THD_TRANS* trans)
#endif
#ifdef HAVE_INNOBASE_DB
{
if ((error=innobase_commit(thd,trans->innobase_tid))
if ((error=innobase_commit(thd,trans->innobase_tid)))
{
my_error(ER_ERROR_DURING_COMMIT, MYF(0), error);
error=1;
}
trans->innobase_tid=0;
}
#endif
if (error && trans == &thd->transaction.all && mysql_bin_log.is_open())
sql_print_error("Error: Got error during commit; Binlog is not up to date!");
#endif
DBUG_RETURN(error);
}
@@ -260,6 +272,12 @@ int ha_rollback_trans(THD *thd, THD_TRANS *trans)
}
trans->innobase_tid=0;
}
#endif
#ifdef USING_TRANSACTIONS
if (trans == &thd->transaction.all)
reinit_io_cache(&thd->transaction.trans_log,
WRITE_CACHE, (my_off_t) 0, 0, 1);
thd->transaction.trans_log.end_of_file= max_binlog_cache_size;
#endif
DBUG_RETURN(error);
}

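The hunks above are the core of "only write full transactions to binary log": statements inside a transaction accumulate in thd->transaction.trans_log (an IO_CACHE), ha_commit_trans flushes that cache to the binlog in one piece, and ha_rollback_trans just reinitializes it so a partial transaction never reaches the log. A toy sketch of that buffering policy, with plain strings standing in for the IO_CACHE and the log file (the class and its members are illustrative, not MySQL types):

```cpp
#include <cassert>
#include <string>

// Toy model of transaction-buffered binlogging: statements inside a
// transaction go to a per-transaction cache and reach the "binlog"
// only on commit; rollback discards the cache.
struct BinlogModel
{
  std::string binlog;       // stands in for the binary log file
  std::string trans_cache;  // stands in for thd->transaction.trans_log
  bool in_trans = false;

  void begin() { in_trans = true; }
  void write_stmt(const std::string &stmt)
  {
    (in_trans ? trans_cache : binlog) += stmt + ";";
  }
  void commit()   // flush the whole cached transaction at once
  {
    binlog += trans_cache;
    trans_cache.clear();
    in_trans = false;
  }
  void rollback() // the cached statements are simply thrown away
  {
    trans_cache.clear();
    in_trans = false;
  }
};
```

Because the cache is written under one lock acquisition at commit time, a replication slave reading the log can never see an interleaved or half-finished transaction, which is the property the commit message advertises.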
@@ -180,20 +180,21 @@ my_string ip_to_hostname(struct in_addr *in, uint *errors)
VOID(pthread_mutex_lock(&hostname_cache->lock));
if (!(hp=gethostbyaddr((char*) in,sizeof(*in), AF_INET)))
{
DBUG_PRINT("error",("gethostbyaddr returned %d",errno));
VOID(pthread_mutex_unlock(&hostname_cache->lock));
add_wrong_ip(in);
DBUG_RETURN(0);
DBUG_PRINT("error",("gethostbyaddr returned %d",errno));
goto err;
}
if (!hp->h_name[0])
if (!hp->h_name[0]) // Don't allow empty hostnames
{
VOID(pthread_mutex_unlock(&hostname_cache->lock));
DBUG_PRINT("error",("Got an empty hostname"));
add_wrong_ip(in);
DBUG_RETURN(0); // Don't allow empty hostnames
goto err;
}
if (!(name=my_strdup(hp->h_name,MYF(0))))
{
VOID(pthread_mutex_unlock(&hostname_cache->lock));
DBUG_RETURN(0); // out of memory
}
check=gethostbyname(name);
VOID(pthread_mutex_unlock(&hostname_cache->lock));
if (!check)
@@ -214,8 +215,7 @@ my_string ip_to_hostname(struct in_addr *in, uint *errors)
{
DBUG_PRINT("error",("mysqld doesn't accept hostnames that starts with a number followed by a '.'"));
my_free(name,MYF(0));
add_wrong_ip(in);
DBUG_RETURN(0);
goto err;
}
}

@@ -230,6 +230,8 @@ my_string ip_to_hostname(struct in_addr *in, uint *errors)
}
DBUG_PRINT("error",("Couldn't verify hostname with gethostbyname"));
my_free(name,MYF(0));

err:
add_wrong_ip(in);
DBUG_RETURN(0);
}

sql/log.cc
@ -16,6 +16,7 @@
|
||||
|
||||
|
||||
/* logging of commands */
|
||||
/* TODO: Abort logging when we get an error in reading or writing log files */
|
||||
|
||||
#include "mysql_priv.h"
|
||||
#include "sql_acl.h"
|
||||
@ -523,14 +524,12 @@ void MYSQL_LOG::new_file()
|
||||
}
|
||||
|
||||
|
||||
void MYSQL_LOG::write(THD *thd,enum enum_server_command command,
|
||||
bool MYSQL_LOG::write(THD *thd,enum enum_server_command command,
|
||||
const char *format,...)
|
||||
{
|
||||
if (is_open() && (what_to_log & (1L << (uint) command)))
|
||||
{
|
||||
va_list args;
|
||||
va_start(args,format);
|
||||
char buff[32];
|
||||
int error=0;
|
||||
VOID(pthread_mutex_lock(&LOCK_log));
|
||||
|
||||
/* Test if someone closed after the is_open test */
|
||||
@ -538,14 +537,17 @@ void MYSQL_LOG::write(THD *thd,enum enum_server_command command,
|
||||
{
|
||||
time_t skr;
|
||||
ulong id;
|
||||
int error=0;
|
||||
va_list args;
|
||||
va_start(args,format);
|
||||
char buff[32];
|
||||
|
||||
if (thd)
|
||||
{ // Normal thread
|
||||
if ((thd->options & OPTION_LOG_OFF) &&
|
||||
(thd->master_access & PROCESS_ACL))
|
||||
{
|
||||
VOID(pthread_mutex_unlock(&LOCK_log));
|
||||
return; // No logging
|
||||
return 0; // No logging
|
||||
}
|
||||
id=thd->thread_id;
|
||||
if (thd->user_time || !(skr=thd->query_start()))
|
||||
@ -593,115 +595,184 @@ void MYSQL_LOG::write(THD *thd,enum enum_server_command command,
|
||||
write_error=1;
|
||||
sql_print_error(ER(ER_ERROR_ON_WRITE),name,error);
|
||||
}
|
||||
va_end(args);
|
||||
VOID(pthread_mutex_unlock(&LOCK_log));
|
||||
}
|
||||
va_end(args);
|
||||
VOID(pthread_mutex_unlock(&LOCK_log));
|
||||
return error != 0;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Write to binary log in a format to be used for replication */
|
||||
|
||||
void MYSQL_LOG::write(Query_log_event* event_info)
|
||||
bool MYSQL_LOG::write(Query_log_event* event_info)
|
||||
{
|
||||
/* In most cases this is only called if 'is_open()' is true */
|
||||
bool error=1;
|
||||
VOID(pthread_mutex_lock(&LOCK_log));
|
||||
if (is_open())
|
||||
{
|
||||
VOID(pthread_mutex_lock(&LOCK_log));
|
||||
if (is_open())
|
||||
THD *thd=event_info->thd;
|
||||
IO_CACHE *file = (event_info->cache_stmt ? &thd->transaction.trans_log :
|
||||
&log_file);
|
||||
if ((!(thd->options & OPTION_BIN_LOG) &&
|
||||
thd->master_access & PROCESS_ACL) ||
|
||||
!db_ok(event_info->db, binlog_do_db, binlog_ignore_db))
|
||||
{
|
||||
THD *thd=event_info->thd;
|
||||
if ((!(thd->options & OPTION_BIN_LOG) &&
|
||||
thd->master_access & PROCESS_ACL) ||
|
||||
!db_ok(event_info->db, binlog_do_db, binlog_ignore_db))
|
||||
{
|
||||
VOID(pthread_mutex_unlock(&LOCK_log));
|
||||
return;
|
||||
}
|
||||
|
||||
if (thd->last_insert_id_used)
|
||||
{
|
||||
Intvar_log_event e((uchar)LAST_INSERT_ID_EVENT, thd->last_insert_id);
|
||||
if (e.write(&log_file))
|
||||
{
|
||||
sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
|
||||
goto err;
|
||||
}
|
||||
}
|
||||
if (thd->insert_id_used)
|
||||
{
|
||||
Intvar_log_event e((uchar)INSERT_ID_EVENT, thd->last_insert_id);
|
||||
if (e.write(&log_file))
|
||||
{
|
||||
sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
|
||||
goto err;
|
||||
}
|
||||
}
|
||||
if (thd->convert_set)
|
||||
{
|
||||
char buf[1024] = "SET CHARACTER SET ";
|
||||
char* p = strend(buf);
|
||||
p = strmov(p, thd->convert_set->name);
|
||||
int save_query_length = thd->query_length;
|
||||
// just in case somebody wants it later
|
||||
thd->query_length = (uint)(p - buf);
|
||||
Query_log_event e(thd, buf);
|
||||
if (e.write(&log_file))
|
||||
{
|
||||
sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
|
||||
goto err;
|
||||
}
|
||||
thd->query_length = save_query_length; // clean up
|
||||
}
|
||||
if (event_info->write(&log_file) || flush_io_cache(&log_file))
|
||||
{
|
||||
sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
|
||||
}
|
||||
err:
|
||||
VOID(pthread_cond_broadcast(&COND_binlog_update));
|
||||
VOID(pthread_mutex_unlock(&LOCK_log));
|
||||
return 0;
|
||||
}
|
||||
VOID(pthread_mutex_unlock(&LOCK_log));
|
||||
|
||||
if (thd->last_insert_id_used)
|
||||
{
|
||||
Intvar_log_event e((uchar)LAST_INSERT_ID_EVENT, thd->last_insert_id);
|
||||
if (e.write(file))
|
||||
goto err;
|
||||
}
|
||||
if (thd->insert_id_used)
|
||||
{
|
||||
Intvar_log_event e((uchar)INSERT_ID_EVENT, thd->last_insert_id);
|
||||
if (e.write(file))
|
||||
goto err;
|
||||
}
|
||||
if (thd->convert_set)
|
||||
{
|
||||
char buf[1024] = "SET CHARACTER SET ";
|
||||
char* p = strend(buf);
|
||||
p = strmov(p, thd->convert_set->name);
|
||||
int save_query_length = thd->query_length;
|
||||
// just in case somebody wants it later
|
||||
thd->query_length = (uint)(p - buf);
|
||||
Query_log_event e(thd, buf);
|
||||
if (e.write(file))
|
||||
goto err;
|
||||
thd->query_length = save_query_length; // clean up
|
||||
}
|
||||
if (event_info->write(file) ||
|
||||
file == &log_file && flush_io_cache(file))
|
||||
goto err;
|
||||
error=0;
|
||||
|
||||
err:
|
||||
if (error)
|
||||
{
|
||||
if (my_errno == EFBIG)
|
||||
my_error(ER_TRANS_CACHE_FULL, MYF(0));
|
||||
else
|
||||
my_error(ER_ERROR_ON_WRITE, MYF(0), name, errno);
|
||||
write_error=1;
|
||||
}
|
||||
if (file == &log_file)
|
||||
VOID(pthread_cond_broadcast(&COND_binlog_update));
|
||||
}
|
||||
else
|
||||
error=0;
|
||||
VOID(pthread_mutex_unlock(&LOCK_log));
|
||||
return error;
|
||||
}

/*
  Write a cached log entry to the binary log.
  We only come here if there is something in the cache.
  'cache' needs to be reinitialized after this function returns.
*/

bool MYSQL_LOG::write(IO_CACHE *cache)
{
  VOID(pthread_mutex_lock(&LOCK_log));
  bool error=1;
  if (is_open())
  {
    uint length;
    my_off_t start_pos=my_b_tell(&log_file);

    if (reinit_io_cache(cache, READ_CACHE, 0, 0, 0))
    {
      if (!write_error)
        sql_print_error(ER(ER_ERROR_ON_WRITE), cache->file_name, errno);
      goto err;
    }
    while ((length=my_b_fill(cache)))
    {
      if (my_b_write(&log_file, cache->rc_pos, length))
      {
        if (!write_error)
          sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
        goto err;
      }
      cache->rc_pos=cache->rc_end;              // Mark buffer used up
    }
    if (flush_io_cache(&log_file))
    {
      if (!write_error)
        sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
      goto err;
    }
    if (cache->error)                           // Error on read
    {
      if (!write_error)
        sql_print_error(ER(ER_ERROR_ON_READ), cache->file_name, errno);
      goto err;
    }
    error=0;
  }

err:
  if (error)
    write_error=1;
  else
    VOID(pthread_cond_broadcast(&COND_binlog_update));

  VOID(pthread_mutex_unlock(&LOCK_log));
  return error;
}
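The `my_b_fill()`/`my_b_write()` loop above simply drains the statement cache into the log in buffer-sized chunks. A standalone sketch of that copy loop (the chunk size and all names here are illustrative, not MySQL's `IO_CACHE` API):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// Drain 'cache' into 'log' in fixed-size chunks, the way the loop above
// repeatedly refills the read buffer and writes it to the log file.
static bool copy_cache_to_log(const std::string &cache, std::string &log,
                              std::size_t chunk = 8)
{
  for (std::size_t pos = 0; pos < cache.size(); pos += chunk)
  {
    std::size_t length = std::min(chunk, cache.size() - pos); // my_b_fill()
    log.append(cache, pos, length);                           // my_b_write()
  }
  return false;                                 // false == success, as above
}
```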


bool MYSQL_LOG::write(Load_log_event* event_info)
{
  bool error=0;
  VOID(pthread_mutex_lock(&LOCK_log));
  if (is_open())
  {
    THD *thd=event_info->thd;
    if ((thd->options & OPTION_BIN_LOG) ||
        !(thd->master_access & PROCESS_ACL))
    {
      if (event_info->write(&log_file) || flush_io_cache(&log_file))
      {
        if (!write_error)
          sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
        error=write_error=1;
      }
      VOID(pthread_cond_broadcast(&COND_binlog_update));
    }
  }
  VOID(pthread_mutex_unlock(&LOCK_log));
  return error;
}


/* Write update log in a format suitable for incremental backup */

bool MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
                      time_t query_start)
{
  bool error=0;
  if (is_open())
  {
    time_t current_time;
    VOID(pthread_mutex_lock(&LOCK_log));
    if (is_open())
    {                                           // Safety against reopen
      int tmp_errno=0;
      char buff[80],*end;
      end=buff;
      if (!(thd->options & OPTION_UPDATE_LOG) &&
          (thd->master_access & PROCESS_ACL))
      {
        VOID(pthread_mutex_unlock(&LOCK_log));
        return 0;
      }
      if ((specialflag & SPECIAL_LONG_LOG_FORMAT) || query_start)
      {
@@ -722,14 +793,14 @@ void MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
                  start->tm_min,
                  start->tm_sec);
          if (my_b_write(&log_file, (byte*) buff,24))
            tmp_errno=errno;
        }
        if (my_b_printf(&log_file, "# User@Host: %s[%s] @ %s [%s]\n",
                        thd->priv_user,
                        thd->user,
                        thd->host ? thd->host : "",
                        thd->ip ? thd->ip : "") == (uint) -1)
          tmp_errno=errno;
      }
      if (query_start)
      {
@@ -739,12 +810,12 @@ void MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
                        (ulong) (current_time - query_start),
                        (ulong) (thd->time_after_lock - query_start),
                        (ulong) thd->sent_row_count) == (uint) -1)
          tmp_errno=errno;
      }
      if (thd->db && strcmp(thd->db,db))
      {                                         // Database changed
        if (my_b_printf(&log_file,"use %s;\n",thd->db) == (uint) -1)
          tmp_errno=errno;
        strmov(db,thd->db);
      }
      if (thd->last_insert_id_used)
@@ -777,7 +848,7 @@ void MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
        *end=0;
        if (my_b_write(&log_file, (byte*) "SET ",4) ||
            my_b_write(&log_file, (byte*) buff+1,(uint) (end-buff)-1))
          tmp_errno=errno;
      }
      if (!query)
      {
@@ -787,29 +858,22 @@ void MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
      if (my_b_write(&log_file, (byte*) query,query_length) ||
          my_b_write(&log_file, (byte*) ";\n",2) ||
          flush_io_cache(&log_file))
        tmp_errno=errno;
      if (tmp_errno)
      {
        error=1;
        if (! write_error)
        {
          write_error=1;
          sql_print_error(ER(ER_ERROR_ON_WRITE),name,error);
        }
      }
    }
    VOID(pthread_mutex_unlock(&LOCK_log));
  }
  return error;
}

#ifdef TO_BE_REMOVED
void MYSQL_LOG::flush()
{
  if (is_open())
    if (flush_io_cache(&log_file) && ! write_error)
    {
      write_error=1;
      sql_print_error(ER(ER_ERROR_ON_WRITE),name,errno);
    }
}
#endif


void MYSQL_LOG::close(bool exiting)
{                                               // One can't set log_type here!
@@ -118,16 +118,18 @@ public:
  ulong thread_id;
#if !defined(MYSQL_CLIENT)
  THD* thd;
  bool cache_stmt;
  Query_log_event(THD* thd_arg, const char* query_arg, bool using_trans=0):
    Log_event(thd_arg->start_time,0,1,thd_arg->server_id), data_buf(0),
    query(query_arg), db(thd_arg->db), q_len(thd_arg->query_length),
    error_code(thd_arg->net.last_errno),
    thread_id(thd_arg->thread_id), thd(thd_arg),
    cache_stmt(using_trans &&
               (thd_arg->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)))
  {
    time_t end_time;
    time(&end_time);
    exec_time = (ulong) (end_time - thd->start_time);
    valid_exec_time = 1;
    db_len = (db) ? (uint32) strlen(db) : 0;
  }
#endif
@@ -121,7 +121,7 @@ int init_io_cache(IO_CACHE *info, File file, uint cachesize,
  }
  /* end_of_file may be changed by user later */
  info->end_of_file= ((type == READ_NET || type == READ_FIFO ) ? 0
                      : ~(my_off_t) 0);
  info->type=type;
  info->error=0;
  info->read_function=(type == READ_NET) ? _my_b_net_read : _my_b_read; /* net | file */
@@ -176,6 +176,8 @@ my_bool reinit_io_cache(IO_CACHE *info, enum cache_type type,
  DBUG_ENTER("reinit_io_cache");

  info->seek_not_done= test(info->file >= 0);   /* Seek not done */

  /* If the whole file is in memory, avoid flushing to disk */
  if (! clear_cache &&
      seek_offset >= info->pos_in_file &&
      seek_offset <= info->pos_in_file +
@@ -186,8 +188,12 @@ my_bool reinit_io_cache(IO_CACHE *info, enum cache_type type,
    info->rc_end=info->rc_pos;
    info->end_of_file=my_b_tell(info);
  }
  else if (type == WRITE_CACHE)
  {
    if (info->type == READ_CACHE)
      info->rc_end=info->buffer+info->buffer_length;
    info->end_of_file = ~(my_off_t) 0;
  }
  info->rc_pos=info->rc_request_pos+(seek_offset-info->pos_in_file);
#ifdef HAVE_AIOWAIT
  my_aiowait(&info->aio_result);                /* Wait for outstanding req */
@@ -195,11 +201,20 @@ my_bool reinit_io_cache(IO_CACHE *info, enum cache_type type,
  }
  else
  {
    /*
      If we change from WRITE_CACHE to READ_CACHE, assume that everything
      after the current positions should be ignored
    */
    if (info->type == WRITE_CACHE && type == READ_CACHE)
      info->end_of_file=my_b_tell(info);
    /* No need to flush cache if we want to reuse it */
    if ((type != WRITE_CACHE || !clear_cache) && flush_io_cache(info))
      DBUG_RETURN(1);
    if (info->pos_in_file != seek_offset)
    {
      info->pos_in_file=seek_offset;
      info->seek_not_done=1;
    }
    info->rc_request_pos=info->rc_pos=info->buffer;
    if (type == READ_CACHE || type == READ_NET || type == READ_FIFO)
    {
@@ -210,7 +225,7 @@ my_bool reinit_io_cache(IO_CACHE *info, enum cache_type type,
      info->rc_end=info->buffer+info->buffer_length-
        (seek_offset & (IO_SIZE-1));
      info->end_of_file= ((type == READ_NET || type == READ_FIFO) ? 0 :
                          ~(my_off_t) 0);
    }
  }
  info->type=type;
@@ -536,6 +551,11 @@ int _my_b_write(register IO_CACHE *info, const byte *Buffer, uint Count)
  Buffer+=rest_length;
  Count-=rest_length;
  info->rc_pos+=rest_length;
  if (info->pos_in_file+info->buffer_length > info->end_of_file)
  {
    my_errno=errno=EFBIG;
    return info->error = -1;
  }
  if (flush_io_cache(info))
    return 1;
  if (Count >= IO_SIZE)
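The new `end_of_file` check in `_my_b_write()` above is what enforces `max_binlog_cache_size`: the transaction cache refuses to grow past its configured limit and reports EFBIG rather than spilling. A minimal standalone model of that bound (the struct and its members are illustrative, not the real `IO_CACHE`):

```cpp
#include <cerrno>
#include <cstddef>
#include <string>

// Sketch of the bound added to _my_b_write() above: a write cache that
// refuses to grow past a configured maximum and reports EFBIG, the same
// errno the patch uses, instead of accepting an oversized transaction.
struct BoundedCache {
  std::string buf;
  std::size_t end_of_file;     // stands in for max_binlog_cache_size
  int error = 0;

  explicit BoundedCache(std::size_t max_size) : end_of_file(max_size) {}

  int write(const char* data, std::size_t count) {
    if (buf.size() + count > end_of_file) {
      errno = EFBIG;           // caller maps this to ER_TRANS_CACHE_FULL
      return error = -1;
    }
    buf.append(data, count);
    return 0;
  }
};
```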
@@ -507,9 +507,9 @@ extern ulong keybuff_size,sortbuff_size,max_item_sort_length,table_cache_size,
             net_read_timeout,net_write_timeout,
             what_to_log,flush_time,
             max_tmp_tables,max_heap_table_size,query_buff_size,
             lower_case_table_names,thread_stack,thread_stack_min,
             binlog_cache_size, max_binlog_cache_size;
extern ulong specialflag, current_pid;
extern bool low_priority_updates;
extern bool opt_sql_bin_update;
extern char language[LIBLEN],reg_ext[FN_EXTLEN],blob_newline;

@@ -201,10 +201,10 @@ ulong keybuff_size,sortbuff_size,max_item_sort_length,table_cache_size,
      query_buff_size, lower_case_table_names, mysqld_net_retry_count,
      net_interactive_timeout, slow_launch_time = 2L,
      net_read_timeout,net_write_timeout,slave_open_temp_tables=0;
ulong thread_cache_size=0, binlog_cache_size=0, max_binlog_cache_size=0;
volatile ulong cached_thread_count=0;

// replication parameters, if master_host is not NULL, we are a slave
my_string master_user = (char*) "test", master_password = 0, master_host=0,
  master_info_file = (char*) "master.info";
const char *localhost=LOCAL_HOST;
@@ -1496,23 +1496,21 @@ int main(int argc, char **argv)
  if (opt_update_log)
    open_log(&mysql_update_log, glob_hostname, opt_update_logname, "",
             LOG_NEW);

  if (!server_id)
    server_id= !master_host ? 1 : 2;
  if (opt_bin_log)
  {
    if (!opt_bin_logname)
    {
      char tmp[FN_REFLEN];
      strnmov(tmp,glob_hostname,FN_REFLEN-5);
      strmov(strcend(tmp,'.'),"-bin");
      opt_bin_logname=my_strdup(tmp,MYF(MY_WME));
    }
    mysql_bin_log.set_index_file_name(opt_binlog_index_name);
    open_log(&mysql_bin_log, glob_hostname, opt_bin_logname, "-bin",
             LOG_BIN);
  }

  if (opt_slow_log)
@@ -1620,19 +1618,14 @@ int main(int argc, char **argv)
  }

  // slave thread
  if (master_host)
  {
    pthread_t hThread;
    if (!opt_skip_slave_start &&
        pthread_create(&hThread, &connection_attrib, handle_slave, 0))
      sql_print_error("Warning: Can't create thread to handle slave");
    else if (opt_skip_slave_start)
      init_master_info(&glob_mi);
  }

  printf(ER(ER_READY),my_progname,server_version,"");
@@ -2205,7 +2198,8 @@ enum options {
               OPT_BDB_HOME, OPT_BDB_LOG,
               OPT_BDB_TMP, OPT_BDB_NOSYNC,
               OPT_BDB_LOCK, OPT_BDB_SKIP,
               OPT_BDB_RECOVER, OPT_BDB_SHARED,
               OPT_MASTER_HOST,
               OPT_MASTER_USER, OPT_MASTER_PASSWORD,
               OPT_MASTER_PORT, OPT_MASTER_INFO_FILE,
               OPT_MASTER_CONNECT_RETRY, OPT_SQL_BIN_UPDATE_SAME,
@@ -2233,6 +2227,7 @@ static struct option long_options[] = {
  {"bdb-logdir",          required_argument, 0, (int) OPT_BDB_LOG},
  {"bdb-recover",         no_argument,       0, (int) OPT_BDB_RECOVER},
  {"bdb-no-sync",         no_argument,       0, (int) OPT_BDB_NOSYNC},
  {"bdb-shared-data",     no_argument,       0, (int) OPT_BDB_SHARED},
  {"bdb-tmpdir",          required_argument, 0, (int) OPT_BDB_TMP},
#endif
  {"big-tables",          no_argument,       0, (int) OPT_BIG_TABLES},
@@ -2323,7 +2318,7 @@ static struct option long_options[] = {
   (int) OPT_REPLICATE_REWRITE_DB},
  {"safe-mode",           no_argument,       0, (int) OPT_SAFE},
  {"socket",              required_argument, 0, (int) OPT_SOCKET},
  {"server-id",           required_argument, 0, (int) OPT_SERVER_ID},
  {"set-variable",        required_argument, 0, 'O'},
#ifdef HAVE_BERKELEY_DB
  {"skip-bdb",            no_argument,       0, (int) OPT_BDB_SKIP},
@@ -2363,9 +2358,14 @@ CHANGEABLE_VAR changeable_vars[] = {
#ifdef HAVE_BERKELEY_DB
  { "bdb_cache_size",        (long*) &berkeley_cache_size,
    KEY_CACHE_SIZE, 20*1024, (long) ~0, 0, IO_SIZE },
  { "bdb_max_lock",          (long*) &berkeley_max_lock,
    1000, 0, (long) ~0, 0, 1 },
  /* QQ: The following should be removed soon! */
  { "bdb_lock_max",          (long*) &berkeley_max_lock,
    1000, 0, (long) ~0, 0, 1 },
#endif
  { "binlog_cache_size",     (long*) &binlog_cache_size,
    32*1024L, IO_SIZE, ~0L, 0, IO_SIZE },
  { "connect_timeout",       (long*) &connect_timeout,
    CONNECT_TIMEOUT, 2, 65535, 0, 1 },
  { "delayed_insert_timeout",(long*) &delayed_insert_timeout,
@@ -2390,7 +2390,7 @@ CHANGEABLE_VAR changeable_vars[] = {
  {"innobase_buffer_pool_size",
   (long*) &innobase_buffer_pool_size, 8*1024*1024L, 1024*1024L,
   ~0L, 0, 1024*1024L},
  {"innobase_additional_mem_pool_size",
   (long*) &innobase_additional_mem_pool_size, 1*1024*1024L, 512*1024L,
   ~0L, 0, 1024},
  {"innobase_file_io_threads",
@@ -2408,6 +2408,8 @@ CHANGEABLE_VAR changeable_vars[] = {
    IF_WIN(1,0), 0, 1, 0, 1 },
  { "max_allowed_packet",    (long*) &max_allowed_packet,
    1024*1024L, 80, 17*1024*1024L, MALLOC_OVERHEAD, 1024 },
  { "max_binlog_cache_size", (long*) &max_binlog_cache_size,
    ~0L, IO_SIZE, ~0L, 0, IO_SIZE },
  { "max_connections",       (long*) &max_connections,
    100, 1, 16384, 0, 1 },
  { "max_connect_errors",    (long*) &max_connect_errors,
@@ -2465,10 +2467,12 @@ struct show_var_st init_vars[]= {
#ifdef HAVE_BERKELEY_DB
  {"bdb_cache_size",          (char*) &berkeley_cache_size,           SHOW_LONG},
  {"bdb_home",                (char*) &berkeley_home,                 SHOW_CHAR_PTR},
  {"bdb_max_lock",            (char*) &berkeley_max_lock,             SHOW_LONG},
  {"bdb_logdir",              (char*) &berkeley_logdir,               SHOW_CHAR_PTR},
  {"bdb_shared_data",         (char*) &berkeley_shared_data,          SHOW_BOOL},
  {"bdb_tmpdir",              (char*) &berkeley_tmpdir,               SHOW_CHAR_PTR},
#endif
  {"binlog_cache_size",       (char*) &binlog_cache_size,             SHOW_LONG},
  {"character_set",           default_charset,                        SHOW_CHAR},
  {"character_sets",          (char*) &charsets_list,                 SHOW_CHAR_PTR},
  {"concurrent_insert",       (char*) &myisam_concurrent_insert,      SHOW_MY_BOOL},
@@ -2497,6 +2501,7 @@ struct show_var_st init_vars[]= {
  {"low_priority_updates",    (char*) &low_priority_updates,          SHOW_BOOL},
  {"lower_case_table_names",  (char*) &lower_case_table_names,        SHOW_LONG},
  {"max_allowed_packet",      (char*) &max_allowed_packet,            SHOW_LONG},
  {"max_binlog_cache_size",   (char*) &max_binlog_cache_size,         SHOW_LONG},
  {"max_connections",         (char*) &max_connections,               SHOW_LONG},
  {"max_connect_errors",      (char*) &max_connect_errors,            SHOW_LONG},
  {"max_delayed_threads",     (char*) &max_insert_delayed_threads,    SHOW_LONG},
@@ -2711,8 +2716,9 @@ static void usage(void)
  --bdb-lock-detect=#   Berkeley lock detect\n\
                        (DEFAULT, OLDEST, RANDOM or YOUNGEST, # sec)\n\
  --bdb-logdir=directory  Berkeley DB log file directory\n\
  --bdb-no-sync         Don't synchronously flush logs\n\
  --bdb-recover         Start Berkeley DB in recover mode\n\
  --bdb-shared-data     Start Berkeley DB in multi-process mode\n\
  --bdb-tmpdir=directory  Berkeley DB tempfile name\n\
  --skip-bdb            Don't use berkeley db (will save memory)\n\
");
@@ -3224,6 +3230,10 @@ static void get_options(int argc,char **argv)
      }
      break;
    }
    case OPT_BDB_SHARED:
      berkeley_init_flags&= ~(DB_PRIVATE);
      berkeley_shared_data=1;
      break;
    case OPT_BDB_SKIP:
      berkeley_skip=1;
      break;
Binary file not shown.
@@ -207,3 +207,4 @@
"Tabulka '%-.64s' je ozna-Bčena jako porušená a měla by být opravena",-A
"Tabulka '%-.64s' je ozna-Bčena jako porušená a poslední (automatická?) oprava se nezdařila",-A
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -201,3 +201,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -202,3 +202,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -201,3 +201,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -200,3 +200,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -198,3 +198,4 @@
"La tabella '%-.64s' e' segnalata come rovinata e deve essere riparata",
"La tabella '%-.64s' e' segnalata come rovinata e l'ultima ricostruzione (automatica?) e' fallita",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -200,3 +200,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
@@ -200,3 +200,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
@@ -200,3 +200,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -202,3 +202,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
@@ -202,3 +202,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -201,3 +201,4 @@
"Таблица '%-.64s' помечена как испорченная и должна быть исправлена",
"Таблица '%-.64s' помечена как испорченная и последняя попытка исправления (автоматическая?) не удалась",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -206,3 +206,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
Binary file not shown.
@@ -199,3 +199,4 @@
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again",
@@ -198,3 +198,5 @@
"Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE",
"Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades",
"Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK",
#ER_TRANS_CACHE_FULL
"Transaktionen krävde mera än 'max_binlog_cache_size' minne. Utöka denna mysqld variabel och försök på nytt",
Binary file not shown.
@@ -198,3 +198,4 @@
"Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE",
"Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades",
"Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK",
"Transaktionen krävde mera än 'max_binlog_cache_size' minne. Utöka denna mysqld variabel och försök på nytt",
@ -454,15 +454,15 @@ void close_temporary_tables(THD *thd)
|
||||
next=table->next;
|
||||
close_temporary(table);
|
||||
}
|
||||
if(query && mysql_bin_log.is_open())
|
||||
{
|
||||
uint save_query_len = thd->query_length;
|
||||
*--p = 0;
|
||||
thd->query_length = (uint)(p-query);
|
||||
Query_log_event qinfo(thd, query);
|
||||
mysql_bin_log.write(&qinfo);
|
||||
thd->query_length = save_query_len;
|
||||
}
|
||||
if (query && mysql_bin_log.is_open())
|
||||
{
|
||||
uint save_query_len = thd->query_length;
|
||||
*--p = 0;
|
||||
thd->query_length = (uint)(p-query);
|
||||
Query_log_event qinfo(thd, query);
|
||||
mysql_bin_log.write(&qinfo);
|
||||
thd->query_length = save_query_len;
|
||||
}
|
||||
thd->temporary_tables=0;
|
||||
}
|
||||
|
||||
|
@ -121,8 +121,10 @@ THD::THD():user_time(0),fatal_error(0),last_insert_id_used(0),
|
||||
#ifdef USING_TRANSACTIONS
|
||||
bzero((char*) &transaction,sizeof(transaction));
|
||||
if (open_cached_file(&transaction.trans_log,
|
||||
mysql_tmpdir,LOG_PREFIX,0,MYF(MY_WME)))
|
||||
mysql_tmpdir, LOG_PREFIX, binlog_cache_size,
|
||||
MYF(MY_WME)))
|
||||
killed=1;
|
||||
transaction.trans_log.end_of_file= max_binlog_cache_size;
|
||||
#endif
|
||||
|
||||
#ifdef __WIN__
|
||||
|
@ -74,12 +74,12 @@ public:
|
||||
void open(const char *log_name,enum_log_type log_type,
|
||||
const char *new_name=0);
|
||||
void new_file(void);
|
||||
void write(THD *thd, enum enum_server_command command,const char *format,...);
|
||||
void write(THD *thd, const char *query, uint query_length,
|
||||
bool write(THD *thd, enum enum_server_command command,const char *format,...);
|
||||
bool write(THD *thd, const char *query, uint query_length,
|
||||
time_t query_start=0);
|
||||
void write(Query_log_event* event_info); // binary log write
|
||||
void write(Load_log_event* event_info);
|
||||
|
||||
bool write(Query_log_event* event_info); // binary log write
|
||||
bool write(Load_log_event* event_info);
|
||||
bool write(IO_CACHE *cache);
|
||||
int generate_new_name(char *new_name,const char *old_name);
|
||||
void make_log_name(char* buf, const char* log_ident);
|
||||
bool is_active(const char* log_file_name);
|
||||
|
@ -158,14 +158,14 @@ exit:
|
||||
are 2 digits (raid directories).
|
||||
*/
|
||||
|
||||
static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path,
|
||||
uint level)
|
||||
static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *org_path,
|
||||
uint level)
|
||||
{
|
||||
long deleted=0;
|
||||
ulong found_other_files=0;
|
||||
char filePath[FN_REFLEN];
|
||||
DBUG_ENTER("mysql_rm_known_files");
|
||||
DBUG_PRINT("enter",("path: %s", path));
|
||||
DBUG_PRINT("enter",("path: %s", org_path));
|
||||
/* remove all files with known extensions */
|
||||
|
||||
for (uint idx=2 ;
|
||||
@ -181,7 +181,7 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path,
|
||||
{
|
||||
char newpath[FN_REFLEN];
|
||||
MY_DIR *new_dirp;
|
||||
strxmov(newpath,path,"/",file->name,NullS);
|
||||
strxmov(newpath,org_path,"/",file->name,NullS);
|
||||
unpack_filename(newpath,newpath);
|
||||
if ((new_dirp = my_dir(newpath,MYF(MY_DONT_SORT))))
|
||||
{
|
||||
@ -199,7 +199,7 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path,
|
||||
found_other_files++;
|
||||
continue;
|
||||
}
|
||||
strxmov(filePath,path,"/",file->name,NullS);
|
||||
strxmov(filePath,org_path,"/",file->name,NullS);
|
||||
unpack_filename(filePath,filePath);
|
||||
if (my_delete(filePath,MYF(MY_WME)))
|
||||
{
|
||||
@ -224,9 +224,9 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path,
|
||||
*/
|
||||
if (!found_other_files)
|
||||
{
|
||||
#ifdef HAVE_READLINK
|
||||
char tmp_path[FN_REFLEN];
|
||||
path=unpack_filename(tmp_path,path);
|
||||
char *path=unpack_filename(tmp_path,org_path);
|
||||
#ifdef HAVE_READLINK
|
||||
int linkcount = readlink(path,filePath,sizeof(filePath)-1);
|
||||
if (linkcount > 0) // If the path was a symbolic link
|
||||
{
|
||||
|
@@ -126,7 +126,7 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
   SQL_SELECT	*select;
   READ_RECORD	info;
   bool		using_limit=limit != HA_POS_ERROR;
-  bool		use_generate_table;
+  bool		use_generate_table,using_transactions;
   DBUG_ENTER("mysql_delete");

   if (!table_list->db)
@@ -214,18 +214,20 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
   (void) table->file->extra(HA_EXTRA_READCHECK);
   if (options & OPTION_QUICK)
     (void) table->file->extra(HA_EXTRA_NORMAL);
-  if (deleted)
+  using_transactions=table->file->has_transactions();
+  if (deleted && (error == 0 || !using_transactions))
   {
     mysql_update_log.write(thd,thd->query, thd->query_length);
     if (mysql_bin_log.is_open())
     {
-      Query_log_event qinfo(thd, thd->query);
-      mysql_bin_log.write(&qinfo);
+      Query_log_event qinfo(thd, thd->query, using_transactions);
+      if (mysql_bin_log.write(&qinfo) && using_transactions)
+	error=1;
     }
-    if (!table->file->has_transactions())
+    if (!using_transactions)
       thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
   }
-  if (ha_autocommit_or_rollback(thd,error >= 0))
+  if (using_transactions && ha_autocommit_or_rollback(thd,error >= 0))
     error=1;
   if (thd->lock)
   {
@@ -102,6 +102,7 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields,
   int error;
   bool log_on= ((thd->options & OPTION_UPDATE_LOG) ||
		!(thd->master_access & PROCESS_ACL));
+  bool using_transactions;
   uint value_count;
   uint save_time_stamp;
   ulong counter = 1;
@@ -254,18 +255,21 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields,
     thd->insert_id(id);			// For update log
   else if (table->next_number_field)
     id=table->next_number_field->val_int();	// Return auto_increment value
-  if (info.copied || info.deleted)
+  using_transactions=table->file->has_transactions();
+  if ((info.copied || info.deleted) && (error == 0 || !using_transactions))
   {
     mysql_update_log.write(thd, thd->query, thd->query_length);
     if (mysql_bin_log.is_open())
     {
-      Query_log_event qinfo(thd, thd->query);
-      mysql_bin_log.write(&qinfo);
+      Query_log_event qinfo(thd, thd->query, using_transactions);
+      if (mysql_bin_log.write(&qinfo) && using_transactions)
+	error=1;
     }
-    if (!table->file->has_transactions())
+    if (!using_transactions)
       thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
   }
-  error=ha_autocommit_or_rollback(thd,error);
+  if (using_transactions)
+    error=ha_autocommit_or_rollback(thd,error);
   if (thd->lock)
   {
     mysql_unlock_tables(thd, thd->lock);
@@ -1265,7 +1269,8 @@ bool select_insert::send_eof()
   mysql_update_log.write(thd,thd->query,thd->query_length);
   if (mysql_bin_log.is_open())
   {
-    Query_log_event qinfo(thd, thd->query);
+    Query_log_event qinfo(thd, thd->query,
+			  table->file->has_transactions());
     mysql_bin_log.write(&qinfo);
   }
   return 0;
@@ -245,10 +245,11 @@ int mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,

   if (!table->file->has_transactions())
     thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
-  if (!read_file_from_client)
+  if (!read_file_from_client && mysql_bin_log.is_open())
   {
     ex->skip_lines = save_skip_lines;
-    Load_log_event qinfo(thd, ex, table->table_name, fields, handle_duplicates);
+    Load_log_event qinfo(thd, ex, table->table_name, fields,
+			 handle_duplicates);
     mysql_bin_log.write(&qinfo);
   }
   DBUG_RETURN(0);
@@ -172,7 +172,7 @@ check_connections(THD *thd)
		       vio_description(net->vio)));
   if (!thd->host)			// If TCP/IP connection
   {
-    char ip[17];
+    char ip[30];

     if (vio_peer_addr(net->vio,ip))
       return (ER_BAD_HOST_ERROR);
@@ -718,7 +718,7 @@ bool do_command(THD *thd)
   case COM_DROP_DB:
   {
     char *db=thd->strdup(packet+1);
-    if (check_access(thd,DROP_ACL,db,0,1))
+    if (check_access(thd,DROP_ACL,db,0,1) || end_active_trans(thd))
       break;
     mysql_log.write(thd,command,db);
     mysql_rm_db(thd,db,0);
@@ -1136,7 +1136,10 @@ mysql_execute_command(void)
       goto error;			/* purecov: inspected */
     if (grant_option && check_grant(thd,INDEX_ACL,tables))
       goto error;
-    res = mysql_create_index(thd, tables, lex->key_list);
+    if (end_active_trans(thd))
+      res= -1;
+    else
+      res = mysql_create_index(thd, tables, lex->key_list);
     break;

   case SQLCOM_SLAVE_START:
@@ -1224,7 +1227,9 @@ mysql_execute_command(void)
	  goto error;
	}
     }
-    if (mysql_rename_tables(thd,tables))
+    if (end_active_trans(thd))
+      res= -1;
+    else if (mysql_rename_tables(thd,tables))
       res= -1;
     break;
   }
@@ -1428,7 +1433,10 @@ mysql_execute_command(void)
   {
     if (check_table_access(thd,DROP_ACL,tables))
       goto error;			/* purecov: inspected */
-    res = mysql_rm_table(thd,tables,lex->drop_if_exists);
+    if (end_active_trans(thd))
+      res= -1;
+    else
+      res = mysql_rm_table(thd,tables,lex->drop_if_exists);
   }
   break;
   case SQLCOM_DROP_INDEX:
@@ -1438,7 +1446,10 @@ mysql_execute_command(void)
       goto error;			/* purecov: inspected */
     if (grant_option && check_grant(thd,INDEX_ACL,tables))
       goto error;
-    res = mysql_drop_index(thd, tables, lex->drop_list);
+    if (end_active_trans(thd))
+      res= -1;
+    else
+      res = mysql_drop_index(thd, tables, lex->drop_list);
     break;
   case SQLCOM_SHOW_DATABASES:
 #if defined(DONT_ALLOW_SHOW_COMMANDS)
@@ -1643,7 +1654,8 @@ mysql_execute_command(void)
   }
   case SQLCOM_DROP_DB:
   {
-    if (check_access(thd,DROP_ACL,lex->name,0,1))
+    if (check_access(thd,DROP_ACL,lex->name,0,1) ||
+	end_active_trans(thd))
       break;
     mysql_rm_db(thd,lex->name,lex->drop_if_exists);
     break;
@@ -1427,6 +1427,17 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
   thd->count_cuted_fields=0;		/* Don`t calc cuted fields */
   new_table->time_stamp=save_time_stamp;

+#if defined( __WIN__) || defined( __EMX__)
+  /*
+    We must do the COMMIT here so that we can close and rename the
+    temporary table (as windows can't rename open tables)
+  */
+  if (ha_commit_stmt(thd))
+    error=1;
+  if (ha_commit(thd))
+    error=1;
+#endif
+
   if (table->tmp_table)
   {
     /* We changed a temporary table */
@@ -1544,6 +1555,8 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
	goto err;
       }
     }
+
+#if !(defined( __WIN__) || defined( __EMX__))
   /* The ALTER TABLE is always in it's own transaction */
   error = ha_commit_stmt(thd);
   if (ha_commit(thd))
@@ -1554,6 +1567,7 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
     VOID(pthread_mutex_unlock(&LOCK_open));
     goto err;
   }
+#endif

   thd->proc_info="end";
   mysql_update_log.write(thd, thd->query,thd->query_length);
@@ -49,7 +49,7 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
		 thr_lock_type lock_type)
 {
   bool		using_limit=limit != HA_POS_ERROR;
-  bool		used_key_is_modified;
+  bool		used_key_is_modified, using_transactions;
   int		error=0;
   uint		save_time_stamp, used_index;
   key_map	old_used_keys;
@@ -237,18 +237,20 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
   thd->proc_info="end";
   VOID(table->file->extra(HA_EXTRA_READCHECK));
   table->time_stamp=save_time_stamp;	// Restore auto timestamp pointer
-  if (updated)
+  using_transactions=table->file->has_transactions();
+  if (updated && (error == 0 || !using_transactions))
   {
     mysql_update_log.write(thd,thd->query,thd->query_length);
     if (mysql_bin_log.is_open())
     {
-      Query_log_event qinfo(thd, thd->query);
-      mysql_bin_log.write(&qinfo);
+      Query_log_event qinfo(thd, thd->query, using_transactions);
+      if (mysql_bin_log.write(&qinfo) && using_transactions)
+	error=1;
     }
-    if (!table->file->has_transactions())
+    if (!using_transactions)
       thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
   }
-  if (ha_autocommit_or_rollback(thd, error >= 0))
+  if (using_transactions && ha_autocommit_or_rollback(thd, error >= 0))
     error=1;
   if (thd->lock)
   {
@@ -2451,6 +2451,7 @@ user:
 keyword:
	ACTION			{}
	| AFTER_SYM		{}
	| AGAINST		{}
	| AGGREGATE_SYM		{}
+	| AUTOCOMMIT		{}
	| AVG_ROW_LENGTH	{}
@@ -34,10 +34,12 @@ set-variable	= record_buffer=2M
 set-variable	= thread_cache=8
 set-variable	= thread_concurrency=8	# Try number of CPU's*2
 set-variable	= myisam_sort_buffer_size=64M
 log-update
+log-bin
+server-id	= 1

 # Uncomment the following if you are using BDB tables
 #set-variable	= bdb_cache_size=384M
 #set-variable	= bdb_max_lock=100000

 # Point the following paths to different dedicated disks
 #tmpdir		= /tmp/
@@ -34,10 +34,12 @@ set-variable	= record_buffer=1M
 set-variable	= myisam_sort_buffer_size=64M
 set-variable	= thread_cache=8
 set-variable	= thread_concurrency=8	# Try number of CPU's*2
 log-update
+log-bin
+server-id	= 1

 # Uncomment the following if you are using BDB tables
 #set-variable	= bdb_cache_size=64M
 #set-variable	= bdb_max_lock=100000

 # Point the following paths to different dedicated disks
 #tmpdir		= /tmp/
@@ -33,10 +33,12 @@ set-variable	= table_cache=64
 set-variable	= sort_buffer=512K
 set-variable	= net_buffer_length=8K
 set-variable	= myisam_sort_buffer_size=8M
 log-update
+log-bin
+server-id	= 1

 # Uncomment the following if you are using BDB tables
 #set-variable	= bdb_cache_size=4M
 #set-variable	= bdb_max_lock=10000

 # Point the following paths to different dedicated disks
 #tmpdir		= /tmp/
@@ -33,12 +33,13 @@ set-variable	= thread_stack=64K
 set-variable	= table_cache=4
 set-variable	= sort_buffer=64K
 set-variable	= net_buffer_length=2K
+server-id	= 1

 # Uncomment the following if you are NOT using BDB tables
 #skip-bdb

 # Uncomment the following if you want to log updates
 #log-update
 #log-bin

 [mysqldump]
 quick