Merged some functions and removed some unused client functions.
Remember UNION for ALTER TABLE.
Added test for whether we are supporting transactions.
Don't allow REPLACE to replace a row when we have generated an auto_increment key.
Fixed bug when using BLOB keys.
Fixed bug in SET @variable=user.

Docs/manual.texi: Added some examples and moved the "Error access denied" section to the error section.
client/mysqltest.c: Changed to use the new mysql_send_query().
include/mysql.h: Changed mysql_reap_query() to mysql_send_query().
libmysql/libmysql.c: Changed mysql_reap_query() to mysql_send_query(). Merged some functions and removed some unused functions.
mysql-test/r/bdb.result: New test case.
mysql-test/r/distinct.result: New test case.
mysql-test/r/key.result: New test case.
mysql-test/r/merge.result: New test case.
mysql-test/r/replace.result: New test case.
mysql-test/t/bdb.test: New test case.
mysql-test/t/key.test: New test case.
mysql-test/t/merge.test: New test case.
mysql-test/t/replace.test: New test case.
mysys/my_lock.c: Moved global lock variable to static.
sql-bench/test-insert.sh: Added test case for index-read only.
sql/field.h: Fixed so that one can optimize ORDER BY with ISAM and GEMINI.
sql/ha_berkeley.cc: Added type casts needed for Windows.
sql/ha_innobase.cc: Removed reference to manual from comment.
sql/ha_myisammrg.cc: Remember UNION for ALTER TABLE.
sql/ha_myisammrg.h: Remember UNION for ALTER TABLE.
sql/handler.cc: Added test for whether we are supporting transactions. Don't allow REPLACE to replace a row when we have generated an auto_increment key.
sql/handler.h: Remember UNION for ALTER TABLE.
sql/key.cc: Fixed bug when using BLOB keys.
sql/mysql_priv.h: Added new variables.
sql/mysqld.cc: Added new variables.
sql/opt_range.cc: Fixed problem with BLOB keys.
sql/opt_sum.cc: Fix for BLOB keys.
sql/sql_class.cc: Added test if we need to init/clean transaction variables.
sql/sql_insert.cc: Fix for REPLACE and auto_increment keys.
sql/sql_parse.cc: Fixed bug in max_user_connections.
sql/sql_select.cc: Fixed problem with key on BLOB.
sql/sql_yacc.yy: Fixed bug in SET @variable=user.
sql/table.cc: Fixed problem with keys on BLOB.
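A minimal usage sketch of the renamed client API (the prototypes match the include/mysql.h change below; the helper name, the error handling, and the result handling are illustrative assumptions, and a connected MYSQL handle is assumed):

    #include <string.h>
    #include "mysql.h"

    int run_query_async(MYSQL *mysql, const char *query)
    {
      MYSQL_RES *res;

      /* Send the query and return at once, as mysqltest.c now does */
      if (mysql_send_query(mysql, query, (unsigned int) strlen(query)))
        return 1;                              /* could not send the query */

      /* ... the client is free to do other work here ... */

      /* Formerly mysql_reap_query(); waits for and parses the result header */
      if (mysql_read_query_result(mysql))
        return 1;                              /* query failed on the server */

      if ((res = mysql_store_result(mysql)))   /* NULL if no result set */
        mysql_free_result(res);
      return 0;
    }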
This commit is contained in:
parent ec5e2f589f
commit 869c89feaa

239 Docs/manual.texi
@ -512,7 +512,7 @@ BDB or Berkeley_db Tables
|
||||
INNOBASE Tables
|
||||
|
||||
* INNOBASE overview::
|
||||
* Innobase restrictions::
|
||||
* INNOBASE restrictions::
|
||||
|
||||
MySQL Tutorial
|
||||
|
||||
@ -580,7 +580,7 @@ Replication in MySQL
|
||||
* Replication Options:: Replication Options in my.cnf
|
||||
* Replication SQL:: SQL Commands related to replication
|
||||
* Replication FAQ:: Frequently Asked Questions about replication
|
||||
* Troubleshooting Replication:: Troubleshooting Replication
* Troubleshooting Replication:: Troubleshooting Replication
|
||||
|
||||
Getting Maximum Performance from MySQL
|
||||
|
||||
@ -706,7 +706,6 @@ Problems and Common Errors
|
||||
* Multiple sql commands:: How to run SQL commands from a text file
|
||||
* Temporary files:: Where @strong{MySQL} stores temporary files
|
||||
* Problems with mysql.sock:: How to protect @file{/tmp/mysql.sock}
|
||||
* Error Access denied:: @code{Access denied} error
|
||||
* Changing MySQL user:: How to run @strong{MySQL} as a normal user
|
||||
* Resetting permissions:: How to reset a forgotten password.
|
||||
* File permissions :: Problems with file permissions
|
||||
@ -723,10 +722,12 @@ Problems and Common Errors
|
||||
|
||||
Some Common Errors When Using MySQL
|
||||
|
||||
* Error Access denied::
|
||||
* Gone away:: @code{MySQL server has gone away} error
|
||||
* Can not connect to server:: @code{Can't connect to [local] MySQL server} error
|
||||
* Blocked host:: @code{Host '...' is blocked} error
|
||||
* Too many connections:: @code{Too many connections} error
|
||||
* Non-transactional tables:: @code{Some non-transactional changed tables couldn't be rolled back} Error
|
||||
* Out of memory:: @code{Out of memory} error
|
||||
* Packet too large:: @code{Packet too large} error
|
||||
* Communication errors:: Communication errors / Aborted connection
|
||||
@ -13806,6 +13807,7 @@ Some things you should be aware about @code{BIGINT} columns:
|
||||
|
||||
@itemize @bullet
|
||||
@item
|
||||
@cindex rounding errors
|
||||
As all arithmetic is done using signed @code{BIGINT} or @code{DOUBLE}
values, you shouldn't use unsigned big integers larger than
@code{9223372036854775807} (63 bits) except with bit functions! If you
|
||||
@ -16419,6 +16421,19 @@ mysql> select TRUNCATE(1.999,1);
mysql> select TRUNCATE(1.999,0);
        -> 1
@end example

Note that as decimal numbers are normally not stored as exact numbers in
computers, but as double values, you may be fooled by the following
result:

@cindex rounding errors
@example
mysql> select TRUNCATE(10.28*100,0);
        -> 1027
@end example

The above happens because 10.28 is actually stored as something like
10.2799999999999999.
@end table
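As a rough sketch of the same rounding effect at the C level (assuming IEEE @code{double} arithmetic; the program below is only illustrative), the value underlying @code{10.28*100} can be printed directly:

@example
#include <stdio.h>

int main(void)
{
  double val = 10.28 * 100;     /* stored as a binary double */
  printf("%.17g\n", val);       /* prints roughly 1027.9999999999999 */
  printf("%d\n", (int) val);    /* truncated to 1027, as TRUNCATE() does */
  return 0;
}
@end example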
|
||||
|
||||
@findex string functions
|
||||
@ -17911,9 +17926,11 @@ column value even if it isn't unique. The following gives the value of
|
||||
column:
|
||||
|
||||
@example
|
||||
substr(MIN(concat(sort,space(6-length(sort)),column),7,length(column)))
|
||||
substr(MIN(concat(rpad(sort,6,' '),column)),7)
|
||||
@end example
|
||||
|
||||
@xref{example-Maximum-column-group-row}.
|
||||
|
||||
@cindex @code{ORDER BY}, aliases in
|
||||
@cindex aliases, in @code{ORDER BY} clauses
|
||||
@cindex @code{GROUP BY}, aliases in
|
||||
@ -20019,6 +20036,16 @@ terminated by carriage return-linefeed pairs, or to read a file
|
||||
containing such lines, specify a @code{LINES TERMINATED BY '\r\n'}
|
||||
clause.
|
||||
|
||||
For example, to read a file of jokes that are separated by a line
of @code{%%} into an SQL table, you can do:
|
||||
|
||||
@example
|
||||
create table jokes (a int not null auto_increment primary key, joke text
|
||||
not null);
|
||||
load data infile "/tmp/jokes.txt" into table jokes fields terminated by ""
|
||||
lines terminated by "\n%%\n" (joke);
|
||||
@end example
|
||||
|
||||
@code{FIELDS [OPTIONALLY] ENCLOSED BY} controls quoting of fields. For
|
||||
output (@code{SELECT ... INTO OUTFILE}), if you omit the word
|
||||
@code{OPTIONALLY}, all fields are enclosed by the @code{ENCLOSED BY}
|
||||
@ -20582,6 +20609,10 @@ For now, it tells whether index is FULLTEXT or not.
|
||||
@node SHOW TABLE STATUS, SHOW STATUS, SHOW DATABASE INFO, SHOW
|
||||
@subsection SHOW Status Information About Tables
|
||||
|
||||
@example
|
||||
SHOW TABLE STATUS [FROM db_name] [LIKE wild]
|
||||
@end example
|
||||
|
||||
@code{SHOW TABLE STATUS} (new in Version 3.23) works like @code{SHOW
|
||||
STATUS}, but provides a lot of information about each table. You can
|
||||
also get this list using the @code{mysqlshow --status db_name} command.
|
||||
@ -20618,64 +20649,64 @@ in the table comment.
|
||||
below, though the format and numbers probably differ:
|
||||
|
||||
@example
|
||||
+--------------------------+--------+
|
||||
| Variable_name | Value |
|
||||
+--------------------------+--------+
|
||||
| Aborted_clients | 0 |
|
||||
| Aborted_connects | 0 |
|
||||
| Bytes_received | 629539 |
|
||||
| Bytes_sent | 736394 |
|
||||
| Connections | 62 |
|
||||
| Created_tmp_disk_tables | 0 |
|
||||
| Created_tmp_tables | 0 |
|
||||
| Created_tmp_files | 0 |
|
||||
| Delayed_insert_threads | 0 |
|
||||
| Delayed_writes | 0 |
|
||||
| Delayed_errors | 0 |
|
||||
| Flush_commands | 1 |
|
||||
| Handler_delete | 0 |
|
||||
| Handler_read_first | 1 |
|
||||
| Handler_read_key | 9201 |
|
||||
| Handler_read_next | 0 |
|
||||
| Handler_read_prev | 0 |
|
||||
| Handler_read_rnd | 0 |
|
||||
| Handler_read_rnd_next | 45 |
|
||||
| Handler_update | 5998 |
|
||||
| Handler_write | 0 |
|
||||
| Key_blocks_used | 407 |
|
||||
| Key_read_requests | 27683 |
|
||||
| Key_reads | 407 |
|
||||
| Key_write_requests | 0 |
|
||||
| Key_writes | 0 |
|
||||
| Max_used_connections | 60 |
|
||||
| Not_flushed_key_blocks | 0 |
|
||||
| Not_flushed_delayed_rows | 0 |
|
||||
| Open_tables | 60 |
|
||||
| Open_files | 66 |
|
||||
| Open_streams | 0 |
|
||||
| Opened_tables | 66 |
|
||||
| Questions | 9308 |
|
||||
| Select_full_join | 0 |
|
||||
| Select_full_range_join | 0 |
|
||||
| Select_range | 0 |
|
||||
| Select_range_check | 0 |
|
||||
| Select_scan | 0 |
|
||||
| Slave_running | OFF |
|
||||
| Slave_open_temp_tables | 0 |
|
||||
| Slow_launch_threads | 0 |
|
||||
| Slow_queries | 0 |
|
||||
| Sort_merge_passes | 0 |
|
||||
| Sort_range | 0 |
|
||||
| Sort_rows | 0 |
|
||||
| Sort_scan | 0 |
|
||||
| Table_locks_immediate | 3183 |
|
||||
| Table_locks_waited | 6030 |
|
||||
| Threads_cached | 30 |
|
||||
| Threads_created | 61 |
|
||||
| Threads_connected | 31 |
|
||||
| Threads_running | 31 |
|
||||
| Uptime | 135 |
|
||||
+--------------------------+--------+
|
||||
+--------------------------+------------+
|
||||
| Variable_name | Value |
|
||||
+--------------------------+------------+
|
||||
| Aborted_clients | 0 |
|
||||
| Aborted_connects | 0 |
|
||||
| Bytes_received | 155372598 |
|
||||
| Bytes_sent | 1176560426 |
|
||||
| Connections | 30023 |
|
||||
| Created_tmp_disk_tables | 0 |
|
||||
| Created_tmp_tables | 8340 |
|
||||
| Created_tmp_files | 60 |
|
||||
| Delayed_insert_threads | 0 |
|
||||
| Delayed_writes | 0 |
|
||||
| Delayed_errors | 0 |
|
||||
| Flush_commands | 1 |
|
||||
| Handler_delete | 462604 |
|
||||
| Handler_read_first | 105881 |
|
||||
| Handler_read_key | 27820558 |
|
||||
| Handler_read_next | 390681754 |
|
||||
| Handler_read_prev | 6022500 |
|
||||
| Handler_read_rnd | 30546748 |
|
||||
| Handler_read_rnd_next | 246216530 |
|
||||
| Handler_update | 16945404 |
|
||||
| Handler_write | 60356676 |
|
||||
| Key_blocks_used | 14955 |
|
||||
| Key_read_requests | 96854827 |
|
||||
| Key_reads | 162040 |
|
||||
| Key_write_requests | 7589728 |
|
||||
| Key_writes | 3813196 |
|
||||
| Max_used_connections | 0 |
|
||||
| Not_flushed_key_blocks | 0 |
|
||||
| Not_flushed_delayed_rows | 0 |
|
||||
| Open_tables | 1 |
|
||||
| Open_files | 2 |
|
||||
| Open_streams | 0 |
|
||||
| Opened_tables | 44600 |
|
||||
| Questions | 2026873 |
|
||||
| Select_full_join | 0 |
|
||||
| Select_full_range_join | 0 |
|
||||
| Select_range | 99646 |
|
||||
| Select_range_check | 0 |
|
||||
| Select_scan | 30802 |
|
||||
| Slave_running | OFF |
|
||||
| Slave_open_temp_tables | 0 |
|
||||
| Slow_launch_threads | 0 |
|
||||
| Slow_queries | 0 |
|
||||
| Sort_merge_passes | 30 |
|
||||
| Sort_range | 500 |
|
||||
| Sort_rows | 30296250 |
|
||||
| Sort_scan | 4650 |
|
||||
| Table_locks_immediate | 1920382 |
|
||||
| Table_locks_waited | 0 |
|
||||
| Threads_cached | 0 |
|
||||
| Threads_created | 30022 |
|
||||
| Threads_connected | 1 |
|
||||
| Threads_running | 1 |
|
||||
| Uptime | 80380 |
|
||||
+--------------------------+------------+
|
||||
@end example
|
||||
|
||||
@cindex variables, status
|
||||
@ -20773,6 +20804,10 @@ If @code{Threads_created} is big, you may want to increase the
|
||||
@node SHOW VARIABLES, SHOW LOGS, SHOW STATUS, SHOW
|
||||
@subsection SHOW VARIABLES
|
||||
|
||||
@example
|
||||
SHOW VARIABLES [LIKE wild]
|
||||
@end example
|
||||
|
||||
@code{SHOW VARIABLES} shows the values of some @strong{MySQL} system
|
||||
variables. You can also get this information using the @code{mysqladmin
|
||||
variables} command. If the default values are unsuitable, you can set most
|
||||
@ -20871,6 +20906,7 @@ or @samp{M} to indicate kilobytes or megabytes. For example, @code{16M}
|
||||
indicates 16 megabytes. The case of suffix letters does not matter;
|
||||
@code{16M} and @code{16m} are equivalent:
|
||||
|
||||
@cindex variables, values
|
||||
@table @code
|
||||
@item @code{ansi_mode}.
|
||||
Is @code{ON} if @code{mysqld} was started with @code{--ansi}.
|
||||
@ -23452,10 +23488,10 @@ not trivial).
|
||||
|
||||
@menu
|
||||
* INNOBASE overview::
|
||||
* Innobase restrictions::
|
||||
* INNOBASE restrictions::
|
||||
@end menu
|
||||
|
||||
@node INNOBASE overview, Innobase restrictions, INNOBASE, INNOBASE
|
||||
@node INNOBASE overview, INNOBASE restrictions, INNOBASE, INNOBASE
|
||||
@subsection INNOBASE Tables overview
|
||||
|
||||
Innobase is included in the @strong{MySQL} source distribution starting
|
||||
@ -23637,15 +23673,17 @@ P.O.Box 800
|
||||
Finland
|
||||
@end example
|
||||
|
||||
@node Innobase restrictions, , INNOBASE overview, INNOBASE
|
||||
@subsection Some restrictions on @code{Innobase} tables:
|
||||
@node INNOBASE restrictions, , INNOBASE overview, INNOBASE
|
||||
@subsection Some restrictions on @code{INNOBASE} tables:
|
||||
|
||||
@itemize @bullet
|
||||
@item
|
||||
You can't have a key on a @code{BLOB} or @code{TEXT} column.
|
||||
@item
|
||||
@code{DELETE FROM TABLE} doesn't generate the table but instead deletes all
|
||||
@code{DELETE FROM TABLE} doesn't re-generate the table but instead deletes all
|
||||
rows, one by one, which isn't that fast.
|
||||
@item
|
||||
The maximum blob size is 8000 bytes.
|
||||
@end itemize
|
||||
|
||||
@cindex tutorial
|
||||
@ -26382,7 +26420,7 @@ tables}.
|
||||
* Replication Options:: Replication Options in my.cnf
|
||||
* Replication SQL:: SQL Commands related to replication
|
||||
* Replication FAQ:: Frequently Asked Questions about replication
|
||||
* Troubleshooting Replication:: Troubleshooting Replication
* Troubleshooting Replication:: Troubleshooting Replication
|
||||
@end menu
|
||||
|
||||
@node Replication Intro, Replication Implementation, Replication, Replication
|
||||
@ -33525,7 +33563,6 @@ pre-allocated MYSQL struct.
|
||||
* Multiple sql commands:: How to run SQL commands from a text file
|
||||
* Temporary files:: Where @strong{MySQL} stores temporary files
|
||||
* Problems with mysql.sock:: How to protect @file{/tmp/mysql.sock}
|
||||
* Error Access denied:: @code{Access denied} error
|
||||
* Changing MySQL user:: How to run @strong{MySQL} as a normal user
|
||||
* Resetting permissions:: How to reset a forgotten password.
|
||||
* File permissions :: Problems with file permissions
|
||||
@ -33889,10 +33926,12 @@ sure that no other programs are using the dynamic libraries!
|
||||
@section Some Common Errors When Using MySQL
|
||||
|
||||
@menu
|
||||
* Error Access denied:: @code{Access denied} Error
|
||||
* Gone away:: @code{MySQL server has gone away} error
|
||||
* Can not connect to server:: @code{Can't connect to [local] MySQL server} error
|
||||
* Blocked host:: @code{Host '...' is blocked} error
|
||||
* Too many connections:: @code{Too many connections} error
|
||||
* Non-transactional tables:: @code{Some non-transactional changed tables couldn't be rolled back} Error
|
||||
* Out of memory:: @code{Out of memory} error
|
||||
* Packet too large:: @code{Packet too large} error
|
||||
* Communication errors:: Communication errors / Aborted connection
|
||||
@ -33904,7 +33943,15 @@ sure that no other programs are using the dynamic libraries!
|
||||
* Cannot initialize character set::
|
||||
@end menu
|
||||
|
||||
@node Gone away, Can not connect to server, Common errors, Common errors
|
||||
@cindex errors, access denied
|
||||
@cindex problems, access denied errors
|
||||
@cindex access denied errors
|
||||
@node Error Access denied, Gone away, Common errors, Common errors
|
||||
@subsection @code{Access denied} Error
|
||||
|
||||
@xref{Privileges}, and especially see @xref{Access denied}.
|
||||
|
||||
@node Gone away, Can not connect to server, Error Access denied, Common errors
|
||||
@subsection @code{MySQL server has gone away} Error
|
||||
|
||||
This section also covers the related @code{Lost connection to server
|
||||
@ -34096,7 +34143,7 @@ check that there isn't anything wrong with TCP/IP connections from that
|
||||
host. If your TCP/IP connections aren't working, it won't do you any good to
|
||||
increase the value of the @code{max_connect_errors} variable!
|
||||
|
||||
@node Too many connections, Out of memory, Blocked host, Common errors
|
||||
@node Too many connections, Non-transactional tables, Blocked host, Common errors
|
||||
@subsection @code{Too many connections} Error
|
||||
|
||||
If you get the error @code{Too many connections} when you try to connect
|
||||
@ -34118,7 +34165,32 @@ the thread library is on a given platform. Linux or Solaris should be
|
||||
able to support 500-1000 simultaneous connections, depending on how much
|
||||
RAM you have and what your clients are doing.
|
||||
|
||||
@node Out of memory, Packet too large, Too many connections, Common errors
|
||||
@cindex Non-transactional tables
|
||||
@node Non-transactional tables, Out of memory, Too many connections, Common errors
|
||||
@subsection @code{Some non-transactional changed tables couldn't be rolled back} Error
|
||||
|
||||
If you get the error/warning: @code{Warning: Some non-transactional
|
||||
changed tables couldn't be rolled back} when trying to do a
|
||||
@code{ROLLBACK}, this means that some of the tables you used in the
|
||||
transaction didn't support transactions. These non-transactional tables
|
||||
will not be affected by the @code{ROLLBACK} statement.
|
||||
|
||||
The most typical case when this happens is when you have tried to create
a table of a type that is not supported by your @code{mysqld} binary.
If @code{mysqld} doesn't support a table type (or if the table type is
disabled by a startup option), it will instead create the table
with the table type that most resembles the one you requested,
probably @code{MyISAM}.
|
||||
|
||||
You can check the table type for a table by doing:
|
||||
|
||||
@code{SHOW TABLE STATUS LIKE 'table_name'}. @xref{SHOW TABLE STATUS}.
|
||||
|
||||
You can check the extensions your @code{mysqld} binary supports by doing:
|
||||
|
||||
@code{show variables like 'have_%'}. @xref{SHOW VARIABLES}.
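At the client level, one way to know in advance whether @code{ROLLBACK} can do anything useful is to check the capability flags after connecting; as of this release @code{mysqld} sets @code{CLIENT_TRANSACTIONS} in @code{mysql->server_capabilities} only if a transaction-safe handler is available. A minimal sketch (the surrounding connection code and the message wording are assumptions):

@example
#include <stdio.h>
#include "mysql.h"

void check_transactions(MYSQL *mysql)
{
  if (mysql->server_capabilities & CLIENT_TRANSACTIONS)
    puts("server has at least one transaction-safe table handler");
  else
    puts("ROLLBACK cannot undo changes on this server");
}
@end example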
|
||||
|
||||
@node Out of memory, Packet too large, Non-transactional tables, Common errors
|
||||
@subsection @code{Out of memory} Error
|
||||
|
||||
If you issue a query and get something like the following error:
|
||||
@ -34489,7 +34561,7 @@ the original table.
|
||||
|
||||
@cindex @code{mysql.sock}, protection
|
||||
@cindex deletion, @code{mysql.sock}
|
||||
@node Problems with mysql.sock, Error Access denied, Temporary files, Problems
|
||||
@node Problems with mysql.sock, Changing MySQL user, Temporary files, Problems
|
||||
@section How to Protect @file{/tmp/mysql.sock} from Being Deleted
|
||||
|
||||
If you have problems with the fact that anyone can delete the
|
||||
@ -34507,17 +34579,9 @@ only by their owners or the superuser (@code{root}).
|
||||
You can check if the @code{sticky} bit is set by executing @code{ls -ld /tmp}.
|
||||
If the last permission bit is @code{t}, the bit is set.
|
||||
|
||||
@cindex errors, access denied
|
||||
@cindex problems, access denied errors
|
||||
@cindex access denied errors
|
||||
@node Error Access denied, Changing MySQL user, Problems with mysql.sock, Problems
|
||||
@section @code{Access denied} Error
|
||||
|
||||
@xref{Privileges}, and especially see @xref{Access denied}.
|
||||
|
||||
@cindex starting, @code{mysqld}
|
||||
@cindex @code{mysqld}, starting
|
||||
@node Changing MySQL user, Resetting permissions, Error Access denied, Problems
|
||||
@node Changing MySQL user, Resetting permissions, Problems with mysql.sock, Problems
|
||||
@section How to Run MySQL As a Normal User
|
||||
|
||||
The @strong{MySQL} server @code{mysqld} can be started and run by any user.
|
||||
@ -41755,6 +41819,13 @@ not yet 100 % confident in this code.
|
||||
@appendixsubsec Changes in release 3.23.34
|
||||
@itemize @bullet
|
||||
@item
|
||||
@code{REPLACE} will not replace a row that conflicts with an
|
||||
@code{auto_increment} generated key.
|
||||
@item
|
||||
@code{mysqld} now only sets @code{CLIENT_TRANSACTIONS} in
|
||||
@code{mysql->server_capabilities} if the server supports a transaction
|
||||
safe handler.
|
||||
@item
|
||||
Fixed so that one can read number values into @code{ENUM} and
@code{SET} columns with @code{LOAD DATA INFILE}.
|
||||
@item
|
||||
@ -41774,6 +41845,8 @@ Fixed problem in automatic repair that could let some threads in state
|
||||
@item
|
||||
@code{SHOW CREATE TABLE} now dumps the @code{UNION()} for @code{MERGE} tables.
|
||||
@item
|
||||
@code{ALTER TABLE} now remembers the old @code{UNION()} definition.
|
||||
@item
|
||||
Fixed bug when replicating timestamps.
|
||||
@item
|
||||
Fixed bug in bi-directional replication.
|
||||
@ -41789,6 +41862,8 @@ Fixed problem with 'garbage results' when using @code{BDB} tables and
|
||||
@item
|
||||
Fixed a problem with @code{BDB} tables and @code{TEXT} columns.
|
||||
@item
|
||||
Fixed bug when using a @code{BLOB} key where a const row wasn't found.
|
||||
@item
|
||||
Fixed that @code{mysqlbinlog} writes the timestamp value for each query.
|
||||
This ensures that one gets the same values for date functions like @code{NOW()}
|
||||
when using @code{mysqlbinlog} to pipe the queries to another server.
|
||||
|
@ -1412,13 +1412,13 @@ int run_query(MYSQL* mysql, struct st_query* q, int flags)
|
||||
else
|
||||
ds= &ds_res;
|
||||
|
||||
if((flags & QUERY_SEND) &&
|
||||
(q_error = mysql_send_query(mysql, q->query)))
|
||||
if ((flags & QUERY_SEND) &&
|
||||
(q_error = mysql_send_query(mysql, q->query, strlen(q->query))))
|
||||
die("At line %u: unable to send query '%s'", start_lineno, q->query);
|
||||
if(!(flags & QUERY_REAP))
|
||||
return 0;
|
||||
|
||||
if (mysql_reap_query(mysql))
|
||||
if (mysql_read_query_result(mysql))
|
||||
{
|
||||
if (q->require_file)
|
||||
abort_not_supported_test();
|
||||
|
@ -229,12 +229,11 @@ MYSQL * STDCALL mysql_real_connect(MYSQL *mysql, const char *host,
|
||||
void STDCALL mysql_close(MYSQL *sock);
|
||||
int STDCALL mysql_select_db(MYSQL *mysql, const char *db);
|
||||
int STDCALL mysql_query(MYSQL *mysql, const char *q);
|
||||
int STDCALL mysql_send_query(MYSQL *mysql, const char *q);
|
||||
int STDCALL mysql_reap_query(MYSQL *mysql);
|
||||
int STDCALL mysql_send_query(MYSQL *mysql, const char *q,
|
||||
unsigned int length);
|
||||
int STDCALL mysql_read_query_result(MYSQL *mysql);
|
||||
int STDCALL mysql_real_query(MYSQL *mysql, const char *q,
|
||||
unsigned int length);
|
||||
int STDCALL mysql_real_send_query(MYSQL *mysql, const char *q,
|
||||
unsigned int len);
|
||||
int STDCALL mysql_create_db(MYSQL *mysql, const char *DB);
|
||||
int STDCALL mysql_drop_db(MYSQL *mysql, const char *DB);
|
||||
int STDCALL mysql_shutdown(MYSQL *mysql);
|
||||
|
@ -1700,87 +1700,30 @@ mysql_query(MYSQL *mysql, const char *query)
|
||||
return mysql_real_query(mysql,query, (uint) strlen(query));
|
||||
}
|
||||
|
||||
int STDCALL
|
||||
mysql_send_query(MYSQL* mysql, const char* query)
|
||||
{
|
||||
return mysql_real_send_query(mysql, query, strlen(query));
|
||||
}
|
||||
|
||||
/* send the query and return so we can do something else */
|
||||
/* needs to be followed by mysql_reap_query() when we want to
|
||||
finish processing it
|
||||
/*
|
||||
Send the query and return so we can do something else.
|
||||
Needs to be followed by mysql_read_query_result() when we want to
|
||||
finish processing it.
|
||||
*/
|
||||
int STDCALL
|
||||
mysql_real_send_query(MYSQL* mysql, const char* query, uint len)
|
||||
{
|
||||
return simple_command(mysql, COM_QUERY, query, len, 1);
|
||||
}
|
||||
|
||||
int STDCALL
|
||||
mysql_reap_query(MYSQL* mysql)
|
||||
mysql_send_query(MYSQL* mysql, const char* query, uint length)
|
||||
{
|
||||
return simple_command(mysql, COM_QUERY, query, length, 1);
|
||||
}
|
||||
|
||||
int STDCALL mysql_read_query_result(MYSQL *mysql)
|
||||
{
|
||||
uchar *pos;
|
||||
ulong field_count;
|
||||
MYSQL_DATA *fields;
|
||||
uint len;
|
||||
DBUG_ENTER("mysql_reap_query");
|
||||
DBUG_PRINT("enter",("handle: %lx",mysql));
|
||||
if((len = net_safe_read(mysql)) == packet_error)
|
||||
uint length;
|
||||
DBUG_ENTER("mysql_read_query_result");
|
||||
|
||||
if ((length = net_safe_read(mysql)) == packet_error)
|
||||
DBUG_RETURN(-1);
|
||||
free_old_query(mysql); /* Free old result */
|
||||
get_info:
|
||||
pos=(uchar*) mysql->net.read_pos;
|
||||
if ((field_count= net_field_length(&pos)) == 0)
|
||||
{
|
||||
mysql->affected_rows= net_field_length_ll(&pos);
|
||||
mysql->insert_id= net_field_length_ll(&pos);
|
||||
if (mysql->server_capabilities & CLIENT_TRANSACTIONS)
|
||||
{
|
||||
mysql->server_status=uint2korr(pos); pos+=2;
|
||||
}
|
||||
if (pos < mysql->net.read_pos+len && net_field_length(&pos))
|
||||
mysql->info=(char*) pos;
|
||||
DBUG_RETURN(0);
|
||||
}
|
||||
if (field_count == NULL_LENGTH) /* LOAD DATA LOCAL INFILE */
|
||||
{
|
||||
int error=send_file_to_server(mysql,(char*) pos);
|
||||
if ((len=net_safe_read(mysql)) == packet_error || error)
|
||||
DBUG_RETURN(-1);
|
||||
goto get_info; /* Get info packet */
|
||||
}
|
||||
if (!(mysql->server_status & SERVER_STATUS_AUTOCOMMIT))
|
||||
mysql->server_status|= SERVER_STATUS_IN_TRANS;
|
||||
|
||||
mysql->extra_info= net_field_length_ll(&pos); /* Maybe number of rec */
|
||||
if (!(fields=read_rows(mysql,(MYSQL_FIELD*) 0,5)))
|
||||
DBUG_RETURN(-1);
|
||||
if (!(mysql->fields=unpack_fields(fields,&mysql->field_alloc,
|
||||
(uint) field_count,0,
|
||||
(my_bool) test(mysql->server_capabilities &
|
||||
CLIENT_LONG_FLAG))))
|
||||
DBUG_RETURN(-1);
|
||||
mysql->status=MYSQL_STATUS_GET_RESULT;
|
||||
mysql->field_count=field_count;
|
||||
DBUG_RETURN(0);
|
||||
|
||||
}
|
||||
|
||||
int STDCALL
|
||||
mysql_real_query(MYSQL *mysql, const char *query, uint length)
|
||||
{
|
||||
uchar *pos;
|
||||
ulong field_count;
|
||||
MYSQL_DATA *fields;
|
||||
DBUG_ENTER("mysql_real_query");
|
||||
DBUG_PRINT("enter",("handle: %lx",mysql));
|
||||
DBUG_PRINT("query",("Query = \"%s\"",query));
|
||||
|
||||
if (simple_command(mysql,COM_QUERY,query,length,1) ||
|
||||
(length=net_safe_read(mysql)) == packet_error)
|
||||
DBUG_RETURN(-1);
|
||||
free_old_query(mysql); /* Free old result */
|
||||
get_info:
|
||||
get_info:
|
||||
pos=(uchar*) mysql->net.read_pos;
|
||||
if ((field_count= net_field_length(&pos)) == 0)
|
||||
{
|
||||
@ -1817,6 +1760,16 @@ mysql_real_query(MYSQL *mysql, const char *query, uint length)
|
||||
DBUG_RETURN(0);
|
||||
}
|
||||
|
||||
int STDCALL
|
||||
mysql_real_query(MYSQL *mysql, const char *query, uint length)
|
||||
{
|
||||
DBUG_ENTER("mysql_real_query");
|
||||
DBUG_PRINT("enter",("handle: %lx",mysql));
|
||||
DBUG_PRINT("query",("Query = \"%s\"",query));
|
||||
if (simple_command(mysql,COM_QUERY,query,length,1))
|
||||
DBUG_RETURN(-1);
|
||||
DBUG_RETURN(mysql_read_query_result(mysql));
|
||||
}
|
||||
|
||||
static int
|
||||
send_file_to_server(MYSQL *mysql, const char *filename)
|
||||
|
@ -483,3 +483,10 @@ i j
|
||||
1 2
|
||||
build_path
|
||||
current
|
||||
a b
|
||||
a 2
|
||||
a b
|
||||
a 2
|
||||
a b
|
||||
a 1
|
||||
a 2
|
||||
|
@ -120,14 +120,14 @@ UserId
|
||||
b
|
||||
1
|
||||
table type possible_keys key key_len ref rows Extra
|
||||
t3 index a a 4 NULL 6 Using index; Using temporary
|
||||
t3 index a a 5 NULL 6 Using index; Using temporary
|
||||
t2 index a a 4 NULL 5 Using index; Distinct
|
||||
t1 eq_ref PRIMARY PRIMARY 4 t2.a 1 where used; Distinct
|
||||
a
|
||||
1
|
||||
table type possible_keys key key_len ref rows Extra
|
||||
t1 index PRIMARY PRIMARY 4 NULL 2 Using index; Using temporary
|
||||
t3 ref a a 5 t1.a 12 Using index; Distinct
|
||||
t3 ref a a 5 t1.a 10 Using index; Distinct
|
||||
a
|
||||
1
|
||||
2
|
||||
|
@ -10,3 +10,10 @@ name_id name
|
||||
name_id name
|
||||
name_id name
|
||||
2 [T,U]_axpby
|
||||
a b
|
||||
a 2
|
||||
a b
|
||||
a 2
|
||||
a b
|
||||
a 1
|
||||
a 2
|
||||
|
@ -103,9 +103,6 @@ test2
|
||||
test2
|
||||
c
|
||||
c
|
||||
test1
|
||||
test1
|
||||
test1
|
||||
incr othr
|
||||
incr othr
|
||||
1 10
|
||||
@ -118,4 +115,15 @@ count(*)
|
||||
20
|
||||
count(*)
|
||||
20
|
||||
Table Create Table
|
||||
t3 CREATE TABLE `t3` (
|
||||
`incr` int(11) NOT NULL default '0',
|
||||
`othr` int(11) NOT NULL default '0',
|
||||
PRIMARY KEY (`incr`)
|
||||
) TYPE=MRG_MyISAM UNION=(t1,t2)
|
||||
Table Create Table
|
||||
t3 CREATE TABLE `t3` (
|
||||
`incr` int(11) NOT NULL default '0',
|
||||
`othr` int(11) NOT NULL default '0'
|
||||
) TYPE=MRG_MyISAM UNION=(t1,t2)
|
||||
a
|
||||
|
@ -0,0 +1,3 @@
|
||||
a b
|
||||
126 first updated
|
||||
127 last
|
@ -728,3 +728,18 @@ where
|
||||
t3.platform_id = 2;
|
||||
|
||||
drop table t1, t2, t3, t4, t5, t6,t7;
|
||||
|
||||
#
|
||||
# Test with blob + tinyint key
|
||||
#
|
||||
|
||||
CREATE TABLE t1 (
|
||||
a tinytext NOT NULL,
|
||||
b tinyint(3) unsigned NOT NULL default '0',
|
||||
PRIMARY KEY (a(32),b)
|
||||
) TYPE=BDB;
|
||||
INSERT INTO t1 VALUES ('a',1),('a',2);
|
||||
SELECT * FROM t1 WHERE a='a' AND b=2;
|
||||
SELECT * FROM t1 WHERE a='a' AND b in (2);
|
||||
SELECT * FROM t1 WHERE a='a' AND b in (1,2);
|
||||
drop table t1;
|
||||
|
@ -144,3 +144,19 @@ INSERT INTO t1 VALUES (1, 1, 1, 1, 'a');
|
||||
INSERT INTO t1 VALUES (1, 1, 1, 1, 'b');
|
||||
!$1062 INSERT INTO t1 VALUES (1, 1, 1, 1, 'a');
|
||||
drop table t1;
|
||||
|
||||
#
|
||||
# Test with blob + tinyint key
|
||||
# (Failed for Greg Valure)
|
||||
#
|
||||
|
||||
CREATE TABLE t1 (
|
||||
a tinytext NOT NULL,
|
||||
b tinyint(3) unsigned NOT NULL default '0',
|
||||
PRIMARY KEY (a(32),b)
|
||||
) TYPE=MyISAM;
|
||||
INSERT INTO t1 VALUES ('a',1),('a',2);
|
||||
SELECT * FROM t1 WHERE a='a' AND b=2;
|
||||
SELECT * FROM t1 WHERE a='a' AND b in (2);
|
||||
SELECT * FROM t1 WHERE a='a' AND b in (1,2);
|
||||
drop table t1;
|
||||
|
@ -50,7 +50,7 @@ insert into t2 (c) values ('test2');
|
||||
insert into t2 (c) values ('test2');
|
||||
select * from t3;
|
||||
select * from t3;
|
||||
delete from t3;
|
||||
delete from t3 where 1=1;
|
||||
select * from t3;
|
||||
select * from t1;
|
||||
drop table t3,t2,t1;
|
||||
@ -78,6 +78,16 @@ alter table t3 UNION=(t1,t2);
|
||||
select count(*) from t3;
|
||||
alter table t3 TYPE=MYISAM;
|
||||
select count(*) from t3;
|
||||
|
||||
# Test that ALTER TABLE remembers the old UNION
|
||||
|
||||
drop table t3;
|
||||
CREATE TABLE t3 (incr int not null, othr int not null, primary key(incr))
|
||||
TYPE=MERGE UNION=(t1,t2);
|
||||
show create table t3;
|
||||
alter table t3 drop primary key;
|
||||
show create table t3;
|
||||
|
||||
drop table t3,t2,t1;
|
||||
|
||||
#
|
||||
|
@ -20,3 +20,17 @@ replace into t1 (gesuchnr,benutzer_id) values (1,1);
|
||||
alter table t1 type=heap;
|
||||
replace into t1 (gesuchnr,benutzer_id) values (1,1);
|
||||
drop table t1;
|
||||
|
||||
#
|
||||
# Test when using replace on a key that has used up its whole range
|
||||
#
|
||||
|
||||
create table t1 (a tinyint not null auto_increment primary key, b char(20));
|
||||
insert into t1 values (126,"first"),(0,"last");
|
||||
--error 1062
|
||||
insert into t1 values (0,"error");
|
||||
--error 1062
|
||||
replace into t1 values (0,"error");
|
||||
replace into t1 values (126,"first updated");
|
||||
select * from t1;
|
||||
drop table t1;
|
||||
|
@ -31,12 +31,6 @@
|
||||
#define INCL_BASE
|
||||
#define INCL_NOPMAPI
|
||||
#include <os2emx.h>
|
||||
#endif
|
||||
|
||||
#ifndef __EMX__
|
||||
#ifdef HAVE_FCNTL
|
||||
static struct flock lock; /* Must be static for sun-sparc */
|
||||
#endif
|
||||
#endif
|
||||
|
||||
/* Lock a part of a file */
|
||||
@ -124,29 +118,32 @@ int my_lock(File fd, int locktype, my_off_t start, my_off_t length,
|
||||
}
|
||||
#else
|
||||
#if defined(HAVE_FCNTL)
|
||||
lock.l_type= (short) locktype;
|
||||
lock.l_whence=0L;
|
||||
lock.l_start=(long) start;
|
||||
lock.l_len=(long) length;
|
||||
if (MyFlags & MY_DONT_WAIT)
|
||||
{
|
||||
if (fcntl(fd,F_SETLK,&lock) != -1) /* Check if we can lock */
|
||||
DBUG_RETURN(0); /* Ok, file locked */
|
||||
DBUG_PRINT("info",("Was locked, trying with alarm"));
|
||||
ALARM_INIT;
|
||||
while ((value=fcntl(fd,F_SETLKW,&lock)) && ! ALARM_TEST &&
|
||||
errno == EINTR)
|
||||
{ /* Setup again so we don`t miss it */
|
||||
ALARM_REINIT;
|
||||
struct flock lock;
|
||||
lock.l_type= (short) locktype;
|
||||
lock.l_whence=0L;
|
||||
lock.l_start=(long) start;
|
||||
lock.l_len=(long) length;
|
||||
if (MyFlags & MY_DONT_WAIT)
|
||||
{
|
||||
if (fcntl(fd,F_SETLK,&lock) != -1) /* Check if we can lock */
|
||||
DBUG_RETURN(0); /* Ok, file locked */
|
||||
DBUG_PRINT("info",("Was locked, trying with alarm"));
|
||||
ALARM_INIT;
|
||||
while ((value=fcntl(fd,F_SETLKW,&lock)) && ! ALARM_TEST &&
|
||||
errno == EINTR)
|
||||
{ /* Setup again so we don`t miss it */
|
||||
ALARM_REINIT;
|
||||
}
|
||||
ALARM_END;
|
||||
if (value != -1)
|
||||
DBUG_RETURN(0);
|
||||
if (errno == EINTR)
|
||||
errno=EAGAIN;
|
||||
}
|
||||
ALARM_END;
|
||||
if (value != -1)
|
||||
else if (fcntl(fd,F_SETLKW,&lock) != -1) /* Wait until a lock */
|
||||
DBUG_RETURN(0);
|
||||
if (errno == EINTR)
|
||||
errno=EAGAIN;
|
||||
}
|
||||
else if (fcntl(fd,F_SETLKW,&lock) != -1) /* Wait until a lock */
|
||||
DBUG_RETURN(0);
|
||||
#else
|
||||
if (MyFlags & MY_SEEK_NOT_DONE)
|
||||
VOID(my_seek(fd,start,MY_SEEK_SET,MYF(MyFlags & ~MY_SEEK_NOT_DONE)));
|
||||
|
@ -482,9 +482,12 @@ check_or_range("id3","select_range_key2");
|
||||
|
||||
# Check reading on direct key on id and id3
|
||||
|
||||
check_select_key("id","select_key_prefix");
|
||||
check_select_key2("id","id2","select_key");
|
||||
check_select_key("id3","select_key2");
|
||||
check_select_key("*","id","select_key_prefix");
|
||||
check_select_key2("*","id","id2","select_key");
|
||||
check_select_key2("id,id2","id","id2","select_key_return_key");
|
||||
check_select_key("*","id3","select_key2");
|
||||
check_select_key("id3","id3","select_key2_return_key");
|
||||
check_select_key("id1,id2","id3","select_key2_return_prim");
|
||||
|
||||
####
|
||||
#### A lot of simple selects on ranges
|
||||
|
@ -45,7 +45,7 @@ public:
|
||||
uint8 null_bit; // And position to it
|
||||
struct st_table *table; // Pointer for table
|
||||
ulong query_id; // For quick test of used fields
|
||||
key_map key_start,part_of_key; // Key is part of these keys.
|
||||
key_map key_start,part_of_key,part_of_sortkey;// Field is part of these keys.
|
||||
const char *table_name,*field_name;
|
||||
utype unireg_check;
|
||||
uint32 field_length; // Length of field
|
||||
|
@ -459,7 +459,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
|
||||
key_used_on_scan=primary_key;
|
||||
|
||||
/* Need some extra memory in case of packed keys */
|
||||
uint max_key_length= table->max_key_length + MAX_REF_PARTS*2;
|
||||
uint max_key_length= table->max_key_length + MAX_REF_PARTS*3;
|
||||
if (!(alloc_ptr=
|
||||
my_multi_malloc(MYF(MY_WME),
|
||||
&key_buff, max_key_length,
|
||||
@ -469,8 +469,9 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
|
||||
table->key_info[table->primary_key].key_length),
|
||||
NullS)))
|
||||
DBUG_RETURN(1); /* purecov: inspected */
|
||||
if (!(rec_buff=my_malloc((alloced_rec_buff_length=table->rec_buff_length),
|
||||
MYF(MY_WME))))
|
||||
if (!(rec_buff= (byte*) my_malloc((alloced_rec_buff_length=
|
||||
table->rec_buff_length),
|
||||
MYF(MY_WME))))
|
||||
{
|
||||
my_free(alloc_ptr,MYF(0)); /* purecov: inspected */
|
||||
DBUG_RETURN(1); /* purecov: inspected */
|
||||
@ -479,7 +480,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
|
||||
/* Init shared structure */
|
||||
if (!(share=get_share(name,table)))
|
||||
{
|
||||
my_free(rec_buff,MYF(0)); /* purecov: inspected */
|
||||
my_free((char*) rec_buff,MYF(0)); /* purecov: inspected */
|
||||
my_free(alloc_ptr,MYF(0)); /* purecov: inspected */
|
||||
DBUG_RETURN(1); /* purecov: inspected */
|
||||
}
|
||||
@ -496,7 +497,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
|
||||
if ((error=db_create(&file, db_env, 0)))
|
||||
{
|
||||
free_share(share,table, hidden_primary_key,1); /* purecov: inspected */
|
||||
my_free(rec_buff,MYF(0)); /* purecov: inspected */
|
||||
my_free((char*) rec_buff,MYF(0)); /* purecov: inspected */
|
||||
my_free(alloc_ptr,MYF(0)); /* purecov: inspected */
|
||||
my_errno=error; /* purecov: inspected */
|
||||
DBUG_RETURN(1); /* purecov: inspected */
|
||||
@ -513,7 +514,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
|
||||
"main", DB_BTREE, open_mode,0))))
|
||||
{
|
||||
free_share(share,table, hidden_primary_key,1); /* purecov: inspected */
|
||||
my_free(rec_buff,MYF(0)); /* purecov: inspected */
|
||||
my_free((char*) rec_buff,MYF(0)); /* purecov: inspected */
|
||||
my_free(alloc_ptr,MYF(0)); /* purecov: inspected */
|
||||
my_errno=error; /* purecov: inspected */
|
||||
DBUG_RETURN(1); /* purecov: inspected */
|
||||
@ -583,7 +584,7 @@ int ha_berkeley::close(void)
|
||||
{
|
||||
DBUG_ENTER("ha_berkeley::close");
|
||||
|
||||
my_free(rec_buff,MYF(MY_ALLOW_ZERO_PTR));
|
||||
my_free((char*) rec_buff,MYF(MY_ALLOW_ZERO_PTR));
|
||||
my_free(alloc_ptr,MYF(MY_ALLOW_ZERO_PTR));
|
||||
ha_berkeley::extra(HA_EXTRA_RESET); // current_row buffer
|
||||
DBUG_RETURN(free_share(share,table, hidden_primary_key,0));
|
||||
@ -613,7 +614,7 @@ ulong ha_berkeley::max_row_length(const byte *buf)
|
||||
{
|
||||
ulong length=table->reclength + table->fields*2;
|
||||
for (Field_blob **ptr=table->blob_field ; *ptr ; ptr++)
|
||||
length+= (*ptr)->get_length(buf+(*ptr)->offset())+2;
|
||||
length+= (*ptr)->get_length((char*) buf+(*ptr)->offset())+2;
|
||||
return length;
|
||||
}
|
||||
|
||||
@ -654,7 +655,8 @@ int ha_berkeley::pack_row(DBT *row, const byte *record, bool new_row)
|
||||
byte *ptr=rec_buff + table->null_bytes;
|
||||
|
||||
for (Field **field=table->field ; *field ; field++)
|
||||
ptr=(byte*) (*field)->pack((char*) ptr,record + (*field)->offset());
|
||||
ptr=(byte*) (*field)->pack((char*) ptr,
|
||||
(char*) record + (*field)->offset());
|
||||
|
||||
if (hidden_primary_key)
|
||||
{
|
||||
@ -753,7 +755,7 @@ DBT *ha_berkeley::create_key(DBT *key, uint keynr, char *buff,
|
||||
}
|
||||
*buff++ = 1; // Store NOT NULL marker
|
||||
}
|
||||
buff=key_part->field->pack_key(buff,record + key_part->offset,
|
||||
buff=key_part->field->pack_key(buff,(char*) (record + key_part->offset),
|
||||
key_part->length);
|
||||
key_length-=key_part->length;
|
||||
}
|
||||
@ -792,7 +794,7 @@ DBT *ha_berkeley::pack_key(DBT *key, uint keynr, char *buff,
|
||||
}
|
||||
offset=1; // Data is at key_ptr+1
|
||||
}
|
||||
buff=key_part->field->pack_key_from_key_image(buff,key_ptr+offset,
|
||||
buff=key_part->field->pack_key_from_key_image(buff,(char*) key_ptr+offset,
|
||||
key_part->length);
|
||||
key_ptr+=key_part->store_length;
|
||||
key_length-=key_part->store_length;
|
||||
@ -928,8 +930,8 @@ int ha_berkeley::key_cmp(uint keynr, const byte * old_row,
|
||||
if (key_part->key_part_flag & (HA_BLOB_PART | HA_VAR_LENGTH))
|
||||
{
|
||||
|
||||
if (key_part->field->cmp_binary(old_row + key_part->offset,
|
||||
new_row + key_part->offset,
|
||||
if (key_part->field->cmp_binary((char*) (old_row + key_part->offset),
|
||||
(char*) (new_row + key_part->offset),
|
||||
(ulong) key_part->length))
|
||||
return 1;
|
||||
}
|
||||
@ -1007,6 +1009,7 @@ int ha_berkeley::restore_keys(DB_TXN *trans, key_map changed_keys,
|
||||
{
|
||||
int error;
|
||||
DBT tmp_key;
|
||||
uint keynr;
|
||||
DBUG_ENTER("restore_keys");
|
||||
|
||||
/* Restore the old primary key, and the old row, but don't ignore
|
||||
@ -1020,7 +1023,7 @@ int ha_berkeley::restore_keys(DB_TXN *trans, key_map changed_keys,
|
||||
rolled back. The last key set in changed_keys is the one that
|
||||
triggered the duplicate key error (it wasn't inserted), so for
|
||||
that one just put back the old value. */
|
||||
for (uint keynr=0; changed_keys; keynr++, changed_keys >>= 1)
|
||||
for (keynr=0; changed_keys; keynr++, changed_keys >>= 1)
|
||||
{
|
||||
if (changed_keys & 1)
|
||||
{
|
||||
@ -1387,7 +1390,7 @@ int ha_berkeley::index_read_idx(byte * buf, uint keynr, const byte * key,
|
||||
pack_key(&last_key, keynr, key_buff, key,
|
||||
key_len),
|
||||
¤t_row,0),
|
||||
buf, keynr, ¤t_row, &last_key, 0));
|
||||
(char*) buf, keynr, ¤t_row, &last_key, 0));
|
||||
}
|
||||
|
||||
|
||||
@ -1401,7 +1404,7 @@ int ha_berkeley::index_read(byte * buf, const byte * key,
|
||||
|
||||
statistic_increment(ha_read_key_count,&LOCK_status);
|
||||
bzero((char*) &row,sizeof(row));
|
||||
if (key_len == key_info->key_length + key_info->extra_length)
|
||||
if (key_len == key_info->key_length)
|
||||
{
|
||||
error=read_row(cursor->c_get(cursor, pack_key(&last_key,
|
||||
active_index,
|
||||
@ -1410,7 +1413,7 @@ int ha_berkeley::index_read(byte * buf, const byte * key,
|
||||
&row,
|
||||
(find_flag == HA_READ_KEY_EXACT ?
|
||||
DB_SET : DB_SET_RANGE)),
|
||||
buf, active_index, &row, (DBT*) 0, 0);
|
||||
(char*) buf, active_index, &row, (DBT*) 0, 0);
|
||||
}
|
||||
else
|
||||
{
|
||||
@ -1420,7 +1423,7 @@ int ha_berkeley::index_read(byte * buf, const byte * key,
|
||||
memcpy(key_buff2, key_buff, (key_len=last_key.size));
|
||||
key_info->handler.bdb_return_if_eq= -1;
|
||||
error=read_row(cursor->c_get(cursor, &last_key, &row, DB_SET_RANGE),
|
||||
buf, active_index, &row, (DBT*) 0, 0);
|
||||
(char*) buf, active_index, &row, (DBT*) 0, 0);
|
||||
key_info->handler.bdb_return_if_eq= 0;
|
||||
if (!error && find_flag == HA_READ_KEY_EXACT)
|
||||
{
|
||||
@ -1440,7 +1443,7 @@ int ha_berkeley::index_next(byte * buf)
|
||||
statistic_increment(ha_read_next_count,&LOCK_status);
|
||||
bzero((char*) &row,sizeof(row));
|
||||
DBUG_RETURN(read_row(cursor->c_get(cursor, &last_key, &row, DB_NEXT),
|
||||
buf, active_index, &row, &last_key, 1));
|
||||
(char*) buf, active_index, &row, &last_key, 1));
|
||||
}
|
||||
|
||||
int ha_berkeley::index_next_same(byte * buf, const byte *key, uint keylen)
|
||||
@ -1452,11 +1455,11 @@ int ha_berkeley::index_next_same(byte * buf, const byte *key, uint keylen)
|
||||
bzero((char*) &row,sizeof(row));
|
||||
if (keylen == table->key_info[active_index].key_length)
|
||||
error=read_row(cursor->c_get(cursor, &last_key, &row, DB_NEXT_DUP),
|
||||
buf, active_index, &row, &last_key, 1);
|
||||
(char*) buf, active_index, &row, &last_key, 1);
|
||||
else
|
||||
{
|
||||
error=read_row(cursor->c_get(cursor, &last_key, &row, DB_NEXT),
|
||||
buf, active_index, &row, &last_key, 1);
|
||||
(char*) buf, active_index, &row, &last_key, 1);
|
||||
if (!error && ::key_cmp(table, key, active_index, keylen))
|
||||
error=HA_ERR_END_OF_FILE;
|
||||
}
|
||||
@ -1471,7 +1474,7 @@ int ha_berkeley::index_prev(byte * buf)
|
||||
statistic_increment(ha_read_prev_count,&LOCK_status);
|
||||
bzero((char*) &row,sizeof(row));
|
||||
DBUG_RETURN(read_row(cursor->c_get(cursor, &last_key, &row, DB_PREV),
|
||||
buf, active_index, &row, &last_key, 1));
|
||||
(char*) buf, active_index, &row, &last_key, 1));
|
||||
}
|
||||
|
||||
|
||||
@ -1482,7 +1485,7 @@ int ha_berkeley::index_first(byte * buf)
|
||||
statistic_increment(ha_read_first_count,&LOCK_status);
|
||||
bzero((char*) &row,sizeof(row));
|
||||
DBUG_RETURN(read_row(cursor->c_get(cursor, &last_key, &row, DB_FIRST),
|
||||
buf, active_index, &row, &last_key, 0));
|
||||
(char*) buf, active_index, &row, &last_key, 0));
|
||||
}
|
||||
|
||||
int ha_berkeley::index_last(byte * buf)
|
||||
@ -1492,7 +1495,7 @@ int ha_berkeley::index_last(byte * buf)
|
||||
statistic_increment(ha_read_last_count,&LOCK_status);
|
||||
bzero((char*) &row,sizeof(row));
|
||||
DBUG_RETURN(read_row(cursor->c_get(cursor, &last_key, &row, DB_LAST),
|
||||
buf, active_index, &row, &last_key, 0));
|
||||
(char*) buf, active_index, &row, &last_key, 0));
|
||||
}
|
||||
|
||||
int ha_berkeley::rnd_init(bool scan)
|
||||
@ -1513,7 +1516,7 @@ int ha_berkeley::rnd_next(byte *buf)
|
||||
statistic_increment(ha_read_rnd_next_count,&LOCK_status);
|
||||
bzero((char*) &row,sizeof(row));
|
||||
DBUG_RETURN(read_row(cursor->c_get(cursor, &last_key, &row, DB_NEXT),
|
||||
buf, active_index, &row, &last_key, 1));
|
||||
(char*) buf, active_index, &row, &last_key, 1));
|
||||
}
|
||||
|
||||
|
||||
@ -1530,7 +1533,7 @@ DBT *ha_berkeley::get_pos(DBT *to, byte *pos)
|
||||
KEY_PART_INFO *end=key_part+table->key_info[primary_key].key_parts;
|
||||
|
||||
for ( ; key_part != end ; key_part++)
|
||||
pos+=key_part->field->packed_col_length(pos);
|
||||
pos+=key_part->field->packed_col_length((char*) pos);
|
||||
to->size= (uint) (pos- (byte*) to->data);
|
||||
}
|
||||
return to;
|
||||
@ -1545,7 +1548,7 @@ int ha_berkeley::rnd_pos(byte * buf, byte *pos)
|
||||
return read_row(file->get(file, transaction,
|
||||
get_pos(&db_pos, pos),
|
||||
¤t_row, 0),
|
||||
buf, active_index, ¤t_row, (DBT*) 0, 0);
|
||||
(char*) buf, active_index, ¤t_row, (DBT*) 0, 0);
|
||||
}
|
||||
|
||||
void ha_berkeley::position(const byte *record)
|
||||
@ -1554,7 +1557,7 @@ void ha_berkeley::position(const byte *record)
|
||||
if (hidden_primary_key)
|
||||
memcpy_fixed(ref, (char*) current_ident, BDB_HIDDEN_PRIMARY_KEY_LENGTH);
|
||||
else
|
||||
create_key(&key, primary_key, ref, record);
|
||||
create_key(&key, primary_key, (char*) ref, record);
|
||||
}
|
||||
|
||||
|
||||
@ -1928,7 +1931,7 @@ longlong ha_berkeley::get_auto_increment()
|
||||
&last_key))
|
||||
{
|
||||
error=0; // Found value
|
||||
unpack_key(table->record[1], &last_key, active_index);
|
||||
unpack_key((char*) table->record[1], &last_key, active_index);
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -2105,7 +2108,8 @@ static BDB_SHARE *get_share(const char *table_name, TABLE *table)
|
||||
BDB_SHARE *share;
|
||||
pthread_mutex_lock(&bdb_mutex);
|
||||
uint length=(uint) strlen(table_name);
|
||||
if (!(share=(BDB_SHARE*) hash_search(&bdb_open_tables, table_name, length)))
|
||||
if (!(share=(BDB_SHARE*) hash_search(&bdb_open_tables, (byte*) table_name,
|
||||
length)))
|
||||
{
|
||||
ha_rows *rec_per_key;
|
||||
char *tmp_name;
|
||||
@ -2127,7 +2131,7 @@ static BDB_SHARE *get_share(const char *table_name, TABLE *table)
|
||||
strmov(share->table_name,table_name);
|
||||
share->key_file = key_file;
|
||||
share->key_type = key_type;
|
||||
if (hash_insert(&bdb_open_tables, (char*) share))
|
||||
if (hash_insert(&bdb_open_tables, (byte*) share))
|
||||
{
|
||||
pthread_mutex_unlock(&bdb_mutex); /* purecov: inspected */
|
||||
my_free((gptr) share,0); /* purecov: inspected */
|
||||
@ -2162,7 +2166,7 @@ static int free_share(BDB_SHARE *share, TABLE *table, uint hidden_primary_key,
|
||||
if (share->status_block &&
|
||||
(error = share->status_block->close(share->status_block,0)))
|
||||
result = error; /* purecov: inspected */
|
||||
hash_delete(&bdb_open_tables, (gptr) share);
|
||||
hash_delete(&bdb_open_tables, (byte*) share);
|
||||
thr_lock_delete(&share->lock);
|
||||
pthread_mutex_destroy(&share->mutex);
|
||||
my_free((gptr) share, MYF(0));
|
||||
|
@ -736,7 +736,7 @@ ha_innobase::open(
|
||||
stored the string length as the first byte. */
|
||||
|
||||
buff_len = table->reclength + table->max_key_length
|
||||
+ MAX_REF_PARTS * 2;
|
||||
+ MAX_REF_PARTS * 3;
|
||||
if (!(mysql_byte*) my_multi_malloc(MYF(MY_WME),
|
||||
&upd_buff, buff_len,
|
||||
&key_val_buff, buff_len,
|
||||
@ -2594,10 +2594,10 @@ ha_innobase::update_table_comment(
|
||||
return (char*)comment;
|
||||
|
||||
sprintf(str,
|
||||
"%s; (See manual about Innobase stats); Innobase free: %lu kB",
|
||||
"%s; Innobase free: %lu kB",
|
||||
comment, (ulong) innobase_get_free_space());
|
||||
|
||||
return((char*) str);
|
||||
return(str);
|
||||
}
|
||||
|
||||
|
||||
|
@ -221,6 +221,37 @@ THR_LOCK_DATA **ha_myisammrg::store_lock(THD *thd,
|
||||
return to;
|
||||
}
|
||||
|
||||
void ha_myisammrg::update_create_info(HA_CREATE_INFO *create_info)
|
||||
{
|
||||
DBUG_ENTER("ha_myisammrg::update_create_info");
|
||||
if (!(create_info->used_fields & HA_CREATE_USED_UNION))
|
||||
{
|
||||
MYRG_TABLE *table;
|
||||
THD *thd=current_thd;
|
||||
create_info->merge_list.next= &create_info->merge_list.first;
|
||||
|
||||
for (table=file->open_tables ; table != file->end_table ; table++)
|
||||
{
|
||||
char *name=table->table->s->filename;
|
||||
char buff[FN_REFLEN];
|
||||
TABLE_LIST *ptr;
|
||||
if (!(ptr = (TABLE_LIST *) thd->calloc(sizeof(TABLE_LIST))))
|
||||
goto err;
|
||||
fn_format(buff,name,"","",3);
|
||||
if (!(ptr->real_name=thd->strdup(buff)))
|
||||
goto err;
|
||||
(*create_info->merge_list.next) = (byte*) ptr;
|
||||
create_info->merge_list.next= (byte**) &ptr->next;
|
||||
}
|
||||
*create_info->merge_list.next=0;
|
||||
}
|
||||
DBUG_VOID_RETURN;
|
||||
|
||||
err:
|
||||
create_info->merge_list.elements=0;
|
||||
create_info->merge_list.first=0;
|
||||
DBUG_VOID_RETURN;
|
||||
}
|
||||
|
||||
int ha_myisammrg::create(const char *name, register TABLE *form,
|
||||
HA_CREATE_INFO *create_info)
|
||||
|
@ -72,5 +72,6 @@ class ha_myisammrg: public handler
|
||||
int create(const char *name, TABLE *form, HA_CREATE_INFO *create_info);
|
||||
THR_LOCK_DATA **store_lock(THD *thd, THR_LOCK_DATA **to,
|
||||
enum thr_lock_type lock_type);
|
||||
void update_create_info(HA_CREATE_INFO *create_info);
|
||||
void append_create_info(String *packet);
|
||||
};
|
||||
|
114 sql/handler.cc
@ -133,6 +133,8 @@ int ha_init()
|
||||
int error;
|
||||
if ((error=berkeley_init()))
|
||||
return error;
|
||||
if (!berkeley_skip) // If we couldn't use handler
|
||||
opt_using_transactions=1;
|
||||
}
|
||||
#endif
|
||||
#ifdef HAVE_INNOBASE_DB
|
||||
@ -140,6 +142,8 @@ int ha_init()
|
||||
{
|
||||
if (innobase_init())
|
||||
return -1;
|
||||
if (!innobase_skip) // If we couldn't use handler
|
||||
opt_using_transactions=1;
|
||||
}
|
||||
#endif
|
||||
return 0;
|
||||
@ -190,13 +194,16 @@ int ha_autocommit_or_rollback(THD *thd, int error)
|
||||
{
|
||||
DBUG_ENTER("ha_autocommit_or_rollback");
|
||||
#ifdef USING_TRANSACTIONS
|
||||
if (!error)
|
||||
if (opt_using_transactions)
|
||||
{
|
||||
if (ha_commit_stmt(thd))
|
||||
error=1;
|
||||
if (!error)
|
||||
{
|
||||
if (ha_commit_stmt(thd))
|
||||
error=1;
|
||||
}
|
||||
else
|
||||
(void) ha_rollback_stmt(thd);
|
||||
}
|
||||
else
|
||||
(void) ha_rollback_stmt(thd);
|
||||
#endif
|
||||
DBUG_RETURN(error);
|
||||
}
|
||||
@ -207,73 +214,80 @@ int ha_commit_trans(THD *thd, THD_TRANS* trans)
|
||||
int error=0;
|
||||
DBUG_ENTER("ha_commit");
|
||||
#ifdef USING_TRANSACTIONS
|
||||
/* Update the binary log if we have cached some queries */
|
||||
if (trans == &thd->transaction.all && mysql_bin_log.is_open() &&
|
||||
my_b_tell(&thd->transaction.trans_log))
|
||||
if (opt_using_transactions)
|
||||
{
|
||||
mysql_bin_log.write(&thd->transaction.trans_log);
|
||||
reinit_io_cache(&thd->transaction.trans_log,
|
||||
WRITE_CACHE, (my_off_t) 0, 0, 1);
|
||||
thd->transaction.trans_log.end_of_file= max_binlog_cache_size;
|
||||
}
|
||||
#ifdef HAVE_BERKELEY_DB
|
||||
if (trans->bdb_tid)
|
||||
{
|
||||
if ((error=berkeley_commit(thd,trans->bdb_tid)))
|
||||
/* Update the binary log if we have cached some queries */
|
||||
if (trans == &thd->transaction.all && mysql_bin_log.is_open() &&
|
||||
my_b_tell(&thd->transaction.trans_log))
|
||||
{
|
||||
my_error(ER_ERROR_DURING_COMMIT, MYF(0), error);
|
||||
error=1;
|
||||
mysql_bin_log.write(&thd->transaction.trans_log);
|
||||
reinit_io_cache(&thd->transaction.trans_log,
|
||||
WRITE_CACHE, (my_off_t) 0, 0, 1);
|
||||
thd->transaction.trans_log.end_of_file= max_binlog_cache_size;
|
||||
}
|
||||
#ifdef HAVE_BERKELEY_DB
|
||||
if (trans->bdb_tid)
|
||||
{
|
||||
if ((error=berkeley_commit(thd,trans->bdb_tid)))
|
||||
{
|
||||
my_error(ER_ERROR_DURING_COMMIT, MYF(0), error);
|
||||
error=1;
|
||||
}
|
||||
trans->bdb_tid=0;
|
||||
}
|
||||
trans->bdb_tid=0;
|
||||
}
|
||||
#endif
|
||||
#ifdef HAVE_INNOBASE_DB
|
||||
if (trans->innobase_tid)
|
||||
{
|
||||
if ((error=innobase_commit(thd,trans->innobase_tid)))
|
||||
if (trans->innobase_tid)
|
||||
{
|
||||
my_error(ER_ERROR_DURING_COMMIT, MYF(0), error);
|
||||
error=1;
|
||||
if ((error=innobase_commit(thd,trans->innobase_tid)))
|
||||
{
|
||||
my_error(ER_ERROR_DURING_COMMIT, MYF(0), error);
|
||||
error=1;
|
||||
}
|
||||
}
|
||||
}
|
||||
#endif
|
||||
if (error && trans == &thd->transaction.all && mysql_bin_log.is_open())
|
||||
sql_print_error("Error: Got error during commit; Binlog is not up to date!");
|
||||
if (error && trans == &thd->transaction.all && mysql_bin_log.is_open())
|
||||
sql_print_error("Error: Got error during commit; Binlog is not up to date!");
|
||||
}
|
||||
#endif // using transactions
|
||||
DBUG_RETURN(error);
|
||||
}
|
||||
|
||||
|
||||
int ha_rollback_trans(THD *thd, THD_TRANS *trans)
|
||||
{
|
||||
int error=0;
|
||||
DBUG_ENTER("ha_rollback");
|
||||
#ifdef HAVE_BERKELEY_DB
|
||||
if (trans->bdb_tid)
|
||||
#ifdef USING_TRANSACTIONS
|
||||
if (opt_using_transactions)
|
||||
{
|
||||
if ((error=berkeley_rollback(thd, trans->bdb_tid)))
|
||||
#ifdef HAVE_BERKELEY_DB
|
||||
if (trans->bdb_tid)
|
||||
{
|
||||
my_error(ER_ERROR_DURING_ROLLBACK, MYF(0), error);
|
||||
error=1;
|
||||
if ((error=berkeley_rollback(thd, trans->bdb_tid)))
|
||||
{
|
||||
my_error(ER_ERROR_DURING_ROLLBACK, MYF(0), error);
|
||||
error=1;
|
||||
}
|
||||
trans->bdb_tid=0;
|
||||
}
|
||||
trans->bdb_tid=0;
|
||||
}
|
||||
#endif
|
||||
#ifdef HAVE_INNOBASE_DB
|
||||
if (trans->innobase_tid)
|
||||
{
|
||||
if ((error=innobase_rollback(thd, trans->innobase_tid)))
|
||||
if (trans->innobase_tid)
|
||||
{
|
||||
my_error(ER_ERROR_DURING_ROLLBACK, MYF(0), error);
|
||||
error=1;
|
||||
if ((error=innobase_rollback(thd, trans->innobase_tid)))
|
||||
{
|
||||
my_error(ER_ERROR_DURING_ROLLBACK, MYF(0), error);
|
||||
error=1;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
if (trans == &thd->transaction.all)
|
||||
reinit_io_cache(&thd->transaction.trans_log,
|
||||
WRITE_CACHE, (my_off_t) 0, 0, 1);
|
||||
thd->transaction.trans_log.end_of_file= max_binlog_cache_size;
|
||||
}
|
||||
#endif
|
||||
#ifdef USING_TRANSACTIONS
|
||||
if (trans == &thd->transaction.all)
|
||||
reinit_io_cache(&thd->transaction.trans_log,
|
||||
WRITE_CACHE, (my_off_t) 0, 0, 1);
|
||||
thd->transaction.trans_log.end_of_file= max_binlog_cache_size;
|
||||
#endif
|
||||
#endif /* USING_TRANSACTIONS */
|
||||
DBUG_RETURN(error);
|
||||
}
|
||||
|
||||
@ -493,7 +507,10 @@ void handler::update_auto_increment()
|
||||
THD *thd;
|
||||
DBUG_ENTER("update_auto_increment");
|
||||
if (table->next_number_field->val_int() != 0)
|
||||
{
|
||||
auto_increment_column_changed=0;
|
||||
DBUG_VOID_RETURN;
|
||||
}
|
||||
thd=current_thd;
|
||||
if ((nr=thd->next_insert_id))
|
||||
thd->next_insert_id=0; // Clear after use
|
||||
@ -501,6 +518,7 @@ void handler::update_auto_increment()
|
||||
nr=get_auto_increment();
|
||||
thd->insert_id((ulonglong) nr);
|
||||
table->next_number_field->store(nr);
|
||||
auto_increment_column_changed=1;
|
||||
DBUG_VOID_RETURN;
|
||||
}
|
||||
|
||||
|
@@ -115,8 +115,9 @@ enum row_type { ROW_TYPE_DEFAULT, ROW_TYPE_FIXED, ROW_TYPE_DYNAMIC,
/* struct to hold information about the table that should be created */

/* Bits in used_fields */
#define HA_CREATE_USED_AUTO 1
#define HA_CREATE_USED_RAID 2
#define HA_CREATE_USED_AUTO 1
#define HA_CREATE_USED_RAID 2
#define HA_CREATE_USED_UNION 4

typedef struct st_thd_trans {
void *bdb_tid;
@@ -191,6 +192,7 @@ public:
time_t update_time;
ulong mean_rec_length; /* physical reclength */
void *ft_handler;
bool auto_increment_column_changed;

handler(TABLE *table_arg) : table(table_arg),active_index(MAX_REF_PARTS),
ref(0),ref_length(sizeof(my_off_t)), block_size(0),records(0),deleted(0),

@@ -76,7 +76,7 @@ void key_copy(byte *key,TABLE *table,uint idx,uint key_length)
KEY_PART_INFO *key_part;

if (key_length == 0)
key_length=key_info->key_length+key_info->extra_length;
key_length=key_info->key_length;
for (key_part=key_info->key_part;
(int) key_length > 0 ;
key_part++)
@@ -122,7 +122,7 @@ void key_restore(TABLE *table,byte *key,uint idx,uint key_length)
{
if (idx == (uint) -1)
return;
key_length=key_info->key_length+key_info->extra_length;
key_length=key_info->key_length;
}
for (key_part=key_info->key_part;
(int) key_length > 0 ;

@@ -504,16 +504,16 @@ extern pthread_mutex_t LOCK_mysql_create_db,LOCK_Acl,LOCK_open,
LOCK_delayed_status, LOCK_delayed_create, LOCK_crypt, LOCK_timezone,
LOCK_binlog_update, LOCK_slave, LOCK_server_id;
extern pthread_cond_t COND_refresh,COND_thread_count, COND_binlog_update,
COND_slave_stopped, COND_slave_start;
COND_slave_stopped, COND_slave_start;
extern pthread_attr_t connection_attrib;
extern bool opt_endinfo,using_udf_functions, locked_in_memory;
extern bool opt_endinfo, using_udf_functions, locked_in_memory,
opt_using_transactions, use_temp_pool;
extern char f_fyllchar;
extern ulong ha_read_count, ha_write_count, ha_delete_count, ha_update_count,
ha_read_key_count, ha_read_next_count, ha_read_prev_count,
ha_read_first_count, ha_read_last_count,
ha_read_rnd_count, ha_read_rnd_next_count;
extern MY_BITMAP temp_pool;
extern bool use_temp_pool;
extern char f_fyllchar;
extern uchar *days_in_month;
extern DATE_FORMAT dayord;
extern double log_10[32];

@@ -249,6 +249,7 @@ ulong max_tmp_tables,max_heap_table_size;
ulong bytes_sent = 0L, bytes_received = 0L;

bool opt_endinfo,using_udf_functions,low_priority_updates, locked_in_memory;
bool opt_using_transactions;
bool volatile abort_loop,select_thread_in_use,grant_option;
bool volatile ready_to_exit,shutdown_in_progress;
ulong refresh_version=1L,flush_version=1L; /* Increments on each reload */

@@ -2500,14 +2500,16 @@ print_key(KEY_PART *key_part,const char *key,uint used_length)
fputc('/',DBUG_FILE);
if (field->real_maybe_null())
{
length++;
length++;				// null byte is not in part_length
if (*key++)
{
fwrite("NULL",sizeof(char),4,DBUG_FILE);
continue;
}
}
field->set_key_image((char*) key,key_part->part_length);
field->set_key_image((char*) key,key_part->part_length -
((field->type() == FIELD_TYPE_BLOB) ?
HA_KEY_BLOB_LENGTH : 0));
field->val_str(&tmp,&tmp);
fwrite(tmp.ptr(),sizeof(char),tmp.length(),DBUG_FILE);
}

@@ -331,16 +331,15 @@ static bool find_range_key(TABLE_REF *ref, Field* field, COND *cond)
part != part_end ;
part++)
{
if (!part_of_cond(cond,part->field))
if (!part_of_cond(cond,part->field) ||
left_length < part->store_length)
break;
// Save found constant
if (part->null_bit)
*key_ptr++= (byte) test(part->field->is_null());
if (left_length - part->length < 0)
break;					// Can't use this key
part->field->get_image((char*) key_ptr,part->length);
key_ptr+=part->length;
left_length-=part->length;
part->field->get_key_image((char*) key_ptr,part->length);
key_ptr+=part->store_length - test(part->null_bit);
left_length-=part->store_length;
}
if (part == part_end && part->field == field)
{

@@ -91,7 +91,7 @@ THD::THD():user_time(0),fatal_error(0),last_insert_id_used(0),
tmp_table=0;
lock=locked_tables=0;
used_tables=0;
cuted_fields=0L;
cuted_fields=sent_row_count=0L;
options=thd_startup_options;
update_lock_default= low_priority_updates ? TL_WRITE_LOW_PRIORITY : TL_WRITE;
start_time=(time_t) 0;
@@ -118,12 +118,15 @@ THD::THD():user_time(0),fatal_error(0),last_insert_id_used(0),
system_thread=0;
bzero((char*) &mem_root,sizeof(mem_root));
#ifdef USING_TRANSACTIONS
bzero((char*) &transaction,sizeof(transaction));
if (open_cached_file(&transaction.trans_log,
mysql_tmpdir, LOG_PREFIX, binlog_cache_size,
MYF(MY_WME)))
killed=1;
transaction.trans_log.end_of_file= max_binlog_cache_size;
if (opt_using_transactions)
{
bzero((char*) &transaction,sizeof(transaction));
if (open_cached_file(&transaction.trans_log,
mysql_tmpdir, LOG_PREFIX, binlog_cache_size,
MYF(MY_WME)))
killed=1;
transaction.trans_log.end_of_file= max_binlog_cache_size;
}
#endif

#ifdef __WIN__
@@ -148,8 +151,11 @@ THD::~THD()
}
close_temporary_tables(this);
#ifdef USING_TRANSACTIONS
close_cached_file(&transaction.trans_log);
ha_close_connection(this);
if (opt_using_transactions)
{
close_cached_file(&transaction.trans_log);
ha_close_connection(this);
}
#endif
if (global_read_lock)
{

@@ -346,6 +346,14 @@ int write_record(TABLE *table,COPY_INFO *info)
error=HA_WRITE_SKIPP;			/* Database can't find key */
goto err;
}
/*
Don't allow REPLACE to replace a row when a auto_increment column
was used.  This ensures that we don't get a problem when the
whole range of the key has been used.
*/
if (table->next_number_field && key_nr == table->next_number_index &&
table->file->auto_increment_column_changed)
goto err;
if (table->file->option_flag() & HA_DUPP_POS)
{
if (table->file->rnd_pos(table->record[1],table->file->dupp_ref))

@@ -156,7 +156,7 @@ static bool check_user(THD *thd,enum_server_command command, const char *user,
{
bool error=test(mysql_change_db(thd,db));
if (error)
decrease_user_connections(user,thd->host);
decrease_user_connections(thd->user,thd->host);
return error;
}
else
@@ -175,8 +175,8 @@ static DYNAMIC_ARRAY user_conn_array;
extern pthread_mutex_t LOCK_user_conn;

struct user_conn {
char user[USERNAME_LENGTH+HOSTNAME_LENGTH+2];
int connections, len;
char *user;
uint len, connections;
};

static byte* get_key_conn(user_conn *buff, uint *length,
@@ -188,18 +188,23 @@ static byte* get_key_conn(user_conn *buff, uint *length,

#define DEF_USER_COUNT 50

static void free_user(struct user_conn *uc)
{
my_free((char*) uc,MYF(0));
}

void init_max_user_conn(void)
{
(void) hash_init(&hash_user_connections,DEF_USER_COUNT,0,0,
(hash_get_key) get_key_conn,0, 0);
(void) init_dynamic_array(&user_conn_array,sizeof(user_conn),
DEF_USER_COUNT, DEF_USER_COUNT);
(hash_get_key) get_key_conn, (void (*)(void*)) free_user,
0);
}


static int check_for_max_user_connections(const char *user, int u_length,
const char *host)
{
int error=1;
uint temp_len;
char temp_user[USERNAME_LENGTH+HOSTNAME_LENGTH+2];
struct user_conn *uc;
@@ -207,6 +212,9 @@ static int check_for_max_user_connections(const char *user, int u_length,
user="";
if (!host)
host="";
DBUG_ENTER("check_for_max_user_connections");
DBUG_PRINT("enter",("user: '%s'  host: '%s'", user, host));

temp_len= (uint) (strxnmov(temp_user, sizeof(temp_user), user, "@", host,
NullS) - temp_user);
(void) pthread_mutex_lock(&LOCK_user_conn);
@@ -214,30 +222,40 @@ static int check_for_max_user_connections(const char *user, int u_length,
(byte*) temp_user, temp_len);
if (uc) /* user found ; check for no. of connections */
{
if ((uint) max_user_connections == uc->connections)
if (max_user_connections == (uint) uc->connections)
{
net_printf(&(current_thd->net),ER_TOO_MANY_USER_CONNECTIONS, temp_user);
pthread_mutex_unlock(&LOCK_user_conn);
return 1;
goto end;
}
uc->connections++;
}
else
{
/* the user is not found in the cache; Insert it */
struct user_conn uc;
memcpy(uc.user,temp_user,temp_len+1);
uc.len = temp_len;
uc.connections = 1;
if (!insert_dynamic(&user_conn_array, (char *) &uc))
struct user_conn *uc= ((struct user_conn*)
my_malloc(sizeof(struct user_conn) + temp_len+1,
MYF(MY_WME)));
if (!uc)
{
hash_insert(&hash_user_connections,
(byte *) dynamic_array_ptr(&user_conn_array,
user_conn_array.elements - 1));
send_error(&current_thd->net, 0, NullS);	// Out of memory
goto end;
}
uc->user=(char*) (uc+1);
memcpy(uc->user,temp_user,temp_len+1);
uc->len = temp_len;
uc->connections = 1;
if (hash_insert(&hash_user_connections, (byte*) uc))
{
my_free((char*) uc,0);
send_error(&current_thd->net, 0, NullS);	// Out of memory
goto end;
}
}
error=0;

end:
(void) pthread_mutex_unlock(&LOCK_user_conn);
return 0;
DBUG_RETURN(error);
}


@@ -246,10 +264,15 @@ static void decrease_user_connections(const char *user, const char *host)
char temp_user[USERNAME_LENGTH+HOSTNAME_LENGTH+2];
int temp_len;
struct user_conn uucc, *uc;
if (!max_user_connections)
return;
if (!user)
user="";
if (!host)
host="";
DBUG_ENTER("decrease_user_connections");
DBUG_PRINT("enter",("user: '%s'  host: '%s'", user, host));

temp_len= (uint) (strxnmov(temp_user, sizeof(temp_user), user, "@", host,
NullS) - temp_user);
(void) pthread_mutex_lock(&LOCK_user_conn);
@@ -263,17 +286,15 @@ static void decrease_user_connections(const char *user, const char *host)
{
/* Last connection for user; Delete it */
(void) hash_delete(&hash_user_connections,(char *) uc);
uint element= ((uint) ((byte*) uc - (byte*) user_conn_array.buffer) /
user_conn_array.size_of_element);
delete_dynamic_element(&user_conn_array,element);
}
end:
(void) pthread_mutex_unlock(&LOCK_user_conn);
DBUG_VOID_RETURN;
}


void free_max_user_conn(void)
{
delete_dynamic(&user_conn_array);
hash_free(&hash_user_connections);
}

@@ -336,20 +357,20 @@ check_connections(THD *thd)
{
/* buff[] needs to big enough to hold the server_version variable */
char buff[SERVER_VERSION_LENGTH + SCRAMBLE_LENGTH+32],*end;
int client_flags = CLIENT_LONG_FLAG | CLIENT_CONNECT_WITH_DB |
CLIENT_TRANSACTIONS;
LINT_INIT(pkt_len);
int client_flags = CLIENT_LONG_FLAG | CLIENT_CONNECT_WITH_DB;
if (opt_using_transactions)
client_flags|=CLIENT_TRANSACTIONS;
#ifdef HAVE_COMPRESS
client_flags |= CLIENT_COMPRESS;
#endif /* HAVE_COMPRESS */

end=strmov(buff,server_version)+1;
int4store((uchar*) end,thd->thread_id);
end+=4;
memcpy(end,thd->scramble,SCRAMBLE_LENGTH+1);
end+=SCRAMBLE_LENGTH +1;
#ifdef HAVE_COMPRESS
client_flags |= CLIENT_COMPRESS;
#endif /* HAVE_COMPRESS */
#ifdef HAVE_OPENSSL
if (ssl_acceptor_fd!=0)
if (ssl_acceptor_fd)
client_flags |= CLIENT_SSL;       /* Wow, SSL is avalaible! */
/*
* Without SSL the handshake consists of one packet. This packet
@@ -542,8 +563,7 @@ pthread_handler_decl(handle_one_connection,arg)
thread_safe_increment(aborted_threads,&LOCK_thread_count);
}

if (max_user_connections)
decrease_user_connections(thd->user,thd->host);
decrease_user_connections(thd->user,thd->host);
end_thread:
close_connection(net);
end_thread(thd,1);
@@ -567,17 +587,18 @@ pthread_handler_decl(handle_bootstrap,arg)
THD *thd=(THD*) arg;
FILE *file=bootstrap_file;
char *buff;
DBUG_ENTER("handle_bootstrap");

pthread_detach_this_thread();
thd->thread_stack= (char*) &thd;

/* The following must be called before DBUG_ENTER */
if (my_thread_init() || thd->store_globals())
{
close_connection(&thd->net,ER_OUT_OF_RESOURCES);
thd->fatal_error=1;
goto end;
}
DBUG_ENTER("handle_bootstrap");

pthread_detach_this_thread();
thd->thread_stack= (char*) &thd;
thd->mysys_var=my_thread_var;
thd->dbug_thread_id=my_thread_id();
#ifndef __WIN__
@@ -2800,4 +2821,3 @@ static void refresh_status(void)
pthread_mutex_unlock(&LOCK_status);
pthread_mutex_unlock(&THR_LOCK_keycache);
}

@@ -2037,8 +2037,7 @@ get_best_combination(JOIN *join)
if (keyparts == keyuse->keypart)
{
keyparts++;
length+=keyinfo->key_part[keyuse->keypart].length +
test(keyinfo->key_part[keyuse->keypart].null_bit);
length+=keyinfo->key_part[keyuse->keypart].store_length;
}
}
keyuse++;
@@ -2238,7 +2237,10 @@ make_join_select(JOIN *join,SQL_SELECT *select,COND *cond)
make_cond_for_table(cond,join->const_table_map,(table_map) 0);
DBUG_EXECUTE("where",print_where(const_cond,"constants"););
if (const_cond && !const_cond->val_int())
{
DBUG_PRINT("info",("Found impossible WHERE condition"));
DBUG_RETURN(1);				// Impossible const condition
}
}
used_tables=(select->const_tables=join->const_table_map) | RAND_TABLE_BIT;
for (uint i=join->const_tables ; i < join->tables ; i++)
@@ -5131,7 +5133,7 @@ test_if_skip_sort_order(JOIN_TAB *tab,ORDER *order,ha_rows select_limit)
usable_keys=0;
break;
}
usable_keys&=((Item_field*) (*tmp_order->item))->field->part_of_key;
usable_keys&=((Item_field*) (*tmp_order->item))->field->part_of_sortkey;
}

ref_key= -1;

@@ -740,6 +740,7 @@ create_table_option:
lex->table_list.elements=1;
lex->table_list.next= (byte**) &(table_list->next);
table_list->next=0;
lex->create_info.used_fields|= HA_CREATE_USED_UNION;
}

table_types:
@@ -2666,8 +2667,11 @@ option_value:
{
Item_func_set_user_var *item = new Item_func_set_user_var($2,$4);
if (item->fix_fields(current_thd,0) || item->update())
{
send_error(&current_thd->net, ER_SET_CONSTANTS_ONLY);
}
YYABORT;
}
}
| SQL_SLAVE_SKIP_COUNTER equal ULONG_NUM
{
pthread_mutex_lock(&LOCK_slave);

19
sql/table.cc
@@ -153,7 +153,6 @@ int openfrm(const char *name, const char *alias, uint db_stat, uint prgflag,

for (i=0 ; i < keys ; i++, keyinfo++)
{
uint null_parts=0;
keyinfo->flags= ((uint) strpos[0]) ^ HA_NOSAME;
keyinfo->key_length= (uint) uint2korr(strpos+1);
keyinfo->key_parts= (uint) strpos[3]; strpos+=4;
@@ -185,7 +184,6 @@ int openfrm(const char *name, const char *alias, uint db_stat, uint prgflag,
}
key_part->store_length=key_part->length;
}
keyinfo->key_length+=null_parts;
set_if_bigger(outparam->max_key_length,keyinfo->key_length+
keyinfo->key_parts);
if (keyinfo->flags & HA_NOSAME)
@@ -420,6 +418,7 @@ int openfrm(const char *name, const char *alias, uint db_stat, uint prgflag,
key_part->store_length+=HA_KEY_NULL_LENGTH;
keyinfo->flags|=HA_NULL_PART_KEY;
keyinfo->extra_length+= HA_KEY_NULL_LENGTH;
keyinfo->key_length+= HA_KEY_NULL_LENGTH;
}
if (field->type() == FIELD_TYPE_BLOB ||
field->real_type() == FIELD_TYPE_VAR_STRING)
@@ -428,6 +427,7 @@ int openfrm(const char *name, const char *alias, uint db_stat, uint prgflag,
key_part->key_part_flag|= HA_BLOB_PART;
keyinfo->extra_length+=HA_KEY_BLOB_LENGTH;
key_part->store_length+=HA_KEY_BLOB_LENGTH;
keyinfo->key_length+= HA_KEY_BLOB_LENGTH;
}
if (i == 0 && key != primary_key)
field->flags |=
@@ -438,11 +438,16 @@ int openfrm(const char *name, const char *alias, uint db_stat, uint prgflag,
field->key_start|= ((key_map) 1 << key);
if ((ha_option & HA_HAVE_KEY_READ_ONLY) &&
field->key_length() == key_part->length &&
field->type() != FIELD_TYPE_BLOB &&
(field->key_type() != HA_KEYTYPE_TEXT ||
(!(ha_option & HA_KEY_READ_WRONG_STR) &&
!(keyinfo->flags & HA_FULLTEXT))))
field->part_of_key|= ((key_map) 1 << key);
field->type() != FIELD_TYPE_BLOB)
{
if (field->key_type() != HA_KEYTYPE_TEXT ||
(!(ha_option & HA_KEY_READ_WRONG_STR) &&
!(keyinfo->flags & HA_FULLTEXT)))
field->part_of_key|= ((key_map) 1 << key);
if (field->key_type() != HA_KEYTYPE_TEXT ||
!(keyinfo->flags & HA_FULLTEXT))
field->part_of_sortkey|= ((key_map) 1 << key);
}
if (!(key_part->key_part_flag & HA_REVERSE_SORT) &&
usable_parts == i)
usable_parts++;			// For FILESORT