All statements are now run in a sub-transaction.

Give a warning if we are using non-transactional tables and do ROLLBACK
Porting of crash-me and benchmarks to FrontBase
monty@donna.mysql.com 2000-11-24 01:51:18 +02:00
parent c62554025d
commit 81bc313e99
73 changed files with 1377 additions and 331 deletions

View File

@ -243,7 +243,7 @@ Installing a MySQL Source Distribution
Perl Installation Comments
* Perl installation:: Installing Perl on UNIX
* Perl installation:: Installing Perl on Unix
* ActiveState Perl:: Installing ActiveState Perl on Windows
* Windows Perl:: Installing the @strong{MySQL} Perl distribution on Windows
* Perl support problems:: Problems using the Perl @code{DBI}/@code{DBD} interface
@ -297,7 +297,7 @@ Windows Notes
* Windows symbolic links:: Splitting data across different disks under Win32
* Windows compiling:: Compiling MySQL clients on Windows.
* Windows and BDB tables.:: Windows and BDB Tables
* Windows vs Unix:: @strong{MySQL}-Windows compared to UNIX @strong{MySQL}
* Windows vs Unix:: @strong{MySQL}-Windows compared to Unix @strong{MySQL}
Post-installation Setup and Testing
@ -454,7 +454,7 @@ Functions for Use in @code{SELECT} and @code{WHERE} Clauses
* Silent column changes:: Silent column changes
@code{SHOW} syntax (Get information about tables, columns,...)
@code{SHOW} Syntax (Get Information About Tables, Columns,...)
* SHOW DATABASE INFO::
* SHOW TABLE STATUS::
@ -464,7 +464,7 @@ Functions for Use in @code{SELECT} and @code{WHERE} Clauses
* SHOW GRANTS::
* SHOW CREATE TABLE::
MySQL table types
MySQL Table Types
* MyISAM:: MyISAM tables
* MERGE:: MERGE tables
@ -472,18 +472,18 @@ MySQL table types
* HEAP:: HEAP tables
* BDB:: BDB or Berkeley_db tables
MyISAM tables
MyISAM Tables
* Key space:: Space needed for keys
* MyISAM table formats:: MyISAM table formats
MyISAM table formats
MyISAM Table Formats
* Static format:: Static (Fixed-length) table characteristics
* Dynamic format:: Dynamic table characteristics
* Compressed format:: Compressed table characteristics
BDB or Berkeley_db tables
BDB or Berkeley_db Tables
* BDB overview::
* BDB install::
@ -503,7 +503,7 @@ MySQL Tutorial
* Batch mode:: Using @code{mysql} in batch mode
* Twin:: Queries from twin project
Examples of common queries
Examples of Common Queries
* example-Maximum-column:: The maximum value for a column
* example-Maximum-row:: The row holding the maximum of a certain column
@ -512,7 +512,7 @@ Examples of common queries
* example-user-variables::
* example-Foreign keys:: Using foreign keys
Creating and using a database
Creating and Using a Database
* Creating database:: Creating a database
* Creating tables:: Creating a table
@ -520,7 +520,7 @@ Creating and using a database
* Retrieving data:: Retrieving information from a table
* Multiple tables:: Using more than one table
Retrieving information from a table
Retrieving Information from a Table
* Selecting all:: Selecting all data
* Selecting rows:: Selecting particular rows
@ -531,7 +531,7 @@ Retrieving information from a table
* Pattern matching:: Pattern matching
* Counting rows:: Counting rows
Queries from twin project
Queries from Twin Project
* Twin pool:: Find all non-distributed twins
* Twin event:: Show a table on twin pair status
@ -541,7 +541,7 @@ MySQL server functions
* Languages:: What languages are supported by @strong{MySQL}?
* Table size:: How big @strong{MySQL} tables can be
What languages are supported by MySQL?
What Languages Are Supported by MySQL?
* Character sets:: The character set used for data and sorting
* Adding character set:: Adding a new character set
@ -558,7 +558,7 @@ Replication in MySQL
* Replication SQL:: SQL Commands related to replication
* Replication FAQ:: Frequently Asked Questions about replication
Getting maximum performance from MySQL
Getting Maximum Performance from MySQL
* Optimize Basics:: Optimization overview
* System:: System/Compile time and startup parameter tuning
@ -572,7 +572,7 @@ Getting maximum performance from MySQL
* Portability:: Portability
* Internal use:: What have we used MySQL for?
System/Compile time and startup parameter tuning
System/Compile Time and Startup Parameter Tuning
* Compile and link options:: How compiling and linking affects the speed of MySQL
* Disk issues:: Disk issues
@ -831,6 +831,7 @@ MySQL change history
Changes in release 3.23.x (Recommended; beta)
* News-3.23.29:: Changes in release 3.23.29
* News-3.23.28:: Changes in release 3.23.28
* News-3.23.27:: Changes in release 3.23.27
* News-3.23.26:: Changes in release 3.23.26
@ -15573,7 +15574,7 @@ mysql> select ATAN(-2);
@end example
@findex ATAN2()
@item ATAN2(X,Y)
@item ATAN2(Y,X)
Returns the arc tangent of the two variables @code{X} and @code{Y}. It is
similar to calculating the arc tangent of @code{Y / X}, except that the
signs of both arguments are used to determine the quadrant of the
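The corrected argument order can be illustrated as follows (a sketch in the manual's example style; values rounded to six decimals):

```sql
-- ATAN2(Y,X): Y is the first argument, X the second. The signs of
-- both arguments select the quadrant, unlike plain ATAN(Y/X).
mysql> select ATAN2(-2,2);
        -> -0.785398
mysql> select ATAN(-2/2);
        -> -0.785398
mysql> select ATAN2(2,-2);
        -> 2.356194
```

The last query shows where the two differ: `ATAN(2/-2)` would also return -0.785398, but `ATAN2` uses the negative X to place the angle in the second quadrant.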
@ -17701,7 +17702,7 @@ using @code{myisampack}. @xref{Compressed format}.
@section @code{ALTER TABLE} Syntax
@example
ALTER [IGNORE] TABLE tbl_name alter_spec [, alter_spec ...] [ORDER BY col]
ALTER [IGNORE] TABLE tbl_name alter_spec [, alter_spec ...]
alter_specification:
ADD [COLUMN] create_definition [FIRST | AFTER column_name ]
@ -17719,6 +17720,7 @@ alter_specification:
or DROP PRIMARY KEY
or DROP INDEX index_name
or RENAME [TO] new_tbl_name
or ORDER BY col
or table_options
@end example
@ -17859,9 +17861,12 @@ if no @code{PRIMARY KEY} was specified explicitly.)
@item
@code{ORDER BY} allows you to create the new table with the rows in a
specific order. Note that the table will not remain in this order after
inserts and deletes. In some cases, it may make sorting easier for
@strong{MySQL} if the table is in order by the column that you wish to
order it by later.
order it by later. This option is mainly useful when you know that you
are mostly going to query the rows in a certain order; by using this
option after big changes to the table, you may be able to get higher
performance.
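A sketch of how this might be used (the table and column names here are hypothetical):

```sql
-- Physically re-sort the table by the column we usually query on.
-- Later inserts and deletes will not maintain this order.
ALTER TABLE log_lines ORDER BY entry_time;

-- Subsequent scans that want rows in entry_time order may now
-- require less sorting work from the server.
SELECT * FROM log_lines ORDER BY entry_time;
```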
@findex ALTER TABLE
@item
@ -17991,7 +17996,7 @@ to the original state.
@section @code{DROP TABLE} Syntax
@example
DROP TABLE [IF EXISTS] tbl_name [, tbl_name,...]
DROP TABLE [IF EXISTS] tbl_name [, tbl_name,...] [RESTRICT | CASCADE]
@end example
@code{DROP TABLE} removes one or more tables. All table data and the table
@ -18001,6 +18006,9 @@ In @strong{MySQL} Version 3.22 or later, you can use the keywords
@code{IF EXISTS} to prevent an error from occurring for tables that don't
exist.
@code{RESTRICT} and @code{CASCADE} are allowed to make porting easier.
For the moment they don't do anything.
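For example, both of the following are accepted and, at this point, behave identically to a plain `DROP TABLE` (hypothetical table name):

```sql
-- RESTRICT and CASCADE are parsed but currently ignored; both forms
-- simply drop the table, which eases ports from other SQL servers.
DROP TABLE IF EXISTS old_data RESTRICT;
DROP TABLE IF EXISTS old_data CASCADE;
```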
@findex OPTIMIZE TABLE
@node OPTIMIZE TABLE, CHECK TABLE, DROP TABLE, Reference
@section @code{OPTIMIZE TABLE} Syntax
@ -20755,6 +20763,11 @@ COMMIT;
Note that if you are using non-transaction-safe tables, the changes will be
stored at once, independent of the status of the @code{autocommit} mode.
If you do a @code{ROLLBACK} when you have updated a non-transactional
table you will get an error (@code{ER_WARNING_NOT_COMPLETE_ROLLBACK}) as
a warning. All transaction-safe tables will be restored, but any
non-transactional table will not change.
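A minimal sketch of the mixed case, assuming `t_bdb` is a transaction-safe BDB table and `t_myisam` is a non-transactional MyISAM table (both names hypothetical):

```sql
SET AUTOCOMMIT=0;
INSERT INTO t_bdb VALUES (1);
INSERT INTO t_myisam VALUES (1);
-- The insert into t_bdb is undone; the row in t_myisam remains,
-- and the statement returns ER_WARNING_NOT_COMPLETE_ROLLBACK
-- as a warning.
ROLLBACK;
```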
@findex LOCK TABLES
@findex UNLOCK TABLES
@node LOCK TABLES, SET OPTION, COMMIT, Reference
@ -27240,6 +27253,10 @@ rows at the same time another table is reading from it. If this is important
for you, you should consider methods where you don't have to delete rows
or run @code{OPTIMIZE TABLE} after you have deleted a lot of rows.
@item
Use @code{ALTER TABLE ... ORDER BY expr1,expr2,...} if you mostly
retrieve rows in expr1,expr2,... order. By using this option after big
changes to the table, you may be able to get higher performance.
@item
In some cases it may make sense to introduce a column that is 'hashed'
based on information from other columns. If this column is short and
reasonably unique it may be much faster than a big index on many
@ -32981,6 +32998,9 @@ contain a path, the file is written in the data directory.
The slow query log can be used to find queries that take a long time to
execute and are thus candidates for optimization.
If you are using @code{--log-long-format}, queries that are not
using indexes are also printed. @xref{Command-line options}.
@cindex database replication
@cindex replication, database
@cindex database mirroring
@ -38778,6 +38798,7 @@ version. The replication and BerkeleyDB code is still under development,
though, so Version 3.23 is not released as a stable version yet.
@menu
* News-3.23.29:: Changes in release 3.23.29
* News-3.23.28:: Changes in release 3.23.28
* News-3.23.27:: Changes in release 3.23.27
* News-3.23.26:: Changes in release 3.23.26
@ -38809,7 +38830,25 @@ though, so Version 3.23 is not released as a stable version yet.
* News-3.23.0:: Changes in release 3.23.0
@end menu
@node News-3.23.28, News-3.23.27, News-3.23.x, News-3.23.x
@node News-3.23.29, News-3.23.28, News-3.23.x, News-3.23.x
@appendixsubsec Changes in release 3.23.29
@itemize @bullet
@item
Changed crash-me and the MySQL benchmarks to also work with FrontBase.
@item
Allow @code{RESTRICT} and @code{CASCADE} after @code{DROP TABLE} to make
porting easier.
@item
If we get an error we now roll back only the statement (for BDB tables),
not the whole transaction.
@item
If you do a @code{ROLLBACK} when you have updated a non-transactional table
you will get an error as a warning.
@item
Reset a status variable that could cause problems if one used @code{--slow-log}.
@end itemize
@node News-3.23.28, News-3.23.27, News-3.23.29, News-3.23.x
@appendixsubsec Changes in release 3.23.28
@itemize @bullet
@item

View File

@ -196,4 +196,5 @@
#define ER_UNKNOWN_SYSTEM_VARIABLE 1193
#define ER_CRASHED_ON_USAGE 1194
#define ER_CRASHED_ON_REPAIR 1195
#define ER_ERROR_MESSAGES 196
#define ER_WARNING_NOT_COMPLETE_ROLLBACK 1196
#define ER_ERROR_MESSAGES 197

View File

@ -38,7 +38,7 @@ require "$pwd/server-cfg" || die "Can't read Configuration file: $!\n";
$|=1; # Output data immediately
$opt_skip_test=$opt_skip_create=$opt_skip_delete=$opt_verbose=$opt_fast_insert=$opt_lock_tables=$opt_debug=$opt_skip_delete=$opt_fast=$opt_force=$opt_log=$opt_use_old_results=$opt_help=$opt_odbc=$opt_small_test=$opt_small_tables=$opt_samll_key_tables=$opt_stage=$opt_old_headers=$opt_die_on_errors=0;
$opt_skip_test=$opt_skip_create=$opt_skip_delete=$opt_verbose=$opt_fast_insert=$opt_lock_tables=$opt_debug=$opt_skip_delete=$opt_fast=$opt_force=$opt_log=$opt_use_old_results=$opt_help=$opt_odbc=$opt_small_test=$opt_small_tables=$opt_samll_key_tables=$opt_stage=$opt_old_headers=$opt_die_on_errors=$opt_tcpip=0;
$opt_cmp=$opt_user=$opt_password="";
$opt_server="mysql"; $opt_dir="output";
$opt_host="localhost";$opt_database="test";
@ -51,7 +51,7 @@ $log_prog_args=join(" ", skip_arguments(\@ARGV,"comments","cmp","server",
"user", "host", "database", "password",
"use-old-results","skip-test",
"machine", "dir", "suffix", "log"));
GetOptions("skip-test=s","comments=s","cmp=s","server=s","user=s","host=s","database=s","password=s","loop-count=i","row-count=i","skip-create","skip-delete","verbose","fast-insert","lock-tables","debug","fast","force","field-count=i","regions=i","groups=i","time-limit=i","log","use-old-results","machine=s","dir=s","suffix=s","help","odbc","small-test","small-tables","small-key-tables","stage=i","old-headers","die-on-errors","create-options=s","hires") || usage();
GetOptions("skip-test=s","comments=s","cmp=s","server=s","user=s","host=s","database=s","password=s","loop-count=i","row-count=i","skip-create","skip-delete","verbose","fast-insert","lock-tables","debug","fast","force","field-count=i","regions=i","groups=i","time-limit=i","log","use-old-results","machine=s","dir=s","suffix=s","help","odbc","small-test","small-tables","small-key-tables","stage=i","old-headers","die-on-errors","create-options=s","hires","tcpip") || usage();
usage() if ($opt_help);
$server=get_server($opt_server,$opt_host,$opt_database,$opt_odbc,
@ -256,6 +256,17 @@ sub fetch_all_rows
if (!$sth->execute)
{
print "\n" if ($opt_debug);
if (defined($server->{'error_on_execute_means_zero_rows'}) &&
!$server->abort_if_fatal_error())
{
if (defined($must_get_result) && $must_get_result)
{
die "Error: Query $query didn't return any rows\n";
}
$sth->finish;
print "0\n" if ($opt_debug);
return 0;
}
die "Error occurred with execute($query)\n -> $DBI::errstr\n";
$sth->finish;
return undef;
@ -507,6 +518,11 @@ All benchmarks takes the following options:
different server options without overwriting old files.
When using --fast the suffix is automatically set to '_fast'.
--tcpip
Inform the test suite that we are using TCP/IP to connect to the server. In
this case we can't do many new connections in a row, as we may
fill the TCP/IP stack
--time-limit (Default $opt_time_limit)
How long a test loop is allowed to take, in seconds, before the end result
is 'estimated'.

View File

@ -38,8 +38,7 @@
# as such, and clarify ones such as "mediumint" with comments such as
# "3-byte int" or "same as xxx".
$version="1.51";
$version="1.53";
use DBI;
use Getopt::Long;
@ -51,6 +50,7 @@ $opt_dir="limits";
$opt_debug=$opt_help=$opt_Information=$opt_restart=$opt_force=$opt_quick=0;
$opt_log_all_queries=$opt_fix_limit_file=$opt_batch_mode=0;
$opt_db_start_cmd=""; # the db server start command
$opt_check_server=0; # Check if server is alive before each query
$opt_sleep=10; # time to sleep while starting the db server
$limit_changed=0; # For configure file
$reconnect_count=0;
@ -61,7 +61,7 @@ $limits{'operating_system'}= machine();
$prompts{'operating_system'}='crash-me tested on';
$retry_limit=3;
GetOptions("Information","help","server=s","debug","user=s","password=s","database=s","restart","force","quick","log-all-queries","comment=s","host=s","fix-limit-file","dir=s","db-start-cmd=s","sleep=s","batch-mode","config-file=s","log-queries-to-file=s") || usage();
GetOptions("Information","help","server=s","debug","user=s","password=s","database=s","restart","force","quick","log-all-queries","comment=s","host=s","fix-limit-file","dir=s","db-start-cmd=s","sleep=s","batch-mode","config-file=s","log-queries-to-file=s","check-server") || usage();
usage() if ($opt_help || $opt_Information);
$opt_config_file="$pwd/$opt_dir/$opt_server.cfg" if (length($opt_config_file) == 0);
@ -136,6 +136,7 @@ $max_stacked_expressions="+2000";
$query_size=$max_buffer_size;
$longreadlen=16000000; # For retrieval buffer
#
# First do some checks that are needed for the rest of the benchmark
#
@ -143,11 +144,43 @@ use sigtrap; # Must be removed with perl5.005_2 on Win98
$SIG{PIPE} = 'IGNORE';
$SIG{SEGV} = sub {warn('SEGFAULT')};
$dbh=safe_connect();
$dbh->do("drop table crash_me"); # Remove old run
$dbh->do("drop table crash_me2"); # Remove old run
$dbh->do("drop table crash_me3"); # Remove old run
$dbh->do("drop table crash_q"); # Remove old run
$dbh->do("drop table crash_q1"); # Remove old run
#
# Test if the database requires RESTRICT/CASCADE after DROP TABLE
#
# Really remove the crash_me table
$prompt="drop table require cascade/restrict";
$drop_attr="";
$dbh->do("drop table crash_me");
$dbh->do("drop table crash_me cascade");
if (!safe_query(["create table crash_me (a integer not null)",
"drop table crash_me"]))
{
$dbh->do("drop table crash_me cascade");
if (safe_query(["create table crash_me (a integer not null)",
"drop table crash_me cascade"]))
{
save_config_data('drop_requires_cascade',"yes","$prompt");
$drop_attr="cascade";
}
else
{
die "Can't create and drop table 'crash_me'\n";
}
}
else
{
save_config_data('drop_requires_cascade',"no","$prompt");
$drop_attr="";
}
# Remove tables from old runs
$dbh->do("drop table crash_me $drop_attr");
$dbh->do("drop table crash_me2 $drop_attr");
$dbh->do("drop table crash_me3 $drop_attr");
$dbh->do("drop table crash_q $drop_attr");
$dbh->do("drop table crash_q1 $drop_attr");
$prompt="Tables without primary key";
if (!safe_query(["create table crash_me (a integer not null,b char(10) not null)",
@ -164,6 +197,7 @@ else
{
save_config_data('no_primary_key',"yes",$prompt);
}
#
# Define strings for character NULL and numeric NULL used in expressions
#
@ -222,28 +256,33 @@ report("Group by position",'group_by_position',
"select a from crash_me group by 1");
report("Group by alias",'group_by_alias',
"select a as ab from crash_me group by ab");
report("Group on unused column",'group_on_unused',
"select count(*) from crash_me group by a");
report("Order by",'order_by',"select a from crash_me order by a");
report("Order by position",'order_by_position',
"select a from crash_me order by 1");
report("Order by function","order_by_function",
"select a from crash_me order by a+1");
report("Order by on unused column",'order_on_unused',
"select b from crash_me order by a");
check_and_report("Order by DESC is remembered",'order_by_remember_desc',
["create table crash_q (s int,s1 int)",
"insert into crash_q values(1,1)",
"insert into crash_q values(3,1)",
"insert into crash_q values(2,1)"],
"select s,s1 from crash_q order by s1 DESC,s",
["drop table crash_q"],[3,2,1],7,undef(),3);
["drop table crash_q $drop_attr"],[3,2,1],7,undef(),3);
report("Compute",'compute',
"select a from crash_me order by a compute sum(a) by a");
report("Value lists in INSERT",'multi_value_insert',
"create table crash_q (s char(10))",
"insert into crash_q values ('a'),('b')",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("INSERT with set syntax",'insert_with_set',
"create table crash_q (a integer)",
"insert into crash_q SET a=1",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("allows end ';'","end_colon", "select * from crash_me;");
try_and_report("LIMIT number of rows","select_limit",
["with LIMIT",
@ -253,7 +292,7 @@ try_and_report("LIMIT number of rows","select_limit",
report("SELECT with LIMIT #,#","select_limit2", "select * from crash_me limit 1,1");
# The following alter table commands MUST be kept together!
if ($dbh->do("create table crash_q (a integer, b integer,c CHAR(10))"))
if ($dbh->do("create table crash_q (a integer, b integer,c1 CHAR(10))"))
{
report("Alter table add column",'alter_add_col',
"alter table crash_q add d integer");
@ -266,11 +305,10 @@ if ($dbh->do("create table crash_q (a integer, b integer,c CHAR(10))"))
# informix can only change data type with modify
report_one("Alter table modify column",'alter_modify_col',
[["alter table crash_q modify c CHAR(20)","yes"],
["alter table crash_q alter c CHAR(20)","with alter"]]);
[["alter table crash_q modify c1 CHAR(20)","yes"],
["alter table crash_q alter c1 CHAR(20)","with alter"]]);
report("Alter table alter column default",'alter_alter_col',
"alter table crash_q alter b set default 10",
"alter table crash_q alter b set default NULL");
"alter table crash_q alter b set default 10");
report_one("Alter table drop column",'alter_drop_col',
[["alter table crash_q drop column b","yes"],
["alter table crash_q drop column b restrict","with restrict/cascade"]]);
@ -278,46 +316,51 @@ if ($dbh->do("create table crash_q (a integer, b integer,c CHAR(10))"))
"alter table crash_q rename to crash_q1");
}
# Make sure both tables will be dropped, even if rename fails.
$dbh->do("drop table crash_q1");
$dbh->do("drop table crash_q");
$dbh->do("drop table crash_q1 $drop_attr");
$dbh->do("drop table crash_q $drop_attr");
report("rename table","rename_table",
"create table crash_q (a integer, b integer,c CHAR(10))",
"create table crash_q (a integer, b integer,c1 CHAR(10))",
"rename table crash_q to crash_q1",
"drop table crash_q1");
"drop table crash_q1 $drop_attr");
# Make sure both tables will be dropped, even if rename fails.
$dbh->do("drop table crash_q1");
$dbh->do("drop table crash_q");
$dbh->do("drop table crash_q1 $drop_attr");
$dbh->do("drop table crash_q $drop_attr");
report("truncate","truncate_table",
"create table crash_q (a integer, b integer,c CHAR(10))",
"create table crash_q (a integer, b integer,c1 CHAR(10))",
"truncate crash_q",
"drop table crash_q");
"drop table crash_q $drop_attr");
if ($dbh->do("create table crash_q (a integer, b integer,c CHAR(10))") &&
$dbh->do("create table crash_q1 (a integer, b integer,c CHAR(10) not null)"))
if ($dbh->do("create table crash_q (a integer, b integer,c1 CHAR(10))") &&
$dbh->do("create table crash_q1 (a integer, b integer,c1 CHAR(10) not null)"))
{
report("Alter table add constraint",'alter_add_constraint',
"alter table crash_q add constraint c1 check(a > b)");
report("Alter table drop constraint",'alter_drop_constraint',
"alter table crash_q drop constraint c1");
"alter table crash_q add constraint c2 check(a > b)");
report_one("Alter table drop constraint",'alter_drop_constraint',
[["alter table crash_q drop constraint c2","yes"],
["alter table crash_q drop constraint c2 restrict","with restrict/cascade"]]);
report("Alter table add unique",'alter_add_unique',
"alter table crash_q add constraint u1 unique(c)");
"alter table crash_q add constraint u1 unique(c1)");
try_and_report("Alter table drop unique",'alter_drop_unique',
["with constraint",
"alter table crash_q drop constraint u1"],
["with constraint and restrict/cascade",
"alter table crash_q drop constraint u1 restrict"],
["with drop key",
"alter table crash_q drop key c"]);
"alter table crash_q drop key c1"]);
try_and_report("Alter table add primary key",'alter_add_primary_key',
["with constraint",
"alter table crash_q1 add constraint p1 primary key(c)"],
"alter table crash_q1 add constraint p1 primary key(c1)"],
["with add primary key",
"alter table crash_q1 add primary key(c)"]);
"alter table crash_q1 add primary key(c1)"]);
report("Alter table add foreign key",'alter_add_foreign_key',
"alter table crash_q add constraint f1 foreign key(c) references crash_q1(c)");
"alter table crash_q add constraint f1 foreign key(c1) references crash_q1(c1)");
try_and_report("Alter table drop foreign key",'alter_drop_foreign_key',
["with drop constraint",
"alter table crash_q drop constraint f1"],
["with drop constraint and restrict/cascade",
"alter table crash_q drop constraint f1 restrict"],
["with drop foreign key",
"alter table crash_q drop foreign key f1"]);
try_and_report("Alter table drop primary key",'alter_drop_primary_key',
@ -326,8 +369,8 @@ if ($dbh->do("create table crash_q (a integer, b integer,c CHAR(10))") &&
["drop primary key",
"alter table crash_q1 drop primary key"]);
}
$dbh->do("drop table crash_q");
$dbh->do("drop table crash_q1");
$dbh->do("drop table crash_q $drop_attr");
$dbh->do("drop table crash_q1 $drop_attr");
check_and_report("case insensitive compare","case_insensitive_strings",
[],"select b from crash_me where b = 'A'",[],'a',1);
@ -337,8 +380,8 @@ check_and_report("group on column with null values",'group_by_null',
["create table crash_q (s char(10))",
"insert into crash_q values(null)",
"insert into crash_q values(null)"],
"select count(*) from crash_q group by s",
["drop table crash_q"],2,0);
"select count(*),s from crash_q group by s",
["drop table crash_q $drop_attr"],2,0);
$prompt="Having";
if (!defined($limits{'having'}))
@ -563,7 +606,7 @@ foreach $types (@types)
$tmp2 =~ s/_not_null//g;
report("Type $type","type_$types->[0]_$tmp2",
"create table crash_q (q $use_type)",
"drop table crash_q");
"drop table crash_q $drop_attr");
}
}
@ -575,7 +618,7 @@ check_and_report("Remembers end space in char()","remember_end_space",
["create table crash_q (a char(10))",
"insert into crash_q values('hello ')"],
"select a from crash_q where a = 'hello '",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
'hello ',6);
check_and_report("Remembers end space in varchar()",
@ -583,35 +626,35 @@ check_and_report("Remembers end space in varchar()",
["create table crash_q (a varchar(10))",
"insert into crash_q values('hello ')"],
"select a from crash_q where a = 'hello '",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
'hello ',6);
check_and_report("Supports 0000-00-00 dates","date_zero",
["create table crash_me2 (a date not null)",
"insert into crash_me2 values ('0000-00-00')"],
"select a from crash_me2",
["drop table crash_me2"],
["drop table crash_me2 $drop_attr"],
"0000-00-00",1);
check_and_report("Supports 0001-01-01 dates","date_one",
["create table crash_me2 (a date not null)",
"insert into crash_me2 values (DATE '0001-01-01')"],
"select a from crash_me2",
["drop table crash_me2"],
["drop table crash_me2 $drop_attr"],
"0001-01-01",1);
check_and_report("Supports 9999-12-31 dates","date_last",
["create table crash_me2 (a date not null)",
"insert into crash_me2 values (DATE '9999-12-31')"],
"select a from crash_me2",
["drop table crash_me2"],
["drop table crash_me2 $drop_attr"],
"9999-12-31",1);
check_and_report("Supports 'infinity dates","date_infinity",
["create table crash_me2 (a date not null)",
"insert into crash_me2 values ('infinity')"],
"select a from crash_me2",
["drop table crash_me2"],
["drop table crash_me2 $drop_attr"],
"infinity",1);
if (!defined($limits{'date_with_YY'}))
@ -620,7 +663,7 @@ if (!defined($limits{'date_with_YY'}))
["create table crash_me2 (a date not null)",
"insert into crash_me2 values ('98-03-03')"],
"select a from crash_me2",
["drop table crash_me2"],
["drop table crash_me2 $drop_attr"],
"1998-03-03",5);
if ($limits{'date_with_YY'} eq "yes")
{
@ -630,7 +673,7 @@ if (!defined($limits{'date_with_YY'}))
["create table crash_me2 (a date not null)",
"insert into crash_me2 values ('10-03-03')"],
"select a from crash_me2",
["drop table crash_me2"],
["drop table crash_me2 $drop_attr"],
"2010-03-03",5);
}
}
@ -645,33 +688,33 @@ if (($limits{'type_extra_float(2_arg)'} eq "yes" ||
if (execute_and_check(["create table crash_q (q1 $type)",
"insert into crash_q values(1.14)"],
"select q1 from crash_q",
["drop table crash_q"],1.1,0) &&
["drop table crash_q $drop_attr"],1.1,0) &&
execute_and_check(["create table crash_q (q1 $type)",
"insert into crash_q values(1.16)"],
"select q1 from crash_q",
["drop table crash_q"],1.1,0))
["drop table crash_q $drop_attr"],1.1,0))
{
$result="truncate";
}
elsif (execute_and_check(["create table crash_q (q1 $type)",
"insert into crash_q values(1.14)"],
"select q1 from crash_q",
["drop table crash_q"],1.1,0) &&
["drop table crash_q $drop_attr"],1.1,0) &&
execute_and_check(["create table crash_q (q1 $type)",
"insert into crash_q values(1.16)"],
"select q1 from crash_q",
["drop table crash_q"],1.2,0))
["drop table crash_q $drop_attr"],1.2,0))
{
$result="round";
}
elsif (execute_and_check(["create table crash_q (q1 $type)",
"insert into crash_q values(1.14)"],
"select q1 from crash_q",
["drop table crash_q"],1.14,0) &&
["drop table crash_q $drop_attr"],1.14,0) &&
execute_and_check(["create table crash_q (q1 $type)",
"insert into crash_q values(1.16)"],
"select q1 from crash_q",
["drop table crash_q"],1.16,0))
["drop table crash_q $drop_attr"],1.16,0))
{
$result="exact";
}
@ -682,20 +725,20 @@ if (($limits{'type_extra_float(2_arg)'} eq "yes" ||
try_and_report("Type for row id", "rowid",
["rowid",
"create table crash_q (a rowid)","drop table crash_q"],
"create table crash_q (a rowid)","drop table crash_q $drop_attr"],
["auto_increment",
"create table crash_q (a int not null auto_increment, primary key(a))","drop table crash_q"],
"create table crash_q (a int not null auto_increment, primary key(a))","drop table crash_q $drop_attr"],
["oid",
"create table crash_q (a oid, primary key(a))","drop table crash_q"],
"create table crash_q (a oid, primary key(a))","drop table crash_q $drop_attr"],
["serial",
"create table crash_q (a serial, primary key(a))","drop table crash_q"]);
"create table crash_q (a serial, primary key(a))","drop table crash_q $drop_attr"]);
try_and_report("Automatic rowid", "automatic_rowid",
["_rowid",
"create table crash_q (a int not null, primary key(a))",
"insert into crash_q values (1)",
"select _rowid from crash_q",
"drop table crash_q"]);
"drop table crash_q $drop_attr"]);
#
# Test functions
@ -1081,19 +1124,19 @@ report("LIKE on numbers","like_with_number",
"create table crash_q (a int,b int)",
"insert into crash_q values(10,10)",
"select * from crash_q where a like '10'",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("column LIKE column","like_with_column",
"create table crash_q (a char(10),b char(10))",
"insert into crash_q values('abc','abc')",
"select * from crash_q where a like b",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("update of column= -column","NEG",
"create table crash_q (a integer)",
"insert into crash_q values(10)",
"update crash_q set a=-a",
"drop table crash_q");
"drop table crash_q $drop_attr");
if ($limits{'func_odbc_left'} eq 'yes' ||
$limits{'func_odbc_substring'} eq 'yes')
@ -1105,7 +1148,7 @@ if ($limits{'func_odbc_left'} eq 'yes' ||
["create table crash_me2 (a date not null)",
"insert into crash_me2 values ('1998-03-03')"],
"select $type from crash_me2",
["drop table crash_me2"],
["drop table crash_me2 $drop_attr"],
"1998",1);
}
@ -1125,7 +1168,7 @@ if (!defined($limits{'multi_table_update'}))
"insert into crash_q values(1,'c')",
"update crash_q left join crash_me on crash_q.a=crash_me.a set crash_q.b=crash_me.b"],
"select b from crash_q",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
"a",1,undef(),2))
{
check_and_report("Update with many tables","multi_table_update",
@ -1133,7 +1176,7 @@ if (!defined($limits{'multi_table_update'}))
"insert into crash_q values(1,'c')",
"update crash_q,crash_me set crash_q.b=crash_me.b where crash_q.a=crash_me.a"],
"select b from crash_q",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
"a",1,
1);
}
@ -1143,14 +1186,14 @@ report("DELETE FROM table1,table2...","multi_table_delete",
"create table crash_q (a integer,b char(10))",
"insert into crash_q values(1,'c')",
"delete crash_q.* from crash_q,crash_me where crash_q.a=crash_me.a",
"drop table crash_q");
"drop table crash_q $drop_attr");
check_and_report("Update with sub select","select_table_update",
["create table crash_q (a integer,b char(10))",
"insert into crash_q values(1,'c')",
"update crash_q set b= (select b from crash_me where crash_q.a = crash_me.a)"],
"select b from crash_q",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
"a",1);
check_and_report("Calculate 1--1","minus_neg",[],
@ -1178,7 +1221,7 @@ if (defined($found))
new query_many(["create table crash_q (q $found)",
"insert into crash_q values ('%s')"],
"select * from crash_q","%s",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
min($max_string_size,$limits{'query_size'}-30)));
}
@ -1187,8 +1230,8 @@ if (defined($found))
# what can be stored...
find_limit(($prompt="constant string size in where"),"where_string_size",
new query_repeat([],"select a from crash_me where b <'",
"","","a","","'"));
new query_repeat([],"select a from crash_me where b >='",
"","","1","","'"));
if ($limits{'where_string_size'} == 10)
{
save_config_data('where_string_size','nonstandard',$prompt);
@ -1251,11 +1294,11 @@ find_limit("tables in join", "join_tables",
report("primary key in create table",'primary_key_in_create',
"create table crash_q (q integer not null,primary key (q))",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("unique in create table",'unique_in_create',
"create table crash_q (q integer not null,unique (q))",
"drop table crash_q");
"drop table crash_q $drop_attr");
if ($limits{'unique_in_create'} eq 'yes')
{
@ -1264,29 +1307,29 @@ if ($limits{'unique_in_create'} eq 'yes')
"insert into crash_q (q) values (NULL)",
"insert into crash_q (q) values (NULL)",
"insert into crash_q (q) values (1)",
"drop table crash_q");
"drop table crash_q $drop_attr");
}
report("default value for column",'create_default',
"create table crash_q (q integer default 10 not null)",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("default value function for column",'create_default_func',
"create table crash_q (q integer not null,q1 integer default (1+1)",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("temporary tables",'tempoary_table',
"create temporary table crash_q (q integer not null)",
"drop table crash_q");
"drop table crash_q $drop_attr");
report_one("create table from select",'create_table_select',
[["create table crash_q SELECT * from crash_me","yes"],
["create table crash_q AS SELECT * from crash_me","with AS"]]);
$dbh->do("drop table crash_q");
$dbh->do("drop table crash_q $drop_attr");
report("index in create table",'index_in_create',
"create table crash_q (q integer not null,index (q))",
"drop table crash_q");
"drop table crash_q $drop_attr");
# The following must be executed as we need the value of end_drop_keyword
# later
@ -1345,7 +1388,7 @@ check_and_report("null in index","null_in_index",
[create_table("crash_q",["a char(10)"],["(a)"]),
"insert into crash_q values (NULL)"],
"select * from crash_q",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
undef(),4);
if ($limits{'unique_in_create'} eq 'yes')
@ -1354,7 +1397,12 @@ if ($limits{'unique_in_create'} eq 'yes')
create_table("crash_q",["q integer"],["unique(q)"]),
"insert into crash_q (q) values(NULL)",
"insert into crash_q (q) values(NULL)",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("null combination in unique index",'nulls_in_unique',
create_table("crash_q",["q integer,q1 integer"],["unique(q)"]),
"insert into crash_q (q,q1) values(1,NULL)",
"insert into crash_q (q,q1) values(1,NULL)",
"drop table crash_q $drop_attr");
}
if ($limits{'null_in_unique'} eq 'yes')
@ -1363,7 +1411,7 @@ if ($limits{'null_in_unique'} eq 'yes')
create_table("crash_q",["q integer, x integer"],["unique(q)"]),
"insert into crash_q(x) values(1)",
"insert into crash_q(x) values(2)",
"drop table crash_q");
"drop table crash_q $drop_attr");
}
if ($limits{'create_index'} ne 'no')
@ -1385,29 +1433,29 @@ if ($limits{'create_index'} ne 'no')
if (!report("case independent table names","table_name_case",
"create table crash_q (q integer)",
"drop table CRASH_Q"))
"drop table CRASH_Q $drop_attr"))
{
safe_query("drop table crash_q");
safe_query("drop table crash_q $drop_attr");
}
if (!report("drop table if exists","drop_if_exists",
"create table crash_q (q integer)",
"drop table if exists crash_q"))
"drop table if exists crash_q $drop_attr"))
{
safe_query("drop table crash_q");
safe_query("drop table crash_q $drop_attr");
}
report("create table if not exists","create_if_not_exists",
"create table crash_q (q integer)",
"create table if not exists crash_q (q integer)");
safe_query("drop table crash_q");
safe_query("drop table crash_q $drop_attr");
#
# test of different join types
#
assert("create table crash_me2 (a integer not null,b char(10) not null, c integer)");
assert("insert into crash_me2 (a,b,c) values (1,'b',1)");
assert("create table crash_me2 (a integer not null,b char(10) not null, c1 integer)");
assert("insert into crash_me2 (a,b,c1) values (1,'b',1)");
assert("create table crash_me3 (a integer not null,b char(10) not null)");
assert("insert into crash_me3 (a,b) values (1,'b')");
@ -1416,9 +1464,9 @@ report("inner join","inner_join",
report("left outer join","left_outer_join",
"select crash_me.a from crash_me left join crash_me2 ON crash_me.a=crash_me2.a");
report("natural left outer join","natural_left_outer_join",
"select c from crash_me natural left join crash_me2");
"select c1 from crash_me natural left join crash_me2");
report("left outer join using","left_outer_join_using",
"select c from crash_me left join crash_me2 using (a)");
"select c1 from crash_me left join crash_me2 using (a)");
report("left outer join odbc style","odbc_left_outer_join",
"select crash_me.a from { oj crash_me left outer join crash_me2 ON crash_me.a=crash_me2.a }");
report("right outer join","right_outer_join",
@ -1449,7 +1497,7 @@ report("minus","minus",
"select * from crash_me minus select * from crash_me3"); # oracle ...
report("natural join (incompatible lists)","natural_join_incompat",
"select c from crash_me natural join crash_me2");
"select c1 from crash_me natural join crash_me2");
report("union (incompatible lists)","union_incompat",
"select * from crash_me union select a,b from crash_me2");
report("union all (incompatible lists)","union_all_incompat",
@ -1469,8 +1517,8 @@ report("except all (incompatible lists)","except_all_incompat",
report("minus (incompatible lists)","minus_incompat",
"select * from crash_me minus select * from crash_me2"); # oracle ...
assert("drop table crash_me2");
assert("drop table crash_me3");
assert("drop table crash_me2 $drop_attr");
assert("drop table crash_me3 $drop_attr");
# some things to be added here ...
# FOR UNION - INTERSECT - EXCEPT -> CORRESPONDING [ BY ]
@ -1489,13 +1537,13 @@ if (report("subqueries","subqueries",
report("insert INTO ... SELECT ...","insert_select",
"create table crash_q (a int)",
"insert into crash_q (a) SELECT crash_me.a from crash_me",
"drop table crash_q");
"drop table crash_q $drop_attr");
report_trans("transactions","transactions",
[create_table("crash_q",["a integer not null"],[]),
"insert into crash_q values (1)"],
"select * from crash_q",
"drop table crash_q"
"drop table crash_q $drop_attr"
);
report("atomic updates","atomic_updates",
@ -1504,7 +1552,7 @@ report("atomic updates","atomic_updates",
"insert into crash_q values (3)",
"insert into crash_q values (1)",
"update crash_q set a=a+1",
"drop table crash_q");
"drop table crash_q $drop_attr");
if ($limits{'atomic_updates'} eq 'yes')
{
@ -1515,14 +1563,14 @@ if ($limits{'atomic_updates'} eq 'yes')
"insert into crash_q values (3)",
"insert into crash_q values (1)",
"update crash_q set a=a+1 where a < 3",
"drop table crash_q");
"drop table crash_q $drop_attr");
}
# To add for views:
# DROP VIEW - CREATE VIEW *** [ WITH [ CASCADE | LOCAL ] CHECK OPTION ]
report("views","views",
"create view crash_q as select a from crash_me",
"drop view crash_q");
"drop view crash_q $drop_attr");
report("foreign key syntax","foreign_key_syntax",
create_table("crash_q",["a integer not null"],["primary key (a)"]),
@ -1531,8 +1579,8 @@ report("foreign key syntax","foreign_key_syntax",
[]),
"insert into crash_q values (1)",
"insert into crash_q2 values (1)",
"drop table crash_q2",
"drop table crash_q");
"drop table crash_q2 $drop_attr",
"drop table crash_q $drop_attr");
if ($limits{'foreign_key_syntax'} eq 'yes')
{
@ -1544,8 +1592,8 @@ if ($limits{'foreign_key_syntax'} eq 'yes')
[]),
"insert into crash_q values (1)",
"insert into crash_q2 values (2)",
"drop table crash_q2",
"drop table crash_q");
"drop table crash_q2 $drop_attr",
"drop table crash_q $drop_attr");
}
report("Create SCHEMA","create_schema",
@ -1564,40 +1612,40 @@ if ($limits{'foreign_key'} eq 'yes')
report("Column constraints","constraint_check",
"create table crash_q (a int check (a>0))",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("Table constraints","constraint_check_table",
"create table crash_q (a int ,b int, check (a>b))",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("Named constraints","constraint_check",
"create table crash_q (a int ,b int, constraint abc check (a>b))",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("NULL constraint (SyBase style)","constraint_null",
"create table crash_q (a int null)",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("Triggers (ANSI SQL)","psm_trigger",
"create table crash_q (a int ,b int)",
"create trigger crash_trigger after insert on crash_q referencing new table as new_a when (localtime > time '18:00:00') begin atomic end",
"insert into crash_q values(1,2)",
"drop trigger crash_trigger",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("PSM procedures (ANSI SQL)","psm_procedures",
"create table crash_q (a int,b int)",
"create procedure crash_proc(in a1 int, in b1 int) language sql modifies sql data begin declare c1 int; set c1 = a1 + b1; insert into crash_q(a,b) values (a1,c1); end",
"call crash_proc(1,10)",
"drop procedure crash_proc",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("PSM modules (ANSI SQL)","psm_modules",
"create table crash_q (a int,b int)",
"create module crash_m declare procedure crash_proc(in a1 int, in b1 int) language sql modifies sql data begin declare c1 int; set c1 = a1 + b1; insert into crash_q(a,b) values (a1,c1); end; declare procedure crash_proc2(INOUT a int, in b int) contains sql set a = b + 10; end module",
"call crash_proc(1,10)",
"drop module crash_m cascade",
"drop table crash_q cascade");
"drop table crash_q cascade $drop_attr");
report("PSM functions (ANSI SQL)","psm_functions",
"create table crash_q (a int)",
@ -1605,14 +1653,14 @@ report("PSM functions (ANSI SQL)","psm_functions",
"insert into crash_q values(crash_func(2,4))",
"select a,crash_func(a,2) from crash_q",
"drop function crash_func cascade",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("Domains (ANSI SQL)","domains",
"create domain crash_d as varchar(10) default 'Empty' check (value <> 'abcd')",
"create table crash_q(a crash_d, b int)",
"insert into crash_q(a,b) values('xyz',10)",
"insert into crash_q(b) values(10)",
"drop table crash_q",
"drop table crash_q $drop_attr",
"drop domain crash_d");
@ -1632,10 +1680,17 @@ if (!defined($limits{'lock_tables'}))
if (!report("many tables to drop table","multi_drop",
"create table crash_q (a int)",
"create table crash_q2 (a int)",
"drop table crash_q,crash_q2"))
"drop table crash_q,crash_q2 $drop_attr"))
{
$dbh->do("drop table crash_q");
$dbh->do("drop table crash_q2");
$dbh->do("drop table crash_q $drop_attr");
$dbh->do("drop table crash_q2 $drop_attr");
}
if (!report("drop table with cascade/restrict","drop_restrict",
"create table crash_q (a int)",
"drop table crash_q restrict"))
{
$dbh->do("drop table crash_q $drop_attr");
}
@ -1656,13 +1711,13 @@ report("/* */ as comment","comment_/**/",
report("insert empty string","insert_empty_string",
create_table("crash_q",["a char(10) not null,b char(10)"],[]),
"insert into crash_q values ('','')",
"drop table crash_q");
"drop table crash_q $drop_attr");
report("Having with alias","having_with_alias",
create_table("crash_q",["a integer"],[]),
"insert into crash_q values (10)",
"select sum(a) as b from crash_q group by a having b > 0",
"drop table crash_q");
"drop table crash_q $drop_attr");
#
# test name limits
@ -1672,14 +1727,14 @@ find_limit("table name length","max_table_name",
new query_many(["create table crash_q%s (q integer)",
"insert into crash_q%s values(1)"],
"select * from crash_q%s",1,
["drop table crash_q%s"],
["drop table crash_q%s $drop_attr"],
$max_name_length,7,1));
find_limit("column name length","max_column_name",
new query_many(["create table crash_q (q%s integer)",
"insert into crash_q (q%s) values(1)"],
"select q%s from crash_q",1,
["drop table crash_q"],
["drop table crash_q $drop_attr"],
$max_name_length,1));
if ($limits{'column_alias'} eq 'yes')
@ -1714,7 +1769,7 @@ find_limit("max char() size","max_char_size",
new query_many(["create table crash_q (q char(%d))",
"insert into crash_q values ('%s')"],
"select * from crash_q","%s",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
min($max_string_size,$limits{'query_size'})));
if ($limits{'type_sql_varchar(1_arg)'} eq 'yes')
@ -1723,7 +1778,7 @@ if ($limits{'type_sql_varchar(1_arg)'} eq 'yes')
new query_many(["create table crash_q (q varchar(%d))",
"insert into crash_q values ('%s')"],
"select * from crash_q","%s",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
min($max_string_size,$limits{'query_size'})));
}
@ -1743,13 +1798,13 @@ if (defined($found))
new query_many(["create table crash_q (q $found)",
"insert into crash_q values ('%s')"],
"select * from crash_q","%s",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
min($max_string_size,$limits{'query_size'}-30)));
}
$tmp=new query_repeat([],"create table crash_q (a integer","","",
",a%d integer","",")",["drop table crash_q"],
",a%d integer","",")",["drop table crash_q $drop_attr"],
$max_columns);
$tmp->{'offset'}=1;
find_limit("Columns in table","max_columns",$tmp);
@ -1773,7 +1828,7 @@ if ($limits{'unique_in_create'} eq 'yes')
",q%d integer not null,unique (q%d)",")",
["insert into crash_q (q,%f) values (1,%v)"],
"select q from crash_q",1,
"drop table crash_q",
"drop table crash_q $drop_attr",
$max_keys,0));
find_limit("index parts","max_index_parts",
@ -1781,14 +1836,14 @@ if ($limits{'unique_in_create'} eq 'yes')
",q%d","))",
["insert into crash_q ($key_fields) values ($key_values)"],
"select q0 from crash_q",1,
"drop table crash_q",
"drop table crash_q $drop_attr",
$max_keys,1));
find_limit("max index part length","max_index_part_length",
new query_many(["create table crash_q (q char(%d) not null,unique(q))",
"insert into crash_q (q) values ('%s')"],
"select q from crash_q","%s",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
$limits{'max_char_size'},0));
if ($limits{'type_sql_varchar(1_arg)'} eq 'yes')
@ -1797,7 +1852,7 @@ if ($limits{'unique_in_create'} eq 'yes')
new query_many(["create table crash_q (q varchar(%d) not null,unique(q))",
"insert into crash_q (q) values ('%s')"],
"select q from crash_q","%s",
["drop table crash_q"],
["drop table crash_q $drop_attr"],
$limits{'max_varchar_size'},0));
}
}
@ -1829,7 +1884,7 @@ if ($limits{'create_index'} ne 'no')
$end_drop =~ s/%t/crash_q/;
assert($end_drop);
}
assert("drop table crash_q");
assert("drop table crash_q $drop_attr");
}
print "indexs: $limits{'max_index'}\n";
if (!defined($limits{'max_unique_index'}))
@ -1848,7 +1903,7 @@ if ($limits{'create_index'} ne 'no')
$end_drop =~ s/%t/crash_q/;
assert($end_drop);
}
assert("drop table crash_q");
assert("drop table crash_q $drop_attr");
}
print "unique indexes: $limits{'max_unique_index'}\n";
if (!defined($limits{'max_index_parts'}))
@ -1864,7 +1919,7 @@ if ($limits{'create_index'} ne 'no')
undef,undef,
$end_drop,
$max_keys,1));
assert("drop table crash_q");
assert("drop table crash_q $drop_attr");
}
else
{
@ -1881,20 +1936,20 @@ if ($limits{'create_index'} ne 'no')
"select q from crash_q",
"%s",
[ $end_drop,
"drop table crash_q"],
"drop table crash_q $drop_attr"],
min($limits{'max_char_size'},"+8192")));
}
}
find_limit("index length","max_index_length",
new query_index_length("create table crash_q ",
"drop table crash_q",
"drop table crash_q $drop_attr",
$max_key_length));
find_limit("max table row length (without blobs)","max_row_length",
new query_row_length("crash_q ",
"not null",
"drop table crash_q",
"drop table crash_q $drop_attr",
min($max_row_length,
$limits{'max_columns'}*
min($limits{'max_char_size'},255))));
@ -1903,7 +1958,7 @@ find_limit("table row length with nulls (without blobs)",
"max_row_length_with_null",
new query_row_length("crash_q ",
"",
"drop table crash_q",
"drop table crash_q $drop_attr",
$limits{'max_row_length'}*2));
find_limit("number of columns in order by","columns_in_order_by",
@ -1912,7 +1967,7 @@ find_limit("number of columns in order by","columns_in_order_by",
"insert into crash_q values(%v)"],
"select * from crash_q order by %f",
undef(),
["drop table crash_q"],
["drop table crash_q $drop_attr"],
$max_order_by));
find_limit("number of columns in group by","columns_in_group_by",
@ -1921,14 +1976,14 @@ find_limit("number of columns in group by","columns_in_group_by",
"insert into crash_q values(%v)"],
"select %f from crash_q group by %f",
undef(),
["drop table crash_q"],
["drop table crash_q $drop_attr"],
$max_order_by));
#
# End of test
#
$dbh->do("drop table crash_me"); # Remove temporary table
$dbh->do("drop table crash_me $drop_attr"); # Remove temporary table
print "crash-me safe: $limits{'crash_me_safe'}\n";
print "reconnected $reconnect_count times\n";
@ -1971,6 +2026,11 @@ $0 takes the following options:
--comment='some comment'
Add this comment to the crash-me limit file
--check-server
Make a new connection to the server every time crash-me checks whether the
server is alive. This can help in cases where the server starts returning
wrong data because of an earlier select.
--database='database' (Default $opt_database)
Create test tables in this database.
@ -2014,7 +2074,7 @@ $0 takes the following options:
User name to log into the SQL server.
--start-cmd='command to restart server'
Automaticly restarts server with this command if the server dies.
Automatically restarts the server with this command if the database server dies.
--sleep='time in seconds' (Default $opt_sleep)
Wait this long before restarting the server.
@ -2213,6 +2273,13 @@ sub check_connect
my ($sth);
print "Checking connection\n" if ($opt_log_all_queries);
# The following line will not work properly with interbase
if ($opt_check_server && defined($check_connect) && $dbh->{AutoCommit} != 0)
{
$dbh->disconnect;
$dbh=safe_connect($object);
return;
}
return if (defined($check_connect) && defined($dbh->do($check_connect)));
$dbh->disconnect || warn $dbh->errstr;
print "\nreconnecting\n" if ($opt_debug);


@ -0,0 +1,516 @@
#This file is automatically generated by crash-me 1.52
NEG=yes # update of column= -column
Need_cast_for_null=no # Need to cast NULL for arithmetic
alter_add_col=yes # Alter table add column
alter_add_constraint=yes # Alter table add constraint
alter_add_foreign_key=yes # Alter table add foreign key
alter_add_multi_col=no # Alter table add many columns
alter_add_primary_key=with constraint # Alter table add primary key
alter_add_unique=yes # Alter table add unique
alter_alter_col=yes # Alter table alter column default
alter_change_col=no # Alter table change column
alter_drop_col=with restrict/cascade # Alter table drop column
alter_drop_constraint=with restrict/cascade # Alter table drop constraint
alter_drop_foreign_key=with drop constraint and restrict/cascade # Alter table drop foreign key
alter_drop_primary_key=drop constraint # Alter table drop primary key
alter_drop_unique=with constraint and restrict/cascade # Alter table drop unique
alter_modify_col=no # Alter table modify column
alter_rename_table=no # Alter table rename table
atomic_updates=no # atomic updates
automatic_rowid=no # Automatic rowid
binary_numbers=yes # binary numbers (0b1001)
binary_strings=yes # binary strings (b'0110')
case_insensitive_strings=no # case insensitive compare
char_is_space_filled=no # char are space filled
column_alias=yes # Column alias
columns_in_group_by=+64 # number of columns in group by
columns_in_order_by=+64 # number of columns in order by
comment_#=no # # as comment
comment_--=no # -- as comment (ANSI)
comment_/**/=no # /* */ as comment
comment_//=no # // as comment (ANSI)
compute=no # Compute
connections=47 # Simultaneous connections (installation default)
constraint_check=yes # Column constraints
constraint_check_table=yes # Table constraints
constraint_null=no # NULL constraint (SyBase style)
crash_me_safe=no # crash me safe
crash_me_version=1.52 # crash me version
create_default=yes # default value for column
create_default_func=no # default value function for column
create_if_not_exists=no # create table if not exists
create_index=yes # create index
create_schema=yes # Create SCHEMA
create_table_select=no # create table from select
cross_join=yes # cross join (same as from a,b)
date_infinity=no # Supports 'infinity' dates
date_last=yes # Supports 9999-12-31 dates
date_one=yes # Supports 0001-01-01 dates
date_with_YY=no # Supports YY-MM-DD dates
date_zero=no # Supports 0000-00-00 dates
domains=no # Domains (ANSI SQL)
double_quotes=yes # Double '' as ' in strings
drop_if_exists=no # drop table if exists
drop_index=yes # drop index
drop_requires_cascade=yes # drop table require cascade/restrict
drop_restrict=yes # drop table with cascade/restrict
end_colon=yes # allows end ';'
except=yes # except
except_all=yes # except all
except_all_incompat=no # except all (incompatible lists)
except_incompat=no # except (incompatible lists)
float_int_expr=yes # mixing of integer and float in expression
foreign_key=yes # foreign keys
foreign_key_circular=yes # Circular foreign keys
foreign_key_syntax=yes # foreign key syntax
full_outer_join=yes # full outer join
func_extra_!=no # Function NOT as '!' in SELECT
func_extra_%=no # Function MOD as %
func_extra_&=yes # Function & (bitwise and)
func_extra_&&=no # Function AND as '&&'
func_extra_<>=no # Function <> in SELECT
func_extra_==no # Function =
func_extra_add_months=no # Function ADD_MONTHS
func_extra_and_or=no # Function AND and OR in SELECT
func_extra_ascii_char=no # Function ASCII_CHAR
func_extra_ascii_code=no # Function ASCII_CODE
func_extra_atn2=no # Function ATN2
func_extra_auto_num2string=no # Function automatic num->string convert
func_extra_auto_string2num=no # Function automatic string->num convert
func_extra_between=no # Function BETWEEN in SELECT
func_extra_binary_shifts=no # Function << and >> (bitwise shifts)
func_extra_bit_count=no # Function BIT_COUNT
func_extra_ceil=no # Function CEIL
func_extra_charindex=no # Function CHARINDEX
func_extra_chr=no # Function CHR
func_extra_concat_as_+=no # Function concatenation with +
func_extra_concat_list=no # Function CONCAT(list)
func_extra_convert=no # Function CONVERT
func_extra_cosh=no # Function COSH
func_extra_date_format=no # Function DATE_FORMAT
func_extra_dateadd=no # Function DATEADD
func_extra_datediff=no # Function DATEDIFF
func_extra_datename=no # Function DATENAME
func_extra_datepart=no # Function DATEPART
func_extra_elt=no # Function ELT
func_extra_encrypt=no # Function ENCRYPT
func_extra_field=no # Function FIELD
func_extra_format=no # Function FORMAT
func_extra_from_days=no # Function FROM_DAYS
func_extra_from_unixtime=no # Function FROM_UNIXTIME
func_extra_getdate=no # Function GETDATE
func_extra_greatest=no # Function GREATEST
func_extra_if=no # Function IF
func_extra_in_num=no # Function IN on numbers in SELECT
func_extra_in_str=no # Function IN on strings in SELECT
func_extra_initcap=no # Function INITCAP
func_extra_instr=no # Function LOCATE as INSTR
func_extra_instr_oracle=no # Function INSTR (Oracle syntax)
func_extra_instrb=no # Function INSTRB
func_extra_interval=no # Function INTERVAL
func_extra_last_day=no # Function LAST_DAY
func_extra_last_insert_id=no # Function LAST_INSERT_ID
func_extra_least=no # Function LEAST
func_extra_lengthb=no # Function LENGTHB
func_extra_like=no # Function LIKE in SELECT
func_extra_like_escape=no # Function LIKE ESCAPE in SELECT
func_extra_ln=no # Function LN
func_extra_log(m_n)=no # Function LOG(m,n)
func_extra_logn=no # Function LOGN
func_extra_lpad=no # Function LPAD
func_extra_mdy=no # Function MDY
func_extra_mid=no # Function SUBSTRING as MID
func_extra_months_between=no # Function MONTHS_BETWEEN
func_extra_not=no # Function NOT in SELECT
func_extra_not_between=no # Function NOT BETWEEN in SELECT
func_extra_not_like=no # Function NOT LIKE in SELECT
func_extra_odbc_convert=no # Function ODBC CONVERT
func_extra_password=no # Function PASSWORD
func_extra_paste=no # Function PASTE
func_extra_patindex=no # Function PATINDEX
func_extra_period_add=no # Function PERIOD_ADD
func_extra_period_diff=no # Function PERIOD_DIFF
func_extra_pow=no # Function POW
func_extra_range=no # Function RANGE
func_extra_regexp=no # Function REGEXP in SELECT
func_extra_replicate=no # Function REPLICATE
func_extra_reverse=no # Function REVERSE
func_extra_root=no # Function ROOT
func_extra_round1=no # Function ROUND(1 arg)
func_extra_rpad=no # Function RPAD
func_extra_sec_to_time=no # Function SEC_TO_TIME
func_extra_sinh=no # Function SINH
func_extra_str=no # Function STR
func_extra_strcmp=no # Function STRCMP
func_extra_stuff=no # Function STUFF
func_extra_substrb=no # Function SUBSTRB
func_extra_substring_index=no # Function SUBSTRING_INDEX
func_extra_sysdate=no # Function SYSDATE
func_extra_tail=no # Function TAIL
func_extra_tanh=no # Function TANH
func_extra_time_to_sec=no # Function TIME_TO_SEC
func_extra_to_days=no # Function TO_DAYS
func_extra_translate=no # Function TRANSLATE
func_extra_trim_many_char=no # Function TRIM; Many char extension
func_extra_trim_substring=no # Function TRIM; Substring extension
func_extra_trunc=no # Function TRUNC
func_extra_uid=no # Function UID
func_extra_unix_timestamp=no # Function UNIX_TIMESTAMP
func_extra_userenv=no # Function USERENV
func_extra_version=no # Function VERSION
func_extra_weekday=no # Function WEEKDAY
func_extra_|=no # Function | (bitwise or)
func_extra_||=no # Function OR as '||'
func_extra_~*=no # Function ~* (case insensitive compare)
func_odbc_abs=no # Function ABS
func_odbc_acos=no # Function ACOS
func_odbc_ascii=no # Function ASCII
func_odbc_asin=no # Function ASIN
func_odbc_atan=no # Function ATAN
func_odbc_atan2=no # Function ATAN2
func_odbc_ceiling=no # Function CEILING
func_odbc_char=no # Function CHAR
func_odbc_concat=no # Function CONCAT(2 arg)
func_odbc_cos=no # Function COS
func_odbc_cot=no # Function COT
func_odbc_curdate=no # Function CURDATE
func_odbc_curtime=no # Function CURTIME
func_odbc_database=no # Function DATABASE
func_odbc_dayname=no # Function DAYNAME
func_odbc_dayofmonth=no # Function DAYOFMONTH
func_odbc_dayofweek=no # Function DAYOFWEEK
func_odbc_dayofyear=no # Function DAYOFYEAR
func_odbc_degrees=no # Function DEGREES
func_odbc_difference=no # Function DIFFERENCE()
func_odbc_exp=no # Function EXP
func_odbc_floor=no # Function FLOOR
func_odbc_fn_left=no # Function ODBC syntax LEFT & RIGHT
func_odbc_hour=no # Function HOUR
func_odbc_hour_time=no # Function ANSI HOUR
func_odbc_ifnull=no # Function IFNULL
func_odbc_insert=no # Function INSERT
func_odbc_lcase=no # Function LCASE
func_odbc_left=no # Function LEFT
func_odbc_length=no # Function REAL LENGTH
func_odbc_length_without_space=no # Function ODBC LENGTH
func_odbc_locate_2=no # Function LOCATE(2 arg)
func_odbc_locate_3=no # Function LOCATE(3 arg)
func_odbc_log=no # Function LOG
func_odbc_log10=no # Function LOG10
func_odbc_ltrim=no # Function LTRIM
func_odbc_minute=no # Function MINUTE
func_odbc_mod=no # Function MOD
func_odbc_month=no # Function MONTH
func_odbc_monthname=no # Function MONTHNAME
func_odbc_now=no # Function NOW
func_odbc_pi=no # Function PI
func_odbc_power=no # Function POWER
func_odbc_quarter=no # Function QUARTER
func_odbc_radians=no # Function RADIANS
func_odbc_rand=no # Function RAND
func_odbc_repeat=no # Function REPEAT
func_odbc_replace=no # Function REPLACE
func_odbc_right=no # Function RIGHT
func_odbc_round=no # Function ROUND(2 arg)
func_odbc_rtrim=no # Function RTRIM
func_odbc_second=no # Function SECOND
func_odbc_sign=no # Function SIGN
func_odbc_sin=no # Function SIN
func_odbc_soundex=no # Function SOUNDEX
func_odbc_space=no # Function SPACE
func_odbc_sqrt=no # Function SQRT
func_odbc_substring=no # Function ODBC SUBSTRING
func_odbc_tan=no # Function TAN
func_odbc_timestampadd=no # Function TIMESTAMPADD
func_odbc_timestampdiff=no # Function TIMESTAMPDIFF
func_odbc_truncate=no # Function TRUNCATE
func_odbc_ucase=no # Function UCASE
func_odbc_user()=no # Function USER()
func_odbc_week=no # Function WEEK
func_odbc_year=no # Function YEAR
func_sql_+=yes # Function +, -, * and /
func_sql_bit_length=yes # Function BIT_LENGTH
func_sql_cast=yes # Function CAST
func_sql_char_length=error # Function CHAR_LENGTH
func_sql_char_length(constant)=yes # Function CHAR_LENGTH(constant)
func_sql_character_length=yes # Function CHARACTER_LENGTH
func_sql_coalesce=no # Function COALESCE
func_sql_concat_as_||=yes # Function concatenation with ||
func_sql_current_date=yes # Function CURRENT_DATE
func_sql_current_time=yes # Function CURRENT_TIME
func_sql_current_timestamp=yes # Function CURRENT_TIMESTAMP
func_sql_current_user=yes # Function CURRENT_USER
func_sql_extract_sql=no # Function EXTRACT
func_sql_localtime=no # Function LOCALTIME
func_sql_localtimestamp=no # Function LOCALTIMESTAMP
func_sql_lower=yes # Function LOWER
func_sql_nullif_num=yes # Function NULLIF with numbers
func_sql_nullif_string=yes # Function NULLIF with strings
func_sql_octet_length=yes # Function OCTET_LENGTH
func_sql_position=yes # Function POSITION
func_sql_searched_case=no # Function searched CASE
func_sql_session_user=no # Function SESSION_USER
func_sql_simple_case=no # Function simple CASE
func_sql_substring=yes # Function ANSI SQL SUBSTRING
func_sql_system_user=yes # Function SYSTEM_USER
func_sql_trim=yes # Function TRIM
func_sql_upper=yes # Function UPPER
func_sql_user=yes # Function USER
func_where_between=yes # Function BETWEEN
func_where_eq_all=yes # Function = ALL
func_where_eq_any=yes # Function = ANY
func_where_eq_some=yes # Function = SOME
func_where_exists=yes # Function EXISTS
func_where_in_num=yes # Function IN on numbers
func_where_like=yes # Function LIKE
func_where_like_escape=yes # Function LIKE ESCAPE
func_where_match=no # Function MATCH
func_where_match_unique=no # Function MATCH UNIQUE
func_where_matches=no # Function MATCHES
func_where_not_between=yes # Function NOT BETWEEN
func_where_not_exists=yes # Function NOT EXISTS
func_where_not_like=yes # Function NOT LIKE
func_where_not_unique=no # Function NOT UNIQUE
func_where_unique=yes # Function UNIQUE
functions=yes # Functions
group_by=yes # Group by
group_by_alias=no # Group by alias
group_by_null=yes # group on column with null values
group_by_position=no # Group by position
group_distinct_functions=yes # Group functions with distinct
group_func_extra_bit_and=no # Group function BIT_AND
group_func_extra_bit_or=no # Group function BIT_OR
group_func_extra_count_distinct_list=no # Group function COUNT(DISTINCT expr,expr,...)
group_func_extra_std=no # Group function STD
group_func_extra_stddev=no # Group function STDDEV
group_func_extra_variance=no # Group function VARIANCE
group_func_sql_any=no # Group function ANY
group_func_sql_avg=yes # Group function AVG
group_func_sql_count_*=yes # Group function COUNT (*)
group_func_sql_count_column=yes # Group function COUNT column name
group_func_sql_count_distinct=no # Group function COUNT(DISTINCT expr)
group_func_sql_every=no # Group function EVERY
group_func_sql_max=yes # Group function MAX on numbers
group_func_sql_max_str=yes # Group function MAX on strings
group_func_sql_min=yes # Group function MIN on numbers
group_func_sql_min_str=yes # Group function MIN on strings
group_func_sql_some=no # Group function SOME
group_func_sql_sum=yes # Group function SUM
group_functions=yes # Group functions
group_on_unused=no # Group on unused column
has_true_false=no # TRUE and FALSE
having=no # Having
having_with_alias=no # Having with alias
hex_numbers=yes # hex numbers (0x41)
hex_strings=yes # hex strings (x'1ace')
ignore_end_space=yes # ignore end space in compare
index_in_create=no # index in create table
index_namespace=yes # different namespace for index
index_parts=no # index on column part (extension)
inner_join=yes # inner join
insert_empty_string=yes # insert empty string
insert_select=yes # insert INTO ... SELECT ...
insert_with_set=no # INSERT with set syntax
intersect=no # intersect
intersect_all=no # intersect all
intersect_all_incompat=no # intersect all (incompatible lists)
intersect_incompat=no # intersect (incompatible lists)
join_tables=+64 # tables in join
left_outer_join=yes # left outer join
left_outer_join_using=yes # left outer join using
like_with_column=yes # column LIKE column
like_with_number=no # LIKE on numbers
lock_tables=no # lock table
logical_value=not supported # Value of logical operation (1=1)
max_big_expressions=10 # big expressions
max_char_size=+8000000 # max char() size
max_column_name=128 # column name length
max_columns=+8192 # Columns in table
max_conditions=5427 # OR and AND in WHERE
max_expressions=4075 # simple expressions
max_index=38 # max index
max_index_length=+8192 # index length
max_index_name=128 # index name length
max_index_part_length=+8000000 # max index part length
max_index_parts=20 # index parts
max_index_varchar_part_length=+8000000 # index varchar part length
max_row_length=377681 # max table row length (without blobs)
max_row_length_with_null=0 # table row length with nulls (without blobs)
max_select_alias_name=128 # select alias name length
max_table_alias_name=128 # table alias name length
max_table_name=128 # table name length
max_unique_index=38 # unique indexes
max_varchar_size=+8000000 # max varchar() size
minus=no # minus
minus_incompat=no # minus (incompatible lists)
minus_neg=no # Calculate 1--1
multi_drop=no # many tables to drop table
multi_strings=yes # Multiple line strings
multi_table_delete=no # DELETE FROM table1,table2...
multi_table_update=no # Update with many tables
multi_value_insert=yes # Value lists in INSERT
natural_join=no # natural join
natural_join_incompat=no # natural join (incompatible lists)
natural_left_outer_join=yes # natural left outer join
no_primary_key=yes # Tables without primary key
null_concat_expr=no # Is 'a' || NULL = NULL
null_in_index=yes # null in index
null_in_unique=no # null in unique index
null_num_expr=no # Is 1+NULL = NULL
odbc_left_outer_join=no # left outer join odbc style
operating_system=Linux 2.2.14-my-SMP i686 # crash-me tested on
order_by=yes # Order by
order_by_alias=yes # Order by alias
order_by_function=no # Order by function
order_by_position=yes # Order by position
order_by_remember_desc=no # Order by DESC is remembered
order_on_unused=no # Order by on unused column
primary_key_in_create=yes # primary key in create table
psm_functions=no # PSM functions (ANSI SQL)
psm_modules=no # PSM modules (ANSI SQL)
psm_procedures=no # PSM procedures (ANSI SQL)
psm_trigger=no # Triggers (ANSI SQL)
query_size=16777216 # query size
quote_ident_with_"=yes # " as identifier quote (ANSI SQL)
quote_ident_with_[=no # [] as identifier quote
quote_ident_with_`=no # ` as identifier quote
quote_with_"=no # Allows ' and " as string markers
recursive_subqueries=+64 # recursive subqueries
remember_end_space=yes # Remembers end space in char()
remember_end_space_varchar=yes # Remembers end space in varchar()
rename_table=no # rename table
right_outer_join=yes # right outer join
rowid=no # Type for row id
select_constants=yes # Select constants
select_limit=no # LIMIT number of rows
select_limit2=no # SELECT with LIMIT #,#
select_string_size=4199664 # constant string size in SELECT
select_table_update=yes # Update with sub select
select_without_from=no # SELECT without FROM
server_version=2.1 # server version
simple_joins=yes # ANSI SQL simple joins
storage_of_float=round # Storage of float values
subqueries=yes # subqueries
table_alias=yes # Table alias
table_name_case=yes # case independent table names
table_wildcard=yes # Select table_name.*
tempoary_table=no # temporary tables
transactions=error # transactions
truncate_table=no # truncate
type_extra_abstime=no # Type abstime
type_extra_bfile=no # Type bfile
type_extra_blob=yes # Type blob
type_extra_bool=no # Type bool
type_extra_box=no # Type box
type_extra_byte=yes # Type byte
type_extra_char(1_arg)_binary=no # Type char(1 arg) binary
type_extra_cidr=no # Type cidr
type_extra_circle=no # Type circle
type_extra_clob=yes # Type clob
type_extra_datetime=no # Type datetime
type_extra_double=no # Type double
type_extra_enum(1_arg)=no # Type enum(1 arg)
type_extra_float(2_arg)=no # Type float(2 arg)
type_extra_float4=no # Type float4
type_extra_float8=no # Type float8
type_extra_image=no # Type image
type_extra_inet=no # Type inet
type_extra_int(1_arg)_zerofill=no # Type int(1 arg) zerofill
type_extra_int1=no # Type int1
type_extra_int2=no # Type int2
type_extra_int3=no # Type int3
type_extra_int4=no # Type int4
type_extra_int8=no # Type int8
type_extra_int_auto_increment=no # Type int not null auto_increment
type_extra_int_identity=no # Type int not null identity
type_extra_int_unsigned=no # Type int unsigned
type_extra_interval=no # Type interval
type_extra_line=no # Type line
type_extra_long=no # Type long
type_extra_long_raw=no # Type long raw
type_extra_long_varbinary=no # Type long varbinary
type_extra_long_varchar(1_arg)=no # Type long varchar(1 arg)
type_extra_lseg=no # Type lseg
type_extra_macaddr=no # Type macaddr
type_extra_mediumint=no # Type mediumint
type_extra_mediumtext=no # Type mediumtext
type_extra_middleint=no # Type middleint
type_extra_mlslabel=no # Type mlslabel
type_extra_money=no # Type money
type_extra_nclob=no # Type nclob
type_extra_number=no # Type number
type_extra_number(1_arg)=no # Type number(1 arg)
type_extra_number(2_arg)=no # Type number(2 arg)
type_extra_nvarchar2(1_arg)=no # Type nvarchar2(1 arg)
type_extra_path=no # Type path
type_extra_point=no # Type point
type_extra_polygon=no # Type polygon
type_extra_raw(1_arg)=no # Type raw(1 arg)
type_extra_reltime=no # Type reltime
type_extra_rowid=no # Type rowid
type_extra_serial=no # Type serial
type_extra_set(1_arg)=no # Type set(1 arg)
type_extra_smalldatetime=no # Type smalldatetime
type_extra_smallfloat=no # Type smallfloat
type_extra_smallmoney=no # Type smallmoney
type_extra_text=no # Type text
type_extra_text(1_arg)=no # Type text(1 arg)
type_extra_timespan=no # Type timespan
type_extra_uint=no # Type uint
type_extra_varchar2(1_arg)=no # Type varchar2(1 arg)
type_extra_year=no # Type year
type_odbc_bigint=no # Type bigint
type_odbc_binary(1_arg)=no # Type binary(1 arg)
type_odbc_datetime=no # Type datetime
type_odbc_tinyint=no # Type tinyint
type_odbc_varbinary(1_arg)=no # Type varbinary(1 arg)
type_sql_bit=yes # Type bit
type_sql_bit(1_arg)=yes # Type bit(1 arg)
type_sql_bit_varying(1_arg)=yes # Type bit varying(1 arg)
type_sql_boolean=yes # Type boolean
type_sql_char(1_arg)=yes # Type char(1 arg)
type_sql_char_varying(1_arg)=yes # Type char varying(1 arg)
type_sql_character(1_arg)=yes # Type character(1 arg)
type_sql_character_varying(1_arg)=yes # Type character varying(1 arg)
type_sql_date=yes # Type date
type_sql_dec(2_arg)=yes # Type dec(2 arg)
type_sql_decimal(2_arg)=yes # Type decimal(2 arg)
type_sql_double_precision=yes # Type double precision
type_sql_float=yes # Type float
type_sql_float(1_arg)=yes # Type float(1 arg)
type_sql_int=yes # Type int
type_sql_integer=yes # Type integer
type_sql_interval_day=yes # Type interval day
type_sql_interval_day_to_hour=yes # Type interval day to hour
type_sql_interval_day_to_minute=yes # Type interval day to minute
type_sql_interval_day_to_second=yes # Type interval day to second
type_sql_interval_hour=yes # Type interval hour
type_sql_interval_hour_to_minute=yes # Type interval hour to minute
type_sql_interval_hour_to_second=yes # Type interval hour to second
type_sql_interval_minute=yes # Type interval minute
type_sql_interval_minute_to_second=yes # Type interval minute to second
type_sql_interval_month=yes # Type interval month
type_sql_interval_second=yes # Type interval second
type_sql_interval_year=yes # Type interval year
type_sql_interval_year_to_month=yes # Type interval year to month
type_sql_national_char_varying(1_arg)=yes # Type national char varying(1 arg)
type_sql_national_character(1_arg)=yes # Type national character(1 arg)
type_sql_national_character_varying(1_arg)=yes # Type national character varying(1 arg)
type_sql_nchar(1_arg)=yes # Type nchar(1 arg)
type_sql_nchar_varying(1_arg)=yes # Type nchar varying(1 arg)
type_sql_numeric(2_arg)=yes # Type numeric(2 arg)
type_sql_real=yes # Type real
type_sql_smallint=yes # Type smallint
type_sql_time=yes # Type time
type_sql_timestamp=yes # Type timestamp
type_sql_timestamp_with_time_zone=yes # Type timestamp with time zone
type_sql_varchar(1_arg)=yes # Type varchar(1 arg)
union=yes # union
union_all=yes # union all
union_all_incompat=yes # union all (incompatible lists)
union_incompat=yes # union (incompatible lists)
unique_in_create=yes # unique in create table
unique_null_in_create=no # unique null in create
views=yes # views
where_string_size=7999965 # constant string size in where
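The limits output above is plain `key=value  # comment` text, one capability per line. A minimal parsing sketch — written in Python for illustration, since crash-me's own reader is part of its Perl harness — assuming only the line format shown above:

```python
# Parse crash-me limits lines of the form "key=value  # comment".
def parse_limits(text):
    limits = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        body, _, comment = line.partition("#")
        if "=" not in body:
            continue
        key, _, value = body.partition("=")
        limits[key.strip()] = (value.strip(), comment.strip())
    return limits

sample = """
join_tables=+64   # tables in join
left_outer_join=yes  # left outer join
"""
parsed = parse_limits(sample)
```

Note the values are not only yes/no: `+64` means "at least 64" (the test stopped before hitting a limit), so a consumer should treat the value as a string, not a boolean.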

View File

@ -45,6 +45,8 @@ sub get_server
{ $server= new db_Solid($host,$database); }
elsif ($name =~ /Empress/i)
{ $server= new db_Empress($host,$database); }
elsif ($name =~ /FrontBase/i)
{ $server= new db_FrontBase($host,$database); }
elsif ($name =~ /Oracle/i)
{ $server= new db_Oracle($host,$database); }
elsif ($name =~ /Access/i)
@ -71,7 +73,7 @@ sub get_server
{ $server= new db_interbase($host,$database); }
else
{
die "Unknown sql server name used: $name\nUse one of: Access, Adabas, AdabasD, Empress, Oracle, Informix, DB2, mSQL, Mimer, MS-SQL, MySQL, Pg, Solid or Sybase.\nIf the connection is done through ODBC the name must end with _ODBC\n";
die "Unknown sql server name used: $name\nUse one of: Access, Adabas, AdabasD, Empress, FrontBase, Oracle, Informix, DB2, mSQL, Mimer, MS-SQL, MySQL, Pg, Solid or Sybase.\nIf the connection is done through ODBC the name must end with _ODBC\n";
}
if ($name =~ /_ODBC$/i || defined($odbc) && $odbc)
{
@ -91,9 +93,9 @@ sub get_server
sub all_servers
{
return ["Access", "Adabas", "DB2", "Empress", "Oracle", "Informix",
"InterBase", "Mimer", "mSQL", "MS-SQL", "MySQL", "Pg", "Solid",
"Sybase"];
return ["Access", "Adabas", "DB2", "Empress", "FrontBase", "Oracle",
"Informix", "InterBase", "Mimer", "mSQL", "MS-SQL", "MySQL", "Pg",
"Solid", "Sybase"];
}
#############################################################################
@ -117,6 +119,7 @@ sub new
$self->{'text'} = "text";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'vacuum'} = 1; # When using with --fast
$self->{'drop_attr'} = "";
$limits{'max_conditions'} = 9999; # (Actually not a limit)
$limits{'max_columns'} = 2000; # Max number of columns in table
@ -162,6 +165,9 @@ sub new
$limits{'limit'} = 1; # supports the limit attribute
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
$smds{'time'} = 1;
$smds{'q1'} = 'b'; # with time not supp by mysql ('')
@ -344,9 +350,12 @@ sub new
bless $self;
$self->{'cmp_name'} = "msql";
$self->{'data_source'} = "DBI:mSQL:$database:$main::$opt_host";
$self->{'data_source'} = "DBI:mSQL:$database:$host";
$self->{'limits'} = \%limits;
$self->{'double_quotes'} = 0;
$self->{'drop_attr'} = "";
$self->{'blob'} = "text(" . $limits{'max_text_size'} .")";
$self->{'text'} = "text(" . $limits{'max_text_size'} .")";
$limits{'max_conditions'} = 74;
$limits{'max_columns'} = 75;
@ -388,9 +397,9 @@ sub new
$limits{'column_alias'} = 0;
$limits{'NEG'} = 0;
$limits{'func_extra_in_num'} = 0;
$self->{'blob'} = "text(" . $limits{'max_text_size'} .")";
$self->{'text'} = "text(" . $limits{'max_text_size'} .")";
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
return $self;
}
@ -539,6 +548,7 @@ sub new
$self->{'blob'} = "text";
$self->{'text'} = "text";
$self->{'double_quotes'} = 1;
$self->{'drop_attr'} = "";
$self->{"vacuum"} = 1;
$limits{'join_optimizer'} = 1; # Can optimize FROM tables
$limits{'load_data_infile'} = 0; # Is this true ?
@ -579,6 +589,9 @@ sub new
$limits{'query_size'} = 16777216;
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
# the different cases per query ...
$smds{'q1'} = 'b'; # with time
@ -812,6 +825,7 @@ sub new
$self->{'blob'} = "long varchar";
$self->{'text'} = "long varchar";
$self->{'double_quotes'} = 1;
$self->{'drop_attr'} = "";
$limits{'max_conditions'} = 9999; # Probably big enough
$limits{'max_columns'} = 2000; # From crash-me
@ -853,6 +867,9 @@ sub new
$limits{'func_extra_in_num'} = 1;
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
# for the smds small benchmark test ....
# the different cases per query ...
@ -1043,12 +1060,13 @@ sub new
bless $self;
$self->{'cmp_name'} = "empress";
$self->{'data_source'} = "DBI:EmpressNet:SERVER=$main::$opt_host;Database=/usr/local/empress/rdbms/bin/$database";
$self->{'data_source'} = "DBI:EmpressNet:SERVER=$host;Database=/usr/local/empress/rdbms/bin/$database";
$self->{'limits'} = \%limits;
$self->{'smds'} = \%smds;
$self->{'blob'} = "text";
$self->{'text'} = "text";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = "";
$limits{'max_conditions'} = 1258;
$limits{'max_columns'} = 226; # server is disconnecting????
@ -1092,6 +1110,9 @@ sub new
$limits{'func_extra_in_num'} = 0;
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
# for the smds small benchmark test ....
# the different cases per query ... EMPRESS
@ -1327,6 +1348,7 @@ sub new
$self->{'blob'} = "long";
$self->{'text'} = "long";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = "";
$self->{"vacuum"} = 1;
$limits{'max_conditions'} = 9999; # (Actually not a limit)
@ -1370,6 +1392,9 @@ sub new
$limits{'func_extra_in_num'} = 1; # Has function in
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
$smds{'time'} = 1;
$smds{'q1'} = 'b'; # with time not supp by mysql ('')
@ -1578,7 +1603,8 @@ sub new
$self->{'blob'} = "byte in table";
$self->{'text'} = "byte in table";
$self->{'double_quotes'} = 0; # Can handle: 'Walker''s'
$self->{'host'} = $main::opt_host;
$self->{'drop_attr'} = "";
$self->{'host'} = $host;
$limits{'NEG'} = 1; # Supports -id
$limits{'alter_table'} = 1;
@ -1619,6 +1645,9 @@ sub new
$limits{'table_wildcard'} = 1; # Has SELECT table_name.*
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
return $self;
}
@ -1766,7 +1795,7 @@ sub new
$self->{'cmp_name'} = "access";
$self->{'data_source'} = "DBI:ODBC:$database";
if (defined($opt_host) && $opt_host ne "")
if (defined($host) && $host ne "")
{
$self->{'data_source'} .= ":$host";
}
@ -1775,6 +1804,7 @@ sub new
$self->{'blob'} = "blob";
$self->{'text'} = "blob"; # text ?
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = "";
$limits{'max_conditions'} = 97; # We get 'Query is too complex'
$limits{'max_columns'} = 255; # Max number of columns in table
@ -1817,6 +1847,9 @@ sub new
$limits{'func_extra_in_num'} = 1; # Has function in
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
return $self;
}
@ -1938,7 +1971,7 @@ sub new
$self->{'cmp_name'} = "ms-sql";
$self->{'data_source'} = "DBI:ODBC:$database";
if (defined($opt_host) && $opt_host ne "")
if (defined($host) && $host ne "")
{
$self->{'data_source'} .= ":$host";
}
@ -1947,6 +1980,7 @@ sub new
$self->{'blob'} = "text";
$self->{'text'} = "text";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = "";
$limits{'max_conditions'} = 1030; # We get 'Query is too complex'
$limits{'max_columns'} = 250; # Max number of columns in table
@ -1989,6 +2023,9 @@ sub new
$limits{'func_extra_in_num'} = 0; # Has function in
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
return $self;
}
@ -2121,7 +2158,7 @@ sub new
$self->{'cmp_name'} = "sybase";
$self->{'data_source'} = "DBI:ODBC:$database";
if (defined($opt_host) && $opt_host ne "")
if (defined($host) && $host ne "")
{
$self->{'data_source'} .= ":$host";
}
@ -2130,6 +2167,7 @@ sub new
$self->{'blob'} = "text";
$self->{'text'} = "text";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = "";
$self->{"vacuum"} = 1;
$limits{'max_conditions'} = 1030; # We get 'Query is too complex'
@ -2173,6 +2211,9 @@ sub new
$limits{'func_extra_in_num'} = 0; # Has function in
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
return $self;
}
@ -2318,6 +2359,7 @@ sub new
$self->{'blob'} = "long";
$self->{'text'} = "long";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = "";
$limits{'max_conditions'} = 50; # (Actually not a limit)
$limits{'max_columns'} = 254; # Max number of columns in table
@ -2360,6 +2402,9 @@ sub new
$limits{'func_extra_in_num'} = 1; # Has function in
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
$smds{'time'} = 1;
$smds{'q1'} = 'b'; # with time not supp by mysql ('')
@ -2519,7 +2564,7 @@ sub new
$self->{'cmp_name'} = "DB2";
$self->{'data_source'} = "DBI:ODBC:$database";
if (defined($opt_host) && $opt_host ne "")
if (defined($host) && $host ne "")
{
$self->{'data_source'} .= ":$host";
}
@ -2528,6 +2573,7 @@ sub new
$self->{'blob'} = "varchar(255)";
$self->{'text'} = "varchar(255)";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = "";
$limits{'max_conditions'} = 418; # We get 'Query is too complex'
$limits{'max_columns'} = 500; # Max number of columns in table
@ -2570,6 +2616,9 @@ sub new
$limits{'func_extra_in_num'} = 0; # Has function in
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
return $self;
}
@ -2693,6 +2742,7 @@ sub new
$self->{'blob'} = "binary varying(15000)";
$self->{'text'} = "character varying(15000)";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = "";
$self->{'char_null'} = "cast(NULL as char(1))";
$self->{'numeric_null'} = "cast(NULL as int)";
@ -2739,6 +2789,9 @@ sub new
$limits{'limit'} = 0; # Does not support the limit attribute
$limits{'unique_index'} = 1; # Unique index works or not
$limits{'insert_select'} = 1;
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
$smds{'time'} = 1;
$smds{'q1'} = 'b'; # with time not supp by mysql ('')
@ -2889,6 +2942,7 @@ sub new
$self->{'blob'} = "blob";
$self->{'text'} = "";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = "";
$self->{'char_null'} = "";
$self->{'numeric_null'} = "";
@ -2933,6 +2987,9 @@ sub new
$limits{'NEG'} = 0; # Supports -id
$limits{'func_extra_in_num'} = 0; # Has function in
$limits{'limit'} = 0; # Does not support the limit attribute
$limits{'working_blobs'} = 1; # If big varchar/blobs works
$limits{'order_by_unused'} = 1;
$limits{'working_all_fields'} = 1;
$smds{'time'} = 1;
$smds{'q1'} = 'b'; # with time not supp by mysql ('')
@ -3077,4 +3134,208 @@ sub reconnect_on_errors
return 1;
}
#############################################################################
# Configuration for FrontBase
#############################################################################
package db_FrontBase;
sub new
{
my ($type,$host,$database)= @_;
my $self= {};
my (%limits,%smds);
bless $self;
$self->{'cmp_name'} = "FrontBase";
$self->{'data_source'} = "DBI:FB:dbname=$database;host=$host";
$self->{'limits'} = \%limits;
$self->{'smds'} = \%smds;
$self->{'blob'} = "varchar(8000000)";
$self->{'text'} = "varchar(8000000)";
$self->{'double_quotes'} = 1; # Can handle: 'Walker''s'
$self->{'drop_attr'} = ' restrict';
$self->{'error_on_execute_means_zero_rows'}=1;
$limits{'max_conditions'} = 5427; # (Actually not a limit)
# The following should be 8192, but is smaller because FrontBase crashes.
$limits{'max_columns'} = 150; # Max number of columns in table
$limits{'max_tables'} = 5000; # 10000 crashed FrontBase
$limits{'max_text_size'} = 65000; # Max size with default buffers.
$limits{'query_size'} = 8000000; # Max size with default buffers.
$limits{'max_index'} = 38; # Max number of keys
$limits{'max_index_parts'} = 20; # Max segments/key
$limits{'max_column_name'} = 128; # max table and column name
$limits{'join_optimizer'} = 1; # Can optimize FROM tables
$limits{'load_data_infile'} = 1; # Has load data infile
$limits{'lock_tables'} = 0; # Has lock tables
$limits{'functions'} = 1; # Has simple functions (+/-)
$limits{'group_functions'} = 1; # Have group functions
$limits{'group_distinct_functions'}= 0; # Have count(distinct)
$limits{'select_without_from'}= 0;
$limits{'multi_drop'} = 0; # Drop table cannot take many tables
$limits{'subqueries'} = 1; # Supports sub-queries.
$limits{'left_outer_join'} = 1; # Supports left outer joins
$limits{'table_wildcard'} = 1; # Has SELECT table_name.*
$limits{'having_with_alias'} = 0; # Can use aliases in HAVING
$limits{'having_with_group'} = 0; # Can use group functions in HAVING
$limits{'like_with_column'} = 1; # Can use column1 LIKE column2
$limits{'order_by_position'} = 1; # Can use 'ORDER BY 1'
$limits{'group_by_position'} = 0; # Use of 'GROUP BY 1'
$limits{'alter_table'} = 1; # Have ALTER TABLE
$limits{'alter_add_multi_col'}= 0; # Have ALTER TABLE t add a int,add b int;
$limits{'alter_table_dropcol'}= 0; # Have ALTER TABLE DROP column
$limits{'multi_value_insert'} = 1;
$limits{'group_func_extra_std'} = 0; # Does not have group function std().
$limits{'func_odbc_mod'} = 0; # Have function mod.
$limits{'func_extra_%'} = 0; # Does not have % as alias for mod()
$limits{'func_odbc_floor'} = 0; # Has func_odbc_floor function
$limits{'func_extra_if'} = 0; # Does not have function if.
$limits{'column_alias'} = 1; # Alias for fields in select statement.
$limits{'NEG'} = 1; # Supports -id
$limits{'func_extra_in_num'} = 0; # Has function in
$limits{'limit'} = 0; # Does not support the limit attribute
$limits{'insert_select'} = 0;
$limits{'order_by_unused'} = 0;
# We don't get an error for duplicate row in 'test-insert'
$limits{'unique_index'} = 0; # Unique index works or not
# We can't use a blob as a normal string (we get a weird error)
$limits{'working_blobs'} = 0;
# 'select min(region),max(region) from bench1' kills the server after a while
$limits{'group_func_sql_min_str'} = 0;
# If you do select f1,f2,f3...f200 from table, FrontBase dies.
$limits{'working_all_fields'} = 0;
return $self;
}
#
# Get the version number of the database
#
sub version
{
my ($self)=@_;
my ($dbh,$sth,$version,@row);
$dbh=$self->connect();
#
# Pick up SQLGetInfo option SQL_DBMS_VER (18)
#
#$version = $dbh->func(18, GetInfo);
$version="2.1";
$dbh->disconnect;
return $version;
}
#
# Connection with optional disabling of logging
#
sub connect
{
my ($self)=@_;
my ($dbh);
$dbh=DBI->connect($self->{'data_source'},
$main::opt_user,
$main::opt_password,
{ PrintError => 0 ,
'fb_host'=>$main::opt_host
}) ||
die "Got error: '$DBI::errstr' when connecting to " . $self->{'data_source'} ." with user: '$main::opt_user' password: '$main::opt_password'\n";
$dbh->{AutoCommit}=1;
# $dbh->do("SET OPTION LOG_OFF=1,UPDATE_LOG=0");
return $dbh;
}
#
# Returns a list of statements to create a table
# The field types are in ANSI SQL format.
#
# If one uses $main::opt_fast then one is allowed to use
# non-standard types to get better speed.
#
sub create
{
my($self,$table_name,$fields,$index,$options) = @_;
my($query,@queries);
$query="create table $table_name (";
foreach $field (@$fields)
{
$field =~ s/ blob/ varchar(32000)/i;
$field =~ s/ big_decimal/ float/i;
$field =~ s/ double/ float/i;
$field =~ s/ tinyint/ smallint/i;
$field =~ s/ mediumint/ int/i;
$field =~ s/ integer/ int/i;
$field =~ s/ float\(\d,\d\)/ float/i;
$field =~ s/ smallint\(\d\)/ smallint/i;
$field =~ s/ int\(\d\)/ int/i;
$query.= $field . ',';
}
my (@keys,@indexes);
foreach $ind (@$index)
{
if ( $ind =~ /\bKEY\b/i ){
push(@keys,"ALTER TABLE $table_name ADD $ind");
}else{
my @fields = split(' ',$ind);
push(@indexes,"CREATE INDEX $fields[1] ON $table_name $fields[2]");
}
}
substr($query,-1)=")"; # Remove last ','
$query.=" $options" if (defined($options));
push(@queries,$query,@keys,@indexes);
return @queries;
}
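The substitutions in `create` map the benchmark's ANSI-ish field types onto what FrontBase accepts, applied in order (so `tinyint(4)` first becomes `smallint(4)`, then `smallint`). A rough mirror of that rewrite table — a Python sketch for illustration; `map_field` is a hypothetical helper, the script itself does this with Perl `s///i`:

```python
import re

# First-match-once rewrites, mirroring the order used in create():
# each Perl s///i (no /g) replaces only the first occurrence.
TYPE_REWRITES = [
    (r" blob", " varchar(32000)"),
    (r" big_decimal", " float"),
    (r" double", " float"),
    (r" tinyint", " smallint"),
    (r" mediumint", " int"),
    (r" integer", " int"),
    (r" float\(\d,\d\)", " float"),
    (r" smallint\(\d\)", " smallint"),
    (r" int\(\d\)", " int"),
]

def map_field(field):
    for pattern, repl in TYPE_REWRITES:
        field = re.sub(pattern, repl, field, count=1, flags=re.IGNORECASE)
    return field

mapped = map_field("id integer NOT NULL")
```

For example `"b blob"` becomes `"b varchar(32000)"` and `"t tinyint(4)"` is rewritten in two passes to `"t smallint"`.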
sub insert_file {
my($self,$dbname, $file) = @_;
print "insert of an ASCII file isn't supported by FrontBase\n";
return 0;
}
#
# Do any conversions to the ANSI SQL query so that the database can handle it
#
sub query {
my($self,$sql) = @_;
return $sql;
}
sub drop_index {
my ($self,$table,$index) = @_;
return "DROP INDEX $index";
}
#
# Abort if the server has crashed
# return: 0 if ok
# 1 question should be retried
#
sub abort_if_fatal_error
{
return 0 if ($DBI::errstr =~ /No raw data handle/);
return 1;
}
sub small_rollback_segment
{
return 0;
}
sub reconnect_on_errors
{
return 1;
}
1;
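The new `drop_attr` attribute gives each driver a server-specific suffix for `DROP TABLE` — `' restrict'` for FrontBase, the empty string elsewhere — which the benchmark scripts concatenate onto every drop statement. The pattern, as a small Python sketch (illustrative only; the scripts build the string with Perl `.` concatenation):

```python
def drop_table_sql(table, drop_attr=""):
    """Build the DROP TABLE statement the benchmark scripts issue,
    appending the driver's drop_attr suffix verbatim."""
    return "drop table " + table + drop_attr

frontbase = drop_table_sql("bench1", " restrict")  # FrontBase driver
default = drop_table_sql("bench1")                 # all other drivers
```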

View File

@ -62,7 +62,7 @@ if (!$opt_skip_create)
my $array_ref = $tables[$ti];
# This may fail if we have no table so do not check answer
$sth = $dbh->do("drop table $table_name");
$sth = $dbh->do("drop table $table_name" . $server->{'drop_attr'});
print "Creating table $table_name\n" if ($opt_verbose);
do_many($dbh,@$array_ref);
@ -280,7 +280,7 @@ if (!$opt_skip_delete) # Only used when testing
for ($ti = 0; $ti <= $#table_names; $ti++)
{
my $table_name = $table_names[$ti];
$sth = $dbh->do("drop table $table_name");
$sth = $dbh->do("drop table $table_name" . $server->{'drop_attr'});
}
$end_time=new Benchmark;
@ -381,7 +381,7 @@ sub init_data
"flight_code integer(8) NOT NULL"],
["PRIMARY KEY (connect_code, leg_number, flight_code)"]);
@connection=
$server->create("connection",
$server->create("fconnection",
["connect_code integer(8) NOT NULL",
"from_airport char(3) NOT NULL",
"to_airport char(3) NOT NULL",
@ -405,7 +405,7 @@ sub init_data
"dual_airline char(2) NOT NULL",
"low_flight smallint(4) NOT NULL",
"high_flight smallint(4) NOT NULL",
"connection_name char(64) NOT NULL"],
"fconnection_name char(64) NOT NULL"],
["PRIMARY KEY (main_airline, dual_airline, low_flight)",
"INDEX main_airline1 (main_airline)"]);
@ -540,7 +540,7 @@ sub init_data
@table_names =
("aircraft", "airline", "airport", "airport_service",
"city", "class_of_service", "code_description",
"compound_class", "connect_leg", "connection", "day_name",
"compound_class", "connect_leg", "fconnection", "day_name",
"dual_carrier", "fare", "flight", "flight_class", "flight_day",
"flight_fare", "food_service", "ground_service", "time_interval",
"month_name",

View File

@ -68,7 +68,7 @@ $field_count= $opt_start_field_count;
$start_time=new Benchmark;
$dbh->do("drop table bench");
$dbh->do("drop table bench" . $server->{'drop_attr'});
do_many($dbh,$server->create("bench",\@fields,\@index));
print "Insert data into the table\n";
@ -92,9 +92,16 @@ print "Time for insert ($opt_row_count)",
$loop_time=new Benchmark;
$add= int(($opt_field_count-$opt_start_field_count)/$opt_loop_count)+1;
$add=1 if (!$limits{'alter_add_multi_col'});
$multi_add=$server->{'limits'}->{'alter_add_multi_col'} == 1;
if ($opt_fast)
{
$add=1 if (!$server->{'limits'}->{'alter_add_multi_col'});
}
else
{
$add=1 if (!$limits{'alter_add_multi_col'});
}
$count=0;
while ($field_count < $opt_field_count)
@ -159,7 +166,8 @@ while ($field_count > $opt_start_field_count)
{
$fields.=",DROP i${field_count}";
}
$dbh->do("ALTER TABLE bench " . substr($fields,1)) || die $DBI::errstr;
$dbh->do("ALTER TABLE bench " . substr($fields,1) . $server->{'drop_attr'})
|| die $DBI::errstr;
}
$end_time=new Benchmark;
@ -173,7 +181,7 @@ skip_dropcol:
#### End of the test...Finally print time used to execute the
#### whole test.
$dbh->do("drop table bench");
$dbh->do("drop table bench" . $server->{'drop_attr'});
$dbh->disconnect;

View File

@ -53,7 +53,7 @@ print "All tests are done $opt_loop_count times with $opt_field_count fields\n\n
$dbh = $server->connect();
print "Testing table with $opt_field_count fields\n";
$sth = $dbh->do("drop table bench1");
$sth = $dbh->do("drop table bench1" . $server->{'drop_attr'});
my @fields=();
my @index=();
@ -81,10 +81,14 @@ test_query("Testing select * from table with 1 record",
"select * from bench1",
$dbh,$opt_loop_count);
test_query("Testing select all_fields from table with 1 record",
"Time to select_many_fields",
"select $fields from bench1",
$dbh,$opt_loop_count);
if ($limits->{'working_all_fields'})
{
test_query("Testing select all_fields from table with 1 record",
"Time to select_many_fields",
"select $fields from bench1",
$dbh,$opt_loop_count);
}
test_query("Testing insert VALUES()",
"Time to insert_many_fields",
@ -101,7 +105,7 @@ test_command("Testing insert (all_fields) VALUES()",
"insert into bench1 ($fields) values($values)",
$dbh,$opt_loop_count);
$sth = $dbh->do("drop table bench1") or die $DBI::errstr;
$sth = $dbh->do("drop table bench1" . $server->{'drop_attr'}) or die $DBI::errstr;
if ($opt_fast && defined($server->{vacuum}))
{

View File

@ -42,6 +42,8 @@ if ($opt_small_test)
$opt_loop_count/=100;
}
$opt_loop_count=min(1000, $opt_loop_count) if ($opt_tcpip);
print "Testing the speed of connecting to the server and sending of data\n";
print "All tests are done $opt_loop_count times\n\n";
@ -68,11 +70,11 @@ for ($i=0 ; $i < $opt_loop_count ; $i++)
$dbh->disconnect;
last;
}
select(undef, undef, undef, 0.001);
select(undef, undef, undef, 0.01*$j);
print "$errors " if (($opt_debug));
$errors++;
}
die $DBI::errstr if ($j == $max_test);
die "Got error '$DBI::errstr' after $i connects" if ($j == $max_test);
$dbh->disconnect;
undef($dbh);
}
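The hunk above replaces the fixed 1 ms retry sleep with one that grows linearly with the attempt counter (`0.01*$j`). The same retry-with-backoff shape, sketched in Python with a stand-in `try_connect` callback — hypothetical, since the benchmark connects through Perl DBI:

```python
import time

def connect_with_retry(try_connect, max_test=10, sleep=time.sleep):
    """Retry a connect callback, sleeping 0.01*j seconds after failed attempt j."""
    for j in range(max_test):
        handle = try_connect()
        if handle is not None:
            return handle, j
        sleep(0.01 * j)  # linear backoff, as in the patched loop
    raise RuntimeError("Got error after %d connect attempts" % max_test)

# Example: a connection that fails twice, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    return "dbh" if len(attempts) > 2 else None

handle, tries = connect_with_retry(flaky, sleep=lambda s: None)
```

The growing sleep gives a busy server progressively more breathing room, which matters when the loop hammers it with `$opt_loop_count` connects in a row.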
@ -128,7 +130,7 @@ if ($limits->{'select_without_from'})
#### Then we shall do $opt_loop_count selects from this table.
#### Table will contain very simple data.
$sth = $dbh->do("drop table bench1");
$sth = $dbh->do("drop table bench1" . $server->{'drop_attr'});
do_many($dbh,$server->create("bench1",
["a int NOT NULL",
"i int",
@ -221,7 +223,7 @@ if ($limits->{'functions'})
timestr(timediff($end_time, $loop_time),"all") . "\n\n";
}
$sth = $dbh->do("drop table bench1")
$sth = $dbh->do("drop table bench1" . $server->{'drop_attr'})
or die $DBI::errstr;
if ($opt_fast && defined($server->{vacuum}))
@ -234,6 +236,8 @@ if ($opt_fast && defined($server->{vacuum}))
#### huge record in it and then we'll do $opt_loop_count selects
#### from it.
goto skip_blob_test if (!$limits->{'working_blobs'});
print "Testing retrieval of big records ($str_length bytes)\n";
do_many($dbh,$server->create("bench1", ["b blob"], []));
@ -265,7 +269,7 @@ $end_time=new Benchmark;
print "Time to select_big ($opt_loop_count): " .
timestr(timediff($end_time, $loop_time),"all") . "\n\n";
$sth = $dbh->do("drop table bench1")
$sth = $dbh->do("drop table bench1" . $server->{'drop_attr'})
or do
{
# Fix for Access 2000
@ -277,6 +281,7 @@ if ($opt_fast && defined($server->{vacuum}))
$server->vacuum(0,\$dbh);
}
skip_blob_test:
################################ END ###################################
####

View File

@ -64,7 +64,7 @@ if ($opt_force) # If tables used in this test exist, drop 'em
print "Okay..Let's make sure that our tables don't exist yet.\n\n";
for ($i=1 ; $i <= $max_tables ; $i++)
{
$dbh->do("drop table bench_$i");
$dbh->do("drop table bench_$i" . $server->{'drop_attr'});
}
}
@ -90,7 +90,7 @@ for ($i=1 ; $i <= $max_tables ; $i++)
# Got an error; Do cleanup
for ($i=1 ; $i <= $max_tables ; $i++)
{
$dbh->do("drop table bench_$i");
$dbh->do("drop table bench_$i" . $server->{'drop_attr'});
}
die "Test aborted";
}
@ -148,13 +148,13 @@ if ($opt_fast && $server->{'limits'}->{'multi_drop'} &&
{
$query.=",bench_$i";
}
$sth = $dbh->do($query) or die $DBI::errstr;
$sth = $dbh->do($query . $server->{'drop_attr'}) or die $DBI::errstr;
}
else
{
for ($i=1 ; $i <= $max_tables ; $i++)
{
$sth = $dbh->do("drop table bench_$i")
$sth = $dbh->do("drop table bench_$i" . $server->{'drop_attr'})
or die $DBI::errstr;
}
}
@ -186,7 +186,7 @@ for ($i=1 ; $i <= $opt_loop_count ; $i++)
"s char(10)",
"v varchar(100)"],
["primary key (i)"]));
$sth = $dbh->do("drop table bench_$i") or die $DBI::errstr;
$sth = $dbh->do("drop table bench_$i" . $server->{'drop_attr'}) or die $DBI::errstr;
}
$end_time=new Benchmark;
@ -239,7 +239,7 @@ $loop_time=new Benchmark;
for ($i=1 ; $i <= $opt_loop_count ; $i++)
{
do_many($dbh,$server->create("bench_$i", \@fields, \@index));
$dbh->do("drop table bench_$i") or die $DBI::errstr;
$dbh->do("drop table bench_$i" . $server->{'drop_attr'}) or die $DBI::errstr;
}
$end_time=new Benchmark;

View File

@ -34,6 +34,7 @@ $opt_loop_count=100000; # number of rows/3
$small_loop_count=10; # Loop for full table retrieval
$range_loop_count=$small_loop_count*50;
$many_keys_loop_count=$opt_loop_count;
$opt_read_key_loop_count=$opt_loop_count;
chomp($pwd = `pwd`); $pwd = "." if ($pwd eq '');
require "$pwd/bench-init.pl" or die "Can't read Configuration file: $!\n";
@ -46,13 +47,15 @@ if ($opt_loop_count < 256)
if ($opt_small_test)
{
$opt_loop_count/=100;
$range_loop_count/=10;
$many_keys_loop_count=$opt_loop_count/10;
$range_loop_count=10;
$opt_read_key_loop_count=10;
}
elsif ($opt_small_tables)
{
$opt_loop_count=10000; # number of rows/3
$many_keys_loop_count=$opt_loop_count;
$opt_read_key_loop_count=10;
}
elsif ($opt_small_key_tables)
{
@ -96,9 +99,9 @@ goto keys_test if ($opt_stage == 2);
goto select_test if ($opt_skip_create);
print "Creating tables\n";
$dbh->do("drop table bench1");
$dbh->do("drop table bench2");
$dbh->do("drop table bench3");
$dbh->do("drop table bench1" . $server->{'drop_attr'});
$dbh->do("drop table bench2" . $server->{'drop_attr'});
$dbh->do("drop table bench3" . $server->{'drop_attr'});
do_many($dbh,$server->create("bench1",
["id int NOT NULL",
"id2 int NOT NULL",
@ -239,11 +242,11 @@ if ($limits->{'unique_index'})
die "Didn't get an error when inserting duplicate record $tmp\n";
}
}
}
$end_time=new Benchmark;
print "Time for insert_duplicates (" . ($opt_loop_count) . "): " .
$end_time=new Benchmark;
print "Time for insert_duplicates (" . ($opt_loop_count) . "): " .
timestr(timediff($end_time, $loop_time),"all") . "\n\n";
}
#if ($opt_fast && defined($server->{vacuum}))
#{
@ -332,11 +335,13 @@ else
print " for order_by_big_key2 ($small_loop_count:$rows): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
$sel=$limits->{'order_by_unused'} ? "id2" : "*";
$loop_time=new Benchmark;
$estimated=$rows=0;
for ($i=1 ; $i <= $small_loop_count ; $i++)
{
$rows+=fetch_all_rows($dbh,"select id2 from bench1 order by id3",1);
$rows+=fetch_all_rows($dbh,"select $sel from bench1 order by id3",1);
$end_time=new Benchmark;
last if ($estimated=predict_query_time($loop_time,$end_time,\$i,$i,
$small_loop_count));
@ -349,11 +354,12 @@ print " for order_by_big_key_diff ($small_loop_count:$rows): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
$sel=$limits->{'order_by_unused'} ? "id" : "*";
$loop_time=new Benchmark;
$estimated=$rows=0;
for ($i=1 ; $i <= $small_loop_count ; $i++)
{
$rows+=fetch_all_rows($dbh,"select id from bench1 order by id2,id3",1);
$rows+=fetch_all_rows($dbh,"select $sel from bench1 order by id2,id3",1);
$end_time=new Benchmark;
last if ($estimated=predict_query_time($loop_time,$end_time,\$i,$i,
$small_loop_count));
@ -365,13 +371,15 @@ else
print " for order_by_big ($small_loop_count:$rows): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
$sel=$limits->{'order_by_unused'} ? "dummy1" : "dummy1,id3";
$loop_time=new Benchmark;
$estimated=$rows=0;
for ($i=1 ; $i <= $range_loop_count ; $i++)
{
$start=$opt_loop_count/$range_loop_count*$i;
$end=$start+$i;
$rows+=fetch_all_rows($dbh,"select dummy1 from bench1 where id>=$start and id <= $end order by id3",1);
$rows+=fetch_all_rows($dbh,"select $sel from bench1 where id>=$start and id <= $end order by id3",1);
$end_time=new Benchmark;
last if ($estimated=predict_query_time($loop_time,$end_time,\$i,$i,
$range_loop_count));
@ -383,14 +391,14 @@ else
print " for order_by_range ($range_loop_count:$rows): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
$sel=$limits->{'order_by_unused'} ? "dummy1" : "dummy1,id";
$loop_time=new Benchmark;
$estimated=$rows=0;
for ($i=1 ; $i <= $range_loop_count ; $i++)
{
$start=$opt_loop_count/$range_loop_count*$i;
$end=$start+$i;
$rows+=fetch_all_rows($dbh,"select dummy1 from bench1 where id>=$start and id <= $end order by id",1);
$rows+=fetch_all_rows($dbh,"select $sel from bench1 where id>=$start and id <= $end order by id",1);
$end_time=new Benchmark;
last if ($estimated=predict_query_time($loop_time,$end_time,\$i,$i,
$range_loop_count));
@ -402,13 +410,14 @@ else
print " for order_by_key ($range_loop_count:$rows): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
$sel=$limits->{'order_by_unused'} ? "id2" : "id2,id3";
$loop_time=new Benchmark;
$estimated=$rows=0;
for ($i=1 ; $i <= $range_loop_count ; $i++)
{
$start=$opt_loop_count/$range_loop_count*$i;
$end=$start+$range_loop_count;
$rows+=fetch_all_rows($dbh,"select id2 from bench1 where id3>=$start and id3 <= $end order by id3",1);
$rows+=fetch_all_rows($dbh,"select $sel from bench1 where id3>=$start and id3 <= $end order by id3",1);
$end_time=new Benchmark;
last if ($estimated=predict_query_time($loop_time,$end_time,\$i,$i,
$range_loop_count));
@ -674,23 +683,26 @@ if ($limits->{'group_functions'})
print "Time for count ($count): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
$loop_time=new Benchmark;
$count=$estimated=0;
for ($tests=1 ; $tests <= $small_loop_count ; $tests++)
if ($limits->{'group_distinct_functions'})
{
$count+=2;
fetch_all_rows($dbh,"select count(distinct dummy1) from bench1");
fetch_all_rows($dbh,"select dummy1,count(distinct id) from bench1 group by dummy1");
$end_time=new Benchmark;
last if ($estimated=predict_query_time($loop_time,$end_time,\$count,$tests,
$small_loop_count));
$loop_time=new Benchmark;
$count=$estimated=0;
for ($tests=1 ; $tests <= $small_loop_count ; $tests++)
{
$count+=2;
fetch_all_rows($dbh,"select count(distinct dummy1) from bench1");
fetch_all_rows($dbh,"select dummy1,count(distinct id) from bench1 group by dummy1");
$end_time=new Benchmark;
last if ($estimated=predict_query_time($loop_time,$end_time,\$count,$tests,
$small_loop_count));
}
if ($estimated)
{ print "Estimated time"; }
else
{ print "Time"; }
print " for count_distinct_big ($count): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
}
if ($estimated)
{ print "Estimated time"; }
else
{ print "Time"; }
print " for count_distinct_big ($count): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
}
@ -1002,9 +1014,9 @@ if ($limits->{'insert_select'})
print "Time for insert_select_2_keys (1): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
$loop_time=new Benchmark;
$sth = $dbh->do("DROP TABLE bench2") ||
$sth = $dbh->do("DROP TABLE bench2" . $server->{'drop_attr'}) ||
die $DBI::errstr;
$sth = $dbh->do("DROP TABLE bench3") ||
$sth = $dbh->do("DROP TABLE bench3" . $server->{'drop_attr'}) ||
die $DBI::errstr;
$end_time=new Benchmark;
print "Time for drop table(2): " .
@ -1096,7 +1108,7 @@ if (!$opt_skip_delete)
{
$sth = $dbh->do("UNLOCK TABLES") || die $DBI::errstr;
}
$sth = $dbh->do("drop table bench1") or die $DBI::errstr;
$sth = $dbh->do("drop table bench1" . $server->{'drop_attr'}) or die $DBI::errstr;
}
if ($server->small_rollback_segment())
@ -1309,7 +1321,7 @@ $end_time=new Benchmark;
print "Time for delete_all_many_keys ($count): " .
timestr(timediff($end_time, $loop_time),"all") . "\n\n";
$sth = $dbh->do("drop table bench1") or die $DBI::errstr;
$sth = $dbh->do("drop table bench1" . $server->{'drop_attr'}) or die $DBI::errstr;
if ($opt_fast && defined($server->{vacuum}))
{
$server->vacuum(1,\$dbh);
@ -1323,7 +1335,7 @@ if ($limits->{'multi_value_insert'})
{
$query_size=$limits->{'query_size'}; # Same limit for all databases
$sth = $dbh->do("drop table bench1");
$sth = $dbh->do("drop table bench1" . $server->{'drop_attr'});
do_many($dbh,$server->create("bench1",
["id int NOT NULL",
"id2 int NOT NULL",
@ -1375,7 +1387,7 @@ if ($limits->{'multi_value_insert'})
# A big table may take a while to drop
$loop_time=new Benchmark;
$sth = $dbh->do("drop table bench1") or die $DBI::errstr;
$sth = $dbh->do("drop table bench1" . $server->{'drop_attr'}) or die $DBI::errstr;
$end_time=new Benchmark;
print "Time for drop table(1): " .
timestr(timediff($end_time, $loop_time),"all") . "\n\n";
@ -1405,7 +1417,7 @@ sub check_select_key
$estimated=0;
$loop_time=new Benchmark;
$count=0;
for ($i=1 ; $i <= $opt_loop_count; $i++)
for ($i=1 ; $i <= $opt_read_key_loop_count; $i++)
{
$count+=2;
$tmpvar^= ((($tmpvar + 63) + $i)*3 % $opt_loop_count);
@ -1437,7 +1449,7 @@ sub check_select_key2
$estimated=0;
$loop_time=new Benchmark;
$count=0;
for ($i=1 ; $i <= $opt_loop_count; $i++)
for ($i=1 ; $i <= $opt_read_key_loop_count; $i++)
{
$count+=2;
$tmpvar^= ((($tmpvar + 63) + $i)*3 % $opt_loop_count);

View File

@ -61,7 +61,7 @@ $start_time=new Benchmark;
goto select_test if ($opt_skip_create);
print "Creating table\n";
$dbh->do("drop table bench1");
$dbh->do("drop table bench1" . $server->{'drop_attr'});
do_many($dbh,$server->create("bench1",
["region char(1) NOT NULL",
@ -213,7 +213,7 @@ print " for select_range ($count:$rows): " .
# Testing MIN() and MAX() on keys
#
if ($limits->{'group_functions'})
if ($limits->{'group_functions'} && $limits->{'order_by_unused'})
{
$loop_time=new Benchmark;
$count=0;
@ -230,6 +230,7 @@ if ($limits->{'group_functions'})
fetch_all_rows($dbh,"select min(region),max(region) from bench1");
}
fetch_all_rows($dbh,"select min(rev_idn) from bench1 where region='$region'");
fetch_all_rows($dbh,"select max(grp) from bench1 where region='$region'");
fetch_all_rows($dbh,"select max(idn) from bench1 where region='$region' and grp=$grp");
if ($limits->{'group_func_sql_min_str'})
@ -265,6 +266,10 @@ if ($limits->{'group_functions'})
print " for count_on_key ($count): " .
timestr(timediff($end_time, $loop_time),"all") . "\n\n";
}
if ($limits->{'group_functions'})
{
$loop_time=new Benchmark;
$rows=0;
for ($i=0 ; $i < $opt_medium_loop_count ; $i++)
@ -276,7 +281,7 @@ if ($limits->{'group_functions'})
timestr(timediff($end_time, $loop_time),"all") . "\n";
}
if ($limits->{'group_functions'})
if ($limits->{'group_distinct_functions'})
{
print "Testing count(distinct) on the table\n";
$loop_time=new Benchmark;
@ -362,7 +367,7 @@ if ($opt_lock_tables)
}
if (!$opt_skip_delete)
{
do_query($dbh,"drop table bench1");
do_query($dbh,"drop table bench1" . $server->{'drop_attr'});
}
if ($opt_fast && defined($server->{vacuum}))

View File

@ -57,7 +57,7 @@ if (!$opt_skip_create)
my $array_ref = $tables[$ti];
# This may fail if we have no table so do not check answer
$sth = $dbh->do("drop table $table_name");
$sth = $dbh->do("drop table $table_name" . $server->{'drop_attr'});
print "Creating table $table_name\n" if ($opt_verbose);
do_many($dbh,@$array_ref);
}
@ -201,7 +201,7 @@ if (!$opt_skip_delete)
for ($ti = 0; $ti <= $#table_names; $ti++)
{
my $table_name = $table_names[$ti];
$sth = $dbh->do("drop table $table_name");
$sth = $dbh->do("drop table $table_name" . $server->{'drop_attr'});
}
}

View File

@ -168,25 +168,23 @@ bool berkeley_flush_logs()
}
int berkeley_commit(THD *thd)
int berkeley_commit(THD *thd, void *trans)
{
DBUG_ENTER("berkeley_commit");
DBUG_PRINT("trans",("ending transaction"));
int error=txn_commit((DB_TXN*) thd->transaction.bdb_tid,0);
int error=txn_commit((DB_TXN*) trans,0);
#ifndef DBUG_OFF
if (error)
DBUG_PRINT("error",("error: %d",error));
#endif
thd->transaction.bdb_tid=0;
DBUG_RETURN(error);
}
int berkeley_rollback(THD *thd)
int berkeley_rollback(THD *thd, void *trans)
{
DBUG_ENTER("berkeley_rollback");
DBUG_PRINT("trans",("aborting transaction"));
int error=txn_abort((DB_TXN*) thd->transaction.bdb_tid);
thd->transaction.bdb_tid=0;
int error=txn_abort((DB_TXN*) trans);
DBUG_RETURN(error);
}
@ -1337,6 +1335,10 @@ int ha_berkeley::reset(void)
/*
As MySQL will execute an external lock for every new table it uses,
we can use this to start the transactions.
If we are in auto_commit mode, we just need to start a statement-level
transaction so that the statement can be rolled back.
If not, we also have to start a master transaction if one doesn't
already exist.
*/
int ha_berkeley::external_lock(THD *thd, int lock_type)
@ -1345,16 +1347,34 @@ int ha_berkeley::external_lock(THD *thd, int lock_type)
DBUG_ENTER("ha_berkeley::external_lock");
if (lock_type != F_UNLCK)
{
if (!thd->transaction.bdb_lock_count++ && !thd->transaction.bdb_tid)
if (!thd->transaction.bdb_lock_count++)
{
/* Found first lock, start transaction */
DBUG_PRINT("trans",("starting transaction"));
if ((error=txn_begin(db_env, 0,
(DB_TXN**) &thd->transaction.bdb_tid,
/* First table lock, start transaction */
if (!(thd->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)) &&
!thd->transaction.all.bdb_tid)
{
/* We have to start a master transaction */
DBUG_PRINT("trans",("starting transaction"));
if ((error=txn_begin(db_env, 0,
(DB_TXN**) &thd->transaction.all.bdb_tid,
0)))
{
thd->transaction.bdb_lock_count--; // We didn't get the lock
DBUG_RETURN(error);
}
}
DBUG_PRINT("trans",("starting transaction for statement"));
if ((error=txn_begin(db_env,
(DB_TXN*) thd->transaction.all.bdb_tid,
(DB_TXN**) &thd->transaction.stmt.bdb_tid,
0)))
thd->transaction.bdb_lock_count--;
{
/* We leave the possible master transaction open */
thd->transaction.bdb_lock_count--; // We didn't get the lock
DBUG_RETURN(error);
}
}
transaction= (DB_TXN*) thd->transaction.bdb_tid;
transaction= (DB_TXN*) thd->transaction.stmt.bdb_tid;
}
else
{
@ -1371,23 +1391,21 @@ int ha_berkeley::external_lock(THD *thd, int lock_type)
current_row.data=0;
if (!--thd->transaction.bdb_lock_count)
{
if (thd->transaction.bdb_tid && (thd->options & OPTION_AUTO_COMMIT)
&& !(thd->options & OPTION_BEGIN))
if (thd->transaction.stmt.bdb_tid)
{
/*
F_UNLOCK is done without a transaction commit / rollback.
This happens if the thread didn't update any rows or if
something went wrong during an update.
We can in this case silenty abort the transaction.
This happens if the thread didn't update any rows.
We must in this case commit the work to keep the row locks.
*/
DBUG_PRINT("trans",("aborting transaction"));
error=txn_abort((DB_TXN*) thd->transaction.bdb_tid);
thd->transaction.bdb_tid=0;
DBUG_PRINT("trans",("committing non-updating transaction"));
error=txn_commit((DB_TXN*) thd->transaction.stmt.bdb_tid,0);
thd->transaction.stmt.bdb_tid=0;
}
}
}
DBUG_RETURN(error);
}
}
THR_LOCK_DATA **ha_berkeley::store_lock(THD *thd, THR_LOCK_DATA **to,

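The new `external_lock` logic above keeps two transaction handles per thread: a master transaction (`all`), opened only when autocommit is off or `BEGIN` was issued, and a per-statement sub-transaction (`stmt`), opened on every first table lock and committed on unlock when nothing was updated. A rough Python model of that control flow (the names and the `txn_begin` stand-in are illustrative, not the actual Berkeley DB API):

```python
class Trans:
    def __init__(self):
        self.all = None     # master transaction (BEGIN WORK / autocommit off)
        self.stmt = None    # sub-transaction for the current statement
        self.lock_count = 0

def txn_begin(parent=None):
    # Stand-in for BDB txn_begin(); returns a fresh transaction token.
    return {"parent": parent, "open": True}

def external_lock(trans, autocommit, lock):
    """Model of ha_berkeley::external_lock from the patch."""
    if lock:
        trans.lock_count += 1
        if trans.lock_count == 1:
            # First table lock: make sure a master transaction exists,
            # unless we are in autocommit mode.
            if not autocommit and trans.all is None:
                trans.all = txn_begin()
            # Every statement runs in its own sub-transaction.
            trans.stmt = txn_begin(parent=trans.all)
    else:
        trans.lock_count -= 1
        if trans.lock_count == 0 and trans.stmt is not None:
            # Non-updating statement: commit the sub-transaction to
            # release its row locks; the master stays open.
            trans.stmt["open"] = False
            trans.stmt = None
```

In autocommit mode `all` stays unset and each statement's sub-transaction has no parent; after `BEGIN`, the master transaction outlives every statement.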
View File

@ -156,5 +156,5 @@ extern TYPELIB berkeley_lock_typelib;
bool berkeley_init(void);
bool berkeley_end(void);
bool berkeley_flush_logs(void);
int berkeley_commit(THD *thd);
int berkeley_rollback(THD *thd);
int berkeley_commit(THD *thd, void *trans);
int berkeley_rollback(THD *thd, void *trans);

View File

@ -174,78 +174,83 @@ int ha_panic(enum ha_panic_function flag)
} /* ha_panic */
/*
This is used to commit or rollback a single statement depending
on the value of error
*/
int ha_autocommit_or_rollback(THD *thd, int error)
{
DBUG_ENTER("ha_autocommit_or_rollback");
#if defined(HAVE_BERKELEY_DB) || defined(HAVE_INNOBASE_DB)
if ((thd->options & OPTION_AUTO_COMMIT) && !(thd->options & OPTION_BEGIN)
&& !thd->locked_tables)
#ifdef USING_TRANSACTIONS
if (!(thd->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)) &&
!thd->locked_tables)
{
if (!error)
{
if (ha_commit(thd))
if (ha_commit_stmt(thd))
error=1;
}
}
else
(void) ha_rollback(thd);
(void) ha_rollback_stmt(thd);
}
#endif
DBUG_RETURN(error);
}
int ha_commit(THD *thd)
int ha_commit_trans(THD *thd, THD_TRANS* trans)
{
int error=0;
DBUG_ENTER("ha_commit");
#ifdef HAVE_BERKELEY_DB
if (thd->transaction.bdb_tid)
if (trans->bdb_tid)
{
int error=berkeley_commit(thd);
if (error)
if ((error=berkeley_commit(thd,trans->bdb_tid)))
{
my_error(ER_ERROR_DURING_COMMIT, MYF(0), error);
error=1;
}
trans->bdb_tid=0;
}
#endif
#ifdef HAVE_INNOBASE_DB
if (thd->transaction.innobase_tid)
{
int error=innobase_commit(thd);
if (error)
if ((error=innobase_commit(thd,trans->innobase_tid)))
{
my_error(ER_ERROR_DURING_COMMIT, MYF(0), error);
error=1;
}
trans->innobase_tid=0;
}
#endif
DBUG_RETURN(error);
}
int ha_rollback(THD *thd)
int ha_rollback_trans(THD *thd, THD_TRANS *trans)
{
int error=0;
DBUG_ENTER("ha_rollback");
#ifdef HAVE_BERKELEY_DB
if (thd->transaction.bdb_tid)
if (trans->bdb_tid)
{
int error=berkeley_rollback(thd);
if (error)
if ((error=berkeley_rollback(thd, trans->bdb_tid)))
{
my_error(ER_ERROR_DURING_ROLLBACK, MYF(0), error);
error=1;
}
trans->bdb_tid=0;
}
#endif
#ifdef HAVE_INNOBASE_DB
if (thd->transaction.innobase_tid)
if (trans->innobase_tid)
{
int error=innobase_rollback(thd);
if (error)
if ((error=innobase_rollback(thd, trans->innobase_tid)))
{
my_error(ER_ERROR_DURING_ROLLBACK, MYF(0), error);
error=1;
}
trans->innobase_tid=0;
}
#endif
DBUG_RETURN(error);

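The rewritten `ha_autocommit_or_rollback` above commits or rolls back only the statement-level transaction, and only when the thread is in autocommit mode (neither `OPTION_NOT_AUTO_COMMIT` nor `OPTION_BEGIN` set) with no `LOCK TABLES` active. A hedged Python sketch of that decision; the flag names mirror the patch, while the callables stand in for `ha_commit_stmt()`/`ha_rollback_stmt()`:

```python
def ha_autocommit_or_rollback(thd, error, commit_stmt, rollback_stmt):
    """Model of the patched handler.cc logic.

    thd is a dict with 'not_auto_commit', 'begin' and 'locked_tables'
    flags; commit_stmt/rollback_stmt are stand-ins for the real
    ha_commit_stmt()/ha_rollback_stmt() wrappers.
    """
    if not (thd["not_auto_commit"] or thd["begin"]) and not thd["locked_tables"]:
        if not error:
            if commit_stmt():       # non-zero means the commit failed
                error = 1
        else:
            rollback_stmt()         # result intentionally ignored
    return error
```

Inside an explicit transaction or under `LOCK TABLES`, neither path runs: the work stays pending until `COMMIT`/`ROLLBACK`.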
View File

@ -25,6 +25,10 @@
#define NO_HASH /* Not yet implemented */
#endif
#if defined(HAVE_BERKELEY_DB) || defined(HAVE_INNOBASE_DB) || defined(HAVE_GEMENI_DB)
#define USING_TRANSACTIONS
#endif
// the following is for checking tables
#define HA_ADMIN_ALREADY_DONE 1
@ -114,6 +118,12 @@ enum row_type { ROW_TYPE_DEFAULT, ROW_TYPE_FIXED, ROW_TYPE_DYNAMIC,
#define HA_CREATE_USED_AUTO 1
#define HA_CREATE_USED_RAID 2
typedef struct st_thd_trans {
void *bdb_tid;
void *innobase_tid;
void *gemeni_tid;
} THD_TRANS;
typedef struct st_ha_create_information
{
ulong table_options;
@ -294,6 +304,12 @@ public:
extern const char *ha_row_type[];
extern TYPELIB ha_table_typelib;
/* Wrapper functions */
#define ha_commit_stmt(thd) (ha_commit_trans((thd), &((thd)->transaction.stmt)))
#define ha_rollback_stmt(thd) (ha_rollback_trans((thd), &((thd)->transaction.stmt)))
#define ha_commit(thd) (ha_commit_trans((thd), &((thd)->transaction.all)))
#define ha_rollback(thd) (ha_rollback_trans((thd), &((thd)->transaction.all)))
handler *get_new_handler(TABLE *table, enum db_type db_type);
my_off_t ha_get_ptr(byte *ptr, uint pack_length);
void ha_store_ptr(byte *buff, uint pack_length, my_off_t pos);
@ -304,8 +320,8 @@ int ha_create_table(const char *name, HA_CREATE_INFO *create_info,
bool update_create_info);
int ha_delete_table(enum db_type db_type, const char *path);
void ha_key_cache(void);
int ha_commit(THD *thd);
int ha_rollback(THD *thd);
int ha_commit_trans(THD *thd, THD_TRANS *trans);
int ha_rollback_trans(THD *thd, THD_TRANS *trans);
int ha_autocommit_or_rollback(THD *thd, int error);
bool ha_flush_logs(void);

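With the new `THD_TRANS` struct, the commit/rollback entry points take a pointer to either `thd->transaction.stmt` or `thd->transaction.all`, and the `ha_commit_stmt`/`ha_commit` macros simply pick the member. A small Python analogue of that dispatch (the engine members match the struct above; the commit body is a simplified stand-in for the per-engine calls):

```python
class THD_TRANS:
    def __init__(self):
        self.bdb_tid = None
        self.innobase_tid = None
        self.gemeni_tid = None

class Transaction:
    def __init__(self):
        self.all = THD_TRANS()    # transaction since BEGIN WORK
        self.stmt = THD_TRANS()   # transaction for the current statement

def ha_commit_trans(trans):
    """Commit whatever engine transactions are open in `trans` and
    clear the handles, mirroring the patched handler.cc structure."""
    committed = []
    for engine in ("bdb_tid", "innobase_tid", "gemeni_tid"):
        if getattr(trans, engine) is not None:
            committed.append(engine)
            setattr(trans, engine, None)
    return committed

# The handler.h macros, as thin wrappers:
def ha_commit_stmt(thd): return ha_commit_trans(thd.stmt)
def ha_commit(thd):      return ha_commit_trans(thd.all)
```

Committing a statement thus never touches the master transaction, which is what lets every statement run as a sub-transaction.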
View File

@ -134,7 +134,7 @@ void kill_one_thread(THD *thd, ulong id);
#define TEST_NO_EXTRA 128
#define TEST_CORE_ON_SIGNAL 256 /* Give core if signal */
/* options for select set by the yacc parser */
/* options for select set by the yacc parser (stored in lex->options) */
#define SELECT_DISTINCT 1
#define SELECT_STRAIGHT_JOIN 2
#define SELECT_DESCRIBE 4
@ -155,14 +155,17 @@ void kill_one_thread(THD *thd, ulong id);
#define OPTION_SAFE_UPDATES OPTION_ANSI_MODE*2
#define OPTION_BUFFER_RESULT OPTION_SAFE_UPDATES*2
#define OPTION_BIN_LOG OPTION_BUFFER_RESULT*2
#define OPTION_AUTO_COMMIT OPTION_BIN_LOG*2
#define OPTION_BEGIN OPTION_AUTO_COMMIT*2
#define OPTION_NOT_AUTO_COMMIT OPTION_BIN_LOG*2
#define OPTION_BEGIN OPTION_NOT_AUTO_COMMIT*2
#define OPTION_QUICK OPTION_BEGIN*2
#define OPTION_QUOTE_SHOW_CREATE OPTION_QUICK*2
/* Set if we are updating a non-transaction safe table */
#define OPTION_STATUS_NO_TRANS_UPDATE OPTION_QUOTE_SHOW_CREATE*2
/* The following is set when parsing the query */
#define OPTION_NO_INDEX_USED OPTION_QUOTE_SHOW_CREATE*2
#define OPTION_NO_GOOD_INDEX_USED OPTION_NO_INDEX_USED*2
#define QUERY_NO_INDEX_USED OPTION_STATUS_NO_TRANS_UPDATE*2
#define QUERY_NO_GOOD_INDEX_USED QUERY_NO_INDEX_USED*2
#define RAID_BLOCK_SIZE 1024
@ -255,6 +258,7 @@ int mysql_optimize_table(THD* thd, TABLE_LIST* table_list,
/* net_pkg.c */
void send_error(NET *net,uint sql_errno=0, const char *err=0);
void send_warning(NET *net, uint sql_errno, const char *err=0);
void net_printf(NET *net,uint sql_errno, ...);
void send_ok(NET *net,ha_rows affected_rows=0L,ulonglong id=0L,
const char *info=0);

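The option flags above are defined by repeated doubling (`OPTION_BEGIN` is `OPTION_NOT_AUTO_COMMIT*2`, and so on), which guarantees each flag occupies a distinct bit, so tests like `thd->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)` work as a mask. A quick Python check of that construction; the absolute values depend on the earlier defines, so the `OPTION_BIN_LOG` starting value below is an assumption for illustration:

```python
# Rebuild the chain of doubled flags from an assumed base value.
OPTION_BIN_LOG = 1 << 10                  # illustrative assumption
OPTION_NOT_AUTO_COMMIT = OPTION_BIN_LOG * 2
OPTION_BEGIN = OPTION_NOT_AUTO_COMMIT * 2
OPTION_QUICK = OPTION_BEGIN * 2
OPTION_QUOTE_SHOW_CREATE = OPTION_QUICK * 2
OPTION_STATUS_NO_TRANS_UPDATE = OPTION_QUOTE_SHOW_CREATE * 2

flags = [OPTION_BIN_LOG, OPTION_NOT_AUTO_COMMIT, OPTION_BEGIN,
         OPTION_QUICK, OPTION_QUOTE_SHOW_CREATE,
         OPTION_STATUS_NO_TRANS_UPDATE]

# Every flag is a distinct power of two, so masks compose cleanly.
assert all(f & (f - 1) == 0 for f in flags)
assert len(set(flags)) == len(flags)

options = OPTION_BEGIN                    # e.g. after BEGIN WORK
in_transaction = bool(options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN))
```

The same pattern explains the rename: testing `OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN` is a single bitwise AND, whereas the old positive `OPTION_AUTO_COMMIT` flag needed a negated test plus a second check.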
View File

@ -192,8 +192,7 @@ uint test_flags, select_errors=0, dropping_tables=0,ha_open_options=0;
uint volatile thread_count=0, thread_running=0, kill_cached_threads=0,
wake_thread=0, global_read_lock=0;
ulong thd_startup_options=(OPTION_UPDATE_LOG | OPTION_AUTO_IS_NULL |
OPTION_BIN_LOG | OPTION_AUTO_COMMIT |
OPTION_QUOTE_SHOW_CREATE );
OPTION_BIN_LOG | OPTION_QUOTE_SHOW_CREATE );
uint protocol_version=PROTOCOL_VERSION;
ulong keybuff_size,sortbuff_size,max_item_sort_length,table_cache_size,
max_join_size,join_buff_size,tmp_table_size,thread_stack,

View File

@ -73,6 +73,18 @@ void send_error(NET *net, uint sql_errno, const char *err)
DBUG_VOID_RETURN;
}
/*
At some point we need to be able to distinguish between warnings and
errors; the following function will help make this easier.
*/
void send_warning(NET *net, uint sql_errno, const char *err)
{
DBUG_ENTER("send_warning");
send_error(net,sql_errno,err);
DBUG_VOID_RETURN;
}
/**
** write error packet and flush to client
** It's a little too low level, but I don't want to allow another buffer

Binary file not shown.

View File

@ -211,3 +211,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -200,3 +200,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -197,3 +197,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -197,3 +197,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -201,3 +201,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -197,3 +197,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -200,3 +200,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -197,3 +197,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -199,3 +199,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -197,3 +197,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -199,3 +199,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -197,3 +197,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

View File

@ -199,3 +199,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

View File

@ -199,3 +199,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -201,3 +201,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -197,3 +197,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

View File

@ -201,3 +201,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -200,3 +200,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -205,3 +205,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

Binary file not shown.

View File

@ -198,3 +198,4 @@
"Unknown system variable '%-.64s'",
"Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back",

View File

@ -195,7 +195,7 @@
"Hittar inte ett FULLTEXT index i kolumnlistan",
"Kan inte exekvera kommandot emedan du har en låst tabell eller en aktiv transaktion",
"Okänd system variabel '%-.64s'",
#ER_CRASHED_ON_USAGE
"Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE",
#ER_CRASHED_ON_REPAIR
"Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades",
#ER_WARNING_NOT_COMPLETE_ROLLBACK
"Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK",

Binary file not shown.

View File

@ -197,3 +197,4 @@
"Okänd system variabel '%-.64s'",
"Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE",
"Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades",
"Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK",

View File

@ -548,8 +548,7 @@ static int init_slave_thread(THD* thd)
thd->master_access= ~0;
thd->priv_user = 0;
thd->slave_thread = 1;
thd->options = (((opt_log_slave_updates) ? OPTION_BIN_LOG:0)
| OPTION_AUTO_COMMIT | OPTION_AUTO_IS_NULL) ;
thd->options = (((opt_log_slave_updates) ? OPTION_BIN_LOG:0) | OPTION_AUTO_IS_NULL) ;
thd->system_thread = 1;
thd->client_capabilities = CLIENT_LOCAL_FILES;
slave_real_id=thd->real_id=pthread_self();

View File

@ -79,8 +79,8 @@ THD::THD():user_time(0),fatal_error(0),last_insert_id_used(0),
bootstrap(0),in_lock_tables(0),
global_read_lock(0)
{
host=user=db=query=ip=0;
proc_info="login";
host=user=db=query=ip=0;
locked=killed=count_cuted_fields=some_tables_deleted=no_errors=password=
query_start_used=0;
query_length=col_access=0;
@ -117,13 +117,12 @@ THD::THD():user_time(0),fatal_error(0),last_insert_id_used(0),
ull=0;
system_thread=0;
bzero((char*) &mem_root,sizeof(mem_root));
#if defined(HAVE_BERKELEY_DB) || defined(HAVE_INNOBASE_DB) || defined(HAVE_GEMENI_DB)
#ifdef USING_TRANSACTIONS
bzero((char*) &transaction,sizeof(transaction));
if (open_cached_file(&transaction.trans_log,
mysql_tmpdir,LOG_PREFIX,0,MYF(MY_WME)))
killed=1;
transaction.bdb_lock_count=0;
#endif
transaction.bdb_tid=0;
#ifdef __WIN__
real_id = 0 ;
@ -146,7 +145,7 @@ THD::~THD()
close_thread_tables(this);
}
close_temporary_tables(this);
#if defined(HAVE_BERKELEY_DB) || defined(HAVE_INNOBASE_DB) || defined(HAVE_GEMENI_DB)
#ifdef USING_TRANSACTIONS
close_cached_file(&transaction.trans_log);
#endif
if (global_read_lock)

View File

@ -244,7 +244,8 @@ public:
delayed_insert *di;
struct st_transactions {
IO_CACHE trans_log;
void *bdb_tid;
THD_TRANS all; /* Trans since BEGIN WORK */
THD_TRANS stmt; /* Trans for current statement */
uint bdb_lock_count;
} transaction;
Item *free_list;
@ -293,7 +294,12 @@ public:
}
return last_insert_id;
}
inline bool active_transaction() { return transaction.bdb_tid != 0; }
inline bool active_transaction()
{
return (transaction.all.bdb_tid != 0 ||
transaction.all.innobase_tid != 0 ||
transaction.all.gemeni_tid != 0);
}
inline gptr alloc(unsigned int size) { return alloc_root(&mem_root,size); }
inline gptr calloc(unsigned int size)
{

View File

@ -34,7 +34,7 @@ int generate_table(THD *thd, TABLE_LIST *table_list,
thd->proc_info="generate_table";
if(global_read_lock)
if (global_read_lock)
{
if(thd->global_read_lock)
{
@ -141,8 +141,8 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
use_generate_table= (!using_limit && !conds &&
!(specialflag &
(SPECIAL_NO_NEW_FUNC | SPECIAL_SAFE_MODE)) &&
(thd->options & OPTION_AUTO_COMMIT) &&
!(thd->options & OPTION_BEGIN));
!(thd->options &
(OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)));
if (use_generate_table && ! thd->open_tables)
{
error=generate_table(thd,table_list,(TABLE*) 0);
@ -175,7 +175,7 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
/* If running in safe sql mode, don't allow updates without keys */
if (!table->quick_keys)
{
thd->lex.options|=OPTION_NO_INDEX_USED;
thd->lex.options|=QUERY_NO_INDEX_USED;
if ((thd->options & OPTION_SAFE_UPDATES) && limit == HA_POS_ERROR)
{
delete select;
@ -223,6 +223,8 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
Query_log_event qinfo(thd, thd->query);
mysql_bin_log.write(&qinfo);
}
if (!table->file->has_transactions())
thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
}
if (ha_autocommit_or_rollback(thd,error >= 0))
error=1;

View File

@ -222,7 +222,7 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields,
break;
}
}
if (lock_type == TL_WRITE_DELAYED)
if (lock_type == TL_WRITE_DELAYED && ! table->file->has_transactions())
{
error=write_delayed(thd,table,duplic,query, thd->query_length, log_on);
query=0;
@ -262,6 +262,8 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields,
Query_log_event qinfo(thd, thd->query);
mysql_bin_log.write(&qinfo);
}
if (!table->file->has_transactions())
thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
}
error=ha_autocommit_or_rollback(thd,error);
if (thd->lock)
@ -282,7 +284,8 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields,
if (values_list.elements == 1 && (!(thd->options & OPTION_WARNINGS) ||
!thd->cuted_fields))
send_ok(&thd->net,info.copied+info.deleted,id);
else {
else
{
char buff[160];
if (duplic == DUP_IGNORE)
sprintf(buff,ER(ER_INSERT_INFO),info.records,info.records-info.copied,

View File

@@ -241,6 +241,8 @@ int mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
send_ok(&thd->net,info.copied+info.deleted,0L,name);
mysql_update_log.write(thd,thd->query,thd->query_length);
if (!table->file->has_transactions())
thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
if (!read_file_from_client)
{
ex->skip_lines = save_skip_lines;

View File

@@ -73,13 +73,12 @@ static void init_signals(void)
static inline bool end_active_trans(THD *thd)
{
if (!(thd->options & OPTION_AUTO_COMMIT) ||
(thd->options & OPTION_BEGIN))
if (thd->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN))
{
thd->options&= ~(ulong) (OPTION_BEGIN | OPTION_STATUS_NO_TRANS_UPDATE);
thd->server_status&= ~SERVER_STATUS_IN_TRANS;
if (ha_commit(thd))
return 1;
thd->options&= ~(ulong) (OPTION_BEGIN);
thd->server_status&= ~SERVER_STATUS_IN_TRANS;
}
return 0;
}
@@ -576,6 +575,7 @@ bool do_command(THD *thd)
thread_running++;
VOID(pthread_mutex_unlock(&LOCK_thread_count));
thd->set_time();
thd->lex.options=0; // We store status here
switch(command) {
case COM_INIT_DB:
if (!mysql_change_db(thd,packet+1))
@@ -827,7 +827,7 @@ bool do_command(THD *thd)
{
if ((ulong) (thd->start_time - thd->time_after_lock) > long_query_time ||
((thd->lex.options &
(OPTION_NO_INDEX_USED | OPTION_NO_GOOD_INDEX_USED)) &&
(QUERY_NO_INDEX_USED | QUERY_NO_GOOD_INDEX_USED)) &&
(specialflag & SPECIAL_LONG_LOG_FORMAT)))
{
long_query_count++;
@@ -1560,12 +1560,12 @@ mysql_execute_command(void)
thd->options,(long) thd->default_select_limit));
/* Check if auto_commit mode changed */
if ((org_options ^ lex->options) & OPTION_AUTO_COMMIT)
if ((org_options ^ lex->options) & OPTION_NOT_AUTO_COMMIT)
{
if (!(org_options & OPTION_AUTO_COMMIT))
if ((org_options & OPTION_NOT_AUTO_COMMIT))
{
/* We changed to auto_commit mode */
thd->options&= ~(ulong) (OPTION_BEGIN);
thd->options&= ~(ulong) (OPTION_BEGIN | OPTION_STATUS_NO_TRANS_UPDATE);
thd->server_status|= SERVER_STATUS_AUTOCOMMIT;
if (ha_commit(thd))
{
@@ -1750,7 +1750,7 @@ mysql_execute_command(void)
even if there is a problem with the OPTION_AUTO_COMMIT flag
(Which of course should never happen...)
*/
thd->options&= ~(ulong) (OPTION_BEGIN);
thd->options&= ~(ulong) (OPTION_BEGIN | OPTION_STATUS_NO_TRANS_UPDATE);
thd->server_status&= ~SERVER_STATUS_IN_TRANS;
if (!ha_commit(thd))
send_ok(&thd->net);
@@ -1758,12 +1758,17 @@ mysql_execute_command(void)
res= -1;
break;
case SQLCOM_ROLLBACK:
thd->options&= ~(ulong) (OPTION_BEGIN);
thd->server_status&= ~SERVER_STATUS_IN_TRANS;
if (!ha_rollback(thd))
send_ok(&thd->net);
{
if (thd->options & OPTION_STATUS_NO_TRANS_UPDATE)
send_warning(&thd->net,ER_WARNING_NOT_COMPLETE_ROLLBACK,0);
else
send_ok(&thd->net);
}
else
res= -1;
thd->options&= ~(ulong) (OPTION_BEGIN | OPTION_STATUS_NO_TRANS_UPDATE);
break;
default: /* Impossible */
send_ok(&thd->net);

View File

@@ -2420,7 +2420,7 @@ make_join_readinfo(JOIN *join,uint options)
/* These init changes read_record */
if (tab->use_quick == 2)
{
join->thd->lex.options|=OPTION_NO_GOOD_INDEX_USED;
join->thd->lex.options|=QUERY_NO_GOOD_INDEX_USED;
tab->read_first_record= join_init_quick_read_record;
statistic_increment(select_range_check_count, &LOCK_status);
}
@@ -2435,7 +2435,7 @@ make_join_readinfo(JOIN *join,uint options)
}
else
{
join->thd->lex.options|=OPTION_NO_INDEX_USED;
join->thd->lex.options|=QUERY_NO_INDEX_USED;
statistic_increment(select_scan_count, &LOCK_status);
}
}
@@ -2447,7 +2447,7 @@ make_join_readinfo(JOIN *join,uint options)
}
else
{
join->thd->lex.options|=OPTION_NO_INDEX_USED;
join->thd->lex.options|=QUERY_NO_INDEX_USED;
statistic_increment(select_full_join_count, &LOCK_status);
}
}

View File

@@ -1543,7 +1543,11 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
goto err;
}
}
if ((error = ha_commit(thd)))
/* The ALTER TABLE is always in its own transaction */
error = ha_commit_stmt(thd);
if (ha_commit(thd))
error=1;
if (error)
{
VOID(pthread_cond_broadcast(&COND_refresh));
VOID(pthread_mutex_unlock(&LOCK_open));
@@ -1666,7 +1670,16 @@ copy_data_between_tables(TABLE *from,TABLE *to,
}
if (to->file->activate_all_index(thd))
error=1;
if (ha_commit(thd) || to->file->external_lock(thd,F_UNLCK))
/*
Ensure that the new table is saved properly to disk so that we
can do a rename
*/
if (ha_commit_stmt(thd))
error=1;
if (ha_commit(thd))
error=1;
if (to->file->external_lock(thd,F_UNLCK))
error=1;
err:
free_io_cache(from);

View File

@@ -102,7 +102,7 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
/* If running in safe sql mode, don't allow updates without keys */
if (!table->quick_keys)
{
thd->lex.options|=OPTION_NO_INDEX_USED;
thd->lex.options|=QUERY_NO_INDEX_USED;
if ((thd->options & OPTION_SAFE_UPDATES) && limit == HA_POS_ERROR)
{
delete select;
@@ -245,6 +245,8 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
Query_log_event qinfo(thd, thd->query);
mysql_bin_log.write(&qinfo);
}
if (!table->file->has_transactions())
thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
}
if (ha_autocommit_or_rollback(thd, error >= 0))
error=1;

View File

@@ -1943,7 +1943,7 @@ opt_into:
*/
drop:
DROP TABLE_SYM if_exists table_list
DROP TABLE_SYM if_exists table_list opt_restrict
{
Lex->sql_command = SQLCOM_DROP_TABLE;
Lex->drop_if_exists = $3;
@@ -2573,6 +2573,13 @@ option_value:
else
Lex->options|= $1;
}
| AUTOCOMMIT equal NUM
{
if (atoi($3.str) != 0) /* Test NOT AUTOCOMMIT */
Lex->options&= ~(OPTION_NOT_AUTO_COMMIT);
else
Lex->options|= OPTION_NOT_AUTO_COMMIT;
}
| SQL_SELECT_LIMIT equal ULONG_NUM
{
Lex->select_limit= $3;
@@ -2656,7 +2663,6 @@ text_or_password:
set_option:
SQL_BIG_TABLES { $$= OPTION_BIG_TABLES; }
| AUTOCOMMIT { $$= OPTION_AUTO_COMMIT; }
| SQL_BIG_SELECTS { $$= OPTION_BIG_SELECTS; }
| SQL_LOG_OFF { $$= OPTION_LOG_OFF; }
| SQL_LOG_UPDATE