merge of conflicts

Commit 44bc3f9b0e
@@ -28,6 +28,8 @@ all: $(targets) txt_files
 txt_files: ../INSTALL-SOURCE ../COPYING ../COPYING.LIB \
   ../MIRRORS INSTALL-BINARY
 
+CLEAN_FILES: manual.ps
+
 # The PostScript version is so big that is not included in the
 # standard distribution. It is available for download from the home page.
 paper: manual_a4.ps manual_letter.ps
Docs/manual.texi (149 lines changed)
@@ -195,8 +195,8 @@ Installing a MySQL binary distribution
 
 System-specific issues
 
-* Binary notes-Linux:: Linux notes
-* Binary notes-HP-UX:: HP-UX notes
+* Binary notes-Linux:: Linux notes for binary distribution
+* Binary notes-HP-UX:: HP-UX notes for binary distribution
 
 Installing a MySQL source distribution
 
@@ -259,6 +259,7 @@ Windows notes
 * Windows and SSH:: Connecting to a remote @strong{MySQL} from Windows with SSH
 * Windows symbolic links:: Splitting data across different disks under Win32
 * Windows compiling:: Compiling MySQL clients on Windows.
+* Windows and BDB tables.::
 * Windows vs Unix:: @strong{MySQL}-Windows compared to Unix @strong{MySQL}
 
 Post-installation setup and testing
@@ -4568,8 +4569,8 @@ files.
 @subsection System-specific issues
 
 @menu
-* Binary notes-Linux:: Linux notes
-* Binary notes-HP-UX:: HP-UX notes
+* Binary notes-Linux:: Linux notes for binary distribution
+* Binary notes-HP-UX:: HP-UX notes for binary distribution
 @end menu
 
 The following sections indicate some of the issues that have been observed
@@ -4577,7 +4578,7 @@ on particular systems when installing @strong{MySQL} from a binary
 distribution.
 
 @node Binary notes-Linux, Binary notes-HP-UX, Binary install system issues, Binary install system issues
-@subsubsection Linux notes
+@subsubsection Linux notes for binary distribution
 
 @strong{MySQL} needs at least Linux 2.0.
 
@@ -4653,7 +4654,7 @@ and clients on the same machine. We hope that the @code{Linux 2.4}
 kernel will fix this problem in the future.
 
 @node Binary notes-HP-UX, , Binary notes-Linux, Binary install system issues
-@subsubsection HP-UX notes
+@subsubsection HP-UX notes for binary distribution
 
 Some of the binary distributions of @strong{MySQL} for HP-UX is
 distributed as an HP depot file and as a tar file. To use the depot
@@ -5753,8 +5754,6 @@ shell> CC=gcc CFLAGS="-O6" \
 If you have the Sun Workshop 4.2 compiler, you can run @code{configure} like
 this:
 
-CC=cc CFLAGS="-xstrconst -Xa -xO4 -native -mt" CXX=CC CXXFLAGS="-xO4 -native -noex -mt" ./configure --prefix=/usr/local/mysql
-
 @example
 shell> CC=cc CFLAGS="-Xa -fast -xO4 -native -xstrconst -mt" \
        CXX=CC CXXFLAGS="-noex -XO4 -mt" \
@@ -7203,6 +7202,7 @@ is also described in the @file{README} file that comes with the
 * Windows and SSH:: Connecting to a remote @strong{MySQL} from Windows with SSH
 * Windows symbolic links:: Splitting data across different disks under Win32
 * Windows compiling:: Compiling MySQL clients on Windows.
+* Windows and BDB tables.::
 * Windows vs Unix:: @strong{MySQL}-Windows compared to Unix @strong{MySQL}
 @end menu
 
@@ -7511,7 +7511,7 @@ should create the file @file{C:\mysql\data\foo.sym} that should contains the
 text @code{D:\data\foo}. After this, all tables created in the database
 @code{foo} will be created in @file{D:\data\foo}.
 
-@node Windows compiling, Windows vs Unix, Windows symbolic links, Windows
+@node Windows compiling, Windows and BDB tables., Windows symbolic links, Windows
 @subsection Compiling MySQL clients on Windows.
 
 In your source files, you should include @file{windows.h} before you include
@@ -7531,7 +7531,17 @@ with the static @file{mysqlclient.lib} library.
 Note that as the mysqlclient libraries are compiled as threaded libraries,
 you should also compile your code to be multi-threaded!
 
-@node Windows vs Unix, , Windows compiling, Windows
+@node Windows and BDB tables., Windows vs Unix, Windows compiling, Windows
+@subsection Windows and BDB tables.
+
+We are working on removing the requirement that one must have a primary
+key in a BDB table; As soon as this is fixed we will throughly test the
+BDB interface by running the @strong{MySQL} benchmark + our internal
+test suite on it. When the above is done we will start release binary
+distributions (for windows and Unix) of @strong{MySQL} that will include
+support for BDB tables.
+
+@node Windows vs Unix, , Windows and BDB tables., Windows
 @subsection MySQL-Windows compared to Unix MySQL
 
 @strong{MySQL}-Windows has by now proven itself to be very stable. This version
@@ -13646,7 +13656,7 @@ column, you cannot index the entire thing.
 In @strong{MySQL} 3.23.23 or later, you can also create special
 @strong{FULLTEXT} indexes. They are used for full-text search. Only the
 @code{MyISAM} table type supports @code{FULLTEXT} indexes. They can be created
-only from @code{VARCHAR}, @code{BLOB}, and @code{TEXT} columns.
+only from @code{VARCHAR} and @code{TEXT} columns.
 Indexing always happens over the entire column, partial indexing is not
 supported. See @ref{MySQL full-text search} for details of operation.
 
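As a quick sketch of the restriction documented in the hunk above (the table
and column names here are invented, not part of the commit), a @code{FULLTEXT}
index may now only cover @code{VARCHAR} and @code{TEXT} columns:

  CREATE TABLE articles
  (
    id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(200),
    body  TEXT,
    FULLTEXT (title, body)
  );

According to the changed text, a @code{BLOB} column can no longer appear in the
@code{FULLTEXT} key list.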
@@ -16445,6 +16455,7 @@ or PASSWORD = "string"
 or DELAY_KEY_WRITE = @{0 | 1@}
 or ROW_FORMAT= @{ default | dynamic | static | compressed @}
 or RAID_TYPE= @{1 | STRIPED | RAID0 @} RAID_CHUNKS=# RAID_CHUNKSIZE=#;
+or UNION = (table_name,[table_name...])
 
 select_statement:
 [IGNORE | REPLACE] SELECT ... (Some legal select statement)
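To sketch how the new @code{UNION} table option reads in practice (the table
names below are invented for illustration; the pattern follows the @code{total}
example later in this manual):

  CREATE TABLE log_a (a INT NOT NULL, message CHAR(20), KEY(a));
  CREATE TABLE log_b (a INT NOT NULL, message CHAR(20), KEY(a));
  CREATE TABLE all_logs (a INT NOT NULL, message CHAR(20), KEY(a))
    TYPE=MERGE UNION=(log_a,log_b);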
@@ -16633,7 +16644,7 @@ When you use @code{ORDER BY} or @code{GROUP BY} with a @code{TEXT} or
 In @strong{MySQL} 3.23.23 or later, you can also create special
 @strong{FULLTEXT} indexes. They are used for full-text search. Only the
 @code{MyISAM} table type supports @code{FULLTEXT} indexes. They can be created
-only from @code{VARCHAR}, @code{BLOB}, and @code{TEXT} columns.
+only from @code{VARCHAR} and @code{TEXT} columns.
 Indexing always happens over the entire column, partial indexing is not
 supported. See @ref{MySQL full-text search} for details of operation.
 
@@ -16742,8 +16753,14 @@ If you specify @code{RAID_TYPE=STRIPED} for a @code{MyISAM} table,
 to the data file, the @code{RAID} handler will map the first
 @code{RAID_CHUNKSIZE} *1024 bytes to the first file, the next
 @code{RAID_CHUNKSIZE} *1024 bytes to the next file and so on.
-@end itemize
 
+@code{UNION} is used when you want to use a collection of identical
+tables as one. This only works with MERGE tables. @xref{MERGE}.
+
+For the moment you need to have @code{SELECT}, @code{UPDATE} and
+@code{DELETE} privileges on the tables you map to a @code{MERGE} table.
+All mapped tables must be in the same database as the @code{MERGE} table.
+@end itemize
 
 @node Silent column changes, , CREATE TABLE, CREATE TABLE
 @subsection Silent column specification changes
@@ -20053,7 +20070,7 @@ table type.
 For more information about how @strong{MySQL} uses indexes, see
 @ref{MySQL indexes, , @strong{MySQL} indexes}.
 
-@code{FULLTEXT} indexes can index only @code{VARCHAR}, @code{BLOB}, and
+@code{FULLTEXT} indexes can index only @code{VARCHAR} and
 @code{TEXT} columns, and only in @code{MyISAM} tables. @code{FULLTEXT} indexes
 are available in @strong{MySQL} 3.23.23 and later.
 @ref{MySQL full-text search}.
@@ -20633,9 +20650,10 @@ missing is a way from the SQL prompt to say which tables are part of the
 @code{MERGE} table.
 
 A @code{MERGE} table is a collection of identical @code{MyISAM} tables
-that can be used as one. You can only @code{SELECT} from the collection
-of tables. If you @code{DROP} the @code{MERGE} table, you are only
-dropping the @code{MERGE} specification.
+that can be used as one. You can only @code{SELECT}, @code{DELETE} and
+@code{UPDATE} from the collection of tables. If you @code{DROP} the
+@code{MERGE} table, you are only dropping the @code{MERGE}
+specification.
 
 With identical tables we mean that all tables are created with identical
 column information. Some of the tables can be compressed with
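A small illustrative session against a MERGE table (the @code{total} table is
the example used later in this chapter; the row values are made up):

  SELECT * FROM total WHERE a > 1;
  UPDATE total SET message="updated" WHERE a = 2;
  DELETE FROM total WHERE a = 3;

@code{INSERT} is the one statement that is not available, because
@strong{MySQL} cannot know which of the mapped tables the new row should go
into, as the list of disadvantages later in this chapter notes.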
@@ -20646,7 +20664,10 @@ definition file and a @code{.MRG} table list file. The @code{.MRG} just
 contains a list of the index files (@code{.MYI} files) that should
 be used as one.
 
-@code{MERGE} tables helps you solve the following problems:
+For the moment you need to have @code{SELECT}, @code{UPDATE} and
+@code{DELETE} privileges on the tables you map to a @code{MERGE} table.
+
+@code{MERGE} tables can help you solve the following problems:
 
 @itemize @bullet
 @item
@@ -20671,13 +20692,22 @@ are mapped to a @code{MERGE} file than trying to repair a real big file.
 Instant mapping of many files as one; A @code{MERGE} table uses the
 index of the individual tables; It doesn't need an index of its one.
 This makes @code{MERGE} table collections VERY fast to make or remap.
+@item
+If you have a set of tables which you join to a big tables on demand or
+batch, you should instead create a @code{MERGE} table on them on demand.
+This is much faster and will save a lot of disk space.
+@item
+Go around the file size limit for the operating system.
 @end itemize
 
 The disadvantages with @code{MERGE} tables are:
 
 @itemize @bullet
 @item
-@code{MERGE} tables are read-only.
+You can't use @code{INSERT} on @code{MERGE} tables, as @strong{MySQL} can't know
+in which of the tables we should insert the row.
+@item
+You can only use identical @code{MyISAM} tables for a @code{MERGE} table.
 @item
 @code{MERGE} tables uses more file descriptors: If you are using a
 @strong{MERGE} that maps over 10 tables and 10 users are using this, you
@@ -20703,18 +20733,15 @@ CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, message CHAR(20));
 CREATE TABLE t2 (a INT AUTO_INCREMENT PRIMARY KEY, message CHAR(20));
 INSERT INTO t1 (message) VALUES ("Testing"),("table"),("t1");
 INSERT INTO t2 (message) VALUES ("Testing"),("table"),("t2");
-CREATE TABLE total (a INT NOT NULL, message CHAR(20), KEY(a)) TYPE=MERGE;
+CREATE TABLE total (a INT NOT NULL, message CHAR(20), KEY(a)) TYPE=MERGE UNION=(t1,t2);
 @end example
 
 Note that we didn't create an @code{UNIQUE} or @code{PRIMARY KEY} in the
 @code{total} table as the key isn't going to be unique in the @code{total}
 table.
 
-(We plan to in the future add the information in the @code{MERGE} handler
-that unique keys are not necessarily unique in the @code{MERGE} table.)
-
-Now you have to use tool (editor, unix command...) to insert the file
-names into the 'total' table:
+Note that you can also manipulate the @code{.MRG} file directly from
+the outside of the @code{MySQL} server:
 
 @example
 shell> cd /mysql-data-directory/current-database
@@ -20737,13 +20764,11 @@ mysql> select * from total;
 +---+---------+
 @end example
 
-To remap a @code{MERGE} table you must either @code{DROP} it and recreate it
-or change the @code{.MRG} file and issue a @code{FLUSH TABLE} on the
-@code{MERGE} table to force the handler to read the new definition file.
-
-You can also put full paths to the index files in the @code{.MRG} file; If
-you don't do this, the @code{MERGE} handler assumes that the index files
-are in the same directory as the @code{.MRG} file.
+To remap a @code{MERGE} table you must either @code{DROP} it and
+recreate it, use @code{ALTER TABLE} with a new @code{UNION}
+specification or change the @code{.MRG} file and issue a @code{FLUSH
+TABLE} on the @code{MERGE} table and all underlying tables to force the
+handler to read the new definition file.
 
 @node ISAM, HEAP, MERGE, Table types
 @section ISAM tables
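To make the remapping options above concrete (a sketch only: whether this exact
@code{ALTER TABLE} form is accepted in this release is taken from the paragraph
above, and @code{t3} is an invented table name):

  ALTER TABLE total UNION=(t1,t2,t3);

or, after editing @file{total.MRG} by hand outside the server:

  FLUSH TABLES;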
@@ -28799,6 +28824,48 @@ string to a time. This would be great if the source was a text file, but
 is plain stupid when the source is an ODBC connection that reports
 exact types for each column.
 @end itemize
+@item Word
+
+To retrieve data from @strong{MySQL}to Word/Excel documents, you need to
+use the @code{MyODBC} driver and the Add-in Microsoft Query help.
+
+For example, create a db with a table with 2 columns text.
+
+@itemize @bullet
+@item
+Insert rows using the mysql client command line tool.
+@item
+Create a DSN file using the MyODBC driver e.g. my for the db above.
+@item
+Open the Word application.
+@item
+Create a blank new documentation.
+@item
+Using the tool bar called Database, press the button insert database.
+@item
+Press the button Get Data.
+@item
+At the right hand of the screen Get Data, press the button Ms Query.
+@item
+In the Ms Query create a New Data Source using the DSN file my.
+@item
+Select the new query.
+@item
+Select the columns that you want.
+@item
+Make a filter if you want.
+@item
+Make a Sort if you want.
+@item
+Select Return Data to Microsoft Word.
+@item
+Click Finish.
+@item
+Click Insert data and select the records.
+@item
+Click OK and you see the rows in your Word document.
+@end itemize
+
 @item odbcadmin
 Test program for ODBC.
 @item Delphi
@@ -34709,9 +34776,9 @@ DELAYED} threads.
 Since version 3.23.23, @strong{MySQL} has support for full-text indexing
 and searching. Full-text index in @strong{MySQL} is an
 index of type @code{FULLTEXT}. @code{FULLTEXT} indexes can be created from
-@code{VARCHAR}, @code{TEXT}, and @code{BLOB} columns at
-@code{CREATE TABLE} time or added later with @code{ALTER TABLE} or
-@code{CREATE INDEX}. Full-text search is performed with the @code{MATCH}
+@code{VARCHAR} and @code{TEXT} columns at @code{CREATE TABLE} time or added
+later with @code{ALTER TABLE} or @code{CREATE INDEX}. Full-text search is
+performed with the @code{MATCH}
 function.
 
 @example
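For instance (illustrative only: the @code{articles} table and the search word
are invented, and the @code{MATCH ... AGAINST} call is written in the form the
full-text section of this manual documents):

  ALTER TABLE articles ADD FULLTEXT (title, body);
  SELECT * FROM articles WHERE MATCH (title, body) AGAINST ('tutorial');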
@@ -36224,6 +36291,11 @@ though, so 3.23 is not released as a stable version yet.
 @appendixsubsec Changes in release 3.23.25
 @itemize @bullet
 @item
+@code{HEAP} tables didn't use keys properly. (Bug from 3.23.23)
+@item
+Added better support for @code{MERGE} tables (keys, mapping, creation,
+documentation...). @xref{MERGE}.
+@item
 Fixed bug in mysqldump from 3.23 which caused that some @code{CHAR} columns
 wheren't quoted.
 @item
@@ -40304,6 +40376,8 @@ Fixed @code{DISTINCT} with calculated columns.
 
 @itemize @bullet
 @item
+For the moment @code{MATCH} only works with @code{SELECT} statements.
+@item
 You cannot build in another directory when using
 MIT-pthreads. Because this requires changes to MIT-pthreads, we are not
 likely to fix this.
@@ -40391,6 +40465,9 @@ the error value 'empty string', with numeric value 0.
 @item
 If you execute a @code{PROCEDURE} on a query with returns an empty set then
 in some cases the @code{PROCEDURE} will not transform the columns.
+@item
+Creation of a table of type @code{MERGE} doesn't check if the underlaying
+tables are of compatible types.
 @end itemize
 
 The following is known bugs in earlier versions of @strong{MySQL}:
@@ -40464,6 +40541,8 @@ Allow users to change startup options.
 @item
 Subqueries. @code{select id from t where grp in (select grp from g where u > 100)}
 @item
+Add range checking to @code{MERGE} tables.
+@item
 Port of @strong{MySQL} to BeOS.
 @item
 Add a temporary key buffer cache during @code{insert/delete/update} so that we
@@ -27,10 +27,14 @@ SUBDIRS = include @docs_dirs@ @readline_dir@ \
   @bench_dirs@ support-files
 
 # Relink after clean
-CLEANFILES = linked_client_sources linked_server_sources linked_libmysql_sources linked_libmysql_r_sources
+CLEANFILES = linked_client_sources linked_server_sources linked_libmysql_sources linked_libmysql_r_sources linked_include_sources
 
 # This is just so that the linking is done early.
-config.h: linked_client_sources linked_server_sources
+config.h: linked_include_sources linked_client_sources linked_server_sources
+
+linked_include_sources:
+  cd include; $(MAKE) link_sources
+  echo timestamp > linked_include_sources
 
 linked_client_sources: @linked_client_targets@
   echo timestamp > linked_client_sources
@@ -17,7 +17,7 @@
 
 /* Return error-text for system error messages and nisam messages */
 
-#define PERROR_VERSION "2.2"
+#define PERROR_VERSION "2.3"
 
 #include <global.h>
 #include <my_sys.h>
@@ -59,9 +59,11 @@ static HA_ERRORS ha_errlist[]=
   { 136,"No more room in index file" },
   { 137,"No more records (read after end of file)" },
   { 138,"Unsupported extension used for table" },
-  { 139,"Too big row (>= 24 M)"},
+  { 139,"Too big row (>= 16 M)"},
   { 140,"Wrong create options"},
-  { 141,"Dupplicate unique on write or update"},
+  { 141,"Duplicate unique on write or update"},
+  { 142,"Unknown character set used"},
+  { 143,"Conflicting table definition between MERGE and mapped table"},
   { 0,NullS },
 };
 
@@ -15,11 +15,11 @@
 # Software Foundation, Inc., 59 Temple Place - Suite 330, Boston,
 # MA 02111-1307, USA
 
-BUILT_SOURCES = my_config.h mysql_version.h m_ctype.h
+BUILT_SOURCES = mysql_version.h m_ctype.h
 pkginclude_HEADERS = dbug.h m_string.h my_sys.h mysql.h mysql_com.h \
   mysqld_error.h my_list.h \
   my_pthread.h my_no_pthread.h raid.h errmsg.h \
-  my_config.h my_global.h my_net.h \
+  my_global.h my_net.h \
   sslopt-case.h sslopt-longopts.h sslopt-usage.h \
   sslopt-vars.h $(BUILT_SOURCES)
 noinst_HEADERS = global.h config-win.h \
@@ -30,17 +30,19 @@ noinst_HEADERS = global.h config-win.h \
   my_tree.h hash.h thr_alarm.h thr_lock.h \
   getopt.h t_ctype.h violite.h \
   mysql_version.h.in
+EXTRA_DIST= my_config.h
 
 # mysql_version.h are generated
-SUPERCLEANFILES = mysql_version.h
+SUPERCLEANFILES = mysql_version.h my_global.h
 
 # Some include files that may be moved and patched by configure
 DISTCLEANFILES = sched.h
+CLEANFILES = my_config.h
 
-all-local: my_config.h my_global.h
+all-local: my_global.h
 
 # Since we include my_config.h it better exist from the beginning
-my_config.h: ../config.h
+link_sources:
   $(CP) ../config.h my_config.h
 
 # This should be changed in the source and removed.
@@ -83,7 +83,8 @@ extern int myrg_rsame(MYRG_INFO *file,byte *record,int inx);
 extern int myrg_update(MYRG_INFO *file,const byte *old,byte *new_rec);
 extern int myrg_status(MYRG_INFO *file,MYMERGE_INFO *x,int flag);
 extern int myrg_lock_database(MYRG_INFO *file,int lock_type);
-extern int myrg_create(const char *name,const char **table_names);
+extern int myrg_create(const char *name,const char **table_names,
+                       my_bool fix_names);
 extern int myrg_extra(MYRG_INFO *file,enum ha_extra_function function);
 extern ha_rows myrg_records_in_range(MYRG_INFO *info,int inx,
                                      const byte *start_key,uint start_key_len,
@@ -53,6 +53,7 @@ void delete_queue(QUEUE *queue);
 void queue_insert(QUEUE *queue,byte *element);
 byte *queue_remove(QUEUE *queue,uint idx);
 void _downheap(QUEUE *queue,uint idx);
+#define is_queue_inited(queue) ((queue)->root != 0)
 
 #ifdef __cplusplus
 }
@@ -246,7 +246,7 @@ register char ***argv;
       /* Fall through */
     case 'I':
     case '?':
-      printf("%s Ver 3.1 for %s at %s\n",my_progname,SYSTEM_TYPE,
+      printf("%s Ver 3.2 for %s at %s\n",my_progname,SYSTEM_TYPE,
             MACHINE_TYPE);
       puts("TCX Datakonsult AB, by Monty, for your professional use\n");
       if (version)
@@ -325,7 +325,7 @@ static int examine_log(my_string file_name, char **table_names)
 
   init_io_cache(&cache,file,0,READ_CACHE,start_offset,0,MYF(0));
   bzero((gptr) com_count,sizeof(com_count));
-  init_tree(&tree,0,sizeof(file_info),(qsort_cmp) file_info_compare,0,
+  init_tree(&tree,0,sizeof(file_info),(qsort_cmp) file_info_compare,1,
             (void(*)(void*)) file_info_free);
   VOID(init_key_cache(KEY_CACHE_SIZE,(uint) (10*4*(IO_SIZE+MALLOC_OVERHEAD))));
 
@@ -75,7 +75,7 @@ clean-local:
   rm -f `echo $(mystringsobjects) | sed "s;\.lo;.c;g"` \
         `echo $(dbugobjects) | sed "s;\.lo;.c;g"` \
         `echo $(mysysobjects) | sed "s;\.lo;.c;g"` \
-        $(mystringsextra) ctype_extra_sources.c \
+        $(mystringsextra) $(mysysheaders) ctype_extra_sources.c \
         ../linked_client_sources
 
 ctype_extra_sources.c: conf_to_src
@@ -69,7 +69,8 @@ int mi_log(int activate_log)
   /* Logging of records and commands on logfile */
   /* All logs starts with command(1) dfile(2) process(4) result(2) */
 
-void _myisam_log(enum myisam_log_commands command, MI_INFO *info, const byte *buffert, uint length)
+void _myisam_log(enum myisam_log_commands command, MI_INFO *info,
+                 const byte *buffert, uint length)
 {
   char buff[11];
   int error,old_errno;
@@ -524,7 +524,11 @@ MI_INFO *mi_open(const char *name, int mode, uint handle_locking)
   myisam_open_list=list_add(myisam_open_list,&m_info->open_list);
 
   pthread_mutex_unlock(&THR_LOCK_myisam);
-  myisam_log(MI_LOG_OPEN,m_info,share->filename,(uint) strlen(share->filename));
+  if (myisam_log_file >= 0)
+  {
+    intern_filename(name_buff,share->filename);
+    _myisam_log(MI_LOG_OPEN,m_info,name_buff,(uint) strlen(name_buff));
+  }
   DBUG_RETURN(m_info);
 
 err:
@@ -70,7 +70,7 @@ static void printf_log(const char *str,...);
 static bool cmp_filename(struct file_info *file_info,my_string name);
 
 static uint verbose=0,update=0,test_info=0,max_files=0,re_open_count=0,
-            recover=0,prefix_remove=0;
+            recover=0,prefix_remove=0,opt_processes=0;
 static my_string log_filename=0,filepath=0,write_filename=0,record_pos_file=0;
 static ulong com_count[10][3],number_of_commands=(ulong) ~0L,
              isamlog_process;
@@ -199,6 +199,9 @@ static void get_options(register int *argc, register char ***argv)
         update=1;
         recover++;
         break;
+      case 'P':
+        opt_processes=1;
+        break;
       case 'R':
         if (! *++pos)
         {
@@ -243,7 +246,7 @@ static void get_options(register int *argc, register char ***argv)
       /* Fall through */
     case 'I':
     case '?':
-      printf("%s Ver 1.1 for %s at %s\n",my_progname,SYSTEM_TYPE,
+      printf("%s Ver 1.2 for %s at %s\n",my_progname,SYSTEM_TYPE,
             MACHINE_TYPE);
       puts("By Monty, for your professional use\n");
       if (version)
@@ -258,6 +261,7 @@ static void get_options(register int *argc, register char ***argv)
   puts(" -o \"offset\" -p # \"remove # components from path\"");
   puts(" -r \"recover\" -R \"file recordposition\"");
   puts(" -u \"update\" -v \"verbose\" -w \"write file\"");
+  puts(" -P \"processes\"");
   puts("\nOne can give a second and a third '-v' for more verbose.");
   puts("Normaly one does a update (-u).");
   puts("If a recover is done all writes and all possibly updates and deletes is done\nand errors are only counted.");
@@ -322,7 +326,7 @@ static int examine_log(my_string file_name, char **table_names)
 
   init_io_cache(&cache,file,0,READ_CACHE,start_offset,0,MYF(0));
   bzero((gptr) com_count,sizeof(com_count));
-  init_tree(&tree,0,sizeof(file_info),(qsort_cmp) file_info_compare,0,
+  init_tree(&tree,0,sizeof(file_info),(qsort_cmp) file_info_compare,1,
             (void(*)(void*)) file_info_free);
   VOID(init_key_cache(KEY_CACHE_SIZE,(uint) (10*4*(IO_SIZE+MALLOC_OVERHEAD))));
 
@@ -333,6 +337,8 @@ static int examine_log(my_string file_name, char **table_names)
     isamlog_filepos=my_b_tell(&cache)-9L;
     file_info.filenr= mi_uint2korr(head+1);
     isamlog_process=file_info.process=(long) mi_uint4korr(head+3);
+    if (!opt_processes)
+      file_info.process=0;
     result= mi_uint2korr(head+7);
     if ((curr_file_info=(struct file_info*) tree_search(&tree,&file_info)))
     {
@@ -374,11 +380,17 @@ static int examine_log(my_string file_name, char **table_names)
         goto err;
       {
         uint i;
-        char *pos=file_info.name,*to;
+        char *pos,*to;
 
+        /* Fix if old DOS files to new format */
+        for (pos=file_info.name; pos=strchr(pos,'\\') ; pos++)
+          *pos= '/';
+
+        pos=file_info.name;
         for (i=0 ; i < prefix_remove ; i++)
         {
           char *next;
-          if (!(next=strchr(pos,FN_LIBCHAR)))
+          if (!(next=strchr(pos,'/')))
             break;
           pos=next+1;
         }
@@ -436,7 +448,7 @@ static int examine_log(my_string file_name, char **table_names)
       if (file_info.used)
       {
         if (verbose && !record_pos_file)
-          printf_log("%s: open",file_info.show_name);
+          printf_log("%s: open -> %d",file_info.show_name, file_info.filenr);
         com_count[command][0]++;
         if (result)
           com_count[command][1]++;
@@ -29,4 +29,4 @@ extern pthread_mutex_t THR_LOCK_open;
 #endif
 
 int _myrg_init_queue(MYRG_INFO *info,int inx,enum ha_rkey_function search_flag);
+int _myrg_finish_scan(MYRG_INFO *info, int inx, enum ha_rkey_function type);
@@ -23,8 +23,7 @@
    a NULL-pointer last
 */
 
-int myrg_create(name,table_names)
-const char *name,**table_names;
+int myrg_create(const char *name, const char **table_names, my_bool fix_names)
 {
   int save_errno;
   uint errpos;
@@ -38,15 +37,19 @@ const char *name,**table_names;
     goto err;
   errpos=1;
   if (table_names)
+  {
     for ( ; *table_names ; table_names++)
     {
       strmov(buff,*table_names);
+      if (fix_names)
         fn_same(buff,name,4);
       *(end=strend(buff))='\n';
-      if (my_write(file,*table_names,(uint) (end-buff+1),
+      end[1]=0;
+      if (my_write(file,buff,(uint) (end-buff+1),
                    MYF(MY_WME | MY_NABP)))
         goto err;
     }
+  }
   if (my_close(file,MYF(0)))
     goto err;
   DBUG_RETURN(0);
@@ -58,7 +58,7 @@ int handle_locking;
     {
       if ((end=strend(buff))[-1] == '\n')
         end[-1]='\0';
-      if (buff[0]) /* Skipp empty lines */
+      if (buff[0] && buff[0] != '#') /* Skipp empty lines and comments */
       {
         last_isam=isam;
         if (!test_if_hard_path(buff))
@@ -93,7 +93,7 @@ int handle_locking;
     m_info->options|=isam->s->options;
     m_info->records+=isam->state->records;
     m_info->del+=isam->state->del;
-    m_info->data_file_length=isam->state->data_file_length;
+    m_info->data_file_length+=isam->state->data_file_length;
     if (i)
       isam=(MI_INFO*) (isam->open_list.next->data);
   }
@@ -23,31 +23,32 @@ static int queue_key_cmp(void *keyseg, byte *a, byte *b)
   MI_INFO *aa=((MYRG_TABLE *)a)->table;
   MI_INFO *bb=((MYRG_TABLE *)b)->table;
   uint not_used;
-  return (_mi_key_cmp((MI_KEYSEG *)keyseg, aa->lastkey, bb->lastkey,
-                      USE_WHOLE_KEY, SEARCH_FIND, &not_used));
+  int ret= _mi_key_cmp((MI_KEYSEG *)keyseg, aa->lastkey, bb->lastkey,
+                       USE_WHOLE_KEY, SEARCH_FIND, &not_used);
+  return ret < 0 ? -1 : ret > 0 ? 1 : 0;
 } /* queue_key_cmp */
 
 
 int _myrg_init_queue(MYRG_INFO *info,int inx,enum ha_rkey_function search_flag)
 {
+  int error=0;
   QUEUE *q= &(info->by_key);
 
-  if (!q->root)
+  if (!is_queue_inited(q))
   {
     if (init_queue(q,info->tables, 0,
-                   (myisam_read_vec[search_flag]==SEARCH_SMALLER),
+                   (myisam_readnext_vec[search_flag] == SEARCH_SMALLER),
                    queue_key_cmp,
                    info->open_tables->table->s->keyinfo[inx].seg))
-      return my_errno;
+      error=my_errno;
   }
   else
  {
     if (reinit_queue(q,info->tables, 0,
-                     (myisam_read_vec[search_flag]==SEARCH_SMALLER),
+                     (myisam_readnext_vec[search_flag] == SEARCH_SMALLER),
                      queue_key_cmp,
                      info->open_tables->table->s->keyinfo[inx].seg))
-      return my_errno;
+      error=my_errno;
   }
-  return 0;
+  return error;
 }
 
@@ -16,7 +16,7 @@
 
 #include "mymrgdef.h"
 
-/* Read first row through a specfic key */
+/* Read first row according to specific key */
 
 int myrg_rfirst(MYRG_INFO *info, byte *buf, int inx)
 {
@@ -29,17 +29,17 @@ int myrg_rfirst(MYRG_INFO *info, byte *buf, int inx)
 
   for (table=info->open_tables ; table < info->end_table ; table++)
   {
-    err=mi_rfirst(table->table,NULL,inx);
-    info->last_used_table=table;
-
+    if ((err=mi_rfirst(table->table,NULL,inx)))
+    {
       if (err == HA_ERR_END_OF_FILE)
         continue;
-    if (err)
       return err;
+    }
     /* adding to queue */
     queue_insert(&(info->by_key),(byte *)table);
   }
+  /* We have done a read in all tables */
+  info->last_used_table=table;
 
   if (!info->by_key.elements)
     return HA_ERR_END_OF_FILE;
@@ -16,6 +16,17 @@
 
 /* Read record based on a key */
 
+/*
+ * HA_READ_KEY_EXACT => SEARCH_BIGGER
+ * HA_READ_KEY_OR_NEXT => SEARCH_BIGGER
+ * HA_READ_AFTER_KEY => SEARCH_BIGGER
+ * HA_READ_PREFIX => SEARCH_BIGGER
+ * HA_READ_KEY_OR_PREV => SEARCH_SMALLER
+ * HA_READ_BEFORE_KEY => SEARCH_SMALLER
+ * HA_READ_PREFIX_LAST => SEARCH_SMALLER
+ */
+
+
 #include "mymrgdef.h"
 
 /* todo: we could store some additional info to speedup lookups:
@@ -52,13 +63,14 @@ int myrg_rkey(MYRG_INFO *info,byte *record,int inx, const byte *key,
     {
       err=_mi_rkey(mi,buf,inx,key_buff,pack_key_length,search_flag,FALSE);
     }
-    info->last_used_table=table;
+    info->last_used_table=table+1;
 
+    if (err)
+    {
       if (err == HA_ERR_KEY_NOT_FOUND)
         continue;
-    if (err)
       return err;
+    }
     /* adding to queue */
     queue_insert(&(info->by_key),(byte *)table);
 
@@ -76,14 +88,3 @@ int myrg_rkey(MYRG_INFO *info,byte *record,int inx, const byte *key,
   mi=(info->current_table=(MYRG_TABLE *)queue_top(&(info->by_key)))->table;
   return mi_rrnd(mi,record,mi->lastpos);
 }
-
-/*
- * HA_READ_KEY_EXACT => SEARCH_BIGGER
- * HA_READ_KEY_OR_NEXT => SEARCH_BIGGER
- * HA_READ_AFTER_KEY => SEARCH_BIGGER
- * HA_READ_PREFIX => SEARCH_BIGGER
- * HA_READ_KEY_OR_PREV => SEARCH_SMALLER
- * HA_READ_BEFORE_KEY => SEARCH_SMALLER
- * HA_READ_PREFIX_LAST => SEARCH_SMALLER
- */
-
@@ -29,17 +29,17 @@ int myrg_rlast(MYRG_INFO *info, byte *buf, int inx)
 
   for (table=info->open_tables ; table < info->end_table ; table++)
   {
-    err=mi_rlast(table->table,NULL,inx);
-    info->last_used_table=table;
-
+    if ((err=mi_rlast(table->table,NULL,inx)))
+    {
       if (err == HA_ERR_END_OF_FILE)
         continue;
-    if (err)
       return err;
+    }
     /* adding to queue */
     queue_insert(&(info->by_key),(byte *)table);
   }
+  /* We have done a read in all tables */
+  info->last_used_table=table;
 
   if (!info->by_key.elements)
     return HA_ERR_END_OF_FILE;
@@ -22,22 +22,21 @@
 
 int myrg_rnext(MYRG_INFO *info, byte *buf, int inx)
 {
-  MYRG_TABLE *table;
-  MI_INFO *mi;
-  byte *key_buff;
-  uint pack_key_length;
   int err;
+  MI_INFO *mi;
 
   /* at first, do rnext for the table found before */
-  err=mi_rnext(info->current_table->table,NULL,inx);
+  if ((err=mi_rnext(info->current_table->table,NULL,inx)))
+  {
     if (err == HA_ERR_END_OF_FILE)
     {
       queue_remove(&(info->by_key),0);
       if (!info->by_key.elements)
         return HA_ERR_END_OF_FILE;
     }
-  else if (err)
+    else
       return err;
+  }
   else
   {
     /* Found here, adding to queue */
@@ -46,30 +45,42 @@ int myrg_rnext(MYRG_INFO *info, byte *buf, int inx)
   }
 
   /* next, let's finish myrg_rkey's initial scan */
-  table=info->last_used_table+1;
-  if (table < info->end_table)
-  {
-    mi=info->last_used_table->table;
-    key_buff=(byte*) mi->lastkey+mi->s->base.max_key_length;
-    pack_key_length=mi->last_rkey_length;
-    for (; table < info->end_table ; table++)
-    {
-      mi=table->table;
-      err=_mi_rkey(mi,NULL,inx,key_buff,pack_key_length,HA_READ_KEY_OR_NEXT,FALSE);
-      info->last_used_table=table;
-
-      if (err == HA_ERR_KEY_NOT_FOUND)
-        continue;
-      if (err)
-        return err;
-
-      /* Found here, adding to queue */
-      queue_insert(&(info->by_key),(byte *)table);
-    }
-  }
+  if ((err=_myrg_finish_scan(info, inx, HA_READ_KEY_OR_NEXT)))
+    return err;
 
   /* now, mymerge's read_next is as simple as one queue_top */
   mi=(info->current_table=(MYRG_TABLE *)queue_top(&(info->by_key)))->table;
   return mi_rrnd(mi,buf,mi->lastpos);
 }
+
+
+/* let's finish myrg_rkey's initial scan */
+
+int _myrg_finish_scan(MYRG_INFO *info, int inx, enum ha_rkey_function type)
+{
+  int err;
+  MYRG_TABLE *table=info->last_used_table;
+  if (table < info->end_table)
+  {
+    MI_INFO *mi= table[-1].table;
+    byte *key_buff=(byte*) mi->lastkey+mi->s->base.max_key_length;
+    uint pack_key_length= mi->last_rkey_length;
+
+    for (; table < info->end_table ; table++)
+    {
+      mi=table->table;
+      if ((err=_mi_rkey(mi,NULL,inx,key_buff,pack_key_length,
+                        type,FALSE)))
+      {
+        if (err == HA_ERR_KEY_NOT_FOUND) /* If end of file */
+          continue;
+        return err;
+      }
+      /* Found here, adding to queue */
+      queue_insert(&(info->by_key),(byte *) table);
+    }
+    /* All tables are now used */
+    info->last_used_table=table;
+  }
+  return 0;
+}
@@ -22,22 +22,21 @@
 
 int myrg_rprev(MYRG_INFO *info, byte *buf, int inx)
 {
-  MYRG_TABLE *table;
-  MI_INFO *mi;
-  byte *key_buff;
-  uint pack_key_length;
   int err;
+  MI_INFO *mi;
 
-  /* at first, do rnext for the table found before */
-  err=mi_rprev(info->current_table->table,NULL,inx);
+  /* at first, do rprev for the table found before */
+  if ((err=mi_rprev(info->current_table->table,NULL,inx)))
+  {
     if (err == HA_ERR_END_OF_FILE)
     {
      queue_remove(&(info->by_key),0);
      if (!info->by_key.elements)
        return HA_ERR_END_OF_FILE;
    }
-  else if (err)
+    else
      return err;
+  }
   else
   {
     /* Found here, adding to queue */
@@ -46,29 +45,9 @@ int myrg_rprev(MYRG_INFO *info, byte *buf, int inx)
   }
 
   /* next, let's finish myrg_rkey's initial scan */
-  table=info->last_used_table+1;
-  if (table < info->end_table)
-  {
-    mi=info->last_used_table->table;
-    key_buff=(byte*) mi->lastkey+mi->s->base.max_key_length;
-    pack_key_length=mi->last_rkey_length;
-    for (; table < info->end_table ; table++)
-    {
-      mi=table->table;
-      err=_mi_rkey(mi,NULL,inx,key_buff,pack_key_length,
-                   HA_READ_KEY_OR_PREV,FALSE);
-      info->last_used_table=table;
-
-      if (err == HA_ERR_KEY_NOT_FOUND)
-        continue;
-      if (err)
-        return err;
-
-      /* Found here, adding to queue */
-      queue_insert(&(info->by_key),(byte *)table);
-    }
-  }
+  if ((err=_myrg_finish_scan(info, inx, HA_READ_KEY_OR_PREV)))
+    return err;
 
   /* now, mymerge's read_prev is as simple as one queue_top */
   mi=(info->current_table=(MYRG_TABLE *)queue_top(&(info->by_key)))->table;
   return mi_rrnd(mi,buf,mi->lastpos);
@@ -84,9 +84,9 @@ int myrg_rrnd(MYRG_INFO *info,byte *buf,ulonglong filepos)
                                 info->end_table-1,filepos);
   isam_info=info->current_table->table;
   isam_info->update&= HA_STATE_CHANGED;
-  return ((*isam_info->s->read_rnd)(isam_info,(byte*) buf,
-                                    (ha_rows) (filepos -
-                                               info->current_table->file_offset),
+  return ((*isam_info->s->read_rnd)
+          (isam_info, (byte*) buf,
+           (ha_rows) (filepos - info->current_table->file_offset),
            0));
 }
 
mysql.proj (binary file, not shown)
@@ -25,7 +25,7 @@
 #include <queues.h>


-/* The actuall code for handling queues */
+/* Init queue */

 int init_queue(QUEUE *queue, uint max_elements, uint offset_to_key,
                pbool max_at_top, int (*compare) (void *, byte *, byte *),
@@ -44,6 +44,12 @@ int init_queue(QUEUE *queue, uint max_elements, uint offset_to_key,
   DBUG_RETURN(0);
 }

+/*
+  Reinitialize queue for new usage; Note that you can't currently resize
+  the number of elements!  If you need this, fix it :)
+*/
+
+
 int reinit_queue(QUEUE *queue, uint max_elements, uint offset_to_key,
                  pbool max_at_top, int (*compare) (void *, byte *, byte *),
                  void *first_cmp_arg)
@@ -78,6 +84,7 @@ void delete_queue(QUEUE *queue)
 void queue_insert(register QUEUE *queue, byte *element)
 {
   reg2 uint idx,next;
+  int cmp;

 #ifndef DBUG_OFF
   if (queue->elements < queue->max_elements)
@@ -86,10 +93,12 @@ void queue_insert(register QUEUE *queue, byte *element)
   queue->root[0]=element;
   idx= ++queue->elements;

-  while ((queue->compare(queue->first_cmp_arg,
+  /* max_at_top swaps the comparison if we want to order by desc */
+  while ((cmp=queue->compare(queue->first_cmp_arg,
              element+queue->offset_to_key,
-             queue->root[(next=idx >> 1)]+queue->offset_to_key)
-         ^ queue->max_at_top) < 0)
+             queue->root[(next=idx >> 1)] +
+             queue->offset_to_key)) &&
+         (cmp ^ queue->max_at_top) < 0)
   {
     queue->root[idx]=queue->root[next];
     idx=next;
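Note on the queue_insert() change above (presumably mysys/queues.c): the comparator result is now kept in cmp so the sift-up loop can stop as soon as a parent compares equal, and the result is still XOR-ed with max_at_top to flip the ordering. A self-contained toy version of the same pattern, with invented names, not taken from the MySQL sources:

    #include <cstdio>
    #include <initializer_list>
    #include <vector>

    // Toy 1-based binary heap illustrating the sift-up loop above.
    // cmp() returns -1/0/1; XOR-ing a non-zero result with ~0 flips which
    // side of the "< 0" test it lands on, so one integer flag selects
    // min-at-top or max-at-top without a second comparator.
    struct ToyQueue {
      std::vector<int> root;   // root[0] is unused; entries start at index 1
      int max_at_top;          // 0 = smallest on top, ~0 = largest on top

      explicit ToyQueue(bool largest_on_top)
        : root(1, 0), max_at_top(largest_on_top ? ~0 : 0) {}

      static int cmp(int a, int b) { return (a > b) - (a < b); }

      void insert(int element) {
        root.push_back(element);
        auto idx = root.size() - 1;
        int c;
        // Stop as soon as the parent compares equal (c == 0) or already
        // sorts before the new element under the chosen order.
        while (idx > 1 &&
               (c = cmp(element, root[idx >> 1])) != 0 &&
               (c ^ max_at_top) < 0) {
          root[idx] = root[idx >> 1];    // pull the parent down
          idx >>= 1;
        }
        root[idx] = element;
      }

      int top() const { return root[1]; }
    };

    int main() {
      ToyQueue smallest(false), largest(true);
      for (int v : {5, 1, 9, 3}) { smallest.insert(v); largest.insert(v); }
      std::printf("min=%d max=%d\n", smallest.top(), largest.top());  // min=1 max=9
    }

XOR with ~0 turns a positive comparison result negative and a negative one non-negative, which is the trick the "max_at_top swaps the comparison" comment refers to.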
@@ -53,12 +53,14 @@ pkgdata_DATA = make_binary_distribution
 CLEANFILES = @server_scripts@ \
         make_binary_distribution \
         msql2mysql \
+        mysql_config \
         mysql_fix_privilege_tables \
         mysql_setpermission \
         mysql_zap \
         mysqlaccess \
         mysql_convert_table_format \
-        mysql_find_rows
+        mysql_find_rows \
+        mysqlhotcopy

 SUPERCLEANFILES = mysqlbug

Mode changes (Executable file → Normal file, 0 lines changed):
  scripts/make_binary_distribution.sh
  scripts/mysql_convert_table_format.sh
  scripts/mysql_find_rows.sh
  scripts/mysql_setpermission.sh
  scripts/mysql_zap.sh
  scripts/mysqlaccess.conf
  scripts/mysqlaccess.sh
  scripts/mysqlbug.sh
  scripts/mysqlhotcopy.sh
  scripts/safe_mysqld-watch.sh
  scripts/safe_mysqld.sh
@@ -348,12 +348,12 @@ print " for select_diff_key ($count:$rows): " .
 # Test select that is very popular when using ODBC

 check_or_range("id","select_range_prefix");
-check_or_range("id3","select_range");
+check_or_range("id3","select_range_key2");

 # Check reading on direct key on id and id3

 check_select_key("id","select_key_prefix");
-check_select_key("id3","select_key");
+check_select_key("id3","select_key_key2");

 ####
 #### A lot of simple selects on ranges
@@ -403,7 +403,7 @@ check_select_key("id3","select_key");

 print "\nTest of compares with simple ranges\n";
 check_select_range("id","select_range_prefix");
-check_select_range("id3","select_range");
+check_select_range("id3","select_range_key2");

 ####
 #### Some group queries
@@ -1107,20 +1107,28 @@ if ($server->small_rollback_segment())
 # Delete everything from table
 #

-print "Deleting everything from table\n";
+print "Deleting rows from the table\n";
 $loop_time=new Benchmark;
 $count=0;

+for ($i=0 ; $i < 128 ; $i++)
+{
+  $dbh->do("delete from bench1 where field1 = $i") or die $DBI::errstr;
+}
+
+$end_time=new Benchmark;
+print "Time for delete_big_many_keys ($count): " .
+  timestr(timediff($end_time, $loop_time),"all") . "\n\n";
+
+print "Deleting everything from table\n";
+$count=1;
 if ($opt_fast)
 {
-  $dbh->do("delete from bench1 where field1 = 0") or die $DBI::errstr;
   $dbh->do("delete from bench1") or die $DBI::errstr;
-  $count+=2;
 }
 else
 {
-  $dbh->do("delete from bench1 where field1 = 0") or die $DBI::errstr;
   $dbh->do("delete from bench1 where field1 > 0") or die $DBI::errstr;
-  $count+=2;
 }

 if ($opt_lock_tables)
@@ -1129,7 +1137,7 @@ if ($opt_lock_tables)
 }

 $end_time=new Benchmark;
-print "Time for delete_big_many_keys ($count): " .
+print "Time for delete_all_many_keys ($count): " .
   timestr(timediff($end_time, $loop_time),"all") . "\n\n";

 $sth = $dbh->do("drop table bench1") or die $DBI::errstr;
@@ -261,7 +261,7 @@ ha_rows ha_heap::records_in_range(int inx,
   if (start_key_len != end_key_len ||
       start_key_len != pos->key_length ||
       start_search_flag != HA_READ_KEY_EXACT ||
-      end_search_flag != HA_READ_KEY_EXACT)
+      end_search_flag != HA_READ_AFTER_KEY)
     return HA_POS_ERROR;                        // Can't only use exact keys
   return 10;                                    // Good guess
 }
@@ -33,14 +33,15 @@ class ha_heap: public handler
   const char *table_type() const { return "HEAP"; }
   const char **bas_ext() const;
   ulong option_flag() const
-  { return (HA_READ_RND_SAME+HA_NO_INDEX+HA_BINARY_KEYS+HA_WRONG_ASCII_ORDER+
-            HA_KEYPOS_TO_RNDPOS+HA_NO_BLOBS+HA_REC_NOT_IN_SEQ); }
+  { return (HA_READ_RND_SAME | HA_NO_INDEX | HA_ONLY_WHOLE_INDEX |
+            HA_WRONG_ASCII_ORDER | HA_KEYPOS_TO_RNDPOS | HA_NO_BLOBS |
+            HA_REC_NOT_IN_SEQ); }
   uint max_record_length() const { return HA_MAX_REC_LENGTH; }
   uint max_keys() const { return MAX_KEY; }
   uint max_key_parts() const { return MAX_REF_PARTS; }
   uint max_key_length() const { return HA_MAX_REC_LENGTH; }
-  virtual double scan_time() { return (double) (records+deleted) / 100.0; }
-  virtual double read_time(ha_rows rows) { return (double) rows / 100.0; }
+  virtual double scan_time() { return (double) (records+deleted) / 20.0+10; }
+  virtual double read_time(ha_rows rows) { return (double) rows / 20.0+1; }
   virtual bool fast_key_read() { return 1;}

   int open(const char *name, int mode, int test_if_locked);
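The ha_heap option_flag() hunk above switches from adding the HA_* constants to OR-ing them. Since every constant is a distinct single bit, + and | produce the same mask only while no flag is listed twice; with | a duplicate is harmless, with + it silently becomes a different flag. A tiny standalone illustration with invented flag names:

    #include <cassert>

    enum : unsigned {
      FLAG_A = 1,    // single-bit option flags, in the spirit of handler.h
      FLAG_B = 2,
      FLAG_C = 4
    };

    int main() {
      // OR-ing is idempotent: repeating a flag cannot corrupt the mask.
      unsigned with_or  = FLAG_A | FLAG_B | FLAG_B | FLAG_C;   // 0b0111 = 7
      // Addition is not: the duplicated FLAG_B spills into an unrelated bit.
      unsigned with_add = FLAG_A + FLAG_B + FLAG_B + FLAG_C;   // 0b1001 = 9

      assert(with_or == 7);
      assert(with_add == 9);                 // FLAG_B now looks absent, bogus 8 bit set
      assert((with_or & FLAG_B) != 0);
      return 0;
    }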
@@ -45,7 +45,7 @@ class ha_myisam: public handler
   const char **bas_ext() const;
   ulong option_flag() const { return int_option_flag; }
   uint max_record_length() const { return HA_MAX_REC_LENGTH; }
-  uint max_keys() const { return 1; }
+  uint max_keys() const { return MI_MAX_KEY; }
   uint max_key_parts() const { return MAX_REF_PARTS; }
   uint max_key_length() const { return MAX_KEY_LENGTH; }

@@ -180,11 +180,7 @@ void ha_myisammrg::info(uint flag)
   mean_rec_length=info.reclength;
   block_size=0;
   update_time=0;
-#if SIZEOF_OFF_T > 4
   ref_length=6;                                 // Should be big enough
-#else
-  ref_length=4;
-#endif
 }


@@ -228,6 +224,16 @@ THR_LOCK_DATA **ha_myisammrg::store_lock(THD *thd,
 int ha_myisammrg::create(const char *name, register TABLE *form,
                          HA_CREATE_INFO *create_info)
 {
-  char buff[FN_REFLEN];
-  return myrg_create(fn_format(buff,name,"","",2+4+16),0);
+  char buff[FN_REFLEN],**table_names,**pos;
+  TABLE_LIST *tables= (TABLE_LIST*) create_info->merge_list.first;
+  DBUG_ENTER("ha_myisammrg::create");
+
+  if (!(table_names= (char**) sql_alloc((create_info->merge_list.elements+1)*
+                                        sizeof(char*))))
+    DBUG_RETURN(1);
+  for (pos=table_names ; tables ; tables=tables->next)
+    *pos++= tables->real_name;
+  *pos=0;
+  DBUG_RETURN(myrg_create(fn_format(buff,name,"","",2+4+16),
+                          (const char **) table_names, (my_bool) 0));
 }
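ha_myisammrg::create() now walks create_info->merge_list and hands myrg_create() a NULL-terminated array of the underlying table names instead of an empty list. The same collect-into-a-terminated-array pattern, sketched with stand-in types (illustrative only; Node and collect_names are not the server's TABLE_LIST or sql_alloc):

    #include <cstdio>
    #include <vector>

    // Stand-in for a singly linked table list: a name plus a next pointer.
    struct Node {
      const char *real_name;
      Node *next;
    };

    // Collect the names into a NULL-terminated array, the shape a C API
    // such as myrg_create() typically expects instead of a count.
    static std::vector<const char *> collect_names(const Node *head) {
      std::vector<const char *> names;
      for (const Node *p = head; p; p = p->next)
        names.push_back(p->real_name);
      names.push_back(nullptr);                  // terminator
      return names;
    }

    int main() {
      Node t3{"t3", nullptr}, t2{"t2", &t3}, t1{"t1", &t2};
      std::vector<const char *> names = collect_names(&t1);
      for (const char **p = names.data(); *p; ++p)
        std::printf("%s\n", *p);                 // t1, t2, t3
    }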
@@ -32,15 +32,19 @@ class ha_myisammrg: public handler
   ~ha_myisammrg() {}
   const char *table_type() const { return "MRG_MyISAM"; }
   const char **bas_ext() const;
-  ulong option_flag() const { return HA_REC_NOT_IN_SEQ+HA_READ_NEXT+
-          HA_READ_PREV+HA_READ_RND_SAME+HA_HAVE_KEY_READ_ONLY+
-          HA_KEYPOS_TO_RNDPOS+HA_READ_ORDER+
-          HA_LASTKEY_ORDER+HA_READ_NOT_EXACT_KEY+
-          HA_LONGLONG_KEYS+HA_NULL_KEY+HA_BLOB_KEY; }
+  ulong option_flag() const
+  { return (HA_REC_NOT_IN_SEQ | HA_READ_NEXT |
+            HA_READ_PREV | HA_READ_RND_SAME |
+            HA_HAVE_KEY_READ_ONLY |
+            HA_KEYPOS_TO_RNDPOS | HA_READ_ORDER |
+            HA_LASTKEY_ORDER | HA_READ_NOT_EXACT_KEY |
+            HA_LONGLONG_KEYS | HA_NULL_KEY | HA_BLOB_KEY); }
   uint max_record_length() const { return HA_MAX_REC_LENGTH; }
-  uint max_keys() const { return 1; }
+  uint max_keys() const { return MI_MAX_KEY; }
   uint max_key_parts() const { return MAX_REF_PARTS; }
   uint max_key_length() const { return MAX_KEY_LENGTH; }
+  virtual double scan_time()
+  { return ulonglong2double(data_file_length) / IO_SIZE + file->tables; }

   int open(const char *name, int mode, int test_if_locked);
   int close(void);
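The new ha_myisammrg::scan_time() above prices a full scan of a MERGE table as the combined data length in IO_SIZE blocks plus one unit per underlying table. A toy version of that formula (the 4096-byte block size is an assumption standing in for IO_SIZE, and the function name is invented):

    #include <cstdio>

    static const double kIoSize = 4096.0;        // assumed stand-in for IO_SIZE

    // Estimated cost of scanning a MERGE-like table: blocks to read plus a
    // fixed charge for every underlying table that must be visited.
    static double merge_scan_cost(unsigned long long data_file_length,
                                  unsigned tables) {
      return (double)data_file_length / kIoSize + tables;
    }

    int main() {
      // 8 MB of combined data spread over 4 MyISAM tables: 2048 + 4 = 2052.
      std::printf("%.1f\n", merge_scan_cost(8ULL * 1024 * 1024, 4));
    }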
@@ -48,7 +48,7 @@
                                            if database is updated after read) */
 #define HA_REC_NOT_IN_SEQ       64      /* ha_info don't return recnumber;
                                            It returns a position to ha_r_rnd */
-#define HA_BINARY_KEYS          128     /* Keys must be exact */
+#define HA_ONLY_WHOLE_INDEX     128     /* Can't use part key searches */
 #define HA_RSAME_NO_INDEX       256     /* RSAME can't restore index */
 #define HA_WRONG_ASCII_ORDER    512     /* Can't use sorting through key */
 #define HA_HAVE_KEY_READ_ONLY   1024    /* Can read only keys (no record) */
@@ -128,6 +128,7 @@ typedef struct st_ha_create_information
   ulong raid_chunksize;
   bool if_not_exists;
   ulong used_fields;
+  SQL_LIST merge_list;
 } HA_CREATE_INFO;


@@ -299,11 +299,12 @@ static SYMBOL symbols[] = {
   { "TRAILING",         SYM(TRAILING),0,0},
   { "TO",               SYM(TO_SYM),0,0},
   { "TYPE",             SYM(TYPE_SYM),0,0},
-  { "USE",              SYM(USE_SYM),0,0},
-  { "USING",            SYM(USING),0,0},
+  { "UNION",            SYM(UNION_SYM),0,0},
   { "UNIQUE",           SYM(UNIQUE_SYM),0,0},
   { "UNLOCK",           SYM(UNLOCK_SYM),0,0},
   { "UNSIGNED",         SYM(UNSIGNED),0,0},
+  { "USE",              SYM(USE_SYM),0,0},
+  { "USING",            SYM(USING),0,0},
   { "UPDATE",           SYM(UPDATE_SYM),0,0},
   { "USAGE",            SYM(USAGE),0,0},
   { "VALUES",           SYM(VALUES),0,0},
@@ -79,6 +79,10 @@ void sql_element_free(void *ptr);
 // instead of reading with keys. The number says how many evaluation of the
 // WHERE clause is comparable to reading one extra row from a table.
 #define TIME_FOR_COMPARE   5    // 5 compares == one read
+// Number of rows in a reference table when refereed through a not unique key.
+// This value is only used when we don't know anything about the key
+// distribution.
+#define MATCHING_ROWS_IN_OTHER_TABLE 10

 /* Don't pack string keys shorter than this (if PACK_KEYS=1 isn't used) */
 #define KEY_DEFAULT_PACK_LENGTH 8
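MATCHING_ROWS_IN_OTHER_TABLE gives the optimizer a default fan-out guess (10 matching rows per non-unique key lookup) when nothing is known about the key distribution, next to TIME_FOR_COMPARE which, per the comment above, rates WHERE evaluations against row reads. A purely illustrative piece of arithmetic using the same two constants (not the optimizer's actual cost code):

    #include <cstdio>

    static const double TIME_FOR_COMPARE = 5.0;              // 5 compares == one read
    static const double MATCHING_ROWS_IN_OTHER_TABLE = 10.0; // default fan-out guess

    int main() {
      double driving_rows = 1000.0;           // rows produced by the first table

      // With no statistics, assume each ref lookup into the second table
      // matches MATCHING_ROWS_IN_OTHER_TABLE rows.
      double joined_rows = driving_rows * MATCHING_ROWS_IN_OTHER_TABLE;

      // Filtering those rows with the WHERE clause is charged at one
      // row-read equivalent per TIME_FOR_COMPARE evaluations.
      double filter_cost = joined_rows / TIME_FOR_COMPARE;

      std::printf("joined=%.0f filter_cost=%.0f\n", joined_rows, filter_cost);
      // joined=10000 filter_cost=2000
    }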
@@ -16,7 +16,7 @@ install-data-local:
                 $(DESTDIR)$(pkgdatadir)/$$lang/errmsg.txt; \
         done
         $(mkinstalldirs) $(DESTDIR)$(pkgdatadir)/charsets
-        (for f in Index README "*.conf"; \
+        (cd $(srcdir)/charsets; for f in Index README "*.conf"; \
         do \
-          $(INSTALL_DATA) $(srcdir)/charsets/$$f $(DESTDIR)$(pkgdatadir)/charsets/; \
+          $(INSTALL_DATA) $$f $(DESTDIR)$(pkgdatadir)/charsets/; \
         done)
@@ -1905,6 +1905,7 @@ int grant_init (void)
   {
     t_table->file->index_end();
     mysql_unlock_tables(thd, lock);
+    thd->version--;                             // Force close to free memory
     close_thread_tables(thd);
     delete thd;
     DBUG_RETURN(0);                             // Empty table is ok!
@@ -94,16 +94,9 @@ typedef struct st_lex {
   LEX_YYSTYPE yylval;
   uchar *ptr,*tok_start,*tok_end,*end_of_query;
   ha_rows select_limit,offset_limit;
-  bool create_refs,drop_primary,drop_if_exists,local_file,
-       in_comment,ignore_space,verbose;
-  enum_sql_command sql_command;
-  enum lex_states next_state;
-  ulong options;
-  uint in_sum_expr,grant,grant_tot_col,which_columns, sort_default;
   char *length,*dec,*change,*name;
   String *wild;
   sql_exchange *exchange;
-  thr_lock_type lock_option;

   List<List_item> expr_list;
   List<List_item> when_list;
@@ -124,9 +117,6 @@ typedef struct st_lex {
   create_field *last_field;

   Item *where,*having,*default_value;
-  enum enum_duplicates duplicates;
-  ulong thread_id,type;
-  HA_CREATE_INFO create_info;
   CONVERT *convert_set;
   LEX_USER *grant_user;
   char *db,*db1,*table1,*db2,*table2;   /* For outer join using .. */
@@ -134,8 +124,18 @@ typedef struct st_lex {
   THD *thd;
   udf_func udf;
   HA_CHECK_OPT check_opt;                       // check/repair options
+  HA_CREATE_INFO create_info;
   LEX_MASTER_INFO mi;                           // used by CHANGE MASTER
-  char* backup_dir;                             // used by BACKUP / RESTORE
+  ulong thread_id,type;
+  ulong options;
+  enum_sql_command sql_command;
+  enum lex_states next_state;
+  enum enum_duplicates duplicates;
+  uint in_sum_expr,grant,grant_tot_col,which_columns, sort_default;
+  thr_lock_type lock_option;
+  bool create_refs,drop_primary,drop_if_exists,local_file;
+  bool in_comment,ignore_space,verbose;

 } LEX;


@@ -40,6 +40,7 @@ extern "C" int gethostname(char *name, int namelen);

 static bool check_table_access(THD *thd,uint want_access, TABLE_LIST *tables);
 static bool check_db_used(THD *thd,TABLE_LIST *tables);
+static bool check_merge_table_access(THD *thd, char *db, TABLE_LIST *tables);
 static bool check_dup(THD *thd,const char *db,const char *name,
                       TABLE_LIST *tables);
 static void mysql_init_query(THD *thd);
@@ -1007,10 +1008,12 @@ mysql_execute_command(void)
     break;

   case SQLCOM_CREATE_TABLE:
-#ifdef DEMO_VERSION
-    send_error(&thd->net,ER_NOT_ALLOWED_COMMAND);
-#else
-    if (check_access(thd,CREATE_ACL,tables->db,&tables->grant.privilege))
+    if (!tables->db)
+      tables->db=thd->db;
+    if (check_access(thd,CREATE_ACL,tables->db,&tables->grant.privilege) ||
+        check_merge_table_access(thd, tables->db,
+                                 (TABLE_LIST *)
+                                 lex->create_info.merge_list.first))
       goto error;                               /* purecov: inspected */
     if (grant_option)
     {
|
|||||||
if (grant_option && check_grant(thd,INDEX_ACL,tables))
|
if (grant_option && check_grant(thd,INDEX_ACL,tables))
|
||||||
goto error;
|
goto error;
|
||||||
res = mysql_create_index(thd, tables, lex->key_list);
|
res = mysql_create_index(thd, tables, lex->key_list);
|
||||||
#endif
|
|
||||||
break;
|
break;
|
||||||
|
|
||||||
case SQLCOM_SLAVE_START:
|
case SQLCOM_SLAVE_START:
|
||||||
@ -1101,7 +1103,6 @@ mysql_execute_command(void)
|
|||||||
stop_slave(thd);
|
stop_slave(thd);
|
||||||
break;
|
break;
|
||||||
|
|
||||||
|
|
||||||
case SQLCOM_ALTER_TABLE:
|
case SQLCOM_ALTER_TABLE:
|
||||||
#if defined(DONT_ALLOW_SHOW_COMMANDS)
|
#if defined(DONT_ALLOW_SHOW_COMMANDS)
|
||||||
send_error(&thd->net,ER_NOT_ALLOWED_COMMAND); /* purecov: inspected */
|
send_error(&thd->net,ER_NOT_ALLOWED_COMMAND); /* purecov: inspected */
|
||||||
@ -1115,10 +1116,15 @@ mysql_execute_command(void)
|
|||||||
res=0;
|
res=0;
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
|
if (!tables->db)
|
||||||
|
tables->db=thd->db;
|
||||||
if (!lex->db)
|
if (!lex->db)
|
||||||
lex->db=tables->db;
|
lex->db=tables->db;
|
||||||
if (check_access(thd,ALTER_ACL,tables->db,&tables->grant.privilege) ||
|
if (check_access(thd,ALTER_ACL,tables->db,&tables->grant.privilege) ||
|
||||||
check_access(thd,INSERT_ACL | CREATE_ACL,lex->db,&priv))
|
check_access(thd,INSERT_ACL | CREATE_ACL,lex->db,&priv) ||
|
||||||
|
check_merge_table_access(thd, tables->db,
|
||||||
|
(TABLE_LIST *)
|
||||||
|
lex->create_info.merge_list.first))
|
||||||
goto error; /* purecov: inspected */
|
goto error; /* purecov: inspected */
|
||||||
if (!tables->db)
|
if (!tables->db)
|
||||||
tables->db=thd->db;
|
tables->db=thd->db;
|
||||||
@ -1373,7 +1379,7 @@ mysql_execute_command(void)
|
|||||||
res = mysql_drop_index(thd, tables, lex->drop_list);
|
res = mysql_drop_index(thd, tables, lex->drop_list);
|
||||||
break;
|
break;
|
||||||
case SQLCOM_SHOW_DATABASES:
|
case SQLCOM_SHOW_DATABASES:
|
||||||
#if defined(DONT_ALLOW_SHOW_COMMANDS) || defined(DEMO_VERSION)
|
#if defined(DONT_ALLOW_SHOW_COMMANDS)
|
||||||
send_error(&thd->net,ER_NOT_ALLOWED_COMMAND); /* purecov: inspected */
|
send_error(&thd->net,ER_NOT_ALLOWED_COMMAND); /* purecov: inspected */
|
||||||
DBUG_VOID_RETURN;
|
DBUG_VOID_RETURN;
|
||||||
#else
|
#else
|
||||||
@ -1829,6 +1835,22 @@ static bool check_db_used(THD *thd,TABLE_LIST *tables)
|
|||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
static bool check_merge_table_access(THD *thd, char *db, TABLE_LIST *table_list)
|
||||||
|
{
|
||||||
|
int error=0;
|
||||||
|
if (table_list)
|
||||||
|
{
|
||||||
|
/* Force all tables to use the current database */
|
||||||
|
TABLE_LIST *tmp;
|
||||||
|
for (tmp=table_list; tmp ; tmp=tmp->next)
|
||||||
|
tmp->db=db;
|
||||||
|
error=check_table_access(thd, SELECT_ACL | UPDATE_ACL | DELETE_ACL,
|
||||||
|
table_list);
|
||||||
|
}
|
||||||
|
return error;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
/****************************************************************************
|
/****************************************************************************
|
||||||
Check stack size; Send error if there isn't enough stack to continue
|
Check stack size; Send error if there isn't enough stack to continue
|
||||||
****************************************************************************/
|
****************************************************************************/
|
||||||
|
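check_merge_table_access() above forces every table of the UNION list into the statement's database and then requires SELECT, UPDATE and DELETE on all of them. A compact standalone sketch of that shape, with invented privilege bits and a stand-in table list (not the server's ACL code):

    #include <cstdio>

    // Invented privilege bits and table description, for illustration only.
    enum : unsigned { SELECT_ACL = 1, UPDATE_ACL = 2, DELETE_ACL = 4 };

    struct Table { const char *db; const char *name; unsigned granted; Table *next; };

    // Require the full read/write privilege set on every table of a MERGE
    // UNION list, defaulting each table to the CREATE statement's database.
    static bool merge_list_denied(Table *list, const char *db) {
      const unsigned needed = SELECT_ACL | UPDATE_ACL | DELETE_ACL;
      for (Table *t = list; t; t = t->next) {
        t->db = db;                              // force current database
        if ((t->granted & needed) != needed)
          return true;                           // mirrors the "goto error" path
      }
      return false;
    }

    int main() {
      Table t2{"", "t2", SELECT_ACL | UPDATE_ACL | DELETE_ACL, nullptr};
      Table t1{"", "t1", SELECT_ACL, &t2};       // missing UPDATE/DELETE
      std::printf("%s\n", merge_list_denied(&t1, "test") ? "denied" : "ok");
    }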
@@ -942,7 +942,7 @@ make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
     }
     /* Approximate found rows and time to read them */
     s->found_records=s->records=s->table->file->records;
-    s->read_time=(ha_rows) ((s->table->file->data_file_length)/IO_SIZE)+1;
+    s->read_time=(ha_rows) s->table->file->scan_time();

     /* Set a max range of how many seeks we can expect when using keys */
     s->worst_seeks= (double) (s->read_time*2);
|
|||||||
double best_records=DBL_MAX;
|
double best_records=DBL_MAX;
|
||||||
|
|
||||||
/* Test how we can use keys */
|
/* Test how we can use keys */
|
||||||
rec= s->records/10; /* Assume 10 records/key */
|
rec= s->records/MATCHING_ROWS_IN_OTHER_TABLE; /* Assumed records/key */
|
||||||
for (keyuse=s->keyuse ; keyuse->table == table ;)
|
for (keyuse=s->keyuse ; keyuse->table == table ;)
|
||||||
{
|
{
|
||||||
key_map found_part=0;
|
key_map found_part=0;
|
||||||
@ -1571,7 +1571,7 @@ find_best(JOIN *join,table_map rest_tables,uint idx,double record_count,
|
|||||||
if (map == 1) // Only one table
|
if (map == 1) // Only one table
|
||||||
{
|
{
|
||||||
TABLE *tmp_table=join->all_tables[tablenr];
|
TABLE *tmp_table=join->all_tables[tablenr];
|
||||||
if (rec > tmp_table->file->records)
|
if (rec > tmp_table->file->records && rec > 100)
|
||||||
rec=max(tmp_table->file->records,100);
|
rec=max(tmp_table->file->records,100);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -1615,12 +1615,12 @@ find_best(JOIN *join,table_map rest_tables,uint idx,double record_count,
|
|||||||
}
|
}
|
||||||
else
|
else
|
||||||
{
|
{
|
||||||
if (!found_ref) // If not const key
|
if (!found_ref)
|
||||||
{
|
{ // We found a const key
|
||||||
if (table->quick_keys & ((key_map) 1 << key))
|
if (table->quick_keys & ((key_map) 1 << key))
|
||||||
records= (double) table->quick_rows[key];
|
records= (double) table->quick_rows[key];
|
||||||
else
|
else
|
||||||
records= (double) s->records; // quick_range couldn't use key!
|
records= (double) s->records/rec; // quick_range couldn't use key!
|
||||||
}
|
}
|
||||||
else
|
else
|
||||||
{
|
{
|
||||||
@ -1654,7 +1654,8 @@ find_best(JOIN *join,table_map rest_tables,uint idx,double record_count,
|
|||||||
** than a not unique key
|
** than a not unique key
|
||||||
** Set tmp to (previous record count) * (records / combination)
|
** Set tmp to (previous record count) * (records / combination)
|
||||||
*/
|
*/
|
||||||
if (found_part & 1)
|
if ((found_part & 1) &&
|
||||||
|
!(table->file->option_flag() & HA_ONLY_WHOLE_INDEX))
|
||||||
{
|
{
|
||||||
uint max_key_part=max_part_bit(found_part);
|
uint max_key_part=max_part_bit(found_part);
|
||||||
/* Check if quick_range could determinate how many rows we
|
/* Check if quick_range could determinate how many rows we
|
||||||
|
@@ -176,7 +176,7 @@ int mysql_create_table(THD *thd,const char *db, const char *table_name,
   DBUG_ENTER("mysql_create_table");

   /*
-  ** Check for dupplicate fields and check type of table to create
+  ** Check for duplicate fields and check type of table to create
   */

   if (!fields.elements)
|
|||||||
bool primary_key=0,unique_key=0;
|
bool primary_key=0,unique_key=0;
|
||||||
Key *key;
|
Key *key;
|
||||||
uint tmp;
|
uint tmp;
|
||||||
tmp=max(file->max_keys(), MAX_KEY);
|
tmp=min(file->max_keys(), MAX_KEY);
|
||||||
|
|
||||||
if (key_count > tmp)
|
if (key_count > tmp)
|
||||||
{
|
{
|
||||||
|
@@ -264,6 +264,7 @@ bool my_yyoverflow(short **a, YYSTYPE **b,int *yystacksize);
 %token UDF_RETURNS_SYM
 %token UDF_SONAME_SYM
 %token UDF_SYM
+%token UNION_SYM
 %token UNIQUE_SYM
 %token USAGE
 %token USE_SYM
|
|||||||
| RAID_TYPE EQ raid_types { Lex->create_info.raid_type= $3; Lex->create_info.used_fields|= HA_CREATE_USED_RAID;}
|
| RAID_TYPE EQ raid_types { Lex->create_info.raid_type= $3; Lex->create_info.used_fields|= HA_CREATE_USED_RAID;}
|
||||||
| RAID_CHUNKS EQ ULONG_NUM { Lex->create_info.raid_chunks= $3; Lex->create_info.used_fields|= HA_CREATE_USED_RAID;}
|
| RAID_CHUNKS EQ ULONG_NUM { Lex->create_info.raid_chunks= $3; Lex->create_info.used_fields|= HA_CREATE_USED_RAID;}
|
||||||
| RAID_CHUNKSIZE EQ ULONG_NUM { Lex->create_info.raid_chunksize= $3*RAID_BLOCK_SIZE; Lex->create_info.used_fields|= HA_CREATE_USED_RAID;}
|
| RAID_CHUNKSIZE EQ ULONG_NUM { Lex->create_info.raid_chunksize= $3*RAID_BLOCK_SIZE; Lex->create_info.used_fields|= HA_CREATE_USED_RAID;}
|
||||||
|
| UNION_SYM EQ '(' table_list ')'
|
||||||
|
{
|
||||||
|
/* Move the union list to the merge_list */
|
||||||
|
LEX *lex=Lex;
|
||||||
|
TABLE_LIST *table_list= (TABLE_LIST*) lex->table_list.first;
|
||||||
|
lex->create_info.merge_list= lex->table_list;
|
||||||
|
lex->create_info.merge_list.elements--;
|
||||||
|
lex->create_info.merge_list.first= (byte*) (table_list->next);
|
||||||
|
lex->table_list.elements=1;
|
||||||
|
lex->table_list.next= (byte**) &(table_list->next);
|
||||||
|
table_list->next=0;
|
||||||
|
}
|
||||||
|
|
||||||
table_types:
|
table_types:
|
||||||
ISAM_SYM { $$= DB_TYPE_ISAM; }
|
ISAM_SYM { $$= DB_TYPE_ISAM; }
|
||||||
|
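The new create_table_option rule accepts UNION=(...), which together with TYPE=MERGE should allow a statement along the lines of CREATE TABLE total (...) TYPE=MERGE UNION=(t1,t2). Its action keeps only the table being created in lex->table_list and moves the remaining names into create_info.merge_list, i.e. it splits a singly linked list after its head. A generic sketch of that splice with stand-in types (Node and List are not the server's TABLE_LIST/SQL_LIST):

    #include <cstdio>

    struct Node { const char *name; Node *next; };

    // Tiny stand-in for SQL_LIST: an element count plus a head pointer.
    struct List { unsigned elements; Node *first; };

    // Keep only the head (the table being created) in 'lst' and return the
    // tail (the UNION'ed tables) as its own list, mirroring the yacc action.
    static List split_after_head(List &lst) {
      List merge = { lst.elements - 1, lst.first->next };
      lst.elements = 1;
      lst.first->next = nullptr;
      return merge;
    }

    int main() {
      Node t2{"t2", nullptr}, t1{"t1", &t2}, total{"total", &t1};
      List table_list = { 3, &total };
      List merge_list = split_after_head(table_list);
      for (Node *p = merge_list.first; p; p = p->next)
        std::printf("%s\n", p->name);            // t1, t2
    }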
@@ -41,7 +41,10 @@ CLEANFILES = my-small.cnf \
         my-huge.cnf \
         mysql.spec \
         mysql-@VERSION@.spec \
-        mysql.server
+        mysql-log-rotate \
+        mysql.server \
+        binary-configure


 mysql-@VERSION@.spec: mysql.spec
         rm -f $@
Mode change (Executable file → Normal file, 0 lines changed):
  support-files/binary-configure.sh