3089 Commits

Author SHA1 Message Date
Oleksandr Byelkin
f1102da37a Merge branch '11.8' into 12.0 2025-05-22 09:22:55 +02:00
Oleg Smirnov
fa929a2be6 MDEV-36486 Optimizer hints are resolved against the INSERT part of INSERT..SELECT
When processing queries like
  INSERT INTO t1 (..) SELECT .. FROM t1, t2 ...,

there is a single query block (i.e., a single SELECT_LEX) for both INSERT and
SELECT parts. During hints resolution, when hints are attached to particular
TABLE_LIST's, the search is performed by table name across the whole
query block.
So, if a table mentioned in an optimizer hint is present in the INSERT part,
the hint is attached to that table. This is obviously wrong, as
optimizer hints are supposed to affect only the SELECT part of
an INSERT..SELECT statement.

This commit prevents hints from being attached to tables in the INSERT part
and fixes some other bugs related to INSERT..SELECT statement processing.
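
For illustration (the table names and the specific hint are hypothetical): a
hint that names a table present in both parts should now bind only to its
occurrence in the SELECT part, e.g.

  INSERT INTO t1 (a)
  SELECT /*+ NO_BKA(t1) */ t2.a FROM t1, t2 WHERE t1.a = t2.a;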
2025-05-05 12:02:48 +07:00
Oleg Smirnov
67319f3e8d MDEV-34860 Implement MAX_EXECUTION_TIME hint
It places a limit N (a timeout value in milliseconds) on how long
a statement is permitted to execute before the server terminates it.

Syntax:
SELECT /*+ MAX_EXECUTION_TIME(milliseconds) */ ...

Only top-level SELECT statements support the hint.
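
For example (the table name is hypothetical), to abort a statement after one
second:

  SELECT /*+ MAX_EXECUTION_TIME(1000) */ * FROM big_table;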
2025-05-05 12:02:47 +07:00
Monty
d9c3b775b8 Comment and indentation improvements 2025-04-28 12:59:39 +03:00
Monty
efc5d3f84d rename binlog_show_create_table_() to binlog_show_create_table()
Other things:
- Removed the non-existent binlog_show_create_table() declaration from
  class select_create()
2025-04-28 12:59:39 +03:00
Aleksey Midenkov
1f85eeeb53 MDEV-25292 Refactoring: moved select_field_count into Alter_info.
There is a need in MDEV-25292 to have both C_ALTER_TABLE and
select_field_count in one call. Semantically, creation mode and field
count are two different things. Packing creation mode (negative
constants) and field count (a positive variable) into one parameter was
a lazy hack to avoid adding a second parameter.

select_count does not make sense without alter_info->create_list, so
the natural way is to hold it in Alter_info too. select_count is now
stored in member select_field_count.

Merged and updated by: Monty
2025-04-28 12:59:39 +03:00
Monty
f8ba5ced55 MDEV-36099 Ensure that creation and usage of temporary tables in replication is predictable
MDEV-36563 Assertion `!mysql_bin_log.is_open()' failed in
           THD::mark_tmp_table_as_free_for_reuse

The purpose of this commit is to ensure that creation of and changes to
temporary tables are properly and predictably logged to the binary
log.  It also fixes some bugs where ROW logging was used in MIXED mode
when STATEMENT would be a better (and expected) choice.

In this comment STATEMENT stands for logging to binary log in
STATEMENT format, MIXED stands for MIXED binlog format and ROW for ROW
binlog format.

New rules for logging of temporary tables
- CREATE of temporary tables is now by default binlogged only if
  STATEMENT binlog format is used. If it is binlogged, 1 is stored in
  TABLE_SHARE->table_creation_was_logged. The user can change this
  behavior by setting create_temporary_table_binlog_formats to
  MIXED,STATEMENT, in which case the create is logged in statement
  format also in MIXED mode (as before).
- Changes to temporary tables are binlogged if and only if
  the CREATE was logged. The logging happens under STATEMENT or MIXED.
  If binlog_format=ROW, temporary table changes are not binlogged. A
  temporary table that is changed under ROW is marked as 'not up to
  date in binlog' and no future row changes are logged.  Any future
  statement that uses such a temporary table forces row logging of the
  other tables it modifies.
- DROP TEMPORARY is binlogged only if the CREATE was binlogged.

Changes done:
- Row logging is forced for any statement using temporary tables that
  are not up to date in the binary log.
  (Before, row logging was forced if the user had any temporary table.)
- If there are any changes to the temporary table that are not binlogged,
  the table is marked as not up to date.
- TABLE_SHARE->table_creation_was_logged has a new definition for
  temporary tables:
  0  Table creation was not logged to the binary log
  1  Table creation was logged to the binary log and the table is up to date.
  2  Table creation was logged to the binary log but some changes were
     not logged to the binary log.
  A table is considered not up to date in the binary log if the value is 0 or 2.
- If a multi-table-update or multi-table-delete fails then
  all updated temporary tables are marked as not up to date.
- Enforce row logging if the query is using temporary tables
  that are not up to date.
  Before, row logging was enforced if the user had any
  temporary tables.
- When dropping temporary tables, use IF EXISTS. This ensures
  that the slave will not stop if it crashed and lost the
  temporary tables.
- Remove the comment and version from the DROP /*!4000 TEMPORARY.. generated when
  a connection with open temporary tables closes. Added 'generated by
  server' at the end of the DROP.

Bugs fixed:
- When using temporary tables with commands that forced row-based logging,
  like INSERT INTO temporary_table VALUES (UUID()), the change was never
  logged, which caused the temporary table to be inconsistent on
  master and slave.
- The binlog format used is now clearly defined. It now depends only
  on the current binlog_format and the tables used.
  Before, it depended on whether the user had ANY temporary tables and on
  the state of 'current_stmt_binlog_format' set by previous queries.
  This also caused temporary tables to be logged to the binary log in
  some cases.
- CREATE TABLE t1 LIKE not_logged_temporary_table caused replication
  to stop.
- Renames of non-binlogged temporary tables were written to the binary
  log, which caused replication to stop.

Changes in behavior:

- By default create_temporary_table_binlog_formats=STATEMENT, which
  means that CREATE TEMPORARY is not logged to the binary log under MIXED
  binary logging. This can be changed by setting
  create_temporary_table_binlog_formats to MIXED,STATEMENT.
- Using temporary tables that were not logged to the binary log will
  cause any query that uses them to update other tables to be logged in
  ROW format. Before, all queries were logged in ROW format if the user had
  any temporary tables, even if they were not used by the query.
- The generated DROP TEMPORARY TABLE now always uses IF EXISTS and
  has a "generated by server" comment in the binary log.

The consequence of the above is that manipulating a lot of rows
through temporary tables will by default be slower in mixed mode.

For example:
  BEGIN;
  CREATE TEMPORARY TABLE tmp AS SELECT a, b, c FROM
  large_table1 JOIN large_table2 ON ...;
  INSERT INTO other_table SELECT b, c FROM tmp WHERE a <100;
  DROP TEMPORARY TABLE tmp;
  COMMIT;

By default this will create a huge entry in the binary log, compared
to just a few hundred bytes in statement mode. However, the change in
this commit makes the usage of temporary tables more reliable and
predictable and is thus worth it. Using statement mode or
create_temporary_table_binlog_formats can be used to avoid this issue.
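
For example, to restore the previous behaviour of also logging CREATE
TEMPORARY under MIXED mode (the variable name comes from this commit; the
GLOBAL scope shown here is an assumption):

  SET GLOBAL create_temporary_table_binlog_formats='MIXED,STATEMENT';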
2025-04-28 12:59:38 +03:00
Sergei Golubchik
237e24497b Merge remote-tracking branch 'github/bb-11.4-release' into bb-11.8-serg 2025-04-27 19:40:00 +02:00
Oleksandr Byelkin
a8d4642375 Merge branch '10.11' into 11.4 2025-04-26 10:53:02 +02:00
Oleksandr Byelkin
4d41ec081e Merge branch '10.6' into 10.11 2025-04-26 10:47:03 +02:00
Oleksandr Byelkin
19644f6821 Merge branch '10.5' into 10.6 2025-04-26 10:41:52 +02:00
Andrei Elkin
a0b77eb806 MDEV-36685 CREATE-SELECT may lose in binlog side-effects of stored-routine
When the SELECT sub-statement executes a stored function that is defined
to modify a non-transactional table, like

   delimiter |;
   create function f_ia(arg int)
   returns integer
   begin
     insert into ti_pk set a=1;
     insert into ta set a=1;
     insert into ti_pk set a=arg;
     return 1;
   end |
   delimiter ;|

any records that the function succeeded in modifying must be
binlogged as a "side effect" of CREATE-SELECT.

It is expected that a failing CREATE-SELECT like

  --error ER_DUP_ENTRY
  set statement binlog_format = ROW for create table t_y (a int) engine=aria select f_ia(1 /* err in Innodb after Aria stmt is done */) as a;

leaves upon itself the following state:

  include/show_binlog_events.inc
  Log_name	Pos	Event_type	Server_id	End_log_pos	Info
  master-bin.000001	#	Gtid	#	#	BEGIN GTID #-#-#
  master-bin.000001	#	Table_map	#	#
  table_id: # (test.  ta)
  master-bin.000001	#	Write_rows_v1	#	#
  table_id: # flags:   STMT_END_F
  master-bin.000001	#	Query	#	#	COMMIT
  select * from ta;
  a
  1
  select count(*) = 0 from ti_pk;
  true

However it's not so for the binlog part.
The reason is that prior to the MDEV-34150 fixes, the CREATE-SELECT's
errored phase left the binlog caches intact (the file:pos is from 10.11 c06c36218a5)

to defer their reset to the rollback phase of the top-level statement

/* the statement cache gets binlogged */

where the side-effect changes get binlogged.

The MDEV-34150 fixes harmed (the +#4 frame below) the statement cache in particular
in the error phase (file:pos are from 395db6f1d53, the current 11.8):

/* The caches incl the statement cache are gone */
/* 'cos of MDEV-34150 */
+#4  0x00005d75f9b6a92e in THD::binlog_remove_rows_events (this=0x52c000240288) at log.cc:579

Apparently it should not have been there, as proper emptying (either
a reset for the transactional cache, or a flush and then reset for the
statement cache) is (must be) always done via binlog_rollback of the
top-level statement.
To satisfy the above requirement, the case is fixed by removing
thd->binlog_remove_rows_events() and its definition.

Tested with rpl.rpl_create_select_row.

Reviewed-by: Brandon Nesterenko.
2025-04-25 21:26:35 +03:00
Oleksandr Byelkin
4fc9dc84b0 MDEV-32086 (part 2) Server crash when inserting from derived table containing insert target table
Get rid of the need for materialization for a usual INSERT (cache results in
Item_cache* if needed):

- subqueries in VALUES do not see new records in the table we are
  inserting into
- subqueries in RETURNING are prohibited from using the table we are
  inserting into
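
A minimal sketch of the first point (table and column names are hypothetical,
and the exact set of subquery forms accepted is an assumption based on this
description):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  -- the subquery sees only the pre-statement contents of t1, so this
  -- inserts 3 and does not observe rows added by the same statement
  INSERT INTO t1 VALUES ((SELECT MAX(a) FROM t1) + 1);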
2025-04-25 15:10:36 +02:00
Oleksandr Byelkin
9b313d2de1 MDEV-32086 Server crash when inserting from derived table containing insert target table
Use the original solution for INSERT ... SELECT: buffering of the select result.

Also fix MDEV-36447 and MDEV-33139
2025-04-25 12:58:24 +02:00
Dmitry Shulga
ecb7c9b692 MDEV-10164: Add support for TRIGGERS that fire on multiple events
Added the capability to create a trigger associated with several trigger
events. For this goal, the syntax of the CREATE TRIGGER statement
was extended to support the syntax structure { event [ OR ... ] }
for the `trigger_event` clause. Since one trigger can now handle
several events, there must be a way to determine what kind of event
is being handled during execution of the trigger. For this goal,
support for the clauses INSERTING, UPDATING, DELETING was added by
this patch. These clauses can be used inside a trigger body to detect
what kind of trigger action is currently being processed, using the following
boilerplate:
  IF INSERTING THEN ...
  ELSIF UPDATING THEN ...
  ELSIF DELETING THEN ...
If one of the clauses INSERTING, UPDATING, DELETING specified in
a trigger's body does not match the trigger's event types, the error
ER_INCOMPATIBLE_EVENT_FLAG is emitted.

After this patch is pushed, one Trigger object can be associated with
several trigger events. It means that the array
  Table_triggers_list::triggers
can contain several pointers to the same Trigger object in array members
corresponding to different events. Moreover, support of several trigger
events for the same trigger requires that the data members `next` and
`action_order` of the Trigger class be converted to arrays that store
the related information per trigger event.

The ability to specify the same trigger for different event types results in
the need to handle invalid cases on execution of a multi-event
trigger, when the OLD or NEW qualifier doesn't match the current event
type the trigger is run for. The OLD qualifier should produce
the NULL value for an INSERT event, whereas the NEW qualifier should produce
the NULL value for a DELETE event.
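
A hedged example of the extended syntax (table, column and trigger names are
hypothetical; an appropriate DELIMITER is assumed):

  CREATE TRIGGER t1_multi
  BEFORE INSERT OR UPDATE OR DELETE ON t1
  FOR EACH ROW
  BEGIN
    IF INSERTING THEN
      SET NEW.created_at = NOW();
    ELSIF UPDATING THEN
      SET NEW.updated_at = NOW();
    ELSIF DELETING THEN
      INSERT INTO t1_audit VALUES (OLD.id, NOW());
    END IF;
  END;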
2025-04-19 18:36:03 +07:00
Thirunarayanan Balathandayuthapani
f388222d49 MDEV-36504 Memory leak after CREATE TABLE..SELECT
Problem:
========
- After commit cc8eefb0dca1372378905fbae11044f20364c42d (MDEV-33087),
InnoDB uses the bulk insert operation for ALTER TABLE..ALGORITHM=COPY
and CREATE TABLE..SELECT as well. InnoDB fails to clear the bulk
buffer when it encounters an error during CREATE..SELECT. The problem
is that during transaction cleanup, InnoDB fails to identify
the bulk insert as belonging to a DDL operation.

Fix:
====
- Represent bulk_insert in trx by 2 bits. By doing that, InnoDB
can distinguish between TRX_DML_BULK and TRX_DDL_BULK. During DDL,
set the bulk insert value for the transaction to TRX_DDL_BULK.

- Introduce a parameter HA_EXTRA_ABORT_ALTER_COPY which rolls back
only a TRX_DDL_BULK transaction.

- bulk_insert_apply() for a TRX_DDL_BULK transaction happens
only during the HA_EXTRA_END_ALTER_COPY extra() call.
2025-04-17 12:04:09 +05:30
Andrei Elkin
1b4efbeb8c MDEV-35207 ignored error at binlogging by CREATE-TABLE-SELECT leads to assert
MDEV-35499 Errored-out CREATE-or-REPLACE-SELECT does not log DROP table into binlog
MDEV-35502 Failed at ROW-format binlogging CREATE-TABLE-SELECT should
           not generate Incident event

When a CREATE TABLE .. SELECT errors while inserting data, a user
would expect that all changes are rolled back
and the table would not exist after executing the query.

However CREATE-TABLE-SELECT can face an error near the end of its execution,
in select_create::send_eof(), and that error was never checked, which
led to various asserts inside the binlogging path that should never
have been reached.
Specifically, when binlog_commit() of ha_commit_one_phase(), which
CREATE-TABLE-SELECT employs, errored out because of a limited cache size
(binlog_commit may try writing to a transactional cache), the cache
was not flushed to the binlog. The missed error check allowed further
execution down to trans_commit_implicit(), in whose stack
execution down to trans_commit_implicit() in whose stack

  DBUG_ASSERT(!(entry->using_trx_cache && !mngr->trx_cache.empty() &&
                mngr->get_binlog_cache_log(TRUE)->error));

fired. In a non-debug build the table remained created/populated
inconsistently with the binlog.

The fix installs the missing error check in select_create::send_eof().
That prevents any further execution when ha_commit_one_phase() fails
for any reason (typically due to binlog_commit()).

This commit also covers CREATE-or-REPLACE-SELECT, which additionally had
a specific issue in that DROP TABLE was not logged to the binary log, MDEV-35499.
See the changes in select_create::abort_result_set().

The current commit also corrects unnecessary Incident event
logging when CREATE-TABLE-SELECT encounters a binlogging issue, MDEV-35502.
The Incident was actually only harmful in this case, as the table was
never going to be created, and therefore never replicated.
In "normal" cases, when the SELECT phase errors due to binlogging, an
internal incident flag gets reset inside select_create::abort_result_set().

A hunk in select_insert::prepare_eof() addresses a specific variant of
this issue that deals with incorrect computation of the binlog cache type.
Because of that, in the OLD version execution was allowed to proceed along
ha_commit_trans()..binlog_commit() while a Pending event had not been
flushed to the transactional cache. That might lead to an unnecessary
binlogged Incident despite the select_create::abort_result_set()
measures. Now, with the corrected cache type, any binlogging error
when flushing the Pending event is handled as in the normal case.

NOTE: the commit contains a few tests overlapping with the yet-unfixed MDEV-36027.

Thanks to Brandon Nesterenko and Kristian Nielsen for thorough review,
and Kristian additionally for ideas to simplify the patch and some
code contribution.
2025-04-03 19:00:02 +03:00
Marko Mäkelä
f5bd250f5b Merge 10.11 into 11.4 2025-03-28 13:55:21 +02:00
Marko Mäkelä
ab0f2a00b6 Merge 10.6 into 10.11 2025-03-27 08:01:47 +02:00
Vasilii Lakhin
717c12de0e Fix typos in C comments inside sql/ 2025-03-14 12:08:56 +04:00
Julius Goryavsky
e3d7d5ca26 Merge branch '10.5' into '10.6' 2025-02-27 04:02:33 +01:00
Sergei Golubchik
9ee09a33bb Merge branch '11.7' into 11.8 2025-02-11 20:29:43 +01:00
Sergei Golubchik
cd7513995d MDEV-36026 Problem with INSERT SELECT on NOT NULL columns while having BEFORE UPDATE trigger
MDEV-8605 and MDEV-19761 didn't handle INSERT (columns) SELECT

followup for a69da0c31e9
2025-02-10 15:55:00 +01:00
Sergei Golubchik
2d971709a8 MDEV-36026 Problem with INSERT SELECT on NOT NULL columns while having BEFORE UPDATE trigger
MDEV-8605 and MDEV-19761 didn't handle INSERT (columns) SELECT

followup for a69da0c31e9
2025-02-10 15:26:08 +01:00
Sergei Golubchik
ba01c2aaf0 Merge branch '11.4' into 11.7
* rpl.rpl_system_versioning_partitions updated for MDEV-32188
* innodb.row_size_error_log_warnings_3 changed error for MDEV-33658
  (checks are done in a different order)
2025-02-06 16:46:36 +01:00
Sergei Golubchik
7d657fda64 Merge branch '10.11' into 11.4 2025-01-30 12:01:11 +01:00
Sergei Golubchik
e69f8cae1a Merge branch '10.6' into 10.11 2025-01-30 11:55:13 +01:00
Sergei Golubchik
066e8d6aea Merge branch '10.5' into 10.6 2025-01-29 11:17:38 +01:00
Sergei Golubchik
9a0ac0cdf7 MDEV-35911 Assertion `marked_for_write_or_computed()' failed in bool Field_new_decimal::store_value(const my_decimal*, int*)
Disable the assert.

Also, use the same check for check_that_all_fields_are_given_values()
as is used in not_null_fields_have_null_values(), to avoid
issuing the same warning twice.
2025-01-28 19:29:54 +01:00
Dmitry Shulga
4c956fa15b MDEV-34724: Skipping a row operation from a trigger
The implementation of this task adds the ability to raise a signal with
SQLSTATE '02TRG' from a BEFORE INSERT/UPDATE/DELETE trigger and handles
this signal as an indicator meaning 'throw away the current row'
while processing the INSERT/UPDATE/DELETE statement. The signal with
SQLSTATE '02TRG' has a special meaning only in case it is raised inside
BEFORE triggers; for AFTER triggers this value of SQLSTATE isn't treated
in any special way. In accordance with the SQL standard, the SQLSTATE class '02'
means NO DATA, and sql_errno for this class is set to the value
ER_SIGNAL_NOT_FOUND by the current implementation of the MariaDB server.
This task assigns the value ER_SIGNAL_SKIP_ROW_FROM_TRIGGER
to sql_errno in the Diagnostics_area in case the signal is raised from a trigger
and SQLSTATE has the value '02TRG'.

To catch a signal with SQLSTATE '02TRG' and handle it in a special way, the methods
 Table_triggers_list::process_triggers
 select_insert::store_values
 select_create::store_values
 Rows_log_event::process_triggers
and the overloaded function
 fill_record_n_invoke_before_triggers
were extended with an extra out parameter returning a flag for whether
to skip the current values being processed by the INSERT/UPDATE/DELETE
statement. This extra parameter is passed as nullptr in case of an AFTER
trigger; for a BEFORE trigger the parameter points to a variable storing a
marker for whether to skip the current record or store it by calling write_record().
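
A hedged sketch of the user-visible part (table, column and trigger names are
hypothetical; an appropriate DELIMITER is assumed):

  CREATE TRIGGER t1_skip_negative
  BEFORE INSERT ON t1
  FOR EACH ROW
  BEGIN
    IF NEW.a < 0 THEN
      SIGNAL SQLSTATE '02TRG';  -- silently discard this row
    END IF;
  END;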
2025-01-27 16:30:27 +07:00
Jan Lindström
43c36b3c88 MDEV-35852 : ASAN heap-use-after-free in WSREP_DEBUG after INSERT DELAYED
The problem was that in case of INSERT DELAYED, thd->query() is
freed before we call trans_rollback, where WSREP_DEBUG
could access thd->query() via wsrep_thd_query().

The fix is to reset thd->query() to NULL in the delayed_insert
destructor after it is freed. There is already a
null guard in wsrep_thd_query().

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
2025-01-20 12:19:31 +01:00
Marko Mäkelä
98dbe3bfaf Merge 10.5 into 10.6 2025-01-20 09:57:37 +02:00
Sergei Golubchik
a69da0c31e MDEV-19761 Before Trigger not processed for Not Null Columns if no explicit value and no DEFAULT
It's incorrect to zero out table->triggers->extra_null_bitmap
before a statement, because if an insert uses an explicit field list
and omits a field that has no default value, the field should
get NULL implicitly. So extra_null_bitmap should have 1s for all
fields that have no defaults.

* create extra_null_bitmap_init and initialize it as above
* copy extra_null_bitmap_init to extra_null_bitmap for inserts
* still zero out extra_null_bitmap for updates/deletes where
  all fields definitely have a value
* make not_null_fields_have_null_values() send
  ER_NO_DEFAULT_FOR_FIELD for fields with no default and no value,
  otherwise creation of a trigger with an empty body would change the
  error message
2025-01-17 23:42:56 +01:00
Aleksey Midenkov
78c192644c MDEV-35343 unexpected replace behaviour when long unique index on
system versioned table

For a versioned table, REPLACE first tries to insert a row; if it gets a
duplicate key error and the optimization is possible, it does UPDATE +
INSERT of history. If the optimization is not possible, it takes the normal
branch of UPDATE to history and repeats the INSERT cycle.

The failure was in the normal branch, when we tried the UPDATE to history
but such history already existed from previous cycles. There are no such
failures in the optimized branch because vers_insert_history_row() already
ignores duplicates.

The fix ignores duplicate errors for the UPDATE to history and does a DELETE
instead.
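
A hedged reproduction sketch (table and column names are hypothetical; the
long unique index is modelled as a UNIQUE key on a TEXT column):

  CREATE TABLE t1 (a INT PRIMARY KEY, b TEXT UNIQUE) WITH SYSTEM VERSIONING;
  REPLACE INTO t1 VALUES (1, 'x');
  REPLACE INTO t1 VALUES (1, 'x');
  REPLACE INTO t1 VALUES (1, 'x');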
2025-01-14 18:56:13 +03:00
Aleksey Midenkov
92383f8db1 MDEV-26891 Segfault in Field::register_field_in_read_map upon INSERT
DELAYED with virtual columns

Segfault was cause by two different copies of same Field instance in
prepared delayed insert. One was made by
Delayed_insert::get_local_table() (see make_new_field()). That copy
went through parse_vcol_defs() and received new vcol_info->expr.

Another one was made by copy_keys_from_share() by this code:

        /*
          We are using only a prefix of the column as a key:
          Create a new field for the key part that matches the index
        */
        field= key_part->field=field->make_new_field(root, outparam, 0);
        field->field_length= key_part->length;

So, the key_part and the table got different objects of the same field, and
the crash happened because key_part->field->vcol_info->expr was NULL.

The fix makes update_keypart_vcol_info() update vcol_info->expr in
key_part->field.

Cleanup: memdup_vcol() is now a static inline function instead of a macro, and checks for OOM.
2025-01-14 18:56:13 +03:00
Oleksandr Byelkin
0d35fe6e57 MDEV-35326: Memory Leak in init_io_cache_ext upon SHUTDOWN
The problems were that:
1) resources were freed "asymmetrically": in normal execution in send_eof,
 and in case of error in the destructor.
2) the destructor was not called in case of an SP for result objects
(so if the last SP execution ended with an error, resources were not
freed on re-init before execution (cleanup() is called before the next
execution) and the destructor was also not called due to the lack of a
delete call for the object).

The result cleanup() is renamed to reset_for_next_ps_execution() to better
reflect its function.

All result methods were revised and the freeing of resources made "symmetric".

The destructor of the result object is called for SPs.

Added the skipped invalidation in case of error in insert.

Removed the misleading naming of reset(thd) (could be confused
with reset()).
2025-01-13 10:04:27 +01:00
Marko Mäkelä
15700f54c2 Merge 11.4 into 11.7 2025-01-09 09:41:38 +02:00
Marko Mäkelä
17f01186f5 Merge 10.11 into 11.4 2025-01-09 07:58:08 +02:00
Marko Mäkelä
3f914afd3a Merge 10.6 into 10.11 2025-01-02 12:39:56 +02:00
Yuchen Pei
671f80c738 Merge branch '10.5' into 10.6 2024-12-17 11:06:09 +11:00
Oleg Smirnov
d98ac8511e MDEV-26247 MariaDB Server SEGV on INSERT .. SELECT
This problem occurred for statements like `INSERT INTO t1 SELECT 1`,
which do not have tables in the SELECT part. In such scenarios
SELECT_LEX::insert_tables was not properly set at `setup_tables()`,
and this led to either incorrect execution or a crash.

Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
2024-12-14 14:04:21 +07:00
Oleg Smirnov
e640373389 Revert "MDEV-26427 MariaDB Server SEGV on INSERT .. SELECT"
This reverts commit 49e14000eeb245ea27e9207d2f63cb0a28be1ca9
as it introduces regression MDEV-29935 and has to be reconsidered
in general
2024-12-14 13:08:17 +07:00
Dmitry Shulga
54c1031b74 MDEV-34958: after Trigger doesn't work correctly with bulk insert
This bug has the same nature as the issues
  MDEV-34718: Trigger doesn't work correctly with bulk update
  MDEV-24411: Trigger doesn't work correctly with bulk insert

To fix the issue covering all use cases, resetting thd->bulk_param
temporarily to nullptr before invoking triggers and restoring
its original value on finishing execution of the trigger is moved to the method
  Table_triggers_list::process_triggers
which is ultimately invoked for any kind of trigger.
2024-12-13 16:19:39 +07:00
Oleksandr Byelkin
b12ff287ec Merge branch '11.6' into 11.7 2024-11-10 19:22:21 +01:00
Oleksandr Byelkin
9e1fb104a3 MariaDB 11.4.4 release

Merge tag '11.4' into 11.6

MariaDB 11.4.4 release
2024-11-08 07:17:00 +01:00
Sergey Vojtovich
883fb66cd4 MDEV-35130 Assertion fails in trx_t::check_bulk_buffer upon CREATE.. SELECT with vector key
Similarly to "ALTER TABLE fixes for high-level indexes", don't enable bulk
insert when issuing create ... insert into a table containing vector
index. InnoDB can't handle situation when bulk insert is enabled for
one table but disabled for another. We can't do bulk insert on vector
index as it does table updates currently.
2024-11-05 14:00:52 -08:00
Sergey Vojtovich
3283688797 Simplified quick_rm_table() and mysql_rename_table()
Replaced obscure FRM_ONLY, NO_FRM_RENAME, NO_HA_TABLE, NO_PAR_TABLE with
straightforward explicit flags:

QRMT_FRM - [re]moves .frm
QRMT_PAR - [re]moves .par
QRMT_HANDLER - calls ha_delete_table()/ha_rename_table() and [re]moves
               high-level indexes
QRMT_DEFAULT - same as QRMT_FRM | QRMT_HANDLER, which is regular table
               drop/rename.
2024-11-05 14:00:50 -08:00
Sergei Golubchik
f2512c0fa8 cleanup: prepare_for_insert() -> prepare_for_modify()
make handler::prepare_for_insert() be called to prepare
the handler for writes: INSERT/UPDATE/DELETE.
2024-11-05 14:00:49 -08:00
Sergei Golubchik
d6add9a03d initial support for vector indexes
MDEV-33407 Parser support for vector indexes

The syntax is

  create table t1 (... vector index (v) ...);

limitations:
* v is a binary string and NOT NULL
* only one vector index per table
* temporary tables are not supported
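
A hedged concrete form of the above (table and column names are hypothetical;
the exact column type accepted for v is an assumption based on the "binary
string and NOT NULL" limitation):

  CREATE TABLE items (
    id INT PRIMARY KEY,
    v BLOB NOT NULL,
    VECTOR INDEX (v)
  );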

MDEV-33404 Engine-independent indexes: subtable method

added support for so-called "high-level indexes"; they are not visible
to the storage engine and are implemented at the SQL level. For every such
index in a table, say, t1, the server implicitly creates a second
table named like t1#i#05 (where "05" is the index number in t1).
This table has a fixed structure, no frm, is not accessible directly,
doesn't go into the table cache, and needs no MDLs.

MDEV-33406 basic optimizer support for k-NN searches

for a query like SELECT ... ORDER BY func() the optimizer will use
item_func->part_of_sortkey() to decide what keys can be used
to resolve the ORDER BY.
2024-11-05 14:00:48 -08:00
Sergei Golubchik
08a7f18b19 cleanup: init_tmp_table_share(bool thread_specific)
let the caller tell init_tmp_table_share() whether the table
should be thread_specific or not.

In particular, internal tmp tables created in the slave thread
are perfectly thread-specific.
2024-11-05 14:00:48 -08:00