Patch that changes the metadata locking subsystem to use a mutex per lock
and a condition variable per context instead of one mutex and one
condition variable for the whole subsystem.

This should increase concurrency in this subsystem.
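
As a rough illustration of the new scheme (a minimal sketch with hypothetical names, not the actual MDL code in sql/mdl.cc): each lock object gets its own mutex protecting its state and waiter queue, while each context gets its own condition variable that other contexts signal when the state of a lock they wait on changes.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical names for illustration only; the real classes are
// MDL_context and MDL_lock in sql/mdl.cc.
struct Context {
  std::mutex m;
  std::condition_variable wakeup;  // one condition variable per context
  bool granted = false;
  void wait() {
    std::unique_lock<std::mutex> lk(m);
    wakeup.wait(lk, [this] { return granted; });
  }
  void awake() {  // called by another context that changed a lock's state
    std::lock_guard<std::mutex> lk(m);
    granted = true;
    wakeup.notify_one();
  }
};

struct Lock {
  std::mutex m;  // one mutex per lock object, not one for the subsystem
  bool held = false;
  std::vector<Context*> waiters;
  bool try_acquire(Context* ctx) {
    std::lock_guard<std::mutex> lk(m);
    if (!held) {
      held = true;
      return true;
    }
    waiters.push_back(ctx);  // queue the waiter on this particular lock
    return false;
  }
  void release() {
    std::vector<Context*> to_wake;
    {
      std::lock_guard<std::mutex> lk(m);
      held = false;
      to_wake.swap(waiters);
    }
    // Signal each waiter's own condition variable; unrelated locks and
    // contexts are never touched, which is where the concurrency gain
    // over a single subsystem-wide mutex/condvar comes from.
    for (Context* c : to_wake) c->awake();
  }
};
```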

It also opens the way for further changes which are necessary to solve
such bugs as bug #46272 "MySQL 5.4.4, new MDL: unnecessary deadlock"
and bug #37346 "innodb does not detect deadlock between update and alter
table".

Two other notable changes done by this patch:

- The MDL subsystem no longer implicitly acquires the global intention
  exclusive metadata lock when a per-object metadata lock is acquired.
  Now this has to be done by explicit calls outside of the MDL subsystem.
- Instead of using a separate MDL_context for opening system tables and
  tables for I_S purposes, we now create an MDL savepoint in the main
  context before opening tables and roll back to this savepoint after
  closing them. This means that it is now possible to get an
  ER_LOCK_DEADLOCK error even outside of a transaction. This might
  happen in the unlikely case when one runs DDL on one of the system
  tables while also running DDL on some other tables. Cases when this
  ER_LOCK_DEADLOCK error is not justified will be addressed by the
  advanced deadlock detector for the MDL subsystem which we plan to
  implement.
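
The savepoint approach from the second bullet can be modelled with a toy ticket list (the names here are illustrative, not the actual MDL API; the real interface is MDL_context with MDL_ticket objects in sql/mdl.h): a savepoint is just a position in the list of acquired tickets, and rolling back releases only the tickets acquired after it.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of the savepoint idea described above; hypothetical class.
class TicketList {
 public:
  using Savepoint = std::size_t;  // a position in the ticket list
  Savepoint savepoint() const { return tickets_.size(); }
  void acquire(const std::string& name) { tickets_.push_back(name); }
  // Roll back: release only the tickets acquired after the savepoint,
  // keeping everything the transaction had locked before it.
  void rollback_to(Savepoint sp) { tickets_.resize(sp); }
  std::size_t count() const { return tickets_.size(); }

 private:
  std::vector<std::string> tickets_;  // acquired metadata locks, in order
};
```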

mysql-test/include/handler.inc:
  Adjusted handler_myisam.test and handler_innodb.test to the fact that
  exclusive metadata locks on tables are now acquired in alphabetical
  order of fully qualified table names instead of the order in which
  tables are mentioned in the statement.
mysql-test/r/handler_innodb.result:
  Adjusted handler_myisam.test and handler_innodb.test to the fact that
  exclusive metadata locks on tables are now acquired in alphabetical
  order of fully qualified table names instead of the order in which
  tables are mentioned in the statement.
mysql-test/r/handler_myisam.result:
  Adjusted handler_myisam.test and handler_innodb.test to the fact that
  exclusive metadata locks on tables are now acquired in alphabetical
  order of fully qualified table names instead of the order in which
  tables are mentioned in the statement.
mysql-test/r/mdl_sync.result:
  Adjusted mdl_sync.test to the fact that exclusive metadata locks on
  tables are now acquired in alphabetical order of fully qualified
  table names instead of the order in which tables are mentioned in
  the statement.
mysql-test/t/mdl_sync.test:
  Adjusted mdl_sync.test to the fact that exclusive metadata locks on
  tables are now acquired in alphabetical order of fully qualified
  table names instead of the order in which tables are mentioned in
  the statement.
sql/events.cc:
  Instead of using a separate MDL_context for opening system tables, we
  now create an MDL savepoint in the main context before opening such
  tables and roll back to this savepoint after closing them. To support
  this change, the methods of THD responsible for saving/restoring open
  table state were changed to use the Open_tables_backup class, which in
  addition to Open_table_state has a member for this savepoint. As a
  result, the code opening/closing system tables was changed to use
  Open_tables_backup instead of the Open_table_state class as well.
sql/ha_ndbcluster.cc:
  Since manipulations with open table state no longer install a proxy
  MDL_context, it does not make sense to perform them in order to
  satisfy the assert in mysql_rm_tables_part2(). Removed them per
  agreement with the Cluster team. This has not broken the test suite,
  since the scenario in which the deadlock can occur and the assertion
  fails is not covered by tests.
sql/lock.cc:
  The MDL subsystem no longer implicitly acquires the global intention
  exclusive metadata lock when a per-object exclusive metadata lock is
  acquired. Now this has to be done by explicit calls outside of the
  MDL subsystem.
sql/log.cc:
  Instead of using a separate MDL_context for opening system tables, we
  now create an MDL savepoint in the main context before opening such
  tables and roll back to this savepoint after closing them. To support
  this change, the methods of THD responsible for saving/restoring open
  table state were changed to use the Open_tables_backup class, which in
  addition to Open_table_state has a member for this savepoint. As a
  result, the code opening/closing system tables was changed to use
  Open_tables_backup instead of the Open_table_state class as well.
sql/mdl.cc:
  Changed the metadata locking subsystem to use a mutex per lock and a
  condition variable per context instead of one mutex and one condition
  variable for the whole subsystem.
  Changed the approach to handling of global metadata locks. Instead of
  implicitly acquiring intention exclusive locks when the user requests
  per-object upgradeable or exclusive locks, we now require them to be
  acquired explicitly, in the same way as ordinary metadata locks.
  In fact, global locks are now ordinary metadata locks in the new
  GLOBAL namespace.
  
  To implement these changes:
  - Removed the LOCK_mdl mutex and the COND_mdl condition variable.
  - Introduced MDL_lock::m_mutex mutexes which protect individual lock
    objects.
  - Replaced the mdl_locks hash with the MDL_map class, which has a hash
    of MDL_lock objects as a member and a separate mutex which protects
    this hash. Methods of this class allow one to find(),
    find_or_create() or remove() MDL_lock objects in a
    concurrency-friendly fashion (i.e. for the most common operation,
    find_or_create(), we don't acquire MDL_lock::m_mutex while holding
    MDL_map::m_mutex. Thanks to MikaelR for this idea and benchmarks!).
    Added three auxiliary members to the MDL_lock class (m_is_destroyed,
    m_ref_usage, m_ref_release) to support this concurrency-friendly
    behavior.
  - Introduced the MDL_context::m_ctx_wakeup_cond condition variable to
    be used for waiting until this context's pending request can be
    satisfied or its thread has to perform actions to resolve a
    potential deadlock. Contexts which want to wait add a ticket
    corresponding to the request to the appropriate queue of waiters in
    the MDL_lock object, so they can be noticed when other contexts
    change the state of the lock and be woken up by them through
    signalling on MDL_context::m_ctx_wakeup_cond.
    As a consequence, MDL_ticket objects have to be used for any waiting
    in the metadata locking subsystem, including the waiting which
    happens in the MDL_context::wait_for_locks() method.
    Another consequence is that MDL_context is no longer copyable and
    can't be saved/restored when working with system tables.
  - Made MDL_lock an abstract class, which delegates specifying the
    exact compatibility matrix to its descendants. Added the
    MDL_global_lock child class for the global lock (the old
    is_lock_type_compatible() method became the can_grant_lock() method
    of this class). Added the MDL_object_lock class to represent
    per-object locks (the old MDL_lock::can_grant_lock() became its
    method). The choice between the two classes is made based on the
    MDL namespace in the MDL_lock::create() method.
  - Got rid of the MDL_lock::type member as its meaning became
    ambiguous for global locks.
  - To simplify waking up of contexts waiting for a lock, split the
    waiting queue in the MDL_lock class in two: one queue for pending
    requests for exclusive (including intention exclusive) locks and
    another for requests for shared locks.
  - Added a virtual wake_up_waiters() method to the MDL_lock,
    MDL_global_lock and MDL_object_lock classes which allows waking up
    waiting contexts after the state of a lock changes. Replaced old
    duplicated code with calls to this method.
  - Adjusted the MDL_context::try_acquire_shared_lock()/exclusive_lock()/
    global_shared_lock(), MDL_ticket::upgrade_shared_lock_to_exclusive_lock()
    and MDL_context::release_ticket() methods to use MDL_map and
    MDL_lock::m_mutex instead of the single LOCK_mdl mutex, and to wake
    up waiters according to the approach described above. The latter
    method was also renamed to MDL_context::release_lock().
  - Changed MDL_context::try_acquire_shared_lock()/exclusive_lock() and
    release_lock() not to handle global locks. They are now supposed to
    be taken explicitly like ordinary metadata locks.
  - Added helper MDL_context::try_acquire_global_intention_exclusive_lock()
    and acquire_global_intention_exclusive_lock() methods.
  - Moved common code from MDL_context::acquire_global_shared_lock() and
    acquire_global_intention_exclusive_lock() to new method -
    MDL_context::acquire_lock_impl().
  - Moved common code from MDL_context::try_acquire_shared_lock(),
    try_acquire_global_intention_exclusive_lock()/exclusive_lock()
    to MDL_context::try_acquire_lock_impl().
  - Since acquiring several exclusive locks can no longer happen under
    the single LOCK_mdl mutex, the approach to it had to be changed.
    Now we acquire them one by one, in alphabetical order, to avoid
    deadlocks. Changed MDL_context::acquire_exclusive_locks()
    accordingly (as part of this change, moved the code responsible for
    acquiring a single exclusive lock to the new
    MDL_context::acquire_exclusive_lock_impl() method).
  - Since we no longer have a single LOCK_mdl mutex which protects all
    MDL_context::m_is_waiting_in_mdl members, using these members to
    determine whether we have really woken up a context holding a
    conflicting shared lock became inconvenient. Got rid of this member
    and changed the notify_shared_lock() helper function and the
    process of acquiring/upgrading to an exclusive lock not to rely on
    such information. Now, in
    MDL_context::acquire_exclusive_lock_impl() and
    MDL_ticket::upgrade_shared_lock_to_exclusive_lock(), we simply
    re-try to wake up threads holding conflicting shared locks after a
    small timeout.
  - Adjusted MDL_context::can_wait_lead_to_deadlock() and
    MDL_ticket::has_pending_conflicting_lock() to use per-lock
    mutexes instead of LOCK_mdl. To do this, introduced the
    MDL_lock::has_pending_exclusive_lock() method.
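
The one-by-one acquisition above relies on a canonical lock ordering. A minimal sketch of the rule, under the simplifying assumption that fully qualified names compare as plain strings (the real comparison is MDL_key::cmp()):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// If every statement acquires its exclusive locks in one canonical
// (alphabetical) order, no two statements can hold locks in opposite
// orders, so they cannot deadlock against each other while acquiring.
std::vector<std::string> acquisition_order(std::vector<std::string> names) {
  std::sort(names.begin(), names.end());  // fully qualified "db.table" names
  return names;                           // locks are then taken one by one
}
```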
sql/mdl.h:
  Changed the metadata locking subsystem to use a mutex per lock and a
  condition variable per context instead of one mutex and one condition
  variable for the whole subsystem. In order to implement this change:
  
  - Added the MDL_key::cmp() method to be able to sort MDL_key objects
    alphabetically. Changed the length fields in the MDL_key class to
    uint16, as 16 bits are enough for the length of any key.
  - Changed MDL_ticket::get_ctx() to return a pointer to a non-const
    object in order to be able to use the MDL_context::awake() method
    on such contexts.
  - Got rid of the unlocked versions of the can_wait_lead_to_deadlock()/
    has_pending_conflicting_lock() methods in MDL_context and
    MDL_ticket. We no longer have a single mutex which protects all
    locks, so one always has to use the versions of these methods
    which acquire per-lock mutexes.
  - MDL_request_list type of list now counts its elements.
  - Added the MDL_context::m_ctx_wakeup_cond condition variable to be
    used for waiting until this context's pending request can be
    satisfied or its thread has to perform actions to resolve a
    potential deadlock. Added the awake() method to wake up a context
    from such a wait. The addition of the condition variable made
    MDL_context non-copyable. As a result, we can no longer save/restore
    the MDL_context when working with system tables. Instead we create
    an MDL savepoint before opening those tables and roll back to it
    once they are closed.
  - MDL_context::release_ticket() became release_lock() method.
  - Added the auxiliary MDL_context::acquire_exclusive_lock_impl()
    method which does all the work necessary to acquire an exclusive
    lock on one object, but should not be used directly as it does not
    enforce any asserts ensuring that no deadlocks are possible.
  - Since we no longer need to know whether a thread trying to acquire
    an exclusive lock managed to wake up any threads holding conflicting
    shared locks (as we will try to wake up such threads again shortly
    anyway), the MDL_context::m_is_waiting_in_mdl member became
    unnecessary and notify_shared_lock() no longer needs to be a friend
    of MDL_context.
  
  Changed the approach to handling of global metadata locks. Instead of
  implicitly acquiring intention exclusive locks when the user requests
  per-object upgradeable or exclusive locks, we now require them to be
  acquired explicitly, in the same way as ordinary metadata locks.
  
  - Added a new GLOBAL namespace for such locks.
  - Added a new type of lock to be requested, MDL_INTENTION_EXCLUSIVE.
  - Added MDL_context::try_acquire_global_intention_exclusive_lock()
    and acquire_global_intention_exclusive_lock() methods.
  - Moved common code from MDL_context::acquire_global_shared_lock()
    and acquire_global_intention_exclusive_lock() to new method -
    MDL_context::acquire_lock_impl().
  - Moved common code from MDL_context::try_acquire_shared_lock(),
    try_acquire_global_intention_exclusive_lock()/exclusive_lock()
    to MDL_context::try_acquire_lock_impl().
  - Added the helper MDL_context::is_global_lock_owner() method to be
    able to easily find out what kind of global lock this context holds.
  - MDL_context::m_has_global_shared_lock became unnecessary as the
    global read lock is now represented by an ordinary ticket.
  - Removed the assert in MDL_context::set_lt_or_ha_sentinel() which
    became false for cases when we execute LOCK TABLES under the global
    read lock.
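
The resulting GLOBAL-namespace behaviour can be summarized in a tiny compatibility function. This is an assumed simplification for illustration, not the actual matrix from mdl.cc: S stands for the global read lock, IX for the intention exclusive lock taken before per-object upgradable/exclusive locks.

```cpp
#include <cassert>

// Simplified sketch of the GLOBAL-namespace compatibility rules:
// several global read locks (S) may coexist, several writers' intention
// exclusive locks (IX) may coexist, but S and IX conflict, which is how
// the global read lock blocks statements that modify data or metadata.
enum GlobalLockType { GLOBAL_S, GLOBAL_IX };

bool can_grant_global(GlobalLockType requested, GlobalLockType held) {
  return requested == held;  // compatible only with locks of the same type
}
```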
sql/mysql_priv.h:
  Instead of using a separate MDL_context for opening system tables, we
  now create an MDL savepoint in the main context before opening such
  tables and roll back to this savepoint after closing them. To support
  this change, the methods of THD responsible for saving/restoring open
  table state were changed to use the Open_tables_backup class, which in
  addition to Open_table_state has a member for this savepoint. As a
  result, calls opening/closing system tables were changed to use
  Open_tables_backup instead of the Open_table_state class as well.
sql/sp.cc:
  Instead of using a separate MDL_context for opening system tables, we
  now create an MDL savepoint in the main context before opening such
  tables and roll back to this savepoint after closing them. To support
  this change, the methods of THD responsible for saving/restoring open
  table state were changed to use the Open_tables_backup class, which in
  addition to Open_table_state has a member for this savepoint. As a
  result, the code opening/closing system tables was changed to use
  Open_tables_backup instead of the Open_table_state class as well.
sql/sp.h:
  Instead of using a separate MDL_context for opening system tables, we
  now create an MDL savepoint in the main context before opening such
  tables and roll back to this savepoint after closing them. To support
  this change, the methods of THD responsible for saving/restoring open
  table state were changed to use the Open_tables_backup class, which in
  addition to Open_table_state has a member for this savepoint. As a
  result, the code opening/closing system tables was changed to use
  Open_tables_backup instead of the Open_table_state class as well.
sql/sql_base.cc:
  close_thread_tables():
    Since we no longer use a separate MDL_context for opening system
    tables, we need to avoid releasing all of the transaction's locks
    when closing a system table. Releasing the metadata lock on a
    system table is now the responsibility of
    THD::restore_backup_open_tables_state().
  open_table_get_mdl_lock(),
  Open_table_context::recover_from_failed_open():
    The MDL subsystem no longer implicitly acquires the global intention
    exclusive metadata lock when a per-object upgradable or exclusive
    metadata lock is acquired, so this has to be done explicitly by
    these calls. Changed the Open_table_context class to store the
    MDL_request object for the global intention exclusive lock acquired
    when opening tables.
  open_table():
    Do not release the metadata lock if we have failed to open the
    table, as this lock might have been acquired by one of the previous
    statements in the transaction, and therefore should not be released.
  open_system_tables_for_read()/close_system_tables()/
  open_performance_schema_table():
    Instead of using a separate MDL_context for opening system tables,
    we now create an MDL savepoint in the main context before opening
    such tables and roll back to this savepoint after closing them. To
    support this change, the methods of THD responsible for
    saving/restoring open table state were changed to use the
    Open_tables_backup class, which in addition to Open_table_state has
    a member for this savepoint. As a result, the code opening/closing
    system tables was changed to use Open_tables_backup instead of the
    Open_table_state class as well.
  close_performance_schema_table():
    Got rid of duplicated code.
sql/sql_class.cc:
  Instead of using a separate MDL_context for opening system tables, we
  now create an MDL savepoint in the main context before opening such
  tables and roll back to this savepoint after closing them. To support
  this change, the methods of THD responsible for saving/restoring open
  table state were changed to use the Open_tables_backup class, which in
  addition to Open_table_state has a member for this savepoint. Also,
  releasing the metadata lock on a system table is now the
  responsibility of THD::restore_backup_open_tables_state().
  Adjusted the assert in THD::cleanup() to take into account the fact
  that we now also use the MDL sentinel for the global read lock.
sql/sql_class.h:
  Instead of using a separate MDL_context for opening system tables, we
  now create an MDL savepoint in the main context before opening such
  tables and roll back to this savepoint after closing them. As a
  result:
  - The 'mdl_context' member was moved out of Open_tables_state into the
    THD class. enter_locked_tables_mode()/leave_locked_tables_mode()
    had to follow.
  - The methods of THD responsible for saving/restoring open table
    state were changed to use the Open_tables_backup class, which in
    addition to Open_table_state has a member for this savepoint.
  Changed the Open_table_context class to store the MDL_request object
  for the global intention exclusive lock acquired when opening tables.
sql/sql_delete.cc:
  The MDL subsystem no longer implicitly acquires the global intention
  exclusive metadata lock when a per-object exclusive metadata lock is
  acquired. Now this has to be done by explicit calls outside of the
  MDL subsystem.
sql/sql_help.cc:
  Instead of using a separate MDL_context for opening system tables, we
  now create an MDL savepoint in the main context before opening such
  tables and roll back to this savepoint after closing them. To support
  this change, the methods of THD responsible for saving/restoring open
  table state were changed to use the Open_tables_backup class, which in
  addition to Open_table_state has a member for this savepoint. As a
  result, the code opening/closing system tables was changed to use
  Open_tables_backup instead of the Open_table_state class as well.
sql/sql_parse.cc:
  Adjusted the assert in reload_acl_and_cache() to the fact that the
  global read lock now takes a full-blown metadata lock.
sql/sql_plist.h:
  Added support for element counting to the I_P_List list template.
  One can use policy classes to specify whether such counting is
  needed for a particular list.
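
The policy-class idea can be sketched as follows (hypothetical names; the real template is I_P_List with its policy parameters in sql/sql_plist.h): the list derives from a policy type, so a counted list pays for the counter while an uncounted one compiles down to the plain list.

```cpp
#include <cassert>
#include <cstddef>

// Sketch of a policy-based intrusive list; illustrative, not I_P_List.
struct NoCount {
  void inc() {}
  void dec() {}
};

struct Counter {
  std::size_t count = 0;
  void inc() { ++count; }
  void dec() { --count; }
  std::size_t elements() const { return count; }
};

template <class Node, class CountPolicy>
struct IntrusiveList : CountPolicy {
  Node* head = nullptr;
  void push_front(Node* n) {
    n->next = head;
    head = n;
    this->inc();  // either a counter bump or a no-op, chosen at compile time
  }
  void pop_front() {
    if (head) {
      head = head->next;
      this->dec();
    }
  }
};
```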
sql/sql_show.cc:
  Instead of using a separate MDL_context for opening tables for I_S
  purposes, we now create an MDL savepoint in the main context before
  opening tables and roll back to this savepoint after closing them.
  To support this and the similar change for system tables, the methods
  of THD responsible for saving/restoring open table state were changed
  to use the Open_tables_backup class, which in addition to
  Open_table_state has a member for this savepoint. As a result, the
  code opening/closing tables for I_S purposes was changed to use
  Open_tables_backup instead of the Open_table_state class as well.
sql/sql_table.cc:
  mysql_rm_tables_part2():
    Since the global intention exclusive metadata lock is now an
    ordinary metadata lock, we can no longer rely on releasing the MDL
    locks on all tables to release all locks acquired by this routine.
    So in non-LOCK-TABLES mode we have to release all locks acquired by
    it explicitly.
  prepare_for_repair(), mysql_alter_table():
    The MDL subsystem no longer implicitly acquires the global
    intention exclusive metadata lock when a per-object exclusive
    metadata lock is acquired. Now this has to be done by explicit
    calls outside of the MDL subsystem.
sql/tztime.cc:
  Instead of using a separate MDL_context for opening system tables, we
  now create an MDL savepoint in the main context before opening such
  tables and roll back to this savepoint after closing them. To support
  this change, the methods of THD responsible for saving/restoring open
  table state were changed to use the Open_tables_backup class, which in
  addition to Open_table_state has a member for this savepoint. As a
  result, the code opening/closing system tables was changed to use
  Open_tables_backup instead of the Open_table_state class as well.
  Also changed the code not to use the special mechanism for opening
  system tables when it is not really necessary.
This commit is contained in:
Dmitry Lenev 2010-01-21 23:43:03 +03:00
parent 661fd506a3
commit 6ddd01c27a
24 changed files with 1595 additions and 848 deletions


@@ -543,7 +543,7 @@ disconnect flush;
#
--disable_warnings
drop table if exists t1,t2;
drop table if exists t1, t0;
--enable_warnings
create table t1 (c1 int);
--echo connection: default
@@ -552,31 +552,31 @@ handler t1 read first;
connect (flush,localhost,root,,);
connection flush;
--echo connection: flush
--send rename table t1 to t2;
--send rename table t1 to t0;
connection waiter;
--echo connection: waiter
let $wait_condition=
select count(*) = 1 from information_schema.processlist
where state = "Waiting for table" and info = "rename table t1 to t2";
where state = "Waiting for table" and info = "rename table t1 to t0";
--source include/wait_condition.inc
connection default;
--echo connection: default
--echo #
--echo # RENAME placed two pending locks and waits.
--echo # When HANDLER t2 OPEN does open_tables(), it calls
--echo # When HANDLER t0 OPEN does open_tables(), it calls
--echo # mysql_ha_flush(), which in turn closes the open HANDLER for t1.
--echo # RENAME TABLE gets unblocked. If it gets scheduled quickly
--echo # and manages to complete before open_tables()
--echo # of HANDLER t2 OPEN, open_tables() and therefore the whole
--echo # HANDLER t2 OPEN succeeds. Otherwise open_tables()
--echo # of HANDLER t0 OPEN, open_tables() and therefore the whole
--echo # HANDLER t0 OPEN succeeds. Otherwise open_tables()
--echo # notices a pending or active exclusive metadata lock on t2
--echo # and the whole HANDLER t2 OPEN fails with ER_LOCK_DEADLOCK
--echo # and the whole HANDLER t0 OPEN fails with ER_LOCK_DEADLOCK
--echo # error.
--echo #
--error 0, ER_LOCK_DEADLOCK
handler t2 open;
handler t0 open;
--error 0, ER_UNKNOWN_TABLE
handler t2 close;
handler t0 close;
--echo connection: flush
connection flush;
reap;
@@ -585,7 +585,7 @@ handler t1 read next;
--error ER_UNKNOWN_TABLE
handler t1 close;
connection default;
drop table t2;
drop table t0;
connection flush;
disconnect flush;
--source include/wait_until_disconnected.inc
@@ -972,35 +972,29 @@ connection default;
--echo #
create table t1 (a int, key a (a));
insert into t1 (a) values (1), (2), (3), (4), (5);
create table t2 (a int, key a (a));
insert into t2 (a) values (1), (2), (3), (4), (5);
create table t0 (a int, key a (a));
insert into t0 (a) values (1), (2), (3), (4), (5);
begin;
select * from t1;
--echo # --> connection con1
connection con1;
lock table t2 read;
--echo # --> connection con2
connection con2;
--echo # Sending:
send rename table t2 to t3, t1 to t2, t3 to t1;
send rename table t0 to t3, t1 to t0, t3 to t1;
--echo # --> connection con1
connection con1;
--echo # Waiting for 'rename table ...' to get blocked...
let $wait_condition=select count(*)=1 from information_schema.processlist
where state='Waiting for table' and info='rename table t2 to t3, t1 to t2, t3 to t1';
where state='Waiting for table' and info='rename table t0 to t3, t1 to t0, t3 to t1';
--source include/wait_condition.inc
--echo # --> connection default
connection default;
--error ER_LOCK_DEADLOCK
handler t2 open;
handler t0 open;
--error ER_LOCK_DEADLOCK
select * from t2;
select * from t0;
handler t1 open;
commit;
handler t1 close;
--echo # --> connection con1
connection con1;
unlock tables;
--echo # --> connection con2
connection con2;
--echo # Reaping 'rename table ...'...
@@ -1010,7 +1004,7 @@ connection default;
handler t1 open;
handler t1 read a prev;
handler t1 close;
drop table t2;
drop table t0;
--echo #
--echo # Originally there was a deadlock error in this test.
--echo # With implementation of deadlock detector


@@ -560,36 +560,36 @@ c1
handler t1 close;
handler t2 close;
drop table t1,t2;
drop table if exists t1,t2;
drop table if exists t1, t0;
create table t1 (c1 int);
connection: default
handler t1 open;
handler t1 read first;
c1
connection: flush
rename table t1 to t2;;
rename table t1 to t0;;
connection: waiter
connection: default
#
# RENAME placed two pending locks and waits.
# When HANDLER t2 OPEN does open_tables(), it calls
# When HANDLER t0 OPEN does open_tables(), it calls
# mysql_ha_flush(), which in turn closes the open HANDLER for t1.
# RENAME TABLE gets unblocked. If it gets scheduled quickly
# and manages to complete before open_tables()
# of HANDLER t2 OPEN, open_tables() and therefore the whole
# HANDLER t2 OPEN succeeds. Otherwise open_tables()
# of HANDLER t0 OPEN, open_tables() and therefore the whole
# HANDLER t0 OPEN succeeds. Otherwise open_tables()
# notices a pending or active exclusive metadata lock on t2
# and the whole HANDLER t2 OPEN fails with ER_LOCK_DEADLOCK
# and the whole HANDLER t0 OPEN fails with ER_LOCK_DEADLOCK
# error.
#
handler t2 open;
handler t2 close;
handler t0 open;
handler t0 close;
connection: flush
handler t1 read next;
ERROR 42S02: Unknown table 't1' in HANDLER
handler t1 close;
ERROR 42S02: Unknown table 't1' in HANDLER
drop table t2;
drop table t0;
drop table if exists t1;
create temporary table t1 (a int, b char(1), key a(a), key b(a,b));
insert into t1 values (0,"a"),(1,"b"),(2,"c"),(3,"d"),(4,"e"),
@@ -989,8 +989,8 @@ handler t1 close;
#
create table t1 (a int, key a (a));
insert into t1 (a) values (1), (2), (3), (4), (5);
create table t2 (a int, key a (a));
insert into t2 (a) values (1), (2), (3), (4), (5);
create table t0 (a int, key a (a));
insert into t0 (a) values (1), (2), (3), (4), (5);
begin;
select * from t1;
a
@@ -999,23 +999,19 @@ a
3
4
5
# --> connection con1
lock table t2 read;
# --> connection con2
# Sending:
rename table t2 to t3, t1 to t2, t3 to t1;
rename table t0 to t3, t1 to t0, t3 to t1;
# --> connection con1
# Waiting for 'rename table ...' to get blocked...
# --> connection default
handler t2 open;
handler t0 open;
ERROR 40001: Deadlock found when trying to get lock; try restarting transaction
select * from t2;
select * from t0;
ERROR 40001: Deadlock found when trying to get lock; try restarting transaction
handler t1 open;
commit;
handler t1 close;
# --> connection con1
unlock tables;
# --> connection con2
# Reaping 'rename table ...'...
# --> connection default
@@ -1024,7 +1020,7 @@ handler t1 read a prev;
a
5
handler t1 close;
drop table t2;
drop table t0;
#
# Originally there was a deadlock error in this test.
# With implementation of deadlock detector


@@ -559,36 +559,36 @@ c1
handler t1 close;
handler t2 close;
drop table t1,t2;
drop table if exists t1,t2;
drop table if exists t1, t0;
create table t1 (c1 int);
connection: default
handler t1 open;
handler t1 read first;
c1
connection: flush
rename table t1 to t2;;
rename table t1 to t0;;
connection: waiter
connection: default
#
# RENAME placed two pending locks and waits.
# When HANDLER t2 OPEN does open_tables(), it calls
# When HANDLER t0 OPEN does open_tables(), it calls
# mysql_ha_flush(), which in turn closes the open HANDLER for t1.
# RENAME TABLE gets unblocked. If it gets scheduled quickly
# and manages to complete before open_tables()
# of HANDLER t2 OPEN, open_tables() and therefore the whole
# HANDLER t2 OPEN succeeds. Otherwise open_tables()
# of HANDLER t0 OPEN, open_tables() and therefore the whole
# HANDLER t0 OPEN succeeds. Otherwise open_tables()
# notices a pending or active exclusive metadata lock on t2
# and the whole HANDLER t2 OPEN fails with ER_LOCK_DEADLOCK
# and the whole HANDLER t0 OPEN fails with ER_LOCK_DEADLOCK
# error.
#
handler t2 open;
handler t2 close;
handler t0 open;
handler t0 close;
connection: flush
handler t1 read next;
ERROR 42S02: Unknown table 't1' in HANDLER
handler t1 close;
ERROR 42S02: Unknown table 't1' in HANDLER
drop table t2;
drop table t0;
drop table if exists t1;
create temporary table t1 (a int, b char(1), key a(a), key b(a,b));
insert into t1 values (0,"a"),(1,"b"),(2,"c"),(3,"d"),(4,"e"),
@@ -986,8 +986,8 @@ handler t1 close;
#
create table t1 (a int, key a (a));
insert into t1 (a) values (1), (2), (3), (4), (5);
create table t2 (a int, key a (a));
insert into t2 (a) values (1), (2), (3), (4), (5);
create table t0 (a int, key a (a));
insert into t0 (a) values (1), (2), (3), (4), (5);
begin;
select * from t1;
a
@@ -996,23 +996,19 @@ a
3
4
5
# --> connection con1
lock table t2 read;
# --> connection con2
# Sending:
rename table t2 to t3, t1 to t2, t3 to t1;
rename table t0 to t3, t1 to t0, t3 to t1;
# --> connection con1
# Waiting for 'rename table ...' to get blocked...
# --> connection default
handler t2 open;
handler t0 open;
ERROR 40001: Deadlock found when trying to get lock; try restarting transaction
select * from t2;
select * from t0;
ERROR 40001: Deadlock found when trying to get lock; try restarting transaction
handler t1 open;
commit;
handler t1 close;
# --> connection con1
unlock tables;
# --> connection con2
# Reaping 'rename table ...'...
# --> connection default
@@ -1021,7 +1017,7 @@ handler t1 read a prev;
a
5
handler t1 close;
drop table t2;
drop table t0;
#
# Originally there was a deadlock error in this test.
# With implementation of deadlock detector


@@ -23,7 +23,7 @@ SET DEBUG_SYNC= 'RESET';
# Test coverage for basic deadlock detection in metadata
# locking subsystem.
#
drop tables if exists t1, t2, t3, t4;
drop tables if exists t0, t1, t2, t3, t4, t5;
create table t1 (i int);
create table t2 (j int);
create table t3 (k int);
@@ -90,7 +90,7 @@ commit;
#
# Switching to connection 'deadlock_con1'.
begin;
insert into t1 values (2);
insert into t2 values (2);
#
# Switching to connection 'default'.
# Send:
@@ -98,11 +98,11 @@ rename table t2 to t0, t1 to t2, t0 to t1;;
#
# Switching to connection 'deadlock_con1'.
# Wait until the above RENAME TABLE is blocked because it has to wait
# for 'deadlock_con1' which holds shared metadata lock on 't1'.
# for 'deadlock_con1' which holds shared metadata lock on 't2'.
#
# The below statement should not wait as doing so will cause deadlock.
# Instead it should fail and emit ER_LOCK_DEADLOCK statement.
select * from t2;
select * from t1;
ERROR 40001: Deadlock found when trying to get lock; try restarting transaction
#
# Let us check that failure of the above statement has not released
@@ -141,7 +141,7 @@ select * from t2;;
# for an exclusive metadata lock to go away.
# Send RENAME TABLE statement that will deadlock with the
# SELECT statement and thus should abort the latter.
rename table t1 to t0, t2 to t1, t0 to t2;;
rename table t1 to t5, t2 to t1, t5 to t2;;
#
# Switching to connection 'deadlock_con1'.
# Since the latest RENAME TABLE entered in deadlock with SELECT
@@ -156,15 +156,17 @@ ERROR 40001: Deadlock found when trying to get lock; try restarting transaction
# Commit transaction to unblock this RENAME TABLE.
commit;
#
# Switching to connection 'deadlock_con3'.
# Reap RENAME TABLE t1 TO t0 ... .
#
# Switching to connection 'deadlock_con2'.
# Commit transaction to unblock the first RENAME TABLE.
commit;
#
# Switching to connection 'default'.
# Reap RENAME TABLE t2 TO t0 ... .
#
# Switching to connection 'deadlock_con3'.
# Reap RENAME TABLE t1 TO t5 ... .
#
# Switching to connection 'default'.
drop tables t1, t2, t3, t4;
#
# Now, test case which shows that deadlock detection empiric


@ -78,7 +78,7 @@ SET DEBUG_SYNC= 'RESET';
--echo # locking subsystem.
--echo #
--disable_warnings
drop tables if exists t1, t2, t3, t4;
drop tables if exists t0, t1, t2, t3, t4, t5;
--enable_warnings
connect(deadlock_con1,localhost,root,,);
@ -189,7 +189,7 @@ connection default;
--echo # Switching to connection 'deadlock_con1'.
connection deadlock_con1;
begin;
insert into t1 values (2);
insert into t2 values (2);
--echo #
--echo # Switching to connection 'default'.
@ -201,7 +201,7 @@ connection default;
--echo # Switching to connection 'deadlock_con1'.
connection deadlock_con1;
--echo # Wait until the above RENAME TABLE is blocked because it has to wait
--echo # for 'deadlock_con1' which holds shared metadata lock on 't1'.
--echo # for 'deadlock_con1' which holds shared metadata lock on 't2'.
let $wait_condition=
select count(*) = 1 from information_schema.processlist
where state = "Waiting for table" and info = "rename table t2 to t0, t1 to t2, t0 to t1";
@ -210,7 +210,7 @@ let $wait_condition=
--echo # The below statement should not wait as doing so will cause deadlock.
--echo # Instead it should fail and emit an ER_LOCK_DEADLOCK error.
--error ER_LOCK_DEADLOCK
select * from t2;
select * from t1;
--echo #
--echo # Let us check that failure of the above statement has not released
@ -276,7 +276,7 @@ let $wait_condition=
--echo # Send RENAME TABLE statement that will deadlock with the
--echo # SELECT statement and thus should abort the latter.
--send rename table t1 to t0, t2 to t1, t0 to t2;
--send rename table t1 to t5, t2 to t1, t5 to t2;
--echo #
--echo # Switching to connection 'deadlock_con1'.
@ -294,17 +294,11 @@ connection deadlock_con1;
--echo # is blocked.
let $wait_condition=
select count(*) = 1 from information_schema.processlist
where state = "Waiting for table" and info = "rename table t1 to t0, t2 to t1, t0 to t2";
where state = "Waiting for table" and info = "rename table t1 to t5, t2 to t1, t5 to t2";
--source include/wait_condition.inc
--echo # Commit transaction to unblock this RENAME TABLE.
commit;
--echo #
--echo # Switching to connection 'deadlock_con3'.
connection deadlock_con3;
--echo # Reap RENAME TABLE t1 TO t0 ... .
--reap;
--echo #
--echo # Switching to connection 'deadlock_con2'.
connection deadlock_con2;
@ -317,6 +311,16 @@ connection default;
--echo # Reap RENAME TABLE t2 TO t0 ... .
--reap
--echo #
--echo # Switching to connection 'deadlock_con3'.
connection deadlock_con3;
--echo # Reap RENAME TABLE t1 TO t5 ... .
--reap;
--echo #
--echo # Switching to connection 'default'.
connection default;
drop tables t1, t2, t3, t4;
--echo #


@ -770,7 +770,7 @@ send_show_create_event(THD *thd, Event_timed *et, Protocol *protocol)
bool
Events::show_create_event(THD *thd, LEX_STRING dbname, LEX_STRING name)
{
Open_tables_state open_tables_backup;
Open_tables_backup open_tables_backup;
Event_timed et;
bool ret;
@ -826,7 +826,7 @@ Events::fill_schema_events(THD *thd, TABLE_LIST *tables, COND * /* cond */)
{
char *db= NULL;
int ret;
Open_tables_state open_tables_backup;
Open_tables_backup open_tables_backup;
DBUG_ENTER("Events::fill_schema_events");
if (check_if_system_tables_error())


@ -7285,14 +7285,10 @@ int ndbcluster_find_files(handlerton *hton, THD *thd,
code below will try to obtain exclusive metadata lock on some table
while holding shared meta-data lock on other tables. This might lead to
a deadlock, and therefore is disallowed by assertions of the metadata
locking subsystem. In order to temporarily make the code work, we must
reset and backup the open tables state, thus hide the existing locks
from MDL asserts. But in the essence this is violation of metadata
locking subsystem. This is a violation of the metadata
locking protocol which has to be closed ASAP.
XXX: the scenario described above is not covered with any test.
*/
Open_tables_state open_tables_state_backup;
thd->reset_n_backup_open_tables_state(&open_tables_state_backup);
if (!global_read_lock)
{
// Delete old files
@ -7316,8 +7312,6 @@ int ndbcluster_find_files(handlerton *hton, THD *thd,
}
}
thd->restore_backup_open_tables_state(&open_tables_state_backup);
/* Lock mutex before creating .FRM files. */
pthread_mutex_lock(&LOCK_open);
/* Create new files. */


@ -949,8 +949,11 @@ static MYSQL_LOCK *get_lock_data(THD *thd, TABLE **table_ptr, uint count,
bool lock_table_names(THD *thd, TABLE_LIST *table_list)
{
MDL_request_list mdl_requests;
MDL_request global_request;
TABLE_LIST *lock_table;
global_request.init(MDL_key::GLOBAL, "", "", MDL_INTENTION_EXCLUSIVE);
for (lock_table= table_list; lock_table; lock_table= lock_table->next_local)
{
lock_table->mdl_request.init(MDL_key::TABLE,
@ -958,8 +961,15 @@ bool lock_table_names(THD *thd, TABLE_LIST *table_list)
MDL_EXCLUSIVE);
mdl_requests.push_front(&lock_table->mdl_request);
}
if (thd->mdl_context.acquire_exclusive_locks(&mdl_requests))
if (thd->mdl_context.acquire_global_intention_exclusive_lock(&global_request))
return 1;
if (thd->mdl_context.acquire_exclusive_locks(&mdl_requests))
{
thd->mdl_context.release_lock(global_request.ticket);
return 1;
}
return 0;
}
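The hunk above replaces the implicit global lock with an explicit two-step acquisition: take the global intention-exclusive lock first, then the per-table exclusive locks, and release the global one again if the table locks cannot be granted. A toy C++ sketch of that acquire-then-roll-back shape (all names here, `ToyMdl` and its members, are illustrative stand-ins, not the real MDL_context API):

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Toy lock manager: the "global IX" lock is a holder counter, per-table
// exclusive locks are names in a set.
struct ToyMdl {
  int global_ix = 0;                 // global intention-exclusive holders
  std::set<std::string> exclusive;   // tables locked exclusively

  bool acquire_global_ix() { ++global_ix; return true; }
  void release_global_ix() { --global_ix; }

  // Mirrors the patched lock_table_names(): take the global IX lock
  // first, then the per-table X locks; undo the global lock on failure.
  bool lock_table_names(const std::vector<std::string>& tables) {
    if (!acquire_global_ix())
      return false;
    for (const auto& t : tables) {
      if (exclusive.count(t)) {      // conflict: X lock already held
        release_global_ix();         // roll back the global lock
        return false;
      }
    }
    for (const auto& t : tables)
      exclusive.insert(t);
    return true;
  }
};
```

The point of the rollback branch is the same as the `release_lock(global_request.ticket)` call in the hunk: a failed statement must not leave a stray global intention-exclusive lock behind.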
@ -1009,6 +1019,7 @@ bool lock_routine_name(THD *thd, bool is_function,
MDL_key::enum_mdl_namespace mdl_type= (is_function ?
MDL_key::FUNCTION :
MDL_key::PROCEDURE);
MDL_request global_request;
MDL_request mdl_request;
if (thd->locked_tables_mode)
@ -1021,11 +1032,18 @@ bool lock_routine_name(THD *thd, bool is_function,
DBUG_ASSERT(name);
DEBUG_SYNC(thd, "before_wait_locked_pname");
global_request.init(MDL_key::GLOBAL, "", "", MDL_INTENTION_EXCLUSIVE);
mdl_request.init(mdl_type, db, name, MDL_EXCLUSIVE);
if (thd->mdl_context.acquire_exclusive_lock(&mdl_request))
if (thd->mdl_context.acquire_global_intention_exclusive_lock(&global_request))
return TRUE;
if (thd->mdl_context.acquire_exclusive_lock(&mdl_request))
{
thd->mdl_context.release_lock(global_request.ticket);
return TRUE;
}
DEBUG_SYNC(thd, "after_wait_locked_pname");
return FALSE;
}


@ -410,7 +410,7 @@ bool Log_to_csv_event_handler::
bool need_rnd_end= FALSE;
uint field_index;
Silence_log_table_errors error_handler;
Open_tables_state open_tables_backup;
Open_tables_backup open_tables_backup;
ulonglong save_thd_options;
bool save_time_zone_used;
@ -572,7 +572,7 @@ bool Log_to_csv_event_handler::
bool need_close= FALSE;
bool need_rnd_end= FALSE;
Silence_log_table_errors error_handler;
Open_tables_state open_tables_backup;
Open_tables_backup open_tables_backup;
CHARSET_INFO *client_cs= thd->variables.character_set_client;
bool save_time_zone_used;
DBUG_ENTER("Log_to_csv_event_handler::log_slow");
@ -727,7 +727,7 @@ int Log_to_csv_event_handler::
TABLE *table;
LEX_STRING *UNINIT_VAR(log_name);
int result;
Open_tables_state open_tables_backup;
Open_tables_backup open_tables_backup;
DBUG_ENTER("Log_to_csv_event_handler::activate_log");

sql/mdl.cc

File diff suppressed because it is too large.

sql/mdl.h

@ -36,11 +36,13 @@ class MDL_ticket;
(because of that their acquisition involves implicit
acquisition of global intention-exclusive lock).
@see Comments for can_grant_lock() and can_grant_global_lock() for details.
@sa Comments for MDL_object_lock::can_grant_lock() and
MDL_global_lock::can_grant_lock() for details.
*/
enum enum_mdl_type {MDL_SHARED=0, MDL_SHARED_HIGH_PRIO,
MDL_SHARED_UPGRADABLE, MDL_EXCLUSIVE};
MDL_SHARED_UPGRADABLE, MDL_INTENTION_EXCLUSIVE,
MDL_EXCLUSIVE};
/** States which a metadata lock ticket can have. */
@ -78,7 +80,8 @@ public:
enum enum_mdl_namespace { TABLE=0,
FUNCTION,
PROCEDURE,
TRIGGER };
TRIGGER,
GLOBAL };
const uchar *ptr() const { return (uchar*) m_ptr; }
uint length() const { return m_length; }
@ -93,7 +96,8 @@ public:
{ return (enum_mdl_namespace)(m_ptr[0]); }
/**
Construct a metadata lock key from a triplet (mdl_namespace, database and name).
Construct a metadata lock key from a triplet (mdl_namespace,
database and name).
@remark The key for a table is <mdl_namespace>+<database name>+<table name>
@ -102,11 +106,12 @@ public:
@param name Name of the object
@param key Where to store the MDL key.
*/
void mdl_key_init(enum_mdl_namespace mdl_namespace, const char *db, const char *name)
void mdl_key_init(enum_mdl_namespace mdl_namespace,
const char *db, const char *name)
{
m_ptr[0]= (char) mdl_namespace;
m_db_name_length= (uint) (strmov(m_ptr + 1, db) - m_ptr - 1);
m_length= (uint) (strmov(m_ptr + m_db_name_length + 2, name) - m_ptr + 1);
m_db_name_length= (uint16) (strmov(m_ptr + 1, db) - m_ptr - 1);
m_length= (uint16) (strmov(m_ptr + m_db_name_length + 2, name) - m_ptr + 1);
}
void mdl_key_init(const MDL_key *rhs)
{
@ -119,20 +124,34 @@ public:
return (m_length == rhs->m_length &&
memcmp(m_ptr, rhs->m_ptr, m_length) == 0);
}
/**
Compare two MDL keys lexicographically.
*/
int cmp(const MDL_key *rhs) const
{
/*
The key buffer is always '\0'-terminated. Since key
character set is utf-8, we can safely assume that no
character starts with a zero byte.
*/
return memcmp(m_ptr, rhs->m_ptr, min(m_length, rhs->m_length)+1);
}
MDL_key(const MDL_key *rhs)
{
mdl_key_init(rhs);
}
MDL_key(enum_mdl_namespace namespace_arg, const char *db_arg, const char *name_arg)
MDL_key(enum_mdl_namespace namespace_arg,
const char *db_arg, const char *name_arg)
{
mdl_key_init(namespace_arg, db_arg, name_arg);
}
MDL_key() {} /* To use when part of MDL_request. */
private:
uint16 m_length;
uint16 m_db_name_length;
char m_ptr[MAX_MDLKEY_LENGTH];
uint m_length;
uint m_db_name_length;
private:
MDL_key(const MDL_key &); /* not implemented */
MDL_key &operator=(const MDL_key &); /* not implemented */
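The new `MDL_key::cmp()` added in this hunk relies on the keys being '\0'-terminated and UTF-8 encoded: comparing `min(len_a, len_b) + 1` bytes makes a key that is a proper prefix of another sort first, because the shorter key's terminating zero byte compares below any UTF-8 byte of the longer key. Such a total order is what allows lock requests to be acquired in a deterministic sequence. A standalone sketch of the same comparison (`key_cmp` is a hypothetical stand-in, operating on plain strings rather than packed MDL keys):

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <string>

// Compare min(len_a, len_b) + 1 bytes of two '\0'-terminated buffers.
// The extra byte covers the terminator, so "t1" orders before "t10":
// after the common prefix, '\0' (0) compares below '0' (0x30).
static int key_cmp(const std::string& a, const std::string& b) {
  return std::memcmp(a.c_str(), b.c_str(),
                     std::min(a.size(), b.size()) + 1);
}
```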
@ -198,7 +217,7 @@ public:
DBUG_ASSERT(ticket == NULL);
type= type_arg;
}
bool is_shared() const { return type < MDL_EXCLUSIVE; }
bool is_shared() const { return type < MDL_INTENTION_EXCLUSIVE; }
static MDL_request *create(MDL_key::enum_mdl_namespace mdl_namespace,
const char *db, const char *name,
@ -243,6 +262,17 @@ typedef void (*mdl_cached_object_release_hook)(void *);
@note Multiple shared locks on a same object are represented by a
single ticket. The same does not apply for other lock types.
@note There are two groups of MDL_ticket members:
- "Externally accessible". These members can be accessed from
threads/contexts different than ticket owner in cases when
ticket participates in some list of granted or waiting tickets
for a lock. Therefore one should change these members before
including them in waiting/granted lists or while holding the lock
protecting those lists.
- "Context private". Such members are private to thread/context
owning this ticket. I.e. they should not be accessed from other
threads/contexts.
*/
class MDL_ticket
@ -250,12 +280,13 @@ class MDL_ticket
public:
/**
Pointers for participating in the list of lock requests for this context.
Context private.
*/
MDL_ticket *next_in_context;
MDL_ticket **prev_in_context;
/**
Pointers for participating in the list of satisfied/pending requests
for the lock.
for the lock. Externally accessible.
*/
MDL_ticket *next_in_lock;
MDL_ticket **prev_in_lock;
@ -265,8 +296,8 @@ public:
void *get_cached_object();
void set_cached_object(void *cached_object,
mdl_cached_object_release_hook release_hook);
const MDL_context *get_ctx() const { return m_ctx; }
bool is_shared() const { return m_type < MDL_EXCLUSIVE; }
MDL_context *get_ctx() const { return m_ctx; }
bool is_shared() const { return m_type < MDL_INTENTION_EXCLUSIVE; }
bool is_upgradable_or_exclusive() const
{
return m_type == MDL_SHARED_UPGRADABLE || m_type == MDL_EXCLUSIVE;
@ -275,6 +306,8 @@ public:
void downgrade_exclusive_lock();
private:
friend class MDL_context;
friend class MDL_global_lock;
friend class MDL_object_lock;
MDL_ticket(MDL_context *ctx_arg, enum_mdl_type type_arg)
: m_type(type_arg),
@ -283,31 +316,31 @@ private:
m_lock(NULL)
{}
static MDL_ticket *create(MDL_context *ctx_arg, enum_mdl_type type_arg);
static void destroy(MDL_ticket *ticket);
private:
/** Type of metadata lock. */
/** Type of metadata lock. Externally accessible. */
enum enum_mdl_type m_type;
/** State of the metadata lock ticket. */
/** State of the metadata lock ticket. Context private. */
enum enum_mdl_state m_state;
/** Context of the owner of the metadata lock ticket. */
/**
Context of the owner of the metadata lock ticket. Externally accessible.
*/
MDL_context *m_ctx;
/** Pointer to the lock object for this lock ticket. */
/** Pointer to the lock object for this lock ticket. Context private. */
MDL_lock *m_lock;
private:
MDL_ticket(const MDL_ticket &); /* not implemented */
MDL_ticket &operator=(const MDL_ticket &); /* not implemented */
bool has_pending_conflicting_lock_impl() const;
};
typedef I_P_List<MDL_request, I_P_List_adapter<MDL_request,
&MDL_request::next_in_list,
&MDL_request::prev_in_list> >
&MDL_request::prev_in_list>,
I_P_List_counter>
MDL_request_list;
/**
@ -326,21 +359,19 @@ public:
typedef Ticket_list::Iterator Ticket_iterator;
void init(THD *thd);
MDL_context();
void destroy();
bool try_acquire_shared_lock(MDL_request *mdl_request);
bool acquire_exclusive_lock(MDL_request *mdl_request);
bool acquire_exclusive_locks(MDL_request_list *requests);
bool try_acquire_exclusive_lock(MDL_request *mdl_request);
bool acquire_global_shared_lock();
bool clone_ticket(MDL_request *mdl_request);
bool wait_for_locks(MDL_request_list *requests);
void release_all_locks_for_name(MDL_ticket *ticket);
void release_lock(MDL_ticket *ticket);
void release_global_shared_lock();
bool is_exclusive_lock_owner(MDL_key::enum_mdl_namespace mdl_namespace,
const char *db,
@ -368,7 +399,6 @@ public:
void set_lt_or_ha_sentinel()
{
DBUG_ASSERT(m_lt_or_ha_sentinel == NULL);
m_lt_or_ha_sentinel= mdl_savepoint();
}
MDL_ticket *lt_or_ha_sentinel() const { return m_lt_or_ha_sentinel; }
@ -385,16 +415,35 @@ public:
bool can_wait_lead_to_deadlock() const;
inline THD *get_thd() const { return m_thd; }
bool is_waiting_in_mdl() const { return m_is_waiting_in_mdl; }
/**
Wake up context which is waiting for a change of MDL_lock state.
*/
void awake()
{
pthread_cond_signal(&m_ctx_wakeup_cond);
}
bool try_acquire_global_intention_exclusive_lock(MDL_request *mdl_request);
bool acquire_global_intention_exclusive_lock(MDL_request *mdl_request);
bool acquire_global_shared_lock();
void release_global_shared_lock();
/**
Check if this context owns global lock of particular type.
*/
bool is_global_lock_owner(enum_mdl_type type_arg)
{
MDL_request mdl_request;
bool not_used;
mdl_request.init(MDL_key::GLOBAL, "", "", type_arg);
return find_ticket(&mdl_request, &not_used);
}
void init(THD *thd_arg) { m_thd= thd_arg; }
private:
Ticket_list m_tickets;
bool m_has_global_shared_lock;
/**
Indicates that the owner of this context is waiting in
wait_for_locks() method.
*/
bool m_is_waiting_in_mdl;
/**
This member has two uses:
1) When entering LOCK TABLES mode, remember the last taken
@ -406,12 +455,27 @@ private:
*/
MDL_ticket *m_lt_or_ha_sentinel;
THD *m_thd;
/**
Condvar which is used for waiting until this context's pending
request can be satisfied or this thread has to perform actions
to resolve potential deadlock (we subscribe for such notification
by adding the ticket corresponding to the request to an appropriate
queue of waiters).
*/
pthread_cond_t m_ctx_wakeup_cond;
private:
void release_ticket(MDL_ticket *ticket);
bool can_wait_lead_to_deadlock_impl() const;
MDL_ticket *find_ticket(MDL_request *mdl_req,
bool *is_lt_or_ha);
void release_locks_stored_before(MDL_ticket *sentinel);
bool try_acquire_lock_impl(MDL_request *mdl_request);
bool acquire_lock_impl(MDL_request *mdl_request);
bool acquire_exclusive_lock_impl(MDL_request *mdl_request);
friend bool MDL_ticket::upgrade_shared_lock_to_exclusive();
private:
MDL_context(const MDL_context &rhs); /* not implemented */
MDL_context &operator=(MDL_context &rhs); /* not implemented */
};
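The class body above carries the patch's central change: instead of one subsystem-wide mutex and one condition variable, each lock gets its own mutex and each context its own condvar (`m_ctx_wakeup_cond`), signalled individually via `awake()`. A minimal model of that per-context wakeup, using `std::condition_variable` in place of the pthread primitives in the patch (the toy folds the per-lock mutex into the context for brevity, so all names here are illustrative):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// One condvar per waiting context: whoever changes the lock state
// signals just that context, instead of broadcasting on a single
// condvar shared by the whole subsystem.
struct ToyContext {
  std::mutex m;                  // stands in for the per-lock mutex
  std::condition_variable cond;  // per-context wakeup condvar
  bool granted = false;

  void wait_for_grant() {
    std::unique_lock<std::mutex> lk(m);
    cond.wait(lk, [this] { return granted; });
  }

  void awake() {                 // called by another context
    { std::lock_guard<std::mutex> lk(m); granted = true; }
    cond.notify_one();
  }
};
```

Signalling one specific waiter is what gives the concurrency win claimed in the commit message: an unrelated state change no longer wakes every thread sleeping in the MDL subsystem.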


@ -1689,13 +1689,13 @@ void close_open_tables_and_downgrade(ALTER_PARTITION_PARAM_TYPE *lpt);
/* Functions to work with system tables. */
bool open_system_tables_for_read(THD *thd, TABLE_LIST *table_list,
Open_tables_state *backup);
void close_system_tables(THD *thd, Open_tables_state *backup);
Open_tables_backup *backup);
void close_system_tables(THD *thd, Open_tables_backup *backup);
TABLE *open_system_table_for_update(THD *thd, TABLE_LIST *one_table);
TABLE *open_performance_schema_table(THD *thd, TABLE_LIST *one_table,
Open_tables_state *backup);
void close_performance_schema_table(THD *thd, Open_tables_state *backup);
Open_tables_backup *backup);
void close_performance_schema_table(THD *thd, Open_tables_backup *backup);
bool close_cached_tables(THD *thd, TABLE_LIST *tables, bool have_lock,
bool wait_for_refresh);


@ -260,7 +260,7 @@ Stored_routine_creation_ctx::load_from_db(THD *thd,
\# Pointer to TABLE object of mysql.proc
*/
TABLE *open_proc_table_for_read(THD *thd, Open_tables_state *backup)
TABLE *open_proc_table_for_read(THD *thd, Open_tables_backup *backup)
{
TABLE_LIST table;
@ -382,7 +382,7 @@ db_find_routine(THD *thd, int type, sp_name *name, sp_head **sphp)
String str(buff, sizeof(buff), &my_charset_bin);
bool saved_time_zone_used= thd->time_zone_used;
ulong sql_mode, saved_mode= thd->variables.sql_mode;
Open_tables_state open_tables_state_backup;
Open_tables_backup open_tables_state_backup;
Stored_program_creation_ctx *creation_ctx;
DBUG_ENTER("db_find_routine");
@ -1432,7 +1432,7 @@ sp_routine_exists_in_table(THD *thd, int type, sp_name *name)
{
TABLE *table;
int ret;
Open_tables_state open_tables_state_backup;
Open_tables_backup open_tables_state_backup;
if (!(table= open_proc_table_for_read(thd, &open_tables_state_backup)))
ret= SP_OPEN_TABLE_FAILED;


@ -128,6 +128,6 @@ extern "C" uchar* sp_sroutine_key(const uchar *ptr, size_t *plen,
Routines which allow open/lock and close mysql.proc table even when
we already have some tables open and locked.
*/
TABLE *open_proc_table_for_read(THD *thd, Open_tables_state *backup);
TABLE *open_proc_table_for_read(THD *thd, Open_tables_backup *backup);
#endif /* _SP_H_ */


@ -1519,25 +1519,23 @@ void close_thread_tables(THD *thd)
if (thd->open_tables)
close_open_tables(thd);
if (thd->state_flags & Open_tables_state::BACKUPS_AVAIL)
/*
- If inside a multi-statement transaction,
defer the release of metadata locks until the current
transaction is either committed or rolled back. This prevents
other statements from modifying the table for the entire
duration of this transaction. This provides commit ordering
and guarantees serializability across multiple transactions.
- If closing a system table, defer the release of metadata locks
to the caller. We have no sentinel in the MDL subsystem to guard
transactional locks from system table locks, so we don't know
which locks are which here.
- If in autocommit mode, or outside a transactional context,
automatically release metadata locks of the current statement.
*/
if (! thd->in_multi_stmt_transaction() &&
! (thd->state_flags & Open_tables_state::BACKUPS_AVAIL))
{
/* We can't have an open HANDLER in the backup open tables state. */
DBUG_ASSERT(thd->mdl_context.lt_or_ha_sentinel() == NULL);
/*
Due to the above assert, this is guaranteed to release *all* locks
in the context.
*/
thd->mdl_context.release_transactional_locks();
}
else if (! thd->in_multi_stmt_transaction())
{
/*
Defer the release of metadata locks until the current transaction
is either committed or rolled back. This prevents other statements
from modifying the table for the entire duration of this transaction.
This provides commitment ordering for guaranteeing serializability
across multiple transactions.
*/
thd->mdl_context.release_transactional_locks();
}
@ -2336,10 +2334,9 @@ open_table_get_mdl_lock(THD *thd, TABLE_LIST *table_list,
Open_table_context *ot_ctx,
uint flags)
{
ot_ctx->add_request(mdl_request);
if (table_list->lock_strategy)
{
MDL_request *global_request;
/*
In case of CREATE TABLE .. IF NOT EXISTS .. SELECT, the table
may not yet exist. Let's acquire an exclusive lock for that
@ -2349,10 +2346,24 @@ open_table_get_mdl_lock(THD *thd, TABLE_LIST *table_list,
shared locks. This invariant is preserved here and is also
enforced by asserts in metadata locking subsystem.
*/
mdl_request->set_type(MDL_EXCLUSIVE);
DBUG_ASSERT(! thd->mdl_context.has_locks() ||
thd->handler_tables_hash.records);
thd->handler_tables_hash.records ||
thd->global_read_lock);
if (!(global_request= ot_ctx->get_global_mdl_request(thd)))
return 1;
if (! global_request->ticket)
{
ot_ctx->add_request(global_request);
if (thd->mdl_context.acquire_global_intention_exclusive_lock(
global_request))
return 1;
}
ot_ctx->add_request(mdl_request);
if (thd->mdl_context.acquire_exclusive_lock(mdl_request))
return 1;
}
@ -2371,8 +2382,29 @@ open_table_get_mdl_lock(THD *thd, TABLE_LIST *table_list,
if (flags & MYSQL_LOCK_IGNORE_FLUSH)
mdl_request->set_type(MDL_SHARED_HIGH_PRIO);
if (mdl_request->type == MDL_SHARED_UPGRADABLE)
{
MDL_request *global_request;
if (!(global_request= ot_ctx->get_global_mdl_request(thd)))
return 1;
if (! global_request->ticket)
{
ot_ctx->add_request(global_request);
if (thd->mdl_context.try_acquire_global_intention_exclusive_lock(
global_request))
return 1;
if (! global_request->ticket)
goto failure;
}
}
ot_ctx->add_request(mdl_request);
if (thd->mdl_context.try_acquire_shared_lock(mdl_request))
return 1;
failure:
if (mdl_request->ticket == NULL)
{
if (flags & MYSQL_OPEN_FAIL_ON_MDL_CONFLICT)
@ -2919,8 +2951,6 @@ err_unlock:
release_table_share(share);
err_unlock2:
pthread_mutex_unlock(&LOCK_open);
if (! (flags & MYSQL_OPEN_HAS_MDL_LOCK))
thd->mdl_context.release_lock(mdl_ticket);
DBUG_RETURN(TRUE);
}
@ -3713,10 +3743,33 @@ Open_table_context::Open_table_context(THD *thd)
m_start_of_statement_svp(thd->mdl_context.mdl_savepoint()),
m_has_locks((thd->in_multi_stmt_transaction() ||
thd->mdl_context.lt_or_ha_sentinel()) &&
thd->mdl_context.has_locks())
thd->mdl_context.has_locks()),
m_global_mdl_request(NULL)
{}
/**
Get MDL_request object for global intention exclusive lock which
is acquired during opening tables for statements which take
upgradable shared metadata locks.
*/
MDL_request *Open_table_context::get_global_mdl_request(THD *thd)
{
if (! m_global_mdl_request)
{
char *buff;
if ((buff= (char*)thd->alloc(sizeof(MDL_request))))
{
m_global_mdl_request= new (buff) MDL_request();
m_global_mdl_request->init(MDL_key::GLOBAL, "", "",
MDL_INTENTION_EXCLUSIVE);
}
}
return m_global_mdl_request;
}
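`get_global_mdl_request()` above creates the global intention-exclusive request lazily, once per statement, and caches it so every table opened by the statement reuses the same request (and hence the same ticket). The caching shape in isolation (types here are toy stand-ins, not the real MDL_request):

```cpp
#include <cassert>
#include <memory>

// Placeholder for MDL_request: just enough state to be shared.
struct Request {
  bool ticket = false;  // set once the lock is actually granted
};

// Mirrors Open_table_context::get_global_mdl_request(): allocate the
// request on first use, hand back the cached one afterwards.
struct ToyOpenTableContext {
  std::unique_ptr<Request> global_request;

  Request* get_global_mdl_request() {
    if (!global_request)
      global_request = std::make_unique<Request>();
    return global_request.get();
  }
};
```

Callers then check `ticket` on the shared request, as the hunks above do, to decide whether the global lock still needs to be acquired for this statement.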
/**
Check if we can back-off and set back off action if we can.
Otherwise report and return error.
@ -3777,6 +3830,11 @@ recover_from_failed_open(THD *thd, MDL_request *mdl_request,
TABLE_LIST *table)
{
bool result= FALSE;
/*
Remove reference to released ticket from MDL_request.
*/
if (m_global_mdl_request)
m_global_mdl_request->ticket= NULL;
/* Execute the action. */
switch (m_action)
{
@ -3787,11 +3845,26 @@ recover_from_failed_open(THD *thd, MDL_request *mdl_request,
break;
case OT_DISCOVER:
{
MDL_request mdl_global_request;
MDL_request mdl_xlock_request(mdl_request);
mdl_global_request.init(MDL_key::GLOBAL, "", "",
MDL_INTENTION_EXCLUSIVE);
mdl_xlock_request.set_type(MDL_EXCLUSIVE);
if ((result= thd->mdl_context.acquire_global_intention_exclusive_lock(
&mdl_global_request)))
break;
if ((result=
thd->mdl_context.acquire_exclusive_lock(&mdl_xlock_request)))
{
/*
We rely on close_thread_tables() to release the global lock eventually.
*/
break;
}
DBUG_ASSERT(mdl_request->key.mdl_namespace() == MDL_key::TABLE);
pthread_mutex_lock(&LOCK_open);
@ -3805,16 +3878,30 @@ recover_from_failed_open(THD *thd, MDL_request *mdl_request,
thd->warning_info->clear_warning_info(thd->query_id);
thd->clear_error(); // Clear error message
thd->mdl_context.release_lock(mdl_xlock_request.ticket);
thd->mdl_context.release_transactional_locks();
break;
}
case OT_REPAIR:
{
MDL_request mdl_global_request;
MDL_request mdl_xlock_request(mdl_request);
mdl_global_request.init(MDL_key::GLOBAL, "", "",
MDL_INTENTION_EXCLUSIVE);
mdl_xlock_request.set_type(MDL_EXCLUSIVE);
if ((result= thd->mdl_context.acquire_global_intention_exclusive_lock(
&mdl_global_request)))
break;
if ((result=
thd->mdl_context.acquire_exclusive_lock(&mdl_xlock_request)))
{
/*
We rely on close_thread_tables() to release the global lock eventually.
*/
break;
}
DBUG_ASSERT(mdl_request->key.mdl_namespace() == MDL_key::TABLE);
pthread_mutex_lock(&LOCK_open);
@ -3824,7 +3911,7 @@ recover_from_failed_open(THD *thd, MDL_request *mdl_request,
pthread_mutex_unlock(&LOCK_open);
result= auto_repair_table(thd, table);
thd->mdl_context.release_lock(mdl_xlock_request.ticket);
thd->mdl_context.release_transactional_locks();
break;
}
default:
@ -3921,6 +4008,13 @@ open_and_process_routine(THD *thd, Query_tables_list *prelocking_ctx,
mdl_type != MDL_key::PROCEDURE)
{
ot_ctx->add_request(&rt->mdl_request);
/*
Since we acquire only shared locks on routines we don't
need to care about global intention exclusive locks.
*/
DBUG_ASSERT(rt->mdl_request.type == MDL_SHARED);
if (thd->mdl_context.try_acquire_shared_lock(&rt->mdl_request))
DBUG_RETURN(TRUE);
@ -8784,7 +8878,7 @@ has_write_table_with_auto_increment(TABLE_LIST *tables)
bool
open_system_tables_for_read(THD *thd, TABLE_LIST *table_list,
Open_tables_state *backup)
Open_tables_backup *backup)
{
Query_tables_list query_tables_list_backup;
LEX *lex= thd->lex;
@ -8830,13 +8924,13 @@ error:
SYNOPSIS
close_system_tables()
thd Thread context
backup Pointer to Open_tables_state instance which holds
backup Pointer to Open_tables_backup instance which holds
information about tables which were open before we
decided to access system tables.
*/
void
close_system_tables(THD *thd, Open_tables_state *backup)
close_system_tables(THD *thd, Open_tables_backup *backup)
{
close_thread_tables(thd);
thd->restore_backup_open_tables_state(backup);
@ -8887,7 +8981,7 @@ open_system_table_for_update(THD *thd, TABLE_LIST *one_table)
*/
TABLE *
open_performance_schema_table(THD *thd, TABLE_LIST *one_table,
Open_tables_state *backup)
Open_tables_backup *backup)
{
uint flags= ( MYSQL_LOCK_IGNORE_GLOBAL_READ_LOCK |
MYSQL_LOCK_IGNORE_GLOBAL_READ_ONLY |
@ -8936,51 +9030,9 @@ open_performance_schema_table(THD *thd, TABLE_LIST *one_table,
@param thd The current thread
@param backup [in] the context to restore.
*/
void close_performance_schema_table(THD *thd, Open_tables_state *backup)
void close_performance_schema_table(THD *thd, Open_tables_backup *backup)
{
bool found_old_table;
/*
If open_performance_schema_table() fails,
this function should not be called.
*/
DBUG_ASSERT(thd->lock != NULL);
/*
Note:
We do not create explicitly a separate transaction for the
performance table I/O, but borrow the current transaction.
lock + unlock will autocommit the change done in the
performance schema table: this is the expected result.
The current transaction should not be affected by this code.
TODO: Note that if a transactional engine is used for log tables,
this code will need to be revised, as a separate transaction
might be needed.
*/
mysql_unlock_tables(thd, thd->lock);
thd->lock= 0;
pthread_mutex_lock(&LOCK_open);
found_old_table= false;
/*
Note that we need to hold LOCK_open while changing the
open_tables list. Another thread may work on it.
(See: notify_thread_having_shared_lock())
*/
while (thd->open_tables)
found_old_table|= close_thread_table(thd, &thd->open_tables);
if (found_old_table)
broadcast_refresh();
pthread_mutex_unlock(&LOCK_open);
/* We can't have an open HANDLER in the backup context. */
DBUG_ASSERT(thd->mdl_context.lt_or_ha_sentinel() == NULL);
thd->mdl_context.release_transactional_locks();
thd->restore_backup_open_tables_state(backup);
close_system_tables(thd, backup);
}
/**


@ -471,6 +471,7 @@ THD::THD()
{
ulong tmp;
mdl_context.init(this);
/*
Pass nominal parameters to init_alloc_root only to ensure that
the destructor works OK in case of an error. The main_mem_root
@ -1007,7 +1008,8 @@ void THD::cleanup(void)
*/
DBUG_ASSERT(open_tables == NULL);
/* All HANDLERs must have been closed by now. */
DBUG_ASSERT(mdl_context.lt_or_ha_sentinel() == NULL);
DBUG_ASSERT(mdl_context.lt_or_ha_sentinel() == NULL ||
global_read_lock);
/*
Due to the above assert, this is guaranteed to release *all* locks
in this session.
@ -3024,19 +3026,21 @@ bool Security_context::user_matches(Security_context *them)
access to mysql.proc table to find definitions of stored routines.
****************************************************************************/
void THD::reset_n_backup_open_tables_state(Open_tables_state *backup)
void THD::reset_n_backup_open_tables_state(Open_tables_backup *backup)
{
DBUG_ENTER("reset_n_backup_open_tables_state");
backup->set_open_tables_state(this);
backup->mdl_system_tables_svp= mdl_context.mdl_savepoint();
reset_open_tables_state(this);
state_flags|= Open_tables_state::BACKUPS_AVAIL;
DBUG_VOID_RETURN;
}
void THD::restore_backup_open_tables_state(Open_tables_state *backup)
void THD::restore_backup_open_tables_state(Open_tables_backup *backup)
{
DBUG_ENTER("restore_backup_open_tables_state");
mdl_context.rollback_to_savepoint(backup->mdl_system_tables_svp);
/*
Before we will throw away current open tables state we want
to be sure that it was properly cleaned up.
@ -3046,7 +3050,6 @@ void THD::restore_backup_open_tables_state(Open_tables_state *backup)
lock == 0 &&
locked_tables_mode == LTM_NONE &&
m_reprepare_observer == NULL);
mdl_context.destroy();
set_open_tables_state(backup);
DBUG_VOID_RETURN;
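As the hunks above show, `reset_n_backup_open_tables_state()` now records an MDL savepoint instead of swapping in a fresh `MDL_context`, and `restore_backup_open_tables_state()` rolls back to it, releasing only the locks taken while the system tables were open. A toy model of that savepoint/rollback, using a position in the ticket list where the real code stores an `MDL_ticket` pointer (names are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <string>

// Savepoint = how many tickets were held when the backup was taken;
// rollback releases everything acquired after that point, keeping the
// transaction's own locks intact.
struct ToyMdlContext {
  std::list<std::string> tickets;  // granted locks, oldest first

  std::size_t mdl_savepoint() const { return tickets.size(); }

  void acquire(const std::string& name) { tickets.push_back(name); }

  void rollback_to_savepoint(std::size_t sv) {
    while (tickets.size() > sv)
      tickets.pop_back();          // release only the newer locks
  }
};
```

This is also why, per the commit message, ER_LOCK_DEADLOCK can now surface outside a transaction: the system-table locks live in the main context rather than in a throwaway one.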


@ -978,9 +978,6 @@ public:
Flags with information about the open tables state.
*/
uint state_flags;
MDL_context mdl_context;
/**
This constructor initializes Open_tables_state instance which can only
be used as backup storage. To prepare Open_tables_state instance for
@ -1010,21 +1007,29 @@ public:
locked_tables_mode= LTM_NONE;
state_flags= 0U;
m_reprepare_observer= NULL;
mdl_context.init(thd);
}
void enter_locked_tables_mode(enum_locked_tables_mode mode_arg)
{
DBUG_ASSERT(locked_tables_mode == LTM_NONE);
mdl_context.set_lt_or_ha_sentinel();
locked_tables_mode= mode_arg;
}
void leave_locked_tables_mode()
{
locked_tables_mode= LTM_NONE;
mdl_context.clear_lt_or_ha_sentinel();
}
};
/**
Storage for backup of Open_tables_state. Must
be used only to open system tables (TABLE_CATEGORY_SYSTEM
and TABLE_CATEGORY_LOG).
*/
class Open_tables_backup: public Open_tables_state
{
public:
/**
When we back up the open tables state to open a system
table or tables, this points at the last metadata lock
acquired before the backup. It is used to release
metadata locks on system tables after they are
no longer used.
*/
MDL_ticket *mdl_system_tables_svp;
};
/**
@class Sub_statement_state
@brief Used to save context when executing a function or trigger
@ -1308,6 +1313,9 @@ public:
{
return m_start_of_statement_svp;
}
MDL_request *get_global_mdl_request(THD *thd);
private:
/** List of requests for all locks taken so far. Used for waiting on locks. */
MDL_request_list m_mdl_requests;
@ -1320,6 +1328,11 @@ private:
and we can't safely do back-off (and release them).
*/
bool m_has_locks;
/**
Request object for global intention exclusive lock which is acquired during
opening tables for statements which take upgradable shared metadata locks.
*/
MDL_request *m_global_mdl_request;
};
@ -1426,6 +1439,8 @@ class THD :public Statement,
public Open_tables_state
{
public:
MDL_context mdl_context;
/* Used to execute base64 coded binlog events in MySQL server */
Relay_log_info* rli_fake;
@ -2314,8 +2329,8 @@ public:
void set_status_var_init();
bool is_context_analysis_only()
{ return stmt_arena->is_stmt_prepare() || lex->view_prepare_mode; }
void reset_n_backup_open_tables_state(Open_tables_state *backup);
void restore_backup_open_tables_state(Open_tables_state *backup);
void reset_n_backup_open_tables_state(Open_tables_backup *backup);
void restore_backup_open_tables_state(Open_tables_backup *backup);
void reset_sub_statement_state(Sub_statement_state *backup, uint new_state);
void restore_sub_statement_state(Sub_statement_state *backup);
void set_n_backup_active_arena(Query_arena *set, Query_arena *backup);
@@ -2567,6 +2582,19 @@ public:
Protected with LOCK_thd_data mutex.
*/
void set_query(char *query_arg, uint32 query_length_arg);
void enter_locked_tables_mode(enum_locked_tables_mode mode_arg)
{
DBUG_ASSERT(locked_tables_mode == LTM_NONE);
DBUG_ASSERT(! mdl_context.lt_or_ha_sentinel() ||
mdl_context.is_global_lock_owner(MDL_SHARED));
mdl_context.set_lt_or_ha_sentinel();
locked_tables_mode= mode_arg;
}
void leave_locked_tables_mode()
{
locked_tables_mode= LTM_NONE;
mdl_context.clear_lt_or_ha_sentinel();
}
private:
/** The current internal error handler for this thread, or NULL. */
Internal_error_handler *m_internal_handler;


@@ -1100,7 +1100,7 @@ bool mysql_truncate(THD *thd, TABLE_LIST *table_list, bool dont_send_ok)
TABLE *table;
bool error= TRUE;
uint path_length;
MDL_request mdl_request;
MDL_request mdl_global_request, mdl_request;
/*
Is set if we're under LOCK TABLES, and used
to downgrade the exclusive lock after the
@@ -1207,10 +1207,21 @@ bool mysql_truncate(THD *thd, TABLE_LIST *table_list, bool dont_send_ok)
the table can be re-created as an empty table with TRUNCATE
TABLE, even if the data or index files have become corrupted.
*/
mdl_global_request.init(MDL_key::GLOBAL, "", "", MDL_INTENTION_EXCLUSIVE);
mdl_request.init(MDL_key::TABLE, table_list->db, table_list->table_name,
MDL_EXCLUSIVE);
if (thd->mdl_context.acquire_exclusive_lock(&mdl_request))
if (thd->mdl_context.acquire_global_intention_exclusive_lock(
&mdl_global_request))
DBUG_RETURN(TRUE);
if (thd->mdl_context.acquire_exclusive_lock(&mdl_request))
{
/*
We rely on close_thread_tables() to release the global lock
in this case.
*/
DBUG_RETURN(TRUE);
}
has_mdl_lock= TRUE;
pthread_mutex_lock(&LOCK_open);
tdc_remove_table(thd, TDC_RT_REMOVE_ALL, table_list->db,
@@ -1250,7 +1261,7 @@ end:
my_ok(thd); // This should return record count
}
if (has_mdl_lock)
thd->mdl_context.release_lock(mdl_request.ticket);
thd->mdl_context.release_transactional_locks();
if (mdl_ticket)
mdl_ticket->downgrade_exclusive_lock();
}
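The two-step protocol in mysql_truncate() — acquire the global intention exclusive lock first, then the per-table exclusive lock — relies on the compatibility rules of the global lock. A minimal sketch under assumed semantics (many holders of intention exclusive may coexist, while a global shared lock, such as the one taken by FLUSH TABLES WITH READ LOCK, conflicts with them); Toy_global_lock and its names are illustrative only:

```cpp
#include <cassert>

// Toy global-lock scope: INTENTION_EXCLUSIVE marks "I will lock one
// object exclusively", SHARED marks "nobody may change anything".
enum GlobalLockType { GL_INTENTION_EXCLUSIVE, GL_SHARED };

class Toy_global_lock {
  int m_intention_exclusive = 0;  // current holders of IX
  int m_shared = 0;               // current holders of S
public:
  bool try_acquire(GlobalLockType type) {
    if (type == GL_INTENTION_EXCLUSIVE) {
      if (m_shared) return false;  // global read lock active: DDL must wait
      ++m_intention_exclusive;
    } else {
      if (m_intention_exclusive) return false;  // DDL in progress
      ++m_shared;
    }
    return true;
  }
  void release(GlobalLockType type) {
    if (type == GL_INTENTION_EXCLUSIVE) --m_intention_exclusive;
    else --m_shared;
  }
};
```

Because intention exclusive locks are mutually compatible, making the acquisition explicit (rather than implicit inside the MDL subsystem) does not serialize concurrent DDL on distinct tables.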


@@ -655,7 +655,12 @@ bool mysqld_help(THD *thd, const char *mask)
tables[0].db= tables[1].db= tables[2].db= tables[3].db= (char*) "mysql";
init_mdl_requests(tables);
Open_tables_state open_tables_state_backup;
/*
HELP must be available under LOCK TABLES.
Reset and back up the current open tables state to
make this possible.
*/
Open_tables_backup open_tables_state_backup;
if (open_system_tables_for_read(thd, tables, &open_tables_state_backup))
goto error2;


@@ -6502,7 +6502,8 @@ bool reload_acl_and_cache(THD *thd, ulong options, TABLE_LIST *tables,
DBUG_ASSERT(!thd || thd->locked_tables_mode ||
!thd->mdl_context.has_locks() ||
thd->handler_tables_hash.records);
thd->handler_tables_hash.records ||
thd->global_read_lock);
/*
Note that if REFRESH_READ_LOCK bit is set then REFRESH_TABLES is set too


@@ -18,7 +18,8 @@
#include <my_global.h>
template <typename T, typename B> class I_P_List_iterator;
template <typename T, typename B, typename C> class I_P_List_iterator;
class I_P_List_null_counter;
/**
@@ -47,10 +48,14 @@ template <typename T, typename B> class I_P_List_iterator;
return &el->prev;
}
};
@param C Policy class specifying how counting of elements in the list
should be done. An instance of this class is also used as the
place where the number of list elements is stored.
@sa I_P_List_null_counter, I_P_List_counter
*/
template <typename T, typename B>
class I_P_List
template <typename T, typename B, typename C = I_P_List_null_counter>
class I_P_List : public C
{
T *first;
@@ -61,7 +66,7 @@ class I_P_List
*/
public:
I_P_List() : first(NULL) { };
inline void empty() { first= NULL; }
inline void empty() { first= NULL; C::reset(); }
inline bool is_empty() const { return (first == NULL); }
inline void push_front(T* a)
{
@@ -70,6 +75,7 @@ public:
*B::prev_ptr(first)= B::next_ptr(a);
first= a;
*B::prev_ptr(a)= &first;
C::inc();
}
inline void push_back(T *a)
{
@@ -107,21 +113,23 @@ public:
if (next)
*B::prev_ptr(next)= *B::prev_ptr(a);
**B::prev_ptr(a)= next;
C::dec();
}
inline T* front() { return first; }
inline const T *front() const { return first; }
void swap(I_P_List<T,B> &rhs)
void swap(I_P_List<T, B, C> &rhs)
{
swap_variables(T *, first, rhs.first);
if (first)
*B::prev_ptr(first)= &first;
if (rhs.first)
*B::prev_ptr(rhs.first)= &rhs.first;
C::swap(rhs);
}
#ifndef _lint
friend class I_P_List_iterator<T, B>;
friend class I_P_List_iterator<T, B, C>;
#endif
typedef I_P_List_iterator<T, B> Iterator;
typedef I_P_List_iterator<T, B, C> Iterator;
};
@@ -129,15 +137,15 @@ public:
Iterator for I_P_List.
*/
template <typename T, typename B>
template <typename T, typename B, typename C = I_P_List_null_counter>
class I_P_List_iterator
{
const I_P_List<T, B> *list;
const I_P_List<T, B, C> *list;
T *current;
public:
I_P_List_iterator(const I_P_List<T, B> &a) : list(&a), current(a.first) {}
I_P_List_iterator(const I_P_List<T, B> &a, T* current_arg) : list(&a), current(current_arg) {}
inline void init(I_P_List<T, B> &a)
I_P_List_iterator(const I_P_List<T, B, C> &a) : list(&a), current(a.first) {}
I_P_List_iterator(const I_P_List<T, B, C> &a, T* current_arg) : list(&a), current(current_arg) {}
inline void init(const I_P_List<T, B, C> &a)
{
list= &a;
current= a.first;
@@ -160,4 +168,39 @@ public:
}
};
/**
Element counting policy class for I_P_List to be used in
cases when no element counting is needed.
*/
class I_P_List_null_counter
{
protected:
void reset() {}
void inc() {}
void dec() {}
void swap(I_P_List_null_counter &rhs) {}
};
/**
Element counting policy class for I_P_List which provides
basic element counting.
*/
class I_P_List_counter
{
uint m_counter;
protected:
I_P_List_counter() : m_counter (0) {}
void reset() {m_counter= 0;}
void inc() {m_counter++;}
void dec() {m_counter--;}
void swap(I_P_List_counter &rhs)
{ swap_variables(uint, m_counter, rhs.m_counter); }
public:
uint elements() const { return m_counter; }
};
#endif
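The counting policies above work by making the policy a base class of I_P_List, so I_P_List_null_counter adds no per-list storage thanks to the empty base optimization. A self-contained sketch of the same pattern with a deliberately simplified list (Toy_list and the policy names are illustrative, not the server's code):

```cpp
#include <cassert>

// Counting policies mirroring I_P_List_null_counter / I_P_List_counter.
class Null_counter {
protected:
  void reset() {}
  void inc() {}
  void dec() {}
};

class Counter {
  unsigned m_counter = 0;
protected:
  void reset() { m_counter = 0; }
  void inc() { ++m_counter; }
  void dec() { --m_counter; }
public:
  unsigned elements() const { return m_counter; }
};

// Simplified singly linked list parameterized by a counting policy,
// like I_P_List<T, B, C>. Because the policy is a base class, using
// Null_counter costs no storage (empty base optimization).
template <typename T, typename C = Null_counter>
class Toy_list : public C {
  struct Node { T val; Node *next; };
  Node *first = nullptr;
public:
  void push_front(T v) { first = new Node{v, first}; C::inc(); }
  void pop_front() { Node *n = first; first = n->next; delete n; C::dec(); }
  bool is_empty() const { return first == nullptr; }
};
```

Lists that need elements() instantiate with the counting policy; all other lists keep the old zero-overhead layout.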


@@ -2871,7 +2871,7 @@ make_table_name_list(THD *thd, List<LEX_STRING> *table_names, LEX *lex,
due to metadata locks, so to avoid
them we should not wait if a
conflicting lock is present.
@param[in] open_tables_state_backup pointer to Open_tables_state object
@param[in] open_tables_state_backup pointer to Open_tables_backup object
which is used to save|restore original
status of variables related to
open tables state
@@ -2885,7 +2885,7 @@ static int
fill_schema_show_cols_or_idxs(THD *thd, TABLE_LIST *tables,
ST_SCHEMA_TABLE *schema_table,
bool can_deadlock,
Open_tables_state *open_tables_state_backup)
Open_tables_backup *open_tables_state_backup)
{
LEX *lex= thd->lex;
bool res;
@@ -2941,7 +2941,8 @@ fill_schema_show_cols_or_idxs(THD *thd, TABLE_LIST *tables,
table, res, db_name,
table_name));
thd->temporary_tables= 0;
close_tables_for_reopen(thd, &show_table_list, NULL);
close_tables_for_reopen(thd, &show_table_list,
open_tables_state_backup->mdl_system_tables_svp);
DBUG_RETURN(error);
}
@@ -3236,8 +3237,12 @@ end_share:
end_unlock:
pthread_mutex_unlock(&LOCK_open);
/*
Don't release the MDL lock, it can be part of a transaction.
If it is not, it will be released by the call to
MDL_context::rollback_to_savepoint() in the caller.
*/
thd->mdl_context.release_lock(table_list.mdl_request.ticket);
thd->clear_error();
return res;
}
@@ -3281,7 +3286,7 @@ int get_all_tables(THD *thd, TABLE_LIST *tables, COND *cond)
COND *partial_cond= 0;
uint derived_tables= lex->derived_tables;
int error= 1;
Open_tables_state open_tables_state_backup;
Open_tables_backup open_tables_state_backup;
bool save_view_prepare_mode= lex->view_prepare_mode;
Query_tables_list query_tables_list_backup;
#ifndef NO_EMBEDDED_ACCESS_CHECKS
@@ -3500,7 +3505,8 @@ int get_all_tables(THD *thd, TABLE_LIST *tables, COND *cond)
res= schema_table->process_table(thd, show_table_list, table,
res, &orig_db_name,
&tmp_lex_string);
close_tables_for_reopen(thd, &show_table_list, NULL);
close_tables_for_reopen(thd, &show_table_list,
open_tables_state_backup.mdl_system_tables_svp);
}
DBUG_ASSERT(!lex->query_tables_own_last);
if (res)
@@ -4302,7 +4308,7 @@ int fill_schema_proc(THD *thd, TABLE_LIST *tables, COND *cond)
TABLE *table= tables->table;
bool full_access;
char definer[USER_HOST_BUFF_SIZE];
Open_tables_state open_tables_state_backup;
Open_tables_backup open_tables_state_backup;
DBUG_ENTER("fill_schema_proc");
strxmov(definer, thd->security_ctx->priv_user, "@",


@@ -2207,22 +2207,26 @@ err:
locked. Additional check for 'non_temp_tables_count' is to avoid
leaving LOCK TABLES mode if we have dropped only temporary tables.
*/
if (thd->locked_tables_mode &&
thd->lock && thd->lock->table_count == 0 && non_temp_tables_count > 0)
if (! thd->locked_tables_mode)
unlock_table_names(thd);
else
{
thd->locked_tables_list.unlock_locked_tables(thd);
goto end;
}
for (table= tables; table; table= table->next_local)
{
if (table->mdl_request.ticket)
if (thd->lock && thd->lock->table_count == 0 && non_temp_tables_count > 0)
{
/*
Under LOCK TABLES we may have several instances of table open
and locked and therefore have to remove several metadata lock
requests associated with them.
*/
thd->mdl_context.release_all_locks_for_name(table->mdl_request.ticket);
thd->locked_tables_list.unlock_locked_tables(thd);
goto end;
}
for (table= tables; table; table= table->next_local)
{
if (table->mdl_request.ticket)
{
/*
Under LOCK TABLES we may have several instances of table open
and locked and therefore have to remove several metadata lock
requests associated with them.
*/
thd->mdl_context.release_all_locks_for_name(table->mdl_request.ticket);
}
}
}
}
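The release_all_locks_for_name() call in the branch above exists because under LOCK TABLES one table name may own several tickets, one per open instance, and all of them must be dropped together. A toy model of that behavior (Toy_ticket_store is a hypothetical stand-in, not the real MDL_context):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Under LOCK TABLES a table opened several times holds several tickets
// for the same name; releasing by name must drop all of them.
class Toy_ticket_store {
  std::vector<std::string> m_tickets;  // lock name per ticket
public:
  void acquire(const std::string &name) { m_tickets.push_back(name); }
  void release_all_locks_for_name(const std::string &name) {
    // Walk backwards so erasing does not disturb unvisited indexes.
    for (size_t i = m_tickets.size(); i-- > 0; )
      if (m_tickets[i] == name)
        m_tickets.erase(m_tickets.begin() + i);
  }
  size_t count() const { return m_tickets.size(); }
};
```

Releasing only the single ticket named in the request would leave the other instances' locks dangling after the table is dropped.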
@@ -4349,6 +4353,14 @@ static int prepare_for_repair(THD *thd, TABLE_LIST *table_list,
if (!(table= table_list->table))
{
/*
If the table didn't exist, we have a shared metadata lock
on it that is left from mysql_admin_table()'s attempt to
open it. Release the shared metadata lock before trying to
acquire the exclusive lock to satisfy MDL asserts and avoid
deadlocks.
*/
thd->mdl_context.release_transactional_locks();
/*
Attempt to do full-blown table open in mysql_admin_table() has failed.
Let us try to open at least a .FRM for this table.
@@ -4360,6 +4372,14 @@ static int prepare_for_repair(THD *thd, TABLE_LIST *table_list,
table_list->mdl_request.init(MDL_key::TABLE,
table_list->db, table_list->table_name,
MDL_EXCLUSIVE);
MDL_request mdl_global_request;
mdl_global_request.init(MDL_key::GLOBAL, "", "", MDL_INTENTION_EXCLUSIVE);
if (thd->mdl_context.acquire_global_intention_exclusive_lock(
&mdl_global_request))
DBUG_RETURN(0);
if (thd->mdl_context.acquire_exclusive_lock(&table_list->mdl_request))
DBUG_RETURN(0);
has_mdl_lock= TRUE;
@@ -4491,7 +4511,7 @@ end:
}
/* In case of a temporary table there will be no metadata lock. */
if (error && has_mdl_lock)
thd->mdl_context.release_lock(table_list->mdl_request.ticket);
thd->mdl_context.release_transactional_locks();
DBUG_RETURN(error);
}
@@ -6544,6 +6564,13 @@ view_err:
{
target_mdl_request.init(MDL_key::TABLE, new_db, new_name,
MDL_EXCLUSIVE);
/*
The global intention exclusive lock must already have been acquired
when the table to be altered was opened, so there is no need to do it here.
*/
DBUG_ASSERT(thd->
mdl_context.is_global_lock_owner(MDL_INTENTION_EXCLUSIVE));
if (thd->mdl_context.try_acquire_exclusive_lock(&target_mdl_request))
DBUG_RETURN(TRUE);
if (target_mdl_request.ticket == NULL)
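The check for a NULL ticket after try_acquire_exclusive_lock() reflects its assumed contract: a conflict is not an error, the call simply returns success with no ticket set, letting the caller decide how to react. A toy illustration of that convention (Toy_lock_table and Toy_request are hypothetical):

```cpp
#include <cassert>
#include <set>
#include <string>

// Toy try-acquire: on conflict it returns "no error" but leaves the
// ticket unset, so request.ticket distinguishes "got the lock" from
// "lock is busy" without blocking.
struct Toy_request { std::string name; bool ticket = false; };

class Toy_lock_table {
  std::set<std::string> m_exclusive;  // names currently locked exclusively
public:
  // Returns true only on an out-of-band error (none in this toy).
  bool try_acquire_exclusive(Toy_request *req) {
    if (m_exclusive.count(req->name))
      return false;            // busy: no error, req->ticket stays false
    m_exclusive.insert(req->name);
    req->ticket = true;
    return false;
  }
};
```

Separating "error" from "busy" lets ALTER ... RENAME probe the target name without waiting and without raising a spurious failure.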


@@ -1563,7 +1563,6 @@ my_tz_init(THD *org_thd, const char *default_tzname, my_bool bootstrap)
{
THD *thd;
TABLE_LIST tz_tables[1+MY_TZ_TABLES_COUNT];
Open_tables_state open_tables_state_backup;
TABLE *table;
Tz_names_entry *tmp_tzname;
my_bool return_val= 1;
@@ -1642,7 +1641,8 @@ my_tz_init(THD *org_thd, const char *default_tzname, my_bool bootstrap)
We need to open only mysql.time_zone_leap_second, but we try to
open all time zone tables to see if they exist.
*/
if (open_system_tables_for_read(thd, tz_tables, &open_tables_state_backup))
if (open_and_lock_tables_derived(thd, tz_tables, FALSE,
MYSQL_LOCK_IGNORE_FLUSH))
{
sql_print_warning("Can't open and lock time zone table: %s "
"trying to live without them", thd->stmt_da->message());
@@ -1651,6 +1651,9 @@ my_tz_init(THD *org_thd, const char *default_tzname, my_bool bootstrap)
goto end_with_setting_default_tz;
}
for (TABLE_LIST *tl= tz_tables; tl; tl= tl->next_global)
tl->table->use_all_columns();
/*
Now we are going to load leap seconds descriptions that are shared
between all time zones that use them. We are using index for getting
@@ -1739,7 +1742,8 @@ end_with_close:
if (time_zone_tables_exist)
{
thd->version--; /* Force close to free memory */
close_system_tables(thd, &open_tables_state_backup);
close_thread_tables(thd);
thd->mdl_context.release_transactional_locks();
}
end_with_cleanup:
@@ -2293,7 +2297,7 @@ my_tz_find(THD *thd, const String *name)
else if (time_zone_tables_exist)
{
TABLE_LIST tz_tables[MY_TZ_TABLES_COUNT];
Open_tables_state open_tables_state_backup;
Open_tables_backup open_tables_state_backup;
tz_init_table_list(tz_tables);
init_mdl_requests(tz_tables);