bsrikanth-mariadb (Contributor):

This feature stores the DDLs of the tables/views that are used in
a query to the optimizer trace. It is currently controlled by the
system variable store_ddls_in_optimizer_trace and is not enabled
by default. All the DDLs are stored in a single JSON array, with each
element containing the table/view name and the associated CREATE
definition of the table/view.

The approach taken is to read the global query_tables list from thd->lex
and walk it in reverse. For each table, a record is created with the
table name and the DDL of the table, the record is added to a hash,
and the information is dumped to the trace. dbName_plus_tableName is
used as the hash key, so duplicate entries are not added.

The main suite tests are also run with the feature enabled, and they
all succeed.
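
For illustration, a minimal standalone C++ sketch of the deduplication described above (not the server code): `TableRef` and the plain `ostream` output are simplified stand-ins for the real `TABLE_LIST` and optimizer-trace structures, and the JSON field names are placeholders rather than the actual trace keys.

```cpp
// Sketch: walk the statement's tables in reverse, skip duplicates via a
// dbName + "_" + tableName key, and emit one JSON array element per table.
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

struct TableRef {
  std::string db;
  std::string name;
  std::string ddl;  // stand-in for the result of SHOW CREATE TABLE/VIEW
};

void store_tables_context_in_trace_sketch(
    const std::vector<TableRef> &query_tables, std::ostream &trace) {
  std::unordered_set<std::string> seen;  // keys: dbName_plus_tableName
  bool first = true;
  trace << "[";
  for (auto it = query_tables.rbegin(); it != query_tables.rend(); ++it) {
    const std::string key = it->db + "_" + it->name;
    if (!seen.insert(key).second) continue;  // duplicate entry: skip
    if (!first) trace << ", ";
    trace << "{\"name\": \"" << it->db << "." << it->name << "\", "
          << "\"ddl\": \"" << it->ddl << "\"}";
    first = false;
  }
  trace << "]\n";
}

int main() {
  std::vector<TableRef> tables = {
      {"test", "t1", "CREATE TABLE t1 (a INT)"},
      {"test", "t1", "CREATE TABLE t1 (a INT)"},  // duplicate reference
      {"test", "v1", "CREATE VIEW v1 AS SELECT a FROM t1"}};
  store_tables_context_in_trace_sketch(tables, std::cout);
}
```

The point of the sketch is the dedup rule only: one JSON array, one element per distinct table, produced in reverse query_tables order, with duplicates filtered by the combined db/table key.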
This feature stores basic statistics of the base tables that are used in
a query to the optimizer trace. It is also controlled by
optimizer_record_context and is not enabled by default. The statistics
dumped to the trace include num_of_records for the table, the names of
any indexes present, and the average records_per_key within each index.
Additionally, statistics from range analysis of the queries are also
dumped into the trace.

The approach taken here is to extend the existing function
store_tables_context_in_trace() and add a new function
dump_stats_to_trace() in opt_trace_ddl_info.cc. Several new tests are
added in opt_trace_store_stats.test.
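
A minimal standalone sketch of the kind of per-table statistics described above, assuming invented stand-in types (`IndexStats`, `TableStats`) rather than the server's handler/KEY structures; the JSON field names echo the commit message but are not guaranteed to match the actual trace output.

```cpp
// Sketch: dump row count, index names and average records-per-key per index.
#include <iostream>
#include <string>
#include <vector>

struct IndexStats {
  std::string name;
  double avg_records_per_key;  // averaged over the index's key parts
};

struct TableStats {
  std::string table_name;
  unsigned long long num_of_records;
  std::vector<IndexStats> indexes;
};

void dump_stats_to_trace_sketch(const TableStats &s, std::ostream &trace) {
  trace << "{\"table\": \"" << s.table_name << "\", "
        << "\"num_of_records\": " << s.num_of_records << ", \"indexes\": [";
  for (size_t i = 0; i < s.indexes.size(); ++i) {
    if (i) trace << ", ";
    trace << "{\"name\": \"" << s.indexes[i].name << "\", "
          << "\"avg_records_per_key\": " << s.indexes[i].avg_records_per_key
          << "}";
  }
  trace << "]}\n";
}

int main() {
  TableStats t1{"test.t1", 1000, {{"PRIMARY", 1.0}, {"idx_a", 12.5}}};
  dump_stats_to_trace_sketch(t1, std::cout);
}
```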
When doing multiple DELETEs on tables:
-> If there was data to delete, then no DDLs were dumped to the trace.
-> However, when there was no data to delete from the tables, the DDLs
   of the tables were dumped to the trace.

The problem is that store_tables_context_in_trace() was not getting
invoked when there was data to delete from the tables. Although there
was no error when executing the DELETE query, the result of the basic
cleanup was non-zero, and the failing result check prevented
store_tables_context_in_trace() from being invoked.

Removed this additional result check around the call to
store_tables_context_in_trace() in mysql_execute_command().
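
A standalone sketch of the gating logic the fix changes, under the assumption that the dump was previously conditioned on both the statement's cleanup result and the error state; all names here are illustrative, not the actual mysql_execute_command() code.

```cpp
// Sketch of the gating bug: when the dump is conditioned on both "no error"
// and "cleanup result == 0", a successful DELETE whose cleanup result is
// non-zero never reaches store_tables_context_in_trace(); keeping only the
// error check fixes that.
#include <iostream>

struct FakeThd {
  bool is_error = false;  // the statement itself did not fail
};

static void store_tables_context_in_trace(FakeThd &) {
  std::cout << "DDLs dumped to trace\n";
}

int main() {
  FakeThd thd;
  int res = 1;  // e.g. a DELETE that removed rows: no error, non-zero result

  // Old gating: the non-zero result suppressed the dump despite success.
  if (!res && !thd.is_error)
    std::cout << "old check: would dump\n";
  else
    std::cout << "old check: dump skipped\n";

  // New gating: dump whenever the statement did not end in an error.
  if (!thd.is_error) store_tables_context_in_trace(thd);
  return 0;
}
```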