Commit e7877e2

Backport config option skip_physical_aggregate_schema_check #13176 to 42 (#13189)
* Add config option `skip_physical_aggregate_schema_check` (#13176)
* Add option to skip physical aggregate check
* tweak wording
* update test
* Change default value
1 parent 0011f45 commit e7877e2

File tree: 4 files changed (+21, -3 lines)

datafusion/common/src/config.rs

Lines changed: 13 additions & 0 deletions
@@ -268,6 +268,19 @@ config_namespace! {
         /// Defaults to the number of CPU cores on the system
         pub planning_concurrency: usize, default = num_cpus::get()

+        /// When set to true, skips verifying that the schema produced by
+        /// planning the input of `LogicalPlan::Aggregate` exactly matches the
+        /// schema of the input plan.
+        ///
+        /// When set to false, if the schema does not match exactly
+        /// (including nullability and metadata), a planning error will be raised.
+        ///
+        /// This is used to workaround bugs in the planner that are now caught by
+        /// the new schema verification step.
+        ///
+        /// This configuration option will default to `false` in future releases.
+        pub skip_physical_aggregate_schema_check: bool, default = true
+
         /// Specifies the reserved memory for each spillable sort operation to
         /// facilitate an in-memory merge.
         ///
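For readers of this backport, a minimal sketch of how the new option might be toggled from Rust. The `SessionConfig`/`SessionContext` setup shown here is the usual DataFusion builder API rather than anything introduced by this diff; adjust names to your DataFusion version.

use datafusion::prelude::*;

fn main() {
    // Sketch: re-enable the strict aggregate schema verification by setting the
    // new option to `false` (after this commit it defaults to `true`, i.e. the
    // check is skipped).
    let config = SessionConfig::new()
        .set_bool("datafusion.execution.skip_physical_aggregate_schema_check", false);
    let _ctx = SessionContext::new_with_config(config);
}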

datafusion/core/src/physical_planner.rs

Lines changed: 5 additions & 3 deletions
@@ -665,14 +665,16 @@ impl DefaultPhysicalPlanner {
                 aggr_expr,
                 ..
             }) => {
+                let options = session_state.config().options();
                 // Initially need to perform the aggregate and then merge the partitions
                 let input_exec = children.one()?;
                 let physical_input_schema = input_exec.schema();
                 let logical_input_schema = input.as_ref().schema();
-                let physical_input_schema_from_logical: Arc<Schema> =
-                    logical_input_schema.as_ref().clone().into();
+                let physical_input_schema_from_logical = logical_input_schema.inner();

-                if physical_input_schema != physical_input_schema_from_logical {
+                if &physical_input_schema != physical_input_schema_from_logical
+                    && !options.execution.skip_physical_aggregate_schema_check
+                {
                     return internal_err!("Physical input schema should be the same as the one converted from logical input schema.");
                 }
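As context for the check above (not part of the diff): Arrow `Schema` equality is strict, so two schemas that agree on field names and types are still unequal when nullability or field metadata differ, which is the kind of mismatch this option lets the planner tolerate. A small illustrative sketch:

use datafusion::arrow::datatypes::{DataType, Field, Schema};

fn main() {
    // Same field name and type, but nullability differs between the two schemas.
    let physical = Schema::new(vec![Field::new("a", DataType::Int64, true)]);
    let from_logical = Schema::new(vec![Field::new("a", DataType::Int64, false)]);
    // Strict equality fails here, which would raise the planner's internal error
    // unless skip_physical_aggregate_schema_check is enabled.
    assert_ne!(physical, from_logical);
}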

datafusion/sqllogictest/test_files/information_schema.slt

Lines changed: 2 additions & 0 deletions
@@ -208,6 +208,7 @@ datafusion.execution.parquet.writer_version 1.0
 datafusion.execution.planning_concurrency 13
 datafusion.execution.skip_partial_aggregation_probe_ratio_threshold 0.8
 datafusion.execution.skip_partial_aggregation_probe_rows_threshold 100000
+datafusion.execution.skip_physical_aggregate_schema_check true
 datafusion.execution.soft_max_rows_per_output_file 50000000
 datafusion.execution.sort_in_place_threshold_bytes 1048576
 datafusion.execution.sort_spill_reservation_bytes 10485760
@@ -298,6 +299,7 @@ datafusion.execution.parquet.writer_version 1.0 (writing) Sets parquet writer ve
 datafusion.execution.planning_concurrency 13 Fan-out during initial physical planning. This is mostly use to plan `UNION` children in parallel. Defaults to the number of CPU cores on the system
 datafusion.execution.skip_partial_aggregation_probe_ratio_threshold 0.8 Aggregation ratio (number of distinct groups / number of input rows) threshold for skipping partial aggregation. If the value is greater then partial aggregation will skip aggregation for further input
 datafusion.execution.skip_partial_aggregation_probe_rows_threshold 100000 Number of input rows partial aggregation partition should process, before aggregation ratio check and trying to switch to skipping aggregation mode
+datafusion.execution.skip_physical_aggregate_schema_check true When set to true, skips verifying that the schema produced by planning the input of `LogicalPlan::Aggregate` exactly matches the schema of the input plan. When set to false, if the schema does not match exactly (including nullability and metadata), a planning error will be raised. This is used to workaround bugs in the planner that are now caught by the new schema verification step. This configuration option will default to `false` in future releases.
 datafusion.execution.soft_max_rows_per_output_file 50000000 Target number of rows in output files when writing multiple. This is a soft max, so it can be exceeded slightly. There also will be one file smaller than the limit if the total number of rows written is not roughly divisible by the soft max
 datafusion.execution.sort_in_place_threshold_bytes 1048576 When sorting, below what size should data be concatenated and sorted in a single RecordBatch rather than sorted in batches and merged.
 datafusion.execution.sort_spill_reservation_bytes 10485760 Specifies the reserved memory for each spillable sort operation to facilitate an in-memory merge. When a sort operation spills to disk, the in-memory data must be sorted and merged before being written to a file. This setting reserves a specific amount of memory for that in-memory sort/merge process. Note: This setting is irrelevant if the sort operation cannot spill (i.e., if there's no `DiskManager` configured).
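Not part of this diff, but for completeness: like other `datafusion.execution` settings listed in `information_schema`, this option can presumably also be flipped at runtime with a SQL SET statement, for example from Rust (the session setup and SET syntax shown here are the general DataFusion pattern, not something this commit adds):

use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();
    // Sketch: toggle the option for the current session; the new row shown in
    // the test output above is what this setting looks like in the catalog.
    ctx.sql("SET datafusion.execution.skip_physical_aggregate_schema_check = false")
        .await?;
    Ok(())
}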

docs/source/user-guide/configs.md

Lines changed: 1 addition & 0 deletions
@@ -78,6 +78,7 @@ Environment variables are read during `SessionConfig` initialisation so they mus
 | datafusion.execution.parquet.maximum_parallel_row_group_writers | 1 | (writing) By default parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame. |
 | datafusion.execution.parquet.maximum_buffered_record_batches_per_stream | 2 | (writing) By default parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame. |
 | datafusion.execution.planning_concurrency | 0 | Fan-out during initial physical planning. This is mostly use to plan `UNION` children in parallel. Defaults to the number of CPU cores on the system |
+| datafusion.execution.skip_physical_aggregate_schema_check | true | When set to true, skips verifying that the schema produced by planning the input of `LogicalPlan::Aggregate` exactly matches the schema of the input plan. When set to false, if the schema does not match exactly (including nullability and metadata), a planning error will be raised. This is used to workaround bugs in the planner that are now caught by the new schema verification step. This configuration option will default to `false` in future releases. |
 | datafusion.execution.sort_spill_reservation_bytes | 10485760 | Specifies the reserved memory for each spillable sort operation to facilitate an in-memory merge. When a sort operation spills to disk, the in-memory data must be sorted and merged before being written to a file. This setting reserves a specific amount of memory for that in-memory sort/merge process. Note: This setting is irrelevant if the sort operation cannot spill (i.e., if there's no `DiskManager` configured). |
 | datafusion.execution.sort_in_place_threshold_bytes | 1048576 | When sorting, below what size should data be concatenated and sorted in a single RecordBatch rather than sorted in batches and merged. |
 | datafusion.execution.meta_fetch_concurrency | 32 | Number of files to read in parallel when inferring schema and statistics |
