Spark SQL

This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
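A minimal sketch of what "either SQL or the DataFrame/Dataset API" means in practice: the same query written both ways against one DataFrame. The local master setting and the Parquet path "examples/people.parquet" are illustrative assumptions, not part of this module.

```scala
// Minimal sketch: the same relational query expressed in SQL and with the DataFrame API.
// The local[*] master and the Parquet path below are assumptions for illustration only.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object RelationalQueryExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RelationalQueryExample")
      .master("local[*]")
      .getOrCreate()

    val people = spark.read.parquet("examples/people.parquet")

    // SQL: register the DataFrame as a temporary view and query it with SQL text.
    people.createOrReplaceTempView("people")
    val viaSql = spark.sql("SELECT name FROM people WHERE age > 21")

    // DataFrame API: the equivalent query expressed programmatically.
    val viaDataFrame = people.filter(col("age") > 21).select("name")

    viaSql.show()
    viaDataFrame.show()

    spark.stop()
  }
}
```

Both forms are parsed or built into the same Catalyst logical plan and executed by the same engine, which is why the subprojects below are layered the way they are.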

Spark SQL is broken up into five subprojects:

  • API (sql/api) - Includes public API classes such as DataType and Row. This component can be shared between the Catalyst module and the Spark Connect client.
  • Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
  • Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a public interface, SQLContext, that allows users to execute SQL queries and DataFrame operations against existing RDDs and Parquet files.
  • Hive Support (sql/hive) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs (see the sketch after this list).
  • HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2-compatible server for JDBC/ODBC clients.
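As a rough illustration of the Hive Support layer, the sketch below builds a SparkSession with enableHiveSupport() and runs HiveQL-style statements; the table name "src" and its schema are hypothetical, and running this requires a Spark build that includes the sql/hive module and an available (or embedded) Hive Metastore.

```scala
// Minimal sketch of the sql/hive path, assuming Spark was built with Hive support
// and a Hive Metastore (embedded Derby by default) is available.
import org.apache.spark.sql.SparkSession

object HiveSupportExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveSupportExample")
      .enableHiveSupport() // wires in the Hive catalog, SerDes, and UDF wrappers
      .getOrCreate()

    // HiveQL-style statements are parsed and planned by Catalyst, then executed by sql/core.
    // The table "src" is a hypothetical example table.
    spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive")
    spark.sql("SELECT key, value FROM src ORDER BY key LIMIT 10").show()

    spark.stop()
  }
}
```

The same queries can also be issued over JDBC/ODBC through the HiveServer2-compatible server in sql/hive-thriftserver, or interactively from the bin/spark-sql CLI.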

Running ./sql/create-docs.sh generates SQL documentation for built-in functions under sql/site, as well as SQL configuration documentation that is included as part of configuration.md in the main docs directory.