---
title: BatchTableWrite
---

# BatchTableWrite Flow Execution

`BatchTableWrite` is a [FlowExecution](FlowExecution.md) that writes a batch `DataFrame` to a [Table](#destination).

## Creating Instance

`BatchTableWrite` takes the following to be created:

* <span id="identifier"> `TableIdentifier`
* <span id="flow"> [ResolvedFlow](ResolvedFlow.md)
* <span id="graph"> [DataflowGraph](DataflowGraph.md)
* <span id="destination"> [Table](Table.md)
* <span id="updateContext"> [PipelineUpdateContext](PipelineUpdateContext.md)
* <span id="sqlConf"> Configuration Properties

`BatchTableWrite` is created when:

* `FlowPlanner` is requested to [plan a CompleteFlow](FlowPlanner.md#plan)

## Execute { #executeInternal }

??? note "FlowExecution"

    ```scala
    executeInternal(): Future[Unit]
    ```

    `executeInternal` is part of the [FlowExecution](FlowExecution.md#executeInternal) abstraction.

`executeInternal` activates the [configuration properties](#sqlConf) in the current [SparkSession](FlowExecution.md#spark).

`executeInternal` requests this [PipelineUpdateContext](#updateContext) for the [FlowProgressEventLogger](PipelineUpdateContext.md#flowProgressEventLogger) to [recordRunning](FlowProgressEventLogger.md#recordRunning) with this [ResolvedFlow](#flow).

`executeInternal` requests this [DataflowGraph](#graph) to [re-analyze](DataflowGraph.md#reanalyzeFlow) this [ResolvedFlow](#flow) to get the [DataFrame](ResolvedFlow.md#df) (the logical query plan).

`executeInternal` executes an `append` batch write asynchronously:

1. Creates a [DataFrameWriter](../DataFrameWriter.md) for the batch query's logical plan (the [DataFrame](ResolvedFlow.md#df)).
1. Sets the write format to the [format](Table.md#format) of this [Table](#destination).
1. In the end, `executeInternal` appends the rows to this [Table](#destination) (using the [DataFrameWriter.saveAsTable](../DataFrameWriter.md#saveAsTable) operator).

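The asynchronous append above can be sketched as follows. This is a minimal approximation only: `appendAsync` and its parameters are hypothetical names introduced for illustration, not part of the actual codebase.

```scala
import scala.concurrent.{ExecutionContext, Future}
import org.apache.spark.sql.{DataFrame, SparkSession}

// Hypothetical sketch of the steps above (not the actual implementation)
def appendAsync(
    spark: SparkSession,
    df: DataFrame,      // the re-analyzed DataFrame of this ResolvedFlow
    format: String,     // the format of the destination Table
    tableName: String)(
    implicit ec: ExecutionContext): Future[Unit] =
  Future {
    df.write                   // 1. a DataFrameWriter for the DataFrame
      .format(format)          // 2. the Table's format
      .mode("append")          // append semantics of the batch write
      .saveAsTable(tableName)  // 3. append the rows to the destination table
  }
```

With `append` mode, `DataFrameWriter.saveAsTable` appends rows to the table if it already exists and creates it otherwise.
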
## isStreaming { #isStreaming }

??? note "FlowExecution"

    ```scala
    isStreaming: Boolean
    ```

    `isStreaming` is part of the [FlowExecution](FlowExecution.md#isStreaming) abstraction.

`isStreaming` is always disabled (`false`).