
Commit baa8498

Spelling fixes (#183)
2 parents e51b1a3 + 4e001a4

7 files changed: +13 -15 lines changed


docs/source/release_notes.rst (+2 -2)

@@ -4,7 +4,7 @@ Release Notes

 2.8.4
 -----
-Release date: May 4, 2023
+Release date: May 5, 2023

 * Added support for creating ADSDataset from pandas dataframe.
 * Added support for multi-model deployment using Triton.
@@ -272,7 +272,7 @@ Release date: March 3, 2022

 Release date: February 4, 2022

-* Fixed bug in DataFlow ``Job`` creation.
+* Fixed bug in Data Flow ``Job`` creation.
 * Fixed bug in ADSDataset ``get_recommendations`` raising ``HTML is not defined`` exception.
 * Fixed bug in jobs ``ScriptRuntime`` causing the parent artifact folder to be zipped and uploaded instead of the specified folder.
 * Fixed bug in ``ModelDeployment`` raising ``TypeError`` exception when updating an existing model deployment.
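
For the first 2.8.4 note above, a minimal sketch of the pandas-to-ADSDataset path, assuming the feature is exposed as an ADSDataset.from_dataframe constructor (the constructor name and import path are assumptions inferred from the note, not taken from this diff):

    # Sketch only: assumes ADS >= 2.8.4 and that the release-note feature
    # is exposed as ADSDataset.from_dataframe (name assumed, see above).
    import pandas as pd
    from ads.dataset.dataset import ADSDataset

    df = pd.DataFrame({"feature": [1, 2, 3], "target": [0, 1, 0]})
    ds = ADSDataset.from_dataframe(df)  # wrap the in-memory dataframe
    print(type(ds))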

docs/source/user_guide/apachespark/quickstart.rst (+4 -4)

@@ -4,13 +4,13 @@ Quick Start

 Data Flow is a hosted Apache Spark server. It is quick to start, and can scale to handle large datasets in parallel. ADS provides a convenient API for creating and maintaining workloads on Data Flow.

-Submit a Toy Python Script to DataFlow
-======================================
+Submit a Toy Python Script to Data Flow
+=======================================

 From a Python Environment
 -------------------------

-Submit a python script to DataFlow entirely from your python environment.
+Submit a python script to Data Flow entirely from your python environment.
 The following snippet uses a toy python script that prints "Hello World"
 followed by the spark version, 3.2.1.

@@ -111,7 +111,7 @@ Assuming you have the following two files written in your current directory as `
 Real Data Flow Example with Conda Environment
 =============================================

-From PySpark v3.0.0 and onwards, Data Flow allows a published conda environment as the `Spark runtime environment <https://spark.apache.org/docs/latest/api/python/user_guide/python_packaging.html#using-conda>`_ when built with `ADS`. Data Flow supports published conda environments only. Conda packs are tar'd conda environments. When you publish your own conda packs to object storage, ensure that the DataFlow Resource has access to read the object or bucket.
+From PySpark v3.0.0 and onwards, Data Flow allows a published conda environment as the `Spark runtime environment <https://spark.apache.org/docs/latest/api/python/user_guide/python_packaging.html#using-conda>`_ when built with `ADS`. Data Flow supports published conda environments only. Conda packs are tar'd conda environments. When you publish your own conda packs to object storage, ensure that the Data Flow Resource has access to read the object or bucket.

 Below is a more built-out example using conda packs:

 From a Python Environment
docs/source/user_guide/jobs/data_science_job.rst (-2)

@@ -8,8 +8,6 @@ Quick Start

 See :doc:`policies` and `About Data Science Policies <https://docs.oracle.com/en-us/iaas/data-science/using/policies.htm>`_.

-.. include:: ../jobs/toc_local.rst
-
 Define a Job
 ============


docs/source/user_guide/jobs/overview.rst (+1 -1)

@@ -28,4 +28,4 @@ Each model can write its results to the Logging service or Object Storage.
 Then you can run a final sequential job that uses the best model class, and trains the final model on the entire dataset.

 The following sections provides details on running workloads with OCI Data Science Jobs using ADS Jobs APIs.
-You can use similar APIs to :doc:`Run a OCI DataFlow Application <../apachespark/quickstart>`.
+You can use similar APIs to :doc:`Run a OCI Data Flow Application <../apachespark/quickstart>`.

docs/source/user_guide/jobs/yaml_schema.rst (+4 -4)

@@ -31,8 +31,8 @@ Following is the YAML schema for validating the YAML using `Cerberus <https://do
    :linenos:


-DataFlow
-========
+Data Flow
+=========

 .. raw:: html
    :file: ../../yaml_schema/jobs/dataFlow.html
@@ -126,8 +126,8 @@ Following is the YAML schema for validating the YAML using `Cerberus <https://do
    :linenos:


-DataFlow Runtime
---------------
+Data Flow Runtime
+----------------

 .. raw:: html
    :file: ../../yaml_schema/jobs/dataFlowRuntime.html

docs/source/yaml_schema/jobs/job.html (+1 -1)

@@ -89,7 +89,7 @@ <h4 id="job.spec"><code>job.spec</code> schema</h4>
   <code>dict</code>
 </td>
 <td>
-  See Data Science Job or DataFlow schema.
+  See Data Science Job or Data Flow schema.
 </td>
 </tr>


docs/source/yaml_schema/jobs/job.yaml (+1 -1)

@@ -15,7 +15,7 @@ spec:
       type: string
     infrastructure:
       type: dict
-      meta: See Data Science Job or DataFlow schema.
+      meta: See Data Science Job or Data Flow schema.
     name:
       required: false
       type: string
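
The yaml_schema.rst page edited above validates these documents with Cerberus. As an illustrative (not authoritative) sketch of how a schema fragment like this one is consumed, with a made-up sample document:

    # Illustrative only: validate a job YAML document against a Cerberus
    # schema shaped like the fragment above. Both literals are stand-ins.
    import yaml
    from cerberus import Validator

    schema = yaml.safe_load("""
    kind:
      type: string
    spec:
      type: dict
      schema:
        infrastructure:
          type: dict
          meta: See Data Science Job or Data Flow schema.
        name:
          required: false
          type: string
    """)

    document = yaml.safe_load("""
    kind: job
    spec:
      name: my-job
      infrastructure: {}
    """)

    v = Validator(schema)
    print(v.validate(document), v.errors)  # True, {} when the document conforms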
