Commit 192ca29 ("PR Feedback"), 1 parent: d375fc4

5 files changed (+51, -51 lines)

CHANGELOG.md (2 additions, 2 deletions)

@@ -144,8 +144,8 @@ CHANGED
[@berndverst](https://github.com/berndverst)
- Http and grpc protocols and their secure variants are stripped from the host name parameter if
provided. Secure mode is enabled if the protocol provided is https or grpcs
-([#38](https://github.com/microsoft/durabletask-python/pull/38) - by
-[@berndverst)(https://github.com/berndverst)
+([#38](https://github.com/microsoft/durabletask-python/pull/38)) - by
+[@berndverst](https://github.com/berndverst)
- Improve ProtoGen by downloading proto file directly instead of using submodule
([#39](https://github.com/microsoft/durabletask-python/pull/39) - by
[@berndverst](https://github.com/berndverst)
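The changelog entry above describes stripping http/grpc scheme prefixes from the host name and enabling secure mode for the https/grpcs variants. As a hedged illustration only (this is not the SDK's actual code; the function name is hypothetical), that behaviour could look like:

```python
def split_scheme(host: str) -> tuple[str, bool]:
    """Strip an http/grpc scheme prefix from a host string and report
    whether a secure variant (https/grpcs) was requested.
    Illustrative sketch; not the durabletask-python implementation."""
    for scheme in ("https://", "grpcs://"):
        if host.startswith(scheme):
            return host[len(scheme):], True
    for scheme in ("http://", "grpc://"):
        if host.startswith(scheme):
            return host[len(scheme):], False
    # No recognised scheme: return the host unchanged, insecure by default.
    return host, False
```

For example, `split_scheme("grpcs://myhub.example.com:443")` returns the bare host with secure mode enabled.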

docs/features.md (2 additions, 2 deletions)

@@ -175,7 +175,7 @@ Entities can perform actions such signaling other entities or starting new orche

##### Locking and concurrency

-Because entites can be accessed from multiple running orchestrations at the same time, entities may
+Because entities can be accessed from multiple running orchestrations at the same time, entities may
also be locked by a single orchestrator ensuring exclusive access during the duration of the lock
(also known as a critical section). Think semaphores:

@@ -191,7 +191,7 @@ details and advanced usage, see the examples and API documentation.

##### Deleting entities

-Entites are represented as orchestration instances in your Task Hub, and their state is persisted in
+Entities are represented as orchestration instances in your Task Hub, and their state is persisted in
the Task Hub as well. When using the Durable Task Scheduler as your durability provider, the backend
will automatically clean up entities when their state is empty, this is effectively the "delete"
operation to save space in the Task Hub. In the DTS Dashboard, "delete entity" simply signals the
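The "think semaphores" analogy in the locking hunk above can be pictured with a plain in-process lock. This is only an analogy, not the entity API: the class and method names below are invented for illustration.

```python
import threading

class CounterEntity:
    """Toy stand-in for a durable entity. The lock plays the role of the
    orchestrator-held critical section: while one caller holds it, no
    other caller can observe or mutate the entity's state."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self.value = 0

    def add(self, amount: int) -> int:
        # Exclusive access for the duration of the operation.
        with self._lock:
            self.value += amount
            return self.value
```

With many concurrent callers, the lock guarantees every increment is applied, mirroring the exclusive access a critical section gives a single orchestrator.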

docs/getting-started.md (1 addition, 1 deletion)

@@ -6,7 +6,7 @@
example](../examples/dts/sub-orchestrations-with-fan-out-fan-in/README.md)
for detailed instructions on running the order processing example.

-### Explore Other Samples
+## Explore Other Samples

- Visit the [examples](../examples/dts/) directory to find a variety of sample orchestrations and
learn how to run them.

docs/supported-patterns.md (7 additions, 7 deletions)

@@ -22,7 +22,7 @@ def sequence(ctx: task.OrchestrationContext, _):

See the full [function chaining example](../examples/activity_sequence.py).

-### Fan-out/fan-in
+## Fan-out/fan-in

An orchestration can fan-out a dynamic number of function calls in parallel and then fan-in the
results using the following syntax:

@@ -51,9 +51,9 @@ def orchestrator(ctx: task.OrchestrationContext, _):

See the full [fan-out sample](../examples/fanout_fanin.py).

-### Human interaction and durable timers
+## Human interaction and durable timers

-An orchestration can wait for a user-defined event, such as a human approval event, before proceding
+An orchestration can wait for a user-defined event, such as a human approval event, before proceeding
to the next step. In addition, the orchestration can create a timer with an arbitrary duration that
triggers some alternate action if the external event hasn't been received:

@@ -87,7 +87,7 @@ automatically by the SDK.

See the full [human interaction sample](../examples/human_interaction.py).

-### Version-aware orchestrator
+## Version-aware orchestrator

When utilizing orchestration versioning, it is possible for an orchestrator to remain
backwards-compatible with orchestrations created using the previously defined version. For instance,

@@ -104,7 +104,7 @@ def my_orchestrator(ctx: task.OrchestrationContext, order: Order):
Assume that any orchestrations created using this orchestrator were versioned 1.0.0. If the
signature of this method needs to be updated to call activity_three between the calls to
activity_one and activity_two, ordinarily this would break any running orchestrations at the time of
-deployment. However, the following orchestrator will be able to process both orchestraions versioned
+deployment. However, the following orchestrator will be able to process both orchestrations versioned
1.0.0 and 2.0.0 after the change:

```python

@@ -132,7 +132,7 @@ def my_orchestrator(ctx: task.OrchestrationContext, order: Order):

See the full [version-aware orchestrator sample](../examples/version_aware_orchestrator.py)

-### Work item filtering
+## Work item filtering

When running multiple workers against the same task hub, each worker can declare which work items it
handles. The backend then dispatches only the matching orchestrations, activities, and entities,

@@ -173,7 +173,7 @@ w.use_work_item_filters(WorkItemFilters(

See the full [work item filtering sample](../examples/work_item_filtering.py).

-### Large payload externalization
+## Large payload externalization

When orchestrations work with very large inputs, outputs, or event data, the payloads can exceed
gRPC message size limits. The large payload externalization pattern transparently offloads these
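The fan-out/fan-in section touched above describes dispatching a dynamic number of parallel calls and then aggregating their results. In the SDK this is expressed with `ctx.call_activity` and `task.when_all`, as the linked sample shows; outside a task hub, the same shape can be sketched with standard-library concurrency. All names below are illustrative, not SDK APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def process_item(item: int) -> int:
    # Stand-in for an activity function invoked once per work item.
    return item * 2

def fan_out_fan_in(items: list[int]) -> int:
    # Fan out: run one "activity" per item in parallel.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(process_item, items))
    # Fan in: aggregate the individual results in a single step.
    return sum(results)
```

The durable version has the same two phases, but the orchestrator yields the pending tasks so the framework can checkpoint and replay them deterministically.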

examples/sub-orchestrations-with-fan-out-fan-in/README.md (39 additions, 39 deletions)

@@ -35,21 +35,21 @@ easy-to-use Docker container.

2. Pull the Docker Image for the Emulator:

-```bash
-docker pull mcr.microsoft.com/dts/dts-emulator:v0.0.6
-```
+```bash
+docker pull mcr.microsoft.com/dts/dts-emulator:v0.0.6
+```

-1. Run the Emulator: Wait a few seconds for the container to be ready.
+3. Run the Emulator: Wait a few seconds for the container to be ready.

-```bash
-docker run --name dtsemulator -d -p 8080:8080 mcr.microsoft.com/dts/dts-emulator:v0.0.6
-```
+```bash
+docker run --name dtsemulator -d -p 8080:8080 mcr.microsoft.com/dts/dts-emulator:v0.0.6
+```

-1. Install the Required Packages:
+4. Install the Required Packages:

-```bash
-pip install -r requirements.txt
-```
+```bash
+pip install -r requirements.txt
+```

Note: The example code has been updated to use the default emulator settings automatically
(endpoint: [http://localhost:8080](http://localhost:8080), taskhub: default). You don't need to set

@@ -62,43 +62,43 @@ Azure CLI:

1. Create a Scheduler:

-```bash
-az durabletask scheduler create --resource-group <testrg> \
-  --name <testscheduler> \
-  --location <eastus> --ip-allowlist "[0.0.0.0/0]" --sku-capacity 1 \
-  --sku-name "Dedicated" --tags "{'myattribute':'myvalue'}"
-```
+```bash
+az durabletask scheduler create --resource-group <testrg> \
+  --name <testscheduler> \
+  --location <eastus> --ip-allowlist "[0.0.0.0/0]" --sku-capacity 1 \
+  --sku-name "Dedicated" --tags "{'myattribute':'myvalue'}"
+```

-1. Create Your Taskhub:
+2. Create Your Taskhub:

-```bash
-az durabletask taskhub create --resource-group <testrg> \
-  --scheduler-name <testscheduler> --name <testtaskhub>
-```
+```bash
+az durabletask taskhub create --resource-group <testrg> \
+  --scheduler-name <testscheduler> --name <testtaskhub>
+```

-1. Retrieve the Endpoint for the Scheduler: Locate the taskhub in the Azure portal to find the
+3. Retrieve the Endpoint for the Scheduler: Locate the taskhub in the Azure portal to find the
endpoint.

-2. Set the Environment Variables:
-Bash:
+4. Set the Environment Variables:
+Bash:

-```bash
-export TASKHUB=<taskhubname>
-export ENDPOINT=<taskhubEndpoint>
-```
+```bash
+export TASKHUB=<taskhubname>
+export ENDPOINT=<taskhubEndpoint>
+```

-Powershell:
+PowerShell:

-```powershell
-$env:TASKHUB = "<taskhubname>"
-$env:ENDPOINT = "<taskhubEndpoint>"
-```
+```powershell
+$env:TASKHUB = "<taskhubname>"
+$env:ENDPOINT = "<taskhubEndpoint>"
+```

-1. Install the Required Packages:
+5. Install the Required Packages:

-```bash
-pip install -r requirements.txt
-```
+```bash
+pip install -r requirements.txt
+```

### Running the Python Components

@@ -123,7 +123,7 @@ You should start seeing logs for processing orders in both shell outputs.

To access the Durable Task Scheduler Dashboard, follow these steps:

-- **Using the Emulator**: By default, the dashboard runs on portal 8082.
+- **Using the Emulator**: By default, the dashboard runs on port 8082.
Navigate to [http://localhost:8082](http://localhost:8082) and click on the default task hub.

- **Using a Deployed Scheduler**: Navigate to the Scheduler resource. Then, go to the Task Hub
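The `TASKHUB`/`ENDPOINT` environment variables set in the README steps above can be consumed from Python with emulator-friendly fallbacks. A minimal sketch (the helper name is hypothetical; the default values are the emulator settings the README states):

```python
import os

def scheduler_settings() -> tuple[str, str]:
    """Resolve ENDPOINT/TASKHUB, falling back to the emulator defaults
    noted in the README (endpoint http://localhost:8080, taskhub 'default')."""
    endpoint = os.environ.get("ENDPOINT", "http://localhost:8080")
    taskhub = os.environ.get("TASKHUB", "default")
    return endpoint, taskhub
```

When neither variable is set, the example runs against the local emulator; when both are exported, the same code targets a deployed scheduler.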
