
Commit 80667fb

Merge branch 'mindspore-lab:main' into main

2 parents 8c73ce4 + 020120f · commit 80667fb


56 files changed: +630 −349 lines

.github/workflows/docs.yml

+4 −0
@@ -21,6 +21,10 @@ jobs:
       run: |
         python -m pip install --upgrade pip
         pip install -r requirements/docs.txt
+      - name: Make the script executable
+        run: chmod +x docs/replace_path.sh
+      - name: Run the scripts
+        run: ./docs/replace_path.sh
       - name: Build site
         run: mkdocs build
       - name: Deploy to gh-pages
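The body of `docs/replace_path.sh` is not included in this diff; it only gains execute permission and a run step before `mkdocs build`. As a rough sketch (hypothetical, not the committed script), a docs path-replacement step of this kind typically rewrites repo-relative links so they resolve on the built site:

```shell
#!/usr/bin/env bash
# Hypothetical sketch only -- the committed docs/replace_path.sh is not shown in this diff.
# A script of this kind typically rewrites repo-relative links before mkdocs build.
set -euo pipefail

# Example (assumed target, for illustration): point links at configs/... to the GitHub tree.
find docs -name '*.md' -print0 | xargs -0 sed -i \
    's#(configs/#(https://github.com/mindspore-lab/mindocr/blob/main/configs/#g'
```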

README.md

+14 −14
@@ -274,7 +274,7 @@ You can do MindSpore Lite inference in MindOCR using **MindOCR models** or **Thi
 <summary>Key Information Extraction</summary>

 - [x] [LayoutXLM](configs/kie/vi_layoutxlm/README.md) (arXiv'2021)
-- [x] [LayoutLMv3](configs/kie/layoutlmv3/README.md) (arXiv'2022)
+- [x] [LayoutLMv3](configs/layout/layoutlmv3/README.md) (arXiv'2022)

 </details>
@@ -292,7 +292,7 @@ You can do MindSpore Lite inference in MindOCR using **MindOCR models** or **Thi

 </details>

-For the detailed performance of the trained models, please refer to [https://github.com/mindspore-lab/mindocr/blob/main/configs](./configs).
+For the detailed performance of the trained models, please refer to [configs](configs).

 For details of MindSpore Lite inference models support, please refer to [MindOCR Models Support List](docs/en/inference/mindocr_models_list.md) and [Third-party Models Support List](docs/en/inference/thirdparty_models_list.md) (PaddleOCR etc.).
@@ -306,45 +306,45 @@ MindOCR provides a [dataset conversion tool](https://github.com/mindspore-lab/mi
 - [Born-Digital Images](https://rrc.cvc.uab.es/?ch=1) [[download](docs/en/datasets/borndigital.md)]
 - [CASIA-10K](http://www.nlpr.ia.ac.cn/pal/CASIA10K.html) [[download](docs/en/datasets/casia10k.md)]
 - [CCPD](https://github.com/detectRecog/CCPD) [[download](docs/en/datasets/ccpd.md)]
-- [Chinese Text Recognition Benchmark](https://github.com/FudanVI/benchmarking-chinese-text-recognition) [[paper](https://arxiv.org/abs/2112.15093)] [[download](docs/en/datasets/chinese_text_recognition.md)]
+- [Chinese Text Recognition Benchmark](https://github.com/FudanVI/benchmarking-chinese-text-recognition) [[paper](https://arxiv.org/abs/2112.15093)] \[[download](docs/en/datasets/chinese_text_recognition.md)]
 - [COCO-Text](https://rrc.cvc.uab.es/?ch=5) [[download](docs/en/datasets/cocotext.md)]
 - [CTW](https://ctwdataset.github.io/) [[download](docs/en/datasets/ctw.md)]
-- [ICDAR2015](https://rrc.cvc.uab.es/?ch=4) [[paper](https://rrc.cvc.uab.es/files/short_rrc_2015.pdf)] [[download](docs/en/datasets/icdar2015.md)]
+- [ICDAR2015](https://rrc.cvc.uab.es/?ch=4) [[paper](https://rrc.cvc.uab.es/files/short_rrc_2015.pdf)] \[[download](docs/en/datasets/icdar2015.md)]
 - [ICDAR2019 ArT](https://rrc.cvc.uab.es/?ch=14) [[download](docs/en/datasets/ic19_art.md)]
 - [LSVT](https://rrc.cvc.uab.es/?ch=16) [[download](docs/en/datasets/lsvt.md)]
-- [MLT2017](https://rrc.cvc.uab.es/?ch=8) [[paper](https://ieeexplore.ieee.org/abstract/document/8270168)] [[download](docs/en/datasets/mlt2017.md)]
-- [MSRA-TD500](http://www.iapr-tc11.org/mediawiki/index.php/MSRA_Text_Detection_500_Database_(MSRA-TD500)) [[paper](https://ieeexplore.ieee.org/abstract/document/6247787)] [[download](docs/en/datasets/td500.md)]
+- [MLT2017](https://rrc.cvc.uab.es/?ch=8) [[paper](https://ieeexplore.ieee.org/abstract/document/8270168)] \[[download](docs/en/datasets/mlt2017.md)]
+- [MSRA-TD500](http://www.iapr-tc11.org/mediawiki/index.php/MSRA_Text_Detection_500_Database_(MSRA-TD500)) [[paper](https://ieeexplore.ieee.org/abstract/document/6247787)] \[[download](docs/en/datasets/td500.md)]
 - [MTWI-2018](https://tianchi.aliyun.com/competition/entrance/231651/introduction) [[download](docs/en/datasets/mtwi2018.md)]
 - [RCTW-17](https://rctw.vlrlab.net/) [[download](docs/en/datasets/rctw17.md)]
 - [ReCTS](https://rrc.cvc.uab.es/?ch=12) [[download](docs/en/datasets/rects.md)]
-- [SCUT-CTW1500](https://github.com/Yuliang-Liu/Curve-Text-Detector) [[paper](https://www.sciencedirect.com/science/article/pii/S0031320319300664)] [[download](docs/en/datasets/ctw1500.md)]
+- [SCUT-CTW1500](https://github.com/Yuliang-Liu/Curve-Text-Detector) [[paper](https://www.sciencedirect.com/science/article/pii/S0031320319300664)] \[[download](docs/en/datasets/ctw1500.md)]
 - [SROIE](https://rrc.cvc.uab.es/?ch=13) [[download](docs/en/datasets/sroie.md)]
 - [SVT](http://www.iapr-tc11.org/mediawiki/index.php/The_Street_View_Text_Dataset) [[download](docs/en/datasets/svt.md)]
-- [SynText150k](https://github.com/aim-uofa/AdelaiDet) [[paper](https://arxiv.org/abs/2002.10200)] [[download](docs/en/datasets/syntext150k.md)]
-- [SynthText](https://www.robots.ox.ac.uk/~vgg/data/scenetext/) [[paper](https://www.robots.ox.ac.uk/~vgg/publications/2016/Gupta16/)] [[download](docs/en/datasets/synthtext.md)]
+- [SynText150k](https://github.com/aim-uofa/AdelaiDet) [[paper](https://arxiv.org/abs/2002.10200)] \[[download](docs/en/datasets/syntext150k.md)]
+- [SynthText](https://www.robots.ox.ac.uk/~vgg/data/scenetext/) [[paper](https://www.robots.ox.ac.uk/~vgg/publications/2016/Gupta16/)] \[[download](docs/en/datasets/synthtext.md)]
 - [TextOCR](https://textvqa.org/textocr/) [[download](docs/en/datasets/textocr.md)]
-- [Total-Text](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Dataset) [[paper](https://arxiv.org/abs/1710.10400)] [[download](docs/en/datasets/totaltext.md)]
+- [Total-Text](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Dataset) [[paper](https://arxiv.org/abs/1710.10400)] \[[download](docs/en/datasets/totaltext.md)]

 </details>

 <details close markdown>
 <summary>Layout Analysis Datasets</summary>

-- [PublayNet](https://github.com/ibm-aur-nlp/PubLayNet) [[paper](https://arxiv.org/abs/1908.07836)] [[download](https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz)]
+- [PublayNet](https://github.com/ibm-aur-nlp/PubLayNet) [[paper](https://arxiv.org/abs/1908.07836)] \[[download](https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz)]

 </details>

 <details close markdown>
 <summary>Key Information Extraction Datasets</summary>

-- [XFUND](https://github.com/doc-analysis/XFUND) [[paper](https://aclanthology.org/2022.findings-acl.253/)] [[download](https://github.com/doc-analysis/XFUND/releases/tag/v1.0)]
+- [XFUND](https://github.com/doc-analysis/XFUND) [[paper](https://aclanthology.org/2022.findings-acl.253/)] \[[download](https://github.com/doc-analysis/XFUND/releases/tag/v1.0)]

 </details>

 <details close markdown>
 <summary>Table Recognition Datasets</summary>

-- [PubTabNet](https://github.com/ibm-aur-nlp/PubTabNet) [[paper](https://arxiv.org/pdf/1911.10683.pdf)] [[download](https://dax-cdn.cdn.appdomain.cloud/dax-pubtabnet/2.0.0/pubtabnet.tar.gz)]
+- [PubTabNet](https://github.com/ibm-aur-nlp/PubTabNet) [[paper](https://arxiv.org/pdf/1911.10683.pdf)] \[[download](https://dax-cdn.cdn.appdomain.cloud/dax-pubtabnet/2.0.0/pubtabnet.tar.gz)]
350350
@@ -362,7 +362,7 @@ Frequently asked questions about configuring environment and mindocr, please ref

 - 2023/04/01
    1. Add new trained models
-       - [LayoutLMv3](configs/kie/layoutlmv3/) for key information extraction
+       - [LayoutLMv3](configs/layout/layoutlmv3/) for key information extraction

 - 2024/03/20
    1. Add new trained models
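The recurring `\[[download](...)]` change in this file looks like a Markdown-parsing guard: when one bracketed link group is followed by another, the adjacent `] [` can be read as a reference-style link (`[text][label]`) by some renderers, such as the Python-Markdown engine behind MkDocs. A minimal illustration (the before/after is from the diff; the parsing note is an interpretation):

```markdown
[[paper](paper_url)] [[download](dl_url)]
<!-- "] [" between the two groups may parse as a [text][label] reference link -->

[[paper](paper_url)] \[[download](dl_url)]
<!-- escaping the opening bracket keeps it a literal "[" -->
```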

README_CN.md

+17 −17
@@ -30,7 +30,7 @@

 <!--start-->
 ## Introduction
-MindOCR is an open-source OCR toolbox developed on the [MindSpore](https://www.mindspore.cn/en) framework. It integrates mainstream text detection and recognition algorithms and models, and provides easy-to-use training and inference tools, helping users quickly develop and apply industry-SoTA text detection and recognition models such as DBNet/DBNet++ and CRNN/SVTR for document image understanding.
+MindOCR is an open-source OCR toolbox developed on the [MindSpore](https://www.mindspore.cn/) framework. It integrates mainstream text detection and recognition algorithms and models, and provides easy-to-use training and inference tools, helping users quickly develop and apply industry-SoTA text detection and recognition models such as DBNet/DBNet++ and CRNN/SVTR for document image understanding.


 <details open markdown>
@@ -219,9 +219,9 @@ python tools/infer/text/predict_system.py --image_dir {path_to_img or dir_to_img

 ### 3. Offline Model Inference

-You can run MindSpore Lite inference in MindOCR on**MindOCR native models** or **third-party models** (such as PaddleOCR and MMOCR). For details, see the [offline model inference tutorial](docs/zh/inference/inference_tutorial.md).
+You can run MindSpore Lite inference in MindOCR on **MindOCR native models** or **third-party models** (such as PaddleOCR and MMOCR). For details, see the [offline model inference tutorial](docs/zh/inference/inference_tutorial.md).

-## Tutorials
+## <span id="使用教程">Tutorials</span>

 - Datasets
   - [Dataset preparation](docs/zh/datasets/converters.md)
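Wrapping the heading in an explicit `<span id=...>` pins its anchor, presumably so in-page links keep resolving regardless of how the site generator slugifies a non-ASCII heading; that reading of the change is illustrated below:

```markdown
## <span id="使用教程">Tutorials</span>

<!-- elsewhere on the page, this link now resolves to the explicit id -->
[Jump to the tutorials section](#使用教程)
```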
@@ -275,7 +275,7 @@ python tools/infer/text/predict_system.py --image_dir {path_to_img or dir_to_img
 <summary>Key Information Extraction</summary>

 - [x] [LayoutXLM](configs/kie/vi_layoutxlm/README_CN.md) (arXiv'2021)
-- [x] [LayoutLMv3](configs/kie/layoutlmv3/README_CN.md) (arXiv'2022)
+- [x] [LayoutLMv3](configs/layout/layoutlmv3/README_CN.md) (arXiv'2022)

 </details>
@@ -294,7 +294,7 @@ python tools/infer/text/predict_system.py --image_dir {path_to_img or dir_to_img
 </details>

-For the detailed training methods and results of the above models, see the readme in each model subdirectory under [configs](https://github.com/mindspore-lab/mindocr/blob/main/configs).
+For the detailed training methods and results of the above models, see the readme in each model subdirectory under [configs](configs).

 For the list of models supported by [MindSpore Lite](https://www.mindspore.cn/lite) inference,
 see the [MindOCR native model inference support list](docs/zh/inference/mindocr_models_list.md) and the [third-party model inference support list](docs/zh/inference/thirdparty_models_list.md) (e.g. PaddleOCR).
@@ -310,45 +310,45 @@ MindOCR provides a [dataset conversion tool](https://github.com/mindspore-lab/mind
 - [Born-Digital Images](https://rrc.cvc.uab.es/?ch=1) [[download](docs/zh/datasets/borndigital.md)]
 - [CASIA-10K](http://www.nlpr.ia.ac.cn/pal/CASIA10K.html) [[download](docs/zh/datasets/casia10k.md)]
 - [CCPD](https://github.com/detectRecog/CCPD) [[download](docs/zh/datasets/ccpd.md)]
-- [Chinese Text Recognition Benchmark](https://github.com/FudanVI/benchmarking-chinese-text-recognition) [[paper](https://arxiv.org/abs/2112.15093)] [[download](docs/zh/datasets/chinese_text_recognition.md)]
+- [Chinese Text Recognition Benchmark](https://github.com/FudanVI/benchmarking-chinese-text-recognition) [[paper](https://arxiv.org/abs/2112.15093)] \[[download](docs/zh/datasets/chinese_text_recognition.md)]
 - [COCO-Text](https://rrc.cvc.uab.es/?ch=5) [[download](docs/zh/datasets/cocotext.md)]
 - [CTW](https://ctwdataset.github.io/) [[download](docs/zh/datasets/ctw.md)]
-- [ICDAR2015](https://rrc.cvc.uab.es/?ch=4) [[paper](https://rrc.cvc.uab.es/files/short_rrc_2015.pdf)] [[download](docs/zh/datasets/icdar2015.md)]
+- [ICDAR2015](https://rrc.cvc.uab.es/?ch=4) [[paper](https://rrc.cvc.uab.es/files/short_rrc_2015.pdf)] \[[download](docs/zh/datasets/icdar2015.md)]
 - [ICDAR2019 ArT](https://rrc.cvc.uab.es/?ch=14) [[download](docs/zh/datasets/ic19_art.md)]
 - [LSVT](https://rrc.cvc.uab.es/?ch=16) [[download](docs/zh/datasets/lsvt.md)]
-- [MLT2017](https://rrc.cvc.uab.es/?ch=8) [[paper](https://ieeexplore.ieee.org/abstract/document/8270168)] [[download](docs/zh/datasets/mlt2017.md)]
-- [MSRA-TD500](http://www.iapr-tc11.org/mediawiki/index.php/MSRA_Text_Detection_500_Database_(MSRA-TD500)) [[paper](https://ieeexplore.ieee.org/abstract/document/6247787)] [[download](docs/zh/datasets/td500.md)]
+- [MLT2017](https://rrc.cvc.uab.es/?ch=8) [[paper](https://ieeexplore.ieee.org/abstract/document/8270168)] \[[download](docs/zh/datasets/mlt2017.md)]
+- [MSRA-TD500](http://www.iapr-tc11.org/mediawiki/index.php/MSRA_Text_Detection_500_Database_(MSRA-TD500)) [[paper](https://ieeexplore.ieee.org/abstract/document/6247787)] \[[download](docs/zh/datasets/td500.md)]
 - [MTWI-2018](https://tianchi.aliyun.com/competition/entrance/231651/introduction) [[download](docs/zh/datasets/mtwi2018.md)]
 - [RCTW-17](https://rctw.vlrlab.net/) [[download](docs/zh/datasets/rctw17.md)]
 - [ReCTS](https://rrc.cvc.uab.es/?ch=12) [[download](docs/zh/datasets/rects.md)]
-- [SCUT-CTW1500](https://github.com/Yuliang-Liu/Curve-Text-Detector) [[paper](https://www.sciencedirect.com/science/article/pii/S0031320319300664)] [[download](docs/zh/datasets/ctw1500.md)]
+- [SCUT-CTW1500](https://github.com/Yuliang-Liu/Curve-Text-Detector) [[paper](https://www.sciencedirect.com/science/article/pii/S0031320319300664)] \[[download](docs/zh/datasets/ctw1500.md)]
 - [SROIE](https://rrc.cvc.uab.es/?ch=13) [[download](docs/zh/datasets/sroie.md)]
 - [SVT](http://www.iapr-tc11.org/mediawiki/index.php/The_Street_View_Text_Dataset) [[download](docs/zh/datasets/svt.md)]
-- [SynText150k](https://github.com/aim-uofa/AdelaiDet) [[paper](https://arxiv.org/abs/2002.10200)] [[download](docs/zh/datasets/syntext150k.md)]
-- [SynthText](https://www.robots.ox.ac.uk/~vgg/data/scenetext/) [[paper](https://www.robots.ox.ac.uk/~vgg/publications/2016/Gupta16/)] [[download](docs/zh/datasets/synthtext.md)]
+- [SynText150k](https://github.com/aim-uofa/AdelaiDet) [[paper](https://arxiv.org/abs/2002.10200)] \[[download](docs/zh/datasets/syntext150k.md)]
+- [SynthText](https://www.robots.ox.ac.uk/~vgg/data/scenetext/) [[paper](https://www.robots.ox.ac.uk/~vgg/publications/2016/Gupta16/)] \[[download](docs/zh/datasets/synthtext.md)]
 - [TextOCR](https://textvqa.org/textocr/) [[download](docs/zh/datasets/textocr.md)]
-- [Total-Text](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Dataset) [[paper](https://arxiv.org/abs/1710.10400)] [[download](docs/zh/datasets/totaltext.md)]
+- [Total-Text](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Dataset) [[paper](https://arxiv.org/abs/1710.10400)] \[[download](docs/zh/datasets/totaltext.md)]

 </details>

 <details close markdown>
 <summary>Layout Analysis Datasets</summary>

-- [PublayNet](https://github.com/ibm-aur-nlp/PubLayNet) [[paper](https://arxiv.org/abs/1908.07836)] [[download](https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz)]
+- [PublayNet](https://github.com/ibm-aur-nlp/PubLayNet) [[paper](https://arxiv.org/abs/1908.07836)] \[[download](https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz)]

 </details>

 <details close markdown>
 <summary>Key Information Extraction Datasets</summary>

-- [XFUND](https://github.com/doc-analysis/XFUND) [[paper](https://aclanthology.org/2022.findings-acl.253/)] [[download](https://github.com/doc-analysis/XFUND/releases/tag/v1.0)]
+- [XFUND](https://github.com/doc-analysis/XFUND) [[paper](https://aclanthology.org/2022.findings-acl.253/)] \[[download](https://github.com/doc-analysis/XFUND/releases/tag/v1.0)]

 </details>

 <details close markdown>
 <summary>Table Recognition Datasets</summary>

-- [PubTabNet](https://github.com/ibm-aur-nlp/PubTabNet) [[paper](https://arxiv.org/pdf/1911.10683.pdf)] [[download](https://dax-cdn.cdn.appdomain.cloud/dax-pubtabnet/2.0.0/pubtabnet.tar.gz)]
+- [PubTabNet](https://github.com/ibm-aur-nlp/PubTabNet) [[paper](https://arxiv.org/pdf/1911.10683.pdf)] \[[download](https://dax-cdn.cdn.appdomain.cloud/dax-pubtabnet/2.0.0/pubtabnet.tar.gz)]

 </details>
@@ -365,7 +365,7 @@ MindOCR provides a [dataset conversion tool](https://github.com/mindspore-lab/mind
 - 2023/04/01
    1. Added new models
-       - [LayoutLMv3](configs/kie/layoutlmv3/) for key information extraction
+       - [LayoutLMv3](configs/layout/layoutlmv3/) for key information extraction

 - 2024/03/20
    1. Added new models

configs/det/dbnet/README.md

+9 −2
@@ -282,9 +282,16 @@ python tools/train.py -c=configs/det/dbnet/db_r50_icdar15.yaml
 Please set `distribute` in the yaml config file to True.

 ```shell
-# n is the number of NPUs
-mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
+# worker_num is the total number of worker processes participating in the distributed task.
+# local_worker_num is the number of worker processes launched on the current node.
+# The number of processes equals the number of NPUs used for training. For single-machine multi-card training, worker_num and local_worker_num must be the same.
+msrun --worker_num=2 --local_worker_num=2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
+
+# Based on verification, binding cores usually accelerates performance. Please configure the parameters and run:
+export MS_ENABLE_NUMA=True
+msrun --bind_core=True --worker_num=2 --local_worker_num=2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
 ```
+**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/en/r2.3.1/parallel/msrun_launcher.html).

 The training result (including checkpoints, per-epoch performance and curves) will be saved in the directory parsed by the arg `ckpt_save_dir` in the yaml config file. The default directory is `./tmp_det`.
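The committed example above is single-machine, so `worker_num` equals `local_worker_num`. As a hedged sketch of how the two flags diverge across machines (flag names follow the msrun launcher documentation linked in the note above; the address, port, and node ranks are illustrative placeholders):

```shell
# Hypothetical 2-node x 4-NPU layout: 8 workers total, 4 launched per node.
# On node 0 (acting as master; address and port are placeholders):
msrun --worker_num=8 --local_worker_num=4 --master_addr=192.168.0.1 --master_port=8118 \
      --node_rank=0 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
# On node 1:
msrun --worker_num=8 --local_worker_num=4 --master_addr=192.168.0.1 --master_port=8118 \
      --node_rank=1 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
```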

configs/det/dbnet/README_CN.md

+9 −2
@@ -263,9 +263,16 @@ python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
 Please make sure the `distribute` parameter in the yaml file is set to True.

 ```shell
-# n is the number of NPUs
-mpirun --allow-run-as-root -n 2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
+# worker_num is the total number of distributed processes.
+# local_worker_num is the number of processes on the current node.
+# The number of processes equals the number of NPUs used for training; for single-machine multi-card training, worker_num and local_worker_num must be the same.
+msrun --worker_num=2 --local_worker_num=2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
+
+# Verified: core binding usually accelerates performance. Please configure the parameters and run:
+export MS_ENABLE_NUMA=True
+msrun --bind_core=True --worker_num=2 --local_worker_num=2 python tools/train.py --config configs/det/dbnet/db_r50_icdar15.yaml
 ```
+**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

 The training results (including checkpoints, per-epoch performance, and curves) will be saved in the directory set by the `ckpt_save_dir` parameter in the yaml config file, `./tmp_det` by default.
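In both variants above, `MS_ENABLE_NUMA=True` together with `msrun --bind_core=True` pins each worker to CPU cores local to its NPU's NUMA node. A quick way to inspect the host's NUMA layout before enabling binding (standard Linux tooling, independent of this commit):

```shell
# Standard Linux tools, not part of this commit:
lscpu | grep -i numa     # number of NUMA nodes and the CPU ranges on each
numactl --hardware       # per-node CPU and memory layout (requires the numactl package)
```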

configs/det/dbnet/README_CN_PP-OCRv3.md

+10 −1
@@ -330,8 +330,17 @@ model:

 ```shell
 # Distributed training on multiple Ascend devices
-mpirun --allow-run-as-root -n 4 python tools/train.py --config configs/det/dbnet/db_mobilenetv3_ppocrv3.yaml
+# worker_num is the total number of distributed processes.
+# local_worker_num is the number of processes on the current node.
+# The number of processes equals the number of NPUs used for training; for single-machine multi-card training, worker_num and local_worker_num must be the same.
+msrun --worker_num=4 --local_worker_num=4 python tools/train.py --config configs/det/dbnet/db_mobilenetv3_ppocrv3.yaml
+
+# Verified: core binding usually accelerates performance. Please configure the parameters and run:
+export MS_ENABLE_NUMA=True
+msrun --bind_core=True --worker_num=4 --local_worker_num=4 python tools/train.py --config configs/det/dbnet/db_mobilenetv3_ppocrv3.yaml
 ```
+**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).
+

 * Single-card training