Commit 9810487 (parent: db48b10)

35 files changed: +1615 -166 lines

.github/ISSUE_TEMPLATE/1_TRANSLATE_REQUEST.md (+1 -1)

@@ -15,4 +15,4 @@ _(This is not a schedule you must keep - if the schedule is too late
 ## Related issues
 _Refer to each version's main issue to see the current translation requests / progress._ <br />
 _(If there is nothing special, you do not need to change this.)_
-* Related issue: #387 (v1.10)
+* Related issue: #445 (v1.11)

README.md (+11 -11)

@@ -4,7 +4,7 @@
 This is a repository for the Korean translation of the tutorials provided by PyTorch.\
 The translated results are available at [https://tutorials.pytorch.kr](https://tutorials.pytorch.kr). (Updated **irregularly** as translation progresses.)\
-For translation / change issues of the current version, see [issue #387](https://github.com/PyTorchKorea/tutorials-kr/issues/387).
+For translation / change issues of the current version, see [issue #445](https://github.com/PyTorchKorea/tutorials-kr/issues/445).

 ## Contributing

@@ -22,20 +22,12 @@ This is a repository for the Korean translation of the tutorials provided by PyT
 ## Source

-Translation of the PyTorch v1.10.1 tutorials (based on [pytorch/tutorials@444fbd1](https://github.com/pytorch/tutorials/commit/444fbd16f2ddf9967baf8b06e83867a141b071c2)) is currently in progress.
+Translation of the PyTorch v1.11 tutorials (based on [pytorch/tutorials@6e21cf2](https://github.com/pytorch/tutorials/commit/6e21cf2e81beb8b4dddc9713be0c8746087fd59e)) is currently in progress.

 For the latest (official, English) tutorials, see the [PyTorch tutorials site](https://pytorch.org/tutorials) and the [PyTorch tutorials repository](https://github.com/pytorch/tutorials).

 ## Past versions

-### Viewing tutorials for PyTorch v1.0 and later
-
-Translations since v1.0 are not managed as separate repositories. See [this repository's Releases](https://github.com/PyTorchKorea/tutorials-kr/releases). \
-`version-base` (e.g. `1.9-base`) is the release made when work on that version started, and `version-latest` (e.g. `1.9-latest`) is the last release for that version.
-
-Download the documents of a release and build them to view that version's documentation. \
-For build instructions, see [`2-5. Checking the results (on my computer)` in the contributing guide](https://github.com/PyTorchKorea/tutorials-kr/blob/master/CONTRIBUTING.md#2-5-내-컴퓨터에서-결과-확인하기).
-
 ### Viewing tutorials from before PyTorch v1.0 (v0.3 & v0.4)

 You can find translations of past versions of the tutorials at the links below. These are no longer being translated.
@@ -45,6 +37,14 @@ Translations since v1.0 are not managed as separate repositories. See [this repo
 | 0.4.1 | [PyTorch-tutorials-kr-0.4](https://9bow.github.io/PyTorch-tutorials-kr-0.4) | [GitHub repository](https://github.com/PyTorchKorea/tutorials-kr-0.4) |
 | 0.3.1 | [PyTorch-tutorials-kr-0.3.1](https://9bow.github.io/PyTorch-tutorials-kr-0.3.1) | [GitHub repository](https://github.com/PyTorchKorea/tutorials-kr-0.3.1) |

+### Viewing tutorials for PyTorch v1.0 and later
+
+Translations since v1.0 are not managed as separate repositories. See [this repository's Releases](https://github.com/PyTorchKorea/tutorials-kr/releases). \
+`version-base` (e.g. `1.9-base`) is the release made when work on that version started, and `version-latest` (e.g. `1.9-latest`) is the last release for that version.
+
+Download the documents of a release and build them to view that version's documentation. \
+For build instructions, see [`2-5. Checking the results (on my computer)` in the contributing guide](https://github.com/PyTorchKorea/tutorials-kr/blob/master/CONTRIBUTING.md#2-5-내-컴퓨터에서-결과-확인하기).
+
 ---
-This is a project to translate [pytorch/tutorials@444fbd1](https://github.com/pytorch/tutorials/commit/444fbd16f2ddf9967baf8b06e83867a141b071c2) into Korean.
+This is a project to translate [pytorch/tutorials@6e21cf2](https://github.com/pytorch/tutorials/commit/6e21cf2e81beb8b4dddc9713be0c8746087fd59e) into Korean.
 For the latest version, please visit the [official PyTorch tutorials repo](https://github.com/pytorch/tutorials).
advanced_source/cpp_extension.rst (+23 -2)

@@ -206,6 +206,27 @@ into C++. Our primary datatype for all computations will be
 also that we can include ``<iostream>`` or *any other C or C++ header* -- we have
 the full power of C++11 at our disposal.

+Note that the CUDA 11.5 ``nvcc`` compiler hits an internal compiler error while parsing ``torch/extension.h`` on Windows.
+To work around the issue, move the Python binding logic to a pure C++ file.
+Example use:
+
+.. code-block:: cpp
+
+   #include <ATen/ATen.h>
+   at::Tensor SigmoidAlphaBlendForwardCuda(....)
+
+Instead of:
+
+.. code-block:: cpp
+
+   #include <torch/extension.h>
+   torch::Tensor SigmoidAlphaBlendForwardCuda(...)
+
+The currently open issue for the ``nvcc`` bug is `here
+<https://github.com/pytorch/pytorch/issues/69460>`_.
+A complete workaround code example is `here
+<https://github.com/facebookresearch/pytorch3d/commit/cb170ac024a949f1f9614ffe6af1c38d972f7d48>`_.
+
 Forward Pass
 ************

@@ -497,7 +518,7 @@ duration::
     (new_h.sum() + new_C.sum()).backward()
     backward += time.time() - start

-print('Forward: {:.3f} us | Backward {:.3f} us'.format(forward * 1e6/1e5, backward * 1e6/1e5))
+print('Forward: {:.3f} s | Backward {:.3f} s'.format(forward, backward))

 If we run this code with the original LLTM we wrote in pure Python at the start
 of this post, we get the following numbers (on my machine)::

@@ -667,7 +688,7 @@ We'll start with the C++ file, which we'll call ``lltm_cuda.cpp``, for example:
 // C++ interface

-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
+#define CHECK_CUDA(x) TORCH_CHECK(x.device().is_cuda(), #x " must be a CUDA tensor")
 #define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
 #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
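The corrected print call above reports accumulated wall-clock time in plain seconds rather than a mislabeled microsecond figure. The accumulate-and-report pattern can be sketched in plain Python; `forward_step` and `backward_step` here are stand-in workloads, not the real LLTM forward/backward calls:

```python
import time

def forward_step():
    # stand-in workload for the LLTM forward pass
    return sum(i * i for i in range(1000))

def backward_step():
    # stand-in workload for the backward pass
    return sum(i for i in range(1000))

forward = 0.0
backward = 0.0
for _ in range(100):
    start = time.time()
    forward_step()
    forward += time.time() - start  # accumulate elapsed seconds

    start = time.time()
    backward_step()
    backward += time.time() - start

# report the accumulated times in seconds, as in the fixed print call
print('Forward: {:.3f} s | Backward {:.3f} s'.format(forward, backward))
```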

advanced_source/ddp_pipeline.py (+8 -7)

@@ -4,7 +4,7 @@
 **Author**: `Pritam Damania <https://github.com/pritamdamania87>`_
 **Translator**: `백선희 <https://github.com/spongebob03>`_
-
+
 This tutorial shows how to train a huge Transformer model across multiple GPUs using
 `Distributed Data Parallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`__ and
 `Pipeline Parallelism <https://pytorch.org/docs/stable/pipeline.html>`__.

@@ -26,7 +26,7 @@
 #

 ######################################################################
-# The ``PositionalEncoding`` module injects some information about the
+# The ``PositionalEncoding`` module injects some information about the
 # relative or absolute position of the tokens in the sequence.
 # The positional encodings have the same dimension as the embeddings,
 # so the two can be summed. Here, ``sine`` and ``cosine`` functions of different frequencies

@@ -61,10 +61,10 @@ def forward(self, x):

 ######################################################################
-# In this tutorial, we split the Transformer model across two GPUs and
+# In this tutorial, we split the Transformer model across two GPUs and
 # train it with pipeline parallelism. In addition, we use
 # `Distributed Data Parallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`__
-# to train two replicas of this pipeline. One process
+# to train two replicas of this pipeline. One process
 # drives a pipe across GPUs 0 and 1, and the other process drives a pipe on GPUs 2 and 3. These two
 # processes then train the two replicas (replica) with distributed data parallelism.
 # The model is the one from the `Sequence-to-Sequence Modeling with nn.Transformer and TorchText

@@ -138,9 +138,10 @@ def run_worker(rank, world_size):
 # ---------------------------
 #

-
 ######################################################################
 # The training process uses the Wikitext-2 dataset from ``torchtext``.
+# Before accessing torchtext datasets, please install torchdata by following
+# the instructions at https://github.com/pytorch/data.
 # The vocab object is built from the training dataset and is used to numericalize tokens into tensors.
 # Starting from sequential data, the ``batchify()`` function arranges the dataset into columns,
 # trimming off any tokens remaining after the data has been divided into batches of size ``batch_size``.
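The `batchify()` behavior described above (arrange the data into columns, drop the leftover tokens) can be sketched with plain Python lists; the toy token ids and batch size are illustrative stand-ins, not the tutorial's Wikitext-2 tensors:

```python
def batchify(tokens, bsz):
    """Arrange tokens into bsz columns, dropping leftover tokens
    (a list-based sketch of the tutorial's batchify)."""
    nbatch = len(tokens) // bsz
    tokens = tokens[: nbatch * bsz]  # trim off tokens that do not fit
    # column j holds a contiguous run of nbatch tokens
    columns = [tokens[j * nbatch:(j + 1) * nbatch] for j in range(bsz)]
    # transpose into rows of width bsz, i.e. the (nbatch, bsz) layout
    return [list(row) for row in zip(*columns)]

data = list(range(10))       # 10 toy token ids
batches = batchify(data, 3)  # 3 columns; token 9 is dropped
```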
@@ -223,7 +224,7 @@ def batchify(data, bsz, rank, world_size, is_train=False):
 #
 # .. image:: ../_static/img/transformer_input_target.png
 #
-# It should be noted that the chunks belong to dimension 0,
+# It should be noted that the chunks belong to dimension 0,
 # matching the ``S`` dimension of the Transformer model.
 # The batch dimension ``N`` corresponds to dimension 1.
 #

@@ -246,7 +247,7 @@ def get_batch(source, i):
 ######################################################################
 # To demonstrate training large Transformer models with pipeline parallelism,
 # we scale up the Transformer layers appropriately.
-# We use an embedding dimension of 4096, a hidden size of 4096, 16 attention heads and a total of 8
+# We use an embedding dimension of 4096, a hidden size of 4096, 16 attention heads and a total of 8
 # transformer layers (``nn.TransformerEncoderLayer``). This creates a model with up to
 # **~100 million** parameters.
 #
#

beginner_source/audio_feature_extractions_tutorial.py (+2 -2)

@@ -265,7 +265,7 @@ def plot_kaldi_pitch(waveform, sample_rate, pitch, nfcc):
 # Mel Filter Bank
 # ---------------
 #
-# ``torchaudio.functional.create_fb_matrix`` generates the filter bank
+# ``torchaudio.functional.melscale_fbanks`` generates the filter bank
 # for converting frequency bins to mel-scale bins.
 #
 # Since this function does not require input audio/features, there is no

@@ -277,7 +277,7 @@ def plot_kaldi_pitch(waveform, sample_rate, pitch, nfcc):
 n_mels = 64
 sample_rate = 6000

-mel_filters = F.create_fb_matrix(
+mel_filters = F.melscale_fbanks(
     int(n_fft // 2 + 1),
     n_mels=n_mels,
     f_min=0.,
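Since `melscale_fbanks` just returns a matrix of triangular filters mapping linear frequency bins to mel bins, the idea can be sketched in pure Python. This is an illustrative reimplementation under stated assumptions (HTK-style mel formula, no filter normalization), not torchaudio's exact code; `mel_fbanks` and its signature are hypothetical names for this sketch:

```python
import math

def hz_to_mel(f):
    # HTK-style mel scale
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_fbanks(n_freqs, n_mels, f_min, f_max, sample_rate):
    """Return an (n_freqs x n_mels) list-of-lists of triangular filters."""
    # n_mels + 2 boundary points, equally spaced on the mel scale
    m_min, m_max = hz_to_mel(f_min), hz_to_mel(f_max)
    f_pts = [mel_to_hz(m_min + i * (m_max - m_min) / (n_mels + 1))
             for i in range(n_mels + 2)]
    # center frequency of each linear FFT bin
    freqs = [i * sample_rate / 2.0 / (n_freqs - 1) for i in range(n_freqs)]
    fb = [[0.0] * n_mels for _ in range(n_freqs)]
    for j in range(n_mels):
        lo, ctr, hi = f_pts[j], f_pts[j + 1], f_pts[j + 2]
        for i, f in enumerate(freqs):
            if lo < f < hi:  # ramp up to the center, then back down
                fb[i][j] = (f - lo) / (ctr - lo) if f <= ctr else (hi - f) / (hi - ctr)
    return fb

# shapes mirror the tutorial's call: n_freqs = n_fft // 2 + 1
fb = mel_fbanks(n_freqs=201, n_mels=64, f_min=0.0, f_max=3000.0, sample_rate=6000)
```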

beginner_source/basics/autogradqs_tutorial.py (+5 -5)

@@ -59,8 +59,8 @@
 # can be found there.
 #

-print('Gradient function for z =', z.grad_fn)
-print('Gradient function for loss =', loss.grad_fn)
+print(f"Gradient function for z = {z.grad_fn}")
+print(f"Gradient function for loss = {loss.grad_fn}")

 ######################################################################
 # Computing gradients

@@ -185,12 +185,12 @@
 inp = torch.eye(5, requires_grad=True)
 out = (inp+1).pow(2)
 out.backward(torch.ones_like(inp), retain_graph=True)
-print("First call\n", inp.grad)
+print(f"First call\n{inp.grad}")
 out.backward(torch.ones_like(inp), retain_graph=True)
-print("\nSecond call\n", inp.grad)
+print(f"\nSecond call\n{inp.grad}")
 inp.grad.zero_()
 out.backward(torch.ones_like(inp), retain_graph=True)
-print("\nCall after zeroing gradients\n", inp.grad)
+print(f"\nCall after zeroing gradients\n{inp.grad}")


 ######################################################################
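The f-string conversions in this and the following files are not purely cosmetic: `print(a, b)` inserts a separator space between its arguments, while an f-string renders exactly what the format string says. A small illustration with a plain string standing in for a tensor's `grad_fn`:

```python
grad_repr = "<AddBackward0 object>"  # stand-in for z.grad_fn

# print('... =', x) joins its arguments with one space, i.e. it prints:
old_style = " ".join(["Gradient function for z =", grad_repr])
# the f-string form happens to produce the same text here
new_style = f"Gradient function for z = {grad_repr}"

# but after a trailing newline, the old form injects a stray space:
old_call = " ".join(["First call\n", grad_repr])   # "First call\n <...>"
new_call = f"First call\n{grad_repr}"              # "First call\n<...>"
```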

beginner_source/basics/buildmodel_tutorial.py (+3 -3)

@@ -36,8 +36,8 @@
 # check whether `torch.cuda <https://pytorch.org/docs/stable/notes/cuda.html>`_ is available,
 # and otherwise continue to use the CPU.

-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-print(f'Using {device} device')
+device = "cuda" if torch.cuda.is_available() else "cpu"
+print(f"Using {device} device")

 ##############################################
 # Defining the class

@@ -176,7 +176,7 @@ def forward(self, x):
 #

-print("Model structure: ", model, "\n\n")
+print(f"Model structure: {model}\n\n")

 for name, param in model.named_parameters():
     print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")

beginner_source/basics/data_tutorial.py (+1 -1)

@@ -161,7 +161,7 @@ def __getitem__(self, idx):

 def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
-    self.img_labels = pd.read_csv(annotations_file, names=['file_name', 'label'])
+    self.img_labels = pd.read_csv(annotations_file)
     self.img_dir = img_dir
     self.transform = transform
     self.target_transform = target_transform
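The dropped `names=` argument matters when the annotations CSV already contains a header row: forcing explicit column names makes the header line itself come back as a bogus data row. The same pitfall, shown with only the standard library (the file contents are made up for this example):

```python
import csv
import io

# a hypothetical annotations file that already has a header row
annotations = "file_name,label\nimg0.jpg,5\nimg1.jpg,2\n"

# header inferred from the first line (what pd.read_csv(annotations_file) does)
rows = list(csv.DictReader(io.StringIO(annotations)))

# forcing explicit field names (like names=['file_name', 'label']) also
# yields the header line as a first "record"
forced = list(csv.DictReader(io.StringIO(annotations),
                             fieldnames=["file_name", "label"]))
```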

beginner_source/basics/quickstart_tutorial.py (+3 -4)

@@ -25,8 +25,7 @@
 from torch import nn
 from torch.utils.data import DataLoader
 from torchvision import datasets
-from torchvision.transforms import ToTensor, Lambda, Compose
-import matplotlib.pyplot as plt
+from torchvision.transforms import ToTensor

 ######################################################################
 # PyTorch provides domain-specific libraries such as `TorchText <https://pytorch.org/text/stable/index.html>`_, `TorchVision <https://pytorch.org/vision/stable/index.html>`_, and

@@ -66,8 +65,8 @@
 test_dataloader = DataLoader(test_data, batch_size=batch_size)

 for X, y in test_dataloader:
-    print("Shape of X [N, C, H, W]: ", X.shape)
-    print("Shape of y: ", y.shape, y.dtype)
+    print(f"Shape of X [N, C, H, W]: {X.shape}")
+    print(f"Shape of y: {y.shape} {y.dtype}")
     break

######################################################################

beginner_source/basics/tensorqs_tutorial.py (+5 -5)

@@ -112,7 +112,7 @@
 # We move the tensor to the GPU if one is available
 if torch.cuda.is_available():
-    tensor = tensor.to('cuda')
+    tensor = tensor.to("cuda")


 ######################################################################

@@ -124,9 +124,9 @@
 # **Standard NumPy-style indexing and slicing:**

 tensor = torch.ones(4, 4)
-print('First row: ', tensor[0])
-print('First column: ', tensor[:, 0])
-print('Last column:', tensor[..., -1])
+print(f"First row: {tensor[0]}")
+print(f"First column: {tensor[:, 0]}")
+print(f"Last column: {tensor[..., -1]}")
 tensor[:,1] = 0
 print(tensor)

@@ -171,7 +171,7 @@
 # Operations that store the result into the operand are called in-place operations, and are denoted by a ``_`` suffix.
 # For example: ``x.copy_(y)`` or ``x.t_()`` will change ``x``.

-print(tensor, "\n")
+print(f"{tensor} \n")
 tensor.add_(5)
 print(tensor)

beginner_source/text_sentiment_ngrams_tutorial.py (+19 -14)

@@ -18,30 +18,35 @@
 #
 # The torchtext library provides a few raw dataset iterators, which yield raw text sentences.
 # For example, the ``AG_NEWS`` dataset iterators yield the raw data as a tuple of label and text.
+#
+# Before accessing torchtext datasets, please install torchdata by following
+# the instructions at https://github.com/pytorch/data.
+#

 import torch
 from torchtext.datasets import AG_NEWS
-train_iter = AG_NEWS(split='train')
+train_iter = iter(AG_NEWS(split='train'))

 ######################################################################
 # ::
 #
 #     next(train_iter)
-#     >>> (3, "Wall St. Bears Claw Back Into the Black (Reuters) Reuters -
-#     Short-sellers, Wall Street's dwindling\\band of ultra-cynics, are seeing green
-#     again.")
+#     >>> (3, "Fears for T N pension after talks Unions representing workers at Turner
+#     Newall say they are 'disappointed' after talks with stricken parent firm Federal
+#     Mogul.")
 #
 #     next(train_iter)
-#     >>> (3, 'Carlyle Looks Toward Commercial Aerospace (Reuters) Reuters - Private
-#     investment firm Carlyle Group,\\which has a reputation for making well-timed
-#     and occasionally\\controversial plays in the defense industry, has quietly
-#     placed\\its bets on another part of the market.')
+#     >>> (4, "The Race is On: Second Private Team Sets Launch Date for Human
+#     Spaceflight (SPACE.com) SPACE.com - TORONTO, Canada -- A second\\team of
+#     rocketeers competing for the #36;10 million Ansari X Prize, a contest
+#     for\\privately funded suborbital space flight, has officially announced
+#     the first\\launch date for its manned rocket.")
 #
 #     next(train_iter)
-#     >>> (3, "Oil and Economy Cloud Stocks' Outlook (Reuters) Reuters - Soaring
-#     crude prices plus worries\\about the economy and the outlook for earnings are
-#     expected to\\hang over the stock market next week during the depth of
-#     the\\summer doldrums.")
+#     >>> (4, 'Ky. Company Wins Grant to Study Peptides (AP) AP - A company founded
+#     by a chemistry researcher at the University of Louisville won a grant to develop
+#     a method of producing better peptides, which are short chains of amino acids, the
+#     building blocks of proteins.')
 #

 ######################################################################
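The `iter(...)` wrapper is needed because newer torchtext datasets are iterable objects rather than iterators, so `next()` cannot be applied to them directly. The same distinction in plain Python (`ToyDataset` is a stand-in for illustration, not the torchtext API):

```python
class ToyDataset:
    """Iterable (defines __iter__) but not an iterator (no __next__)."""
    def __init__(self, samples):
        self.samples = samples

    def __iter__(self):
        return iter(self.samples)

ds = ToyDataset([(3, "sample headline one"), (4, "sample headline two")])

try:
    next(ds)      # fails: the dataset object itself is not an iterator
    first = None
except TypeError:
    it = iter(ds)  # wrap it first, as in iter(AG_NEWS(split='train'))
    first = next(it)
```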
@@ -75,7 +80,7 @@ def yield_tokens(data_iter):
 # ::
 #
 #     vocab(['here', 'is', 'an', 'example'])
-#     >>> [475, 21, 30, 5286]
+#     >>> [475, 21, 30, 5297]
 #
 # Prepare the text processing pipeline with the tokenizer and the vocabulary.
 # The text and label pipelines are used to process the raw sentence data obtained from the dataset iterators.

@@ -91,7 +96,7 @@ def yield_tokens(data_iter):
 # ::
 #
 #     text_pipeline('here is the an example')
-#     >>> [475, 21, 2, 30, 5286]
+#     >>> [475, 21, 2, 30, 5297]
 #     label_pipeline('10')
 #     >>> 9
 #
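The `text_pipeline` / `label_pipeline` pair shown above can be sketched with a plain dict standing in for the vocab; the token ids below are made up for the example and are not the AG_NEWS ids:

```python
# hypothetical vocabulary: token -> id
toy_vocab = {"<unk>": 0, "the": 2, "here": 10, "is": 11, "an": 12, "example": 13}

def text_pipeline(text):
    # tokenize by whitespace and look each token up, defaulting to <unk>
    return [toy_vocab.get(tok, toy_vocab["<unk>"]) for tok in text.lower().split()]

def label_pipeline(label):
    # AG_NEWS labels are 1-based strings; shift to 0-based ints
    return int(label) - 1

ids = text_pipeline("here is the an example")
lab = label_pipeline("10")
```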
