
Commit 19f09fb

Rebuild
1 parent 555973a commit 19f09fb

File tree

8 files changed: +93 -95 lines changed


docs/_downloads/3dbbd6931d76adb0dc37d4e88b328852/tensor_tutorial.ipynb

+9-9
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\nWhat is PyTorch?\n=======================\n\nA Python-based scientific computing package aimed at two groups of users:\n\n- those who need a NumPy replacement that can use GPUs for computation\n- those who need a deep learning research platform that provides maximum flexibility and speed\n\nGetting Started\n-----------\n\nTensors\n^^^^^^^\n\nTensors are similar to NumPy's ndarrays, and in addition, computation on them can be accelerated on a GPU.\n\n"
+   "\nWhat is PyTorch?\n=======================\n\nA Python-based scientific computing package aimed at two groups of users:\n\n- those who need a NumPy replacement that can use GPUs for computation\n- those who need a deep learning research platform that provides maximum flexibility and speed\n\nGetting Started\n-----------\n\nTensors\n^^^^^^^\n\nTensors are similar to NumPy's ndarrays, and computation on them can be accelerated on a GPU.\n\n"
   ]
  },
  {
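For quick reference while reviewing the reworded cell, a minimal sketch of the tensor creation it alludes to (standard PyTorch calls; this code is not part of the commit):

import torch

x = torch.empty(5, 3)                    # uninitialized 5x3 tensor, like np.empty
y = torch.rand(5, 3)                     # uniform random values in [0, 1)
z = torch.zeros(5, 3, dtype=torch.long)  # explicit dtype
print(x.size())                          # torch.Size([5, 3])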
@@ -112,7 +112,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Or create a tensor based on an existing tensor. These methods\nreuse the properties of the input tensor (e.g. dtype), unless\nnew values are provided by the user.\n\n"
+   "Or create a new tensor based on an existing tensor. These methods\nreuse the properties of the input tensor (e.g. dtype), unless the\nuser provides new values.\n\n"
   ]
  },
  {
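A sketch of the behaviour this cell describes; the `new_ones` call appears in the .py diff below, and `randn_like` is its standard companion:

import torch

x = torch.zeros(5, 3)
x = x.new_ones(5, 3, dtype=torch.double)     # new_* methods take sizes
y = torch.randn_like(x)                      # reuses x's dtype (double) and shape
z = torch.randn_like(x, dtype=torch.float)   # user-provided dtype overrides the reuse
print(y.dtype, z.dtype)                      # torch.float64 torch.float32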
@@ -148,7 +148,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "<div class=\"alert alert-info\"><h4>Note</h4><p>``torch.Size`` is in fact the same as a tuple, and supports all tuple operations.</p></div>\n\nOperations\n^^^^^^^^^^^^^^^^\nThere are multiple syntaxes for operations. Let's look at addition through the following examples.\n\nAddition: syntax 1\n\n"
+   "<div class=\"alert alert-info\"><h4>Note</h4><p>``torch.Size`` is a tuple type, and supports all tuple operations.</p></div>\n\nOperations\n^^^^^^^^^^^^^^^^\nThere are multiple syntaxes for operations. Let's look at addition through the following examples.\n\nAddition: syntax 1\n\n"
   ]
  },
  {
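A sketch of the two points in this cell, the tuple behaviour of ``torch.Size`` and the addition syntaxes (all standard PyTorch; not part of the commit):

import torch

x = torch.rand(5, 3)
y = torch.rand(5, 3)

rows, cols = x.size()        # torch.Size unpacks like a tuple
print(x + y)                 # addition: syntax 1
print(torch.add(x, y))       # addition: syntax 2
result = torch.empty(5, 3)
torch.add(x, y, out=result)  # addition: with an explicit output tensor
print(result)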
@@ -202,7 +202,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Addition: In-place\n\n"
+   "Addition: in-place\n\n"
   ]
  },
  {
@@ -220,7 +220,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "<div class=\"alert alert-info\"><h4>Note</h4><p>Any operation that changes a tensor's value in place has a ``_``\n suffix.\n Example: ``x.copy_(y)``, ``x.t_()`` will change ``x``.</p></div>\n\nYou can also use NumPy-like indexing notation!\n\n"
+   "<div class=\"alert alert-info\"><h4>Note</h4><p>Any operation that changes a tensor's value in place is post-fixed with ``_``.\n Example: ``x.copy_(y)``, ``x.t_()`` will change ``x``.</p></div>\n\nYou can even use NumPy-like indexing notation!\n\n"
   ]
  },
  {
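A sketch of the in-place suffix and the NumPy-style indexing this note mentions (`add_` and `x[:, 1]` both appear in the .py file changed by this commit):

import torch

x = torch.rand(5, 3)
y = torch.rand(5, 3)

y.add_(x)        # trailing underscore: mutates y in place
print(y)
print(x[:, 1])   # NumPy-style indexing: the second column of x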
@@ -249,7 +249,7 @@
  },
   "outputs": [],
   "source": [
-   "x = torch.randn(4, 4)\ny = x.view(16)\nz = x.view(-1, 8) # the -1 is inferred using the other dimensions\nprint(x.size(), y.size(), z.size())"
+   "x = torch.randn(4, 4)\ny = x.view(16)\nz = x.view(-1, 8) # the -1 is inferred from the other dimensions\nprint(x.size(), y.size(), z.size())"
   ]
  },
  {
@@ -274,7 +274,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "**Further reading:**\n\n\n  100+ Tensor operations, such as transposing, indexing, slicing,\n  mathematical operations, linear algebra, random numbers, etc.,\n  are described `here <http://pytorch.org/docs/torch>`_.\n\nNumPy Bridge\n-------------------\n\nConverting a Torch Tensor to a NumPy array, and vice versa, is very easy.\n\nThe Torch Tensor (on CPU) and the NumPy array share their underlying storage,\nso changing one will change the other.\n\nConverting a Torch Tensor to a NumPy Array\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n"
+   "**Further reading:**\n\n\n  100+ Tensor operations, such as transposing, indexing, slicing,\n  mathematical operations, linear algebra, random numbers, etc.,\n  can be found `here <http://pytorch.org/docs/torch>`_.\n\nNumPy Bridge\n-------------------\n\nConverting a Torch Tensor to a NumPy array, and vice versa, is very easy.\n\nIf the Torch Tensor is on CPU, the Torch Tensor and the NumPy array share\ntheir underlying memory, so changing one will change the other.\n\nConverting a Torch Tensor to a NumPy Array\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n"
   ]
  },
  {
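A sketch of the shared-memory behaviour the rewritten sentence pins down (`.numpy()` returns a view of the same memory, not a copy; standard PyTorch):

import torch

a = torch.ones(5)
b = a.numpy()   # b views the same memory as a
a.add_(1)       # mutate the tensor in place
print(a)        # tensor([2., 2., 2., 2., 2.])
print(b)        # [2. 2. 2. 2. 2.] -- the NumPy array changed too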
@@ -321,7 +321,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Converting a NumPy Array to a Torch Tensor\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nSee how changing the NumPy (np) array automatically changes the Torch Tensor.\n\n"
+   "Converting a NumPy Array to a Torch Tensor\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nSee how changing the np (NumPy) array automatically changes the Torch Tensor.\n\n"
   ]
  },
  {
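A sketch of the reverse direction; the `import numpy as np` and `np.ones` lines are visible in the .py diff below, and `torch.from_numpy`/`np.add(..., out=...)` are standard calls:

import numpy as np
import torch

a = np.ones(5)
b = torch.from_numpy(a)  # b shares memory with a
np.add(a, 1, out=a)      # mutate the array in place
print(a)                 # [2. 2. 2. 2. 2.]
print(b)                 # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)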
@@ -339,7 +339,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "All Tensors on the CPU except a CharTensor support conversion to NumPy,\nand the reverse (NumPy to Tensor) conversion is also supported.\n\nCUDA Tensors\n------------\n\nTensors can be moved onto any device using the ``.to`` method.\n\n"
+   "All Tensors on the CPU except a CharTensor can be converted to NumPy,\nand the reverse (NumPy to Tensor) conversion is also possible.\n\nCUDA Tensors\n------------\n\nTensors can be moved onto any device using the ``.to`` method.\n\n"
   ]
  },
  {
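A sketch of the ``.to`` usage the CUDA cell introduces (guarded on availability; all standard PyTorch calls, not part of this commit's changes):

import torch

x = torch.randn(4, 4)
if torch.cuda.is_available():
    device = torch.device("cuda")           # a CUDA device object
    y = torch.ones_like(x, device=device)   # create a tensor directly on the GPU
    x = x.to(device)                        # or move an existing tensor with .to
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))        # .to can also change dtype on the way back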

docs/_downloads/6133d4c3ca687bdecb6dda6d3a243c24/tensor_tutorial.py

+15-16
@@ -14,7 +14,7 @@
 Tensors
 ^^^^^^^
 
-Tensors are similar to NumPy's ndarrays, and in addition, computation on them can be accelerated on a GPU.
+Tensors are similar to NumPy's ndarrays, and computation on them can be accelerated on a GPU.
 """
 
 from __future__ import print_function
@@ -51,8 +51,8 @@
 print(x)
 
 ###############################################################
-# Or create a tensor based on an existing tensor. These methods ensure that,
-# unless new values are provided by the user, the input tensor's properties (e.g. dtype)
+# Or create a new tensor based on an existing tensor. These methods ensure that,
+# unless the user provides new values, the input tensor's properties (e.g. dtype)
 # are reused.
 
 x = x.new_ones(5, 3, dtype=torch.double)      # new_* methods take sizes
@@ -68,7 +68,7 @@
 
 ###############################################################
 # .. note::
-#     ``torch.Size`` is in fact the same as a tuple, and supports all tuple operations.
+#     ``torch.Size`` is a tuple type, and supports all tuple operations.
 #
 # Operations
 # ^^^^^^^^^^^^^^^^
@@ -90,27 +90,26 @@
 print(result)
 
 ###############################################################
-# Addition: In-place
+# Addition: in-place
 
 # adds x to y
 y.add_(x)
 print(y)
 
 ###############################################################
 # .. note::
-#     Any operation that changes a tensor's value in place has a ``_``
-#     suffix.
+#     Any operation that changes a tensor's value in place is post-fixed with ``_``.
 #     Example: ``x.copy_(y)``, ``x.t_()`` will change ``x``.
 #
-# You can also use NumPy-like indexing notation!
+# You can even use NumPy-like indexing notation!
 
 print(x[:, 1])
 
 ###############################################################
 # Resizing: if you want to resize/reshape a tensor, use ``torch.view``:
 x = torch.randn(4, 4)
 y = x.view(16)
-z = x.view(-1, 8)  # the -1 is inferred using the other dimensions
+z = x.view(-1, 8)  # the -1 is inferred from the other dimensions
 print(x.size(), y.size(), z.size())
 
 ###############################################################
@@ -124,16 +123,16 @@
 #
 #
 # Transposing, indexing, slicing, mathematical operations,
-# linear algebra, random numbers: 100+ such Tensor operations are
-# described `here <http://pytorch.org/docs/torch>`_.
+# linear algebra, random numbers, etc.: 100+ Tensor operations
+# can be found `here <http://pytorch.org/docs/torch>`_.
 #
 # NumPy Bridge
 # -------------------
 #
 # Converting a Torch Tensor to a NumPy array, and vice versa, is very easy.
 #
-# The Torch Tensor (on CPU) and the NumPy array share their underlying storage,
-# so changing one will change the other.
+# If the Torch Tensor is on CPU, the Torch Tensor and the NumPy array share
+# their underlying memory, so changing one will change the other.
 #
 # Converting a Torch Tensor to a NumPy Array
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -157,7 +156,7 @@
 ###############################################################
 # Converting a NumPy Array to a Torch Tensor
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-# See how changing the NumPy (np) array automatically changes the Torch Tensor.
+# See how changing the np (NumPy) array automatically changes the Torch Tensor.
 
 import numpy as np
 a = np.ones(5)
@@ -167,8 +166,8 @@
 print(b)
 
 ###############################################################
-# All Tensors on the CPU except a CharTensor support conversion to NumPy,
-# and the reverse (NumPy to Tensor) conversion is also supported.
+# All Tensors on the CPU except a CharTensor can be converted to NumPy,
+# and the reverse (NumPy to Tensor) conversion is also possible.
 #
 # CUDA Tensors
 # ------------

docs/advanced/sg_execution_times.html

+2-2
@@ -291,9 +291,9 @@
 
 <div class="section" id="computation-times">
 <span id="sphx-glr-advanced-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline"></a></h1>
-<p><strong>00:00.067</strong> total execution time for <strong>advanced</strong> files:</p>
+<p><strong>00:00.056</strong> total execution time for <strong>advanced</strong> files:</p>
 <ul class="simple">
-<li><p><strong>00:00.067</strong>: <a class="reference internal" href="neural_style_tutorial.html#sphx-glr-advanced-neural-style-tutorial-py"><span class="std std-ref">Neural-Transfer with PyTorch</span></a> (<code class="docutils literal notranslate"><span class="pre">neural_style_tutorial.py</span></code>)</p></li>
+<li><p><strong>00:00.056</strong>: <a class="reference internal" href="neural_style_tutorial.html#sphx-glr-advanced-neural-style-tutorial-py"><span class="std std-ref">Neural-Transfer with PyTorch</span></a> (<code class="docutils literal notranslate"><span class="pre">neural_style_tutorial.py</span></code>)</p></li>
 <li><p><strong>00:00.000</strong>: <a class="reference internal" href="dynamic_quantization_tutorial.html#sphx-glr-advanced-dynamic-quantization-tutorial-py"><span class="std std-ref">(experimental) Dynamic Quantization on an LSTM Word Language Model</span></a> (<code class="docutils literal notranslate"><span class="pre">dynamic_quantization_tutorial.py</span></code>)</p></li>
 <li><p><strong>00:00.000</strong>: <a class="reference internal" href="numpy_extensions_tutorial.html#sphx-glr-advanced-numpy-extensions-tutorial-py"><span class="std std-ref">Creating Extensions Using numpy and scipy</span></a> (<code class="docutils literal notranslate"><span class="pre">numpy_extensions_tutorial.py</span></code>)</p></li>
 <li><p><strong>00:00.000</strong>: <a class="reference internal" href="static_quantization_tutorial.html#sphx-glr-advanced-static-quantization-tutorial-py"><span class="std std-ref">(experimental) Static Quantization with Eager Mode in PyTorch</span></a> (<code class="docutils literal notranslate"><span class="pre">static_quantization_tutorial.py</span></code>)</p></li>

docs/beginner/blitz/sg_execution_times.html

+3-3
@@ -291,13 +291,13 @@
 
 <div class="section" id="computation-times">
 <span id="sphx-glr-beginner-blitz-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline"></a></h1>
-<p><strong>00:00.060</strong> total execution time for <strong>beginner_blitz</strong> files:</p>
+<p><strong>00:02.255</strong> total execution time for <strong>beginner_blitz</strong> files:</p>
 <ul class="simple">
-<li><p><strong>00:00.060</strong>: <a class="reference internal" href="autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py"><span class="std std-ref">Autograd: Automatic Differentiation</span></a> (<code class="docutils literal notranslate"><span class="pre">autograd_tutorial.py</span></code>)</p></li>
+<li><p><strong>00:02.255</strong>: <a class="reference internal" href="tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py"><span class="std std-ref">What is PyTorch?</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_tutorial.py</span></code>)</p></li>
+<li><p><strong>00:00.000</strong>: <a class="reference internal" href="autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py"><span class="std std-ref">Autograd: Automatic Differentiation</span></a> (<code class="docutils literal notranslate"><span class="pre">autograd_tutorial.py</span></code>)</p></li>
 <li><p><strong>00:00.000</strong>: <a class="reference internal" href="cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py"><span class="std std-ref">Training a Classifier</span></a> (<code class="docutils literal notranslate"><span class="pre">cifar10_tutorial.py</span></code>)</p></li>
 <li><p><strong>00:00.000</strong>: <a class="reference internal" href="data_parallel_tutorial.html#sphx-glr-beginner-blitz-data-parallel-tutorial-py"><span class="std std-ref">Optional: Data Parallelism</span></a> (<code class="docutils literal notranslate"><span class="pre">data_parallel_tutorial.py</span></code>)</p></li>
 <li><p><strong>00:00.000</strong>: <a class="reference internal" href="neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py"><span class="std std-ref">Neural Networks</span></a> (<code class="docutils literal notranslate"><span class="pre">neural_networks_tutorial.py</span></code>)</p></li>
-<li><p><strong>00:00.000</strong>: <a class="reference internal" href="tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py"><span class="std std-ref">What is PyTorch?</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_tutorial.py</span></code>)</p></li>
 </ul>
 </div>