"""
Saving and loading multiple models in one file using PyTorch
============================================================
Saving and loading multiple models can be helpful for reusing models
that you have previously trained.

Introduction
------------
When saving a model comprised of multiple ``torch.nn.Modules``, such as
a GAN, a sequence-to-sequence model, or an ensemble of models, you must
save a dictionary of each model's state_dict and corresponding
optimizer. You can also save any other items that may aid you in
resuming training by simply appending them to the dictionary.
To load the models, first initialize the models and optimizers, then
load the dictionary locally using ``torch.load()``. From here, you can
easily access the saved items by simply querying the dictionary as you
would expect.
In this recipe, we will demonstrate how to save and load multiple models
in one file using PyTorch.

Setup
-----
Before we begin, we need to install ``torch`` if it isn't already
available.

::

   pip install torch
"""
######################################################################
# Steps
# -----
#
# 1. Import all necessary libraries for loading our data
# 2. Define and initialize the neural network
# 3. Initialize the optimizer
# 4. Save multiple models
# 5. Load multiple models
#
# 1. Import necessary libraries for loading our data
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# For this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``
# and ``torch.optim``.
#

import torch
import torch.nn as nn
import torch.optim as optim
######################################################################
# 2. Define and initialize the neural network
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# For the sake of example, we will create a neural network for training
# images. To learn more, see the Defining a Neural Network recipe. Build
# two variables for the models to eventually save.
#

class Net(nn.Module):
    def __init__(self):
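        # The body of ``Net`` is not shown in this excerpt. What follows is a
        # minimal sketch of a small convolutional image classifier in the same
        # spirit; the exact layers and sizes here are assumptions, not
        # necessarily the recipe's architecture.
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Two conv/pool stages followed by three fully connected layers.
        x = self.pool(torch.relu(self.conv1(x)))
        x = self.pool(torch.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

# The two model instances that will eventually be saved together.
netA = Net()
netB = Net()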
######################################################################
# 3. Initialize the optimizer
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# We will use SGD with momentum to build an optimizer for each model we
# created.
#

optimizerA = optim.SGD(netA.parameters(), lr=0.001, momentum=0.9)
optimizerB = optim.SGD(netB.parameters(), lr=0.001, momentum=0.9)
######################################################################
# 4. Save multiple models
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Collect all relevant information and build your dictionary.
#

# Specify a path to save to
PATH = "model.pt"

torch.save({
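    # The exact entries are not shown in this excerpt; the key names below
    # are an assumed but typical layout: one state_dict per model and one
    # per optimizer, all collected in a single dictionary.
    'modelA_state_dict': netA.state_dict(),
    'modelB_state_dict': netB.state_dict(),
    'optimizerA_state_dict': optimizerA.state_dict(),
    'optimizerB_state_dict': optimizerB.state_dict(),
}, PATH)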
######################################################################
# 5. Load multiple models
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Remember to first initialize the models and optimizers, then load the
# dictionary locally.
#

modelA = Net()
modelB = Net()
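# The loading code itself is not shown in this excerpt. A minimal sketch,
# assuming the checkpoint layout sketched above; the variable names
# ``optimModelA``/``optimModelB`` and the dictionary keys are assumptions.
optimModelA = optim.SGD(modelA.parameters(), lr=0.001, momentum=0.9)
optimModelB = optim.SGD(modelB.parameters(), lr=0.001, momentum=0.9)

# Load the single checkpoint file and restore each model and optimizer.
checkpoint = torch.load(PATH)
modelA.load_state_dict(checkpoint['modelA_state_dict'])
modelB.load_state_dict(checkpoint['modelB_state_dict'])
optimModelA.load_state_dict(checkpoint['optimizerA_state_dict'])
optimModelB.load_state_dict(checkpoint['optimizerB_state_dict'])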

modelA.eval()
modelB.eval()
# - or -
modelA.train()
modelB.train()
######################################################################
# You must call ``model.eval()`` to set dropout and batch normalization
# layers to evaluation mode before running inference. Failing to do this
# will yield inconsistent inference results.
#
# If you wish to resume training, call ``model.train()`` to ensure these
# layers are in training mode.
#
# Congratulations! You have successfully saved and loaded multiple models
# in PyTorch.
#
# Learn More
# ----------
#
# Take a look at these other recipes to continue your learning:
#
# - :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint`
# - :doc:`/recipes/recipes/saving_multiple_models_in_one_file`
#