Commit 75c20a4

Merge pull request PyTorchKorea#76 from 9bow/recipes_source/recipes/saving_multiple_models_in_one_file
'PyTorch์—์„œ ์—ฌ๋Ÿฌ ๋ชจ๋ธ์„ ํ•˜๋‚˜์˜ ํŒŒ์ผ์— ์ €์žฅํ•˜๊ธฐ & ๋ถˆ๋Ÿฌ์˜ค๊ธฐ' ๋ฒˆ์—ญ (PyTorchKorea#75)
2 parents 19f09fb + 750fdb9 commit 75c20a4

File tree

1 file changed: +66 −75 lines changed

@@ -1,66 +1,60 @@
"""
Saving and loading multiple models in one file using PyTorch
============================================================
Saving and loading multiple models can be helpful for reusing models
that you have previously trained.

Introduction
------------
When saving a model comprised of multiple ``torch.nn.Modules``, such as
a GAN, a sequence-to-sequence model, or an ensemble of models, you must
save a dictionary of each model's state_dict and corresponding
optimizer. You can also save any other items that may aid you in
resuming training by simply appending them to the dictionary.
To load the models, first initialize the models and optimizers, then
load the dictionary locally using ``torch.load()``. From here, you can
easily access the saved items by simply querying the dictionary as you
would expect.
In this recipe, we will demonstrate how to save and load multiple models
in one file using PyTorch.

Setup
-----
Before we begin, we need to install ``torch`` if it isn't already
available.

::

   pip install torch

"""


######################################################################
# Steps
# -----
#
# 1. Import all necessary libraries for loading our data
# 2. Define and initialize the neural network
# 3. Initialize the optimizer
# 4. Save multiple models
# 5. Load multiple models
#
# 1. Import necessary libraries for loading our data
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# For this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``
# and ``torch.optim``.
#

import torch
import torch.nn as nn
import torch.optim as optim


######################################################################
# 2. Define and initialize the neural network
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# For the sake of example, we will create a neural network for training
# images. To learn more, see the Defining a Neural Network recipe. Build
# two variables for the models to eventually save.
#

class Net(nn.Module):
def __init__(self):
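The rest of the ``Net`` class body falls outside this diff's hunks. A minimal self-contained stand-in, with hypothetical layer sizes (not the recipe's actual architecture), is enough to exercise the save/load steps below:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # hypothetical layers; the recipe's real architecture is elided here
        self.fc1 = nn.Linear(3 * 32 * 32, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        x = torch.flatten(x, 1)          # flatten everything but the batch dim
        x = F.relu(self.fc1(x))
        return self.fc2(x)

# two model instances whose weights will later be saved into one file
netA = Net()
netB = Net()
```

Both instances share the architecture but start from independently initialized weights, which is what makes saving each state_dict separately meaningful.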
@@ -86,25 +80,24 @@ def forward(self, x):


######################################################################
# 3. Initialize the optimizer
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# We will use SGD with momentum to build an optimizer for each model we
# created.
#

optimizerA = optim.SGD(netA.parameters(), lr=0.001, momentum=0.9)
optimizerB = optim.SGD(netB.parameters(), lr=0.001, momentum=0.9)


######################################################################
# 4. Save multiple models
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Collect all relevant information and build your dictionary.
#

# Specify a path to save to
PATH = "model.pt"

torch.save({
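The dictionary body of this call is elided between hunks. Pieced together from the recipe text, a complete, self-contained version of the save step would look roughly like this sketch (the key names are conventional choices, not requirements, and ``nn.Linear`` stands in for the recipe's ``Net``):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# stand-in models and their optimizers (hypothetical; the recipe uses Net)
netA = nn.Linear(4, 2)
netB = nn.Linear(4, 2)
optimizerA = optim.SGD(netA.parameters(), lr=0.001, momentum=0.9)
optimizerB = optim.SGD(netB.parameters(), lr=0.001, momentum=0.9)

PATH = "model.pt"

# one dictionary holds every state_dict; any other items that aid in
# resuming training (epoch, loss, ...) can simply be added as more keys
torch.save({
    'modelA_state_dict': netA.state_dict(),
    'modelB_state_dict': netB.state_dict(),
    'optimizerA_state_dict': optimizerA.state_dict(),
    'optimizerB_state_dict': optimizerB.state_dict(),
}, PATH)
```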
@@ -116,12 +109,11 @@ def forward(self, x):


######################################################################
# 5. Load multiple models
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Remember to first initialize the models and optimizers, then load the
# dictionary locally.
#

modelA = Net()
modelB = Net()
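The actual ``torch.load()`` and ``load_state_dict()`` lines fall between this diff's hunks. Following the recipe text, and using ``nn.Linear`` as a self-contained stand-in for ``Net``, loading proceeds roughly like:

```python
import torch
import torch.nn as nn

PATH = "model.pt"

# save two stand-in models first so this sketch runs on its own
netA, netB = nn.Linear(4, 2), nn.Linear(4, 2)
torch.save({
    'modelA_state_dict': netA.state_dict(),
    'modelB_state_dict': netB.state_dict(),
}, PATH)

# first initialize the models, then load the dictionary locally
modelA, modelB = nn.Linear(4, 2), nn.Linear(4, 2)
checkpoint = torch.load(PATH)

# query the dictionary as you would expect to reach each saved item
modelA.load_state_dict(checkpoint['modelA_state_dict'])
modelB.load_state_dict(checkpoint['modelB_state_dict'])
```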
@@ -136,27 +128,26 @@ def forward(self, x):

modelA.eval()
modelB.eval()
# - or -
modelA.train()
modelB.train()


######################################################################
# You must call ``model.eval()`` to set dropout and batch normalization
# layers to evaluation mode before running inference. Failing to do this
# will yield inconsistent inference results.
#
# If you wish to resume training, call ``model.train()`` to ensure these
# layers are in training mode.
#
# Congratulations! You have successfully saved and loaded multiple models
# in PyTorch.
#
# Learn More
# ----------
#
# Take a look at these other recipes to continue your learning:
#
# - :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint`
# - :doc:`/recipes/recipes/saving_multiple_models_in_one_file`
#
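The ``eval()``/``train()`` distinction matters because layers such as dropout change behavior per mode. A quick illustration with a hypothetical module (not part of the recipe):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))
x = torch.ones(1, 8)

model.eval()                     # dropout disabled: inference is deterministic
out1, out2 = model(x), model(x)

model.train()                    # dropout re-enabled: activations are randomly
                                 # zeroed again, as during training
```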
