the `model hub <https://huggingface.co/models>`__.

Optionally, you can join an existing organization or create a new one.

We have seen in the :doc:`training tutorial <training>` how to fine-tune a model on a given task. You have probably
done something similar on your task, either using the model directly in your own training loop or using the
:class:`~transformers.Trainer`/:class:`~transformers.TFTrainer` class. Let's see how you can share the result on the
`model hub <https://huggingface.co/models>`__.

Model versioning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since version v3.5.0, the model hub has built-in model versioning based on git and git-lfs. It is based on the paradigm
that one model *is* one repo.

For instance:

    >>> revision="v2.0.1"  # tag name, or branch name, or commit hash
    >>> )
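
The snippet above is truncated in this chunk, but the idea it illustrates is that ``revision`` is just a git ref of the model repo. As a conceptual sketch (not hub code; the ref names and commit hashes below are invented for illustration):

```python
# Conceptual sketch: on the hub, branches, tags, and commit hashes are all
# git refs of the model repo, each resolving to one commit. The hashes here
# are made up for illustration.
refs = {
    "main": "3f8a2b1",    # branch head (hypothetical hash)
    "v2.0.1": "9c04d7e",  # tag (hypothetical hash)
}

def resolve_revision(revision):
    """Return the commit a revision points to; a raw hash resolves to itself."""
    return refs.get(revision, revision)
```

Whether you pass a tag, a branch name, or a commit hash as ``revision``, you end up pinned to one specific commit of the repo.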

Push your model from Python
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Preparation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The first step is to make sure your credentials to the hub are stored somewhere. This can be done in two ways. If you
have access to a terminal, you can just run the following command in the virtual environment where you installed 🤗
Transformers:

.. code-block:: bash

    transformers-cli login

It will store your access token in the Hugging Face cache folder (by default :obj:`~/.cache/`).

If you don't have easy access to a terminal (for instance in a Colab session), you can find a token linked to your
account by going to `huggingface.co <https://huggingface.co/>`_, clicking on your avatar in the top left corner, then on
`Edit profile` on the left, just beneath your profile picture. In the submenu `API Tokens`, you will find your API
token that you can just copy.

Directly push your model to the hub
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Once you have an API token (either stored in the cache or copied and pasted in your notebook), you can directly push a
finetuned model you saved in :obj:`save_directory` by calling:

.. code-block:: python

    finetuned_model.push_to_hub("my-awesome-model")

If your API token is not stored in the cache, you will need to pass it with :obj:`use_auth_token=your_token`. This is
also the case for all the examples below, so we won't mention it again.

This will create a repository in your namespace named :obj:`my-awesome-model`, so anyone can now run:

.. code-block:: python

    from transformers import AutoModel

    model = AutoModel.from_pretrained("your_username/my-awesome-model")

Even better, you can combine this push to the hub with the call to :obj:`save_pretrained`:

.. code-block:: python

    finetuned_model.save_pretrained(save_directory, push_to_hub=True, repo_name="my-awesome-model")

If you are a premium user and want your model to be private, just add :obj:`private=True` to this call.

If you are a member of an organization and want to push it inside the namespace of the organization instead of yours,
just add :obj:`organization=my_amazing_org`.
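
To make the namespace rules above concrete, here is a tiny illustrative helper (not a Transformers API) showing which namespace the repo ends up under:

```python
def repo_identifier(repo_name, username, organization=None):
    """Illustrative only: the hub addresses every model as ``namespace/repo_name``.

    Passing ``organization=...`` swaps the namespace from your username to the
    organization, as described above.
    """
    namespace = organization if organization is not None else username
    return f"{namespace}/{repo_name}"

print(repo_identifier("my-awesome-model", "your_username"))
# your_username/my-awesome-model
print(repo_identifier("my-awesome-model", "your_username", organization="my_amazing_org"))
# my_amazing_org/my-awesome-model
```
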

Add new files to your model repo
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Once you have pushed your model to the hub, you might want to add the tokenizer, or a version of your model for another
framework (TensorFlow, PyTorch, Flax). This is super easy to do! Let's begin with the tokenizer. You can add it to the
repo you created before like this:

.. code-block:: python

    tokenizer.push_to_hub("my-awesome-model")

If you know its URL (it should be :obj:`https://huggingface.co/username/repo_name`), you can also do:

.. code-block:: python

    tokenizer.push_to_hub(repo_url=my_repo_url)

And that's all there is to it! It's also a very easy way to fix a mistake if one of the files online had a bug.

To add a model for another backend, it's also super easy. Let's say you have fine-tuned a TensorFlow model and want to
add the PyTorch model files to your model repo, so that anyone in the community can use it. The following allows you to
directly create a PyTorch version of your TensorFlow model:

.. code-block:: python

    from transformers import AutoModel

    model = AutoModel.from_pretrained(save_directory, from_tf=True)

You can also replace :obj:`save_directory` by the identifier of your model (:obj:`username/repo_name`) if you don't
have a local save of it anymore. Then, just do the same as before:

.. code-block:: python

    model.push_to_hub("my-awesome-model")

or

.. code-block:: python

    model.push_to_hub(repo_url=my_repo_url)


Use your terminal and git
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Basic steps
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^