<pclass="gdpr-notice">To analyze traffic and optimize your experience, we serve cookies on this site. By clicking or navigating, you agree to allow our usage of cookies. As the current maintainers of this site, Facebook’s Cookies Policy applies. Learn more, including about available controls: <ahref="https://www.facebook.com/policies/cookies/">Cookies Policy</a>.</p>
3
+
<pclass="gdpr-notice">사이트 트래픽을 분석하고 사용 경험을 최적화하기 위해 쿠키를 사용합니다. 메뉴를 클릭하거나 사이트를 탐색하면 쿠키 사용을 허용하는 데 동의하는 것으로 간주합니다.</p>
_includes/open_graph_and_meta.html (+2 −2)
@@ -2,9 +2,9 @@
 <meta
   name="description"
   property="og:description"
-  content="(Unofficial) Korean user community for PyTorch which is an open source machine learning framework that accelerates the path from research prototyping to production deployment."
+  content="Welcome to the PyTorch Korea user community. We translate documentation and share information for Korean-speaking users of PyTorch, the deep learning framework."
_includes/quick_start_local.html (+2 −2)
@@ -1,5 +1,5 @@
 <p>Select your environment, then copy and run the install command. The Stable version is the most recently tested and supported version of PyTorch and is suitable for most users.
-The Preview version is the latest 1.11 build, which is not yet fully tested and is updated daily. Depending on the package manager you use, make sure the <b>prerequisites below (e.g. numpy)</b> are met.
+The Preview version is the latest 1.12 build, which is not yet fully tested and is updated daily. Depending on the package manager you use, make sure the <b>prerequisites below (e.g. numpy)</b> are met.
 We recommend Anaconda as the package manager since it installs all dependencies. <a href="{{ site.baseurl }}/get-started/previous-versions">Previous versions of PyTorch can also be installed</a>. LibTorch is supported for C++ only.
 </p>
 <p>Additional support / warranty for Stable and LTS binaries is available through the <a href="https://pytorch.org/enterprise-support-program" target="_blank">PyTorch Enterprise Support Program</a>.
@@ -33,7 +33,7 @@
 <div class="option-text">PyTorch Build</div>
 </div>
 <div class="col-md-4 option block version selected" id="stable">
_mobile/home.md (+2)
@@ -11,6 +11,8 @@ redirect_from: "/mobile/"
 
 # PyTorch Mobile
 
+**New!** Build AI-powered mobile apps in minutes with [PyTorch Live]({{site.baseurl}}/live).
+
 There is a growing need to execute ML models on edge devices to reduce latency, preserve privacy, and enable new interactive use cases.
 
 The PyTorch Mobile runtime beta release allows you to seamlessly go from training a model to deploying it, while staying entirely within the PyTorch ecosystem. It provides an end-to-end workflow that simplifies the research to production environment for mobile devices. In addition, it paves the way for privacy-preserving features via federated learning techniques.
_mobile/ios.md (+8 −3)
@@ -16,6 +16,11 @@ To get started with PyTorch on iOS, we recommend exploring the following [HelloW
 
 HelloWorld is a simple image classification application that demonstrates how to use PyTorch C++ libraries on iOS. The code is written in Swift and uses Objective-C as a bridge.
 
+### Requirements
+
+- XCode 11.0 or above
+- iOS 12.0 or above
+
 ### Model Preparation
 
 Let's start with model preparation. If you are familiar with PyTorch, you probably should already know how to train and save your model. In case you don't, we are going to use a pre-trained image classification model - [MobileNet v2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/), which is already packaged in [TorchVision](https://pytorch.org/docs/stable/torchvision/index.html). To install it, run the command below.
@@ -32,7 +37,7 @@ Once we have TorchVision installed successfully, let's navigate to the HelloWorl
 python trace_model.py
 ```
 
-If everything works well, we should have our model - `model.pt` generated in the `HelloWorld` folder. Now copy the model file to our application folder `HelloWorld/model`.
+If everything works well, `model.pt` should be generated and saved in the `HelloWorld/HelloWorld/model` folder.
 
 > To find out more details about TorchScript, please visit [tutorials on pytorch.org](https://pytorch.org/tutorials/advanced/cpp_export.html)
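For context on the `python trace_model.py` step in this hunk: the script itself is not part of the diff, but a minimal tracing script for the MobileNet v2 model referenced above would look roughly like the sketch below. The output path is assumed from the updated `HelloWorld/HelloWorld/model` location; treat it as an illustration, not the repository's exact script.

```python
import torch
import torchvision

# Load the pre-trained MobileNet v2 model from TorchVision and switch to inference mode.
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

# Trace the model with a dummy 1x3x224x224 input to produce a TorchScript module.
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)

# Save the serialized model where the iOS app expects it (path assumed from the text above).
traced_script_module.save("HelloWorld/HelloWorld/model/model.pt")
```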
-We first load the image from our bundle and resize it to 224x224. Then we call this `normalized()` category method to normalized the pixel buffer. Let's take a closer look at the code below.
+We first load the image from our bundle and resize it to 224x224. Then we call this `normalized()` category method to normalize the pixel buffer. Let's take a closer look at the code below.
 
 ```swift
 var normalizedBuffer: [Float32] = [Float32](repeating: 0, count: w * h * 3)
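The `normalized()` category method discussed here corresponds to the standard ImageNet preprocessing that MobileNet v2 expects. A Python equivalent, assuming the app mirrors torchvision's usual mean/std values, is sketched below for reference.

```python
from PIL import Image
from torchvision import transforms

# Standard ImageNet preprocessing: resize to 224x224, scale pixels to [0, 1],
# then normalize each channel with the ImageNet mean and standard deviation.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg")              # hypothetical input image
input_tensor = preprocess(image).unsqueeze(0)  # shape: [1, 3, 224, 224]
```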
@@ -82,7 +87,7 @@ The code might look weird at first glance, but it’ll make sense once we unders
 
 #### TorchScript Module
 
-Now that we have preprocessed our input data and we have a pre-trained TorchScript model, the next step is to use them to run predication. To do that, we'll first load our model into the application.
+Now that we have preprocessed our input data and we have a pre-trained TorchScript model, the next step is to use them to run prediction. To do that, we'll first load our model into the application.
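On device, the app loads the bundled TorchScript file through the Objective-C bridge; before wiring that up, a quick desktop-side sanity check of the same `model.pt` can be done in Python, as sketched below (paths assumed from the earlier model-preparation step).

```python
import torch

# Load the TorchScript module produced by trace_model.py.
module = torch.jit.load("HelloWorld/HelloWorld/model/model.pt")
module.eval()

# Run a forward pass on a dummy 224x224 input and report the top-scoring class index.
with torch.no_grad():
    output = module(torch.rand(1, 3, 224, 224))
    print(int(output.argmax(dim=1)))
```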