
Commit 2e648dc

machineuser committed 🔖 @huggingface/inference v2.0.0
1 parent c19bf42


45 files changed (+2321 −917 lines)

README.md (+1 −1)

````diff
@@ -64,7 +64,7 @@ You can run our packages with vanilla JS, without any bundler, by using a CDN or
 ```html
 
 <script type="module">
-import { HfInference } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@1.8.0/+esm';
+import { HfInference } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@2.0.0/+esm';
 import { createRepo, commit, deleteRepo, listFiles } from "https://cdn.jsdelivr.net/npm/@huggingface/[email protected]/+esm";
 </script>
 ```
````

docs/_toctree.yml (+34 −36)

````diff
@@ -62,51 +62,49 @@
 sections:
 - title: HfInference
   local: inference/classes/HfInference
-- title: Enums
-  sections:
-  - title: TextGenerationStreamFinishReason
-    local: inference/enums/TextGenerationStreamFinishReason
+- title: HfInferenceEndpoint
+  local: inference/classes/HfInferenceEndpoint
 - title: Interfaces
   sections:
-  - title: Args
-    local: inference/interfaces/Args
-  - title: AudioClassificationReturnValue
-    local: inference/interfaces/AudioClassificationReturnValue
-  - title: AutomaticSpeechRecognitionReturn
-    local: inference/interfaces/AutomaticSpeechRecognitionReturn
-  - title: ConversationalReturn
-    local: inference/interfaces/ConversationalReturn
-  - title: ImageClassificationReturnValue
-    local: inference/interfaces/ImageClassificationReturnValue
-  - title: ImageSegmentationReturnValue
-    local: inference/interfaces/ImageSegmentationReturnValue
-  - title: ImageToTextReturn
-    local: inference/interfaces/ImageToTextReturn
-  - title: ObjectDetectionReturnValue
-    local: inference/interfaces/ObjectDetectionReturnValue
+  - title: AudioClassificationOutputValue
+    local: inference/interfaces/AudioClassificationOutputValue
+  - title: AutomaticSpeechRecognitionOutput
+    local: inference/interfaces/AutomaticSpeechRecognitionOutput
+  - title: BaseArgs
+    local: inference/interfaces/BaseArgs
+  - title: ConversationalOutput
+    local: inference/interfaces/ConversationalOutput
+  - title: ImageClassificationOutputValue
+    local: inference/interfaces/ImageClassificationOutputValue
+  - title: ImageSegmentationOutputValue
+    local: inference/interfaces/ImageSegmentationOutputValue
+  - title: ImageToTextOutput
+    local: inference/interfaces/ImageToTextOutput
+  - title: ObjectDetectionOutputValue
+    local: inference/interfaces/ObjectDetectionOutputValue
   - title: Options
     local: inference/interfaces/Options
-  - title: QuestionAnswerReturn
-    local: inference/interfaces/QuestionAnswerReturn
-  - title: SummarizationReturn
-    local: inference/interfaces/SummarizationReturn
-  - title: TableQuestionAnswerReturn
-    local: inference/interfaces/TableQuestionAnswerReturn
-  - title: TextGenerationReturn
-    local: inference/interfaces/TextGenerationReturn
+  - title: QuestionAnsweringOutput
+    local: inference/interfaces/QuestionAnsweringOutput
+  - title: SummarizationOutput
+    local: inference/interfaces/SummarizationOutput
+  - title: TableQuestionAnsweringOutput
+    local: inference/interfaces/TableQuestionAnsweringOutput
+  - title: TextGenerationOutput
+    local: inference/interfaces/TextGenerationOutput
   - title: TextGenerationStreamBestOfSequence
     local: inference/interfaces/TextGenerationStreamBestOfSequence
   - title: TextGenerationStreamDetails
     local: inference/interfaces/TextGenerationStreamDetails
+  - title: TextGenerationStreamOutput
+    local: inference/interfaces/TextGenerationStreamOutput
   - title: TextGenerationStreamPrefillToken
     local: inference/interfaces/TextGenerationStreamPrefillToken
-  - title: TextGenerationStreamReturn
-    local: inference/interfaces/TextGenerationStreamReturn
   - title: TextGenerationStreamToken
     local: inference/interfaces/TextGenerationStreamToken
-  - title: TokenClassificationReturnValue
-    local: inference/interfaces/TokenClassificationReturnValue
-  - title: TranslationReturn
-    local: inference/interfaces/TranslationReturn
-  - title: ZeroShotClassificationReturnValue
-    local: inference/interfaces/ZeroShotClassificationReturnValue
+  - title: TokenClassificationOutputValue
+    local: inference/interfaces/TokenClassificationOutputValue
+  - title: TranslationOutput
+    local: inference/interfaces/TranslationOutput
+  - title: ZeroShotClassificationOutputValue
+    local: inference/interfaces/ZeroShotClassificationOutputValue
````
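
Taken together, the toctree changes above track two renames in v2.0.0: the `Args` interface becomes `BaseArgs`, and the `*Return*` result types become `*Output*` types, alongside the new `HfInferenceEndpoint` class. A minimal sketch of what the renamed types look like in consumer code (assuming the interfaces are re-exported from the package root; the type annotations are for illustration):

```ts
import { HfInference } from "@huggingface/inference";
// Assumption: the renamed interfaces are importable from the package root.
import type { TranslationOutput, TextGenerationOutput } from "@huggingface/inference";

const hf = new HfInference("hf_...");

// v1 typed this result as TranslationReturn; in v2 it is TranslationOutput.
const translation: TranslationOutput = await hf.translation({
  model: "t5-base",
  inputs: "My name is Wolfgang and I live in Berlin",
});

// Same rename for text generation: TextGenerationReturn -> TextGenerationOutput.
const generation: TextGenerationOutput = await hf.textGeneration({
  model: "gpt2",
  inputs: "The answer to the universe is",
});
```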

docs/index.md (+57 −27)

````diff
@@ -9,12 +9,28 @@
 <br/>
 </p>
 
+```ts
+await inference.translation({
+  model: 't5-base',
+  inputs: 'My name is Wolfgang and I live in Berlin'
+})
+
+await inference.textToImage({
+  model: 'stabilityai/stable-diffusion-2',
+  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
+  parameters: {
+    negative_prompt: 'blurry',
+  }
+})
+```
+
 # Hugging Face JS libraries
 
 This is a collection of JS libraries to interact with the Hugging Face API, with TS types included.
 
+- [@huggingface/inference](inference/README): Use the Inference API to make calls to 100,000+ Machine Learning models, or your own [inference endpoints](https://hf.co/docs/inference-endpoints/)!
 - [@huggingface/hub](hub/README): Interact with huggingface.co to create or delete repos and commit / download files
-- [@huggingface/inference](inference/README): Use the Inference API to make calls to 100,000+ Machine Learning models!
+
 
 With more to come, like `@huggingface/endpoints` to manage your HF Endpoints!
 
@@ -29,15 +45,15 @@ The libraries are still very young, please help us by opening issues!
 To install via NPM, you can download the libraries as needed:
 
 ```bash
-npm install @huggingface/hub
 npm install @huggingface/inference
+npm install @huggingface/hub
 ```
 
 Then import the libraries in your code:
 
 ```ts
-import { createRepo, commit, deleteRepo, listFiles } from "@huggingface/hub";
 import { HfInference } from "@huggingface/inference";
+import { createRepo, commit, deleteRepo, listFiles } from "@huggingface/hub";
 import type { RepoId, Credentials } from "@huggingface/hub";
 ```
 
@@ -48,18 +64,52 @@ You can run our packages with vanilla JS, without any bundler, by using a CDN or
 ```html
 
 <script type="module">
-import { HfInference } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@1.8.0/+esm';
+import { HfInference } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@2.0.0/+esm';
 import { createRepo, commit, deleteRepo, listFiles } from "https://cdn.jsdelivr.net/npm/@huggingface/[email protected]/+esm";
 </script>
 ```
 
-## Usage example
+## Usage examples
+
+Get your HF access token in your [account settings](https://huggingface.co/settings/tokens).
+
+### @huggingface/inference examples
 
 ```ts
-import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub";
 import { HfInference } from "@huggingface/inference";
 
-// use an access token from your free account
+const HF_ACCESS_TOKEN = "hf_...";
+
+const inference = new HfInference(HF_ACCESS_TOKEN);
+
+await inference.translation({
+  model: 't5-base',
+  inputs: 'My name is Wolfgang and I live in Berlin'
+})
+
+await inference.textToImage({
+  model: 'stabilityai/stable-diffusion-2',
+  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
+  parameters: {
+    negative_prompt: 'blurry',
+  }
+})
+
+await inference.imageToText({
+  data: await (await fetch('https://picsum.photos/300/300')).blob(),
+  model: 'nlpconnect/vit-gpt2-image-captioning',
+})
+
+// Using your own inference endpoint: https://hf.co/docs/inference-endpoints/
+const gpt2 = inference.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
+const { generated_text } = await gpt2.textGeneration({inputs: 'The answer to the universe is'});
+```
+
+### @huggingface/hub examples
+
+```ts
+import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub";
+
 const HF_ACCESS_TOKEN = "hf_...";
 
 await createRepo({
@@ -82,26 +132,6 @@ await deleteFiles({
   credentials: {accessToken: HF_ACCESS_TOKEN},
   paths: ["README.md", ".gitattributes"]
 });
-
-const inference = new HfInference(HF_ACCESS_TOKEN);
-
-await inference.translation({
-  model: 't5-base',
-  inputs: 'My name is Wolfgang and I live in Berlin'
-})
-
-await inference.textToImage({
-  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
-  model: 'stabilityai/stable-diffusion-2',
-  parameters: {
-    negative_prompt: 'blurry',
-  }
-})
-
-await inference.imageToText({
-  data: await (await fetch('https://picsum.photos/300/300')).blob(),
-  model: 'nlpconnect/vit-gpt2-image-captioning',
-})
 ```
 
 There are more features of course, check each library's README!
````
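
Since the two libraries now get separate example sections, it may help to see them composed. A minimal sketch, assuming `textToImage` resolves to a `Blob` in v2 and that `uploadFile` accepts a `{ path, content }` file object as in the hub examples above (the repo name is hypothetical):

```ts
import { HfInference } from "@huggingface/inference";
import { uploadFile } from "@huggingface/hub";

const HF_ACCESS_TOKEN = "hf_...";
const inference = new HfInference(HF_ACCESS_TOKEN);

// Generate an image, then commit it to a repo.
// Assumption: textToImage resolves to a Blob in v2.
const image = await inference.textToImage({
  model: "stabilityai/stable-diffusion-2",
  inputs: "award winning high resolution photo of a giant tortoise",
});

await uploadFile({
  repo: { type: "model", name: "my-user/my-repo" }, // hypothetical repo
  credentials: { accessToken: HF_ACCESS_TOKEN },
  file: { path: "tortoise.png", content: image },
});
```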

docs/inference/README.md (+56 −7)

````diff
@@ -1,6 +1,8 @@
 # 🤗 Hugging Face Inference API
 
-A Typescript powered wrapper for the Hugging Face Inference API. Learn more about the Inference API at [Hugging Face](https://huggingface.co/docs/api-inference/index).
+A Typescript powered wrapper for the Hugging Face Inference API. Learn more about the Inference API at [Hugging Face](https://huggingface.co/docs/api-inference/index). It also works with [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index).
+
+You can also try out a live [interactive notebook](https://observablehq.com/@huggingface/hello-huggingface-js-inference) or see some demos on [hf.co/huggingfacejs](https://huggingface.co/huggingfacejs).
 
 ## Install
 
@@ -14,16 +16,16 @@ pnpm add @huggingface/inference
 
 ## Usage
 
-**Important note:** Using an API key is optional to get started, however you will be rate limited eventually. Join [Hugging Face](https://huggingface.co/join) and then visit [access tokens](https://huggingface.co/settings/tokens) to generate your API key for **free**.
+**Important note:** Using an access token is optional to get started, however you will be rate limited eventually. Join [Hugging Face](https://huggingface.co/join) and then visit [access tokens](https://huggingface.co/settings/tokens) to generate your access token for **free**.
 
-Your API key should be kept private. If you need to protect it in front-end applications, we suggest setting up a proxy server that stores the API key.
+Your access token should be kept private. If you need to protect it in front-end applications, we suggest setting up a proxy server that stores the access token.
 
 ### Basic examples
 
 ```typescript
 import { HfInference } from '@huggingface/inference'
 
-const hf = new HfInference('your api key')
+const hf = new HfInference('your access token')
 
 // Natural Language
 
@@ -41,15 +43,15 @@ await hf.summarization({
   }
 })
 
-await hf.questionAnswer({
+await hf.questionAnswering({
   model: 'deepset/roberta-base-squad2',
   inputs: {
     question: 'What is the capital of France?',
     context: 'The capital of France is Paris.'
   }
 })
 
-await hf.tableQuestionAnswer({
+await hf.tableQuestionAnswering({
   model: 'google/tapas-base-finetuned-wtq',
   inputs: {
     query: 'How many stars does the transformers repository have?',
@@ -107,7 +109,7 @@ await hf.conversational({
   }
 })
 
-await hf.featureExtraction({
+await hf.sentenceSimilarity({
   model: 'sentence-transformers/paraphrase-xlm-r-multilingual-v1',
   inputs: {
     source_sentence: 'That is a happy person',
@@ -119,6 +121,11 @@ await hf.featureExtraction({
   }
 })
 
+await hf.featureExtraction({
+  model: "sentence-transformers/distilbert-base-nli-mean-tokens",
+  inputs: "That is a happy person",
+});
+
 // Audio
 
 await hf.automaticSpeechRecognition({
@@ -160,6 +167,30 @@ await hf.imageToText({
   data: readFileSync('test/cats.png'),
   model: 'nlpconnect/vit-gpt2-image-captioning'
 })
+
+// Custom call, for models with custom parameters / outputs
+await hf.request({
+  model: 'my-custom-model',
+  inputs: 'hello world',
+  parameters: {
+    custom_param: 'some magic',
+  }
+})
+
+// Custom streaming call, for models with custom parameters / outputs
+for await (const output of hf.streamingRequest({
+  model: 'my-custom-model',
+  inputs: 'hello world',
+  parameters: {
+    custom_param: 'some magic',
+  }
+})) {
+  ...
+}
+
+// Using your own inference endpoint: https://hf.co/docs/inference-endpoints/
+const gpt2 = hf.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
+const { generated_text } = await gpt2.textGeneration({inputs: 'The answer to the universe is'});
 ```
 
 ## Supported Tasks
@@ -179,6 +210,7 @@ await hf.imageToText({
 - [x] Zero-shot classification
 - [x] Conversational
 - [x] Feature extraction
+- [x] Sentence Similarity
 
 ### Audio
 
@@ -193,6 +225,23 @@ await hf.imageToText({
 - [x] Text to image
 - [x] Image to text
 
+## Tree-shaking
+
+You can import the functions you need directly from the module, rather than using the `HfInference` class:
+
+```ts
+import {textGeneration} from "@huggingface/inference";
+
+await textGeneration({
+  accessToken: "hf_...",
+  model: "model_or_endpoint",
+  inputs: ...,
+  parameters: ...
+})
+```
+
+This will enable tree-shaking by your bundler.
+
 ## Running tests
 
 ```console
````
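
For reference, the tree-shaking snippet added above leaves `inputs` and `parameters` elided; here is a filled-in sketch (the model choice and parameter values are assumptions, not part of the commit):

```ts
import { textGeneration } from "@huggingface/inference";

// Importing the task function directly (instead of the HfInference class)
// lets bundlers drop the tasks you don't use.
const { generated_text } = await textGeneration({
  accessToken: "hf_...",
  model: "gpt2", // assumption: any text-generation model, or an endpoint URL
  inputs: "The answer to the universe is",
  parameters: { max_new_tokens: 20 }, // assumption: standard api-inference parameter
});
```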
