
fix(gen): fixed typos #3459


Merged
merged 1 commit into from
Jul 10, 2024
8 changes: 4 additions & 4 deletions ai-data/managed-inference/concepts.mdx
Original file line number Diff line number Diff line change
@@ -13,7 +13,7 @@ categories:
---
## Allowed IPs

Allowed IPs are single IPs or IP blocks which have the [required permissions to remotely access a deployment](/ai-data/managed-inference/how-to/manage-allowed-ips/). They allow you to define which host and networks can connect to your Managed Inference endpoints. You can add, edit, or delete allowed IPs. In the absence of allowed IPs, all IP addresses are allowed by default.
Allowed IPs are single IPs or IP blocks that have the [required permissions to remotely access a deployment](/ai-data/managed-inference/how-to/manage-allowed-ips/). They allow you to define which host and networks can connect to your Managed Inference endpoints. You can add, edit, or delete allowed IPs. In the absence of allowed IPs, all IP addresses are allowed by default.

Access control is handled directly at the network level by Load Balancers, making the filtering more efficient and universal and relieving the Managed Inference server from this task.
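The allowed-IPs rules above accept single addresses or IP blocks. As an illustration only (the actual filtering happens on Scaleway's Load Balancers, and the CIDR values below are documentation examples, not real rules), the matching logic can be sketched with Python's standard `ipaddress` module:

```python
import ipaddress

# Hypothetical allowed-IPs rule set: one /24 block and one single address.
allowed_blocks = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.7/32"),
]

def is_allowed(client_ip: str) -> bool:
    # An empty rule set means every address is allowed, mirroring the default.
    if not allowed_blocks:
        return True
    addr = ipaddress.ip_address(client_ip)
    return any(addr in block for block in allowed_blocks)

print(is_allowed("203.0.113.42"))  # True: inside the /24 block
print(is_allowed("192.0.2.1"))     # False: matches no allowed block
```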

@@ -27,7 +27,7 @@ A deployment makes a trained language model available for real-world application

## Embedding models

Embedding models are a representation-learning technique that converts textual data into numerical vectors. These vectors capture semantic information about the text, and are often used as input to downstream machine-learning models, or algorithms.
Embedding models are a representation-learning technique that converts textual data into numerical vectors. These vectors capture semantic information about the text and are often used as input to downstream machine-learning models, or algorithms.
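The "text in, vector out" idea can be shown with a deliberately tiny stand-in. Real embedding models produce dense neural vectors; the bag-of-words counter below is only a toy, but it demonstrates the property the paragraph describes — texts with closer meaning map to vectors with higher similarity:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a sparse word-count vector (not a real model).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity, the usual way embedding vectors are compared.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

v1 = embed("deploy the language model")
v2 = embed("deploy a language model today")
v3 = embed("reboot the mac mini")
print(cosine(v1, v2) > cosine(v1, v3))  # True: closer meaning, closer vectors
```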

## Endpoint

@@ -65,7 +65,7 @@ LLMs have applications in natural language processing, text generation, translat

In the context of LLMs, a prompt refers to the input provided to the model to generate a desired response.
It typically consists of a sentence, paragraph, or series of keywords or instructions that guide the model in producing text relevant to the given context or task.
The quality and specificity of the prompt greatly influences the generated output, as the model uses it to understand the user's intent and create responses accordingly.
The quality and specificity of the prompt greatly influence the generated output, as the model uses it to understand the user's intent and create responses accordingly.

## Quantization

@@ -74,4 +74,4 @@ LLMs provided for deployment are named with suffixes that denote their quantizat
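A rough sketch of what a suffix like `:int8` implies: weights are stored as 8-bit integers plus a scale factor, trading a bounded precision loss for a much smaller memory footprint. Production quantization schemes (per-channel scales, zero points, bf16, etc.) are more elaborate; the numbers below are illustrative only:

```python
# Symmetric int8 quantization of a handful of example weights.
weights = [0.82, -1.57, 0.03, 2.4, -0.9]

scale = max(abs(w) for w in weights) / 127        # map the largest magnitude to 127
quantized = [round(w / scale) for w in weights]   # integers in [-127, 127]
recovered = [q * scale for q in quantized]        # dequantized approximation

print(quantized)
# Rounding error is bounded by half a quantization step.
print(max(abs(w - r) for w, r in zip(weights, recovered)) <= scale / 2)
```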

## Retrieval Augmented Generation (RAG)

RAG is an architecture combining information retrieval elements with language generation to enhance the capabilities of LLMs. It involves retrieving relevant context or knowledge from external sources, and incorporating it into the generation process to produce more informative and contextually grounded outputs.
RAG is an architecture combining information retrieval elements with language generation to enhance the capabilities of LLMs. It involves retrieving relevant context or knowledge from external sources and incorporating it into the generation process to produce more informative and contextually grounded outputs.
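The retrieve-then-generate flow can be sketched in a few lines. Real RAG systems retrieve with vector search over embeddings; simple word overlap stands in for retrieval here, and the documents and prompt format are invented for illustration:

```python
# External knowledge the bare LLM would not have.
documents = [
    "Allowed IPs restrict which hosts can reach a deployment.",
    "Quantization stores weights in lower-precision formats such as int8.",
]

def retrieve(question: str) -> str:
    # Toy retriever: pick the document sharing the most words with the question.
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    # Fold the retrieved context into the prompt sent to the LLM.
    return f"Context: {retrieve(question)}\nQuestion: {question}\nAnswer:"

prompt = build_prompt("What does quantization do to weights?")
print(prompt.splitlines()[0])  # the retrieved context line
```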
2 changes: 1 addition & 1 deletion ai-data/managed-inference/how-to/create-deployment.mdx
@@ -28,4 +28,4 @@ dates:
- Enabling both private and public networks will result in two distinct endpoints (public and private) for your deployment.
- Deployments must have at least one endpoint, either public or private.
</Message>
6. Click **Create deployment** to launch the deployment process. Once the deployment is ready, it will be listed among your deployments.
6. Click **Create deployment** to launch the deployment process. Once the deployment is ready, it will be listed among your deployments.
2 changes: 1 addition & 1 deletion ai-data/managed-inference/how-to/delete-deployment.mdx
@@ -29,4 +29,4 @@ Once you have finished your inference tasks you can delete your deployment. This

<Message type="important">
Deleting a deployment is a permanent action and will erase all its associated data.
</Message>
</Message>
@@ -140,7 +140,7 @@ Using a Private Network for communications between your Instances hosting your a
3. Go to the **Overview** tab and locate the **Endpoints** section.
4. Click **Detach Private Network**. A pop-up displays.
<Lightbox src="scaleway-inference-pn-detach.webp" alt="A screenshot of the Managed Interface product overview tab in the Scaleway console, highlighting the Private Network detach section" size="medium" />
5. Click **Detach Private Network** to confirm removal of the private endpoint for your deployment.
5. Click **Detach Private Network** to confirm the removal of the private endpoint for your deployment.
<Message type="tip">
Alternatively, you can access the **Security** tab and detach a network from the **Private Network** section.
</Message>
6 changes: 3 additions & 3 deletions ai-data/managed-inference/quickstart.mdx
@@ -67,10 +67,10 @@ Managed Inference deployments use dynamic tokens generated with Scaleway's Ident

1. Click **Managed Inference** in the **AI & Data** section of the side menu. The Managed Inference dashboard displays.
2. Click <Icon name="more" /> next to the deployment you want to edit. The deployment dashboard displays.
3. Click the **Inference** tab. Code examples in various environments display. Copy and paste them in your code editor or terminal.
3. Click the **Inference** tab. Code examples in various environments display. Copy and paste them into your code editor or terminal.

<Message type="note">
Prompt structure may vary from one model to another. Refer to the specific instructions for use in our [dedicated documentation](/ai-data/managed-inference/reference-content/)
Prompt structure may vary from one model to another. Refer to the specific instructions for use in our [dedicated documentation](/ai-data/managed-inference/reference-content/).
</Message>

## How to delete a deployment
@@ -83,4 +83,4 @@ Managed Inference deployments use dynamic tokens generated with Scaleway's Ident

<Message type="important">
Deleting a deployment is a permanent action, and will erase all its associated configuration and resources.
</Message>
</Message>
@@ -12,8 +12,6 @@ categories:
- ai-data
---



## Model overview

| Attribute | Details |
@@ -37,11 +35,11 @@ meta/llama-3-70b-instruct:int8

Meta’s Llama 3 is an iteration of the open-access Llama family.
Llama 3 was designed to match the best proprietary models, enhanced by community feedback for greater utility and responsibly spearheading the deployment of LLMs.
With a commitment to open source principles, this release marks the beginning of a multilingual, multimodal future for Llama 3, pushing the boundaries in reasoning and coding capabilities.
With a commitment to open-source principles, this release marks the beginning of a multilingual, multimodal future for Llama 3, pushing the boundaries in reasoning and coding capabilities.

## Why is it useful?

We are dedicated to supporting Meta's commitment to open(weight) AI and their mission, through integration in the Scaleway ecosystem.
We are dedicated to supporting Meta's commitment to open(weight) AI and its mission, through integration into the Scaleway ecosystem.
Llama 3 marks a significant advancement over Llama 2 and other available models due to several enhancements:

Llama-3-70b-instruct offers seamless integration with chat applications and customer service platforms, facilitating smooth communication between businesses and their customers.
@@ -52,7 +50,6 @@ In particular, this model:
- Uses a more extensive token vocabulary, featuring 128,000 tokens, allowing for more efficient language encoding.
- Demonstrates a reduction in false "refusals" by less than one-third compared to Llama 2.


## How to use it

### Sending Managed Inference requests
@@ -12,8 +12,6 @@ categories:
- ai-data
---



## Model overview

| Attribute | Details |
@@ -37,8 +35,7 @@ meta/llama-3-8b-instruct:bf16

Meta’s Llama 3 is an iteration of the open-access Llama family.
Llama 3 was designed to match the best proprietary models, enhanced by community feedback for greater utility and responsibly spearheading the deployment of LLMs.
With a commitment to open source principles, this release marks the beginning of a multilingual, multimodal future for Llama 3, pushing the boundaries in reasoning and coding capabilities.

With a commitment to open-source principles, this release marks the beginning of a multilingual, multimodal future for Llama 3, pushing the boundaries in reasoning and coding capabilities.

## Why is it useful?

@@ -34,11 +34,11 @@ mistral-7b-instruct-v0.3:bf16
## Model introduction

The first dense model released by Mistral AI, perfect for experimentation, customization, and quick iteration. At the time of the release, it matched the capabilities of models up to 30B parameters.
This model is open weight and distributed under the Apache 2.0 license.
This model is open-weight and distributed under the Apache 2.0 license.

## Why is it useful?

Mistral-7B-Instruct-v0.3 is the smallest and latest Large Language Model (LLM) from Mistral AI, providing a 32k context window and support of function calling.
Mistral-7B-Instruct-v0.3 is the smallest and latest Large Language Model (LLM) from Mistral AI, providing a 32k context window and support for function calling.
It does not have any moderation mechanisms to finely respect guardrails. Use with caution for deployments in environments requiring moderated outputs.

## How to use it
@@ -73,4 +73,4 @@ Process the output data according to your application's needs. The response will

<Message type="note">
Despite efforts for accuracy, the possibility of generated text containing inaccuracies or [hallucinations](/ai-data/managed-inference/concepts/#hallucinations) exists. Always verify the content generated independently.
</Message>
</Message>
@@ -43,7 +43,6 @@ Trained on vast instructional datasets, it provides clear and concise instructio
Mixtral-8x7b-instruct-v0.1, trained on the [Nabuchodonosor supercomputer](https://www.scaleway.com/en/ai-supercomputers/), delivers high-quality instruction generation with exceptional performance.
This model excels in code generation and understanding multiple languages, making it an ideal choice for developing virtual assistants or educational platforms that require reliability and excellence.


## How to use it

### Sending Inference requests
@@ -76,4 +75,4 @@ Process the output data according to your application's needs. The response will

<Message type="note">
Despite efforts for accuracy, the possibility of generated text containing inaccuracies or [hallucinations](/ai-data/managed-inference/concepts/#hallucinations) exists. Always verify the content generated independently.
</Message>
</Message>
@@ -12,7 +12,7 @@ categories:
- ai-data
---

You can use any of the OpenAI [official libraries](https://platform.openai.com/docs/libraries/), for example the [OpenAI Python client library](https://github.com/openai/openai-python) to interact with your Scaleway Managed Inference deployment.
You can use any of the OpenAI [official libraries](https://platform.openai.com/docs/libraries/), for example, the [OpenAI Python client library](https://github.com/openai/openai-python) to interact with your Scaleway Managed Inference deployment.
This feature is especially beneficial for those looking to seamlessly transition applications already utilizing OpenAI.

## Chat Completions API
@@ -55,7 +55,7 @@ print(chat_completion.choices[0].message.content)
```

<Message type="note">
More OpenAI-like APIs (e.g audio) will be released step by step once related models are supported.
More OpenAI-like APIs (e.g., audio) will be released step by step once related models are supported.
</Message>

### Supported parameters
2 changes: 1 addition & 1 deletion bare-metal/apple-silicon/concepts.mdx
@@ -18,7 +18,7 @@ Apple silicon is Apple's own design of processor. It is the basis of Mac compute

Scaleway Apple silicon as-a-Service uses [Apple Mac mini](#mac-mini) hardware. These devices rely on the power of Apple's [silicon](#apple-silicon) technology, ensuring exceptional performance and energy efficiency.

Apple silicon as-a-Service is tailored for developing, building, testing, and signing applications for Apple devices such as iPhones, iPads, Mac computers and more. The Mac mini boasts a sophisticated neural engine that significantly enhances machine learning capabilities.
Apple silicon as-a-Service is tailored for developing, building, testing, and signing applications for Apple devices such as iPhones, iPads, Mac computers, and more. The Mac mini boasts a sophisticated neural engine that significantly enhances machine learning capabilities.

## Mac mini

13 changes: 5 additions & 8 deletions bare-metal/apple-silicon/how-to/connect-to-mac-mini.mdx
@@ -24,7 +24,7 @@ This page shows how to connect to your [Mac mini](/bare-metal/apple-silicon/conc

## How to connect via VNC

VNC is a remote desktop-sharing protocol. It allows you to visualize the graphical screen output of a remote computer and transfer local keyboard and mouse events to the remote computer using a network connection. The protocol is platform-independent, which means that various clients exist for Linux, Windows, and macOS based computers. The VNC server used on the Mac mini is directly integrated into the macOS system without any restriction from our side.
VNC is a remote desktop-sharing protocol. It allows you to visualize the graphical screen output of a remote computer and transfer local keyboard and mouse events to the remote computer using a network connection. The protocol is platform-independent, which means that various clients exist for Linux, Windows, and macOS-based computers. The VNC server used on the Mac mini is directly integrated into the macOS system without any restriction from our side.

<Message type="tip">
If your local machine is running Windows or Linux, you might not have a VNC client installed, or your VNC client may not be compatible with an Apple machine.
@@ -80,14 +80,13 @@ If you are using Linux and experience problems using the [VNC button](#how-to-co

4. Click **Save and connect** to save these settings for the future, and launch a connection to your Mac mini.

You can now log in the graphical environment of macOS using the default user m1 and the VNC password.
You can now log in to the graphical environment of macOS using the default user m1 and the VNC password.
<Message type="note">
macOS may ask you for your password once logged into the VNC session. Change the keyboard layout of macOS to your computer's local keyboard layout before entering the password. Click on U.S. keyboard in the top right corner to display a list of all available keyboard layouts.
macOS may ask you for your password once logged into the VNC session. Change the keyboard layout of macOS to your computer's local keyboard layout before entering the password. Click the U.S. keyboard in the top right corner to display a list of all available keyboard layouts.

<Lightbox src="scaleway-m1-m1-vnc-lang.webp" alt="" />
</Message>


## How to connect via SSH

You can also connect directly to the terminal of your Mac mini using the SSH protocol and your [SSH key](/console/account/concepts/#ssh-key).
@@ -104,7 +103,5 @@ Check out our documentation on [how to connect to an Instance](/compute/instance
</Message>

<Message type="note">
Mac mini, macOS are trademarks of Apple Inc., registered in the U.S. and other countries and regions. IOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used by Apple under license. Scaleway is not affiliated with Apple Inc.
</Message>


Mac mini and macOS are trademarks of Apple Inc., registered in the U.S. and other countries and regions. IOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used by Apple under license. Scaleway is not affiliated with Apple Inc.
</Message>
8 changes: 3 additions & 5 deletions bare-metal/apple-silicon/how-to/create-mac-mini.mdx
@@ -32,15 +32,13 @@ This page shows how to create your first [Mac mini](/bare-metal/apple-silicon/co
</Message>
2. Click **Create Mac mini**. The Mac mini creation wizard displays.
3. Complete the following steps in the wizard:
- Choose an **Availability Zone**, which is the geographical region where your Mac mini will be deployed. The available Mac mini configurations depend on the Availbility Zone:
- Choose an **Availability Zone**, which is the geographical region where your Mac mini will be deployed. The available Mac mini configurations depend on the Availability Zone:
- Mac mini M2 pro and M2 are available in PARIS 1
- Mac mini M1 are available in PARIS 3
- Choose a macOS version. Note that if you choose a macOS other than the one installed by default, there will be a delay of about 1 hour before the Mac mini is made available.
- Enter a **Name** for your Mac mini, or leave the randomly-generated name in place.
- Verify the **Estimated cost** for your Mac mini based on your chosen specifications.
4. Click **Create Mac mini** to finish. The installation of your Apple silicon is launched, and you are informed when it is ready.
<Message type="note">
Mac mini, macOS are trademarks of Apple Inc., registered in the U.S. and other countries and regions. IOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used by Apple under license. Scaleway is not affiliated with Apple Inc.
</Message>


Mac mini and macOS are trademarks of Apple Inc., registered in the U.S. and other countries and regions. IOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used by Apple under license. Scaleway is not affiliated with Apple Inc.
</Message>
7 changes: 3 additions & 4 deletions bare-metal/apple-silicon/how-to/delete-mac-mini.mdx
@@ -27,12 +27,11 @@ This page shows how to delete your [Mac mini](/bare-metal/apple-silicon/concepts

1. Click **Apple silicon** in the **Bare Metal** section of the side menu. A list of your Mac minis displays.
<Lightbox src="scaleway-apple-silicon-dashbaord.webp" alt="" />
2. Click <Icon name="more" /> next to the Mac mini you want to delete, and select **Delete** from the drop-down menu. A pop-up asks you to confirm the action.
2. Click <Icon name="more" /> next to the Mac mini you want to delete and select **Delete** from the drop-down menu. A pop-up asks you to confirm the action.
3. Type **DELETE** and then click **Delete Mac mini** to confirm the deletion of your Mac mini.

You are returned to the list of your Mac minis, where the machine you deleted no longer appears.

<Message type="note">
Mac mini, macOS are trademarks of Apple Inc., registered in the U.S. and other countries and regions. IOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used by Apple under license. Scaleway is not affiliated with Apple Inc.
</Message>

Mac mini and macOS are trademarks of Apple Inc., registered in the U.S. and other countries and regions. IOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used by Apple under license. Scaleway is not affiliated with Apple Inc.
</Message>
6 changes: 2 additions & 4 deletions bare-metal/apple-silicon/how-to/reboot-mac-mini.mdx
@@ -31,7 +31,5 @@ This page shows how to reboot your [Mac mini](/bare-metal/apple-silicon/concepts
</Message>

<Message type="note">
Mac mini, macOS are trademarks of Apple Inc., registered in the U.S. and other countries and regions. IOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used by Apple under license. Scaleway is not affiliated with Apple Inc.
</Message>


Mac mini and macOS are trademarks of Apple Inc., registered in the U.S. and other countries and regions. IOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used by Apple under license. Scaleway is not affiliated with Apple Inc.
</Message>