docs: remove Lablup or Backend.AI text to conceal Backend.AI from end-users by customer request #73

Draft · wants to merge 6 commits into base: main
48 changes: 24 additions & 24 deletions docs/admin_menu/admin_menu.rst
@@ -5,7 +5,7 @@ Admin Menus
===========

Logging in with an admin account will reveal an extra Administration menu on the bottom left of the sidebar.
User information registered in Backend.AI is listed in the Users tab. Domain admin can see only the users who belong to the domain,
Registered user information is listed in the Users tab. Domain admin can see only the users who belong to the domain,
while superadmin can see all users' information. Only superadmin can create and deactivate a user.

User ID (email), Name (username), and Main Access Key can be filtered by typing text in the
@@ -112,7 +112,7 @@ Manage User's Keypairs
----------------------

Each user account usually has one or more keypairs. A keypair is used for API
authentication to the Backend.AI server, after user logs in. Login requires
authentication to the server after the user logs in. Login requires
authentication via user email and password, but every request the user sends to
the server is authenticated based on the keypair.
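
As a rough illustration of how keypair-based request authentication works, the sketch
below derives an HMAC digest from the secret key and attaches it to the request headers.
This is a conceptual example only: the signing string, header layout, and the
``sign_request`` helper are assumptions, not the server's documented API.

.. code-block:: python

   import hashlib
   import hmac
   from datetime import datetime, timezone

   def sign_request(method: str, path: str, access_key: str, secret_key: str) -> dict:
       """Build illustrative auth headers for an API request (hypothetical scheme)."""
       date = datetime.now(timezone.utc).isoformat()
       # The string to sign below is an assumption for illustration only.
       msg = f"{method}\n{path}\n{date}".encode()
       digest = hmac.new(secret_key.encode(), msg, hashlib.sha256).hexdigest()
       return {
           "Date": date,
           "Authorization": f"Signature credential={access_key}:{digest}",
       }

   headers = sign_request("GET", "/folders", "MY-ACCESS-KEY", "MY-SECRET-KEY")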

@@ -138,7 +138,7 @@ can also explicitly enter the access key and secret key by clicking the Advanced
panel.

The Rate Limit field is where you specify the maximum number of requests that
can be sent to the Backend.AI server in 15 minutes. For example, if set to 1000,
can be sent to the server in 15 minutes. For example, if set to 1000,
and the keypair sends more than 1000 API requests in 15 minutes, the server
throws an error and does not accept the request. It is recommended to use the
default value and increase it when the API request frequency goes up
@@ -165,7 +165,7 @@ according to the user's pattern.
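
To make the 15-minute limit concrete, the following is a minimal sketch of the kind of
fixed-window counter such a limit implies. It is not the server's actual implementation;
it only illustrates rejecting requests once a keypair exceeds its quota within the window.

.. code-block:: python

   import time

   WINDOW_SECONDS = 15 * 60   # the 15-minute window described above
   RATE_LIMIT = 1000          # the example limit of 1000 requests

   class FixedWindowLimiter:
       """Illustrative fixed-window rate limiter keyed by access key."""

       def __init__(self, limit: int = RATE_LIMIT, window: int = WINDOW_SECONDS) -> None:
           self.limit = limit
           self.window = window
           self.counters: dict[str, tuple[float, int]] = {}  # key -> (window start, count)

       def allow(self, access_key: str) -> bool:
           now = time.monotonic()
           start, count = self.counters.get(access_key, (now, 0))
           if now - start >= self.window:   # the window has expired: reset the counter
               start, count = now, 0
           if count >= self.limit:          # over the limit: reject this request
               return False
           self.counters[access_key] = (start, count + 1)
           return True
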
Share project storage folders with project members
--------------------------------------------------

Backend.AI provides storage folders for projects, in addition to user's own
Storage folders for projects are provided in addition to the user's own
storage folder. A project storage folder is a folder belonging to a specific
project, not a specific user, and can be accessed by all users in that project.

@@ -281,7 +281,7 @@ Manage Resource Policy
Keypair Resource Policy
~~~~~~~~~~~~~~~~~~~~~~~

In Backend.AI, administrators have the ability to set limits on the total resources available for each keypair, user, and project.
Administrators can set limits on the total resources available for each keypair, user, and project.
Resource policies enable you to define the maximum allowed resources and other compute session-related settings.
Additionally, it is possible to create multiple resource policies for different needs,
such as user or research requirements, and apply them on an individual basis.
@@ -348,9 +348,9 @@ About details of each option in resource policy dialog, see the description belo
various and set by the administrators. (max value: 15552000 (approx. 180 days))

* Folders
* Allowed hosts: Backend.AI supports many NFS mountpoint. This field limits
the accessibility to them. Even if a NFS named "data-1" is mounted on
Backend.AI, users cannot access it unless it is allowed by resource policy.
* Allowed hosts: Many NFS mountpoints are supported. This field limits
which of them are accessible. Even if an NFS volume named "data-1" is mounted,
users cannot access it unless it is allowed by the resource policy (see the
sketch after this list).
* (Deprecated since 23.09.4) Max. #: the maximum number of storage folders that
can be created/invited. (max value: 100).
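
The sketch below shows how the options above could be represented as a single keypair
resource policy record. The field names and resource slot names are illustrative
assumptions rather than the exact server schema; note that the maximum idle timeout of
15552000 seconds corresponds to 15552000 / 86400 = 180 days.

.. code-block:: python

   # Illustrative keypair resource policy; the key names are assumptions for this sketch.
   keypair_resource_policy = {
       "name": "default",
       "total_resource_slots": {"cpu": 16, "mem": "64g", "gpu": 2},
       "idle_timeout": 3600,                  # seconds; capped at 15552000 (~180 days)
       "allowed_vfolder_hosts": ["data-1"],   # NFS mountpoints users may access
       "max_vfolder_count": 10,               # deprecated since 23.09.4
   }

   assert 15552000 // 86400 == 180  # the maximum idle timeout expressed in days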

@@ -395,7 +395,7 @@ table. This will bring up a dialog where you can select the columns you want to
User Resource Policy
~~~~~~~~~~~~~~~~~~~~

Starting from version 24.03, Backend.AI supports user resource policy management. While each
Starting from version 24.03, user resource policy management is available. While each
user can have multiple keypairs, a user can only have one user resource policy. In the user
resource policy page, users can set restrictions on various settings related to folders such as
Max Folder Count and Max Folder Size, as well as individual resource limits like Max Session
@@ -445,7 +445,7 @@ clicking the 'Setting (Gear)' button at the bottom right of the table.
Project Resource Policy
~~~~~~~~~~~~~~~~~~~~~~~

Starting from version 24.03, Backend.AI supports project resource policy management. Project
Starting from version 24.03, project resource policy management is available. Project
resource policies manage storage space (quota) and folder-related limitations for projects.

When clicking the Project tab of the Resource Policy page, you can see the list of project
@@ -498,7 +498,7 @@ Manage Images

Admins can manage images, which are used in creating a compute session, in the
Images tab of the Environments page. In the tab, meta information of all images
currently in the Backend.AI server is displayed. You can check information such
currently in the server is displayed. You can check information such
as registry, namespace, image name, the image's base OS, digest, and minimum
resources required for each image. For images downloaded to one or more agent
nodes, there will be an ``installed`` tag in each Status column.
@@ -561,9 +561,9 @@ registered by default, and it is a registry provided by Harbor.
In the offline environment, the default registry is not accessible, so
click the trash icon on the right to delete it.

Click the refresh icon in Controls to update image metadata for Backend.AI from
Click the refresh icon in Controls to update image metadata for the platform from
the connected registry. Image information which does not have labels for
Backend.AI among the images stored in the registry is not updated.
the platform among the images stored in the registry is not updated.

.. image:: image_registries_page.png
:alt: Registries page
@@ -615,7 +615,7 @@ of currently defined resource presets.
You can set resources such as CPU, RAM, fGPU, etc. to be provided by the
resource preset by clicking the 'Setting (Gear)' (cogwheel) in the Controls panel.
In the example below, the GPU field is disabled since the GPU provision mode of
the Backend.AI server is set to "fractional". After setting the resources with
the server is set to "fractional". After setting the resources with
the desired values, save it and check if the corresponding preset is displayed
when creating a compute session. If available resources are less
than the amount of resources defined in the preset, the corresponding preset
@@ -641,7 +641,7 @@ Manage agent nodes
------------------

Superadmins can view the list of agent worker nodes, currently connected to
Backend.AI, by visiting the Resources page. You can check agent node's IP,
the platform, by visiting the Resources page. You can check agent node's IP,
connecting time, actual resources currently in use, etc. The Web-UI does
not provide the function to manipulate agent nodes.

@@ -799,7 +799,7 @@ In Quota setting page, there are two panels that represent the corresponding ite
Set User Quota
~~~~~~~~~~~~~~~~

In Backend.AI, there are two types of vfolders created by user and admin(project). In this section,
There are two types of vfolders: those created by a user and those created by an admin (project). In this section,
we would like to show how to check the current quota setting per user and how to configure it.
First, make sure the active tab of the quota settings panel is ``For User``. Then, select the user whose quota you want to
check and edit. You can see the quota id that corresponds to the user's id and the configuration already set
@@ -870,7 +870,7 @@ Please note that a file name can have up to 255 characters.
System settings
---------------

In the Configuration page, you can see main settings of Backend.AI server.
In the Configuration page, you can see the main settings of the server.
Currently, it provides several controls to list and change these settings.


@@ -893,7 +893,7 @@ You can also change settings for scaling and plugins.
:alt: System setting about scaling and plugins

When a user launches a multi-node cluster session, which is introduced at
version 20.09, Backend.AI will dynamically create an overlay network to support
version 20.09, the platform will dynamically create an overlay network to support
private inter-node communication. Admins can set the value of the Maximum
Transmission Unit (MTU) for the overlay network, if it is certain that the value
will enhance the network speed.
@@ -904,8 +904,8 @@ will enhance the network speed.
:alt: Overlay network setting dialog

.. seealso::
For more information about Backend.AI Cluster session, please refer to
:ref:`Backend.AI Cluster Compute Session<backendai-cluster-compute-session>` section.
For more information about cluster sessions, please refer to
the :ref:`Cluster Compute Session<backendai-cluster-compute-session>` section.

You can edit the configuration per job scheduler by clicking the Scheduler's config button.
The values in the scheduler setting are the defaults to use when there is no scheduler
@@ -916,7 +916,7 @@ Currently supported scheduling methods include ``FIFO``, ``LIFO``, and ``DRF``.
Each method of scheduling is exactly the same as the :ref:`scheduling methods<scheduling-methods>` above.
Scheduler options include session creation retries. Session creation retries refers to the number
of retries to create a session if it fails. If the session cannot be created within the trials,
the request will be ignored and Backend.AI will process the next request. Currently, changes are
the request will be ignored and the system will process the next request. Currently, changes are
only possible when the scheduler is FIFO.
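
As a rough sketch of the retry behavior described above (not the actual scheduler code),
the loop below retries session creation up to the configured number of times and then
skips the request so that the next one can be processed. The ``create_session`` callable
and its failure mode are assumptions used only for illustration.

.. code-block:: python

   from typing import Callable

   def schedule_fifo(requests: list[dict],
                     create_session: Callable[[dict], bool],
                     num_retries: int = 3) -> None:
       """Process queued session requests in FIFO order with per-request retries."""
       for request in requests:               # FIFO: handle requests in arrival order
           for _attempt in range(num_retries):
               if create_session(request):    # hypothetical helper returning success/failure
                   break
           else:
               # All retries failed: ignore this request and move on to the next one.
               print(f"giving up on {request.get('name')} after {num_retries} attempts")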

.. image:: system_setting_dialog_scheduler_settings.png
@@ -939,12 +939,12 @@ Go to the Maintenance page and you will see some buttons to manage the server.

- RECALCULATE USAGE: Occasionally, due to unstable network connections or
container management problems of the Docker daemon, there may be a case where the
resource occupied by Backend.AI does not match the resource actually used by
resource occupied by the platform does not match the resource actually used by
the container. In this case, click the RECALCULATE USAGE button to manually
correct the resource occupancy.
- RESCAN IMAGES: Update image meta information from all registered Docker
registries. It can be used when a new image is pushed to a
Backend.AI-connected docker registry.
Docker registry connected to the server.

.. image:: maintenance_page.png
:width: 500
@@ -961,7 +961,7 @@ Detailed Information

In the Information page, you can see detailed information and the status of each feature.
To see the Manager version and API version, check the Core panel. To see whether each component
for Backend.AI is compatible or not, check the Component panel.
is compatible or not, check the Component panel.

.. note::

Binary file modified docs/agent_summary/agent_summary.png
18 changes: 9 additions & 9 deletions docs/cluster_session/cluster_session.rst
@@ -1,17 +1,17 @@
==================================
Backend.AI Cluster Compute Session
Cluster Compute Session
==================================

.. _backendai-cluster-compute-session:

.. note::
Cluster compute session feature is supported from Backend.AI server 20.09 or
The cluster compute session feature is supported from server version 20.09 or
higher.

Overview of Backend.AI cluster compute session
Overview of cluster compute session
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Backend.AI supports cluster compute session to support distributed computing /
Cluster compute sessions are supported for distributed computing /
training tasks. A cluster session consists of multiple containers, each of which
is created across multiple Agent nodes. Containers under a cluster session are
automatically connected to each other through a dynamically-created private
@@ -20,7 +20,7 @@ given, making it simple to execute networking tasks such as SSH connection. All
the necessary secret keys and various settings for SSH connection between
containers are automatically generated.

For detailed about Backend.AI cluster session, refer to the following.
For details about cluster sessions, refer to the following.

.. image::
overview_cluster_session.png
@@ -84,8 +84,8 @@ container information.
``BACKENDAI_KERNEL_ID`` is the same as ``BACKENDAI_SESSION_ID``.
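
Inside a container, these values can be read from the environment. Only
``BACKENDAI_KERNEL_ID`` and ``BACKENDAI_SESSION_ID`` are named in this guide; the loop
below simply lists whatever ``BACKENDAI_*`` variables are actually present instead of
assuming other names.

.. code-block:: python

   import os

   # Variables mentioned above; per the text, they hold the same value.
   kernel_id = os.environ.get("BACKENDAI_KERNEL_ID")
   session_id = os.environ.get("BACKENDAI_SESSION_ID")
   print(kernel_id == session_id)

   # Enumerate every BACKENDAI_* variable exposed to this container.
   for key, value in sorted(os.environ.items()):
       if key.startswith("BACKENDAI_"):
           print(f"{key}={value}")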


Use of Backend.AI cluster compute session
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use of cluster compute session
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this section, we will take a look at how to actually create and use cluster
compute sessions through the user GUI.
@@ -145,12 +145,12 @@ is displayed.
:width: 500
:align: center

In this way, Backend.AI makes it easy to create cluster computing sessions. In
In this way, the platform makes it easy to create cluster compute sessions. In
order to execute distributed learning and calculation through a cluster
compute session, a distributed learning module provided by ML libraries such
as TensorFlow/PyTorch, or additional supporting software such as Horovod, NNI,
MLFlow, etc. is required, and code that can utilize the software must
be written carefully. Backend.AI provides a kernel image containing the software
be written carefully. The platform provides a kernel image containing the software
required for distributed learning, so you can use that image to build
distributed learning workloads more easily.

15 changes: 5 additions & 10 deletions docs/index.rst
@@ -3,13 +3,13 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.

Backend.AI Web-UI User Guide
KT Web-UI User Guide
=================================

User's guide for the Backend.AI Web-UI.
User's guide for the KT Web-UI.

Backend.AI Web-UI is a web or app that provides easy-to-use GUI interface
to work with the Backend.AI server.
KT Web-UI is a web or desktop app that provides an easy-to-use GUI interface
to work with the server.

The latest versions of this document can be found at the sites below:

@@ -21,9 +21,6 @@ The latest versions of this document can be found from sites below:
:caption: Table of Contents

quickstart
disclaimer
overview/overview
installation/installation
login/login
summary/summary
sessions_all/sessions_all
@@ -40,6 +37,4 @@ The latest versions of this document can be found from sites below:
cluster_session/cluster_session
admin_menu/admin_menu
trouble_shooting/trouble_shooting
appendix/appendix
license_agreement/license_agreement
references/references
appendix/appendix
Binary file modified docs/login/forgot_password_panel.png
4 changes: 2 additions & 2 deletions docs/login/login.rst
@@ -119,5 +119,5 @@ right side of the header.
:width: 800


There is a question mark icon at the lower right side of the header.
Click this icon to access the web version of this guide document.
.. There is a question mark icon at the lower right side of the header.
.. Click this icon to access the web version of this guide document.
Binary file modified docs/login/login_dialog.png
Binary file modified docs/login/signout_button.png
Binary file modified docs/login/signup_dialog.PNG
Binary file modified docs/login/theme_mode.png
Binary file added docs/login/topbar_usermenu.png
Binary file modified docs/login/ui_menu.png
10 changes: 5 additions & 5 deletions docs/model_serving/model_serving.rst
@@ -10,7 +10,7 @@ Model Service
.. note::
This feature is supported in the Enterprise version only.

Backend.AI not only facilitates the construction of development environments
The platform not only facilitates the construction of development environments
and resource management during the model training phase, but also supports
the model service feature from version 23.09 onwards. This feature allows
end-users (such as AI-based mobile apps and web service backends) to make
@@ -83,7 +83,7 @@ Creating a Model Definition File
regard it as ``model-definition.yml`` or ``model-definition.yaml``.

The model definition file contains the configuration information
required by the Backend.AI system to automatically start, initialize,
required by the system to automatically start, initialize,
and scale the inference session. It is stored in the model type folder
independently from the container image that contains the inference
service engine. This allows the engine to serve different models based on
@@ -139,7 +139,7 @@ The model definition file follows the following format:
- ``max_retries``: Specify the number of retries to be made if there is no response after a request is sent to the service during model serving.
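
Below is a minimal sketch of generating a ``model-definition.yml`` programmatically. The
structure and key names (``models``, ``service``, ``start_command``, ``port``, and the
health-check block holding ``max_retries``) are illustrative assumptions; consult the
inference engine image you use for the exact keys it expects.

.. code-block:: python

   import yaml  # PyYAML

   # Illustrative model definition; every key other than max_retries is an assumption.
   model_definition = {
       "models": [
           {
               "name": "demo-model",
               "model_path": "/models/demo",
               "service": {
                   "start_command": ["python", "serve.py"],
                   "port": 8000,
                   "health_check": {
                       "path": "/health",
                       "max_retries": 10,  # retries when the service does not respond
                   },
               },
           },
       ],
   }

   with open("model-definition.yml", "w") as f:
       yaml.safe_dump(model_definition, f, sort_keys=False)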


**Description for service action supported in Backend.AI Model serving**
**Description of service actions supported in model serving**

.. _prestart_actions:

@@ -270,7 +270,7 @@ resources that can be allocated to the model service.
It is useful when you are trying to create a model service using a runtime variant. Some runtime variants need
certain environment variables to be set before execution.

Before creating model service, Backend.AI supports validation feature to check
Before creating a model service, the platform provides a validation feature to check
whether execution is possible or not (due to any errors during execution).
By clicking the 'Validate' button at the bottom-left of the service launcher,
a new popup for listening to validation events will pop up. In the popup modal,
@@ -404,7 +404,7 @@ Terminating Model Service

The model service periodically runs a scheduler to adjust the routing
count to match the desired session count. However, this puts a burden on
the Backend.AI scheduler. Therefore, it is recommended to terminate the
the scheduler. Therefore, it is recommended to terminate the
model service if it is no longer needed. To terminate the model service,
click on the 'trash' button in the Control column. A modal will appear asking
for confirmation to terminate the model service. Clicking ``OK``
8 changes: 4 additions & 4 deletions docs/quickstart.rst
@@ -1,22 +1,22 @@
Quickstart
==============

Welcome to Quickstart guide of Backend.AI WebUI.
This tutorial will cover the essentials of using Backend.AI without any
Welcome to the Quickstart guide of KT WebUI.
This tutorial will cover the essentials of using KT without any
prior knowledge.


Objectives
------------

Part 1. Basic guide to using Backend.AI
Part 1. Basic guide to using KT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- :ref:`How to create a virtual folder<create_storage_folder>`
- :ref:`How to create a session<create_session>`
- :ref:`How to use a session<use_session>`
- :ref:`How to delete a session<delete_session>`

Part 2. Advanced guide to using Backend.AI
Part 2. Advanced guide to using KT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- :ref:`How to use terminal application with tmux<tmux_guide>`
- :ref:`How to install extra pip package using automount virtual folder<install_pip_pkg>`