---
title: Kubernetes
sidebar_position: 1
slug: /deployment-kubernetes
---
This guide will help you get LangFlow up and running in a Kubernetes cluster, including the following steps:
- Install the LangFlow IDE in a Kubernetes cluster (for development)
- Install LangFlow as a standalone application in a Kubernetes cluster (for production runtime workloads)
## LangFlow IDE

The LangFlow IDE provides a complete environment for developers to create, test, and debug their flows. It includes both the API and the UI.

### Prerequisites
- Kubernetes server
- kubectl
- Helm
We use Minikube for this example, but you can use any Kubernetes cluster.
- Create a Kubernetes cluster on Minikube.

  ```shell
  minikube start
  ```
- Set `kubectl` to use Minikube.

  ```shell
  kubectl config use-context minikube
  ```
- Add the repository to Helm.

  ```shell
  helm repo add langflow https://langflow-ai.github.io/langflow-helm-charts
  helm repo update
  ```
- Install LangFlow with the default options in the `langflow` namespace.

  ```shell
  helm install langflow-ide langflow/langflow-ide -n langflow --create-namespace
  ```
- Check the status of the pods.

  ```shell
  kubectl get pods -n langflow
  ```

  ```text
  NAME                                 READY   STATUS    RESTARTS   AGE
  langflow-0                           1/1     Running   0          33s
  langflow-frontend-5d9c558dbb-g7tc9   1/1     Running   0          38s
  ```
Enable local port forwarding to access LangFlow from your local machine.

```shell
kubectl port-forward -n langflow svc/langflow-langflow-runtime 7860:7860
```

Now you can access LangFlow at http://localhost:7860/.
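As a quick smoke test, you can query the service through the forwarded port; this sketch assumes LangFlow's default `/health` route:

```shell
curl http://localhost:7860/health
```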
To specify a different LangFlow version, you can set the `langflow.backend.image.tag` and `langflow.frontend.image.tag` values in the `values.yaml` file.
```yaml
langflow:
  backend:
    image:
      tag: "1.0.0a59"
  frontend:
    image:
      tag: "1.0.0a59"
```
By default, the chart uses a SQLite database stored on a local persistent disk.
If you want to use an external PostgreSQL database, you can set the `langflow.backend.externalDatabase` values in the `values.yaml` file.
```yaml
# Deploy postgresql. You can skip this section if you have an existing postgresql database.
postgresql:
  enabled: true
  fullnameOverride: "langflow-ide-postgresql-service"
  auth:
    username: "langflow"
    password: "langflow-postgres"
    database: "langflow-db"

langflow:
  backend:
    externalDatabase:
      enabled: true
      driver:
        value: "postgresql"
      host:
        value: "langflow-ide-postgresql-service"
      port:
        value: "5432"
      database:
        value: "langflow-db"
      user:
        value: "langflow"
      password:
        valueFrom:
          secretKeyRef:
            key: "password"
            name: "langflow-ide-postgresql-service"
    sqlite:
      enabled: false
```
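If you disable the bundled PostgreSQL and point at an existing database instead, the `secretKeyRef` above must resolve to a secret you create yourself. A minimal sketch (the secret name and key mirror the values above; the password is a placeholder):

```shell
kubectl create secret generic langflow-ide-postgresql-service \
  -n langflow \
  --from-literal=password='your-database-password'
```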
You can scale the number of replicas for the LangFlow backend and frontend services by changing the `replicaCount` value in the `values.yaml` file.
```yaml
langflow:
  backend:
    replicaCount: 3
  frontend:
    replicaCount: 3
```
You can scale frontend and backend services independently.
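The same change can be applied without editing the file, using Helm's `--set` flag (a sketch, assuming the `langflow-ide` release from the install step):

```shell
helm upgrade langflow-ide langflow/langflow-ide -n langflow \
  --set langflow.backend.replicaCount=3 \
  --set langflow.frontend.replicaCount=3
```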
To scale vertically (increase the resources for the pods), you can set the `resources` values in the `values.yaml` file.
```yaml
langflow:
  backend:
    resources:
      requests:
        memory: "2Gi"
        cpu: "1000m"
  frontend:
    resources:
      requests:
        memory: "1Gi"
        cpu: "1000m"
```
Visit the [LangFlow Helm Charts repository](https://github.com/langflow-ai/langflow-helm-charts) for more information.
## LangFlow runtime

The runtime chart is tailored for deploying applications in a production environment. It is focused on stability, performance, isolation, and security to ensure that applications run reliably and efficiently.
Using a dedicated deployment for a set of flows is fundamental in production environments, because it gives you granular control over resources.
### Prerequisites

- Kubernetes server
- kubectl
- Helm
Follow the same steps as for the LangFlow IDE.
- Add the repository to Helm.

  ```shell
  helm repo add langflow https://langflow-ai.github.io/langflow-helm-charts
  helm repo update
  ```
- Install the LangFlow app with the default options in the `langflow` namespace.

  If you bundled the flow in a docker image, you can specify the image name in the `values.yaml` file or with the `--set` flag:

  ```shell
  helm install my-langflow-app langflow/langflow-runtime -n langflow --create-namespace --set image.repository=myuser/langflow-just-chat --set image.tag=1.0.0
  ```

  If you want to download the flow from a remote location, you can specify the URL in the `values.yaml` file or with the `--set` flag (a values-file equivalent is sketched after these steps):

  ```shell
  helm install my-langflow-app langflow/langflow-runtime -n langflow --create-namespace --set 'downloadFlows.flows[0].url=https://raw.githubusercontent.com/langflow-ai/langflow/dev/src/backend/base/langflow/initial_setup/starter_projects/Basic%20Prompting%20(Hello%2C%20world!).json'
  ```
- Check the status of the pods.

  ```shell
  kubectl get pods -n langflow
  ```
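As an alternative to the `--set` flags in the install step, the flow source can live in your `values.yaml` (a sketch mirroring the download option above):

```yaml
downloadFlows:
  flows:
    - url: "https://raw.githubusercontent.com/langflow-ai/langflow/dev/src/backend/base/langflow/initial_setup/starter_projects/Basic%20Prompting%20(Hello%2C%20world!).json"
```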
Enable local port forwarding to access LangFlow from your local machine.

```shell
kubectl port-forward -n langflow svc/langflow-my-langflow-app 7860:7860
```

Now you can access the API at http://localhost:7860/api/v1/flows and execute the flow:
```shell
id=$(curl -s http://localhost:7860/api/v1/flows | jq -r '.flows[0].id')
curl -X POST \
    "http://localhost:7860/api/v1/run/$id?stream=false" \
    -H 'Content-Type: application/json' \
    -d '{
      "input_value": "Hello!",
      "output_type": "chat",
      "input_type": "chat"
    }'
```
In this case, storage is not needed as our deployment is stateless.
You can set the log level and other LangFlow configurations in the `values.yaml` file.
```yaml
env:
  - name: LANGFLOW_LOG_LEVEL
    value: "INFO"
```
To inject secrets and LangFlow global variables, you can use the `secrets` and `env` sections in the `values.yaml` file.
Suppose your flow uses a global variable that holds a secret; when you export the flow as JSON, it's recommended not to include it. When importing the flow into the LangFlow runtime, you can set the global variable using the `env` section in the `values.yaml` file.

Assuming you have a global variable called `openai_key_var`, you can read it directly from a secret:
```yaml
env:
  - name: openai_key_var
    valueFrom:
      secretKeyRef:
        name: openai-key
        key: openai-key
```
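The referenced secret must exist in the release namespace beforehand; a minimal sketch for creating it (the name and key mirror the `secretKeyRef` above, the value is a placeholder):

```shell
kubectl create secret generic openai-key \
  -n langflow \
  --from-literal=openai-key='sk-your-api-key'
```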
or directly from the values file (not recommended for secret values!):
```yaml
env:
  - name: openai_key_var
    value: "sk-...."
```
You can scale the number of replicas for the LangFlow app by changing the `replicaCount` value in the `values.yaml` file.
```yaml
replicaCount: 3
```
To scale vertically (increase the resources for the pods), you can set the `resources` values in the `values.yaml` file.
```yaml
resources:
  requests:
    memory: "2Gi"
    cpu: "1000m"
```
Visit the [LangFlow Helm Charts repository](https://github.com/langflow-ai/langflow-helm-charts) for more examples and configurations. Use the default values file as a reference for all the available options.
:::note
Visit the examples directory to learn more about different deployment options.
:::