VastAI Management Scripts

Simple shell scripts for managing VastAI GPU instances with vLLM server deployment.

Prerequisites

  • VastAI CLI installed (vastai; e.g. pip install vastai)
  • .env file with required tokens (see below)

Configuration

Copy the template and fill in your API keys:

cp .env.template .env

Then edit .env with your actual keys:
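
For reference, a filled-in .env might look like the following. The variable names here are illustrative guesses; use whatever names .env.template actually defines.

# Illustrative only — match the variable names in .env.template
VAST_API_KEY=your-vastai-api-key
HF_TOKEN=your-huggingface-access-token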

Usage

Check Account Balance

./check_balance.sh

Shows account information, credit balance, and recent billing history.
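
A script like this most likely wraps the stock VastAI CLI account commands; a minimal sketch under that assumption:

# Sketch of what check_balance.sh plausibly runs (not its exact contents)
vastai show user        # account info and credit balance
vastai show invoices    # recent billing history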

Find RTX 5090 Offers

./query_gpus.sh

Lists available RTX 5090 single GPU servers sorted by price (lowest first).
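
Under the hood this plausibly wraps vastai search offers with a GPU filter and price ordering; a sketch under that assumption:

# Sketch only — the real script's query string and ordering may differ
vastai search offers 'gpu_name=RTX_5090 num_gpus=1' -o 'dph'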

Create LLM Instance

./start_llm_instance.sh <offer_id>

Creates a new instance with a vLLM server serving the Gemma-3-27b model on port 8080.
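
Internally this amounts to a vastai create instance call whose on-start command launches vLLM. The image, disk size, and flags below are assumptions for illustration, not the script's exact values:

# Sketch only — image, disk size, and flags are assumed
vastai create instance <offer_id> \
  --image vllm/vllm-openai:latest \
  --disk 80 \
  --onstart-cmd "vllm serve ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g --port 8080 --max-model-len 32768"

The vllm serve arguments match the server details listed further below (port 8080, 32,768-token context).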

Create Minimal Instance

./start_minimal_instance.sh <offer_id>

Launches a lightweight Ubuntu 22.04 environment with only the host NVIDIA drivers available (no CUDA toolkit or vLLM setup). Perfect for custom runtimes or manual installs.
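
The minimal variant boils down to the same create call with a plain Ubuntu image and no on-start command; again a sketch with assumed values:

# Sketch only — image tag and disk size are assumed
vastai create instance <offer_id> --image ubuntu:22.04 --disk 40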

List Your Instances

./list_instances.sh

Shows all your running instances with status and connection info.
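
This most likely wraps the stock listing command:

vastai show instances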

Example Workflow

# 1. Check your account balance
./check_balance.sh

# 2. Find available GPU offers
./query_gpus.sh

# 3. Create instance from an offer (use ID from step 2)
# LLM-ready environment
./start_llm_instance.sh 26128186

# Minimal barebones environment
./start_minimal_instance.sh 26128186

# 4. Monitor your instances
./list_instances.sh

# 5. Connect to vLLM server
# Once running, the server will be available at:
# http://<instance_ip>:8080
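
Since the server exposes OpenAI-compatible endpoints (see below), you can smoke-test it with curl once the instance is running:

# Replace <instance_ip> with the address shown by ./list_instances.sh
curl http://<instance_ip>:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g",
    "messages": [{"role": "user", "content": "Hello"}]
  }'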

vLLM Server Details

The scripts automatically deploy a vLLM OpenAI-compatible API server with:

  • Model: ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g
  • Port: 8080
  • API: OpenAI-compatible endpoints
  • Max Context: 32,768 tokens

Instance Management

Use VastAI CLI commands for additional management:

# Get the SSH connection URL for an instance
vastai ssh-url <instance_id>

# Check logs
vastai logs <instance_id>

# Stop instance
vastai stop instance <instance_id>

# Delete instance
vastai destroy instance <instance_id>
