Commit
Merge pull request #2 from douglasmakey/feature/use-rust-genai
feat: add rust-genai.
douglasmakey authored Jun 16, 2024
2 parents 68eaf81 + 7910cd6 commit 835020b
Showing 9 changed files with 85 additions and 14 deletions.
1 change: 1 addition & 0 deletions Cargo.toml
@@ -30,3 +30,4 @@ regex = "1.10.4"
serde = { version = "1.0.203", features = ["derive"] }
serde_json = "1.0.117"
tokio = { version = "1.38.0", features = ["full"] }
+genai = "=0.1.1"
16 changes: 11 additions & 5 deletions README.md
@@ -2,7 +2,7 @@

![](./assets/shelldon.jpeg)

-Shelldon is a command-line tool written in Rust. It provides a set of utilities for executing shell commands, managing prompts, and interacting with OpenAI GPT.
+Shelldon is a command-line tool written in Rust. It provides utilities for executing shell commands, managing prompts, and interacting with multiple LLMs.

Yes, another CLI with GPT features. Shelldon is not intended to be a full GPT client for the terminal; there are a couple of CLIs that do that much better, as well as plenty of applications and even the official ChatGPT apps. Shelldon solves some personal use cases and is very useful to me; I hope it is useful for you too. Also, I made it to have fun playing with Rust!

@@ -32,18 +32,24 @@ cargo build --release

## Usage

-To use Shelldon, you’ll need to set your OpenAI token. You can do this by setting an environment variable. Here’s how you can set it in your terminal:
+Shelldon supports different AI providers, such as Ollama, OpenAI, Gemini, Anthropic, and Cohere. You control which provider is used with the `--model` flag, for example `--model claude-3-haiku-20240307` or `--model gemini-1.5-flash-latest`. By default, Shelldon uses `gpt-4o`.
+
+To use Shelldon, set the API key for each provider you plan to use. You can do this with environment variables. Here’s how to set them in your terminal:

```sh
export OPENAI_API_KEY="your-openai-api-key"
export OPENAI_API_KEY="api-key"
export ANTHROPIC_API_KEY="api-key"
export COHERE_API_KEY="api-key"
export GEMINI_API_KEY="api-key"

```
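
With the keys in place, you can point any command at a different provider just by changing the model name. For example (a sketch: it assumes `ask` accepts the same `--model` flag shown for `exec` below, and the Ollama model name is illustrative and needs a local Ollama server):

```sh
shelldon ask "Explain Rust lifetimes in one paragraph." --model claude-3-haiku-20240307
shelldon ask "Explain Rust lifetimes in one paragraph." --model gemini-1.5-flash-latest
shelldon ask "Explain Rust lifetimes in one paragraph." --model llama3
```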

Shelldon allows you to integrate GPT features into your shell commands easily. Here are some examples to get you started:

### Running Shell Commands

```sh
-$ shelldon exec "Show all the graphics ports for the Vagrant machine using Libvirt."
+$ shelldon exec "Show all the graphics ports for the Vagrant machine using Libvirt." --model gpt-4o
Command to execute: vagrant ssh -c "virsh list --all | grep vagrant | awk '{print \$1}' | xargs -I {} virsh domdisplay {}"
? [R]un, [M]odify, [C]opy, [A]bort ›
```
@@ -188,7 +194,7 @@ So the ability to handle dynamic prompts with args and use them makes Shelldon a
- [ ] Improve error handling.
- [ ] Add default prompts.
- [ ] Implement OpenAI functions?
-- [ ] Implement Ollama? Maybe in the future. Do you need it?
+- [X] Implement Ollama? Maybe in the future. Do you need it?

## Contributing
59 changes: 59 additions & 0 deletions src/backend/genai.rs
@@ -0,0 +1,59 @@
use crate::processor::CompletionGenerator;
use crate::{Error, Result};
use async_stream::stream;
use futures::{stream::LocalBoxStream, StreamExt};
use genai::{
    chat::{ChatMessage, ChatRequest, ChatStreamEvent, StreamChunk},
    client::Client,
};

/// Backend that reaches OpenAI, Anthropic, Gemini, Cohere, Ollama, etc.
/// through the multi-provider `genai` crate.
pub struct GenAI {
    client: Client,
}

impl GenAI {
    pub fn new() -> Self {
        Self {
            // The default client picks up each provider's API key from the
            // environment variables listed in the README.
            client: Client::default(),
        }
    }
}

impl CompletionGenerator for GenAI {
    async fn generate_completion(
        &self,
        model: &str,
        _temperature: f32,
        prompt: &str,
        input: &str,
    ) -> crate::Result<String> {
        let req = ChatRequest::new(vec![ChatMessage::system(prompt), ChatMessage::user(input)]);
        let resp = self.client.exec_chat(model, req.clone(), None).await?;
        resp.content.ok_or(Error::EmptyResponse)
    }

    async fn stream_completion(
        &self,
        model: &str,
        _temperature: f32,
        prompt: &str,
        input: &str,
    ) -> Result<LocalBoxStream<String>> {
        let req = ChatRequest::new(vec![ChatMessage::system(prompt), ChatMessage::user(input)]);
        let resp = self
            .client
            .exec_chat_stream(model, req.clone(), None)
            .await?;

        // Adapt genai's typed event stream into a plain stream of text
        // chunks; errors and non-chunk events simply end the stream.
        let async_stream = stream! {
            let mut stream = resp.stream;
            while let Some(Ok(stream_event)) = stream.next().await {
                if let ChatStreamEvent::Chunk(StreamChunk { content }) = stream_event {
                    yield content;
                }
            }
        };

        Ok(Box::pin(async_stream))
    }
}
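
For reference, the adapter above touches only a small surface of the `genai` crate. Here is a standalone sketch built from just the calls visible in this diff (`Client::default()`, `ChatRequest::new`, `exec_chat`), pinned by Cargo.toml to `genai = "=0.1.1"`; it assumes, as the README suggests, that provider API keys are read from the environment:

```rust
use genai::{
    chat::{ChatMessage, ChatRequest},
    client::Client,
};

#[tokio::main]
async fn main() -> Result<(), genai::Error> {
    // Assumption: the default client resolves API keys such as
    // OPENAI_API_KEY from the environment, as the README implies.
    let client = Client::default();

    let req = ChatRequest::new(vec![
        ChatMessage::system("You are a terse assistant."),
        ChatMessage::user("Say hello in one word."),
    ]);

    // The model name selects the provider; gpt-4o is Shelldon's default.
    let resp = client.exec_chat("gpt-4o", req, None).await?;
    println!("{}", resp.content.unwrap_or_default());
    Ok(())
}
```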
1 change: 1 addition & 0 deletions src/backend/mod.rs
@@ -1 +1,2 @@
+pub mod genai;
pub mod openai;
4 changes: 2 additions & 2 deletions src/backend/openai.rs
@@ -9,7 +9,7 @@ use async_openai::{
    Client,
};
use async_stream::stream;
-use futures::stream::BoxStream;
+use futures::stream::LocalBoxStream;

pub struct OpenAI {
    client: Client<OpenAIConfig>,
@@ -70,7 +70,7 @@ impl CompletionGenerator for OpenAI {
        temperature: f32,
        prompt: &str,
        input: &str,
-    ) -> Result<BoxStream<String>> {
+    ) -> Result<LocalBoxStream<String>> {
        let messages = [
            ChatCompletionRequestSystemMessageArgs::default()
                .content(prompt)
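
The substance of this change is the dropped `Send` bound. In the `futures` crate the two aliases differ only in that `BoxStream` requires the boxed stream to be `Send`; presumably the genai-backed stream is not, so the whole pipeline switches to the local variant:

```rust
// The two aliases as defined in the futures crate (shown for reference):
use futures::Stream;
use std::pin::Pin;

pub type BoxStream<'a, T> = Pin<Box<dyn Stream<Item = T> + Send + 'a>>;
pub type LocalBoxStream<'a, T> = Pin<Box<dyn Stream<Item = T> + 'a>>;
```

One consequence: a `LocalBoxStream` must be consumed on the thread that created it, which is fine for a CLI that drives everything from one task.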
4 changes: 2 additions & 2 deletions src/command/ask.rs
@@ -1,6 +1,6 @@
use super::CommonArgs;
use crate::{
-    backend::openai::OpenAI,
+    backend::genai::GenAI,
    command::{parse_prompt, read_input},
    config::Config,
    processor::CompletionProcessor,
@@ -18,7 +18,7 @@ pub struct AskArgs {
}

pub async fn handle_ask(config: Config, args: AskArgs) -> Result<()> {
-    let processor = CompletionProcessor::new(OpenAI::new()?);
+    let processor = CompletionProcessor::new(GenAI::new());
    let input = read_input(&args.common.input)?;
    let prompt = parse_prompt(config, args.common.prompt, args.common.set, "")?;
    let mut completion = processor
4 changes: 2 additions & 2 deletions src/command/exec.rs
@@ -1,6 +1,6 @@
use super::{parse_prompt, read_input, CommonArgs};
use crate::{
-    backend::openai::OpenAI,
+    backend::genai::GenAI,
    config::Config,
    processor::CompletionProcessor,
    system::{self, copy_to_clipboard, run_cmd},
@@ -36,7 +36,7 @@ pub struct ExecArgs {
}

pub async fn handle_exec(config: Config, args: ExecArgs) -> Result<()> {
-    let processor = CompletionProcessor::new(OpenAI::new()?);
+    let processor = CompletionProcessor::new(GenAI::new());
    let input = read_input(&args.common.input)?;
    let default_prompt = SHELL_PROMPT
        .replace("{shell}", &system::get_current_shell())
4 changes: 4 additions & 0 deletions src/error.rs
@@ -12,6 +12,8 @@ pub enum Error {
    CommandFailed { command: String },
    #[display(fmt = "API key not set")]
    APIKeyNotSet,
+    #[display(fmt = "Empty response")]
+    EmptyResponse,

    #[from]
    OpenAI(async_openai::error::OpenAIError),
@@ -23,4 +25,6 @@
    Serde(serde_json::Error),
    #[from]
    Dialoguer(dialoguer::Error),
+    #[from]
+    GenAI(genai::Error),
}
6 changes: 3 additions & 3 deletions src/processor.rs
@@ -1,5 +1,5 @@
use crate::Result;
-use futures::stream::BoxStream;
+use futures::stream::LocalBoxStream;

pub trait CompletionGenerator {
    async fn generate_completion(
@@ -16,7 +16,7 @@ pub trait CompletionGenerator {
        temperature: f32,
        prompt: &str,
        input: &str,
-    ) -> Result<BoxStream<String>>;
+    ) -> Result<LocalBoxStream<String>>;
}

pub struct CompletionProcessor<T: CompletionGenerator> {
@@ -48,7 +48,7 @@ impl<T: CompletionGenerator> CompletionProcessor<T> {
        input: &str,
        model: &str,
        temperature: f32,
-    ) -> Result<BoxStream<String>> {
+    ) -> Result<LocalBoxStream<String>> {
        self.generator
            .stream_completion(model, temperature, prompt, input)
            .await
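
Since `CompletionGenerator` is the seam between the commands and the backends, it is easy to stub out. A minimal sketch of a hypothetical test double (the `Echo` type is not part of this PR):

```rust
use crate::processor::CompletionGenerator;
use crate::Result;
use futures::stream::{self, LocalBoxStream, StreamExt};

/// Hypothetical backend that echoes its input instead of calling an LLM.
struct Echo;

impl CompletionGenerator for Echo {
    async fn generate_completion(
        &self,
        _model: &str,
        _temperature: f32,
        prompt: &str,
        input: &str,
    ) -> Result<String> {
        Ok(format!("{prompt}: {input}"))
    }

    async fn stream_completion(
        &self,
        _model: &str,
        _temperature: f32,
        _prompt: &str,
        input: &str,
    ) -> Result<LocalBoxStream<String>> {
        // A single chunk containing the whole input.
        Ok(stream::iter([input.to_string()]).boxed_local())
    }
}
```

`CompletionProcessor::new(Echo)` then drives it exactly like the real `OpenAI` and `GenAI` backends.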
