134 changes: 134 additions & 0 deletions examples/tracing/openllmetry/openllmetry_tracing.ipynb
@@ -0,0 +1,134 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2722b419",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openlayer-ai/openlayer-python/blob/main/examples/tracing/openllmetry/openllmetry_tracing.ipynb)\n",
"\n",
"\n",
"# <a id=\"top\">OpenLLMetry quickstart</a>\n",
"\n",
"This notebook shows how to export traces captured by [OpenLLMetry](https://github.com/traceloop/openllmetry) (by Traceloop) to Openlayer. The integration is done via Openlayer's [OpenTelemetry endpoint](https://www.openlayer.com/docs/integrations/opentelemetry)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "020c8f6a",
"metadata": {},
"outputs": [],
"source": [
"!pip install openai traceloop-sdk"
]
},
{
"cell_type": "markdown",
"id": "75c2a473",
"metadata": {},
"source": [
"## 1. Set the environment variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f3f4fa13",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import openai\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY_HERE\"\n",
"\n",
"# Env variables pointing to Openlayer's OpenTelemetry endpoint (keep the `%20`, which encodes the space between `Bearer` and `YOUR_OPENLAYER_API_KEY_HERE`)\n",
"os.environ[\"TRACELOOP_BASE_URL\"] = \"https://api.openlayer.com/v1/otel\"\n",
"os.environ[\"TRACELOOP_HEADERS\"] = \"Authorization=Bearer%20YOUR_OPENLAYER_API_KEY_HERE, x-bt-parent=pipeline_id:YOUR_PIPELINE_ID_HERE\""
]
},
{
"cell_type": "markdown",
"id": "9758533f",
"metadata": {},
"source": [
"## 2. Initialize OpenLLMetry instrumentation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c35d9860-dc41-4f7c-8d69-cc2ac7e5e485",
"metadata": {},
"outputs": [],
"source": [
"from traceloop.sdk import Traceloop\n",
"\n",
"Traceloop.init(disable_batch=True)"
]
},
{
"cell_type": "markdown",
"id": "72a6b954",
"metadata": {},
"source": [
"## 3. Use LLMs and workflows as usual\n",
"\n",
"That's it! Now you can continue using LLMs and workflows as usual. The trace data is automatically exported to Openlayer and you can start creating tests around it."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e00c1c79",
"metadata": {},
"outputs": [],
"source": [
"client = openai.OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "abaf6987-c257-4f0d-96e7-3739b24c7206",
"metadata": {},
"outputs": [],
"source": [
"client.chat.completions.create(\n",
" model=\"gpt-4o-mini\", messages=[{\"role\": \"user\", \"content\": \"How are you doing today?\"}]\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "otel",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.19"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
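The `%20` in `TRACELOOP_HEADERS` is easy to get wrong. Below is a minimal sketch of why it matters, assuming (as the notebook's own comment indicates) that the header string is parsed as comma-separated `key=value` pairs, so a raw space in `Bearer <key>` would not survive parsing. The API key and pipeline id are placeholders, not real values:

```python
from urllib.parse import quote, unquote

# Placeholder credentials, for illustration only.
api_key = "YOUR_OPENLAYER_API_KEY_HERE"
pipeline_id = "YOUR_PIPELINE_ID_HERE"

# Percent-encode the space so the value stays a single token
# when the header string is split into key=value pairs.
auth_value = quote(f"Bearer {api_key}")

headers = f"Authorization={auth_value}, x-bt-parent=pipeline_id:{pipeline_id}"

# Decoding recovers the header value the server ultimately sees.
assert unquote(auth_value) == f"Bearer {api_key}"
print(headers)
```

Building the string with `quote()` instead of hand-typing `%20` avoids silently shipping an unencoded space.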
175 changes: 175 additions & 0 deletions examples/tracing/semantic-kernel/semantic_kernel.ipynb
@@ -0,0 +1,175 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2722b419",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openlayer-ai/openlayer-python/blob/main/examples/tracing/semantic-kernel/semantic_kernel.ipynb)\n",
"\n",
"\n",
"# <a id=\"top\">Semantic Kernel quickstart</a>\n",
"\n",
"This notebook shows how to export traces captured by [Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/overview/) to Openlayer. The integration is done via Openlayer's [OpenTelemetry endpoint](https://www.openlayer.com/docs/integrations/opentelemetry)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "020c8f6a",
"metadata": {},
"outputs": [],
"source": [
"!pip install openlit semantic-kernel"
]
},
{
"cell_type": "markdown",
"id": "75c2a473",
"metadata": {},
"source": [
"## 1. Set the environment variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f3f4fa13",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY_HERE\"\n",
"\n",
"# Env variables pointing to Openlayer's OpenTelemetry endpoint\n",
"os.environ[\"OTEL_EXPORTER_OTLP_ENDPOINT\"] = \"https://api.openlayer.com/v1/otel\"\n",
"os.environ[\"OTEL_EXPORTER_OTLP_HEADERS\"] = \"Authorization=Bearer YOUR_OPENLAYER_API_KEY_HERE, x-bt-parent=pipeline_id:YOUR_OPENLAYER_PIPELINE_ID_HERE\""
]
},
{
"cell_type": "markdown",
"id": "9758533f",
"metadata": {},
"source": [
"## 2. Initialize OpenLIT and Semantic Kernel"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c35d9860-dc41-4f7c-8d69-cc2ac7e5e485",
"metadata": {},
"outputs": [],
"source": [
"import openlit\n",
"\n",
"openlit.init()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9c0d5bae",
"metadata": {},
"outputs": [],
"source": [
"from semantic_kernel import Kernel\n",
"\n",
"kernel = Kernel()"
]
},
{
"cell_type": "markdown",
"id": "72a6b954",
"metadata": {},
"source": [
"## 3. Use LLMs as usual\n",
"\n",
"That's it! Now you can continue using LLMs and workflows as usual. The trace data is automatically exported to Openlayer and you can start creating tests around it."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e00c1c79",
"metadata": {},
"outputs": [],
"source": [
"from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n",
"\n",
"kernel.add_service(\n",
" OpenAIChatCompletion(ai_model_id=\"gpt-4o-mini\"),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "abaf6987-c257-4f0d-96e7-3739b24c7206",
"metadata": {},
"outputs": [],
"source": [
"from semantic_kernel.prompt_template import InputVariable, PromptTemplateConfig\n",
"\n",
"prompt = \"\"\"{{$input}}\n",
"Please provide a concise response to the question above.\n",
"\"\"\"\n",
"\n",
"prompt_template_config = PromptTemplateConfig(\n",
" template=prompt,\n",
" name=\"question_answerer\",\n",
" template_format=\"semantic-kernel\",\n",
" input_variables=[\n",
" InputVariable(name=\"input\", description=\"The question from the user\", is_required=True),\n",
" ]\n",
")\n",
"\n",
"summarize = kernel.add_function(\n",
" function_name=\"answerQuestionFunc\",\n",
" plugin_name=\"questionAnswererPlugin\",\n",
" prompt_template_config=prompt_template_config,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49c606ac",
"metadata": {},
"outputs": [],
"source": [
"await kernel.invoke(summarize, input=\"What's the meaning of life?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0377af7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "semantic-kernel-2",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
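The prompt above uses a `{{$input}}` placeholder in Semantic Kernel's `"semantic-kernel"` template format. As a rough mental model only, a toy stand-in for the substitution step might look like the sketch below; this is not Semantic Kernel's actual template engine, and `render_sk_template` is a hypothetical helper:

```python
import re


def render_sk_template(template: str, **variables) -> str:
    """Toy stand-in: replace each {{$name}} placeholder with a bound variable."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing required input variable: {name}")
        return str(variables[name])

    return re.sub(r"\{\{\$(\w+)\}\}", sub, template)


prompt = """{{$input}}
Please provide a concise response to the question above.
"""

print(render_sk_template(prompt, input="What's the meaning of life?"))
```

Marking `InputVariable(..., is_required=True)` in the real notebook plays the role of the `KeyError` here: invocation fails fast when a required variable is not supplied.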