- Semantic Kernel, Microsoft's LangChain-like library, supports C# and Python and offers several features, some of which are still in development and not yet clearly documented. However, it is simple, stable, and faster than Python-based open-source software. The features are listed in the Semantic Kernel Feature Matrix / doc:ref / blog:ref / git [Feb 2023]
- .NET Semantic Kernel SDK: 1. Renamed packages and classes that used the term “Skill” to use “Plugin” instead. 2. Made the OpenAI-specific code in the Semantic Kernel core AI-service agnostic. 3. Consolidated the planner implementations into a single package. ref [10 Oct 2023]
- Road to v1.0 for the Python Semantic Kernel SDK ref [23 Jan 2024] backlog
- Agent Framework: a module for AI agents and agentic patterns / Process Framework: a module for creating a structured sequence of activities or tasks. [Oct 2024]
- AutoGen will transition seamlessly into Semantic Kernel in early 2025 [15 Nov 2024]
- Unlocking the Power of Memory: Announcing General Availability of Semantic Kernel’s Memory Packages: new Vector Store abstractions, improving on the older Memory Store abstractions. [25 Nov 2024]
- Micro-orchestration in LLM pipelines is the detailed management of LLM interactions, focusing on data flow within tasks.
- e.g., Semantic Kernel, LangChain, LlamaIndex, Haystack, and AdalFlow.
- Semantic Kernel sample application:💡Chat Copilot [Apr 2023] / Virtual Customer Success Manager (VCSM) [Jul 2024] / Project Micronaire: A Semantic Kernel RAG Evaluation Pipeline git [3 Oct 2024]
- Semantic Kernel Recipes: A collection of C# notebooks git [Mar 2023]
- Deploy Semantic Kernel with Bot Framework ref git [26 Oct 2023]
- Semantic Kernel-Powered OpenAI Plugin Development Lifecycle ref [30 Oct 2023]
- Semantic Kernel implementation sample to overcome token limits of the OpenAI model. ref [06 May 2023]
- Learning Paths for Semantic Kernel [28 Mar 2024]
- A Pythonista’s Intro to Semantic Kernel💡[3 Sep 2023]
- Step-by-Step Guide to Building a Powerful AI Monitoring Dashboard with Semantic Kernel and Azure Monitor: track token usage and custom metrics. [23 Aug 2024]
- Working with Audio in Semantic Kernel Python [15 Nov 2024]
- Semantic Kernel Planner ref [24 Jul 2023]
- Is Semantic Kernel Planner the same as LangChain agents? No: the Planner in SK is not the same as Agents in LangChain. cite [11 May 2023] Agents in LangChain use recursive calls to the LLM to decide the next step based on the current state. The two planner implementations in SK are not self-correcting: the Sequential planner tries to produce all the steps at the very beginning, so it cannot handle unexpected errors, and the Action planner chooses only one tool to satisfy the goal.
- Stepwise Planner released. The Stepwise Planner features the "CreateScratchPad" function, acting as a 'Scratch Pad' to aggregate goal-oriented steps. [16 Aug 2023]
- Gen-4 and Gen-5 planners: 1. Gen-4: generates multi-step plans with Handlebars templates. 2. Gen-5: Stepwise Planner with Function Calling support. ref [16 Nov 2023]
- Use function calling for most tasks; it's more powerful and easier to use. The Stepwise and Handlebars planners will be deprecated. ref [Jun 2024]
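  A hedged sketch of auto function calling with the Python SDK (module paths and signatures vary across semantic-kernel versions; `WeatherPlugin` and its stub forecast are invented for illustration):

  ```python
  import asyncio

  from semantic_kernel import Kernel
  from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
  from semantic_kernel.connectors.ai.open_ai import (
      OpenAIChatCompletion,
      OpenAIChatPromptExecutionSettings,
  )
  from semantic_kernel.functions import KernelArguments, kernel_function

  class WeatherPlugin:  # hypothetical plugin, for illustration only
      @kernel_function(description="Get the forecast for a city.")
      def get_forecast(self, city: str) -> str:
          return f"Sunny in {city}"  # stub in place of a real lookup

  async def main():
      kernel = Kernel()
      kernel.add_service(OpenAIChatCompletion(ai_model_id="gpt-4o-mini"))  # key from env
      kernel.add_plugin(WeatherPlugin(), plugin_name="weather")
      # Auto(): the model decides which plugin functions to call; no planner needed
      settings = OpenAIChatPromptExecutionSettings(
          function_choice_behavior=FunctionChoiceBehavior.Auto()
      )
      result = await kernel.invoke_prompt(
          "What's the weather in Seattle?",
          arguments=KernelArguments(settings=settings),
      )
      print(result)

  asyncio.run(main())
  ```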
- The future of Planners in Semantic Kernel [23 July 2024]
- Semantic Kernel Functions vs. Plugins (a hedged sketch follows below):
  - Function: individual units of work that perform specific tasks; they execute actions based on user requests. ref [12 Nov 2024]
  - Plugin: collections of functions; they orchestrate multiple functions for complex tasks.
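  A minimal sketch of the distinction, assuming the Python SDK (`TextPlugin` and its functions are invented for illustration): each `@kernel_function`-decorated method is a function; the class that groups them becomes a plugin.

  ```python
  import asyncio

  from semantic_kernel import Kernel
  from semantic_kernel.functions import kernel_function

  class TextPlugin:  # hypothetical plugin: a collection of functions
      @kernel_function(description="Uppercase the input text.")
      def shout(self, text: str) -> str:  # one function: a single unit of work
          return text.upper()

      @kernel_function(description="Count the words in the input text.")
      def word_count(self, text: str) -> str:
          return str(len(text.split()))

  async def main():
      kernel = Kernel()
      plugin = kernel.add_plugin(TextPlugin(), plugin_name="text")
      result = await kernel.invoke(plugin["shout"], text="hello semantic kernel")
      print(result)  # HELLO SEMANTIC KERNEL

  asyncio.run(main())
  ```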
- Semantic Function: expressed in natural language in a text file "skprompt.txt" using SK's Prompt Template language. Each semantic function is defined by a unique prompt template file, developed using modern prompt engineering techniques. cite
- Prompt Template language key takeaways (see the sketch below):
  1. Variables: use the `{{$variableName}}` syntax: `Hello {{$name}}, welcome to Semantic Kernel!`
  2. Function calls: use the `{{namespace.functionName}}` syntax: `The weather today is {{weather.getForecast}}.`
  3. Function parameters: use the `{{namespace.functionName $varName}}` and `{{namespace.functionName "value"}}` syntax: `The weather today in {{$city}} is {{weather.getForecast $city}}.`
  4. Prompts needing double curly braces: `{{ "{{" }}` and `{{ "}}" }}` are special SK sequences.
  5. Values that include quotes, and escaping: `{{ 'no need to \\"escape" ' }}` is equivalent to `{{ 'no need to "escape" ' }}`.
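  A hedged sketch of the template syntax in use, assuming the Python SDK (keyword names of `add_function` may differ across versions; the `weather` plugin is hypothetical):

  ```python
  from semantic_kernel import Kernel

  kernel = Kernel()
  # {{$city}} is a variable; {{weather.get_forecast $city}} calls a function
  # from a (hypothetical) 'weather' plugin, passing the variable as a parameter.
  weather_report = kernel.add_function(
      plugin_name="report",
      function_name="weather_report",
      prompt="The weather today in {{$city}} is {{weather.get_forecast $city}}.",
  )
  ```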
- Glossary in Git / Glossary in MS Doc

  | Term | Short Description |
  | --- | --- |
  | ASK | A user's goal is sent to SK as an ASK |
  | Kernel | The kernel orchestrates a user's ASK |
  | Planner | The planner breaks it down into steps based upon resources that are available. [deprecated] -> replaced by function calling |
  | Resources | Planning involves leveraging available skills, memories, and connectors |
  | Steps | A plan is a series of steps for the kernel to execute |
  | Pipeline | Executing the steps results in fulfilling the user's ASK |
- Architecting AI Apps with Semantic Kernel: How you could recreate Microsoft Word Copilot [6 Mar 2024]
- DSPy (Declarative Self-improving Language Programs, pronounced “dee-es-pie”) / doc:ref / git
- DSPy Documentation & Cheatsheet ref
- DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines [5 Oct 2023] / git
- DSPy Explained! 📺 [30 Jan 2024]
- DSPy RAG example in Weaviate recipes: recipes > integrations git
- Prompt Like a Data Scientist: Auto Prompt Optimization and Testing with DSPy [6 May 2024]
- Instead of a hard-coded prompt template, DSPy takes a modular approach: programs are compositions of modules that get compiled. Building blocks such as ChainOfThought or Retrieve are combined into a program, and compiling the program optimizes the prompts based on specific metrics. Unified strategies for both prompting and fine-tuning in one tool, Pythonic operations, and the prioritizing and tracing of program execution distinguish it from other LMP frameworks such as LangChain and LlamaIndex. ref [Jan 2023]
- Automatically iterate until the best result is achieved: 1. Collect data -> 2. Write a DSPy program -> 3. Define validation logic -> 4. Compile the DSPy program.
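  A hedged end-to-end sketch of that loop (the toy dataset, `QA` program, and `validate` metric are invented for illustration; a configured LM, e.g. via `dspy.settings.configure(lm=...)`, is assumed):

  ```python
  import dspy
  from dspy.teleprompt import BootstrapFewShot

  # 1. Collect data (toy examples; mark which fields are inputs)
  trainset = [
      dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
      dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
  ]

  # 2. Write a DSPy program
  class QA(dspy.Module):
      def __init__(self):
          super().__init__()
          self.generate = dspy.ChainOfThought("question -> answer")

      def forward(self, question):
          return self.generate(question=question)

  # 3. Define validation logic
  def validate(example, pred, trace=None):
      return example.answer.lower() in pred.answer.lower()

  # 4. Compile the DSPy program
  optimizer = BootstrapFewShot(metric=validate)
  compiled_qa = optimizer.compile(QA(), trainset=trainset)
  ```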
- These frameworks, including DSPy, utilize algorithmic methods inspired by machine learning to improve prompts, outputs, and overall performance in LLM applications.
- AdalFlow:💡The Library to Build and Auto-optimize LLM Applications [Apr 2024]
- TextGrad: automatic "differentiation" via text. Backpropagation through text feedback provided by LLMs [Jun 2024]
- Glossary: see the ref.
- Signatures: hand-written prompts and fine-tuning are abstracted and replaced by signatures.

  ```python
  "question -> answer"
  "long-document -> summary"
  "context, question -> answer"
  ```

- Modules: prompting techniques, such as `ChainOfThought` or `ReAct`, are abstracted and replaced by modules.

  ```python
  # pass a signature to the ChainOfThought module
  generate_answer = dspy.ChainOfThought("context, question -> answer")
  ```
- Optimizers (formerly Teleprompters): manual iteration of prompt engineering is automated with optimizers (teleprompters) and a DSPy Compiler.

  ```python
  # Self-generate complete demonstrations. Teacher-student paradigm:
  # BootstrapFewShotWithOptuna, BootstrapFewShotWithRandomSearch, etc.
  # work on the same principle.
  optimizer = BootstrapFewShot(metric=dspy.evaluate.answer_exact_match)
  ```
- DSPy Compiler: internally traces your program and then optimizes it using an optimizer (teleprompter) to maximize a given metric (e.g., improve quality or cost) for your task. For example, the DSPy compiler optimizes the initial prompt and thus eliminates the need for manual prompt tuning.

  ```python
  cot_compiled = teleprompter.compile(CoT(), trainset=trainset, valset=devset)
  cot_compiled.save('turbo_gsm8k.json')
  ```
- Automatic Few-Shot Learning
  - As a rule of thumb, if you don't know where to start, use `BootstrapFewShotWithRandomSearch`.
  - If you have very little data, e.g. 10 examples of your task, use `BootstrapFewShot`.
  - If you have slightly more data, e.g. 50 examples of your task, use `BootstrapFewShotWithRandomSearch`.
  - If you have more data than that, e.g. 300 examples or more, use `BayesianSignatureOptimizer`. -> deprecated and replaced with MIPRO.
  - `KNNFewShot`: uses k-Nearest Neighbors to select the closest training examples, which are then used in the `BootstrapFewShot` optimization process.
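  A hedged sketch applying the rules of thumb above (reusing the illustrative `QA`, `validate`, and `trainset` names from the earlier compile sketch):

  ```python
  from dspy.teleprompt import BootstrapFewShotWithRandomSearch

  # the default starting point when roughly 50 examples are available
  optimizer = BootstrapFewShotWithRandomSearch(
      metric=validate,           # validation logic from the earlier sketch
      max_bootstrapped_demos=4,  # self-generated demonstrations per prompt
      num_candidate_programs=8,  # candidate prompts explored by random search
  )
  compiled = optimizer.compile(QA(), trainset=trainset)
  ```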
- Automatic Instruction Optimization
  - `COPRO`: repeats for a set number of iterations, tracking the best-performing instructions.
  - `MIPRO`: repeats for a set number of iterations, tracking the best-performing combinations (instructions and examples). -> replaced with `MIPROv2`.
  - `MIPROv2`: if you want to keep your prompt 0-shot, or use 40+ trials or 200+ examples, choose MIPROv2. [March 2024]
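  A hedged 0-shot MIPROv2 sketch (again reusing the illustrative `QA`/`validate`/`trainset` names; the `auto` presets exist in recent DSPy releases):

  ```python
  from dspy.teleprompt import MIPROv2

  optimizer = MIPROv2(metric=validate, auto="light")  # presets: light / medium / heavy
  compiled = optimizer.compile(
      QA(),
      trainset=trainset,
      max_bootstrapped_demos=0,  # keep the prompt 0-shot:
      max_labeled_demos=0,       # optimize instructions only, no demos
  )
  ```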
- Automatic Finetuning
  - If you have been able to use one of these with a large LM (e.g., 7B parameters or above) and need a very efficient program, compile that down to a small LM with `BootstrapFinetune`.
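  A hedged sketch (argument names vary across DSPy versions; `compiled_qa` is the program compiled with the large teacher LM in the earlier sketch):

  ```python
  from dspy.teleprompt import BootstrapFinetune

  # distil the teacher-compiled program into a small, efficient student LM
  finetuner = BootstrapFinetune(metric=validate)
  small_qa = finetuner.compile(
      QA(),                 # student program to finetune
      teacher=compiled_qa,  # program already compiled with the large LM
      trainset=trainset,
  )
  ```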
- Program Transformations
  - `Ensemble`: combines DSPy programs, using all of them or randomly sampling a subset, into a single program.
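  A hedged sketch (the `compiled_*` programs are assumed outputs of earlier optimizer runs):

  ```python
  import dspy
  from dspy.teleprompt.ensemble import Ensemble

  # majority vote across several compiled programs; size=k would sample a subset
  ensemble = Ensemble(reduce_fn=dspy.majority, size=None)
  combined = ensemble.compile([compiled_a, compiled_b, compiled_c])
  ```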