
Commit 899af0a

dlqqq, meeseeksmachine, srdas, pre-commit-ci[bot], and alanmeeson authored
V3: The Beginning (#1169)
* Backport PR #1049: Added new Anthropic Sonnet3.5 v2 models (#1050) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1051: Added Developer documentation for streaming responses (#1058) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1048: Implement streaming for `/fix` (#1059) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1057: [pre-commit.ci] pre-commit autoupdate (#1060) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Backport PR #1064: Added Ollama to the providers table in user docs (#1066) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1056: Add examples of using Fields and EnvAuthStrategy to developer documentation (#1073) Co-authored-by: Alan Meeson <[email protected]>
* Backport PR #1069: Merge Anthropic language model providers (#1076) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1068: Allow `$` to literally denote quantities of USD in chat (#1079) Co-authored-by: david qiu <[email protected]>
* Backport PR #1075: Fix magic commands when using non-chat providers w/ history (#1080) Co-authored-by: Alan Meeson <[email protected]>
* Backport PR #1077: Fix `/export` by including streamed agent messages (#1081) Co-authored-by: Mahmut CAVDAR <[email protected]>
* Backport PR #1072: Reduced padding in cell around code icons in code toolbar (#1084) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1087: Improve installation documentation and clarify provider dependencies (#1091) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1092: Remove retired models and add new `Haiku-3.5` model in Anthropic (#1093) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1094: Continue to allow `$` symbols to delimit inline math in human messages (#1095) Co-authored-by: david qiu <[email protected]>
* Backport PR #1097: Update `faiss-cpu` version range (#1101) Co-authored-by: david qiu <[email protected]>
* Backport PR #1104: Fix rendering of code blocks in JupyterLab 4.3.0+ (#1105) Co-authored-by: david qiu <[email protected]>
* Backport PR #1106: Catch error on non plaintext files in `@file` and reply gracefully in chat (#1110) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1109: Bump LangChain minimum versions (#1112) Co-authored-by: david qiu <[email protected]>
* Backport PR #1119: Downgrade spurious 'error' logs (#1124) Co-authored-by: ctcjab <[email protected]>
* Backport PR #1127: Removes outdated OpenAI models and adds new ones (#1130) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1131: [pre-commit.ci] pre-commit autoupdate (#1132) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Backport PR #1125: Update model fields immediately on save (#1133) Co-authored-by: david qiu <[email protected]>
* Backport PR #1139: Fix install step in CI (#1140) Co-authored-by: david qiu <[email protected]>
* Backport PR #1129: Fix JSON serialization error in Ollama models (#1141) Co-authored-by: Mr.W <[email protected]>
* Backport PR #1137: Update completion model fields immediately on save (#1142) Co-authored-by: david qiu <[email protected]>
* [v3-dev] Initial migration to `jupyterlab-chat` (#1043)
  * Very first version of the AI working in jupyterlab_collaborative_chat
  * Allows both collaborative and regular chat to work with AI
  * handle the help message in the chat too
  * Autocompletion (#2)
  * Fix handler methods' parameters
  * Add slash commands (autocompletion) to the chat input
  * Stream messages (#3)
  * Allow for stream messages
  * update jupyter collaborative chat dependency
  * AI settings (#4)
  * Add a menu option to open the AI settings
  * Remove the input option from the setting widget
  * pre-commit
  * linting
  * Homogeneize typing for optional arguments
  * Fix import
  * Showing that the bot is writing (answering) (#5)
  * Show that the bot is writing (answering)
  * Update jupyter chat dependency
  * Some typing
  * Update extension to jupyterlab_chat (0.6.0) (#8)
  * Fix linting
  * Remove try/except to import jupyterlab_chat (not optional anymore), and fix typing
  * linter
  * Python unit tests
  * Fix typing
  * lint
  * Fix lint and mypy all together
  * Fix web_app settings accessor
  * Fix jupyter_collaboration version Co-authored-by: david qiu <[email protected]>
  * Remove unecessary try/except
  * Dedicate one set of chat handlers per room (#9)
  * create new set of chat handlers per room
  * make YChat an instance attribute on BaseChatHandler
  * revert changes to chat handlers
  * pre-commit
  * use room_id local var Co-authored-by: Nicolas Brichet <[email protected]>
  ---------
  Co-authored-by: Nicolas Brichet <[email protected]>
  ---------
  Co-authored-by: david qiu <[email protected]>
  Co-authored-by: david qiu <[email protected]>
* Backport PR #1134: Improve user messaging and documentation for Cross-Region Inference on Amazon Bedrock (#1143) Co-authored-by: Sanjiv Das <[email protected]>
* Backport PR #1136: Add base API URL field for Ollama and OpenAI embedding models (#1149) Co-authored-by: Sanjiv Das <[email protected]>
* [v3-dev] Remove `/export`, `/clear`, and `/fix` (#1148)
  * remove /export
  * remove /clear
  * remove /fix
* Fix CI in `v3-dev` branch (#1154)
  * fix check release by bumping to impossible version
  * fix types
  * Update Playwright Snapshots
  ---------
  Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* [v3-dev] Dedicate one LangChain history object per chat (#1151)
  * dedicate a separate LangChain history object per chat
  * pre-commit
  * fix mypy
* Backport PR #1160: Trigger update snapshots based on commenter's role (#1161) Co-authored-by: david qiu <[email protected]>
* Backport PR #1155: Fix code output format in IPython (#1162) Co-authored-by: Divyansh Choudhary <[email protected]>
* Backport PR #1158: Update `/generate` to not split classes & functions across cells (#1164) Co-authored-by: Sanjiv Das <[email protected]>
* Remove v2 frontend components (#1156)
  * First pass to remove the front end chat
  * Remove code-toolbar by using a simplified markdown renderer in settings
  * Remove chat-message-menu (should be ported in jupyter-chat)
  * Remove chat handler
  * Follow up 'Remove chat-message-menu (should be ported in jupyter-chat)' commit
  * Clean package.json
  * Remove UI tests
  * Remove the generative AI menu
  * Remove unused components
  * run yarn dedupe
  ---------
  Co-authored-by: David L. Qiu <[email protected]>
* Upgrade to `jupyterlab-chat>=0.7.0` (#1166)
  * upgrade to jupyterlab-chat 0.7.0
  * pre-commit
  * upgrade to @jupyter/chat ^0.7.0 in frontend
* Remove v2 backend components (#1168)
  * remove v2 llm memory, implement ReplyStream
  * remove v2 websockets & REST handlers
  * remove unused v2 data models
  * fix slash command autocomplete
  * fix unit tests
  * remove unused _learned context provider
  * fix mypy
  * pre-commit
  * fix optional k arg in YChatHistory
  * bump jupyter chat to 0.7.1 to fix Python 3.9 tests
  * revert accidentally breaking /learn

---------

Co-authored-by: Lumberbot (aka Jack) <[email protected]>
Co-authored-by: Sanjiv Das <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Alan Meeson <[email protected]>
Co-authored-by: Mahmut CAVDAR <[email protected]>
Co-authored-by: ctcjab <[email protected]>
Co-authored-by: Mr.W <[email protected]>
Co-authored-by: Nicolas Brichet <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Divyansh Choudhary <[email protected]>
1 parent 624662a commit 899af0a

63 files changed: +1015, -5428 lines

.github/workflows/check-release.yml

Lines changed: 1 addition & 1 deletion

@@ -24,7 +24,7 @@ jobs:
         uses: jupyter-server/jupyter_releaser/.github/actions/check-release@v2
         with:
           token: ${{ secrets.GITHUB_TOKEN }}
-          version_spec: minor
+          version_spec: "12.34.56"
 
       - name: Upload Distributions
         uses: actions/upload-artifact@v4

packages/jupyter-ai-module-cookiecutter/{{cookiecutter.root_dir_name}}/{{cookiecutter.python_name}}/slash_command.py

Lines changed: 2 additions & 2 deletions

@@ -1,5 +1,5 @@
 from jupyter_ai.chat_handlers.base import BaseChatHandler, SlashCommandRoutingType
-from jupyter_ai.models import HumanChatMessage
+from jupyterlab_chat.models import Message
 
 
 class TestSlashCommand(BaseChatHandler):
@@ -25,5 +25,5 @@ class TestSlashCommand(BaseChatHandler):
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
 
-    async def process_message(self, message: HumanChatMessage):
+    async def process_message(self, message: Message):
         self.reply("This is the `/test` slash command.")

packages/jupyter-ai-test/jupyter_ai_test/test_slash_commands.py

Lines changed: 2 additions & 2 deletions

@@ -1,5 +1,5 @@
 from jupyter_ai.chat_handlers.base import BaseChatHandler, SlashCommandRoutingType
-from jupyter_ai.models import HumanChatMessage
+from jupyterlab_chat.models import Message
 
 
 class TestSlashCommand(BaseChatHandler):
@@ -25,5 +25,5 @@ class TestSlashCommand(BaseChatHandler):
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
 
-    async def process_message(self, message: HumanChatMessage):
+    async def process_message(self, message: Message):
         self.reply("This is the `/test` slash command.")
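Both template files above receive the same two-line migration: a slash command handler now receives a `Message` from `jupyterlab_chat.models` instead of a `HumanChatMessage` from `jupyter_ai.models`. As a rough sketch, a custom slash command written against the v3 API could look like the following; the class-level metadata (`id`, `name`, `help`, `routing_type`, `uses_llm`) is an assumption carried over from the v2 cookiecutter template and is not part of this diff:

    from jupyter_ai.chat_handlers.base import BaseChatHandler, SlashCommandRoutingType
    from jupyterlab_chat.models import Message


    class TestSlashCommand(BaseChatHandler):
        # Metadata below is assumed from the v2 template, not shown in this commit.
        id = "test"
        name = "Test"
        help = "A test slash command."
        routing_type = SlashCommandRoutingType(slash_id="test")
        uses_llm = False

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)

        async def process_message(self, message: Message):
            # `message` is a jupyterlab_chat Message; replies still go through the
            # handler's reply helper, as in the template change above.
            self.reply("This is the `/test` slash command.")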
packages/jupyter-ai/jupyter_ai/chat_handlers/__init__.py

Lines changed: 5 additions & 3 deletions

@@ -1,9 +1,11 @@
+# The following import is to make sure jupyter_ydoc is imported before
+# jupyterlab_chat, otherwise it leads to circular import because of the
+# YChat relying on YBaseDoc, and jupyter_ydoc registering YChat from the entry point.
+import jupyter_ydoc
+
 from .ask import AskChatHandler
 from .base import BaseChatHandler, SlashCommandRoutingType
-from .clear import ClearChatHandler
 from .default import DefaultChatHandler
-from .export import ExportChatHandler
-from .fix import FixChatHandler
 from .generate import GenerateChatHandler
 from .help import HelpChatHandler
 from .learn import LearnChatHandler
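The added header comment documents an import-order workaround for a circular import between `jupyter_ydoc` and `jupyterlab_chat`. A minimal sketch of the pattern, with the `noqa` marker added here purely for illustration (it is not in the diff):

    # Import jupyter_ydoc first, purely for its side effects: it must finish
    # importing (and registering YChat via its entry point) before jupyterlab_chat
    # is loaded, otherwise the two packages import each other mid-initialization.
    import jupyter_ydoc  # noqa: F401

    # Only after that is it safe to import anything that pulls in jupyterlab_chat.
    from jupyterlab_chat.models import Message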

packages/jupyter-ai/jupyter_ai/chat_handlers/ask.py

Lines changed: 16 additions & 13 deletions

@@ -1,8 +1,8 @@
 import argparse
 from typing import Dict, Type
 
-from jupyter_ai.models import HumanChatMessage
 from jupyter_ai_magics.providers import BaseProvider
+from jupyterlab_chat.models import Message
 from langchain.chains import ConversationalRetrievalChain
 from langchain.memory import ConversationBufferWindowMemory
 from langchain_core.prompts import PromptTemplate
@@ -59,7 +59,7 @@ def create_llm_chain(
             verbose=False,
         )
 
-    async def process_message(self, message: HumanChatMessage):
+    async def process_message(self, message: Message):
         args = self.parse_args(message)
         if args is None:
             return
@@ -70,21 +70,24 @@ async def process_message(self, message: HumanChatMessage):
 
         self.get_llm_chain()
 
-        try:
-            with self.pending("Searching learned documents", message):
+        with self.start_reply_stream() as reply_stream:
+            try:
                 assert self.llm_chain
                 # TODO: migrate this class to use a LCEL `Runnable` instead of
                 # `Chain`, then remove the below ignore comment.
                 result = await self.llm_chain.acall(  # type:ignore[attr-defined]
                     {"question": query}
                 )
                 response = result["answer"]
-            self.reply(response, message)
-        except AssertionError as e:
-            self.log.error(e)
-            response = """Sorry, an error occurred while reading the from the learned documents.
-            If you have changed the embedding provider, try deleting the existing index by running
-            `/learn -d` command and then re-submitting the `learn <directory>` to learn the documents,
-            and then asking the question again.
-            """
-            self.reply(response, message)
+
+                # old pending message: "Searching learned documents..."
+                # TODO: configure this pending message in jupyterlab-chat
+                reply_stream.write(response)
+            except AssertionError as e:
+                self.log.error(e)
+                response = """Sorry, an error occurred while reading the from the learned documents.
+                If you have changed the embedding provider, try deleting the existing index by running
+                `/learn -d` command and then re-submitting the `learn <directory>` to learn the documents,
+                and then asking the question again.
+                """
+                reply_stream.write(response, message)
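The ask.py hunks above show the core v3 handler pattern: the v2 `self.pending(...)` context and `self.reply(...)` calls are replaced by a reply stream that the handler opens and writes into. A minimal sketch of a handler method using this pattern, assuming `start_reply_stream()` yields an object with a `write()` method as shown in the diff, and assuming `Message.body` holds the user's text (this diff builds its `query` elsewhere, so the attribute is an assumption); the class name is hypothetical:

    from jupyter_ai.chat_handlers.base import BaseChatHandler
    from jupyterlab_chat.models import Message


    class MyChatHandler(BaseChatHandler):
        async def process_message(self, message: Message):
            self.get_llm_chain()
            with self.start_reply_stream() as reply_stream:
                try:
                    assert self.llm_chain
                    result = await self.llm_chain.acall({"question": message.body})
                    # Write the answer into the chat; the stream is closed when
                    # the `with` block exits.
                    reply_stream.write(result["answer"])
                except AssertionError as e:
                    self.log.error(e)
                    reply_stream.write("Sorry, an error occurred while answering.")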
