forked from langchain-ai/langchain
Docs update for PR #7 on Promptless/langchain-test #8
Closed
…i#26928) **Description:** Moves `yield` to after the callback invocation in the `_stream` function of the MLX pipeline model in the community llm package **Issue:** langchain-ai#16913
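The ordering matters because callback handlers should observe each token before the consumer receives it. A minimal, library-agnostic sketch of the pattern (the function and parameter names here are illustrative, not the actual MLX pipeline code):

```python
def _stream(tokens, callbacks):
    """Yield tokens, notifying callbacks before each token is surfaced."""
    for token in tokens:
        # Fire callbacks first; the fix described above moves `yield`
        # to after this loop so handlers run before the consumer.
        for cb in callbacks:
            cb(token)
        yield token

seen = []
chunks = list(_stream(["a", "b"], [seen.append]))
```

With `yield` placed before the callback, a consumer that stops iterating early could skip callback notifications for tokens it already received.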
- [x] PR title: Fix typo in code example in mlflow.py - In libs/community/langchain_community/chat_models/mlflow.py
…26927) **Description:** Moves `yield` to after the callback invocation in the `_stream` function of the Cloudflare Workers AI model in the community llm package **Issue:** langchain-ai#16913
All auto-fixes.
Addressing some lingering comments from langchain-ai#26944, adding parameters for:
- python version
- working directory
…emory (langchain-ai#26855) This PR updates the documentation examples that used RunnableWithMessageHistory to show how to achieve the same implementation with langgraph memory.

Some of the underlying PRs (not all of them):
- docs[patch]: update chatbot tutorial and migration guide (langchain-ai#26780)
- docs[patch]: update chatbot memory how-to (langchain-ai#26790)
- docs[patch]: update chatbot tools how-to (langchain-ai#26816)
- docs: update chat history in rag how-to (langchain-ai#26821)
- docs: update trim messages notebook (langchain-ai#26793)
- docs: clean up imports in how to guide for rag qa with chat history (langchain-ai#26825)
- docs[patch]: update conversational rag tutorial (langchain-ai#26814)

---------

Co-authored-by: ccurme <[email protected]>
Co-authored-by: Vadym Barda <[email protected]>
Co-authored-by: mercyspirit <[email protected]>
Co-authored-by: aqiu7 <[email protected]>
Co-authored-by: John <[email protected]>
Co-authored-by: Erick Friis <[email protected]>
Co-authored-by: William FH <[email protected]>
Co-authored-by: Subhrajyoty Roy <[email protected]>
Co-authored-by: Rajendra Kadam <[email protected]>
Co-authored-by: Christophe Bornet <[email protected]>
Co-authored-by: Devin Gaffney <[email protected]>
Co-authored-by: Bagatur <[email protected]>
…6848) Co-authored-by: Eugene Yurtsev <[email protected]>
These allow converting linked documents (such as those used with GraphVectorStore) to networkx for rendering and/or in-memory graph algorithms such as community detection.
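The conversion described above amounts to reading link metadata off each document and emitting graph edges. A simplified sketch of that idea in plain Python (the `id`/`links` document structure is illustrative; the real conversion targets a `networkx.DiGraph` from GraphVectorStore documents):

```python
def to_edge_list(documents):
    """Collect directed edges from documents carrying `links` metadata.

    Each document is a dict with an `id` and a list of linked ids;
    the resulting edge list could be fed to networkx via
    nx.DiGraph(edges) for rendering or community detection.
    """
    edges = []
    for doc in documents:
        for target in doc.get("links", []):
            edges.append((doc["id"], target))
    return edges

docs = [{"id": "a", "links": ["b", "c"]}, {"id": "b", "links": []}]
edges = to_edge_list(docs)
```

Once in networkx form, standard in-memory algorithms (connected components, community detection) apply directly.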
…support (langchain-ai#26960) **Description:** Update the code interpreter tools feature table to reflect Riza file upload support (blog announcement here: https://riza.io/blog/adding-support-for-input-files-and-http-credentials) **Issue:** N/A **Dependencies:** N/A
httpx clients aren't serializable
template_format is an init argument on ChatPromptTemplate but not an attribute on the object, so it was getting shoved into StructuredPrompt.structured_output_kwargs
* [chore]: Agent Observation should be cast to string to avoid errors
* Merge branch 'master' into fix_observation_type_streaming
* [chore]: Using json.dumps
* [chore]: Exact same logic as when casting agent observation to string
Description: Fix typo in list of PDF loaders. Co-authored-by: Eugene Yurtsev <[email protected]>
Co-authored-by: Eugene Yurtsev <[email protected]>
Fixes langchain-ai#26685 --------- Co-authored-by: Tibor Reiss <[email protected]>
…in-ai#25754)
- **Description:** prevent the index function from re-indexing an entire source document even if nothing has changed.
- **Issue:** langchain-ai#22135

I worked on a solution to this issue that is a compromise between being cheap and being fast. In the previous code, when batch_size was greater than the number of docs from a certain source, almost the entire source was deleted (all documents from that source except for the documents in the first batch). My solution deletes documents from the vector store and record manager only if at least one document has changed for that source. Hope this can help!

---------
Co-authored-by: Eugene Yurtsev <[email protected]>
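The deletion policy described above can be sketched in plain Python. This is a hypothetical helper illustrating the idea (per-source content hashes, delete stale docs only when the source is "dirty"), not LangChain's actual indexing API:

```python
def docs_to_delete(indexed, incoming):
    """Return stale doc ids, per source, but only for sources where at
    least one incoming document's content hash actually changed.

    `indexed` and `incoming` map source -> {doc_id: content_hash}.
    """
    stale = []
    for source, old_docs in indexed.items():
        new_docs = incoming.get(source, {})
        # A source is "dirty" if any incoming doc is new or changed.
        dirty = any(old_docs.get(doc_id) != h for doc_id, h in new_docs.items())
        if dirty:
            # Delete previously indexed docs that are absent or changed.
            stale.extend(doc_id for doc_id, h in old_docs.items()
                         if new_docs.get(doc_id) != h)
    return stale

indexed = {"a.txt": {"a1": "h1", "a2": "h2"}, "b.txt": {"b1": "h3"}}
incoming = {"a.txt": {"a1": "h1", "a2": "h2"}, "b.txt": {"b1": "h9"}}
```

Here `a.txt` is untouched (no deletions regardless of batch size), while `b.txt` has one changed document, so only its stale entry is removed.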
- **Description:** The URL is appended with `=`, which is not working
- **Issue:** removing the `=` symbol makes the URL valid
- **Twitter handle:** @arunprakash_com

Co-authored-by: Erick Friis <[email protected]>
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
  - Where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
  - Example: "community: add foobar LLM"

This PR makes a few grammar adjustments on the page. @leomofthings is my twitter handle.

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

Co-authored-by: Erick Friis <[email protected]>
Edited various notebooks in the tutorial section to:
* Fix grammatical errors
* Improve readability by changing sentence structure or reducing repeated words that bear the same meaning
* Edit a code block to follow the PEP 8 standard
* Add more information in some sentences to make the concepts clearer and reduce ambiguity

---------
Co-authored-by: Eugene Yurtsev <[email protected]>
Thank you for contributing to LangChain! Changes:
- docs: Added `WatsonxRerank` documentation
- docs: Updated `WatsonxEmbeddings` with docs template
- docs: Updated `ChatWatsonx` with docs template
- docs: Updated `WatsonxLLM` with docs template
- docs: Added `ChatWatsonx` to the list of chat model providers. Added [test_chat_models_standard](https://github.com/langchain-ai/langchain-ibm/blob/main/libs/ibm/tests/integration_tests/test_chat_models_standard.py) to the `langchain_ibm` test suite.
- docs: Added `IBM` to the list of embedding model providers. Added [test_embeddings_standard](https://github.com/langchain-ai/langchain-ibm/blob/main/libs/ibm/tests/integration_tests/test_embeddings_standard.py) to the `langchain_ibm` test suite.
- docs: Updated `langchain_ibm` recommended versions compatible with `LangChain v0.3`

---------
Co-authored-by: Erick Friis <[email protected]>
**PR Title**: `docs: fix typo in query analysis documentation` **Description**: This PR corrects a typo on line 68 in the query analysis documentation, changing **"pharsings"** to **"phrasings"** for clarity and accuracy. Only one instance of the typo was fixed in the last merge, and this PR fixes the second instance. **Issue**: N/A **Dependencies**: None **Additional Notes**: No functional changes were made; this is a documentation fix only.
Updated the documentation to fix some grammar errors
- **Description:** Some language errors exist in the documentation
- **Issue:** the issue #

Changed the structure of some sentences
…n-ai#27703) Thank you for contributing to LangChain!

Add notice of upcoming package consolidation of `langchain-databricks` into `databricks-langchain`.

<img width="1047" alt="image" src="https://github.com/user-attachments/assets/18eaa394-4e82-444b-85d5-7812be322674">

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

Signed-off-by: Prithvi Kannan <[email protected]>
Co-authored-by: Erick Friis <[email protected]>
Permit trimming message lists of length 1
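The edge case above (a message list of length 1) is easy to get wrong in trim logic that implicitly assumes at least two messages. A minimal, hypothetical sketch of left-trimming to a token budget that handles the single-message case (the names and token counter are illustrative, not LangChain's `trim_messages` implementation):

```python
def trim_to_budget(messages, max_tokens, count_tokens):
    """Keep the most recent messages that fit within max_tokens.

    Works for any list length, including a single message: if that
    one message fits, it is kept; otherwise the result is empty.
    """
    kept = []
    total = 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    kept.reverse()
    return kept

result = trim_to_budget(["hello"], 10, len)
```

Using `len` as the token counter keeps the sketch self-contained; a real counter would measure model tokens.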
…ector store (langchain-ai#27736) This fixes an error caused by missing custom content_key handling in the Redis vector store in the function similarity_search_with_score.
…nts for ChatNVIDIA (langchain-ai#27734)

* **PR title**: "docs: Replaced langchain import with langchain-nvidia-ai-endpoints in NVIDIA Endpoints Tab"
* **PR message**:
  + **Description:** Replaced the import of `langchain` with `langchain-nvidia-ai-endpoints` in the NVIDIA Endpoints Tab to resolve an error caused by the documentation attempting to import the generic `langchain` module despite the targeted import.
  + **Issue:**
  + **Dependencies:** No additional dependencies introduced; simply updated the existing import to a more specific module.
  + **Twitter handle:** https://x.com/nawaz0x1
* **Add tests and docs**:
  + **Applicability:** Not applicable in this case, as the change is a fix to an existing integration rather than the addition of a new one.
  + **Rationale:** No new functionality or integrations are introduced, only a corrective import change.
* **Lint and test**:
  + **Status:** Completed
  + **Outcome:**
    - `make format`: **Passed**
    - `make lint`: **Passed**
    - `make test`: **Passed**
…in-ai#27731) **Description:**
- Add the `lora_request` parameter to the VLLM class to support LoRA model configurations. This enhancement allows users to specify LoRA requests directly when using VLLM, enabling more flexible and efficient model customization.

**Issue:**
- No existing issue for `lora_adapter` in VLLM. This PR addresses the need for configuring LoRA requests within the VLLM framework.
- Reference: [Using LoRA Adapters in vLLM](https://docs.vllm.ai/en/stable/models/lora.html#using-lora-adapters)

**Example Code:**

Before this change, the `lora_request` parameter was not applied correctly:

```python
ADAPTER_PATH = "/path/of/lora_adapter"

llm = VLLM(
    model="Bllossom/llama-3.2-Korean-Bllossom-3B",
    max_new_tokens=512,
    top_k=2,
    top_p=0.90,
    temperature=0.1,
    vllm_kwargs={
        "gpu_memory_utilization": 0.5,
        "enable_lora": True,
        "max_model_len": 1024,
    },
)
print(llm.invoke(
    ["...prompt_content..."],
    lora_request=LoRARequest("lora_adapter", 1, ADAPTER_PATH),
))
```

**Before Change Output:**
```bash
response was not applied lora_request
```

So, I attempted to apply the lora_adapter to langchain_community.llms.vllm.VLLM.

**Current output:**
```bash
response applied lora_request
```

**Dependencies:**
- None

**Lint and test:**
- All tests and lint checks have passed.

---------
Co-authored-by: Um Changyong <[email protected]>
…ls (langchain-ai#27728) Added anthropic.claude-3-5-sonnet-20241022-v2:0 cost details
…similarity_search (langchain-ai#27723)

### Description/Issue:
I had problems filtering when setting up a local Milvus db and noticed that the `filter` option in `similarity_search` and `similarity_search_with_score` appeared to do nothing. Instead, the `expr` option should be used. The `expr` option is correctly used in the retriever example further down in the documentation. The `expr` option seems to be correctly passed on, for example [here](https://github.com/langchain-ai/langchain/blob/447c0dd2f051157a3ccdac49a8d5ca6c06ea1401/libs/community/langchain_community/vectorstores/milvus.py#L701)

### Solution:
Update the documentation for the functions mentioned to show intended behavior.

---------
Co-authored-by: Chester Curme <[email protected]>
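The failure mode described above is worth illustrating: a keyword argument the store doesn't recognize is silently swallowed, so filtering appears to "do nothing". A toy stand-in (not the Milvus client; `expr` here is a callable predicate for the sketch, whereas Milvus takes a boolean expression string such as `'kind == "animal"'`):

```python
class TinyStore:
    """Toy vector-store-like class: filtering is driven by `expr`,
    while unknown kwargs such as `filter` are silently ignored."""

    def __init__(self, docs):
        self.docs = docs  # list of (text, metadata) pairs

    def similarity_search(self, query, k=4, expr=None, **kwargs):
        # Only `expr` participates in filtering; a `filter=` kwarg
        # lands in **kwargs and has no effect, mirroring the docs issue.
        hits = [d for d in self.docs if expr is None or expr(d[1])]
        return hits[:k]

store = TinyStore([("cat", {"kind": "animal"}), ("car", {"kind": "thing"})])
```

Passing `filter=...` returns unfiltered results, while `expr=...` filters as intended, which is exactly the distinction the documentation fix makes.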
…eam() and astream() response (langchain-ai#27677) Thank you for contributing to LangChain!

- **Description:** Add token_usage and model_name metadata to ChatZhipuAI stream() and astream() response
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None

- [ ] **Add tests and docs**: If you're adding a new integration, please include
  1. a test for the integration, preferably unit tests that do not rely on network access,
  2. an example notebook showing its use. It lives in the `docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

Co-authored-by: jianfehuang <[email protected]>
…-ai#27639) Fixed a small typo (added a missing "t" in ElasticsearchRetriever docs page) https://python.langchain.com/docs/integrations/retrievers/elasticsearch_retriever/#:~:text=It%20is%20possible%20to%20cusomize%20the%20function%20tha%20maps%20an%20Elasticsearch%20result%20(hit)%20to%20a%20LangChain%20document.
## Before / After

(before/after screenshots)

## Typo

`(either in PR summary of in a linked issue)` => `either in PR summary or in a linked issue`

---------
Co-authored-by: Chester Curme <[email protected]>
…angchain-ai#27372) Thank you for contributing to LangChain! **Description:** Added support for passing model parameters to the OpenAI Assistant, enabled in the `OpenAIAssistantV2Runnable` class. **Issue:** NA **Dependencies:** None **Twitter handle:** luizf0992
- **Description:** add/improve docstrings of OpenAIAssistantV2Runnable
- **Issue:** the issue langchain-ai#21983

Co-authored-by: Chester Curme <[email protected]>
- Add xfail on integration test (fails [> 50% of the time](https://github.com/langchain-ai/langchain/actions/workflows/scheduled_test.yml)); - Remove xfail on passing unit test.
`ChatDatabricks` added support for structured output and JSON mode in the last release. This PR updates the feature table accordingly. Signed-off-by: B-Step62 <[email protected]>
**Description:**
- Fix bug in Replicate LLM class, where it was looking for parameter names in a place where they no longer exist in pydantic 2, resulting in the "Field required" validation error described in the issue.
- Fix Replicate LLM integration tests to:
  - Use active models on Replicate.
  - Use the correct model parameter `max_new_tokens` as shown in the [Replicate docs](https://replicate.com/docs/guides/language-models/how-to-use#minimum-and-maximum-new-tokens).
  - Use callbacks instead of deprecated callback_manager.

**Issue:** langchain-ai#26937
**Dependencies:** n/a
**Twitter handle:** n/a

---------
Signed-off-by: Fayvor Love <[email protected]>
Co-authored-by: Chester Curme <[email protected]>
Add how-to guides to [Run notebooks job](https://github.com/langchain-ai/langchain/actions/workflows/run_notebooks.yml) and fix existing notebooks.
- As with tutorials, cassettes must be updated when HTTP calls in guides change (by running the existing [script](https://github.com/langchain-ai/langchain/blob/master/docs/scripts/update_cassettes.sh)).
- Cassettes now total ~62mb over 474 files.
- `docs/scripts/prepare_notebooks_for_ci.py` lists a number of notebooks that do not run (e.g., due to requiring additional infra, slowness, requiring `input()`, etc.).
This change was created automatically using the following context: