
[Bug]: The promptPrefix (system message) seems to get truncated when conversation exceeds the context limit. #5466

Open
Lavanille777 opened this issue Jan 26, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@Lavanille777

What happened?

As the conversation approaches the context limit, I noticed that the model seems to forget its system prompt. I haven't confirmed this through packet capture yet, but it appears to be the case. The version I am using is `librechat-dev:8b31f255f54a570c794d61cef313d1c43a7cd356`.

Steps to Reproduce

With gpt-4o-2024-11-20, instruct the model to output the first sentence of its system prompt. Then, once your conversation exceeds the maximum context length, you'll notice that it can no longer recall or reproduce the system prompt.

What browsers are you seeing the problem on?

No response

Relevant log output

Screenshots

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@Lavanille777 Lavanille777 added the bug Something isn't working label Jan 26, 2025
@danny-avila
Owner

This was working when system instructions were appended before the latest message. I changed this so the system message is placed first in the chat history, to conform to what many providers expect (namely DeepSeek and Mistral), and forgot to update the truncation logic accordingly. Will fix soon.
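The fix described above amounts to exempting the system message from context truncation now that it sits first in the history. A minimal sketch (not LibreChat's actual code; the names, the `Message` shape, and the ~4-chars-per-token estimate are all assumptions for illustration) of truncation that pins the system message:

```typescript
// Hypothetical sketch: drop the oldest non-system messages to fit a
// token budget, while the system message is always kept and always first.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude token estimate (assumption: roughly 4 characters per token).
const countTokens = (m: Message): number => Math.ceil(m.content.length / 4);

function fitToContext(messages: Message[], maxTokens: number): Message[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");

  // Reserve the system message's tokens up front so it is never truncated away.
  let budget = maxTokens - system.reduce((n, m) => n + countTokens(m), 0);

  // Walk the remaining history from newest to oldest, keeping what fits.
  const kept: Message[] = [];
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = countTokens(rest[i]);
    if (cost > budget) break;
    budget -= cost;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}
```

The bug pattern reported here arises when the truncation pass instead treats the history head (now the system message) like any other old message and trims it first.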

2 participants