Consider adding a relevance cutoff as a setting via an environment variable. With larger knowledge bases, file search returns quite a lot of document chunks, including ones with quite poor relevance scores. If you use one of the larger models (e.g. GPT-4o or Claude 3.5 Sonnet), this can increase costs quite quickly.
Am I correct to assume that the lower the relevance score, the better? In our searches, the actually relevant documents usually have a relevance score of around 0.2-0.3, while the least relevant are around 0.7-0.8. In total it returns about 25 pages of text, which is quite a lot.
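For illustration only, here is a minimal sketch of what such a cutoff could look like on the retrieval side. The env variable name (`FILE_SEARCH_RELEVANCE_CUTOFF`) is hypothetical, not an existing LibreChat/rag_api setting, and it assumes the score is a distance where lower means more relevant:

```python
import os

# Hypothetical env variable; not an existing LibreChat/rag_api setting.
# Assumes the score is a distance, i.e. lower = more relevant.
RELEVANCE_CUTOFF = float(os.getenv("FILE_SEARCH_RELEVANCE_CUTOFF", "1.0"))

def filter_chunks(results):
    """Drop document chunks whose relevance score exceeds the cutoff.

    `results` is assumed to be a list of (chunk_text, score) tuples,
    as returned by the vector store search.
    """
    return [(text, score) for text, score in results if score <= RELEVANCE_CUTOFF]
```

With a default of `1.0` the filter would be effectively disabled, so existing behavior stays unchanged unless the variable is set.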
After some additional thought, it would be beneficial to have a default relevance limit, with the option to override that default for a specific endpoint/agent in LibreChat. In some use cases you want only very relevant results; in others, a broader scope of results, e.g. for additional context.
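As a rough, self-contained sketch of that idea (the names are my own, not existing LibreChat options), the per-agent value would simply win over the global default when set:

```python
import os

# Hypothetical settings; not existing LibreChat options.
DEFAULT_CUTOFF = float(os.getenv("FILE_SEARCH_RELEVANCE_CUTOFF", "1.0"))

def effective_cutoff(agent_cutoff=None):
    """Use the agent/endpoint-specific cutoff if configured, else the global default."""
    return agent_cutoff if agent_cutoff is not None else DEFAULT_CUTOFF

# A "precise" agent keeps only very close matches (assuming lower score = more relevant),
# while agents without an explicit cutoff fall back to the env-configured default.
assert effective_cutoff(0.3) == 0.3
assert effective_cutoff() == DEFAULT_CUTOFF
```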