What feature would you like to see?

Question 1:
When using the deepseek-reasoner model, the API response includes a reasoning_content field that captures the model's reasoning process.
However, while working with the ChainOfThought() module, I noticed that the text in reasoning_content does not match the text wrapped in [[ ## reasoning ## ]].
How should we handle this inconsistency? Are there any specific guidelines or fixes to ensure alignment between reasoning_content and the [[ ## reasoning ## ]] section?
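For illustration, here is a minimal sketch of how the two texts can be compared side by side. `pred.reasoning` holds the parsed [[ ## reasoning ## ]] text, while the native reasoning is read back from the raw litellm response. The history-entry layout and attribute path are assumptions based on the log further below, not a documented API.

```python
import dspy

# Minimal sketch, assuming dspy is configured with the deepseek-reasoner model,
# e.g. dspy.configure(lm=dspy.LM("deepseek/deepseek-reasoner")).
cot = dspy.ChainOfThought("question -> answer")
pred = cot(question="Who is Apple's CEO?")

# Text parsed out of the [[ ## reasoning ## ]] block of the completion.
print(pred.reasoning)

# Native chain of thought returned by the provider. The history-entry key
# ("response") and the attribute path are assumptions taken from the litellm
# ModelResponse shown in the log below; they may differ across versions.
raw_response = dspy.settings.lm.history[-1]["response"]
print(raw_response.choices[0].message.reasoning_content)
```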
Question 2:
Currently, when using the deepseek-reasoner model, there is no straightforward way to access the model's internal reasoning results (reasoning_content).
A simple and feasible approach would be to include the reasoning_content field in the output of inspect_history().
This would let users clearly view the model's reasoning process, improving transparency and debugging.
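To make the proposal concrete, here is a rough, hedged sketch of a helper in that spirit. It is not dspy's actual inspect_history() implementation; the helper name, the history-entry keys ("response"), and the message attributes are assumptions.

```python
import dspy

def inspect_history_with_reasoning(n: int = 1) -> None:
    """Rough sketch (not the actual patch): print the last n LM calls together
    with any native reasoning_content the provider returned."""
    for entry in dspy.settings.lm.history[-n:]:
        response = entry.get("response")  # assumed key for the raw litellm response
        for choice in getattr(response, "choices", []):
            message = choice.message
            native = getattr(message, "reasoning_content", None)
            if native:
                print("[[ ## reasoning_content ## ]]")
                print(native)
                print()
            print(message.content)
```

A real implementation would presumably live next to inspect_history() itself, so the native reasoning appears alongside the formatted [[ ## reasoning ## ]] section.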
Thank you for your contributions to this great project.
If possible, I would like to know your thoughts on this issue. I have already written a small portion of the code, and I am willing to make changes to the rest if needed. PR
Would you like to contribute?
Yes, I'd like to help implement this.
No, I just want to request it.
Additional Context
Here is the log of my testing:
```python
import dspy  # the LM is assumed to be configured elsewhere, e.g. with deepseek-reasoner

react = dspy.ChainOfThought("question -> answer")
pred = react(question="Who is Apple's CEO?")
```
log:
>>>litellm resp:
ModelResponse(id='92ac4fca-75c8-44ff-b2c4-d596206ee2e7', created=1739098591, model='deepseek-reasoner', object='chat.completion', system_fingerprint='fp_7e73fd9a08', choices=[Choices(finish_reason='stop', index=0, message=Message(content="[[ ## reasoning ## ]]\nThe question asks for the current CEO of Apple. As of the latest available information, Tim Cook has been serving as Apple's CEO since August 2011, following Steve Jobs' resignation. There have been no recent announcements indicating a change in this position.\n\n[[ ## answer ## ]]\nTim Cook\n\n[[ ## completed ## ]]", role='assistant', tool_calls=None, function_call=None, provider_specific_fields={'reasoning_content': "Okay, the user is asking who Apple's CEO is. Let me recall the current information. I know that Tim Cook has been the CEO of Apple for several years now, taking over after Steve Jobs. But wait, when exactly did he become CEO? I think it was around 2011. Let me double-check that. Yes, Steve Jobs resigned in August 2011 and recommended Cook as his successor. Since then, Cook has been leading Apple. Are there any recent changes? I haven't heard any news about a new CEO, so it's safe to assume he's still in the role. Also, checking Apple's official website or recent press releases would confirm this. No conflicting information comes to mind. Therefore, the answer should be Tim Cook."}, reasoning_content="Okay, the user is asking who Apple's CEO is. Let me recall the current information. I know that Tim Cook has been the CEO of Apple for several years now, taking over after Steve Jobs. But wait, when exactly did he become CEO? I think it was around 2011. Let me double-check that. Yes, Steve Jobs resigned in August 2011 and recommended Cook as his successor. Since then, Cook has been leading Apple. Are there any recent changes? I haven't heard any news about a new CEO, so it's safe to assume he's still in the role. Also, checking Apple's official website or recent press releases would confirm this. No conflicting information comes to mind. Therefore, the answer should be Tim Cook."))], usage=Usage(completion_tokens=226, prompt_tokens=167, total_tokens=393, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=153, rejected_prediction_tokens=None, text_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=64, text_tokens=None, image_tokens=None), prompt_cache_hit_tokens=64, prompt_cache_miss_tokens=103))
answer Tim Cook
>>> inspect_history():
[2025-02-09T18:57:08.127734]
System message:
Your input fields are:
1. `question` (str)
Your output fields are:
1. `reasoning` (str)
2. `answer` (str)
All interactions will be structured in the following way, with the appropriate values filled in.
[[ ## question ## ]]
{question}
[[ ## reasoning ## ]]
{reasoning}
[[ ## answer ## ]]
{answer}
[[ ## completed ## ]]
In adhering to this structure, your objective is:
Given the fields `question`, produce the fields `answer`.
User message:
[[ ## question ## ]]
Who is Apple's CEO?
Respond with the corresponding output fields, starting with the field `[[ ## reasoning ## ]]`, then `[[ ## answer ## ]]`, and then ending with the marker for `[[ ## completed ## ]]`.
Response:
[[ ## reasoning ## ]]
The question asks for the current CEO of Apple. As of the latest available information, Tim Cook has been serving as Apple's CEO since August 2011, following Steve Jobs' resignation. There have been no recent announcements indicating a change in this position.
[[ ## answer ## ]]
Tim Cook
[[ ## completed ## ]]
Process finished with exit code 0