
Commit

simplify some tests, set temperature default to 1 instead of 0, update docs
souzatharsis committed Oct 17, 2024
1 parent be205db commit a4e2630
Showing 4 changed files with 6 additions and 11 deletions.
2 changes: 1 addition & 1 deletion podcastfy/conversation_config.yaml
@@ -17,7 +17,7 @@ engagement_techniques:
   - "anecdotes"
   - "analogies"
   - "humor"
-creativity: 0
+creativity: 1
 
 text_to_speech:
   default_tts_model: "openai"
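
For quick verification of the new default, a minimal sketch in Python (assumptions: PyYAML is installed, the path is resolved relative to the repository root, and creativity is a top-level key, as the hunk suggests):

import yaml  # PyYAML

# Read the defaults shipped with podcastfy and check the new creativity value.
with open("podcastfy/conversation_config.yaml") as f:
    config = yaml.safe_load(f)

print(config["creativity"])  # expected after this commit: 1
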
11 changes: 3 additions & 8 deletions tests/test_genai_podcast.py
@@ -11,16 +11,11 @@
 # TODO: Should be a fixture
 def sample_conversation_config():
     conversation_config = {
-        "word_count": 2000,
-        "conversation_style": ["formal", "educational"],
+        "word_count": 500,
         "roles_person1": "professor",
         "roles_person2": "student",
-        "dialogue_structure": ["Introduction", "Main Points", "Conclusion"],
         "podcast_name": "Teachfy",
-        "podcast_tagline": "Learning Through Conversation",
-        "output_language": "English",
-        "engagement_techniques": ["examples", "questions", "case studies"],
-        "creativity": 0,
+        "podcast_tagline": "Learning Through Conversation"
     }
     return conversation_config
 
@@ -54,7 +49,7 @@ def test_custom_conversation_config(self):
         """
         conversation_config = sample_conversation_config()
         content_generator = ContentGenerator(self.api_key, conversation_config)
-        input_text = "Artificial Intelligence in Education"
+        input_text = "United States of America"
 
         result = content_generator.generate_qa_content(input_text)
 
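
Given the in-file TODO ("Should be a fixture"), one possible pytest rewrite of the simplified config is sketched below; the fixture form is an assumption, and only the dictionary values come from the hunk above.

import pytest

@pytest.fixture
def sample_conversation_config():
    # Same trimmed-down values as the committed helper, exposed as a fixture.
    return {
        "word_count": 500,
        "roles_person1": "professor",
        "roles_person2": "student",
        "podcast_name": "Teachfy",
        "podcast_tagline": "Learning Through Conversation",
    }
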
2 changes: 1 addition & 1 deletion usage/config_custom.md
@@ -23,7 +23,7 @@ See [conversation_custom.md](conversation_custom.md) for more details.
 - `max_output_tokens`: 8192
   - Maximum number of tokens for the output generated by the AI model.
 - `temperature`: 0
-  - Controls randomness in the AI's output. 0 means deterministic responses.
+  - Controls randomness in the AI's output. 0 means deterministic responses. Range for gemini-1.5-pro: 0.0 - 2.0 (default: 1.0)
 - `langchain_tracing_v2`: true
   - Enables LangChain tracing for debugging and monitoring.
 
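
To illustrate the documented range, here is a hedged sketch of setting the temperature on gemini-1.5-pro via LangChain; it is not taken from the podcastfy codebase, and it assumes langchain-google-genai is installed and GOOGLE_API_KEY is set.

from langchain_google_genai import ChatGoogleGenerativeAI

# temperature=0.0 gives near-deterministic output; gemini-1.5-pro accepts up to 2.0.
llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-pro",
    temperature=1.0,          # the new default discussed in this commit
    max_output_tokens=8192,
)
print(llm.invoke("Explain what temperature does in one sentence.").content)
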
2 changes: 1 addition & 1 deletion usage/conversation_custom.md
@@ -146,7 +146,7 @@ custom_config = {
     "word_count": 200,
     "conversation_style": ["casual", "humorous"],
     "podcast_name": "Tech Chuckles",
-    "creativity": 7
+    "creativity": 0.7
 }
 generate_podcast(
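
For context, a hedged usage sketch of the corrected example: the import path and the urls/conversation_config keyword arguments follow podcastfy's usage docs, and the URL is a placeholder, since the actual call is truncated in the hunk above.

from podcastfy.client import generate_podcast

custom_config = {
    "word_count": 200,
    "conversation_style": ["casual", "humorous"],
    "podcast_name": "Tech Chuckles",
    "creativity": 0.7,  # fractional value, as corrected in this commit
}

audio_file = generate_podcast(
    urls=["https://example.com/article"],  # placeholder input
    conversation_config=custom_config,
)
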
