
Conversation

@snehangshu-splunk
Collaborator

Goal: QA test Module 1 and Module 2 to identify errors and inconsistencies.

…creasing from ~90 minutes to ~300 minutes (~4-5 hours) for better accuracy in user expectations.
…jectives, updating descriptions for code review, debugging, and refactoring techniques. Improved next steps section to reflect new learning outcomes and best practices for software development workflows.
@snehangshu-splunk snehangshu-splunk changed the title QA Test Module 1 and Modul 2 QA Test Module 1 and Module 2 Oct 14, 2025
@snehangshu-splunk snehangshu-splunk marked this pull request as ready for review October 14, 2025 09:49
@gailcarmichael

gailcarmichael commented Oct 21, 2025

Review comments:

Setup: Environment Configuration

  • Small thing, but see if you can have a newline above option b under Step 2. Also, it's a bit confusing to see options A-C in the list, then a heading just for Options B and C later; you may be able to organize that a bit more clearly.
  • In Step 3, maybe add "run this cell" (maybe this isn't needed when you are running locally and it's obvious though...I am just looking on GitHub).

Core Prompt Engineering Techniques

  • I like the introduction to the 8 tactics!
  • You mention taking short breaks and describe the break points; can you break the Python notebook up so it's a different file for each tactic? Or is that not technically feasible? (I'm not sure how the setup step works...like whether it only applies when running cells from the notebook it's part of?)
    • Maybe if you can't break it up, explain why when you talk about the breakpoints.

Tactic 0: Write Clear Instructions

  • This kind of came out of nowhere. I get it, but I think it should be mentioned above with the 8. It can just be in the text instead of in the squares. You can frame it as something like "Before you can apply the advanced techniques, you need to make sure you have clear instructions." Or maybe it could be a thin box that goes along the top of the three (full width).
    • Maybe you can even relate it to that "laying the foundation" analogy you had before, also using that idea to replace the first line in this section, which doesn't read super well with the "Foundation Principle" first.
  • The bolded "headings" for the 'golden rule' and 'software eng application' don't really make sense to me. I would remove those and just have sentences that serve to explain and expand on the core principle.
  • "Another way to achieve specificity using the system prompt." Can you explain more what the system prompt is here?

👉 I think you should add horizontal lines between every tactic

Tactic 1: Role Prompting

  • I notice this tactic has different bolded "subheadings" than tactic 2. I would try to make them all consistent in this way (except tactic 0, which is kind of a different/simpler one). I would match to what tactic 2 does.
  • The system parameter is again mentioned but not defined. I don't recall if this is in module 1, but either way some very brief introduction to how these prompts are being structured with JSON and what system means should come earlier than this tactic. (A minimal sketch of the kind of structure I mean is included after this list.)
  • "improves focus by keeping LLM"...need to add "the" before "LLM"
  • In the coding examples, it's not clear to me how "this tactic transforms into" the items in the list. Those aren't the roles. Maybe you mean something more like "the example prompts below use the roles to get better responses for scenarios like these:"
  • "Below cells show" - add "The" before "below"
  • "Checkpoint: Compare the Responses" - checkpoint might be confused with breakpoint...maybe use a different word?
  • In the practice part there is a run-on sentence: "...test coverage edit the system prompt:" (start a new sentence with edit)
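
For reference, here is a minimal sketch of the kind of system/JSON structure these comments are asking to have introduced earlier. It assumes an OpenAI-style chat-completions client; the client setup, model name, and review scenario are invented for illustration and are not taken from the notebook.

# Minimal sketch, assuming an OpenAI-style chat API; model name and scenario are invented.
# "system" sets the persona once; "user" carries the actual request.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name only
    messages=[
        {"role": "system", "content": "You are a senior Python reviewer focused on security."},
        {"role": "user", "content": (
            "Review this function for SQL injection risks:\n\n"
            "def get_user(db, name):\n"
            "    return db.execute(f\"SELECT * FROM users WHERE name = '{name}'\")"
        )},
    ],
)
print(response.choices[0].message.content)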

(continued in next comment)

@gailcarmichael

gailcarmichael commented Oct 21, 2025

Tactic 2: Structured Inputs

  • "When your prompts involve multiple components like ... can be a game-changer." Missing part of the sentence. Change last part to ", it can be a game-changer."
  • It seems tactics 0 and 1 were already using structured prompts to an extent, no? In terms of laying them out as JSON and separating system, etc.?
  • You start with # delimiters but don't really say what they are for and how they help, you just give the example. Are they comments? Do they cause the LLM to behave differently? Can you make up any kind of title with them?
  • For XML delimiters, does it matter what you use? Is there a standard set of tags that LLMs are typically trained on? (You might be implying this in your text, but it's not clear, as everything reads as a suggestion.) A small sketch contrasting the two delimiter styles follows this list.
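
To make the question concrete, here is a tiny illustration of the two delimiter styles being discussed. The section titles and tag names are made up, which is exactly the point of the question: how much do the specific names matter?

# Illustrative only: the same prompt delimited two ways. Titles and tag names are invented.

markdown_style = """# Task
Review the function below for bugs.

# Code
def add(a, b):
    return a - b

# Output format
Return a bulleted list of issues."""

xml_style = """<task>Review the function below for bugs.</task>
<code>
def add(a, b):
    return a - b
</code>
<output_format>Return a bulleted list of issues.</output_format>"""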

Suggested Break Point #1

  • I haven't seen any real exercises / active learning yet. Is there a way to add more in? It could be questions about the examples, quick knowledge checks, and, perhaps best of all, writing a simple prompt yourself or updating a sub-par prompt to use each tactic as it is introduced.
    • ⭐⭐⭐ You should have active learning activities for every tactic right after it is introduced. These should be designed to help you see clearly why using a tactic works better than when you don't and to play on common misconceptions on how to apply the tactic. ⭐⭐⭐

Tactic 3: Few-Shot Examples

  • "Teach AI your preferred styles and standards" - do you mean coding styles here? Grammatical? Or both / something else? I would clarify a bit.
    • Also, it seems from the next paragraph that you are doing this through examples, so include that - "Teach AI your preferred styles and standards through carefully crafted examples" or something similar
  • I think what you wrote in system for the example needs some explanation so you can get insight into why that works, since it's different from the role example. (Or maybe after adding the definition of what system is earlier it'll be obvious.) A rough sketch of the few-shot structure I'm picturing follows below.
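
For what it's worth, this is the few-shot structure I am picturing, where the example pair (not the system text) carries the style being taught. The message contents are invented, not taken from the notebook.

# Sketch of a few-shot message list (contents invented). The user/assistant example
# pair in the middle demonstrates the preferred style; system only states the task.
messages = [
    {"role": "system", "content": "You write Python docstrings in Google style."},
    # Example: an input followed by the exact kind of output we want back
    {"role": "user", "content": "def area(r): return 3.14159 * r ** 2"},
    {"role": "assistant", "content": (
        '"""Compute the area of a circle.\n\n'
        'Args:\n    r: Radius of the circle.\n\n'
        'Returns:\n    The area as a float.\n"""'
    )},
    # The real request comes last and should follow the demonstrated pattern
    {"role": "user", "content": "def clamp(x, lo, hi): return max(lo, min(x, hi))"},
]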

Tactic 4: Chain-of-Thought Reasoning

  • "...giving AI models space to think can dramatically improve performance" - maybe it's standard but I think this is a weird turn of phrase that doesn't really capture what's really happening here...it's more about breaking problems down into smaller pieces and solving them step by step right?
  • No example? Seems it is coming up in the next section, so maybe say so.
  • "Tactic: Give Models Time to Work Before Judging" - it's not really clear that this isn't tactic 5. Maybe don't use "tactic" in the sub-headings here, but rather "CoT Technique:"
    • In fact I would write this section not following the tactic template at all, but just write it as a discussion leading to a key point.
  • Is the only difference in the example really just "with a methodical approach"? I think this warrants some discussion - why is that tiny change enough? What other things can you do to a prompt that trigger similar results? (A small before/after contrast like the sketch after this list might help make that concrete.)
  • "Bad approach: The AI might agree with the student too quickly without thorough analysis" - this seems to be the first mention of a student?
  • The Golden Rule: "Don't let the AI judge until it has worked through the problem itself." - didn't you have a different golden rule above?
  • Systematic Code Analysis using Chain of Thoughts - this example should have more introduction and discussion, including pointing out how it is using CoT.
  • Practice Exercise: Combine All Techniques - where is the practice? It looks like another example, without discussion again
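
To illustrate the "tiny change" point, here is the kind of before/after contrast I have in mind; the wording and the buggy function are invented, not from the notebook.

# Sketch contrasting a "judge immediately" prompt with a "work it out first" prompt.
# The second prompt forces the model to trace the code before agreeing or disagreeing.

direct_prompt = (
    "A student says this function has an off-by-one bug. Are they right?\n\n"
    "def last_n(items, n):\n"
    "    return items[-n:]"
)

cot_prompt = (
    "A student says this function has an off-by-one bug.\n"
    "First, trace the function yourself on a few inputs, including n=0, "
    "writing out each step.\n"
    "Only after your own trace, say whether the student is right and why.\n\n"
    "def last_n(items, n):\n"
    "    return items[-n:]"
)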

(continued in next comment)

@gailcarmichael

gailcarmichael commented Oct 21, 2025

Tactic 5: Reference Citations

  • I think some suggestions on how to refer to documentation outside a code repo in a Copilot-style prompt (something inside an IDE) would be useful here. Or link to an article on how to do it.

Tactic 6: Prompt Chaining

  • The examples here are very long. You should walk through them part by part, adding explanations, before having the cell you can run: what parts represent chaining, how that works, how output is captured and sent to another step, etc., especially since these aspects are not covered outside the examples either. (A bare-bones sketch of the capture-and-pass mechanics follows this list.)
    • (A lot of the previous tactics' examples would benefit from having explanation added this way too.)
  • Can you have an example with a longer, more realistic piece of code? Maybe you could refer to it by URL and have the code file itself in the course's repo. You could have the learner look at the code first to see what they see before having the AI look.
    • Another useful activity would be having the learner use AI to review the code without chaining and then run it again with chaining to compare the results.
  • It seems weird that "Real-World Applications" after the examples is separated from the similar "Common Software Development Workflows" list earlier on
  • How different is the "Why Chaining Works Better Than Single Prompts" list from the "why" section earlier on?
  • I think the "self-correction chains" might need a bit more explanation on how...or perhaps with more explanation of the examples broken down it will be become more clear

@gailcarmichael

gailcarmichael commented Oct 21, 2025

Tactic 7: LLM-as-Judge

  • Same comments about breaking down the larger examples and explaining them.
  • Spacing in the lists (like "Benefits of LLM-as-Judge") needs some tweaking (sublists need a new line before the next higher-level item)
  • Nice to see the implementation patterns; they could benefit from a bit more explanation of what each pattern is, why it helps, and how best to apply it (a bare-bones generate-then-judge sketch follows this list)
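
As a concrete reference for the pattern, here is a bare-bones generate-then-judge sketch; ask is again a hypothetical placeholder and the criteria are invented.

# Sketch of the judge pattern: a second prompt scores the first prompt's output
# against explicit criteria. `ask` is a placeholder for the notebook's LLM call.

def ask(prompt: str) -> str:
    raise NotImplementedError

draft = ask("Write a commit message for a change that adds input validation to the login form.")

verdict = ask(
    "You are judging the commit message below.\n"
    "Score it 1-5 on each of: imperative mood, mentions the user-facing effect, "
    "subject line under 72 characters. Explain each score.\n\n"
    f"Commit message:\n{draft}"
)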

Tactic 8: Inner Monologue

  • Why not put this one directly after CoT, since it seems to be the same idea, just not showing the steps? (A rough sketch of the hide-the-steps part follows this list.)
  • "Benefits of Inner Monologue" seems similar to the why/when above (I think this is a theme with a number of tactics I'm realizing)

Hands-On Practice

  • Great to see that you do have practice here. However, active learning is still needed for each tactic as you go...up to you whether you want to keep practice at the end more open-ended so you have to select what techniques to use, or borrow these problems and adapt them for use with the individual tactics (I think you should keep these here since they currently apply more than one tactic at a time)
  • For the practice here, you need to give some guidance on how to tell if you did a good job with your prompt (what kind of output would you get if it's "good"?)...oh, and I see there are solutions too, so mention that earlier as well
  • You should have 2-3 exercises where you have to decide what tactics to apply, with guidance again on what "good" looks like for an output. Ideally you design the problem so that the output won't be great if you use the wrong tactics.

@gailcarmichael

gailcarmichael commented Oct 21, 2025

Overall thoughts

  • You've got the content/concepts organized well, and you do have some good practice exercises at the end.
  • You need to incorporate more active learning as each tactic is introduced. Done well, you can even reduce the "content" because much of it is conveyed through doing. (Remember, this is a learning experience, not documentation...you can always have a single key takeaways/summary at the end of the module, or even on a single page for the whole course, if you want someone to have a quick reference.)
    • Thinking this way might also let you reduce repetition from before and after the examples for the tactics.
  • Often, more guidance on how to use these tactics when working in an IDE Copilot type of agent would be super beneficial. You can even just link to articles that would help. Sometimes it's intuitive, sometimes I am not sure it would be, especially because the examples are small and self-contained, and not working on a large codebase.

Something you can check out is the idea of "worked examples" to help see what I mean about explaining parts of your examples. Here is a good paper talking about worked examples in computer science: https://dl.acm.org/doi/10.5555/2667490.2667497

…y URL to HTTPS, revising the virtual environment setup process, and adding a link to the Artifactory PyPI setup guide for Splunk users. This enhances clarity and accessibility for new users.
…ructions, including support for Claude models. Updated connection testing feedback for clarity and adjusted installation commands for a cleaner user experience. Added detailed guidance on model options and connection testing to ensure clarity for users.
@snehangshu-splunk
Collaborator Author

Setup: Environment Configuration

  • Small thing, but see if you can have a newline above option b under Step 2. Also, it's a bit confusing to see options A-C in the list, then a heading just for Options B and C later; you may be able to organize that a bit more clearly.
  • In Step 3, maybe add "run this cell" (maybe this isn't needed when you are running locally and it's obvious though...I am just looking on GitHub).

Fixed in the Enhance module 1 notebook with improved GitHub Copilot API setup inst…

…prompting techniques. Updated setup instructions for GitHub Copilot API, including detailed guidance on model options and connection testing. Improved formatting and added hands-on practice activities to reinforce learning of the eight core tactics. Enhanced self-assessment and progress tracking sections for better user engagement.
@snehangshu-splunk
Collaborator Author

Core Prompt Engineering Techniques

  • I like the introduction to the 8 tactics!

  • You mention taking short breaks and describe the break points; can you break the Python notebook up so it's a different file for each tactic? Or is that not technically feasible? (I'm not sure how the setup step works...like whether it only applies when running cells from the notebook it's part of?)

    • Maybe if you can't break it up, explain why when you talk about the breakpoints.

Tactic 0: Write Clear Instructions

  • This kind of came out of nowhere. I get it, but I think it should be mentioned above with the 8. It can just be in the text instead of in the squares. You can frame it as something like "Before you can apply the advanced techniques, you need to make sure you have clear instructions." Or maybe it could be a thin box that goes along the top of the three (full width).

    • Maybe you can even relate it to that "laying the foundation" analogy you had before, also using that idea to replace the first line in this section, which doesn't read super well with the "Foundation Principle" first.
  • The bolded "headings" for the 'golden rule' and 'software eng application' don't really make sense to me. I would remove those and just have sentences that serve to explain and expand on the core principle.

  • "Another way to achieve specificity using the system prompt." Can you explain more what the system prompt is here?

👉 I think you should add horizontal lines between every tactic

Tactic 1: Role Prompting

  • I notice this tactic has different bolded "subheadings" than tactic 2. I would try to make them all consistent in this way (except tactic 0, which is kind of a different/simpler one). I would match to what tactic 2 does.
  • The system parameter is again mentioned but not defined. I don't recall if this is in module 1, but either way some very brief introduction to how these prompts are being structured with JSON and what system means should come earlier than this tactic.
  • "improves focus by keeping LLM"...need to add "the" before "LLM"
  • In the coding examples, it's not clear to me how "this tactic transforms into" the items in the list. Those aren't the roles. Maybe you mean something more like "the example prompts below use the roles to get better responses for scenarios like these:"
  • "Below cells show" - add "The" before "below"
  • "Checkpoint: Compare the Responses" - checkpoint might be confused with breakpoint...maybe use a different word?
  • In the practice part there is a run-on sentence: "...test coverage edit the system prompt:" (start a new sentence with edit)

Tactic 2: Structured Inputs

  • "When your prompts involve multiple components like ... can be a game-changer." Missing part of the sentence. Change last part to ", it can be a game-changer."
  • It seems tactics 0 and 1 were already using structured prompts to an extent, no? In terms of laying them out as JSON and separating system, etc.?
  • You start with # delimiters but don't really say what they are for and how they help, you just give the example. Are they comments? Do they cause the LLM to behave differently? Can you make up any kind of title with them?
  • For XML delimiters, does it matter what you use? Is there a standard set of tags that LLMs are typically trained on? (You might be implying this in your text, but it's not clear, as everything reads as a suggestion.)

Suggested Break Point #1

  • I haven't seen any real exercises / active learning yet. Is there a way to add more in? It could be questions about the examples, quick knowledge checks, and, perhaps best of all, writing a simple prompt yourself or updating a sub-par prompt to use each tactic as it is introduced.

    • ⭐⭐⭐ You should have active learning activities for every tactic right after it is introduced. These should be designed to help you see clearly why using a tactic works better than when you don't and to play on common misconceptions on how to apply the tactic. ⭐⭐⭐

Fixed in Revise Module 2 notebook to enhance clarity and organization of core …

…pting techniques, including best practices and terminology. Updated examples to focus on log parsing consistency and structured data extraction. Improved clarity on the importance of systematic analysis in AI reasoning, emphasizing the benefits of chain-of-thought techniques for debugging and incident response.
@snehangshu-splunk
Collaborator Author

Tactic 3: Few-Shot Examples

  • "Teach AI your preferred styles and standards" - do you mean coding styles here? Grammatical? Or both / something else? I would clarify a bit.

    • Also, it seems from the next paragraph that you are doing this through examples, so include that - "Teach AI your preferred styles and standards through carefully crafted examples" or something similar
  • I think what you wrote in system for the example needs some explanation so you can get insight into why that works, since it's different from the role example. (Or maybe after adding the definition of what system is earlier it'll be obvious.)

Tactic 4: Chain-of-Thought Reasoning

  • "...giving AI models space to think can dramatically improve performance" - maybe it's standard but I think this is a weird turn of phrase that doesn't really capture what's really happening here...it's more about breaking problems down into smaller pieces and solving them step by step right?

  • No example? Seems it is coming up in the next section, so maybe say so.

  • "Tactic: Give Models Time to Work Before Judging" - it's not really clear that this isn't tactic 5. Maybe don't use "tactic" in the sub-headings here, but rather "CoT Technique:"

    • In fact I would write this section not following the tactic template at all, but just write it as a discussion leading to a key point.
  • Is the only difference in the example really just "with a methodical approach"? I think this warrants some discussion - why is that tiny change enough? What other things can you do to a prompt that trigger similar results?

  • "Bad approach: The AI might agree with the student too quickly without thorough analysis" - this seems to be the first mention of a student?

  • The Golden Rule: "Don't let the AI judge until it has worked through the problem itself." - didn't you have a different golden rule above?

  • Systematic Code Analysis using Chain of Thoughts - this example should have more introduction and discussion, including pointing out how it is using CoT.

  • Practice Exercise: Combine All Techniques - where is the practice? It looks like another example, without discussion again

Fixed in Enhance Module 2 notebook with detailed explanations of few-shot prom…

…ith external documentation in IDEs. Updated best practices for structuring documents with XML tags and added detailed examples for effective context provision. Revised the "Try It Yourself" section to clarify misconceptions about AI's handling of documentation, emphasizing the importance of quote extraction to prevent hallucinations. Improved overall clarity and organization of content.
@snehangshu-splunk
Collaborator Author

Tactic 5: Reference Citations

  • I think some suggestions on how to refer to documentation outside a code repo in a Copilot-style prompt (something inside an IDE) would be useful here. Or link to an article on how to do it.

Fixed in Enhance Module 2 notebook with comprehensive guidelines for working w…
