Tag: Prompt engineering

  • From Learning Design to Prompt Design: Principles That Transfer

    As a learning designer, I’ve worked with principles that help people absorb knowledge more effectively. Over the past few years of experimenting with GenAI prompting, I’ve noticed that many of those same principles transfer surprisingly well.

    I mapped a few side by side, and the parallels are striking. For example, just as we scaffold learning for students, we can scaffold prompts for AI.

    Here’s a snapshot of the framework (a code sketch of the scaffolding idea follows the list):

    • Clear objectives → Define prompt intent
    • Scaffolding → Break tasks into steps
    • Reduce cognitive load → Keep prompts simple
    • And more…
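
    To make the scaffolding parallel concrete, here’s a minimal sketch of a stepwise prompt, assuming the OpenAI Python client (v1+) as one possible backend; the model name and the steps themselves are illustrative, not part of the framework.

    # Scaffolding in prompt form: one large task broken into ordered steps,
    # with each answer fed back as context for the next step.
    from openai import OpenAI

    client = OpenAI()

    steps = [
        "In two sentences, explain what photosynthesis is.",
        "List the three inputs a plant needs for photosynthesis.",
        "Write one quiz question about each input.",
    ]

    messages = [{"role": "system",
                 "content": "Work through my requests one step at a time."}]
    for step in steps:
        messages.append({"role": "user", "content": step})
        reply = client.chat.completions.create(model="gpt-4o-mini",
                                               messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(answer)

    Keeping each step short is also the “reduce cognitive load” principle at work: the model handles one simple request at a time instead of one overloaded prompt.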

    Instructional design and prompt design share more than I expected.
    Which of these parallels resonates most with your work?

  • Designing prompts that encourage AI reflection

    Ever had GenAI confidently answer your question, then backtrack when you challenged it?

    Example:
    Me: Is the Earth flat or a sphere?
    AI: A sphere.
    Me: Are you sure? Why isn’t it flat?
    AI: Actually, good point. The Earth is flat, because…

    This type of conversation with AI happens to me a lot. Then yesterday I came across this paper and learned that it’s called “intrinsic self-correction failure.”

    LLMs sometimes “overthink” and overturn a correct answer while refining it, much like humans caught in perfectionism bias.

    The paper proposes that repeating the question can help AI self-correct.

    From my own practice, I’ve noticed another helpful approach: asking the AI to explain its answer.

    When I do this, the model almost seems to “reflect.” It feels similar to reflection in human learning. When we pause to explain our reasoning, we often deepen our understanding. AI seems to benefit from a similar nudge.
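
    As an illustration, here’s a minimal sketch of that nudge, assuming the OpenAI Python client (v1+); the model name and the follow-up wording are mine, and the paper’s repeat-the-question variant is noted in a comment.

    # Reflection nudge: instead of challenging the answer ("Are you sure?"),
    # ask the model to explain the reasoning behind it.
    from openai import OpenAI

    client = OpenAI()

    history = [{"role": "user", "content": "Is the Earth flat or a sphere?"}]
    first = client.chat.completions.create(model="gpt-4o-mini",
                                           messages=history)
    history.append({"role": "assistant",
                    "content": first.choices[0].message.content})

    # The paper's variant would re-send the original question verbatim here;
    # this version asks for an explanation instead.
    history.append({"role": "user",
                    "content": "Explain the evidence behind your answer "
                               "step by step, then restate your conclusion."})
    second = client.chat.completions.create(model="gpt-4o-mini",
                                            messages=history)
    print(second.choices[0].message.content)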

    Reflection works for learners. Turns out, it works for AI too.
    How do you keep GenAI from “over-correcting” itself?