Tag: Instructional Design

  • From Learning Design to Prompt Design: Principles That Transfer


    As a learning designer, I’ve worked with principles that help people absorb knowledge more effectively. In the past few years, as I’ve experimented with GenAI prompting in many ways, I’ve noticed that many of those same principles transfer surprisingly well.

    I mapped a few side by side. For example, just as we scaffold learning for students, we can scaffold prompts for AI (see the sketch after the list below).

    Here’s a snapshot of the framework. The parallels are striking:

    • Clear objectives → Define prompt intent
    • Scaffolding → Break tasks into steps
    • Reduce cognitive load → Keep prompts simple
    • And more…
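
    To make the scaffolding parallel concrete, here is a minimal sketch in Python. The task breakdown and prompt wording are purely illustrative:

    ```python
    # A "flat" prompt asks for everything at once; a scaffolded sequence
    # breaks the task into steps, feeding each result into the next.
    flat_prompt = "Write a complete onboarding course for new data analysts."

    scaffolded_prompts = [
        "Step 1: List the five core skills a new data analyst needs.",
        "Step 2: For each skill, draft one measurable learning objective.",
        "Step 3: For each objective, outline a short practice activity.",
    ]

    for prompt in scaffolded_prompts:
        print(prompt)  # in practice, send each step to the model in turn
    ```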

    Instructional design and prompt design share more than I expected.
    Which of these parallels resonates most with your work?

  • Designing prompts that encourage AI reflection


    Ever had GenAI confidently answer your question, then backtrack when you challenged it?


    Example:
    Me: Is the earth flat or a sphere?
    AI: A sphere.
    Me: Are you sure? Why isn’t it flat?
    AI: Actually, good point. The earth is flat, because…


    This type of conversation with AI happens to me a lot. Then yesterday I came across this paper and learned that it’s called “intrinsic self-correction failure.”

    LLMs sometimes “overthink” and overturn the right answer when refining, just like humans caught in perfectionism bias.

    The paper proposes that repeating the question can help AI self-correct.

    From my own practice, I’ve noticed another helpful approach: asking the AI to explain its answer.



    When I do this, the model almost seems to “reflect.” It feels similar to reflection in human learning. When we pause to explain our reasoning, we often deepen our understanding. AI seems to benefit from a similar nudge.
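
    Here is a minimal sketch of this “explain your answer” nudge, assuming an OpenAI-style chat API (the model name and prompts are illustrative):

    ```python
    # A reflection-style follow-up: instead of challenging the answer
    # ("Are you sure?"), ask the model to explain its reasoning.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    messages = [{"role": "user", "content": "Is the earth flat or a sphere?"}]
    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    messages.append({"role": "assistant", "content": first.choices[0].message.content})

    # The reflection nudge: explain, don't just defend or reverse.
    messages.append({"role": "user",
                     "content": "Explain, step by step, the evidence behind your answer."})
    second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(second.choices[0].message.content)
    ```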

    Reflection works for learners. Turns out, it works for AI too.
    How do you keep GenAI from “over-correcting” itself?

  • Beyond Blue Links: How AI Discovers and Synthesizes Content


    In my previous posts, we’ve explored how AI systems index content through semantic chunking rather than keyword extraction, and how they understand user intent through contextual analysis instead of pattern matching. Now comes the final piece: how AI systems actually retrieve and synthesize content to answer user questions.

    This is where the practical implications for content creators become apparent.

    The Fundamental Shift: From Finding Pages to Synthesizing Answers

    Here’s the key difference that changes everything: Traditional search matches keywords and returns ranked pages. AI-powered search matches semantic meaning and synthesizes answers from specific content chunks.

    This fundamental difference in matching and retrieval processes requires us to think about content creation in entirely new ways.

    Let’s see how this works using the same example documents from my previous posts:

    Document 1: Upgrading your computer's hard drive to a solid-state drive (SSD) can dramatically improve performance. SSDs provide faster boot times and quicker file access compared to traditional drives.

    Document 2: Slow computer performance is often caused by too many programs running simultaneously. Close unnecessary background programs and disable startup applications to fix speed issues.

    Document 3: Regular computer maintenance prevents performance problems. Clean temporary files, update software, and run system diagnostics to keep your computer running efficiently.

    User query: How to make my computer faster?

    How Traditional vs. AI Search Retrieve Content

    How Traditional Search Matches and Retrieves

    Traditional search follows a predictable process:

    Keyword Matching: The system uses TF-IDF scoring, Boolean logic, and exact phrase matching to find relevant documents. It’s looking for pages that contain the words “computer,” “faster,” “make,” and related terms.
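
    As a rough illustration, here is how TF-IDF keyword matching might score our three example documents against the query, using scikit-learn (a sketch, not any real engine’s implementation):

    ```python
    # TF-IDF keyword matching: documents are scored by how many of the
    # query's (rare) keywords they contain -- no understanding of meaning.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "Upgrading your computer's hard drive to a solid-state drive (SSD) "
        "can dramatically improve performance. SSDs provide faster boot times "
        "and quicker file access compared to traditional drives.",
        "Slow computer performance is often caused by too many programs "
        "running simultaneously. Close unnecessary background programs and "
        "disable startup applications to fix speed issues.",
        "Regular computer maintenance prevents performance problems. Clean "
        "temporary files, update software, and run system diagnostics to "
        "keep your computer running efficiently.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(docs)
    query_vector = vectorizer.transform(["How to make my computer faster?"])

    # Each score reflects shared keywords only ("make", "computer", "faster").
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    print(scores.round(3))
    ```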

    Authority-Based Ranking: PageRank algorithms, backlink analysis, and domain authority determine which pages rank highest. A page from a high-authority tech site with many backlinks will likely outrank a smaller site with identical content.

    Example with our 3 computer docs: For “How to make my computer faster?”, traditional search would likely rank them this way:

    • Doc 1 ranks highest: Contains the exact keyword “faster” in “faster boot times” plus “improve performance”
    • Doc 2 ranks second: Strong semantic matches with “slow computer” and “speed issues”
    • Doc 3 ranks lowest: Related terms like “efficiently” and “performance” but less direct keyword matches

    The user gets three separate page results. They need to click through, read each page, and synthesize their own comprehensive answer.

    How AI RAG Search Matches and Retrieves

    Flow chart of how AI RAG search matches and retrieves content.

    AI-powered RAG systems operate on entirely different principles:

    Vector Similarity Matching:

    Rather than matching keywords, the system uses cosine similarity to compare the semantic meaning of the query vector against content chunk vectors. The query “How to make my computer faster?” gets converted into a mathematical representation that captures its meaning, intent, and context.
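
    For contrast with the TF-IDF sketch above, here is roughly what vector similarity matching looks like, assuming the sentence-transformers library (the model name is illustrative):

    ```python
    # Vector similarity matching: query and chunks are compared by meaning.
    from sentence_transformers import SentenceTransformer
    from sklearn.metrics.pairwise import cosine_similarity

    model = SentenceTransformer("all-MiniLM-L6-v2")

    chunks = [
        "Upgrading your computer's hard drive to a solid-state drive (SSD) "
        "can dramatically improve performance.",
        "Close unnecessary background programs and disable startup "
        "applications to fix speed issues.",
        "Clean temporary files, update software, and run system diagnostics "
        "to keep your computer running efficiently.",
    ]

    # Query and chunks are embedded into the same vector space.
    chunk_vectors = model.encode(chunks)
    query_vector = model.encode(["How to make my computer faster?"])

    # Cosine similarity measures closeness of meaning, not shared words.
    scores = cosine_similarity(query_vector, chunk_vectors)[0]
    for chunk, score in sorted(zip(chunks, scores), key=lambda p: -p[1]):
        print(f"{score:.3f}  {chunk[:50]}...")
    ```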

    Semantic Understanding:

    The system retrieves chunks based on conceptual relationships, not just keyword presence. It understands that “SSD upgrade” relates to “making computers faster” even without shared keywords.

    Multi-Chunk Synthesis:

    Instead of returning separate pages, the system combines the most relevant chunks from multiple sources to create a comprehensive answer.

    Example with same query: Here’s how AI would handle “How to make my computer faster?” using the chunks from my first post:

    The query vector finds high semantic similarity with:

    • Chunk 1A: “Upgrading your computer's hard drive to a solid-state drive (SSD) can dramatically improve performance.”
    • Chunk 1B: “SSDs provide faster boot times and quicker file access compared to traditional drives.”
    • Chunk 2B: “Close unnecessary background programs and disable startup applications to fix speed issues.”
    • Chunk 3B: “Clean temporary files, update software, and run system diagnostics to keep your computer running efficiently.”

    The AI synthesizes these chunks into a comprehensive answer covering hardware upgrades, software optimization, and maintenance—drawing from all three documents simultaneously.

    Notice the difference: traditional search would return Doc 1 as the top result because it contains “faster,” even though it only covers hardware solutions. AI RAG retrieves the most semantically relevant chunks regardless of their source document, prioritizing actionable solutions over keyword frequency. It might even skip Chunk 2A (“Slow computer performance is often caused by...”) despite its strong keyword matches, because it describes problems rather than solutions.

    The user gets one complete answer that addresses multiple solution pathways, all sourced from the most relevant chunks regardless of which “page” they came from.
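
    Here is a minimal sketch of that synthesis step, assuming the retrieved chunks above and an OpenAI-style chat API (model name and prompts are illustrative):

    ```python
    # Multi-chunk synthesis: the retrieved chunks become the context for a
    # generated answer, instead of being returned as separate page links.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    retrieved_chunks = [
        "Upgrading your computer's hard drive to a solid-state drive (SSD) "
        "can dramatically improve performance.",
        "SSDs provide faster boot times and quicker file access compared to "
        "traditional drives.",
        "Close unnecessary background programs and disable startup "
        "applications to fix speed issues.",
        "Clean temporary files, update software, and run system diagnostics "
        "to keep your computer running efficiently.",
    ]

    context = "\n".join(f"- {chunk}" for chunk in retrieved_chunks)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context chunks."},
            {"role": "user",
             "content": f"Context:\n{context}\n\n"
                        "Question: How to make my computer faster?"},
        ],
    )
    print(response.choices[0].message.content)
    ```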

    Why This Changes Content Strategy

    Conceptual framework of creating content for AI retrieval: chunk-level discoverability, comprehensive coverage, and synthesis-ready content

    This retrieval difference has profound implications for how we create content:

    Chunk-Level Discoverability

    Your content isn’t discovered at the page level—it’s discovered at the chunk level. Each section, paragraph, or logical unit needs to be valuable and self-contained. That perfectly written conclusion paragraph might never be found if the rest of your content doesn’t rank well, because AI systems retrieve specific chunks, not entire pages.

    Comprehensive Coverage

    AI systems find and combine related concepts from across your content library. This requires strategic coverage:

    Instead of trying to stuff keywords into a single page, create focused pieces that together provide comprehensive coverage. Rather than one “ultimate guide to computer speed,” create separate pieces on hardware upgrades, software optimization, maintenance, and diagnostics.

    Synthesis-Ready Content

    Write chunks that work well when combined with others—provide complete context by:

    • Avoiding excessive pronoun references
    • Writing self-contained paragraphs and sections

    The Bottom Line for Content Creators

    We’ve now traced the complete AI search journey:

    • How AI indexes content through semantic chunking (Post 1)
    • How AI understands user intent through contextual analysis (Post 2)
    • How AI retrieves and synthesizes content through vector similarity matching (this post)

    Each step reinforces the same content recommendations:

    • Chunk-sized content aligns with how AI indexes and retrieves information
    • Conversational language matches how AI understands user intent
    • Structured content supports AI’s semantic chunking and knowledge graph construction
    • Rich context supports semantic relationships that AI systems rely on, including:
      • Intent-driven metadata (audience, purpose, user scenarios)
      • Complete explanations (the why, when, and how behind recommendations)
      • Relationships to other concepts and solutions
      • Trade-offs, implications, and prerequisites
    • Comprehensive coverage works with how AI synthesizes multi-source answers

    AI technology is rapidly evolving. What is true today may become outdated tomorrow. AI may eventually become so advanced that we don’t have to think specifically about writing for AI systems—they’ll accommodate how humans naturally write and communicate.

    But no matter what era we’re in, the fundamentals of creating high-quality content remain constant. Those recommendations we’ve discussed are timeless principles of good communication: create accurate, true, and complete content; provide as much context as possible to communicate effectively; offer information in digestible, bite-sized pieces for easy consumption; write in conversational language for clarity and engagement.

    Understanding how current AI systems work simply reinforces why these have always been good practices. Whether optimizing for search engines, AI systems, or human readers, the goal remains the same: communicate your expertise as clearly and completely as possible.

    This completes my three-part series on AI-ready content creation. Understanding how AI indexes, interprets, and retrieves content gives us the foundation for creating content that thrives in an AI-powered world.

  • From Keywords to Context: What AI’s Intent Understanding Means for Content Creators


    The shift from keyword matching to contextual understanding means content creators must write for comprehension, not just discovery. AI systems don’t just match words—they understand intent, context, and the unstated needs behind every query.

    In my first post, I explored how both traditional and AI-powered search follow the same fundamental steps: crawl and index content, understand user intent, then match and retrieve content. This sequence hasn’t changed. What has changed is that AI-powered search now embeds Large Language Models (LLMs) into each step.

    My last post dove deep into the indexing step, explaining how AI systems use vector embeddings and knowledge graphs to chunk content semantically. AI systems understand meaning and relationships rather than cataloging keywords.

    So what’s different about user query (intent) understanding? When someone searches for “How to make my computer faster?”, what are they really asking for?

    Traditional search engines and AI-powered search systems interpret this question in fundamentally different ways, with profound implications for how we should create content.

    The Evolution of Intent Understanding

    To appreciate how revolutionary AI-driven intent understanding is, we need to look at how search has evolved.

    The evolution of search intent understanding

    Early traditional search engines treated question words like “how,” “why,” and “where” as “stop words”—filtering them out before processing queries.

    Modern traditional search has evolved to preserve question words and use them for basic query classification. But the understanding remains relatively shallow—more like categorization than true comprehension.

    AI-powered RAG (Retrieval-Augmented Generation) systems represent a fundamental leap. They decode the full semantic meaning, understand user context, and map queries to solution pathways.

    Modern Traditional Search: Pattern Recognition

    Let’s examine how modern traditional search processes our example query “How to make my computer faster?”

    Traditional search recognizes that “How to” signals an instructional query and knows that “computer faster” relates to performance. Yet it treats these as isolated signals rather than understanding the complete situation.

    Traditional search processes the query through tokenization, preserving “How to” as a query classifier while removing low-value words like “my.” It then applies pattern recognition to classify the query type as instructional and identifies keywords related to computer performance optimization.
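
    A toy sketch of that processing pipeline (the stop list and classification rule are illustrative, not any real engine’s logic):

    ```python
    # Traditional query processing: tokenize, use "how to" as a classifier,
    # drop low-value words, and keep the remaining keywords.
    STOP_WORDS = {"my", "a", "the", "to", "is"}

    query = "How to make my computer faster?"
    tokens = query.lower().rstrip("?").split()

    query_type = "instructional" if tokens[:2] == ["how", "to"] else "informational"
    keywords = [t for t in tokens[2:] if t not in STOP_WORDS]

    print(query_type)  # instructional
    print(keywords)    # ['make', 'computer', 'faster']
    ```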

    What it can’t understand:

    • “My” implies the user has an actual problem right now—not theoretical interest
    • “Make...faster” suggests current dissatisfaction requiring immediate solutions
    • The question format expects comprehensive guidance, not scattered tips
    • A performance problem likely has multiple causes needing different approaches

    AI Search: Deep Semantic Comprehension

    A visualization of the two steps of AI search intent understanding: step 1: semantic query analysis with LLMs; step 2: query embedding

    RAG systems process the same query through multiple layers of understanding:

    Semantic Query Analysis

    When the AI receives “How to make my computer faster?”, it decodes the question’s semantic meaning:

    • How to → User needs instructional guidance, not just information
    • make → User wants to take action, transform current state
    • my → Personal problem happening now, not hypothetical
    • computer → Specific domain: personal computing, not servers or networks
    • faster → Performance dissatisfaction, seeking speed improvement
    • ? → Expects comprehensive answer, not yes/no response

    The LLM understands this isn’t someone researching computer performance theory—it’s someone frustrated with their slow computer who needs actionable solutions now.

    Query Embedding

    The query gets converted into a vector that captures semantic meaning across hundreds of dimensions. While individual dimensions are abstract mathematical representations, the vector as a whole captures:

    • The instructional nature of the request
    • The performance optimization context
    • The personal urgency
    • The expected response type (actionable guidance)

    By converting queries into the same vector space used for content indexing, AI creates the foundation for semantic matching that goes beyond keywords.
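
    A minimal sketch of that conversion, assuming the sentence-transformers library (the model name is illustrative):

    ```python
    # Query embedding: a sentence becomes one dense vector of meaning.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    vector = model.encode("How to make my computer faster?")

    # Hundreds of dimensions capture intent, domain, and urgency together.
    print(vector.shape)  # e.g. (384,)
    ```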

    The Key Difference

    While traditional search sees keywords and patterns, AI comprehends the actual situation: a frustrated user with a slow computer who needs comprehensive, actionable guidance. This semantic understanding of intent becomes the foundation for retrieval and matching.

    How Different Queries Are Understood

    This deeper understanding transforms how different queries are processed:

    Where can I find Azure pricing?

    • Traditional: Matches “Azure” + “pricing” + “find”
    • RAG: Understands commercial evaluation intent, knows you’re likely comparing options

    Why is my app slow?

    • Traditional: Diagnostic query about “app” + “slow”
    • RAG: Recognizes frustration, expects root-cause analysis and immediate fixes

    What This Means for Content Creators

    AI’s ability to understand user intent through semantic analysis and vector embeddings changes how we need to create content. Since AI understands the context behind queries (recognizing “my computer” signals a current problem needing immediate help), our content must address these deeper needs:

    1. Write Like You’re Having a Conversation

    Remember how AI decoded each word of “How to make my computer faster?” for semantic meaning? AI models excel at understanding natural language patterns because they’re trained on conversational data. Question-based headings (“How do I migrate my database?”) align perfectly with how users actually phrase their queries.

    Instead of: “Implement authentication protocols using OAuth 2.0 framework”

    Write: “Here's how to set up secure login for your app using OAuth 2.0”

    The conversational version provides contextual clues that help AI understand user intent:

    “Here's how” signals instructional content, “your app” indicates practical guidance, and “secure login” translates technical concepts to user benefits.

    2. Provide Full Context in Self-Contained Sections

    AI understands that “How to make my computer faster?” requires multiple solution types—hardware, software, and maintenance. Since AI grasps these comprehensive needs through vector embeddings, your content should provide complete context within each section.

    Include the why behind recommendations, when different solutions apply, and what trade-offs exist—all within the same content chunk. This aligns with how AI chunks content semantically and understands queries holistically.

    3. Use Intent-Driven Metadata

    Since AI converts queries into semantic vectors that capture intent (instructional need, urgency, complexity level), providing explicit intent metadata helps AI better understand your content’s purpose:

    • User intent: “As a developer, I want to implement secure authentication so that user data remains protected”
    • Level: Beginner/Intermediate/Advanced to match user expertise
    • Audience: Developer/Admin/End-user for role-based content alignment

    This metadata becomes part of the semantic understanding, helping AI match content to the right user intent.
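
    For example, intent metadata attached to a content chunk might look like this (the field names are illustrative, not a standard schema):

    ```python
    # Intent-driven metadata stored alongside a content chunk.
    chunk_metadata = {
        "user_intent": ("As a developer, I want to implement secure "
                        "authentication so that user data remains protected"),
        "level": "Intermediate",   # Beginner / Intermediate / Advanced
        "audience": "Developer",   # Developer / Admin / End-user
    }
    ```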

    The Bigger Picture

    AI’s semantic understanding of user intent changes content strategy fundamentals. Content creators must now focus on addressing the full context of user queries and consider the implicit needs that AI can detect.

    This builds on the semantic chunking we explored in my last post. AI systems use the same vector embedding approach for both indexing content and understanding queries. When both exist in the same semantic space, AI can connect content to user needs even when keywords don’t match.

    The practical impact:

    AI can now offer comprehensive, contextual answers by understanding what users need, not what they typed. But this only works when we create structured content with natural language, complete context, and clear intent signals.


    This is the second post in my three-part series on AI-ready content creation. In my first post, we explored how AI indexes content through semantic chunking rather than keyword extraction.

    Coming next: “Beyond Rankings: How AI Retrieval Transforms Content Discovery”

    Now that we understand how AI indexes content (Post 1) and interprets user intent (this post), my next post will reveal how AI systems match and retrieve content. I’ll explore:

    • How vector similarity replaces PageRank-style algorithms
    • Why knowledge graphs matter more than link structures
    • And what this means for making your content discoverable in AI-powered search
  • From retrieval to generative: How search evolution changes the future of content


    Back in 2018, I wrapped up a grueling 10-course Data Science specialization with a capstone project: an app that predicted the next word based on user input. Mining text, calculating probabilities, generating predictions—the whole works. Sound familiar?

    Fast forward to today, and I’m at Microsoft exploring how this same technology is reshaping content creation from an instructional design perspective—how do we create content that works for both human learning and AI systems?

    Since ChatGPT exploded in November 2022, everyone’s talking about “AI-ready content.” But here’s what I kept wondering: why do we need chunk-sized content? Why do metadata and Q&A formats suddenly matter more?

    The Fundamental Shift: From Finding to Generating

    Goals of traditional search vs. goals of Generative AI search

    When you searched in the pre-AI era, the system tried to find existing answers to your questions. It crawled the web, indexed content by keywords, and returned a list of links ranked by relevance. Your experience as a user? “I need to click through several links to piece together what I'm looking for.”

    Generative AI search changes the search experience entirely. Instead of just finding existing content, it aims to generate new content tailored to your specific prompt. The result isn’t a list of links – it’s a synthesized, actionable response that directly answers your question. The user experience becomes: “I get actionable solutions, instantly.”

    This isn’t a minor improvement – it’s a different paradigm.

    Note: I’m simplifying the distinction for clarity, but the divide between “traditional search” and “generative AI search” isn’t as clear-cut as I’m describing. Even before November 2022, search engines were incorporating AI techniques like Google’s RankBrain (2015) and BERT (2019). What’s different now is the shift toward generating new content rather than just finding and ranking existing content.

    How the search processes actually work

    As I’ve been studying this, I realized I didn’t fully understand how different the underlying processes really are. Let me break down what I’ve learned:

    How does traditional search work?

    Looking under the hood, traditional search follows a pretty straightforward path: bots crawl the internet, break content down into individual words and terms that get stored with document IDs, then match your search keywords against those indexed terms. Finally, relevance algorithms rank everything and serve up that familiar list of blue links.
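
    A toy version of that keyword index looks something like this (simplified; real engines add TF-IDF weights, term positions, and more):

    ```python
    # An inverted index: each term maps to the IDs of documents containing it.
    import re
    from collections import defaultdict

    docs = {
        1: "Upgrading to an SSD can dramatically improve performance.",
        2: "Close unnecessary background programs to fix speed issues.",
    }

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in re.findall(r"[a-z]+", text.lower()):
            index[term].add(doc_id)

    print(index["performance"])  # {1}
    print(index["speed"])        # {2}
    ```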

    Traditional web search process

    How does generative AI search work?

    This is where it gets fascinating (and more complex). While AI systems start the same way by scanning content across the internet, everything changes at the indexing stage.

    Instead of cataloging keywords, AI breaks content into meaningful chunks and creates “vector embeddings,” which are essentially mathematical representations of meaning. The system then builds real-time connections and relationships between concepts, creating a web of understanding rather than just a keyword database.

    When you ask a question, the AI finds relevant chunks based on meaning, not just keyword matches. Finally, instead of handing you links to sort through, AI synthesizes information from multiple sources to create a new, personalized response tailored to your specific question.
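
    Here is a minimal sketch of that indexing stage, assuming the sentence-transformers library (the chunking rule is deliberately naive):

    ```python
    # AI indexing: split content into chunks and store vector embeddings
    # (mathematical representations of meaning) instead of keywords.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    document = (
        "Upgrading your computer's hard drive to a solid-state drive (SSD) "
        "can dramatically improve performance. SSDs provide faster boot "
        "times and quicker file access compared to traditional drives."
    )

    # Naive sentence chunking; real systems chunk by meaning and structure.
    chunks = [s.strip().rstrip(".") + "." for s in document.split(". ") if s.strip()]

    # One embedding per chunk, instead of a list of indexed keywords.
    embeddings = model.encode(chunks)
    print(len(chunks), embeddings.shape)  # e.g. 2 (2, 384)
    ```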

    Generative AI index process

    The big realization for me was that while traditional search treats your query as a collection of words to match, AI is trying to understand what you actually want to do.

    What does this difference look like in practice?

    Let’s see how this works with a simplified example:

    Say we have three documents about computer performance:

    Document 1: Upgrading your computer's hard drive to a solid-state drive (SSD) can dramatically improve performance. SSDs provide faster boot times and quicker file access compared to traditional drives.

    Document 2: Slow computer performance is often caused by too many programs running simultaneously. Close unnecessary background programs and disable startup applications to fix speed issues.

    Document 3: Regular computer maintenance prevents performance problems. Clean temporary files, update software, and run system diagnostics to keep your computer running efficiently.

    Now someone searches: How to make my computer faster?

    Traditional search breaks the question down into keywords like “make,” “computer,” and “faster,” then returns a ranked list of documents that contain those terms. You’d get some links to click through, and you’d have to piece together the answer yourself.

    But generative AI understands you want actionable instructions and synthesizes information from all three sources into a comprehensive response: “Here are three approaches you can try: First, close unnecessary programs running in the background... Second, consider upgrading to an SSD for dramatic performance improvements... Third, maintain your system regularly by cleaning temporary files and updating software...”

    How have the goals of content creation evolved?

    This shift has forced me to rethink what “good content” even means. As a content creator and a learning developer, I used to focus primarily on content quality (accurate, clear, complete, fresh, accessible) and discoverability (keywords, clear headings, good formatting, internal links).

    Now that generative AI is here, these fundamentals still matter, but there’s a third crucial goal: reducing AI hallucinations. When AI systems generate responses, they sometimes create information that sounds plausible but is actually incorrect or misleading. The structure and clarity of our source content plays a big role in whether AI produces accurate or fabricated information.

    Goals of content creation for traditional search vs. for Generative AI search

    Why does this shift matter for content creators?

    What surprised me most in my research was discovering that AI systems understand natural language better because large language models were trained on massive amounts of conversational data. This realization has already started changing how I create content—I’m experimenting with question-based headings and making sure each section focuses on one distinct topic.

    But I’m still figuring out the bigger question: how do we measure whether these strategies work? How can we tell if our conversational language and Q&A formats truly help AI systems match user intent and generate better responses?

    In my next post, I want to show you what I discovered when I dug into the technical details. The biggest eye-opener for me was realizing that when traditional searches remove “filler” words like “how to” from a user’s query, it’s stripping away crucial intent—the user wants actionable instructions, not just information.

    The field is moving incredibly fast, and best practices are still being figured out by all of us. I’m sharing what I’ve learned so far, knowing that some of it might evolve as technology does.

  • D3.js: the Go-To JavaScript Library for Data Visualization


    Alright, it has been 12 days since I last updated here about my data-driven chart creation journey. During those 12 days, apart from a three-day trip to the Blue Ridge Mountains with my wonderful husband and two lovely kids, along with my friends and their two pretty daughters, I sat (or stood) in front of my computer, tried, failed, ground my teeth, and tried again. Finally, here is what I found:

    • Although there are numerous JavaScript libraries that let you achieve all kinds of functions (stacked bar charts connected to data, animated charts, mouse hover effects, etc.), so far D3.js seems to be the only one that can achieve almost all the functions I want.
    • The AnyChart.js library, at the time of writing this post, does not provide HTML5-based animation for most chart types.
    • Google Charts does not provide fully functioning HTML5 chart animations.
    • D3 can work with Python — I should try to create a Sankey diagram with D3 and R or Python as my next project.
    • When working with large data, JSON seems preferable to CSV.
    • When drawing with SVG, the (0,0) point is at the top left of the canvas, which makes defining coordinates and animations a little bit interesting.

    During this learning journey, I also learned to use JSFiddle and console.log with Chrome and Firefox. Firefox’s console seems easier to use than Chrome’s. Haha.

    Here is my first successfully running fiddle. What it does is dynamically show a stacked bar chart growing from bottom to top. I know it sounds unnecessary, but it will be embedded into an online tutorial to help finance administrators at my workplace understand the structure of the organizational budget.

  • Another cool interactive resume website


    Just saw this today in one of the discussion threads on Articulate’s E-Learning Heroes community.

    http://www.rleonardi.com/interactive-resume/ has a lot of good ideas to borrow from.


  • Using Articulate Storyline to Build a Portfolio Site


    Most instructional designers use a CMS platform such as WordPress, Wix, or Weebly to create their web portfolios.

    However, here is a whole new idea: creating the entire website using Articulate Storyline: http://tondareed.com/