In the era of AI, what does knowledge management (KM) truly mean? Is it about storing information, or about making knowledge dynamic, discoverable, and actionable in real time?
For decades, KM has focused on capturing and organizing information—wikis, document libraries, and structured taxonomies. But today’s organizations need more than static repositories. They need systems that surface answers instantly, connect insights across silos, and turn content into action.
Each organization’s KM strategy depends on its unique mix of content types, governance needs, and user expectations. Some rely on structured formats and rigid taxonomies; others have sprawling repositories of Office files, PDFs, and web pages.
On September 18th, Microsoft released its Knowledge Agent (preview). The agent covers a wide range of enterprise knowledge management tasks, such as:
Ask questions about your content
Summarize files
Compare content
Generate FAQ from files
Create audio overviews (Word & PDF)
Review and fix a SharePoint site
Create SharePoint pages, sections, and content
Refine SharePoint pages
The agent currently supports:
Microsoft Office: DOC, DOCX, PPT, PPTX, XLSX
Modern Microsoft 365: FLUID, LOOP
Universal: PDF, TXT, RTF
Web files: ASPX, HTM, HTML
OpenDocument: ODT, ODP
This is especially powerful for organizations that don’t have structured file types like Markdown or JSON but still want AI-driven KM. Instead of forcing a migration to rigid formats, Knowledge Agent works with what you already have.
Traditional KM tools often require heavy upfront structuring—taxonomies, metadata, and governance models. But in reality, most enterprises have unstructured or semi-structured content scattered across SharePoint, Teams, and legacy systems. Knowledge Agent bridges that gap by:
Reducing friction: No need to reformat everything into specialized schemas.
Enhancing discoverability: Natural language Q&A over your existing content.
Accelerating content improvement: Automated site reviews and page refinements.
In short, it’s a practical way to unlock the value of your existing knowledge assets while layering in AI capabilities.
What do you think of the AI era of enterprise knowledge management? What solution will you choose?
As a learning designer, I’ve worked with principles that help people absorb knowledge more effectively. In the past few years, as I’ve experimented with GenAI prompting in many ways, I’ve noticed that many of those same principles transfer surprisingly well.
I mapped a few side by side, and the parallels are striking. For example, just as we scaffold learning for students, we can scaffold prompts for AI.
Here’s a snapshot of the framework:
A few of the mappings:
Clear objectives → Define prompt intent
Scaffolding → Break tasks into steps
Reduce cognitive load → Keep prompts simple
And more…
Instructional design and prompt design share more than I expected. Which of these parallels resonates most with your work?
Ever had GenAI confidently answer your question, then backtrack when you challenged it?
Example:
Me: Is the earth flat or a sphere?
AI: A sphere.
Me: Are you sure? Why isn’t it flat?
AI: Actually, good point. The earth is flat, because…
This type of conversation with AI happens to me a lot. Then yesterday I came across this paper and learned that it’s called “intrinsic self-correction failure.”
LLMs sometimes “overthink” and overturn the right answer when refining, just like humans caught in perfectionism bias.
The paper proposes that repeating the question can help AI self-correct.
From my own practice, I’ve noticed another helpful approach: asking the AI to explain its answer.
When I do this, the model almost seems to “reflect.” It feels similar to reflection in human learning. When we pause to explain our reasoning, we often deepen our understanding. AI seems to benefit from a similar nudge.
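If you want to try it, here’s a minimal, framework-agnostic sketch of that nudge; the role/content message format is just the common chat convention, so adapt it to whatever client or tool you use:

```python
# A conversation where the follow-up asks the model to explain its reasoning
# instead of inviting it to flip its answer with "Are you sure?".
conversation = [
    {"role": "user", "content": "Is the earth flat or a sphere?"},
    {"role": "assistant", "content": "A sphere."},
    # The reflective nudge: ask for the evidence behind the answer it already gave.
    {"role": "user", "content": "Explain the evidence behind your answer, step by step."},
]
```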
Reflection works for learners. Turns out, it works for AI too. How do you keep GenAI from “over-correcting” itself?
For documentation writers managing large sets of content—enterprise knowledge bases, multi-product help portals, or internal wikis—the challenge goes beyond polishing individual sentences. You need to:
Keep a consistent voice and style across hundreds of articles
Spot duplicate or overlapping topics
Maintain accurate metadata and links
Gain insights into content gaps and structure
This is where GitHub Copilot inside Visual Studio Code stands out. Unlike generic Gen-AI chatbots, Copilot has visibility across your entire content set, not just the file you’re editing. With carefully crafted prompts and instructions, you can ask it to:
Highlight potential gaps, redundancies, or structural issues.
Suggest rewrites that preserve consistency across articles.
Surface related content to link or cross-reference.
In other words, Copilot isn’t just a text improver—it’s a content intelligence partner for documentation at scale. And if you’re already working in VS Code, it integrates directly into your workflow without requiring a new toolset.
What Can GitHub Copilot Do for Your Documentation?
Once installed, GitHub Copilot can work directly on your .md, .html, .xml, or .yml files. Here’s how it helps across both single documents and large collections:
Refine Specific Text Blocks
Highlight a section and ask Copilot to improve the writing. This makes it easy to sharpen clarity and tone in targeted areas.
Suggest Edits Across the Entire Article
Use Copilot Chat to get suggestions for consistency and flow across an entire piece.
Fill in Metadata and Unfinished Sections
Copilot can auto-complete metadata fields or unfinished drafts, reducing the chance of missing key details.
Surface Relevant Links
While you’re writing, Copilot may suggest links to related articles in your repository—helping you connect content for the reader.
Spot Duplicates and Gaps (emerging use)
With tailored prompts, you can ask Copilot to scan for overlap between articles or flag areas where documentation is thin. This gives you content architecture insights, not just sentence-level edits.
Note: While GitHub Copilot offers a free tier, paid plans provide additional features and higher usage limits.
Why GitHub Copilot Is Different from Copilot in Word and Other Gen-AI Chatbots
At first glance, you might think these features look similar to what Copilot in Word or other generative AI chatbots can do. But GitHub Copilot offers unique advantages for documentation work:
Cross-Document Awareness Because it’s embedded in VS Code, Copilot has visibility into your entire local repo. For example, if you’re writing about pay-as-you-go billing in one article, it can pull phrasing or context from another relevant file almost instantly.
Enterprise Content Intelligence With prompts, you can ask Copilot to analyze your portfolio: identify duplicate topics, find potential links, and even suggest improvements to your information architecture. This is especially valuable for knowledge bases and enterprise-scale content libraries.
Code-Style Edit Reviews Visual Studio Code + GitHub Copilot can present suggested edits the way it presents code changes, so you review and accept or reject each edit just as you would when coding. This is different from generic Gen AI content editors, which either apply edits directly or merely suggest them.
Customizable Rules and Prompts You can set up an instruction.md file that defines rules for tone, heading style, or terminology. You can also create reusable prompt files and call them with / during chats. This ensures your writing is not just polished, but also consistent with your team’s standards.
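For illustration, a hypothetical instruction file might contain rules like these (the file name, the rules, and the product name are made-up examples, not a prescribed format):

```
# writing-instructions.md (hypothetical example)
- Use sentence case for all headings.
- Write in second person ("you") and active voice.
- Spell the product name as "Contoso Portal" on first mention.
- Keep paragraphs under four sentences; use numbered steps for procedures.
- Every article must include title, description, and last-reviewed metadata.
```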
Together, these capabilities transform GitHub Copilot from a document-level writing assistant into a documentation co-architect.
Limitations
Like any AI tool, GitHub Copilot isn’t perfect. Keep these in mind:
Always review suggestions Like any other Gen AI tool, GitHub Copilot can hallucinate. Always review its suggestions and validate its edits.
Wrap-Up: Copilot as Your Content Partner
GitHub Copilot inside Visual Studio Code isn’t just another AI writing assistant—it’s a tool that scales with your entire content ecosystem.
It refines text, polishes full articles, completes metadata, and suggests links.
It leverages cross-document awareness to reveal gaps, duplicates, and structural improvements.
It enforces custom rules and standards, ensuring consistency across hundreds of files.
And here’s where the real advantage comes in: with careful crafting of prompts and instruction files, Copilot becomes more than a reactive assistant. You can guide it to apply your team’s style, enforce terminology, highlight structural issues, and even surface information architecture insights. In other words, the quality of what Copilot gives you is shaped by the quality of what you feed it.
For content creators managing large sets of documentation, Copilot is more than a co-writer—it’s a content intelligence partner and co-architect. With thoughtful setup and prompt design, it helps you maintain quality, speed, and consistency—even at enterprise scale.
👉 Try it in your next documentation sprint and see how it transforms the way you manage your whole body of content.
My earlier posts covered chunking and vector embeddings, but I didn’t dive deep into the third pillar that makes RAG truly powerful: metadata. While chunking handles the “what” of your content and embeddings capture the “meaning,” metadata provides the essential “context” that transforms good retrieval into precise, relevant results.
The Three Pillars of RAG-Optimized Content
Chunking (The “What”): Breaks content into digestible, topic-focused pieces
Structured formats with clear headings
Single-topic chunks
Consistent templates
Vector Embeddings (The “Meaning”): Captures semantic understanding
Question-format headings
Conversational language
Semantic similarity matching
Metadata (The “Context”): Provides situational relevance
Article type and intended audience
Skill level and role requirements
Date, version, and related topics
To understand why richer metadata provides better context, we need to look at how vector embeddings are stored in a vector database. After all, when a RAG system compares and retrieves chunks, it searches the vector database for semantic matches.
So, what does a vector record (the data entry) in a vector database look like?
What’s Inside a Vector Record?
A vector record has three parts:
1. Unique ID A label that helps you quickly find the original content, which is stored separately.
2. The Vector A list of numbers that represents your content’s meaning (like a mathematical “fingerprint”). For example, text might become a list of 768 numbers.
Key rule: All vectors in the same collection must have the same length – you can’t mix different sizes.
3. Metadata Extra tags that add context, such as article type, audience, skill level, date, and version.
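To make those three parts concrete, here’s a minimal sketch of what a single record might look like (the field names are illustrative; every vector database defines its own schema):

```python
# A hypothetical vector record: unique ID + embedding + metadata tags.
vector_record = {
    # 1. Unique ID: points back to the original chunk, which is stored separately
    "id": "kb-article-042-chunk-03",
    # 2. The vector: a fixed-length list of numbers (truncated here for readability);
    #    every record in the same collection must use the same length
    "vector": [0.0132, -0.2841, 0.1097, 0.0563],
    # 3. Metadata: extra tags that add context for filtering
    "metadata": {
        "article_type": "how-to",
        "audience": "IT admins",
        "skill_level": "intermediate",
        "last_updated": "2025-03-01",
    },
}
```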
How RAG Search Combines Vector Matching with Metadata Filtering
RAG (Retrieval-Augmented Generation) search combines vector similarity with metadata filtering to make your results both relevant and contextually appropriate. The framework was first introduced by researchers at Facebook AI Research (now Meta) in 2020 (see the original paper). Here’s how the combination works:
Vector Similarity Matching When you ask a question, the system converts your question into a vector embedding (that same list of numbers we discussed). Then it searches the database for content vectors that are mathematically similar to your question vector. Think of it like finding documents that “mean” similar things to what you’re asking about.
Metadata Context Enhancement The system enhances similarity matching by also considering metadata context. These metadata filters can be set by users (when they specify requirements) or automatically by the system (based on context clues in the query). The system considers:
Time relevance: “Only show me recent information from 2023 or later”
Source credibility: “Only include content from verified authors or trusted platforms”
Content type: “Focus on technical documentation, not blog posts”
Geographic relevance: “Prioritize information relevant to my location”
This combined approach is also more efficient – metadata filtering can quickly eliminate irrelevant content before expensive similarity calculations.
The Combined Power Instead of getting thousands of somewhat-related results, you get a curated set of content that is both:
Semantically similar (the vector embeddings match your question’s meaning)
Contextually appropriate (the metadata ensures it meets your specific requirements)
For example, when you ask “How do I optimize database performance?” the system finds semantically similar content, then prioritizes results that match your context – returning recent technical articles by database experts while filtering out outdated blog posts or marketing content. You get the authoritative, current information you need.
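Here’s a rough sketch of that two-step combination in plain Python and NumPy; the filter keys and the in-memory list are stand-ins for what a real vector database does at scale:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rag_search(query_vector, records, metadata_filter, top_k=3):
    """Filter records by metadata first, then rank the survivors by similarity."""
    candidates = [
        r for r in records
        if all(r["metadata"].get(key) == value for key, value in metadata_filter.items())
    ]
    candidates.sort(
        key=lambda r: cosine_similarity(query_vector, r["vector"]), reverse=True
    )
    return candidates[:top_k]

# Example call (keys are illustrative): only recent technical documentation is
# even considered before any similarity math runs.
# results = rag_search(query_vector, records,
#                      metadata_filter={"content_type": "technical-doc", "year": 2024})
```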
What This Means for Content Creators
Understanding how metadata works in RAG systems reveals a crucial opportunity for content creators. Among the three types of metadata stored in vector databases, only one is truly under your control:
Automatically Generated Metadata:
Chunk metadata: Created during content processing (chunk size, position, relationships)
Platform metadata: Added by publishing systems (creation date, source URL, file type)
Creator-Controlled Metadata:
Universal metadata: The contextual information you can strategically add to improve intent alignment
This is where you can make the biggest impact. By enriching your content with universal metadata, you help RAG systems understand not just what your content says, but who it’s for and how it should be used.
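For example, a creator-supplied metadata block might look something like this (the field names are illustrative, not a required schema):

```python
# Universal metadata a content creator can attach to an article or chunk.
universal_metadata = {
    "article_type": "troubleshooting guide",
    "audience": "database administrators",
    "skill_level": "advanced",
    "version": "SQL Server 2022",
    "last_reviewed": "2025-01-15",
    "related_topics": ["index tuning", "query plans"],
}
```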
When you provide this contextual metadata, you’re essentially helping RAG systems deliver your content to the right person, at the right time, for the right purpose. The technical foundation we’ve explored – vector similarity plus metadata filtering – becomes much more powerful when content creators take advantage of universal metadata to improve intent alignment.
Your content doesn’t just need to be semantically relevant; it needs to be contextually perfect. Universal metadata is how you achieve that precision.
In my previous posts, we’ve explored how AI systems index content through semantic chunking rather than keyword extraction, and how they understand user intent through contextual analysis instead of pattern matching. Now comes the final piece: how AI systems actually retrieve and synthesize content to answer user questions.
This is where the practical implications for content creators become apparent.
The Fundamental Shift: From Finding Pages to Synthesizing Answers
Here’s the key difference that changes everything: Traditional search matches keywords and returns ranked pages. AI-powered search matches semantic meaning and synthesizes answers from specific content chunks.
This fundamental difference in matching and retrieval processes requires us to think about content creation in entirely new ways.
Let’s see how this works using the same example documents from my previous posts:
Document 1: “Upgrading your computer's hard drive to a solid-state drive (SSD) can dramatically improve performance. SSDs provide faster boot times and quicker file access compared to traditional drives.”
Document 2: “Slow computer performance is often caused by too many programs running simultaneously. Close unnecessary background programs and disable startup applications to fix speed issues.”
Document 3: “Regular computer maintenance prevents performance problems. Clean temporary files, update software, and run system diagnostics to keep your computer running efficiently.”
User query: “How to make my computer faster?”
How Traditional vs. AI Search Retrieve Content
How Traditional Search Matches and Retrieves
Traditional search follows a predictable process:
Keyword Matching: The system uses TF-IDF scoring, Boolean logic, and exact phrase matching to find relevant documents. It’s looking for pages that contain the words “computer,” “faster,” “make,” and related terms.
Authority-Based Ranking: PageRank algorithms, backlink analysis, and domain authority determine which pages rank highest. A page from a high-authority tech site with many backlinks will likely outrank a smaller site with identical content.
Example with our 3 computer docs: For “How to make my computer faster?”, traditional search would likely rank them this way:
Doc 1 ranks highest: Contains the exact keyword “faster” in “faster boot times” plus “improve performance”
Doc 2 ranks second: Strong semantic matches with “slow computer” and “speed issues”
Doc 3 ranks lowest: Related terms like “efficiently” and “performance” but less direct keyword matches
The user gets three separate page results. They need to click through, read each page, and synthesize their own comprehensive answer.
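To make the traditional side concrete, here’s a rough sketch of keyword-style ranking with TF-IDF and cosine similarity using scikit-learn; real engines layer Boolean logic, phrase matching, and authority signals on top of this, so treat the scores as illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Upgrading your computer's hard drive to a solid-state drive (SSD) can "
    "dramatically improve performance. SSDs provide faster boot times and "
    "quicker file access compared to traditional drives.",
    "Slow computer performance is often caused by too many programs running "
    "simultaneously. Close unnecessary background programs and disable "
    "startup applications to fix speed issues.",
    "Regular computer maintenance prevents performance problems. Clean "
    "temporary files, update software, and run system diagnostics to keep "
    "your computer running efficiently.",
]
query = "How to make my computer faster?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(docs)       # keyword-frequency index of the docs
query_vector = vectorizer.transform([query])      # the query in the same keyword space

scores = cosine_similarity(query_vector, doc_matrix)[0]
for i, score in enumerate(scores, start=1):
    print(f"Document {i}: {score:.3f}")           # ranks by keyword overlap, not meaning
```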
How AI RAG Search Matches and Retrieves
AI-powered RAG systems operate on entirely different principles:
Vector Similarity Matching:
Rather than matching keywords, the system uses cosine similarity to compare the semantic meaning of the query vector against content chunk vectors. The query “How to make my computer faster?” gets converted into a mathematical representation that captures its meaning, intent, and context.
Semantic Understanding:
The system retrieves chunks based on conceptual relationships, not just keyword presence. It understands that “SSD upgrade” relates to “making computers faster” even without shared keywords.
Multi-Chunk Synthesis:
Instead of returning separate pages, the system combines the most relevant chunks from multiple sources to create a comprehensive answer.
Example with same query: Here’s how AI would handle “How to make my computer faster?” using the chunks from my first post:
The query vector finds high semantic similarity with:
Chunk 1A: “Upgrading your computer's hard drive to a solid-state drive (SSD) can dramatically improve performance.”
Chunk 1B: “SSDs provide faster boot times and quicker file access compared to traditional drives.”
Chunk 2B: “Close unnecessary background programs and disable startup applications to fix speed issues.”
Chunk 3B: “Clean temporary files, update software, and run system diagnostics to keep your computer running efficiently.”
The AI synthesizes these chunks into a comprehensive answer covering hardware upgrades, software optimization, and maintenance—drawing from all three documents simultaneously.
Notice the difference: traditional search would return Doc 1 as the top result because it contains “faster,” even though it only covers hardware solutions. AI RAG retrieves the most semantically relevant chunks regardless of their source document, prioritizing actionable solutions over keyword frequency. It might even skip Chunk 2A (“Slow computer performance is often caused by...”) despite its strong keyword matches, because it describes problems rather than solutions.
The user gets one complete answer that addresses multiple solution pathways, all sourced from the most relevant chunks regardless of which “page” they came from.
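For a hands-on feel of the difference, here’s a minimal sketch of chunk-level semantic retrieval using the sentence-transformers library; the model name is just a common example, and a production system would store the vectors in a vector database rather than an in-memory list:

```python
from sentence_transformers import SentenceTransformer, util

chunks = {
    "1A": "Upgrading your computer's hard drive to a solid-state drive (SSD) "
          "can dramatically improve performance.",
    "1B": "SSDs provide faster boot times and quicker file access compared to traditional drives.",
    "2A": "Slow computer performance is often caused by too many programs running simultaneously.",
    "2B": "Close unnecessary background programs and disable startup applications to fix speed issues.",
    "3A": "Regular computer maintenance prevents performance problems.",
    "3B": "Clean temporary files, update software, and run system diagnostics "
          "to keep your computer running efficiently.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_embeddings = model.encode(list(chunks.values()))
query_embedding = model.encode("How to make my computer faster?")

# Rank every chunk by how close its meaning is to the query's meaning.
scores = util.cos_sim(query_embedding, chunk_embeddings)[0]
ranked = sorted(zip(chunks.keys(), scores.tolist()), key=lambda pair: pair[1], reverse=True)
for chunk_id, score in ranked:
    print(f"Chunk {chunk_id}: {score:.3f}")
```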
Why This Changes Content Strategy
This retrieval difference has profound implications for how we create content:
Chunk-Level Discoverability
Your content isn’t discovered at the page level—it’s discovered at the chunk level. Each section, paragraph, or logical unit needs to be valuable and self-contained. That perfectly written conclusion paragraph might never be found if the rest of your content doesn’t rank well, because AI systems retrieve specific chunks, not entire pages.
Comprehensive Coverage
AI systems find and combine related concepts from across your content library. This requires strategic coverage:
Instead of trying to stuff keywords into a single page, create focused pieces that together provide comprehensive coverage. Rather than one “ultimate guide to computer speed,” create separate pieces on hardware upgrades, software optimization, maintenance, and diagnostics.
Synthesis-Ready Content
Write chunks that work well when combined with others—provide complete context by:
Avoiding excessive pronoun references
Writing self-contained paragraphs and sections
The Bottom Line for Content Creators
We’ve now traced the complete AI search journey:
How AI indexes content through semantic chunking (Post 1)
Understands user intent through contextual analysis (Post 2)
Retrieves and synthesizes content through vector similarity matching (this post)
Each step reinforces the same content recommendations:
Chunk-sized content aligns with how AI indexes and retrieves information
Conversational language matches how AI understands user intent
Structured content supports AI’s semantic chunking and knowledge graph construction
Rich context supports semantic relationships that AI systems rely on, including:
Intent-driven metadata (audience, purpose, user scenarios)
Complete explanations (the why, when, and how behind recommendations)
Relationships to other concepts and solutions
Trade-offs, implications, and prerequisites
Comprehensive coverage works with how AI synthesizes multi-source answers
AI technology is rapidly evolving. What is true today may become outdated tomorrow. AI may eventually become so advanced that we don’t have to think specifically about writing for AI systems—they’ll accommodate how humans naturally write and communicate.
But no matter what era we’re in, the fundamentals of creating high-quality content remain constant. Those recommendations we’ve discussed are timeless principles of good communication: create accurate, true, and complete content; provide as much context as possible to communicate effectively; offer information in digestible, bite-sized pieces for easy consumption; write in conversational language for clarity and engagement.
Understanding how current AI systems work simply reinforces why these have always been good practices. Whether optimizing for search engines, AI systems, or human readers, the goal remains the same: communicate your expertise as clearly and completely as possible.
This completes my three-part series on AI-ready content creation. Understanding how AI indexes, interprets, and retrieves content gives us the foundation for creating content that thrives in an AI-powered world.
In my earlier post, I explained the fundamental shift from traditional search to generative AI search. Traditional search finds existing content. Generative AI creates new responses.
If you’ve been hearing recommendations about “AI-ready content” like chunk-sized content, conversational language, Q&A formats, and structured writing, these probably sound familiar. As instructional designers and content developers, we’ve used most of these approaches for years. We chunk content for better learning, write conversationally to engage readers, and use metadata for reporting and semantic web purposes.
Today, I want to examine how this shift starts at the very beginning: when systems index and process content.
What is Indexing?
Indexing is how search systems break down and organize content to make it searchable. Traditional search creates keyword indexes, while AI search creates vector embeddings and knowledge graphs from semantic chunks. The move from keywords to chunks is one of the most significant changes in how search technology works.
Let’s trace how both systems process the same content using three sample documents from my previous post:
Document 1: “Upgrading your computer's hard drive to a solid-state drive (SSD) can dramatically improve performance. SSDs provide faster boot times and quicker file access compared to traditional drives.”
Document 2: “Slow computer performance is often caused by too many programs running simultaneously. Close unnecessary background programs and disable startup applications to fix speed issues.”
Document 3: “Regular computer maintenance prevents performance problems. Clean temporary files, update software, and run system diagnostics to keep your computer running efficiently.”
User query: “How to make my computer faster?”
How does traditional search index content?
Traditional search follows three mechanical steps:
Step 1: Tokenization
This step breaks raw text into individual words. A minimal sketch of what tokenization produces for our three docs is shown below.
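Here’s that sketch in Python; a real tokenizer handles punctuation, casing, and possessives far more carefully:

```python
import re

docs = {
    "DOC1": "Upgrading your computer's hard drive to a solid-state drive (SSD) "
            "can dramatically improve performance. SSDs provide faster boot "
            "times and quicker file access compared to traditional drives.",
    "DOC2": "Slow computer performance is often caused by too many programs "
            "running simultaneously. Close unnecessary background programs and "
            "disable startup applications to fix speed issues.",
    "DOC3": "Regular computer maintenance prevents performance problems. Clean "
            "temporary files, update software, and run system diagnostics to "
            "keep your computer running efficiently.",
}

def tokenize(text):
    """Lowercase the text and split it into simple word tokens."""
    text = text.lower().replace("'s", " ")        # crude possessive handling
    return re.findall(r"[a-z0-9]+", text)

tokens = {doc_id: tokenize(text) for doc_id, text in docs.items()}
print(tokens["DOC1"])
# ['upgrading', 'your', 'computer', 'hard', 'drive', 'to', 'a', 'solid', 'state', 'drive', 'ssd', ...]
```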
Step 2: Stop word removal and stemming
Stop words are common words that appear frequently in text but carry little meaningful information for search purposes. They’re typically removed during text preprocessing to focus on content-bearing words.
Common English stop words:
a, an, the, is, are, was, were, be, been, being, have, has, had, do, does, did, will, would, could, should, may, might, can, of, in, on, at, by, for, with, to, from, up, down, into, over, under, and, or, but, not, no, yes, this, that, these, those, here, there, when, where, why, how, what, who, which, your, my, our, their
What is Stemming?
Stemming is the process of reducing words to their root form by removing suffixes, prefixes, and other word endings. The goal is to treat different forms of the same word as identical for search purposes.
Some stemming Examples:
Original Word → Stemmed Form
"running" → "run"
"runs" → "run"
"runner" → "run"
"performance" → "perform"
"performed" → "perform"
"performing" → "perform"
The three sample documents after stop words removal and stemming look like this:
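Continuing the sketch, here’s a minimal version of stop word removal plus stemming using NLTK’s PorterStemmer (the stop word list is abridged, and exact stems vary by stemmer); Document 1 is shown, and the other two documents follow the same pattern:

```python
import re
from nltk.stem import PorterStemmer   # pip install nltk

STOP_WORDS = {"a", "an", "the", "is", "are", "and", "or", "to", "of",
              "your", "can", "by", "often", "too", "many"}
stemmer = PorterStemmer()

def preprocess(text):
    """Tokenize, drop stop words, and reduce the remaining tokens to stems."""
    tokens = re.findall(r"[a-z0-9]+", text.lower().replace("'s", " "))
    return [stemmer.stem(t) for t in tokens if t not in STOP_WORDS]

doc1 = ("Upgrading your computer's hard drive to a solid-state drive (SSD) "
        "can dramatically improve performance. SSDs provide faster boot times "
        "and quicker file access compared to traditional drives.")
print(preprocess(doc1))
# ['upgrad', 'comput', 'hard', 'drive', 'solid', 'state', 'drive', 'ssd', ...]
```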
Step 3: Building the inverted index
An inverted index is like a book’s index, but instead of mapping topics to page numbers, it maps each unique word to all the documents that contain it. It’s called “inverted” because instead of going from documents to words, it goes from words to documents.
Note: For clarity and space, I’m showing only a representative subset that demonstrates key patterns.
The complete inverted index would contain entries for all ~28 unique terms from our processed documents. The key patterns include:
Terms appearing in all documents (common terms like “comput”)
Terms unique to one document (distinctive terms like “ssd”)
Terms with varying frequencies (like “program” with tf=2)
The result: An inverted index that maps each word to the documents containing it, along with frequency counts.
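Here’s a minimal sketch of how such an index is built; the token lists are abridged versions of the preprocessing output above:

```python
from collections import defaultdict

# A few stemmed tokens per document (abridged from the preprocessing step).
processed = {
    "DOC1": ["upgrad", "comput", "drive", "ssd", "improv", "perform", "faster"],
    "DOC2": ["slow", "comput", "perform", "program", "program", "speed"],
    "DOC3": ["comput", "mainten", "perform", "clean", "updat", "softwar"],
}

# Map each term to the documents that contain it, with a term frequency (tf) count.
inverted_index = defaultdict(dict)
for doc_id, terms in processed.items():
    for term in terms:
        inverted_index[term][doc_id] = inverted_index[term].get(doc_id, 0) + 1

print(inverted_index["comput"])    # {'DOC1': 1, 'DOC2': 1, 'DOC3': 1}  (appears in every doc)
print(inverted_index["ssd"])       # {'DOC1': 1}                         (unique to one doc)
print(inverted_index["program"])   # {'DOC2': 2}                         (tf=2 in Doc 2)
```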
Why inverted indexing matters for content creators:
Traditional search relies on keyword matching. This is why SEO focused on keyword density and exact phrase matching.
How do AI systems index content?
AI systems take a fundamentally different approach:
Step 1: Semantic chunking
AI doesn’t break content into words. Instead, it creates meaningful, self-contained chunks. AI systems analyze content for topic boundaries, logical sections, and complete thoughts to determine where to split content. They look for natural break points that preserve context and meaning.
What AI Systems Look For When Chunking
1. Semantic Coherence
Topic consistency: Does this section maintain the same subject matter?
Conceptual relationships: Are these sentences talking about related ideas?
Context dependency: Do these sentences need each other to make sense?
2. Structural Signals
HTML tags: Headings (H1, H2, H3), paragraphs, lists, sections
Formatting cues: Line breaks, bullet points, numbered steps
Visual hierarchy: How content is organized on the page
3. Linguistic Cues
Pronoun references: “It,” “This,” “These” that refer to previous concepts
Discourse markers: Words that signal topic shifts or continuations
4. Completeness of Information
Self-contained units: Can this chunk answer a question independently?
Context sufficiency: Does the chunk have enough background to be understood?
Action completeness: For instructions, does it contain a complete process?
5. Optimal Size Constraints
Token limits: Most AI models have processing windows (512, 1024, 4096 tokens)
Embedding efficiency: Chunks need to be small enough for accurate vector representation
Memory constraints: Balance between context preservation and processing speed
6. Content Type Recognition
Question-answer pairs: Natural chunk boundaries
Step-by-step instructions: Each step or related steps become chunks
Examples and explanations: Keep examples with their explanations
Lists and enumerations: Group related list items
For demonstration purposes, I’m breaking our sample documents by sentences, though real AI systems use more sophisticated semantic analysis:
DOC1 → Chunk 1A: "Upgrading your computer's hard drive to a solid-state drive (SSD) can dramatically improve performance."
DOC1 → Chunk 1B: "SSDs provide faster boot times and quicker file access compared to traditional drives."
DOC2 → Chunk 2A: "Slow computer performance is often caused by too many programs running simultaneously."
DOC2 → Chunk 2B: "Close unnecessary background programs and disable startup applications to fix speed issues."
DOC3 → Chunk 3A: "Regular computer maintenance prevents performance problems."
DOC3 → Chunk 3B: "Clean temporary files, update software, and run system diagnostics to keep your computer running efficiently."
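The same naive sentence-based splitting can be written in a few lines; real systems replace the regular expression with the semantic boundary analysis described above:

```python
import re

doc1 = ("Upgrading your computer's hard drive to a solid-state drive (SSD) can "
        "dramatically improve performance. SSDs provide faster boot times and "
        "quicker file access compared to traditional drives.")

def chunk_by_sentence(doc_label, text):
    """Naive chunking: split on sentence boundaries and label each piece."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return {f"{doc_label}{chr(ord('A') + i)}": s for i, s in enumerate(sentences)}

chunks = chunk_by_sentence("1", doc1)
print(chunks["1A"])   # the first sentence becomes Chunk 1A, the second Chunk 1B
```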
Step 2: Vector embedding
Vector embeddings are created using pre-trained transformer neural networks like BERT, RoBERTa, or Sentence-BERT. These models have already learned semantic relationships from massive text datasets. Chunks are tokenized first, then passed through the pre-trained models. After that, each chunk becomes a mathematical representation of meaning.
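Here’s a minimal sketch of this step using the sentence-transformers library; the model name is a common example, and any sentence-embedding model produces the same kind of fixed-length vectors:

```python
from sentence_transformers import SentenceTransformer

chunks = [
    "Upgrading your computer's hard drive to a solid-state drive (SSD) can dramatically improve performance.",
    "SSDs provide faster boot times and quicker file access compared to traditional drives.",
    "Slow computer performance is often caused by too many programs running simultaneously.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(chunks)
print(embeddings.shape)   # (3, 384): one fixed-length "fingerprint of meaning" per chunk
```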
Screenshot of the visualized knowledge graph based on the sample docs
A knowledge graph is a structured way to represent information as a network of connected entities and their relationships. Think of it like a map that shows how different concepts relate to each other. For example, it captures that “SSD improves performance” or “too many programs cause slowness.” This explicit relationship mapping helps AI systems understand not just what words appear together, but how concepts actually connect and influence each other.
How is a knowledge graph constructed?
The system analyzes each text chunk to identify: (1) Entities – the important “things” mentioned (like Computer, SSD, Performance), (2) Relationships – how these things connect to each other (like “SSD improves Performance”), and (3) Entity Types – what category each entity belongs to (Hardware, Software, Metric, Process). These extracted elements are then linked together to form a web of knowledge that captures the logical structure of the information.
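Here’s a minimal sketch of that idea, with triples extracted by hand from the sample chunks and stored in a small networkx graph; real systems use entity-recognition and relation-extraction models for this step:

```python
import networkx as nx   # pip install networkx

# (entity, relationship, entity) triples pulled manually from the sample docs.
triples = [
    ("SSD", "improves", "Performance"),
    ("SSD", "provides", "Faster boot times"),
    ("Too many programs", "causes", "Slow performance"),
    ("Closing background programs", "fixes", "Speed issues"),
    ("Regular maintenance", "prevents", "Performance problems"),
]

graph = nx.DiGraph()
for subject, relation, obj in triples:
    graph.add_edge(subject, obj, relation=relation)

# A simple reasoning path: which entities point at "Performance"?
print(list(graph.predecessors("Performance")))   # ['SSD']
```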
Vector embeddings and knowledge graphs work together as complementary approaches. Vector embeddings capture implicit semantic similarities (chunks about “SSD benefits” and “computer speed” have similar vectors even without shared keywords), while knowledge graphs capture explicit logical relationships (SSD → improves → Performance). During search, vector similarity finds semantically related content, and the knowledge graph provides reasoning paths to discover connected concepts and comprehensive answers. This combination enables both fuzzy semantic matching and precise logical reasoning.
Why does AI indexing drive the chunk-sized and structured content recommendation?
When AI systems chunk content, they look for topic boundaries, complete thoughts, and logical sections. They analyze content for natural break points that preserve context and meaning. AI systems perform better when content is already organized into self-contained, meaningful units.
When you structure content with clear section breaks and complete thoughts, you do the chunking work for the AI. This ensures related information stays together and context isn’t lost during the indexing process.
What’s coming up next?
In the next blog post of this series, I’ll dive into how generative AI and RAG-powered search reshape the way systems interpret user queries, as opposed to traditional keyword-focused methods. This post showed that AI indexes content by meaning, through chunking, vector embeddings, and concept networks. It’s equally important to understand how AI grasps what users actually mean when they search.
Back in 2018, I wrapped up a grueling 10-course Data Science specialization with a capstone project: an app that predicted the next word based on user input. Mining text, calculating probabilities, generating predictions—the whole works. Sound familiar?
Fast forward to today, and I’m at Microsoft exploring how this same technology is reshaping content creation from an instructional design perspective—how do we create content that works for both human learning and AI systems?
Since ChatGPT exploded in November 2022, everyone’s talking about “AI-ready content.” But here’s what I kept wondering: why do we need chunk-sized content? Why do metadata and Q&A formats suddenly matter more?
The Fundamental Shift: From Finding to Generating
Goals of traditional search vs. goals of Generative AI search
Generative AI search changes the search experience entirely. Instead of just finding existing content, it aims to generate new content tailored to your specific prompt. The result isn’t a list of links – it’s a synthesized, actionable response that directly answers your question. The user experience becomes: “I get actionable solutions, instantly.”
This isn’t a minor improvement – it’s a different paradigm.
Note: I’m simplifying the distinction for clarity, but the divide between “traditional search” and “generative AI search” isn’t as clear-cut as I’m describing. Even before November 2022, search engines were incorporating AI techniques like Google’s RankBrain (2015) and BERT (2019). What’s different now is the shift toward generating new content rather than just finding and ranking existing content.
How the search processes actually work
As I’ve been studying this, I realized I didn’t fully understand how different the underlying processes really are. Let me break down what I’ve learned:
How does traditional search work?
Looking under the hood, traditional search follows a pretty straightforward path: bots crawl the internet, break content down into individual words and terms that get stored with document IDs, then match your search keywords against those indexed terms. Finally, relevance algorithms rank everything and serve up that familiar list of blue links.
Traditional web search process
How does generative AI search work?
This is where it gets fascinating (and more complex). While AI systems start the same way by scanning content across the internet, everything changes at the indexing stage.
Instead of cataloging keywords, AI breaks content into meaningful chunks and creates “vector embeddings,” which are essentially mathematical representations of meaning. The system then builds real-time connections and relationships between concepts, creating a web of understanding rather than just a keyword database.
When you ask a question, the AI finds relevant chunks based on meaning, not just keyword matches. Finally, instead of handing you links to sort through, AI synthesizes information from multiple sources to create a new, personalized response tailored to your specific question.
Generative AI index process
The big realization for me was that while traditional search treats your query as a collection of words to match, AI is trying to understand what you actually want to do.
What does this difference look like in practice?
Let’s see how this works with a simplified example:
Say we have three documents about computer performance:
Document 1: “Upgrading your computer's hard drive to a solid-state drive (SSD) can dramatically improve performance. SSDs provide faster boot times and quicker file access compared to traditional drives.”
Document 2: “Slow computer performance is often caused by too many programs running simultaneously. Close unnecessary background programs and disable startup applications to fix speed issues.”
Document 3: “Regular computer maintenance prevents performance problems. Clean temporary files, update software, and run system diagnostics to keep your computer running efficiently.”
Now someone searches: “How to make my computer faster?”
Traditional search breaks the question down into keywords like “make,” “computer,” and “faster,” then returns a ranked list of documents that contain those terms. You’d get some links to click through, and you’d have to piece together the answer yourself.
But generative AI understands you want actionable instructions and synthesizes information from all three sources into a comprehensive response: “Here are three approaches you can try: First, close unnecessary programs running in the background... Second, consider upgrading to an SSD for dramatic performance improvements... Third, maintain your system regularly by cleaning temporary files and updating software...”
How have the goals of content creation evolved?
This shift has forced me to rethink what “good content” even means. As a content creator and a learning developer, I used to focus primarily on content quality (accurate, clear, complete, fresh, accessible) and discoverability (keywords, clear headings, good formatting, internal links).
Now that generative AI is here, these fundamentals still matter, but there’s a third crucial goal: reducing AI hallucinations. When AI systems generate responses, they sometimes create information that sounds plausible but is actually incorrect or misleading. The structure and clarity of our source content plays a big role in whether AI produces accurate or fabricated information.
Goals of content creation for traditional search vs. for Generative AI search
Why does this shift matter for content creators?
What surprised me most in my research was discovering that AI systems understand natural language better because large language models were trained on massive amounts of conversational data. This realization has already started changing how I create content—I’m experimenting with question-based headings and making sure each section focuses on one distinct topic.
But I’m still figuring out the bigger question: how do we measure whether these strategies work? How can we tell if our conversational language and Q&A formats truly help AI systems match user intent and generate better responses?
In my next post, I want to show you what I discovered when I dug into the technical details. The biggest eye-opener for me was realizing that when traditional searches remove “filler” words like “how to” from a user’s query, it’s stripping away crucial intent—the user wants actionable instructions, not just information.
The field is moving incredibly fast, and best practices are still being figured out by all of us. I’m sharing what I’ve learned so far, knowing that some of it might evolve as technology does.