Work with generative AI Agents

If your team connects MessageMind™ with a knowledge base to fuel its responses, you're using a generative AI Agent. This section contains everything you need to know about how to get your AI Agent up and running.

Before you begin

Before working on your AI Agent, get some background information on generative content and MessageMind's resolution engine.

When you've used a chatbot in the past, you've probably thought of it as slow or difficult to use. The majority of bots out there aren't great at understanding what your customers want, or knowing how to respond or take actions like a human agent would.

By combining the information in your knowledge base with cutting-edge AI, you don't just have a chatbot with MessageMind™ - you have a generative AI Agent, designed to perform tasks that human agents have previously only been able to do.

This guide will take you through MessageMind™’s technology that we use to make the customer experience with an AI Agent different from any chatbot you've used before.

Understand Large Language Models (LLMs) and Generative AI

The secret behind how your AI Agent both understands and writes messages is in the AI, or artificial intelligence, that MessageMind™ uses behind the scenes. Broadly, AI is a range of complex computer programs designed to solve problems like humans. It can address a variety of situations and incorporate a variety of types of data; in your AI Agent's case, it focuses on analyzing language to connect customers with answers.

When a customer interacts with your AI Agent, your AI Agent uses Large Language Models, or LLMs, which are computer programs trained on large amounts of text, to identify what the customer is asking for. Based on the patterns the LLM identified in the text data, an LLM can analyze a question from a customer and determine the intent behind it. Then, it can analyze information from your knowledge base and determine whether the meaning behind it matches what the customer is looking for.

Generative AI is a type of LLM that uses its analysis of existing content to create new content: it builds sentences word by word, based on which words are most likely to follow the ones it has already chosen. Using generative AI, your AI Agent constructs responses based on pieces of your knowledge base that contain the information the customer is looking for, and phrases them in a natural-sounding and conversational way.

Understand your AI Agent's Content Filters

LLM training data can contain harmful or undesirable content, and generative AI can sometimes generate details that aren't true, which are called hallucinations. To combat these issues, your AI Agent uses an additional set of models to ensure the quality of its responses.

Before sending any generated response to your customer, your AI Agent checks to make sure the response is:

  • Safe: The response doesn't contain any harmful content.
  • Relevant: The response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.
  • Accurate: The response matches the content in your knowledge base, so your AI Agent can double-check that its response is true.

With these checks in place, you can feel confident that your AI Agent has not only made sound decisions in how to help your customer, but has also sent them high-quality responses.

Understand MessageMind™’s Reasoning Engine

Your AI Agent runs on a sophisticated Reasoning Engine MessageMind™ created to provide customers with the knowledge and solutions they need.

When customers ask your AI Agent a question, it takes into account the following when deciding what to do next:

  • Conversation context: Does the conversation before the current question contain context that would help your AI Agent better answer the question?
  • Knowledge base: Does the knowledge base contain the information the customer is looking for?
  • Business systems: Are there any Actions configured with your AI Agent designed to let it fetch the information the customer is looking for?

From there, it decides how to respond to the customer:

  • Follow-up question: If your AI Agent needs more information to help the customer, it can ask for more information.
  • Knowledge base: If the answer to the customer's inquiry is in the knowledge base, it can obtain that information and use it to write a response.
  • Business systems: If the answer to the customer's inquiry is available using one of the Actions configured in your AI Agent, your AI Agent can fetch that information by making an API call.
  • Handoff: If your AI Agent is otherwise unable to respond to the customer's request, it can hand the customer off to a human agent for further assistance.

Together, the mechanism that makes these complex decisions on how to help the customer is called MessageMind™’s Reasoning Engine. Just like when a human agent makes decisions about how to help a customer based on what they know about what the customer wants, the Reasoning Engine takes into account a variety of information to figure out how to resolve the customer's inquiry as effectively as possible.

Understand How Your AI Agent Prevents Prompt Injections

Many AI chatbots are vulnerable to prompt injections or jailbreaking, which are prompts that get the chatbot to provide information that it shouldn't - for example, information that is confidential or unsafe.

The reasoning engine behind MessageMind™’s AI Agents is structured in such a way as to make adversarial LLM attacks very difficult to succeed. Specifically, it has:

  • A series of AI subsystems interacting together, each of which modifies the context surrounding a customer's message.
  • Several prompt instructions that make the task to be performed very clear, directing the AI Agent to not share inner workings and instructions, and to redirect conversations away from casual chitchat.
  • Models that aim to detect and filter out harmful content in inputs or outputs.

With state-of-the-art generative AI testing prior to new deployments, MessageMind™ ensures a secure and effective customer interaction experience.

When you connect your AI Agent to your knowledge base and start to serve automatically generated content to your customers, it might feel like magic. But it's not! This topic takes you through what happens behind the scenes when you start serving knowledge base content to customers.

How MessageMind™ ingests your knowledge base

When you link your knowledge base to your AI Agent, your AI Agent copies down all of your knowledge base content, so it can quickly search through it and serve relevant information from it. Here's how it happens:

  1. When you link your AI Agent with your knowledge base, your AI Agent imports all of your knowledge base content.

    Depending on the tools you use to create and host your knowledge base, your knowledge base then updates with different frequencies:

    • If your knowledge base is in Zendesk or Salesforce, your AI Agent checks back for updates every 15 minutes.

      • If your AI Agent hasn't had any conversations, either immediately after you linked it with your knowledge base or in the last 30 days, your AI Agent pauses syncing. To trigger a sync with your knowledge base, have a test conversation with your AI Agent.
    • If your knowledge base is hosted elsewhere, you or your MessageMind™ team have to build an integration to scrape it and upload content to MessageMind's Knowledge API. If this is the case, the frequency of updates depends on the integration.

  2. Your AI Agent splits your articles into chunks, so it doesn't have to search through long articles each time it looks for information - it can just look at the shorter chunks instead.

    While each article can cover a variety of related concepts, each chunk should only cover one key concept. Additionally, your AI Agent includes context for each chunk; each chunk contains the headings that preceded it.

  3. Your AI Agent sends each chunk to a Large Language Model (LLM), which it uses to assign the chunks numerical representations that correspond to the meaning of each chunk. These numerical values are called embeddings, and it saves them into a database.

    The database is then ready to provide information for GPT to put together into natural-sounding responses to customer questions.

How MessageMind™ creates responses from knowledge base content

After saving your knowledge base content into a database, your AI Agent is ready to provide content from it to answer your customers' questions. Here's how it does that:

  1. Your AI Agent sends the customer's query to the LLM, so it can get an embedding (a numerical value) that corresponds with the information the customer was asking for.

    Before proceeding, the AI Agent sends the content through a moderation check via the LLM to see if the customer's question was inappropriate or toxic. If it was, your AI Agent rejects the query and doesn't continue with the answer generation process.

  2. Your AI Agent then compares embeddings between the customer's question and the chunks in its database, to see if it can find relevant chunks that match the meaning of the customer's question. This process is called retrieval.

    Your AI Agent looks for the best match in meaning in the database to what the customer asked for, which is called semantic similarity, and saves the top three most relevant chunks.

    If the customer's question is a follow-up to a previous question, your AI Agent might get the LLM to rewrite the customer's question to include context to increase the chances of getting relevant chunks. For example, if a customer asks your AI Agent whether your store sells cookies, and your AI Agent says yes, your customer may respond with "how much are they?" That question doesn't have enough information on its own, but a question like "how much are your cookies?" provides enough context to get a meaningful chunk of information back.

    If your AI Agent isn't able to find any relevant matches to the customer's question in the database's chunks at this point, it serves the customer a message asking them to rephrase their question or escalates the query to a human agent, rather than attempting to generate a response and risking serving inaccurate information.

  3. Your AI Agent sends the three chunks from the database that are the most relevant to the customer's question to GPT to stitch together into a response. Then, your AI Agent sends the generated response through three filters:

    1. The Safety filter checks to make sure that the generated response doesn't contain any harmful content.

    2. The Relevance filter checks to make sure that the generated response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.

    3. The Accuracy filter checks to make sure that the generated response matches the content in your knowledge base, so it can verify that the AI Agent's response is true.

  4. If the generated response passes these three filters, your AI Agent serves it to the customer.

Getting ready to create automatically generated content from your knowledge base for the first time? Or maybe you're just looking to tune up your knowledge base? Follow these principles to make the information in your knowledge base easy for your AI Agent to parse, which will improve your AI Agent's chances of serving up relevant and helpful information to your customers.

Structure information with your customer in mind

When you're maintaining a knowledge base, it can be easy to organize information in a way that makes sense to you, but not to people who aren't familiar with the information in it already. If you watch a new customer trying to navigate your knowledge base, you'll almost certainly be surprised at what they do! If you're like most people who maintain knowledge bases, you probably aren't a beginner, which means that you likely aren't your own target audience.

So how can you make sure your knowledge base is useful for customers, and what does that have to do with AI? The answer here comes down to using titles and headings. We'll call these signposts as a collective, because they act as signposts for both humans and AI, to indicate how likely it is that a customer is getting closer to the information they want to get to in your knowledge base. Additionally, when MessageMind™ ingests your knowledge base content, it saves the topic title as context for each chunk of information it splits your knowledge base into. When information has proper context, it's less likely that your AI Agent will serve irrelevant information to customers.

  • Categorize your information into groups that don't overlap. That way, both your human customers and your AI are less likely to make a wrong turn and find information that isn't relevant to what they're looking for.

  • Make every signpost relevant to all of the information under it. If there's information under a heading that isn't relevant to that heading, both customers and AI might have trouble finding that information. Similarly, if the heading is confusing or implies that it's followed by information that isn't actually there, it makes it harder for customers and AI to navigate your knowledge base.

  • Always organize information from most broad to most specific. It should be easy for customers to figure out whether they're getting closer to the information they're looking for by following your signposts.

  • Make your signposts descriptive.

    • Orient signposts around customer objectives. Most likely, customers are coming to your knowledge base or AI Agent looking for help with a specific task, so making it clear which articles are about which tasks is very helpful.

      As a best practice, use verbs in your signposts to make it easier for customers to find the actions they want to perform.

    • To help people and AI scan your content more easily, put important verbs and vocabulary closer to the front of your signposts than to the end.

    • Wherever you can, avoid mentioning concepts or terminology that new customers might not be familiar with yet. Unfamiliar wording makes it harder for signposts to do their job, because both people and LLMs can find them confusing.

    • Try to make it easy for customers to figure out whether they're the intended audience for an article just from reading the signposts. No customer wants to waste their time opening articles just to find that they're not relevant, or reading irrelevant responses.

  • Use proper HTML structure to create signposts. It might look just as good if you highlight some text, increase the size, and make it bold, but an AI model might struggle to recognize that that formatting is supposed to indicate a heading. Instead, use the appropriate <h1> tags, and so on. When you do, your formatting will be much more consistent, and AI can pick out the hierarchy your information is in. Additionally, your customers who have visual impairments can navigate properly formatted content more easily, because their assistive technology (e.g., screen readers) are programmed to parse HTML.

  • Don't assume that customers are going to read your knowledge base in order. Customers might find an article, or even a section of an article, via a search engine, and might get frustrated if the information they see requires a lot of context they don't have. Likewise, if your AI Agent sends a customer information without context, that can be a frustrating chat experience too. Make sure you lay some groundwork in your more advanced articles so all of your customers can go back and get more information if they need to.

The study of organizing information to aid customer navigation is called information architecture. If you're interested in more information, including resources on how to perform tests to see how your knowledge base organization works for new customers, see Information Architecture: Study Guide at the Nielsen Norman Group's website.

Create informative chunks by writing standalone content

When an AI ingests the content in your knowledge base, it breaks the information up into chunks. Then, when customers ask your AI Agent questions, your AI Agent searches for chunks that have relevant meanings and uses them to create responses. Here are some ways you can ensure each chunk makes sense on its own:

  • Provide information in full sentences. Because your AI Agent puts information into chunks, the best way to make sure your information has full context is to provide full sentences.

    For example, let's say your knowledge base contains a FAQ, and one of your questions is "Can I pay by credit card?" Instead of a simple "Yes," which isn't helpful on its own, phrase the answer as "Yes, you can pay by credit card."

  • Avoid references to other locations in your knowledge base. When customers are reading your knowledge base content in the context of a AI Agent, they won't have context for references like "As you saw in our last example." Avoid these kinds of references that make customers feel like they're missing out on information.

Write clearly and concisely

Now that we've talked about how your information should be organized, we can look at what the information itself should look like. The simpler your content is, the easier it is for both humans and AI to find the important pieces of information they need.

  • Use clear terminology that doesn't overlap. The easier your terminology is to follow, the more likely it is that a customer or AI can recognize whether content is relevant to a customer's question.

    For example, let's say your company makes music publishing software, and your knowledge base has some information about making a demo. But your knowledge base might also have information about how prospective customers can contact your Sales team for a demo of your software. The word "demo" meaning two different things in your knowledge base can cause confusion and cause irrelevant search results to come up. If you can, see if you can replace one instance with a different word, so the word "demo" consistently means only one thing in your knowledge base.

  • Use simple language. Have you ever read a really long, meandering sentence, and by the end weren't really sure what the author was trying to say? This can happen when either human customers or an AI are parsing your knowledge base. Take the time to cut unnecessary content so it's easier to pick out the takeaways from your content.

  • Minimize your reliance on images and videos. Generative content only works with text; your AI Agent can't access images in your knowledge base. If you have content in images, it's a good idea to re-evaluate if there's a way to provide that same content in text.

    Another reason this is a good idea is for accessibility: your customers who have visual disabilities may not be able to see your images or videos. Making as much text available in text as possible, like in alt text or transcripts, helps both customers chatting with your AI Agent and your customers who access your knowledge base using assistive technology like screen readers.

  • Verify your table content. Some tables work better than others with AI; sometimes AI is able to parse the spatial relationships between cells, and sometimes it gets confused. After setting up your AI Agent, test its ability to provide information that comes from tables in your knowledge base. Your tables are likely to work if they're formatted using proper Markdown, but note that they won't work if they're embedded in images.

Make data-driven maintenance decisions

It's common for documentation teams to be small and sometimes struggle with keeping entire knowledge bases up to date. If you're on a team like that, the idea of turning over your knowledge base to an AI Agent can feel daunting. You're not alone! If this situation feels familiar to you, it's important to work smarter and not harder by following some best practices:

  • Collect analytics data for your knowledge base. There are lots of ways to track usage data for your knowledge base, depending on the tools you use to make it. Customer usage data is often surprising to people who spend all day using a product - that's why it's important to collect it.

  • Decide which metrics best indicate success for your knowledge base. The metrics in your analytics data can be tricky: the stories they tell are often up to interpretation.

    For example, if customers only tend to spend 10 seconds on a long topic, is it because they tend to be looking for a crucial piece of information near the top of the page? If that's the case, you probably don't need to change anything. But what if they're spending that time scanning through the page for information that isn't there and leaving in frustration? In that case, there's probably something you can improve about the way your knowledge base is organized.

    There's no right or wrong set of metrics to focus on. It's common to focus on the topics that are most commonly viewed, or topics that have the most positive or negative reviews from customers, but ultimately it's up to you and your organization to choose how to measure the success of your knowledge base. That strategy can change over time, but you should have some data that you can refer back to.

  • Prioritize content reviews based on the data you collect. After collecting some customer usage data, start to go through it. Can you find patterns about the kinds of content that customers seemed to gravitate towards, or other content that customers didn't touch at all? Using your usage data, start creating a priority list for important topics to make sure they're polished.

  • You can always disable articles from the Knowledge page. If you know that a topic is out of date, but it's too low on your priority list to get to right away, you can disable it from showing up in generative content. That way, you can prevent inaccurate information from appearing in your AI Agent, without delaying your launch. In the future, when you do get a chance to update that topic, you can enable it again.

  • Revisit your data on a regular basis. After connecting your knowledge base to your AI Agent, you'll have even more usage data from your customers to analyze. When you revisit your data, you can test the success of your prior decisions and adjust your priorities accordingly.

  • Make use of MessageMind's reporting tools. On your MessageMind™ dashboard, you can see high-level reports on your AI Agent's automated resolution rate, and dig deeper into individual conversations to see how your AI Agent performed. Once your AI Agent is customer-facing, you'll have even more information on how your knowledge base is serving your customers through generative AI.

If you're just getting started and don't have analytics yet, consider prioritizing a few key areas of your product to review first, and go from there. You don't have to have a perfect plan for collecting analytics data right away! The important thing is that you eventually have a system where you can both collect and analyze usage data.

Improve your knowledge base content over time

What do you do once you have customer data? You use it! Here are a few tips:

  • Keep maintenance regular. As your product changes, so will your documentation, and so will your customers' questions. Set aside regular maintenance time to take a look at your AI Agent's reports and conversation transcripts, so you can pick out opportunities to improve your knowledge base and have your AI Agent performing even better for future customers.

  • Keep feedback loops tight. If you see an opportunity to improve your knowledge base so you can improve your AI Agent, make that change right away, so you can improve your AI Agent's performance right away.

Over time, with the information about how your customers interact with your AI Agent, you'll be able to settle into a workflow where you can improve both your knowledge base and your AI Agent all at once.

Before you begin

Before working on your AI Agent, get some background information on generative content and MessageMind's resolution engine.

When you've used a chatbot in the past, you've probably thought of it as slow or difficult to use. The majority of bots out there aren't great at understanding what your customers want, or knowing how to respond or take actions like a human agent would.

By combining the information in your knowledge base with cutting-edge AI, you don't just have a chatbot with MessageMind™ - you have a generative AI Agent, designed to perform tasks that human agents have previously only been able to do.

This guide will take you through MessageMind™’s technology that we use to make the customer experience with an AI Agent different from any chatbot you've used before.

Understand Large Language Models (LLMs) and Generative AI

The secret behind how your AI Agent both understands and writes messages is in the AI, or artificial intelligence, that MessageMind™ uses behind the scenes. Broadly, AI is a range of complex computer programs designed to solve problems like humans. It can address a variety of situations and incorporate a variety of types of data; in your AI Agent's case, it focuses on analyzing language to connect customers with answers.

When a customer interacts with your AI Agent, your AI Agent uses Large Language Models, or LLMs, which are computer programs trained on large amounts of text, to identify what the customer is asking for. Based on the patterns the LLM identified in the text data, an LLM can analyze a question from a customer and determine the intent behind it. Then, it can analyze information from your knowledge base and determine whether the meaning behind it matches what the customer is looking for.

Generative AI is a type of LLM that uses its analysis of existing content to create new content: it builds sentences word by word, based on which words are most likely to follow the ones it has already chosen. Using generative AI, your AI Agent constructs responses based on pieces of your knowledge base that contain the information the customer is looking for, and phrases them in a natural-sounding and conversational way.

Understand your AI Agent's Content Filters

LLM training data can contain harmful or undesirable content, and generative AI can sometimes generate details that aren't true, which are called hallucinations. To combat these issues, your AI Agent uses an additional set of models to ensure the quality of its responses.

Before sending any generated response to your customer, your AI Agent checks to make sure the response is:

  • Safe: The response doesn't contain any harmful content.
  • Relevant: The response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.
  • Accurate: The response matches the content in your knowledge base, so your AI Agent can double-check that its response is true.

With these checks in place, you can feel confident that your AI Agent has not only made sound decisions in how to help your customer, but has also sent them high-quality responses.

Understand MessageMind™’s Reasoning Engine

Your AI Agent runs on a sophisticated Reasoning Engine MessageMind™ created to provide customers with the knowledge and solutions they need.

When customers ask your AI Agent a question, it takes into account the following when deciding what to do next:

  • Conversation context: Does the conversation before the current question contain context that would help your AI Agent better answer the question?
  • Knowledge base: Does the knowledge base contain the information the customer is looking for?
  • Business systems: Are there any Actions configured with your AI Agent designed to let it fetch the information the customer is looking for?

From there, it decides how to respond to the customer:

  • Follow-up question: If your AI Agent needs more information to help the customer, it can ask for more information.
  • Knowledge base: If the answer to the customer's inquiry is in the knowledge base, it can obtain that information and use it to write a response.
  • Business systems: If the answer to the customer's inquiry is available using one of the Actions configured in your AI Agent, your AI Agent can fetch that information by making an API call.
  • Handoff: If your AI Agent is otherwise unable to respond to the customer's request, it can hand the customer off to a human agent for further assistance.

Together, the mechanism that makes these complex decisions on how to help the customer is called MessageMind™’s Reasoning Engine. Just like when a human agent makes decisions about how to help a customer based on what they know about what the customer wants, the Reasoning Engine takes into account a variety of information to figure out how to resolve the customer's inquiry as effectively as possible.

Understand How Your AI Agent Prevents Prompt Injections

Many AI chatbots are vulnerable to prompt injections or jailbreaking, which are prompts that get the chatbot to provide information that it shouldn't - for example, information that is confidential or unsafe.

The reasoning engine behind MessageMind™’s AI Agents is structured in such a way as to make adversarial LLM attacks very difficult to succeed. Specifically, it has:

  • A series of AI subsystems interacting together, each of which modifies the context surrounding a customer's message.
  • Several prompt instructions that make the task to be performed very clear, directing the AI Agent to not share inner workings and instructions, and to redirect conversations away from casual chitchat.
  • Models that aim to detect and filter out harmful content in inputs or outputs.

With state-of-the-art generative AI testing prior to new deployments, MessageMind™ ensures a secure and effective customer interaction experience.

When you connect your AI Agent to your knowledge base and start to serve automatically generated content to your customers, it might feel like magic. But it's not! This topic takes you through what happens behind the scenes when you start serving knowledge base content to customers.

How MessageMind™ ingests your knowledge base

When you link your knowledge base to your AI Agent, your AI Agent copies down all of your knowledge base content, so it can quickly search through it and serve relevant information from it. Here's how it happens:

  1. When you link your AI Agent with your knowledge base, your AI Agent imports all of your knowledge base content.

    Depending on the tools you use to create and host your knowledge base, your knowledge base then updates with different frequencies:

    • If your knowledge base is in Zendesk or Salesforce, your AI Agent checks back for updates every 15 minutes.

      • If your AI Agent hasn't had any conversations, either immediately after you linked it with your knowledge base or in the last 30 days, your AI Agent pauses syncing. To trigger a sync with your knowledge base, have a test conversation with your AI Agent.
    • If your knowledge base is hosted elsewhere, you or your MessageMind™ team have to build an integration to scrape it and upload content to MessageMind's Knowledge API. If this is the case, the frequency of updates depends on the integration.

  2. Your AI Agent splits your articles into chunks, so it doesn't have to search through long articles each time it looks for information - it can just look at the shorter chunks instead.

    While each article can cover a variety of related concepts, each chunk should only cover one key concept. Additionally, your AI Agent includes context for each chunk; each chunk contains the headings that preceded it.

  3. Your AI Agent sends each chunk to a Large Language Model (LLM), which it uses to assign the chunks numerical representations that correspond to the meaning of each chunk. These numerical values are called embeddings, and it saves them into a database.

    The database is then ready to provide information for GPT to put together into natural-sounding responses to customer questions.

How MessageMind™ creates responses from knowledge base content

After saving your knowledge base content into a database, your AI Agent is ready to provide content from it to answer your customers' questions. Here's how it does that:

  1. Your AI Agent sends the customer's query to the LLM, so it can get an embedding (a numerical value) that corresponds with the information the customer was asking for.

    Before proceeding, the AI Agent sends the content through a moderation check via the LLM to see if the customer's question was inappropriate or toxic. If it was, your AI Agent rejects the query and doesn't continue with the answer generation process.

  2. Your AI Agent then compares embeddings between the customer's question and the chunks in its database, to see if it can find relevant chunks that match the meaning of the customer's question. This process is called retrieval.

    Your AI Agent looks for the best match in meaning in the database to what the customer asked for, which is called semantic similarity, and saves the top three most relevant chunks.

    If the customer's question is a follow-up to a previous question, your AI Agent might get the LLM to rewrite the customer's question to include context to increase the chances of getting relevant chunks. For example, if a customer asks your AI Agent whether your store sells cookies, and your AI Agent says yes, your customer may respond with "how much are they?" That question doesn't have enough information on its own, but a question like "how much are your cookies?" provides enough context to get a meaningful chunk of information back.

    If your AI Agent isn't able to find any relevant matches to the customer's question in the database's chunks at this point, it serves the customer a message asking them to rephrase their question or escalates the query to a human agent, rather than attempting to generate a response and risking serving inaccurate information.

  3. Your AI Agent sends the three chunks from the database that are the most relevant to the customer's question to GPT to stitch together into a response. Then, your AI Agent sends the generated response through three filters:

    1. The Safety filter checks to make sure that the generated response doesn't contain any harmful content.

    2. The Relevance filter checks to make sure that the generated response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.

    3. The Accuracy filter checks to make sure that the generated response matches the content in your knowledge base, so it can verify that the AI Agent's response is true.

  4. If the generated response passes these three filters, your AI Agent serves it to the customer.

Getting ready to create automatically generated content from your knowledge base for the first time? Or maybe you're just looking to tune up your knowledge base? Follow these principles to make the information in your knowledge base easy for your AI Agent to parse, which will improve your AI Agent's chances of serving up relevant and helpful information to your customers.

Structure information with your customer in mind

When you're maintaining a knowledge base, it can be easy to organize information in a way that makes sense to you, but not to people who aren't familiar with the information in it already. If you watch a new customer trying to navigate your knowledge base, you'll almost certainly be surprised at what they do! If you're like most people who maintain knowledge bases, you probably aren't a beginner, which means that you likely aren't your own target audience.

So how can you make sure your knowledge base is useful for customers, and what does that have to do with AI? The answer here comes down to using titles and headings. We'll call these signposts as a collective, because they act as signposts for both humans and AI, to indicate how likely it is that a customer is getting closer to the information they want to get to in your knowledge base. Additionally, when MessageMind™ ingests your knowledge base content, it saves the topic title as context for each chunk of information it splits your knowledge base into. When information has proper context, it's less likely that your AI Agent will serve irrelevant information to customers.

  • Categorize your information into groups that don't overlap. That way, both your human customers and your AI are less likely to make a wrong turn and find information that isn't relevant to what they're looking for.

  • Make every signpost relevant to all of the information under it. If there's information under a heading that isn't relevant to that heading, both customers and AI might have trouble finding that information. Similarly, if the heading is confusing or implies that it's followed by information that isn't actually there, it makes it harder for customers and AI to navigate your knowledge base.

  • Always organize information from most broad to most specific. It should be easy for customers to figure out whether they're getting closer to the information they're looking for by following your signposts.

  • Make your signposts descriptive.

    • Orient signposts around customer objectives. Most likely, customers are coming to your knowledge base or AI Agent looking for help with a specific task, so making it clear which articles are about which tasks is very helpful.

      As a best practice, use verbs in your signposts to make it easier for customers to find the actions they want to perform.

    • To help people and AI scan your content more easily, put important verbs and vocabulary closer to the front of your signposts than to the end.

    • Wherever you can, avoid mentioning concepts or terminology that new customers might not be familiar with yet. Unfamiliar wording makes it harder for signposts to do their job, because both people and LLMs can find them confusing.

    • Try to make it easy for customers to figure out whether they're the intended audience for an article just from reading the signposts. No customer wants to waste their time opening articles just to find that they're not relevant, or reading irrelevant responses.

  • Use proper HTML structure to create signposts. It might look just as good if you highlight some text, increase the size, and make it bold, but an AI model might struggle to recognize that that formatting is supposed to indicate a heading. Instead, use the appropriate <h1> tags, and so on. When you do, your formatting will be much more consistent, and AI can pick out the hierarchy your information is in. Additionally, your customers who have visual impairments can navigate properly formatted content more easily, because their assistive technology (e.g., screen readers) are programmed to parse HTML.

  • Don't assume that customers are going to read your knowledge base in order. Customers might find an article, or even a section of an article, via a search engine, and might get frustrated if the information they see requires a lot of context they don't have. Likewise, if your AI Agent sends a customer information without context, that can be a frustrating chat experience too. Make sure you lay some groundwork in your more advanced articles so all of your customers can go back and get more information if they need to.

The study of organizing information to aid customer navigation is called information architecture. If you're interested in more information, including resources on how to perform tests to see how your knowledge base organization works for new customers, see Information Architecture: Study Guide at the Nielsen Norman Group's website.

Create informative chunks by writing standalone content

When an AI ingests the content in your knowledge base, it breaks the information up into chunks. Then, when customers ask your AI Agent questions, your AI Agent searches for chunks that have relevant meanings and uses them to create responses. Here are some ways you can ensure each chunk makes sense on its own:

  • Provide information in full sentences. Because your AI Agent puts information into chunks, the best way to make sure your information has full context is to provide full sentences.

    For example, let's say your knowledge base contains a FAQ, and one of your questions is "Can I pay by credit card?" Instead of a simple "Yes," which isn't helpful on its own, phrase the answer as "Yes, you can pay by credit card."

  • Avoid references to other locations in your knowledge base. When customers are reading your knowledge base content in the context of a AI Agent, they won't have context for references like "As you saw in our last example." Avoid these kinds of references that make customers feel like they're missing out on information.

Write clearly and concisely

Now that we've talked about how your information should be organized, we can look at what the information itself should look like. The simpler your content is, the easier it is for both humans and AI to find the important pieces of information they need.

  • Use clear terminology that doesn't overlap. The easier your terminology is to follow, the more likely it is that a customer or AI can recognize whether content is relevant to a customer's question.

    For example, let's say your company makes music publishing software, and your knowledge base has some information about making a demo. But your knowledge base might also have information about how prospective customers can contact your Sales team for a demo of your software. The word "demo" meaning two different things in your knowledge base can cause confusion and cause irrelevant search results to come up. If you can, see if you can replace one instance with a different word, so the word "demo" consistently means only one thing in your knowledge base.

  • Use simple language. Have you ever read a really long, meandering sentence, and by the end weren't really sure what the author was trying to say? This can happen when either human customers or an AI are parsing your knowledge base. Take the time to cut unnecessary content so it's easier to pick out the takeaways from your content.

  • Minimize your reliance on images and videos. Generative content only works with text; your AI Agent can't access images in your knowledge base. If you have content in images, it's a good idea to re-evaluate if there's a way to provide that same content in text.

    Another reason this is a good idea is for accessibility: your customers who have visual disabilities may not be able to see your images or videos. Making as much text available in text as possible, like in alt text or transcripts, helps both customers chatting with your AI Agent and your customers who access your knowledge base using assistive technology like screen readers.

  • Verify your table content. Some tables work better than others with AI; sometimes AI is able to parse the spatial relationships between cells, and sometimes it gets confused. After setting up your AI Agent, test its ability to provide information that comes from tables in your knowledge base. Your tables are likely to work if they're formatted using proper Markdown, but note that they won't work if they're embedded in images.

Make data-driven maintenance decisions

It's common for documentation teams to be small and sometimes struggle with keeping entire knowledge bases up to date. If you're on a team like that, the idea of turning over your knowledge base to an AI Agent can feel daunting. You're not alone! If this situation feels familiar to you, it's important to work smarter and not harder by following some best practices:

  • Collect analytics data for your knowledge base. There are lots of ways to track usage data for your knowledge base, depending on the tools you use to make it. Customer usage data is often surprising to people who spend all day using a product - that's why it's important to collect it.

  • Decide which metrics best indicate success for your knowledge base. The metrics in your analytics data can be tricky: the stories they tell are often up to interpretation.

    For example, if customers only tend to spend 10 seconds on a long topic, is it because they tend to be looking for a crucial piece of information near the top of the page? If that's the case, you probably don't need to change anything. But what if they're spending that time scanning through the page for information that isn't there and leaving in frustration? In that case, there's probably something you can improve about the way your knowledge base is organized.

    There's no right or wrong set of metrics to focus on. It's common to focus on the topics that are most commonly viewed, or topics that have the most positive or negative reviews from customers, but ultimately it's up to you and your organization to choose how to measure the success of your knowledge base. That strategy can change over time, but you should have some data that you can refer back to.

  • Prioritize content reviews based on the data you collect. After collecting some customer usage data, start to go through it. Can you find patterns about the kinds of content that customers seemed to gravitate towards, or other content that customers didn't touch at all? Using your usage data, start creating a priority list for important topics to make sure they're polished.

  • You can always disable articles from the Knowledge page. If you know that a topic is out of date, but it's too low on your priority list to get to right away, you can disable it from showing up in generative content. That way, you can prevent inaccurate information from appearing in your AI Agent, without delaying your launch. In the future, when you do get a chance to update that topic, you can enable it again.

  • Revisit your data on a regular basis. After connecting your knowledge base to your AI Agent, you'll have even more usage data from your customers to analyze. When you revisit your data, you can test the success of your prior decisions and adjust your priorities accordingly.

  • Make use of MessageMind's reporting tools. On your MessageMind™ dashboard, you can see high-level reports on your AI Agent's automated resolution rate, and dig deeper into individual conversations to see how your AI Agent performed. Once your AI Agent is customer-facing, you'll have even more information on how your knowledge base is serving your customers through generative AI.

If you're just getting started and don't have analytics yet, consider prioritizing a few key areas of your product to review first, and go from there. You don't have to have a perfect plan for collecting analytics data right away! The important thing is that you eventually have a system where you can both collect and analyze usage data.

Improve your knowledge base content over time

What do you do once you have customer data? You use it! Here are a few tips:

  • Keep maintenance regular. As your product changes, so will your documentation, and so will your customers' questions. Set aside regular maintenance time to take a look at your AI Agent's reports and conversation transcripts, so you can pick out opportunities to improve your knowledge base and have your AI Agent performing even better for future customers.

  • Keep feedback loops tight. If you see an opportunity to improve your knowledge base so you can improve your AI Agent, make that change right away, so you can improve your AI Agent's performance right away.

Over time, with the information about how your customers interact with your AI Agent, you'll be able to settle into a workflow where you can improve both your knowledge base and your AI Agent all at once.

Before you begin

Before working on your AI Agent, get some background information on generative content and MessageMind's resolution engine.

When you've used a chatbot in the past, you've probably thought of it as slow or difficult to use. The majority of bots out there aren't great at understanding what your customers want, or knowing how to respond or take actions like a human agent would.

By combining the information in your knowledge base with cutting-edge AI, you don't just have a chatbot with MessageMind™ - you have a generative AI Agent, designed to perform tasks that human agents have previously only been able to do.

This guide will take you through MessageMind™’s technology that we use to make the customer experience with an AI Agent different from any chatbot you've used before.

Understand Large Language Models (LLMs) and Generative AI

The secret behind how your AI Agent both understands and writes messages is in the AI, or artificial intelligence, that MessageMind™ uses behind the scenes. Broadly, AI is a range of complex computer programs designed to solve problems like humans. It can address a variety of situations and incorporate a variety of types of data; in your AI Agent's case, it focuses on analyzing language to connect customers with answers.

When a customer interacts with your AI Agent, your AI Agent uses Large Language Models, or LLMs, which are computer programs trained on large amounts of text, to identify what the customer is asking for. Based on the patterns the LLM identified in the text data, an LLM can analyze a question from a customer and determine the intent behind it. Then, it can analyze information from your knowledge base and determine whether the meaning behind it matches what the customer is looking for.

Generative AI is a type of LLM that uses its analysis of existing content to create new content: it builds sentences word by word, based on which words are most likely to follow the ones it has already chosen. Using generative AI, your AI Agent constructs responses based on pieces of your knowledge base that contain the information the customer is looking for, and phrases them in a natural-sounding and conversational way.

Understand your AI Agent's Content Filters

LLM training data can contain harmful or undesirable content, and generative AI can sometimes generate details that aren't true, which are called hallucinations. To combat these issues, your AI Agent uses an additional set of models to ensure the quality of its responses.

Before sending any generated response to your customer, your AI Agent checks to make sure the response is:

  • Safe: The response doesn't contain any harmful content.
  • Relevant: The response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.
  • Accurate: The response matches the content in your knowledge base, so your AI Agent can double-check that its response is true.

With these checks in place, you can feel confident that your AI Agent has not only made sound decisions in how to help your customer, but has also sent them high-quality responses.

Understand MessageMind™’s Reasoning Engine

Your AI Agent runs on a sophisticated Reasoning Engine MessageMind™ created to provide customers with the knowledge and solutions they need.

When customers ask your AI Agent a question, it takes into account the following when deciding what to do next:

  • Conversation context: Does the conversation before the current question contain context that would help your AI Agent better answer the question?
  • Knowledge base: Does the knowledge base contain the information the customer is looking for?
  • Business systems: Are there any Actions configured with your AI Agent designed to let it fetch the information the customer is looking for?

From there, it decides how to respond to the customer:

  • Follow-up question: If your AI Agent needs more information to help the customer, it can ask for more information.
  • Knowledge base: If the answer to the customer's inquiry is in the knowledge base, it can obtain that information and use it to write a response.
  • Business systems: If the answer to the customer's inquiry is available using one of the Actions configured in your AI Agent, your AI Agent can fetch that information by making an API call.
  • Handoff: If your AI Agent is otherwise unable to respond to the customer's request, it can hand the customer off to a human agent for further assistance.

Together, the mechanism that makes these complex decisions on how to help the customer is called MessageMind™’s Reasoning Engine. Just like when a human agent makes decisions about how to help a customer based on what they know about what the customer wants, the Reasoning Engine takes into account a variety of information to figure out how to resolve the customer's inquiry as effectively as possible.

Understand How Your AI Agent Prevents Prompt Injections

Many AI chatbots are vulnerable to prompt injections or jailbreaking, which are prompts that get the chatbot to provide information that it shouldn't - for example, information that is confidential or unsafe.

The reasoning engine behind MessageMind™’s AI Agents is structured in such a way as to make adversarial LLM attacks very difficult to succeed. Specifically, it has:

  • A series of AI subsystems interacting together, each of which modifies the context surrounding a customer's message.
  • Several prompt instructions that make the task to be performed very clear, directing the AI Agent to not share inner workings and instructions, and to redirect conversations away from casual chitchat.
  • Models that aim to detect and filter out harmful content in inputs or outputs.

With state-of-the-art generative AI testing prior to new deployments, MessageMind™ ensures a secure and effective customer interaction experience.

When you connect your AI Agent to your knowledge base and start to serve automatically generated content to your customers, it might feel like magic. But it's not! This topic takes you through what happens behind the scenes when you start serving knowledge base content to customers.

How MessageMind™ ingests your knowledge base

When you link your knowledge base to your AI Agent, your AI Agent copies down all of your knowledge base content, so it can quickly search through it and serve relevant information from it. Here's how it happens:

  1. When you link your AI Agent with your knowledge base, your AI Agent imports all of your knowledge base content.

    Depending on the tools you use to create and host your knowledge base, your knowledge base then updates with different frequencies:

    • If your knowledge base is in Zendesk or Salesforce, your AI Agent checks back for updates every 15 minutes.

      • If your AI Agent hasn't had any conversations, either immediately after you linked it with your knowledge base or in the last 30 days, your AI Agent pauses syncing. To trigger a sync with your knowledge base, have a test conversation with your AI Agent.
    • If your knowledge base is hosted elsewhere, you or your MessageMind™ team have to build an integration to scrape it and upload content to MessageMind's Knowledge API. If this is the case, the frequency of updates depends on the integration.

  2. Your AI Agent splits your articles into chunks, so it doesn't have to search through long articles each time it looks for information - it can just look at the shorter chunks instead.

    While each article can cover a variety of related concepts, each chunk should only cover one key concept. Additionally, your AI Agent includes context for each chunk; each chunk contains the headings that preceded it.

  3. Your AI Agent sends each chunk to a Large Language Model (LLM), which it uses to assign the chunks numerical representations that correspond to the meaning of each chunk. These numerical values are called embeddings, and it saves them into a database.

    The database is then ready to provide information for GPT to put together into natural-sounding responses to customer questions.

How MessageMind™ creates responses from knowledge base content

After saving your knowledge base content into a database, your AI Agent is ready to provide content from it to answer your customers' questions. Here's how it does that:

  1. Your AI Agent sends the customer's query to the LLM, so it can get an embedding (a numerical value) that corresponds with the information the customer was asking for.

    Before proceeding, the AI Agent sends the content through a moderation check via the LLM to see if the customer's question was inappropriate or toxic. If it was, your AI Agent rejects the query and doesn't continue with the answer generation process.

  2. Your AI Agent then compares embeddings between the customer's question and the chunks in its database, to see if it can find relevant chunks that match the meaning of the customer's question. This process is called retrieval.

    Your AI Agent looks for the best match in meaning in the database to what the customer asked for, which is called semantic similarity, and saves the top three most relevant chunks.

    If the customer's question is a follow-up to a previous question, your AI Agent might get the LLM to rewrite the customer's question to include context to increase the chances of getting relevant chunks. For example, if a customer asks your AI Agent whether your store sells cookies, and your AI Agent says yes, your customer may respond with "how much are they?" That question doesn't have enough information on its own, but a question like "how much are your cookies?" provides enough context to get a meaningful chunk of information back.

    If your AI Agent isn't able to find any relevant matches to the customer's question in the database's chunks at this point, it serves the customer a message asking them to rephrase their question or escalates the query to a human agent, rather than attempting to generate a response and risking serving inaccurate information.

  3. Your AI Agent sends the three chunks from the database that are the most relevant to the customer's question to GPT to stitch together into a response. Then, your AI Agent sends the generated response through three filters:

    1. The Safety filter checks to make sure that the generated response doesn't contain any harmful content.

    2. The Relevance filter checks to make sure that the generated response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.

    3. The Accuracy filter checks to make sure that the generated response matches the content in your knowledge base, so it can verify that the AI Agent's response is true.

  4. If the generated response passes these three filters, your AI Agent serves it to the customer.

Getting ready to create automatically generated content from your knowledge base for the first time? Or maybe you're just looking to tune up your knowledge base? Follow these principles to make the information in your knowledge base easy for your AI Agent to parse, which will improve your AI Agent's chances of serving up relevant and helpful information to your customers.

Structure information with your customer in mind

When you're maintaining a knowledge base, it can be easy to organize information in a way that makes sense to you, but not to people who aren't familiar with the information in it already. If you watch a new customer trying to navigate your knowledge base, you'll almost certainly be surprised at what they do! If you're like most people who maintain knowledge bases, you probably aren't a beginner, which means that you likely aren't your own target audience.

So how can you make sure your knowledge base is useful for customers, and what does that have to do with AI? The answer here comes down to using titles and headings. We'll call these signposts as a collective, because they act as signposts for both humans and AI, to indicate how likely it is that a customer is getting closer to the information they want to get to in your knowledge base. Additionally, when MessageMind™ ingests your knowledge base content, it saves the topic title as context for each chunk of information it splits your knowledge base into. When information has proper context, it's less likely that your AI Agent will serve irrelevant information to customers.

  • Categorize your information into groups that don't overlap. That way, both your human customers and your AI are less likely to make a wrong turn and find information that isn't relevant to what they're looking for.

  • Make every signpost relevant to all of the information under it. If there's information under a heading that isn't relevant to that heading, both customers and AI might have trouble finding that information. Similarly, if the heading is confusing or implies that it's followed by information that isn't actually there, it makes it harder for customers and AI to navigate your knowledge base.

  • Always organize information from most broad to most specific. It should be easy for customers to figure out whether they're getting closer to the information they're looking for by following your signposts.

  • Make your signposts descriptive.

    • Orient signposts around customer objectives. Most likely, customers are coming to your knowledge base or AI Agent looking for help with a specific task, so making it clear which articles are about which tasks is very helpful.

      As a best practice, use verbs in your signposts to make it easier for customers to find the actions they want to perform.

    • To help people and AI scan your content more easily, put important verbs and vocabulary closer to the front of your signposts than to the end.

    • Wherever you can, avoid mentioning concepts or terminology that new customers might not be familiar with yet. Unfamiliar wording makes it harder for signposts to do their job, because both people and LLMs can find them confusing.

    • Try to make it easy for customers to figure out whether they're the intended audience for an article just from reading the signposts. No customer wants to waste their time opening articles just to find that they're not relevant, or reading irrelevant responses.

  • Use proper HTML structure to create signposts. It might look just as good if you highlight some text, increase the size, and make it bold, but an AI model might struggle to recognize that that formatting is supposed to indicate a heading. Instead, use the appropriate <h1> tags, and so on. When you do, your formatting will be much more consistent, and AI can pick out the hierarchy your information is in. Additionally, your customers who have visual impairments can navigate properly formatted content more easily, because their assistive technology (e.g., screen readers) are programmed to parse HTML.

  • Don't assume that customers are going to read your knowledge base in order. Customers might find an article, or even a section of an article, via a search engine, and might get frustrated if the information they see requires a lot of context they don't have. Likewise, if your AI Agent sends a customer information without context, that can be a frustrating chat experience too. Make sure you lay some groundwork in your more advanced articles so all of your customers can go back and get more information if they need to.

The study of organizing information to aid customer navigation is called information architecture. If you're interested in more information, including resources on how to perform tests to see how your knowledge base organization works for new customers, see Information Architecture: Study Guide at the Nielsen Norman Group's website.

Create informative chunks by writing standalone content

When an AI ingests the content in your knowledge base, it breaks the information up into chunks. Then, when customers ask your AI Agent questions, your AI Agent searches for chunks that have relevant meanings and uses them to create responses. Here are some ways you can ensure each chunk makes sense on its own:

  • Provide information in full sentences. Because your AI Agent puts information into chunks, the best way to make sure your information has full context is to provide full sentences.

    For example, let's say your knowledge base contains a FAQ, and one of your questions is "Can I pay by credit card?" Instead of a simple "Yes," which isn't helpful on its own, phrase the answer as "Yes, you can pay by credit card."

  • Avoid references to other locations in your knowledge base. When customers are reading your knowledge base content in the context of a AI Agent, they won't have context for references like "As you saw in our last example." Avoid these kinds of references that make customers feel like they're missing out on information.

Write clearly and concisely

Now that we've talked about how your information should be organized, we can look at what the information itself should look like. The simpler your content is, the easier it is for both humans and AI to find the important pieces of information they need.

  • Use clear terminology that doesn't overlap. The easier your terminology is to follow, the more likely it is that a customer or AI can recognize whether content is relevant to a customer's question.

    For example, let's say your company makes music publishing software, and your knowledge base has some information about making a demo. But your knowledge base might also have information about how prospective customers can contact your Sales team for a demo of your software. The word "demo" meaning two different things in your knowledge base can cause confusion and cause irrelevant search results to come up. If you can, see if you can replace one instance with a different word, so the word "demo" consistently means only one thing in your knowledge base.

  • Use simple language. Have you ever read a really long, meandering sentence, and by the end weren't really sure what the author was trying to say? This can happen when either human customers or an AI are parsing your knowledge base. Take the time to cut unnecessary content so it's easier to pick out the takeaways from your content.

  • Minimize your reliance on images and videos. Generative content only works with text; your AI Agent can't access images in your knowledge base. If you have content in images, it's a good idea to re-evaluate if there's a way to provide that same content in text.

    Another reason this is a good idea is for accessibility: your customers who have visual disabilities may not be able to see your images or videos. Making as much text available in text as possible, like in alt text or transcripts, helps both customers chatting with your AI Agent and your customers who access your knowledge base using assistive technology like screen readers.

  • Verify your table content. Some tables work better than others with AI; sometimes AI is able to parse the spatial relationships between cells, and sometimes it gets confused. After setting up your AI Agent, test its ability to provide information that comes from tables in your knowledge base. Your tables are likely to work if they're formatted using proper Markdown, but note that they won't work if they're embedded in images.

Make data-driven maintenance decisions

It's common for documentation teams to be small and sometimes struggle with keeping entire knowledge bases up to date. If you're on a team like that, the idea of turning over your knowledge base to an AI Agent can feel daunting. You're not alone! If this situation feels familiar to you, it's important to work smarter and not harder by following some best practices:

  • Collect analytics data for your knowledge base. There are lots of ways to track usage data for your knowledge base, depending on the tools you use to make it. Customer usage data is often surprising to people who spend all day using a product - that's why it's important to collect it.

  • Decide which metrics best indicate success for your knowledge base. The metrics in your analytics data can be tricky: the stories they tell are often up to interpretation.

    For example, if customers only tend to spend 10 seconds on a long topic, is it because they tend to be looking for a crucial piece of information near the top of the page? If that's the case, you probably don't need to change anything. But what if they're spending that time scanning through the page for information that isn't there and leaving in frustration? In that case, there's probably something you can improve about the way your knowledge base is organized.

    There's no right or wrong set of metrics to focus on. It's common to focus on the topics that are most commonly viewed, or topics that have the most positive or negative reviews from customers, but ultimately it's up to you and your organization to choose how to measure the success of your knowledge base. That strategy can change over time, but you should have some data that you can refer back to.

  • Prioritize content reviews based on the data you collect. After collecting some customer usage data, start to go through it. Can you find patterns about the kinds of content that customers seemed to gravitate towards, or other content that customers didn't touch at all? Using your usage data, start creating a priority list for important topics to make sure they're polished.

  • You can always disable articles from the Knowledge page. If you know that a topic is out of date, but it's too low on your priority list to get to right away, you can disable it from showing up in generative content. That way, you can prevent inaccurate information from appearing in your AI Agent, without delaying your launch. In the future, when you do get a chance to update that topic, you can enable it again.

  • Revisit your data on a regular basis. After connecting your knowledge base to your AI Agent, you'll have even more usage data from your customers to analyze. When you revisit your data, you can test the success of your prior decisions and adjust your priorities accordingly.

  • Make use of MessageMind's reporting tools. On your MessageMind™ dashboard, you can see high-level reports on your AI Agent's automated resolution rate, and dig deeper into individual conversations to see how your AI Agent performed. Once your AI Agent is customer-facing, you'll have even more information on how your knowledge base is serving your customers through generative AI.

If you're just getting started and don't have analytics yet, consider prioritizing a few key areas of your product to review first, and go from there. You don't have to have a perfect plan for collecting analytics data right away! The important thing is that you eventually have a system where you can both collect and analyze usage data.

Improve your knowledge base content over time

What do you do once you have customer data? You use it! Here are a few tips:

  • Keep maintenance regular. As your product changes, so will your documentation, and so will your customers' questions. Set aside regular maintenance time to take a look at your AI Agent's reports and conversation transcripts, so you can pick out opportunities to improve your knowledge base and have your AI Agent performing even better for future customers.

  • Keep feedback loops tight. If you see an opportunity to improve your knowledge base so you can improve your AI Agent, make that change right away, so you can improve your AI Agent's performance right away.

Over time, with the information about how your customers interact with your AI Agent, you'll be able to settle into a workflow where you can improve both your knowledge base and your AI Agent all at once.