Trabajar con agentes generativos de IA
Si tu equipo conecta MessageMind™ con una base de conocimientos para alimentar sus respuestas, estás utilizando un Agente de IA generativa. Esta sección contiene todo lo que necesitas saber para poner en marcha tu Agente IA.
Antes de empezar
Antes de trabajar en tu Agente AI, infórmate sobre el contenido generativo y el motor de resolución de MessageMind.
Cuando has utilizado un chatbot en el pasado, probablemente lo has considerado lento o difícil de usar. La mayoría de los bots que existen no son muy buenos para entender lo que quieren tus clientes, ni para saber cómo responder o emprender acciones como lo haría un agente humano.
Al combinar la información de tu base de conocimientos con la IA de vanguardia, no sólo tienes un chatbot con MessageMind™: tienes un agente de IA generativa, diseñado para realizar tareas que antes sólo podían hacer los agentes humanos.
Esta guía te llevará a través de la tecnología de MessageMind™ que utilizamos para que la experiencia del cliente con un Agente de IA sea diferente a la de cualquier chatbot que hayas utilizado antes.
Comprender los Grandes Modelos Lingüísticos (LLM) y la IA Generativa
El secreto de cómo tu Agente AI entiende y escribe mensajes está en la IA, o inteligencia artificial, que MessageMind™ utiliza entre bastidores. En términos generales, la IA es una serie de programas informáticos complejos diseñados para resolver problemas como los humanos. Puede abordar diversas situaciones e incorporar distintos tipos de datos; en el caso de tu Agente de IA, se centra en analizar el lenguaje para conectar a los clientes con las respuestas.
Cuando un cliente interactúa con tu Agente de IA, éste utiliza Grandes Modelos de Lenguaje, o LLM, que son programas informáticos entrenados en grandes cantidades de texto, para identificar lo que pide el cliente. Basándose en los patrones que el LLM identificó en los datos de texto, un LLM puede analizar una pregunta de un cliente y determinar la intención que hay detrás de ella. Después, puede analizar la información de tu base de conocimientos y determinar si el significado que hay detrás coincide con lo que busca el cliente.
La IA generativa es un tipo de LLM que utiliza su análisis del contenido existente para crear contenido nuevo: construye frases palabra por palabra, basándose en qué palabras es más probable que sigan a las que ya ha elegido. Mediante la IA generativa, tu Agente de IA construye respuestas basadas en fragmentos de tu base de conocimientos que contienen la información que busca el cliente, y las formula de forma natural y conversacional.
Comprende los filtros de contenido de tu agente de IA
Los datos de entrenamiento LLM pueden contener contenidos nocivos o indeseables, y la IA generativa a veces puede generar detalles que no son ciertos, lo que se denomina alucinaciones. Para combatir estos problemas, tu Agente AI utiliza un conjunto adicional de modelos para garantizar la calidad de sus respuestas.
Antes de enviar cualquier respuesta generada a tu cliente, tu Agente AI comprueba que la respuesta lo sea:
- Segura: La respuesta no contiene ningún contenido dañino.
- Pertinente: La respuesta responde realmente a la pregunta del cliente. Aunque la información de la respuesta sea correcta, tiene que ser la que el cliente buscaba para darle una experiencia positiva.
- Precisa: La respuesta coincide con el contenido de tu base de conocimientos, por lo que tu Agente IA puede comprobar dos veces que su respuesta es verdadera.
Con estas comprobaciones, puedes estar seguro de que tu Agente de IA no sólo ha tomado decisiones acertadas sobre cómo ayudar a tu cliente, sino que también le ha enviado respuestas de alta calidad.
Comprender el motor de razonamiento de MessageMind
Tu Agente de IA funciona con un sofisticado Motor de Razonamiento MessageMind™ creado para proporcionar a los clientes los conocimientos y soluciones que necesitan.
Cuando los clientes hacen una pregunta a tu Agente de IA, éste tiene en cuenta lo siguiente para decidir qué hacer a continuación:
- Contexto de la conversación: ¿La conversación anterior a la pregunta actual contiene contexto que pueda ayudar a tu Agente de IA a responder mejor a la pregunta?
- Base de conocimientos: ¿Contiene la base de conocimientos la información que busca el cliente?
- Sistemas empresariales: ¿Existen Acciones configuradas con tu Agente de IA diseñadas para permitirle obtener la información que busca el cliente?
A partir de ahí, decide cómo responder al cliente:
- Pregunta de seguimiento: Si tu Agente de IA necesita más información para ayudar al cliente, puede pedir más información.
- Base de conocimientos: Si la respuesta a la consulta del cliente está en la base de conocimientos, puede obtener esa información y utilizarla para escribir una respuesta.
- Sistemas empresariales: Si la respuesta a la consulta del cliente está disponible mediante una de las Acciones configuradas en tu Agente AI, tu Agente AI puede obtener esa información haciendo una llamada a la API.
- Traspaso: Si tu Agente de IA no puede responder a la solicitud del cliente, puede transferirlo a un agente humano para que le preste más ayuda.
En conjunto, el mecanismo que toma estas complejas decisiones sobre cómo ayudar al cliente se denomina Motor de Razonamiento de MessageMind™. Al igual que cuando un agente humano toma decisiones sobre cómo ayudar a un cliente basándose en lo que sabe sobre lo que quiere el cliente, el Motor de Razonamiento tiene en cuenta una variedad de información para averiguar cómo resolver la consulta del cliente de la forma más eficaz posible.
Comprende cómo tu agente de IA evita las inyecciones de emergencia
Muchos chatbots de IA son vulnerables a las inyecciones de avisos o jailbreaking, que son avisos que consiguen que el chatbot proporcione información que no debería, por ejemplo, información confidencial o insegura.
El motor de razonamiento de los Agentes IA de MessageMind™ está estructurado de tal forma que es muy difícil que los ataques adversarios LLM tengan éxito. En concreto, tiene:
- Una serie de subsistemas de IA que interactúan entre sí, cada uno de los cuales modifica el contexto que rodea al mensaje de un cliente.
- Varias instrucciones rápidas que dejan muy clara la tarea que hay que realizar, indicando al Agente IA que no comparta el funcionamiento interno ni las instrucciones, y que redirija las conversaciones lejos de la cháchara casual.
- Modelos que pretenden detectar y filtrar contenidos nocivos en las entradas o salidas.
Con pruebas de IA generativa de última generación antes de los nuevos despliegues, MessageMind™ garantiza una experiencia de interacción con el cliente segura y eficaz.
Cuando conectes tu Agente de IA a tu base de conocimientos y empieces a servir contenido generado automáticamente a tus clientes, puede parecer magia. ¡Pero no lo es! Este tema te explica lo que ocurre entre bastidores cuando empiezas a servir contenido de la base de conocimientos a los clientes.
Cómo MessageMind™ ingiere tubase de conocimientos
Cuando vinculas tu base de conocimientos a tu Agente de IA, éste copia todo el contenido de tu base de conocimientos, de modo que pueda buscar rápidamente en ella y servir información relevante a partir de ella. Así es como ocurre:
-
Cuando vinculas tu Agente AI con tu base de conocimientos, tu Agente AI importa todo el contenido de tu base de conocimientos.
Dependiendo de las herramientas que utilices para crear y alojar tu base de conocimientos, ésta se actualizará con distinta frecuencia:
-
Si tu base de conocimientos está en Zendesk o Salesforce, tu Agente de IA comprueba si hay actualizaciones cada 15 minutos.
- Si tu Agente AI no ha mantenido ninguna conversación, ya sea inmediatamente después de vincularlo con tu base de conocimientos o en los últimos 30 días, tu Agente AI pausa la sincronización. Para activar una sincronización con tu base de conocimientos, mantén una conversación de prueba con tu Agente de IA.
-
Si tu base de conocimientos está alojada en otro lugar, tú o tu equipo de MessageMind™ tendréis que crear una integración para rasparla y cargar el contenido en la Knowledge API de MessageMind. Si es así, la frecuencia de las actualizaciones depende de la integración.
-
-
Tu Agente AI divide tus artículos en trozos, de modo que no tenga que buscar en artículos largos cada vez que busque información; en su lugar, puede consultar los trozos más cortos.
Aunque cada artículo puede abarcar diversos conceptos relacionados, cada trozo sólo debe abarcar un concepto clave. Además, tu Agente AI incluye el contexto de cada trozo; cada trozo contiene los encabezamientos que lo precedieron.
-
Tu Agente AI envía cada trozo a un Gran Modelo de Lenguaje (LLM), que utiliza para asignar a los trozos representaciones numéricas que correspondan al significado de cada trozo. Estos valores numéricos se denominan incrustaciones, y los guarda en una base de datos.
A continuación, la base de datos está lista para proporcionar información que GPT puede reunir en respuestas naturales a las preguntas de los clientes.
Cómo MessageMind™ crea respuestas a partirdel contenido de la base de conocimientos
Tras guardar el contenido de tu base de conocimientos en una base de datos, tu Agente de IA está listo para proporcionar contenido a partir de ella para responder a las preguntas de tus clientes. He aquí cómo lo hace:
-
Tu Agente AI envía la consulta del cliente al LLM, para que pueda obtener una incrustación (un valor numérico) que se corresponda con la información que pedía el cliente.
Antes de continuar, el Agente de IA envía el contenido a través de una comprobación de moderación mediante el LLM para ver si la pregunta del cliente era inapropiada o tóxica. Si lo fuera, tu Agente AI rechaza la consulta y no continúa con el proceso de generación de respuestas.
-
A continuación, tu Agente de IA compara las incrustaciones entre la pregunta del cliente y los trozos de su base de datos, para ver si puede encontrar trozos relevantes que coincidan con el significado de la pregunta del cliente. Este proceso se llama recuperación.
Tu Agente de IA busca en la base de datos la mejor coincidencia de significado con lo que pidió el cliente, lo que se denomina similitud semántica, y guarda los tres trozos más relevantes.
Si la pregunta del cliente es una continuación de una pregunta anterior, tu Agente de IA podría hacer que el LLM reescribiera la pregunta del cliente para incluir el contexto y aumentar así las posibilidades de obtener fragmentos relevantes. Por ejemplo, si un cliente pregunta a tu Agente de IA si tu tienda vende galletas, y tu Agente de IA dice que sí, el cliente puede responder con un "¿cuánto cuestan?". Esa pregunta no tiene suficiente información por sí sola, pero una pregunta como "¿cuánto cuestan tus galletas?" proporciona suficiente contexto para obtener un trozo significativo de información.
Si en este punto tu Agente de IA no puede encontrar ninguna coincidencia relevante para la pregunta del cliente en los trozos de la base de datos, le envía un mensaje pidiéndole que reformule su pregunta o eleva la consulta a un agente humano, en lugar de intentar generar una respuesta y arriesgarse a servir información inexacta.
-
Tu Agente de IA envía los tres fragmentos de la base de datos más relevantes para la pregunta del cliente a GPT para que los reúna en una respuesta. A continuación, tu Agente AI envía la respuesta generada a través de tres filtros:
-
El filtro de seguridad comprueba que la respuesta generada no contenga ningún contenido dañino.
-
El filtro Relevancia comprueba que la respuesta generada responde realmente a la pregunta del cliente. Aunque la información de la respuesta sea correcta, tiene que ser la que el cliente buscaba para darle una experiencia positiva.
-
El filtro Exactitud comprueba que la respuesta generada coincide con el contenido de tu base de conocimientos, para poder verificar que la respuesta del Agente IA es verdadera.
-
-
Si la respuesta generada pasa estos tres filtros, tu Agente de IA se la sirve al cliente.
¿Te estás preparando para crear por primera vez contenidos generados automáticamente a partir de tu base de conocimientos? ¿O tal vez sólo buscas afinar tu base de conocimientos? Sigue estos principios para que la información de tu base de conocimientos sea fácil de analizar para tu Agente de IA, lo que mejorará las posibilidades de que éste ofrezca información relevante y útil a tus clientes.
Estructura la informaciónpensando en tu cliente
Cuando mantienes una base de conocimientos, puede ser fácil organizar la información de forma que tenga sentido para ti, pero no para las personas que no están familiarizadas con la información que contiene. Si observas a un nuevo cliente intentando navegar por tu base de conocimientos, ¡casi seguro que te sorprenderá lo que hace! Si eres como la mayoría de las personas que mantienen bases de conocimientos, probablemente no seas un principiante, lo que significa que probablemente no seas tu propio público objetivo.
Entonces, ¿cómo puedes asegurarte de que tu base de conocimientos sea útil para los clientes, y qué tiene que ver eso con la IA? La respuesta aquí se reduce a utilizar títulos y encabezamientos. Llamaremos a estos postes indicadores como un colectivo, porque actúan como postes indicadores tanto para los humanos como para la IA, para indicar la probabilidad de que un cliente se esté acercando a la información a la que quiere llegar en tu base de conocimientos. Además, cuando MessageMind™ ingiere el contenido de tu base de conocimientos, guarda el título del tema como contexto para cada trozo de información en que divide tu base de conocimientos. Cuando la información tiene un contexto adecuado, es menos probable que tu Agente de IA sirva información irrelevante a los clientes.
-
Clasifica tu información en grupos que no se solapen. De este modo, tanto tus clientes humanos como tu IA tienen menos probabilidades de equivocarse y encontrar información que no es relevante para lo que buscan.
-
Haz que cada señal sea relevante para toda la información que hay debajo de ella. Si hay información bajo un epígrafe que no es relevante para ese epígrafe, tanto los clientes como la IA podrían tener problemas para encontrar esa información. Del mismo modo, si el encabezamiento es confuso o da a entender que va seguido de información que en realidad no está ahí, dificulta a los clientes y a la IA la navegación por tu base de conocimientos.
-
Organiza siempre la información de más amplia a más específica. Debe ser fácil para los clientes averiguar si se están acercando a la información que buscan siguiendo tus señales.
-
Haz que tus señales sean descriptivas.
-
Orienta las señales en torno a los objetivos del cliente. Lo más probable es que los clientes acudan a tu base de conocimientos o Agente de IA buscando ayuda para una tarea concreta, por lo que dejar claro qué artículos tratan sobre qué tareas es muy útil.
Como buena práctica, utiliza verbos en tus señales para facilitar a los clientes la búsqueda de las acciones que desean realizar.
-
Para ayudar a las personas y a la IA a escanear tu contenido más fácilmente, coloca los verbos y el vocabulario importantes más cerca de la parte delantera de tus señales que del final.
-
Siempre que puedas, evita mencionar conceptos o terminología con los que los nuevos clientes puedan no estar familiarizados todavía. Una redacción poco familiar dificulta que las señales cumplan su función, porque tanto las personas como los LLM pueden encontrarlas confusas.
-
Intenta que a los clientes les resulte fácil averiguar si son el público al que va dirigido un artículo con sólo leer las señales. Ningún cliente quiere perder el tiempo abriendo artículos sólo para descubrir que no son relevantes, o leyendo respuestas irrelevantes.
-
-
Utiliza una estructura HTML adecuada para crear señales. Podría quedar igual de bien si resaltas un texto, aumentas el tamaño y lo pones en negrita, pero un modelo de IA podría tener dificultades para reconocer que ese formato se supone que indica un encabezamiento. En su lugar, utiliza las etiquetas
<h1>
adecuadas, etc. Cuando lo hagas, tu formato será mucho más coherente y la IA podrá distinguir la jerarquía en la que se encuentra tu información. Además, tus clientes con deficiencias visuales pueden navegar más fácilmente por contenidos formateados correctamente, porque su tecnología de asistencia (por ejemplo, los lectores de pantalla) está programada para analizar HTML. -
No des por sentado que los clientes van a leer tu base de conocimientos en orden. Los clientes pueden encontrar un artículo, o incluso una sección de un artículo, a través de un motor de búsqueda, y pueden sentirse frustrados si la información que ven requiere mucho contexto que no tienen. Del mismo modo, si tu Agente de IA envía a un cliente información sin contexto, eso también puede ser una experiencia de chat frustrante. Asegúrate de sentar algunas bases en tus artículos más avanzados para que todos tus clientes puedan volver atrás y obtener más información si lo necesitan.
El estudio de la organización de la información para facilitar la navegación del cliente se denomina arquitectura de la información. Si te interesa obtener más información, incluidos recursos sobre cómo realizar pruebas para ver cómo funciona la organización de tu base de conocimientos para los nuevos clientes, consulta Arquitectura de la información: Guía de estudio en el sitio web del Grupo Nielsen Norman.
Crea trozos informativos escribiendocontenidos independientes
Cuando una IA ingiere el contenido de tu base de conocimientos, descompone la información en trozos. Luego, cuando los clientes hacen preguntas a tu Agente de IA, éste busca trozos que tengan significados relevantes y los utiliza para crear respuestas. Aquí tienes algunas formas de asegurarte de que cada trozo tiene sentido por sí mismo:
-
Proporciona información en frases completas. Como tu Agente de IA pone la información en trozos, la mejor forma de asegurarte de que tu información tiene un contexto completo es proporcionar frases completas.
Por ejemplo, supongamos que tu base de conocimientos contiene una FAQ, y una de sus preguntas es "¿Puedo pagar con tarjeta de crédito?". En lugar de un simple "Sí", que no es útil por sí solo, formula la respuesta como "Sí, puedes pagar con tarjeta de crédito".
-
Evita las referencias a otros lugares en tu base de conocimientos. Cuando los clientes lean el contenido de tu base de conocimientos en el contexto de un Agente AI, no tendrán contexto para referencias como "Como has visto en nuestro último ejemplo". Evita este tipo de referencias que hacen que los clientes sientan que se están perdiendo información.
Escribe de forma clara y concisa
Ahora que hemos hablado de cómo debe organizarse tu información, podemos ver qué aspecto debe tener la información en sí. Cuanto más sencillo sea tu contenido, más fácil será tanto para los humanos como para la IA encontrar las piezas importantes de información que necesitan.
-
Utiliza una terminología clara que no se solape. Cuanto más fácil sea seguir tu terminología, más probable será que un cliente o la IA puedan reconocer si el contenido es relevante para la pregunta de un cliente.
Por ejemplo, supongamos que tu empresa fabrica software de edición musical, y tu base de conocimientos tiene información sobre cómo hacer una maqueta. Pero tu base de conocimientos también puede tener información sobre cómo pueden ponerse en contacto los posibles clientes con tu equipo de Ventas para una demostración de tu software. La palabra "demo" que significa dos cosas distintas en tu base de conocimientos puede causar confusión y hacer que aparezcan resultados de búsqueda irrelevantes. Si puedes, comprueba si puedes sustituir una instancia por una palabra diferente, de modo que la palabra "demo" signifique sistemáticamente una sola cosa en tu base de conocimientos.
-
Utiliza un lenguaje sencillo. ¿Alguna vez has leído una frase muy larga y sinuosa, y al final no estabas realmente seguro de lo que el autor intentaba decir? Esto puede ocurrir cuando clientes humanos o una IA analizan tu base de conocimientos. Tómate tu tiempo para recortar el contenido innecesario, de modo que sea más fácil extraer lo esencial de tu contenido.
-
Reduce al mínimo tu dependencia de imágenes y vídeos. El contenido generativo sólo funciona con texto; tu Agente de IA no puede acceder a las imágenes de tu base de conocimientos. Si tienes contenido en imágenes, es una buena idea volver a evaluar si hay alguna forma de ofrecer ese mismo contenido en texto.
Otra razón por la que es una buena idea es por la accesibilidad: es posible que tus clientes con discapacidad visual no puedan ver tus imágenes o vídeos. Hacer que la mayor cantidad posible de texto esté disponible en texto, como en texto alternativo o transcripciones, ayuda tanto a los clientes que chatean con tu Agente de IA como a los clientes que acceden a tu base de conocimientos utilizando tecnología de asistencia, como lectores de pantalla.
-
Verifica el contenido de tu tabla. Algunas tablas funcionan mejor que otras con la IA; a veces la IA es capaz de analizar las relaciones espaciales entre celdas, y a veces se confunde. Después de configurar tu Agente IA, prueba su capacidad para proporcionar información procedente de las tablas de tu base de conocimientos. Es probable que tus tablas funcionen si están formateadas utilizando Markdown adecuado, pero ten en cuenta que no funcionarán si están incrustadas en imágenes.
Tomadecisiones de mantenimiento basadas en datos
Es habitual que los equipos de documentación sean pequeños y a veces tengan dificultades para mantener actualizadas bases de conocimientos enteras. Si estás en un equipo así, la idea de entregar tu base de conocimientos a un Agente de IA puede parecer desalentadora. ¡No estás solo! Si esta situación te resulta familiar, es importante trabajar de forma más inteligente y no más dura siguiendo algunas buenas prácticas:
-
Recopila datos analíticos para tu base de conocimientos. Hay muchas formas de hacer un seguimiento de los datos de uso de tu base de conocimientos, dependiendo de las herramientas que utilices para crearla. Los datos de uso de los clientes suelen sorprender a las personas que se pasan el día utilizando un producto, por eso es importante recopilarlos.
-
Decide qué métricas indican mejor el éxito de tu base de conocimientos. Las métricas de tus datos analíticos pueden ser engañosas: las historias que cuentan a menudo dependen de la interpretación.
Por ejemplo, si los clientes sólo suelen dedicar 10 segundos a un tema largo, ¿se debe a que suelen buscar un dato crucial cerca de la parte superior de la página? Si es así, probablemente no necesites cambiar nada. Pero, ¿y si pasan ese tiempo escaneando la página en busca de información que no está ahí y se marchan frustrados? En ese caso, probablemente haya algo que puedas mejorar sobre la forma en que está organizada tu base de conocimientos.
No hay un conjunto correcto o incorrecto de métricas en las que centrarse. Es habitual centrarse en los temas que más se consultan, o en los temas que tienen más críticas positivas o negativas de los clientes, pero en última instancia depende de ti y de tu organización elegir cómo medir el éxito de tu base de conocimientos. Esa estrategia puede cambiar con el tiempo, pero debes tener algunos datos a los que puedas remitirte.
-
Prioriza las revisiones de contenido basándote en los datos que recopiles. Tras recopilar algunos datos de uso de los clientes, empieza a revisarlos. ¿Puedes encontrar patrones sobre los tipos de contenido hacia los que los clientes parecían gravitar, u otros contenidos que los clientes no tocaban en absoluto? Utilizando tus datos de uso, empieza a crear una lista de prioridades para los temas importantes para asegurarte de que están pulidos.
-
Siempre puedes desactivar los artículos desde la página Conocimiento. Si sabes que un tema está obsoleto, pero está demasiado abajo en tu lista de prioridades para llegar a él de inmediato, puedes desactivarlo para que no aparezca en el contenido generativo. De este modo, puedes evitar que aparezca información inexacta en tu Agente AI, sin retrasar su lanzamiento. En el futuro, cuando tengas la oportunidad de actualizar ese tema, puedes volver a activarlo.
-
Revisa tus datos periódicamente. Tras conectar tu base de conocimientos a tu Agente de IA, tendrás aún más datos de uso de tus clientes para analizar. Cuando revises tus datos, podrás comprobar el éxito de tus decisiones anteriores y ajustar tus prioridades en consecuencia.
-
Utiliza las herramientas de elaboración de informes de MessageMind. En tu panel de control MessageMind™, puedes ver informes de alto nivel sobre la tasa de resolución automática de tu Agente de IA, y profundizar en conversaciones individuales para ver cómo ha actuado tu Agente de IA. Una vez que tu Agente de IA esté orientado al cliente, tendrás aún más información sobre cómo tu base de conocimientos está sirviendo a tus clientes a través de la IA generativa.
Si acabas de empezar y aún no tienes análisis, considera la posibilidad de priorizar algunas áreas clave de tu producto para revisarlas primero, y partir de ahí. No hace falta que tengas un plan perfecto para recopilar datos analíticos de inmediato. Lo importante es que al final tengas un sistema en el que puedas recoger y analizar datos de uso.
Mejora el contenido de tu base de conocimientos conel tiempo
¿Qué haces una vez que tienes los datos del cliente? ¡Utilízalo tú! Aquí tienes algunos consejos:
-
Mantén un mantenimiento regular. A medida que cambie tu producto, también lo hará tu documentación, y también las preguntas de tus clientes. Reserva un tiempo de mantenimiento regular para echar un vistazo a los informes y transcripciones de conversaciones de tu Agente de IA, para que puedas detectar oportunidades de mejorar tu base de conocimientos y hacer que tu Agente de IA funcione aún mejor para futuros clientes.
-
Mantén estrechos los circuitos de retroalimentación. Si ves una oportunidad de mejorar tu base de conocimientos para que puedas mejorar tu Agente de IA, haz ese cambio enseguida, para que puedas mejorar el rendimiento de tu Agente de IA enseguida.
Con el tiempo, con la información sobre cómo interactúan tus clientes con tu Agente de IA, podrás establecer un flujo de trabajo en el que puedas mejorar tanto tu base de conocimientos como tu Agente de IA a la vez.
Before you begin
Before working on your AI Agent, get some background information on generative content and MessageMind's resolution engine.
When you've used a chatbot in the past, you've probably thought of it as slow or difficult to use. The majority of bots out there aren't great at understanding what your customers want, or knowing how to respond or take actions like a human agent would.
By combining the information in your knowledge base with cutting-edge AI, you don't just have a chatbot with MessageMind™ - you have a generative AI Agent, designed to perform tasks that human agents have previously only been able to do.
This guide will take you through MessageMind™’s technology that we use to make the customer experience with an AI Agent different from any chatbot you've used before.
Understand Large Language Models (LLMs) and Generative AI
The secret behind how your AI Agent both understands and writes messages is in the AI, or artificial intelligence, that MessageMind™ uses behind the scenes. Broadly, AI is a range of complex computer programs designed to solve problems like humans. It can address a variety of situations and incorporate a variety of types of data; in your AI Agent's case, it focuses on analyzing language to connect customers with answers.
When a customer interacts with your AI Agent, your AI Agent uses Large Language Models, or LLMs, which are computer programs trained on large amounts of text, to identify what the customer is asking for. Based on the patterns the LLM identified in the text data, an LLM can analyze a question from a customer and determine the intent behind it. Then, it can analyze information from your knowledge base and determine whether the meaning behind it matches what the customer is looking for.
Generative AI is a type of LLM that uses its analysis of existing content to create new content: it builds sentences word by word, based on which words are most likely to follow the ones it has already chosen. Using generative AI, your AI Agent constructs responses based on pieces of your knowledge base that contain the information the customer is looking for, and phrases them in a natural-sounding and conversational way.
Understand your AI Agent's Content Filters
LLM training data can contain harmful or undesirable content, and generative AI can sometimes generate details that aren't true, which are called hallucinations. To combat these issues, your AI Agent uses an additional set of models to ensure the quality of its responses.
Before sending any generated response to your customer, your AI Agent checks to make sure the response is:
- Safe: The response doesn't contain any harmful content.
- Relevant: The response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.
- Accurate: The response matches the content in your knowledge base, so your AI Agent can double-check that its response is true.
With these checks in place, you can feel confident that your AI Agent has not only made sound decisions in how to help your customer, but has also sent them high-quality responses.
Understand MessageMind™’s Reasoning Engine
Your AI Agent runs on a sophisticated Reasoning Engine MessageMind™ created to provide customers with the knowledge and solutions they need.
When customers ask your AI Agent a question, it takes into account the following when deciding what to do next:
- Conversation context: Does the conversation before the current question contain context that would help your AI Agent better answer the question?
- Knowledge base: Does the knowledge base contain the information the customer is looking for?
- Business systems: Are there any Actions configured with your AI Agent designed to let it fetch the information the customer is looking for?
From there, it decides how to respond to the customer:
- Follow-up question: If your AI Agent needs more information to help the customer, it can ask for more information.
- Knowledge base: If the answer to the customer's inquiry is in the knowledge base, it can obtain that information and use it to write a response.
- Business systems: If the answer to the customer's inquiry is available using one of the Actions configured in your AI Agent, your AI Agent can fetch that information by making an API call.
- Handoff: If your AI Agent is otherwise unable to respond to the customer's request, it can hand the customer off to a human agent for further assistance.
Together, the mechanism that makes these complex decisions on how to help the customer is called MessageMind™’s Reasoning Engine. Just like when a human agent makes decisions about how to help a customer based on what they know about what the customer wants, the Reasoning Engine takes into account a variety of information to figure out how to resolve the customer's inquiry as effectively as possible.
Understand How Your AI Agent Prevents Prompt Injections
Many AI chatbots are vulnerable to prompt injections or jailbreaking, which are prompts that get the chatbot to provide information that it shouldn't - for example, information that is confidential or unsafe.
The reasoning engine behind MessageMind™’s AI Agents is structured in such a way as to make adversarial LLM attacks very difficult to succeed. Specifically, it has:
- A series of AI subsystems interacting together, each of which modifies the context surrounding a customer's message.
- Several prompt instructions that make the task to be performed very clear, directing the AI Agent to not share inner workings and instructions, and to redirect conversations away from casual chitchat.
- Models that aim to detect and filter out harmful content in inputs or outputs.
With state-of-the-art generative AI testing prior to new deployments, MessageMind™ ensures a secure and effective customer interaction experience.
When you connect your AI Agent to your knowledge base and start to serve automatically generated content to your customers, it might feel like magic. But it's not! This topic takes you through what happens behind the scenes when you start serving knowledge base content to customers.
How MessageMind™ ingests your knowledge base
When you link your knowledge base to your AI Agent, your AI Agent copies down all of your knowledge base content, so it can quickly search through it and serve relevant information from it. Here's how it happens:
-
When you link your AI Agent with your knowledge base, your AI Agent imports all of your knowledge base content.
Depending on the tools you use to create and host your knowledge base, your knowledge base then updates with different frequencies:
-
If your knowledge base is in Zendesk or Salesforce, your AI Agent checks back for updates every 15 minutes.
- If your AI Agent hasn't had any conversations, either immediately after you linked it with your knowledge base or in the last 30 days, your AI Agent pauses syncing. To trigger a sync with your knowledge base, have a test conversation with your AI Agent.
-
If your knowledge base is hosted elsewhere, you or your MessageMind™ team have to build an integration to scrape it and upload content to MessageMind's Knowledge API. If this is the case, the frequency of updates depends on the integration.
-
-
Your AI Agent splits your articles into chunks, so it doesn't have to search through long articles each time it looks for information - it can just look at the shorter chunks instead.
While each article can cover a variety of related concepts, each chunk should only cover one key concept. Additionally, your AI Agent includes context for each chunk; each chunk contains the headings that preceded it.
-
Your AI Agent sends each chunk to a Large Language Model (LLM), which it uses to assign the chunks numerical representations that correspond to the meaning of each chunk. These numerical values are called embeddings, and it saves them into a database.
The database is then ready to provide information for GPT to put together into natural-sounding responses to customer questions.
How MessageMind™ creates responses from knowledge base content
After saving your knowledge base content into a database, your AI Agent is ready to provide content from it to answer your customers' questions. Here's how it does that:
-
Your AI Agent sends the customer's query to the LLM, so it can get an embedding (a numerical value) that corresponds with the information the customer was asking for.
Before proceeding, the AI Agent sends the content through a moderation check via the LLM to see if the customer's question was inappropriate or toxic. If it was, your AI Agent rejects the query and doesn't continue with the answer generation process.
-
Your AI Agent then compares embeddings between the customer's question and the chunks in its database, to see if it can find relevant chunks that match the meaning of the customer's question. This process is called retrieval.
Your AI Agent looks for the best match in meaning in the database to what the customer asked for, which is called semantic similarity, and saves the top three most relevant chunks.
If the customer's question is a follow-up to a previous question, your AI Agent might get the LLM to rewrite the customer's question to include context to increase the chances of getting relevant chunks. For example, if a customer asks your AI Agent whether your store sells cookies, and your AI Agent says yes, your customer may respond with "how much are they?" That question doesn't have enough information on its own, but a question like "how much are your cookies?" provides enough context to get a meaningful chunk of information back.
If your AI Agent isn't able to find any relevant matches to the customer's question in the database's chunks at this point, it serves the customer a message asking them to rephrase their question or escalates the query to a human agent, rather than attempting to generate a response and risking serving inaccurate information.
-
Your AI Agent sends the three chunks from the database that are the most relevant to the customer's question to GPT to stitch together into a response. Then, your AI Agent sends the generated response through three filters:
-
The Safety filter checks to make sure that the generated response doesn't contain any harmful content.
-
The Relevance filter checks to make sure that the generated response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.
-
The Accuracy filter checks to make sure that the generated response matches the content in your knowledge base, so it can verify that the AI Agent's response is true.
-
-
If the generated response passes these three filters, your AI Agent serves it to the customer.
Getting ready to create automatically generated content from your knowledge base for the first time? Or maybe you're just looking to tune up your knowledge base? Follow these principles to make the information in your knowledge base easy for your AI Agent to parse, which will improve your AI Agent's chances of serving up relevant and helpful information to your customers.
Structure information with your customer in mind
When you're maintaining a knowledge base, it can be easy to organize information in a way that makes sense to you, but not to people who aren't familiar with the information in it already. If you watch a new customer trying to navigate your knowledge base, you'll almost certainly be surprised at what they do! If you're like most people who maintain knowledge bases, you probably aren't a beginner, which means that you likely aren't your own target audience.
So how can you make sure your knowledge base is useful for customers, and what does that have to do with AI? The answer here comes down to using titles and headings. We'll call these signposts as a collective, because they act as signposts for both humans and AI, to indicate how likely it is that a customer is getting closer to the information they want to get to in your knowledge base. Additionally, when MessageMind™ ingests your knowledge base content, it saves the topic title as context for each chunk of information it splits your knowledge base into. When information has proper context, it's less likely that your AI Agent will serve irrelevant information to customers.
-
Categorize your information into groups that don't overlap. That way, both your human customers and your AI are less likely to make a wrong turn and find information that isn't relevant to what they're looking for.
-
Make every signpost relevant to all of the information under it. If there's information under a heading that isn't relevant to that heading, both customers and AI might have trouble finding that information. Similarly, if the heading is confusing or implies that it's followed by information that isn't actually there, it makes it harder for customers and AI to navigate your knowledge base.
-
Always organize information from most broad to most specific. It should be easy for customers to figure out whether they're getting closer to the information they're looking for by following your signposts.
-
Make your signposts descriptive.
-
Orient signposts around customer objectives. Most likely, customers are coming to your knowledge base or AI Agent looking for help with a specific task, so making it clear which articles are about which tasks is very helpful.
As a best practice, use verbs in your signposts to make it easier for customers to find the actions they want to perform.
-
To help people and AI scan your content more easily, put important verbs and vocabulary closer to the front of your signposts than to the end.
-
Wherever you can, avoid mentioning concepts or terminology that new customers might not be familiar with yet. Unfamiliar wording makes it harder for signposts to do their job, because both people and LLMs can find them confusing.
-
Try to make it easy for customers to figure out whether they're the intended audience for an article just from reading the signposts. No customer wants to waste their time opening articles just to find that they're not relevant, or reading irrelevant responses.
-
-
Use proper HTML structure to create signposts. It might look just as good if you highlight some text, increase the size, and make it bold, but an AI model might struggle to recognize that that formatting is supposed to indicate a heading. Instead, use the appropriate
<h1>
tags, and so on. When you do, your formatting will be much more consistent, and AI can pick out the hierarchy your information is in. Additionally, your customers who have visual impairments can navigate properly formatted content more easily, because their assistive technology (e.g., screen readers) are programmed to parse HTML. -
Don't assume that customers are going to read your knowledge base in order. Customers might find an article, or even a section of an article, via a search engine, and might get frustrated if the information they see requires a lot of context they don't have. Likewise, if your AI Agent sends a customer information without context, that can be a frustrating chat experience too. Make sure you lay some groundwork in your more advanced articles so all of your customers can go back and get more information if they need to.
The study of organizing information to aid customer navigation is called information architecture. If you're interested in more information, including resources on how to perform tests to see how your knowledge base organization works for new customers, see Information Architecture: Study Guide at the Nielsen Norman Group's website.
Create informative chunks by writing standalone content
When an AI ingests the content in your knowledge base, it breaks the information up into chunks. Then, when customers ask your AI Agent questions, your AI Agent searches for chunks that have relevant meanings and uses them to create responses. Here are some ways you can ensure each chunk makes sense on its own:
-
Provide information in full sentences. Because your AI Agent puts information into chunks, the best way to make sure your information has full context is to provide full sentences.
For example, let's say your knowledge base contains a FAQ, and one of your questions is "Can I pay by credit card?" Instead of a simple "Yes," which isn't helpful on its own, phrase the answer as "Yes, you can pay by credit card."
-
Avoid references to other locations in your knowledge base. When customers are reading your knowledge base content in the context of a AI Agent, they won't have context for references like "As you saw in our last example." Avoid these kinds of references that make customers feel like they're missing out on information.
Write clearly and concisely
Now that we've talked about how your information should be organized, we can look at what the information itself should look like. The simpler your content is, the easier it is for both humans and AI to find the important pieces of information they need.
-
Use clear terminology that doesn't overlap. The easier your terminology is to follow, the more likely it is that a customer or AI can recognize whether content is relevant to a customer's question.
For example, let's say your company makes music publishing software, and your knowledge base has some information about making a demo. But your knowledge base might also have information about how prospective customers can contact your Sales team for a demo of your software. The word "demo" meaning two different things in your knowledge base can cause confusion and cause irrelevant search results to come up. If you can, see if you can replace one instance with a different word, so the word "demo" consistently means only one thing in your knowledge base.
-
Use simple language. Have you ever read a really long, meandering sentence, and by the end weren't really sure what the author was trying to say? This can happen when either human customers or an AI are parsing your knowledge base. Take the time to cut unnecessary content so it's easier to pick out the takeaways from your content.
-
Minimize your reliance on images and videos. Generative content only works with text; your AI Agent can't access images in your knowledge base. If you have content in images, it's a good idea to re-evaluate if there's a way to provide that same content in text.
Another reason this is a good idea is for accessibility: your customers who have visual disabilities may not be able to see your images or videos. Making as much text available in text as possible, like in alt text or transcripts, helps both customers chatting with your AI Agent and your customers who access your knowledge base using assistive technology like screen readers.
-
Verify your table content. Some tables work better than others with AI; sometimes AI is able to parse the spatial relationships between cells, and sometimes it gets confused. After setting up your AI Agent, test its ability to provide information that comes from tables in your knowledge base. Your tables are likely to work if they're formatted using proper Markdown, but note that they won't work if they're embedded in images.
Make data-driven maintenance decisions
It's common for documentation teams to be small and sometimes struggle with keeping entire knowledge bases up to date. If you're on a team like that, the idea of turning over your knowledge base to an AI Agent can feel daunting. You're not alone! If this situation feels familiar to you, it's important to work smarter and not harder by following some best practices:
-
Collect analytics data for your knowledge base. There are lots of ways to track usage data for your knowledge base, depending on the tools you use to make it. Customer usage data is often surprising to people who spend all day using a product - that's why it's important to collect it.
-
Decide which metrics best indicate success for your knowledge base. The metrics in your analytics data can be tricky: the stories they tell are often up to interpretation.
For example, if customers only tend to spend 10 seconds on a long topic, is it because they tend to be looking for a crucial piece of information near the top of the page? If that's the case, you probably don't need to change anything. But what if they're spending that time scanning through the page for information that isn't there and leaving in frustration? In that case, there's probably something you can improve about the way your knowledge base is organized.
There's no right or wrong set of metrics to focus on. It's common to focus on the topics that are most commonly viewed, or topics that have the most positive or negative reviews from customers, but ultimately it's up to you and your organization to choose how to measure the success of your knowledge base. That strategy can change over time, but you should have some data that you can refer back to.
-
Prioritize content reviews based on the data you collect. After collecting some customer usage data, start to go through it. Can you find patterns about the kinds of content that customers seemed to gravitate towards, or other content that customers didn't touch at all? Using your usage data, start creating a priority list for important topics to make sure they're polished.
-
You can always disable articles from the Knowledge page. If you know that a topic is out of date, but it's too low on your priority list to get to right away, you can disable it from showing up in generative content. That way, you can prevent inaccurate information from appearing in your AI Agent, without delaying your launch. In the future, when you do get a chance to update that topic, you can enable it again.
-
Revisit your data on a regular basis. After connecting your knowledge base to your AI Agent, you'll have even more usage data from your customers to analyze. When you revisit your data, you can test the success of your prior decisions and adjust your priorities accordingly.
-
Make use of MessageMind's reporting tools. On your MessageMind™ dashboard, you can see high-level reports on your AI Agent's automated resolution rate, and dig deeper into individual conversations to see how your AI Agent performed. Once your AI Agent is customer-facing, you'll have even more information on how your knowledge base is serving your customers through generative AI.
If you're just getting started and don't have analytics yet, consider prioritizing a few key areas of your product to review first, and go from there. You don't have to have a perfect plan for collecting analytics data right away! The important thing is that you eventually have a system where you can both collect and analyze usage data.
Improve your knowledge base content over time
What do you do once you have customer data? You use it! Here are a few tips:
-
Keep maintenance regular. As your product changes, so will your documentation, and so will your customers' questions. Set aside regular maintenance time to take a look at your AI Agent's reports and conversation transcripts, so you can pick out opportunities to improve your knowledge base and have your AI Agent performing even better for future customers.
-
Keep feedback loops tight. If you see an opportunity to improve your knowledge base so you can improve your AI Agent, make that change right away, so you can improve your AI Agent's performance right away.
Over time, with the information about how your customers interact with your AI Agent, you'll be able to settle into a workflow where you can improve both your knowledge base and your AI Agent all at once.
Before you begin
Before working on your AI Agent, get some background information on generative content and MessageMind's resolution engine.
When you've used a chatbot in the past, you've probably thought of it as slow or difficult to use. The majority of bots out there aren't great at understanding what your customers want, or knowing how to respond or take actions like a human agent would.
By combining the information in your knowledge base with cutting-edge AI, you don't just have a chatbot with MessageMind™ - you have a generative AI Agent, designed to perform tasks that human agents have previously only been able to do.
This guide will take you through MessageMind™’s technology that we use to make the customer experience with an AI Agent different from any chatbot you've used before.
Understand Large Language Models (LLMs) and Generative AI
The secret behind how your AI Agent both understands and writes messages is in the AI, or artificial intelligence, that MessageMind™ uses behind the scenes. Broadly, AI is a range of complex computer programs designed to solve problems like humans. It can address a variety of situations and incorporate a variety of types of data; in your AI Agent's case, it focuses on analyzing language to connect customers with answers.
When a customer interacts with your AI Agent, your AI Agent uses Large Language Models, or LLMs, which are computer programs trained on large amounts of text, to identify what the customer is asking for. Based on the patterns the LLM identified in the text data, an LLM can analyze a question from a customer and determine the intent behind it. Then, it can analyze information from your knowledge base and determine whether the meaning behind it matches what the customer is looking for.
Generative AI is a type of LLM that uses its analysis of existing content to create new content: it builds sentences word by word, based on which words are most likely to follow the ones it has already chosen. Using generative AI, your AI Agent constructs responses based on pieces of your knowledge base that contain the information the customer is looking for, and phrases them in a natural-sounding and conversational way.
Understand your AI Agent's Content Filters
LLM training data can contain harmful or undesirable content, and generative AI can sometimes generate details that aren't true, which are called hallucinations. To combat these issues, your AI Agent uses an additional set of models to ensure the quality of its responses.
Before sending any generated response to your customer, your AI Agent checks to make sure the response is:
- Safe: The response doesn't contain any harmful content.
- Relevant: The response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.
- Accurate: The response matches the content in your knowledge base, so your AI Agent can double-check that its response is true.
With these checks in place, you can feel confident that your AI Agent has not only made sound decisions in how to help your customer, but has also sent them high-quality responses.
Understand MessageMind™’s Reasoning Engine
Your AI Agent runs on a sophisticated Reasoning Engine MessageMind™ created to provide customers with the knowledge and solutions they need.
When customers ask your AI Agent a question, it takes into account the following when deciding what to do next:
- Conversation context: Does the conversation before the current question contain context that would help your AI Agent better answer the question?
- Knowledge base: Does the knowledge base contain the information the customer is looking for?
- Business systems: Are there any Actions configured with your AI Agent designed to let it fetch the information the customer is looking for?
From there, it decides how to respond to the customer:
- Follow-up question: If your AI Agent needs more information to help the customer, it can ask for more information.
- Knowledge base: If the answer to the customer's inquiry is in the knowledge base, it can obtain that information and use it to write a response.
- Business systems: If the answer to the customer's inquiry is available using one of the Actions configured in your AI Agent, your AI Agent can fetch that information by making an API call.
- Handoff: If your AI Agent is otherwise unable to respond to the customer's request, it can hand the customer off to a human agent for further assistance.
Together, the mechanism that makes these complex decisions on how to help the customer is called MessageMind™’s Reasoning Engine. Just like when a human agent makes decisions about how to help a customer based on what they know about what the customer wants, the Reasoning Engine takes into account a variety of information to figure out how to resolve the customer's inquiry as effectively as possible.
Understand How Your AI Agent Prevents Prompt Injections
Many AI chatbots are vulnerable to prompt injections or jailbreaking, which are prompts that get the chatbot to provide information that it shouldn't - for example, information that is confidential or unsafe.
The reasoning engine behind MessageMind™’s AI Agents is structured in such a way as to make adversarial LLM attacks very difficult to succeed. Specifically, it has:
- A series of AI subsystems interacting together, each of which modifies the context surrounding a customer's message.
- Several prompt instructions that make the task to be performed very clear, directing the AI Agent to not share inner workings and instructions, and to redirect conversations away from casual chitchat.
- Models that aim to detect and filter out harmful content in inputs or outputs.
With state-of-the-art generative AI testing prior to new deployments, MessageMind™ ensures a secure and effective customer interaction experience.
When you connect your AI Agent to your knowledge base and start to serve automatically generated content to your customers, it might feel like magic. But it's not! This topic takes you through what happens behind the scenes when you start serving knowledge base content to customers.
How MessageMind™ ingests your knowledge base
When you link your knowledge base to your AI Agent, your AI Agent copies down all of your knowledge base content, so it can quickly search through it and serve relevant information from it. Here's how it happens:
-
When you link your AI Agent with your knowledge base, your AI Agent imports all of your knowledge base content.
Depending on the tools you use to create and host your knowledge base, your knowledge base then updates with different frequencies:
-
If your knowledge base is in Zendesk or Salesforce, your AI Agent checks back for updates every 15 minutes.
- If your AI Agent hasn't had any conversations, either immediately after you linked it with your knowledge base or in the last 30 days, your AI Agent pauses syncing. To trigger a sync with your knowledge base, have a test conversation with your AI Agent.
-
If your knowledge base is hosted elsewhere, you or your MessageMind™ team have to build an integration to scrape it and upload content to MessageMind's Knowledge API. If this is the case, the frequency of updates depends on the integration.
-
-
Your AI Agent splits your articles into chunks, so it doesn't have to search through long articles each time it looks for information - it can just look at the shorter chunks instead.
While each article can cover a variety of related concepts, each chunk should only cover one key concept. Additionally, your AI Agent includes context for each chunk; each chunk contains the headings that preceded it.
-
Your AI Agent sends each chunk to a Large Language Model (LLM), which it uses to assign the chunks numerical representations that correspond to the meaning of each chunk. These numerical values are called embeddings, and it saves them into a database.
The database is then ready to provide information for GPT to put together into natural-sounding responses to customer questions.
How MessageMind™ creates responses from knowledge base content
After saving your knowledge base content into a database, your AI Agent is ready to provide content from it to answer your customers' questions. Here's how it does that:
-
Your AI Agent sends the customer's query to the LLM, so it can get an embedding (a numerical value) that corresponds with the information the customer was asking for.
Before proceeding, the AI Agent sends the content through a moderation check via the LLM to see if the customer's question was inappropriate or toxic. If it was, your AI Agent rejects the query and doesn't continue with the answer generation process.
-
Your AI Agent then compares embeddings between the customer's question and the chunks in its database, to see if it can find relevant chunks that match the meaning of the customer's question. This process is called retrieval.
Your AI Agent looks for the best match in meaning in the database to what the customer asked for, which is called semantic similarity, and saves the top three most relevant chunks.
If the customer's question is a follow-up to a previous question, your AI Agent might get the LLM to rewrite the customer's question to include context to increase the chances of getting relevant chunks. For example, if a customer asks your AI Agent whether your store sells cookies, and your AI Agent says yes, your customer may respond with "how much are they?" That question doesn't have enough information on its own, but a question like "how much are your cookies?" provides enough context to get a meaningful chunk of information back.
If your AI Agent isn't able to find any relevant matches to the customer's question in the database's chunks at this point, it serves the customer a message asking them to rephrase their question or escalates the query to a human agent, rather than attempting to generate a response and risking serving inaccurate information.
-
Your AI Agent sends the three chunks from the database that are the most relevant to the customer's question to GPT to stitch together into a response. Then, your AI Agent sends the generated response through three filters:
-
The Safety filter checks to make sure that the generated response doesn't contain any harmful content.
-
The Relevance filter checks to make sure that the generated response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.
-
The Accuracy filter checks to make sure that the generated response matches the content in your knowledge base, so it can verify that the AI Agent's response is true.
-
-
If the generated response passes these three filters, your AI Agent serves it to the customer.
Getting ready to create automatically generated content from your knowledge base for the first time? Or maybe you're just looking to tune up your knowledge base? Follow these principles to make the information in your knowledge base easy for your AI Agent to parse, which will improve your AI Agent's chances of serving up relevant and helpful information to your customers.
Structure information with your customer in mind
When you're maintaining a knowledge base, it can be easy to organize information in a way that makes sense to you, but not to people who aren't familiar with the information in it already. If you watch a new customer trying to navigate your knowledge base, you'll almost certainly be surprised at what they do! If you're like most people who maintain knowledge bases, you probably aren't a beginner, which means that you likely aren't your own target audience.
So how can you make sure your knowledge base is useful for customers, and what does that have to do with AI? The answer here comes down to using titles and headings. We'll call these signposts as a collective, because they act as signposts for both humans and AI, to indicate how likely it is that a customer is getting closer to the information they want to get to in your knowledge base. Additionally, when MessageMind™ ingests your knowledge base content, it saves the topic title as context for each chunk of information it splits your knowledge base into. When information has proper context, it's less likely that your AI Agent will serve irrelevant information to customers.
-
Categorize your information into groups that don't overlap. That way, both your human customers and your AI are less likely to make a wrong turn and find information that isn't relevant to what they're looking for.
-
Make every signpost relevant to all of the information under it. If there's information under a heading that isn't relevant to that heading, both customers and AI might have trouble finding that information. Similarly, if the heading is confusing or implies that it's followed by information that isn't actually there, it makes it harder for customers and AI to navigate your knowledge base.
-
Always organize information from most broad to most specific. It should be easy for customers to figure out whether they're getting closer to the information they're looking for by following your signposts.
-
Make your signposts descriptive.
-
Orient signposts around customer objectives. Most likely, customers are coming to your knowledge base or AI Agent looking for help with a specific task, so making it clear which articles are about which tasks is very helpful.
As a best practice, use verbs in your signposts to make it easier for customers to find the actions they want to perform.
-
To help people and AI scan your content more easily, put important verbs and vocabulary closer to the front of your signposts than to the end.
-
Wherever you can, avoid mentioning concepts or terminology that new customers might not be familiar with yet. Unfamiliar wording makes it harder for signposts to do their job, because both people and LLMs can find them confusing.
-
Try to make it easy for customers to figure out whether they're the intended audience for an article just from reading the signposts. No customer wants to waste their time opening articles just to find that they're not relevant, or reading irrelevant responses.
-
-
Use proper HTML structure to create signposts. It might look just as good if you highlight some text, increase the size, and make it bold, but an AI model might struggle to recognize that that formatting is supposed to indicate a heading. Instead, use the appropriate
<h1>
tags, and so on. When you do, your formatting will be much more consistent, and AI can pick out the hierarchy your information is in. Additionally, your customers who have visual impairments can navigate properly formatted content more easily, because their assistive technology (e.g., screen readers) are programmed to parse HTML. -
Don't assume that customers are going to read your knowledge base in order. Customers might find an article, or even a section of an article, via a search engine, and might get frustrated if the information they see requires a lot of context they don't have. Likewise, if your AI Agent sends a customer information without context, that can be a frustrating chat experience too. Make sure you lay some groundwork in your more advanced articles so all of your customers can go back and get more information if they need to.
The study of organizing information to aid customer navigation is called information architecture. If you're interested in more information, including resources on how to perform tests to see how your knowledge base organization works for new customers, see Information Architecture: Study Guide at the Nielsen Norman Group's website.
Create informative chunks by writing standalone content
When an AI ingests the content in your knowledge base, it breaks the information up into chunks. Then, when customers ask your AI Agent questions, your AI Agent searches for chunks that have relevant meanings and uses them to create responses. Here are some ways you can ensure each chunk makes sense on its own:
-
Provide information in full sentences. Because your AI Agent puts information into chunks, the best way to make sure your information has full context is to provide full sentences.
For example, let's say your knowledge base contains a FAQ, and one of your questions is "Can I pay by credit card?" Instead of a simple "Yes," which isn't helpful on its own, phrase the answer as "Yes, you can pay by credit card."
-
Avoid references to other locations in your knowledge base. When customers are reading your knowledge base content in the context of a AI Agent, they won't have context for references like "As you saw in our last example." Avoid these kinds of references that make customers feel like they're missing out on information.
Write clearly and concisely
Now that we've talked about how your information should be organized, we can look at what the information itself should look like. The simpler your content is, the easier it is for both humans and AI to find the important pieces of information they need.
-
Use clear terminology that doesn't overlap. The easier your terminology is to follow, the more likely it is that a customer or AI can recognize whether content is relevant to a customer's question.
For example, let's say your company makes music publishing software, and your knowledge base has some information about making a demo. But your knowledge base might also have information about how prospective customers can contact your Sales team for a demo of your software. The word "demo" meaning two different things in your knowledge base can cause confusion and cause irrelevant search results to come up. If you can, see if you can replace one instance with a different word, so the word "demo" consistently means only one thing in your knowledge base.
-
Use simple language. Have you ever read a really long, meandering sentence, and by the end weren't really sure what the author was trying to say? This can happen when either human customers or an AI are parsing your knowledge base. Take the time to cut unnecessary content so it's easier to pick out the takeaways from your content.
-
Minimize your reliance on images and videos. Generative content only works with text; your AI Agent can't access images in your knowledge base. If you have content in images, it's a good idea to re-evaluate if there's a way to provide that same content in text.
Another reason this is a good idea is for accessibility: your customers who have visual disabilities may not be able to see your images or videos. Making as much text available in text as possible, like in alt text or transcripts, helps both customers chatting with your AI Agent and your customers who access your knowledge base using assistive technology like screen readers.
-
Verify your table content. Some tables work better than others with AI; sometimes AI is able to parse the spatial relationships between cells, and sometimes it gets confused. After setting up your AI Agent, test its ability to provide information that comes from tables in your knowledge base. Your tables are likely to work if they're formatted using proper Markdown, but note that they won't work if they're embedded in images.
Make data-driven maintenance decisions
It's common for documentation teams to be small and sometimes struggle with keeping entire knowledge bases up to date. If you're on a team like that, the idea of turning over your knowledge base to an AI Agent can feel daunting. You're not alone! If this situation feels familiar to you, it's important to work smarter and not harder by following some best practices:
-
Collect analytics data for your knowledge base. There are lots of ways to track usage data for your knowledge base, depending on the tools you use to make it. Customer usage data is often surprising to people who spend all day using a product - that's why it's important to collect it.
-
Decide which metrics best indicate success for your knowledge base. The metrics in your analytics data can be tricky: the stories they tell are often up to interpretation.
For example, if customers only tend to spend 10 seconds on a long topic, is it because they tend to be looking for a crucial piece of information near the top of the page? If that's the case, you probably don't need to change anything. But what if they're spending that time scanning through the page for information that isn't there and leaving in frustration? In that case, there's probably something you can improve about the way your knowledge base is organized.
There's no right or wrong set of metrics to focus on. It's common to focus on the topics that are most commonly viewed, or topics that have the most positive or negative reviews from customers, but ultimately it's up to you and your organization to choose how to measure the success of your knowledge base. That strategy can change over time, but you should have some data that you can refer back to.
-
Prioritize content reviews based on the data you collect. After collecting some customer usage data, start to go through it. Can you find patterns about the kinds of content that customers seemed to gravitate towards, or other content that customers didn't touch at all? Using your usage data, start creating a priority list for important topics to make sure they're polished.
-
You can always disable articles from the Knowledge page. If you know that a topic is out of date, but it's too low on your priority list to get to right away, you can disable it from showing up in generative content. That way, you can prevent inaccurate information from appearing in your AI Agent, without delaying your launch. In the future, when you do get a chance to update that topic, you can enable it again.
-
Revisit your data on a regular basis. After connecting your knowledge base to your AI Agent, you'll have even more usage data from your customers to analyze. When you revisit your data, you can test the success of your prior decisions and adjust your priorities accordingly.
-
Make use of MessageMind's reporting tools. On your MessageMind™ dashboard, you can see high-level reports on your AI Agent's automated resolution rate, and dig deeper into individual conversations to see how your AI Agent performed. Once your AI Agent is customer-facing, you'll have even more information on how your knowledge base is serving your customers through generative AI.
If you're just getting started and don't have analytics yet, consider prioritizing a few key areas of your product to review first, and go from there. You don't have to have a perfect plan for collecting analytics data right away! The important thing is that you eventually have a system where you can both collect and analyze usage data.
Improve your knowledge base content over time
What do you do once you have customer data? You use it! Here are a few tips:
-
Keep maintenance regular. As your product changes, so will your documentation, and so will your customers' questions. Set aside regular maintenance time to take a look at your AI Agent's reports and conversation transcripts, so you can pick out opportunities to improve your knowledge base and have your AI Agent performing even better for future customers.
-
Keep feedback loops tight. If you see an opportunity to improve your knowledge base so you can improve your AI Agent, make that change right away, so you can improve your AI Agent's performance right away.
Over time, with the information about how your customers interact with your AI Agent, you'll be able to settle into a workflow where you can improve both your knowledge base and your AI Agent all at once.