While generative AI and large language models (LLMs) are hot commodities on the market, they lack the ability to give fully informed, comprehensive answers. LLMs are trained on large datasets and user input, but this data often goes “stale” and is not traceable; in other words, the sources an answer is pulled from are neither given nor clear.
In a business context, these flaws pose many risks. How can managers make decisions if they can’t trust the quality or relevancy of the data in an LLM? If the answer to a question has knowledge gaps, is it really helpful to an employee at all?
We recently saw an example of this with the virtual tax-filing assistants from TurboTax and H&R Block, whose LLM-based chatbots completely missed the mark. Both companies championed their new AI chatbots, only for them to churn out unhelpful and irrelevant answers. More can be read in the Washington Post’s scathing review of the chatbots, but the bottom line is that these companies moved too quickly to ship a feature that requires sophisticated development and implementation.
In contrast, let’s take a look at how semantic technologies can remedy these problems. For comparison’s sake, we have included a demo of a simplified graph-based LLM; more information about our Semantic RAG application (now referred to as Graph RAG) can be found by jumping ahead.
PoolParty Meets ChatGPT
Aside from the lack of explainability found in an LLM like ChatGPT, it often provides inaccurate or abbreviated answers to questions that could otherwise be answered in full.
Last year, the PoolParty Team worked extensively on a demo application that combines the strengths of an LLM with Semantic AI – an explainable AI whose sourcing you can trust.
In this screenshot, you can see how ChatGPT’s answer is relatively weak compared to the semantically enriched answer on the right. Our knowledge graph adds a layer on top of ChatGPT that not only provides more comprehensive answers but also enriches them with sources from the knowledge graph.
The benefit of this approach is twofold: the answer can be better trusted, because you can click through the terms and see where the information is pulled from, and the answer is thorough, ensuring that there are no gaps in the information exchange.
The PoolParty Meets ChatGPT demo was our first step into the LLM world but since then we have enhanced it even further through an advanced methodology called Retrieval Augmented Generation (RAG).
The advanced features of a Semantic RAG
A Semantic or Graph RAG design pattern combines symbolic AI (knowledge graphs) and generative AI (LLMs) for better domain fidelity and fewer hallucinations. On top of the LLM’s impressive natural language processing capabilities, the pattern adds semantic search, ensuring that items and documents are found based on the intent behind the user’s query and not just the keywords entered in the search field.
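To make the pattern concrete, here is a minimal sketch of the general retrieve-then-generate flow. It is illustrative only, not PoolParty’s implementation: the toy triples and the helper names (extract_concepts, retrieve_context, build_prompt) are assumptions made for the example.

```python
# Minimal sketch of the Semantic RAG pattern: ground the user's question
# in a knowledge graph, retrieve matching facts and documents, then hand
# an enriched prompt to the LLM. All data and names are illustrative.

# A toy knowledge graph: (subject, predicate, object) triples.
TRIPLES = [
    ("Vienna", "isCapitalOf", "Austria"),
    ("Vienna", "mentionedIn", "doc-travel-01"),
    ("Austria", "mentionedIn", "doc-economy-07"),
]

def extract_concepts(question: str) -> list[str]:
    """Link words in the question to concepts known to the graph."""
    known = {s for s, _, _ in TRIPLES} | {o for _, _, o in TRIPLES}
    return [w.strip("?.,") for w in question.split() if w.strip("?.,") in known]

def retrieve_context(concepts: list[str]) -> list[str]:
    """Pull facts and tagged documents for the linked concepts."""
    return [f"{s} {p} {o}" for s, p, o in TRIPLES if s in concepts or o in concepts]

def build_prompt(question: str, context: list[str]) -> str:
    """Combine graph facts with the question so the LLM answers in context."""
    facts = "\n".join(context)
    return f"Use only these facts:\n{facts}\n\nQuestion: {question}"

question = "What is the capital of Austria?"
prompt = build_prompt(question, retrieve_context(extract_concepts(question)))
print(prompt)  # This prompt would then be sent to the LLM of your choice.
```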
Information exchange with a smart chatbot
When a user first enters a question into the search field, they get a response that summarizes the answer and embeds all the appropriate linked content/sources. If this answer still needs refinement or if the user is looking for something more specific, they can simply ask the LLM a follow-up question that deepens the response. At this point, the user has engaged in a “conversation” with the LLM and can continue to converse with additional prompting.
The user can chat with their data, making it far more useful and actionable than static data sitting in a repository.
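Under the hood, such a conversation amounts to carrying the full dialog history (together with the retrieved graph context) into every follow-up call. The sketch below assumes a placeholder ask_llm function standing in for whatever LLM endpoint is actually used.

```python
# Sketch of a multi-turn exchange: each follow-up question is answered
# with the full dialog history, so the LLM can refine its previous answer.
# `ask_llm` is a placeholder for a real LLM call.

def ask_llm(messages: list[dict]) -> str:
    """Stand-in for an LLM endpoint; echoes the last question for the demo."""
    return f"(answer to: {messages[-1]['content']})"

messages = [{"role": "system",
             "content": "Answer using the supplied knowledge-graph context."}]

for question in ["What does our travel policy cover?",
                 "And what about trips outside the EU?"]:   # a follow-up
    messages.append({"role": "user", "content": question})
    answer = ask_llm(messages)
    messages.append({"role": "assistant", "content": answer})
    print(answer)
```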
Recommended articles
A recommendation algorithm identifies the documents in the company knowledge base that best match the result of the human-machine dialog and returns them as a list of summaries. This does not require sharing the knowledge base with the LLM provider, so company data remains secure.
Users can click into these documents to read more for a deeper dive.
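Here is a rough sketch of that separation, using a deliberately crude word-overlap score: the ranking runs entirely on your side, and only the dialog result and the chosen summaries would ever appear in an outgoing prompt. The document titles and the scoring function are invented for the example.

```python
# Sketch: rank company documents against the dialog result locally.
# The document base itself is never sent to the LLM provider; only the
# dialog text and the selected summaries appear in any outgoing prompt.

DOCUMENTS = {
    "Travel policy 2024": "Rules for booking business travel and expenses.",
    "Expense how-to":     "How to submit travel expenses for reimbursement.",
    "Office handbook":    "General rules for working in the Vienna office.",
}

def overlap_score(a: str, b: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def recommend(dialog_result: str, top_n: int = 2) -> list[str]:
    ranked = sorted(DOCUMENTS,
                    key=lambda title: overlap_score(dialog_result, DOCUMENTS[title]),
                    reverse=True)
    return ranked[:top_n]

print(recommend("How do I get my travel expenses reimbursed?"))
# -> ['Expense how-to', 'Travel policy 2024']
```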
Final conclusion
Our solution delivers relevant and actionable results in each section. As a final step, we process these results to produce an easily understood conclusion. You don’t have to browse through all the results to understand your query because the conclusion summarizes all relevant points for you.
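Conceptually, the conclusion is one more generation pass over the already-grounded results. The sketch below merely assembles that final summarization prompt; the wording and function name are assumptions made for the example.

```python
# Sketch: fold the grounded answer and the recommended documents into
# one final summarization prompt for the LLM.

def conclusion_prompt(answer: str, recommendations: list[str]) -> str:
    docs = "\n".join(f"- {title}" for title in recommendations)
    return ("Summarize the key points of the answer below in plain language, "
            f"and mention the supporting documents.\n\nAnswer:\n{answer}\n\n"
            f"Documents:\n{docs}")

print(conclusion_prompt("Travel expenses are reimbursed within 30 days.",
                        ["Expense how-to", "Travel policy 2024"]))
```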
Knowledge graphs and LLMs as the secret weapon
A knowledge graph is at the center of this solution, with some of the following advantages:
The added context
Visually represented as a web of sorts, knowledge graphs link various business assets, entities, and concepts together so you can see how they are related. They provide context for all these pieces of information by showing how they fit together.
Semantic tags mapped in a knowledge graph identify relationships between concepts, terms, and documents, as well as the contents within those documents. When this semantic metadata is stored in a knowledge graph, documents can be indexed and queried more effectively, allowing for precise user search.
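As an illustration, a graph query of this kind could look as follows, using rdflib with Dublin Core subject tags and SKOS labels. The data and vocabulary are invented for the example and are not PoolParty’s actual schema.

```python
# Sketch: documents tagged with knowledge-graph concepts can be retrieved
# with a precise graph query instead of a fuzzy keyword match.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, SKOS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Vienna, SKOS.prefLabel, Literal("Vienna")))
g.add((EX.doc1, DCTERMS.subject, EX.Vienna))   # doc1 is tagged with the concept

results = g.query(
    """SELECT ?doc WHERE {
           ?concept skos:prefLabel "Vienna" .
           ?doc dcterms:subject ?concept .
       }""",
    initNs={"skos": SKOS, "dcterms": DCTERMS},
)
for row in results:
    print(row.doc)   # -> http://example.org/doc1
```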
Graph database vs. vector database
Microsoft refers to a vector database as “a type of database that stores data as high-dimensional vectors, which are mathematical representations of features or attributes. Each vector has a certain number of dimensions, which can range from tens to thousands, depending on the complexity and granularity of the data.” These databases are praised for their ability to retrieve results based on the similarity and distance between vectors.
As previously mentioned, a graph database uses knowledge graphs to map the relations between data points; the context within these relations helps the machine infer factual information about them.
This is where a graph database is more powerful. Whereas a vector database will note that “Vienna” is related to “Austria” based on the close proximity of the two vectors, a graph database understands that “Vienna IS the capital of Austria.” The graph database is therefore more accurate and stays closer to the topic, which is especially helpful for companies that need precise answers and need to trace back the logic behind them.
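The toy comparison below makes this concrete: the vector side can only report that the two terms are close, while the graph side can state the relation itself. The embedding numbers are made up for the example.

```python
import math

# Vector view: "Vienna" and "Austria" have nearby embeddings (made-up
# numbers), so a vector database can report similarity, but not *why*
# the two terms are related.
vienna, austria = [0.9, 0.2, 0.1], [0.8, 0.3, 0.1]
cosine = sum(a * b for a, b in zip(vienna, austria)) / (
    math.sqrt(sum(a * a for a in vienna)) * math.sqrt(sum(b * b for b in austria)))
print(f"similar (cosine={cosine:.2f}), relation unknown")

# Graph view: the relation is stored explicitly and can be read back as a fact.
triples = {("Vienna", "isCapitalOf", "Austria")}
if ("Vienna", "isCapitalOf", "Austria") in triples:
    print("Vienna is the capital of Austria")
```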
Low levels of hallucination
Though the possibility of hallucination cannot be fully ruled out with LLMs, semantic technologies drastically lower its frequency. Below you will find how a Semantic RAG helps with each type of hallucination:
Nonsensical output
The LLM generates responses that lack logical coherence and comprehensibility.
Problem with a conventional LLM: LLMs sometimes have trouble understanding context. They may fail to distinguish between different meanings of a word and use it in the wrong context; the more ambiguous the query, the more likely it is to lead the LLM down the wrong path.
Mitigation with Semantic RAG: The Smart Query Builder injects the semantics of a word when formulating the query and thus unmistakably fixes its meaning for the LLM.

Factual contradiction
The LLM generates fictional and misleading content that is nevertheless presented as coherent despite its inaccuracy.
Problem with a conventional LLM: The data the LLM was originally trained on is no longer relevant, in time or in context, to the question posed, so the LLM fills the data gaps with hallucinations.
Mitigation with Semantic RAG: The contextual, domain-specific knowledge provided by the Semantic RAG fills those data gaps and leads the LLM to meaningful answers.

Prompt contradiction
The LLM generates a response that contradicts the prompt used to generate it, raising concerns about reliability and adherence to the intended meaning or context.
Problem with a conventional LLM: LLMs come with rules, policies, and strategies set by their parent company to prevent them from distributing unwanted content, even if it is contained in the training data. If the LLM detects a violation of these rules, its response is decoupled from the request.
Mitigation with Semantic RAG: The Smart Query Builder guides the formulation of the prompt and can take the LLM’s rules into account in advance. Of course, changing the LLM provider or fine-tuning the model can also shift those rules.
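As an illustration of the first mitigation, a disambiguation step might inject the intended concept’s definition into the prompt before the LLM ever sees the question. The CONCEPTS lookup and the prompt wording are assumptions made for this sketch, not the Smart Query Builder’s actual behavior.

```python
# Sketch: fix the meaning of an ambiguous term before the LLM sees the
# question by injecting the intended concept's definition from the graph.

CONCEPTS = {  # concept id -> definition, as it might come from a toy graph
    "jaguar-animal": "Jaguar: a large cat native to the Americas.",
    "jaguar-car":    "Jaguar: a British manufacturer of luxury cars.",
}

def build_query(question: str, term: str, intended_concept: str) -> str:
    """Prepend the concept's definition so the term is unambiguous."""
    definition = CONCEPTS[intended_concept]
    return (f'In the question below, "{term}" means: {definition}\n'
            f"Question: {question}")

print(build_query("How fast can a jaguar go?", "jaguar", "jaguar-car"))
```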
Additional content recommendations and suggestions
A recommender system backed by a knowledge graph is perhaps one of the more sophisticated recommender types on the market, and it is unique to our RAG offering.
The advantage of a knowledge-based recommender is that content not explicitly related to the search query can still be surfaced. Though the user is not specifically looking for this content, it can provide additional input relevant to the query, because the graph deems it relevant through implicit relations. In other words, the employee searching for information gets exactly what they are looking for, plus additional documents that can help their search.
The document recommendations provide an enriched, full-circle experience, letting the user take a deep dive into a topic with all the necessary files at hand.
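A sketch of the idea: starting from the concept the user searched for, the recommender also walks one hop to related concepts in the graph and surfaces their documents as well. The graph and tags are invented for the example.

```python
# Sketch: surface documents that are only *implicitly* related to the
# query by expanding the search concept one hop along the knowledge graph.

RELATED = {  # concept -> directly related concepts in the graph
    "travel expenses": ["per diem rates", "travel policy"],
}
TAGGED_DOCS = {  # concept -> documents tagged with that concept
    "travel expenses": ["Expense how-to"],
    "per diem rates":  ["Per diem table 2024"],
    "travel policy":   ["Travel policy 2024"],
}

def recommend(concept: str) -> list[str]:
    concepts = [concept] + RELATED.get(concept, [])      # one-hop expansion
    return [doc for c in concepts for doc in TAGGED_DOCS.get(c, [])]

print(recommend("travel expenses"))
# -> the explicit match plus the implicitly related documents
```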
Using your own data that you can trust
Since the company’s knowledge domain is modeled by its own subject matter experts, the recommender system closely reflects the specificity of the company’s work. Information, and thus concepts, are collected by the experts who will actively use the solution, and are organized into the knowledge model the way these experts understand them. The model takes into account the specific language the company uses and the unique relations between its business objects; as a result, the solution is precise and tailored to the company’s use case.
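For a feel of what such expert modeling can look like, here is a toy SKOS fragment with a company-specific relation, loaded with rdflib. The terms, the “T&E” jargon, and the ex:governedBy relation are invented for the example.

```python
# Sketch: subject matter experts capture company terminology and its
# relations in a SKOS-style model; all terms here are invented.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    @prefix ex:   <http://example.org/> .

    ex:TravelExpenses a skos:Concept ;
        skos:prefLabel "Travel expenses" ;
        skos:altLabel  "T&E" ;                # the company's own jargon
        skos:broader   ex:Finance ;
        ex:governedBy  ex:TravelPolicy .      # a company-specific relation
""", format="turtle")

print(len(g), "triples loaded")
```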
With our RAG, employees are better informed, can complete tasks more efficiently, and feel more secure in their decision making, ultimately freeing up working time.