There is a lot of excitement around artificial intelligence (AI) and what it can deliver. The promise of improved decision-making and work automation has fuelled expectations of how much the technology can benefit the global economy, with some placing its total impact in the trillions of dollars. But all this excitement tends to overlook one important fact: the mainstream approach to AI has many technical limitations and often works like a “black box,” producing results that humans cannot understand and therefore struggle to trust.
The limitations of the common approach to artificial intelligence
Machine learning relies on big data, and with good reason: to achieve optimal results, models have to be trained on large data sets. The problem is that big data is not always available. Imagine an online store that has recently launched and wants to recommend relevant products to its customers. Until it has gathered enough information about their purchase preferences, it is difficult to build a recommender system that offers products relevant to each specific user.
Another problem with popular machine learning models is that they are usually constrained to one domain. That means every time a company encounters a new problem, it has to build a new model to address it, because a model built for one scenario rarely performs well in another. This is extremely inefficient.
Traditional machine learning algorithms are also unable to exploit the context in which data is embedded. This limits what algorithms can see and understand, leading to poor results. An online store’s machine learning algorithm, for example, has no way of knowing that wool is warm and that this is why people wear it in winter.
But the biggest shortcoming of current approaches to AI is their lack of explainability. Without it, AI cannot be trusted. Explainability means that humans can understand and explain the results generated by the AI. Unfortunately, the most popular machine learning algorithms produce results that cannot be explained; they are simply presented as a matter of fact.
Powering AI with human capabilities
Machines have the limitations listed above because they cannot do something that you do all the time: associate concepts and ideas. The ability to associate is the basis of our human understanding of the world. If you read the word “beach,” you may think of the sun, the sea, or even sand. Someone else might think of swimsuits. These associations can be broad and subjective, since they are based on one’s understanding, experiences and even emotions. Machines, on the other hand, process information in a much more literal and exact way. This is where Semantic AI comes in: it combines both approaches.
Semantic AI is based on knowledge graphs, which are representations of knowledge and act as mediators between humans and machines. In other words, a knowledge graph follows the human way of understanding the world by linking things, ideas and concepts. Except it has one advantage: it can be understood by machines.
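To make this concrete, here is a minimal, purely illustrative sketch of a knowledge graph as a set of subject-predicate-object triples in Python. All concept and relation names are invented for the example; they are not part of any real vocabulary.

```python
# A tiny knowledge graph: each fact is a (subject, predicate, object) triple.
# Concept and relation names are illustrative, not a real vocabulary.
triples = {
    ("beach", "associated_with", "sun"),
    ("beach", "associated_with", "sand"),
    ("wool", "has_property", "warm"),
    ("wool_sweater", "made_of", "wool"),
    ("winter", "calls_for", "warm"),
}

def related(concept):
    """Follow outgoing links from a concept, much like a person free-associating."""
    return {(predicate, obj) for (subj, predicate, obj) in triples if subj == concept}

print(related("beach"))  # links "beach" to "sun" and "sand"
print(related("wool"))   # links "wool" to the property "warm"
```

The associations a person draws intuitively become explicit data that a program can traverse.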
Because it is a representation of knowledge, the process of building an enterprise knowledge graph goes beyond a company’s IT department. It requires experts in the different fields a company works in to collaborate and map out the concepts they work with and how they relate to each other. This may sound like a very complicated process, but there are methodologies, standards and tools available that can speed things up, for example by automatically extracting many concepts from texts and databases. The result of this process is a map, or more precisely a graph, expressed in a standardized language that machines can understand.
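In practice, that standardized language is typically RDF, often combined with OWL for richer vocabularies. As a rough sketch of what the output looks like, the same toy facts could be expressed with the rdflib Python library; the example.org namespace and the property names are placeholders, not a real enterprise vocabulary.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

# Placeholder namespace for an imaginary shop vocabulary.
EX = Namespace("http://example.org/shop/")

g = Graph()
g.bind("ex", EX)

# The same facts as before, now as standard RDF triples.
g.add((EX.Wool, RDF.type, EX.Material))
g.add((EX.Wool, EX.hasProperty, EX.Warm))
g.add((EX.WoolSweater, EX.madeOf, EX.Wool))
g.add((EX.Warm, RDFS.label, Literal("warm")))

# Turtle is a W3C-standard syntax that any RDF-aware tool can parse.
print(g.serialize(format="turtle"))
```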
The benefits of Semantic AI
As soon as the knowledge of a company, or at least of a sub-domain, is represented as a graph, the sky is the limit. The lack of big data, for example, becomes less of an issue. Semantic AI applications can use a company’s knowledge graph to classify data, whether text or database records, and map out its interconnections. Once this is done, algorithms require less data to produce relevant results because they can analyze not only the data itself but also its context. An e-commerce recommendation system, to follow our previous example, will be able to understand that wool is warm and that wool products should be recommended more often when it gets cold.
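As a rough illustration of that idea, the sketch below ranks products by matching their graph-derived properties against the current season. The catalogue, relation names and scoring rule are all invented for the example.

```python
# Illustrative graph facts: materials, their properties, and seasonal needs.
triples = {
    ("wool_sweater", "made_of", "wool"),
    ("wool", "has_property", "warm"),
    ("linen_shirt", "made_of", "linen"),
    ("linen", "has_property", "light"),
    ("winter", "calls_for", "warm"),
    ("summer", "calls_for", "light"),
}

def properties_of(product):
    """Derive a product's properties by following made_of -> has_property links."""
    materials = {o for (s, p, o) in triples if s == product and p == "made_of"}
    return {o for (s, p, o) in triples if s in materials and p == "has_property"}

def recommend(products, season):
    """Rank products higher when their properties match what the season calls for."""
    wanted = {o for (s, p, o) in triples if s == season and p == "calls_for"}
    return sorted(products, key=lambda prod: len(properties_of(prod) & wanted), reverse=True)

print(recommend(["linen_shirt", "wool_sweater"], "winter"))
# ['wool_sweater', 'linen_shirt']: wool is warm, and winter calls for warmth
```

Notice that no purchase history is needed for this step: the context comes from the graph rather than from big data.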
AI solutions built on a knowledge graph are also reusable. Once a company’s knowledge is modeled in this format, it exists separately from existing databases. That means the knowledge graph does not have to be constrained to one specific domain; it can cover all the areas a company works in and be used again and again.
Last but not least, this approach makes machine learning algorithms more explainable. Because the relationships between data are mapped, it becomes easier to understand how a specific algorithm reached any given result. This makes their outputs more trustworthy, because humans can evaluate and understand them.
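Continuing the same toy example, an explanation can simply be the chain of graph facts that led to a recommendation; again, every name here is illustrative.

```python
# Illustrative facts (the same toy graph as above, trimmed to the relevant part).
triples = {
    ("wool_sweater", "made_of", "wool"),
    ("wool", "has_property", "warm"),
    ("winter", "calls_for", "warm"),
}

def explain(product, season):
    """Return the chain of facts that justifies recommending a product."""
    reasons = []
    for (s, p, o) in triples:
        if s == product and p == "made_of":
            material = o
            reasons.append(f"{product} is made of {material}")
            for (s2, p2, prop) in triples:
                if s2 == material and p2 == "has_property":
                    reasons.append(f"{material} has the property '{prop}'")
                    if (season, "calls_for", prop) in triples:
                        reasons.append(f"{season} calls for '{prop}' products")
    return reasons

print(explain("wool_sweater", "winter"))
# ["wool_sweater is made of wool", "wool has the property 'warm'",
#  "winter calls for 'warm' products"]
```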
Building AI that people can trust
Transparency will be essential for artificial intelligence to thrive. There are two main reasons for this: accountability and trust. In some cases, the use of artificial intelligence can lead to negative outcomes, such as endangering users, as in autonomous driving, or embedding hidden bias, as can happen in human resources departments. In decision-making, being able to understand how AI reaches a conclusion will be essential for decision-makers to actually adopt the technology and allow it to inform what companies do.
It is likely that this will eventually be regulated by governments, but for the time being there is no other option than making the decisions AI produces more transparent. The only way out of this dilemma is a fundamental re-engineering of AI’s underlying architecture, with knowledge graphs as a prerequisite for building applications. Once this becomes the norm rather than the exception, we will have artificial intelligence that people can trust and that performs at a much higher level than what we see today.