HAVE A CONVERSATION WITH AN EXPERT

Interview With

Toni Byrd Ressaire

10.9.2024

Our second installment of this “Have a Conversation with an Expert” series features a very exciting and enlightening discussion with Toni Byrd Ressaire, Director of Innovation at Technically Write IT (TWi).

Though Toni has been in the information sector for more than 20 years, her current role at TWi has her developing the vision and strategy of the company while also providing consultancy and implementation support to clients. TWi offers companies end-to-end content services, including content consulting, user adoption, machine translation, and localization.

Toni’s expertise spans a variety of topics. Prior to working at TWi, she was a researcher in natural language processing (NLP) and set up her own consulting company that specialized in preparing content for emerging technologies like Conversational AI. To quote Toni: “I was interested in how AI and natural language processing would impact our industry, in particular the technical documentation industry.”

Alongside her role at TWi, she also teaches a course on technical communication at Munster Technological University. 

Given Toni’s extensive experience with content, NLP, and AI, we thought it fitting to interview her for this series. Take a look at the Q&A section below to see what Toni thinks of all things Conversational AI.

Toni Byrd Ressaire

Director of Innovation at Technically Write IT

Toni Byrd Ressaire is Director of Innovation at Technically Write IT (TWi). Based in Ireland, TWi offers consulting and customised end-to-end content operations for seamless management of content from conception to delivery. Toni graduated with an MSc in Technical and Scientific Communication from James Madison University in the United States. She specializes in content strategy and thinking outside the box.

“An LLM is only as good as the information that’s fed to it. It’s all based on the source. So if our information at the source is accurate, disambiguated, and it has some context, it’s already better off.”

“Because remember, even though these are models that “understand” natural language, they still are computer models. They’re not humans.”

“So we have to provide some context, and we can do that through semantically enriching that information, which is something you understand very, very well. It’s really the focus of what you do at Semantic Web Company.”

Interview Questions & Answers
It's interesting that you're deciding how to incorporate AI strategies not only into your own company, but also with your customers. So what does that look like? Are they just coming to you and asking for advice, or what is that process?

Yes, we’ve actually had questions such as, “Can you help us with our AI strategy?” or “Can you help us understand how to get our own content ready for AI because we’re getting requests from the executive level and questions about how are we going to prepare for incorporating AI.” So we’re hearing those types of questions from our clients.

And so we ourselves, as a company, have actually been looking at AI, of course, and looking at the use cases, doing some proofs of concept, and practicing with it for a number of years. Before OpenAI released ChatGPT, we were working with what we would call conversational AI, and chatbots before that. And then a couple of years before OpenAI actually released their commercial product, we were already in their playground, if you’ve heard of that, looking at Generative AI and what the use cases might be.

Of course, as we all know, when OpenAI released ChatGPT, it just exploded around the world big time. So in some ways, we felt prepared for it, but quite frankly, in some ways, we still had to look at how the world, and business in particular, was going to respond to it and find our niche within that kind of new paradigm.

So your team was already playing around with ChatGPT, or at least that sort of infrastructure, before it became public? And were you building chatbots with that or what was the main use case before?

The chatbots that we were building and sometimes playing around with were a combination of rule-based and NLP chatbots. When I started research back in 2016, I was working with NLP chatbots.

And without going into a lot of detail – and many people probably understand this – there’s a difference between rule-based and AI-driven bots. And I was particularly interested in how an artificial intelligence could process human language so that we could understand how to have conversations using AI and information. So this idea of conversing with your data or conversing with your information is something that we were already looking at, but the technology just wasn’t advanced enough.

We were using a combination of rule-based and AI. But a lot of times we had to resort to rule-based chatbots, because particularly in business, we really need accuracy, we need content control, and the NLP just wasn’t quite advanced enough to give us that control – so we weren’t deploying them very much.

Looking at large language models (LLMs), it really just takes that conversing to the next level because it incorporates the understanding of natural human language in a much more advanced way.

And would you say that LLMs and these Conversational AI platforms at the moment are advanced enough to suit both your company's needs and client needs, or do they still have some things that need to be heavily worked on?

YES. With the right preparation. With all of these use cases, we’re moving toward this idea of AI readiness. We get these statements from our clients like, “We want to turn on Copilot but we’re concerned about the results” or “We have turned on Copilot, and we’re getting inaccurate information or conflicting information.”

Some things have not changed in the type of work that we do. And that is, yes, we can now use LLMs to extract information, to chat with our information, but that information needs to be prepared.

What we’re seeing is a lot of people want to use it within their unstructured business information. And for that, while you don’t require necessarily the same depth of structure that we needed before – with today’s automated systems, CCMS’s and such – you do still need some structure, and you need to curate that information.

The information that lives in most of our companies, the unstructured information, is not curated. That means you have a lot of conflicting information. So the first thing is just accuracy: curate your information. After that, LLMs still need context. While they’re getting really, really good at understanding natural human language, they still need context.

Natural human language is often very ambiguous. If you’re going to apply GenAI large language models to your information, to converse with it or to extract deliverables from it, it’s really imperative that you get accuracy.

What tools or processes are you typically using to prepare this content? What is the recipe that you find works best?

Well, the recipe is going to depend on the use case. And when we think about recipes, there is no one size fits all. For example, there’s no one general purpose LLM that is appropriate for every use case, but there are basic things within that recipe that you can do to get your content AI ready.

And certainly a taxonomy would be one of the first places we start, because from a taxonomy, you can then move to metadata and knowledge graphs. But first you need to know what that term base is that the LLM needs to be able to understand.
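To make that idea of a term base concrete, here is a minimal sketch, assuming Python and the rdflib library, of how a handful of terms might be captured as a SKOS taxonomy. The concepts and URIs are purely illustrative; from a structure like this you can derive metadata and grow a knowledge graph.

```python
# A minimal sketch, assuming rdflib: a tiny term base expressed as a SKOS
# taxonomy. Concepts, labels, and the namespace URI are illustrative only.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.com/taxonomy/")  # placeholder namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

# A broader concept and two narrower concepts the LLM should recognize.
g.add((EX.ConversationalAI, RDF.type, SKOS.Concept))
g.add((EX.ConversationalAI, SKOS.prefLabel, Literal("Conversational AI", lang="en")))

g.add((EX.Chatbot, RDF.type, SKOS.Concept))
g.add((EX.Chatbot, SKOS.prefLabel, Literal("Chatbot", lang="en")))
g.add((EX.Chatbot, SKOS.broader, EX.ConversationalAI))

g.add((EX.LLM, RDF.type, SKOS.Concept))
g.add((EX.LLM, SKOS.prefLabel, Literal("Large language model", lang="en")))
g.add((EX.LLM, SKOS.altLabel, Literal("LLM", lang="en")))
g.add((EX.LLM, SKOS.broader, EX.ConversationalAI))

print(g.serialize(format="turtle"))
```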

There are some other things. Again, depending on the use case and whether you’re coming from already componentized and structured documents or information, or whether you’re coming from unstructured documents, that recipe is going to look different.

In a lot of cases, you need to look at your semantic chunking. For example, for many years in the tech comm space, we’ve been using topic-based authoring, or component authoring, as we call it. And some of that may already have you prepared for using an LLM to chat with your content. But in some cases, you may need to even examine those chunking structures, because depending on the LLM that you’re using, there are going to be certain parameters or limitations around tokens and things like that. It gets quite technical here.
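As an illustration of that chunking step, here is a minimal sketch assuming Python and the tiktoken library for token counting. The token budget and the paragraph-based split are placeholders; a real pipeline would also respect topic and component boundaries from the authoring system.

```python
# A minimal sketch of token-aware chunking, assuming tiktoken for counting
# tokens. Groups paragraphs into chunks that stay under a model's budget.
import tiktoken

def chunk_by_tokens(paragraphs, max_tokens=500, encoding_name="cl100k_base"):
    """Group paragraphs into chunks that fit within a token budget."""
    enc = tiktoken.get_encoding(encoding_name)
    chunks, current, current_tokens = [], [], 0

    for para in paragraphs:
        n = len(enc.encode(para))
        # Start a new chunk if adding this paragraph would exceed the budget.
        if current and current_tokens + n > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += n

    if current:
        chunks.append("\n\n".join(current))
    return chunks
```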

You said that you were playing around with OpenAI before this big hype. Has your opinion changed or evolved, whether negatively or positively, with regard to this whole Generative AI boom?

Yeah, well, I think because I was already working with, let’s call them AI models, which we then referred to as natural language processing, I was already aware of where natural language capabilities stood in terms of AI and technology. So for me, there was no big surprise when we started looking at generative models.

What surprised me is how soon it came about and how good they were when the commercial models were first released. But it was better at some things than others. So again, you always go back to the use case.

ChatGPT was released as a commercial model. We heard a lot of criticism about it, but I was still amazed at how good it was. 

And then what amazed me even more was how much it improved over the course of one year. Because, you know, my colleagues and I were using it consistently. In fact, we built some proofs of concept internally using not just ChatGPT, but some open source models.

Our first concern was with security and privacy. So we deployed some models locally to test.

When we talk about Graph RAG, we know that you can deploy them in private, secure environments. But when ChatGPT came out, everyone was thinking about this public model, and we were saying you can deploy these things privately, you can keep them private, and you can have more control over them.
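As an illustration of that kind of private setup, here is a minimal sketch assuming a locally hosted, OpenAI-compatible inference server (for example vLLM or Ollama) and the openai Python client. The URL and model name are placeholders, not a specific recommended stack.

```python
# A minimal sketch, assuming a privately hosted, OpenAI-compatible endpoint
# running on infrastructure you already trust. URL and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your private inference server
    api_key="not-needed-for-local",       # many local servers ignore the key
)

response = client.chat.completions.create(
    model="local-llm",  # whichever model your private server exposes
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": "What does AI readiness mean for our content?"},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```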

So we were playing with it for the purpose of looking at security and privacy. But while we were doing that, we were seeing these rapid improvements in the technology itself.

So over the course of one year, what surprised us was how rapidly it improved. So where are we today? You have to choose the right models, but you also have to prepare your information. You have to make sure that your content and your data are ready for GenAI.

I like this idea of AI readiness because I think people don’t necessarily know that’s such an important step, because your application is only as good as the stuff you feed it with.

I’d definitely liken it to what’s happened with using LLMs for search, like in Bing and Google. You’ll hear criticisms that it’s not always accurate. Why? Because it’s pulling information from the web.

And just like you need to be very cognizant when you’re searching the web of what’s true and what’s not true, an LLM is only as good as the information that’s fed to it. It’s all based on the source. So if our information at the source is accurate, disambiguated, and it has some context, it’s already better off.

Because remember, even though these models “understand” natural language, they still are computer models. They’re not humans.

So we have to provide some context, and we can do that through semantically enriching that information, which is something you understand very, very well at the Semantic Web Company. It’s really the focus of what you do.

If you were to design your perfect Conversational AI platform, what features are you looking for? Is it like recommendations for more things to read? Is it a chatbot? What do you think makes one platform stand out from another?

Well, the first thing, regardless of what type of new technology you’re looking for, is to do what we call a content strategy, to do discovery, to do analysis.

And before you start doing your planning, before you make these decisions about any type of platform technology, you need to understand what it is that you want to do with your information in order to turn it into a business asset.

Start with who you are as a business and where you want to go as a business. What are your business goals? So that’s primarily for me, the first thing. When you’re looking at actual platforms and get to the technology phase, you’ve made those early decisions and now you know what you need to look for.

Obviously, you need to look for security. This is first and foremost important. You need to look for secure platforms. With clients, we suggest that they look for platforms that are what we call private. This means business private: they’re deployed on servers that the business already trusts.

Another thing would be ensuring that the platform that you’re using meets your use case requirements. For example, there’s one platform that we had been using, and it’s secure, we trust it.

But the language models in it were limited. Those models were fine for what we were doing six months ago, but now we want to do more extraction, and they are not the best language models for extracting information, so now we need to look at a different platform.

So what are the language models that are used? As I said, one size does not fit all.

I would also probably look at the company itself and the reputation of the company, because in this particular technology, I think reputation is extremely important because people are really concerned about the integrity of this technology. And so you need a vendor with integrity.

What do you look for in terms of technology integrity?

We want to know what kind of data they’ve been trained on and such. If you’re working with a third party platform, I think that you need to look at the integrity of that company as well.

How long, for example, have they been in this space? Do they have a track record, a history of being a company that provides solid, trustworthy products? Trustworthiness is something that we’re hearing thrown around a lot in the AI space.

So if you’re looking at a third party platform, PoolParty is a company that’s been in this space for a long time and has already been producing trustworthy products. Because this is such an important issue, I would be looking for companies that have that type of integrity.

Finally, what is some advice you would give to people looking at applications, maybe if they're new to this whole Conversational AI space?

I’m particularly interested in the extraction features for our industry because using LLMs to extract can lead us to actually make our information more AI ready. This is an interesting loop; we can actually use LLMs to help us get our information AI ready.

And we do that through extraction. For example, you can extract taxonomies, you can extract information for metadata or to build knowledge graphs. This removes a lot of the manual process.
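As a rough illustration of that extraction step, here is a minimal sketch assuming the openai Python client. The prompt, model name, and JSON shape are illustrative, and extracted terms would still need human review before they enter a taxonomy or knowledge graph.

```python
# A minimal sketch of using an LLM to extract candidate taxonomy terms from
# text. Model name, prompt wording, and output schema are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # or point base_url at a private, trusted endpoint

PROMPT = """Extract the key domain terms from the text below.
Return JSON: {{"terms": [{{"label": "...", "broader": "..."}}]}}

Text:
{text}"""

def extract_terms(text: str) -> list[dict]:
    """Ask the model for candidate terms and their broader concepts."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; choose per use case
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)["terms"]

# Candidate terms can then seed metadata or a knowledge graph, with human review.
terms = extract_terms("Component content management systems store reusable topics.")
print(terms)
```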

It’s actually making our jobs easier to get that information AI ready, as we can use the LLMs both to extract the information, to get it ready, and then use the LLMs to converse or chat with that information. But we still need human oversight over all of these things and we still need that expertise.

I’ll give you a final quote. I actually have a lot of respect for the Semantic Web Company team, because this team has been breaking ground around LLMs for quite some time.

I see this company as having been a leader in helping us to know how to use large language models, how to make them respond more accurately.

I have followed the company very closely. And what you’ve been doing, like with the ESG knowledge-hub.eco platform – I was very, very interested in looking at how it was actually building deeper knowledge graphs on top of existing knowledge graphs to get accurate information.

PoolParty has been a leader in this industry, and we’ve certainly been following very closely what you guys have been doing.

Want to dive deeper into the technologies mentioned in this interview? Check out our e-book about knowledge-hub.eco, our very own Conversational AI demo application!

Download the free eBook to learn more about our Conversational AI demo application.