
How AI is Silently Conquering the Business World

AI chatbots have grown vastly in popularity over the last year, and unlike other recent online trends, they are here to stay. While trends such as crypto and NFTs had very specialized use cases, AI chatbots are versatile enough to be useful in almost every industry. Right now, many companies face a choice: adopt AI technology, or risk falling behind those who do. However, this opportunity does not come from the type of chatbot familiar to the general public, such as ChatGPT. It comes from a different and highly valuable type, known as a custom knowledge chatbot. This article will explore how custom knowledge chatbots work, why they are becoming easier to access, and how they are being used to give companies an unfair edge over their competitors.

First, it is important to understand how all AI chatbots work. An AI chatbot is built from two parts: a large language model (LLM) API, and a prompt from the user. An LLM API lets you use a large company’s model (such as OpenAI’s GPT-3.5 or GPT-4) in your own chatbot, allowing it to provide the user with information and to behave in a way that you specify. The model combines this with the user’s prompt, telling it what to do, to generate a unique response, and the exchange repeats for as long as the conversation continues. The main advantage of AI technology over older chatbots is that the user is not restricted to closed yes/no questions, so conversation is freer and more natural.
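The loop described above can be sketched in a few lines. Here, `call_llm` is a hypothetical stand-in for a real LLM API call (such as a chat completions endpoint) so the example stays self-contained; the structure of the loop is what matters.

```python
def call_llm(messages):
    # Hypothetical stand-in for a real LLM API call (e.g. a chat
    # completions endpoint). Here it simply echoes the last prompt.
    return f"Echoing: {messages[-1]['content']}"

def chat_turn(history, user_prompt):
    """Run one turn of the chatbot loop: append the user's prompt,
    call the model, and append its reply to the shared history."""
    history.append({"role": "user", "content": user_prompt})
    reply = call_llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# In a real chatbot, chat_turn runs repeatedly for as long as the
# conversation continues, each prompt producing a unique response.
history = [{"role": "system", "content": "You are a helpful assistant."}]
print(chat_turn(history, "Hello!"))
```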

Custom knowledge chatbots include a third element - a knowledge base (so they are a combination of an LLM API + a prompt + a knowledge base). This is a database of text and numeric data chunks which are stored by similarity. The chatbot only answers questions using information from the knowledge base, so its answers are very specific to whatever the knowledge base covers. It is important to note that the chatbot appears to know all of this information, but in reality it does not, because the LLM has a token limit: a fixed upper bound on how much text can be sent to it in a single request, which makes it impossible for the AI to ‘look at’ all the data at once. To get around this, the chatbot uses a retrieval system that fetches only the information most relevant to the user’s query. All the information in the knowledge base is split into chunks of a reasonable size (such as one paragraph of text), and these chunks are stored in a vector database, so that the most similar chunks sit nearest to each other.
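The chunking step itself is straightforward. A minimal sketch, splitting text on blank lines so each paragraph becomes one chunk (real systems often use more sophisticated splitters, sometimes with overlap between chunks):

```python
def split_into_chunks(text, max_chars=500):
    """Split text on blank lines so each paragraph is one chunk,
    merging short paragraphs together until a size limit is reached."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)  # current chunk is full; start a new one
            current = para
        else:
            current = f"{current}\n\n{para}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be converted into a vector and stored in the vector database.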

The retrieval system starts with the user’s query, which is sent to the vector database. The system retrieves the chunks closest to the query, that is, the most similar in meaning. These chunks and the user’s original query are combined into a prompt, so the AI knows how to use the information provided to answer the question. Finally, the AI generates an answer and sends it back to the user.
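A minimal sketch of that retrieval step. Real systems convert text to vectors with a learned embedding model; here a toy bag-of-words count stands in for the embedding so the example is self-contained, but the cosine-similarity ranking and the prompt assembly illustrate the same idea:

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words count.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, top_k=2):
    """Return the chunks most similar in meaning to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine_similarity(q, embed(c)),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query, chunks):
    """Combine the most relevant chunks and the query into one prompt."""
    context = "\n\n".join(retrieve(query, chunks))
    return (f"Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

chunks = [
    "Our support line is open 9am to 5pm on weekdays.",
    "Refunds are processed within 14 days of a return.",
    "The company was founded in 2010 in Manchester.",
]
print(build_prompt("How long do refunds take?", chunks))
```

The prompt the AI finally receives contains only the retrieved context, which is how the chatbot stays within the token limit while still appearing to know the whole knowledge base.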

The prompt that the AI receives is extremely important, as the prompt's quality determines the output's quality. A significant portion of chatbot development is prompt engineering, which is writing a prompt that tells the AI how it should process the information to generate a high-quality response. Prompt engineering is a helpful skill to have if you want ChatGPT’s answers to help you in personal endeavors, and it is also a high-income skill. The basic idea of prompt engineering is that the more specific and detailed the prompt, the more likely you will get the answer that you are looking for. Some ways to make your prompt more detailed are:

  • Start with a message telling the chatbot what it is a specialist in (e.g. You are a helpful assistant tasked with answering questions about…)

  • Provide any context specific to you or what you need help with.

  • Give one or two examples of a good answer.

  • Tell the chatbot how to format its answer.
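Putting those four tips together, a prompt might look like the following (the product and policy details here are purely illustrative):

```python
# An example prompt applying all four tips: specialist role, context,
# a worked example of a good answer, and formatting instructions.
prompt = """You are a helpful assistant tasked with answering questions \
about our returns policy.

Context: we sell handmade furniture and accept returns within 30 days \
of purchase.

Example of a good answer:
Q: Can I return a chair after 2 weeks?
A: Yes. Returns are accepted within 30 days, so a chair bought 2 weeks \
ago can still be returned.

Format your answer as a short, friendly paragraph of no more than \
three sentences.

Q: Can I return a table I bought 40 days ago?"""
print(prompt)
```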

The prompt given to a custom knowledge chatbot follows a specific format that puts together all the components from the retrieval system:

  1. A system message - the instructions you give to the chatbot for the entire chat experience.

  2. The most relevant chunks.

  3. (Optional) Chat history - a small number of recent messages so that the chatbot appears to remember what happened previously in the conversation.

  4. The user’s query.

So, the final prompt stitches these four components together, in that order.
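As a sketch, the four components above can be assembled into a single message list; the exact layout varies between implementations, but the order is as described:

```python
def assemble_prompt(system_message, chunks, chat_history, user_query):
    """Combine the four components in order: system message, most
    relevant chunks, optional chat history, then the user's query."""
    context = "\n\n".join(chunks)
    messages = [{
        "role": "system",
        "content": f"{system_message}\n\nUse only this context:\n{context}",
    }]
    messages.extend(chat_history)  # optional: the last few turns only
    messages.append({"role": "user", "content": user_query})
    return messages

messages = assemble_prompt(
    "You are a support assistant for Acme Ltd.",  # hypothetical client
    ["Refunds are processed within 14 days."],
    [{"role": "user", "content": "Hi"},
     {"role": "assistant", "content": "Hello! How can I help?"}],
    "How long do refunds take?",
)
```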

Another powerful tool for expanding a chatbot’s functionality is intent classification, which takes the user’s input and classifies it based on what they want to achieve. The AI analyses the query to trigger different actions, giving companies greater control over the direction of a conversation between the chatbot and the user. If a customer asks for anything outside the intent classifier’s predefined options, the chatbot provides a default response using the knowledge base.
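A minimal sketch of intent classification. Production systems usually ask the LLM itself (or a trained classifier) to pick the intent; a simple keyword lookup is enough to show the control flow, including the fallback to the knowledge base. The intents and keywords here are made up for illustration.

```python
# Hypothetical intents a company might define, each with trigger keywords.
INTENTS = {
    "refund": ["refund", "money back"],
    "book_demo": ["demo", "trial", "appointment"],
}

def classify_intent(query):
    """Map the user's query to an intent, or None for the default path."""
    text = query.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    return None

def route(query):
    """Trigger a different action depending on the classified intent."""
    intent = classify_intent(query)
    if intent == "refund":
        return "Routing to the refunds workflow."
    if intent == "book_demo":
        return "Collecting contact details to book a demo."
    # Anything else falls back to answering from the knowledge base.
    return "Answering from the knowledge base."
```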


It is becoming increasingly common for companies to use custom knowledge chatbots, because their natural-language interface is more convenient than a point-and-click website, and because all the information they provide is stored in one place, making it more accurate, more coherent and easier to update. Usually, they are provided by other companies that develop a chatbot specifically for a client, deploy it onto a website or a communication channel such as WhatsApp or Telegram, and sell this as a service. The process is also becoming accessible to non-programmers through no-code platforms, such as OpenAI GPTs, that provide the tools for creating and deploying your own custom knowledge chatbot. Since chatbots can save companies money, time and effort by automating simple tasks, the value they provide is often greater than their price.

There are already many ways that AI technology can be used to run systems and processes in a business. For example, a customer service chatbot can be added to the home page of a website to provide instant responses to frequently asked questions, saving the time of staffing a contact phone line or email inbox. For some companies, this chatbot can also be used for lead generation, by guiding a user towards a follow-up event and collecting their contact information. Another example is a staff training chatbot, which can automate the process of onboarding and training new employees. This frees up a lot of time for managers, gives staff 24/7 access to information, and makes sure training is standardized across the business. Finally, chatbots can be used for data analysis, providing insights and risk assessments and assisting in decision-making. These are just a few examples - there are many other ways AI can be used to solve problems in a business.

I hope this has shown you the transformative effect AI can have on the way that a company operates. With any new technology that is this powerful and is developing this rapidly, questions will arise about the ethics of using it. Much of the value provided by AI chatbots comes from automating simple tasks done by existing employees. This can either be very helpful to them, by freeing up time for more meaningful work, or it can be harmful if it replaces their work entirely. In addition, AI may cause privacy concerns, since much of the company’s data, including internal data, could potentially be accessed by the service provider, the chatbot-building tool, and the provider of the LLM API. Despite this, their popularity continues to grow, because as companies get larger, cost savings become more of a priority. Since AI is changing so rapidly and unpredictably, we can only speculate about what else it will bring, but its rapid growth is unlikely to slow down anytime soon.