By Andreea Pricopi

How we've Built our Customer Service Chatbot

Updated: Jun 21

ChatGPT has taken the world by storm by pushing the boundaries of conversational AI. As businesses started to realize that an intelligent chatbot can be an invaluable asset, we got curious about the nitty-gritty of implementing one. In this blog post, we go over the key aspects of how we implemented a customer service chatbot for our website using OpenAI’s API, Google Cloud Functions, and Dialogflow.


First, we wanted to leverage the language understanding and generation capabilities of OpenAI models. However, we still needed to “teach” a model to specifically answer questions about our business. This process of adjusting an already trained model for a specific use case is known as fine-tuning. Simply put, we fine-tuned an OpenAI model.

As we were eager to implement it before the ChatGPT API was released, we used GPT-3's Davinci model. Now that we have also used the ChatGPT API across different projects, we can look back and appreciate how seamless the transition was, thanks to that earlier hands-on experience.


For the fine-tuning, we prepared a dataset from the most frequently asked questions we have gathered over time, pairing each one with a specific answer. In addition, we implemented the logic necessary to ensure that the chatbot does not engage in conversations about topics unrelated to our business. Handling inquiries only within a specific domain matters both for brand value alignment and for efficient resource management, such as keeping the cost of API usage down.
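To illustrate the dataset step, here is a minimal sketch of preparing FAQ pairs in the prompt/completion JSONL format that the GPT-3 (Davinci) fine-tuning API expects. The FAQ entries, file name, separator, and stop token below are illustrative assumptions, not our actual data:

```python
import json

# Hypothetical FAQ pairs; the real dataset came from questions
# gathered from customers over time.
faqs = [
    ("What services do you offer?", "We build custom software and AI solutions."),
    ("Where is your team located?", "Our team works remotely across Europe."),
]

def build_finetune_dataset(pairs, path="faq_finetune.jsonl"):
    """Write FAQ pairs as JSONL records in the legacy prompt/completion
    format used for GPT-3 fine-tuning. A fixed separator marks the end
    of each prompt, and a stop token marks the end of each completion,
    following OpenAI's fine-tuning data guidelines for that API."""
    with open(path, "w", encoding="utf-8") as f:
        for question, answer in pairs:
            record = {
                "prompt": f"{question}\n\n###\n\n",  # separator ends the prompt
                "completion": f" {answer} END",      # leading space + stop token
            }
            f.write(json.dumps(record) + "\n")
    return path

build_finetune_dataset(faqs)
```

The resulting file can then be uploaded and referenced when creating the fine-tune job; the same separator and stop token must be reused at inference time.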


As the text processing and generation logic was lightweight and suitable for an event-driven application, we opted for a serverless solution for deployment. This approach allowed us, as developers, to focus on functionality while the cloud provider handled the infrastructure. Since we already had extensive experience with AWS Lambda, we took the opportunity to broaden our horizons and experiment with its Google counterpart, Google Cloud Functions.
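As a rough sketch of how such a function can look, the handler below follows the first-generation Python Cloud Functions signature (a Flask request in, a response out) and includes a simple keyword guard for off-topic questions. The keyword list, model name, and parameters are placeholders, and a real deployment would use a more robust guard:

```python
import json
import re

# Assumed business-domain keywords; the real guard could be broader
# or embedding-based rather than a hand-written list.
DOMAIN_KEYWORDS = {"service", "pricing", "chatbot", "project", "team", "contact"}

def is_on_topic(question: str) -> bool:
    """Cheap check that rejects questions unrelated to the business,
    avoiding unnecessary calls to the paid completion API."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    return bool(words & DOMAIN_KEYWORDS)

def answer_question(request):
    """HTTP-triggered Cloud Function entry point."""
    question = (request.get_json(silent=True) or {}).get("question", "")
    if not is_on_topic(question):
        return json.dumps(
            {"answer": "I can only help with questions about our services."}
        )

    import openai  # deferred import so the guard can be tested without credentials
    completion = openai.Completion.create(   # legacy GPT-3 completions API
        model="davinci:ft-your-org-2023-01-01",  # placeholder fine-tuned model id
        prompt=f"{question}\n\n###\n\n",         # same separator as in training
        max_tokens=150,
        stop=[" END"],                           # stop token from the dataset
    )
    return json.dumps({"answer": completion.choices[0].text.strip()})
```

Keeping the guard as a pure function makes it easy to unit-test locally, independently of the OpenAI client and the deployed environment.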


By this point, we had a deployed conversational model. However, we still needed a graphical user interface for the chat window. To reduce development time and make use of state-of-the-art technology, we decided to use Dialogflow, a cloud-based tool for building complex conversational AI agents. We took advantage of several of its features. We used one of its built-in integrations to get an embeddable dialog for our website and then customized its styling. We let it handle trivial pieces of conversation like "hello", "thank you", etc., for which using the GPT-3 model would be superfluous. Lastly, we employed Dialogflow to request and validate the contact details of customers who would like to be contacted by our team.
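The division of labor above can be sketched as a Dialogflow ES fulfillment webhook: small-talk intents are answered by Dialogflow itself, and only the fallback intent reaches the webhook, which forwards the question to the model. The `get_model_answer` helper is a hypothetical stand-in for the HTTP call to the deployed Cloud Function:

```python
def get_model_answer(question: str) -> str:
    """Placeholder for the HTTP request to the deployed Cloud Function
    that queries the fine-tuned model."""
    return f"(model answer for: {question})"

def dialogflow_webhook(request_json: dict) -> dict:
    """Handle a Dialogflow ES fulfillment request. The user's raw text
    arrives in queryResult.queryText; the reply goes back in the
    fulfillmentText field of the JSON response."""
    query = request_json.get("queryResult", {}).get("queryText", "")
    answer = get_model_answer(query)
    return {"fulfillmentText": answer}
```

Because contact-detail collection and small talk never reach this webhook, the GPT-3 model is only invoked for genuine, in-domain questions.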


With the fine-tuned model, Cloud Function, and Dialogflow agent coming together, we now have an intelligent chatbot up and running on our website. We are happy to invite you to interact with it and give us some feedback, and if you think your business could benefit from a similar chatbot, we would be glad to put our expertise to good use.
