Robotic Dog

Just like dogs, large language models can be trained to be more useful, obedient and relevant. Many people have played with ChatGPT, with interestingly mixed results. Whilst clearly impressive, we hear two main themes of feedback. Firstly, it’s difficult to get it to come up with responses that would be better than writing them yourself, and secondly, responses tend to be very generic and not very detailed.

There are two key reasons for this: one is that writing prompts (the instructions you give to ChatGPT) is more complex than you’d imagine, and the other is that it has been trained on generic data from the internet, so the responses are bound to be generic.
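To illustrate why prompt writing is harder than it looks, here is a hypothetical example (the brand and product are invented for illustration) contrasting a vague prompt with one that gives the model a role, concrete context and an explicit output format:

```python
# Illustrative only: two prompts asking for broadly the same thing.
# The second applies common prompt-writing techniques: assign a role,
# supply context, state the task, and constrain the output.

vague_prompt = "Write something about our new product."

structured_prompt = (
    "You are a copywriter for a UK kitchenware brand.\n"            # role
    "Product: Model X kettle, boils a litre of water in 90 seconds.\n"  # context
    "Task: write a 50-word product description for the website.\n"  # task
    "Tone: friendly and practical. Format: one paragraph."          # constraints
)

print(structured_prompt)
```

The structured version tends to produce far more specific, usable responses, because the model is no longer guessing at audience, purpose and length.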

Thankfully, there are solutions to both of these challenges.

Firstly, TrueBird offers an app, called Alchemy Insights, that not only helps you write better prompts and educates you on the various techniques to improve the responses, but also allows you to rate the quality of those responses. Over time, you and your organisation can build a library of the most effective prompts and share best practice, learning together.

Secondly, large language models, of which ChatGPT is just one, can be trained on other data, including your website, industry manuals, internal process descriptions, your favourite literature, competitor brochures and pretty much anything else. This means that the responses are far more specific and contextualised for your needs. This applies not just to ChatGPT and its peers, but also to, for example, chatbots that act as the front of house for your customers, or advanced search engines.
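One common way to achieve this grounding is to retrieve the most relevant snippet from your own material and place it in the prompt before the user's question (often called retrieval-augmented generation). The sketch below is illustrative only: the document snippets are invented, and the simple word-overlap scoring stands in for the more sophisticated retrieval a real system would use.

```python
import re

# A minimal sketch of grounding a language model prompt in your own
# documents. The snippets and the word-overlap scoring are illustrative
# assumptions, not part of any real product or API.

documents = [
    "Our returns policy allows refunds within 30 days of purchase.",
    "The Model X kettle boils one litre of water in 90 seconds.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

def retrieve(question, docs, top_n=1):
    """Rank documents by how many words they share with the question."""
    q_words = set(re.findall(r"\w+", question.lower()))
    def score(doc):
        return len(q_words & set(re.findall(r"\w+", doc.lower())))
    return sorted(docs, key=score, reverse=True)[:top_n]

def build_prompt(question, docs):
    """Combine the most relevant snippet with the user's question."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is your returns policy?", documents))
```

Because the model is instructed to answer from the supplied context rather than its generic training data, the response reflects your organisation's own material.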