The website has been translated to English with the help of Humans and AI


3 Experiments That Unlock the Power of ChatGPT

AI, AI & Emerging Technology Consulting, AI Readiness, Tech Services · 4 min read
[Image: A hand holds a smartphone to the viewer, showing a conversation between a user and a chatbot.]

Look around, and you’ve surely noticed a surge of interest in artificial intelligence that can process language more accurately and effectively than ever before. Yes, chatbots have improved by leaps and bounds since the days of Eliza, the early bot whose therapist persona cleverly masked its cognitive limits by reflecting user input with noncommittal replies. Today’s bots seem to truly understand users, and can even explain memes.

What’s supercharging these AIs are large language models (LLMs). LLMs are language prediction tools that can read, summarize and translate text by predicting upcoming words in a sentence, allowing them to generate new text that closely resembles human speech and writing. They’re adept at both writing and interpreting text, and that cognitive ability means they can do far more than just write the first draft of an email or summarize your meeting notes.
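To make "predicting upcoming words" concrete, here is a deliberately tiny sketch of next-word prediction using a bigram count model. Real LLMs use neural networks over vast corpora, not word counts; everything here (the corpus, the function names) is illustrative only.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which in a toy corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for word, nxt in zip(words, words[1:]):
            counts[word][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most often observed after `word`, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = [
    "the model predicts the next word",
    "the model generates text",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "model" — it follows "the" most often
```

Chaining such predictions, one word at a time, is how a language model "generates new text"; an LLM simply makes each prediction with far more context and far better statistics.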

ChatGPT, built by OpenAI, has gained incredible popularity thanks to its simple conversational interface and its ease of use. This accessibility has inspired multiple teams within Media.Monks to experiment with LLMs, and GPT in particular, to find better ways to work and create. The result is a series of prototyped innovations that demonstrate the ability of LLMs to aid in internal collaboration, streamline information gathering and self-service, and make highly technical metrics more accessible for everyone.

Enabling collaboration through multi-user experiences.

The Labs.Monks, our R&D team focused on technology and innovation, built a chatbot designed to streamline brainstorming and collaboration across teams. Charmingly named Brian (originally from an internal pun of BrAIn but renamed for simplicity), the GPT-powered bot integrates into Slack and serves as an intelligent, active participant in team channels. The idea for Brian came from the realization that most applications of ChatGPT are task-based, which inspired the team to consider other ways LLMs can support teams, like serving as a creative collaborator.

Brian has two modes. In facilitation mode, it keeps group brainstorms going by offering questions and providing summaries of the discussion. In contribution mode, Brian serves as another collaborator who thinks along with the team and adds to the discussion.
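One plausible way to implement such a mode switch is to swap the system prompt before each call to the model. The sketch below is a hypothetical reconstruction, not Brian's actual code; the prompts, mode names, and message shapes are assumptions.

```python
# Hypothetical two-mode prompt builder for a Slack brainstorm bot.
FACILITATION_PROMPT = (
    "You are a brainstorm facilitator. Ask open questions and "
    "periodically summarize the discussion so far."
)
CONTRIBUTION_PROMPT = (
    "You are a creative collaborator. Build on the team's ideas "
    "and add suggestions of your own."
)

def build_messages(mode, channel_history):
    """Assemble a chat-completion-style message list for the chosen mode."""
    system = FACILITATION_PROMPT if mode == "facilitation" else CONTRIBUTION_PROMPT
    messages = [{"role": "system", "content": system}]
    for author, text in channel_history:
        # Prefix each turn with the speaker's name so the model can
        # address individual participants, as the article describes.
        messages.append({"role": "user", "content": f"{author}: {text}"})
    return messages

msgs = build_messages("facilitation", [("Ana", "What if the can glows?")])
print(msgs[0]["content"])
```

The resulting list would then be sent to a chat-completion endpoint; keeping speaker names in each turn is what lets a multi-user bot reply to people individually rather than treating the channel as one anonymous voice.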

“During one of our tests, it was able to help us brainstorm a fictional brief on how to create an experiential activation for a soft drink brand catered to seniors with some interesting results! Though ultimately we ended up coming up with an idea ourselves, the input from Brian helped us get to other outcomes we might not have thought of otherwise,” says Angelica Ortiz, Senior Creative Technologist. Being able to field a discussion among a group of users (and even address individuals by name) separates Brian from other chatbots, which are typically limited to one-on-one conversations.

The team originally built Brian as an exercise to gain hands-on knowledge and experience with LLMs, the focus of their recent Labs Report. Now, the team is exploring how to roll it out as a tool for wide use by the Media.Monks team.


An alternative to fine-tuning GPT.

After seeing the potential of LLMs, many brands are exploring the idea of fine-tuning those models to better match their tone of voice or the kinds of content they create. Generally, fine-tuning an existing model can be cost-effective, removing the need to train a model, program a chatbot or write new content from scratch. But for some use cases, fine-tuning can be prohibitively expensive compared to another method of generating more brand-unique results: prompt engineering.

Our Tech Services practice developed a method of prompt engineering that makes it easy to build a GPT-powered chatbot that can answer questions based on content from a specific domain. The example they use is turning a company’s internal wiki into an assistant that saves employees the trouble of searching and sifting through long documents to find the information they need. The key technology behind this method is OpenAI’s embeddings, a feature that matches user queries with answers from the most relevant source content.

Embeddings unlock some incredible features. Users can ask questions and receive responses in their language of choice, regardless of the source content’s original language, meaning there’s no need to localize. Embeddings also don’t rely on exact word matches; if someone asks our hypothetical company wiki bot about “vacation time” policies, the bot will know to pull information from a document about “paid time off.” Adding more content to the chatbot is also easy: a simple webhook is all it takes for the bot to answer questions about new content as it’s published.
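The mechanics behind that "vacation time" → "paid time off" match can be sketched in a few lines: every document and every query is turned into a vector, and retrieval is just finding the document vector closest to the query vector. The toy three-dimensional vectors below stand in for real embeddings (which an embeddings API would return with hundreds or thousands of dimensions); the numbers are made up for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query_vec, doc_vecs):
    """Return the index of the document embedding closest to the query."""
    scores = [cosine(query_vec, d) for d in doc_vecs]
    return scores.index(max(scores))

# Toy embeddings: docs[0] ~ "paid time off", docs[1] ~ "expense policy".
docs = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.2]]
query = [0.85, 0.15, 0.05]  # ~ "vacation time"
print(best_match(query, docs))  # 0 — the "paid time off" document wins
```

Because the vectors encode meaning rather than spelling, related phrases land near each other regardless of wording or language, which is exactly why exact word matches and localization aren't required. The retrieved passage is then handed to GPT as context for the final answer.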

If you want to learn more about how to use embeddings to prompt engineer a bot of your own, check out the full writeup. You’ll also see a video demo that walks you through how embeddings achieve each of the outcomes above.

Digesting information at speed.

Sifting through data can be overwhelming—especially if numbers aren’t your forte. That’s why our enterprise automation team developed Turing.Monk, a chatbot affectionately named after Alan Turing, the 20th-century computer scientist behind the Turing test, a measure of a computer’s ability to exhibit intelligent behavior. Turing.Monk helps teams quickly find the answers they need about their campaigns by answering queries in three formats: lists, summaries and graphs.

The bot functions a lot like a marketing assistant, helping marketers draw conclusions about a campaign’s performance. Want to see how the media cost has changed on a week-by-week basis? Just ask Turing.Monk to “provide a written summary of how the media cost is changing” for the campaign in question. It’s that easy.
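A first step for a bot like this is deciding which of the three answer formats a request calls for. The keyword rules below are a hypothetical sketch of that routing step, not Turing.Monk's implementation; a production bot would more likely let the LLM itself classify the request.

```python
# Hypothetical router from a natural-language query to an answer format.
def pick_format(query):
    """Choose 'list', 'summary', or 'graph' based on cue words in the query."""
    q = query.lower()
    if any(word in q for word in ("chart", "plot", "graph")):
        return "graph"
    if any(word in q for word in ("summary", "summarize", "written")):
        return "summary"
    return "list"  # default: enumerate the matching rows

print(pick_format("provide a written summary of how the media cost is changing"))
# → "summary"
```

Once the format is chosen, the remaining work is translating the request into a query against the campaign data and rendering the result in that shape.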

The ability to ask questions in natural language puts analytics and data science at the fingertips of those on the team who might not know SQL or Python. “It’s early in development, but today an account manager can keep prompting and fine-tuning the prompt to get the outcome they desire,” says Michael Balarezo, Global VP of Enterprise Automation. “We’re now working on improving the analytical capability of the tool, leveraging the power of LLMs to understand the nuance of the ask, and translate that into more complex insight generation.”

More potential has yet to be unlocked.

While much has been said about LLMs’ abilities to generate text, their skill in interpreting queries and surfacing helpful, contextual information—all in a conversational format—will make them incredible tools in the workplace and beyond. From facilitating creative collaboration, to making information easily accessible for all, to giving people superpowers by putting digestible data at their fingertips, the potential for LLMs like GPT is great—and you can bet we’ll continue to experiment and find even more applications and use cases to benefit our team and the brands we work with.

