Meet MonkGPT—How Building Your Own AI Tools Helps Protect Your Brand


Written by Michael Balarezo, Global VP, Enterprise Automation


What I’ve learned from months of experimenting with AI? These tools have proven to be a superpower for our talent, but it’s up to us to provide them with the proper cape—after all, our main concern is that they have a safe flight while tackling today’s challenges and meeting the needs of our clients. 

At Media.Monks, we’re always on the lookout for ways to integrate the best AI technology into our business. We do this not just because we know AI is (and will continue to be) highly disruptive, but also because we know our tech-savvy and ceaselessly curious people are bound to experiment with exciting new tools—and we want to make sure this happens in the most secure way possible. We all remember the high-profile blunders of these past months, like private code being leaked into the public domain, so it comes as no surprise that our Legal and InfoSec teams have been pumping the brakes a bit on what tech we can adopt, taking the safety of our brand and those of our partners into consideration.

So, when OpenAI—the force behind ChatGPT—updated their terms of service so that, by default, data submitted through the API is not used to train the model, we were presented with a huge opportunity. Naturally, we seized it with both hands and decided to build our own internal version of the popular tool on top of OpenAI’s API: MonkGPT, which allows our teams to harness the power of the platform while layering in our own security and privacy checks. Why? So that our talent can use a tool that’s both business-specific and much safer, with the aim of mitigating risks like data leaks.
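To make this concrete, here’s a minimal sketch of what such a wrapper can look like: an internal function that runs a company-side check before forwarding a prompt to OpenAI’s Chat Completions API. The names (monkgpt_chat, passes_internal_checks) and the blocking rule are illustrative assumptions, not our actual implementation.

```python
# A minimal sketch, not Media.Monks' real code: an internal wrapper that sits
# between employees and OpenAI's API so company-side checks run first.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def passes_internal_checks(prompt: str) -> bool:
    """Placeholder for the security and privacy checks layered in front of the API."""
    return "CONFIDENTIAL" not in prompt.upper()  # illustrative rule only


def monkgpt_chat(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Forward a vetted prompt to OpenAI and return the model's reply."""
    if not passes_internal_checks(prompt):
        raise ValueError("Prompt blocked by internal policy checks.")
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because every request flows through a single wrapper like this, checks, logging and later features only need to be added in one place.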

You can’t put brand protection at risk.

Ever since generative AI sprang onto the scene, we’ve been experimenting with these tools and exploring just how far their possibilities stretch. As it turns out, AI tools are incredible, but they’re not without limitations. Besides not being tailored to specific business needs, public AI platforms may use proprietary algorithms or models, which can raise concerns about intellectual property rights and ownership. On top of that, these public tools typically collect data, and how that data is used may not be transparent and may fail to meet an organization’s privacy policies and security measures.

Brand risk is what we’re most worried about, as our top priority is to protect both our intellectual property and our employee and customer data. Interestingly, a key solution is to build the tools yourself. Besides, there’s no better way to truly understand the capabilities of a technology than by rolling up your sleeves and getting your hands dirty.

Breaking deployment records, despite hurdles.  

In creating MonkGPT, there was no need to reinvent the wheel. Sure, we can—and do—train our own LLMs, but with the rapid success of ChatGPT, we decided to leverage OpenAI’s API and popular open source libraries vetted by our engineers to bring this generative AI functionality into our business quickly and safely.

In fact, the main hurdle we had to overcome was internal. Our Legal and InfoSec teams are critical of AI tooling terms of service (ToS), especially when it comes to how data is managed, owned and stored. So, we needed to get alignment with them on data risk and on the updates to OpenAI’s ToS—which had been changed for API users specifically so that, by default, data passed through OpenAI’s service is not used to train their models.

Though OpenAI stores the data that’s passed through the API for 30 days for audit purposes (after which it’s deleted), its ToS states that this data is not used to train its models. Coupling this with our internal best practices documentation, which all our people have access to and are urged to review before using MonkGPT, we make sure to minimize any potential for sensitive data to persist in OpenAI’s models.

As I’ve seen time and time again, ain’t no hurdle high enough to keep us from turning our ideas into reality—and into useful tools for our talent. Within just 35 days we were able to deploy MonkGPT, scale it out across the company, and launch it at our global All Hands meeting. Talk about faster, better and cheaper: this project is our motto manifested. Of course, we didn’t stop there.

Baking in benefits for our workforce.   

Right now, we have our own interface and application stack, which means we can start to build our own tooling and functionality leveraging all sorts of generative AI tech. The intention is to enhance the user experience while catering to the needs of our use cases. For example, we’re currently adding features like Data Loss Prevention to further increase security and privacy. This means stripping out sensitive information before it can ever be sent into OpenAI’s ecosystem, giving us a level of control over the data that we wouldn’t have had going straight through ChatGPT’s service.
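As a rough illustration of what such a Data Loss Prevention step can look like, the sketch below scrubs a few obvious patterns (emails, phone numbers, card numbers) from a prompt before it would be forwarded to the API. The patterns and the redact_sensitive_data name are assumptions made for the example, not the feature itself.

```python
# Illustrative Data Loss Prevention step: redact sensitive-looking patterns from
# a prompt before it leaves our environment. Real DLP rules would be broader.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact_sensitive_data(prompt: str) -> str:
    """Replace each match of a sensitive pattern with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt


# The redacted prompt, not the original, is what would be passed to the API.
safe_prompt = redact_sensitive_data("Contact jane.doe@example.com about the renewal.")
```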

Another exciting feature we’re developing revolves around prompt discovery and prompt sharing. One of the main challenges in leveraging prompt-based LLM software is figuring out the best ways to ask for something. That’s why we’re working on a feature—which ChatGPT doesn’t have yet—that allows users to explore the most useful prompts across business units. Say you’re a copywriter: the tool could show you the most effective prompts that other copywriters use or like. By building this discoverability into the tool, our people won’t have to spin their wheels as much to get to the same destination.
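As a hypothetical sketch of how that prompt library could be modelled, the example below stores prompts with the business unit they belong to and a like count, then surfaces the most-liked ones per unit. Names like PromptLibrary and top_prompts are placeholders for illustration only.

```python
# Illustrative model of a shared prompt library, keyed by business unit.
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Prompt:
    text: str
    business_unit: str
    likes: int = 0


@dataclass
class PromptLibrary:
    _by_unit: dict = field(default_factory=lambda: defaultdict(list))

    def add(self, prompt: Prompt) -> None:
        self._by_unit[prompt.business_unit].append(prompt)

    def top_prompts(self, business_unit: str, limit: int = 5) -> list:
        """Return the most-liked prompts shared within one business unit."""
        return sorted(self._by_unit[business_unit], key=lambda p: p.likes, reverse=True)[:limit]


library = PromptLibrary()
library.add(Prompt("Rewrite this product copy in three tones of voice.", "copywriting", likes=12))
library.add(Prompt("Summarise this brief into five bullet points.", "copywriting", likes=7))
print([p.text for p in library.top_prompts("copywriting")])  # most-liked first
```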

In the same vein, we’re also training LLMs for specific purposes. For instance, we can train a model for our legal counsels that uncovers all the red flags in a contract based on both legal language and what they’ve seen in similar contracts. Imagine the time and effort you can save by heading over to MonkGPT and, depending on your business unit, selecting the model you want to interact with—because that model has been specifically trained for your use cases.
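Here’s a hedged sketch of that routing idea: each business unit maps to the model trained for its use cases, and the wrapper from the earlier sketch simply receives that model identifier. The model IDs below are placeholders, not real deployments.

```python
# Illustrative routing table from business unit to a purpose-trained model.
MODELS_BY_UNIT = {
    "legal": "ft:gpt-3.5-turbo:media-monks:contract-review",    # hypothetical fine-tune
    "copywriting": "ft:gpt-3.5-turbo:media-monks:brand-voice",  # hypothetical fine-tune
    "default": "gpt-3.5-turbo",
}


def model_for_unit(business_unit: str) -> str:
    """Pick the purpose-trained model for a business unit, falling back to the default."""
    return MODELS_BY_UNIT.get(business_unit, MODELS_BY_UNIT["default"])


# e.g. a legal counsel asking about red flags would hit the contract-review model:
# reply = monkgpt_chat("List the red flags in this clause: ...", model=model_for_unit("legal"))
```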

It’s only a matter of time before we’re all powered by AI. 

All these efforts feed into our overall AI offering. In developing new features, we’re not just advancing our understanding of LLMs and generative AI, but also expanding our experience in taking these tools to the next level. It’s all about asking ourselves, “What challenges do our business units face and how can AI help?” with the goal of providing our talent with the right superpowers.

Monk Thoughts (Michael Balarezo): “The real opportunities lie in further training AI models and exploring new use cases.”

It goes without saying that my team and I apply this same kind of thinking to the work we do for all our clients. Our AI mission moves well beyond our own organization as we want to make sure the brands we partner with reap the benefits of our trial and error, too. This is because we know with absolute certainty that sooner or later every brand is going to have their very own models that know their business from the inside out, just like MonkGPT. If you’re not already embracing this inevitability now, then I’m sure you will soon. Whether getting there takes just a bit of consultation or full end-to-end support, my team and I have the tools and experience to customize the perfect cape for you.
