Today, data security company Metomic launched Metomic for ChatGPT, technology that gives IT and security leaders full visibility into what sensitive data their staff are uploading to OpenAI's ChatGPT platform.
The easy-to-use browser plugin enables businesses to take full advantage of the generative AI solution without jeopardising their most sensitive data.
I spoke to Rich Vibert, CEO of Metomic, to find out more:
Last year, reports showed that the amount of sensitive data being uploaded to ChatGPT by employees had increased 60 percent between March and April, with 319 incidents per 100,000 employees identified between April 9 and April 15, 2023.
According to Vibert:
"Management at more and more companies, particularly at the CEO level, are waking up and thinking, what are my employees putting into these tools?
You just open up a browser and go to ChatGPT or any of these other tools to do your job better than your competitors, but then you accidentally put customer PII or company secrets or passwords into these third-party tools.
Not only does this raise privacy issues, but you could also be breaching your customer contracts or regulations such as GDPR."
Vibert was at pains to add that most security breaches involving ChatGPT and similar tools are non-malicious and broadly fall into two camps:
The first is the sharing of source code.
Last year, employees at Samsung's Korea-based semiconductor business were found to have plugged lines of confidential code into ChatGPT, effectively leaking corporate secrets that could be included in the chatbot's future responses to other people — including its competitors.
The other form of breach often involves confidential company information, like earnings and salaries, and personally identifiable information (PII), such as staff and customer phone numbers, addresses, and social security numbers.
This is especially the case with tools like Copilot and OpenAI's models, which businesses connect to their entire data set. Because the models are trained on confidential data, employees end up with access to highly confidential information they shouldn't be able to see.
"Then you give access to everyone in the company. So the company can use it to ask 'What was our revenue last year?' or something even more confidential."
Proactive rather than reactive
Because Metomic's ChatGPT integration sits within the browser itself, it identifies when an employee logs into OpenAI's web-based ChatGPT platform and scans the data being uploaded in real time.
Security teams can receive alerts if employees upload sensitive data, like customer PII, security credentials, and intellectual property. The browser extension comes equipped with 150 pre-built data classifiers to recognise common critical data risks.
Businesses can also create customised data classifiers to identify their most vulnerable information.
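To make that mechanism concrete, here is a minimal, hypothetical sketch in TypeScript of how a browser extension might check a prompt against pre-built and custom classifiers before submission. The classifier labels, patterns, and function names here are illustrative assumptions, not Metomic's actual implementation, which will be far more sophisticated than bare regular expressions.

```typescript
// Hypothetical sketch of classifier-based prompt scanning; not Metomic's code.

interface Classifier {
  label: string;
  pattern: RegExp;
}

// Illustrative stand-ins for pre-built classifiers; a product like Metomic
// ships around 150 of these, and real detectors go well beyond regexes.
const classifiers: Classifier[] = [
  { label: "email address", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/ },
  { label: "US social security number", pattern: /\b\d{3}-\d{2}-\d{4}\b/ },
  { label: "AWS access key ID", pattern: /\bAKIA[0-9A-Z]{16}\b/ },
];

// Businesses could register custom classifiers for their most vulnerable data.
function addCustomClassifier(label: string, pattern: RegExp): void {
  classifiers.push({ label, pattern });
}

// Scan a prompt before submission; return the labels of anything detected.
function scanPrompt(prompt: string): string[] {
  return classifiers.filter((c) => c.pattern.test(prompt)).map((c) => c.label);
}

const hits = scanPrompt("Ticket 42: user jane@example.com, SSN 123-45-6789");
if (hits.length > 0) {
  // In a real extension this would alert the security team and warn the user.
  console.warn(`Sensitive data detected: ${hits.join(", ")}`);
}
```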
Vibert shared:
"We built Metomic on the promise of giving businesses the power of collaborative SaaS and GenAI tools without the data security risks that come with implementing cloud applications.
Our ChatGPT integration expands on our foundational value as a data security platform. Businesses gain all the advantages of ChatGPT while avoiding serious data vulnerabilities."
Metomic for ChatGPT identifies critical risks in ChatGPT conversations in real time and offers contextual previews of sensitive data being uploaded to the platform.
Metomic customers that implement the browser extension gain visibility into the sensitive data their employees are sharing with ChatGPT through Chrome on desktop.
There's plenty of research showing that most companies have no regulations or policies on the use of generative AI at work. And as Vibert cautions:
"Even if they have policies about the use of generative AI, these tools are developing so quickly that the policies they wrote in the early days are already outdated."
Vibert sees education as the way forward.
"You've got to educate the employees, and you've got to trust them and put the controls and education directly in their hands."
Metomic's plugin delivers real-time training that puts the controls directly in employees' hands, so they don't click that submit button when confidential information is detected in their prompts.
"We don't block them. They can still use the tools. We just strip out the sensitive data before they hit that submit button."
I was curious about the repercussions in a work setting when staff use generative AI in risky ways.
Vibert asserts:
"This is just the beginning of a long chain of events that will develop in the tech industry very quickly.
It could be that security alerts trigger a workflow that leads to security training, or a flow that ensures you don't have access to specific systems.
It's also important to understand how each employee shares sensitive data across the entire software suite and respond accordingly to ensure that staff use these tools more safely."
Metomic's data security software extends across SaaS, GenAI and cloud.
"This allows companies to understand how employees share sensitive data across the entire tech stack.
That's critical because you need the bigger picture.
Some departments use one tool heavily, and other departments rely on another. Without this cross-coverage, the picture is incomplete and inaccurate. So having a holistic picture of how employees are sharing sensitive data is critical.
And this ranges from things like Slack and Microsoft Teams, Google Drive, all the way through to ChatGPT."
While the true impact of ChatGPT and other generative AI tools on confidentiality and privacy is still unfolding, Metomic's proactive stance offers a valuable glimpse into how data security can keep pace with fast-changing workplace challenges.
Lead image: Emiliano Vittoriosi.