More and more companies (at least 78%, according to one McKinsey study) are aggressively integrating AI into their business processes, and safety, security, and ethics, qualities often taken for granted in more mature enterprise solutions, can get left behind.
And when your data is also your customers’ data, data security isn’t just a good idea: it’s the law. So: demand some foundational promises from any company proposing to sell you AI-driven solutions.
Big players - OpenAI, Google, Meta, and so forth - tend to lead by example. You don’t want a single page that pays lip service to your security. You want to see an entire section dedicated to how that company is pursuing safe, ethical usage of AI solutions.
Adobe’s policy for Firefly, their image generation model, is a stellar example of what you should ask of every AI-enabled solution. In addition to other details, Adobe represents that:
OpenAI’s enterprise portal (though notably not their consumer-grade portals) states that:
It’s extremely common for major AI companies to make promises to their enterprise customers that regular customers don't get.
For instance, Google’s Gemini LLM publishes a list of policy guidelines describing things Gemini shouldn’t do - but does not assert or guarantee that the model won’t do them anyway. If you want promises about your data security, you need to use Gemini for Google Workspace; using the general-public portal could put your proprietary data at risk.
You should also verify that you are allowed to use the output of an LLM. For instance: Google asserts that Gemini cites a source when it outputs lengthy amounts of code, so that you can comply with any licensing requirements. The implicit statement here: just because Gemini produced it doesn’t mean you can use it (or that Google was allowed to use it in the first place).
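One way to operationalize that caution is to gate generated code before it lands in your repository. The sketch below is a hypothetical Python check, not any vendor’s actual tooling: the line-count threshold, the keyword search, and the `needs_license_review` function are all illustrative assumptions. The idea is simply that long, uncited output gets flagged for manual license review, and cited output still gets a human look at the license terms.

```python
import re

# Hypothetical policy gate for LLM-generated code. The threshold and the
# attribution heuristics below are assumptions for illustration only.
REVIEW_THRESHOLD_LINES = 30  # assumed cutoff for "lengthy" output


def needs_license_review(generated_code: str, cited_source: str | None) -> bool:
    """Flag generated code for manual license review before it is reused."""
    line_count = len(generated_code.splitlines())
    has_license_marker = bool(
        re.search(r"(?i)\b(license|copyright|spdx)\b", generated_code)
    )

    # Long output with no citation and no visible license marker is the risky
    # case: the model may have reproduced licensed code without telling you.
    if line_count >= REVIEW_THRESHOLD_LINES and not cited_source and not has_license_marker:
        return True

    # If a source was cited, someone still has to confirm the license allows reuse.
    return cited_source is not None


if __name__ == "__main__":
    snippet = "\n".join(f"print({i})" for i in range(40))
    print(needs_license_review(snippet, cited_source=None))  # True: long and uncited
```

None of this replaces reading the vendor’s terms; it just makes “did anyone check the license?” a step you can’t skip by accident.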
AI Disclosure Statement: This article was written without the use of AI assistance. All typos and assertions belong to the author, for better or worse.