OpenAI announced that it will require organizations to complete an identity verification (IDV) process before being allowed to access its latest models. Identity verification will likely require developers to digitally verify themselves using a government-issued photo ID from a permitted country and to prove their affiliation with their organization. Forrester expects that the reasons for OpenAI’s decision include:

  • The ability to block developers based in malicious or rogue countries from API access to its models.
  • Tying each developer to a single client organization, ensuring that a developer accesses OpenAI models on behalf of only one organization (which also helps with vetting organizations).
  • Better enforcement of rate limits on model access.
  • Protecting OpenAI models from IP theft.
  • A need to correlate true identities with prompts and model responses that slip through genAI guardrails (e.g., asking for instructions on how to commit acts of terrorism). This is especially important as large language models (LLMs) expand from text-focused input and output to new modalities (audio, image, video).

OpenAI’s decision parallels a larger market trend of linking online, digital identities to real-world ones. This trend includes age verification requirements (gaming, gambling, adult content websites), regulatory compliance with know-your-customer/anti-money-laundering (KYC/AML) regulations, and nonrepudiation of user transactions.

Firms using OpenAI should define an internal governance (onboarding and offboarding) process for permissioning access to genAI/LLM models, including OpenAI’s. Shadow genAI access puts the firm’s and its customers’ intellectual property at risk of disclosure via these models. Further, Forrester recommends that end-user organizations leverage multichannel-capable (web, mobile app, contact center, in-person), low-friction but powerful identity verification solutions.
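As a minimal sketch of what such an onboarding/offboarding permissioning check could look like, the Python below ties each developer to a single organization and an explicit model allowlist, with deny-by-default access and revocation on offboarding. All names here (DeveloperGrant, is_access_permitted, the example model identifiers) are illustrative assumptions, not OpenAI APIs or a Forrester-prescribed design.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

# Hypothetical onboarding record: each developer is tied to exactly one
# organization and an explicit set of permitted models.
@dataclass
class DeveloperGrant:
    developer_id: str
    organization: str
    permitted_models: set[str] = field(default_factory=set)
    offboarded_on: date | None = None  # set during offboarding; revokes access

def is_access_permitted(grant: DeveloperGrant, organization: str, model: str) -> bool:
    """Allow a model call only for an active grant, for the developer's own
    organization, and for an explicitly permitted model (deny by default)."""
    if grant.offboarded_on is not None:
        return False
    if grant.organization != organization:
        return False
    return model in grant.permitted_models

# Example: a developer onboarded to one organization with one permitted model.
grant = DeveloperGrant("dev-042", "acme-corp", {"gpt-4o"})
assert is_access_permitted(grant, "acme-corp", "gpt-4o")
assert not is_access_permitted(grant, "other-org", "gpt-4o")  # wrong organization
assert not is_access_permitted(grant, "acme-corp", "o3")      # model not on allowlist
```

The deny-by-default posture mirrors the governance point above: access to a genAI/LLM model is an explicit, auditable grant that offboarding revokes, which is what makes shadow genAI access visible in the first place.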