Reiko Feaver: New AI Risks: Contractor Confidentiality Risks; Insurance Gaps

Authored by CM Law Partner & Technology Group Chair Reiko Feaver, this article examines two emerging AI risks that are quietly raising the stakes for businesses: contractor confidentiality gaps and receding insurance coverage.

Two emerging trends put businesses at increasing risk. Up-front vendor diligence, appropriate contractual provisions, and internal monitoring processes will be key protective tools.

AI-Related Contractor Confidentiality Risks

AI companies, including major players like OpenAI, are actively soliciting contractors to upload actual work product to evaluate and train AI models. These companies instruct contractors to scrub confidential and sensitive information from this work before disclosure to the company. This creates a very real risk that business information is being “protected” by individuals who may no longer be focused on the best interest of their previous employer or who simply don’t have the skills to effectively remove confidential, proprietary, or sensitive data and information from their work product.

Any organization relying on independent contractors, consultants, or external service providers faces these risks. Contractors today work remotely, use personal devices, and rely on external software tools and cloud platforms to perform assigned tasks. The degree of visibility into how company information, data, and materials are handled, stored, or transmitted varies across companies. While complete protection is not likely, businesses should at the least adopt contractual and operational measures over which they do have control to minimize risk.

Contractor agreements often include generic data, information, and work product confidentiality and protection clauses. Better agreements will contain: (a) restrictions on external system uploads; (b) express prohibitions on use of company data and information in external AI or cloud platforms; (c) representations concerning reuse of prior-employer materials; (d) company audit or monitoring rights; (e) obligations on the contractor to notify of breaches or unauthorized uses; and (f) precise obligations relating to return and permanent destruction of company data, information, and work product upon termination of the engagement.

Operational measures include requiring annual contractor certifications of compliance, enhanced exit procedures with AI-specific acknowledgments, document tracking and access monitoring systems, and exit interview protocols addressing AI platform uploads.

Receding Insurance Coverage

Insurance markets are narrowing “silent” insurance coverage for AI-related incidents, adding AI-related exclusions, and rolling out AI-specific policies. Businesses have seen this progression before with cyber coverage. Recognizing that, businesses should already be familiar with the proactive measures necessary to manage exposure to the risks inherent in the use of AI tools.

Insurance companies will expect businesses to be able to identify and understand where and how AI tools are being used in their products, offerings, and business operations. To meet these obligations, businesses must impose strong AI transparency and governance controls on AI vendors and providers. Up-front diligence and assessment can identify whether vendors and providers have these controls. AI assessments must become as routine as the security assessments businesses conduct today.

Contracts should be updated and modified to impose express, written obligations on vendors and providers of AI tools, along with liability for failures to meet those obligations. Contracts should address data acquisition, use, and protection; input and output accuracy, reliability, and legal compliance; monitoring, correction, and improvement of the underlying models; and the presence of appropriate levels of human oversight. Existing warranty and indemnification schemes are likely insufficient to cover the real risks from AI tools and must be reworked to address risks specific to the AI use.

Once a vendor has been assessed and approved and has executed an appropriate contract, vendor management must be robust and ongoing. Relying on what the vendor was doing at initial assessment, or on what the contract requires the vendor to do, will never replace operational processes to track vendors and audit performance. Lack of these measures may well invalidate or limit any AI-related coverage that the business does maintain.


About CM Law

CM Law (cm.law) – formerly Culhane Meadows – is the largest national, full-service, women-owned & managed (WBE) law firm in the United States. Designed to provide experienced attorneys with an optimal way to practice sophisticated law while maintaining a superior work/life balance, the firm offers fully remote work options, a transparent, merit and math-based compensation structure, and a collaborative culture. Serving a diverse clientele—from individuals and small businesses to over 40 Fortune-ranked companies—CM Law is committed to delivering exceptional legal services across a broad spectrum of industries.


The foregoing content is for informational purposes only and should not be relied upon as legal advice. Federal, state, and local laws can change rapidly and, therefore, this content may become obsolete or outdated. Please consult with an attorney of your choice to ensure you obtain the most current and accurate counsel about your particular situation.