The AI Gold Rush: Are You Protecting Your Business, or Giving It Away?
Posted on 25 February 2026 22:01
A practical guide for SME executives on AI risks, privacy, and intellectual property
There's a quiet crisis unfolding inside businesses right now, and most executives don't even know it's happening.
While boardrooms debate AI strategy and employees enthusiastically adopt tools like ChatGPT, Gemini, DeepSeek, and Claude to turbocharge their productivity, a different conversation is being missed entirely:
- What happens to the information you feed these platforms?
- Who owns what comes out?
- What are the legal and reputational consequences if something goes wrong?
The AI productivity wave is real and the benefits are undeniable. But in the rush to keep up, common sense around data protection and intellectual property is being left at the door. For CEOs, CFOs, and business owners of small and medium enterprises, this isn't a technology problem; it's a governance problem. And it needs your attention now.
What Actually Happens When Your Team Uses AI?
Before we talk about protections, you need to understand the basic mechanics. When someone in your business types a prompt into a free AI tool (asking ChatGPT to summarise a contract, draft a client proposal using your company's methodology, or analyse financial data, say), that information travels to a third-party server. What happens next depends entirely on the platform's terms and conditions, and those terms vary significantly.
Think of it this way: if your employee walked into a public library, read aloud your confidential client list, your proprietary pricing model, and your internal legal correspondence, and then asked a librarian to help them write a report, would you be comfortable with that? Almost certainly not. Yet that is, in functional terms, what happens when sensitive business information is pasted into a consumer-grade AI tool with default settings.
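One lightweight control that follows from this analogy is to strip obviously sensitive tokens before any text leaves the business. The Python sketch below is a minimal illustration under stated assumptions: the three patterns shown (email addresses, phone-like numbers, long card-like digit runs) are examples only, the placeholder labels are invented for this sketch, and a real deployment would need far broader coverage and proper data loss prevention tooling.

```python
import re

# Illustrative patterns only; real coverage would include names,
# account numbers, client identifiers, and purpose-built DLP tooling.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD_OR_ID": re.compile(r"\b\d{13,19}\b"),
    "PHONE": re.compile(r"\b\d[\d\s-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace obviously sensitive tokens with labelled placeholders
    before the text is pasted into an external AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise the dispute with jane.doe@client.co.za, card 4111111111111111."
print(redact(prompt))
```

The point is not that a regex makes consumer AI safe — it doesn't — but that even a crude automated gate between your staff and a public platform is better than relying on individual judgment in the moment.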
The Terms and Conditions: What the Platforms Actually Say
The legal landscape here is not uniform, and it shifts regularly. But as a general principle, here is what executives need to know about the major platforms:
- ChatGPT (OpenAI): In its free tier and standard API usage, OpenAI has historically used conversations to train and improve its models, though users can opt out through settings. For business users on paid enterprise plans, OpenAI provides contractual commitments that your data will not be used for training, and conversations are not retained beyond the session. However, many employees using personal or informal accounts do not operate under these protections.
- Google Gemini: Similar principles apply. Consumer-facing Gemini products may use interactions to improve Google's services. Workspace business users with appropriate Enterprise agreements benefit from stronger data processing terms and data residency commitments, but again, this depends on your contract and how your staff are accessing the tool.
- DeepSeek: This Chinese-developed platform warrants particular caution. DeepSeek's privacy policy allows for the collection of a broad range of user data, which may be stored on servers in the People's Republic of China and subject to Chinese law, including laws that can compel companies to share data with government authorities. Several national governments and corporations have already moved to restrict or ban its use on work devices. For any business operating in regulated sectors, or handling data from clients in jurisdictions with strict data transfer rules, using DeepSeek with business information carries material risk.
- Claude (Anthropic): Anthropic similarly distinguishes between consumer and business use. On the free tier, conversations may be reviewed by Anthropic staff for safety and improvement purposes. Paid plans and the API come with stronger commitments around data not being used for training without consent, and enterprise agreements include formal data processing addenda.
The pattern across all platforms is consistent: free or personal tiers offer minimal protection; paid enterprise tiers offer significantly more. But "more" is not the same as "complete."
Are Paid Plans Enough? The Honest Answer
Upgrading to a paid business plan does materially improve your position, and for most SMEs it should be a baseline requirement if staff are using these tools with any company data. Paid plans typically offer:
- Contractual commitments that your data will not be used to train AI models
- Data processing agreements that may satisfy regulatory requirements (such as POPIA, GDPR)
- Shorter or no data retention periods
- Reduced human review of conversations
- In some cases, the ability to choose data residency regions
However, there are important caveats. A paid subscription does not grant you unlimited privacy. Most platforms retain logs for safety, security, and abuse prevention purposes. Terms of service can change, and they do. And critically, a paid subscription does not protect you from your own employees making poor decisions about what to share.
There is also a subtler risk that paid plans do not address: the risk to your intellectual property from the outputs. If your team uses AI to generate a client deliverable, a strategic framework, a marketing campaign, or a software module, questions about who owns that output (you, your client, or the AI platform) are not yet fully resolved in most jurisdictions. Courts and regulators are still catching up. In the meantime, the safest assumption is that AI-generated content may carry ownership ambiguity, and your contracts with clients should be updated to address this explicitly.
How SMEs Can Protect Themselves: Practical Controls
The good news is that you do not need a large IT department or cybersecurity budget to implement meaningful protections. What you need is clarity, policy, and discipline.
Start with an AI Acceptable Use Policy. This is the single highest-impact thing most SMEs can do right now. Your policy should define which AI tools are approved for business use, under what conditions, and with what categories of information. It should explicitly prohibit entering personal data, client confidential information, trade secrets, financial data, and legal communications into unapproved tools. It should be signed by every employee, not buried in an induction pack.
Classify your information. Not everything your business holds is equally sensitive. Create a simple tiered classification: public information, internal information, confidential information, and restricted information. Then map which tier is appropriate for AI use. As a rule of thumb, confidential and restricted information should never enter a consumer AI platform, and should only enter enterprise platforms with verified data processing agreements in place.
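The tier-to-tool mapping described above can be made concrete enough to enforce. The sketch below is a minimal illustration; the tier names follow the article, but the tool names and the shape of the lookup are assumptions you would replace with your own approved-tool register.

```python
# Minimal sketch of a classification-to-tool policy table.
# Tool names are illustrative placeholders, not real products.
APPROVED_TOOLS = {
    "public":       {"consumer_ai", "enterprise_ai"},
    "internal":     {"enterprise_ai"},
    "confidential": {"enterprise_ai"},   # only with a verified DPA in place
    "restricted":   set(),               # never leaves company systems
}

def ai_use_allowed(classification: str, tool: str) -> bool:
    """Return True if information at this classification tier may be
    entered into the named tool under the policy. Unknown tiers are
    denied by default."""
    return tool in APPROVED_TOOLS.get(classification, set())

print(ai_use_allowed("public", "consumer_ai"))        # allowed
print(ai_use_allowed("confidential", "consumer_ai"))  # denied
```

Even if you never automate the check, writing the policy down in this tabular form forces the decisions that matter: which tiers exist, which tools are sanctioned, and what the default answer is when a case isn't covered (it should be "no").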
Centralise your AI tools. The proliferation of AI tools within an organisation, sometimes called "shadow AI", is one of the fastest-growing risks in business today. Employees will, with good intentions and genuine enthusiasm, find and use whatever tools help them work faster. Without a sanctioned set of approved tools, your data is scattered across dozens of third-party platforms, each with different terms and different risks. Pick a small number of approved tools, pay for appropriate-tier access, and make them easy to use. Friction drives shadow adoption.
Audit and review AI outputs. Particularly where AI is being used to create client-facing work, legal documents, financial analysis, or strategic advice, human review is not optional. AI tools hallucinate: they can produce confident-sounding content that is factually wrong. Without review processes, you risk reputational damage, legal liability, and, in regulated industries, compliance failures.
Update your contracts. Both client contracts and employment agreements need to reflect the reality of AI use. Client contracts should address who owns AI-assisted deliverables and what data your firm may process using third-party AI tools. Employment agreements should include clauses on AI tool use, confidentiality obligations as they relate to AI, and the handling of AI-generated work.
Existing Frameworks: Your Built-In Head Start
If your business already operates under established information security or risk management frameworks, you may have more protection in place than you realise, as long as those frameworks have been updated to address AI.
ISO 27001 is the internationally recognised standard for information security management. At its core, it requires organisations to identify their information assets, assess the risks to those assets, and implement controls to manage those risks. An AI tool through which confidential data passes is an information asset risk, and ISO 27001's existing controls around third-party supplier management, access control, and incident response are directly applicable. Businesses certified under ISO 27001 should review their supplier register and risk assessments to explicitly include AI platforms. If you're not yet ISO 27001 certified, the framework provides an excellent roadmap even if formal certification isn't your immediate goal.
NIST CSF and NIST AI RMF (the AI-specific Risk Management Framework) offer complementary guidance. The NIST AI RMF, released in 2023, provides specific guidance on governing, mapping, measuring, and managing AI risks. It maps directly onto the kinds of decisions leaders face when deploying AI tools. It is freely available and does not require formal certification to use as a governance guide.
GDPR, POPIA, and other data protection regulations provide hard legal obligations that must inform your AI strategy. If you handle personal data of individuals, the use of AI tools to process that data is subject to requirements including lawful basis for processing, data minimisation, and third-party processor agreements. Feeding customer personal data into an AI tool without a Data Processing Agreement in place is likely to breach data protection regulations, regardless of what the tool produces.
The common thread across these frameworks is that AI does not require you to reinvent your governance wheel. It requires you to extend the wheel you already have.
The Prompt Sprawl Problem: Managing AI's New Institutional Knowledge
Here is a challenge that almost no business is managing well yet, and that will become a significant operational risk within the next two to three years.
As teams integrate AI into daily operations, they develop prompts - the instructions they give AI tools to produce useful results. A well-crafted prompt can encode years of institutional knowledge: how your business approaches a problem, your pricing methodology, your client communication style, your quality standards. Over time, these prompts become business assets as valuable as any process document or client database.
The problem is that prompts are almost universally unmanaged. They live in individual employees' personal accounts on third-party platforms, in browser bookmarks, in Slack messages, and in people's heads. When an employee leaves, the prompt, and the knowledge embedded in it, leaves with them. When the platform changes its terms, that knowledge may already have been shared without your consent. When a competitor poaches your team, they may bring your prompt library with them.
Managing prompt sprawl requires the same discipline as managing any other intellectual asset. Businesses should maintain a centralised prompt library, stored within company-controlled systems, not on third-party AI platforms. Prompts that encode proprietary methodology, client-specific processes, or competitive advantage should be treated as confidential business documents. Access should be controlled, and employees should understand that prompts created in the course of their employment are company property.
Practically, this can be as simple as a shared document repository with version control, or as sophisticated as a purpose-built prompt management system. The sophistication should match your scale, but the discipline should be universal.
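At the simple end of that spectrum, even a spreadsheet-grade record per prompt goes a long way. The sketch below shows one possible record structure; the field names and the revision convention are assumptions to adapt to whatever repository you actually use, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative record for a company-controlled prompt library.
# Field names are assumptions; adapt to your own repository.
@dataclass
class PromptRecord:
    name: str
    body: str                 # the prompt text itself
    owner: str                # an accountable team or role, not a personal account
    classification: str       # e.g. "internal", "confidential"
    version: int = 1
    last_reviewed: date = field(default_factory=date.today)

    def revise(self, new_body: str) -> "PromptRecord":
        """Record a revision as a new entry rather than overwriting history."""
        return PromptRecord(self.name, new_body, self.owner,
                            self.classification, self.version + 1)

p1 = PromptRecord("client-proposal-draft",
                  "Draft a proposal using our standard methodology.",
                  owner="Commercial team", classification="confidential")
p2 = p1.revise("Draft a proposal using our standard methodology and house tone.")
print(p2.version)
```

The design choice worth copying regardless of tooling: every prompt has a named owner, a classification tier, and a version history, which is exactly what turns an employee's personal trick into a governed company asset.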
A Simple Action Plan for Executives
You don't need to solve everything at once. Here is a prioritised starting point:
This week: Have a conversation with your leadership team about which AI tools your employees are currently using. You may be surprised. Establish a temporary moratorium on using any AI tool with client or confidential data until you have a policy in place.
This month: Draft and implement an AI Acceptable Use Policy. Audit your current AI tool subscriptions and upgrade to appropriate business-tier plans where relevant. Check whether your data protection agreements with key AI suppliers are in place.
This quarter: Review your client and employment contracts for AI-related gaps. Conduct a basic information classification exercise. Establish a centralised prompt library. Brief your team on the policy and the reasoning behind it. Compliance is far more likely when people understand why, not just what.
This year: Consider whether ISO 27001 or a similar framework is appropriate for your business as a signal of maturity to clients and partners. Monitor regulatory developments in your sector. AI-specific regulation is coming, and businesses that have built governance infrastructure will adapt far more easily than those starting from scratch.
The Bottom Line
AI tools are genuinely transformative, and the businesses that learn to use them well will have real competitive advantage. But advantage built on poor data governance is fragile. It exposes you to regulatory action, client loss, reputational damage, and competitive harm, often before you even realise the exposure exists.
The executives who will navigate this era well are not those who move fastest, but those who move with intention. They understand what their people are doing with these tools, they have clear policies in place, and they treat AI governance as a business discipline, not an IT afterthought.
The technology is moving quickly. Your governance doesn't have to chase it; it just has to be in place before something goes wrong.
Because in this environment, the question is not whether a gap in your AI governance will be exploited. It's whether you'll close it before or after it costs you.
If this resonates with challenges you're facing in your business, I'd welcome the conversation. The risks are real, but so are the practical solutions.
References & Further Reading
All sources are publicly available and were verified as of February 2026. Platform terms and privacy policies are subject to change — readers are encouraged to check the latest versions directly.
Platform Privacy Policies & Terms of Service
- OpenAI — Business Data Privacy Commitments
Official page detailing OpenAI's enterprise-level data handling, including the commitment not to train on business data by default across ChatGPT Enterprise, Team, and API plans.
https://openai.com/business-data/
- OpenAI — Enterprise Privacy Overview
Covers SOC 2 compliance, data encryption standards, data residency options, and Data Processing Addendum availability.
https://openai.com/enterprise-privacy/
- OpenAI — Consumer Terms of Use
Governs free and Plus user accounts, including provisions on content use for model improvement and opt-out mechanisms.
https://openai.com/policies/row-terms-of-use/
- Google — Generative AI in Google Workspace Privacy Hub
Google's official FAQ covering how Workspace data is handled in Gemini, including the commitment that content is not used for model training outside the customer's domain without permission.
https://support.google.com/a/answer/15706919
- Google — Gemini Apps Privacy Hub
Details how the consumer-facing Gemini app handles data, including the use of chats by human reviewers to improve Google products and services on the free tier.
https://support.google.com/gemini/answer/13594961
- Google — Gemini API Additional Terms of Service
Explicitly distinguishes between paid services (no training on prompts/responses) and unpaid services (content used to improve Google products, with human reviewer access).
https://ai.google.dev/gemini-api/terms
- Google Workspace — Generative AI Security, Compliance and Privacy
Overview of Gemini for Workspace enterprise certifications including ISO 42001, FedRAMP High, SOC 1/2/3, and HIPAA compliance.
https://workspace.google.com/security/ai-privacy/
- DeepSeek — Official Privacy Policy
The primary source confirming that DeepSeek collects and stores personal data in the People's Republic of China and that data may be processed under Chinese law.
https://cdn.deepseek.com/policies/en-US/deepseek-privacy-policy.html
- Anthropic — Updates to Consumer Terms and Privacy Policy (August 2025)
Official announcement of Anthropic's 2025 policy change allowing consumer account data to be used for model training (opt-in), with confirmation that Commercial Terms (Claude for Work, API, Enterprise) remain unaffected.
https://www.anthropic.com/news/updates-to-our-consumer-terms
- Anthropic Privacy Center — Is my data used for model training?
Confirms that by default, commercial products (Claude for Work, API) do not use inputs/outputs to train models; training use requires explicit opt-in or feedback submission.
https://privacy.claude.com/en/articles/7996868-is-my-data-used-for-model-training
- Anthropic Privacy Center — How long do you store my data?
Details retention periods: 30 days default for commercial users, up to 5 years for consumers who opt into model improvement, and up to 7 years for trust and safety flagged content.
https://privacy.claude.com/en/articles/10023548-how-long-do-you-store-my-data
DeepSeek Risk & Regulatory Developments
- CNBC — South Korea says DeepSeek transferred user data to China and the US without consent (April 2025)
Reports findings of South Korea's Personal Information Protection Commission (PIPC), including unauthorised transfer of AI prompt data to Beijing Volcano Engine Technology Co.
https://www.cnbc.com/2025/04/24/south-korea-says-deepseek-transferred-user-data-to-china-us-without-consent.html
- IAPP — DeepSeek and the China Data Question (2025)
Analysis from the International Association of Privacy Professionals on the legal nuances of DeepSeek's data flows and the distinction between direct data collection and international data transfer under GDPR.
https://iapp.org/news/a/deepseek-and-the-china-data-question
- NPR — International regulators probe how DeepSeek is using data (January 2025)
Covers regulatory responses from Italy, Ireland, South Korea, and the US, including warnings to government staff and analysis from Yale cybersecurity researcher Samm Sacks.
https://www.npr.org/2025/01/31/nx-s1-5277440/deepseek-data-safety
- Lexology — DeepSeek Faces Overseas and Chinese Data Security Challenges (February 2025)
Summary of global regulatory actions against DeepSeek as of mid-February 2025, covering bans and investigations across Italy, Belgium, South Korea, Australia, India, and the US.
https://www.lexology.com/library/detail.aspx?g=e98373d5-d7ac-4b81-9958-b42a6d5ddbed
- Security Magazine — Dangers of DeepSeek's Privacy Policy (February 2025)
Analysis of DeepSeek's data collection practices, including the collection of keystroke patterns and the implications of Chinese law for data governance.
https://www.securitymagazine.com/articles/101374-dangers-of-deepseeks-privacy-policy-data-risks-in-the-age-of-ai
AI Governance Frameworks
- NIST — AI Risk Management Framework (AI RMF 1.0), January 2023
The foundational US voluntary framework for managing AI risks across the AI lifecycle, covering governance, mapping, measurement, and management of AI-specific risks. Free to access and apply without formal certification.
https://www.nist.gov/itl/ai-risk-management-framework
- NIST — Generative AI Profile (NIST AI 600-1), July 2024
Extension of the AI RMF specifically addressing risks unique to generative AI systems including large language models, covering intellectual property risks, data privacy, and third-party component risks.
https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
- ISO — ISO/IEC 27001:2022 Information Security Management
The internationally recognised information security standard. Annex A controls 5.19–5.22 cover third-party and supply chain risk management, directly applicable to AI tool governance.
https://www.vanta.com/collection/tprm/third-party-risk-requirements-iso-27001
- ISO — ISO/IEC 42001:2023 AI Management System Standard
The AI-specific management system standard, structurally similar to ISO 27001. Organisations certified under ISO 27001 can typically achieve ISO 42001 compliance 30–40% faster due to overlapping governance structures.
https://www.protechtgroup.com/en-us/blog/ai-governance-iso-42001-certification
- AWS — AI Lifecycle Risk Management: ISO/IEC 42001:2023 for AI Governance
Practical guidance on applying ISO 42001 in enterprise environments, including alignment with ISO 27001, NIST CSF, and GDPR.
https://aws.amazon.com/blogs/security/ai-lifecycle-risk-management-iso-iec-420012023-for-ai-governance/
AI Data Risks & Enterprise Security
- TechCrunch — Anthropic users face a new choice: opt out or share your chats for AI training (August 2025)
Independent analysis of Anthropic's September 2025 policy change, including commentary on the UI design of the consent mechanism and its implications for enterprise users.
https://techcrunch.com/2025/08/28/anthropic-users-face-a-new-choice-opt-out-or-share-your-data-for-ai-training/
- Nightfall AI — Does ChatGPT Store Your Data in 2025?
Detailed analysis of ChatGPT's 2025 data practices, including the 2024 policy change affecting free and Plus users, and a discussion of GDPR non-compliance concerns.
https://www.nightfall.ai/blog/does-chatgpt-store-your-data-in-2025
- Protecto — OpenAI Data Privacy Compared: OpenAI, Claude, Perplexity AI, and Otter (October 2025)
Side-by-side comparison of enterprise and consumer data handling across major AI platforms, with practical guidance on channel selection for sensitive data.
https://www.protecto.ai/blog/openai-data-privacy/
- AMST Legal — Anthropic's Claude AI Updates: Impact on Privacy & Confidentiality (September 2025)
Legal analysis of the September 2025 Claude policy changes, including the distinction between consumer and commercial account tiers and risks for SMEs using paid but non-enterprise plans.
https://amstlegal.com/anthropics-claude-ai-updated-terms-explained/
- Google Cloud White Paper — Generative AI, Privacy, and Google Cloud (September 2024)
Google's official position on data sovereignty, model training, intellectual property, and GDPR alignment for enterprise customers using Vertex AI and Google Workspace.
https://services.google.com/fh/files/misc/genai_privacy_google_cloud_202308.pdf
Disclaimer: This article is provided for informational purposes only and does not constitute legal or regulatory advice. AI platform terms and privacy policies change frequently. Businesses should seek independent legal and compliance advice before making decisions about AI tool deployment, data governance, or contractual arrangements.