The applications of artificial intelligence (AI) in business are proliferating. Startups at all stages are building AI into their products to make them smarter and more compelling for users: personalizing user experiences; automating product or service discovery, business operations, and back-office or support functions; and developing new AI-powered data analytics and security tooling for cybersecurity vendors. AI can provide a powerful competitive advantage, helping startups scale with less labor and expense and a shorter time to market for novel services and products. At the same time, rapid adoption of these emerging technologies without time-tested security best practices is creating new cybersecurity risks. Founders, CTOs, and security practitioners at startups, and the security experts who work with them, need to understand and thoughtfully manage these risks.
Startups are an attractive target for many attackers because they often run early-stage security programs without the internal security expertise or budget of larger, more mature businesses. Startups also typically process sensitive data such as user Personally Identifiable Information (PII) or financial and payment information, and build new products containing strategic intellectual property (IP). Additionally, most early-phase product environments lack robust security controls and contain configuration errors, because the engineering focus is on bringing a product to market rather than on operational security best practices. Attackers use AI tools and automation to probe startup APIs and cloud services at scale, find misconfigurations in development, testing, and operations environments, and launch more personalized and convincing social engineering or phishing campaigns, including deepfakes. Securing AI applications, models, and tools, along with the underlying technologies they depend on, should be a top focus of the security program and a major contributor to the company’s risk management strategy.
Here is an overview of how AI is changing cybersecurity risk for startups, which cybersecurity risks startups should consider, relevant U.S. regulations, incident response expectations, how penetration testing and M&A due diligence fit into the equation, how to think about cyber insurance, and real-world Duty of Care Risk Analysis (DoCRA) and reasonable security scenarios that fit startups and their resources.
How is AI Changing Cybersecurity Risk for Startups?
Artificial intelligence (AI) increases the speed of both attack and defense. On the defense side, AI-driven tools automate security monitoring, threat detection, anomaly detection, and response orchestration with greater efficiency than traditional cybersecurity technologies. For security teams at startups with significant expertise gaps and resource constraints, these tools offer a way to “stretch” limited security resources and increase their capacity to identify threats and respond to them quickly and effectively.
Attackers are using AI and automation to attack startups in highly targeted, personalized, and more sophisticated ways. AI can identify vulnerabilities faster and generate more convincing phishing messages or social engineering lures tailored to individual employees. AI-assisted code generation speeds exploit development, while automated scanning tools search cloud environments for exposed services, APIs, and potential misconfigurations more quickly.
AI models and their associated data sets create new risk vectors and expand attack surfaces. Startups and early-stage companies frequently develop new AI models or integrate pre-trained models into their products and services. This creates risks around the data used to train the models, how that data was sourced and licensed, the security of the model APIs themselves, and how the model may be attacked. Adversaries can poison training data to change how a model behaves, or craft adversarial inputs that cause the model to behave unexpectedly, make incorrect decisions, or leak private data.
Why are Startups Valuable Cyber Targets?
Attackers find startups attractive cyber targets because they combine high-value data, rapidly scaling environments, and high-touch internal operations while, in many cases, not yet having mature security programs to protect themselves. Startups often hold data that attracts actors pursuing financial or strategic cybercrime, corporate espionage, or downstream third-party attacks: customer PII and financial information, strategic roadmaps, investor and fundraising data, and intellectual property. Even when some assets attackers prize (e.g., proprietary products or detailed security and product architecture) are not yet finalized, early-stage companies often carry robust customer data in production systems from day one (customer PII, credit card data, payment information) and rely on high-touch development, sales, marketing, and product support teams, which are attack vectors in their own right.
Startups also depend on external service providers, cloud platforms, open-source components, and other third parties to drive their development velocity and growth. These dependencies present attack vectors as well: open-source components, third-party services, and cloud platform configurations may carry vulnerabilities that attackers target, especially when they are integrated into the company environment with little scrutiny.
Cybersecurity Risks Startups Must Consider
API, Cloud, and Container Security. Startups depend on APIs for business functionality and core technology integrations. Misconfigured cloud storage, overly broad IAM (Identity and Access Management) policies, and exposed endpoints in testing and production services are common attack vectors that startups need to audit and harden to prevent exploitation.
AI and Model Risk. Startups often develop or integrate AI models into their products or services that are trained on sensitive or proprietary data. Risks to consider include data poisoning, adversarial inputs that cause the model to misbehave in a way that could lead to incorrect decisions being made that harm the business, or model theft that could expose proprietary data or use in downstream attacks.
AI-Assisted Social Engineering. Attackers can use AI to create more convincing, contextually relevant phishing attacks and voicemail or text messages designed to appear to be from valid internal contacts within the organization. Automation increases the speed and scale at which social engineering attacks can be launched, especially when using social media or business network reconnaissance to generate targeted campaigns.
Credential Theft and Identity Abuse. Credential stuffing and account takeover are increasingly common where weak authentication persists and multi-factor authentication (MFA) policies and mechanisms are missing or inconsistently enforced, particularly in organizations with complex user access policies. Startups should harden authentication as early as possible.
Dependency Risk from Open Source and Third Parties. Dependency confusion and insecure third-party components (e.g., open-source libraries) present attack risks for startups, which often adopt open-source components and third-party services without the in-depth security validation performed at more mature companies.
Insider and Privilege Abuse. Dynamic team growth, frequent new hires, and the use of contractor or third-party vendor accounts can all result in privilege creep. Insiders who know the business systems and can access them without adequate identity or role validation become a risk vector when least-privilege policies and regular role reviews are absent.
Compliance and Data Privacy Gaps. Regulators and courts are increasingly holding companies responsible for how they store, process, and use consumer data. Startups handling consumer data must map their security controls and procedures to their compliance requirements.
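Several of the risks above (overly broad IAM policies, privilege creep) come down to spotting permissions that are wider than they need to be. A minimal sketch, assuming an AWS-style JSON policy document, of flagging the obvious wildcard cases; a real review would use a dedicated policy analyzer:

```python
import json

# Illustrative sketch: flag overly broad statements in an AWS-style IAM
# policy document. Only catches the obvious wildcard cases; this is not
# a substitute for a full policy analyzer.
def find_broad_statements(policy: dict) -> list:
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be unwrapped
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"statement {i}: wildcard action {actions}")
        if "*" in resources:
            findings.append(f"statement {i}: wildcard resource")
    return findings

policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-bucket/*"},
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
  ]
}""")

for finding in find_broad_statements(policy):
    print(finding)
```

Running a check like this in CI keeps new wildcard grants from slipping into infrastructure code unnoticed.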
What Security Measures Should Startups Prioritize?
Identity and Access Management (IAM). Implement multi-factor authentication and least privilege access along with adaptive security for developer, staging, and production environments to secure identities and credentials. Identity and credential theft are some of the most common attack vectors at startups.
Secure DevOps Practices. Build security into the development lifecycle through static and dynamic application and infrastructure code analysis, third-party dependency scanning, and automated checks at CI/CD gates. DevSecOps (Development, Security, Operations) allows for risks to be identified earlier in the application and infrastructure code development lifecycle.
AI Model Governance. Validate AI training data; establish guardrails and monitoring to secure model inputs and outputs and track model performance and behavior over time; and provide rollback capabilities in case a model becomes corrupted.
Endpoint and Cloud Protection. Use endpoint detection and response (EDR) tools and cloud security posture management (CSPM) tools to monitor for anomalous activity and misconfigurations.
Continuous Monitoring and Logging. Deploy and implement logging and monitoring tools that cover cloud resources and infrastructure, APIs, network activity, and application logs, and aggregate logs in a centralized logging solution for correlation and visualization.
Offensive Security, Penetration Testing, and Red Teaming. Schedule regular pen tests, including red teaming exercises that assess AI-specific attack vectors (AI model theft and manipulation, data poisoning), as well as other automated attack scenarios.
Employee Security Awareness. Security training should be given early to employees with an emphasis on modern phishing, social engineering, and the secure use of AI technologies.
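The AI Model Governance point above calls for monitoring model behavior over time. A minimal sketch of one common drift check, comparing the distribution of a model's output scores between a baseline window and a recent window with the Population Stability Index (PSI); the bucket edges, thresholds, and data are illustrative, not a production monitor:

```python
import math
from collections import Counter

# Illustrative drift check: compare a baseline window of model output
# scores against a recent window using the Population Stability Index.
# Bucket edges and the sample data below are illustrative.
def psi(baseline, recent, edges=(0.2, 0.4, 0.6, 0.8)):
    def bucketize(scores):
        counts = Counter(sum(s > e for e in edges) for s in scores)
        total = len(scores)
        # small floor avoids log(0) when a bucket is empty
        return [max(counts.get(b, 0) / total, 1e-6) for b in range(len(edges) + 1)]
    p, q = bucketize(baseline), bucketize(recent)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [0.1, 0.15, 0.3, 0.35, 0.5, 0.55, 0.7, 0.75, 0.9, 0.95]
stable   = [0.12, 0.18, 0.32, 0.33, 0.52, 0.58, 0.72, 0.78, 0.88, 0.92]
drifted  = [0.81, 0.85, 0.9, 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97]

print(f"stable PSI:  {psi(baseline, stable):.3f}")   # near zero
print(f"drifted PSI: {psi(baseline, drifted):.3f}")  # large -> investigate or roll back
```

A scheduled job computing a statistic like this against recent inference logs gives a cheap early-warning signal that a model has drifted or been corrupted, triggering the rollback path described above.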
What U.S. Regulations Impact Cyber Risk for Startups?
Startups must consider an ever-shifting cybersecurity risk, privacy, incident reporting, and corporate governance regulatory environment.
Federal Trade Commission (FTC) Act. The FTC can take enforcement action against unfair or deceptive security and privacy practices, including failure to secure AI tools and misuse of customer data. Public claims about security capabilities that a company does not live up to can also draw enforcement, even though FTC guidance is often written without much specificity.
State Privacy Laws. State privacy laws such as the California Consumer Privacy Act (CCPA) and similar legislation in Colorado, Virginia, and other states apply to consumer data and require businesses to implement data security measures.
Children’s Online Privacy Protection Act (COPPA). If a startup’s product or service targets children under age 13 or collects their data, compliance with COPPA requirements for parental consent and data minimization is triggered.
Health Insurance Portability and Accountability Act (HIPAA). If a startup’s business involves healthcare technology or sensitive health information (PHI or ePHI), it must ensure compliance in any use of that data, including in AI workflows.
Securities and Exchange Commission (SEC) Requirements. Publicly listed companies and pre-IPO or pre-merger startups must consider SEC guidance on cybersecurity risk management and disclosure, as cyber risk, including AI-related risk, is increasingly considered material by investors.
Cyber Incident Reporting Laws. The Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) was signed into law in 2022; once CISA’s implementing rules take effect, covered entities will be required to report cyber incidents and ransom payments to CISA within defined timeframes. Startups that serve critical infrastructure sectors or have critical infrastructure providers in their environments will need to understand their role in these reporting requirements.
Incident Response Expectations for Startups
For startups, incident response (IR) includes business continuity, regulatory reporting, investor and board communications, and more. Expectations for startups include:
Cross-Functional Playbooks. Plans that include coordination between security, engineering, leadership, customer support, legal, and communications teams.
AI-Specific Scenarios. Playbooks should contain potential attack scenarios specific to AI use cases like adversarial model corruption, data poisoning, automated exploit campaigns, third-party compromise, etc.
Real-Time Forensics. Collecting and centralizing detailed application, cloud, and infrastructure logs, API activity, and AI workflow data will aid forensic analysis after an incident.
Third-Party Coordination. Many startups depend on cloud providers, other vendors, and third-party SaaS products and services. Incident playbooks should include specific roles and responsibilities with each service provider.
Post-Incident Review. A root cause analysis of the incident and lessons learned should be performed with updates applied to processes and documentation.
Regular Tabletop Exercises. Incident response should be practiced through tabletop simulations and live, technical drills with realistic and evolving AI attack scenarios and automated attack vectors in mind.
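The Real-Time Forensics expectation above depends on logs being structured enough to correlate after the fact. A minimal sketch of structured JSON logging using Python's standard library; the field names are illustrative:

```python
import json
import logging

# Illustrative sketch: emit structured JSON logs so API and application
# events can be shipped to a central store and correlated during an
# investigation. Field names here are illustrative.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        event = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # merge per-event context attached via the `extra` mechanism
        event.update(getattr(record, "event_fields", {}))
        return json.dumps(event)

logger = logging.getLogger("api")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# each API event carries the context an investigator would need later
logger.info("login failed", extra={"event_fields": {
    "src_ip": "203.0.113.7", "user": "alice", "endpoint": "/v1/login"}})
```

Because every line is machine-parseable JSON, a central logging solution can filter and correlate on fields like `src_ip` or `endpoint` rather than grepping free text.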
Penetration Testing and AI
Penetration testing is a fundamental part of validating security as designed and implemented. Pen testing for startups should cover AI workflows, APIs, cloud configuration, and model integrity. AI-assisted pen testing tools are emerging that help security teams discover and assess vulnerabilities that were difficult or impossible to identify through purely manual techniques, and “pen test as code” methodologies can extend that coverage further. Test scenarios should combine automation, adversarial model manipulation, data poisoning, and model evasion with traditional exploitation.
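One of the AI-specific scenarios mentioned, model evasion, can be shown with a toy example. The keyword-based filter below stands in for a real model, and the payloads are hypothetical; a real test would target the startup's actual model APIs:

```python
# Illustrative model-evasion sketch: a toy keyword filter (standing in
# for a real classifier) and a simple homoglyph-substitution evasion.
# Blocklist and payloads are hypothetical.
BLOCKLIST = {"password", "wire transfer"}

def toy_filter(text: str) -> bool:
    """Return True if the message should be blocked."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def homoglyph_evasion(text: str) -> str:
    # swap Latin letters for visually similar Cyrillic ones
    return text.replace("a", "\u0430").replace("o", "\u043e")

payload = "Please confirm the wire transfer password"
assert toy_filter(payload)                       # caught as written
evaded = homoglyph_evasion(payload)
print("evaded filter:", not toy_filter(evaded))  # naive matching misses it
```

A pen test that exercises this class of input against a startup's real content filters or model guardrails shows whether they normalize inputs before classifying, rather than assuming they do.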
M&A Considerations for Startups
Cybersecurity and AI risk are critical considerations in mergers and acquisitions (M&A) involving startups. Buyers should be prepared to review:
- Code and Model Quality. Are the AI models in the acquired company clearly documented, thoroughly tested, and well-governed? Are the training data sources considered compliant and secured properly?
- Dependency Risk. What open source and third-party dependencies does the startup have? Are controls in place to mitigate supply chain risk?
- Incident History. Has the company suffered any breaches or near misses? How were security incidents managed and disclosed?
- Security Maturity. Does the startup have formal policies, procedures, incident response plans, and technology solutions for security monitoring and threat detection?
Careful diligence on cybersecurity and AI risk should be a core part of the overall due diligence and risk assessment before a buyer makes a final acquisition decision.
Cyber Insurance and the Startup Sector
Cyber insurance coverage is a growing way to help startups manage their financial risks from attacks, ransomware payments, legal costs, regulatory fines, and business interruption. Insurers are evaluating new AI-related risk around the security and governance of AI models, secure development and data practices, and incident management capabilities as part of the underwriting process.
Startups should expect to be able to demonstrate:
- Multi-factor authentication (MFA) and access controls.
- Network segmentation and least privilege policies.
- Continuous monitoring, incident detection, and response.
- Patch management and secure development tooling.
- Incident response plans (IRP) that have been documented and tested.
Security programs with well-documented, risk-based controls in place improve the chances of better terms, while startups with immature controls may face higher premiums or coverage exclusions.
Duty of Care Risk Analysis (DoCRA) and Reasonable Security for Startups
“Reasonable security” and meeting a duty of care to stakeholders are often vague concepts. In cybersecurity, reasonable security is generally understood to mean defensive technologies and processes that are proportional to the risk being protected against and the potential impact of an incident, given the resources and constraints of the business.
Duty of Care Risk Analysis (DoCRA) is a practical risk assessment framework that startups can use to evaluate whether they have appropriate security controls in place to protect against and respond to reasonably foreseeable security threats. Rather than simply adhering to a compliance checklist, risk analysis under a DoCRA decision-making framework supports smart, defensible choices that an insurer, auditor, regulator, judge, or jury is more likely to consider reasonable given the state of AI, the business and its technology environment, and the sector, size, and speed of the startup.
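The shape of this kind of analysis can be sketched in a few lines: score each foreseeable risk by likelihood and impact, compare against an acceptance threshold, and surface the risks that need treatment. The scales, threshold, and risk register below are illustrative and not DoCRA's official method, which defines impact criteria for both the organization and the parties it owes a duty of care:

```python
# Illustrative DoCRA-style risk register: likelihood x impact on a 1-5
# scale compared against an acceptance threshold. Values are illustrative,
# not DoCRA's official scoring method.
ACCEPTABLE = 8  # scores above this threshold need treatment

risks = [
    {"name": "API credential stuffing", "likelihood": 4, "impact": 4},
    {"name": "Training-data poisoning", "likelihood": 2, "impact": 5},
    {"name": "Dev-env secret exposure", "likelihood": 3, "impact": 2},
]

def needs_treatment(risk):
    risk["score"] = risk["likelihood"] * risk["impact"]
    return risk["score"] > ACCEPTABLE

# rank risks so treatment budget goes to the highest scores first
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    status = "TREAT" if needs_treatment(r) else "accept"
    print(f'{r["name"]:25} score={r["score"]:2} -> {status}')
```

What makes the result defensible is not the arithmetic but the documented reasoning behind each likelihood, impact, and acceptance decision.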
Use Case Scenarios for DoCRA
Secure API Platform
A startup that provides APIs to third parties for use in mobile applications used DoCRA to decide to invest in specific API authentication, rate limiting, and anomaly detection capabilities. The analysis showed that these controls materially lowered the risk of abuse and account takeover while meeting the platform’s performance requirements.
AI Model Marketplace
An AI model marketplace startup preparing for seed-stage fundraising used DoCRA to justify investment in AI model governance, data lineage controls, and model drift monitoring. The risk analysis let the startup make decisions closely aligned with business needs, market expectations, and the risk of model compromise.
Cloud-Native SaaS Product
A SaaS startup used DoCRA to justify selecting specific encryption and monitoring capabilities for production applications while de-prioritizing the same level of controls in development environments, documenting the trade-off as a justified, defensible choice. The effort resulted in a defensible security posture aligned with the startup’s risk and product development velocity.
Why a Risk-Based Approach Matters for Startups and AI
Startups operate under resource constraints by design, yet face increasingly complex and dynamic threats in the era of AI. Automation accelerates product and service development, but it also accelerates threat innovation, reconnaissance, and attack. A risk-based, duty-of-care approach lets startups invest their security dollars as effectively as possible, protect customer trust and reputation, and align with regulatory expectations and requirements without sacrificing their innovation culture and velocity.
Security programs that can document the reasonable security decisions they make through HALOCK’s DoCRA approach will demonstrate a defensible, risk-aligned approach to security to investors, partners, auditors, and insurers in an increasingly automated and threat-rich AI cybersecurity environment.
To successfully approach managing risk in the age of AI, startups should incorporate reasonable security into their risk strategy.
Establish reasonable security through duty of care.
With HALOCK, organizations can establish a legally defensible security and risk program through Duty of Care Risk Analysis (DoCRA). This balanced approach provides a methodology to achieve reasonable security as the regulations require.
Review Your Security and Risk Posture
Read more AI (Artificial Intelligence) Risk Insights
References and Sources
- FTC guidance on privacy and security. https://www.ftc.gov/business-guidance/privacy-security
- California Consumer Privacy Act (CCPA). https://oag.ca.gov/privacy/ccpa
- Children’s Online Privacy Protection Act (COPPA). https://www.ftc.gov/business-guidance/resources/childrens-online-privacy-protection-rule
- HIPAA standards for health data. https://www.hhs.gov/hipaa
- SEC cybersecurity risk governance guidance. https://www.sec.gov/cybersecurity
- Proposed rules under the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA). https://www.federalregister.gov/documents/2024/04/04/2024-06526/
- AI and cybersecurity CISA. https://www.cisa.gov/topics/artificial-intelligence
- NIST cybersecurity framework. https://www.nist.gov/cyberframework
- NIST AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework
- FBI Internet Crime Report (2024). https://www.ic3.gov/Media/PDF/AnnualReport/2024_IC3Report.pdf
