AI Company Failures in 2025: A Practical Guide to Risk Assessment and Strategic Decision-Making Between Focus and Diversification

By Prof. Anis Koubaa

Introduction

As one of the strongest advocates for the potential of Large Language Models and AI in transforming future business operations, and as a leader of several companies in this field, I write this article to share my experience and current trends in the failure factors of Generative AI business solutions. My goal is to direct the attention of current entrepreneurs to real risks and market lessons learned, helping them avoid costly mistakes and build on more solid foundations.

In 2025, the Artificial Intelligence (AI) sector is experiencing tremendous growth, with global spending on Generative AI (GenAI) technologies exceeding $644 billion, a 76.4% increase over the previous year (S&P Global Market Intelligence, 2025). Yet this growth carries high risk: the share of companies abandoning most of their AI initiatives rose from 17% to 42% in just one year (CIO Dive, 2025).

Additionally, 46% of projects stop between the proof-of-concept (PoC) phase and scaling, while Gartner expects that at least 30% of GenAI projects will be abandoned after PoC by the end of 2025 due to poor data quality, unmanaged risks, and rising costs (Gartner, 2024).

If you're considering launching an AI startup or investing in one, this blog provides a practical guide based on recent data to understand failure causes, assess risks, and choose between focusing on a single product or diversifying. We'll cover all of this in detail, with real examples and actionable advice, to help you avoid common mistakes in this dynamic field.

Why Do AI Projects and Companies Fail?

In 2025, failure in AI has become the rule rather than the exception for many initiatives (CIO Dive, 2025; S&P Global Market Intelligence, 2025). According to specialized reports, this stems from a combination of technical, financial, and regulatory challenges (Gartner, 2024). Let's examine the main causes in detail:

1. Lack of Product-Market Fit (PMF)

The biggest obstacle arises when AI solutions are built without a deep understanding of real customer needs: millions are spent on advanced models that don't solve practical problems, leading to high withdrawal rates after PoC. The rise in stalled projects to 46% shows that many start with attractive demos but fail to convert into tangible business value (S&P Global Market Intelligence, 2025).

2. High Costs vs. Unclear Returns

Implementing a GenAI project can cost between $5 million and $20 million, including data, computing, and development (S&P Global Market Intelligence, 2025). With unclear return on investment (ROI), cancellation becomes the logical choice, especially under intense competition from giants like OpenAI (Human Capital Management, 2024). Many startups raise hundreds of millions but fail to convert them into sustainable revenue (Gartner, 2024).
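To make the ROI tension concrete, here is a rough back-of-the-envelope sketch in Python. Only the $5-20 million implementation range comes from the figure above; the revenue and operating numbers are purely illustrative assumptions.

```python
# Back-of-the-envelope payback sketch for a GenAI project.
# Only the $5M-$20M build range comes from the cited S&P estimate;
# every other figure here is an illustrative assumption.

def payback_months(implementation_cost: float,
                   monthly_recurring_revenue: float,
                   monthly_run_cost: float) -> float:
    """Months until cumulative net revenue covers the build cost."""
    net_monthly = monthly_recurring_revenue - monthly_run_cost
    if net_monthly <= 0:
        return float("inf")  # the project never pays back
    return implementation_cost / net_monthly

# Example: a $5M build, $250k MRR, $100k/month inference and staffing.
months = payback_months(5_000_000, 250_000, 100_000)
print(f"Payback in {months:.0f} months (~{months / 12:.1f} years)")
# -> Payback in 33 months (~2.8 years), before discounting or churn.
```

Even under these generous assumptions, payback takes nearly three years, longer than many startups' runway, which explains why unclear ROI so often triggers cancellation.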

Regional Funding Crisis: In the Middle East region, startup funding decreased by 76% in March 2025 to only $127.5 million, compared to $533 million in February (GEM, 2025). This sharp decline particularly affects AI companies that need large capital for computing and data, leading to the closure of technically promising projects that cannot sustain financially.

Additionally, "fear of failure" among entrepreneurs in some Middle Eastern countries rose to 49% in 2024 (GEM, 2025), hindering the launch of new AI projects despite government support. Globally, experts expect 99% of AI startups to fail by 2026 due to lack of revenue and excessive reliance on hype (Stanford AI Index, 2025).

3. Data Quality and Readiness

Data issues such as inadequate cleaning, bias, or poor governance create production bottlenecks and lead to model "hallucinations" (Gartner, 2024). Gartner identifies this as the main reason 30% of projects are abandoned: data preparation consumes up to 80% of project time (Datanami, 2024), and skipping it makes AI unreliable. In 2025, data became the "black gold" of AI, yet most companies still lack an effective strategy for managing it (S&P Global Market Intelligence, 2025).
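As an illustration of what basic data-readiness work involves, here is a minimal screening sketch. The field names, sample records, and checks are hypothetical; a real pipeline would add bias audits and governance checks on top.

```python
# A minimal data-readiness screen -- the kind of check that belongs in the
# preparation phase that consumes up to 80% of project time. Field names
# and sample records are hypothetical placeholders, not a standard.

def readiness_report(records: list[dict], required: list[str]) -> dict:
    total = len(records)
    # Count records with any required field absent or empty.
    incomplete = sum(
        1 for r in records
        if any(r.get(field) in (None, "") for field in required)
    )
    # Count exact duplicates on the required fields.
    seen, dupes = set(), 0
    for r in records:
        key = tuple(r.get(f) for f in required)
        if key in seen:
            dupes += 1
        seen.add(key)
    return {
        "rows": total,
        "incomplete_pct": 100 * incomplete / total if total else 0.0,
        "duplicate_pct": 100 * dupes / total if total else 0.0,
    }

sample = [
    {"id": 1, "text": "valid record"},
    {"id": 2, "text": ""},              # incomplete
    {"id": 1, "text": "valid record"},  # duplicate
]
print(readiness_report(sample, required=["id", "text"]))
# -> {'rows': 3, 'incomplete_pct': 33.3..., 'duplicate_pct': 33.3...}
```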

4. Security Risks and Inadequate Governance

Weak governance and inadequate security pose an existential threat to AI projects, with studies indicating that 67% of companies implementing AI lack a comprehensive governance framework (IBM Security, 2024). This deficiency leads to costly data breaches, biased decisions, and permanent loss of customer trust.

Examples of Critical Security Risks:

The fundamental problem lies in lack of transparency and accountability. Most companies use "black box" models without understanding how decisions are made, making it impossible to track errors or bias (MIT Technology Review, 2024). Additionally, the absence of access controls and review mechanisms leads to unauthorized use of models in sensitive environments.

Technically, companies fail to apply the Least Privilege principle to agents, granting them broad permissions they don't actually need. Compounding this, many teams skip recognized governance frameworks such as the NIST AI Risk Management Framework, leaving risks unmeasured and unmanaged (NIST, 2023).
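To make the Least Privilege point concrete, here is a minimal deny-by-default sketch. The roles and tool names are hypothetical; a production system would add scoped credentials and audit logging.

```python
# Deny-by-default tool authorization for AI agents, sketching the Least
# Privilege principle described above. Roles and tool names are hypothetical.

AGENT_TOOL_ALLOWLIST: dict[str, set[str]] = {
    "support_agent": {"search_kb", "draft_reply"},
    "billing_agent": {"read_invoice"},  # read-only: no write tools granted
}

def authorize(agent_role: str, tool: str) -> bool:
    """Grant only tools explicitly listed for the role; deny everything else."""
    return tool in AGENT_TOOL_ALLOWLIST.get(agent_role, set())

assert authorize("support_agent", "draft_reply")
assert not authorize("support_agent", "delete_record")  # never granted
assert not authorize("unknown_agent", "search_kb")      # unknown role -> deny
```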

5. Technical Limitations of Agents (Agentic AI)

University studies in 2025 show that LLM agents fail at about 70% of simple office tasks, such as coordinating steps or handling interfaces, because they lack the skills for complex work (The Register, 2025). This makes relying on them in production environments risky, especially with success rates on multi-step tasks dropping to around 35% (Flaming Ltd., 2025). Accordingly, experts expect 40% of agent projects to fail by 2027 due to similar performance and integration issues (Stanford AI Index, 2025).
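One intuitive way to see why multi-step performance collapses is error compounding: if each step succeeds 90% of the time, a ten-step workflow succeeds only about 35% of the time, close to the figure quoted above. The numbers below are an illustrative model, not the methodology of the cited studies.

```python
# Per-step errors compound across a workflow: success_n = p ** n.
# Illustrative figures only -- not how the cited studies measured performance.

per_step_success = 0.90  # 90% reliability on each individual step
steps = 10               # a modest multi-step office workflow

workflow_success = per_step_success ** steps
print(f"{workflow_success:.0%}")  # -> 35%
```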

6. Legal and Regulatory Risks

Legal cases are evolving rapidly. In the Getty Images lawsuit against Stability AI in the United Kingdom, Getty dropped its key copyright claims, but the dispute still adds legal uncertainty around training models on protected data (Pinsent Masons, 2025; TechCrunch, 2025). In 2025, compliance with laws like the EU AI Act became crucial, and failure can lead to fines or closure (Gartner, 2024).

Sectoral and Geographic Regulatory Challenges:

Privacy and Security Issues Despite Technical Readiness: Even when the technology is mature, companies fail because of privacy violations or security risks. Clearview AI, for example, faced lawsuits and bans in multiple countries despite effective facial recognition technology, because it collected data without consent (TechCrunch, 2025). Similarly, data breaches at AI companies cause billions of dollars in losses and erode customer trust, even when the core product is excellent (S&P Global Market Intelligence, 2025).

7. Excessive Technical Focus vs. Neglecting Business and Marketing

One of the most common patterns in AI startup failures is getting absorbed in developing complex models and algorithms without paying adequate attention to the business and marketing side (S&P Global Market Intelligence, 2025). Many founders, especially those with technical backgrounds, believe a superior technical product will sell itself, but the reality is quite different.

From my experience in this field, successful companies allocate at least 40-50% of their time and budget to business and marketing from the very beginning, not only after the technical model is built (GEM, 2025). Success in AI requires a precise balance between technical excellence and business acumen.

Quick Examples of Failures:

- Humane's AI Pin was discontinued, with HP buying the startup's assets for $116 million (TechCrunch, 2025; Fortune, 2025).
- Forward Health shut down its CarePod clinics in a failed attempt to revolutionize the doctor's office (Business Insider, 2024; ICT Health, 2024).
- Replit's AI coding agent wiped out a company's database in what was described as a "catastrophic failure" (Fortune, 2025; PCMag, 2025).
- Inflection AI lost most of its team when Microsoft paid $650 million in a licensing deal while hiring its staff (Reuters, 2024).

How to Assess Risks Quickly (NIST AI RMF Model)

For risk assessment, rely on the NIST AI Risk Management Framework (AI RMF), which organizes the work into four functions: Govern, Map, Measure, Manage. The framework, first published by NIST in January 2023 and extended with a Generative AI Profile in 2024, helps build trust into AI from the beginning (NIST, 2023).

Practical Tips: Start with a small pilot, measure actual adoption, and create a transition gate from PoC to production with strict acceptance criteria. This reduces failure and improves trust (NIST, 2023). Allocate 50-70% of effort to data readiness and governance, not the model itself (Datanami, 2024).
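As one possible shape for that PoC-to-production gate, here is a minimal sketch in which scaling proceeds only if every acceptance criterion passes. The metrics and thresholds are illustrative assumptions, not NIST requirements.

```python
# A sketch of the PoC-to-production transition gate suggested above.
# Metrics and thresholds are illustrative assumptions, not NIST requirements.
ACCEPTANCE_CRITERIA = {
    "weekly_active_users": lambda v: v >= 50,    # measured adoption, not demos
    "task_success_rate":   lambda v: v >= 0.85,
    "data_readiness_pct":  lambda v: v >= 0.90,  # governance before the model
    "cost_per_task_usd":   lambda v: v <= 0.50,
}

def gate_decision(pilot_metrics: dict[str, float]) -> bool:
    """Scale only if every criterion passes; a missing metric fails the gate
    (NaN compares False against any threshold)."""
    return all(
        check(pilot_metrics.get(name, float("nan")))
        for name, check in ACCEPTANCE_CRITERIA.items()
    )

pilot = {"weekly_active_users": 72, "task_success_rate": 0.88,
         "data_readiness_pct": 0.95, "cost_per_task_usd": 0.40}
print("Scale" if gate_decision(pilot) else "Iterate or stop")  # -> Scale
```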

Focus or Diversification?

The choice between focusing on a single product or diversifying depends on the company's stage, but in AI, where technology changes rapidly, it should be carefully considered.

Focus on Single Product (Early Stages)

Focusing allows quick PMF validation, a strong identity, and efficient use of resources (S&P Global Market Intelligence, 2025). Companies like Slack, for example, focused on one solution before expanding, reducing costs and accelerating innovation. The risk: if the product fails, the entire company is affected, especially in AI, where agents can fail at up to 70% of tasks (The Register, 2025).

Diversification (After Initial Success)

Diversifying distributes risk and opens new markets, especially when products are interconnected (for example, using the same infrastructure for multiple models) (S&P Global Market Intelligence, 2025). Amazon succeeded with this approach, expanding from books into cloud computing and AI, but the risk lies in scattered effort and rising costs, which can delay market entry (Gartner, 2024).

Practical Rule: From Seed/Pre-Seed through early Series A, focus on one product until you achieve PMF and recurring revenue (S&P Global Market Intelligence, 2025). After that, diversify in an interconnected way at an experimental pace (Pilot → Gate → Scale), drawing on failure lessons such as Forward, which collapsed partly through unplanned diversification (ICT Health, 2024).
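The practical rule can also be written as an explicit decision sketch. The stage logic follows the text above; the recurring-revenue threshold is a hypothetical input, not a published benchmark.

```python
# The focus-vs-diversify rule above as a decision sketch. The MRR threshold
# is a hypothetical input, not a published benchmark.

def portfolio_strategy(has_pmf: bool, monthly_recurring_revenue: float,
                       mrr_threshold: float = 100_000) -> str:
    if not (has_pmf and monthly_recurring_revenue >= mrr_threshold):
        return "Focus: one product until PMF and recurring revenue"
    # After PMF: expand only through staged experiments.
    return "Diversify: Pilot -> Gate -> Scale for each adjacent product"

print(portfolio_strategy(has_pmf=False, monthly_recurring_revenue=20_000))
print(portfolio_strategy(has_pmf=True, monthly_recurring_revenue=250_000))
```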

Conclusion

Through my experience leading AI projects and my deep belief in the technology's enormous potential, I can affirm that success in AI in 2025 comes not from "model magic" but from professional data management, governance, and disciplined experimentation (NIST, 2023; S&P Global Market Intelligence, 2025). I share these insights and trends so that entrepreneurs can make informed decisions and avoid the common pitfalls we've seen in the market.

Use realistic market numbers, test measurable value before scaling, and decide on focus or diversification based on your stage (Gartner, 2024). Remember that the AI journey requires patience and a long-term strategy, and learning from others' failures is far less costly than learning from our own.

If you apply these tips and lessons learned, you can transform risks into real opportunities. As a leader in this field, I invite you to benefit from these shared experiences and build more sustainable and successful AI projects. The future is bright for artificial intelligence, but success requires careful planning and thoughtful execution.

References

  1. Business Insider. (2024, November 13). Inside Forward's failed attempt to revolutionize the doctor's office. Business Insider. https://www.businessinsider.com/healthcare-startup-forward-shutdown-carepod-adrian-aoun-2024-11
  2. Clearview AI Legal Challenges. (2024). Privacy violations and regulatory bans across multiple jurisdictions. Various sources.
  3. CIO Dive. (2025, March 14). AI project failure rates are on the rise: report. CIO Dive. https://www.ciodive.com/news/AI-project-fail-data-SPGlobal/742590/
  4. Cybersecurity Ventures. (2024, September). LLM data breach report 2024: Security vulnerabilities in enterprise AI implementations. Cybersecurity Ventures. https://cybersecurityventures.com/llm-security-report/
  5. Datanami. (2024, August 5). Gartner warns 30% of GenAI initiatives will be abandoned by 2025. Datanami. https://www.datanami.com/2024/08/05/gartner-warns-30-of-genai-initiatives-may-be-abandoned-by-2025/
  6. Flaming Ltd. (2025, July 21). AI agents failing (40% cancellations predicted). Flaming Ltd. https://flamingltd.com/ai-agents-failing-40-cancellations-predicted/
  7. Fortune. (2025, February 19). HP acquiring parts of AI Pin startup Humane for $116 million. Fortune. https://fortune.com/2025/02/19/hp-humane-deal-ai-pin-shutting-down/
  8. Fortune. (2025, July 23). AI-powered coding tool wiped out a software company's database in 'catastrophic failure'. Fortune. https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/
  9. Gartner. (2024, July 29). Gartner predicts 30% of generative AI projects will be abandoned after proof of concept by end of 2025. Gartner Newsroom. https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025
  10. Human Capital Management. (2024, August 12). At least 30% of GenAI projects to be abandoned by end of 2025: Gartner. Human Capital Management. https://www.hcamag.com/us/specialization/hr-technology/at-least-30-of-genai-projects-to-be-abandoned-by-end-of-2025-gartner/501028
  11. GEM (Global Entrepreneurship Monitor). (2025). Middle East startup funding trends and entrepreneurial fear patterns. GEM Regional Report 2024/2025.
  12. IBM Security. (2024, August). AI governance survey 2024: Security risks and governance gaps in enterprise AI deployments. IBM Security Intelligence. https://www.ibm.com/security/data-breach
  13. ICT Health. (2024, December 30). Does Forward Health's failure mark the winter of telehealth? ICT Health. https://icthealth.org/news/does-forward-healths-failure-mark-the-winter-of-telehealth
  14. MIT Technology Review. (2024, October). The black box problem: Why AI explainability matters for enterprise adoption. MIT Technology Review. https://www.technologyreview.com/2024/10/15/ai-explainability-enterprise/
  15. Middle East Healthcare AI Study. (2025, April). AI adoption challenges in Middle East healthcare systems: Skills gaps and patient acceptance patterns. Regional Healthcare Technology Review.
  16. NIST. (2023, January). Artificial intelligence risk management framework (AI RMF 1.0). National Institute of Standards and Technology. https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
  17. PCMag. (2025, July 22). AI agent goes rogue, deletes company's entire database. PCMag. https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database
  18. Pinsent Masons. (2025, July 15). Getty Images v Stability AI: why the remaining copyright claims are of significance. Pinsent Masons. https://www.pinsentmasons.com/out-law/analysis/getty-images-v-stability-ai-copyright-claims-significance
  19. Reuters. (2024, March 21). Microsoft pays Inflection $650 mln in licensing deal while hiring its staff. Reuters. https://www.reuters.com/technology/microsoft-agreed-pay-inflection-650-mln-while-hiring-its-staff-information-2024-03-21/
  20. S&P Global Market Intelligence. (2025, May 30). AI experiences rapid adoption, but with mixed outcomes – Highlights from VOTE AI Machine Learning. S&P Global Market Intelligence. https://www.spglobal.com/market-intelligence/en/news-insights/research/ai-experiences-rapid-adoption-but-with-mixed-outcomes-highlights-from-vote-ai-machine-learning
  21. Stanford AI Index. (2025). Artificial intelligence index report 2025: AI startup failure predictions and agentic systems performance analysis. Stanford University Human-Centered AI Institute. https://aiindex.stanford.edu/report/
  22. TechCrunch. (2025, February 18). Humane's AI Pin is dead, as HP buys startup's assets for $116M. TechCrunch. https://techcrunch.com/2025/02/18/humanes-ai-pin-is-dead-as-hp-buys-startups-assets-for-116m/
  23. TechCrunch. (2025, June 25). Getty drops key copyright claims against Stability AI, but UK lawsuit continues. TechCrunch. https://techcrunch.com/2025/06/25/getty-drops-key-copyright-claims-against-stability-ai-but-uk-lawsuit-continues/
  24. The Register. (2025, June 29). AI agents wrong ~70% of time: Carnegie Mellon study. The Register. https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/
  25. The Wall Street Journal. (2024, June 6). FTC opens antitrust probe of Microsoft AI deal. The Wall Street Journal. https://www.wsj.com/tech/ai/ftc-opens-antitrust-probe-of-microsoft-ai-deal-29b5169a