
Evalyze Places Second in SMU’s Global AI Due Diligence Challenge

Evalyze’s AI due diligence engine was tested against 220+ global judges at SMU’s Lee Kuan Yew competition, and it came home with second place.

In late 2025, Evalyze flew to Singapore to put our AI due diligence engine to a very direct test: could it evaluate startups as well as a global panel of experienced investors?

The stage was the 12th Lee Kuan Yew Global Business Plan Competition (LKYGBPC) at Singapore Management University (SMU). We’re proud to announce that in the brand-new DueAI™ Challenge track, Evalyze finished second overall and ranked among the top three AI systems in the competition.

What is the Lee Kuan Yew Global Business Plan Competition?

The LKYGBPC is SMU’s flagship global deep-tech startup competition, organised by the Institute of Innovation and Entrepreneurship (IIE). Held every two years, it brings together founders tackling problems in urban solutions and sustainability from universities around the world.

Some quick numbers from the 12th edition:

  • 1,500+ startup applications
  • 1,200 universities represented
  • 91 countries
  • 60 finalists invited to Singapore for Grand Finals Week
  • S$2.5M+ in prizes and support

In other words: a very dense concentration of ambitious founders, investors, and corporate partners in one place.

A New Experiment: The DueAI™ Challenge

For the 12th edition, SMU introduced something new alongside the main startup competition: the DueAI™ Challenge.

The idea was simple but ambitious:
“Let multiple independent AI systems evaluate the same startup applications as the human judges and then compare the results.”

Key aspects of the DueAI setup:

  • AI teams were asked to screen the pitch decks submitted to LKYGBPC and rank or select the most promising startups.
  • The same startups were also evaluated by 220+ human judges from 46 countries, including investors, operators, and domain experts.
  • After the judging phase, organisers compared how closely each AI’s selections overlapped with the human-chosen finalists.
  • The AIs with the highest overlap (and the most useful insights) were recognised as winners.

The goal wasn’t to “replace” human judgment, but to see how far AI has come in augmenting startup discovery and due diligence at scale.
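
To make that comparison concrete, here is a minimal sketch of one way a selection-overlap metric can be computed. The organisers’ exact formula wasn’t published, so treat this as an illustration; the startup names are hypothetical.

```python
def selection_overlap(ai_picks: set[str], judge_finalists: set[str]) -> float:
    """Fraction of the judges' finalists that also appear in the AI's shortlist."""
    if not judge_finalists:
        return 0.0
    return len(ai_picks & judge_finalists) / len(judge_finalists)

# Hypothetical example: five judge-selected finalists vs. a five-team AI shortlist
judges = {"AquaGrid", "NeuroFab", "SolarLoop", "BioTrace", "UrbanFlow"}
ai = {"AquaGrid", "NeuroFab", "GreenMesh", "BioTrace", "DataHive"}

print(f"Overlap: {selection_overlap(ai, judges):.0%}")  # Overlap: 60%
```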

Evalyze’s Challenge: Only the Text, None of the Slides

Evalyze joined the DueAI track with one constraint that mirrors a common real-world limitation:
We were given access only to the text extracted from pitch decks — no slide designs, no charts, no original PDF files (for privacy reasons).

In practice, that meant:

  • We had to work with raw textual content: problem statements, solution descriptions, traction, market sizing, and so on.
  • Our system needed to reconstruct a structured view of each startup from this partial information.
  • From over 1,500 applicants, only 60 teams were ultimately selected as official finalists by the competition judges.

Our internal objective was straightforward: see how closely Evalyze’s AI would agree with the judges without ever sitting in the room with them.

How We Evaluated Startups

We used the same core evaluation engine that powers our product:

  • A multi-dimensional scoring framework covering problem clarity, solution differentiation, business model, traction, team, and market.
  • A consistent final assessment score for each startup, used to rank and cluster teams (a minimal sketch of this aggregation follows at the end of this section).
  • A visual report for each of the 60 human-selected finalists, in a format very close to our live dashboard, summarising:
    • Key strengths and weaknesses
    • Red flags or open questions
    • An overall investor-readiness / potential score

These reports were shared with the judges during Grand Finals Week as an additional lens — not to override their judgment, but to help structure and stress-test it.
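
To show the shape of that scoring step, here is a minimal sketch of a weighted multi-dimensional score. The six dimension names come from this post; the weights, the 0–10 scale, and the roll-up logic are assumptions for illustration, not Evalyze’s actual model.

```python
# Hypothetical weights; Evalyze's real calibration is not public.
WEIGHTS = {
    "problem_clarity": 0.15,
    "solution_differentiation": 0.20,
    "business_model": 0.15,
    "traction": 0.20,
    "team": 0.15,
    "market": 0.15,
}

def final_score(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores (0-10) into one weighted assessment score."""
    return sum(WEIGHTS[dim] * dimension_scores.get(dim, 0.0) for dim in WEIGHTS)

# Hypothetical startup scored on the six dimensions
startup = {
    "problem_clarity": 8.0,
    "solution_differentiation": 6.5,
    "business_model": 7.0,
    "traction": 5.5,
    "team": 9.0,
    "market": 7.5,
}
print(f"Final assessment score: {final_score(startup):.2f} / 10")
```

In the real engine, the per-dimension scores would come from the AI’s reading of the pitch-deck text; the sketch only covers the final weighted roll-up used for ranking and clustering.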

The Result: Second Place, Top-Three AI

When the organisers compared all the AI systems against the human outcomes, Evalyze finished:

  • 2nd place overall in the DueAI Challenge
  • Top 3 among all participating AI teams

On the quantitative side, our selections achieved roughly 44% overlap with the human judges’ picks for the finalists (measured against the 60 finalists, that works out to roughly 26 shared picks).

That overlap number matters because:

  • It shows that our system can approximate expert judgment based only on pitch-deck text.
  • It still leaves room for healthy disagreement — which is where AI can be especially useful, surfacing things humans might miss.

The Outliers: Where AI Saw What Humans Didn’t

One of the most interesting parts of the analysis was the outlier group:

  • There were four startups that most AI systems consistently liked but that the human judges initially passed on.
  • Evalyze independently selected two of these four outliers.
  • Later in the process, one or two of those same outlier startups were upgraded into the final Top 8 “Grand Finalists” (a minimal sketch of this kind of analysis follows below).
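
Here is a minimal sketch of how such an outlier group can be surfaced, assuming each AI system produces a shortlist; all names and the voting threshold are hypothetical.

```python
from collections import Counter

# Hypothetical shortlists from several independent AI systems
ai_shortlists = [
    {"AquaGrid", "NeuroFab", "GreenMesh", "BioTrace"},
    {"AquaGrid", "GreenMesh", "DataHive", "BioTrace"},
    {"GreenMesh", "NeuroFab", "BioTrace", "UrbanFlow"},
]
judge_finalists = {"AquaGrid", "NeuroFab", "UrbanFlow"}

# Count how many AI systems shortlisted each startup
votes = Counter(team for shortlist in ai_shortlists for team in shortlist)

# Outliers: teams a majority of AIs liked but the judges initially passed on
majority = len(ai_shortlists) // 2 + 1  # at least 2 of the 3 systems here
outliers = {t for t, n in votes.items() if n >= majority} - judge_finalists
print(outliers)  # {'GreenMesh', 'BioTrace'} (set order may vary)
```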

That pattern is exactly what we care about at Evalyze:

  • AI shouldn’t just rubber-stamp human decisions.
  • It should flag non-obvious teams with strong fundamentals that might be overlooked because of bias, fatigue, or simple time constraints.

In short, the competition provided real-world evidence that AI-driven due diligence can help uncover hidden potential, not just score the obvious winners.

Five Days in Singapore: From Networking to Grand Finals

Beyond the algorithms and metrics, the event itself was a dense, five-day sprint:

  • Day 1, Opening & Networking — Founders, VCs, corporates, and ecosystem partners gathered at SMU’s city campus.
  • Days 2–3, Pitch Competitions — Finalist teams presented across categories; judges started using Evalyze reports.
  • Days 4–5, Grand Finals & Awards — Top teams pitched in front of the full jury, followed by the closing ceremony where DueAI winners (including Evalyze) were announced.

Throughout the week, we had the chance to:

  • Discuss our methodology with investors and judges.
  • Gather feedback on how our six-dimensional analysis and final scores matched their own intuition.
  • Explore potential collaborations around AI-assisted screening for accelerators, funds, and corporate innovation teams.

The DueAI track itself was initiated and led by Dr. Zitian, who also shared a testimonial on Evalyze’s performance.

What This Means for Evalyze and for AI in Venture

This wasn’t just a trophy hunt for us. It was a live experiment in what our product is supposed to do in the real world.

A few key takeaways:

  1. AI can be calibrated against real investor behaviour.
    Running Evalyze side-by-side with 220+ human judges gave us a rare benchmark.

  2. Disagreement is a feature, not a bug.
    The outlier cases show where AI can surface non-obvious winners — that’s where investors get differentiated deal-flow.

  3. Text-only still goes a long way.
    Even without slide design or visual cues, structured pitch-deck text + a robust framework can produce results close to a live investment committee.

  4. Human-AI collaboration is the real game.
    Judges reported that having structured AI reports helped them focus questions, uncover blind spots, and compare teams more consistently.

Learn More

We’re grateful to SMU, the organisers, and all the judges for running such a bold experiment and for giving Evalyze the chance to test our technology on a truly global stage.

Try Evalyze’s AI Due Diligence Engine Now →