Don’t Build AI Solutions Like You Would Traditional Software

Dhruv Alexander
3 min read · Feb 26, 2025

Applying traditional product management (PM) frameworks to AI solutions makes sense for system features like data storage, UI design, and reporting, but not for the model itself. Unlike traditional software, the model isn’t executing a predefined task — it’s generating insights. This introduces fundamental uncertainties that teams must adapt to.

To illustrate this, consider a bank asking its internal data science team to build a fraud detection solution for mortgage applications. While developing components like data storage, UI design, and reporting follows a structured approach, building the fraud detection model comes with a different set of challenges. Here’s how it differs.

1. No One Knows What Insights Will Look Like Ahead of Time

Unlike traditional software, an AI fraud model doesn’t execute a predefined task — it discovers patterns and surfaces insights.

  • You don’t know in advance what the model will highlight as fraud risks.
  • The model may reveal unexpected correlations that contradict assumptions.
  • Insights are probabilistic, requiring human interpretation rather than binary decisions.

Because the output is unknown until the model runs, AI fraud detection requires iterative validation — not just a one-time implementation.
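
To make that concrete, here is a minimal sketch (synthetic data, an off-the-shelf scikit-learn classifier, and invented thresholds) of how probabilistic fraud scores get mapped to actions rather than a single yes/no answer:

```python
# Minimal sketch: an AI fraud model emits probabilities, not pass/fail answers.
# The data, model choice, and thresholds below are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))             # stand-in application features
y = (rng.random(1000) < 0.05).astype(int)  # ~5% of applications labeled fraud

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The model returns a fraud *probability* for each application...
scores = model.predict_proba(X)[:, 1]

# ...which the business then maps to actions, often with a human-review band.
for score in scores[:5]:
    if score >= 0.8:
        action = "block and escalate"
    elif score >= 0.3:
        action = "route to fraud analyst"
    else:
        action = "approve"
    print(f"fraud score={score:.2f} -> {action}")
```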

2. No One Knows the Right Approach Until We Test

The final solution may not match the original plan. AI fraud detection isn’t about choosing a single model — it’s about experimenting with different approaches:

  • A single, simple model might be enough, or the solution might need separate models for different loan types.
  • A hybrid approach may be better, combining machine learning with business rules.
  • The number of models required is unknown upfront.

Unlike software, where requirements are defined before development starts, AI requires flexibility to adapt as new insights emerge.
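
As a rough illustration of the hybrid option mentioned above, the sketch below combines a model score with hand-written business rules; the field names, thresholds, and rules are all invented for the example:

```python
# Minimal sketch of a hybrid approach: a learned score combined with
# hand-written business rules. Field names and thresholds are invented.
def hybrid_fraud_decision(application: dict, model_score: float) -> str:
    # Hard business rules fire regardless of what the model says.
    if application["income"] <= 0:
        return "reject: implausible income"
    if application["applicant_on_watchlist"]:
        return "escalate: watchlist hit"

    # Otherwise defer to the statistical model.
    if model_score >= 0.7:
        return "escalate: high model score"
    return "proceed"


print(hybrid_fraud_decision(
    {"income": 85_000, "applicant_on_watchlist": False},
    model_score=0.82,
))  # -> escalate: high model score
```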

3. AI Model Performance Isn’t Binary — It’s a Spectrum

In traditional software, a feature either works or it doesn’t. AI models, however, operate on probabilities.

This introduces a tradeoff between false positives and false negatives:

  • A strict model might block legitimate transactions, frustrating customers.
  • A lenient model might let fraud slip through, increasing financial losses.

Since AI success is measured by metrics such as accuracy, precision, and recall, a model can be technically working yet still fall short of business needs. Unlike software, where success is defined upfront, AI requires continuous fine-tuning to strike the right balance.
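
The toy experiment below (synthetic data, scikit-learn) shows the same trained model producing very different precision/recall tradeoffs depending solely on where the decision threshold is set:

```python
# Minimal sketch: one model, several thresholds, several tradeoffs.
# Data is synthetic and imbalanced (~3% "fraud"), for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

for threshold in (0.2, 0.5, 0.8):
    preds = (probs >= threshold).astype(int)
    print(
        f"threshold={threshold:.1f}  "
        f"precision={precision_score(y_te, preds, zero_division=0):.2f}  "
        f"recall={recall_score(y_te, preds):.2f}"
    )

# Lower thresholds catch more fraud (higher recall) but flag more legitimate
# applications (lower precision); higher thresholds do the opposite.
```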

4. There’s an Interpretability vs. Accuracy Tradeoff

Unlike rule-based systems, AI models don’t always provide clear reasons for their outputs. A model may flag a loan as high-risk based on thousands of subtle correlations, making its reasoning difficult to articulate.

This creates a tradeoff between accuracy and ease of interpretation:

  • A highly complex model may be more accurate but harder to interpret.
  • A simpler model may be easier to explain but sacrifice predictive power.

Fraud detection requires balancing both — ensuring the model is powerful enough to be useful, yet interpretable enough for auditors, regulators, and fraud analysts to trust its outputs.
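
One quick way to see this tradeoff is to train a simple, inspectable model and a more complex one on the same data and compare them. The sketch below does that on synthetic data, so the exact numbers are illustrative only:

```python
# Minimal sketch of the interpretability vs. accuracy tradeoff on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
complex_ = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

print("logistic AUC:", roc_auc_score(y_te, simple.predict_proba(X_te)[:, 1]))
print("boosting AUC:", roc_auc_score(y_te, complex_.predict_proba(X_te)[:, 1]))

# The logistic model's coefficients read like a scorecard an auditor can follow;
# the boosted model usually scores higher but needs extra tooling to explain
# individual decisions.
print("logistic coefficients:", np.round(simple.coef_[0], 2))
```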

5. Customers Influence Features, Not What Ultimately Goes Into the Model

In traditional software, customers define feature requirements — their needs shape UI elements, workflows, and reporting capabilities.

In AI, however, customers don’t dictate what makes it into the model. While they can provide feedback on what seems like a fraud indicator, only rigorous statistical validation determines which features actually improve accuracy. A model must be built on what’s objectively right, not just what feels right to users.
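
In practice, that validation often looks like the sketch below: a candidate feature suggested by analysts is kept only if it improves cross-validated performance. The data is synthetic and the candidate is deliberately pure noise, so it should show no real lift:

```python
# Minimal sketch: keep a user-suggested feature only if it measurably helps.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X_base, y = make_classification(n_samples=4000, n_features=8, weights=[0.95], random_state=2)

# Candidate feature proposed by fraud analysts; here it is random noise,
# so the comparison below should show no meaningful improvement.
candidate = np.random.default_rng(2).normal(size=(len(y), 1))
X_with = np.hstack([X_base, candidate])

model = LogisticRegression(max_iter=1000)
auc_without = cross_val_score(model, X_base, y, cv=5, scoring="roc_auc").mean()
auc_with = cross_val_score(model, X_with, y, cv=5, scoring="roc_auc").mean()

print(f"AUC without candidate: {auc_without:.3f}")
print(f"AUC with candidate:    {auc_with:.3f}")
# The feature stays only if the gain is real and stable, not just what feels right.
```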

6. Software Updates Follow a Schedule; AI Model Updates Don’t

A fraud model may need re-training in six months or a full rebuild in two years — there’s no fixed timeline.

  • A six-month update may only require adding fresh fraud data.
  • A two-year update might need new features, segmentation, or an entirely different model.
  • All of this is unknown upfront — fraud patterns evolve unpredictably.

Unlike traditional software, AI updates aren’t just fixes — they’re adaptations to an evolving threat landscape.
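
One common way to decide when that adaptation is due is to watch the score distribution for drift instead of following a calendar. The sketch below uses a population stability index (PSI) with synthetic score distributions and a rule-of-thumb threshold:

```python
# Minimal sketch: retrain when the score distribution drifts, not on a schedule.
# Score distributions and the PSI threshold are illustrative.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; a larger PSI means more drift."""
    edges = np.linspace(0.0, 1.0, bins + 1)  # fraud scores live in [0, 1]
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(3)
scores_at_launch = rng.beta(2, 8, size=10_000)   # scores when the model shipped
scores_this_month = rng.beta(3, 6, size=10_000)  # fraud patterns have shifted

psi = population_stability_index(scores_at_launch, scores_this_month)
print(f"PSI = {psi:.3f}")
if psi > 0.25:  # a common rule of thumb for "significant drift"
    print("Significant drift: schedule retraining and a feature review")
else:
    print("Distribution stable: keep monitoring")
```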

7. A Model Can Be Wrong and Still Appear to Work

A flawed AI model can look great in testing but fail in production.

  • If the model learned from biased data, it may achieve high accuracy but systematically miss emerging fraud patterns.
  • If data leakage occurred, the model may perform well on historical data but collapse when exposed to real-world cases.
  • AI failures aren’t always immediate — a model can degrade over time without obvious signs.

This is why rigorous testing, monitoring, and continuous validation are essential — a model that appears to be working may still introduce risk.
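
One concrete safeguard is to evaluate on a time-based holdout rather than a random split, since random splits are exactly where leakage and silent degradation hide. The sketch below uses synthetic data in which fraud behaviour shifts over time:

```python
# Minimal sketch: test on the most recent slice of applications, mimicking
# production use. Everything is synthetic; rows are ordered by application date.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 6000
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)

# Fraud behaviour shifts over time: the second signal matters more in later rows.
drift = np.linspace(0.0, 1.5, n)
p_fraud = 1 / (1 + np.exp(-(f1 + drift * f2 - 3)))
y = (rng.random(n) < p_fraud).astype(int)
X = np.column_stack([f1, f2])

# Train on the earliest 80% of applications, test on the most recent 20%.
cut = int(n * 0.8)
model = LogisticRegression().fit(X[:cut], y[:cut])
auc = roc_auc_score(y[cut:], model.predict_proba(X[cut:])[:, 1])
print(f"AUC on the most recent applications: {auc:.2f}")

# A random split would leak future behaviour into training and overstate this
# number; ongoing monitoring repeats the check as new labeled cases arrive.
```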
