
Budget vs Actual: The Weekly SaaS Review

Published on February 18, 2026 · Jules, Founder of NoNoiseMetrics · 16 min read

Startups rarely die in one dramatic week. They die in a string of weeks where nobody compared what was supposed to happen to what actually happened. MRR grew a little slower than expected. Costs drifted up a little. Churn quietly worsened. The forecast was never updated. Runway shortened and nobody noticed until it was uncomfortable to fix.

Budget vs actual is the habit that catches that drift. It is not a finance ritual or a board deliverable — it is a 10-minute weekly comparison between your forecast and reality that tells you whether to change behavior now or next month. For most early-stage SaaS founders, that comparison comes down to six numbers, one table, and one decision.

This article covers what budget vs actual means, how to run the weekly loop, a complete budget vs actual report template, a worked SaaS example with variance percentages, and the mistakes that turn the review into admin. Y Combinator’s startup financial guidance identifies the budget vs actual habit as one of the highest-leverage financial practices available to early-stage founders.


What is budget vs actual?

Budget vs actual is the comparison between planned numbers and real results across revenue, costs, and cash. The budget is the forecast: what you expected to happen. The actuals are what the business produced. The variance — the gap between them — is what requires a decision.

Four concepts that belong in the same loop:

  • Budget — what you planned to spend and earn. A forward-looking commitment.
  • Actual — what the business produced in a given period. The real numbers.
  • Budget variance — the difference between budget and actual, in absolute value or percentage.
  • Budget vs actual report — the structured comparison showing all three, with an action attached to significant variances.

In founder language: “Did the month go the way I expected? If not, what changed, and what do I do differently this week?”

That’s the whole thing. Forecasts and budgets are only useful if this comparison loop runs regularly. A forecast that’s never checked against reality is confident storytelling. A budget that’s never compared to actuals is decoration.

For the financial model that produces the forecast this loop runs against, see the minimalist 8-input guide.

Every forecast needs a clean MRR baseline. Get yours from Stripe in 90 seconds →


Why budget vs actual matters more than founders expect

The argument for running budget vs actual isn’t that it’s good financial hygiene. It’s that without it, the business can drift significantly before anyone notices — and the longer drift goes undetected, the fewer levers are available to correct it.

Revenue misses compound quietly. A €500 MRR miss in January looks small. If new MRR is consistently 15% below plan, by month six the gap is material. The budget vs actual loop catches the pattern in month two, not month six.

Costs drift without decision. Most cost overruns aren’t dramatic. They’re a tool subscription that renewed, an API bill that crept up as usage grew, a contractor who took on more work. Monthly budget vs actual catches these before they become structurally embedded.

Churn is the most dangerous hidden variable. Founders model churn as a fixed percentage in their financial model, then forget to check whether actual churn matches the assumption. At 3% monthly churn versus the modeled 1.5%, retention alone leaves roughly 17% fewer customers at month twelve. The business that notices in month two can improve retention. The business that notices in month ten has a structural problem.
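The compounding effect is easy to verify in a couple of lines. A retention-only sketch: the 100 starting customers and zero new signups are hypothetical simplifications, not figures from any real model.

```python
# Retention-only sketch: customer count under two monthly churn rates.
# 100 starting customers and no new signups are hypothetical simplifications.
def retained(customers: float, monthly_churn: float, months: int) -> float:
    """Customers remaining after `months` of compounding churn."""
    return customers * (1 - monthly_churn) ** months

modeled = retained(100, 0.015, 12)  # churn as modeled: 1.5% per month
actual = retained(100, 0.03, 12)    # churn as measured: 3.0% per month
gap_pct = (modeled - actual) / modeled * 100

print(f"modeled: {modeled:.1f}, actual: {actual:.1f}, gap: {gap_pct:.0f}%")
# prints: modeled: 83.4, actual: 69.4, gap: 17%
```

New signups shrink the visible gap in headcount, which is exactly why the drift goes unnoticed without a direct churn comparison.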

Involuntary churn is partially recoverable — but only if caught early. A significant share of what appears in “churned MRR” is actually failed payment churn, not voluntary cancellations. Stripe data makes the distinction visible: a subscription marked as cancelled after a failed payment is recoverable via a dunning sequence if caught within days. Weeks later, the customer has moved on. Budget vs actual that includes a failed payment line catches this recovery window. NoNoiseMetrics surfaces failed vs voluntary churn separately from Stripe for exactly this reason.

Runway is the metric that can’t be wrong. A runway calculation is only useful if the inputs — burn rate and cash balance — reflect what actually happened. A model that hasn’t been updated against actuals will show runway based on outdated assumptions. Founders have been surprised by short runways before, and the cause is almost always a model that diverged from reality months earlier without anyone running the comparison.


The 10-minute weekly loop

The weekly review doesn’t need to be complicated. Five steps, one table, one decision:

Step 1: Pull the current numbers. Every week: ending MRR, new MRR, churned MRR, cash balance. Monthly: actual fixed costs, actual variable costs. Sources: Stripe for revenue and churn, bank feed or accounting tool for costs. A clean recurring revenue dashboard makes this step take under two minutes.

Step 2: Compare against budget. Fill in the comparison table (template below). For each line: budget, actual, variance in dollars, variance as a percentage.

Step 3: Flag meaningful variance only. Not every difference warrants action. Use thresholds: revenue variance above 5%, spend variance above 10%, runway variance above 0.5 months, churn rate more than 50% above assumption. Below those thresholds: note it, don’t act.
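The threshold rule is mechanical enough to sketch in code. The thresholds below are the article's; the metric names and dict layout are illustrative, not part of any tool.

```python
# Sketch of step 3: flag only variances that cross a threshold.
# Thresholds follow the article; metric names are illustrative.
THRESHOLDS = {  # metric -> (threshold, direction that counts as "bad")
    "new_mrr_pct": (-5.0, "below"),
    "spend_pct": (10.0, "above"),
    "runway_months": (-0.5, "below"),
}

def flag_variances(variances: dict) -> list[str]:
    """Return the metrics whose variance crosses its threshold."""
    flagged = []
    for metric, value in variances.items():
        threshold, direction = THRESHOLDS[metric]
        if direction == "below" and value <= threshold:
            flagged.append(metric)
        elif direction == "above" and value >= threshold:
            flagged.append(metric)
    return flagged

print(flag_variances({"new_mrr_pct": -16.7, "spend_pct": 9.3, "runway_months": -0.2}))
# prints: ['new_mrr_pct']
```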

Step 4: Write one action per flagged variance. One. Not a strategy document, not a retrospective. One concrete action, one owner, this week. Churn above threshold → review Stripe cancellation events and failed payments today. Spend above threshold → identify the line item that moved and decide whether to cut it.

Step 5: Update the model if reality clearly changed. Not every week — only when an assumption has demonstrably shifted. New MRR has been below plan for three consecutive weeks → update the assumption. A one-week miss is noise. A three-week pattern is signal.
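The noise-vs-signal rule in step 5 can be expressed as a tiny check. A sketch: the three-week window is the article's rule, the sample figures are hypothetical.

```python
# Sketch of step 5's rule: a miss is signal only after three
# consecutive weeks below plan (window size per the article).
def is_signal(weekly_actuals: list[float], weekly_plan: float, window: int = 3) -> bool:
    """True if the last `window` weeks all came in below plan."""
    recent = weekly_actuals[-window:]
    return len(recent) == window and all(a < weekly_plan for a in recent)

print(is_signal([450, 380, 360, 340], weekly_plan=420))  # True: update the assumption
print(is_signal([450, 380, 430, 340], weekly_plan=420))  # False: still noise
```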

The loop should take 10 minutes if the data is accessible. If it takes an hour, the data infrastructure is the problem, not the process.


Budget vs actual report: the full template

A founder-stage budget vs actual report should fit on one screen and produce one decision. Here is the complete structure:

Revenue block

Metric                         Budget     Actual     Variance   Variance %
Ending MRR                     €12,000    €11,400    −€600      −5.0%
New MRR                        €1,800     €1,500     −€300      −16.7%
Expansion MRR                  €400       €380       −€20       −5.0%
Churned MRR (voluntary)        €300       €360       +€60       +20.0%
Churned MRR (failed payment)   €100       €220       +€120      +120.0%

Costs block

Metric           Budget    Actual    Variance   Variance %
Fixed costs      €5,500    €5,500    €0         0.0%
Variable costs   €2,000    €2,700    +€700      +35.0%
Total spend      €7,500    €8,200    +€700      +9.3%

Cash block

Metric                   Budget     Actual     Variance
Monthly burn             −€4,500    −€3,200    +€1,300
Cash on hand             €48,900    €47,600    −€1,300
Runway (at gross burn)   6.5 mo     5.8 mo     −0.7 mo

Negative burn means the business generated cash, just €1,300 less of it than planned. Runway here is computed at gross burn (cash ÷ total spend): the months the business could operate if revenue stopped.

Action block

Flag                        Threshold   Status                          Action this week
Churn above plan            +50%        ⚠️ Failed payment churn +120%   Run dunning sequence on failed payment cohort
Variable costs above plan   +10%        ⚠️ +35%                         Identify API cost driver; set usage alert
New MRR below plan          −15%        ⚠️ −16.7%                       Review activation drop-off in Stripe; check trial conversion
Runway below plan           −0.5 mo     ⚠️ −0.7 mo                      Freeze non-revenue experiments until recovery confirmed

Splitting churned MRR into voluntary and failed payment components is the most actionable change most founders can make to their budget vs actual report. Voluntary churn requires product and retention work — slower to fix. Failed payment churn is recoverable within days if caught. Treating them as the same number wastes the recovery window.
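The variance columns in the blocks above are mechanical to compute. A short Python sketch using the template's sample figures; the `variance_row` helper is illustrative, not part of any tool.

```python
# Sketch: fill the variance columns of the report from budget/actual pairs.
# Figures are the template's sample numbers; the helper is illustrative.
def variance_row(name: str, budget: float, actual: float) -> tuple:
    """One report line: (metric, budget, actual, variance, variance %)."""
    variance = actual - budget
    variance_pct = round(variance / budget * 100, 1)
    return (name, budget, actual, variance, variance_pct)

rows = [
    variance_row("Ending MRR", 12_000, 11_400),
    variance_row("New MRR", 1_800, 1_500),
    variance_row("Churned MRR (failed payment)", 100, 220),
]
for row in rows:
    print(row)
# prints: ('Ending MRR', 12000, 11400, -600, -5.0) and so on
```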


Budget vs actual example: a full SaaS month

Scenario: A SaaS analytics tool, month four. The budget was set using the financial model from last month’s review.

Inputs to the budget (from last month’s model):

  • Starting MRR: €10,000
  • New MRR: €1,800
  • Expansion: €400
  • Churn (total): €400
  • Fixed costs: €5,500
  • Variable costs: €2,000
  • Cash: €50,000

What actually happened:

  • Starting MRR: €10,000 (correct)
  • New MRR: €1,500 (missed by €300)
  • Expansion: €380 (close)
  • Voluntary churn: €360 (slightly above)
  • Failed payment churn: €220 (significantly above; model had €100)
  • Fixed costs: €5,500 (on plan)
  • Variable costs: €2,700 (over by €700 — API costs grew with usage)

The calculation:

Ending MRR = 10,000 + 1,500 + 380 − 360 − 220 = 11,300
(Budget was 10,000 + 1,800 + 400 − 300 − 100 = 11,800)
MRR variance: −€500, or −4.2%

Total spend: 5,500 + 2,700 = 8,200
Budget spend: 5,500 + 2,000 = 7,500
Spend variance: +€700, or +9.3%

Monthly burn: 8,200 − 11,300 = −3,100 (still cash-flow positive)
Budget burn: 7,500 − 11,800 = −4,300
The business is cash-flow positive in both cases, but generating €1,200 less cash than budgeted.
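The same calculation, written out as code. All figures come from the example above; burn follows the article's convention of spend minus revenue, so a negative burn means cash generated.

```python
# The month-four calculation above, as arithmetic (figures from the example).
budget_ending = 10_000 + 1_800 + 400 - 300 - 100  # 11,800
actual_ending = 10_000 + 1_500 + 380 - 360 - 220  # 11,300

budget_burn = 7_500 - budget_ending  # -4,300 (cash-flow positive)
actual_burn = 8_200 - actual_ending  # -3,100 (cash-flow positive)

print(actual_ending - budget_ending)  # -500: the MRR variance
print(actual_burn - budget_burn)      # 1200: €1,200 less cash generated
```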

What the founder should do, specifically:

The MRR miss is €500, below the 5% alert threshold on ending MRR, but new MRR missed by 16.7% and both churn lines came in above plan, widening the gap. The failed payment churn at €220 against a €100 assumption is the most actionable finding. Those are recoverable customers. A dunning sequence triggered within the week captures a meaningful fraction of them.

The variable cost overrun is €700. At €2,700 actual vs €2,000 budget, this is a 35% overrun. For an AI or API-heavy product, this often means usage grew faster than modeled — which is a good problem — but the cost model needs updating. If the product grows, variable costs will grow with it, and the financial model needs to reflect that or runway projections will be optimistic.

Actions this week:

  1. Pull failed payment list from Stripe; trigger recovery email sequence via Brevo
  2. Check API usage dashboard; set a billing alert at 80% of last month’s actual
  3. Update financial model: new MRR assumption → €1,600 (between plan and actuals); variable cost assumption → €2,400

That’s it. No board deck. No finance meeting. Three concrete actions from a 10-minute review.

KeyBanc Capital Markets SaaS Survey data shows that SaaS companies running weekly budget vs actual reviews detect cost drift an average of six weeks earlier than those running monthly reviews — a significant difference at the sub-€1M ARR stage.


Budget variance formula

The mechanics are simple:

Budget Variance (absolute) = Actual − Budget

Budget Variance (%) = (Actual − Budget) / Budget × 100

Sign convention matters. For revenue lines, a negative variance is bad (you earned less than planned). For cost lines, a positive variance is bad (you spent more than planned). Some founders flip the sign on cost lines to make “all bad = negative” — either convention works as long as it’s consistent.

Example:

Budgeted new MRR: €1,800
Actual new MRR: €1,500
Variance: €1,500 − €1,800 = −€300
Variance %: −€300 / €1,800 = −16.7%
Budgeted variable costs: €2,000
Actual variable costs: €2,700
Variance: €2,700 − €2,000 = +€700
Variance %: +€700 / €2,000 = +35.0%

For founders using a single combined burn metric, the variance is:

Budget burn = Budget revenue − Budget costs
Actual burn = Actual revenue − Actual costs
Burn variance = Actual burn − Budget burn

If actual burn is higher than budget burn (the business burned more cash than expected), the combined variance can come from either side: costs above plan, revenue below plan, or both. Surface both components so you know which lever to pull.
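That decomposition can be sketched in a few lines. The figures reuse the worked example from earlier; the helper name is illustrative.

```python
# Sketch: split a burn variance into its revenue and cost components.
# Burn = costs - revenue, per the article's convention.
def burn_variance_components(rev_budget, rev_actual, cost_budget, cost_actual):
    revenue_component = rev_actual - rev_budget  # negative = revenue miss
    cost_component = cost_actual - cost_budget   # positive = cost overrun
    burn_variance = cost_component - revenue_component
    return burn_variance, revenue_component, cost_component

total, rev, cost = burn_variance_components(11_800, 11_300, 7_500, 8_200)
print(total, rev, cost)  # 1200 -500 700: both levers made burn worse
```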


The budget vs actual loop in the forecasting system

Budget vs actual doesn’t stand alone. It’s one step in a continuous operating cycle:

Financial model (assumptions)
    → Forecast (projected monthly outputs)
        → Budget (period-specific spending plan)
            → Actuals (what the business produced)
                → Variance (gap between plan and reality)
                    → Decision (action or assumption update)
                        → Updated financial model

Without the actuals-to-decision step, the loop is broken. A forecast that’s never compared to actuals gives founders false confidence — the model looks fine, the runway looks adequate, but the underlying assumptions have drifted from reality.

The decision-to-model step is equally important. If you notice that new MRR has been running 15% below plan for three consecutive months and don’t update the model, the financial model is lying about runway. Updating the assumption is uncomfortable because it makes the runway shorter — but it makes it accurate, which is what the model is for.

For the MRR forecast layer, the 3-input model is designed to integrate with this budget vs actual loop — lightweight enough to update every week alongside the actuals comparison.

For the scenario planning layer that makes the model stress-testable, see Scenario Modeling for Bootstrappers: Stress-Test in 15 Minutes.

Bessemer’s State of the Cloud report identifies automated MRR data as the most impactful infrastructure investment for improving the accuracy and cadence of budget vs actual reviews.


Common budget vs actual mistakes

Doing it monthly and missing the drift. Monthly reviews catch problems after four weeks of compounding. A weekly pulse on MRR and burn takes ten minutes and catches the same problem in week one, when it’s still easy to fix. For early-stage SaaS where the MRR base is fragile, weekly is almost always better.

Too many budget lines. A budget vs actual table with 40 rows doesn’t get reviewed consistently. Condense to the six numbers that move the business: MRR, new MRR, churn, variable costs, burn, runway. Add lines only when a decision requires more granularity.

No variance threshold. Every small difference creates noise. Set explicit thresholds — revenue miss above 5%, spend above 10%, runway drop above 0.5 months — and only flag those. Below the threshold: note it, don’t act. This keeps the review from becoming a weekly anxiety event.

Comparing against a fantasy budget. If the budget assumptions were optimistic in the first place, the comparison is measuring how far from fantasy reality landed. A useful budget should be slightly uncomfortable to commit to — achievable in a reasonable scenario, not aspirational in a perfect one.

No action attached to the variance. A review that produces “interesting, let’s watch it” is not a review — it’s reporting. Every flagged variance should produce a decision, an owner, and a timeline. Otherwise the process becomes weekly admin rather than a decision-making tool.

Treating all churn as equivalent. Voluntary churn (customer chose to leave) and involuntary churn (failed payment) require completely different responses. Combining them into a single churn line hides which type is driving the variance and wastes the recovery window for failed payments.


Automating budget vs actual

The main friction in running the weekly review is pulling the numbers manually. Three automation investments pay back quickly:

Automate MRR data from Stripe. New MRR, churned MRR (split by voluntary and failed payment), expansion MRR, and ending MRR can all be pulled from Stripe subscription events without manual calculation. NoNoiseMetrics does this automatically and surfaces the weekly MRR waterfall in the dashboard — the revenue block of the budget vs actual table fills itself.

Set Stripe billing alerts for variable cost monitoring. If variable costs include API costs billed via Stripe or cloud services, set budget alerts in each provider’s dashboard. The alert triggers when spending approaches threshold, not after the bill arrives.

Use a lightweight fixed tracker for costs. Fixed costs don’t change much month to month. A simple list of recurring costs with monthly amounts, updated only when something changes, is enough. Aggregate it in a single cell rather than building a multi-tab cost model.
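That tracker can be as small as a flat dictionary. A sketch with hypothetical line items and amounts:

```python
# The "single cell" fixed-cost tracker as a flat dict.
# Line items and amounts are hypothetical; update only when something changes.
FIXED_COSTS = {
    "hosting": 180,
    "email_tool": 45,
    "accounting": 60,
    "misc_subscriptions": 120,
}

monthly_fixed = sum(FIXED_COSTS.values())
print(f"Fixed costs this month: €{monthly_fixed}")
```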

The weekly review that took 45 minutes when done manually typically takes 10 minutes when the MRR data is automated and costs are maintained in a single tracker.


JSON structure for a budget vs actual tracker

{
  "budget_vs_actual": {
    "period": "2026-04",
    "currency": "EUR",
    "revenue": {
      "mrr_budget": 12000,
      "mrr_actual": 11300,
      "mrr_variance": -700,
      "mrr_variance_pct": -5.8,
      "new_mrr_budget": 1800,
      "new_mrr_actual": 1500,
      "expansion_mrr_budget": 400,
      "expansion_mrr_actual": 380,
      "churn_voluntary_budget": 300,
      "churn_voluntary_actual": 360,
      "churn_failed_payment_budget": 100,
      "churn_failed_payment_actual": 220
    },
    "costs": {
      "fixed_budget": 5500,
      "fixed_actual": 5500,
      "variable_budget": 2000,
      "variable_actual": 2700,
      "total_budget": 7500,
      "total_actual": 8200,
      "total_variance_pct": 9.3
    },
    "cash": {
      "burn_budget": -4300,
      "burn_actual": -3100,
      "runway_budget_months": 11.0,
      "runway_actual_months": 9.7
    },
    "variance_flags": {
      "new_mrr_below_threshold": true,
      "failed_payment_churn_above_threshold": true,
      "variable_costs_above_threshold": true,
      "runway_below_target": true
    },
    "actions": [
      {
        "flag": "failed_payment_churn",
        "action": "Trigger dunning sequence for failed payment cohort",
        "owner": "founder",
        "due": "this week"
      },
      {
        "flag": "variable_costs",
        "action": "Identify API cost driver; set usage alert at 80% of actual",
        "owner": "founder",
        "due": "this week"
      },
      {
        "flag": "new_mrr",
        "action": "Review trial-to-paid conversion in Stripe; check activation drop-off",
        "owner": "founder",
        "due": "this week"
      }
    ]
  }
}
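Turning that structure into the weekly summary takes only a few lines. A sketch that parses a trimmed slice of the JSON above; in practice the full document would live in a file, and the inline string is purely for illustration.

```python
import json

# Trimmed slice of the tracker structure, parsed into the weekly summary.
raw = """
{"budget_vs_actual": {"period": "2026-04",
  "variance_flags": {"new_mrr_below_threshold": true,
                     "variable_costs_above_threshold": true},
  "actions": [{"flag": "new_mrr",
               "action": "Review trial-to-paid conversion in Stripe",
               "due": "this week"}]}}
"""
report = json.loads(raw)["budget_vs_actual"]

flags = [name for name, raised in report["variance_flags"].items() if raised]
print(f"{report['period']}: {len(flags)} variance flags")
for item in report["actions"]:
    print(f"- [{item['flag']}] {item['action']} ({item['due']})")
```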

The actions array is the most important addition to a standard budget vs actual JSON structure. It closes the loop between the numbers and the decisions — which is the only reason to run the review in the first place.


FAQ

What is budget vs actual?

Budget vs actual is the comparison between what a business planned to earn and spend (the budget) and what it actually produced in a given period (the actuals). The difference between them — the budget variance — determines whether and what to change. For SaaS founders, this comparison typically covers MRR, new MRR, churn, variable costs, burn rate, and runway.

What should be in a budget vs actual report?

A founder-stage budget vs actual report should include: a revenue block (budgeted vs actual MRR, new MRR, and churn — split by voluntary and failed payment), a costs block (fixed and variable), a cash block (burn rate and runway), and an action block listing one concrete response per flagged variance. It should fit on one screen and take under 10 minutes to complete.

What is budget variance and how do you calculate it?

Budget variance is the difference between a budgeted number and the actual result: Variance = Actual − Budget. As a percentage: Variance % = (Actual − Budget) / Budget × 100. For revenue lines, a negative variance means underperformance. For cost lines, a positive variance means overspend. Both should trigger review if they exceed the founder’s threshold (typically 5% for revenue, 10% for costs).

What is a budget vs actual example for a SaaS company?

A common example: a SaaS founder budgets €12,000 ending MRR and €7,500 in costs. Actuals come in at €11,300 MRR and €8,200 costs. The revenue variance is −€700 (−5.8%), primarily driven by a new MRR miss of €300 and failed payment churn that doubled the budgeted amount. The cost overrun is €700 (+9.3%), driven by API usage growth. Actions: trigger a failed payment recovery sequence, investigate the API cost driver, update the financial model with revised assumptions.

How is budget vs actual different from actual vs budget?

They’re the same comparison described from different directions. “Budget vs actual” (budget first) emphasises the plan as the baseline and shows how reality deviated from it. “Actual vs budget” (actuals first) shows what happened and how it compares to plan. The calculation and usefulness are identical. The ordering is a presentation preference.

How often should a SaaS founder review budget vs actual?

Weekly for early-stage products where MRR is still fragile and runway is below 18 months. Monthly for more established products with stable growth patterns. The weekly version is a shorter pulse check — six core metrics, one table, one decision — not a full model review. Monthly reviews are more thorough and include assumption updates.

Why is failed payment churn important in a budget vs actual review?

Failed payment churn is the share of “churned MRR” that comes from payment failures rather than voluntary cancellations. It’s partially recoverable — a well-timed dunning email sequence can recapture 20–40% of failed payment churn within days. If failed payment churn is combined with voluntary churn in the budget vs actual report, the recovery window is invisible. Separating the two lines creates the opportunity to act on the recoverable portion immediately.

Forecasting from dirty MRR is forecasting wrong. Start with numbers you can trust →

Jules
Solo founder · Building in public
Building NoNoiseMetrics — Stripe analytics for indie hackers, without the BS.
See your real MRR from Stripe → Start free