Integrated resource planning for behind-the-meter generation. Run a live financial analysis on a 50MW Phoenix data center with gas, solar, and battery storage.
Utilities have done integrated resource planning for decades—modeling portfolio economics, reliability constraints, and risk across technology mixes to determine the least-cost, most-reliable generation stack. Data center developers evaluating behind-the-meter generation face the exact same problem, but without the tools. BTM-Optimize brings IRP-grade analysis to the behind-the-meter decision.
The scenario: A hyperscaler evaluating on-site generation for a 50MW campus in Phoenix, AZ. Current state: 100% SP&L grid power under the C-27 Large Commercial tariff. Proposed portfolio: 4×12.5MW natural gas reciprocating engines, 10MW DC solar, and 5MW/20MWh LFP battery storage. The question an IRP answers: what does this resource mix actually cost across a range of conditions, and does it meet reliability requirements?
What you're running: Layer 1 of BTM-Optimize—the deterministic financial screener. Think of it as the screening curve in a traditional IRP: it tells you which resource combinations are worth modeling in detail. Layer 2 adds the stochastic analysis that separates a pitch deck from a bankable feasibility study.
The screener above tells you the expected value. This is the real deliverable: a quantified risk profile for the same Phoenix DC scenario. 500 Monte Carlo iterations, 23 weather years, NERC GADS reliability data.
| Failure Mode | Frequency | Avg Duration | Avg Unserved Load | Cost at Risk |
|---|---|---|---|---|
| Single generator trip | 4.2 events/yr | 6.8 hrs | 0 MWh | $0 |
| 2+ simultaneous generator trips | 0.18 events/yr | 4.2 hrs | 38 MWh | $1.9M |
| Grid outage (islanded operation) | 0.87 events/yr | 3.4 hrs | 0 MWh | $0 |
| Grid outage + generator trip | 0.04 events/yr | 5.1 hrs | 64 MWh | $3.2M |
| Heat event derate (>40°C, all units) | 8.3 days/yr | 6 hrs/day | 0 MWh | $0.4M (fuel) |
| Correlated: heat + trips + grid | 0.008 events/yr | 14 hrs | 186 MWh | $9.3M |
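Taken together, the table's failure modes imply an expected annual cost at risk. A minimal sketch of that arithmetic, assuming the Cost at Risk column is a per-event figure for the discrete failure modes and an already-annualized fuel penalty for the heat-derate row:

```python
# Expected annual cost at risk from the failure-mode table above.
# Assumption: "Cost at Risk" is per event for discrete failures; the
# heat-derate row reads as an annual fuel penalty, so it enters directly.
failure_modes = [
    # (name, events_per_year, cost_per_event_usd)
    ("2+ simultaneous generator trips", 0.18,  1.9e6),
    ("grid outage + generator trip",    0.04,  3.2e6),
    ("correlated heat + trips + grid",  0.008, 9.3e6),
]
annual_fuel_penalty = 0.4e6  # heat-event derate row

expected_cost = sum(freq * cost for _, freq, cost in failure_modes)
expected_cost += annual_fuel_penalty

print(f"Expected annual cost at risk: ${expected_cost/1e6:.2f}M")
# → Expected annual cost at risk: $0.94M
```

Note the distinction this motivates: the expectation is under $1M/yr, but the tail event alone is $9.3M. Layer 2 exists to quantify that tail, not the average.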
In utility IRP, the process starts with screening curves—deterministic economic models that filter candidate resource portfolios before committing to detailed stochastic analysis. This demo runs that first stage: Layer 1 of BTM-Optimize, a financial screener that calculates LCOE, IRR, payback, and annual cost comparison to determine whether a portfolio is worth modeling in depth.
The full platform adds Layer 2: Monte Carlo risk analysis—the equivalent of production cost modeling in a utility IRP. It simulates equipment failures, fuel price volatility, weather variability, and grid outages across hundreds of iterations to produce a distribution of outcomes rather than a single number. A project might have a great expected return but a fat left tail—a small but real chance of catastrophic underperformance. You can't see that in a spreadsheet, and you can't finance a project without it.
Deterministic economics under ideal conditions. CAPEX amortization, fuel costs with segmented heat rate curves, O&M escalation, TOU grid charges with demand ratchets, LCOE, IRR, and payback.
The question: Does this pencil?
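As a sketch of what a Layer 1 screen computes, here is a stripped-down LCOE and simple-payback calculation. Every input (CAPEX, WACC, fuel/O&M rate, tariff cost, served-load factor) is an illustrative placeholder, not the demo's actual parameters, and the real screener adds heat rate segmentation, TOU periods, and demand ratchets on top of this skeleton:

```python
# Minimal deterministic screening sketch: annualized CAPEX + variable
# cost per MWh vs. the all-in grid alternative. Inputs are illustrative.
def crf(rate, years):
    """Capital recovery factor: converts upfront CAPEX to an annuity."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

capex_usd       = 65e6            # assumed installed cost of the portfolio
wacc, life_yrs  = 0.08, 20
annual_mwh      = 50 * 8760 * 0.85   # 50 MW at an assumed 85% served-load factor
fuel_om_per_mwh = 18.0            # assumed blended fuel + O&M

annualized_capex = capex_usd * crf(wacc, life_yrs)
lcoe = annualized_capex / annual_mwh + fuel_om_per_mwh

grid_cost_per_mwh  = 95.0         # assumed all-in tariff cost
annual_savings     = (grid_cost_per_mwh - lcoe) * annual_mwh
simple_payback_yrs = capex_usd / annual_savings

print(f"LCOE ${lcoe:.0f}/MWh, simple payback {simple_payback_yrs:.1f} yrs")
# → LCOE $36/MWh, simple payback 2.9 yrs
```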
Stochastic risk modeling across N iterations. Per-unit Bernoulli outage sampling using NERC GADS forced outage rates. Poisson-distributed grid failures. Ambient-temperature performance derates. Fuel price volatility with mean-reverting stochastic processes. Full distribution of cost outcomes.
The question: What's the range of outcomes, and how bad can it get?
Every parameter in BTM-Optimize traces back to a source. The platform maintains 13 OEM equipment profiles, each built from a four-layer provenance chain: NREL cost benchmarks, EIA-923 actual generator performance data, NERC GADS reliability statistics, and manufacturer specifications. When someone asks "where did that heat rate come from?", the answer is never "we assumed it."
Provenance chain example: The 8,500 BTU/kWh heat rate in this demo's gas generator profile originates from NREL ATB 2024 benchmarks for reciprocating engines, validated against Form EIA-923 reported actuals for similar-class units, with the 3% forced outage rate sourced from NERC GADS fleet statistics for the same technology class.
Annual Technology Baseline capital costs, O&M rates, and performance parameters by technology class. The starting point for every equipment profile.
EIA-923 generator-level data via the Public Utility Data Liberation project. Real heat rates, capacity factors, and fuel consumption from operating plants.
Generator Availability Data System forced outage rates, planned outage schedules, and mean time to repair by technology class and unit age.
Equipment-specific performance curves, maintenance intervals, ambient temperature derates, and warranty parameters from 13 OEM equipment profiles.
The screener above tells you the expected value. Layer 2 shows you the shape of the risk. Here's what it models that the demo does not:
Per-unit Bernoulli sampling: each generator has an independent probability of forced outage in each simulation hour, calibrated from NERC GADS fleet data. A 4-unit plant with 3% FOR doesn't lose 3% of capacity—it sometimes loses 0%, sometimes 25%, occasionally 50%. The distribution matters.
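That lumpiness is easy to demonstrate. A minimal sketch using the scenario's 4-unit fleet and 3% FOR (the seed and hour count are arbitrary):

```python
import random
from collections import Counter

def sample_available_units(rng, n_units=4, forced_outage_rate=0.03):
    """One Bernoulli draw per generator for a simulation hour: each unit
    is independently on forced outage with probability equal to its FOR."""
    return sum(rng.random() >= forced_outage_rate for _ in range(n_units))

rng = random.Random(42)
hours = 100_000
dist = Counter(sample_available_units(rng) for _ in range(hours))

# A 3% FOR fleet never loses 3% of capacity; it loses whole units.
# Binomially (n=4, p=0.03): ~88.5% of hours all 4 units run, ~11% lose
# one unit, ~0.5% lose two or more.
for units_up in sorted(dist, reverse=True):
    print(f"{units_up} units available: {dist[units_up]/hours:.2%} of hours")
```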
Poisson-distributed grid outages calibrated from regional SAIDI/SAIFI data. Outage duration follows a log-normal distribution. This determines how much backup capacity you actually need—and how often you'll use it.
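A minimal sketch of that outage model, with the event rate and mean duration set to roughly match the risk table above; the log-normal parameters themselves are illustrative, not a SAIDI/SAIFI calibration:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method: fine for the small means of annual outage counts."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def grid_outage_hours(rng, events_per_year=0.87, mu=0.9, sigma=0.8):
    """One simulated year: Poisson event count, log-normal durations (hrs).
    events_per_year matches the table; mu/sigma are chosen so the mean
    duration exp(mu + sigma^2/2) ≈ 3.4 hrs, also matching the table."""
    n = poisson(rng, events_per_year)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

rng = random.Random(7)
years = [sum(grid_outage_hours(rng)) for _ in range(10_000)]
avg = sum(years) / len(years)
print(f"Avg annual grid-outage hours: {avg:.1f}")
```

Most simulated years see zero outages; the sizing question is driven by the years that see two or three long ones.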
Solar output sampled from 23 years of TMY weather data (NREL NSRDB). Gas turbine performance derated for ambient temperature using OEM-specific curves. Battery round-trip efficiency adjusted for thermal conditions. Phoenix summers are different from Portland winters.
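As an illustration of an ambient derate, here is a placeholder linear curve. This is not an OEM spec; the actual profiles interpolate equipment-specific tables, and the 25 °C knee and 0.5%/°C slope are assumptions for the sketch:

```python
def engine_output_mw(ambient_c, rated_mw=12.5):
    """Illustrative ambient-temperature derate for a recip engine: rated
    output up to 25 °C, then a linear 0.5%/°C loss. Placeholder curve,
    not an OEM performance table."""
    if ambient_c <= 25.0:
        return rated_mw
    return rated_mw * max(0.0, 1.0 - 0.005 * (ambient_c - 25.0))

# A 45 °C Phoenix afternoon trims each 12.5 MW engine by 10%:
print(engine_output_mw(45.0))  # → 11.25
```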
Natural gas prices modeled with mean-reverting stochastic processes calibrated from EIA Henry Hub historical data. Escalation rates aren't fixed—they're drawn from a distribution that captures both the trend and the volatility.
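One common way to sketch such a process is a discretized mean-reverting random walk in log space (Ornstein-Uhlenbeck style). Every parameter below is an illustrative placeholder, not a value calibrated to EIA Henry Hub data:

```python
import math
import random

def gas_price_path(rng, p0=3.50, long_run=3.80, kappa=0.3, sigma=0.25, years=20):
    """Annual $/MMBtu price path from a discretized mean-reverting process
    in log space. kappa is the reversion speed toward the long-run level,
    sigma the annual volatility; all values here are illustrative."""
    log_p, log_lr = math.log(p0), math.log(long_run)
    path = [p0]
    for _ in range(years):
        # pull toward the long-run level, plus a volatility shock
        log_p += kappa * (log_lr - log_p) + sigma * rng.gauss(0.0, 1.0)
        path.append(math.exp(log_p))
    return path

rng = random.Random(1)
avg_final = sum(gas_price_path(rng)[-1] for _ in range(2000)) / 2000
print(f"Mean year-20 price across 2,000 paths: ${avg_final:.2f}/MMBtu")
```

Feeding each path through the fuel-cost model, rather than a single fixed escalation rate, is what turns one LCOE number into a distribution.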
Extreme heat events simultaneously increase cooling load, derate gas engine output, reduce battery efficiency, and stress the grid. Layer 2 models these correlations rather than treating each risk as independent. The worst-case scenario is when everything goes wrong at once—and that's exactly when you need the analysis most.
The output difference: Layer 1 gives you one number—"the LCOE is $29/MWh." Layer 2 gives you a risk profile—"your LOLE is 0.08 days/year, your P99 unserved load event is 186 MWh during a correlated heat/outage/grid failure, and you can eliminate it with a 5th standby unit at $18.8M or bridge 68% of it with 10 MWh of additional BESS at $3.5M." That's the difference between a screening estimate and a decision you can underwrite.
13 OEM equipment profiles with full provenance chains
4 reference data layers (NREL, EIA, NERC, OEM)
23 years of weather data (NREL NSRDB)
We build IRP-grade analyses for data center developers evaluating behind-the-meter generation. Your site, your tariff, your equipment options—modeled with the same rigor utilities apply to their own resource plans.