The architecture is defined. The compliance mapping is done. Here's how to go from policy to production in 90 days, and why the productivity gains justify moving now.
The case is made: blocking AI is riskier than governing it (Part 01), the regulations draw lines around data rather than tools (Part 02), and a three-zone architecture satisfies the compliance requirements (Part 03). What remains is execution.
§ 01 Phase 1: classify and govern (weeks 1-3)
Run a data classification exercise across your engineering teams. What do they work with daily? Map every data type to RED, AMBER, or GREEN. You'll likely find that 70-80% of daily development work falls in AMBER or GREEN, the zones where AI tools operate.
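The classification exercise can land in something as simple as a lookup table. A minimal sketch, where every data-type name and zone assignment is illustrative (your matrix comes out of the exercise itself), and unknown types deliberately default to RED:

```python
# Hypothetical zone-classification matrix; entries are illustrative only.
ZONE_MATRIX = {
    "scada_telemetry": "RED",          # BES Cyber System data: no AI exposure
    "relay_settings": "RED",
    "substation_one_lines": "AMBER",   # CEII-adjacent: isolated endpoint only
    "derms_integration_code": "AMBER",
    "open_source_libraries": "GREEN",  # public or non-sensitive
    "unit_tests": "GREEN",
}

def zone_for(data_type: str) -> str:
    """Return the AI-usage zone for a data type.

    Unknown types default to RED: fail closed until a human
    classifies the data, never open.
    """
    return ZONE_MATRIX.get(data_type, "RED")
```

The fail-closed default matters more than the matrix contents: anything your classification exercise missed is treated as RED until someone decides otherwise.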
Draft an AI acceptable use policy aligned with your NERC CIP compliance program. Add your AI tool vendors to the CIP-013 supply chain risk assessment. This is governance paperwork, but it creates the foundation everything else builds on.
§ 02 Phase 2: build the infrastructure (weeks 4-6)
Deploy isolated AI model hosting through your existing cloud provider. On AWS, stand up Bedrock with a VPC endpoint. On Azure, configure AI Foundry with Private Link. On GCP, set up Vertex AI with VPC Service Controls. The cloud provider documentation for each is mature; this isn't experimental infrastructure.
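For the AWS path, the VPC endpoint is a few lines of Terraform. This is a sketch, not a complete deployment: the VPC, subnet, and security group resource names are placeholders, and the region in the service name must match yours.

```hcl
# Sketch only: resource references and region are placeholders.
resource "aws_vpc_endpoint" "bedrock_runtime" {
  vpc_id              = aws_vpc.ai_isolated.id
  service_name        = "com.amazonaws.us-east-1.bedrock-runtime"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = aws_subnet.private[*].id
  security_group_ids  = [aws_security_group.bedrock_clients.id]
  private_dns_enabled = true
}
```

With `private_dns_enabled`, model traffic resolves to the interface endpoint inside your VPC rather than traversing the public internet, which is the property your auditors will ask about.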
Configure Claude Code to point to your isolated endpoint. Set up deny rules for sensitive file patterns. Implement pre-commit hooks that scan for CEII markers, internal IP ranges, substation identifiers, and credential patterns. Build the audit logging pipeline from AI interactions to your SIEM.
§ 03 Phase 3: pilot (weeks 7-9)
Select two or three development teams for a structured pilot. Start them in Zone 3 with general development work to build fluency. Move them to Zone 2 once the tooling is validated and the team demonstrates data classification awareness.
Monitor everything. Tune the deny rules; you'll find patterns you missed. Refine the DLP controls. Collect productivity metrics alongside compliance evidence. The pilot generates the data you need to justify broader deployment.
§ 04 Phase 4: scale (weeks 10-12)
Expand to all development teams. Conduct a NERC CIP audit readiness review with the AI governance layer in place. Document procedures for CIP-003-9 compliance. Establish a quarterly review cadence for deny rules, DLP patterns, and access controls.
This isn't a one-time deployment. The deny rules will need updating as new data patterns emerge. The audit logs will reveal usage patterns that inform tighter controls or relaxed ones. The classification matrix will evolve as your BESS fleet grows and new NERC CIP standards take effect. The governance practice, like the grid itself, is iterative.
§ 05 The productivity case
The architecture and compliance mapping give your security and compliance teams what they need to say yes. The reason to say yes is the productivity gain.
An engineer writing DERMS integration code with AI assistance produces working, tested implementations in hours instead of days. A team modernizing a meter data management system moves through the backlog two to three times faster. Code review quality improves because the AI catches patterns a tired human eye misses on the fourth review of the day.
The workforce dimension doesn't show up in sprint velocity. Your experienced engineers are retiring. The institutional knowledge they carry about your specific grid, your specific systems, your specific integration patterns walks out the door with them. AI coding assistants don't replace that knowledge, but they allow a mid-career engineer to produce at a level that previously required a decade of utility-specific experience. They're a force multiplier for the team you have, at the exact moment you can't afford to lose productivity.
Talent retention matters too. Engineers who want to use modern tools will go where modern tools are available. A governed AI deployment isn't just a productivity investment; it's a retention strategy.
§ 06 The ongoing climb
AI governance at a utility isn't a project you finish. New models will arrive with new capabilities and new data flows. NERC CIP standards will expand; CIP-015-1 is just the latest in a long line. Your BESS fleet will grow, your DERMS will evolve, the boundary between OT and IT will continue to blur.
The three-zone architecture isn't a permanent solution. It's a starting position: a well-classified, well-governed, auditable starting position that gives your teams the tools they need while giving your compliance and security teams the controls they require. Your team will revisit it, refine it, push it up the hill again as conditions shift.
The risk of not adopting AI is now greater than the risk of adopting it with proper controls. OpenClaw and consumer AI tools are the danger, not an enterprise deployment with data classification, network isolation, and audit logging. Your engineers are going to use AI one way or another. The only question is whether they do it inside a governed architecture or outside your visibility entirely.
The boulder doesn't get lighter. With the right architecture, your team pushes it with better tools.
§ 07 Series complete
Four articles. One argument: AI in utility engineering isn't a technology decision; it's a governance decision. The technology is mature. The compliance fit is direct. The productivity gain is real. The barrier is organizational, and 90 days of focused work removes it.
If you want to talk about what this would look like at your utility, I'm at adam@sgridworks.com. The first call is always a diagnostic.
— Adam · adam@sgridworks.com · Feb 2026