Your engineering teams are already using AI. The question isn't whether to allow it; it's whether to govern it.
§ 01 The workforce gap is the real risk
The average age of a utility engineer in the United States is north of 50. Retirements are outpacing hiring by a widening margin. Regulators and state commissions are pushing utilities to integrate DERs at scale, modernize aging SCADA and EMS infrastructure, deploy DERMS and ADMS platforms, expand NERC CIP compliance, and bring BESS online under new frameworks.
The math doesn't work. There aren't enough engineers to do all of it at the pace regulators and ratepayers expect.
AI coding assistants — Claude Code, GitHub Copilot, Cursor — are driving significant productivity gains across industries, with early adopters reporting two to five times faster delivery on coding tasks. What used to take a team a sprint can take an engineer a day. That's not hype. It's what happens when an AI can read an entire DERMS codebase, understand the integration patterns, and generate multi-file implementations that actually compile.
The CTO who blocks these tools isn't reducing risk. They're guaranteeing that modernization falls further behind, that their best engineers leave for employers who let them use modern tools, and that the engineers who stay use consumer AI tools anyway, without any governance at all.
§ 02 What AI coding assistants actually do
These aren't autocomplete tools anymore. The current generation reads entire codebases (200,000+ tokens of context), reasons across multiple files, and executes autonomous multi-step workflows. They write code, review it, run tests, debug failures, iterate until the implementation works.
For utility engineering teams, the practical applications are immediate.
- DERMS/ADMS integration code and API development
- SCADA historian data pipeline development
- Grid planning model scripts and validation tools
- Outage management and restoration logic
- Meter data management and billing system modernization
- Cybersecurity tooling and compliance automation
- Test generation, code review, documentation
The concern from security and compliance teams is legitimate: these tools send code to cloud APIs for inference. Your source code leaves your machine. That's a real data flow that needs to be governed. "Code leaves your machine" isn't the end of the analysis. It's the beginning.
§ 03 OpenClaw: what an ungoverned AI looks like
A secure architecture makes more sense once you've seen an insecure one. OpenClaw is an open-source AI agent that crossed 180,000 GitHub stars in early 2026 and drew two million visitors in a single week. It connects to LLMs, integrates with external APIs, and autonomously executes tasks. It's also a case study in how not to build AI tooling for any environment that handles sensitive data.
Within days of its surge in popularity, security researchers found over 21,000 publicly exposed OpenClaw instances leaking API keys, chat histories, and account credentials into the open internet. The tool trusts localhost by default with no authentication. Direct messages share a single global context, meaning secrets loaded for one user become visible to others. CVE-2026-25253 (CVSS 8.8) enables one-click remote code execution through missing WebSocket origin validation, a published exploit chain that takes milliseconds to execute via a malicious web page.
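The origin-validation gap is worth pausing on, because the fix is a few lines of code. Browsers attach the page's origin to every WebSocket handshake, so a local agent can refuse connections from arbitrary websites simply by checking that header. A minimal sketch in Python (the allowlist entries here are hypothetical, and OpenClaw itself is a Node.js project, so this illustrates the check rather than its actual code):

```python
from urllib.parse import urlparse

# Hypothetical allowlist -- a governed deployment would load this from config.
ALLOWED_ORIGINS = {"http://localhost:18789", "https://agent.internal.example"}

def is_allowed_origin(headers: dict) -> bool:
    """Reject cross-site WebSocket handshakes by validating the Origin header.

    A malicious web page opening a socket to ws://localhost presents its own
    origin (e.g. https://evil.example) in the handshake and gets refused.
    Without this check, any page the user visits can drive the local agent.
    """
    origin = headers.get("Origin", "")
    parsed = urlparse(origin)
    # Require a real scheme and host, then match the full origin exactly.
    if not parsed.scheme or not parsed.hostname:
        return False
    return origin in ALLOWED_ORIGINS

# The local UI's handshake passes; an attacker's page does not.
print(is_allowed_origin({"Origin": "http://localhost:18789"}))  # True
print(is_allowed_origin({"Origin": "https://evil.example"}))    # False
```

That this check was absent in a tool with 180,000 stars is the point: popularity is not a security review.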
It gets worse. Researchers discovered 341 malicious skills in OpenClaw's plugin marketplace, including 335 that install Atomic Stealer malware on macOS. Anyone with a one-week-old GitHub account can upload a skill. Cisco called it "an absolute nightmare" from a security perspective. The founding CTO of npm called it "a security dumpster fire."
§ 04 Why "no" is the riskiest answer
IBM's 2025 Cost of a Data Breach Report found that 77% of employees paste data into generative AI prompts, and 82% of those do so from unmanaged accounts. Shadow AI breaches cost an average of $4.63 million and take 185 days to fully contain.
Consider what that means for a utility. An engineer is writing integration code for a DERMS deployment. They're stuck on a complex API interaction. If the utility provides no AI tools, the engineer opens ChatGPT in a browser, pastes the code (including hardcoded IP ranges, internal API schemas, and maybe a configuration snippet with substation identifiers), and gets their answer. That data now sits on external servers under a consumer terms of service, retained indefinitely, used for model training by default.
No policy prevents this. No firewall catches it. The engineer didn't intend to leak sensitive data. They intended to get their work done.
OpenClaw is the wrong tool for any professional environment. For a utility handling CEII-protected data and NERC CIP-scoped assets, it's a compliance violation waiting to happen. The lesson isn't "AI tools are dangerous." The lesson is that ungoverned AI tools are dangerous. Governed ones are a different story.
The question was never whether AI belongs in utility engineering. It was whether we'd govern it before or after the shadow AI breach.
A well-governed AI deployment with proper data controls is far safer than the uncontrolled shadow usage happening right now in your organization. The question is what governance looks like when you're protecting CEII, BES Cyber Systems, and BESS operational data.
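One concrete data control makes the contrast tangible: a governed deployment can scrub sensitive identifiers before a prompt ever leaves the machine. A minimal sketch, assuming hypothetical naming conventions (the `SUB-` substation ID format and the specific patterns here are illustrative, not any utility's real scheme):

```python
import re

# Illustrative patterns only -- a real deployment would tune these to its own
# conventions and pair them with access controls, not rely on regex alone.
REDACTIONS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[REDACTED_IP]"),
    (re.compile(r"\bSUB-[A-Z0-9]{3,}\b"), "[REDACTED_SUBSTATION]"),
    (re.compile(r"(?i)\b(api[_-]?key|token|password)\s*[:=]\s*[\w-]+"),
     r"\1=[REDACTED]"),
]

def scrub(prompt: str) -> str:
    """Replace sensitive identifiers before the prompt goes to a cloud API."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

snippet = "historian = connect('10.20.30.40', site='SUB-EAST12', api_key=abc123)"
print(scrub(snippet))
```

The engineer still gets their answer; the substation identifiers and credentials stay home. That trade, made systematically, is what separates governance from prohibition.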
§ 05 Next in the series
Part 02 walks through what CEII, NERC CIP, and BESS regulations actually say about AI tools, and how the data tiers of modern AI assistants map to compliance.
— Adam · adam@sgridworks.com · Feb 2026