
Integrating enterprise AI with operational technology systems promises a new level of resilience, agility and efficiency for factories and other physical operations.
Enterprise AI is increasing the value of data-centric business systems — ERP, supply chain management, business intelligence and customer engagement. But for companies with significant physical operations, a larger opportunity exists in integrating enterprise AI with operational AI — the intelligent, physical-centric systems that monitor, control and optimize real-time processes in industries such as manufacturing, logistics, energy, mining, agriculture and critical infrastructure. Integrating intelligence across all parts of an enterprise promises a new level of company-wide resilience, agility and efficiency.
Converging operational and enterprise AI isn’t just about smarter factories. It’s about smarter enterprises. Imagine manufacturing facilities that reconfigure production in response to demand shifts, logistics networks that autonomously reroute around disruptions, and infrastructure that self-monitors and self-corrects. AI agents that span enterprise planning and operational execution could compress decision cycles from days to minutes, transform planning from reactive to proactive and elevate workforces from simple execution to oversight.
Integrating Operational AI and Enterprise AI
Realizing this ambitious vision requires overcoming a persistent technical barrier — the infamous OT-IT gap. Despite decades of progress, operational technologies engineered for deterministic control in industrial environments remain largely isolated from the increasingly stochastic world of enterprise IT. Integration requires architectural strategies that safely and securely bridge this divide, while respecting each domain’s distinct characteristics. The goal is to enable smooth, secure, reliable bidirectional information flow across physical and digital domains while minimizing the complexity, development cost and technical debt of bespoke, one-off engineering projects. This article introduces a practical AI-first framework for enabling this flow, grounded in clear domain boundaries, event-driven interfaces and AI-native data strategies.
The OT-IT Gap
Let’s begin by exploring the OT-IT gap — although “chasm” is probably the more accurate word. Operational technology systems operate in the real world — not climate-controlled datacenters, but environments that are physically demanding, remote and often hostile to conventional computing. These industrial-grade systems must deliver high reliability and deterministic performance, often in real time, while withstanding extreme temperatures, vibration, power limitations and dirty or wet conditions. Many OT systems support mission-critical applications with strict requirements for safety, resilience, security and data sovereignty, usually without IT support. These constraints demand architecturally unique hardware and software tailored to specific operational contexts. To put it another way, OT systems are not scaled-down IT systems — and never will be. This enduring difference defines the OT-IT gap.
The diversity and specialization inherent in OT systems are essential features that require unique hardware implementations and software architectures. We cannot bridge the OT-IT gap by oversimplifying or trying to standardize away the realities of embedded product development.
Instead, we must modernize our approach to integration. The opportunity lies in applying proven, mainstream software architectural concepts — platform-based design, event-driven interfaces, software abstraction and minimal data transformation. Our challenge is to embrace OT diversity while enabling simple, efficient, scalable interoperability with enterprise IT systems. This is not a technology problem requiring breakthroughs. It’s a mindset shift from IoT-centric OT to AI-centric OT.
AI-Centric OT
Bridging the OT-IT gap is no longer merely about connecting IoT devices to dashboards or mobile apps. It is now a strategic enabler unifying enterprise and operational AI. Organizations are working to scale AI workloads across a continuum of environments — spanning clouds, regional datacenters, on-premises infrastructure and OT edge systems. But that continuum breaks at the OT discontinuity.
The discontinuity exists in software, not hardware. OEMs already offer servers engineered for industrial use, and OT-capable computing platforms such as the Nvidia Jetson AGX Thor or the Qualcomm Cloud AI 100 inference card can extend large AI models into isolated industrial environments. However, OT edge software requires specialized, domain-spanning tools and middleware. Platforms like Edge Impulse (now a Qualcomm company) help accelerate OT AI development, while various middleware providers offer high-productivity pipelines for OT-IT integration. Edge tools and middleware add significant value, but only within the constraints of existing architectures — adding convenience and accelerating development without addressing deeper structural challenges.
Architectural Foundation: Four Principles
Spanning OT and IT requires a secure, composable, AI-first continuum that extends from cloud to datacenter to physical infrastructure, without hardcoded translation layers or architectural dead ends. And it must do so while respecting the core differences between domains: OT is deterministic, mission-critical and environment-constrained, while IT is probabilistic, adaptive and designed for scale and flexibility.
Rather than forcing convergence or burying complexity in middleware, I advocate for clean, event-driven interfaces that respect domain boundaries while enabling intelligent coordination. This approach modernizes OT software architecture and unlocks scalable, AI-native workflows across the entire enterprise.
Four core architectural principles support this vision:
- Platform-based OT devices
- Modular, componentized applications
- Event-driven interfaces
- A “data as-is” strategy — using AI to integrate and normalize
Let’s consider each of these in turn.
Platform-based OT Devices
As explained in the OT-IT Gap section, the broad diversity of OT products has historically required extensive edge device customization. Developers typically modify or develop the entire embedded software stack — OS, networking, security and update mechanisms — in addition to building the device capabilities customers care about. This undifferentiated work slows development, increases cost and adds technical debt.
This situation is beginning to change as the embedded industry shifts from whole-stack customization to commercial off-the-shelf platforms that combine OT-ready embedded hardware with vendor-supported system software. COTS OT platforms let teams start developing applications on day one, often with little or no system-level coding. Developers focus on delivering value through application logic, while platform vendors provide secure operating systems, over-the-air updates and long-term support.
COTS OT platforms shift engineering focus from costly, undifferentiated system plumbing to rapid, scalable application innovation, with workflows built on secure, modern tools and AI-first methods.
Reality Check: Embedded silicon suppliers are investing heavily in platform software. Qualcomm is a clear example: it acquired a Linux OS provider (Foundries.io) in 2024, edge AI workflow tools (Edge Impulse) in March 2025 and a developer-centric embedded platform company (Arduino) in October 2025. It also partnered with BMW to build an advanced ADAS software stack, a big step toward software-defined vehicles (SDVs). NXP offers CoreRide, a competing SDV platform. Other chipmakers are also on this path, investing heavily in delivering fully functional platforms suitable for industrial development and deployment.
System software companies like Wind River and Golioth are also capitalizing on the embedded platform trend, offering complete development and deployment support for a variety of chips. “Factoring out” cross-platform embedded system development is a savvy software strategy, comparable to Microsoft’s consolidation of early PC OSes — but adapted to the highly diverse world of OT devices.
The days of bare-metal, build-from-scratch embedded development are ending, but not yet over. Enterprises taking advantage of this trend should hold suppliers accountable for delivering complete, standardized platforms with little or no bespoke system development.
Componentized Applications
This principle champions modular design in OT environments. On capable OT platforms, developers can assemble applications from loosely coupled, hardware-agnostic components, which are simpler to develop, test, update, maintain and reuse. Componentization reduces architectural complexity, accelerates delivery and ensures that applications, not platform software, define product logic.
Components also enable independent updates of AI models, control logic and helper modules at the edge, supporting iterative development without system redeployment.
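To make this concrete, here is a minimal sketch of what a componentized edge application can look like in code. The component names (SensorSource, InferenceComponent, EdgeApplication) and the swap-in model update path are illustrative assumptions, not tied to any particular OT platform or vendor toolchain.

```python
# Minimal illustrative sketch of a componentized edge application.
# All names here are hypothetical; the point is the loose coupling
# between hardware access, inference and event publication.
from dataclasses import dataclass
from typing import Callable, Protocol


class SensorSource(Protocol):
    def read(self) -> dict: ...


class InferenceComponent(Protocol):
    def infer(self, sample: dict) -> dict: ...


@dataclass
class EdgeApplication:
    """Application logic assembled from loosely coupled components."""
    source: SensorSource             # hardware access behind an interface
    model: InferenceComponent        # on-device AI model
    publish: Callable[[dict], None]  # event sink (broker, middleware, etc.)

    def step(self) -> None:
        sample = self.source.read()        # acquire raw operational data
        result = self.model.infer(sample)  # run local inference
        self.publish(result)               # emit an event downstream

    def swap_model(self, new_model: InferenceComponent) -> None:
        # An updated model can be dropped in without redeploying the rest
        # of the application or the platform software beneath it.
        self.model = new_model
```

Because each component sits behind a small interface, the AI model, control logic and helper modules can evolve on separate release cycles, which is exactly the independent-update property described above.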
Reality Check: Componentization is mature in IT, but OT systems lag due to constraints in compute, memory and deployment environments. Mainstream container and orchestration tools like Docker and Kubernetes are often too heavy for constrained edge devices and require cloud-based management infrastructure, which is unavailable in many OT environments. However, modular capabilities are emerging across multiple layers:
- OS distributions coupled with OTA platforms (e.g., Foundries.io) maintain application components, often packaged as OCI-compatible containers (Docker) for seamless, secure updates.
- AI pipelines (e.g., Edge Impulse) support modular model and data deployment.
- Middleware (e.g., ClearBlade) often provides frameworks for deploying microservices (e.g., in JavaScript).
- WebAssembly (WASM), via runtimes such as Atym’s, is a new option for lightweight, sandboxed, hardware-agnostic components on smaller platforms, including microcontrollers.
The shift toward componentized OT is uneven, but already in motion. Middleware, OS and tool vendors are stepping in to fill gaps, and some semiconductor companies are investing in these vendors.
Event-Driven Interfaces
This principle addresses how software components communicate across both OT and IT environments. Traditional architectures often rely on tightly coupled request-response APIs or periodic polling, approaches that introduce latency, complexity and fragility. Event-driven interfaces, by contrast, enable asynchronous, loosely coupled communication that supports real-time responsiveness and improves integration flexibility.
Event-driven design is especially well-suited for cyber-physical systems, where signals from the physical world (e.g., sensor readings, state changes and alerts) trigger intelligent decisions and actions across both operational and enterprise domains. These interfaces allow AI components to observe and respond to events across the entire enterprise, without tight coupling or constant synchronization.
As enterprises adopt agentic AI — autonomous or semi-autonomous agents that subscribe to events, make decisions based on enterprise-wide contexts and take action across digital and physical domains — event-driven architectures become even more valuable. Most importantly, event-driven interfaces preserve domain independence. OT systems can broadcast events without knowing how downstream IT systems will consume them, and vice versa. This loose coupling simplifies OT-IT integration and future-proofs AI-centric development.
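As a simple illustration, the sketch below shows the publish-subscribe pattern using MQTT with the paho-mqtt client library. The broker address, topic hierarchy, payload fields and alert threshold are all assumptions chosen for the example; the point is that the OT publisher and the enterprise-side consumer never reference each other directly.

```python
# Illustrative publish-subscribe sketch using MQTT via the paho-mqtt
# client (1.x callback API). Broker, topic and payload are hypothetical.
import json

import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

BROKER = "broker.plant.example"   # hypothetical plant-level broker
TOPIC = "site1/line3/vibration"   # hypothetical topic hierarchy


def on_message(client, userdata, msg):
    # An enterprise-side consumer reacts to operational events without
    # the OT publisher knowing it exists.
    event = json.loads(msg.payload)
    if event.get("rms_mm_s", 0) > 7.1:
        print("Raise a maintenance work order for", event["asset_id"])


client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)

# The OT device simply broadcasts what it observes.
client.publish(TOPIC, json.dumps({"asset_id": "pump-17", "rms_mm_s": 9.4}))
client.loop_forever()
```

The same decoupling applies whether the transport is MQTT, NATS or an enterprise event bus: the OT side emits what it observes, and any number of consumers, including AI agents, decide independently how to respond.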
Reality Check: Event-driven architectures are mature across enterprise systems, and ERP platforms increasingly use event triggers to drive decentralized workflows. In OT, protocols like MQTT and NATS support real-time coordination, while OPC UA and other semantic information models encode meaning at the source, enabling structured, interoperable data exchange.
However, loose coupling brings tradeoffs. Without shared schemas and semantic context, events can be misinterpreted or ambiguous. It’s easy to move data, but shared understanding still requires modeling, governance and versioning. Although OPC UA provides a useful foundation, AI-first agentic systems need additional layers to support asynchronous, multimodal and loosely coupled workflows.
Although event-driven architectures are not yet plug-and-play, enterprises and application developers can confidently leverage these techniques as schemas and interfaces rapidly evolve to meet the demands of integrated AI workflows.
Data As-Is
This principle reimagines how data flows across OT-IT boundaries. Rather than forcing operational systems to conform to rigid IT schemas, we advocate for a “data as-is” approach wherever feasible. In this model, AI systems ingest raw or lightly processed operational data, including unstructured and multimodal formats. Inference layers then contextualize these inputs, often fusing them with enterprise sources and cascading enriched outputs downstream for deeper analysis.
The “as-is” strategy isn’t universal — some operational data still requires normalization or preprocessing near the point of collection. But minimizing transformations unlocks flexible, low-code workflows that adapt to native data formats and accelerate integration across OT and IT domains.
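The sketch below illustrates the pattern under simplifying assumptions: raw maintenance log text is handed to an inference layer with essentially no preprocessing, and the model output, rather than a predefined schema, supplies the structure passed downstream. The run_llm function is a hypothetical stand-in for whatever local or hosted model an organization actually uses, and the log content is invented for illustration.

```python
# "Data as-is" sketch: unstructured operational text goes to an inference
# layer with minimal preprocessing; the enriched output flows downstream.
import json

RAW_EQUIPMENT_LOG = """\
crusher-07 bearing temp 92C, operator notes 'grinding noise'
crusher-07 bearing temp 97C, auto-derate engaged
"""


def run_llm(prompt: str) -> str:
    # Placeholder inference call; a canned response keeps the sketch
    # self-contained and runnable without credentials or a network.
    return json.dumps({"asset": "crusher-07", "risk": "high",
                       "action": "schedule bearing inspection"})


def contextualize(raw_text: str) -> dict:
    prompt = ("Extract the asset, risk level and recommended action from "
              "this maintenance log. Respond as JSON.\n\n" + raw_text)
    return json.loads(run_llm(prompt))  # enriched signal for downstream systems


print(contextualize(RAW_EQUIPMENT_LOG))
```

Note what is absent: no schema remapping, no translation middleware and no per-device integration code. The inference layer absorbs the variability of the source data.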
Reality Check: “Data as-is” isn’t just a simplification — it’s often a necessity. In brownfield environments, where adding new on-device translation layers isn’t practical, AI-powered tools can often ingest data as-is, without custom middleware or schema remapping. For example, some manufacturers train AI models directly on PLC data, and energy and mining firms use LLMs to extract insights from unstructured equipment logs. In greenfield deployments, AI-native pipelines are emerging that process and filter raw data locally before passing enriched signals downstream. These patterns are already reshaping industrial workflows, with advances in physical AI poised to accelerate the shift.
AI Is Reshaping OT
Enterprise AI is moving beyond dashboards and analytics, and beginning to shape how the physical world actually runs. But to create real impact, AI has to do more than observe and analyze. It needs to act. That means deploying intelligent agents spanning enterprise and operational domains, closing the loop between insight and execution.
This shift doesn’t require ripping out existing systems. In brownfield environments, we can make legacy systems smarter and more connected by using AI-driven, adaptive interfaces rather than hard-coded translation. At the same time, we need to design new OT platforms for low-friction AI integration from the start.
The four principles outlined here offer a practical path forward. Adopting them enables a foundation for systems that learn, adapt and act across the entire enterprise, paving the way for higher levels of industrial autonomy.
A final thought: Major industrial suppliers — Siemens, Rockwell, Schneider Electric, Honeywell, Hitachi and others — are already moving towards AI interoperability, each along its own path. As this transformation unfolds, customer and supplier conversations should go beyond short-term integration projects and align on shared architectural principles that enable modular, open, AI-first systems to function smoothly and securely across enterprise and operational domains. That means looking past the next generation of IoT hubs, gateways, core services, brokers and “IoT clouds” toward OT architectures that natively support intelligent agents.