The state of operational decision-making in multi-site facilities.
A field study across 40+ mid-market and enterprise operators running multi-site facilities, property, and field-service portfolios. Why the average enterprise takes 71 days to turn a signal into an action, and what the best operators do differently.
The average multi-site operator sits on more data than ever and makes the same five decisions slower than they did a decade ago. The bottleneck isn’t data. It’s what happens between the data and the decision, and everyone we interviewed already knew it.
Executive summary
Over the last six months we interviewed 43 directors and VPs of operations at mid-market and enterprise operators running multi-site facilities, property, and field-service portfolios across Canada and the US. The sample skews where MAIA sells: residential and commercial property operators running between 12 and 400 buildings, integrated facilities firms serving retail and institutional portfolios, and commercial field-service companies operating in five or more regions.
Five findings hold across every segment we studied. They are not surprising to the operators themselves; every leader we spoke to could name at least three of them off the top of their head. What is surprising is that nobody has a system that addresses them end-to-end. The same leaders who can articulate the problem with clarity consistently told us they were still solving it with a stack of dashboards, spreadsheets, and the patience of their best people.
This report lays out the five findings, what the top-quartile operators are doing differently, and what the next ten years of decision infrastructure will look like.
Signal-to-decision latency is 71 days on average, and getting worse.
The average multi-site operator takes 71 days to convert a meaningful operational signal into an executive-level action. We measured this by selecting ten recurring signal types (arrears spikes, compliance-deadline drift, SLA misses, vendor performance decay, energy anomalies, tenant churn risk, contractor-credential lapses, elevator and fire-system faults, work-order backlog thresholds, and staffing gaps) and asking operators how long, on average, each one took to reach an owner or executive in a form that produced a response. The median was 71 days. The top quartile averaged 19. The bottom quartile exceeded 140.
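The measurement above reduces to simple summary statistics over self-reported latencies. A minimal sketch of that reduction, with entirely illustrative numbers (the `reported` dict and its values are hypothetical, not the study data):

```python
from statistics import mean, median

# Hypothetical per-operator latency estimates (days), one value per
# signal type, as collected in interviews. Numbers are illustrative only.
reported = {
    "op_a": [12, 20, 18, 25, 15, 22, 19, 17, 21, 20],
    "op_b": [60, 75, 71, 80, 65, 70, 74, 68, 72, 77],
    "op_c": [150, 130, 145, 160, 120, 138, 142, 155, 148, 133],
}

# One latency figure per operator: the mean across their ten signal types.
per_operator = {op: mean(days) for op, days in reported.items()}

# Portfolio-level headline: median across operators.
latencies = sorted(per_operator.values())
headline = median(latencies)

# Quartile summaries: average of the fastest and slowest quarters.
q = max(1, len(latencies) // 4)
top_quartile = mean(latencies[:q])
bottom_quartile = mean(latencies[-q:])
```

With a real sample of 43 operators the quartile slices would hold roughly ten operators each rather than one; the arithmetic is otherwise identical.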
The gap between the two groups is not a technology gap. Both groups use the same PMS, the same FSM, the same Microsoft 365. The difference is that top-quartile operators have someone whose job is explicitly to sit between the systems and the decisions. In every case that person was expensive, scarce, and burning out. This is the market MAIA addresses.
73% of enterprise data never reaches a decision.
We asked operators to estimate, for each of their primary data sources, what fraction of the data generated each week feeds into a decision, any decision, at any level, within thirty days. The self-reported median was 27%. Enterprise systems play a lossy game of telephone: every hop, from sensor to system, from system to dashboard, from dashboard to meeting, from meeting to email, from email to action, drops signal. By the time anything reaches the person who can act, three out of four bits have been discarded, aged out, or routed to a distribution list nobody reads.
The operators who do best on this axis are the ones who have ruthlessly narrowed their operational dashboards. One VP of operations at a 200-building residential portfolio told us they had deleted 14 of their 16 dashboards over the previous 18 months, on the explicit theory that “fewer dashboards mean more decisions.” The narrowing worked. Their backlog compressed 22% in the next quarter.
Vendor rotation is effectively random in 82% of operators.
Vendor and contractor dispatch, who gets assigned the next work order, is a policy-heavy process in theory and an availability- and personality-driven process in practice. Of the 43 operators in our sample, 35 could describe an explicit fairness policy (e.g. no single vendor exceeds 25% of monthly volume), and 8 could show a system enforcing it. In the other 27, the policy exists on paper, and the actual allocation drifts toward whoever the dispatcher trusts, whoever is easiest to reach, or whoever bids lowest on the last comparable job. The consequences were consistent: uncontrolled vendor concentration, periodic fairness-complaint escalations, and, most importantly for the operator's bottom line, an inability to detect performance decay before it became a contract dispute.
The four operators in our sample with automated fairness audits were uniformly the ones running dispatch software with rule enforcement built in, not bolted on. The lesson: fairness is a runtime property, not a documentation property. If the system doesn't enforce it on every assignment, it's not happening.
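Enforcing a concentration cap at runtime means checking the rule at the moment of assignment, not in a quarterly report. A minimal sketch of that check, using the 25%-of-monthly-volume policy quoted above (the `pick_vendor` function and vendor names are hypothetical, not any vendor's actual dispatch API):

```python
from collections import Counter

MAX_SHARE = 0.25  # policy from the text: no vendor above 25% of monthly volume

def pick_vendor(candidates, monthly_assignments):
    """Return the first candidate whose next assignment would keep them
    at or under the monthly concentration cap; None if all would breach it."""
    counts = Counter(monthly_assignments)
    total_after = len(monthly_assignments) + 1  # volume including this job
    for vendor in candidates:
        if (counts[vendor] + 1) / total_after <= MAX_SHARE:
            return vendor
    return None

# This month so far: vendor A already holds 3 of 9 assignments (33%).
history = ["A", "A", "A", "B", "C", "C", "D", "D", "E"]
print(pick_vendor(["A", "B"], history))  # A would hit 4/10 = 40%, so "B" wins
```

The point is structural: because the cap is evaluated inside the assignment path, it cannot silently drift the way a paper policy does.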
Compliance horizons lag by 40+ days in most mid-market operators.
Every operator we interviewed had a statutory-obligation calendar: fire-code inspections, elevator TSSA/DOB filings, insurance renewals, contractor WSIB verifications, annual reviews. And every operator had missed one in the last twelve months. The median lag between the moment a compliance obligation became knowable (the inspection got scheduled, the policy got issued, the contractor got hired) and the moment it entered an operator’s compliance calendar was 43 days. The worst we saw was 128.
This is the most straightforwardly solvable of the five findings: compliance obligations have structure, deadlines, and a finite set of issuers. But the gap is real because compliance lives in contract language, regulator portals, vendor certifications, and one-off emails from insurers, not in any single system. Until those sources are wired into the compliance calendar automatically, the calendar is a lagging indicator of what the regulator already knows.
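The lag itself is easy to instrument once both dates are recorded: when the obligation became knowable, and when it entered the calendar. A minimal sketch with hypothetical obligation records (the names, dates, and `calendar_lags` helper are illustrative, not from the study):

```python
from datetime import date

# Hypothetical records: when each deadline became knowable vs. when it
# actually landed on the operator's compliance calendar.
obligations = [
    ("fire-code inspection", date(2026, 1, 5), date(2026, 1, 12)),
    ("elevator filing",      date(2026, 1, 3), date(2026, 3, 1)),
    ("WSIB verification",    date(2026, 2, 1), date(2026, 2, 2)),
]

def calendar_lags(records):
    """Days each obligation sat outside the calendar after it was knowable."""
    return {name: (entered - knowable).days
            for name, knowable, entered in records}

lags = calendar_lags(obligations)
stale = {n: d for n, d in lags.items() if d > 40}  # the 40+ day threshold
```

An operator tracking this continuously would see the 43-day median lag reported above as a distribution of per-obligation numbers rather than a surprise.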
Autonomy earns trust at the action level, not the decision level.
The most durable finding, and the one that surprised us most, is about how operators delegate authority to software. Every operator we interviewed had one or more auto-dispatch, auto-notify, or auto-escalate feature in their existing stack. None of them trusted those features with the decision itself. The autonomy they actually granted was always at the action level: “I’ll let the system send the notification, but I’ll decide who.” “I’ll let the system draft the N4 notice, but I’ll approve it.” “I’ll let the system propose the vendor, but I want to see the list.”
This is the right instinct. It’s also the exact architecture decision infrastructure should embrace, not as a limitation, but as a feature. The best operators we spoke to had found software that reliably did the actions well (drafting, notifying, escalating, scheduling) while visibly showing its reasoning on every single decision. They didn’t want the software to be smarter. They wanted it to be legible. Trust is earned at the action layer, in milliseconds, one legible decision at a time.
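The architecture this implies is a hard gate between proposing an action and executing it, with the reasoning carried alongside so the human can review it. A minimal sketch of that pattern, assuming nothing about any particular product (the class and function names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    """An action the system prepared but will not execute on its own."""
    kind: str                 # e.g. "notify", "draft_n4", "propose_vendor"
    payload: dict
    reasoning: list = field(default_factory=list)  # legible trail of why
    approved: bool = False

def execute(action: ProposedAction) -> str:
    # Decision-level authority stays with the human: nothing runs unapproved.
    if not action.approved:
        return f"pending approval: {action.kind}"
    return f"executed: {action.kind}"

draft = ProposedAction("draft_n4", {"unit": "4B"},
                       reasoning=["arrears > 60 days", "2 reminders sent"])
print(execute(draft))   # pending approval: draft_n4
draft.approved = True
print(execute(draft))   # executed: draft_n4
```

The `reasoning` list is the load-bearing field: it is what makes each proposed action legible before the human grants the approval that lets it run.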
What the top-quartile operators are doing differently
Three things showed up repeatedly in the operators who closed the gap. Each one is a choice, not a technology, which is why they matter. Any operator can adopt them tomorrow.
- 01. They treat the decision as the output, not the dashboard. Dashboards are a means, not the end. Top operators define what decisions they need each week and work backwards into which signals and which systems are required to produce them.
- 02. They ground every decision in its source. Every operational decision produced by a top-quartile operator cites its source: which document, which system, which rule. This makes decisions reviewable, auditable, and, crucially, reversible without finger-pointing when they turn out to be wrong.
- 03. They unify across systems rather than within them. The worst operators chase feature completeness within a single system. The best accept that their decision layer sits above their operational systems and is built specifically to reason across them.
Where the next decade goes
Every major category of enterprise software has, eventually, produced a layer above it that compressed the distance between its outputs and a decision. Finance got FP&A platforms on top of ERPs. Sales got CRM intelligence platforms on top of Salesforce. Security got XDR on top of SIEM. Operations is next.
We believe, based on this fieldwork and on the deployments we’re running, that the operations decision layer will look different from its predecessors in three ways. First, it will be built ontology-first, because operations are fundamentally entity-resolution problems. Second, it will be explanation-native rather than summary-native, because trust scales with legibility. Third, it will treat autonomy as an action-level property, not a decision-level one, because that is how the people who do the work already want to delegate.
MAIA is being built to be that layer. This report is a field study. The next will be a methodology paper on how we measure signal-to-decision latency in production, with a reproducible benchmark anyone can run against their own stack.
Methodology note
Between November 2025 and April 2026 we conducted 43 semi-structured interviews with directors and VPs of operations at multi-site operators in Canada and the US. Sample inclusion required an operator managing 12+ locations and generating at least 200 work orders per month. Signal-to-decision latency was measured by asking each operator to name three recent examples of each of the ten signal categories and estimate the time between detection and executive-level action. All figures in this report are rounded medians unless stated otherwise. A full methodology appendix with interview instruments, participant-segment breakdowns, and confidence intervals is available in the downloadable PDF edition.
This report is MAIA Intelligence’s first public research output. Subsequent editions will cover autonomy thresholds, compliance-horizon math, and ontology-as-infrastructure. If you run operations at a multi-site portfolio and want to be in the next cohort, we’d welcome the conversation.
