Condition-based maintenance that works: turning alerts into fewer breakdowns, not more noise

Estimated reading time: 9 minutes

What you will get from this article

  • A simple alert-to-action blueprint you can apply to two assets within a week
  • A way to cut alert fatigue and recover technician time
  • A starting scope built for mid-market plants with lean maintenance teams
  • Four measures that prove downtime reduction, not just alert volume

Condition-based maintenance sounds simple. Monitor asset health. Spot problems early. Fix them before they become breakdowns. In practice, CBM often delivers something less useful.

A stream of alerts. Nobody trusts them. Nobody owns them. Nobody acts on them.

Operators see noise. Maintenance teams see extra work. Leaders see no change in downtime. The problem is rarely the technology. It is the response. Alerts without clear actions, owners, and time windows give you a better warning system. Not a better reliability system.

What condition-based maintenance means in practice

By condition-based maintenance, we mean: act when asset data says to, not when the calendar says to. Sensors (vibration, temperature, pressure) detect degradation early. You intervene before failure. You skip maintenance when the asset is healthy. CBM sits between time-based preventive maintenance and fully predictive programmes. It is practical. It does not need advanced AI.

Why most CBM programmes produce noise instead of results

Most programmes are built around the sensor, not the response. Vibration monitoring goes live on a compressor. Thresholds are set. Alerts start firing. Then, quietly, things go wrong.

There is no defined response. Who acts on the alert? Within what time window? What does acting mean: inspect, lubricate, raise a work order, or escalate? Without answers, each alert becomes a burden. Technicians learn to ignore them. The system runs in the background, generating data nobody uses.

In a Plant Services survey of 166 plant respondents, only 44% said their teams could respond to real-time asset changes well or extremely well. That is not a sensor problem. It is a process problem.

Unplanned downtime ranked as a top-three challenge for 27% of those same sites. More undifferentiated alerts make that worse, not better.

Define what happens when an alert fires before you switch the sensors on.

The alert-to-action standard: four questions to answer first

Before any alert goes live, answer these four questions for it. If you cannot answer all four, do not activate it yet.

  1. What does this alert mean? Describe the failure mode plainly, with a lead time. Not “vibration anomaly on pump 4” — but “bearing wear, likely 3–6 weeks from failure.”
  2. Who owns the response? Name the role, not the person. “Day-shift maintenance lead” is sustainable. “Jamie” is not.
  3. What is the required action? Be specific. “Raise a planned work order within 72 hours” is an action. “Do something” is not.
  4. What is the response window? How long before escalation? Four hours? Twenty-four? The answer depends on criticality. There must be an answer.

These four questions are your alert-to-action standard. No new software needed. A meeting, a whiteboard, and the discipline to write it down before you go live.

Before activating any alert, define what it means, who owns the response, what the action is, and the response window. If you cannot answer all four, the alert is not ready to go live.
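The four-question standard can be captured as a simple data check before any alert goes live. The sketch below is illustrative only: the class, field names, and example values are assumptions, not a real monitoring system's schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlertDefinition:
    """One alert's answers to the four questions. Field names are illustrative."""
    meaning: Optional[str] = None           # failure mode plus lead time
    owner_role: Optional[str] = None        # a role, not a person
    required_action: Optional[str] = None   # a specific, verifiable action
    response_window_hours: Optional[int] = None  # time before escalation

def ready_to_activate(alert: AlertDefinition) -> bool:
    """An alert goes live only if all four questions have answers."""
    return all(v is not None for v in (
        alert.meaning, alert.owner_role,
        alert.required_action, alert.response_window_hours,
    ))

# A fully defined alert passes the gate; an incomplete one does not.
pump4 = AlertDefinition(
    meaning="bearing wear, likely 3-6 weeks from failure",
    owner_role="day-shift maintenance lead",
    required_action="raise a planned work order within 72 hours",
    response_window_hours=24,
)
print(ready_to_activate(pump4))  # True
print(ready_to_activate(AlertDefinition(meaning="vibration anomaly")))  # False
```

The point is not the code but the gate: an alert with any blank field stays deactivated, exactly as the standard requires.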

How to choose the first two failure modes to tackle

Start with two assets and two failure modes. Not twenty. The goal in phase one is to prove the process works, not achieve site coverage.

Pick assets where three things are true:

  • Failure is genuinely costly. No standby cover, or the breakdown stops a line.
  • A precursor signal exists. Bearing degradation announces itself over days or weeks. Catastrophic seal failure often does not. Choose failure modes where the signal is real and lead time is enough to act.
  • You have failure history. Real maintenance records make thresholds reliable. No history means guesswork.

For most mid-market plants, vibration monitoring on rotating equipment is the fastest route to a working pilot. Pumps, compressors, fans. The SIRI (Smart Industry Readiness Index) framework places condition-based monitoring at Band 3 of shop floor intelligence. That is the level where systems can spot deviations and suggest likely causes. Two assets at that level, reliably, beat ten assets halfway there.
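The three selection criteria above translate into a simple filter-then-rank exercise. This sketch assumes a hypothetical asset register; the asset names, fields, and figures are invented for illustration.

```python
# Hypothetical asset records; field names and values are assumptions.
assets = [
    {"name": "compressor A", "downtime_cost_per_hour": 4200, "has_precursor_signal": True,  "failure_records": 7},
    {"name": "pump 4",       "downtime_cost_per_hour": 3100, "has_precursor_signal": True,  "failure_records": 5},
    {"name": "seal unit B",  "downtime_cost_per_hour": 5000, "has_precursor_signal": False, "failure_records": 3},
    {"name": "fan C",        "downtime_cost_per_hour": 900,  "has_precursor_signal": True,  "failure_records": 0},
]

def pilot_candidates(assets, top_n=2):
    """Keep assets meeting the signal and history criteria, then rank by failure cost."""
    eligible = [a for a in assets
                if a["has_precursor_signal"] and a["failure_records"] > 0]
    return sorted(eligible, key=lambda a: a["downtime_cost_per_hour"], reverse=True)[:top_n]

for a in pilot_candidates(assets):
    print(a["name"])
# compressor A, then pump 4: seal unit B drops out (no precursor
# signal) and fan C drops out (no failure history), despite cost.
```

Note that the highest-cost asset does not make the pilot if its failure mode gives no usable warning. That is the discipline the criteria enforce.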

What it looks like in practice

A food and beverage plant was running vibration monitoring on six assets. The system generated 30–40 alerts a week. Nobody had time to review them. Breakdowns continued at the same rate.

The operations team spent one day on it. They picked two assets with the highest downtime cost. For each, they wrote a one-page alert-to-action document: two thresholds, a named role, a specific action, a response window, and an escalation path to the shift manager. They deactivated every other alert temporarily.

Within six weeks, the maintenance lead was acting on every alert from those two assets. Two planned interventions were completed. Neither asset had an unplanned failure in the following quarter. The team then used that record to expand scope to four more assets.

What changed was not the technology. It was the decision rights and the planning cadence.

Connecting CBM alerts to your planning process

An alert that sits outside your planning process will not reduce breakdowns. It adds to the list of things your team knows about but cannot schedule.

The fix is simple. Pre-build a response template in your CMMS (Computerised Maintenance Management System) for each alert type. When a bearing warning fires, the template already exists: job description, parts, time estimate, priority. The technician activates it, adds site notes, and it enters the planning queue. One step, not five.

Leading manufacturers schedule maintenance from condition data during planned downtime windows. For mid-market plants, this does not need advanced AI. It needs one thing: CBM alerts reviewed in your weekly planning meeting. Every week, the planner should see which alerts are open and whether work orders have been raised.
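The pre-built template idea can be sketched in a few lines. This is not a real CMMS API: the template structure, alert type, part numbers, and function names are all assumptions for illustration.

```python
import copy
from datetime import date

# Pre-built response templates, one per alert type. The schema is
# illustrative, not taken from any real CMMS.
RESPONSE_TEMPLATES = {
    "bearing_wear_warning": {
        "job_description": "Inspect and replace drive-end bearing",
        "parts": ["bearing 6205-2RS", "grease cartridge"],
        "estimated_hours": 3.0,
        "priority": "planned-72h",
    },
}

def raise_work_order(alert_type, asset, site_notes=""):
    """One step from alert to planning queue: copy the template, add context."""
    order = copy.deepcopy(RESPONSE_TEMPLATES[alert_type])
    order.update({"asset": asset, "site_notes": site_notes,
                  "raised_on": date.today().isoformat()})
    return order

wo = raise_work_order("bearing_wear_warning", "pump 4",
                      site_notes="Noise audible at drive end")
print(wo["priority"])  # planned-72h
```

The technician supplies only the asset and site notes; everything else was decided in advance. That is what collapses five steps into one.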

What to measure: not alert volume

Alert volume is the default metric for new CBM programmes. It is almost useless. High counts might mean early detection or thresholds set too low. Low counts might mean success or broken sensors. You cannot tell.

Track these instead, weekly, for your scoped assets.

  1. Failures avoided. For each closed CBM work order: was a failure prevented, or a false alarm? This is your primary measure.
  2. Planned vs. unplanned ratio. If CBM is working, planned work rises. Unplanned falls.
  3. Alert-to-action compliance. What percentage of alerts produced a work order within the response window? Low compliance is a process gap, not a technology gap.
  4. Unplanned downtime on scoped assets. Before vs. after, on a rolling 12-month window.

Alert volume tells you almost nothing about reliability improvement. Track these four outcome measures instead, and review them weekly in your planning meeting.
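Three of these measures fall out of the closed work orders directly. A minimal weekly roll-up might look like the sketch below; the record structure is an assumption, and the downtime-hours measure is omitted because it needs a rolling 12-month comparison from your downtime log rather than work-order data.

```python
# Hypothetical closed CBM work orders for one week's scoped assets.
closed_orders = [
    {"outcome": "failure_prevented", "planned": True,  "within_window": True},
    {"outcome": "false_alarm",       "planned": True,  "within_window": True},
    {"outcome": "failure_prevented", "planned": True,  "within_window": False},
    {"outcome": "breakdown",         "planned": False, "within_window": False},
]

def weekly_scorecard(orders):
    """Roll closed orders up into the first three scorecard measures."""
    total = len(orders)
    return {
        "failures_avoided": sum(o["outcome"] == "failure_prevented" for o in orders),
        "planned_ratio": sum(o["planned"] for o in orders) / total,
        "alert_to_action_compliance": sum(o["within_window"] for o in orders) / total,
    }

scorecard = weekly_scorecard(closed_orders)
print(scorecard)
# {'failures_avoided': 2, 'planned_ratio': 0.75, 'alert_to_action_compliance': 0.5}
```

Even a spreadsheet version of this roll-up, reviewed weekly, tells you more than any alert count.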

What to do on Monday

If you want to test whether your CBM programme has a working response process, start here.

  1. List every active alert threshold on your monitoring system.
  2. For each alert, write the name of the role that owns it and the action they must take.
  3. Deactivate any alert for which you cannot name both an owner and an action. It is not ready.
  4. Pre-build a CMMS response template for the most likely failure mode on your two highest-impact assets.
  5. Add CBM alert review to this week’s planning meeting agenda.

Takeaways

  • Define the alert-to-action standard before you activate any alert. Four questions: meaning, owner, action, window.
  • Start with two assets and two failure modes. Prove the process works. Then scale.
  • Pre-build CMMS templates for each alert type. One step from alert to scheduled work order.
  • Measure failures avoided and planned maintenance ratio. Not alert volume.
  • Review CBM status weekly. Reliability is a cadence, not a project.

One question

Condition-based maintenance fails most often not because the sensing is wrong. It fails because the response process is undefined. Fix the process first. The scope grows from there.
If your plant runs a CBM programme: what was the first failure mode you tackled? And what had to change in your planning process to make the alerts actionable?

Frequently asked questions

What is condition-based maintenance and how does it differ from preventive maintenance?

Condition-based maintenance (CBM) triggers maintenance activity based on real-time signals from an asset (vibration, temperature, pressure) rather than a fixed schedule. Preventive maintenance runs on a calendar: inspect every 30 days, replace every 500 hours. CBM is more efficient because it acts when the data says intervention is needed, not before and not after. The trade-off is that CBM requires a reliable signal, a calibrated threshold, and a defined response process. Without those three, it produces alert noise rather than reliability improvement.

How many assets should I include in a CBM pilot?

Start with two assets. Narrow scope lets you validate the alert-to-action process, build team confidence, and collect evidence of downtime reduction before you scale. Choose assets where: (1) unplanned failure is genuinely expensive or disruptive, (2) a detectable precursor signal exists, and (3) you have failure history on site to calibrate thresholds. For most mid-market plants, rotating equipment (pumps, compressors, fans) with known bearing failure patterns is the fastest route to a working pilot.

What causes alert fatigue in maintenance programmes and how do I fix it?

Alert fatigue has three common causes: thresholds set too low (everything triggers), no defined owner for each alert type, and no distinction between alerts requiring immediate action and those requiring monitoring. The fix is to write an alert-to-action standard before going live: for every alert, define what it means, who owns the response, what the action is, and the response window. Any alert you cannot answer all four questions for should not be active. Deactivate it until you can.

How do I prove my CBM programme is working?

Stop measuring alert volume. Measure these four instead:
1. Failures avoided (prevented vs. false alarm, per closed work order)
2. Planned vs. unplanned maintenance ratio for scoped assets
3. Alert-to-action compliance rate within the defined response window
4. Unplanned downtime hours on scoped assets, rolling 12-month comparison

Transform Your Operations with Nick Leeder & Co

Are you ready to elevate your engineering, manufacturing or sustainability initiatives? Partner with Nick Leeder & Co, an award-winning digital transformation coach, to boost your operational performance and reduce your environmental footprint. With over 25 years of experience, we specialise in guiding leaders like you through the complexities of Industry 4.0, ensuring sustainable growth and impactful results.

Every week we publish our Beyond the Buzzwords newsletter on LinkedIn. Hit the button to have your weekly dose of digital transformation without the hype and fluff delivered to your inbox.

Subscribe on LinkedIn

If you prefer less regular but more in-depth analysis, our blog is just the thing for you!

Join the ranks of industry innovators and leverage our expertise to unlock your factory’s hidden potential. Don’t miss the opportunity to work with seasoned professionals dedicated to your success.

Contact us today and ignite your digital transformation journey.

Learn more about our services.
