Why Are We Still Getting Calls About This?
A true-to-life story about the real reasons Incident Management fails — and what we can do about it.
☕ Monday, 9:02 a.m.
The first call came from Berlin. A logistics coordinator said the inventory system had frozen mid-order. “Again,” she added. Ten minutes later, another call came in — this time from Cologne. Same issue. By 9:30 a.m., the phones were ringing non-stop. The system was down across half the country. At the service desk, Lisa tried to keep calm. She followed the process: logged the calls, assigned categories, set priorities. But something was off. The incident queue was filling rapidly, yet nobody upstream was responding.
🎭 The Invisible Walls
Lisa reached out to the backend team. No reply. She escalated. Mark, a senior systems engineer, answered after ten minutes. “We’re looking into it,” he said. “Can you give me an ETA?” Lisa asked. “Look, we’ve got 50 other things on our plate. You’ll hear from us when there’s something to say.” By noon, the business teams were furious. Service desk agents were apologizing, stalling, and — quietly — venting frustration. It wasn’t that people didn’t care. It was that nobody was truly collaborating.
🔍 What Was Really Happening?
This wasn’t a technology problem. The escalation path was defined. SLAs were tracked. Roles were documented. But none of that mattered.
Because the way people worked hadn’t changed.
The mindset in the technical teams was, “It’s not my job to babysit incidents.” The behavior was passive avoidance. And the organization quietly accepted it.
🧊 Culture: The Iceberg That Sank Incident Management
What made this incident spiral wasn’t a broken system. It was the hidden rules of how people worked:
- Support teams were overloaded and focused only on project deadlines.
- No one saw value in documentation.
- There was no practice of linking incidents together or flagging trends.
- Store managers were blamed for “crying wolf” — even when real failures occurred.
Worst of all, it wasn’t the first time. The same system had failed two weeks earlier. Nobody remembered, because no one made sure the organization did.
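“Linking incidents together or flagging trends” doesn’t require a new tool; even a simple script over exported ticket data can surface a repeating failure like the one above. Here is a minimal sketch, assuming hypothetical incident records with an affected-system name and an open timestamp (none of these field names come from the story itself):

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical incident records: (system affected, time opened).
incidents = [
    ("inventory", datetime(2024, 5, 6, 9, 2)),
    ("inventory", datetime(2024, 5, 6, 9, 12)),
    ("payments",  datetime(2024, 5, 1, 14, 0)),
    ("inventory", datetime(2024, 4, 22, 10, 30)),  # the failure two weeks earlier
]

def recurring_systems(incidents, window=timedelta(days=30), threshold=2):
    """Flag systems with more than `threshold` incidents inside `window`."""
    cutoff = max(t for _, t in incidents) - window
    recent = Counter(sys for sys, t in incidents if t >= cutoff)
    return [sys for sys, n in recent.items() if n > threshold]

print(recurring_systems(incidents))  # ['inventory']
```

The point isn’t the code; it’s that someone has to own the habit of looking. Without that, the organization’s memory resets after every ticket is closed.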
🧭 So What Now?
The biggest mistake in Incident Management isn’t the lack of structure. It’s the belief that structure alone is enough. Organizations often write detailed processes and expect human behavior to follow. But unless that process is supported by shared responsibility, trust, and communication — it’s just paper. In this case, the post-incident review revealed:
- Repeated issues with no memory
- Incident categories that meant nothing
- Specialists who only acted when shouted at
- And support agents left apologizing without answers
🛠 What Actually Helped
Eventually, what moved the needle wasn’t a new tool or ticketing logic. It was a shift in perspective.
- Teams from across the service chain sat down together.
- They acted out the incident from each role’s point of view — including the store manager.
- For the first time, backend engineers saw the impact of silence.
- People spoke. Others listened.
- Agreements were made.
The change didn’t happen overnight. But it started with something simple: empathy. From that point on, incidents weren’t just “tickets” — they were signals. And people responded differently.
🧩 Final Thought
Incident Management doesn’t collapse because people don’t care. It collapses because the conditions around them make caring harder than ignoring. If you’re wondering why the same issues keep returning, maybe the question isn’t “what’s the root cause of the incident?” Maybe it’s “what’s the root cause of how we’re responding?” That’s not a process question. It’s a people question.
Inspired by the book “ABC of ICT” by Paul Wilkinson and Jan Schilt, which explores how attitude, behavior, and culture impact IT success.
