Managing improvement is not about chasing perfection — it is about systematically identifying where performance falls short of what customers need and closing those gaps with structured, measurable interventions. Unit 8607-501 tests whether you can diagnose quality failures in your organisation, honestly evaluate your own role in the quality chain, and then plan and deliver a real improvement project with measurable outcomes.
This assignment example follows a customer services manager in a 200-person facilities management company through a genuine improvement project: reducing the response time for reactive maintenance requests, which had been consistently missing the contractual service-level agreement. The example demonstrates how each AC builds on the last — from organisational diagnosis to personal evaluation to planning, implementation, and impact measurement.
Critical assessment demands more than describing what the organisation does — it requires weighing evidence of quality management against what customers actually experience, identifying where systems succeed and where they fail despite stated intentions.
Quality management infrastructure. The organisation holds ISO 9001:2015 certification, which was renewed in October 2024 following an external audit. The quality management system (QMS) includes documented procedures for service delivery, a complaints register, a corrective action process, and quarterly management review meetings. On paper, this represents a robust framework. Oakland (2022) argues that ISO 9001 provides the architecture for quality management but not the culture — the standard ensures processes exist without guaranteeing they work effectively in practice. This distinction is central to assessing the organisation’s actual effectiveness.
Where the system works. Planned preventive maintenance (PPM) — the scheduled, predictable element of the service — performs well. PPM completion rates averaged 94% across all contracts in 2024-2025, exceeding the 90% contractual target. Customer satisfaction surveys for PPM services returned a mean score of 4.2 out of 5.0 (n=340 responses, annual survey 2025). The structured nature of PPM — known schedules, pre-allocated resources, predictable workloads — plays to the QMS’s strength of documented procedures and systematic planning.
Where the system fails. Reactive maintenance — unplanned repairs triggered by equipment failures, building defects, or tenant complaints — tells a different story. The contractual SLA requires an initial response within four hours for Priority 2 calls and a fix within 24 hours. Over the twelve months to March 2025, the four-hour response SLA was met on 71% of occasions — a 29% failure rate that generated 47 formal complaints and three contractual penalty deductions totalling £18,400. The root causes are structural rather than individual: reactive jobs are allocated via a manual spreadsheet system managed by two coordinators, there is no automated escalation when a job approaches its SLA deadline, and engineer workload visibility is limited to verbal updates during morning briefings.
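Both the compliance figure and the missing escalation step can be expressed as simple, mechanical checks against the job log. The sketch below is illustrative only: the job records, field names (logged_at, responded_at) and the two-hour warning threshold are assumptions for the purpose of the example, not the organisation's actual system. It shows how four-hour response compliance could be calculated and how an open job nearing its SLA deadline could be flagged automatically rather than depending on verbal updates at morning briefings.

```python
from datetime import datetime, timedelta

# Hypothetical Priority 2 job records -- field names are illustrative assumptions.
jobs = [
    {"ref": "RM-1041", "logged_at": datetime(2025, 3, 3, 9, 15),
     "responded_at": datetime(2025, 3, 3, 12, 40)},   # responded in 3h25 -> within SLA
    {"ref": "RM-1042", "logged_at": datetime(2025, 3, 3, 10, 0),
     "responded_at": datetime(2025, 3, 3, 15, 30)},   # responded in 5h30 -> SLA missed
    {"ref": "RM-1043", "logged_at": datetime(2025, 3, 3, 13, 20),
     "responded_at": None},                            # still open, no response yet
]

SLA = timedelta(hours=4)       # Priority 2 initial-response SLA
WARNING = timedelta(hours=2)   # escalation threshold (a 'two-hour warning')

def sla_compliance(jobs):
    """Share of responded jobs where the initial response met the four-hour SLA."""
    responded = [j for j in jobs if j["responded_at"] is not None]
    met = sum(1 for j in responded if j["responded_at"] - j["logged_at"] <= SLA)
    return met / len(responded) if responded else None

def jobs_needing_escalation(jobs, now):
    """Open jobs with less than two hours remaining before the SLA is breached."""
    return [j["ref"] for j in jobs
            if j["responded_at"] is None and now - j["logged_at"] >= SLA - WARNING]

print(f"Four-hour compliance: {sla_compliance(jobs):.0%}")
print("Escalate now:", jobs_needing_escalation(jobs, datetime(2025, 3, 3, 16, 0)))
```

In practice this logic would sit inside whatever allocation platform the organisation adopts; the point is that escalation can be a rule applied continuously to live data rather than a judgement made once a day.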
The gap between PPM effectiveness and reactive maintenance failure illustrates what Dale, Bamford, and Van der Wiele (2024) term the ‘quality paradox’ — organisations that invest heavily in systematic quality for predictable work often underinvest in the adaptive quality systems needed for unpredictable demand. The QMS addresses what can be planned but not what cannot.
Customer voice. A critical assessment must also examine whether quality is defined from the organisation’s perspective or the customer’s. The ISO 9001 framework centres on conformance to specification — did the organisation do what it said it would do? However, customer feedback reveals a different quality dimension. Twelve of the 47 complaints specifically referenced communication failures rather than response time: customers reported not knowing when an engineer would arrive, not receiving updates when jobs were delayed, and not being informed when a job was completed. The organisation measures SLA compliance (a process metric) but does not systematically measure customer experience during reactive service delivery (an outcome metric). This represents a significant gap — the organisation is managing quality against its own definition rather than the customer’s (Parasuraman, Zeithaml and Berry’s SERVQUAL model, as discussed by Wilson et al., 2021).
Self-evaluation must be evidence-based rather than impressionistic. Three sources inform this assessment: my annual performance review (January 2025), direct feedback from four contract managers who rely on my team’s reactive maintenance performance, and a self-administered skills audit against the EFQM competency framework completed in February 2025.
Strengths. My performance review identifies two quality-related strengths. First, complaint resolution: of the 47 formal complaints received in 2024-2025, I personally managed the investigation and response for 31 and achieved a 90% customer satisfaction rate with the resolution provided (measured by post-resolution callback survey). Contract managers confirm that when I am directly involved in complaint handling, outcomes are consistently positive. Second, team development: I introduced a monthly quality review session for the coordination team in June 2024, during which we analyse SLA failures and identify recurring patterns. This has produced three specific process changes (revised job categorisation criteria, introduction of a ‘two-hour warning’ flag for approaching SLA deadlines, and a standard customer communication template) that the team now uses independently.
Weaknesses. The skills audit against the EFQM framework reveals two significant gaps. First, data-driven decision-making: I tend to respond to quality failures reactively — investigating after a complaint arrives — rather than using trend data to anticipate and prevent failures. The monthly quality review sessions analyse individual incidents but do not aggregate data to identify systemic patterns. Deming’s (as cited by Moen and Norman, 2021) emphasis on statistical process control — using data to distinguish between common cause variation and special cause variation — exposes a fundamental gap in my quality management approach. I treat every SLA failure as a special cause (something went wrong in this specific case) when the 29% failure rate suggests a common cause (the system itself is inadequate).
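Deming's distinction can be made concrete with a simple p-chart. The sketch below uses invented monthly figures (the actual monthly job counts are not reported in this example) purely to show the mechanics: the control limits are derived from the overall failure proportion, and a month only signals a special cause if it falls outside those limits. At a stable failure rate of around 29%, every month tends to sit inside the limits, which is exactly the signature of a common-cause, system-level problem rather than a string of one-off incidents.

```python
import math

# Invented monthly reactive-maintenance figures for illustration only:
# (Priority 2 jobs logged, four-hour SLA failures) for twelve months.
monthly = [(82, 25), (76, 20), (91, 28), (88, 24), (79, 26), (84, 23),
           (90, 27), (77, 22), (85, 25), (93, 29), (80, 21), (86, 26)]

total_jobs = sum(n for n, _ in monthly)
total_failures = sum(f for _, f in monthly)
p_bar = total_failures / total_jobs          # centre line: overall failure rate

for month, (n, failures) in enumerate(monthly, start=1):
    p = failures / n
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl, lcl = p_bar + 3 * sigma, max(0.0, p_bar - 3 * sigma)
    signal = "special cause?" if not (lcl <= p <= ucl) else "common cause"
    print(f"Month {month:>2}: failure rate {p:.0%} "
          f"(limits {lcl:.0%}-{ucl:.0%}) -> {signal}")
```

Because every month in this illustrative dataset falls within the control limits, investigating individual failures amounts to tampering in Deming's sense; only a change to the allocation system itself can shift the centre line.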
Second, upward influence: I have identified the reactive maintenance coordination system as the primary barrier to quality improvement, but I have not effectively communicated this to senior management. My business case for a digital job allocation system was submitted in September 2024 but has not progressed beyond the initial proposal stage. Feedback from the operations director suggests the proposal lacked financial rigour: ‘The case identified the problem clearly but didn’t quantify the cost of doing nothing versus the cost of the solution.’ Buchanan and Badham (2023) describe this as a failure of political skill — the ability to frame improvement proposals in language and evidence that resonates with decision-makers’ priorities. My technical analysis of the quality problem was sound; my ability to translate it into a compelling business case was not.
Rather than treating the plan as a fixed sequence of one-off deliverables, PDCA embeds continuous refinement into the improvement process itself.

Problem definition. The four-hour response SLA for Priority 2 reactive maintenance calls was met on only 71% of occasions in 2024-2025 (target: 95%). Root cause analysis using the '5 Whys' technique identified three contributing factors: (1) job allocation relies on a manual spreadsheet with no real-time visibility of engineer availability, (2) there is no automated escalation when jobs approach SLA deadlines, and (3) customer communication during service delivery is ad hoc rather than systematic.

Improvement objectives. Following the SMART framework:

Primary: Increase four-hour response SLA compliance from 71% to 90% within six months of implementation (by December 2025).

Secondary: Reduce formal complaints related to reactive maintenance by 40% (from 47 to a maximum of 28 annually).

Tertiary: Achieve a customer communication satisfaction score of 4.0 or above (against the current baseline of 3.1 recorded in the March 2025 targeted survey).

Intervention design. The improvement plan has three integrated components:

Component 1: Digital job allocation system. Replace the manual spreadsheet with a cloud-based reactive maintenance platform (after a vendor evaluation in April 2025, the organisation selected ServiceM8 based on cost, functionality, and integration with the existing asset register). This addresses root cause 1 by providing real-time visibility of engineer...