Quality of Investigation (QoI)

Conifers team

Key Insights: What You Need to Know About Quality of Investigation (QoI)

  • Quality of Investigation (QoI) is a metric introduced by Conifers to measure how thoroughly a Security Operations Center handles each incident, tracking whether analysts collect sufficient evidence, follow structured investigation steps, and document findings in a way that supports both response and post-incident review.
  • QoI addresses a gap that traditional SOC metrics like Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR) leave open: speed metrics tell you how fast an incident was closed, but QoI tells you whether it was actually understood before it was closed.
  • The Conifers 2021 QoI Whitepaper established the foundational scoring criteria for QoI, linking investigation thoroughness to downstream outcomes including recurrence rates, missed lateral movement, and incomplete containment.
  • Quality of Investigation scores are particularly meaningful in high-volume SOC environments. According to Gartner's SOC Efficiency Study 2023, SOC teams processing large alert volumes face compounding risk when triage speed is prioritized over investigation depth.
  • The SANS Institute Incident Handling Report 2022 found that incomplete investigations are a leading contributor to recurring incidents, a pattern QoI scoring is specifically designed to detect and correct before recurrence happens.
  • QoI is not a fixed formula across all environments. The right scoring criteria vary by organization size, threat profile, and the tools available to analysts, which means QoI implementations require calibration rather than out-of-the-box deployment.
  • AI-driven investigation platforms like Conifers CognitivSOC can automate the data collection and enrichment steps that most commonly cause QoI scores to fall short, without replacing the analyst judgment that gives those scores meaning.

What Is Quality of Investigation (QoI) in Security Operations?

When your SOC's detection logic can't keep up with the complexity of modern threats, the question isn't just whether alerts are being closed; it's whether the incidents behind those alerts are being genuinely understood. Quality of Investigation (QoI) is a metric that answers that question directly. Introduced by Conifers as part of its framework for measuring incident handling thoroughness, QoI scores how completely an analyst investigates an incident before closing or escalating it, examining factors like evidence collection, contextual enrichment, timeline reconstruction, and documentation quality.

The practical problem QoI addresses is straightforward. A SOC Manager receiving 500 alerts per day faces constant pressure to reduce queue depth and hit response time targets. Without a metric that separately tracks investigation quality, analysts can achieve strong MTTD and MTTR numbers by closing alerts quickly without fully investigating them. QoI makes that tradeoff visible. It gives security leadership a way to see not just that incidents were handled, but whether they were handled in a way that actually reduced risk. An incident closed with a low QoI score is a potential breach waiting to recur.

QoI doesn't replace existing SOC performance metrics. It completes them. Where MTTD measures detection speed and MTTR measures response speed, QoI measures the depth of understanding achieved between those two points. For enterprise security teams and MSSPs managing complex environments, that depth is what separates an investigation that stops an attacker from one that merely delays them.

Core Concepts Behind Quality of Investigation (QoI)

The Thoroughness Problem in Modern SOC Operations

SOC analysts are trained to investigate thoroughly, but the conditions they work in often make thoroughness the first casualty of volume. When an analyst is working through a queue of 50 open alerts, each investigation competes with every other for time. The natural response is to resolve what can be resolved quickly and move on. QoI exists to measure what gets lost in that compression.

Thoroughness in QoI terms isn't a subjective judgment about analyst effort. It's a structured score built from observable investigation behaviors: Were the right data sources queried? Was the affected asset's history checked? Did the analyst correlate this event with prior alerts on the same endpoint? Was the scope of potential impact documented? Each of these steps can be tracked, and each gap in the checklist contributes to a lower QoI score. That structure is what makes QoI actionable rather than aspirational.

QoI vs. Traditional SOC Metrics

Speed metrics have dominated SOC performance measurement for years, partly because they're easy to calculate and partly because executives understand them. But the SANS Institute Incident Handling Report 2022 identified a consistent pattern: organizations with strong MTTR numbers still experienced recurring incidents at high rates, and incomplete initial investigations were a primary contributing factor. QoI is designed to surface exactly this problem.

The distinction matters most at the tail end of the alert queue. High-priority incidents typically get deep attention regardless of QoI tracking because the stakes are obvious. Medium and low-priority alerts are where investigation quality tends to collapse under volume pressure. And it's frequently from those lower-priority alerts that advanced persistent threats build their footholds. (This is the part that makes QoI uncomfortable to introduce in a SOC: it often reveals that your best closure rates are hiding your worst investigation habits.)

Scoring Criteria and Weighting

QoI scores are constructed from weighted criteria that reflect the investigation steps most likely to catch missed threat context. The Conifers 2021 QoI Whitepaper outlines a scoring model that weights evidence collection, enrichment steps, timeline construction, and documentation completeness as core components. Each component can be scored independently, which allows SOC managers to see not just overall QoI but where investigation quality specifically breaks down.

Weighting decisions aren't universal. An organization investigating primarily endpoint threats will weight host-level enrichment steps differently than one focused on network intrusions. This is one area where QoI implementation genuinely depends on context, and teams that apply a generic scoring template without calibrating to their environment tend to get misleading results early on.
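The weighted-component structure described above can be sketched in a few lines. This is a minimal illustration, not the Conifers scoring model: the component names, weights, and 0-to-1 score scale are all hypothetical choices a team would replace during calibration.

```python
# Illustrative weighted QoI score. Component names and weights are
# hypothetical examples, not the Conifers whitepaper's scoring model.

COMPONENT_WEIGHTS = {
    "evidence_collection": 0.30,
    "enrichment": 0.25,
    "timeline": 0.25,
    "documentation": 0.20,
}

def qoi_score(component_scores: dict[str, float]) -> float:
    """Combine per-component scores (each 0.0-1.0) into one weighted QoI score."""
    total = 0.0
    for component, weight in COMPONENT_WEIGHTS.items():
        # A missing component counts as zero: an unscored step is a gap.
        total += weight * component_scores.get(component, 0.0)
    return round(total, 3)

# Example: strong evidence collection, weak documentation.
scores = {"evidence_collection": 1.0, "enrichment": 0.8,
          "timeline": 0.6, "documentation": 0.2}
print(qoi_score(scores))  # 0.69
```

Because each component is scored independently before weighting, a manager can inspect the per-component values to see where quality breaks down, not just the aggregate number.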

The Relationship Between QoI and Alert Fatigue

Alert fatigue and low QoI scores are often symptoms of the same underlying problem: too many alerts, too few analysts, and insufficient automation to close the gap. But they're distinct in an important way. Alert fatigue is about analyst attention and morale. QoI is about investigation outcomes. You can have a team that isn't fatigued but still produces low QoI scores because their investigation process is poorly structured. You can also have a fatigued team that produces reasonable QoI scores because they have good tooling and clear investigation checklists.

QoI as an Organizational Learning Signal

One underused application of QoI data is longitudinal trend analysis. When QoI scores are tracked over time and across analyst tiers, they create a picture of where investigation knowledge is concentrated and where it's thin. If Tier 1 analysts consistently score low on enrichment steps for a specific alert category, that's a training signal. If QoI scores drop across the board following a tool change or staffing shift, that's a process signal. The institutional knowledge embedded in QoI scoring criteria becomes a guide for building better analysts, not just measuring current ones.
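A trend analysis of this kind reduces to grouping scores by tier and alert category and flagging the groups that fall below a threshold. The record format, tier labels, and threshold below are illustrative assumptions, not a prescribed schema.

```python
from collections import defaultdict

# Hypothetical incident records: (analyst_tier, alert_category, qoi_score).
RECORDS = [
    ("tier1", "phishing", 0.85), ("tier1", "phishing", 0.80),
    ("tier1", "credential_access", 0.40), ("tier1", "credential_access", 0.50),
    ("tier2", "credential_access", 0.82),
]

def training_signals(records, threshold=0.6):
    """Return (tier, category, avg) groups whose average QoI is below threshold."""
    groups = defaultdict(list)
    for tier, category, score in records:
        groups[(tier, category)].append(score)
    return sorted(
        (tier, cat, round(sum(s) / len(s), 2))
        for (tier, cat), s in groups.items()
        if sum(s) / len(s) < threshold
    )

print(training_signals(RECORDS))  # [('tier1', 'credential_access', 0.45)]
```

Here the Tier 1 weakness on a single alert category surfaces as a concrete training signal, while the same category handled at Tier 2 scores well.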

Implementing Quality of Investigation (QoI) in Your SOC

Defining Your Investigation Baseline

Before QoI scores can be meaningful, a SOC needs to define what a complete investigation looks like for each major alert category. This means documenting the specific data sources that should be queried, the enrichment steps that should be completed, and the minimum documentation standard required before an alert can be closed. This baseline work is the most time-consuming part of a QoI implementation, and it's also the most valuable because it forces explicit agreement on what good investigation practice actually means in your environment.

Teams that skip this step and try to apply generic QoI criteria tend to hit resistance from analysts who feel the scoring doesn't reflect real investigation complexity. The baseline definition process is also a useful opportunity to surface undocumented investigation practices that experienced analysts apply intuitively but have never formalized. Those practices belong in the scoring criteria.

Integrating QoI Scoring Into Existing Workflows

QoI scoring works best when it's embedded in the investigation workflow rather than applied as an after-the-fact audit. If analysts have to submit a separate QoI assessment after closing an incident, the scoring becomes a compliance task rather than a quality guide. The more effective approach is to build QoI checklists directly into the incident ticket or investigation interface, so each step is either confirmed or flagged as the investigation progresses.

This integration question connects directly to the handoff protocols between analyst tiers. When a Tier 1 analyst closes or escalates an incident, the QoI score at that handoff point tells the receiving analyst exactly what investigation work has been done and what gaps exist. That continuity is particularly valuable for MSSPs managing incidents across multiple client environments where context doesn't always travel with the ticket.
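One way to picture the handoff payload: derive a completion score and an explicit gap list from the in-ticket checklist, so the receiving tier sees what remains. The step names and the equal-weight scoring are illustrative assumptions.

```python
# Sketch of a tier-handoff summary derived from an in-ticket QoI checklist.
# Step names are hypothetical; real checklists vary per alert category.

def handoff_summary(checklist):
    """Split the checklist into done/outstanding so the next tier sees the gaps."""
    done = sorted(step for step, ok in checklist.items() if ok)
    outstanding = sorted(step for step, ok in checklist.items() if not ok)
    return {
        "qoi_at_handoff": round(len(done) / len(checklist), 2),
        "completed": done,
        "outstanding": outstanding,
    }

summary = handoff_summary({
    "evidence_collected": True,
    "host_history_checked": True,
    "lateral_movement_scoped": False,
    "timeline_documented": False,
})
print(summary["qoi_at_handoff"])   # 0.5
print(summary["outstanding"])      # ['lateral_movement_scoped', 'timeline_documented']
```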

Automating the Data Collection Steps That Drag QoI Down

Most QoI shortfalls aren't caused by analysts skipping investigation steps intentionally. They're caused by those steps taking too long. Querying five different data sources to enrich a single alert can take 20 minutes manually. When an analyst has 30 alerts to work through, that math doesn't support thorough investigation. Contextual enrichment automation addresses this directly by pre-populating investigation tickets with the data that would otherwise require manual collection.

When enrichment is automated, analysts spend their time on the judgment-intensive steps that actually require human reasoning, while QoI scores reflect genuine investigation decisions rather than data collection fatigue. This is where AI-assisted SOC platforms create measurable QoI improvement without changing analyst headcount.
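The pre-population idea can be sketched as a function that attaches enrichment context to the ticket before the analyst opens it. The three lookup functions are stand-ins for real data-source queries (EDR, identity, threat intel); their names and return shapes are illustrative only.

```python
# Sketch of pre-populating an investigation ticket with automated enrichment.
# All lookup functions are hypothetical stubs standing in for real queries.

def host_history(hostname):
    return {"prior_alerts": 2, "last_reimage": "2024-01-10"}

def user_context(username):
    return {"department": "finance", "privileged": False}

def threat_intel(indicator):
    return {"known_bad": True, "campaigns": ["example-campaign"]}

def enrich_ticket(alert):
    """Attach enrichment context so the analyst starts with data, not queries."""
    return {
        **alert,
        "enrichment": {
            "host": host_history(alert["hostname"]),
            "user": user_context(alert["username"]),
            "intel": threat_intel(alert["indicator"]),
        },
    }

ticket = enrich_ticket(
    {"id": "ALERT-1042", "hostname": "fin-ws-07",
     "username": "jdoe", "indicator": "198.51.100.7"}
)
print(ticket["enrichment"]["intel"]["known_bad"])  # True
```

The 20 minutes of manual querying collapses into work done before the ticket is opened, leaving the analyst's time for the judgment steps QoI actually measures.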

Reporting QoI to Security Leadership

QoI metrics need different reporting frames depending on the audience. For SOC managers, per-analyst and per-alert-category QoI trends are the most actionable view. For CISOs and Directors of Cybersecurity, aggregate QoI trends over time and their correlation with incident recurrence rates tell the story that matters at the board level. For MSSP clients, QoI data provides evidence that the service they're paying for delivers consistent investigation depth, not just fast ticket closure. See how SOC metrics and KPIs fit together in practice.

Benefits of Tracking Quality of Investigation (QoI)

Closing the Gap Between Closure Rate and Actual Risk Reduction

The most direct benefit of QoI tracking is that it separates the appearance of effective incident handling from the reality of it. A SOC with a 98% alert closure rate and average QoI scores in the bottom quartile is not a well-functioning SOC. It's a SOC that's generating documentation without generating security outcomes. QoI makes that distinction auditable. Security leaders can point to QoI trends when justifying investment in analyst training, tooling, or staffing, because QoI directly links investigation behavior to the risk of missed threats.

Reducing Incident Recurrence Through Better Initial Investigations

The SANS Institute Incident Handling Report 2022 drew a direct line between incomplete investigations and recurring incidents. When an analyst closes an alert without fully scoping the affected environment, without checking for lateral movement, or without confirming that the initial access vector has been addressed, the conditions that enabled the incident remain in place. QoI scoring identifies those gaps at the investigation stage, before they become the conditions for the next breach. And for organizations that track QoI over multiple quarters, the correlation between rising QoI scores and falling recurrence rates is typically the most persuasive internal argument for sustaining the program.

Building a Measurable Investigation Capability

QoI gives security teams something that most SOC quality programs lack: a quantifiable baseline for investigation capability that can be tracked, compared, and improved. New analysts can be benchmarked against team averages. Investigation playbooks can be evaluated by their effect on QoI scores. Tool changes can be assessed by whether they move QoI in the right direction. This turns investigation quality from a subjective manager assessment into an objective performance dimension, which matters both for internal development programs and for demonstrating SOC maturity to auditors and clients.

Challenges in Implementing Quality of Investigation (QoI)

When Scoring Criteria Don't Match Investigation Reality

A common early symptom of a poorly calibrated QoI implementation is analyst pushback that feels emotional but is actually technical. When analysts argue that the scoring criteria don't reflect real investigation complexity for certain alert types, they're often right. A QoI checklist built around network-layer investigation steps will produce misleading scores when applied to identity-based incidents. Getting the criteria wrong doesn't just produce bad data. It erodes analyst trust in the metric, which makes future QoI improvement efforts harder to drive. The fix requires going back to the baseline definition process and rebuilding scoring criteria with analyst input for each major alert category.

The Speed-Quality Tension in High-Volume Environments

Consider a SOC Manager who introduces QoI scoring on a Monday morning and by Wednesday is watching MTTR numbers climb as analysts spend more time on each investigation. This tension is real and it doesn't resolve itself automatically. Gartner's SOC Efficiency Study 2023 found that SOC teams in high-volume environments face a genuine tradeoff between investigation depth and throughput that tooling alone can't eliminate. The practical answer involves a combination of automation for data collection steps (so thoroughness doesn't cost as much time), incident confidence scoring to prioritize which alerts get deep QoI attention, and explicit management commitment to accepting some MTTR increase in exchange for better QoI outcomes.

Sustaining QoI Programs Through Analyst Turnover

QoI programs built on analyst expertise rather than documented criteria don't survive turnover well. When the senior analyst who designed the scoring criteria leaves, the institutional knowledge embedded in those criteria starts to decay. Teams that document QoI scoring rationale explicitly, including why specific enrichment steps matter for specific alert categories, are much better positioned to maintain consistent scores as team composition changes. This connects directly to the broader challenge of preserving institutional knowledge in SOC environments where analyst tenure is often short.

Standards and Frameworks That Apply to Quality of Investigation (QoI)

QoI doesn't map to a single framework by design, but several major standards create the compliance context in which QoI measurement becomes not just useful but expected. Mapping QoI criteria to those frameworks is a practical exercise that many SOC teams find useful when building the business case for a QoI program.

NIST SP 800-61 (Computer Security Incident Handling Guide) defines the incident response lifecycle phases: Preparation, Detection and Analysis, Containment, Eradication, and Recovery. QoI scoring is most directly relevant to the Detection and Analysis phase, where the guide specifies that analysts should document the scope of the incident, identify affected systems, and collect and preserve evidence. When SOC teams map their QoI criteria against the Detection and Analysis guidance in 800-61, they typically find that their QoI checklist either covers or extends those requirements. Teams that have tried this exercise often discover that NIST 800-61 sets a higher documentation bar than their current practice, which itself becomes a finding worth addressing.

MITRE ATT&CK provides a different but complementary angle. Investigation quality can be measured partly by whether analysts check for the TTPs associated with the detected technique. If an alert fires on a credential dumping behavior, a high-QoI investigation will examine the adjacent ATT&CK techniques that commonly follow credential dumping in observed campaigns. Teams that use kill chain mapping as part of their investigation process are effectively building ATT&CK coverage into their QoI criteria without always labeling it that way.

ISO 27001 Annex A.16 (Information Security Incident Management) sets requirements for documented incident handling procedures and evidence collection practices. For organizations pursuing or maintaining ISO 27001 certification, QoI scoring records can serve as evidence that incident handling procedures are being followed consistently, turning what might otherwise be a manual audit process into a metric-backed compliance demonstration.

The NIST Cybersecurity Framework (CSF) Respond function, specifically the RS.AN (Analysis) category, maps directly to what QoI measures. RS.AN requires that organizations analyze incidents to understand their impact and scope. QoI scoring provides the operational mechanism for meeting that requirement at scale, across hundreds of daily alerts rather than just the high-profile incidents that would naturally receive deep analysis anyway. See how these concepts fit into the broader operationalization of SOC AI programs.

How Conifers CognitivSOC Supports Quality of Investigation (QoI) at Scale

The specific capability that connects Conifers CognitivSOC to QoI improvement is automated investigation orchestration. The platform's AI SOC agents execute the enrichment and data collection steps that most commonly cause QoI scores to fall short in high-volume environments. When an alert fires, the relevant host history, user context, threat intelligence correlation, and related alert history are assembled automatically before an analyst opens the ticket. This doesn't change what a complete investigation requires. It changes how much of that work the analyst has to do manually.

For SOC managers trying to maintain QoI standards while managing large alert volumes, this matters because it breaks the direct tradeoff between throughput and investigation depth. Analysts can meet QoI criteria for more investigations per shift when the time-intensive data collection steps are handled by the platform. The institutional knowledge integration capability is also relevant here: QoI criteria and investigation playbooks can be embedded in the platform so that scoring guidance travels with the alert rather than residing in a wiki that analysts may or may not consult under pressure.

Teams evaluating this approach can see how it works in practice at conifers.ai/demo. The Amsys case study also shows how a managed security provider applied the platform to scale investigation quality across a growing client base.

Frequently Asked Questions About Quality of Investigation (QoI)

How does Quality of Investigation (QoI) change the way SOC analysts prioritize their investigation time?

QoI reframes analyst prioritization from a purely volume-based calculation to a quality-weighted one. Without QoI, an analyst's implicit goal is to close as many alerts as possible while meeting SLA time targets. With QoI tracking in place, closing an alert with a low QoI score is treated as an incomplete outcome, not a completed one. This changes what "done" means for each investigation.

In practice, this often means analysts spend more time on medium-severity alerts than they did before QoI tracking, because those are the alerts where investigation steps most commonly get skipped under time pressure. It also means that the automation supporting investigation data collection becomes more valuable, because every minute saved on mechanical enrichment steps is a minute available for the judgment work that QoI measures.

Can QoI scoring be gamed by analysts trying to hit performance targets?

Yes, and this is worth acknowledging directly. If QoI criteria require that certain enrichment steps are documented in the ticket, an analyst can document those steps without actually performing them meaningfully. This is the same problem that affects any checklist-based quality system. The mitigation is a combination of audit sampling, where managers periodically review the actual investigation artifacts against the claimed QoI steps, and automation of the measurable steps so that the system confirms completion rather than relying solely on analyst self-reporting.

Some QoI implementations address this by connecting scoring to system-level logs rather than analyst-entered checkboxes. If the enrichment step requires querying a threat intelligence feed, the QoI score for that step is based on whether the query was actually executed, not whether the analyst checked a box. This approach is more resistant to gaming and more reliable as a performance measurement, though it requires tighter integration between the QoI scoring system and the investigation tooling.
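A log-backed check of this kind can be sketched as a comparison between required steps and the actions actually recorded for a ticket. The log format and the step-to-action mapping below are hypothetical; a real implementation would read from the investigation tooling's audit trail.

```python
# Sketch: confirm QoI steps from system activity logs instead of analyst
# checkboxes. Log entries and the step-to-action mapping are illustrative.

REQUIRED_ACTIONS = {
    "threat_intel_lookup": "ti_query",
    "host_history_check": "edr_query",
    "related_alert_search": "siem_search",
}

def verify_steps(ticket_id, activity_log):
    """Mark each required step done only if a matching query was executed."""
    executed = {
        entry["action"] for entry in activity_log
        if entry["ticket"] == ticket_id
    }
    return {step: action in executed
            for step, action in REQUIRED_ACTIONS.items()}

log = [
    {"ticket": "ALERT-7", "action": "ti_query"},
    {"ticket": "ALERT-7", "action": "siem_search"},
    {"ticket": "ALERT-9", "action": "edr_query"},  # different ticket
]
print(verify_steps("ALERT-7", log))
# {'threat_intel_lookup': True, 'host_history_check': False, 'related_alert_search': True}
```

An analyst cannot claim the host-history step here without the corresponding query appearing in the log, which is exactly the gaming resistance described above.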

When does Quality of Investigation (QoI) not apply or break down as a useful metric?

QoI is least useful in environments where alert volume is so low that every incident naturally receives deep investigation attention. If a security team handles 10 significant incidents per month, each one will be investigated thoroughly by default, and a formal QoI scoring program adds overhead without adding insight. QoI is designed for environments where volume creates pressure to compress investigation depth.

It also breaks down when the QoI criteria aren't kept current. Security environments change. New attack techniques emerge, new tools are deployed, and the enrichment steps that were relevant six months ago may no longer reflect the most important investigation actions for your current threat profile. A QoI program that scores against outdated criteria will measure compliance with historical best practices rather than current investigation quality. Regular criteria reviews, at least quarterly in active environments, are a prerequisite for sustained QoI validity. It genuinely depends on how much your threat environment shifts over time.

How should MSSPs use QoI when managing incident handling across multiple client environments?

MSSPs face a specific QoI challenge: a single scoring model may not be appropriate across clients with different environments, risk profiles, and contractual service levels. The practical approach is to maintain a core QoI framework with configurable criteria that can be adjusted per client segment. The core criteria cover investigation steps that apply universally, such as evidence collection and timeline documentation, while client-specific criteria reflect the particular technologies and threat patterns relevant to each engagement.

QoI data is also a client communication tool. When an MSSP can show a client that investigation quality scores have improved over the contract period, or that QoI scores correlate with reduced recurrence rates for that specific environment, that's a more meaningful service demonstration than SLA compliance statistics alone. See how this plays out in the DTX case study and the broader Conifers MSSP program.

How does Quality of Investigation (QoI) relate to false positive suppression and alert tuning programs?

QoI and false positive suppression work in opposite directions on the same problem. False positive suppression reduces the number of low-signal alerts that analysts need to investigate, which reduces the volume pressure that causes QoI to suffer. QoI measures whether the alerts that do reach analysts are handled thoroughly. Together, they create a more complete picture of SOC effectiveness than either metric provides alone.

There's also an interesting feedback loop between the two. When QoI scores are tracked by alert category, low QoI scores concentrated in specific categories often indicate that those categories are generating alerts that analysts don't find credible enough to investigate carefully. That's a signal that the detection rule generating those alerts may need tuning. QoI data can inform alert tuning decisions in ways that purely volume-based metrics miss.

What's the difference between Quality of Investigation (QoI) and Incident Confidence Score?

These two metrics measure different things, though they're often confused. An Incident Confidence Score reflects how certain the detection system is that the flagged behavior represents a real threat. It's a property of the alert itself, based on detection logic and threat intelligence. Quality of Investigation is a property of what the analyst did with that alert after it was flagged.

A high-confidence alert can still receive a poor QoI score if the analyst doesn't investigate it thoroughly. And a low-confidence alert can receive a high QoI score if the analyst rigorously works through the investigation steps to determine whether the confidence assessment was correct. Both metrics matter for different reasons. Incident Confidence Score helps prioritize what gets investigated first. QoI measures whether those investigations, once started, are completed to a standard that actually reduces risk. Using them together, alongside tools for knowledge-driven triage, gives SOC leadership a more complete view of both detection and response quality. Explore more concepts in the Conifers glossary.

For MSSPs ready to explore this transformation in greater depth, Conifers' comprehensive guide, Navigating the MSSP Maze: Critical Challenges and Strategic Solutions, provides a detailed roadmap for implementing cognitive security operations and achieving SOC excellence.

Start accelerating your business: book a live demo of CognitivSOC today!