Detection Engineering vs. Threat Hunting: Complementary, Not Competing
Organizations often treat detection engineering and threat hunting as competing priorities for the same analyst time and budget. They shouldn't. Here's how the two disciplines reinforce each other when done right.
The False Tradeoff
"Should we invest in detection rules or in proactive hunting?" is a question that comes up in nearly every security program resourcing conversation. It's framed as a tradeoff — the same analyst hours can't do both.
This framing is wrong, and acting on it weakens both disciplines.
What Each Does
Detection engineering is the practice of building reliable, scalable, maintained detection logic — rules, analytics, and models that run continuously against your telemetry and alert when something matches.
Threat hunting is the practice of proactively searching for adversary behaviour that existing detections haven't caught — operating on the assumption that detections have gaps and that sophisticated adversaries know how to avoid them.
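The distinction can be made concrete in a few lines. This is a minimal sketch, not any particular product's API; the event shape, the `detect_psexec_service` rule, and the `hunt_rare_dest_ports` sweep are all hypothetical illustrations. The detection is codified logic that evaluates every event as it arrives; the hunt is an ad-hoc, hypothesis-driven pass over historical data looking for behaviour no existing rule describes.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    host: str
    process: str
    dest_port: int

# Detection engineering: a maintained rule that runs continuously
# against telemetry and fires on a known-bad pattern.
def detect_psexec_service(event: Event) -> bool:
    return event.process.lower() == "psexesvc.exe"

# Threat hunting: a one-off sweep driven by a hypothesis --
# "are hosts talking to ports we almost never see?" -- searching
# for behaviour that no current detection covers.
def hunt_rare_dest_ports(events: list[Event], threshold: int = 2) -> list[Event]:
    port_counts = Counter(e.dest_port for e in events)
    return [e for e in events if port_counts[e.dest_port] <= threshold]
```

The rule scales because it needs no human in the loop once written; the hunt query finds things the rule set never anticipated, but only when someone thinks to run it.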
These are not competing activities. They're a feedback loop.
The Feedback Loop
Threat hunting surfaces adversary behaviours that weren't being detected. A hunter who finds lateral movement via an unusual protocol documents the finding, preserves the evidence, and hands it to the detection team. The detection team builds a rule. The rule runs continuously. The coverage gap closes.
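The handoff above can be sketched as data flowing through the loop. Everything here is hypothetical illustration, assuming events arrive as dictionaries: a hunter documents a finding along with the logic they validated, and promoting it registers that logic in a rule set that is evaluated continuously from then on.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HuntFinding:
    technique: str                      # e.g. an ATT&CK technique ID
    summary: str                        # the written-up finding
    indicator: Callable[[dict], bool]   # logic the hunter validated

# Rules evaluated continuously against incoming telemetry.
DETECTIONS: dict[str, Callable[[dict], bool]] = {}

def promote_to_detection(finding: HuntFinding) -> None:
    """Close the loop: the hunter's validated logic becomes an
    automated rule, and the coverage gap closes."""
    DETECTIONS[finding.technique] = finding.indicator

def evaluate(event: dict) -> list[str]:
    """Return the techniques whose rules match this event."""
    return [t for t, rule in DETECTIONS.items() if rule(event)]

# Hypothetical finding: SMB lateral movement over a non-standard port.
finding = HuntFinding(
    technique="T1021",
    summary="Lateral movement via SMB on a non-standard port",
    indicator=lambda e: e.get("protocol") == "smb" and e.get("dest_port") != 445,
)
promote_to_detection(finding)
```

Once promoted, the behaviour the hunter found manually is caught automatically on every subsequent occurrence.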
Without hunting, detection coverage grows only through reactive incident response and vendor-provided rule sets. Gaps compound. Adversaries with knowledge of your detection stack can operate in those gaps indefinitely.
Without detection engineering, hunting findings don't scale. A hunter who manually identifies an attacker technique every time it occurs isn't a program — they're a person. Converting findings into automated detections is what turns individual expertise into organizational capability.
Where Programs Go Wrong
The programs that struggle are the ones that:
Only hunt, never detect. Findings happen, get written up, and then sit in a report. Nothing gets automated. The same gaps get found again next quarter.
Only detect, never hunt. The detection estate grows, but it's never challenged. Nobody is testing the assumption that existing rules catch what matters. The gaps are unknown because nobody is looking for them.
Treat them as separate teams with separate goals. When hunters and detection engineers don't share context — the same hypothesis tracking system, the same evidence repository, the same knowledge base — the feedback loop breaks.
The Infrastructure That Makes It Work
The feedback loop between hunting and detection engineering requires shared operational infrastructure: a place where hunt findings live that detection engineers can access, a system for tracking which findings have been converted to detections, and visibility into which ATT&CK techniques have detection coverage versus only hunt coverage.
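The coverage-visibility piece reduces to simple set arithmetic. As a minimal sketch (the function name and the specific technique IDs are illustrative, not from any particular tool), the backlog the feedback loop should be draining is exactly the techniques seen only by hunts:

```python
def hunt_only_techniques(detection_coverage: set[str],
                         hunt_coverage: set[str]) -> set[str]:
    """Techniques surfaced by hunts but not yet backed by an
    automated detection -- the findings awaiting conversion."""
    return hunt_coverage - detection_coverage

# Illustrative ATT&CK technique IDs.
detected = {"T1059", "T1021"}            # automated detection exists
hunted = {"T1021", "T1071", "T1105"}     # covered only by manual hunts so far

backlog = hunt_only_techniques(detected, hunted)
```

A program that can't compute this set, because findings and detections live in disconnected systems, has no way to know whether its feedback loop is actually closing gaps.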
This is one of the core design principles behind Vel — not just as a hunting tool, but as the connective tissue between hunting and the broader security engineering program.
Ready to put this into practice?
Vel is the workbench that makes these workflows operational — hypothesis tracking, evidence management, query federation, and leadership visibility in one place.