Building Your First SIEM Detection Rule: SOC Beginner Guide
- Akshay Jain
The Alert That Nobody Wrote
On a quiet Tuesday night, an attacker spent four hours moving laterally through a mid-sized financial firm's network. They dumped credentials, accessed a file server containing client records, and exfiltrated 40 GB of data, all without triggering a single alert. The SIEM was running. The logs were flowing. But nobody had written a rule to catch what was happening.
This isn't hypothetical. Variants of this scenario play out across organizations every week. Having a SIEM without well-crafted detection rules is like installing a smoke detector with no batteries: the hardware is there, but the protection isn't.
If you're just stepping into a Security Operations Center role, learning to write your first SIEM detection rule is one of the most impactful skills you can develop. This guide walks you through the entire process, from understanding what a detection rule actually is to writing and deploying one that could stop a real attack in its tracks.
What Is a SIEM Detection Rule?
A SIEM (Security Information and Event Management) system is a platform that collects log data from across an organization's environment (firewalls, endpoints, cloud services, authentication systems, applications, and more) and centralizes it for analysis. Think of it as a surveillance control room that receives live feeds from hundreds of cameras simultaneously.
A detection rule is the logic that tells the SIEM what to look for in that flood of data. Without rules, the SIEM is just storing logs. With well-tuned rules, it becomes an early warning system.
Here's an analogy: imagine a bank vault with motion sensors. The sensors themselves don't do anything; they're just inputs. The detection rule is the logic that says "if motion is detected between 2 AM and 5 AM when no staff are scheduled, trigger an alarm". The sensor is your log source. The rule is your intelligence.
Detection rules can be simple (flag any login from a foreign country) or complex (flag a chain of events: failed logins, followed by a successful login, followed by a large file download, all within 10 minutes). The latter is called correlation logic, and it's where SIEMs really shine.
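Correlation logic can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration (the event schema and field names are invented, not a real SIEM API): it flags a user whose large file download was preceded, within one window, by failed logins and then a successful login.

```python
from datetime import datetime, timedelta

# Hypothetical normalized events; field names are illustrative only.
events = [
    {"time": datetime(2024, 1, 1, 9, 0, 0), "type": "failed_login",   "user": "alice"},
    {"time": datetime(2024, 1, 1, 9, 1, 0), "type": "failed_login",   "user": "alice"},
    {"time": datetime(2024, 1, 1, 9, 2, 0), "type": "login_success",  "user": "alice"},
    {"time": datetime(2024, 1, 1, 9, 5, 0), "type": "large_download", "user": "alice"},
]

def correlate(events, window=timedelta(minutes=10)):
    """Flag users with failed logins -> success -> large download inside one window."""
    alerts = []
    for e in events:
        if e["type"] != "large_download":
            continue
        # All of this user's events in the window leading up to the download
        recent = sorted(
            (x for x in events
             if x["user"] == e["user"] and e["time"] - window <= x["time"] <= e["time"]),
            key=lambda x: x["time"],
        )
        types = [x["type"] for x in recent]
        # Require the order: a failure strictly before a success, before the download
        if "failed_login" in types and "login_success" in types:
            if types.index("failed_login") < types.index("login_success"):
                alerts.append(e["user"])
    return alerts

print(correlate(events))  # ['alice']
```

Real SIEM correlation engines do the same thing at scale, with streaming evaluation instead of a list scan, but the ordered-sequence-within-a-window idea is identical.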
How SIEM Detection Rules Work
Understanding the mechanics of a detection rule helps you write better ones. Here's the general pipeline:
Log Ingestion
Raw events flow into the SIEM from log sources: Windows Event Logs, Syslog, firewall logs, EDR telemetry, cloud audit trails (AWS CloudTrail, Azure Monitor, GCP Audit Logs), and more.
Normalization
The SIEM parses and normalizes these logs into a common schema. Different vendors call the same thing by different names: a "source IP" in one log might be src_ip, sourceAddress, or ipSrc in another. Normalization makes them consistent and queryable.
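At its core, normalization is just field renaming against a mapping table. A toy sketch (the alias map and target field names here are illustrative, not any vendor's actual schema):

```python
# Illustrative alias map; real SIEMs ship vendor-specific parsers for this.
FIELD_ALIASES = {
    "src_ip": "source_ip",
    "sourceAddress": "source_ip",
    "ipSrc": "source_ip",
}

def normalize(raw_event):
    """Rename vendor-specific keys to a common schema, passing other keys through."""
    return {FIELD_ALIASES.get(k, k): v for k, v in raw_event.items()}

print(normalize({"sourceAddress": "10.0.0.5", "action": "deny"}))
# {'source_ip': '10.0.0.5', 'action': 'deny'}
```

Once every log source emits `source_ip`, one detection rule covers all of them, which is exactly why normalization matters before you write any rules.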
Rule Evaluation
The detection engine continuously evaluates incoming (and sometimes historical) events against your defined rules. When a rule condition is met, it generates an alert or incident.
Alert Triage
Alerts land in an analyst's queue. A well-written rule produces actionable, high-fidelity alerts. A poorly written rule causes alert fatigue by producing hundreds of false positives that train analysts to ignore everything.
The Anatomy of a Detection Rule
Most detection rules share these core components:
| Component | Description | Example |
|---|---|---|
| Data Source | Which log(s) to query | Windows Security Event Log |
| Filter / Condition | What to match | EventID = 4625 (Failed Login) |
| Threshold / Aggregation | Count or pattern logic | > 10 failures in 5 minutes |
| Grouping | Group by which field | Per source IP, per username |
| Time Window | Look-back period | Last 5 minutes |
| Severity | Alert priority | High |
| Response Action | What happens on match | Create incident, notify SOC |
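The components above map naturally onto a single rule definition. Here's a hypothetical rule expressed as structured data (the schema and key names are invented for illustration; every SIEM has its own rule format):

```python
# One detection rule, with each component from the table as a field.
# This schema is illustrative, not a real product's rule format.
rule = {
    "name": "Brute-force login attempts",
    "data_source": "Windows Security Event Log",
    "condition": {"EventID": 4625},                      # filter
    "threshold": {"count": 10, "window_minutes": 5},     # aggregation + time window
    "group_by": ["source_ip", "username"],               # grouping
    "severity": "high",
    "response": ["create_incident", "notify_soc"],
}

print(rule["name"], "-", rule["severity"])
```

Thinking of a rule as data like this is useful even before you learn a query language: it forces you to answer every row of the table before you deploy anything.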
Technical Deep-Dive: Writing Your First Detection Rule
Let's build a real detection rule from scratch, targeting one of the most common attack vectors: password spraying.
What the Sample Logs Look Like:
In a Windows Active Directory environment, failed logins generate Event ID 4625 with a Logon Type. A password spray looks like this in the logs:
```text
EventID: 4625 | TargetUserName: alice@corp.com | SourceIP: 185.220.x.x | Time: 09:01:03
EventID: 4625 | TargetUserName: bob@corp.com   | SourceIP: 185.220.x.x | Time: 09:01:05
EventID: 4625 | TargetUserName: tom@corp.com   | SourceIP: 185.220.x.x | Time: 09:01:07
```
Notice: same source IP, many different usernames, rapid succession. This is the pattern we want to catch.
Writing the Rule in Splunk SPL:
```
index=windows sourcetype="WinEventLog:Security" EventCode=4625
| bucket _time span=5m
| stats dc(Account_Name) AS unique_users, count AS total_failures BY _time, Source_Network_Address
| where unique_users > 10 AND total_failures > 15
| eval alert_name="Potential Password Spray Detected"
| table _time, Source_Network_Address, unique_users, total_failures, alert_name
```
What this does:
Filters failed logins (EventCode 4625) from the Windows Security log
Groups events into 5 minute buckets
Counts distinct usernames targeted per source IP per bucket
Fires when more than 10 unique accounts are targeted with more than 15 failures
Returns a clean table for analyst review
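To make the SPL logic concrete, here is the same bucket-and-count idea in plain Python, run against the sample events from above. Thresholds are deliberately lowered so the three-event sample triggers; production values would match the SPL rule (>10 unique users, >15 failures). Field names are illustrative.

```python
from collections import defaultdict
from datetime import datetime

# Failed-login events mirroring the 4625 records shown earlier.
events = [
    {"time": "09:01:03", "user": "alice@corp.com", "src": "185.220.0.1"},
    {"time": "09:01:05", "user": "bob@corp.com",   "src": "185.220.0.1"},
    {"time": "09:01:07", "user": "tom@corp.com",   "src": "185.220.0.1"},
]

def detect_spray(events, bucket_minutes=5, min_users=2, min_failures=2):
    """Group failures into time buckets per source IP; flag wide, fast targeting."""
    buckets = defaultdict(lambda: {"users": set(), "count": 0})
    for e in events:
        t = datetime.strptime(e["time"], "%H:%M:%S")
        # Snap the timestamp down to the start of its 5-minute bucket
        bucket = t.replace(minute=t.minute - t.minute % bucket_minutes, second=0)
        key = (e["src"], bucket)
        buckets[key]["users"].add(e["user"])
        buckets[key]["count"] += 1
    return [
        {"src": src, "bucket": b.strftime("%H:%M"),
         "unique_users": len(v["users"]), "failures": v["count"]}
        for (src, b), v in buckets.items()
        if len(v["users"]) > min_users and v["count"] > min_failures
    ]

print(detect_spray(events))
```

The `bucket`/`stats dc()` pair in SPL does exactly what the dictionary of sets does here: one counter of distinct usernames and one counter of total failures, keyed by (source IP, time bucket).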

Writing the Same Rule in Sigma
Sigma is an open standard for writing detection rules that can be converted to virtually any SIEM query language.
```yaml
title: Password Spray Attack - Multiple Failed Logins Different Accounts
id: a9c6f8b2-1d3e-4f57-8c2a-b7e0d1234567
status: experimental
description: Detects password spray attempts where a single source IP generates failed authentication events against multiple distinct user accounts within a short time window.
author: Akshay Jain
date: 2026/03/14
tags:
    - attack.t1110.003
logsource:
    product: windows
    service: security
detection:
    selection:
        EventID: 4625
    condition: selection | count(TargetUserName) by IpAddress > 10
fields:
    - IpAddress
    - TargetUserName
    - WorkstationName
falsepositives:
    - Misconfigured applications performing repeated authentication
    - Automated testing tools in dev environments
level: high
```
Key Sigma fields explained:
tags: Maps to MITRE ATT&CK. T1110.003 is "Password Spraying"
logsource: Tells converters which platform this targets
detection: The actual matching logic
falsepositives: Critical for tuning. Always document expected false positives
level: Severity level (informational / low / medium / high / critical)
Blue Team Specifics: SOC Workflow for Your New Rule
Writing the rule is only step one. Here's the operational workflow for a SOC analyst deploying a new detection rule:
Step 1: Baseline First
Before deploying, run the rule in search mode (not alerting mode) against 30 days of historical data. Understand what it would have fired on. Are most matches legitimate? Tune accordingly.
Step 2: Define Your Threshold Carefully
Too low → alert fatigue. Too high → missed detections. For a password spray rule in an enterprise with 5,000 users, triggering at 10 unique accounts in 5 minutes is a reasonable starting point. Adjust based on your environment.
Step 3: Enrich the Alert
A good alert isn't just an IP address. Enrich it automatically:
GeoIP lookup: Is the source IP in an unusual country?
Threat intel lookup: Is the IP in known threat feeds (VirusTotal, AbuseIPDB)?
Asset context: Which accounts were targeted? Are any privileged accounts in the list?
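A minimal sketch of that enrichment step, with the lookups stubbed out (in production these would call a GeoIP database and threat-intel APIs such as AbuseIPDB or VirusTotal with real keys and caching; the privileged-account list and all field names here are hypothetical):

```python
# Hypothetical list of privileged accounts; real SOCs pull this from the directory.
PRIVILEGED = {"admin@corp.com", "svc_backup@corp.com"}

def geoip_lookup(ip):
    """Stub: a real version would query a GeoIP database."""
    return "Unknown"

def threat_intel_hit(ip):
    """Stub: a real version would query threat feeds for the IP."""
    return False

def enrich(alert):
    """Attach context so the analyst doesn't start triage from a bare IP."""
    alert["country"] = geoip_lookup(alert["src"])
    alert["known_bad"] = threat_intel_hit(alert["src"])
    alert["privileged_targets"] = sorted(set(alert["targets"]) & PRIVILEGED)
    return alert

print(enrich({"src": "185.220.0.1",
              "targets": ["alice@corp.com", "admin@corp.com"]}))
```

Notice that `privileged_targets` being non-empty is itself a severity signal: many SOCs auto-escalate an alert when a privileged account appears in the target list.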
Step 4: Build a Playbook
Before the rule goes live, write a simple response playbook:
Confirm the alert is not a false positive (check for IT-scheduled tasks, pen tests)
Identify the source IP: is it internal or external?
Check if any targeted account subsequently logged in successfully
If confirmed attack: block the source IP at the perimeter firewall, force password resets for targeted accounts, escalate if privileged accounts were involved
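The third playbook step, checking whether any targeted account subsequently logged in successfully, is easy to automate. A sketch under the assumption that you can pull both failed (4625) and successful (4624) logon events as simple records (the event schema below is illustrative):

```python
def successful_followups(events, src_ip, targeted_users):
    """Return targeted accounts that later logged in successfully (Event ID 4624)
    from the same suspicious source IP. Non-empty result = likely compromise."""
    return sorted({
        e["user"] for e in events
        if e["event_id"] == 4624 and e["src"] == src_ip and e["user"] in targeted_users
    })

# Illustrative sample: alice only failed, bob later succeeded from the spray IP.
events = [
    {"event_id": 4625, "src": "185.220.0.1", "user": "alice@corp.com"},
    {"event_id": 4624, "src": "185.220.0.1", "user": "bob@corp.com"},
]
print(successful_followups(events, "185.220.0.1",
                           {"alice@corp.com", "bob@corp.com"}))
# ['bob@corp.com'] -> the spray likely succeeded for bob; escalate
```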
Detection engineering is one of the most intellectually rewarding roles in cybersecurity. It's where threat intelligence, attacker psychology, and data engineering collide. Your first SIEM detection rule won't be perfect. It will probably generate a few false positives, and you'll tune it, and you'll learn from it. That iterative process is the job.
The gap between organizations that detect breaches in hours versus those that discover them months later rarely comes down to budget or headcount alone. It comes down to whether someone took the time to write, tune, and maintain good detection logic. That someone can be you.
Start with one rule. Map it to ATT&CK. Tune it against real data. Then write another. The attackers are persistent, but so are the best defenders.
Happy cyber-exploration! 🚀🔒
Note: Feel free to drop your thoughts in the comments below - whether it's feedback, a topic you'd love to see covered, or just to say hi! Don't forget to join the forum for more engaging discussions and stay updated with the latest blog posts. Let's keep the conversation going and make cybersecurity a community effort!
-AJ