jacob.masse
February 28, 2026

DDoS Mitigation Lessons from Building AttackEngine

I built AttackEngine because I was tired of watching small companies get knocked offline by attacks they never saw coming. The product was an anti-DDoS SaaS platform. It went from first commit to acquisition in under a year. Along the way, I learned more about traffic analysis, real-time detection, and incident response than any certification could teach.

Here is what stuck with me.

The Problem is Worse Than You Think

Most people imagine DDoS attacks as brute force volumetric floods. Someone points a botnet at your IP, saturates your bandwidth, and you go dark. That does happen. But it is the least interesting category of attack, and it is not the one that kills startups.

The attacks that do real damage are application-layer floods. They look like legitimate traffic. They target expensive endpoints, things like search queries, login forms, or API routes that trigger database joins. A few hundred requests per second can take down a service that handles thousands of normal users just fine. The volume is low enough that basic rate limiting misses it entirely.

When I started building AttackEngine, I focused on volumetric protection first. That was a mistake. The customers who needed us most were getting hit by L7 attacks that their existing CDN could not catch.

Traffic Fingerprinting Changed Everything

The core insight behind AttackEngine was fingerprinting. Not just looking at source IPs or request rates, but building behavioral profiles of traffic patterns.

Every legitimate user leaves a fingerprint. They load CSS files, execute JavaScript, follow redirects, maintain cookies. Bot traffic almost never does all of those things correctly. Even sophisticated bots that pass individual checks tend to fail when you look at the full behavioral sequence.

We built a fingerprinting engine that tracked several of these behavioral signals per request.

Each signal alone had a high false positive rate. Combined into a composite score, they were accurate enough that we could trust the result in production without constantly worrying about blocking real users. That was the bar that mattered. You cannot block legitimate customers. One blocked sale is worse than ten seconds of degraded performance.
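
The scoring idea can be sketched as a weighted combination of per-request signals. The signal names, weights, and threshold below are illustrative, not AttackEngine's actual model:

```python
# Illustrative behavioral signals and weights (not the production model).
SIGNAL_WEIGHTS = {
    "loaded_static_assets": 0.30,   # fetched CSS/JS like a real browser
    "executed_js_challenge": 0.25,  # passed a lightweight JS check
    "followed_redirects": 0.15,
    "maintained_cookies": 0.15,
    "plausible_timing": 0.15,       # inter-request gaps look human
}

def composite_score(signals: dict) -> float:
    """Weighted sum of per-request behavioral signals, in [0, 1]."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def is_likely_bot(signals: dict, threshold: float = 0.5) -> bool:
    # Any single missing signal is weak evidence on its own; a low
    # combined score across the full sequence is what we act on.
    return composite_score(signals) < threshold
```

A request that fails one check still passes; only a request that fails most of the behavioral sequence drops below the threshold, which is what keeps the false positive rate down.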

Real-Time Detection Needs Real-Time Architecture

The hardest engineering problem was latency. When an attack starts, you have seconds to respond before the service degrades. Minutes, and your customers are already posting on Twitter about your outage.

We processed traffic logs through a streaming pipeline. Every request hit an analysis layer that updated running statistics: requests per second by fingerprint cluster, error rate anomalies, geographic distribution shifts. When the system detected a deviation beyond configurable thresholds, it triggered mitigation automatically.

The key architecture decision was separating detection from mitigation. Detection ran on a fast path with minimal computation per request. Mitigation rules propagated to edge nodes within two seconds of a detection event. This meant we could analyze deeply without adding latency to normal requests.

One pattern that worked well: we maintained a rolling baseline of "normal" traffic for each customer, updated every five minutes. Attack detection was measured as deviation from that baseline, not against fixed thresholds. A site that normally gets 50 requests per second and suddenly gets 500 is under attack. A site that normally gets 5,000 and hits 5,500 probably just got featured on Hacker News.
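
That baseline-deviation pattern is simple to express. A minimal sketch, assuming a fixed rolling window of recent rate samples and an illustrative deviation multiplier:

```python
from collections import deque

class BaselineDetector:
    """Rolling per-customer baseline; flags deviation from normal,
    not absolute volume. Window size and multiplier are illustrative."""

    def __init__(self, window: int = 12, multiplier: float = 5.0):
        # e.g. twelve 5-minute buckets, roughly one hour of history
        self.history = deque(maxlen=window)
        self.multiplier = multiplier

    def record(self, requests_per_second: float) -> None:
        self.history.append(requests_per_second)

    def is_anomalous(self, current_rps: float) -> bool:
        if not self.history:
            return False  # no baseline yet: do not guess
        baseline = sum(self.history) / len(self.history)
        return current_rps > baseline * self.multiplier
```

With this shape, the 50 rps site jumping to 500 trips the detector, while the 5,000 rps site drifting to 5,500 does not.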

Multi-Channel Alerting is Not Optional

This sounds like a minor feature. It is not. Getting alerting right drove customer retention more than almost anything else we shipped.

When an attack is detected, the ops team needs to know immediately. Not in fifteen minutes when they check Slack. Not tomorrow when they review logs. Right now. We built integrations for Slack, Discord, PagerDuty, SMS, email, and webhook endpoints. Customers configured escalation chains: Slack first, then PagerDuty if no acknowledgment within five minutes, then SMS to the founder.
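
An escalation chain like that reduces to a small loop. A sketch under stated assumptions: the channel list, timeouts, and the `notify`/`acked` callbacks are placeholders for real integrations:

```python
import time

# Hypothetical chain: Slack first, PagerDuty after five unacknowledged
# minutes, then SMS as the last resort.
ESCALATION_CHAIN = [
    {"channel": "slack", "wait_for_ack_s": 300},
    {"channel": "pagerduty", "wait_for_ack_s": 300},
    {"channel": "sms", "wait_for_ack_s": 0},
]

def run_escalation(alert, notify, acked, chain=ESCALATION_CHAIN, sleep=time.sleep):
    """Walk the chain until someone acknowledges the alert.
    Returns the channel that got the ack, or None if the chain is exhausted."""
    for step in chain:
        notify(step["channel"], alert)
        deadline = time.monotonic() + step["wait_for_ack_s"]
        while time.monotonic() < deadline:
            if acked(alert):
                return step["channel"]
            sleep(5)  # poll for acknowledgment
    return None
```

Keeping the chain as data rather than code means each customer can configure their own order and timeouts without touching the escalation logic.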

The alert payloads mattered too. We included attack type classification, estimated volume, top source regions, and a direct link to the live mitigation dashboard. An on-call engineer should be able to assess the situation from the alert alone, without logging into anything.
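
The fields above translate into a compact payload. Field names and the dashboard URL below are illustrative, not the actual AttackEngine schema:

```python
import json

def build_alert_payload(event: dict) -> str:
    """Serialize an alert with enough context to assess the incident
    from the notification alone (hypothetical field names)."""
    payload = {
        "attack_type": event["classification"],   # e.g. "L7 flood"
        "estimated_rps": event["estimated_rps"],
        "top_source_regions": event["regions"][:3],
        # placeholder domain; a real payload would deep-link to the
        # live mitigation dashboard for this event
        "dashboard_url": f"https://example.invalid/mitigation/{event['id']}",
        "detected_at": event["detected_at"],
    }
    return json.dumps(payload)
```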

We also built "attack resolved" notifications with summary reports. Duration, peak volume, requests blocked, estimated impact. Customers forwarded those to their own clients as proof that their infrastructure held up. That turned a negative event into a trust-building moment.

Bootstrapping to Acquisition

AttackEngine was bootstrapped. No venture capital, no angel round. I built the initial version myself and signed the first paying customers through cold outreach to companies I had seen get hit.

The go-to-market was straightforward: find companies that had recently experienced an outage, reach out with a specific analysis of what happened to them, and offer a free trial. The conversion rate on that motion was high because the pain was immediate and concrete.

Pricing was usage-based, tied to clean traffic volume. This aligned incentives correctly. We only charged for traffic we protected, not traffic we blocked. Customers understood the model intuitively.
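
The billing rule is small enough to state directly. A sketch with an illustrative per-gigabyte rate, not the actual price list:

```python
def monthly_charge(clean_gb: float, blocked_gb: float, rate_per_gb: float = 0.05) -> float:
    """Bill only the clean traffic we protected; blocked attack
    traffic never appears on the invoice. Rate is illustrative."""
    _ = blocked_gb  # tracked for reporting, deliberately never billed
    return round(clean_gb * rate_per_gb, 2)
```

The point of the design is visible in the signature: an attack that multiplies blocked volume a hundredfold leaves the customer's bill unchanged.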

The acquisition happened because a larger security company wanted our fingerprinting technology and our customer base. The entire lifecycle from founding to exit was under twelve months. That speed was possible because the product solved a painful, measurable problem for a specific audience.

Lessons I Still Apply

Building AttackEngine taught me principles I use in every security role now:

Measure before you mitigate. You cannot protect what you do not understand. Before deploying any security control, build a baseline of normal behavior. Without that baseline, you are guessing.

False positives are worse than false negatives. Blocking a legitimate user is a guaranteed loss. Letting an attacker through might not even be noticed. Tune for precision over recall.

Alerting is a product, not a feature. The quality of your alerts determines whether your security team trusts the system. Noisy alerts get ignored. Precise alerts with actionable context get acted on within minutes.

Speed of response beats depth of analysis. A 90% accurate mitigation deployed in two seconds beats a 99% accurate mitigation deployed in two minutes. You can refine after the bleeding stops.

DDoS protection is not a solved problem. Attack techniques evolve constantly, and the barrier to launching an attack keeps dropping. But the fundamentals hold up: detection, fingerprinting, baselining, and fast response. If you are building anything that lives on the internet, understanding these principles is not optional.
