jacob.masse
January 30, 2026

Running a Security Audit Across 60+ Assets

When I started the security audit at Humera, the asset inventory spreadsheet had 23 entries. By the time I finished discovery, we had 60+ assets: domains, subdomains, servers, SaaS integrations, API keys, service accounts, and cloud resources that nobody remembered provisioning. That gap between what the organization thinks it owns and what it actually owns is where the risk lives.

Here is the process I used, what worked, what I would change, and how to turn audit findings into action.

Phase 1: Discovery

You cannot audit what you do not know exists. Discovery is the most important phase, and most organizations rush through it. I spent the first full week just on discovery before running a single vulnerability scan.

Domain and DNS Enumeration

Start with your primary domains and work outward. I used a combination of tools.

At Humera, this uncovered subdomains that were not in any documentation. Some were running outdated services, and one had production database credentials exposed in the page source. That alone justified the entire audit.
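Whatever enumeration tools you run, the workflow reduces to the same step: merge the discovered hostnames and diff them against the documented inventory. A minimal sketch of that diff, with inline data standing in for tool output files (the hostnames are hypothetical):

```python
# Sketch: merge enumeration output and flag undocumented subdomains.
# Hostnames here are illustrative stand-ins for real tool output.

def load_hosts(lines):
    """Normalize hostnames: lowercase, strip whitespace and trailing dots."""
    return {h.strip().lower().rstrip(".") for h in lines if h.strip()}

def find_undocumented(discovered, documented):
    """Return discovered hosts absent from the documented inventory."""
    return sorted(load_hosts(discovered) - load_hosts(documented))

discovered = ["app.example.com", "staging.example.com.", "Old-API.example.com"]
documented = ["app.example.com"]
print(find_undocumented(discovered, documented))
# → ['old-api.example.com', 'staging.example.com']
```

The normalization matters: different tools emit trailing dots and mixed case, and without it the same host shows up twice in your inventory.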

Cloud Resource Inventory

Cloud sprawl is real. I pulled resource inventories from AWS using aws resourcegroupstaggingapi get-resources and cross-referenced against what the infrastructure team knew about. We found S3 buckets from a deprecated feature, EC2 instances running old application versions, and IAM roles with permissions that had not been reviewed in over a year.

For each cloud provider, I built a complete inventory: compute instances, storage, databases, networking components, IAM entities, and API gateways. The goal is a single spreadsheet where every row is an asset with its owner, purpose, and last review date.
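The tagging-API output maps almost directly onto those spreadsheet rows. A sketch of flattening the get-resources response into inventory entries, flagging anything without an owner (the Owner and Purpose tag keys are assumptions about your tagging convention, not AWS defaults):

```python
# Sketch: flatten `aws resourcegroupstaggingapi get-resources` JSON into
# inventory rows. Owner/Purpose tag keys are an assumed convention.

def to_inventory(response):
    rows = []
    for item in response.get("ResourceTagMappingList", []):
        tags = {t["Key"]: t["Value"] for t in item.get("Tags", [])}
        rows.append({
            "arn": item["ResourceARN"],
            "owner": tags.get("Owner", "UNKNOWN"),
            "purpose": tags.get("Purpose", "UNKNOWN"),
        })
    return rows

sample = {"ResourceTagMappingList": [
    {"ResourceARN": "arn:aws:s3:::legacy-bucket", "Tags": []},
    {"ResourceARN": "arn:aws:ec2:us-east-1:123456789012:instance/i-abc",
     "Tags": [{"Key": "Owner", "Value": "platform-team"}]},
]}
for row in to_inventory(sample):
    print(row["arn"], row["owner"])
```

Sorting the result by owner puts every UNKNOWN row at the top of the review queue, which is exactly where the orphaned S3 buckets and forgotten EC2 instances surface.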

API Keys and Secrets

This is the category that scares people, and it should. I audited every API key, service account, and secret across the organization. The process:

  1. Pull all secrets from the secrets manager and catalog them
  2. Search source code repositories for hardcoded credentials using trufflehog and gitleaks
  3. Review CI/CD pipeline configurations for exposed environment variables
  4. Check third-party SaaS integrations for API keys with excessive permissions

We found API keys that had not been rotated in over a year, service accounts with admin-level permissions that should have been read-only, and credentials that were still sitting in git history from a repository that had since been made private.

Phase 2: Vulnerability Assessment

With the asset inventory complete, I moved to systematic vulnerability assessment. The approach depends on the asset type.

External Attack Surface

For internet-facing assets, I ran automated scanning with nuclei using community templates, followed by manual verification of every finding. Automated scanners produce false positives. If you report a vulnerability without verifying it, you lose credibility with the engineering team, and they stop taking your findings seriously.

The scanning covered: SSL/TLS configuration, HTTP security headers, known CVEs on exposed services, default credentials, open ports, and exposed administrative interfaces. I used nmap for port scanning, testssl.sh for TLS analysis, and httpx for HTTP header assessment across all domains simultaneously.
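httpx handled the header assessment in bulk; per response, the logic reduces to checking for a handful of headers. A sketch of that check, operating on an already-fetched header dict so it runs without network access (the header list is a common baseline, not an exhaustive one):

```python
# Sketch: flag missing HTTP security headers on an already-fetched
# response. The baseline list is a common subset, not exhaustive.
BASELINE = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_headers(headers):
    """Return baseline headers absent from a response, case-insensitively."""
    present = {k.lower() for k in headers}
    return [h for h in BASELINE if h.lower() not in present]

resp_headers = {"Content-Type": "text/html",
                "Strict-Transport-Security": "max-age=63072000"}
print(missing_headers(resp_headers))
# → ['Content-Security-Policy', 'X-Content-Type-Options', 'X-Frame-Options']
```

The case-insensitive comparison is deliberate: HTTP header names are case-insensitive, and servers emit them inconsistently.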

Internal Infrastructure

For servers and internal services, the focus shifted to configuration review. I checked: operating system patch levels, firewall rules, user account hygiene, logging configuration, and backup verification. Each server got a configuration audit against CIS benchmarks appropriate for its OS and role.
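Most CIS benchmark checks reduce to comparing a configured value against an expected one. A toy version for an sshd_config-style file, with two sample rules (illustrative only, not a full CIS profile):

```python
# Toy config audit: compare key/value settings in an sshd_config-style
# file against expected values. Two sample rules, not a full CIS profile.
EXPECTED = {"permitrootlogin": "no", "passwordauthentication": "no"}

def audit(config_text):
    """Return (setting, expected, actual) tuples for every failed check."""
    actual = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if line:
            key, _, value = line.partition(" ")
            actual[key.lower()] = value.strip().lower()
    return [(k, want, actual.get(k, "unset"))
            for k, want in EXPECTED.items()
            if actual.get(k) != want]

sample = "PermitRootLogin yes\nPasswordAuthentication no\n"
print(audit(sample))  # each tuple: (setting, expected, actual)
```

Treating "unset" as a failure is the conservative choice: a benchmark check that silently passes on a missing setting hides exactly the drift you are auditing for.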

Application Security

For each web application, I performed a focused security assessment covering authentication, authorization, input validation, and session management. This was not a full penetration test. It was a targeted review of the most common vulnerability categories. At this scale, you need to be efficient. Spend 2 to 4 hours per application, focus on the high-risk areas, and document findings as you go.

Phase 3: Prioritization

Sixty assets with an average of 3 to 5 findings each give you roughly 180 to 300 issues. You cannot fix them all at once. Prioritization determines whether the audit leads to real improvement or just a report that sits in a shared drive.

I used a simple risk matrix: likelihood of exploitation multiplied by impact if exploited. But I added a practical dimension that most frameworks miss, which is ease of remediation. A medium-risk issue that takes 30 minutes to fix should be addressed before a high-risk issue that requires two weeks of refactoring. Quick wins build momentum and show the engineering team that security findings are actionable, not just complaints.
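That rule can be made concrete as a sort key: risk (likelihood times impact) divided by remediation effort, so cheap fixes surface first. The scales here (1 to 5 for likelihood and impact, effort in hours) are one reasonable choice, not the only one:

```python
# Sketch: rank findings by (likelihood * impact) / effort_hours, so a
# quick medium-risk fix can outrank a slow high-risk one.
# The 1-5 scales and hour-based effort are assumed, not prescribed.

def priority(finding):
    return (finding["likelihood"] * finding["impact"]) / finding["effort_hours"]

findings = [
    {"name": "high-risk refactor", "likelihood": 4, "impact": 5, "effort_hours": 80},
    {"name": "medium quick win",   "likelihood": 3, "impact": 3, "effort_hours": 0.5},
]
ranked = sorted(findings, key=priority, reverse=True)
print([f["name"] for f in ranked])
# → ['medium quick win', 'high-risk refactor']
```

In this example the 30-minute medium finding scores 18 against 0.25 for the two-week high finding, which matches the intuition: momentum first, refactors scheduled.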

The prioritized findings fell into three tiers.

Phase 4: Reporting and Remediation

The audit report is not the final deliverable. The remediation plan is. I have seen too many audits produce a 100-page report that nobody reads. I structured the output differently.

The executive summary was one page. Total assets audited, critical findings count, risk rating, top three recommendations. This is for leadership. They need to understand the risk level and the investment required to address it.

The technical findings were in a structured spreadsheet, not a PDF. Each row: asset, finding, severity, evidence (screenshots and reproduction steps), recommended fix, estimated effort, assigned owner. A spreadsheet because it is trackable. You can sort, filter, assign, and mark items complete. A PDF is a snapshot. A spreadsheet is a project plan.

I scheduled weekly remediation check-ins for the first month. Not to micromanage, but to unblock. Engineers hit obstacles: they need credentials to access a system, they need approval to change a configuration, they are unsure whether a fix will break something. The check-in removes those blockers before they cause delays.

What I Would Do Differently

If I ran this audit again, I would change two things.

First, I would involve engineering team leads in the discovery phase. I did the asset discovery mostly alone, which meant I missed context. An engineer could have told me that the staging server with production credentials was scheduled for decommission next week. Context reduces unnecessary findings and increases trust.

Second, I would set up continuous monitoring before delivering the final report. Point-in-time audits decay immediately. The day after you finish, someone spins up a new server or creates a new API key, and your inventory is already outdated. Starting continuous monitoring during the audit ensures the improvements persist.

A security audit across 60+ assets is a significant effort. But the alternative is not knowing what you have, not knowing what is vulnerable, and finding out when someone else discovers it first. The process is not complicated. It just requires discipline: systematic discovery, verified findings, honest prioritization, and relentless follow-through on remediation.
