Does SOC 2 Require Penetration Testing or Vulnerability Scans?

May 1 / Competance Editorial Team

Key Takeaways

SOC 2 does not explicitly require penetration testing, but many auditors still expect to see proof that your vulnerability management is real and repeatable. A scan report alone often is not enough if it is not tied to a process, owners, and follow-up.

CC4.1 and CC7.1 are often mapped to the kinds of activities that show your controls work in practice, like scheduled vulnerability scanning, security monitoring, and periodic penetration testing. If you do one thing, make it easy to trace from risk to control to test evidence.

A professional penetration test is more than “we ran some tools.” It works best when it has:

  • Clear scope (systems, environments, and dates), plus what is out of scope

  • A repeatable method (so results are consistent from one test to the next)

  • Actionable reporting that ranks findings by risk and links fixes to the controls you claim to have

Here’s the catch: a pen test can fail as evidence if it is vague or incomplete. If you’re short on time, prioritize one well-scoped external test and document remediation steps with timestamps, owners, and retest notes.

When your auditor asks for proof your security actually works

The awkward moment in a SOC 2 audit is not when you show your policies; it’s when the auditor asks what you did to prove those controls work in real life. A written process for patching or access reviews is a start, but auditors often want independent validation that your environment is not wide open to known issues.

A practical benchmark is that many organizations run vulnerability scans monthly and a penetration test annually. That cadence is not a SOC 2 rule, but it is common because it creates regular, time-stamped evidence and shows you are checking for exposure across the year, not only before the audit.

What to test when you need independent validation

Next, pick testing that matches what you actually run in production and what your customers can touch. If you do one thing, do an external vulnerability scan of internet-facing assets, then show how you triage and fix the findings.

A simple decision guide:

  • If you ship weekly or deploy often, run authenticated vulnerability scans at least monthly and after major changes

  • If you are early-stage with a small stack, start with quarterly scans and a clear remediation workflow, then increase frequency as you grow

  • If you handle sensitive customer data or have a complex network, budget for an annual third-party penetration test plus periodic targeted tests after big releases

  • If you’re short on time, scan the external perimeter first (domains, cloud IPs, VPN endpoints) and document the remediation for the top findings
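
Whichever cadence you pick from the guide above, the remediation deadlines behind it can be checked mechanically. A minimal sketch in Python, assuming hypothetical per-severity SLAs (the day counts are illustrative, not SOC 2 requirements):

```python
from datetime import date, timedelta

# Hypothetical remediation SLAs in days; substitute your own policy values.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

def is_overdue(severity, found_on, as_of, fixed_on=None):
    """Return True if a finding breached its remediation SLA by the given date."""
    deadline = found_on + timedelta(days=SLA_DAYS[severity])
    # An open finding is judged as of today; a closed one by its fix date.
    return (fixed_on or as_of) > deadline

# A critical finding from June 1, still open on June 15, breaches a 7-day SLA.
print(is_overdue("critical", date(2024, 6, 1), date(2024, 6, 15)))  # True
# A high finding fixed on June 21 is within a 30-day SLA.
print(is_overdue("high", date(2024, 6, 1), date(2024, 7, 15), fixed_on=date(2024, 6, 21)))  # False
```

Running a check like this on your ticket export before fieldwork gives you the time-stamped "we met our own deadlines" evidence auditors tend to ask for.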

How to document scans and pen tests so they map to SOC 2

Evidence is not the raw report dump; it’s the story of how you detect issues, decide priority, and confirm fixes. Auditors are usually looking for a trail that is repeatable and tied to your risk process, not a one-off test.

Include these items in your audit packet:

  • Scope statement showing what assets were tested (for example, production web app and API, corporate endpoints, cloud accounts)

  • Dates and frequency (for example, monthly scan schedule, annual pen test window)

  • Tool or provider name and whether the scan was authenticated (logged in) or unauthenticated

  • Triage record that shows severity, owner, and target fix date (a ticket list is fine)

  • Remediation proof for 2 to 5 representative findings (before and after screenshots, patch version, or configuration change record)

  • Retest evidence showing the issue is closed or risk accepted with approval
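
One lightweight way to keep that packet traceable is an evidence index: one row per artifact, tying it to what it covers and when. A sketch in Python using only the standard library (file names, coverage descriptions, and owners are hypothetical):

```python
import csv
import io

# Hypothetical evidence index: one row per artifact in the audit packet.
rows = [
    {"artifact": "scope-statement.pdf", "covers": "prod web app and API",
     "period": "2024-01 to 2024-12", "owner": "security-lead"},
    {"artifact": "scan-2024-06.xml", "covers": "external perimeter scan",
     "period": "2024-06", "owner": "platform-team"},
    {"artifact": "pentest-report-2024.pdf", "covers": "annual external pen test",
     "period": "2024-03", "owner": "vendor-acme"},
]

# Write the index as CSV so it can live next to the artifacts themselves.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["artifact", "covers", "period", "owner"])
writer.writeheader()
writer.writerows(rows)
index_csv = buf.getvalue()
print(index_csv)
```

A plain CSV like this is enough; the point is that every artifact in the packet is findable by scope and date without opening each file.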

Here’s the catch: a common mistake is to present a pen test as a checkbox and ignore follow-up. The fix is to show the closed loop, even if you only remediate the critical and high issues within 7 to 30 days and schedule the rest.

What you’ll be able to decide by the end

By the end of this section, you will be able to choose a testing approach that fits your reality and still stands up to audit scrutiny: a regular vulnerability scan cadence, an annual pen test when it makes sense, and clear remediation evidence. You’ll also be able to present the results as support for SOC 2 criteria by showing monitoring, response, and continuous improvement rather than only handing over reports.

What SOC 2 does and does not say about pen testing

Next, it helps to separate what SOC 2 actually is from what people wish it were. SOC 2 is a criteria-based report, meaning you are evaluated against broad Trust Services Criteria and what you say you do, then the auditor tests whether that is true. It is not a prescriptive control checklist that says “run a penetration test every 12 months” or “scan every week” for every company.

Because of that, you can be “SOC 2 compliant” with different control sets depending on your product, data, and risk. A 5-person SaaS handling only basic support tickets will not be held to the same evidence bar as a team shipping infrastructure software into regulated customer environments. The report is about whether your controls are suitably designed and operating, not whether you followed a single universal recipe.

That said, “not required” often still turns into “expected” once risk and scope are on the table. If your environment is internet-facing, you process sensitive customer data, or you make changes weekly, an auditor can reasonably expect to see some independent testing that shows security controls work under pressure. In that situation, a pen test may be the clearest way to produce evidence that access controls, segmentation, and monitoring hold up beyond policy.

Here’s the catch: many teams treat pen testing as a checkbox and wait until 2 weeks before fieldwork. A better approach is to match testing to what could realistically hurt you and your customers, then keep the artifacts ready. If you do one thing, document your risk reasoning in plain language so the “why we did not do a pen test” story is as solid as the “here is our pen test report” story.

  • Works best when you can tie test scope to in-scope systems and customer-impacting risks

  • Fails when the test is generic, out of date, or doesn’t include your production-like paths

  • If you’re short on time, prioritize vulnerability scanning plus remediation evidence on the most exposed systems first

How CC4.1 and CC7.1 connect to vulnerability management evidence

Next, it helps to separate two ideas auditors often blend together: whether you are checking your controls on a schedule (CC4.1) and whether you are watching for new threats and drift day to day (CC7.1). Vulnerability scanning can support both, but the evidence you show should match the specific point you are trying to prove.

If you do one thing, do this: label each artifact with the control it supports (CC4.1 or CC7.1), the time period covered (for example, the last 90 days), and what changed as a result (ticket opened, patch applied, exception approved). That small bit of context often matters more than the scanner brand.
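
That labeling habit can be as simple as a consistent one-line format; a minimal sketch, assuming a hypothetical "control | period | outcome" convention:

```python
def artifact_label(control, period, outcome):
    """Build a one-line evidence label: control supported, period covered, what changed."""
    return f"{control} | {period} | {outcome}"

# Hypothetical example: a CC7.1 scan artifact covering the last 90 days,
# where the result was a patch tracked in a ticket.
label = artifact_label("CC7.1", "last 90 days", "patch applied (TICKET-123)")
print(label)  # CC7.1 | last 90 days | patch applied (TICKET-123)
```

The exact format matters less than applying it to every artifact, so an auditor never has to guess which control a file supports.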

Mapping CC4.1 to ongoing and separate evaluations

CC4.1 is about evaluations: you periodically step back and check whether your security controls are designed well and still operating as intended. “Separate evaluations” means reviews that are distinct from daily operations, like a quarterly control review, an internal audit, or an outside assessment.

Examples of acceptable CC4.1 evaluation activities (pick what fits your scope and size):

  • Quarterly review confirming that in-scope systems are scanned, results are triaged, and remediation deadlines are met

  • Monthly sample check of patch tickets to confirm evidence exists (before/after version, approver, completion date)

  • Annual external penetration test with a documented scope and retest results

  • Review of cloud configuration baselines (for example, CIS benchmarks) and a written summary of gaps

  • Management review of security metrics such as scan coverage percentage or SLA compliance

Common mistake: showing only raw scan exports. Fix: add a short evaluation memo or meeting notes stating what you checked, what you found, and what you decided (for example, “2 critical findings, both patched within 7 days” or “1 risk accepted with compensating control”).

Mapping CC7.1 to detection and monitoring of new vulnerabilities and changes

By contrast, CC7.1 is about detection and monitoring: you identify new vulnerabilities and configuration changes in time to respond. This is where continuous scanning, alerts, and change tracking fit, especially for internet-facing systems, production cloud accounts, and endpoints.

Artifacts that commonly support CC7.1:

  • Scheduled vulnerability scan configuration showing frequency (for example, weekly internal scan, daily container image scan)

  • Evidence of new vulnerability monitoring (vendor advisories, CVE feed alerts, or security mailing list tickets)

  • Alert or notification logs for critical findings (what triggered, when it was seen, who was notified)

  • Change detection evidence (cloud config drift reports, infrastructure-as-code pull requests, firewall rule change logs)

  • Triage records showing severity, owner, due date, and status for findings

  • Patch or remediation proof tied to the finding (ticket, commit, deployment record, and verification scan)
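
The finding-to-fix thread in the last bullet can be kept as one linked record per issue. A minimal sketch with hypothetical IDs (the field names and values are illustrative, not a required schema):

```python
# Hypothetical linked record tracing one finding from detection to verification.
finding = {
    "id": "VULN-2024-0042",
    "severity": "high",
    "detected_by": "weekly internal scan",
    "ticket": "SEC-317",
    "fix": "commit a1b2c3d, deployed 2024-07-01",
    "verified_by": "rescan on 2024-07-02",
}

# One auditor-friendly line: detection -> ticket -> fix -> verification.
trace = " -> ".join(finding[k] for k in ("detected_by", "ticket", "fix", "verified_by"))
print(trace)
```

Whether this lives in a spreadsheet, a ticket field, or a script like this, the unbroken chain is what turns a scan export into CC7.1 evidence.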

Here’s the catch: CC7.1 works best when your monitoring covers the systems that matter most and when alerts create trackable work. It fails when scans run but findings do not turn into assigned tickets, or when exceptions are made without a recorded reason.

What makes a penetration test credible for SOC 2 evidence

Next, treat the pen test report like audit evidence, not a marketing deliverable. A credible test shows what was tested, how it was tested, and what you did with the results, with enough detail that an auditor can trace it back to your systems and your risk decisions.

If you do one thing, make the scope and rules of engagement impossible to misread. Auditors get stuck when a report says “external test completed” but never states which domains, apps, IP ranges, cloud accounts, or environments were actually in play.

Minimum expectations to include in the report

Your report should clearly document the basics in a way a non-security reviewer can follow:

  • Scope: in-scope assets (for example: customer web app, API, admin portal), out-of-scope assets, testing window, and environment (prod vs staging)

  • Rules of engagement: allowed techniques, approved test accounts, rate limits, and what counts as a stop condition (for example: service instability)

  • Phases performed: reconnaissance, enumeration, exploitation attempts, privilege escalation, and retesting of fixes (if included)

  • Risk-based coverage: why the test focused where it did (for example: internet-facing login, payment flows, tenant isolation, admin actions)

A common mistake is copying last year’s scope even though the product changed. The fix is to list the top 3 changes since the last test (new SSO, new cloud region, new WAF rules) and state how the scope covered them.

Risk-based coverage that stands up in an audit

But “credible” also means the test matches your real risk. A test works best when it includes the paths an attacker would actually try first, and it fails when it avoids your most sensitive surfaces.

For example, if you are a SaaS app with 10 enterprise customers, an auditor will expect to see attention to tenant separation, admin role misuse, and common account takeover paths. If you are short on time, skip broad internal testing and prioritize one deep test of your internet-facing app and API, plus validation that critical findings were fixed within your stated timelines.

Techniques and tools you can reference in evidence

Auditors do not need a tool catalog, but they do benefit from familiar, concrete language. Your report can reference common techniques and tools to show a standard, repeatable approach:

  • Discovery and port scanning with Nmap

  • Exploitation and proof-of-concept validation with Metasploit (when approved in the rules of engagement)

  • Packet capture and traffic inspection with Wireshark

  • Web and API testing with Burp Suite (for example: authentication checks, session handling, input validation)

Include the version or date range of the tools used when possible, plus a short mapping from each critical finding to the affected asset and the remediation ticket or change record. That single thread from finding to fix is often what makes the evidence feel complete.

Closing remarks

It helps to remember this line when you are deciding what to document next: “Compliance is a snapshot; security is a habit.” Auditors usually care less about a perfect one-time report and more about whether your team can show a repeatable pattern of finding issues, fixing them, and checking again.

Before your next audit cycle, ask yourself a simple question: which would you rather explain to your auditor, why you tested or why you did not? If you spot a gap, pick one next step that turns into evidence fast, such as scheduling a scoped penetration test for your highest-risk app or setting a monthly vulnerability scan cadence with tracked remediation for 30 to 60 days.

