NSAuditor AI Enterprise Edition produces a Type-I-friendly pre-audit gap report mapped to the AICPA Trust Services Criteria 2017 (with 2022 points of focus) — signed artifacts, RFC 3161 trusted timestamps, SHA-256 chain-of-custody, Ed25519 suppression signing, and native Vanta push.
✓ 7 controls fully covered · ⚠ 5 partial · ↑ Type I & Type II · ⇄ Vanta push
About this page. This page describes what NSAuditor AI EE — our flagship product — delivers for your SOC 2 audit. Nsasoft itself does not currently hold its own SOC 2 attestation, and unlike most security vendors we don't need one: NSAuditor AI runs entirely on customer infrastructure with Zero Data Exfiltration. We are not a data processor. Read why →
NSAuditor AI EE generates a pre-audit gap report with institutional-level evidence controls. It maps cloud and network scan findings to specific Trust Services Criteria, produces signed evidence artifacts (cover-page Scope Attestation, SHA-256 chain-of-custody sidecars, RFC 3161 trusted-timestamps, cryptographic suppression signing), and ships in machine-readable form suitable for GRC platform ingestion (Vanta shipped; Drata / Secureframe planned).
It is not, on its own, a SOC 2 Type II evidence pack: Type II requires evidence of operation across a 6–12 month window, which the engine supports via recurring-scan attestation plus SLA/MTTR tracking (both shipped). It is not a replacement for governance, risk-assessment, or business-continuity evidence. It is not a substitute for a CPA-firm audit — this is the pre-audit report you give your auditor so they don't bill you for finding what you already knew.
The market split: GRC platforms (Vanta, Drata, Secureframe) automate workflow but lack native vulnerability scanning; legacy scanners (Tenable, Qualys, Rapid7) produce voluminous CVE reports but don't map findings to TSC controls. Our wedge is the bridge — deep scanning + auditor-mapped output + GRC API push.
Direct TSC Mapping
Each finding carries stable rationale text mapped to a specific Trust Services Criterion the auditor can attach.
RFC 3161 Timestamps
Third-party Time-Stamp Authority signs each artifact's hash — a third-party time floor an internal actor can't backdate.
Ed25519 Suppression Signing
Approvers cross-checked against a corporate identity registry. Late-renewal flagging surfaces governance drift.
Type II Recurring Attestation
Multi-scan chronological matrix across 6–12 month windows with cadence-gap and scope-drift detection.
Native Vanta Push
API connector with TestResult outcome mapping, idempotency, retry/backoff. Drata and Secureframe planned.
WORM Evidence Storage
S3 Object Lock COMPLIANCE-mode validation for SEC 17a-4 / FINRA 4511. GOVERNANCE mode rejected by design.
AICPA TSC 2017
Coverage matrix
Source of truth is data/compliance/soc2.json in the EE package; this matrix mirrors it. A test asserts the two stay in sync.
"Covered" means the engine produces direct evidence the auditor can attach to the control. CC6.1 (Logical access security) catches password-only SSH, exposed admin panels, AWS users without MFA, shadow admins, privilege-escalation paths. CC6.7 (Data in transit) flags TLS 1.0/1.1, weak ciphers, expired or self-signed certs, missing HSTS. CC7.1 (Vulnerability management — the canonical scanning control) is evidenced by CVE matching against an offline NVD feed plus EOL-software detection. C1.1 (Confidentiality) is evidenced by S3 misconfiguration findings. The full list with example findings and TSC rationale lives in the EE package's SOC 2 coverage doc.
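To make the mapping concrete, here is a minimal sketch of what a TSC-mapped finding record could look like. The interface and field names are our illustration, not the engine's actual schema:

```typescript
// Hypothetical record shape for a TSC-mapped finding; field names are
// illustrative, not the engine's actual schema.
interface TscFinding {
  controlId: string;   // e.g. "CC6.7"
  finding: string;     // scanner observation
  rationale: string;   // stable text the auditor can attach
  status: "pass" | "fail" | "partial";
}

// Group findings by control so a report can render one section per criterion.
function groupByControl(findings: TscFinding[]): Map<string, TscFinding[]> {
  const groups = new Map<string, TscFinding[]>();
  for (const f of findings) {
    const bucket = groups.get(f.controlId) ?? [];
    bucket.push(f);
    groups.set(f.controlId, bucket);
  }
  return groups;
}

const sample: TscFinding[] = [
  {
    controlId: "CC6.7",
    finding: "TLS 1.0 enabled on 10.0.0.5:443",
    rationale: "Deprecated TLS versions weaken encryption of data in transit.",
    status: "fail",
  },
  {
    controlId: "CC6.7",
    finding: "Missing HSTS header on app.example.internal",
    rationale: "Without HSTS, clients may be downgraded to plaintext HTTP.",
    status: "fail",
  },
];
```

The stable `rationale` text is the piece an auditor attaches to the control; the grouping is what lets a report render one evidence block per criterion.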
What "Partial" means
NSAuditor AI evidences one dimension of these controls (typically the configuration-state-at-this-moment dimension). CC6.3 is partial because we detect stale IAM keys but the removal-cadence dimension needs CTEM history. CC8.1 is partial because we capture configuration snapshots but change-authorization linkage requires ITSM/Git PR webhook integration. The renderer surfaces a ⚠ Coverage caveat line on every partial control so an auditor seeing FAIL/PARTIAL knows the boundary of what we evidenced.
What "Out of scope" means
A SOC 2 audit covers technical, administrative, and physical controls. NSAuditor AI evidences the technical-network-and-cloud slice. Everything else is genuinely not our job — pretending otherwise would create what auditors call "scope illusion": the false belief that buying a scanner replaces governance work. The renderer emits all 34 OOS controls with their reason text in every gap report so auditors immediately see the engine's known boundaries and don't have to guess what wasn't evaluated. Pair with a GRC platform (Vanta / Drata / Secureframe) for the governance layer.
Output artifacts
What the auditor actually receives
Each --compliance soc2 run writes 10 files to out/<scan-id>/ (14+ when recurring-attestation is configured), every one with a .sha256 integrity sidecar.
| File | Purpose |
| --- | --- |
| `scan_compliance_soc2.md` | Markdown gap report — engineering / GRC consumption. |
| `scan_compliance_soc2.html` | Standalone HTML report (inline CSS, dark theme, "Print to PDF" button) — give this to the auditor. |
| `scan_compliance_soc2.json` | Full report data (machine-readable) — for GRC API push or custom downstream tooling. |
The cover-page Scope Attestation records:

- Targets explicitly excluded, with a one-line business justification each (missing justification is flagged ⚠ NO JUSTIFICATION PROVIDED)
- Framework + version — SOC 2 (AICPA TSC 2017, with 2022 points of focus)
- Clock advisory — NTP drift status + probe-staleness classification
- TSA policy OID (when configured) and version-delta verdict (stable / changed / unknown)
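The exclusion-justification rule can be sketched as a small renderer. The interface and field names here are illustrative, not the report's actual schema:

```typescript
// Sketch of the exclusion-justification rule described above: an excluded
// target without a business justification is flagged. Field names are
// illustrative, not the report's actual schema.
interface ExcludedTarget {
  target: string;
  justification?: string;
}

function renderExclusion(e: ExcludedTarget): string {
  const note = e.justification?.trim()
    ? e.justification.trim()
    : "⚠ NO JUSTIFICATION PROVIDED";
  return `${e.target}: ${note}`;
}
```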
How an auditor verifies the artifacts
```
# Verify integrity sidecars (all four MUST return OK)
$ shasum -a 256 -c scan_compliance_soc2.md.sha256
$ shasum -a 256 -c scan_compliance_soc2.html.sha256
$ shasum -a 256 -c scan_compliance_soc2.json.sha256
$ shasum -a 256 -c scan_attestation_soc2.json.sha256

# Verify RFC 3161 trusted timestamp (when configured)
$ openssl ts -verify \
    -in scan_compliance_soc2.json.tsr \
    -queryfile scan_compliance_soc2.json.tsq \
    -CAfile /path/to/tsa-ca-bundle.pem
Verification: OK
```
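The same sidecar check can be done programmatically. A minimal sketch, assuming the shasum-style sidecar format `<hex digest>  <filename>`:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Recompute a SHA-256 digest as a lowercase hex string.
function sha256Hex(data: Buffer | string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Compare an artifact against its .sha256 sidecar. Assumes the shasum-style
// sidecar format "<hex digest>  <filename>".
function verifySidecar(artifactPath: string, sidecarPath: string): boolean {
  const expected = readFileSync(sidecarPath, "utf8").trim().split(/\s+/)[0];
  return sha256Hex(readFileSync(artifactPath)) === expected.toLowerCase();
}
```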
Why this holds up to a CPA review
Evidence integrity — three layered guarantees
SOC 2 evidence falls apart the moment an auditor finds a gap a security team could exploit. We engineered for the threat model where the security team itself is one of the actors.
1. Integrity (tamper evidence)
Cover page is part of the report bytes. .sha256 sidecars verify the report bytes. If the cover-page scope is wrong, the hashes won't match. Always written.
2. Non-repudiation (against an insider regenerating both)
Opt-in via COMPLIANCE_TSA_URL. When configured, each artifact ships a .tsr sidecar containing a Time-Stamp Response from the configured RFC 3161 Time-Stamp Authority. The TSA's signature attests "this hash existed at <TSA timestamp>" — a third-party time floor an internal actor cannot backdate. Recommended TSAs: FreeTSA.org (free, testing), DigiCert / Sectigo / GlobalSign (paid commercial), or your internal corporate TSA.
3. Suppression non-repudiation
Each suppression can be Ed25519-signed. The signature payload is NFC-normalized per RFC 5198 and capped at 64KiB. Verification happens against a corporate identity registry binding approver names to public keys with authorization scopes (which frameworks an approver may authorize for). Late-renewal flagging surfaces governance gaps as bands: HEALTHY / ACCEPTABLE / DEFICIENT / CRITICAL. Quarterly trend analysis catches degradation that single-point-in-time review would miss.
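The NFC-normalize-then-verify flow can be sketched with Node's one-shot Ed25519 API. This is a simplified illustration: the registry lookup is reduced to a public-key argument, and the payload string is invented for the demo.

```typescript
import { generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

const MAX_PAYLOAD_BYTES = 64 * 1024; // the 64 KiB cap described above

// Canonicalize the payload (NFC, per RFC 5198) and verify its Ed25519
// signature against an approver's public key from the identity registry.
function verifySuppression(payload: string, signature: Buffer, approverKey: KeyObject): boolean {
  const canonical = Buffer.from(payload.normalize("NFC"), "utf8");
  if (canonical.byteLength > MAX_PAYLOAD_BYTES) return false;
  // Node's one-shot verify takes a null digest algorithm for Ed25519.
  return verify(null, canonical, approverKey, signature);
}

// Demo: sign a suppression rationale with a fresh keypair.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const payload = "false_positive: scanner misreads SSH banner on 10.0.0.5";
const sig = sign(null, Buffer.from(payload.normalize("NFC"), "utf8"), privateKey);
```

Normalizing before signing matters: two visually identical rationale strings with different Unicode composition would otherwise produce different signatures.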
The "what about false positives?" answer
Suppression workflow
Every real scan produces findings the security team triaged out — false positives, accepted risks with compensating controls. SOC 2 auditors reject silent deletion (looks like under-reporting) but accept documented exclusions.
The compliance engine reads out/<scan-id>/suppressions.json and enforces three fields per suppression: a specific match, a non-empty rationale, and an approver. Suppressions missing any of these are rejected and the underlying finding stays in fail status — the engine refuses to silently absorb undocumented suppressions.
Per-status default expiry
accepted_risk — 90 days (compensating controls decay; quarterly review is institutional norm)
false_positive — 365 days (scanner-error classifications don't go stale)
Caller can override via explicit expiresInDays
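The three-required-fields rule and the per-status default expiry can be sketched together. Field names mirror the prose; the real suppressions.json schema may differ.

```typescript
// Sketch of the three-required-fields rule plus per-status default expiry.
// Field names mirror the prose; the real suppressions.json schema may differ.
interface SuppressionInput {
  match?: string;      // what the suppression applies to
  rationale?: string;  // why it is suppressed
  approver?: string;   // who approved it
  status: "accepted_risk" | "false_positive";
  expiresInDays?: number;
}

const DEFAULT_EXPIRY_DAYS = { accepted_risk: 90, false_positive: 365 } as const;

function validateSuppression(s: SuppressionInput): { accepted: boolean; expiresInDays: number } {
  const present = (v?: string) => typeof v === "string" && v.trim().length > 0;
  return {
    // All three fields must be non-empty, or the finding stays in fail status.
    accepted: present(s.match) && present(s.rationale) && present(s.approver),
    expiresInDays: s.expiresInDays ?? DEFAULT_EXPIRY_DAYS[s.status],
  };
}
```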
Where suppressions show up
Every suppression appears in Appendix B — Accepted Risks & False Positives with control ID, finding text, status (accepted_risk vs false_positive), approver, rationale, and renewal chain. Expired suppressions surface as report.expiredSuppressions[] with a "REVIEW REQUIRED" callout when non-empty.
Recurring-scan attestation — compliance status taxonomy
pass — scans within cadence, no scope drift
pass_with_drift_review — within cadence; scope-drift events surfaced for CC8.1 change-management review
cadence_breach — any inter-scan gap exceeds threshold (CC7.1 finding)
no_evidence — zero scans in the window (distinguishes "scans on disk but outside window" from "truly empty rootDir" via discoveredCount)
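The taxonomy above reduces to a small classifier. This is a simplified sketch: scope-drift detection is collapsed to a boolean input, and window filtering is assumed to have already happened.

```typescript
type AttestationStatus =
  | "pass"
  | "pass_with_drift_review"
  | "cadence_breach"
  | "no_evidence";

// Simplified sketch of the recurring-attestation taxonomy: scope-drift
// detection is reduced to a boolean, window filtering assumed done upstream.
function classifyWindow(
  scanTimesMs: number[],
  maxGapMs: number,
  scopeDrift: boolean
): AttestationStatus {
  if (scanTimesMs.length === 0) return "no_evidence";
  const sorted = [...scanTimesMs].sort((a, b) => a - b);
  for (let i = 1; i < sorted.length; i++) {
    if (sorted[i] - sorted[i - 1] > maxGapMs) return "cadence_breach"; // CC7.1
  }
  return scopeDrift ? "pass_with_drift_review" : "pass";
}
```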
SLA / MTTR tracking
Per-severity thresholds from sla.json. Three statuses per finding: compliant, approaching (default 75% of threshold), breached. The renderer separates breachedTotal (raw count) from breachedEffective (post-compensating-control). Auditors read both — breachedEffective > 0 is a hard CC7.1 finding. Suppressions with status: accepted_risk + non-empty compensating_control flip the effective status to breached_with_compensating_control; the renderer surfaces an inline disclaimer requiring auditor verification of each compensating-control text.
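The per-finding status logic is straightforward to sketch. Parameter names are illustrative; the 75% approaching ratio matches the default stated above.

```typescript
type SlaStatus = "compliant" | "approaching" | "breached";

// Age an open finding against its per-severity SLA threshold. "approaching"
// starts at a configurable fraction of the threshold (default 75%, per the
// prose). Parameter names are illustrative.
function slaStatus(openDays: number, thresholdDays: number, approachingRatio = 0.75): SlaStatus {
  if (openDays > thresholdDays) return "breached";
  if (openDays >= thresholdDays * approachingRatio) return "approaching";
  return "compliant";
}
```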
GRC platform integration
Native Vanta push, Drata + Secureframe planned
The Vanta connector maps NSAuditor findings to Vanta TestResult objects and pushes them via API. Same compliance engine and renderer power Drata and Secureframe when they ship.
Outcome mapping (Vanta)
| NSAuditor status | Vanta outcome |
| --- | --- |
| `pass` | `passed` |
| `fail` (all violations compensated) | `passed_with_compensating_control` |
| `fail` (any uncompensated) | `failed` |
| `partial` | `failed` (Vanta has no partial; `partialReason` in description) |
| `accepted_risk` | `passed_with_compensating_control` |
| `false_positive` | `passed` |
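The mapping above can be expressed as a function. The `allCompensated` flag (our name, only relevant to the `fail` status) stands in for the "all violations compensated" distinction:

```typescript
type VantaOutcome = "passed" | "passed_with_compensating_control" | "failed";

// The outcome-mapping table as a function. `allCompensated` only matters for
// the "fail" status; argument names are illustrative.
function toVantaOutcome(status: string, allCompensated = false): VantaOutcome {
  switch (status) {
    case "pass":
    case "false_positive":
      return "passed";
    case "accepted_risk":
      return "passed_with_compensating_control";
    case "fail":
      return allCompensated ? "passed_with_compensating_control" : "failed";
    case "partial":
      return "failed"; // Vanta has no partial; partialReason goes in description
    default:
      throw new Error(`unknown status: ${status}`);
  }
}
```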
Reliability features
Pre-flight — single-shot GET to vendor identity endpoint with AbortController timeout
Streaming response cap — MAX_RESPONSE_BYTES=1MiB (CC7.1 input validation at vendor API boundary)
Duration cap — maxTotalDurationMs=180s per push (A1.2 availability bound under Retry-After saturation)
Idempotency — idempotencyKey() with scanId collision disambiguation (~<sha16> suffix)
Description truncation — control descriptions truncate at 8KB with explicit count + scan reference
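The collision-disambiguated key can be sketched as follows. The `payload` argument is our assumption about what gets hashed into the `~<sha16>` suffix:

```typescript
import { createHash } from "node:crypto";

// Sketch of a collision-disambiguated idempotency key: a stable scan id plus
// a 16-hex-char content-hash suffix, mirroring the "~<sha16>" convention
// above. The payload argument is an assumption about what gets hashed.
function idempotencyKey(scanId: string, payload: string): string {
  const sha16 = createHash("sha256").update(payload).digest("hex").slice(0, 16);
  return `${scanId}~${sha16}`;
}
```

The point of the suffix is that retries of the same push are deduplicated by the vendor API, while a changed payload under the same scan id produces a fresh key.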
Vendor FAQ
Is Nsasoft itself SOC 2 audited?
A direct answer to the most common procurement question. We're transparent about what we do and don't have.
Short answer
No, and we don't currently need to be. NSAuditor AI runs entirely on customer infrastructure. Scan data, findings, reports, and credentials never touch Nsasoft servers. License validation is offline (JWT with embedded public key). AI analysis uses customer-provided API keys (OpenAI, Claude, Ollama). We are not a data processor under any regulation.
Why most security vendors need SOC 2 (and we don't)
SaaS GRC platforms, MDR services, and cloud SIEMs all ingest customer security telemetry into their own infrastructure. They are data processors, so SOC 2 attestation is the table-stakes proof that they handle that data competently; DPAs, BAAs, and vendor security reviews are all mandatory.
NSAuditor AI is the inverse architecture. The product is a CLI / scanner / MCP server that runs in your environment. It produces files you keep. The only network traffic between you and us is npm package installs and (if you're an Enterprise customer) license-key purchase & renewal — all of which happen through standard npm and Stripe infrastructure. We have nothing of yours to safeguard.
What we offer instead of an attestation
Architecture verifiability. The Community Edition is MIT-licensed open source. You can inspect the entire scanner. The Enterprise Edition follows the same architectural model — you can run tcpdump against it and prove the ZDE claim.
Offline license validation. ES256 JWT with the public key embedded in the CE platform. No phone-home for license checks, even on Enterprise.
Air-gapped deployment for OT and regulated environments — Docker images and signed offline tarballs.
The product produces SOC 2 evidence for you (this page). That's the relevant security claim — it shifts SOC 2 attestability to the customer's audit, where it actually belongs.
Future plans
If we ever begin processing customer data — for example a hosted GRC service or a SaaS dashboard layer — we will pursue SOC 2 Type II at that point and will publish the attestation here. Until then, the honest answer is "we don't need it, and we'd rather you didn't trust us than perform compliance theater."
Auditor FAQ
Questions your CPA firm asks, answered up front
How do I verify the scope of this report wasn't manipulated post-scan?
Three layered guarantees: integrity via SHA-256 sidecars on every artifact (always written, verifiable with shasum -a 256 -c); non-repudiation via opt-in RFC 3161 trusted-timestamp .tsr sidecars from a Time-Stamp Authority; suppression non-repudiation via Ed25519 signatures verified against a corporate identity registry.
What if the security team marked a real finding as "false positive" to make the report look clean?
Multiple controls prevent this: every suppression is logged in Appendix B with rationale + approver; suppressions can carry Ed25519 signatures the auditor verifies independently; approvers are cross-referenced against a corporate identity registry with authorization-scope checks; suppressions renewed after expiration are flagged LATE or VERY_LATE; quarterly trend analysis surfaces governance degradation.
How does the scanner handle clock drift?
An NTP probe measures local-clock drift against configurable NTP servers. Drift above threshold triggers a WARN or CRITICAL clock advisory on the cover page. In strict mode (COMPLIANCE_NTP_STRICT=true), critical drift aborts the compliance run entirely. Probe-staleness detection classifies the gap between probedAt and generatedAt to catch backdated or stale reports.
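The drift classification reduces to a threshold check. The numeric defaults here are illustrative only; the product's thresholds are configurable.

```typescript
type ClockAdvisory = "OK" | "WARN" | "CRITICAL";

// Classify measured NTP drift against warn/critical thresholds. These
// threshold defaults are illustrative; the product's are configurable.
function classifyDrift(driftMs: number, warnMs = 500, criticalMs = 5_000): ClockAdvisory {
  const abs = Math.abs(driftMs); // drift can be ahead or behind
  if (abs >= criticalMs) return "CRITICAL";
  if (abs >= warnMs) return "WARN";
  return "OK";
}
```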
This is a single-point-in-time scan. How does Type II apply?
NSAuditor AI EE supports Type II evidence via recurring-scan attestation, SLA / MTTR tracking with per-severity thresholds, version-delta detection across scans, and quarterly suppression-renewal cadence trend analysis with governance bands.
Why is CC1 (control environment) marked out of scope?
CC1 is about board oversight and organizational ethics — these are inherently human / process artifacts, not network state. We could pretend, but pretending creates more audit risk than admitting the boundary. Pair NSAuditor AI EE with a GRC platform (Vanta / Drata / Secureframe) for the governance layer.
Where can I see the canonical mapping?
The source of truth is data/compliance/soc2.json in the EE package. The full SOC 2 coverage matrix in the EE repo mirrors that file, and a test asserts the two stay in sync. The customer-facing version is at nsauditor.com/ai/docs/soc2/.
Ready to ship a SOC 2 audit?
Talk to us about an Enterprise license, or grab the open-core CE on npm to evaluate the scanner first. Audit-ready evidence in under five minutes from your first scan.