# Penetration Testing Report
One-liner: The formal deliverable that documents findings, business impact, and remediation guidance from a penetration test.
## 🎯 What Is It?
A penetration testing report is the primary output of a security assessment. It translates technical findings into actionable intelligence for multiple audiences, from executives to developers, ensuring vulnerabilities are understood, prioritized, and remediated. Unlike raw tool output, a pentest report provides context, business impact, and clear remediation steps.
## 🤔 Why It Matters
- The only lasting deliverable – long after shells close and tools exit, the report remains
- Drives remediation – without a clear report, findings get ignored or misunderstood
- Legal/compliance evidence – documents due diligence for audits and regulations
- Communication bridge – connects technical findings to business risk
### For Different Audiences
- Developers – need technical details and code examples to fix issues
- Security teams – need to prioritize remediation and track progress
- Executives – need business impact and risk context to allocate resources
## 📋 Report Structure
### Three Core Sections
| Section | Target Audience | Share of Report | Purpose |
|---|---|---|---|
| Summary | Business & Security | 10-20% | High-level overview: what was tested, what was found, what it means |
| Vulnerability Write-Ups | Technical | 70-90% | Detailed findings with evidence and remediation |
| Appendices | Security | 5-10% | Scope, methodology, testing artifacts |
### Summary Section
The summary provides a non-technical overview for decision-makers:
#### Executive Summary (Business Focus)
- What was tested (application/network description)
- Overall security posture
- Key business risks
- High-level recommendations
#### Findings & Recommendations (Security Focus)
- Vulnerability statistics and risk ratings
- Common vulnerability themes
- Attack chains (how low/medium findings combine for high impact)
- Systemic issues to prevent future vulnerabilities
### Vulnerability Write-Ups
Each vulnerability gets a standalone write-up with a consistent structure:
**Essential Elements:**
- Title – clear and descriptive (e.g., "Unauthenticated SQL Injection in Login Form")
- Risk Rating – using CVSS or the client's matrix; rated in isolation
- Summary – brief explanation and impact in plain language
- Background – context for non-security experts (what is SQL Injection?)
- Technical Details & Evidence – HTTP requests, responses, screenshots, code snippets
- Impact – real-world consequences specific to the tested system
- Remediation Advice – actionable steps that address the root cause, not just symptoms
- References – links to vendor docs, OWASP, security resources
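The elements above can be collected into a reusable template so every write-up has the same shape; a minimal sketch in Python (the `Finding` class and its field names are illustrative, not a standard):

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    """One vulnerability write-up with the essential elements listed above."""
    title: str
    risk_rating: str          # e.g. CVSS v3.1 vector plus severity
    summary: str
    background: str
    evidence: str             # redacted requests/responses, screenshot references
    impact: str
    remediation: str
    references: list = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the finding as a markdown section for the report body."""
        refs = "\n".join(f"- {r}" for r in self.references)
        return (
            f"## {self.title}\n\n"
            f"**Risk:** {self.risk_rating}\n\n"
            f"### Summary\n{self.summary}\n\n"
            f"### Background\n{self.background}\n\n"
            f"### Technical Details & Evidence\n{self.evidence}\n\n"
            f"### Impact\n{self.impact}\n\n"
            f"### Remediation\n{self.remediation}\n\n"
            f"### References\n{refs}\n"
        )
```

Keeping findings as structured data also makes it easy to sort them by severity or generate the findings summary table automatically.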
**The Golden Thread:** With experience, these sections flow naturally without rigid headings; the report tells a story readers can follow from discovery to resolution.
### Appendices
Supporting documentation for completeness and future testing:
#### Assessment Scope
- Originally scoped vs. achieved coverage
- Out-of-scope items
- Limitations and constraints
- Recommendations for follow-up testing
#### Assessment Artifacts
- Files created during testing (webshells, test accounts, uploaded files)
- Changes made to the system
- Cleanup recommendations
- Credentials used
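A low-effort way to keep this appendix accurate is to log artifacts as structured data during the engagement and render the log at report time; a minimal sketch (the hosts, artifact names, and fields are illustrative):

```python
import csv
import io

# Logged as testing happens, not reconstructed from memory afterwards
artifacts = [
    {"host": "10.0.0.5", "artifact": "webshell at /uploads/sh.aspx", "cleanup": "delete file"},
    {"host": "portal.example.com", "artifact": "test account pentest_user1", "cleanup": "disable account"},
]


def artifacts_csv(rows: list) -> str:
    """Render the artifact log as CSV for the assessment artifacts appendix."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["host", "artifact", "cleanup"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```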
## 🔬 Technical Deep Dive
### Contextualizing Vulnerabilities
A great write-up explains the finding in the context of the specific system:
❌ Generic: "SQL Injection allows database access"
✅ Contextualized: "This SQL Injection in the login form bypasses authentication on the customer portal, allowing access to 50,000+ customer records including PII"
### Risk Rating Best Practices
- Rate vulnerabilities in isolation – assume the other findings do not exist
- Use CVSS v3.1 or the client's matrix consistently
- Document the justification for each rating
- Highlight attack chains separately in the findings summary
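When using CVSS v3.1, the numeric base score maps to a qualitative severity band defined in the specification; a small helper keeps those labels consistent across a report:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base score must be between 0.0 and 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Using a single function like this (or the client's equivalent matrix) avoids the common inconsistency of the same score being labelled differently in different write-ups.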
### Remediation: Root Cause First
Your primary recommendation must fix the vulnerability at its core:
**Example: SQL Injection**
- ✅ Primary fix: use parameterized queries – the database can no longer confuse user input with SQL code, regardless of what is supplied
- ⚠️ Defense-in-depth: input validation and WAF rules can help but cannot replace parameterization
- ❌ Mitigation only: input sanitization alone doesn't fix the root cause
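The root-cause fix can be demonstrated with Python's built-in sqlite3 module (the table, columns, and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

username = "' OR 1=1 --"   # classic authentication-bypass payload
password = "wrong"

# VULNERABLE: string concatenation lets the payload rewrite the query,
# commenting out the password check entirely
query = (f"SELECT * FROM users WHERE username = '{username}' "
         f"AND password = '{password}'")
bypassed = conn.execute(query).fetchall()   # matches every row

# FIXED: a parameterized query treats the input strictly as data
safe = conn.execute(
    "SELECT * FROM users WHERE username = ? AND password = ?",
    (username, password),
).fetchall()                                # matches no rows
```

Including a before/after snippet like this in the remediation section shows developers *why* the fix works, not just what to change.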
## 🛡️ Writing Best Practices
### Clarity and Professionalism
- Past tense – "The vulnerability was discovered..." (not "we pwned the login")
- No first person – avoid "I", "we", "our"; write as a neutral observer
- Objective tone – stick to the facts, without exaggeration or emotion
- Consistent terminology – don't switch between "parameter", "variable", and "input"
- Mask sensitive data – never include real passwords or PII without authorization
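Masking can be partially automated before evidence is pasted into a report; a minimal sketch using regular expressions (the patterns are illustrative and no substitute for a manual review pass):

```python
import re


def redact(evidence: str) -> str:
    """Mask obvious secrets in captured evidence before it enters a report."""
    # Passwords in form bodies, query strings, or JSON (illustrative pattern)
    evidence = re.sub(r'(password["\']?\s*[:=]\s*)\S+', r'\1[REDACTED]',
                      evidence, flags=re.IGNORECASE)
    # Bearer tokens in Authorization headers
    evidence = re.sub(r'(Authorization:\s*Bearer\s+)\S+', r'\1[REDACTED]',
                      evidence)
    # Session cookies
    evidence = re.sub(r'(Cookie:\s*\S*?session\S*?=)\S+', r'\1[REDACTED]',
                      evidence, flags=re.IGNORECASE)
    return evidence
```

Running captured requests through a filter like this reduces the risk of a real credential surviving into the final PDF, but every screenshot and snippet still needs a human check.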
### Quality Assurance Process
- Self-review – read your own work with fresh eyes
- Read aloud – catches awkward phrasing and errors
- Peer review – have another tester check technical accuracy and clarity
- Consistency check – same format, terminology, and style throughout
- Evidence verification – ensure screenshots and code snippets match their descriptions
## 🎤 Interview Angles
### Common Questions
- "Walk me through the structure of a pentest report"
- "How do you explain technical findings to non-technical stakeholders?"
- "What's the difference between an executive summary and findings summary?"
- "How do you prioritize and rate vulnerabilities?"
### STAR Story
**Situation:** Completed a web application penetration test for a healthcare client with multiple critical findings.
**Task:** Write a report that would drive remediation across technical and business stakeholders.
**Action:** Structured the report with an executive summary focusing on HIPAA compliance risk for the CISO, detailed technical write-ups with .NET/MS SQL code examples for developers, and a findings summary highlighting how medium-risk XSS and session fixation could combine to expose PHI. Included an assessment artifacts appendix listing all test accounts and webshells requiring cleanup.
**Result:** The client remediated all critical findings within two weeks. The development team used the code examples to fix not just the reported issues but similar patterns throughout the codebase. The report became the template for subsequent assessments.
## ✅ Best Practices
### Do's
- Write summary last after completing technical sections
- Provide client-specific remediation (not generic copy-paste)
- Include working exploit code when safe and authorized
- Document systemic issues (not just individual findings)
- Use screenshots with sensitive data redacted
- Explain why a fix works, not just what to do
### Don'ts
- Never use first-person language ("we", "our", "I")
- Don't copy findings from old reports without contextualizing
- Don't use informal language or humor
- Don't include unverified or assumed information
- Don't rely on tools alone – provide manual validation
- Don't just describe the vuln without explaining impact
## 🌍 Real-World Importance
- Legal evidence – reports are used in breach litigation and regulatory proceedings
- Compliance – required for PCI-DSS, SOC 2, and ISO 27001 audits
- Vendor assessment – third-party security validation
- Risk management – informs insurance and risk decisions
## ❌ Common Misconceptions
- "The report is just documentation" – it's the primary deliverable, not an afterthought
- "More findings = better report" – quality and context matter more than quantity
- "Technical teams don't need context" – even experts benefit from clear explanations
- "Tools can generate reports" – tool output lacks context, business impact, and remediation guidance
## 🔗 Related Concepts
- CVSS
- SQL Injection
- Parameterized Queries
- OWASP
- Vulnerability Management
- Risk Rating
- Technical Writing
## 📚 References
- PTES Technical Guidelines: http://www.pentest-standard.org/
- OWASP Testing Guide: https://owasp.org/www-project-web-security-testing-guide/
- SANS Pentest Report Template
- NIST SP 800-115 - Technical Guide to Information Security Testing