YEAR-END REPORT
Solana Security Ecosystem Review
2025
At a Glance
Exploits
On-chain Solana smart contract exploit losses fell to US$8mn in 2025, down from a 2022 peak of US$550mn, even as activity and TVL grew
Audits Analyzed
163 Solana security reviews examined, spanning 1,669 recorded vulnerabilities
Finding Density
An average of 10 issues per audit, including 1.4 High or Critical vulnerabilities per review
Vulnerability Themes
The most severe issues concentrate in business logic flaws, access control failures, and protocol design weaknesses
A Quick Note
We spend most of the year reviewing Solana programs one code base at a time. Across that work, patterns emerge - how bugs cluster, which security practices help, and where things can still go wrong even after an audit - but those patterns rarely live in one place.
This report is our attempt to put structure around what we are seeing: to quantify what audits actually uncover on Solana, place that evidence next to publicly reported on-chain incidents, and connect both to the design and operational choices development teams are making.
The aim is to give builders, reviewers and ecosystem participants a shared reference point for how Solana security is evolving and where attention is likely to matter most in the next year.
Security Reviews
Results
We analyzed 163 Solana security audits drawn from a mix of publicly released reports and anonymized Sec3 review engagements. Together these reviews produced 1,733 findings, of which 1,669 qualified as vulnerability-level issues.
99.4% of audits identified vulnerabilities
162 of 163 reviews identified at least one vulnerability
Severity Distribution
Distribution of 1,669 Vulnerabilities
of reviews contained at least one medium-or-higher issue
of reviews contained at least one high-or-critical issue
of reviews contained at least one critical issue
What Vulnerabilities Dominate
Among findings with clear classifications (approximately 70% of the total dataset):
All Classified Findings
High + Critical Only
Serious issues overwhelmingly stem from Business Logic, Permissions, and Validation Errors rather than low-level arithmetic or liveness problems. The combined share of the top 3 categories rises from 82.5% of all classified findings to 85.5% when restricted to high-and-critical findings.
Top Vulnerability Patterns
We further clustered findings into specific vulnerability types. Excluding the generic "Other/Mixed" bucket, the 10 most frequent patterns across all severities:
| # | Vulnerability | Findings | Details |
|---|---|---|---|
| 1 | Business Logic or Economic Flaws | 413 | Incorrect fee/reward accrual, misconfigured liquidation rules, accounting drift over time |
| 2 | Input Validation and Sanitization Issues | 326 | Unbounded user parameters, missing oracle freshness checks, unverified account ownership |
| 3 | Arithmetic & Rounding Errors | 93 | Fixed-point math errors, exploitable rounding at scale, release-build calculation discrepancies |
| 4 | Programming Errors | 80 | Incorrect control flow, module assumption mismatches, incomplete state transitions |
| 5 | Denial of Service or Resource Exhaustion | 79 | Unbounded loops/structures allowing single-account progress blocking |
| 6 | Access Control Issues (General) | 67 | Missing or incomplete permission checks |
| 7 | Missing Account Owner Checks | 61 | Failure to verify account ownership before state modifications |
| 8 | Missing Signer or Authority Checks | 36 | Operations executable without proper authorization |
| 9 | PDA Derivation Errors | 32 | Incorrect program-derived address computation or validation |
| 10 | Lamport and Rent Handling Issues | 29 | Improper balance management or rent exemption handling |
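The "Arithmetic & Rounding Errors" row above typically comes down to rounding direction and unchecked multiplication. A minimal sketch of both pitfalls, using an invented fee function (the names and the 30 bps rate are illustrative, not drawn from any audited program):

```rust
// Hypothetical fee calculation, fees in basis points (1 bp = 0.01%).

/// Naive version: integer division truncates toward zero, so small
/// amounts round the fee down to 0, letting a caller split a large
/// transfer into many tiny ones and pay no fee. The multiply can
/// also overflow u64 (release builds wrap silently by default).
fn fee_naive(amount: u64, fee_bps: u64) -> u64 {
    amount * fee_bps / 10_000
}

/// Safer version: widen to u128 before multiplying, use checked math,
/// and round the fee up so the protocol never undercharges.
fn fee_checked_ceil(amount: u64, fee_bps: u64) -> Option<u64> {
    let product = (amount as u128).checked_mul(fee_bps as u128)?;
    // Ceiling division: (p + d - 1) / d with d = 10_000.
    let fee = (product + 9_999) / 10_000;
    u64::try_from(fee).ok()
}

fn main() {
    // 30 bps fee on 100 lamports: naive math truncates 0.3 down to 0.
    assert_eq!(fee_naive(100, 30), 0);
    // Ceiling version charges at least 1 lamport.
    assert_eq!(fee_checked_ceil(100, 30), Some(1));
    // Widening to u128 keeps the intermediate product exact even at u64::MAX.
    assert!(fee_checked_ceil(u64::MAX, 30).is_some());
}
```

Rounding in the protocol's favor (down on payouts, up on fees) is the conventional defense against the dust-splitting attack the naive version permits.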
Among high-and-critical issues, access control problems dominate. Grouping "Access control (general)", "Missing signer/authority check", "Missing account owner check", "PDA derivation issues", and related access control patterns together, they account for over 20% of all high-and-critical findings.
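To make the dominant bug class concrete, here is a minimal sketch of a missing signer check. It models the pattern on simplified, invented types rather than the real `solana_program::account_info::AccountInfo` (which carries the same `is_signer` flag); the vault and its functions are hypothetical:

```rust
// Simplified stand-in for an on-chain account (all names invented).
struct Account {
    key: [u8; 32],
    is_signer: bool,
}

#[derive(Debug, PartialEq)]
enum ProgramError {
    MissingSignature,
    WrongAuthority,
}

struct Vault {
    authority: [u8; 32],
    balance: u64,
}

/// Vulnerable: trusts whatever account the caller passed. It never
/// compares keys or checks is_signer, so anyone who merely knows the
/// authority's public key can withdraw.
fn withdraw_vulnerable(vault: &mut Vault, _authority: &Account, amount: u64) {
    // BUG: no key comparison, no is_signer check.
    vault.balance = vault.balance.saturating_sub(amount);
}

/// Fixed: verify the account matches the stored authority AND that it
/// actually signed the transaction, before touching any state.
fn withdraw_checked(
    vault: &mut Vault,
    authority: &Account,
    amount: u64,
) -> Result<(), ProgramError> {
    if authority.key != vault.authority {
        return Err(ProgramError::WrongAuthority);
    }
    if !authority.is_signer {
        return Err(ProgramError::MissingSignature);
    }
    vault.balance = vault.balance.saturating_sub(amount);
    Ok(())
}

fn main() {
    // Attacker supplies the authority's public key but no signature.
    let forged = Account { key: [1; 32], is_signer: false };

    let mut vault = Vault { authority: [1; 32], balance: 100 };
    withdraw_vulnerable(&mut vault, &forged, 100);
    assert_eq!(vault.balance, 0); // drained without a signature

    let mut vault = Vault { authority: [1; 32], balance: 100 };
    let res = withdraw_checked(&mut vault, &forged, 100);
    assert_eq!(res, Err(ProgramError::MissingSignature));
    assert_eq!(vault.balance, 100); // funds untouched
}
```

Frameworks like Anchor generate the equivalent of `withdraw_checked`'s preamble automatically, which is one reason raw-SDK programs show more of this bug class.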
Snapshot of Solana in 2025
DeFi TVL
Source: defillama.com/chain/solana
Stablecoin TVL
Source: defillama.com/stablecoins/solana
Higher TVL and accelerating stablecoin growth indicate an ecosystem supporting more applications, more users, and more complex financial activity. As value concentration and protocol complexity increase, the security surface expands. This broader context makes the decline in large-scale exploits particularly notable and strengthens the case that security practices across the ecosystem have improved.
How Framework Choices Shape Vulnerability Patterns
Most Solana programs are built with the raw Rust SDK, Anchor, or Pinocchio. Each choice changes where validation happens and which bugs are most common.
| Framework | Guardrail | Typical Vulnerability Profile | Pros | Cons | Best Used When |
|---|---|---|---|---|---|
| Rust SDK | None by default | Missing checks, fragile dispatch, incorrect PDA derivations | Full control, maximum flexibility | Easy to miss basics, lots of boilerplate | You need total customization |
| Anchor | Built into the framework | Protocol logic issues, edge cases, incentive design | Safe defaults, standardized account validation | More abstraction, larger binaries | Most new protocols, standard workflows |
| Pinocchio | Must be rebuilt manually | Missing owner/signer checks, offset or aliasing errors | Fastest, most compact, lowest compute-unit cost | Manual validation everywhere, unsafe if rushed | Highly optimized systems with disciplined engineering |
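The table's "where validation happens" distinction can be sketched as follows. With the raw SDK (and Pinocchio), the program parses account bytes itself and must remember every check; Anchor instead refuses to hand back a typed account unless owner and discriminator checks pass at deserialization time. All types and names below are invented for illustration, with a one-byte tag standing in for Anchor's 8-byte discriminator:

```rust
// Hypothetical sketch of "guardrails at deserialization".
const PROGRAM_ID: [u8; 32] = [7; 32];
const COUNTER_TAG: u8 = 1; // stand-in for Anchor's 8-byte discriminator

struct RawAccount {
    owner: [u8; 32],
    data: Vec<u8>,
}

#[derive(Debug, PartialEq)]
struct Counter {
    count: u64,
}

/// Raw-SDK style: nothing stops us from interpreting an account our
/// program does not own, or one holding a different account type.
fn load_unchecked(acc: &RawAccount) -> Counter {
    let bytes: [u8; 8] = acc.data[1..9].try_into().unwrap();
    Counter { count: u64::from_le_bytes(bytes) }
}

/// Framework-style load: owner and type tag are verified before any
/// field is trusted, roughly what Anchor's generated deserializer does.
fn load_checked(acc: &RawAccount) -> Result<Counter, &'static str> {
    if acc.owner != PROGRAM_ID {
        return Err("account not owned by this program");
    }
    if acc.data.first() != Some(&COUNTER_TAG) {
        return Err("wrong account type");
    }
    let bytes: [u8; 8] = acc.data[1..9].try_into().map_err(|_| "too short")?;
    Ok(Counter { count: u64::from_le_bytes(bytes) })
}

fn main() {
    // Attacker-controlled account owned by another program, with
    // plausible-looking bytes.
    let spoofed = RawAccount { owner: [9; 32], data: vec![1, 42, 0, 0, 0, 0, 0, 0, 0] };
    // The unchecked load happily returns attacker-chosen state.
    assert_eq!(load_unchecked(&spoofed).count, 42);
    // The checked load rejects it before the data is trusted.
    assert!(load_checked(&spoofed).is_err());

    let genuine = RawAccount { owner: PROGRAM_ID, data: vec![1, 5, 0, 0, 0, 0, 0, 0, 0] };
    assert_eq!(load_checked(&genuine), Ok(Counter { count: 5 }));
}
```

The trade-off in the table follows directly: Pinocchio drops the `load_checked` layer for compute savings, so every one of those checks must be reintroduced by hand.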
Practical Guide
Before the Audit
How Strong Teams Prepare
Plan the audit
- Engage auditors early and lock dates
- Define scope: programs that hold value
- Create dedicated audit branch
- Budget time post review for fixes
Make the system easy to understand
- Overview: purpose, assets, users
- Roles & permissions table
- Account & PDA map
- Core flow diagrams
Ship reviewable code and tests
- Clear structure, no dead code
- Inline comments for critical logic
- High coverage unit & integration tests
- Document known limitations
Run automated tools first
- Linters & static analyzers
- Formal verification when relevant
- Security scanners (Soteria, etc.)
- Fix obvious findings before kickoff
During the Review
Collaboration & Communication
Stay engaged
- Rapid response on auditor questions
- Clarify intent vs. actual behavior
- Acknowledge & track all findings
- Don't assume; confirm understanding
Prioritize findings
- High/Critical first, info later
- Discuss severity if you disagree
- Validate exploit paths together
- Document mitigation strategy
Track changes properly
- Keep fixes on audit branch
- Tag each commit to a finding
- Request re-review for complex fixes
- Don't deploy until cleared
Where Risk Is Moving
Fewer failures come from simple bugs in isolated programs - more risk now sits in how systems, automation, and people interact
Stacked systems
Restaking, leverage, looping, multi-protocol strategies. Shared collateral creates correlated failures. Need stack level stress tests.
Yield bearing stablecoins
Supply has jumped into the low-double-digit billions. Risk is in the portfolio behind each dollar, not just the peg.
Vibe coded contracts
AI generated code shipped with thin specs. Forks tweak logic assuming changes are safe, breaking critical guardrails.
Agents and prompt injection
Agents act on natural language with keys and payment rails. Malicious prompts can steer them to misuse safe contracts.
Wallets and the human layer
Wallet drains and social engineering drive most losses. A single permit can empty accounts even when contracts are clean.
Security at Scale
Independent security reviews of Solana programs consistently identify vulnerabilities - primarily in business logic, input validation, and access control. Yet on-chain exploit losses in 2025 declined 98% from their 2022 peak.
Multiple factors likely contributed: adoption of frameworks like Anchor that provide built-in security guardrails, more experienced teams, better testing practices, and widespread adoption of pre-launch security review. As the ecosystem continues to grow in complexity, sustained attention to security - across tooling, testing, and review processes - will remain important.
Dataset Scope and Constraints
This dataset is informative but not exhaustive. Key constraints:
Audits do not cover all deployed code
Findings represent only projects that engaged auditors and agreed to publish or share results. Unaudited code, informal reviews, and unpublished work fall outside the sample. Read these numbers as "what auditors typically find when they look," not "what exists on-chain overall"
Automated classification is imperfect
Despite LLM assistance and manual sampling, some findings inevitably land in the wrong specific type. We maintain a large "Other/Mixed" bucket and avoid over-interpreting rare categories
Severity is defined at report time
Severities come from original reviewers. Teams occasionally disagree on edge-case ratings, and some findings shift in importance as protocols evolve or more context emerges. We aggregate severities as reported without re-grading
Per-review statistics are averaged
A small number of very large reviews contribute disproportionately to the total finding count. While the mean is 10.3 findings per review, the median of 7 shows that most audits uncover fewer vulnerabilities. We report both counts and per-review percentages to characterize the full distribution