How we review crypto wallets
At Whitewallet Research, wallet reviews are based on hands-on testing across criteria that reflect how people actually use self-custody tools: storing assets, moving them across networks, accessing DeFi, and recovering access when something goes wrong. Scores are not adjusted based on partnerships or commercial relationships.
What we test
We evaluate wallets across seven criteria. Each receives a score from 1.0 to 5.0. The final score is a weighted average that reflects how much each criterion matters in practice.
Our working question throughout: can a user store assets securely, transact without unnecessary friction, and access on-chain financial tools without giving up control of their keys?
We do not conduct code-level security audits or penetration testing. Scores reflect observable practices, feature completeness, and usability – not guarantees against all possible exploits.
Scoring scale
5/5 – Best in class across the industry at time of review
4/5 – Above average, works well for most users
3/5 – Functional with meaningful trade-offs
2/5 – Significant gaps that affect usability or safety
1/5 – Broken, unsafe, or unsuitable for self-custody
Weighted criteria
Not all criteria carry equal weight. Security and asset access are more consequential than interface polish. The weighting reflects that.
Tier 1 – Critical (60%)
These determine whether a wallet is safe and practically usable:
- Security & Key Management – 25%
- Supported Assets & Networks – 20%
- Transaction Costs & Speed – 15%
Tier 2 – Quality (25%)
These affect daily usability and on-chain functionality:
- User Experience & Interface – 10%
- DeFi & Ecosystem Integration – 15%
Tier 3 – Support & Recovery (15%)
Important for edge cases and long-term use:
- Recovery & Backup Systems – 10%
- Customer Support & Documentation – 5%
Note on Tier 2 weighting: DeFi and ecosystem integration carries more weight here than in standard hot wallet methodologies. Most active crypto users interact with DeFi protocols, bridges, and yield products regularly. A wallet that limits that access creates a real constraint on how it can be used.
The seven criteria
1. Security & Key Management (25%)
What we check: private key storage model (local device, encrypted cloud, MPC, or custodial); authentication options (PIN, password, biometrics, 2FA); seed phrase generation and backup process; transaction signing behavior; open-source availability and audit history; disclosed security incidents and response.
How we test: we generate new wallets and document the full seed phrase backup flow, test all available authentication methods, attempt to export private keys, verify that the wallet requires confirmation before signing transactions, and review available audit reports and public repositories.
Score examples:
5/5 – Open-source, locally stored encrypted keys, biometric and PIN authentication, seed phrase backup with verification step, hardware wallet support, audited by an independent firm
3/5 – Closed-source, PIN only, seed phrase backup optional, basic transaction confirmation
1/5 – Cloud-stored keys without encryption, no seed phrase backup, auto-signs transactions
2. Supported Assets & Networks (20%)
What we check: number of supported blockchains; token standards (ERC-20, SPL, BEP-20, and others); Layer 2 and sidechain compatibility; NFT display and management; ability to add custom tokens; multi-chain address management.
How we test: we add wallets for BTC, ETH, SOL, and major L2s; import custom tokens using contract addresses; attempt to receive NFTs; and check whether the wallet auto-detects new tokens or requires manual addition.
Score examples:
5/5 – 50+ chains including Bitcoin, Ethereum, Solana, and major L2s; auto-detects tokens; NFT display
3/5 – Ethereum and a small number of EVM chains; manual token import; basic NFT support
1/5 – Single chain; no custom tokens; no NFT support
3. Transaction Costs & Speed (15%)
What we check: fee estimation accuracy; ability to customize transaction fees; support for Replace-By-Fee or transaction speed-up; batch transaction support; real-time fee data; network congestion warnings.
How we test: we send test transactions during both low and high network congestion, compare fee estimates to external trackers, test custom fee input, attempt to accelerate pending transactions, and check whether the wallet surfaces congestion warnings.
Score examples:
5/5 – Real-time fee integration, custom fee input, RBF support, batch sends, congestion warnings
3/5 – Slow/standard/fast presets, estimates reasonably close to actual, no RBF
1/5 – Fixed fees, no customization, estimates significantly off, no recovery for stuck transactions
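The fee-accuracy check described above reduces to a relative-error comparison between the wallet's quote and the fee actually paid. A minimal sketch (the function name and sample values are illustrative, not part of the methodology):

```python
def fee_error_pct(estimated_gwei: float, actual_gwei: float) -> float:
    """Relative error of a wallet's fee estimate against the fee actually paid."""
    return abs(estimated_gwei - actual_gwei) / actual_gwei * 100

# A wallet quoting 32 gwei for a transfer that cleared at 30 gwei
# missed by roughly 6.7%.
print(round(fee_error_pct(32.0, 30.0), 1))  # 6.7
```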
4. User Experience & Interface (10%)
What we check: onboarding clarity; navigation structure; transaction history readability; portfolio overview; consistency across mobile and desktop; accessibility features.
How we test: we create wallets from scratch, navigate core functions, test mobile and desktop versions side by side, locate specific past transactions, and evaluate portfolio display and fiat conversion accuracy.
Score examples:
5/5 – Clear onboarding, intuitive navigation, detailed transaction history with fiat values, consistent mobile and desktop experience
3/5 – Functional layout, basic transaction list, mobile missing some desktop features
1/5 – Confusing navigation, incomplete transaction history, mobile experience unusable
5. DeFi & Ecosystem Integration (15%)
What we check: built-in dApp browser; WalletConnect support; native swap functionality; staking and yield interfaces; cross-chain bridging; support for signing messages and interacting with smart contracts.
How we test: we connect wallets to major DeFi protocols via WalletConnect and built-in browsers, test native swap features, attempt to stake tokens and access yield products, and test bridging between supported networks.
Score examples:
5/5 – Built-in browser, WalletConnect, native swaps with DEX aggregation, staking and yield UI, native bridging
3/5 – WalletConnect only, basic swap, no staking or yield interface
1/5 – No WalletConnect, no dApp browser, send and receive only
6. Recovery & Backup Systems (10%)
What we check: seed phrase standard (BIP39); backup verification process; cloud backup options; social recovery; multi-signature support; account migration from other wallets.
How we test: we generate new wallets, document the full backup flow, test seed phrase import and export, attempt cloud backups where available, and verify whether the wallet confirms the user has saved the seed correctly.
Score examples:
5/5 – BIP39 seed with verification quiz, encrypted cloud backup option, social recovery, import from other wallets
3/5 – BIP39 seed, basic backup prompt, no verification, no cloud backup
1/5 – No seed phrase shown, cloud-only with no export option, no recovery path if access is lost
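The BIP39 standard referenced above ties a seed phrase to its entropy with a SHA-256 checksum, which is what makes backup-verification steps possible: a mistyped phrase almost always fails the checksum. A minimal sketch of the derivation, producing word indices only (a real implementation maps each index through the 2048-word BIP39 list):

```python
import hashlib
import secrets

def mnemonic_indices(entropy: bytes) -> list[int]:
    """Split entropy plus its SHA-256 checksum bits into 11-bit BIP39 word indices."""
    ent_bits = len(entropy) * 8
    cs_bits = ent_bits // 32  # 4 checksum bits per 128 bits of entropy
    digest = hashlib.sha256(entropy).digest()
    bits = ''.join(f'{b:08b}' for b in entropy)
    bits += ''.join(f'{b:08b}' for b in digest)[:cs_bits]
    return [int(bits[i:i + 11], 2) for i in range(0, len(bits), 11)]

indices = mnemonic_indices(secrets.token_bytes(16))  # 128 bits -> 12 words
print(len(indices))  # 12 indices, each in 0..2047
```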
7. Customer Support & Documentation (5%)
What we check: support channel availability; response times; quality of educational resources; community presence; developer documentation.
How we test: we submit support tickets for common issues, measure response times, review help center content, and check community activity.
Score examples:
5/5 – Responsive support with short response times, comprehensive guides, active community channels, detailed documentation
3/5 – Email support with responses within 24 hours, basic FAQ, limited community presence
1/5 – No support channel, outdated or absent documentation, inactive community
How the final score is calculated
Step 1: Rate each criterion on the 1–5 scale.
Step 2: Multiply each score by its weight.
Step 3: Sum the weighted scores.
Example
| Criterion | Score | Weight | Weighted score |
| --- | --- | --- | --- |
| Security & Key Management | 5/5 | 0.25 | 1.25 |
| Supported Assets & Networks | 4/5 | 0.20 | 0.80 |
| Transaction Costs & Speed | 4/5 | 0.15 | 0.60 |
| User Experience & Interface | 4/5 | 0.10 | 0.40 |
| DeFi & Ecosystem Integration | 4/5 | 0.15 | 0.60 |
| Recovery & Backup Systems | 4/5 | 0.10 | 0.40 |
| Customer Support & Documentation | 3/5 | 0.05 | 0.15 |
| Total | | 1.00 | 4.20/5.00 |
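The three steps above can be sketched directly; the weights are the ones defined in the tiers, and the scores are the example review's ratings:

```python
# Weights mirror the tiered criteria; scores are the example review's ratings.
weights = {
    "Security & Key Management": 0.25,
    "Supported Assets & Networks": 0.20,
    "Transaction Costs & Speed": 0.15,
    "User Experience & Interface": 0.10,
    "DeFi & Ecosystem Integration": 0.15,
    "Recovery & Backup Systems": 0.10,
    "Customer Support & Documentation": 0.05,
}
scores = {
    "Security & Key Management": 5,
    "Supported Assets & Networks": 4,
    "Transaction Costs & Speed": 4,
    "User Experience & Interface": 4,
    "DeFi & Ecosystem Integration": 4,
    "Recovery & Backup Systems": 4,
    "Customer Support & Documentation": 3,
}

# Steps 2 and 3: multiply each score by its weight, then sum.
final = sum(scores[c] * weights[c] for c in weights)
print(f"{final:.2f}/5.00")  # 4.20/5.00
```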
What we don’t rate
- Code-level security audits – we rely on published audits and observable practices, not independent penetration testing.
- Legal or regulatory compliance – varies by jurisdiction and falls outside the scope of these reviews.
- Long-term project viability – a wallet that scores well today may be unsupported in the future.
- Hardware and cold storage functionality – this methodology covers software wallets only.
Our testing process
- Download the wallet on mobile and desktop where both are available
- Generate a new wallet and document the seed phrase backup flow in full
- Test all available authentication methods
- Add accounts for Bitcoin, Ethereum, Solana, and at least two L2 networks
- Send small test transactions across chains
- Import a custom token using a contract address
- Attempt to receive and display an NFT
- Connect to a DeFi protocol via WalletConnect or the built-in browser
- Test native swap and staking features where available
- Test cross-chain bridging where supported
- Attempt seed phrase recovery on a second device
- Submit a support ticket and measure response time
- Review documentation and community channels
We test with small amounts of real funds. We do not review wallets that require KYC or custodial sign-up. We do not modify scores based on commercial relationships.
Editorial independence
Whitewallet Research publishes reviews and ratings independently. Scores are not adjusted based on partnerships, advertising relationships, or the affiliation between Whitewallet Research and the Whitewallet product. Where Whitewallet itself is reviewed, the same criteria and scoring process apply.
Ratings are updated when wallets make significant changes: security incidents, major network additions, or substantial UX overhauls. Routine updates are reviewed quarterly.
Questions or disagreements with a score can be sent to [email protected].
Last updated: April 2026