Every password has a measurable strength, and that measurement has a name: entropy. Borrowed from information theory, entropy quantifies how unpredictable a password is – how many guesses an attacker would need, on average, to find it. It is the single most reliable way to compare passwords, passphrases, and generation methods on equal terms. If you are building a foundation for your password security, understanding entropy is what separates informed decisions from guesswork.
The concept is straightforward in principle but widely misunderstood in practice. People assume that a password with uppercase letters, numbers, and symbols must be strong. They assume that a long passphrase made of common words must be weak. Both assumptions are often wrong, and entropy is the tool that explains why. This article walks through the mathematics, provides concrete calculation examples, and addresses the misconceptions that lead people to overestimate or underestimate the strength of their credentials.
What Entropy Means in Information Theory
Entropy was formalized by Claude Shannon in 1948 as a measure of information content and uncertainty. In the context of passwords, entropy answers a specific question: given the method used to generate a password, how many binary (yes/no) decisions would an attacker need to make to identify the correct password?
The result is expressed in bits. Each bit of entropy doubles the number of possible passwords. A password with 1 bit of entropy has 2 possibilities. A password with 10 bits has 1,024 possibilities. A password with 40 bits has roughly 1.1 trillion possibilities. The scale is exponential, which is precisely what makes entropy such a powerful concept – small increases in bits translate to massive increases in the work an attacker must perform.
This exponential scaling is also why entropy is measured logarithmically. The formula is:
H = log2(N^L) = L * log2(N)
Where:
- H is the entropy in bits
- N is the size of the character set (the number of possible symbols per position)
- L is the length of the password (the number of positions)
For a passphrase composed of randomly selected words, the formula adapts slightly:
H = log2(W^K) = K * log2(W)
Where:
- W is the size of the wordlist
- K is the number of words in the passphrase
The key assumption in both formulas is that each character (or word) is selected independently and uniformly at random. This assumption is critical, and it is where most real-world passwords diverge from their theoretical entropy – a point we will return to shortly.
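Both formulas are easy to check by hand, but a few lines of code make them reusable. Here is a minimal Python sketch of the two calculations (the function names are illustrative), valid only under the same uniform-randomness assumption:

```python
import math

def charset_entropy_bits(length: int, charset_size: int) -> float:
    """Entropy of a password of `length` characters drawn uniformly
    at random from a set of `charset_size` symbols: L * log2(N)."""
    return length * math.log2(charset_size)

def passphrase_entropy_bits(word_count: int, wordlist_size: int) -> float:
    """Entropy of a passphrase of `word_count` words drawn uniformly
    at random from a list of `wordlist_size` words: K * log2(W)."""
    return word_count * math.log2(wordlist_size)

print(charset_entropy_bits(12, 95))      # ~78.8 bits
print(passphrase_entropy_bits(6, 7776))  # ~77.5 bits
```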
Entropy for Different Character Sets
The character set you draw from determines how much entropy each character contributes. Here is a reference table showing the entropy per character for common character sets, along with the total entropy for passwords of various lengths.
Entropy Per Character
| Character Set | Size (N) | Entropy per Character (bits) |
|---|---|---|
| Digits only (0-9) | 10 | 3.32 |
| Lowercase letters (a-z) | 26 | 4.70 |
| Lowercase + digits | 36 | 5.17 |
| Upper + lowercase letters | 52 | 5.70 |
| Upper + lower + digits | 62 | 5.95 |
| All printable ASCII | 95 | 6.57 |
Total Entropy by Length and Character Set
| Length | Digits Only | Lowercase | Upper + Lower + Digits | Full ASCII (95) |
|---|---|---|---|---|
| 6 | 19.9 | 28.2 | 35.7 | 39.4 |
| 8 | 26.6 | 37.6 | 47.6 | 52.6 |
| 10 | 33.2 | 47.0 | 59.5 | 65.7 |
| 12 | 39.9 | 56.4 | 71.5 | 78.8 |
| 14 | 46.5 | 65.8 | 83.4 | 92.0 |
| 16 | 53.2 | 75.2 | 95.3 | 105.1 |
| 20 | 66.4 | 94.0 | 119.1 | 131.4 |
These numbers make the tradeoff between character set size and password length concrete. A 16-character lowercase password (75.2 bits) lands in the same ballpark as a 12-character password drawn from the full ASCII set (78.8 bits) – slightly below it, in fact – while being far easier to type. A 10-character digits-only PIN (33.2 bits) is dangerously weak regardless of context.
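If you want to reproduce or extend these tables, each cell is a single multiplication. A minimal Python sketch (the character-set labels are just for display):

```python
import math

charsets = {"Digits": 10, "Lowercase": 26, "Upper+Lower+Digits": 62, "Full ASCII": 95}

for length in (6, 8, 10, 12, 14, 16, 20):
    bits = {name: round(length * math.log2(size), 1) for name, size in charsets.items()}
    print(length, bits)
```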
Entropy for Passphrases
Passphrases derive their strength not from character variety but from the size of the wordlist and the number of words chosen. The most commonly referenced wordlist is the Electronic Frontier Foundation’s (EFF) Diceware list, which contains 7,776 words.
Each word randomly selected from this list contributes log2(7776) = approximately 12.92 bits of entropy. Here is how that scales:
| Words | Entropy (bits) | Approximate Combinations |
|---|---|---|
| 3 | 38.8 | ~4.7 × 10^11 (470 billion) |
| 4 | 51.7 | ~3.7 × 10^15 (3.7 quadrillion) |
| 5 | 64.6 | ~2.8 × 10^19 (28 quintillion) |
| 6 | 77.5 | ~2.2 × 10^23 |
| 7 | 90.5 | ~1.7 × 10^27 |
| 8 | 103.4 | ~1.3 × 10^31 |
A five-word passphrase from the EFF list provides roughly the same entropy as a 10-character random password using the full ASCII character set. The passphrase approach trades character-level complexity for word-level complexity, and for anything you need to memorize – especially a master password – it is the better choice.
Worked Example: Passphrase Entropy
Suppose you use PanicVault’s passphrase generator to create a six-word passphrase from the EFF list:
glacier trumpet canvas marble lantern oxide
The entropy calculation:
H = 6 * log2(7776) = 6 * 12.92 = 77.5 bits
At a guessing rate of 100 billion attempts per second (a realistic rate for offline attacks against weak hashes), exhausting the full keyspace would take:
2^77.5 / (100 * 10^9) / (3600 * 24 * 365) = approximately 68,000 years
On average, an attacker would find the correct passphrase after searching half the keyspace, so the expected time is roughly 34,000 years. That is comfortably beyond any practical threat horizon.
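The same arithmetic generalizes to any entropy level and guessing rate. A small sketch of the calculation above, assuming the stated rate of 100 billion guesses per second:

```python
import math

def years_to_exhaust(entropy_bits: float, guesses_per_second: float) -> float:
    """Time to try every password in a keyspace of 2^entropy_bits at a fixed rate."""
    seconds = 2 ** entropy_bits / guesses_per_second
    return seconds / (3600 * 24 * 365)

full_search = years_to_exhaust(77.5, 100e9)
print(f"{full_search:,.0f} years to exhaust, {full_search / 2:,.0f} years on average")
# ~68,000 years to exhaust, ~34,000 years on average
```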
Worked Example: Random Character Password
Now compare a 12-character random password using the full printable ASCII set:
k7#Qm!xP9&rZ
The entropy calculation:
H = 12 * log2(95) = 12 * 6.57 = 78.8 bits
This is slightly higher than the six-word passphrase (78.8 vs. 77.5 bits), and both are in the range that security professionals consider strong for most purposes. The difference is usability: one is memorable, the other requires a password manager to recall. For a deeper look at how password managers handle this generation process, see our guide on how password generators work.
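If you want to generate such a password yourself, Python's secrets module is a reasonable tool. A minimal sketch – note that string.punctuation plus letters and digits gives 94 symbols, one short of the 95 printable characters used above because it omits the space:

```python
import math
import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation  # 94 symbols
password = "".join(secrets.choice(alphabet) for _ in range(12))
entropy = 12 * math.log2(len(alphabet))  # ~78.7 bits

print(password, round(entropy, 1))
```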
Why Human-Chosen Passwords Have Lower Effective Entropy
The formulas above assume that every character or word is chosen independently and uniformly at random. Human beings do not choose passwords this way. They choose words they can remember, patterns they can type easily, and structures that feel familiar. This predictability reduces the effective entropy – the actual uncertainty an attacker faces – far below the theoretical maximum.
Common Patterns That Reduce Entropy
Research from Carnegie Mellon University, Cambridge University, and multiple industry studies has consistently documented the patterns humans fall into:
- Dictionary words as the base: The majority of human-chosen passwords begin with a common English word. With a working dictionary of 50,000 words, this reduces the starting entropy to about 15.6 bits regardless of how long the word is.
- Predictable capitalization: Users overwhelmingly capitalize only the first letter. An attacker who assumes “first letter capitalized or all lowercase” tests two variants per word, adding roughly 1 bit instead of the theoretical maximum.
- Trailing numbers and symbols: The most common suffix is “1”, followed by “123”, “!”, and the current year. The number of realistic suffixes is in the hundreds, not the millions – perhaps 8-10 bits of entropy.
- Leet-speak substitutions: Replacing “a” with “@”, “e” with “3”, “o” with “0”, and “s” with “$” adds negligible entropy because cracking tools apply these rules by default.
- Keyboard patterns: Sequences like “qwerty”, “asdfgh”, and “zxcvbn” are well-catalogued and tested early in any attack.
An Example of Effective vs. Theoretical Entropy
Consider the password Summer2026!:
- Theoretical calculation: 11 characters from the full ASCII set = 11 * 6.57 = 72.3 bits.
- Effective entropy estimate: Starts with a common capitalized word (~16 bits) + a four-digit year from a small range (~7 bits) + a common trailing symbol (~3 bits) = approximately 26 bits.
That is a staggering gap. The password looks complex and scores well on naive strength meters, but an attacker using rule-based dictionary attacks would crack it in seconds. The science behind password cracking relies precisely on exploiting these patterns.
This is why a strong password guide will always emphasize generation over invention. You cannot out-random a computer, and your brain’s natural tendency toward patterns is exactly what attackers are trained to exploit.
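To make the gap concrete in code, here is a rough sketch of the effective-entropy estimate above. The component sizes are illustrative assumptions (roughly 50,000 common words with two capitalization variants, around 128 plausible years, a handful of common trailing symbols), not measured values:

```python
import math

# Illustrative pattern components and the number of realistic options for each.
components = {
    "common word, first letter optionally capitalized": 50_000 * 2,
    "recent four-digit year": 128,
    "common trailing symbol": 8,
}

effective_bits = sum(math.log2(options) for options in components.values())
theoretical_bits = 11 * math.log2(95)

print(round(effective_bits, 1), "effective vs", round(theoretical_bits, 1), "theoretical")
# ~26.6 effective vs ~72.3 theoretical
```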
Practical Entropy Targets
Not every account needs the same level of protection. Here are entropy targets based on threat models and account value, informed by NIST guidelines and industry consensus:
| Use Case | Minimum Entropy | Recommended Approach |
|---|---|---|
| Low-sensitivity accounts (forums, newsletters) | 45-50 bits | 12+ character random password |
| Standard accounts (shopping, social media) | 55-65 bits | 16+ character random password |
| High-value accounts (email, banking) | 65-80 bits | 20+ character random password or 5-word passphrase |
| Master passwords / encryption keys | 80+ bits | 6+ word passphrase or 20+ character random password |
| Long-term archival encryption | 128+ bits | 20+ character password from full ASCII set |
The 80-bit threshold for master passwords deserves special attention. Your master passphrase protects your entire password vault. If an attacker obtains your encrypted vault file, they can attempt offline cracking with no rate limiting and no lockout. The encryption used by KeePass-format vaults includes key derivation functions (like Argon2) that deliberately slow down each guess, but the master passphrase still needs sufficient entropy to withstand sustained attack over years.
For most people using a password manager, the practical approach is simple: let the generator create 20-character random passwords for individual accounts (yielding 100+ bits with full ASCII), and memorize a single 6-word passphrase as your master password (yielding ~78 bits). This combination covers every realistic threat.
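One way to use these targets in practice is to check a generation scheme against them before adopting it. A small sketch, with the minimums taken from the lower bounds of the table above:

```python
import math

TARGETS_BITS = {  # lower bounds from the table above
    "low-sensitivity": 45,
    "standard": 55,
    "high-value": 65,
    "master password": 80,
    "archival": 128,
}

def scheme_bits(positions: int, pool_size: int) -> float:
    """Entropy of `positions` independent uniform draws from `pool_size` options
    (characters from a character set, or words from a wordlist)."""
    return positions * math.log2(pool_size)

print(scheme_bits(20, 95) >= TARGETS_BITS["archival"])           # True  (~131.4 bits)
print(scheme_bits(6, 7776) >= TARGETS_BITS["master password"])   # False (~77.5 bits; see the KDF caveat above)
```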
How Password Strength Meters Work
Password strength meters are the colored bars and labels (weak, medium, strong) you see when creating accounts on websites. Understanding how they work – and where they fail – helps you evaluate their feedback critically.
Simple Meters: Character Class Counting
The most basic strength meters award points based on:
- Password length (more characters = more points)
- Presence of uppercase letters
- Presence of lowercase letters
- Presence of digits
- Presence of special characters
These meters essentially estimate theoretical entropy using the character set approach. A password like Abcdef1! would score well because it touches all four character classes, even though it follows an extremely predictable pattern and has low effective entropy.
Advanced Meters: Pattern Recognition
More sophisticated meters, like zxcvbn (developed by Dropbox and open-sourced in 2012), take a fundamentally different approach. Instead of counting character classes, zxcvbn:
- Decomposes the password into recognizable patterns (dictionary words, keyboard sequences, dates, repeated characters, l33t substitutions)
- Estimates the entropy of each detected pattern
- Finds the decomposition that yields the lowest total entropy (the “easiest” way to crack the password)
- Converts that entropy estimate into a score and an estimated crack time
This approach produces far more accurate results. The password Summer2026! would score poorly on zxcvbn because it detects “summer” as a dictionary word, “2026” as a date, and “!” as a common append – yielding a low effective entropy estimate. Meanwhile, k7#Qm!xP9&rZ would score highly because no patterns are detected.
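If you want to experiment with this yourself, the Python port of zxcvbn (installable with `pip install zxcvbn`) exposes the same estimator. A minimal usage sketch – the exact scores and guess counts will depend on the library version:

```python
# pip install zxcvbn
from zxcvbn import zxcvbn

for candidate in ("Summer2026!", "k7#Qm!xP9&rZ"):
    result = zxcvbn(candidate)
    # score runs from 0 (weakest) to 4 (strongest); guesses is the estimated
    # number of attempts needed under zxcvbn's pattern-based model.
    print(candidate, result["score"], result["guesses"])
```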
Limitations of All Meters
Even the best meters have blind spots:
- They cannot detect personal context. A meter does not know that “Whiskers2019” is your cat’s name and the year you adopted her.
- They measure the password in isolation. No meter checks whether your password has appeared in a breach database (though some services layer this check on top).
- They estimate attacker capability. Crack time estimates depend on assumptions about hashing algorithms and hardware speed that may not match your specific risk.
- They do not account for the generation method. A truly random 12-character lowercase password is stronger than what a character-class meter suggests, because the meter does not know it was randomly generated – it just sees “no uppercase, no digits.”
The takeaway: treat strength meters as a rough sanity check, not an authoritative verdict. The true measure of a password’s strength is the entropy of the method used to generate it, not the appearance of the result.
Common Misconceptions About Entropy
“Adding a symbol makes my password twice as strong”
Adding a single random symbol to an 8-character lowercase password raises the entropy from 37.6 bits to approximately 43 bits – the original 37.6 bits plus roughly 5 bits for one symbol drawn from about 33 special characters. That is a meaningful increase, but "twice as strong" is the wrong way to think about it: doubling the number of possible passwords corresponds to adding just 1 bit of entropy, and doubling the entropy itself would require another eight random characters, not one extra symbol.
More importantly, if you always place the symbol at the end (as most people do), the effective entropy gain is much smaller than the theoretical gain. The role of complexity rules in password policies explores this gap between theory and human behavior.
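A quick calculation shows the difference between the theoretical gain and the predictable-placement case. The figure of eight popular symbols tried at the end is an assumption for illustration, not a measured value:

```python
import math

base = 8 * math.log2(26)               # 8 random lowercase characters: ~37.6 bits
random_symbol = base + math.log2(33)   # one random symbol from ~33 specials appended: ~42.6 bits
predictable = base + math.log2(8)      # one of ~8 commonly used symbols, always at the end: ~40.6 bits

print(round(random_symbol, 1), round(predictable, 1))
```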
“My password is 20 characters long, so it must be strong”
Length contributes to entropy only when each character is chosen randomly. The password aaaaaaaaaaaaaaaaaaaa is 20 characters long with effectively zero bits of entropy. The password correcthorsebatterystaple is 25 characters but has only about 44 bits of entropy if drawn from the original XKCD four-word method (and much less if the attacker knows you used that specific comic as your inspiration).
Length is the dominant factor in password strength, but only when paired with randomness. Without randomness, length is meaningless.
“A password that looks random must be random”
The string qWeRtY!@# looks random to a human eye, but it is a keyboard pattern with predictable case alternation and a shift-key sequence. Cracking tools identify and test these patterns early. Appearance is not entropy.
“Entropy is all that matters”
Entropy measures resistance to guessing attacks. It does not protect against phishing, keyloggers, credential stuffing (where the exact password is already known from a breach), or session hijacking. A 128-bit entropy password is useless if you type it into a fake login page. Entropy is a necessary condition for password security, not a sufficient one.
“Passphrases are weaker because they use common words”
This is perhaps the most persistent misconception. A passphrase’s strength does not come from the obscurity of its words – it comes from the number of possible word combinations. A five-word passphrase from a 7,776-word list has 2^64.6 possible combinations regardless of whether those words are “the”, “and”, “cat” or “glacier”, “trumpet”, “oxide.” What matters is that the words were selected randomly, not that they are unusual.
The weakness would arise only if someone chose their own words deliberately – picking a favorite quote or a sentence that makes narrative sense. In that case, the effective wordlist shrinks to a tiny subset and the entropy collapses. This is why passphrase generation must be random, not creative.
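This is exactly what a passphrase generator automates. A minimal sketch using Python's secrets module; the wordlist filename is illustrative and assumes a local copy of the EFF long wordlist (7,776 entries, dice numbers and words separated by whitespace):

```python
import secrets

# Load the EFF long wordlist; each line looks like "11111  abacus".
with open("eff_large_wordlist.txt") as f:
    words = [line.split()[-1] for line in f if line.strip()]

assert len(words) == 7776  # sanity check: ~12.9 bits per word

passphrase = " ".join(secrets.choice(words) for _ in range(6))
print(passphrase)  # six random words, ~77.5 bits
```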
Calculating Entropy: A Quick Reference
Here is a summary of entropy formulas and examples you can use to evaluate any password generation scheme:
Random Character Passwords
Entropy = Length * log2(Character Set Size)
Example: 16 characters from 62-char set (upper + lower + digits)
H = 16 * log2(62) = 16 * 5.95 = 95.3 bits
Random Word Passphrases
Entropy = Word Count * log2(Wordlist Size)
Example: 5 words from 7,776-word EFF list
H = 5 * log2(7776) = 5 * 12.92 = 64.6 bits
PINs
Entropy = Length * log2(10)
Example: 6-digit PIN
H = 6 * log2(10) = 6 * 3.32 = 19.9 bits
A 19.9-bit PIN is trivially weak against offline attacks but acceptable for phone lock screens where rate limiting (lockout after 10 failed attempts) compensates for the low entropy. Context always matters.
Combining Independent Elements
If you combine independently random elements – for example, a random word plus a random two-digit number – the entropies add:
H_total = H_word + H_number
H_total = log2(7776) + log2(100) = 12.92 + 6.64 = 19.56 bits
This only works when the elements are truly independent and each one is selected at random. Even then, the total here is only 19.56 bits: attackers know that people append short numbers to words, and word-plus-digits combinations are standard entries in attack dictionaries, so this particular construction is far too weak to stand on its own.
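The addition rule generalizes to any number of independent elements. A compact sketch:

```python
import math

def combined_bits(*pool_sizes: int) -> float:
    """Total entropy of independently random elements, one uniform draw
    from each pool: the individual entropies simply add."""
    return sum(math.log2(size) for size in pool_sizes)

print(round(combined_bits(7776, 100), 1))    # word + two-digit number: ~19.6 bits
print(round(combined_bits(*[7776] * 6), 1))  # six random words: ~77.5 bits
```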
Putting Entropy Into Practice
Understanding entropy transforms password security from a set of arbitrary rules into a coherent framework. Here is how to apply it:
Use a password manager to generate all your passwords. A generator producing 20-character random passwords from the full ASCII set gives you 131+ bits of entropy per account – far beyond any realistic threat.
Build your master passphrase from 6 randomly generated words. At 77.5 bits from the EFF list, this sits just below the 80-bit target on paper, but the key derivation functions that add computational cost to each guess more than make up the difference.
Ignore complexity rules when evaluating strength. A 16-character lowercase random password (75.2 bits) is stronger than most 10-character passwords that satisfy every complexity requirement (65.7 bits at best, far less in practice).
Treat strength meters as advisory. If a meter says your randomly generated password is “medium,” the meter is wrong – it is evaluating appearance, not the generation method.
Never estimate entropy from the output. Always estimate it from the generation process. A password that looks random may not be. A passphrase that looks simple may be highly secure. The method is what matters.
Entropy is not an abstract concept for cryptographers. It is a practical tool for anyone who wants to make informed decisions about their password security. Measure your passwords by the process that creates them, aim for the targets outlined above, and let the exponential math work in your favor.
