Binary to Decimal Converter
Convert between binary, decimal, octal and hexadecimal number systems with live conversion and input validation
Only 0-1 allowed
Only 0-7 allowed
Standard decimal number
Only 0-9, A-F allowed
Change any field to convert between bases. Each conversion updates all other bases automatically.
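The converter's core idea can be sketched in a few lines: parse whichever field changed, then re-render the value in every base. This is a minimal illustration, not the tool's actual implementation; the function name `to_all_bases` is hypothetical.

```python
def to_all_bases(value: int) -> dict:
    """Render one integer in binary, octal, decimal and hex (hypothetical sketch)."""
    return {
        "binary": bin(value)[2:],          # strip the '0b' prefix
        "octal": oct(value)[2:],           # strip the '0o' prefix
        "decimal": str(value),
        "hex": hex(value)[2:].upper(),     # strip '0x', use uppercase A-F
    }

print(to_all_bases(255))
# {'binary': '11111111', 'octal': '377', 'decimal': '255', 'hex': 'FF'}
```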
How to Convert Binary to Decimal
Binary (base 2) uses only the digits 0 and 1. Each digit's value is a power of 2 determined by its position (rightmost = 2^0 = 1). So binary '1011' = (1×8) + (0×4) + (1×2) + (1×1) = 11 decimal. Each binary digit (bit) doubles in value as you move one position to the left.
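The positional method above can be written as a short function. This is a sketch for illustration; in practice Python's built-in `int(s, 2)` does the same conversion.

```python
def binary_to_decimal(bits: str) -> int:
    """Convert a binary string to decimal by accumulating powers of 2."""
    value = 0
    for bit in bits:
        if bit not in "01":
            raise ValueError(f"invalid binary digit: {bit!r}")
        value = value * 2 + int(bit)  # shift existing value left, add new bit
    return value

print(binary_to_decimal("1011"))  # 11, matching the worked example above
print(int("1011", 2))             # built-in equivalent: 11
```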
Binary is fundamental to computing: all data is binary at the lowest level. CPU operations, network packets, and file storage all happen in binary at the hardware level. Programmers occasionally convert between binary and decimal when working with bit manipulation, error codes, or low-level protocols.
Common Binary to Decimal Conversions
| Binary | Decimal |
|---|---|
| 0 | 0 |
| 1 | 1 |
| 10 | 2 |
| 100 | 4 |
| 1010 | 10 |
| 11111111 | 255 |
| 10000000 | 128 |
| 1111111111111111 | 65535 |
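The table above can be verified with Python's built-in base-2 parsing:

```python
# Check each binary/decimal pair from the table
pairs = ["0", "1", "10", "100", "1010", "11111111", "10000000",
         "1111111111111111"]
for b in pairs:
    print(f"{b:>16} -> {int(b, 2)}")
```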
Frequently Asked Questions
What's the largest 8-bit binary number?
11111111 = 255 decimal. This is one byte's maximum value. The next number (256) requires 9 bits: 100000000. The byte structure (8 bits = 0-255) is the foundation of all character encodings, RGB colour values, and many protocols.
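The n-bit maximum follows a simple pattern: `(1 << n) - 1` is n ones in binary. A quick check of the values mentioned above:

```python
# Maximum value representable in n bits is (1 << n) - 1: n ones in binary
for n in (8, 9, 16):
    print(f"{n:>2} bits: max = {(1 << n) - 1} = {bin((1 << n) - 1)}")

print(bin(256))  # 256 overflows a byte and needs 9 bits: 0b100000000
```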
Why do computers use binary?
Hardware reliability. A circuit can reliably be 'on' or 'off' (1 or 0), but it is much harder to distinguish ten voltage levels for decimal. Binary makes hardware simpler and faster. The translation from binary to decimal/hex/text happens in software for human readability.