Number Systems for Developers: Binary, Octal, Decimal and Hex Explained
Decimal is the number system you grew up with. Binary, octal, and hexadecimal are the number systems your computer thinks in. Understanding them makes you better at debugging, bit manipulation, memory analysis, and working with low-level APIs.
Why Multiple Number Systems?
Computers store and process everything as binary (1s and 0s). But binary is verbose for humans — 255 in binary is 11111111, requiring 8 characters for a number you could write as two hex digits (FF) or three decimal digits (255).
Different bases serve different purposes:
- Binary (base 2) — the actual hardware representation; essential for bit manipulation and understanding storage
- Octal (base 8) — 3 bits per digit; used in Linux file permissions, some C constants
- Decimal (base 10) — human-readable counting; what most application-level code outputs
- Hexadecimal (base 16) — 4 bits per digit; the standard for memory addresses, color codes, cryptographic hashes, and binary data
How Base Conversion Works
Every number is a sum of powers of its base. In decimal (base 10): 255 = (2 × 10²) + (5 × 10¹) + (5 × 10⁰) = 200 + 50 + 5.
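This positional rule works the same in every base, and can be checked with a short loop. A sketch (the `expand` helper is illustrative, not a standard function):

```javascript
// Expand a digit string in any base (2–36) into its positional sum.
// Illustrative helper: walks the digits left to right, multiplying the
// running total by the base ("shift one place") before adding each digit.
function expand(digits, base) {
  const DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz";
  let total = 0;
  for (const ch of digits.toLowerCase()) {
    const value = DIGITS.indexOf(ch); // numeric value of this digit
    total = total * base + value;     // shift left one place, add digit
  }
  return total;
}

console.log(expand("255", 10));      // 255
console.log(expand("11111111", 2));  // 255
console.log(expand("ff", 16));       // 255
```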
In binary (base 2), 11111111 is:
1×2⁷ + 1×2⁶ + 1×2⁵ + 1×2⁴ + 1×2³ + 1×2² + 1×2¹ + 1×2⁰
= 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1
= 255

In hex (base 16), digits go 0–9 then A=10, B=11, C=12, D=13, E=14, F=15. FF in hex:
F×16¹ + F×16⁰
= 15×16 + 15×1
= 240 + 15
= 255

The Hex ↔ Binary Shortcut
The most useful trick: each hex digit maps directly to exactly 4 binary bits (a nibble). You can convert between hex and binary without going through decimal:
Hex: F F 5 7 3 3
Binary: 1111 1111 0101 0111 0011 0011
So #FF5733 = 11111111 01010111 00110011 in binary

This is why hex is the standard for representing byte data — two hex digits = one byte.
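The nibble mapping can be mechanized digit by digit, with no decimal round-trip for the number as a whole. A sketch (the helper names are illustrative, and `binaryToHex` assumes the bit string's length is a multiple of 4):

```javascript
// Each hex digit maps to exactly 4 bits, so conversion is per-digit.
function hexToBinary(hex) {
  return [...hex]
    .map(d => parseInt(d, 16).toString(2).padStart(4, "0")) // one nibble per digit
    .join("");
}

function binaryToHex(bits) {
  // Group the bits into nibbles of 4, then map each nibble to a hex digit.
  // Assumes bits.length is a multiple of 4.
  return bits.match(/.{4}/g)
    .map(g => parseInt(g, 2).toString(16))
    .join("");
}

console.log(hexToBinary("ff5733"));                 // "111111110101011100110011"
console.log(binaryToHex("111111110101011100110011")); // "ff5733"
```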
Where Each Base Appears in Practice
Binary
// Bit flags in code (JavaScript)
const PERMISSIONS = {
  READ: 0b001,    // 1
  WRITE: 0b010,   // 2
  EXECUTE: 0b100, // 4
};

// Check if a user has READ permission
const userPerms = 0b101; // read + execute
const canRead = (userPerms & PERMISSIONS.READ) !== 0; // true

// Add WRITE permission
const newPerms = userPerms | PERMISSIONS.WRITE; // 0b111 = 7

Octal
# Linux file permissions — chmod 755
# 7 = 111 in binary → rwx
# 5 = 101 in binary → r-x
# In C/C++ — octal literals start with 0
int permissions = 0755; // NOT decimal 755!
# In Python — octal literals use 0o prefix
import os
os.chmod("deploy.sh", 0o755)

Hexadecimal
// Color values
const red = 0xFF0000; // R=255, G=0, B=0
const white = 0xFFFFFF; // R=255, G=255, B=255
// Extract RGB channels from a packed integer
const color = 0xFF5733;
const r = (color >> 16) & 0xFF; // 255
const g = (color >> 8) & 0xFF; // 87
const b = color & 0xFF; // 51
// Memory addresses in debuggers
// 0x7fff5fbff8a0 ← typical stack address
// File signatures (magic bytes) to identify file types
// JPEG: FF D8 FF
// PNG: 89 50 4E 47 (0x89 followed by "PNG" in ASCII)
// PDF: 25 50 44 46 (= %PDF in ASCII)

Number Representation in JavaScript
// Literals
const binary = 0b11111111; // 255
const octal = 0o377; // 255
const decimal = 255;
const hex = 0xFF; // 255
// All the same value
console.log(binary === octal && octal === hex); // true
// Converting to different base strings
(255).toString(2); // "11111111"
(255).toString(8); // "377"
(255).toString(16); // "ff"
// Parsing from string
parseInt("11111111", 2); // 255
parseInt("377", 8); // 255
parseInt("FF", 16); // 255

Use the Number Base Converter to convert between all four bases with a visual breakdown of the binary representation. For file permission octals specifically, see the Chmod Calculator.
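One gotcha with toString: it drops leading zeros, which hides byte boundaries. padStart restores a fixed width — a small sketch:

```javascript
// toString(radix) never emits leading zeros, so a byte can look shorter
// than 8 bits. padStart pads the string back to a fixed width.
const byte = 5;
console.log(byte.toString(2));                   // "101"
console.log(byte.toString(2).padStart(8, "0"));  // "00000101"

// Same idea for a 24-bit color, zero-padded to 6 hex digits:
const color = 0x00FF33;
console.log(color.toString(16).padStart(6, "0")); // "00ff33"
```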
Frequently Asked Questions
What is Number.MAX_SAFE_INTEGER and why does it matter?
JavaScript uses IEEE 754 double-precision floating point for all numbers, which can represent integers exactly only up to 2^53 - 1 = 9,007,199,254,740,991. Beyond this, integer arithmetic becomes imprecise — (2^53 + 1) === 2^53 evaluates to true. This matters when working with large IDs from databases (Twitter snowflake IDs are 64-bit), timestamps in nanoseconds, or cryptographic values. Use BigInt for exact arithmetic with large integers: 2n ** 64n.
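The precision cliff described above is easy to observe directly:

```javascript
// Past 2^53 - 1, Number arithmetic silently loses precision.
console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991
console.log(2 ** 53 + 1 === 2 ** 53);   // true — the +1 was rounded away

// BigInt keeps integer arithmetic exact at any size.
console.log(2n ** 53n + 1n === 2n ** 53n); // false — BigInt stays exact
console.log((2n ** 64n).toString());       // "18446744073709551616"
```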
What is two's complement and how does it represent negative numbers?
Two's complement is the standard way CPUs represent signed integers. To negate a number: flip all bits, then add 1. Example for -5 in 8-bit: 5 = 00000101, flip = 11111010, add 1 = 11111011. The leftmost bit acts as a sign bit (1 = negative). The range for an n-bit two's complement integer is -2^(n-1) to 2^(n-1) - 1. For 8 bits: -128 to 127. For 32 bits: -2,147,483,648 to 2,147,483,647. The advantage: addition and subtraction work identically for signed and unsigned numbers, simplifying CPU hardware.
When would I use bitwise operations in real code?
More often than you'd expect. Feature flags: store N boolean flags in a single integer, check with (flags & FLAG_ADMIN) !== 0. Permission systems: Linux uses 9 bits for rwxrwxrwx. Network masks: 192.168.1.0/24 is stored as a 32-bit integer with the mask applied via AND. Parity checks: (n & 1) === 0 tests evenness without division, and is at least as fast as n % 2 === 0. Integer division/multiplication by powers of 2 (for non-negative integers): x >> 1 is x/2 rounded down, x << 1 is x*2. Bit-level operations on UUIDs and hashes. Color channel extraction: red = (hex >> 16) & 0xFF.