Unix Timestamp Converter

Convert Unix timestamps (epoch time) to human-readable dates and back — paste any 10-digit (seconds) or 13-digit (milliseconds) timestamp, or an ISO 8601 / RFC 2822 date string. Every format is shown at once: UTC, local time, ISO 8601, relative time, and more. A live clock reference is included. Everything runs in your browser — nothing is sent to any server.

Accepts: Unix seconds, Unix milliseconds, ISO 8601, RFC 2822, or any date string parseable by JavaScript.

Frequently Asked Questions

What is a Unix timestamp and why does it start at January 1, 1970?

A Unix timestamp (also called epoch time or POSIX time) is the number of seconds elapsed since 00:00:00 UTC on January 1, 1970 — the Unix epoch. That date was chosen pragmatically by early Unix developers in the early 1970s as a convenient, recent starting point; it has no special mathematical significance.

Today's timestamp is roughly 1.7 billion seconds. Negative timestamps represent dates before 1970 and are valid on most systems.
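As a quick sanity check, a minimal Python sketch (not part of this tool's code) confirming that timestamp 0 is the epoch itself and that negative timestamps land before 1970:

```python
from datetime import datetime, timezone

# Timestamp 0 is the Unix epoch itself.
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00

# A negative timestamp is a date before 1970.
before = datetime.fromtimestamp(-86400, tz=timezone.utc)
print(before.isoformat())  # 1969-12-31T00:00:00+00:00
```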

Why do some timestamps have 10 digits and others have 13?

10-digit timestamps count elapsed seconds (e.g. 1700000000). 13-digit timestamps count elapsed milliseconds — exactly 1000× larger.

JavaScript's Date.now() and new Date().getTime() return milliseconds. Most Unix/Linux APIs (time(), shell date +%s) return seconds. Python's time.time() returns seconds as a float.

Mixing them up is one of the most common timestamp bugs: feeding a millisecond timestamp to a seconds-based parser produces a date around the year 56000.
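One defensive pattern is to guess the unit from the magnitude before parsing. A Python sketch (the `1e11` threshold is an illustrative choice, not a standard):

```python
from datetime import datetime, timezone

def parse_epoch(ts: float) -> datetime:
    """Heuristic: treat implausibly large values as milliseconds.

    1e11 seconds is around the year 5138, so any realistic 13-digit
    millisecond timestamp clears the threshold while any realistic
    10-digit second timestamp stays well below it.
    """
    if abs(ts) > 1e11:
        ts /= 1000.0  # convert milliseconds to seconds
    return datetime.fromtimestamp(ts, tz=timezone.utc)

print(parse_epoch(1700000000).isoformat())     # 2023-11-14T22:13:20+00:00
print(parse_epoch(1700000000000).isoformat())  # same instant, from ms
```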

When will Unix timestamps overflow? Is the year 2038 problem real?

Yes. A 32-bit signed integer maxes out at 2,147,483,647, which corresponds to 03:14:07 UTC on January 19, 2038. One second later, a 32-bit counter wraps to a large negative number and dates jump back to 1901.

Systems using 32-bit time_t are genuinely affected: older embedded systems, some databases, and 32-bit Linux kernels. Modern 64-bit Linux, macOS, and Windows are safe until the year 292,277,026,596.

Legacy IoT devices and databases with 32-bit TIMESTAMP columns (older MySQL configs) remain at risk.
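The rollover can be simulated directly. A Python sketch that models a 32-bit signed time_t using `struct` (illustrative only; real systems overflow in C, not Python):

```python
import struct
from datetime import datetime, timezone

def as_int32(n: int) -> int:
    """Simulate storing a timestamp in a 32-bit signed integer."""
    return struct.unpack("<i", struct.pack("<I", n & 0xFFFFFFFF))[0]

last_ok = 2_147_483_647  # 2038-01-19 03:14:07 UTC
print(datetime.fromtimestamp(last_ok, tz=timezone.utc))

wrapped = as_int32(last_ok + 1)  # one second later the counter wraps
print(wrapped)                   # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # back in 1901
```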

What is the difference between UTC and Unix time? Do they handle leap seconds the same way?

No — they differ in how they handle leap seconds. UTC inserts extra seconds periodically to stay aligned with Earth's rotation; there have been 27 leap seconds since 1972. Unix time ignores leap seconds entirely and assumes every day has exactly 86,400 seconds.

In practice, many systems never observe a leap second directly: large NTP providers (Google and Amazon, among others) 'smear' the leap second, spreading the extra second over a window of many hours so clocks stay monotonic. For the vast majority of applications the cumulative 27-second difference is irrelevant.
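The 86,400-second assumption is easy to demonstrate in Python: the Unix-time interval covering December 31, 2016 comes out to exactly 86,400 seconds, even though that day actually contained a leap second (23:59:60 UTC):

```python
from datetime import datetime, timezone

# Unix time pretends every day has exactly 86,400 seconds, even
# Dec 31, 2016, which really had a leap second inserted at its end.
end_2016 = datetime(2016, 12, 31, tzinfo=timezone.utc).timestamp()
start_2017 = datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp()
print(int(start_2017 - end_2016))  # 86400 — the leap second is invisible
```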

Are 'Unix timestamp', 'epoch time', 'POSIX time', and 'Unix time' all the same thing?

Yes — all four refer to the same concept: seconds elapsed since 00:00:00 UTC on January 1, 1970. 'Unix timestamp' and 'Unix time' are the most common terms. 'Epoch time' is informal but widely understood. 'POSIX time' is the formal IEEE Std 1003.1 name.

The one trap: some databases and languages use 'timestamp' loosely to mean any datetime value, not necessarily Unix-based. Always verify the units (seconds vs milliseconds) and the reference epoch when integrating between systems.

How do I convert a Unix timestamp to a human-readable date in JavaScript, Python, and SQL?

JavaScript: new Date(timestamp * 1000).toISOString() — multiply by 1000 to convert seconds to milliseconds.

Python: datetime.fromtimestamp(ts, tz=timezone.utc).isoformat(). Avoid datetime.utcfromtimestamp(ts): it returns a naive datetime and is deprecated since Python 3.12.

SQL — PostgreSQL: to_timestamp(1700000000) returns a timestamptz. MySQL: FROM_UNIXTIME(1700000000). SQLite: datetime(1700000000, 'unixepoch') — SQLite has no native timestamp type.

Always specify UTC explicitly to avoid timezone-dependent results on the server.
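Putting the Python path together as one runnable sketch (the SQL snippets above run in their respective databases):

```python
from datetime import datetime, timezone

ts = 1700000000

# Timestamp -> human-readable UTC date, with the timezone made explicit
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2023-11-14T22:13:20+00:00

# And back: the round trip is lossless for whole seconds
assert int(dt.timestamp()) == ts
```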