
How Key-Value Databases (e.g., Redis, Memcached) Work


Your frontend app feels fast. Users click, data appears, pages load instantly. Behind that experience, there’s often a key-value database doing the heavy lifting — catching requests before they ever reach your main database. Understanding how these systems work helps you make smarter decisions about caching, sessions, and backend architecture.

Key Takeaways

  • Key-value databases store and retrieve data using a unique key paired with a value, trading query flexibility for exceptional speed.
  • In-memory storage (RAM) enables constant-time lookups via hash tables, often 100–1000x faster than disk-based reads.
  • Redis offers rich data structures, optional persistence, and built-in clustering, while Memcached excels as a simple, multi-threaded, purpose-built cache.
  • Common use cases include session management, API response caching, and rate limiting — all scenarios where low latency matters most.
  • Key-value stores complement relational databases rather than replace them. Use them as a performance layer, not a primary data store.

What Is a Key-Value Database?

A key-value database is a type of NoSQL data store that saves and retrieves data using a simple two-part structure: a unique key and its associated value.

Think of it like a JavaScript object or a hash map:

"session:user:4821" → { userId: 4821, role: "admin", expires: 1720000000 }
"product:sku:9001"  → { name: "Wireless Keyboard", price: 49.99 }

You look up data by key. That’s it. There’s no query language, no JOINs, no schema. This constraint is exactly what makes key-value stores so fast.

How In-Memory Storage Makes Lookups Fast

Most key-value databases — including Redis and Memcached — store data in RAM, not on disk. Disk reads are measured in milliseconds. Memory reads happen in microseconds, often 100–1000x faster.

Internally, these systems use a hash table: the key is hashed to a memory address, and the value is retrieved directly. There’s no scanning, no indexing, no query planning. The lookup time is effectively O(1) — constant regardless of dataset size.
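Because a Python dict is itself a hash table, it makes a convenient illustration of the lookup path: hash the key, jump to the slot, read the value. No scanning, no query planning.

```python
# A dict lookup mirrors what a key-value store does internally:
# one hash computation, one direct memory read.
cache = {
    "session:user:4821": {"userId": 4821, "role": "admin"},
    "product:sku:9001": {"name": "Wireless Keyboard", "price": 49.99},
}

# O(1) regardless of how many keys the table holds.
value = cache["product:sku:9001"]
print(value["name"])  # → Wireless Keyboard
```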

This is why key-value stores are the default choice for caching layers, session storage, and any backend service where response time directly affects user experience.

Core Operations: SET, GET, and Expiration

The fundamental operations are minimal by design:

  • SET key value — store a value
  • GET key — retrieve a value
  • DEL key — remove a value
  • EXPIRE key seconds — auto-delete after a time-to-live (TTL)
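The four operations above can be sketched in a few lines of Python. `TinyKV` is a toy stand-in for a real store, not production code; its expiration is lazy (checked on read), similar in spirit to how Redis expires keys on access.

```python
import time

class TinyKV:
    """Toy key-value store illustrating SET, GET, DEL, and EXPIRE."""

    def __init__(self):
        self._data = {}      # key -> value
        self._expires = {}   # key -> absolute expiry timestamp

    def set(self, key, value):
        self._data[key] = value
        self._expires.pop(key, None)   # SET clears any previous TTL

    def get(self, key):
        exp = self._expires.get(key)
        if exp is not None and time.monotonic() >= exp:
            self.delete(key)           # lazy expiration on read
            return None
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)
        self._expires.pop(key, None)

    def expire(self, key, seconds):
        if key in self._data:
            self._expires[key] = time.monotonic() + seconds

store = TinyKV()
store.set("session:user:4821", {"role": "admin"})
store.expire("session:user:4821", 60)
print(store.get("session:user:4821"))  # → {'role': 'admin'}
```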

TTL is especially useful for caching. You store an API response or a rendered HTML fragment with a 60-second expiry. Your app reads from the cache first. If the key is missing or expired, it falls back to the database and repopulates the cache. This pattern, known as cache-aside, is one of the most common in web architecture.
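The cache-aside flow can be sketched like this. `fetch_product_from_db` is a hypothetical placeholder for your real database query, and a plain dict stands in for the key-value store:

```python
import time

CACHE = {}          # key -> (value, expires_at)
TTL_SECONDS = 60

def fetch_product_from_db(sku):
    # Hypothetical placeholder for a slow database query.
    return {"sku": sku, "name": "Wireless Keyboard", "price": 49.99}

def get_product(sku):
    key = f"product:sku:{sku}"
    entry = CACHE.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.monotonic() < expires_at:
            return value              # cache hit: skip the database
    # Cache miss or expired: fall back to the database, repopulate cache.
    value = fetch_product_from_db(sku)
    CACHE[key] = (value, time.monotonic() + TTL_SECONDS)
    return value
```

The read path always tries the cache first; the database is only touched on a miss, which is what keeps repeated requests cheap.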

Redis vs. Memcached: Key Architectural Differences

Both are in-memory key-value stores. Both deliver sub-millisecond performance. But they make different trade-offs.

| Feature | Redis | Memcached |
| --- | --- | --- |
| Data types | Strings, lists, sets, hashes, sorted sets, streams, and more | Strings only |
| Value size limit | Up to 512 MB | Up to 1 MB |
| Persistence | Optional (RDB snapshots or append-only file) | None — purely volatile |
| Multi-threading | Single-threaded event loop (I/O threading added in 6.0+) | Fully multi-threaded |
| Memory reclamation | Returns freed memory to the OS | Holds allocated memory via slab allocator until restart |
| Built-in clustering | Yes (Redis Cluster, Sentinel) | Requires client-side sharding |

Memcached is a purpose-built cache. It’s simple, fast, and predictable. Its slab-based memory allocator keeps fragmentation low, making memory usage highly consistent — useful when you need a hard memory ceiling. It’s a strong fit when you’re caching plain strings and want nothing more.

Redis is a broader in-memory data structure store. Beyond caching, it supports sorted sets for leaderboards, pub/sub messaging, atomic counters, and optional persistence. Modern Redis is used as a cache, a session store, a message broker, and a lightweight database — sometimes all at once. Worth noting: Redis licensing changed starting in 2024, which led some teams to evaluate Valkey, a compatible open-source fork maintained under a permissive license by the Linux Foundation.

Where Key-Value Databases Fit in Frontend-Facing Systems

From a frontend developer’s perspective, key-value storage typically shows up in three places:

  • Session management — storing auth tokens, user state, and preferences server-side
  • API response caching — reducing database load and speeding up repeated requests
  • Rate limiting — tracking request counts per user or IP using atomic increment operations

Each of these benefits directly from what key-value databases do best: fast reads, fast writes, and simple expiration logic.
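The rate-limiting case can be sketched as a fixed-window counter, one common approach built on atomic increments (with Redis this would be INCR plus EXPIRE; here a dict stands in for the store):

```python
import time

WINDOW_SECONDS = 60   # length of each counting window
LIMIT = 100           # max requests per identifier per window

counters = {}         # (identifier, window_number) -> request count

def allow_request(identifier, now=None):
    """Return True if the request is within the current window's limit."""
    now = time.time() if now is None else now
    window = int(now // WINDOW_SECONDS)       # which 60s bucket we're in
    key = (identifier, window)
    counters[key] = counters.get(key, 0) + 1  # the "atomic increment"
    return counters[key] <= LIMIT
```

Keying the counter by both identifier and window number means old windows simply stop being read; in Redis, a TTL on each counter key would reclaim them automatically.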

When Not to Use a Key-Value Store

Key-value databases aren’t a replacement for relational databases. They have real limitations:

  • Most queries are key-based, with limited support for filtering or sorting compared to relational databases.
  • No built-in relationships between records
  • Not suited for complex reporting or analytics
  • Data modeling requires careful key design upfront

If your data has relationships or needs flexible querying, reach for PostgreSQL or a document database instead. Use key-value storage as a performance layer on top of your primary data store, not as a substitute for it.

Conclusion

Key-value databases work because they trade complexity for speed. They do one thing — store and retrieve values by key, primarily in memory — and they do it exceptionally well. Whether you choose Redis for its flexibility or Memcached for its simplicity, understanding the underlying model helps you use these tools where they actually belong: as a fast, focused layer that keeps your applications responsive.

FAQs

How does Redis handle persistence?

Redis supports two optional persistence mechanisms: RDB snapshots, which save the dataset at configured intervals, and the append-only file (AOF), which logs every write operation. You can use either or both. Without persistence enabled, data is lost on restart, just like Memcached. For pure caching, persistence is often unnecessary.

When should I choose Memcached over Redis?

Memcached is a good choice when you need a straightforward, multi-threaded cache for simple string values and want predictable memory usage with minimal configuration. If you do not need rich data structures, persistence, or built-in clustering, Memcached's simplicity and efficient slab-based memory allocator make it a reliable, lightweight option.

What happens when an in-memory store runs out of memory?

Both Redis and Memcached use eviction policies to handle memory limits. Memcached uses LRU (least recently used) eviction by default. Redis offers several configurable policies, including LRU, LFU (least frequently used), random eviction, and no-eviction mode, which returns errors on writes when memory is full.

Can Redis serve as a primary database?

Redis can function as a primary data store for specific use cases like session management, counters, or real-time leaderboards, especially with persistence enabled. However, it lacks relational querying, enforced schemas, and mature transaction support. For most applications, it works best as a complementary performance layer alongside a relational or document database.
