
Consistent Ring Within a Consistent Ring

Over the past few months, I’ve been diving deep into distributed storage internals, and somewhere along the way OrangeDB was born. If you haven’t heard of it yet, check out orange. This journey led me to explore several databases that quietly power critical infrastructure in production environments.

Some highlights:

Revisiting Existing Designs

Cassandra

Cassandra is completely leaderless: any node can handle reads or writes, cluster membership and state are spread via gossip, and replicas converge on the data eventually. To tune consistency, Cassandra offers quorum reads and writes: with replication factor N, picking a write quorum W and a read quorum R such that W + R > N guarantees that every read overlaps at least one replica holding the latest acknowledged write.
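
As a concrete illustration, a quorum write and read with the Python cassandra-driver might look like the sketch below; the contact point, keyspace, and table are placeholders rather than anything from this post.

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])           # placeholder contact point
session = cluster.connect("app_keyspace")  # placeholder keyspace

# Write at QUORUM: acknowledged once a majority of replicas accept it.
insert = SimpleStatement(
    "INSERT INTO users (id, name) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(insert, (120, "alice"))

# Read at QUORUM: with W + R > replication factor, the read overlaps the latest write.
select = SimpleStatement(
    "SELECT name FROM users WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
row = session.execute(select, (120,)).one()
```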

Features like sloppy quorum, hinted handoff, and read repair further enhance availability. However, this design may result in higher read and write latencies, especially under quorum settings.

MongoDB Sharded Deployment

In a MongoDB sharded cluster, a query router (mongos) directs each operation to the shard that owns the relevant key range, and each shard is a replica set in which a single elected primary accepts all writes.

This design offers strong consistency by default (reads and writes go through the primary) and lower read latency in relaxed read-preference modes. However, it relies heavily on leader election (via a Raft-like protocol), which introduces complexity (e.g., split-brain handling, election delays, leadership failover).
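
For contrast, a client keeps strong consistency by reading from the primary (the default) or trades freshness for latency with a relaxed read preference. A minimal pymongo sketch, with placeholder URI, database, and collection names:

```python
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder URI

# Default: reads and writes go through the elected primary (strong consistency).
events = client["appdb"]["events"]
events.insert_one({"user_id": 120, "action": "login"})

# Relaxed mode: allow reads from secondaries for lower latency, accepting possible staleness.
relaxed_events = client["appdb"].get_collection(
    "events", read_preference=ReadPreference.SECONDARY_PREFERRED
)
doc = relaxed_events.find_one({"user_id": 120})
```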


Enter OrangeDB

(Figure: zero-disk architecture)

OrangeDB attempts to combine the best of both worlds: Cassandra-style leaderless availability without quorum coordination on writes, and MongoDB-style deterministic per-key write ownership without leader elections.

It introduces:

“Consistent Ring Within a Consistent Ring”

An outer consistent-hash ring maps each key to a shard, and an inner ring within that shard maps the same key to a specific replica. This two-layer ring structure enables fine-grained control over placement, replication, and load distribution.
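
To make the two hops concrete, here is a toy sketch of the lookup, not OrangeDB's actual code; the shard and replica names are illustrative, and MD5 simply stands in for whatever hash function the rings use.

```python
import hashlib
from bisect import bisect_right

def ring_hash(value: str) -> int:
    # Stable hash into the ring's key space (MD5 purely for illustration).
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring: each member owns the arc ending at its position."""

    def __init__(self, members):
        self._points = sorted((ring_hash(m), m) for m in members)

    def lookup(self, key: str) -> str:
        positions = [p for p, _ in self._points]
        idx = bisect_right(positions, ring_hash(key)) % len(self._points)
        return self._points[idx][1]

# Outer ring of shards; one inner ring of replicas per shard.
outer_ring = HashRing(["shard-a", "shard-b", "shard-c"])
inner_rings = {
    shard: HashRing([f"{shard}/replica-{i}" for i in range(1, 4)])
    for shard in ["shard-a", "shard-b", "shard-c"]
}

def route(key: str):
    shard = outer_ring.lookup(key)            # hop 1: outer ring picks the shard
    replica = inner_rings[shard].lookup(key)  # hop 2: inner ring picks the replica
    return shard, replica

print(route("k=120"))
```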


Write Path

  1. For a key like k=120, OrangeDB hashes the key to determine the target shard via the outer ring.
  2. Within that shard, the key is again hashed to select a specific replica (let’s say replica-1) in the inner ring.
  3. All writes for that key go to this designated primary replica — without any leader election.
  4. That replica then asynchronously replicates the write to its sibling replicas within the shard using gossip-style replication (similar to Cassandra).
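
Putting steps 1 to 4 together, a toy in-process sketch of the write path might look like this; the replica dictionaries and the pick_primary helper are stand-ins invented for illustration, with a background thread playing the role of asynchronous replication.

```python
import hashlib
import threading

# In-memory stand-ins for one shard's replicas; real replicas would be network peers.
REPLICAS = {"replica-1": {}, "replica-2": {}, "replica-3": {}}

def pick_primary(key: str) -> str:
    # Stand-in for the inner-ring lookup: the same key always maps to the same replica.
    names = sorted(REPLICAS)
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return names[digest % len(names)]

def write(key: str, value: str) -> str:
    primary = pick_primary(key)
    REPLICAS[primary][key] = value            # the write is acknowledged by one replica only
    siblings = [name for name in REPLICAS if name != primary]

    def fan_out():
        # Best-effort, asynchronous propagation to the sibling replicas.
        for sibling in siblings:
            REPLICAS[sibling][key] = value

    threading.Thread(target=fan_out, daemon=True).start()
    return primary

print(write("k=120", "hello"))
```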

This design achieves coordination-free, low-latency writes: there is no quorum to assemble and no leader to elect, with durability deferred until the asynchronous replication catches up.


Read Path

Reads can be performed at multiple consistency levels depending on the use case.


Read Consistency Levels

OrangeDB supports three read modes:

all: query every replica in the shard and return the newest version seen (strongest, slowest).
quorum: query a majority of the replicas (a balance of freshness and latency).
single: query a single replica (fastest, but it may return stale data if replication has not caught up).
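
A toy sketch of how the three modes could differ, assuming each replica attaches some ordering metadata (modelled here as an integer version) so the newest value wins; none of these names come from OrangeDB itself.

```python
from dataclasses import dataclass

@dataclass
class Versioned:
    value: str
    version: int  # stand-in for whatever ordering metadata replicas attach

def read(replicas, key, mode="single"):
    """replicas: the shard's replica stores, ordered starting at the key's designated replica."""
    if mode == "single":
        targets = replicas[:1]                        # one replica: fastest, may be stale
    elif mode == "quorum":
        targets = replicas[: len(replicas) // 2 + 1]  # a majority of replicas
    elif mode == "all":
        targets = replicas                            # every replica: strongest, slowest
    else:
        raise ValueError(f"unknown read mode: {mode}")

    hits = [store[key] for store in targets if key in store]
    if not hits:
        raise KeyError(key)
    return max(hits, key=lambda v: v.version).value   # newest version seen wins

stores = [
    {"k=120": Versioned("hello", 2)},  # the key's designated replica, most up to date
    {"k=120": Versioned("hi", 1)},     # a sibling that has not caught up yet
    {},                                # a sibling that has not received the key at all
]
print(read(stores, "k=120", mode="quorum"))  # -> hello
```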

What OrangeDB Tries to Solve

OrangeDB vs Cassandra: Simpler, Lower-Latency Writes

OrangeDB avoids quorum-based writes by routing each key to a fixed replica, requiring no coordination during writes.

Why it matters: a write is acknowledged by a single, deterministic replica, so there is no quorum to coordinate on the hot path and write latency stays low.

Trade-off: potential data loss if the designated replica fails before replication completes; unlike Cassandra’s quorum model, an acknowledged write is not yet durable on multiple replicas.


OrangeDB vs MongoDB: No Leader Election, Always Writable

OrangeDB removes the per-shard elected primary: each key’s write target is fixed by hashing, so there are no Raft-style elections and none of their downtime.

Why it matters: every key always has a write target determined purely by hashing, so the system stays writable without waiting for an election or failover to complete.

Trade-off: Requires smarter client or routing logic to handle failed replicas gracefully.
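
One way that smarter routing logic could look, as a hedged sketch: the client walks the shard's replicas in inner-ring order, starting at the key's designated replica and falling back to siblings when a node is unreachable. The per-replica client object and its get method are hypothetical.

```python
def read_with_failover(replicas_in_ring_order, key):
    """Try the key's designated replica first, then its siblings, in ring order."""
    last_error = None
    for replica in replicas_in_ring_order:
        try:
            return replica.get(key)                    # hypothetical per-replica client call
        except (ConnectionError, TimeoutError) as exc:
            last_error = exc                           # remember the failure, try the next one
    raise last_error or KeyError(key)
```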



Final Thoughts

OrangeDB started as a personal exploration: an educational project driven by curiosity about how distributed storage systems work under the hood. It’s a playground to experiment with ideas inspired by Cassandra, MongoDB, and others, while rethinking some of their core trade-offs.

This project is far from production-ready. There are still many unanswered questions around durability, failure handling, and consistency guarantees. But that’s part of the fun—learning by building and seeing what challenges arise.

If you’ve enjoyed this peek into the inner workings of distributed databases, I hope it sparks your own experiments and deep dives. At the end of the day, OrangeDB is just one step in an ongoing journey to understand and improve how I store and manage data at scale.

Thanks for reading and sharing the curiosity! ✌️