🚀 Level Up Your Authorization: A Deep Dive into Google Zanzibar & SpiceDB 🌐

Authorization checks. They’re the gatekeepers of your application, deciding who sees what. But what happens when those gatekeepers become a bottleneck, slowing everything down? That’s the problem Vuni, a Couchbase engineer, tackled in a fascinating presentation at the conference, and we’re breaking it down for you! 👨‍💻

🎯 The Millisecond Matters: Why Authorization Speed is Critical

Vuni started with a stark truth: latency is king. A study by Deloitte showed that even a mere 0.1-second improvement in load time can lift user engagement and conversions by roughly 8%. Think about that! Slow authorization checks directly impact user experience and, ultimately, your bottom line. 💰

🕰️ A Quick Trip Through Access Control Evolution

Authorization isn’t a new concept. Vuni walked us through the evolution of access control models, highlighting the limitations of each:

  • Role-Based Access Control (RBAC): Think of it like assigning roles (admin, user, editor) and granting permissions based on those roles. It’s simple, but often too rigid for complex applications.
  • Attribute-Based Access Control (ABAC): This approach uses attributes (user location, time of day, resource sensitivity) to make decisions. While flexible, it’s notoriously difficult to get right and can become a maintenance nightmare.
  • Relationship-Based Access Control (ReBAC): A step in the right direction, modeling permissions as relationships between users and resources; this is the foundation Zanzibar builds on.
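To make the contrast concrete, here is a minimal Go sketch (the names `rolePerms`, `rbacCheck`, and `rebacCheck` are invented for illustration, not from the talk): RBAC answers a check by looking up a role's static permission list, while ReBAC answers it by looking for a relationship tuple between a specific user and a specific resource.

```go
package main

import "fmt"

// Hypothetical RBAC: permissions hang off a role name.
var rolePerms = map[string][]string{
	"editor": {"doc:read", "doc:write"},
	"viewer": {"doc:read"},
}

func rbacCheck(role, perm string) bool {
	for _, p := range rolePerms[role] {
		if p == perm {
			return true
		}
	}
	return false
}

// Hypothetical ReBAC: a permission is a stored relationship between
// one concrete user and one concrete resource.
type relation struct{ user, rel, object string }

var tuples = []relation{
	{"alice", "owner", "doc:readme"},
	{"bob", "viewer", "doc:readme"},
}

func rebacCheck(user, rel, object string) bool {
	for _, t := range tuples {
		if t == (relation{user, rel, object}) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(rbacCheck("editor", "doc:write"))          // true
	fmt.Println(rebacCheck("bob", "viewer", "doc:readme")) // true
}
```

The ReBAC tuples form a graph, which is exactly why the graph-algorithm machinery later in the talk becomes relevant.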

✨ Introducing Google Zanzibar & SpiceDB: The Game Changer

The real breakthrough came with Google Zanzibar. Recognizing the issues with decentralized authorization libraries – inconsistencies and security risks – Google created a centralized “authorization as a service” model. This single source of truth is used across massive products like YouTube, Gmail, and Google Maps, backed by Spanner, Google's globally distributed database.

Enter SpiceDB, the leading open-source implementation of Zanzibar, written in Go. Companies like Reddit and Netflix are leveraging its power to streamline authorization.
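For flavor, the Zanzibar paper writes each relationship as a tuple of the form `namespace:object#relation@user`. The `Tuple` type and `Parse` helper below are assumptions made for this sketch, not SpiceDB's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// Tuple mirrors Zanzibar's relation-tuple shape:
// namespace:object#relation@user.
type Tuple struct {
	Object   string // e.g. "document:readme"
	Relation string // e.g. "viewer"
	User     string // e.g. "user:alice"
}

func (t Tuple) String() string {
	return fmt.Sprintf("%s#%s@%s", t.Object, t.Relation, t.User)
}

// Parse reverses String; a real parser would also validate each part.
func Parse(s string) (Tuple, error) {
	objRest := strings.SplitN(s, "#", 2)
	if len(objRest) != 2 {
		return Tuple{}, fmt.Errorf("missing '#' in %q", s)
	}
	relUser := strings.SplitN(objRest[1], "@", 2)
	if len(relUser) != 2 {
		return Tuple{}, fmt.Errorf("missing '@' in %q", s)
	}
	return Tuple{objRest[0], relUser[0], relUser[1]}, nil
}

func main() {
	t := Tuple{"document:readme", "viewer", "user:alice"}
	fmt.Println(t) // document:readme#viewer@user:alice
	back, _ := Parse(t.String())
	fmt.Println(back == t) // true
}
```

A permission check then asks whether a tuple (or a chain of tuples) connects a user to a resource, which is the graph traversal problem discussed next.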

🛠️ The Tech Behind the Magic: Transitive Closures and Support Vertices

So, how do they achieve blazing-fast authorization checks? Let’s dive into the technical details:

  • The Initial Hurdle: A naive approach using Breadth-First Search (BFS) to check permissions in deep hierarchies resulted in an unacceptable O(N) time complexity.
  • Transitive Closure Indexing: The key was pre-computing and indexing transitive closures. Imagine a family tree – transitive closure quickly tells you everyone related to you, no matter how many generations removed.
  • Support Vertices: The Secret Weapon: When dealing with dense graphs (lots of connections), calculating transitive closures becomes a bottleneck. Vuni’s team introduced a clever technique using “support vertices” and related set theory concepts (R+ and R- sets) to achieve an incredible O(1) time complexity for most checks! This is a massive performance boost.
  • Tradeoffs are Real: Recalculating support vertices’ R+ and R- sets for every write introduces a performance cost. However, given a typical 99:1 read-to-write ratio, the benefit far outweighs the cost.
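A hedged sketch of the support-vertex idea (the graph, vertex names, and helpers below are invented for illustration; the production implementation is certainly more involved): precompute the ancestor set R- and descendant set R+ of a chosen support vertex once, and any reachability check whose path runs through that vertex reduces to two constant-time set-membership lookups, with the naive BFS kept as a fallback.

```go
package main

import "fmt"

// A tiny DAG stored in both directions: parent -> children and child -> parents.
var children = map[string][]string{
	"root": {"mid"},
	"mid":  {"leaf1", "leaf2"},
}
var parents = map[string][]string{
	"mid":   {"root"},
	"leaf1": {"mid"},
	"leaf2": {"mid"},
}

// bfs walks edges at query time: O(N) in the worst case.
func bfs(adj map[string][]string, from, to string) bool {
	queue := []string{from}
	seen := map[string]bool{from: true}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		if n == to {
			return true
		}
		for _, c := range adj[n] {
			if !seen[c] {
				seen[c] = true
				queue = append(queue, c)
			}
		}
	}
	return false
}

// collect computes the full reachable set from v: its R+ when given the
// children map, its R- when given the parents map.
func collect(adj map[string][]string, v string) map[string]bool {
	out := map[string]bool{}
	var walk func(string)
	walk = func(n string) {
		for _, next := range adj[n] {
			if !out[next] {
				out[next] = true
				walk(next)
			}
		}
	}
	walk(v)
	return out
}

func main() {
	// Pick "mid" as the support vertex and precompute its sets once.
	support := "mid"
	rMinus := collect(parents, support)  // everything that can reach it
	rPlus := collect(children, support)  // everything it can reach

	// O(1) check for any path passing through the support vertex:
	u, v := "root", "leaf1"
	through := (u == support || rMinus[u]) && (v == support || rPlus[v])
	fmt.Println(through)             // true: root -> mid -> leaf1
	fmt.Println(bfs(children, u, v)) // same answer via the naive BFS
}
```

Every write that touches the graph must refresh these precomputed sets, which is exactly the recomputation cost the tradeoff above refers to, and why the 99:1 read-to-write ratio matters.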

👨‍💻 Go to the Rescue: The Power of Concurrency

Vuni emphasized the crucial role of Go in this architecture:

  • Concurrency is Key: Go routines were used to parallelize the calculation of R+ and R- sets, significantly improving performance.
  • Optimistic Concurrency Control: Badger’s optimistic concurrency control mechanism was leveraged to handle concurrent writes, with a retry mechanism in place for contention.
  • Seamless Updates: Blue-Green deployments ensured continuous operation during updates.
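As a rough illustration of the first point (the graph and helper names are invented; this is not the actual SpiceDB or Couchbase code), the R+ and R- traversals for a vertex are independent of each other, so each can run in its own goroutine and be joined with a sync.WaitGroup:

```go
package main

import (
	"fmt"
	"sync"
)

var children = map[string][]string{
	"root": {"a", "b"},
	"a":    {"c"},
	"b":    {"c"},
}
var parents = map[string][]string{
	"a": {"root"},
	"b": {"root"},
	"c": {"a", "b"},
}

// reachable collects every vertex reachable from v in the given direction.
func reachable(adj map[string][]string, v string) map[string]bool {
	out := map[string]bool{}
	var walk func(string)
	walk = func(n string) {
		for _, next := range adj[n] {
			if !out[next] {
				out[next] = true
				walk(next)
			}
		}
	}
	walk(v)
	return out
}

// computeSets derives R+ and R- for one vertex concurrently: the two
// traversals share no state, so each runs in its own goroutine.
func computeSets(v string) (rPlus, rMinus map[string]bool) {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); rPlus = reachable(children, v) }()
	go func() { defer wg.Done(); rMinus = reachable(parents, v) }()
	wg.Wait()
	return rPlus, rMinus
}

func main() {
	rPlus, rMinus := computeSets("a")
	fmt.Println(len(rPlus), len(rMinus)) // 1 1: {c} and {root}
}
```

On the write path, a store like Badger would wrap the index update in a transaction and retry when it reports a conflict (Badger surfaces this as `badger.ErrConflict`); that part is omitted here to keep the sketch dependency-free.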

📊 Real-World Results: Numbers That Speak Volumes

To demonstrate the impact, Vuni presented some impressive results from a synthetic data set:

  • Data Set: 20 million nodes, 155 million edges, 2,000 layers of hierarchy.
  • Workloads: Pure read (checks only) and mixed (99% checks, 1% writes).
  • Performance Gains:
    • Pure Reads: over 13x lower median latency and 15x higher throughput!
    • Mixed Workload: 72% lower P95 latency, 2x higher throughput.

These aren’t just incremental improvements; they’re game-changing leaps in performance. 🚀

🔭 Looking Ahead: Couchbase & Open Roles

Vuni concluded by highlighting Couchbase’s commitment to in-house solutions, including vector search and indexing. And if you’re looking for a challenging and rewarding career, Couchbase has open roles – go check them out! 💾📡

This presentation wasn’t just about authorization; it was a masterclass in problem-solving, demonstrating the power of graph algorithms, a deep understanding of performance optimization, and the elegance of Go. It’s a must-read for anyone building scalable and performant applications.
