In Defense of OAuth 2.0

Guest Author, August 30th, 2012

This guest post comes from Scott Morrison, CTO of Layer 7 Technologies, an API management company. Scott provides the visionary innovation and technical direction for the company. You can follow him on Twitter at @KScottMorrison.

That recent sound of a door slamming was Eran Hammer storming out of the OAuth standardization process, declaring once and for all that the technology to which he gave so much of his time was dead. Not only would he no longer be a part of the effort to formalize OAuth, he also asked that his name be removed from all of the OAuth 2.0 documents—which is just about as strong a statement as you can make in the world of standards, where authorship counts for nearly as much as it does in academe. Tantrums and controversy make great social media copy, so it didn’t take long before everyone seemed to be talking about this one. In some quarters, you’d hardly know the London Olympics had begun.

So what are we to really make of all this? Is OAuth dead, or at least on the road to Hell, as Eran now famously put it? Certainly my inbox is full of emails from people asking me if they should stop building their security architecture around such a tainted specification.

I think Tim Bray, who has vast experience with the relative ups and downs of technology standardization, offered the best answer in his own blog:

“It’s done. Stick a fork in it. Ship the RFCs.”

Which is to say sometimes you just have to declare a reasonable victory and deal with the consequences later. OAuth isn’t perfect, nor is it easy; but it’s necessary, and it’s needed now, so it’s important to forget the personality politics and just get it done. This is largely what has happened, as the specification was ratified only days after this episode at the IETF meeting in Vancouver.

This is a good thing, because from the beginning, OAuth solved an important problem. It is much more than a simple mechanism for people to share information between their social media accounts. The real genius of OAuth is that it empowers people to perform delegated authorization on their own, without the involvement of a cabal of security admins. And this is something that is really quite profound.

The history of computing security has been a history of centralization for defense—a history of building castle walls and carefully managed internal hierarchies. Nowhere is this more evident than in the centralized control of identity and entitlements (a fancy term that really just describes the set of actions each identity is allowed, such as writing to a particular file system).

Control implies power, and those who wield power like to maintain it. This has led to a status quo around identity management in nearly every organization that is maintained not only because it is hard to do otherwise, but also because to change would be to surrender this power.

But centralized identity administration will always have a problem with scalability. Not the scalability we immediately think of when we hear the word, such as authentication events per second, but instead the scalability of people and process. In other words, administrative scalability.

OAuth solves this. With OAuth, we can finally scale authentication and authorization by leveraging the user population itself. This is the one thing that could potentially smash the monopoly on central identity and access management. OAuth undermined the castle, and the real noise we are hearing isn’t infighting on the specification, but the beginning of enterprise walls falling down.

The important insight of OAuth 2.0 is this: delegated authorization also solves the basic security sessioning problem of all apps running over stateless protocols like HTTP. Think about this for a minute. The basic web architecture provides for complete authentication on every transaction. This is dumb, so we have come up with a museum’s worth of contrivances all meant to track security context across stateless transactions—cookies, proprietary tokens, SSL sessions, keep-alives, etc. The problem with many of these is that they don’t constrain entitlements at all; a cookie is as good as a password, because it simply maps linearly back to an original act of basic authentication.
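
To make that concrete, here is a minimal sketch, not taken from the article, of the classic cookie contrivance: one act of primary authentication becomes a session cookie, and whoever presents that cookie afterwards gets the user’s full entitlements. The framework (Flask), the handler names, and the stubbed password check are all illustrative assumptions.

from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = "demo-only-secret"  # signs the session cookie


def check_password(user, password):
    # Stand-in for a real credential check.
    return (user, password) == ("alice", "s3cret")


@app.route("/login", methods=["POST"])
def login():
    # One successful act of primary authentication...
    if not check_password(request.form["user"], request.form["password"]):
        abort(401)
    session["user"] = request.form["user"]  # ...becomes a cookie.
    return "logged in"


@app.route("/anything")
def anything():
    # Whoever presents the cookie gets the user's full entitlements,
    # for as long as the cookie lives: nothing is scoped, nothing expires
    # on its own, and no consent was ever expressed.
    if "user" not in session:
        abort(401)
    return "doing anything at all as " + session["user"]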

OAuth matters precisely because it does constrain, and in particular it adds the important notion of constraint with informed user consent. In OAuth you exchange a password (or other primary security token) for a time-bound access token that represents a limited set of capabilities to which you have explicitly agreed. In simpler terms, the token expires fast and is only good for what you say it is.
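
As a rough illustration of that exchange, the sketch below posts an authorization code, obtained with the user’s explicit consent, to an OAuth 2.0 token endpoint and gets back a scoped, time-bound access token. The endpoint URL, client credentials, and code value are placeholders, not from any particular provider.

import requests

# Exchange a one-time authorization code (granted with the user's explicit
# consent) for a short-lived, scoped access token. All identifiers here are
# hypothetical.
resp = requests.post(
    "https://auth.example.com/oauth2/token",
    data={
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_USER_CONSENT",
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    },
)
token = resp.json()
# Typical fields in the JSON response, per the OAuth 2.0 spec:
#   token["access_token"]   the time-bound credential itself
#   token["expires_in"]     lifetime in seconds ("the token expires fast")
#   token["scope"]          exactly what it is good for
#   token["refresh_token"]  optional, used to obtain a fresh access token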

These limitations on the token mean you can pass it off to another application (like Twitter) and reduce your risk if they lose or misuse it. Or—and this is the key insight of OAuth 2.0—you can just use it yourself as a better way to track a security session. The important point here is that you are performing the consent yourself, without having to involve any inside security admins. The cabal just got bypassed.
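
Continuing the sketch, using the token to track a security session amounts to presenting it on each API call in place of a password or cookie, via the standard OAuth 2.0 bearer Authorization header. The API URL and token value here are again placeholders.

import requests

# Present the scoped, time-bound token on every request instead of a
# password or session cookie. The resource URL is hypothetical.
access_token = "ACCESS_TOKEN_FROM_THE_EXCHANGE_ABOVE"  # placeholder

resp = requests.get(
    "https://api.example.com/v1/me/messages",
    headers={"Authorization": "Bearer " + access_token},
)

if resp.status_code == 401:
    # The token expired or was revoked: the window of exposure is bounded,
    # which is exactly the risk reduction described above.
    print("token no longer valid; repeat the consent and exchange step")
else:
    print(resp.json())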

The problem with OAuth 2.0 is that it’s surprisingly hard to arrive at such a simple idea from the writings that emerged out of the OAuth 1.0a efforts. OAuth initially set out to solve a very specific—and indeed, increasingly common—problem around linking accounts on the social web. Most of the good interpretation of OAuth focuses on this three-legged scenario.

If you are a developer interested in learning how to apply OAuth 2.0 to, say, mobile device development, you are quickly going to find yourself with spec in hand. And that’s the problem: specifications aren’t made to instruct; their purpose is to promote interoperability and guide expert designers. Both OAuth specs reduce quickly to swim-lane diagrams and excruciating detail that demands a strong understanding of the use cases going in.

OAuth is more a victim of poor marketing than of bad specsmanship. As it stands today, learning about OAuth 2.0 is only slightly less torturous than learning about Kerberos, and that is faint praise indeed. We are all still waiting for the definitive guide, the one that finds its way above the fold by virtue of its simplicity and good advice.

This complexity is an ironic counterpoint to the current movement promoting the simple and accessible, as exemplified by the RESTful style and technologies like JSON—the very engines of the modern Web renaissance. This is the greatest threat to the future of OAuth, and it doesn’t have to be this way. OAuth is actually deceptively simple; it is the underlying detail that remains complex. But the same can be said of SSL/TLS, which we all use daily with few problems. What OAuth needs is a set of dead-simple but nonetheless solid libraries on the client side, and equally simple and scalable support on the server. This is a tractable problem, and it is coming. But along with this, OAuth also needs much better interpretation so that people can understand it fast.

Personally, I agree in part with Eran Hammer’s wish buried in the conclusion of his blog entry:

“I’m hoping someone will take 2.0 and produce a 10 page profile that’s useful for the vast majority of web providers, ignoring the enterprise.”

OAuth absolutely does need simple profiling, both for interoperability and for comprehension. But don’t ignore the enterprise. The enterprise really needs the profile too, because the enterprise badly needs OAuth.
