Thursday, June 06, 2019

Introducing PURDAH

So I'm reading Neal Stephenson's latest novel, 'Fall'... In Chapter 11 he introduces PURDAH: Personal Unseverable Registered Designator for Anonymous Holography.

He explains that Holography is from the original meaning of the term:

"A holograph is a document written entirely in the handwriting of the person whose signature it bears. Some countries (e.g., France) or local jurisdictions within certain countries (e.g., some U.S. states) give legal standing to specific types of holographic documents, generally waiving requirements that they be witnessed. One of the most important types of such documents are holographic last wills." -

"So it's just an anonymous ID, with a fancy name?", Corvallis asks...

No, PURDAHs are all registered on a distributed ledger so their veracity can be verified at any time. Unseverable means that no one can take it away from you, as long as you take reasonable precautions...

At least Neal seems to be paying attention!

Wednesday, November 08, 2017

Trust vs Confidence

Over the years, in my own mind, I have built specific semantics around the terms 'Trust' and 'Confidence'. These are closely related to the validity of 'Proof'... I think that the use of these terms in the vernacular is often too fuzzy to be of use in identity system discussions. I would posit:


Security and its many mechanisms are used to establish trust; once trust is established, you just trust. My canonical use-case for this is access to the school blog. I can grant or revoke write access to my kids' school blog. I give access to people who I trust will only post age-appropriate material. I could use manual or automated mechanisms to check posts before they are published, but the effort or cost outweighs the risks. I choose to trust. Trust is a human, emotional, social construct that implies a loosening of control. Trust can be abused, and it is; knowing the risks, the rewards and the remediations for abuse of trust (systems of accountability: reputation? legal?) is important. So trust needs to be bounded: "I trust XX to do YY".

Concretely: I trust an entity with my money, like Coinbase, which holds my bitcoin wallet. I have taken a leap. Coinbase could steal my money despite all of the controls of the blockchain and distributed ledger technology. I could use a different wallet technology, but then I am still choosing to trust the software that enables that wallet, or the hardware that the software runs on. At some point the cost of not trusting outweighs the risk and the expense of trusting.

So on some level... this poses the question: If my relationship with Coinbase is purely one of trust that they will hold my money and return it based on the current value of bitcoin, what difference does the underlying blockchain technology actually make to me? I could use bitcoin in a way that I don't need a trusted third party (at the extreme: build my own hardware and software) but I don't, and most people don't.

I think it is incumbent on people talking about identity systems to really understand where security ends and trust starts. Do most people understand how misplaced their trust in their mobile hardware could be?

So, to me, Trust is what happens beyond the bounds of control. Or, to put it another way: Trust is what happens within 'pipes' or 'bubbles' of control established using security mechanisms.


Confidence is, when I'm trying to use it precisely, a measure of certainty in a claim. In terms of an identity system a claim might be:

  • an authentication claim (I am the person identified by ID XX) 
  • an authorization claim (I should have access to this resource)
  • an attribute claim (I am over 18 years of age)
These claims often get delivered in terms of, or together with, 'Proofs', where a 'Proof' is a mechanism (hopefully standardized) to deliver a claim with metadata that increases confidence (in the absence of trust). Some examples:

In an authentication claim, the claim of the ID may be accompanied by claims about who established the ID (signed by the private key of a trusted party); the claim may also include details of how the user was authenticated (password, multi-factor, smart card, etc...). The associated metadata establishes a level of confidence in the claim. Step-up authentication models (you can view your balance if you logged in with a password, but you have to use multi-factor to initiate a transfer) are a direct result of your levels of confidence in various authentication claims.
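The step-up idea can be sketched as a small policy table (the method names, confidence scores, and thresholds here are illustrative assumptions, not from any real product):

```python
# Sketch: mapping the authentication-method metadata carried in a claim to
# a confidence level, then gating actions on that level (step-up auth).
# Methods, scores, and thresholds are illustrative assumptions.

AUTH_CONFIDENCE = {
    "password": 1,
    "multi_factor": 2,
    "smart_card": 3,
}

ACTION_THRESHOLD = {
    "view_balance": 1,       # a password login is enough
    "initiate_transfer": 2,  # requires multi-factor or better
}

def allowed(auth_method: str, action: str) -> bool:
    """True if the claim's authentication metadata supports the action."""
    return AUTH_CONFIDENCE.get(auth_method, 0) >= ACTION_THRESHOLD[action]
```

With that table, a password login is enough to view a balance but not to initiate a transfer; an unrecognized authentication method gets zero confidence and is allowed nothing.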
In an attribute claim, again one would expect the claim to be signed by a trusted party (trusted to make that specific claim), and the claim may include metadata about how the attribute was validated. An over-18 claim that was self-asserted (they checked a checkbox that says "I'm over 18") may be enough to satisfy COPPA compliance requirements in the US but would be insufficient to provide legal access to porn in the UK.
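A minimal sketch of such a claim-plus-proof, with HMAC standing in for a real public-key signature (the field names and the shared key are illustrative assumptions, not any real claim format):

```python
import hashlib
import hmac
import json

# Sketch: an attribute claim ("over 18") carried with metadata about how it
# was validated, signed by the issuing party. HMAC is a stand-in for a real
# PKI signature; all field names are illustrative.

ISSUER_KEY = b"shared-secret-of-the-hypothetical-age-verifier"

def sign_claim(claim: dict, key: bytes) -> dict:
    """Package a claim with a signature over its canonical JSON form."""
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "signature": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify(proof: dict, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(proof["claim"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof["signature"])

proof = sign_claim(
    {"subject": "jane", "attribute": "over_18", "value": True,
     "validation": "credit_system_check"},   # vs. "self_asserted"
    ISSUER_KEY)

# A relying party accepts the claim only if the signature checks out AND the
# validation metadata meets its bar (self-asserted may not be enough).
ok = verify(proof, ISSUER_KEY) and proof["claim"]["validation"] != "self_asserted"
```

The point of the sketch is the two-step check: the signature establishes who made the claim, and the validation metadata establishes how much confidence to place in it.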

Bringing Confidence and Trust together:

So with a claim that is signed by a party that is trusted to make age claims: the signature gives me confidence that the claim is from the trusted party, and then I trust the age claim; rarely do I require proof of the mechanics of acquiring the validation. Even if I require details of how the claim was established (self-asserted or a credit-system check), I could make the third party 'prove' it, but I don't.

That is establishing a Trust space using a security mechanism (in this case; PKI and standardized claim semantics) and then... trusting the information that is provided in that secured context.


Is there a source (glossary) that you use to define these fuzzy terms when you reference them in specs? I know that there were efforts to normalize identity terminology back in the day... did any survive the test of time?

Wednesday, November 01, 2017

Eight years and counting

Well, it has been 8 years since I last posted here and 12 years since I started this blog, and I have to ask... what has changed, what has been achieved in all that time? I've been out of touch with this space for a while and I'm going to go on a little personal voyage of discovery to see what I can learn and see if any of the fundamental problems have been solved.

My first step is going to be attempting to articulate in abstract terms what I consider to be 'the fundamental problems'.

My primary point of interest since this all started has been to give people access to and appropriate control over data about themselves and their transactions. It is well known that the likes of Google, Facebook, Experian, Equifax and many others make their money trading in data generated by or about us. These companies provide important and valuable services but they do not adequately respect individual privacy nor do they fairly include the individuals that are their currency in the value chain.

I have spoken with business stakeholders in large organizations that use these services, despite knowing that they are 'unclean', because the value they add is very real. The increase of ROI on targeted marketing based on these services is phenomenal. As we, as a community, worked on alternate models they were eager: Give us a viable alternative, they would say, and we will use it. As far as I can tell there is still no viable alternative... I will try to unpack why.

I will try to discover if we have a technology problem, a communications problem, a legal problem, an education problem or a business problem; presumably we have a little of each.

The lack of a viable alternative is closely related to scale. The aforementioned companies have huge user populations which is what makes them so appealing and so valuable. Users do not adopt a technology based on the elegance of the standards or even, unfortunately, based on the strength of the privacy. Users primarily adopt technology because it makes it easier for them to do something they want to do (including playing games and consuming porn). With that said I do believe that there is a growing number of people dissatisfied with the status quo. People who would be willing to engage in an alternative system even if it costs them a little. How do we provide them a viable alternative?

So a fundamental question I have is: do we have the building blocks to build a viable alternative and we just haven't found the right constellation of services and apps to provide, or are there still gaps in the technology stack needed to build one? I hope to find out.

In my upcoming posts I will start to dig into what I believe are the important qualities of systems that might address this need. I will undoubtedly build on old classics like Kim Cameron's Laws of Identity but will also add some flavor of my own in terms of business and legal frameworks that I believe need to be in place. I will also address the qualities that I believe are necessary for a distributed data network to actually work, at scale, as a data network (Spoiler: Link based systems like "Linked Data" work great for unstructured content, documents, but fail rapidly to satisfy operational requirements for structured data).

I'm excited to dig in and learn blockchain and blockchain alternatives... Please let me know about stuff that you think is worth reviewing and including as stops on my voyage of discovery!

Monday, August 24, 2009


If you have a chance, check out this proposed session for SXSW: Have you noticed that when you search the internet you probably don't see results from the stuff that you pay for (subscriptions, stuff available through your local library, etc...)? This panel will discuss how we could fix that... If you think that would be useful, go give it the thumbs up.

Monday, July 20, 2009


I have written about reputation in the past and continue to evolve my thinking on the subject. I had an interesting interaction last weekend with Lillie Coney of EPIC while on a panel together at ALA. As a lawyer and a privacy expert, Lillie described the legal frameworks that exist to both protect and circumvent our privacy, and the steps necessary to strengthen our privacy position in the law. I found myself pushing back on Lillie, expressing that reputation systems are just as important as legal frameworks as systems of accountability for privacy. If we had more time I think we might have had an interesting discussion on the subject.

Here's the summary I reached in my head: I do not deny that the legal system works to protect our privacy interests at certain levels. However, as an individual with a complaint against a large company, I have very little recourse. For me to take action, personally, against a large corporation is prohibitively time consuming and costly. I believe that robust reputation systems can help give me a way to have a voice.

We know that there are places where the legal system works. We know that there are places where reputation systems work. There is a gap between these two places where very little works. Lillie was explaining how we fill that gap with legal frameworks. I propose that we can also fill that gap with well-constructed reputation systems. I don't think this is an either-or situation; together these things can provide robust protection and accountability that is available to everyone.

My point is that while those of us who think about reputation recognize the importance of the legal frameworks, I'm not sure that the people who work on the legal frameworks recognize the importance of the reputation systems.

What do you think?

Friday, June 12, 2009

Is anybody out there

It's been a long time since I blogged :-( and even now I'm just asking a question...

Now that I am actually implementing SAML stuff, specifically Shibboleth (mainly Web SSO), what book would you recommend I buy?


Tuesday, March 03, 2009

What is SSO

One of the hottest issues in Identity Management is often referred to as SSO: Single Sign-On. However, it is a horribly misunderstood and misused term. I will try to give a brief overview of what SSO is and isn't.

What most people mean when they say SSO is the user experience of accessing multiple services and systems but only having to 'log-in' once. On the face of it SSO sounds great but there are some pitfalls that we have to be wary of. If we aren't very careful, the 'ease' of SSO is bought at the cost of privacy.

The type of SSO that I am going to explore is the "HTTP Redirect" SSO mechanism that is widely deployed for SSO on the web. This includes OpenID, Shibboleth (Web SSO), SAML (Web SSO), Facebook, Yahoo! and Google, to name a few. These protocols differ in many details and have different strengths and weaknesses, but they all share the same underlying HTTP Redirect mechanism. The basic pattern is this:

1. Jane navigates to a web-site and she wants to log-in using a username and password from a service that supports SSO.
2. Jane clicks on the 'login' button on the page.
3. Jane has to tell the web-site who her SSO service provider is. This is known as the Where Are You From problem, otherwise known as WAYF. More about WAYF in a moment.
4. Once Jane has told the web-site who her SSO service is, an HTTP Redirect is sent to the browser to send Jane off to her SSO service.
5. At her SSO service Jane is asked to provide her UserName and Password.
6. If Jane convinces the SSO service that she is, in fact, Jane, then she is returned (via HTTP Redirect) to the original web-site with a 'token' that says "I am SSO service XYZ and I believe this is Jane"
7. The web-site and SSO service communicate in such a way that the web-site can validate that this is really SSO service XYZ talking AND if it knows and trusts service XYZ it can go ahead and accept that this is Jane.
At this point we have performed 3rd-Party Authentication or Federated Sign-On, NOT SSO.

8. Having done what she came to do Jane now navigates to another web-site.
9. When Jane arrives at the second web-site she is NOT recognized as being logged in. This site has no knowledge of who she is or that she has logged in somewhere else before. If Jane wants to access 'protected' resources at this web-site she is going to have to click on the log-in button.
10. Again Jane will be asked Where Are You From and she will select her SSO service provider.
11. The web-site will then send Jane off to her SSO provider asking... "Who is this?"
12. Because Jane logged into her SSO service just a few minutes earlier the SSO service doesn't ask Jane for a UserName and Password this time, it immediately returns back to the web-site with a 'token' that says "I am SSO service XYZ and I believe this is Jane"
13. Then, using the same trust validation as above, the web-site can now believe that this is Jane.

And Jane only logged in ONCE... that is SSO.

Jane still had to click on login twice and still had to provide her SSO service twice but she only Signed-On a Single time.
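The steps above can be sketched as a toy, in-memory walk-through, with method calls standing in for the HTTP redirects (all names here are illustrative, not from any real SSO product):

```python
# Toy simulation of redirect-based SSO: two web-sites, one SSO service,
# and a counter showing that Jane is only prompted for a password once.

class SSOService:
    def __init__(self, name):
        self.name = name
        self.sessions = set()  # users with a live session at the SSO service
        self.prompts = 0       # how many times a password was actually asked

    def authenticate(self, user):
        # Step 12: an existing session means no new username/password prompt.
        if user not in self.sessions:
            self.prompts += 1          # steps 5-6: ask for username/password
            self.sessions.add(user)
        return {"issuer": self.name, "subject": user}  # the 'token'

class Website:
    def __init__(self, trusted_issuers):
        self.trusted = trusted_issuers

    def login(self, user, sso):
        # Steps 2-7 / 9-13: redirect to the SSO service, validate its token.
        token = sso.authenticate(user)
        return token["issuer"] in self.trusted

sso = SSOService("XYZ")
site_a = Website(trusted_issuers={"XYZ"})
site_b = Website(trusted_issuers={"XYZ"})

first = site_a.login("jane", sso)   # Jane is prompted for her password here
second = site_b.login("jane", sso)  # redirect happens, but no prompt this time
```

After both logins, `sso.prompts` is 1: two explicit login clicks, two redirects, but a Single Sign-On.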

There are variations in this flow, OpenID nicely shortcuts the double SSO service provider selection BUT you have to type in your UserName twice.

The most common expectation of SSO that is not satisfied by the flow described is: "why didn't the second site just 'know' that I had already logged in and who I was?" Apart from the fact that it would be technically difficult, the answer is actually that you REALLY wouldn't want that behavior. Here is why:

If SSO worked that way, when you logged in once, everywhere you went on the internet would know who you are. Not just an IP address, they would be getting a message "here's Jane". All of the web-sites on the web could talk to each other and work out EXACTLY which sites you visited and which ones you didn't. That is generally considered to be a terrible breach of privacy. In order to avoid this privacy leak clicking 'login' remains an explicit action that the user must take. The action no longer means: "I want to enter my username and password" but now means "I'm OK telling this site who I am."

There are ways for 'closely connected' sites to shortcut this experience. When handing a user from their Local Library System to the Consortia Meta-Search interface (a handoff between trusted parties), Jane's identity CAN be passed from one service to the other, providing the 'seamless' SSO that we would love to have. But you can't be sure that Jane was OK being identified at the second system unless you make the action explicit. As a service provider you have to make very careful choices between seamless SSO and user privacy.

Rather than going on now, you can tune in later for "SSO using Pair-Wise Identifiers to protect your privacy", "How and Why OpenID is different from Shibboleth Web SSO", "Why you MUST trust your SSO service provider because they know a lot about you"...

Please ask questions if I haven't been clear... Please let me know if you think I have said something misleading or wrong... I'm just trying to start a conversation here.