Thursday, October 16, 2008

Resolution Revolution

So I learned a little this week about sockets and it has given me pause to think about the realities of 'success' in regards to the MASSIVE adoption of the protocols that I tend to talk about on this blog.

They say a little knowledge is a dangerous thing... well here I go... head first:

DNS resolution has been under attack recently (the last six months) from a new set of poisoning attacks. One of the main reasons the attacks work is because DNS uses UDP and not TCP. The basic fix that has been implemented is Source Port Randomization, but even that has been brute-forced.... so people speculate as to what else could be done. One idea was to make every request twice, where the answers MUST match (this is known as debouncing). Another option proposed is: just use TCP instead of UDP.

So here's what I find interesting... The debounce option was rejected because it would double the amount of traffic on the DNS system; we would go from 2 packets on the wire to 4. It has been determined that the current DNS infrastructure is running at over 50% capacity, so instantly doubling the load is simply not an option. SO... why not use TCP? Well, if you use TCP you have the 3-way handshake, then the query, then the response, and then the FIN and the FIN-ACK.... 7 packets on the wire (and larger packets at that). So I find all of this fascinating in a purely academic way; this stuff is all new to me. (Now I have a basis on which to go understand DNSSEC; that'll be next week's reading.)
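
The packet arithmetic above can be sketched in a few lines. This is toy accounting only; real traffic varies with retries, truncation fallback, and EDNS:

```python
# Rough packets-on-the-wire accounting for one DNS lookup,
# following the counts discussed above. Purely illustrative.

def packets_per_lookup(transport, debounce=False):
    """Return the approximate packet count for a single DNS query."""
    if transport == "udp":
        base = 2                      # query + response
        return base * 2 if debounce else base
    if transport == "tcp":
        # SYN, SYN-ACK, ACK, query, response, FIN, FIN-ACK
        return 7
    raise ValueError(transport)

print(packets_per_lookup("udp"))                 # 2
print(packets_per_lookup("udp", debounce=True))  # 4
print(packets_per_lookup("tcp"))                 # 7
```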

Then I wonder... is anyone doing the math? IF OpenID became ubiquitous, or InfoCards did, what would that look like at a packets-on-the-wire level? Is there so much spare bandwidth and processing power now available that we don't have to worry about this?

Wednesday, October 15, 2008

Is this reputed to be a reputation?

There's a great thread going on about reputation on one of the lists I read. I tried to respond to the thread, which is something I NEVER do, but apparently it has been too long since I was active so it wouldn't let me.... So I'm weighing in here for anyone to check out if they like.

Another definition of reputation:

Reputation is the result of running an evaluation algorithm over a set of input data.

Some sample input data:

a) Number of sale transactions and number of complaints
b) Number of IM connection requests and number of IM spam reports
c) eBay reputation, credit score and number of points on my driver's license.
d) How much 100 people, selected at random, like Diet Coke

The evaluation algorithm can be very simple or very complex.... eBay's is arguably very simple and Fair Isaac's is very complex.

Arguably the reputation of a reputation could be measured based on the quality of its input data and the quality of the evaluation algorithm.
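
The definition above can be made concrete with a toy evaluator. The weights and field names here are invented purely for illustration:

```python
# A toy "evaluation algorithm over a set of input data", per the
# definition above. Weighted sum over arbitrary input signals;
# real systems (eBay, Fair Isaac) are far more involved.

def reputation(inputs, weights):
    """Score the inputs: each signal is multiplied by its (hypothetical) weight."""
    return sum(weights.get(k, 0) * v for k, v in inputs.items())

seller = {"sales": 250, "complaints": 3}
weights = {"sales": 1.0, "complaints": -20.0}
print(reputation(seller, weights))  # 190.0
```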

Reputation system attacks tend to attack the data input stream, or depend on a delay between input and output. (I've written on this in the past.)

As identity providers I think our first line of responsibility to reputation systems is the CONTROLLED delivery of quality input data that is surrounded by enough metadata about collection/storage/retention and "whatever else" that anyone can run reputation evaluations against that data and reach meaningful conclusions. I can then feed that (anonymized?) data into the reputation service of my choice which will likely be dependent on the context of my current activity.

If I want an agent at my SMTP gateway to 'decide' if a piece of mail should be delivered to my inbox, I don't care what the sender says about themselves, and I don't want to go query a bunch of reputation services to see if they know anything about this sender (which ones would I trust?). I want access to a set of data, signed by a reputable source (how long has the account existed, how many mails have been sent, how many complaints have there been, registration info made available for bootstrapping), that I can put into my personalized reputation algorithm.
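
The gateway agent described above might look something like this. A minimal sketch: the field names and thresholds are hypothetical, and the signature check on the sender's data bundle is assumed to have already happened upstream:

```python
# Sketch of the mail-gateway decision: feed a bundle of sender data
# (assumed already signature-verified) into a local, personalized rule.
# All field names and thresholds are invented for illustration.

def accept_mail(sender_data):
    """Personalized reputation rule: old-enough account, low complaint rate."""
    age_days = sender_data["account_age_days"]
    sent = sender_data["mails_sent"]
    complaints = sender_data["complaints"]
    complaint_rate = complaints / max(sent, 1)
    return age_days > 30 and complaint_rate < 0.01

print(accept_mail({"account_age_days": 400, "mails_sent": 5000, "complaints": 2}))  # True
print(accept_mail({"account_age_days": 2, "mails_sent": 10, "complaints": 0}))      # False
```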

Tuesday, September 23, 2008

I did my best...

Paul, sorry I can't help with the fines but I was very interested to see that you are checking out "that" kind of book ;-)

Monday, September 22, 2008

The next stage

Well now the rubber is going to meet the road....

The people that I now call associates, and my boss, know a LOT more than I do about the management of massive repositories of distributed data. So now I get to test some of the ideas that I've talked about here over the years...

I now work at OCLC, the Library People. My job is specifically working on Identity Management and Authentication. These things obviously only make sense in the context of controlling access to information resources.

As I learn the differences between what I have guessed is important and what really is important for the OCLC use cases I'll let you know how good or bad my thinking of the last couple of years has been.

I will still be engaged in the standards process and will bring the OCLC needs to the table as concrete examples of massive distributed identity use cases.... I think this is going to be fun!

Saturday, August 09, 2008

The times they are....

If you are reading this you probably know me and my work.

Together with my team of awesome co-workers we have tried to help move the art and science of distributed identity management and distributed data sharing forward. I think we have done some good work and would like to think that we have contributed positively to the general progress.

Unfortunately, as many of you know, advancing technology doesn't actually pay the bills and we can't pay the bills any more :-(

ooTao as we know it is going to go away. I thought that we had a purchaser for the company but it looks like that is going to fall through. I am devastated to think that the body of knowledge and the body of work that we have built up over the last 4 years is just going to evaporate, but it looks like that might be what happens. The entire ooTao team is now out looking for employment, including me.

I am still looking to see if anyone, with enough money to pay us, wants to try to keep the team together and keep the work going but I'm not feeling very hopeful.

So if you want to employ one or more people passionate and knowledgeable about distributed identity and distributed data... just let me know... otherwise, I'm off on the next great adventure.

I hope I'll end up in a position that I can continue to participate in the standards work. No matter what I will continue to post here periodically about what I'm doing that is in any way related.

Friday, May 30, 2008

A Wag for the TAG

The interference of the W3C in the XRI vote at OASIS is unprecedented and disturbing. The W3C has rebuffed all efforts by the XRI TC to engage in any form of dialog about the technical merits of XRI. Despite repeated attempts by the XRI community to show the use cases that XRI is solving, the TAG makes vague statements like 'you can do everything in URLs'... This statement is clearly and patently meaningless without specifics....

It's all well and good that SOME of the stuff that XRI does CAN be done in URI/URL, but without specifying a STANDARD way of doing it the ability to do it is next to useless!!

There are parts of XRI that you simply CAN NOT DO with URI.... like resolving an abstract identifier (a URN).

There are hundreds of millions of users with services that use the XRI specs (OpenID being the best known). The ONLY reason the W3C cares about this is they think they CONTROL the internet, and here is a spec that OBVIOUSLY solves wide-reaching problems and it's not theirs.

In my mind this is as subversive as the Net Neutrality issue... W3C is cynically trying to stifle innovation for pure 'not invented here' reasons.

rant rave grr huff.... This pisses me off... PLEASE.... if you voted NO on the xri vote spend some time on the phone with me and talk with me about why you voted no and why I think you are wrong! Before undermining LOTS of hard work by LOTS of smart people at least understand the technology.

Wednesday, May 21, 2008

Let every eye negotiate for itself

Paul's response to my latest post put me in mind of Claudio in Act 2 scene 1 of Much Ado About Nothing...

Let every eye negotiate for itself
And trust no agent; for beauty is a witch
Against whose charms faith melteth in blood.
Paul is correct that I must qualify my posts more carefully.

There is as yet no agreement on all of the mechanisms of claim and assertion exchange. While the ability to differentiate a self-asserted claim from an issuer-asserted claim in a managed InfoCard is useful in some cases, it is not the ONLY answer to the problem. The fact that I have a widely deployed client provider that wants to consume claims in this way is a pure Business Detail that should not impact the purity of the technical discussion.

As Paul points out, a better way to do this would be for us to deliver an 'Email' claim with enough metadata about how the claim was acquired and how it was or wasn't vetted that the RP could make its own decision as to the veracity of the claim. I probably should have implemented it this way even though the RP was asking for something else.

Post Script

That was meant to be wry, biting humor... not mean... does it sound too mean?

Tuesday, May 20, 2008

The Claim Game

ooTao's Managed InfoCards now include a verified email claim and verified i-name claim.

If you want to consume these claims you will need to ask for:
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/verified/emailaddress
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/verified/iname
I have blogged previously about how you might validate an i-name claim.

We are publishing our own 'white list' of claims providers that we consider 'trustworthy' in order to 'trust' the verified email claim. More on that soon.

If you want to start consuming our verified claims at your RP just let us know and we can do some testing together.

Saturday, May 17, 2008

Did Info Card help?

I like InfoCards... I like the idea that I will not have to remember usernames and passwords. I am confident that MS will work out how to solve the 'portability issue'... BUT.... I just went through InfoCard hell!! I'm still shaking as the adrenaline that built up is trying to drain from my body... this can't be good for me. Let me tell you what happened.

After a long week at IIW and Data Sharing Summit and OpenSocial Spec meeting, I am finally checking in on the blogosphere at 5:30 am on Saturday morning and I see this really cool thread on Kim's blog. It's all about the qualities of Distributed Data Management that I have been talking about for years, but, it's Kim and Dave and Clayton Donley, who is the Senior Director of Development for Oracle Identity Management.... I get so excited, I have to add a comment and tell them about ooTao's work in the space (although Kim is meant to know :-) ).

And that's when the problems started...

I can use DigitalMe on my Mac to log into our RPs and even into Mike's blog, but it will not work on Kim's blog. I spent a while restarting things: browsers, selectors, OSs. This is just habit as a long-time Windows user; nothing helped.

So I upgraded and downgraded the versions of DigitalMe and tried to log in, to no avail. For any who care, the error I get is: 'unknown option privfile... blah blah'.

Then I remembered that my old XP PC, which is now the kids', should still have an InfoCard selector installed, so I put aside my Mac and power up the old PC. First attempt to log in at Kim's blog tells me that 'InfoCard isn't installed', which seems strange, since I remember installing it. So I poke around and find that I DO have it installed but I don't have any cards defined... I add a card... I return to Kim's blog... I click and YES, the selector invokes and I can see the card and I select it... and I am asked if I want to be redirected to an error page... which isn't exactly what I want but, what the hell, I've come this far.

The error page informs me that the temporal offset of the requesting token is larger than the requisite 300s. Those aren't the exact words, but believe me the error message did not say 'The client and server clocks don't match'... So I unpacked the message and realized that I needed to change the time on the PC so that it matched Kim's server within 5 minutes... I just had to hope that Kim's clock was close to right. So I changed the time a few times and yes.... finally... I logged into Kim's blog and left a comment.
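
The failure above boils down to a freshness check like this. A minimal sketch; the 300-second skew matches the error described, not any particular STS implementation:

```python
# The "temporal offset" rejection, reduced to its core: the server
# rejects tokens whose timestamp differs from its own clock by more
# than a fixed allowed skew (300 seconds in the story above).

def token_is_fresh(token_time, server_time, max_skew=300):
    """Accept the token only if the clocks agree within max_skew seconds."""
    return abs(server_time - token_time) <= max_skew

print(token_is_fresh(1000, 1200))  # True  (200 s apart)
print(token_is_fresh(1000, 1400))  # False (400 s apart)
```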

Unfortunately, by the time I got there, my enthusiasm and excitement for the topic had morphed into frustrated anxiety, so my comment is nowhere near the 'tone' I originally intended. There should probably be some joke I can make here about 'Claims Transformations', as this STS certainly transformed my claims... BUT... I have now been trying to write this damn post for 3 hours...

I think it was worth it though if I can finally get these guys to understand what it is we have built.

Tuesday, May 06, 2008

iPages a go-go

I was reading Kevin Marks's post that looks at Brad Templeton's post about the interplay between data portability and behavior portability. As I commented on Kevin's blog, I agree with them 80% but think that Brad's proposal has one flaw.

I disagree that it is practical or desirable to create a centralized data store. I think there are a couple of issues with that model. The first is the security implication of having everything in one place... that scares me. The second issue is, I think, key to the success of this model...

The 'place where I have access to all my data and can therefore run my OpenSocial apps', let's for the sake of ease call it my 'iPage', can and should provide me all of the user interactions I need to manage my virtually aggregated data. Specialized 'Widget Providers' should give me widgets that provide data-domain-specific user interactions through which I can specify my favorite music, food likes and dislikes, rental car preferences, etc... BUT there is a world of data that is collected about me, and should be FOR me, by people and systems that are much better qualified to know and assert those things than I am... like medical information, qualifications, financial instruments, transactional histories of all kinds, what was done to my car at its last service, etc...

This is why we have BUILT a system that has a data abstraction (XDI/Higgins) behind the OpenSocial container rather than a database. The abstraction can provide data (with bi-directional data access) to widgets, whether that data is stored locally or remotely (or a mix of both); the widget neither knows nor cares.

Using OPEN distributed identity standards (OpenID, OAuth, ID-WSF, InfoCards, FOAF, XFN) and OPEN data abstraction standards (XDI, Higgins, XML, RDF)... this can be done today... we've done it... This truly enables VRM in a broad and flexible way.

Monday, April 21, 2008

Steve does it again...

If you read this blog you get to watch me struggle to articulate some of the important subtleties of working with XRI, XRDS and XDI. Check out this article written by ooTao CTO Steven Churchill, which shows very clearly who the real brains of this operation is.

Wednesday, April 16, 2008

More on Claims and XRDS

I was recently contacted by Bob Wyman in regard to an earlier post of mine... the first question was:
Some time ago, you wrote:

SEPs in XRDS must be considered self asserted
claims and as such should not be trusted on their
face. Service Providers should publish the
mechanisms by which SEP claims should be validated
to be about a specific subject (authenticated
identifier). (ooo… I feel another spec coming).


Did that spec ever get written?
I had to respond that I never did write that spec but offered to consider his use-cases if Bob thought it would be useful. He sent me these use cases:

Well, there are two kinds of things that I would like to be able to validate. The generic issue here is one of XRDS spam...
1. If I'm hosting a blog for someone and there is an XRDS file with a SEP that forwards to that blog, how do I assure a third party that the XRDS file belongs to the person for whom I am providing blog hosting?
2. If an XRDS file contains a link to some descriptive service (perhaps an XML file that describes the business and claims that the subject is a "Pizza Parlor"), how do I make the assertion that I know the subject to be, in fact, a Pizza Parlor?
And I responded like this.... NOTE: if you manage to read the whole thing AND find the intentional mistake... you win a prize (at least you may be entered into a random drawing and have your name honorably mentioned by me to my family over dinner one night).

I SAID: -

First I have to give the disclaimer.... these ideas are just our thinking on the subject, we do not represent the XRI TC or any other body, blah, blah, you get the idea...

John Bradley and I spent a good couple of hours talking this through and have come up with 2 answers for you... One is the practical, how you should probably do it today kind of answer and the other is the 'doing it right' answer, which would mean taking on a lot more of our abstract thinking and an XDI server. The 'simple' answer still has problems that I will highlight...

Use Case 1) How to assert at an arbitrary http endpoint (web page, blog) a relationship with a specific XRDS.

The 'simple' solution is that the http endpoint supports YADIS discovery to 'get' the desired XRDS. The claim in this case would be validated by reciprocity. The XRDS returned by YADIS discovery MUST have EITHER an 'EquivID' or a 'CanonicalEquivID' that is the URI of the original endpoint.

The one problem with this 'simple' approach is whether you as the service provider, or the end user, actually have the ability to put the EquivID element into the user's XRDS. If, for example, this were Blogger blogs and Blogger OpenID 2.0 XRDSs, then you would have the ability to edit the XRDS and the blog to create the reciprocal relationship. If the use case is broader than that, you need to fall back on other mechanisms for the 'other end' of the relationship to be established. The options there would be:

a) tell the user to 'go edit their XRDS' - and wish them luck :-)

b) Use XRDSPP (XRDS Provisioning Protocol) - which is partially specified here: http://dev.inames.net/wiki/XRDSP_Spec and partially specified here: http://xpp.seedwiki.com/wiki/xpp/specs and not yet implemented or deployed anywhere that I know of. (although it is the 'next thing on our list' as MANY use cases depend on its existence)
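
The reciprocity check from the 'simple' solution can be sketched as follows. The XRDS sample is simplified, and the endpoint URI is hypothetical; a real check would also handle XRD selection and resolution details:

```python
# Sketch of the use case 1 reciprocity check: given the XRDS
# discovered from an endpoint via YADIS, confirm that it points back
# at that endpoint through an EquivID or CanonicalEquivID element.

import xml.etree.ElementTree as ET

XRD_NS = "xri://$xrd*($v*2.0)"   # XRD 2.0 namespace

def claims_endpoint(xrds_xml, endpoint_uri):
    """True if any EquivID/CanonicalEquivID in the XRDS matches the endpoint."""
    root = ET.fromstring(xrds_xml)
    for tag in ("EquivID", "CanonicalEquivID"):
        for el in root.iter("{%s}%s" % (XRD_NS, tag)):
            if el.text and el.text.strip() == endpoint_uri:
                return True
    return False

sample = """<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD><EquivID>http://blogs.example.com/jane</EquivID></XRD>
</xrds:XRDS>"""

print(claims_endpoint(sample, "http://blogs.example.com/jane"))  # True
print(claims_endpoint(sample, "http://evil.example.com/"))       # False
```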

Use Case 2) How to assert a third party claim in an XRDS.
I'm not SURE that I have understood your use case 100% so I will be verbose about the problem that I am solving in case it isn't the question you asked...

What is not clear to me from your question is what an RP would be looking for in the XRDS .... Would they be looking for "what does Service XYZ know about this entity" OR would they be looking for "what claims are available about this entity" OR would they be looking for "Is the entity represented by this XRDS a Pizza Parlor?"


If the question is: What does 'this service that I trust' know about the entity represented by this XRDS then the flow would be:

1) RP looks for the CanonicalID associated with the Authentication Service SEP that they use to authenticate this entity (if they interact with the entity using OpenID then they need the CID of the XRD that contains the OpenID SEP, if they have a 'signed document' from the entity they would use the CID of the XRD that contains 'KeyService' SEP (the place you get the public key)) .

2) The RP presumably knows the URI of 'this service that I trust', so they simply pass the CID, AND THE SERVICE TYPE, to the 'trusted service', and the trusted service returns 'claims' about the specified entity. SAML would be an obvious choice for expressing the claims but one could use any format one chooses.

If the question is: What claims are available about the entity represented by this XRDS, then the flow would be:

1) Perform Service Discovery for a 'Claims' service (not yet formalized but we could make one up on the fly if we needed to).

2) Perform Service Discovery for the AuthN service (like above) to get a 'Key' CanonicalID.

3) Ask the claims service (assuming that the claims service has a well known API) about the entity by passing in the CID and the AuthN Service Type.

4) Get back a list of claims... The claims should always be verbose and specific... not 'this guy is over 18'.... but "Claim service A says - the guy who on this date and time had the credentials for the OpenID Service for CID=!abcabc is over 18". As per my blog post yesterday about "XRDS Caching", this claim could be cached in the SEP to optimize this interaction. Whether the claim is retrieved from cache or from the service itself will dictate the level of crypto verification you might want to apply to the claim.
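
The four steps above, sketched as code. The '+claims' service type, the discovery helper, and the claims API are all invented placeholders, since (as noted) no such Claims service has been formalized:

```python
# Sketch of the "what claims are available" flow. The XRDS is modeled
# as a plain dict of SEPs; every name and type here is hypothetical.

def discover(xrds, service_type):
    """1)/2) Service discovery: linear scan for the first matching SEP."""
    for sep in xrds["seps"]:
        if sep["type"] == service_type:
            return sep
    raise LookupError(service_type)

def claims_about(xrds, claims_api):
    claims_sep = discover(xrds, "+claims")   # 1) find a Claims SEP (type invented)
    authn_sep = discover(xrds, "openid")     # 2) find the AuthN SEP to get its CID
    # 3) ask the claims service about (CID, AuthN service type); 4) return its claims
    return claims_api(claims_sep["uri"], authn_sep["canonical_id"], authn_sep["type"])

xrds = {"seps": [
    {"type": "+claims", "uri": "https://claims.example.com/api"},
    {"type": "openid", "uri": "https://idp.example.com/", "canonical_id": "=!abcabc"},
]}

def fake_claims_api(uri, cid, authn_type):
    # Stand-in for the (not yet existing) well-known claims API.
    return ["Claim service A says: the holder of %s via %s is over 18" % (cid, authn_type)]

print(claims_about(xrds, fake_claims_api)[0])
```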

If the question is: Are you a Pizza Parlor then the flow would be...

1) Get the XRDS for the CID (no service selection) and iterate over the XRD-level Type elements to see if anyone has claimed that this is a Pizza Parlor. The Type element of the XRD is an XRI that might be in the 'self-issued' form.... "xri://+pizza.parlor", or it may be in the 'asserted' form... xri://@google*(+pizza.parlor). In the asserted form, if you decide to trust the asserter, you can validate the claim by the same means as answering the first question in this use case, where Google just became your 'trusted service'.
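
A naive classifier for the two Type forms above might look like this. The string handling is deliberately simplistic and the parsing rules are my own shorthand, not taken from the XRI syntax spec:

```python
# Classify an XRD-level Type XRI as self-issued ("xri://+thing") or
# asserted ("xri://@authority*(+thing)"). Illustration only; a real
# implementation would use a proper XRI parser.

def classify_type(type_xri):
    """Return (form, asserting authority or None, the +tag)."""
    body = type_xri.removeprefix("xri://")
    if body.startswith("+"):
        return ("self-issued", None, body)
    if body.startswith("@") and "*(" in body:
        authority, _, rest = body.partition("*(")
        return ("asserted", authority, "+" + rest.rstrip(")").lstrip("+"))
    return ("unknown", None, body)

print(classify_type("xri://+pizza.parlor"))
print(classify_type("xri://@google*(+pizza.parlor)"))
```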

AND THAT'S THE END OF THE SIMPLE ANSWER :-)

So in fact the 'how it SHOULD be done' (according to Andy Dale) answer is a lot simpler if you can overcome one pre-requisite..... First install your XDI server... the rest is easy, really... if you want to know I'll write up how that would work.

Did you spot the mistake?

Tuesday, April 15, 2008

XRDS patterns

Talking with John Bradley yesterday we got into some best-practice ideas for XRDS usage. These probably need to be formalized somewhere other than my blog as I think they are important, but here's a first brain dump for you...

1) More abstraction in our Service End Points (SEPs) - Right now we have a tendency to put a URI in the URI element of the SEP. The problem with this is that if the service provider changes their coordinates (or any other detail about their service) they have to change all of their customers' SEPs. What we probably want to do in any given individual's XRDS is provide a pointer to the Service Provider.... Jane uses @xyz for this service.... @xyz is then dereferenced for the access details. If @xyz makes any changes to their service they only have to change the SEP at the @xyz XRDS.

In MOST cases this can be achieved by using a service-level Ref. In MOST cases the Canonical ID of the XRD that contains the final SEP is actually irrelevant, so having many SEPs Ref to the provider's SEP works fine. In cases where the CID does matter (like in an AuthN service) we have to do something else... An XRI in the URI element would do the trick, but that is going to have to be handled by the application as the resolution client will not 'automatically' dereference the XRI. However, all the app will have to do is make another call to the resolver while remembering the CID from the first resolution call.
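
The indirection pattern in point 1 can be sketched with a dict standing in for the resolver. The names =jane, @xyz, and the service URI are invented:

```python
# Per-user SEPs hold only a pointer (a Ref) to the provider's
# authority; the provider's own XRDS holds the real URI. Moving the
# service means changing one record, not every customer's XRDS.

registry = {
    "=jane": {"contacts": {"ref": "@xyz"}},   # per-user: just a pointer
    "@xyz":  {"contacts": {"uri": "https://contacts.xyz.example/v1"}},
}

def resolve_sep(xri, service, registry):
    """Follow service-level Refs until a concrete URI is found."""
    sep = registry[xri][service]
    while "ref" in sep:
        sep = registry[sep["ref"]][service]
    return sep["uri"]

print(resolve_sep("=jane", "contacts", registry))  # https://contacts.xyz.example/v1
```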

2) XRDS-Level Caching - There are several SEPs that we are defining that, in their simplest uses, only expose a single piece of information. Examples of these are the 'Key Service', where in most cases you simply want the current public key associated with the identifier, or the STS service, where you are simply looking for an assertion of who the issuer of mCards for this XRI is. In these cases it is burdensome, especially if we add the abstraction I proposed above, to have to resolve the SEP and then invoke another service to get a single piece of information. We have found that it is convenient in these cases to cache the pertinent piece of information directly in the XRDS. This way you can optimize most discovery and validation interactions. If you find that the cached value is "not what you would expect" (it does not provide a public key that matches the signature provided) you can then invoke the described service to find out if the signature used an older, revoked, or compromised key.
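
The cache-then-fallback pattern in point 2, as a sketch. The key values and the verification predicate are stand-ins for real signature checks:

```python
# Trust the key cached in the XRDS first; only invoke the full Key
# Service when the cached value fails to verify (e.g. key rotation).

def current_key(xrds_entry, fetch_from_service, verifies):
    """Fast path: cached key, if it verifies. Slow path: ask the service."""
    cached = xrds_entry.get("cached_key")
    if cached is not None and verifies(cached):
        return cached                      # no extra round trip needed
    return fetch_from_service(xrds_entry)  # cached value was stale or missing

entry = {"cached_key": "OLD-KEY", "key_service": "https://keys.example.com/"}
print(current_key(entry, lambda e: "NEW-KEY", verifies=lambda k: k == "NEW-KEY"))  # NEW-KEY
print(current_key(entry, lambda e: "NEW-KEY", verifies=lambda k: k == "OLD-KEY"))  # OLD-KEY
```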

What do you think?

Thursday, April 10, 2008

wow...what a week

Well, RSA is over and we finally get to slow down again.... The last few weeks have been crazed, finishing everything that we wanted to get finished to show at RSA. It is VERY cool... the iPage framework is an embodiment and implementation of a lot of the ideas I have been sharing here for the last 3 years. It is real user-centric information management. It allows anyone to create a collection of claims from various places and then project them back out into the world progressively and securely. Over the next couple of weeks I will publish more information about iPages and how they work, and instructions on how to get one of your own.

Watch this space.

Friday, April 04, 2008

Check it out...

If you're in the SF Bay Area next week, and you happen to be at RSA... You HAVE to come check out the ooTao demo!!

We will be in the OSIS interop room all day Tuesday and Wednesday showing off our stuff... It is well worth stopping by.... You will get to see what I believe is the most comprehensive Identity 2.5 mash-up done to date... And it looks pretty good too.

See you there!

Friday, March 07, 2008

Kind words, on the whole...

Ryan Janssen wrote his take on our conversation. On the whole I like it. I'm frustrated that we seem to be unable to build web sites that communicate what we do .... Rather than accept this as our shortcoming I think I should blame Ryan :-)

Tuesday, March 04, 2008

looking back...

Ryan Janssen and I spent a bunch of time on the phone the other night talking about the history of my involvement in the ID space. He's also been talking with others, like Drummond, and is putting together a history on his blog: http://drstarcat.com/

So far his stage setting and perspective seems very fair and even handed... we'll see if I still feel that way once he's written about me :-)... Check it out, it's a good read.

Tuesday, February 19, 2008

Short and sweet

It's not enough that I added Paul Madsen's Blog to my blog roll. I have to tell you that it has become my favorite blog to read. Paul keeps it short and to the point, he is funny and insightful. It also sounds like he enjoys his kids as much as I do mine.

What is more, ID-WSF is proving to be a surprisingly good read too!

Monday, February 11, 2008

Open Source Ruby InfoCards RP Available...

Working together, Microsoft, LinkSafe and ooTao have developed the first InfoCard-enabled i-broker. You can register for an i-name at LinkSafe and subsequently log in to any OpenID 2.0 relying party without ever entering a password. All of the security can be InfoCard-driven.

We have made the Ruby RP module deployed at LinkSafe available under a BSD license, along with a simple 'hello world' app that demonstrates driving the module.

The source can be found at:

http://svn.ootao.com/svn/ootao/dist/standalone-rp/

Log in as guest/guest

You can view the running test app on our test server at:

https://ibroker.ootao.com:802

why xri

I thought this email thread was interesting enough to share with you all... I was asked in an email...

I do not understand, however, the statement about URIs having some intrinsic limitation or being bound by hard trees. A URI is an identifier. No more, no less.

In as much as meaning can be expressed by statements, and a statement can be expressed in RDF, which uses URIs as identifiers for the subjects on both sides of the statement predicates, there is no limitation on what can be expressed about those subjects or the relationships between them.

Perhaps you can elaborate on the perceived limitation of URIs?

I'm publishing my response for two reasons...

1) Maybe my answer will help others with the same question.
2) So that other XRI folks can help refine my answer

So this was my answer:

You actually answered your own question in your question... URI is insufficient to describe the relationships between resources. In order to understand the context of an identifier you need RDF, or XRI. I believe that XRI and RDF solve different parts of the same problem and, used together, provide some pretty cool capabilities.

XRI is a fully backward-compatible extension of URI so nothing is lost with this approach. It does bring some useful additions for anyone who wants to use them. Here are a couple of examples:

1) The XRI Resolution spec defines 2 mechanisms for 'Trusted Resolution'. While you can turn trusted resolution off and use the DNS infrastructure as-is (nothing lost), you can turn on either 'SSL resolution' or full 'signed authority chain resolution' to greatly increase the confidence that the results of a resolution are what they should be. Given how easy it is to undermine the DNS infrastructure, this seems important to me as we move higher-value transactions around a distributed web.

2) XRI's cross-reference syntax lets you build your RDF triples right into your address.

XRI://(uri://my_subject)*(uri://my_predicate)*(uri://my_object)

Here's an example directly from the w3c tutorial.....

http://www.example.org/index.html has a language whose value is English

Which it then breaks down to...

[http://www.example.org/index.html] [http://purl.org/dc/elements/1.1/language]"en"

could be expressed as:

xri://(http://www.example.org/index.html)*
(http://purl.org/dc/elements/1.1/language)*
en

although starting to slip in some more xri 'stuff' it might look like:

xri://(http://www.example.org/index)*(@ISO639-1)*(+en)

In this last example the subject is still expressed as, and dereferenced as, a URL, its natural form. The @ in the predicate means that ISO639-1 is resolvable in the @ namespace (dereferencing it would likely return the same as http://purl.org/dc/elements/1.1/language). The addition of the + to +en indicates that it is resolvable in the + space, which can be used to do things like find synonyms... (in the next draft of ISO639, en became eng... these might be made synonymous in the + space).
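
A trivial helper that packs a triple into the cross-reference form shown above. This mirrors the fully parenthesized variant; the bare-literal object form ("en" without parentheses) also appears in the examples:

```python
# Pack an RDF-style (subject, predicate, object) triple into the XRI
# cross-reference syntax discussed above. Purely illustrative; no
# escaping or XRI syntax validation is attempted.

def xri_triple(subject, predicate, obj):
    return "xri://(%s)*(%s)*(%s)" % (subject, predicate, obj)

print(xri_triple("http://www.example.org/index.html",
                 "http://purl.org/dc/elements/1.1/language",
                 "en"))
```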

We have found that building indexes of XRIs that use RDF syntax is a highly efficient way to navigate semantic space. (I'm not saying that it should be the only way, just that it is a viable alternative to XML serialization of RDF. We store our XRI index as a native b-tree, which we find to be much more efficient to process than RDF XML.)

I'll stop there, as you might already feel like you're at the wrong end of a fire hose, spending way more time on this question than you ever intended. If you want to spend more time learning about how and why I feel XRI (and I haven't even started on XDI yet) is important and useful, just let me know.
how'd I do?

Wednesday, February 06, 2008

Business Networking that _didn't_ suck...

As you can imagine, I have profiles on a LOT of social and business networking sites. This is part of my job; I look to see who does what and how. The real acid test of my evaluation is whether I ever go back to the site and _use_ the account. If I do, it's a rare thing and a good sign.

One of the networks that I have used along the way is BizNik, whose tag line has long been... Business networking that doesn't suck. And I did use BizNik periodically and even went to one of their local networking events. One of my favorite features was the "who has been to your profile" feature, something shared by LinkedIn, though at LinkedIn you only get 'hints' of who looked at your profile.

So this morning I get my 'weekly stats' email from BizNik and it tells me that my profile was viewed 7 times in the last week, and I think to myself... "oh, I wonder who looked at my profile" and click on the link provided.... and to my horror.... I can no longer see the list! Now I have to pay $10 a month to see who looked at MY profile.

Now, I understand the need to monetize a business... believe me, I've been failing to do it for years. But maybe that's because I do NOT believe the way to monetize a business is by charging users for value that they create!.... People go to MY profile because of the information I put in it; it's MY information. Yes, it's BizNik's container, but can't they just put ads on the page like everybody else? In my world, BizNik would work with me to improve my profile, drive more people to it, and share that ad revenue with me.... not try to charge me.

So I guess that I will not be going to BizNik any more, it's not really a decision I make, it's an organic thing.

I guess I'll just have to drive people to my i-page...

Wednesday, January 30, 2008

All that glitters...

A quick word about SPARQL....

John sent me this link to an InfoWorld article that discusses the changes that will happen once the promise of the Semantic Web becomes reality.

First, congratulations to everyone who worked on SPARQL. I have gleaned some understanding over the last few years of what it means to try to get agreement and drive ideas to a finished standards proposal... it's hard.

The title of this post sounds like I'm going to say bad things about SPARQL, but I'm not. SPARQL and the functionality that it will provide is very important and very valuable. I do think that it's important to put it in the context of the XDI and Higgins work that we are engaged in.

RDF and SPARQL will provide more available structured data that can be incorporated into the DataWeb. However, SPARQL only addresses a small part of the problems that I talk about on this blog. For example, SPARQL doesn't have identification, authentication and authorization built into its framework. I think this is a shame; we have seen time and again that building security into a protocol is far superior to 'bolting it on' or 'wrapping it around'.

SPARQL specifically leaves Update and Insert semantics 'out-of-scope'. There are lots of use cases for which this is fine, but there are also lots of use cases where you really need to push values back out.
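The read-only point is easy to illustrate: SELECT-style pattern matching over triples is straightforward to model, but there is no standard counterpart for pushing a value back in. A toy sketch, with triples and a matcher invented for illustration (None plays the role of a SPARQL variable):

```python
# Toy triple store; the data here is invented for illustration.
triples = [
    ("=joseph.smarr", "memberOf", "Plaxo"),
    ("=joseph.smarr", "knows", "=john.bradley"),
]

def select(pattern):
    """Match a (s, p, o) pattern; None acts like a SPARQL variable."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

print(select(("=joseph.smarr", None, None)))  # both statements match
# There is no standard "insert(...)" counterpart for writes; an XDI
# layer would have to define its own update semantics on top.
```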

So SPARQL is great... we will definitely build a standard plugin so that you can consume data available via SPARQL from XDI. We will probably even build a SPARQL query engine on top of our XDI engine so that any public data available from XDI can be accessed via SPARQL.

Sunday, January 20, 2008

Open Source Brain

Up till now I have had exclusive access to Steven Churchill's brilliant and clear thinking as we have been working together closely for years. Now you all have limited access too... Steve is now blogging. Check out his first post on a Simple Identity Model.

Sunday, January 13, 2008

I-Name news

Did you see this?

[Twitter] Joseph Smarr posted on Twitter
You can now log into Plaxo with an iName! I just attached =joseph.smarr. OpenIDDevCamp rocks, as do John Bradley and Michael Krelin! :)


It's great having John on the ooTao team... Thanks all of you!

Tuesday, January 08, 2008

Relationships are real

In my previous post I touched on the question of what a Map is. In our world today, computers tend to make the distinction between the 'real world' and the representation of the world 'fuzzy'.


If I have an interactive 'map' of my water system and I redirect the water flow from that map, which came first?... Is the physical system now simply a representation of the virtual model? Is it a physical 'memory' of the state that I changed on my computer, or is it the other way round? If the 'map' and the state of the valves in the water system are 'out of sync', which is right? My intention was to redirect the flow; therefore the 'map' is right and the water is flowing wrong. I need to fix the valve so that it correctly represents the map... or do I?


Conventionally one would assume that the map represents the physical state and that 'instructions' either successfully change that state or not. The map should be a representation of state, not the authority for it. The software has the capability to 'poll' the physical network's state so that if the state changes, the map can 'auto-correct' to current conditions. Depending on the completeness of the software, we just have to hope that the physical network never gets into a state that the map doesn't know how to represent.

But here's the fuzziness again. If the valve has a processor and a network connection (which it would have to have to respond to instructions), it can also 'poll' the system for what its current state should be and auto-correct. So at what point is the physical system just solid-state memory?
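The two directions of "auto-correct" can be sketched in a few lines. Which method you call is exactly the policy question the post is circling: is the map the representation or the authority? The class and state names here are invented for illustration.

```python
# Minimal sketch of map-vs-valve reconciliation: both sides can poll the
# other and "auto-correct". Which direction wins is a policy choice, not
# a technical one. All names here are invented.
class Valve:
    def __init__(self, state):
        self.state = state

class Map:
    def __init__(self, intended):
        self.intended = intended

    def poll_and_correct(self, valve):
        # Map as representation: adopt whatever the physical state is.
        self.intended = valve.state

    def push_intent(self, valve):
        # Map as authority: the physical system becomes "memory" of the map.
        valve.state = self.intended

valve = Valve("open")
m = Map("closed")

m.poll_and_correct(valve)
print(m.intended)    # 'open'   — the map corrected itself to reality

m.intended = "closed"
m.push_intent(valve)
print(valve.state)   # 'closed' — reality corrected itself to the map
```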

Another example I have been thinking about is gerrymandering. Does that districting map dictate where people vote or does it represent where they vote?

This makes my head hurt!!

All of this is not JUST mental masturbation; I'm trying to work out what comes first: a 'map' of the social graph that establishes our relationships and is 'portable', or some other manifestation of relationship that the social graph is a portable representation of.


Here's my conclusion...


Relationships are real. Directed relationship objects MUST be reified in the identity network. Maps of the social graph will show different aspects (attributes) of both types of top-level entity: entities AND relationships. Like interactive maps of the physical world, where you can layer utilities, streets, satellite pictures and geo-political attributes to communicate (make portable) the state of a given physical area, the 'social graph' is a map that communicates (makes portable) the state of some entities and their relationships.

An important quality of the 'portable social graph' is that each 'map' represents only a sub-section of reality. I would expect different people to have access to different 'maps'; I would expose different sub-sets of my entity and relationship data to different 'mapping authorities'.

So this leads me to the conclusion that before we can really address social graph portability we need a better understanding of what relationships are.

In systems that I have built that have reified the relationship object I have found the following qualities necessary...

  • Relationships are unidirectional and COMPLETELY controlled by the 'root' end of the arc. 
  • Relationships are no different from any other 'claim' that I make about you, totally unsubstantiated. (good mapping authorities MIGHT only show reciprocated or verified relationship claims)
  • Other people are ALWAYS interacting with one (or more) of my relationship objects NOT with my entity object.  (that's the point of reifying the relationship)
  • Relationship objects contain several different types of data...
  1. Data that I keep about you, that is mine, only mine and is never meant to be shared. Stuff like "this guy tells really bad jokes" or "it's not in his profile but I know his home phone number is XXX"
  2. Pointers to the data about me that I want you to have access to. (in my world, it is the relationship object that dereferences the pointers NOT the 'other' entity).
  3. Pointers to (and caches of) information that you have exposed to me about you (and sometimes others).
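The qualities above can be sketched as a small reified relationship object: unidirectional, controlled entirely by the root end, carrying the three kinds of data listed, and acting as the thing others interact with. All the names and the pointer format are invented for illustration.

```python
# Sketch of a reified, unidirectional relationship object with the three
# kinds of data the post lists. Names and pointer forms are invented.
class Relationship:
    def __init__(self, owner, target):
        self.owner = owner        # the 'root' end of the arc, in full control
        self.target = target      # the other end; an unsubstantiated claim
        self.private_notes = {}   # 1. mine, only mine, never shared
        self.shared_pointers = {} # 2. pointers to my data exposed to target
        self.cached_claims = {}   # 3. pointers to / caches of target's data

    def expose(self, key, pointer):
        self.shared_pointers[key] = pointer

    def dereference(self, key):
        # Others interact with the relationship object, not my entity
        # object: it is the relationship that dereferences the pointer.
        return self.shared_pointers.get(key)

rel = Relationship(owner="=andy", target="=john")
rel.private_notes["jokes"] = "really bad"         # never leaves my side
rel.expose("phone", "xri://=andy/(+phone.work)")  # invented pointer form

print(rel.dereference("phone"))  # the exposed pointer
print(rel.dereference("jokes"))  # None — private notes are not exposed
```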
XFN and FOAF are ways for me to expose a sub-set of my entity and relationships to PUBLIC mapping authorities. They are but a map of something a LOT more complex that needs to be given a lot more attention.

(of course xdi has all of this solved... if only you would all just use it :-) )