I think the last point, authenticated metadata, is the most interesting.
In fact, I think it's how web sites like Facebook should have been designed. Rather than "handing over" personal information, each site should receive binary blobs identified only by purpose (e.g. "E-mail address", "photo #3 of gallery 2"). The data could be a copy, an entry in some public registry, or a URL. Facebook can then decide when it wants to send this information to the web browser (e.g. when someone tries to view your photo gallery), and the only way to actually render the blob is with an appropriate decryption key on the client side. The encoding could include other trust factors, such as expiration dates that would make the images impossible to view even with a key.
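A toy sketch of that envelope idea, in Python with only the standard library. Everything here is invented for illustration: `seal`/`open_blob` are made-up names, and the XOR "keystream" is emphatically not a real cipher; anything real would use a vetted AEAD scheme.

```python
# Toy sketch of purpose-labelled, client-decryptable blobs (illustration only).
import hashlib
import hmac
import json
import time

def _keystream(key, n):
    # Derive n pseudo-random bytes from the key (NOT a secure cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(key, purpose, data, expires_at):
    # Encrypt the payload and bind purpose + expiry into a MACed envelope.
    body = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    envelope = {"purpose": purpose, "expires_at": expires_at, "body": body.hex()}
    envelope["mac"] = hmac.new(
        key, json.dumps(envelope, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return envelope

def open_blob(key, envelope, now=None):
    # Verify integrity and expiry before yielding plaintext.
    now = time.time() if now is None else now
    env = {k: v for k, v in envelope.items() if k != "mac"}
    expected = hmac.new(
        key, json.dumps(env, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, envelope["mac"]):
        raise ValueError("tampered envelope")
    if now >= envelope["expires_at"]:
        raise ValueError("blob expired")
    body = bytes.fromhex(envelope["body"])
    return bytes(a ^ b for a, b in zip(body, _keystream(key, len(body))))
```

Note that the expiry here is enforced by the verifying code, not by cryptography; making a blob genuinely unreadable after a deadline with a key in hand would need something stronger, like time-scoped keys that a registry deletes.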
I have cooked up something like that in-house, using a faux "Virtual Private Database" implemented in the application layer. You can look at Postgres's Veil project or Oracle's VPD (Virtual Private Database) feature to see record- and column-level fine-grained authorization. My system implements a flat RBAC system in Lisp, and I will be implementing OAuth as the inter-app authentication module. That way a user of our site can grant fine-grained access to an asset of theirs, with the ability to grant read/write/delete permissions and expiration by time or by number of times accessed.
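The grant model described (read/write/delete plus expiry by time or by access count) can be sketched roughly like this. The Python below is a hypothetical stand-in for the commenter's Lisp system, with all names invented:

```python
# Sketch of flat per-asset grants: a permission set plus optional expiry
# by wall-clock deadline or by number of accesses.
import time
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Grant:
    grantee: str
    perms: Set[str]                     # subset of {"read", "write", "delete"}
    expires_at: Optional[float] = None  # absolute deadline, or None
    max_uses: Optional[int] = None      # access-count limit, or None
    uses: int = 0

    def allows(self, perm, now=None):
        now = time.time() if now is None else now
        if perm not in self.perms:
            return False
        if self.expires_at is not None and now >= self.expires_at:
            return False
        if self.max_uses is not None and self.uses >= self.max_uses:
            return False
        self.uses += 1  # only successful accesses count toward expiry
        return True

# (asset, grantee) -> Grant; a real system would persist this.
grants = {("photo-3", "bob"): Grant("bob", {"read"}, max_uses=2)}

def check(asset, user, perm):
    g = grants.get((asset, user))
    return g is not None and g.allows(perm)
```

Here "bob" can read "photo-3" exactly twice; a third read, or any write, is denied.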
Back in the day you could finger Linus Torvalds to check for the most recent Linux kernel version:
"8. Keeping track of current releases
Important new releases, programs, and ports are usually announced in comp.os.linux.announce. finger torvalds@klaava.helsinki.fi or finger @linux.cs.helsinki.fi to get some information about the current kernel."
$ finger linux@kernel.org
[kernel.org]
Trying 149.20.20.133...
finger: connect: Connection refused
Trying 204.152.191.37...
The latest stable version of the Linux kernel is: 2.6.30.4
The latest prepatch for the stable Linux kernel tree is: 2.6.31-rc6
The latest 2.4 version of the Linux kernel is: 2.4.37.5
The latest 2.2 version of the Linux kernel is: 2.2.26
The latest prepatch for the 2.2 Linux kernel tree is: 2.2.27-rc2
The latest -mm patch to the stable Linux kernels is: 2.6.28-rc2-mm1
As someone just rolling out an internal XML service, I'd say it's mainly because JSON hasn't caught on with the corporate developers. Try to bring it up in a conference call without having half of the parties misunderstand it as "A guy named Jason will do the data feed part of the project for us".
XML is probably supported on more platforms, and certainly has better support for transforming into other formats (there are some analogs to XSLT for JSON, but they're nowhere near as ubiquitous as XSLT processors).
"Supported on more platforms" is relatively bogus when the libraries are open source, and I'm sure that all languages and platforms currently in use, even enterprise ones, have JSON parsers. I'd venture to say that one needs the ability to transform XML into other formats with tools like XSLT precisely because XML is often so hard to work with. I don't think this limitation is a serious problem for JSON: the format is so simple, and there are fewer serious ambiguities about how things should be serialized, since JSON has no concept of child nodes vs. attributes (as one example).
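The attribute-vs-child ambiguity is easy to demonstrate with the Python standard library: the same record admits (at least) two equally valid XML shapes, and consumers must know which one the producer chose, while the JSON mapping is direct.

```python
import json
import xml.etree.ElementTree as ET

# Two valid XML encodings of the same record:
as_attributes = ET.fromstring('<user name="ada" id="7"/>')
as_children = ET.fromstring('<user><name>ada</name><id>7</id></user>')

name_from_attr = as_attributes.get("name")      # attribute access
name_from_child = as_children.findtext("name")  # child-element access

# JSON has no attribute/element split, so there is one obvious shape:
doc = json.loads('{"name": "ada", "id": 7}')
name_from_json = doc["name"]
```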
Not sure how it applies to this specific problem, but comparing JSON and XML in the general case is like comparing a Prius and an M1 Abrams. Obvious examples: XML has multiple ways of defining validation (in a way even your text editor will understand). XPath. XSLT.
As thwarted points out, the entourage of XThings that follow XML around only exist to prop up the inherently awkward XML itself, whereas JSON, being more or less a straightforward serialization format for the data structures we all know and love, has no need for these things. To refine your analogy, I would say XML is to JSON as an M1 Abrams is to a sensible foreign policy of non-interference.
Amherst's webserver has had finger symlinked into cgi-bin since I was there. In the olden days, when Amherst's on-campus social network still exported a finger interface (it grew out of .plans on a VAX), you could actually use it to read world-readable .plans off the web. It doesn't seem to work anymore though; my guess is that one too many people got fingered by someone who didn't like them and forwarded incriminating stuff to their boss or something.
It's a trivial BASH one-liner to actually expose finger over the web: drop 'finger $QUERY_STRING' into your cgi-bin.
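Fleshed out slightly (a CGI script has to emit a header block before its body), that one-liner becomes something like the sketch below. The usual caveat applies: `$QUERY_STRING` arrives straight from the client, so quote it and validate its characters before using this anywhere real.

```shell
#!/bin/bash
# finger.cgi -- drop into cgi-bin and make executable.
# The query string is the name to finger, e.g. /cgi-bin/finger.cgi?torvalds
echo "Content-Type: text/plain"
echo ""
finger "$QUERY_STRING"
```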
I'd love to see this up and running, but I'm trying to wrap my head around the adoption process. For example Twitter recently shut down their API for finding users by email address citing spam concerns.
What's in it for the people running the servers who'll need to support this?
This is decentralized. Someone could try to centralize it (see: the large number of large OpenID providers that aren't consumers), but starting out with something that decentralized means it's an uphill battle for the centralizers to do anything evil with the centralization. Who deploys it and who doesn't will depend on who decides it's a net benefit.
How about associating each individual with a unique number and creating a universal identity database controlled by individuals (to protect private data) and not by any corporation?
I am sorry that this comment was pulled down. I posted in earnest because the subject is interesting to me. I would like to ask why suggesting a personal ID number, controlled by the person rather than by the corporate owners of the ID database, is considered a bad comment. In fact, this article just posted to HN a few hours ago http://blog.modernmechanix.com/2008/06/05/the-computer-data-... suggests that in the US, the social security number became a unique identifier in "data banks" starting in the 60s. So, to me it appears legitimate to discuss whether the social security number, email, or a new personal ID number owned by the person ought to be chosen as the new identity standard for the modern post-internet individual. Please correct me if I don't understand the issue here. Thanks.