WebFinger (code.google.com)
56 points by fogus on Aug 15, 2009 | hide | past | favorite | 32 comments


I think the last point, authenticated metadata, is the most interesting.

In fact, I think it's how web sites like Facebook should have been designed. Rather than "handing over" personal information, each site should receive binary blobs identified only by purpose (e.g. "E-mail address", "photo #3 of gallery 2"). The data could be a copy, an entry in some public registry, or a URL. Facebook can then decide when it wants to send this information to the web browser (e.g. when someone tries to view your photo gallery), and the only way to actually render the blob is with an appropriate decryption key on the client side. The encoding could include other trust factors, such as expiration dates that would make the images impossible to view even with a key.
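A minimal sketch of the "authenticated metadata" half of this idea: a purpose-tagged blob with an expiry, signed with an HMAC. The shared secret and field names here are made up for illustration; actually encrypting the payload (rather than just authenticating it) would work analogously.

```python
# Hypothetical sketch: purpose-tagged, expiring, HMAC-authenticated blobs.
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # hypothetical key material

def sign_blob(purpose, payload, ttl_seconds):
    body = json.dumps({
        "purpose": purpose,          # e.g. "email", "photo #3 of gallery 2"
        "payload": payload,
        "expires": int(time.time()) + ttl_seconds,
    }, sort_keys=True).encode()
    mac = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.b64encode(body).decode(), mac

def verify_blob(blob_b64, mac):
    body = base64.b64decode(blob_b64)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        return None  # tampered with
    data = json.loads(body)
    if data["expires"] < time.time():
        return None  # expired: useless even to a holder of the key
    return data["payload"]

blob, mac = sign_blob("email", "user@example.com", ttl_seconds=3600)
print(verify_blob(blob, mac))  # user@example.com
```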


I have cooked up something like that in house, using a faux "Virtual Private Database" implemented in the application layer. You can look at Postgres's Veil project or Oracle's VPD feature to see record- and column-level fine-grained authorization. My system implements a flat RBAC system in Lisp, and I will be implementing OAuth as the inter-app authentication module. That way a user of our site can grant fine-grained access to an asset of theirs, with the ability to grant read/write/delete permissions, and expiration by time or by number of accesses.
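The grant model described above can be sketched as a flat table of (user, asset) permissions with expiry by time or by access count. The parent's system is in Lisp; this is a hypothetical Python sketch with made-up names, not their implementation:

```python
# Hypothetical flat RBAC grant table with time- and usage-based expiry.
import time

grants = {}  # (user, asset) -> {"actions", "expires_at", "uses_left"}

def grant(user, asset, actions, expires_at=None, max_uses=None):
    grants[(user, asset)] = {
        "actions": set(actions),
        "expires_at": expires_at,
        "uses_left": max_uses,
    }

def allowed(user, asset, action):
    g = grants.get((user, asset))
    if g is None or action not in g["actions"]:
        return False
    if g["expires_at"] is not None and time.time() > g["expires_at"]:
        return False
    if g["uses_left"] is not None:
        if g["uses_left"] <= 0:
            return False
        g["uses_left"] -= 1  # expiration by times accessed
    return True

grant("alice", "photo-3", {"read"}, max_uses=2)
print(allowed("alice", "photo-3", "read"))   # True (one use left)
print(allowed("alice", "photo-3", "write"))  # False (never granted)
```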

An elaborate piece of pain though.


Back in the day you could finger Linus Torvalds to check for the most recent Linux kernel version:

"8. Keeping track of current releases

Important new releases, programs, and ports are usually announced in comp.os.linux.announce. finger torvalds@klaava.helsinki.fi or finger @linux.cs.helsinki.fi to get some information about the current kernel."


     $ finger linux@kernel.org
    [kernel.org]
    Trying 149.20.20.133...
    finger: connect: Connection refused
    Trying 204.152.191.37...
    The latest stable version of the Linux kernel is:           2.6.30.4
    The latest prepatch for the stable Linux kernel tree is:    2.6.31-rc6
    The latest 2.4 version of the Linux kernel is:              2.4.37.5
    The latest 2.2 version of the Linux kernel is:              2.2.26
    The latest prepatch for the 2.2 Linux kernel tree is:       2.2.27-rc2
    The latest -mm patch to the stable Linux kernels is:        2.6.28-rc2-mm1


This would be much more exciting if it was JSON. I'm not sure why new web-related standards still use XML.
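For what it's worth, when WebFinger was eventually standardized (RFC 7033, years after this thread), it did settle on JSON served from a well-known path. A sketch of that style of lookup; the host and account are illustrative:

```python
# Sketch of an RFC 7033-style WebFinger lookup (JSON, not the original XRD).
import json
import urllib.parse
import urllib.request

def webfinger_url(account, host):
    """Build the well-known WebFinger query URL for an acct: resource."""
    query = urllib.parse.urlencode({"resource": "acct:" + account})
    return "https://" + host + "/.well-known/webfinger?" + query

def webfinger(account, host):
    """Fetch and parse the JSON Resource Descriptor (needs a live host)."""
    with urllib.request.urlopen(webfinger_url(account, host)) as resp:
        return json.load(resp)

print(webfinger_url("user@example.com", "example.com"))
```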


As someone just rolling out an internal XML service, it's mainly because JSON hasn't caught on with the corporate developers. Try to bring it up in a conference call without having half of the parties misunderstand it as "A guy named Jason will do the data feed part of the project for us".


He, hehe, hehehe. I've been on that conference call.


XML is probably supported on more platforms, and certainly has better support for transforming into other formats (there are some analogs to XSLT for JSON, but they're nowhere near as ubiquitous as XSLT processors).


"supported on more platforms" is relatively bogus when the libraries are open source. And I'm sure that all currently in use languages and platforms, even enterprise ones, have JSON parsers. I'd venture to say that one needs the ability to transform XML into other formats using tools like XSLT because XML is often so hard to work with, and I don't think this limitation is necessarily a serious problem with JSON, since the format is so simple and there are fewer serious ambiguities as to how things should be serialized in JSON, not having the concept of child nodes vs attributes (as one example).


Not sure how it applies to this specific problem, but comparing JSON and XML in the general case is like comparing a Prius and an M1 Abrams. Obvious examples: XML has multiple ways of defining validation (in a way even your text editor will understand). XPath. XSLT.


As thwarted points out, the entourage of XThings that follow XML around only exist to prop up the inherently awkward XML itself, whereas JSON, being more or less a straightforward serialization format for the data structures we all know and love, has no need for these things. To refine your analogy, I would say XML is to JSON as an M1 Abrams is to a sensible foreign policy of non-interference.


Why do we make things so hard?

    GET mailto:username@example.com HTTP/1.1
    Host: example.com

    HTTP/1.1 301 Moved Permanently
    Location: http://example.com/users/username

(forgive the HTTP mistakes)


Incidentally, you would look up an SRV record for example.com to find an HTTP server to do the lookup on. Or even just use example.com.


that will be HTTP/1.3


No. It's a valid URL. Why would we require a new protocol version?


Umm, because that valid URL uses the "mailto" scheme, which doesn't support "GET" requests as you described?


Actually, it uses HTTP with "mailto" as the username and "username" as the password.


No, the user:pass@ thing is browser stuff. It's not sent as part of the request line.


That's not part of HTTP 1.1, just a popular convention for URLs carried over from Telnet/FTP, see: http://www.w3.org/Addressing/rfc1738.txt


Where, specifically, in RFC 2616 does the spec say you must use an http URI?

5.1.2 just specifies that it's a Request-URI and does not further define it.

3.2 refers you to RFC 2396, the URI spec, which includes mailto.

It's a valid URI and you are allowed to ask for it.
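For what it's worth, a stock URI parser happily splits the mailto form; nothing in it treats "mailto" as a username:

```python
# urlsplit on a mailto URI: the scheme is mailto, there is no user:pass@host.
from urllib.parse import urlsplit

parts = urlsplit("mailto:username@example.com")
print(parts.scheme)  # mailto
print(parts.netloc)  # '' (empty: no authority component at all)
print(parts.path)    # username@example.com
```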


5.1.2 just specifies that it's a Request-URI and does not further define it.

I'm not sure if I understand. The one I'm looking at further defines it as:

    Request-URI    = "*" | absoluteURI | abs_path | authority
Edit: ah, of which, absoluteURI refers to 3.2.1, which references RFC2396, which includes mailto.

Still, doesn't include user:password@domain, though :-P

Src: http://www.ietf.org/rfc/rfc2616.txt


user:password@domain is a browser convention, not part of a URL, like I said.


Amherst's webserver has had finger symlinked into cgi-bin since I was there. In the olden days, when Amherst's on-campus social network still exported a finger interface (it grew out of .plans on a VAX), you could actually use it to read world-readable .plans off the web. It doesn't seem to work anymore though; my guess is that one too many people got fingered by someone who didn't like them and forwarded incriminating stuff to their boss or something.

It's a trivial BASH one-liner to actually expose finger over the web: drop a script containing 'echo "Content-Type: text/plain"; echo; finger "$QUERY_STRING"' into your cgi-bin (quote the variable; handing a raw query string to the shell is an injection waiting to happen).


I'd love to see this up and running, but I'm trying to wrap my head around the adoption process. For example, Twitter recently shut down their API for finding users by email address, citing spam concerns.

What's in it for the people running the servers who'll need to support this?


This is decentralized. Someone could try to centralize it (see: the large number of large OpenID providers that aren't consumers), but starting out with something that decentralized means it's an uphill battle for the centralizers to do anything evil with the centralization. Who deploys it and who doesn't will depend on who decides it's a net benefit.


I'm sure the guys at Google have thought about this, but how will this affect the spam situation?

Spam didn't exist when the original finger protocol was implemented.


That was my first thought too.

Probably if it wasn't for spam, we wouldn't need a lot of the services we use nowadays (mainly as filters).


I had a similar idea a year ago so I registered finget.com . Never got to actually do any work on it.


Yeah, and WebIRC = Google Wave.


How about associating each individual with a unique number and creating a universal identity database controlled by individuals (to protect private data) and not by any corporation?

http://humanidproject.blogspot.com/


I am sorry that this comment was pulled down. I posted in earnest because the subject is interesting to me. I would like to ask why suggesting a personal ID number controlled by the person, rather than by the corporate owners of the ID database, makes for a bad comment. In fact, an article just posted to HN a few hours ago, http://blog.modernmechanix.com/2008/06/05/the-computer-data-... , suggests that in the US the Social Security number became a unique identifier in "data banks" starting in the 60s. So it appears legitimate to me to discuss whether the Social Security number, an email address, or a new personal ID number owned by the person ought to be chosen as the identity standard for the modern post-internet individual. Please correct me if I don't understand the issue here. Thanks.


Stroke of genius



