
So if this is, as some comments are suggesting, an indication that Diaspora is essentially dead… what can we learn from it? What did they do right, what did they do wrong? Is "distributed social networks" a fundamentally flawed idea, or was their implementation flawed?

I mean, they certainly did something right, given that G+ swiped liberally from Diaspora's design.

(What would a distributed social network that was dead easy to host look like? Imagine something like Diaspora that lives on your phone instead of on a server, for instance. Right now D* is a giant pile of Ruby, which means that interested amateurs are pretty much not gonna be able to play with it.)



I think most of the lessons are non-technical, which makes them all the more valuable.

1. Diaspora had all the characteristics of vaporware for the longest time - I remember they were #1 in a HuffPo 'Top Ten Alternatives to Facebook' lineup in May 2010[1].

2. The source code released that fall seemed to fail Spolsky's second rule of being able to build in a single step, because I remember checking out the repo and then realizing that I'd need to dedicate more than ten minutes to figuring everything out, making a mental note to do that at some point, and promptly forgetting about it entirely.

3. The officially-hosted version was a victim of poor timing - the first time I saw it was last summer (2011). Aside from the fact that this was more than a year after I had already seen it in a mainstream news source, this was immediately after the release of Google+ and at the height of the 'nymwars'. By that point, Google+ had already gained its reputation as a massive, overhyped soon-to-be-likely-failure[2]. Diaspora's UI was nearly identical to Google+, so it seemed less like an alternative to Facebook and more of an open-source alternative of an alternative to Facebook.

I don't think the technical spec of Diaspora as a product is broken at the core (though that doesn't mean that something like what you envision couldn't also be valuable as well). I do think that the end goal is worthwhile, and let's be clear: this does not kill Diaspora - it has every chance to be as much alive as Netscape is, just in a different form (ie, Mozilla/Firefox). That's the beauty of open-source projects, particularly community-driven ones.

[1] The lineup is of course indicative of nothing other than the fact that Diaspora was already hyped-up enough to be included in such a lineup!

[2] Whether or not you agree with that conclusion now, it was certainly a popular sentiment at the time.


Someone said the other day, in the submission about Tent, that they should have published a protocol first. It should have been designed before they wrote any code; app.net at least seems to have this in its favour.


It seems like you're flagging the results of bad development. The question we need to ask ourselves is what leads to these unfortunate results. The decision to use a two-stage build process might not be bad when a project is in that kind of shape. What leads a project to get into that kind of shape? Well, that is the question.

At the same time, I remember the first code release of Diaspora was before G+, it just wasn't usable code. Any project that takes a long time to create is going to have "timing problems". If Diaspora had smoothly working code right now, timing probably wouldn't be an issue.


Inexperience.

It was VERY clear that these were just a bunch of college kids who had no decent real-world experience in software development. With a project as small as Diaspora, failing to get the fundamentals sorted from day one is a bad sign IMHO.


What I'd take as lessons:

1) It's better to start with a protocol and build an application than just launch into application building.

2) Privacy is the wrong motivation for wanting to replace Facebook. A distributed social networking application is certainly possible (one might argue the web itself is an example). A secure, distributed sharing scheme makes a social networking app exponentially harder to develop (Project Xanadu tried to create a secure "transclusion" scheme. They had a ten-year head start on the web and still failed. Distributed, revocable sharing is essentially a pathological problem).


No, privacy is really required for something that aims to compete with Facebook, simply because Facebook can't compete on privacy.


Assuming this is true, how much is control rights vs the need for de-novo architecture?


I'm not sure if I understood the question, but if you mean how user control is related to the decentralized structure, then the answer is in the question. Decentralization essentially prevents one entity from being in control, and in the edge case, control is given to a user who runs a personal pod for a single Diaspora account.


About the notion that the data needs to be locally stored/controlled/at the edge. What about the idea of encryption and/or persistence management? (user control)

Does this provide more flexibility in architecture?

Edited: for clarity.


I think encryption is good to have, and the D* devs had it in some further plans. But it's probably not the first priority, until the federation protocol is rock solid.


Of course it isn't a fundamentally flawed idea. At least, from a technical perspective.

Email is a distributed social network. It just doesn't define very many types of actions or objects.

You can create "messages" and then you can "send" them. The user sees the guts of the message -- the "from", "to", "subject" and "body" portions, mainly.

As far as I know, nobody has extended the standard email formats and protocols to support new message types that have caught on lately. Actions like "Post photo", "Tag photo", "Take a special action defined by this third party which should be displayed like so when rendered in a 'feed'".

This seems like mostly a matter of agreeing on some new types of messages and some suggested ways of handling and displaying them.

I'm not saying this has to be built on top of existing email protocols -- it's really just independent sites exchanging messages of some sort. But email is a good working example to point at today.
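As a purely hypothetical illustration of the idea above, a "post photo" action could ride on ordinary email message syntax with an extension header carrying the action type and a machine-readable body. The header name and JSON fields here are invented for the sketch; this is not any existing standard:

```python
import json
from email.message import EmailMessage

# Hypothetical: encode a social "action" as an ordinary email message,
# using an invented X- extension header so existing mail infrastructure
# could still carry and display it as plain mail.
def make_social_action(sender, recipient, action, payload):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = f"[social] {action}"
    msg["X-Social-Action"] = action          # invented header name
    msg.set_content(json.dumps(payload))     # machine-readable body
    return msg

msg = make_social_action(
    "alice@pod-a.example", "bob@pod-b.example",
    "post-photo", {"url": "https://pod-a.example/p/1", "caption": "sunset"},
)
```

A client that doesn't understand `X-Social-Action` would just show the body as a plain message, which is roughly the graceful-degradation property the comment is gesturing at.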

The hard part is getting adoption, and getting different implementations to interoperate. Once you dive into this, you find a huge soup of different projects working on different pieces of the puzzle. Some of these projects have a future, others do not. It's going to take some time for this to shake out, and for the developers to resolve issues with bringing this stuff together.

A lot of this work has been going on without a lot of fanfare. See for example http://indiewebcamp.com/ or http://www.internetidentityworkshop.com/

Naturally, it couldn't hurt to bring a lot more help into these efforts.


> As far as I know, nobody has extended the standard email formats and protocols to support new message types that have caught on lately. Actions like "Post photo", "Tag photo", "Take a special action defined by this third party which should be displayed like so when rendered in a 'feed'".

This is a very cool and exciting idea but my feeling is that email clients are hard to code because they're full desktop applications. Even standard email clients are hard to get right - building a social network version (with encryption) would be quite a challenge.

The rewards could be nice though - such as being able to use mailinator or mixmaster for anonymity.


Email clients have been made for every platform imaginable. Note Zawinski's Law:

http://catb.org/jargon/html/Z/Zawinskis-Law.html

Gmail is an email client. Facebook is now, too. So are the default 'mail' applications for Android and iOS. And let's not leave out Emacs :)

The challenge of email (which comes with its advantage of being ubiquitous today) is its plethora of standards documents one must read and respect if one wants to make a serious go of developing software that will work well with most of its corner cases.


I'd hazard a guess that none of the examples you gave were easy to code :-) Also, I'd argue that Gmail and Facebook type clients are non-starters, as it's hard to do meaningful encryption in JavaScript since you always must trust the server.

But if you disagree, please write one - I'd be a very willing beta tester and would even be keen to help in a limited way.


I'm not saying it'd be easy to just sit down and code one weekend! But really, are we expecting the "federated social web" people are talking about to be much easier to code than email software? We want it to do much more stuff, so it'll probably be harder.

Also, I think it's implicit that you trust your server. It's other people's servers you've got to worry about.


I just thought of another potential challenge - SMTP limits. I think Gmail limits you to sending 40 messages per day for instance. This sucks if you have >100 friends.


Those limits are imposed by existing email software. The real underlying challenge you are getting at is dealing with spam. And that would be just as much a challenge with any distributed social network too, for more or less the same reasons.

Email did this badly from the start, because it started in a much more trusting world of people who did not spam one another. Authentication features were added later, but there was no requirement to use them, which limits their effectiveness.

Perhaps for a project starting fresh now, this can be handled better?
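One way a fresh protocol could "handle this better" is to make message authentication mandatory from day one rather than an optional retrofit. A minimal stdlib-only sketch; the shared-secret scheme, field names, and addresses are invented for illustration (a real federation protocol would use public-key signatures and proper key exchange):

```python
import hashlib
import hmac
import json

SECRET = b"per-peer shared secret"  # placeholder; real peers would exchange keys properly

def sign(message: dict) -> dict:
    """Wrap a message in an envelope with a mandatory signature."""
    body = json.dumps(message, sort_keys=True).encode()
    return {"body": message,
            "sig": hmac.new(SECRET, body, hashlib.sha256).hexdigest()}

def verify(envelope: dict) -> bool:
    """Reject unsigned or badly-signed messages outright --
    no optional-authentication escape hatch like legacy SMTP."""
    body = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(envelope["sig"], expected)

env = sign({"from": "alice@pod-a.example", "text": "hi"})
```

The design point is that `verify` runs on every inbound message with no "unauthenticated but accepted anyway" path, which is exactly what email's bolted-on SPF/DKIM story lacks.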


I should add that the trickiest bits aren't the apps that could "run on top of" a distributed social network like photo tagging or games. It's setting up the social graph those apps use. How do you add a friend? How do you accept or reject a friend request? What sort of authentication and other security features should everyone understand and support? There are proposals for solving these issues but getting it a) secure and b) widely agreed upon and supported would be a Good Thing.
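The friend-request handshake described above is, at its core, a small state machine, and the hard part in a federated setting is getting everyone to agree on the states and who may trigger each transition. A hypothetical sketch (class and state names invented, not from any real protocol):

```python
# Hypothetical friend-request state machine. Each edge in the social
# graph is keyed by (requester, target) and moves through three states.
PENDING, ACCEPTED, REJECTED = "pending", "accepted", "rejected"

class SocialGraph:
    def __init__(self):
        self.edges = {}  # (requester, target) -> state

    def request(self, requester, target):
        # Only the requester may open an edge; it starts pending.
        self.edges[(requester, target)] = PENDING

    def respond(self, requester, target, accept):
        # Only a pending request may be answered, and only by the target.
        if self.edges.get((requester, target)) != PENDING:
            raise ValueError("no pending request")
        self.edges[(requester, target)] = ACCEPTED if accept else REJECTED

g = SocialGraph()
g.request("alice", "bob")
g.respond("alice", "bob", accept=True)
```

Trivial in one process; the open problem the comment names is making every independent implementation enforce the same transitions and authenticate who is allowed to perform them.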


Actually, there were some fundamental flaws in the particular version of a distributed social network that Diaspora started with. They imagined a network of web servers that would each host a piece of Diaspora and that each of these would provide secure sharing. That system pretty much requires trusting all the server providers, because the decryption happens on the servers.


Just saying the idea itself is solid. I never really dove into Diaspora so I have no idea whether their approach was good.


The things they did wrong were:

- During the initial 6-12 months after the Kickstarter there was little or no communication about what they were doing. Blog posts were very scarce. If you want to maintain interest in a project it's a good idea to have some level of communication.

- They made initial software choices which meant that the initial release wouldn't run on the target platform (ARM based servers)

- Once Diaspora got going it wasn't easy to find pods other than joindiaspora

- The Diaspora team were unresponsive to patch submissions and this meant that features which users wanted didn't get implemented.

- There were issues with the protocol being undocumented


>The Diaspora team were unresponsive to patch submissions and this meant that features which users wanted didn't get implemented.

So you've read pistos' blog. Have you looked at the contributions, though? https://github.com/diaspora/diaspora/graphs/contributors

No open source project accepts every pull request, but Diaspora has accepted quite a lot.


I was very jazzed about it, but didn't donate, back when it was being kickstarted. Whether or not they were intending to, the publicity quickly took on a flavor of "these wunderkinds are going to save us from FB!" The day they first released their code to the public, I read through some of it and was singularly unimpressed, and basically gave up on the project because it looked like their first Rails app, and seemed very insecure. I don't know how many developers lost confidence in them that day, but I certainly did. The punk rock ethic is great for getting things done, but some things are big projects that need to be undertaken with a degree of care and commitment, and those kinds of projects are not conducive environments for all-cylinders-firing wunderkinds.

I think a better takeaway model is: produce a beta or a demo that you bring to Kickstarter, and ask for double or quadruple the money you think you'll need (because you will). Either open-source the code last, after it's fully developed, or make sure the code you bring to the demo is secure and unembarrassing. Even though I disagree with the idea, I think Light Table is hitting most of the notes Diaspora missed: Chris showed up with a compelling demo, raised way more money than a kid would deem necessary (and then more still with YC), and at least originally was going to make the code available last. He also updates frequently, and people are using the software today, which never really seemed to happen with Diaspora.


Why would it mean the project is "dead"?

Do you mean that community projects are by nature "dead" projects?

What does "dead" mean, exactly?

My OS is a community project. OSX/iOS are built from community projects. Lots of scripting languages are community projects. Mozilla is a community project. Wikipedia is a community project. I could go on.

I realise I may think a bit differently than many programmers, but I care less about how much a particular chunk of code is actively changing than whether it works really well over the long term (simple, stable, reliable, secure). I like "timeless" software that quietly continues to work for many years, remaining relatively unchanged. In my experience, well-engineered software like that is often immune from so-called "bit rot", because it was designed correctly, with minimised complexity and maximised portability as top priorities, from the beginning.

From a design and implementation perspective, there are no real impediments to a decentralised social network that cannot be overcome. However, first you have to decide what you mean by "social network". Does it have to be a clone of FB or G+, save for the centralisation element? Or does your definition allow some changes to their approach? For example, what if the network was private? What if there were no ads? What if it was comprised of lots of smaller networks of maybe 100-200 people (like your "friends" on FB) instead of being one massive, public image gallery/chatbox? What if it didn't require the web, as FB does? What if it was application-agnostic?

What do you demand from a "social network"?

Does it have to be a FB/G+ clone?

Anything is possible, so to speak. But not everything is necessarily ready to be received based solely on technical merit. How much marketing and PR is needed?


If it were dead it would mean few people are using the software, and especially that few new people were adopting it. It would also mean few or no people are actively developing or maintaining the software. A project that has few users but a lot of development activity is definitely still alive because it is always possible for new features to eventually turn into adoption (Firefox).

Presently, commit activity is actually pretty decent (https://github.com/diaspora/diaspora/commits/master). If the project is dead, no one has told the people submitting PRs or the people merging them.

Most of that stuff is pretty small, but I think this announcement could actually be a step forward. Since February, it seems to me, there has been a shadow hanging over this project: a promised overhaul of the federation code. I do not know if that will happen now, but there is less risk that it will happen outside the view of those who want to participate in setting the direction. Most people, of course, will just write articles and comments and mailing list posts and never submit any code (like me!). They may continue to complain that they have a limited voice in the direction of the project.

The project still has to have active committers who are a subset of the interested "stakeholders". It doesn't matter if they get a little paycheck from D* Inc or not - it's still only going to include some people, which will be those people who have a record of submitting acceptable code. This is true of every single open source project I know of.


> Do you mean that community projects are by nature "dead" projects?

No, but they're not necessarily live projects, either. The key question is how much commit activity is coming from the people who are walking away from the project. If they were doing all the committing, and they're leaving, it's dead. If on the other hand there's an active community of committers outside the original developers, it's alive.


So there is an assumption that the number of commits means something? I'm just not quite sure what that something is.

What if the software as released is "rock solid"? That is, it's so simple, effective and reliable that it doesn't need to be changed, except for bug fixes?

What if the software is merely a "platform"? (And not only in the marketing sense of that word.) That is, the platform only "does one thing and does it well", and does not generally need to be "actively" developed (no commits except bug fixes), but... of course people can easily build things on top of it. For example, Ruby or Python programmers can do whatever they want. Total freedom. We give them the ability to create a connection to a social network they choose, and they can send/receive over it to/from other members as they wish. We do not impose rules on that or try to manage it in any way. We only provide the platform. The platform is application-agnostic.

The "platform" basically stays the same. It does what it's supposed to do, create networks, and that's all. If we measure by number of commits, one could say the development of the "platform" is "dead".

tl;dr what if someone releases a _platform_ that developers can build on, but number of commits to the _platform_ remains near zero? Because (apart from any bugs found) "it just works."

To my knowledge, Diaspora is closely intertwined with Ruby and web development. This makes it difficult to separate the "platform" from lots and lots of Ruby or other scripting language programming, mainly aimed at webpages, and people changing UI stuff to their liking. And personal preferences can vary greatly. (And there's more to the internet than just webpages. FB has to be webpages because it relies on the web, specifically one person's website: Zuckerberg's. Another social network (or network of networks) might not be so limited.) Does the dynamic, highly personalised aspect of viewing webpages have to be part of the _platform_? Can we separate the personalisation from the basic functional element of the platform? (spawning decentralised networks)


A project that isn't being actively developed is dead. The idea of a "finished" program that does everything it needs to, one "so simple, effective and reliable that it doesn't need to be changed, except for bug fixes" is an attractive one but it's a myth; there has never been such a program, and I doubt there ever will be.


Hmmm. Very interesting perspective.

I'm currently using a number of "dead" programs.

In fact, most of my kernel is "dead". There is code in there that hasn't been changed in over 30 years!

I'm even communicating over a "dead" protocol. When were the last changes to TCP?

I'd even guess you are using some "dead" software yourself. Low level stuff that no one has the desire nor energy to modify.

(To be clear, I am not suggesting that we should not try to improve programs, continually. I'm only pointing out that perhaps sometimes code works for what it's supposed to do, no one has come forward with something "better" and hence the code does not need to be fiddled with endlessly in the absence of serious bugs.)


>I'm currently using a number of "dead" programs.

Then I hope you have a plan in place for when, not if, they break.

>In fact, most of my kernel is "dead". There is code in there that hasn't been changed in over 30 years!

If the kernel has people who take responsibility for it, and make changes to it, then it's not dead.

>I'm even communicating over a "dead" protocol. When were the last changes to TCP?

The fast open draft was published in July.

>(To be clear, I am not suggesting that we should not try to improve programs, continually. I'm only pointing out that perhaps sometimes code works for what it's supposed to do, no one has come forward with something "better" and hence the code does not need to be fiddled with endlessly in the absence of serious bugs.)

Sure, but I really don't think that's true. Possibly because the lower-level layers are still evolving - code written in low-level languages more than about 10 years ago (before the AMD64 architecture existed) probably won't work correctly on a modern system, and most high level languages have had incompatible changes over the same time period (I know Java's supposed to be an exception to this - allegedly you can still run the original java demos from 1994 on a current JVM). The fact is I've tried and failed to run several programs from >5 years ago, but I've yet to find one that still works without having been maintained.


Still waiting for Ethernet to "break". IP as well. UDP too. And netcat. It's been like 20 years. I'm still waiting.

I also wasn't aware that RFC drafts were the same as "commits".

Originally we were talking about "number of commits". A low number of commits means "dead", or so they say. Are you in agreement with that or not? If so, what does "dead" mean?

Now you are saying if software is maintained (fixing bugs) it's not dead. Who said it was? I certainly didn't. I even went so far as to clarify that.

Let's assume some software is maintained. There's someone to take responsibility, as you have suggested. But there are no commits, except to fix bugs.

If there are no bugs to fix (maybe one every 15 years), then there are no commits. But if the _number of commits_ tells you whether a project is "live" or "dead", then how can you call this a "live" project if it has almost no commit activity?

My original comment was about the idea of "number of commits"-->"dead" as carrying some deeper meaning, e.g. about the quality of the software.

I like software that works and keeps on working. I really do not care that much if people are committing to it or not. In fact, I'd prefer they didn't because in many cases they only succeed in breaking it or in creating new weaknesses or insecurities.

The original netcat just keeps working. Last "commit" was in the 1990's.


[deleted]


Yes, Ruby is a "good" language, but how many people are able to host Ruby projects?

Ruby is quick to develop with, and they needed to develop quickly. There are hundreds of particular aspects of a software production process that an outsider could question, but the tradeoffs usually can't be calculated as easily from the outside as one would think.

I think the OP's idea of completely distributed servers might be as good as any IF these weren't website servers but information caches akin to Freenet nodes. Freenet at least does work on some level. http://en.wikipedia.org/wiki/Freenet



