Hacker News

I'm not an Android app developer, but can someone explain to me what the big deal is? I've been developing Windows desktop software, Linux desktop software and Unix server software for years. The hardware diversity on all those 3 platforms is huge. Heck, if you're developing web apps, it's like every user uses a different machine. I've never seen anybody claiming that Windows is fragmented. So what's so special about Android that people put the "fragmented" label on it, and why is that a big deal?


You've never seen anyone complain about shoddy Windows drivers, DLL hell, legacy support, or browser quirks? You've somehow missed even the kvetching about Microsoft not updating older browsers, instead tying finally-better versions of IE to new operating systems, leaving a long tail of abandoned devices out there that takes serious development effort to deal with and thereby holds back advances?

You've never seen people griping in Unix land about the different opinions different versions of different distros have about where various files belong? Or how and which configuration defaults should be set? Or how packages support a small handful of environments and rely on individual admins or community maintenance to discover and document how in the world to get package W to play well with package X on distro Y.Z?

What's the push for containers even about, if no-one's running into fragmentation-style roadblocks?


Sure, but what's that got to do with hardware fragmentation? All those problems were caused by Windows itself and the lack of updates and solutions, not because Windows ran on tons of different hardware.

As for complaining about different distros: that depends. For some time I tried to solve that problem through a project of mine, Autopackage (now defunct, website gone; it's still on Wikipedia: http://en.wikipedia.org/wiki/Autopackage). But we met a lot of resistance from distribution people and even from users. The view was that each distro is its own operating system and shouldn't bother with compatibility with other distros. Heck, binary distribution is only for closed-source software, they said. In the end there was not enough support.

So no, I've not seen people complaining quite as much about Linux distro fragmentation as about Android fragmentation.


Windows legacy problems were very often, very much a case of Windows running on a ton of different hardware. Drivers have quite a bit to do with the pains of hardware fragmentation. And quite a bit of the legacy pain is in having to deal with the old (flawed/bugged/limited) abstraction layers and DLLs that people would very much rather not have to support anymore, but cannot abandon due to the installed base of the old hardware they support.

As to quantity of complaints: you asserted these sorts of complaints just didn't exist. If you want to argue about whether they're overblown, or simply proportional with the fragmentation gripes of various historical platforms, that's quite a bit different.


> And quite a bit of the legacy pain is in having to deal with the old (flawed/bugged/limited) abstraction layers and DLLs that people would very much rather not have to support anymore, but cannot abandon due to the installed base of the old hardware they support.

It'd be interesting to see whether newer versions of Android will attempt what newer versions of Windows did: detect an older application trying to use an undocumented feature, and then give the app what it expects instead of crashing or exposing a bug, thereby keeping compatibility with older programs.

Microsoft of course had teams of people doing this kind of stuff.


There are a bunch of conditionals in the Android framework on which SDK version something was compiled against, so they already do this in a way.
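The complementary pattern on the app side is a runtime check against the device's SDK level. Here is a minimal sketch of that pattern; `VersionGate` is a hypothetical class, and `SDK_INT` is a plain-Java stand-in for `android.os.Build.VERSION.SDK_INT` so the sketch runs outside Android (the two package names it mentions are real Android packages):

```java
// Sketch of the version-gating pattern. SDK_INT stands in for
// android.os.Build.VERSION.SDK_INT, which this demo can't read
// off-device.
public class VersionGate {
    static int SDK_INT = 14; // pretend we're on Ice Cream Sandwich

    // Pick an animation API the way an app might: property animators
    // (introduced in API 11) when available, legacy view animations
    // otherwise.
    static String pickAnimationApi() {
        return SDK_INT >= 11
                ? "android.animation"        // property animators, API 11+
                : "android.view.animation";  // legacy view animations
    }

    public static void main(String[] args) {
        System.out.println(pickAnimationApi());
    }
}
```

The framework's own conditionals on the compiled-against SDK version serve the same purpose from the other direction: old binaries keep seeing old behavior.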


You don't hear people claim that Windows is fragmented because everyone has become used to it, so much so that it seems like such a normal thing. When you develop for Windows, worrying about which version of .NET Framework might be installed on the user's machine or whether or not a certain feature of the OS might be turned on or whether their drivers might be outdated is a normal course of events.

The iOS development philosophy is that developers should not have to worry about any of this, and their apps should just work. I mean, when was the last time you tried to install an app on iOS and it gave you an error message about such and such prerequisite missing, or you installed it and the user experience was broken because the phone had an older version of the OS or a wacky manufacturer UI?


> The iOS development philosophy is that developers should not have to worry about any of this, and their apps should just work.

Microsoft, and IBM before them, had that same philosophy: if only everyone would use the current version of their OS, developers would not have to worry about cross-platform compatibility because everything would just work. Nobody liked that philosophy then, and most developers don't like it now either.

Web development can certainly be a PITA, but at least the web development platform is based on open cross-platform compatibility as a goal, rather than monoculture. It recognizes the reality that monoculture can't be achieved without giving up fundamental freedoms, and strives to work in that reality rather than in some company's monopolistic wet-dream.


> Nobody liked that philosophy then, and most developers don't like it now either.

I'm not sure if the latter part of that sentence is correct, since iOS is the most popular mobile development platform.


Popular in what sense? Because developers like it, or because developers have to use it in order to reach their audience? By the latter measure, Ticketmaster is the most popular ticket vendor.


I find that devs have a far harder time dealing with Apple's policies and fees than with Android's fragmentation. One particular case is a startup where all the people involved use iPhones, but they moved to Android to launch their app because they couldn't afford the complications of doing it for iPhone, which was originally their plan.

I can find a way through a code problem, but when it comes to bureaucracy sometimes there is no alternative, or they shut it down the moment you find it.


Apple's fees are $99/year which is hardly prohibitive.

And I doubt you would see any difference between Apple's bureaucracy and Google's. Any time you try to embarrass the company, take money away from them, or be anti-user, then of course you will have problems.


I don't remember having to 'hope' that Google accepts an app through a human review before allowing it on their store.

I like neither Apple nor Google but I believe we shouldn't overlook facts that clearly differentiate one from the other.


Can you elaborate on the bureaucratic complications said startup faced?


>> When you develop for Windows, worrying about which version of .NET Framework might be installed on the user's machine or whether or not a certain feature of the OS might be turned on

It's also not quite as big of a deal on Windows - you can simply include or launch the installer for the XYZ Library or ABC Framework version 3.9.


Compared to Windows or Linux, Android is almost not fragmented at all.

Yes, there are different screen sizes, but that's not a big deal, the API abstracts from that.

And yes, there are different API versions, but that's not a big problem either, because the platform is backward compatible: newer Android versions still run apps built against older API levels.


Of course... similarly, compared to Windows or Linux, Android is not mature at all. The main method of interface is also much more sensitive to things taking a long time.

As for the API differences "not being a big deal": I think you grossly overstate how well the APIs have been managed. It is nowhere near as hellacious as the J2ME fiasco, but it is not pleasant, either.


> The main method of interface is also much more sensitive to things taking a long time.

Not sure I understand, what do you mean by that?

> but it is not pleasant, either

Why exactly?


Humble apologies for not responding. Forgot I had posted.

I meant that the latency of the touch screen is a huge factor. If the screen is not keeping pace with my finger, it is highly noticeable. And jarring. This is just as true for many GUIs, but the speed of getting text onto the screen is a solved problem. And rarely goes wrong. (That is, most UIs on desktops are relatively static while the user is doing stuff. This appears to not be the case on my phone. Further, if Facebook goes slow on my desktop, I just switch to another app/tab for a bit. Not really doable on the phone.)

As far as the API issues, it really comes down to just how obnoxious it is to target many versions of devices. You wind up picking the API that is solid across them all. Often this is the "old" one, which is not quite as nice and definitely not as well supported. In fact, this is the main thing I hate about how Google has curated their stuff. They don't so much "stabilize" APIs as constantly churn them.


No problem mate, thanks for the reply!

I mostly agree with you on touch latency; even with the Nexus 4 it's not as good as the iPhone's. But it's getting better.

API issues: there are different API versions that are backwards compatible. But apart from that, the API is the same across devices. Manufacturers don't change the API. Having to use older API versions can be a bit annoying, but I don't see it as a big problem, and there's also the support library.


Because the formerly iOS loving tech press were looking for a buzzword to kill the rise of Android with a few years ago. This was the best they could come up with.


Speaking as an Android app dev, there are some challenges, but your instinct is right. I've rarely run into device-specific issues; Android has decent support for building flexible layouts, there are official libraries for bringing some newer-version UI components to older versions, etc. Overall, while certainly more challenging than writing for a single known device, it's not dramatically difficult, at least for basic apps.


Screen size is the biggest difference. Things don't, or at least don't have to, run full screen on a desktop. The UI is also not as big of a deal on a desktop. People don't buy Quicken simply because it is "prettier." Given the absurd number of different screen sizes, it's much more difficult to make an app that will look good on each of them.

The second part is the API differences. You don't get to use the latest fancy new features when 34% of your audience is on the older version. That is similar to windows land, but it is also not nearly as big a deal as the screen sizes.

Finally, hardware. Yes there is a lot of different hardware on desktops, but on average it is more powerful than on mobile devices. Ask anyone who has done embedded development, there are a different set of problems there than you have on the desktop. The same goes for mobile. Most of the new mobile devices have hardware on par with desktops from a few years ago, so it isn't quite as bad as embedded. That being said, the range of Android devices is huge, and there is some percentage running on crap that creates another headache for Android developers.


There are more mobile developers than desktop developers.

Thanks to twitter and other social tools complaints now get amplified and travel quickly.

There have always been a lot of pain and complaints in Windows, Mac OS, and Linux development, in any case, but we didn't pay as much attention to it then as we do now.

Problem with Android fragmentation, on top of that, is the frustration that comes when you compare to the competing platform (iOS). I'm not saying iOS development doesn't have its own pains, but it looks tidy and simple compared to the Android ecosystem. If iOS wasn't there Android fragmentation would be accepted as the normal state of affairs.

Is it a big deal? Well, it depends on what kind of app you are developing and how much you can get away with. For some developers screen-resolution differences are not a big deal, but different processors, memory, etc. are. For others the opposite is true.


iOS fragmentation is masked by using the same version number on devices that support different APIs. Clever perhaps, but fragmented like Android.


> iOS fragmentation is masked by using the same version number on devices that support different APIs

Example of this? I can't think of one offhand (besides things like the original iPhone not providing fine-grained location, because it didn't have a GPS receiver). Some of the user-facing features vary by device, but the APIs are the same.


Skinning. Imagine if half the PC manufacturers shipped Windows with an OS X skin that the user couldn't turn off, and another 15% of the market shipped a DIFFERENT non-native skin.

http://en.wikipedia.org/wiki/HTC_Sense http://en.wikipedia.org/wiki/TouchWiz


With Android 4.0 and newer it's not a problem anymore. Your app can choose to use the Holo theme, which is guaranteed to look the same on every certified Android phone. Without certification by Google, I believe the phone can't ship Google Play.
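For reference, opting into Holo is just a theme declaration in the manifest. A hypothetical AndroidManifest.xml fragment (the label resource is a placeholder):

```xml
<!-- Hypothetical manifest fragment: requesting the stock Holo theme,
     which certified Android 4.0+ devices must render unmodified. -->
<application
    android:label="@string/app_name"
    android:theme="@android:style/Theme.Holo.Light">
    <!-- activities inherit the application theme unless they override it -->
</application>
```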


I think the point is that Android 4.0 and later are only on 40%-50% of devices in the wild (per this report). So things will get better, but they aren't there for someone coding today.


A caveat of this: The distribution is very different depending on the geographic area. For my recently released US-focused app, Android 2.3 makes up 15% of the downloads.


That's actually a really interesting point.

I wonder what Android fragmentation looks like per-country.


The problems are greatly exaggerated. I imagine there's little difference to the "fragmentation" problems developing for Windows.


Developers these days are lazy and want a write-once-run-everywhere development fantasy land that has never existed in computing and never will. They want to expend minimal effort building a photo-sharing fart flashlight app by gluing together a few APIs and cashing out for a billion dollars. Having to do device testing, ensure compatibility and provide long-life maintenance runs counter to this mentality.


I have to disagree with you.

It is not just about effort; it is also about opportunity cost. Imagine you work for someone else, and you tell them that for the kind of UI they want, you can ensure it works on 95% of Android devices if they give you another three weeks and access to these 30 other devices (maybe via some website that offers paid access to them for a bit of time). Right now it works well on 75% of devices.

Now, the person who owns the business may consider the money better spent having you implement some other feature on both iOS and Android rather than ensuring you support almost all Android devices. It may not be the developer who chooses. Many times it may not be so clear cut, either: some features may get shot down during design because they are hard to get right across all the various aspect ratios and screen sizes that Android offers.

So, yes the 'fragmentation' is a problem but in my experience it is mostly only to do with the variations in aspect ratio and screen size.

Nevertheless, as one of the first replies to this topic (by bookwormAT) says, it is not as bad as having to code for various operating systems would have been if the manufacturers hadn't agreed to go with Android in the first place. So Android does help us develop for a wide range of manufacturers' phones, but it doesn't do much to ease the pain of accounting for the variation in displays.


AFAICT, it's a stalking horse trotted out by Apple fanboys to decry Android popularity.


AFAICT, everyone that isn't complaining about Android fragmentation has never actually worked on a project that involved supporting 90%+ of users.

There are some horrific devices available today that are Android in name but definitely not in spirit. They are so underpowered that they are more akin to feature phones.


Well, take Microsoft. They put an awful lot of effort into testing their operating system on various real-world hardware and working around hardware bugs. Does Google do the same?


My biggest fragmentation issues have been the result of carrier differences: carriers pick and choose which permissions are allowed or denied. For example, only AT&T denies access to AlarmManager. This means you cannot set or change the alarm through software without root. Because of this, you have to build a complex software alarm to do what the operating system should give you access to.
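In-process fallbacks like the one described tend to boil down to scheduling on a timer you own. A hedged, plain-Java sketch of that idea (`SoftwareAlarm` is hypothetical, not Android code; a real app would schedule via a Service or Handler, and unlike a system alarm this only fires while the process is alive):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical "software alarm" fallback: schedule the action on a
// thread we own instead of asking the OS. Unlike a real alarm, this
// cannot wake the device or survive the process being killed.
public class SoftwareAlarm {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Run the action after delayMillis; the returned future can be
    // used to cancel or to observe completion.
    public ScheduledFuture<?> set(long delayMillis, Runnable action) {
        return scheduler.schedule(action, delayMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```

That gap (no wake-up, no persistence) is exactly why losing access to the real OS facility hurts.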


Is that documented anywhere? It would be crazy for AT&T to restrict access to AlarmManager. I would think that would break a large proportion of apps in the store.

Cf: http://developer.android.com/reference/android/app/AlarmMana...

This is a very common API.


Because of the limitations. You've got limited screen real estate and limited resources. The bar, UX- and design-wise, is also higher.



