Software is currently in a strange place legislatively. In the 19th century, civil engineering exploded (sometimes literally) and the number of disasters rose dramatically (for an interesting case, see https://en.wikipedia.org/wiki/Tay_Bridge_disaster). Over time, professional standards bodies grew alongside the maturing industry, both to ensure that people working in it were adequately trained and to protect them from external pressures (people pushing them to do things cheaply, skimp on designs, etc.).
Given that we've handed a large part of the running of the modern world over to computers, software, and the people who write that software, I think at some point we need to start looking at a similar system. The usual arguments against this are "We don't want regulation" or "It's not my code, it's other people's", but that doesn't alter the reality that this isn't going to get better until the way the industry is run changes. It's not a software issue, better tooling won't save us (though it might help), and there is no perfect outcome.
EDIT: It's worth noting that if, as an industry, we don't work with our customers on this stuff and reach an accord that fits everyone, eventually one will be imposed on us from outside. Software engineering doesn't exist in isolation; other fields of engineering are covered already.
We need formal codes of practice, we need good institutions, but we also need a cultural change.
We no longer have the luxury of saying "oh, it's just a CRUD app, it doesn't matter if things go wrong". Software has become too important, it has become too deeply intertwined in our daily lives. Errors and leaks from trivial little apps can have life-changing consequences for users.
If your software handles personally identifying data, there's a non-zero chance that you could ruin someone's life through negligence. Legal case management software might be an obvious example, but it's the tip of a very big iceberg.
If you have several million users, one-in-a-million edge cases are going to happen constantly. Weird things happen at scale. We need to treat every Android permission and every Facebook API call as a potential matter of life or death. We need the personal courage and the support of our peers to say "no, I will not implement that feature", "no, I will not store that user data", "no, I will not transmit that in plaintext", "no, I will not commit that to production without unit testing".
Don't put the blame on the developers. Too frequently they are put under huge pressure and cannot say "no" to their managers. There are people who are responsible for delivery and testing, and they are paid far more than the average developer.
Unlike in engineering, with software you can - and indeed should - test it before deployment in thousands of ways.
Finally there is also a receiving party that should ensure they are not accepting broken software. They have responsibility too.
> Too frequently they are put under huge pressure and cannot say "no" to their managers.
Actually, this is precisely why something like the "Iron Ring", linked above, exists. Standards bodies like APEGA (https://www.apega.ca) exist precisely so that you have someone backing you if you say "no" to a manager. If you're an engineer and you put the public in harm's way, APEGA will come after you. Part of being able to call yourself an engineer in Canada is the distinction that you specifically won't just say "yes" and bend over because your manager wills it. Your duty is first and foremost to the public.
This is something that works incredibly well for engineers, geologists, and geoscientists, and I can't see why it wouldn't work for software. Developers who are contracted to build systems for the public like this absolutely _should_ have liability here.
Software projects that have a significant impact on public safety or well-being are relatively rare. You can easily apply additional policies to them - and indeed this is what is being done (see airplane software, medical software, etc.).
But will you really call your union because your manager forces you to skip writing unit tests for each and every class you write? Bearing in mind you are just churning out some crappy e-commerce website?
A separate issue is that even if you are careful with your own code, the third-party components you are using may be more lax. This is well recognized, e.g. in medical software, where any third-party code must be identified and held to the same standards as the software you write.
It is very difficult to say who is to blame for a given software bug. Is it the person who configured the software, because he chose invalid config parameters? Is it the intern who wrote an invalid piece of parsing code? Is it the system architect who put wrong data into the specification?
For any given piece of software there are many people responsible in various ways for its creation. In such contexts you cannot blame individuals, especially when their intentions may be good. Even very careful programmers will write bugs.
> just churning out some crappy e-commerce website?
That can affect people's lives dramatically though. Off-by-1 error charges someone's debit card $2,000 instead of $200 and now they can't buy food. Ecomm is important.
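As a purely hypothetical sketch (invented function names, not taken from any real billing system), a single misplaced zero in a dollars-to-cents conversion is enough to produce exactly that kind of 10x overcharge:

```python
# Hypothetical sketch of a 10x overcharge caused by a bad
# dollars-to-cents conversion (invented code, illustration only).

def charge_amount_cents(price_dollars: float) -> int:
    # BUG: multiplies by 1000 instead of 100 when converting to cents
    return int(price_dollars * 1000)

def charge_amount_cents_fixed(price_dollars: float) -> int:
    # Correct conversion: 100 cents to the dollar
    return round(price_dollars * 100)

buggy = charge_amount_cents(200.00)        # 200000 cents, i.e. $2,000
fixed = charge_amount_cents_fixed(200.00)  # 20000 cents, i.e. $200
```

In a real system you'd avoid binary floats for money altogether (e.g. Python's decimal module, or integer cents end to end), which makes this whole class of bug much easier to catch in review and in tests.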
This comes back to how software is designed and built. On its own, a single bad weld should never endanger a bridge, but we accept that a single mistake can bring down software. At its core, that's why software fails so often AND that's what we need to fix.
A single bad weld is isolated, a single bug can propagate a lot of bad data. Software crashing isn't the same as a bridge crashing, in many cases crashing is the best thing that can happen.
> But will you really call your union because your manager forces you to skip writing unit tests for each and every class you write? Having in mind you are just churning out some crappy e-commerce website?
Firstly engineering bodies are not a union. They are a legal organization that takes action to protect the public from actual harm. In the case of the software put forth in TFA, quantifiable harm was caused to people because of poorly tested and poorly integrated software.
Secondary to that, not every piece of code requires this level of oversight. As an example, you don't need to get a civil engineer to sign off so you can fix a picture frame on your wall; however, if you decide to remove a load-bearing wall, you had better have someone sign off that you're not collapsing your house, or selling it to someone else who won't know about the problem.
In the same way, the biggest restriction this creates for most people is that they won't be able to call themselves a software "engineer" and won't have the authority to sign off a project (in the same way you sign off that work on a bridge has been completed). This already exists in Canada and doesn't prevent anyone from creating a crappy e-commerce site. Protecting the word "engineer" is good for multiple reasons, and holding software to the same standards as we hold the rest of our public projects is not only a good idea in theory, but probably saves cost in the long run too. Think of all the people who will sue over being wrongfully arrested or put on the sex offenders registry.
> For any given piece of software there are many people responsible in various ways for its creation.
As with any large project. We don't blame the individual electrician and make them legally culpable if the light sockets start a house fire. However, if the sockets are faulty and the lead engineer _knew_ it (or failed to do the due diligence that would have caught it), yet still signed off to let the construction finish and endangered someone who purchased the home, then the engineer is liable (to an extent for damages, but more importantly in that they will lose the right to work as an engineer in the future). The most salient point is that you create a code of responsibilities for engineers towards the public, and you build the legal framework so that projects that harm or endanger the public can be reined in.
For projects like TFA, I can imagine that under an organization like APEGA, you could treat software similar to traditional projects by adding integration requirements to the contract, and requiring a certain period (90 - 120 days maybe?) of time after all the data has been transferred over to soft-test the system in production to see if it meets the on-site requirements (i.e. matches what is done manually). Does this make the contract more expensive? Yes. By having processes like these, can we be reasonably sure that we aren't going to cause a large amount of external harm or grief? Maybe. Building a culture where we don't just ship a proof-of-concept that can potentially imprison people because our manager had short term goals starts by making it clear who is liable, who is accountable, and where your responsibility lies.
>Don't put the blame on the developers. Too frequently they are put under huge pressure and cannot say "no" to their managers.
Befehl ist Befehl ("orders are orders"). Blaming management is literally the Nuremberg defence. If we feel that we have to do reckless and dangerous things in order to keep our jobs, then we desperately need to unionise. It's just not good enough for us to throw up our hands and blame the PHBs.
Management need to change, but they're not going to do it voluntarily. Pressure needs to come from users, but also from developers. We need to fight back against poor practices, as millions of workers have done in other industries. We're the guys on the shop floor, we're the people who know what the issues are and how to fix them. If we don't take a stand, nobody else will.
How would we react if this story was about a plane crash or a nuclear accident caused by negligence? Would we be so quick to absolve employees of responsibility, or would we be asking why nobody blew the whistle?
> Befehl ist Befehl. Blaming management is literally the Nuremberg defence.
This isn't really about shifting blame or absolving responsibility. It is everyone's responsibility to ensure that safety and the public good are put first. If developers don't have the ability to say no to their management because there's no institution backing up that decision then we are to blame.
I know it's a cynical world view, but management will trade just about anything for profits, and employees will compromise their principles for their livelihood. Hoping that this system of perverse incentives will correct itself internally, using only the individual employee's right to resign, is far too hopeful for my taste.
Except we're not just talking about clearly malicious or fraudulent software with sins of commission. The bigger issue is cutting corners, taking shortcuts, and individual weaknesses that break the whole system.
Successive and cumulative corner-cutting. Management should provide oversight covering several steps, but often they close their eyes and the last developer who touched the code gets the blame.
I would amend that to "don't put the blame solely on the developers". I've seen way too many developers whose mentality is "ship it, it seems good enough, we'll fix any bugs as we go".
If the company culture is supporting this attitude, then sure - you will have developers saying that. But it isn't the developers that create that culture. This is something the management is responsible for.
The fact that a person acted pursuant to an order of a superior does not relieve him of responsibility. This can be a difficult decision to make in these circumstances, but I just couldn't ship buggy code that might be responsible for someone being unfairly jailed.
That's why I try to always document as much as possible in the workplace.
People aren't computers, they don't just run the company culture program in their brains. Sure, developers are influenced by company culture, but they contribute to it as well.
At one end of the spectrum, you have junior devs, fresh out of school, highly susceptible to the effects of company culture. At the other, you have senior devs with more than a decade of experience, who do a great deal to set the tone and shape the culture.
Think about what kind of cultural influence you get from a senior dev who has accumulated more than ten years of experience of moving fast and breaking things. That's a lesson I found very hard to learn: just because someone has lots of experience and skill and knowledge doesn't mean they are a better developer.
Mind you, I'm not saying the blame for the state of our industry lies solely with devs. I agree with you that it's the management that has the last word, but that doesn't absolve the rest of us of all responsibility.
If you really care about it you can simply do "cultural screening": don't hire people that clearly don't match your attitude requirements. We are doing that.
You can also discourage certain behaviors. If needed, fire offenders.
Exactly. This is why bosses make the big bucks. Developers are just code-machines and they will perform poorly or well entirely dependent on their external environment and culture created by their management.
(Oh, it's a one way thing, you say? Bosses are to blame for failure, but devs get credit for successes? Head I win, tails you lose?)
Even bringing developers (or anyone really) into it is like blaming the prisoners for having a dilemma. It's the nature of software that it does not fit inside one person's head, and this leads to problems.
> If your software handles personally identifying data, there's a non-zero chance that you could ruin someone's life through negligence.
And, at the scale you're discussing, if there is a non-zero chance that this will happen in any given instance, then there is a near-100% chance that it will happen in some instance.
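A quick back-of-the-envelope calculation (numbers invented purely for illustration) shows how fast "non-zero" turns into "near-certain" at scale:

```python
# If each user independently has a one-in-a-million chance of the
# bad event, across millions of users it becomes near-certain.
p = 1e-6          # per-user probability of the failure (invented)
n = 5_000_000     # number of users (invented)

p_at_least_once = 1 - (1 - p) ** n
print(f"{p_at_least_once:.1%}")  # roughly 99.3%
```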
Meh. That's all I have to say about that. We already have the distinction between software that can actually affect people's lives in a negative/positive way and software that is merely a nuisance or an inconvenience. The former is heavily regulated. Mistakes in financial accounting software, healthcare software, or mission-critical software such as industrial automation or launching satellites cause companies to vanish. Your task management CRUD app that crashes every now and then isn't going to cause you to die. Marketing and advertisement software may be a large nuisance, but it isn't killing anyone. I've worked in both environments. The cultures are appropriate for each of the given situations.
No, we don't have that distinction. We have places where outsiders, typically from heavily regulated industries like finance, medicine, and aerospace, have imposed testing and validation procedures and best practices onto the software industry. Left to our own devices, we get IoT bullshit like Nest thermostats that randomly turn the temperature down because they turn off[0], security exploit via unneeded running service part 2028, the 9382nd installment of security exploit via default credentials, the 11 millionth edition of buffer overflow security exploit, and the list goes on. These problems screw up not just the software itself, but other systems (see the Mirai DNS DDoS, see ransomware, see ransomware on MUNI, see all the other effects from people who coded up things that relied on a ubiquitous persistent internet connection).
There's no such thing as harmless software any more. Our devices are too interconnected and we are too reliant upon them. Failure modes can cascade in unpredictable ways. "Harmless" systems often contain potentially harmful data; highly sensitive systems often share network resources with IoT junk.
Some semi-hypothetical scenarios:
A data leak in an appointment management app allows an abusive ex-partner to track down and kill their victim.
Mobile malware spread by an in-game advertising network causes a massive DDoS of the cellular network. Some areas are left without cell coverage for several hours while operators scramble to reconfigure their base station equipment; thousands of 911 calls go unanswered until service is resumed.
Poor encryption in a chat app allows an oppressive regime to identify, surveil and "disappear" human rights activists.
A bug in an integrated logistics management system causes some parcels to be tagged with the wrong attributes. A consignment of temperature-sensitive medicine is inadvertently shipped in an unrefrigerated container; the mistake goes unnoticed until patients start dying, because the "smart" temperature sensor inside the package thought that the shipment was safe to transport at ambient temperatures.
A disreputable dating app is hacked and the names of the users published, leading to several suicides.
A wifi-enabled printer is connected to the secure side of a hospital network by an unwitting member of staff. The printer is pwned, bringing down the entire network for 36 hours. Two ICU patients die as a result of preventable errors, because their medical records were unavailable.
Several models of aftermarket in-car entertainment units are pwned due to an unpatched vulnerability in Android. The units are connected to the CAN-bus in order to use the media controls on the steering wheel. The poorly-written malware intermittently floods the CAN-bus with malformed packets, causing sudden failures of the braking system in several cars.
Here in the UK we had a large NHS trust (a regional sub-department of our nationalised health service) shut down for 3 days over malware: operations cancelled or delayed, surgeons stood down.
I could go on ad nauseam about all the potential non-software ways that could go a long way in helping someone intent on breaking the law break the law, but the common thread in all your hypotheticals is that people who want to break the law will find ways to break the law. If you want to help fix the societal issues that drive people to want to break the law, that's another story.
We have armed guards, large vaults, and cameras in banking centers, but that doesn't stop people from trying to rob the bank.
We have federal laws and regulations protecting people's mail, but that doesn't stop identity thieves from going into your mailbox or digging through your trash for banking statements.
If a creepy stalker spots their ex's car driving down the road, what's stopping them from following her/him home? "But how would they find out what car they are driving?" you may rebut - well, who decided to post what brand new car they drive on Facebook? At some point, people have to take responsibility for the information they put out there about themselves. I wouldn't call into a radio show and blurt out my full name and address for everyone to hear before I gave my opinion.
I think this leads to a different issue -- why is appointment management software asking for personally identifiable information? Maybe we could make a distinction -- if you deal with PII, then these standards should be placed upon you.
You are missing the big picture here. The key phrase is "risk management". Anything you are doing (or NOT doing) has inherent risks. You should manage those risks.
If a risk materialises, you will lose money (in various ways, including litigation) or other assets (such as reputation, talented people, market share, etc.). You can generally lower your exposure to a risk by spending money on it. You can hire more devs and testers, enact better policies, increase auditing, etc. You can even buy insurance.
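To make the risk-management framing concrete, the arithmetic is just expected loss versus mitigation cost (all numbers below are invented for illustration):

```python
# Toy risk-management calculation: spend on mitigation only when it
# buys more reduction in expected loss than it costs. Invented numbers.
p_failure = 0.02              # yearly chance of a serious incident
cost_if_failure = 5_000_000   # litigation, reputation, etc. (dollars)
expected_loss = p_failure * cost_if_failure        # $100,000/year

mitigation_cost = 60_000      # extra testing/auditing per year
p_after_mitigation = 0.005    # assumed residual risk after mitigation
expected_loss_after = p_after_mitigation * cost_if_failure  # $25,000/year

# Worth doing here: saves $75,000/year in expected loss for $60,000.
worth_it = mitigation_cost < expected_loss - expected_loss_after
```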
What you are advocating for is myopic. You should decrease your risk exposure in a way that is most efficient. It may be with better software, but often this isn't the case.
What the public can and should do is work on the other part of the equation: make failure more costly. You should "make them pay" - literally. If you could go out of business or to prison because the software you delivered is faulty, you will really make sure it isn't. Simple as that.
EDIT: a few recent banking regulations are good examples. Various regulators have demanded that international banks set up demonstrably independent local companies, with appointed heads who are criminally responsible for misdeeds done by those local companies. Putting a few developers in prison isn't going to fix anything - the company will simply hire another few. Threatening the people in power is the way to go.
> If you could go out of business or to prison because the software you have delivered is faulty
And that's the problem. If a civil engineer designed a bridge that maimed hundreds, he would lose his license. There is no license to be lost for software developers.
What will be the cost to the company that built this shoddy justice system software? Will anybody go to jail or be personally held accountable? I doubt it (I can't think of cases where a developer has been personally responsible).
> It isn't the developer who has signed the contract
Yes, and that's the problem. The engineer didn't sign the contract either, but he DID sign his name on the design. There should be some equivalent in software.
And just as not all "engineers" are professional engineers (licensed), not all software developers should need to be licensed. But for something like what's described in the original article, there damn sure ought to be somebody to hold accountable.
I could say the same about the civil engineer, or lawyers, or MDs. Are you really arguing that professional licensing isn't needed because we can always seek remedy through the court system?
We already have a taste of these rules and regulations, for example HIPAA, PCI, and FIPS-140. I'm not saying I disagree with you, but keep in mind that those are all considered quite burdensome, and it's hard to see how that could be different. Also FIPS-140 is a good example of a standard that names specific technologies and so is doomed to lag behind the state of the art. Just be careful what you wish for! But as you say, it seems inevitable for more rules to be imposed eventually, one way or another, so perhaps we should think about how we might write better ones.
There are also terrible project management standards like CMMI, which a lot of agencies try to adopt to get government grants. Standards that were designed to increase quality by clearly defining the work, ended up doing neither.
The trouble with comparing it to civil engineering is that with a bridge, you have plans, and you can have other engineers and physicists look at them, do the maths, and figure out if the design is sound. You can have independent auditing, and you have to, because if a bridge fails, people die.
In software, we don't often feel that same burden. Also, unlike the bridge, software doesn't wear out over time. Its load may increase to the point where, even with more hardware, it won't scale. It may also become difficult to maintain due to old/unmaintained libraries, older languages that are difficult to find developers for, or just because it was badly written and bandaged together -- all of which is what we often call "technical debt".
We do have life-critical software, such as the software in a pacemaker, in your car's ECU, or in airplane navigation and control systems. That industry holds itself to a high standard, and some of those companies may do independent auditing (if only for liability more than anything else). But even then, we see things get through, as there are talks at hackcons about hacking things like cars, airplanes, and biomedical devices.
In this article, we have a piece of software that seems like it's just data-entry software. From the limited amount of information, it seems to have been poorly designed and buggy. If this were for the inventory at a retailer, yeah, who cares. Some grocery stores or gas station chains lose a ton of time and money because they made a bad decision. If prices go up because of that, you just go to another one, or they eat those costs.
When this deals with government information about people, you get into a whole different area. Another similar case: Novopay in New Zealand. A terribly designed education payroll system caused thousands of teachers to not be paid for months. That affects people's lives.
I find it weird that all the standards in the privacy space (that I know about) seem to me to be much more about checkbox-checking compliance (ie. following their letter doesn't help much or at all) as opposed to standards in reliability space.
Not to mention ambiguous. I've never found two people who can agree on the proper interpretation of even one of the guidelines in HIPAA or PCI. My experience with both has been managers who would rather spend a week trying to get out of having to be compliant than spend a day just complying.
There are cases where this approach would be very welcome. However, it probably won't solve the problem of "a cumbersome user interface was causing the time taken to update a record to jump from around one minute to as much as 30 minutes per entry." That's bad design, maybe with the collaboration of a customer who didn't understand the problem or didn't push or spend enough to fix it. Not different from building a bridge too narrow for two cars to pass at the same time: perfectly safe, but not functional enough. Engineer's fault, customer's fault, both? In the case of this software, who's going to be sued - the developer or the customer? The customer by the people in jail, and maybe the developer by the customer?
However, let's also compare the complexity of that software system with the complexity of a bridge. What would be more complex, the Tay Bridge or California's Odyssey? I bet on the latter, and I assume it would cost much more money and time to properly build and test software systems. It's almost only manpower, compared to concrete, steel, and wood, but the costs could go up by an order of magnitude compared to what they are now.
If this approach gets mandated, it would be a big slowdown for every industry needing custom software. It also means we'll only need 1/10th of our customers to keep our shops open, the 10% with more money. Goodbye to the small ones.
There will be the equivalent of a construction site manager, but good luck with the bugs that become apparent or are introduced after years of changes to the first accepted delivery of the project. Bridges are much more static than software.
The threat model of code is different than that of bridges, and legislation won't help.
Civil engineers have to make sure that bridges won't naturally collapse.
Software "engineers" have to ensure that no one will break their code, from the privacy and anonymity of their house, with millions of dollars in payoffs.
If bridges held millions of dollars and would be accessible from around the world, they would be breaking all the time also.
Cars are designed with an assumption that they will be sometimes mishandled, tipped over, crashed into things, run into bodies of water, set on fire, etc. In these circumstances cars should behave as to best preserve humans' safety, both inside (passengers) and outside (pedestrians).
This leads to some serious design decisions on deeper level, e.g. making the engine go under the rest of the car on a frontal collision, adding a structural cage around the passenger compartment, etc.
Deeply defensive programming methodologies also exist, from using safer languages and formally proven algorithms to pen-testing the completed setup. Their adoption just costs effort and time, that is, money.
>Cars are designed with an assumption that they will be sometimes mishandled
But not maliciously. If you run your car at another, the other car will break.
I would compare it to lock-making. Despite thousands of years of lock-making, they _still_ get broken, and there's nothing to be done about it.
>Deeply defensive programming methodologies also exist, from using safer languages and formally proven algorithms to pen-testing the completed setup. Their adoption just costs effort and time, that is, money.
It's not just money. It's paperwork, bureaucracy, and ultimately not effective. Practically, the only way to ensure safety is to take down the internet.
> It's not just money. It's paperwork, bureaucracy, and ultimately not effective. Practically, the only way to ensure safety is to take down the internet.
We can and should be making incremental improvements to technology. Continuing to use methods that are known to be faulty is gross negligence in the legal sense: https://en.wikipedia.org/wiki/Gross_negligence
We do not see enough lawsuits around software negligence today because companies are under no obligation to release source code - there is no way to tell how or why or even whether software is defective. This was a major issue in the investigation of the Toyota unintended acceleration scandal. The investigation uncovered major, obvious code quality problems, ultimately concluded the fault was caused by software (something Toyota denied, and without the investigation, consumers never would have learned), and I hope becomes a precedent: http://www.safetyresearch.net/blog/articles/toyota-unintende...
IMO if poor coding practice is gross negligence, it follows that failing to reveal the source code constitutes intent: poorly developed closed-source software is criminal negligence.
The problem is that there are only two ways to have a safe internet:
1. Remove pacemakers and cars from the internet.
2. Make a government bureaucracy, lock down computers to running "approved code only", requiring certification before being allowed to touch a compiler, require all code (including macros, one of the most popular sources of viruses) and websites be "formally verified" (a very expensive project) and written without bugs.
Maliciously, too: see all these little things from door locks to built-in speed limiters.
Formal certification may be paperwork. But things like using e.g. Haskell instead of Ruby, or Rust instead of C, generating exhaustive tests where possible, making sure that tests touch every line of the code base, writing short pure functions for the most part, and isolating effectful and unsafe code are not about paperwork, to my mind.
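For what it's worth, the "short pure functions plus thorough tests" part is cheap to illustrate (toy example, invented code): a pure function with a small, well-defined input vocabulary can be tested over essentially its whole behaviour space.

```python
# Toy illustration: a short pure function (no I/O, no globals,
# deterministic), which makes exhaustive-style testing practical.

def parse_bool(s: str) -> bool:
    """Parse a human-entered yes/no value; raise on anything else."""
    normalized = s.strip().lower()
    if normalized in ("true", "yes", "1"):
        return True
    if normalized in ("false", "no", "0"):
        return False
    raise ValueError(f"not a boolean: {s!r}")

# Sweep the accepted vocabulary plus casing/whitespace variants:
for raw in ("true", "YES", " 1 "):
    assert parse_bool(raw) is True
for raw in ("false", "No", " 0 "):
    assert parse_bool(raw) is False
```

Isolating effectful code then means the messy parts (network, disk, clock) sit behind thin boundaries, while the bulk of the logic stays this easy to verify.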
>Formal certification may be paperwork. But things like using e.g. Haskell instead of Ruby, or Rust instead of C, generating exhaustive tests where possible, making sure that tests touch every line of the code base, writing short pure functions for the most part, and isolating effectful and unsafe code are not about paperwork, to my mind.
But you can't legislate languages ("C is hereby banned"). You'd legislate something along the lines of three hundred pages of legalese requirements which read like patent applications (there's a reason they look that way), which _would_ require use of a memory-safe language, but would also require that the compiler be signed off by a bunch of lawyers, closed source, and costing about $10,000 a license.
We had a government language. It was called Ada. There's a reason it never took off.
Position one: implement professional-training program and whistleblower-protections for employees of the out-of-state companies writing the proprietary software for CA court system
Position two: require court system software to be open source
You might reply that open source is no panacea, and that's true. But the vulnerabilities in current software are wholly different than the ones in civil (physical) engineering-- both your imagined "professional grade" proprietary court app and my imagined open source court app will have critical vulnerabilities. The question is what happens when they are found. I'd like to read a blog entry complete with screen captures by a security expert who is running the exact same software, posted after the exploit has been responsibly reported and addressed. I do not want to read an opaque statement from a company blaming its users, and I don't see how even a set of strong, effective software engineering guidelines is going to get you anything else in this case.
Edit: I said "vulnerabilities", but the logic applies equally to usability issues, data corruption, etc.
> professional standards bodies grew alongside the maturing industry
> ensure that people working in the industry where adequately trained and adequately protected from external factors
> software and the people who write that software I think at some point we need to start looking at a similar system
When this gets implemented I'll quit and become a welder or something else that's outdoors and filled with fewer insane propositions. The BS and stupidity that gets set up by these standards organizations is crazy.
I know many people who are PEs who can testify that all of this is completely useless. It's often said by people who give these certifications that you shouldn't go into industry before getting these. That's acceptable until you hear why: you'll forget a lot of stuff that is unused in industry but used in these exams. This is horrible design at its best.
"Engineering Standardization" in this form is good for a few things:
1. Driving up the cost by keeping perfectly good engineers out of the field and making the industry talent starved.
2. Creating a corrupt standardization organization that makes it so their "group" can get certificates while others can't.
3. Slowing the entry of new ideas into the field. Fewer people allowed in, all of whom are brainwashed into "THIS IS THE WAY, YOU WILL OBEY"
4. A lack of care for checking work; "They're a professional engineer, you know better than me"
5. Preserving irrelevant information that isn't at all related to the task at hand (A+, I'm looking at you)
I think the real solution is for universities to actually teach people how to program, about software architecture, about software engineering, about maintenance, and less about things that you won't necessarily be doing. I'm in senior level classes at college and I find that most of my peers have problems "doing programming" and I know people in the EE track who have problems "doing electronics". These aren't stupid people, they are just lacking the instruction needed to succeed in the field.
Another solution is to have a simple standard: All life endangering software must either be a) formally proven by the programmers and checked by a mathematician or b) have unit testing for every possible case of every testable portion of the code.
If both of these are done then it will save far more lives than useless standards that will be actively ignored. These two standards are already where our field is naturally going, and embracing that will only yield higher quality software for everyone, unlike stupid standards documents that specify things like "functions can only be 80 chars wide and 100 lines long" or "no memory allocation after startup".
It also avoids forming unnatural accreditation bodies that are harmful to the industry.
Edit: I'd also be perfectly fine with something like UL for software.
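To make option (b) concrete: when a decision function is pure and its meaningful input domain is bounded, you can enumerate the whole domain instead of sampling it. The sketch below is purely illustrative -- the eligibility rule, the 15% credit, and the 200-day bound are invented, not any real statute:

```python
def release_eligible(sentence_days: int, served_days: int, good_conduct: bool) -> bool:
    """Hypothetical rule: eligible once days served, plus a 15%
    good-conduct credit, cover the sentence."""
    credit = served_days * 0.15 if good_conduct else 0.0
    return served_days + credit >= sentence_days

def exhaustive_check() -> None:
    # Every combination up to 200 days: 201 * 201 * 2 cases, well
    # within what a test suite can run on every commit.
    for sentence in range(201):
        for served in range(201):
            for conduct in (False, True):
                if release_eligible(sentence, served, conduct):
                    # Serving one more day must never revoke eligibility.
                    assert release_eligible(sentence, served + 1, conduct)
                    # Good conduct must never hurt.
                    assert release_eligible(sentence, served, True)

exhaustive_check()  # silent if every case upholds both invariants
```

Note the assertions check invariants (monotonicity, "conduct never hurts") rather than restating the formula, so a transcription bug in the rule can't hide in an identical oracle.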
Well, usually if you pay more for something like a manufactured good or artwork you get more value, but the rule for software seems to be that the more you pay for it, the less value you get.
Similar problems have been reported in Tennessee and also in Indiana - where prosecutors have had a perhaps more troubling issue of inmates being mistakenly released early.
It seems backwards to describe this as more troubling.
Keep in mind this is a UK site. People are guilty until proven innocent, so from that perspective guilty people going free may indeed seem more troubling.
The UK is "Innocent until proven Guilty" except for specific exemptions (mostly related to not handing over encryption passwords, which is an offence itself..a lovely little logical trap they snuck in on that one didn't they).
Not as troubling as a government depriving a citizen of their inalienable rights. That is by far the more troubling outcome - especially given the greater societal implications.
But a person with a traffic ticket getting jailed and registered as a sex offender is better? Both are equally troubling. In my mind, the latter more so.
Separate from the question of software liability is the court's responsibility to not make egregious errors.
Historically, courts have been unwilling to assign themselves blame for screwing up. Judicial immunity is untouchable in American case law because it's hard to find a judge willing to rule against it.
The standard of due process is high for 'life and limb' cases but low for misdemeanors & traffic violations. When Fixed discovered that most SF parking tickets are challengeable, SF didn't fix the problem -- they turned off their fax machine to make it harder to challenge tickets. https://techcrunch.com/2015/10/12/fixed-the-app-that-fixes-y...
Jurisprudence doesn't have a concept of 'bulk miscarriage of justice'. You could put NSA surveillance in this camp too. There's a Star Trek line about genocide which says 'we have no law to fit your crime'. That's where we are with petty crimes mishandled in bulk.
> The software, created by Texas-based Tyler Technologies, costs about $5m (£4m) and is set to gradually replace a decades-old e-filing system that looks like something a hacker would use in a Hollywood movie.
> Tyler Technologies acknowledged in a statement that the upgrade process had been “challenging” - but said poor training was to blame for bad inputting of data and integration with third-party applications that often introduce glitches into the system.
Even as someone who spends the majority of the day at the command prompt, I agree we should always be attentive to user-interface issues. But I've been less than optimistic that people will have the wisdom to know that a new "modern" interface doesn't automatically mean it's more sophisticated/elegant, or more attuned to the needs of human users.
In particular, it's been alarming to see a rise in unnecessary use of AJAX (nevermind Angular) in government applications. I'm not anti-JS, it's just that client-side development seems to have far more moving parts of the kind that don't get well-tested by workers in a bureaucracy. Especially when that work has been farmed out.
edit: as an example of how government can do modern web-dev/UX well, I can think of no better (at least at the U.S. federal level) example than the CFPB: https://cfpb.github.io/
> Tyler Technologies acknowledged in a statement that the upgrade process had been “challenging” - but said poor training was to blame for bad inputting of data and integration with third-party applications that often introduce glitches into the system.
People writing software need to take responsibility for how the software is used, especially when it can impact people's lives to this degree. You can't just blame third party software or the people entering data.
> People writing software need to take responsibility for how the software is used
There is an old joke that it's almost impossible to get someone to understand a problem that would result in a threat to their livelihood.
I don't think people need to "take responsibility"; I think organisations need to be made to take responsibility.
In the UK, when you purchase a physical good from a retailer, the warranty for that good is between you and the retailer; the retailer then has to deal with problems up the line from suppliers. It's not perfect, but it's workable.
I think software should be the same. If I pay Foo for a system then Foo should be responsible for the system even if it's made of parts from Bar, Fizz and Buzz; if Foo has an issue with Fizz, they need to take it up with Fizz.
There is so much crap software out there in every field and the acceleration towards a world run on software continues.
I'm not saying it's not a good idea, but that system's gonna be really tough on the software industry. Consider what would happen when a famous third-party API has a security flaw. Most third-party libraries' licenses have an "as-is" clause and even mature third-party software has exploits sometimes (e.g. Java applets).
If you wanted to use a library in a project, you would have to
- a) read the entire thing to see if you can find any bugs that the devs missed, or
- b) roll your own solution which will probably have even more bugs, or
- c) find an equivalent library that has a paid version without the "as-is" clause.
Imagine paying a monthly subscription fee just so you can offset the liability when Angular has a security hole. This would run a lot of small shops out of business.
"Can you imagine it in the automotive,aeronautical, transport etc?"
This. This is the most important point I've read so far. The key word here is "imagine."
I think even we, as an industry, don't quite grok the degree to which software truly has eaten the world -- and the degree to which it now is every bit as important, as life-or-death, as things like cars, airplane travel, or even medicine. We hold these goods to certain standards because we deem them extremely critical -- too important to be left to the de facto standards set by the operation of a wild-west market. It's about time the notion of standards-free software shocked people as much as the notion of standards-free engineering in any other systemically critical industry.
I'd prefer self-established standards set by the industry to government-imposed standards set by lay bureaucrats. But the latter is coming eventually if the former never does.
> Imagine paying a monthly subscription fee just so you can offset the liability when Angular has a security hole. This would run a lot of small shops out of business.
Surgeons are not liable for, for example, hip replacement implants from a decade ago that are now known to have problems, but they would be responsible if they continued using the implants after they were recalled. They also purchase malpractice insurance. How is this any different than following security advisories and purchasing liability insurance?
I think that's a bit harsh without knowing whether the software vendor was allowed to be effectively involved in the training and integration efforts. Software like this depends on lots of processes around the rollout and usage being properly designed and executed, and the fact that the software is being used successfully in many places suggests that the vendor knows how to deploy the software safely. Whether they were complicit in not doing so in the case of Alameda County is the important question. Maybe the county ran out of money at the last minute and the vendor couldn't stop them from cutting corners on the rollout. Maybe the vendor doesn't think it's good customer relations to go public with the fact that they told the county over and over again that they were creating risks of errors like this. Maybe they even had stipulations in their contract to stop improper integrations from happening and the government ignored them or found a way around them.
Hopefully there will be an investigation and the people will find out who screwed up and why so they can be held accountable. It's tempting to say that everyone involved is "responsible" in some vague way, but that lets people off too easy. If everyone is guilty, then no one is guilty.
Except, what is the threshold? Is MS responsible every time a 20-year-old machine running Windows 98 (because nothing else can run the data collection software) crashes? Should I give Linus a ring every time my aunt breaks her laptop that I installed Mint on?
Assuming the software itself has no meaningful or significant bugs (a hard thing to prove), it likely IS a problem of training. I know that my travel request time went from 5 minutes to closer to an hour for the first few times I used it after we switched to a new system. And while the admins were trained properly, the rest of us just were told "Ask an admin if you need help"
Similarly, it might actually BE the third party systems. I know that every time we upgrade part of our infrastructure we have to deal with the hell of components that should, but don't, interface well. And as often as I scream profanities and talk about how much I hate a certain vendor, I also know it isn't their responsibility to ensure that their software works with a different vendor's (I do yell at our procurement people though since they should make it so).
Do I think Tyler Tech should try to help? Of course, this is bad PR. And if they were the ones providing the training, address what went wrong and update the training and possibly GUI to resolve this. But so long as they met the requirements of the contract, I see no reason they should be held responsible or penalized.
> You can't just blame third party software or the people entering data.
Indeed, this excuse just doesn't pass the smell test. Responsible vendors do (and not only "do", they advocate and insist on) practices like integration testing, and graduated rollouts (so that when process glitches like these inevitably slip through, they're caught when they affect 1 or 2 people -- not 50).
I agree that you can't blame the software but disagree with who should be liable. The entity using the software needs to be liable (in this case either the State or the courts). They are responsible for implementing and using the software and thus ought to be liable for any mistakes it makes, whether user error or software bug. If they then want to take that up with the creator of the software, that's between them.
Although I don't have any direct evidence regarding this instance, I wouldn't be surprised if the old system was greenscreen & form-based, while the new one is some kind of shiny Java-backed web app. It wouldn't surprise me at all if the old system was faster to use and less error-prone: those old greenscreen apps tended to be optimised for long-term use, rather than for showing off in a board-room demo.
having worked in data entry on a green screen app, as well as the .Net, Windows Form "upgrade", I agree with this sentiment.
the green screen app was highly optimized for efficient and accurate data entry without use of a mouse. The modern alternative looked better, but a large portion of the functionality was much less efficient in comparison.
the only redeeming factor of the new application, was it allowed much of the manual data entry to be automated. This however required considerable time, technical knowledge, and industry contacts to develop and implement, which not all organizational users possessed.
That's exactly the difference. Mr Woods (the defender I quote in the story) described the old system as something a computer hacker would use in a Hollywood movie... but it worked and was stable.
The headline is somewhat clickbaity and sensational.
Yes, it is bad that there are clerical errors in justice system.
But, if information is lost due to faulty software or user errors or even user error helped by bad UI design, it's still fundamentally just a clerical error. Those errors should be fixed and perhaps some people should be eligible for compensation for being mistreated due to error, but there is no sinister "software is putting people in jail" plan here. Just errors.
Embarrassing ones that should be fixed at a priority.
It definitely seems like more of the blame here should be shouldered by the police and the courts, who, knowing that the system has problems (and hopefully assured that those problems will be resolved in the future) should put less faith in them, and double-check (against filed papers, for example) potentially dubious results.
This is more expensive in terms of people's times, but it's just part of the cost of adopting the new software, and should be treated as such, possibly by billing the software vendor for the additional manpower required to work with the software during the transition.
Indeed. The "Computer says no" or "Computer says catch him" attitude should change: particularly if you know it's a new system, check twice before making drastic actions based on data it gives.
Even just "computer says this record has been randomly selected for manual review to verify system integrity, please consult your administrator on how to proceed or enter a manager's PIN to override" would be really helpful since it would effectively train people in double checking.
I think it's more complex than checking twice. You can't check twice for something that doesn't exist on a record but should, for example. How would you know what to look for?
The point here is perhaps that some errors are less forgivable than others.
Maybe in a field like this some extra care should be taken to ensure that, despite possible clerical errors, the data entered is sufficiently accurate. Technically that's likely extra checks and more thought about clarity of UI. "Just a clerical error" doesn't sound like enough justification in cases like this - software should actively help to avoid those errors, and do enough of that help.
> “With the old system, it took maybe one or two clicks to complete a process,” she said. “Now it takes 25 clicks, and there are drop-down boxes and all of that.”
> Because the system is so unwieldy, clerks are unable to enter data in the courtroom, she said, so that burden has fallen on other office workers. It’s created a backlog of more than 12,000 files that have not been uploaded — and that number is growing by up to 300 files a day, according to Woods.
Sounds like it's mostly a software design problem to me, rather than a "clerk/user stupidity" issue.
However, the fault probably does not lie in someone writing bad software; it's more about the authorities themselves defining a bad workflow when specifying the system.
I say this based on just a bit of personal experience in working with authorities and their mode of operation (in Europe but I expect US is not that different): it's often about covering one's ass by insisting on lots of checks and balances, and the work amounts generated by the processes are often neglected because they are someone else's problem.
This looks like fundamentally a public sector process issue. They'll improve it though.
> However, the fault probably does not lie in someone writing bad software; it's more about the authorities themselves defining a bad workflow when specifying the system.
> This looks like fundamentally a public sector process issue. They'll improve it though.
The better, "old system" was ordered by the same government office - two different results from the same "public sector process" should give a hint that maybe that is not the root cause of the problem. At the same time, since your comment attacks the public sector without providing alternatives, I can only assume that you are implying that a private company would handle this better - are you proposing to privatize the courts?
The old system was ordered maybe 20 years ago, maybe even longer. Lots of new legislation has been introduced since that, and perhaps that new legislation is the very reason why some systems had to be renewed.
Also, a new generation of public servants are at work, defining and specifying the work.
Who remembers, were there problems at the time when the old system was taken to use?
No, I'm not suggesting courts should be privatised, but the ever-expanding legislation is a problem throughout the Western world (I'm not from the US).
I expect the truth of this lies somewhere in the outsourcing industry. It has every smell of miscommunicated requirements and half-assed implementation.
If I'm right, I doubt it'll ever be admitted to though.
I wish I could upvote this a hundred times. I am currently dealing with an Indian dev shop and the code has been atrocious -- as if they didn't even read the requirement. Doing a very basic Stripe integration has taken nearly a week and it was still incorrect. I could provide days of examples.
The people writing your code have certainly not read the requirements. They'll each be working on one single aspect, and focussing entirely on fulfilling some knocked-together unit tests written by someone who might've read the requirements, but only with a view to working out how to make as much existing code as possible from the last customer fit into your project.
I knew various government techies who went to work for the Administrative Office of the [US] Courts. Honestly, I don't know what the chain of command was there.
Perhaps I'm crazy, but isn't a $5 million contract a bit too low for an overhaul of the California justice software system?
Business-wise there's going to be a lot of things to cut, pushback against government asks for software, and a very skeletal plan for maintenance mode. I sometimes wonder how government models the businesses they do work with, or whether they work as hard as businesses in modelling the other side.
I don't get why such software isn't an open source initiative. I wish the government would give more legitimacy to orgs like "Code for America". Who wants proprietary, closed, janky software ruling whether you are a criminal or not? Nobody, that's who. The only beneficiaries in that story are the people on the other end of the contract, making $5M for a half-baked piece of software!
While I certainly agree that all government software should be open source, it sounds like a big part of the problem here was a failure in gathering accurate requirements (which is a very expensive endeavor).
I don't see why people in the comments are blaming the tech company (Tyler Technologies) for this. There was certainly someone overseeing procurement and specifications for the government and it was their job to make sure that the product which was procured and delivered was functional and ready to roll out. This person and their department should be held responsible.
If you care about the welfare of your fellow Californians, consider sending a letter to your U.S. Congressperson, your California State Assemblyperson and Senator [1]. Attach this article as an exhibit. Copy your county court.
Then, and this is very important, set a reminder out one week and call each of those people, confirming they received the letter and understand your concerns.
If this is too much, either accept you don't care about the issue (that's fine) or, if you do, that you may have wrong attitudes about how citizen influence works in a democracy.
I worked as a programmer for a smaller California county court system for about five years, and have seen something very much like this play out before (both in my county and others).
I can't comment on the Tyler product or their training directly; maybe they really are a rock star outfit. But if this is like past attempts, this project has all of the worst aspects of software development risk and none of our more "modern" methods to mitigate them.
The court employees - most of whom would _not_ be considered very computer savvy - probably had a lot of training directly with Tyler but are struggling with a system that a) doesn't meet their needs, b) changes years (decades?) of ingrained workflow habits and terminology, and c) may be much slower than what they used to have.
Observations from past projects like this:
* at its heart it's a database CRUD app, but with hundreds of tables and thousands of fields and business "logic" encoded (in more database fields) to help with validation and workflow
* most of the above fields need to be fully customized for each county, so add in tables and logic to modify your UI on every screen
* this software was not built for Alameda county, but re-purposed from use elsewhere. Terms and concepts for how the law worked in the state this was originally built for may or may not apply here.
* "usability" success metric: "do all 50 fields on the page accept input and save data in less than 60 seconds?" (i.e. no concept of real HCI usability design at all)
* iteration process: waterfall. Vendor sits with court subject experts for 2-3 months, documenting all of the workflow. They customize their product to meet those needs, and a month later show a build that does this. Court can't use it yet (deployment locally would cost way too much), but they've printed out hundreds of pages of screen shots to help document how it could be used. Hire external consultants to help with this process. Repeat until a) court money runs out or b) someone's reputation will be tarnished if the system doesn't launch
* There is no staging environment. Deployment is on local hardware only (no cloud). No bug tracker exists that the court can see. Builds are not automated, and "maintenance" may cost the court additional money.
* importing previous cases: worst ETL job you can imagine. Take data from an aging mainframe database that may or may not have any relational integrity at all, and try to plug it into a system as described in point 1
* administrative overhead: your county is given money from the state to do this, and then no choice about which vendor or software to use (because the state wants to roll this out in _all_ counties... each of which is very different from one another, even in CA)
tl;dr This is a horribly difficult software update, subject to the worst practices in our industry.
Personally, I don't think blaming court employees for "clerical errors" is fair at all - not that those haven't happened, but (from my experience) these are hard working people who care about justice yet have really lousy software that impedes their job.
I'd love to see a company do this software right - custom build, real iterative development hand in hand with the users. The Courts really needs it, they've never experienced a high quality product in this area, and the inefficiencies affect the wider economy (because civil matters are faster to resolve).
Tyler Tech is not a rock star outfit. They may think they are, but they are not.
I interviewed with them in 2008, and that remains, to date, the worst interview experience I have ever had. Everything they did before the interview seemed calculated to convince me to withdraw myself from consideration, and everything after seemed calculated to discourage anyone else I knew from applying.
So I felt a little frisson of schadenfreude from reading the article.
This stems directly from the opacity of the California Judicial Council Technology Committee and the CCMS debacle it has generally made worse. Public comments are welcome, but secret.
The problem with incorrect input and mistakes could be reduced by having two different people enter the same data independently. It is probably not that expensive.
It could be reduced this way, agree. But if a particularly bad UI is used in both cases, I'd assume it's quite possible to have duplication of errors here too.
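For what it's worth, the comparison step in double entry is trivial to automate; the expensive parts are the second keying and the human resolution of conflicts. A minimal sketch, with hypothetical field names and values:

```python
def double_entry_diff(entry_a: dict, entry_b: dict) -> list:
    """Compare two independent keyings of the same record and return
    the field names that disagree, for a human to resolve."""
    fields = entry_a.keys() | entry_b.keys()  # union catches omitted fields too
    return sorted(f for f in fields if entry_a.get(f) != entry_b.get(f))

# Hypothetical record keyed twice by different clerks:
a = {"name": "J. Smith", "offense": "VC 22350", "disposition": "dismissed"}
b = {"name": "J. Smith", "offense": "VC 22350", "disposition": "dismissed "}
conflicts = double_entry_diff(a, b)  # -> ["disposition"] (trailing space)
```

As the parent comment notes, this only catches errors the two clerks make independently; a bad UI that steers both of them into the same mistake sails straight through.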
Does the UK passing the investigatory powers act in any way have any bearing on abuses by the legal system in other countries or are we just playing "whataboutism bingo" today?
Justice is whatever the 'justice' systems says it is. Citizens have no recourse. The 'justice' system has no checks or balances that can be accessed, except by the rich. There are no penalties for abuse or misuse, even when uncovered.
Yeah, that's exactly what I think of when I think of the concept of 'justice'. One day some people might create a just society, but it almost certainly won't be in the US.