
100% agree.

If it's someone else's project, they have full authority to decide what is and isn't an issue. With a large enough project, you're going to have your share of bad actors, people who don't read error messages, and downright crazy people. Throw in people using AI for dubious purposes like CVE inflation, and it's even worse.



> people who don't read error messages

One of my pet peeves that I will never understand.

I do not expect users to understand what an error means, but I absolutely expect them to tell me what the error says. I try to understand things from the perspective of a non-technical user, but I cannot fathom why even a non-technical user would think that they don't need to include the contents of an error message when seeking help regarding the error. Instead, it's "When I do X, I get an error".

Maybe I have too much faith in people. I've seen even software engineers become absolutely blind when dealing with errors. Ten years ago, when I was a tester, I filed a bug ticket with explicit steps that resulted in a "broken pipe error". The engineer closed the ticket as "Can Not Reproduce" with a comment saying "I can't complete your steps because I'm getting a 'broken pipe error'".


Just today I've had a "technical" dude complain about something "not working".

He even checked "thing A" and "thing B", which "looked fine", but it still "didn't work". A and B had absolutely nothing to do with each other (they solve completely different problems).

I had to ask multiple times what exactly he was trying to do and what exactly he was experiencing.

I've even had "web devs" shout there must be some kind of "network problem" between their workstation and some web server, because they were getting an http 403 error.

So, yeah. Regular users? I honestly have 0 expectations from them. They just observe that the software doesn't do what they expect and they'll complain.


Your “technical guy” sounds a lot like me.

When debugging stuff with the devs at our work, I tend to overexplain as much as I can, because often there’s some deep link between systems that I don’t understand, but they do.

I’m a pretty firm believer in “no stupid questions (or comments)”, because often going in a strange direction that the devs assure me isn’t the problem, actually turns out to be the problem (maybe thing A actually has some connection to thing B in a very abstract way!).

I think just offering a different perspective or theory can help us all solve the problem faster, so sometimes it's worth pulling that thread, even if it seems worthless in the moment.

Maybe I’m just lucky that my engineering colleagues are very patient with me (and maybe less lucky that some of our systems are so deeply intertwined), but I do hope they have more than zero expectations from me, as we mean well and just want to support where we can, knowing full well that y’all are leagues ahead in the smarts department.


Totally on board with this gripe. Absolutely infuriating. But just one minor devil's advocate on the HTTP 403, although this doesn't excuse it at all.

In Azure "private networking", many components still have a public IP and a public DNS record associated with the hostname of the given service, which clients may try to connect to if they aren't set up right.

That IP will respond with a 403 error if they try to connect to it. So Azure is indirectly training people that 403 potentially IS a "network issue"... (like their laptop is not connected to VPN, or Private DNS isn't set up right, or traffic isn't being routed correctly or some such).

Yeah, I get that's just plain silly, but it's IaaS/SaaS magic cloud abstraction, and that's just the way Microsoft does things.


> That IP will respond with a 403 error if they try to connect to it. So Azure is indirectly training people that 403 potentially IS a "network issue"...

You are not describing a network issue. You're sending requests that by design the origin servers refuse to authorize. This is basic HTTP.

https://datatracker.ietf.org/doc/html/rfc7231#page-59

The origin servers could also return 404 in this use case, but 403 is more informative and easier to troubleshoot, because it means "yeah, your request to this resource could be good, but it's failing some precondition".


They're not, but the point is that users can see the 403 due to network errors. If the VPN and networking work, the user can access the resource through the private interface. If there are issues with network routing or the VPN, they end up on the public interface and get a 403. So from the user's perspective, the same action can result in success or a 403 depending on whether there are network issues.
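
For what it's worth, here's a minimal sketch (Python; the hostname below is a made-up example) of how a client could check which side it's landing on. If DNS resolves the service to a public address when you expect private networking, the 403 is the public front end rejecting you, not the application itself:

  # Does this hostname resolve to a private (RFC 1918) address, or are we
  # falling back to the public endpoint? The hostname is hypothetical.
  import socket
  import ipaddress

  def resolves_privately(hostname: str) -> bool:
      addr = socket.gethostbyname(hostname)
      private = ipaddress.ip_address(addr).is_private
      print(f"{hostname} -> {addr} ({'private' if private else 'public'})")
      return private

  resolves_privately("myservice.blob.core.windows.net")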


My theory is that the absolute best predictor of whether someone could be (or already is) a good programmer is the ability to read exactly what is written.

It's not math, logic, or anything like that. It's the actual ability to read, exactly, without adding or removing anything.


You can test that theory with Magic: The Gathering players. Reading exactly what the card says and interpreting it against the exact text of the rules is core to the game.


> I do not expect users to understand what an error means

I'm not sure I agree.

Reason?

The old adage "handle errors gracefully".

The "gracefully" part, by definition means taking into account the UX.

Ergo, "gracefully" does not mean spitting out either (a) a meaningless generic message or (b) a bunch of incomprehensible tech-speak.

Your error should provide (a) a user-friendly plain-English description and (b) an error ID that you can then cross-reference (e.g. you know "error 42" means the database connection is foobar because the password is wrong).

During your support interaction you can then guide the user through uploading logs or whatever. Preferably through an "upload to support" button you've already carefully coded into your app.

Even if your app is targeting a techie audience, it's the same ethos.

If there is a possibility a techie could solve the problem themselves (e.g. by RTFM or checking the config file), then the onus is on you to provide a suitably meaningful error message to help them on their troubleshooting journey.
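
As a minimal sketch of that two-part ethos (plain-English text plus a stable error ID; the catalog entry and names below are made up for illustration):

  # Hypothetical catalog: each error ID maps to a plain-English description.
  ERROR_CATALOG = {
      42: "Could not connect to the database. The configured password may be wrong.",
  }

  class AppError(Exception):
      def __init__(self, error_id: int):
          self.error_id = error_id
          super().__init__(ERROR_CATALOG.get(error_id, "Unknown error"))

      def user_message(self) -> str:
          # What the user sees: readable text plus an ID to quote to support.
          text = ERROR_CATALOG.get(self.error_id, "Something went wrong.")
          return f"{text} (error {self.error_id})"

  try:
      raise AppError(42)
  except AppError as e:
      print(e.user_message())  # "... (error 42)" -- easy to report verbatim

Support then only needs the ID to cross-reference the logs, and the user only needs to read one sentence.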


There are people who, when using a computer, completely lose all notion of language comprehension if anything goes remotely wrong. You can make messages as non-technical as possible and provide troubleshooting steps, and they just throw their hands up and say "I'm not a computer person! I don't know what it's telling me!"

20 years ago, I worked the self-checkout registers in retail. I'd have people scan an item (with the obvious audible "BEEP") and then stand there confused about what to do next. The machine is telling them "Please place the item in the bag" and they'd tell me they don't know what to do. I'd say "What's the machine telling you?" "'Please place the item in the bag'" "Okay, then place the item in the bag" "Oh, okay"

It's like they don't understand words if a computer is saying them. But if they're coming from a human, they understand just fine, even if it's the exact same words.

"Incorrect password. You may have made a mistake entering it. Please try entering it again." "I don't know what that means, I'm going to call up tech support and just say I'm getting an error when I try to log in."


> completely lose all notions of language comprehension

I see this pretty often. These aren't even what you'd call typical users. They are people doing a technical job, hired against technical requirements; an application will spit out a well-written error message in the domain they're supposed to be professionals in, and their brain turns off. And yeah, it ends up in a call to me where I state the same thing and they figure the problem out.

I really don't get it.


I think it's something to do with the expectations of automation. We seem to be wired or trained to trust the machines fully, and enter a state of helplessness when we think we are driven by a machine.

I've seen this with GNSS-assisted driving, with automated driving, and with aircraft autopilot. Something disengages after earning unwarranted trust; we lose context, training fades; and when we're thrown back in control, the avalanche of context and responsibility is overwhelming, compounded by the lack of context about the previous intermediate steps.

One of the most worrying dangers of automation is this trust (even by supposedly knowledgeable technicians): the transition out of "the machine is perfect" and, when it hands you back the helm on a failure, an inability to trust the machine again.

The way to avoid entering this state seems to be to stay deeply engaged with the inputs and decisions of the system (read: "automation should be like Iron Man, not like Ultron") and to have a deep understanding of the moving parts, the critical design decisions of the system, and traces/visualizations/checklists of the intermediate steps.

I don't know where the corpus of research about this is (probably in safety-engineering research tomes), but it crystallized for me when comparing the crew reactions and behaviour in the Rio-Paris Air France crash and the Qantas A380 accident in Singapore.

For the first one, amongst many, many other errors (crew management, accounting for the weather...) and problematic sensor behaviour, the transcript tells a harrowing story of a crew that no longer trusted its aircraft after recovering from a sensor failure (a failure that ejected them from autopilot and gave them back mostly full control), ignoring their training and many of the alarms the aircraft was rightly blaring at them.

In the second case, a crew that tried to piece out what capabilities they still had after a massive engine failure (an explosion) wrecked most of the other systems with shrapnel, and that stayed enough in the loop to decide when the overwhelmed system was issuing wrong, sensor-driven instructions (transferring fuel from the unaffected tanks into actually destroyed, leaky ones).

Human factor studies are often fascinating.


I think part of it is that most users at some point encounter an error message that is just straight up wrong. For example, a login page that says "wrong password" when in reality the user is typing EXACTLY what they typed on account creation, but the site silently truncated the password. Even one such frustrating experience is enough to teach many users that as soon as they see any error message, they should stop trusting anything the system tells them, including the error message. It's extremely difficult to rebuild user trust after this sort of UX contract violation, particularly because less technical users don't mentally differentiate separate computer systems. All the systems are just "the computer."

Also arguably the users are kind of right. An error indicates that a program has violated its invariants, which may lead to undefined behavior. Any output from a program after entering the realm of undefined behavior SHOULD be mistrusted, including error messages.


This is not about understanding the message, but about switching mental activity. I've found myself in similar situations many times. One example: I tried to pay my bills in my bank's online application but got an error. After several attempts, I actually read the message, and it said "Header size exceeded...". That gave me a clue that the app had probably put too much history into cookies. I cleared the browser data, logged in again, and everything worked.

Even though the error message was clearly understandable given my expertise, it took a surprisingly long time to switch from one mental activity ("pay bills") to another ("investigate a technical problem"). And you have to throw away all your short-term memory to switch to the other task. So all the rumors about "stupid" users are a direct consequence of how the human mind works.


> This is not about understanding the message, ...

99% of the population have no idea what "Header size exceeded" means, so it absolutely is about understanding the message, if the devs expect people to read the error.


Yeah, I would certainly not expect the user to understand what to do about a "Header size exceeded" error.

But I WOULD expect the user, when sending a message to support, to say they're getting a "Header size exceeded" error, rather than just say "an error".


This seems to be missing the point. Sometimes users see error messages. Sometimes they're good, sometimes they're bad; and yeah, software engineers should endeavor to make sure that error behaviors are graceful, but of all the not-perfect things in this world, error handling is one of the least perfect, so users do encounter unfortunately ungraceful errors.

In that case (and even sometimes in the more "graceful" cases), we don't always expect the user to know what an error message means.


I've had the experience of sitting beside several categories of people across my career and watching them attempt to do something that is causing issues or errors. The pattern I have seen the most is what I can only describe as speedrunning the error. People will try to do the thing they (think they) know how to do. When information or an error appears on the screen, they completely ignore it; if it is a popup, it is closed as quickly as possible, and if it is shown somewhere on the screen that doesn't interrupt their flow, it is completely ignored.

I have asked people to repeat the process more slowly, and they will still click through errors without a chance to read them. I have asked people to go step by step and pause after every step so we can look at what's going on, and they will treat "do thing and close resulting error" as a single step, pausing only after having closed the error.

The only explanation I have that I can understand is that closing errors and popups is a reflex for many people, such that they don't even register doing it. I don't know if this is true or if people would agree with it.

I've seen this with programmers at all levels of seniority. I've seen it with technically capable non-programmers. I've seen it with non-technical people just trying to use some piece of software.

The only thing that's ever been effective for me is to coach people to copy all text and take screenshots of literally everything that is happening on their screen (too many people send narrow screenshots that obscure useful context, so I ask for whole-screen screenshots only). Some people do well with this. Some never seem to put any effort into the communication.


If I can victim-blame for a moment, I don't know what my mom is supposed to do when a streaming service on her TV says there's a problem and will she please report a GUID to the support department.

No, my mom is not eidetic, and no, she's not going to upload a photo of her living room.

Totally agree with you, though, when the full error message is at least capable of being copied to the clipboard.


Most (all?) photo apps include a crop function, allowing your mom to just crop out everything else.


I hope you’re being sarcastic. If not, expecting someone’s parent to know how to use a photo app’s crop functionality just to communicate an error state is a failure of understanding typical streaming app users.


I wasn't being sarcastic. This is not a case of not being capable of doing something, it's about not knowing the functionality exists. Cropping is very simple. I assumed the GP didn't know about it either or he would have taught his mom already.

Could the manufacturer solve this in a better way? Probably, but that won't solve the issue the customer has now.


Poe's Law goes both ways. As a matter of fact, my mom invented digital photo cropping (or "pixel array extent adjustment," because even in her prime she wasn't a marketing genius, bless her heart). We know better than to expect her to submit a bug report once she's settled down to watch TV for the evening.

Jokes aside, "upload a photo of her living room" was meant to highlight the ridiculousness of the UX. I believe the designer of that flow had an OKR to decrease the number of reported bugs.


Joel Spolsky solved this over 25 years ago https://www.joelonsoftware.com/2000/04/26/designing-for-peop...


That solves nothing, just describes the problem.


Well I know where you stand regarding P = NP


> Instead, it's "When I do X, I get an error".

Worse still, just “it doesn’t work” without even any steps.

I sometimes gave those users an analogy like going to the doctor or a mechanic and not providing enough information, but I don’t think it worked.


My wife’s a doctor. Trust me, this isn’t unique to technical pursuits.

Patient: My foot hurts.

Wife: Which part of it?

Patient: It all hurts.

Wife: Does your heel hurt?

Patient: No.

Wife: Does your arch hurt?

Patient: No.

Wife: Do your toes hurt?

Patient: This one does.

Wife: Does anything but that one toe hurt?

Patient: No.

Wife: puts on a brave smile


The trouble here is that github issues is crap. Most bug trackers have ways to triage submissions. When a rando submits something, it has status "unconfirmed". Developers can then recategorize it, delete it, mark it as invalid, confirm that it's a real bug and mark it "confirmed", etc. Github issues is mostly a discussion system that was so inadequate that they supplemented it with another discussion system.


> Most bug trackers have ways to triage submissions. When a rando submits something, it has status "unconfirmed". Developers can then recategorize it, delete it, mark it as invalid, confirm that it's a real bug and mark it "confirmed", etc.

As far as I'm aware, most large open GitHub projects use tags for that kind of classification. Would you consider that too clunky?


> Would you consider that too clunky?

Absolutely. It's a patch that can achieve a similar result, but it's a patch indeed. A major feature of every ticketing system, if not "the" major feature, is the ticket flow. Which should be opinionated. Customizable with the owner's opinion, but opinionated nonetheless. Using labels to cover missing areas in that flow is a clunky patch, in my book.


IMO it still has poor discoverability: constant filtering between the triage status flags and non-flagged stuff, stuff that might have gone unflagged by accident, reporters putting tags on issues themselves, issues that non-admins can only close rather than truly delete, random people complaining about this or that on unrelated tickets...

It all stems from the fact that all issues are in this one large pool rather than there being a completely separate list with already vetted stuff that nobody else can write into.


Sounds like it could be fixed by making it configurable to hide all issues without a certain tag (or auto-apply a hiding tag) for the issues "landing page".


This still puts the onus on the developers to categorise the issues which I'm guessing they don't want to do.


How is that different from other bug tracking systems? The devs have to triage submitted tickets there too


There are several automation solutions for GH issues. You could have an automatic “unconfirmed” tag applied to every user-created issue if you wanted.
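
As a rough sketch of that idea (a hedged example: OWNER/REPO, the issue number, and the token variable are placeholders; in practice you'd wire this to a GitHub Actions trigger that fires when an issue is opened), labeling via GitHub's REST API:

  # Apply an "unconfirmed" label to a freshly opened issue via the REST API.
  import os
  import requests

  def label_new_issue(owner: str, repo: str, issue_number: int) -> None:
      resp = requests.post(
          f"https://api.github.com/repos/{owner}/{repo}/issues/{issue_number}/labels",
          headers={
              "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
              "Accept": "application/vnd.github+json",
          },
          json={"labels": ["unconfirmed"]},
      )
      resp.raise_for_status()

  label_new_issue("OWNER", "REPO", 123)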


RFC1925¹, section 2(3):

  With sufficient thrust, pigs fly just fine. However, this is
  not necessarily a good idea. It is hard to be sure where they
  are going to land, and it could be dangerous sitting under them
  as they fly overhead.
Translation: sure, you can make this work by piling automation on top. But that doesn't make it a good system to begin with, and it won't really produce a robust result either. I'd really rather have a better foundation to start with.

¹ https://www.rfc-editor.org/rfc/rfc1925


I hate to break it to you, but all the other ticket systems do this by piling automation on top as well.


> I hate to break it to you, but all the other ticket systems do this by piling automation on top as well.

The rebuke to your comment is right in your comment: "other ticket systems do this by…"

The ticket system does it. As in, it has it built-in and/or well integrated. If GitHub had the same level of integration that other ticket systems achieve with their automation, this'd be a non-issue. But it doesn't, and it's a huge problem.

P.S.: I hate to break it to you, but "I hate to break it to you, but" is quite poor form.


No, it's not that well integrated. They don't call them "tags", but they work exactly the same way. JIRA, the most commonly cited example in this thread, has a whole separate engine for it, and your JIRA admin builds the ticket flow manually. All the way back in RT, this sort of thing was handled by a cron job. GitHub leveraging Actions to accomplish this isn't much of a difference.

P.S. I didn't ask


They're already doing that by moving discussions to issues. In fact it's more work for them because they have to actually create the issue instead of just adding a "confirmed bug" label or whatever.

I guess it probably leads to higher quality issue descriptions at least, but otherwise this seems pretty dumb and user-hostile.


There’s a one-click button to convert from discussion to issue (and vice versa). It’s hardly more work. But I do feel like discussions are kind of hidden and out of the way on GitHub.

On repos I maintain, I use an “untriaged” label for issues and I convert questions to discussions at issue triage time.


Isn't that basically what Ghostty is doing also?


That's always the case. Who else should triage?


> As far as I'm aware, most large open GitHub projects use tags for that kind of classification. Would you consider that too clunky?

Speaking for another large open GitHub project:

Absofuckinglutely yes.

I cannot overstate how bad this workflow is. There seems to be a trend now of other platforms becoming more popular (GitLab, Forgejo/Codeberg, etc.), and I hope to god it either forces GitHub to improve this pile of expletive or makes these "alternate" platforms not so alternate anymore so we can move off.


None of the alternative forges have a better issue tracker implementation. For the most part, they are basically copies of GitHub. No hope there. :(


The classification here is not what type of issue it is, it's whether it's an issue or not. Creating an issue for things that aren't issues is fundamentally broken. There's no way to fix that except by piling bad design on bad design to make it so that you can represent non-issues using issues and have everything still be somewhat usable.


Just trying to triage and tag all of them can still be a full-time job’s worth of work in a popular repo.


Having spent only a week triaging Mozilla bug reports I do not see how a different ticketing system makes it easier. It is just hard work.


> Most bug trackers have ways to triage submissions. When a rando submits something, it has status "unconfirmed". Developers can then recategorize it, delete it, mark it as invalid, confirm that it's a real bug and mark it "confirmed", etc.

All of this is possible in GitHub Issues and is in fact done by many projects; by this metric I don't see how GitHub Issues is any different from, say, JIRA. In both cases, as you mentioned, someone needs to triage those issues, which would, of course, be the developers as well. Nothing gained, nothing lost.


Having used many issue trackers over the years (JIRA, custom tools, GH Issues), I've found GitHub issues to be very usable.

Especially with the new features added last year (parent tickets, better boolean search, etc.), although I'm not sure if you need to opt in to get those.

In fact, it's become our primary issue tracker at work.


I take the Basecamp philosophy of, “If it’s important enough, we won’t be able to ignore it, and it’s ok for anything else to fall through the cracks until someone feels like working on it.”

Well, that’s a paraphrase, but I remember reading that rough idea on their blog years ago, and it strikes me as perfectly fine for many kinds of projects.


Discussion systems all the way down :-). This is a fair assessment of the GitHub Issues system. I suspect that because git(1) can be a change-control system for anything, there is never any hope of making an effective issue tracker for the particular thing it happens to be managing. The choice the project made, letting the developers determine when something is an issue, essentially adds a semantic layer on top of Issues that customizes it for this particular corpus of change management.


You're 100% correct. I had a CVE reported to me in ~2022, shortly after the ChatGPT launch. I spent 4 hours slicing and dicing the issue, responding to how it was wrong, linking to background information and specific lines in the code, and then asking what I was missing. The response was literally "shrugs AI". Good for them.


Yeah, but the linked post does not say that they won't look at reports of bugs or security problems, just that they are using Issues to manage things they have decided are issues that should be worked on, and that public reporting via issue tickets would mess up that system. It's purely about their project's use of the Issues system on GitHub.

Unfortunately there is no such magic bullet for trawling through bug reports from users, but pushing more work out to the reporter can be reasonably effective at avoiding that kind of time wasting. Require that the reporters communicate responsively, that they test things promptly, that they provide reproducers and exact recipes for reproduction. Ask that they run git bisect / creduce / debug options / etc. Proactively close out bugs or mark them appropriately if reporters don't do the work.


Don't forget the rude, entitled, and aggressive; they are legion.

It's simply a great idea. The mindset should be 'understand what's happening', not 'this is the software's fault'.

The discussion area also serves as a convenient explanation/exploration of the surrounding issues that is easy to find. It reduces the maintainer's workload and should be the default.


Yeah but a good issue tracker should be able to help you filter that stuff out. That ghostty finds discussions to be a better way to triage user requests/issues is somewhat quirky, although a perfectly valid option. As is just using issues, imo. Just good to make sure users know how to report an issue, and what information to include.


To be clear, I think discussions on the whole as a product are pretty bad. I'm not happy having to use them, but given my experience trying different approaches across multiple "popular" projects on GH, this approach has so far been the least bad. Although I'm still sad about it.

> Yeah but a good issue tracker should be able to help you filter that stuff out.

Agreed. This highlights GitHub's issue management system being inadequate.

(Note: I'm the creator/lead of Ghostty)


I believe most of it is people expecting stuff to work differently and not having time to wrap their heads around proper usage of the system, because they need a specific outcome and don't need mastery of the tool.

The downside is that "Facebookization" created a trend where people expect everything to be obvious and achievable in a minimal number of clicks, without configuring anything.

Now "LLMization" will push the trend forward. If I can make a video with Sora by typing what I want in the box, why would I need to click around or type some arcane configuration for a tool?

I don't think it is bad in general; it is only bad for specialist software that you cannot use without deeper understanding, but the expectation is still there.


It is weird to push the idea that Facebook is some kind of pinnacle of good and easy-to-use UI. That's the first thing. It's quite the opposite, with people constantly complaining about how bad, clunky, and confusing Facebook is. And it is not a recent trend either. It has always been this way, and e.g. VK has always had a better UI/UX than Facebook (and Telegram's is better than WhatsApp's).


Not as a pinnacle of good, but a pinnacle of “you don’t have to think, just scroll, occasionally like something” ;).

Then people expect accounting software to be just: log in, click one or two buttons.


But still, compared to something like email, the previous standard for most people, Facebook was an unbelievable step forward. People complain about anything.


Facebook is a step forward in terms of features. But it is a clear regression from email in terms of UI, ease of use, and understandability. Email is very simple in both concept and practice.


I think I disagree - when it comes to sharing large files, something like a video, or even a picture in 2005, email was nowhere near as good. And also having a place to comment on things without the stacking up of reply chrome is genuinely better.


> when it comes to sharing large files, something like a video, or even a picture in 2005, email was nowhere near as good.

That's just a stupid limitation, and not even a technical one. You could happily send GBs over email. You can also easily filter the allowed attachment size by sender on the recipient side, because both pieces of information have already been provided by the time the attachment is transmitted.


Email was created for email, not for sharing massive files (even though nothing stops you). Facebook is even worse than email at sharing large files, video files, or even pictures (imgur, say, is much better).

Commenting on things belongs to the list of features (as distinguished from UX/UI) that I talked about.


Facebook literally obfuscates its UI to stop you from turning off "features" they want to push on you.

It is a UI designed to be hard to use.


You're talking about two different things here (and I'm not condoning either, to be clear.)

1) UI = a clearly documented way to configure all features and make the software work exactly how you want.

2) UI = load a web page and try to do the thing you wanted to do (in this case communicate with some specific people).

FB is clearly terrible at 1 but pretty alright at 2.


> If I can make a video with Sora by typing what I want in the box

IME, people cannot even articulate what they want when they know what they want, let alone when they don’t even understand what they want in the first place.


Agreed. We have to stop making “first-class citizen” ease of use the goal for anything but general communication platforms. This allows keeping a barrier between specialists and common users.



