I vaguely remember this happening with somebody on the Audacity project, so jumping in! I believe this was on a GitHub issue for that project, but issues have since been disabled on that repository after the source moved locations. It also definitely hit some press.

If you are curious, some /pol/ and 4chan archives still have some stuff about the sneedacity incident available. There's still someone (a bot?) trying to recruit them to post shit about me from time to time.

I want to start by saying it's good that you are at least taking the time to look for this information! Stay healthily informed.

I see this as a bad analogy though: you wouldn't hear about it every time you go to the grocery store. Or, at the very least, you wouldn't stop and listen for the fifth time. You already know, and that's the point: the intention of most activism in technology (at least that I see) is to make you initially aware of it so you start to seek the information out and learn more elsewhere. (...And to give themselves good PR. We love rainbow capitalism /s)

Instagram and Twitter both get your attention during election season because they want you to be informed about how to vote. To me, that's a similar thing.


I can see where they got that idea from. Saying at the end that you won't provide permissions ends up sounding a lot more like you won't use the app than I imagine you intended. (Although subscribing to an app and then not using it would be silly.)

Speaking as someone who doesn't like the idea of AI art (so take my words with a grain of salt), my theory is that this input method exclusivity is intentional on their part, for exactly the reason you want the change. If you only let people making AI art communicate what they want through text or reference attachments (the latter of which they usually won't have), then they have to spend time figuring out how to put it into words. It IS painful to ask for those refinements, because any human would clearly understand them.

In the end, those people get to say that they spent hours, days, or weeks refining "their prompt" to get a consistent and somewhat-okay-looking image; the engineers get to train their AI to better understand the context of what someone is saying; and all the while the company gets to further legitimize a false art form.

It blows my mind that this wasn't the thought process going in. Thank you for doing this!
