I use the iOS app daily, and while it's not the prettiest thing in the world, it has nearly every feature of the desktop client, including full scripting support for card contents, which is amazing for things like collapsible elements and media. And, at the end of the day, it's what you learn from using it that matters.
That's part of my problem. I actually don't love how many billion options Anki has, and I'd love something with a more opinionated UI.
(I think the data model underneath Anki is... showing its age (and a lack of explicit design), and building something more opinionated on top of it would not be easy. I've thought about it a few times.)
Just have them use it on their computer or the web?
It improved my grades so much in college that I spent the 25 bucks as a broke student so I could have it on my secondhand iPad. This was before AnkiDroid even existed, so it's amazing the price is still the same.
That works for people who are already convinced they want to use it. I'm talking about people who've heard of it for the first time; they're not going to spend $25 on some new app just to try it. $25 is an unusually high price for an App Store app, and it just doesn't work unless you're really determined to use it. I don't understand why people are downvoting this.
I would argue that it's almost impossible to start first with the mobile version so this situation should never happen. The computer version is essential for setting up and getting decks.
The web version remains free as well.
The Anki app has an interface for adding/editing cards and can absolutely be used without AnkiWeb or syncing. In fact, this is how I used it myself for years. I would argue that AnkiWeb and syncing are advanced features for people who've gotten a taste of having their own decks and don't want to lose them.
Some people would need to buy a computer first. Again, it's very hard to recommend a mobile app to people if you need to add these kinds of workarounds. Especially for the main target audience, young students, many of whom live on their phones and aren't used to spending $25 on an app.
Every university student in the Western world has access to a computer, and those who are poor usually have an Android phone (with free AnkiDroid) rather than a very expensive iPhone. If they can afford an iPhone, they can afford the app.
I think you essentially have to use the desktop version no matter what—so the real dichotomy is whether you want to use a free program with free online hosting with the bundled (free) web application... or if you want to buy a $25 app.
It seems a lot like saying nobody should use GMail unless they agree to pay for premium Google Services.
I don't have this problem; I bought the iOS Anki app for myself many years ago. What I find hard is recommending it to others who've never heard of spaced repetition, especially young students, who are arguably the main target audience for this kind of tool. They're not going to jump into buying a $25 app just to try it. And as soon as you mention switching to a laptop or using the web/desktop version, they don't jump at that either. Maybe you've all had different experiences, but that's what I've seen over the years, and it has always felt sad, because the price is prohibitively high for many people.
If you're a homelab NixOS user, isn't it on you to try to answer these questions? A home lab is for learning, and if you don't want to do that, what's the point?
Maybe this is a naive take, but I don't really think LLMs have done that much to change the actual situation around ability/outcomes. If you are actually a very good C# programmer, knowing Swift and searching some Apple documentation seems very reasonable.
It might help "unstick" you if you aren't super confident, but it doesn't seem to me like it's actually leveling up mediocre programmers to "very good" ones, in familiar or unfamiliar domains.
> I don't really think LLMs have done that much to change the actual situation around ability/outcomes
from my own experiences and many others I have seen on this site and elsewhere, I'm not sure how anyone could conclude this.
> it doesn't seem to me like it's actually leveling up mediocre programmers to "very good" ones
Well, if that's your metric, then maybe your take is correct but not relevant? From the top-level comment I thought we were talking about the bar being lowered for building something thanks to AI, and you don't need to become any better at being a programmer for that.
I don't care how good of a programmer you are, if you don't know Apple stuff (Swift, Xcode, all the random iOS/Mac app BS) you aren't making an Apple app in a weekend. Learning things is easy but still takes time, and proficiency is only earned by trying and failing a number of times — unless you're an LLM, in which case you're already proficient in everything.
No, I can confirm this. I'm at least an average C# dev, with 16 years of experience.
I have built a very nicely responsive real-time syncing iOS app in what amounts to a weekend of time. (I only have an hour here and there, young kids) I had zero iOS/Swift development experience prior to it.
I can also confirm that this wouldn't have been built if it weren't for Claude Code. It's "just" an improved groceries app, that works especially well for my wife and me.
Without LLMs, and with just an hour here and there, I wouldn't have done the work to learn the intricacies of iOS and Swift dev, set up the app, and actually tweak and polish it so it works well -- just to scratch the itch of slightly better groceries handling.
There's a huge amount you're missing by boiling down their complaint to "bubble sorts or inelegant code". The architecture of the new code, how it fits into the existing system, whether it makes use of existing utility code (IMO this is a huge downside; LLMs seem to love to rewrite a little helper function 100x over), etc.
These are all important when you consider the long-term viability of a change. If you're working in a greenfield project where requirements are constantly changing and you plan on throwing this away in 3 months, maybe it works out fine. But not everyone is doing that, and I'd estimate most professional SWEs are not doing that, even!
There are certainly coupled, obtuse, contorted code styles that the LLM will be unable to twist itself into (which is different from the coupled, obtuse code it generates itself). Don't pretend this is good code, though; own that you're up to your neck in shit.
LLMs are pretty good at modifying well factored code. If you have a functional modular monolith, getting agents to add new functions and compose them into higher order functionality works pretty darn well.
> That turned into about 10 hours of conversation with Claude to pull it all together.
Did the author write an actual parser, or does this mean they spent 10 hours coaxing Claude into writing this blog post?
There's not a lot of depth here, and this doesn't really feel like it says much.
The blog post mostly compares Postgres, MySQL, SQL Server... and then flips between comparisons to BigQuery occasionally, and Snowflake other times. Is that intentional (and is it accurate?), or did the LLM get confused?
Yeah, I am disappointed by how shallow it is. Lexing, Parsing, AST would apply to nearly every programming language and not SQL alone. There’s no mention of how the parsers actually work on the code level, which would have made it an interesting read.
I want to believe people who feel they are 10x more productive with agentic tools, but I can't help noticing how much of it is doing things that don't need to be done at all. Either that, or doing them superficially, as the article shows.
You can assume that already-published open weights models are available at $0, regardless of how much money was sunk into their original development. These models will look increasingly stale over time but most software development doesn't change quickly. If a model can generate capable and up-to-date Python, C++, Java, or Javascript code in 2025 then you can expect it to still be a useful model in 2035 (based on the observation that then-modern code in these languages from 2015 works fine today, even if styles have shifted).
Depending on other people to maintain backward compatibility so that you can keep coding like it’s 2025 is its own problematic dependency.
You could certainly do it but it would be limiting. Imagine that you had a model trained on examples from before 2013 and your boss wants you to take over maintenance for a React app.
You're all entertaining the strange idea of a world where no open-weight coding models would be trained in the future. Even in a world where VC spending vanished completely, coding models are such a valuable utility that I'm sure, at the very least, companies and individuals would crowdfund them on a recurring basis, keeping them up to date.
The value of this technology has been established, it's not leaving anytime soon.
I think FAANG and the like would probably crowdfund it, given that (according to the hypothesis presented) they would only have to do it every few years, and they are ostensibly realizing improved developer productivity from these models.
I don't think the incentive to open source is there for $200 million LLMs the same way it is for frameworks like React.
And for closed source LLMs, I’ve yet to see any verifiable metrics that indicate that “productivity” increases are having any external impact—looking at new products released, new games on Steam, new startups founded etc…
Certainly not enough to justify bearing the full cost of training and infrastructure.
2013 was pre-LLM. If devs continue relying on LLMs and their training were to stop (which I find unlikely), the tooling around the LLMs would still continue to evolve, and new language features would get less attention, used only by people who don't like LLMs. Then it becomes a popularity race between new languages (and features) and LLMs steering 'old' programming languages and APIs. It's not always the best technology that wins; often it's the most popular one. You know what happened during the browser wars.
Your media consumption may be particularly biased if you didn't hear of this! I recommend following outlets from "both sides" even if you find the "other side" offensive. I hate to shill for Ground News, but it's great for this.
You are spreading typical misinformation/propaganda. Temporarily freezing accounts until the law is played out is not the same as debanking someone globally and permanently.
As far as I'm concerned, it's on the same order of badness. "Temporarily freezing" until when? The whim of some government official? No practical difference from using debanking as a political weapon.
> Yet software developed in C, with all of the foibles of its string routines, has been sold and running for years with trillions of USD is total sales.
This doesn't seem very relevant. The same can be said of countless other bad APIs: see years of bad PHP, tons of memory safety bugs in C, and things that have surely led to significant sums of money lost.
> It's also very easy to get this wrong, I almost wrote `hostname[20]=0;` first time round.
Why would you do this separately every single time, then?
The problem with bad APIs is that even the best programmers will occasionally make a mistake, and you should use interfaces (or...languages!) that prevent it from happening in the first place.
The fact we've gotten as far as we have with C does not mean this is a defensible API.
Sure, the post I was replying to made it sound like it's a surprise that anything written in C could ever have been a success.
Not many people starting a new project (commercial or otherwise) are likely to start with C, for very good reason. I'd have to have a very compelling reason to do so; as you say, there are plenty of more suitable alternatives. Years ago, many third-party libraries only offered C-style ABIs, and calling these from other languages was clumsy and convoluted (and would often require implementing cstring-style strings in another language).
> Why would you do this separately every single time, then?
It was just an illustration of what people used to do. The "set the trailing NUL byte after a strncpy() call" just became a thing lots of people did and lots of people looked for in code reviews - I've even seen automated checks. It was in a similar bucket to "stuff is allocated, let me make sure it is freed in every code path so there aren't any memory leaks", etc.
Many others would have written their own function like `curlx_strcopy()` in the original article, it's not a novel concept to write your own function to implement a better version of an API.
I don't mind so much that it's paid, given how much use I get for the price, but it sucks knowing it sucks and not being able to help make it better.