Afaict checksums only cover metadata, and littlefs doesn't do any read-integrity checking. Isn't that bad when talking to raw flash devices, since it relies on them accurately reporting errors?
Microcontrollers almost always use NOR flash, which is much less error prone than the NAND flash in your SSD.
Still, if you need ECC (maybe it's a medical application), the company designing the MCU would build it directly into the embedded flash. The foundry might have a standard 128-bit word size option and an extended 144-bit word size option, giving you 16 extra bits per word for whatever ECC or SECDED code you want to put in. From there, it's on you to build in the right error reporting mechanisms. It's certainly worth checking whether littlefs properly bubbles errors up if a read operation flags the data as bad.
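To make the SECDED idea concrete, here's a toy sketch of an extended Hamming code (single-error-correct, double-error-detect) on an 8-bit word. Real silicon does this in combinational logic on 128-bit words, not in software; the function names and the 8-bit word size here are just for illustration:

```python
def secded_encode(data, k=8):
    """Extended Hamming encode: parity bits at power-of-two positions,
    plus an overall parity bit at position 0 for double-error detection."""
    code = {}
    pos, bit = 1, 0
    while bit < k:
        if pos & (pos - 1):          # not a power of two -> data position
            code[pos] = (data >> bit) & 1
            bit += 1
        pos += 1
    n = pos - 1
    p = 1
    while p <= n:                    # parity p covers positions with bit p set
        code[p] = 0
        for i, v in code.items():
            if i != p and (i & p):
                code[p] ^= v
        p <<= 1
    overall = 0
    for v in code.values():
        overall ^= v
    word = overall                   # bit 0 holds the overall parity
    for i, v in code.items():
        word |= v << i
    return word, n

def secded_decode(word, n):
    bits = [(word >> i) & 1 for i in range(n + 1)]
    syndrome = 0
    for i in range(1, n + 1):        # syndrome = XOR of set-bit positions
        if bits[i]:
            syndrome ^= i
    overall = 0
    for b in bits:
        overall ^= b
    if syndrome and overall:         # single-bit error: syndrome is its position
        bits[syndrome] ^= 1
        status = "corrected"
    elif syndrome:                   # syndrome set but parity even: two errors
        return None, "uncorrectable"
    elif overall:                    # only the overall parity bit flipped
        status = "corrected"
    else:
        status = "ok"
    data, b = 0, 0
    for pos in range(1, n + 1):
        if pos & (pos - 1):          # gather data bits back out
            data |= bits[pos] << b
            b += 1
    return data, status
```

The "reporting mechanisms" point is visible in the decode path: a corrected read can succeed silently, but a double-bit error has to surface as an error the caller can't ignore.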
All this is assuming internal flash; I'm not sure whether external SPI flash parts offer similar extended word size options. I don't recall seeing any, but maybe they're out there.
Error correction gives you everything a checksum does and more; the tradeoff is that ECC codes are larger and more expensive to compute.
It's also worth noting the CRC is there for power-loss detection, and doesn't actually provide error detection for metadata blocks.
Checksumming data is a bit complicated in a filesystem, mostly because of random file writes. If you write to the middle of a file, you need to update a CRC, but updating that CRC may require reading back quite a bit of surrounding data to rebuild it.
To make random writes efficient, you could slice up the files, but that raises the question of how large to make the slices: too large and random file writes are expensive, too small and the per-slice CRC overhead gets costly.
You could make these slices configurable, but at this point we've kinda recreated the concept of a block device.
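A minimal sketch of the per-slice approach, to show where the tradeoff lives. The slice size and class here are hypothetical, and it assumes in-place writes within an existing file:

```python
import zlib

SLICE = 256  # hypothetical slice size: the knob that trades write cost vs CRC overhead

class SlicedFile:
    """Toy in-memory file that keeps one CRC32 per fixed-size slice."""
    def __init__(self, data):
        self.data = bytearray(data)
        self.crcs = [zlib.crc32(self.data[i:i + SLICE])
                     for i in range(0, len(self.data), SLICE)]

    def write(self, offset, buf):
        self.data[offset:offset + len(buf)] = buf
        # only the slices overlapping the write need their CRC rebuilt --
        # a whole-file CRC would need every byte after `offset` re-read
        first, last = offset // SLICE, (offset + len(buf) - 1) // SLICE
        for s in range(first, last + 1):
            self.crcs[s] = zlib.crc32(self.data[s * SLICE:(s + 1) * SLICE])

    def verify(self):
        return all(zlib.crc32(self.data[s * SLICE:(s + 1) * SLICE]) == c
                   for s, c in enumerate(self.crcs))
```

Note that `SLICE` plays exactly the role of a block size, which is the point: once you make it configurable you've reinvented block-device geometry.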
The block device representing the underlying storage already has configuration for this type of geometry: erase size, program size, read size. If we checksum (or ECC) at the block device level, we get the added benefit of also protecting metadata blocks. Most NAND flash components already have hardware ECC for this specific purpose.
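For contrast, here's the same protection pushed down a layer: a toy block device that stores a CRC32 alongside every block and verifies it on read. The class and sizes are made up for illustration; the filesystem sitting on top just sees reads that either succeed or fail:

```python
import zlib

BLOCK = 512  # hypothetical program/read size
META = 4     # trailing bytes per block reserved for the CRC32

class ChecksummedDevice:
    """Toy block device that checksums every block -- the layer a
    filesystem like littlefs would sit on top of."""
    def __init__(self, nblocks):
        self.blocks = [None] * nblocks

    def prog(self, n, data):
        assert len(data) == BLOCK
        # append the CRC to the block on program
        self.blocks[n] = data + zlib.crc32(data).to_bytes(META, "little")

    def read(self, n):
        raw = self.blocks[n]
        data, stored = raw[:BLOCK], int.from_bytes(raw[BLOCK:], "little")
        if zlib.crc32(data) != stored:
            raise IOError(f"block {n} corrupt")  # the filesystem sees a read error
        return data
```

Every block goes through `prog`/`read`, so file data and filesystem metadata are covered uniformly; the filesystem itself never has to know the checksum exists.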
TLDR: It's simpler and more effective to checksum at the block device level.
And for MCU development, simpler == less code cost.
> littlefs by itself does not provide ECC. The block nature and relatively large footprint of ECC does not work well with the dynamically sized data of filesystems, correcting errors without RAM is complicated, and ECC fits better with the geometry of block devices.
That is, ECC and read-integrity checking is determined at the block level, not the filesystem level. The filesystem assumes that it's dealing with a block device that either succeeds or reports an error.
There are also cases where you don't care about catching the errors a device develops.
Wear-leveling delays wear errors until the end of the device's life; once they start to develop, the storage eventually becomes unusable. The cheapest option may be to just let the device crash when it reaches end-of-life.
You could still checksum as a sanity check; it would be useful to ensure no corrupted data gets passed into your system, though it wouldn't protect the filesystem completely.