eMMC questions #1083

Open · evpopov opened this issue Mar 11, 2025 · 1 comment

evpopov commented Mar 11, 2025

Hi All,
I'm currently using LittleFS on QSPI NOR flash and I'm quite happy, but I need to evaluate future use on a 4GB eMMC. Sadly, I've not yet used eMMC, so I'm stuck in theory land.
I'm working on an industrial motor controller that requires a reliable file system and doesn't need read/write performance or a large capacity. It will almost always experience unexpected loss of power. I will use the file system for the occasional firmware upgrade, a few MB of static files, and a few MB of slowly rotated logs. Nothing crazy: no more than maybe 20-30MB total in 30-40 files, so very little compared to the capacity of the eMMC. As an example device, I'm looking at the EMMC04G-MT32.

I need help with general eMMC advice as well as with the LittleFS configuration options that are specific to eMMC. Here's how far I've gotten; please consider every statement below more as a question than a fact.

I can use the eMMC "as is" in its TLC/MLC mode with an endurance of ~3000 write/erase cycles or put it in pSLC mode that should give it ~50k cycles of endurance with 50% capacity loss. I don't care about capacity so much, so I plan on switching it to pSLC mode and forgetting about it.

After reading through the standard, it appears that if I enable the write reliability parameter, the eMMC controller will guarantee that a block contains either the old data or the new data, thus freeing me from worrying about invalid data caused by an unexpected power-down. Of course, the alternative is to monitor the power supply and give the eMMC advance warning of a shutdown, but guaranteeing power for the next few hundred milliseconds may not be trivial. Does anyone have experience with this approach of using "reliable write" and simply not worrying about power loss?
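
For reference, here's a rough sketch of what enabling that parameter might look like. This assumes the standard CMD6 (SWITCH) argument layout for writing an EXT_CSD byte; `emmc_send_cmd()` is a hypothetical HAL function, and the WR_REL_SET byte index and bit assignments should be double-checked against the part's datasheet before use:

```c
#include <stdint.h>

/* Hypothetical HAL transport: issues one eMMC command and waits
 * for it to complete. Swap in your controller driver's call. */
extern int emmc_send_cmd(uint32_t opcode, uint32_t arg);

#define EMMC_CMD_SWITCH       6u    /* CMD6 (SWITCH) */
#define EXT_CSD_WR_REL_SET    167u  /* write reliability register */
#define MMC_SWITCH_WRITE_BYTE 0x3u  /* access mode: write EXT_CSD byte */

/* Request reliable writes on the user area (bit 0) and the
 * general-purpose partitions (bits 1-4). Note: WR_REL_SET is
 * one-time programmable on many parts and only writable if
 * WR_REL_PARAM allows it; verify against the datasheet. */
int emmc_enable_reliable_write(void)
{
    /* CMD6 arg: [25:24] access, [23:16] index, [15:8] value, [2:0] cmd set */
    uint32_t arg = (MMC_SWITCH_WRITE_BYTE << 24)
                 | (EXT_CSD_WR_REL_SET << 16)
                 | (0x1Fu << 8);
    return emmc_send_cmd(EMMC_CMD_SWITCH, arg);
}
```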

Then there are the LittleFS settings:

  • .read_size = 512 /* Is there any benefit in setting this to 1 and having my .read() callback translate to full 512-byte blocks, or should I just let LittleFS handle whole blocks? */
  • .block_size = 512 /* because the device I'm considering does not support partial block write. */
  • .block_cycles = 0 /* My thinking is that since the eMMC manages the NAND, LittleFS should not attempt to do any wear leveling. */

What about the .erase() callback? Isn't eMMC managed, and therefore not in need of an explicit erase? Here's what I think my options are:

  • Provide an .erase() callback that does nothing. Will LittleFS ever verify that blocks are erased before programming them?
  • Provide an .erase() callback that triggers the eMMC "erase" operation. If I understand the standards correctly, my .block_size should then equal what the eMMC standard calls an "erase group", which is (ERASE_GRP_SIZE + 1) * (ERASE_GRP_MULT + 1) * 512, where ERASE_GRP_SIZE and ERASE_GRP_MULT come from the CSD register of the eMMC. As an example, the datasheet of my sample eMMC lists these as 0x17 and 0x1F, resulting in a .block_size of 24 * 32 * 512 = 393216 bytes = 384 KB, which is a bit of a weird erase size and feels quite large (see the sketch after this list).
  • Provide an .erase() callback that triggers the eMMC "trim" operation to flag individual 512-byte blocks as "deleted". Is this safe, or do I risk ending up with a bunch of half-used "erase groups" that cannot be truly erased because they still contain a block or two that are in use?
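
For concreteness, here is the erase-group arithmetic from the second bullet as a small sketch, using the CSD field names and the example datasheet values quoted above:

```c
#include <stdint.h>
#include <stdio.h>

/* Erase group size per the eMMC CSD fields, as described above:
 * (ERASE_GRP_SIZE + 1) * (ERASE_GRP_MULT + 1) * 512 bytes. */
static uint32_t emmc_erase_group_bytes(uint8_t erase_grp_size,
                                       uint8_t erase_grp_mult)
{
    return (uint32_t)(erase_grp_size + 1) * (erase_grp_mult + 1) * 512;
}

int main(void)
{
    /* values from the EMMC04G-MT32 datasheet quoted above */
    uint32_t bytes = emmc_erase_group_bytes(0x17, 0x1F);
    printf("erase group = %u bytes (%u KiB)\n", bytes, bytes / 1024);
    /* prints: erase group = 393216 bytes (384 KiB) */
    return 0;
}
```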

Any insight on the subject is welcome and appreciated.

@geky geky added the question label Mar 13, 2025

geky commented Mar 13, 2025

Hi @evpopov,

I can use the eMMC "as is" in its TLC/MLC mode with an endurance of ~3000 write/erase cycles or put it in pSLC mode that should give it ~50k cycles of endurance with 50% capacity loss. I don't care about capacity so much, so I plan on switching it to pSLC mode and forgetting about it.

I don't have much practical experience with pSLC vs TLC/MLC; it would be interesting to know if others have thoughts on the two.

I wonder if there are any papers on the tradeoff between pSLC and TLC/MLC under wear-leveling. But just looking at the math: even with perfect wear-leveling, ~2x more storage is only ~2x more device life. The ~17x increased device life of pSLC sounds hard to beat.

I suspect it's the sort of thing where you should stick with pSLC unless you really need the storage. (Or want to sell a larger chip.)

After reading through the standard, it appears that if I enable the write reliability parameter, the eMMC controller will guarantee that a block contains either the old data or the new data, thus freeing me from worrying about invalid data caused by an unexpected power-down. Of course, the alternative is to monitor the power supply and give the eMMC advance warning of a shutdown, but guaranteeing power for the next few hundred milliseconds may not be trivial.

The standard certainly is confusing on this.

In theory, it would be more performant to not make each write reliable, and instead send some sort of equivalent of a sync/flush command in LittleFS's sync callback.

But reading the spec, it seems any data is at risk during power loss, regardless of which block it's on. This would make the reliability bits required; I suspect there's a risk of internal FTL state getting corrupted otherwise.

Would be interesting to know if I'm missing something.
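
If the sync-callback route did turn out to be safe, the glue might look like this minimal sketch; `emmc_flush_cache()` here is a hypothetical HAL function standing in for whatever cache-flush mechanism the part and driver actually expose:

```c
#include "lfs.h"

/* Hypothetical HAL call: flush the device's volatile write cache */
extern int emmc_flush_cache(void);

/* LittleFS sync callback: make everything programmed so far durable */
static int emmc_lfs_sync(const struct lfs_config *c)
{
    (void)c;
    return emmc_flush_cache() == 0 ? LFS_ERR_OK : LFS_ERR_IO;
}
```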


As for the LittleFS bits:

.read_size = 512 /* Is there any benefit in setting this to 1 and having my .read() callback translate to full 512-byte blocks, or should I just let LittleFS handle whole blocks? */

That's an interesting question. I don't think so.

With read_size = 512, LittleFS can avoid multiple reads if they happen to be in the same read line. In theory LittleFS's cache_size will batch neighboring reads, but this is heuristic-based, and some read patterns (metadata lookups) are particularly difficult to predict.

You could probably save some RAM, but at a performance cost.
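
For completeness, the translation the question describes would look something like the sketch below: a read callback that services byte-granular reads by fetching the whole sector into a scratch buffer. `emmc_read_sector()` is a hypothetical HAL function. It works, but every small read still costs a full sector transfer, hence the performance concern:

```c
#include <stdint.h>
#include <string.h>
#include "lfs.h"

/* Hypothetical HAL: reads one 512 B sector into buf */
extern int emmc_read_sector(uint32_t sector, uint8_t *buf);

/* Read callback for read_size = 1: fetch the whole sector, then
 * copy out the requested slice. With block_size = 512, a read
 * never crosses a sector boundary. */
static int emmc_lfs_read(const struct lfs_config *c, lfs_block_t block,
                         lfs_off_t off, void *buffer, lfs_size_t size)
{
    static uint8_t scratch[512];
    (void)c;

    if (emmc_read_sector(block, scratch) != 0) {
        return LFS_ERR_IO;
    }
    memcpy(buffer, scratch + off, size);
    return LFS_ERR_OK;
}
```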

.block_size = 512 /* because the device I'm considering does not support partial block write. */

👍

.block_cycles = 0 /* My thinking is that since the eMMC manages the NAND, LittleFS should not attempt to do any wear leveling. */

You actually want .block_cycles = -1 to disable wear-leveling. 0 will assert. This was changed at one point to prevent users from accidentally ending up with a non-wear-leveling configuration, which is hard to notice.

But aside from that, yes, wear-leveling on top of eMMC is just extra work with no benefit.
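
Putting the corrected settings together, a plausible starting lfs_config for 512 B eMMC sectors might look like the sketch below. The emmc_lfs_* callbacks and EMMC_SECTOR_COUNT are placeholders for your block-device glue (the real sector count comes from the device, e.g. EXT_CSD's SEC_COUNT field), and the cache/lookahead sizes are arbitrary starting points, not tuned values:

```c
#include "lfs.h"

/* placeholder block-device glue, assumed implemented elsewhere */
extern int emmc_lfs_read(const struct lfs_config *c, lfs_block_t block,
                         lfs_off_t off, void *buffer, lfs_size_t size);
extern int emmc_lfs_prog(const struct lfs_config *c, lfs_block_t block,
                         lfs_off_t off, const void *buffer, lfs_size_t size);
extern int emmc_lfs_erase(const struct lfs_config *c, lfs_block_t block);
extern int emmc_lfs_sync(const struct lfs_config *c);

#define EMMC_SECTOR_COUNT 7634944u /* placeholder: read from EXT_CSD SEC_COUNT */

static const struct lfs_config cfg = {
    .read  = emmc_lfs_read,
    .prog  = emmc_lfs_prog,
    .erase = emmc_lfs_erase,      /* can be a no-op, see below */
    .sync  = emmc_lfs_sync,

    .read_size      = 512,        /* one eMMC sector */
    .prog_size      = 512,
    .block_size     = 512,        /* one LittleFS block per sector */
    .block_count    = EMMC_SECTOR_COUNT,
    .block_cycles   = -1,         /* the eMMC's FTL does the wear-leveling */
    .cache_size     = 512,        /* arbitrary starting point */
    .lookahead_size = 128,        /* arbitrary starting point */
};
```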

What about the .erase() callback? Isn't eMMC managed, and therefore not in need of an explicit erase? Here's what I think my options are: ... As an example, the datasheet of my sample eMMC lists these as 0x17 and 0x1F, resulting in a .block_size of 24 * 32 * 512 = 393216 bytes = 384 KB, which is a bit of a weird erase size and feels quite large.

The size is not too surprising. NAND has huge block sizes; I saw a part with a full 1 MiB block size earlier. This is why LittleFS's O(n²) compaction cost has such awful performance on raw NAND.

The nice thing about eMMC and FTLs in general is that they hide the sheer scale of NAND erase blocks from traditional filesystems. I would probably stick with 512 B blocks until the performance issues in LittleFS get worked out.

Eventually, you might be able to get better performance by letting LittleFS handle the raw flash, but LittleFS is not there yet.

I would just leave erase as a noop. LittleFS is designed to work with this as long as the state of "erased" blocks doesn't change.
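
Concretely, the no-op might look like this minimal sketch; the key property, as noted above, is just that the contents of an "erased" block stay stable until LittleFS programs it again:

```c
#include "lfs.h"

/* No-op erase: the eMMC's FTL erases NAND internally, and plain
 * 512 B sector writes overwrite data in place from LittleFS's
 * point of view, so there is nothing to do here. */
static int emmc_lfs_erase(const struct lfs_config *c, lfs_block_t block)
{
    (void)c;
    (void)block;
    return LFS_ERR_OK;
}
```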

Provide an .erase() callback that triggers the eMMC "trim" operation to flag individual 512-byte blocks as "deleted". Is this safe, or do I risk ending up with a bunch of half-used "erase groups" that cannot be truly erased because they still contain a block or two that are in use?

LittleFS doesn't really have trim support. It would be nice to add this in the future, but it's low priority, and would probably depend on the block map work.

Calling trim wouldn't be harmful, but the eMMC probably implicitly trims when you do a data write, so I don't think it accomplishes much.

need to evaluate future use on a 4GB eMMC.

It's also worth mentioning that as you get to larger device sizes, you'll probably start to run into LittleFS's performance bottlenecks more. These are being worked on, but it may be worth reading up on #1079 to see some of the workarounds that are available.
