eMMC questions #1083
Hi @evpopov,
I don't have much practical experience with pSLC vs TLC/MLC, so it would be interesting to hear if others have thoughts on the two. I wonder if there are any papers on the tradeoff between pSLC and TLC/MLC under wear-leveling. But just looking at the math, even with perfect wear-leveling, ~2x more storage is only ~2x more device life. The ~17x increased device life of pSLC sounds hard to beat. I suspect it's the sort of thing where you should stick with pSLC unless you really need the storage. (Or want to sell a larger chip.)
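To put rough numbers on it using the figures from this thread (a 4GB part, ~3000 cycles in TLC/MLC mode vs ~50k cycles at half capacity in pSLC mode), the total write volume under perfect wear-leveling works out to roughly:

$$\frac{2\,\text{GB} \times 50000}{4\,\text{GB} \times 3000} \approx 8.3$$

So pSLC still wins by roughly 8x in total bytes written, even after giving up half the capacity.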
The standard certainly is confusing on this. In theory, it would be more performant to not make each write reliable, and instead send some sort of equivalent of a sync/flush command in LittleFS's `sync` callback. But reading the spec, it seems any data is at risk during powerloss, regardless of which block it's on. This would make the reliability bits required. I suspect there's a risk of internal FTL state getting corrupted otherwise. Would be interesting to know if I'm missing something. As for the LittleFS bits:
That's an interesting question. I don't think so. With `read_size = 1` you could probably save some RAM, but at a performance cost.
👍
You actually want `block_cycles = -1`, which disables LittleFS's wear-leveling entirely. But aside from that, yes, wear-leveling on top of eMMC is just extra work with no benefit.
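For reference, a minimal sketch of that setting, with the rest of the config elided:

```c
#include "lfs.h"

const struct lfs_config cfg = {
    // ...callbacks and geometry elided...

    // -1 disables littlefs's block-level wear-leveling entirely,
    // leaving wear distribution to the eMMC's internal FTL
    .block_cycles = -1,
};
```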
The size is not too surprising. NAND has huge erase blocks; I saw a part with a full 1MiB block size earlier. This is a big part of why LittleFS still has performance issues at large block sizes. The nice thing about eMMC and FTLs in general is that they hide the sheer scale of NAND erase blocks from traditional filesystems. I would probably stick with 512 B blocks until the performance issues in LittleFS get worked out. Eventually, you might be able to get better performance by letting LittleFS handle the raw flash, but LittleFS is not there yet. I would just leave the `.erase()` callback as a noop.
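Concretely, a noop erase would be just this (a sketch, using the callback signature from lfs.h):

```c
#include "lfs.h"

// littlefs calls erase before reusing a block, but it always programs
// the block afterwards and (as I understand it) never expects to read
// back an erased pattern, so on a managed device this can be a noop.
static int emmc_erase(const struct lfs_config *c, lfs_block_t block) {
    (void)c;     // block device state would hang off c->context
    (void)block; // the eMMC's FTL erases NAND internally as needed
    return 0;    // report success
}
```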
LittleFS doesn't really have trim support. It would be nice to add this in the future, but it's low priority, and would probably depend on the block map work. Calling trim wouldn't be harmful, but the eMMC probably implicitly trims when you do a data write, so I don't think it accomplishes much.
It's also worth mentioning that as you get to larger device sizes, you'll probably start to run into LittleFS's performance bottlenecks more. These are being worked on, but it may be worth reading up on #1079 to see some of the workarounds that are available.
Hi All,
I'm currently using LittleFS on QSPI NOR flash and I'm quite happy, but I need to evaluate future use on a 4GB eMMC. Sadly, I've not yet used eMMC, so I'm stuck in theory land.
I'm working on an industrial motor controller that requires a reliable file system and doesn't need read/write performance or large capacity. It will almost always experience unexpected loss of power. I will use the file system for the occasional firmware upgrade, a few MB of static files, and a few MB of slowly rotated logs... nothing crazy, no more than maybe 20-30MB total in 30-40 files, so very low compared to the capacity of the eMMC. As an example device, I'm looking at the EMMC04G-MT32.
I need help with general eMMC advice, as well as with the LittleFS configuration options that are specific to eMMC. Here's how far I've gotten; please consider any statement below more like a question than a claim.
I can use the eMMC "as is" in its TLC/MLC mode with an endurance of ~3000 write/erase cycles, or put it in pSLC mode, which should give it ~50k cycles of endurance at the cost of 50% of its capacity. I don't care much about capacity, so I plan on switching it to pSLC mode and forgetting about it.
After reading through the standard, it appears that if I enable the write reliability parameter, the eMMC controller will guarantee that a block contains either the old data or the new data, thus freeing me from worrying about invalid data caused by an unexpected power-down. Of course, the alternative is to monitor the power supply and give the eMMC advance warning of a shutdown, but guaranteeing power for the next 200-500ms may not be trivial. Does anyone have experience with this approach of using "reliable write" and simply not worrying about power loss?
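For reference, here's roughly what I think the one-time EXT_CSD programming would look like; please correct me if I'm wrong. `emmc_switch()` is a hypothetical helper that issues CMD6 (SWITCH) for one EXT_CSD byte, the byte indices are as I read them in the JEDEC eMMC standard, and these settings are write-once, so definitely verify against the datasheet before trying this on real hardware:

```c
#include <stdint.h>

// Hypothetical helper, not a real API: issues CMD6 (SWITCH) in
// write-byte mode for a single EXT_CSD byte and waits for the card
// to leave the busy state.
extern int emmc_switch(uint8_t ext_csd_index, uint8_t value);

// EXT_CSD byte indices as I read them in the JEDEC eMMC standard --
// please verify against the part's datasheet.
enum {
    EXT_CSD_PARTITION_SETTING_COMPLETED = 155,
    EXT_CSD_PARTITIONS_ATTRIBUTE        = 156, // bit 0: ENH_USR
    EXT_CSD_WR_REL_SET                  = 167, // bit 0: user area reliable write
    EXT_CSD_ERASE_GROUP_DEF             = 175,
};

// One-time provisioning: enhanced (pSLC) user area + reliable write.
// These bytes are write-once; they only take effect after
// PARTITION_SETTING_COMPLETED is set and the device is power cycled.
int emmc_provision(void) {
    int err;

    // high-capacity erase groups are required for enhanced attributes
    if ((err = emmc_switch(EXT_CSD_ERASE_GROUP_DEF, 0x01)) != 0) return err;

    // ENH_START_ADDR (bytes 136..139) = 0 and ENH_SIZE_MULT (bytes
    // 140..142) = MAX_ENH_SIZE_MULT (read from bytes 157..159) would
    // be written here, byte by byte, to cover the whole user area -- elided

    // flag the user area as enhanced (pSLC on this family of parts)
    if ((err = emmc_switch(EXT_CSD_PARTITIONS_ATTRIBUTE, 0x01)) != 0) return err;

    // enable reliable write on the user area
    if ((err = emmc_switch(EXT_CSD_WR_REL_SET, 0x01)) != 0) return err;

    // commit -- after this the configuration is permanent
    return emmc_switch(EXT_CSD_PARTITION_SETTING_COMPLETED, 0x01);
}
```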
Then there are the LittleFS settings:

- `.read_size = 512` /* Is there any benefit in setting this to 1 and having my `.read()` callback translate to full 512-byte reads? Or do I just let LittleFS handle whole blocks? */
- `.block_size = 512` /* Because the device I'm considering does not support partial block writes. */
- `.block_cycles = 0` /* My thinking is that since the eMMC manages the NAND, LittleFS should not attempt to do any wear leveling. */

What about the `.erase()` callback? Isn't eMMC managed, and therefore not in need of explicit erases? Here's what I think my options are:

1. An `.erase()` callback that does nothing. Will LittleFS ever verify that blocks are erased before programming them?
2. An `.erase()` callback that triggers the eMMC "erase" operation. If I understand the standard correctly, my `.block_size` should then be equal to what the eMMC standard refers to as an "erase group", which is (ERASE_GRP_SIZE + 1) * (ERASE_GRP_MULT + 1) * 512, where ERASE_GRP_SIZE and ERASE_GRP_MULT come from the CSD register of the eMMC. The datasheet of my sample eMMC lists these as 0x17 and 0x1F, resulting in a `.block_size` of 24 * 32 * 512 = 384KB, which is a bit of a weird erase size and feels quite large.
3. An `.erase()` callback that triggers the eMMC "trim" operation to flag individual 512-byte blocks as "deleted". Is this safe, or do I risk ending up with a bunch of half-used erase groups that can never be truly erased because they still contain a block or two that are in use?

Any insight on the subject is welcome and appreciated.
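For concreteness, here's a rough sketch of the configuration I have in mind, incorporating the `block_cycles = -1` correction and the noop erase from the reply above. This is untested: `emmc_read`/`emmc_prog`/`emmc_sync` are hypothetical block-device callbacks, and the block count assumes the 4GB example part in pSLC mode (~2GB usable):

```c
#include "lfs.h"

// hypothetical block-device callbacks; signatures match lfs.h
int emmc_read(const struct lfs_config *c, lfs_block_t block,
              lfs_off_t off, void *buffer, lfs_size_t size);
int emmc_prog(const struct lfs_config *c, lfs_block_t block,
              lfs_off_t off, const void *buffer, lfs_size_t size);
int emmc_erase(const struct lfs_config *c, lfs_block_t block); // noop, see above
int emmc_sync(const struct lfs_config *c);

const struct lfs_config cfg = {
    .read  = emmc_read,
    .prog  = emmc_prog,
    .erase = emmc_erase,
    .sync  = emmc_sync,

    .read_size      = 512,           // one eMMC sector per read
    .prog_size      = 512,           // eMMC writes whole 512-byte sectors
    .block_size     = 512,           // one sector per littlefs block
    .block_count    = 4ul*1024*1024, // ~2GB in pSLC mode / 512B -- adjust per part
    .block_cycles   = -1,            // no littlefs wear-leveling; the FTL does it
    .cache_size     = 512,
    .lookahead_size = 128,
};
```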