File too large when writing #1066

Open
BlazejKrysztofiak opened this issue Jan 29, 2025 · 5 comments
Labels
needs investigation no idea what is wrong

Comments


BlazejKrysztofiak commented Jan 29, 2025

Hi

I am using littlefs with a W25Q64JV flash memory and am trying to save about 1.7 MB of data to a file by appending it in smaller chunks (about 20 bytes each). At some point the lfs_file_write() function returns the error code LFS_ERR_FBIG.

My configuration:

#define EXTERNAL_FLASH_PAGE_SIZE        ( 256 )
#define EXTERNAL_FLASH_SECTOR_SIZE      ( 4096 )
#define EXTERNAL_FLASH_SECTORS_IN_BLOCK ( 16 )
#define EXTERNAL_FLASH_BLOCKS_NUMBER    ( 128 )

static const struct lfs_config lfsCfg =
{
    .read             = Read,
    .prog             = Prog,
    .erase            = Erase,
    .sync             = Sync,
    .read_size        = 1,
    .prog_size        = 1,
    .block_size       = EXTERNAL_FLASH_SECTOR_SIZE,
    .block_count      = EXTERNAL_FLASH_SECTORS_IN_BLOCK * EXTERNAL_FLASH_BLOCKS_NUMBER,
    .block_cycles     = 100,
    .cache_size       = EXTERNAL_FLASH_SECTOR_SIZE,
    .lookahead_size   = EXTERNAL_FLASH_SECTORS_IN_BLOCK * EXTERNAL_FLASH_BLOCKS_NUMBER / 8,
    .compact_thresh   = 0, // Default value
    .read_buffer      = ( void* )readBuffer,
    .prog_buffer      = ( void* )progBuffer,
    .lookahead_buffer = ( void* )lookaheadBuffer,
    .name_max         = FILE_SYSTEM_NAME_LENGTH_MAX,
    .file_max         = 0, // Default value
    .attr_max         = 0, // Default value
    .metadata_max     = 0, // Default value
    .inline_max       = 0, // Default value
};

My sync function is not implemented; it simply returns LFS_ERR_OK.

Am I trying to write too much data into a single file? Should I close the file somewhere between writes and then reopen it?

@geky geky added the needs investigation no idea what is wrong label Feb 3, 2025

geky commented Feb 3, 2025

Hi @BlazejKrysztofiak, thanks for creating an issue.

That is curious. I've not seen any issues related to LFS_ERR_FBIG before, the relevant logic is honestly pretty simple:

littlefs/lfs.c

Lines 3673 to 3676 in 0494ce7

if (file->pos + size > lfs->file_max) {
    // Larger than file limit?
    return LFS_ERR_FBIG;
}

Is it possible file->pos is getting corrupt somehow? Are you seeking around in the file?

I would also check that the file is open and hasn't been closed before the write call. Enabling asserts would help catch this, if they're disabled.


geky commented Feb 3, 2025

It's also worth mentioning running out of space is reported as LFS_ERR_NOSPC, which is what makes LFS_ERR_FBIG curious. It's only possible to get this if you seek close to the file limit, which is INT_MAX by default:

littlefs/lfs.h

Lines 54 to 59 in 0494ce7

// Maximum size of a file in bytes, may be redefined to limit to support other
// drivers. Limited on disk to <= 2147483647. Stored in superblock and must be
// respected by other littlefs drivers.
#ifndef LFS_FILE_MAX
#define LFS_FILE_MAX 2147483647
#endif

Unless you are on a 16-bit system?

@BlazejKrysztofiak (Author)

Hi @geky

Thank you for answering.

I have asserts enabled. I am not seeking around in the file, as I have it open in append mode and am only adding new data to the end of the file. I am using a 32-bit STM32G474.

I did not check the size of the file after I got the error, because my code removes the file if not all data was saved correctly. Probably I miscalculated the size of the data I wanted to save and exceeded LFS_FILE_MAX.

Unfortunately, I no longer have the code that caused it. The project requirements changed and I no longer need to save that much data.


geky commented Feb 6, 2025

Ah, if you passed a huge size to lfs_file_write that would also cause this. This is easy to do if an earlier function returned a negative error code that went unchecked.

It would have to be > ~2GiB though, since that's the signed 32-bit limit.

But let us know if you run into this again.

@BlazejKrysztofiak (Author)

Okay, thank you 👍
