
Reliably using memory cards and file systems in embedded systems

Discussion in 'Embedded' started by amrbekhit, Nov 23, 2011.

  1. amrbekhit Guest

    Hello all,

    What is the best way to use non-user-serviceable memory cards with file systems in embedded systems that can lose power at any time? The way I see it, there are two problems that need to be addressed:

    1 - File system corruption due to sudden power loss immediately after the file system has erased a sector on a device.
    2 - File system fragmentation (especially if you're not running an OS like Linux, and so don't have access to defragmentation software. In my case, my firmware is on the metal).

    So far, I can only think of the following ways to solve the above two problems:

    1 - Regarding the power supply issues, either use a UPS so that the system is never turned off, or have a short term backup supply that can provide enough power for the system to safely close all file handles and write everything it needs to the memory card after it detects that main power has been lost.
    2 - Regarding the fragmentation issues, the system could be programmed to regularly reformat the memory card at suitable times (e.g. for my application, a remote datalogger, once all the data files have been successfully sent to the server).

    What other problems would one need to address when using memory cards and file systems in embedded systems, and how would one overcome them?


    Amr Bekhit
    amrbekhit, Nov 23, 2011

  2. Arlet Ottens Guest

    Use maximum cluster size to minimize fragmentation. Also, if possible,
    pre-allocate logfiles to a large size after opening them (create file,
    seek to max size, and write a zero byte). Deleting all logfiles will be
    enough to clear any fragmentation, you don't have to reformat the card.
    Arlet Ottens, Nov 23, 2011

  3. David Brown Guest

    It doesn't just have to be in connection with an erase - non-journalled
    file systems can get corrupted if power is lost during any write.

    Also note that some memory cards can get seriously corrupted by
    unexpected power loss, even if you have a good filesystem on them. In
    particular, I believe that SD cards can occasionally lock up completely
    after a power failure during a write.

    Filesystem fragmentation is virtually irrelevant on non-spinning media
    (and it is not nearly the issue it used to be on spinning media).

    Avoiding power failures while writing is the only way to get reliability.

    As for regularly reformatting the card - that's a waste of time. Ignore
    fragmentation.
    David Brown, Nov 23, 2011
  4. Arlet Ottens Guest

    Fragmentation on memory cards can be harmful for write access. The cards
    are typically optimized for large sequential writes. If you're writing
    random blocks, the card may have to erase large blocks (typically around
    4MB) for every small write, which makes write access slower, and will
    wear out the flash sooner.
    Arlet Ottens, Nov 23, 2011
  5. The seek-and-write-a-byte trick won't do anything useful on any modern
    filesystem that supports sparse files (e.g. anything designed in the
    last 20 years or so).
    Grant Edwards, Nov 23, 2011
  6. Arlet Ottens Guest

    FAT is still widely used, especially on memory cards. It's also a fairly
    good choice for embedded work: it's simple to implement, works well with
    memory cards, and is widely supported.
    Arlet Ottens, Nov 23, 2011
  7. Fair enough, but I wouldn't use FAT (or anything similar) on a memory
    card unless the card vendor will guarantee that it does built-in
    wear-levelling and bad-block remapping. The last time I talked to CF
    vendors, none of them would...
    Grant Edwards, Nov 23, 2011
  8. David Brown Guest

    There are flash-specific filesystems that handle this sort of thing, but
    they are only good for raw flash devices - not memory cards with their
    own mapping.

    You can do a bit better by using a decent filesystem with journalling -
    but if we assume that the target system here is a small embedded system
    rather than an embedded Linux system, a journalled file system is far
    more work to implement. And of course if you want to be able to take
    the card out of the system and read it from any PC, then the only sane
    choice is FAT (the insane option for a near-universally implemented
    filesystem being NTFS).

    It would take all day to list the shortcomings of FAT - yet it is the
    only realistic choice.
    David Brown, Nov 23, 2011
  9. David Brown Guest

    Memory cards are seldom "optimised" for anything. And they are normally
    used for small random writes.

    It is true enough that writes that match erase blocks (which are
    invariably much smaller than 4 MB - 128 KB is more realistic) will be a
    little faster, and will cause less wear on the flash. But it is a
    reasonable assumption that the OP is not looking for the fastest
    possible solution (since he didn't say so, and most embedded systems can
    accept slow writes), and no matter how hard you try you will sometimes
    get very long delays in writing. You have to deal with that anyway -
    being careful with block sizes won't change that much.

    With FAT, you can't choose to align your blocks unless you want to write
    your own system - and even then it will only help a bit with some
    blocks. It's just not worth the effort.

    And fragmentation doesn't come into this - it makes no difference that
    you will be able to measure.

    Memory cards, of all kinds, are sub-optimal. FAT is the only realistic
    choice of filesystem, and it is definitely sub-optimal. But that's what
    you've got to work with - trying to tweak it is going to make a
    negligible difference.
    So either a memory card with FAT is good enough and you use it - or it
    is not, and you find a completely different solution.

    But the problem of power failures while writing /is/ something you can
    fix - and something that you /should/ fix.
    David Brown, Nov 23, 2011
  10. Arlet Ottens Guest

    A major market for memory cards is digital cameras, which almost
    exclusively use big sequential writes.
    I've found several references that mention the 4MB size. Here's an
    article, for instance:

    The article also explains the different access modes, and how the card
    optimizes multiple writes to the same allocation unit.
    The easiest solution is to leave the manufacturer formatting in place,
    assuming they know what the block alignment is, and have formatted the
    filesystem accordingly.
    Arlet Ottens, Nov 23, 2011
  11. Amr Bekhit Guest

    Dear All,

    Thanks for the useful replies so far. Just for clarification, in my system I'm using a microSD card formatted using FAT. The system is powered by an NXP LPC1752 with 64KB flash and 16KB RAM, so in my case space is at a premium. I ended up using the FAT file system because there are plenty of free embedded implementations on the web.

    As I understand it, although some file systems are more robust than others, they are all at risk of corrupting the file system on a sudden power loss, so making sure the system has a reliable source of power is something that needs to be done regardless of the file system being used.

    @Arlet: I like the idea of preallocating log file space. Since I do have a growing log file in my system, I can see this being useful in minimising fragmentation.

    Amr Bekhit, Nov 23, 2011
  12. David Brown Guest


    Just make sure the "free implementation" you use is actually licensed in
    a way that lets you use it - details are important. Here's one that I
    know you /can/ use:

    Don't bother - ignore fragmentation. You might make a few percent
    difference on average in speed - but timings will vary many times more
    than this anyway.
    David Brown, Nov 23, 2011
  13. Just be aware that pre-allocation can take quite some time on some FAT
    systems. If you pre-allocate a few GB of data, you only have to
    write the one marker byte at the end, but your system may have to
    write out several megabytes of FAT data to the card. If your logger
    uses slow SPI-mode writes, that might take many minutes.

    Mark Borgerson
    Mark Borgerson, Nov 23, 2011
  14. I've also seen references to SD cards whose internal controllers are
    optimized for FAT filesystems in that they do more frequent
    wear-leveling on the first part of the disk where the FAT(s) and directory
    sectors are located.

    Whether such techniques are universally used is an open question. Most
    SD and Micro-SD cards are being sold to camera and cell phone owners who
    may only fill the memory a few times in the lifetime of the device.

    Mark Borgerson
    Mark Borgerson, Nov 23, 2011
  15. Arlet Ottens Guest

    With a big cluster size (which I'd recommend anyway) it's not so bad.

    Assuming a FAT32 system with 32kB cluster size, and pre-allocating a 2GB
    file, you need to initialize 64k FAT entries. Given 4 bytes per FAT
    entry, that's 256 kB worth of data.
    Arlet Ottens, Nov 23, 2011
  16. OK. So extend that to a 32GB SDHC card and multiply by 4 cards. (I
    recently developed a long-term logger that uses an array of 4 SD cards
    and collects several KB/second for up to 6 months.) So you get 256KB *
    16 * 4, or about 16MB to initialize. That might well take several
    minutes! ;-) That's one of the reasons I used a custom sequential
    file system. The downside is that it takes a special application using
    raw device reads to move the data to the PC.

    I forget what the default cluster size is when you get a new 32GB
    card. Does anyone know? All my large cards have been reformatted
    many times, and I've lost track of the original configuration.

    My own opinion is that 160MB of data per day times 180 days ought to
    add up to several MSc theses!

    Mark Borgerson
    Mark Borgerson, Nov 24, 2011
  17. Arlet Ottens Guest

    FAT is limited to 4GB files, so you'd never have to initialize more than
    500 kB per file. And, you also don't have to pre-allocate the whole
    thing at one time. You can already start logging to a file, and extend
    it by multiple 1MB seeks as you go along. That way, fragmentation is
    still possible, but it will be very limited.
    A few recent cards I've looked at had 32kB cluster sizes, but that's not
    a big sample.
    Arlet Ottens, Nov 24, 2011
  18. That would work if a gap of a few seconds (or more) between files
    were acceptable. My problem was that the customer wanted 6 months of
    data with no breaks. I was using an MSP430 MCU for the logger, and it didn't
    have enough buffer memory to handle much more than the time needed
    to update the sequential file directory and start a new file at the
    end of each day.
    Mark Borgerson
    Mark Borgerson, Nov 25, 2011
  19. Arlet Ottens Guest

    All you need for reasonable pre-allocation is some time to write a
    single sector worth of FAT entries. Combined with 32kB clusters, that's
    enough to write a 4MB chunk of data. When you get near the end of that
    4MB chunk, you pre-allocate another 4MB chunk.
    Arlet Ottens, Nov 25, 2011
  20. Good point. I hadn't delved far enough into the methods for
    pre-allocation to break things down to that point.

    Mark Borgerson
    Mark Borgerson, Nov 28, 2011
