Is there a way to manually clear swap space?

Discussion in 'Apple' started by Le Chiffre, Dec 18, 2010.

  1. Le Chiffre Guest

    Besides restarting the computer?

    Like from Terminal or something? I do a lot of simultaneous work, and
    that slows down my computer when it writes to the swap space, so I was
    wondering whether, besides using kill to kill running processes, I can
    manually clear the swap space without restarting the computer.
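
    (For reference, current swap usage can be checked from Terminal with the
    standard tools below; this is only a sketch, and it assumes a stock Mac
    OS X setup.)

        # Show total / used / free swap space
        sysctl vm.swapusage

        # Show page-in / page-out counters since boot
        vm_stat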
     
    Le Chiffre, Dec 18, 2010
    #1

  2. Tom Stiller Guest

    Why do you think *clearing* swap space would speed things up? If your
    simultaneous operations are swapping too much, you're short on RAM, not
    short on "clear" swap space.
     
    Tom Stiller, Dec 18, 2010
    #2

  3. Le Chiffre Guest

    I don't think it will speed things up, I just want to be able to clean
    things up after I'm done with my simultaneous processes. I've got 4 GB
    of RAM which is fine for most of my purposes except for the occasional
    hiccup in activity. I don't even touch swap most of the time, but when
    I do I like to clean up after myself.
     
    Le Chiffre, Dec 18, 2010
    #3
  4. It's just taking up space on disk. If it's not being used, it doesn't
    affect performance. So unless you're running out of disk space, there's
    no point in trying to clear it up.

    I don't think there's any way to clear it up on the fly.
     
    Barry Margolin, Dec 18, 2010
    #4
  5. Wes Groleau Guest

    Then again, he claimed to already have four GB, and that the
    "actual problem" is a perceived need to clean up after himself.

    For which my answer is, “When you don't trust your O.S. to
    do what you want, you must buy or build a different one.” :)
     
    Wes Groleau, Dec 18, 2010
    #5
  6. Guest Guest

    quit as many apps as you can, and after a short time some of the swap
    files will go away. if that doesn't work, log out and back in.

    or buy more memory.
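
    (A hedged Terminal equivalent of "quit apps, then see whether swap
    shrinks"; the application name below is only an example.)

        # Quit an application gracefully via AppleScript
        osascript -e 'quit app "Safari"'

        # Then check whether swap usage dropped
        sysctl vm.swapusage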
     
    Guest, Dec 18, 2010
    #6
  7. Tom Stiller Guest

    Where did the OP say that he had 4 GB?
     
    Tom Stiller, Dec 18, 2010
    #7
  8. Tom Stiller Guest

    Damn! I've *got* to pay better attention.
     
    Tom Stiller, Dec 18, 2010
    #8
  9. JF Mezei Guest

    I am not familiar with how OS-X manages the page file(s).

    However, as a concept, a process is allocated a certain amount of space
    on disk for memory that is not in RAM. (In VMS terms, the process memory
    that is in RAM is called the working set.)

    Ending/killing the process will free the space on disk. In a single page
    file, those blocks become available to another process. If you have one
    page file per process, then I guess the page file is deleted along with
    the process.

    Note that a runaway process (like Firefox having gone into some porn
    site JavaScript loop that opens windows, etc.) will end up consuming huge
    amounts of page file. And when you kill it, deallocating the memory
    requires that it be read into RAM in chunks that fit the available
    physical memory, and this can take a lot of time.
     
    JF Mezei, Dec 18, 2010
    #9
  10. David Empson Guest

    In a followup post to your first comment on this thread.
     
    David Empson, Dec 18, 2010
    #10
  11. JF Mezei Guest

    Question:

    On OS-X, does each process have its own swap/page file, or is there one
    that is large enough to contain all the paged-out memory pages?


    If you have performance problems, go into Activity Monitor and look at
    the amount of virtual memory consumed by each process. You may spot one
    with an inordinate amount; quitting it will free up a lot of disk space
    and virtual memory.
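
    (A rough Terminal equivalent, assuming the stock ps and sort utilities;
    on Mac OS X, column 5 of "ps aux" is the virtual size, VSZ, in KB.)

        # The ten processes with the largest virtual size
        ps aux | sort -rn -k 5 | head -10

        # Quit the offender by PID (12345 is a placeholder)
        kill 12345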


    BTW, is there a way to limit the amount of virtual memory a process is
    allowed to consume? (On VMS, there is the PGFILQUOTA that is defined
    on a per-user basis.)
     
    JF Mezei, Dec 18, 2010
    #11
  12. JF Mezei Guest

    OK, that answers a previous question. I have 5 files. Is there some logic
    behind how many files are created? (A certain number of processes per file?)

    If you have multiple processes that map their memory to a physical file,
    then you *should* not be allowed to delete such a file until all the
    processes that have memory mapped (or reserved) in that file have quit.

    It is also possible that even if the "rm" command succeeds, the real
    file doesn't actually get deleted, because the system must ensure that
    blocks containing an active process's memory remain valid and, more
    importantly, that such blocks are not given other purposes (for
    instance, you create a file containing very personal information on
    blocks that still back another process's memory; that process then reads
    its memory and gets your data instead of its own data).

    Windows might allow this because of its roots as a single-user system,
    but I suspect any Unix variant would have proper page file protection
    mechanisms.

    Note that when you quit a process, its reservation of blocks within a
    page file may be withdrawn, but that does not mean the whole swap/page
    file will be reduced in size.

    Consider 3 processes that share a page file. The second process quits,
    releasing a large number of blocks in the middle of the page file. The
    other 2 processes have blocks allocated towards the start and towards
    the end of the file, so the system couldn't shrink it unless it did some
    serious remapping of the virtual memory of the affected processes. This
    would have some performance impact and would likely require those
    processes to be frozen while it happens.
     
    JF Mezei, Dec 18, 2010
    #12
  13. Nick Naym Guest

    Le Chiffre wrote on 12/18/10 7:26 AM:

    http://jimmitchell.org/yasu/
     
    Nick Naym, Dec 19, 2010
    #13
  14. Bob Harris Guest

    Deleting the swap files yourself will do 2 things. First, the operating
    system should have an open reference on all the swap files, so if you
    delete one, the file will not go away; it will just not have a directory
    entry pointing at it. Or let's hope the OS keeps an open reference to
    the file: if the OS is still using it and the file system releases the
    storage, then there would be a conflict between new files being given
    that storage while the OS is still using it as swap space (I had this
    happen in an OS back in the early '80s; it was not pretty).
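
    (A small illustration of that unlink-while-open behaviour, using an
    ordinary scratch file rather than the swap files themselves; mkfile is
    a Mac OS X utility, and the size and path are just examples.)

        mkfile 100m /tmp/swapdemo   # create a 100 MB scratch file
        tail -f /tmp/swapdemo &     # a background process holds it open
        rm /tmp/swapdemo            # the directory entry is gone...
        df -m /                     # ...but the 100 MB is still counted as used
        kill %1                     # close the last open reference
        df -m /                     # now the space is actually released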

    The 2nd thing it will do is maybe create a lost file. If the
    operating system does a normal shutdown, which is when the OS
    should close its reference to the swap files, then the file system
    should actually delete the file. However, if the system should
    crash (or lose power), then there will not be a clean shutdown,
    the OS will not close its handle on the swapfiles, and that
    storage will be lost until a Disk Utility repair is run.

    Your mileage may vary, as I'm basing my opinion on how this works
    in other Unix based operating systems I've worked on. And when I
    say worked on, I mean my day job is as a Unix file system
    developer.
     
    Bob Harris, Dec 19, 2010
    #14
  15. Nick Naym Guest



    Never mind...after clearing swap files, YASU requires either a restart or
    shutdown. (Some other maintenance tasks YASU performs do not.)
     
    Nick Naym, Dec 19, 2010
    #15
  16. Bob Harris Guest

    No. Pagefiles (the same as on VMS) are just a pool of storage for
    virtual memory to be paged into/out of. Mac OS X starts with a
    small 64MB pagefile, the 2nd pagefile is also 64MB, then it starts
    doubling. I think when it reaches a specific higher size, it
    stops doubling the size of the pagefile.

    There is no ratio of processes to page files.

    The logic is that if more virtual memory needs to be forced to
    disk, and the existing pagefiles do not have enough free space,
    then additional pagefiles are created.
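
    (You can see that progression by listing the directory where Mac OS X
    keeps its swap files, assuming the default location; they are named
    swapfile0, swapfile1, and so on.)

        ls -lh /private/var/vm/
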
    It is a pool of storage. Besides, deleting the file does not
    close the open file descriptor, so all you have done is made the
    file inaccessible from another program, including the command
    line. The file's inode and all its storage should still exist in
    the file system until the operating system closes its file
    descriptor reference to the swapfile at normal shutdown.

    Yup. Although way back in VMS 2.0 you could do this, and thus
    trash your file system. VMS learned.

    Unix, from the earliest days, has always separated deleting the
    directory file name entry from releasing the inode and storage, as
    long as there is an open reference on the file. And the Unix
    file systems I've worked on (as in development) have been very
    careful about not releasing storage for a file that has an open
    reference (Tru64 UNIX, HP-UX, Linux, Solaris, AIX).

    Just like non-paged pool does not get smaller after it has grown
    in size. Although the Non-Paged Pool behind ZKO did grow and
    shrink depending on the weather :)

    Yup. But those freed swap pages are then available for the next
    swap operation by any other process still running.

    Plus, most operating systems figure that if you needed it once, you
    will need it again, so they tend to keep pagefiles around once
    they have created them.
     
    Bob Harris, Dec 19, 2010
    #16
  17. JF Mezei Guest


    Actually, from a performance point of view, it is important. If you have
    a very large file, the OS (generically speaking, not sure about OS-X)
    can allocate contiguous blocks to processes, so when a process needs
    access to memory, it can do a multi-block read operation, and this
    greatly reduces head seek times, etc.

    If your data is scattered all over the file, then reading your data
    requires more disk head movement and more latency compared to doing a
    single multi-block read of consecutive blocks.

    (think about fragmentation within a file).
     
    JF Mezei, Dec 19, 2010
    #17
  18. They all share the same swap files.
    Unix has the ulimit command, but I think I've read that OS X doesn't
    implement the limit on VM size.
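
    (For what it's worth, the bash built-in looks like this; as noted above,
    OS X may simply not enforce the virtual-memory limit, so treat it as a
    sketch.)

        ulimit -v            # show the current limit (often "unlimited")
        ulimit -v 1048576    # request a ~1 GB cap for this shell, in KB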
     
    Barry Margolin, Dec 19, 2010
    #18
  19. Tom Stiller Guest

    Except for virtual memory mapping of a file, it's unlikely that there
    would be a demand for multi-block reads of consecutive blocks. If the
    file consists of consecutive blocks then there's no problem. Normal
    process scheduling and paging produce more scattered demands from the
    backing store.
     
    Tom Stiller, Dec 19, 2010
    #19
  20. Wes Groleau Guest

    :) After clearing swap files, it demands an operation
    that clears swap files. Cute.
     
    Wes Groleau, Dec 20, 2010
    #20
