02 Apr, 2009

5 commits

  • * 'for-linus' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6: (58 commits)
    SUNRPC: Ensure IPV6_V6ONLY is set on the socket before binding to a port
    NSM: Fix unaligned accesses in nsm_init_private()
    NFS: Simplify logic to compare socket addresses in client.c
    NFS: Start PF_INET6 callback listener only if IPv6 support is available
    lockd: Start PF_INET6 listener only if IPv6 support is available
    SUNRPC: Remove CONFIG_SUNRPC_REGISTER_V4
    SUNRPC: rpcb_register() should handle errors silently
    SUNRPC: Simplify kernel RPC service registration
    SUNRPC: Simplify svc_unregister()
    SUNRPC: Allow callers to pass rpcb_v4_register a NULL address
    SUNRPC: rpcbind actually interprets r_owner string
    SUNRPC: Clean up address type casts in rpcb_v4_register()
    SUNRPC: Don't return EPROTONOSUPPORT in svc_register()'s helpers
    SUNRPC: Use IPv4 loopback for registering AF_INET6 kernel RPC services
    SUNRPC: Set IPV6ONLY flag on PF_INET6 RPC listener sockets
    NFS: Revert creation of IPv6 listeners for lockd and NFSv4 callbacks
    SUNRPC: Remove @family argument from svc_create() and svc_create_pooled()
    SUNRPC: Change svc_create_xprt() to take a @family argument
    SUNRPC: svc_setup_socket() gets protocol family from socket
    SUNRPC: Pass a family argument to svc_register()
    ...

    Linus Torvalds
     
  • * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (33 commits)
    ext4: Regularize mount options
    ext4: fix locking typo in mballoc which could cause soft lockup hangs
    ext4: fix typo which causes a memory leak on error path
jbd2: Update locking comments
    ext4: Rename pa_linear to pa_type
    ext4: add checks of block references for non-extent inodes
ext4: Check for a valid i_mode when reading the inode from disk
    ext4: Use WRITE_SYNC for commits which are caused by fsync()
    ext4: Add auto_da_alloc mount option
    ext4: Use struct flex_groups to calculate get_orlov_stats()
    ext4: Use atomic_t's in struct flex_groups
    ext4: remove /proc tuning knobs
    ext4: Add sysfs support
    ext4: Track lifetime disk writes
    ext4: Fix discard of inode prealloc space with delayed allocation.
    ext4: Automatically allocate delay allocated blocks on rename
    ext4: Automatically allocate delay allocated blocks on close
    ext4: add EXT4_IOC_ALLOC_DA_BLKS ioctl
    ext4: Simplify delalloc code by removing mpage_da_writepages()
    ext4: Save stack space by removing fake buffer heads
    ...

    Linus Torvalds
     
  • Trond Myklebust
     
  • This fixes unaligned accesses in nsm_init_private() when
    creating nlm_reboot keys.

    Signed-off-by: Mans Rullgard
    Reviewed-by: Chuck Lever
    Signed-off-by: Trond Myklebust

    Mans Rullgard
     
  • * git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable:
    Btrfs: try to free metadata pages when we free btree blocks
    Btrfs: add extra flushing for renames and truncates
    Btrfs: make sure btrfs_update_delayed_ref doesn't increase ref_mod
    Btrfs: optimize fsyncs on old files
    Btrfs: tree logging unlink/rename fixes
    Btrfs: Make sure i_nlink doesn't hit zero too soon during log replay
    Btrfs: limit balancing work while flushing delayed refs
    Btrfs: readahead checksums during btrfs_finish_ordered_io
    Btrfs: leave btree locks spinning more often
    Btrfs: Only let very young transactions grow during commit
    Btrfs: Check for a blocking lock before taking the spin
    Btrfs: reduce stack in cow_file_range
    Btrfs: reduce stalls during transaction commit
    Btrfs: process the delayed reference queue in clusters
    Btrfs: try to cleanup delayed refs while freeing extents
    Btrfs: reduce stack usage in some crucial tree balancing functions
    Btrfs: do extent allocation and reference count updates in the background
    Btrfs: don't preallocate metadata blocks during btrfs_search_slot

    Linus Torvalds
     

01 Apr, 2009

29 commits

  • A deadlock can occur when user space uses a signal (autofs version 4 uses
    SIGCHLD for this) to effect expire completion.

    The order of events is:

The expire process completes, but before being able to send SIGCHLD to its parent
    ...

    Another process walks onto a different mount point and drops the directory
    inode semaphore prior to sending the request to the daemon as it must ...

A third process does an lstat on the expired mount point, causing it to wait
on expire completion while (unfortunately) holding the directory semaphore.

The mount request then arrives at the daemon, which does an lstat, and we
deadlock.

    For some time I was concerned about releasing the directory semaphore around
    the expire wait in autofs4_lookup as well as for the mount call back. I
    finally realized that the last round of changes in this function made the
    expiring dentry and the lookup dentry separate and distinct so the check and
    possible wait can be done anywhere prior to the mount call back. This patch
    moves the check to just before the mount call back and inside the directory
    inode mutex release.

    Signed-off-by: Ian Kent
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ian Kent
     
  • A significant portion of the autofs_dev_ioctl_expire() and
    autofs4_expire_multi() functions is duplicated code. This patch cleans that
    up.

    Signed-off-by: Ian Kent
    Signed-off-by: Jeff Moyer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ian Kent
     
  • Use kzfree() instead of memset() + kfree().

    Signed-off-by: Johannes Weiner
    Reviewed-by: Pekka Enberg
    Acked-by: Tyler Hicks
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Addresses http://bugzilla.kernel.org/show_bug.cgi?id=12843

    "I use ramfs instead of tmpfs for /tmp because I don't use swap on my
    laptop. Some apps need 1777 mode for /tmp directory, but ramfs does not
    support 'mode=' mount option."

    Reported-by: Avan Anishchuk
    Signed-off-by: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wu Fengguang
     
  • Introduce keyed event wakeups inside the eventfd code.

    Signed-off-by: Davide Libenzi
    Cc: Alan Cox
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davide Libenzi
     
Use the events hint now sent by some devices to avoid unnecessary wakeups
for events that are of no interest to the caller. This code handles both
devices that send keyed events and the ones that do not (and even the ones
that sometimes send events, and sometimes don't).

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Davide Libenzi
    Cc: Alan Cox
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davide Libenzi
     
  • People started using eventfd in a semaphore-like way where before they
    were using pipes.

That is, counter-based resource access: a "wait()" returns immediately,
decrementing the counter by one, if the counter is greater than zero;
otherwise it waits. A "post(count)" adds count to the counter, releasing
the appropriate number of waiters. In eventfd the "post" (write) part is
fine, while the "wait" (read) does not dequeue 1, but the whole counter
value.

The problem with eventfd is that a read() on the fd returns and wipes the
whole counter, making its use as a semaphore a little more cumbersome. You
can do a read() followed by a write() of COUNTER-1, but IMO it's pretty
easy and cheap to make this work w/out extra steps. This
    patch introduces a new eventfd flag that tells eventfd to only dequeue 1
    from the counter, allowing simple read/write to make it behave like a
    semaphore. Simple test here:

    http://www.xmailserver.org/eventfd-sem.c

    To be back-compatible with earlier kernels, userspace applications should
    probe for the availability of this feature via

#ifdef EFD_SEMAPHORE
fd = eventfd2(CNT, EFD_SEMAPHORE);
if (fd == -1 && errno == EINVAL)
/* fall back to normal eventfd usage */
#else
/* fall back to normal eventfd usage */
#endif

    Signed-off-by: Davide Libenzi
    Tested-by: Michael Kerrisk
    Cc: Ulrich Drepper
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davide Libenzi
     
  • eventpoll.c uses void * in one place for no obvious reason; change it to
    use the real type instead.

    Signed-off-by: Tony Battersby
    Acked-by: Davide Libenzi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Battersby
     
  • ep_modify() doesn't need to set event.data from within the ep->lock
    spinlock as the comment suggests. The only place event.data is used is
    ep_send_events_proc(), and this is protected by ep->mtx instead of
    ep->lock. Also update the comment for mutex_lock() at the top of
    ep_scan_ready_list(), which mentions epoll_ctl(EPOLL_CTL_DEL) but not
    epoll_ctl(EPOLL_CTL_MOD).

    ep_modify() can also use spin_lock_irq() instead of spin_lock_irqsave().

    Signed-off-by: Tony Battersby
    Acked-by: Davide Libenzi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Battersby
     
The xchg in ep_unregister_pollwait() is unnecessary because it is protected
by either epmutex or ep->mtx (the same protection as ep_remove()).

If xchg were necessary, it would still be insufficient to protect against
problems: if multiple concurrent calls to ep_unregister_pollwait() were
possible, then a second caller that returns without doing anything because
nwait == 0 could return before the waitqueues are removed by the first
caller, which looks like it could lead to problematic races with
ep_poll_callback().

    So remove xchg and add comments about the locking.

    Signed-off-by: Tony Battersby
    Acked-by: Davide Libenzi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Battersby
     
  • If epoll_wait returns -EFAULT, the event that was being returned when the
    fault was encountered will be forgotten. This is not a big deal since
    EFAULT will happen only if a buggy userspace program passes in a bad
    address, in which case what happens later usually doesn't matter.
    However, it is easy to remember the event for later, and this patch makes
    a simple change to do that.

    Signed-off-by: Tony Battersby
    Acked-by: Davide Libenzi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Battersby
     
  • ep_call_nested() (formerly ep_poll_safewake()) uses "current" (without
    dereferencing it) to detect callback recursion, but it may be called from
    irq context where the use of current is generally discouraged. It would
    be better to use get_cpu() and put_cpu() to detect the callback recursion.

    Signed-off-by: Tony Battersby
    Acked-by: Davide Libenzi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Battersby
     
  • Remove debugging code from epoll. There's no need for it to be included
    into mainline code.

    Signed-off-by: Davide Libenzi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davide Libenzi
     
  • Signed-off-by: Davide Libenzi
    Cc: Pavel Pisa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davide Libenzi
     
Fix a bug inside epoll's f_op->poll() code that returns POLLIN even
though there are no actually ready monitored fds. The bug shows up if you
add an epoll fd inside another fd container (poll, select, epoll).

The problem is that the callback-based wakeups used by epoll do not carry
(patches will follow, to fix this) any information about the events that
actually happened. So the callback code, since it can't call the file*
->poll() inside the callback, chains the file* into a ready-list.

So, suppose you added an fd with EPOLLOUT only, and some data shows up on
the fd: the file* mapped by the fd will be added to the ready-list (via the
wakeup callback). During normal epoll_wait() use, this condition is
sorted out at the time we're actually able to call the file*'s
f_op->poll().

    Inside the old epoll's f_op->poll() though, only a quick check
    !list_empty(ready-list) was performed, and this could have led to
    reporting POLLIN even though no ready fds would show up at a following
    epoll_wait(). In order to correctly report the ready status for an epoll
    fd, the ready-list must be checked to see if any really available fd+event
    would be ready in a following epoll_wait().

This operation (calling f_op->poll() from inside f_op->poll()) must, like
wakeups, be handled with care, because epoll fds can be added to other
epoll fds.

    Test code:

    /*
    * epoll_test by Davide Libenzi (Simple code to test epoll internals)
    * Copyright (C) 2008 Davide Libenzi
    *
    * This program is free software; you can redistribute it and/or modify
    * it under the terms of the GNU General Public License as published by
    * the Free Software Foundation; either version 2 of the License, or
    * (at your option) any later version.
    *
    * This program is distributed in the hope that it will be useful,
    * but WITHOUT ANY WARRANTY; without even the implied warranty of
    * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    * GNU General Public License for more details.
    *
    * You should have received a copy of the GNU General Public License
    * along with this program; if not, write to the Free Software
    * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
    *
    * Davide Libenzi
    *
    */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <fcntl.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/epoll.h>
#include <poll.h>

    #define EPWAIT_TIMEO (1 * 1000)
    #ifndef POLLRDHUP
    #define POLLRDHUP 0x2000
    #endif

    #define EPOLL_MAX_CHAIN 100L

    #define EPOLL_TF_LOOP (1 << 0)

    struct epoll_test_cfg {
    long size;
    long flags;
    };

    static int xepoll_create(int n) {
    int epfd;

    if ((epfd = epoll_create(n)) == -1) {
    perror("epoll_create");
    exit(2);
    }

    return epfd;
    }

    static void xepoll_ctl(int epfd, int cmd, int fd, struct epoll_event *evt) {
    if (epoll_ctl(epfd, cmd, fd, evt) < 0) {
    perror("epoll_ctl");
    exit(3);
    }
    }

    static void xpipe(int *fds) {
    if (pipe(fds)) {
    perror("pipe");
    exit(4);
    }
    }

    static pid_t xfork(void) {
    pid_t pid;

    if ((pid = fork()) == (pid_t) -1) {
perror("fork");
    exit(5);
    }

    return pid;
    }

    static int run_forked_proc(int (*proc)(void *), void *data) {
    int status;
    pid_t pid;

    if ((pid = xfork()) == 0)
    exit((*proc)(data));
    if (waitpid(pid, &status, 0) != pid) {
    perror("waitpid");
    return -1;
    }

    return WIFEXITED(status) ? WEXITSTATUS(status): -2;
    }

    static int check_events(int fd, int timeo) {
    struct pollfd pfd;

    fprintf(stdout, "Checking events for fd %d\n", fd);
    memset(&pfd, 0, sizeof(pfd));
    pfd.fd = fd;
    pfd.events = POLLIN | POLLOUT;
    if (poll(&pfd, 1, timeo) < 0) {
    perror("poll()");
    return 0;
    }
    if (pfd.revents & POLLIN)
    fprintf(stdout, "\tPOLLIN\n");
    if (pfd.revents & POLLOUT)
    fprintf(stdout, "\tPOLLOUT\n");
    if (pfd.revents & POLLERR)
    fprintf(stdout, "\tPOLLERR\n");
    if (pfd.revents & POLLHUP)
    fprintf(stdout, "\tPOLLHUP\n");
    if (pfd.revents & POLLRDHUP)
    fprintf(stdout, "\tPOLLRDHUP\n");

    return pfd.revents;
    }

    static int epoll_test_tty(void *data) {
    int epfd, ifd = fileno(stdin), res;
    struct epoll_event evt;

    if (check_events(ifd, 0) != POLLOUT) {
    fprintf(stderr, "Something is cooking on STDIN (%d)\n", ifd);
    return 1;
    }
    epfd = xepoll_create(1);
    fprintf(stdout, "Created epoll fd (%d)\n", epfd);
    memset(&evt, 0, sizeof(evt));
    evt.events = EPOLLIN;
    xepoll_ctl(epfd, EPOLL_CTL_ADD, ifd, &evt);
    if (check_events(epfd, 0) & POLLIN) {
    res = epoll_wait(epfd, &evt, 1, 0);
    if (res == 0) {
    fprintf(stderr, "Epoll fd (%d) is ready when it shouldn't!\n",
    epfd);
    return 2;
    }
    }

    return 0;
    }

    static int epoll_wakeup_chain(void *data) {
    struct epoll_test_cfg *tcfg = data;
    int i, res, epfd, bfd, nfd, pfds[2];
    pid_t pid;
    struct epoll_event evt;

    memset(&evt, 0, sizeof(evt));
    evt.events = EPOLLIN;

    epfd = bfd = xepoll_create(1);

    for (i = 0; i < tcfg->size; i++) {
    nfd = xepoll_create(1);
    xepoll_ctl(bfd, EPOLL_CTL_ADD, nfd, &evt);
    bfd = nfd;
    }
    xpipe(pfds);
    if (tcfg->flags & EPOLL_TF_LOOP)
    {
    xepoll_ctl(bfd, EPOLL_CTL_ADD, epfd, &evt);
/*
* If we're testing for a loop, we want the wakeup
* triggered by the write to the pipe done in the child
* process to trigger a fake event. So we add the pipe
* read side with EPOLLOUT events. This will trigger
* an addition to the ready-list, but no real events
* will be there. Then the epoll kernel code will proceed
* to call f_op->poll() of the epfd, triggering the
* loop we want to test.
*/
    evt.events = EPOLLOUT;
    }
    xepoll_ctl(bfd, EPOLL_CTL_ADD, pfds[0], &evt);

/*
* The pipe write must come after the poll(2) call inside
* check_events(). This tests the nested wakeup code in
* fs/eventpoll.c:ep_poll_safewake().
* By having the check_events() (hence poll(2)) happen first,
* we have the poll wait queue filled up, and the write(2) in the
* child will trigger the wakeup chain.
*/
    if ((pid = xfork()) == 0) {
    sleep(1);
    write(pfds[1], "w", 1);
    exit(0);
    }

    res = check_events(epfd, 2000) & POLLIN;

    if (waitpid(pid, NULL, 0) != pid) {
    perror("waitpid");
    return -1;
    }

    return res;
    }

    static int epoll_poll_chain(void *data) {
    struct epoll_test_cfg *tcfg = data;
    int i, res, epfd, bfd, nfd, pfds[2];
    pid_t pid;
    struct epoll_event evt;

    memset(&evt, 0, sizeof(evt));
    evt.events = EPOLLIN;

    epfd = bfd = xepoll_create(1);

    for (i = 0; i < tcfg->size; i++) {
    nfd = xepoll_create(1);
    xepoll_ctl(bfd, EPOLL_CTL_ADD, nfd, &evt);
    bfd = nfd;
    }
    xpipe(pfds);
    if (tcfg->flags & EPOLL_TF_LOOP)
    {
    xepoll_ctl(bfd, EPOLL_CTL_ADD, epfd, &evt);
/*
* If we're testing for a loop, we want the wakeup
* triggered by the write to the pipe done in the child
* process to trigger a fake event. So we add the pipe
* read side with EPOLLOUT events. This will trigger
* an addition to the ready-list, but no real events
* will be there. Then the epoll kernel code will proceed
* to call f_op->poll() of the epfd, triggering the
* loop we want to test.
*/
    evt.events = EPOLLOUT;
    }
    xepoll_ctl(bfd, EPOLL_CTL_ADD, pfds[0], &evt);

/*
* The pipe write must come before the poll(2) call inside
* check_events(). This tests the nested f_op->poll calls code in
* fs/eventpoll.c:ep_eventpoll_poll().
* By having the pipe write(2) happen first, we make the kernel
* epoll code load the ready lists, and the following poll(2)
* done inside check_events() will test the nested poll code in
* ep_eventpoll_poll().
*/
    if ((pid = xfork()) == 0) {
    write(pfds[1], "w", 1);
    exit(0);
    }
    sleep(1);
    res = check_events(epfd, 1000) & POLLIN;

    if (waitpid(pid, NULL, 0) != pid) {
    perror("waitpid");
    return -1;
    }

    return res;
    }

    int main(int ac, char **av) {
    int error;
    struct epoll_test_cfg tcfg;

    fprintf(stdout, "\n********** Testing TTY events\n");
    error = run_forked_proc(epoll_test_tty, NULL);
    fprintf(stdout, error == 0 ?
    "********** OK\n": "********** FAIL (%d)\n", error);

    tcfg.size = 3;
    tcfg.flags = 0;
    fprintf(stdout, "\n********** Testing short wakeup chain\n");
    error = run_forked_proc(epoll_wakeup_chain, &tcfg);
    fprintf(stdout, error == POLLIN ?
    "********** OK\n": "********** FAIL (%d)\n", error);

    tcfg.size = EPOLL_MAX_CHAIN;
    tcfg.flags = 0;
    fprintf(stdout, "\n********** Testing long wakeup chain (HOLD ON)\n");
    error = run_forked_proc(epoll_wakeup_chain, &tcfg);
    fprintf(stdout, error == 0 ?
    "********** OK\n": "********** FAIL (%d)\n", error);

    tcfg.size = 3;
    tcfg.flags = 0;
    fprintf(stdout, "\n********** Testing short poll chain\n");
    error = run_forked_proc(epoll_poll_chain, &tcfg);
    fprintf(stdout, error == POLLIN ?
    "********** OK\n": "********** FAIL (%d)\n", error);

    tcfg.size = EPOLL_MAX_CHAIN;
    tcfg.flags = 0;
    fprintf(stdout, "\n********** Testing long poll chain (HOLD ON)\n");
    error = run_forked_proc(epoll_poll_chain, &tcfg);
    fprintf(stdout, error == 0 ?
    "********** OK\n": "********** FAIL (%d)\n", error);

    tcfg.size = 3;
    tcfg.flags = EPOLL_TF_LOOP;
    fprintf(stdout, "\n********** Testing loopy wakeup chain (HOLD ON)\n");
    error = run_forked_proc(epoll_wakeup_chain, &tcfg);
    fprintf(stdout, error == 0 ?
    "********** OK\n": "********** FAIL (%d)\n", error);

    tcfg.size = 3;
    tcfg.flags = EPOLL_TF_LOOP;
    fprintf(stdout, "\n********** Testing loopy poll chain (HOLD ON)\n");
    error = run_forked_proc(epoll_poll_chain, &tcfg);
    fprintf(stdout, error == 0 ?
    "********** OK\n": "********** FAIL (%d)\n", error);

    return 0;
    }

    Signed-off-by: Davide Libenzi
    Cc: Pavel Pisa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davide Libenzi
     
The base versions handle constant folding now and are shorter than these
private wrappers; use them directly.

    Signed-off-by: Harvey Harrison
    Cc: Anton Altaparmakov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Harvey Harrison
     
  • Now that the filesystem freeze operation has been elevated to the VFS, and
    is just an ioctl away, some sort of safety net for unintentionally frozen
    root filesystems may be in order.

    The timeout thaw originally proposed did not get merged, but perhaps
    something like this would be useful in emergencies.

    For example, freeze /path/to/mountpoint may freeze your root filesystem if
    you forgot that you had that unmounted.

    I chose 'j' as the last remaining character other than 'h' which is sort
    of reserved for help (because help is generated on any unknown character).

    I've tested this on a non-root fs with multiple (nested) freezers, as well
    as on a system rendered unresponsive due to a frozen root fs.

    [randy.dunlap@oracle.com: emergency thaw only if CONFIG_BLOCK enabled]
    Signed-off-by: Eric Sandeen
    Cc: Takashi Sato
    Signed-off-by: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Sandeen
     
try_to_free_pages() is used for the direct reclaim of up to
SWAP_CLUSTER_MAX pages when watermarks are low. The caller of
alloc_pages_nodemask() can specify a nodemask of nodes that are allowed to
be used, but this is not passed on to try_to_free_pages(). This can lead
to unnecessary reclaim of pages that are unusable by the caller and, in
the worst case, to allocation failure because progress was not made where
it was needed.

    This patch passes the nodemask used for alloc_pages_nodemask() to
    try_to_free_pages().

    Reviewed-by: KOSAKI Motohiro
    Acked-by: Mel Gorman
    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • Instead of open-coding the lru-list-add pagevec batching when expanding a
    file mapping from zero, defer to the appropriate page cache function that
    also takes care of adding the page to the lru list.

    This is cleaner, saves code and reduces the stack footprint by 16 words
    worth of pagevec.

    Signed-off-by: Johannes Weiner
    Acked-by: David Howells
    Cc: Nick Piggin
    Acked-by: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Peter Zijlstra
    Cc: MinChan Kim
    Cc: Lee Schermerhorn
    Cc: Greg Ungerer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Fix warnings and return values in sysfs bin_page_mkwrite(), fixing
    fs/sysfs/bin.c: In function `bin_page_mkwrite':
    fs/sysfs/bin.c:250: warning: passing argument 2 of `bb->vm_ops->page_mkwrite' from incompatible pointer type
    fs/sysfs/bin.c: At top level:
    fs/sysfs/bin.c:280: warning: initialization from incompatible pointer type

This expects my [PATCH next] "sysfs: fix some bin_vm_ops errors" to be
applied first.

    Signed-off-by: Hugh Dickins
    Cc: Nick Piggin
    Cc: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
page_mkwrite is called with neither the page lock nor the ptl held. This
means a page can be concurrently truncated or invalidated out from
underneath it. Callers are supposed to prevent truncate races themselves;
however, previously the only thing they could do on hitting one was raise
SIGBUS. SIGBUS is wrong for the case where the page has been invalidated
or truncated within i_size (e.g. hole punched). Callers may also have to
perform memory allocations in this path, where again SIGBUS would be
wrong.

    The previous patch ("mm: page_mkwrite change prototype to match fault")
    made it possible to properly specify errors. Convert the generic buffer.c
    code and btrfs to return sane error values (in the case of page removed
    from pagecache, VM_FAULT_NOPAGE will cause the fault handler to exit
    without doing anything, and the fault will be retried properly).

    This fixes core code, and converts btrfs as a template/example. All other
    filesystems defining their own page_mkwrite should be fixed in a similar
    manner.

    Acked-by: Chris Mason
    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Change the page_mkwrite prototype to take a struct vm_fault, and return
    VM_FAULT_xxx flags. There should be no functional change.

This makes it possible to return much more detailed error information to
the VM (and can also provide more information, e.g. virtual_address, to
the driver, which might be important in some special cases).

    This is required for a subsequent fix. And will also make it easier to
    merge page_mkwrite() with fault() in future.

    Signed-off-by: Nick Piggin
    Cc: Chris Mason
    Cc: Trond Myklebust
    Cc: Miklos Szeredi
    Cc: Steven Whitehouse
    Cc: Mark Fasheh
    Cc: Joel Becker
    Cc: Artem Bityutskiy
    Cc: Felix Blyakher
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
Allow non-root users with sufficient mlock rlimits to allocate hugetlb
backed shm for now, but deprecate it. This is being deprecated because the
mlock based rlimit checks for SHM_HUGETLB are not consistent with mmap
based huge page allocations.

    Signed-off-by: Ravikiran Thirumalai
    Reviewed-by: Mel Gorman
    Cc: William Lee Irwin III
    Cc: Adam Litke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ravikiran G Thirumalai
     
Fix the hugetlb subsystem so that non-root users belonging to
hugetlb_shm_group can actually allocate hugetlb backed shm.

Currently non-root users cannot map even one large page using SHM_HUGETLB
when they belong to the gid in /proc/sys/vm/hugetlb_shm_group. This is
because the allocation size is verified against the RLIMIT_MEMLOCK
resource limit even if the user belongs to hugetlb_shm_group.

This patch:
1. Fixes the hugetlb subsystem so that users with CAP_IPC_LOCK and users
belonging to hugetlb_shm_group don't need to be restricted by the
RLIMIT_MEMLOCK resource limit.
2. Disables mlock based rlimit checking (which will be reinstated and
marked deprecated in a subsequent patch).

    Signed-off-by: Ravikiran Thirumalai
    Reviewed-by: Mel Gorman
    Cc: William Lee Irwin III
    Cc: Adam Litke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ravikiran G Thirumalai
     
  • Add a helper function account_page_dirtied(). Use that from two
    callsites. reiser4 adds a function which adds a third callsite.

    Signed-off-by: Edward Shishkin
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Edward Shishkin
     
struct tty_operations::proc_fops took its place, and there is one less
create_proc_read_entry() user now!

    Signed-off-by: Alexey Dobriyan
    Cc: Alan Cox
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     
Used for the gradual switch of TTY drivers away from ->read_proc, as part
of the tree-wide move away from that hook.

    As side effect, fix possible race condition when ->data initialized after
    PDE is hooked into proc tree.

    ->proc_fops takes precedence over ->read_proc.

    Signed-off-by: Alexey Dobriyan
    Cc: Alan Cox
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     
COW means we cycle through blocks fairly quickly, and once we
free an extent on disk, it doesn't make much sense to keep the pages around.

    This commit tries to immediately free the page when we free the extent,
    which lowers our memory footprint significantly.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • Renames and truncates are both common ways to replace old data with new
    data. The filesystem can make an effort to make sure the new data is
    on disk before actually replacing the old data.

This is especially important for rename, which many applications use as
though it were atomic for both the data and the metadata involved. The
current btrfs code will happily replace a file that is fully on disk
with one that was just created and still has pending IO.

    If we crash after transaction commit but before the IO is done, we'll end
    up replacing a good file with a zero length file. The solution used
    here is to create a list of inodes that need special ordering and force
    them to disk before the commit is done. This is similar to the
    ext3 style data=ordering, except it is only done on selected files.

Btrfs is able to get away with this because it does not wait on commits
very often, even for fsync (which uses a sub-commit).

    For renames, we order the file when it wasn't already
    on disk and when it is replacing an existing file. Larger files
    are sent to filemap_flush right away (before the transaction handle is
    opened).

    For truncates, we order if the file goes from non-zero size down to
    zero size. This is a little different, because at the time of the
    truncate the file has no dirty bytes to order. But, we flag the inode
    so that it is added to the ordered list on close (via release method). We
    also immediately add it to the ordered list of the current transaction
    so that we can try to flush down any writes the application sneaks in
    before commit.

    Signed-off-by: Chris Mason

    Chris Mason
     

31 Mar, 2009

6 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-cpumask:
    oprofile: Thou shalt not call __exit functions from __init functions
    cpumask: remove the now-obsoleted pcibus_to_cpumask(): generic
    cpumask: remove cpumask_t from core
    cpumask: convert rcutorture.c
    cpumask: use new cpumask_ functions in core code.
    cpumask: remove references to struct irqaction's mask field.
    cpumask: use mm_cpumask() wrapper: kernel/fork.c
    cpumask: use set_cpu_active in init/main.c
    cpumask: remove node_to_first_cpu
    cpumask: fix seq_bitmap_*() functions.
    cpumask: remove dangerous CPU_MASK_ALL_PTR, &CPU_MASK_ALL

    Linus Torvalds
     
  • * 'proc-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/adobriyan/proc:
    Revert "proc: revert /proc/uptime to ->read_proc hook"
    proc 2/2: remove struct proc_dir_entry::owner
    proc 1/2: do PDE usecounting even for ->read_proc, ->write_proc
    proc: fix sparse warnings in pagemap_read()
    proc: move fs/proc/inode-alloc.txt comment into a source file

    Linus Torvalds
     
  • This patch ifdefs xattr_create when xattrs aren't enabled.

    Signed-off-by: Jeff Mahoney
    Signed-off-by: Linus Torvalds

    Jeff Mahoney
     
  • This reverts commit 6c87df37dcb9c6c33923707fa5191e0a65874d60.

    proc files implemented through seq_file do pread(2) now.

    Signed-off-by: Alexey Dobriyan

    Alexey Dobriyan
     
Setting ->owner as done currently (pde->owner = THIS_MODULE) is racy, as
correctly noted in bug #12454. Someone can look up an entry with a NULL
->owner, thus not pinning anything, and release it later, resulting in a
module refcount underflow.

    We can keep ->owner and supply it at registration time like ->proc_fops
    and ->data.

But this leaves ->owner as an easily manipulated field (just one C
assignment), and somebody will forget to unpin the previous module and pin
the current one when switching ->owner. ->proc_fops is declared "const",
which should give one pause.

    ->read_proc/->write_proc were just fixed to not require ->owner for
    protection.

rmmod'ed directories will be empty and return only "." and ".." -- no
harm. And directories with tricky enough readdir and lookup shouldn't be
modular; we definitely don't want such modular code.

    Removing ->owner will also make PDE smaller.

    So, let's nuke it.

    Kudos to Jeff Layton for reminding about this, let's say, oversight.

    http://bugzilla.kernel.org/show_bug.cgi?id=12454

    Signed-off-by: Alexey Dobriyan

    Alexey Dobriyan
     
struct proc_dir_entry::owner is going to be removed. Now it is only
necessary to protect PDEs which use the ->read_proc and ->write_proc hooks.

However, ->owner assignments are racy, and make it very easy for someone
to switch ->owner on a live PDE (as some subsystems do) without fixing
refcounts and so on.

    http://bugzilla.kernel.org/show_bug.cgi?id=12454

    So, ->owner is on death row.

Proxy file operations already exist (proc_file_operations); just bump the
usecount when necessary.

    Signed-off-by: Alexey Dobriyan

    Alexey Dobriyan