20 Dec, 2008

3 commits


18 Dec, 2008

2 commits


17 Dec, 2008

2 commits


11 Dec, 2008

6 commits

  • * 'to-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/frob/linux-2.6-roland:
    tracehook: exec double-reporting fix

    Linus Torvalds
     
  • Miles Lane tailing /sys files hit a BUG which Pekka Enberg has tracked
    to my commit 966c8c12dc9e77f931e2281ba25d2f0244b06949 "sprint_symbol():
    use less stack", exposing a bug in slub's list_locations():
    kallsyms_lookup() writes a 0 to namebuf[KSYM_NAME_LEN-1], but that was
    beyond the end of the page provided.

    The 100 bytes of slop which list_locations() allows at the end of the
    page look roughly enough for all the other stuff it might print after
    the symbol before it checks again: break out KSYM_SYMBOL_LEN earlier
    than before.

    Latencytop and ftrace are using KSYM_NAME_LEN buffers where they need
    KSYM_SYMBOL_LEN buffers, and vmallocinfo a 2*KSYM_NAME_LEN buffer
    where it wants a KSYM_SYMBOL_LEN buffer: fix those before anyone
    copies them.

    [akpm@linux-foundation.org: ftrace.h needs module.h]
    Signed-off-by: Hugh Dickins
    Cc: Christoph Lameter
    Cc: Miles Lane
    Acked-by: Pekka Enberg
    Acked-by: Steven Rostedt
    Acked-by: Frederic Weisbecker
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
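
    The buffer-size mismatch above can be sketched in userspace. The
    constants and the fake_sprint_symbol() helper below are illustrative
    stand-ins (the real constants live in <linux/kallsyms.h>), not the
    kernel's actual definitions:

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative values: at the time of this commit KSYM_NAME_LEN was 128.
     * sprint_symbol() emits more than the bare name ("name+0xoff/0xsize [mod]"),
     * so its output needs the larger KSYM_SYMBOL_LEN. */
    #define KSYM_NAME_LEN   128
    #define MODULE_NAME_LEN 64
    #define KSYM_SYMBOL_LEN (KSYM_NAME_LEN + MODULE_NAME_LEN + 32)

    /* Hypothetical stand-in for sprint_symbol(): formats a resolved symbol. */
    static int fake_sprint_symbol(char *buf, size_t len, const char *name,
                                  unsigned long off, unsigned long size,
                                  const char *mod)
    {
        return snprintf(buf, len, "%s+%#lx/%#lx [%s]", name, off, size, mod);
    }

    int main(void)
    {
        char small[KSYM_NAME_LEN];   /* the buggy size latencytop/ftrace used */
        char right[KSYM_SYMBOL_LEN]; /* the size the fix switches them to */
        char name[KSYM_NAME_LEN];

        memset(name, 'a', sizeof(name) - 1);   /* maximal symbol name */
        name[sizeof(name) - 1] = '\0';

        int need = fake_sprint_symbol(right, sizeof(right),
                                      name, 0x10, 0x200, "slub");
        assert(need >= (int)sizeof(small)); /* overflows a KSYM_NAME_LEN buffer */
        assert(need < (int)sizeof(right));  /* fits a KSYM_SYMBOL_LEN buffer */
        return 0;
    }
    ```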
     
  • On umount, two events will be dispatched to the watcher:

    1: inotify_dev_queue_event(.., IN_UNMOUNT,..)
    2: remove_watch(watch, dev)
    ->inotify_dev_queue_event(.., IN_IGNORED, ..)

    But if the watch has the IN_ONESHOT bit set, then it will be released
    inside the first event, which results in accessing an invalid object
    later. IMHO it is not a pure regression. This bug wasn't triggered
    during the initial inotify interface testing phase because of another
    bug in the IN_ONESHOT handling logic :)

    commit ac74c00e499ed276a965e5b5600667d5dc04a84a
    Author: Ulisses Furquim
    Date: Fri Feb 8 04:18:16 2008 -0800
    inotify: fix check for one-shot watches before destroying them
    As the IN_ONESHOT bit is never set when an event is sent we must check it
    in the watch's mask and not in the event's mask.

    TESTCASE:
    mkdir mnt
    mount -ttmpfs none mnt
    mkdir mnt/d
    ./inotify mnt/d&
    umount mnt ## << lockup or crash here

    TESTSOURCE:
    /* gcc -o inotify inotify.c */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(int argc, char **argv)
    {
    	char buf[1024];
    	struct inotify_event *ie;
    	char *p;
    	int i;
    	ssize_t l;

    	p = argv[1];
    	i = inotify_init();
    	inotify_add_watch(i, p, ~0);

    	l = read(i, buf, sizeof(buf));
    	printf("read %zd bytes\n", l);
    	ie = (struct inotify_event *) buf;
    	printf("event mask: %x\n", ie->mask);
    	return 0;
    }

    Signed-off-by: Dmitri Monakhov
    Cc: John McCutchan
    Cc: Al Viro
    Cc: Robert Love
    Cc: Ulisses Furquim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitri Monakhov
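
    The one-shot dispatch problem above can be modelled in userspace.
    The struct and helper names below are illustrative, not inotify's
    internals; only the IN_* mask values match <sys/inotify.h>:

    ```c
    #include <assert.h>

    #define IN_ONESHOT 0x80000000u  /* matches <sys/inotify.h> */
    #define IN_UNMOUNT 0x00002000u
    #define IN_IGNORED 0x00008000u

    struct watch { unsigned int mask; int alive; };

    /* Queue one event; returns 0 if the watch was released (one-shot).
     * Note the one-shot test is on the WATCH's mask, not the event's,
     * per the quoted ac74c00e fix. */
    static int queue_event(struct watch *w, unsigned int event)
    {
        (void)event;
        if (!w->alive)
            return -1;              /* the use-after-free in the real bug */
        if (w->mask & IN_ONESHOT) {
            w->alive = 0;           /* watch released inside first event */
            return 0;
        }
        return 1;
    }

    /* Fixed umount path: stop after the one-shot release instead of
     * dispatching IN_IGNORED to a freed watch. */
    static int umount_dispatch(struct watch *w)
    {
        if (queue_event(w, IN_UNMOUNT) == 0)
            return 0;               /* watch gone, do not touch it again */
        return queue_event(w, IN_IGNORED);
    }

    int main(void)
    {
        struct watch oneshot = { IN_ONESHOT, 1 };
        struct watch normal  = { 0, 1 };
        assert(umount_dispatch(&oneshot) == 0); /* released on first event */
        assert(umount_dispatch(&normal) == 1);  /* got both events safely */
        return 0;
    }
    ```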
     
  • The large pages fix from bcf8039ed45 broke 32-bit pagemap by pulling the
    pagemap entry code out into a function with the wrong return type.
    Pagemap entries are 64 bits on all systems and unsigned long is only 32
    bits on 32-bit systems.

    Signed-off-by: Matt Mackall
    Reported-by: Doug Graham
    Cc: Alexey Dobriyan
    Cc: Dave Hansen
    Cc: [2.6.26.x, 2.6.27.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matt Mackall
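
    The truncation the commit describes is easy to show in userspace:
    a helper with an unsigned-long return type silently drops the upper
    half of a 64-bit pagemap entry on a 32-bit build. The function names
    below are illustrative, not the kernel's:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Pagemap entries are 64 bits on all systems; on 32-bit, unsigned long
     * is 32 bits, so uint32_t here plays the buggy return type. */
    static uint32_t buggy_pme(uint64_t entry)
    {
        return (uint32_t)entry;     /* top 32 bits silently lost */
    }

    static uint64_t fixed_pme(uint64_t entry)
    {
        return entry;               /* full 64-bit entry preserved */
    }

    int main(void)
    {
        uint64_t entry = 0x1234567890ULL;   /* entry with bits above bit 31 */
        assert(buggy_pme(entry) != entry);  /* truncated */
        assert(fixed_pme(entry) == entry);  /* intact */
        return 0;
    }
    ```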
     
  • Revert

    commit e8ced39d5e8911c662d4d69a342b9d053eaaac4e
    Author: Mingming Cao
    Date: Fri Jul 11 19:27:31 2008 -0400

    percpu_counter: new function percpu_counter_sum_and_set

    As described in

    revert "percpu counter: clean up percpu_counter_sum_and_set()"

    the new percpu_counter_sum_and_set() is racy against updates to the
    cpu-local accumulators on other CPUs. Revert that change.

    This means that ext4 will be slow again. But correct.

    Reported-by: Eric Dumazet
    Cc: "David S. Miller"
    Cc: Peter Zijlstra
    Cc: Mingming Cao
    Cc:
    Cc: [2.6.27.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Revert

    commit 1f7c14c62ce63805f9574664a6c6de3633d4a354
    Author: Mingming Cao
    Date: Thu Oct 9 12:50:59 2008 -0400

    percpu counter: clean up percpu_counter_sum_and_set()

    Before this patch we had the following:

    percpu_counter_sum(): return the percpu_counter's value

    percpu_counter_sum_and_set(): return the percpu_counter's value, copying
    that value into the central value and zeroing the per-cpu counters before
    returning.

    After this patch, percpu_counter_sum_and_set() has gone, and
    percpu_counter_sum() gets the old percpu_counter_sum_and_set()
    functionality.

    Problem is, as Eric points out, the old percpu_counter_sum_and_set()
    functionality was racy and wrong. It zeroes out counters on "other"
    cpus, without holding any locks which would prevent races against
    updates from those other CPUs.

    This patch reverts 1f7c14c62ce63805f9574664a6c6de3633d4a354. This means
    that percpu_counter_sum_and_set() still has the race, but
    percpu_counter_sum() does not.

    Note that this is not a simple revert - ext4 has since started using
    percpu_counter_sum() for its dirty_blocks counter as well.

    Note that this revert patch changes percpu_counter_sum() semantics.

    Before the patch, a call to percpu_counter_sum() will bring the counter's
    central counter mostly up-to-date, so a following percpu_counter_read()
    will return a close value.

    After this patch, a call to percpu_counter_sum() will leave the counter's
    central accumulator unaltered, so a subsequent call to
    percpu_counter_read() can now return a significantly inaccurate result.

    If there is any code in the tree which was introduced after
    e8ced39d5e8911c662d4d69a342b9d053eaaac4e was merged, and which depends
    upon the new percpu_counter_sum() semantics, that code will break.

    Reported-by: Eric Dumazet
    Cc: "David S. Miller"
    Cc: Peter Zijlstra
    Cc: Mingming Cao
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
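
    The semantic difference described above can be modelled with a
    single-threaded userspace sketch (names and the BATCH value are
    illustrative, not the kernel's percpu_counter implementation):

    ```c
    #include <assert.h>

    #define NR_CPUS 4
    #define BATCH   32   /* illustrative fold threshold */

    struct pcpu_counter {
        long count;              /* central value */
        long counters[NR_CPUS];  /* per-cpu deltas */
    };

    static void pcpu_add(struct pcpu_counter *c, int cpu, long amount)
    {
        c->counters[cpu] += amount;
        if (c->counters[cpu] >= BATCH || c->counters[cpu] <= -BATCH) {
            c->count += c->counters[cpu];  /* fold into central, under lock */
            c->counters[cpu] = 0;
        }
    }

    /* Cheap read: central value only; may be stale by up to NR_CPUS*BATCH. */
    static long pcpu_read(struct pcpu_counter *c) { return c->count; }

    /* Accurate sum. After this revert it no longer zeroes the per-cpu
     * counters, so it cannot race with pcpu_add() on other CPUs -- but a
     * following pcpu_read() may now return a significantly stale value. */
    static long pcpu_sum(struct pcpu_counter *c)
    {
        long sum = c->count;
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
            sum += c->counters[cpu];
        return sum;
    }

    int main(void)
    {
        struct pcpu_counter c = { 0, { 0 } };
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
            pcpu_add(&c, cpu, 10);      /* below BATCH: stays per-cpu */
        assert(pcpu_sum(&c) == 40);     /* accurate */
        assert(pcpu_read(&c) == 0);     /* central untouched: stale */
        return 0;
    }
    ```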
     

10 Dec, 2008

1 commit

  • The patch 6341c39 "tracehook: exec" introduced a small regression in
    2.6.27 regarding binfmt_misc exec event reporting. Since the reporting
    is now done in the common search_binary_handler() function, an exec
    of a misc binary will result in two (or possibly multiple) exec events
    being reported, instead of just a single one, because the misc handler
    contains a recursive call to search_binary_handler.

    To add to the confusion, if PTRACE_O_TRACEEXEC is not active, the multiple
    SIGTRAP signals will in fact cause only a single ptrace intercept, as the
    signals are not queued. However, if PTRACE_O_TRACEEXEC is on, the debugger
    will actually see multiple ptrace intercepts (PTRACE_EVENT_EXEC).

    The test program included below demonstrates the problem.

    This change fixes the bug by calling tracehook_report_exec() only in the
    outermost search_binary_handler() call (bprm->recursion_depth == 0).

    The additional change to restore bprm->recursion_depth after each
    binfmt load_binary call is actually superfluous for this bug, since we
    test the value saved on entry to search_binary_handler(). But it keeps
    the use of the depth count to its most obvious expected meaning.
    Depending on what binfmt handlers do in certain cases, there could
    have been false-positive tests for recursion limits before this
    change.

    /* Test program using PTRACE_O_TRACEEXEC.
       This forks and exec's the first argument with the rest of the
       arguments, while ptrace'ing. It expects to see one PTRACE_EVENT_EXEC
       stop and then a successful exit, with no other signals or events in
       between.

       Test for the kernel doing two PTRACE_EVENT_EXEC stops for a
       binfmt_misc exec:

       $ gcc -g traceexec.c -o traceexec
       $ sudo sh -c 'echo :test:M::foobar::/bin/cat: > /proc/sys/fs/binfmt_misc/register'
       $ echo 'foobar test' > ./foobar
       $ chmod +x ./foobar
       $ ./traceexec ./foobar; echo $?

       Good: prints 0. Bad: wait_for() reports an unexpected second
       PTRACE_EVENT_EXEC stop and the program exits non-zero.
    */

    #include <stdio.h>
    #include <stdlib.h>
    #include <signal.h>
    #include <unistd.h>
    #include <sys/ptrace.h>
    #include <sys/wait.h>

    static void
    wait_for (pid_t child, int expect)
    {
      int status;
      pid_t p = wait (&status);
      if (p != child)
        {
          perror ("wait");
          exit (2);
        }
      if (status != expect)
        {
          fprintf (stderr, "unexpected status %#x != %#x\n", status, expect);
          exit (3);
        }
    }

    int
    main (int argc, char **argv)
    {
      pid_t child = fork ();

      if (child < 0)
        {
          perror ("fork");
          return 127;
        }
      else if (child == 0)
        {
          ptrace (PTRACE_TRACEME);
          raise (SIGUSR1);
          execv (argv[1], &argv[1]);
          perror ("execve");
          _exit (127);
        }

      wait_for (child, W_STOPCODE (SIGUSR1));

      if (ptrace (PTRACE_SETOPTIONS, child,
                  0L, (void *) (long) PTRACE_O_TRACEEXEC) != 0)
        {
          perror ("PTRACE_SETOPTIONS");
          return 4;
        }

      if (ptrace (PTRACE_CONT, child, 0L, 0L) != 0)
        {
          perror ("PTRACE_CONT");
          return 5;
        }

      wait_for (child, W_STOPCODE (SIGTRAP | (PTRACE_EVENT_EXEC << 8)));

      if (ptrace (PTRACE_CONT, child, 0L, 0L) != 0)
        {
          perror ("PTRACE_CONT");
          return 6;
        }

      wait_for (child, W_EXITCODE (0, 0));

      return 0;
    }

    Reported-by: Arnd Bergmann
    CC: Ulrich Weigand
    Signed-off-by: Roland McGrath

    Roland McGrath
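
    The outermost-call guard can be sketched with a toy recursive handler
    search; the format names and the depth limit below are illustrative,
    not the kernel's binfmt machinery:

    ```c
    #include <assert.h>
    #include <string.h>

    static int exec_events;   /* PTRACE_EVENT_EXEC reports seen */

    /* Toy binfmt search: a "misc" binary re-enters the search with its
     * interpreter, like binfmt_misc does with /bin/cat in the testcase. */
    static int search_binary_handler(const char *fmt, int depth)
    {
        int ret;

        if (depth > 5)
            return -1;          /* recursion limit (ELOOP in the kernel) */

        if (strcmp(fmt, "misc") == 0)
            ret = search_binary_handler("elf", depth + 1); /* recursive exec */
        else
            ret = 0;            /* a real handler loaded the binary */

        /* The fix: report the exec only from the outermost call. */
        if (ret == 0 && depth == 0)
            exec_events++;
        return ret;
    }

    int main(void)
    {
        exec_events = 0;
        assert(search_binary_handler("elf", 0) == 0);
        assert(exec_events == 1);   /* plain exec: one event */

        exec_events = 0;
        assert(search_binary_handler("misc", 0) == 0);
        assert(exec_events == 1);   /* misc exec: still one event, not two */
        return 0;
    }
    ```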
     

09 Dec, 2008

1 commit

  • While 440037287c5 "[PATCH] switch all filesystems over to
    d_obtain_alias" removed some cases where fh_to_dentry() and
    fh_to_parent() could return NULL, there are still a few NULL returns
    left in individual filesystems. Thus it was a mistake for that commit
    to remove the handling of NULL returns in the callers.

    Revert those parts of 440037287c5 which removed the NULL handling.

    (We could, alternatively, modify all implementations to return -ESTALE
    instead of NULL, but that proves to require fixing a number of
    filesystems, and in some cases it's arguably more natural to return
    NULL.)

    Thanks to David for original patch and Linus, Christoph, and Hugh for
    review.

    Signed-off-by: J. Bruce Fields
    Cc: David Howells
    Cc: Christoph Hellwig
    Cc: Hugh Dickins
    Signed-off-by: Linus Torvalds

    J. Bruce Fields
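
    The restored caller-side handling can be sketched as follows; the
    ERR_PTR/IS_ERR macros are simplified userspace stand-ins for the
    kernel's, and result_to_errno() is a hypothetical caller:

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <stddef.h>

    struct dentry;   /* opaque for this sketch */

    /* Simplified stand-ins for the kernel's error-pointer helpers. */
    #define ERR_PTR(err)  ((struct dentry *)(long)(err))
    #define PTR_ERR(p)    ((long)(p))
    #define IS_ERR(p)     ((unsigned long)(p) >= (unsigned long)-4095)

    /* A caller of ->fh_to_dentry()/->fh_to_parent() must tolerate all
     * three conventions: a valid dentry, an ERR_PTR(-ESTALE)-style error,
     * or plain NULL from the filesystems that still return it. */
    static long result_to_errno(struct dentry *d)
    {
        if (d == NULL)
            return -ESTALE;   /* restored: treat NULL like a stale handle */
        if (IS_ERR(d))
            return PTR_ERR(d);
        return 0;             /* success */
    }

    int main(void)
    {
        assert(result_to_errno(NULL) == -ESTALE);
        assert(result_to_errno(ERR_PTR(-ESTALE)) == -ESTALE);
        struct dentry *ok = (struct dentry *)0x1000; /* pretend-valid pointer */
        assert(result_to_errno(ok) == 0);
        return 0;
    }
    ```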
     

06 Dec, 2008

1 commit

  • Changeset a238b790d5f99c7832f9b73ac8847025815b85f7 (Call fasync()
    functions without the BKL) introduced a race which could leave
    file->f_flags in a state inconsistent with what the underlying
    driver/filesystem believes. Revert that change, and also fix the same
    races in ioctl_fioasync() and ioctl_fionbio().

    This is a minimal, short-term fix; the real fix will not involve the
    BKL.

    Reported-by: Oleg Nesterov
    Cc: Andi Kleen
    Cc: Al Viro
    Cc: stable@kernel.org
    Signed-off-by: Jonathan Corbet
    Signed-off-by: Linus Torvalds

    Jonathan Corbet
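
    The f_flags update in question is a plain read-modify-write; the fix
    serializes it in both the fcntl and ioctl paths. A minimal sketch of
    the FIONBIO-style update (struct and function names are illustrative,
    with the lock shown only as comments):

    ```c
    #include <assert.h>

    #define O_NONBLOCK 04000    /* matches <fcntl.h> on x86 */
    #define FASYNC     020000

    struct file { unsigned int f_flags; };

    /* Two concurrent "f_flags |= X" / "f_flags &= ~Y" read-modify-writes
     * can lose each other's update; the short-term fix is to perform the
     * update under one lock (the BKL in this commit) everywhere. */
    static void set_nonblock_locked(struct file *f, int on)
    {
        /* lock_kernel(); */
        if (on)
            f->f_flags |= O_NONBLOCK;
        else
            f->f_flags &= ~O_NONBLOCK;
        /* unlock_kernel(); */
    }

    int main(void)
    {
        struct file f = { FASYNC };
        set_nonblock_locked(&f, 1);
        assert(f.f_flags == (FASYNC | O_NONBLOCK));
        set_nonblock_locked(&f, 0);
        assert(f.f_flags == FASYNC);  /* FASYNC survives the update */
        return 0;
    }
    ```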
     

05 Dec, 2008

2 commits

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/bdev:
    [PATCH] fix bogus argument of blkdev_put() in pktcdvd
    [PATCH 2/2] document FMODE_ constants
    [PATCH 1/2] kill FMODE_NDELAY_NOW
    [PATCH] clean up blkdev_get a little bit
    [PATCH] Fix block dev compat ioctl handling
    [PATCH] kill obsolete temporary comment in swsusp_close()

    Linus Torvalds
     
  • When project quota is active and is being used for directory tree
    quota control, we disallow rename outside the current directory
    tree. This requires a check to be made after all the inodes
    involved in the rename are locked. We fail to unlock the inodes
    correctly if we disallow the rename when the target is outside the
    current directory tree. This results in a hang on the next access
    to the inodes involved in the failed rename.

    Reported-by: Arkadiusz Miskiewicz
    Signed-off-by: Dave Chinner
    Tested-by: Arkadiusz Miskiewicz
    Signed-off-by: Lachlan McIlroy

    Dave Chinner
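
    The shape of the fix is the classic unlock-on-error-path pattern: every
    inode locked before the directory-tree check must be unlocked when the
    rename is disallowed. A userspace sketch (names illustrative, not XFS
    code):

    ```c
    #include <assert.h>

    struct inode { int locked; };

    /* Lock all rename participants, run the project-quota directory-tree
     * check, and -- the missing piece before this fix -- unlock them all
     * on the disallowed path before returning the error. */
    static int do_rename_check(struct inode **ips, int n, int same_tree)
    {
        for (int i = 0; i < n; i++)
            ips[i]->locked = 1;

        if (!same_tree) {               /* target outside the quota tree */
            for (int i = 0; i < n; i++)
                ips[i]->locked = 0;     /* the previously missing unlocks */
            return -1;                  /* rename disallowed */
        }

        /* ... perform the rename, then unlock on the normal path ... */
        for (int i = 0; i < n; i++)
            ips[i]->locked = 0;
        return 0;
    }

    int main(void)
    {
        struct inode a = {0}, b = {0}, c = {0};
        struct inode *ips[] = { &a, &b, &c };
        assert(do_rename_check(ips, 3, 0) == -1);
        assert(!a.locked && !b.locked && !c.locked); /* nothing left locked */
        assert(do_rename_check(ips, 3, 1) == 0);
        return 0;
    }
    ```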
     

04 Dec, 2008

3 commits


03 Dec, 2008

1 commit

  • * 'linux-next' of git://git.infradead.org/ubifs-2.6:
    UBIFS: pre-allocate bulk-read buffer
    UBIFS: do not allocate too much
    UBIFS: do not print scary memory allocation warnings
    UBIFS: allow for gaps when dirtying the LPT
    UBIFS: fix compilation warnings
    MAINTAINERS: change UBI/UBIFS git tree URLs
    UBIFS: endian handling fixes and annotations
    UBIFS: remove printk

    Linus Torvalds
     

02 Dec, 2008

7 commits

  • kernel-doc handles macros now (it has for quite some time), so change the
    ntfs_debug() macro's kernel-doc to be just before the macro instead of
    before a phony function prototype.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Randy Dunlap
    Cc: Anton Altaparmakov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     
  • It has been thought that the per-user file descriptors limit would also
    limit the resources that a normal user can request via the epoll
    interface. Vegard Nossum reported a very simple program (a modified
    version attached) that can make a normal user request a pretty large
    amount of kernel memory, well within its maximum number of fds. To
    solve this problem, default limits are now imposed, and /proc based
    configuration has been introduced. A new directory has been created,
    named /proc/sys/fs/epoll/, and inside it there are two configuration
    points:

    max_user_instances = Maximum number of devices - per user

    max_user_watches = Maximum number of "watched" fds - per user

    The current default for "max_user_watches" limits the memory used by
    epoll to store "watches" to 1/32 of the amount of low RAM. As an
    example, a 256MB 32-bit machine will have "max_user_watches" set to
    roughly 90000. That should be enough to not break existing heavy
    epoll users. The default value for "max_user_instances" is set to
    128, which should be enough too.

    This also changes the userspace, because a new error code can now come out
    from EPOLL_CTL_ADD (-ENOSPC). The EMFILE from epoll_create() was already
    listed, so that should be ok.

    [akpm@linux-foundation.org: use get_current_user()]
    Signed-off-by: Davide Libenzi
    Cc: Michael Kerrisk
    Cc:
    Cc: Cyrill Gorcunov
    Reported-by: Vegard Nossum
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davide Libenzi
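
    The "roughly 90000" default works out as a back-of-the-envelope
    calculation: 1/32 of low RAM divided by the per-watch kernel
    footprint. The ~90-byte per-watch cost below is an assumption for
    illustration, not the kernel's actual constant:

    ```c
    #include <assert.h>
    #include <stdio.h>

    #define LOW_RAM_BYTES (256UL << 20)  /* the 256MB example machine */
    #define WATCH_COST    90UL           /* ASSUMED bytes per epoll watch */

    /* max_user_watches default: (low RAM / 32) / per-watch cost. */
    static unsigned long default_max_watches(unsigned long low_ram)
    {
        return (low_ram / 32) / WATCH_COST;
    }

    int main(void)
    {
        unsigned long n = default_max_watches(LOW_RAM_BYTES);
        printf("max_user_watches ~= %lu\n", n);   /* 93206 with these numbers */
        assert(n > 80000 && n < 100000);          /* "roughly 90000" */
        return 0;
    }
    ```

    More low RAM means proportionally more allowed watches, which is why
    the limit is expressed as a fraction rather than a fixed count.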
     
  • We're panicking in ocfs2_read_blocks_sync() if a jbd-managed buffer is seen.
    At first glance, this seems ok but in reality it can happen. My test case
    was to just run 'exorcist'. A struct inode is being pushed out of memory but
    is then re-read at a later time, before the buffer has been checkpointed by
    jbd. This causes a BUG to be hit in ocfs2_read_blocks_sync().

    Reviewed-by: Joel Becker
    Signed-off-by: Mark Fasheh

    Mark Fasheh
     
  • In init_dlmfs_fs(), if the call to kmem_cache_create() fails, the code
    returns the status from the earlier bdi_init() call. The correct
    behavior is to set status to -ENOMEM before going to "bail:".

    Signed-off-by: Coly Li
    Acked-by: Sunil Mushran
    Signed-off-by: Mark Fasheh

    Coly Li
     
  • In ocfs2_unlock_ast(), call wake_up() on lockres before releasing
    the spin lock on it. As soon as the spin lock is released, the
    lockres can be freed.

    Signed-off-by: David Teigland
    Signed-off-by: Mark Fasheh

    David Teigland
     
  • The locking_state dump, ocfs2_dlm_seq_show, reads the lvb on locks where it
    has not yet been initialized by a lock call.

    Signed-off-by: David Teigland
    Acked-by: Joel Becker
    Signed-off-by: Mark Fasheh

    David Teigland
     
  • This patch fixes two typos in comments of ocfs2.

    Signed-off-by: Coly Li
    Signed-off-by: Mark Fasheh

    Coly Li
     

01 Dec, 2008

1 commit


28 Nov, 2008

1 commit

  • udf_clear_inode() can leave behind buffers on mapping's i_private list (when
    we truncated preallocation). Call invalidate_inode_buffers() so that the list
    is properly cleaned up before we return from udf_clear_inode(). This is
    ugly and suggests that we should clean up preallocation earlier than in
    clear_inode(), but currently there's no such call available, since
    drop_inode() is called under the inode lock and thus is unusable for
    disk operations.

    Signed-off-by: Jan Kara

    Jan Kara
     

27 Nov, 2008

1 commit

  • The conversion to write_begin/write_end interfaces had a bug where we
    were passing a bad parameter to cifs_readpage_worker. Rather than
    passing the page offset of the start of the write, we needed to pass the
    offset of the beginning of the page. This was reliably showing up as
    data corruption in the fsx-linux test from LTP.

    It also became evident that this code was occasionally doing unnecessary
    read calls. Optimize those away by using the PG_checked flag to indicate
    that the unwritten part of the page has been initialized.

    CC: Nick Piggin
    Acked-by: Dave Kleikamp
    Signed-off-by: Jeff Layton
    Signed-off-by: Steve French

    Jeff Layton
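
    The parameter fix above is a one-line computation: masking the write
    position down to its page boundary. A minimal sketch (PAGE_SIZE value
    and function name are illustrative):

    ```c
    #include <assert.h>

    #define PAGE_SIZE 4096UL
    #define PAGE_MASK (~(PAGE_SIZE - 1))

    /* The bug passed the write's own offset to cifs_readpage_worker();
     * the worker needs the file offset of the start of the page. */
    static unsigned long page_start(unsigned long pos)
    {
        return pos & PAGE_MASK;
    }

    int main(void)
    {
        unsigned long pos = 3 * PAGE_SIZE + 123;   /* write lands mid-page */
        assert(page_start(pos) == 3 * PAGE_SIZE);  /* the correct parameter */
        assert(page_start(pos) != pos);            /* the value passed before */
        return 0;
    }
    ```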
     

25 Nov, 2008

3 commits

  • Since commit c98451bd, the loop in nlm_lookup_host() unconditionally
    compares the host's h_srcaddr field to the incoming source address.
    For client-side nlm_host entries, both are always AF_UNSPEC, so this
    check is unnecessary.

    Since commit 781b61a6, which added support for AF_INET6 addresses to
    nlm_cmp_addr(), nlm_cmp_addr() now returns FALSE for AF_UNSPEC
    addresses, which causes nlm_lookup_host() to create a fresh nlm_host
    entry every time it is called on the client.

    These extra entries will eventually expire once the server is
    unmounted, so the impact of this regression, introduced with lockd
    IPv6 support in 2.6.28, should be minor.

    We could fix this by adding an arm in nlm_cmp_addr() for AF_UNSPEC
    addresses, but really, nlm_lookup_host() shouldn't be matching on the
    srcaddr field for client-side nlm_host lookups.

    Signed-off-by: Chuck Lever
    Signed-off-by: J. Bruce Fields

    Chuck Lever
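
    The alternative fix mentioned above -- an AF_UNSPEC arm in
    nlm_cmp_addr() -- can be sketched in userspace (this is a simplified
    model, not lockd's actual code, which instead stops matching on
    h_srcaddr for client-side lookups):

    ```c
    #include <assert.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Compare two addresses; with the AF_UNSPEC arm, two unset client-side
     * source addresses compare equal again, so the host-cache lookup hits. */
    static int nlm_cmp_addr(const struct sockaddr *a, const struct sockaddr *b)
    {
        if (a->sa_family != b->sa_family)
            return 0;
        switch (a->sa_family) {
        case AF_UNSPEC:
            return 1;   /* both unset: treat as equal */
        case AF_INET: {
            const struct sockaddr_in *a4 = (const struct sockaddr_in *)a;
            const struct sockaddr_in *b4 = (const struct sockaddr_in *)b;
            return a4->sin_addr.s_addr == b4->sin_addr.s_addr;
        }
        case AF_INET6: {
            const struct sockaddr_in6 *a6 = (const struct sockaddr_in6 *)a;
            const struct sockaddr_in6 *b6 = (const struct sockaddr_in6 *)b;
            return memcmp(&a6->sin6_addr, &b6->sin6_addr,
                          sizeof(a6->sin6_addr)) == 0;
        }
        }
        return 0;
    }

    int main(void)
    {
        struct sockaddr u1, u2;
        memset(&u1, 0, sizeof(u1));
        memset(&u2, 0, sizeof(u2));
        u1.sa_family = AF_UNSPEC;
        u2.sa_family = AF_UNSPEC;
        /* Without the AF_UNSPEC arm this returned 0, so every client-side
         * nlm_lookup_host() call created a fresh nlm_host entry. */
        assert(nlm_cmp_addr(&u1, &u2) == 1);
        return 0;
    }
    ```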
     
  • Thanks to Matthew Dodd for this bug report:

    A file label issue while running SELinux in MLS mode provoked the
    following bug, which is a result of use before init on a 'struct list_head'.

    In nfsd4_list_rec_dir(), if the call to dentry_open() fails, the
    'goto out' skips INIT_LIST_HEAD(), which results in the normally
    improbable case where list_entry() returns NULL.

    Trace follows.

    NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
    SELinux: Context unconfined_t:object_r:var_lib_nfs_t:s0 is not valid
    (left unmapped).
    type=1400 audit(1227298063.609:282): avc: denied { read } for
    pid=1890 comm="rpc.nfsd" name="v4recovery" dev=dm-0 ino=148726
    scontext=system_u:system_r:nfsd_t:s0-s15:c0.c1023
    tcontext=system_u:object_r:unlabeled_t:s15:c0.c1023 tclass=dir
    BUG: unable to handle kernel NULL pointer dereference at 00000004
    IP: [] list_del+0x6/0x60
    *pde = 0d9ce067 *pte = 00000000
    Oops: 0000 [#1] SMP
    Modules linked in: nfsd lockd nfs_acl auth_rpcgss exportfs autofs4
    sunrpc ipv6 dm_multipath scsi_dh ppdev parport_pc sg parport floppy
    ata_piix pata_acpi ata_generic libata pcnet32 i2c_piix4 mii pcspkr
    i2c_core dm_snapshot dm_zero dm_mirror dm_log dm_mod BusLogic sd_mod
    scsi_mod crc_t10dif ext3 jbd mbcache uhci_hcd ohci_hcd ehci_hcd [last
    unloaded: microcode]

    Pid: 1890, comm: rpc.nfsd Not tainted (2.6.27.5-37.fc9.i686 #1)
    EIP: 0060:[] EFLAGS: 00010217 CPU: 0
    EIP is at list_del+0x6/0x60
    EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: cd99e480
    ESI: cf9caed8 EDI: 00000000 EBP: cf9caebc ESP: cf9caeb8
    DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
    Process rpc.nfsd (pid: 1890, ti=cf9ca000 task=cf4de580 task.ti=cf9ca000)
    Stack: 00000000 cf9caef0 d0a9f139 c0496d04 d0a9f217 fffffff3 00000000
    00000000
    00000000 00000000 cf32b220 00000000 00000008 00000801 cf9caefc
    d0a9f193
    00000000 cf9caf08 d0a9b6ea 00000000 cf9caf1c d0a874f2 cf9c3004
    00000008
    Call Trace:
    [] ? nfsd4_list_rec_dir+0xf3/0x13a [nfsd]
    [] ? do_path_lookup+0x12d/0x175
    [] ? load_recdir+0x0/0x26 [nfsd]
    [] ? nfsd4_recdir_load+0x13/0x34 [nfsd]
    [] ? nfs4_state_start+0x2a/0xc5 [nfsd]
    [] ? nfsd_svc+0x51/0xff [nfsd]
    [] ? write_svc+0x0/0x1e [nfsd]
    [] ? write_svc+0x1b/0x1e [nfsd]
    [] ? nfsctl_transaction_write+0x3a/0x61 [nfsd]
    [] ? sys_nfsservctl+0x116/0x154
    [] ? putname+0x24/0x2f
    [] ? putname+0x24/0x2f
    [] ? do_sys_open+0xad/0xb7
    [] ? filp_close+0x50/0x5a
    [] ? sys_open+0x1e/0x26
    [] ? syscall_call+0x7/0xb
    [] ? init_cyrix+0x185/0x490
    =======================
    Code: 75 e1 8b 53 08 8d 4b 04 8d 46 04 e8 75 00 00 00 8b 53 10 8d 4b 0c
    8d 46 0c e8 67 00 00 00 5b 5e 5f 5d c3 90 90 55 89 e5 53 89 c3 40
    04 8b 00 39 d8 74 16 50 53 68 3e d6 6f c0 6a 30 68 78 d6
    EIP: [] list_del+0x6/0x60 SS:ESP 0068:cf9caeb8
    ---[ end trace a89c4ad091c4ad53 ]---

    Cc: Matthew N. Dodd
    Signed-off-by: J. Bruce Fields

    J. Bruce Fields
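
    The use-before-init pattern behind the oops can be shown with a
    minimal list_head model; function names and the error value here are
    illustrative, not nfsd's actual code:

    ```c
    #include <assert.h>

    /* Minimal list_head: an error goto that jumps past INIT_LIST_HEAD
     * leaves prev/next as garbage, and the cleanup path then walks it. */
    struct list_head { struct list_head *next, *prev; };

    static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
    static int list_empty(const struct list_head *h) { return h->next == h; }

    /* Sketch of nfsd4_list_rec_dir(): the buggy version initialized the
     * list only after the open, so 'goto out' skipped the init. */
    static int list_rec_dir(int open_fails, struct list_head *dentries)
    {
        int status = 0;

        if (open_fails) {
            status = -13;       /* e.g. -EACCES from dentry_open() */
            goto out;           /* the bug: this skipped the init below */
        }
        INIT_LIST_HEAD(dentries);
        /* ... fill the list from the recovery directory ... */
    out:
        return status;
    }

    int main(void)
    {
        struct list_head dentries;
        INIT_LIST_HEAD(&dentries);  /* the fix: init before any goto */
        int status = list_rec_dir(1 /* dentry_open() fails */, &dentries);
        assert(status == -13);
        assert(list_empty(&dentries)); /* cleanup loop is now a safe no-op */
        return 0;
    }
    ```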
     
  • If nfsd was shut down before the grace period ended, we could end up
    with a freed object still on grace_list. Thanks to Jeff Moyer for
    reporting the resulting list corruption warnings.

    Signed-off-by: J. Bruce Fields
    Tested-by: Jeff Moyer

    J. Bruce Fields
     

22 Nov, 2008

3 commits

  • To avoid memory allocation failure during bulk-read, pre-allocate
    a bulk-read buffer, so that if there is only one bulk-reader at
    a time, it will just use the pre-allocated buffer and not do any
    memory allocation. However, if there is more than one bulk-reader,
    then only one reader will use the pre-allocated buffer, while the
    others will allocate buffers for themselves.

    Signed-off-by: Artem Bityutskiy

    Artem Bityutskiy
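
    The borrow-or-allocate scheme can be sketched with a simple "in use"
    flag; in UBIFS that flag is really a mutex_trylock on the buffer, and
    all names below are illustrative:

    ```c
    #include <assert.h>
    #include <stdlib.h>

    #define BULK_READ_BUF (128 * 1024)

    static char *prealloc_buf;   /* allocated once at mount time */
    static int   prealloc_busy;  /* stands in for a mutex_trylock */

    /* One reader at a time borrows the pre-allocated buffer; a concurrent
     * second reader falls back to its own allocation. */
    static char *get_bu_buf(void)
    {
        if (!prealloc_busy) {
            prealloc_busy = 1;
            return prealloc_buf;        /* fast path: no allocation */
        }
        return malloc(BULK_READ_BUF);   /* contended: allocate privately */
    }

    static void put_bu_buf(char *buf)
    {
        if (buf == prealloc_buf)
            prealloc_busy = 0;          /* release the shared buffer */
        else
            free(buf);
    }

    int main(void)
    {
        prealloc_buf = malloc(BULK_READ_BUF);
        assert(prealloc_buf);

        char *a = get_bu_buf();   /* first reader: gets the prealloc buffer */
        char *b = get_bu_buf();   /* second reader: private buffer */
        assert(a == prealloc_buf);
        assert(b != prealloc_buf && b != NULL);
        put_bu_buf(b);
        put_bu_buf(a);
        assert(get_bu_buf() == prealloc_buf); /* reusable after release */
        return 0;
    }
    ```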
     
  • Bulk-read allocates 128KiB or more using kmalloc. The allocation
    starts failing often when memory gets fragmented. UBIFS still works
    fine in this case, because it falls back to the standard
    (non-optimized) read method. This patch teaches bulk-read to allocate
    exactly the amount of memory it needs, instead of allocating 128KiB
    every time.

    This patch is also a preparation for the further fix where we'll have
    a pre-allocated bulk-read buffer as well. For example, the @bu object
    is now prepared in 'ubifs_bulk_read()', so that we can later pass
    either the pre-allocated or an allocated buffer to
    'ubifs_do_bulk_read()', or teach 'ubifs_do_bulk_read()' not to
    allocate 'bu->buf' if it is already there.

    Signed-off-by: Artem Bityutskiy

    Artem Bityutskiy
     
  • Bulk-read allocates a lot of memory with 'kmalloc()', and when memory
    is or gets fragmented, 'kmalloc()' fails with a scary warning. But
    because bulk-read is just an optimization, UBIFS keeps working fine.
    Suppress the warning by passing the __GFP_NOWARN option to 'kmalloc()'.

    This patch also introduces a macro for the magic 128KiB constant.
    This is just neater.

    Note, this does not really fix the problem we had; it just hides
    the warnings. The further patches fix the problem.

    Signed-off-by: Artem Bityutskiy

    Artem Bityutskiy
     

21 Nov, 2008

2 commits