17 Jul, 2007

40 commits

  • Connect up new system calls.

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     
  • This is a straightforward split of do_mmap_pgoff() into two functions:

    - do_mmap_pgoff() checks the parameters and calculates the vma
      flags, then calls

    - mmap_region(), which does the actual mapping.
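
    The resulting shape, as a sketch (illustrative; the real mmap_region()
    argument list is abbreviated here):

    unsigned long do_mmap_pgoff(struct file *file, unsigned long addr,
                                unsigned long len, unsigned long prot,
                                unsigned long flags, unsigned long pgoff)
    {
            unsigned int vm_flags = 0;

            /* ... validate parameters, compute vm_flags from prot/flags ... */
            return mmap_region(file, addr, len, flags, vm_flags, pgoff);
    }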

    Signed-off-by: Miklos Szeredi
    Acked-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Miklos Szeredi
     
  • The do_loop_readv_writev implementation of readv breaks out of the loop as
    soon as a single read request fails to fill its buffer:

    if (nr != len)
            break;

    The generic_file_aio_read version doesn't. So if it hits EOF before the end
    of the list of buffers, it will try again on the next buffer. If the file
    was extended in the meantime, this will produce a bad result.
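
    A userspace sketch of the corrected behaviour (illustrative, not the
    kernel code): stop walking the buffer list on the first short read
    instead of retrying the next buffer:

    #include <stddef.h>
    #include <string.h>

    struct iov { char *base; size_t len; };

    static size_t fill_iovs(const char *src, size_t srclen,
                            struct iov *iov, int count)
    {
            size_t done = 0;
            int i;

            for (i = 0; i < count; i++) {
                    size_t nr = srclen - done < iov[i].len ?
                                srclen - done : iov[i].len;

                    memcpy(iov[i].base, src + done, nr);
                    done += nr;
                    if (nr != iov[i].len)   /* short read: stop here */
                            break;
            }
            return done;
    }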

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    akpm@linux-foundation.org
     
  • Fix a bug in mm/mlock.c on 32-bit architectures that prevents a user from
    locking more than 4GB of shared memory, or allocating more than 4GB of
    shared memory in hugepages, when rlim[RLIMIT_MEMLOCK] is set to
    RLIM_INFINITY.
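
    The likely shape of the problem, as a hedged sketch (hypothetical helper,
    not the exact mm/mlock.c diff): on 32-bit, shifting RLIM_INFINITY (~0UL)
    right by PAGE_SHIFT silently turns "unlimited" into a ~4GB cap, so the
    infinite case must be tested before converting the byte limit to pages:

    #include <stdio.h>

    #define PAGE_SHIFT    12
    #define RLIM_INFINITY (~0UL)

    static int may_lock(unsigned long locked_pages, unsigned long limit_bytes)
    {
            if (limit_bytes == RLIM_INFINITY)       /* the fix: short-circuit */
                    return 1;
            return locked_pages <= (limit_bytes >> PAGE_SHIFT);
    }

    int main(void)
    {
            unsigned long five_gb_pages = 5UL << (30 - PAGE_SHIFT);

            /* Without the short-circuit, a 32-bit box caps this at 4GB. */
            printf("%d\n", may_lock(five_gb_pages, RLIM_INFINITY));
            return 0;
    }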

    Signed-off-by: Herbert van den Bergh
    Acked-by: Chris Mason
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Herbert van den Bergh
     
  • Currently slob is disabled if we're using sparsemem, due to an earlier
    patch from Goto-san. Slob and static sparsemem work without any trouble as
    it is, and the only hiccup is a missing slab_is_available() in the case of
    sparsemem extreme. With this, we're rid of the last set of restrictions
    for slob usage.

    Signed-off-by: Paul Mundt
    Acked-by: Pekka Enberg
    Acked-by: Matt Mackall
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt
     
  • mspec_mmap was setting VM_LOCKED (without adjusting locked_vm): don't do
    that, it serves no purpose in 2.6, other than to mess up the locked_vm
    accounting - mspec's pages won't get reclaimed anyway. Thanks to Dmitry
    Monakhov for raising the issue.

    Signed-off-by: Hugh Dickins
    Acked-by: Jes Sorensen
    Cc: Dmitry Monakhov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Signed-off-by: Dan Aloni
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Aloni
     
  • This adds preliminary NUMA support to SLOB, primarily aimed at systems with
    small nodes (tested all the way down to a 128kB SRAM block), whether
    asymmetric or otherwise.

    We follow the same conventions as SLAB/SLUB, preferring current node
    placement for new pages, or with explicit placement, if a node has been
    specified. Presently on UP NUMA this has the side-effect of preferring
    node#0 allocations (since numa_node_id() == 0, though this could be
    reworked if we could hand off a pfn to determine node placement), so
    single-CPU NUMA systems will want to place smaller nodes further out in
    terms of node id. Once a page has been bound to a node (via explicit node
    id typing), we only do block allocations from partial free pages that have
    a matching node id in the page flags.
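
    A sketch of that placement rule (illustrative only; the helper and the
    list head are assumptions, not the actual slob code): carve blocks solely
    from partial pages whose node id matches, with node < 0 meaning no
    preference:

    static struct page *find_partial(struct list_head *free_pages, int node)
    {
            struct page *page;

            list_for_each_entry(page, free_pages, lru)
                    if (node < 0 || page_to_nid(page) == node)
                            return page;
            return NULL;    /* fall back to allocating a fresh page */
    }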

    The current implementation does have some scalability problems, in that all
    partial free pages are tracked in the global freelist (with contention due
    to the single spinlock). However, these are things that are being reworked
    for SMP scalability first, while things like per-node freelists can easily
    be built on top of this sort of functionality once it's been added.

    More background can be found in:

    http://marc.info/?l=linux-mm&m=118117916022379&w=2
    http://marc.info/?l=linux-mm&m=118170446306199&w=2
    http://marc.info/?l=linux-mm&m=118187859420048&w=2

    and subsequent threads.

    Acked-by: Christoph Lameter
    Acked-by: Matt Mackall
    Signed-off-by: Paul Mundt
    Acked-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt
     
  • In the new madvise_need_mmap_write() call we can avoid an extra case
    statement and function call as follows.

    Signed-off-by: Jason Baron
    Cc: Nishanth Aravamudan
    Cc: Christoph Hellwig
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jason Baron
     
    start_cpu_timer() should be __cpuinit (which also matches what its
    callers are).

    __devinit didn't cause problems, it simply wasted a few bytes of memory
    for the common CONFIG_HOTPLUG_CPU=n case.

    Signed-off-by: Adrian Bunk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adrian Bunk
     
  • Currently zone_spanned_pages_in_node() and zone_absent_pages_in_node() are
    non-static for ARCH_POPULATES_NODE_MAP and static otherwise. However, only
    the non-static versions are __meminit annotated, despite only being called
    from __meminit functions in either case.

    zone_init_free_lists() is currently non-static and not __meminit annotated
    either, despite only being called once in the entire tree by
    init_currently_empty_zone(), which too is __meminit. So make it static and
    properly annotated.

    Signed-off-by: Paul Mundt
    Cc: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt
     
  • This symbol got orphaned quite a while ago.

    Signed-off-by: Jan Beulich
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
     
  • Kill pte_rdprotect(), pte_exprotect(), pte_mkread(), pte_mkexec(), pte_read(),
    pte_exec(), and pte_user() except where arch-specific code is making use of
    them.

    Signed-off-by: Jan Beulich
    Cc: Andi Kleen
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
     
  • .. which modpost started warning about.

    Signed-off-by: Jan Beulich
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
     
  • Enabling debugging fails to build due to the nodemask variable in
    do_mbind() having changed names, and then oopses on boot due to the
    assumption that the nodemask can be dereferenced -- which doesn't work out
    so well when the policy is changed to MPOL_DEFAULT with a NULL nodemask by
    numa_default_policy().

    This fixes it up, and switches from PDprintk() to pr_debug() while
    we're at it.

    Signed-off-by: Paul Mundt
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt
     
  • get_user_pages() can try to allocate a nearly unlimited amount of memory on
    behalf of a user process, even if that process has been OOM killed. The
    OOM kill occurs upon return to user space via a SIGKILL, but
    get_user_pages() will try to allocate all its memory before returning.
    Change get_user_pages() to check for TIF_MEMDIE, and if set then return
    immediately.
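
    Roughly, the added check (a sketch of the pattern; the surrounding
    per-page loop in get_user_pages() is elided):

    /* before faulting in the next page: */
    if (unlikely(test_tsk_thread_flag(tsk, TIF_MEMDIE)))
            return i ? i : -ENOMEM;    /* report partial progress, or fail */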

    Signed-off-by: Ethan Solomita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ethan Solomita
     
  • This converts the default system init memory policy to use a dynamically
    created node map instead of defaulting to all online nodes. Nodes of a
    certain size (>= 16MB) are judged to be suitable for interleave, and are added
    to the map. If all nodes are smaller in size, the largest one is
    automatically selected.
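
    A toy version of the selection rule (node sizes invented for
    illustration):

    #include <stdio.h>

    int main(void)
    {
            /* Hypothetical node sizes in MB; >= 16MB joins the interleave set. */
            unsigned long node_mb[] = { 4, 128, 16, 2 };
            int n = sizeof(node_mb) / sizeof(node_mb[0]);
            int i, largest = 0, any = 0;

            for (i = 0; i < n; i++) {
                    if (node_mb[i] >= 16) {
                            printf("interleave over node %d (%luMB)\n",
                                   i, node_mb[i]);
                            any = 1;
                    }
                    if (node_mb[i] > node_mb[largest])
                            largest = i;
            }
            if (!any)
                    printf("all nodes small: fall back to node %d\n", largest);
            return 0;
    }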

    Without this, tiny nodes find themselves out of memory before we even make it
    to userspace. Systems with large nodes will notice no change.

    Only the system init policy is affected by this change; the regular
    MPOL_DEFAULT policy is still switched to later in the boot process as
    normal.

    Signed-off-by: Paul Mundt
    Cc: Andi Kleen
    Cc: Christoph Lameter
    Cc: Hugh Dickins
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt
     
  • Add a new configuration variable

    CONFIG_SLUB_DEBUG_ON

    If set then the kernel will be booted by default with slab debugging
    switched on. Similar to CONFIG_SLAB_DEBUG. By default slab debugging
    is available but must be enabled by specifying "slub_debug" as a
    kernel parameter.

    Also add support to switch off slab debugging for a kernel that was
    built with CONFIG_SLUB_DEBUG_ON. This works by specifying

    slub_debug=-

    as a kernel parameter.

    Dave Jones wanted this feature.
    http://marc.info/?l=linux-kernel&m=118072189913045&w=2

    [akpm@linux-foundation.org: clean up switch statement]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • invalidate_mapping_pages() can sometimes take a long time (millions of pages
    to free). Long enough for the softlockup detector to trigger.

    We used to have a cond_resched() in there but I took it out because the
    drop_caches code calls invalidate_mapping_pages() under inode_lock.

    The patch adds a nasty flag and puts the cond_resched() back.
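
    The shape of that flag, sketched (the parameter name is an assumption;
    the page-walking body is elided):

    static unsigned long __invalidate_mapping_pages(struct address_space *mapping,
                                                    pgoff_t start, pgoff_t end,
                                                    bool be_atomic)
    {
            unsigned long ret = 0;

            /* ... walk and invalidate pages in [start, end] ... */
            if (likely(!be_atomic))
                    cond_resched();    /* keeps the softlockup detector quiet */
            return ret;
    }

    The drop_caches path, which holds inode_lock, would pass be_atomic ==
    true; every other caller gets the cond_resched() back.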

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Add a bugcheck for Andrea's pagefault vs invalidate race. This is triggerable
    for both linear and nonlinear pages with a userspace test harness (using
    direct IO and truncate, respectively).

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • That static `nid' index needs locking. Without it we can end up calling
    alloc_pages_node() with an illegal node ID and the kernel crashes.
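
    The usual fix for this pattern, sketched (helper name assumed; not the
    exact patch): serialize the round-robin index so concurrent callers
    cannot advance it past the online node map:

    static DEFINE_SPINLOCK(nid_lock);
    static int nid;                         /* the static index in question */

    static int next_alloc_nid(void)
    {
            int n;

            spin_lock(&nid_lock);
            n = nid;
            nid = next_node(nid, node_online_map);
            if (nid == MAX_NUMNODES)
                    nid = first_node(node_online_map);
            spin_unlock(&nid_lock);
            return n;
    }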

    Acked-by: gurudas pai
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Jin
     
  • Fix the shrink_list name in some files under the mm/ directory.

    Signed-off-by: Anderson Briglia
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anderson Briglia
     
  • Remove the core slob allocator's minimum alignment restrictions, and instead
    introduce the alignment restrictions at the slab API layer. This lets us heed
    the ARCH_KMALLOC/SLAB_MINALIGN directives, and also use __alignof__ (unsigned
    long) for the default alignment (which should allow relaxed alignment
    architectures to take better advantage of SLOB's small minimum alignment).
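
    For illustration, the default rounding described above (ALIGN as in the
    kernel; printed values are for a typical 64-bit build):

    #include <stdio.h>

    #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

    int main(void)
    {
            size_t a = __alignof__(unsigned long);  /* 4 on 32-bit, 8 on 64-bit */

            printf("align=%zu: 5 -> %zu, 13 -> %zu\n",
                   a, (size_t)ALIGN(5, a), (size_t)ALIGN(13, a));
            return 0;
    }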

    Signed-off-by: Nick Piggin
    Acked-by: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Remove the bigblock lists in favour of using compound pages and going directly
    to the page allocator. Allocation size is stored in page->private, which also
    makes ksize more accurate than it previously was.

    Saves ~0.5K of code, and 12-24 bytes of overhead per >= PAGE_SIZE allocation.

    Signed-off-by: Nick Piggin
    Acked-by: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Improve slob by turning the freelist into a list of pages using struct
    page fields; each page then has a singly linked freelist of slob blocks
    via a pointer in the struct page.

    - The first benefit is that the slob freelists can be indexed by a smaller
    type (2 bytes, if the PAGE_SIZE is reasonable).

    - Next is that freeing is much quicker because it does not have to traverse
    the entire freelist. Allocation can be slightly faster too, because we can
    skip almost-full freelist pages completely.

    - Slob pages are then freed immediately when they become empty, rather than
    having a periodic timer try to free them. This improves both efficiency
    and memory consumption.

    Then, we don't encode separate size and next fields into each slob block;
    rather, we use the sign bit to distinguish between "size" and "next". Size
    1 blocks contain a "next" offset, and others contain the "size" in the
    first unit and "next" in the second unit (see the sketch at the end of
    this entry).

    - This allows minimum slob allocation alignment to go from 8 bytes to 2
    bytes on 32-bit and 12 bytes to 2 bytes on 64-bit. In practice, it is
    best to align them to word size, however some architectures (eg. cris)
    could gain space savings from turning off this extra alignment.

    Then, make kmalloc use its own slob_block at the front of the allocation
    in order to encode allocation size, rather than rely on not overwriting
    slob's existing header block.

    - This reduces kmalloc allocation overhead similarly to alignment reductions.

    - Decouples kmalloc layer from the slob allocator.

    Then, add a page flag specific to slob pages.

    - This means kfree of a page aligned slob block doesn't have to traverse
    the bigblock list.

    I would get benchmarks, but my test box's network doesn't come up with
    slob before this patch. I think something is timing out. Anyway, things
    are faster after the patch.

    Code size goes up about 1K, however dynamic memory usage _should_ be
    lower even on relatively small memory systems.

    Future todo item is to restore the cyclic free list search, rather than
    to always begin at the start.
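
    The sign-bit encoding mentioned above, sketched with 2-byte units
    (illustrative helpers, not the actual slob code):

    typedef short slobidx_t;        /* 2-byte unit index */

    /* A free block of size 1 stores only -next; larger blocks store size
     * in unit 0 and the next-block offset in unit 1. */
    static void set_block(slobidx_t *s, slobidx_t size, slobidx_t next)
    {
            if (size > 1) {
                    s[0] = size;
                    s[1] = next;
            } else {
                    s[0] = -next;   /* sign bit distinguishes the two forms */
            }
    }

    static slobidx_t block_size(const slobidx_t *s)
    {
            return s[0] > 0 ? s[0] : 1;
    }

    static slobidx_t block_next(const slobidx_t *s)
    {
            return s[0] > 0 ? s[1] : -s[0];
    }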

    Signed-off-by: Nick Piggin
    Acked-by: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Given that there is no remaining usage of the deprecated kmem_cache_t
    typedef anywhere in the tree, remove that typedef.

    Signed-off-by: Robert P. J. Day
    Acked-by: Pekka Enberg
    Acked-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robert P. J. Day
     
  • alloc_large_system_hash() is called at boot time to allocate space for
    several large hash tables.

    Lately, the TCP hash table was changed and its bucket size is no longer a
    power of two.

    On most setups, alloc_large_system_hash() allocates one big page (order >
    0) with __get_free_pages(GFP_ATOMIC, order). This single high-order page
    has a power-of-two size, bigger than the needed size.

    We can free all pages that won't be used by the hash table.

    On a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.

    TCP established hash table entries: 32768 (order: 6, 393216 bytes)
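
    The arithmetic behind the saving, as a standalone illustration: a
    393216-byte table forces a 128-page power-of-two block, leaving 128 KB
    of tail pages to hand back:

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    int main(void)
    {
            unsigned long need = 393216;    /* bytes the table actually uses */
            unsigned int order = 0;
            unsigned long alloc;

            while ((PAGE_SIZE << order) < need)
                    order++;                /* smallest 2^order-page block */

            alloc = PAGE_SIZE << order;
            printf("allocated %lu, used %lu, freeable %lu bytes\n",
                   alloc, need, alloc - need);      /* freeable = 131072 */
            return 0;
    }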

    Signed-off-by: Eric Dumazet
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Dumazet
     
  • This entry prints a header in the .start callback. This is OK, but the more
    elegant solution would be to move this into the .show callback and use
    seq_list_start_head() in the .start one.

    I have left it as is in order to make the patch just switch to the new API
    and nothing more.

    [adobriyan@sw.ru: Wrong pointer was used as kmem_cache pointer]
    Signed-off-by: Pavel Emelianov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Emelianov
     
  • Replace a hand-coded version of DIV_ROUND_UP().
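
    For reference, the kernel macro in question:

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    /* e.g. DIV_ROUND_UP(10, 4) == 3, where plain 10 / 4 truncates to 2. */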

    Signed-off-by: Rolf Eike Beer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rolf Eike Beer
     
  • nid is initialized to numa_node_id() but will either be overwritten in
    the loop or not used in the conditional. So remove the initialization.

    Signed-off-by: Nishanth Aravamudan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     
  • Make zonelist creation policy selectable from sysctl/boot option v6.

    This patch makes NUMA's zonelist (of pgdat) order selectable.
    Available orders are Default (automatic) / Node-based / Zone-based.

    [Default Order]
    The kernel selects Node-based or Zone-based order automatically.

    [Node-based Order]
    This policy treats the locality of memory as the most important parameter.
    The zonelist order is created by each zone's locality. This means lower
    zones (e.g. ZONE_DMA) can be used before higher zones (e.g. ZONE_NORMAL)
    are exhausted. IOW, ZONE_DMA will be in the middle of the zonelist. The
    current 2.6.21 kernel uses this.

    Pros.
    * A user can expect local memory as much as possible.
    Cons.
    * A lower zone will be exhausted before a higher zone. This may cause OOM_KILL.

    This may be suitable if ZONE_DMA is relatively big, you never see OOM_KILL
    because of ZONE_DMA exhaustion, and you need the best locality.

    (example)
    assume 2 node NUMA. node(0) has ZONE_DMA/ZONE_NORMAL, node(1) has ZONE_NORMAL.

    *node(0)'s memory allocation order:

    node(0)'s NORMAL -> node(0)'s DMA -> node(1)'s NORMAL.

    *node(1)'s memory allocation order:

    node(1)'s NORMAL -> node(0)'s NORMAL -> node(0)'s DMA.

    [Zone-based order]
    This policy treats the zone type as the most important parameter.
    The zonelist order is created by zone-type order. This means a lower zone
    is never used before a higher zone is exhausted.
    IOW, ZONE_DMA will always be at the tail of the zonelist.

    Pros.
    * OOM_KILL (because of a lower zone) occurs only if all zones are exhausted.
    Cons.
    * Memory locality may not be best.

    (example)
    assume 2 node NUMA. node(0) has ZONE_DMA/ZONE_NORMAL, node(1) has ZONE_NORMAL.

    *node(0)'s memory allocation order:

    node(0)'s NORMAL -> node(1)'s NORMAL -> node(0)'s DMA.

    *node(1)'s memory allocation order:

    node(1)'s NORMAL -> node(0)'s NORMAL -> node(0)'s DMA.

    The boot option "numa_zonelist_order=" and a proc/sysctl are supported.

    command:
    %echo N > /proc/sys/vm/numa_zonelist_order

    will rebuild the zonelist in Node-based order.

    command:
    %echo Z > /proc/sys/vm/numa_zonelist_order

    will rebuild the zonelist in Zone-based order.

    Thanks to Lee Schermerhorn, who gave me much help and code.

    [Lee.Schermerhorn@hp.com: add check_highest_zone to build_zonelists_in_zone_order]
    [akpm@linux-foundation.org: build fix]
    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: Andi Kleen
    Cc: "jesse.barnes@intel.com"
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • Because SERIAL_PORT_DFNS was removed from include/asm-i386/serial.h and
    include/asm-x86_64/serial.h, the serial8250_ports need to be probed late in
    the serial initialization stage. The console_init => serial8250_console_init
    => register_console => serial8250_console_setup chain will return -ENODEV,
    and console ttyS0 cannot be enabled at that time. We would have to wait
    until uart_add_one_port in drivers/serial/serial_core.c calls
    register_console to get console ttyS0, and that is too late.

    Make early_uart use early_param, so the uart console can be used earlier.
    Make it a bootconsole with the CON_BOOT flag, so it can use the console
    handover feature and will switch to the corresponding normal serial console
    automatically.

    The new command line will be:
    console=uart8250,io,0x3f8,9600n8
    console=uart8250,mmio,0xff5e0000,115200n8
    or
    earlycon=uart8250,io,0x3f8,9600n8
    earlycon=uart8250,mmio,0xff5e0000,115200n8

    It will print at a very early stage:
    Early serial console at I/O port 0x3f8 (options '9600n8')
    console [uart0] enabled
    Later, for the console, it will print:
    console handover: boot [uart0] -> real [ttyS0]

    Signed-off-by:
    Cc: Andi Kleen
    Cc: Bjorn Helgaas
    Cc: Russell King
    Cc: Gerd Hoffmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yinghai Lu
     
  • Needed to get fixed virtual address for USB debug and earlycon with mmio.

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Yinghai Lu
    Cc: Andi Kleen
    Cc: Bjorn Helgaas
    Cc: Russell King
    Cc: Gerd Hoffmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric W. Biederman
     
  • For earlyprintk=ttyS0,9600 console=tty0 console=ttyS0,9600n8,

    the handover will happen from earlyser0 to tty0, but what we want is to
    hand over to ttyS0.

    Later, with serial-convert-early_uart-to-earlycon-for-8250.patch,

    console=tty0 console=uart8250,io,0x3f8,9600n8

    will hand over to ttyS0 instead of tty0.

    Signed-off-by: Yinghai Lu
    Cc: Andi Kleen
    Cc: Bjorn Helgaas
    Cc: Russell King
    Cc: Gerd Hoffmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yinghai Lu
     
  • Rename the variable to buf, in line with its usage as name + index.

    Signed-off-by: Yinghai Lu
    Cc: Andi Kleen
    Cc: Bjorn Helgaas
    Cc: Russell King
    Cc: Gerd Hoffmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yinghai Lu
     
  • Some RS-232 devices require DTR to be asserted before they can be used. DTR
    is normally asserted in uart_startup() when the port is opened. But we don't
    actually open serial console ports, so assert DTR when the port is added.

    BTW:
    earlyprintk and early_uart are hard coded to set DTR/RTS.

    rmk says

    The only issue I can think of is the possibility for an attached modem to
    auto-answer or maybe even auto-dial before the system is ready for it to do
    so. Might have an undesirable cost implication for some running with such a
    setup.

    Apart from that, I can't think of any other side effect of this specific
    patch.

    Signed-off-by: Yinghai Lu
    Acked-by: Russell King
    Cc: Andi Kleen
    Cc: Bjorn Helgaas
    Cc: Gerd Hoffmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yinghai Lu
     
  • Remove all ids from the given idr tree. idr_destroy() only frees up
    unused, cached idr_layers, but this function will remove all id mappings
    and leave all idr_layers unused.

    A typical clean-up sequence for objects stored in an idr tree will use
    idr_for_each() to free all objects, if necessary, then idr_remove_all() to
    remove all ids, and idr_destroy() to free up the cached idr_layers.
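
    That sequence, sketched against the idr API of this era (the object type
    and freeing callback are hypothetical):

    static int free_one(int id, void *p, void *data)
    {
            kfree(p);                       /* release the stored object */
            return 0;
    }

    static void teardown(struct idr *idr)
    {
            idr_for_each(idr, free_one, NULL);  /* free every object */
            idr_remove_all(idr);                /* drop all id mappings */
            idr_destroy(idr);                   /* free cached idr_layers */
    }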

    Signed-off-by: Kristian Hoegsberg
    Cc: Tejun Heo
    Cc: Dave Airlie
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kristian Hoegsberg
     
  • This patch adds an iterator function for the idr data structure. Compared
    to just iterating through the idr with an integer and idr_find, this
    iterator is (almost, but not quite) linear in the number of elements, as
    opposed to the number of integers in the range covered by the idr. This
    makes a difference for sparse idrs, but more importantly, it's a nicer way
    to iterate through the elements.

    The drm subsystem is moving to idr for tracking contexts and drawables, and
    with this change, we can use the idr exclusively for tracking these
    resources.

    [akpm@linux-foundation.org: fix comment]
    Signed-off-by: Kristian Hoegsberg
    Cc: Tejun Heo
    Cc: Dave Airlie
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kristian Hoegsberg
     
  • This version brings a number of new checks, fixes for false
    positives, plus a clarification of the output to better guide use. Of
    note:

    - checks for documentation for new __setup calls
    - clearer reporting where braces and parenthesis are involved
    - reports for closing brace and semi-colon spacing
    - reports on unwanted externs

    This patch includes an update to the documentation on checkpatch.pl
    itself to clarify when it should be used and to indicate that it
    is not intended as the final arbiter of style.

    Full changelog:

    Andy Whitcroft (19):
    Version: 0.07
    ensure we do not apply control brace checks to preprocessor directives
    add {u,s}{8,16,32,64} to the type matcher
    accept lack of spacing after the semicolons in for (;;)
    report new externs in .c files
    fix up typedef exclusion for function prototypes
    else trailing statements check need to account for \ at end of line
    add enums to the type matcher
    add missing check descriptions
    suppress double reporting of ** spacing
    report on do{ spacing issues
    include an example of the brace/parenthesis in output
    check for spacing after closing braces
    prevent double reports on pointer spacing issues
    handle blank continuation lines on macros
    classify all reports error, warning, or check
    revamp hanging { checks and apply in context
    no spaces after the last ; in a for is ok
    check __setup has a corresponding addition to documentation

    David Woodhouse (1):
    limit character set used in patches and descriptions to UTF-8

    Signed-off-by: Andy Whitcroft
    Cc: David Woodhouse
    Cc: "Randy.Dunlap"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andy Whitcroft
     
  • This is a correction for a macro which gives the worst-case compressed data
    size for LZO1X.

    This patch was provided by the LZO author (Markus Oberhumer).

    Signed-off-by: Nitin Gupta
    Cc: "Markus F.X.J. Oberhumer"
    Cc: "Richard Purdie"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nitin Gupta