06 Feb, 2008

40 commits

  • Remove SGP_QUICK from the sgp_type enum: it was for shmem_populate and has no
    users now. Remove SGP_FAULT from the enum: SGP_CACHE does just as well (and
    shmem_getpage is about to return with page always locked).

    Signed-off-by: Hugh Dickins
    Acked-by: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Building in a filesystem on a loop device on a tmpfs file can hang when
    swapping, the loop thread caught in that infamous throttle_vm_writeout.

    In theory this is a long-standing problem, which I've either never seen
    in practice, or long ago suppressed the recollection of, after
    discounting my load and my tmpfs size as unrealistically high. But now,
    with the new aops, it has become easy to hang on one machine.

    Loop used to grab_cache_page before the old prepare_write to tmpfs, which
    seems to have been enough to free up some memory for any swapin needed; but
    the new write_begin lets tmpfs find or allocate the page (much nicer, since
    grab_cache_page missed tmpfs pages in swapcache).

    When allocating a fresh page, tmpfs respects loop's mapping_gfp_mask, which
    has __GFP_IO|__GFP_FS stripped off, and throttle_vm_writeout is designed to
    break out when __GFP_IO or __GFP_FS is unset; but when tmpfs swaps in,
    read_swap_cache_async allocates with GFP_HIGHUSER_MOVABLE regardless of the
    mapping_gfp_mask - hence the hang.

    So, pass gfp_mask down the line from shmem_getpage to shmem_swapin to
    swapin_readahead to read_swap_cache_async to add_to_swap_cache.
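
    A sketch of the resulting plumbing; the gfp_t parameter and its position
    in each signature are assumptions based on the functions named above:

    struct page *shmem_swapin(swp_entry_t entry, gfp_t gfp,
                    struct shmem_inode_info *info, unsigned long idx);
    struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
                    struct vm_area_struct *vma, unsigned long addr);
    struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
                    struct vm_area_struct *vma, unsigned long addr);
    int add_to_swap_cache(struct page *page, swp_entry_t entry,
                    gfp_t gfp_mask);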

    Signed-off-by: Hugh Dickins
    Acked-by: Rik van Riel
    Acked-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • swapin_readahead has never sat well in mm/memory.c: move it to
    mm/swap_state.c beside its kindred read_swap_cache_async. Why were its
    args in a different order? Rearrange them. And since it was always
    followed by a
    read_swap_cache_async of the target page, fold that in and return struct
    page*. Then CONFIG_SWAP=n no longer needs valid_swaphandles and
    read_swap_cache_async stubs.
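
    With the fold-in, a caller like do_swap_page() reads roughly as follows
    (a sketch only; the gfp argument comes from the related patch above, and
    error handling is omitted):

    page = lookup_swap_cache(entry);
    if (!page) {
            /* read around the target, then return the target page */
            page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
                                    vma, address);
    }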

    Signed-off-by: Hugh Dickins
    Acked-by: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • For three years swapin_readahead has been cluttered with fanciful CONFIG_NUMA
    code, advancing addr, and stepping on to the next vma at the boundary, to line
    up the mempolicy for each page allocation.

    It _might_ be a good idea to allocate swap more according to vma layout; but
    the fact is, that's not how we do it at all, 2.6 even less than 2.4: swap is
    allocated as needed for pages as they sink to the bottom of the inactive LRUs.
    Sometimes that may match vma layout, but not so often that it's worth going
    to these misleading vma->vm_next lengths: rip all that out.

    Originally I intended to retain the incrementation of addr, but correct its
    initial value: valid_swaphandles generally supplies an offset below the target
    addr (this is readaround rather than readahead), but addr has not been
    adjusted accordingly, so in the interleave case it has usually been allocating
    the target page from the "wrong" node (though that may not matter very much).

    But look at the equivalent shmem_swapin code: either by oversight or by
    design, though it has all the apparatus for choosing a new mempolicy per page,
    it uses the same idx throughout, choosing the same mempolicy and interleave
    node for each page of the cluster.

    Which is actually a much better strategy: each node has its own LRUs and its
    own kswapd, so if you're betting on any particular relationship between swap
    and node, the best bet is that nearby swap entries belong to pages from the
    same node - even when the mempolicy of the target page is to interleave. And
    examining a map of nodes corresponding to swap entries on a numa=fake system
    bears this out. (We could later tweak swap allocation to make it even more
    likely, but this patch is merely about removing cruft.)

    So, neither adjust nor increment addr in swapin_readahead, and then
    shmem_swapin can use it too; the pseudo-vma to pass policy need only be
    set up once per cluster, and since so few fields of pvma are used, let's
    skip the memset of it - from shmem_alloc_page also.
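
    Under CONFIG_NUMA, the resulting shmem_swapin might look roughly like
    this (a sketch assembled from the description above; mpol_free was the
    mempolicy release helper of this era):

    static struct page *shmem_swapin(swp_entry_t entry, gfp_t gfp,
                    struct shmem_inode_info *info, unsigned long idx)
    {
            struct vm_area_struct pvma;
            struct page *page;

            /* pseudo-vma set up once per cluster, without a memset:
             * it exists only to carry the mempolicy for idx */
            pvma.vm_start = 0;
            pvma.vm_pgoff = idx;
            pvma.vm_ops = NULL;
            pvma.vm_policy = mpol_shared_policy_lookup(&info->policy, idx);

            page = swapin_readahead(entry, gfp, &pvma, 0);
            mpol_free(pvma.vm_policy);
            return page;
    }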

    Signed-off-by: Hugh Dickins
    Acked-by: Rik van Riel
    Cc: Andi Kleen
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Allow the sticky directory mount option for hugetlbfs. This allows an
    admin to create a shared hugetlbfs mount point for multiple users while
    preventing the accidental file deletion that occurs when users step on
    each other. It is similar to the default tmpfs mount option, or the
    typical options used on /tmp.
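
    For example, an admin might mount with mode 1777, as is typical for
    /tmp (the mount point below is hypothetical):

    mount -t hugetlbfs -o mode=1777 none /mnt/hugepages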

    Signed-off-by: Ken Chen
    Cc: Badari Pulavarty
    Cc: Adam Litke
    Cc: David Gibson
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ken Chen
     
  • The constructor for buffer_head slabs was removed recently. We need the
    constructor back in slab defrag in order to ensure that slab objects
    always have a definite state even before we allocate them.

    I think we mistakenly merged the removal of the constructor into a
    cleanup patch. You (ie: akpm) had a test that showed that the removal
    of the constructor led to a small regression. The prior state makes
    things easier for slab defrag.
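
    A sketch of restoring the constructor, using the kmem_cache_create()
    constructor signature of this era:

    static void init_buffer_head(struct kmem_cache *cachep, void *data)
    {
            struct buffer_head *bh = data;

            memset(bh, 0, sizeof(*bh));
            INIT_LIST_HEAD(&bh->b_assoc_buffers);
    }

    bh_cachep = kmem_cache_create("buffer_head",
                    sizeof(struct buffer_head), 0,
                    SLAB_RECLAIM_ACCOUNT | SLAB_PANIC | SLAB_MEM_SPREAD,
                    init_buffer_head);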

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Christoph Lameter
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Both slab defrag and the large blocksize patches need the ability to
    take refcounts on compound pages. May be useful in other places as well.
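
    The idea, shown with a hypothetical helper (the actual interface may
    differ): pinning a compound page means taking the reference on its head
    page, since tail pages don't carry their own counts.

    static inline void pin_compound_page(struct page *page)
    {
            /* the head page holds the refcount for the whole compound */
            get_page(compound_head(page));
    }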

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • The page array is repeatedly indexed both in vunmap and vmalloc_area_node().
    Add a temporary variable to make it easier to read (and easier to patch
    later).
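
    In the vunmap path the change amounts to roughly:

    for (i = 0; i < area->nr_pages; i++) {
            struct page *page = area->pages[i];    /* index once */

            BUG_ON(!page);
            __free_page(page);
    }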

    Signed-off-by: Christoph Lameter
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Checking if an address is a vmalloc address is done in a couple of places.
    Define a common version in mm.h and replace the other checks.

    Again the include structures suck. The definition of VMALLOC_START and
    VMALLOC_END is not available in vmalloc.h since highmem.h cannot be
    included there.
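
    The common helper is along these lines (CONFIG_MMU case; without an MMU
    there is no vmalloc range to test against):

    static inline int is_vmalloc_addr(const void *x)
    {
    #ifdef CONFIG_MMU
            unsigned long addr = (unsigned long)x;

            return addr >= VMALLOC_START && addr < VMALLOC_END;
    #else
            return 0;
    #endif
    }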

    Signed-off-by: Christoph Lameter
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
    pgtable.h does not include highmem.h but uses various constants from
    highmem.h. We cannot include highmem.h because highmem.h will in turn
    include many other files that also depend on pgtable.h.

    So move the definitions from highmem.h into pgtable.h.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Christoph Lameter
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Make vmalloc functions work the same way as kfree() and friends that
    take a const void * argument.
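
    The visible change is just constifying the prototypes, e.g.:

    void vfree(const void *addr);
    void vunmap(const void *addr);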

    [akpm@linux-foundation.org: fix consts, coding-style]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • We already have page table manipulation for vmalloc in vmalloc.c. Move the
    vmalloc_to_page() function there as well.

    Move the definitions for vmalloc related functions in mm.h to a newly created
    section. A better place would be vmalloc.h but mm.h is basic and may depend
    on these functions. An alternative would be to include vmalloc.h in mm.h
    (as is done for vmstat.h).

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Simplify page cache zeroing of segments of pages through 3 functions

    zero_user_segments(page, start1, end1, start2, end2)

    Zeros two segments of the page. It takes the positions at which the
    zeroing starts and ends, which avoids length calculations and makes
    the code clearer.

    zero_user_segment(page, start, end)

    Same for a single segment.

    zero_user(page, start, length)

    Length variant for the case where we know the length.

    We remove the zero_user_page macro. Issues:

    1. It's a macro. Inline functions are preferable.

    2. The KM_USER0 macro is only defined for HIGHMEM.

    Having to treat this special case everywhere makes the
    code needlessly complex. The parameter for zeroing is always
    KM_USER0 except in one single case that we open code.

    Avoiding KM_USER0 means a lot of code no longer has to deal
    with the special casing for HIGHMEM. Dealing with kmap is only
    necessary for HIGHMEM configurations. In those configurations we
    use KM_USER0 like we do for a series of other functions defined
    in highmem.h.

    Since KM_USER0 depends on HIGHMEM, the existing zero_user_page
    function could not be an inline function. The zero_user_* functions
    introduced here can be inline because that constant is not used
    when these functions are called.

    Also extract the flushing of the caches to be outside of the kmap.
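
    A sketch of the three helpers, consistent with the description above
    (the HIGHMEM variant, where kmap_atomic still takes a KM slot):

    static inline void zero_user_segments(struct page *page,
            unsigned start1, unsigned end1, unsigned start2, unsigned end2)
    {
            void *kaddr = kmap_atomic(page, KM_USER0);

            BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);

            if (end1 > start1)
                    memset(kaddr + start1, 0, end1 - start1);
            if (end2 > start2)
                    memset(kaddr + start2, 0, end2 - start2);

            kunmap_atomic(kaddr, KM_USER0);
            flush_dcache_page(page);    /* flushing outside the kmap */
    }

    static inline void zero_user_segment(struct page *page,
            unsigned start, unsigned end)
    {
            zero_user_segments(page, start, end, 0, 0);
    }

    static inline void zero_user(struct page *page,
            unsigned start, unsigned size)
    {
            zero_user_segments(page, start, start + size, 0, 0);
    }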

    [akpm@linux-foundation.org: fix nfs and ntfs build]
    [akpm@linux-foundation.org: fix ntfs build some more]
    Signed-off-by: Christoph Lameter
    Cc: Steven French
    Cc: Michael Halcrow
    Cc: Steven Whitehouse
    Cc: Trond Myklebust
    Cc: "J. Bruce Fields"
    Cc: Anton Altaparmakov
    Cc: Mark Fasheh
    Cc: David Chinner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Teach AVR32 to use the "GPIO Library" when exposing its GPIOs, so that signals
    on external chips (like GPIO expanders) can easily be used.

    This mostly reorganizes some existing logic, with two minor changes in
    behavior:

    - The PSR registers are used instead of the previous "gpio_mask" values,
    matching AT91 behavior and removing some duplication between that role
    and that of "pinmux_mask".

    - NR_IRQs grew to accommodate a bank of external GPIOs. Eventually this
    number should probably become a board-specific config option.

    There's a debugfs dump of status for the built-in GPIOs, showing which pins
    have deglitching, pullups, or open drain drive enabled, as well as the ID
    string used when requesting each IRQ.

    Signed-off-by: David Brownell
    Acked-by: Haavard Skinnemoen
    Cc: Jean Delvare
    Cc: Eric Miao
    Cc: Sam Ravnborg
    Cc: Philipp Zabel
    Cc: Russell King
    Cc: Ben Gardner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Brownell
     
  • Use drivers/gpio/pca9539.c instead.

    Signed-off-by: eric miao
    Acked-by: Ben Gardner
    Acked-by: Jean Delvare
    Signed-off-by: David Brownell
    Cc: Sam Ravnborg
    Cc: Haavard Skinnemoen
    Cc: Philipp Zabel
    Cc: Russell King
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    eric miao
     
  • This adds a new-style I2C driver with basic support for the sixteen-bit
    PCA9539 GPIO expanders. These chips have multiple registers, push-pull
    output drivers, and (not supported in this patch) pin change interrupts.

    Board-specific code must provide "pca9539_platform_data" with each chip's
    "i2c_board_info". That provides the GPIO numbers to be used by that chip, and
    callbacks for board-specific setup/teardown logic.

    Derived from drivers/i2c/chips/pca9539.c (which has no current known
    users). This version is faster and simpler: it uses 16-bit register
    access, and caches the OUTPUT and DIRECTION registers for fast access.
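
    A hypothetical board setup; the gpio_base value, I2C address, and
    platform_data field name are illustrative assumptions:

    static struct pca9539_platform_data board_pca9539 = {
            .gpio_base = 128,    /* first GPIO number for this expander */
    };

    static struct i2c_board_info board_i2c_devices[] __initdata = {
            {
                    I2C_BOARD_INFO("pca9539", 0x74),
                    .platform_data = &board_pca9539,
            },
    };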

    Signed-off-by: eric miao
    Signed-off-by: David Brownell
    Acked-by: Jean Delvare
    Cc: Sam Ravnborg
    Cc: Haavard Skinnemoen
    Cc: Philipp Zabel
    Cc: Russell King
    Cc: Ben Gardner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    eric miao
     
  • Basic driver for 8-bit SPI based MCP23S08 GPIO expander, without support for
    IRQs or the shared chipselect mechanism.
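
    Hypothetical board wiring; the bus number, chip select, speed, and the
    platform data field name are illustrative assumptions:

    static struct mcp23s08_platform_data board_mcp23s08 = {
            .base = 160,    /* first GPIO number for this expander */
    };

    static struct spi_board_info board_spi_devices[] __initdata = {
            {
                    .modalias      = "mcp23s08",
                    .platform_data = &board_mcp23s08,
                    .max_speed_hz  = 1000000,
                    .bus_num       = 1,
                    .chip_select   = 0,
            },
    };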

    Signed-off-by: David Brownell
    Cc: Jean Delvare
    Cc: Eric Miao
    Cc: Sam Ravnborg
    Cc: Haavard Skinnemoen
    Cc: Philipp Zabel
    Cc: Russell King
    Cc: Ben Gardner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Brownell
     
  • This is a new-style I2C driver for most common 8 and 16 bit I2C based
    "quasi-bidirectional" GPIO expanders: pcf8574 or pcf8575, and several
    compatible models (mostly faster, supporting I2C at up to 1 MHz).

    The driver exposes the GPIO signals using the platform-neutral GPIO
    programming interface, so they are easily accessed by other kernel code. The
    lack of such a flexible kernel API has been a big factor in the proliferation
    of board-specific drivers for these chips... stuff that rarely makes it
    upstream since it's so ugly. This driver will let such boards use standard
    calls.

    Since it's a new-style driver, these devices must be configured as part of
    board-specific init. That eliminates the need for error-prone manual
    configuration of module parameters, and makes compatibility with legacy
    drivers (pcf8574.c, pcf8575.c) for these chips easier (there's a clear
    either/or disjunction).

    Signed-off-by: David Brownell
    Acked-by: Jean Delvare
    Cc: Eric Miao
    Cc: Sam Ravnborg
    Cc: Haavard Skinnemoen
    Cc: Philipp Zabel
    Cc: Russell King
    Cc: Ben Gardner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Brownell
     
  • This adds gpiolib support for the PXA architecture:
    - move all GPIO API functions from generic.c into gpio.c
    - convert the gpio_get/set_value macros into inline functions

    This makes it easier to hook up GPIOs provided by external chips like
    ASICs and CPLDs.
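
    The conversion looks roughly like this (using the PXA register macros
    GPLR, GPIO_bit and NR_BUILTIN_GPIO from the mach-pxa headers):

    static inline int gpio_get_value(unsigned gpio)
    {
            if (__builtin_constant_p(gpio) && gpio < NR_BUILTIN_GPIO)
                    return GPLR(gpio) & GPIO_bit(gpio);  /* built-in pin */
            else
                    return __gpio_get_value(gpio);       /* gpiolib path */
    }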

    Signed-off-by: Philipp Zabel
    Signed-off-by: David Brownell
    Acked-by: Russell King
    Cc: Jean Delvare
    Cc: Eric Miao
    Cc: Sam Ravnborg
    Cc: Haavard Skinnemoen
    Cc: Ben Gardner
    Signed-off-by: Andrew Morton
    [ Minor ARM fixup from David Brownell folded into this ]
    Signed-off-by: Linus Torvalds

    Philipp Zabel
     
  • Update Documentation/gpio.txt, primarily to include the new "gpiolib"
    infrastructure.

    Signed-off-by: David Brownell
    Cc: Jean Delvare
    Cc: Eric Miao
    Cc: Sam Ravnborg
    Cc: Haavard Skinnemoen
    Cc: Philipp Zabel
    Cc: Russell King
    Cc: Ben Gardner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Brownell
     
  • Provide new implementation infrastructure that platforms may choose to use
    when implementing the GPIO programming interface. Platforms can update their
    GPIO support to use this. In many cases the incremental cost to access a
    non-inlined GPIO should be less than a dozen instructions, with the memory
    cost being about a page (total) of extra data and code. The upside is:

    * Providing two features which were "want to have (but OK to defer)" when
    GPIO interfaces were first discussed in November 2006:

    - A "struct gpio_chip" to plug in GPIOs that aren't directly supported
    by SOC platforms, but come from FPGAs or other multifunction devices
    using conventional device registers (like UCB-1x00 or SM501 GPIOs,
    and southbridges in PCs with more open specs than usual).

    - Full support for message-based GPIO expanders, where registers are
    accessed through sleeping I/O calls. Previous support for these
    "cansleep" calls was just stubs. (One example: the widely used
    pcf8574 I2C chips, with 8 GPIOs each.)

    * Including a non-stub implementation of the gpio_{request,free}() calls,
    making those calls much more useful. The diagnostic labels are also
    recorded when DEBUG_FS is enabled, so /sys/kernel/debug/gpio can show a
    snapshot of all GPIOs known to this infrastructure.

    The driver programming interfaces introduced in 2.6.21 do not change at all;
    this infrastructure is entirely below those covers.
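
    The plug-in object is along these lines (an abridged sketch of the
    interface described above):

    struct gpio_chip {
            char *label;
            int  (*direction_input)(struct gpio_chip *chip, unsigned offset);
            int  (*get)(struct gpio_chip *chip, unsigned offset);
            int  (*direction_output)(struct gpio_chip *chip,
                                     unsigned offset, int value);
            void (*set)(struct gpio_chip *chip, unsigned offset, int value);
            int      base;           /* first GPIO number handled */
            u16      ngpio;          /* how many GPIOs this chip provides */
            unsigned can_sleep:1;    /* true for message-based expanders */
    };

    extern int gpiochip_add(struct gpio_chip *chip);
    extern int gpiochip_remove(struct gpio_chip *chip);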

    Signed-off-by: David Brownell
    Cc: Sam Ravnborg
    Cc: Jean Delvare
    Cc: Eric Miao
    Cc: Haavard Skinnemoen
    Cc: Philipp Zabel
    Cc: Russell King
    Cc: Ben Gardner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Brownell
     
  • Add an empty drivers/gpio directory for gpiolib infrastructure and GPIO
    expanders. It will be populated by later patches.

    This won't be the only place to hold such gpio_chip code. Many external chips
    add a few GPIOs as secondary functionality (such as MFD drivers) and platform
    code frequently needs to closely integrate GPIO and IRQ support.

    This is placed *early* in the build/link sequence since it's common for other
    drivers to depend on GPIOs to do their work, so they must be initialized early
    in the device_initcall() sequence.

    Signed-off-by: David Brownell
    Acked-by: Jean Delvare
    Cc: Eric Miao
    Cc: Sam Ravnborg
    Cc: Haavard Skinnemoen
    Cc: Philipp Zabel
    Cc: Russell King
    Cc: Ben Gardner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Brownell
     
  • This is a one-line patch to add the following to __scsi_alloc_queue():

    dma_set_seg_boundary(dev, shost->dma_boundary);

    This is the simplest approach but the result looks odd,
    __scsi_alloc_queue() does:

    blk_queue_segment_boundary(q, shost->dma_boundary);
    dma_set_seg_boundary(dev, shost->dma_boundary);
    blk_queue_max_segment_size(q, dma_get_max_seg_size(dev));

    I think that it would be better to set up the segment boundary in the
    same way as we did for the maximum segment size: that is, removing
    shost->dma_boundary and having LLDs call pci_set_dma_seg_boundary (or
    one of its friends).

    Then __scsi_alloc_queue() can set up both limits in the same way:

    blk_queue_segment_boundary(q, dma_get_seg_boundary(dev));
    blk_queue_max_segment_size(q, dma_get_max_seg_size(dev));

    Killing dma_boundary in scsi_host_template needs a large patch for
    libata (dma_boundary is used only by libata and sym53c8xx). I'll send
    a patch to do that if it is acceptable. James and Jeff?

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Greg KH
    Cc: Jeff Garzik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This patch makes swiotlb avoid allocating a memory area that spans an
    LLD's segment boundary.

    is_span_boundary() judges whether a memory area spans the LLD's segment
    boundary. If map_single finds such an area, it tries to find the next
    available memory area.
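
    The check is along these lines (a sketch; the names follow the
    description above):

    static int is_span_boundary(unsigned int index, unsigned int nslots,
                    unsigned long offset_slots, unsigned long max_slots)
    {
            unsigned long offset = (offset_slots + index) & (max_slots - 1);

            /* would slots [index, index + nslots) cross the boundary? */
            return offset + nslots > max_slots;
    }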

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Greg KH
    Cc: Jeff Garzik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This adds PCI's accessor for segment_boundary_mask in device_dma_parameters.

    The default segment_boundary is set to 0xffffffff, the same as the block
    layer's default value (the scsi mid layer uses the same value too).

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Greg KH
    Cc: Jeff Garzik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This adds new accessors for segment_boundary_mask in the
    device_dma_parameters structure, in the same way I did for
    max_segment_size, so we can easily change where to place struct
    device_dma_parameters in the future.

    dma_get_seg_boundary returns 0xffffffff if dma_parms in struct device
    isn't set up properly. 0xffffffff is the default value used in the block
    layer and the scsi mid layer.
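
    Roughly the accessor pair described here; 0xffffffff is the fallback
    when dev->dma_parms has not been set up:

    static inline unsigned long dma_get_seg_boundary(struct device *dev)
    {
            return dev->dma_parms ?
                    dev->dma_parms->segment_boundary_mask : 0xffffffff;
    }

    static inline int dma_set_seg_boundary(struct device *dev,
                    unsigned long mask)
    {
            if (!dev->dma_parms)
                    return -EIO;
            dev->dma_parms->segment_boundary_mask = mask;
            return 0;
    }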

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Greg KH
    Cc: Jeff Garzik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This kills the unused __clear_bit_string and find_next_zero_string
    (they were used only by the gart and calgary IOMMUs).

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Muli Ben-Yehuda
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This patch converts the gart IOMMU to use the IOMMU helper functions.
    The IOMMU no longer allocates a memory area spanning an LLD's segment
    boundary.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Muli Ben-Yehuda
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This patch converts the calgary IOMMU to use the IOMMU helper
    functions. The IOMMU no longer allocates a memory area spanning an
    LLD's segment boundary.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Muli Ben-Yehuda
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • Previously, during initialization of the IOMMU tables, the last entry
    at each 4GB boundary was marked as used, since many adapters cannot
    handle DMAing across any 4GB boundary.

    The IOMMU no longer allocates a memory area spanning an LLD's segment
    boundary, and the segment boundary of devices is set to 4GB by default,
    so we can remove the 4GB boundary protection now.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This patch converts PPC's IOMMU to use the IOMMU helper functions. The
    IOMMU no longer allocates a memory area spanning an LLD's segment
    boundary.

    iseries_hv_alloc and iseries_hv_map don't have a proper device struct,
    so a 4GB boundary is used for them.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This adds IOMMU helper functions for free area management. These
    functions take care of an LLD's segment boundary limit for IOMMUs, and
    should be useful for IOMMUs that use a bitmap for free area management.
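
    The central helper's shape: find nr free bits in the bitmap without
    letting the allocation cross boundary_size (signatures as merged in
    lib/iommu-helper.c):

    extern unsigned long iommu_area_alloc(unsigned long *map,
                    unsigned long size, unsigned long start, unsigned int nr,
                    unsigned long shift, unsigned long boundary_size,
                    unsigned long align_mask);
    extern void iommu_area_free(unsigned long *map, unsigned long start,
                    unsigned int nr);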

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Cc: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This sets the segment size limit properly via pci_set_dma_max_seg_size
    and removes blk_queue_max_segment_size because scsi-ml calls it.
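
    Illustrative only - a driver setting a 64KB maximum segment size during
    probe (the error label is hypothetical):

    err = pci_set_dma_max_seg_size(pdev, 65536);
    if (err)
            goto out_disable_device;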

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Acked-by: Jens Axboe
    Acked-by: "Salyzyn, Mark"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This sets the segment size limit properly via pci_set_dma_max_seg_size
    and removes blk_queue_max_segment_size because scsi-ml calls it.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Acked-by: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • The request_queue and device structs must have the same segment size
    limit. This patch adds blk_queue_segment_boundary in __scsi_alloc_queue
    so LLDs don't need to call both blk_queue_segment_boundary and
    set_dma_max_seg_size. An LLD that wants to change the default value
    (64KB) can call the device_dma_parameters accessors, such as
    pci_set_dma_max_seg_size, when allocating a scsi_host.

    Signed-off-by: FUJITA Tomonori
    Acked-by: Jeff Garzik
    Cc: James Bottomley
    Acked-by: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This patch makes iommu respect segment size limits when merging sg
    lists.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Acked-by: Jens Axboe
    Cc: Kyle McMartin
    Acked-by: Grant Grundler
    Cc: Matthew Wilcox
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This patch makes iommu respect segment size limits when merging sg
    lists.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Acked-by: Jens Axboe
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This patch makes pci_iommu respect segment size limits when merging sg
    lists.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Acked-by: Jens Axboe
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This patch makes sba iommu respect segment size limits when merging sg
    lists.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Acked-by: Jens Axboe
    Cc: "Luck, Tony"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This patch makes iommu respect segment size limits when merging sg
    lists.

    Signed-off-by: FUJITA Tomonori
    Cc: Jeff Garzik
    Cc: James Bottomley
    Acked-by: Jens Axboe
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori