06 Jan, 2012

2 commits

  • When reading RAID5 files, in rare cases, we calculated too
    few sg segments. There should be two extra for the beginning
    and end partial units.

    Also "too few sg segments" should not be a BUG_ON there is
    all the mechanics in place to handle it, as a short read.
    So just return -ENOMEM and the rest of the code will gracefully
    split the IO.

    [Bug in 3.2.0 Kernel]
    CC: Stable Tree
    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
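
    A minimal sketch of the policy change, with hypothetical names (the
    real io-state structure and sg-allocation function in the ORE differ):

    #include <linux/errno.h>

    struct rd_state {                   /* hypothetical stand-in io state */
            unsigned sg_table_size;     /* sg entries available */
            unsigned sg_count;          /* sg entries actually used */
    };

    static int alloc_sg_entries(struct rd_state *rds, unsigned data_units)
    {
            /* two extra entries for the partial units at the beginning
             * and at the end of the stripe
             */
            unsigned needed = data_units + 2;

            if (needed > rds->sg_table_size)
                    return -ENOMEM;     /* was a BUG_ON; the caller now
                                           splits the IO into a short read */

            rds->sg_count = needed;
            return 0;
    }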
     
  • The users of ore_check_io() expect the reported device (in case
    of error) to be indexed relative to the passed-in ore_components
    table, and not by the logical dev index.

    Getting this wrong causes a crash inside objlayoutdriver in case
    of an IO error. (A sketch of the fix follows this entry.)

    [Bug in 3.2.0 Kernel]
    CC: Stable Tree
    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
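
    The indexing fix as a minimal illustration (function and variable
    names here are hypothetical):

    /* The device reported to the ore_check_io() user must be an index
     * into the ore_components table that was passed in, i.e. relative
     * to oc->first_dev, not the file's logical device index.
     */
    static unsigned error_report_index(unsigned logical_dev, unsigned first_dev)
    {
            return logical_dev - first_dev;     /* table-relative index */
    }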
     

01 Nov, 2011

1 commit

  • Some files were using the complete module.h infrastructure without
    actually including the header at all. Fix them up in advance so
    that once the implicit presence is removed, we won't get failures
    like this (the one-line fix is shown after this entry):

    CC [M] fs/nfsd/nfssvc.o
    fs/nfsd/nfssvc.c: In function 'nfsd_create_serv':
    fs/nfsd/nfssvc.c:335: error: 'THIS_MODULE' undeclared (first use in this function)
    fs/nfsd/nfssvc.c:335: error: (Each undeclared identifier is reported only once
    fs/nfsd/nfssvc.c:335: error: for each function it appears in.)
    fs/nfsd/nfssvc.c: In function 'nfsd':
    fs/nfsd/nfssvc.c:555: error: implicit declaration of function 'module_put_and_exit'
    make[3]: *** [fs/nfsd/nfssvc.o] Error 1

    Signed-off-by: Paul Gortmaker

    Paul Gortmaker
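
    The fix for each affected file is simply to make the dependency
    explicit, for example:

    #include <linux/module.h>   /* THIS_MODULE, module_put_and_exit(), ... */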
     

25 Oct, 2011

4 commits

  • Now that we support RAID5, enable it at mount. RAID6 will come
    next; RAID4 is not in demand, so it will probably not be enabled
    (until someone wants it).

    NOTE: mkfs.exofs has had support for raid5/6 for a long time.
    (Making an empty raidX FS is just as easy as raid0 ;-} )

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
     
  • This is finally the RAID5 Write support.

    The bigger part of this patch is not the XOR engine itself, but
    the read4write logic, which is a complete mini prepare_for_striping
    reading engine that can read scattered pages of a stripe into cache
    so they can be used for XOR calculation, that is, if the write was
    not stripe aligned.

    The main algorithm behind the XOR engine is the 2 dimensional array:
    struct __stripe_pages_2d.
    A drawing might save 1000 words
    ---
    __stripe_pages_2d
          |
          |   n = pages_in_stripe_unit
          |   w = group_width - parity
          |
          |   pages array presented to the XOR lib
          V
    __1_page_stripe[0].pages --> [c0][c1]..[cw][c_par]
    __1_page_stripe[1].pages --> [c0][c1]..[cw][c_par]
     ...
    __1_page_stripe[n].pages --> [c0][c1]..[cw][c_par]

         data is added columns first, then rows
    ---
    The pages are put on this array columns first, i.e.:
    p0-of-c0, p1-of-c0, ... pn-of-c0, p0-of-c1, ...
    So we are doing a corner turn of the pages (sketched after this
    entry).

    Note that pages will zigzag down and left, but are put sequentially
    in growing order. So when the time comes to XOR the stripe, only the
    beginning and end of the array need be checked. We scan the array
    and any NULL spot will be filled by pages-to-be-read.

    The FS that wants to support RAID5 needs to supply an
    operations-vector that searches for a given page in cache, and
    specifies if the page is uptodate or needs reading. All these
    pages-to-be-read are put on a slave ore_io_state and synchronously
    read. All the pages of a stripe are read in one IO, using the
    scatter-gather mechanism.

    In write we constrain our IO to only be incomplete on a single
    stripe. Meaning either the complete IO is within a single stripe,
    so we might have pages to read at both the beginning and the end
    of the stripe, or we have some reading to do at the beginning but
    end on a stripe boundary. The left-over pages are pushed to the
    next IO by the API already established by previous work, where an
    IO offset/length combination presented to the ORE might get the
    length truncated and the user must re-submit the leftover pages.
    (Both exofs and NFS support this.)

    But any ORE user should make its best effort to align its IO
    beforehand and avoid complications. A cached ore_layout->stripe_size
    member can be used for that calculation. (NOTE: the ORE demands
    that stripe_size not be bigger than 32 bits.)

    What else? Well read it and tell me.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
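
    An illustrative sketch of the column-first ("corner turn") fill, with
    simplified, hypothetical types; this is not the actual ore_raid.c
    layout:

    #include <linux/mm_types.h>         /* struct page */

    struct stripe_row {
            struct page **pages;        /* one page per column; this is
                                           the array handed to the XOR lib */
    };

    struct stripe_pages_2d {
            unsigned rows;              /* pages_in_stripe_unit */
            unsigned cols;              /* group_width, including parity */
            struct stripe_row *row;     /* rows entries */
    };

    /* Pages arrive in file order: p0-of-c0, p1-of-c0, ... pn-of-c0,
     * p0-of-c1, ... so a running sequence number walks a column top to
     * bottom before moving to the next column.  NULL slots are later
     * filled by pages-to-be-read.
     */
    static void add_page_column_first(struct stripe_pages_2d *sp2d,
                                      unsigned seq, struct page *page)
    {
            unsigned col = seq / sp2d->rows;
            unsigned row = seq % sp2d->rows;

            sp2d->row[row].pages[col] = page;
    }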
     
  • This patch introduces the first stage of RAID5 support,
    mainly the skip-over-raid-units logic when reading. For
    writes it inserts BLANK units where the XOR blocks should
    be calculated and written to.

    It introduces the new "general raid maths", and the main
    additional parameters and components needed for raid5.
    (A generic illustration of the parity-skipping math follows
    this entry.)

    Since at this stage it could corrupt future versions that
    actually do support raid5, the enablement of raid5 mounting
    and the setting of parity-count > 0 are disabled, so the
    raid5 code will never be used. Mounting of raid5 is only
    enabled later, once the basic XOR write is also in. But if
    the "enable RAID5" patch is applied, this code has been
    tested to properly read raid5 volumes and conforms to the
    standard.

    Also it has been tested that the new maths still properly
    support RAID0 and the grouping code just as before.
    (BTW: I have found more bugs in the pnfs-obj RAID math,
    fixed here.)

    The ore.c file is getting too big, so new ore_raid.[hc]
    files are added that will hold the special raid stuff
    that is not used in striping and mirrors. In future write
    support these will get bigger.
    When adding ore_raid.c to the Kbuild file I was forced to
    rename ore.ko to libore.ko. Is it possible to keep the
    source file, say ore.c, and the module file, ore.ko, the
    same even if there are multiple files inside ore.ko?

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
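
    A generic RAID5 illustration of the parity-skipping idea; this is not
    the ORE's exact mapping, just the usual rotating-parity scheme:

    /* With one parity unit per stripe the parity column typically rotates
     * from stripe to stripe; reads must skip it, while writes reserve a
     * BLANK unit there for the XOR result.
     */
    static unsigned parity_col(unsigned stripe_no, unsigned group_width)
    {
            return stripe_no % group_width;     /* rotating parity column */
    }

    /* Map a data unit number within a stripe to a physical column,
     * skipping over the parity column.
     */
    static unsigned data_col(unsigned unit_in_stripe, unsigned stripe_no,
                             unsigned group_width)
    {
            unsigned p = parity_col(stripe_no, group_width);

            return (unit_in_stripe < p) ? unit_in_stripe : unit_in_stripe + 1;
    }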
     
  • ore_calc_stripe_info is needed by exofs::export.c
    for the layout calculations, so make it exportable.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
     

15 Oct, 2011

7 commits

  • The current ore_check_io API receives a residual
    pointer to report partial IO. But it is actually
    not used, because in a multiple-device IO there
    is never any linearity in the IO failure.

    On the other hand, if every failing device is reported
    through a supplied callback, measures can be taken to
    handle only the failed devices, one at a time.

    This will also be needed by the objects-layout-driver
    for its error reporting facility.

    Exofs is not currently using the new information and
    keeps the old behaviour of failing the complete IO in
    case of an error (no partial completion). A sketch of
    the callback shape follows this entry.

    TODO: Use an ore_check_io callback to set_page_error only
    the failing pages, and re-dirty write pages.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
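
    A hypothetical shape for such a per-device callback; the exact
    prototype in osd_ore.h may differ:

    #include <linux/types.h>

    struct ore_io_state;                /* declared in osd_ore.h */

    /* Called by the ORE once per failing device, instead of reporting a
     * single residual value for the whole IO.
     */
    typedef void (*on_dev_error_fn)(struct ore_io_state *ios,
                                    unsigned dev_index,
                                    u64 dev_offset, u64 dev_len);

    static void my_on_dev_error(struct ore_io_state *ios, unsigned dev_index,
                                u64 dev_offset, u64 dev_len)
    {
            /* e.g. set_page_error() only on this device's pages, or
             * re-dirty the affected write pages
             */
    }

    /* Then something like:  ore_check_io(ios, my_on_dev_error);  */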
     
  • All users of the ore will need to check whether the current
    code supports the given layout. For example RAID5/6 is not
    currently supported.

    So move all the checks from exofs/super.c to a new
    ore_verify_layout() to be used by ore users. (A usage
    sketch follows this entry.)

    Note that any new layout should be passed through
    ore_verify_layout(), because the ore engine will prepare
    and verify some internal members of ore_layout, and
    assumes it has been called.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
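
    A usage sketch; the argument list of ore_verify_layout() is assumed
    here, check osd_ore.h for the real prototype:

    struct ore_layout;          /* declared in osd_ore.h */
    int ore_verify_layout(unsigned total_comps, struct ore_layout *layout);

    static int setup_layout(struct ore_layout *layout, unsigned numdevs)
    {
            /* Any new layout must pass through ore_verify_layout() so
             * the ore can reject unsupported schemes (RAID5/6 at this
             * point) and prepare the internal ore_layout members it
             * relies on later.
             */
            return ore_verify_layout(numdevs, layout);
    }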
     
  • Users like the objlayout-driver would like to pass only
    a partial device table that covers the IO in question.
    For example, exofs divides the file into raid-group-sized
    chunks and only serves group_width devices at a time.

    The partiality is communicated by setting
    ore_components->first_dev, and the array covers all logical
    devices from oc->first_dev up to (oc->first_dev + oc->numdevs).

    The ore_comp_dev() API receives a logical device index
    and returns the actual present device in the table
    (a sketch follows this entry). An out-of-range dev_index
    will BUG.

    The logical device index is the theoretical device index as
    if all the devices of a file were present, i.e.:
    total_devs = group_width * mirror_p1 * group_count
    0 <= dev_index < total_devs

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
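
    A sketch of the logical-to-table translation, close to but not
    necessarily identical to the real ore_comp_dev() helper; the field
    names follow the description above and the ods[] pointer-array name
    is an assumption:

    #include <linux/bug.h>

    struct ore_dev;                     /* per-device entry */

    struct ore_components {
            unsigned first_dev;         /* logical index of ods[0] */
            unsigned numdevs;           /* entries present in ods[] */
            struct ore_dev **ods;       /* partial device table */
    };

    static struct ore_dev *comp_dev(struct ore_components *oc,
                                    unsigned dev_index)
    {
            /* an out-of-range logical index is a caller bug */
            BUG_ON(dev_index < oc->first_dev ||
                   oc->first_dev + oc->numdevs <= dev_index);

            return oc->ods[dev_index - oc->first_dev];
    }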
     
  • Memory conditions and max_bio constraints might cause us not
    to comply with the full length of the requested IO. Instead of
    failing the complete IO we can issue a shorter read/write and
    report how much was actually executed in the ios->length
    member.

    All users must check ios->length at IO_done or upon return of
    ore_read/write and re-issue the remainder of the bytes, because
    otherwise no error is returned as it was before. (A caller-side
    sketch follows this entry.)

    This is part of the effort to support the pnfs-obj layout driver.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
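
    A caller-side sketch of the re-issue loop; everything here is a
    placeholder shaped like the ORE API (the ios->length rule above is
    the part being illustrated):

    #include <linux/types.h>

    struct obj_io {                     /* stand-in for ore_io_state */
            u64 length;                 /* bytes actually executed */
    };

    /* stand-ins for the prepare/read/free calls of the real API */
    int  obj_prepare_io(void *obj, u64 offset, u64 length, void *buf,
                        struct obj_io **pio);
    int  obj_read(struct obj_io *io);
    void obj_free_io(struct obj_io *io);

    static int read_everything(void *obj, u64 offset, u64 length, void *buf)
    {
            while (length) {
                    struct obj_io *io;
                    int ret = obj_prepare_io(obj, offset, length, buf, &io);

                    if (ret)
                            return ret;

                    ret = obj_read(io);         /* may do less than asked */
                    if (ret) {
                            obj_free_io(io);
                            return ret;
                    }

                    /* io->length says how much was actually done; loop on
                     * the remainder, since no error marks the short IO.
                     */
                    offset += io->length;
                    buf    += io->length;
                    length -= io->length;
                    obj_free_io(io);
            }
            return 0;
    }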
     
  • Move the check and preparation of the ios->kern_buff case to
    later inside _write_mirror().

    Since read was never used with ios->kern_buff, its support is
    removed instead of fixed.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
     
  • Now that each ore_io_state covers only a single raid group,
    only a single striping_info calculation is needed. Embed one
    inside ore_io_state to cache the calculation results and
    eliminate an extra call.

    Also the outer _prepare_for_striping is removed, since it does
    nothing.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
     
  • Usually a single IO is confined to one group of devices
    (group_width), and only at the boundary of a raid group can
    it spill into a second group. Current code would allocate a
    full device_table-sized array at each io_state so it could
    comply with requests that span two groups. Needless to say
    that is very wasteful, especially when the device_table count
    can get very large (hundreds, even thousands), while a
    group_width is usually 8 or 10.

    * Change the ore API to trim an IO that spans two raid groups.
    The user passes offset+length to ore_get_rw_state; the ore
    might trim that length if it spans a group boundary. The user
    must check ios->length or ios->nrpages to see how much IO will
    be performed. It is the responsibility of the user to re-issue
    the remainder of the IO (illustrated after this entry).

    * Modify exofs to copy spilled pages on to the next IO.
    This means one last kick is needed after all coalescing
    of pages is done.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
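
    An illustration of the trimming idea only; this is not the ORE's
    internal math. Assume one raid group holds group_data_size bytes of
    file data before the file moves on to the next group of devices:

    #include <linux/types.h>

    static u64 trim_to_group(u64 offset, u64 length, u64 group_data_size)
    {
            /* plain % for clarity; 32-bit kernel code would use the
             * div64 helpers for a 64-bit divisor
             */
            u64 left_in_group = group_data_size - (offset % group_data_size);

            /* never cross the group boundary; the caller re-issues
             * the remainder as a new IO
             */
            return length < left_in_group ? length : left_in_group;
    }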
     

04 Oct, 2011

1 commit

  • In the pNFS obj-LD the device table at the layout level needs
    to point to a device_cache node, where it is possible and likely
    that many layouts will point to the same device-nodes.

    In Exofs we have a more orderly structure, with a single
    array of devices that repeats twice for a round-robin view of
    the device table.

    This patch moves to a model that can be used by the pNFS obj-LD,
    where struct ore_components holds an array of ore_dev pointers.
    (ore_dev is newly defined and contains a struct osd_dev *od
    member.)

    Each pointer in the array points to a bigger user-defined
    dev_struct, which can be accessed by use of the container_of
    macro (sketched after this entry).

    In Exofs an __alloc_dev_table() function allocates the
    ore_dev-pointers array as well as an exofs_dev array, in one
    allocation and does the addresses dance to set everything pointing
    correctly. It still keeps the double allocation trick for the
    inodes round-robin view of the table.

    The device table is always allocated dynamically, also for the
    single device case. So it is unconditionally freed at umount.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
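
    A sketch of the pointer-array model; the exofs_dev fields beyond the
    embedded ore_dev are illustrative:

    #include <linux/kernel.h>           /* container_of() */

    struct osd_dev;                     /* from the OSD initiator library */

    struct ore_dev {
            struct osd_dev *od;
    };

    struct exofs_dev {
            struct ore_dev ored;        /* embedded, so container_of works */
            unsigned did;               /* illustrative per-device data */
    };

    static struct exofs_dev *to_exofs_dev(struct ore_dev *ored)
    {
            return container_of(ored, struct exofs_dev, ored);
    }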
     

03 Oct, 2011

3 commits


07 Aug, 2011

2 commits

  • Export everything from the ore that needs exporting. Change
    Kbuild and Kconfig to build ore.ko as an independent module.
    Import the ore from exofs.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh
     
  • ORE stands for "Objects Raid Engine"

    This patch is a mechanical rename of everything that was in ios.c
    and its API declaration to an ore.c and an osd_ore.h header. The ore
    engine will later be used by the pnfs objects layout driver.

    * File ios.c => ore.c

    * Declaration of types and API are moved from exofs.h to a new
    osd_ore.h

    * All used types are prefixed by ore_ from their exofs_ name.

    * Shift includes from exofs.h to osd_ore.h so that osd_ore.h is
    independent, and include it from exofs.h.

    Other than the pure rename there are no other changes. The next
    patch will move the ore into its own module and will export the
    API to be used by exofs and later the layout driver.

    Signed-off-by: Boaz Harrosh

    Boaz Harrosh