14 Nov, 2018

1 commit

  • commit 9a59739bd01f77db6fbe2955a4fce165f0f43568 upstream.

    This enum has become part of the uABI, as both RXE and the
    ib_uverbs_post_send() command expect userspace to supply values from this
    enum. So it should be properly placed in include/uapi/rdma.

    In userspace this enum is called 'enum ibv_wr_opcode' as part of
    libibverbs.h. That enum defines different values for IB_WR_LOCAL_INV,
    IB_WR_SEND_WITH_INV, and IB_WR_LSO. These were introduced (incorrectly, it
    turns out) into libibverbs in 2015.

    The kernel has changed its mind on the numbering for several of the IB_WC
    values over the years, but has remained stable on IB_WR_LOCAL_INV and
    below.
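
    As a rough illustration of the divergence (the values below are recalled
    from the libibverbs headers and the pre-4.19 kernel enum; treat them as
    approximate, not authoritative):

        /* userspace view (rdma-core libibverbs.h) */
        enum ibv_wr_opcode {
                IBV_WR_ATOMIC_FETCH_AND_ADD = 6,   /* stable, matches the kernel */
                IBV_WR_LOCAL_INV            = 7,
                IBV_WR_BIND_MW              = 8,
                IBV_WR_SEND_WITH_INV        = 9,
                IBV_WR_TSO                  = 10,
        };

        /* kernel enum ib_wr_opcode before this patch: same names, shifted numbers */
        enum ib_wr_opcode {
                IB_WR_LSO                   = 7,
                IB_WR_SEND_WITH_INV         = 8,
                IB_WR_RDMA_READ_WITH_INV    = 9,
                IB_WR_LOCAL_INV             = 10,
        };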

    Based on this we can conclude that there is no real user space user of the
    values beyond IB_WR_ATOMIC_FETCH_AND_ADD, as they have never worked via
    rdma-core. This is confirmed by inspection: only rxe uses the kernel enum
    and implements the latter operations. rxe has clearly never worked with
    these attributes from userspace. Other drivers that support these opcodes
    implement the functionality without calling out to the kernel.

    To make IB_WR_SEND_WITH_INV and related work for RXE in userspace we
    choose to renumber the IB_WR enum in the kernel to match the uABI that
    userspace has been using since before Soft RoCE was merged. This is an
    overall simpler configuration for the whole software stack, and obviously
    can't break anything existing.

    Reported-by: Seth Howell
    Tested-by: Seth Howell
    Fixes: 8700e3e7c485 ("Soft RoCE driver")
    Cc:
    Signed-off-by: Jason Gunthorpe
    Signed-off-by: Greg Kroah-Hartman

    Jason Gunthorpe
     

24 Aug, 2018

1 commit

  • Pull more rdma updates from Jason Gunthorpe:
    "This is the SMC cleanup promised, a randconfig regression fix, and
    kernel oops fix.

    Summary:

    - Switch SMC over to rdma_get_gid_attr and remove the compat

    - Fix a crash in HFI1 with some BIOS's

    - Fix a randconfig failure"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
    IB/ucm: fix UCM link error
    IB/hfi1: Invalid NUMA node information can cause a divide by zero
    RDMA/smc: Replace ib_query_gid with rdma_get_gid_attr

    Linus Torvalds
     

23 Aug, 2018

1 commit

  • There are several blockable mmu notifiers which might sleep in
    mmu_notifier_invalidate_range_start and that is a problem for the
    oom_reaper because it needs to guarantee a forward progress so it cannot
    depend on any sleepable locks.

    Currently we simply back off and mark an oom victim with blockable mmu
    notifiers as done after a short sleep. That can result in selecting a new
    oom victim prematurely because the previous one still hasn't torn its
    memory down yet.

    We can do much better though. Even if mmu notifiers use sleepable locks,
    there is no reason to automatically assume those locks are held. Moreover,
    the majority of notifiers only care about a portion of the address space,
    and there is absolutely zero reason to fail when we are unmapping an
    unrelated range. Many notifiers do really block and wait for HW, which is
    harder to handle, and in that case we have to bail out.

    This patch handles the low hanging fruit.
    __mmu_notifier_invalidate_range_start gets a blockable flag and callbacks
    are not allowed to sleep if the flag is set to false. This is achieved by
    using trylock instead of the sleepable lock for most callbacks and
    continuing as long as we do not block down the call chain.
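
    A minimal sketch of the resulting callback pattern (hypothetical driver;
    struct my_dev and my_dev_unmap() are made up, and the signature simply
    follows the blockable flag described above):

        static int my_invalidate_range_start(struct mmu_notifier *mn,
                                             struct mm_struct *mm,
                                             unsigned long start,
                                             unsigned long end,
                                             bool blockable)
        {
                struct my_dev *dev = container_of(mn, struct my_dev, mn);

                if (blockable)
                        mutex_lock(&dev->lock);
                else if (!mutex_trylock(&dev->lock))
                        return -EAGAIN;         /* let the oom_reaper retry */

                /* must not take further sleepable locks when !blockable */
                my_dev_unmap(dev, start, end);
                mutex_unlock(&dev->lock);
                return 0;
        }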

    I think we can improve that even further because there is a common pattern
    to do a range lookup first and then do something about that. The first
    part can be done without a sleeping lock in most cases AFAICS.

    The oom_reaper end then simply retries if there is at least one notifier
    which couldn't make any progress in !blockable mode. A retry loop is
    already implemented to wait for the mmap_sem and this is basically the
    same thing.

    The simplest way for driver developers to test this code path is to wrap
    userspace code which uses these notifiers into a memcg and set the hard
    limit to hit the oom. This can be done e.g. after the test faults in all
    the mmu notifier managed memory and then set the hard limit to something
    really small. Then we are looking for a proper process tear down.

    [akpm@linux-foundation.org: coding style fixes]
    [akpm@linux-foundation.org: minor code simplification]
    Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org
    Signed-off-by: Michal Hocko
    Acked-by: Christian König # AMD notifiers
    Acked-by: Leon Romanovsky # mlx and umem_odp
    Reported-by: David Rientjes
    Cc: "David (ChunMing) Zhou"
    Cc: Paolo Bonzini
    Cc: Alex Deucher
    Cc: David Airlie
    Cc: Jani Nikula
    Cc: Joonas Lahtinen
    Cc: Rodrigo Vivi
    Cc: Doug Ledford
    Cc: Jason Gunthorpe
    Cc: Mike Marciniszyn
    Cc: Dennis Dalessandro
    Cc: Sudeep Dutt
    Cc: Ashutosh Dixit
    Cc: Dimitri Sivanich
    Cc: Boris Ostrovsky
    Cc: Juergen Gross
    Cc: "Jérôme Glisse"
    Cc: Andrea Arcangeli
    Cc: Felix Kuehling
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

18 Aug, 2018

1 commit

  • All RDMA ULPs should be using rdma_get_gid_attr instead of
    ib_query_gid. Convert SMC to use the new API.
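
    Roughly, the conversion has this shape (a sketch from memory of the
    4.19-era API, not the actual SMC diff):

        /* old: copy the GID (and, racily, the attributes) out of the cache */
        union ib_gid gid;
        int rc = ib_query_gid(ibdev, port, 0, &gid, NULL);

        /* new: hold a reference on the cached entry and read from it */
        const struct ib_gid_attr *attr = rdma_get_gid_attr(ibdev, port, 0);

        if (IS_ERR(attr))
                return PTR_ERR(attr);
        /* ... use attr->gid, attr->gid_type, attr->ndev ... */
        rdma_put_gid_attr(attr);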

    In the process correct some confusion with gid_type - if attr->ndev is
    !NULL then gid_type can never be IB_GID_TYPE_IB by
    definition. IB_GID_TYPE_ROCE shares the same enum value and is probably
    what was intended here.

    Reviewed-by: Parav Pandit
    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     

17 Aug, 2018

2 commits

  • This reverts commit ddb457c6993babbcdd41fca638b870d2a2fc3941.

    The rdma/ib_cache.h include is kept, and we have to add a memset to the
    compat wrapper to avoid compiler warnings in gcc-7.

    This revert is done to avoid extensive merge conflicts with SMC
    changes in netdev during the 4.19 merge window.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • Resolve merge conflicts from the -rc cycle against the rdma.git tree:

    Conflicts:
    drivers/infiniband/core/uverbs_cmd.c
    - New ifs added to ib_uverbs_ex_create_flow in -rc and for-next
    - Merge removal of file->ucontext in for-next with new code in -rc
    drivers/infiniband/core/uverbs_main.c
    - for-next removed code from ib_uverbs_write() that was modified
    in for-rc

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     

13 Aug, 2018

3 commits

  • Everything now uses the uverbs_uapi data structure.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • Convert the ioctl method syscall path to use the uverbs_api data
    structures. The new uapi structure includes all the same information, just
    in a different and more optimal way.

    - Use attr_bkey instead of 2 level radix trees for everything related to
    attributes. This includes the attribute storage, presence, and
    detection of missing mandatory attributes.
    - Avoid iterating over all attribute storage at finish, instead use
    find_first_bit with the attr_bkey to locate only those attrs that need
    cleanup.
    - Organize things to always run, and always rely on, cleanup. This
    avoids a bunch of tricky error unwind cases.
    - Locate the method using the radix tree, and locate the attributes
    using a very efficient incremental radix tree lookup
    - Use the precomputed destroy_bkey to handle uobject destruction
    - Use the precomputed allocation sizes and precomputed 'need_stack'
    to avoid maths in the fast path. This is optimal if userspace
    does not pass (many) unsupported attributes.

    Overall this results in much better codegen for the attribute accessors,
    everything is now stored in bitmaps or linear arrays indexed by attr_bkey.
    The compiler can compute attr_bkey values at compile time for all method
    attributes, meaning things like uverbs_attr_is_valid() now compile into
    single instruction bit tests.
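
    The effect can be pictured with ordinary bitmap primitives (illustrative
    only; everything except the bitmap helpers is hypothetical):

        #include <linux/bitmap.h>
        #include <linux/bitops.h>

        #define MAX_METHOD_ATTRS 64

        /* one bit per attr_bkey, set when userspace supplied the attribute */
        static DECLARE_BITMAP(attr_present, MAX_METHOD_ATTRS);

        /* with a compile-time constant attr_bkey this is a single bit test */
        static bool attr_is_valid(unsigned int attr_bkey)
        {
                return test_bit(attr_bkey, attr_present);
        }

        /* cleanup visits only the attributes that were actually filled in */
        static void cleanup_attrs(void)
        {
                unsigned int bkey;

                for_each_set_bit(bkey, attr_present, MAX_METHOD_ATTRS)
                        clear_bit(bkey, attr_present);
        }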

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • This is similar in spirit to devm, it keeps track of any allocations
    linked to this method call and ensures they are all freed when the method
    exits. Further, if there is space in the internal/onstack buffer then the
    allocator will hand out that memory and avoid an expensive call to
    kmalloc/kfree in the syscall path.
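
    In spirit the allocator looks like this (hypothetical field and function
    names, not the actual uverbs code):

        struct bundle_alloc_head {
                struct bundle_alloc_head *next;
        };

        static void *bundle_alloc(struct bundle_priv *pbundle, size_t size)
        {
                struct bundle_alloc_head *chunk;

                size = ALIGN(size, sizeof(u64));

                /* serve from the internal/on-stack buffer while it lasts */
                if (pbundle->internal_used + size <= sizeof(pbundle->internal_buffer)) {
                        void *res = pbundle->internal_buffer + pbundle->internal_used;

                        pbundle->internal_used += size;
                        return res;
                }

                /* otherwise fall back to the heap and remember the chunk so
                 * everything is freed when the method exits */
                chunk = kmalloc(sizeof(*chunk) + size, GFP_KERNEL);
                if (!chunk)
                        return ERR_PTR(-ENOMEM);
                chunk->next = pbundle->allocated;
                pbundle->allocated = chunk;
                return chunk + 1;
        }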

    Signed-off-by: Jason Gunthorpe
    Reviewed-by: Leon Romanovsky

    Jason Gunthorpe
     

11 Aug, 2018

5 commits

  • Memory in the bundle is valuable, do not waste it holding an 8 byte
    pointer for the rare case of writing to a PTR_OUT. We can compute the
    pointer by storing a small 1 byte array offset and the base address of the
    uattr memory in the bundle private memory.

    This also means we can access the kernel's copy of the ib_uverbs_attr, so
    drop the copy of flags as well.

    Since the uattr base should be private bundle information this also
    de-inlines the already too big uverbs_copy_to inline and moves
    create_udata into uverbs_ioctl.c so they can see the private struct
    definition.

    Signed-off-by: Jason Gunthorpe
    Reviewed-by: Leon Romanovsky

    Jason Gunthorpe
     
  • This already existed as the anonymous 'ctx' structure, but this was not
    really a useful form. Hoist this struct into bundle_priv and rework the
    internal things to use it instead.

    Move a bunch of the processing internal state into the priv and reduce the
    excessive use of function arguments.

    Signed-off-by: Jason Gunthorpe
    Reviewed-by: Leon Romanovsky

    Jason Gunthorpe
     
  • Currently the struct uverbs_obj_type stored in the ib_uobject is part of
    the .rodata segment of the module that defines the object. This is a
    problem if drivers define new uapi objects as we will be left with a
    dangling pointer after device disassociation.

    Switch the uverbs_obj_type for struct uverbs_api_object, which is
    allocated memory that is part of the uverbs_api and is guaranteed to
    always exist. Further this moves the 'type_class' into this memory which
    means access to the IDR/FD function pointers is also guaranteed. Drivers
    cannot define new types.

    This makes it safe to continue to use all uobjects, including driver
    defined ones, after disassociation.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • This radix tree datastructure is intended to replace the 'hash' structure
    used today for parsing ioctl methods during system calls. This first
    commit introduces the structure and builds it from the existing .rodata
    descriptions.

    The so-called hash arrangement is actually a 5 level open coded radix tree.
    This new version uses a 3 level radix tree built using the radix tree
    library.

    Overall this is much less code and much easier to build as the radix tree
    API allows for dynamic modification during the building. There is a small
    memory penalty to pay for this, but since the radix tree is allocated on
    a per device basis, a few kb of RAM seems immaterial considering the
    gained simplicity.
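
    The radix tree library lets the table be built incrementally at
    registration time, roughly like so (illustrative; the key layout shown is
    made up, not the real uapi key encoding):

        #include <linux/radix-tree.h>

        static RADIX_TREE(uapi_tree, GFP_KERNEL);

        /* made-up key layout: object id in the high bits, method id below */
        static unsigned long method_key(u32 obj_id, u32 method_id)
        {
                return ((unsigned long)obj_id << 16) | method_id;
        }

        static int add_method(u32 obj_id, u32 method_id, void *descr)
        {
                return radix_tree_insert(&uapi_tree,
                                         method_key(obj_id, method_id), descr);
        }

        static void *lookup_method(u32 obj_id, u32 method_id)
        {
                return radix_tree_lookup(&uapi_tree,
                                         method_key(obj_id, method_id));
        }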

    The radix tree is similar to the existing tree, but also has an 'attr_bkey'
    concept, which is a small-valued index for each method attribute. This is
    used to simplify and improve performance of everything in the next
    patches.

    Signed-off-by: Jason Gunthorpe
    Reviewed-by: Leon Romanovsky
    Reviewed-by: Michael J. Ruhl

    Jason Gunthorpe
     
  • There is no reason for drivers to do this; the core code should take care
    of everything. The drivers will provide their information from rodata to
    describe their modifications to the core's base uapi specification.

    The core uses this to build up the runtime uapi for each device.

    Signed-off-by: Jason Gunthorpe
    Reviewed-by: Michael J. Ruhl
    Reviewed-by: Leon Romanovsky

    Jason Gunthorpe
     

03 Aug, 2018

1 commit

  • Now that the unregister_netdev flow for IPoIB no longer relies on external
    code we can now introduce the use of priv_destructor and
    needs_free_netdev.

    The rdma_netdev flow is switched to use the netdev common priv_destructor
    instead of the special free_rdma_netdev and the IPOIB ULP adjusted:
    - priv_destructor needs to switch to point to the ULP's destructor
    which will then call the rdma_ndev's in the right order
    - We need to be careful around the error unwind of register_netdev
    as it sometimes calls priv_destructor on failure
    - ULPs need to use ndo_init/uninit to ensure proper ordering
    of failures around register_netdev

    Switching to priv_destructor is a necessary pre-requisite to using
    the rtnl new_link mechanism.

    The VNIC user for rdma_netdev should also be revised, but that is left for
    another patch.
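
    As a sketch of the mechanism being adopted (generic netdev API usage, not
    the actual IPoIB diff):

        static void ulp_priv_destructor(struct net_device *dev)
        {
                /* ULP teardown first, then chain to the rdma_netdev's own
                 * destructor so the ordering described above is preserved */
        }

        static void ulp_setup(struct net_device *dev)
        {
                dev->priv_destructor = ulp_priv_destructor;
                dev->needs_free_netdev = true;  /* core calls free_netdev() */
        }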

    Signed-off-by: Jason Gunthorpe
    Signed-off-by: Denis Drozdov
    Signed-off-by: Leon Romanovsky

    Jason Gunthorpe
     

02 Aug, 2018

7 commits

  • The disassociate function was broken by design because it failed all
    commands. This prevents userspace from calling destroy on a uobject after
    it has detected a device fatal error, and thus prevents userspace from
    reclaiming those resources.

    This fix is now straightforward: when anything other than the user
    destroys a uobject, the object remains in the IDR with a NULL context and object
    pointer. All lookup locking modes other than DESTROY will fail. When the
    user ultimately calls the destroy function it is simply dropped from the
    IDR while any related information is returned.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • This does the same as the patch before, except for ioctl. The rules are
    the same, but for the ioctl methods the core code handles setting up the
    uobject.

    - Retrieve the ib_dev from the uobject->context->device. This is
    safe under ioctl as the core has already done rdma_alloc_begin_uobject
    and so CREATE calls are entirely protected by the rwsem.
    - Retrieve the ib_dev from uobject->object
    - Call ib_uverbs_get_ucontext()

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • This is a step to get rid of the global check for disassociation. In this
    model, the ib_dev is not proven to be valid by the core code and cannot be
    provided to the method. Instead, every method decides if it is able to
    run after disassociation and obtains the ib_dev using one of three
    different approaches:

    - Call srcu_dereference on the udevice's ib_dev. As before, this means
    the method cannot be called after disassociation begins.
    (eg alloc ucontext)
    - Retrieve the ib_dev from the ucontext, via ib_uverbs_get_ucontext()
    - Retrieve the ib_dev from the uobject->object after checking
    under SRCU if disassociation has started (eg uobj_get)

    Largely, the code is all ready for this; the main work is to provide an
    ib_dev after calling uobj_alloc(). The few other places simply use
    ib_uverbs_get_ucontext() to get the ib_dev.

    This flexibility will let the next patches allow destroy to operate
    after disassociation.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • After all the recent structural changes this is now straightforward: hoist
    the hw_destroy_rwsem up out of rdma_destroy_explicit and wrap it around
    the uobject write lock as well as the destroy.

    This is necessary as obtaining a write lock concurrently with
    uverbs_destroy_ufile_hw() will cause malfunction.

    After this change none of the destroy callbacks require the
    disassociate_srcu lock to be correct.

    This requires introducing a new lookup mode, UVERBS_LOOKUP_DESTROY as the
    IOCTL interface needs to hold an unlocked kref until all command
    verification is completed.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • This is more readable, and future patches will need a 3rd lookup type.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • There are several flows that can destroy a uobject and each one is
    minimized and sprinkled throughout the code base, making it difficult to
    understand and very hard to modify the destroy path.

    Consolidate all of these into uverbs_destroy_uobject() and call it in all
    cases where a uobject has to be destroyed.

    This makes one change to the lifecycle, during any abort (eg when
    alloc_commit is not called) we always call out to alloc_abort, even if
    remove_commit needs to be called to delete a HW object.

    This also renames RDMA_REMOVE_DURING_CLEANUP to RDMA_REMOVE_ABORT to
    clarify its actual usage and revises some of the comments to reflect what
    the life cycle is for the type implementation.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • The ridiculous dance with uobj_remove_commit() is not needed, the write
    path can follow the same flow as ioctl - lock and destroy the HW object
    then use the data left over in the uobject to form the response to
    userspace.

    Two helpers are introduced to make this flow straightforward for the
    caller.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     

31 Jul, 2018

6 commits

  • Return bool for following internal and inline functions as their
    underlying APIs return bool too.

    1. cma_zero_addr()
    2. cma_loopback_addr()
    3. cma_any_addr()
    4. ib_addr_any()
    5. ib_addr_loopback()

    While we are touching cma_loopback_addr(), remove extra white spaces
    in it.

    Signed-off-by: Parav Pandit
    Reviewed-by: Daniel Jurgens
    Signed-off-by: Leon Romanovsky
    Signed-off-by: Jason Gunthorpe

    Parav Pandit
     
  • Constify several pointers such as path_rec, ib_cm_event and listen_id
    pointers in several functions.

    Signed-off-by: Parav Pandit
    Reviewed-by: Daniel Jurgens
    Signed-off-by: Leon Romanovsky
    Signed-off-by: Jason Gunthorpe

    Parav Pandit
     
  • The following APIs are not supposed to modify addr or dest_addr contents.
    Therefore make those function arguments const for better code
    readability.

    1. rdma_resolve_ip()
    2. rdma_addr_size()
    3. rdma_resolve_addr()

    Signed-off-by: Parav Pandit
    Reviewed-by: Daniel Jurgens
    Signed-off-by: Leon Romanovsky
    Signed-off-by: Jason Gunthorpe

    Parav Pandit
     
  • This clearly indicates that the input is a bitwise combination of values
    in an enum, and identifies which enum contains the definition of the bits.

    Special accessors are provided that handle the mandatory validation of the
    allowed bits and enforce the correct type for bitwise flags.

    If we had introduced this at the start then the kabi would have uniformly
    used u64 data to pass flags; however, today there is a mixture of u64 and
    u32 flags. All places are converted to accept both sizes and the accessor
    fixes it. This allows all existing flags to grow to u64 in future without
    any hassle.

    Finally all flags are, by definition, optional. If flags are not passed
    the accessor does not fail, but provides a value of zero.
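
    Usage in a method handler then looks roughly like this (a sketch; the
    attribute and flag names are hypothetical, and the accessor is assumed to
    be along the lines of uverbs_get_flags64()):

        u64 flags;
        int ret;

        /* rejects any bit outside the allowed set; if the attribute was not
         * passed at all, flags is simply 0 */
        ret = uverbs_get_flags64(&flags, attrs, MY_METHOD_FLAGS_ATTR,
                                 MY_FLAG_A | MY_FLAG_B);
        if (ret)
                return ret;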

    Signed-off-by: Jason Gunthorpe
    Reviewed-by: Leon Romanovsky

    Jason Gunthorpe
     
  • Since neither ib_post_send() nor ib_post_recv() modify the data structure
    their second argument points at, declare that argument const. This change
    makes it necessary to declare the 'bad_wr' argument const too and also to
    modify all ULPs that call ib_post_send(), ib_post_recv() or
    ib_post_srq_recv(). This patch does not change any functionality but makes
    it possible for the compiler to verify whether the
    ib_post_(send|recv|srq_recv) really do not modify the posted work request.

    To make this possible, only one cast had to be introduced that casts away
    constness, namely in rpcrdma_post_recvs(). The only way I can think of to
    avoid that cast is to introduce an additional loop in that function or to
    change the data type of bad_wr from struct ib_recv_wr ** into int
    (an index that refers to an element in the work request list). However,
    both approaches would require even more extensive changes than this
    patch.
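
    After the change the verbs prototypes take this shape (reconstructed from
    the description above; see include/rdma/ib_verbs.h for the authoritative
    form):

        int ib_post_send(struct ib_qp *qp, const struct ib_send_wr *send_wr,
                         const struct ib_send_wr **bad_send_wr);

        int ib_post_recv(struct ib_qp *qp, const struct ib_recv_wr *recv_wr,
                         const struct ib_recv_wr **bad_recv_wr);

        int ib_post_srq_recv(struct ib_srq *srq,
                             const struct ib_recv_wr *recv_wr,
                             const struct ib_recv_wr **bad_recv_wr);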

    Signed-off-by: Bart Van Assche
    Reviewed-by: Chuck Lever
    Signed-off-by: Jason Gunthorpe

    Bart Van Assche
     
  • When posting a send work request, the work request that is posted is not
    modified by any of the RDMA drivers. Make this explicit by constifying
    most ib_send_wr pointers in RDMA transport drivers.

    Signed-off-by: Bart Van Assche
    Reviewed-by: Sagi Grimberg
    Reviewed-by: Steve Wise
    Reviewed-by: Dennis Dalessandro
    Signed-off-by: Jason Gunthorpe

    Bart Van Assche
     

28 Jul, 2018

1 commit

  • Code changes in smc have become so complicated this cycle that the RDMA
    patches to remove ib_query_gid in smc create too complex merge conflicts.
    Allow those conflicts to be resolved by using the net/smc hunks by
    providing a compatibility wrapper. During the second phase of the merge
    window this wrapper will be deleted and smc updated to use the new API.

    Reported-by: Stephen Rothwell
    Reviewed-by: Parav Pandit
    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     

26 Jul, 2018

6 commits

  • For RoCE, when CM requests are received for RC and UD connections, the
    netdevice of the incoming request is unavailable. Because of that, CM
    requests are always forwarded to the init_net namespace.

    Now that we have the GID attribute available, introduce an SGID attribute
    in incoming CM requests and refer to its netdevice. This is similar to the
    existing SGID attribute field in outgoing CM requests for RC and UD
    transports.

    Signed-off-by: Parav Pandit
    Reviewed-by: Daniel Jurgens
    Signed-off-by: Leon Romanovsky
    Signed-off-by: Jason Gunthorpe

    Parav Pandit
     
  • We have a parallel unlocked reader and writer with ib_uverbs_get_context()
    vs everything else, and nothing guarantees this works properly.

    Audit and fix all of the places that access ucontext to use one of the
    following locking schemes:
    - Call ib_uverbs_get_ucontext() under SRCU and check for failure
    - Access the ucontext through a struct ib_uobject context member
    while holding a READ or WRITE lock on the uobject.
    This value cannot be NULL and has no race.
    - Hold the ucontext_lock and check for ufile->ucontext !NULL

    This also re-implements ib_uverbs_get_ucontext() in a way that is safe
    against concurrent ib_uverbs_get_context() and disassociation.

    As a side effect, every access to ucontext in the commands is via
    ib_uverbs_get_ucontext() with an error check, or via the uobject, so there
    is no longer any need for the core code to check ucontext on every command
    call. These checks are also removed.
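
    The first scheme, in sketch form (assuming ib_uverbs_get_ucontext()
    returns an ERR_PTR on disassociation, as described above):

        struct ib_ucontext *ucontext;

        ucontext = ib_uverbs_get_ucontext(ufile);
        if (IS_ERR(ucontext))
                return PTR_ERR(ucontext);
        /* ucontext stays valid for the duration of the SRCU read section */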

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • Allocating the struct file during alloc_begin creates this strange
    asymmetry with IDR, where the FD has two krefs pointing at it during the
    pre-commit phase. In particular this makes the abort process for FD very
    strange and confusing.

    For instance, abort currently calls the type's destroy_object twice, and
    the fops release once if abort is done. This is very counter-intuitive. No
    fops should be called until alloc_commit succeeds, and destroy_object
    should only ever be called once.

    Moving the struct file allocation to the alloc_commit is now simple, as we
    already support failure of rdma_alloc_commit_uobject, with all the
    required rollback pieces.

    This creates an understandable symmetry with IDR and simplifies/fixes the
    abort handling for FD types.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • The ioctl framework already does this correctly, but the write path did
    not. This is trivially fixed by simply using a standard pattern to return
    uobj_alloc_commit() as the last statement in every function.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • The locking here has always been a bit crazy and spread out; upon some
    careful analysis we can simplify things.

    Create a single function uverbs_destroy_ufile_hw() that internally handles
    all locking. This pulls together pieces of this process that were
    sprinkled all over the place into one place, and covers them with one
    lock.

    This eliminates several duplicate/confusing locks and makes the control
    flow in ib_uverbs_close() and ib_uverbs_free_hw_resources() extremely
    simple.

    Unfortunately we have to keep an extra mutex, ucontext_lock. This lock is
    logically part of the rwsem and provides the 'down write, fail if write
    locked, wait if read locked' semantic we require.

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     
  • Our ABI for write() uses a s32 for FDs and a u32 for IDRs, but internally
    we ended up implicitly casting these ABI values into an 'int'. For ioctl()
    we use a s64 for FDs and a u64 for IDRs, again casting to an int.

    The various casts to int are all missing range checks which can cause
    userspace values that should be considered invalid to be accepted.

    Fix this by making the generic lookup routine accept a s64, which does not
    truncate the write API's u32/s32 or the ioctl API's s64. Then push the
    detailed range checking down to the actual type implementations to be
    shared by both interfaces.

    Finally, change the copy of the uobj->id to sign extend into a s64, so eg,
    if we ever wish to return a negative value for a FD it is carried
    properly.

    This ensures that userspace values are never weirdly interpreted due to
    the various truncations and everything that is really out of range gets
    an EINVAL.
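
    Schematically, the per-type bounds look like this (illustrative helpers,
    not the real uverbs code; the bounds are chosen to match the write() ABI
    described above):

        /* IDR handles arrive as a u32 in the write() ABI */
        static bool idr_id_in_range(s64 id)
        {
                return id >= 0 && id <= U32_MAX;
        }

        /* FDs must be non-negative and fit in an int */
        static bool fd_id_in_range(s64 id)
        {
                return id >= 0 && id <= INT_MAX;
        }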

    Signed-off-by: Jason Gunthorpe

    Jason Gunthorpe
     

25 Jul, 2018

4 commits

  • This patch does not change the behavior of the modified functions.

    Signed-off-by: Bart Van Assche
    Signed-off-by: Jason Gunthorpe

    Bart Van Assche
     
  • Introduce driver create and destroy flow methods on the uverbs flow
    object.

    This allows the driver to get its specific device attributes to match the
    underlay specification while still using the generic ib_flow object for
    cleanup and code sharing.

    The IB object's attributes are set via the ib_set_flow() helper function.

    The specific implementation for the given specification is added in
    downstream patches.

    Signed-off-by: Yishai Hadas
    Signed-off-by: Leon Romanovsky
    Signed-off-by: Jason Gunthorpe

    Yishai Hadas
     
  • This patch considers the case that ib_flow is created by some device
    driver with its specific parameters using the KABI infrastructure.

    In that case both QP and ib_uflow_resources might not be applicable.
    Downstream patches from this series use the above functionality.

    Signed-off-by: Yishai Hadas
    Signed-off-by: Leon Romanovsky
    Signed-off-by: Jason Gunthorpe

    Yishai Hadas
     
  • Introduce flow steering matcher object and its create and destroy methods.

    This matcher object holds some mlx5 specific driver properties that
    match the underlay device specification when an mlx5 flow steering group
    is created.

    It will be used in downstream patches to be part of mlx5 specific create
    flow method.

    Signed-off-by: Yishai Hadas
    Signed-off-by: Leon Romanovsky
    Signed-off-by: Jason Gunthorpe

    Yishai Hadas
     

24 Jul, 2018

1 commit