05 Sep, 2022

13 commits

  • [ upstream commit 791f3465c4afde02d7f16cf7424ca87070b69396 ]

    Fixes a problem described in 50252e4b5e989
    ("aio: fix use-after-free due to missing POLLFREE handling")
    and copies the approach used there.

    In short, we have to forcibly eject a poll entry when we meet POLLFREE.
    We can't rely on io_poll_get_ownership() as we can't wait for potentially
    running tw handlers, so we use the fact that wqs are RCU freed. See
    Eric's patch and comments for more details.

    Reported-by: Eric Biggers
    Link: https://lore.kernel.org/r/20211209010455.42744-6-ebiggers@kernel.org
    Reported-and-tested-by: syzbot+5426c7ed6868c705ca14@syzkaller.appspotmail.com
    Fixes: 221c5eb233823 ("io_uring: add support for IORING_OP_POLL")
    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/4ed56b6f548f7ea337603a82315750449412748a.1642161259.git.asml.silence@gmail.com
    [axboe: drop non-functional change from patch]
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
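The forcible-ejection idea above can be modeled in plain C. This is a hedged userspace sketch, not the kernel code: the struct layout and the names `poll_wake_model` and `eject_entry` are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* When a wakeup carries POLLFREE, the waitqueue is about to disappear,
 * so the entry must be forcibly unhooked right away -- only the RCU
 * grace period keeps the wq memory readable at all. */
#define POLLFREE_MODEL 0x4000

struct poll_entry {
    struct poll_entry **slot; /* stands in for the wait queue head link */
    int armed;
};

/* Forcibly eject: unlink from the (soon to be freed) wq and disarm. */
static void eject_entry(struct poll_entry *e)
{
    if (e->slot)
        *e->slot = NULL;
    e->slot = NULL;
    e->armed = 0;
}

/* Wakeup path: after seeing POLLFREE we may not touch the wq again. */
static int poll_wake_model(struct poll_entry *e, int events)
{
    if (events & POLLFREE_MODEL) {
        eject_entry(e);
        return 0;
    }
    return e->armed;
}
```

The point the sketch captures is ordering: the unlink happens inside the wakeup itself, before the wq owner is allowed to free it.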
     
  • [ upstream commit 9d2ad2947a53abf5e5e6527a9eeed50a3a4cbc72 ]

    Leaving ip.error set when a request was punted to task_work execution
    is problematic; don't forget to clear it.

    Fixes: aa43477b04025 ("io_uring: poll rework")
    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/a6c84ef4182c6962380aebe11b35bdcb25b0ccfb.1655852245.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     
  • [ upstream commit c487a5ad48831afa6784b368ec40d0ee50f2fe1b ]

    Don't forget to cancel all linked requests of poll request when
    __io_arm_poll_handler() failed.

    Fixes: aa43477b04025 ("io_uring: poll rework")
    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/a78aad962460f9fdfe4aa4c0b62425c88f9415bc.1655852245.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     
  • [ upstream commit e2c0cb7c0cc72939b61a7efee376206725796625 ]

    The previous commit:

    61bc84c40088 ("io_uring: remove poll entry from list when canceling all")

    removed a potential overflow condition for the poll references. They
    are currently limited to 20-bits, even if we have 31-bits available. The
    upper bit is used to mark for cancelation.

    Bump the poll ref space to 31-bits, making that kind of situation much
    harder to trigger in general. We'll separately add overflow checking
    and handling.

    Fixes: aa43477b0402 ("io_uring: poll rework")
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Jens Axboe
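The described split of the 32-bit poll_refs word can be sketched as follows. The macro names mirror the idea and may not match the kernel source exactly: the top bit flags cancellation, the low 31 bits count references (up from 20 before this change).

```c
#include <assert.h>

/* Top bit: cancellation marker. Remaining 31 bits: reference count. */
#define IO_POLL_CANCEL_FLAG (1u << 31)
#define IO_POLL_REF_MASK    (IO_POLL_CANCEL_FLAG - 1) /* low 31 bits */

static unsigned int poll_refs_of(unsigned int word)
{
    return word & IO_POLL_REF_MASK;
}

static int poll_is_cancelled(unsigned int word)
{
    return (word & IO_POLL_CANCEL_FLAG) != 0;
}
```

With 31 bits of ref space, an overflow needs billions of concurrent references rather than about a million, which is why the situation becomes much harder to trigger.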
     
  • [ upstream commit 61bc84c4008812d784c398cfb54118c1ba396dfc ]

    When the ring is exiting, as part of the shutdown, poll requests are
    removed. But io_poll_remove_all() does not remove entries when finding
    them, and since completions are done out-of-band, we can find and remove
    the same entry multiple times.

    We do guard the poll execution by poll ownership, but that does not
    exclude us from reissuing a new one once the previous removal ownership
    goes away.

    This can race with poll execution as well, where we then end up seeing
    req->apoll be NULL because a previous task_work requeue finished the
    request.

    Remove the poll entry when we find it and get ownership of it. This
    prevents multiple invocations from finding it.

    Fixes: aa43477b0402 ("io_uring: poll rework")
    Reported-by: Dylan Yudaken
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Jens Axboe
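The shape of the fix can be shown with a toy list model (illustrative names, not the kernel's data structures): the removal pass unhooks an entry the moment it finds it and takes ownership, so repeated scans during ring exit can never hand out the same entry twice.

```c
#include <assert.h>
#include <stddef.h>

struct poll_node {
    struct poll_node *next;
    int owned;
};

/* Remove on discovery, not at completion time: once returned, the
 * node is no longer reachable from the list, so a concurrent or
 * subsequent scan cannot find it again. */
static struct poll_node *find_and_remove(struct poll_node **head)
{
    struct poll_node *n = *head;

    if (!n)
        return NULL;
    *head = n->next;
    n->next = NULL;
    n->owned = 1;
    return n;
}
```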
     
  • [ upstream commit c84b8a3fef663933007e885535591b9d30bdc860 ]

    Fix the following clang warnings:

    fs/io_uring.c:1195:20: warning: unused function 'req_ref_put'
    [-Wunused-function].

    Fixes: aa43477b0402 ("io_uring: poll rework")
    Reported-by: Abaci Robot
    Signed-off-by: Jiapeng Chong
    Link: https://lore.kernel.org/r/20220113162005.3011-1-jiapeng.chong@linux.alibaba.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Jiapeng Chong
     
  • [ upstream commit aa43477b040251f451db0d844073ac00a8ab66ee ]

    It's not possible to go forward with the current state of io_uring
    polling; we need a more straightforward and easier synchronisation.
    There are a lot of problems with how it is at the moment, including
    missing events on rewait.

    The main idea here is to introduce a notion of request ownership while
    polling, no one but the owner can modify any part but ->poll_refs of
    struct io_kiocb, that grants us protection against all sorts of races.

    The main user of such exclusivity is the poll task_work handler, so
    before queueing a tw one should have/acquire ownership, which will be
    handed off to the tw handler.
    The other user is __io_arm_poll_handler(), which does the initial poll
    arming. It starts by taking the ownership, so tw handlers won't be run
    until it's released later in the function after vfs_poll. Note: this
    also prevents races in __io_queue_proc().
    Poll wake/etc. may not be able to get ownership, then they need to
    increase the poll refcount and the task_work should notice it and retry
    if necessary, see io_poll_check_events().
    There is also IO_POLL_CANCEL_FLAG flag to notify that we want to kill
    request.

    It makes cancellations more reliable, enables double multishot polling,
    fixes double poll rewait, fixes missing poll events and fixes another
    bunch of races.

    Even though it adds some overhead for the new refcounting, there are a
    couple of nice performance wins:
    - no req->refs refcounting for poll requests anymore
    - if the data is already there (once measured for some test to be 1-2%
    of all apoll requests), it doesn't add atomics and removes a
    spin_lock/unlock pair.
    - works well with multishots, we don't do remove from queue / add to
    queue for each new poll event.

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/6b652927c77ed9580ea4330ac5612f0e0848c946.1639605189.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
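The ownership scheme described above can be sketched in userspace, with C11 atomics standing in for the kernel's atomic_t. This is a hedged model, not the kernel implementation: whoever bumps poll_refs from zero becomes the owner; everyone else just leaves an extra reference behind for the owner to notice.

```c
#include <assert.h>
#include <stdatomic.h>

/* Returns non-zero if the caller became the owner. */
static int get_ownership_model(atomic_uint *poll_refs)
{
    return atomic_fetch_add(poll_refs, 1) == 0;
}

/* Owner side, io_poll_check_events() style: handle events, then try to
 * drop all references seen so far; if more arrived concurrently, loop
 * again so no wakeup is lost. Returns the number of processing rounds. */
static int owner_drain_model(atomic_uint *poll_refs)
{
    int rounds = 0;
    unsigned int seen;

    do {
        rounds++;
        seen = atomic_load(poll_refs);
    } while (atomic_fetch_sub(poll_refs, seen) != seen);
    return rounds;
}
```

The retry loop is what replaces waiting: a waker that fails to get ownership still increments the count, and the owner is guaranteed to observe that increment before it can drop to zero.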
     
  • [ upstream commit eb6e6f0690c846f7de46181bab3954c12c96e11e ]

    Inline io_poll_complete(), it's simple and doesn't have any particular
    purpose.

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/933d7ee3e4450749a2d892235462c8f18d030293.1633373302.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     
  • [ upstream commit ab1dab960b8352cee082db0f8a54dc92a948bfd7 ]

    With IORING_FEAT_FAST_POLL in place, io_put_req_find_next() for poll
    requests doesn't make much sense, and in any case re-adding it
    shouldn't be a problem considering batching in tctx_task_work(). We can
    remove it.

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/15699682bf81610ec901d4e79d6da64baa9f70be.1639605189.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     
  • [ upstream commit 5641897a5e8fb8abeb07e89c71a788d3db3ec75e ]

    Move some poll helpers/etc. up; we'll need them there shortly.

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/6c5c3dba24c86aad5cd389a54a8c7412e6a0621d.1639605189.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     
  • [ upstream commit 2bbb146d96f4b45e17d6aeede300796bc1a96d68 ]

    Clean up io_poll_update() and unify cancellation paths for remove and
    update.

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/5937138b6265a1285220e2fab1b28132c1d73ce3.1639605189.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     
  • [ upstream commit 913a571affedd17239c4d4ea90c8874b32fc2191 ]

    Split io_cqring_fill_event() into a couple of more targeted functions.
    The first one is io_fill_cqe_aux(), for completions that are not
    associated with request completions and doing the ->cq_extra accounting.
    Examples are additional CQEs from multishot poll and rsrc notifications.

    The second is io_fill_cqe_req(), should be called when it's a normal
    request completion. Nothing more to it at the moment, will be used in
    later patches.

    The last one is inlined __io_fill_cqe() for a finer grained control,
    should be used with caution and in hottest places.

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/59a9117a4a44fc9efcf04b3afa51e0d080f5943c.1636559119.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     
  • [ upstream commit 54daa9b2d80ab35824464b35a99f716e1cdf2ccb ]

    CQE result is a 32-bit integer, so the functions generating CQEs
    should accept ints rather than longs. Convert io_cqring_fill_event()
    and other helpers.

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/7ca6f15255e9117eae28adcac272744cae29b113.1633373302.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    [pavel: backport]
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     

31 Aug, 2022

1 commit

  • commit e053aaf4da56cbf0afb33a0fda4a62188e2c0637 upstream.

    This is actually an older issue, but we never used to hit the -EAGAIN
    path before having done sb_start_write(). Make sure that we always call
    kiocb_end_write() if we need to retry the write, so that we keep the
    calls to sb_start_write() etc balanced.

    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Jens Axboe
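The balancing requirement can be modeled with a simple counter. This is an illustrative userspace sketch (function and variable names are invented): sb_start_write() takes write freeze protection and kiocb_end_write() releases it, so the -EAGAIN retry path must call the latter before requeueing or the count leaks.

```c
#include <assert.h>

#define MODEL_EAGAIN (-11)

static int freeze_depth;

static void sb_start_write_model(void) { freeze_depth++; }
static void kiocb_end_write_model(void) { freeze_depth--; }

static int do_write_model(int issue_ret)
{
    sb_start_write_model();
    if (issue_ret == MODEL_EAGAIN) {
        kiocb_end_write_model(); /* the fix: balance before the retry */
        return MODEL_EAGAIN;
    }
    kiocb_end_write_model();
    return issue_ret;
}
```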
     

21 Aug, 2022

1 commit

  • commit 386e4fb6962b9f248a80f8870aea0870ca603e89 upstream.

    In prior kernels, we did file assignment always at prep time. This meant
    that req->task == current. But after deferring that assignment and then
    pushing the inflight tracking back in, we've got the inflight tracking
    using current when it should in fact now be using req->task.

    Fixup that error introduced by adding the inflight tracking back after
    file assignments got modified.

    Fixes: 9cae36a094e7 ("io_uring: reinstate the inflight tracking")
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Jens Axboe
     

17 Aug, 2022

1 commit

  • commit cc18cc5e82033d406f54144ad6f8092206004684 upstream.

    Potentially, someone may create as many pbuf buckets as there are
    indexes in an xarray, without any other restriction bounding our memory
    usage, so put the memory needed for the buckets under memory accounting.

    Cc:
    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/d34c452e45793e978d26e2606211ec9070d329ea.1659622312.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     

12 Jul, 2022

2 commits

  • [ Upstream commit e0deb6a025ae8c850dc8685be39fb27b06c88736 ]

    If an opcode handler semi-reliably returns -EAGAIN, io_wq_submit_work()
    might continue busily hammering the same handler over and over again, which
    is not ideal. The -EAGAIN handling in question was put there only for
    IOPOLL, so restrict it to IOPOLL mode only where there is no other
    recourse than to retry as we cannot wait.

    Fixes: def596e9557c9 ("io_uring: support for IO polling")
    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/f168b4f24181942f3614dd8ff648221736f572e6.1652433740.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Pavel Begunkov
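The retry decision after the fix can be sketched like this. It is a hedged model, not the kernel's io_wq_submit_work(): flag and helper names are invented, and a fake handler stands in for the opcode issue path.

```c
#include <assert.h>

#define MODEL_EAGAIN (-11)
#define MODEL_IOPOLL (1 << 0) /* illustrative stand-in for the ctx flag */

static int attempts_left;

/* Fake handler that fails with -EAGAIN a few times, then succeeds. */
static int fake_issue(void)
{
    return attempts_left-- > 0 ? MODEL_EAGAIN : 0;
}

/* Only IOPOLL-mode requests loop on -EAGAIN (there is no other
 * recourse there, as we cannot wait); everyone else gets the error
 * back instead of being hammered. */
static int submit_work_model(int ctx_flags)
{
    int ret;

    do {
        ret = fake_issue();
    } while (ret == MODEL_EAGAIN && (ctx_flags & MODEL_IOPOLL));
    return ret;
}
```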
     
  • [ Upstream commit f63cf5192fe3418ad5ae1a4412eba5694b145f79 ]

    Ensure that we call fsnotify_modify() if we write a file, and that we
    do fsnotify_access() if we read it. This enables anyone using inotify
    on the file to get notified.

    Ditto for fallocate, ensure that fsnotify_modify() is called.

    Cc: stable@vger.kernel.org
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Jens Axboe
     

07 Jul, 2022

1 commit

  • commit 73911426aaaadbae54fa72359b33a7b6a56947db upstream.

    All other opcodes correctly check whether this field is set and return
    -EINVAL if it is set and they don't support it; for some reason these
    were forgotten.

    This was unified a bit differently in the upstream tree, but had the
    same effect as making sure we error on this field. Rather than have
    a painful backport of the upstream commit, just fixup the mentioned
    opcodes.

    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Jens Axboe
     

02 Jul, 2022

1 commit

  • commit 05b538c1765f8d14a71ccf5f85258dcbeaf189f7 upstream.

    We can look inside the fixed buffer table only while holding
    ->uring_lock; however, in some cases we don't do the right async prep
    for IORING_OP_{WRITE,READ}_FIXED, ending up with a NULL req->imu and
    forcing an io-wq worker to try to resolve the fixed buffer without
    proper locking.

    Move req->imu setup into early req init paths, i.e. io_prep_rw(), which
    is called unconditionally for rw requests and under uring_lock.

    Fixes: 634d00df5e1cf ("io_uring: add full-fledged dynamic buffers support")
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     

22 Jun, 2022

2 commits

  • [ Upstream commit d11d31fc5d8a96f707facee0babdcffaafa38de2 ]

    Fixed buffer table quiesce might unlock ->uring_lock, potentially
    letting new requests be submitted; don't allow those requests to use
    the table, as they will race with unregistration.

    Reported-and-tested-by: van fantasy
    Fixes: bd54b6fe3316ec ("io_uring: implement fixed buffers registration similar to fixed files")
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Sasha Levin

    Pavel Begunkov
     
  • [ Upstream commit b0380bf6dad4601d92025841e2b7a135d566c6e3 ]

    Fixed file table quiesce might unlock ->uring_lock, potentially
    letting new requests be submitted; don't allow those requests to use
    the table, as they will race with unregistration.

    Reported-and-tested-by: van fantasy
    Fixes: 05f3fb3c53975 ("io_uring: avoid ring quiesce for fixed file set unregister and update")
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Sasha Levin

    Pavel Begunkov
     

25 May, 2022

1 commit

  • commit e74ead135bc4459f7d40b1f8edab1333a28b54e8 upstream.

    Don't check if we can do nowait before arming apoll; there are several
    reasons for that. First, we don't care much about files that don't
    support nowait. Second, it may be useful -- we don't want to be taking
    away extra workers from io-wq when the request can go some async route
    instead. Even if it will go through io-wq eventually, it makes a
    difference in the number of workers actually used. And last, it's
    needed to clean up nowait in future commits.

    [kernel test robot: fix unused-var]

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/9d06f3cb2c8b686d970269a87986f154edb83043.1634425438.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     

09 May, 2022

2 commits

  • [ Upstream commit 5a1e99b61b0c81388cde0c808b3e4173907df19f ]

    We should check unused fields for non-zero values and return -EINVAL
    if any are set, making it consistent with other opcodes.

    Fixes: aa1fa28fc73e ("io_uring: add support for recvmsg()")
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Jens Axboe
     
  • [ Upstream commit 588faa1ea5eecb351100ee5d187b9be99210f70d ]

    We should check unused fields for non-zero values and return -EINVAL
    if any are set, making it consistent with other opcodes.

    Fixes: 0fa03c624d8f ("io_uring: add support for sendmsg()")
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Jens Axboe
     

20 Apr, 2022

5 commits

  • [ Upstream commit 4cdd158be9d09223737df83136a1fb65269d809a ]

    There are still several places using pre-array_index_nospec() indexes;
    fix them up.

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/b01ef5ee83f72ed35ad525912370b729f5d145f4.1649336342.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Pavel Begunkov
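For context, the contract of array_index_nospec() can be modeled in userspace. The real helper is a kernel macro (in linux/nospec.h); this sketch only mirrors what it guarantees: the index is clamped branchlessly so that even a mispredicted path cannot read past the array.

```c
#include <assert.h>
#include <stddef.h>

static size_t array_index_nospec_model(size_t index, size_t size)
{
    /* all-ones mask when index < size, zero otherwise */
    size_t mask = (size_t)0 - (size_t)(index < size);

    return index & mask;
}
```

Indexing with the raw, pre-clamp value after a bounds check is exactly the pattern the commit hunts down: the check can be speculated past, the clamp cannot.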
     
  • [ Upstream commit 8f0a24801bb44aa58496945aabb904c729176772 ]

    Automatically default the rsrc tag in io_queue_rsrc_removal(); it's
    safer than leaving it there and relying on the rest of the code to
    behave and not use it.

    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/1cf262a50df17478ea25b22494dcc19f3a80301f.1649336342.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Pavel Begunkov
     
  • [ Upstream commit d2347b9695dafe5c388a5f9aeb70e27a7a4d29cf ]

    Ensure that only 0 is passed for pad here.

    Fixes: c73ebb685fb6 ("io_uring: add timeout support for io_uring_enter()")
    Signed-off-by: Dylan Yudaken
    Link: https://lore.kernel.org/r/20220412163042.2788062-5-dylany@fb.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Dylan Yudaken
     
  • [ Upstream commit d8a3ba9c143bf89c032deced8a686ffa53b46098 ]

    Verify that the user does not pass in anything but 0 for this field.

    Fixes: 992da01aa932 ("io_uring: change registration/upd/rsrc tagging ABI")
    Signed-off-by: Dylan Yudaken
    Link: https://lore.kernel.org/r/20220412163042.2788062-3-dylany@fb.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Dylan Yudaken
     
  • [ Upstream commit 565c5e616e8061b40a2e1d786c418a7ac3503a8d ]

    Move validation so that it consistently comes straight after
    copy_from_user(). This is already done in io_register_rsrc_update, so
    this removes that redundant check.

    Signed-off-by: Dylan Yudaken
    Link: https://lore.kernel.org/r/20220412163042.2788062-2-dylany@fb.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Dylan Yudaken
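The validation pattern these entries apply can be sketched in a few lines. The struct layout below is illustrative, not a real ABI: copy the argument struct in, then fail with -EINVAL right away if any pad/reserved field is non-zero, so the field can be given a meaning later without breaking old userspace.

```c
#include <assert.h>

#define MODEL_EINVAL (-22)

struct reg_arg_model {
    unsigned int nr;
    unsigned int pad;        /* must be zero */
    unsigned long long resv; /* must be zero */
};

/* Reject anything but zero in reserved fields, straight after the
 * (elided) copy_from_user step. */
static int validate_reg_arg(const struct reg_arg_model *a)
{
    if (a->pad || a->resv)
        return MODEL_EINVAL;
    return 0;
}
```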
     

14 Apr, 2022

6 commits

  • commit e677edbcabee849bfdd43f1602bccbecf736a646 upstream.

    io_flush_timeouts() assumes the timeout isn't in progress of triggering
    or being removed/canceled, so it unconditionally removes it from the
    timeout list and attempts to cancel it.

    Leave it on the list and let the normal timeout cancelation take care
    of it.

    Cc: stable@vger.kernel.org # 5.5+
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Jens Axboe
     
  • commit 0f5e4b83b37a96e3643951588ed7176b9b187c0a upstream.

    Similarly to the way it is done in the mbind syscall.

    Cc: stable@vger.kernel.org # 5.14
    Fixes: fe76421d1da1dcdb ("io_uring: allow user configurable IO thread CPU affinity")
    Signed-off-by: Eugene Syromiatnikov
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Eugene Syromiatnikov
     
  • commit a3e4bc23d5470b2beb7cc42a86b6a3e75b704c15 upstream.

    In preparation for not using the file at prep time, defer checking if this
    file refers to a valid io_uring instance until issue time.

    This also means we can get rid of the cleanup flag for splice and tee.

    Cc: stable@vger.kernel.org # v5.15+
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Jens Axboe
     
  • commit ec858afda857e361182ceafc3d2ba2b164b8e889 upstream.

    This is a leftover from the really old days where we weren't able to
    track and error early if we need a file and it wasn't assigned. Kill
    the check.

    Cc: stable@vger.kernel.org # v5.15+
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Jens Axboe
     
  • [ Upstream commit a07211e3001435fe8591b992464cd8d5e3c98c5a ]

    It's safer to not touch scm_fp_list after we have queued an skb to
    which it was assigned; there might be races lurking if we screw up
    subtle sync guarantees on the io_uring side.

    Fixes: 6b06314c47e14 ("io_uring: add file set registration")
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Pavel Begunkov
     
  • [ Upstream commit 34bb77184123ae401100a4d156584f12fa630e5c ]

    Don't forget to array_index_nospec() for indexes before updating rsrc
    tags in __io_sqe_files_update(), just use already safe and precalculated
    index @i.

    Fixes: c3bdad0271834 ("io_uring: add generic rsrc update with tags")
    Signed-off-by: Pavel Begunkov
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Pavel Begunkov
     

08 Apr, 2022

3 commits

  • commit c86d18f4aa93e0e66cda0e55827cd03eea6bc5f8 upstream.

    When there are no files for __io_sqe_files_scm() to process in the
    range, it'll free everything and return. However, it forgets to put uid.

    Fixes: 08a451739a9b5 ("io_uring: allow sparse fixed file sets")
    Signed-off-by: Pavel Begunkov
    Link: https://lore.kernel.org/r/accee442376f33ce8aaebb099d04967533efde92.1648226048.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Pavel Begunkov
     
  • [ Upstream commit 5e929367468c8f97cd1ffb0417316cecfebef94b ]

    The fix for not advancing the iterator if we're using fixed buffers is
    broken in that it can hit a condition where we don't terminate the loop.
    This results in io-wq looping forever, asking to read (or write) 0 bytes
    for every subsequent loop.

    Reported-by: Joel Jaeschke
    Link: https://github.com/axboe/liburing/issues/549
    Fixes: 16c8d2df7ec0 ("io_uring: ensure symmetry in handling iter types in loop_rw_iter()")
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Jens Axboe
     
  • [ Upstream commit adf3a9e9f556613197583a1884f0de40a8bb6fb9 ]

    Looks like a victim of too much copy/paste; we should not be looking
    at req->open.how in accept. The point is to check for CLOEXEC and
    error out: we don't invalidate direct descriptors on exec, hence any
    attempt to get a direct descriptor with CLOEXEC is invalid.

    No harm is done here, as req->open.how.flags overlaps with
    req->accept.flags, but it's very confusing and might change if either of
    those command structs are modified.

    Fixes: aaa4db12ef7b ("io_uring: accept directly into fixed file table")
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Jens Axboe
     

09 Mar, 2022

1 commit

  • [ Upstream commit e302f1046f4c209291b07ff7bc4d15ca26891f16 ]

    ctx->cq_extra should be protected by the completion lock so that
    req_need_defer() does the right check.

    Cc: stable@vger.kernel.org
    Signed-off-by: Hao Xu
    Link: https://lore.kernel.org/r/20211125092103.224502-2-haoxu@linux.alibaba.com
    Signed-off-by: Jens Axboe
    Signed-off-by: Sasha Levin

    Hao Xu