05 Jun, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation version 2 of the license

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 315 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Reviewed-by: Armijn Hemel
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190531190115.503150771@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

03 Jun, 2019

1 commit

  • In commit:

    4b53a3412d66 ("sched/core: Remove the tsk_nr_cpus_allowed() wrapper")

    the tsk_nr_cpus_allowed() wrapper was removed. There was not
    much difference in !RT, but in RT we used this to implement
    migrate_disable(). Within a migrate_disable() section the CPU mask is
    restricted to a single CPU, while the "normal" CPU mask remains untouched.

    As an alternative implementation, Ingo suggested using:

        struct task_struct {
                const cpumask_t *cpus_ptr;
                cpumask_t       cpus_mask;
        };

    with

        t->cpus_ptr = &t->cpus_mask;

    In -RT we then can switch the cpus_ptr to:

    t->cpus_ptr = &cpumask_of(task_cpu(p));

    in a migration disabled region. The rules are simple:

    - Code that 'uses' ->cpus_allowed would use the pointer.
    - Code that 'modifies' ->cpus_allowed would use the direct mask.
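
    The split can be sketched in plain userspace C (a minimal model, not kernel
    code; mask_t, struct task and the helpers are illustrative stand-ins for
    cpumask_t and the scheduler internals):

```c
#include <assert.h>

/* Userspace sketch (not kernel code) of the cpus_ptr/cpus_mask split
 * described above; 'mask_t' and 'struct task' are simplified stand-ins. */
typedef unsigned long mask_t;

struct task {
        const mask_t *cpus_ptr;  /* readers ('uses') go through this */
        mask_t        cpus_mask; /* writers ('modifies') touch this */
};

static void task_init(struct task *t, mask_t mask)
{
        t->cpus_mask = mask;
        t->cpus_ptr  = &t->cpus_mask;  /* default: pointer aliases the mask */
}

/* What -RT would do inside a migrate-disabled region: repoint cpus_ptr
 * at a single-CPU mask while cpus_mask stays untouched. */
static void enter_migrate_disable(struct task *t, const mask_t *single_cpu)
{
        t->cpus_ptr = single_cpu;
}
```

    Readers dereference cpus_ptr and see the narrowed mask; anything that
    modifies affinity keeps writing cpus_mask, which survives the
    migrate-disabled region unchanged.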

    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Thomas Gleixner
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Link: https://lkml.kernel.org/r/20190423142636.14347-1-bigeasy@linutronix.de
    Signed-off-by: Ingo Molnar

    Sebastian Andrzej Siewior
     

04 Mar, 2018

1 commit

  • Do the following cleanups and simplifications:

    - sched/sched.h already includes , so no need to
    include it in sched/core.c again.

    - order the headers alphabetically

    - add all headers to kernel/sched/sched.h

    - remove all unnecessary includes from the .c files that
    are already included in kernel/sched/sched.h.

    Finally, make all scheduler .c files use a single common header:

    #include "sched.h"

    ... which now contains a union of the relied upon headers.

    This makes the various .c files easier to read and easier to handle.

    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

03 Mar, 2018

2 commits

  • A good number of small style inconsistencies have accumulated
    in the scheduler core, so do a pass over them to harmonize
    all these details:

    - fix spelling in comments,

    - use curly braces for multi-line statements,

    - remove unnecessary parentheses from integer literals,

    - capitalize consistently,

    - remove stray newlines,

    - add comments where necessary,

    - remove invalid/unnecessary comments,

    - align structure definitions and other data types vertically,

    - add missing newlines for increased readability,

    - fix vertical tabulation where it's misaligned,

    - harmonize preprocessor conditional block labeling
    and vertical alignment,

    - remove line-breaks where they uglify the code,

    - add newline after local variable definitions,

    No change in functionality:

    md5:
    1191fa0a890cfa8132156d2959d7e9e2 built-in.o.before.asm
    1191fa0a890cfa8132156d2959d7e9e2 built-in.o.after.asm

    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • - Fixed style error: Missing space before the open parenthesis
    - Fixed style warnings: 2x Missing blank line after declaration

    One warning left: else after return
    (I don't feel comfortable fixing that without side effects)

    Signed-off-by: Mario Leinweber
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-kernel@vger.kernel.org
    Link: http://lkml.kernel.org/r/20180302182007.28691-1-marioleinweber@web.de
    Signed-off-by: Ingo Molnar

    Mario Leinweber
     

10 Aug, 2017

2 commits

  • cpudl_find() users are only interested in knowing if suitable CPU(s)
    were found or not (and then they look at later_mask to know which).

    Change cpudl_find()'s return type accordingly. This aligns with the rt code.

    Signed-off-by: Byungchul Park
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1495504859-10960-3-git-send-email-byungchul.park@lge.com
    Signed-off-by: Ingo Molnar

    Byungchul Park
     
  • The 'struct cpudl' passed to cpudl_init() is already initialized to zero.
    Don't do that again.

    Signed-off-by: Viresh Kumar
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Vincent Guittot
    Cc: linaro-kernel@lists.linaro.org
    Link: http://lkml.kernel.org/r/bd4c229806bc96694b15546207afcc221387d2f5.1492065513.git.viresh.kumar@linaro.org
    Signed-off-by: Ingo Molnar

    Viresh Kumar
     

02 Mar, 2017

1 commit

  • So the original intention of tsk_cpus_allowed() was to 'future-proof'
    the field - but it's pretty ineffectual at that, because half of
    the code uses ->cpus_allowed directly ...

    Also, the wrapper makes the code longer than the original expression!

    So just get rid of it. This also shrinks a bit.

    Acked-by: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

05 Sep, 2016

3 commits

  • These 2 exercise independent code paths and need different arguments.

    After this change, you call:

    cpudl_clear(cp, cpu);
    cpudl_set(cp, cpu, dl);

    instead of:

    cpudl_set(cp, cpu, 0 /* dl */, 0 /* is_valid */);
    cpudl_set(cp, cpu, dl, 1 /* is_valid */);

    Signed-off-by: Tommaso Cucinotta
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Luca Abeni
    Reviewed-by: Juri Lelli
    Cc: Juri Lelli
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-dl@retis.sssup.it
    Link: http://lkml.kernel.org/r/1471184828-12644-4-git-send-email-tommaso.cucinotta@sssup.it
    Signed-off-by: Ingo Molnar

    Tommaso Cucinotta
     
    This change converts heapify() from fixing the heap by repeatedly
    swapping the item with its parent/child (so that the item to fix
    moves along) to pulling the parent/child chain up or down by one
    position and storing the item just once at the end. On a non-trivial
    heapify(), this performs roughly half the stores compared with
    swapping.
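
    The "pull the chain, store once" descent can be sketched in plain C on a
    max-heap of ints (the kernel version operates on cpudl items keyed by
    deadline; the name and shape here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Sift-down that holds the item aside and pulls children up one level
 * at a time, storing the held item only once at its final position
 * (versus three stores per level with a swap-based descent). */
static void heapify_down(int *heap, size_t n, size_t idx)
{
        int item = heap[idx];            /* the element to fix */
        for (;;) {
                size_t l = 2 * idx + 1, r = l + 1, largest = idx;
                int largest_val = item;  /* compare against the held item */
                if (l < n && heap[l] > largest_val) {
                        largest = l;
                        largest_val = heap[l];
                }
                if (r < n && heap[r] > largest_val) {
                        largest = r;
                        largest_val = heap[r];
                }
                if (largest == idx)
                        break;
                heap[idx] = heap[largest]; /* pull child up: one store, no swap */
                idx = largest;
        }
        heap[idx] = item;                  /* single final store of the item */
}
```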

    This has been measured to achieve up to a 10% speed-up for cpudl_set()
    calls, with a randomly generated workload of 1K, 10K and 100K random
    heap insertions and deletions (75% cpudl_set() calls with is_valid=1
    and 25% with is_valid=0) and randomly generated cpu IDs, with up to
    256 CPUs, as measured on an Intel Core2 Duo.

    Signed-off-by: Tommaso Cucinotta
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Luca Abeni
    Reviewed-by: Juri Lelli
    Cc: Juri Lelli
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-dl@retis.sssup.it
    Link: http://lkml.kernel.org/r/1471184828-12644-3-git-send-email-tommaso.cucinotta@sssup.it
    Signed-off-by: Ingo Molnar

    Tommaso Cucinotta
     
    1. Heapify-up factored out into a new dedicated function, heapify_up()
    (avoids repeating the same code).

    2. The call to cpudl_change_key() replaced with heapify_up() when
    cpudl_set() actually inserts a new node in the heap.

    3. cpudl_change_key() replaced with heapify(), which heapifies up
    or down as needed.
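
    The factored-out sift-up can be sketched the same way, in plain C on a
    max-heap of ints (the kernel's heapify_up() works on cpudl items keyed by
    deadline; this shape is illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Sift-up for a newly inserted element: pull parents down along the
 * path, then store the held item once at its final position. */
static void heapify_up(int *heap, size_t idx)
{
        int item = heap[idx];             /* newly inserted element */
        while (idx > 0) {
                size_t parent = (idx - 1) / 2;
                if (heap[parent] >= item)
                        break;
                heap[idx] = heap[parent]; /* pull parent down, no swap */
                idx = parent;
        }
        heap[idx] = item;                 /* single final store */
}
```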

    Signed-off-by: Tommaso Cucinotta
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Luca Abeni
    Reviewed-by: Juri Lelli
    Cc: Juri Lelli
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-dl@retis.sssup.it
    Link: http://lkml.kernel.org/r/1471184828-12644-2-git-send-email-tommaso.cucinotta@sssup.it
    Signed-off-by: Ingo Molnar

    Tommaso Cucinotta
     

10 Aug, 2016

1 commit

  • Current code in cpudeadline.c has a bug in re-heapifying when adding a
    new element at the end of the heap, because a deadline value of 0 is
    temporarily set in the new elem, then cpudl_change_key() is called
    with the actual elem deadline as param.

    However, the function compares the new deadline to set with the one
    previously stored in the elem, which is 0. So, if current absolute
    deadlines have grown large enough to become negative as s64, the
    comparison in cpudl_change_key() makes the wrong decision. Instead,
    as dl_time_before() shows, the kernel is expected to handle
    absolute-deadline wrap-arounds correctly.

    This patch fixes the problem with a minimally invasive change that
    forces cpudl_change_key() to heapify up in this case.
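
    The wrap-around semantics at stake come from dl_time_before(), reproduced
    here as portable userspace C (the kernel's u64/s64 spelled as stdint
    types):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe "a is earlier than b" on u64 absolute deadlines: the
 * subtraction is done in unsigned arithmetic and the sign of the
 * difference, reinterpreted as signed, decides the ordering. */
static inline bool dl_time_before(uint64_t a, uint64_t b)
{
        return (int64_t)(a - b) < 0;
}
```

    Against a stale deadline of 0, a wrapped (s64-negative) new deadline
    compares as "before" 0, so the generic re-heapify moved the element the
    wrong way; forcing a heapify-up on insertion avoids relying on that bogus
    comparison.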

    Signed-off-by: Tommaso Cucinotta
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Luca Abeni
    Cc: Juri Lelli
    Cc: Juri Lelli
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1468921493-10054-2-git-send-email-tommaso.cucinotta@sssup.it
    Signed-off-by: Ingo Molnar

    Tommaso Cucinotta
     


23 Sep, 2015

1 commit

  • Move the static definition of dl_time_before() into
    include/linux/sched/deadline.h so that it can be used by different
    parties without being re-defined.

    Reported-by: Luca Abeni
    Signed-off-by: Juri Lelli
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1441188096-23021-3-git-send-email-juri.lelli@arm.com
    Signed-off-by: Ingo Molnar

    Juri Lelli
     

04 Feb, 2015

1 commit

  • cpu_active_mask is rarely changed (only on hotplug), so remove this
    operation to gain a little performance.

    If there is a change in cpu_active_mask, rq_online_dl() and
    rq_offline_dl() should take care of it normally, so cpudl::free_cpus
    carries enough information for us.

    For the rare case when a task is put onto a dying cpu (which
    rq_offline_dl() can't handle in a timely fashion), it will be
    handled through _cpu_down()->...->multi_cpu_stop()->migration_call()
    ->migrate_tasks(), preventing the task from hanging on the
    dead cpu.

    Cc: Juri Lelli
    Signed-off-by: Xunlei Pang
    [peterz: changelog]
    Signed-off-by: Peter Zijlstra (Intel)
    Link: http://lkml.kernel.org/r/1421642980-10045-2-git-send-email-pang.xunlei@linaro.org
    Cc: Linus Torvalds
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Xunlei Pang
     

31 Jan, 2015

1 commit

    Currently, cpudl::free_cpus contains all CPUs during init, see
    cpudl_init(). When calling cpudl_find(), we have to intersect with
    rd->span to avoid selecting a cpu outside the current root domain,
    because cpus_allowed cannot be depended on when performing clustered
    scheduling using cpusets; see find_later_rq().

    This patch adds cpudl_set_freecpu() and cpudl_clear_freecpu() for
    changing cpudl::free_cpus when doing rq_online_dl()/rq_offline_dl(),
    so we can avoid the rd->span operation when calling cpudl_find()
    in find_later_rq().
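
    A minimal sketch of that bookkeeping, with a plain unsigned long standing
    in for the kernel cpumask (the function names follow the patch; the struct
    and bodies are illustrative, not the kernel implementation):

```c
#include <assert.h>

/* Simplified stand-in for the kernel's struct cpudl. */
struct cpudl {
        unsigned long free_cpus;   /* bit i set => CPU i may be picked */
};

/* Called from rq_online_dl(): the CPU joins the root domain. */
static void cpudl_set_freecpu(struct cpudl *cp, int cpu)
{
        cp->free_cpus |= 1UL << cpu;
}

/* Called from rq_offline_dl(): the CPU leaves the root domain. */
static void cpudl_clear_freecpu(struct cpudl *cp, int cpu)
{
        cp->free_cpus &= ~(1UL << cpu);
}
```

    With free_cpus maintained at rq online/offline time, cpudl_find() can
    consult it directly instead of intersecting with rd->span on every call.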

    Signed-off-by: Xunlei Pang
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Juri Lelli
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/1421642980-10045-1-git-send-email-pang.xunlei@linaro.org
    Signed-off-by: Ingo Molnar

    Xunlei Pang
     

24 Sep, 2014

1 commit

  • Users can perform clustered scheduling using the cpuset facility.
    After an exclusive cpuset is created, task migrations happen only
    between CPUs belonging to the same cpuset. Inter-cpuset migrations
    can only happen when the user requests them, by moving a task between
    different cpusets. This behaviour is broken in SCHED_DEADLINE, as
    currently spurious inter-cpuset migrations may happen without user
    intervention.

    This patch fixes the problem (and shuffles the code a bit to improve
    clarity).

    Signed-off-by: Juri Lelli
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: raistlin@linux.it
    Cc: michael@amarulasolutions.com
    Cc: fchecconi@gmail.com
    Cc: daniel.wagner@bmw-carit.de
    Cc: vincent@legout.info
    Cc: luca.abeni@unitn.it
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/1411118561-26323-4-git-send-email-juri.lelli@arm.com
    Signed-off-by: Ingo Molnar

    Juri Lelli
     

22 May, 2014

1 commit

  • Tejun reported that his resume was failing due to order-3 allocations
    from sched_domain building.

    Replace the NR_CPUS arrays in there with a dynamically allocated
    array.

    Reported-by: Tejun Heo
    Signed-off-by: Peter Zijlstra
    Acked-by: Juri Lelli
    Cc: Johannes Weiner
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/n/tip-kat4gl1m5a6dwy6nzuqox45e@git.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

07 May, 2014

1 commit

  • Free cpudl->free_cpus allocated in cpudl_init().

    Signed-off-by: Li Zefan
    Acked-by: Juri Lelli
    Signed-off-by: Peter Zijlstra
    Cc: # 3.14+
    Link: http://lkml.kernel.org/r/534F36CE.2000409@huawei.com
    Signed-off-by: Ingo Molnar

    Li Zefan
     

27 Feb, 2014

1 commit

  • Commit 82b9580 ("sched/deadline: Test for CPU's presence explicitly")
    changed how we check if a CPU returned by cpudeadline machinery is
    valid. But, we don't want to call cpu_present() if best_cpu is
    equal to -1. So, switch the order of tests inside WARN_ON().

    Signed-off-by: Juri Lelli
    Signed-off-by: Peter Zijlstra
    Cc: boris.ostrovsky@oracle.com
    Cc: konrad.wilk@oracle.com
    Cc: rostedt@goodmis.org
    Link: http://lkml.kernel.org/r/1393238832-9100-1-git-send-email-juri.lelli@gmail.com
    Signed-off-by: Ingo Molnar

    Juri Lelli
     

22 Feb, 2014

1 commit

  • A hot-removed CPU may have an ID that is numerically larger than the
    number of existing CPUs in the system (e.g. we can unplug CPU 4 from
    a system that has CPUs 0, 1 and 4).

    Thus the WARN_ONs should check whether the CPU in question is currently
    present, not whether its ID value is less than num_present_cpus().

    Cc: Ingo Molnar
    Cc: Juri Lelli
    Cc: Steven Rostedt
    Reported-by: Konrad Rzeszutek Wilk
    Signed-off-by: Boris Ostrovsky
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1392646353-1874-1-git-send-email-boris.ostrovsky@oracle.com
    Signed-off-by: Thomas Gleixner

    Boris Ostrovsky
     

16 Jan, 2014

1 commit

  • new sparse warnings:

    >> kernel/sched/cpudeadline.c:38:6: sparse: symbol 'cpudl_exchange' was not declared. Should it be static?
    >> kernel/sched/cpudeadline.c:46:6: sparse: symbol 'cpudl_heapify' was not declared. Should it be static?
    >> kernel/sched/cpudeadline.c:71:6: sparse: symbol 'cpudl_change_key' was not declared. Should it be static?
    >> kernel/sched/cpudeadline.c:195:15: sparse: memset with byte count of 163928

    Signed-off-by: Fengguang Wu
    Signed-off-by: Peter Zijlstra
    Cc: Juri Lelli
    Fixes: 6bfd6d72f51c ("sched/deadline: speed up SCHED_DEADLINE pushes with a push-heap")
    Link: http://lkml.kernel.org/r/52d47f8c.EYJsA5+mELPBk4t6%fengguang.wu@intel.com
    Signed-off-by: Ingo Molnar

    Fengguang Wu
     

13 Jan, 2014

1 commit

  • Data from tests confirmed that the original active load balancing
    logic scaled neither with the number of CPUs nor with the number of
    tasks (as sched_rt does).

    Here we provide a global data structure to keep track of the
    deadlines of the running tasks in the system. The structure is
    composed of a bitmask showing the free CPUs and a max-heap, needed
    when the system is heavily loaded.
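
    A rough userspace sketch of such a structure and its push-path lookup
    (all names, the fixed NR_CPUS and the lookup body are illustrative, not
    the exact kernel definitions):

```c
#include <stdint.h>

#define NR_CPUS 8   /* fixed size for the sketch; the kernel allocates dynamically */

struct cpudl_item {
        uint64_t dl;    /* absolute deadline of the task running on 'cpu' */
        int      cpu;
};

struct cpudl {
        int               size;              /* current number of heap elements */
        unsigned long     free_cpus;         /* bit i set => CPU i is free */
        struct cpudl_item elements[NR_CPUS]; /* max-heap: root holds the latest dl */
};

/* Push decision: prefer any free CPU; otherwise take the heap root,
 * but only if it currently runs a task with a later deadline than 'dl'. */
static int cpudl_find_sketch(const struct cpudl *cp, uint64_t dl)
{
        if (cp->free_cpus)
                return __builtin_ctzl(cp->free_cpus);  /* lowest free CPU */
        if (cp->size > 0 && (int64_t)(cp->elements[0].dl - dl) > 0)
                return cp->elements[0].cpu;            /* root has the latest dl */
        return -1;                                     /* no suitable CPU */
}
```

    The bitmask answers the common lightly-loaded case in O(1); the max-heap
    only matters when every CPU is busy and the root's deadline decides
    whether a push is worthwhile.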

    The implementation and concurrent access scheme are kept simple by
    design. However, our measurements show that we can compete with
    sched_rt on large multi-CPU machines [1].

    Only the push path is addressed; the extension to use this structure
    also for pull decisions is straightforward. However, we are currently
    evaluating different data structures (in order to decrease/avoid
    contention) that could possibly solve both problems. We are also
    going to re-run tests considering recent changes inside cpupri [2].

    [1] http://retis.sssup.it/~jlelli/papers/Ospert11Lelli.pdf
    [2] http://www.spinics.net/lists/linux-rt-users/msg06778.html

    Signed-off-by: Juri Lelli
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1383831828-15501-14-git-send-email-juri.lelli@gmail.com
    Signed-off-by: Ingo Molnar

    Juri Lelli