14 Jan, 2021

2 commits

  • In order to debug critical domain and device power issues, it may be
    necessary to disallow certain idle states at runtime. Add a vendor
    hook that checks whether a domain idle state is allowed before
    powering down the domain.

    Bug: 175718935
    Signed-off-by: Lina Iyer
    Change-Id: I403c0c7d272439fb338bbf54a346861559385047

    Lina Iyer
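
    A rough illustration of how a vendor module might attach to such a
    hook. The hook name (android_vh_allow_domain_state), its header, and
    the probe arguments below are assumptions made for the sketch; the
    entry does not spell them out.

        #include <linux/module.h>
        #include <linux/pm_domain.h>
        #include <trace/hooks/pm_domain.h>      /* assumed hook header */

        /* Tracepoint-style probe: the first argument is the cookie
         * passed at registration time. */
        static void disallow_deep_domain_state(void *data,
                                               struct generic_pm_domain *genpd,
                                               uint32_t state_idx,
                                               bool *allow)
        {
                /* Example policy: while debugging, veto the deepest state. */
                if (state_idx == genpd->state_count - 1)
                        *allow = false;
        }

        static int __init vendor_pd_debug_init(void)
        {
                return register_trace_android_vh_allow_domain_state(
                                disallow_deep_domain_state, NULL);
        }
        module_init(vendor_pd_debug_init);
        MODULE_LICENSE("GPL v2");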
     
  • Add the following vendor hooks in the ftrace oops path to allow
    vendor modules to retrieve ftrace buffer data:

    1) android_vh_ftrace_oops_enter
    2) android_vh_ftrace_size_check
    3) android_vh_ftrace_format_check
    4) android_vh_ftrace_dump_buffer
    5) android_vh_ftrace_oops_exit

    The ftrace dump on oops can be controlled with the kernel command
    line parameter ftrace_dump_on_oops.

    Bug: 177387994
    Change-Id: I898026bfc3e5c5c5ab1eaa0e86a019a6a720408e
    Signed-off-by: Prateek Sood
    Signed-off-by: Mukesh Ojha

    Prateek Sood
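
    The hook names are given above but their prototypes are not; the
    sketch below assumes, purely for illustration, that the oops-enter
    hook passes a single bool out-parameter, and is only meant to show
    the registration pattern. The dump path itself is still gated by the
    ftrace_dump_on_oops kernel command line parameter.

        #include <linux/module.h>
        #include <trace/hooks/ftrace_dump.h>    /* assumed hook header */

        /* Assumed prototype: a flag the vendor sets to capture the dump
         * into its own storage (the real arguments may differ). */
        static void vendor_ftrace_oops_enter(void *data, bool *vendor_dump)
        {
                *vendor_dump = true;
        }

        static int __init vendor_ftrace_init(void)
        {
                /* Fires only when ftrace_dump_on_oops is enabled. */
                return register_trace_android_vh_ftrace_oops_enter(
                                vendor_ftrace_oops_enter, NULL);
        }
        module_init(vendor_ftrace_init);
        MODULE_LICENSE("GPL v2");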
     

05 Jan, 2021

1 commit

  • When entering a cluster-wide or system-wide power mode, the Exynos
    CPU power management driver checks the next hrtimer events of the
    CPUs composing the power domain to prevent unnecessary attempts to
    enter the power mode. Since struct cpuidle_device has next_hrtimer,
    this can be handled by passing the cpuidle device as a parameter of
    the vendor hook.

    In order to improve responsiveness, it is also necessary to prevent
    entering deep idle states in boosting scenarios, so the vendor
    driver should be able to control the idle state.

    Because of the above requirements, the parameters required for idle
    enter and exit differ, so the vendor hook is split into
    cpu_idle_enter and cpu_idle_exit.

    Bug: 176198732

    Change-Id: I2262ba1bae5e6622a8e76bc1d5d16fb27af0bb8a
    Signed-off-by: Park Bumgyu

    Park Bumgyu
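
    A sketch of probes for the two hooks, assuming the full names
    android_vh_cpu_idle_enter/exit and that the enter hook passes a
    pointer to the chosen state index plus the cpuidle device (the entry
    gives the split but not the prototypes). Registration follows the
    same register_trace_android_vh_*() pattern as the sketches further up.

        #include <linux/cpuidle.h>
        #include <linux/kernel.h>
        #include <linux/printk.h>

        /* Shallowest allowed state; a vendor boost driver could lower
         * this (e.g. to 0) while boosting and restore it afterwards. */
        static int vendor_idle_cap = INT_MAX;

        static void vendor_cpu_idle_enter(void *data, int *state,
                                          struct cpuidle_device *dev)
        {
                /* dev->next_hrtimer can also feed a cluster-level
                 * decision about entering the domain power mode. */
                if (*state > vendor_idle_cap)
                        *state = vendor_idle_cap;
        }

        static void vendor_cpu_idle_exit(void *data, int state,
                                         struct cpuidle_device *dev)
        {
                pr_debug("cpu%u left idle state %d\n", dev->cpu, state);
        }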
     

17 Dec, 2020

1 commit

  • Add a per-transaction flag to indicate that the buffer
    must be cleared when the transaction is complete to
    prevent copies of sensitive data from being preserved
    in memory.

    Signed-off-by: Todd Kjos
    Link: https://lore.kernel.org/r/20201120233743.3617529-1-tkjos@google.com
    Cc: stable
    Signed-off-by: Greg Kroah-Hartman
    Bug: 171501513
    Change-Id: Ic9338c85cbe3b11ab6f2bda55dce9964bb48447a
    (cherry picked from commit 0f966cba95c78029f491b433ea95ff38f414a761)
    Signed-off-by: Todd Kjos

    Todd Kjos
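
    For reference, the cherry-picked upstream patch exposes this as a
    binder transaction flag; assuming that flag is TF_CLEAR_BUF (from the
    upstream commit referenced above), a userspace sender of sensitive
    data would set it roughly like this (simplified, no error handling):

        #include <stdint.h>
        #include <string.h>
        #include <linux/android/binder.h>

        /* Mark the transaction so the driver zeroes the buffer once the
         * transaction completes, instead of leaving plaintext around. */
        static void fill_sensitive_txn(struct binder_transaction_data *tr,
                                       uint32_t handle, const void *secret,
                                       size_t len)
        {
                memset(tr, 0, sizeof(*tr));
                tr->target.handle = handle;
                tr->flags |= TF_CLEAR_BUF;  /* flag name assumed from upstream */
                tr->data_size = len;
                tr->data.ptr.buffer = (binder_uintptr_t)(uintptr_t)secret;
                /* ...set code/offsets, then write BC_TRANSACTION to the fd. */
        }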
     

04 Dec, 2020

2 commits

  • Add vendor hooks to facilitate various scheduler value-adds.

    Bug: 173725277
    Change-Id: I4389f2a8121fe7ac49ad4610e25a77bfc38f4660
    Signed-off-by: Shaleen Agrawal
    Signed-off-by: Pavankumar Kondeti

    Shaleen Agrawal
     
  • We have debug infrastructure built on top of preempt/irq disable/enable
    events. This requires modifications to the kernel tracing code. Since
    that is not feasible with GKI, we started by registering for the
    existing preemptirq trace events. However, a wide variety of use cases
    regressed in performance, because the rate of preemptirq events is
    very high and generic trace events are slow.

    Since GKI allows optimized trace events via restricted trace hooks,
    add the same for the preemptirq events.

    Bug: 174541725
    Change-Id: Ic8d3cdd1c1aa6a9267d0b755694fedffa2ea8e36
    Signed-off-by: Pavankumar Kondeti

    Pavankumar Kondeti
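
    A sketch of attaching to one of these restricted hooks. The specific
    hook name and the (ip, parent_ip) arguments mirror the generic
    preemptirq tracepoints but are assumptions here; what matters is that
    restricted hooks are registered once, cannot be unregistered, and the
    probe has to stay very cheap because it runs on every irq-off event.

        #include <linux/module.h>
        #include <linux/percpu.h>
        #include <trace/hooks/preemptirq.h>     /* assumed hook header */

        static DEFINE_PER_CPU(u64, irqsoff_events);

        static void vendor_irqs_disable(void *data, unsigned long ip,
                                        unsigned long parent_ip)
        {
                /* No locks, no sleeping: just a per-CPU counter. */
                this_cpu_inc(irqsoff_events);
        }

        static int __init vendor_preemptirq_init(void)
        {
                return register_trace_android_rvh_irqs_disable(
                                vendor_irqs_disable, NULL);
        }
        module_init(vendor_preemptirq_init);
        MODULE_LICENSE("GPL v2");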
     

02 Dec, 2020

1 commit

  • Create a vendor hook inside gfp_zone() to control which allocations
    get to enter ZONE_MOVABLE, by zeroing out __GFP_HIGHMEM inside the
    trace hook based on certain conditions.

    Separately, create trace hooks in the swap-in and readahead paths to
    affect the behavior of the trace hook in gfp_zone().

    Bug: 158645321
    Change-Id: I4a4f0b724267ee120a1e5661f6da5d43d7ef6fc6
    Signed-off-by: Chris Goldsworthy

    Chris Goldsworthy
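
    A sketch of the gfp_zone() probe described above; the hook name and
    arguments are assumptions (the entry does not give them), but the
    mechanism is just clearing __GFP_HIGHMEM so that gfp_zone() resolves
    the allocation to ZONE_NORMAL instead of ZONE_MOVABLE.

        #include <linux/gfp.h>

        /* Toggled by the swap-in/readahead probes mentioned above
         * (illustrative; the real condition is vendor specific). */
        static bool restrict_movable;

        static void vendor_gfp_zone_flags(void *data, gfp_t *flags)
        {
                gfp_t movable = __GFP_HIGHMEM | __GFP_MOVABLE;

                if (restrict_movable && (*flags & movable) == movable)
                        *flags &= ~__GFP_HIGHMEM;
        }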
     

28 Nov, 2020

3 commits

  • Add the capability for a vendor to decide whether a cpufreq update
    is needed, e.g. to apply an up/down rate limit.

    A restricted hook is used since it can be called from the scheduler
    wakeup path.

    Bug: 170511085
    Signed-off-by: Wei Wang
    Change-Id: If9adea3a3e31efbf3858fbd009665a07dc70c638
    (cherry picked from commit f9f3464532a045257f8138338b1beda86ef0a3be)
    Signed-off-by: Will McVicker

    Wei Wang
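
    The entry does not name the hook or its arguments; assuming, purely
    for illustration, a restricted hook that passes the current time, the
    current and requested frequencies, and a bool the vendor can clear to
    skip the update, an up/down rate limit would look roughly like this:

        #include <linux/ktime.h>
        #include <linux/time64.h>

        #define UP_RATE_LIMIT_NS    (500 * NSEC_PER_USEC)
        #define DOWN_RATE_LIMIT_NS  (20 * NSEC_PER_MSEC)

        static u64 last_freq_update_ns;

        static void vendor_cpufreq_rate_limit(void *data, u64 now_ns,
                                              unsigned int cur_freq,
                                              unsigned int next_freq,
                                              bool *should_update)
        {
                u64 limit = next_freq > cur_freq ? UP_RATE_LIMIT_NS
                                                 : DOWN_RATE_LIMIT_NS;

                if (now_ns - last_freq_update_ns < limit)
                        *should_update = false; /* too soon, skip update */
                else
                        last_freq_update_ns = now_ns;
        }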
     
  • Upstream moved the sugov threads to the DEADLINE class, which has a
    higher priority than RT, so they can potentially block many RT use
    cases in Android.

    Also, iowait boosting currently doesn't distinguish between background
    and foreground tasks, and we have seen cases where the device runs at
    high frequency unnecessarily while doing background I/O.

    Bug: 171598214
    Signed-off-by: Wei Wang
    Change-Id: I21e9bfe9ef75a4178279574389e417c3f38e65ac
    (cherry picked from commit 03177ef82bd942a3f163e826063491bae6ff0ac9)
    Signed-off-by: Will McVicker

    Wei Wang
     
  • Add hooks for vendor specific find_energy_efficient_cpu logic.

    Bug: 170507310
    Signed-off-by: Rick Yiu
    Change-Id: I064b501017e32d4f22f8128bed8bf3a1508ab15b
    (cherry picked from commit 2f108e2ec6e89609cbae32c5d13d6ad9f2e858cb)
    Signed-off-by: Will McVicker

    Rick Yiu
     

24 Nov, 2020

2 commits

  • The following restricted vendor hooks are added. A vendor hook
    can selectively opt in to the default scheduler behavior by not
    modifying the done argument.

    - android_rvh_sched_newidle_balance: For the newly idle load balance.

    - android_rvh_sched_nohz_balancer_kick: For deciding whether an idle
    CPU is woken up to do a nohz balance.

    - android_rvh_find_busiest_queue: For selecting the busiest runqueue
    among the CPUs in the busiest group selected by find_busiest_group.

    - android_rvh_migrate_queued_task: Vendor implementations may require
    both the source and destination CPUs' runqueue locks to be held while
    calling set_task_cpu() during a task migration. Add a hook when a
    queued task is migrated so that the vendor implementation can detach
    the task and call set_task_cpu() with both runqueue locks held.

    Bug: 173661641
    Change-Id: I6a09226081061b6433e4231359be252a0f28f04b
    Signed-off-by: Pavankumar Kondeti

    Pavankumar Kondeti
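
    A sketch of the opt-in convention described above, using
    android_rvh_sched_newidle_balance; the prototype shown (runqueue,
    rq_flags, pulled-task count, done flag) is an assumption. Leaving
    *done untouched keeps the default newidle_balance() behavior.

        struct rq;              /* scheduler-internal types, opaque here */
        struct rq_flags;

        /* Hypothetical vendor policy/helpers, stubbed for the sketch. */
        static bool vendor_wants_custom_balance(void) { return false; }
        static int vendor_pull_task(struct rq *rq, struct rq_flags *rf)
        {
                return 0;
        }

        static void vendor_newidle_balance(void *data, struct rq *this_rq,
                                           struct rq_flags *rf,
                                           int *pulled_task, int *done)
        {
                if (!vendor_wants_custom_balance())
                        return;         /* *done untouched => default path */

                *pulled_task = vendor_pull_task(this_rq, rf);
                *done = 1;              /* tell fair.c the balance is handled */
        }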
     
  • QC value-add requires the ability to override the value displayed
    by show_cpuinfo_max_freq.

    Bug: 173984660
    Signed-off-by: Todd Kjos
    Change-Id: Ida11256f399f0730bc1228524b9687df3e7b8bf4

    Todd Kjos
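
    A minimal sketch, assuming a hook that passes the cpufreq policy and
    a pointer to the value about to be shown (neither the hook name nor
    the prototype is given in the entry):

        #include <linux/cpufreq.h>

        /* Report a vendor-defined ceiling instead of cpuinfo.max_freq,
         * e.g. to advertise a boost frequency for one cluster. */
        static void vendor_show_max_freq(void *data,
                                         struct cpufreq_policy *policy,
                                         unsigned int *max_freq)
        {
                if (policy->cpu >= 4)           /* illustrative cluster check */
                        *max_freq = 2803200;    /* kHz, illustrative value */
        }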
     

14 Nov, 2020

1 commit

  • Add a vendor hook to print epoch values when the system enters and
    exits suspend. These epoch values are useful for knowing how long
    the device was in the suspend state. They can also be used to
    synchronize timestamps across subsystems and provide a unique
    timestamp for correlating events between them.

    Bug: 172945021
    Change-Id: I82a01e348d05a46c9c3921869cc9d2fc0fd28867
    Signed-off-by: Murali Nalajala

    Murali Nalajala
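
    A sketch of what such probes might print, assuming hooks that fire at
    suspend entry and resume exit with no extra arguments (names and
    prototypes are not given in the entry). CLOCK_BOOTTIME keeps running
    across suspend while CLOCK_MONOTONIC does not, so logging both at
    each edge lets other subsystems compute the suspend duration and line
    up their own timestamps.

        #include <linux/ktime.h>
        #include <linux/printk.h>

        static void vendor_print_epoch(void *data)
        {
                /* (boottime - monotonic) grows by exactly the time spent
                 * in suspend; real time gives the wall-clock anchor. */
                pr_info("epoch: mono=%llu boot=%llu real=%llu ns\n",
                        ktime_get_ns(), ktime_get_boottime_ns(),
                        ktime_get_real_ns());
        }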
     

22 Oct, 2020

1 commit

  • This reverts commit ff0771e5f881fa5596ff35856fc0df11429c9d80.

    ANDROID_VENDOR_HOOKS was temporarily disabled since it needed
    refactoring to work with commit d25e37d89dd2 ("tracepoint: Optimize
    using static_call()"). It can be enabled again now.

    Change-Id: Id4920fe20cf07736679f22228d192a251467bbb2
    Signed-off-by: Todd Kjos

    Todd Kjos
     

21 Oct, 2020

1 commit

  • Commit d25e37d89dd2 ("tracepoint: Optimize using static_call()") changed
    the format of DEFINE_TRACE() and other tracepoint logic, which broke the
    ANDROID_VENDOR_HOOKS logic.

    As I can't figure out how to unwind the build traces for the errors,
    disable this for now so the merge can continue.

    Bug: 156285741
    Cc: Todd Kjos
    Signed-off-by: Greg Kroah-Hartman
    Change-Id: I6794bdec76ba25370891747b296ea30a51812982

    Greg Kroah-Hartman
     

18 Oct, 2020

1 commit

  • A previous commit changed the notification mode from true/false to an
    int, allowing notify-no, notify-yes, or signal-notify. This was
    backwards compatible in the sense that any existing true/false user
    would translate to either 0 (no notification sent) or 1, the latter
    of which mapped to TWA_RESUME. TWA_SIGNAL was assigned a value of 2.

    Clean this up properly, and define a proper enum for the notification
    mode. Now we have:

    - TWA_NONE. This is 0, same as before the original change, meaning no
    notification requested.
    - TWA_RESUME. This is 1, same as before the original change, meaning
    that we use TIF_NOTIFY_RESUME.
    - TWA_SIGNAL. This uses TIF_SIGPENDING/JOBCTL_TASK_WORK for the
    notification.

    Clean up all the callers, switching their 0/1/false/true to using the
    appropriate TWA_* mode for notifications.

    Fixes: e91b48162332 ("task_work: teach task_work_add() to do signal_wake_up()")
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Jens Axboe

    Jens Axboe
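
    For reference, the resulting API usage looks like this (the enum
    values are the ones listed above, defined alongside task_work_add()):

        #include <linux/printk.h>
        #include <linux/sched.h>
        #include <linux/task_work.h>

        static void my_work_fn(struct callback_head *head)
        {
                /* Runs in the context of the targeted task. */
        }

        static struct callback_head my_work;

        static void queue_example(struct task_struct *task)
        {
                init_task_work(&my_work, my_work_fn);

                /*
                 * TWA_NONE   (0): just queue, no notification
                 * TWA_RESUME (1): notify via TIF_NOTIFY_RESUME
                 * TWA_SIGNAL (2): notify via TIF_SIGPENDING/JOBCTL_TASK_WORK
                 */
                if (task_work_add(task, &my_work, TWA_RESUME))
                        pr_warn("task_work_add: task is already exiting\n");
        }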
     

10 Oct, 2020

1 commit

  • When releasing a thread todo list when tearing down
    a binder_proc, the following race was possible which
    could result in a use-after-free:

    1. Thread 1: enter binder_release_work from binder_thread_release
    2. Thread 2: binder_update_ref_for_handle() -> binder_dec_node_ilocked()
    3. Thread 2: dec nodeA --> 0 (will free node)
    4. Thread 1: ACQ inner_proc_lock
    5. Thread 2: block on inner_proc_lock
    6. Thread 1: dequeue work (BINDER_WORK_NODE, part of nodeA)
    7. Thread 1: REL inner_proc_lock
    8. Thread 2: ACQ inner_proc_lock
    9. Thread 2: todo list cleanup, but work was already dequeued
    10. Thread 2: free node
    11. Thread 2: REL inner_proc_lock
    12. Thread 1: deref w->type (UAF)

    The problem is that, for a BINDER_WORK_NODE, the binder_work element
    must not be accessed after releasing the inner_proc_lock while
    processing the todo list elements: another thread might be handling
    a deref on the node containing the binder_work element, leading to
    the node being freed.

    Signed-off-by: Todd Kjos
    Link: https://lore.kernel.org/r/20201009232455.4054810-1-tkjos@google.com
    Cc: # 4.14, 4.19, 5.4, 5.8
    Signed-off-by: Greg Kroah-Hartman

    Todd Kjos
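
    The constraint above translates into a pattern along these lines (a
    simplified fragment of driver-internal code, not the literal patch):
    copy what is needed from the binder_work while the lock is held, and
    never dereference the work item again after unlocking.

        /* Inside binder_release_work(), per dequeued entry: */
        enum binder_work_type wtype;
        struct binder_work *w;

        binder_inner_proc_lock(proc);
        w = binder_dequeue_work_head_ilocked(list);
        wtype = w ? w->type : 0;        /* snapshot while still locked */
        binder_inner_proc_unlock(proc);
        if (!w)
                return;

        switch (wtype) {
        case BINDER_WORK_NODE:
                /* Do NOT touch *w here: the node embedding it may have
                 * been freed by a concurrent binder_dec_node_ilocked(). */
                break;
        /* ... other work types ... */
        }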
     

08 Oct, 2020

1 commit

  • A kernel panic is sometimes caused by a CPU other than the one that
    panics. When the panicking CPU sends the SMP stop IPI, it helps with
    debugging if the other CPUs store their registers and flush their
    caches before stopping.

    Bug: 165956922

    Change-Id: I747f776456330d904015cfe4ae423e4b63f18f85
    Signed-off-by: Changki Kim
    Signed-off-by: Will McVicker
    Signed-off-by: Jonglin Lee

    Changki Kim
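
    A sketch of a probe on such a stop-IPI hook; the hook name and the
    pt_regs argument are assumptions. Each stopped CPU snapshots its
    registers into per-CPU storage that a vendor ramdump parser can pick
    up later; the cache flush itself is arch- and vendor-specific and is
    left as a comment.

        #include <linux/percpu.h>
        #include <linux/printk.h>
        #include <linux/ptrace.h>
        #include <linux/smp.h>
        #include <linux/string.h>

        static DEFINE_PER_CPU(struct pt_regs, stopped_cpu_regs);

        static void vendor_ipi_stop(void *data, struct pt_regs *regs)
        {
                memcpy(this_cpu_ptr(&stopped_cpu_regs), regs, sizeof(*regs));

                /* A vendor/arch specific cache flush would go here so the
                 * snapshot is visible in the post-mortem RAM dump. */

                pr_emerg("cpu%d: registers saved for ramdump\n",
                         smp_processor_id());
        }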
     

16 Sep, 2020

1 commit

  • The pointer n is being initialized with a value that is
    never read and it is being updated later with a new value. The
    initialization is redundant and can be removed.

    Acked-by: Todd Kjos
    Acked-by: Christian Brauner
    Signed-off-by: Colin Ian King
    Link: https://lore.kernel.org/r/20200910151221.751464-1-colin.king@canonical.com
    Signed-off-by: Greg Kroah-Hartman

    Colin Ian King
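
    The pattern being removed looks like this generic example (not the
    literal binder code):

        struct rb_node *n = rb_first(root);     /* value never read */

        /* ... */
        for (n = rb_first(root); n; n = rb_next(n))     /* overwrites n */
                count++;

    Dropping the initializer ("struct rb_node *n;") changes no behavior.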