06 Jul, 2017

19 commits


19 Jun, 2017

6 commits

  • Currently in the WCOVE PMIC MFD driver, all second level IRQ chips
    are chained to the respective first level IRQs, so there is no
    need to explicitly unmask the first level IRQ in this driver.
    This patch removes the level 1 IRQ unmask support.

    Signed-off-by: Kuppuswamy Sathyanarayanan
    Reviewed-by: Darren Hart (VMware)
    Reviewed-by: Andy Shevchenko
    Signed-off-by: Lee Jones

    Kuppuswamy Sathyanarayanan
     
  • The Whiskey Cove PMIC supports masking/unmasking interrupts at two
    levels. At the first level we can mask/unmask interrupt domains like
    TMU, GPIO, ADC, CHGR, BCU, THERMAL and PWRBTN, and at the second
    level it provides the facility to mask/unmask the individual
    interrupts belonging to each of these domains. For example, in the
    case of TMU, at the first level we have the TMU interrupt domain,
    and at the second level we have two interrupts, wake alarm and
    system alarm, that belong to the TMU interrupt domain.
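
    This two-level scheme can be sketched in standalone C (bit positions
    and names below are invented for illustration, not the actual
    Whiskey Cove register map): an interrupt is deliverable only when it
    is unmasked at both levels.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: these bit positions are made up. */
#define LVL1_TMU_BIT   (1u << 1)   /* TMU domain bit in the level 1 mask   */
#define TMU_WAKE_ALARM (1u << 0)   /* level 2 bits within the TMU domain   */
#define TMU_SYS_ALARM  (1u << 1)

/* An interrupt fires only when it is unmasked at both levels.
 * In mask registers a set bit conventionally means "masked". */
static bool irq_deliverable(uint8_t lvl1_mask, uint32_t domain_bit,
                            uint8_t lvl2_mask, uint32_t irq_bit)
{
    return !(lvl1_mask & domain_bit) && !(lvl2_mask & irq_bit);
}
```

    Masking either the TMU domain bit at level 1 or the wake alarm bit
    at level 2 is enough to suppress the wake alarm interrupt.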

    Currently, in this driver all first level IRQs are registered as
    part of one IRQ chip (bxtwc_regmap_irq_chip). By default, after you
    register the IRQ chip from your driver, all IRQs in that chip will
    be masked and can only be enabled if that IRQ is requested using a
    request_irq() call. This is the default Linux IRQ behavior model.
    Whenever a dependent device that belongs to the PMIC requests only
    the second level IRQ without explicitly unmasking the first level
    IRQ, the second level IRQ will in essence still be disabled. For
    example, if the TMU device driver requests the wake_alarm IRQ
    without explicitly unmasking the TMU level 1 IRQ, then according to
    the default Linux IRQ model the wake_alarm IRQ will still be
    disabled. The proper solution to this issue is to use the chained
    IRQ chip concept: we should chain all the second level chip IRQs to
    the corresponding first level IRQ. To do this, we need to create
    separate IRQ chips for every group of second level IRQs.

    In the case of TMU, when adding the second level IRQ chip, instead
    of using the PMIC IRQ we should use the corresponding first level
    IRQ. So the following code will change from

    ret = regmap_add_irq_chip(pmic->regmap, pmic->irq, ...)

    to,

    virq = regmap_irq_get_virq(&pmic->irq_chip_data, BXTWC_TMU_LVL1_IRQ);

    ret = regmap_add_irq_chip(pmic->regmap, virq, ...)

    In the case of the Whiskey Cove Type-C driver, since the USBC IRQ
    has moved under the charger level 2 IRQ chip, we should use the
    charger IRQ chip (irq_chip_data_chgr) to get the USBC virtual IRQ
    number.

    Signed-off-by: Kuppuswamy Sathyanarayanan
    Reviewed-by: Andy Shevchenko
    Reviewed-by: Heikki Krogerus
    Signed-off-by: Lee Jones

    Kuppuswamy Sathyanarayanan
     
  • Clean up the resource allocation/free code in the probe function by
    using devm_* calls.

    Signed-off-by: Kuppuswamy Sathyanarayanan
    Reviewed-by: Andy Shevchenko
    Signed-off-by: Lee Jones

    Kuppuswamy Sathyanarayanan
     
  • Currently all PMIC GPIO domain IRQs are consumed by the same device
    (bxt_wcove_gpio), so there is no need to export them as separate
    interrupts. We can export only the first level GPIO IRQ
    (BXTWC_GPIO_LVL1_IRQ) as an IRQ resource and let the GPIO device
    driver (bxt_wcove_gpio) handle the GPIO sub domain IRQs based on the
    status value of the GPIO level 2 interrupt status register. Using
    only the first level IRQ also eliminates the bug of requesting only
    the second level IRQ without explicitly enabling the first level
    IRQ. For more info on this issue, please read the details at:

    https://lkml.org/lkml/2017/2/27/148

    This patch also makes the relevant change in the Whiskey Cove GPIO
    driver to use only the first level PMIC GPIO IRQ.
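
    The dispatch the GPIO driver now performs can be sketched as follows
    (a self-contained illustration with invented names, not the actual
    bxt_wcove_gpio code): read the level 2 status register once, then
    walk its set bits and handle each pending sub-interrupt.

```c
#include <stdint.h>

/* Illustrative handler type; a real driver would demultiplex to
 * nested IRQ handlers. */
typedef void (*subirq_handler_t)(int bit);

/* Walk the level 2 status bits, calling the handler for each pending
 * one. Returns the number of sub-interrupts handled. */
static int dispatch_level2(uint8_t status, subirq_handler_t handler)
{
    int handled = 0;
    while (status) {
        int bit = __builtin_ctz(status);   /* lowest pending bit  */
        if (handler)
            handler(bit);
        status &= status - 1;              /* clear it, continue  */
        handled++;
    }
    return handled;
}
```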

    Signed-off-by: Kuppuswamy Sathyanarayanan
    Acked-by: Linus Walleij
    Acked-for-MFD-by: Lee Jones
    Signed-off-by: Lee Jones

    Kuppuswamy Sathyanarayanan
     
  • Since all second level thermal IRQs are consumed by the same device
    (bxt_wcove_thermal), there is no need to expose them as separate
    interrupts. We can export only the first level thermal IRQ and let
    the device driver (bxt_wcove_thermal) handle the second level IRQs
    based on the thermal interrupt status register. Using only the first
    level IRQ also eliminates the bug of requesting only the second
    level IRQ without explicitly enabling the first level IRQ. For more
    info on this issue, please read the details at:

    https://lkml.org/lkml/2017/2/27/148

    This patch also makes the relevant change in the bxt_wcove_thermal
    driver to use only the first level PMIC thermal IRQ.

    Signed-off-by: Kuppuswamy Sathyanarayanan
    Acked-by: Zhang Rui
    Reviewed-by: Andy Shevchenko
    Signed-off-by: Lee Jones

    Kuppuswamy Sathyanarayanan
     
  • TMU interrupts are registered as a separate interrupt chip, and
    hence its interrupt index (BXTWC_TMU_IRQ) should start from 0. But
    currently, BXTWC_TMU_IRQ is defined as part of enum
    bxtwc_irqs_level2 and its index value is 11. Since this index value
    is used when calculating .num_irqs of regmap_irq_chip_tmu, it
    incorrectly reports the number of IRQs as 12 instead of the actual
    value of 1.

    This patch fixes the issue by creating a new enum of TMU IRQs and
    resetting its starting index to 0.
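
    The off-by-index problem is easy to reproduce in a tiny standalone
    sketch (enum values invented to mirror the description, not the real
    header): with the TMU IRQ at its level 2 index of 11, a num_irqs
    derived from "last index + 1" reports 12; with a dedicated enum
    restarting at 0 it reports the correct 1.

```c
/* Before: TMU IRQ defined inside the shared level 2 enum. */
enum bxtwc_irqs_level2_sketch {
    /* ...indices 0..10 occupied by the other level 2 IRQs... */
    SKETCH_TMU_IRQ_OLD = 11,
};

/* After: TMU gets its own enum, restarting the index at 0. */
enum bxtwc_irqs_tmu_sketch {
    SKETCH_TMU_IRQ = 0,
};

/* num_irqs is derived from the last IRQ index in the chip. */
static int num_irqs_from_last_index(int last_index)
{
    return last_index + 1;
}
```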

    Signed-off-by: Kuppuswamy Sathyanarayanan
    Reviewed-by: Andy Shevchenko
    Signed-off-by: Lee Jones

    Kuppuswamy Sathyanarayanan
     

14 May, 2017

5 commits

  • Linus Torvalds
     
  • Pull some more input subsystem updates from Dmitry Torokhov:
    "An updated xpad driver with a few more recognized device IDs, and a
    new psxpad-spi driver, allowing connecting Playstation 1 and 2 joypads
    via SPI bus"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
    Input: cros_ec_keyb - remove extraneous 'const'
    Input: add support for PlayStation 1/2 joypads connected via SPI
    Input: xpad - add USB IDs for Mad Catz Brawlstick and Razer Sabertooth
    Input: xpad - sync supported devices with xboxdrv
    Input: xpad - sort supported devices by USB ID

    Linus Torvalds
     
  • Pull UBI/UBIFS updates from Richard Weinberger:

    - new config option CONFIG_UBIFS_FS_SECURITY

    - minor improvements

    - random fixes

    * tag 'upstream-4.12-rc1' of git://git.infradead.org/linux-ubifs:
    ubi: Add debugfs file for tracking PEB state
    ubifs: Fix a typo in comment of ioctl2ubifs & ubifs2ioctl
    ubifs: Remove unnecessary assignment
    ubifs: Fix cut and paste error on sb type comparisons
    ubi: fastmap: Fix slab corruption
    ubifs: Add CONFIG_UBIFS_FS_SECURITY to disable/enable security labels
    ubi: Make mtd parameter readable
    ubi: Fix section mismatch

    Linus Torvalds
     
  • Pull UML fixes from Richard Weinberger:
    "No new stuff, just fixes"

    * 'for-linus-4.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml:
    um: Add missing NR_CPUS include
    um: Fix to call read_initrd after init_bootmem
    um: Include kbuild.h instead of duplicating its macros
    um: Fix PTRACE_POKEUSER on x86_64
    um: Set number of CPUs
    um: Fix _print_addr()

    Linus Torvalds
     
  • Merge misc fixes from Andrew Morton:
    "15 fixes"

    * emailed patches from Andrew Morton :
    mm, docs: update memory.stat description with workingset* entries
    mm: vmscan: scan until it finds eligible pages
    mm, thp: copying user pages must schedule on collapse
    dax: fix PMD data corruption when fault races with write
    dax: fix data corruption when fault races with write
    ext4: return to starting transaction in ext4_dax_huge_fault()
    mm: fix data corruption due to stale mmap reads
    dax: prevent invalidation of mapped DAX entries
    Tigran has moved
    mm, vmalloc: fix vmalloc users tracking properly
    mm/khugepaged: add missed tracepoint for collapse_huge_page_swapin
    gcov: support GCC 7.1
    mm, vmstat: Remove spurious WARN() during zoneinfo print
    time: delete current_fs_time()
    hwpoison, memcg: forcibly uncharge LRU pages

    Linus Torvalds
     

13 May, 2017

10 commits

  • Commit 4b4cea91691d ("mm: vmscan: fix IO/refault regression in cache
    workingset transition") introduced three new entries in memory stat
    file:

    - workingset_refault
    - workingset_activate
    - workingset_nodereclaim

    This commit adds a corresponding description to the cgroup v2 docs.

    Link: http://lkml.kernel.org/r/1494530293-31236-1-git-send-email-guro@fb.com
    Signed-off-by: Roman Gushchin
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Vladimir Davydov
    Cc: Tejun Heo
    Cc: Li Zefan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roman Gushchin
     
  • Although there are a ton of free swap and anonymous LRU pages in
    eligible zones, OOM happened.

    balloon invoked oom-killer: gfp_mask=0x17080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK), nodemask=(null), order=0, oom_score_adj=0
    CPU: 7 PID: 1138 Comm: balloon Not tainted 4.11.0-rc6-mm1-zram-00289-ge228d67e9677-dirty #17
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
    Call Trace:
    oom_kill_process+0x21d/0x3f0
    out_of_memory+0xd8/0x390
    __alloc_pages_slowpath+0xbc1/0xc50
    __alloc_pages_nodemask+0x1a5/0x1c0
    pte_alloc_one+0x20/0x50
    __pte_alloc+0x1e/0x110
    __handle_mm_fault+0x919/0x960
    handle_mm_fault+0x77/0x120
    __do_page_fault+0x27a/0x550
    trace_do_page_fault+0x43/0x150
    do_async_page_fault+0x2c/0x90
    async_page_fault+0x28/0x30
    Mem-Info:
    active_anon:424716 inactive_anon:65314 isolated_anon:0
    active_file:52 inactive_file:46 isolated_file:0
    unevictable:0 dirty:27 writeback:0 unstable:0
    slab_reclaimable:3967 slab_unreclaimable:4125
    mapped:133 shmem:43 pagetables:1674 bounce:0
    free:4637 free_pcp:225 free_cma:0
    Node 0 active_anon:1698864kB inactive_anon:261256kB active_file:208kB inactive_file:184kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:532kB dirty:108kB writeback:0kB shmem:172kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
    DMA free:7316kB min:32kB low:44kB high:56kB active_anon:8064kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:464kB slab_unreclaimable:40kB kernel_stack:0kB pagetables:24kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
    lowmem_reserve[]: 0 992 992 1952
    DMA32 free:9088kB min:2048kB low:3064kB high:4080kB active_anon:952176kB inactive_anon:0kB active_file:36kB inactive_file:0kB unevictable:0kB writepending:88kB present:1032192kB managed:1019388kB mlocked:0kB slab_reclaimable:13532kB slab_unreclaimable:16460kB kernel_stack:3552kB pagetables:6672kB bounce:0kB free_pcp:56kB local_pcp:24kB free_cma:0kB
    lowmem_reserve[]: 0 0 0 959
    Movable free:3644kB min:1980kB low:2960kB high:3940kB active_anon:738560kB inactive_anon:261340kB active_file:188kB inactive_file:640kB unevictable:0kB writepending:20kB present:1048444kB managed:1010816kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:832kB local_pcp:60kB free_cma:0kB
    lowmem_reserve[]: 0 0 0 0
    DMA: 1*4kB (E) 0*8kB 18*16kB (E) 10*32kB (E) 10*64kB (E) 9*128kB (ME) 8*256kB (E) 2*512kB (E) 2*1024kB (E) 0*2048kB 0*4096kB = 7524kB
    DMA32: 417*4kB (UMEH) 181*8kB (UMEH) 68*16kB (UMEH) 48*32kB (UMEH) 14*64kB (MH) 3*128kB (M) 1*256kB (H) 1*512kB (M) 2*1024kB (M) 0*2048kB 0*4096kB = 9836kB
    Movable: 1*4kB (M) 1*8kB (M) 1*16kB (M) 1*32kB (M) 0*64kB 1*128kB (M) 2*256kB (M) 4*512kB (M) 1*1024kB (M) 0*2048kB 0*4096kB = 3772kB
    378 total pagecache pages
    17 pages in swap cache
    Swap cache stats: add 17325, delete 17302, find 0/27
    Free swap = 978940kB
    Total swap = 1048572kB
    524157 pages RAM
    0 pages HighMem/MovableOnly
    12629 pages reserved
    0 pages cma reserved
    0 pages hwpoisoned
    [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
    [ 433] 0 433 4904 5 14 3 82 0 upstart-udev-br
    [ 438] 0 438 12371 5 27 3 191 -1000 systemd-udevd

    On investigation: the page skipping in isolate_lru_pages makes
    reclaim a no-op, because it easily returns a zero nr_taken; LRU
    shrinking then achieves effectively nothing and just raises the
    priority aggressively. Finally, OOM happens.

    The problem is that get_scan_count determines nr_to_scan using only
    the eligible zones, so even though the priority drops to zero, it
    cannot reclaim any pages if the LRU consists mostly of ineligible
    pages.

    get_scan_count:

    size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
    size = size >> sc->priority;

    Assume sc->priority is 0 and the LRU list is as follows.

    N-N-N-N-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H

    (i.e., a few eligible pages are at the head of the LRU but the rest
    are almost all ineligible pages)

    In that case, size becomes 4, so the VM wants to scan 4 pages, but
    the 4 pages at the tail of the LRU are not eligible pages. Since
    skipped pages are counted as scanned, no pages are reclaimed after
    scanning those 4 pages, and it ends up in OOM.
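
    The budget arithmetic above can be sketched standalone (function
    names are illustrative, not the kernel's): size >> priority at
    priority 0 equals the eligible LRU size, so with 4 eligible pages
    the scan budget is 4, and if the 4 pages scanned from the tail are
    all ineligible, nothing is reclaimed.

```c
/* get_scan_count-style budget: eligible LRU size shifted by priority. */
static unsigned long nr_to_scan(unsigned long eligible_size, int priority)
{
    return eligible_size >> priority;
}

/* Scan `budget` pages from the tail of the LRU and count how many were
 * actually eligible (reclaimable); 1 = eligible, 0 = ineligible. */
static unsigned long scan_from_tail(const int *lru_tail_first,
                                    unsigned long lru_len,
                                    unsigned long budget)
{
    unsigned long i, reclaimable = 0;
    for (i = 0; i < budget && i < lru_len; i++)
        if (lru_tail_first[i])
            reclaimable++;
    return reclaimable;
}
```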

    This patch makes isolate_lru_pages keep scanning until it
    encounters pages from eligible zones.

    [akpm@linux-foundation.org: clean up mind-bending `for' statement. Tweak comment text]
    Fixes: 3db65812d688 ("Revert "mm, vmscan: account for skipped pages as a partial scan"")
    Link: http://lkml.kernel.org/r/1494457232-27401-1-git-send-email-minchan@kernel.org
    Signed-off-by: Minchan Kim
    Acked-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
  • We have encountered need_resched warnings in __collapse_huge_page_copy()
    while doing {clear,copy}_user_highpage() over HPAGE_PMD_NR source pages.

    mm->mmap_sem is held for write, but the iteration is well bounded.

    Reschedule as needed.

    Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1705101426380.109808@chino.kir.corp.google.com
    Signed-off-by: David Rientjes
    Acked-by: Vlastimil Babka
    Cc: "Kirill A. Shutemov"
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • This is based on a patch from Jan Kara that fixed the equivalent race in
    the DAX PTE fault path.

    Currently DAX PMD read fault can race with write(2) in the following
    way:

    CPU1 - write(2)                         CPU2 - read fault

                                            dax_iomap_pmd_fault()
                                              ->iomap_begin() - sees hole
    dax_iomap_rw()
      iomap_apply()
        ->iomap_begin - allocates blocks
        dax_iomap_actor()
          invalidate_inode_pages2_range()
            - there's nothing to invalidate
                                            grab_mapping_entry()
                                              - we add huge zero page to
                                                the radix tree and map it
                                                to page tables

    The result is that hole page is mapped into page tables (and thus zeros
    are seen in mmap) while file has data written in that place.

    Fix the problem by locking exception entry before mapping blocks for the
    fault. That way we are sure invalidate_inode_pages2_range() call for
    racing write will either block on entry lock waiting for the fault to
    finish (and unmap stale page tables after that) or read fault will see
    already allocated blocks by write(2).

    Fixes: 9f141d6ef6258 ("dax: Call ->iomap_begin without entry lock during dax fault")
    Link: http://lkml.kernel.org/r/20170510172700.18991-1-ross.zwisler@linux.intel.com
    Signed-off-by: Ross Zwisler
    Reviewed-by: Jan Kara
    Cc: Dan Williams
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ross Zwisler
     
  • Currently DAX read fault can race with write(2) in the following way:

    CPU1 - write(2)                         CPU2 - read fault

                                            dax_iomap_pte_fault()
                                              ->iomap_begin() - sees hole
    dax_iomap_rw()
      iomap_apply()
        ->iomap_begin - allocates blocks
        dax_iomap_actor()
          invalidate_inode_pages2_range()
            - there's nothing to invalidate
                                            grab_mapping_entry()
                                              - we add zero page in the
                                                radix tree and map it to
                                                page tables

    The result is that hole page is mapped into page tables (and thus zeros
    are seen in mmap) while file has data written in that place.

    Fix the problem by locking exception entry before mapping blocks for the
    fault. That way we are sure invalidate_inode_pages2_range() call for
    racing write will either block on entry lock waiting for the fault to
    finish (and unmap stale page tables after that) or read fault will see
    already allocated blocks by write(2).
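
    The serialization argument can be modeled sequentially in plain C
    (everything below is an illustrative toy model, not the DAX code):
    in the buggy order the fault's stale "saw a hole" decision is
    applied after the writer's invalidation, leaving a zero page mapped
    over allocated blocks; with the entry lock, the fault finishes first
    and the writer's invalidation then unmaps the stale page.

```c
#include <stdbool.h>

struct file_state {
    bool blocks_allocated;   /* write(2) has allocated real blocks   */
    bool zero_page_mapped;   /* fault installed the hole's zero page */
};

/* Buggy interleaving: fault decides "hole", writer allocates and
 * invalidates (nothing mapped yet), fault finally maps the zero page. */
static bool stale_after_buggy_order(void)
{
    struct file_state s = {false, false};
    bool fault_saw_hole = !s.blocks_allocated;  /* ->iomap_begin()       */
    s.blocks_allocated = true;                  /* write(2) allocates    */
    s.zero_page_mapped = false;                 /* invalidate: no-op     */
    if (fault_saw_hole)
        s.zero_page_mapped = true;              /* grab_mapping_entry()  */
    return s.zero_page_mapped && s.blocks_allocated;  /* stale zeros */
}

/* Fixed interleaving: the fault holds the entry lock across decision
 * and mapping, so the writer's invalidate runs afterwards and unmaps. */
static bool stale_after_fixed_order(void)
{
    struct file_state s = {false, false};
    bool fault_saw_hole = !s.blocks_allocated;
    if (fault_saw_hole)
        s.zero_page_mapped = true;              /* fault done under lock */
    s.blocks_allocated = true;                  /* write(2) allocates    */
    s.zero_page_mapped = false;                 /* ...and unmaps stale   */
    return s.zero_page_mapped && s.blocks_allocated;
}
```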

    Fixes: 9f141d6ef6258 ("dax: Call ->iomap_begin without entry lock during dax fault")
    Link: http://lkml.kernel.org/r/20170510085419.27601-5-jack@suse.cz
    Signed-off-by: Jan Kara
    Reviewed-by: Ross Zwisler
    Cc: Dan Williams
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Kara
     
  • DAX will return to locking exceptional entry before mapping blocks for a
    page fault to fix possible races with concurrent writes. To avoid lock
    inversion between exceptional entry lock and transaction start, start
    the transaction already in ext4_dax_huge_fault().

    Fixes: 9f141d6ef6258 ("dax: Call ->iomap_begin without entry lock during dax fault")
    Link: http://lkml.kernel.org/r/20170510085419.27601-4-jack@suse.cz
    Signed-off-by: Jan Kara
    Cc: Ross Zwisler
    Cc: Dan Williams
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Kara
     
  • Currently we don't invalidate page tables during
    invalidate_inode_pages2() for DAX. That can result in e.g. a 2MiB
    zero page being mapped into page tables while underlying blocks have
    already been allocated, so data seen through mmap differ from data
    seen by read(2). The following sequence reproduces the problem:

    - open an mmap over a 2MiB hole

    - read from a 2MiB hole, faulting in a 2MiB zero page

    - write to the hole with write(3p). The write succeeds but we
    incorrectly leave the 2MiB zero page mapping intact.

    - via the mmap, read the data that was just written. Since the zero
    page mapping is still intact we read back zeroes instead of the new
    data.

    Fix the problem by unconditionally calling invalidate_inode_pages2_range()
    in dax_iomap_actor() for new block allocations and by properly
    invalidating page tables in invalidate_inode_pages2_range() for DAX
    mappings.

    Fixes: c6dcf52c23d2 ("mm: Invalidate DAX radix tree entries only if appropriate")
    Link: http://lkml.kernel.org/r/20170510085419.27601-3-jack@suse.cz
    Signed-off-by: Jan Kara
    Signed-off-by: Ross Zwisler
    Cc: Dan Williams
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Kara
     
  • Patch series "mm,dax: Fix data corruption due to mmap inconsistency",
    v4.

    This series fixes data corruption that can happen for DAX mounts when
    page faults race with write(2) and as a result page tables get out of
    sync with block mappings in the filesystem and thus data seen through
    mmap is different from data seen through read(2).

    The series passes testing with t_mmap_stale test program from Ross and
    also other mmap related tests on DAX filesystem.

    This patch (of 4):

    dax_invalidate_mapping_entry() currently removes DAX exceptional entries
    only if they are clean and unlocked. This is done via:

    invalidate_mapping_pages()
    invalidate_exceptional_entry()
    dax_invalidate_mapping_entry()

    However, for page cache pages removed in invalidate_mapping_pages()
    there is an additional criterion: the page must not be mapped. This
    is noted in the comments above invalidate_mapping_pages() and is
    checked in invalidate_inode_page().

    For DAX entries this means that we can end up in a situation where a
    DAX exceptional entry, either a huge zero page or a regular DAX entry,
    could end up mapped but without an associated radix tree entry. This is
    inconsistent with the rest of the DAX code and with what happens in the
    page cache case.

    We aren't able to unmap the DAX exceptional entry because according to
    its comments invalidate_mapping_pages() isn't allowed to block, and
    unmap_mapping_range() takes a write lock on the mapping->i_mmap_rwsem.

    Since we essentially never have unmapped DAX entries to evict from the
    radix tree, just remove dax_invalidate_mapping_entry().

    Fixes: c6dcf52c23d2 ("mm: Invalidate DAX radix tree entries only if appropriate")
    Link: http://lkml.kernel.org/r/20170510085419.27601-2-jack@suse.cz
    Signed-off-by: Ross Zwisler
    Signed-off-by: Jan Kara
    Reported-by: Jan Kara
    Cc: Dan Williams
    Cc: [4.10+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ross Zwisler
     
  • Cc: Tigran Aivazian
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Commit 1f5307b1e094 ("mm, vmalloc: properly track vmalloc users")
    pulled an asm/pgtable.h include dependency into linux/vmalloc.h, and
    that turned out to be a bad idea for some architectures. E.g. m68k
    fails with

    In file included from arch/m68k/include/asm/pgtable_mm.h:145:0,
    from arch/m68k/include/asm/pgtable.h:4,
    from include/linux/vmalloc.h:9,
    from arch/m68k/kernel/module.c:9:
    arch/m68k/include/asm/mcf_pgtable.h: In function 'nocache_page':
    >> arch/m68k/include/asm/mcf_pgtable.h:339:43: error: 'init_mm' undeclared (first use in this function)
    #define pgd_offset_k(address) pgd_offset(&init_mm, address)

    as spotted by the kernel build bot. nios2 fails for a different reason:

    In file included from include/asm-generic/io.h:767:0,
    from arch/nios2/include/asm/io.h:61,
    from include/linux/io.h:25,
    from arch/nios2/include/asm/pgtable.h:18,
    from include/linux/mm.h:70,
    from include/linux/pid_namespace.h:6,
    from include/linux/ptrace.h:9,
    from arch/nios2/include/uapi/asm/elf.h:23,
    from arch/nios2/include/asm/elf.h:22,
    from include/linux/elf.h:4,
    from include/linux/module.h:15,
    from init/main.c:16:
    include/linux/vmalloc.h: In function '__vmalloc_node_flags':
    include/linux/vmalloc.h:99:40: error: 'PAGE_KERNEL' undeclared (first use in this function); did you mean 'GFP_KERNEL'?

    which is due to the newly added #include <asm/pgtable.h>, which on
    nios2 pulls in <linux/io.h> and thus <asm/io.h> and
    <asm-generic/io.h>, which again includes <linux/vmalloc.h>.

    Tweaking around that turns out to be a bigger headache than
    necessary. This patch reverts 1f5307b1e094 and reimplements the
    original fix in a different way. __vmalloc_node_flags can stay
    static inline, which covers the vmalloc* functions. We only have one
    external user (kvmalloc_node), so we can export
    __vmalloc_node_flags_caller and provide the caller directly. This is
    much simpler and doesn't require any games with header files.
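
    The shape of that fix can be sketched in plain C (names mimic the
    kernel's but are invented here, and the allocator body is a
    stand-in): the header keeps a static inline wrapper so call sites
    stay cheap, while the single out-of-line symbol takes an explicit
    caller address for allocation tracking.

```c
#include <stddef.h>
#include <stdlib.h>

/* Out-of-line, exported symbol: takes the caller address explicitly so
 * allocation tracking can attribute memory to the real call site. */
static void *vmalloc_node_flags_caller_sketch(size_t size,
                                              const void *caller)
{
    (void)caller;           /* a real kernel would record this       */
    return malloc(size);    /* stand-in for the vmalloc machinery    */
}

/* Header-style static inline wrapper: stays inline, so each user
 * automatically passes its own return address as the caller. */
static inline void *vmalloc_node_flags_sketch(size_t size)
{
    return vmalloc_node_flags_caller_sketch(size,
                                            __builtin_return_address(0));
}
```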

    [akpm@linux-foundation.org: coding-style fixes]
    [mhocko@kernel.org: revert old comment]
    Link: http://lkml.kernel.org/r/20170509211054.GB16325@dhcp22.suse.cz
    Fixes: 1f5307b1e094 ("mm, vmalloc: properly track vmalloc users")
    Link: http://lkml.kernel.org/r/20170509153702.GR6481@dhcp22.suse.cz
    Signed-off-by: Michal Hocko
    Cc: Tobias Klauser
    Cc: Geert Uytterhoeven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko