24 Sep, 2009

2 commits


23 Sep, 2009

4 commits

  • For /proc/kcore, each arch registers its memory ranges by kclist_add().
    Usually,

    - the range of physical memory
    - the range of the vmalloc area
    - text, etc...

    are registered, but "range of physical memory" has some troubles: it
    isn't updated at memory hotplug and it tends to include unnecessary
    memory holes. Now, /proc/iomem (kernel/resource.c) includes the required
    physical memory range information and is properly updated at memory
    hotplug. So it's better to avoid duplicating that information in arch
    code and instead rebuild the kclist for physical memory based on
    /proc/iomem (see the sketch after this entry).

    Signed-off-by: KAMEZAWA Hiroyuki
    Signed-off-by: Jiri Slaby
    Cc: Ralf Baechle
    Cc: Benjamin Herrenschmidt
    Cc: WANG Cong
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
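
    A rough sketch of the approach (a sketch only, assuming the 2.6.31-era
    interfaces: walk_system_ram_range() calling back once per contiguous
    "System RAM" range, and the kclist_add() type argument introduced by
    the kclist-types patch later in this group):

    /* Register each System RAM range from /proc/iomem with /proc/kcore. */
    static int kclist_add_ram(unsigned long start_pfn, unsigned long nr_pages,
                              void *arg)
    {
            struct kcore_list *ent = kmalloc(sizeof(*ent), GFP_KERNEL);

            if (!ent)
                    return -ENOMEM;
            kclist_add(ent, __va(start_pfn << PAGE_SHIFT),
                       nr_pages << PAGE_SHIFT, KCORE_RAM);
            return 0;
    }

    /* Called at init time and again from the memory hotplug notifier, so
     * the KCORE_RAM entries track /proc/iomem instead of stale arch tables. */
    static void kcore_update_ram(void)
    {
            walk_system_ram_range(0, max_pfn, NULL, kclist_add_ram);
    }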
     
  • Originally, walk_memory_resource() was introduced to traverse all memory
    of "System RAM" for detecting memory hotplug/unplug ranges. For that,
    the flags IORESOURCE_MEM | IORESOURCE_BUSY were used, and this was
    enough for memory hotplug.

    But when used for other purposes, such as /proc/kcore, this may include
    firmware areas that are also marked IORESOURCE_BUSY | IORESOURCE_MEM.
    This patch makes the check strict, matching only busy "System RAM" (see
    the sketch after this entry).

    Note: PPC64 keeps its own walk_memory_resource(), which walks through
    ppc64's lmb information. Because the old kclist_add() is called per lmb,
    this patch ultimately makes no difference in behavior there.

    This patch also removes the CONFIG_MEMORY_HOTPLUG check from this
    function. Because pfn_valid() only shows "there is a memmap or not" and
    cannot be used for "there is physical memory or not", this function is
    generically useful for scanning physical memory ranges.

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Ralf Baechle
    Cc: Benjamin Herrenschmidt
    Cc: WANG Cong
    Cc: Américo Wang
    Cc: David Rientjes
    Cc: Roland Dreier
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
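
    A hedged sketch of the stricter match (not the literal hunk in
    kernel/resource.c): both the flags and the resource name have to
    match before a range is handed to the callback.

    static int res_is_busy_system_ram(const struct resource *res)
    {
            /* Firmware regions can also be IORESOURCE_MEM | IORESOURCE_BUSY,
             * so the name must be checked as well. */
            if ((res->flags & (IORESOURCE_MEM | IORESOURCE_BUSY)) !=
                (IORESOURCE_MEM | IORESOURCE_BUSY))
                    return 0;
            return strcmp(res->name, "System RAM") == 0;
    }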
     
  • For /proc/kcore, vmalloc areas are registered per arch. But all of them
    register the same range, [VMALLOC_START...VMALLOC_END). This patch
    unifies them. With this, archs which have no kclist_add() hooks can see
    the vmalloc area correctly (see the sketch after this entry).

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Ralf Baechle
    Cc: Benjamin Herrenschmidt
    Cc: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
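
    The unified registration amounts to a single entry added from generic
    code; a minimal sketch, assuming the kclist_add() signature with the
    type argument and an init hook in fs/proc/kcore.c:

    static struct kcore_list kcore_vmalloc;

    static void __init register_vmalloc_kcore(void)
    {
            /* Same range for every arch: [VMALLOC_START, VMALLOC_END). */
            kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
                       VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
    }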
     
  • Presently, kclist_add() only takes a start address and size as its
    arguments. To make the kclist dynamically reconfigurable, it's necessary
    to know which kclists are for System RAM and which are not.

    This patch adds kclist types:
    KCORE_RAM
    KCORE_VMALLOC
    KCORE_TEXT
    KCORE_OTHER

    This "type" is used by a following patch to detect KCORE_RAM (see the
    sketch after this entry).

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Ralf Baechle
    Cc: Benjamin Herrenschmidt
    Cc: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
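
    The resulting interface, roughly (the struct kcore_list layout is
    omitted and the exact header location is an assumption):

    enum kcore_type {
            KCORE_RAM,
            KCORE_VMALLOC,
            KCORE_TEXT,
            KCORE_OTHER,
    };

    /* The extra argument tells /proc/kcore which entries describe System
     * RAM and can therefore be rebuilt at memory hotplug time. */
    extern void kclist_add(struct kcore_list *new, void *addr,
                           size_t size, int type);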
     

22 Sep, 2009

1 commit

  • Commit 96177299416dbccb73b54e6b344260154a445375 ("Drop free_pages()")
    modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
    int'. This made the casts to 'unsigned long' in most callers superfluous,
    so remove them.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Geert Uytterhoeven
    Reviewed-by: Christoph Lameter
    Acked-by: Ingo Molnar
    Acked-by: Russell King
    Acked-by: David S. Miller
    Acked-by: Kyle McMartin
    Acked-by: WANG Cong
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Haavard Skinnemoen
    Cc: Mikael Starvik
    Cc: "Luck, Tony"
    Cc: Hirokazu Takata
    Cc: Ralf Baechle
    Cc: David Howells
    Acked-by: Benjamin Herrenschmidt
    Cc: Martin Schwidefsky
    Cc: Paul Mundt
    Cc: Chris Zankel
    Cc: Michal Simek
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Geert Uytterhoeven
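
    A minimal illustration of the cast removal described above (the call
    site is invented for illustration, not taken from the patch):

    static void show_free_kb(void)
    {
            /* Old style: the cast was needed while nr_free_pages()
             * returned unsigned int. */
            printk(KERN_INFO "%luk free\n",
                   (unsigned long)nr_free_pages() << (PAGE_SHIFT - 10));

            /* New style: the return type is already unsigned long, so the
             * cast is superfluous and is simply dropped. */
            printk(KERN_INFO "%luk free\n",
                   nr_free_pages() << (PAGE_SHIFT - 10));
    }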
     

21 Sep, 2009

1 commit

  • Bye-bye Performance Counters, welcome Performance Events!

    In the past few months the perfcounters subsystem has grown out its
    initial role of counting hardware events, and has become (and is
    becoming) a much broader generic event enumeration, reporting, logging,
    monitoring, analysis facility.

    Naming its core object 'perf_counter' and naming the subsystem
    'perfcounters' has become more and more of a misnomer. With pending
    code like hw-breakpoints support the 'counter' name is less and
    less appropriate.

    All in one, we've decided to rename the subsystem to 'performance
    events' and to propagate this rename through all fields, variables
    and API names. (in an ABI compatible fashion)

    The word 'event' is also a bit shorter than 'counter' - which makes
    it slightly more convenient to write/handle as well.

    Thanks goes to Stephane Eranian who first observed this misnomer and
    suggested a rename.

    User-space tooling and ABI compatibility is not affected - this patch
    should be function-invariant. (Also, defconfigs were not touched to
    keep the size down.)

    This patch has been generated via the following script:

    FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

    sed -i \
    -e 's/PERF_EVENT_/PERF_RECORD_/g' \
    -e 's/PERF_COUNTER/PERF_EVENT/g' \
    -e 's/perf_counter/perf_event/g' \
    -e 's/nb_counters/nb_events/g' \
    -e 's/swcounter/swevent/g' \
    -e 's/tpcounter_event/tp_event/g' \
    $FILES

    for N in $(find . -name perf_counter.[ch]); do
    M=$(echo $N | sed 's/perf_counter/perf_event/g')
    mv $N $M
    done

    FILES=$(find . -name perf_event.*)

    sed -i \
    -e 's/COUNTER_MASK/REG_MASK/g' \
    -e 's/COUNTER/EVENT/g' \
    -e 's/\<event\>/event_id/g' \
    -e 's/counter/event/g' \
    -e 's/Counter/Event/g' \
    $FILES

    ... to keep it as correct as possible. This script can also be
    used by anyone who has pending perfcounters patches - it converts
    a Linux kernel tree over to the new naming. We tried to time this
    change to the point in time where the amount of pending patches
    is the smallest: the end of the merge window.

    Namespace clashes were fixed up in a preparatory patch - and some
    stylistic fallout will be fixed up in a subsequent patch.

    ( NOTE: 'counters' are still the proper terminology when we deal
    with hardware registers - and these sed scripts are a bit
    over-eager in renaming them. I've undone some of that, but
    in case there's something left where 'counter' would be
    better than 'event' we can undo that on an individual basis
    instead of touching an otherwise nicely automated patch. )

    Suggested-by: Stephane Eranian
    Acked-by: Peter Zijlstra
    Acked-by: Paul Mackerras
    Reviewed-by: Arjan van de Ven
    Cc: Mike Galbraith
    Cc: Arnaldo Carvalho de Melo
    Cc: Frederic Weisbecker
    Cc: Steven Rostedt
    Cc: Benjamin Herrenschmidt
    Cc: David Howells
    Cc: Kyle McMartin
    Cc: Martin Schwidefsky
    Cc: "David S. Miller"
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc:
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

16 Sep, 2009

2 commits

  • * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (134 commits)
    powerpc/nvram: Enable use Generic NVRAM driver for different size chips
    powerpc/iseries: Fix oops reading from /proc/iSeries/mf/*/cmdline
    powerpc/ps3: Workaround for flash memory I/O error
    powerpc/booke: Don't set DABR on 64-bit BookE, use DAC1 instead
    powerpc/perf_counters: Reduce stack usage of power_check_constraints
    powerpc: Fix bug where perf_counters breaks oprofile
    powerpc/85xx: Fix SMP compile error and allow NULL for smp_ops
    powerpc/irq: Improve nanodoc
    powerpc: Fix some late PowerMac G5 with PCIe ATI graphics
    powerpc/fsl-booke: Use HW PTE format if CONFIG_PTE_64BIT
    powerpc/book3e: Add missing page sizes
    powerpc/pseries: Fix to handle slb resize across migration
    powerpc/powermac: Thermal control turns system off too eagerly
    powerpc/pci: Merge ppc32 and ppc64 versions of phb_scan()
    powerpc/405ex: support cuImage via included dtb
    powerpc/405ex: provide necessary fixup function to support cuImage
    powerpc/40x: Add support for the ESTeem 195E (PPC405EP) SBC
    powerpc/44x: Add Eiger AMCC (AppliedMicro) PPC460SX evaluation board support.
    powerpc/44x: Update Arches defconfig
    powerpc/44x: Update Arches dts
    ...

    Fix up conflicts in drivers/char/agp/uninorth-agp.c

    Linus Torvalds
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (46 commits)
    powerpc64: convert to dynamic percpu allocator
    sparc64: use embedding percpu first chunk allocator
    percpu: kill lpage first chunk allocator
    x86,percpu: use embedding for 64bit NUMA and page for 32bit NUMA
    percpu: update embedding first chunk allocator to handle sparse units
    percpu: use group information to allocate vmap areas sparsely
    vmalloc: implement pcpu_get_vm_areas()
    vmalloc: separate out insert_vmalloc_vm()
    percpu: add chunk->base_addr
    percpu: add pcpu_unit_offsets[]
    percpu: introduce pcpu_alloc_info and pcpu_group_info
    percpu: move pcpu_lpage_build_unit_map() and pcpul_lpage_dump_cfg() upward
    percpu: add @align to pcpu_fc_alloc_fn_t
    percpu: make @dyn_size mandatory for pcpu_setup_first_chunk()
    percpu: drop @static_size from first chunk allocators
    percpu: generalize first chunk allocator selection
    percpu: build first chunk allocators selectively
    percpu: rename 4k first chunk allocator to page
    percpu: improve boot messages
    percpu: fix pcpu_reclaim() locking
    ...

    Fix trivial conflict as by Tejun Heo in kernel/sched.c

    Linus Torvalds
     

02 Sep, 2009

1 commit

  • The SLB can change sizes across a live migration, which was not
    being handled, resulting in possible machine crashes during
    migration if migrating to a machine which has a smaller max SLB
    size than the source machine. Fix this by first reducing the
    SLB size to the minimum possible value, which is 32, prior to
    migration. Then during the device tree update which occurs after
    migration, we make the call to ensure the SLB gets updated. Also
    add the slb_size to the lparcfg output so that the migration
    tools can check to make sure the kernel has this capability
    before allowing migration in scenarios where the SLB size will change.

    BenH: Fixed an #include to avoid breaking the ppc32 build

    Signed-off-by: Brian King
    Signed-off-by: Benjamin Herrenschmidt

    Brian King
     

28 Aug, 2009

1 commit

  • Support for TLB reservation (or TLB Write Conditional) and paired MAS
    registers is optional for a processor implementation, so we handle
    them via MMU feature sections.

    We currently only use paired MAS registers to access the full RPN + perm
    bits that are kept in MAS7||MAS3. We assume for now that if an
    implementation has a hardware page table it also implements TLB
    reservations.

    Signed-off-by: Kumar Gala
    Signed-off-by: Benjamin Herrenschmidt

    Kumar Gala
     

27 Aug, 2009

2 commits

  • Benjamin Herrenschmidt
     
  • This is an attempt at cleaning up a bit the way we handle execute
    permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only
    defined by CPUs that can do something with it, and the myriad of
    #ifdef's in the I$/D$ coherency code is reduced to 2 cases that
    hopefully should cover everything.

    The logic on BookE is a little bit different from what it was, though not
    by much. Since _PAGE_EXEC will now be set by the generic code for
    executable pages, we need to filter it out for pages whose caches are
    still unclean and recover it once they are cleaned. However, I don't
    expect the code to be more bloated than it already was in that area due
    to that change.

    I could boast that this brings proper enforcement of per-page execute
    permissions to all BookE and 40x but in fact, we've had that for
    some time as a side effect of my previous rework in that area (and
    I didn't even know it :-) We would only enable execute permission if
    the page was cache clean and we would only cache clean it if we took
    an exec fault. Since we now enforce that the latter only works if
    VM_EXEC is part of the VMA flags, we de-facto already enforce per-page
    execute permissions... Unless I missed something.

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
     

25 Aug, 2009

1 commit


20 Aug, 2009

16 commits

  • Benjamin Herrenschmidt
     
  • Since the pte_lockptr is a spinlock, it gets optimized away on
    uniprocessor builds, so using spin_is_locked() is not correct. We can use
    assert_spin_locked() instead and get the proper behavior on both UP and
    SMP builds (see the sketch after this entry).

    Signed-off-by: Kumar Gala
    Signed-off-by: Benjamin Herrenschmidt

    Kumar Gala
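
    A sketch of the substitution (the surrounding function is illustrative,
    not the actual call site):

    static void check_pte_lock(struct mm_struct *mm, pmd_t *pmd)
    {
            /* Wrong on UP, where spinlocks compile away and
             * spin_is_locked() always returns 0:
             *
             *     BUG_ON(!spin_is_locked(pte_lockptr(mm, pmd)));
             *
             * Correct on both: a no-op on UP, a real check on SMP. */
            assert_spin_locked(pte_lockptr(mm, pmd));
    }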
     
  • cam[tlbcam_index] was checked before tlbcam_index < ARRAY_SIZE(cam);
    swap the order so the bounds check happens before the array is
    dereferenced (see the sketch after this entry).

    Signed-off-by: Roel Kluin
    Signed-off-by: Andrew Morton
    Signed-off-by: Kumar Gala
    Signed-off-by: Benjamin Herrenschmidt

    Roel Kluin
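
    A sketch of the corrected ordering (the loop shape is illustrative;
    cam[] and tlbcam_index are the file-scope objects in
    arch/powerpc/mm/fsl_booke_mmu.c):

    /* Validate the index before cam[] is dereferenced, otherwise the
     * last iteration reads past the end of the array. */
    while (tlbcam_index < ARRAY_SIZE(cam) && cam[tlbcam_index])
            tlbcam_index++;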
     
  • Introduced a temporary variable for iterating over the list of CPUs
    that are threads on the same core. For some reason Ben forgot how for
    loops work.

    Signed-off-by: Kumar Gala
    Signed-off-by: Benjamin Herrenschmidt

    Kumar Gala
     
  • This contains all the bits that didn't fit in previous patches :-) This
    includes the actual exception handlers assembly, the changes to the
    kernel entry, other misc bits and wiring it all up in Kconfig.

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
     
  • The base TLB support didn't include support for SPARSEMEM_VMEMMAP;
    though we did carve out some virtual space for it, the necessary support
    code wasn't there. This implements it using 16M pages for now, though
    the page size could easily be changed at runtime if necessary.

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
     
  • This adds the TLB miss handler assembly and the low level TLB flush
    routines, along with the necessary hook for dealing with our virtual
    page tables or indirect TLB entries that need to be flushed when PTE
    pages are freed.

    There is currently no support for hugetlbfs.

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
     
  • The global structure mmu_gathers, used by generic code, is currently
    defined in multiple places, none of which covers 64-bit Book3E. This
    changes it by moving the definition to one place common to all
    processors (see the sketch after this entry).

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
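
    The shared definition itself is a single line, roughly (assuming it
    lands in a common powerpc mm file so 64-bit Book3E picks it up too):

    /* Per-CPU state used by the generic tlb_gather_mmu()/tlb_finish_mmu()
     * machinery; previously defined separately per processor family. */
    DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);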
     
  • This adds the PTE and pgtable format definitions, along with changes
    to the kernel memory map and other definitions related to implementing
    support for 64-bit Book3E. This also shields some asm-offset bits that
    are currently only relevant on 32-bit.

    We also move the definition of the "linux" page size constants to
    the common mmu.h file and add a few sizes that are relevant to
    embedded processors.

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
     
  • That patch used to just add a hook to page table flushing but
    pulling that string brought out a whole bunch of issues, so it
    now does that and more:

    - We now make the RCU batching of page freeing SMP only, as I
    believe it was intended initially. We make a few more things compile
    to nothing on !CONFIG_SMP

    - Some macros are turned into functions, though that forced me to
    out-of-line a few things due to unsolvable include dependencies.
    However, it's probably better that way anyway; it's not -that-
    critical a code path.

    - 32-bit didn't call pte_free_finish() from tlb_flush(), which means
    that it wouldn't push out the batch to RCU for delayed freeing when
    a bunch of page tables had been freed; they would just stay in there
    until the batch got full.

    64-bit BookE will use that hook to maintain the virtually linear
    page tables or the indirect entries in the TLB when using the
    HW loader.

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
     
  • We need to pass down whether the page is direct or indirect, and we'll
    need to pass the page size to _tlbil_va and _tlbivax_bcast.

    We also add a new low level _tlbil_pid_noind() which does a TLB flush
    by PID but avoids flushing indirect entries if possible.

    This implements those new prototypes but defines them with inlines
    or macros so that no additional arguments are actually passed on current
    processors (see the sketch after this entry).

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
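
    A sketch of how the richer prototype can collapse to the old call on
    current processors (the wrapper shape is an assumption; the names
    follow the commit message):

    /* Existing low-level flush, unchanged on current processors. */
    extern void __tlbil_va(unsigned long address, unsigned int pid);

    /* The new prototype carries the page size and the direct/indirect
     * flag, but the inline discards them, so no extra arguments are
     * actually passed today. */
    static inline void _tlbil_va(unsigned long address, unsigned int pid,
                                 unsigned int tsize, unsigned int ind)
    {
            __tlbil_va(address, pid);
    }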
     
  • This adds some code to do early ioremaps using page tables instead of
    bolting entries in the hash table. This will be used by the upcoming
    64-bit BookE port.

    The patch also changes the test for early vs. late ioremap to use
    slab_is_available() instead of our old hackish mem_init_done (see the
    sketch after this entry).

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
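
    A sketch of the early/late split (map_io_range() is a hypothetical
    stand-in for the real page-table mapping code; this is not the
    literal __ioremap path):

    void __iomem *ioremap_sketch(phys_addr_t paddr, unsigned long size)
    {
            unsigned long v;

            if (slab_is_available()) {
                    /* Late: take virtual space from the vmalloc area. */
                    struct vm_struct *area = get_vm_area(size, VM_IOREMAP);

                    if (!area)
                            return NULL;
                    v = (unsigned long)area->addr;
            } else {
                    /* Early: carve space below ioremap_bot and map it with
                     * plain page tables, not bolted hash entries. */
                    v = (ioremap_bot -= size);
            }

            if (map_io_range(v, paddr, size))       /* hypothetical helper */
                    return NULL;
            return (void __iomem *)v;
    }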
     
  • The current "no hash" MMU context management code is written with
    the assumption that one CPU == one TLB. This is not the case on
    implementations that support HW multithreading, where several
    linux CPUs can share the same TLB.

    This adds some basic support for this to our context management
    and our TLB flushing code.

    It also cleans up the optional debugging output a bit.

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
     
  • The kernel uses SPRG registers for various purposes, typically in
    low level assembly code as scratch registers or to hold per-CPU
    globals such as the PACA or the current thread_info pointer.

    We want to be able to easily shuffle the usage of those registers
    as some implementations have specific constraints related to some
    of them; for example, some have userspace readable aliases, etc.,
    and the current choice isn't always the best.

    This patch should not change any code generation. It replaces the
    usage of SPRN_SPRGn everywhere in the kernel with named replacements
    and adds documentation next to the definition of the names as to
    what they are used for on each processor family (see the sketch after
    this entry).

    The only parts that still use the original numbers are bits of KVM
    or suspend/resume code that just blindly needs to save/restore all
    the SPRGs.

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
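
    A sketch of the naming scheme (the concrete assignments below are
    illustrative assumptions; the real per-family choices live next to
    the definitions in asm/reg.h):

    /* Role-based names used throughout the kernel... */
    #define SPRN_SPRG_SCRATCH0      SPRN_SPRG0      /* exception scratch   */
    #define SPRN_SPRG_THREAD        SPRN_SPRG3      /* current thread_info */

    /* ...while KVM and suspend/resume keep saving and restoring
     * SPRN_SPRG0..SPRN_SPRG3 blindly by number. */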
     
  • TASK_UNMAPPED_BASE is not used with the new top down mmap layout. We can
    reuse this preload slot by loading in the segment at 0x10000000, where
    almost all PowerPC binaries are linked.

    On a microbenchmark that bounces a token between two 64-bit processes
    over pipes and calls gettimeofday each iteration (to access the VDSO),
    both the 32-bit and 64-bit context switch rates improve (tested on a
    4GHz POWER6):

    32bit: 273k/sec -> 283k/sec
    64bit: 277k/sec -> 284k/sec

    Signed-off-by: Anton Blanchard
    Signed-off-by: Benjamin Herrenschmidt

    Anton Blanchard
     
  • With the new top down layout it is likely that the pc and stack will be in the
    same segment, because the pc is most likely in a library allocated via a top
    down mmap. Right now we bail out early if these segments match.

    Rearrange the SLB preload code to sanity check all SLB preload addresses
    are not in the kernel, then check all addresses for conflicts.

    Signed-off-by: Anton Blanchard
    Signed-off-by: Benjamin Herrenschmidt

    Anton Blanchard
     

18 Aug, 2009

1 commit

  • This provides a mechanism to allow the perf_counters code to access
    user memory in a PMU interrupt routine. Such an access can cause
    various kinds of interrupt: SLB miss, MMU hash table miss, segment
    table miss, or TLB miss, depending on the processor. This commit
    only deals with 64-bit classic/server processors, which use an MMU
    hash table. 32-bit processors are already able to access user memory
    at interrupt time. Since we don't soft-disable on 32-bit, we avoid
    the possibility of reentering hash_page or the TLB miss handlers,
    since they run with interrupts disabled.

    On 64-bit processors, an SLB miss interrupt on a user address will
    update the slb_cache and slb_cache_ptr fields in the paca. This is
    OK except in the case where a PMU interrupt occurs in switch_slb,
    which also accesses those fields. To prevent this, we hard-disable
    interrupts in switch_slb. Interrupts are already soft-disabled at
    this point, and will get hard-enabled when they get soft-enabled
    later.

    This also reworks slb_flush_and_rebolt: to avoid hard-disabling twice,
    and to make sure that it clears the slb_cache_ptr when called from
    callers other than switch_slb, the existing routine is renamed to
    __slb_flush_and_rebolt, which is called by switch_slb and the new
    version of slb_flush_and_rebolt.

    Similarly, switch_stab (used on POWER3 and RS64 processors) gets a
    hard_irq_disable() to protect the per-cpu variables used there and
    in ste_allocate.

    If an MMU hash table miss interrupt occurs, normally we would call
    hash_page to look up the Linux PTE for the address and create an HPTE.
    However, hash_page is fairly complex and takes some locks, so to
    avoid the possibility of deadlock, we check the preemption count
    to see if we are in a (pseudo-)NMI handler, and if so, we don't call
    hash_page but instead treat it like a bad access that will get
    reported up through the exception table mechanism (see the sketch
    after this entry). An interrupt whose handler runs even though the
    interrupt occurred when soft-disabled (such as the PMU interrupt) is
    considered a pseudo-NMI handler, which should use nmi_enter()/nmi_exit()
    rather than irq_enter()/irq_exit().

    Acked-by: Benjamin Herrenschmidt
    Signed-off-by: Paul Mackerras

    Paul Mackerras
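
    A sketch of the guard on the hash-miss path (the placement and return
    convention are assumptions; hash_page()'s 2.6.31 signature is assumed):

    static int hash_page_if_safe(unsigned long ea, unsigned long access,
                                 unsigned long trap)
    {
            /* A pseudo-NMI handler (e.g. the PMU interrupt, which runs
             * even while soft-disabled) must not take hash_page()'s locks,
             * so treat the miss as a bad access and let the exception
             * table fixup report it. */
            if (in_nmi())
                    return -1;

            return hash_page(ea, access, trap);
    }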
     

14 Aug, 2009

1 commit

  • Conflicts:
    arch/sparc/kernel/smp_64.c
    arch/x86/kernel/cpu/perf_counter.c
    arch/x86/kernel/setup_percpu.c
    drivers/cpufreq/cpufreq_ondemand.c
    mm/percpu.c

    Conflicts in the core and arch percpu code are mostly from commit
    ed78e1e078dd44249f88b1dd8c76dafb39567161, which substituted
    nr_cpu_ids for many uses of num_possible_cpus(). As the for-next
    branch has moved all the first chunk allocators into mm/percpu.c, the
    changes are moved from arch code to mm/percpu.c.

    Signed-off-by: Tejun Heo

    Tejun Heo
     

30 Jul, 2009

1 commit

  • In switch_mmu_context(), if we call steal_context_smp() to get a context
    to use, we shouldn't fall through and then call steal_context_up(). Doing
    so can be problematic in that the 'mm' that steal_context_up() ends up
    using will not get marked dirty in the stale_map[] for other CPUs that
    might have used that mm. Thus we could end up with stale TLB entries in
    the other CPUs that can cause all kinds of havoc (see the sketch after
    this entry).

    Signed-off-by: Kumar Gala

    Kumar Gala
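
    A sketch of the intended selection (the real switch_mmu_context() uses
    gotos and a retry loop, elided here; the helper names follow the
    commit message):

    static unsigned int pick_new_context(unsigned int id)
    {
            if (id != MMU_NO_CONTEXT)
                    return id;
    #ifdef CONFIG_SMP
            if (num_online_cpus() > 1)
                    /* Must return here: falling through to the UP stealer
                     * would leave stale_map[] unmarked on other CPUs. */
                    return steal_context_smp(id);
    #endif
            return steal_context_up(id);
    }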
     

28 Jul, 2009

1 commit

  • mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()

    Upcoming patches to support the new 64-bit "BookE" powerpc architecture
    will need the virtual address corresponding to the PTE page when
    freeing it, due to the way the HW table walker works.

    Basically, the TLB can be loaded with "large" pages that cover the whole
    virtual space (well, sort-of, half of it actually) represented by a PTE
    page, and which contain an "indirect" bit indicating that this TLB entry
    RPN points to an array of PTEs from which the TLB can then create direct
    entries. Thus, in order to invalidate those when PTE pages are deleted,
    we need the virtual address to pass to tlbilx or tlbivax instructions.

    The old trick of sticking it somewhere in the PTE page's struct page
    sucks too much; the address is almost readily available in all call
    sites and almost everybody implements these as macros, so we may as
    well add the argument everywhere. I added it to the pmd and pud
    variants for consistency (see the sketch after this entry).

    Signed-off-by: Benjamin Herrenschmidt
    Acked-by: David Howells [MN10300 & FRV]
    Acked-by: Nick Piggin
    Acked-by: Martin Schwidefsky [s390]
    Signed-off-by: Linus Torvalds

    Benjamin Herrenschmidt
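
    The shape of the interface change, roughly (the pte_free() body is one
    plausible arch implementation, not every arch's):

    /* Before: no virtual address available to the arch hook. */
    #define __pte_free_tlb(tlb, ptepage)            \
            pte_free((tlb)->mm, (ptepage))

    /* After: callers such as free_pte_range() pass the address covered by
     * the PTE page; most architectures simply ignore it, while 64-bit
     * Book3E will use it to invalidate indirect TLB entries. */
    #define __pte_free_tlb(tlb, ptepage, address)   \
            pte_free((tlb)->mm, (ptepage))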
     

08 Jul, 2009

5 commits

  • pr_debug() can now result in code being generated even when DEBUG
    is not defined. That's not really desirable in some places.

    With CONFIG_DYNAMIC_DEBUG=y:

    size before:
    text data bss dec hex filename
    2036 368 8 2412 96c arch/powerpc/mm/pgtable.o

    size after:
    text data bss dec hex filename
    1677 248 8 1933 78d arch/powerpc/mm/pgtable.o

    Signed-off-by: Michael Ellerman
    Signed-off-by: Benjamin Herrenschmidt

    Michael Ellerman
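
    The distinction in brief, assuming the replacement used by this series
    and the three similar entries below is pr_devel(), which only
    generates code when DEBUG is defined for the file:

    static void debug_example(unsigned long va)
    {
            /* With CONFIG_DYNAMIC_DEBUG=y this always emits code plus a
             * descriptor so the site can be enabled at runtime. */
            pr_debug("mapping %lx\n", va);

            /* This compiles to nothing unless the file defines DEBUG,
             * which is what the size deltas above reflect. */
            pr_devel("mapping %lx\n", va);
    }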
     
  • pr_debug() can now result in code being generated even when DEBUG
    is not defined. That's not really desirable in some places.

    With CONFIG_DYNAMIC_DEBUG=y:

    size before:
    text data bss dec hex filename
    3252 384 0 3636 e34 arch/powerpc/mm/gup.o

    size after:
    text data bss dec hex filename
    2576 96 0 2672 a70 arch/powerpc/mm/gup.o

    Signed-off-by: Michael Ellerman
    Signed-off-by: Benjamin Herrenschmidt

    Michael Ellerman
     
  • pr_debug() can now result in code being generated even when DEBUG
    is not defined. That's not really desirable in some places.

    With CONFIG_DYNAMIC_DEBUG=y:

    size before:
    text data bss dec hex filename
    3261 416 4 3681 e61 arch/powerpc/mm/slb.o

    size after:
    text data bss dec hex filename
    2861 248 4 3113 c29 arch/powerpc/mm/slb.o

    Signed-off-by: Michael Ellerman
    Signed-off-by: Benjamin Herrenschmidt

    Michael Ellerman
     
  • pr_debug() can now result in code being generated even when DEBUG
    is not defined. That's not really desirable in some places.

    With CONFIG_DYNAMIC_DEBUG=y:

    size before:
    text data bss dec hex filename
    1508 48 28 1584 630 powerpc/mm/mmu_context_nohash.o

    size after:
    text data bss dec hex filename
    1088 0 28 1116 45c powerpc/mm/mmu_context_nohash.o

    Signed-off-by: Michael Ellerman
    Signed-off-by: Benjamin Herrenschmidt

    Michael Ellerman
     
  • Signed-off-by: Joe Perches
    Acked-by: Geoff Levand
    Signed-off-by: Benjamin Herrenschmidt

    Joe Perches