19 Jun, 2019

1 commit

  • Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation #

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 4122 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Enrico Weigelt
    Reviewed-by: Kate Stewart
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

08 Apr, 2019

1 commit


26 Feb, 2019

2 commits

  • As of today we enable unaligned access unconditionally on ARCv2.
    Do this under a Kconfig option to allow disabling it for testing,
    benchmarking etc. Also, while at it:

    - Select HAVE_EFFICIENT_UNALIGNED_ACCESS
    - Although gcc defaults to unaligned access (since GNU 2018.03), add the
      right toggles for enabling or disabling it as appropriate
    - Update the bootlog to print both the HW feature status (exists,
      enabled/disabled) and the SW status (used / not used)
    - Wire up the relaxed memcpy for unaligned access

    Signed-off-by: Eugeniy Paltsev
    Signed-off-by: Vineet Gupta
    [vgupta: squashed patches, handle gcc -mno-unaligned-access quirk]

    Eugeniy Paltsev
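
    Below is a minimal userspace sketch of the kind of bootlog reporting the
    entry above describes (both the HW feature state and whether the kernel
    actually uses it). The struct and field names are illustrative only, not
    the kernel's actual code.

    #include <stdbool.h>
    #include <stdio.h>

    struct unaligned_status {
        bool hw_exists;   /* core implements unaligned access           */
        bool hw_enabled;  /* feature currently enabled in hardware      */
        bool sw_used;     /* kernel built to exploit it (Kconfig option) */
    };

    static void print_unaligned_status(const struct unaligned_status *st)
    {
        printf("Unaligned access: HW %s (%s), SW %s\n",
               st->hw_exists ? "present" : "absent",
               st->hw_enabled ? "enabled" : "disabled",
               st->sw_used ? "used" : "not used");
    }

    int main(void)
    {
        struct unaligned_status st = { true, true, true };
        print_unaligned_status(&st);
        return 0;
    }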
     
  • Optimise the code to use the efficient unaligned memory access available
    on ARCv2. This allows us to really simplify the memcpy code and speeds
    it up by about 1.5x when the source or destination is unaligned.

    Don't wire it up yet!

    Signed-off-by: Eugeniy Paltsev
    Signed-off-by: Vineet Gupta

    Eugeniy Paltsev
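
    A minimal C sketch of the idea behind this optimisation (the actual
    implementation is ARC assembly): when the CPU handles unaligned
    loads/stores efficiently, the copy loop can move word-sized chunks
    without first aligning source or destination, leaving only a short
    byte-wise tail. On such targets the per-word memcpy() calls below
    compile down to plain unaligned loads and stores.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static void *memcpy_unaligned_ok(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;

        /* Bulk loop: 32-bit moves regardless of pointer alignment. */
        while (n >= sizeof(uint32_t)) {
            uint32_t w;
            memcpy(&w, s, sizeof(w));   /* unaligned load  */
            memcpy(d, &w, sizeof(w));   /* unaligned store */
            s += sizeof(w);
            d += sizeof(w);
            n -= sizeof(w);
        }

        /* Tail: remaining bytes one at a time. */
        while (n--)
            *d++ = *s++;

        return dst;
    }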
     

22 Feb, 2019

1 commit

  • ARCv2 optimized memcpy uses the PREFETCHW instruction to prefetch the
    next cache line, but doesn't ensure that the line is not past the end of
    the buffer. PREFETCHW changes the line ownership and marks it dirty,
    which can cause data corruption if this area is used for DMA IO.

    Fix the issue by avoiding the PREFETCHW. This leads to performance
    degradation, but that is OK as we'll introduce a new memcpy
    implementation optimized for unaligned memory access.

    We also drop all PREFETCH instructions, as they are quite useless here:
    * we issue PREFETCH right before the corresponding LOAD.
    * we copy 16 or 32 bytes of data per iteration of the main loop
      (depending on CONFIG_ARC_HAS_LL64), so we issue PREFETCH 4 times (or
      2 times) for each L1 cache line (with the default 64B L1 cache line).
      Obviously this is not optimal.

    Signed-off-by: Eugeniy Paltsev
    Signed-off-by: Vineet Gupta

    Eugeniy Paltsev
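
    An illustrative C rendering of the hazard described above (the real code
    is ARC assembly; the chunk size and names here are only for the example):
    a write-intent prefetch issued one chunk ahead runs past the end of the
    destination buffer on the final iteration.

    #include <stddef.h>
    #include <string.h>

    #define CHUNK 32   /* the real loop moves 16 or 32 bytes per iteration */

    /* n is assumed to be a multiple of CHUNK to keep the sketch short */
    static void copy_with_unsafe_prefetch(char *dst, const char *src, size_t n)
    {
        for (size_t off = 0; off < n; off += CHUNK) {
            /*
             * Hazard: on the last iteration dst + off + CHUNK points past
             * the buffer, yet the write-intent prefetch still takes
             * ownership of that line and dirties it -- fatal if the line
             * belongs to a DMA buffer. The commit's fix is simply to drop
             * the prefetch. Note also the granularity mismatch: with a 64B
             * line, the same line gets prefetched 64 / CHUNK times.
             */
            __builtin_prefetch(dst + off + CHUNK, 1 /* write intent */);
            memcpy(dst + off, src + off, CHUNK);
        }
    }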
     

18 Jan, 2019

1 commit

  • ARCv2 optimized memset uses the PREFETCHW instruction to prefetch the
    next cache line, but doesn't ensure that the line is not past the end of
    the buffer. PREFETCHW changes the line ownership and marks it dirty,
    which can cause issues in an SMP config when the next line is already
    owned by another core. Fix the issue by avoiding the PREFETCHW.

    Some more details:

    The current code has 3 logical loops (ignoring the unaligned part):
    (a) Big loop doing aligned 64 bytes per iteration with PREALLOC
    (b) Loop for 32 x 2 bytes with PREFETCHW
    (c) Any leftover bytes

    Loop (a) was already eliding the last 64 bytes, so PREALLOC was
    safe. The fix was removing PREFETCHW from (b).

    Another potential issue (applicable to configs with a 32 or 128 byte L1
    cache line) is that PREALLOC assumes a 64 byte cache line and may not do
    the right thing, especially for 32B lines. While it would be easy to
    adapt, there are no known configs with those line sizes, so for now just
    compile out PREALLOC in such cases.

    Signed-off-by: Eugeniy Paltsev
    Cc: stable@vger.kernel.org #4.4+
    Signed-off-by: Vineet Gupta
    [vgupta: rewrote changelog, used asm .macro vs. "C" macro]

    Eugeniy Paltsev
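
    A structural C sketch of the three loops described above (the real
    routine is ARC assembly; granularities and names are illustrative):

    #include <stddef.h>
    #include <string.h>

    static void memset_sketch(char *p, int c, size_t n)
    {
        size_t off = 0;

        /* (a) 64 bytes per iteration, always leaving the last <= 64 bytes
         *     to the loops below. PREALLOC (allocate a cache line without
         *     fetching it) is safe here: the line is fully overwritten and
         *     still inside the buffer. */
        while (n - off > 64) {
            /* PREALLOC would sit here on 64B-line configs */
            memset(p + off, c, 64);
            off += 64;
        }

        /* (b) 32 bytes per iteration. The fix removed PREFETCHW from this
         *     loop, since the prefetched line could lie past the end of
         *     the buffer and already be owned by another core. */
        while (n - off >= 32) {
            memset(p + off, c, 32);
            off += 32;
        }

        /* (c) whatever is left. */
        memset(p + off, c, n - off);
    }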
     

01 Oct, 2016

1 commit

  • This uses a new set of annotations, viz. ENTRY_CFI/END_CFI, to enable
    CFI ops generation.

    Note that we didn't change the normal ENTRY/EXIT, as we don't actually
    want unwind info in the trap/exception/interrupt handlers which use
    these; the unwinder then gets confused (it keeps recursing vs. stopping).
    Semantically these are leaf routines and unwinding should stop when it
    hits those routines.

    Before
    ------

    28.52% 1.19% 9929 hackbench libuClibc-1.0.17.so [.] __write_nocancel
    |
    ---__write_nocancel
    |--8.95%--EV_Trap
    | --8.25%--sys_write
    | |--3.93%--sock_write_iter
    ...
    |--2.62%--memset

    Vineet Gupta
     

03 Nov, 2015

1 commit


20 Jul, 2015

2 commits


22 Jun, 2015

2 commits


26 Mar, 2014

1 commit


25 Aug, 2013

1 commit

  • For a 2-byte-aligned search buffer, strchr() was returning a pointer
    outside of the buffer (buf - 1)

    ------------->8----------------
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        // Input buffer (default 4 byte aligned)
        char *buffer = "1AA_";

        // Actual search start (to mimic 2 byte alignment)
        char *current_line = &(buffer[2]);

        // Character to search for
        char c = 'A';

        char *c_pos = strchr(current_line, c);

        printf("%s\n", c_pos);  // prints "AA_" as opposed to the expected "A_"
        return 0;
    }
    ------------->8----------------

    Reported-by: Anton Kolesov
    Debugged-by: Anton Kolesov
    Cc: # [3.9 and 3.10]
    Cc: Noam Camus
    Signed-off-by: Joern Rennecke
    Signed-off-by: Vineet Gupta
    Signed-off-by: Linus Torvalds

    Joern Rennecke
     

11 Feb, 2013

2 commits

  • Hand optimised asm code for ARC700 pipeline.
    Originally written/optimized by Joern Rennecke

    Signed-off-by: Vineet Gupta
    Cc: Joern Rennecke

    Vineet Gupta
     
  • Arnd, in his review, pointed out that the arch Kconfig organisation has
    several deficiencies:

    * Build-time entries for things which can be extracted from DT at
      runtime (e.g. SDRAM size, core clk frequency..)
    * Not multi-platform-image-build friendly (choice .. endchoice constructs)
    * CPU variant support (750/770) is exclusive.

    The first two have been fixed in subsequent patches.
    Due to the nature of the 750 and 770, it is not possible to build for
    both together without special runtime glue code, which would hurt
    performance.

    Signed-off-by: Vineet Gupta
    Cc: Arnd Bergmann
    Cc: Sam Ravnborg
    Acked-by: Sam Ravnborg

    Vineet Gupta
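
    As a hypothetical illustration of the "extract it from DT at runtime"
    alternative mentioned in the first bullet of the entry above (the node
    path and property name are generic device-tree examples, not ARC's
    actual bindings):

    #include <linux/of.h>

    static u32 core_clk_freq_from_dt(void)
    {
        struct device_node *np;
        u32 freq = 0;

        /* Read the CPU clock frequency from the device tree at runtime
         * instead of hard-coding it via a build-time Kconfig entry. */
        np = of_find_node_by_path("/cpus/cpu@0");
        if (np) {
            of_property_read_u32(np, "clock-frequency", &freq);
            of_node_put(np);
        }
        return freq;
    }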