23 Apr, 2020

1 commit


10 Mar, 2020

1 commit

  • The current codebase makes use of the zero-length array language
    extension to the C90 standard, but the preferred mechanism to declare
    variable-length types such as these is a flexible array member[1][2],
    introduced in C99:

    struct foo {
            int stuff;
            struct boo array[];
    };

    By making use of the mechanism above, we will get a compiler warning
    in case the flexible array does not occur last in the structure,
    which will help us prevent some kinds of undefined behavior bugs
    from being inadvertently introduced[3] into the codebase from now on.

    Also, notice that dynamic memory allocations won't be affected by
    this change:

    "Flexible array members have incomplete type, and so the sizeof operator
    may not be applied. As a quirk of the original implementation of
    zero-length arrays, sizeof evaluates to zero."[1]

    This issue was found with the help of Coccinelle.

    [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
    [2] https://github.com/KSPP/linux/issues/21
    [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

    Signed-off-by: Gustavo A. R. Silva
    Link: https://lore.kernel.org/r/20200309202327.GA8813@embeddedor
    Signed-off-by: Kees Cook

    Gustavo A. R. Silva
     

09 Jan, 2020

1 commit

  • In my attempt to fix a memory leak, I introduced a double-free in the
    pstore error path. Instead of trying to manage the allocation lifetime
    between persistent_ram_new() and its callers, adjust the logic so
    persistent_ram_new() always takes a kstrdup() copy, and leaves the
    caller's allocation lifetime up to the caller. Therefore callers are
    _always_ responsible for freeing their label. Before, it only needed
    freeing when the prz itself failed to allocate, and not in any of the
    other prz failure cases, which callers had no visibility into. That
    is the root design problem that led to both the leak and now the
    double-free bug.

    Reported-by: Cengiz Can
    Link: https://lore.kernel.org/lkml/d4ec59002ede4aaf9928c7f7526da87c@kernel.wtf
    Fixes: 8df955a32a73 ("pstore/ram: Fix error-path memory leak in persistent_ram_new() callers")
    Cc: stable@vger.kernel.org
    Signed-off-by: Kees Cook

    Kees Cook
     

05 Jun, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this software is licensed under the terms of the gnu general public
    license version 2 as published by the free software foundation and
    may be copied distributed and modified under those terms this
    program is distributed in the hope that it will be useful but
    without any warranty without even the implied warranty of
    merchantability or fitness for a particular purpose see the gnu
    general public license for more details

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 285 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Alexios Zavras
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190529141900.642774971@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

04 Jan, 2019

1 commit

  • Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument
    of the user address range verification function since we got rid of the
    old racy i386-only code to walk page tables by hand.

    It existed because the original 80386 would not honor the write protect
    bit when in kernel mode, so you had to do COW by hand before doing any
    user access. But we haven't supported that in a long time, and these
    days the 'type' argument is a purely historical artifact.

    A discussion about extending 'user_access_begin()' to do the range
    checking resulted in this patch, because there is no way we're going
    to move the old VERIFY_xyz interface to that model. And it's best
    done at the end of the merge window when I've done most of my
    merges, so let's just get this done once and for all.

    This patch was mostly done with a sed-script, with manual fix-ups for
    the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.

    There were a couple of notable cases:

    - csky still had the old "verify_area()" name as an alias.

    - the iter_iov code had magical hardcoded knowledge of the actual
    values of VERIFY_{READ,WRITE} (not that they mattered, since nothing
    really used it)

    - microblaze used the type argument for a debug printout

    but other than those oddities this should be a total no-op patch.

    I tried to fix up all architectures, did fairly extensive grepping for
    access_ok() uses, and the changes are trivial, but I may have missed
    something. Any missed conversion should be trivially fixable, though.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

04 Dec, 2018

6 commits

  • The ramoops backend currently calls persistent_ram_save_old() even
    if a buffer is empty. While this appears to work, it does not seem
    like the right thing to do and could lead to future bugs, so let's
    avoid that. It also prevents misleading prints in the logs which
    claim the buffer is valid.

    I got something like:

    found existing buffer, size 0, start 0

    When I was expecting:

    no valid data in buffer (sig = ...)

    This bails out early (and reports with pr_debug()), since it's an
    acceptable state.

    Signed-off-by: Joel Fernandes (Google)
    Co-developed-by: Kees Cook
    Signed-off-by: Kees Cook

    Joel Fernandes (Google)
     
  • This improves and updates some comments:
    - dump handler comment out of sync from calling convention
    - fix kern-doc typo

    and improves status output:
    - reminder that only kernel crash dumps are compressed
    - do not be silent about ECC infrastructure failures

    Signed-off-by: Kees Cook

    Kees Cook
     
  • The struct persistent_ram_zone wasn't well documented. This adds kern-doc
    for it.

    Signed-off-by: Kees Cook

    Kees Cook
     
  • In order to more easily perform automated regression testing, this
    adds pr_debug() calls to report each prz allocation which can then be
    verified against persistent storage. Specifically, seeing the
    dividing line between header, data, and any ECC bytes. (And the
    general assignment output is updated to remove the bogus ECC
    blocksize, which isn't actually recorded outside the prz instance.)

    Signed-off-by: Kees Cook

    Kees Cook
     
  • With both ram.c and ram_core.c built into ramoops.ko, it doesn't make
    sense to have differing pr_fmt prefixes. This fixes ram_core.c to use
    the module name (as ram.c already does). Additionally, this improves
    the region-reservation error message to include the region name.

    Signed-off-by: Kees Cook

    Kees Cook
     
  • When initializing a prz, if invalid data is found (no
    PERSISTENT_RAM_SIG), the function call path looks like this:

    ramoops_init_prz ->
      persistent_ram_new -> persistent_ram_post_init -> persistent_ram_zap
      persistent_ram_zap

    As we can see, persistent_ram_zap() is called twice.
    We can avoid this by adding an option to persistent_ram_new(), and
    only call persistent_ram_zap() when it is needed.

    Signed-off-by: Peng Wang
    [kees: minor tweak to exit path and commit log]
    Signed-off-by: Kees Cook

    Peng Wang
     

24 Oct, 2018

1 commit


22 Oct, 2018

1 commit

  • When ramoops reserved a memory region in the kernel, it had an unhelpful
    label of "persistent_memory". When reading /proc/iomem, it would be
    repeated many times, did not hint that it was ramoops in particular,
    and didn't clarify very much about what each was used for:

    400000000-407ffffff : Persistent Memory (legacy)
    400000000-400000fff : persistent_memory
    400001000-400001fff : persistent_memory
    ...
    4000ff000-4000fffff : persistent_memory

    Instead, this adds meaningful labels for how the various regions are
    being used:

    400000000-407ffffff : Persistent Memory (legacy)
    400000000-400000fff : ramoops:dump(0/252)
    400001000-400001fff : ramoops:dump(1/252)
    ...
    4000fc000-4000fcfff : ramoops:dump(252/252)
    4000fd000-4000fdfff : ramoops:console
    4000fe000-4000fe3ff : ramoops:ftrace(0/3)
    4000fe400-4000fe7ff : ramoops:ftrace(1/3)
    4000fe800-4000febff : ramoops:ftrace(2/3)
    4000fec00-4000fefff : ramoops:ftrace(3/3)
    4000ff000-4000fffff : ramoops:pmsg

    Signed-off-by: Kees Cook
    Reviewed-by: Joel Fernandes (Google)
    Tested-by: Sai Prakash Ranjan
    Tested-by: Guenter Roeck

    Kees Cook
     

14 Sep, 2018

1 commit

  • persistent_ram_vmap() returns the page start vaddr.
    persistent_ram_iomap() supports non-page-aligned mapping.

    persistent_ram_buffer_map() always adds offset-in-page to the vaddr
    returned from these two functions, which causes incorrect mapping of
    non-page-aligned persistent ram buffer.

    By default ftrace_size is 4096 and max_ftrace_cnt is nr_cpu_ids. Without
    this patch, the zone_sz in ramoops_init_przs() is 4096/nr_cpu_ids which
    might not be page aligned. If the offset-in-page > 2048, the vaddr will be
    in next page. If the next page is not mapped, it will cause kernel panic:

    [ 0.074231] BUG: unable to handle kernel paging request at ffffa19e0081b000
    ...
    [ 0.075000] RIP: 0010:persistent_ram_new+0x1f8/0x39f
    ...
    [ 0.075000] Call Trace:
    [ 0.075000] ramoops_init_przs.part.10.constprop.15+0x105/0x260
    [ 0.075000] ramoops_probe+0x232/0x3a0
    [ 0.075000] platform_drv_probe+0x3e/0xa0
    [ 0.075000] driver_probe_device+0x2cd/0x400
    [ 0.075000] __driver_attach+0xe4/0x110
    [ 0.075000] ? driver_probe_device+0x400/0x400
    [ 0.075000] bus_for_each_dev+0x70/0xa0
    [ 0.075000] driver_attach+0x1e/0x20
    [ 0.075000] bus_add_driver+0x159/0x230
    [ 0.075000] ? do_early_param+0x95/0x95
    [ 0.075000] driver_register+0x70/0xc0
    [ 0.075000] ? init_pstore_fs+0x4d/0x4d
    [ 0.075000] __platform_driver_register+0x36/0x40
    [ 0.075000] ramoops_init+0x12f/0x131
    [ 0.075000] do_one_initcall+0x4d/0x12c
    [ 0.075000] ? do_early_param+0x95/0x95
    [ 0.075000] kernel_init_freeable+0x19b/0x222
    [ 0.075000] ? rest_init+0xbb/0xbb
    [ 0.075000] kernel_init+0xe/0xfc
    [ 0.075000] ret_from_fork+0x3a/0x50

    Signed-off-by: Bin Yang
    [kees: add comments describing the mapping differences, updated commit log]
    Fixes: 24c3d2f342ed ("staging: android: persistent_ram: Make it possible to use memory outside of bootmem")
    Cc: stable@vger.kernel.org
    Signed-off-by: Kees Cook

    Bin Yang
     

08 Mar, 2018

1 commit


08 Mar, 2017

1 commit

  • The per-prz spinlock should be using the dynamic initializer so that
    lockdep can correctly track it. Without this, under lockdep, we get a
    warning at boot that the lock is in non-static memory.

    Fixes: 109704492ef6 ("pstore: Make spinlock per zone instead of global")
    Fixes: 76d5692a5803 ("pstore: Correctly initialize spinlock and flags")
    Signed-off-by: Kees Cook
    Cc: stable@vger.kernel.org

    Kees Cook
     

14 Feb, 2017

1 commit

  • The ram backend wasn't always initializing its spinlock correctly.
    Since it was coming from kzalloc memory, though, it was harmless on
    architectures that initialize unlocked spinlocks to 0 (at least x86
    and ARM). This also fixes a possibly ignored flag setting.

    When running under CONFIG_DEBUG_SPINLOCK, the following Oops was visible:

    [ 0.760836] persistent_ram: found existing buffer, size 29988, start 29988
    [ 0.765112] persistent_ram: found existing buffer, size 30105, start 30105
    [ 0.769435] persistent_ram: found existing buffer, size 118542, start 118542
    [ 0.785960] persistent_ram: found existing buffer, size 0, start 0
    [ 0.786098] persistent_ram: found existing buffer, size 0, start 0
    [ 0.786131] pstore: using zlib compression
    [ 0.790716] BUG: spinlock bad magic on CPU#0, swapper/0/1
    [ 0.790729] lock: 0xffffffc0d1ca9bb0, .magic: 00000000, .owner: /-1, .owner_cpu: 0
    [ 0.790742] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.10.0-rc2+ #913
    [ 0.790747] Hardware name: Google Kevin (DT)
    [ 0.790750] Call trace:
    [ 0.790768] [] dump_backtrace+0x0/0x2bc
    [ 0.790780] [] show_stack+0x20/0x28
    [ 0.790794] [] dump_stack+0xa4/0xcc
    [ 0.790809] [] spin_dump+0xe0/0xf0
    [ 0.790821] [] spin_bug+0x30/0x3c
    [ 0.790834] [] do_raw_spin_lock+0x50/0x1b8
    [ 0.790846] [] _raw_spin_lock_irqsave+0x54/0x6c
    [ 0.790862] [] buffer_size_add+0x48/0xcc
    [ 0.790875] [] persistent_ram_write+0x60/0x11c
    [ 0.790888] [] ramoops_pstore_write_buf+0xd4/0x2a4
    [ 0.790900] [] pstore_console_write+0xf0/0x134
    [ 0.790912] [] console_unlock+0x48c/0x5e8
    [ 0.790923] [] register_console+0x3b0/0x4d4
    [ 0.790935] [] pstore_register+0x1a8/0x234
    [ 0.790947] [] ramoops_probe+0x6b8/0x7d4
    [ 0.790961] [] platform_drv_probe+0x7c/0xd0
    [ 0.790972] [] driver_probe_device+0x1b4/0x3bc
    [ 0.790982] [] __device_attach_driver+0xc8/0xf4
    [ 0.790996] [] bus_for_each_drv+0xb4/0xe4
    [ 0.791006] [] __device_attach+0xd0/0x158
    [ 0.791016] [] device_initial_probe+0x24/0x30
    [ 0.791026] [] bus_probe_device+0x50/0xe4
    [ 0.791038] [] device_add+0x3a4/0x76c
    [ 0.791051] [] of_device_add+0x74/0x84
    [ 0.791062] [] of_platform_device_create_pdata+0xc0/0x100
    [ 0.791073] [] of_platform_device_create+0x34/0x40
    [ 0.791086] [] of_platform_default_populate_init+0x58/0x78
    [ 0.791097] [] do_one_initcall+0x88/0x160
    [ 0.791109] [] kernel_init_freeable+0x264/0x31c
    [ 0.791123] [] kernel_init+0x18/0x11c
    [ 0.791133] [] ret_from_fork+0x10/0x50
    [ 0.793717] console [pstore-1] enabled
    [ 0.797845] pstore: Registered ramoops as persistent store backend
    [ 0.804647] ramoops: attached 0x100000@0xf7edc000, ecc: 0/0

    Fixes: 663deb47880f ("pstore: Allow prz to control need for locking")
    Fixes: 109704492ef6 ("pstore: Make spinlock per zone instead of global")
    Reported-by: Brian Norris
    Signed-off-by: Kees Cook

    Kees Cook
     

16 Nov, 2016

1 commit

  • In preparation for not locking at all for certain buffers, depending
    on whether there's contention, make locking optional based on the
    initialization of the prz.

    Signed-off-by: Joel Fernandes
    [kees: moved locking flag into prz instead of via caller arguments]
    Signed-off-by: Kees Cook

    Joel Fernandes
     

12 Nov, 2016

1 commit

  • Currently pstore has a global spinlock for all zones. Since the zones
    are independent and modify different areas of memory, there's no need
    to have a global lock, so we should use a per-zone lock as introduced
    here. Also, a FTRACE_PER_CPU flag introduced later for ramoops's
    ftrace use-case will split the ftrace memory area into a single
    zone per CPU, eliminating the need for locking. In preparation for
    this, make the locking optional.

    Signed-off-by: Joel Fernandes
    [kees: updated commit message]
    Signed-off-by: Kees Cook

    Joel Fernandes
     

09 Sep, 2016

4 commits

  • The ramoops buffer may be mapped as either I/O memory or uncached
    memory. On ARM64, this results in a device-type (strongly-ordered)
    mapping. Since unaligned accesses to device-type memory will
    generate an alignment fault (regardless of whether or not strict
    alignment checking is enabled), it is not safe to use memcpy().
    memcpy_fromio() is guaranteed to only use aligned accesses, so use
    that instead.

    Signed-off-by: Andrew Bresticker
    Signed-off-by: Enric Balletbo Serra
    Reviewed-by: Puneet Kumar
    Signed-off-by: Kees Cook
    Cc: stable@vger.kernel.org

    Andrew Bresticker
     
  • persistent_ram_update() uses vmap or iomap based on whether the
    buffer is in a memory region or a reserved region. However, both map
    it as non-cacheable memory. For armv8 specifically, non-cacheable
    mapping requests use a memory type that has to be accessed aligned
    to the request size, and memcpy() doesn't guarantee that.

    Signed-off-by: Furquan Shaikh
    Signed-off-by: Enric Balletbo Serra
    Reviewed-by: Aaron Durbin
    Reviewed-by: Olof Johansson
    Tested-by: Furquan Shaikh
    Signed-off-by: Kees Cook
    Cc: stable@vger.kernel.org

    Furquan Shaikh
     
  • Removing a bounce-buffer copy operation in the pmsg driver path is
    always better. We also gain in overall performance by not requesting
    a vmalloc on every write, as this can cause precious RT tasks, such
    as user-facing media operations, to stall while memory is being
    reclaimed. This adds a write_buf_user to the pstore functions, a
    backup platform write_buf_user that uses the small buffer that is
    part of the instance, and a ramoops write_buf_user implementation
    that only supports PSTORE_TYPE_PMSG.

    Signed-off-by: Mark Salyzyn
    Signed-off-by: Kees Cook

    Mark Salyzyn
     
  • I have an FPGA behind PCIe here which exports SRAM that I use for
    pstore. Now it seems that the FPGA no longer supports cmpxchg-based
    updates and writes back 0xff…ff and returns the same. This leads to
    a crash during a crash, rendering pstore useless. Since I doubt that
    there is much benefit from using cmpxchg() here, I am dropping this
    atomic access and using the spinlock-based version.

    Cc: Anton Vorontsov
    Cc: Colin Cross
    Cc: Kees Cook
    Cc: Tony Luck
    Cc: Rabin Vincent
    Tested-by: Rabin Vincent
    Signed-off-by: Sebastian Andrzej Siewior
    Reviewed-by: Guenter Roeck
    [kees: remove "_locked" suffix since it's the only option now]
    Signed-off-by: Kees Cook
    Cc: stable@vger.kernel.org

    Sebastian Andrzej Siewior
     

12 Dec, 2014

2 commits

  • On some ARMs the memory can be mapped pgprot_noncached() and still
    work for atomic operations. As pointed out by Colin Cross, in some
    cases you do want to use pgprot_noncached() if the SoC supports it,
    to see a debug printk just before a write hangs the system.

    On ARMs, the atomic operations on strongly ordered memory are
    implementation defined. So let's provide an optional kernel parameter
    for configuring pgprot_noncached(), and use pgprot_writecombine() by
    default.

    Cc: Arnd Bergmann
    Cc: Rob Herring
    Cc: Randy Dunlap
    Cc: Anton Vorontsov
    Cc: Colin Cross
    Cc: Olof Johansson
    Cc: Russell King
    Cc: stable@vger.kernel.org
    Acked-by: Kees Cook
    Signed-off-by: Tony Lindgren
    Signed-off-by: Tony Luck

    Tony Lindgren
     
  • Currently trying to use pstore on at least ARMs can hang as we're
    mapping the persistent RAM with pgprot_noncached().

    On ARMs, pgprot_noncached() will actually make the memory strongly
    ordered, and as the atomic operations pstore uses are implementation
    defined for strongly ordered memory, they may not work. So basically
    atomic operations have undefined behavior on ARM for device or strongly
    ordered memory types.

    Let's fix the issue by using write-combine variants for mappings. This
    corresponds to normal, non-cacheable memory on ARM. For many other
    architectures, this change does not change the mapping type as by
    default we have:

    #define pgprot_writecombine pgprot_noncached

    The reason why pgprot_noncached() was originally used for pstore
    is because Colin Cross had observed lost debug prints right before
    a device-hanging write operation on some systems. For the platforms
    supporting pgprot_noncached(), we can add an optional configuration
    option to support that. But let's get pstore working first before
    adding new features.

    Cc: Arnd Bergmann
    Cc: Anton Vorontsov
    Cc: Colin Cross
    Cc: Olof Johansson
    Cc: linux-kernel@vger.kernel.org
    Cc: stable@vger.kernel.org
    Acked-by: Kees Cook
    Signed-off-by: Rob Herring
    [tony@atomide.com: updated description]
    Signed-off-by: Tony Lindgren
    Signed-off-by: Tony Luck

    Rob Herring
     

09 Aug, 2014

1 commit


07 Jun, 2014

1 commit

  • - Define pr_fmt in platform.c and ram_core.c for a global prefix.

    - Coalesce format fragments.

    - Separate format/arguments on lines > 80 characters.

    Note: Some pr_foo() calls were initially declared without a prefix,
    and therefore this could break existing log analyzers.

    [akpm@linux-foundation.org: missed a couple of prefix removals]
    Signed-off-by: Fabian Frederick
    Cc: Joe Perches
    Cc: Anton Vorontsov
    Cc: Colin Cross
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     

18 Mar, 2014

1 commit


15 Jun, 2013

1 commit

  • For persistent RAM outside of main memory, the memory may have
    limitations on supported accesses. For internal RAM on the highbank
    platform, exclusive accesses are not supported and will hang the
    system, so atomic_cmpxchg cannot be used. This commit uses spinlock
    protection for buffer size and start updates on ioremapped regions
    instead.

    Signed-off-by: Rob Herring
    Acked-by: Anton Vorontsov
    Signed-off-by: Tony Luck

    Rob Herring
     

04 Apr, 2013

3 commits


04 Jan, 2013

1 commit

  • CONFIG_HOTPLUG is going away as an option. As a result, the __dev*
    markings need to be removed.

    This change removes the use of __devinit from the pstore filesystem.

    Based on patches originally written by Bill Pemberton, but redone by me
    in order to handle some of the coding style issues better, by hand.

    Cc: Bill Pemberton
    Cc: Anton Vorontsov
    Cc: Colin Cross
    Cc: Kees Cook
    Cc: Tony Luck
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

18 Jul, 2012

4 commits

  • Decoding the binary trace w/ a different kernel might be troublesome
    since we convert addresses to symbols. For kernels with minimal changes,
    the mappings would probably match, but it's not guaranteed at all.
    (But still we could convert the addresses by hand, since we do print
    raw addresses.)

    If we use modules, the symbols could be loaded at different addresses
    from the previously booted kernel, and so this would also fail, but
    there's nothing we can do about it.

    Also, the binary data format that pstore/ram is using in its ringbuffer
    may change between the kernels, so here we too must ensure that we're
    running the same kernel.

    So, there are two questions really:

    1. How to compute the unique kernel tag;
    2. Where to store it.

    In this patch we're using LINUX_VERSION_CODE, just as hibernation
    (suspend-to-disk) does. This way we are protecting from the kernel
    version mismatch, making sure that we're running the same kernel
    version and patch level. We could use CRC of a symbol table (as
    suggested by Tony Luck), but for now let's not be that strict.

    And as for storing, we are using a small trick here. Instead of
    allocating a dedicated buffer for the tag (i.e. another prz), or
    hacking ram_core routines to "reserve" some control data in the
    buffer, we are just encoding the tag into the buffer signature
    (and XOR'ing it with the actual signature value, so that buffers
    not needing a tag can just pass zero, which will result into the
    plain old PRZ signature).

    Suggested-by: Steven Rostedt
    Suggested-by: Tony Luck
    Suggested-by: Colin Cross
    Signed-off-by: Anton Vorontsov
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     
  • Nowadays we can use prz->ecc_size as a flag, no need for the special
    member in the prz struct.

    Signed-off-by: Anton Vorontsov
    Acked-by: Kees Cook
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     
  • This is now pretty straightforward: instead of using bool, just pass
    an integer. For backwards compatibility ramoops.ecc=1 means 16 bytes
    ECC (using 1 byte for ECC isn't much of use anyway).

    Suggested-by: Arve Hjønnevåg
    Signed-off-by: Anton Vorontsov
    Acked-by: Kees Cook
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     
  • The struct members were never used anywhere outside of
    persistent_ram_init_ecc(), so there's actually no need for them
    to be in the struct.

    If we ever want to make polynomial or symbol size configurable,
    it would make more sense to just pass initialized rs_decoder
    to the persistent_ram init functions.

    Signed-off-by: Anton Vorontsov
    Acked-by: Kees Cook
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     

21 Jun, 2012

3 commits

  • - Instead of exploiting unsigned overflows (which don't work for all
    sizes), use straightforward checking that the ECC total size does
    not exceed the initial buffer size;

    - Printing overflowed buffer_size is not informative. Instead, print
    ecc_size and buffer_size;

    - No need for buffer_size argument in persistent_ram_init_ecc(),
    we can address prz->buffer_size directly.

    Signed-off-by: Anton Vorontsov
    Acked-by: Kees Cook
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     
  • We will implement variable-sized ECC buffers soon, which makes the
    post_init routine much more likely to fail, so we'd better check for
    its errors.

    To make error handling simple, modify persistent_ram_free() to be
    safe at all times.

    Signed-off-by: Anton Vorontsov
    Acked-by: Kees Cook
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     
  • Registering the platform driver before module_init allows us to log oopses
    that happen during device probing.

    This requires changing module_init to postcore_initcall, and
    switching from platform_driver_probe to platform_driver_register,
    because the platform device is not yet registered when the platform
    driver is registered; and since we now use driver_register, we can
    no longer use create_bundle() (it would try to register the same
    driver once again), so we switch to platform_device_register_data().

    Also, some __init -> __devinit changes were needed.

    Overall, the registration logic is now much clearer, since we have only
    one driver registration point, and just an optional dummy device, which
    is created from the module parameters.

    Suggested-by: Colin Cross
    Signed-off-by: Anton Vorontsov
    Acked-by: Kees Cook
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov