07 Apr, 2015
1 commit
-
Conflicts:
drivers/net/ethernet/mellanox/mlx4/cmd.c
net/core/fib_rules.c
net/ipv4/fib_frontend.c
The fib_rules.c and fib_frontend.c conflicts were locking adjustments
in 'net' overlapping addition and removal of code in 'net-next'.
The mlx4 conflict was a bug fix in 'net' happening in the same
place where a constant was being replaced with a more suitable macro.
Signed-off-by: David S. Miller
04 Apr, 2015
2 commits
-
Pull block layer fix from Jens Axboe:
"Just one patch in this pull request, fixing a regression caused by a
'mathematically correct' change to lcm()"
* 'for-linus' of git://git.kernel.dk/linux-block:
block: fix blk_stack_limits() regression due to lcm() change
-
Now that resizing is completely automatic, we need to remove
the max_size setting or the test will fail.
Reported-by: Fengguang Wu
Signed-off-by: Herbert Xu
Acked-by: Thomas Graf
Signed-off-by: David S. Miller
03 Apr, 2015
1 commit
-
Conflicts:
drivers/net/usb/asix_common.c
drivers/net/usb/sr9800.c
drivers/net/usb/usbnet.c
include/linux/usb/usbnet.h
net/ipv4/tcp_ipv4.c
net/ipv6/tcp_ipv6.c
The TCP conflicts were overlapping changes. In 'net' we added a
READ_ONCE() to the socket cached RX route read, whilst in 'net-next'
Eric Dumazet touched the surrounding code dealing with how mini
sockets are handled.
With USB, it's a case of the same bug fix first going into net-next
and then I cherry picked it back into net.
Signed-off-by: David S. Miller
01 Apr, 2015
1 commit
-
This is especially important in cases where the kernel allocs a new
structure and expects a field to be set from a netlink attribute. If such
an attribute is shorter than expected, the rest of the field is left containing
previous data. When such a field is read back by user space, kernel memory
content is leaked.
Signed-off-by: Jiri Benc
Acked-by: Thomas Graf
Signed-off-by: David S. Miller
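A minimal sketch of the pattern that avoids the leak described above; the
helper and its parameters are hypothetical, only nla_data()/nla_len() and
min_t() are real kernel APIs:

#include <net/netlink.h>
#include <linux/string.h>

static void example_copy_attr(void *dest, int dest_len,
			      const struct nlattr *attr)
{
	int src_len = min_t(int, nla_len(attr), dest_len);

	memcpy(dest, nla_data(attr), src_len);
	/* Zero the tail; otherwise a short attribute leaves stale
	 * kernel memory that is leaked when the field is read back. */
	memset(dest + src_len, 0, dest_len - src_len);
}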
31 Mar, 2015
1 commit
-
Linux 3.19 commit 69c953c ("lib/lcm.c: lcm(n,0)=lcm(0,n) is 0, not n")
caused blk_stack_limits() to not properly stack queue_limits for stacked
devices (e.g. DM).
Fix this regression by establishing lcm_not_zero() and switching
blk_stack_limits() over to using it.
DM uses blk_set_stacking_limits() to establish the initial top-level
queue_limits that are then built up based on underlying devices' limits
using blk_stack_limits(). In the case of optimal_io_size (io_opt)
blk_set_stacking_limits() establishes a default value of 0. With commit
69c953c, lcm(0, n) is no longer n, which compromises proper stacking of
the underlying devices' io_opt.
Test:
$ modprobe scsi_debug dev_size_mb=10 num_tgts=1 opt_blks=1536
$ cat /sys/block/sde/queue/optimal_io_size
786432
$ dmsetup create node --table "0 100 linear /dev/sde 0"
Before this fix:
$ cat /sys/block/dm-5/queue/optimal_io_size
0
After this fix:
$ cat /sys/block/dm-5/queue/optimal_io_size
786432
Signed-off-by: Mike Snitzer
Cc: stable@vger.kernel.org # 3.19+
Acked-by: Martin K. Petersen
Signed-off-by: Jens Axboe
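For reference, a sketch of the lcm_not_zero() semantics described above
(lcm() as in include/linux/lcm.h):

#include <linux/lcm.h>

static unsigned long lcm_not_zero(unsigned long a, unsigned long b)
{
	unsigned long l = lcm(a, b);

	if (l)
		return l;
	return (b ? b : a);	/* lcm(n, 0) = lcm(0, n) = n here */
}

blk_stack_limits() can then combine limits such as io_opt with
t->io_opt = lcm_not_zero(t->io_opt, b->io_opt), so a top-level default
of 0 no longer wipes out the underlying device's value.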
26 Mar, 2015
1 commit
-
nftables sets will be converted to use so called setextensions, moving
the key to a non-fixed position. To hash it, the obj_hashfn must be used,
however it so far doesn't receive the length parameter.
Pass the key length to obj_hashfn() and convert existing users.
Signed-off-by: Patrick McHardy
Signed-off-by: Pablo Neira Ayuso
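A sketch of the resulting callback shape; the element type is
hypothetical, jhash() is the real kernel hash:

#include <linux/jhash.h>
#include <linux/rhashtable.h>

struct example_elem {
	struct rhash_head node;
	u32 key[4];	/* in nftables the key offset is not fixed */
};

static u32 example_obj_hashfn(const void *data, u32 len, u32 seed)
{
	const struct example_elem *elem = data;

	/* len now arrives from the caller, so keys of any length
	 * and position within the object can be hashed. */
	return jhash(elem->key, len, seed);
}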
25 Mar, 2015
4 commits
-
rhashtable_destroy() variant which stops rehashes, iterates over
the table and calls a callback to release resources.
Avoids the need for nft_hash to embed rhashtable internals and allows
getting rid of the being_destroyed flag. It also saves a second mutex
lock upon destruction.
Also fixes an RCU lockdep splat on nft set destruction due to
calling rht_for_each_entry_safe() without holding bucket locks.
Open code this loop as we need to know that no mutations may occur in
parallel.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller
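A usage sketch, assuming the variant is rhashtable_free_and_destroy();
the teardown helper and element freeing are ours:

#include <linux/rhashtable.h>
#include <linux/slab.h>

static void example_free_fn(void *ptr, void *arg)
{
	kfree(ptr);	/* arg carries caller context, unused here */
}

static void example_teardown(struct rhashtable *ht)
{
	/* Stops rehashes, walks every bucket and hands each element
	 * to the callback, without a second mutex acquisition. */
	rhashtable_free_and_destroy(ht, example_free_fn, NULL);
}
-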
Introduce a new bool automatic_shrinking to require the
user to explicitly opt-in to automatic shrinking of tables.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller
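A sketch of the opt-in (object layout hypothetical):

#include <linux/rhashtable.h>

struct cache_entry {
	u32			key;
	struct rhash_head	node;
};

static const struct rhashtable_params cache_params = {
	.head_offset		= offsetof(struct cache_entry, node),
	.key_offset		= offsetof(struct cache_entry, key),
	.key_len		= sizeof(u32),
	.automatic_shrinking	= true,	/* explicit opt-in */
};
-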
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller
-
This patch adds a comment on the choice of the value 16 as the
maximum chain length before we force a rehash.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
24 Mar, 2015
9 commits
-
Conflicts:
net/netfilter/nf_tables_core.c
The nf_tables_core.c conflict was resolved using a conflict resolution
from Stephen Rothwell as a guide.
Signed-off-by: David S. Miller
-
The commit 963ecbd41a1026d99ec7537c050867428c397b89 ("rhashtable:
Fix use-after-free in rhashtable_walk_stop") fixed a real bug
but created another one because we may end up sleeping inside an
RCU critical section.
This patch fixes it properly by replacing the mutex with a spin
lock that specifically protects the walker lists.
Reported-by: Sasha Levin
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
-
We need this symbol later on in ipv6.ko, thus export it via EXPORT_SYMBOL
like sha_transform already is.
Signed-off-by: Hannes Frederic Sowa
Signed-off-by: David S. Miller
-
This patch reintroduces immediate rehash during insertion. If
we find during insertion that the table is full or the chain
length exceeds a set limit (currently 16 but may be disabled
with insecure_elasticity) then we will force an immediate rehash.
The rehash will contain an expansion if the table utilisation
exceeds 75%.
If this rehash fails then the insertion will fail. Otherwise the
insertion will be reattempted in the new hash table.
Signed-off-by: Herbert Xu
Acked-by: Thomas Graf
Signed-off-by: David S. Miller
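A self-contained sketch of the decision described above; the struct and
helpers are ours, only the chain limit of 16 and the 75% threshold come
from the patch:

struct sketch_table {
	unsigned int size;	/* number of buckets */
	unsigned int nelems;	/* elements currently hashed */
};

#define SKETCH_ELASTICITY 16	/* max chain length before forced rehash */

static int sketch_needs_rehash(const struct sketch_table *tbl,
			       unsigned int chain_len)
{
	/* Chain too long or table full: rehash immediately. */
	return chain_len > SKETCH_ELASTICITY || tbl->nelems >= tbl->size;
}

static unsigned int sketch_rehash_size(const struct sketch_table *tbl)
{
	/* The rehash doubles as an expansion above 75% utilisation. */
	if (tbl->nelems > tbl->size / 4 * 3)
		return tbl->size * 2;
	return tbl->size;
}
-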
This patch adds the ability to allocate bucket table with GFP_ATOMIC
instead of GFP_KERNEL. This is needed when we perform an immediate
rehash during insertion.
Signed-off-by: Herbert Xu
Acked-by: Thomas Graf
Signed-off-by: David S. Miller
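A sketch of the call-site difference; bucket_table_alloc() is the
internal helper in lib/rhashtable.c (not a public API) and the wrapper
is ours:

static struct bucket_table *grow_for_insert(struct rhashtable *ht,
					    unsigned int size)
{
	/* The deferred worker can pass GFP_KERNEL and sleep; the
	 * insert path holds bucket spinlocks, so it cannot. */
	return bucket_table_alloc(ht, size, GFP_ATOMIC);
}
-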
This patch adds the missing bits to allow multiple rehashes. The
read-side as well as remove already handle this correctly. So it's
only the rehasher and insertion that need modification to handle
this.
Note that this patch doesn't actually enable it, so for now rehashing
is still only performed by the worker thread.
This patch also disables the explicit expand/shrink interface because
the table is meant to expand and shrink automatically, and continuing
to export these interfaces unnecessarily complicates the life of the
rehasher since the rehash process is now composed of two parts.
Signed-off-by: Herbert Xu
Acked-by: Thomas Graf
Signed-off-by: David S. Miller
-
This patch changes rhashtable_shrink to shrink to the smallest
size possible rather than halving the table. This is needed
because with multiple rehashing we will defer shrinking until
all other rehashing is done, meaning that when we do shrink
we may be able to shrink a lot.
Signed-off-by: Herbert Xu
Acked-by: Thomas Graf
Signed-off-by: David S. Miller
-
Since every current rhashtable user uses jhash as their hash
function, the fact that jhash is an inline function causes each
user to generate a copy of its code.
This patch provides a solution by allowing hashfn to be unset,
in which case rhashtable will automatically set it to jhash.
Furthermore, if the key length is a multiple of 4, we will
switch over to jhash2.
Signed-off-by: Herbert Xu
Acked-by: Thomas Graf
Signed-off-by: David S. Miller
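A sketch of the default selection (helper name ours; jhash() and
jhash2() are the real functions from linux/jhash.h):

#include <linux/jhash.h>

static u32 sketch_default_hash(const void *key, u32 key_len, u32 seed)
{
	/* jhash2() walks the key a word at a time, so it is only
	 * usable when the key length is a whole number of words. */
	if (key_len % 4 == 0)
		return jhash2(key, key_len / 4, seed);
	return jhash(key, key_len, seed);
}
-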
The walker is a lockless reader so it too needs an smp_rmb before
reading the future_tbl field in order to see any new tables that
may contain elements that we should have walked over.
Signed-off-by: Herbert Xu
Acked-by: Thomas Graf
Signed-off-by: David S. Miller
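A sketch of the reader-side ordering (helper name ours; the struct
layout is abridged to the future_tbl member):

static struct bucket_table *walker_next_table(struct rhashtable *ht,
					      struct bucket_table *tbl)
{
	/* Pairs with the writer's smp_wmb(): a reader that saw
	 * entries moved ahead must also see the new table. */
	smp_rmb();
	return rht_dereference_rcu(tbl->future_tbl, ht);
}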
21 Mar, 2015
5 commits
-
Conflicts:
drivers/net/ethernet/emulex/benet/be_main.c
net/core/sysctl_net_core.c
net/ipv4/inet_diag.c
The be_main.c conflict resolution was really tricky. The conflict
hunks generated by GIT were very unhelpful, to say the least. It
split functions in half and moved them around, when the real actual
conflict only existed solely inside of one function, that being
be_map_pci_bars().
So instead, to resolve this, I checked out be_main.c from the top
of net-next, then I applied the be_main.c changes from 'net' since
the last time I merged. And this worked beautifully.
The inet_diag.c and sysctl_net_core.c conflicts were simple
overlapping changes, and were easy to resolve.
Signed-off-by: David S. Miller
-
Now that all rhashtable users have been converted over to the
inline interface, this patch removes the unused out-of-line
interface.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
-
This patch converts test_rhashtable to the inlined rhashtable
interface.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
-
This patch deals with the complaint that we make indirect function
calls on the fast paths unnecessarily in rhashtable. We resolve
it by moving the fast paths into inline functions that take struct
rhashtable_param (which obviously must be the same set of parameters
supplied to rhashtable_init) as an argument.
The only remaining indirect call is to obj_hashfn (or key_hashfn if
obj_hashfn is unset) on the rehash as well as the insert-during-
rehash slow path.
This patch also extends the support of variable-length keys to
include those where the key is fixed but scattered in the object.
For example, in netlink we want to key off the namespace and the
portid but they're not next to each other.
This patch does this by directly using the object hash function
as the indicator of whether the key is accessible or not. It
also adds a new function obj_cmpfn to compare a key against an
object. This means that the caller no longer needs to supply
explicit compare functions.
All this is done in a backwards compatible manner so no existing
users are affected until they convert to the new interface.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
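A sketch of a converted caller (object layout hypothetical): the same
compile-time constant params are handed to rhashtable_init() and to
each fast-path call, letting the compiler specialise them and drop the
indirect calls:

#include <linux/rhashtable.h>

struct example_obj {
	u32			key;
	struct rhash_head	node;
};

static const struct rhashtable_params example_params = {
	.head_offset	= offsetof(struct example_obj, node),
	.key_offset	= offsetof(struct example_obj, key),
	.key_len	= sizeof(u32),
};

static struct example_obj *example_lookup(struct rhashtable *ht,
					  const u32 *key)
{
	return rhashtable_lookup_fast(ht, key, example_params);
}
-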
This patch marks the rhashtable_init params argument const as
there is no reason to modify it since we will always make a copy
of it in the rhashtable.
This patch also fixes a bug where we don't actually round up the
value of min_size unless it is less than HASH_MIN_SIZE.
Signed-off-by: Herbert Xu
Acked-by: Thomas Graf
Signed-off-by: David S. Miller
20 Mar, 2015
1 commit
-
Round up min_size, and round down max_size, to the next power
of two to make sure we always respect the limit specified by the
user. This is required because we compare the table size against the
limit before we expand or shrink.
Also fixes a minor bug where we modified min_size in the params
provided instead of the copy stored in struct rhashtable.
Signed-off-by: Thomas Graf
Acked-by: Herbert Xu
Signed-off-by: David S. Miller
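A sketch of the normalisation, applied to the copy stored in struct
rhashtable rather than the caller's params (the helper is ours;
HASH_MIN_SIZE mirrors the define inside lib/rhashtable.c):

#include <linux/log2.h>

#define HASH_MIN_SIZE 4U	/* as in lib/rhashtable.c */

static void sketch_clamp_limits(struct rhashtable_params *copy)
{
	/* Rounding up keeps the pre-shrink size check honest... */
	if (copy->min_size)
		copy->min_size = roundup_pow_of_two(copy->min_size);
	if (copy->min_size < HASH_MIN_SIZE)
		copy->min_size = HASH_MIN_SIZE;
	/* ...and rounding down does the same for expansion. */
	if (copy->max_size)
		copy->max_size = rounddown_pow_of_two(copy->max_size);
}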
19 Mar, 2015
4 commits
-
Now that nobody uses max_shift and min_shift, we can safely remove
them.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
-
This patch converts test_rhashtable to use rhashtable max_size
instead of the obsolete max_shift.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
-
This patch adds the parameters max_size and min_size which are
meant to replace max_shift and min_shift.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
-
Keeping both size and shift is silly. We only need one.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
17 Mar, 2015
3 commits
-
Caching the lock pointer avoids having to hash on the object
again to unlock the bucket locks.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller
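A sketch of the pattern, assuming the accessor is rht_bucket_lock()
(the unlink helper is ours):

#include <linux/spinlock.h>

static void sketch_unlink(struct bucket_table *tbl, u32 hash)
{
	spinlock_t *lock = rht_bucket_lock(tbl, hash);

	spin_lock_bh(lock);
	/* ... unlink the object from the bucket chain ... */
	spin_unlock_bh(lock);	/* cached pointer, no second hash */
}
-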
If part of the compressed data is corrupted, or the compressed
data is totally fake, memory access beyond the limit is possible.
This is the log from my system using lz4 decompression.
[6502]data abort, halting
[6503]r0 0x00000000 r1 0x00000000 r2 0xdcea0ffc r3 0xdcea0ffc
[6509]r4 0xb9ab0bfd r5 0xdcea0ffc r6 0xdcea0ff8 r7 0xdce80000
[6515]r8 0x00000000 r9 0x00000000 r10 0x00000000 r11 0xb9a98000
[6522]r12 0xdcea1000 usp 0x00000000 ulr 0x00000000 pc 0x820149bc
[6528]spsr 0x400001f3
and the memory addresses of some variables at the moment are
ref:0xdcea0ffc, op:0xdcea0ffc, oend:0xdcea1000
As you can see, COPYLENGTH is 8 bytes, so @ref and @op can access memory
beyond @oend.
Signed-off-by: JeHyeon Yeon
Reviewed-by: David Sterba
Signed-off-by: Greg Kroah-Hartman
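A plain-C sketch of the overrun (names ours; only the 8-byte
COPYLENGTH figure comes from the message above): the wild copy must be
bounded before each step, not after:

#include <string.h>

#define COPYLENGTH 8

static int sketch_wildcopy(unsigned char *op, const unsigned char *ref,
			   const unsigned char *oend, size_t len)
{
	unsigned char *cpy = op + len;

	/* Corrupted input can place cpy close to oend; without this
	 * check, op and ref step COPYLENGTH bytes past the buffer. */
	if (cpy > oend - COPYLENGTH)
		return -1;	/* caller must take a byte-exact path */
	do {
		memcpy(op, ref, COPYLENGTH);
		op += COPYLENGTH;
		ref += COPYLENGTH;
	} while (op < cpy);
	return 0;
}
-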
Fixes the following sparse warnings:
lib/rhashtable.c:767:5: warning: context imbalance in 'rhashtable_walk_start' - wrong count at exit
lib/rhashtable.c:849:6: warning: context imbalance in 'rhashtable_walk_stop' - unexpected unlock
Fixes: f2dba9c6ff0d ("rhashtable: Introduce rhashtable_walk_*")
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller
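The warnings go away once the lock transfer is annotated for sparse (a
sketch; __acquires()/__releases() are the standard annotations from
linux/compiler.h, the function bodies are abridged):

int rhashtable_walk_start(struct rhashtable_iter *iter)
	__acquires(RCU)
{
	rcu_read_lock();
	/* ... resume the walk ... */
	return 0;
}

void rhashtable_walk_stop(struct rhashtable_iter *iter)
	__releases(RCU)
{
	/* ... park the walker ... */
	rcu_read_unlock();
}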
16 Mar, 2015
2 commits
-
The commit 9d901bc05153bbf33b5da2cd6266865e531f0545 ("rhashtable:
Free bucket tables asynchronously after rehash") causes gratuitous
failures in rhashtable_remove.
The reason is that it inadvertently introduced multiple rehashing
from the perspective of readers. IOW it is now possible to see
more than two tables during a single RCU critical section.
Fortunately the other reader rhashtable_lookup already deals with
this correctly thanks to c4db8848af6af92f90462258603be844baeab44d
("rhashtable: rhashtable: Move future_tbl into struct bucket_table")
so only rhashtable_remove is broken by this change.This patch fixes this by looping over every table from the first
one to the last or until we find the element that we were trying
to delete.
Incidentally the simple test for detecting rehashing to prevent
starting another shrinking no longer works. Since it isn't needed
anyway (the work queue and the mutex serves as a natural barrier
to unnecessary rehashes) I've simply killed the test.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
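A sketch of the loop (the per-table unlink is elided;
rht_dereference_rcu() is the real accessor):

/* Per-table unlink under the bucket lock, elided here. */
static bool sketch_remove_from(struct rhashtable *ht,
			       struct bucket_table *tbl,
			       struct rhash_head *obj);

static int sketch_remove(struct rhashtable *ht, struct rhash_head *obj)
{
	struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);

	/* An element lives in exactly one table, but during a rehash
	 * it may be in any of the chained ones, so try each in turn. */
	do {
		if (sketch_remove_from(ht, tbl, obj))
			return 0;
		tbl = rht_dereference_rcu(tbl->future_tbl, ht);
	} while (tbl);

	return -ENOENT;
}
-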
The commit c4db8848af6af92f90462258603be844baeab44d ("rhashtable:
Move future_tbl into struct bucket_table") introduced a use-after-
free bug in rhashtable_walk_stop because it dereferences tbl after
dropping the RCU read lock.
This patch fixes it by moving the RCU read unlock down to the bottom
of rhashtable_walk_stop. In fact this was how I had it originally
but it got dropped while rearranging patches because this one
depended on the async freeing of bucket_table.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
15 Mar, 2015
5 commits
-
This patch moves future_tbl to open up the possibility of having
multiple rehashes on the same table.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
-
This patch adds a rehash counter to bucket_table to indicate
the last bucket that has been rehashed. This serves two purposes:
1. Any bucket that has been rehashed can never gain a new object.
2. If the rehash counter reaches the size of the table, the table
will forever remain empty.
This patch also downsizes bucket_table->size to an unsigned int
since we do not support sizes greater than 32 bits yet.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
-
There is in fact no need to wait for an RCU grace period in the
rehash function, since all insertions are guaranteed to go into
the new table through spin locks.
This patch uses call_rcu to free the old/rehashed table at our
leisure.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
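A sketch of the deferred free (struct abridged to the embedded
rcu_head; call_rcu() is the real API):

#include <linux/rcupdate.h>
#include <linux/mm.h>

static void bucket_table_free_rcu(struct rcu_head *head)
{
	kvfree(container_of(head, struct bucket_table, rcu));
}

static void retire_table(struct bucket_table *old_tbl)
{
	/* Readers still walking the old table finish their RCU
	 * critical sections before the memory is reclaimed. */
	call_rcu(&old_tbl->rcu, bucket_table_free_rcu);
}
-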
It seems that I have already made every rehash redo the random
seed even though my commit message indicated otherwise :)
Since we have already taken that step, this patch goes one step
further and moves the seed initialisation into bucket_table_alloc.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
-
We only nest one level deep; there is no need to roll our own
subclasses.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller