Commit cbb9bf65ae25dee772e85589136e7dd1c3e743ae
Exists in master and in 39 other branches:
8mp-imx_5.4.70_2.3.0, 8qm-imx_5.4.70_2.3.0, emb_imx_lf-5.15.y, emb_lf-6.1.y, imx_3.0.35_4.1.0, imx_3.10.17_1.0.1_ga, imx_3.10.53_1.1.0_ga, imx_3.14.28_1.0.0_ga, imx_4.1.15_1.0.0_ga, pitx_8mp_lf-5.10.y, rt-smarc-imx_4.1.15_1.0.0_ga, rt_linux_5.15.71, smarc-8m-android-11.0.0_2.0.0, smarc-imx6_4.14.98_2.0.0_ga, smarc-imx6_4.9.88_2.0.0_ga, smarc-imx7_4.14.98_2.0.0_ga, smarc-imx7_4.9.11_1.0.0_ga, smarc-imx7_4.9.88_2.0.0_ga, smarc-imx_3.10.53_1.1.0_ga, smarc-imx_3.14.28_1.0.0_ga, smarc-imx_4.1.15_1.0.0_ga, smarc-imx_4.9.11_1.0.0_ga, smarc-imx_4.9.51_imx8m_ga, smarc-imx_4.9.88_2.0.0_ga, smarc-m6.0.1_2.1.0-ga, smarc-n7.1.2_2.0.0-ga, smarc-rel_imx_4.1.15_1.2.0_ga, smarc_8m_00d0_imx_4.14.98_2.0.0_ga, smarc_8m_imx_4.14.78_1.0.0_ga, smarc_8m_imx_4.14.98_2.0.0_ga, smarc_8m_imx_4.19.35_1.1.0, smarc_8mm_imx_4.14.78_1.0.0_ga, smarc_8mm_imx_4.14.98_2.0.0_ga, smarc_8mm_imx_4.19.35_1.1.0, smarc_8mm_imx_5.4.24_2.1.0, smarc_8mp_lf-5.10.y, smarc_8mq_imx_5.4.24_2.1.0, smarc_8mq_lf-5.10.y, smarc_imx_lf-5.15.y
crypto: hash - Fix handling of unaligned buffers
The correct way to calculate the start of the aligned part of an
unaligned buffer is:

  offset = ALIGN(offset, alignmask + 1);

However, crypto_hash_walk_done() has:

  offset += alignmask - 1;
  offset = ALIGN(offset, alignmask + 1);

which actually skips a whole block unless offset % (alignmask + 1) == 1.

This patch fixes the problem.
Signed-off-by: Szilveszter Ördög <slipszi@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
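To see the off-by-a-block behaviour concretely, here is a minimal
user-space sketch (not part of the patch; the alignmask value of 3,
i.e. 4-byte alignment, is chosen purely for illustration). It replays
both calculations with the same round-up semantics as the kernel's
ALIGN() macro:

  #include <stdio.h>

  /* Round x up to the next multiple of a (a must be a power of two);
   * same semantics as the kernel's ALIGN() macro. */
  #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

  int main(void)
  {
          unsigned int alignmask = 3; /* 4-byte alignment */
          unsigned int offset;

          /* Only unaligned offsets reach the realignment code. */
          for (offset = 1; offset <= alignmask; offset++) {
                  unsigned int correct = ALIGN(offset, alignmask + 1);
                  unsigned int buggy = ALIGN(offset + alignmask - 1,
                                             alignmask + 1);

                  printf("offset=%u correct=%u old code=%u%s\n",
                         offset, correct, buggy,
                         buggy != correct ? "  <- skips a block" : "");
          }
          return 0;
  }

For offset == 3 the old calculation lands on 8 instead of 4, skipping
the entire aligned block that starts at offset 4.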
Showing 1 changed file with 0 additions and 1 deletion
Diff

@@ -78,7 +78,6 @@
 	walk->data -= walk->offset;
 
 	if (nbytes && walk->offset & alignmask && !err) {
-		walk->offset += alignmask - 1;
 		walk->offset = ALIGN(walk->offset, alignmask + 1);
 		walk->data += walk->offset;
 
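The one-line deletion is the whole fix: the kernel's ALIGN(x, a)
expands to ((x) + (a) - 1) & ~((a) - 1), so it already adds alignmask
before masking. Pre-adding alignmask - 1 therefore counted the
round-up slack twice and could push the offset past the next aligned
boundary, skipping a block of data.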