Commit 6fde36d5ce7ba4303865d5e11601cd3094e5909b

Authored by Alexei Starovoitov
Committed by Greg Kroah-Hartman
1 parent fdd88d753d

bpf: introduce BPF_JIT_ALWAYS_ON config

[ upstream commit 290af86629b25ffd1ed6232c4e9107da031705cb ]

The BPF interpreter has been used as part of the Spectre variant 2 attack (CVE-2017-5715).

A quote from the Google Project Zero blog:
"At this point, it would normally be necessary to locate gadgets in
the host kernel code that can be used to actually leak data by reading
from an attacker-controlled location, shifting and masking the result
appropriately and then using the result of that as offset to an
attacker-controlled address for a load. But piecing gadgets together
and figuring out which ones work in a speculation context seems annoying.
So instead, we decided to use the eBPF interpreter, which is built into
the host kernel - while there is no legitimate way to invoke it from inside
a VM, the presence of the code in the host kernel's text section is sufficient
to make it usable for the attack, just like with ordinary ROP gadgets."

To make the attacker's job harder, introduce a BPF_JIT_ALWAYS_ON config
option that removes the interpreter from the kernel in favor of JIT-only mode.
So far eBPF JIT is supported by:
x64, arm64, arm32, sparc64, s390, powerpc64, mips64

The start of the JITed program is randomized and the code page is marked as read-only.
In addition, "constant blinding" can be turned on via the net.core.bpf_jit_harden sysctl.

v2->v3:
- move __bpf_prog_ret0 under ifdef (Daniel)

v1->v2:
- fix init order, test_bpf and cBPF (Daniel's feedback)
- fix offloaded bpf (Jakub's feedback)
- add 'return 0' dummy in case something can invoke prog->bpf_func
- retarget bpf tree. For bpf-next the patch would need one extra hunk.
  It will be sent when the trees are merged back to net-next

Considered doing:
  int bpf_jit_enable __read_mostly = BPF_EBPF_JIT_DEFAULT;
but it seems better to land the patch as-is and, in bpf-next, remove the
bpf_jit_enable global variable from all JITs, consolidate it in one place,
and remove this jit_init() function.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Showing 6 changed files with 50 additions and 8 deletions

... ... @@ -1342,6 +1342,13 @@
 	  Enable the bpf() system call that allows to manipulate eBPF
 	  programs and maps via file descriptors.
 
+config BPF_JIT_ALWAYS_ON
+	bool "Permanently enable BPF JIT and remove BPF interpreter"
+	depends on BPF_SYSCALL && HAVE_EBPF_JIT && BPF_JIT
+	help
+	  Enables BPF JIT and removes BPF interpreter to avoid
+	  speculative execution of BPF instructions by the interpreter
+
 config SHMEM
 	bool "Use full shmem filesystem" if EXPERT
 	default y
... ... @@ -760,6 +760,7 @@
 }
 EXPORT_SYMBOL_GPL(__bpf_call_base);
 
+#ifndef CONFIG_BPF_JIT_ALWAYS_ON
 /**
  * __bpf_prog_run - run eBPF program on a given context
  * @ctx: is the data we are operating on
... ... @@ -1310,6 +1311,14 @@
 EVAL4(PROG_NAME_LIST, 416, 448, 480, 512)
 };
 
+#else
+static unsigned int __bpf_prog_ret0(const void *ctx,
+				    const struct bpf_insn *insn)
+{
+	return 0;
+}
+#endif
+
 bool bpf_prog_array_compatible(struct bpf_array *array,
 			       const struct bpf_prog *fp)
 {
 
... ... @@ -1357,9 +1366,13 @@
  */
 struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
 {
+#ifndef CONFIG_BPF_JIT_ALWAYS_ON
 	u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
 
 	fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
+#else
+	fp->bpf_func = __bpf_prog_ret0;
+#endif
 
 	/* eBPF JITs can rewrite the program in case constant
 	 * blinding is active. However, in case of error during
... ... @@ -1368,6 +1381,12 @@
 	 * be JITed, but falls back to the interpreter.
 	 */
 	fp = bpf_int_jit_compile(fp);
+#ifdef CONFIG_BPF_JIT_ALWAYS_ON
+	if (!fp->jited) {
+		*err = -ENOTSUPP;
+		return fp;
+	}
+#endif
 	bpf_prog_lock_ro(fp);
 
 	/* The tail call compatibility check can only be done at
... ... @@ -6207,9 +6207,8 @@
 			return NULL;
 		}
 	}
-	/* We don't expect to fail. */
 	if (*err) {
-		pr_cont("FAIL to attach err=%d len=%d\n",
+		pr_cont("FAIL to prog_create err=%d len=%d\n",
 			*err, fprog.len);
 		return NULL;
 	}
... ... @@ -6233,6 +6232,10 @@
 		 * checks.
 		 */
 		fp = bpf_prog_select_runtime(fp, err);
+		if (*err) {
+			pr_cont("FAIL to select_runtime err=%d\n", *err);
+			return NULL;
+		}
 		break;
 	}
 
... ... @@ -6418,8 +6421,8 @@
 			pass_cnt++;
 			continue;
 		}
-
-		return err;
+		err_cnt++;
+		continue;
 	}
 
 	pr_cont("jited:%u ", fp->jited);
... ... @@ -1053,11 +1053,9 @@
 	 */
 		goto out_err_free;
 
-	/* We are guaranteed to never error here with cBPF to eBPF
-	 * transitions, since there's no issue with type compatibility
-	 * checks on program arrays.
-	 */
 	fp = bpf_prog_select_runtime(fp, &err);
+	if (err)
+		goto out_err_free;
 
 	kfree(old_prog);
 	return fp;
net/core/sysctl_net_core.c
... ... @@ -325,7 +325,13 @@
 		.data		= &bpf_jit_enable,
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
+#ifndef CONFIG_BPF_JIT_ALWAYS_ON
 		.proc_handler	= proc_dointvec
+#else
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= &one,
+		.extra2		= &one,
+#endif
 	},
 # ifdef CONFIG_HAVE_EBPF_JIT
 	{
... ... @@ -2642,6 +2642,15 @@
 
 core_initcall(sock_init);	/* early initcall */
 
+static int __init jit_init(void)
+{
+#ifdef CONFIG_BPF_JIT_ALWAYS_ON
+	bpf_jit_enable = 1;
+#endif
+	return 0;
+}
+pure_initcall(jit_init);
+
 #ifdef CONFIG_PROC_FS
 void socket_seq_show(struct seq_file *seq)
 {