Commit 04e448d9a386640a79a4aa71251aa1cdd314f662

Authored by Tim Abbott
Committed by Sam Ravnborg
1 parent d0e1e09568

vmlinux.lds.h: restructure BSS linker script macros

The BSS section macros in vmlinux.lds.h currently place the .sbss
input section outside the bounds of [__bss_start, __bss_stop].  This
is wrong for every architecture other than microblaze that handles
both .sbss and __bss_start/__bss_stop: on those architectures the
.sbss input section falls within [__bss_start, __bss_stop].
Relatedly, the example code at the top of the file ends up defining
__bss_start/__bss_stop twice (once explicitly and once via the BSS
macro); I believe the right fix here is to define them in the
BSS_SECTION macro but not in the BSS macro.
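
With that change, the intended usage in an architecture's
vmlinux.lds.S looks roughly like the sketch below (based on the
example block at the top of vmlinux.lds.h; the alignment arguments
are illustrative and per-architecture):

        BSS_SECTION(0, 0, 0)    /* defines __bss_start and __bss_stop */
        _end = .;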

Another problem with the current macros is that several
architectures have an ALIGN(4), or some other small alignment, just
before __bss_stop in their linker scripts.  The BSS_SECTION macro
currently hardcodes this alignment to 4, when it should really be an
argument.  The macro also ignores its sbss_align argument; fix that
as well.
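
Concretely, with the reworked macros an invocation such as
BSS_SECTION(0, PAGE_SIZE, 4) expands to roughly the following sketch
(VMLINUX_SYMBOL() is dropped for brevity; it normally expands to the
bare symbol name):

        . = ALIGN(0);                   /* sbss_align */
        __bss_start = .;
        . = ALIGN(0);                   /* sbss_align again, from SBSS() */
        .sbss : AT(ADDR(.sbss) - LOAD_OFFSET) {
                *(.sbss)
                *(.scommon)
        }
        . = ALIGN(PAGE_SIZE);           /* bss_align */
        .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
                *(.bss.page_aligned)
                *(.dynbss)
                *(.bss)
                *(COMMON)
        }
        . = ALIGN(4);                   /* stop_align */
        __bss_stop = .;

so both .sbss and .bss land inside [__bss_start, __bss_stop], and the
alignment just before __bss_stop is whatever the caller asked for.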

mn10300 is currently the only user of any of the macros touched by
this patch.  It looks like mn10300 was actually converted to the new
BSS() macro incorrectly: the ALIGN(4) that existed prior to the
conversion was an alignment just before __bss_stop, whereas the
argument to the BSS() macro is a start alignment.  So fix this as
well.
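
For comparison, the BSS(4) invocation that the earlier conversion
introduced expands, with the pre-patch macro, to roughly:

        . = ALIGN(4);           /* the 4 ends up as a start alignment */
        .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
                __bss_start = .;
                *(.bss.page_aligned)
                *(.dynbss)
                *(.bss)
                *(COMMON)
                __bss_stop = .;
        }

i.e. there is no alignment just before __bss_stop at all;
BSS_SECTION(0, PAGE_SIZE, 4) puts the ALIGN(4) back in front of
__bss_stop.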

I'd like acks from Sam and David on this one.  Also CCing Paul, since
he has a patch from me which will need to be updated to use
BSS_SECTION(0, PAGE_SIZE, 4) once this gets merged.

Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>

Showing 2 changed files with 10 additions and 11 deletions

arch/mn10300/kernel/vmlinux.lds.S
@@ -107,7 +107,7 @@
         __init_end = .;
         /* freed after init ends here */
 
-        BSS(4)
+        BSS_SECTION(0, PAGE_SIZE, 4)
 
         _end = . ;
 
include/asm-generic/vmlinux.lds.h
@@ -30,9 +30,7 @@
  *      EXCEPTION_TABLE(...)
  *      NOTES
  *
- *      __bss_start = .;
- *      BSS_SECTION(0, 0)
- *      __bss_stop = .;
+ *      BSS_SECTION(0, 0, 0)
  *      _end = .;
  *
  *      /DISCARD/ : {
@@ -489,7 +487,8 @@
  * bss (Block Started by Symbol) - uninitialized data
  * zeroed during startup
  */
-#define SBSS \
+#define SBSS(sbss_align) \
+        . = ALIGN(sbss_align); \
         .sbss : AT(ADDR(.sbss) - LOAD_OFFSET) { \
                 *(.sbss) \
                 *(.scommon) \
@@ -498,12 +497,10 @@
 #define BSS(bss_align) \
         . = ALIGN(bss_align); \
         .bss : AT(ADDR(.bss) - LOAD_OFFSET) { \
-                VMLINUX_SYMBOL(__bss_start) = .; \
                 *(.bss.page_aligned) \
                 *(.dynbss) \
                 *(.bss) \
                 *(COMMON) \
-                VMLINUX_SYMBOL(__bss_stop) = .; \
         }
 
 /*
@@ -735,8 +732,11 @@
                 INIT_RAM_FS \
         }
 
-#define BSS_SECTION(sbss_align, bss_align) \
-        SBSS \
+#define BSS_SECTION(sbss_align, bss_align, stop_align) \
+        . = ALIGN(sbss_align); \
+        VMLINUX_SYMBOL(__bss_start) = .; \
+        SBSS(sbss_align) \
         BSS(bss_align) \
-        . = ALIGN(4);
+        . = ALIGN(stop_align); \
+        VMLINUX_SYMBOL(__bss_stop) = .;