Commit 4d6ffa2

MaskRay authored and suryasaimadhu committed
x86/lib: Change .weak to SYM_FUNC_START_WEAK for arch/x86/lib/mem*_64.S
Commit 393f203 ("x86_64: kasan: add interceptors for memset/memmove/memcpy
functions") added .weak directives to arch/x86/lib/mem*_64.S instead of
changing the existing ENTRY macros to WEAK. This can lead to the assembly
snippet

    .weak memcpy
    ...
    .globl memcpy

which will produce a STB_WEAK memcpy with GNU as but STB_GLOBAL memcpy with
LLVM's integrated assembler before LLVM 12. LLVM 12 (since
https://reviews.llvm.org/D90108) will error on such an overridden symbol
binding.

Commit ef1e031 ("x86/asm: Make some functions local") changed ENTRY in
arch/x86/lib/memcpy_64.S to SYM_FUNC_START_LOCAL, which was ineffective due
to the preceding .weak directive.

Use the appropriate SYM_FUNC_START_WEAK instead.

Fixes: 393f203 ("x86_64: kasan: add interceptors for memset/memmove/memcpy functions")
Fixes: ef1e031 ("x86/asm: Make some functions local")
Reported-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Fangrui Song <maskray@google.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20201103012358.168682-1-maskray@google.com
1 parent 3cea11c commit 4d6ffa2

3 files changed: 3 additions & 9 deletions

arch/x86/lib/memcpy_64.S

Lines changed: 1 addition & 3 deletions
@@ -16,8 +16,6 @@
  * to a jmp to memcpy_erms which does the REP; MOVSB mem copy.
  */

-.weak memcpy
-
 /*
  * memcpy - Copy a memory block.
  *
@@ -30,7 +28,7 @@
  * rax original destination
  */
 SYM_FUNC_START_ALIAS(__memcpy)
-SYM_FUNC_START_LOCAL(memcpy)
+SYM_FUNC_START_WEAK(memcpy)
 	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
 	"jmp memcpy_erms", X86_FEATURE_ERMS


arch/x86/lib/memmove_64.S

Lines changed: 1 addition & 3 deletions
@@ -24,9 +24,7 @@
  * Output:
  * rax: dest
  */
-.weak memmove
-
-SYM_FUNC_START_ALIAS(memmove)
+SYM_FUNC_START_WEAK(memmove)
 SYM_FUNC_START(__memmove)

 	mov %rdi, %rax

arch/x86/lib/memset_64.S

Lines changed: 1 addition & 3 deletions
@@ -6,8 +6,6 @@
 #include <asm/alternative-asm.h>
 #include <asm/export.h>

-.weak memset
-
 /*
  * ISO C memset - set a memory block to a byte value. This function uses fast
  * string to get better performance than the original function. The code is
@@ -19,7 +17,7 @@
  *
  * rax original destination
  */
-SYM_FUNC_START_ALIAS(memset)
+SYM_FUNC_START_WEAK(memset)
 SYM_FUNC_START(__memset)
 	/*
 	 * Some CPUs support enhanced REP MOVSB/STOSB feature. It is recommended
