author     Philipp Tomsich <philipp.tomsich@theobroma-systems.com>   2014-02-12 01:05:48 +0100
committer  Christoph Muellner <christoph.muellner@theobroma-systems.com>   2018-08-08 16:21:49 +0200
commit     271e2811e59c0c77fc022fa86a7030f20b4cac8e (patch)
tree       b4223dc481690be43dcf14d1c0c0e8eee9afa6dc
parent     d735f3ae4712f66362326d179b4d7e9332c79677 (diff)
aarch64: Correct the maximum shift amount for shifted operands.
The aarch64 ISA specification allows a left shift amount to be applied
after extension in the range of 0 to 4 (encoded in the imm3 field).
This is true for at least the following instructions:
* ADD (extended register)
* ADDS (extended register)
* SUB (extended register)
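The extended-register operand semantics can be modeled in a few lines of C (the helper name is hypothetical and not part of GCC or the patch): the source register is first extended, here UXTB, i.e. the low byte zero-extended, and then shifted left by the imm3 amount, which the ISA restricts to 0 through 4.

```c
#include <stdint.h>

/* Hypothetical model of the extended-register operand "Wm, uxtb #imm3":
   zero-extend the low byte of the register, then shift left by imm3.
   The ISA only encodes shift amounts 0..4 in the imm3 field. */
static uint64_t uxtb_shifted(uint64_t wm, unsigned imm3)
{
    if (imm3 > 4)        /* shifts above 4 are not encodable */
        return 0;        /* sentinel for "invalid encoding" in this model */
    return (uint64_t)(uint8_t)wm << imm3;
}
```

For example, with wm = 0x1ff and imm3 = 4 only the low byte 0xff survives the extension, giving 0xff0 after the shift.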
The result of this patch can be seen when compiling the following code:
uint64_t myadd(uint64_t a, uint64_t b)
{
  return a + (((uint8_t)b) << 4);
}
Without the patch, the following sequence is generated:
0000000000000000 <myadd>:
0: d37c1c21 ubfiz x1, x1, #4, #8
4: 8b000020 add x0, x1, x0
8: d65f03c0 ret
With the patch, the ubfiz is merged into the add instruction:
0000000000000000 <myadd>:
0: 8b211000 add x0, x0, w1, uxtb #4
4: d65f03c0 ret
Signed-off-by: Christoph Muellner <christoph.muellner@theobroma-systems.com>
-rw-r--r--  gcc/config/aarch64/aarch64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 67a12e70399d..a14519500331 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -7723,7 +7723,7 @@ aarch64_output_casesi (rtx *operands)
 int
 aarch64_uxt_size (int shift, HOST_WIDE_INT mask)
 {
-  if (shift >= 0 && shift <= 3)
+  if (shift >= 0 && shift <= 4)
     {
       int size;
       for (size = 8; size <= 32; size *= 2)
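The hunk above only shows the head of aarch64_uxt_size. A self-contained sketch of its logic (the loop body and the `bits` computation are reconstructed from context and are an assumption, not the verbatim GCC source) shows why raising the bound matters: the function reports the extension width only when the mask is an all-ones pattern of that width shifted by `shift`, so a shift of 4 was previously rejected outright.

```c
#include <stdint.h>

typedef int64_t HOST_WIDE_INT;   /* stand-in for GCC's HOST_WIDE_INT */

/* Sketch of aarch64_uxt_size after the patch (loop body reconstructed,
   not verbatim GCC source): return the extension width (8, 16 or 32)
   if `mask` is exactly an all-ones mask of that width shifted left by
   `shift`, else 0. */
static int uxt_size_sketch(int shift, HOST_WIDE_INT mask)
{
    if (shift >= 0 && shift <= 4)        /* patch: was `shift <= 3` */
    {
        for (int size = 8; size <= 32; size *= 2)
        {
            HOST_WIDE_INT bits = ((HOST_WIDE_INT)1 << size) - 1;
            if (mask == (bits << shift))
                return size;
        }
    }
    return 0;
}
```

With shift = 4 and mask = 0xff0, the pattern produced by `(uint8_t)b << 4` in the example, the sketch now returns 8, so the zero-extension can be folded into the add's `uxtb #4` operand instead of emitting a separate ubfiz.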