author    Philipp Tomsich <philipp.tomsich@theobroma-systems.com>  2014-02-12 01:05:48 +0100
committer Christoph Muellner <christoph.muellner@theobroma-systems.com>  2019-08-30 20:10:52 +0200
commit    3e2cf30505c8fae00e38333eb7fa9310062a81c7 (patch)
tree      413b626c60f4e044f96a0db55a8906f42e4d660d
parent    971fcb84716ed0544b9ac3a8f2c59822f9a053f2 (diff)
aarch64: Correct the maximum shift amount for shifted operands.
The aarch64 ISA specification allows a left shift amount in the range of 0 to 4
(encoded in the imm3 field) to be applied after extension.  This is true for at
least the following instructions:

 * ADD (extended register)
 * ADDS (extended register)
 * SUB (extended register)

The effect of this patch can be seen when compiling the following code:

    uint64_t myadd(uint64_t a, uint64_t b)
    {
      return a + (((uint8_t)b) << 4);
    }

Without the patch, the following sequence is generated:

    0000000000000000 <myadd>:
       0:	d37c1c21	ubfiz	x1, x1, #4, #8
       4:	8b000020	add	x0, x1, x0
       8:	d65f03c0	ret

With the patch, the ubfiz is merged into the add instruction:

    0000000000000000 <myadd>:
       0:	8b211000	add	x0, x0, w1, uxtb #4
       4:	d65f03c0	ret

Signed-off-by: Christoph Muellner <christoph.muellner@theobroma-systems.com>
 gcc/config/aarch64/aarch64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 062e1f33f130..c5815b0168e0 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -7748,7 +7748,7 @@ aarch64_output_casesi (rtx *operands)
int
aarch64_uxt_size (int shift, HOST_WIDE_INT mask)
{
- if (shift >= 0 && shift <= 3)
+ if (shift >= 0 && shift <= 4)
{
int size;
for (size = 8; size <= 32; size *= 2)
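
For context, the loop truncated in the hunk above matches the mask against an
8/16/32-bit zero-extension pattern shifted left by the given amount.  A minimal
standalone sketch of that logic, with GCC's HOST_WIDE_INT replaced by int64_t
(an assumption for illustration; the real function lives in
gcc/config/aarch64/aarch64.c):

```c
#include <stdint.h>

/* Return the extension size (8, 16 or 32 bits) if MASK selects a
   zero-extended value of that width shifted left by SHIFT, else 0.
   With the patch, shift amounts up to 4 are accepted, matching the
   0..4 range encodable in the imm3 field.  */
static int
uxt_size (int shift, int64_t mask)
{
  if (shift >= 0 && shift <= 4)
    for (int size = 8; size <= 32; size *= 2)
      {
        int64_t bits = ((int64_t) 1 << size) - 1;
        if (mask == (bits << shift))
          return size;
      }
  return 0;
}
```

For the myadd example above, the uint8_t operand shifted left by 4 produces the
mask 0xFF0, so uxt_size (4, 0xFF0) reports an 8-bit extension; before the patch
the shift-range check rejected 4 and the combination was never formed.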