author     Richard Sandiford <richard.sandiford@arm.com>  2015-11-17 18:55:55 +0000
committer  Richard Sandiford <rsandifo@gcc.gnu.org>       2015-11-17 18:55:55 +0000
commit     70439f0d61b811fa5b9a77fcdf40c6353daa8f75 (patch)
tree       9b4ddd183b2f4dd515f06826aaa396589cf86c98 /gcc/internal-fn.h
parent     10766209ec09ef42deb8cb877f1893a8a03f2a97 (diff)
Vectorize internal functions
This patch tries to vectorize built-in and internal functions as
internal functions first, falling back on the current built-in
target hooks otherwise.
This means that we'll automatically pick up vector versions of optabs
without the target having to implement any special hooks. E.g. we'll
use V4SF sqrt if the target defines a "sqrtv4sf2" optab. As well as
being simpler, this gives the target-independent code a better idea of
what the vectorized function does.
Tested on x86_64-linux-gnu, aarch64-linux-gnu, arm-linux-gnu and
powerpc64-linux-gnu.
gcc/
* internal-fn.h (direct_internal_fn_info): Add vectorizable flag.
* internal-fn.c (direct_internal_fn_array): Update accordingly.
* tree-vectorizer.h (vectorizable_function): Delete.
* tree-vect-stmts.c: Include internal-fn.h.
(vectorizable_internal_function): New function.
(vectorizable_function): Inline into...
(vectorizable_call): ...here. Explicitly reject calls that read
from or write to memory. Try using an internal function before
falling back on the old vectorizable_function behavior.
From-SVN: r230492
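The selection order the patch introduces in vectorizable_call (try a vectorizable internal function whose optab exists, otherwise fall back on the target's builtin hook) can be sketched in simplified, stand-alone C. The predicates and names below are illustrative stand-ins, not actual GCC code:

```c
#include <stdbool.h>

typedef enum { FN_NONE, FN_INTERNAL, FN_BUILTIN } vector_fn_kind;

/* Stand-in predicates; in GCC these would consult the vectorizable flag
   in direct_internal_fn_info plus optab lookup, and the target's
   builtin_vectorized_function hook respectively.  */
static bool internal_fn_usable_p(bool vectorizable, bool has_optab)
{
  return vectorizable && has_optab;
}

static bool target_builtin_usable_p(bool hook_provides)
{
  return hook_provides;
}

/* Mirror the lookup order: internal function first, builtin fallback.  */
vector_fn_kind choose_vector_fn(bool vectorizable, bool has_optab,
                                bool hook_provides)
{
  if (internal_fn_usable_p(vectorizable, has_optab))
    return FN_INTERNAL;
  if (target_builtin_usable_p(hook_provides))
    return FN_BUILTIN;
  return FN_NONE;
}
```

The point of this ordering is that an internal function carries its semantics in target-independent code, whereas a target builtin is opaque to the vectorizer.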
Diffstat (limited to 'gcc/internal-fn.h')
-rw-r--r--  gcc/internal-fn.h | 8 ++++++++
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/gcc/internal-fn.h b/gcc/internal-fn.h
index 6cb123f248d..aea6abdaf7e 100644
--- a/gcc/internal-fn.h
+++ b/gcc/internal-fn.h
@@ -134,6 +134,14 @@ struct direct_internal_fn_info
      function isn't directly mapped to an optab.  */
   signed int type0 : 8;
   signed int type1 : 8;
+  /* True if the function is pointwise, so that it can be vectorized by
+     converting the return type and all argument types to vectors of the
+     same number of elements.  E.g. we can vectorize an IFN_SQRT on
+     floats as an IFN_SQRT on vectors of N floats.
+
+     This only needs 1 bit, but occupies the full 16 to ensure a nice
+     layout.  */
+  unsigned int vectorizable : 16;
 };
 
 extern const direct_internal_fn_info direct_internal_fn_array[IFN_LAST + 1];