author     Kostya Serebryany <kcc@google.com>    2017-12-18 21:40:07 +0000
committer  Kostya Serebryany <kcc@google.com>    2017-12-18 21:40:07 +0000
commit     351e0cae49c2c24aaf47aa6c00ef43b264e5b547 (patch)
tree       4e039e77467a0c2ac33a41123251541d2da5af36 /docs
parent     002b83ad4242620b7b334d377cf955e542177b4d (diff)
[hwasan] update the design doc
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@321027 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'docs')
-rw-r--r--  docs/HardwareAssistedAddressSanitizerDesign.rst  34
1 file changed, 25 insertions(+), 9 deletions(-)
diff --git a/docs/HardwareAssistedAddressSanitizerDesign.rst b/docs/HardwareAssistedAddressSanitizerDesign.rst
index 00777ce882..5904cceaea 100644
--- a/docs/HardwareAssistedAddressSanitizerDesign.rst
+++ b/docs/HardwareAssistedAddressSanitizerDesign.rst
@@ -21,7 +21,7 @@ The redzones, the quarantine, and, to a lesser extent, the shadow, are the
sources of AddressSanitizer's memory overhead.
See the `AddressSanitizer paper`_ for details.
-AArch64 has the `Address Tagging`_, a hardware feature that allows
+AArch64 has the `Address Tagging`_ (or top-byte-ignore, TBI), a hardware feature that allows
software to use the 8 most significant bits of a 64-bit pointer as
a tag. HWASAN uses `Address Tagging`_
to implement a memory safety tool, similar to :doc:`AddressSanitizer`,
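As a rough illustration of how TBI lets software carry a tag in a pointer (a sketch, not part of this commit; the helper names are hypothetical):

.. code-block:: c

   #include <stdint.h>

   /* With TBI enabled, the hardware ignores bits 63:56 of a pointer on
    * loads and stores, so software can keep an 8-bit tag there.
    * These helpers are illustrative only, not part of HWASAN. */
   static inline void *tbi_set_tag(void *p, uint8_t tag) {
     uintptr_t addr = (uintptr_t)p & ~((uintptr_t)0xff << 56); /* clear old tag */
     return (void *)(addr | ((uintptr_t)tag << 56));           /* install new tag */
   }

   static inline uint8_t tbi_get_tag(const void *p) {
     return (uint8_t)((uintptr_t)p >> 56);                     /* bits 63:56 */
   }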
@@ -31,7 +31,7 @@ accuracy guarantees.
Algorithm
=========
* Every heap/stack/global memory object is forcibly aligned by `N` bytes
- (`N` is e.g. 16 or 64)
+ (`N` is e.g. 16 or 64). We call `N` the **granularity** of tagging.
* For every such object a random `K`-bit tag `T` is chosen (`K` is e.g. 4 or 8)
* The pointer to the object is tagged with `T`.
* The memory for the object is also tagged with `T`
@@ -44,19 +44,35 @@ Instrumentation
Memory Accesses
---------------
-All memory accesses are prefixed with a call to a run-time function.
-The function encodes the type and the size of access in its name;
-it receives the address as a parameter, e.g. `__hwasan_load4(void *ptr)`;
-it loads the memory tag, compares it with the
-pointer tag, and executes `__builtin_trap` (or calls `__hwasan_error_load4(void *ptr)`) on mismatch.
+All memory accesses are prefixed with an inline instruction sequence that
+verifies the tags. Currently, the following sequence is used:
-It's possible to inline this callback too.
+
+.. code-block:: asm
+
+ // int foo(int *a) { return *a; }
+ // clang -O2 --target=aarch64-linux -fsanitize=hwaddress -c load.c
+ foo:
+ 0: 08 dc 44 d3 ubfx x8, x0, #4, #52 // shadow address
+ 4: 08 01 40 39 ldrb w8, [x8] // load shadow
+ 8: 09 fc 78 d3 lsr x9, x0, #56 // address tag
+ c: 3f 01 08 6b cmp w9, w8 // compare tags
+ 10: 61 00 00 54 b.ne #12 // jump on mismatch
+ 14: 00 00 40 b9 ldr w0, [x0] // original load
+ 18: c0 03 5f d6 ret
+ 1c: 40 20 40 d4 hlt #0x102 // halt
+ 20: 00 00 40 b9 ldr w0, [x0] // original load
+ 24: c0 03 5f d6 ret
+
+
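Read as C, the sequence above performs roughly the following check. This is a sketch only: the 4-bit shift corresponds to a 16-byte granularity, and the shadow is assumed to start at address 0, matching the `ubfx`/`ldrb` pair in the listing.

.. code-block:: c

   #include <stdint.h>

   /* C rendering of the inline check above (sketch). */
   static inline void check_access(const void *ptr) {
     uintptr_t p = (uintptr_t)ptr;
     uintptr_t untagged    = p & 0x00ffffffffffffffull;          /* drop the tag byte */
     uint8_t   memory_tag  = *(const uint8_t *)(untagged >> 4);  /* ubfx + ldrb: shadow byte */
     uint8_t   pointer_tag = (uint8_t)(p >> 56);                 /* lsr #56 */
     if (pointer_tag != memory_tag)                              /* cmp + b.ne */
       __builtin_trap();                                         /* hlt #0x102 */
   }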
+Alternatively, memory accesses are prefixed with a function call.
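A sketch of what the call-based form looks like at the source level; only `__hwasan_load4` is named in the text above, the surrounding code is illustrative:

.. code-block:: c

   /* The compiler inserts a runtime call before the 4-byte load; the
    * callee loads the shadow byte, compares it with the pointer tag,
    * and reports on mismatch (see the removed paragraph above). */
   void __hwasan_load4(void *ptr);     /* provided by the HWASAN runtime */

   int foo(int *a) {
     __hwasan_load4(a);                /* tag check for a 4-byte access */
     return *a;                        /* original load */
   }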
Heap
----
Tagging the heap memory/pointers is done by `malloc`.
This can be based on any malloc that forces all objects to be N-aligned.
+`free` tags the memory with a different tag.
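Conceptually, that amounts to something like the sketch below: `malloc` picks a random tag, stores it in the shadow for every granule of the allocation, and returns a pointer carrying the same tag; `free` re-tags the memory so stale pointers stop matching. The granularity of 16, the toy shadow, and all helper names are assumptions for illustration, not the real runtime.

.. code-block:: c

   #include <stdint.h>
   #include <stdlib.h>

   enum { kGranularity = 16 };                        /* assumed tagging granularity */

   static uint8_t toy_shadow[1 << 20];                /* toy shadow array, illustration only */
   static uint8_t *shadow_for(uintptr_t a) { return &toy_shadow[(a / kGranularity) % (1 << 20)]; }
   static uint8_t  random_tag(void)        { return (uint8_t)rand(); }

   static void tag_memory(uintptr_t p, size_t size, uint8_t tag) {
     for (size_t off = 0; off < size; off += kGranularity)
       *shadow_for(p + off) = tag;                    /* one shadow byte per granule */
   }

   void *tagged_malloc(size_t size) {
     size_t rounded = (size + kGranularity - 1) & ~(size_t)(kGranularity - 1);
     void *p = aligned_alloc(kGranularity, rounded);  /* force N-alignment */
     uint8_t tag = random_tag();
     tag_memory((uintptr_t)p, rounded, tag);          /* tag the memory ...            */
     return (void *)((uintptr_t)p |                   /* ... and the pointer; only     */
                     ((uintptr_t)tag << 56));         /* dereferenceable under TBI     */
   }

   void tagged_free(void *p, size_t size) {           /* real allocators know the size */
     uintptr_t untagged = (uintptr_t)p & 0x00ffffffffffffffull;
     tag_memory(untagged, size, random_tag());        /* new tag: catches use-after-free */
     free((void *)untagged);
   }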
Stack
-----
@@ -75,7 +91,7 @@ TODO: details.
Error reporting
---------------
-Errors are generated by `__builtin_trap` and are handled by a signal handler.
+Errors are generated by the `HLT` instruction and are handled by a signal handler.
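A minimal sketch of hooking such reports with a signal handler; which signal the `HLT` instruction raises (`SIGILL` vs `SIGTRAP`) and how the faulting access is decoded are platform details assumed here, not taken from the document:

.. code-block:: c

   #define _POSIX_C_SOURCE 200809L
   #include <signal.h>
   #include <unistd.h>

   /* Sketch only: register for both candidate signals and print a short
    * note.  A real runtime decodes the HLT immediate and the registers
    * from the ucontext and prints a full report; it must also stay
    * async-signal-safe, hence write() rather than printf(). */
   static void on_tag_mismatch(int sig, siginfo_t *info, void *uctx) {
     (void)sig; (void)info; (void)uctx;
     static const char msg[] = "HWASAN: tag mismatch\n";
     write(STDERR_FILENO, msg, sizeof(msg) - 1);
     _exit(1);
   }

   static void install_report_handler(void) {
     struct sigaction sa;
     sa.sa_sigaction = on_tag_mismatch;
     sa.sa_flags = SA_SIGINFO;
     sigemptyset(&sa.sa_mask);
     sigaction(SIGILL, &sa, 0);
     sigaction(SIGTRAP, &sa, 0);
   }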
Attribute
---------