path: root/lib/scudo/scudo_tsd.h
author     Kostya Kortchinsky <kostyak@google.com>  2018-06-11 14:50:31 +0000
committer  Kostya Kortchinsky <kostyak@google.com>  2018-06-11 14:50:31 +0000
commit     14601e5de1d9b20da589f2d9047745dad52d56b8
tree       8040ee01358a7a87034767f06db6f2495a9cb2d0  /lib/scudo/scudo_tsd.h
parent     722b6f3f67b805ca27f01db149c32ffebb90b398
[scudo] Improve the scalability of the shared TSD model
Summary:
The shared TSD model in its current form doesn't scale. Here is an example of
rpc2-benchmark (with default parameters, which is threading heavy) on a 72-core
machine (defaulting to a `CompactSizeClassMap` and no Quarantine):
- with tcmalloc: 337K reqs/sec, peak RSS of 338MB;
- with scudo (exclusive): 321K reqs/sec, peak RSS of 637MB;
- with scudo (shared): 241K reqs/sec, peak RSS of 324MB.

This isn't great: the exclusive model uses a lot of memory, while the shared
model doesn't even come close to being competitive. This is mostly due to the
fact that we consistently scan the TSD pool starting at index 0 for an
available TSD, which can result in a lot of failed lock attempts, and touches
memory that need not be touched.

This CL attempts to make things better in most situations:
- first, use a thread-local variable on Linux (instead of the pthread APIs) to
  store the current TSD in the shared model;
- move the locking boolean out of the TSD: this allows the compiler to keep it
  in a register and potentially optimize out a branch instead of reading it
  from the TSD every time (we also save a tiny bit of memory per TSD);
- 64-bit atomic operations on 32-bit ARM platforms happen to be expensive, so
  store the `Precedence` in a `uptr` instead of a `u64`: we lose some
  nanoseconds of precision and will wrap around at some point, but the benefit
  is worth it;
- change a `CHECK` to a `DCHECK`: this should never happen, but if something
  is ever terribly wrong, we'll crash on a near-null AV if the TSD happens to
  be null;
- based on an idea by dvyukov@, implement a bounded random scan for an
  available TSD (sketched after this message). This requires computing the
  coprimes of the number of TSDs, and attempting to lock up to 4 TSDs in a
  random order before falling back to the current one. This is obviously
  slightly more expensive when we have just 2 TSDs (barely noticeably so) but
  is otherwise beneficial. The `Precedence` still basically corresponds to the
  moment of the first contention on a TSD. To seed the random choice, we use
  the precedence of the current TSD, since it is very likely to be non-zero
  (we are in the slow path after a failed `tryLock`).

With those modifications, the benchmark yields:
- with scudo (shared): 330K reqs/sec, peak RSS of 327MB.

So for this specific situation the shared model not only becomes competitive
but outperforms the exclusive model. I experimented with values greater than 4
for the number of TSDs to attempt to lock, and it yielded a decrease in QPS;
just sticking with the current TSD is also a tad slower. Numbers on platforms
with fewer cores (e.g. Android) remain similar.

Reviewers: alekseyshl, dvyukov, javed.absar

Reviewed By: alekseyshl, dvyukov

Subscribers: srhines, kristof.beyls, delcypher, llvm-commits, #sanitizers

Differential Revision: https://reviews.llvm.org/D47289

git-svn-id: https://llvm.org/svn/llvm-project/compiler-rt/trunk@334410 91177308-0d34-0410-b5e6-96231b3b80d8
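For illustration, here is a minimal, self-contained sketch of that bounded
random scan in portable C++. This is not the scudo implementation: the names
(`TSD`, `computeCoprimes`, `getTSDSlow`), the use of `std::mutex` in place of
`StaticSpinMutex`, and the simplified seeding are stand-ins for the real code
in scudo_tsd_shared.cpp.

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// Illustrative stand-in for ScudoTSD; the real struct uses a StaticSpinMutex
// and sanitizer atomics (see the diff below).
struct TSD {
  std::mutex Mutex;
  bool tryLock() { return Mutex.try_lock(); }
  void lock() { Mutex.lock(); }
};

// Numbers in [1, N] that are coprime with N. Stepping through the pool by
// such an increment (mod N) visits every slot exactly once per cycle.
static std::vector<uint32_t> computeCoprimes(uint32_t N) {
  std::vector<uint32_t> Coprimes;
  for (uint32_t I = 1; I <= N; I++) {
    uint32_t A = I, B = N;
    while (B != 0) {  // Euclid's algorithm for gcd(I, N)
      const uint32_t T = A % B;
      A = B;
      B = T;
    }
    if (A == 1)
      Coprimes.push_back(I);
  }
  return Coprimes;
}

// Slow path, entered after tryLock() failed on Current: probe up to 4
// distinct TSDs in a pseudo-random order, then block on Current. Seed would
// be Current's Precedence, which is very likely non-zero at this point.
static TSD *getTSDSlow(TSD *Pool, uint32_t N,
                       const std::vector<uint32_t> &Coprimes, TSD *Current,
                       uintptr_t Seed) {
  uint32_t Index = static_cast<uint32_t>(Seed % N);
  const uint32_t Inc = Coprimes[Seed % Coprimes.size()];
  for (uint32_t I = 0; I < 4 && I < N; I++) {
    TSD *Candidate = &Pool[Index];
    if (Candidate != Current && Candidate->tryLock())
      return Candidate;
    Index = (Index + Inc) % N;
  }
  Current->lock();  // everything was contended: wait on the original TSD
  return Current;
}
```

The increment is the design point worth noting: because it is coprime with
the pool size, the probe sequence Index, Index+Inc, Index+2*Inc, ... (mod N)
never revisits a slot, so the attempts always hit distinct TSDs.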
Diffstat (limited to 'lib/scudo/scudo_tsd.h')
-rw-r--r--  lib/scudo/scudo_tsd.h  22
1 file changed, 8 insertions, 14 deletions
diff --git a/lib/scudo/scudo_tsd.h b/lib/scudo/scudo_tsd.h
index 80464b5ea..5977cb5bc 100644
--- a/lib/scudo/scudo_tsd.h
+++ b/lib/scudo/scudo_tsd.h
@@ -23,11 +23,11 @@
namespace __scudo {
-struct ALIGNED(64) ScudoTSD {
+struct ALIGNED(SANITIZER_CACHE_LINE_SIZE) ScudoTSD {
AllocatorCache Cache;
uptr QuarantineCachePlaceHolder[4];
- void init(bool Shared);
+ void init();
void commitBack();
INLINE bool tryLock() {
@@ -36,29 +36,23 @@ struct ALIGNED(64) ScudoTSD {
return true;
}
if (atomic_load_relaxed(&Precedence) == 0)
- atomic_store_relaxed(&Precedence, MonotonicNanoTime());
+ atomic_store_relaxed(&Precedence, static_cast<uptr>(
+ MonotonicNanoTime() >> FIRST_32_SECOND_64(16, 0)));
return false;
}
INLINE void lock() {
- Mutex.Lock();
atomic_store_relaxed(&Precedence, 0);
+ Mutex.Lock();
}
- INLINE void unlock() {
- if (!UnlockRequired)
- return;
- Mutex.Unlock();
- }
+ INLINE void unlock() { Mutex.Unlock(); }
- INLINE u64 getPrecedence() {
- return atomic_load_relaxed(&Precedence);
- }
+ INLINE uptr getPrecedence() { return atomic_load_relaxed(&Precedence); }
private:
- bool UnlockRequired;
StaticSpinMutex Mutex;
- atomic_uint64_t Precedence;
+ atomic_uintptr_t Precedence;
};
void initThread(bool MinimalInit);
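Two side notes on the diff above. First, `FIRST_32_SECOND_64(16, 0)` selects
its first argument on 32-bit targets and its second on 64-bit ones, so the
nanosecond timestamp is shifted right by 16 only where `uptr` is 32 bits:
`Precedence` then has a granularity of about 65µs, still plenty to order
first-contention times. Second, for the "thread-local variable instead of the
pthread APIs" item in the summary, here is a minimal sketch of the idea,
assuming a Linux target with ELF TLS; the identifiers are illustrative, and
the real shared-model code in scudo_tsd_shared.h differs in detail.

```cpp
#include <pthread.h>

struct ScudoTSD;  // defined in scudo_tsd.h; only pointers are needed here

#if defined(__linux__) && !defined(__ANDROID__)
// ELF TLS: reading the current TSD is a register-relative load, with no
// function call and none of the pthread_getspecific() overhead.
static thread_local ScudoTSD *CurrentTSD = nullptr;
static ScudoTSD *getCurrentTSD() { return CurrentTSD; }
static void setCurrentTSD(ScudoTSD *TSD) { CurrentTSD = TSD; }
#else
// Fallback: pthread-specific data. TSDKey is assumed to have been set up
// with pthread_key_create() during allocator initialization.
static pthread_key_t TSDKey;
static ScudoTSD *getCurrentTSD() {
  return static_cast<ScudoTSD *>(pthread_getspecific(TSDKey));
}
static void setCurrentTSD(ScudoTSD *TSD) { pthread_setspecific(TSDKey, TSD); }
#endif
```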