# TSAN suppression / options file for zenserver
#
# Usage:
# TSAN_OPTIONS="detect_deadlocks=0 suppressions=$(pwd)/tsan.supp" ./zenserver
#
# NOTE: detect_deadlocks=0 is required because the GC's LockState() acquires shared
# lock scopes on every named cache bucket (m_IndexLock) and every oplog
# (GcReferenceLocker) simultaneously. With enough buckets/projects/oplogs this
# easily exceeds TSAN's hard per-thread limit of 128 simultaneously-held locks
# (all_locks_with_contexts_[128] in sanitizer_deadlock_detector.h:67), causing a
# CHECK abort. This is a known TSAN limitation, not a real deadlock risk.
# The long-term fix is to replace the N per-bucket shared-lock pattern in
# ZenCacheStore::LockState / ProjectStore::LockState with a single coarser
# "GC epoch" RwLock at the disk-layer / project-store level.
#
# EASTL's hashtable uses a single global sentinel bucket array,
# gpEmptyBucketArray[2], shared by every empty hash table (mnBucketCount == 1).
# DoFreeNodes unconditionally writes NULL to each bucket slot, including this
# shared global, so multiple threads concurrently destroying empty EASTL
# hash_maps all write NULL to gpEmptyBucketArray[0]. TSAN reports this as a
# data race, but it is benign: the slot is always NULL, and writing NULL to it
# has no observable effect. The suppression below silences that report.
race:eastl::hashtable*DoFreeNodes*
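#
# To confirm this file is being read and the pattern above actually matches,
# the standard sanitizer common flag print_suppressions=1 can be added to the
# invocation (a sketch of the usage line above; matched suppressions are
# listed when the process exits):
#
# TSAN_OPTIONS="detect_deadlocks=0 suppressions=$(pwd)/tsan.supp print_suppressions=1" ./zenserver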