kejiewei / thread-sanitizer

Automatically exported from code.google.com/p/thread-sanitizer

tsan fails to find a partially racy unaligned access #17


GoogleCodeExporter commented 9 years ago
Test added to the LLVM repo: tsan/lit_tests/aligned_vs_unaligned_race.cc

#include <pthread.h>
#include <stdint.h>

uint64_t Global[2];

void *Thread1(void *x) {
  Global[1]++;  // aligned 8-byte access to Global[1]
  return NULL;
}

void *Thread2(void *x) {
  char *p1 = reinterpret_cast<char *>(&Global[0]);
  uint64_t *p4 = reinterpret_cast<uint64_t *>(p1 + 1);
  (*p4)++;  // unaligned 8-byte access: bytes 1..8 overlap Global[1]'s first byte
  return NULL;
}

int main() {
  pthread_t t[2];
  pthread_create(&t[0], NULL, Thread1, NULL);
  pthread_create(&t[1], NULL, Thread2, NULL);
  pthread_join(t[0], NULL);
  pthread_join(t[1], NULL);
  return 0;
}

tsan currently does not find this race. The unaligned 8-byte access straddles
two adjacent 8-byte words, and I don't see an easy way to check both without
sacrificing performance in the common, aligned case.
This bug is just for the record; I don't think we should do anything
about it, at least in the short term.

Both Helgrind and tsan v1 find this race, but the code that makes this
possible slows down the common path.

Similar issue in asan: 
https://code.google.com/p/address-sanitizer/issues/detail?id=100

Original issue reported on code.google.com by konstant...@gmail.com on 26 Mar 2013 at 8:31

GoogleCodeExporter commented 9 years ago
This should be caught by the compiler or by UBSan as undefined behaviour; clang's -fsanitize=alignment check flags exactly this kind of misaligned dereference.

Original comment by dvyu...@google.com on 26 Mar 2013 at 8:39

GoogleCodeExporter commented 9 years ago
Now we provide:
  uint16_t __sanitizer_unaligned_load16(const void *p);
  uint32_t __sanitizer_unaligned_load32(const void *p);
  uint64_t __sanitizer_unaligned_load64(const void *p);
  void __sanitizer_unaligned_store16(void *p, uint16_t x);
  void __sanitizer_unaligned_store32(void *p, uint32_t x);
  void __sanitizer_unaligned_store64(void *p, uint64_t x);
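
For illustration, here is how the racy unaligned increment from the test above could be routed through this interface (a sketch, assuming the usual compiler-rt header sanitizer/common_interface_defs.h, which declares these functions; the rest of the test harness is unchanged):

#include <sanitizer/common_interface_defs.h>
#include <stdint.h>

uint64_t Global[2];

void *Thread2(void *x) {
  char *p1 = reinterpret_cast<char *>(&Global[0]);
  // Instead of a raw unaligned (*p4)++, go through the sanitizer
  // interface so the runtime can check every byte the access touches.
  uint64_t v = __sanitizer_unaligned_load64(p1 + 1);
  __sanitizer_unaligned_store64(p1 + 1, v + 1);
  return NULL;
}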

I think it's the best we can do.
Closing?

Original comment by dvyu...@google.com on 26 Dec 2013 at 2:18

GoogleCodeExporter commented 9 years ago
Agreed

Original comment by konstant...@gmail.com on 26 Dec 2013 at 2:21

GoogleCodeExporter commented 9 years ago
I see that asan now handles unaligned accesses as follows:

  // Instrument a 1-, 2-, 4-, 8-, or 16- byte access with one check
  // if the data is properly aligned.
  if ((TypeSize == 8 || TypeSize == 16 || TypeSize == 32 || TypeSize == 64 ||
       TypeSize == 128) &&
      (Alignment >= Granularity || Alignment == 0 || Alignment >= TypeSize / 8))
    return instrumentAddress(I, I, Addr, TypeSize, IsWrite, nullptr, UseCalls);
  // Instrument unusual size or unusual alignment.
  // We can not do it with a single check, so we do 1-byte check for the first
  // and the last bytes. We call __asan_report_*_n(addr, real_size) to be able
  // to report the actual access size.

We should do the same in tsan. GCC already supports this: it finds the race in
test/tsan/aligned_vs_unaligned_race.cc, which is marked as a false negative in
the LLVM tests.

If a memory access is unaligned, we can emit calls to
__sanitizer_unaligned_load/store. If the access has an unusual size, we can
emit calls to __tsan_read/write_range.
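
As a rough sketch of what the range instrumentation could look like for an odd-sized access (the extern prototypes mirror tsan's runtime entry points from compiler-rt's internal tsan_interface.h, with sizes shown as unsigned long as on typical 64-bit targets; the 12-byte struct and the helper are hypothetical):

#include <string.h>

// tsan runtime entry points, normally emitted by the compiler pass;
// prototypes follow compiler-rt's internal tsan_interface.h.
extern "C" void __tsan_read_range(void *addr, unsigned long size);
extern "C" void __tsan_write_range(void *addr, unsigned long size);

struct Odd { char bytes[12]; };  // 12 bytes: not a power-of-two access size

void CopyOdd(Odd *dst, Odd *src) {
  // A single fixed-size shadow check cannot cover 12 bytes, so the
  // access is reported to the runtime as a range instead.
  __tsan_read_range(src, sizeof(Odd));
  __tsan_write_range(dst, sizeof(Odd));
  memcpy(dst, src, sizeof(Odd));
}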

Original comment by dvyu...@google.com on 14 Jan 2015 at 4:09

GoogleCodeExporter commented 9 years ago
Adding Project:ThreadSanitizer as part of GitHub migration.

Original comment by gli...@google.com on 30 Jul 2015 at 9:21