Glibc has some built-in capabilities for detecting heap overruns, which can be activated with the environment variable MALLOC_CHECK_. See mcheck and mallopt for details.
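As a minimal sketch (exact diagnostics and abort behavior depend on the glibc version; the value 3 asks glibc to print a message and abort), a single-byte heap overrun can be caught this way:

    /* overrun.c -- compile with: gcc -o overrun overrun.c
       run with checking enabled: MALLOC_CHECK_=3 ./overrun */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(50);
        memset(buf, 'A', 51);   /* writes one byte past the 50-byte block */
        /* ... the program keeps running; nothing is detected yet ... */
        free(buf);   /* glibc inspects the block only here, and aborts */
        return 0;
    }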
Trying it out on a 64-bit Ubuntu 14.04 system, it seems that when allocating many memory blocks, malloc() rounds the requested size up to the next multiple of 8, and uses 8 extra bytes per block. E.g., if you allocate 1000000 blocks of 50 bytes, the process actually requests 64000000 bytes from the kernel (64 bytes per block: 50 rounded up to 56, plus 8), not 50000000. This leaves room for a "separator canary" of 8 bytes between any two allocated blocks.
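This is easy to observe with a small sketch; the exact figures are specific to that glibc version and platform, and the test should be run without MALLOC_CHECK_ set, since the checking allocator adds overhead of its own:

    /* spacing.c -- gcc -o spacing spacing.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <malloc.h>   /* for malloc_usable_size() (glibc-specific) */

    int main(void)
    {
        char *prev = malloc(50);
        for (int i = 0; i < 5; i++) {
            char *cur = malloc(50);
            /* Consecutive blocks from the main arena are usually
             * adjacent, so the address difference shows the real
             * per-block cost: expect 64 (50 rounded up to 56, plus
             * 8 bytes of bookkeeping), with a usable size of 56. */
            printf("usable size: %zu, spacing: %ld bytes\n",
                   malloc_usable_size(cur),
                   (long)((uintptr_t)cur - (uintptr_t)prev));
            prev = cur;
        }
        return 0;
    }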
These features seem meant to detect implementation bugs, not malicious exploitation. It is unclear how good they would be at actually detecting buffer overruns, and at doing so early enough; indeed, by definition, heap consistency checks of that kind can be enforced only when the heap allocation routines are invoked. This means that if you have a buffer overflow, you may get some notice only when the overflowed block, or the next one in RAM, gets released: this happens after the overflow, possibly long after, hence too late. This contrasts with stack-based canaries, which are checked before the function returns (i.e. after the overflow, but before the return address slot, which may have been clobbered, is actually used).
What such checks are more-or-less good at is detecting double-free issues. They cannot trap use-after-free cases, and, as explained above, when they detect an overflow it is probably too late.
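For instance, a plain double free is reliably caught when the checking allocator is active (a minimal sketch; recent glibc versions abort on the simplest cases even without MALLOC_CHECK_):

    /* doublefree.c -- gcc -o doublefree doublefree.c
       run with: MALLOC_CHECK_=3 ./doublefree */
    #include <stdlib.h>

    int main(void)
    {
        char *p = malloc(50);
        free(p);
        free(p);   /* second free of the same pointer: glibc aborts here */
        return 0;
    }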