Memcheck tries to accurately track definedness at the bit level, at
least for scalar integer operations.  For many operations it is good
enough to use approximations that may overstate the undefinedness of
the result, provided that fully defined inputs still produce a fully
defined output.  For example, the standard analysis for an integer add
is

   Add#(x#, y#) = Left(UifU(x#, y#))

which (as explained in the USENIX 05 paper,
http://valgrind.org/docs/memcheck2005.pdf) means: for an add,
worst-case carry propagation is assumed, so all bits to the left of,
and including, the rightmost undefined bit in either operand are
assumed to be undefined.  (Both the cheap and the expensive schemes
are sketched in the examples following this message.)

As compilers have become increasingly aggressive, some of these
approximations are no longer good enough.  For example, LLVM has for
some years generated Add operations with partially undefined inputs
when it knows that the carry propagation will not pollute the
important parts of the result.  Similarly, both GCC and LLVM generate
integer equality comparisons with partially undefined inputs in
situations where they know the result of the comparison will be
defined.  In both cases, Memcheck's default strategies give rise to
false uninitialised-value errors, and the problem is getting worse as
time goes by.

Memcheck already has expensive (non-default) instrumentation for
integer adds, subtracts, and equality comparisons.  Currently these
are used only if you specify --expensive-definedness-checks=yes, in
some rare cases involving inlined string operations (as determined by
analysing the block to be instrumented), and by default on MacOS.  The
performance hit from them can be quite high, up to a 30% slowdown.

This patch makes the following changes:

* During instrumentation, there is much finer control over which IROps
  get expensive instrumentation.  The following groups can now be
  selected independently for expensive or cheap instrumentation:

     Iop_Add32
     Iop_Add64
     Iop_Sub32
     Iop_Sub64
     Iop_CmpEQ32 and Iop_CmpNE32
     Iop_CmpEQ64 and Iop_CmpNE64

  This makes it possible to enable, on a given platform, only the
  minimal necessary set of expensive cases.

* The default set of expensive cases can be set on a per-platform
  basis.  This is set up in the first part of MC_(instrument).

* There is a new pre-instrumentation analysis pass.  It identifies
  Iop_Add32 and Iop_Add64 uses for which the expensive handling gives
  the same results as the cheap handling.  This includes all adds that
  are used only to create memory addresses.  Given that the expensive
  handling of adds is, well, expensive, and that most adds merely
  create memory addresses, this more than halves the extra cost of
  expensive Add handling.

* The pre-existing "bogus literal" detection (0x80808080, etc) pass
  has been rolled into the new pre-instrumentation analysis.

* The --expensive-definedness-checks= flag has been changed.  Before,
  it had two settings, "no" and "yes", with "no" being the default.
  Now it has three settings:

     no   -- always use the cheapest handling

     auto -- use the minimum set of expensive handling needed to get
             reasonable results on this platform, and perform the
             pre-instrumentation analysis so as to minimise the cost
             thereof

     yes  -- always use the most expensive handling

  The default setting is now "auto".

The user-visible effect of the new default is that there should
(hopefully) be a drop in false-positive rates, but (unfortunately)
also some drop in performance.
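
As an illustration of the cheap scheme above, here is a minimal
standalone C model.  Shadow values use 1 bits for undefined positions;
UifU is a bitwise OR of the two shadows, and Left(v) can be computed
as v | -v, which sets every bit at and to the left of the rightmost 1
bit, modelling worst-case carry propagation.  The function names are
invented for the example; Memcheck does this at the IR level in
mc_translate.c rather than in plain C.

   #include <stdint.h>
   #include <stdio.h>

   /* Shadow ("V bit") convention: a 1 bit means "undefined" at that
      position, a 0 bit means "defined". */

   /* UifU: undefined-if-either-undefined, i.e. bitwise OR of shadows. */
   static uint32_t uifu32(uint32_t xv, uint32_t yv) { return xv | yv; }

   /* Left: smear 1 bits leftwards from the rightmost 1 bit, modelling
      worst-case carry propagation.  v | -v has exactly that effect.   */
   static uint32_t left32(uint32_t v) { return v | (0u - v); }

   /* Cheap shadow for Add32: Add#(x#, y#) = Left(UifU(x#, y#)). */
   static uint32_t cheap_add32_shadow(uint32_t xv, uint32_t yv)
   {
      return left32(uifu32(xv, yv));
   }

   int main(void)
   {
      /* x has undefined bit 3, y is fully defined: everything from
         bit 3 upwards is reported undefined, even though a real carry
         might never reach the upper bits. */
      printf("%08x\n", (unsigned)cheap_add32_shadow(0x00000008u, 0u));
      /* prints fffffff8 */
      return 0;
   }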
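The expensive Add handling, by contrast, tracks which result bits can
actually vary.  The sketch below follows the min/max construction
described in the USENIX 05 paper: force the undefined bits of each
operand to all-zeroes and to all-ones, add both ways, and mark as
undefined the input V bits plus any result bit on which the two sums
disagree.  Again the names are made up and this is a model of the
idea, not the instrumentation code itself.

   #include <stdint.h>
   #include <stdio.h>

   /* Expensive shadow for Add32: only bits that can differ between
      the smallest and largest possible sums are marked undefined.    */
   static uint32_t expensive_add32_shadow(uint32_t x, uint32_t xv,
                                          uint32_t y, uint32_t yv)
   {
      uint32_t x_min = x & ~xv;   /* undefined bits forced to 0 */
      uint32_t x_max = x |  xv;   /* undefined bits forced to 1 */
      uint32_t y_min = y & ~yv;
      uint32_t y_max = y |  yv;
      return xv | yv | ((x_min + y_min) ^ (x_max + y_max));
   }

   int main(void)
   {
      /* x = 0x10 with bit 3 undefined, y = 0x01 fully defined.  Only
         bit 3 of the sum comes out undefined, instead of everything
         from bit 3 upwards as with the cheap rule. */
      printf("%08x\n",
             (unsigned)expensive_add32_shadow(0x10u, 0x08u, 0x01u, 0u));
      /* prints 00000008 */
      return 0;
   }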
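For the equality comparisons, the cheap rule makes the result
undefined as soon as either operand has any undefined bit.  The
improvement rests on a simple observation: if the operands differ in
some bit position that is defined on both sides, the comparison must
come out "not equal", so its result is defined no matter what the
undefined bits hold.  A standalone model of that observation (not the
actual instrumentation, and with invented names):

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Is the result of CmpEQ32/CmpNE32 undefined?  The cheap rule
      answers "yes" whenever any input bit is undefined; the improved
      rule first looks for a defined bit position where the operands
      already differ. */
   static bool cmpeq32_result_undefined(uint32_t x, uint32_t xv,
                                        uint32_t y, uint32_t yv)
   {
      uint32_t anyv     = xv | yv;          /* undefined on either side */
      uint32_t def_diff = (x ^ y) & ~anyv;  /* defined bits that differ */
      if (def_diff != 0)
         return false;        /* must be unequal: result is defined    */
      return anyv != 0;       /* otherwise undefined iff any V bit set */
   }

   int main(void)
   {
      /* The operands differ in fully defined bit 0, so the result is
         defined even though bit 31 of x is undefined. */
      printf("%d\n", (int)cmpeq32_result_undefined(0x1u, 0x80000000u,
                                                   0x0u, 0x0u));
      /* prints 0 */
      return 0;
   }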
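Why can address-only adds keep the cheap handling?  Both the cheap and
the expensive shadow of an add are non-zero exactly when some input
bit is undefined, and when a value is used only as a memory address
Memcheck essentially just asks whether it is fully defined, so (to a
first approximation) the two schemes cannot be told apart there.  The
toy scanner below illustrates the kind of use-analysis involved; the
IR and every name in it are hypothetical simplifications, not the VEX
IR or the actual pass.

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdio.h>

   /* Hypothetical, much-simplified IR: a statement either defines a
      temp as an add of two temps, uses a temp as a load/store
      address, or uses a temp as ordinary data. */
   typedef enum { ST_ADD, ST_ADDR_USE, ST_DATA_USE } StKind;
   typedef struct { StKind kind; int dst; int src1; int src2; } Stmt;

   /* An add may keep cheap handling if its result is used only as a
      memory address (where only "fully defined or not?" matters).    */
   static bool add_needs_expensive(const Stmt* stmts, size_t n,
                                   size_t add_ix)
   {
      int t = stmts[add_ix].dst;
      for (size_t i = 0; i < n; i++) {
         if (i == add_ix)
            continue;
         if (stmts[i].kind == ST_DATA_USE && stmts[i].src1 == t)
            return true;                     /* used as data          */
         if (stmts[i].kind == ST_ADD
             && (stmts[i].src1 == t || stmts[i].src2 == t))
            return true;                     /* feeds another add     */
      }
      return false;                          /* address-only use      */
   }

   int main(void)
   {
      /* t3 = t1 + t2, and t3 is then used only as an address. */
      Stmt block[] = {
         { ST_ADD,      3, 1, 2 },
         { ST_ADDR_USE, 0, 3, 0 }
      };
      printf("needs expensive: %d\n",
             (int)add_needs_expensive(block, 2, 0));   /* prints 0 */
      return 0;
   }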