Do so by recursively calling disInstr_MIPS_WRK() if the instruction
currently being disassembled is a branch/jump, effectively combining
the branch/jump and its delay-slot instruction into one IR instruction.
A notable change is that the branch/jump + delay slot combination now forms
an eight-byte instruction.
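Schematically (a sketch only, not the actual code; is_branch_or_jump and
the argument plumbing are illustrative):

   /* inside disInstr_MIPS_WRK(), after decoding instruction cins: */
   if (is_branch_or_jump(cins)) {
      /* recurse to also disassemble the delay-slot instruction
         (at guest_PC + 4) into the same IRSB */
      disInstr_MIPS_WRK( /* ... delay slot ... */ );
      dres.len = 8;   /* branch + delay slot: one eight-byte unit */
   }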
This is related to KDE #417187.
This fixes drd/tests/annotate_hbefore on mips.
This patch adds syscall wrappers for the following syscalls which
use a 64bit time_t on 32bit arches: clock_gettime64, clock_settime64,
clock_getres_time64, clock_nanosleep_time64, timer_gettime64,
timer_settime64, timerfd_gettime64, timerfd_settime64,
utimensat_time64, pselect6_time64, ppoll_time64, recvmmsg_time64,
mq_timedsend_time64, mq_timedreceive_time64, semtimedop_time64,
rt_sigtimedwait_time64, futex_time64 and sched_rr_get_interval_time64.
Still missing are clock_adjtime64 and io_pgetevents_time64.
For the more complicated syscalls futex[_time64], pselect6[_time64]
and ppoll[_time64] there are shared pre and/or post helper functions.
Other syscalls just have their own PRE and POST handlers.
Note that the vki_timespec64 struct really is the struct as used by
glibc (it internally translates a 32bit timespec struct to a 64bit
timespec64 struct before passing it to any of the time64 syscalls).
The kernel uses a 64-bit signed int for tv_nsec, but ignores the upper
32 bits of that field. It does always write the full struct though.
So skipping the check on the padding is only needed for PRE_MEM_READ.
There are two helpers, pre_read_timespec64 and pre_read_itimerspec64,
to check the new structs, as sketched below.
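A minimal sketch of the timespec64 helper (using the vki_ definitions
from the patch; little-endian layout assumed for the tv_nsec check):

   struct vki_timespec64 {
      vki_int64_t tv_sec;
      vki_int64_t tv_nsec;   /* kernel reads only the low 32 bits */
   };

   static void pre_read_timespec64(ThreadId tid, const HChar* msg, Addr addr)
   {
      struct vki_timespec64* ts = (struct vki_timespec64*)addr;
      /* tv_sec must be fully defined */
      PRE_MEM_READ(msg, (Addr)&ts->tv_sec, sizeof(ts->tv_sec));
      /* only the low half of tv_nsec is read by the kernel, so don't
         require the ignored upper 32 bits to be defined */
      PRE_MEM_READ(msg, (Addr)&ts->tv_nsec, 4);
   }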
https://bugs.kde.org/show_bug.cgi?id=416753
Necessary changes to support nanoMIPS on Linux.
Part 4/4 - Other changes (mainly include/*)
Patch by Aleksandar Rikalo, Dimitrije Nikolic, Tamara Vlahovic,
Nikola Milutinovic and Aleksandra Karadzic.
Related KDE issue: #400872.
The NaN2008 dynamic linker is named ld-linux-mipsn8.so.1.
Update include/pub_tool_redir.h by adding ld-linux-mipsn8.so.1 to the list
of sonames with an accompanying check in coregrind/m_redir.c.
Patch by Stefan Maksimovic.
- The command option --collect-systime has been enhanced to specify
the unit used to record the elapsed time spent during system calls.
The command option now accepts the values no|yes|msec|usec|nsec,
where yes is a synonym of msec. When giving the value nsec, the
system cpu time of system calls is also recorded.
Note that the nsec option is not supported on Darwin.
As it turned out, the size of vki_siginfo_t is incorrect on these 64-bit
architectures:
(gdb) p sizeof(vki_siginfo_t)
$1 = 136
(gdb) ptype struct vki_siginfo
type = struct vki_siginfo {
    int si_signo;
    int si_errno;
    int si_code;
    union {
        int _pad[29];
        struct {...} _kill;
        struct {...} _timer;
        struct {...} _rt;
        struct {...} _sigchld;
        struct {...} _sigfault;
        struct {...} _sigpoll;
    } _sifields;
}
It looks like, for these architectures, __VKI_ARCH_SI_PREAMBLE_SIZE
hasn't been defined properly, which resulted in an incorrect
VKI_SI_PAD_SIZE calculation (29 instead of 28).
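For reference, the pad size is derived from the preamble size the same
way as in the kernel's asm-generic/siginfo.h (vki_ names as used in
Valgrind's headers):

   #define VKI_SI_MAX_SIZE  128
   #define VKI_SI_PAD_SIZE  ((VKI_SI_MAX_SIZE - __VKI_ARCH_SI_PREAMBLE_SIZE) \
                             / sizeof(int))

   /* default preamble: 3 ints (12 bytes)
      => VKI_SI_PAD_SIZE = (128 - 12) / 4 = 29, but the 8-byte aligned
         union starts at offset 16, giving sizeof = 136 (wrong)
      on these 64-bit arches the preamble must be declared as 4 ints:
      => VKI_SI_PAD_SIZE = (128 - 16) / 4 = 28, giving sizeof = 128 */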
The DWARF debug info confirms that _sifields starts at offset 16,
i.e. the preamble occupies four ints:
<6a9e4> DW_AT_name : (indirect string, offset: 0xcf59): _sifields
<6a9ef> DW_AT_data_member_location: 16
This issue has been discovered with strace's "make check-valgrind-memcheck",
which produced false out-of-bounds writes on ptrace(PTRACE_GETSIGINFO) calls:
SYSCALL[24264,1](101) sys_ptrace ( 16898, 24283, 0x0, 0x606bd40 )
==24264== Syscall param ptrace(getsiginfo) points to unaddressable byte(s)
==24264== at 0x575C06E: ptrace (ptrace.c:45)
==24264== by 0x443244: next_event (strace.c:2431)
==24264== by 0x443D30: main (strace.c:2845)
==24264== Address 0x606bdc0 is 0 bytes after a block of size 144 alloc'd
(Note that the address passed is 0x606bd40 and the address reported is
0x606bdc0).
After the patch, no such errors are observed.
* include/vki/vki-amd64-linux.h [__x86_64__ && __ILP32__]
(__vki_kernel_si_clock_t): New typedef.
[__x86_64__ && __ILP32__] (__VKI_ARCH_SI_CLOCK_T,
__VKI_ARCH_SI_ATTRIBUTES): New macros.
[__x86_64__ && !__ILP32__] (__VKI_ARCH_SI_PREAMBLE_SIZE): New macro,
define to 4 ints.
* include/vki/vki-arm64-linux.h (__VKI_ARCH_SI_PREAMBLE_SIZE): Likewise.
* include/vki/vki-ppc64-linux.h [__powerpc64__] (__VKI_ARCH_SI_PREAMBLE_SIZE):
Likewise.
* include/vki/vki-linux.h [!__VKI_ARCH_SI_CLOCK_T]
(__VKI_ARCH_SI_CLOCK_T): New macro, define to vki_clock_t.
[!__VKI_ARCH_SI_ATTRIBUTES] (__VKI_ARCH_SI_ATTRIBUTES): New macro,
define to nil.
(struct vki_siginfo): Use __VKI_ARCH_SI_CLOCK_T type for _utime and
_stime fields. Add __VKI_ARCH_SI_ATTRIBUTES.
Resolves: https://bugs.kde.org/show_bug.cgi?id=405201
Reported-by: Dmitry V. Levin <ldv@altlinux.org>
Signed-off-by: Eugene Syromyatnikov <evgsyr@gmail.com>
This patch changes the option parsing framework to allow a set of
core or tool (currently only memcheck) options to be changed dynamically.
Here is a summary of the new functionality (extracted from NEWS):
* It is now possible to dynamically change the value of many command
line options while your program (or its children) are running under
Valgrind.
To see the list of dynamically changeable options, run
valgrind --help-dyn-options
You can change the options from the shell by using vgdb to launch
the monitor command "v.clo <clo option>...".
The same monitor command can be used from a gdb connected
to the valgrind gdbserver.
Your program can also change the dynamically changeable options using
the client request VALGRIND_CLO_CHANGE(option).
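For instance, a client program could silence possibly-lost reports around
a phase known to be noisy (a sketch; do_noisy_work is a placeholder):

   #include <valgrind/valgrind.h>

   extern void do_noisy_work(void);   /* placeholder */

   void run_noisy_phase(void)
   {
      VALGRIND_CLO_CHANGE("--show-possibly-lost=no");
      do_noisy_work();
      VALGRIND_CLO_CHANGE("--show-possibly-lost=yes");
   }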
Here is a brief description of the code changes.
* the command line option parsing macros now check a 'parsing' mode
to decide whether the given option must be handled or not
(more about the parsing mode below).
* the 'main' command option parsing code has been split into a function
'process_option' that can be called now by:
- early_process_cmd_line_options
(looping over args, calling process_option in mode "Early")
- main_process_cmd_line_options
(looping over args, calling process_option in mode "Processing")
- the new function VG_(process_dynamic_option) called from
gdbserver or from VALGRIND_CLO_CHANGE (calling
process_option in mode "Dynamic" or "Help")
* So, during startup, process_option is now called twice for each arg:
- once during the Early phase
- once during normal Processing
process_option can then be called again during execution.
The parsing mode is thus defined so that the option parsing code
behaves differently (e.g. handles the option or not) depending on
the mode.
// Command line option parsing happens in the following modes:
// cloE : Early processing, used by coregrind m_main.c to parse the
// command line options that must be handled early on.
// cloP : Processing, used by coregrind and tools during startup, when
// doing command line options Processing.
// cloD : Dynamic, used to dynamically change options after startup.
// A subset of the command line options can be changed dynamically
// after startup.
// cloH : Help, special mode to produce the list of dynamically changeable
// options for --help-dyn-options.
typedef
   enum {
      cloE = 1,
      cloP = 2,
      cloD = 4,
      cloH = 8
   } Clo_Mode;
The option parsing macros in pub_tool_options.h now all have a new
variant *_CLOM taking the mode(s) in which the given option is accepted.
The old variant is kept and calls the new variant with mode cloP.
The function VG_(check_clom) in the macro compares the current mode
with the modes allowed for the option, and returns True if qq_arg
should be further processed.
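Schematically (a sketch; the real function also handles the cloH help
mode, and the current mode is assumed to be tracked by the parsing
driver):

   static Clo_Mode clo_mode;   /* current mode: cloE, cloP, cloD or cloH */

   Bool VG_(check_clom)(Clo_Mode qq_mode, const HChar* qq_arg,
                        const HChar* qq_option, Bool qq_matches)
   {
      /* handle the option only if it textually matches and is allowed
         in the current parsing mode */
      return qq_matches && (qq_mode & clo_mode) != 0;
   }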
For example, here is the body of the Bool variant:
// Bool argument, eg. --foo=yes or --foo=no
(VG_(check_clom) \
(qq_mode, qq_arg, qq_option, \
VG_STREQN(VG_(strlen)(qq_option)+1, qq_arg, qq_option"=")) && \
({const HChar* val = &(qq_arg)[ VG_(strlen)(qq_option)+1 ]; \
if VG_STREQ(val, "yes") (qq_var) = True; \
else if VG_STREQ(val, "no") (qq_var) = False; \
else VG_(fmsg_bad_option)(qq_arg, "Invalid boolean value '%s'" \
" (should be 'yes' or 'no')\n", val); \
True; }))
and the old variant simply expands to:
VG_BOOL_CLOM(cloP, qq_arg, qq_option, qq_var)
To make an option dynamically changeable, it is typically enough to replace
VG_BOOL_CLO(...)
by
VG_BOOL_CLOM(cloPD, ...)
For example:
- else if VG_BOOL_CLO(arg, "--show-possibly-lost", tmp_show) {
+ else if VG_BOOL_CLOM(cloPD, arg, "--show-possibly-lost", tmp_show) {
cloPD means the option value is set/changed during the main command
Processing (P) and Dynamically during execution (D).
Note that the 'body/further processing' of an option is only executed when
the option is recognised and the current parsing mode is ok for this option.
*STAT* system calls other than statx are becoming deprecated.
Coregrind should use statx as the first candidate in order to achieve
"stat" functionality.
There are also systems that do not even support the older *STAT* calls.
This fixes KDE #400593.
Patch by Aleksandar Rikalo.
Introduce new header files for the system call numbers that are shared
across all Linux architectures and also for the system call numbers that
are shared across all 32-bit architectures.
The s390x-specific inline assembly macros for function wrapping in
include/valgrind.h have a few issues.
When the compiler uses vector registers, such as with "-march=z13", all
vector registers that the callee may clobber must be declared as clobbered
in the inline assembly. Because this is missing, many drd test failures
are seen with "-march=z13".
Also, the inline assemblies write the return value into the target
register before restoring r11. If r11 is used as the target register,
this means that the restore operation corrupts the result. This bug
causes failures with memcheck's "wrap6" test case.
These bugs are fixed. The clobber list is extended by the vector
registers (if appropriate), and the target register is now written at the
end, after restoring r11.
Sync VEX/LICENSE.GPL with top-level COPYING file. We used 3 different
addresses for writing to the FSF to receive a copy of the GPL. Replace
all the different variants with the URL <http://www.gnu.org/licenses/>.
The following files might still have some slightly different (L)GPL
copyright notice because they were derived from other programs:
- files under coregrind/m_demangle which come from libiberty:
cplus-dem.c, d-demangle.c, demangle.h, rust-demangle.c,
safe-ctype.c and safe-ctype.h
- coregrind/m_demangle/dyn-string.[hc] derived from GCC.
- coregrind/m_demangle/ansidecl.h derived from glibc.
- VEX files for FMA derived from glibc:
host_generic_maddf.h and host_generic_maddf.c
- files under coregrind/m_debuginfo derived from LZO:
lzoconf.h, lzodefs.h, minilzo-inl.c and minilzo.h
- files under coregrind/m_gdbserver derived from GDB:
gdb/signals.h, inferiors.c, regcache.c, regcache.h,
regdef.h, remote-utils.c, server.c, server.h, signals.c,
target.c, target.h and utils.c
Plus the following test files:
- none/tests/ppc32/testVMX.c derived from testVMX.
- ppc tests derived from QEMU: jm-insns.c, ppc64_helpers.h
and test_isa_3_0.c
- tests derived from bzip2 (with embedded GPL text in code):
hackedbz2.c, origin5-bz2.c, varinfo6.c
- tests derived from glibc: str_tester.c, pth_atfork1.c
- test derived from GCC libgomp: tc17_sembar.c
- performance tests derived from bzip2 or tinycc (with embedded GPL
text in code): bz2.c, test_input_for_tinycc.c and tinycc.c
On other arches stpcpy () is intercepted for both libc.so and ld.so.
But not on arm64, where it is only intercepted for libc.so.
This can cause memcheck warnings about the use of stpcpy () in ld.so
when called through dlopen () because ld.so contains its own copy of
that function.
Fix by introducing VG_Z_LD_LINUX_AARCH64_SO_1 (the encoded name of
ld.so on arm64) and using that in vg_replace_strmem.c to intercept
stpcpy.
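The change follows the existing pattern for other dynamic linkers
(a sketch; Zh and Zd are the Z-encodings of '-' and '.'):

   /* include/pub_tool_redir.h: Z-encoded soname of the arm64 ld.so */
   #define VG_Z_LD_LINUX_AARCH64_SO_1  ldZhlinuxZhaarch64ZdsoZd1

   /* vg_replace_strmem.c: also intercept stpcpy in that soname */
   STPCPY(VG_Z_LD_LINUX_AARCH64_SO_1, stpcpy)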
https://bugs.kde.org/show_bug.cgi?id=407307
Based on a patch from Łukasz Marek.
Note that this function differs from:
*(T*)VG_(indexXA)(arr, index) = new_value;
as the function will mark the array as unsorted.
Note that this function is currently unused in the valgrind code base,
but it is useful for tools outside of the valgrind tree.
This commit thoroughly overhauls DHAT, moving it out of the
"experimental" ghetto. It makes moderate changes to DHAT itself,
including dumping profiling data to a JSON format output file. It also
implements a new data viewer (as a web app, in dhat/dh_view.html).
The main benefits over the old DHAT are as follows.
- The separation of data collection and presentation means you can run a
program once under DHAT and then sort the data in various ways. Also,
full data is in the output file, and the viewer chooses what to omit.
- The data can be sorted in more ways than previously. Some of these
sorts involve useful filters such as "short-lived" and "zero reads or
zero writes".
- The tree structure view avoids the need to choose stack trace depth.
This avoids both the problem of not enough depth (when records that
should be distinct are combined, and may not contain enough
information to be actionable) and the problem of too much depth (when
records that should be combined are separated, making them seem less
important than they really are).
- Byte and block measures are shown with a percentage relative to the
global count, which helps gauge relative significance of different
parts of the profile.
- Byte and block measures are also shown with an allocation rate
(bytes and blocks per million instructions), which enables comparisons
across multiple profiles, even if those profiles represent different
workloads.
- Both global and per-node measurements are taken at the global heap
peak ("At t-gmax"), which gives Massif-like insight into the point of
peak memory use.
- The final/lifetimes stats are a bit more useful than the old deaths
stats. (E.g. the old deaths stats didn't take into account lifetimes
of unfreed blocks.)
- The handling of realloc() has changed. The sequence `p = malloc(100);
realloc(p, 200);` now increases the total block count by 2 and the
total byte count by 300. Previously it increased them by 1 and 200.
The new handling is a more operational view that better reflects the
effect of allocations on performance. It makes a significant
difference in the results, giving paths involving reallocation (e.g.
repeated pushing to a growing vector) more prominence.
Other things of note:
- There is now testing, both regression tests that run within the
standard test suite, and viewer-specific tests that cannot run within
the standard test suite. The latter are run by loading
dh_view.html?test=1 in a web browser.
- The commit puts all tool lists in Makefiles (and similar files) in the
following consistent order: memcheck, cachegrind, callgrind, helgrind,
drd, massif, dhat, lackey, none; exp-sgcheck, exp-bbv.
- A lot of fields in dh_main.c have been given more descriptive names.
Those names now match those used in dh_view.js.
[This commit contains an implementation for all targets except amd64-solaris
and x86-solaris, which will be completed shortly.]
In the baseline simulator, jumps to guest code addresses that are not known at
JIT time have to be looked up in a guest->host mapping table. That means:
indirect branches, indirect calls and most commonly, returns. Since there are
huge numbers of these (often 10+ million/second) the mapping mechanism needs
to be extremely cheap.
Currently, this is implemented using a direct-mapped cache, VG_(tt_fast), with
2^15 (guest_addr, host_addr) pairs. This is queried in handwritten assembly
in VG_(disp_cp_xindir) in dispatch-<arch>-<os>.S. If there is a miss in the
cache then we fall back out to C land, and do a slow lookup using
VG_(search_transtab).
Given that the size of the translation table(s) in recent years has expanded
significantly in order to keep pace with increasing application sizes, two bad
things have happened: (1) the cost of a miss in the fast cache has risen
significantly, and (2) the miss rate on the fast cache has also increased
significantly. This means that large (~ one-million-basic-blocks-JITted)
applications that run for a long time end up spending a lot of time in
VG_(search_transtab).
The proposed fix is to increase associativity of the fast cache, from 1
(direct mapped) to 4. Simulations of various cache configurations using
indirect-branch traces from a large application show that 4-way is the
best of the configurations tried. In an extreme case with 5.7 billion indirect
branches:
* The increase of associativity from 1 way to 4 way, whilst keeping the
overall cache size the same (32k guest/host pairs), reduces the miss rate by
around a factor of 3, from 4.02% to 1.30%.
* The use of a slightly better hash function, rather than merely slicing off
the bottom 15 bits of the address, reduces the miss rate further, from 1.30%
to 0.53%.
Overall the VG_(tt_fast) miss rate is almost unchanged on small workloads, but
reduced by a factor of up to almost 8 on large workloads.
By implementing each (4-entry) cache set using a move-to-front scheme in the
case of hits in ways 1, 2 or 3, the vast majority of hits can be made to
happen in way 0. Hence the cost of having this extra associativity is almost
zero in the case of a hit. The improved hash function costs an extra 2 ALU
ops (a shift and an xor), but overall this seems to range from performance
neutral to a win.
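As a C sketch of the scheme (illustrative only; the real lookup lives in
handwritten assembly in dispatch-<arch>-<os>.S and the hash constants
differ per configuration):

   #define N_SETS (1 << 13)   /* 8192 sets x 4 ways = 32k pairs */

   typedef struct {
      Addr guest[4];   /* guest code addresses */
      Addr host[4];    /* corresponding translated host addresses */
   } FastCacheSet;

   static FastCacheSet tt_fast[N_SETS];

   static inline UWord set_no(Addr ga)
   {
      /* better than slicing off the low bits: one shift + one xor */
      return (ga ^ (ga >> 13)) & (N_SETS - 1);
   }

   static Addr fast_lookup(Addr ga)
   {
      FastCacheSet* s = &tt_fast[set_no(ga)];
      if (s->guest[0] == ga)          /* way 0: the hot path */
         return s->host[0];
      for (int i = 1; i < 4; i++) {
         if (s->guest[i] == ga) {
            /* move-to-front: swap with way 0 so that the next lookup
               of ga hits the hot path */
            Addr g = s->guest[0], h = s->host[0];
            s->guest[0] = s->guest[i]; s->host[0] = s->host[i];
            s->guest[i] = g;           s->host[i] = h;
            return s->host[0];
         }
      }
      return 0;   /* miss: fall back to VG_(search_transtab) */
   }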
PTRACE_GET_THREAD_AREA is not handled by the amd64 linux syswrap, which leads
to false positive errors in 64-bit programs ptrace-ing 32-bit processes.
For example, the error below was wrongly reported for GDB:
==25377== Conditional jump or move depends on uninitialised value(s)
==25377== at 0x8A1D7EC: td_thr_get_info (td_thr_get_info.c:35)
==25377== by 0x526819: thread_from_lwp(thread_info*, ptid_t) (linux-thread-db.c:417)
==25377== by 0x5281D4: thread_db_notice_clone(ptid_t, ptid_t) (linux-thread-db.c:442)
==25377== by 0x51773B: linux_handle_extended_wait(lwp_info*, int) (linux-nat.c:2027)
....
==25377== Uninitialised value was created by a stack allocation
==25377== at 0x69A360: x86_linux_get_thread_area(int, void*, unsigned int*) (x86-linux-nat.c:278)
Fix this by implementing PTRACE_GET|SET_THREAD_AREA on amd64.
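The wrapper additions follow the existing ptrace cases (a sketch;
assumes struct vki_user_desc is now also defined for amd64):

   /* in PRE(sys_ptrace), switching on ARG1: */
   case VKI_PTRACE_GET_THREAD_AREA:
      PRE_MEM_WRITE("ptrace(get_thread_area)", ARG4,
                    sizeof(struct vki_user_desc));
      break;
   case VKI_PTRACE_SET_THREAD_AREA:
      PRE_MEM_READ("ptrace(set_thread_area)", ARG4,
                   sizeof(struct vki_user_desc));
      break;

   /* and in POST(sys_ptrace): */
   case VKI_PTRACE_GET_THREAD_AREA:
      POST_MEM_WRITE(ARG4, sizeof(struct vki_user_desc));
      break;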
When code uses utimensat with UTIME_NOW or UTIME_OMIT, valgrind memcheck
would generate a warning. But as the utimensat manpage says:
If the tv_nsec field of one of the timespec structures has the special
value UTIME_NOW, then the corresponding file timestamp is set to the
current time. If the tv_nsec field of one of the timespec structures
has the special value UTIME_OMIT, then the corresponding file timestamp
is left unchanged. In both of these cases, the value of the
corresponding tv_sec field is ignored.
So ignore the timespec tv_sec when tv_nsec is set to UTIME_NOW or
UTIME_OMIT.
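In the PRE wrapper this amounts to something like (a sketch;
VKI_UTIME_NOW/VKI_UTIME_OMIT mirror the kernel's UTIME_NOW/UTIME_OMIT,
and ARG3 is the times argument of utimensat):

   struct vki_timespec* times = (struct vki_timespec*)ARG3;
   if (times) {
      for (int i = 0; i < 2; i++) {
         /* tv_nsec is always read */
         PRE_MEM_READ("utimensat(times[i].tv_nsec)",
                      (Addr)&times[i].tv_nsec, sizeof(times[i].tv_nsec));
         /* tv_sec is only read when tv_nsec is not a special value */
         if (times[i].tv_nsec != VKI_UTIME_NOW
             && times[i].tv_nsec != VKI_UTIME_OMIT)
            PRE_MEM_READ("utimensat(times[i].tv_sec)",
                         (Addr)&times[i].tv_sec, sizeof(times[i].tv_sec));
      }
   }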
Support for the bpf system call was added in a previous commit, but
did not include tracking for file descriptors handled by the call.
Add checks and tracking for file descriptors. Check in the PRE() wrapper
that all file descriptors (pointing to objects such as eBPF programs or
maps, cgroups, or raw tracepoints) used by the system call are valid,
then add tracking in POST() wrapper for newly produced file descriptors.
As the file descriptors are not always processed in the same way by the
bpf call, add to the header file some additional definitions from bpf.h
that are necessary to sort out under what conditions descriptors should
be checked in the PRE() helper.
Fixes: 388786 - Support bpf syscall in amd64 Linux
Add support for the Linux-specific bpf() system call on amd64. The
bpf() syscall is used to handle eBPF objects (programs and maps), and
can be used for a number of operations. It takes three arguments:
- "cmd" is an integer encoding a subcommand to run. Available subcommand
include loading a new program, creating a map or updating its entries,
retrieving information about an eBPF object, and may others.
- "attr" is a pointer to an object of type union bpf_attr. This object
converts to a struct related to selected subcommand, and embeds the
various parameters used with this subcommand. Some of those parameters
are read by the kernel (example for an eBPF map lookup: the key of the
entry to lookup), others are written into (the value retrieved from
the map lookup).
- "attr_size" is the size of the object pointed by "attr".
Since the action performed by the kernel and the way "attr" attributes
are processed depend on the subcommand in use, the PRE() and POST()
wrappers need to make the distinction as well. For each subcommand, mark
the attributes that are read or written.
For some map operations, the only way to infer the size of the memory
areas used for read or write operations seems to involve reading
from /proc/<pid>/fdinfo/<fd> in order to retrieve the size of keys
and values for this map, as sketched below.
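A sketch of that inference (reading our own fdinfo here for brevity; the
key_size:/value_size: labels are what bpf map fdinfo exposes):

   #include <stdio.h>

   static int bpf_map_sizes(int fd, unsigned* key_size, unsigned* value_size)
   {
      char path[64], line[128];
      int got = 0;
      snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);
      FILE* f = fopen(path, "r");
      if (!f) return -1;
      while (fgets(line, sizeof(line), f)) {
         if (sscanf(line, "key_size: %u", key_size) == 1) got++;
         else if (sscanf(line, "value_size: %u", value_size) == 1) got++;
      }
      fclose(f);
      return got == 2 ? 0 : -1;
   }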
The definitions of union bpf_attr and of other eBPF-related elements
required for adequately performing the checks were added to the Linux
header file.
Processing related to file descriptors is added in a follow-up patch.
This patch makes sure that the process running under valgrind only sees
the AES, PMULL, SHA1, SHA2, CRC32, FP, and ASIMD features in the auxv
AT_HWCAP entry.
https://bugs.kde.org/show_bug.cgi?id=381556
Adding MIPS N32 ABI support.
BZ issue - #345763.
Contributed and maintained by multiple people over the years:
Crestez Dan Leonard, Maran Pakkirisamy, Dimitrije Nikolic,
Aleksandar Rikalo, Tamara Vlahovic.
Use RegWord type in mips64.
Part of the changes required for MIPS N32 ABI support.
BZ issue - #345763.
Contributed by:
Dimitrije Nikolic, Aleksandar Rikalo and Tamara Vlahovic.
The modified test none/tests/sem crashes with a SEGV when valgrind is compiled
with lto on various amd64 platforms (debian/gcc 6.3, RHEL7/gcc 6.4,
Ubuntu/gcc 7.2).
The problem is that the vki_semid_ds buf is not what is expected by the kernel:
the kernel expects a bigger structure vki_semid64_ds (at least on
these platforms).
Getting the sem_nsems seems to work by chance, as sem_nsems is at
the same offset in both vki_semid_ds and vki_semid64_ds.
However, the ctime, for example, was not set properly after syscall return,
and the 2 words after sem_nsems were set to 0 by the kernel, causing
the SEGV, as a spilled register became 0.
The fix consists in using the 64-bit variant (vki_semid64_ds) for __NR_semctl.
Tested on debian/amd64 and s390x.
Shingled magnetic recording drives support a command set called ZBC
(Zoned Block Commands). Two new ioctls have been added to the Linux
kernel to support such drives, namely VKI_BLKREPORTZONE and
VKI_BLKRESETZONE. Add support to Valgrind for these ioctls.
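The wrapping itself is straightforward (a sketch, assuming the vki_
structs mirror the kernel's struct blk_zone_report and
struct blk_zone_range):

   /* in the ioctl PRE handler: */
   case VKI_BLKREPORTZONE: {
      struct vki_blk_zone_report* zr = (struct vki_blk_zone_report*)ARG3;
      /* the header (sector, nr_zones) is read by the kernel ... */
      PRE_MEM_READ("ioctl(BLKREPORTZONE)", ARG3, sizeof(*zr));
      /* ... which then writes up to nr_zones zone descriptors after it */
      PRE_MEM_WRITE("ioctl(BLKREPORTZONE)", (Addr)(zr + 1),
                    zr->nr_zones * sizeof(struct vki_blk_zone));
      break;
   }
   case VKI_BLKRESETZONE:
      PRE_MEM_READ("ioctl(BLKRESETZONE)", ARG3,
                   sizeof(struct vki_blk_zone_range));
      break;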