Replace use of daddi/addi with daddiu/addiu.
This is more R6-friendly and we actually want to use the instructions
that do not cause an integer overflow exception.
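A rough illustration of the difference (not code from the patch; the helper
name is invented and the asm is MIPS-only):

  #include <stdint.h>

  /* Hypothetical helper: addiu performs the same addition as addi but
     wraps on overflow instead of raising an integer overflow exception;
     the same holds for daddiu versus daddi on 64-bit values. */
  static inline int32_t add_imm_no_trap(int32_t x)
  {
      int32_t r;
      __asm__("addiu %0, %1, 4" : "=r"(r) : "r"(x));
      return r;
  }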
Patch by Vicente Olivert Riera.
Related issue - BZ#356112.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@16018
Explanation by Matthias Schwarzott:
The linker will request an executable stack as soon as at least one
object file that is linked in wants an executable stack.
And the absence of the
.section .note.GNU-stack,"",@progbits
is enough to tell the linker that an executable stack is needed.
So even an empty asm-file must at least contain this statement to not
force executable stacks on the whole executable.
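A minimal sketch of the idea as file-scope asm in C (not the actual Valgrind
macro; on some targets the section flavour is spelled %progbits rather than
@progbits):

  /* Emit the empty GNU-stack note so the linker does not request an
     executable stack on behalf of this translation unit. */
  #if defined(__linux__) && defined(__ELF__)
  __asm__(".section .note.GNU-stack,\"\",@progbits\n\t.previous");
  #endif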
* Define a helper macro MARK_STACK_NO_EXEC that disables the
executable stack.
* Instantiate this macro unconditionally at the end of each asm file.
Patch by Matthias Schwarzott <zzam@gentoo.org>.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15692
Valgrind aspects, to match vex r3124.
See bug 339778 - Linux/TileGx platform support to Valgrind
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15080
to add PPC64 LE support. The other two patches can be found in Bugzillas
334384 and 334836.
POWER PC, add the functional Little Endian support, patch 2
The IBM POWER processor now supports both Big Endian and Little Endian.
The ABI for Little Endian also changes. Specifically, the function
descriptor is no longer used, the stack size changed, and access to the
TOC changed. Functions now have a local and a global entry point.
Register r2 contains the TOC for local calls and register r12 contains
the TOC for global calls. This patch makes the functional changes to the
Valgrind tool, and makes the changes needed for the none/tests/ppc32 and
none/tests/ppc64 Makefile.am files. A number of the
ppc specific tests have Endian dependencies that are not fixed in
this patch. They are fixed in the next patch.
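For illustration only (ppc64le file-scope asm in C, not code from the patch;
the function name is invented), a function with both entry points looks
roughly like this:

  __asm__(
      ".text\n"
      ".globl toy_fn\n"
      ".type  toy_fn, @function\n"
      "toy_fn:\n"
      /* global entry point: the caller put the function's address in
         r12, and the TOC pointer (r2) is derived from it */
      "  addis 2, 12, .TOC.-toy_fn@ha\n"
      "  addi  2, 2,  .TOC.-toy_fn@l\n"
      /* local entry point: same-module callers enter here with r2
         already holding the TOC */
      "  .localentry toy_fn, .-toy_fn\n"
      "  blr\n");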
Per Julian's comments, renamed coregrind/m_dispatch/dispatch-ppc64-linux.S
to coregrind/m_dispatch/dispatch-ppc64be-linux.S and created a new file,
coregrind/m_dispatch/dispatch-ppc64le-linux.S, for LE. The same was done
for coregrind/m_syswrap/syscall-ppc-linux.S.
Signed-off-by: Carl Love <carll@us.ibm.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14239
to add PPC64 LE support. The other two patches can be found in Bugzillas
334834 and 334836. The commit does not have a VEX commit associated with it.
POWER PC, add initial Little Endian support
The IBM POWER processor now supports both Big Endian and Little Endian.
This patch renames the #defines with the name ppc64 to ppc64be for the BE
specific code, and adds the Little Endian #define ppc64le for the LE
specific code. Additionally, a few functions are renamed to remove BE from
the name if the function is used by both BE and LE. Functions that are BE
specific have BE put in the name.
The goal of this patch is to make sure #defines, function names and
variables consistently use PPC64/ppc64 if it refers to both BE and LE,
PPC64BE/ppc64be if it is specific to BE, and PPC64LE/ppc64le if it is LE
specific. The patch does not break the code for PPC64 Big Endian.
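A minimal sketch of the resulting convention (the exact macro names here are
an assumption, not taken from the tree):

  #if defined(VGA_ppc64be) || defined(VGA_ppc64le)
     /* code shared by both PPC64 endiannesses */
  #endif

  #if defined(VGA_ppc64be)
     /* Big Endian specific code */
  #elif defined(VGA_ppc64le)
     /* Little Endian specific code */
  #endif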
The test files memcheck/tests/atomic_incs.c and tests/power_insn_available.c
are also updated to the new #define definition for PPC64 BE.
Signed-off-by: Carl Love <carll@us.ibm.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14238
Necessary changes to Valgrind to support MIPS64LE on Linux.
Minor cleanup/style changes embedded in the patch as well.
The change corresponds to r2687 in VEX.
Patch written by Dejan Jevtic and Petar Jovanovic.
More information about this issue:
https://bugs.kde.org/show_bug.cgi?id=313267
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@13292
Reserve space for frame header in disp_run_translations, as some optimizations
may decide to use it. This should fix issue #307141.
Related link:
https://bugs.kde.org/show_bug.cgi?id=307141
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@13080
way that the fast-path through VG_(disp_cp_xindir) only has to
increment a 32-bit counter, saving memory bandwidth on 32-bit
platforms compared to a 64-bit increment. The overall number of XIndirs
can still be 64-bit, though.
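In outline (hypothetical C; the real counters live in the dispatcher
assembly and the stats code):

  #include <stdint.h>

  static uint32_t n_xindirs_32;   /* bumped on the fast path       */
  static uint64_t n_xindirs_64;   /* running total, updated rarely */

  static inline void xindir_hit(void)
  {
      /* a 32-bit increment is cheaper on 32-bit hosts than a 64-bit one */
      n_xindirs_32++;
  }

  static void fold_xindir_stats(void)
  {
      n_xindirs_64 += n_xindirs_32;   /* keep the 64-bit total exact */
      n_xindirs_32 = 0;
  }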
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@12527
in common.
To accomplish that without penalizing the non-profiling dispatcher
we do the stats gathering *after* the jitted code returns to the
dispatcher. For that to work properly, we need to stash away the
instruction address before entering the jitted code so we can use
it later. (See also VEX r2208).
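In outline (hypothetical C, not the real assembly dispatcher):

  #include <stdint.h>

  typedef struct { uint64_t pc; } GuestState;

  /* Trivial stand-ins for the real machinery. */
  static uint64_t profile_hits;
  static uint64_t run_translation(GuestState *gs) { return gs->pc + 4; }
  static void     record_hit(uint64_t pc) { (void)pc; profile_hits++; }

  static void dispatch_loop(GuestState *gs, int profiling, long iters)
  {
      while (iters-- > 0) {
          uint64_t stashed_pc = gs->pc;   /* stash before entering      */
          gs->pc = run_translation(gs);   /* jitted code updates the pc */
          if (profiling)
              record_hit(stashed_pc);     /* stats *after* the return   */
      }
  }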
Two other tweaks are included here:
(1) For the non-profiling dispatcher it is not necessary to update
the LR in each iteration. Quite obviously the jitted code cannot
modify the LR in its iteration because it needs it at the very end
when it returns. So we move this step out of the core loop.
(2) Move loading the address of VG_(tt_fast) past testing for a changed
guest state pointer.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@12044
so as to avoid a GSP-changed check in the common case. See vex r2155.
(amd64-darwin and x86-darwin are now temporarily unbuildable.)
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@11786
use test-based detection of GSP pointer changes.
Saves one load per SB.
dispatch-amd64-linux.S:
ditto
dispatch-amd64-linux.S:
use movabsq to get &VG_(tt_fast) into a register,
instead of an rsp-relative load from a constant pool.
Saves a second load per SB.
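As a rough x86-64-only illustration (assumes a non-PIC build; the table here
is a stand-in for VG_(tt_fast)):

  #include <stdint.h>

  uint64_t fast_cache[4096];              /* stand-in for VG_(tt_fast) */

  static inline uint64_t *load_table_base(void)
  {
      uint64_t *p;
      /* one movabsq with a 64-bit immediate, rather than a load from a
         constant pool addressed relative to %rsp */
      __asm__("movabsq $fast_cache, %0" : "=r"(p));
      return p;
  }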
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@11785
the fact that all {VG,VEX}_TRC_ values have their lowest bit set. All
other targets can benefit from this trick too.
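One way to read the trick (illustrative C; the constants are invented): a
return code with its lowest bit set can never be mistaken for an aligned
guest-state pointer, so a single bit test tells them apart:

  #include <stdint.h>

  enum { TRC_EXAMPLE_A = 0x61, TRC_EXAMPLE_B = 0x63 };   /* odd values */

  static int is_trc_code(uintptr_t returned)
  {
      /* lowest bit set => a TRC code; clear => an aligned pointer */
      return (returned & 1) != 0;
  }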
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@11781
make some attempt to schedule for Cortex-A8. Improves overall IPC
for the none tool running perf/bz2.c "-O" from 0.879 to 0.925.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@11780
"insn_address >> 1". The former is appropriate for ARM code, where
all insns are 4-sized and 4-aligned, but not for Thumb code, where the
minimum size and alignment is 2. The old scheme happened to work for
Thumb (indeed, any hash function would), but caused huge amounts of
conflict misses in the fast cache for some programs.
The change has been observed to reduce conflict misses by up to 100
times, and in some cases, improves performance significantly for Thumb
code. Performance of ARM code is unchanged or possibly a bit worse.
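A sketch of the indexing change as described (the set count, and the
assumption that the previous scheme shifted by two, are mine):

  #include <stdint.h>

  #define FAST_CACHE_SETS 8192u        /* illustrative power of two */

  /* Old scheme: fine when insns are 4-aligned (ARM), but 2-aligned
     Thumb insns at different addresses can alias to the same set,
     causing conflict misses. */
  static inline uint32_t set_for_arm_only(uint32_t insn_addr)
  {
      return (insn_addr >> 2) & (FAST_CACHE_SETS - 1);
  }

  /* New scheme: divide by the minimum alignment (2), so Thumb
     translations spread across all the sets; ARM code is unaffected
     or only marginally worse. */
  static inline uint32_t set_for_arm_and_thumb(uint32_t insn_addr)
  {
      return (insn_addr >> 1) & (FAST_CACHE_SETS - 1);
  }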
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@11716
side components. (Florian Krohm <britzel@acm.org> and Christian
Borntraeger <borntraeger@de.ibm.com>). Fixes #243404.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@11604