Improved structure of LDT-related code:

- one declaration from core.h removed, one moved to within m_syscalls.
- all the x86 LDT stuff made local to m_syscalls.  x86-linux/ldt.c and
  x86/x86_private.h removed as a result; x86/state.c slimmed down, too.
- all the AMD64 LDT stuff was deleted, since it was all commented out.  It
  can be added back in later in the appropriate places if necessary.
  Thus amd64-linux/ldt.c and amd64/amd64_private.h were removed.
- other minor naming changes

I hope I didn't break AMD64 compilation.


git-svn-id: svn://svn.valgrind.org/valgrind/trunk@3726
Nicholas Nethercote 2005-05-15 20:52:04 +00:00
parent c14a5bb6a6
commit 7a6ee2fc93
18 changed files with 456 additions and 1149 deletions


@@ -11,8 +11,7 @@ noinst_LIBRARIES = libplatform.a
libplatform_a_SOURCES = \
core_platform.c \
ldt.c
core_platform.c
if USE_PIE
libplatform_a_CFLAGS = $(AM_CFLAGS) -fpie


@@ -35,29 +35,6 @@
//#include "core_platform_asm.h" // platform-specific asm stuff
//#include "platform_arch.h" // platform-specific tool stuff
/* ---------------------------------------------------------------------
Exports of vg_ldt.c
------------------------------------------------------------------ */
// XXX: eventually all these should be x86-private, and not visible to the
// core (except maybe do_useseg()?)
#if 0
/* Simulate the modify_ldt syscall. */
extern Int VG_(sys_modify_ldt) ( ThreadId tid,
Int func, void* ptr, UInt bytecount );
/* Simulate the {get,set}_thread_area syscalls. */
extern Int VG_(sys_set_thread_area) ( ThreadId tid,
vki_modify_ldt_t* info );
extern Int VG_(sys_get_thread_area) ( ThreadId tid,
vki_modify_ldt_t* info );
/* Called from generated code. Given a segment selector and a virtual
address, return a linear address, and do limit checks too. */
extern Addr VG_(do_useseg) ( UInt seg_selector, Addr virtual_addr );
#endif
/* ---------------------------------------------------------------------
ucontext stuff
------------------------------------------------------------------ */


@@ -1,495 +0,0 @@
/*--------------------------------------------------------------------*/
/*--- Simulation of Local Descriptor Tables amd64-linux/ldt.c ---*/
/*--------------------------------------------------------------------*/
/*
This file is part of Valgrind, a dynamic binary instrumentation
framework.
Copyright (C) 2000-2005 Julian Seward
jseward@acm.org
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
02111-1307, USA.
The GNU General Public License is contained in the file COPYING.
*/
// XXX: this is copied straight from the x86 code... perhaps they should be
// shared. (Are AMD64 LDTs the same as x86 LDTs? Don't know. --njn)
/* Details of the LDT simulation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a program runs natively, the linux kernel allows each *thread*
in it to have its own LDT. Almost all programs never do this --
it's wildly unportable, after all -- and so the kernel never
allocates the structure, which is just as well as an LDT occupies
64k of memory (8192 entries of size 8 bytes).
A thread may choose to modify its LDT entries, by doing the
__NR_modify_ldt syscall. In such a situation the kernel will then
allocate an LDT structure for it. Each LDT entry is basically a
(base, limit) pair. A virtual address in a specific segment is
translated to a linear address by adding the segment's base value.
In addition, the virtual address must not exceed the limit value.
To use an LDT entry, a thread loads one of the segment registers
(%cs, %ss, %ds, %es, %fs, %gs) with the index of the LDT entry (0
.. 8191) it wants to use. In fact, the required value is (index <<
3) + 7, but that's not important right now. Any normal instruction
which includes an addressing mode can then be made relative to that
LDT entry by prefixing the insn with a so-called segment-override
prefix, a byte which indicates which of the 6 segment registers
holds the LDT index.
Now, a key constraint is that valgrind's address checks operate in
terms of linear addresses. So we have to explicitly translate
virtual addrs into linear addrs, and that means doing a complete
LDT simulation.
Calls to modify_ldt are intercepted. For each thread, we maintain
an LDT (with the same normally-never-allocated optimisation that
the kernel does). This is updated as expected via calls to
modify_ldt.
When a thread does an amode calculation involving a segment
override prefix, the relevant LDT entry for the thread is
consulted. It all works.
There is a conceptual problem, which appears when switching back to
native execution, either temporarily to pass syscalls to the
kernel, or permanently, when debugging V. Problem at such points
is that it's pretty pointless to copy the simulated machine's
segment registers to the real machine, because we'd also need to
copy the simulated LDT into the real one, and that's prohibitively
expensive.
Fortunately it looks like no syscalls rely on the segment regs or
LDT being correct, so we can get away with it. Apart from that the
simulation is pretty straightforward. All 6 segment registers are
tracked, although only %ds, %es, %fs and %gs are allowed as
prefixes. Perhaps it could be restricted even more than that -- I
am not sure what is and isn't allowed in user-mode.
*/
#include "core.h"
#if 0
/* Maximum number of LDT entries supported (by the x86). */
#define VG_M_LDT_ENTRIES 8192
/* The size of each LDT entry == sizeof(VgLdtEntry) */
#define VG_LDT_ENTRY_SIZE 8
/* Allocate and deallocate LDTs for threads. */
/* Create an LDT. If the parent_ldt is NULL, zero out the
new one. If non-NULL, copy the parent. */
VgLdtEntry* VG_(allocate_LDT_for_thread) ( VgLdtEntry* parent_ldt )
{
UInt nbytes, i;
VgLdtEntry* ldt;
if (0)
VG_(printf)("allocate_LDT_for_thread: parent = %p\n", parent_ldt );
vg_assert(VG_LDT_ENTRY_SIZE == sizeof(VgLdtEntry));
nbytes = VG_M_LDT_ENTRIES * VG_LDT_ENTRY_SIZE;
if (parent_ldt == NULL) {
/* Allocate a new zeroed-out one. */
ldt = (VgLdtEntry*)VG_(arena_calloc)(VG_AR_CORE, nbytes, 1);
} else {
ldt = (VgLdtEntry*)VG_(arena_malloc)(VG_AR_CORE, nbytes);
for (i = 0; i < VG_M_LDT_ENTRIES; i++)
ldt[i] = parent_ldt[i];
}
return ldt;
}
/* Free an LDT created by the above function. */
void VG_(deallocate_LDT_for_thread) ( VgLdtEntry* ldt )
{
if (0)
VG_(printf)("deallocate_LDT_for_thread: ldt = %p\n", ldt );
if (ldt != NULL)
VG_(arena_free)(VG_AR_CORE, ldt);
}
/* Clear a TLS array. */
void VG_(clear_TLS_for_thread) ( VgLdtEntry* tls )
{
VgLdtEntry* tlsp;
if (0)
VG_(printf)("clear_TLS_for_thread\n" );
for (tlsp = tls; tlsp < tls + VKI_GDT_ENTRY_TLS_ENTRIES; tlsp++) {
tlsp->LdtEnt.Words.word1 = 0;
tlsp->LdtEnt.Words.word2 = 0;
}
return;
}
/* Fish the base field out of a VgLdtEntry. This is the only part we
are particularly interested in. */
static
void *wine_ldt_get_base( const VgLdtEntry *ent )
{
return (void *)(ent->LdtEnt.Bits.BaseLow |
((unsigned long)ent->LdtEnt.Bits.BaseMid) << 16 |
((unsigned long)ent->LdtEnt.Bits.BaseHi) << 24);
}
static
unsigned int wine_ldt_get_limit( const VgLdtEntry *ent )
{
unsigned int limit = ent->LdtEnt.Bits.LimitLow
| (ent->LdtEnt.Bits.LimitHi << 16);
if (ent->LdtEnt.Bits.Granularity) limit = (limit << 12) | 0xfff;
return limit;
}
#if 0
/* Actually _DO_ the segment translation. This is the whole entire
point of this accursed, overcomplicated, baroque, pointless
segment-override-and-LDT/GDT garbage foisted upon us all by Intel,
in its infinite wisdom.
THIS IS CALLED FROM GENERATED CODE (AND SO RUNS ON REAL CPU).
*/
Addr VG_(do_useseg) ( UInt seg_selector, Addr virtual_addr )
{
UInt table;
Addr base;
UInt limit;
if (0)
VG_(printf)("do_useseg: seg_selector = %p, vaddr = %p\n",
seg_selector, virtual_addr);
seg_selector &= 0x0000FFFF;
/* Sanity check the segment selector. Ensure that RPL=11b (least
privilege). This forms the bottom 2 bits of the selector. */
if ((seg_selector & 3) != 3) {
VG_(synth_fault)(VG_(get_current_tid)());
return 0;
}
/* Extract the table number */
table = (seg_selector & 4) >> 2;
/* Convert the segment selector onto a table index */
seg_selector >>= 3;
if (table == 0) {
VgLdtEntry* the_tls;
vg_assert(seg_selector >= VKI_GDT_ENTRY_TLS_MIN && seg_selector <= VKI_GDT_ENTRY_TLS_MAX);
/* Come up with a suitable GDT entry. We look at the thread's TLS
array, which is pointed to by a VG_(baseBlock) entry. */
the_tls = (VgLdtEntry*)VG_(baseBlock)[VGOFF_(tls_ptr)];
base = (Addr)wine_ldt_get_base ( &the_tls[seg_selector-VKI_GDT_ENTRY_TLS_MIN] );
limit = (UInt)wine_ldt_get_limit ( &the_tls[seg_selector-VKI_GDT_ENTRY_TLS_MIN] );
} else {
VgLdtEntry* the_ldt;
vg_assert(seg_selector >= 0 && seg_selector < 8192);
/* Come up with a suitable LDT entry. We look at the thread's LDT,
which is pointed to by a VG_(baseBlock) entry. If null, we will
use an implied zero-entry -- although this usually implies the
program is in deep trouble, since it is using LDT entries which
it probably hasn't set up. */
the_ldt = (VgLdtEntry*)VG_(baseBlock)[VGOFF_(ldt)];
if (the_ldt == NULL) {
base = 0;
limit = 0;
VG_(message)(
Vg_UserMsg,
"Warning: segment-override prefix encountered, but thread has no LDT"
);
} else {
base = (Addr)wine_ldt_get_base ( &the_ldt[seg_selector] );
limit = (UInt)wine_ldt_get_limit ( &the_ldt[seg_selector] );
}
}
/* Note, this check is just slightly too slack. Really it should
be "if (virtual_addr + size - 1 >= limit)," but we don't have the
size info to hand. Getting it could be significantly complex. */
if (virtual_addr >= limit) {
VG_(message)(
Vg_UserMsg,
"Warning: segment access: virtual addr %d exceeds segment limit of %d",
virtual_addr, limit
);
}
if (0)
VG_(printf)("do_useseg: base = %p, addr = %p\n",
base, base + virtual_addr);
return base + virtual_addr;
}
#endif
/* Translate a struct modify_ldt_ldt_s to a VgLdtEntry, using the
Linux kernel's logic (cut-n-paste of code in linux/kernel/ldt.c). */
static
void translate_to_hw_format ( /* IN */ vki_modify_ldt_t* inn,
/* OUT */ VgLdtEntry* out,
Int oldmode )
{
UInt entry_1, entry_2;
if (0)
VG_(printf)("translate_to_hw_format: base %p, limit %d\n",
inn->base_addr, inn->limit );
/* Allow LDTs to be cleared by the user. */
if (inn->base_addr == 0 && inn->limit == 0) {
if (oldmode ||
(inn->contents == 0 &&
inn->read_exec_only == 1 &&
inn->seg_32bit == 0 &&
inn->limit_in_pages == 0 &&
inn->seg_not_present == 1 &&
inn->useable == 0 )) {
entry_1 = 0;
entry_2 = 0;
goto install;
}
}
entry_1 = ((inn->base_addr & 0x0000ffff) << 16) |
(inn->limit & 0x0ffff);
entry_2 = (inn->base_addr & 0xff000000) |
((inn->base_addr & 0x00ff0000) >> 16) |
(inn->limit & 0xf0000) |
((inn->read_exec_only ^ 1) << 9) |
(inn->contents << 10) |
((inn->seg_not_present ^ 1) << 15) |
(inn->seg_32bit << 22) |
(inn->limit_in_pages << 23) |
0x7000;
if (!oldmode)
entry_2 |= (inn->useable << 20);
/* Install the new entry ... */
install:
out->LdtEnt.Words.word1 = entry_1;
out->LdtEnt.Words.word2 = entry_2;
}
/*
* linux/kernel/ldt.c
*
* Copyright (C) 1992 Krishna Balasubramanian and Linus Torvalds
* Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
*/
/*
* read_ldt() is not really atomic - this is not a problem since
* synchronization of reads and writes done to the LDT has to be
* assured by user-space anyway. Writes are atomic, to protect
* the security checks done on new descriptors.
*/
static
Int read_ldt ( ThreadId tid, UChar* ptr, UInt bytecount )
{
Int err;
UInt i, size;
Char* ldt;
if (0)
VG_(printf)("read_ldt: tid = %d, ptr = %p, bytecount = %d\n",
tid, ptr, bytecount );
ldt = (Char*)(VG_(threads)[tid].arch.ldt);
err = 0;
if (ldt == NULL)
/* LDT not allocated, meaning all entries are null */
goto out;
size = VG_M_LDT_ENTRIES * VG_LDT_ENTRY_SIZE;
if (size > bytecount)
size = bytecount;
err = size;
for (i = 0; i < size; i++)
ptr[i] = ldt[i];
out:
return err;
}
static
Int write_ldt ( ThreadId tid, void* ptr, UInt bytecount, Int oldmode )
{
Int error;
VgLdtEntry* ldt;
vki_modify_ldt_t* ldt_info;
if (0)
VG_(printf)("write_ldt: tid = %d, ptr = %p, "
"bytecount = %d, oldmode = %d\n",
tid, ptr, bytecount, oldmode );
ldt = VG_(threads)[tid].arch.ldt;
ldt_info = (vki_modify_ldt_t*)ptr;
error = -VKI_EINVAL;
if (bytecount != sizeof(vki_modify_ldt_t))
goto out;
error = -VKI_EINVAL;
if (ldt_info->entry_number >= VG_M_LDT_ENTRIES)
goto out;
if (ldt_info->contents == 3) {
if (oldmode)
goto out;
if (ldt_info->seg_not_present == 0)
goto out;
}
/* If this thread doesn't have an LDT, we'd better allocate it
now. */
if (ldt == NULL) {
ldt = VG_(allocate_LDT_for_thread)( NULL );
VG_(threads)[tid].arch.ldt = ldt;
}
/* Install the new entry ... */
translate_to_hw_format ( ldt_info, &ldt[ldt_info->entry_number], oldmode );
error = 0;
out:
return error;
}
Int VG_(sys_modify_ldt) ( ThreadId tid,
Int func, void* ptr, UInt bytecount )
{
Int ret = -VKI_ENOSYS;
switch (func) {
case 0:
ret = read_ldt(tid, ptr, bytecount);
break;
case 1:
ret = write_ldt(tid, ptr, bytecount, 1);
break;
case 2:
VG_(unimplemented)("sys_modify_ldt: func == 2");
/* god knows what this is about */
/* ret = read_default_ldt(ptr, bytecount); */
/*UNREACHED*/
break;
case 0x11:
ret = write_ldt(tid, ptr, bytecount, 0);
break;
}
return ret;
}
Int VG_(sys_set_thread_area) ( ThreadId tid,
vki_modify_ldt_t* info )
{
Int idx;
if (info == NULL)
return -VKI_EFAULT;
idx = info->entry_number;
if (idx == -1) {
for (idx = 0; idx < VKI_GDT_ENTRY_TLS_ENTRIES; idx++) {
VgLdtEntry* tls = VG_(threads)[tid].arch.tls + idx;
if (tls->LdtEnt.Words.word1 == 0 && tls->LdtEnt.Words.word2 == 0)
break;
}
if (idx == VKI_GDT_ENTRY_TLS_ENTRIES)
return -VKI_ESRCH;
} else if (idx < VKI_GDT_ENTRY_TLS_MIN || idx > VKI_GDT_ENTRY_TLS_MAX) {
return -VKI_EINVAL;
} else {
idx = info->entry_number - VKI_GDT_ENTRY_TLS_MIN;
}
translate_to_hw_format(info, VG_(threads)[tid].arch.tls + idx, 0);
VG_TRACK( pre_mem_write, Vg_CoreSysCall, tid,
"set_thread_area(info->entry)",
(Addr) & info->entry_number, sizeof(unsigned int) );
info->entry_number = idx + VKI_GDT_ENTRY_TLS_MIN;
VG_TRACK( post_mem_write, Vg_CoreSysCall, tid,
(Addr) & info->entry_number, sizeof(unsigned int) );
return 0;
}
Int VG_(sys_get_thread_area) ( ThreadId tid,
vki_modify_ldt_t* info )
{
Int idx;
VgLdtEntry* tls;
if (info == NULL)
return -VKI_EFAULT;
idx = info->entry_number;
if (idx < VKI_GDT_ENTRY_TLS_MIN || idx > VKI_GDT_ENTRY_TLS_MAX)
return -VKI_EINVAL;
tls = VG_(threads)[tid].arch.tls + idx - VKI_GDT_ENTRY_TLS_MIN;
info->base_addr = ( tls->LdtEnt.Bits.BaseHi << 24 ) |
( tls->LdtEnt.Bits.BaseMid << 16 ) |
tls->LdtEnt.Bits.BaseLow;
info->limit = ( tls->LdtEnt.Bits.LimitHi << 16 ) |
tls->LdtEnt.Bits.LimitLow;
info->seg_32bit = tls->LdtEnt.Bits.Default_Big;
info->contents = ( tls->LdtEnt.Bits.Type >> 2 ) & 0x3;
info->read_exec_only = ( tls->LdtEnt.Bits.Type & 0x1 ) ^ 0x1;
info->limit_in_pages = tls->LdtEnt.Bits.Granularity;
info->seg_not_present = tls->LdtEnt.Bits.Pres ^ 0x1;
info->useable = tls->LdtEnt.Bits.Sys;
info->reserved = 0;
return 0;
}
#endif
/*--------------------------------------------------------------------*/
/*--- end ---*/
/*--------------------------------------------------------------------*/


@@ -5,8 +5,7 @@ AM_CFLAGS = $(WERROR) -Winline -Wall -Wshadow -O -fomit-frame-pointer -g
noinst_HEADERS = \
core_arch.h \
core_arch_asm.h \
amd64_private.h
core_arch_asm.h
noinst_LIBRARIES = libarch.a


@@ -1,46 +0,0 @@
/*--------------------------------------------------------------------*/
/*--- Private arch-specific header. amd64/amd64_private.h ---*/
/*--------------------------------------------------------------------*/
/*
This file is part of Valgrind, a dynamic binary instrumentation
framework.
Copyright (C) 2000-2005 Nicholas Nethercote
njn@valgrind.org
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
02111-1307, USA.
The GNU General Public License is contained in the file COPYING.
*/
#ifndef __AMD64_PRIVATE_H
#define __AMD64_PRIVATE_H
#include "core_arch_asm.h" // arch-specific asm stuff
#include "tool_arch.h" // arch-specific tool stuff
/* ---------------------------------------------------------------------
Exports of state.c that are not core-visible
------------------------------------------------------------------ */
#endif // __AMD64_PRIVATE_H
/*--------------------------------------------------------------------*/
/*--- end ---*/
/*--------------------------------------------------------------------*/


@@ -30,7 +30,6 @@
#include "core.h"
#include "pub_core_tooliface.h"
#include "amd64_private.h"
#include <sys/ptrace.h>
#include "libvex_guest_amd64.h"
@@ -87,36 +86,6 @@ void VGA_(init_thread1state) ( Addr client_rip,
/*--- Thread stuff ---*/
/*------------------------------------------------------------*/
void VGA_(cleanup_thread) ( ThreadArchState *arch )
{
/* TODO: deallocate the thread's LDT / GDT ? */
}
void VGA_(setup_child) ( /*OUT*/ ThreadArchState *child,
/*IN*/ ThreadArchState *parent )
{
/* We inherit our parent's guest state. */
child->vex = parent->vex;
child->vex_shadow = parent->vex_shadow;
#if 0
/* TODO: inherit the thread's LDT / GDT ? */
/* We inherit our parent's LDT. */
if (parent->vex.guest_LDT == (HWord)NULL) {
/* We hope this is the common case. */
child->vex.guest_LDT = (HWord)NULL;
} else {
/* No luck .. we have to take a copy of the parent's. */
child->vex.guest_LDT = (HWord)VG_(alloc_zeroed_x86_LDT)();
copy_LDT_from_to( (VexGuestX86SegDescr*)parent->vex.guest_LDT,
(VexGuestX86SegDescr*)child->vex.guest_LDT );
}
/* We need an empty GDT. */
child->vex.guest_GDT = (HWord)NULL;
#endif
}
void VGA_(mark_from_registers)(ThreadId tid, void (*marker)(Addr))
{
ThreadState *tst = VG_(get_ThreadState)(tid);


@@ -805,10 +805,6 @@ extern void
Addr esp_at_startup,
/*MOD*/ ThreadArchState* arch );
// Thread stuff
extern void VGA_(cleanup_thread) ( ThreadArchState* );
extern void VGA_(setup_child) ( ThreadArchState*, ThreadArchState* );
// OS/Platform-specific thread clear (after thread exit)
extern void VGA_(os_state_clear)(ThreadState *);


@@ -45,7 +45,7 @@
/* ---------------------------------------------------------------------
Stacks, thread wrappers, clone
Stacks, thread wrappers
Note. Why is this stuff here?
------------------------------------------------------------------ */
@@ -107,7 +107,7 @@ static void restart_syscall(ThreadArchState *arch)
get called multiple times.
*/
/* NB: this is identical to the x86 version */
void VGA_(interrupted_syscall)(ThreadId tid,
void VGP_(interrupted_syscall)(ThreadId tid,
struct vki_ucontext *uc,
Bool restart)
{
@@ -301,9 +301,14 @@ static Int start_thread(void *arg)
VG_(core_panic)("Thread exit failed?\n");
}
/*
/* ---------------------------------------------------------------------
clone() handling
------------------------------------------------------------------ */
// forward declaration
static void setup_child ( ThreadArchState*, ThreadArchState* );
/*
When a client clones, we need to keep track of the new thread. This means:
1. allocate a ThreadId+ThreadState+stack for the thread
@@ -348,7 +353,7 @@ static Int do_clone(ThreadId ptid,
If the clone call specifies a NULL rsp for the new thread, then
it actually gets a copy of the parent's rsp.
*/
VGA_(setup_child)( &ctst->arch, &ptst->arch );
setup_child( &ctst->arch, &ptst->arch );
VGP_SET_SYSCALL_RESULT(ctst->arch, 0);
if (rsp != 0)
@@ -401,7 +406,7 @@ static Int do_clone(ThreadId ptid,
if (ret < 0) {
/* clone failed */
VGA_(cleanup_thread)(&ctst->arch);
VGP_(cleanup_thread)(&ctst->arch);
ctst->status = VgTs_Empty;
}
@@ -450,6 +455,22 @@ static Int do_fork_clone(ThreadId tid, UInt flags, Addr rsp, Int *parent_tidptr,
return ret;
}
/* ---------------------------------------------------------------------
More thread stuff
------------------------------------------------------------------ */
void VGP_(cleanup_thread) ( ThreadArchState *arch )
{
}
static void setup_child ( /*OUT*/ ThreadArchState *child,
/*IN*/ ThreadArchState *parent )
{
/* We inherit our parent's guest state. */
child->vex = parent->vex;
child->vex_shadow = parent->vex_shadow;
}
/* ---------------------------------------------------------------------
PRE/POST wrappers for AMD64/Linux-specific syscalls
------------------------------------------------------------------ */


@@ -43,7 +43,7 @@
/* ---------------------------------------------------------------------
Stacks, thread wrappers, clone
Stacks, thread wrappers
Note. Why is this stuff here?
------------------------------------------------------------------ */
@@ -105,7 +105,7 @@ static void restart_syscall(ThreadArchState *arch)
get called multiple times.
*/
/* NB: this is identical to the amd64 version */
void VGA_(interrupted_syscall)(ThreadId tid,
void VGP_(interrupted_syscall)(ThreadId tid,
struct vki_ucontext *uc,
Bool restart)
{
@@ -299,9 +299,15 @@ static Int start_thread(void *arg)
VG_(core_panic)("Thread exit failed?\n");
}
/*
/* ---------------------------------------------------------------------
clone() handling
------------------------------------------------------------------ */
// forward declarations
static void setup_child ( ThreadArchState*, ThreadArchState* );
static Int sys_set_thread_area ( ThreadId, vki_modify_ldt_t* );
/*
When a client clones, we need to keep track of the new thread. This means:
1. allocate a ThreadId+ThreadState+stack for the thread
@@ -346,7 +352,7 @@ static Int do_clone(ThreadId ptid,
If the clone call specifies a NULL esp for the new thread, then
it actually gets a copy of the parent's esp.
*/
VGA_(setup_child)( &ctst->arch, &ptst->arch );
setup_child( &ctst->arch, &ptst->arch );
VGP_SET_SYSCALL_RESULT(ctst->arch, 0);
if (esp != 0)
@@ -386,7 +392,7 @@ static Int do_clone(ThreadId ptid,
tlsinfo, tlsinfo->entry_number, tlsinfo->base_addr, tlsinfo->limit,
ptst->arch.vex.guest_ESP,
ctst->arch.vex.guest_FS, ctst->arch.vex.guest_GS);
ret = VG_(sys_set_thread_area)(ctid, tlsinfo);
ret = sys_set_thread_area(ctid, tlsinfo);
if (ret != 0)
goto out;
@@ -406,7 +412,7 @@ static Int do_clone(ThreadId ptid,
out:
if (ret < 0) {
/* clone failed */
VGA_(cleanup_thread)(&ctst->arch);
VGP_(cleanup_thread)(&ctst->arch);
ctst->status = VgTs_Empty;
}
@@ -455,6 +461,411 @@ static Int do_fork_clone(ThreadId tid, UInt flags, Addr esp, Int *parent_tidptr,
return ret;
}
/* ---------------------------------------------------------------------
LDT/GDT simulation
------------------------------------------------------------------ */
/* Details of the LDT simulation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a program runs natively, the linux kernel allows each *thread*
in it to have its own LDT. Almost all programs never do this --
it's wildly unportable, after all -- and so the kernel never
allocates the structure, which is just as well as an LDT occupies
64k of memory (8192 entries of size 8 bytes).
A thread may choose to modify its LDT entries, by doing the
__NR_modify_ldt syscall. In such a situation the kernel will then
allocate an LDT structure for it. Each LDT entry is basically a
(base, limit) pair. A virtual address in a specific segment is
translated to a linear address by adding the segment's base value.
In addition, the virtual address must not exceed the limit value.
To use an LDT entry, a thread loads one of the segment registers
(%cs, %ss, %ds, %es, %fs, %gs) with the index of the LDT entry (0
.. 8191) it wants to use. In fact, the required value is (index <<
3) + 7, but that's not important right now. Any normal instruction
which includes an addressing mode can then be made relative to that
LDT entry by prefixing the insn with a so-called segment-override
prefix, a byte which indicates which of the 6 segment registers
holds the LDT index.
Now, a key constraint is that valgrind's address checks operate in
terms of linear addresses. So we have to explicitly translate
virtual addrs into linear addrs, and that means doing a complete
LDT simulation.
Calls to modify_ldt are intercepted. For each thread, we maintain
an LDT (with the same normally-never-allocated optimisation that
the kernel does). This is updated as expected via calls to
modify_ldt.
When a thread does an amode calculation involving a segment
override prefix, the relevant LDT entry for the thread is
consulted. It all works.
There is a conceptual problem, which appears when switching back to
native execution, either temporarily to pass syscalls to the
kernel, or permanently, when debugging V. Problem at such points
is that it's pretty pointless to copy the simulated machine's
segment registers to the real machine, because we'd also need to
copy the simulated LDT into the real one, and that's prohibitively
expensive.
Fortunately it looks like no syscalls rely on the segment regs or
LDT being correct, so we can get away with it. Apart from that the
simulation is pretty straightforward. All 6 segment registers are
tracked, although only %ds, %es, %fs and %gs are allowed as
prefixes. Perhaps it could be restricted even more than that -- I
am not sure what is and isn't allowed in user-mode.
*/
/* Translate a struct modify_ldt_ldt_s to a VexGuestX86SegDescr, using
the Linux kernel's logic (cut-n-paste of code in
linux/kernel/ldt.c). */
static
void translate_to_hw_format ( /* IN */ vki_modify_ldt_t* inn,
/* OUT */ VexGuestX86SegDescr* out,
Int oldmode )
{
UInt entry_1, entry_2;
vg_assert(8 == sizeof(VexGuestX86SegDescr));
if (0)
VG_(printf)("translate_to_hw_format: base %p, limit %d\n",
inn->base_addr, inn->limit );
/* Allow LDTs to be cleared by the user. */
if (inn->base_addr == 0 && inn->limit == 0) {
if (oldmode ||
(inn->contents == 0 &&
inn->read_exec_only == 1 &&
inn->seg_32bit == 0 &&
inn->limit_in_pages == 0 &&
inn->seg_not_present == 1 &&
inn->useable == 0 )) {
entry_1 = 0;
entry_2 = 0;
goto install;
}
}
entry_1 = ((inn->base_addr & 0x0000ffff) << 16) |
(inn->limit & 0x0ffff);
entry_2 = (inn->base_addr & 0xff000000) |
((inn->base_addr & 0x00ff0000) >> 16) |
(inn->limit & 0xf0000) |
((inn->read_exec_only ^ 1) << 9) |
(inn->contents << 10) |
((inn->seg_not_present ^ 1) << 15) |
(inn->seg_32bit << 22) |
(inn->limit_in_pages << 23) |
0x7000;
if (!oldmode)
entry_2 |= (inn->useable << 20);
/* Install the new entry ... */
install:
out->LdtEnt.Words.word1 = entry_1;
out->LdtEnt.Words.word2 = entry_2;
}
/* Create a zeroed-out GDT. */
static VexGuestX86SegDescr* alloc_zeroed_x86_GDT ( void )
{
Int nbytes = VEX_GUEST_X86_GDT_NENT * sizeof(VexGuestX86SegDescr);
return VG_(arena_calloc)(VG_AR_CORE, nbytes, 1);
}
/* Create a zeroed-out LDT. */
static VexGuestX86SegDescr* alloc_zeroed_x86_LDT ( void )
{
Int nbytes = VEX_GUEST_X86_LDT_NENT * sizeof(VexGuestX86SegDescr);
return VG_(arena_calloc)(VG_AR_CORE, nbytes, 1);
}
/* Free up an LDT or GDT allocated by the above fns. */
static void free_LDT_or_GDT ( VexGuestX86SegDescr* dt )
{
vg_assert(dt);
VG_(arena_free)(VG_AR_CORE, (void*)dt);
}
/* Copy contents between two existing LDTs. */
static void copy_LDT_from_to ( VexGuestX86SegDescr* src,
VexGuestX86SegDescr* dst )
{
Int i;
vg_assert(src);
vg_assert(dst);
for (i = 0; i < VEX_GUEST_X86_LDT_NENT; i++)
dst[i] = src[i];
}
/* Free this thread's DTs, if it has any. */
static void deallocate_LGDTs_for_thread ( VexGuestX86State* vex )
{
vg_assert(sizeof(HWord) == sizeof(void*));
if (0)
VG_(printf)("deallocate_LGDTs_for_thread: "
"ldt = 0x%x, gdt = 0x%x\n",
vex->guest_LDT, vex->guest_GDT );
if (vex->guest_LDT != (HWord)NULL) {
free_LDT_or_GDT( (VexGuestX86SegDescr*)vex->guest_LDT );
vex->guest_LDT = (HWord)NULL;
}
if (vex->guest_GDT != (HWord)NULL) {
free_LDT_or_GDT( (VexGuestX86SegDescr*)vex->guest_GDT );
vex->guest_GDT = (HWord)NULL;
}
}
/*
* linux/kernel/ldt.c
*
* Copyright (C) 1992 Krishna Balasubramanian and Linus Torvalds
* Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
*/
/*
* read_ldt() is not really atomic - this is not a problem since
* synchronization of reads and writes done to the LDT has to be
* assured by user-space anyway. Writes are atomic, to protect
* the security checks done on new descriptors.
*/
static
Int read_ldt ( ThreadId tid, UChar* ptr, UInt bytecount )
{
Int err;
UInt i, size;
UChar* ldt;
if (0)
VG_(printf)("read_ldt: tid = %d, ptr = %p, bytecount = %d\n",
tid, ptr, bytecount );
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
vg_assert(8 == sizeof(VexGuestX86SegDescr));
ldt = (UChar*)(VG_(threads)[tid].arch.vex.guest_LDT);
err = 0;
if (ldt == NULL)
/* LDT not allocated, meaning all entries are null */
goto out;
size = VEX_GUEST_X86_LDT_NENT * sizeof(VexGuestX86SegDescr);
if (size > bytecount)
size = bytecount;
err = size;
for (i = 0; i < size; i++)
ptr[i] = ldt[i];
out:
return err;
}
static
Int write_ldt ( ThreadId tid, void* ptr, UInt bytecount, Int oldmode )
{
Int error;
VexGuestX86SegDescr* ldt;
vki_modify_ldt_t* ldt_info;
if (0)
VG_(printf)("write_ldt: tid = %d, ptr = %p, "
"bytecount = %d, oldmode = %d\n",
tid, ptr, bytecount, oldmode );
vg_assert(8 == sizeof(VexGuestX86SegDescr));
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
ldt = (VexGuestX86SegDescr*)VG_(threads)[tid].arch.vex.guest_LDT;
ldt_info = (vki_modify_ldt_t*)ptr;
error = -VKI_EINVAL;
if (bytecount != sizeof(vki_modify_ldt_t))
goto out;
error = -VKI_EINVAL;
if (ldt_info->entry_number >= VEX_GUEST_X86_LDT_NENT)
goto out;
if (ldt_info->contents == 3) {
if (oldmode)
goto out;
if (ldt_info->seg_not_present == 0)
goto out;
}
/* If this thread doesn't have an LDT, we'd better allocate it
now. */
if (ldt == NULL) {
ldt = alloc_zeroed_x86_LDT();
VG_(threads)[tid].arch.vex.guest_LDT = (HWord)ldt;
}
/* Install the new entry ... */
translate_to_hw_format ( ldt_info, &ldt[ldt_info->entry_number], oldmode );
error = 0;
out:
return error;
}
static Int sys_modify_ldt ( ThreadId tid,
Int func, void* ptr, UInt bytecount )
{
Int ret = -VKI_ENOSYS;
switch (func) {
case 0:
ret = read_ldt(tid, ptr, bytecount);
break;
case 1:
ret = write_ldt(tid, ptr, bytecount, 1);
break;
case 2:
VG_(unimplemented)("sys_modify_ldt: func == 2");
/* god knows what this is about */
/* ret = read_default_ldt(ptr, bytecount); */
/*UNREACHED*/
break;
case 0x11:
ret = write_ldt(tid, ptr, bytecount, 0);
break;
}
return ret;
}
static Int sys_set_thread_area ( ThreadId tid, vki_modify_ldt_t* info )
{
Int idx;
VexGuestX86SegDescr* gdt;
vg_assert(8 == sizeof(VexGuestX86SegDescr));
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
if (info == NULL)
return -VKI_EFAULT;
gdt = (VexGuestX86SegDescr*)VG_(threads)[tid].arch.vex.guest_GDT;
/* If the thread doesn't have a GDT, allocate it now. */
if (!gdt) {
gdt = alloc_zeroed_x86_GDT();
VG_(threads)[tid].arch.vex.guest_GDT = (HWord)gdt;
}
idx = info->entry_number;
if (idx == -1) {
/* Find and use the first free entry. */
for (idx = 0; idx < VEX_GUEST_X86_GDT_NENT; idx++) {
if (gdt[idx].LdtEnt.Words.word1 == 0
&& gdt[idx].LdtEnt.Words.word2 == 0)
break;
}
if (idx == VEX_GUEST_X86_GDT_NENT)
return -VKI_ESRCH;
} else if (idx < 0 || idx >= VEX_GUEST_X86_GDT_NENT) {
return -VKI_EINVAL;
}
translate_to_hw_format(info, &gdt[idx], 0);
VG_TRACK( pre_mem_write, Vg_CoreSysCall, tid,
"set_thread_area(info->entry)",
(Addr) & info->entry_number, sizeof(unsigned int) );
info->entry_number = idx;
VG_TRACK( post_mem_write, Vg_CoreSysCall, tid,
(Addr) & info->entry_number, sizeof(unsigned int) );
return 0;
}
static Int sys_get_thread_area ( ThreadId tid, vki_modify_ldt_t* info )
{
Int idx;
VexGuestX86SegDescr* gdt;
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
vg_assert(8 == sizeof(VexGuestX86SegDescr));
if (info == NULL)
return -VKI_EFAULT;
idx = info->entry_number;
if (idx < 0 || idx >= VEX_GUEST_X86_GDT_NENT)
return -VKI_EINVAL;
gdt = (VexGuestX86SegDescr*)VG_(threads)[tid].arch.vex.guest_GDT;
/* If the thread doesn't have a GDT, allocate it now. */
if (!gdt) {
gdt = alloc_zeroed_x86_GDT();
VG_(threads)[tid].arch.vex.guest_GDT = (HWord)gdt;
}
info->base_addr = ( gdt[idx].LdtEnt.Bits.BaseHi << 24 ) |
( gdt[idx].LdtEnt.Bits.BaseMid << 16 ) |
gdt[idx].LdtEnt.Bits.BaseLow;
info->limit = ( gdt[idx].LdtEnt.Bits.LimitHi << 16 ) |
gdt[idx].LdtEnt.Bits.LimitLow;
info->seg_32bit = gdt[idx].LdtEnt.Bits.Default_Big;
info->contents = ( gdt[idx].LdtEnt.Bits.Type >> 2 ) & 0x3;
info->read_exec_only = ( gdt[idx].LdtEnt.Bits.Type & 0x1 ) ^ 0x1;
info->limit_in_pages = gdt[idx].LdtEnt.Bits.Granularity;
info->seg_not_present = gdt[idx].LdtEnt.Bits.Pres ^ 0x1;
info->useable = gdt[idx].LdtEnt.Bits.Sys;
info->reserved = 0;
return 0;
}
/* ---------------------------------------------------------------------
More thread stuff
------------------------------------------------------------------ */
void VGP_(cleanup_thread) ( ThreadArchState* arch )
{
/* Release arch-specific resources held by this thread. */
/* On x86, we have to dump the LDT and GDT. */
deallocate_LGDTs_for_thread( &arch->vex );
}
static void setup_child ( /*OUT*/ ThreadArchState *child,
/*IN*/ ThreadArchState *parent )
{
/* We inherit our parent's guest state. */
child->vex = parent->vex;
child->vex_shadow = parent->vex_shadow;
/* We inherit our parent's LDT. */
if (parent->vex.guest_LDT == (HWord)NULL) {
/* We hope this is the common case. */
child->vex.guest_LDT = (HWord)NULL;
} else {
/* No luck .. we have to take a copy of the parent's. */
child->vex.guest_LDT = (HWord)alloc_zeroed_x86_LDT();
copy_LDT_from_to( (VexGuestX86SegDescr*)parent->vex.guest_LDT,
(VexGuestX86SegDescr*)child->vex.guest_LDT );
}
/* We need an empty GDT. */
child->vex.guest_GDT = (HWord)NULL;
}
/* ---------------------------------------------------------------------
PRE/POST wrappers for x86/Linux-specific syscalls
------------------------------------------------------------------ */
@@ -640,7 +1051,7 @@ PRE(sys_modify_ldt, Special)

PRE_MEM_READ( "modify_ldt(ptr)", ARG2, sizeof(vki_modify_ldt_t) );
}
/* "do" the syscall ourselves; the kernel never sees it */
SET_RESULT( VG_(sys_modify_ldt)( tid, ARG1, (void*)ARG2, ARG3 ) );
SET_RESULT( sys_modify_ldt( tid, ARG1, (void*)ARG2, ARG3 ) );
if (ARG1 == 0 && !VG_(is_kerror)(RES) && RES > 0) {
POST_MEM_WRITE( ARG2, RES );
@@ -654,7 +1065,7 @@ PRE(sys_set_thread_area, Special)
PRE_MEM_READ( "set_thread_area(u_info)", ARG1, sizeof(vki_modify_ldt_t) );
/* "do" the syscall ourselves; the kernel never sees it */
SET_RESULT( VG_(sys_set_thread_area)( tid, (void *)ARG1 ) );
SET_RESULT( sys_set_thread_area( tid, (void *)ARG1 ) );
}
PRE(sys_get_thread_area, Special)
@@ -664,7 +1075,7 @@ PRE(sys_get_thread_area, Special)
PRE_MEM_WRITE( "get_thread_area(u_info)", ARG1, sizeof(vki_modify_ldt_t) );
/* "do" the syscall ourselves; the kernel never sees it */
SET_RESULT( VG_(sys_get_thread_area)( tid, (void *)ARG1 ) );
SET_RESULT( sys_get_thread_area( tid, (void *)ARG1 ) );
if (!VG_(is_kerror)(RES)) {
POST_MEM_WRITE( ARG1, sizeof(vki_modify_ldt_t) );


@@ -47,10 +47,13 @@ extern void VG_(post_syscall) ( ThreadId tid );
// interrupted with a signal. Returns True if the syscall completed
// (either interrupted or finished normally), or False if it was
// restarted (or the signal didn't actually interrupt a syscall).
extern void VGA_(interrupted_syscall)(ThreadId tid,
extern void VGP_(interrupted_syscall)(ThreadId tid,
struct vki_ucontext *uc,
Bool restart);
// Release resources held by this thread
extern void VGP_(cleanup_thread) ( ThreadArchState* );
extern Bool VG_(is_kerror) ( Word res );
/* Internal atfork handlers */


@@ -551,7 +551,7 @@ void mostly_clear_thread_record ( ThreadId tid )
vki_sigset_t savedmask;
vg_assert(tid >= 0 && tid < VG_N_THREADS);
VGA_(cleanup_thread)(&VG_(threads)[tid].arch);
VGP_(cleanup_thread)(&VG_(threads)[tid].arch);
VG_(threads)[tid].tid = tid;
/* Leave the thread in Zombie, so that it doesn't get reallocated


@@ -1651,7 +1651,7 @@ void async_signalhandler ( Int sigNo, vki_siginfo_t *info, struct vki_ucontext *
sigNo, tid, info->si_code);
/* Update thread state properly */
VGA_(interrupted_syscall)(tid, uc,
VGP_(interrupted_syscall)(tid, uc,
!!(scss.scss_per_sig[sigNo].scss_flags & VKI_SA_RESTART));
/* Set up the thread's state to deliver a signal */


@@ -11,8 +11,7 @@ noinst_LIBRARIES = libplatform.a
libplatform_a_SOURCES = \
core_platform.c \
ldt.c
core_platform.c
if USE_PIE
libplatform_a_CFLAGS = $(AM_CFLAGS) -fpie


@@ -35,27 +35,6 @@
//#include "core_platform_asm.h" // platform-specific asm stuff
//#include "platform_arch.h" // platform-specific tool stuff
/* ---------------------------------------------------------------------
Exports of vg_ldt.c
------------------------------------------------------------------ */
// XXX: eventually all these should be x86-private, and not visible to the
// core (except maybe do_useseg()?)
/* Simulate the modify_ldt syscall. */
extern Int VG_(sys_modify_ldt) ( ThreadId tid,
Int func, void* ptr, UInt bytecount );
/* Simulate the {get,set}_thread_area syscalls. */
extern Int VG_(sys_set_thread_area) ( ThreadId tid,
vki_modify_ldt_t* info );
extern Int VG_(sys_get_thread_area) ( ThreadId tid,
vki_modify_ldt_t* info );
/* Called from generated code. Given a segment selector and a virtual
address, return a linear address, and do limit checks too. */
extern Addr VG_(do_useseg) ( UInt seg_selector, Addr virtual_addr );
/* ---------------------------------------------------------------------
ucontext stuff
------------------------------------------------------------------ */


@@ -1,357 +0,0 @@
/*--------------------------------------------------------------------*/
/*--- Simulation of Local Descriptor Tables x86-linux/ldt.c ---*/
/*--------------------------------------------------------------------*/
/*
This file is part of Valgrind, a dynamic binary instrumentation
framework.
Copyright (C) 2000-2005 Julian Seward
jseward@acm.org
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
02111-1307, USA.
The GNU General Public License is contained in the file COPYING.
*/
/* Details of the LDT simulation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a program runs natively, the linux kernel allows each *thread*
in it to have its own LDT. Almost all programs never do this --
it's wildly unportable, after all -- and so the kernel never
allocates the structure, which is just as well as an LDT occupies
64k of memory (8192 entries of size 8 bytes).
A thread may choose to modify its LDT entries, by doing the
__NR_modify_ldt syscall. In such a situation the kernel will then
allocate an LDT structure for it. Each LDT entry is basically a
(base, limit) pair. A virtual address in a specific segment is
translated to a linear address by adding the segment's base value.
In addition, the virtual address must not exceed the limit value.
To use an LDT entry, a thread loads one of the segment registers
(%cs, %ss, %ds, %es, %fs, %gs) with the index of the LDT entry (0
.. 8191) it wants to use. In fact, the required value is (index <<
3) + 7, but that's not important right now. Any normal instruction
which includes an addressing mode can then be made relative to that
LDT entry by prefixing the insn with a so-called segment-override
prefix, a byte which indicates which of the 6 segment registers
holds the LDT index.
Now, a key constraint is that valgrind's address checks operate in
terms of linear addresses. So we have to explicitly translate
virtual addrs into linear addrs, and that means doing a complete
LDT simulation.
Calls to modify_ldt are intercepted. For each thread, we maintain
an LDT (with the same normally-never-allocated optimisation that
the kernel does). This is updated as expected via calls to
modify_ldt.
When a thread does an amode calculation involving a segment
override prefix, the relevant LDT entry for the thread is
consulted. It all works.
There is a conceptual problem, which appears when switching back to
native execution, either temporarily to pass syscalls to the
kernel, or permanently, when debugging V. Problem at such points
is that it's pretty pointless to copy the simulated machine's
segment registers to the real machine, because we'd also need to
copy the simulated LDT into the real one, and that's prohibitively
expensive.
Fortunately it looks like no syscalls rely on the segment regs or
LDT being correct, so we can get away with it. Apart from that the
simulation is pretty straightforward. All 6 segment registers are
tracked, although only %ds, %es, %fs and %gs are allowed as
prefixes. Perhaps it could be restricted even more than that -- I
am not sure what is and isn't allowed in user-mode.
*/
#include "core.h"
#include "pub_core_tooliface.h"
#include "x86_private.h"
#include "libvex_guest_x86.h"
/* Translate a struct modify_ldt_ldt_s to a VexGuestX86SegDescr, using
the Linux kernel's logic (cut-n-paste of code in
linux/kernel/ldt.c). */
static
void translate_to_hw_format ( /* IN */ vki_modify_ldt_t* inn,
/* OUT */ VexGuestX86SegDescr* out,
Int oldmode )
{
UInt entry_1, entry_2;
vg_assert(8 == sizeof(VexGuestX86SegDescr));
if (0)
VG_(printf)("translate_to_hw_format: base %p, limit %d\n",
inn->base_addr, inn->limit );
/* Allow LDTs to be cleared by the user. */
if (inn->base_addr == 0 && inn->limit == 0) {
if (oldmode ||
(inn->contents == 0 &&
inn->read_exec_only == 1 &&
inn->seg_32bit == 0 &&
inn->limit_in_pages == 0 &&
inn->seg_not_present == 1 &&
inn->useable == 0 )) {
entry_1 = 0;
entry_2 = 0;
goto install;
}
}
entry_1 = ((inn->base_addr & 0x0000ffff) << 16) |
(inn->limit & 0x0ffff);
entry_2 = (inn->base_addr & 0xff000000) |
((inn->base_addr & 0x00ff0000) >> 16) |
(inn->limit & 0xf0000) |
((inn->read_exec_only ^ 1) << 9) |
(inn->contents << 10) |
((inn->seg_not_present ^ 1) << 15) |
(inn->seg_32bit << 22) |
(inn->limit_in_pages << 23) |
0x7000;
if (!oldmode)
entry_2 |= (inn->useable << 20);
/* Install the new entry ... */
install:
out->LdtEnt.Words.word1 = entry_1;
out->LdtEnt.Words.word2 = entry_2;
}
/*
* linux/kernel/ldt.c
*
* Copyright (C) 1992 Krishna Balasubramanian and Linus Torvalds
* Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
*/
/*
* read_ldt() is not really atomic - this is not a problem since
* synchronization of reads and writes done to the LDT has to be
* assured by user-space anyway. Writes are atomic, to protect
* the security checks done on new descriptors.
*/
static
Int read_ldt ( ThreadId tid, UChar* ptr, UInt bytecount )
{
Int err;
UInt i, size;
UChar* ldt;
if (0)
VG_(printf)("read_ldt: tid = %d, ptr = %p, bytecount = %d\n",
tid, ptr, bytecount );
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
vg_assert(8 == sizeof(VexGuestX86SegDescr));
ldt = (Char*)(VG_(threads)[tid].arch.vex.guest_LDT);
err = 0;
if (ldt == NULL)
/* LDT not allocated, meaning all entries are null */
goto out;
size = VEX_GUEST_X86_LDT_NENT * sizeof(VexGuestX86SegDescr);
if (size > bytecount)
size = bytecount;
err = size;
for (i = 0; i < size; i++)
ptr[i] = ldt[i];
out:
return err;
}
static
Int write_ldt ( ThreadId tid, void* ptr, UInt bytecount, Int oldmode )
{
Int error;
VexGuestX86SegDescr* ldt;
vki_modify_ldt_t* ldt_info;
if (0)
VG_(printf)("write_ldt: tid = %d, ptr = %p, "
"bytecount = %d, oldmode = %d\n",
tid, ptr, bytecount, oldmode );
vg_assert(8 == sizeof(VexGuestX86SegDescr));
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
ldt = (VexGuestX86SegDescr*)VG_(threads)[tid].arch.vex.guest_LDT;
ldt_info = (vki_modify_ldt_t*)ptr;
error = -VKI_EINVAL;
if (bytecount != sizeof(vki_modify_ldt_t))
goto out;
error = -VKI_EINVAL;
if (ldt_info->entry_number >= VEX_GUEST_X86_LDT_NENT)
goto out;
if (ldt_info->contents == 3) {
if (oldmode)
goto out;
if (ldt_info->seg_not_present == 0)
goto out;
}
/* If this thread doesn't have an LDT, we'd better allocate it
now. */
if (ldt == (HWord)NULL) {
ldt = VG_(alloc_zeroed_x86_LDT)();
VG_(threads)[tid].arch.vex.guest_LDT = (HWord)ldt;
}
/* Install the new entry ... */
translate_to_hw_format ( ldt_info, &ldt[ldt_info->entry_number], oldmode );
error = 0;
out:
return error;
}
Int VG_(sys_modify_ldt) ( ThreadId tid,
Int func, void* ptr, UInt bytecount )
{
Int ret = -VKI_ENOSYS;
switch (func) {
case 0:
ret = read_ldt(tid, ptr, bytecount);
break;
case 1:
ret = write_ldt(tid, ptr, bytecount, 1);
break;
case 2:
VG_(unimplemented)("sys_modify_ldt: func == 2");
/* god knows what this is about */
/* ret = read_default_ldt(ptr, bytecount); */
/*UNREACHED*/
break;
case 0x11:
ret = write_ldt(tid, ptr, bytecount, 0);
break;
}
return ret;
}
Int VG_(sys_set_thread_area) ( ThreadId tid,
vki_modify_ldt_t* info )
{
Int idx;
VexGuestX86SegDescr* gdt;
vg_assert(8 == sizeof(VexGuestX86SegDescr));
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
if (info == NULL)
return -VKI_EFAULT;
gdt = (VexGuestX86SegDescr*)VG_(threads)[tid].arch.vex.guest_GDT;
/* If the thread doesn't have a GDT, allocate it now. */
if (!gdt) {
gdt = VG_(alloc_zeroed_x86_GDT)();
VG_(threads)[tid].arch.vex.guest_GDT = (HWord)gdt;
}
idx = info->entry_number;
if (idx == -1) {
/* Find and use the first free entry. */
for (idx = 0; idx < VEX_GUEST_X86_GDT_NENT; idx++) {
if (gdt[idx].LdtEnt.Words.word1 == 0
&& gdt[idx].LdtEnt.Words.word2 == 0)
break;
}
if (idx == VEX_GUEST_X86_GDT_NENT)
return -VKI_ESRCH;
} else if (idx < 0 || idx >= VEX_GUEST_X86_GDT_NENT) {
return -VKI_EINVAL;
}
translate_to_hw_format(info, &gdt[idx], 0);
VG_TRACK( pre_mem_write, Vg_CoreSysCall, tid,
"set_thread_area(info->entry)",
(Addr) & info->entry_number, sizeof(unsigned int) );
info->entry_number = idx;
VG_TRACK( post_mem_write, Vg_CoreSysCall, tid,
(Addr) & info->entry_number, sizeof(unsigned int) );
return 0;
}
Int VG_(sys_get_thread_area) ( ThreadId tid,
vki_modify_ldt_t* info )
{
Int idx;
VexGuestX86SegDescr* gdt;
vg_assert(sizeof(HWord) == sizeof(VexGuestX86SegDescr*));
vg_assert(8 == sizeof(VexGuestX86SegDescr));
if (info == NULL)
return -VKI_EFAULT;
idx = info->entry_number;
if (idx < 0 || idx >= VEX_GUEST_X86_GDT_NENT)
return -VKI_EINVAL;
gdt = (VexGuestX86SegDescr*)VG_(threads)[tid].arch.vex.guest_GDT;
/* If the thread doesn't have a GDT, allocate it now. */
if (!gdt) {
gdt = VG_(alloc_zeroed_x86_GDT)();
VG_(threads)[tid].arch.vex.guest_GDT = (HWord)gdt;
}
info->base_addr = ( gdt[idx].LdtEnt.Bits.BaseHi << 24 ) |
( gdt[idx].LdtEnt.Bits.BaseMid << 16 ) |
gdt[idx].LdtEnt.Bits.BaseLow;
info->limit = ( gdt[idx].LdtEnt.Bits.LimitHi << 16 ) |
gdt[idx].LdtEnt.Bits.LimitLow;
info->seg_32bit = gdt[idx].LdtEnt.Bits.Default_Big;
info->contents = ( gdt[idx].LdtEnt.Bits.Type >> 2 ) & 0x3;
info->read_exec_only = ( gdt[idx].LdtEnt.Bits.Type & 0x1 ) ^ 0x1;
info->limit_in_pages = gdt[idx].LdtEnt.Bits.Granularity;
info->seg_not_present = gdt[idx].LdtEnt.Bits.Pres ^ 0x1;
info->useable = gdt[idx].LdtEnt.Bits.Sys;
info->reserved = 0;
return 0;
}
/*--------------------------------------------------------------------*/
/*--- end ---*/
/*--------------------------------------------------------------------*/


@@ -5,8 +5,7 @@ AM_CFLAGS = $(WERROR) -Wmissing-prototypes -Winline -Wall -Wshadow -O -g
noinst_HEADERS = \
core_arch.h \
core_arch_asm.h \
x86_private.h
core_arch_asm.h
noinst_LIBRARIES = libarch.a


@@ -30,7 +30,6 @@
#include "core.h"
#include "pub_core_tooliface.h"
#include "x86_private.h"
#include "vki_unistd.h"
#include <sys/ptrace.h>
@@ -141,104 +140,10 @@ void VGA_(init_thread1state) ( Addr client_eip,
sizeof(VexGuestArchState));
}
/*------------------------------------------------------------*/
/*--- LDT/GDT stuff ---*/
/*------------------------------------------------------------*/
/* Create a zeroed-out GDT. */
VexGuestX86SegDescr* VG_(alloc_zeroed_x86_GDT) ( void )
{
Int nbytes = VEX_GUEST_X86_GDT_NENT * sizeof(VexGuestX86SegDescr);
return VG_(arena_calloc)(VG_AR_CORE, nbytes, 1);
}
/* Create a zeroed-out LDT. */
VexGuestX86SegDescr* VG_(alloc_zeroed_x86_LDT) ( void )
{
Int nbytes = VEX_GUEST_X86_LDT_NENT * sizeof(VexGuestX86SegDescr);
return VG_(arena_calloc)(VG_AR_CORE, nbytes, 1);
}
/* Free up an LDT or GDT allocated by the above fns. */
static void free_LDT_or_GDT ( VexGuestX86SegDescr* dt )
{
vg_assert(dt);
VG_(arena_free)(VG_AR_CORE, (void*)dt);
}
/* Copy contents between two existing LDTs. */
static void copy_LDT_from_to ( VexGuestX86SegDescr* src,
VexGuestX86SegDescr* dst )
{
Int i;
vg_assert(src);
vg_assert(dst);
for (i = 0; i < VEX_GUEST_X86_LDT_NENT; i++)
dst[i] = src[i];
}
/* Free this thread's DTs, if it has any. */
static void deallocate_LGDTs_for_thread ( VexGuestX86State* vex )
{
vg_assert(sizeof(HWord) == sizeof(void*));
if (0)
VG_(printf)("deallocate_LGDTs_for_thread: "
"ldt = 0x%x, gdt = 0x%x\n",
vex->guest_LDT, vex->guest_GDT );
if (vex->guest_LDT != (HWord)NULL) {
free_LDT_or_GDT( (VexGuestX86SegDescr*)vex->guest_LDT );
vex->guest_LDT = (HWord)NULL;
}
if (vex->guest_GDT != (HWord)NULL) {
free_LDT_or_GDT( (VexGuestX86SegDescr*)vex->guest_GDT );
vex->guest_GDT = (HWord)NULL;
}
}
/*------------------------------------------------------------*/
/*--- Thread stuff ---*/
/*------------------------------------------------------------*/
void VGA_(cleanup_thread) ( ThreadArchState* arch )
{
/* Release arch-specific resources held by this thread. */
/* On x86, we have to dump the LDT and GDT. */
deallocate_LGDTs_for_thread( &arch->vex );
}
void VGA_(setup_child) ( /*OUT*/ ThreadArchState *child,
/*IN*/ ThreadArchState *parent )
{
/* We inherit our parent's guest state. */
child->vex = parent->vex;
child->vex_shadow = parent->vex_shadow;
/* We inherit our parent's LDT. */
if (parent->vex.guest_LDT == (HWord)NULL) {
/* We hope this is the common case. */
child->vex.guest_LDT = (HWord)NULL;
} else {
/* No luck .. we have to take a copy of the parent's. */
child->vex.guest_LDT = (HWord)VG_(alloc_zeroed_x86_LDT)();
copy_LDT_from_to( (VexGuestX86SegDescr*)parent->vex.guest_LDT,
(VexGuestX86SegDescr*)child->vex.guest_LDT );
}
/* We need an empty GDT. */
child->vex.guest_GDT = (HWord)NULL;
}
void VGA_(mark_from_registers)(ThreadId tid, void (*marker)(Addr))
{
ThreadState *tst = VG_(get_ThreadState)(tid);


@@ -1,52 +0,0 @@
/*--------------------------------------------------------------------*/
/*--- Private arch-specific header. x86/x86_private.h ---*/
/*--------------------------------------------------------------------*/
/*
This file is part of Valgrind, a dynamic binary instrumentation
framework.
Copyright (C) 2000-2005 Nicholas Nethercote
njn@valgrind.org
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
02111-1307, USA.
The GNU General Public License is contained in the file COPYING.
*/
#ifndef __X86_PRIVATE_H
#define __X86_PRIVATE_H
#include "core_arch_asm.h" // arch-specific asm stuff
#include "tool_arch.h" // arch-specific tool stuff
#include "libvex_guest_x86.h" // for VexGuestX86SegDescr
/* ---------------------------------------------------------------------
Exports of state.c that are not core-visible
------------------------------------------------------------------ */
/* Create LDT/GDT arrays, as specified in libvex_guest_x86.h. */
extern VexGuestX86SegDescr* VG_(alloc_zeroed_x86_GDT) ( void );
extern VexGuestX86SegDescr* VG_(alloc_zeroed_x86_LDT) ( void );
#endif // __X86_PRIVATE_H
/*--------------------------------------------------------------------*/
/*--- end ---*/
/*--------------------------------------------------------------------*/