Julian Seward 35037b3ba1 amd64 back end: handle CmpNEZ64(And64(x,y)) better; ditto the 32 bit case.
Handle CmpNEZ64(And64(x,y)) by branching on flags, similarly to
CmpNEZ64(Or64(x,y)).  Ditto the 32-bit equivalents.  Also, remove the
expensive DEFINE_PATTERN/DECLARE_PATTERN uses there and hardwire the
matching logic.

n-i-bz.  This is in support of reducing the cost of expensiveCmpEQorNE
in memcheck.
2017-08-30 19:53:50 +02:00