Reverse Engineering for Beginners


CHAPTER 43. INLINE FUNCTIONS


.L24:
movzx eax, BYTE PTR [esi]
lea edi, [edx+11]
add esi, 1
test edi, 2
mov BYTE PTR [edx+10], al
mov eax, 122
je .L7
.L25:
movzx edx, WORD PTR [esi]
add edi, 2
add esi, 2
sub eax, 2
mov WORD PTR [edi-2], dx
jmp .L7
.LFE3:


Universal memory copy functions usually work as follows: calculate how many 32-bit words can be copied, copy these words using MOVSD, and then copy the remaining bytes.
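
To make the scheme concrete, here is a minimal C sketch of such a universal copy routine. The function name my_memcpy and its exact structure are assumptions for illustration only; they are not taken from any particular library's implementation.

#include <stdint.h>
#include <stddef.h>

// a minimal sketch of the scheme just described; the name my_memcpy
// and its exact structure are assumptions for illustration only
void* my_memcpy(void *dst, const void *src, size_t cnt)
{
	uint32_t *d32=(uint32_t*)dst;
	const uint32_t *s32=(const uint32_t*)src;
	size_t i;

	// 1) copy as many whole 32-bit words as possible
	//    (a compiler may emit this loop as REP MOVSD)
	for (i=0; i<cnt/4; i++)
		*d32++=*s32++;

	// 2) copy the remaining 0..3 tail bytes one by one
	uint8_t *d8=(uint8_t*)d32;
	const uint8_t *s8=(const uint8_t*)s32;
	for (i=0; i<cnt%4; i++)
		*d8++=*s8++;

	return dst;
};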


More complex copy functions use SIMD instructions and also take memory alignment into consideration. For an example of a SIMD strlen() function, see 25.2 on page 400.


43.1.6 memcmp().


Listing 43.22: memcmp() example

int memcmp_1235(char *buf1, char *buf2)
{
	return memcmp(buf2+10, buf1, 1235);
};


For any block size, MSVC 2010 inserts the same universal function:


Listing 43.23: Optimizing MSVC 2010

_buf1$ = 8 ; size = 4
_buf2$ = 12 ; size = 4
_memcmp_1235 PROC
mov edx, DWORD PTR _buf2$[esp-4]
mov ecx, DWORD PTR _buf1$[esp-4]
push esi
push edi
mov esi, 1235
add edx, 10
$LL4@memcmp_123:
mov eax, DWORD PTR [edx]
cmp eax, DWORD PTR [ecx]
jne SHORT $LN10@memcmp_123
sub esi, 4
add ecx, 4
add edx, 4
cmp esi, 4
jae SHORT $LL4@memcmp_123
$LN10@memcmp_123:
movzx edi, BYTE PTR [ecx]
movzx eax, BYTE PTR [edx]
sub eax, edi
jne SHORT $LN7@memcmp_123
movzx eax, BYTE PTR [edx+1]
movzx edi, BYTE PTR [ecx+1]
sub eax, edi
jne SHORT $LN7@memcmp_123
movzx eax, BYTE PTR [edx+2]
movzx edi, BYTE PTR [ecx+2]
sub eax, edi
jne SHORT $LN7@memcmp_123
cmp esi, 3
jbe SHORT $LN6@memcmp_123

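In C terms, the generated code follows a scheme roughly like the sketch below: compare the buffers one 32-bit word at a time, and as soon as a mismatching word is found (or fewer than 4 bytes remain), resolve the result byte by byte. The function name inline_memcmp and its structure are assumptions for illustration only, not MSVC's actual expansion; real implementations also have to care about alignment and aliasing.

#include <stdint.h>
#include <stddef.h>

// a minimal sketch of the DWORD-then-byte comparison scheme;
// the name inline_memcmp is hypothetical
static int inline_memcmp(const unsigned char *p1, const unsigned char *p2, size_t cnt)
{
	// compare 32-bit words while at least 4 bytes remain and the current words match
	// (real code must also handle alignment and strict aliasing)
	while (cnt>=4 && *(const uint32_t*)p1==*(const uint32_t*)p2)
	{
		p1+=4; p2+=4; cnt-=4;
	}

	// resolve the remaining (or mismatching) bytes one by one,
	// returning the difference of the first pair that differs
	while (cnt>0)
	{
		int diff=*p1 - *p2;
		if (diff!=0)
			return diff;
		p1++; p2++; cnt--;
	}
	return 0;
};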