Re: An idea how to speed up computer programs and avoid waiting. ("event driven memory system")

 
 
Skybuck Flying
      08-01-2011, 10:22 PM


"Skybuck Flying" wrote in message news:...

Also, in case you have trouble understanding the Pascal code, I shall try to
write down some simple and easy-to-understand pseudo code and some
explanations/comments. Let's see:

What the code does and assumes is the following:

General description (this information is not that important, but it might give
you the general idea):

"Memory" is a pointer towards a big linear/sequentially memory block of
bytes/integers. The memory block contains many integers. These integers are
"virtually" subdivided into seperate blocks.

Each block contains 8000 integers. This is indicated by ElementCount, so
ElementCount equals 8000.

There are 4000 blocks. This is indicated by BlockCount, so BlockCount equals
4000.

So in total there are 4000 blocks * 8000 elements * 4 bytes = 128,000,000
bytes of memory.

Each block is like a chain of indexes.

Each index points to the next index within its block.

The indexes are thrown around inside the block during initialisation to
create a random access pattern.

This is also known as the "pointer chasing problem" so it could also be
called the "index chasing problem".

Index -> Index -> Index -> Index -> Index and so forth.

The first index must be retrieved from memory; only then does one know where
the next index is located, and so on.
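
(The initialisation code is not shown in this posting. A minimal sketch of
what the "throwing around" could look like, assuming a random cyclic
permutation per block built with Sattolo's algorithm, so the chase starting at
element 0 visits every element; Randomize would be called once at program
start:)

// hypothetical initialisation sketch: build one random cycle per block,
// so chasing indexes from element 0 walks the whole block
procedure InitialiseBlocks(var Memory: array of Integer;
                           BlockCount, ElementCount: Integer);
var
  BlockIndex, BlockBase, I, J, Temp: Integer;
begin
  for BlockIndex := 0 to BlockCount - 1 do
  begin
    BlockBase := BlockIndex * ElementCount;
    for I := 0 to ElementCount - 1 do
      Memory[BlockBase + I] := I;          // identity permutation first
    for I := ElementCount - 1 downto 1 do  // Sattolo shuffle
    begin
      J := Random(I);                      // 0..I-1, never I itself
      Temp := Memory[BlockBase + I];
      Memory[BlockBase + I] := Memory[BlockBase + J];
      Memory[BlockBase + J] := Temp;
    end;
  end;
end;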

There is an ElementIndex variable which is initialized to zero.

So each ElementIndex simply starts at element 0 of block 0.

There could be 4000 ElementIndexes in parallel, one per block, each starting
at element 0 of its block X.

Or there could simply be one ElementIndex, processing each block in turn.

In version 0.03 "BlockBase" was introduced.

BlockBase is an index which points to the first element of a block.

So it's a stored value, to avoid doing the multiplication all the time.

In the code below 3 blocks in parallel are attempted.

Now for some further pseudo code:

// routine

// variables:

// each block is processed repeatedly loop count times.
// loop index indicates the current loop/round
LoopIndex

BlockIndexA // indicates block a number
BlockIndexB // indicates block b number
BlockIndexC // indicates block c number

ElementIndexA // indicates block a element index
ElementIndexB // indicates block b element index
ElementIndexC // indicates block c element index

ElementCount // number of elements per block
BlockCount // number of blocks
LoopCount // number of loops/chases per block

BlockBaseA // starting index of block a
BlockBaseB // starting index of block b
BlockBaseC // starting index of block c

Memory // contains all integers of all blocks of all elements. So it's an
array/block of elements/integers.

RoutineBegin

ElementCount = 8000
BlockCount = 4000
LoopCount = 80000

BlockIndexA = 0
BlockIndexB = 1
BlockIndexC = 2

// loop over all blocks, process 3 blocks per loop iteration.
// slight correction:
// wrong: "BlockIndex goes from 0 to 7999 divided over 3 indexes"
// somewhat correct: "BlockIndex goes from 0 to 3999 divided over 3 indexes"
// so BlockIndexA is 0, 3, 6, 9, 12, etc
// so BlockIndexB is 1, 4, 7, 10, 13, etc
// so BlockIndexC is 2, 5, 8, 11, 14, etc
FirstLoopBegin

// calculate the starting index of each block.
// formula is: Base/Index/Offset = block number * number of elements per block.
BlockBaseA = BlockIndexA * vElementCount;
BlockBaseB = BlockIndexB * vElementCount;
BlockBaseC = BlockIndexC * vElementCount;

// initialise each element index to the first index of the block.
ElementIndexA = 0
ElementIndexB = 0
ElementIndexC = 0

// loop 80000 times through the block arrays/elements
SecondLoopBegin "LoopIndex goes from 0 to 79999"

// Seek into memory at location BlockBase + ElementIndex
// do this for A, B and C:
// retrieve the new element index via the old element index
ElementIndexA = mMemory[ vBlockBaseA + vElementIndexA ]
ElementIndexB = mMemory[ vBlockBaseB + vElementIndexB ]
ElementIndexC = mMemory[ vBlockBaseC + vElementIndexC ]

SecondLoopEnd

// store some bullshit in BlockResult; this step could be skipped,
// but let's not, so the compiler doesn't optimize anything away.
// perhaps BlockResult should be printed just in case.
// the last retrieved index is stored into it.
// do this for A, B, C

BlockResult[ BlockIndexA ] = vElementIndexA
BlockResult[ BlockIndexB ] = vElementIndexB
BlockResult[ BlockIndexC ] = vElementIndexC

// once the 3 blocks have been processed go to the next 3 blocks
BlockIndexA = vBlockIndexA + 3
BlockIndexB = vBlockIndexB + 3
BlockIndexC = vBlockIndexC + 3

FirstLoopEnd
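
And in actual Pascal the core could look roughly like this (just a sketch, not
the actual version 0.03 source; the names are guessed to match the assembly
listings later in this thread, and since 4000 is not divisible by 3, a
single-block tail loop mops up the leftover block):

procedure ChaseThreeBlocks(const Memory: array of Integer;
                           var BlockResult: array of Integer;
                           BlockCount, ElementCount, LoopCount: Integer);
var
  BlockIndexA, BlockIndexB, BlockIndexC: Integer;
  BlockBaseA, BlockBaseB, BlockBaseC: Integer;
  ElementIndexA, ElementIndexB, ElementIndexC: Integer;
  LoopIndex: Integer;
begin
  BlockIndexA := 0;
  BlockIndexB := 1;
  BlockIndexC := 2;
  while BlockIndexC <= BlockCount - 1 do
  begin
    BlockBaseA := BlockIndexA * ElementCount;
    BlockBaseB := BlockIndexB * ElementCount;
    BlockBaseC := BlockIndexC * ElementCount;
    ElementIndexA := 0;
    ElementIndexB := 0;
    ElementIndexC := 0;
    for LoopIndex := 0 to LoopCount - 1 do
    begin
      // three independent chases, so the three loads can be in flight
      ElementIndexA := Memory[BlockBaseA + ElementIndexA];
      ElementIndexB := Memory[BlockBaseB + ElementIndexB];
      ElementIndexC := Memory[BlockBaseC + ElementIndexC];
    end;
    BlockResult[BlockIndexA] := ElementIndexA; // keep the results so the
    BlockResult[BlockIndexB] := ElementIndexB; // compiler can't optimize
    BlockResult[BlockIndexC] := ElementIndexC; // the loop away
    Inc(BlockIndexA, 3);
    Inc(BlockIndexB, 3);
    Inc(BlockIndexC, 3);
  end;
  // tail: 4000 mod 3 = 1, so one leftover block is chased on its own
  while BlockIndexA <= BlockCount - 1 do
  begin
    BlockBaseA := BlockIndexA * ElementCount;
    ElementIndexA := 0;
    for LoopIndex := 0 to LoopCount - 1 do
      ElementIndexA := Memory[BlockBaseA + ElementIndexA];
    BlockResult[BlockIndexA] := ElementIndexA;
    Inc(BlockIndexA);
  end;
end;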

Perhaps this clarifies it a little bit.

Let me know if you have any further questions.

Bye,
Skybuck.

 
 
 
 
 
Bernhard Schornak
      08-07-2011, 12:40 PM
Skybuck Flying wrote:


> "Skybuck Flying" wrote in message news:...
>
> Also in case you have troubles understanding the pascal code then I shall
> try and write down some simply and easy to understand pseudo code and some
> explanations/comments, let's see:
>
> What the code does and assumes is the following:
>
> general description, this information not that important but might give you
> the general idea:
>
> "Memory" is a pointer towards a big linear/sequentially memory block of
> bytes/integers. The memory block contains many integers. These integers are
> "virtually" subdivided into seperate blocks.
>
> Each block contains 8000 integers. This is indicated by ElementCount, so
> ElementCount equals 8000.
>
> There are 4000 blocks. This is indicated by BlockCount, so BlockCount equals
> 4000.
>
> So in total there are 4000 blocks * 8000 elements * 4 bytes = xxxx bytes of
> memory.
>
> Each block is like a chain of indexes.
>
> Each index point towards the next index within it's block.
>
> The indexes are thrown around inside the block during initialisation to
> create a random access pattern.
>
> This is also known as the "pointer chasing problem" so it could also be
> called the "index chasing problem".
>
> Index -> Index -> Index -> Index -> Index and so forth.
>
> The first index must be retrieved from memory, only then does one know where
> the next index is located and so on.
>
> There is an ElementIndex variable which is initialized to zero.
>
> So each ElementIndex simply starts at element 0 of block 0.
>
> There could be 8000 ElementIndexes in parallel all starting at element 0 of
> block X.
>
> Or there could simply be on ElementIndex and process each block in turn.
>
> in version 0.03 "BlockBase" was introduced.
>
> BlockBase as an index which points towards the first element of a block.
>
> So it's a storage to prevent multiplications all the time.



4000 is 0x0FA0, 8000 is 0x1F40. If we expand these sizes
to 4096 and 8192, we get the ranges 0x0000...0x0FFF
(0x1FFF). This allows simple bit masks to sort out invalid
indices and simplifies address calculations markedly.

If we see the entire thing as an array of 4096 pages with
8192 elements each, we just have to create one large array
with a size of 4096 * 8192 * 4 = 134,217,728 bytes.
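
In Pascal terms, a sketch (the constants assume 8,192 =
2^13 elements per block and 4,096 = 2^12 blocks): the
range check becomes an AND, the base becomes a shift.

function FetchElement(const Memory: array of Integer;
                      BlockNumber, RawIndex: Integer): Integer;
const
  ElementShift = 13;    // 8,192 elements per block = 1 shl 13
  ElementMask  = $1FFF; // keeps any element index in 0..8191
  BlockMask    = $0FFF; // keeps any block number in 0..4095
begin
  // base = block * 8192 is a shift; validation is a mask
  Result := Memory[((BlockNumber and BlockMask) shl ElementShift) +
                   (RawIndex and ElementMask)];
end;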


> In the code below 3 blocks in parallel are attempted.



A very bad idea. L1 cache is 65,536 bytes on recent CPUs,
but only 32,768 bytes on future processors, e.g. Zambezi.
Three blocks occupy 3 * 32,768 = 98,304 bytes, so 32,768
(or 65,536!) bytes do not fit into L1. The result: memory
access is slowed down from 3 (L1) to 6 (L2), 12 (L3) or
27 (RAM) clocks per access, depending on where the cache
lines have to be fetched from or written to.

If you want to process multiple blocks in parallel, it's
better to check how many physical processors exist on the
user's machine, then create that many threads (minus one
for the OS), each running on its own physical processor.
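
A rough sketch of that in Pascal - assuming a Delphi/Free
Pascal TThread, a CPUCount value from the RTL, and a
hypothetical ChaseOneBlock routine that does the
single-block chase:

uses
  Classes;

type
  TChaseThread = class(TThread)
  private
    FFirst, FLast: Integer;  // contiguous block range of this thread
  public
    constructor Create(AFirst, ALast: Integer);
    procedure Execute; override;
  end;

constructor TChaseThread.Create(AFirst, ALast: Integer);
begin
  FFirst := AFirst;              // set fields before the thread runs
  FLast  := ALast;
  inherited Create(False);       // start immediately
end;

procedure TChaseThread.Execute;
var
  BlockIndex: Integer;
begin
  for BlockIndex := FFirst to FLast do
    ChaseOneBlock(BlockIndex);   // hypothetical single-block chase
end;

procedure RunInParallel(BlockCount: Integer);
var
  ThreadCount, I, PerThread, First, Last: Integer;
  Threads: array of TChaseThread;
begin
  ThreadCount := CPUCount - 1;   // leave one core for the OS
  if ThreadCount < 1 then
    ThreadCount := 1;
  PerThread := BlockCount div ThreadCount;
  SetLength(Threads, ThreadCount);
  for I := 0 to ThreadCount - 1 do
  begin
    First := I * PerThread;
    if I = ThreadCount - 1 then
      Last := BlockCount - 1     // last thread takes the remainder
    else
      Last := First + PerThread - 1;
    Threads[I] := TChaseThread.Create(First, Last);
  end;
  for I := 0 to ThreadCount - 1 do
  begin
    Threads[I].WaitFor;          // join, then clean up
    Threads[I].Free;
  end;
end;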


> Now for some further pseudo code:
>
> // routine
>
> // variables:
>
> // each block is processed repeatedly loop count times.
> // loop index indicates the current loop/round
> LoopIndex
>
> BlockIndexA // indicates block a number
> BlockIndexB // indicates block b number
> BlockIndexC // indicates block c number
>
> ElementIndexA // indicates block a element index
> ElementIndexB // indicates block b element index
> ElementIndexC // indicates block c element index
>
> ElementCount // number of elements per block
> BlockCount // number of blocks
> LoopCount // number of loops/chases per block
>
> BlockBaseA // starting index of block a
> BlockBaseB // starting index of block b
> BlockBaseC // starting index of block c
>
> Memory // contains all integers of all blocks of all elements. So it's an
> array/block of elements/integers.
>
> RoutineBegin
>
> ElementCount = 8000
> BlockCount = 4000
> LoopCount = 80000
>
> BlockIndexA = 0
> BlockIndexB = 1
> BlockIndexC = 2
>
> // loop over all blocks, process 3 blocks per loop iteration.
> // slight correction:
> // wrong: "BlockIndex goes from 0 to 7999 divided over 3 indexes"
> // somewhat correct: "BlockIndex goes from 0 to 3999 divided over 3 indexes"
> // so BlockIndexA is 0, 3, 6, 9, 12, etc
> // so BlockIndexB is 1, 4, 7, 10, 13, etc
> // so BlockIndexC is 2, 5, 8, 11, 14, etc
> FirstLoopBegin
>
> // calculate the starting index of each block.
> // formula is: Base/Index/Offset = block number * number of elements per
> block.
> BlockBaseA = BlockIndexA * vElementCount;
> BlockBaseB = BlockIndexB * vElementCount;
> BlockBaseC = BlockIndexC * vElementCount;



With 8,192 elements per block, the multiplication can be
replaced by a shift:

BlockBaseN = BlockIndexN << 15

reducing the VectorPath MUL (blocking all pipes for five
clocks) to a DirectPath SHL (one clock, one pipe).
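
(Note the shift count depends on what BlockBaseN is
measured in: as an element index it is a shift by 13,
since 8,192 = 2^13; the << 15 form is the byte offset,
8,192 * 4 = 32,768 = 2^15. The element-index form as one
line of Pascal:)

vBlockBase := vBlockIndex shl 13; // same as * 8192, no MUL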


> // initialise each element index to the first index of the block.
> ElementIndexA = 0
> ElementIndexB = 0
> ElementIndexC = 0
>
> // loop 80000 times through the block arrays/elements
> SecondLoopBegin "LoopIndex goes from 0 to 79999"
>
> // Seek into memory at location BlockBase + ElementIndex
> // do this for A, B and C:
> // retrieve the new element index via the old element index
> ElementIndexA = mMemory[ vBlockBaseA + vElementIndexA ]
> ElementIndexB = mMemory[ vBlockBaseB + vElementIndexB ]
> ElementIndexC = mMemory[ vBlockBaseC + vElementIndexC ]
>
> SecondLoopEnd



I still do not understand what this is good for, but you
should consider processing one block after the other. If
you are eager to introduce something like "parallel
processing", divide your entire array into as many parts
as there are processors (hopefully a multiple of two),
then let those threads each process a contiguous part of
memory in parallel.

If you prefer a single threaded solution, it's better to
process one block after the other, because entire blocks
fit into L1 completely. Any "touched" element is present
in L1 after the first access, three registers are enough
to manage the entire loop, faster algorithms can be used
to process multiple elements in parallel (all pipes busy
most of the time), and so on.


> // store some bullshit in block result, this step could be skipped.
> // but let's not so compiler doesn't optimize anything away
> // perhaps block result should be printed just in case.
> // the last retrieved index is store into it.
> // do this for A,B,C
>
> BlockResult[ BlockIndexA ] = vElementIndexA
> BlockResult[ BlockIndexB ] = vElementIndexB
> BlockResult[ BlockIndexC ] = vElementIndexC
>
> // once the 3 blocks have been processed go to the next 3 blocks
> BlockIndexA = vBlockIndexA + 3
> BlockIndexB = vBlockIndexB + 3
> BlockIndexC = vBlockIndexC + 3
>
> FirstLoopEnd
>
> Perhaps this clearifies it a little bit.
>
> Let me know if you have any further questions



As shown, your current approach is quite slow - it might
be worth a thought to change your current design towards
something that makes use of built-in acceleration
mechanisms like caching and multiple execution pipelines.

Using multiples of two (rather than ten) simplifies
calculations markedly. Hence, the first thing to change
would be the size of your array - 4,096 blocks of 8,192
elements don't occupy that much additional memory;
6,217,728 bytes are peanuts compared to the entire array.
In return, you save a lot of time otherwise wasted on
calculating addresses and validating indices (AND
automatically keeps your indices within the valid range).

The slightly changed design would use addressing modes like

mov REG,block_offset[block_base + index * 4] or

mov block_offset[block_base + index * 4],REG

allowing you to step through the entire array with some
very simple (and fast) additions and shifts. For example,
the given function requires one register for block_base
plus block_offset, which has to be updated only once per
outer loop, while the inner loop just fetches indexed
elements and writes them back. With address calculations
reduced to a single register, 12 registers are left for
the real work (two registers are required as loop
counters).


Greetings from Augsburg

Bernhard Schornak

 
 
 
 
 
Skybuck Flying
      08-08-2011, 01:37 PM
"
4000 is 0x0FA0, 8000 is 0x1F40.
"

Ok, this could be handy for potentially spotting these constants/parameters
in your code.

"
If we expand these sizes
to 4096 and 8192, we get ranges 0x0000...0x0FFF(0x1FFF).
This allows simple bit masks to sort out invalid indices
and simplifies address calculations markably.
"

Why would you want to spot invalid indices? At least my Pascal code seems
already OK. Perhaps this is to help you debug your assembler code?

Address calculations should be possible for any setting; this allows
flexibility in testing different scenarios.

"
If we see the entire thing as an array with 4096 pages
8192 elements, we just have to create a large array with
a size of 4096 * 8192 * 4 = 134,217,728 byte.
"

Irrelevant; any size can be allocated easily, at least in a high level
language. Perhaps this helps you in assembler?

> In the code below 3 blocks in parallel are attempted.


"
A very bad idea.
"

This was your idea after all.

It might be bad for the current settings, but maybe there is some value in it
for other settings.

"
L1 cache is 65,536 byte on recent CPUs,
but only 32,768 byte on future processors. e.g. Zambezi.
Three blocks occupy 3 * 32,768 = 98,304 byte, so: 32,768
(or 65,536!) byte do not fit into L1. The result: Memory
access is slowed down from 3 (L1) to 6 (L2), 12 (L3) or
27 (RAM) clocks per access, depending on where the cache
lines have to be fetched from or written to.

If you want to process multiple blocks in parallel, it's
better to check, how many physical processors exist on a
user's machine, then create that amount threads (minus 1
for the OS) running on one physical processor, each.
"

So far the examples have all executed well inside the cache, because of the
low settings which almost fit in L1 cache.

However it would also be interesting to change the settings to, for example, a
block size of 1920x1200 elements, to mimic a single frame of a video codec,
which might want to do some random memory access.

So then the L1 cache will probably be woefully short, and then you can still
try and see if your "pairing/parallel" processing of frames might give
higher throughput.

However I already do see a little problem with this idea. Usually video
codec frames are processed sequentially, so it might be difficult or even
impossible to process multiple frames at the same time.

However a frame does consist of r, g, b and sometimes maybe even an alpha
channel.

So there could be 3 to 4 channels, so your idea of processing 3 blocks at
the same time might still have some merit, when each color channel is
considered a block.

Another interesting setting could be to reduce the settings to make everything
fit entirely into the L1 cache, to see if your pairing/multiple-loading code
also becomes faster when it does fit in cache.

Writing multi-threading code for "random access memory" could become tricky,
especially if writes are involved as well. Processors might conflict with
each other.

As long as processors operate within their own blocks and can never access
each other's blocks, it should be OK.

Some care should still be taken at the borders of the blocks; inserting some
padding there could help prevent accidental/unlucky simultaneous access. So
it does become a little bit more tricky to write code like that... because of
the padding, allocations have to be done a little bit differently, as well as
the formulas. I can see it becoming a bit messy, though some variable names
could help solve the mess as follows:

Allocate( mElementCount + mPaddingCount );

^ Then the padding count could be used in formulas/code where necessary, or
left out where not necessary.

Like mod mElementCount to set up indexes correctly.
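
Something like this could work (a sketch: round the element count up to the
next power of two, so the shift trick applies, while the chase still only
uses the first mElementCount elements of each block):

type
  TIntegerDynArray = array of Integer;

function NextPowerOfTwo(N: Integer): Integer;
begin
  Result := 1;
  while Result < N do
    Result := Result shl 1;
end;

procedure AllocatePadded(BlockCount, ElementCount: Integer;
                         out Memory: TIntegerDynArray;
                         out PaddedCount, PaddingCount: Integer);
begin
  PaddedCount  := NextPowerOfTwo(ElementCount); // 8000 -> 8192
  PaddingCount := PaddedCount - ElementCount;   // 192 unused elements/block
  // the block stride is now a power of two, so the base is a shift:
  // vBlockBase := vBlockIndex shl 13; the padded tail is never chased
  SetLength(Memory, BlockCount * PaddedCount);
end;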

"
With 8,192 elements per block, the multiplication can be
replaced by a shift:

BlockBaseN = BlockIndexN << 15

reducing the VectorPath MUL (blocking all pipes for five
clocks) to a DirectPath SHL (one clock, one pipe).
"

While the explanation of this optimization idea/trick is interesting,
unfortunately it cannot really be used in the example code.

The example code needs to remain flexible, to try out different scenarios, to
see if your idea can be useful for something.

> // initialise each element index to the first index of the block.
> ElementIndexA = 0
> ElementIndexB = 0
> ElementIndexC = 0
>
> // loop 80000 times through the block arrays/elements
> SecondLoopBegin "LoopIndex goes from 0 to 79999"
>
> // Seek into memory at location BlockBase + ElementIndex
> // do this for A, B and C:
> // retrieve the new element index via the old element index
> ElementIndexA = mMemory[ vBlockBaseA + vElementIndexA ]
> ElementIndexB = mMemory[ vBlockBaseB + vElementIndexB ]
> ElementIndexC = mMemory[ vBlockBaseC + vElementIndexC ]
>
> SecondLoopEnd


"
I still do not understand what this is good for, but you
should consider to process one block after the other. If
you are eager to introduce something like "parallel pro-
cessing", divide your entire array into as much parts as
there are processors (hopefully a multiple of two), then
let those threads process a contiguous part of memory in
parallel.
"

The code above represents your "parallel block" idea, where there would be 3
loads in parallel.

The mMemory[ ... ] is a load. See one of your earlier postings where you
wrote about that idea.

You "claimed" the X2 processor can do 3 loads in parallel, more or less.

The code above tries to take advantage of that.

"
If you prefer a single threaded solution, it's better to
process one block after the other, because entire blocks
fit into L1 completely.
"

Perhaps your parallel loading idea might also be faster in such a situation.

If not, then the opposite situation of not fitting into L1 could be tested as
well, to see if your parallel loading idea gives more performance.

"
Any "touched" element is present
in L1 after the first access, three registers are enough
to manage the entire loop, faster algorithms can be used
to process multiple elements in parallel (all pipes busy
most of the time), and so on,
"

?


> // store some bullshit in block result, this step could be skipped.
> // but let's not so compiler doesn't optimize anything away
> // perhaps block result should be printed just in case.
> // the last retrieved index is store into it.
> // do this for A,B,C
>
> BlockResult[ BlockIndexA ] = vElementIndexA
> BlockResult[ BlockIndexB ] = vElementIndexB
> BlockResult[ BlockIndexC ] = vElementIndexC
>
> // once the 3 blocks have been processed go to the next 3 blocks
> BlockIndexA = vBlockIndexA + 3
> BlockIndexB = vBlockIndexB + 3
> BlockIndexC = vBlockIndexC + 3
>
> FirstLoopEnd
>
> Perhaps this clearifies it a little bit.
>
> Let me know if you have any further questions


"
As shown, your current approach is quite slow
"

Which one?

The single block version (which should be quite fast) or the three block
version (which was your idea!).

"
- it might
be worth a thought to change your current design towards
something making use of built-in acceleration mechanisms
like caching and multiple execution pipelines.
"

It already does caching more or less by accident; multiple execution
pipelines, that was tried with your idea.

I think it does do both a bit... I don't know exactly how much it already
does these things.

A processor simulator would be necessary to see what exactly is happening.

But I can tell a little bit from re-arranging the code that it does some of
these things that you mentioned.

So perhaps the "three block" version is already optimal, but it's still
slower than the "one block" version, at least for 32 bit, because then it
fits in L1 cache. However, different settings should still be tried to see
which one is faster then.

Also, your 64 bit assembly code should still be tried; it's not tried yet.

"
Using multiples of two (rather than ten) simplifies cal-
culations markably. Hence, the first thing to change was
the size of your array - 4,096 blocks and 8,192 elements
don't occupy that much additional memory, 6,217,728 byte
are peanuts compared to the entire array. In return, you
save a lot of time wasted with calculating addresses and
validating indices (AND automatically keeps your indices
within the valid range).
"

As long as loading and writing are not affected by the extra padding bytes,
and as long as the example executes correctly, then the arrays could be
padded with extra bytes, if that makes address calculations faster without
changing the nature of the example.

However, if the example suddenly produced different results then it would be
a problem.

"
The slightly changed design used addressing modes like

mov REG,block_offset[block_base + index * 4] or

mov block_offset[block_base + index * 4],REG
"

What's the difference here?

One seems to be reading, the other seems to be writing?

So apparently you are comparing this code to something else? But what?

The shifting code?

"
allowing to step through the entire array with some very
simple (and fast) additions and shifts. For example, the
given function required one register for block_base plus
block_offset. This had to be updated only once per outer
loop, while the inner loop just fetches indexed elements
and writes them back. Reducing address calculations to a
single register, 12 registers are left for the real work
(two registers are required as loop counters).
"

Ok, I think I now grasp your concept of using the shifting tricks and why it
would lead to higher throughput.

Now that I look at your shifting code again, it seems to be applied to the
block number.

So at least the blocks could be padded for the elements. The number of
actual elements would then be a bit higher; some elements would be unused.

The example should remain at 4000 blocks and 8000 elements "virtually", but
in reality there could be more... however the "more" should not be processed
and should only be there to speed up the addressing.

However, it remains to be seen what the effect of this padding could be. For
single block processing it doesn't seem to be relevant, and caching would be
unaffected; the padded elements would not be cached, except perhaps at the
very end if hit.

For the multiple blocks idea this could waste a little bit of caching space,
but it might not be too bad.

So there is some merit in your idea.

It could be interesting to write some general code to try and always apply
this padding trick and the shifting trick.

Chances are high I will try to write this soon and see what happens. I
expect it to run a bit faster, after reading your comments and explanation of it.


Bye,
Skybuck =D

 
 
Skybuck Flying
      08-09-2011, 12:57 PM
I tried the shift left trick you mentioned.

For the 32 bit version it doesn't really matter and doesn't give significant
performance increases, at least with the current settings.

For what it's worth, here is the assembly output.

So from the looks of it, it isn't worth the trouble; perhaps results might be
different for 64 bit or other settings.

// *** Begin of SHL trick, assembly output for 32 bit: ***

unit_TCPUMemoryTest_version_001.pas.364: begin
0040FD44 53 push ebx
0040FD45 56 push esi
0040FD46 57 push edi
0040FD47 83C4F4 add esp,-$0c
unit_TCPUMemoryTest_version_001.pas.365: vElementShift := mElementShift;
0040FD4A 0FB6501C movzx edx,[eax+$1c]
0040FD4E 891424 mov [esp],edx
unit_TCPUMemoryTest_version_001.pas.366: vBlockCount := mBlockCount;
0040FD51 8B5010 mov edx,[eax+$10]
unit_TCPUMemoryTest_version_001.pas.367: vLoopCount := mLoopCount;
0040FD54 8B4814 mov ecx,[eax+$14]
0040FD57 894C2404 mov [esp+$04],ecx
unit_TCPUMemoryTest_version_001.pas.370: for vBlockIndex := 0 to vBlockCount-1 do
0040FD5B 4A dec edx
0040FD5C 85D2 test edx,edx
0040FD5E 7232 jb $0040fd92
0040FD60 42 inc edx
0040FD61 89542408 mov [esp+$08],edx
0040FD65 33F6 xor esi,esi
unit_TCPUMemoryTest_version_001.pas.372: vBlockBase := vBlockIndex shl vElementShift;
0040FD67 8B0C24 mov ecx,[esp]
0040FD6A 8BDE mov ebx,esi
0040FD6C D3E3 shl ebx,cl
unit_TCPUMemoryTest_version_001.pas.374: vElementIndex := 0;
0040FD6E 33D2 xor edx,edx
unit_TCPUMemoryTest_version_001.pas.377: for vLoopIndex := 0 to vLoopCount-1 do
0040FD70 8B4C2404 mov ecx,[esp+$04]
0040FD74 49 dec ecx
0040FD75 85C9 test ecx,ecx
0040FD77 720C jb $0040fd85
0040FD79 41 inc ecx
unit_TCPUMemoryTest_version_001.pas.379: vElementIndex := mMemory[ vBlockBase + vElementIndex ];
0040FD7A 03D3 add edx,ebx
0040FD7C 8B7804 mov edi,[eax+$04]
0040FD7F 8B1497 mov edx,[edi+edx*4]
unit_TCPUMemoryTest_version_001.pas.377: for vLoopIndex := 0 to vLoopCount-1 do
0040FD82 49 dec ecx
0040FD83 75F5 jnz $0040fd7a
unit_TCPUMemoryTest_version_001.pas.382: mBlockResult[ vBlockIndex ] := vElementIndex;
0040FD85 8B4808 mov ecx,[eax+$08]
0040FD88 8914B1 mov [ecx+esi*4],edx
unit_TCPUMemoryTest_version_001.pas.383: end;
0040FD8B 46 inc esi
unit_TCPUMemoryTest_version_001.pas.370: for vBlockIndex := 0 to vBlockCount-1 do
0040FD8C FF4C2408 dec dword ptr [esp+$08]
0040FD90 75D5 jnz $0040fd67
unit_TCPUMemoryTest_version_001.pas.384: end;
0040FD92 83C40C add esp,$0c
0040FD95 5F pop edi
0040FD96 5E pop esi
0040FD97 5B pop ebx
0040FD98 C3 ret

// *** End of SHL trick, assembly output for 32 bit: ***

Bye,
Skybuck.

 
 
Bernhard Schornak
      08-09-2011, 07:46 PM
Skybuck Flying wrote:


>> In the code below 3 blocks in parallel are attempted.

>
> "
> A very bad idea.
> "
>
> This was your idea after all



Not that I know of. If I remember correctly, you posted
this stuff...


> It might be bad for current settings, but maybe there is some value in it for other settings.
>
> "
> L1 cache is 65,536 byte on recent CPUs,
> but only 32,768 byte on future processors. e.g. Zambezi.
> Three blocks occupy 3 * 32,768 = 98,304 byte, so: 32,768
> (or 65,536!) byte do not fit into L1. The result: Memory
> access is slowed down from 3 (L1) to 6 (L2), 12 (L3) or
> 27 (RAM) clocks per access, depending on where the cache
> lines have to be fetched from or written to.
>
> If you want to process multiple blocks in parallel, it's
> better to check, how many physical processors exist on a
> user's machine, then create that amount threads (minus 1
> for the OS) running on one physical processor, each.
> "
>
> So far the examples have all executed well inside the cache because of the low settings
> which almost fit in L1 cache.
>
> However it would also be interesting to change settings to for example a block size of
> 1920x1200 elements to mimic a single frame of a video codec for example, which might want
> to do some random access memory.
>
> So then the L1 cache will probably be whoefully short and then you can still try and see
> if your "pairing/parallel" processing of frames might give higher throughput.



Read some recent processor manuals to get an overview.


> However I already do see a little problem with this idea. Usually video codec frames are
> processed sequentially, so it might be difficult to impossible to process multiple at the
> same time.



Possible with multiple cores. GPUs have hundreds of them.


> However a frame does consists out of r,g,b and sometimes maybe even alpha channel.



These four bytes fit into a dword.


> So there could be 3 to 4 channels, so your idea of processing 3 blocks at the same time
> might still have some merit, when each color channel is considered a block.



I never wrote anything about "processing three blocks at
the same time" - this probably is the result of your
misunderstanding of how my examples really work.


> Another interesting setting could be to reduce the settings to make it fit entirely into
> the L1 cache to see if your pairing/multiple loading code also becomes faster for when it
> does fit in cache.



As you can read everywhere else, it does. If cache didn't
work as an accelerator, no recent processor would have
-any- cache. Caches are the most expensive part of any
processor. They occupy huge areas on a die and increase
production costs.


> Writing multi threading code for "random access memory" could become tricky. Especially if
> writes are involved as well.
> Processors might conflict with each other.



Writes are not the bottleneck, because they are scheduled
by the memory controller (no wait cycles). The real brake
is reads. The processor has to wait until the requested
data are present - it cannot work with "guessed" data.


> As long as processors operate within their own blocks and can never access each other
> blocks then it should be ok.



As far as I understood your postings, your definitions do
not allow inter-block access.


> Some care should still be taken at the border of the blocks, inserting some paddings there
> could help prevent accidently/unlucky simultanious access. So it does become a little bit
> more tricky to write code like that... because of the padding, allocations have to be done
> a little bit different, as well as formula's. I can see it become a bit messy though some
> variable names could help solve the mess as follows:
>
> Allocate( mElementCount + mPaddingCount );
>
> ^ Then padding count could be used in formula's/code where necessary or left out where not
> necessary.
>
> Like mod mElementCount to setup indexes correctly.



Which is faster than an "and reg,mask" (1 clock, 1 pipe),
executed while the processor waits for data?


> "
> With 8,192 elements per block, the multiplication can be
> replaced by a shift:
>
> BlockBaseN = BlockIndexN << 15
>
> reducing the VectorPath MUL (blocking all pipes for five
> clocks) to a DirectPath SHL (one clock, one pipe).
> "
>
> While to explanation of this optimization idea/trick is interesting unfortunately it
> cannot be really used in the example code,
>
> The example code needs to remain flexible to try out different scenerio's to see if your
> idea can be usefull for something



Which idea? What I posted until now were suggestions for
how your ideas could be improved. If I wanted to advertise
my own ideas, I would start my own threads rather than
reply to other people's postings.


>> // initialise each element index to the first index of the block.
>> ElementIndexA = 0
>> ElementIndexB = 0
>> ElementIndexC = 0
>>
>> // loop 80000 times through the block arrays/elements
>> SecondLoopBegin "LoopIndex goes from 0 to 79999"
>>
>> // Seek into memory at location BlockBase + ElementIndex
>> // do this for A, B and C:
>> // retrieve the new element index via the old element index
>> ElementIndexA = mMemory[ vBlockBaseA + vElementIndexA ]
>> ElementIndexB = mMemory[ vBlockBaseB + vElementIndexB ]
>> ElementIndexC = mMemory[ vBlockBaseC + vElementIndexC ]
>>
>> SecondLoopEnd

>
> "
> I still do not understand what this is good for, but you
> should consider to process one block after the other. If
> you are eager to introduce something like "parallel pro-
> cessing", divide your entire array into as much parts as
> there are processors (hopefully a multiple of two), then
> let those threads process a contiguous part of memory in
> parallel.
> "
>
> The code above represents your "parallel block" idea, where there would be 3 loads in
> parallel.



This still is your code, and this stupid idea is based on
your misunderstanding of my explanations of how Athlons
work (three execution pipes do not speed up reads -
especially not if each of them has to read data from a
different, in most cases uncached, memory location).


> The mMemory[ ... ] is a load. See one of your earlier postings of yourself where you write
> about that idea.
>
> You "claimed" the X2 processor can do 3 loads in parallel more or less.



<j10mg4$2km$(E-Mail Removed)>

I explain, but don't claim anything. What I wrote here is
a short summary of AMD's reference manuals and practice -
not more, not less.


> The code above tries to take adventage of that.



No. The fastest way to perform this task is to make some
-minor- changes to avoid time consuming instructions like
MUL, and to switch to sequential (not random) reads as
long as this is possible.
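
The difference as a Pascal sketch: a sequential scan like
this can be streamed by the hardware prefetcher, because
the next address never depends on the previous load.

function SumBlock(const Memory: array of Integer;
                  BlockBase, ElementCount: Integer): Integer;
var
  I: Integer;
begin
  Result := 0;
  for I := 0 to ElementCount - 1 do
    Inc(Result, Memory[BlockBase + I]); // address known in advance
end;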


> "
> If you prefer a single threaded solution, it's better to
> process one block after the other, because entire blocks
> fit into L1 completely.
> "
>
> Perhaps your parallel loading idea might also be faster in such a situation.



If you implement it properly, yes. Accessing three memory
areas at distances of 32,768 bytes surely isn't a proper
implementation. My example in

<j10mg4$2km$(E-Mail Removed)>

still shows how to speed up reads. Clearly in sequential,
not in random order...


> If not then the opposite situation of not fitting into L1 could be tested as well to see
> if your parallel loading idea gives more performance.
>
> "
> Any "touched" element is present
> in L1 after the first access, three registers are enough
> to manage the entire loop, faster algorithms can be used
> to process multiple elements in parallel (all pipes busy
> most of the time), and so on,
> "
>
> ?
>
>
>> // store some bullshit in block result, this step could be skipped.
>> // but let's not so compiler doesn't optimize anything away
>> // perhaps block result should be printed just in case.
>> // the last retrieved index is store into it.
>> // do this for A,B,C
>>
>> BlockResult[ BlockIndexA ] = vElementIndexA
>> BlockResult[ BlockIndexB ] = vElementIndexB
>> BlockResult[ BlockIndexC ] = vElementIndexC
>>
>> // once the 3 blocks have been processed go to the next 3 blocks
>> BlockIndexA = vBlockIndexA + 3
>> BlockIndexB = vBlockIndexB + 3
>> BlockIndexC = vBlockIndexC + 3
>>
>> FirstLoopEnd
>>
>> Perhaps this clearifies it a little bit.
>>
>> Let me know if you have any further questions

>
> "
> As shown, your current approach is quite slow
> "
>
> Which one ?
>
> The single block version (which should be quite fast) or the three block version (which
> was your idea ! ).
>
> "
> - it might
> be worth a thought to change your current design towards
> something making use of built-in acceleration mechanisms
> like caching and multiple execution pipelines.
> "
>
> It already does caching by accident more or less, multiple execution pipelines that was
> tried with your idea.
>
> I think it does do both a bit... I don't know exactly how much it already does these things.
>
> A processor simulator would be necessary to see what exactly is happening.



The best processor simulator is the human brain (and some
knowledge about the target machine).


> But I can tell a little bit from re-arranging the code that it does some of these things
> that you mentioned.
>
> So perhaps the "three block" version is already optimal but it's still slower than the
> "one block" version, at least for 32 bit because then it fits in L1 cache, however
> different settings should still be tried to see which one if faster then.
>
> Also 64 bit assembly code of yours should still be tried, it's not tried yet.
>
> "
> Using multiples of two (rather than ten) simplifies cal-
> culations markably. Hence, the first thing to change was
> the size of your array - 4,096 blocks and 8,192 elements
> don't occupy that much additional memory, 6,217,728 byte
> are peanuts compared to the entire array. In return, you
> save a lot of time wasted with calculating addresses and
> validating indices (AND automatically keeps your indices
> within the valid range).
> "
>
> As long as loading and writing is not affected by the extra padding bytes, and as long as
> the example executes correctly then arrays could be padding with extra bytes if it makes
> address calculations faster without changing the nature of the example.
>
> However if the example suddenly produces different results then it would be a problem.
>
> "
> The slightly changed design used addressing modes like
>
> mov REG,block_offset[block_base + index * 4] or
>
> mov block_offset[block_base + index * 4],REG
> "
>
> What's the difference here ?
>
> One seems reading the other seems writing ?



Yes.


> So apperently you comparing this code to something else ? But what ?



RDI = block count
RSI = current block address
RBP = current index to read

prepare:
prefetch [rsi]                      # PREFETCH needs a memory operand
mov edi,block_count
mov ebp,(loop_cnt - 1)              # initialise index before first pass
inner_loop:
mov eax,[rsi+0x00+rbp*4]            # movl 0x00(%rsi, %rbp, 4),%eax
mov ebx,[rsi+0x04+rbp*4]
mov ecx,[rsi+0x08+rbp*4]
mov edx,[rsi+0x0C+rbp*4]
mov r10d,[rsi+0x10+rbp*4]           # 32 bit elements: r10d...r13d,
mov r11d,[rsi+0x14+rbp*4]           # not the 64 bit r10...r13
mov r12d,[rsi+0x18+rbp*4]
mov r13d,[rsi+0x1C+rbp*4]
nop
....
[process data here]
....
sub ebp,8                           # step back 8 elements (32 byte)
jae inner_loop                      # loop while no borrow occurred
outer_loop:
mov ebp,(loop_cnt - 1)
add esi,block_size                  # advance to the next block
dec edi
jne inner_loop
....


Reads 8 elements per iteration. EAX is present after the
NOP, EBX...R13D are present in the following cycles (one
access per cycle). The loop reads half of an Athlon's
cache line size (one Bulldozer cache line). Replacing the
NOP with a PREFETCH [RSI+RBP*4] preloads the next cache
line to save a few clock cycles. This is the code for the
entire loop; just add some processing of the retrieved
data. There are four registers left for calculations,
temporary stores or whatever you want to do.


Greetings from Augsburg

Bernhard Schornak
 
 
Bernhard Schornak
      08-09-2011, 10:05 PM
Skybuck Flying wrote:


> I tried the shift left trick you mentioned.
>
> For the 32 bit version it doesn't really matter and doesn't give significant performance
> increases, at least with current settings.
>
> For what it's worth here is the assembly output :
>
> So from the looks of it, it isn't worth the trouble, perhaps results might be different
> for 64 bit or other settings.
>
> // *** Begin of SHL trick, assembly output for 32 bit: ***
>
> unit_TCPUMemoryTest_version_001.pas.364: begin
> 0040FD44 53 push ebx
> 0040FD45 56 push esi
> 0040FD46 57 push edi
> 0040FD47 83C4F4 add esp,-$0c
> unit_TCPUMemoryTest_version_001.pas.365: vElementShift := mElementShift;
> 0040FD4A 0FB6501C movzx edx,[eax+$1c]
> 0040FD4E 891424 mov [esp],edx
> unit_TCPUMemoryTest_version_001.pas.366: vBlockCount := mBlockCount;
> 0040FD51 8B5010 mov edx,[eax+$10]
> unit_TCPUMemoryTest_version_001.pas.367: vLoopCount := mLoopCount;
> 0040FD54 8B4814 mov ecx,[eax+$14]
> 0040FD57 894C2404 mov [esp+$04],ecx
> unit_TCPUMemoryTest_version_001.pas.370: for vBlockIndex := 0 to vBlockCount-1 do
> 0040FD5B 4A dec edx
> 0040FD5C 85D2 test edx,edx
> 0040FD5E 7232 jb $0040fd92
> 0040FD60 42 inc edx
> 0040FD61 89542408 mov [esp+$08],edx
> 0040FD65 33F6 xor esi,esi
> unit_TCPUMemoryTest_version_001.pas.372: vBlockBase := vBlockIndex shl vElementShift;
> 0040FD67 8B0C24 mov ecx,[esp]
> 0040FD6A 8BDE mov ebx,esi
> 0040FD6C D3E3 shl ebx,cl
> unit_TCPUMemoryTest_version_001.pas.374: vElementIndex := 0;
> 0040FD6E 33D2 xor edx,edx
> unit_TCPUMemoryTest_version_001.pas.377: for vLoopIndex := 0 to vLoopCount-1 do
> 0040FD70 8B4C2404 mov ecx,[esp+$04]
> 0040FD74 49 dec ecx
> 0040FD75 85C9 test ecx,ecx
> 0040FD77 720C jb $0040fd85
> 0040FD79 41 inc ecx
> unit_TCPUMemoryTest_version_001.pas.379: vElementIndex := mMemory[ vBlockBase +
> vElementIndex ];
> 0040FD7A 03D3 add edx,ebx
> 0040FD7C 8B7804 mov edi,[eax+$04]
> 0040FD7F 8B1497 mov edx,[edi+edx*4]
> unit_TCPUMemoryTest_version_001.pas.377: for vLoopIndex := 0 to vLoopCount-1 do
> 0040FD82 49 dec ecx
> 0040FD83 75F5 jnz $0040fd7a
> unit_TCPUMemoryTest_version_001.pas.382: mBlockResult[ vBlockIndex ] := vElementIndex;
> 0040FD85 8B4808 mov ecx,[eax+$08]
> 0040FD88 8914B1 mov [ecx+esi*4],edx
> unit_TCPUMemoryTest_version_001.pas.383: end;
> 0040FD8B 46 inc esi
> unit_TCPUMemoryTest_version_001.pas.370: for vBlockIndex := 0 to vBlockCount-1 do
> 0040FD8C FF4C2408 dec dword ptr [esp+$08]
> 0040FD90 75D5 jnz $0040fd67
> unit_TCPUMemoryTest_version_001.pas.384: end;
> 0040FD92 83C40C add esp,$0c
> 0040FD95 5F pop edi
> 0040FD96 5E pop esi
> 0040FD97 5B pop ebx
> 0040FD98 C3 ret
>
> // *** End of SHL trick, assembly output for 32 bit: ***
>
> Bye,
> Skybuck.



I just see one SHL in that code. What do you expect the
posted code to improve? I see at least 10 totally
superfluous lines moving one register into another just
to do something with that copy, or other -unnecessary-
actions. Removing all redundant instructions will gain
more speed than a (pointlessly inserted) SHL...
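
One concrete example: the inner loop reloads the mMemory
field ('mov edi,[eax+$04]') on every iteration. Copying
the field into a local before the loop should let the
compiler keep it in a register - a sketch at the Pascal
level, assuming the method looks roughly like the listing
suggests (vMemory declared with the same type as mMemory):

vMemory := mMemory; // read the field once, outside the loop
for vLoopIndex := 0 to vLoopCount - 1 do
  vElementIndex := vMemory[ vBlockBase + vElementIndex ];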


Greetings from Augsburg

Bernhard Schornak
 
 
Skybuck Flying
      08-09-2011, 10:39 PM
The Pascal code with the 3 blocks is roughly based on this code/posting of
yours:

"
> An example which does one load per pipe would be nice !


....
mov eax,[mem] \
mov ebx,[mem + 0x04] cycle 1
mov ecx,[mem + 0x08] /
nop \
nop cycle 2
nop /
nop \
nop cycle 3
nop /
eax present \
nop cycle 4
nop /
ebx present \
nop cycle 5
nop /
ecx present \
nop cycle 6
nop /
....


It takes 3 clocks to load EAX - Athlons can fetch
32 bit (or 64 bit in long mode) per cycle. Hence,
the new content of EAX will be available in cycle
four, while the other two loads still are in pro-
gress. Repeat this with EBX and ECX - NOPs should
be replaced by some instructions not depending on
the new content of EAX, EBX or ECX. It is the
programmer's job to schedule instructions wisely.
Unfortunately, an overwhelming majority of coders
does not know what is going on inside the machine
they write code for (=> "machine independent").
"

I do twist this text a little bit: when I write "parallel" I kind of mean 3
loads in whatever clock cycles it takes.

So it's like a more efficient form of batch processing.

I requested an example of a "load per pipe".

I also doubted whether it was possible.

Yet somehow you claimed it was possible, but then you came with some kind of
story.

Not sure if it was an answer to my question/request, or a half answer.

It's also not clear what is meant with a "load per pipe".

Is that parallel, or really short sequential?

For me it's more or less the same, as long as it's faster than what the code
is currently doing.

So what it's actually called/named doesn't really matter to me, as long as
it's somehow faster.

But now it seems we are starting to misunderstand each other on both sides.


The manuals seem very cluttered with operating system specific information
which I don't need.

I only need to know the hardware architecture, for optimization purposes.

I already read the "optimization tricks" manual.

I feel I could be wasting my time reading an instruction set like x86 or x64,
which could die any day now.

It's probably also quite a bad instruction set with many oddities.

I'd rather read and switch to a more properly designed instruction set.

I had a fun time reading about CUDA/PTX; I do not think reading x86 would be
any fun at all.

But perhaps I will give it a looksy to see if I can find something valuable
in it... I doubt it though.

The processors also seem quite complex, and they change a lot...

Bye,
Skybuck.

 
 
Skybuck Flying
      08-09-2011, 10:57 PM


"Bernhard Schornak" wrote in message news:j1saub$fi0$(E-Mail Removed)...

Skybuck Flying wrote:


> I tried the shift left trick you mentioned.
>
> For the 32 bit version it doesn't really matter and doesn't give
> significant performance
> increases, at least with current settings.
>
> For what it's worth here is the assembly output :
>
> So from the looks of it, it isn't worth the trouble, perhaps results might
> be different
> for 64 bit or other settings.
>
> // *** Begin of SHL trick, assembly output for 32 bit: ***
>
> unit_TCPUMemoryTest_version_001.pas.364: begin
> 0040FD44 53 push ebx
> 0040FD45 56 push esi
> 0040FD46 57 push edi
> 0040FD47 83C4F4 add esp,-$0c
> unit_TCPUMemoryTest_version_001.pas.365: vElementShift := mElementShift;
> 0040FD4A 0FB6501C movzx edx,[eax+$1c]
> 0040FD4E 891424 mov [esp],edx
> unit_TCPUMemoryTest_version_001.pas.366: vBlockCount := mBlockCount;
> 0040FD51 8B5010 mov edx,[eax+$10]
> unit_TCPUMemoryTest_version_001.pas.367: vLoopCount := mLoopCount;
> 0040FD54 8B4814 mov ecx,[eax+$14]
> 0040FD57 894C2404 mov [esp+$04],ecx
> unit_TCPUMemoryTest_version_001.pas.370: for vBlockIndex := 0 to
> vBlockCount-1 do
> 0040FD5B 4A dec edx
> 0040FD5C 85D2 test edx,edx
> 0040FD5E 7232 jb $0040fd92
> 0040FD60 42 inc edx
> 0040FD61 89542408 mov [esp+$08],edx
> 0040FD65 33F6 xor esi,esi
> unit_TCPUMemoryTest_version_001.pas.372: vBlockBase := vBlockIndex shl
> vElementShift;
> 0040FD67 8B0C24 mov ecx,[esp]
> 0040FD6A 8BDE mov ebx,esi
> 0040FD6C D3E3 shl ebx,cl
> unit_TCPUMemoryTest_version_001.pas.374: vElementIndex := 0;
> 0040FD6E 33D2 xor edx,edx
> unit_TCPUMemoryTest_version_001.pas.377: for vLoopIndex := 0 to
> vLoopCount-1 do
> 0040FD70 8B4C2404 mov ecx,[esp+$04]
> 0040FD74 49 dec ecx
> 0040FD75 85C9 test ecx,ecx
> 0040FD77 720C jb $0040fd85
> 0040FD79 41 inc ecx
> unit_TCPUMemoryTest_version_001.pas.379: vElementIndex := mMemory[
> vBlockBase +
> vElementIndex ];
> 0040FD7A 03D3 add edx,ebx
> 0040FD7C 8B7804 mov edi,[eax+$04]
> 0040FD7F 8B1497 mov edx,[edi+edx*4]
> unit_TCPUMemoryTest_version_001.pas.377: for vLoopIndex := 0 to
> vLoopCount-1 do
> 0040FD82 49 dec ecx
> 0040FD83 75F5 jnz $0040fd7a
> unit_TCPUMemoryTest_version_001.pas.382: mBlockResult[ vBlockIndex ] :=
> vElementIndex;
> 0040FD85 8B4808 mov ecx,[eax+$08]
> 0040FD88 8914B1 mov [ecx+esi*4],edx
> unit_TCPUMemoryTest_version_001.pas.383: end;
> 0040FD8B 46 inc esi
> unit_TCPUMemoryTest_version_001.pas.370: for vBlockIndex := 0 to
> vBlockCount-1 do
> 0040FD8C FF4C2408 dec dword ptr [esp+$08]
> 0040FD90 75D5 jnz $0040fd67
> unit_TCPUMemoryTest_version_001.pas.384: end;
> 0040FD92 83C40C add esp,$0c
> 0040FD95 5F pop edi
> 0040FD96 5E pop esi
> 0040FD97 5B pop ebx
> 0040FD98 C3 ret
>
> // *** End of SHL trick, assembly output for 32 bit: ***
>
> Bye,
> Skybuck.



"
I just see one SHL in that code.
"

That's because this is the single version which I posted; of course I also
tried the 3-pair version, but didn't feel like posting that too, since it
doesn't become any faster.

But perhaps you are interested in it, so here ya go. It's maybe 1 percent
faster, but this is doubtful:

// *** Begin of 3 pair SHL trick of 32 bit code ***:

unit_TCPUMemoryTest_version_001.pas.421: begin
0040FD44 53 push ebx
0040FD45 56 push esi
0040FD46 57 push edi
0040FD47 83C4D8 add esp,-$28
unit_TCPUMemoryTest_version_001.pas.423: vElementShift := mElementShift;
0040FD4A 0FB6501C movzx edx,[eax+$1c]
0040FD4E 89542418 mov [esp+$18],edx
unit_TCPUMemoryTest_version_001.pas.424: vBlockCount := mBlockCount;
0040FD52 8B5010 mov edx,[eax+$10]
0040FD55 89542410 mov [esp+$10],edx
unit_TCPUMemoryTest_version_001.pas.425: vLoopCount := mLoopCount;
0040FD59 8B5014 mov edx,[eax+$14]
0040FD5C 89542414 mov [esp+$14],edx
unit_TCPUMemoryTest_version_001.pas.427: vBlockIndexA := 0;
0040FD60 33D2 xor edx,edx
0040FD62 89542404 mov [esp+$04],edx
unit_TCPUMemoryTest_version_001.pas.428: vBlockIndexB := 1;
0040FD66 C744240801000000 mov [esp+$08],$00000001
unit_TCPUMemoryTest_version_001.pas.429: vBlockIndexC := 2;
0040FD6E C744240C02000000 mov [esp+$0c],$00000002
0040FD76 E988000000 jmp $0040fe03
unit_TCPUMemoryTest_version_001.pas.432: vBlockBaseA := vBlockIndexA shl vElementShift;
0040FD7B 8B4C2418 mov ecx,[esp+$18]
0040FD7F 8B542404 mov edx,[esp+$04]
0040FD83 D3E2 shl edx,cl
0040FD85 8954241C mov [esp+$1c],edx
unit_TCPUMemoryTest_version_001.pas.433: vBlockBaseB := vBlockIndexB shl vElementShift;
0040FD89 8B4C2418 mov ecx,[esp+$18]
0040FD8D 8B542408 mov edx,[esp+$08]
0040FD91 D3E2 shl edx,cl
0040FD93 89542420 mov [esp+$20],edx
unit_TCPUMemoryTest_version_001.pas.434: vBlockBaseC := vBlockIndexC shl vElementShift;
0040FD97 8B4C2418 mov ecx,[esp+$18]
0040FD9B 8B54240C mov edx,[esp+$0c]
0040FD9F D3E2 shl edx,cl
0040FDA1 89542424 mov [esp+$24],edx
unit_TCPUMemoryTest_version_001.pas.436: vElementIndexA := 0;
0040FDA5 33D2 xor edx,edx
unit_TCPUMemoryTest_version_001.pas.437: vElementIndexB := 0;
0040FDA7 33F6 xor esi,esi
unit_TCPUMemoryTest_version_001.pas.438: vElementIndexC := 0;
0040FDA9 33FF xor edi,edi
unit_TCPUMemoryTest_version_001.pas.440: for vLoopIndex := 0 to vLoopCount-1 do
0040FDAB 8B5C2414 mov ebx,[esp+$14]
0040FDAF 4B dec ebx
0040FDB0 85DB test ebx,ebx
0040FDB2 7222 jb $0040fdd6
0040FDB4 43 inc ebx
unit_TCPUMemoryTest_version_001.pas.442: vElementIndexA := mMemory[ vBlockBaseA + vElementIndexA ];
0040FDB5 0354241C add edx,[esp+$1c]
0040FDB9 8B4804 mov ecx,[eax+$04]
0040FDBC 8B1491 mov edx,[ecx+edx*4]
unit_TCPUMemoryTest_version_001.pas.443: vElementIndexB := mMemory[ vBlockBaseB + vElementIndexB ];
0040FDBF 03742420 add esi,[esp+$20]
0040FDC3 8B4804 mov ecx,[eax+$04]
0040FDC6 8B34B1 mov esi,[ecx+esi*4]
unit_TCPUMemoryTest_version_001.pas.444: vElementIndexC := mMemory[ vBlockBaseC + vElementIndexC ];
0040FDC9 037C2424 add edi,[esp+$24]
0040FDCD 8B4804 mov ecx,[eax+$04]
0040FDD0 8B3CB9 mov edi,[ecx+edi*4]
unit_TCPUMemoryTest_version_001.pas.440: for vLoopIndex := 0 to vLoopCount-1 do
0040FDD3 4B dec ebx
0040FDD4 75DF jnz $0040fdb5
unit_TCPUMemoryTest_version_001.pas.447: mBlockResult[ vBlockIndexA ] := vElementIndexA;
0040FDD6 8B4808 mov ecx,[eax+$08]
0040FDD9 8B5C2404 mov ebx,[esp+$04]
0040FDDD 891499 mov [ecx+ebx*4],edx
unit_TCPUMemoryTest_version_001.pas.448: mBlockResult[ vBlockIndexB ] := vElementIndexB;
0040FDE0 8B5008 mov edx,[eax+$08]
0040FDE3 8B4C2408 mov ecx,[esp+$08]
0040FDE7 89348A mov [edx+ecx*4],esi
unit_TCPUMemoryTest_version_001.pas.449: mBlockResult[ vBlockIndexC ] := vElementIndexC;
0040FDEA 8B5008 mov edx,[eax+$08]
0040FDED 8B4C240C mov ecx,[esp+$0c]
0040FDF1 893C8A mov [edx+ecx*4],edi
unit_TCPUMemoryTest_version_001.pas.451: vBlockIndexA := vBlockIndexA + 3;
0040FDF4 8344240403 add dword ptr [esp+$04],$03
unit_TCPUMemoryTest_version_001.pas.452: vBlockIndexB := vBlockIndexB + 3;
0040FDF9 8344240803 add dword ptr [esp+$08],$03
unit_TCPUMemoryTest_version_001.pas.453: vBlockIndexC := vBlockIndexC + 3;
0040FDFE 8344240C03 add dword ptr [esp+$0c],$03
unit_TCPUMemoryTest_version_001.pas.430: while vBlockIndexA <= (vBlockCount-4) do
0040FE03 8B542410 mov edx,[esp+$10]
0040FE07 83EA04 sub edx,$04
0040FE0A 3B542404 cmp edx,[esp+$04]
0040FE0E 0F8367FFFFFF jnb $0040fd7b
0040FE14 EB35 jmp $0040fe4b
unit_TCPUMemoryTest_version_001.pas.458: vBlockBaseA := vBlockIndexA shl vElementShift;
0040FE16 8B4C2418 mov ecx,[esp+$18]
0040FE1A 8B542404 mov edx,[esp+$04]
0040FE1E D3E2 shl edx,cl
0040FE20 8954241C mov [esp+$1c],edx
unit_TCPUMemoryTest_version_001.pas.460: vElementIndexA := 0;
0040FE24 33D2 xor edx,edx
unit_TCPUMemoryTest_version_001.pas.462: for vLoopIndex := 0 to vLoopCount-1 do
0040FE26 8B5C2414 mov ebx,[esp+$14]
0040FE2A 4B dec ebx
0040FE2B 85DB test ebx,ebx
0040FE2D 720E jb $0040fe3d
0040FE2F 43 inc ebx
unit_TCPUMemoryTest_version_001.pas.464: vElementIndexA := mMemory[ vBlockBaseA + vElementIndexA ];
0040FE30 0354241C add edx,[esp+$1c]
0040FE34 8B4804 mov ecx,[eax+$04]
0040FE37 8B1491 mov edx,[ecx+edx*4]
unit_TCPUMemoryTest_version_001.pas.462: for vLoopIndex := 0 to vLoopCount-1 do
0040FE3A 4B dec ebx
0040FE3B 75F3 jnz $0040fe30
unit_TCPUMemoryTest_version_001.pas.467: mBlockResult[ vBlockIndexA ] := vElementIndexA;
0040FE3D 8B4808 mov ecx,[eax+$08]
0040FE40 8B5C2404 mov ebx,[esp+$04]
0040FE44 891499 mov [ecx+ebx*4],edx
unit_TCPUMemoryTest_version_001.pas.469: vBlockIndexA := vBlockIndexA + 1;
0040FE47 FF442404 inc dword ptr [esp+$04]
unit_TCPUMemoryTest_version_001.pas.456: while vBlockIndexA <= (vBlockCount-1) do
0040FE4B 8B542410 mov edx,[esp+$10]
0040FE4F 4A dec edx
0040FE50 3B542404 cmp edx,[esp+$04]
0040FE54 73C0 jnb $0040fe16
unit_TCPUMemoryTest_version_001.pas.471: end;
0040FE56 83C428 add esp,$28
0040FE59 5F pop edi
0040FE5A 5E pop esi
0040FE5B 5B pop ebx
0040FE5C C3 ret

// *** End of 3 pair SHL trick of 32 bit code ***:

"
What do you expect that the posted code might improve?
"

You wrote the "mul" blocks the processor somehow; the pipes were blocked
because of it.

You wrote the "shl" does not block the processor/pipelines, and multiple
shl's can be executed faster.

So if the program was limited by instruction throughput, then switching to
shl would have given more performance.

As far as I can recall it did not give more performance, not in the single
version and not in the 3-pair version.

"I see at least 10 totally superfluous lines moving one register into
another to do
something with that copy or other -unnecessary- actions.
"

?

"
Removing all redundant instructions will gain more speed
than a (pointlessly inserted) SHL...
"

You are welcome to take these x86 outputs and remove any redundant
instructions you see fit; it should be easy to re-integrate your modified
assembly code.

Bye,
Skybuck.

 
 
wolfgang kern
      08-10-2011, 01:14 AM

Flying Bucket posted an excellent Branch_Never variant:

....
0040FE2A 4B dec ebx
0040FE2B 85DB test ebx,ebx
0040FE2D 720E jb $0040fe3d
0040FE2F 43 inc ebx
....

was this your idea, or is it just the output of your smart compiler?

__
wolfgang





 
 
Skybuck Flying
      08-10-2011, 02:05 AM
"
....
0040FE2A 4B dec ebx
0040FE2B 85DB test ebx,ebx
0040FE2D 720E jb $0040fe3d
0040FE2F 43 inc ebx
....

was this your idea or is it just the output of your smart compiler ?
"

This is indeed a bit strange, since DEC leaves CF alone, TEST resets CF, and
JB looks for CF=1 (INC also leaves CF alone).

Fortunately this code is only executed once, at the start of the for loop;
for the rest of the loop iterations it is skipped.

Why it generates this code is a bit of a mystery; it could have different
explanations/reasons.

The most likely reason is that this is a placeholder for any additional
checking, for example when the code is changed.

It could also be a failed attempt at seeing if the loop needs to be skipped
over.

It could also be caused by a longword and a -1, which would wrap around and
have no effect; the loop will always execute no matter what. The JB is meant
to jump over the loop, but as you can see it will never be taken.

So in a way the code is unnecessary. Had the type been an integer, then it
could/would have been necessary.

It could also be some strange bug in the compiler, in the sense that it
doesn't recognize this useless code or generates it wrongly, or perhaps there
is no real solution to detecting the wrap-around of 0 - 1 to $FFFFFFFF.

Since the loop has a high iteration count it's not much of a problem; for
smaller loop iteration counts it could have been a performance problem.

The only real problem I see with it is that it's taking up precious L1
instruction cache space.

One last possible explanation could be "alignment/padding" of instruction
codes to make them nicely aligned, but this seems unlikely.

Nicely spotted by you, though!

Bye,
Skybuck.
