c++ - Why are elementwise additions much faster in separate loops than in a combined loop?

Suppose a1, b1, c1, and d1 point to heap memory and my numerical code has the following core loop.

const int n = 100000;

for (int j = 0; j < n; j++) {
    a1[j] += b1[j];
    c1[j] += d1[j];
}

This loop is executed 10,000 times via another outer for loop.

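For concreteness, here is a minimal sketch (not part of the original post) of the kernel as described: the combined inner loop wrapped in the 10,000-iteration outer loop, assuming the four arrays are already allocated and initialized.

// Sketch of the timed kernel as described above: the combined inner loop,
// repeated 10,000 times by the outer loop.
for (int i = 0; i < 10000; i++) {
    for (int j = 0; j < n; j++) {
        a1[j] += b1[j];
        c1[j] += d1[j];
    }
}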

To speed it up, I changed the code to:


for (int j = 0; j < n; j++) {
    a1[j] += b1[j];
}

for (int j = 0; j < n; j++) {
    c1[j] += d1[j];
}

Compiled with MS Visual C++ 10.0 with full optimization and SSE2 enabled for 32-bit on an Intel Core 2 Duo (x64), the first example takes 5.5 seconds and the double-loop example takes only 1.9 seconds.

My question is: (Please refer to my rephrased question at the bottom.)

PS: I am not sure if this helps:

Disassembly for the first loop basically looks like this (this block is repeated about five times in the full program):


movsd       xmm0,mmword ptr [edx+18h]
addsd       xmm0,mmword ptr [ecx+20h]
movsd       mmword ptr [ecx+20h],xmm0
movsd       xmm0,mmword ptr [esi+10h]
addsd       xmm0,mmword ptr [eax+30h]
movsd       mmword ptr [eax+30h],xmm0
movsd       xmm0,mmword ptr [edx+20h]
addsd       xmm0,mmword ptr [ecx+28h]
movsd       mmword ptr [ecx+28h],xmm0
movsd       xmm0,mmword ptr [esi+18h]
addsd       xmm0,mmword ptr [eax+38h]

Each loop of the double loop example produces this code (the following block is repeated about three times):


addsd       xmm0,mmword ptr [eax+28h]
movsd       mmword ptr [eax+28h],xmm0
movsd       xmm0,mmword ptr [ecx+20h]
addsd       xmm0,mmword ptr [eax+30h]
movsd       mmword ptr [eax+30h],xmm0
movsd       xmm0,mmword ptr [ecx+28h]
addsd       xmm0,mmword ptr [eax+38h]
movsd       mmword ptr [eax+38h],xmm0
movsd       xmm0,mmword ptr [ecx+30h]
addsd       xmm0,mmword ptr [eax+40h]
movsd       mmword ptr [eax+40h],xmm0
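(A side note, not in the original question: movsd and addsd are scalar SSE2 double-precision instructions, so in the disassembly shown the compiler has unrolled both versions but has not actually vectorized either of them.)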

The question turned out to be of no relevance, as the behavior depends heavily on the sizes of the arrays (n) and on the CPU cache.
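(For scale, an observation not in the original post: with n = 100000, each array of doubles occupies 800,000 bytes, so the four arrays together take roughly 3 MiB. That is far larger than the 32 KiB L1 data cache of a Core 2 but small enough to fit in a multi-megabyte L2, so sweeping n moves the working set across these cache boundaries.)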

So if there is further interest, I rephrase the question:


Could you provide some solid insight into the details that lead to the different cache behaviors as illustrated by the five regions on the following graph?


It might also be interesting to point out the differences between CPU/cache architectures, by providing a similar graph for these CPUs.


PPS: Here is the full code.


It uses TBB tick_count for higher-resolution timing, which can be disabled by not defining the TBB_TIMING macro:

#include <iostream>
#include <iomanip>
#include <cmath>
#include <string>
#include <cstdio>      // freopen
#include <algorithm>   // std::max

//#define TBB_TIMING

#ifdef TBB_TIMING   
#include <tbb/tick_count.h>
using tbb::tick_count;
#else
#include <time.h>
#endif

using namespace std;

//#define preallocate_memory new_cont

enum { new_cont, new_sep };

double *a1, *b1, *c1, *d1;


void allo(int cont, int n)
{
    switch(cont) {
      case new_cont:
        a1 = new double[n*4];
        b1 = a1 + n;
        c1 = b1 + n;
        d1 = c1 + n;
        break;
      case new_sep:
        a1 = new double[n];
        b1 = new double[n];
        c1 = new double[n];
        d1 = new double[n];
        break;
    }

    for (int i = 0; i < n; i++) {
        a1[i] = 1.0;
        d1[i] = 1.0;
        c1[i] = 1.0;
        b1[i] = 1.0;
    }
}

void ff(int cont)
{
    switch(cont){
      case new_sep:
        delete[] b1;
        delete[] c1;
        delete[] d1;
      case new_cont:
        delete[] a1;
    }
}

double plain(int n, int m, int cont, int loops)
{
#ifndef preallocate_memory
    allo(cont,n);
#endif

#ifdef TBB_TIMING   
    tick_count t0 = tick_count::now();
#else
    clock_t start = clock();
#endif

    if (loops == 1) {
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++){
                a1[j] += b1[j];
                c1[j] += d1[j];
            }
        }
    } else {
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                a1[j] += b1[j];
            }
            for (int j = 0; j < n; j++) {
                c1[j] += d1[j];
            }
        }
    }
    double ret;

#ifdef TBB_TIMING   
    tick_count t1 = tick_count::now();
    ret = 2.0*double(n)*double(m)/(t1-t0).seconds();
#else
    clock_t end = clock();
    ret = 2.0*double(n)*double(m)/(double)(end - start) *double(CLOCKS_PER_SEC);
#endif

#ifndef preallocate_memory
    ff(cont);
#endif

    return ret;
}


int main()
{
    freopen("C:\\test.csv", "w", stdout);

    const char *s = " ";

    string na[2] ={"new_cont", "new_sep"};

    cout << "n";

    for (int j = 0; j < 2; j++)
        for (int i = 1; i <= 2; i++)
#ifdef preallocate_memory
            cout << s << i << "_loops_" << na[preallocate_memory];
#else
            cout << s << i << "_loops_" << na[j];
#endif

    cout << endl;

    long long nmax = 1000000;

#ifdef preallocate_memory
    allo(preallocate_memory, nmax);
#endif

    for (long long n = 1L; n < nmax; n = max(n+1, (long long)(n*1.2)))
    {
        const long long m = 10000000/n;
        cout << n;

        for (int j = 0; j < 2; j++)
            for (int i = 1; i <= 2; i++)
                cout << s << plain(n, m, j, i);
        cout << endl;
    }
}

(It shows FLOP/s for different values of n.)
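(Concretely, the reported figure is 2*n*m divided by the elapsed time, i.e., one addition for a1[j] += b1[j] and one for c1[j] += d1[j], per element, per outer iteration, so it is effectively additions per second.)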

(Graph: FLOP/s versus n, showing the five regions discussed below.)

Asked by Johannes Gerer; translated from Stack Overflow.


1 Answer


Upon further analysis of this, I believe this is (at least partially) caused by data alignment of the four pointers.


This will cause some level of cache bank/way conflicts.


If I've guessed correctly on how you are allocating your arrays, they are likely to be aligned to the page line .


This means that all your accesses in each loop will fall on the same cache way.


However, Intel processors have had 8-way L1 cache associativity for a while.


But in reality, the performance isn't completely uniform.


Accessing 4 ways is still slower than, say, 2 ways.
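To make the "same cache way" argument concrete, here is a minimal sketch (not part of the original answer) that prints where four separately malloc'd arrays land relative to a 4 KiB page and which L1 set they would map to, assuming 64-byte cache lines and a 32 KiB, 8-way L1 data cache (i.e., 64 sets):

#include <cstdio>
#include <cstdlib>
#include <cstdint>

int main() {
    const int n = 100000;
    double *p[4];
    for (int i = 0; i < 4; i++)
        p[i] = (double*)malloc(n * sizeof(double));

    for (int i = 0; i < 4; i++) {
        uintptr_t a = (uintptr_t)p[i];
        // Low 12 bits = offset within a 4 KiB page;
        // bits 6..11 = set index of a 64-set L1 data cache.
        printf("array %d: %p  page offset = 0x%03x  L1 set = %u\n",
               i, (void*)p[i], (unsigned)(a & 0xFFF), (unsigned)((a >> 6) & 63));
    }
    // If all four arrays report the same page offset (and hence the same
    // set), then a1[j], b1[j], c1[j] and d1[j] always compete for the
    // 8 ways of a single set.
    for (int i = 0; i < 4; i++)
        free(p[i]);
    return 0;
}

On an allocator that serves large requests with fresh pages, all four page offsets typically come out identical, which matches the addresses printed by the benchmark below.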

EDIT: It does in fact look like you are allocating all the arrays separately.

Usually when such large allocations are requested, the allocator will request fresh pages from the OS.


Therefore, there is a high chance that large allocations will appear at the same offset from a page-boundary.


Here's the test code:


#include <iostream>
#include <cstdlib>
#include <cstring>
#include <ctime>
using namespace std;

int main(){
    const int n = 100000;

#ifdef ALLOCATE_SEPERATE
    double *a1 = (double*)malloc(n * sizeof(double));
    double *b1 = (double*)malloc(n * sizeof(double));
    double *c1 = (double*)malloc(n * sizeof(double));
    double *d1 = (double*)malloc(n * sizeof(double));
#else
    double *a1 = (double*)malloc(n * sizeof(double) * 4);
    double *b1 = a1 + n;
    double *c1 = b1 + n;
    double *d1 = c1 + n;
#endif

    //  Zero the data to prevent any chance of denormals.
    memset(a1,0,n * sizeof(double));
    memset(b1,0,n * sizeof(double));
    memset(c1,0,n * sizeof(double));
    memset(d1,0,n * sizeof(double));

    //  Print the addresses
    cout << a1 << endl;
    cout << b1 << endl;
    cout << c1 << endl;
    cout << d1 << endl;

    clock_t start = clock();

    int c = 0;
    while (c++ < 10000){

#ifdef ONE_LOOP
        for(int j=0;j<n;j++){
            a1[j] += b1[j];
            c1[j] += d1[j];
        }
#else
        for(int j=0;j<n;j++){
            a1[j] += b1[j];
        }
        for(int j=0;j<n;j++){
            c1[j] += d1[j];
        }
#endif

    }

    clock_t end = clock();
    cout << "seconds = " << (double)(end - start) / CLOCKS_PER_SEC << endl;

    system("pause");
    return 0;
}

Benchmark Results:


EDIT: Results on an actual Core 2 architecture machine:

2 x Intel Xeon X5482 Harpertown @ 3.2 GHz:


#define ALLOCATE_SEPERATE
#define ONE_LOOP
00600020
006D0020
007A0020
00870020
seconds = 6.206

#define ALLOCATE_SEPERATE
//#define ONE_LOOP
005E0020
006B0020
00780020
00850020
seconds = 2.116

//#define ALLOCATE_SEPERATE
#define ONE_LOOP
00570020
00633520
006F6A20
007B9F20
seconds = 1.894

//#define ALLOCATE_SEPERATE
//#define ONE_LOOP
008C0020
00983520
00A46A20
00B09F20
seconds = 1.993

Observations:


  • 6.206 seconds with one loop and 2.116 seconds with two loops.


    This reproduces the OP's results exactly.


  • In the first two tests, the arrays are allocated separately.


    You'll notice that they all have the same alignment relative to the page (see the quick check after this list).

  • In the second two tests, the arrays are packed together to break that alignment.


    Here you'll notice both loops are faster.


    Furthermore, the second (double) loop is now the slower one as you would normally expect.

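(A quick check, not in the original answer: in the first two runs the printed base addresses are 00600020, 006D0020, 007A0020, 00870020 and 005E0020, 006B0020, 00780020, 00850020; the low 12 bits are 0x020 in every case, i.e., each array starts 32 bytes past a 4 KiB page boundary, so corresponding elements of all four arrays map to the same L1 set. In the last two runs the low 12 bits are 0x020, 0x520, 0xA20, 0xF20: the packed allocation staggers the page offsets and breaks that pattern.)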

As @Stephen Cannon points out in the comments, it is very likely that this alignment causes false aliasing in the load/store units or the cache.

I Googled around for this and found that Intel actually has a hardware counter for partial address aliasing stalls:


http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/~amplifierxe/pmw_dp/events/partial_address_alias.html

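As an illustration (not from the original answer), one common way to avoid this kind of same-offset alignment is to over-allocate a single block and start each array a different number of cache lines into it, so the four base pointers no longer share the same low address bits. A sketch, with the 64-byte line size as an assumption:

#include <cstdlib>

// Hypothetical staggered allocation: each array starts a different number
// of 64-byte cache lines into one big block, so a1[j], b1[j], c1[j], d1[j]
// no longer share the same low 12 address bits for every j.
void alloc_staggered(int n, double *&a1, double *&b1, double *&c1, double *&d1)
{
    const int LINE = 8;   // one 64-byte cache line, measured in doubles
    double *raw = (double*)malloc((4 * n + 16 * LINE) * sizeof(double));
    a1 = raw;
    b1 = a1 + n + 1 * LINE;
    c1 = b1 + n + 2 * LINE;
    d1 = c1 + n + 3 * LINE;
}

Whether this helps in practice depends on n and on the exact partial-address check the hardware performs, so treat it only as a sketch of the idea.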


5 Regions - Explanations

Region 1:


This one is easy.


The dataset is so small that the performance is dominated by overhead like looping and branching.


Region 2:


Here, as the data size increases, the amount of relative overhead goes down and the performance "saturates".

Here, two loops are slower because they incur twice as much loop and branching overhead.

I'm not sure exactly what's going on here... Alignment could still play a role, as Agner Fog mentions cache bank conflicts.

(That link is about Sandy Bridge, but the idea should still be applicable to Core 2.)


Region 3:


At this point, the data no longer fits in L1 cache.


So performance is capped by the L1 <-> L2 cache bandwidth.

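(A back-of-the-envelope figure, assuming the usual 32 KiB L1 data cache of a Core 2: the working set is 4 * n * 8 bytes, so the four arrays stop fitting in L1 once n exceeds roughly 32768 / 32 = 1024 elements.)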

Region 4:


The performance drop in the single-loop is what we are observing.


And as mentioned, this is due to the alignment which (most likely) causes false aliasing stalls in the processor load/store units.


However, in order for false aliasing to occur, there must be a large enough stride between the datasets.


This is why you don't see this in region 3.

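(A worked example, using the addresses printed above and assuming the usual low-12-bit partial address check: with separate allocation, a1 = 00600020 and c1 = 007A0020 differ by 0x1A0000, an exact multiple of 0x1000, so a1[j] and c1[j] always agree in their low 12 bits. In the combined loop, the store to a1[j] is immediately followed by loads of c1[j] and d1[j], and the partial check can mistake those loads for hits on the still in-flight store, causing a stall. With the packed allocation, the bases differ by n*8 = 800,000 bytes, which is not a multiple of 4096, so the condition is broken; this is consistent with only the separately allocated, single-loop case collapsing.)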

Region 5:


At this point, nothing fits in cache.


So you're bound by memory bandwidth.

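(Continuing the arithmetic, and assuming the 6 MiB of L2 per core pair of the Harpertown Xeons above: 4 * n * 8 bytes exceeds 6 MiB once n is larger than roughly 196,000 elements, after which every pass has to stream from main memory.)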


(Benchmark graphs for: 2 x Intel Xeon X5482 Harpertown @ 3.2 GHz, Intel Core i7 870 @ 2.8 GHz, Intel Core i7 2600K @ 4.4 GHz.)

