c - Fastest way to multiply an array of int64_t?

I want to vectorize the multiplication of two memory-aligned arrays. I didn't find any way to do a 64×64-bit multiply in AVX/AVX2, so I just did loop unrolling and AVX2 loads/stores. Is there a faster way to do this?

Note: I don't want to save the high-half result of each multiplication.

void multiply_vex(long *Gi_vec, long q, long *Gj_vec){

    int i;
    __m256i data_j, data_i;
    __uint64_t *ptr_J = (__uint64_t*)&data_j;
    __uint64_t *ptr_I = (__uint64_t*)&data_i;


    for (i=0; i<BASE_VEX_STOP; i+=4) {
        data_i = _mm256_load_si256((__m256i*)&Gi_vec[i]);
        data_j = _mm256_load_si256((__m256i*)&Gj_vec[i]);

        ptr_I[0] -= ptr_J[0] * q;
        ptr_I[1] -= ptr_J[1] * q;
        ptr_I[2] -= ptr_J[2] * q;
        ptr_I[3] -= ptr_J[3] * q;

        _mm256_store_si256((__m256i*)&Gi_vec[i], data_i);
    }


    for (; i<BASE_DIMENSION; i++)
        Gi_vec[i] -= Gj_vec[i] * q;
}

UPDATE: I'm using the Haswell microarchitecture with both the ICC and GCC compilers, so both AVX and AVX2 are fine. After the multiplication loop-unroll I substituted the -= with the intrinsic _mm256_sub_epi64, which got some speedup. Currently, it is ptr_J[0] *= q; ...
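For reference, a minimal sketch of what that update describes (my reconstruction from the description above, not the exact updated code): the four multiplies still go through the aliased pointers, and only the subtract becomes a real vector op.

    for (i=0; i<BASE_VEX_STOP; i+=4) {
        data_i = _mm256_load_si256((__m256i*)&Gi_vec[i]);
        data_j = _mm256_load_si256((__m256i*)&Gj_vec[i]);

        // scalar multiplies through the aliased pointer into data_j
        ptr_J[0] *= q;
        ptr_J[1] *= q;
        ptr_J[2] *= q;
        ptr_J[3] *= q;

        // only the subtract is done as a vector instruction
        data_i = _mm256_sub_epi64(data_i, data_j);
        _mm256_store_si256((__m256i*)&Gi_vec[i], data_i);
    }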

I used __uint64_t, but that's an error. The right data type is __int64_t.


1 Answer


You seem to be assuming long is 64 bits in your code, but then you use __uint64_t as well. In 32-bit code, on the x32 ABI, and on Windows, long is a 32-bit type. Your title mentions int64_t, but then your code ignores it and uses long. I was wondering for a while if your code was assuming that long was 32-bit.
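If you do want to keep using long, you can at least make that assumption explicit (this check is my addition, not part of the original answer); otherwise just use the fixed-width types from <stdint.h> as in the code below.

#include <stdint.h>

// My addition, not from the original answer: fail the build if long is
// narrower than 64 bits (as it is on 32-bit x86, the x32 ABI, and Windows).
_Static_assert(sizeof(long) == sizeof(int64_t),
               "long is not 64-bit on this ABI; use int64_t/uint64_t instead");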

You're completely shooting yourself in the foot by using AVX256 loads but then aliasing a pointer onto the __m256i to do scalar operations. gcc just gives up and gives you the terrible code you asked for: vector load and then a bunch of extract and insert instructions. Your way of writing it means that both vectors have to be unpacked to do the sub in scalar as well, instead of using vpsubq.

Modern x86 CPUs have very fast L1 cache that can handle two operations per clock. (Haswell and later: two loads and one store per clock). Doing multiple scalar loads from the same cache line is better than a vector load and unpacking. (Imperfect uop scheduling reduces the throughput to about 84% of that, though: see below)


gcc 5.3 -O3 -march=haswell (Godbolt compiler explorer) auto-vectorizes a simple scalar implementation pretty well. When AVX2 isn't available, gcc foolishly still auto-vectorizes with 128b vectors: On Haswell, this will actually be about 1/2 the speed of ideal scalar 64bit code. (See the perf analysis below, but substitute 2 elements per vector instead of 4).

#include <stdint.h>    // why not use this like a normal person?
#define BASE_VEX_STOP 1024
#define BASE_DIMENSION 1028

// restrict lets the compiler know the arrays don't overlap,
// so it doesn't have to generate a scalar fallback case
void multiply_simple(uint64_t *restrict Gi_vec, uint64_t q, const uint64_t *restrict Gj_vec){
    for (intptr_t i=0; i<BASE_DIMENSION; i++)   // gcc doesn't manage to optimize away the sign-extension from 32bit to pointer-size in the scalar epilogue to handle the last less-than-a-vector elements
        Gi_vec[i] -= Gj_vec[i] * q;
}

inner loop:

.L4:
    vmovdqu ymm1, YMMWORD PTR [r9+rax]        # MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B], MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B]
    add     rcx, 1    # ivtmp.30,
    vpsrlq  ymm0, ymm1, 32      # tmp174, MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B],
    vpmuludq        ymm2, ymm1, ymm3        # tmp173, MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B], vect_cst_.25
    vpmuludq        ymm0, ymm0, ymm3        # tmp176, tmp174, vect_cst_.25
    vpmuludq        ymm1, ymm4, ymm1        # tmp177, tmp185, MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B]
    vpaddq  ymm0, ymm0, ymm1    # tmp176, tmp176, tmp177
    vmovdqa ymm1, YMMWORD PTR [r8+rax]        # MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B], MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B]
    vpsllq  ymm0, ymm0, 32      # tmp176, tmp176,
    vpaddq  ymm0, ymm2, ymm0    # vect__13.24, tmp173, tmp176
    vpsubq  ymm0, ymm1, ymm0    # vect__14.26, MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B], vect__13.24
    vmovdqa YMMWORD PTR [r8+rax], ymm0        # MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B], vect__14.26
    add     rax, 32   # ivtmp.32,
    cmp     rcx, r10  # ivtmp.30, bnd.14
    jb      .L4 #,

Translate that back to intrinsics if you want, but it's going to be a lot easier to just let the compiler autovectorize. I didn't try to analyse it to see if it's optimal.
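If you do want it as intrinsics, here is one possible translation of the multiply part of that loop body (a sketch, not from the original answer; the function name is mine, and I'm assuming ymm3/ymm4 in the asm hold set1(q) and set1(q >> 32) respectively, which is what the register comments suggest). The surrounding loads, vpsubq, and store are the same as in the asm above.

// Multiply each 64-bit element of a by q, keeping only the low 64 bits,
// the same way gcc's autovectorized code does it: three 32x32->64 multiplies.
__m256i mul64_like_gcc(__m256i a, __m256i q_lo /* set1(q) */, __m256i q_hi /* set1(q >> 32) */) {
    __m256i a_hi  = _mm256_srli_epi64(a, 32);                  // high halves of a, moved low
    __m256i ll    = _mm256_mul_epu32(a, q_lo);                 // aL*qL, full 64-bit products
    __m256i hl    = _mm256_mul_epu32(a_hi, q_lo);              // aH*qL
    __m256i lh    = _mm256_mul_epu32(q_hi, a);                 // qH*aL
    __m256i cross = _mm256_slli_epi64(_mm256_add_epi64(hl, lh), 32);  // (aH*qL + qH*aL) << 32
    return _mm256_add_epi64(ll, cross);
}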

If you don't usually compile with -O3, you could use #pragma omp simd before the loop (and -fopenmp).
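A minimal sketch of that (same includes and BASE_DIMENSION as multiply_simple above; the function name here is mine):

// Same loop as multiply_simple, but the OpenMP simd pragma asks the compiler
// to vectorize it even without -O3. Compile with -fopenmp (or GCC's
// -fopenmp-simd, which enables just the simd pragmas without the OpenMP runtime).
void multiply_simd_pragma(uint64_t *restrict Gi_vec, uint64_t q, const uint64_t *restrict Gj_vec){
    #pragma omp simd
    for (intptr_t i=0; i<BASE_DIMENSION; i++)
        Gi_vec[i] -= Gj_vec[i] * q;
}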

Of course, instead of a scalar epilogue, it would probably be faster to do an unaligned load of the last 32B of Gj_vec and store into the last 32B of Gi_vec, potentially overlapping with the last store from the loop. (A scalar fallback is still needed if the arrays are smaller than 32B.) A sketch of this overlapping-tail trick is included after the intrinsic version below.


Improved vector intrinsic version for Haswell

From my comments on Z Boson's answer. Based on Agner Fog's vector class library code.

Agner Fog's version saves an instruction but bottlenecks on the shuffle port by using phadd + pshufd where I use psrlq / paddq / pand.

Since one of your operands is constant, make sure to pass set1(q) as b, not a, so the "bswap" shuffle can be hoisted.

// replace hadd -> shuffle (4 uops) with shift/add/and (3 uops)
// The constant takes 2 insns to generate outside a loop.
__m256i mul64_haswell (__m256i a, __m256i b) {
    // A 64x64 -> 64-bit multiply instruction doesn't exist in AVX2, so split it into 32-bit multiplies
    __m256i bswap   = _mm256_shuffle_epi32(b,0xB1);           // swap H<->L
    __m256i prodlh  = _mm256_mullo_epi32(a,bswap);            // 32 bit L*H products

    // or use pshufb instead of psrlq to reduce port0 pressure on Haswell
    __m256i prodlh2 = _mm256_srli_epi64(prodlh, 32);          // 0  , a0Hb0L,          0, a1Hb1L
    __m256i prodlh3 = _mm256_add_epi32(prodlh2, prodlh);      // xxx, a0Lb0H+a0Hb0L, xxx, a1Lb1H+a1Hb1L
    __m256i prodlh4 = _mm256_and_si256(prodlh3, _mm256_set1_epi64x(0x00000000FFFFFFFF)); // zero high halves

    __m256i prodll  = _mm256_mul_epu32(a,b);                  // a0Lb0L,a1Lb1L, 64 bit unsigned products
    __m256i prod    = _mm256_add_epi64(prodll,prodlh4);       // a0Lb0L+(a0Lb0H+a0Hb0L)<<32, a1Lb1L+(a1Lb1H+a1Hb1L)<<32
    return  prod;
}

See it on Godbolt.

Note that this doesn't include the final subtract, only the multiply.
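To connect it back to the original problem, here is one way the helper could be used (a sketch only; the wrapper function, its name, and the n >= 4 / possibly-unaligned-arrays assumptions are mine, not part of the original answer). It passes set1(q) as b so the swap shuffle can be hoisted out of the loop, does the final subtract with _mm256_sub_epi64, and handles the tail with the overlapping unaligned load/store mentioned earlier instead of a scalar epilogue.

// Sketch: Gi_vec[i] -= Gj_vec[i] * q using mul64_haswell from above.
// Needs <immintrin.h>, <stdint.h>, <stddef.h>. Assumes n >= 4.
void multiply_sub_q(uint64_t *restrict Gi_vec, uint64_t q,
                    const uint64_t *restrict Gj_vec, size_t n)
{
    __m256i vq = _mm256_set1_epi64x((long long)q);   // constant operand goes in b

    // Load the last 4 elements of both arrays up front, before any store can
    // touch that region; the overlapping tail store below then writes exactly
    // the same values the main loop would have produced for those elements.
    __m256i tail_i = _mm256_loadu_si256((const __m256i*)&Gi_vec[n - 4]);
    __m256i tail_j = _mm256_loadu_si256((const __m256i*)&Gj_vec[n - 4]);

    size_t i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m256i gi = _mm256_loadu_si256((const __m256i*)&Gi_vec[i]);
        __m256i gj = _mm256_loadu_si256((const __m256i*)&Gj_vec[i]);
        gi = _mm256_sub_epi64(gi, mul64_haswell(gj, vq));        // Gi -= Gj * q
        _mm256_storeu_si256((__m256i*)&Gi_vec[i], gi);
    }

    // Overlapping tail instead of a scalar epilogue: recompute the last 4
    // elements from the values loaded before the loop and store them unaligned.
    __m256i res = _mm256_sub_epi64(tail_i, mul64_haswell(tail_j, vq));
    _mm256_storeu_si256((__m256i*)&Gi_vec[n - 4], res);
}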

This version should perform a bit better on Haswell than gcc's autovectorized version. (like maybe one vector per 4 cycles instead of one vector per 5 cycles, bottlenecked on port0 throughput. I didn't consider other bottlenecks for the full problem, since this was a late addition to the answer.)

An AVX1 version (two elements per vector) would suck, and probably still be worse than 64bit scalar. Don't do it unless you already have your data in vectors, and want the result in a vector (extracting to scalar and back might not be worth it).


Perf analysis of GCC's autovectorized code (not the intrinsic version)

Background: see Agner Fog's insn tables and microarch guide, and other links in the tag wiki.

Until AVX512 (see below), this is probably only barely faster than scalar 64bit code: imul r64, m64 has a throughput of one per clock on Intel CPUs (but one per 4 clocks on AMD Bulldozer-family). load/imul/sub-with-memory-dest is 4 fused-domain uops on Intel CPUs (with an addressing mode that can micro-fuse, which gcc fails to use). The pipeline width is 4 fused-domain uops per clock, so even a large unroll can't get this to issue at one-per-clock. With enough unrolling, we'll bottleneck on load/store throughput. 2 loads and one store per clock is possible on Haswell, but store-address uops stealing load ports will lower the throughput to about 81/96 = 84% of that, according to Intel's manual.

So perhaps the best way for Haswell would be to load and multiply with scalar (2 uops), then vmovq / pinsrq / vinserti128 so you can do the subtract with a vpsubq. That's 8 uops to load & multiply all 4 scalars, 7 shuffle uops to get the data into a __m256i (2 (movq) + 4 (pinsrq is 2 uops) + 1 vinserti128), and 3 more uops to do a vector load / vpsubq / vector store. So that's 18 fused-domain uops per 4 multiplies (4.5 cycles to issue), but 7 shuffle uops (7 cycles to execute). So never mind, this is no good compared to pure scalar.


The autovectorized code is using 8 vector ALU instructions for each vector of four values. On Haswell, 5 of those uops (multiplies and shifts) can only run on port 0, so no matter how you unroll this algorithm it will achieve at best one vector per 5 cycles (i.e. one multiply per 5/4 cycles.)

The shifts could be replaced with pshufb (port 5) to move the data and shift in zeros. (Other shuffles can't write zeros into the destination; they can only copy bytes from their input.)
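For example, a sketch of that substitution (my variant, not code from the original answer): the _mm256_srli_epi64 in mul64_haswell becomes a vpshufb whose control vector moves each element's high dword down and zeros its upper dword.

// Sketch of the pshufb variant (my code): replace _mm256_srli_epi64(prodlh, 32)
// with a byte shuffle that runs on port 5 instead of port 0.
// Control bytes with the high bit set (-1 here) make vpshufb write zero.
__m256i mul64_haswell_pshufb (__m256i a, __m256i b) {
    const __m256i hi_to_lo = _mm256_setr_epi8(
        4,5,6,7, -1,-1,-1,-1, 12,13,14,15, -1,-1,-1,-1,    // low 128-bit lane
        4,5,6,7, -1,-1,-1,-1, 12,13,14,15, -1,-1,-1,-1);   // high lane (same pattern)

    __m256i bswap   = _mm256_shuffle_epi32(b, 0xB1);            // swap H<->L
    __m256i prodlh  = _mm256_mullo_epi32(a, bswap);             // 32 bit L*H products
    __m256i prodlh2 = _mm256_shuffle_epi8(prodlh, hi_to_lo);    // like srli_epi64 by 32, but on port 5
    __m256i prodlh3 = _mm256_add_epi32(prodlh2, prodlh);        // cross products summed in low halves
    __m256i prodlh4 = _mm256_and_si256(prodlh3, _mm256_set1_epi64x(0x00000000FFFFFFFF)); // zero high halves

    __m256i prodll  = _mm256_mul_epu32(a, b);                   // 64 bit products of low halves
    return  _mm256_add_epi64(prodll, prodlh4);
}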

