CUBLAS Transpose matrix multiplication problem

I am trying to compute C = Aᵀ * B in CUBLAS. The thing is that with the code I have (which I have taken from this) there are some matrix dimensions for which it seems to work fine, e.g. int rows_a = 1, cols_a = 200, rows_b = 1, cols_b = 200;. For other dimensions, however, the values are not correct, e.g. int rows_a = 200, cols_a = 5, rows_b = 200, cols_b = 5;.

In my code I set up the two matrices, perform the multiplication with the CUBLAS function cublasSgemm, and then repeat the same matrix multiplication on the CPU to check the result.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>
#include <cublas_v2.h>

// Minimal helper (assumed; not shown in the original post): prints a
// rows x cols matrix stored in row-major order.
void printMatriz(const float *m, int rows, int cols)
{
    for (int i = 0; i < rows; i++)
    {
        for (int j = 0; j < cols; j++)
        {
            printf("%f ", m[(i * cols) + j]);
        }
        printf("\n");
    }
}

int main(int argc, char *argv[])
{
    cublasHandle_t handle;
    cublasCreate(&handle);

    int rows_a = 200, cols_a = 5, rows_b = 200, cols_b = 5;

    float al = 1.0f;
    float bet = 0.0f;
    float *a = (float *)malloc(rows_a * cols_a * sizeof(float));
    float *b = (float *)malloc(rows_b * cols_b * sizeof(float));
    float *c = (float *)malloc(cols_a * cols_b * sizeof(float)); // CUBLAS result
    float *cpu= (float *)malloc(cols_a * cols_b * sizeof(float)); // CPU result

    for (int i = 0; i < rows_a * cols_a; i++)
    {
        a[i] = i;
    }

    for (int i = 0; i < rows_b * cols_b; i++)
    {
        b[i] = i*4;
    }

    float *dev_a, *dev_b, *dev_c;
    cudaMalloc((void **)&dev_a, rows_a * cols_a * sizeof(float));
    cudaMalloc((void **)&dev_b, rows_b * cols_b * sizeof(float));
    cudaMalloc((void **)&dev_c, cols_a * cols_b * sizeof(float));

    cudaMemcpy(dev_a, a, rows_a * cols_a * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, rows_b * cols_b * sizeof(float), cudaMemcpyHostToDevice);

    // CUBLAS expects column-major storage, so the row-major buffers are seen
    // transposed: this call computes B^T * A in column-major terms, which is
    // A^T * B laid out row-major as a cols_a x cols_b matrix.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_T, cols_b, cols_a, rows_b, &al, dev_b, cols_b, dev_a, cols_a, &bet, dev_c, cols_a);

    cudaMemcpy(c, dev_c, cols_a * cols_b * sizeof(float), cudaMemcpyDeviceToHost);
    printMatriz(c, cols_a, cols_b);

    // CPU reference: compute C = A^T * B directly in row-major layout
    for (int i = 0; i < cols_a; i++)
    {
        for (int j = 0; j < cols_b; j++)
        {
            float v = 0;
            for (int k = 0; k < rows_a; k++)
            {
                v += a[(cols_a * k) + i] * b[(cols_b * k) + j];
            }
            cpu[(i * cols_b) + j] = v;
        }
    }

    printMatriz(cpu, cols_a, cols_b);

    cudaFree(dev_a);
    cudaFree(dev_b);
    cudaFree(dev_c);
    free(a);
    free(b);
    free(c);
    free(cpu);
    cublasDestroy(handle);

    return 0;
}

The wrong output:

(cublas)
264670000.000000 265068000.000000 265466000.000000 265864000.000000 266262000.000000 
265068000.000000 265466800.000000 265865600.000000 266264400.000000 266663200.000000
...

(cpu)
264669856.000000 265068016.000000 265466144.000000 265864000.000000 266261856.000000 
265068016.000000 265466656.000000 265865584.000000 266264544.000000 266663184.000000 
...

I expect the two results to be the same, so clearly my implementation is not correct. Could someone help me? Thanks!

1 answer

  • answered 2019-11-13 23:29 Sam Mason

    I think you're just hitting up against floating-point precision; those values are only a few bits apart. For example, in "hex notation":

    265068000 is 0x1.f993bcp+27
    265068016 is 0x1.f993bep+27
    

    notice that only the last hex digit changes, from 0xf993bc to 0xf993be, i.e. the two values are a single float ULP apart, which is pretty good considering they are still that close after 200 roundings.
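
    For reference, here is a small standalone C snippet (my addition, not part of the original answer) that makes the same check explicit: printf's %a conversion shows the exact bits in hexadecimal floating-point notation, and nextafterf confirms that the two printed values are adjacent representable floats (link with -lm if needed):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float gpu = 265068000.0f; /* value from the CUBLAS output above */
        float cpu = 265068016.0f; /* value from the CPU output above    */

        /* %a prints the exact binary value in hex floating-point notation */
        printf("gpu        = %a\n", gpu);
        printf("cpu        = %a\n", cpu);

        /* step gpu to the next representable float towards +infinity;
           it lands exactly on the CPU value, i.e. they are one ULP apart */
        printf("next float = %a\n", nextafterf(gpu, INFINITY));

        return 0;
    }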

    note that 32-bit floats are generally good for around 7 decimal digits of precision, while 64-bit doubles are good for around 15 decimal digits.
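
    If you want a programmatic check rather than eyeballing the printed matrices, one option (my suggestion, not something the question's code already does) is to compare the GPU and CPU results with a relative tolerance, or to redo the computation in double precision with cublasDgemm. A comparison sketch:

    #include <math.h>
    #include <stdio.h>

    /* Returns 1 if x and y agree to within a relative tolerance; something
       around 1e-5 is a reasonable choice for single precision. */
    static int nearly_equal(float x, float y, float rel_tol)
    {
        float diff = fabsf(x - y);
        float scale = fmaxf(fabsf(x), fabsf(y));
        return diff <= rel_tol * scale;
    }

    /* Compare two result matrices of n elements; report the first mismatch. */
    static int matrices_match(const float *gpu, const float *cpu, int n, float rel_tol)
    {
        for (int i = 0; i < n; i++)
        {
            if (!nearly_equal(gpu[i], cpu[i], rel_tol))
            {
                printf("mismatch at %d: %f vs %f\n", i, gpu[i], cpu[i]);
                return 0;
            }
        }
        return 1;
    }

    In the question's main this would be called as matrices_match(c, cpu, cols_a * cols_b, 1e-5f).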