6 Replies Latest reply: Sep 24, 2009 8:37 AM by Mikael Grev
Mikael Grev Level 1 (0 points)
Hello,

I get quite fast readback (8 ms for 1920x1200) on an NVIDIA 8600M GT (MBP 2007). However, the same code, and many variations of it, is six times slower on a brand-new ATI Radeon 4870 512MB on PCI-E (2.93 GHz quad-core Mac Pro). If it were the other way around I'd understand.

I have tried reading from a new framebuffer. I have tried copying the pixels to a PBO and reading/mapping from it. No better. Everything is lightning fast in VRAM, but as soon as I try to pull the pixels down to system memory it is slow.

Btw, here's the simplest version of the code. As I said, I have tried many more complicated versions, but they are all as slow or slower.

Any pointers on how to speed this up would be appreciated. Btw, I'm running 10.5 on the fast MacBook Pro and 10.6 on the Mac Pro, but that shouldn't matter, should it?

Cheers,
Mikael


static CGLContextObj screen = NULL; // cached fullscreen context, reused between calls

int capture(int capX, int capY, int capWidth, int capHeight, char *dest)
{
    if (screen == NULL) {
        CGLPixelFormatObj pix;
        GLint npix;

        CGLPixelFormatAttribute attribs[] = {
            kCGLPFAFullScreen,
            kCGLPFADisplayMask, (CGLPixelFormatAttribute) CGDisplayIDToOpenGLDisplayMask(CGMainDisplayID()),
            (CGLPixelFormatAttribute) 0
        };

        // Create a fullscreen context
        CGLChoosePixelFormat(attribs, &pix, &npix);
        CGLCreateContext(pix, NULL, &screen);
        CGLDestroyPixelFormat(pix);

        if (glGetError() != GL_NO_ERROR)
            return -1;
    }

    CGLSetCurrentContext(screen);
    CGLSetFullScreen(screen);

    if (glGetError() != GL_NO_ERROR)
        return -2;

    // Rows are tightly packed (no padding between scanlines)
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);

    glReadPixels(capX, capY, capWidth, capHeight, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8, dest);

    if (glGetError() != GL_NO_ERROR)
        return -3;

    return 0;
}

Mac OS X (10.5.1)