
ios - Where is the official documentation for CVOpenGLESTexture method types?

I've tried Google and Stack Overflow, but I can't seem to find the official documentation for the functions that start with CVOpenGLESTexture. I can see they are from Core Video, and I know they were added in iOS 5, but searching the documentation doesn't give me anything.

I am looking for information about the parameters: what they do, how to use them, and so on, like in the other Apple frameworks.

So far all I can do is Command-click a function to see its declaration, which feels awkward. Is there a way to add this information so it shows up in the Quick Help panel on the right in Xcode?

Thanks and sorry if it is a stupid question.

PS: The Core Video reference guide doesn't seem to explain these functions either.



1 Answer


Unfortunately, there really isn't any documentation on these new functions. The best you're going to find right now is in the CVOpenGLESTextureCache.h header file, where you'll see a basic description of the function parameters:

/*!
    @function   CVOpenGLESTextureCacheCreate
    @abstract   Creates a new Texture Cache.
    @param      allocator The CFAllocatorRef to use for allocating the cache.  May be NULL.
    @param      cacheAttributes A CFDictionaryRef containing the attributes of the cache itself.   May be NULL.
    @param      eaglContext The OpenGLES 2.0 context into which the texture objects will be created.  OpenGLES 1.x contexts are not supported.
    @param      textureAttributes A CFDictionaryRef containing the attributes to be used for creating the CVOpenGLESTexture objects.  May be NULL.
    @param      cacheOut   The newly created texture cache will be placed here
    @result     Returns kCVReturnSuccess on success
*/
CV_EXPORT CVReturn CVOpenGLESTextureCacheCreate(
                    CFAllocatorRef allocator,
                    CFDictionaryRef cacheAttributes,
                    void *eaglContext,
                    CFDictionaryRef textureAttributes,
                    CVOpenGLESTextureCacheRef *cacheOut) __OSX_AVAILABLE_STARTING(__MAC_NA,__IPHONE_5_0);

The more difficult elements are the attributes dictionaries, for which you unfortunately need to find examples in order to use these functions properly. Apple's GLCameraRipple and RosyWriter examples show how to use the fast texture upload path with BGRA and YUV input color formats, and a YUV sketch follows below. Apple also provided the ChromaKey example at WWDC (which may still be accessible along with the videos) that demonstrated how to use these texture caches to pull information from an OpenGL ES texture.
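For the biplanar YUV camera formats those samples use, each plane of the pixel buffer becomes its own texture. The following is a minimal sketch of that path, assuming a camera frame in the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange format and a texture cache like the one created below (the pixelBuffer, lumaTexture, and chromaTexture names are placeholders, not code from my framework):

// Assumes pixelBuffer is a 420YpCbCr8BiPlanar camera frame, already locked
size_t width  = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

CVOpenGLESTextureRef lumaTexture = NULL;
CVOpenGLESTextureRef chromaTexture = NULL;

// Plane 0: full-resolution luminance (Y)
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache,
    pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE, (GLsizei)width, (GLsizei)height,
    GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &lumaTexture);

// Plane 1: half-resolution interleaved chrominance (CbCr)
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache,
    pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, (GLsizei)(width / 2), (GLsizei)(height / 2),
    GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chromaTexture);

// A fragment shader then recombines the Y and CbCr samples into RGB

Note the last integer argument before the output texture: it is the plane index within the pixel buffer, which is how the two planes are addressed separately.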

I just got this fast texture uploading working in my open source GPUImage framework, so I'll lay out what I was able to figure out. First, I create a texture cache using the following code:

CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &coreVideoTextureCache);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}

where the context referred to is an EAGLContext configured for OpenGL ES 2.0.
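For reference, creating and activating such a context is straightforward; this small sketch sits outside my framework's own context management:

EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];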

I use this to keep video frames from the iOS device camera in video memory, and I use the following code to do this:

// Lock the pixel buffer before handing it to the texture cache
CVPixelBufferLockBaseAddress(cameraFrame, 0);

CVOpenGLESTextureRef texture = NULL;
// Wrap the BGRA camera frame in an OpenGL ES texture, with no glTexImage2D() copy
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_RGBA, bufferWidth, bufferHeight, GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

if (!texture || err) {
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
    return;
}

// Grab the GL texture name and set filtering and wrap modes on it
outputTexture = CVOpenGLESTextureGetName(texture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Do processing work on the texture data here

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

// Flush the cache and release the texture once the frame is processed
CVOpenGLESTextureCacheFlush(coreVideoTextureCache, 0);
CFRelease(texture);
outputTexture = 0;

This creates a new CVOpenGLESTextureRef, representing an OpenGL ES texture, from the texture cache. This texture is based on the CVImageBufferRef passed in by the camera. The GL texture name is then retrieved from the CVOpenGLESTextureRef, and the appropriate filtering and wrap parameters are set for it (which seemed to be necessary in my processing). Finally, I do my work on the texture and clean up when I'm done.
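One thing to note: the GL_BGRA / GL_UNSIGNED_BYTE combination above assumes the camera delivers its frames as 32BGRA pixel buffers. If you're configuring the capture pipeline yourself, the relevant setting looks roughly like this (the videoOutput variable here is hypothetical, not part of the code above):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Ask AVFoundation for BGRA output so frames can be wrapped directly as GL_BGRA textures
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey]];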

This fast upload path makes a real difference on iOS devices: it cut the upload and processing of a single 640x480 video frame on an iPhone 4S from 9.0 ms down to 1.8 ms.

I've heard that this works in reverse as well, which might allow the replacement of glReadPixels() in certain situations, but I've yet to try this.
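If you want to experiment with that direction, the idea would be to render into a texture that is backed by a CVPixelBufferRef, then read the bytes straight out of the pixel buffer rather than calling glReadPixels(). Since I haven't verified this, treat the following as a rough sketch of how I'd expect it to work with these same APIs, assuming a framebuffer object is already bound (error checking omitted for brevity):

int width = 640, height = 480; // hypothetical render dimensions

// Create an IOSurface-backed pixel buffer that the texture cache can wrap
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
    kCVPixelFormatType_32BGRA, attrs, &renderTarget);

// Wrap the pixel buffer in a texture, just as with the camera frames above
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache,
    renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA, width, height,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Attach the texture as the color target of the bound framebuffer and render into it
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

// ... draw ...

// Instead of glReadPixels(), wait for rendering to finish and read the buffer directly
glFinish();
CVPixelBufferLockBaseAddress(renderTarget, 0);
GLubyte *renderedBytes = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
// ... use renderedBytes ...
CVPixelBufferUnlockBaseAddress(renderTarget, 0);

CFRelease(renderTexture);
CVPixelBufferRelease(renderTarget);
CFRelease(attrs);
CFRelease(empty);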

