
iPhone - IOSurfaces - Artefacts in video and unable to grab video surfaces

This is a two-part question. I have the following code working, which grabs the current display surface and creates a video out of the captured surfaces (everything happens in the background).

// m_width, m_height, frameCount, sampleBuffer (CVPixelBufferRef) and adaptor are declared elsewhere.
for (int i = 0; i < 100; i++) {
    IOMobileFramebufferConnection connect;
    kern_return_t result;
    IOSurfaceRef screenSurface = NULL;

    // Locate the framebuffer service (the matching class differs per device generation).
    io_service_t framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));

    // Open the framebuffer and grab the surface backing layer 0 (the screen).
    result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
    result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, &screenSurface);

    // Lock the screen surface read-only while its contents are copied.
    uint32_t aseed;
    IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    uint32_t width = IOSurfaceGetWidth(screenSurface);
    uint32_t height = IOSurfaceGetHeight(screenSurface);
    m_width = width;
    m_height = height;

    // Describe a BGRA destination surface of the same size.
    CFMutableDictionaryRef dict;
    int pitch = width * 4, size = width * height * 4;
    int bPE = 4;
    char pixelFormat[4] = {'A', 'R', 'G', 'B'}; // read as a little-endian 32-bit value this is the 'BGRA' fourcc
    dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
    CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pitch));
    CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE));
    CFDictionarySetValue(dict, kIOSurfaceWidth, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &width));
    CFDictionarySetValue(dict, kIOSurfaceHeight, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &height));
    CFDictionarySetValue(dict, kIOSurfacePixelFormat, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat));
    CFDictionarySetValue(dict, kIOSurfaceAllocSize, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size));

    IOSurfaceRef destSurf = IOSurfaceCreate(dict);

    // Blit the screen surface into the copy using the surface accelerator.
    IOSurfaceAcceleratorRef outAcc;
    IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
    IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);

    IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    CFRelease(outAcc);

    // MOST RELEVANT PART OF CODE:
    // wrap the copied surface in a pixel buffer and hand it to the adaptor.
    CVPixelBufferCreateWithBytes(NULL, width, height, kCVPixelFormatType_32BGRA, IOSurfaceGetBaseAddress(destSurf), IOSurfaceGetBytesPerRow(destSurf), NULL, NULL, NULL, &sampleBuffer);

    CMTime frameTime = CMTimeMake(frameCount, (int32_t)5);
    [adaptor appendPixelBuffer:sampleBuffer withPresentationTime:frameTime];

    CFRelease(sampleBuffer);
    CFRelease(destSurf);
    frameCount++;
}

P.S.: The last 4-5 lines of code are the most relevant (if you need to filter).

1) The video that is produced has artefacts. I have worked with video before and have run into this kind of issue. I can think of two possible reasons:
i. The pixel buffer that is passed to the adaptor is being modified or released before the processing (encoding + writing) is complete. This could be due to the asynchronous calls, but I am not sure whether this is really the problem or how to resolve it (see the sketch after this list).
ii. The timestamps that are passed are inaccurate (e.g. two frames having the same timestamp, or a frame having a lower timestamp than the previous frame). I logged the timestamp values, and this does not seem to be the problem.
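
On point i: CVPixelBufferCreateWithBytes() wraps the IOSurface's memory without copying it, so releasing destSurf immediately after appendPixelBuffer: can pull the bytes out from under the writer, which encodes asynchronously. One way around that is to copy each frame into a pixel buffer owned by the adaptor's pool, so the writer keeps its own copy. This is only a minimal sketch, assuming the adaptor was created with sourcePixelBufferAttributes and that writing has started so pixelBufferPool is non-nil; copySurfaceToPoolBuffer is a hypothetical helper, not part of the code above:

#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>
// The IOSurface functions come from the (private at the time) IOSurface framework,
// exactly as in the question's code.

// Hypothetical helper: copy the transferred surface into a pool-owned pixel
// buffer so the writer never sees memory that we release afterwards.
static CVPixelBufferRef copySurfaceToPoolBuffer(IOSurfaceRef surf,
                                                AVAssetWriterInputPixelBufferAdaptor *adaptor)
{
    CVPixelBufferRef pb = NULL;
    if (CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                           adaptor.pixelBufferPool, &pb) != kCVReturnSuccess)
        return NULL;

    CVPixelBufferLockBaseAddress(pb, 0);
    uint8_t *dst      = CVPixelBufferGetBaseAddress(pb);
    uint8_t *src      = IOSurfaceGetBaseAddress(surf);
    size_t   dstPitch = CVPixelBufferGetBytesPerRow(pb);
    size_t   srcPitch = IOSurfaceGetBytesPerRow(surf);
    size_t   rows     = IOSurfaceGetHeight(surf);
    size_t   rowBytes = (dstPitch < srcPitch) ? dstPitch : srcPitch;

    // Copy row by row, because the two buffers may use different row pitches.
    for (size_t y = 0; y < rows; y++)
        memcpy(dst + y * dstPitch, src + y * srcPitch, rowBytes);

    CVPixelBufferUnlockBaseAddress(pb, 0);
    return pb; // the caller appends it and then releases it
}

With that helper, the end of the loop would append the pool buffer instead of the wrapped one, e.g.:

CVPixelBufferRef pb = copySurfaceToPoolBuffer(destSurf, adaptor);
if (pb) {
    [adaptor appendPixelBuffer:pb withPresentationTime:frameTime];
    CVPixelBufferRelease(pb);
}
CFRelease(destSurf);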

2) The code above is not able to grab surfaces when a video is being played or when games are running; all I get is a blank screen in the output. This might be due to the hardware-accelerated decoding and compositing that happens in those cases (see the experiment sketched below).
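
On point 2, one purely speculative experiment: hardware-decoded video and GPU-composited game content may end up in a framebuffer layer other than layer 0, so it may be worth probing the other layer indices and logging what comes back. IOMobileFramebufferGetLayerDefaultSurface() is the same private call used above; the 0-3 index range is just a guess, since the API is undocumented:

// Probe a few layer indices on the already-opened framebuffer connection.
for (int layer = 0; layer < 4; layer++) {        // 0-3 is an arbitrary probe range
    IOSurfaceRef surf = NULL;
    kern_return_t kr = IOMobileFramebufferGetLayerDefaultSurface(connect, layer, &surf);
    if (kr == KERN_SUCCESS && surf != NULL) {
        NSLog(@"layer %d: %zu x %zu surface", layer,
              IOSurfaceGetWidth(surf), IOSurfaceGetHeight(surf));
    }
}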

Any input on either of the two parts of the question would be really helpful. Also, if you have any good links for reading about IOSurfaces in general, please post them here.

1 Answer


I did a bit of experimentation and concluded that the screen surface from which the content is copied is changing even before the transfer of its contents is complete (the call to IOSurfaceAcceleratorTransferSurface()). I am using a lock (I tried both the asynchronous and the read-only variants), but it is overridden by iOS. I reduced the code between the lock and the unlock to the following minimum:

uint32_t aseed, aseed1, aseed2;

IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
aseed1 = IOSurfaceGetSeed(screenSurface);   // seed before the copy
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
aseed2 = IOSurfaceGetSeed(screenSurface);   // seed after the copy
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);

IOSurfaceGetSeed() tells whether the contents of a surface have changed. I logged a count of the frames for which the seed changed between those two calls, and the count was non-zero. So the following check resolved the problem:

if (aseed1 != aseed2) {
    // Release the surface created for this frame
    continue; // Do not use this surface/frame since it has artefacts
}

This does, however, affect performance, since many frames/surfaces are rejected because of artefacts. Any additions or corrections to this will be helpful.
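
For completeness, here is one way the "release the created surface" placeholder might be filled in when the check is placed back into the loop from the question, right after the unlock and before the pixel buffer is created (a sketch using the question's variable names):

IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
CFRelease(outAcc);

if (aseed1 != aseed2) {
    CFRelease(destSurf);   // drop the torn copy made for this frame
    continue;              // do not append this frame; grab a fresh one next iteration
}

// Only wrap and append destSurf when the seed stayed stable during the transfer.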

