iOS - Reverse an audio file in Swift/Objective-C

Is there a way that I could reverse and export a .m4a audio file? I found a solution to reverse an audio track here, but it only seems to work on .caf files. If the only way is to use a .caf, is there a way to convert the .m4a file to .caf first?

Update: In another post I found out that AVAssetReader can be used to read audio samples from an audio file, but I have no idea how to write the samples back in reverse order. The code snippet below is taken directly from an answer on that post. Any help would be appreciated. Thanks.

+ (void)reverseAudioTrack:(AVAsset *)audioAsset outputURL:(NSURL *)outputURL {
    NSError *error;

    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:audioAsset error:&error];
    if (error) { NSLog(@"%@", error.localizedDescription); }

    AVAssetTrack *track = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    // Ask the reader to decode the track to linear PCM.
    NSMutableDictionary *audioReadSettings = [NSMutableDictionary dictionary];
    [audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                         forKey:AVFormatIDKey];

    AVAssetReaderTrackOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:audioReadSettings];
    [reader addOutput:readerOutput];
    [reader startReading];

    CMSampleBufferRef sample; //= [readerOutput copyNextSampleBuffer];
    NSMutableArray *samples = [[NSMutableArray alloc] init];

    // Get all samples
    while ((sample = [readerOutput copyNextSampleBuffer])) {
        [samples addObject:(__bridge id)sample];
        CFRelease(sample);
    }

    // Process samples in reverse
    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                      fileType:AVFileTypeAppleM4A
                                                         error:&error];
    if (error) { NSLog(@"%@", error.localizedDescription); }

    NSDictionary *writerOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                          [NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
                                          [NSNumber numberWithInt:16], AVEncoderBitDepthHintKey,
                                          [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                          [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                          [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey, nil];

    AVAssetWriterInput *audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:writerOutputSettings];

    [writer addInput:audioWriterInput];
    [writer startWriting];
    [writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[0])];

    // (1) Would it work if I loop in reverse here?
    for (NSInteger i = 0; i < samples.count; i++) {
        CMBlockBufferRef buffer = CMSampleBufferGetDataBuffer((__bridge CMSampleBufferRef)samples[i]);

        CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples((__bridge CMSampleBufferRef)samples[i]);
        AudioBufferList audioBufferList;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer((__bridge CMSampleBufferRef)samples[i],
                                                                NULL,
                                                                &audioBufferList,
                                                                sizeof(audioBufferList),
                                                                NULL,
                                                                NULL,
                                                                kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                                &buffer);

        for (int bufferCount = 0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) {
            SInt16 *samples = (SInt16 *)audioBufferList.mBuffers[bufferCount].mData;
            for (int i = 0; i < numSamplesInBuffer; i++) {
                // amplitude for the sample is samples[i], assuming you have linear pcm to start with

                // (2) What should I be doing to write the samples into an audio file?
            }
        }
        CFRelease(buffer);
    }
}

1 Answer


Yes, there is a way you can process, then export, any of the audio files for which there is iOS support.

However, most of these formats (MP3, to name one) are lossy and compressed. You must first decompress the data, apply the transformation, and then recompress. Most transformations you apply to the audio should be done at the raw PCM level.
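
As a toy illustration (hypothetical values, not taken from the original post): once the audio has been decoded, a mono 16-bit signal is just a sequence of Int16 samples, and reversing the audio amounts to reversing that sequence.

// Purely illustrative: reversing decoded PCM means reversing the sample order.
let decodedSamples: [Int16] = [0, 120, 240, 360]        // pretend this came from a decoder (mono)
let reversedSamples = Array(decodedSamples.reversed())  // [360, 240, 120, 0]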

Putting these two points together, you do this in a few passes:

  1. convert the original file to a kAudioFormatLinearPCM-compliant audio file, such as AIFF
  2. process that temporary file (reverse its content)
  3. convert the temporary file back to the original format

Just as if you were applying a transformation to, say, a compressed JPEG image, there will be some degradation in the process. The final audio will, at best, have suffered one extra compression cycle.

So, strictly speaking, the answer is no if you expect a mathematically lossless result.
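
For step 1, here is a minimal sketch of the decode pass using AVAudioFile, written against a current SDK (unlike the Swift 3 snippet further down). The function name, URLs and the 16-bit CAF output settings are illustrative choices, not something from the original answer; error handling is left to the caller. AVAudioFile decodes the compressed source to PCM as you read, and the output file type is inferred from the destination's extension, so point it at a .caf or .aif path.

import AVFoundation

func convertToLinearPCM(from sourceURL: URL, to destinationURL: URL) throws {
    let inputFile = try AVAudioFile(forReading: sourceURL)   // decodes AAC/ALAC to PCM on read
    let format = inputFile.processingFormat

    // Write plain 16-bit interleaved linear PCM at the source's rate and channel count.
    let outputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: format.sampleRate,
        AVNumberOfChannelsKey: format.channelCount,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let outputFile = try AVAudioFile(forWriting: destinationURL, settings: outputSettings)

    // Copy the audio across in chunks; AVAudioFile converts between its float
    // processing format and the file formats on both ends.
    let chunkSize: AVAudioFrameCount = 4096
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: chunkSize) else { return }
    while inputFile.framePosition < inputFile.length {
        try inputFile.read(into: buffer, frameCount: chunkSize)
        try outputFile.write(from: buffer)
    }
}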


Just for reference, here is some starter code in Swift 3 for step 2, the reversal itself. It needs further refinement to skip the file headers.

var outAudioFile: AudioFileID?

// Destination format: 16-bit, big-endian (AIFF), mono, 44.1 kHz linear PCM.
var pcm = AudioStreamBasicDescription(mSampleRate: 44100.0,
                                      mFormatID: kAudioFormatLinearPCM,
                                      mFormatFlags: kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger,
                                      mBytesPerPacket: 2,
                                      mFramesPerPacket: 1,
                                      mBytesPerFrame: 2,
                                      mChannelsPerFrame: 1,
                                      mBitsPerChannel: 16,
                                      mReserved: 0)

var theErr = AudioFileCreateWithURL(destUrl as CFURL,
                                    kAudioFileAIFFType,
                                    &pcm,
                                    .eraseFile,
                                    &outAudioFile)
if noErr == theErr, let outAudioFile = outAudioFile {
    var inAudioFile: AudioFileID?
    theErr = AudioFileOpenURL(sourceUrl as CFURL, .readPermission, 0, &inAudioFile)

    if noErr == theErr, let inAudioFile = inAudioFile {

        // Ask the source file how many bytes of audio data it contains.
        var fileDataSize: UInt64 = 0
        var thePropertySize: UInt32 = UInt32(MemoryLayout<UInt64>.stride)
        theErr = AudioFileGetProperty(inAudioFile,
                                      kAudioFilePropertyAudioDataByteCount,
                                      &thePropertySize,
                                      &fileDataSize)

        if noErr == theErr {
            let dataSize: Int64 = Int64(fileDataSize)
            let theData = UnsafeMutableRawPointer.allocate(bytes: Int(dataSize),
                                                           alignedTo: MemoryLayout<UInt8>.alignment)

            // Walk the source backwards one 16-bit mono sample (2 bytes) at a time,
            // writing each sample to the destination front to back.
            var readPoint: Int64 = dataSize - 2
            var writePoint: Int64 = 0

            while readPoint >= 0 {
                var bytesToRead = UInt32(2)

                AudioFileReadBytes(inAudioFile, false, readPoint, &bytesToRead, theData)
                AudioFileWriteBytes(outAudioFile, false, writePoint, &bytesToRead, theData)

                writePoint += 2
                readPoint -= 2
            }

            theData.deallocate(bytes: Int(dataSize), alignedTo: MemoryLayout<UInt8>.alignment)

            AudioFileClose(inAudioFile)
            AudioFileClose(outAudioFile)
        }
    }
}
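
For step 3, converting the reversed PCM file back to the original .m4a format, AVAssetExportSession with the Apple M4A preset is probably the least code. This is a hedged sketch against a current SDK; the function name and URLs are placeholders, and error handling is minimal.

import AVFoundation

func exportToM4A(from pcmURL: URL, to m4aURL: URL,
                 completion: @escaping (Error?) -> Void) {
    let asset = AVURLAsset(url: pcmURL)
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetAppleM4A) else {
        completion(nil)   // preset unavailable for this asset; treat as an error in real code
        return
    }
    session.outputURL = m4aURL
    session.outputFileType = .m4a   // AVFileTypeAppleM4A on older SDKs
    session.exportAsynchronously {
        completion(session.error)   // nil on success
    }
}

Chained together, the three passes (decode, reverse, re-encode) should give you a reversed .m4a, at the cost of the one extra compression cycle mentioned above.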
