Preface

I think everyone has built a recording feature at some point, and the first thing that comes to mind is probably MediaRecorder. Today we won't use MediaRecorder; instead we'll compile our own audio encoding module from the LAME library, which obviously requires the NDK. Anything involving audio/video encoding and decoding needs the Android NDK (Native Development Kit). In other words, we implement audio sampling and encoding in C/C++, then call the native module from Java to implement recording.
Audio/Video Basics

Let me quickly run through some basic audio concepts.
Audio sample: audio sampling, usually the process of recording a raw PCM file. PCM (Pulse Code Modulation) is a raw, uncompressed audio stream. What the human ear hears is an analog signal; PCM is the technique of converting sound from an analog signal to a digital one.

Audio track: a track. A container-format audio file can hold multiple tracks; MP3 is one such encoded format, in contrast to raw PCM sample data. In a song, for example, the vocals are one track, and the guitar, drums, and other mixed-in sounds are each tracks of their own.

Audio channel: a channel, such as the left channel, right channel, or surround sound.

Bitrate: the bit rate, commonly called the code rate. It directly determines the fidelity of the sound, i.e. how much detail of the signal is preserved; the SQ and HQ quality tiers are judged by it. The higher the bitrate, the larger the audio file and the higher the quality.

Sample rate: most audio uses the common international standard rates of 44.1 kHz or 48 kHz. Ultrasound and infrasound are deliberately not captured, since the human ear cannot perceive them.
The recording and playback pipelines in detail:

Recording: audio sampling and encoding -> muxing (packaging into a container)

Playback: demuxing -> decoding and playing

Recording first samples and encodes the audio into PCM data, then packages that data into an MP3, WAV, FLAC, or similar file. Playback reverses this: the file is first demuxed back into an encoded stream, which is then decoded into PCM and played.
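As a quick sanity check on these numbers, the size of raw PCM follows directly from the sample rate, channel count, and bit depth. The values below are illustrative and not taken from the recorder code:

```java
public class PcmMath {
    // Bytes of raw PCM produced per second of audio.
    static long pcmBytesPerSecond(int sampleRate, int channels, int bitsPerSample) {
        return (long) sampleRate * channels * (bitsPerSample / 8);
    }

    public static void main(String[] args) {
        // CD quality: 44100 Hz, stereo, 16-bit -> 176400 bytes/s (about 1411 kbps)
        System.out.println(pcmBytesPerSecond(44100, 2, 16));
    }
}
```

This is why uncompressed PCM is impractical to ship and why we encode it to MP3 below.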
Building the shared library (.so)

There are two ways to build a shared .so library: ndk-build with Android.mk and Application.mk, or CMake with CMakeLists.txt. Today we'll use the original approach, the .mk files. In fact, many Android.mk directives map almost one-to-one onto CMakeLists.txt commands:
| Android.mk | CMakeLists.txt |
|---|---|
| LOCAL_MODULE, LOCAL_SRC_FILES | add_library |
| LOCAL_CFLAGS | add_definitions |
| LOCAL_C_INCLUDES | include_directories |
| LOCAL_STATIC_LIBRARIES, LOCAL_SHARED_LIBRARIES | add_library + set_target_properties |
| LOCAL_LDLIBS | find_library + target_link_libraries |
C files
Android.mk
```mk
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LAME_LIBMP3_DIR := lame-3.100_libmp3lame

LOCAL_MODULE := mp3lame

LOCAL_SRC_FILES :=\
$(LAME_LIBMP3_DIR)/bitstream.c \
$(LAME_LIBMP3_DIR)/fft.c \
$(LAME_LIBMP3_DIR)/id3tag.c \
$(LAME_LIBMP3_DIR)/mpglib_interface.c \
$(LAME_LIBMP3_DIR)/presets.c \
$(LAME_LIBMP3_DIR)/quantize.c \
$(LAME_LIBMP3_DIR)/reservoir.c \
$(LAME_LIBMP3_DIR)/tables.c \
$(LAME_LIBMP3_DIR)/util.c \
$(LAME_LIBMP3_DIR)/VbrTag.c \
$(LAME_LIBMP3_DIR)/encoder.c \
$(LAME_LIBMP3_DIR)/gain_analysis.c \
$(LAME_LIBMP3_DIR)/lame.c \
$(LAME_LIBMP3_DIR)/newmdct.c \
$(LAME_LIBMP3_DIR)/psymodel.c \
$(LAME_LIBMP3_DIR)/quantize_pvt.c \
$(LAME_LIBMP3_DIR)/set_get.c \
$(LAME_LIBMP3_DIR)/takehiro.c \
$(LAME_LIBMP3_DIR)/vbrquantize.c \
$(LAME_LIBMP3_DIR)/version.c \
MP3Encoder.c

include $(BUILD_SHARED_LIBRARY)
```
- LOCAL_PATH := $(call my-dir): the directory containing this Android.mk; this must be the first line of the file.
- include $(CLEAR_VARS): clears the LOCAL_* variables left over from the previous module so a fresh build can begin.
- LOCAL_MODULE: the name of the build target, used to distinguish modules; it must be unique and contain no spaces. If the target is a shared library, the output file is named lib<module name>.so, so mp3lame here produces libmp3lame.so.
- LOCAL_SRC_FILES: the .c/.cpp files to compile; .h/.hpp headers need not be listed, the build system includes them automatically.
- include $(BUILD_SHARED_LIBRARY): lines beginning with include reference the build system's built-in scripts; this one builds a dynamic (shared) library. Other options are:
  - BUILD_STATIC_LIBRARY: build a static library.
  - PREBUILT_STATIC_LIBRARY: wrap a prebuilt static library as a module.
  - PREBUILT_SHARED_LIBRARY: wrap a prebuilt shared library as a module.
  - BUILD_EXECUTABLE: build an executable.
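The lib<name>.so convention matters because System.loadLibrary takes only the bare module name and expands it per platform. The standard JDK call System.mapLibraryName performs the same expansion, which makes the mapping easy to see (on Linux and Android the result is lib<name>.so; on other platforms it differs):

```java
public class LibName {
    public static void main(String[] args) {
        // Expands a bare module name to the platform's library file name.
        // On Linux/Android this prints "libmp3lame.so", which is exactly
        // the file System.loadLibrary("mp3lame") will look for.
        System.out.println(System.mapLibraryName("mp3lame"));
    }
}
```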
Application.mk
```mk
APP_ABI := armeabi armeabi-v7a arm64-v8a x86 x86_64 mips mips64
APP_MODULES := mp3lame
APP_CFLAGS += -DSTDC_HEADERS
APP_PLATFORM := android-21
```
- APP_ABI: the ABIs (Application Binary Interfaces) to build for. An ABI describes the binary-level interface between an application and the platform, and each one corresponds to a CPU instruction set. Note that armeabi, mips, and mips64 were removed in NDK r17; modern projects typically build only armeabi-v7a, arm64-v8a, x86, and x86_64.
- APP_MODULES: the modules to build.
- APP_CFLAGS: extra compiler flags. -DSTDC_HEADERS defines the macro STDC_HEADERS, which the LAME sources check to confirm that the standard C library headers are available; without it the build may fail or misbehave.
- APP_PLATFORM: the minimum Android platform (API level) the library targets.
The JNI-related files
```c
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>

#ifndef _Included_Mp3Encoder
#define _Included_Mp3Encoder
#ifdef __cplusplus
extern "C" {
#endif

JNIEXPORT void JNICALL Java_com_dorachat_dorachat_recorder_mp3_Mp3Encoder_init
        (JNIEnv *, jclass, jint, jint, jint, jint, jint);

JNIEXPORT jint JNICALL Java_com_dorachat_dorachat_recorder_mp3_Mp3Encoder_encode
        (JNIEnv *, jclass, jshortArray, jshortArray, jint, jbyteArray);

JNIEXPORT jint JNICALL Java_com_dorachat_dorachat_recorder_mp3_Mp3Encoder_flush
        (JNIEnv *, jclass, jbyteArray);

JNIEXPORT void JNICALL Java_com_dorachat_dorachat_recorder_mp3_Mp3Encoder_close
        (JNIEnv *, jclass);

#ifdef __cplusplus
}
#endif
#endif
```
The JNI layer over LAME defines four methods: init, encode, flush, and close.
```c
#include "lame-3.100_libmp3lame/lame.h"
#include "Mp3Encoder.h"

static lame_global_flags *glf = NULL;

JNIEXPORT void JNICALL Java_com_dorachat_dorachat_recorder_mp3_Mp3Encoder_init(
        JNIEnv *env, jclass cls, jint inSamplerate, jint outChannel,
        jint outSamplerate, jint outBitrate, jint quality) {
    if (glf != NULL) {
        lame_close(glf);
        glf = NULL;
    }
    glf = lame_init();
    lame_set_in_samplerate(glf, inSamplerate);
    lame_set_num_channels(glf, outChannel);
    lame_set_out_samplerate(glf, outSamplerate);
    lame_set_brate(glf, outBitrate);
    lame_set_quality(glf, quality);
    lame_init_params(glf);
}

JNIEXPORT jint JNICALL Java_com_dorachat_dorachat_recorder_mp3_Mp3Encoder_encode(
        JNIEnv *env, jclass cls, jshortArray buffer_l, jshortArray buffer_r,
        jint samples, jbyteArray mp3buf) {
    jshort* j_buffer_l = (*env)->GetShortArrayElements(env, buffer_l, NULL);
    jshort* j_buffer_r = (*env)->GetShortArrayElements(env, buffer_r, NULL);
    const jsize mp3buf_size = (*env)->GetArrayLength(env, mp3buf);
    jbyte* j_mp3buf = (*env)->GetByteArrayElements(env, mp3buf, NULL);
    int result = lame_encode_buffer(glf, j_buffer_l, j_buffer_r,
            samples, (unsigned char *) j_mp3buf, mp3buf_size);
    (*env)->ReleaseShortArrayElements(env, buffer_l, j_buffer_l, 0);
    (*env)->ReleaseShortArrayElements(env, buffer_r, j_buffer_r, 0);
    (*env)->ReleaseByteArrayElements(env, mp3buf, j_mp3buf, 0);
    return result;
}

JNIEXPORT jint JNICALL Java_com_dorachat_dorachat_recorder_mp3_Mp3Encoder_flush(
        JNIEnv *env, jclass cls, jbyteArray mp3buf) {
    const jsize mp3buf_size = (*env)->GetArrayLength(env, mp3buf);
    jbyte* j_mp3buf = (*env)->GetByteArrayElements(env, mp3buf, NULL);
    int result = lame_encode_flush(glf, (unsigned char *) j_mp3buf, mp3buf_size);
    (*env)->ReleaseByteArrayElements(env, mp3buf, j_mp3buf, 0);
    return result;
}

JNIEXPORT void JNICALL Java_com_dorachat_dorachat_recorder_mp3_Mp3Encoder_close(
        JNIEnv *env, jclass cls) {
    lame_close(glf);
    glf = NULL;
}
```
With a little C under your belt, calling the LAME library's functions is quite easy.
```java
package com.dorachat.dorachat.recorder.mp3;

public class Mp3Encoder {

    static {
        System.loadLibrary("mp3lame");
    }

    public native static void close();

    public native static int encode(short[] buffer_l, short[] buffer_r, int samples, byte[] mp3buf);

    public native static int flush(byte[] mp3buf);

    public native static void init(int inSampleRate, int outChannel, int outSampleRate, int outBitrate, int quality);

    public static void init(int inSampleRate, int outChannel, int outSampleRate, int outBitrate) {
        init(inSampleRate, outChannel, outSampleRate, outBitrate, 7);
    }
}
```
Now let's look at the Java layer.

Java-layer implementation

Recording should of course run in a background Service, which we can drive from onStartCommand. Since this is a simple feature, we won't bother with AIDL for cross-process communication. When recording starts, we spawn a thread to sample the audio data.
```java
public void start(String filePath, RecordConfig config) {
    this.currentConfig = config;
    if (state != RecordState.IDLE && state != RecordState.STOP) {
        Logger.e(TAG, "Illegal state, current state: %s", state.name());
        return;
    }
    resultFile = new File(filePath);
    String tempFilePath = getTempFilePath();
    LogUtils.dformat(TAG, "----------------Start recording %s------------------------", currentConfig.getFormat().name());
    LogUtils.dformat(TAG, "Config: %s", currentConfig.toString());
    LogUtils.iformat(TAG, "PCM cache tmpFile: %s", tempFilePath);
    LogUtils.iformat(TAG, "Result file: %s", filePath);
    tmpFile = new File(tempFilePath);
    audioRecordThread = new AudioRecordThread();
    audioRecordThread.start();
}
```
```java
private class AudioRecordThread extends Thread {
    private AudioRecord audioRecord;
    private int bufferSize;

    AudioRecordThread() {
        bufferSize = AudioRecord.getMinBufferSize(currentConfig.getSampleRate(),
                currentConfig.getChannelConfig(), currentConfig.getEncodingConfig()) * RECORD_AUDIO_BUFFER_TIMES;
        Logger.d(TAG, "record buffer size = %s", bufferSize);
        audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, currentConfig.getSampleRate(),
                currentConfig.getChannelConfig(), currentConfig.getEncodingConfig(), bufferSize);
        if (currentConfig.getFormat() == RecordConfig.RecordFormat.MP3) {
            if (mp3EncodeThread == null) {
                initMp3EncoderThread(bufferSize);
            } else {
                LogUtils.e("mp3EncodeThread != null, check the code");
            }
        }
    }

    @Override
    public void run() {
        super.run();
        switch (currentConfig.getFormat()) {
            case MP3:
                startMp3Recorder();
                break;
            default:
                startPcmRecorder();
                break;
        }
    }

    private void startPcmRecorder() {
        state = RecordState.RECORDING;
        notifyState();
        LogUtils.d("Start recording PCM");
        FileOutputStream fos = null;
        try {
            fos = new FileOutputStream(tmpFile);
            audioRecord.startRecording();
            byte[] byteBuffer = new byte[bufferSize];
            while (state == RecordState.RECORDING) {
                int end = audioRecord.read(byteBuffer, 0, byteBuffer.length);
                // read() returns a negative error code on failure, so guard before writing
                if (end > 0) {
                    notifyData(byteBuffer);
                    fos.write(byteBuffer, 0, end);
                    fos.flush();
                }
            }
            audioRecord.stop();
            files.add(tmpFile);
            if (state == RecordState.STOP) {
                makeFile();
            } else {
                LogUtils.i("Paused");
            }
        } catch (Exception e) {
            LogUtils.e(e.getMessage());
            notifyError("Recording failed");
        } finally {
            try {
                if (fos != null) {
                    fos.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        if (state != RecordState.PAUSE) {
            state = RecordState.IDLE;
            notifyState();
            LogUtils.d("Recording finished");
        }
    }

    private void startMp3Recorder() {
        state = RecordState.RECORDING;
        notifyState();
        try {
            audioRecord.startRecording();
            short[] byteBuffer = new short[bufferSize];
            while (state == RecordState.RECORDING) {
                int end = audioRecord.read(byteBuffer, 0, byteBuffer.length);
                // Guard against error codes before handing the chunk to the encoder thread
                if (end > 0) {
                    if (mp3EncodeThread != null) {
                        mp3EncodeThread.addChangeBuffer(new Mp3EncodeThread.ChangeBuffer(byteBuffer, end));
                    }
                    notifyData(ByteUtils.toBytes(byteBuffer));
                }
            }
            audioRecord.stop();
        } catch (Exception e) {
            LogUtils.e(e.getMessage());
            notifyError("Recording failed");
        }
        if (state != RecordState.PAUSE) {
            state = RecordState.IDLE;
            notifyState();
            stopMp3Encoded();
        } else {
            LogUtils.d("Paused");
        }
    }
}

private void stopMp3Encoded() {
    if (mp3EncodeThread != null) {
        mp3EncodeThread.stopSafe(new Mp3EncodeThread.EncodeFinishListener() {
            @Override
            public void onFinish() {
                notifyFinish();
                mp3EncodeThread = null;
            }
        });
    } else {
        LogUtils.e("mp3EncodeThread is null, the flow is wrong, please check!");
    }
}

private void makeFile() {
    switch (currentConfig.getFormat()) {
        case MP3:
            return;
        case WAV:
            mergePcmFile();
            makeWav();
            break;
        case PCM:
            mergePcmFile();
            break;
        default:
            break;
    }
    notifyFinish();
    LogUtils.i("Recording done! path: %s ; size: %s", resultFile.getAbsoluteFile(), resultFile.length());
}

/**
 * Prepend the WAV header.
 */
private void makeWav() {
    if (!FileUtils.isFile(resultFile) || resultFile.length() == 0) {
        return;
    }
    byte[] header = WavUtils.generateWavFileHeader((int) resultFile.length(), currentConfig.getSampleRate(), currentConfig.getChannelCount(), currentConfig.getEncoding());
    WavUtils.writeHeader(resultFile, header);
}

/**
 * Merge the PCM files.
 */
private void mergePcmFile() {
    boolean mergeSuccess = mergePcmFiles(resultFile, files);
    if (!mergeSuccess) {
        notifyError("Merge failed");
    }
}

/**
 * Merge multiple PCM files into one.
 *
 * @param recordFile the output file
 * @param files      the source files
 * @return whether the merge succeeded
 */
private boolean mergePcmFiles(File recordFile, List<File> files) {
    if (recordFile == null || files == null || files.size() <= 0) {
        return false;
    }
    FileOutputStream fos = null;
    BufferedOutputStream outputStream = null;
    byte[] buffer = new byte[1024];
    try {
        fos = new FileOutputStream(recordFile);
        outputStream = new BufferedOutputStream(fos);
        for (int i = 0; i < files.size(); i++) {
            BufferedInputStream inputStream = new BufferedInputStream(new FileInputStream(files.get(i)));
            int readCount;
            while ((readCount = inputStream.read(buffer)) > 0) {
                outputStream.write(buffer, 0, readCount);
            }
            inputStream.close();
        }
    } catch (Exception e) {
        LogUtils.e(e.getMessage());
        return false;
    } finally {
        try {
            if (outputStream != null) {
                outputStream.close();
            }
            if (fos != null) {
                fos.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    for (int i = 0; i < files.size(); i++) {
        files.get(i).delete();
    }
    files.clear();
    return true;
}

/**
 * Generate a temp file name from the current time.
 */
private String getTempFilePath() {
    String fileDir = String.format(Locale.getDefault(), "%s/Record/", Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_MUSIC).getAbsolutePath());
    if (!FileUtils.createOrExistsDir(fileDir)) {
        LogUtils.e("Failed to create directory: " + fileDir);
    }
    String fileName = String.format(Locale.getDefault(), "record_tmp_%s", FileUtils.getNowString(new SimpleDateFormat("yyyyMMdd_HH_mm_ss", Locale.SIMPLIFIED_CHINESE)));
    return String.format(Locale.getDefault(), "%s%s.pcm", fileDir, fileName);
}

/**
 * The current recording state.
 */
public enum RecordState {
    /**
     * Idle.
     */
    IDLE,
    /**
     * Recording.
     */
    RECORDING,
    /**
     * Paused.
     */
    PAUSE,
    /**
     * Stopping.
     */
    STOP,
    /**
     * Finished (conversion done).
     */
    FINISH
}
```
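WavUtils.generateWavFileHeader and WavUtils.writeHeader above are project helpers whose source isn't shown here. As a rough idea of what the former produces, here is a minimal sketch of the standard 44-byte RIFF/WAVE header for 16-bit PCM; the field layout follows the canonical WAV format, but the class and method names are mine, not the project's:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavHeader {
    // Build the standard 44-byte WAV header for raw PCM data.
    // pcmLength is the size of the PCM payload in bytes.
    static byte[] header(int pcmLength, int sampleRate, int channels, int bitsPerSample) {
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes());                           // chunk id
        b.putInt(36 + pcmLength);                           // chunk size = file size - 8
        b.put("WAVE".getBytes());                           // format
        b.put("fmt ".getBytes());                           // subchunk 1 id
        b.putInt(16);                                       // subchunk 1 size (PCM)
        b.putShort((short) 1);                              // audio format: 1 = PCM
        b.putShort((short) channels);                       // channel count
        b.putInt(sampleRate);                               // sample rate
        b.putInt(byteRate);                                 // bytes per second
        b.putShort((short) (channels * bitsPerSample / 8)); // block align
        b.putShort((short) bitsPerSample);                  // bit depth
        b.put("data".getBytes());                           // subchunk 2 id
        b.putInt(pcmLength);                                // subchunk 2 size
        return b.array();
    }
}
```

This also explains why the code merges the PCM files first and writes the header afterwards: the header embeds the total payload length, which is only known once recording has finished.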
Mp3EncodeThread calls init to set up the encoder parameters, encode to do the actual encoding, flush at the end of recording to drain whatever is left in the encoder's internal buffer, and finally close to release the encoder. Here is the code for the encoding thread.
```java
package com.dorachat.dorachat.recorder.mp3;

import com.dorachat.dorachat.recorder.RecordConfig;
import com.dorachat.dorachat.recorder.RecordService;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

public class Mp3EncodeThread extends Thread {
    private static final String TAG = Mp3EncodeThread.class.getSimpleName();
    private List<ChangeBuffer> cacheBufferList = Collections.synchronizedList(new LinkedList<ChangeBuffer>());
    private File file;
    private FileOutputStream os;
    private byte[] mp3Buffer;
    private EncodeFinishListener encodeFinishListener;
    /**
     * Whether recording has stopped.
     */
    private volatile boolean isOver = false;
    /**
     * Whether to keep polling the data queue.
     */
    private volatile boolean start = true;

    public Mp3EncodeThread(File file, int bufferSize) {
        this.file = file;
        mp3Buffer = new byte[(int) (7200 + (bufferSize * 2 * 1.25))];
        RecordConfig currentConfig = RecordService.getCurrentConfig();
        int sampleRate = currentConfig.getSampleRate();
        Mp3Encoder.init(sampleRate, currentConfig.getChannelCount(), sampleRate, currentConfig.getRealEncoding());
    }

    @Override
    public void run() {
        try {
            this.os = new FileOutputStream(file);
        } catch (FileNotFoundException e) {
            return;
        }
        while (start) {
            ChangeBuffer next = next();
            lameData(next);
        }
    }

    public void addChangeBuffer(ChangeBuffer changeBuffer) {
        if (changeBuffer != null) {
            cacheBufferList.add(changeBuffer);
            synchronized (this) {
                notify();
            }
        }
    }

    public void stopSafe(EncodeFinishListener encodeFinishListener) {
        this.encodeFinishListener = encodeFinishListener;
        isOver = true;
        synchronized (this) {
            notify();
        }
    }

    private ChangeBuffer next() {
        for (;;) {
            if (cacheBufferList == null || cacheBufferList.size() == 0) {
                try {
                    if (isOver) {
                        // The queue is drained and recording is over: finish up and
                        // let run() exit instead of blocking in wait() forever.
                        finish();
                        return null;
                    }
                    synchronized (this) {
                        wait();
                    }
                } catch (Exception e) {
                    // interrupted while waiting; loop and re-check
                }
            } else {
                return cacheBufferList.remove(0);
            }
        }
    }

    private void lameData(ChangeBuffer changeBuffer) {
        if (changeBuffer == null) {
            return;
        }
        short[] buffer = changeBuffer.getData();
        int readSize = changeBuffer.getReadSize();
        if (readSize > 0) {
            int encodedSize = Mp3Encoder.encode(buffer, buffer, readSize, mp3Buffer);
            try {
                os.write(mp3Buffer, 0, encodedSize);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private void finish() {
        start = false;
        final int flushResult = Mp3Encoder.flush(mp3Buffer);
        if (flushResult > 0) {
            try {
                os.write(mp3Buffer, 0, flushResult);
            } catch (final IOException e) {
                e.printStackTrace();
            }
        }
        // Always close the stream and release the encoder,
        // even when flush() returns no data.
        try {
            os.close();
        } catch (final IOException e) {
            e.printStackTrace();
        }
        Mp3Encoder.close();
        if (encodeFinishListener != null) {
            encodeFinishListener.onFinish();
        }
    }

    public static class ChangeBuffer {
        private short[] rawData;
        private int readSize;

        public ChangeBuffer(short[] rawData, int readSize) {
            this.rawData = rawData.clone();
            this.readSize = readSize;
        }

        short[] getData() {
            return rawData;
        }

        int getReadSize() {
            return readSize;
        }
    }

    public interface EncodeFinishListener {
        /**
         * The format conversion is done.
         */
        void onFinish();
    }
}
```
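One detail worth calling out: the constructor sizes mp3Buffer as 7200 + bufferSize * 2 * 1.25, a padded version of the worst-case output bound that LAME's own header documents for lame_encode_buffer, namely mp3buf_size >= 1.25 * num_samples + 7200. The arithmetic is easy to verify:

```java
public class Mp3BufferSize {
    // LAME's documented safe output-buffer size for lame_encode_buffer():
    // mp3buf_size >= 1.25 * num_samples + 7200
    static int worstCase(int numSamples) {
        return (int) (7200 + numSamples * 1.25);
    }

    public static void main(String[] args) {
        // e.g. a 4096-sample chunk needs at most 12320 bytes of MP3 output
        System.out.println(worstCase(4096));
    }
}
```

The extra factor of 2 in the thread's formula is headroom, since bufferSize there comes from AudioRecord.getMinBufferSize and is not an exact per-call sample count.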
Summary

This is just the tip of the iceberg of Android audio and video development; the goal was to demonstrate the overall workflow. If you want to dig deeper into the Android NDK, make sure you learn C/C++ well first, so you can go further.