AudioRenderer is an audio renderer used to play PCM (Pulse Code Modulation) audio data. Compared with AVPlayer, it allows the data to be preprocessed before input, so it is better suited to developers with audio development experience who need more flexible playback behavior.
Playing audio with AudioRenderer involves creating an AudioRenderer instance, configuring the audio rendering parameters, starting and stopping rendering, and releasing resources. Taking one pass of rendering audio data as an example, this guide explains how to use AudioRenderer for audio rendering; reading it together with the AudioRenderer API reference is recommended.
AudioRenderer transitions through a set of states. After an instance is created, calling the corresponding method moves it into the matching state and performs the matching behavior. Note that calling an inappropriate method in a given state can put AudioRenderer into an error state, so developers are advised to check the state before calling any state-transition method, to avoid unexpected results at runtime.
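The pre-call state check recommended above can be sketched as a small helper. This is an illustration only: `canStart()` is a hypothetical function, not part of the AudioRenderer API, and the numeric values below mirror the `audio.AudioState` enum used later in this guide.

```typescript
// Numeric values mirror the audio.AudioState enum (note that state == 2
// is checked against STATE_RUNNING in the complete example below).
enum AudioState {
  STATE_INVALID = -1,
  STATE_NEW = 0,
  STATE_PREPARED = 1,
  STATE_RUNNING = 2,
  STATE_STOPPED = 3,
  STATE_RELEASED = 4,
  STATE_PAUSED = 5
}

// start() is only legal from the prepared, paused, or stopped states,
// so check before calling instead of letting the call fail.
function canStart(state: AudioState): boolean {
  return [AudioState.STATE_PREPARED, AudioState.STATE_PAUSED, AudioState.STATE_STOPPED].includes(state);
}

console.log(canStart(AudioState.STATE_PREPARED)); // true
console.log(canStart(AudioState.STATE_RUNNING)); // false
```

The same guard pattern appears in the `start()` method of the complete example, where the state list is built from `audio.AudioState` directly.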
To keep the UI thread from blocking, most AudioRenderer calls are asynchronous. Every API offers both a callback form and a Promise form; the examples below all use the callback form.
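The two forms are interchangeable, and a callback-form API can always be wrapped in a Promise to be awaited; the write step later in this guide does exactly that. A minimal sketch of the pattern, where `fakeWrite` is a hypothetical stand-in for a callback-style API such as `AudioRenderer.write`:

```typescript
// Hypothetical callback-style API standing in for audioRenderer.write():
// it reports an error or the number of bytes written.
function fakeWrite(buf: ArrayBuffer, cb: (err: Error | null, written?: number) => void): void {
  cb(null, buf.byteLength); // pretend the whole buffer was written
}

// Wrap the callback form in a Promise so callers can use await.
function writeAsync(buf: ArrayBuffer): Promise<number> {
  return new Promise((resolve, reject) => {
    fakeWrite(buf, (err, written) => {
      if (err) {
        reject(err);
      } else {
        resolve(written!);
      }
    });
  });
}

writeAsync(new ArrayBuffer(1024)).then((n) => console.log(n)); // logs 1024
```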
During development, it is recommended to subscribe to AudioRenderer state changes via the on('stateChange') method, because some operations on AudioRenderer can be performed only in specific states. If the application performs an operation while the renderer is in the wrong state, the system may throw an exception or produce other undefined behavior.
Create an AudioRenderer instance and configure the audio rendering parameters. Note that the instance must be kept in a variable declared outside the callback so that the later steps can use it:

```ts
import audio from '@ohos.multimedia.audio';

let audioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
  channels: audio.AudioChannel.CHANNEL_1,
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
};

let audioRendererInfo = {
  content: audio.ContentType.CONTENT_TYPE_SPEECH,
  usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
  rendererFlags: 0
};

let audioRendererOptions = {
  streamInfo: audioStreamInfo,
  rendererInfo: audioRendererInfo
};

let audioRenderer;
audio.createAudioRenderer(audioRendererOptions, (err, data) => {
  if (err) {
    console.error(`Invoke createAudioRenderer failed, code is ${err.code}, message is ${err.message}`);
    return;
  } else {
    console.info('Invoke createAudioRenderer succeeded.');
    audioRenderer = data; // keep the instance in the outer variable for later steps
  }
});
```
Call start() to switch the renderer to the running state and begin rendering:

```ts
audioRenderer.start((err) => {
  if (err) {
    console.error(`Renderer start failed, code is ${err.code}, message is ${err.message}`);
  } else {
    console.info('Renderer start success.');
  }
});
```
Query the buffer size, read audio data from the file to play (referenced here by filePath), and write it to the renderer's buffer:

```ts
import fs from '@ohos.file.fs';

const bufferSize = await audioRenderer.getBufferSize();
let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
let buf = new ArrayBuffer(bufferSize);
let readsize = await fs.read(file.fd, buf);
let writeSize = await new Promise((resolve, reject) => {
  audioRenderer.write(buf, (err, writeSize) => {
    if (err) {
      reject(err);
    } else {
      resolve(writeSize);
    }
  });
});
```
Call stop() to stop rendering:

```ts
audioRenderer.stop((err) => {
  if (err) {
    console.error(`Renderer stop failed, code is ${err.code}, message is ${err.message}`);
  } else {
    console.info('Renderer stopped.');
  }
});
```
Call release() to destroy the instance and release resources:

```ts
audioRenderer.release((err) => {
  if (err) {
    console.error(`Renderer release failed, code is ${err.code}, message is ${err.message}`);
  } else {
    console.info('Renderer released.');
  }
});
```
Below is a complete example of rendering an audio file with AudioRenderer:

```ts
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs';

const TAG = 'AudioRendererDemo';

export default class AudioRendererDemo {
  private renderModel = undefined;
  private audioStreamInfo = {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, // sampling rate
    channels: audio.AudioChannel.CHANNEL_2, // channel count
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // sample format
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // encoding type
  }
  private audioRendererInfo = {
    content: audio.ContentType.CONTENT_TYPE_MUSIC, // media content type
    usage: audio.StreamUsage.STREAM_USAGE_MEDIA, // audio stream usage type
    rendererFlags: 0 // audio renderer flags
  }
  private audioRendererOptions = {
    streamInfo: this.audioStreamInfo,
    rendererInfo: this.audioRendererInfo
  }

  // Initialize: create the instance and subscribe to events.
  init() {
    audio.createAudioRenderer(this.audioRendererOptions, (err, renderer) => { // create an AudioRenderer instance
      if (!err) {
        console.info(`${TAG}: creating AudioRenderer success`);
        this.renderModel = renderer;
        this.renderModel.on('stateChange', (state) => { // fires when the renderer transitions to the given state
          if (state == 2) {
            console.info('audio renderer state is: STATE_RUNNING');
          }
        });
        this.renderModel.on('markReach', 1000, (position) => { // subscribe to markReach: fires when 1000 frames have been rendered
          if (position == 1000) {
            console.info('ON Triggered successfully');
          }
        });
      } else {
        console.info(`${TAG}: creating AudioRenderer failed, error: ${err.message}`);
      }
    });
  }

  // Render audio data once.
  async start() {
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf(this.renderModel.state) === -1) { // rendering can start only from the prepared, paused, or stopped state
      console.error(TAG + 'start failed');
      return;
    }
    await this.renderModel.start(); // start rendering
    const bufferSize = await this.renderModel.getBufferSize();
    let context = getContext(this);
    let path = context.filesDir;
    const filePath = path + '/test.wav'; // sandbox path; the real path is /data/storage/el2/base/haps/entry/files/test.wav
    let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
    let stat = await fs.stat(filePath);
    let buf = new ArrayBuffer(bufferSize);
    let len = stat.size % bufferSize === 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
    for (let i = 0; i < len; i++) {
      let options = {
        offset: i * bufferSize,
        length: bufferSize
      };
      let readsize = await fs.read(file.fd, buf, options);
      // buf holds the audio data to write to the buffer. Before calling AudioRenderer.write(), the data can be
      // preprocessed to implement custom playback features; AudioRenderer reads the written data and renders it.
      let writeSize = await new Promise((resolve, reject) => {
        this.renderModel.write(buf, (err, writeSize) => {
          if (err) {
            reject(err);
          } else {
            resolve(writeSize);
          }
        });
      });
      if (this.renderModel.state === audio.AudioState.STATE_RELEASED) { // if the renderer has been released, stop rendering
        fs.close(file);
        await this.renderModel.stop();
      }
      if (this.renderModel.state === audio.AudioState.STATE_RUNNING) {
        if (i === len - 1) { // if the whole file has been read, stop rendering
          fs.close(file);
          await this.renderModel.stop();
        }
      }
    }
  }

  // Pause rendering.
  async pause() {
    // Pausing is possible only in the running state.
    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING) {
      console.info('Renderer is not running');
      return;
    }
    await this.renderModel.pause(); // pause rendering
    if (this.renderModel.state === audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is paused.');
    } else {
      console.error('Pausing renderer failed.');
    }
  }

  // Stop rendering.
  async stop() {
    // Stopping is possible only in the running or paused state.
    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING && this.renderModel.state !== audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is not running or paused.');
      return;
    }
    await this.renderModel.stop(); // stop rendering
    if (this.renderModel.state === audio.AudioState.STATE_STOPPED) {
      console.info('Renderer stopped.');
    } else {
      console.error('Stopping renderer failed.');
    }
  }

  // Destroy the instance and release resources.
  async release() {
    // release() may be called only if the renderer is not already released.
    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer already released');
      return;
    }
    await this.renderModel.release(); // release resources
    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer released');
    } else {
      console.error('Renderer release failed.');
    }
  }
}
```
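The chunk-count expression in start() above (the ternary on stat.size and bufferSize) is simply a ceiling division: the file needs one write per full buffer, plus one more write for any remainder. A quick sketch, where `chunkCount` is a hypothetical helper introduced only for illustration:

```typescript
// Equivalent to the ternary used in start():
// size % bufferSize === 0 ? size / bufferSize : floor(size / bufferSize + 1)
function chunkCount(fileSize: number, bufferSize: number): number {
  return Math.ceil(fileSize / bufferSize);
}

console.log(chunkCount(4096, 1024)); // 4: the file divides evenly into four buffers
console.log(chunkCount(5000, 1024)); // 5: the last, partial buffer still needs a write
```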
When an audio stream of the same or higher priority needs to use the output device, the current audio stream is interrupted; the application can respond to the interrupt event and handle it itself. For the specific concurrency behavior, see the concurrency policy for multi-audio playback.
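How an application might dispatch on such an interrupt event can be sketched as follows. This is an assumption-laden illustration: the numeric values mirror the `audio.InterruptHint` enum, and `handleInterruptHint` is a hypothetical helper, not part of the AudioRenderer API.

```typescript
// Numeric values mirror the audio.InterruptHint enum delivered with an interrupt event.
enum InterruptHint {
  INTERRUPT_HINT_NONE = 0,
  INTERRUPT_HINT_RESUME = 1,
  INTERRUPT_HINT_PAUSE = 2,
  INTERRUPT_HINT_STOP = 3,
  INTERRUPT_HINT_DUCK = 4,
  INTERRUPT_HINT_UNDUCK = 5
}

// Map the hint carried by an interrupt event to the action the app should take.
function handleInterruptHint(hint: InterruptHint): string {
  switch (hint) {
    case InterruptHint.INTERRUPT_HINT_PAUSE: return 'pause';   // another stream took the device; pause playback
    case InterruptHint.INTERRUPT_HINT_RESUME: return 'resume'; // the interrupting stream ended; playback may resume
    case InterruptHint.INTERRUPT_HINT_STOP: return 'stop';     // playback cannot continue; stop and release
    case InterruptHint.INTERRUPT_HINT_DUCK: return 'duck';     // lower the volume instead of pausing
    case InterruptHint.INTERRUPT_HINT_UNDUCK: return 'unduck'; // restore the original volume
    default: return 'none';
  }
}

console.log(handleInterruptHint(InterruptHint.INTERRUPT_HINT_PAUSE)); // pause
```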