
Alibaba DAMO Academy: FunASR onnxruntime deployment

Clone the repository:

git clone https://github.com/alibaba/FunASR.git

Switch to the onnxruntime runtime directory:

 cd FunASR/runtime/onnxruntime

1. Download onnxruntime

wget https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/dep_libs/onnxruntime-linux-x64-1.14.0.tgz
tar -zxvf onnxruntime-linux-x64-1.14.0.tgz

Here I extracted it to /media/wmx/soft1/software/onnxruntime/onnxruntime-linux-x64-1.14.0

2. Download ffmpeg

wget https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/dep_libs/ffmpeg-master-latest-linux64-gpl-shared.tar.xz
tar -xvf ffmpeg-master-latest-linux64-gpl-shared.tar.xz

I use a previously downloaded build here: /media/wmx/ws1/ffmpeg-n6.0

3. Install dependencies

# openblas
sudo apt-get install libopenblas-dev   # ubuntu
# sudo yum -y install openblas-devel   # centos

# openssl
sudo apt-get install libssl-dev        # ubuntu
# yum install openssl-devel            # centos

4. Build the onnxruntime project:

#cd FunASR/runtime/onnxruntime
mkdir build && cd build
cmake  -DCMAKE_BUILD_TYPE=release .. -DONNXRUNTIME_DIR=/media/wmx/soft1/software/onnxruntime/onnxruntime-linux-x64-1.14.0  -DFFMPEG_DIR=/media/wmx/ws1/ffmpeg-n6.0
make -j 12

The binaries are generated under onnxruntime/build/bin/.

5. Switch to the websocket project:

#cd FunASR/runtime/websocket
mkdir build && cd build
cmake  -DCMAKE_BUILD_TYPE=release .. -DONNXRUNTIME_DIR=/media/wmx/soft1/software/onnxruntime/onnxruntime-linux-x64-1.14.0 -DFFMPEG_DIR=/media/wmx/ws1/ffmpeg-n6.0
make -j 12

The binaries are generated under FunASR/runtime/websocket/build/bin/.

6. Download the models manually

Automatic model download requires the FunASR Python environment and its dependencies, which this runtime-only setup does not install, so the models are downloaded by hand.
ModelScope: https://www.modelscope.cn/models
Search for each model name and download it; I put the models under the ModelScope cache path /media/wmx/soft1/huggingface_cache/hub.
Model names:

iic/speech_fsmn_vad_zh-cn-16k-common-onnx 
iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch  
iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx
iic/punc_ct-transformer_cn-en-common-vocab471067-large-onnx 
iic/speech_ngram_lm_zh-cn-ai-wesp-fst 
thuduj12/fst_itn_zh 
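Instead of downloading each model through the website, a plain git clone also works. This is a dry-run sketch under two assumptions not stated in the original post: git-lfs is installed, and each model repo is reachable at https://www.modelscope.cn/&lt;name&gt;.git.

```shell
#!/bin/sh
# Dry-run sketch: clone each model repo from ModelScope into the cache dir.
# Assumptions: git-lfs is installed; the repo URL pattern below is valid.
CACHE="/media/wmx/soft1/huggingface_cache/hub"
for m in \
    iic/speech_fsmn_vad_zh-cn-16k-common-onnx \
    iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx \
    iic/punc_ct-transformer_cn-en-common-vocab471067-large-onnx \
    iic/speech_ngram_lm_zh-cn-ai-wesp-fst \
    thuduj12/fst_itn_zh
do
    url="https://www.modelscope.cn/${m}.git"
    echo git clone "$url" "$CACHE/$m"   # drop 'echo' to actually clone
done
```

The `echo` keeps this a dry run that only prints the commands; remove it once the URL pattern is confirmed for your ModelScope account.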

Edit FunASR/runtime/run_server.sh to point at the model paths. Choose either GPU or CPU inference: uncomment exactly one of the two model_dir lines.

download_model_dir="/media/wmx/soft1/huggingface_cache/hub"
# GPU inference:
model_dir="iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
# CPU inference:
#model_dir="iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx"
vad_dir="iic/speech_fsmn_vad_zh-cn-16k-common-onnx"
punc_dir="iic/punc_ct-transformer_cn-en-common-vocab471067-large-onnx"
itn_dir="thuduj12/fst_itn_zh"
lm_dir="iic/speech_ngram_lm_zh-cn-ai-wesp-fst"
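Under the hood, run_server.sh passes these variables as flags to the funasr-wss-server binary built in step 5. The dry-run sketch below shows that shape; the flag names mirror the variable names and are an assumption here, so check run_server.sh itself for the exact command line.

```shell
#!/bin/sh
# Dry-run sketch of the server invocation that run_server.sh wraps.
# Flag names are assumed from the script's variable names; verify them
# against run_server.sh before relying on this.
download_model_dir="/media/wmx/soft1/huggingface_cache/hub"
model_dir="iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx"
vad_dir="iic/speech_fsmn_vad_zh-cn-16k-common-onnx"
punc_dir="iic/punc_ct-transformer_cn-en-common-vocab471067-large-onnx"
itn_dir="thuduj12/fst_itn_zh"
cmd="./websocket/build/bin/funasr-wss-server \
--download-model-dir $download_model_dir \
--model-dir $model_dir \
--vad-dir $vad_dir \
--punc-dir $punc_dir \
--itn-dir $itn_dir \
--port 10095"
echo "$cmd"   # dry run; use eval "$cmd" to actually start the server
```

Note that --port 10095 matches the port the client connects to in step 9.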

7. Prepare an audio file

 #cd FunASR/runtime/
wget https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav

8. Run the server

#cd FunASR/runtime/
./run_server.sh

9. Run the client

#cd FunASR/runtime/
./websocket/build/bin/funasr-wss-client --server-ip 127.0.0.1 --port 10095 --wav-path ./asr_example_zh.wav

Output:

(base) wmx@wmx-ubuntu:/media/wmx/soft1/AI-model/FunASR/runtime$  ./websocket/build/bin/funasr-wss-client --server-ip 127.0.0.1 --port 10095 --wav-path /media/wmx/ws3/AI/data/audios/zh/asr_example.wav
I20240912 23:57:06.561726 80170 funasr-wss-client.cpp:315] Thread: 127658211935744, sended data len=177572
I20240912 23:57:06.768560 80172 funasr-wss-client.cpp:101] Thread: 127658190964224, total_recv=1, on_message = {"is_final":false,"mode":"offline","stamp_sents":[{"end":5195,"punc":"。","start":880,"text_seg":"欢 迎 大 家 来 体 验 达 摩 院 推 出 的 语 音 识 别 模 型","ts_list":[[880,1120],[1120,1380],[1380,1540],[1540,1780],[1780,2020],[2020,2180],[2180,2480],[2480,2600],[2600,2780],[2780,3040],[3040,3240],[3240,3480],[3480,3699],[3699,3900],[3900,4180],[4180,4420],[4420,4620],[4620,4780],[4780,5195]]}],"text":"欢迎大家来体验达摩院推出的语音识别模型。","timestamp":"[[880,1120],[1120,1380],[1380,1540],[1540,1780],[1780,2020],[2020,2180],[2180,2480],[2480,2600],[2600,2780],[2780,3040],[3040,3240],[3240,3480],[3480,3699],[3699,3900],[3900,4180],[4180,4420],[4420,4620],[4620,4780],[4780,5195]]","wav_name":"wav_default_id"}
I20240912 23:57:06.768589 80172 funasr-wss-client.cpp:106] Thread: 127658190964224, close client thread

The on_message payload is JSON:

{"is_final":false,"mode":"offline","stamp_sents":[{"end":5195,"punc":"。","start":880,"text_seg":"欢 迎 大 家 来 体 验 达 摩 院 推 出 的 语 音 识 别 模 型","ts_list":[[880,1120],[1120,1380],[1380,1540],[1540,1780],[1780,2020],[2020,2180],[2180,2480],[2480,2600],[2600,2780],[2780,3040],[3040,3240],[3240,3480],[3480,3699],[3699,3900],[3900,4180],[4180,4420],[4420,4620],[4620,4780],[4780,5195]]}],"text":"欢迎大家来体验达摩院推出的语音识别模型。","timestamp":"[[880,1120],[1120,1380],[1380,1540],[1540,1780],[1780,2020],[2020,2180],[2180,2480],[2480,2600],[2600,2780],[2780,3040],[3040,3240],[3240,3480],[3480,3699],[3699,3900],[3900,4180],[4180,4420],[4420,4620],[4620,4780],[4780,5195]]","wav_name":"wav_default_id"}

Extracting the value of the text field gives the decoded result: "欢迎大家来体验达摩院推出的语音识别模型。"
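That extraction can be done right in the shell. The sketch below uses an abbreviated copy of the reply above (most fields dropped for brevity); a real deployment should use a proper JSON parser, since this sed pattern only works because the field contains no escaped quotes.

```shell
#!/bin/sh
# Minimal sketch: pull the "text" field out of the server's JSON reply.
# $msg is an abbreviated copy of the on_message payload shown above.
msg='{"is_final":false,"mode":"offline","text":"欢迎大家来体验达摩院推出的语音识别模型。","wav_name":"wav_default_id"}'
# Capture everything between "text":" and the next double quote.
text=$(printf '%s' "$msg" | sed -n 's/.*"text":"\([^"]*\)".*/\1/p')
echo "$text"
```

Also note that the timestamp field is itself a JSON-encoded string, not a nested array, so it would need a second round of parsing if you want the word-level timings.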


