Paper: Deep-FSMN for Large Vocabulary Continuous Speech Recognition. Idea: large-vocabulary speech recognition usually calls for a deeper network structure, but when the FSMN [1] or cFSMN [2] architecture is made very deep it easily runs into vanishing and exploding gradient problems ...
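Deep-FSMN addresses this by adding skip connections between the memory blocks of successive layers. A minimal sketch of one such memory block, assuming PyTorch; the layer sizes, filter orders, and stride below are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DFSMNBlock(nn.Module):
    """Sketch of one Deep-FSMN memory block (illustrative sizes)."""
    def __init__(self, hidden_dim=512, proj_dim=128, left=10, right=2, stride=1):
        super().__init__()
        self.linear = nn.Linear(proj_dim, hidden_dim)                 # expand back to hidden size
        self.project = nn.Linear(hidden_dim, proj_dim, bias=False)    # low-rank projection p_t
        # depthwise 1-D convolution realises the tapped-delay memory over past/future frames
        self.memory = nn.Conv1d(proj_dim, proj_dim,
                                kernel_size=left + right + 1,
                                dilation=stride, groups=proj_dim, bias=False)
        self.left, self.right, self.stride = left, right, stride

    def forward(self, p_prev):
        # p_prev: (batch, time, proj_dim) -- memory output of the previous block
        h = F.relu(self.linear(p_prev))
        p = self.project(h)
        # pad so every frame sees `left` past and `right` future projected frames
        x = F.pad(p.transpose(1, 2), (self.left * self.stride, self.right * self.stride))
        m = self.memory(x).transpose(1, 2)
        # skip connection between memory blocks: gradients flow straight through the sum,
        # which is what allows stacking many blocks without vanishing/exploding gradients
        return p_prev + p + m
```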
LAS: Listen, Attend and Spell, Google. Idea: a sequence-to-sequence approach; the model is split into an encoder and a decoder. The encoder first converts an input sequence of arbitrary length into a fixed-length feature representation, which is then fed to the decoder and converted into an output sequence of arbitrary length. Compared with the traditional sequence-to-sequence model, an attention mechanism is introduced in the decoder, letting the model automatically ...
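The attention step recomputes a context vector from the encoder outputs at every decoding step. A minimal sketch assuming PyTorch; the dot-product scoring below stands in for the small MLPs the paper applies to the decoder state and encoder features, and all shapes are illustrative:

```python
import torch

def attention_context(decoder_state, encoder_outputs):
    """Content-based attention in the LAS style (shapes illustrative).

    decoder_state:   (batch, dim)        -- current decoder hidden state s_i
    encoder_outputs: (batch, time, dim)  -- listener features h_1..h_T
    """
    # similarity of the decoder state with every encoder frame
    scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, time)
    alpha = torch.softmax(scores, dim=1)                                        # attention weights
    # context vector: weighted sum of encoder frames, recomputed at every output step,
    # so the decoder is not limited to a single fixed-length summary of the input
    context = torch.bmm(alpha.unsqueeze(1), encoder_outputs).squeeze(1)         # (batch, dim)
    return context, alpha
```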
Paper: SPEECH-TRANSFORMER: A NO-RECURRENCE SEQUENCE-TO-SEQUENCE MODEL FOR SPEECH RECOGNITION ...
Paper: EESEN: END-TO-END SPEECH RECOGNITION USING DEEP RNN MODELS AND WFST-BASED DECODING ...
Paper: CTC: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks. Idea: in speech recognition, one generally has speech ...
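CTC scores frame-level network outputs against an unsegmented label sequence, so no frame-by-frame alignment between audio and transcript is needed. A toy usage sketch with PyTorch's built-in `nn.CTCLoss`; all sizes below are made up for illustration:

```python
import torch
import torch.nn as nn

T, B, C = 50, 2, 30                                        # frames, batch, classes (blank at index 0)
logits = torch.randn(T, B, C, requires_grad=True)          # stand-in for acoustic-model outputs
log_probs = logits.log_softmax(dim=2)                      # (time, batch, classes)
targets = torch.randint(1, C, (B, 12), dtype=torch.long)   # label sequences, no blanks
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients flow to the acoustic model; decoding can then be greedy or WFST-based
```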
Paper: A time delay neural network architecture for efficient modeling of long temporal contexts ...
... the temporal length of the sequence, improving computational efficiency when training on large-scale speech data; 2) a decoder input sampling strategy: if, during training, ...
Paper: TRANSFORMER-TRANSDUCER: END-TO-END SPEECH RECOGNITION WITH SELF-ATTENTION. Idea: 1) building on RNN-T's strengths for speech recognition, the RNN structure inside RNN-T is replaced with a Transformer, achieving ...
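In a transducer model the encoder and label predictor feed a joint network evaluated over the (time x label) lattice, and that part is unchanged when the RNN is swapped for self-attention. A minimal sketch assuming PyTorch; the dimensions and vocabulary size are illustrative:

```python
import torch
import torch.nn as nn

class JointNetwork(nn.Module):
    """Sketch of the transducer joint network (illustrative dims).

    In Transformer-Transducer the audio encoder (and optionally the label predictor)
    is a self-attention stack instead of an RNN; the joint network and the RNN-T
    loss over the (time x label) lattice stay the same.
    """
    def __init__(self, enc_dim=512, pred_dim=512, joint_dim=640, vocab=1000):
        super().__init__()
        self.proj = nn.Linear(enc_dim + pred_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vocab + 1)          # +1 for the blank symbol

    def forward(self, enc, pred):
        # enc:  (batch, T, enc_dim)   acoustic frames
        # pred: (batch, U, pred_dim)  label-history encodings
        T, U = enc.size(1), pred.size(1)
        enc = enc.unsqueeze(2).expand(-1, -1, U, -1)        # (batch, T, U, enc_dim)
        pred = pred.unsqueeze(1).expand(-1, T, -1, -1)      # (batch, T, U, pred_dim)
        joint = torch.tanh(self.proj(torch.cat([enc, pred], dim=-1)))
        return self.out(joint).log_softmax(dim=-1)          # log-probs over the T x U lattice
```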
Paper: TRANSFORMER TRANSDUCER: A STREAMABLE SPEECH RECOGNITION MODEL WITH TRANSFORMER ENCODERS AND RNN-T LOSS ...