Nobuaki Minematsu
Verified email at gavo.t.u-tokyo.ac.jp - Homepage
Title · Cited by · Year
Filled pauses as cues to the complexity of upcoming phrases for native and non-native listeners
M Watanabe, K Hirose, Y Den, N Minematsu
Speech communication 50 (2), 81-94, 2008
198 · 2008
Free software toolkit for Japanese large vocabulary continuous speech recognition.
T Kawahara, A Lee, T Kobayashi, K Takeda, N Minematsu, S Sagayama, ...
INTERSPEECH, 476-479, 2000
163 · 2000
WFST-based grapheme-to-phoneme conversion: Open source tools for alignment, model-building and decoding
JR Novak, N Minematsu, K Hirose
Proceedings of the 10th International Workshop on Finite State Methods and …, 2012
149 · 2012
Phonetisaurus: Exploring grapheme-to-phoneme conversion with joint n-gram models in the WFST framework
JR Novak, N Minematsu, K Hirose
Natural Language Engineering 22 (6), 907-938, 2016
136 · 2016
Unsupervised optimal phoneme segmentation: Objectives, algorithm and comparisons
Y Qiao, N Shimomura, N Minematsu
2008 IEEE International Conference on Acoustics, Speech and Signal …, 2008
113 · 2008
A Study on Invariance of f-Divergence and Its Application to Speech Recognition
Y Qiao, N Minematsu
IEEE Transactions on Signal Processing 58 (7), 3884-3890, 2010
102 · 2010
One-to-many voice conversion based on tensor representation of speaker space
D Saito, K Yamamoto, N Minematsu, K Hirose
Twelfth Annual Conference of the International Speech Communication Association, 2011
100 · 2011
Automatic estimation of one's age with his/her speech based upon acoustic modeling techniques of speakers
N Minematsu, M Sekiguchi, K Hirose
2002 IEEE International Conference on Acoustics, Speech, and Signal …, 2002
99 · 2002
Mathematical evidence of the acoustic universal structure in speech
N Minematsu
Proceedings of ICASSP '05, IEEE International Conference on Acoustics, Speech …, 2005
96 · 2005
A method for automatic extraction of model parameters from fundamental frequency contours of speech
S Narusawa, N Minematsu, K Hirose, H Fujisaki
2002 IEEE International conference on acoustics, speech, and signal …, 2002
93 · 2002
Development of English speech database read by Japanese to support CALL research
N Minematsu, Y Tomiyama, K Yoshimoto, K Shimizu, S Nakagawa, ...
Proc. ICA 1 (2004), 557-560, 2004
88 · 2004
Sharable software repository for Japanese large vocabulary continuous speech recognition
T Kawahara, T Kobayashi, K Takeda, N Minematsu, K Itou, M Yamamoto, ...
73 · 1998
Wasserstein GAN and waveform loss-based acoustic model training for multi-speaker text-to-speech synthesis systems using a WaveNet vocoder
Y Zhao, S Takaki, HT Luong, J Yamagishi, D Saito, N Minematsu
IEEE access 6, 60478-60488, 2018
71 · 2018
Galatea: Open-source software for developing anthropomorphic spoken dialog agents
S Kawamoto, H Shimodaira, T Nitta, T Nishimoto, S Nakamura, K Itou, ...
Life-like characters: Tools, affective functions, and applications, 187-211, 2004
59 · 2004
Yet another acoustic representation of speech sounds
N Minematsu
2004 IEEE International Conference on Acoustics, Speech, and Signal …, 2004
54 · 2004
English Speech Database Read by Japanese Learners for CALL System Development.
N Minematsu, Y Tomiyama, K Yoshimoto, K Shimizu, S Nakagawa, ...
LREC, 2002
54 · 2002
Synthesis of F0 contours using generation process model parameters predicted from unlabeled corpora: Application to emotional speech synthesis
K Hirose, K Sato, Y Asano, N Minematsu
Speech communication 46 (3-4), 385-404, 2005
52 · 2005
Improving WFST-based G2P Conversion with Alignment Constraints and RNNLM N-best Rescoring.
JR Novak, N Minematsu, K Hirose, C Hori, H Kashioka, PR Dixon
Interspeech, 2526-2529, 2012
50 · 2012
Measurement of Objective Intelligibility of Japanese Accented English Using ERJ (English Read by Japanese) Database.
N Minematsu, K Okabe, K Ogaki, K Hirose
INTERSPEECH, 1481-1484, 2011
45 · 2011
Structural representation of the pronunciation and its use for CALL
N Minematsu, S Asakawa, K Hirose
2006 IEEE Spoken Language Technology Workshop, 126-129, 2006
45 · 2006
Articles 1–20