I am building a speech-to-text system that uses Hidden Markov Models, re-estimated from N sample sentences. In the context of neural networks, I understand that an epoch is one complete training cycle. I take this to mean "feeding the same data repeatedly to the same network, whose weights and biases are updated after every pass" - correct me if I am wrong.
Would the same logic apply when performing re-estimation (i.e. training) of HMMs from the same sentences? In other words, if I have N sentences, can I repeat each input sample 10 times to generate 10 * N samples? Does that mean I am performing 10 epochs on the HMMs? Furthermore, does this actually help obtain better results?
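To make the question concrete, here is a minimal sketch of the two interpretations of "10 epochs" I can think of, assuming hmmlearn's GaussianHMM (the sentences and their MFCC-like feature arrays below are random placeholders, not my real data):

```python
import numpy as np
from hmmlearn import hmm

# Hypothetical stand-in for N sentences, each already converted to a
# (T_i, 13) feature array (e.g. MFCC frames); random data for illustration.
rng = np.random.default_rng(0)
sentences = [rng.normal(size=(int(rng.integers(20, 40)), 13)) for _ in range(5)]

# Interpretation A: "10 epochs" as duplicated data -- each sentence
# repeated 10 times before a single training run.
X_rep = np.concatenate([s for s in sentences for _ in range(10)])
lengths_rep = [len(s) for s in sentences for _ in range(10)]
model_a = hmm.GaussianHMM(n_components=3, n_iter=10)
model_a.fit(X_rep, lengths_rep)

# Interpretation B: the same data passed once, but with more
# Baum-Welch re-estimation iterations (n_iter).
X = np.concatenate(sentences)
lengths = [len(s) for s in sentences]
model_b = hmm.GaussianHMM(n_components=3, n_iter=100)
model_b.fit(X, lengths)
```

Is interpretation A equivalent to running more Baum-Welch iterations (B), or is it something else entirely?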
From this paper, I get the impression that an epoch in the context of HMMs refers to a unit of time:
> Counts represent a device-specific numeric quantity which is generated by an accelerometer for a specific time unit (epoch) (e.g. 1 to 60 sec).
Even if it is not a unit of time, an epoch here at the very least sounds like something different. In the end, I would like to know:

- What is an epoch in the context of HMMs?
- How is it different from an epoch in neural networks?
- Under the definition of an epoch as a training cycle, would multiple epochs improve the re-estimation of HMMs?