I am working on implementing an autoencoder for unsupervised learning, and I have some questions about the overall process. From what I was reading here, @rjpg suggests the following general approach:
- Slice the NN in half at the last encoder layer (before the decoding starts)
- Freeze the weights of the encoder layers
- Add or concat a new NN in front of the last encoding layer
- Post-train this concatenated NN with a softmax output layer to classify the digits
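
To make the question concrete, here is a rough sketch of what I *think* this might look like in Keras. The layer sizes, the `last_encoder` layer name, the 784-dimensional input, and the `x_unlabeled`/`x_labeled`/`y_onehot` arrays are just placeholders I made up, and I'm not at all sure this is the right way to do it:

```python
from keras import layers, models

# 1. Train a plain autoencoder on the unlabeled data (reconstruction loss).
#    Layer sizes and the 784-dim input are just placeholders.
inputs = layers.Input(shape=(784,))
encoded = layers.Dense(128, activation="relu")(inputs)
encoded = layers.Dense(32, activation="relu", name="last_encoder")(encoded)
decoded = layers.Dense(128, activation="relu")(encoded)
decoded = layers.Dense(784, activation="sigmoid")(decoded)

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_unlabeled, x_unlabeled, epochs=10, batch_size=256)

# 2. "Slice" the network at the last encoder layer and freeze those weights
encoder = models.Model(inputs, autoencoder.get_layer("last_encoder").output)
for layer in encoder.layers:
    layer.trainable = False

# 3. Put a new softmax classifier on top of the frozen encoder and train
#    only that new part on the labeled data
clf_output = layers.Dense(10, activation="softmax")(encoder.output)
classifier = models.Model(encoder.input, clf_output)
classifier.compile(optimizer="adam", loss="categorical_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(x_labeled, y_onehot, epochs=10, batch_size=256)
```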
I haven't been able to find a complete example of this using Keras. I have seen examples in R and H2O, but the code is so encapsulated that it is difficult for me to translate it into a similar Keras solution. I have three questions related to how to implement this:
- Does this strategy seem reasonable?
- How do I freeze the weights of the encoder layers in Keras?
- What is meant by adding or concatenating a new NN in front of the last encoding layer, and why is this needed?
Thanks!