
I am working on implementing an autoencoder for unsupervised learning, and I have some questions about the overall process. From what I was reading here, @rjpg suggests the following general approach:

  1. Slice the NN in half on the last encoder (before the decode starts)
  2. Freeze the weights of the encoder layers
  3. Add or concat a new NN in front of the last encoding layer
  4. (Post-)train this combined NN with a softmax output layer to classify the digits
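Here is a rough sketch of what I think these steps would look like in Keras, using a simple dense autoencoder on flattened MNIST digits. The layer sizes, the `code` layer name, and the commented-out `fit()` calls are placeholders I made up, not anything from the original post:

```python
from keras.layers import Input, Dense
from keras.models import Model

# --- Train the autoencoder (unsupervised: reconstruct the input) ---
inputs = Input(shape=(784,))                                   # flattened 28x28 digits
encoded = Dense(128, activation='relu')(inputs)
encoded = Dense(32, activation='relu', name='code')(encoded)   # last encoder layer
decoded = Dense(128, activation='relu')(encoded)
decoded = Dense(784, activation='sigmoid')(decoded)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(x_train, x_train, epochs=20, batch_size=256)

# --- Step 1: slice off the decoder, keeping input -> last encoder layer ---
encoder = Model(inputs, autoencoder.get_layer('code').output)

# --- Step 2: freeze the encoder weights ---
for layer in encoder.layers:
    layer.trainable = False

# --- Steps 3-4: stack a new classifier on the frozen encoder and train it ---
clf = Dense(64, activation='relu')(encoder.output)
clf = Dense(10, activation='softmax')(clf)                     # 10 digit classes

classifier = Model(encoder.input, clf)
classifier.compile(optimizer='adam',
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
# classifier.fit(x_train, y_train, epochs=10, batch_size=256)
```

In particular, I am only guessing that setting `layer.trainable = False` before compiling the new model is how the freezing is done, which is what my second question below is about.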

I haven't been able to find a complete example of this using Keras, so the sketch above is just my guess at the process. I have seen examples in R and H2O, but the code is so encapsulated that it is difficult for me to translate it into a similar Keras solution. I have three questions about how to implement this:

  1. Does this strategy seem reasonable?
  2. How do I freeze the weights of the encoder layers in Keras?
  3. What is meant by adding or concatenating a new NN in front of the last encoding layer, and why is this needed?

Thanks!

  • Your question does not appear to be about unsupervised learning; it starts after the unsupervised part has finished and is about how to re-use the unsupervised autoencoder as a component in a supervised learning problem. This is normal, especially if you want to predict something as opposed to compress or de-noise the data. Could you clarify that this is your goal: to understand this re-use and predict digits on MNIST as a practice exercise in Keras? – Commented Jan 31, 2018 at 18:30
