
I'm trying to use an autoencoder to reduce the dimensionality of my features. The features are 2048-dimensional, and I trained an autoencoder with a single hidden layer to reduce them to 50 dimensions. The loss decreases to a certain extent and then saturates; the network isn't even overfitting the training data.

To test, I changed the dimension of the hidden layer to 2048. My expectation was that reconstruction would now be a trivial task and the network would learn it easily. But to my surprise, the loss is even higher than in the first case. Any idea what is going wrong?

My network:

    # Imports assumed for completeness (standalone Keras; tensorflow.keras works the same way):
    from keras.layers import Input, Dense
    from keras.models import Model
    from keras.optimizers import Adam

    class MyAE:
        def __init__(self) -> None:
            flat_inputs = Input(shape=(2048,), name='flat_inputs')
            bottleneck = Dense(units=2048)(flat_inputs)
            flat_outputs = Dense(units=2048)(bottleneck)
            self.model = Model(inputs=flat_inputs, outputs=flat_outputs)
            optimizer = Adam(lr=0.001)
            self.model.compile(optimizer=optimizer, loss='mse')
            self.model.summary(print_fn=print)
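For reference, the 50-dimensional bottleneck described above differs only in the width of the middle layer; a minimal sketch of that variant (same assumptions, standalone Keras):

    # Sketch of the 50-dimensional bottleneck variant described in the text;
    # only the width of the middle Dense layer changes.
    from keras.layers import Input, Dense
    from keras.models import Model
    from keras.optimizers import Adam

    flat_inputs = Input(shape=(2048,), name='flat_inputs')
    bottleneck = Dense(units=50)(flat_inputs)        # 2048 -> 50
    flat_outputs = Dense(units=2048)(bottleneck)     # 50 -> 2048 reconstruction
    model = Model(inputs=flat_inputs, outputs=flat_outputs)
    model.compile(optimizer=Adam(lr=0.001), loss='mse')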

I'm training the network as follows:

    num_samples = inputs.shape[0]
    self.model.fit(x=inputs, y=inputs, batch_size=num_samples,
                   epochs=self.num_epochs, verbose=1)
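Note that batch_size=num_samples makes each epoch a single full-batch gradient update. For comparison, a mini-batch version of the same call would look like the sketch below; the batch size of 32 is an arbitrary illustrative choice, not part of my actual setup.

    # Hypothetical mini-batch variant, for comparison only; batch size 32 is arbitrary.
    self.model.fit(x=inputs, y=inputs, batch_size=32,
                   epochs=self.num_epochs, verbose=1)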
  • Probably choking. Information works like fluid flow, or vice versa. You need a few gradual steps to get through the encoder down to the bottleneck, and a few steps to get from the bottleneck back to the reconstruction. Doing it in one step for a sufficiently complex form, without even a grid search on bottleneck size, is likely to be problematic. – Commented Aug 19, 2020 at 14:09
  • What are your activation functions? – learner, Commented Aug 19, 2020 at 14:47
  • Linear. It doesn't change much with ReLU or sigmoid either. – Commented Aug 19, 2020 at 15:07
