I'm trying to use an autoencoder to reduce the dimensionality of my features, which are 2048-dimensional. I trained an autoencoder with a single hidden layer to compress them down to 50 dimensions, but the loss decreases to a certain point and then saturates. The network is not even overfitting the training data.
To test, I changed the dimension of the hidden layer to 2048. My expectation was that reconstruction would now be a trivial task and the network would learn it easily. But to my surprise, the loss is even higher than in the first case. Any idea what is going wrong?
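To double-check that this expectation is sound: a Dense(2048) -> Dense(2048) stack with identity weights and zero biases reproduces the input exactly, so zero MSE is achievable in principle. A quick NumPy sanity check (random data standing in for my real features):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 2048))  # random stand-in for my real features

# Identity weights for both layers: hidden = x @ I, output = hidden @ I,
# so the reconstruction equals the input and the MSE is exactly zero.
w = np.eye(2048)
hidden = x @ w    # 2048-unit hidden layer, linear activation
recon = hidden @ w
mse = np.mean((recon - x) ** 2)
print(mse)  # 0.0
```

So a solution with zero loss exists for the 2048-unit case; the question is why training doesn't find it.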
My network:
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam

class MyAE:
    def __init__(self) -> None:
        # 2048-d input reconstructed through a 2048-unit hidden layer
        flat_inputs = Input(shape=(2048,), name='flat_inputs')
        bottleneck = Dense(units=2048)(flat_inputs)
        flat_outputs = Dense(units=2048)(bottleneck)
        self.model = Model(inputs=flat_inputs, outputs=flat_outputs)
        optimizer = Adam(lr=0.001)
        self.model.compile(optimizer=optimizer, loss='mse')
        self.model.summary(print_fn=print)
I'm training the network as follows:

# Full-batch training: all samples in a single batch per epoch
num_samples = inputs.shape[0]
self.model.fit(x=inputs, y=inputs,
               batch_size=num_samples,
               epochs=self.num_epochs,
               verbose=1)