
I'm getting the same accuracy on the validation data at every epoch, and the accuracy on the training data barely changes. The training set consists of 19670 images (14445 in class 0, 5225 in class 1). The validation set consists of 4918 images (3612 in class 0, 1306 in class 1). Because of the class imbalance, I applied class weights computed with compute_class_weight so that the penalty for errors on the minority class is higher. However, the validation accuracy stays the same and the loss barely changes from epoch to epoch.

I applied data augmentation to all the training data. I am using VGG16 as the base: I unfroze its last 5 layers and added some dense layers on top. I have tried different learning_rate values, but I don't get any significant improvement and the results stay the same. Training and validation accuracy follow the same pattern: there is no improvement and the values repeat.
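The augmentation pipeline looks roughly like this (a minimal sketch: the directory paths, batch size, and exact augmentation parameters below are placeholders rather than my real settings, and I'm assuming VGG16's preprocess_input is used):

    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    from tensorflow.keras.applications.vgg16 import preprocess_input

    # Augment only the training images; the validation images are just preprocessed
    train_datagen = ImageDataGenerator(
        preprocessing_function=preprocess_input,
        rotation_range=20,
        width_shift_range=0.1,
        height_shift_range=0.1,
        horizontal_flip=True,
    )
    val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

    # "train_dir" and "val_dir" are hypothetical folder paths
    train_generator = train_datagen.flow_from_directory(
        "train_dir", target_size=(224, 224), batch_size=32, class_mode="binary")
    val_generator = val_datagen.flow_from_directory(
        "val_dir", target_size=(224, 224), batch_size=32, class_mode="binary", shuffle=False)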

Here's the code:

Class weight:

    from sklearn.utils.class_weight import compute_class_weight
    import numpy as np

    # `labels` is the array of training labels (0/1), defined elsewhere
    pesos = compute_class_weight("balanced", classes=np.unique(labels), y=labels)  # These are the weights for each class
    # Build a dictionary to pass as the class_weight argument when training the model
    pesos_clases = {i: pesos[i] for i in range(len(pesos))}
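As a sanity check, sklearn's "balanced" mode computes n_samples / (n_classes * count_per_class), so for the counts above the minority class should get roughly 2.8 times the weight of the majority class:

    # Expected values for the counts above:
    #   class 0: 19670 / (2 * 14445) ≈ 0.68
    #   class 1: 19670 / (2 * 5225)  ≈ 1.88
    print(pesos_clases)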

Neural network:

    from tensorflow.keras.applications import vgg16
    from tensorflow.keras import layers
    from tensorflow import keras
    from tensorflow.keras.optimizers import Adam

    VGG16 = vgg16.VGG16(
        weights="imagenet",
        include_top=False,
        input_shape=(224, 224, 3)
    )
    VGG16.trainable = True  # Trainable
    # Freeze everything except the last 5 layers
    for layer in VGG16.layers[:-5]:
        layer.trainable = False

    x = VGG16.output
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(1000, activation="relu")(x)       # 1000 neurons
    x = layers.Dropout(0.3)(x)                         # 30% of the neurons are 'dropped'
    output = layers.Dense(1, activation="sigmoid")(x)  # output layer: 1 neuron
    modelo = keras.Model(VGG16.inputs, output)         # Build the model

    # Choose the optimizer
    optimizador = Adam(learning_rate=0.001)  # Adjust depending on training performance (0.001 initially)

    # Compile step
    modelo.compile(
        optimizer=optimizador,
        loss="binary_crossentropy",  # binary classification
        metrics=["accuracy"]
    )

    # Train the model
    # `train_generator`, `val_generator`, and the EarlyStopping callback `ES` are defined elsewhere
    history = modelo.fit(
        train_generator,
        epochs=20,
        callbacks=[ES],
        validation_data=val_generator,
        class_weight=pesos_clases  # Penalize mistakes on the minority class more heavily
    )
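Note that predicting class 0 for every validation image would already give a constant accuracy of 3612 / 4918 ≈ 0.73. To check whether the network has collapsed to a single class like that, I can inspect the distribution of its validation predictions (a minimal sketch, assuming val_generator was created with shuffle=False):

    import numpy as np

    # Predicted probability of class 1 for each validation image
    probs = modelo.predict(val_generator).ravel()
    preds = (probs > 0.5).astype(int)

    # If the model has collapsed, almost everything lands in one class
    # and the probabilities sit in a very narrow range
    print("predicted class counts:", np.bincount(preds, minlength=2))
    print("probability range:", probs.min(), "-", probs.max())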

But I get these results: [screenshot of the training log: accuracy and loss barely change across epochs]

I want to know why this is happening, given that I have been changing hyperparameters such as the learning rate and the number of neurons, and that I applied class_weight for the minority class.

  • This is just because of the class imbalance and the model predicting one class. – Commented Dec 29, 2024 at 11:42

