
All Questions

8 questions with no upvoted or accepted answers
4 votes · 0 answers · 92 views

Does a ROC AUC difference between cross-validation and the test set indicate overfitting or some other problem?

I am training a composite model (XGBoost, Linear Regression, and RandomForest) to predict the probability of people being injured. Looking at the results of cross-validation with 5 folds, I can't see any problem ...
GregOliveira
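A minimal sketch of the comparison this question is about, on synthetic data with a stand-in RandomForest rather than the asker's composite model: compute the cross-validated AUC on the training portion, then the AUC on a held-out test set, and compare the gap.

```python
# Illustrative only: synthetic data and a placeholder model, not the
# asker's XGBoost / Linear Regression / RandomForest composite.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = RandomForestClassifier(random_state=0)

# 5-fold cross-validated AUC, computed on the training portion only.
cv_auc = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")

# AUC on the untouched test set.
model.fit(X_train, y_train)
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

print("CV AUC: %.3f +/- %.3f" % (cv_auc.mean(), cv_auc.std()))
print("Test AUC: %.3f" % test_auc)
```

A test AUC well below the CV mean (beyond a couple of fold standard deviations) is the usual sign of overfitting or of a train/test distribution mismatch.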
1 vote · 0 answers · 112 views

How to implement k-fold CV in hybrid feature selection and evaluate the classification model's performance?

I have been working on a hybrid feature selection method combined with the hyperopt package for hyperparameter tuning, and I am thinking about evaluating the performance of several classifiers. I looked ...
WDpad159
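A common pitfall behind this kind of question is running feature selection outside the CV folds. A minimal sketch, with a stand-in SelectKBest selector and classifier (a hyperopt objective would simply wrap the CV score), of keeping selection inside each fold via a Pipeline:

```python
# Sketch only: the selector and classifier are placeholders for the
# asker's hybrid feature selection and candidate models.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=500, n_features=40, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# 5-fold CV: SelectKBest is refit on each fold's training split,
# so the selection never sees the fold's validation data.
scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
print("mean F1: %.3f" % scores.mean())
```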
1 vote · 0 answers · 371 views

How to put KerasClassifier, Hyperopt and Sklearn cross-validation together

I am performing a hyperparameter tuning (hyperopt) task with sklearn on a Keras model. I am trying to optimize a KerasClassifier using sklearn cross-validation. Some code follows: ...
JING
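A sketch of one way to wire these three together, assuming the scikeras KerasClassifier wrapper (the legacy keras.wrappers.scikit_learn wrapper works similarly); the tiny network and search space are illustrative, not the asker's:

```python
# Sketch: hyperopt's fmin minimizes the negated mean CV score of a
# scikeras-wrapped Keras model. Network and search space are stand-ins.
from hyperopt import Trials, fmin, hp, tpe
from scikeras.wrappers import KerasClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from tensorflow import keras

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

def build_model(units=16):
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(units, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def objective(params):
    # model__units is routed by scikeras to build_model's `units` argument.
    clf = KerasClassifier(model=build_model,
                          model__units=int(params["units"]),
                          epochs=5, verbose=0)
    # hyperopt minimizes, so return the negated mean CV accuracy.
    return -cross_val_score(clf, X, y, cv=3).mean()

space = {"units": hp.quniform("units", 8, 64, 8)}
best = fmin(objective, space, algo=tpe.suggest, max_evals=10,
            trials=Trials())
print(best)
```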
1 vote · 0 answers · 59 views

Using pipelines with cross-validation of several models in scikit-learn

Is there a simple way to cross-validate several models using sklearn pipelines?
I.D.M
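A minimal sketch of one simple way to do this: keep a dict of candidate pipelines and loop cross_val_score over them (models and data are placeholders):

```python
# Each candidate gets its own Pipeline so preprocessing is refit per fold.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(random_state=0)

candidates = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression()),
    "svc": make_pipeline(StandardScaler(), SVC()),
    "rf": make_pipeline(RandomForestClassifier(random_state=0)),
}

for name, pipe in candidates.items():
    scores = cross_val_score(pipe, X, y, cv=5)
    print("%s: %.3f +/- %.3f" % (name, scores.mean(), scores.std()))
```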
1 vote · 0 answers · 64 views

AUC for basic models is higher than for bagged models

Is it correct to get a slightly lower AUC with bagged algorithms than with non-bagged ones? The first figure shows the ROC with bagged algorithms and the second figure shows the ROC without bagging. The ...
Javiss
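A slightly lower AUC from bagging can be legitimate, particularly when the base model already has low variance. A sketch of the comparison on synthetic data (models are stand-ins for the asker's):

```python
# Compare cross-validated AUC of a base classifier against its bagged
# version; a small gap in either direction is plausible.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)

base = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(base, n_estimators=50, random_state=0)

for name, clf in [("base", base), ("bagged", bagged)]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print("%s: %.3f" % (name, auc.mean()))
```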
1 vote · 0 answers · 775 views

Consistently inconsistent cross-validation results that are wildly different from original model accuracy

I have a question about cross-validation using sklearn in Python (2.7). I have updated this to include the code I use prior to cross-validation. I import a CSV into a dataframe. Some of these ...
pmp
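Unstable fold scores often come from unshuffled, ordered data or an unfixed random seed. A sketch of a reproducible setup (written in Python 3 syntax, though the question mentions Python 2.7; data and model are placeholders):

```python
# Shuffle with a fixed seed and compare fold scores to a single hold-out.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (KFold, cross_val_score,
                                     train_test_split)

X, y = make_classification(n_samples=500, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Unshuffled KFold can give wildly different fold scores on ordered rows.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
cv_scores = cross_val_score(clf, X, y, cv=cv)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
holdout = clf.fit(X_tr, y_tr).score(X_te, y_te)

print("CV: %.3f +/- %.3f, hold-out: %.3f"
      % (cv_scores.mean(), cv_scores.std(), holdout))
```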
0 votes · 0 answers · 23 views

How to use cross-validation to select/evaluate a model with a probability score as the output?

Initially I was evaluating my models using cross_val with out-of-the-box metrics such as precision, recall, F1 score, etc., or with my own metrics defined in ...
szheng
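A sketch of two ways to evaluate probability outputs under cross-validation: built-in probability-aware scorers, or out-of-fold probabilities from cross_val_predict fed into any custom metric (model and data are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import cross_val_predict, cross_val_score

X, y = make_classification(random_state=0)
clf = LogisticRegression(max_iter=1000)

# Built-in scorers that consume predicted probabilities, not hard labels.
print(cross_val_score(clf, X, y, cv=5, scoring="neg_log_loss").mean())
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

# Out-of-fold probabilities for any custom probability-based metric.
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(brier_score_loss(y, proba))
```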
0 votes · 0 answers · 362 views

Split a dataframe into train and test sets with x% for cross-validation

I am working on a dataframe and need to split it into a training set and a test set, with 90% for cross-validation training and 10% for a final test set. The problem is that I do not know where to ...
ikram zouaoui
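A minimal sketch of such a split: hold out 10% with train_test_split and run cross-validation only on the remaining 90% (the dataframe and column names are placeholders):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Toy dataframe standing in for the asker's data.
df = pd.DataFrame({"feat": range(100), "target": [0, 1] * 50})

# 90% for CV training, 10% kept aside for the final test.
train_df, test_df = train_test_split(df, test_size=0.10, random_state=0,
                                     stratify=df["target"])

X_train, y_train = train_df[["feat"]], train_df["target"]

# Cross-validate on the 90% portion only; test_df stays untouched.
scores = cross_val_score(LogisticRegression(), X_train, y_train, cv=5)
print(scores.mean())
```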
