I have used Bayesian optimization to tune machine learning hyperparameters. The optimized parameters are the hidden layer size and the learning rate. I now have two questions about dynamic neural networks:

  1. I have 4 datasets (House 1, House 2, House 3, House 4), as shown in the table below. The program is executed 4 times in a loop, once per dataset, with the same dynamic algorithm. Will the optimized parameters remain the same for each dataset? For instance, can the optimized parameters of House 1 also be used for House 4? Conventionally, in academic papers I have seen authors set only one set of optimized parameters across several datasets for static neural networks.

     | Dataset | Learning rate | Hidden layer size |
     |---------|---------------|-------------------|

  2. In dynamic networks, where the network architecture changes conditionally with every input sample, how can we claim that a given parameter set is optimal for the network, i.e. yields the minimum loss? The optimized parameters vary by small amounts on every new run of the script.
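To make question 1 concrete, here is a minimal sketch of tuning each dataset separately and then comparing the per-dataset results. Everything here is an assumption for illustration: plain random search stands in for the Bayesian optimizer, and `toy_loss` is a made-up surrogate for the validation loss of the dynamic network, with a different invented optimum per house.

```python
import random

# Hypothetical surrogate for the validation loss of the dynamic network on
# one house's data. Each house is given a different optimum, purely so the
# example can show per-dataset results diverging.
def toy_loss(house, lr, hidden):
    best_lr = 0.01 * house          # assumed per-house optimum (invented)
    best_hidden = 10 * house
    return (lr - best_lr) ** 2 + ((hidden - best_hidden) / 100.0) ** 2

# Random search stands in for the Bayesian optimizer here; the point is only
# that the search is run once per dataset, so each house gets its own result.
def tune(house, n_trials=200, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, -1)   # log-uniform learning rate
        hidden = rng.randint(5, 100)     # hidden layer size
        loss = toy_loss(house, lr, hidden)
        if best is None or loss < best[0]:
            best = (loss, lr, hidden)
    return best

results = {house: tune(house) for house in (1, 2, 3, 4)}
for house, (loss, lr, hidden) in results.items():
    print(f"House {house}: lr={lr:.4f}, hidden={hidden}")
```

If the printed per-house optima differ noticeably, reusing House 1's parameters for House 4 would leave performance on the table; if they are close, a single shared setting is defensible.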
