
Accessing the index in my neural network 'for' loop


I have a problem with my neural network: it is not keeping the best R2 for my model. It just ends up with the configuration from the last position of my list_tf variable. How can I fix this so that my network keeps the model with the smallest MSE across the list_tf combinations? Below is my code. Please help me.

import time
from itertools import product

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.layers import BatchNormalization, Dense, Dropout, Input
from tensorflow.keras.models import Model, load_model

t_f = time.time()
goal_master = []
goal = []
repetitions = 1
act_func = ['selu', 'gelu', 'relu']
# All ordered pairs of activation indices: (0, 0), (0, 1), ..., (2, 2)
list_tf = list(product(range(len(act_func)), range(len(act_func))))

for rep in range(repetitions):
    for data in [new_df]:
        dados = np.array(x)  # note: unused below, and 'x' is reassigned to a Keras tensor inside the loop
        for ne in neuronios:
            for hl in hidden_layers:
                for bs in list_batch_size:
                    for ep in list_epochs:
                        for opti in opti_functions:
                            for tf in list_tf:
                                k_neuron = ne  # number of neurons in the hidden layers
                                k_epoch = ep   # number of epochs
                                bool_batchnormalization = True
                                bool_dropout = False
                                a = ModelCheckpoint(filepath='model.h5', monitor='val_loss',
                                                    mode='min', save_freq='epoch',
                                                    save_best_only=True, verbose=1)

                                # Build the network architecture
                                i = Input(shape=(x_train.shape[1],))
                                x = Dense(k_neuron, kernel_initializer='uniform',
                                          activation=act_func[tf[0]])(i)
                                if hl >= 2:
                                    x = Dense(k_neuron, kernel_initializer='uniform',
                                              activation=act_func[tf[1]])(x)
                                if bool_batchnormalization:
                                    x = BatchNormalization()(x)
                                if bool_dropout:
                                    x = Dropout(0.2)(x)
                                # kernel_initializer='glorot_uniform' gave an R of 0.94 with relu/selu
                                x = Dense(1, kernel_initializer='uniform')(x)
                                model = Model(i, x)

                                # Compile
                                model.compile(optimizer=opti, loss='mean_squared_error')

                                # Fit
                                t = time.time()
                                r = model.fit(x_train, y_train,
                                              validation_data=(x_test, y_test),
                                              epochs=k_epoch, callbacks=[a],
                                              batch_size=bs, verbose=0)
                                elapsed = time.time() - t
                                Tempo_treinamento = '{:1.0f}min{:.0f}s'.format(int(elapsed / 60), elapsed % 60)

                                # Evaluate the model
                                test_loss = model.evaluate(x_test, y_test, verbose=0)
                                dp = np.sqrt(test_loss)  # RMSE

                                # Reload the best checkpoint of this run and score it
                                loaded_model = load_model('model.h5')
                                y_pred = loaded_model.predict(x_test)
                                y_pred = y_pred.round(2)
                                MSE = mean_squared_error(y_test, y_pred)
                                R2 = r2_score(y_test, y_pred)
                                # R2 = r2_score(y_test, model.predict(x_test))
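The reason only the last list_tf entry survives is that MSE and R2 are overwritten on every pass through the loop, and the ModelCheckpoint keeps rewriting the same model.h5 file. One way to keep the best result is to track the smallest MSE seen so far and copy that iteration's checkpoint aside before the next iteration overwrites it. Below is a minimal sketch of just that bookkeeping, assuming the loop above; the names best_mse, best_cfg and best_model.h5 are introduced here for illustration:

import shutil

best_mse = float('inf')  # smallest MSE seen so far
best_cfg = None          # activation pair that produced it

for tf in list_tf:
    # ... build, fit, and reload the checkpoint exactly as in the loop above ...
    y_pred = loaded_model.predict(x_test)
    MSE = mean_squared_error(y_test, y_pred)
    if MSE < best_mse:
        best_mse = MSE
        best_cfg = (act_func[tf[0]], act_func[tf[1]])
        # copy this run's checkpoint aside before the next iteration's
        # ModelCheckpoint overwrites model.h5
        shutil.copyfile('model.h5', 'best_model.h5')

print('best activations:', best_cfg, 'smallest MSE:', best_mse)

As a side note, product(act_func, repeat=2) would yield the activation names directly and avoid the index bookkeeping altogether.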
