Forming a Multi input LSTM in Keras


I am trying to predict neutron widths from resonance energies, using a Neural Network (I'm quite new to Keras/NNs in general so apologies in advance).

There is said to be a link between resonance energies and neutron widths, and since the energies increase monotonically this can be modelled in a similar way to a time series problem.

In essence I have 2 columns of data, with the first column being resonance energy and the other containing the respective neutron width on each row. I have decided to use an LSTM layer to help the network predict by utilising previous computations.

From various tutorials and other answers, it seems common to use a "look_back" argument when creating the dataset, to allow the network to use previous timesteps to help predict the current timestep, e.g.

trainX, trainY = create_dataset(train, look_back)
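For concreteness, here is a toy run of that windowing (made-up numbers, using the same create_dataset helper as in my full listing below, repeated here so the snippet runs on its own). Each X sample is a window of previous energies, and Y is the width at the following step:

import numpy

def create_dataset(dataset, look_back=1):
    # same windowing helper as in the full listing below
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        dataX.append(dataset[i:(i + look_back), 0])  # window of energies
        dataY.append(dataset[i + look_back, 1])      # width at the next step
    return numpy.array(dataX), numpy.array(dataY)

# toy two-column array: column 0 = resonance energy, column 1 = neutron width
toy = numpy.array([[0.1, 1.0],
                   [0.2, 2.0],
                   [0.3, 3.0],
                   [0.4, 4.0],
                   [0.5, 5.0]])

toyX, toyY = create_dataset(toy, look_back=3)
print(toyX)  # [[0.1 0.2 0.3]] -> energies at steps t, t+1, t+2
print(toyY)  # [4.]            -> width at step t+3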

I would like to ask regarding forming the NN:

1) Given my particular application do I need to explicitly map each resonance energy to its corresponding neutron width on the same row?

2) Look_back indicates how many previous values the NN can use to help predict the current value, but how is it incorporated with the LSTM layer? I.e. I don't quite understand how the two are used together.

3) At which point do I inverse the MinMaxScaler?

Those are the main two queries; for 1) I have assumed it's okay not to, and for 2) I believe it is possible but I don't really understand how. I can't quite work out what I have done wrong in the code; ideally, once the code works, I would like to plot the relative deviation of the predicted values from the reference values in the train and test data. Any advice would be much appreciated:

import numpy
import matplotlib.pyplot as plt
import pandas
import math

from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error


# convert an array of values into a dataset matrix

def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 1])
    return numpy.array(dataX), numpy.array(dataY)

# fix random seed for reproducibility
numpy.random.seed(7)      
# load the dataset
dataframe = pandas.read_csv('CSVDataFe56Energyneutron.csv', engine='python') 
dataset = dataframe.values
print("dataset")
print(dataset.shape)
print(dataset)

# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
print(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67) 
test_size = len(dataset) - train_size
train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :]

# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)  
testX, testY = create_dataset(test, look_back)
# reshape input to be  [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 1))
testX = numpy.reshape(testX, (testX.shape[0],look_back, 1))
# create and fit the LSTM network
number_of_hidden_layers = 16
model = Sequential()
model.add(LSTM(6, input_shape=(look_back,1)))
for x in range(0, number_of_hidden_layers):
    model.add(Dense(50, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(trainX, trainY, epochs=200, batch_size=32)
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
trainScore = model.evaluate(trainX, trainY, verbose=0)
print('Train Score: %.2f MSE (%.2f RMSE)' % (trainScore, math.sqrt(trainScore)))
testScore = model.evaluate(testX, testY, verbose=0)
print('Test Score: %.2f MSE (%.2f RMSE)' % (testScore, math.sqrt(testScore)))
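For reference, this is roughly the relative-deviation plot I am after once the model trains (a sketch continuing the script above; note these values are still in the scaler's (0, 1) units, and a zero reference value would need guarding before dividing):

# sketch of the plot I am aiming for: relative deviation of predicted
# values from the reference values, for train and test data
trainDev = (trainPredict[:, 0] - trainY) / trainY
testDev = (testPredict[:, 0] - testY) / testY

plt.plot(trainDev, label='train')
plt.plot(numpy.arange(len(trainDev), len(trainDev) + len(testDev)),
         testDev, label='test')
plt.xlabel('sample index')
plt.ylabel('(predicted - reference) / reference')
plt.legend()
plt.show()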

1 Answer

#1



1) Given my particular application do I need to explicitly map each resonance energy to its corresponding neutron width on the same row?

Yes, you have to do that. Basically your data has to be in the shape X = [timestep, timestep, ...], y = [label, label, ...].
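A minimal sketch of that pairing with made-up numbers: each window of energies in X lines up with exactly one width label in y, which is why the energy and its width have to sit on the same row to begin with.

import numpy

data = numpy.array([[1.2, 0.5],   # each row: [energy, width]
                    [2.3, 0.7],
                    [3.1, 0.6],
                    [4.0, 0.9]])

look_back = 2
X = numpy.array([data[i:i + look_back, 0] for i in range(len(data) - look_back)])
y = numpy.array([data[i + look_back, 1] for i in range(len(data) - look_back)])
print(X.shape, y.shape)  # (2, 2) (2,) -- one label per window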

2) Look_back indicates how many previous values the NN can use to help predict the current value, but how is it incorporated with the LSTM layer? I.e. I don't quite understand how the two are used together.

An LSTM is a sequence-aware layer. You can think about it as a hidden Markov model: it takes the first timestep, calculates something, and in the next timestep the previous calculation is taken into account. Look_back, which is usually called sequence_length, is just the maximum number of timesteps.
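A minimal sketch of how the two fit together: look_back (the sequence_length) simply becomes the timesteps dimension of the LSTM's input, and the layer walks over those timesteps internally, carrying its state forward (dummy data for illustration):

import numpy
from keras.models import Sequential
from keras.layers import LSTM, Dense

sequence_length = 3  # this is your look_back
model = Sequential()
# the LSTM consumes (samples, sequence_length, features) and steps through
# the sequence_length timesteps one by one, reusing its internal state
model.add(LSTM(6, input_shape=(sequence_length, 1)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

# 10 dummy samples, each a window of 3 previous values with 1 feature
X = numpy.random.rand(10, sequence_length, 1)
y = numpy.random.rand(10)
model.fit(X, y, epochs=1, batch_size=2, verbose=0)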

3) At which point do I inverse the MinMaxScaler?

Why should you do that? Furthermore, you don't need to scale your input.
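That said, if you do keep the scaling and want predictions back in physical units, the usual pattern is scaler.inverse_transform after predicting. A standalone sketch with made-up numbers; since your scaler was fitted on two columns, one-column predictions have to be padded back to two columns first:

import numpy
from sklearn.preprocessing import MinMaxScaler

data = numpy.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(data)  # in the question this is the fit_transform on the 2-column dataset

# pretend these are model predictions for the width column, in scaled units
pred_scaled = numpy.array([[0.5], [1.0]])

# the scaler was fitted on two columns, so pad the predictions back into
# a two-column array before inverting, then read off the width column
padded = numpy.zeros((len(pred_scaled), 2))
padded[:, 1] = pred_scaled[:, 0]
pred_widths = scaler.inverse_transform(padded)[:, 1]
print(pred_widths)  # [20. 30.] -- back in the original width units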

It seems like you have a general misconception in your model. If you have input_shape=(look_back, 1) you don't need LSTMs at all. If your sequence is just a sequence of single values, it might be better to avoid LSTMs. Furthermore, fitting your model should include validation after each epoch to track the loss and validation performance:

model.fit(x_train, y_train,
          batch_size=32,
          epochs=200,
          validation_data=(x_test, y_test),
          verbose=1)
