Forming a Multi input LSTM in Keras


I am trying to predict neutron widths from resonance energies using a neural network (I'm quite new to Keras/NNs in general, so apologies in advance).

There is said to be a link between resonance energies and neutron widths, and since the energies increase monotonically, this can be modelled similarly to a time series problem.

In essence I have 2 columns of data, with the first column being resonance energy and the other column containing the respective neutron width on each row. I have decided to use an LSTM layer to help the network predict by utilising previous computations.

From various tutorials and other answers, it seems common to use a "look_back" argument when creating the dataset, to allow the network to use previous timesteps to help predict the current timestep, e.g.

trainX, trainY = create_dataset(train, look_back)

I would like to ask the following about forming the NN:

1) Given my particular application do I need to explicitly map each resonance energy to its corresponding neutron width on the same row?


2) look_back indicates how many previous values the NN can use to help predict the current value, but how is it incorporated with the LSTM layer? I.e. I don't quite understand how the two work together.

3) At which point do I invert the MinMaxScaler?

Those are the main two queries; for 1) I have assumed it's okay not to, and for 2) I believe it is possible but I don't really understand how. I can't quite work out what I have done wrong in the code. Ideally, once the code works, I would like to plot the relative deviation of predicted to reference values in the train and test data. Any advice would be much appreciated:

import numpy
import matplotlib.pyplot as plt
import pandas
import math

from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error


# convert an array of values into a dataset matrix

def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 1])
    return numpy.array(dataX), numpy.array(dataY)

# fix random seed for reproducibility
numpy.random.seed(7)      
# load the dataset
dataframe = pandas.read_csv('CSVDataFe56Energyneutron.csv', engine='python') 
dataset = dataframe.values
print("dataset")
print(dataset.shape)
print(dataset)

# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
print(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67) 
test_size = len(dataset) - train_size
train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :]

# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)  
testX, testY = create_dataset(test, look_back)
# reshape input to be  [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 1))
testX = numpy.reshape(testX, (testX.shape[0],look_back, 1))
# create and fit the LSTM network
number_of_hidden_layers = 16
model = Sequential()
model.add(LSTM(6, input_shape=(look_back,1)))
for x in range(0, number_of_hidden_layers):
    model.add(Dense(50, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(trainX, trainY, epochs=200, batch_size=32)
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
trainScore = model.evaluate(trainX, trainY, verbose=0)
print('Train Score: %.2f MSE (%.2f RMSE)' % (trainScore, math.sqrt(trainScore)))
testScore = model.evaluate(testX, testY, verbose=0)
print('Test Score: %.2f MSE (%.2f RMSE)' % (testScore, math.sqrt(testScore)))

1 Solution

#1



1) Given my particular application do I need to explicitly map each resonance energy to its corresponding neutron width on the same row?


Yes, you have to do that. Basically your data has to be of the shape X = [timestep, timestep, ...], y = [label, label, ...].
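For instance, a minimal sketch of that shaping, assuming a windowing helper (named create_dataset_multifeature here purely for illustration) that feeds both columns of each row to the network:

import numpy

# Hypothetical variant of the question's create_dataset helper: each sample
# holds the previous `look_back` (energy, width) rows, and the label is the
# next neutron width.
def create_dataset_multifeature(dataset, look_back=3):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back):
        dataX.append(dataset[i:i + look_back, :])  # both columns as features
        dataY.append(dataset[i + look_back, 1])    # next width as the label
    return numpy.array(dataX), numpy.array(dataY)

# X ends up with shape (samples, look_back, 2) and y with shape (samples,)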

2) look_back indicates how many previous values the NN can use to help predict the current value, but how is it incorporated with the LSTM layer? I.e. I don't quite understand how the two work together.

An LSTM is a sequence-aware layer. You can think of it like a hidden Markov model: it takes the first timestep, calculates something, and in the next timestep that previous calculation is taken into account. look_back, which is usually called sequence_length, is just the maximum number of timesteps.
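As a concrete sketch (assuming two features per timestep, as in the shaping above), look_back enters the model only through input_shape, and the LSTM then steps through those timesteps internally:

from keras.models import Sequential
from keras.layers import Dense, LSTM

look_back = 3  # the sequence_length: how many timesteps the LSTM unrolls over

model = Sequential()
# input_shape=(timesteps, features): the LSTM consumes the look_back window
# one timestep at a time, carrying its internal state between timesteps
model.add(LSTM(6, input_shape=(look_back, 2)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')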

3) At which point do I invert the MinMaxScaler?

Why should you do that? Furthermore, you don't need to scale your input.
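That said, if you do want predictions back in the original units, note that the scaler was fit on both columns, so you can't call inverse_transform directly on a single-column prediction; a minimal sketch is to undo the width column's scaling by hand after model.predict:

# Sketch: invert the (0, 1) scaling of the width column (column 1) using the
# per-feature parameters MinMaxScaler learned during fit
width_min = scaler.data_min_[1]
width_max = scaler.data_max_[1]
trainPredict_orig = trainPredict * (width_max - width_min) + width_min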

It seems like you have a general misconception in your model. If you have input_shape=(look_back, 1) you don't need LSTMs at all. If your sequence is just a sequence of single values, it might be better to avoid LSTMs. Furthermore, fitting your model should include validation after each epoch to track the loss and validation performance.

model.fit(x_train, y_train,
      batch_size=32,
      epochs=200,
      validation_data=[x_test, y_test],
      verbose=1)
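If you assign the return value, e.g. history = model.fit(...), the per-epoch losses are then available in history.history['loss'] and history.history['val_loss'] for plotting.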