TimeDistributed layer

Issue

Sorry, I’m new to Keras and to RNNs in general. I have these data to train on: the shape of X_train is (n_steps=25, length_steps=3878, n_features=8) and the shape of y_train is (n_steps=25, n_features=4). Basically, for each step of length 3878 with 8 features I have one target of four values. Now, I don’t know how to train these data with a "temporal structure". Someone told me to use the TimeDistributed layer, but I’m having issues with the shapes. How can I use TimeDistributed on this?

import numpy as np
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, TimeDistributed, SimpleRNN

modelSimple = Sequential()

modelSimple.add(SimpleRNN(200, return_sequences=True, activation='softmax'))
modelSimple.add(SimpleRNN(200, return_sequences=False, activation='softmax'))
modelSimple.add(Dense(4, activation='softmax'))
modelSimple.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
modelSimple.fit(X_train, y_train, epochs = 7)
modelSimple.summary()

Solution

Because this is a classification task with a single target per sequence, I don’t think you need a TimeDistributed layer.
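To show what I mean, here is a minimal sequence-to-one sketch for your shapes (this assumes X_train is (25, 3878, 8) and y_train is (25, 4), i.e. one 4-class target per sequence, and uses the default tanh activation in the recurrent layers rather than softmax):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, SimpleRNN

model = Sequential()
model.add(SimpleRNN(200, return_sequences=True, input_shape=(3878, 8)))  # keep the full sequence for the next RNN
model.add(SimpleRNN(200, return_sequences=False))  # collapse the time axis -> output shape (None, 200)
model.add(Dense(4, activation='softmax'))          # one 4-class prediction per sequence
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=7)

The last recurrent layer returns only its final state, so the Dense output matches the (25, 4) targets directly, with no TimeDistributed needed.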

However, I can explain it to you. TimeDistributed allows you to apply the same operation to each time step. For instance, on a video you may want to apply the same Conv2D to each frame. In the example from the documentation, you have 10 frames and you apply the same convolution to each frame:

>>> inputs = tf.keras.Input(shape=(10, 128, 128, 3))
>>> conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3))
>>> outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs)
>>> outputs.shape

TensorShape([None, 10, 126, 126, 64])

For time series, the core idea is the same: you may want to apply an operation to the features of each time step. Because it is necessary to keep the time dependency, you should set return_sequences=True before a TimeDistributed layer. For instance, with your data:

modelSimple.add(SimpleRNN(200, return_sequences=True, activation='softmax', input_shape=(3878, 8)))
modelSimple.add(SimpleRNN(200, return_sequences=True, activation='softmax'))
modelSimple.add(TimeDistributed(Dense(4, activation='softmax')))

gives you:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
simple_rnn_15 (SimpleRNN)    (None, 3878, 200)         41800     
_________________________________________________________________
simple_rnn_16 (SimpleRNN)    (None, 3878, 200)         80200     
_________________________________________________________________
time_distributed_9 (TimeDist (None, 3878, 4)           804       
=================================================================

So you have a Dense layer that converts 200 features to 4. This conversion is repeated over all 3878 time steps thanks to TimeDistributed, and the same 804 weights (200 × 4 weights plus 4 biases) are shared across every step.
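If you want to check the weight sharing yourself, here is a small sketch that rebuilds just that last layer in isolation (the Input shape (3878, 200) mirrors the output of the second SimpleRNN above):

import tensorflow as tf

inputs = tf.keras.Input(shape=(3878, 200))
outputs = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(4))(inputs)
tf.keras.Model(inputs, outputs).summary()
# time_distributed: output shape (None, 3878, 4), 804 params = 200*4 weights + 4 biases

The parameter count stays at 804 no matter how many time steps there are, because the same Dense weights are applied at every step.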

However, Keras is well designed: applying a Dense layer to an input shaped (num_steps, features) only affects the last dimension, the features. As a result, a plain Dense layer naturally does what TimeDistributed does in most time-series models.
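A quick way to see this equivalence (a sketch comparing output shapes only, not trained weights):

import tensorflow as tf

x = tf.keras.Input(shape=(3878, 200))
plain = tf.keras.layers.Dense(4)(x)                                     # (None, 3878, 4)
wrapped = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(4))(x)  # (None, 3878, 4)
print(plain.shape, wrapped.shape)

Both give (None, 3878, 4), which is why you rarely need to wrap Dense in TimeDistributed in practice.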

Answered By – B Douchet

