What is the appropriate input shape for a 2D CNN-based network?

Issue

I am having trouble passing the appropriate input shape to a CNN-based network with a Conv2D layer.
These are my initial training shapes; my training data is reshaped into windows:

X_train: (7, 100, 5185) = (number of features, window size, number of windows)

y_train: (5185, 100) = one labeled column, also windowed

I then calculate recurrence plots from this data, after which I have these shapes:

X_train_rp: (5185, 100, 100, 7), where 100 × 100 is the size of my images

y_train: (5185, 100), unchanged

I pass these two to a conv2D-based CNN with:

model.add(layers.Conv2D(64, kernel_size=3, activation='relu', input_shape=(100, 100, 7)))

And I get this error: Data cardinality is ambiguous: x sizes: 100, 100, 100 ... y sizes: 5185. Make sure all arrays contain the same number of samples.

I have tried many shape combinations, but in vain. What am I doing wrong?
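For context on the error message: Keras infers the number of samples from axis 0 of each array passed to fit, and requires x and y to agree on it. A minimal illustration of that rule, using placeholder arrays with the shapes described above (not the author's actual data):

```python
import numpy as np

# Samples must come first: (windows, height, width, channels)
X = np.zeros((5185, 100, 100, 7))
y = np.zeros((5185, 100))

# This is the cardinality check Keras performs before training:
# axis 0 of every input and target must match.
assert X.shape[0] == y.shape[0]
```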

EDIT:
This is the model definition, using placeholder tensors in place of the real data:

import tensorflow as tf

X_train_rp = tf.zeros((10, 100,100, 7))
y_train =  tf.zeros((10, 100))

#create model 
model = tf.keras.Sequential() #add model layers    
model.add(tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu',
                                 data_format='channels_last', input_shape=(100, 100, 7))) 
model.add(tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu')) 
model.add(tf.keras.layers.Flatten()) 
model.add(tf.keras.layers.Dense(2, activation='softmax')) 

#compile model using accuracy to measure model performance 
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train_rp, y_train, epochs=3)
model.predict(X_train_rp)

Solution

Judging from the module aliases, I assume you are using the TensorFlow Keras package with a sequential model definition. Your assumptions about the input shape are actually correct, as demonstrated by this snippet adapted from the Keras documentation:

import tensorflow as tf

input_shape = (10, 100, 100, 7)
x = tf.random.normal(input_shape)
y = tf.keras.layers.Conv2D(filters=64, kernel_size=3, activation='relu', input_shape=input_shape[1:])(x)
print(y.shape)
>>> (10, 98, 98, 64)
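The 98 in that output follows from the 'valid' (default) padding arithmetic: output size = input size − kernel size + 1 per spatial dimension. A quick sanity check of that formula (the helper name is my own, not a Keras API):

```python
def conv2d_valid_out(size: int, kernel: int, stride: int = 1) -> int:
    """Spatial output size of a conv layer with 'valid' padding."""
    return (size - kernel) // stride + 1

# First Conv2D: 100 -> 98, second Conv2D: 98 -> 96
print(conv2d_valid_out(100, 3))  # 98
print(conv2d_valid_out(98, 3))   # 96
```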

This means the problem lies within your sequential model definition. Please update your question and include the necessary code.

EDIT:
Using the model definition provided by the OP with a small modification yields a working training process. The issue lies in the definition of the Dense layer, which takes the number of output nodes as its first positional argument, not the input dimension. Since y_train has shape (samples, 100), the final layer must produce 100 outputs per sample, not 2.

To keep the computational cost down, I reduced the number of training examples from 5185 to 10.

import tensorflow as tf

X_train_rp = tf.zeros((10, 100,100, 7))
y_train =  tf.zeros((10, 100))

#create model 
model = tf.keras.Sequential() #add model layers    
model.add(tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu',
                                 data_format='channels_last', input_shape=(100, 100, 7))) 
model.add(tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu')) 
model.add(tf.keras.layers.Flatten()) 

# Here comes the fix:
model.add(tf.keras.layers.Dense(100, activation='softmax')) 

#compile model using accuracy to measure model performance 
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train_rp, y_train, epochs=3)
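As a sanity check on why Dense(100) is the right size, the shape arithmetic through the model above can be traced without TensorFlow at all (pure Python, following the layer parameters shown; the variable names are my own):

```python
# Trace spatial dims through two Conv2D layers with kernel_size=3,
# 'valid' padding: each layer shrinks height/width by 2.
h = w = 100
for kernel in (3, 3):
    h, w = h - kernel + 1, w - kernel + 1

# Flatten: second conv has 32 filters.
flat = h * w * 32

print(h, w)   # 96 96
print(flat)   # 294912

# Dense(100) then maps these 294912 features to 100 outputs
# per sample, matching y_train's shape of (10, 100).
```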


Answered By – Jonathan Weine

