TFRecord vs tf.image?

Issue I was under the impression that a pre-computed TFRecord file was the most efficient way to feed an input function. However, I keep seeing great-looking articles such as this one where the input function takes a reference
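For reference, a minimal tf.data pipeline over a pre-computed TFRecord file might look like the sketch below; the feature names and types are assumptions for illustration, not taken from the article:

```python
import tensorflow as tf

# Hypothetical feature spec; adjust names/types to your TFRecord contents.
feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    # Parse one serialized tf.train.Example and decode its JPEG payload.
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    return image, parsed["label"]

def input_fn(path, batch_size=32):
    # Read, parse, batch, and prefetch records from a TFRecord file.
    return (tf.data.TFRecordDataset(path)
            .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
            .batch(batch_size)
            .prefetch(tf.data.AUTOTUNE))
```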

Continue reading

ValueError: Layer #232 (named "fpn_cells/cell_0/fnode0/add") expects 0 weight(s), but the saved weights have 1 element(s)

Issue I’ve been trying to train a dataset using python train.py --snapshot efficientdet-d0.h5 --phi 0 --gpu 0 --random-transform --compute-val-loss --freeze-backbone --batch-size 4 --steps 100 coco C:/Users/mustafa/Downloads/deneme.v1-1.coco/datasets/coco and got this error: Traceback (most recent call last): File "train.py", line 381, in

Continue reading

ImportError: cannot import name 'device_spec' from 'tensorflow.python.framework'

Issue When I try to run the command python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config, this error appears: (tensorflow1.13) C:\tensorflow1\models\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config Traceback (most recent call last): File "train.py", line 51, in from object_detection.builders import dataset_builder File "C:\tensorflow1\models\research\object_detection\builders\dataset_builder.py", line 33,

Continue reading

Weird behaviour of my CNN validation accuracy and loss during the training phase

Issue Here is the architecture of my network: cnn3 = Sequential() cnn3.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) cnn3.add(MaxPooling2D((2, 2))) cnn3.add(Dropout(0.25)) cnn3.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) cnn3.add(MaxPooling2D(pool_size=(2, 2))) cnn3.add(Dropout(0.25)) cnn3.add(Conv2D(128, kernel_size=(3, 3), activation='relu')) cnn3.add(Dropout(0.2)) cnn3.add(Flatten()) cnn3.add(Dense(128, activation='relu')) cnn3.add(Dropout(0.4)) # 0.3 cnn3.add(Dense(4, activation='softmax'))
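For readability, the flattened snippet above restated as a runnable sketch; the input shape is an assumption, since the excerpt does not show the real one:

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Conv2D, Dense, Dropout, Flatten,
                                     MaxPooling2D)

input_shape = (64, 64, 1)  # assumed; the excerpt does not define it

# Same stack as the excerpt: three conv blocks with dropout, then a
# dense head ending in a 4-way softmax.
cnn3 = Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu',
           input_shape=input_shape),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Conv2D(128, kernel_size=(3, 3), activation='relu'),
    Dropout(0.2),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.4),  # 0.3
    Dense(4, activation='softmax'),
])
```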

Continue reading

Split tensor into training and test sets

Issue Let’s say I’ve read in a text file using a TextLineReader. Is there some way to split this into train and test sets in TensorFlow? Something like: def read_my_file_format(filename_queue): reader = tf.TextLineReader() key, record_string = reader.read(filename_queue) raw_features, label = tf.decode_csv(record_string)
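With the modern tf.data API (rather than the queue-based readers in the excerpt), one hedged way to split a pipeline is shuffle-then-take/skip; the dataset and split size below are stand-ins:

```python
import tensorflow as tf

# Stand-in for a CSV/text pipeline; replace with tf.data.TextLineDataset.
dataset = tf.data.Dataset.range(100)

# Shuffle once with a fixed seed; reshuffle_each_iteration=False keeps the
# order stable across epochs, so take/skip stay disjoint.
dataset = dataset.shuffle(100, seed=42, reshuffle_each_iteration=False)

train_size = 80  # assumed 80/20 split
train_ds = dataset.take(train_size)
test_ds = dataset.skip(train_size)
```

The reshuffle flag is the important detail: with the default reshuffling each epoch, `take` and `skip` would draw from a different ordering and the two sets could overlap.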

Continue reading

TimeDistributed layer

Issue Sorry, I’m new to Keras and RNNs in general. I have these data to train on. Shape of X_train=(n_steps=25, length_steps=3878, n_features=8); shape of y_train=(n_steps=25, n_features=4). Basically, for each step of length 3878 with 8 features I have
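Under the shapes quoted in the excerpt, one plausible reading is a sequence-to-vector model, where TimeDistributed is not needed at all; it only enters when you want a per-timestep output. A hedged sketch of both (layer sizes are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

# X_train: (25, 3878, 8) -> y_train: (25, 4): the LSTM returns only its
# last state (return_sequences=False by default), so no TimeDistributed.
model = Sequential([
    LSTM(32, input_shape=(3878, 8)),
    Dense(4),
])

# TimeDistributed is for per-timestep targets instead: output (batch, 3878, 4).
seq_model = Sequential([
    LSTM(32, input_shape=(3878, 8), return_sequences=True),
    TimeDistributed(Dense(4)),
])
```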

Continue reading

TensorFlow GradCAM – model.fit() – ValueError: Shapes (None, 1) and (None, 2) are incompatible

Issue As part of assignment 4 of the Coursera CV TF course, my code fails in model.fit(): model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer=tf.keras.optimizers.RMSprop(lr=0.001)) # shuffle and create batches before training model.fit(train_batches, epochs=25) with error: ValueError: Shapes (None, 1) and (None, 2) are incompatible Any hint at where
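A (None, 1) vs (None, 2) mismatch with categorical_crossentropy usually means the labels are integers while the loss expects one-hot vectors. A minimal sketch of the two common fixes; the toy model and data here are assumptions, not the course's code:

```python
import numpy as np
import tensorflow as tf

# Toy 2-class model standing in for the assignment's network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation='softmax', input_shape=(4,)),
])

# Fix 1: keep integer labels and switch to the sparse loss.
model.compile(loss='sparse_categorical_crossentropy',
              metrics=['accuracy'],
              optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001))

x = np.random.rand(8, 4).astype('float32')
y_int = np.random.randint(0, 2, size=(8, 1))  # shape (None, 1): integers
model.fit(x, y_int, epochs=1, verbose=0)

# Fix 2: one-hot encode the labels and keep categorical_crossentropy.
y_onehot = tf.keras.utils.to_categorical(y_int, num_classes=2)  # (None, 2)
```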

Continue reading

Training the model of Shakespeare with GPU instead of TPU

Issue I’m trying to see the difference between training a model with a TPU and a GPU. This is the training model part: import time start = time.time() tf.keras.backend.clear_session() resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR']) tf.config.experimental_connect_to_cluster(resolver) # TPU initialization tf.tpu.experimental.initialize_tpu_system(resolver) print("All devices: ",
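One hedged way to make the same training script run on either accelerator is to pick the distribution strategy at startup: try the TPU resolver, and fall back to MirroredStrategy, which uses GPUs if present and the CPU otherwise. A sketch (the tiny model is a placeholder):

```python
import tensorflow as tf

try:
    # On Colab TPU runtimes this resolves from the environment.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except (ValueError, tf.errors.NotFoundError):
    # No TPU found: MirroredStrategy uses available GPUs, else the CPU.
    strategy = tf.distribute.MirroredStrategy()

# Model creation must happen inside the strategy scope either way.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
    model.compile(optimizer='adam', loss='mse')
```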

Continue reading

How does this split of train and evaluation data ensure there is no overlap?

Issue I am reading this sentiment classification tutorial from TensorFlow: https://www.tensorflow.org/tutorials/keras/text_classification It splits the data into training and validation sets with the following code: batch_size = 32 seed = 42 raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory( 'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='training', seed=seed) raw_val_ds =
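The short answer is that with the same seed and validation_split, subset='training' and subset='validation' are complementary slices of one deterministic shuffle of the file list, so they cannot overlap. A self-contained sketch on a tiny synthetic corpus (the directory layout and sizes are assumptions for illustration):

```python
import os
import tempfile
import tensorflow as tf

# Build a tiny two-class text corpus: 10 files per label.
root = os.path.join(tempfile.mkdtemp(), "corpus")
for label in ("pos", "neg"):
    os.makedirs(os.path.join(root, label))
    for i in range(10):
        with open(os.path.join(root, label, f"{i}.txt"), "w") as f:
            f.write(f"{label} sample {i}")

# Identical seed and split in both calls is what guarantees disjointness.
kwargs = dict(batch_size=2, validation_split=0.2, seed=42)
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
    root, subset="training", **kwargs)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
    root, subset="validation", **kwargs)
```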

Continue reading

Reshape the input for BatchDataset trained model

Issue I trained my TensorFlow model on images after converting them to a BatchDataset. IMG_size = 224 INPUT_SHAPE = [None, IMG_size, IMG_size, 3] # 4D input model.fit(x=train_data, epochs=EPOCHES, validation_data=test_data, validation_freq=1, # check validation metrics every epoch callbacks=[tensorboard, early_stopping]) model.compile( loss=tf.keras.losses.CategoricalCrossentropy(), optimizer=tf.keras.optimizers.Adam(),
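A model trained on a BatchDataset expects a 4-D batch, so a single image needs a leading batch axis before prediction. A minimal sketch (the zero image is a placeholder):

```python
import numpy as np

IMG_SIZE = 224

# A single image is 3-D: (height, width, channels).
image = np.zeros((IMG_SIZE, IMG_SIZE, 3), dtype=np.float32)  # placeholder

# Add the batch axis so the shape matches [None, 224, 224, 3].
batch = np.expand_dims(image, axis=0)  # -> (1, 224, 224, 3)
# Equivalently, with TensorFlow: batch = tf.expand_dims(image, axis=0)
```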

Continue reading