Resize images and merge datasets in Python

Issue I have two datasets, images1 and images2 (images2 is generated in the function below by reading images in a loop from the given path):

    def measure(images1, path):
        images2 = []
        for filename in glob.glob(path):  # looking for pngs
            temp = cv2.imread(filename).astype(float)
            images2.append(temp)
        print(np.array(images2).dtype)
        print(np.array(images1).dtype)
        print(np.array(images2).shape)
        print(np.array(images1).shape)

Continue reading
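A common pitfall here: if the loaded images have different heights and widths, np.array(images2) produces a ragged object array that cannot be merged with images1. Resizing every image to one common size before stacking fixes this. A minimal sketch with NumPy only, where resize_nn is a hypothetical nearest-neighbour helper standing in for cv2.resize(temp, (SIZE, SIZE)), and the two datasets are simulated with dummy arrays:

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resize by index sampling (stand-in for cv2.resize)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

# Hypothetical stand-ins for the two datasets: images of different sizes.
images1 = [np.zeros((100, 120, 3)), np.zeros((80, 90, 3))]
images2 = [np.zeros((64, 64, 3)), np.zeros((200, 150, 3))]

SIZE = 64
batch1 = np.stack([resize_nn(im, SIZE) for im in images1])
batch2 = np.stack([resize_nn(im, SIZE) for im in images2])

# Once both sets share the shape (N, SIZE, SIZE, 3), merging is a concatenate:
merged = np.concatenate([batch1, batch2], axis=0)
print(merged.shape)  # (4, 64, 64, 3)
```

With OpenCV installed you would replace resize_nn with cv2.resize inside the reading loop, so images2 is already uniform when the function returns.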

What do the bars in Keras training show?

Issue I am using Keras, and part of my network and parameters are as follows:

    parser.add_argument("--batch_size", default=396, type=int, help="batch size")
    parser.add_argument("--n_epochs", default=10, type=int, help="number of epochs")
    parser.add_argument("--epoch_steps", default=10, type=int, help="number of epoch steps")
    parser.add_argument("--val_steps", default=4, type=int, help="number of validation steps")

Continue reading
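For context: the progress bar Keras prints each epoch counts training steps (batches), not samples. Unless steps_per_epoch is passed explicitly, the step count is ceil(n_samples / batch_size). A quick sketch of that arithmetic, using the batch size from the parser defaults above and a hypothetical dataset size:

```python
import math

n_samples = 3960        # hypothetical dataset size
batch_size = 396        # from the --batch_size default above
steps_per_epoch = math.ceil(n_samples / batch_size)
print(steps_per_epoch)  # 10 -> the bar counts "1/10 ... 10/10" each epoch
```

The numbers after the bar (loss, accuracy, and any val_ metrics) are running averages over the steps completed so far in that epoch.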

How to plot training error and validation error vs. number of epochs?

Issue How can I plot training error and validation error vs. the number of epochs?

    train_data = generate_arrays_for_training(indexPat, filesPath, end=75)
    validation_data = generate_arrays_for_training(indexPat, filesPath, start=75)
    model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75),  # takes the first 75%
                        validation_data=generate_arrays_for_training(indexPat, filesPath, start=75),  # takes the last 25%
                        # steps_per_epoch=10000, epochs=10)
                        steps_per_epoch=int((len(filesPath)-int(len(filesPath)/100*25))),

Continue reading
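The usual approach: model.fit (and fit_generator) returns a History object whose .history dict holds one loss value per epoch for training and validation. Plotting those two lists against the epoch index gives the requested curves. A sketch with matplotlib, where the history dict is filled with hypothetical values standing in for model.fit(...).history:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical values standing in for history = model.fit(...).history
history = {"loss": [0.9, 0.6, 0.45, 0.38],
           "val_loss": [1.0, 0.7, 0.55, 0.52]}

epochs = range(1, len(history["loss"]) + 1)
plt.plot(epochs, history["loss"], label="training loss")
plt.plot(epochs, history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_vs_epochs.png")
```

Note that with generators, validation loss only appears in history if validation_data and validation_steps are actually passed to fit.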

Can we use a Keras model's accuracy metric for an Image Captioning model?

Issue Kindly consider the following line of code:

    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Am I allowed to use metrics=['accuracy'] for my Image Captioning model? My model has been defined as follows:

    inputs1 = Input(shape=(2048,))
    fe1 = Dropout(0.2)(inputs1)
    fe1 = BatchNormalization()(fe1)
    fe2 = Dense(256, activation='relu')(fe1)
    inputs2

Continue reading
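Worth knowing what 'accuracy' actually measures here: with categorical_crossentropy it is per-token accuracy, i.e. at each decoding step the argmax of the predicted distribution is compared to the target token. That is valid to compute, but it is not a caption-level quality score (sequence metrics such as BLEU are usually reported as well). A NumPy sketch of the per-token computation, with hypothetical targets and predictions for a 4-token caption over a vocabulary of 5:

```python
import numpy as np

y_true = np.array([2, 0, 3, 1])            # target token indices, one per step
probs = np.array([[0.1, 0.1, 0.6, 0.1, 0.1],   # hypothetical softmax outputs
                  [0.7, 0.1, 0.1, 0.05, 0.05],
                  [0.2, 0.2, 0.2, 0.3, 0.1],
                  [0.1, 0.5, 0.2, 0.1, 0.1]])

y_pred = probs.argmax(axis=1)              # predicted token at each step
token_accuracy = (y_pred == y_true).mean() # fraction of steps predicted correctly
print(token_accuracy)  # 1.0 for these values
```

So metrics=['accuracy'] will run without error on a captioning model; the caveat is only in how to interpret the number.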

Training the model of Shakespeare with GPU instead of TPU

Issue I’m trying to see the difference between training a model with a TPU and with a GPU. This is the training model part:

    import time
    start = time.time()
    tf.keras.backend.clear_session()
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    tf.config.experimental_connect_to_cluster(resolver)
    # TPU initialization
    tf.tpu.experimental.initialize_tpu_system(resolver)
    print("All devices: ",

Continue reading
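For the GPU run, the TPU resolver and initialization lines above are simply dropped: tf.keras places operations on an available GPU automatically, so only the timing wrapper is needed to compare the two. A minimal, framework-free sketch of that timing pattern, using a NumPy matrix multiply as a stand-in workload for model.fit(...):

```python
import time
import numpy as np

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed_seconds) -
    the same pattern as the start = time.time() snippet above."""
    start = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - start

# Stand-in workload; in the real comparison this would be model.fit(...)
a = np.random.rand(200, 200)
result, seconds = timed(np.matmul, a, a)
print(f"elapsed: {seconds:.4f}s")
```

Running the identical training call once inside a TPUStrategy scope and once without it, each wrapped this way, gives the wall-clock comparison the excerpt is after.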

Reshape the input for BatchDataset trained model

Issue I trained my TensorFlow model on images after converting them to a BatchDataset:

    IMG_size = 224
    INPUT_SHAPE = [None, IMG_size, IMG_size, 3]  # 4D input
    model.fit(x=train_data,
              epochs=EPOCHES,
              validation_data=test_data,
              validation_freq=1,  # check validation metrics every epoch
              callbacks=[tensorboard, early_stopping])
    model.compile(
        loss=tf.keras.losses.CategoricalCrossentropy(),
        optimizer=tf.keras.optimizers.Adam(),

Continue reading
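The usual stumbling block with a model trained this way: it expects 4-D batches of shape [None, 224, 224, 3], so a single image of shape (224, 224, 3) must gain a leading batch axis before being passed to predict. A NumPy sketch, with a random array standing in for one preprocessed image:

```python
import numpy as np

IMG_size = 224
img = np.random.rand(IMG_size, IMG_size, 3).astype("float32")  # one preprocessed image

# The model was trained on 4-D batches [None, 224, 224, 3], so add a batch axis:
batch = np.expand_dims(img, axis=0)   # img[np.newaxis, ...] is equivalent
print(batch.shape)  # (1, 224, 224, 3)
```

The same reshaping is what tf.data does implicitly when .batch() is applied to a dataset, which is why the BatchDataset worked during training.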

Error while training my model with PyTorch: stack expects each tensor to be equal size

Issue I am using the MMSegmentation library to train my model for instance image segmentation. During training, I create the model (a Vision Transformer), and when I try to train it using this, I get this error:

    RuntimeError: Caught RuntimeError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
        data = fetcher.fetch(index)

Continue reading
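Background on this error: the DataLoader's default collation stacks the samples of a batch into one tensor, which fails the moment two samples have different spatial sizes. The fixes are either to make the data pipeline emit a fixed size (in MMSegmentation, via the resize/crop transforms in the config) or to supply a custom collate that pads to the batch maximum. A NumPy sketch of the padding idea (in PyTorch the equivalent function would be passed as collate_fn= to the DataLoader):

```python
import numpy as np

def pad_collate(samples):
    """Pad variable-sized (H, W) arrays to the batch max so they stack."""
    H = max(s.shape[0] for s in samples)
    W = max(s.shape[1] for s in samples)
    padded = [np.pad(s, ((0, H - s.shape[0]), (0, W - s.shape[1])))
              for s in samples]
    return np.stack(padded)   # now every sample has shape (H, W)

# Two "images" of different sizes -- np.stack alone would raise here.
batch = pad_collate([np.ones((4, 5)), np.ones((6, 3))])
print(batch.shape)  # (2, 6, 5)
```

For segmentation the padding must be applied consistently to images and label maps, which is why fixing the crop size in the pipeline config is usually the simpler route.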