I'm using a U-Net architecture for an image segmentation task. During training my images are 256×256, and the model works very well on images of the same size, or close to 256×256 (resized with torchvision's `Resize`). But when I use it on images with much bigger dimensions, the model seems not to work at all.
```python
# Resize the 1004x942 image to 256x256 and run the model on it
import cv2
import torch
from torchvision import transforms

part1_path = '/Users/akshitdhillon/Documents/M.Tech/Project/M.T.P._Colab/full_TEM_image_1.jpg'
part1 = cv2.imread(part1_path)

img_tensor = torch.from_numpy(part1).permute(2, 0, 1).float()  # 3 x H x W
resize_transform = transforms.Resize((256, 256))
img_resized = resize_transform(img_tensor)

logits_mask = model(img_resized.to(DEVICE).unsqueeze(0))
pred_mask = torch.sigmoid(logits_mask)
pred_mask = (pred_mask > 0.5) * 1.0

test_show(img_resized, pred_mask.detach().cpu().squeeze(0))
```
I get a very bad segmentation result.
Result for image of size 1004×994:
I have also tried cropping the bigger image into 8 parts; the model then segments each part accurately, but that is not what I want. What should I do? (A rough sketch of that crop-and-stitch approach is shown below.)
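For reference, this is roughly what the crop-and-stitch version looks like. It is only a sketch, not my exact code: the 256×256 tile size, the zero padding up to a multiple of 256, and the single-channel sigmoid output are assumptions based on my training setup, and `predict_tiled` is just an illustrative helper name.

```python
# Sketch of tiled inference: crop the large image into 256x256 tiles,
# run the model on each tile, and paste the per-tile masks back into a
# full-size canvas. Assumes a binary (single-channel, sigmoid) output.
import cv2
import torch
import torch.nn.functional as F

TILE = 256

def predict_tiled(model, image_path, device):
    img = cv2.imread(image_path)                         # H x W x 3 (BGR)
    x = torch.from_numpy(img).permute(2, 0, 1).float()   # 3 x H x W
    _, h, w = x.shape

    # Pad right/bottom so height and width are multiples of TILE.
    pad_h = (TILE - h % TILE) % TILE
    pad_w = (TILE - w % TILE) % TILE
    x = F.pad(x.unsqueeze(0), (0, pad_w, 0, pad_h))      # 1 x 3 x H' x W'

    full_mask = torch.zeros(1, 1, h + pad_h, w + pad_w)
    model.eval()
    with torch.no_grad():
        for top in range(0, h + pad_h, TILE):
            for left in range(0, w + pad_w, TILE):
                tile = x[:, :, top:top + TILE, left:left + TILE].to(device)
                logits = model(tile)
                pred = (torch.sigmoid(logits) > 0.5).float()
                full_mask[:, :, top:top + TILE, left:left + TILE] = pred.cpu()

    # Drop the padding so the mask matches the original image size.
    return full_mask[:, :, :h, :w]
```

With a 1004×942 image this padding scheme gives a 4×4 grid of tiles rather than the 8 crops I used, so the exact tiling differs, but the idea of segmenting patches and stitching the masks back together is the same.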