isl-org / PhotorealismEnhancement

Code & Data for Enhancing Photorealism Enhancement

MSeg difficulties #46

Closed char119 closed 1 year ago

char119 commented 1 year ago

Does anyone know either an alternative to MSeg, or how one might get MSeg to work on Windows 10? I tried installing it, but it gives an error saying it can only be installed on Linux or macOS. Thanks!

KacperKazan commented 1 year ago

Have you tried using WSL2 on Windows? For example, installing Ubuntu with WSL2. This guide may help with setting things up as well. I haven't tried doing this myself, but hopefully you can get it to work that way :)

char119 commented 1 year ago

That worked. Thanks mate! Last question for this thread: I've modified the paths in the csv files and all, but am getting this error:

```
File "/mnt/c/users/johns/downloads/photorealism/code/epe/dataset/utils.py", line 24
    for i, line in enumerate(file):
IndentationError: unexpected indent
```

Any ideas?

KacperKazan commented 1 year ago

I had the same error. Make sure there aren't spaces before or after the commas in your .csv (and maybe also no spaces at the beginning or end of each line).
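
If it helps, here's a quick untested sketch that strips that kind of stray whitespace from a filelist csv (the filenames in the usage comment are just examples):

```python
from pathlib import Path

def strip_csv_whitespace(src, dst):
    # Rejoin each line with bare commas, trimming whitespace around every entry
    # and at both ends of the line.
    lines = Path(src).read_text().splitlines()
    cleaned = [','.join(p.strip() for p in line.strip().split(',')) for line in lines]
    Path(dst).write_text('\n'.join(cleaned) + '\n')

# Example (placeholder filenames):
# strip_csv_whitespace('pfd_files.csv', 'pfd_files_clean.csv')
```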

char119 commented 1 year ago

GitHub is such a helpful community! Thanks so much!

char119 commented 1 year ago

Here's my modified `epe/dataset/utils.py` (the file the traceback points at):

```python
import csv
import logging
from pathlib import Path

import numpy as np
from skimage.transform import rescale
import torch
from tqdm import tqdm

logger = logging.getLogger('epe.dataset.utils')


def read_filelist(path_to_filelist, num_expected_entries_per_row, check_if_exists=True):
    """ Loads a file with paths to multiple files per row.

    path_to_filelist -- path to text file
    num_expected_entries_per_row -- number of expected entries per row.
    check_if_exists -- checks each path.
    """

    paths = []
    num_skipped = 0
    with open('C://users//johns//downloads//dataset//training//v2.0//labels//filenames.csv') as file: 'filenames.csv'
    continue
         for i, line in enumerate(file):  # <- the IndentationError is raised here
            t = line.strip().split(',')
            assert len(t) >= num_expected_entries_per_row, \
                f'Expected at least {num_expected_entries_per_row} entries per line. Got {len(t)} instead in line {i} of {path_to_filelist}.'

            ps = [Path(p) for p in t[:num_expected_entries_per_row]]

            if check_if_exists:
                skip = [p for p in ps if not p.exists()]
                if skip:
                    num_skipped += 1
                    # logger.warn(f'Skipping {i}: {skip[0]} does not exist.')
                    continue
                    # assert p.exists(), f'Path {p} does not exist.'
                    pass
                pass

            paths.append(tuple(ps))
            pass
        pass

    if num_skipped > 0:
        logger.warn(f'Skipped {num_skipped} entries since at least one file was missing.')

    return paths


def load_crops(path):
    """ Load crop info from a csv file.

    The file is expected to have columns path,r0,r1,c0,c1
    path -- Path to image
    r0 -- top y coordinate
    r1 -- bottom y coordinate
    c0 -- left x coordinate
    c1 -- right x coordinate
    """

    path = Path(path)

    if not path.exists():
        logger.warn(f'Failed to load crops from {path} because it does not exist.')
        return []

    crops = []
    with open(path) as file:
        reader = csv.DictReader(file)
        for row in tqdm(reader):
            crops.append((row['path'], int(row['r0']), int(row['r1']), int(row['c0']), int(row['c1'])))
            pass
        pass

    logger.debug(f'Loaded {len(crops)} crops.')
    return crops


def mat2tensor(mat):
    t = torch.from_numpy(mat).float()
    if mat.ndim == 2:
        return t.unsqueeze(2).permute(2, 0, 1)
    elif mat.ndim == 3:
        return t.permute(2, 0, 1)


def normalize_dim(a, d):
    """ Normalize a along dimension d."""
    return a.mul(a.pow(2).sum(dim=d, keepdim=True).clamp(min=0.00001).rsqrt())


def transform_identity(img):
    return img


def make_scale_transform(scale):
    return lambda img: rescale(img, scale, preserve_range=True, anti_aliasing=True, multichannel=True)


def make_scale_transform_w(target_width):
    return lambda img: rescale(img, float(target_width) / img.shape[1], preserve_range=True, anti_aliasing=True, multichannel=True)


def make_scale_transform_h(target_height):
    return lambda img: rescale(img, float(target_height) / img.shape[0], preserve_range=True, anti_aliasing=True, multichannel=True)
```


TL;DR: I'm getting a strange error in Python:

```
(epe) C:\Users\johns\Downloads\photorealism\code>python epe/matching/feature_based/collect_crops.py PfD pfd_files.csv
Traceback (most recent call last):
  File "epe/matching/feature_based/collect_crops.py", line 9, in <module>
    from epe.dataset import ImageBatch, ImageDataset
  File "c:\users\johns\downloads\photorealism\code\epe\dataset\__init__.py", line 2, in <module>
    from .pfd import PfDDataset
  File "c:\users\johns\downloads\photorealism\code\epe\dataset\pfd.py", line 12, in <module>
    from .utils import mat2tensor, normalize_dim
  File "c:\users\johns\downloads\photorealism\code\epe\dataset\utils.py", line 25
    for i, line in enumerate(file):
    ^
IndentationError: unexpected indent
```

Why am I getting said error? Thanks!

Charles1192 commented 1 year ago

Hey Char, I'm having a similar error as well. Have you found a solution yet?

char119 commented 1 year ago

Sorry man, not yet.

EyalMichaeli commented 1 year ago

I didn't encounter this error, but it does sound like something simple, such as an extra space. Anyhow, I've prepared this function to create the csv file; perhaps it will help. You need to run it once for each dataset (that is, source and target). Note: my code is a little different, so I tweaked it to be more general. Hope it still runs (and if not, adjust it for your use case). A usage example follows the function.

```python
import os
from pathlib import Path

import pandas as pd


def create_epe_csv_file(root_path: str, output_csv_path: str, max_images: int = -1, dataset='source') -> None:
    """ Creates the epe csv file, according to the epe pipeline.

    :param root_path: root path of the dataset, where all the folders are, such as ltm, depth, mseg labels, etc.
    :param output_csv_path: csv path string; can be a path to an old csv file, in which case it will append to it
    :param max_images: max images to take from the NEW dataset
    :param dataset: source or target; automatically picks the right columns and folders
    :return:
    """
    output_csv_path = Path(output_csv_path)
    append = True if output_csv_path.exists() else False

    root_path = Path(root_path)

    if dataset.lower() == 'source':
        df = pd.DataFrame({'images': [], 'mseg': [], 'gbuffer': [], 'gt_label': []})

        files_names = os.listdir(root_path / 'ltm')
        df['images'] = [root_path / 'ltm' / file_name for file_name in files_names]
        df['mseg'] = [root_path / 'mseg_labels' / 'gray' / file_name for file_name in files_names]
        df['gbuffer'] = [root_path / 'depth' / file_name for file_name in files_names]
        df['gt_label'] = [root_path / 'semantic_segmentation' / file_name for file_name in files_names]

        if append:
            df_old = pd.read_csv(output_csv_path, header=None)
            df_old.columns = ["images", "mseg", "gbuffer", "gt_label"]
            df = pd.concat((df, df_old), axis=0, ignore_index=True)

    elif dataset.lower() == "target":
        df = pd.DataFrame({'images': [], 'mseg': []})

        files_names = os.listdir(root_path / 'images')
        df['images'] = [root_path / 'images' / file_name for file_name in files_names]
        df['mseg'] = [root_path / 'mseg_labels' / 'gray' / file_name for file_name in files_names]

        if append:
            df_old = pd.read_csv(output_csv_path, header=None)
            df_old.columns = ["images", "mseg"]
            df = pd.concat((df, df_old), axis=0, ignore_index=True)

    else:
        print(f"Dataset: {dataset} is not supported. Choose dataset from: 'source', 'target'")
        raise NotImplementedError

    if max_images > 0:
        df = df.iloc[:max_images]

    df.to_csv(path_or_buf=output_csv_path, header=False, index=False)
    print("Done")
```
srrichter commented 1 year ago

There might be tabs and spaces mixed up in the file. This commonly leads to indentation errors in Python. I recommend converting everything to either tabs or spaces.
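
A minimal sketch for spotting the offending lines (untested; pass it the path to your utils.py):

```python
import sys

def find_mixed_indentation(path):
    # Flag any line whose leading whitespace contains both tabs and spaces.
    with open(path) as f:
        for i, line in enumerate(f, start=1):
            indent = line[:len(line) - len(line.lstrip())]
            if ' ' in indent and '\t' in indent:
                print(f'line {i}: mixed tabs and spaces in indentation')

if __name__ == '__main__':
    find_mixed_indentation(sys.argv[1])
```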