allenai / embodied-clip

Official codebase for EmbCLIP
https://arxiv.org/abs/2111.09888
Apache License 2.0

zero shot object nav #15

Closed: leyuan-sun closed this issue 7 months ago

leyuan-sun commented 7 months ago

I would like to test zero-shot object navigation (ZSON). Where should I put the following code you provided in order to evaluate the pre-trained model?

import json

SEEN_OBJECTS = ["AlarmClock", "BaseballBat", "Bowl", "GarbageCan", "Laptop", "Mug", "SprayBottle", "Vase"]
UNSEEN_OBJECTS = ["Apple", "BasketBall", "HousePlant", "Television"]

def compute_scores(metrics_file, obj_type='Apple'):
    ''' Compute the average success and SPL for episodes of a single object type. '''

    with open(metrics_file, 'r') as f:
        metrics = json.load(f)

    # Keep only the episodes whose target matches the requested object type.
    episodes = [ep for ep in metrics[0]['tasks'] if ep['task_info']['object_type'] == obj_type]

    success = [ep['success'] for ep in episodes]
    success = sum(success) / len(success)

    spl = [ep['spl'] for ep in episodes]
    spl = sum(spl) / len(spl)

    return success, spl
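
(For reference, a minimal sketch of how this helper could be driven over the seen and unseen object lists; the metrics file path below is only a placeholder for whatever the evaluation run actually produces.)

# Sketch only: 'path/to/metrics.json' is a placeholder for the real metrics file.
for split_name, objects in [("seen", SEEN_OBJECTS), ("unseen", UNSEEN_OBJECTS)]:
    per_object = [compute_scores('path/to/metrics.json', obj_type=obj) for obj in objects]
    avg_success = sum(s for s, _ in per_object) / len(per_object)
    avg_spl = sum(p for _, p in per_object) / len(per_object)
    print(f"{split_name}: success={avg_success:.3f}, spl={avg_spl:.3f}")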
apoorvkh commented 7 months ago

Hi,

I believe the following command (on the zeroshot-objectnav branch) will generate a metrics file in the specified output directory (storage/embclip-zeroshot). You can pass that metrics file's path into the function you pasted above; it will parse the output and give you the success and SPL metrics.

export CKPT_PATH=path/to/model.pt

PYTHONPATH=. python allenact/main.py -o storage/embclip-zeroshot -c $CKPT_PATH -b projects/objectnav_baselines/experiments/robothor/clip zeroshot_objectnav_robothor_rgb_clipresnet50gru_ddppo_eval --eval
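
As a rough sketch, you could then feed the resulting file into your function. The exact filename and directory layout of the metrics JSON under storage/embclip-zeroshot depend on the run, so the glob below is just one way to locate it:

import glob, os

# Find the metrics JSON written by the evaluation run; since the exact filename
# depends on the run, pick the most recently modified .json under the output dir.
metrics_file = max(glob.glob('storage/embclip-zeroshot/**/*.json', recursive=True),
                   key=os.path.getmtime)

success, spl = compute_scores(metrics_file, obj_type='Apple')
print(success, spl)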

Feel free to re-open this if it does not solve your issues.