# Object Goal Navigation using Goal-Oriented Semantic Exploration
This is a PyTorch implementation of the NeurIPS-20 paper:

[Object Goal Navigation using Goal-Oriented Semantic Exploration](https://arxiv.org/pdf/2007.00643.pdf)<br />
Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta, Ruslan Salakhutdinov<br />
Carnegie Mellon University, Facebook AI Research

Winner of the [CVPR 2020 Habitat ObjectNav Challenge](https://aihabitat.org/challenge/2020/).

Project Website: https://devendrachaplot.github.io/projects/semantic-exploration

![example](./docs/example.gif)

### Overview:
The Goal-Oriented Semantic Exploration (SemExp) model consists of three modules: a Semantic Mapping Module, a Goal-Oriented Semantic Policy, and a deterministic Local Policy.
As shown below, the Semantic Mapping module builds a semantic map over time. The Goal-Oriented Semantic Policy selects a long-term goal on the semantic map so as to reach the given object goal efficiently. A deterministic Local Policy based on analytical planners takes low-level navigation actions to reach the long-term goal.

![overview](./docs/overview.jpg)

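The interaction between the three modules can be sketched schematically as below. The interfaces here are hypothetical, for illustration only; the actual module implementations live in this repository's model code.

```python
# Schematic of the SemExp decision loop (hypothetical interfaces for
# illustration; the real modules live in this repository's model code).
def semexp_step(obs, goal_object, semantic_map, mapper, policy, planner):
    # 1. Semantic Mapping Module: update the map with the new observation.
    semantic_map = mapper.update(semantic_map, obs)
    # 2. Goal-Oriented Semantic Policy: pick a long-term goal on the map.
    long_term_goal = policy.select_goal(semantic_map, goal_object)
    # 3. Deterministic Local Policy: plan a low-level action toward the goal.
    action = planner.plan(semantic_map, long_term_goal)
    return action, semantic_map
```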
### This repository contains:
- Episode train and test datasets for the [Object Goal Navigation](https://arxiv.org/pdf/2007.00643.pdf) task on the Gibson dataset in the Habitat simulator.
- Code to train and evaluate the Semantic Exploration (SemExp) model on the Object Goal Navigation task.
- A pretrained SemExp model.

## Installing Dependencies
We use earlier versions of [habitat-sim](https://github.com/facebookresearch/habitat-sim) and [habitat-lab](https://github.com/facebookresearch/habitat-lab), as specified below.

Installing habitat-sim:
```
git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim; git checkout tags/v0.1.5;
pip install -r requirements.txt;
python setup.py install --headless
python setup.py install # (for Mac OS)
```

Installing habitat-lab:
```
git clone https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab; git checkout tags/v0.1.5;
pip install -e .
```
Check the habitat installation by running `python examples/benchmark.py` in the habitat-lab folder.

- Install [PyTorch](https://pytorch.org/) according to your system configuration. The code is tested with PyTorch v1.6.0 and cudatoolkit v10.2. If you are using conda:
```
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch #(Linux with GPU)
conda install pytorch==1.6.0 torchvision==0.7.0 -c pytorch #(Mac OS)
```

- Install [detectron2](https://github.com/facebookresearch/detectron2/) according to your system configuration. If you are using conda:
```
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.6/index.html #(Linux with GPU)
CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" python -m pip install 'git+https://github.com/facebookresearch/detectron2.git' #(Mac OS)
```

### Docker and Singularity images:
We provide experimental [Docker](https://www.docker.com/) and [Singularity](https://sylabs.io/) images with all the dependencies installed; see the [Docker Instructions](./docs/DOCKER_INSTRUCTIONS.md).


## Setup
Clone the repository and install the remaining requirements:
```
git clone https://github.com/devendrachaplot/Object-Goal-Navigation/
cd Object-Goal-Navigation/;
pip install -r requirements.txt
```

### Downloading the scene dataset
- Download the Gibson dataset using the instructions here: https://github.com/facebookresearch/habitat-lab#scenes-datasets (download the 11GB file `gibson_habitat_trainval.zip`).
- Move the Gibson scene dataset to, or create a symlink at, `data/scene_datasets/gibson_semantic`.

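For example, assuming the scenes were extracted to `~/datasets/gibson_habitat` (a hypothetical location; substitute wherever you unzipped them), the symlink can be created from the repository root with:

```shell
# Link the extracted Gibson scenes into the expected location.
# ~/datasets/gibson_habitat is an assumed path; change it to your own.
mkdir -p data/scene_datasets
ln -sfn ~/datasets/gibson_habitat data/scene_datasets/gibson_semantic
```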
### Downloading the episode dataset
- Download the episode dataset:
```
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=1tslnZAkH8m3V5nP8pbtBmaR2XEfr8Rau' -O objectnav_gibson_v1.1.zip
```
- Unzip the dataset into `data/datasets/objectnav/gibson/v1.1/`.

### Setting up datasets
The code requires the datasets to be in a `data` folder in the following format (same as habitat-lab):
```
Object-Goal-Navigation/
  data/
    scene_datasets/
      gibson_semantic/
        Adrian.glb
        Adrian.navmesh
        ...
    datasets/
      objectnav/
        gibson/
          v1.1/
            train/
            val/
```
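To sanity-check this layout before running anything, a minimal stdlib sketch can walk the expected paths (the `missing_paths` helper is hypothetical, not part of this repository):

```python
from pathlib import Path

# Relative paths the layout above expects under the repository root
# (hypothetical helper; adjust the list if your dataset version differs).
EXPECTED = [
    "data/scene_datasets/gibson_semantic",
    "data/datasets/objectnav/gibson/v1.1/train",
    "data/datasets/objectnav/gibson/v1.1/val",
]

def missing_paths(root="."):
    """Return the expected dataset paths that do not exist under root."""
    return [p for p in EXPECTED if not (Path(root) / p).exists()]

if __name__ == "__main__":
    missing = missing_paths()
    print("Missing paths:", missing or "none -- layout looks correct")
```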


### Test setup
To verify that the data is set up correctly, run:
```
python test.py --agent random -n1 --num_eval_episodes 1 --auto_gpu_config 0
```

## Usage

### Training:
To train the SemExp model on the Object Goal Navigation task:
```
python main.py
```

### Downloading pre-trained models
```
mkdir pretrained_models;
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=171ZA7XNu5vi3XLpuKs8DuGGZrYyuSjL0' -O pretrained_models/sem_exp.pth
```

### Evaluation:
To evaluate the pre-trained model:
```
python main.py --split val --eval 1 --load pretrained_models/sem_exp.pth
```

To visualize the agent observations and the predicted semantic map, add `-v 1` as an argument to the above command.

The pre-trained model should achieve 0.657 Success, 0.339 SPL (Success weighted by Path Length), and 1.474 DTG (Distance To Goal).
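SPL weights each success by the ratio of the shortest-path length to the length of the path the agent actually took, averaged over all episodes. A minimal sketch of that computation (illustrative of the standard ObjectNav metric, not this repository's evaluation code):

```python
def spl(episodes):
    """Success weighted by Path Length over a list of episodes.

    Each episode is a (success, shortest_path_len, taken_path_len) tuple.
    Failed episodes contribute 0; successes contribute shortest / max(paths).
    """
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            total += shortest / max(shortest, taken)
    return total / len(episodes)
```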

For more detailed instructions, see [INSTRUCTIONS](./docs/INSTRUCTIONS.md).


## Cite as
>Chaplot, D.S., Gandhi, D., Gupta, A. and Salakhutdinov, R., 2020. Object Goal Navigation using Goal-Oriented Semantic Exploration. In Neural Information Processing Systems (NeurIPS-20). ([PDF](https://arxiv.org/pdf/2007.00643.pdf))

### Bibtex:
```
@inproceedings{chaplot2020object,
  title={Object Goal Navigation using Goal-Oriented Semantic Exploration},
  author={Chaplot, Devendra Singh and Gandhi, Dhiraj and
          Gupta, Abhinav and Salakhutdinov, Ruslan},
  booktitle={Neural Information Processing Systems (NeurIPS)},
  year={2020}
}
```

## Related Projects
- This project builds on the [Active Neural SLAM](https://devendrachaplot.github.io/projects/Neural-SLAM) paper. The code and pretrained models for the Active Neural SLAM system are available at
https://github.com/devendrachaplot/Neural-SLAM.
- The Semantic Mapping module is similar to the one used in [Semantic Curiosity](https://devendrachaplot.github.io/projects/SemanticCuriosity).

## Acknowledgements
This repository uses the [Habitat Lab](https://github.com/facebookresearch/habitat-lab) implementation for running the RL environment.
The implementation of PPO is borrowed from [ikostrikov/pytorch-a2c-ppo-acktr-gail](https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/).
The Mask R-CNN implementation is based on the [detectron2](https://github.com/facebookresearch/detectron2/) repository. We would also like to thank Shubham Tulsiani and Saurabh Gupta for their help in implementing some parts of the code.