Commit f159e55 (parent: afadecf): readme

1 file changed: README.md (4 additions, 4 deletions)
@@ -151,7 +151,7 @@ The `run.py` command requires the following arguments:

* `--epochs`: number of epochs to train; `--epochs 0` means running zero-shot inference.
* `--bpe`: batches per epoch (replaces the length of the dataloader as the default value). `--bpe 100 --epochs 10` means that each epoch consists of 100 batches, so overall training is 1000 batches. Set `--bpe null` to use the full-length dataloader, or comment out the `bpe` line in the yaml configs.
* `--gpus`: number of GPU devices; set `--gpus null` when running on CPUs, `--gpus [0]` for a single GPU, or otherwise set the number of GPUs for a [distributed setup](#distributed-setup).
-* `--ckpt`: path to one of the ULTRA checkpoints to use (you can use those provided in the repo or trained on your own). Use `--ckpt null` to start training from scratch (or run zero-shot inference on a randomly initialized model; it still might surprise you and demonstrate non-zero performance).
+* `--ckpt`: **full** path to one of the ULTRA checkpoints to use (you can use those provided in the repo or trained on your own). Use `--ckpt null` to start training from scratch (or run zero-shot inference on a randomly initialized model; it still might surprise you and demonstrate non-zero performance).

Zero-shot inference setup is `--epochs 0` with a given checkpoint `ckpt`.
@@ -161,12 +161,12 @@ Fine-tuning of a checkpoint is when epochs > 0 with a given checkpoint.

An example command for an inductive dataset to run on a CPU:

```bash
-python script/run.py -c config/inductive/inference.yaml --dataset FB15k237Inductive --version v1 --epochs 0 --bpe null --gpus null --ckpt ckpts/ultra_4g.pth
+python script/run.py -c config/inductive/inference.yaml --dataset FB15k237Inductive --version v1 --epochs 0 --bpe null --gpus null --ckpt /path/to/ultra/ckpts/ultra_4g.pth
```

An example command for a transductive dataset to run on a GPU:

```bash
-python script/run.py -c config/transductive/inference.yaml --dataset CoDExSmall --epochs 0 --bpe null --gpus [0] --ckpt ckpts/ultra_4g.pth
+python script/run.py -c config/transductive/inference.yaml --dataset CoDExSmall --epochs 0 --bpe null --gpus [0] --ckpt /path/to/ultra/ckpts/ultra_4g.pth
```

### Run on many datasets
@@ -176,7 +176,7 @@ Using the same config files, you only need to specify:

* `-c <yaml config>`: use the full path to the yaml config because the workdir will be reset after each dataset;
* `-d, --datasets`: a comma-separated list of [datasets](#datasets) to run; inductive datasets use the `name:version` convention. For example, `-d ILPC2022:small,ILPC2022:large`;
-* `--ckpt`: ULTRA checkpoint to run the experiments on, use the full path to the file;
+* `--ckpt`: ULTRA checkpoint to run the experiments on, use the **full** path to the file;
* `--gpus`: the same as in [run single](#run-a-single-experiment);
* `-reps` (optional): number of repeats with different seeds, set by default to 1 for zero-shot inference;
* `-ft, --finetune` (optional): use the finetuning configs of ULTRA (`default_finetuning_config`) to fine-tune a given checkpoint for the specified `epochs` and `bpe`;
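Putting these flags together, a multi-dataset zero-shot run might look like the following sketch. The entry-point name `script/run_many.py`, the dataset pair, and the paths are assumptions for illustration; check the repo for the actual script and your local paths:

```shell
python script/run_many.py -c /full/path/to/config/transductive/inference.yaml \
  -d CoDExSmall,WN18RR --gpus [0] -reps 1 \
  --ckpt /full/path/to/ckpts/ultra_4g.pth
```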
