[feat] Resume from ckpt #135
Draft
kevssim wants to merge 53 commits into modelscope:main from kevssim:resume_from_ckpt
Changes from 31 of 53 commits
Commits (all by kevssim):

5cd3c0f  docs: add transformers resume design spec
91eeaeb  docs: refine transformers resume design spec
6eebda8  docs: trim resume state fields
cdd9c1b  docs: add npu resume compatibility requirements
1542492  chore: ignore local worktrees
9883118  wip
d41a634  wip
21f9918  wip
1e59531  fix
9bb3f39  wip
fdf1f71  fix
6cf5160  wip
144ffe6  Merge branch 'modelscope:main' into resume_from_ckpt
e21f870  lint
3359209  Merge branch 'resume_from_ckpt' of https://github.com/kevssim/twinkle…
70ebe50  wip
483778d  wip
039789b  wip
54de1a4  wip
920ab86  wip
ffd6304  lint
582bd41  wip
9cb6106  wip
c0cf72e  wip
505a75c  wip
a222b5b  fix
7499e00  wip
cd0b094  doc
abf2c2f  wip
8bf7a6a  lint
27e76c6  Merge remote-tracking branch 'origin/main' into resume_from_ckpt
5d68910  Merge remote-tracking branch 'origin' into resume_from_ckpt
9326e64  wip
670f0c1  feat: add resume_from_checkpoint abstract method to TwinkleModel base
784730c  feat(dataloader): add resume_from_checkpoint wrapping skip_consumed_s…
3db38e9  feat(transformers): replace load_training_state/read_training_progres…
94679d5  feat(megatron): add resume_from_checkpoint and save trainer_state.json
832ce87  refactor(cookbook): use model.resume_from_checkpoint API
e3a3cd6  feat(types): replace training state request types with ResumeFromChec…
a3effab  feat(server): replace training state endpoints with /resume_from_chec…
383336d  feat(client): replace training state methods with resume_from_checkpoint
54a1db6  docs: update checkpoint/resume documentation for unified API
597cbd9  fix: remove stale load_training_state references from __init__.py, mu…
c55ab9f  fix(transformers): pass correct file paths to _load_scaler_state and …
8f76b7b  fix: guard rng_state.pt existence check, add Config extra=allow to Re…
4ffa5c7  wip
0b43055  wip
c8bc9ab  wip
8c0399e  wip
94af275  Merge remote-tracking branch 'origin/main' into resume_from_ckpt
10b4a20  refactor: delete resume_utils.py, inline logic in fsdp2.py, update docs
3df191a  wip
deeb648  Merge remote-tracking branch 'origin/main' into resume_from_ckpt
New file (55 lines added):

```python
from pathlib import Path
from typing import Any, Optional

from twinkle import get_logger

logger = get_logger()


def _build_model_kwargs(adapter_name: str) -> dict:
    if not adapter_name:
        return {}
    return {'adapter_name': adapter_name}


def resume_from_checkpoint(
        model: Any,
        dataloader: Any,
        checkpoint_path: Path,
        *,
        resume_only_model: bool,
        ignore_data_skip: bool,
        adapter_name: Optional[str] = None) -> int:
    adapter_name = adapter_name or ''
    checkpoint_dir = str(checkpoint_path)
    model_kwargs = _build_model_kwargs(adapter_name)
    if model_kwargs:
        # Load adapter checkpoint.
        model.load(
            name=checkpoint_path.name,
            output_dir=str(checkpoint_path.parent),
            **model_kwargs,
        )

    if resume_only_model:
        # Only load model weights, optionally skip data.
        if ignore_data_skip:
            logger.info('Resumed weights only and restarted progress from step 0.')
            return 0
        progress = model.read_training_progress(checkpoint_dir, **model_kwargs)
        # Skip consumed samples in dataloader and move optimizer to the right step.
        consumed_train_samples = int(progress['consumed_train_samples'])
        dataloader.skip_consumed_samples(consumed_train_samples)
        optimizer_group = model.optimizer_group[adapter_name]
        optimizer_group.cur_step = progress['cur_step']
        optimizer_group.gradient_accumulation_steps = progress['gradient_accumulation_steps']
        logger.info(f'Skipped {consumed_train_samples} consumed samples.')
        return consumed_train_samples

    # Load full training state, including model weights, optimizer states, and training progress.
    trainer_state = model.load_training_state(checkpoint_dir, **model_kwargs)
    consumed_train_samples = int(trainer_state['consumed_train_samples'])
    dataloader.skip_consumed_samples(consumed_train_samples)
    logger.info(f'Restored full training state from step {trainer_state["cur_step"]}.')
    return consumed_train_samples
```
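As a rough illustration of the helper's three resume modes, the sketch below exercises a condensed copy of its control flow (adapter loading omitted) against stub `StubModel`/`StubDataloader` classes. The stubs, the checkpoint path, and the returned progress values are all hypothetical stand-ins, not Twinkle's real objects:

```python
from pathlib import Path
from types import SimpleNamespace


def resume_from_checkpoint(model, dataloader, checkpoint_path, *,
                           resume_only_model, ignore_data_skip,
                           adapter_name=None):
    # Condensed copy of the PR helper's control flow (no adapter loading).
    adapter_name = adapter_name or ''
    checkpoint_dir = str(checkpoint_path)
    if resume_only_model:
        if ignore_data_skip:
            return 0  # weights only; data progress restarts from step 0
        # Weights only, but fast-forward dataloader and optimizer counters.
        progress = model.read_training_progress(checkpoint_dir)
        consumed = int(progress['consumed_train_samples'])
        dataloader.skip_consumed_samples(consumed)
        og = model.optimizer_group[adapter_name]
        og.cur_step = progress['cur_step']
        og.gradient_accumulation_steps = progress['gradient_accumulation_steps']
        return consumed
    # Full training state: weights, optimizer, and progress together.
    state = model.load_training_state(checkpoint_dir)
    consumed = int(state['consumed_train_samples'])
    dataloader.skip_consumed_samples(consumed)
    return consumed


class StubModel:
    """Hypothetical stand-in for a Twinkle model; records which loader ran."""

    def __init__(self):
        self.calls = []
        self.optimizer_group = {
            '': SimpleNamespace(cur_step=0, gradient_accumulation_steps=1)}

    def read_training_progress(self, checkpoint_dir, **kwargs):
        self.calls.append('read_training_progress')
        return {'consumed_train_samples': 128, 'cur_step': 4,
                'gradient_accumulation_steps': 2}

    def load_training_state(self, checkpoint_dir, **kwargs):
        self.calls.append('load_training_state')
        return {'consumed_train_samples': 128, 'cur_step': 4}


class StubDataloader:
    def __init__(self):
        self.skipped = 0

    def skip_consumed_samples(self, n):
        self.skipped = n


ckpt = Path('/tmp/checkpoint-4')  # hypothetical checkpoint directory

# Mode 1: weights only, data restarts from scratch.
m, d = StubModel(), StubDataloader()
assert resume_from_checkpoint(m, d, ckpt, resume_only_model=True,
                              ignore_data_skip=True) == 0

# Mode 2: weights only, but skip already-consumed samples.
m, d = StubModel(), StubDataloader()
n = resume_from_checkpoint(m, d, ckpt, resume_only_model=True,
                           ignore_data_skip=False)
print(n, d.skipped, m.calls)  # 128 128 ['read_training_progress']

# Mode 3: full training state.
m, d = StubModel(), StubDataloader()
n = resume_from_checkpoint(m, d, ckpt, resume_only_model=False,
                           ignore_data_skip=False)
print(n, m.calls)  # 128 ['load_training_state']
```

The split mirrors the review question below: mode 2 only needs the lightweight progress read, while mode 3 pays for a full state load.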
Review comment (translated from Chinese): What is the difference between `load_training_state` and `read_training_progress`? Could they be merged into one?