huangting4201
04c02a61b2
fix(ci): fix train error ( #228 )
...
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-08-24 17:11:32 +08:00
Guoteng
7c820cfa40
feat(init): add skip args check flag and add zero overlap flag ( #222 )
...
* feat(init): add skip args check flag
* fix(optim): add param overlap enable flag
2023-08-24 16:44:18 +08:00
ytxiong
9cd1e0314e
fix(pipeline): modify the sequence_parallel in pipeline ( #227 )
...
* move sequence_parallel to parallel config
* set the sequence_parallel default value to False
* fix lint
* fix lint
* fix lint
* modify the sequence_parallel in pp
2023-08-24 14:45:40 +08:00
huangting4201
9eec3d9465
fix(conflicts): merge main to develop
2023-08-24 14:26:10 +08:00
ytxiong
eee93b5a68
test(model): support fp32 with flash_attn ( #223 )
...
* support tf32 with flash
* move autocast to attention
* fix lint
* fix lint
* fix lint
* fix lint
* fix some bugs in model
* modify the convert dtype
2023-08-24 13:54:44 +08:00
huangting4201
fd28bcab58
feat(data/utils.py): add new dataset type code for streaming dataset ( #225 )
2023-08-24 13:46:18 +08:00
huangting4201
94b2aa28fc
Feat/example training internlm ( #212 )
...
* feat(train/training_internlm.py): move common init funcs to internlm/train
* feat(train/training_internlm.py): update some public funcs
* feat(train/training_internlm.py): update some public funcs
* feat(evaluation.py): adapt evaluate to streaming dataset
* feat(train/training_internlm.py): minor update based on comments
* fix(training_internlm.py): set train dataloader persistent_workers true only when num_worker>0
* fix(training_internlm.py): fix demo error
2023-08-24 10:00:15 +08:00
ytxiong
a017cab4b3
fix(*): move sequence_parallel to parallel config ( #224 )
...
* move sequence_parallel to parallel config
* set the sequence_parallel default value to False
* fix lint
* fix lint
* fix lint
2023-08-24 09:49:04 +08:00
Sun Peng
32664328e7
Feat/overlap_bcast_forward ( #218 )
...
* feat/support bcast forward overlap
* feat/optimize the bcast call
* feat/optimize the bcast call
* feat/optimize the bcast call
* fix lint
* fix lint
* fix lint
* fix lint
* add torch.cuda.synchronize in save_checkpoint
---------
Co-authored-by: sunpeng <sunpengsdu@gmail.com>
2023-08-23 16:59:59 +08:00
cx
a48210f1f3
feat(memory_profiler): improve memory profiler ( #217 )
2023-08-23 14:18:33 +08:00
Guoteng
29779c75f0
feat(ckpt): add auto ckpt load and signal quit ( #216 )
...
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-23 14:17:45 +08:00
loveSnowBest
e1cefaef6b
fix huggingface link ( #219 )
2023-08-22 22:20:01 +08:00
Lyu Han
716131e477
introduce how to deploy 4-bit quantized internlm model ( #207 )
2023-08-22 11:31:01 +08:00
Kai Chen
075648cd70
update readme related to internlm-chat-7b-v1.1 ( #214 )
2023-08-22 08:08:44 +08:00
Wenwei Zhang
58108413bd
Update readme for news of InternLM-Chat-7B-v1.1 and Lagent ( #213 )
...
* update readme
* fix typo
2023-08-22 07:46:01 +08:00
kkscilife
cc3c48ae47
test(ci_scripts): add load ckpt cases ( #208 )
...
* fix format
* add scripts for load ckpt case
* update test config
* debug:use var in json
* fix syntax error
* export pythonpath
* use absolute path
* use father path of workspace
* debug load new ckpt
* change data path
* add train folder
* fix code format
* fix pylint warning
---------
Co-authored-by: wangmengke <wangmengke@pjlab.org.cn>
2023-08-21 15:24:43 +08:00
huangting4201
53648dc0e9
feat(train.py): support torch profiler ( #201 )
...
* feat(train.py): support torch profiling
* feat(train.py): optimize initialize_llm_profile
* feat(train.py): profiling with tp0 and dp0
* move sequence parallel context manager to evaluation func
* fix lint
* move the process for type_ids to load_new_batch
* fix lint
---------
Co-authored-by: yingtongxiong <974106207@qq.com>
2023-08-21 15:23:38 +08:00
huangting4201
4832671abe
fix(pipeline_scheduler.py): fix tensor shape err and comm block ( #210 )
2023-08-21 12:09:27 +08:00
huangting4201
f5f5446560
Merge main to develop ( #203 )
...
* fix/fix_submodule_err (#61 )
* fix/fix_submodule_err
---------
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
* fix issue templates (#65 )
* fix(tokenizer): refactor tokenizer and update usage in readme (#51 )
* update tokenizer example
* fix(readme, requirements): fix typo at Chinese readme and select a lower version of transformers (#73 )
* fix a typo in readme
* in order to find InternLMTokenizer, select a lower version of Transformers
---------
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
* [Doc] Add wechat and discord link in readme (#78 )
* Doc:add wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* [Docs]: add Japanese README (#43 )
* Add Japanese README
* Update README-ja-JP.md
replace message
* Update README-ja-JP.md
* add repetition_penalty in GenerationConfig in web_demo.py (#48 )
Co-authored-by: YWMditto <862779238@qq.com>
* use fp16 in instruction (#80 )
* [Enhancement] add more options for issue template (#77 )
* [Enhancement] add more options for issue template
* update question icon
* fix link
* Use tempfile for convert2hf.py (#23 )
Fix https://github.com/InternLM/InternLM/issues/50
* delete torch_dtype of README's example code (#100 )
* set the value of repetition_penalty to 1.0 to avoid random outputs (#99 )
* Update web_demo.py (#97 )
Remove meaningless log.
* [Fix]Fix wrong string cutoff in the script for sft text tokenizing (#106 )
* docs(install.md): update dependency package transformers version to >= 4.28.0 (#124 )
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
* docs(LICENSE): add license (#125 )
* add license of colossalai and flash-attn
* fix lint
* modify the name
* fix AutoModel map in convert2hf.py (#116 )
* variables are not printed as expected (#114 )
* feat(solver): fix code to adapt to torch2.0 and provide docker images (#128 )
* feat(solver): fix code to adapt to torch2.0
* docs(install.md): publish internlm environment image
* docs(install.md): update dependency packages version
* docs(install.md): update default image
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
* add demo test (#132 )
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
* fix web_demo cache accelerate (#133 )
* Doc: add twitter link (#141 )
* Feat add checkpoint fraction (#151 )
* feat(config): add checkpoint_fraction into config
* feat: remove checkpoint_fraction from configs/7B_sft.py
---------
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
* [Doc] update deployment guide to keep consistency with lmdeploy (#136 )
* update deployment guide
* fix error
* use llm partition (#159 )
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
* test(ci_scripts): clean test data after test, remove unnecessary global variables, and other optimizations (#165 )
* test: optimization of ci scripts (variables, test data cleaning, etc).
* chore(workflows): disable ci job on push.
* fix: update partition
* test(ci_scripts): add install requirements automatically, trigger event about lint check and other optimizations (#174 )
* add pull_request in lint check
* use default variables in ci_scripts
* fix format
* check and install requirements automatically
* fix format
---------
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
* feat(profiling): add a simple memory profiler (#89 )
* feat(profiling): add simple memory profiler
* feat(profiling): add profiling argument
* feat(CI_workflow): Add PR & Issue auto remove workflow (#184 )
* feat(ci_workflow): Add PR & Issue auto remove workflow
Add a workflow for stale PR & Issue auto removal
- PR & Issue will be labeled as stale after 7 days of inactivity
- stale PR & Issue will be removed after 7 days
- run this workflow every day at 1:30 a.m.
* Update stale.yml
* feat(bot): Create .owners.yml for Auto Assign (#176 )
* Create .owners.yml: for issue/pr assign automatically
* Update .owners.yml
* Update .owners.yml
fix typo
* [feat]: add pal reasoning script (#163 )
* [Feat] Add PAL inference script
* Update README.md
* Update tools/README.md
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update tools/pal_inference.py
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update pal script
* Update README.md
* restore .pre-commit-config.yaml
* Update tools/README.md
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update tools/README.md
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update pal inference script
* Update README.md
* Update internlm/utils/interface.py
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
* Update pal script
* Update pal script
* Update script
* Add docstring
* Update format
* Update script
* Update script
* Update script
---------
Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
* test(ci_scripts): add timeout settings and clean work after the slurm job (#185 )
* restore pr test on develop branch
* add mask
* add post action to cancel slurm job
* remove readonly attribute on job log
* add debug info
* debug job log
* try stdin
* use stdin
* set default value avoid error
* try setting readonly on job log
* performance echo
* remove debug info
* use squeue to check slurm job status
* restore the lost param
* limit retry times
* use exclusive to avoid port already in use
* optimize loop body
* remove partition
* add {} for variables
* set env variable for slurm partition
---------
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
* refactor(tools): move interface.py and import it to web_demo (#195 )
* move interface.py and import it to web_demo
* typo
* fix(ci): fix lint error
* fix(ci): fix lint error
---------
Co-authored-by: Sun Peng <sunpengsdu@gmail.com>
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: Changjiang GOU <gouchangjiang@gmail.com>
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
Co-authored-by: vansin <msnode@163.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang <zhangshuolove@live.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
Co-authored-by: Guoteng <32697156+SolenoidWGT@users.noreply.github.com>
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: zachtzy <141206206+zachtzy@users.noreply.github.com>
Co-authored-by: cx <759046501@qq.com>
Co-authored-by: Jaylin Lee <61487970+APX103@users.noreply.github.com>
Co-authored-by: del-zhenwu <dele.zhenwu@gmail.com>
Co-authored-by: Shaoyuan Xie <66255889+Daniel-xsy@users.noreply.github.com>
Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-08-16 15:57:26 +08:00
huangting4201
f3664bfbab
fix(train.py): fix scheduler metric hook skip error ( #204 )
2023-08-16 15:47:05 +08:00
huangting4201
5f2381af62
fix/ci train error ( #200 )
...
* fix(ci): fix ci train error
* fix(ci): fix ci train error
* fix(ci): fix ci train error
2023-08-16 11:11:27 +08:00
huangting4201
db13bc46bc
fix(ci): fix ci train error ( #199 )
2023-08-15 20:09:54 +08:00
Sun Peng
ef851d16c6
Feat/optimizer ( #194 )
...
* feat(optimizer.py): reduce memory footprint and avoid _check_overflow call
* feat(optimizer.py): reduce memory footprint and avoid _check_overflow call
* feat(optimizer.py): overlap compute norm with allreduce
* update var and function name
* update function compute norm (#197 )
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
* feat(optimizer/hybrid_zero_optim.py): overlap gradients last bucket allreduce and compute norm (#196 )
* support gradients allreduce and compute norm overlap
* fix para set error
* remove timer cal_norm for testing
* feat(optimizer/hybrid_zero_optim.py): support group global norm
* format(lint): fix lint error
* feat(optimizer/store.py): update code based on comment
---------
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: huangting4201 <1538303371@qq.com>
2023-08-15 18:55:10 +08:00
x54-729
0600b42c01
refactor(tools): move interface.py and import it to web_demo ( #195 )
...
* move interface.py and import it to web_demo
* typo
2023-08-14 22:32:29 +08:00
cx
4e8bd39d8f
refactor(solver/optimizer): improve optimizer memory ( #193 )
...
* refactor(solver/optimizer): improve optimizer memory
* feat(data): remove useless dataset type ids map
2023-08-11 17:46:07 +08:00
Sun Peng
5f3133fac8
Revert "feat(ckpt): add auto ckpt load and signal quit ( #189 )" ( #192 )
...
This reverts commit a45a91bb84.
2023-08-11 17:12:26 +08:00
kkscilife
ccb06a98e4
test(ci_scripts): add timeout settings and clean work after the slurm job ( #185 )
...
* restore pr test on develop branch
* add mask
* add post action to cancel slurm job
* remove readonly attribute on job log
* add debug info
* debug job log
* try stdin
* use stdin
* set default value avoid error
* try setting readonly on job log
* performance echo
* remove debug info
* use squeue to check slurm job status
* restore the lost param
* limit retry times
* use exclusive to avoid port already in use
* optimize loop body
* remove partition
* add {} for variables
* set env variable for slurm partition
---------
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
2023-08-11 17:09:56 +08:00
Guoteng
a45a91bb84
feat(ckpt): add auto ckpt load and signal quit ( #189 )
...
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-11 17:08:01 +08:00
Shaoyuan Xie
7cfea534e7
[feat]: add pal reasoning script ( #163 )
...
* [Feat] Add PAL inference script
* Update README.md
* Update tools/README.md
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update tools/pal_inference.py
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update pal script
* Update README.md
* restore .pre-commit-config.yaml
* Update tools/README.md
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update tools/README.md
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update pal inference script
* Update README.md
* Update internlm/utils/interface.py
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
* Update pal script
* Update pal script
* Update script
* Add docstring
* Update format
* Update script
* Update script
* Update script
---------
Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
2023-08-10 17:53:46 +08:00
del-zhenwu
d28816a499
feat(bot): Create .owners.yml for Auto Assign ( #176 )
...
* Create .owners.yml: for issue/pr assign automatically
* Update .owners.yml
* Update .owners.yml
fix typo
2023-08-10 12:11:30 +08:00
Jaylin Lee
e4c0651b96
feat(CI_workflow): Add PR & Issue auto remove workflow ( #184 )
...
* feat(ci_workflow): Add PR & Issue auto remove workflow
Add a workflow for stale PR & Issue auto removal
- PR & Issue will be labeled as stale after 7 days of inactivity
- stale PR & Issue will be removed after 7 days
- run this workflow every day at 1:30 a.m.
* Update stale.yml
2023-08-09 16:26:05 +08:00
cx
f1a7949185
feat(profiling): add a simple memory profiler ( #89 )
...
* feat(profiling): add simple memory profiler
* feat(profiling): add profiling argument
2023-08-08 13:10:01 +08:00
Guoteng
29d27a6227
feat(ckpt): add async upload and ckpt snapshot ( #161 )
...
* use fp16 in instruction (#80 )
* delete torch_dtype of README's example code (#100 )
* feat(ckpt): support async ckpt upload and ckpt snapshot
---------
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-08 13:08:36 +08:00
huangting4201
ff0fa7659f
feat(monitor): support monitor and alert ( #175 )
...
* feat(monitor): support monitor and alert
* feat(monitor.py): fix demo error
* feat(monitor.py): move cmd monitor args to config file
* feat(hybrid_zero_optim.py): if overflow occurs send alert msg
* feat(monitor.py): remove alert msg filter
* feat(monitor.py): optimize class MonitorTracker
* feat(monitor.py): optimize code
* feat(monitor.py): optimize code
* feat(monitor.py): optimize code
* feat(monitor.py): optimize code
* feat(train.py): update print to log
* style(ci): fix lint error
* fix(utils/evaluation.py): remove useless code
* fix(model/modeling_internlm.py): fix lint error
---------
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-08-08 11:18:15 +08:00
ytxiong
c219065348
feat(*): support sequence_parallel ( #180 )
...
* support sequence_parallel for no pipeline
* sequence_parallel is not supported without flash-attn
* support sequence parallel for pipeline
* add memory profiler
* Update 13B.py
* add memory profiler
* fix evaluation bug
* remove some unnecessary code
* remove some unnecessary code
* Update parallel_context.py
* modify the config
* remove memory profiler
* modify the config
* support selective dropout
2023-08-07 16:42:52 +08:00
ytxiong
853becfb6e
feat(*): support fp32 training ( #155 )
...
* support float32 training
* fix lint
* add adaptation in model/utils.py
* remove some unnecessary code
* fix lint
* feat(optim): add support for fp32 zero
* Revert "Merge pull request #2 from SolenoidWGT/fp32_zero"
This reverts commit 53fc50b0e5, reversing changes made to 40f24d0a73.
revert commit
* merge develop
* Update utils.py
* support fp32 in zero optimizer
* modify the dtype
---------
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-04 16:05:30 +08:00
kkscilife
06274e64d7
test(ci_scripts): add install requirements automatically, trigger event about lint check and other optimizations ( #174 )
...
* add pull_request in lint check
* use default variables in ci_scripts
* fix format
* check and install requirements automatically
* fix format
---------
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
2023-08-04 11:09:46 +08:00
cx
0268d8eda1
refactor(scheduler): rewrite pipeline scheduler ( #138 )
...
* refactor(scheduler): rewrite pipeline scheduler
* fix(*): fix pipeline scheduler bugs
* fix(*): fix merge bug
* feat(*): update codes with todo tag
* feat(*): add comments
* feat(internlm/core/scheduler): update recv_prev/next logic
* feat(utils/evaluation.py): update sche metric hook for valid
---------
Co-authored-by: huangting.p <huangting@sensetime.com>
2023-08-03 11:48:12 +08:00
zachtzy
585071c95b
test(ci_scripts): clean test data after test, remove unnecessary global variables, and other optimizations ( #165 )
...
* test: optimization of ci scripts (variables, test data cleaning, etc).
* chore(workflows): disable ci job on push.
* fix: update partition
2023-08-03 11:26:51 +08:00
ytxiong
d67be17f96
refactor(*): refactor the code with no-apex ( #170 )
...
* support no-apex
* add default for use_apex
* fix lint
* modify the RMSNormTorch
* remove some comments
* remove use_apex parameter
* remove some unnecessary code
* optimize the code including import
* remove the import RMSNorm
* remove warnings
2023-08-03 11:24:12 +08:00
ytxiong
1c397f523f
feat(*): support no apex ( #166 )
...
* support no-apex
* add default for use_apex
* fix lint
* modify the RMSNormTorch
* remove some comments
* remove use_apex parameter
* remove some unnecessary code
2023-08-02 20:32:38 +08:00
huangting4201
66a23e326a
feat(utils/evaluation.py): support evaluate ( #154 )
...
* style(internlm): fix lint error
* feat(utils/logger.py): support uniscale logger
* fix(utils/logger.py): fix import circular error
* feat(train.py): support dashboard metric panel and fix ci train config
* fix(ci_scripts/train/slurm_train.sh): fix ci train error
* fix(ci_scripts/train/torchrun.sh): fix ci train error
* feat(utils/evaluation.py): support evaluate on validation dataset
* fix(utils/evaluation.py): fix demo error
* fix(ci_scripts/train/ci_7B_sft.py): fix ci train error
* feat(initialize/launch.py): set default value for valid_bsz and valid_every
* fix(ci_scripts/train): restore ci update
* docs(configs/7B_sft.py): update comment for config
* fix(config.json): delete config.json
* fix evaluation bug in scheduler when use_flash_attn=False
* feat(scheduler/no_pipeline_scheduler.py): support micro_bsz>1 in no pp
* modify the judgement in pp and no-pp scheduler
* modify the data_process_func in evaluation
* fix bugs when use_flash_attn=False
* rename symbol
* feat(configs/7B_sft.py): change para valid_bsz to valid_micro_num
* feat(scheduler/no_pipeline_scheduler.py): update para set _grad_accum_batch_size
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
Co-authored-by: yingtongxiong <974106207@qq.com>
2023-08-02 19:03:59 +08:00
kkscilife
7fbf85eac9
use llm partition ( #159 )
...
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
2023-08-01 17:49:01 +08:00
huangting4201
1f7304a8bb
feat(utils/logger.py): support uniscale logger ( #152 )
...
* style(internlm): fix lint error
* feat(utils/logger.py): support uniscale logger
* fix(utils/logger.py): fix import circular error
* feat(train.py): support dashboard metric panel and fix ci train config
* fix(ci_scripts/train/slurm_train.sh): fix ci train error
* fix(ci_scripts/train/torchrun.sh): fix ci train error
* fix(ci_scripts/train): restore ci update
* fix(config.json): delete alert webhook
* feat(train.py): optimize func init logger
* feat(config.json): delete config.json
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
2023-08-01 17:37:32 +08:00
ytxiong
307c4741d1
fix(initialize/launch.py): set default value for use_flash_attn ( #158 )
...
* add default for use_flash_attn
* fix lint
2023-08-01 16:03:06 +08:00
lvhan028
fbe6ef1da5
[Doc] update deployment guide to keep consistency with lmdeploy ( #136 )
...
* update deployment guide
* fix error
2023-07-31 14:42:07 +08:00
Guoteng
6b6295aea3
Feat add checkpoint fraction ( #151 )
...
* feat(config): add checkpoint_fraction into config
* feat: remove checkpoint_fraction from configs/7B_sft.py
---------
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-07-31 13:57:01 +08:00
ytxiong
5ee651c2f1
feat(*): support not-flash-attn for pp and no-pp ( #145 )
...
* support not flash attention for no-pp
* support pipeline
* modify the config
* refactor the code
* refactor the code
* remove some unnecessary code
2023-07-28 16:13:04 +08:00
huangting4201
8b1717a05d
style(solver/optimizer/utils.py): fix lint error ( #147 )
...
Co-authored-by: huangting.p <huangting@sensetime.com>
2023-07-28 10:48:06 +08:00
vansin
2fee4220a6
Doc: add twitter link ( #141 )
2023-07-27 15:24:50 +08:00