Commit Graph

1992 Commits (aac941ef78afc006ef2dcc62c040c3751324d2e2)

Author SHA1 Message Date
Hongxin Liu aa125bcc91
[shardformer] fix modeling of bloom and falcon (#5796) 2024-06-11 17:43:50 +08:00
Runyu Lu c0948aff97
[Inference]refactor baichuan (#5791)
* refactor baichuan

* remove unused code and add TODO for lazyinit
2024-06-11 10:52:01 +08:00
char-1ee f5981e808e Remove flash attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 10:02:19 +00:00
char-1ee ceba662d22 Clean up
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 09:09:29 +00:00
char-1ee 5f398fc000 Pass inference model shard configs for module init
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 08:33:52 +00:00
char-1ee eec77e5702 Fix tests and naming
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 08:33:47 +00:00
char-1ee 04386d9eff Refactor modeling by adding attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 08:33:47 +00:00
Hongxin Liu 73e88a5553
[shardformer] fix import (#5788) 2024-06-06 19:09:50 +08:00
Hongxin Liu b9d646fe9e
[misc] fix dist logger (#5782) 2024-06-05 15:04:22 +08:00
botbw 3f7e3131d9
[gemini] optimize reduce scatter d2h copy (#5760)
* [gemini] optimize reduce scatter d2h copy

* [fix] fix missing reduce variable

* [refactor] remove legacy async reduce scatter code

* [gemini] missing sync

* Revert "[refactor] remove legacy async reduce scatter code"

This reverts commit 58ad76d466.

* [gemini] further optimize with async all reduce

* [fix] pass flag from manager to chunk
2024-06-05 14:23:13 +08:00
Edenzzzz 79f7a7b211
[misc] Accelerate CI for zero and dist optim (#5758)
* remove fp16 from lamb

* remove d2h copy in checking states

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-06-05 11:25:19 +08:00
flybird11111 50b4c8e8cf
[hotfix] fix llama flash attention forward (#5777) 2024-06-05 10:56:47 +08:00
yuehuayingxueluo b45000f839
[Inference]Add Streaming LLM (#5745)
* Add Streaming LLM

* add some parameters to llama_generation.py

* verify streamingllm config

* add test_streamingllm.py

* modified according to review comments

* add Citation

* change _block_tables tolist
2024-06-05 10:51:19 +08:00
Yuanheng Zhao 406443200f
[Hotfix] Add missing init file in inference.executor (#5774) 2024-06-03 22:29:39 +08:00
duanjunwen 1b76564e16
[test] Fix/fix testcase (#5770)
* [fix] branch for fix testcase;

* [fix] fix test_analyzer & test_auto_parallel;

* [fix] remove local change about moe;

* [fix] rm local change moe;
2024-06-03 15:26:01 +08:00
flybird11111 3f2be80530
fix (#5765) 2024-06-03 11:25:18 +08:00
botbw 023ea13cb5
Merge pull request #5749 from hpcaitech/prefetch
[Gemini] Prefetch next chunk before each op
2024-05-29 15:35:54 +08:00
hxwang 8547562884 [chore] remove unnecessary assert since compute list might not be recorded 2024-05-28 05:16:02 +00:00
hxwang e5e3320948 [bug] continue fix 2024-05-28 02:41:23 +00:00
hxwang 936dd96dbb [bug] workaround for idx fix 2024-05-28 02:33:12 +00:00
Edenzzzz 5f8c0a0ac3
[Feature] auto-cast optimizers to distributed version (#5746)
* auto-cast optimizers to distributed

* fix galore casting

* logger

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-05-24 17:24:16 +08:00
hxwang ff507b755e Merge branch 'main' of github.com:hpcaitech/ColossalAI into prefetch 2024-05-24 04:05:07 +00:00
botbw 2fc85abf43
[gemini] async grad chunk reduce (all-reduce&reduce-scatter) (#5713)
* [gemini] async grad chunk reduce (all-reduce&reduce-scatter)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [gemini] add test

* [gemini] rename func

* [gemini] update llama benchmark

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [gemini] use tensor counter

* [gemini] change default config in GeminiPlugin and GeminiDDP

* [chore] typo

* [gemini] fix sync issue & add test cases

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-05-24 10:31:16 +08:00
Jianghai 85946d4236
[Inference]Fix readme and example for API server (#5742)
* fix chatapi readme and example

* updating doc

* add an api and change the doc

* remove

* add credits and del 'API' heading

* readme

* readme
2024-05-24 10:03:05 +08:00
hxwang 15d21a077a Merge remote-tracking branch 'origin/main' into prefetch 2024-05-23 15:49:33 +00:00
binmakeswell 4647ec28c8
[inference] release (#5747)
* [inference] release

* [inference] release

* [inference] release

* [inference] release

* [inference] release

* [inference] release

* [inference] release
2024-05-23 17:44:06 +08:00
Yuanheng Zhao df6747603f
[Colossal-Inference] (v0.1.0) Merge pull request #5739 from hpcaitech/feature/colossal-infer
[Inference] Merge feature/colossal-infer
2024-05-22 14:31:09 +08:00
Yuanheng Zhao bd38fe6b91
[NFC] Fix code factors on inference triton kernels (#5743) 2024-05-21 22:12:15 +08:00
botbw 13c06d36a3
[bug] fix early return (#5740)
* [bug] fix silly bug

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [chore] add test for prefetch

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-05-21 14:21:58 +08:00
Haze188 22ce873c3f
[Shardformer] Add parallel output for shardformer models(bloom, falcon) (#5702)
* [pre-commit.ci] auto fixes from pre-commit.com hooks

* add parallel cross entropy output for falcon model & fix some typos in bloom.py

* fix module name error, self.model -> self.transformers in bloom, falcon model

* Fix the overflow bug of distributed cross entropy loss function when training with fp16

* add dtype to parallel cross entropy loss function

* fix dtype-related typos and prettify loss.py

* fix grad dtype and update dtype mismatch error

* fix typo bugs
2024-05-21 11:07:13 +08:00
pre-commit-ci[bot] b3c0e6d871 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-05-21 02:09:15 +00:00
hxwang 137a7c341b [chore] fix init error 2024-05-21 02:07:21 +00:00
Yuanheng Zhao 8633c15da9 [sync] Sync feature/colossal-infer with main 2024-05-20 15:50:53 +00:00
Yuanheng Zhao d8b1ea4ac9
[doc] Update Inference Readme (#5736)
* [doc] update inference readme

* add contents

* trivial
2024-05-20 22:50:04 +08:00
Yuanheng Zhao bdf9a001d6
[Fix/Inference] Add unsupported auto-policy error message (#5730)
* [fix] auto policy error message

* trivial
2024-05-20 22:49:18 +08:00
genghaozhe 90d8d0183c remove personal comments 2024-05-20 07:28:20 +00:00
genghaozhe bfcb2d1ff8 refactor the code structure to solve the circular import 2024-05-20 07:25:24 +00:00
genghaozhe 1ec92d29af remove perf log, unrelated file and so on 2024-05-20 05:23:26 +00:00
genghaozhe 5c6c5d6be3 remove comments 2024-05-20 05:23:12 +00:00
genghaozhe 7416e4943b resolve conflicts and tidy up the code 2024-05-20 04:09:51 +00:00
genghaozhe d22bf30ca6 implement auto policy prefetch and slightly modify the original code 2024-05-20 04:01:53 +00:00
pre-commit-ci[bot] f1918e18a5 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-05-20 03:00:07 +00:00
hxwang a55a9e298b [gemini] init auto policy prefetch 2024-05-20 02:21:17 +00:00
Yuanheng Zhao 283c407a19
[Inference] Fix Inference Generation Config and Sampling (#5710)
* refactor and add

* config default values

* fix gen config passing

* fix rpc generation config
2024-05-19 15:08:42 +08:00
genghaozhe 06a3a100b3 remove unrelated code 2024-05-17 10:57:49 +00:00
genghaozhe 3d625ca836 add some TODO messages 2024-05-17 10:55:28 +00:00
flybird11111 9d83c6d715
[lazy] fix lazy cls init (#5720)
* fix

* fix

* fix

* fix

* fix

* remove kernel install

* rebase

revert

fix

* fix

* fix
2024-05-17 18:18:59 +08:00
botbw e57812c672
[chore] Update placement_policy.py 2024-05-17 13:42:18 +08:00
Yuanheng Zhao 8bcfe360fd
[example] Update Inference Example (#5725)
* [example] update inference example
2024-05-17 11:28:53 +08:00
genghaozhe 013690a86b remove set(all_chunks) 2024-05-16 09:57:51 +00:00