Commit Graph

3073 Commits (68ec99e946129298b2e6d8e6463886fe6b22a5df)

Author SHA1 Message Date
Yuanchen eae01b6740
Improve logic for selecting metrics (#5196)
11 months ago
Wenhao Chen 4fa689fca1
[pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp (#5134)
11 months ago
BlueRum af952673f7
polish readme in application/chat (#5194)
11 months ago
flybird11111 681d9b12ef
[doc] update pytorch version in documents. (#5177)
12 months ago
Yuanchen 3ff60d13b0
Fix ColossalEval (#5186)
12 months ago
flybird11111 79718fae04
[shardformer] llama support DistCrossEntropy (#5176)
12 months ago
Yuanchen cefdc32615
[ColossalEval] Support GSM, Data Leakage Evaluation and Tensor Parallel (#5169)
12 months ago
Michelle b07a6f4e27
[colossalqa] fix pangu api (#5170)
12 months ago
flybird11111 21aa5de00b
[gemini] hotfix NaN loss while using Gemini + tensor_parallel (#5150)
12 months ago
Yuanchen b397104438
[Colossal-Llama-2] Add finetuning Colossal-Llama-2 example (#4878)
12 months ago
flybird11111 3dbbf83f1c
fix (#5158)
12 months ago
Michelle 368b5e3d64
[doc] fix colossalqa document (#5146)
1 year ago
Michelle c7fd9a5213
[ColossalQA] refactor server and webui & add new feature (#5138)
1 year ago
flybird11111 2a2ec49aa7
[plugin]fix 3d checkpoint load when booster boost without optimizer. (#5135)
1 year ago
Xuanlei Zhao d6df19bae7
[npu] support triangle attention for llama (#5130)
1 year ago
Frank Lee f4e72c9992
[accelerator] init the accelerator module (#5129)
1 year ago
github-actions[bot] f6731db67c
[format] applied code formatting on changed files in pull request 5115 (#5118)
1 year ago
github-actions[bot] 9b36640f28
[format] applied code formatting on changed files in pull request 5124 (#5125)
1 year ago
github-actions[bot] d10ee42f68
[format] applied code formatting on changed files in pull request 5088 (#5127)
1 year ago
digger yu 9110406a47
fix typo change JOSNL TO JSONL etc. (#5116)
1 year ago
Frank Lee 2899cfdabf
[doc] updated paper citation (#5131)
1 year ago
binmakeswell 177c79f2d1
[doc] add moe news (#5128)
1 year ago
Wenhao Chen 7172459e74
[shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088)
1 year ago
アマデウス 126cf180bc
[hotfix] fixed memory usage of shardformer module replacement (#5122)
1 year ago
Zian(Andy) Zheng 7b789f4dd2
[FEATURE] Add Safety Eval Datasets to ColossalEval (#5095)
1 year ago
digger yu d5661f0f25
[nfc] fix typo change directoty to directory (#5111)
1 year ago
digger yu 2bdf76f1f2
fix typo change lazy_iniy to lazy_init (#5099)
1 year ago
Xuanlei Zhao 68fcaa2225
remove duplicate import (#5100)
1 year ago
YeAnbang e53e729d8e
[Feature] Add document retrieval QA (#5020)
1 year ago
Xuanlei Zhao 3acbf6d496
[npu] add npu support for hybrid plugin and llama (#5090)
1 year ago
flybird11111 aae496631c
[shardformer]fix flash attention, when mask is causal, just don't unpad it (#5084)
1 year ago
Zhongkai Zhao 75af66cd81
[Hotfix] Fix model policy matching strategy in ShardFormer (#5064)
1 year ago
flybird11111 4ccb9ded7d
[gemini]fix gemini optimizer, saving Shardformer in Gemini got list assignment index out of range (#5085)
1 year ago
digger yu 0d482302a1
[nfc] fix typo and author name (#5089)
1 year ago
digger yu fd3567e089
[nfc] fix typo in docs/ (#4972)
1 year ago
Jun Gao dce05da535
fix thrust-transform-reduce error (#5078)
1 year ago
Hongxin Liu 1cd7efc520
[inference] refactor examples and fix schedule (#5077)
1 year ago
Bin Jia 4e3959d316
[hotfix/hybridengine] Fix init model with random parameters in benchmark (#5074)
1 year ago
github-actions[bot] 8921a73c90
[format] applied code formatting on changed files in pull request 5067 (#5072)
1 year ago
Xu Kai fb103cfd6e
[inference] update examples and engine (#5073)
1 year ago
Bin Jia 0c7d8bebd5
[hotfix/hybridengine] fix bug when tp*pp size = 1 (#5069)
1 year ago
Hongxin Liu e5ce4c8ea6
[npu] add npu support for gemini and zero (#5067)
1 year ago
Hongxin Liu 8d56c9c389
[misc] remove outdated submodule (#5070)
1 year ago
Cuiqing Li (李崔卿) bce919708f
[Kernels] added flash-decoding of triton (#5063)
1 year ago
Xu Kai fd6482ad8c
[inference] Refactor inference architecture (#5057)
1 year ago
flybird11111 bc09b95f50
[example] fix llama example's loss error when using gemini plugin (#5060)
1 year ago
Wenhao Chen 3c08f17348
[hotfix]: modify create_ep_hierarchical_group and add test (#5032)
1 year ago
flybird11111 97cd0cd559
[shardformer] fix llama error when transformers upgraded. (#5055)
1 year ago
flybird11111 3e02154710
[gemini] gemini support extra-dp (#5043)
1 year ago
Elsa Granger b2ad0d9e8f
[pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017)
1 year ago