Yuanheng Zhao | 4bb5d8923a | [Fix/Inference] Remove unused and non-functional functions (#5543) | 8 months ago
  * [fix] remove unused func
  * rm non-functional partial
傅剑寒 | a2878e39f4 | [Inference] Add Reduce Utils (#5537) | 8 months ago
  * add reduce utils
  * add using to delete namespace prefix
yuehuayingxueluo | 04aca9e55b | [Inference/Kernel] Add get_cos_and_sin Kernel (#5528) | 8 months ago
  * Add get_cos_and_sin kernel
  * fix code comments
  * fix code typos
  * merge common code of the get_cos_and_sin kernel
  * Fixed a typo
  * Changed 'assert allclose' to 'assert equal'
yuehuayingxueluo | 934e31afb2 | Optimize the writing style of tail processing and the logic related to macro definitions (#5519) | 8 months ago
傅剑寒 | e6496dd371 | [Inference] Optimize request handler of llama (#5512) | 8 months ago
  * optimize request_handler
  * fix coding style
Runyu Lu | 6251d68dc9 | [fix] PR #5354 (#5501) | 8 months ago
  * [fix]
  * [fix]
  * Update config.py docstring
  * [fix] docstring align
  * [fix] docstring align
  * [fix] docstring align
Runyu Lu | 1d626233ce | Merge pull request #5434 from LRY89757/colossal-infer-cuda-graph | 8 months ago
  [feat] cuda graph support and refactor non-functional api
Runyu Lu | 68e9396bc0 | [fix] merge conflicts | 8 months ago
yuehuayingxueluo | 87079cffe8 | [Inference] Support FP16/BF16 Flash Attention 2 And Add high_precision Flag To Rotary Embedding (#5461) | 8 months ago
  * Support FP16/BF16 Flash Attention 2
  * fix bugs in test_kv_cache_memcpy.py
  * add context_kv_cache_memcpy_kernel.cu
  * rm typename MT
  * add tail process
  * add high_precision
  * add high_precision to config.py
  * rm unused code
  * change the comment for the high_precision parameter
  * update test_rotary_embdding_unpad.py
  * fix vector_copy_utils.h
  * add comment for self.high_precision when using float32
Runyu Lu | ff4998c6f3 | [fix] remove unused comment | 8 months ago
Runyu Lu | 9fe61b4475 | [fix] | 8 months ago
Runyu Lu | 5b017d6324 | [fix] | 8 months ago
Runyu Lu | 606603bb88 | Merge branch 'feature/colossal-infer' of https://github.com/hpcaitech/ColossalAI into colossal-infer-cuda-graph | 8 months ago
Runyu Lu | 4eafe0c814 | [fix] unused option | 8 months ago
傅剑寒 | 7ff42cc06d | add vec_type_trait implementation (#5473) | 8 months ago
傅剑寒 | b96557b5e1 | Merge pull request #5469 from Courtesy-Xs/add_vec_traits | 8 months ago
  Refactor vector utils
Runyu Lu | aabc9fb6aa | [feat] add use_cuda_kernel option | 8 months ago
xs_courtesy | 48c4f29b27 | refactor vector utils | 8 months ago
傅剑寒 | b6e9785885 | Merge pull request #5457 from Courtesy-Xs/ly_add_implementation_for_launch_config | 9 months ago
  add implementation for GetGPULaunchConfig1D
xs_courtesy | 5724b9e31e | add some comments | 9 months ago
Runyu Lu | 6e30248683 | [fix] tmp for test | 9 months ago
xs_courtesy | 388e043930 | add implementation for GetGPULaunchConfig1D | 9 months ago
Runyu Lu | d02e257abd | Merge branch 'feature/colossal-infer' into colossal-infer-cuda-graph | 9 months ago
Runyu Lu | ae24b4f025 | diverse tests | 9 months ago
Runyu Lu | 1821a6dab0 | [fix] pytest and fix dyn grid bug | 9 months ago
yuehuayingxueluo | f366a5ea1f | [Inference/Kernel] Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) | 9 months ago
  * add rotary embedding kernel
  * add rotary_embedding_kernel
  * add fused rotary_emb and kvcache memcopy
  * add fused_rotary_emb_and_cache_kernel.cu
  * add fused_rotary_emb_and_memcopy
  * fix bugs in fused_rotary_emb_and_cache_kernel.cu
  * fix ci bugs
  * use vec memcopy and optimize the global memory access
  * fix code style
  * fix test_rotary_embdding_unpad.py
  * revise code based on the review comments
  * fix bugs about include path
  * rm inline
Steve Luo | ed431de4e4 | fix rmsnorm template function invocation problem (template function partial specialization is not allowed in C++) and pass the e2e precision test (#5454) | 9 months ago
傅剑寒 | 6fd355a5a6 | Merge pull request #5452 from Courtesy-Xs/fix_include_path | 9 months ago
  fix include path
xs_courtesy | c1c45e9d8e | fix include path | 9 months ago
Steve Luo | b699f54007 | optimize rmsnorm: add vectorized elementwise op and loop unrolling (#5441) | 9 months ago
傅剑寒 | 368a2aa543 | Merge pull request #5445 from Courtesy-Xs/refactor_infer_compilation | 9 months ago
  Refactor colossal-infer code arch
xs_courtesy | 095c070a6e | refactor code | 9 months ago
傅剑寒 | 21e1e3645c | Merge pull request #5435 from Courtesy-Xs/add_gpu_launch_config | 9 months ago
  Add query and other components
Runyu Lu | 633e95b301 | [doc] add doc | 9 months ago
Runyu Lu | 9dec66fad6 | [fix] multi graphs capture error | 9 months ago
Runyu Lu | b2c0d9ff2b | [fix] multi graphs capture error | 9 months ago
Steve Luo | f7aecc0c6b | feat rmsnorm cuda kernel and add unittest, benchmark script (#5417) | 9 months ago
xs_courtesy | 5eb5ff1464 | refactor code | 9 months ago
xs_courtesy | 01d289d8e5 | Merge branch 'feature/colossal-infer' of https://github.com/hpcaitech/ColossalAI into add_gpu_launch_config | 9 months ago
xs_courtesy | a46598ac59 | add reusable utils for cuda | 9 months ago
傅剑寒 | 2b28b54ac6 | Merge pull request #5433 from Courtesy-Xs/add_silu_and_mul | 9 months ago
  [Inference] Add silu_and_mul for infer
Runyu Lu | cefaeb5fdd | [feat] cuda graph support and refactor non-functional api | 9 months ago
xs_courtesy | 95c21498d4 | add silu_and_mul for infer | 9 months ago
Frank Lee | 593a72e4d5 | Merge pull request #5424 from FrankLeeeee/sync/main | 9 months ago
  Sync/main
FrankLeeeee | 0310b76e9d | Merge branch 'main' into sync/main | 9 months ago
Camille Zhong | 4b8312c08e | fix sft single turn inference example (#5416) | 9 months ago
binmakeswell | a1c6cdb189 | [doc] fix blog link | 9 months ago
binmakeswell | 5de940de32 | [doc] fix blog link | 9 months ago
Frank Lee | 2461f37886 | [workflow] added pypi channel (#5412) | 9 months ago
Tong Li | a28c971516 | update requirements (#5407) | 9 months ago