yuehuayingxueluo
04aca9e55b
[Inference/Kernel]Add get_cos_and_sin Kernel ( #5528 )
* Add get_cos_and_sin kernel
* fix code comments
* fix code typos
* merge common code of the get_cos_and_sin kernel
* Fixed a typo
* Changed 'asset allclose' to 'assert equal'.
2024-04-01 13:47:14 +08:00
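For context on the commit above: a get_cos_and_sin kernel for rotary embedding typically just gathers the precomputed cos/sin rows for each sequence's current position into contiguous per-batch buffers. The sketch below is a minimal illustration under assumed layouts ([max_seq_len, head_dim] caches, a per-sequence lengths array); parameter names are hypothetical and this is not the actual #5528 implementation.

```cuda
#include <cuda_runtime.h>

// Hypothetical layout: cos_cache/sin_cache are [max_seq_len, head_dim],
// lengths holds the current length of each of the gridDim.x sequences, and
// cos_out/sin_out receive one row per sequence (its last-token position).
__global__ void get_cos_and_sin_kernel(const float* __restrict__ cos_cache,
                                       const float* __restrict__ sin_cache,
                                       float* __restrict__ cos_out,
                                       float* __restrict__ sin_out,
                                       const int* __restrict__ lengths,
                                       int head_dim) {
  const int seq_id = blockIdx.x;        // one block per sequence
  const int pos = lengths[seq_id] - 1;  // position of the token being decoded
  for (int i = threadIdx.x; i < head_dim; i += blockDim.x) {
    cos_out[seq_id * head_dim + i] = cos_cache[pos * head_dim + i];
    sin_out[seq_id * head_dim + i] = sin_cache[pos * head_dim + i];
  }
}
```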
yuehuayingxueluo
934e31afb2
Optimized the writing style of tail processing and the logic related to macro definitions ( #5519 )
2024-03-28 10:42:51 +08:00
傅剑寒
e6496dd371
[Inference] Optimize request handler of llama ( #5512 )
* optimize request_handler
* fix writing style
2024-03-26 16:37:14 +08:00
Runyu Lu
6251d68dc9
[fix] PR #5354 ( #5501 )
* [fix]
* [fix]
* Update config.py docstring
* [fix] docstring align
* [fix] docstring align
* [fix] docstring align
2024-03-25 15:24:17 +08:00
Runyu Lu
1d626233ce
Merge pull request #5434 from LRY89757/colossal-infer-cuda-graph
[feat] cuda graph support and refactor non-functional api
2024-03-25 14:55:59 +08:00
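PR #5434 above integrates CUDA graph support into the decoding path. Purely to illustrate the capture-and-replay mechanism it relies on, here is a minimal sketch against the raw CUDA runtime API; the scale kernel and buffer sizes are made up, and the project itself drives graphs from Python rather than C++.

```cuda
#include <cuda_runtime.h>

__global__ void scale(float* x, float a, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) x[i] *= a;
}

int main() {
  const int n = 1 << 20;
  float* d = nullptr;
  cudaMalloc(&d, n * sizeof(float));
  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // Capture a fixed sequence of launches into a graph once...
  cudaGraph_t graph;
  cudaGraphExec_t graph_exec;
  cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
  scale<<<(n + 255) / 256, 256, 0, stream>>>(d, 2.0f, n);
  scale<<<(n + 255) / 256, 256, 0, stream>>>(d, 0.5f, n);
  cudaStreamEndCapture(stream, &graph);
  // CUDA 12 signature; older toolkits take error-node/log-buffer arguments.
  cudaGraphInstantiate(&graph_exec, graph, 0);

  // ...then replay it with one launch per step, avoiding per-kernel overhead.
  for (int step = 0; step < 10; ++step) {
    cudaGraphLaunch(graph_exec, stream);
  }
  cudaStreamSynchronize(stream);

  cudaGraphExecDestroy(graph_exec);
  cudaGraphDestroy(graph);
  cudaStreamDestroy(stream);
  cudaFree(d);
  return 0;
}
```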
Runyu Lu
68e9396bc0
[fix] merge conflicts
2024-03-25 14:48:28 +08:00
yuehuayingxueluo
87079cffe8
[Inference]Support FP16/BF16 Flash Attention 2 And Add high_precision Flag To Rotary Embedding ( #5461 )
* Support FP16/BF16 Flash Attention 2
* fix bugs in test_kv_cache_memcpy.py
* add context_kv_cache_memcpy_kernel.cu
* rm typename MT
* add tail process
* add high_precision
* add high_precision to config.py
* rm unused code
* change the comment for the high_precision parameter
* update test_rotary_embdding_unpad.py
* fix vector_copy_utils.h
* add comment for self.high_precision when using float32
2024-03-25 13:40:34 +08:00
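The high_precision flag added in #5461 means half-precision q/k are rotated in float before being cast back, rather than doing the multiply-adds in __half arithmetic. A hedged device-helper sketch of that idea (the function name and signature are assumptions):

```cuda
#include <cuda_fp16.h>

// Illustrative only: rotate one (x, y) pair of a half-precision head in
// float ("high precision"), then cast the result back to __half.
__device__ __forceinline__ void rotary_pair_high_precision(__half& x, __half& y,
                                                           float cos_v, float sin_v) {
  const float xf = __half2float(x);
  const float yf = __half2float(y);
  x = __float2half(xf * cos_v - yf * sin_v);
  y = __float2half(xf * sin_v + yf * cos_v);
}
```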
Runyu Lu
ff4998c6f3
[fix] remove unused comment
2024-03-25 12:00:57 +08:00
Runyu Lu
9fe61b4475
[fix]
2024-03-25 11:37:58 +08:00
Runyu Lu
5b017d6324
[fix]
2024-03-21 15:55:25 +08:00
Runyu Lu
606603bb88
Merge branch 'feature/colossal-infer' of https://github.com/hpcaitech/ColossalAI into colossal-infer-cuda-graph
2024-03-21 14:25:22 +08:00
Runyu Lu
4eafe0c814
[fix] unused option
2024-03-21 11:28:42 +08:00
傅剑寒
7ff42cc06d
add vec_type_trait implementation ( #5473 )
2024-03-19 18:36:40 +08:00
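A vector-type trait like the one added in #5473 maps a scalar type plus an element count to a CUDA built-in vector type, so kernels can issue wide, coalesced loads and stores. A minimal sketch; the trait name and the particular specializations are illustrative, not the actual implementation.

```cuda
#include <cuda_fp16.h>

// Hypothetical trait: (scalar type, elements per access) -> vector type.
template <typename T, int VecSize>
struct VecTypeTrait;

template <>
struct VecTypeTrait<float, 2> { using Type = float2; };

template <>
struct VecTypeTrait<float, 4> { using Type = float4; };

template <>
struct VecTypeTrait<__half, 2> { using Type = __half2; };

// Eight halves fill one 16-byte transaction; reusing float4 is a common trick.
template <>
struct VecTypeTrait<__half, 8> { using Type = float4; };

// Usage inside a kernel (illustrative):
//   using V = typename VecTypeTrait<float, 4>::Type;
//   V v = *reinterpret_cast<const V*>(src + 4 * idx);
```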
傅剑寒
b96557b5e1
Merge pull request #5469 from Courtesy-Xs/add_vec_traits
Refactor vector utils
2024-03-19 13:53:26 +08:00
Runyu Lu
aabc9fb6aa
[feat] add use_cuda_kernel option
2024-03-19 13:24:25 +08:00
xs_courtesy
48c4f29b27
refactor vector utils
2024-03-19 11:32:01 +08:00
傅剑寒
b6e9785885
Merge pull request #5457 from Courtesy-Xs/ly_add_implementation_for_launch_config
add implementation for GetGPULaunchConfig1D
2024-03-15 11:23:44 +08:00
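GetGPULaunchConfig1D, merged here, is a host-side helper that picks a 1D grid/block shape for a given element count. A hedged sketch of one plausible implementation; the fixed 256-thread block, the vec_size parameter, and the struct name are assumptions.

```cuda
#include <cuda_runtime.h>
#include <algorithm>
#include <cstdint>

struct GPULaunchConfig {
  dim3 grid;
  dim3 block;
};

// Choose a 1D launch shape for `numel` elements where each thread handles
// `vec_size` elements, capping the grid at the device limit.
inline GPULaunchConfig GetGPULaunchConfig1D(int64_t numel, int vec_size) {
  const int block_size = 256;  // assumption: fixed block size
  const int64_t threads_needed = (numel + vec_size - 1) / vec_size;
  int64_t blocks = (threads_needed + block_size - 1) / block_size;

  cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, 0);
  blocks = std::min<int64_t>(blocks, prop.maxGridSize[0]);

  GPULaunchConfig cfg;
  cfg.block = dim3(block_size);
  cfg.grid = dim3(static_cast<unsigned int>(blocks));
  return cfg;
}
```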
xs_courtesy
5724b9e31e
add some comments
2024-03-15 11:18:57 +08:00
Runyu Lu
6e30248683
[fix] tmp for test
2024-03-14 16:13:00 +08:00
xs_courtesy
388e043930
add implementation for GetGPULaunchConfig1D
2024-03-14 11:13:40 +08:00
Runyu Lu
d02e257abd
Merge branch 'feature/colossal-infer' into colossal-infer-cuda-graph
2024-03-14 10:37:05 +08:00
Runyu Lu
ae24b4f025
diverse tests
2024-03-14 10:35:08 +08:00
Runyu Lu
1821a6dab0
[fix] pytest and fix dyn grid bug
2024-03-13 17:28:32 +08:00
yuehuayingxueluo
f366a5ea1f
[Inference/kernel]Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel ( #5418 )
* add rotary embedding kernel
* add rotary_embedding_kernel
* add fused rotary_emb and kvcache memcopy
* add fused_rotary_emb_and_cache_kernel.cu
* add fused_rotary_emb_and_memcopy
* fix bugs in fused_rotary_emb_and_cache_kernel.cu
* fix ci bugs
* use vec memcopy and optimize the global memory access
* fix code style
* fix test_rotary_embdding_unpad.py
* codes revised based on the review comments
* fix bugs about include path
* rm inline
2024-03-13 17:20:03 +08:00
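The fused kernel from #5418 applies rotary embedding to the new token's q/k and writes k/v into the KV cache in the same pass, so rotated values never take an extra round trip through global memory. Below is a heavily simplified sketch for a single decode step that rotates k only and assumes a dense [batch, max_len, head_num, head_dim] cache; layouts, names, and the omission of q are all simplifications, not the real kernel.

```cuda
#include <cuda_runtime.h>

// One block per (sequence, head). Rotate-half style rotary on k, then append
// the rotated k and the untouched v at position lengths[b] - 1 of the cache.
__global__ void fused_rotary_and_cache_copy(
    float* __restrict__ k,            // [batch, head_num, head_dim]
    const float* __restrict__ v,      // [batch, head_num, head_dim]
    float* __restrict__ k_cache,      // [batch, max_len, head_num, head_dim]
    float* __restrict__ v_cache,
    const float* __restrict__ cos,    // [batch, head_dim / 2]
    const float* __restrict__ sin,
    const int* __restrict__ lengths,  // current length of each sequence
    int head_num, int head_dim, int max_len) {
  const int b = blockIdx.x;
  const int h = blockIdx.y;
  const int pos = lengths[b] - 1;
  const int half = head_dim / 2;

  const int src = (b * head_num + h) * head_dim;
  const int dst = ((b * max_len + pos) * head_num + h) * head_dim;

  for (int i = threadIdx.x; i < half; i += blockDim.x) {
    const float c = cos[b * half + i];
    const float s = sin[b * half + i];
    const float x = k[src + i];
    const float y = k[src + i + half];
    const float xr = x * c - y * s;
    const float yr = x * s + y * c;
    k[src + i] = xr;                // keep rotated k for attention
    k[src + i + half] = yr;
    k_cache[dst + i] = xr;          // ...and append it to the cache
    k_cache[dst + i + half] = yr;
    v_cache[dst + i] = v[src + i];  // v is copied unrotated
    v_cache[dst + i + half] = v[src + i + half];
  }
}
```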
Steve Luo
ed431de4e4
fix rmsnorm template function invocation problem (template function partial specialization is not allowed in C++) and luckily pass e2e precision test ( #5454 )
2024-03-13 16:00:55 +08:00
傅剑寒
6fd355a5a6
Merge pull request #5452 from Courtesy-Xs/fix_include_path
fix include path
2024-03-13 11:26:41 +08:00
xs_courtesy
c1c45e9d8e
fix include path
2024-03-13 11:21:06 +08:00
Steve Luo
b699f54007
optimize rmsnorm: add vectorized elementwise op, feat loop unrolling ( #5441 )
2024-03-12 17:48:02 +08:00
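#5441 speeds up the RMSNorm kernel from #5417 by vectorizing the elementwise work and unrolling the reductions. A hedged sketch of that pattern, one block per row with float4 accesses and an unrolled warp-shuffle reduction; it assumes hidden_size is a multiple of 4 and blockDim.x is a multiple of 32, and is not the project's kernel.

```cuda
#include <cuda_runtime.h>

// RMSNorm per row: out = in * rsqrt(mean(in^2) + eps) * weight.
__global__ void rmsnorm_kernel(const float* __restrict__ in,
                               const float* __restrict__ weight,
                               float* __restrict__ out,
                               int hidden_size, float eps) {
  const int row = blockIdx.x;
  const float4* in4 = reinterpret_cast<const float4*>(in + row * hidden_size);
  const float4* w4 = reinterpret_cast<const float4*>(weight);
  float4* out4 = reinterpret_cast<float4*>(out + row * hidden_size);
  const int n4 = hidden_size / 4;

  // Per-thread partial sum of squares over vectorized elements.
  float sum = 0.f;
  for (int i = threadIdx.x; i < n4; i += blockDim.x) {
    float4 v = in4[i];
    sum += v.x * v.x + v.y * v.y + v.z * v.z + v.w * v.w;
  }

  // Block reduction: shuffle within each warp, then combine the warp sums.
  __shared__ float warp_sums[32];
  #pragma unroll
  for (int offset = 16; offset > 0; offset >>= 1)
    sum += __shfl_down_sync(0xffffffff, sum, offset);
  if ((threadIdx.x & 31) == 0) warp_sums[threadIdx.x >> 5] = sum;
  __syncthreads();
  if (threadIdx.x < 32) {
    sum = (threadIdx.x < (blockDim.x + 31) / 32) ? warp_sums[threadIdx.x] : 0.f;
    #pragma unroll
    for (int offset = 16; offset > 0; offset >>= 1)
      sum += __shfl_down_sync(0xffffffff, sum, offset);
    if (threadIdx.x == 0) warp_sums[0] = sum;
  }
  __syncthreads();
  const float rms_inv = rsqrtf(warp_sums[0] / hidden_size + eps);

  // Vectorized normalize-and-scale.
  for (int i = threadIdx.x; i < n4; i += blockDim.x) {
    float4 v = in4[i];
    float4 w = w4[i];
    out4[i] = make_float4(v.x * rms_inv * w.x, v.y * rms_inv * w.y,
                          v.z * rms_inv * w.z, v.w * rms_inv * w.w);
  }
}
```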
傅剑寒
368a2aa543
Merge pull request #5445 from Courtesy-Xs/refactor_infer_compilation
Refactor colossal-infer code arch
2024-03-12 14:14:37 +08:00
xs_courtesy
095c070a6e
refactor code
2024-03-11 17:06:57 +08:00
傅剑寒
21e1e3645c
Merge pull request #5435 from Courtesy-Xs/add_gpu_launch_config
Add query and other components
2024-03-11 11:15:29 +08:00
Runyu Lu
633e95b301
[doc] add doc
2024-03-11 10:56:51 +08:00
Runyu Lu
9dec66fad6
[fix] multi graphs capture error
2024-03-11 10:51:16 +08:00
Runyu Lu
b2c0d9ff2b
[fix] multi graphs capture error
2024-03-11 10:49:31 +08:00
Steve Luo
f7aecc0c6b
feat rmsnorm cuda kernel and add unittest, benchmark script ( #5417 )
2024-03-08 16:21:12 +08:00
xs_courtesy
5eb5ff1464
refactor code
2024-03-08 15:41:14 +08:00
xs_courtesy
01d289d8e5
Merge branch 'feature/colossal-infer' of https://github.com/hpcaitech/ColossalAI into add_gpu_launch_config
2024-03-08 15:04:55 +08:00
xs_courtesy
a46598ac59
add reusable utils for cuda
2024-03-08 14:53:29 +08:00
傅剑寒
2b28b54ac6
Merge pull request #5433 from Courtesy-Xs/add_silu_and_mul
[Inference] Add silu_and_mul for infer
2024-03-08 14:44:37 +08:00
Runyu Lu
cefaeb5fdd
[feat] cuda graph support and refactor non-functional api
2024-03-08 14:19:35 +08:00
xs_courtesy
95c21498d4
add silu_and_mul for infer
2024-03-07 16:57:49 +08:00
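silu_and_mul is the gated activation used in LLaMA-style MLPs: the fused up/gate projection produces [gate | up] of width 2*d per token, and the kernel writes silu(gate) * up of width d. A minimal sketch; the flat layout and names are assumptions rather than the code merged in #5433.

```cuda
#include <cuda_runtime.h>

// One block per token; in is [tokens, 2*d] laid out as [gate | up],
// out is [tokens, d] holding silu(gate) * up.
__global__ void silu_and_mul_kernel(const float* __restrict__ in,
                                    float* __restrict__ out,
                                    int d) {
  const int token = blockIdx.x;
  for (int i = threadIdx.x; i < d; i += blockDim.x) {
    const float gate = in[token * 2 * d + i];
    const float up = in[token * 2 * d + d + i];
    const float silu = gate / (1.0f + __expf(-gate));  // gate * sigmoid(gate)
    out[token * d + i] = silu * up;
  }
}
```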
Frank Lee
593a72e4d5
Merge pull request #5424 from FrankLeeeee/sync/main
Sync/main
2024-03-04 10:13:59 +08:00
FrankLeeeee
0310b76e9d
Merge branch 'main' into sync/main
2024-03-04 10:09:36 +08:00
Camille Zhong
4b8312c08e
fix sft single turn inference example ( #5416 )
2024-03-01 17:27:50 +08:00
binmakeswell
a1c6cdb189
[doc] fix blog link
2024-02-29 15:01:43 +08:00
binmakeswell
5de940de32
[doc] fix blog link
2024-02-29 15:01:43 +08:00
Frank Lee
2461f37886
[workflow] added pypi channel ( #5412 )
2024-02-29 13:56:55 +08:00
Tong Li
a28c971516
update requirements ( #5407 )
2024-02-28 17:46:27 +08:00
yuehuayingxueluo
0aa27f1961
[Inference]Move benchmark-related code to the example directory. ( #5408 )
* move benchmark-related code to the example directory.
* fix bugs in test_fused_rotary_embedding.py
2024-02-28 16:46:03 +08:00
yuehuayingxueluo
600881a8ea
[Inference]Add CUDA KVCache Kernel ( #5406 )
* add cuda KVCache kernel
* annotate benchmark_kvcache_copy
* add use cuda
* fix import path
* move benchmark scripts to example/
* rm benchmark code in test_kv_cache_memcpy.py
* rm redundant code
* rm redundant code
* PR was modified according to the review
2024-02-28 14:36:50 +08:00