Commit Graph

1426 Commits (4f21c9e8d900d0a82df38a68067857a505eccc66)

Author SHA1 Message Date
CsRic 9623ec1b02 [NFC] polish colossalai/amp/naive_amp/_utils.py code style (#1816)
* [NFC] polish colossalai/nn/metric/accuracy_2p5d.py code style (#1714)

* [NFC] polish colossalai/zero/sharded_param/__init__.py code style

* [NFC] polish colossalai/amp/naive_amp/_utils.py code style

Co-authored-by: shenggan <csg19971016@gmail.com>
Co-authored-by: ric <mkkt_bkkt@mail.ustc.edu.cn>
2022-11-09 12:08:47 +08:00
Zangwei Zheng 25993db98a [NFC] polish .github/workflows/build_gpu_8.yml code style (#1813) 2022-11-09 12:08:47 +08:00
Zirui Zhu 244fa3108a [NFC] polish MANIFEST.in code style (#1814) 2022-11-09 12:08:47 +08:00
binmakeswell 3c3714fc2a [NFC] polish strategies_constructor.py code style (#1806) 2022-11-09 12:08:47 +08:00
Jiarui Fang 3ce4463fe6 [utils] remove lazy_memory_allocate from ColoInitContext (#1844) 2022-11-09 11:50:33 +08:00
Fazzie-Maqianli fabed0df3b Merge pull request #1842 from feifeibear/jiarui/polish
[example] polish diffusion readme
2022-11-09 09:49:18 +08:00
jiaruifang 27211d6267 [example] polish diffusion readme 2022-11-09 09:38:05 +08:00
jiaruifang cddb4b6f6f Merge branch 'main' of https://github.com/hpcaitech/ColossalAI into main 2022-11-09 09:27:45 +08:00
binmakeswell 4ac7d3ec3b [doc] polish diffusion README (#1840) 2022-11-08 22:36:55 +08:00
binmakeswell 9d3124ac8b [doc] remove obsolete API demo (#1833) 2022-11-08 18:00:49 +08:00
Jiarui Fang fba34efb5a version to 0.1.11rc2 (#1832) 2022-11-08 17:25:15 +08:00
jiaruifang 267b55f0a6 version to 0.1.11rc2 2022-11-08 17:24:02 +08:00
Jiarui Fang 8a6d28b6c2 [example] remove useless readme in diffusion (#1831)
* [NFC] update gitignore remove DS_Store

* [version] upgrade the version to 0.1.11rc2
2022-11-08 17:22:32 +08:00
Jiarui Fang f86a703bcf [NFC] update gitignore remove DS_Store (#1830) 2022-11-08 17:18:15 +08:00
Jiarui Fang a25f755331 [example] add TP to GPT example (#1828) 2022-11-08 17:17:19 +08:00
YuliangLiu0306 49216d7ab1 [autoparallel] fix bugs caused by negative dim key (#1808)
* [autoparallel] fix bugs caused by negative dim key

* fix import error

* fix matmul test issue

* fix unit test issue
2022-11-08 17:03:50 +08:00
アマデウス 4268ae017b [kernel] added jit warmup (#1792) 2022-11-08 16:22:23 +08:00
binmakeswell 76e64cb67c [doc] add diffusion (#1827) 2022-11-08 16:21:54 +08:00
YuliangLiu0306 f6032ddb17 [autoparallel] fix bias addition module (#1800) 2022-11-08 16:21:25 +08:00
Fazzie-Maqianli 6e9730d7ab [example] add stable diffuser (#1825) 2022-11-08 16:14:45 +08:00
Jiarui Fang b1263d32ba [example] simplify the GPT2 huggingface example (#1826) 2022-11-08 16:14:07 +08:00
Jiarui Fang cd5a0d56fa [Gemini] make gemini usage simple (#1821) 2022-11-08 15:53:13 +08:00
ver217 99870726b1 [CheckpointIO] a uniform checkpoint I/O module (#1689) 2022-11-08 15:15:13 +08:00
Boyuan Yao 629172b319 [autoparallel] add batch norm metainfo (#1815)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler
2022-11-08 15:05:26 +08:00
Maruyama_Aya a648d061ba Merge pull request #1817 from MaruyamaAya/main
add ColoDiffusion code: /ldm/module/, /ldm/data/, /scripts/test/
2022-11-08 14:56:00 +08:00
Maruyama_Aya a7e8159da6 add ColoDiffusion codes: /ldm/module/, /ldm/data/, /scripts/test/ 2022-11-08 14:39:35 +08:00
Super Daniel 441d584e4a [fx] add a symbolic_trace api. (#1812)
* [fx] add a symbolic_trace api.

* [fx] fix import errors.
2022-11-08 13:59:20 +08:00
Jiarui Fang 350ccc0481 [example] opt does not depend on Titans (#1811) 2022-11-08 12:02:20 +08:00
Jiarui Fang 6fa71d65d3 [fx] skip diffusers unitest if it is not installed (#1799) 2022-11-08 11:45:23 +08:00
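Several commits in this log (diffusers here, flash_attn/triton and torchrec below) apply the same "skip the unit test when an optional dependency is missing" pattern. A minimal stdlib-only sketch of that pattern — the test class and probed package name are illustrative, not ColossalAI's actual test code:

```python
import importlib.util
import unittest

def has_module(name: str) -> bool:
    # Probe whether a package is importable without actually importing it.
    return importlib.util.find_spec(name) is not None

class OptionalDepTest(unittest.TestCase):
    # The test is reported as skipped, not failed, when diffusers is absent.
    @unittest.skipUnless(has_module("diffusers"), "diffusers is not installed")
    def test_with_diffusers(self):
        import diffusers  # only reached when the package is present
        self.assertTrue(hasattr(diffusers, "__name__"))
```

With pytest-based suites, `pytest.importorskip("diffusers")` achieves the same effect at collection time.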
Jiarui Fang 203ca57aed [example] add GPT 2022-11-08 10:58:17 +08:00
Jiarui Fang fd2c8d8156 [example] add opt model in lauguage (#1809) 2022-11-08 10:39:13 +08:00
xcnick e0da01ea71 [hotfix] fix build error when torch version >= 1.13 (#1803) 2022-11-08 09:40:24 +08:00
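The torch >= 1.13 build fix above is an instance of version-gating: branch the build or import logic on the installed framework version. A hedged sketch of the usual check (the helper names are illustrative; real build scripts often use `packaging.version` instead of hand-rolled parsing):

```python
def parse_version(version: str) -> tuple:
    # Reduce a version string like "1.13.0+cu117" to a comparable
    # (major, minor) tuple; the local "+cu117" build tag is discarded.
    public = version.split("+")[0]
    parts = []
    for piece in public.split(".")[:2]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def needs_new_extension_api(torch_version: str) -> bool:
    # Gate code paths that only exist in torch 1.13 and newer.
    return parse_version(torch_version) >= (1, 13)
```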
Jiarui Fang f5a92c288c [example] add diffusion to example (#1805) 2022-11-07 17:43:36 +08:00
oahzxl 9639ea88fc [kernel] more flexible flashatt interface (#1804) 2022-11-07 17:02:09 +08:00
Zihao 20e255d4e8 MemStatsCollectorStatic (#1765) 2022-11-07 16:49:03 +08:00
Boyuan Yao 327d07c44a [autoparallel] add conv metainfo class for auto parallel (#1796)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test
2022-11-07 16:15:35 +08:00
oahzxl 501a9e9cd2 [hotfix] polish flash attention (#1802) 2022-11-07 14:30:22 +08:00
Jiarui Fang 218c75fd9d [NFC] polish type hint for shape consistency (#1801)
* [NFC] polish type hint for shape consistency

* polish code

* polish code
2022-11-07 14:13:03 +08:00
Jiarui Fang c248800359 [kernel] skip tests of flash_attn and triton when they are not available (#1798) 2022-11-07 13:41:13 +08:00
YuliangLiu0306 e34e850a4c [autoparallel]add essential CommActions for broadcast oprands (#1793) 2022-11-04 18:36:42 +08:00
Boyuan Yao 05ce3d369f [fx] Add linear metainfo class for auto parallel (#1783)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel
2022-11-04 10:55:09 +08:00
Super Daniel e8a9bebc87 [autoparallel] refactor and add rotorc. (#1789)
* [autoparallel] refactor and add rotorc.

* [autoparallel] refactor and add rotorc.
2022-11-03 12:32:51 +08:00
github-actions[bot] 4d6e1284cb Automated submodule synchronization (#1785)
Co-authored-by: github-actions <github-actions@github.com>
2022-11-03 12:31:50 +08:00
YuliangLiu0306 2c4c7b3618 [autoparallel] add getattr handler (#1767)
* [autoparallel] add getattr haandler

* polish code

* add extra processes for Parameters

* add unit test for param resharding cost

* add docstring and polish test
2022-11-03 12:31:33 +08:00
HELSON c6a1a62636 [hotfix] fix zero's incompatibility with checkpoint in torch-1.12 (#1786)
* [hotfix] fix zero's incompatibility with checkpoint in torch-1.12

* [zero] add cpu shard init

* [zero] add tiny example test

* [colo_tensor] fix bugs for torch-1.11
2022-11-02 16:11:34 +08:00
Jiarui Fang 32c1b843a9 skip torchrec unittests if not installed (#1790) 2022-11-02 14:44:32 +08:00
kurisusnowdeng 0b8161fab8 updated tp layers 2022-11-02 12:19:38 +08:00
Jiarui Fang cb5a587e9a [hotfix] polish chunk import (#1787) 2022-11-02 12:10:52 +08:00
YuliangLiu0306 e859380bf7 [fx] support module with bias addition (#1780)
* [autoparallel] refactor tracer to fix bias addition issue

* [fx] support module with bias addition

* create bias_addition_module

* refactor file structure

* polish code

* fix unit test
2022-11-01 22:53:51 +08:00
Frank Lee f3f19a5c47 [autoparallel] added matmul handler (#1763)
* [autoparallel] added matmul handler

* polish code
2022-11-01 15:14:53 +08:00