binmakeswell
9d3124ac8b
[doc] remove obsolete API demo ( #1833 )
2022-11-08 18:00:49 +08:00
Jiarui Fang
fba34efb5a
version to 0.1.11rc2 ( #1832 )
2022-11-08 17:25:15 +08:00
jiaruifang
267b55f0a6
version to 0.1.11rc2
2022-11-08 17:24:02 +08:00
Jiarui Fang
8a6d28b6c2
[example] remove useless readme in diffusion ( #1831 )
...
* [NFC] update .gitignore to remove DS_Store
* [version] upgrade the version to 0.1.11rc2
2022-11-08 17:22:32 +08:00
Jiarui Fang
f86a703bcf
[NFC] update .gitignore to remove DS_Store ( #1830 )
2022-11-08 17:18:15 +08:00
Jiarui Fang
a25f755331
[example] add TP to GPT example ( #1828 )
2022-11-08 17:17:19 +08:00
YuliangLiu0306
49216d7ab1
[autoparallel] fix bugs caused by negative dim key ( #1808 )
...
* [autoparallel] fix bugs caused by negative dim key
* fix import error
* fix matmul test issue
* fix unit test issue
2022-11-08 17:03:50 +08:00
アマデウス
4268ae017b
[kernel] added JIT warmup ( #1792 )
2022-11-08 16:22:23 +08:00
binmakeswell
76e64cb67c
[doc] add diffusion ( #1827 )
2022-11-08 16:21:54 +08:00
YuliangLiu0306
f6032ddb17
[autoparallel] fix bias addition module ( #1800 )
2022-11-08 16:21:25 +08:00
Fazzie-Maqianli
6e9730d7ab
[example] add stable diffusion ( #1825 )
2022-11-08 16:14:45 +08:00
Jiarui Fang
b1263d32ba
[example] simplify the GPT2 huggingface example ( #1826 )
2022-11-08 16:14:07 +08:00
Jiarui Fang
cd5a0d56fa
[Gemini] make gemini usage simple ( #1821 )
2022-11-08 15:53:13 +08:00
ver217
99870726b1
[CheckpointIO] a uniform checkpoint I/O module ( #1689 )
2022-11-08 15:15:13 +08:00
Boyuan Yao
629172b319
[autoparallel] add batch norm metainfo ( #1815 )
...
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
2022-11-08 15:05:26 +08:00
Maruyama_Aya
a648d061ba
Merge pull request #1817 from MaruyamaAya/main
...
add ColoDiffusion code: /ldm/module/, /ldm/data/, /scripts/test/
2022-11-08 14:56:00 +08:00
Maruyama_Aya
a7e8159da6
add ColoDiffusion code: /ldm/module/, /ldm/data/, /scripts/test/
2022-11-08 14:39:35 +08:00
Super Daniel
441d584e4a
[fx] add a symbolic_trace api. ( #1812 )
...
* [fx] add a symbolic_trace api.
* [fx] fix import errors.
2022-11-08 13:59:20 +08:00
Jiarui Fang
350ccc0481
[example] OPT does not depend on Titans ( #1811 )
2022-11-08 12:02:20 +08:00
Jiarui Fang
6fa71d65d3
[fx] skip diffusers unit test if it is not installed ( #1799 )
2022-11-08 11:45:23 +08:00
Jiarui Fang
203ca57aed
[example] add GPT
2022-11-08 10:58:17 +08:00
Jiarui Fang
fd2c8d8156
[example] add OPT model in language ( #1809 )
2022-11-08 10:39:13 +08:00
xcnick
e0da01ea71
[hotfix] fix build error when torch version >= 1.13 ( #1803 )
2022-11-08 09:40:24 +08:00
Jiarui Fang
f5a92c288c
[example] add diffusion to example ( #1805 )
2022-11-07 17:43:36 +08:00
oahzxl
9639ea88fc
[kernel] more flexible flash attention interface ( #1804 )
2022-11-07 17:02:09 +08:00
Zihao
20e255d4e8
MemStatsCollectorStatic ( #1765 )
2022-11-07 16:49:03 +08:00
Boyuan Yao
327d07c44a
[autoparallel] add conv metainfo class for auto parallel ( #1796 )
...
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
2022-11-07 16:15:35 +08:00
oahzxl
501a9e9cd2
[hotfix] polish flash attention ( #1802 )
2022-11-07 14:30:22 +08:00
Jiarui Fang
218c75fd9d
[NFC] polish type hint for shape consistency ( #1801 )
...
* [NFC] polish type hint for shape consistency
* polish code
* polish code
2022-11-07 14:13:03 +08:00
Jiarui Fang
c248800359
[kernel] skip tests of flash_attn and triton when they are not available ( #1798 )
2022-11-07 13:41:13 +08:00
YuliangLiu0306
e34e850a4c
[autoparallel] add essential CommActions for broadcast operands ( #1793 )
2022-11-04 18:36:42 +08:00
Boyuan Yao
05ce3d369f
[fx] Add linear metainfo class for auto parallel ( #1783 )
...
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
2022-11-04 10:55:09 +08:00
Super Daniel
e8a9bebc87
[autoparallel] refactor and add rotorc. ( #1789 )
...
* [autoparallel] refactor and add rotorc.
* [autoparallel] refactor and add rotorc.
2022-11-03 12:32:51 +08:00
github-actions[bot]
4d6e1284cb
Automated submodule synchronization ( #1785 )
...
Co-authored-by: github-actions <github-actions@github.com>
2022-11-03 12:31:50 +08:00
YuliangLiu0306
2c4c7b3618
[autoparallel] add getattr handler ( #1767 )
...
* [autoparallel] add getattr handler
* polish code
* add extra processes for Parameters
* add unit test for param resharding cost
* add docstring and polish test
2022-11-03 12:31:33 +08:00
HELSON
c6a1a62636
[hotfix] fix zero's incompatibility with checkpoint in torch-1.12 ( #1786 )
...
* [hotfix] fix zero's incompatibility with checkpoint in torch-1.12
* [zero] add cpu shard init
* [zero] add tiny example test
* [colo_tensor] fix bugs for torch-1.11
2022-11-02 16:11:34 +08:00
Jiarui Fang
32c1b843a9
skip torchrec unit tests if not installed ( #1790 )
2022-11-02 14:44:32 +08:00
kurisusnowdeng
0b8161fab8
updated TP layers
2022-11-02 12:19:38 +08:00
Jiarui Fang
cb5a587e9a
[hotfix] polish chunk import ( #1787 )
2022-11-02 12:10:52 +08:00
YuliangLiu0306
e859380bf7
[fx] support module with bias addition ( #1780 )
...
* [autoparallel] refactor tracer to fix bias addition issue
* [fx] support module with bias addition
* create bias_addition_module
* refactor file structure
* polish code
* fix unit test
2022-11-01 22:53:51 +08:00
Frank Lee
f3f19a5c47
[autoparallel] added matmul handler ( #1763 )
...
* [autoparallel] added matmul handler
* polish code
2022-11-01 15:14:53 +08:00
Ziyue Jiang
4df0194976
[Pipeline] Adapt to Pipelinable OPT ( #1782 )
2022-11-01 14:18:50 +08:00
YuliangLiu0306
27de252334
[autoparallel] fix conv handler numerical test ( #1771 )
2022-11-01 10:43:44 +08:00
Super Daniel
1e88811c7a
[autoparallel] move ckpt solvers to autoparallel folder / refactor code ( #1764 )
...
* [autoparallel] first move.
* [autoparallel] add solver rotor.
* [autoparallel] add ckpt solvers.
* [autoparallel] modify codegen.
* [fx] fix annotation in test.
* [fx] remove check.
* [autoparallel] polish docstring.
* [fx] refactor MetaTensor.
2022-11-01 10:43:15 +08:00
github-actions[bot]
2b859502d5
Automated submodule synchronization ( #1781 )
...
Co-authored-by: github-actions <github-actions@github.com>
2022-11-01 10:39:18 +08:00
Super Daniel
5ea89f6456
[CI] downgrade fbgemm. ( #1778 )
2022-10-31 18:18:45 +08:00
Jiarui Fang
f34dab4270
[compatibility] ChunkMgr import error ( #1772 )
2022-10-28 14:48:54 +08:00
YuliangLiu0306
a4d1f59c78
[autoparallel] add numerical test for handlers ( #1769 )
2022-10-28 10:59:59 +08:00
YuliangLiu0306
b0f7c8bde8
[autoparallel] update CommSpec to CommActions ( #1768 )
...
* [autoparallel] update CommSpec to CommActions
* polish code
2022-10-28 09:57:43 +08:00
binmakeswell
16b0abf94f
[doc] add FastFold ( #1766 )
2022-10-27 07:06:57 +00:00