ColossalAI/tests/test_gemini/update/test_inference.py


from functools import partial

import pytest
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.testing import assert_close

import colossalai
from colossalai.amp import convert_to_apex_amp
from colossalai.gemini.chunk import ChunkManager, init_chunk_manager, search_chunk_configuration
from colossalai.gemini.gemini_mgr import GeminiManager
from colossalai.nn.optimizer import HybridAdam
from colossalai.nn.optimizer.zero_optimizer import ZeroOptimizer
from colossalai.nn.parallel import ZeroDDP
from colossalai.testing import parameterize, rerun_if_address_is_in_use
from colossalai.utils import free_port
from colossalai.utils.cuda import get_current_device
from colossalai.utils.model.colo_init_context import ColoInitContext, post_process_colo_init_ctx
from tests.components_to_test import run_fwd_bwd
from tests.components_to_test.registry import non_distributed_component_funcs
from tests.test_tensor.common_utils import debug_print, set_seed
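

# Compare every parameter of the ZeroDDP (Gemini) model against the reference torch model.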
def check_param(model: ZeroDDP, torch_model: torch.nn.Module):
    zero_dict = model.state_dict(only_rank_0=False)
    torch_dict = torch_model.state_dict()

    for key, value in torch_dict.items():
        # key is 'module.model.PARAMETER', so we truncate it
        key = key[7:]
        assert key in zero_dict, "{} not in ZeRO dictionary.".format(key)
        temp_zero_value = zero_dict[key].to(device=value.device, dtype=value.dtype)
        # debug_print([0], "max range: ", key, torch.max(torch.abs(value - temp_zero_value)))
        assert_close(value, temp_zero_value, rtol=1e-3, atol=4e-3)
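

# Run train -> inference -> train on a GPT2 component and check that the Gemini (ZeroDDP) model
# stays numerically close to a torch DDP + apex AMP baseline for each placement policy.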
@parameterize('placement_policy', ['cuda', 'cpu', 'auto', 'const'])
@parameterize('model_name', ['gpt2'])
def exam_inference(placement_policy, model_name: str):
    set_seed(19360226)
    get_components_func = non_distributed_component_funcs.get_callable(model_name)
    model_builder, train_dataloader, test_dataloader, optimizer_class, criterion = get_components_func()

    torch_model = model_builder().cuda()
    amp_config = dict(opt_level='O2', keep_batchnorm_fp32=False, loss_scale=128)
    torch_optim = torch.optim.Adam(torch_model.parameters(), lr=1e-3)
    torch_model, torch_optim = convert_to_apex_amp(torch_model, torch_optim, amp_config)
    torch_model = DDP(torch_model, device_ids=[dist.get_rank()])
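
    # Build the Gemini model under ColoInitContext and copy the baseline weights into it.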
    init_dev = get_current_device()
    with ColoInitContext(device=init_dev):
        model = model_builder()

    for torch_p, p in zip(torch_model.parameters(), model.parameters()):
        p.data.copy_(torch_p.data)
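
    # Search a chunk configuration for the current world size, then override the chunk size with a fixed value.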
    world_size = torch.distributed.get_world_size()
    config_dict, _ = search_chunk_configuration(model, search_range_mb=1, search_interval_byte=100)
    config_dict[world_size]['chunk_size'] = 5000
    config_dict[world_size]['keep_gathered'] = False
    if placement_policy != 'cuda':
        init_device = torch.device('cpu')
    else:
        init_device = None
    chunk_manager = ChunkManager(config_dict, init_device=init_device)
    gemini_manager = GeminiManager(placement_policy, chunk_manager)
    model = ZeroDDP(model, gemini_manager, pin_memory=True)

    optimizer = HybridAdam(model.parameters(), lr=1e-3)
    zero_optim = ZeroOptimizer(optimizer, model, initial_scale=128)
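
    # Switch both models to eval mode; the inference-only step below runs under torch.no_grad().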
    model.eval()
    torch_model.eval()

    set_seed(dist.get_rank() * 3 + 128)
    train_dataloader = iter(train_dataloader)
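
    # One optimizer step on both models, then compare losses and the updated parameters.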
    def train_iter():
        input_ids, label = next(train_dataloader)
        input_ids, label = input_ids.cuda(), label.cuda()
        zero_optim.zero_grad()
        torch_optim.zero_grad()
        torch_loss = run_fwd_bwd(torch_model, input_ids, label, criterion, torch_optim)
        loss = run_fwd_bwd(model, input_ids, label, criterion, zero_optim)
        assert_close(torch_loss, loss)
        zero_optim.step()
        torch_optim.step()
        check_param(model, torch_model)
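
    # Forward-only pass; the losses of the two models should still match.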
    def inference_iter():
        input_ids, label = next(train_dataloader)
        input_ids, label = input_ids.cuda(), label.cuda()
        with torch.no_grad():
            torch_output = torch_model(input_ids)
            torch_loss = criterion(torch_output.float(), label)
            zero_output = model(input_ids)
            zero_loss = criterion(zero_output.float(), label)
            assert_close(torch_loss, zero_loss)

    train_iter()
    inference_iter()
    train_iter()
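

# Per-process entry point for mp.spawn: initialize the ColossalAI distributed context, then run the test body.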
def run_dist(rank, world_size, port):
    config = {}
    colossalai.launch(config=config, rank=rank, world_size=world_size, host='localhost', port=port, backend='nccl')
    exam_inference()
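

# Spawn one process per rank on a free port; rerun if the address is already in use.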
@pytest.mark.dist
@pytest.mark.parametrize('world_size', [1, 4])
@rerun_if_address_is_in_use()
def test_inference(world_size):
    run_func = partial(run_dist, world_size=world_size, port=free_port())
    mp.spawn(run_func, nprocs=world_size)


if __name__ == '__main__':
    test_inference(1)