README.md

Distributed PPO Training on Stage 3

Detach Experience Makers and Trainers

We can completely separate the trainers and makers.

  • The experience maker performs inference, produces experience, and remotely delivers it to the trainer (1).
  • The trainer consumes experience to train models and periodically transmits new model parameters to the maker (2.1, 2.2).
  • An experience buffer overlaps transmission with computation.

In this manner, each node works continuously with no model idle time, and different optimization strategies can be applied to inference and training to meet speed or memory requirements. This design also improves scalability.
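The overlap between transmission and computation can be illustrated with a simple producer/consumer buffer. Below is a minimal stdlib sketch of the idea, not the actual DetachedReplayBuffer implementation:

```python
import threading
import queue

def run_pipeline(num_items: int) -> list:
    """Maker produces experience while trainer consumes it concurrently."""
    buffer = queue.Queue(maxsize=4)   # bounded buffer decouples the two loops
    consumed = []

    def maker():
        for step in range(num_items):
            experience = {"step": step}   # stand-in for real rollout data
            buffer.put(experience)        # (1) deliver experience to trainer
        buffer.put(None)                  # sentinel: production finished

    def trainer():
        while True:
            experience = buffer.get()
            if experience is None:
                break
            consumed.append(experience)   # (2) train on the experience

    threads = [threading.Thread(target=maker), threading.Thread(target=trainer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return consumed

print(len(run_pipeline(8)))  # 8
```

Because the buffer is bounded, a fast maker blocks instead of exhausting memory, while a fast trainer simply waits for the next batch of experience.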

DetachedPPOTrainer and ExperienceMakerHolder are Ray Actors (not to be confused with the Actor model in PPO), implementing the Trainer and the Experience Maker described above, respectively.

More about Ray Core

Usage

See examples at ColossalAI/applications/Chat/examples/ray

Setup Makers

  • define makers' environment variables:

    env_info_makers = [{
        'local_rank': '0',
        'rank': str(rank),
        'world_size': str(num_makers),
        'master_port': maker_port,
        'master_addr': master_addr
    } for rank in range(num_makers)]
    
    
  • define maker models:

    def model_fn():
        actor = get_actor_from_args(...)
        critic = get_critic_from_args(...)
        reward_model = get_reward_model_from_args(...)
        initial_model = get_actor_from_args(...)
        return actor, critic, reward_model, initial_model
    
    
  • set experience_holder_refs:

    experience_holder_refs = [
        ExperienceMakerHolder.options(
            name=f"maker_{i}",
            num_gpus=1,
            max_concurrency=2
        ).remote(
            detached_trainer_name_list=[f"trainer_{x}" for x in target_trainers(...)],
            model_fn=model_fn,
            ...)
        for i, env_info_maker in enumerate(env_info_makers)
    ]
    

    The names in detached_trainer_name_list refer to the target trainers that the maker should send experience to. A trainer's name is assigned the same way as a maker's, via .options(name="str"). See below.
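The target_trainers(...) helper is elided in the snippet above. One plausible round-robin assignment looks like the following; this is purely illustrative, not the library's implementation:

```python
def target_trainers(maker_rank: int, num_makers: int, num_trainers: int) -> list:
    """Hypothetical round-robin mapping from a maker to its target trainers.

    With at least as many trainers as makers, each maker feeds every
    num_makers-th trainer; otherwise several makers share one trainer.
    """
    if num_trainers >= num_makers:
        # maker i feeds trainers i, i + num_makers, i + 2*num_makers, ...
        return list(range(maker_rank, num_trainers, num_makers))
    return [maker_rank % num_trainers]

print(target_trainers(0, 2, 4))  # [0, 2]
print(target_trainers(1, 2, 1))  # [0]
```

Any mapping works as long as every trainer appears in at least one maker's list, so experience reaches all trainers.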

Setup Trainers

  • define trainers' environment variables:

    env_info_trainers = [{
        'local_rank': '0',
        'rank': str(rank),
        'world_size': str(num_trainers),
        'master_port': trainer_port,
        'master_addr': master_addr
    } for rank in range(num_trainers)]
    
  • define trainer models:

    def trainer_model_fn():
        actor = get_actor_from_args(...)
        critic = get_critic_from_args(...)
        return actor, critic
    
  • set trainer_refs:

    trainer_refs = [
        DetachedPPOTrainer.options(
            name=f"trainer_{i}",
            num_gpus=1,
            max_concurrency=2
        ).remote(
            experience_maker_holder_name_list=[f"maker_{x}" for x in target_makers(...)],
            model_fn=trainer_model_fn,
            ...)
        for i, env_info_trainer in enumerate(env_info_trainers)
    ]
    

    The names in experience_maker_holder_name_list refer to the target makers that the trainer should send updated models to. By setting detached_trainer_name_list and experience_maker_holder_name_list, we can customize the transmission graph.
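For example, a fully connected 2-maker/2-trainer transmission graph can be written out explicitly. This sketch only builds the name lists, following the maker_{i}/trainer_{i} naming convention above:

```python
num_makers, num_trainers = 2, 2

# every maker sends experience to every trainer ...
maker_targets = {
    f"maker_{i}": [f"trainer_{j}" for j in range(num_trainers)]
    for i in range(num_makers)
}
# ... and every trainer sends updated parameters back to every maker
trainer_targets = {
    f"trainer_{j}": [f"maker_{i}" for i in range(num_makers)]
    for j in range(num_trainers)
}

print(maker_targets["maker_0"])   # ['trainer_0', 'trainer_1']
print(trainer_targets["trainer_1"])  # ['maker_0', 'maker_1']
```

Sparser graphs (e.g. each maker paired with a single trainer) reduce transmission volume at the cost of slower parameter propagation.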

Launch Jobs

  • define data_loader:

    def data_loader_fn():
        return torch.utils.data.DataLoader(dataset=dataset)
    
    
  • launch makers:

    wait_tasks = []
    for experience_holder_ref in experience_holder_refs:
        wait_tasks.append(
            experience_holder_ref.workingloop.remote(data_loader_fn(),
                                                     num_steps=experience_steps))
    
    
  • launch trainers:

    for trainer_ref in trainer_refs:
        wait_tasks.append(trainer_ref.fit.remote(total_steps, update_steps, train_epochs))
    
  • wait for completion:

    ray.get(wait_tasks)
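The maker's working loop runs for a fixed number of experience steps, restarting the dataloader when it is exhausted. A minimal sketch of that cycling behavior (illustrative only, not coati's CycledDataLoader):

```python
from itertools import cycle, islice

def working_loop(data_loader, num_steps: int) -> list:
    """Draw num_steps batches, restarting the loader when it runs out."""
    produced = []
    for batch in islice(cycle(data_loader), num_steps):
        produced.append(batch)  # stand-in for make_experience + send-to-trainer
    return produced

print(working_loop([1, 2, 3], num_steps=5))  # [1, 2, 3, 1, 2]
```

This is why experience_steps can exceed the dataset length: the number of rollout steps, not the dataset size, bounds the run.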
    

Flexible Structure

We can apply different strategies to makers and trainers. Some possible configurations:

  • 2 Makers, 1 Trainer
  • 2 Makers, 2 Trainers
  • Maker Inference Quantization
  • Tensor Parallel

TODO

  • Support LoRA
  • Support TP & PP