ColossalAI/applications/ColossalChat/coati/models/base.py

"""
Base class for critic and reward model
"""
from typing import Optional
import torch
import torch.nn as nn
from transformers import AutoModel, PretrainedConfig


class BaseModel(nn.Module):
    """
    Base class for critic and reward models; wraps a Hugging Face backbone loaded via AutoModel.

    Args:
        pretrained (str): path to pretrained model.
        config (PretrainedConfig): PretrainedConfig used to initiate the base model.
        **kwargs: all other kwargs as in AutoModel.from_pretrained.
    """

    def __init__(self, pretrained: str = None, config: Optional[PretrainedConfig] = None, **kwargs) -> None:
        super().__init__()
        if pretrained is not None:
            if config is not None:
                # initialize with config and load weights from pretrained
                self.model = AutoModel.from_pretrained(pretrained, config=config, **kwargs)
            else:
                # initialize with pretrained
                self.model = AutoModel.from_pretrained(pretrained, **kwargs)
        elif config is not None:
            # initialize with config
            self.model = AutoModel.from_config(config, **kwargs)
        else:
            raise ValueError("Either pretrained or config must be provided.")

        self.config = self.model.config
        # run a dummy forward pass to get the size of the last hidden state
        if "use_flash_attention_2" in kwargs:
            # FlashAttention-2 kernels only run on CUDA devices, so move the model
            # to GPU before the probe forward pass
            self.model = self.model.cuda()
        dummy_input = torch.zeros((1, 1), dtype=torch.long).to(self.model.device)
        out = self.model(dummy_input)
        self.last_hidden_state_size = out.last_hidden_state.shape[-1]
        self.model = self.model.cpu()

    def resize_token_embeddings(self, *args, **kwargs):
        """
        Resize the token embeddings of the model.

        Args:
            *args: Variable length argument list.
            **kwargs: Arbitrary keyword arguments.

        Returns:
            The resized token embeddings.
        """
        return self.model.resize_token_embeddings(*args, **kwargs)
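

# Usage sketch: a minimal illustration of how BaseModel can back a critic or
# reward model by projecting the backbone's last hidden state to a scalar per
# token. Assumptions for illustration only: the `SimpleCritic` class and the
# "gpt2" checkpoint are hypothetical here and not defined elsewhere in
# ColossalChat. The __main__ guard keeps the sketch out of normal imports.
if __name__ == "__main__":

    class SimpleCritic(BaseModel):
        def __init__(self, pretrained: str = None, config: Optional[PretrainedConfig] = None, **kwargs) -> None:
            super().__init__(pretrained=pretrained, config=config, **kwargs)
            # last_hidden_state_size was probed by BaseModel.__init__
            self.value_head = nn.Linear(self.last_hidden_state_size, 1)

        def forward(self, input_ids: torch.LongTensor, attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
            outputs = self.model(input_ids, attention_mask=attention_mask)
            # (batch, seq_len, hidden) -> (batch, seq_len): one scalar value per token
            return self.value_head(outputs.last_hidden_state).squeeze(-1)

    critic = SimpleCritic(pretrained="gpt2")
    dummy = torch.zeros((1, 4), dtype=torch.long)
    print(critic(dummy).shape)  # expected: torch.Size([1, 4])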