* Update ppo.py
Fix a bug where the wrong batch data was fetched
* Add peft model support in SFT and Prompts training
In stage-1 and stage-3, peft model support is added, so the trained artifacts are only small LoRA adapter files instead of the full set of model weights (a minimal usage sketch follows below).
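  A minimal sketch of what attaching a LoRA adapter with Hugging Face peft looks like, assuming the `peft` and `transformers` packages; the base model name and LoRA hyperparameters here are illustrative, not necessarily the values used in this PR:

  ```python
  # Sketch only (not the exact training code in this PR): wrap a causal LM with a
  # LoRA adapter so that only the small adapter weights are saved, not the full model.
  from transformers import AutoModelForCausalLM
  from peft import LoraConfig, TaskType, get_peft_model

  base = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")  # illustrative base model

  lora_config = LoraConfig(
      task_type=TaskType.CAUSAL_LM,
      r=8,              # illustrative rank
      lora_alpha=16,
      lora_dropout=0.05,
  )
  model = get_peft_model(base, lora_config)
  model.print_trainable_parameters()  # only a small fraction of parameters are trainable

  # ... run the usual SFT training loop on `model` ...

  # Saving stores only the LoRA adapter files (a few MB), not the whole checkpoint.
  model.save_pretrained("./sft_lora_adapter")
  ```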
* Delete test_prompts.txt
* Delete test_pretrained.txt
* Move the peft-related code to a community folder.
* Move the SFT demo to the community folder
* Delete leftover files
* Add instructions for installing peft from source
* Remove Chinese comments