mirror of https://github.com/hpcaitech/ColossalAI
Polish readme link (#3306)
parent a0b374925b
commit b512893637
@ -280,7 +280,7 @@ For more details, see [`inference/`](https://github.com/hpcaitech/ColossalAI/tre
 </details>
 
-You can find more examples in this [repo](https://github.com/XueFuzhao/InstructionWild/blob/main/compare.md).
+You can find more examples in this [repo](https://github.com/XueFuzhao/InstructionWild/blob/main/comparison.md).
 
 ### Limitation for LLaMA-finetuned models
 
 - Both Alpaca and ColossalChat are based on LLaMA. It is hard to compensate for the missing knowledge in the pre-training stage.