add readme for dataset

Former-commit-id: cece66d48a770e3e418496445d4040e3cafa9411
codemayq 2023-08-23 19:55:45 +08:00
parent 4b29d9d2b0
commit b032dc4c4e
2 changed files with 8 additions and 4 deletions


@@ -11,7 +11,8 @@ If you are using a custom dataset, please provide your dataset definition in the
"query": "the name of the column in the datasets containing the queries. (default: input)",
"response": "the name of the column in the datasets containing the responses. (default: output)",
"history": "the name of the column in the datasets containing the history of chat. (default: None)"
}
},
"stage": "The stage at which the data is used: pt, sft, or rm, corresponding to pre-training, supervised fine-tuning (PPO), and reward model (DPO) training, respectively. (default: None)"
}
```
@@ -26,6 +27,7 @@ For datasets used in reward modeling or DPO training, the `response` column should
"output": [
"Chosen answer",
"Rejected answer"
]
],
"stage": "rm"
}
```
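The pairwise format above can be sketched in a short Python snippet that builds a dataset definition with `"stage": "rm"` and checks one record. The dataset key, file name, and field values here are illustrative assumptions, not names from the repository.

```python
import json

# Hypothetical dataset definition for a pairwise reward-modeling dataset
# (the key "my_rm_dataset" and the file name are assumptions for illustration).
dataset_info = {
    "my_rm_dataset": {
        "file_name": "my_rm_dataset.json",
        "columns": {
            "query": "input",
            "response": "output",
        },
        "stage": "rm",
    }
}

# One record in the pairwise format: `output` holds [chosen, rejected].
record = {
    "input": "Example query",
    "output": [
        "Chosen answer",
        "Rejected answer",
    ],
}

# For stage "rm", the response column must contain exactly two strings:
# the preferred answer first, the rejected answer second.
chosen, rejected = record["output"]
assert isinstance(chosen, str) and isinstance(rejected, str)
print(json.dumps(dataset_info, indent=2))
```

A loader could use such a check to fail early when a record lacks the two-element `output` list expected for reward-model training.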


@@ -11,7 +11,8 @@
"query": "the name of the column in the dataset containing the queries (default: input)",
"response": "the name of the column in the dataset containing the responses (default: output)",
"history": "the name of the column in the dataset containing the chat history (default: None)"
}
},
"stage": "the training stage the data is used in; possible values are pt, sft, and rm, corresponding to pre-training, supervised fine-tuning (PPO), and reward model (DPO) training. (default: None, meaning no restriction)"
}
```
@@ -26,6 +27,7 @@
"output": [
"Chosen answer",
"Rejected answer"
]
],
"stage": "rm"
}
```