But when I use the model produced by fine-tuning for prediction and enter the following prompt:
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Writing unit test code for a method
### Input:
@Override
public boolean put(Interval interval, T value) {
    if (value == null) {
        throw new NullPointerException();
    }
    Object values = getValuesArray();
    int valuesLength = Array.getLength(values);
    final int index = putInner(interval.getLow(), interval.getHigh());
    if (index < 0) {
        int insertIndex = -index - 1;
        if (size - 1 < valuesLength) {
            if (insertIndex < size - 1) {
                System.arraycopy(values, insertIndex, values, insertIndex + 1, size - insertIndex - 1);
            }
            Array.set(values, insertIndex, value);
        } else {
            Object newArray = Array.newInstance(values.getClass().getComponentType(), valuesLength + 1);
            System.arraycopy(values, 0, newArray, 0, insertIndex);
            System.arraycopy(values, insertIndex, newArray, insertIndex + 1, valuesLength - insertIndex);
            Array.set(newArray, insertIndex, value);
            setValuesArray(newArray);
        }
        return true;
    } else {
        Array.set(values, index, value);
    }
    return false;
}
### Response:
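(The preamble and `### Instruction:` / `### Input:` / `### Response:` markers above follow the standard Alpaca prompt template. As a minimal sketch of how such a prompt is typically assembled before being sent to the model — the helper name here is illustrative, not taken from WizardCoder's own code:)

```python
# Sketch of assembling an Alpaca-style prompt like the one shown above.
# The template text mirrors the post; build_prompt is an illustrative helper.

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, input_text: str) -> str:
    """Fill the Alpaca-style template with a task and its context."""
    return PROMPT_TEMPLATE.format(instruction=instruction, input=input_text)

if __name__ == "__main__":
    prompt = build_prompt(
        "Writing unit test code for a method",
        "public boolean put(Interval interval, T value) { ... }",
    )
    print(prompt)
```

The generated string is what gets tokenized and passed to the model; generation should then continue after the trailing `### Response:` line.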
But what is returned is:
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Writing unit test code for a method
### Input:
@Override
public boolean put(Interval interval, T value) {
    if (value == null) {
        throw new NullPointerException();
    }
    Object values = getValuesArray();
    int valuesLength = Array.getLength(values);
    final int index = putInner(interval.getLow(), interval.getHigh());
    if (index < 0) {
        int insertIndex = -index - 1;
        if (size - 1 < valuesLength) {
            if (insertIndex < size - 1) {
                System.arraycopy(values, insertIndex, values, insertIndex + 1, size - insertIndex - 1);
            }
            Array.set(values, insertIndex, value);
        } else {
            Object newArray = Array.newInstance(values.getClass().getComponentType(), valuesLength + 1);
            System.arraycopy(values, 0, newArray, 0, insertIndex);
            System.arraycopy(values, insertIndex, newArray, insertIndex + 1, valuesLength - insertIndex);
            Array.set(newArray, insertIndex, value);
            setValuesArray(newArray);
        }
        return true;
    } else {
        Array.set(values, index, value);
    }
    return false;
}
### Response:<|endoftext|>
Compared with the input prompt, the only thing the result adds is <|endoftext|>.
What is the problem?
Is there a better way to solve it?
Or is there a better way to do instruction fine-tuning for WizardLM/WizardCoder-15B-V1.0?
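(For reference, one common cause of a fine-tuned model emitting only the end-of-text token is that the training loss was computed over the prompt tokens as well as the response — this is an assumption about the setup, not something confirmed by the post. A minimal sketch of the usual remedy, masking prompt tokens in the labels with `-100` so cross-entropy ignores them; the function name and toy token ids are illustrative:)

```python
# Sketch (assumed training setup): compute the loss only on response tokens.
# Prompt positions in the labels are set to -100, the ignore_index that
# PyTorch's cross-entropy loss (and Hugging Face Trainer) skips.

IGNORE_INDEX = -100

def build_labels(prompt_ids, response_ids, eos_id):
    """Concatenate prompt + response + EOS; mask the prompt in the labels."""
    input_ids = prompt_ids + response_ids + [eos_id]
    labels = [IGNORE_INDEX] * len(prompt_ids) + response_ids + [eos_id]
    return input_ids, labels

# Example with toy token ids:
input_ids, labels = build_labels([1, 2, 3], [7, 8], eos_id=0)
# input_ids -> [1, 2, 3, 7, 8, 0]
# labels    -> [-100, -100, -100, 7, 8, 0]
```

If instead the labels equal the input ids everywhere (or padding is not masked), the model can collapse to predicting EOS immediately after `### Response:`.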
For context: based on the "WizardLM/WizardCoder-15B-V1.0" model, I used 78,533 samples for instruction fine-tuning. The dataset format is as follows:
The instruction fine-tuning script is as follows: