-
Hi,
I am currently working with the libriphrase_train.py script, specifically lines [31 to 42](https://github.com/aizhiqi-work/MM-KWS/blob/main/dataloaders/libriphrase_train.py#L31-L42), and I…
-
### Describe your problem
Could you explain the meanings of the various part-of-speech tags in huqie, as well as the meaning of each entity type in ner.json? Is there any relevant documentation? I nee…
-
Looking forward to your reply.
-
Could you please tell me how I can fine-tune on my custom Chinese datasets?
-
I have three questions.
First: Can I directly use SQuAD for a Chinese (closed-domain) QA task?
Second: Is it the best solution to use run_squad.py to fine-tune a BERT model with a Chinese dataset, which for…
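As a hedged illustration of the second question (not part of the original issue): the sketch below shows roughly how a Chinese BERT checkpoint could be loaded for extractive QA with the Hugging Face `transformers` API. The `bert-base-chinese` checkpoint name and the sample question/context are placeholders chosen for illustration; the QA head would still need fine-tuning on SQuAD-style Chinese data before the answers are meaningful.

```python
# Minimal sketch (assumption): load a Chinese BERT checkpoint for extractive QA.
# Note: the QA head of "bert-base-chinese" is randomly initialized until it is
# fine-tuned on a SQuAD-style Chinese dataset.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-chinese")

question = "北京是哪个国家的首都？"          # placeholder question
context = "北京是中华人民共和国的首都。"      # placeholder context

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely answer span from the start/end logits.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)
```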
-
This is a Chinese NL2SQL dataset in the same format as WikiSQL: https://github.com/ZhuiyiTechnology/TableQA
The only small difference is that values are wrapped in a list, e.g. `"sql": [2]`.
When I loa…
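As a hedged sketch of how such a format difference could be handled when loading (the file path, the JSON-lines layout, and the assumption that the field is literally `"sql": [2]` as in the excerpt are all illustrative, not taken from the dataset docs):

```python
import json

def unwrap(value):
    """If a value is a single-element list (TableQA style), return the element;
    otherwise return the value unchanged (WikiSQL style)."""
    if isinstance(value, list) and len(value) == 1:
        return value[0]
    return value

records = []
# Assumed path and JSON-lines layout, for illustration only.
with open("train.json", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Normalize list-wrapped fields such as "sql": [2] back to scalars.
        record["sql"] = unwrap(record.get("sql"))
        records.append(record)

print(records[0]["sql"] if records else "no records")
```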
-
I can't access the dataset because it requires logging in to the cloud application,
and I can't join that platform because I don't have a Chinese mobile number.
Could you open the dataset link to…
-
### Version
1
### DataCap Applicant
ssnks1
### Project ID
1
### Data Owner Name
LAMOST DR7
### Data Owner Country/Region
United States
### Data Owner Industry
Life Sci…
-
### Self Checks
- [X] I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
### Self Checks
- [X] I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…