It seems that in your paper the training dataset is 'InstructorDoctor-205k', but in this repo the training command uses 'HealthCareMagic-100k.json'.
In the paper, training is described as fine-tuning on InstructorDoctor-205k (seemingly a single step), but this repo states: 'Our model was firstly be fine-tuned by Stanford Alpaca's data to have some basic conversational capabilities.' Does this mean the repo contains an updated method?
Training time difference: 18 hours in the paper vs. 30 minutes in the repo.
Could you please provide some clarification?
Thanks!
@KentOn-Li @ttio2tech @saharmor
Hello, I have filled out the form at the link several times, but I have not received the related weight files. Is something missing here? (I have checked my spam folder.) My email is autogptuser(at)gmail(dot)com. Could you please send me the pre-trained weights? Thanks a lot.