Issues: pytorch/torchtune
#1067: Can someone give me an example of how to evaluate a Llama 3 model fine-tuned with LoRA? (opened Jun 6, 2024 by jayson1200)
#1060: [Feature Request] Add lr_scheduler for full_finetune (single_device/distributed) (opened Jun 6, 2024 by andyl98)
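Issue #1060 asks for learning-rate scheduling in the full-finetune recipes. As a point of reference, the cosine-annealing schedule that PyTorch's `torch.optim.lr_scheduler.CosineAnnealingLR` implements can be sketched in pure Python; the `base_lr`, `t_max`, and `eta_min` values below are illustrative assumptions, not taken from any torchtune config:

```python
import math

def cosine_annealing_lr(step, base_lr, t_max, eta_min=0.0):
    """Cosine-annealing learning rate at a given step.

    Follows the closed form used by torch.optim.lr_scheduler.CosineAnnealingLR:
    lr = eta_min + (base_lr - eta_min) * (1 + cos(pi * step / t_max)) / 2
    """
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * step / t_max)) / 2

# Illustrative values: a 3e-4 peak LR annealed to zero over 1000 steps.
schedule = [cosine_annealing_lr(s, base_lr=3e-4, t_max=1000) for s in range(1001)]
```

The schedule starts at the peak rate, decays smoothly, and reaches `eta_min` at `t_max`; a recipe would typically call such a scheduler's `step()` once per optimizer step or per epoch.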
#1054: Using include_path with eval file for custom evaluation configs in lm-eval is not supported (opened Jun 5, 2024 by yasser-sulaiman)
#1042: Recommendations for obtaining validation dataset loss after each epoch (opened Jun 1, 2024 by dcsuka)
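Issue #1042 asks how to obtain validation loss after each epoch. Framework aside, the general pattern is to run the model over the held-out set with gradients disabled and average the per-batch losses; a minimal framework-free sketch, where `loss_fn` and the batches are stand-ins rather than torchtune APIs:

```python
def epoch_validation_loss(batches, loss_fn):
    """Average per-batch loss over a held-out dataset.

    In a real training loop this would run under model.eval() and
    torch.no_grad(); here loss_fn is a stand-in that returns a scalar
    loss for one batch.
    """
    total, count = 0.0, 0
    for batch in batches:
        total += loss_fn(batch)
        count += 1
    return total / count if count else float("nan")

# Toy usage: pretend each "batch" already carries its loss value.
val_loss = epoch_validation_loss([0.9, 0.7, 0.5], loss_fn=lambda b: b)
```

Calling this at the end of each epoch gives a comparable scalar to log alongside training loss for early-stopping or checkpoint selection.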
#1033: GPTQ quantization not working with fine-tuned LLaMA3 models (opened May 30, 2024 by sanchitintel)
#1023: Benchmark performance against other implementations such as Llama-Factory and Unsloth? (opened May 27, 2024 by liyucheng09)