[torch.compile]: Enhanced Error Reporting and Performance Canary Mode #126644
Comments
Performance canary mode is a good idea; I often want information comparing the baseline against the optimized run.
What about the first point instead?
We want to serialize MetaTensorDesc from fakeification; the logical place is in structured_trace. That is also a good idea, and not too difficult. Reporting the failed function should work already: we have user stacks, so we just report it.
But triage often still requires a minimal repro, and producing one is a lot of work, especially for an intermediate/leaf function. Another point is to have a compile-deactivation decorator, so that while a ticket is open we could still disable compilation of the failing function.
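A compile-deactivation decorator along these lines can be sketched in plain Python. All names here (`maybe_compile`, `disable_compile`, `_DISABLED`) are hypothetical illustrations, not PyTorch APIs; PyTorch's own `torch.compiler.disable` serves a similar purpose:

```python
import functools

# Hypothetical registry of functions whose compilation has been disabled,
# e.g. while a bug ticket for the failing function is open.
_DISABLED = set()

def disable_compile(fn):
    """Mark a function so maybe_compile() falls back to eager execution."""
    _DISABLED.add(fn.__qualname__)
    return fn

def maybe_compile(fn):
    """Stand-in for a compile decorator: skips compilation for disabled functions."""
    if fn.__qualname__ in _DISABLED:
        return fn  # eager fallback, no compilation attempted

    @functools.wraps(fn)
    def compiled(*args, **kwargs):
        # A real implementation would trace/compile here; this sketch
        # just tags the call so the difference is observable.
        compiled.was_compiled = True
        return fn(*args, **kwargs)

    compiled.was_compiled = False
    return compiled

@disable_compile
def failing_fn(x):
    return x + 1

@maybe_compile
def ok_fn(x):
    return x * 2
```

With this sketch, `maybe_compile(failing_fn)` returns `failing_fn` unchanged, so the troublesome function keeps running eagerly while everything else is still compiled.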
Yeah, agreed. There is definitely stuff here, holistically, that we can do better.
This adds dumps of MetaTensorDesc and MetaStorageDesc to structured logs when they are triggered from Dynamo. The logs look like this:

```
V0522 08:13:25.267000 140224882566144 torch/_subclasses/meta_utils.py:195] {"describe_storage": {"id": 0, "describer_id": 0, "size": 32}, "frame_id": 0, "frame_compile_id": 0, "attempt": 0}
V0522 08:13:25.267000 140224882566144 torch/_subclasses/meta_utils.py:220] {"describe_tensor": {"id": 0, "ndim": 1, "dtype": "torch.float32", "device": "device(type='cpu')", "size": [8], "is_leaf": true, "stride": [1], "storage": 0, "view_func": "<built-in method _view_func_unsafe of Tensor object at 0x7f882959e840>", "describer_id": 0}, "frame_id": 0, "frame_compile_id": 0, "attempt": 0}
V0522 08:13:25.268000 140224882566144 torch/_subclasses/meta_utils.py:1594] {"describe_source": {"describer_id": 0, "id": 0, "source": "L['x']"}, "frame_id": 0, "frame_compile_id": 0, "attempt": 0}
```

The `describer_id` is used to disambiguate ids. We expect it to be unique per frame id, but if there is a bug it possibly is not. Note that you will get redundant dumps when evaluation restarts. tlparse can use this to give a visualization of the input tensors to a model; you could also use it to generate example inputs to run graphs on. Some care is taken to avoid dumping the tensor metadata multiple times, which would otherwise happen because AOTAutograd refakifies everything after Dynamo, to deal with metadata mutation.

Partially fixes pytorch#126644

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: pytorch#126879
Approved by: https://github.com/jamesjwu
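As a sketch of what a tlparse-style consumer could do with these lines, the JSON payload after the `meta_utils.py:NNN]` prefix can be split off and collected per record kind. The parsing below is an illustration, not tlparse's actual implementation:

```python
import json
import re

# Matches the trailing JSON object of a structured-trace line, e.g.
#   V0522 ... meta_utils.py:220] {"describe_tensor": {...}, "frame_id": 0, ...}
_PAYLOAD = re.compile(r"\] (\{.*\})$")

def parse_trace(lines):
    """Collect describe_storage / describe_tensor / describe_source records."""
    records = {"describe_storage": [], "describe_tensor": [], "describe_source": []}
    for line in lines:
        m = _PAYLOAD.search(line)
        if not m:
            continue
        payload = json.loads(m.group(1))
        for kind in records:
            if kind in payload:
                records[kind].append(payload[kind])
    return records

# Two of the example lines from the PR description above.
log = [
    'V0522 08:13:25.267000 140224882566144 torch/_subclasses/meta_utils.py:195] '
    '{"describe_storage": {"id": 0, "describer_id": 0, "size": 32}, "frame_id": 0, '
    '"frame_compile_id": 0, "attempt": 0}',
    'V0522 08:13:25.268000 140224882566144 torch/_subclasses/meta_utils.py:1594] '
    '{"describe_source": {"describer_id": 0, "id": 0, "source": "L[\'x\']"}, '
    '"frame_id": 0, "frame_compile_id": 0, "attempt": 0}',
]
recs = parse_trace(log)
```

Grouping by `describer_id` on top of this would let a tool reassociate tensors with their storages and sources, which is the disambiguation role the PR describes.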
See also
@bhack here's a doc I've been working on that I'd love your help previewing. Any comments / suggestions for what else to add would be helpful: https://docs.google.com/document/d/1y5CRfMLdwEoF1nTk9q8qEu1mgMUuUtvhklPKJ2emLU8/edit
It is a good starting point, but my impression is that it moves too fast toward users who want to go deeper into compiler internals. I think the first goal is to lower the number of […]. How do you think we could make this document more visible to the community, to collect comments from a wider range of user/dev profiles?
For this, I think we need to actually do some coding, unfortunately. Even for experts like me, it is not easy to extract repros from live production issues. The doc is really the best I know how to do right now.
Is there something wrong with TORCH_TRACE for reporting failures? I was hoping it would not be too burdensome for people to run with TORCH_TRACE and upload it with their bug report.
I'm currently collecting comments internally, and then I'll do more social media in the wider community as it gets more baked.
Is it so hard to trace the related source of a decorated function?
It could be OK, but I think it will mainly work for OSS released models, or for small function traces without too many disclosure issues. Or for internal (Meta) models, since the traces are shared in the internal tracking system. But what about research/unpublished models? I don't know how practical it is to share a full end-to-end trace in that case.
@bhack I added dumps for this at #126879, but I haven't gotten around to actually using them for something like repros.
OK, that's fair. But I think we are quickly getting outside the zone of feasibility here. If you have a bug that happens on a private model, and you cannot share detailed logs, and you are not expert enough to do some minimization/investigation on your own, then there's not really much you can do besides post the error message and hope someone can look at it and figure it out as is. In an ideal world, automatic repro production would work great for this sort of situation. But we actually have a bit of experience with this in the minifier, and the problem is that as the bugs get harder and harder to reproduce, we need more and more fidelity out of the minifier, and in the terminal state this becomes quite a lot of work to maintain. Sometimes, finding the needle in the haystack (what exactly you needed to produce the problem) is most of the way to solving the problem in the first place.
Yes, I meant something probably "light" like #126879, at least to support
🚀 The feature, motivation and pitch
Background
Handling PyTorch compile issues and ensuring reproducibility on minimal isolated code is currently quite labor-intensive. This challenge impacts both:
The complexity increases significantly when compiling full models or high-level `def` functions in a chain. Often, a single error might be hidden within a chain of errors, complicating error reporting and resolution.

Proposal
Enhanced Error Isolation and Reporting:
Implement a mechanism to precisely isolate the function where compilation failed. This will allow users to report the specific function causing the issue without additional effort.
Automatically record fake inputs to facilitate error reproduction without the need for users to fully reproduce their dataset setup. This ensures that developers and triagers can recreate the issue reliably with minimal setup.
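The "record fake inputs" idea can be sketched as a metadata round-trip. The names and file format below are hypothetical, not an existing PyTorch API (the dumps added in #126879 carry similar fields), and only metadata is captured, never tensor data, so nothing private leaks:

```python
import json
import os
import tempfile

def describe_input(name, shape, stride, dtype):
    """Capture only tensor metadata, never the data itself."""
    return {"name": name, "shape": list(shape), "stride": list(stride), "dtype": dtype}

def dump_repro(descs, path):
    """Persist input descriptions alongside a bug report."""
    with open(path, "w") as f:
        json.dump({"inputs": descs}, f, indent=2)

def load_repro(path):
    """A triager could feed these descriptions to torch.empty_strided(...)
    to rebuild fake inputs without the reporter's dataset."""
    with open(path) as f:
        return json.load(f)["inputs"]

# Round-trip demo with a single 1-D float32 input, matching the
# describe_tensor example in the PR logs (size [8], stride [1]).
descs = [describe_input("x", (8,), (1,), "torch.float32")]
path = os.path.join(tempfile.mkdtemp(), "repro.json")
dump_repro(descs, path)
loaded = load_repro(path)
```

Because the file contains only shapes, strides, and dtypes, it should be shareable even for unpublished models, addressing the disclosure concern raised in the comments above.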
Performance Canary Mode:
Introduce a mode where running an uncompiled model stores baseline performance data (e.g., memory usage, speed) on disk.
When running the compiled model, automatically compare current performance against the stored baseline. If there are regressions in memory usage or speed, users should be warned.
In case of performance regressions, provide an easy and straightforward way for users to report these issues.
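The canary mode described above can be sketched as a small on-disk baseline store. This is a pure-Python illustration under assumed names (`record_baseline`, `check_regression`), not a proposed PyTorch API:

```python
import json
import os
import tempfile

def record_baseline(path, runtime_s, peak_mem_mb):
    """Run once uncompiled and persist the measurements to disk."""
    with open(path, "w") as f:
        json.dump({"runtime_s": runtime_s, "peak_mem_mb": peak_mem_mb}, f)

def check_regression(path, runtime_s, peak_mem_mb, tolerance=0.10):
    """Compare a compiled run against the stored baseline.

    Returns a list of warning strings; an empty list means no regression
    beyond the tolerance (10% by default).
    """
    if not os.path.exists(path):
        return ["no baseline recorded; run uncompiled first"]
    with open(path) as f:
        base = json.load(f)
    warnings = []
    if runtime_s > base["runtime_s"] * (1 + tolerance):
        warnings.append(f"runtime regressed: {base['runtime_s']:.3f}s -> {runtime_s:.3f}s")
    if peak_mem_mb > base["peak_mem_mb"] * (1 + tolerance):
        warnings.append(f"memory regressed: {base['peak_mem_mb']:.1f}MB -> {peak_mem_mb:.1f}MB")
    return warnings

# Demo: baseline from an uncompiled run, then two compiled runs.
path = os.path.join(tempfile.mkdtemp(), "baseline.json")
record_baseline(path, runtime_s=2.0, peak_mem_mb=1000.0)
ok = check_regression(path, runtime_s=1.5, peak_mem_mb=900.0)   # faster: no warnings
bad = check_regression(path, runtime_s=3.0, peak_mem_mb=900.0)  # slower: warns
```

A real implementation would gather `runtime_s` and `peak_mem_mb` from timers and allocator statistics rather than take them as arguments; the point of the sketch is the store-then-compare shape of the workflow.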
Benefits
/cc @ezyang @msaroufim @bdhirsh @anijain2305 @chauhang
Alternatives
No response
Additional context
No response