Issues: pytorch/pytorch
nn.Transformer gives different output in torch.no_grad() context
#128209 opened Jun 7, 2024 by IndoorAdventurer
Wrong result for inplace tensor update on transpose for some devices with torch 2.3.0
#128202 opened Jun 7, 2024 by jerrychenhf
[aot autograd][inline inbuilt nn modules] AOT Autograd changes _version different from eager
Labels: oncall: pt2
#128198 opened Jun 7, 2024 by anijain2305
swap_tensors fails when calling nn.Module.to on XLA DDP wrapped models
Labels: module: xla
#128165 opened Jun 6, 2024 by ysiraichi
[inductor] Lowering error with torch.full
Labels: module: inductor, oncall: pt2
#128161 opened Jun 6, 2024 by williamwen42
[dynamo] Dynamo traces through __torch_dispatch__ on custom tensor subclasses
Labels: module: dynamo, module: pt2-dispatcher, oncall: pt2, triaged
#128160 opened Jun 6, 2024 by williamwen42
Inlining nn modules and FSDP
Labels: oncall: distributed
#128154 opened Jun 6, 2024 by laithsakka
torch.compile Jamba: long compilation time with backend="eager"
Labels: module: dynamo, module: startup-tracing-compile, oncall: pt2, triaged
#128153 opened Jun 6, 2024 by xmfan
torch.compile with custom tensor subclass doesn't inline the tensor subclass methods
Labels: module: dynamo, tensor subclass, triaged
#128149 opened Jun 6, 2024 by tugsbayasgalan
torch.compile Jamba: Scheduler Error in codegen for ComputedBuffer
Labels: module: inductor, oncall: pt2
#128147 opened Jun 6, 2024 by xmfan
AMP guards recompilation: dtype mismatch, expected Half, actual Float
Labels: module: dynamo, oncall: pt2, triaged
#128134 opened Jun 6, 2024 by bhack
Unable to record Memory consumption with torch.cuda.memory._record_memory_history()
Labels: module: cuda, needs reproduction, triaged
#128131 opened Jun 6, 2024 by GeJulia
PyTorch fails to build on ppc64le
Labels: actionable, module: build, module: multi-headed-attention, module: POWER, module: regression, triage review
#128130 opened Jun 6, 2024 by mikejuliet13
Fake tensor support of distributed ops in FX graph tracing
Labels: oncall: distributed, triaged
#128128 opened Jun 6, 2024 by xuzijian629
Stateless random numbers
Labels: feature, module: random, triaged
#128126 opened Jun 6, 2024 by albertz
Optional memory leak with torch.jit.script on models with fft
Labels: oncall: pt2
#128125 opened Jun 6, 2024 by elyasafm
2nd compile of deepcopy(model) fails on multiple ubuntu-pc (fatal error: Python.h: file not found)
Labels: oncall: pt2
#128121 opened Jun 6, 2024 by alex77g2
DISABLED test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 (__main__.TestSDPACudaOnlyCUDA)
Labels: module: flaky-tests, module: nn, skipped, triaged
#128119 opened Jun 6, 2024 by pytorch-bot (bot)
DISABLED test_quantization_doc_qat (__main__.TestQuantizationDocs)
Labels: module: flaky-tests, oncall: quantization, skipped
#128118 opened Jun 6, 2024 by pytorch-bot (bot)
A trivial but annoying bug in random_split
Labels: module: data, needs reproduction, triaged
#128116 opened Jun 6, 2024 by X-Engineer-001