Inductor - IR
Inductor IR is an intermediate representation that lives between ATen FX graphs and the final Triton code generated by Inductor. It was designed to faithfully represent PyTorch semantics, and accordingly it models views, mutation, and striding. When you write a lowering from ATen operators to Inductor IR, you get a TensorBox for each Tensor argument; the TensorBox contains a reference to the underlying IR (via a StorageBox, and then a Buffer/ComputedBuffer) that says how the Tensor was computed. The inner computation is represented define-by-run, which allows for a compact definition of the IR while still letting you extract an FX graph if you want one. Scheduling then takes buffers of Inductor IR and decides what can be fused. Inductor IR arguably has too many kinds of nodes; consolidating them would be a good refactoring in the future.
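The layering described above can be sketched in miniature. This is an illustrative model, not Inductor's actual classes: the `TensorBox`/`StorageBox`/`ComputedBuffer` names mirror the real ones, but their fields and the `RecordingOps` handler here are hypothetical. The key idea is that the inner computation is a plain Python function of an index, and "extracting a graph" just means re-running that function against a handler that records each op call:

```python
# Minimal sketch of define-by-run IR, loosely modeled on Inductor's
# design. Class fields and the ops handler are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ComputedBuffer:
    """Holds the define-by-run computation producing a tensor's storage."""
    name: str
    inner_fn: Callable  # inner_fn(ops, index) -> symbolic value

@dataclass
class StorageBox:
    """Indirection to the buffer, so mutation can swap the buffer out."""
    data: ComputedBuffer

@dataclass
class TensorBox:
    """What a lowering receives for each Tensor argument."""
    data: StorageBox

class RecordingOps:
    """Ops handler that records each call as a graph node, instead of
    computing anything -- this is how a graph can be extracted from a
    define-by-run inner function."""
    def __init__(self):
        self.graph = []   # list of (node_name, op, args)
        self._counter = 0

    def _emit(self, op, *args):
        name = f"v{self._counter}"
        self._counter += 1
        self.graph.append((name, op, args))
        return name

    def load(self, buffer_name, index):
        return self._emit("load", buffer_name, index)

    def add(self, a, b):
        return self._emit("add", a, b)

    def mul(self, a, b):
        return self._emit("mul", a, b)

def inner_fn(ops, index):
    """Define-by-run body for out[i] = (x[i] + y[i]) * x[i]."""
    a = ops.load("x", index)
    b = ops.load("y", index)
    return ops.mul(ops.add(a, b), a)

# A lowering would hand back the computation wrapped in the box layers:
out = TensorBox(StorageBox(ComputedBuffer("buf0", inner_fn)))

# Extract a graph by running the inner function with a recording handler.
ops = RecordingOps()
result = out.data.data.inner_fn(ops, "i0")
for name, op, args in ops.graph:
    print(name, "=", op, args)
```

Running the same `inner_fn` with a different ops handler (one that emits Triton code, say, rather than recording nodes) is what makes the define-by-run representation compact yet still convertible to an explicit graph.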