
Half precision
In this episode I talk about the reduced-precision floating-point formats float16 (aka half precision) and bfloat16. I'll discuss what floating-point numbers are, how these two formats differ, and some of the practical considerations that arise when you're working with numeric code in PyTorch that also needs to work in reduced precision. Did you know that we do all CUDA computations in float32, even if the source tensors are stored as float16? Now you know!
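You can see the trade-off between the two formats directly from PyTorch itself. Here is a minimal sketch (assuming a reasonably recent PyTorch build where torch.finfo supports bfloat16) that prints the representable range and machine epsilon of each format: float16 spends its bits on mantissa precision but tops out around 65504, while bfloat16 keeps float32's 8-bit exponent and its enormous range at the cost of much coarser precision.

```python
import torch

# Compare the range/precision trade-off of the reduced-precision formats.
# float16:  5 exponent bits, 10 mantissa bits -> small range, finer precision.
# bfloat16: 8 exponent bits,  7 mantissa bits -> float32's range, coarse precision.
for dtype in (torch.float16, torch.bfloat16, torch.float32):
    info = torch.finfo(dtype)
    print(f"{str(dtype):>14}  max={info.max:.3e}  eps={info.eps:.2e}")
```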
Further reading.
- The Wikipedia article on IEEE floating point is pretty great https://en.wikipedia.org/wiki/IEEE_754
- How bfloat16 works out when doing training https://arxiv.org/abs/1905.12322
- Definition of acc_type in PyTorch https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/AccumulateType.h
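To make the accumulation point concrete, here is a small sketch of why the wider accumulator matters (the dtypes involved are governed by acc_type, linked above; I'm assuming a recent PyTorch where reductions over float16 inputs accumulate in float32). A naive running sum kept in float16 stalls at 2048, because 2048 + 1 rounds back to 2048 in that format, while PyTorch's own sum() recovers the exact total.

```python
import torch

x = torch.ones(10000, dtype=torch.float16)

# Naive accumulation entirely in float16: once the running total reaches
# 2048, adding 1 no longer changes it (the spacing between adjacent
# float16 values at that magnitude is 2), so the loop stalls.
acc = torch.tensor(0.0, dtype=torch.float16)
for v in x:
    acc = acc + v
print("float16 accumulator:", acc.item())  # expected: 2048.0

# PyTorch's reduction accumulates in a wider type (float32, per acc_type),
# so the result is exact even though input and output are float16.
print("torch.sum:", x.sum().item())        # expected: 10000.0

# The same applies to CUDA kernels, which is the point made in the episode.
if torch.cuda.is_available():
    print("torch.sum (CUDA):", x.cuda().sum().item())
```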