In 2001, Torch was written and released under a GPL license. It was a machine-learning library written in C++, supporting methods including neural networks,
support vector machines (SVMs),
and hidden Markov models. Around 2010, it was rewritten by Ronan Collobert, Clement Farabet and Koray Kavukcuoglu as Torch7, also known as LuaTorch, with its backend written in
C and its frontend in
Lua. In mid-2016, some developers refactored it to decouple the frontend and the backend, with strong influence from torch-autograd and
Chainer. In turn, torch-autograd was influenced by HIPS/autograd. Development on Torch7 ceased in 2018, and the project was subsumed by PyTorch. Meta (formerly known as Facebook) maintained both PyTorch and Convolutional Architecture for Fast Feature Embedding (
Caffe2), but models defined by the two frameworks were mutually incompatible. The
Open Neural Network Exchange (ONNX) project was created by Meta and
Microsoft in September 2017 to decouple deep learning frameworks from hardware-specific runtimes, allowing models to be converted between frameworks and optimized for execution providers like NVIDIA’s TensorRT. Caffe2 was merged into PyTorch at the end of March 2018. In September 2022, Meta announced that PyTorch would be governed by the independent PyTorch Foundation, a newly created subsidiary of the
Linux Foundation. PyTorch 2.0 was released on 15 March 2023, introducing
TorchDynamo, a Python-level
compiler that makes code run up to two times faster, along with significant improvements in training and inference performance across major
cloud platforms.

==PyTorch tensors==