Facebook has open-sourced Caffe2. The deep learning framework follows in the footsteps of the original Caffe, a project started at the University of California, Berkeley. Caffe2 offers developers greater flexibility for building high-performance products that deploy efficiently. This isn’t the first time Facebook has engaged with the Caffe community. Back in October, Facebook announced Caffe2Go, effectively a CPU- and GPU-optimized version of Caffe2 for mobile (the two even share “Caffe2” in their names if you parse it right). Caffe2Go drew attention at the time because its release coincided with Style Transfer.
Notably, the company also released extensions to the original Caffe. Most of these changes make Caffe more attractive to developers building services for large audiences. For projects where resources are of no consequence, Facebook has historically turned to Torch, a library it finds optimal for research use cases. Every tech company wants to tout the scalability of its machine learning framework of choice. I asked Yangqing Jia, the lead developer of Caffe2, what he thought of MXNet and the noise Amazon has been making about its ability to scale. Reasonably, he was cautious about dropping benchmark numbers for comparison. These numbers can be meaningful, but they are heavily influenced by how a machine learning model is actually implemented and are subject to a fair amount of “DIY” volatility.