Optimising TensorFlow performance on multicore CPUs
TensorFlow is a popular Python library for developing machine learning models, with a wide range of applications. Machine learning, and in particular deep learning, can be computationally very demanding, so TensorFlow is typically used with GPUs or specialised hardware. However, almost every modern computer has multiple CPU cores with considerable computational power. Running TensorFlow on multicore CPUs can therefore be an attractive option, for example when a workflow is dominated by I/O, so that faster computational hardware has little impact on runtime, or simply when no GPUs are available.
This talk will discuss which TensorFlow package to choose and how to optimise performance on multicore CPUs. As an example of a real-world application, we will also compare training and inference runtimes of a deep learning model across different CPU and GPU configurations.
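As a minimal sketch of the kind of tuning involved, the snippet below pins TensorFlow's CPU thread pools to the available core count via environment variables. The specific thread counts are illustrative assumptions, not recommendations from the talk; the right values depend on the model and the machine, and the variables must be set before TensorFlow is first imported.

```python
import os
import multiprocessing

# Number of cores visible to this process (logical cores; an assumption —
# on some systems the physical core count is the better choice).
n_cores = multiprocessing.cpu_count()

# Threads used inside a single op's kernels (e.g. a large matmul),
# including the oneDNN/OpenMP thread pool.
os.environ["OMP_NUM_THREADS"] = str(n_cores)
os.environ["TF_NUM_INTRAOP_THREADS"] = str(n_cores)

# Independent ops that may run concurrently; "2" is an illustrative value.
os.environ["TF_NUM_INTEROP_THREADS"] = "2"

# TensorFlow must be imported only after the variables above are set,
# e.g.:  import tensorflow as tf
```

The same settings can also be applied programmatically with `tf.config.threading.set_intra_op_parallelism_threads` and `tf.config.threading.set_inter_op_parallelism_threads`, provided they are called before any ops execute.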