Deep learning models often require a large amount of computational power to train. This requirement is usually met by parallelizing the execution of these models, which can reduce training time by several orders of magnitude. Because of this benefit, researchers have shifted from CPU-based processing to a GPU-based approach (using cards such as the NVIDIA Tesla) when training their models. However, these GPUs and GPU clusters are often quite expensive and thus inaccessible to most.
Google recently launched its Colaboratory project: essentially a Jupyter notebook environment that requires no setup and runs entirely in the cloud. It gives you free access to NVIDIA Tesla K80 GPUs.
You can choose either no hardware acceleration or a GPU hardware accelerator, which connects you to a GPU-backed runtime (provided that GPUs are available).
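Once a runtime is connected, it is worth verifying that a GPU is actually attached. One minimal way to sketch this check, using only the Python standard library (so it runs anywhere, not just in Colab), is to look for the NVIDIA driver tool `nvidia-smi`:

```python
# Sketch: check whether the current runtime has a GPU attached.
# Uses only the standard library; on a CPU-only runtime it simply
# reports that no GPU was found.
import shutil
import subprocess

def gpu_available() -> bool:
    """Return True if `nvidia-smi` is present and runs successfully."""
    if shutil.which("nvidia-smi") is None:
        return False
    result = subprocess.run(["nvidia-smi"], capture_output=True)
    return result.returncode == 0

if gpu_available():
    print("GPU runtime detected")
else:
    print("No GPU found -- select Runtime > Change runtime type > GPU")
```

Deep learning frameworks offer their own checks (for example, `torch.cuda.is_available()` in PyTorch), but the sketch above works even before any framework is imported.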
Colab also comes with several popular machine learning and deep learning libraries pre-installed, making it essentially plug and play.
It also sidesteps some of the challenges and nuances of setting up CUDA for a local GPU (which is a decent saving grace).
Personally, I would say one of Colab's biggest downsides is GPU availability.
Although powerful, the GPUs are shared between virtual machines, so you will often find that no GPU is available.
Another issue that may creep up is that you have to upload your datasets to the cloud, which is quite cumbersome.
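One common workaround for the dataset problem is to keep data in Google Drive and mount it into the runtime with the `google.colab` package. Since that package exists only inside Colab, the sketch below guards the import and falls back gracefully elsewhere (the mount point path is Colab's conventional default):

```python
# Sketch: mount Google Drive to avoid re-uploading datasets each session.
# `google.colab` is available only inside a Colab runtime, so the import
# is guarded and the function returns None when run anywhere else.
def mount_drive(mount_point="/content/drive"):
    """Mount Google Drive when running inside Colab; return the mount
    point on success, or None when not in a Colab runtime."""
    try:
        from google.colab import drive  # only importable inside Colab
    except ImportError:
        print("Not running in Colab; skipping Drive mount.")
        return None
    drive.mount(mount_point)
    return mount_point

path = mount_drive()
```

After mounting, files under `My Drive` are readable like any local path, so a dataset uploaded to Drive once can be reused across sessions.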
Nevertheless, Colab gives you a remarkable amount of capability free of cost, and it is definitely worth a shot.
You can try Colab at: https://colab.research.google.com/
Learning Resources on Using Google Colab