edit: Could be wrong, but I thought I read AWS was $0.65/hr for deep learning GPU use.
edit2: Did a quick look; the $0.65 doesn't include the actual instance, so it's around $1.80/hr on the low end. I think this is cheaper.
p2.xlarge comes with an NVIDIA Tesla K80 GPU for $0.90/hr, but that's now an "old" GPU, and the Quadro RTX 6000 should have much higher performance (though I was unable to find any machine learning benchmarks).
p3.2xlarge has an NVIDIA Tesla V100 GPU, which is NVIDIA's most recent deep learning GPU, but it's $3.06/hr.
That said, AWS is among the most expensive providers if you just need a deep learning GPU (though obviously AWS offers a lot of other useful things). For example, OVH Public Cloud has the Tesla V100 for $2.66/hr. And comparable NVIDIA GPUs that are not "datacenter-grade" should be even cheaper; AWS, GCP, Azure, etc. are unable to offer those because of contract terms attached to their purchases of datacenter GPUs like the Tesla V100.
The K80s are super outdated now. Google used to offer them for free for 12hrs a day on their Colab platform, but they've upgraded those to Tesla T4s. Note you can get a K80 on GCP (unreserved) for $0.45/hr.
The K80 is the crappiest card I used last year. It was a good choice two years ago, but now you're better off upgrading, since just about any new desktop card beats the K80.
It depends. For full-time usage it's a bit more expensive, but only by a few hundred dollars a month, probably less. We happily migrated away from AWS, as a single GPU instance cost us near $1k/month. BTW, the newest (and now the only available) GPU instances should be the better RTX 6000s, even if they're more expensive.
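To see how the hourly rates in this thread translate to full-time monthly cost, here's a quick back-of-the-envelope sketch. It assumes ~730 billable hours per month (24/7 usage) and uses the rates quoted above; the instance labels are just for illustration.

```python
# Rough monthly cost for 24/7 usage, from the hourly rates quoted in this thread.
HOURS_PER_MONTH = 730  # ~24 * 365 / 12

rates = {
    "AWS p2.xlarge (K80)": 0.90,
    "AWS p3.2xlarge (V100)": 3.06,
    "OVH Public Cloud (V100)": 2.66,
    "GCP K80, unreserved": 0.45,
}

for name, hourly in rates.items():
    monthly = hourly * HOURS_PER_MONTH
    print(f"{name}: ${hourly:.2f}/hr -> ${monthly:,.2f}/month")
```

Even the "cheap" $0.90/hr K80 instance lands around $650/month when left running full time, which is why dedicated hardware or a cheaper provider can pay off quickly for continuous workloads.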