This rock-paper-scissors science experiment is a novel use of TensorFlow. Over the past year, TensorFlow has been seen automating cucumber sorting, finding sea cows in aerial imagery, sorting diced potatoes to make baby food safer, identifying skin cancer, helping to interpret bird call recordings in a New Zealand bird sanctuary, and identifying diseased plants of the most popular root crop on Earth in Tanzania!
Training state-of-the-art machine learning models requires an enormous amount of computation, and researchers, engineers, and data scientists often wait weeks for results. To solve this problem, Google “designed an all-new ML accelerator from scratch — a second-generation TPU, or Tensor Processing Unit — that can accelerate both training and running ML models.”
Each device delivers up to 180 teraflops of floating-point performance, and these new TPUs are designed to be connected into even larger systems. A 64-TPU pod can apply up to 11.5 petaflops of computation to a single ML training task.
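The pod number follows directly from the per-device number. As a quick arithmetic check (plain Python, purely illustrative):

```python
# Peak throughput figures from the announcement.
teraflops_per_tpu = 180   # peak TFLOPS per second-generation TPU
tpus_per_pod = 64         # TPU devices in one pod

pod_teraflops = teraflops_per_tpu * tpus_per_pod
pod_petaflops = pod_teraflops / 1000   # 1 petaflop = 1,000 teraflops

print(pod_petaflops)  # 11.52, quoted in the post as "up to 11.5 petaflops"
```

So the 11.5-petaflop pod figure is simply 64 devices running at their 180-teraflop peak.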
We’re extremely excited about these new TPUs, and we want to share this technology with the world so that everyone can access their benefits. That’s why we’re bringing our second-generation TPUs to Google Cloud for the first time as Cloud TPUs on Google Compute Engine (GCE). You’ll be able to mix and match Cloud TPUs with Skylake CPUs, NVIDIA GPUs, and the rest of our infrastructure and services to build and optimize the perfect machine learning system for your needs. Best of all, Cloud TPUs are easy to program via TensorFlow, the most popular open-source machine learning framework.
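The post itself shows no code, but the portability claim can be illustrated with a minimal sketch (assuming a current TensorFlow install): the same op definitions run unchanged on CPUs, GPUs, or Cloud TPUs, with device placement handled by the runtime (targeting a Cloud TPU in practice additionally involves a distribution strategy, not shown here).

```python
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# A device-agnostic computation: TensorFlow places this matrix multiply
# on whichever device (CPU, GPU, or Cloud TPU) is available to the runtime.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
c = tf.matmul(a, b)

print(c.numpy().tolist())  # [[19.0, 22.0], [43.0, 50.0]]
```

The point of the sketch is that nothing in the model code names the accelerator; that is what makes mixing Cloud TPUs with CPUs and GPUs in one system practical.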
Google’s new Cloud TPUs deliver up to 180 teraflops of machine learning acceleration