Last week, Google announced its development of the Tensor Processing Unit, a custom application-specific integrated circuit. The TPU was built specifically for machine learning applications and has apparently been running in Google’s own data centers for over a year.
The Tensor Processing Unit is tailored for TensorFlow, the software library developed by Google for machine intelligence. Google turned over the plans for TensorFlow to the open source community just last year.
According to Google, TPUs deliver an order of magnitude better performance per watt for machine learning. Google representatives have said that the TPU’s advantage over any other chip of its kind is comparable to fast-forwarding technology about seven years, or three generations of Moore’s Law.
Some analysts, like Kevin Krewell of Tirias Research, have taken issue with the “misleading” nature of this claim:
“It only works on 8-bit math,” he countered. “It’s basically like a Z80 microprocessor in that regard. All that talk about it being three generations ahead refers to processors a year ago, so they’re comparing it to 28-nm processors.”
“By stripping out most functions and using only necessary math, Google has a chip that acts as though it was a more complex processor from a couple generations ahead,” Krewell said.
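Krewell’s point about 8-bit math refers to quantization: representing weights and activations as 8-bit integers instead of 32-bit floats, which cuts memory traffic and lets far simpler arithmetic units do the work. The sketch below is a generic linear-quantization example in Python, not Google’s actual TPU scheme, which has not been published in detail; the function names and the symmetric scaling are illustrative assumptions.

```python
import numpy as np

def quantize_to_int8(weights):
    """Linearly map float32 weights onto the int8 range [-127, 127].

    Uses a single symmetric scale factor per tensor (an illustrative
    choice; real systems may use per-channel or asymmetric scales).
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

weights = np.array([0.82, -1.4, 0.05, 0.61], dtype=np.float32)
q, scale = quantize_to_int8(weights)
approx = dequantize(q, scale)
# Each value now fits in one byte, at the cost of a small rounding error
# bounded by half the scale factor.
```

A chip that only has to multiply and accumulate 8-bit integers can devote far more of its die area to those units than a general-purpose processor can, which is the trade-off behind the “stripping out most functions” characterization.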
In addition, some have pointed out that the Moore’s law reference is somewhat confusing given the actual context and function of the TPU. Rob Enderle, principal analyst at the Enderle Group, explained that Moore’s law actually centers on transistor density and “tends to be tied to parts that are targeted at calculation speed.” Google’s TPU, on the other hand, “is more focused on calculation efficiency, so it likely won’t push transistor density. I don’t expect it to have any real impact on Moore’s law.”
That’s not to say that the TPU’s development isn’t expected to have an impact on the realm of machine learning development, especially in Google’s own facilities.
“Clearly, hyperscale cloud operators are gradually becoming more vertically integrated, so they move more into designing their own equipment,” stated John Dinsdale, chief analyst at Synergy Research Group. This could “help them strengthen their game,” he predicted.
Enderle believes that the new processor could make Google “a much stronger player with AI-based products, but ‘could’ and ‘will’ are very different words, and Google has been more the company of ‘could but didn’t’ of late.”
According to Enderle, the TPU could help Google scale up its query engine significantly and enable higher-density servers that could handle a higher number of questions simultaneously. That said, Enderle was quick to state that Google’s efforts “tend to be under-resourced, so it’s unlikely to meet its potential unless that practice changes.”
It’s also worth noting that the TPU isn’t the first processor developed specifically for machine learning. Intel offers its own line of Xeon Phi processors as part of its Scalable System Framework. Intel’s SSF aims to bring machine learning and high-performance computing into the exascale era.
Some industry professionals have questioned whether the TPU could even be helpful to typical IT companies. Francisco Martin, CEO of BigML, stated that while the TPU “will have a big effect and impact in data-intensive research, most business problems and tasks can be solved with simpler machine learning approaches… Only a few companies have the volume of data that Google manages.”
According to Martin, even for companies that do handle high quantities of data, the TPU is “tailored to very specific machine learning applications based on Google’s TensorFlow” and “requires tons of fine-tuning to be useful.”
That said, as data volumes continue to grow, many companies will need help handling their digital information, and purpose-built hardware like the TPU is one sign of where that help may come from.