Why Google Is Rushing To Conquer The Edge AI Hardware Market


With an aim to bring artificial intelligence to the edge, Google has introduced a brand-new application-specific integrated circuit (ASIC). At the recently concluded Google Cloud Next conference, the company unveiled Edge TPU, an ASIC that can run TensorFlow Lite machine learning models on mobile and embedded devices. In what is clearly a sign of Google bolstering its internet of things and AI portfolio, the search giant has brought TPUs to the edge, enabling the deployment of high-accuracy AI. Reportedly, Google CEO Sundar Pichai underscored that ML is a major differentiator for Google, and that is true for its Google Cloud customers as well.

Google’s newly introduced Edge TPU is a purpose-built chip designed to run high-performance ML inference directly on mobile and embedded devices. It complements Cloud TPU and Google Cloud services, a clear sign that the Mountain View tech giant aims to extend its cloud offerings to the edge. With Edge TPU, Google now provides an end-to-end, cloud-to-edge, hardware-plus-software infrastructure for deploying customers’ AI-based solutions. It also comes with a new software stack, Cloud IoT Edge, which means enterprises can train ML models with Google’s cloud-based TPUs, then deploy and run them directly on an edge processor. Cloud IoT Edge is optimised to run on mobile and embedded systems via operating systems such as Linux and Android Things.


Moving Processing From The Cloud To The Edge
AI processing is seeing a significant shift to the edge as chip vendors gear up to provide edge AI inference, ABI Research notes. The firm estimates that the share of AI inference workloads handled at the edge will grow from just 6 percent in 2017 to 43 percent in 2023. Jack Vernon, industry analyst at ABI Research, says:

“The shift to the edge for AI processing will be driven by cheaper edge hardware, mission-critical applications, a lack of reliable and cost-effective connectivity options, and a desire to avoid expensive cloud implementation.”

Another reason edge AI is gaining prominence is that the market is leaning towards robust analytics. Users and players from the manufacturing, healthcare, wearables and robotics markets are now incorporating edge AI into their roadmaps, Vernon stated.

According to an IDC report, the chip market is expected to exceed $459 billion in 2018, and IDC estimates that the total amount of data generated by connected devices will exceed 40 trillion gigabytes by 2025. This is an area in which traditional chipmakers are investing significantly to accelerate AI development at the edge.

Google Looks To Conquer A Bigger Market
According to a Google blog post, there are many benefits to intelligent, real-time decision-making at the point where these devices connect to the network — what’s known as the “edge.” It cites several benefits of Edge TPU: for example, it can be used for a number of industrial use cases such as predictive maintenance, anomaly detection, machine vision, robotics and voice recognition. It can also be deployed in several sectors, such as healthcare, manufacturing, retail and transportation, among others.

This opens up a world of possibilities in the smart-device ecosystem. For example, Edge TPU and Cloud IoT Core will usher in new possibilities for IoT. With powerful data processing and ML capabilities at the edge, devices such as robotic arms, wind turbines and smart cars can now act on data from their sensors in real time and predict outcomes locally.
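The kind of local, real-time decision-making described above can be sketched as a simple on-device loop: read a sensor, check the reading against a locally maintained model, and act without a round trip to the cloud. The detector below is a hypothetical, minimal illustration (the class, window size and threshold are our own choices, not part of any Google API):

```python
from collections import deque

class LocalAnomalyDetector:
    """Rolling-statistics anomaly check meant to run on the device
    itself, so no sensor reading has to leave for the cloud."""

    def __init__(self, window: int = 5, threshold: float = 2.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` deviates from the recent local mean
        by more than `threshold` times the local standard deviation."""
        anomalous = False
        if len(self.readings) >= 2:
            mean = sum(self.readings) / len(self.readings)
            spread = (sum((r - mean) ** 2 for r in self.readings)
                      / len(self.readings)) ** 0.5
            anomalous = spread > 0 and abs(value - mean) > self.threshold * spread
        self.readings.append(value)
        return anomalous

# Simulated vibration readings from, say, a wind-turbine sensor;
# the final reading represents a fault the device should flag locally.
detector = LocalAnomalyDetector()
stream = [0.50, 0.52, 0.49, 0.51, 0.50, 3.00]
flags = [detector.observe(v) for v in stream]
print(flags)
```

In a real deployment, the threshold rule would be replaced by an ML model running on the edge accelerator, but the control flow — sense, infer locally, act — stays the same.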


The blog also indicates that, like the Edge TPU development kit, the Edge TPU Accelerator will allow ML inference to be processed directly on-device. It points out that a local ML accelerator increases privacy, reduces latency and paves the way for higher performance at lower power.
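Much of that performance-per-watt gain comes from running models in 8-bit integer form rather than 32-bit floating point; the Edge TPU, for instance, executes quantised TensorFlow Lite models. As a rough, illustrative sketch (our own simplification, not Google's implementation), affine quantisation maps each float to a small integer using a scale and a zero point:

```python
def quantize(values, num_bits=8):
    """Map floats to unsigned num_bits integers using the affine
    scheme q = round(x / scale) + zero_point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)  # range must cover 0
    scale = (hi - lo) / (qmax - qmin)
    if scale == 0:                      # all-zero input: any scale works
        scale = 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantised integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
print(q, scale, zp)
```

Each 8-bit value takes a quarter of the memory of a 32-bit float, and integer multiply-accumulate units are far cheaper in silicon and power, which is what makes dedicated edge inference chips viable.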

With a view to broadening its customer base and helping businesses become more cost-effective by scaling edge hardware and running mission-critical applications more efficiently, Google aims to corner a chunk of the AI hardware market. According to John Heard, CTO of Smart Parking, edge-based ML inference is vital to delivering reliable, live, low-latency and cost-effective smart-city IoT, and Cloud IoT Edge and Edge TPU unlock these capabilities in new ways for the next generation of Smart Parking systems.

However, Google is not alone in introducing a chip specifically for inference. This May, Microsoft announced Project Brainwave, a hardware architecture for running real-time AI calculations on FPGAs, in an attempt to make FPGAs more general-purpose for its customers.
