
# Machine Learning at the Edge: TinyML Is Getting Big

## Being able to deploy machine learning applications at the edge is the key to unlocking a multibillion-dollar market. TinyML is the art and science of producing machine-learning models frugal enough to work at the edge, and it's seeing rapid growth.

Is it $61 billion and 38.4% compound annual growth rate (CAGR) by 2028, or $43 billion and 37.4% CAGR by 2027? It depends on which report outlining the growth of edge computing you go by, but in the end the difference is not that significant.

What matters is that edge computing is booming. There is growing interest from vendors, and ample coverage, for good reason. Although the definition of what constitutes edge computing is a bit fuzzy, the idea is simple: take compute out of the data center and bring it as close to where the action is as possible.

Whether it's stand-alone Internet-of-things sensors, devices of all kinds, drones, or autonomous vehicles, there's one thing in common. Increasingly, data generated at the edge are used to feed applications powered by machine learning models. There's just one problem: machine learning models were never designed to be deployed at the edge. Not until now, at least. Enter TinyML.

Tiny machine learning (TinyML) is broadly defined as a fast-growing field of machine-learning technologies and applications, including hardware, algorithms, and software, capable of performing on-device sensor data analytics at extremely low power, typically in the milliwatt range and below. This enables a variety of always-on use cases on battery-operated devices.
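One of the core techniques that makes models frugal enough for such devices is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats. The sketch below is illustrative only, assuming a simple affine quantization scheme; the function names are hypothetical and not taken from any specific framework.

```python
# Minimal sketch of affine 8-bit quantization, a technique TinyML
# toolchains commonly use to shrink models for edge deployment.
# (Illustrative names; not from any particular library.)

def quantize(weights, num_bits=8):
    """Map float weights to signed integers via an affine scheme."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against all-equal weights
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the integer encoding."""
    return [(qi - zero_point) * scale for qi in q]

weights = [0.12, -0.5, 0.33, 0.91, -0.07]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
# Each 32-bit float is now stored in 8 bits (a 4x reduction),
# at the cost of a small rounding error per weight.
```

The trade-off is exactly the one TinyML lives with: a roughly four-fold reduction in model size and cheaper integer arithmetic, in exchange for a bounded rounding error in each weight.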

This week, the inaugural TinyML EMEA Technical Forum is taking place, and it was a good opportunity to talk with some key people in this domain. ZDNet caught up with Evgeni Gousev from Qualcomm, Blair Newman from Neuton, and Pete Warden from Google.