The chip is less than 4.5 mm across and weighs less than 2 oz. Nonetheless, it is pushing the power of artificial intelligence (AI) to the edge.
Oceanit designed the Nerro AI microchip to facilitate the use of artificial intelligence in edge computing devices, and some see great potential in edge AI for streamlining operations. Hani Elshahawi, digitalization lead for deepwater technologies at Shell, presented one such example at a recent edge computing conference: using AI at the edge to facilitate visual inspection of subsea facilities.
Traditionally, subsea asset integrity inspection has relied on hours of video being shipped to the enterprise site, where a person reviewed it, made notes, and entered them into a spreadsheet saved in a database. AI can now be trained to review the footage and search for anomalies, but a tedious separation between data collection and data analysis remains. Putting AI at the edge could eliminate that gap.
“You can simplify a lot of the computational power with artificial neural networks,” Elshahawi said, adding, “Today, still you have to do all your training essentially at the enterprise and then embed it at the edge. But the challenge you might encounter is, how do I update this model? Can you do it much more efficiently, and can you do it at the edge? And the concept here is that you go toward hardware-driven neuromorphic computing devices that could be field-deployed but also very low power and very small form factor.”
“The idea,” he added, “is to change the form factor so that you can put it on what looks like a USB stick.”
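In practice, the train-centrally, deploy-at-the-edge pattern Elshahawi describes often looks something like the sketch below, which assumes a PyTorch model exported to the portable ONNX format. The placeholder network, input shape, and file name are illustrative, and a neuromorphic device such as the Nerro would use its own vendor toolchain rather than ONNX.

```python
# Sketch of "train at the enterprise, embed at the edge," assuming PyTorch/ONNX.
# The network, input shape, and file name below are placeholders, not details
# from the article.
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights=None)  # stand-in network
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # example input the edge device expects
torch.onnx.export(model, dummy, "edge_model.onnx", opset_version=17)

# The exported file is copied onto the edge device; updating the model means
# retraining at the enterprise, re-exporting, and shipping the new file.
```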
AI chips such as Oceanit’s Nerro unite data access and computation in one place. “With traditional computing,” Elshahawi said, “you’re separating the CPU [central processing unit] from your RAM [random access memory], you’re separating your data access from your computational capabilities. Here, it’s really all in one. It’s bringing them in one place—an AI chip.” He noted the efficiency of the Nerro chip as another benefit. “It’s very energy efficient, like 100,000 recognitions per second for milliwatts—for 1,000 neurons,” he said. “There’s no way you’re going to get that with a GPU [graphics processing unit].”
To train the AI, Elshahawi and his team used about 3.3 TB of historical data, including a database of 85,000 labeled images, enabling the model to detect approximately 15 classes of subsea objects.
“We used automated object detection using deep supervised learning,” Elshahawi said. “We used a video device to do this. But this is abundantly used today, for example in autonomous vehicles to detect objects and navigate around those.”
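As a rough illustration of what automated object detection with deep supervised learning involves, the sketch below fine-tunes an off-the-shelf detector for 15 object classes. Only the class count comes from the article; the framework (PyTorch/torchvision), the model choice (Faster R-CNN), and the training loop are assumptions, not Shell’s actual pipeline.

```python
# Hedged sketch of supervised object detection training; the model choice and
# training details are illustrative assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 15 + 1  # 15 subsea object classes plus background

def build_model():
    # Start from a detector pretrained on generic imagery, then replace the
    # classification head so it predicts the subsea classes.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

def train_one_epoch(model, loader, optimizer, device):
    model.train()
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        losses = model(images, targets)  # detector returns per-head losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = build_model().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    # One synthetic batch standing in for a DataLoader over the labeled images.
    images = [torch.rand(3, 480, 640)]
    targets = [{"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
                "labels": torch.tensor([3])}]
    train_one_epoch(model, [(images, targets)], optimizer, device)
```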
Once the AI is embedded at the edge, Elshahawi said, different chips can be assigned different tasks. “For example, you can have one of the chips do the color extraction,” he said, “and another one work on the texture extraction. Then you can combine all that and you can come up with a rule-based decision.”
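A toy sketch of that division of labor might look like the following, with one function standing in for the chip doing color extraction, another for the chip doing texture extraction, and a hand-written rule combining the two. The feature definitions and thresholds are illustrative assumptions, not details from the article.

```python
# Illustrative sketch of per-chip task assignment: color features from one
# stage, texture features from another, combined by a rule-based decision.
# All thresholds and feature definitions here are assumptions.
import numpy as np

def color_features(frame: np.ndarray) -> np.ndarray:
    # Mean intensity per channel as a crude color signature (one chip's task).
    return frame.reshape(-1, 3).mean(axis=0)

def texture_features(frame: np.ndarray) -> float:
    # Gradient-energy variance as a crude texture measure (another chip's task).
    gray = frame.mean(axis=2)
    gy, gx = np.gradient(gray)
    return float((gx**2 + gy**2).var())

def rule_based_decision(color: np.ndarray, texture: float) -> str:
    # Combine the two feature streams with a hand-written rule, e.g., flag
    # frames that are unusually red and unusually textured.
    if color[0] > 120.0 and texture > 50.0:
        return "anomaly: flag for review"
    return "normal"

# Random frame standing in for a subsea video frame (H x W x RGB).
frame = np.random.randint(0, 256, (480, 640, 3)).astype(np.float32)
print(rule_based_decision(color_features(frame), texture_features(frame)))
```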