AI: Feel Free to Dive In
August 30, 2023
Sponsored Story
Welcome to Part II of this two-part blog. In Part I, I explained why AI is an enabler of applications, not an application unto itself. This is a fairly common misconception, especially because AI is such a new technology.
I also looked at some of the near-term uses of AI and how it fits into the consumer and industrial realms. Here in Part II, I’ll look a little further down the road, dive into the AI ecosystem, and provide some useful tools for engineers looking to implement this essential technology.
Because the technology is so new, the ecosystem around it is still evolving. That’s not necessarily a good thing if you’re a developer, but it’s far enough along that you could take the AI plunge now with a high level of confidence in the tools. Just know that the evolution is happening very quickly. The biggest reason for this rapid rate of change has less to do with the technology itself and more to do with the potential revenue it can bring, as we’re just beginning to understand how important AI can be in the embedded space.
Still, it’s clear that the various pieces of the AI ecosystem are not evolving at the same rate. For example, the IP part of the equation appears to be moving faster than the tooling and infrastructure pieces. The offerings from Synaptics are good examples of the former. And because the tooling and infrastructure pieces are slower to evolve, they tend to become fragmented.
“The components needed for the hardware side are beginning to settle out,” says Vikram Gupta, the Chief Product Officer at Synaptics. “Over the last year or two, a slew of companies have come up with their own hardware and/or IP solutions. Some have survived, some have not. Here at Synaptics, we try to take a more holistic view of the AI journey, emphasizing IP, hardware, and tool support.”
At the same time, we are seeing the traditional IP providers become de facto ecosystem players; examples include Synopsys, Cadence, and Arm. A big reason for this is that it takes a lot of resources to produce the IP for AI applications.
On the tooling side, the ecosystem is less clear, and frankly, it’s still difficult to see how it will evolve. Again, the bigger players have the resources to invest in these technologies. But because the direction is still being defined, it’s difficult for the developers to know when to jump in.
Gupta says, “It's fairly confusing for people who are in the space. So, for people on the outside looking for solutions, it appears even more fragmented.”
Current Dependencies Fall on the Processor Vendors
For better or worse, the onus of building out the ecosystem, at least those parts that aren’t readily available, is falling on the shoulders of the AI processor vendors, out of necessity—you can’t sell any chips if developers can’t do anything with them. While this gives those vendors control of the market, it also limits what can be done. The upside is that it lets the vendor focus on its key customers. The downside is that the vendor might bear more of the burden than it can handle.
In some cases, the processor vendors have completed this task through acquisition, as when Renesas acquired Reality AI or Infineon acquired Imagimob. If you invest in the right technology, you’re good. If not, it could set you back for some time. And the jury is still out on these two relatively high-profile purchases.
AI Isn’t the Be All, End All
A very important point that needs to be made is that AI isn’t for everything. It’s an impressive new technology, but that doesn’t mean it’s required in every design. There’s definitely a place for it, but that place is not everywhere.
For developers who simply want to drop in a hardware engine, there’s a minimal (easy) way to do that with an algorithm or model. But if you’re not precise about what you’re trying to accomplish, your end product will not perform optimally. It’s critically important to understand the goal—where you want to end up—before you set out on the journey. And the processor vendors must keep asking (and answering), “How do we make it easy to get from the starting point to the finish line?”
Developers tend to put a lot of emphasis on the hardware portion of the design when, oftentimes, that’s not necessary, because much of that work has already been done by the processor vendor. A more efficient approach is to rely on the vendor for the hardware portion, then spend your time developing the software and algorithms.
The key question again is, what problem am I trying to solve and what is the end market? Thinking about the models, what do we need to visualize? What is the data set? These are all software dependencies. In other words, worry less about the hardware metrics and more about the task that you’re actually trying to accomplish.
One important distinction we need to make is that AI for industrial applications can be very different from AI for consumer applications. In industrial settings, it’s easier to demonstrate favorable outcomes: automation can improve results and help the bottom line, for example by replacing humans with robots where safety is a concern. This is one reason AI is being adopted more rapidly in industrial applications.
The bottom line is that the discussion shouldn’t focus on AI, it should focus on the end application. Then you make the determination whether AI can help you achieve your goal. But it’s important to remember that AI is just another tool for the designer to implement, albeit a newer and more complex tool.