Artificial Intelligence

Imagination "embraces" Edge AI capabilities in its GPUs

23rd May 2025
Caitlin Gittins

Imagination Technologies recently released its E-Series of GPUs, notable for their Edge AI capabilities. Kristof Beets, VP of Product Management at the company, took Electronic Specifier through the solution, discussing the integration of Edge AI, the importance of flexibility, and why GPUs are well placed to run AI workloads.

Back in February of this year, Imagination discussed with Electronic Specifier the release of its D-Series - the DXTP GPU IP - in the context of the growing integration of AI features into the automotive and smartphone sectors, an integration that has driven the rise of chat-based AI models designed to improve the end-user experience.

This provided a mere glimpse of how deeply embedded Imagination is becoming in AI. The E-Series focuses on providing its customers with Edge AI capabilities, Beets told Electronic Specifier in an exclusive briefing. Although this continues work that began with the release of its B-Series, the E-Series marks the first time the company's design has been heavily guided by the requirements of running AI workloads.

“We’re looking at our products and saying, ‘what do we need to do to improve and enable and embrace all [of] those new use cases enabled by AI?’” said Beets. “Edge AI can be quite a broad thing. It can be an entry-level smartphone all the way up to a self-driving vehicle. Obviously the amount of AI capability they need is vastly different. So we need a scalable solution.”

To deliver that scalability, the E-Series GPU IP has a parallel processing architecture that can scale from 2 to 200 TOPS for AI workloads. Target applications include desktop applications, natural language processing on smartphones, and autonomous cars, among others.

AI has traditionally been a software-driven revolution rather than a hardware-driven one, said Beets, which presents a unique issue: not all processors are created equal.

Imagination’s experience in creating CPUs, experimenting with NPUs, and now focusing exclusively on GPUs has given it a clear understanding of the advantages and drawbacks of using these different processors to run AI workloads.

“CPU is … an ultimately flexible engine. It’s the main programmable engine … but it’s not a parallel processing engine. It’s a sequential processing system,” explained Beets, illustrating the in-depth exploration behind the company’s conclusion that GPUs are the best hardware for managing AI workloads.

“It will depend on the market,” added Beets. “If there is a market segment where you don’t need a GPU at all, not for heavy compute or graphics … we’re not saying that it will be the perfect solution in every market, but in a lot of them, we can tick the boxes.”

The architecture

The care the company took in identifying the best processor for AI workloads is evident in the architecture of the latest GPU, which has been engineered to deliver maximum performance at the lowest power.

The new GPU - the E-Series GPU IP - features burst processors, a new concept from the company that delivers a 35% improvement in average power efficiency for Edge applications. These burst processors came out of studying the arithmetic logic unit (ALU) and understanding where the power was going.

“The older GPU had a classic, deep ALU pipeline,” said Beets. “What we found was that a lot of the functionality of that pipeline was not used. They were complex things like flow control, branching, but most of the time, and especially in AI, you’re just executing a very continuous loop of instructions. There’s nothing really special.

“The more stages you have, the more steps you take, the more power it consumes.”

As a result, the company reduced the number of pipeline stages, shared data within pipelines to offload the register bank, and arrived at the concept of burst processors.

The company also took pains over where to place the AI functionality on the GPU: depending on the location, performance is affected and, in some cases, dedicated logic and SRAM are required - driving up the cost for customers.

“The analysis was really to look at the cost, because one of the problems with the new process nodes is that the SRAM is not scaling right. Logic continues to get smaller, but the SRAMs don’t shrink … so that immediately puts them at the top of your list of concerns.” 

The approach Imagination took was to deeply integrate the AI functionality and make use of the existing SRAM.

“There’s a lot of memory sitting in the GPU that we can now assign to that AI pipeline to keep data locally and process it,” explained Beets. “It costs nothing extra but [it] makes the AI engine very flexible.” 

Standardised GPUs

Another reason GPUs are well suited to AI functionality - beyond the flexibility they offer - is that they use standardised compute languages and open-source frameworks that benefit developers, whereas some developers working with DSPs and NPUs struggle because those devices rely on proprietary assembly languages or specific tool flows.

“Supporting the required data formats is good and for our customers, you can find a lot of software engineers that know how GPUs work,” noted Beets. 

Beets pointed to the big tech giants - Apple, Qualcomm and MediaTek among them - each of which has its own AI engine with a different programming model and tool flow.

“Then they [some developers] looked at the GPUs and said, ‘well every one of them has a GPU and it has a standard API, so it’s much more logical to target the GPU for AI as a developer, as an ecosystem’ and that’s what we see today,” said Beets.

In the spirit of supporting developers, Imagination offers its own libraries for the high-level programming languages developers typically use for AI, along with tooling such as real-time performance profiling to enable them to analyse Imagination’s hardware.

“The open-source is one of the critical things, because that’s what allows you to do a lot of fine tuning,” Beets added. “If every AI solution was closed source, this would be a very different market.” 

Earlier Beets acknowledged that there are certain markets where NPUs will continue to be dominant - for instance, in the automotive industry.

By transitioning away from the traditional approach - a dedicated GPU for graphics and a dedicated NPU for AI - towards a more flexible one, the company is seeking to support customers in those markets that remain reliant on NPUs.

“For … our autonomous vehicle customers some of the processing that they do is fully vertical. They have their own algorithms, their own neural network engines.

“They can adapt them perfectly to their needs, but they still have that nagging issue of, if a significant algorithm improvement comes true that makes things 10 times better, they want to tap into it. So how can they do it in the most efficient way?”

As a result, Imagination has created a flexible system in which NPUs and GPUs can communicate with one another, regardless of who supplies the NPU.

The E-Series GPU IP is due to be released later this year, with key announcements coming.

© Copyright 2025 Electronic Specifier