In some of our benchmarking work in my sbc-reviews repo, we’ve found the memory latency to be a bit lacking compared to other modern SoCs (especially those with LPDDR5).
The question came up whether there may be a firmware fix for that, and also whether any optimizations are available or on the way to help bring down idle power consumption. (I’m seeing 12W idle and 14W at full load, measured at the wall, though some of that could be inefficiency in the power supply unit.)
The bandwidth of DDR5 is sufficient: it can reach an overall utilization of about 80% under multiple-master accesses, providing roughly 51 GB/s × 0.8 ≈ 40 GB/s.
1. The CPU supports an insufficient number of outstanding requests (only 16), which cannot hide the overall memory-access latency.
2. Due to the lack of vector instructions, the CPU can only move 64 bits (8 bytes) per load/store operation, so memory copies must be implemented with scalar instructions.
3. Currently, the Ubuntu system does not run at the highest frequency (it runs at 1.4 GHz), while the maximum frequency is 1.8 GHz.
4. If cacheable copying is used, the proportion of CPU cache prefetching can be increased, but this may affect the performance of other applications during normal operation.
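As a rough sanity check on point 1, Little's law relates sustainable bandwidth to the number of in-flight requests: bandwidth ≈ outstanding requests × cache-line size / memory latency. The numbers below (64-byte lines, ~130 ns latency) are illustrative assumptions, not measured values for this SoC:

```python
# Illustrative Little's-law estimate of why a small outstanding-request
# window caps single-core memory bandwidth. The latency figure here is
# an assumed round number, not a measured value for this board.
def max_bandwidth_gbs(outstanding: int, line_bytes: int = 64,
                      latency_ns: float = 130.0) -> float:
    """Peak sustainable bandwidth in GB/s for a given request window."""
    return outstanding * line_bytes / latency_ns  # bytes/ns == GB/s

# With only 16 requests in flight, one core tops out far below the
# ~40 GB/s the DDR5 interface can deliver under multi-master load.
print(f"{max_bandwidth_gbs(16):.1f} GB/s")   # ~7.9 GB/s
print(f"{max_bandwidth_gbs(128):.1f} GB/s")  # a deeper window, for comparison
```

Even doubling or quadrupling the request window would help substantially, which is consistent with the observation that only multiple masters together can saturate the memory controller.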
There is currently a Debian-based image that supports drivers for all modules (including video encoding/decoding, NPU, DSP, etc.). It can run at the maximum frequency of 1.8GHz and allows adjustment of cache prefetching based on demand. However, even with these optimizations, the CPU alone cannot fully utilize the DDR bandwidth. The utilization can be improved by running other programs that consume DDR bandwidth, such as playing 4K videos, running a 4K desktop system, gaming, or running AI applications. In these scenarios, the DDR bandwidth utilization can reach approximately 80%.
The current Ubuntu system is not as feature-complete as the Debian system. We will continue to improve the Ubuntu system in the future.
Also, you can download the AI-enabled Debian image and user manual from this link: Release Debian-v1.0.0-p550-20241230 · eswincomputing/eic7x-images · GitHub
By the way, I have seen the LLM inference results. I would like to provide some additional information on the large-model inference results that ESWIN supports. Currently, these models run on the AI-enabled Debian system.
We noticed the power consumption data you tested and shared on GitHub, which shows some deviation from the data we obtained in our lab tests. We would like to understand your testing environment at that time, especially whether any external devices were connected. Going forward, we will further reduce power consumption through improved power management.
Thank you for all the feedback, it is very helpful!
For my power testing, I am plugging the power supply in the P550 enclosure (it’s an FSP350-57FCB) directly into a ThirdReality Smart Outlet, and measuring the power consumption through that. So my measurement includes any power supply losses—and uses power input through the ATX motherboard connector. The Ethernet cable is plugged in, and a small RF dongle for my wireless keyboard, currently.
I can also try an appropriate 12V power supply. To do that, I believe I would need to flip the SW2 switch to the other position? And would a 12V 3A power adapter be safe, or should I use something beefier?
You can measure power more easily using the MCU’s terminal or web interface. It measures power on the 12V rail (from which all other rails on the board are derived) using an INA226; refer to the MCU user manual for details. If interested, you can also write a simple Python script that reads the board’s power consumption continuously, albeit at a relatively low sampling rate, to build a power profile of different workloads and record the corresponding SoC temperature profile.
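A minimal sketch of such a logger, assuming the MCU streams its INA226 readings over a serial terminal. The device path, baud rate, and line format (`POWER: 7.20 W, TEMP: 45.1 C`) are hypothetical placeholders; check the MCU user manual for the actual protocol:

```python
# Sketch of a continuous power logger for the board's MCU. The serial
# device, baud rate, and status-line format below are ASSUMED -- adjust
# them to match what the MCU user manual actually specifies.
import re
import time

POWER_RE = re.compile(r"POWER:\s*([\d.]+)\s*W.*TEMP:\s*([\d.]+)\s*C")

def parse_reading(line: str):
    """Extract (watts, celsius) from one hypothetical MCU status line."""
    m = POWER_RE.search(line)
    return (float(m.group(1)), float(m.group(2))) if m else None

def log_power(port="/dev/ttyUSB0", interval_s=1.0):
    """Poll the MCU and print timestamped power/temperature samples."""
    import serial  # pip install pyserial
    with serial.Serial(port, 115200, timeout=2) as mcu:
        while True:
            sample = parse_reading(mcu.readline().decode(errors="ignore"))
            if sample:
                watts, temp = sample
                print(f"{time.time():.0f}  {watts:.2f} W  {temp:.1f} C")
            time.sleep(interval_s)

# The parsing works independently of the hardware:
print(parse_reading("POWER: 7.20 W, TEMP: 45.1 C"))  # (7.2, 45.1)
```

Piping the output to a file gives a crude but useful power/temperature profile per workload, within the limits of the MCU's sampling rate.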
ATX PSUs aren’t at their best, efficiency-wise, at very low loads.
So do expect slightly higher power readings from AC, or from things like a smart outlet. Those usually aren’t very accurate anyway.
For 12V DC PSU, w/o dGPU plugged in - 3A should be fine, 5A is better.
w/ dGPU however: no, just use an ATX PSU.
I have switched to using my 2.5mm 12V DC 3A power adapter, and my idle power draw measured at the wall is now 7.2W, with around 9W at full power under a CPU-bound load.
These numbers seem more reasonable compared to the numbers measured on the @ESWIN_Support test bench above.
A separate question: will any kind of powersave / balanced power profile be implemented, allowing the clock speed to decrease at idle to save power when the system is not under load?
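On mainline Linux this would map to cpufreq governors. Whether the current kernel for this board exposes cpufreq at all is an assumption, but if it does, something like this sketch (using the standard sysfs paths) would show what is available:

```python
# Check cpufreq governor support via the standard Linux sysfs paths.
# Whether this board's kernel exposes cpufreq is an assumption; on
# kernels without it, the directory simply won't exist.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def cpufreq_status() -> str:
    """Report the current governor and the available governors, if any."""
    if not CPUFREQ.is_dir():
        return "cpufreq not exposed by this kernel"
    governor = (CPUFREQ / "scaling_governor").read_text().strip()
    available = (CPUFREQ / "scaling_available_governors").read_text().strip()
    return f"current: {governor}; available: {available}"

print(cpufreq_status())

# Switching governors (as root) would then be e.g.:
#   echo powersave > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```

If only a fixed-frequency `performance` setup is wired in today, idle-power savings would need either an `ondemand`/`schedutil` governor in a future kernel or firmware-level power management.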