Connecting a camera

Hi, did anybody connect a camera to the HiFive1? I want to acquire a stream of frames from my environment in realtime and process them on the RISC-V processor. Is it doable?

Thank you,

What kind of interface does the camera have and what kind of resolution and colour depth? I imagine you’ll have a bit of trouble handling image data within the 16KB of SRAM available.

On the other hand, there are low resolution serial-based cameras intended for use with uCs and the speed of the E310 is going to be very helpful for processing:


You’re not going to fit much of an image in the 16 KB of RAM available. 96x128 8-bit grayscale is 12288 bytes, for example. Full colour you could manage 64x80. With 16-level gray you could manage 150x200.

That’s assuming uncompressed. You’d get more if the camera supplied JPEG, but then you’d also need space to decompress and analyze it (you could decompress one 16x16 block at a time).

Also, the IO pins won’t do very high speed. Maybe 10 Mbps maximum. Though that’s enough for 60 FPS at the image size you can fit into RAM :slight_smile:


I’m wondering how tight his “realtime” requirement is. It’s one of those words that really bothers me: it implies a time constraint, but that constraint varies enormously by context and implies very different workloads (e.g., “take an image every second and send it over some data bus within that second” versus “do 10 bytes of data signalling within X milliseconds”).


Sorry, I was traveling and I could not reply earlier.

I am a research scientist, and this activity has teaching purposes. My ultimate goal is a prototype of a small image-processing application (maybe machine learning, it being a hot topic) on a small device like the HiFive1. Thus, identifying all of the necessary HW/SW components and the HW constraints is really important for deciding what is doable.

The realtime requirement is very loose; the system may be limited to a few frames per second.

I would like to integrate a camera if that is something that a student can do in a limited amount of time. If that requires a significant effort, I would choose to process images that are “pre-loaded” in the 16KB of RAM. If the amount of RAM is still too limited, I would opt for a different project.

If you have any comments, I will really appreciate them.

Cool. Is working within a constrained embedded platform a part of the exercise? I assume you’d just be teaching the same algorithms on a PC, or even a Raspberry Pi, otherwise?

Yes, that will be part of a student project. And indeed they will experiment on other platforms as well. If we succeed, we can provide some technical details or a small tutorial to the community :slight_smile:


Well, there is a lot of flash you could put an image on, and process it in blocks in SRAM.

I don’t know what API is provided for writing the flash or whether any structure is imposed on it. I believe you can write to it via SPI, but I haven’t seen the details. (It’s memory-mapped for reading.)

There are two ways of interacting with the SPI controller: a memory-mapped read-only mode specialized for SPI flash, and a low-level FIFO-based interface for sequencing arbitrary transfers. Keep in mind only one interface can be active at a given time, so a program writing to flash must live in the scratchpad.

Does Linux support any of these features? In other words, can reading/writing from/to the flash be filesystem R/W operations?


The FE310 SoC on the HiFive1 does not support supervisor mode and therefore cannot run Linux.

But supposing otherwise: It is certainly possible to write a Linux driver for the SPI controller, which should be able to work with the existing MTD SPI NOR drivers (e.g., m25p80). Then you could layer a flash-aware filesystem such as JFFS2 on top of the MTD.


I don’t know of any Linux that needs less than 16 MB of RAM, and most these days want at least 256 MB if not more.

The HiFive1 has 16 kilobytes of RAM. It’s a THOUSAND times too small for Linux.

Plus, no MMU.

Thanks Albert.

What about the icache? Can that be locked to a current set of contents, so that flash can be unmapped and the scratchpad is fully available for data?

So there is no OS for the board? I assumed from the repository that an adapted Linux kernel could be compiled and installed for the FE310.


It’s an Arduino-class board, not a Raspberry Pi-class board. Those will come in maybe another year from SiFive, lowRISC, and hopefully lots of others.


Yeah, Linux really isn’t well suited for very small applications. That isn’t a dig at Linux; it’s terrific, but it was designed and built with very different considerations in mind (multi-user, multi-process, load balancing, etc.). Its reliability for realtime scheduling is notoriously bad and requires some patching.

I am wondering if the lack of supervisor mode (@aou) kills the possibility of any OS.

For example, maybe something like uClinux could be ported to the FE310.

But at that point it may really be a big effort.

What about the icache? Can that be locked to a current set of contents, so that flash can be unmapped and the scratchpad is fully available for data?

There is no locking, but even if your entire program fit in the icache, the real challenge is that there isn’t a good way to get code into the icache other than actually fetching and executing it. You’d have to get the “mess with the SPI flash” code into the icache somehow before you tried to run it. Maybe you could do that with some creative assembly-code weaving that actually hits every cache line and then loops back to execute the relevant flashing code, but I don’t know a simpler way (the FE310 doesn’t have a “prefetch arbitrary instruction” command).


Make sure every cache line has a “c.ret” instruction you can hit to prefetch it :slight_smile: 32 bytes, right?

The FE310-G000 instruction cache cannot be locked or used as scratchpad. If you want to run without off-chip flash and not use up scratchpad SRAM for instructions, one option is to burn code into the OTP. Obviously there’s limited scope for updating contents during development.