Advanced Tech: Neuton Edge AI

I got notified about Neuton AI when Nordic Semiconductor acquired them last week. I have been playing around with the platform and the tools since, and I am honestly blown away by what it can do.

Edge AI

Neuton is a no-code TinyML platform that builds ultra-compact neural-network models for small microcontrollers. You upload a CSV, press train, and the platform grows the network; you then export it as a plain-C library that can be dropped onto any microcontroller platform. The ridiculous part is that the generated model is only around 5 KB, usually about 10x smaller than models from other ML frameworks like TFLite or AutoKeras, while maintaining the same or better accuracy. The small size also means inference can run in around 2 ms, and faster inference means lower battery usage too.

Edge AI

I really wanted to learn how they achieve this extreme size reduction. All the videos and content on the website are slightly vague, and their core tech is proprietary. They seem to hold two US patents covering architecture-free self-organisation and a parallel global search for weights and feature selection. What this means in practice is that the model stops growing the moment accuracy stalls, so there is nothing left to trim. They clearly state that they don't use common training methods like backpropagation or stochastic gradient descent. Each new neuron connects only to the most critical inputs or features rather than every possible input, which keeps the weight count low.

The platform keeps validation curves and lets you pick and download any smaller checkpoint if you prefer size over accuracy. It also has a signal processing engine to help with preprocessing the data. Because the network connects only to the most informative inputs, it plays well with time series from IMUs, vibration sensors, ECG, and radar. I think this will let product teams skip a lot of manual DSP work and focus on features. The platform is also free to use; I'm not sure if Nordic Semi is fronting all the costs for the training runs.

For me personally, Neuton feels like a strong player for my future ML projects. Try it out yourself and see if it’s worth it for you.

If you liked the post, Share it with your friends!

Advanced Tech: Sony STARVIS 2

Been exploring some image sensor tech, and Sony's STARVIS 2 lineup looks great. It's the second-generation "starlight-vision" CMOS tech behind many of today's best low-light security and automotive cameras. The sensors are impressive because they significantly improve low-light performance, capturing clear images even in near-total darkness. STARVIS 2 delivers higher sensitivity, better dynamic range, and superior near-infrared capture, all without increasing pixel size.

Starvis Camera Sensor

The first STARVIS sensor came out a decade ago, using back-illuminated pixels. That change alone made sensors about 4.6 times more sensitive. Then, in 2021, STARVIS 2 took it further with new tech: deeper vertical photodiodes (not wider, which is what most companies do) and dual-gain HDR. The deep buckets (vertical photodiodes) extend straight down into the silicon, so they store many more photons without overflowing. IR light penetrates further into silicon before it's absorbed, so more IR photons are caught. These improvements boost dynamic range by over 8 dB, meaning the sensors can clearly capture bright and dark areas simultaneously without blowing out highlights or losing shadows.

Applications range from security cameras that clearly identify faces at night and traffic cameras capturing license plates against headlights, to dash cams and drones with great low-noise performance. So if you're a normal user purchasing a dash cam, make sure you buy one with STARVIS 2 so you can read number plates in both bright and low light. Of course, Sony isn't alone; they have competitors like OmniVision's Nyxel, Onsemi's Hyperlux, and Samsung's ISOCELL Auto, but Sony is ahead.

Interviews with Sony folks hint at sensors that eliminate the need for shutter exposure times and LED flashes entirely, capturing details from bright sunlight to starlight. This might suggest even deeper wells or new stacked-pixel designs are on the horizon. Imaging tech will be a nice place to be when AI robots take off in the next 5 years.

If you liked the post, Share it with your friends!