Gains in memory size: Safehear case study

Published on 1 March 2023

Can my embedded software be optimized? Will I be able to improve the performance of my product?
These are the questions that Antoine Kuhnast, CTO and co-founder of Safehear alongside Héléna Jérome, brought to us at the end of 2022, both for the first version of their communicating hearing protection equipment, which is now on the market, and for their next-version project.

Diagnostic stages

This diagnostic phase was intended to give them a clearer view of the potential optimization opportunities and to quantify that potential.
When we work with our clients in a service mode, we start by getting to grips with the code and its data set. Then we dive into the heart of its workings to pinpoint where it spends its resources (computation? memory? control flow?), to find the reasons why, and to identify dependencies. The computational part is at the heart of our expertise, so when things start to heat up on the algorithmic side, we get excited.
Finally, we map out the possible optimizations in line with how the product is used and the company's objectives for it.

Then comes the time to share the optimization paths with our client. This is an important moment, because our mission is to support the teams in their choices: we work alongside them to deliver tangible results thanks to our expertise. This phase also invites us to step back from the work our client has done so far and to consider possible alternatives, asking new questions with new metrics.

Results

In some cases, as with Safehear, the obvious or most promising path leads to other paths that are even more promising. Here, the whole decoding part was already very well optimized, and no SIMD was possible (parallelism, one of our favorite subjects, can be powerful in some cases). That finding is reassuring in itself: it confirms the teams' choices before going any further.

By extending the work, we managed to identify possible gains on the decoding part, with the bonus of improved signal quality at the output. And because we like to dig into things, we also highlighted significant optimization potential on the memory side: 40% to 50% off the size of one library, translating to 20% to 30% gains on the binary as a whole.

For the product, because that is what we are all about, there are concrete prospects here in terms of quality (rendering, speed) and potential lifespan. A reduced memory footprint leaves room for more substantial updates and the addition of new features.

If, like Antoine, you need to improve your performance, or if you are wondering about the optimization potential of your application, let's talk about it!