For someone who loves challenges, working in ML brings a new challenge every day – no two days are the same.
At the start of a new project, there is a lot of excitement in understanding the requirements and going to the drawing board to figure out what might work best. The data collection phase can be challenging, sometimes tedious – especially when annotated data is not commercially available – but the excitement of what comes after keeps one going. Not to mention, the challenge of learning a good model that is robust and generalizes well even with limited data is one any good ML engineer will be eager to take on.
Once you have an approach in mind, you run some quick experiments to see how things work – pop over to a colleague you enjoy discussing ideas with and exchange notes. Then it is time to present the findings so far to the larger team.
Now starts the phase where you train the neural network for the chosen task on a large dataset to get a baseline result. This kicks off my favorite phase, where you are constantly looking to improve performance – designing and tweaking loss functions is one of my favorite tricks, along with clever data augmentation.
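As a small illustration of the kind of loss-function tweaking described above, here is a NumPy sketch of focal loss – a well-known modification of cross-entropy that down-weights easy examples so training focuses on hard ones. This is only an illustrative sketch, not the specific losses used in any project; a real training pipeline would implement this in a framework like PyTorch or TensorFlow.

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, eps=1e-7):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma.

    probs  : predicted probability of the positive class
    labels : 0/1 ground-truth labels
    gamma  : focusing parameter; gamma=0 recovers plain cross-entropy
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    # p_t is the predicted probability assigned to the true class
    p_t = np.where(labels == 1, probs, 1.0 - probs)
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t))

# A confidently correct prediction contributes far less to the loss
# than an uncertain one, which is the point of the focusing term.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.55]), np.array([1]))
```

With `gamma=0` the focusing term disappears and the function reduces to ordinary binary cross-entropy, which makes it easy to sanity-check against a known baseline.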
Once the neural network is working well, computational complexity is the next challenge. There will be an MFLOPS budget, so you have two options: either tweak the neural network architecture to reduce complexity without sacrificing much performance, or turn to techniques like quantization and pruning.
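To make those two compression options concrete, here is a minimal NumPy sketch of magnitude-based pruning and symmetric 8-bit quantization applied to a weight tensor. The helper names are hypothetical; a production flow would rely on framework tooling (e.g. a framework's pruning and post-training quantization utilities) rather than hand-rolled code like this.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries until roughly the
    requested fraction of the tensor is zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric uniform quantization to int8. Returns the quantized
    tensor and the scale needed to dequantize it."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

pruned = prune_by_magnitude(w, sparsity=0.75)   # 75% of entries zeroed
q, s = quantize_int8(w)
reconstructed = q.astype(np.float32) * s         # dequantized weights
```

Pruning trades accuracy for sparsity (which helps only if the runtime exploits it), while quantization shrinks storage fourfold and maps well onto the integer units of edge DSPs; the dequantization error here is bounded by about half the scale.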
Sometimes this just works; other times it does not. There will be a cycle of improving training, reducing complexity, and evaluating performance until the required accuracy is reached.
The model is then converted to a format suitable for edge AI, so that the GPUs and DSPs on the target platform are leveraged as well as possible. Constraints imposed by the SoC vendors add another dimension to the set of challenges.
Finally, you have a model that meets the accuracy and latency requirements on the target platform, and it is ready for field tests. Field testing usually surfaces issues that need to be addressed, and then the model is ready for a beta release to customers.
You can test all you want, but using a model in the field at scale is a different beast. Responding to issues from the field and solving them in a timely manner is one of the defining traits of an ML engineer – one that separates the good from the great.
Rinse and repeat – this is the life of an ML engineer in my experience, and I absolutely love it. A good team in a supportive environment makes a big difference: it frees your mind to think about ML.
ML at LightMetrics involves applying neural networks to video and image data – video from multiple cameras on a vehicle is processed in real time, on the edge. If ML is challenging, doing it on the edge is on another level. We work on some of the most cutting-edge topics: learning from less data, self- and semi-supervised learning, neural architecture search with a focus on efficient networks, neural network pruning, and explainable AI, among others. We are establishing strong collaborations with like-minded academic institutions.
ML at LightMetrics is unlike anywhere else: it helps you develop expertise and experience across the entire ML life cycle at a cutting-edge product company. This is priceless – few opportunities can claim to provide it.
And yes, we are hiring – here is what we are looking for:
If you are an ML engineer who wants to work on the entire lifecycle of an ML project and experience the thrill and satisfaction of seeing a model you trained deliver value to customers, reach out to firstname.lastname@example.org with your resume and a cover letter – or just a paragraph about what you think makes you a good fit for this role.