The Wisdom of the Connected Crowd


Figure: A map of the USA showing the platform's crowdsourced traffic-sign coverage.


The advances in mobile computing platforms, coupled with innovations enabling AI on the edge, have led to a Cambrian explosion of applications that sense and perceive the world with unprecedented accuracy. Automotive perception stacks are just one example, though likely the most prominent. Cameras that detect, track, and react to objects and situations around a vehicle are a key component of the technology that will enable full autonomy. Even today, they power most, if not all, of the advanced driver-assistance (ADAS) solutions in mass production.

Visual sensing, though, suffers from the same problems that eyes do: it can sense only what it sees, and it is adversely affected by inclement weather, poor lighting, and a variety of other environmental factors. Under such circumstances, the amount of computing power available on a single device or vehicle ceases to be relevant to the problem at hand. Instead, we must zoom out and look at the problem at scale, which is where potential solutions lie. Connected vehicles automatically gathering data about the environment and driving behavior can quickly become a vast crowdsourced information-aggregation pipeline. In conjunction with other sensing modalities (radar, lidar), the use of crowdsourced mapping information is now seen as the third pillar upon which the realization of full autonomy rests.

LightMetrics’ RideView platform is at exactly such a juncture. What started as a mission to enable faster integration and delivery of ADAS-enabled safety solutions to fleets now stands as a vibrant ecosystem of some of the largest and most innovative telematics service providers (TSPs) powering driver-safety solutions through RideView. Thousands of commercial-vehicle drivers and fleet managers across the continental United States and Canada rely on real-time alerts and actionable insights to keep their fleets safe around the clock. An immensely beneficial by-product of this? A network of thousands of connected cameras gathering a wealth of anonymized data on traffic signs, road quality, aggregated driving-behavior patterns, and much more.

Today, we are excited to announce the launch of our Crowdsourced Event Generator, an enhancement to our real-time ADAS-based event-generation engine. The very first use case it powers is the simplest, and yet in its elegance it portends all that is possible. Real-time speed limit and STOP sign recognition and compliance has been a core value proposition of our platform from the beginning. Speeding is known to be a leading indicator of accident risk, and our real-time driver alerts help mitigate that risk. Further, because we do not rely on existing speed limit databases that are infrequently updated, fleets are always up to date on the limits they encounter in everyday operation. However, as mentioned above, relying on visual cognition alone comes with literal blind spots. In inclement weather (fog, rain, snow, etc.), under bad lighting conditions, or when a temporary obstacle occludes a sign entirely (think of a Class 8 truck in the rightmost lane), the system hits its limits.

Thanks to the thousands of vehicles on our platform, we now have significant and organic coverage of traffic signs across North America, as shown in the figure above. Every time a vehicle on our platform detects a new speed limit or STOP sign, the detection is tagged with its location and added to an ever-growing corpus. A filtering and consensus algorithm on the backend generates a cleaned-up database, only a small fraction of which is downloaded and enforced based on the vehicle’s current location.
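We won’t go into the specifics of our backend pipeline here, but a minimal sketch can make the idea concrete. The Python below is illustrative only; the record format, function names, and thresholds are hypothetical rather than our production implementation. It clusters raw sign sightings by proximity and promotes a cluster to the database only when enough independent vehicles agree on the same sign:

```python
from collections import Counter
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

# Hypothetical record: one sign sighting reported by one vehicle.
@dataclass
class Detection:
    lat: float
    lon: float
    sign: str        # e.g. "SPEED_LIMIT_55" or "STOP"
    vehicle_id: str

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def build_consensus_db(detections, radius_m=25.0, min_vehicles=5, min_agreement=0.8):
    """Cluster sightings that fall within radius_m of each other, then keep
    only clusters where enough distinct vehicles agree on the same sign.
    Thresholds are illustrative; a real system would tune them empirically."""
    clusters = []  # each cluster approximates one physical sign
    for d in detections:
        for c in clusters:
            if haversine_m(d.lat, d.lon, c[0].lat, c[0].lon) <= radius_m:
                c.append(d)
                break
        else:
            clusters.append([d])

    db = []
    for c in clusters:
        sign, count = Counter(d.sign for d in c).most_common(1)[0]
        vehicles = {d.vehicle_id for d in c if d.sign == sign}
        if len(vehicles) >= min_vehicles and count / len(c) >= min_agreement:
            db.append({
                "lat": sum(d.lat for d in c) / len(c),  # centroid as the sign's position
                "lon": sum(d.lon for d in c) / len(c),
                "sign": sign,
            })
    return db
```

The greedy, quadratic clustering keeps the sketch readable; at fleet scale, one would bucket detections with a spatial index (geohashes or map tiles, for example) before running the consensus step.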
Free of the constraints of what a single vision-based system senses, our algorithms can now ‘see’ in the dark, in the rain, and even when signs are completely occluded by passing vehicles.
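On the vehicle, that fallback can be pictured as follows. Continuing the sketch above (and reusing its hypothetical haversine_m helper and database format), the camera’s live reading is preferred, and the crowdsourced limit nearest the current GPS fix fills in whenever the sign is unreadable:

```python
def nearest_known_limit(db, lat, lon, radius_m=200.0):
    """Return the crowdsourced speed limit (mph) closest to the current GPS
    fix, or None if no limit sign is known within radius_m. `db` is the
    small, location-scoped slice downloaded to the vehicle."""
    best, best_dist = None, radius_m
    for entry in db:
        if not entry["sign"].startswith("SPEED_LIMIT_"):
            continue
        dist = haversine_m(lat, lon, entry["lat"], entry["lon"])
        if dist <= best_dist:
            best, best_dist = int(entry["sign"].rsplit("_", 1)[1]), dist
    return best

def check_speeding(camera_limit_mph, db, lat, lon, speed_mph, tolerance_mph=5):
    """Prefer the live camera reading; fall back to the crowdsourced map when
    the sign is unreadable (night, rain, occlusion). Returns an alert or None."""
    limit = camera_limit_mph if camera_limit_mph is not None else nearest_known_limit(db, lat, lon)
    if limit is not None and speed_mph > limit + tolerance_mph:
        return f"Speeding: {speed_mph} mph in a {limit} mph zone"
    return None
```

With this shape, a call like check_speeding(None, db, lat, lon, speed_mph=68) would still flag a vehicle doing 68 mph past an occluded 55 mph sign, assuming the downloaded slice contains that sign.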
The overwhelming response from early adopters has been emphatically positive, and we look forward to helping our partners bring this to end users in the coming months. That said, while we see this as an important milestone, it is still Day 1. Over the coming weeks and months, we will be rolling out enhancements that leverage all the valuable data we keep gathering at scale: from sensing and broadcasting the temporary speed limit signs that go up at work zones, to alerting drivers in advance about accident and rash-driving hotspots, we will focus on enabling the use cases that deliver the most value to our partners and end users.

Ultimately, we see powerful AI on the edge coupled with rich crowdsourced data as interdependent, synergistic technologies that will create a virtuous cycle of innovation and value for all stakeholders in our ecosystem.