
No-stopping zone: Technology-driven change comes to parking

Parking is inextricably linked to broader patterns of how people and goods move around—and those patterns are in the midst of transformative change. Driven by a series of converging forces (see figure 1), the entire way we get from point A to B is evolving. The result could be a new mobility ecosystem that provides faster, cheaper, cleaner, safer, and more efficient transportation than today. Already, advances in vehicle connectivity, smart infrastructure, and IoT applications are changing the possibilities for parking. Sensor-equipped physical assets, such as garages and street meters, can offer operators real-time space availability and maintenance-needs updates, while feeding digital aggregator platforms with pricing data and license-plate recognition-based payment systems. For the consumer, such features can reduce search times and create a more seamless experience. For operators, they can enable more efficient use of assets and increase revenue by reducing the number of customers w...
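
As an illustration only, the sketch below shows the kind of occupancy update a sensor-equipped garage or street meter might publish to a digital aggregator platform. All field names and identifiers are hypothetical and do not reflect any particular vendor's API.

```python
# Hypothetical sketch: the kind of occupancy update a sensor-equipped
# garage or street meter might push to a parking aggregator platform.
# Field names and values are illustrative, not a real API.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class SpaceUpdate:
    facility_id: str        # garage or curbside zone identifier
    space_id: str           # individual bay or meter
    occupied: bool          # reported by the in-ground or camera sensor
    price_per_hour: float   # current dynamic price for this space
    timestamp: str          # ISO-8601 time of the sensor reading


def make_update(facility_id: str, space_id: str,
                occupied: bool, price_per_hour: float) -> str:
    """Serialize one sensor reading as the JSON payload an aggregator would ingest."""
    update = SpaceUpdate(
        facility_id=facility_id,
        space_id=space_id,
        occupied=occupied,
        price_per_hour=price_per_hour,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(update))


if __name__ == "__main__":
    print(make_update("garage-42", "B2-117", occupied=False, price_per_hour=3.50))
```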

Making surround-view technology more widely available

Car manufacturers want to bring the most basic surround-view features into their entry and mid-range vehicles, and place advanced systems featuring automation on their higher-end and luxury models. Ideally, manufacturers could offer their customers some continuity among the systems – with a familiar look and feel to each – and the ability to upgrade the level of features through simple hardware or software changes. A scalable implementation represents a challenge for Tier-1 manufacturers and their SoC vendors, however. Many SoC vendors offer solutions for only part of the equation – a simple SoC with camera input and visualization capabilities but no capability for analytics, or a system capable of automation that is expensive, power-hungry and impractical for lesser uses. Manufacturers have no choice but to branch their development efforts into different systems, resulting in duplicated effort, higher development costs and no simple way to maintain continuity across a vehicle line...
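
To make the scalability point concrete, here is a minimal, hypothetical sketch of how a single software stack might gate features by vehicle trim, so that higher tiers can be enabled through configuration rather than a separate development branch. The trim names and feature list are assumptions for illustration only.

```python
# Hypothetical sketch of a tiered feature configuration: one software stack,
# with the feature set selected per vehicle trim so higher tiers can be
# enabled by a software (or SoC) change rather than a separate codebase.
from enum import Enum, auto


class Trim(Enum):
    ENTRY = auto()      # surround-view visualization only
    MID = auto()        # adds parking-spot detection and guidance overlays
    PREMIUM = auto()    # adds automated parking / valet features


FEATURES = {
    Trim.ENTRY:   {"surround_view"},
    Trim.MID:     {"surround_view", "spot_detection", "guidance_overlay"},
    Trim.PREMIUM: {"surround_view", "spot_detection", "guidance_overlay",
                   "automated_parking", "valet_parking"},
}


def feature_enabled(trim: Trim, feature: str) -> bool:
    """Check whether a given feature should run on this trim level."""
    return feature in FEATURES[trim]


if __name__ == "__main__":
    print(feature_enabled(Trim.MID, "automated_parking"))   # False
    print(feature_enabled(Trim.PREMIUM, "valet_parking"))   # True
```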

Basic surround-view parking systems

Surround-view parking systems generally use four to six wide-angle cameras mounted on the front, rear and sides of a vehicle. The fish-eye lenses used on these cameras produce a distorted, bowl-shaped view that geometric alignment algorithms correct. The corrected images then need brightness balancing and color correction for consistency, and final stitching into a single 360-degree view around the car. An animated model of the car is rendered at the center of the stitched image to give the driver a bird's-eye view of the environment. It is also possible to add other overlays that show the car's position relative to objects the cameras see. A system-on-chip (SoC) for this application requires capacity for multiple camera inputs, an image signal processor and hardware acceleration for image adjustment and tuning, a graphics processing unit for creating the car model and image overlays, and processing cores for algorithmic analysis of the images.
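
To make the geometric-alignment and balancing steps concrete, here is a minimal Python/OpenCV sketch. It assumes each fish-eye camera has already been calibrated (intrinsic matrix K and distortion coefficients D), and it omits the ground-plane warp and final stitching a production pipeline would also perform.

```python
# A minimal sketch of the geometric-alignment step, assuming each fish-eye
# camera has already been calibrated (K = intrinsic matrix, D = distortion
# coefficients). OpenCV's fisheye model is used purely for illustration.
import cv2
import numpy as np


def undistort_fisheye(frame: np.ndarray, K: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Correct the bowl-shaped fish-eye distortion of one camera frame."""
    h, w = frame.shape[:2]
    # Precompute the pixel remapping from the distorted to the rectified image.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)


def balance_brightness(views: list) -> list:
    """Roughly equalize exposure across cameras before stitching (simple gain match)."""
    means = [float(v.mean()) for v in views]
    target = sum(means) / len(means)
    return [np.clip(v * (target / m), 0, 255).astype(np.uint8)
            for v, m in zip(views, means)]
```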

Automated valet parking

Automated valet parking adds still more capability: the ability to find an open parking space in a lot and safely park and un-park the vehicle without driver involvement. The driver may be outside the vehicle, perhaps even some distance away, when the automated system begins the parking or un-parking process. The sensor types here are the same as for an automated parking system (described below), but the algorithmic complexity is much greater, with the addition of simultaneous localization and mapping (SLAM) and path-planning algorithms. These algorithms provide the location awareness and intelligence required to interpret the full environment in real time and make safe decisions throughout the parking process. Driverless automation requires an Automotive Safety Integrity Level (ASIL) rating of ASIL-D for the full scope of sensor fusion processing, localization, path planning and drive-by-wire instruction delivery.
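
As a simple illustration of the path-planning piece, the sketch below runs A* search over a small occupancy grid standing in for a parking-lot map. A real valet system would plan in continuous space with vehicle kinematics, so treat this purely as a toy example.

```python
# A toy sketch of grid-based path planning of the kind an automated valet
# system might run on top of its occupancy map (A* over a 2D grid;
# the map, start and goal cells are illustrative).
import heapq


def astar(grid, start, goal):
    """Find a path from start to goal on a 2D grid; 0 = free cell, 1 = occupied."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan-distance heuristic to the goal cell.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(
                    frontier,
                    (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no free path to the goal


if __name__ == "__main__":
    lot = [[0, 0, 0, 0],
           [1, 1, 0, 1],
           [0, 0, 0, 0]]
    print(astar(lot, (0, 0), (2, 0)))
```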

Automated parking systems

Automated parking systems use a set of camera sensors similar to surround-view systems, and usually add short-range radar, ultrasound and high-performance inertial measurement unit sensors. In addition to camera processing and surround-view image creation, this system also needs to perform object detection and classification for parking-spot and lane detection. Automated parking systems combine data from all sensors and use vision and sensor fusion algorithms to safely maneuver the vehicle into an available space. The system may also record video streams from the cameras to log vehicle actions while parking. These logs can be used for incident reporting and to improve future algorithm performance. An SoC for this more demanding application will need greater processing capacity for sensor fusion and computer vision, plus neural network processing for the detection and classification algorithms. The SoC may also include video encode acceleration for recording, and access to exte...
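
The skeleton below is a hypothetical illustration of the fusion step: camera-based detections are matched with ultrasonic ranges into a single obstacle list a parking planner could consume. The detector and sensor interfaces are placeholders, not a real SoC API.

```python
# Hypothetical per-frame fusion skeleton for an automated parking system:
# neural-network detections from the cameras are combined with ultrasonic
# ranges into one obstacle list. Interfaces are placeholders for illustration.
from dataclasses import dataclass


@dataclass
class Obstacle:
    bearing_deg: float   # direction relative to the vehicle's heading
    distance_m: float    # fused range estimate
    label: str           # e.g. "vehicle", "pedestrian", "curb"


def fuse(camera_detections, ultrasonic_ranges, match_tolerance_deg=10.0):
    """Naive fusion: attach the closest ultrasonic range to each camera detection."""
    obstacles = []
    for label, bearing in camera_detections:          # from the detector
        best = None
        for sensor_bearing, distance in ultrasonic_ranges:
            if abs(sensor_bearing - bearing) <= match_tolerance_deg:
                if best is None or distance < best:
                    best = distance
        if best is not None:
            obstacles.append(Obstacle(bearing, best, label))
    return obstacles


if __name__ == "__main__":
    detections = [("vehicle", -30.0), ("pedestrian", 15.0)]
    ranges = [(-28.0, 2.4), (14.0, 4.1), (90.0, 0.8)]
    for obs in fuse(detections, ranges):
        print(obs)
```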