
LiDAR Technology in Autonomous Vehicles

Introduction:

LiDAR, an acronym for “light detection and ranging” or “laser imaging, detection, and ranging,” is a sensor that determines range by targeting an object or surface with a laser and measuring the time for the reflected light to return to the receiver. Because it scans its environment, it is also sometimes called 3D laser scanning. In particular, LiDAR image registration (LIR) is a critical task that focuses on techniques for aligning, or registering, LiDAR point cloud data with corresponding images. It involves two types of data that have different properties and may be acquired from different sensors at different times or under different conditions. By accurately aligning LiDAR point clouds with captured 2D images, registration yields a highly informative, finely detailed understanding of the environment.

How does LiDAR work?

A LiDAR sensor works by sending out a pulse of light and waiting for its return. It measures the round-trip time, i.e., how long the pulse takes to come back. From this time of flight, the distance to the object can be computed: distance = (speed of light × round-trip time) / 2.

Fig. 1. Working of LiDAR
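
To make the distance calculation concrete, here is a minimal Python sketch of the time-of-flight formula; the 200 ns round-trip time is a made-up illustrative value.

```python
# Minimal time-of-flight distance calculation (illustrative values).
SPEED_OF_LIGHT_M_S = 299_792_458  # speed of light in a vacuum, m/s

def distance_from_round_trip(round_trip_time_s: float) -> float:
    """Distance to a target given the round-trip time of a laser pulse.

    The pulse travels to the target and back, so the one-way
    distance is half of (speed of light * elapsed time).
    """
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# Example: a pulse that returns after 200 nanoseconds
print(distance_from_round_trip(200e-9))  # ~29.98 m
```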

Application Areas of LiDAR
The fusion of LiDAR point clouds and camera images is a popular example of Multi-Remote Sensing Image Registration (MRSIR). LiDAR today comes in various types and forms, such as static and mobile systems; by operating environment, it can be terrestrial, aerial, or marine.
The applications of LiDAR are very broad. It is used in surveying, archaeology, geology, forestry, and other fields such as:

  • Autonomous driving: LIR is used to align sensor data to create a more accurate and complete representation of the environment.
  • Robotics: Align sensor data to create more accurate maps and enable more precise localization.
  • 3D mapping: Align data from multiple sensors to create detailed 3D models of the environment.
  • Augmented Reality (AR): Align virtual elements so they correspond with the physical environment.

Utilization of LiDAR in Self-Driving Vehicles

3D Point Cloud and Calculation of Distance
In the realm of road safety, numerous automobile manufacturers are either using or exploring the installation of LiDAR technology in their vehicles.

Fig. 2. LiDAR Technology in Self-Driving Vehicles [Source: https://velodynelidar.com/what-is-lidar/#:~:text=A%20typical%20lidar%20sensor%20emits,calculate%20the%20distance%20it%20traveled]

By repeating this pulse-and-measure process many times per second, the sensor generates a detailed, live 3D representation of its surroundings, referred to as a point cloud.
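
As a rough illustration of how individual returns accumulate into a point cloud, the sketch below converts a few hypothetical (azimuth, elevation, range) measurements into Cartesian XYZ points; a real sensor would also record timestamps and intensity and correct for vehicle motion.

```python
import math

def to_cartesian(azimuth_deg: float, elevation_deg: float, range_m: float):
    """Convert one LiDAR return from spherical to Cartesian coordinates."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# Hypothetical returns: (azimuth, elevation, range in meters)
returns = [(0.0, -2.0, 12.5), (45.0, 0.0, 8.3), (90.0, 1.5, 20.1)]
point_cloud = [to_cartesian(*r) for r in returns]
for p in point_cloud:
    print(tuple(round(c, 2) for c in p))
```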

Advantages of Mounting LiDAR Above Autonomous Vehicles
Within an autonomous vehicle, the LiDAR sensor captures extensive data through rapid analysis of numerous laser pulses. This information, forming a ‘3D point cloud‘ from laser reflections, is processed by an integrated computer to generate a dynamic three-dimensional representation of the surroundings. Training the onboard AI model with meticulously annotated point cloud datasets is pivotal to ensuring that LiDAR builds this 3D environment precisely. The annotated data enables autonomous vehicles to detect, identify, and categorize objects, improving their ability to discern traffic lanes, road signs, and moving entities, and to evaluate real-time traffic scenarios through image and video annotations.
Beyond research, LiDAR technology is being actively explored for production use in autonomous vehicles. Automakers have begun integrating LiDAR into advanced driver assistance systems (ADAS), enabling a comprehensive grasp of dynamic traffic conditions. Autonomous driving safety relies on these systems, which make precise decisions by rapidly analyzing vast numbers of data points.

Cutting-Edge Approaches
However, challenges remain in developing a fully automated vehicle that guarantees accuracy in critical tasks such as object detection and navigation. To overcome them, many researchers and automobile companies have been working to improve the technology. Cutting-edge approaches are broadly organized into four distinct pipelines: information-based, feature-based, ego-motion-based, and deep learning-based. Deep learning-based pipelines in particular have shown the largest gains in accuracy. LiDAR technology not only enhances convenience but also plays a pivotal role in reducing severe collisions. The latest advancements in this domain include new LiDAR sensor designs and the shift from traditional mechanical scanning to frequency-modulated continuous-wave (FMCW) and flash technologies.

Patenting Trends for LiDAR Technology in Autonomous Vehicles

The field of autonomous vehicle technology has witnessed a notable rise in patent filings, especially concerning sensor technology, mapping techniques, decision-making algorithms, and communication systems. Entities such as Google, Tesla, and Uber are pioneering the advancements, while longstanding automotive giants like Ford, General Motors, and BMW have also been actively filing patents. In the United States, the market places significant emphasis on artificial intelligence (AI) and augmented reality, with car manufacturers and developers collaborating to bring self-driving vehicles to the public. Autonomous cars are predicted to transform the driving experience and to introduce a whole new set of problems.
Although Sartre filed an early patent in the autonomous vehicle domain, it was perceived primarily as covering an AI system designed for highway navigation or restricted roadways. US patent filings for self-driving cars were scarce before 2006, largely due to a trend that emerged in the late 1990s and persists today: a limited number of such patents granted by the US Patent Office.

Challenges in Patenting Technology for Autonomous Vehicles
The challenges in patenting technology for self-driving vehicles emerge when these vehicles are involved in incidents or insurance-related events. Owners typically confront three choices:

  1. Assuming liability for any harm or property damage caused by their vehicle.
  2. Taking steps toward legal recourse against the involved driver.
  3. Exploring compensation from their insurance company to address losses resulting from the other driver’s negligence.
    However, legislative uncertainty still clouds the landscape concerning autonomous vehicles and traffic incidents.

Analysis of Patent Applications Filed Under LiDAR in Autonomous Vehicles
Over the past few years, there has been rapid growth in patent applications covering the use of LiDAR in autonomous vehicles. As of today, ~81,697 patents have been recorded around the globe. Ford Global Tech LLC, with ~3,426 patents, is a dominant player in the market, while LG Electronics and Waymo LLC stand in second and third position.


[Source: https://www.lens.org/lens/search/patent/list?q=LiDAR%20%20%2B%20Autonomous%20vehicle]
The following charts show the legal status of these patents and patent documents over time.

Legal Status and Patent Documents Over Time

[Source: https://www.lens.org/lens/search/patent/list?q=LiDAR%20%20%2B%20Autonomous%20vehicle]

An examination of patent filings across different geographic regions shows that the United States, accounting for approximately 78% of all patents filed, holds the foremost position.

Patent filings across different geographic regions

[Source: https://www.lens.org/lens/search/patent/list?q=LiDAR%20%20%2B%20Autonomous%20vehicle]

Conclusion

In conclusion, LiDAR technology in self-driving vehicles has huge scope for improving road safety. With cutting-edge FMCW and flash technologies, the application of LiDAR in autonomous vehicles shows great improvements in accuracy and comfort, providing features such as object detection and reliable navigation. Several automobile manufacturers have already put the technology into practice in their vehicles, and companies with substantial resources are seeking to exploit its full potential. LiDAR is poised to play a central role in the future of autonomous driving.


Popular microcontrollers and their architecture

Microcontrollers

A microcontroller is a programmable processing element with an embedded memory system and multiple programmable input/output peripherals. The peripherals can be an advanced GPU, coprocessors, or other electronic components. Microcontrollers are used in many electronic devices to implement various applications.

They are used in devices that are automatically controlled, most commonly in automobiles, computer systems, and household appliances.

There are multiple manufacturers of microcontrollers in the market, such as:

  1. Cypress Semiconductor
  2. NXP Semiconductors
  3. Silicon Labs
  4. ARM
  5. MIPS
  6. Maxim Integrated
  7. Renesas
  8. Intel
  9. Microchip Technology

In this article, we will learn about the different components of popular microcontrollers from three manufacturers.

Texas Instruments C2000 MCU

Texas Instruments makes multiple products across the electronics space, including MCUs. The different MCUs produced by Texas Instruments include ARM-based MCUs, C2000 MCUs, DSPs, and MSP430 microcontrollers. The most popular are the C2000 MCUs, used in various electronic devices to perform control operations such as digital power and motor control.

C2000 MCUs:

Each C2000 MCU is a combination of multiple interconnected configurable blocks. Each Configurable Logic Block (CLB) can be configured to perform custom operations according to its configuration information.

Features of C2000 Microcontrollers:

  1. It provides high computational capability with an advanced floating-point unit.
  2. It implements a highly accurate analog-to-digital converter (ADC).
  3. It implements integrated comparators for performing comparison operations.
  4. It implements a high-speed communication interface for transferring signals and data.

Implementation of C2000 Microcontrollers:

The microcontroller lets us build independent custom logic units that perform different custom logical operations. These MCUs implement multiple Configurable Logic Blocks (CLBs), which can be configured or programmed for custom operations. Multiple custom logic units are connected using local or universal buses, and a global bus further connects multiple CLBs. Each CLB is closely associated with an ePWM module, whose functionality it can augment or replace.

The output of one CLB can be fed as input to another CLB to create a cascading effect.

CLB System Architecture: CLB unit modules and CLB sub-modules

Each CLB unit includes multiple CLB sub-modules, namely:

  1. 4-input look-up table (LUT) submodule – the LUT unit can realize any Boolean operation of up to 4 inputs (see the sketch after this list).
  2. 4-state finite state machine (FSM) – the FSM generates up to 4 states based on the inputs received.
  3. Counter unit – the counter can act as a counter, shifter, or adder. As a counter it can count up or down; as a shifter it can shift right or left; as an adder it can add or subtract.
  4. Output look-up table (LUT) – the output LUT can be configured with Boolean operations.
  5. High-Level Controller (HLC) – the HLC performs different control operations in the system, such as data exchange and interrupt operations.
TMS320F28004x Real-Time Microcontrollers
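
As a software analogy for the 4-input LUT submodule (item 1 above), the following sketch models a LUT as a 16-bit truth table indexed by the four input bits; the AND-of-four-inputs table is an arbitrary example, not a TI configuration value.

```python
def lut4(truth_table: int, in0: int, in1: int, in2: int, in3: int) -> int:
    """Evaluate a 4-input look-up table.

    The 16-bit truth_table holds one output bit for each of the
    2^4 = 16 possible input combinations; the inputs form the index.
    """
    index = (in3 << 3) | (in2 << 2) | (in1 << 1) | in0
    return (truth_table >> index) & 1

# Example truth table: output is 1 only when all four inputs are 1
AND4 = 1 << 15  # 0b1000_0000_0000_0000

print(lut4(AND4, 1, 1, 1, 1))  # 1
print(lut4(AND4, 1, 0, 1, 1))  # 0
```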

Links to the documentation of TI C2000 MCUs:

https://www.ti.com/microcontrollers-mcus-processors/c2000-real-time-control-mcus/overview.html

https://www.ti.com/lit/ml/slyp681/slyp681.pdf?ts=1655705809321&ref_url=https%253A%252F%252Fwww.google.com%252F

https://www.ti.com/lit/an/spracn0f/spracn0f.pdf?ts=1702390944874

https://www.ti.com/lit/ug/spruii0e/spruii0e.pdf?ts=1702390956144

https://www.ti.com/lit/ug/spruin7b/spruin7b.pdf?ts=1702390972904

NXP S32V2 Processors

NXP has been active in the microcontroller market for a long time. The NXP S32V2 processors are vision processors that process images using APEX-2 vision accelerators in sensing applications. They offer an image signal processor and a 3D graphics processing unit (GPU), and are extensively used in ADAS for object detection and image recognition.

S32V2 Processor:

The MCU features an APEX-2 vision accelerator that implements image processing operations using the APEX core framework and an APEX graph tool, sensing different objects ahead of it. The NXP MCU has been used in the BlueBox engine for autonomous driving.

Implementation of S32V2 Processor:

  1. Arm Cortex-A53 processor for processing different inputs.
  2. APEX-2 vision accelerators.
  3. GPU and hardware security encryption mechanism.
  4. Fabric and internal memory.

The APEX processing unit implements two APUs and 16 computational units (CUs); each CU includes four functional units: multiplier, load-store, ALU, and shifter.

Each APU is a parallel processor. It manages execution and data movement by dispatching instructions to the different CUs.
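
A loose software analogy for this dispatch model, with hypothetical data: one instruction is broadcast and each of the 16 CUs applies it to its own slice of the data.

```python
# SIMD-style dispatch: one instruction, applied by 16 computational
# units (CUs) to their own slices of the data (illustrative only).
NUM_CUS = 16

def apu_dispatch(instruction, data):
    """Split data across CUs and apply the same instruction to each slice."""
    chunk = max(1, len(data) // NUM_CUS)
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    return [[instruction(x) for x in s] for s in slices]

# Example: each CU doubles the pixel values in its slice
pixels = list(range(64))
result = apu_dispatch(lambda px: px * 2, pixels)
print(len(result), result[0])  # 16 slices; first CU's output
```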

It has been extensively used in 3D content creation, advanced driver assistance, and video surveillance for recognizing different objects and people.

G2-APEX-642 ICP Core
APEX ICP Core - Data Flow Management & HW Acceleration

The Array Control Processor (ACP) is a 32-bit RISC-based processor. The APU implements both scalar and SIMD capabilities: scalar processing is performed in the ACP, while vector processing is done in the vector processing unit.

S32V234 Vision Processor - Architecture

Links to the documentation of NXP S32V2 MCUs:

https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/s32-automotive-processors/s32v2-processors-for-vision-machine-learning-and-sensor-fusion:S32V234

https://www.nxp.com/docs/en/data-sheet/S32V234.pdf

https://www.nxp.com/webapp/Download?colCode=S32V234RM

Silabs EFM8 Busy Bee MCU

Silicon Labs’ EFM8 family includes analog-intensive MCUs such as the Laser Bee. These MCUs offer strong computational and analog capabilities, including a 14-bit ADC, temperature sensors, and high-speed communication peripherals in small packages.


Implementation of Silabs EFM8 Busy Bee:

  1. It includes up to four configurable logic units (CLUs).
  2. They are used in applications and locations that require programmable logic operations.
  3. Each unit supports 256 combinational logic functions, such as AND, OR, XOR, and multiplexing.
  4. Each CLU has a look-up table (LUT) function that can be configured to perform any of these 256 operations. Each CLU also contains a D flip-flop whose input is the LUT output, and multiple CLUs can be cascaded together to achieve more complex functions (see the sketch below).
Silabs EFM8 Busy Bee Architecture
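
To illustrate the LUT-plus-flip-flop structure from item 4, here is a minimal Python model of a single CLU; the 3-input LUT width (2^(2^3) = 256 possible functions) matches the 256 operations mentioned above, but the majority-function truth table is an illustrative assumption, not a Silabs register value.

```python
class CLU:
    """Toy model of a configurable logic unit: 3-input LUT + D flip-flop."""

    def __init__(self, truth_table: int):
        self.truth_table = truth_table  # 8 bits: one output bit per input combo
        self.q = 0  # D flip-flop state

    def lut(self, a: int, b: int, c: int) -> int:
        index = (c << 2) | (b << 1) | a
        return (self.truth_table >> index) & 1

    def clock(self, a: int, b: int, c: int) -> int:
        """On a clock edge, the flip-flop captures the LUT output."""
        self.q = self.lut(a, b, c)
        return self.q

# Example: truth table 0b1110_1000 implements majority(a, b, c)
clu = CLU(0b11101000)
print(clu.clock(1, 1, 0))  # 1 (two of three inputs high)
```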

Links to the documentation of Silabs EFM8 MCUs:

https://www.silabs.com/mcu/8-bit-microcontrollers/efm8-laser-bee

https://www.silabs.com/documents/public/training/mcu/em8-mcu-overview.pdf

https://www.silabs.com/mcu/8-bit-microcontrollers/efm8-bb5

https://www.silabs.com/documents/public/application-notes/AN921.pdf

https://www.silabs.com/documents/public/training/mcu/efm8-lb1-clu.pdf


Powering AI and ML: Unveiling GDDR6’s Role in High-Speed Memory Technology

Introduction

Artificial intelligence (AI) and machine learning (ML) have evolved into game-changing technologies with limitless applications, from natural language processing to the automobile sector. These applications need a significant amount of computing power, and memory is an often-neglected resource. Fast memory is crucial for AI and ML workloads, and GDDR6 memory has established itself as a prominent option in this industry, where high speed and computing power are essential. The following article investigates the usage of GDDR6 in AI and ML applications, as well as current IP trends in this crucial field.

Architecture of GDDR6

GDDR6 DRAM is a high-speed dynamic random-access memory designed for applications with high bandwidth requirements. The high-speed interface of GDDR6 SGRAM is designed for point-to-point communication with a host controller. To accomplish high-speed operation, GDDR6 employs a 16n prefetch architecture and a DDR or QDR interface. The architecture comprises two 16-bit wide, completely independent channels.

Figure 1. GDDR6 Controller SGRAM block diagram [Source]

The Role of GDDR6 in AI and ML

AI and ML processes, including the training and inference phases, require large-scale data processing. GPUs (Graphics Processing Units) have evolved into the workhorses of AI and ML systems for making sense of this data. The parallel processing capabilities of GPUs are outstanding, which is crucial for addressing the computational demands of AI and ML workloads.

Because GPU performance depends on storing and retrieving massive volumes of data, high-speed memory is essential. Earlier-generation GDDR5 and GDDR5X chips could not handle data transmission speeds above 12 Gbps per pin, so these applications demand faster memory. Here, GDDR6 memory plays a crucial role. Sustaining AI and ML performance gains requires memory bandwidth to keep pace, and High Bandwidth Memory (HBM) and GDDR6 offer best-in-class performance in this situation. The Rambus GDDR6 memory subsystem is designed for performance and power efficiency and was created to meet the high-bandwidth, low-latency requirements of AI and ML. The demand for HBM DRAM has significantly increased for gaming consoles and graphics cards as a result of recent developments in artificial intelligence, virtual reality, deep learning, self-driving cars, etc.

Micron’s GDDR6 Memory

Micron’s industry-leading technology enables the next generation of faster, smarter global infrastructure, facilitating artificial intelligence (AI), machine learning, and generative AI for gaming. Micron launched GDDR6X with the NVIDIA GeForce® RTX™ 3090 and GeForce® RTX™ 3080 GPUs, thanks to its high-performance computing, higher frame rates, and increased memory bandwidth.

Micron GDDR6 SGRAMs are designed to work from a 1.35V power supply, making them ideal for graphics cards. GDDR6 devices present a 32-bit wide data interface to the memory controller, organized as two channels that are completely independent of one another. A write or read memory access is 256 bits, or 32 bytes, wide for each channel. Each 256-bit data packet is converted by a parallel-to-serial converter into sixteen 16-bit data words that are broadcast consecutively over the 16-bit data bus. Originally designed for graphics processing, GDDR6 is a high-performance memory solution that delivers faster data packet processing. GDDR6 also supports an IEEE 1149.1-2013 compliant boundary scan, which allows the interconnect on the PCB to be tested during manufacturing using state-of-the-art automatic test pattern generation (ATPG) tools.
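
A rough sketch of the parallel-to-serial step described above, using a made-up 256-bit value: the access is split into sixteen 16-bit words that would be sent back-to-back over the 16-bit bus (word 0 here is the least-significant 16 bits).

```python
def serialize_256bit_access(access: int):
    """Split a 256-bit memory access into sixteen 16-bit words (16n prefetch)."""
    assert access < (1 << 256)
    return [(access >> (16 * i)) & 0xFFFF for i in range(16)]

# Hypothetical 256-bit data packet (arbitrary value)
packet = (0x0123_4567_89AB_CDEF << 192) | 0xDEAD_BEEF
for beat, word in enumerate(serialize_256bit_access(packet)):
    print(f"beat {beat:2d}: 0x{word:04X}")
```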

Figure 2. GDDR6 2-channel 16n prefetch memory architecture [Source]

Rambus GDDR6 Memory Interface Subsystem

The Rambus GDDR6 interface fully supports the JEDEC GDDR6 JESD250C standard. The Rambus GDDR6 memory interface subsystem fulfills the high-bandwidth, low-latency needs of AI/ML inference and is built for performance and power efficiency. It includes a PHY and a digital controller, giving users a complete GDDR6 memory subsystem. It provides an industry-leading 24 Gb/s per pin and supports two channels, each 16 bits wide, for a combined data width of 32 bits. At 24 Gb/s per pin, the Rambus GDDR6 interface delivers a bandwidth of 96 GB/s.
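
The quoted 96 GB/s follows directly from the per-pin rate and the data width; a quick check of the arithmetic:

```python
# Verify the quoted GDDR6 interface bandwidth from its parameters.
data_rate_gbps_per_pin = 24   # Gb/s per pin
data_width_bits = 32          # two independent 16-bit channels

total_gbits_per_s = data_rate_gbps_per_pin * data_width_bits  # 768 Gb/s
total_gbytes_per_s = total_gbits_per_s / 8                    # bits -> bytes
print(total_gbytes_per_s)  # 96.0 GB/s
```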

Figure 3. GDDR6 memory interface subsystem example [Source]

Applications of GDDR6 Memory in AI/ML

A large variety of AI/ML applications across many industries employ GDDR6 memory. Here are some examples:

  1. FPGA-based AI applications

In a recent release, Micron focused on high-performance FPGA-based AI applications using GDDR6 memory, built on TSMC’s 7nm process technology with an FPGA from Achronix.

2. AI/ML inference at the edge

GDDR6 memory is ideal for AI/ML inference at the edge, where fast storage is essential. It offers high memory bandwidth, system speed, and low-latency performance, enabling real-time computation over large amounts of data.

3. Advanced driver assistance systems (ADAS)

ADAS employs GDDR6 memory in visual recognition for processing large amounts of visual data, in multi-sensor tracking and detection, and in real-time decision-making, where large amounts of neural network-based data are analyzed to reduce accidents and protect passengers.

4. Cloud Gaming

To provide a smooth gaming experience, cloud gaming relies on GDDR6 as its high-speed memory.

5. Healthcare and Medicine:

In the medical industry, GDDR6 enables faster analysis of medical data with AI algorithms for diagnosis and treatment.

IP Trends in GDDR6 Use in Machine Learning and Artificial Intelligence

As the importance of high-speed, low-latency memory increases, significant growth in patent filings has been witnessed across the globe. The highest number of patents granted in a single year was 212, in 2022, and the highest number of patent applications filed in a year was ~408, also in 2022.

Intel is a dominant player in the market with ~1,107 patent families. So far, it has 2.5 times more patent families than NVIDIA Corp., which comes second with 435 patent families. Micron Technology is the third-largest patent holder in the domain.

Other key players in the domain are SK Hynix, Samsung, and AMD.

Top Applicants for GDDR6 Memory Use

[Source: https://www.lens.org/lens/search/patent/analysis?q=(GDDR6%20memory%20use)]

The following charts show the publication trends and their legal status over time:

Publication status over time
Legal status over time

[Source: https://www.lens.org/lens/search/patent/analysis?q=(GDDR6%20memory%20use)]

Conclusion

In the fast-paced world of AI and ML, where every millisecond matters, high-speed memory is an unsung hero. GDDR6 has stepped up to the plate, providing high bandwidth, low latency, and large capacity, making it an essential part of AI and ML systems. The IP trends for GDDR6 technology indicate continued efforts to enhance memory solutions for these cutting-edge technologies as demand for AI and ML capabilities rises. These developments bode well for future advances in AI and ML, which promise to become even more impressive.