Intellect-Partners

Categories
Automotive

LiDAR Technology in Autonomous Vehicles

Introduction:

LiDAR, an acronym for “light detection and ranging” or “laser imaging, detection, and ranging,” is a sensor that determines ranges by targeting an object or a surface with a laser and measuring the time the reflected light takes to return to the receiver. Because it scans its surroundings, the technique is also sometimes called 3D laser scanning. In particular, LiDAR image registration (LIR) is a critical task that focuses on techniques for aligning, or registering, LiDAR point cloud data with corresponding images. It involves two types of data with different properties, which may be acquired from different sensors at different times or under different conditions. By accurately aligning LiDAR point clouds with captured 2D images, registration produces a highly informative, finely detailed understanding of the environment.

How does LiDAR work?

LiDAR works by emitting a pulse of light and waiting for its return. The sensor measures the round-trip time, i.e. how long the pulse takes to come back, and uses it to calculate the distance to the object.
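
To make the calculation concrete, here is a minimal sketch of the time-of-flight arithmetic; the function name and the example value are illustrative and not part of any real sensor API.

```python
# Minimal sketch of the time-of-flight calculation described above.
# The function name and the example value are illustrative, not from a real sensor API.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the target: the pulse travels out and back, so divide by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a pulse that returns after 200 nanoseconds corresponds to roughly 30 m.
print(distance_from_round_trip(200e-9))  # ≈ 29.98 m
```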


Fig. 1. Working of LiDAR

Application Areas of LiDAR
The fusion of LiDAR point clouds and camera images is a popular example of Multi-Remote Sensing Image Registration (MRSIR). Today, LiDAR comes in various types and forms, such as static and mobile systems. Depending on where it is deployed, LiDAR can be classified as terrestrial, aerial, or marine.
The application of LiDAR is very broad. It has uses in surveying, archaeology, geology, forestry, and other fields such as:

  • Autonomous driving: LIR is used to align sensor data to create a more accurate and complete representation of the environment (see the projection sketch after this list).
  • Robotics: Align sensor data to create more accurate maps and enable more precise localization.
  • 3D mapping: Align data from multiple sensors to create detailed 3D models of the environment.
  • Augmented Reality (AR): Synchronizing virtual elements to correspond with the physical environment.
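
As a concrete illustration of LIR, the sketch below projects LiDAR points into a camera image using a pinhole camera model. The intrinsic matrix, the extrinsic transform, and the sample points are placeholder values; in practice they come from sensor calibration, and this is only a sketch of the projection step, not a full registration method.

```python
import numpy as np

# Hedged sketch: projecting LiDAR points into a camera image with a pinhole model.
# K and T_cam_from_lidar are placeholder values; real ones come from calibration.

K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])      # camera intrinsics (fx, fy, cx, cy)

T_cam_from_lidar = np.eye(4)                  # LiDAR -> camera extrinsics (identity here)

def project_points(points_lidar):
    """Project an Nx3 array of LiDAR points (metres) to Nx2 pixel coordinates."""
    homo = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    cam = (T_cam_from_lidar @ homo.T).T[:, :3]   # points in the camera frame
    cam = cam[cam[:, 2] > 0]                     # keep only points in front of the camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]              # perspective divide -> pixel coordinates

points = np.array([[2.0, 0.5, 10.0], [-1.0, 0.2, 5.0]])
print(project_points(points))
```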

Utilization of LiDAR in Self-Driving Vehicles

3D Point Cloud and Calculation of Distance
In the realm of road safety, numerous automobile manufacturers are either using or exploring the installation of LiDAR technology in their vehicles.


Fig. 2. LiDAR Technology in Self-Driving Vehicles [Source: https://velodynelidar.com/what-is-lidar/#:~:text=A%20typical%20lidar%20sensor%20emits,calculate%20the%20distance%20it%20traveled]

By repeating this pulse-and-measure process many times per second, the sensor generates a detailed, live 3D representation of the environment, referred to as a point cloud.
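
As a rough illustration of how individual pulse measurements become a point cloud, the sketch below converts per-pulse range and angle readings into Cartesian coordinates. The sample readings are synthetic and the function is not tied to any particular sensor.

```python
import numpy as np

# Illustrative only: converting per-pulse (range, azimuth, elevation) readings into
# Cartesian XYZ points, i.e. the "point cloud" described above. Sample readings are synthetic.

def spherical_to_xyz(ranges, azimuths, elevations):
    """ranges in metres, angles in radians; returns an Nx3 array of points."""
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=1)

ranges = np.array([12.3, 12.1, 30.5])
azimuths = np.radians([0.0, 0.2, 45.0])
elevations = np.radians([-1.0, -1.0, 2.0])
point_cloud = spherical_to_xyz(ranges, azimuths, elevations)
print(point_cloud.shape)  # (3, 3): three points, each with x, y, z
```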

Advantages of Mounting LiDAR Above Autonomous Vehicles
Within an autonomous vehicle, the LiDAR sensor captures extensive data by rapidly analyzing numerous laser pulses. This information, which forms a ‘3D point cloud’ from the laser reflections, is processed by an integrated computer to generate a dynamic three-dimensional representation of the surroundings. Training the onboard AI model on meticulously annotated point cloud datasets is pivotal to ensuring that LiDAR creates this 3D environment precisely. The annotated data empowers autonomous vehicles to detect, identify, and categorize objects, enhancing their ability to accurately discern traffic lanes, road signs, and moving entities, and to evaluate real-time traffic scenarios through image and video annotations.
Beyond research, the use of LiDAR technology within autonomous vehicles is being actively explored in practice. Automakers have begun integrating LiDAR into advanced driver assistance systems (ADAS), giving vehicles a comprehensive grasp of dynamic traffic conditions. The safety of autonomous driving relies on these systems, which make precise decisions within fractions of a second by analyzing vast numbers of data points.

Cutting-edge approaches
However, challenges remain in developing a fully automated vehicle that can guarantee 100% accuracy in critical tasks such as object detection and navigation. To overcome them, many researchers and automobile companies have been working to improve the technology. Cutting-edge approaches are broadly organized into four distinct pipelines: information-based, feature-based, ego-motion-based, and deep learning-based. The deep learning-based pipeline has shown the greatest gains in accuracy. LiDAR technology not only enhances convenience but also plays a pivotal role in reducing severe collisions. The latest advancements in this domain include new LiDAR sensor designs and the shift from traditional mechanical scanning to cutting-edge FMCW and flash technologies.

Patenting Trends for LiDAR Technology in Autonomous Vehicles

The field of autonomous vehicle technology has witnessed a notable rise in patent submissions, especially concerning sensor technology, mapping techniques, decision-making algorithms, and communication systems. Pioneering these advancements are entities such as Google, Tesla, and Uber, while longstanding automotive giants like Ford, General Motors, and BMW have also been actively filing patents. In the United States, there is a significant emphasis on artificial intelligence (AI) and augmented reality in this market, with car manufacturers and developers collaborating to introduce self-driving vehicles to the public. Autonomous cars are expected to transform the driving experience and to introduce a whole new set of problems.
Although Sartre’s filing was among the first patent submissions in the autonomous vehicle domain, it was perceived primarily as covering an AI system designed for highway navigation or restricted roadways. US patent filings for self-driving cars were scarce before 2006, largely because of a trend that emerged in the late 1990s and persists today: the US Patent Office has granted relatively few such patents.

Challenges in Patenting Technology for Autonomous Vehicles
The challenges in patenting technology for self-driving vehicles become apparent when these vehicles are involved in accidents or insurance claims. Owners typically face three choices:

  1. Assuming liability for any harm or property damage caused by their vehicle.
  2. Taking steps toward legal recourse against the involved driver.
  3. Exploring compensation from their insurance company to address losses resulting from the other driver’s negligence.
    However, legislative uncertainty still clouds the landscape concerning autonomous vehicles and traffic incidents.

Analysis of Patent Applications filed under Lidar in Autonomous Vehicles
Over the past few years, there has been rapid growth in patent application filings on the use of LiDAR in autonomous vehicles. To date, roughly 81,697 patent documents have been recorded around the globe. Ford Global Tech LLC, with ~3,426 patents, is the dominant player in the market, while LG Electronics and Waymo LLC stand in second and third position.


[Source: https://www.lens.org/lens/search/patent/list?q=LiDAR%20%20%2B%20Autonomous%20vehicle]
The following charts show the legal status of these patents and the number of patent documents over time.

Legal Status and Patent Documents Over Time

[Source: https://www.lens.org/lens/search/patent/list?q=LiDAR%20%20%2B%20Autonomous%20vehicle]

Through an examination of patent filings across different geographic regions, it is evident that the United States, constituting approximately 78% of the overall patents submitted, holds the foremost position in this chart.

Patent filings across different geographic regions

[Source: https://www.lens.org/lens/search/patent/list?q=LiDAR%20%20%2B%20Autonomous%20vehicle]

Conclusion

In conclusion, LiDAR technology in self-driving vehicles has enormous scope for improving road safety. With cutting-edge FMCW and flash technologies, the application of LiDAR in autonomous vehicles shows great improvements in accuracy and comfort, providing features such as reliable object detection and navigation. Automobile companies such as Tesla and Toyota have already experimented with the technology in their vehicles, and companies with such large turnovers are looking to exploit its full potential. The technology holds great promise for future global advancement.

Categories
Computer Science

Enhancing AI Accelerators with HBM3: Overcoming Memory Bottlenecks in the Age of Artificial Intelligence

High Bandwidth Memory 3 (HBM3): Overcoming Memory Bottlenecks in AI Accelerators

With the rise of generative AI models that can produce original text, image, video, and audio content, artificial intelligence (AI) has made major strides in recent years. These models, such as large language models (LLMs), are trained on enormous quantities of data and need a great deal of processing power to function properly. Their cost and processing requirements mean that AI accelerators now require more effective memory solutions. High Bandwidth Memory, a memory standard that offers various benefits over earlier memory technologies, is one such approach.

How is HBM relevant to AI accelerators?

Memory constraints have grown increasingly problematic over the past few decades in a number of fields, including embedded technology, artificial intelligence, and, most recently, the rapid growth of generative AI. Because applications place such high demands on bandwidth, external memory interfaces have struggled to keep up. An ASIC (application-specific integrated circuit) typically connects to external memory, frequently DDR memory, through a printed circuit board with constrained interface capabilities. Even with DDR4 memory, a four-channel interface offers only around 60 GB/s of bandwidth. While DDR5 memory has improved on this, the gain in bandwidth is still only marginal and cannot keep up with continuously expanding application needs.
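
As a back-of-the-envelope check on such figures, the sketch below computes peak interface bandwidth from channel count, transfer rate, and bus width. The DDR4 and HBM numbers used here are illustrative assumptions, not figures taken from the article or a specific datasheet.

```python
# Back-of-the-envelope sketch of peak interface bandwidth.
# The data rates and bus widths below are illustrative assumptions, not datasheet values.

def peak_bandwidth_gbps(channels: int, transfers_per_sec: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = channels * transfer rate * bytes per transfer."""
    bytes_per_transfer = bus_width_bits / 8
    return channels * transfers_per_sec * bytes_per_transfer / 1e9

# Four 64-bit DDR4 channels at an assumed 2400 MT/s:
print(peak_bandwidth_gbps(channels=4, transfers_per_sec=2.4e9, bus_width_bits=64))    # ~76.8 GB/s

# A single HBM stack with a 1024-bit interface at an assumed 6.4 GT/s per pin:
print(peak_bandwidth_gbps(channels=1, transfers_per_sec=6.4e9, bus_width_bits=1024))  # ~819 GB/s
```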

However, a shorter link, more channels, and higher memory bandwidth become practical when high-bandwidth memory solutions are considered. This makes it possible to have more stacks on each PCB, which greatly enhances bandwidth. Significant advancements in high-bandwidth memory have been made to suit the demands of many applications, notably those requiring complex AI and machine learning models.

The latest generation of High Bandwidth Memory

The most recent high bandwidth memory standard is HBM3, which is a memory specification for 3D stacked SDRAM that was made available by JEDEC in January 2022. With support for greater densities, faster operation, more banks, enhanced reliability, availability, and serviceability (RAS) features, a lower power interface, and a redesigned clocking architecture, it provides substantial advancements over the previous HBM2E standard (JESD235D). 

General Overview of DRAM Die Stack with Channels

[Source: HBM3 Standard [JEDEC JESD238A] Page 16 of 270]

P.S. You can refer to HBM3 Standard [JEDEC JESD238A]: https://www.jedec.org/sites/default/files/docs/JESD238A.pdf for further studies.   

How does HBM3 address memory bottlenecks in AI accelerators?

HBM3 is intended to offer high bandwidth while consuming little energy, making it well suited to AI tasks that need fast and efficient data access. HBM3 introduces a number of significant enhancements over earlier memory standards, including:

Increased bandwidth

Since HBM3 has a substantially larger bandwidth than its forerunners, data may be sent between the memory and the GPU or CPU more quickly. For AI tasks that require processing massive volumes of data in real time, this additional bandwidth is essential.
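
To see why bandwidth matters so much here, the following roofline-style sketch estimates the attainable throughput of a workload from peak compute, memory bandwidth, and arithmetic intensity. All of the numbers are hypothetical placeholders rather than measured accelerator specifications.

```python
# Roofline-style sketch: is a workload limited by memory bandwidth or by compute?
# All numbers below are hypothetical placeholders, not measured accelerator specs.

def attainable_tflops(peak_tflops: float, bandwidth_gbs: float, flops_per_byte: float) -> float:
    """Attainable throughput is capped by min(peak compute, bandwidth * arithmetic intensity)."""
    memory_bound_tflops = bandwidth_gbs * flops_per_byte / 1000.0  # GB/s * FLOP/byte -> TFLOP/s
    return min(peak_tflops, memory_bound_tflops)

# A low-arithmetic-intensity workload (e.g. matrix-vector heavy LLM inference):
print(attainable_tflops(peak_tflops=300.0, bandwidth_gbs=800.0, flops_per_byte=50))   # 40.0 -> memory-bound
print(attainable_tflops(peak_tflops=300.0, bandwidth_gbs=3000.0, flops_per_byte=50))  # 150.0 -> more bandwidth, more throughput
```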

Lower power consumption

Since HBM3 is intended to be more power-efficient than earlier memory technologies, it will enable AI accelerators to use less energy overall. This is crucial because it may result in considerable cost savings and environmental advantages for data centers that host large-scale AI hardware.

Higher memory capacity

Greater memory capacities supported by HBM3 enable AI accelerators to store and analyze more data concurrently. This is crucial for difficult AI jobs that need access to a lot of data, such as computer vision or natural language processing.

Improved thermal performance

AI accelerators are less likely to overheat because of elements in the HBM3 architecture that aid heat dissipation. This is essential for preserving the system’s performance and dependability, particularly during demanding AI workloads.

Compatibility with existing systems

Because HBM3 is designed to be backward-compatible with earlier HBM iterations, manufacturers of AI accelerators will find it simpler to adopt the new technology without making substantial changes to their current systems. This guarantees an easy switch to HBM3 and enables quicker integration into the AI ecosystem.

In short, HBM3 offers enhanced bandwidth, reduced power consumption, greater memory capacity, improved thermal performance, and compatibility with current systems, making it a suitable memory choice for AI accelerators. As AI workloads continue to grow in complexity and size, HBM3 will play a significant role in overcoming memory constraints and enabling more effective and powerful AI systems.

Intellectual property trends for HBM3 in AI Accelerators

HBM3 in AI accelerators is witnessing rapid growth in patent filings across the globe. Over the past few years, the number of patent applications has almost doubled every two years.

Micron is a dominant player in the market, holding around 50% of the patents. It now holds twice as many patents as Samsung and SK Hynix combined. Micron states that its HBM3 Gen2 “breaks new records” in performance, capacity, and power efficiency for today’s AI data centers. The clear goal is to enable faster infrastructure utilization for AI inference, shorter training times for large language models such as GPT-4, and a better total cost of ownership (TCO).

Other key players that have filed patents in high-bandwidth memory technology include Intel, Qualcomm, and Fujitsu.

key players who have filed for patents in High bandwidth memory

[Source: https://www.lens.org/lens/search/patent/list?q=stacked%20memory%20%2B%20artificial%20intelligence]  

Following are the trends of publication and their legal status over time:

Legal status for patent applications and documents

[Source: https://www.lens.org/lens/search/patent/list?q=stacked%20memory%20%2B%20artificial%20intelligence]

These top companies own around 60% of the total patents related to HBM. The diagram below shows that these companies have built strong IP moats in the US jurisdiction.

IP moats in US jurisdiction

[Source: https://www.lens.org/lens/search/patent/list?q=stacked%20memory%20%2B%20artificial%20intelligence]

Conclusion

In summary, compared to earlier memory standards, HBM3 provides larger storage capacity, better bandwidth, reduced power consumption, and improved signal integrity. HBM3 is essential for overcoming memory limitations in the context of AI accelerators and allowing more effective and high-performance AI applications. HBM3 will probably become a typical component in the next AI accelerator designs as the need for AI and ML continues to rise, spurring even more improvements in AI technology.    


Categories
Electronics

Understanding Hidden Markov Model in Natural Language – Decoding Amazon Alexa

Alexa is a cloud-based software program that acts as a voice-controlled virtual personal assistant. Alexa works by listening for voice commands, translating them into text, interpreting the text to carry out corresponding functions, and delivering results in the form of audio, video, or device/accessory triggers.

Hidden Markov Models (HMMs) are a type of probabilistic model that can be used in Natural Language Understanding (NLU) to help programs reach the most likely decision based on both previous decisions and current observations.

Machine learning plays a critical role in improving Alexa’s ability to understand and respond to voice commands over time.

Alexa has three main parts: the wake word, the invocation name, and the utterance. Here is a breakdown of each part, followed by a small parsing sketch:

  • Wake word: This is the word that users say to activate Alexa. By default, the wake word is “Alexa,” but users can change it to “Echo,” “Amazon,” or “Computer.”
  • Invocation name: This is the unique name that identifies a custom skill. Users can invoke a custom skill by saying the wake word followed by the invocation name. The invocation name must not contain the wake words “Alexa,” “Amazon,” “Echo,” or the words “skill” or “app.”
  • Utterance: This is the spoken phrase that users say to interact with Alexa. Users can include additional words around their utterances, and Alexa will try to understand the intent behind the words.
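
To make this structure concrete, here is a small, hedged sketch that splits a transcribed command into wake word, invocation name, and utterance. The launch words, the example skill name “daily horoscope,” and the parsing rules are illustrative assumptions, not Amazon’s actual request handling.

```python
# Illustrative only: splitting a transcribed command into wake word, invocation name, and utterance.
# The launch words, the "daily horoscope" skill name, and the parsing rules are hypothetical.

WAKE_WORDS = {"alexa", "echo", "amazon", "computer"}
LAUNCH_WORDS = {"ask", "open", "launch", "start"}   # common launch phrasings (assumed)

def parse_command(transcript, known_invocations):
    words = transcript.lower().replace(",", " ").split()
    if not words or words[0] not in WAKE_WORDS:
        return None                                  # no wake word -> the device keeps listening
    rest = words[1:]
    if rest and rest[0] in LAUNCH_WORDS:             # e.g. "ask <invocation name> ..."
        rest = rest[1:]
    phrase = " ".join(rest)
    for name in known_invocations:
        if phrase.startswith(name):                  # wake word + invocation name -> custom skill
            return {"wake_word": words[0], "invocation": name,
                    "utterance": phrase[len(name):].strip()}
    return {"wake_word": words[0], "invocation": None, "utterance": phrase}

print(parse_command("Alexa, ask daily horoscope about Taurus", {"daily horoscope"}))
# {'wake_word': 'alexa', 'invocation': 'daily horoscope', 'utterance': 'about taurus'}
```
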
Natural Language Processing (NLP)

What is NLP?

Natural Language Processing (NLP) is a key component of Alexa’s functionality. NLP is a branch of computer science that involves the analysis of human language in speech and text. It is the technology that allows machines to understand and interact with human speech, but is not limited to voice interactions. NLP is the reader that takes the language created by Natural Language Generation (NLG) and consumes it. Advances in NLP technology have allowed dramatic growth in intelligent personal assistants such as Alexa.

Alexa uses NLP to process requests or commands through a machine learning technique. When a user speaks to Alexa, the audio is sent to Amazon’s servers to be analysed more efficiently. To convert the audio into text, Alexa analyses characteristics of the user’s speech such as frequency and pitch to give feature values. The Alexa Voice Service then processes the response and identifies the user’s intent, making a web service request to a third-party server if needed.

In summary, NLP is the technology that allows Alexa to understand and interact with human speech. It is used to process requests or commands through a machine learning technique, and NLU is a key component of Alexa’s functionality that allows it to infer what a user is asking for when they ask a question in a variety of ways.

Hidden Markov Model (NLU Example) 


HMMs are used in Alexa’s NLU to help understand the meaning behind the words spoken by the user. Here is an example of how HMMs can be used in Alexa’s NLU:

  1. The user says “Alexa, play some music.”
  2. The audio is sent to Amazon’s servers to be analyzed more efficiently.
  3. The audio is converted into text using speech-to-text conversion.
  4. The text is analyzed using an HMM to determine the user’s intent. The HMM takes into account the previous decisions made by the user, such as previous music requests, as well as the current observation, which is the user’s request to play music (a minimal decoding sketch follows this list).
  5. Alexa identifies the user’s intent as “play music” and performs the requested action.
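
The sketch below illustrates the kind of decoding an HMM performs, using the Viterbi algorithm over a toy two-intent model. The states, vocabulary, and probabilities are invented for illustration and are not Alexa’s actual models or parameters.

```python
# Toy illustration of HMM decoding (Viterbi). The states, vocabulary, and probabilities
# are invented for this example; they are not Alexa's actual models or parameters.

states = ["PlayMusic", "GetWeather"]
start_p = {"PlayMusic": 0.6, "GetWeather": 0.4}
trans_p = {"PlayMusic": {"PlayMusic": 0.7, "GetWeather": 0.3},
           "GetWeather": {"PlayMusic": 0.4, "GetWeather": 0.6}}
emit_p = {"PlayMusic": {"play": 0.5, "some": 0.2, "music": 0.3},
          "GetWeather": {"play": 0.1, "some": 0.3, "music": 0.1}}

def viterbi(observations):
    """Return the most likely state sequence for the observed words."""
    # v[t][s] = probability of the best path ending in state s at step t
    v = [{s: start_p[s] * emit_p[s].get(observations[0], 1e-6) for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        v.append({})
        back.append({})
        for s in states:
            prob, prev = max((v[t - 1][p] * trans_p[p][s] * emit_p[s].get(observations[t], 1e-6), p)
                             for p in states)
            v[t][s], back[t][s] = prob, prev
    # trace back from the best final state to recover the full path
    best = max(states, key=lambda s: v[-1][s])
    path = [best]
    for t in range(len(observations) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

print(viterbi(["play", "some", "music"]))  # ['PlayMusic', 'PlayMusic', 'PlayMusic']
```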

Conclusion

In summary, Alexa’s NLP architecture involves converting the user’s spoken words into text, processing the text to identify the user’s intent, and performing complex operations such as NLU using the Alexa Voice Service.