
Popular microcontrollers and their architecture

Microcontrollers

A microcontroller is a programmable processing element with an embedded memory system and multiple programmable input and output peripherals. The peripherals can be advanced GPUs, coprocessors, or other electronic components. Microcontrollers are used in many electronic devices to implement a wide range of applications.

Microcontrollers are used in devices that are automatically controlled; they are most common in automobiles, computer systems, and household appliances.

There are multiple microcontroller manufacturers in the market, such as:

  1. Cypress Semiconductor
  2. NXP Semiconductors
  3. Silicon Labs
  4. ARM
  5. MIPS
  6. Maxim Integrated
  7. Renesas
  8. Intel
  9. Microchip Technology

In this article, we will learn about the different components of popular microcontrollers from three of these manufacturers.

Texas Instruments C2000 MCUs

Texas Instruments makes a wide range of products spanning many electronic devices, including MCUs. The MCU families produced by Texas Instruments include ARM-based MCUs, C2000 MCUs, DSPs, and MSP430 microcontrollers. The most popular are the C2000 MCUs, used in various electronic devices to perform control operations such as digital power and motor control.

C2000 MCUs:

Each C2000 MCU is a combination of multiple interconnected configurable blocks. Each configurable logic block (CLB) can be configured to perform custom operations according to its configuration information.

Features of C2000 Microcontrollers:

1. It provides high computational capability with an advanced floating-point processing unit.

2. It implements a highly accurate analog-to-digital converter (ADC).

3. It implements integrated comparators for performing comparison operations.

4. It implements high-speed communication interfaces for the exchange of signals and data.

Implementation of C2000 Microcontrollers:

The microcontroller lets us build independent custom logic units that perform different custom logical operations. The MCUs implement multiple configurable logic blocks (CLBs) in the system, which can be configured or programmed for custom operations. The custom logic units are connected using local or global buses, and each CLB is paired with an ePWM module whose signals it can augment or override. The global bus further connects multiple CLBs.

The output of one CLB can be fed into another CLB to create a cascading effect.

CLB System Architecture
CLB unit modules and CLB sub-modules

Each CLB unit includes multiple CLB sub-modules, namely:

  1. 4-input look-up table (LUT) submodule – the LUT unit can realize any Boolean function of up to 4 inputs (see the sketch below this list).
  2. 4-state finite state machine (FSM) – the FSM generates up to 4 states based on the inputs received.
  3. Counter unit – the counter can act as a counter, shifter, or adder. As a counter it can count up or down; as a shifter it can shift right or left; as an adder it can add or subtract.
  4. Output look-up table (LUT) – the output LUT can be configured with Boolean operations.
  5. High-Level Controller (HLC) – the HLC performs control operations in the system, such as data exchange and interrupt handling.
TMS320F28004x Real-Time Microcontrollers
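As a rough illustration of the 4-input LUT idea from the list above, the sketch below computes the 16-bit truth-table word that such a LUT would need for a given Boolean function. This is plain C with no device headers; in a real design the value would be entered through TI's CLB configuration tooling rather than written directly, and nothing here is a TI register definition.

#include <stdint.h>
#include <stdio.h>

/* Build the 16-bit truth-table word for a 4-input LUT: bit i of the
 * result holds f evaluated on the 4 input bits of i. */
static uint16_t lut4_truth_table(int (*f)(int, int, int, int))
{
    uint16_t tt = 0;
    for (int i = 0; i < 16; i++) {
        int in0 = (i >> 0) & 1, in1 = (i >> 1) & 1;
        int in2 = (i >> 2) & 1, in3 = (i >> 3) & 1;
        if (f(in0, in1, in2, in3))
            tt |= (uint16_t)(1u << i);
    }
    return tt;
}

/* Example Boolean function: (in0 AND in1) OR (in2 XOR in3). */
static int example_fn(int in0, int in1, int in2, int in3)
{
    return (in0 && in1) || (in2 ^ in3);
}

int main(void)
{
    /* Printing the word keeps the sketch self-contained. */
    printf("LUT4 config word: 0x%04X\n", lut4_truth_table(example_fn));
    return 0;
}

Because any 4-input function collapses to 16 output bits, one such word fully describes the LUT's behavior, which is what makes the block "configurable".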

Links to the documentation of TI C2000 MCUs:

https://www.ti.com/microcontrollers-mcus-processors/c2000-real-time-control-mcus/overview.html

https://www.ti.com/lit/ml/slyp681/slyp681.pdf?ts=1655705809321&ref_url=https%253A%252F%252Fwww.google.com%252F

https://www.ti.com/lit/an/spracn0f/spracn0f.pdf?ts=1702390944874

https://www.ti.com/lit/ug/spruii0e/spruii0e.pdf?ts=1702390956144

https://www.ti.com/lit/ug/spruin7b/spruin7b.pdf?ts=1702390972904

NXP S32V2 Processors

NXP has been active in the microcontroller market for a long time. The NXP S32V2 devices are vision processors that process images using APEX-2 vision accelerators in sensing applications. Each device offers an image signal processor and a 3D graphics processing unit (GPU). They are extensively used in advanced driver assistance systems (ADAS) for object detection and image recognition.

S32V2 Processor:

The MCU features an APEX-2 vision accelerator that implements image processing operations using the APEX core framework and the APEX graph tool to sense objects ahead of the vehicle. The NXP MCU has also been used in the BlueBox engine for autonomous driving.

Implementation of S32V2 Processor:

  1. Cortex-A53 processor for processing different inputs.
  2. APEX-2 vision accelerators.
  3. GPU and hardware security encryption mechanism.
  4. Fabric and internal memory.

The APEX processing unit implements two APUs and 16 computational units (CUs), and each CU includes four functional units: a multiplier, a load-store unit, an ALU, and a shifter.

Each APU is a parallel processor for different computational operations. The APU manages execution and data movement by dispatching instructions to the CUs.

It has been extensively used in 3D content creation, advanced driver assistance, and video surveillance for recognizing objects and people.

G2-APEX-642 ICP Core
APEX ICP Core - Data Flow Management & HW Acceleration

The ACP is a 32-bit RISC-based processor. The APU implements both scalar and SIMD capabilities: scalar processing is performed in the array control processor (ACP), while vector processing is done in the vector processing unit.
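The scalar/vector split described above can be pictured with a short sketch: a scalar control loop playing the ACP's role walks an image in chunks and hands each chunk to 16 lanes standing in for the CUs mentioned earlier. This is plain C as a conceptual model only; NXP's actual APEX core framework and its APIs are not shown here.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define NUM_CU 16  /* computational units per APU, per the text above */

/* Conceptual model: the outer loop is the scalar controller, the
 * inner loop models NUM_CU SIMD lanes applying the same operation. */
static void threshold_image(const uint8_t *src, uint8_t *dst,
                            size_t n, uint8_t level)
{
    for (size_t base = 0; base < n; base += NUM_CU) {
        /* Each iteration of this inner loop models one SIMD lane. */
        for (size_t lane = 0; lane < NUM_CU && base + lane < n; lane++) {
            size_t i = base + lane;
            dst[i] = (uint8_t)((src[i] > level) ? 255 : 0);
        }
    }
}

int main(void)
{
    uint8_t in[20], out[20];
    for (size_t i = 0; i < 20; i++) in[i] = (uint8_t)(i * 13);
    threshold_image(in, out, 20, 128);
    printf("out[0]=%u out[19]=%u\n", out[0], out[19]);
    return 0;
}

On real hardware the inner loop would execute in lock-step across the CUs in one pass, which is where the parallel speed-up comes from.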

S32V234 Vision Processor - Architecture

Links to the documentation of NXP S32V2 MCUs:

https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/s32-automotive-processors/s32v2-processors-for-vision-machine-learning-and-sensor-fusion:S32V234

https://www.nxp.com/docs/en/data-sheet/S32V234.pdf

https://www.nxp.com/webapp/Download?colCode=S32V234RM

Silabs EFM8 Busy Bee MCU

Silicon Labs' EFM8 Laser Bee family consists of analog-intensive MCUs. These MCUs offer high computational performance along with a 14-bit ADC, temperature sensors, and high-speed communication peripherals in small packages.


Implementation of Silabs EFM8 Busy Bee:

  1. It includes up to four configurable logic units (CLUs).
  2. They are used in applications and places that require programmable logic operations.
  3. Each unit supports 256 different combinational logic functions, such as AND, OR, XOR, and multiplexing.
  4. Each CLU has a look-up table (LUT) function that can be used to perform the 256 different operations. Each CLU also contains a D flip-flop whose input is the LUT output. Multiple CLUs can be cascaded to implement larger functions (see the behavioral sketch below).
Silabs EFM8 Busy Bee Architecture
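The behavioral sketch referenced in the list above follows. It assumes the 256 selectable functions arise from a 3-input LUT (2^(2^3) = 256 possible truth tables, one per 8-bit value) feeding a D flip-flop; the names and structure are illustrative, not Silabs register definitions.

#include <stdint.h>
#include <stdio.h>

/* Software model of one CLU: an 8-bit truth table covering all
 * functions of 3 inputs, plus a D flip-flop on the LUT output. */
typedef struct {
    uint8_t lut;  /* truth table: bit (c<<2 | b<<1 | a) is the output */
    uint8_t q;    /* D flip-flop state */
} clu_model;

static uint8_t clu_lut_out(const clu_model *clu, int a, int b, int c)
{
    return (uint8_t)((clu->lut >> ((c << 2) | (b << 1) | a)) & 1);
}

/* On each clock edge the flip-flop captures the LUT output. */
static void clu_clock(clu_model *clu, int a, int b, int c)
{
    clu->q = clu_lut_out(clu, a, b, c);
}

int main(void)
{
    clu_model clu = { 0x80, 0 };  /* 0x80 = 3-input AND */
    clu_clock(&clu, 1, 1, 1);
    printf("Q after clocking AND(1,1,1): %u\n", clu.q);
    return 0;
}

Cascading would amount to routing one CLU's q into another CLU's a, b, or c input, which is how wider functions are composed from 3-input pieces.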

Links to the documentation of Silabs EFM8 MCUs:

https://www.silabs.com/mcu/8-bit-microcontrollers/efm8-laser-bee

https://www.silabs.com/documents/public/training/mcu/em8-mcu-overview.pdf

https://www.silabs.com/mcu/8-bit-microcontrollers/efm8-bb5

https://www.silabs.com/documents/public/application-notes/AN921.pdf

https://www.silabs.com/documents/public/training/mcu/efm8-lb1-clu.pdf


Understanding Hidden Markov Model in Natural Language – Decoding Amazon Alexa

Alexa is a cloud-based software program that acts as a voice-controlled virtual personal assistant. Alexa works by listening for voice commands, translating them into text, interpreting the text to carry out corresponding functions, and delivering results in the form of audio, video, or device/accessory triggers.

Hidden Markov Models (HMMs) are a type of probability model that can be used in Natural Language Understanding (NLU) to help programs come to the most likely decision based on both previous decisions and observations.

Machine learning plays a critical role in improving Alexa’s ability to understand and respond to voice commands over time.

Alexa has three main parts: Wake word, Invocation name, and Utterance. Here is a breakdown of each part:

  • Wake word: This is the word that users say to activate Alexa. By default, the wake word is “Alexa,” but users can change it to “Echo,” “Amazon,” or “Computer.”
  • Invocation name: This is the unique name that identifies a custom skill. Users can invoke a custom skill by saying the wake word followed by the invocation name. The invocation name must not contain the wake words “Alexa,” “Amazon,” “Echo,” or the words “skill” or “app.”
  • Utterance: This is the spoken phrase that users say to interact with Alexa. Users can include additional words around their utterances, and Alexa will try to understand the intent behind the words.
Natural Language Processing (NLP)

What is NLP?

Natural Language Processing (NLP) is a key component of Alexa’s functionality. NLP is a branch of computer science that involves the analysis of human language in speech and text. It is the technology that allows machines to understand and interact with human speech, but is not limited to voice interactions. NLP is the reader that takes the language created by Natural Language Generation (NLG) and consumes it. Advances in NLP technology have allowed dramatic growth in intelligent personal assistants such as Alexa.

Alexa uses NLP to process requests or commands through machine learning techniques. When a user speaks to Alexa, the audio is sent to Amazon’s servers, where it can be analyzed more efficiently. To convert the audio into text, Alexa analyzes characteristics of the user’s speech, such as frequency and pitch, to produce feature values. The Alexa Voice Service then processes the response and identifies the user’s intent, making a web service request to a third-party server if needed.
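As a toy illustration of turning raw audio into a feature value, the sketch below computes a zero-crossing rate, one of the simplest features correlated with a signal's frequency. Alexa's real pipeline uses far richer spectral features; this only shows the shape of the "samples in, feature out" step.

#include <stdio.h>
#include <stddef.h>

/* Zero-crossing rate: the fraction of adjacent sample pairs whose
 * signs differ. Higher-frequency content crosses zero more often. */
static double zero_crossing_rate(const short *samples, size_t n)
{
    size_t crossings = 0;
    for (size_t i = 1; i < n; i++)
        if ((samples[i - 1] < 0) != (samples[i] < 0))
            crossings++;
    return (double)crossings / (double)(n - 1);
}

int main(void)
{
    short frame[] = { 100, 50, -30, -80, 20, 90, -10, -60 };
    printf("ZCR: %.3f\n",
           zero_crossing_rate(frame, sizeof frame / sizeof frame[0]));
    return 0;
}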

In summary, NLP is the technology that allows Alexa to understand and interact with human speech. It is used to process requests or commands through a machine learning technique, and NLU is a key component of Alexa’s functionality that allows it to infer what a user is asking for when they ask a question in a variety of ways.

Hidden Markov Model (NLU Example)

HMMs are used in Alexa’s NLU to help understand the meaning behind the words spoken by the user. Here is an example of how HMMs can be used in Alexa’s NLU:

  1. The user says “Alexa, play some music.”
  2. The audio is sent to Amazon’s servers to be analyzed more efficiently.
  3. The audio is converted into text using speech-to-text conversion.
  4. The text is analyzed using an HMM to determine the user’s intent. The HMM takes into account the previous decisions made by the user, such as previous music requests, as well as the current observation, which is the user’s request to play music (see the toy decoder after this list).
  5. Alexa identifies the user’s intent as “play music” and performs the requested action.
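To make step 4 concrete, here is a toy Viterbi decoder, the standard algorithm for finding the most likely hidden-state path in an HMM. All states, observations, and probabilities below are invented for illustration; Alexa's real models are vastly larger and are not public.

#include <stdio.h>

#define N_STATES 2   /* 0 = PlayMusic, 1 = GetWeather (invented intents) */
#define N_OBS    3   /* 0 = "play", 1 = "music", 2 = "weather" */

static const char *state_name[N_STATES] = { "PlayMusic", "GetWeather" };

/* Made-up model parameters. */
static const double start_p[N_STATES] = { 0.6, 0.4 };
static const double trans_p[N_STATES][N_STATES] = {
    { 0.8, 0.2 },
    { 0.3, 0.7 },
};
static const double emit_p[N_STATES][N_OBS] = {
    { 0.5, 0.4, 0.1 },   /* PlayMusic mostly emits "play"/"music" */
    { 0.2, 0.1, 0.7 },   /* GetWeather mostly emits "weather" */
};

int main(void)
{
    const int obs[] = { 0, 1 };          /* the utterance "play music" */
    const int T = sizeof obs / sizeof obs[0];
    double v[2][N_STATES];               /* Viterbi scores (T = 2 here) */
    int back[2][N_STATES];               /* backpointers */

    /* Initialization: prior probability times emission of word 1. */
    for (int s = 0; s < N_STATES; s++)
        v[0][s] = start_p[s] * emit_p[s][obs[0]];

    /* Recursion: keep the best-scoring predecessor for each state. */
    for (int t = 1; t < T; t++)
        for (int s = 0; s < N_STATES; s++) {
            v[t][s] = 0.0;
            back[t][s] = 0;
            for (int p = 0; p < N_STATES; p++) {
                double score = v[t-1][p] * trans_p[p][s] * emit_p[s][obs[t]];
                if (score > v[t][s]) { v[t][s] = score; back[t][s] = p; }
            }
        }

    /* Termination and backtrace: recover the most likely intent path. */
    int last = (v[T-1][0] >= v[T-1][1]) ? 0 : 1;
    printf("Decoded intent path: %s -> %s\n",
           state_name[back[T-1][last]], state_name[last]);
    return 0;
}

Run on the utterance "play music", the decoder settles on PlayMusic, mirroring step 5: the chosen state is the one whose combination of prior context (transitions) and current words (emissions) scores highest.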

Conclusion

In summary, Alexa’s NLP architecture involves converting the user’s spoken words into text, processing the text to identify the user’s intent, and performing complex operations such as NLU using the Alexa Voice Service.


Migration from Hybrid Memory Cube (HMC) to High-Bandwidth Memory (HBM)

Introduction:

Memory technology plays a vital role in delivering effective data processing as the demand for high-performance computing keeps rising. The industry has recently seen a considerable migration from the Hybrid Memory Cube (HMC) to High-Bandwidth Memory (HBM) because of HBM’s higher performance, durability, and scalability. This technical note discusses the causes behind the widespread adoption of HBM as well as the benefits it has over HMC.

HBM Overview:

HBM is a revolutionary memory technology that outperforms conventional memory technologies. An HBM device consists of vertically stacked DRAM dies interconnected using through-silicon vias (TSVs). The HBM DRAM dies are further tightly connected to the host device through distribution channels that are completely independent of one another. This architecture achieves high-speed, low-power operation. HBM has a reduced form factor because it combines the DRAM dies and a logic die in a single package, making it ideal for space-constrained applications. An interposer interconnected with the memory stacks enables high-speed data transmission between the memory and processor units.
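To put "high bandwidth" in numbers, the sketch below works out peak per-stack bandwidth from interface width and per-pin data rate, using typical published HBM2 figures (a 1024-bit interface at 2.4 Gb/s per pin); treat the specific values as illustrative rather than a spec quote.

#include <stdio.h>

/* Peak per-stack bandwidth = interface width (bits) x per-pin data
 * rate (Gb/s) / 8 bits per byte. */
int main(void)
{
    const double width_bits = 1024.0;   /* 8 channels x 128 bits */
    const double pin_rate_gbps = 2.4;   /* typical HBM2 per-pin rate */
    const double gbytes_per_s = width_bits * pin_rate_gbps / 8.0;
    printf("Peak bandwidth per stack: %.1f GB/s\n", gbytes_per_s);
    return 0;
}

The wide-but-slow interface is the key design choice: because the stack sits next to the processor on an interposer, HBM can afford 1024 wires per stack and keep per-pin rates (and therefore power) modest.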

HMC Brief:

The Hybrid Memory Cube (HMC) comprises multiple stacked DRAM dies and a logic die, stacked together using through-silicon via (TSV) technology in a single-package 3D-stacked memory device. Each memory die in the HMC stack includes its own memory banks, while a base logic die handles memory access control. HMC was developed by Micron Technology and Samsung Electronics Co. Ltd. in 2011, and announced by Micron in September 2011.

Compared to traditional memory architectures such as DDR3, it enables faster data access and lower power consumption. Memory in an HMC is organized into vaults, and each vault has a memory controller in the logic die that manages its memory operations. HMC was aimed at applications that demand high speed, bandwidth, and density. Micron discontinued HMC in 2018 after it failed to gain traction in the semiconductor industry.

Hybrid Memory Cube (HMC) and High-Bandwidth Memory (HBM) are two distinct memory technologies that have made significant contributions to high-performance computing. While both technologies aim to enhance memory bandwidth, there are several fundamental distinctions between HMC and HBM.

Power Consumption: HBM has significantly lower power consumption than HMC. HBM’s vertical stacking approach eliminates power-hungry bus interfaces and reduces the distance data must travel between DRAM dies, resulting in improved energy efficiency. This decreased power usage is especially beneficial in power-constrained environments like mobile devices or energy-efficient servers.

Memory Architecture: HMC uses a 3D-stacked memory device comprising several DRAM dies and a logic die stacked together via through-silicon via (TSV) technology; each memory die in the HMC stack contains its own memory banks and relies on the logic die for memory access operations. HBM, on the other hand, is a 3D-stacked architecture that integrates a base (logic) die and memory dies, together with a processor (GPU), on a single package coupled by TSVs to provide a tightly coupled, high-speed processing unit. The shared memory space across the memory dies in an HBM stack simplifies memory management.

Memory Density: Compared to HMC, HBM offers more memory density in a smaller physical footprint. HBM achieves this by vertically stacking memory dies on a single chip, resulting in increased memory capacity in a smaller form factor. This density makes HBM well-suited for space-constrained applications such as graphics cards and mobile devices.

Industry Adoption: HBM has seen far broader industry adoption than HMC. HMC never gained sufficient traction and was discontinued by Micron in 2018, while HBM has been widely adopted in graphics cards, AI accelerators, and high-performance computing systems.

Memory Bandwidth: Both HMC and HBM offer much higher memory bandwidth than conventional memory technologies, but HBM generally delivers higher bandwidth than HMC. HBM accomplishes this with a wider data channel and higher signaling rates, enabling faster data flow between the processor and the memory units.

In conclusion, HMC and HBM differ in terms of memory bandwidth, architecture, power consumption, density, and industry adoption. While HMC offered significantly better performance than conventional memory technologies, HBM has become the market leader due to its reduced form factor, higher performance, and efficiency, which has expedited the transition from HMC to HBM.

Advantages of HBM:

Power Consumption: HBM uses less energy for data transfer on the I/O interface than HMC, improving energy efficiency. HBM achieves this by using vertical stacking to reduce the data transfer distance and eliminate power-intensive bus interfaces.

Bandwidth: HBM provides excellent memory bandwidth, allowing the processor or controller to access data quickly and achieve greater speed. HBM has more memory channels, along with higher-speed signaling, than HMC, which allows for more bandwidth. This high bandwidth is critical for data-intensive applications such as AI, machine learning, and graphics.

Scalability: By enabling the connection of multiple memory stacks, HBM offers scalable memory configurations. This flexibility makes numerous capacity and bandwidth options available to meet the unique needs of various applications.

Density: HBM’s vertical stacking technique makes greater memory density possible in a reduced size, which is ideal for smaller devices such as mobile phones and graphics cards. Higher memory density also enhances system performance by lowering data access latency.

Signal Integrity: TSV-based interconnects in HBM provide superior signal integrity compared to wire-bonded approaches. Improved signal integrity reduces data transmission failures and increases system dependability.

Conclusion:

The change from HMC to HBM is a significant development in memory technology. The demand for high-performance computing, particularly in fields like AI, machine learning, and graphics, has spurred the requirement for faster and more effective memory solutions. With its high bandwidth, low power consumption, increased density, scalability, and improved signal integrity, HBM is broadly utilized across many industries. HBM has become the standard option for high-performance memory needs, and its continued development is expected to shape the direction of memory technology in the market.