
Haptic Feedback Displays and Disney’s ‘Feeling Fireworks’

HAPTICS:

Haptic technology, also known as haptics, is a technology that creates an experience of touch by applying forces, vibrations, or motions to the user. It targets the user’s sense of touch and can be used to create virtual objects in computer simulations, control virtual objects, and enhance remote control of machines and devices. Haptic devices often incorporate tactile sensors that measure forces exerted by the user on the interface.

Here are some key points about haptic technology:

  1. Haptic technology can create haptic feedback through the application of force, vibration, and motion.
  2. It can be used in various fields such as medicine, aviation, entertainment, and more.
  3. Haptic technology has been used in video game controllers, smartphones (vibration alerts), and other consumer devices to provide tactile feedback.
  4. It has the potential to enhance user experiences and engagement by stimulating the sense of touch.
  5. Haptic technology has been explored in the medical field, particularly in surgical robots, to improve accuracy and reduce tissue damage.

The future of haptic technology holds possibilities for more realistic and immersive experiences, but cost, power consumption, and size remain challenges.

Overall, haptic technology provides a way to engage users’ tactile senses and enhance their interaction with digital and physical environments. It has applications in various industries and has the potential to create more immersive and realistic experiences.

What are haptic feedback displays?

Haptic feedback displays are interfaces that provide tactile feedback to users through the application of forces, vibrations, or motions. Haptic feedback displays can be used in various fields such as medicine, aviation, entertainment, and more. 

Here are some types of haptic feedback displays:

  1. Surface haptics: Surface haptics provide programmable haptic effects on physical surfaces, making interfaces come to life. Fully programmable textures let users feel bumps, edges, and collisions on a physical surface (a toy texture-rendering sketch follows this list).
  2. Graspable haptic devices: Graspable devices, such as joysticks, are used in applications like robot control.
  3. Touchable haptic devices: Touchable devices are used to create and control virtual objects in computer simulations.
  4. Wearable haptic devices: Wearable devices are worn on the body and provide feedback in applications such as gaming and sports.
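As a toy illustration of how a surface-haptics display might render a programmable texture, the sketch below computes an actuator drive signal from a finger’s position over a virtual grating of bumps. It is a hypothetical model, not any vendor’s API; the bump spacing, carrier frequency, and update rate are invented for illustration.

```python
import math

# Hypothetical surface-haptics sketch: derive an actuator drive level from
# finger position over a virtual grating of evenly spaced bumps.
BUMP_SPACING_MM = 2.0   # assumed distance between virtual bumps
CARRIER_HZ = 250.0      # assumed vibrotactile carrier frequency

def drive_level(finger_x_mm: float, t: float) -> float:
    """Amplitude-modulated drive signal in [-1, 1]."""
    # The envelope peaks each time the finger crosses a virtual bump.
    envelope = 0.5 * (1 + math.cos(2 * math.pi * finger_x_mm / BUMP_SPACING_MM))
    carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
    return envelope * carrier

# Sample the signal as a finger sweeps across the surface at 20 mm/s.
for i in range(5):
    t = i / 1000.0   # 1 kHz update rate
    x = 20.0 * t     # finger position in mm
    print(f"t={t:.3f}s  x={x:.3f}mm  drive={drive_level(x, t):+.3f}")
```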

Haptic feedback displays can be used to enhance user experiences and engagement by stimulating the sense of touch. They have the potential to create more immersive and realistic experiences in various industries. However, cost, power consumption, and size remain challenges for the development of haptic feedback displays.

Disney Research created a fireworks display you can feel with your hands:

Disney Research has created a “Feeling Fireworks” display that offers haptic feedback, allowing visually impaired guests to experience a pyrotechnic show through vibrations they can feel. The display consists of a latex screen mounted on a frame in front of the fireworks; water is sprayed onto the screen through a variety of nozzles, creating vibrations that simulate the sound waves and light patterns of the fireworks.

The vibrations are strong enough to be felt by a person’s hands, providing a tactile experience of the fireworks. The tactile display is also an aesthetic experience in its own right, envisioned to bring all crowd members together to enjoy feeling the fireworks. There are no plans to deploy the technology in any of the Disney parks at the moment, but the approach behind Feeling Fireworks would make it possible to build large tactile screens at a reasonable price.

How does it work?

Here’s how it works:

  1. A latex screen is mounted on a frame in front of the fireworks viewing area, and water is sprayed onto it through a variety of nozzles, creating vibrations that simulate the sound waves and light patterns of the fireworks (a hypothetical control sketch follows this list).
  2. The vibrations are strong enough to be felt by a person’s hands, providing a tactile experience of the fireworks.
  3. While there are no current plans to deploy the technology in the Disney parks, the prototype has been tested on sighted users, who found the tactile effects to be meaningful analogs of the visual fireworks they represent.
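The published descriptions do not detail the control logic, so the sketch below is a purely hypothetical outline of the idea: each firework effect maps to a set of water nozzles fired at small time offsets, which is what produces the distinct vibration patterns on the latex screen. Effect names, nozzle numbers, and timings are all invented.

```python
import time

# Hypothetical Feeling-Fireworks controller: each effect is a list of
# (nozzle_id, time_offset_seconds) pairs. All values here are invented.
EFFECT_PATTERNS = {
    "burst":    [(0, 0.00), (1, 0.00), (2, 0.00)],  # all jets at once
    "fountain": [(3, 0.00), (3, 0.25), (3, 0.50)],  # repeated center jet
    "crackle":  [(0, 0.00), (2, 0.10), (1, 0.20)],  # staggered jets
}

def fire_nozzle(nozzle_id: int) -> None:
    # Stand-in for a real valve driver.
    print(f"  open valve {nozzle_id} -> water jet hits the latex screen")

def play_effect(name: str) -> None:
    print(f"effect: {name}")
    start = time.monotonic()
    for nozzle_id, offset in sorted(EFFECT_PATTERNS[name], key=lambda p: p[1]):
        time.sleep(max(0.0, start + offset - time.monotonic()))
        fire_nozzle(nozzle_id)

for effect in ("burst", "fountain", "crackle"):
    play_effect(effect)
```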

Disney Research’s Feeling Fireworks for the blind:

“Feeling Fireworks” is a tactile firework show aimed at making fireworks more inclusive for blind and visually impaired guests. Using the water-on-latex mechanism described above, it lets those guests experience the pyrotechnic display through vibrations they can feel with their hands.

A patent related to the technology: US10555153B2


Popular microcontrollers and their architecture

Microcontrollers

A microcontroller is a programmable processing element with an embedded memory system and multiple programmable input/output peripherals. The peripherals can be advanced GPUs, coprocessors, or other electronic components. Microcontrollers are used in many electronic devices to implement various applications.

They are typically used in devices that are controlled automatically, most commonly in automobiles, computer systems, and household appliances.

There are multiple manufacturers of microcontrollers in the market, such as:

  1. Cypress Semiconductor
  2. NXP Semiconductors
  3. Silicon Labs
  4. ARM
  5. MIPS
  6. Maxim Integrated
  7. Renesas
  8. Intel
  9. Microchip Technology

In this article, we will learn about the different components of popular microcontrollers from three manufacturers.

Texas Instruments C2000 MCU

Texas Instruments makes a wide range of electronic products, including MCUs. The MCU lines produced by Texas Instruments include ARM-based MCUs, C2000 MCUs, DSPs, and MSP430 microcontrollers. The most popular are the C2000 MCUs, used in various electronic devices to perform control operations such as digital power and motor control.

C2000 MCUs:

Each C2000 MCU is a combination of multiple interconnected configurable blocks. Each configurable logic block (CLB) can be configured to perform custom operations according to its configuration information.

Features of C2000 microcontrollers:

1. It provides high computational capability with an advanced floating-point processing unit.

2. It implements a highly accurate analog-to-digital converter (ADC).

3. It implements integrated comparators for performing comparison operations.

4. It implements high-speed communication interfaces for the communication of signals and data.

Implementation of C2000 Microcontrollers:

The MCU implements multiple Configurable Logic Blocks (CLBs), which can be configured or programmed to act as independent custom logic units performing custom logical operations. The custom logic units are connected using local or global buses, and each CLB is associated with a PWM module. A global bus further interconnects the CLBs.

The output of one CLB can be fed into the input of another CLB to create a cascading effect.

(Figure: CLB system architecture, CLB unit modules, and CLB sub-modules.)

Each CLB unit includes multiple CLB sub-modules, namely:

  1. 4-input look-up table (LUT) sub-module – the LUT can implement any Boolean function of up to 4 inputs (a toy software model of the LUT and counter sub-modules follows the figure below).
  2. 4-state finite state machine (FSM) – the FSM provides up to 4 states, with transitions driven by the inputs it receives.
  3. Counter unit – the counter can act as a counter, shifter, or adder. As a counter, it can count up or down; as a shifter, it can shift right or left; as an adder, it can add or subtract.
  4. Output look-up table (LUT) – the output LUT can be configured with Boolean operations.
  5. High-Level Controller (HLC) – the HLC performs control operations in the system, such as data exchange and interrupt handling.
(Figure: TMS320F28004x real-time microcontrollers.)
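To make the LUT and counter sub-modules concrete, here is a minimal software model of the two. This is only a behavioral sketch of what was described above; real CLBs are configured through TI’s tooling and registers, not a Python class.

```python
# Toy behavioral model of two CLB sub-modules.

class Lut4:
    """4-input LUT: any Boolean function of 4 inputs as a 16-bit truth table."""
    def __init__(self, truth_table: int):
        self.table = truth_table & 0xFFFF

    def evaluate(self, i0: int, i1: int, i2: int, i3: int) -> int:
        index = (i3 << 3) | (i2 << 2) | (i1 << 1) | i0
        return (self.table >> index) & 1

class CounterBlock:
    """Counter sub-module: counts up/down, shifts left/right, or adds."""
    def __init__(self):
        self.value = 0

    def step(self, mode: str, operand: int = 1) -> int:
        if mode == "count_up":
            self.value += 1
        elif mode == "count_down":
            self.value -= 1
        elif mode == "shift_left":
            self.value <<= 1
        elif mode == "shift_right":
            self.value >>= 1
        elif mode == "add":
            self.value += operand
        return self.value

# 0x6996 is the truth table of XOR across all four inputs.
xor4 = Lut4(0x6996)
assert xor4.evaluate(1, 0, 0, 0) == 1
assert xor4.evaluate(1, 1, 0, 0) == 0

ctr = CounterBlock()
print(ctr.step("count_up"), ctr.step("shift_left"), ctr.step("add", 5))  # 1 2 7
```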

Links to the documentation of TI C2000 MCUs:

https://www.ti.com/microcontrollers-mcus-processors/c2000-real-time-control-mcus/overview.html

https://www.ti.com/lit/ml/slyp681/slyp681.pdf?ts=1655705809321&ref_url=https%253A%252F%252Fwww.google.com%252F

https://www.ti.com/lit/an/spracn0f/spracn0f.pdf?ts=1702390944874

https://www.ti.com/lit/ug/spruii0e/spruii0e.pdf?ts=1702390956144

https://www.ti.com/lit/ug/spruin7b/spruin7b.pdf?ts=1702390972904

NXP S32V2 Processors

NXP has been active in the microcontroller market for a long time. The S32V2 devices are vision processors that process images using their APEX-2 vision accelerators in sensing applications. Each offers an image signal processor and a 3D graphics processing unit (GPU). They are extensively used in advanced driver-assistance systems (ADAS) for object detection and image recognition.

S32V2 Processor:

The MCU features an APEX-2 vision accelerator that implements image-processing operations using the APEX core framework and an APEX graph tool, allowing it to sense objects ahead of the vehicle. The NXP MCU has also been used in the BlueBox engine for autonomous driving.

Implementation of S32V2 Processor:

  1. A Cortex-A53 processor for processing different inputs.
  2. APEX-2 vision accelerators.
  3. A GPU and a hardware security encryption mechanism.
  4. Fabric and internal memory.

The APEX processing unit implements two APUs and 16 computational units (CUs), and each CU includes four functional units: a multiplier, a load-store unit, an ALU, and a shifter.

Each APU is a parallel processor for processing different computational operations. The APU manages the execution and data movement by dispatching instructions to different CUs. 

It has been extensively used in 3D content creation, advanced driver assistance, and video surveillance for recognizing different objects and people.

(Figures: G2-APEX-642 ICP core; APEX ICP core data-flow management and hardware acceleration.)

The ACP is a 32-bit RISC-based processor. The APU implements both scalar and SIMD capabilities: scalar processing is performed in the array control processor (ACP), while vector processing is done in the vector processing unit, as sketched below.
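As a loose illustration of this scalar-versus-vector split (a model of the idea, not NXP’s programming interface), the sketch below runs the same multiply-accumulate either one element at a time, the way a scalar control processor would, or sixteen elements per step, the way a 16-lane CU array processes a vector in lock-step.

```python
# Toy contrast between scalar (ACP-style) and 16-lane SIMD (CU-array-style)
# execution. The lane count matches the 16 CUs mentioned above; the rest is
# invented for illustration.
LANES = 16

def scalar_mac(a, b, acc=0):
    # One element per step, as on a scalar control processor.
    for x, y in zip(a, b):
        acc += x * y
    return acc

def simd_mac(a, b, acc=0):
    # LANES elements per step: one instruction applied to many data lanes.
    for base in range(0, len(a), LANES):
        lanes = [a[base + i] * b[base + i] for i in range(LANES)]
        acc += sum(lanes)  # lane reduction
    return acc

a = list(range(64))
b = [2] * 64
assert scalar_mac(a, b) == simd_mac(a, b)
print(simd_mac(a, b))  # 4032
```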

(Figure: S32V234 vision processor architecture.)

Links to the documentation of NXP S32V2 MCUs:

https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/s32-automotive-processors/s32v2-processors-for-vision-machine-learning-and-sensor-fusion:S32V234

https://www.nxp.com/docs/en/data-sheet/S32V234.pdf

https://www.nxp.com/webapp/Download?colCode=S32V234RM

Silabs EFM8 Busy Bee MCU

Silicon Labs’ EFM8 family includes analog-intensive MCUs such as the Laser Bee. These MCUs offer high computational capability, including a 14-bit ADC, temperature sensors, and high-speed communication peripherals in small packages.

(Figure: Silabs EFM8 Busy Bee MCU.)

Implementation of Silabs EFM8 Busy Bee:

  1. It includes up to four configurable logic units (CLUs).
  2. They are used in applications that require programmable logic operations.
  3. Each unit supports 256 different combinational logic functions, such as AND, OR, XOR, and multiplexing.
  4. Each CLU has a look-up table (LUT) that can be configured to perform any of those 256 functions. Each CLU also contains a D flip-flop whose input is the LUT output. Multiple CLUs can be cascaded together to achieve larger functions (a toy model follows the figure below).
(Figure: Silabs EFM8 Busy Bee architecture.)
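Here is a small behavioral model of one CLU (hypothetical; real configuration happens through the EFM8’s registers): an 8-bit LUT value selects one of the 256 possible functions of three inputs, a D flip-flop registers the LUT output, and cascading feeds one CLU’s registered output into the next.

```python
# Toy model of an EFM8 CLU: a 3-input LUT (8-bit config -> one of 256
# combinational functions) feeding a D flip-flop. Simplified, not Silabs'
# actual register interface.

class Clu:
    def __init__(self, lut_config: int):
        self.lut = lut_config & 0xFF
        self.q = 0  # D flip-flop output

    def combinational(self, a: int, b: int, c: int) -> int:
        index = (c << 2) | (b << 1) | a
        return (self.lut >> index) & 1

    def clock(self, a: int, b: int, c: int) -> int:
        # On a clock edge the flip-flop captures the LUT output.
        self.q = self.combinational(a, b, c)
        return self.q

# 0xE8 encodes 3-input majority; 0x96 encodes 3-input XOR.
majority = Clu(0xE8)
parity = Clu(0x96)

# Cascade: the first CLU's registered output feeds the second CLU.
stage1 = majority.clock(1, 1, 0)     # majority(1, 1, 0) = 1
stage2 = parity.clock(stage1, 0, 1)  # xor(1, 0, 1) = 0
print(stage1, stage2)
```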

Links to the documentation of Silabs EFM8 MCUs:

https://www.silabs.com/mcu/8-bit-microcontrollers/efm8-laser-bee

https://www.silabs.com/documents/public/training/mcu/em8-mcu-overview.pdf

https://www.silabs.com/mcu/8-bit-microcontrollers/efm8-bb5

https://www.silabs.com/documents/public/application-notes/AN921.pdf

https://www.silabs.com/documents/public/training/mcu/efm8-lb1-clu.pdf


Understanding Hidden Markov Model in Natural Language – Decoding Amazon Alexa

Alexa is a cloud-based software program that acts as a voice-controlled virtual personal assistant. Alexa works by listening for voice commands, translating them into text, interpreting the text to carry out corresponding functions, and delivering results in the form of audio, video, or device/accessory triggers.

Hidden Markov Models (HMMs) are a type of probabilistic model used in Natural Language Understanding (NLU) to help programs reach the most likely decision based on both previous decisions and current observations.

Machine learning plays a critical role in improving Alexa’s ability to understand and respond to voice commands over time.

Alexa has three main parts: Wake word, Invocation name, and Utterance. Here is a breakdown of each part:

  • Wake word: This is the word that users say to activate Alexa. By default, the wake word is “Alexa,” but users can change it to “Echo,” “Amazon,” or “Computer.”
  • Invocation name: This is the unique name that identifies a custom skill. Users can invoke a custom skill by saying the wake word followed by the invocation name. The invocation name must not contain the wake words “Alexa,” “Amazon,” “Echo,” or the words “skill” or “app.”
  • Utterance: This is the spoken phrase that users say to interact with Alexa. Users can include additional words around their utterances, and Alexa will try to understand the intent behind the words.
Natural Language Processing (NLP)

What is NLP?

Natural Language Processing (NLP) is a key component of Alexa’s functionality. NLP is a branch of computer science that involves the analysis of human language in speech and text. It is the technology that allows machines to understand and interact with human speech, but is not limited to voice interactions. NLP is the reader that takes the language created by Natural Language Generation (NLG) and consumes it. Advances in NLP technology have allowed dramatic growth in intelligent personal assistants such as Alexa.

Alexa uses NLP to process requests or commands through a machine learning technique. When a user speaks to Alexa, the audio is sent to Amazon’s servers, where it can be analyzed more efficiently. To convert the audio into text, Alexa analyzes characteristics of the user’s speech, such as frequency and pitch, to produce feature values. The Alexa Voice Service then processes the response and identifies the user’s intent, making a web service request to a third-party server if needed. The overall flow is sketched below.
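That flow can be summarized as a four-stage pipeline. The skeleton below is a purely hypothetical outline; none of these function names are Amazon APIs, and the speech-recognition and NLU stages are reduced to stubs so that only the order of operations is visible.

```python
# Hypothetical outline of the pipeline described above; every function is a
# stand-in, not an Amazon API.

def capture_audio() -> bytes:
    return b"...pcm samples..."  # microphone capture after wake-word detection

def speech_to_text(audio: bytes) -> str:
    # Server-side speech recognition: acoustic features such as frequency
    # and pitch become feature values, which become text.
    return "play some music"

def classify_intent(text: str) -> str:
    # NLU stage; in practice a statistical model such as an HMM (see below).
    return "PlayMusic" if "play" in text and "music" in text else "Unknown"

def handle(intent: str) -> str:
    # May involve a web service request to a third-party server.
    return {"PlayMusic": "Starting your playlist."}.get(intent, "Sorry?")

text = speech_to_text(capture_audio())
intent = classify_intent(text)
print(text, "->", intent, "->", handle(intent))
```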

In summary, NLP is the technology that allows Alexa to understand and interact with human speech. It is used to process requests or commands through a machine learning technique, and NLU is a key component of Alexa’s functionality that allows it to infer what a user is asking for when they ask a question in a variety of ways.

Hidden Markov Model (NLU Example)

HMMs are used in Alexa’s NLU to help understand the meaning behind the words spoken by the user. Here is an example of how HMMs can be used in Alexa’s NLU:

  1. The user says “Alexa, play some music.”
  2. The audio is sent to Amazon’s servers to be analyzed more efficiently.
  3. The audio is converted into text using speech-to-text conversion.
  4. The text is analyzed using an HMM to determine the user’s intent. The HMM takes into account previous decisions made by the user, such as earlier music requests, as well as the current observation, the user’s request to play music (a toy decoding sketch follows this list).
  5. Alexa identifies the user’s intent as “play music” and performs the requested action.
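To make step 4 concrete, here is a minimal Viterbi decoder over a toy HMM. The hidden states, vocabulary, and all probabilities are invented for illustration; production models are far larger, but the principle of choosing the most likely hidden-state sequence given the observed words is the same.

```python
# Toy HMM for tagging the words of "play some music". All numbers invented.
states = ["ACTION", "OBJECT", "OTHER"]

start_p = {"ACTION": 0.5, "OBJECT": 0.2, "OTHER": 0.3}
trans_p = {
    "ACTION": {"ACTION": 0.1, "OBJECT": 0.6, "OTHER": 0.3},
    "OBJECT": {"ACTION": 0.1, "OBJECT": 0.4, "OTHER": 0.5},
    "OTHER":  {"ACTION": 0.3, "OBJECT": 0.4, "OTHER": 0.3},
}
emit_p = {
    "ACTION": {"play": 0.7, "some": 0.05, "music": 0.05},
    "OBJECT": {"play": 0.05, "some": 0.1, "music": 0.7},
    "OTHER":  {"play": 0.1, "some": 0.6, "music": 0.1},
}

def viterbi(words):
    # v[s]: probability of the best state path ending in s; back[s]: that path.
    v = {s: start_p[s] * emit_p[s].get(words[0], 1e-6) for s in states}
    back = {s: [s] for s in states}
    for word in words[1:]:
        new_v, new_back = {}, {}
        for s in states:
            prob, prev = max(
                (v[ps] * trans_p[ps][s] * emit_p[s].get(word, 1e-6), ps)
                for ps in states
            )
            new_v[s], new_back[s] = prob, back[prev] + [s]
        v, back = new_v, new_back
    best = max(states, key=lambda s: v[s])
    return back[best], v[best]

labels, prob = viterbi(["play", "some", "music"])
print(labels, prob)  # ['ACTION', 'OTHER', 'OBJECT'] with its path probability
```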

Conclusion

In summary, Alexa’s NLP architecture involves converting the user’s spoken words into text, processing the text to identify the user’s intent, and performing complex operations such as NLU using the Alexa Voice Service.