Intellect-Partners


Unlocking the 6G Future: Harnessing the Potential of Reconfigurable Intelligent Surfaces (RIS)

INTRODUCTION

With each successive generation of communication technology, telecommunication’s primary focus undergoes a transformation. The 2G and 3G epochs were primarily centered on human-to-human communication through voice and text. The advent of 4G marked a pivotal shift toward the extensive consumption of data, while the 5G era prioritized connecting the Internet of Things (IoT) and industrial automation systems.

In the forthcoming 6G era, intelligent computation will drive efficiency and improved human experience. While there is still ongoing innovation in 5G, with the introduction of 5G-Advanced standards, companies have already embarked on research for 6G, with plans to make it commercially available by 2030.

CHARACTERISTICS OF 6G TECHNOLOGY

According to Nokia Bell Labs, six technology areas are expected to characterize 6G networks. These areas move the industry from faster connectivity alone toward intelligent, secure, sensor-rich and highly automated communication systems.

Figure 1: Six key technology areas expected to characterize 6G networks.

Artificial intelligence and machine learning – AI/ML techniques, especially deep learning, have advanced rapidly over the last decade and are already deployed across domains involving image classification and computer vision, from social networks to security. 6G will unleash the true potential of these technologies; already with 5G-Advanced, AI/ML is being introduced into many parts of the network, across multiple layers and functions. From beamforming optimization in the radio layer to scheduling at the cell site with self-optimizing networks, AI/ML can help achieve better performance at lower complexity.

Spectrum bands – Spectrum is a crucial element in providing radio connectivity. Every new mobile generation requires new pioneer spectrum to fully exploit the benefits of a new technology. Refarming existing mobile communication spectrum from legacy technology to the new generation will also become essential. New pioneer spectrum blocks for 6G are expected to include mid-bands of 7-20 GHz for urban outdoor cells enabling higher capacity through extreme MIMO, low bands of 460-694 MHz for extreme coverage, and sub-THz bands for peak data rates exceeding 100 Gbps.
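The trade-off between these bands can be made concrete with a free-space path-loss comparison (a simplified Friis model; the example frequencies below are illustrative picks within each band, not standardized values):

```python
import math

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss (Friis), in dB."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# Illustrative frequencies for the candidate 6G bands discussed above
bands = {"low (600 MHz)": 600e6, "mid (10 GHz)": 10e9, "sub-THz (140 GHz)": 140e9}
for name, f in bands.items():
    print(f"{name}: {fspl_db(f, 100):.1f} dB at 100 m")
```

The sub-THz band pays roughly 47 dB more path loss than the low band at the same distance, which is why it is reserved for short-range peak-rate links while the low bands provide extreme coverage.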

A network that can sense – One of the most notable aspects of 6G would be its ability to sense the environment, people and objects. The network becomes a source of situational information, gathering signals that bounce off objects and determining type, shape, relative location, velocity and perhaps even material properties. This sensing mode can help create a mirror or digital twin of the physical world in combination with other sensing modalities, extending our senses to every point the network touches. Combining this information with AI/ML will provide new insights from the physical world and make the network more cognitive.

Extreme connectivity – The Ultra-Reliable Low-Latency Communication (URLLC) service that began with 5G will be refined and improved in 6G to address extreme connectivity requirements, including sub-millisecond latency. Network reliability could be amplified through simultaneous transmission, multiple wireless hops, device-to-device connections and AI/ML. Enhanced mobility combined with lower latency and improved reliability will support real-time video communications, holographic experiences and digital twin models updated in real time through the deployment of video sensors.
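The reliability gain from simultaneous transmission over multiple paths can be sketched with a simple model that assumes independent link failures (real links are rarely fully independent, so this is an upper bound):

```python
def combined_reliability(path_reliabilities):
    """Probability that at least one of several independent paths
    delivers the packet (1 minus the chance that all paths fail)."""
    failure = 1.0
    for p in path_reliabilities:
        failure *= (1.0 - p)
    return 1.0 - failure

# Three independent links, each 99.9% reliable, reach "nine nines" territory
print(combined_reliability([0.999, 0.999, 0.999]))
```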

New network architectures – 5G is the first system designed to operate in enterprise and industrial environments, replacing wired connectivity. As demand and strain on the network increase, industries will require more advanced architectures that support greater flexibility and specialization. 5G is introducing service-based architecture in the core and cloud-native deployments that will be extended to parts of the RAN, with networks deployed in heterogeneous cloud environments involving private, public and hybrid clouds. As the core becomes more distributed and higher layers of the RAN become more centralized, there will be opportunities to reduce cost by converging functions. New network and service orchestration solutions exploiting AI/ML advances will result in an unprecedented level of network automation and lower operating costs.

Security and trust – Networks of all types are increasingly becoming targets of cyber-attacks. The dynamic nature of these threats makes sturdy security mechanisms imperative. 6G networks will be designed to protect against threats such as jamming. Privacy issues will also need to be considered when new mixed-reality worlds combine digital representations of real and virtual objects.

RECONFIGURABLE INTELLIGENT SURFACES (RIS)

A Reconfigurable Intelligent Surface (RIS) is a flat panel of small passive elements, each typically on the order of 1 cm², capable of independently adjusting the phase and potentially the amplitude of incident electromagnetic waves. Through precise control of these elements, reradiated waves can be directed toward specific directions with the help of an RIS controller. This enables alternative links within a cell and facilitates communication in non-line-of-sight scenarios, supporting extreme connectivity, AI/ML-based signal augmentation, innovative network architectures and optimized bandwidth utilization.
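As a rough illustration of what an RIS controller computes, the sketch below derives per-element phase shifts for a linear array steering an incident plane wave toward a desired reflection angle. The geometry, function name and parameter choices are illustrative assumptions, not from any standard:

```python
import math

def ris_phase_profile(n_elems, spacing_m, wavelength_m, theta_in_deg, theta_out_deg):
    """Per-element phase shifts (radians) for a linear RIS that steers an
    incident plane wave toward a desired reflection angle (simplified model)."""
    k = 2 * math.pi / wavelength_m  # wavenumber
    s = math.sin(math.radians(theta_in_deg)) + math.sin(math.radians(theta_out_deg))
    # Each element compensates the phase accumulated across the aperture so
    # that all reradiated waves add coherently in the target direction
    return [(-k * n * spacing_m * s) % (2 * math.pi) for n in range(n_elems)]

# Illustrative: 16 elements at 28 GHz, half-wavelength spacing,
# normal incidence steered to a 30-degree reflection angle
wl = 3e8 / 28e9
phases = ris_phase_profile(16, wl / 2, wl, theta_in_deg=0, theta_out_deg=30)
```

With half-wavelength spacing and a 30-degree target, the profile steps by a constant 90 degrees per element, which is the kind of quantized phase pattern a practical RIS controller would load into the surface.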

RIS can be fashioned as self-configuring elements within wireless network infrastructure, fine-tuning electromagnetic attributes in response to shifting traffic demands and propagation characteristics. RIS is conceptually appealing and offers practical implementation advantages because it does not require energy-hungry radio-frequency (RF) chains. The absence of RF chains makes RIS an energy-efficient and cost-effective solution compared with massive MIMO technology, which requires an RF chain for each antenna element and therefore increases hardware cost, complexity and power consumption.

Because RIS is highly passive and requires minimal power for operation, it can be an eco-friendly and cost-effective solution deployable on surfaces such as walls, ceilings, billboards and other infrastructure. However, RIS design still requires careful consideration of coverage range, surface size and the number of elements needed.

Figure 2: Representative RIS-assisted network scenarios, including blocked users, UAV communication, mobile edge computing, vehicular networks, NOMA and physical-layer security.

Source: IET Communications article on RIS.

PATENT ACTIVITY AND COMPETITIVE LANDSCAPE

RIS technology is gaining traction among researchers in 5G-Advanced and 6G. After the standardization of 5G in 2019, patenting activity in RIS technology accelerated because RIS promises gains in spectral and energy efficiency without the expense of massive cell densification, while also unlocking numerous future telecommunication use cases.

Figure 3: RIS patent application activity by application year.

Source note: Patent analysis using Orbit Intelligence; values reconstructed from the provided screenshot.

The patent landscape view indicates that the top owners of IP related to RIS technology include Qualcomm, Huawei and Samsung. Several Chinese universities are also actively researching in this area, and China constitutes a substantial share of the global RIS patent landscape.

Figure 4: Leading RIS IP owners visible in the source landscape view by patent office or publication route.

Source note: Patent analysis using Orbit Intelligence; data reconstructed from the provided screenshot.

CONCLUSION

6G is expected to extend mobile networks beyond connectivity by embedding intelligence, sensing, automation, security and extreme performance into the network fabric. RIS is highly aligned with this direction: by shaping the wireless propagation environment itself, RIS can create alternative links, improve non-line-of-sight coverage, reduce energy consumption and support new architectures for dense, intelligent and adaptive wireless systems.

As patenting activity and research investment increase, RIS is likely to remain a key enabling technology in the transition from 5G-Advanced toward commercial 6G systems.



IP in the Age of AI: Who Owns the Algorithm?

In an era where artificial intelligence systems are designing new drugs, composing symphonies, and even writing code, the lines between creator and machine are becoming blurred. As AI continues to infiltrate nearly every industry, the question of intellectual property (IP) ownership is more relevant—and more complex—than ever before.

But when it comes to algorithms, especially those designed by or with the help of AI, who really owns the rights?

A Shifting Landscape

Traditionally, intellectual property laws were crafted with human inventors, artists, and developers in mind. The statutes assume a direct line between a person and their creation. But now that machines can “create” based on training data and optimization, the framework no longer fits as neatly.

Take, for example, a neural network trained to generate new software code. If a developer sets up the AI model, feeds it data, and configures the learning parameters, but the final product—the code—is generated independently by the system, is the developer the owner? Is it the company behind the data or the platform that trained the model?

This is not a hypothetical scenario. It’s playing out in courtrooms, patent offices, and legal think tanks around the world.

Understanding the Types of AI Creations

To unpack the issue, it helps to distinguish between different types of AI-driven work:

  • AI-Assisted Creation: A human uses AI tools as support (e.g., using AI to generate image suggestions for a design). Here, IP rights usually stay with the human.
  • AI-Generated Creation: The final product is produced entirely or mostly by AI, without detailed human direction. This is the grayest area.
  • Autonomously Invented Algorithms: The AI system is responsible for developing new algorithms or processes, such as optimizing supply chain routes or discovering new mathematical formulas.

Each of these scenarios raises unique legal and ethical questions. But they all boil down to the same dilemma: should a machine be recognized as an inventor or author?

What the Law Says (and Doesn’t Say)

In the U.S., the Patent and Trademark Office (USPTO) and the Copyright Office have taken a firm stance: only natural persons (i.e., humans) can hold copyrights or patents. This means that any submission must identify a human as the inventor or author, even if the AI was the actual creator.

Other countries are starting to diverge. The United Kingdom and Australia have seen cases where AI-generated inventions were debated in court. In a notable instance, Dr. Stephen Thaler submitted patents listing his AI, DABUS, as the sole inventor. Courts in the U.S. and UK rejected the claims, while Australia briefly accepted them before backtracking.

These mixed responses reveal how ill-equipped current legal systems are for this technological reality.

Corporate Ownership and the Role of Data

The question of ownership becomes even murkier when you consider the data used to train the algorithm. AI systems are only as good as the data they’re fed—often vast, proprietary sets collected over years.

If Company A develops the AI platform, and Company B licenses it to generate new IP, who owns the result? The answer often comes down to contract law rather than IP law. It’s increasingly common for companies to bake IP clauses into licensing and partnership agreements.

Moreover, data privacy and ownership further complicate the conversation. If an AI model is trained on user-generated data, do those users have any rights over the model’s outputs? So far, most jurisdictions say no, but that could change.

What Startups and Innovators Should Do

For entrepreneurs working in AI or using AI to develop products, these are not distant academic concerns—they’re core business risks. Here are some ways to navigate this tricky terrain:

  • Document Human Contribution: Make sure there’s a clear record of how humans were involved in shaping, guiding, or supervising the AI’s output.
  • Review Licensing Agreements Carefully: If you’re using third-party AI tools, check who owns what under the hood.
  • File IP Early: Even provisional patents can help stake a claim to ownership before a competitor beats you to it.
  • Consult with an IP Attorney: Especially one with experience in AI or emerging technologies.

A Glimpse at the Future

Ultimately, the law will need to evolve. There is growing recognition that traditional IP frameworks are too rigid to handle AI’s capabilities. Some experts advocate for a new category of IP ownership—something between traditional authorship and corporate control.

Others suggest updating definitions of “inventor” or “author” to allow for shared credit between AI and human operators. Whether this happens soon or decades from now will depend on political will, judicial interpretation, and economic pressure.

What’s clear is that the future of innovation is entangled with AI. If we don’t adapt our IP systems, we risk stifling the very innovation these systems were designed to protect.


Powering AI and ML: Unveiling GDDR6’s Role in High-Speed Memory Technology

Introduction

Artificial intelligence (AI) and machine learning (ML) have evolved into game-changing technologies with applications ranging from natural language processing to the automotive sector. These applications need a significant amount of computing power, and memory is an often neglected resource. Fast memory is crucial for AI and ML workloads, and GDDR6 memory has established itself as a prominent option where high speed and computing power are essential. This article investigates the use of GDDR6 in AI and ML applications, as well as current IP trends in this crucial field.

Architecture of GDDR6

GDDR6 is a high-speed dynamic random-access memory designed for applications with high bandwidth requirements. The high-speed interface of GDDR6 SGRAM is designed for point-to-point communication with a host controller. To achieve high-speed operation, GDDR6 employs a 16n prefetch architecture and a DDR or QDR interface. The architecture comprises two fully independent 16-bit wide channels.

Figure 1: GDDR6 SGRAM and controller block diagram [Source]

The Role of GDDR6 in AI and ML

For AI and ML processes, including the training and inference phases, large-scale data processing is necessary. GPUs (Graphics Processing Units) have evolved into the workhorses of AI and ML systems to make sense of this data. Their parallel processing capabilities are outstanding, which is crucial for addressing the computational demands of AI and ML workloads.

GPU performance depends on data analysis, so high-speed memory is needed to store and retrieve massive volumes of data. Since the earlier-generation GDDR5 and GDDR5X chips could not handle data transmission speeds beyond 12 Gbps/pin, these applications demand faster memory; this is where GDDR6 plays a crucial role. Sustained AI and ML performance gains require memory to keep pace, and High Bandwidth Memory (HBM) and GDDR6 offer best-in-class performance here. The Rambus GDDR6 memory subsystem is designed for performance and power efficiency and was created to meet the high-bandwidth, low-latency requirements of AI and ML. Recent developments in artificial intelligence, virtual reality, deep learning, self-driving cars and related fields have also significantly increased the demand for high-bandwidth DRAM in gaming consoles and graphics cards.

Micron’s GDDR6 Memory

Micron’s industry-leading technology enables the next generation of faster, smarter global infrastructure, facilitating artificial intelligence (AI), machine learning and generative AI for gaming. Micron launched GDDR6X with the NVIDIA GeForce® RTX™ 3090 and GeForce® RTX™ 3080 GPUs, delivering high-performance computing, higher frame rates and increased memory bandwidth.

Micron GDDR6 SGRAMs are designed to operate from a 1.35V power supply, making them ideal for graphics cards. GDDR6 devices present a 32-bit wide data interface to the memory controller, split into two completely independent channels. A read or write memory access is 256 bits, or 32 bytes, wide per channel. A parallel-to-serial converter turns each 256-bit data packet into sixteen 16-bit data words that are transmitted consecutively over the 16-bit data bus. Originally designed for graphics processing, GDDR6 is a high-performance memory solution that delivers faster data packet processing. GDDR6 also supports an IEEE 1149.1-2013 compliant boundary scan, which allows interconnect on the PCB to be tested during manufacturing using state-of-the-art automatic test pattern generation (ATPG) tools.
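The 256-bit-to-16×16-bit serialization described above can be illustrated with a toy serializer (the LSB-first word order here is an assumption for illustration, not taken from the specification):

```python
def serialize_access(data_256bit: int):
    """Split one 256-bit channel access into 16 consecutive 16-bit words,
    mirroring GDDR6's 16n prefetch (simplified, LSB-first order assumed)."""
    assert 0 <= data_256bit < 1 << 256, "access must fit in 256 bits"
    return [(data_256bit >> (16 * i)) & 0xFFFF for i in range(16)]

# An all-ones 256-bit packet becomes sixteen 0xFFFF words on the 16-bit bus
words = serialize_access((1 << 256) - 1)
```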

Figure 2: GDDR6 two-channel 16n prefetch memory architecture [Source]

Rambus GDDR6 Memory Interface Subsystem

The Rambus GDDR6 interface fully supports the JEDEC GDDR6 JESD250C standard. The Rambus GDDR6 memory interface subsystem fulfills the high-bandwidth, low-latency needs of AI/ML inference and is built for performance and power efficiency. It includes a PHY and a digital controller, giving users a complete GDDR6 memory subsystem. It provides an industry-leading 24 Gb/s per pin and supports two 16-bit channels for a combined data width of 32 bits, yielding a total bandwidth of 96 GB/s.
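The quoted bandwidth follows directly from the per-pin data rate and the interface width; a one-line sketch of the arithmetic:

```python
def interface_bandwidth_gbps(rate_gbps_per_pin: float, data_width_bits: int) -> float:
    """Peak interface bandwidth in gigabytes per second:
    per-pin rate (Gb/s) times bus width (bits), divided by 8 bits per byte."""
    return rate_gbps_per_pin * data_width_bits / 8

# The Rambus figures quoted above: 24 Gb/s per pin over a 32-bit interface
print(interface_bandwidth_gbps(24, 32))  # 96.0 GB/s
```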

Figure 3: GDDR6 memory interface subsystem example [Source]

Application of GDDR6 memory in AI/ML applications

A large variety of AI/ML applications from many industries employ GDDR6 memory. Here are some actual instances of AI/ML applications that make use of GDDR6 memory:

  1. FPGA-based AI applications

Micron, in a recent release, focused on developing high-performance FPGA-based GDDR6 memory for AI applications, built on TSMC 7nm process technology with an FPGA from Achronix.

  2. AI/ML inference at the edge

GDDR6 memory is ideal for AI/ML inference at the edge, where fast storage is essential. It offers high memory bandwidth, system speed and low-latency performance, enabling real-time computation over large amounts of data.

  3. Advanced driver assistance systems (ADAS)

ADAS employs GDDR6 memory in visual recognition to process large volumes of visual data, across multiple sensors for tracking and detection, and for real-time decision-making, where large amounts of neural-network data are analyzed to reduce accidents and improve passenger safety.

  4. Cloud gaming

Cloud gaming relies on fast GDDR6 memory to provide a smooth gaming experience.

  5. Healthcare and medicine

In the medical industry, GDDR6 enables faster analysis of medical data by AI algorithms used for diagnosis and treatment.

IP Trends in GDDR6 Use in Machine Learning and Artificial Intelligence

As the importance of high-speed, low-latency memory grows, patent filings have increased significantly across the globe. 2022 saw both the highest number of patents granted (212) and the highest number of patent applications filed (~408).

Intel is a dominant player in the market with ~1,107 patent families, roughly 2.5 times more than NVIDIA Corp., which comes second with 435 patent families. Micron Technology is the third-largest patent holder in the domain.

Other key players in the domain are SK Hynix, Samsung, and AMD.

Figure 4: Top applicants for GDDR6 memory use.

[Source: https://www.lens.org/lens/search/patent/analysis?q=(GDDR6%20memory%20use)]

The following charts show publication and legal status trends over time:

Figure 5: Publication and legal status over time.

[Source: https://www.lens.org/lens/search/patent/analysis?q=(GDDR6%20memory%20use)]

Conclusion

In the fast-paced world of AI and ML, where every millisecond matters, high-speed memory is an unsung hero. GDDR6 has stepped up to the plate, providing high bandwidth, low latency and large capacity, making it an essential part of AI and ML systems. IP trends for GDDR6 indicate continued efforts to enhance memory solutions for these cutting-edge technologies as demand for AI and ML capabilities rises, and these developments bode well for even more impressive AI and ML advances to come.