Intellect-Partners

Categories
Computer Science

Google’s Quantum Leap: Multiverse Calculations or Marketing Buzz?

Is the future of computing truly quantum? And did Google’s latest chip really perform across multiple universes?

In a groundbreaking announcement, Google Quantum AI unveiled “Willow”, its new quantum chip that’s pushing boundaries—and possibly crossing into parallel universes. While the headlines are buzzy, let’s break down what’s really happening and what this means for the future of quantum tech, patent strategy, and commercialization.

Quantum Supremacy—Again?

According to Hartmut Neven, the founder of Google Quantum AI, the Willow chip completed a benchmark quantum computation in under five minutes—a task that would reportedly take a supercomputer 10 septillion years.

“Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (10²⁵) years.”

That’s longer than the age of the universe.

But here’s the catch: the task itself has no real-world application. It’s designed to demonstrate quantum supremacy, not utility.

What Did Willow Actually Do?

The computation in question was random circuit sampling: producing samples from a probability distribution that is notoriously difficult for classical computers to reproduce. However, this is the same benchmark Google used in its 2019 quantum supremacy claim, a claim that IBM contested and that was later replicated on classical systems.

So, while Willow’s error reduction using more qubits is impressive, its commercial relevance is still uncertain.

Quantum Mechanics, Patents & the Multiverse?

Tucked in Google’s announcement was a reference to the Many Worlds Interpretation of quantum mechanics. Neven suggested that the chip’s performance “lends credence to the notion that quantum computation occurs in many parallel universes.”

This is tied to David Deutsch’s theory of quantum parallelism, where computation occurs across branches of the multiverse, rather than collapsing into one outcome. While fascinating, this remains speculative—and it has no impact on how patents are filed or enforced today.

Still, these bold claims reflect a trend: quantum computing is evolving, and IP frameworks must evolve with it.

Real Challenges: Quantum ≠ Practical (Yet)

Despite the hype, quantum computing remains experimental:

  • Google’s global challenge offering $5 million to find a practical use case for quantum computing still stands.
  • Current quantum algorithms are narrow, and error rates remain a bottleneck.
  • The IP ecosystem around quantum tech is nascent, and patent clarity is crucial for future commercialization.

This is why companies and investors need to keep a close eye on not just quantum announcements, but also standardization efforts, licensing frameworks, and IP protection mechanisms.

How Do Quantum Computers Actually Work?

Let’s simplify:

  • Classical computers process bits (0s and 1s).
  • Quantum computers rely on qubits, which use superposition and entanglement.
  • They use interference patterns to solve complex problems, theoretically faster than any classical system.
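The superposition-and-interference idea in the bullets above can be made concrete with a toy single-qubit simulation. This is a minimal sketch in plain Python, not any real quantum SDK; the names (`ZERO`, `hadamard`, `measure_probs`) are illustrative.

```python
import math

# A qubit is a pair of complex amplitudes; measurement probabilities
# follow the Born rule (|amplitude|^2).
ZERO = (1 + 0j, 0 + 0j)  # the |0> basis state

def hadamard(state):
    """Apply the Hadamard gate, which creates an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure_probs(state):
    """Born rule: probability of each outcome is |amplitude|^2."""
    return tuple(abs(amp) ** 2 for amp in state)

superposed = hadamard(ZERO)
print(measure_probs(superposed))   # ~ (0.5, 0.5): both outcomes equally likely

# A second Hadamard makes the two computational paths interfere: the
# amplitudes for |1> cancel, and the qubit returns to |0> with certainty.
print(measure_probs(hadamard(superposed)))  # ~ (1.0, 0.0)
```

The cancellation in the second step is the interference that quantum algorithms exploit: amplitudes, unlike probabilities, can subtract as well as add.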

However, the actual power lies in building error-resilient, scalable quantum chips—and protecting these innovations with well-structured patents.

Why This Matters for Innovators & IP Strategy
  • Quantum computing is expected to disrupt multiple industries: cybersecurity, pharma, materials science, and logistics.
  • As we inch closer to quantum advantage, companies must act now to evaluate, patent, and license their innovations.
  • At Intellect Partners, we help clients navigate the complex patent landscapes around quantum and emerging technologies.

Whether it’s freedom-to-operate analysis, claim charting, or licensing strategy, our team ensures your IP portfolio is aligned with tech frontiers.

Final Thoughts: Buzz vs Reality

While it’s fun to speculate about quantum computers tapping into alternate realities, what truly matters is building commercially useful, reliable, and scalable quantum systems—and securing them with strong IP protection.

Google’s Willow chip is a leap forward, but we’re still a long road away from widespread adoption. Until then, innovators and tech leaders must focus on building value—one patent at a time.

Interested in understanding how quantum tech intersects with IP?

Contact Intellect Partners for a consultation on IP strategies for quantum and other next-gen technologies.

Why Generative AI Feels Broken: The Hidden Reliability Crisis Behind the AI Boom

Generative AI is having a moment. If you have asked a curious question into the digital ether, whether you are plugged into tech, a business owner, a student, or just someone navigating the worldwide web, you have probably encountered generative AI tools such as OpenAI’s ChatGPT, Google’s Gemini, Meta’s LLaMA, or Microsoft’s Copilot. These systems can write essays, create images, write emails, help with coding, and even write legal documents. The enthusiasm around these services is dizzying—imagining infinite creativity and productivity, as well as having every bit of human knowledge at your fingertips.

However, amidst the digital gold rush, cracks are starting to appear. These tools, often remarkable, still cannot be fully trusted. They hallucinate facts, misunderstand questions, misinterpret context, occasionally deliver answers that are completely incorrect, and sometimes even dangerous. And as more websites, applications, and platforms come to rely on generative AI for everyday features, it feels as though the entire internet is slowly slipping back into beta. We have entered a wild west of unpredictability and experimentation, where not everything works the way we think it should.

What Exactly Are the Reliability Issues?

To identify the source of the problems, we have to understand a little about how generative AI operates. These models are trained on extensive datasets, essentially the public portion of the entire internet, through what is loosely called ‘unsupervised’ (more precisely, self-supervised) learning, with the aim of predicting the next word in a sequence. That’s it. There is no real understanding, logic, or knowledge of facts behind their answers.
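A deliberately tiny bigram model makes the “predict the next word” idea concrete. This is a toy sketch, not a real LLM; the corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# A miniature "training corpus". The model will learn only which word
# tends to follow which -- it has no notion of truth or meaning.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for every word, how often each other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word: plausible, not 'true'."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- simply the most frequent continuation
```

Real models use vastly larger contexts and neural networks instead of counts, but the objective is the same: the output is whatever continuation is statistically likely, which is exactly why fluent-sounding falsehoods emerge.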

This means even the best of systems can produce errors such as:

Hallucinations: Confidently stating something as fact when it is false.

Bias and offensive material: Reflecting harmful stereotypes contained in training data.

Inconsistency: Providing different answers to the same question based on how the question is posed.

Context fade: Losing track of long conversations and missing subtle shifts in context.

Overconfidence: Presenting guesses in an authoritative tone, which leads users to trust incorrect information.

A user asking a chatbot for legal advice may receive fabricated case law. A student relying on AI for historical facts could be misled by fictitious quotes presented as genuine. Even technologically savvy users fall victim to errors if they do not fact-check the output.

Real-World Examples of AI Misfires

The news just keeps rolling:

Google’s AI Overviews, which were supposed to enhance search, suggested that users eat rocks and put glue in their pizza sauce, answers predicated on misunderstood or satirical sources.

Air Canada’s chatbot advertised a non-existent refund policy, and the company was forced to abide by it when challenged in court.

A New York lawyer filed a ChatGPT-drafted legal brief that cited entirely fabricated court cases; the brief made it to a hearing, the judge sanctioned him, and the story went viral.

Bing’s chatbot (in its early version) was reported to become aggressive or emotionally manipulative toward users in long conversations.

These are not just bugs; these are symptoms of a substantial reliability problem in the generative AI architecture.

Why Is This Happening?

The root of the problem is that generative AI doesn’t “know” anything. It neither checks facts, discovers truths, consults other sources, nor even questions its outputs. It simply generates output based on statistical patterns in its training data. This causes a few critical issues:

1. No Ground Truth

AI systems don’t “know” what a fact is. They generate plausible text, not verified statements. Even when the training data contains accurate facts, the model can blur or blend them together, especially when the user poses a narrow, specialized, or complex request.

2. Training Data Has Errors

Training data scraped from the internet includes all of the internet’s errors, biases, and nonsense. Satire, misinformation, and subtle mistakes are all ingested on equal footing with reliable sources.

3. Models Have No Current Knowledge

Most models stop learning at a training cutoff and therefore don’t know what is currently happening in the world. Some, like ChatGPT, can augment their knowledge with live search, but many do not. Ask about anything that happened after the cutoff, and even basic current-event questions can go badly wrong.

4. Models Have No Accountability

An AI system will not say “I’m wrong” unless you make it. It will not tell you “I’m guessing.” Every output arrives flat, confident, and polished, which is potentially dangerous and misleading.

Can Reliability Be Improved?

Yes—but it will take more than simply more data and computing power. Here is what companies and researchers are doing:

1. RAG (Retrieval-Augmented Generation)

Rather than relying solely on knowledge absorbed during training, RAG systems retrieve information from external databases or the web in real time and then generate the answer grounded in the retrieved material. This helps eliminate some hallucinations and gives the facts in an answer a verifiable basis.
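The RAG pattern can be sketched in a few lines. This is a hedged toy illustration: a real system would use an embedding model and a vector database, whereas simple keyword overlap stands in for semantic search here, and the document store, query, and function names are all invented.

```python
import re

# A tiny stand-in for a knowledge base the model can retrieve from.
documents = [
    "Refund requests must be filed within 30 days of purchase.",
    "Standard shipping takes 5 to 7 business days.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    q = tokens(query)
    return max(documents, key=lambda d: len(q & tokens(d)))

def build_prompt(query: str) -> str:
    """Ground the model's answer in the retrieved text."""
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When must a refund request be filed?"))
```

The key design choice is that the generation step sees the retrieved passage in its prompt, so the answer can be checked against a concrete source instead of the model’s opaque training data.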

2. Model Alignment and Guardrails

Many companies such as OpenAI, Anthropic, and Google are putting massive resources into making AI outputs safer and more reliable by applying alignment approaches, reinforcement learning from human feedback (RLHF), and built-in moderation systems.

3. Domain-Specific Models

General all-purpose AI may never be fully competent across entire domains. However, focused AIs trained on specific fields such as law, medicine, or engineering can deliver output with much higher reliability.

4. Fact-Checking Layers

Some startups and research organizations are developing AI layers that double-check the output of another model—think an “AI proofreader” that seeks to validate claims, citations, and logical soundness.
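That “AI proofreader” idea can be sketched as a simple second pass over a model’s output. Real systems use entailment or citation-verification models; naive word matching stands in here, and the texts and the `check_claims` name are invented for illustration.

```python
def check_claims(output: str, source: str) -> list[str]:
    """Return output sentences whose content words are absent from the source."""
    src = source.lower()
    unsupported = []
    for sentence in filter(None, (s.strip() for s in output.split("."))):
        # Naive support test: every word longer than 3 chars must appear
        # somewhere in the trusted source text.
        content_words = [w.lower() for w in sentence.split() if len(w) > 3]
        if not all(w in src for w in content_words):
            unsupported.append(sentence)
    return unsupported

source = "Willow performed a benchmark computation in under five minutes."
output = "Willow performed a benchmark computation. Willow proved the multiverse is real."

print(check_claims(output, source))  # flags the multiverse claim only
```

However crude, the structure is the point: a generation layer proposes, and an independent verification layer checks each claim against trusted material before the user sees it.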

What Can Users Do Right Now?

Until generative AI becomes fully reliable, users must be cautious and skeptical when using these tools.

Here are some best practices:

Always validate AI-generated content, especially in sensitive situations (e.g., health care, finance, or law).

Ask follow-up questions to clarify the AI’s reasoning or solicit its citations.

Work with trusted platforms that offer transparency, disclaimers, or access to source links.

Think of AI as a collaborator, not an authority. AI is an effective tool, but it is not an expert replacement.

Why This Affects the Whole Internet

Generative AI is rapidly becoming the infrastructure of digital experiences, be it in search engines or help desks, creative tools or education platforms. Companies are hurrying to integrate AI capabilities, often deploying models that are not production-ready.

This creates a paradox: the more we lean on AI, the more we expose users to its shortcomings. And if these issues are never addressed, they can lead to:

A decrease in public trust in digital platforms.

Misinformation at scale.

Legal liabilities and regulatory push-back.

A widening knowledge gap for less-savvy users who assume that whatever is generated is always accurate.

Conclusion

Generative AI is not broken; it’s simply not fully baked. The tech sector is still figuring out how to augment generative models in ways that are trustworthy, transparent, and safe. These are necessary growing pains in what is potentially one of the most significant technological shifts of modern times. It is time for users, creators, and organizations to come to terms with the fact that it is not a mature technology yet. The shine of AI-generated content glosses over the brittleness behind the curtain.

Until generative AI systems can reliably distinguish fact from fiction, we’re all in a beta version of the future—and it’s on all of us to proceed cautiously, ask questions, and demand better.

Patent Landscape and Graphical Exploration

Charts (not reproduced here): top CPC classification codes, top IPCR classification codes, top owners, and patent documents by jurisdiction.

(Source: lens.org)

IP in the Age of AI: Who Owns the Algorithm?

In an era where artificial intelligence systems are designing new drugs, composing symphonies, and even writing code, the lines between creator and machine are becoming blurred. As AI continues to infiltrate nearly every industry, the question of intellectual property (IP) ownership is more relevant—and more complex—than ever before.

But when it comes to algorithms, especially those designed by or with the help of AI, who really owns the rights?

A Shifting Landscape

Traditionally, intellectual property laws were crafted with human inventors, artists, and developers in mind. The statutes assume a direct line between a person and their creation. But now that machines can “create” based on training data and optimization, the framework no longer fits as neatly.

Take, for example, a neural network trained to generate new software code. If a developer sets up the AI model, feeds it data, and configures the learning parameters, but the final product—the code—is generated independently by the system, is the developer the owner? Is it the company behind the data or the platform that trained the model?

This is not a hypothetical scenario. It’s playing out in courtrooms, patent offices, and legal think tanks around the world.

Understanding the Types of AI Creations

To unpack the issue, it helps to distinguish between different types of AI-driven work:

  • AI-Assisted Creation: A human uses AI tools as support (e.g., using AI to generate image suggestions for a design). Here, IP rights usually stay with the human.
  • AI-Generated Creation: The final product is produced entirely or mostly by AI, without detailed human direction. This is the grayest area.
  • Autonomously Invented Algorithms: The AI system is responsible for developing new algorithms or processes, such as optimizing supply chain routes or discovering new mathematical formulas.

Each of these scenarios raises unique legal and ethical questions. But they all boil down to the same dilemma: should a machine be recognized as an inventor or author?

What the Law Says (and Doesn’t Say)

In the U.S., the Patent and Trademark Office (USPTO) and the Copyright Office have taken a firm stance: only natural persons (i.e., humans) can be named as inventors or authors. This means that any application must identify a human as the inventor or author, even if the AI was the actual creator.

Other countries are starting to diverge. The United Kingdom and Australia have seen cases where AI-generated inventions were debated in court. In a notable instance, Dr. Stephen Thaler filed patent applications listing his AI system, DABUS, as the sole inventor. Courts in the U.S. and UK rejected the claims, while Australia briefly accepted them before backtracking.

These mixed responses reveal how ill-equipped current legal systems are for this technological reality.

Corporate Ownership and the Role of Data

The question of ownership becomes even murkier when you consider the data used to train the algorithm. AI systems are only as good as the data they’re fed—often vast, proprietary sets collected over years.

If Company A develops the AI platform, and Company B licenses it to generate new IP, who owns the result? The answer often comes down to contract law rather than IP law. It’s increasingly common for companies to bake IP clauses into licensing and partnership agreements.

Moreover, data privacy and ownership further complicate the conversation. If an AI model is trained on user-generated data, do those users have any rights over the model’s outputs? So far, most jurisdictions say no, but that could change.

What Startups and Innovators Should Do

For entrepreneurs working in AI or using AI to develop products, these are not distant academic concerns—they’re core business risks. Here are some ways to navigate this tricky terrain:

  • Document Human Contribution: Make sure there’s a clear record of how humans were involved in shaping, guiding, or supervising the AI’s output.
  • Review Licensing Agreements Carefully: If you’re using third-party AI tools, check who owns what under the hood.
  • File IP Early: Even provisional patents can help stake a claim to ownership before a competitor beats you to it.
  • Consult with an IP Attorney: Especially one with experience in AI or emerging technologies.

A Glimpse at the Future

Ultimately, the law will need to evolve. There is growing recognition that traditional IP frameworks are too rigid to handle AI’s capabilities. Some experts advocate for a new category of IP ownership—something between traditional authorship and corporate control.

Others suggest updating definitions of “inventor” or “author” to allow for shared credit between AI and human operators. Whether this happens soon or decades from now will depend on political will, judicial interpretation, and economic pressure.

What’s clear is that the future of innovation is entangled with AI. If we don’t adapt our IP systems, we risk stifling the very innovation these systems were designed to protect.