The Uncomfortable Truth

After two decades in semiconductor verification services, I've watched the same loop repeat across hundreds of projects, dozens of clients, and three generations of EDA tools.

Tools got faster. Coverage got denser. Sign-off stayed subjective.

The Verification Hell Loop

Every service company knows this dance. We don't talk about it publicly. But it's real.

            CLIENT COMES IN ──▶ WE PROPOSE ──▶ WE INITIATE ──▶ DESIGN TEAM BUSY
            (Reputation)         THE PLAN       EXECUTION      (STA, DFT, Backend)
                                                    │                  │
                                                    ▼                  │ DELAYS
                                        ┌───────────────────────┐      │
                                        │   REGRESSION RUNS     │◀─────┘
                                        │   (Tools work fast)   │
                                        └───────────────────────┘
                                                    │
                                                    ▼
                                        ┌───────────────────────┐
                                        │  COVERAGE READY       │
                                        │  90%? 95%? 98%?       │
                                        └───────────────────────┘
                                                    │
                    ┌───────────────────────────────┴───────────────────────────┐
                    ▼                                                           ▼
            ┌──────────────────────┐                            ┌──────────────────┐
            │ CODE COVERAGE REVIEW │                            │ FUNC COV REVIEW  │
            │ Designer: BUSY       │                            │ Client: BUSY     │
            │ Status: WAITING...   │                            │ Status: WAITING  │
            └──────────────────────┘                            └──────────────────┘
                    │                                                           │
                    └───────────────────────┬───────────────────────────────────┘
                                            ▼
                          ┌────────────────────────────────────┐
                          │         PROPOSAL DELAYED           │
                          │  "We need more coverage"           │
                          │  "Can you add these scenarios?"    │
                          │  "What about corner cases for X?"  │
                          └────────────────────────────────────┘
                                            │
                          LOOP AGAIN  ◀─────┴─────▶  EVENTUALLY SIGN-OFF
                          (weeks)                     (subjective)
            

This loop runs for weeks. Sometimes months. And the painful truth? It's not the tools' fault.

The Product Company Reality

Some product companies never touch their verification setup as long as it works. That lasts until European functional safety rules require coverage updates. Then they come to us.

They come because of our reputation. We propose a plan. We initiate. And we all know what happens next:

  • Design team is busy responding to STA, DFT, backend issues
  • Regression runs, coverage numbers ready
  • But code coverage needs designer review — designer is busy
  • Functional coverage needs client verification team — they're overwhelmed
  • Proposal gets delayed. More tests requested. Loop again.

This is the industry. Everyone is busy. Everyone is doing their job. But nobody has time to think deeply about whether the design actually matches the specification.

Where the Real Time Goes

Here's what clients don't see. Here's what managers don't track.

Activity                                              Time
Tool Runtime (Simulation, Coverage)                    15%
Human Waiting (Reviews, Approvals, Clarifications)     70%
Actual Analysis (Interpretation, Decision Making)      15%

The tools are NOT the bottleneck. The bottleneck is human availability + human judgment.

The Sign-off Subjectivity Problem

Every client has different criteria. Same coverage number, different meanings.

Client Type            Response to 95%                             Focus
Consumer Electronics   "95% is good enough. Ship it."              Time-to-market
Automotive (ASIL-D)    "95% means nothing. What about the 5%?"     Functional safety
Aerospace              "Show us WHY each line is unreachable."     DO-254 traceability
European Automotive    "We need formal proof for the gaps."        Regulatory compliance

Same metric. Four completely different sign-off criteria. Tools give numbers. They don't give judgment.

Why No Tool Has Solved This

Giant EDA companies build excellent tools. Accurate. Fast. Reliable. But they can't solve the interpretation problem.

  • Data Privacy: No client lets you train a tool on their complete project. IP concerns. NDA restrictions.
  • Project Isolation: Every sign-off is isolated. Every judgment is one-time. We never accumulate learning across projects.
  • Subjectivity: It's never one uniform criterion. Every client asks us to sign off with THEIR inputs, THEIR criteria, THEIR judgment.

How We Train Engineers vs How We "Train" Tools

            HOW WE TRAIN A HUMAN ENGINEER:
            
            FRESH GRAD ──▶ JUNIOR ──▶ SENIOR ──▶ PRINCIPAL ──▶ ARCHITECT
            
            • Basics        • One protocol    • Multiple       • Protocol      • Maps any protocol
            • Theory        • Debug simple      protocols        expert        • Knows implied behavior
            • Syntax                          • Complex debug  • Intuition     • Signs off with CONFIDENCE
            
            ═══════════════════════════════════════════════════════════════════════════════════
            
            HOW WE "TRAIN" CURRENT TOOLS:
            
            INSTALL ─────────────────────────────────────────────────▶ RUN IT
            
            • Generic rules                                            • Get numbers
            • No project context                                       • No judgment
            
            No progression. No learning. No adaptation.
            

The Knowledge Layers of an Architect

An architect doesn't just know more. They THINK differently.

            LAYER 4: DESIGN-SPECIFIC KNOWLEDGE                     ◀── JUDGMENT
            • This specific chip's architecture
            • Client's implicit requirements
            • Historical issues in similar designs
                                  ▲
            LAYER 3: PROTOCOL-SPECIFIC KNOWLEDGE
            • PCIe quirks vs AMBA quirks vs USB quirks
            • Where specs are ambiguous
            • Common implementation pitfalls
                                  ▲
            LAYER 2: VERIFICATION METHODOLOGY
            • UVM patterns, Coverage strategies, Assertion principles
                                  ▲
            LAYER 1: GENERIC RULES                                 ◀── CURRENT TOOLS
            • SystemVerilog syntax, Basic RTL patterns
            
            ═══════════════════════════════════════════════════════════════════════════════════
            Current tools operate at Layer 1. Sign-off judgment requires Layer 4.
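
To make the stack concrete, here is a minimal sketch in Python. Every name in it (KnowledgeLayer, can_sign_off, the example facts) is ours and purely illustrative; it only encodes the claim in the diagram: a sign-off decision needs all four layers, not just the bottom one.

    # Hypothetical sketch -- not an existing tool's API.
    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeLayer:
        level: int
        name: str
        facts: list[str] = field(default_factory=list)

    layers = [
        KnowledgeLayer(1, "Generic rules",
                       ["SystemVerilog syntax", "basic RTL patterns"]),
        KnowledgeLayer(2, "Verification methodology",
                       ["UVM patterns", "coverage strategy", "assertion principles"]),
        KnowledgeLayer(3, "Protocol-specific knowledge",
                       ["spec ambiguities", "common implementation pitfalls"]),
        KnowledgeLayer(4, "Design-specific knowledge",
                       ["this chip's architecture", "client's implicit requirements"]),
    ]

    def can_sign_off(available: list[KnowledgeLayer]) -> bool:
        """Sign-off judgment needs the full stack, not just generic rules."""
        return max(layer.level for layer in available) >= 4

    print(can_sign_off(layers[:1]))  # False: Layer 1 alone is where current tools stop
    print(can_sign_off(layers))      # True: Layer 4 context is what enables judgment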
            

Transfer Learning: How Experts Actually Work

How does a senior engineer learn a new protocol in weeks, not years?

            ┌────────────────────────┐        ┌────────────────────────┐
            │   KNOWN PROTOCOL       │        │    NEW PROTOCOL        │
            │   (e.g., AXI)          │        │    (e.g., CHI)         │
            │                        │        │                        │
            │ • Transaction types    │───────▶│ • Map transaction      │
            │ • Ordering rules       │TRANSFER│   types                │
            │ • Error handling       │───────▶│ • Find ordering        │
            │ • Edge cases seen      │        │   similarities         │
            │                        │        │ • Apply error patterns │
            └────────────────────────┘        │ • PREDICT edge cases   │
                                              └────────────────────────┘
            
            The expert doesn't start from zero. They map patterns. They predict problems.
            Current tools CAN'T do this. They treat every project as if they've never seen anything before.
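
One hedged way to picture that transfer in code. The AXI facts and the concept pairs below are illustrative examples, not an authoritative or complete AXI-to-CHI mapping.

    # Sketch of "map patterns, predict problems".
    axi_knowledge = {
        "transaction_types": ["read", "write", "exclusive access"],
        "ordering_rules": ["same-ID transactions stay ordered"],
        "error_handling": ["SLVERR response", "DECERR response"],
        "edge_cases_seen": ["back-to-back bursts", "response reordering across IDs"],
    }

    concept_map = {  # known-protocol concept -> where to look in the new protocol
        "read": "CHI read opcodes (coherent and non-coherent variants)",
        "write": "CHI write opcodes (coherent and non-coherent variants)",
        "SLVERR response": "error indication in CHI response fields",
    }

    def transfer(known: dict, mapping: dict) -> dict:
        """Carry mapped patterns across; everything unmapped becomes a review question."""
        carried, open_questions = {}, []
        for category, items in known.items():
            carried[category] = [(item, mapping.get(item, "UNMAPPED")) for item in items]
            open_questions += [item for item in items if item not in mapping]
        return {"carried": carried, "start_review_with": open_questions}

    plan = transfer(axi_knowledge, concept_map)
    print(plan["start_review_with"])  # the expert starts here, not from zero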
            

The Gap We're Trying to Close

               SPECIFICATION                              RTL
               ┌───────────────┐                         ┌───────────────┐
               │               │                         │               │
               │  • Intent     │                         │  • Code       │
               │  • Edge cases │                         │  • Coverage   │
               │  • Implied    │         ?               │  • Numbers    │
               │    behavior   │◀───────────────────────▶│               │
               │  • What's NOT │                         │               │
               │    stated     │                         │               │
               └───────────────┘                         └───────────────┘
            
                                       ▲
                             ┌─────────┴─────────┐
                             │   UNDERSTANDING   │
                             │                   │
                             │ • Not matching    │
                             │ • Not searching   │
                             │ • REASONING       │
                             └───────────────────┘
            
            The gap isn't simulation speed. It's interpretation.
            Reading specs. Understanding intent. Mapping to RTL.
            Deciding what's missing. Knowing when "done" means done.
            

Our Journey: Plan → Execute → Learn → Pivot → Repeat

Teaching a tool to think like an engineer is not easy. Mapping every possible scenario takes immense thought and implementation effort. We planned. We executed. We learned. We pivoted. Again and again.

PLAN → EXECUTE → LEARN → PIVOT → PLAN → EXECUTE → ...

This cycle repeated many times. Not because we failed. Because the problem is HARD. Every iteration taught us something:

  • What works in theory doesn't always work in practice
  • Simple approaches often beat complex architectures
  • The gap between spec and RTL is semantic, not syntactic
  • Pattern matching fails. Understanding succeeds.

After months of iterations, we started seeing results. Not perfect. But measurable. And improving.

What Real Gaps Look Like

From our analysis, here are examples of REAL gaps — places where the specification says something, but the RTL doesn't implement it:

GAP: ERROR CORRECTION

"The uPacket RX is required to apply majority voting on the repeated ECFs for error correction."

Finding: No majority voting logic found in receiver module. Action: RTL implementation needed.

GAP: SIGNAL HANDLING

"The receiver must not clear these bits upon isolated symbol errors."

Finding: Receiver incorrectly clears bits on symbol errors. Action: RTL fix required.

GAP: FEATURE SUPPORT

"SCRAMBLING_DISABLE" capability required

Finding: Feature not implemented in framing module. Action: Implementation decision needed.

GAP: SIGNAL NAMING

Signals DATA0_7, DATAM_7, ACK_Data, IRQ_HPR required

Finding: These signals not present in framing module. Action: Architecture review needed.

These aren't theoretical. These are real findings. Each one would have been caught by a human reviewer — eventually. But now we can find them systematically.

[Figure: AVP Gaps View]
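
For reference, this is roughly how such a finding could be recorded as structured data. The field names are hypothetical; the content is taken from the first example above.

    from dataclasses import dataclass

    @dataclass
    class GapFinding:
        category: str     # which part of the protocol the requirement belongs to
        requirement: str  # what the specification says
        finding: str      # what was (or was not) found in the RTL
        action: str       # proposed next step

    gap = GapFinding(
        category="ERROR CORRECTION",
        requirement="The uPacket RX is required to apply majority voting "
                    "on the repeated ECFs for error correction.",
        finding="No majority voting logic found in receiver module.",
        action="RTL implementation needed.",
    )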

What "Defer" Items Look Like

Not everything is a gap. Some items need verification — they might be correct, but we can't prove it automatically:

DEFER: NUMERICAL RANGE

Numerical requirement "100 – 80"

Status: Need to verify parameter matches in RTL. Action: Parameter verification check.

DEFER: REGISTER MAPPING

DPCD register address 0x80

Status: Need to verify correct mapping in DPCD module. Action: DPCD address map check.

DEFER: HEX LITERAL

Signal mapping for hex value 0x13

Status: Mapping exists but needs confirmation. Action: Signal trace verification.

These items aren't gaps. They're unknowns. A senior engineer could verify them in minutes. But they need to be FOUND first. That's what we do.

[Figure: AVP Defer View]
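
The gap/defer distinction can be sketched as a simple triage rule. The statuses mirror the proven/defer/gap buckets used in the next section; the helper itself is our simplification, not the actual classification logic.

    from enum import Enum

    class Status(Enum):
        PROVEN = "requirement demonstrably implemented in the RTL"
        DEFER = "might be correct, but needs a targeted check to confirm"
        GAP = "requirement not found in the RTL"

    def triage(evidence_found: bool, evidence_conclusive: bool) -> Status:
        if evidence_found and evidence_conclusive:
            return Status.PROVEN
        if evidence_found:
            return Status.DEFER   # e.g. the 0x13 mapping above: exists, unconfirmed
        return Status.GAP         # e.g. the missing majority-voting logic

    print(triage(evidence_found=True, evidence_conclusive=False))  # Status.DEFER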

Feature-Level Coverage

When we analyze a complete protocol, we can show coverage by feature:

Feature              Proven   Defer    Gap   Coverage
Symbol/Framing        1,168      78    382        72%
AUX Channel             850      35    202        78%
Main Link               418      16     78        82%
I2C-over-AUX            290      11     76        77%
DPCD Registers          256      23     48        78%
Link Training           260      10     44        83%
Video Output            252      10     47        82%
Interrupts/HPD          177      13     38        78%
Power States            121       6     44        71%
SDP                     467      21    106        79%
Audio                   217      16     52        76%
GTC                      88      10      5        85%
TOTAL (22 Features)   6,169     421    637       ~49%
[Figure: AVP Coverage Dashboard]
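
Our reading of the per-feature percentages is the plain ratio proven / (proven + defer + gap); a short check against two table rows (the function name is ours):

    def feature_coverage(proven: int, defer: int, gap: int) -> float:
        """Coverage as reported per feature: proven items over all classified items."""
        return 100.0 * proven / (proven + defer + gap)

    print(round(feature_coverage(1168, 78, 382)))  # 72 -> Symbol/Framing
    print(round(feature_coverage(260, 10, 44)))    # 83 -> Link Training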

What We Believe

  1. Pattern matching won't solve this. Regex fails. Keywords fail. You need understanding.
  2. The tool needs to grow like an engineer. Generic rules → Protocol knowledge → Design context → Judgment.
  3. Data privacy blocked training until now. No client lets you train on their complete project. Every sign-off is isolated. Every judgment is one-time.
  4. This will be the path for the next 10 years. Unless we find a way to train a tool the way we train an engineer.
  5. We're on that path. Plan. Execute. Learn. Pivot. Repeat. The results are starting to show.

Our Approach

We're building systems that think about specifications the way engineers do.

We enhance open-source tools with an intelligence layer, bridging the gap between open-source capabilities and commercial tool requirements for comprehensive sign-off.

Walking ones, walking zeros

#semiconductor #verification #EDA #coverage #signoff