Most organizations don't know when their AI is producing the wrong thing. I build the systems that catch it.

Many companies track dozens of metrics but still struggle to answer simple questions:

• Why do KPIs improve while customer outcomes decline?
• Where is your AI learning from corrupted signals?
• What is your measurement system missing?

I analyze operational data to reveal what the metrics are really telling you.

Have a dashboard, report, or spreadsheet you’re struggling to interpret?

Meet Gina

I build systems that detect when AI is producing the wrong thing.

Hi, I’m Gina—turning spreadsheets into clear business insights.

Behind PixelKraze

I spent a decade inside one of the largest US telecom retention operations, watching metrics improve while customer experience declined. Most organizations never find out why, because the data that would reveal the problem is the same data they're using to measure success.

So I built the instrumentation to catch it.

As a late-diagnosed AuDHD adult, I think in systems. I see the structural failures that standard analysis walks past. That cognitive profile produced the Trust Signal Health Framework, the System Integrity Index, and the AI Code Integrity Auditor — tools that detect when AI is optimizing toward the wrong outcomes, learning from corrupted signals, or generating code that looks correct while quietly failing.

PixelKraze exists because that problem is real, it is costly, and it is solvable.

What I Do

Data System Diagnostics
I examine how metrics are defined, how teams are incentivized, and where reporting systems may be distorting reality. I identify which metrics actually reflect real outcomes and which ones are misleading proxies that teams may be optimizing instead.

Trust Signal Modeling
I analyze operational data to identify hidden friction, KPI drift, and patterns like repeat contacts that signal underlying system problems.

Actionable Decision Translation
I translate statistical findings into concrete operational changes — process fixes, measurement improvements, and governance recommendations.

Problems I Work On

AI systems learn from the data organizations give them. When that data is shaped by gaming, incentive distortion, or measurement failures, the AI learns the wrong thing and accelerates the damage.

I build detection systems that identify where metrics have decoupled from reality, where training signals have been corrupted, and where AI is optimizing toward outcomes that look good on paper but harm the people it serves.

My research is grounded in a decade of direct observation inside one of the largest telecom retention operations in the US — and validated through NovaWireless, a synthetic AI lab built to model exactly these failure patterns at scale.

From Spreadsheet to Insights

Real examples of charts and tables I build from spreadsheet/CSV exports—so you can compare KPIs, spot trends, and make decisions faster.

Privacy Note: No sensitive data needed; de-identified records with IDs only are fine.

Sample chart: average heating oil usage by heating type.

Integrity Signal Profile: DAR / DRL / DOV / POR / TER

Bar chart showing all five DFDE governance signals normalized to a 0–1 scale. DAR and TER at ceiling, DRL elevated, DOV and POR low — the shape of this chart tells you whether a system problem is structural or behavioral before a single rep is reviewed.
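For readers curious what "normalized to a 0–1 scale" means in practice, here is a minimal sketch using min-max scaling. The raw values and per-signal bounds below are invented for illustration; the actual DFDE signal definitions and scales are PixelKraze's and are not published on this page.

```python
# Hypothetical raw readings on their native scales (illustrative only).
raw = {"DAR": 94.0, "DRL": 7.1, "DOV": 0.12, "POR": 18.0, "TER": 0.99}

# Assumed min/max bounds for each signal's native scale (also invented).
bounds = {"DAR": (0, 100), "DRL": (0, 10), "DOV": (0, 1),
          "POR": (0, 100), "TER": (0, 1)}

def normalize(value, lo, hi):
    """Min-max scale a raw value into the [0, 1] interval."""
    return (value - lo) / (hi - lo)

profile = {k: round(normalize(v, *bounds[k]), 2) for k, v in raw.items()}
# profile: {'DAR': 0.94, 'DRL': 0.71, 'DOV': 0.12, 'POR': 0.18, 'TER': 0.99}
```

Once every signal lives on the same 0–1 scale, the shape comparison the caption describes (two signals at ceiling, one elevated, two low) becomes a direct visual read rather than a unit-conversion exercise.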


System Integrity Index (SII) Gauge

Horizontal gauge showing SII = 45.1 in the Watch band. The SII is not a performance score — it is a velocity regulator that constrains proxy optimization when durable outcomes are diverging.

De-identified dataset prepared for analysis.

Credit Behavior Analysis: Frequency and Average Amount

Dual histogram showing credit rate and average credit amount distributed across 250 agents. The tight clustering with no outliers is the systemic signature — when everyone looks the same, the problem is in the architecture, not the individual.
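One simple way to test the "tight clustering with no outliers" claim on your own export is a standard-deviation screen. This is a generic sketch with invented per-agent numbers, not the PixelKraze methodology; a real dataset would have 250 agents.

```python
import statistics

# Hypothetical per-agent credit rates (fraction of contacts ending in a
# credit). Values are made up for illustration.
credit_rates = [0.11, 0.12, 0.10, 0.13, 0.11, 0.12, 0.12, 0.10]

mean = statistics.mean(credit_rates)
sd = statistics.stdev(credit_rates)

# Flag agents more than 3 standard deviations from the mean. An empty
# list is the "systemic signature" the caption describes: when no one
# stands out, the cause is architectural rather than individual.
outliers = [i for i, r in enumerate(credit_rates)
            if abs(r - mean) > 3 * sd]
```

If this screen keeps coming back empty while the aggregate metric drifts, investigating individual agents is the wrong move; the incentive or measurement design is the place to look.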


Repeat Contact Rate by Rep

Area chart of 30-day repeat contact rate ranked across all 250 agents with the department average marked at 0.18. The smooth, gradual slope with no sharp outliers is the fingerprint of systemic drift — if bad actors were driving the signal, you would see spikes. You don't.
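A 30-day repeat contact rate like the one charted here can be computed from a plain contact log. The sketch below uses one common definition (another contact from the same customer within 30 days counts as a repeat) and hypothetical data; the exact definition behind the chart above isn't specified on this page.

```python
from collections import defaultdict
from datetime import date

# Hypothetical contact log rows: (rep_id, customer_id, contact_date).
contacts = [
    ("rep1", "custA", date(2025, 1, 1)),
    ("rep1", "custA", date(2025, 1, 20)),  # repeat within 30 days of Jan 1
    ("rep1", "custB", date(2025, 1, 5)),
    ("rep2", "custC", date(2025, 1, 3)),
]

def repeat_contact_rate(log, window_days=30):
    """Per-rep share of contacts followed by another contact from the
    same customer within window_days (one common definition)."""
    by_customer = defaultdict(list)
    for rep, cust, day in log:
        by_customer[cust].append((day, rep))
    repeats, totals = defaultdict(int), defaultdict(int)
    for events in by_customer.values():
        events.sort()  # chronological order per customer
        for i, (day, rep) in enumerate(events):
            totals[rep] += 1
            if any(0 < (later - day).days <= window_days
                   for later, _ in events[i + 1:]):
                repeats[rep] += 1
    return {rep: repeats[rep] / totals[rep] for rep in totals}

rates = repeat_contact_rate(contacts)
# rep1 handled 3 contacts, 1 followed by a repeat; rep2 handled 1, no repeats.
```

Ranking the resulting per-rep rates and plotting them produces the kind of area chart described above; a smooth slope points at the system, sharp spikes point at individuals.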

Operational Systems Experience

For a decade I worked inside a US telecom call center focused on retention and technical support, one of the industry's highest-volume, most KPI-driven environments. I handled churn cases, metric escalations, and customer trust breakdowns in real time, every day.

That frontline exposure is not background context. It is the research environment. I watched misaligned metrics distort agent behavior, corrupt AI training signals, and produce outcomes that looked like success while customer reality declined.

The 42.1 percentage-point gap between reported proxy metrics and durable customer outcomes was not a theory. I lived inside it for ten years before I built the framework to prove it.

How I Work

I treat data as a product of real systems—people, incentives, tools, and processes. Most problems aren't "bad analysis"; they're bad definitions, broken incentives, or missing context.

Then I move quickly through three steps:
• Clarify the decision and what “success” actually means
• Validate the data (definitions, gaps, bias, incentive effects)
• Translate results into actions: process changes, experiments, or models

The goal is clarity you can act on—not another dashboard you ignore.

Want a second set of eyes on your data?

Tell me what decision you’re trying to make and what data you have. I’ll tell you what’s possible and what I’d recommend as a first step.

No sensitive data needed; de-identified records with IDs only are fine.

View Resume

© 2026 PixelKraze, LLC

Copyright & Licensing

All original content, models, documentation, and frameworks on this site are the intellectual property of PixelKraze, LLC unless otherwise stated.

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).

Commercial use, redistribution for profit, or incorporation into proprietary systems requires prior written permission.

Independent work using synthetic or public data. Not affiliated with or endorsed by any employer.