The Brief

Credit Genie is an AI-native lending platform — the core product is a decision engine that ingests applicant financial data and outputs a credit assessment with confidence scoring. My role: design the full UX from zero, covering both the applicant-facing application flow and the underwriter-facing decision dashboard.

This was founding design work. No prior screens, no design system, no established patterns to extend. The product had a working model and a TypeScript API. My job was to turn that into something humans could use and trust.

The Trust Problem in AI Lending

Credit decisions carry legal weight. Applicants have the right to understand why they were approved or declined. Underwriters need to be able to explain and override decisions. Regulators want an audit trail. The AI model can produce a correct answer — but if the interface doesn't surface the reasoning, none of those requirements are met.

The central design challenge was building a transparency layer: a UI that exposed enough of the model's decision logic to create appropriate trust, without overwhelming users with probability distributions and feature weights they couldn't interpret. I studied how other regulated AI products — credit bureaus, insurance underwriting tools, medical triage systems — handle explainability. Most of them handle it badly: dense technical outputs dropped into the UI without translation.

The Applicant Flow

The applicant-facing flow handles data collection: personal information, financial history, employment status, existing obligations. The challenge is that credit applications require a lot of information — and the more steps involved, the higher the drop-off rate.

I structured the flow around progressive disclosure: one category of information per step, with a visible progress indicator and a rationale for why each category matters. Applicants who understand why they're being asked something are significantly more likely to complete the application. Inline "Why do you need this?" prompts reduced perceived invasiveness without sounding defensive about data collection.
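A minimal sketch of how that step structure could be modeled, in TypeScript since the product's API is TypeScript. The step names, field keys, and rationale copy here are illustrative assumptions, not the shipped configuration:

```typescript
// Hypothetical step configuration for the applicant flow.
// Each step collects one category of data and carries a
// plain-language rationale for the "Why do you need this?" prompt.
interface ApplicationStep {
  id: string;
  title: string;
  rationale: string; // shown inline when the applicant asks why
  fields: string[];  // field keys collected in this step
}

const steps: ApplicationStep[] = [
  {
    id: "personal",
    title: "Personal information",
    rationale: "We use this to verify your identity.",
    fields: ["fullName", "dateOfBirth", "address"],
  },
  {
    id: "employment",
    title: "Employment status",
    rationale: "Income stability is a key factor in affordability.",
    fields: ["employer", "employmentType", "monthlyIncome"],
  },
];

// Value for the visible progress indicator, as a percentage.
function progress(stepIndex: number, totalSteps: number): number {
  return Math.round(((stepIndex + 1) / totalSteps) * 100);
}
```

Keeping the rationale on the step object, rather than in view code, means the same copy can be reviewed once and reused wherever that category of data is requested.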

The Decision Dashboard

The underwriter dashboard surfaces the model's output: approval/decline recommendation, confidence score, primary decision factors, and applicant financial summary. The dashboard needed to serve two user types: junior underwriters who would accept most AI recommendations, and senior underwriters who would interrogate edge cases.

I designed a hierarchical decision summary: verdict and confidence score at the top, supporting factors expandable below, raw data accessible but not surfaced by default. Override controls positioned to require deliberate action — not accidentally triggerable. Decision reasoning exportable for regulatory documentation.
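The hierarchy above can be sketched as a data shape. These type and field names are assumptions for illustration, not the actual Credit Genie API, and the minimum-justification rule in the override check is a hypothetical policy:

```typescript
// Illustrative shape of the hierarchical decision summary.
type Verdict = "approve" | "decline" | "refer";

interface DecisionFactor {
  label: string;       // plain-language supporting statement
  dataPointId: string; // link to the underlying applicant data
  weight: number;      // model feature importance, 0..1
}

interface DecisionSummary {
  verdict: Verdict;          // surfaced at the top
  confidence: number;        // 0..1, alongside the verdict
  factors: DecisionFactor[]; // expandable below
  rawDataUrl: string;        // accessible, not surfaced by default
}

// Overrides require a typed justification, so they cannot be
// triggered accidentally and always leave an audit-trail entry.
interface OverrideRequest {
  decisionId: string;
  newVerdict: Verdict;
  reason: string; // exported with the decision for regulators
  underwriterId: string;
}

function canSubmitOverride(req: OverrideRequest): boolean {
  // Assumed rule: at least 20 characters of justification.
  return req.reason.trim().length >= 20;
}
```

Requiring a non-trivial `reason` before the override can be submitted is one way to make the action deliberate rather than a single stray click.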

The AI Reasoning Layer

The transparency layer was the most technically interesting design problem. The model outputs feature importance scores — which data points most influenced the decision. Translating these into human-readable explanations required close collaboration with the ML team to define what could be safely communicated versus what required expert interpretation.

The final design used a "Key Factors" component: three to five plain-language statements about the decision, each linked to the underlying data point, with an expandable detail view for underwriters who wanted to go deeper. Applicants saw a version of this too — simplified, legally reviewed, no model internals.
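One way the importance-to-language translation might work, sketched under assumptions: the feature names and plain-language templates below are invented for illustration, and the real mapping would be the legally reviewed copy described above. Filtering against an approved template list is what keeps unreviewed model internals out of the applicant view:

```typescript
// Sketch: translate model feature-importance scores into the
// plain-language statements shown in the "Key Factors" component.
interface FeatureImportance {
  feature: string;
  score: number; // importance assigned by the model
}

// Assumed legally reviewed templates, keyed by feature name.
// Features without a template never reach the applicant view.
const templates: Record<string, string> = {
  debtToIncome: "Your existing obligations relative to your income",
  paymentHistory: "Your record of on-time payments",
  employmentLength: "How long you have been with your employer",
  creditUtilization: "How much of your available credit you use",
};

// Top factors with approved copy, most influential first.
function keyFactors(scores: FeatureImportance[], max = 5): string[] {
  return scores
    .filter((s) => s.feature in templates)
    .sort((a, b) => b.score - a.score)
    .slice(0, max)
    .map((s) => templates[s.feature]);
}
```

The underwriter detail view would start from the same ranked list but attach the raw scores and data points instead of stopping at the template text.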

Results

  • V1.0 shipped to internal pilot with lending partners
  • AI transparency layer received positive feedback from regulatory review
  • Applicant flow achieved measurable reduction in drop-off at the financial data submission step vs. the previous form-based prototype
  • Underwriter dashboard adopted by the pilot team without additional training; the hierarchy was self-explanatory

What This Taught Me

Designing for AI-native products is different from designing products that use AI features. The model is the product — the UX is the explanation layer wrapped around it. Every design decision involves a negotiation between what the model actually knows and what it's appropriate to surface to different user types. Getting that calibration right requires working much more closely with the ML team than most design workflows allow. I'm building that way of working now.