Summary: Visual Feedback Loops for SwiftUI with Claude Code

Notes on Christopher Trott's research into Swift Snapshot Testing for AI-assisted SwiftUI development

By Jeffrey Hicks • Aug 20, 2025 • analysis

This is a summary of Christopher Trott’s excellent research on visual feedback loops for AI-assisted SwiftUI development. The original article contains much more detail and context—definitely worth reading the full piece if you’re interested in this approach.

Trott explores an innovative method for giving Claude Code visual capabilities when working with SwiftUI, built on the Swift Snapshot Testing library. The core idea: create a feedback loop where Claude can generate SwiftUI code, capture visual snapshots, and iterate based on what it “sees.”

The Approach

Trott’s method treats Claude Code as an iterative agent that can make changes and visually verify results against target designs. The key insight: integrate Swift Snapshot Testing to give Claude multimodal feedback on its SwiftUI code generation.

Technical Implementation

Test Setup

Trott creates a dedicated ViewSnapshotTests target for temporary verification—not permanent testing. These snapshots get reset after verification rather than becoming part of the test suite.

Template Test File:

/// ViewVerificationTests.swift
import SnapshotTesting
import SwiftUI
@testable import mytarget
import Testing

@Suite("ViewVerificationTests")
@MainActor
struct ViewVerificationTests {
    @Test("ViewVerificationTest")
    func viewVerification() {
        // Replace with the view under test
        let view = EmptyView()
        assertSnapshot(
            of: view,
            as: .image(layout: .fixed(width: 0, height: 0)), // set to the target design's dimensions
            record: true // record mode: each run overwrites the snapshot image
        )
    }
}
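
Two details of the template matter for the loop: record: true keeps the test in record mode, so each run overwrites the snapshot rather than failing against a stale reference, and swift-snapshot-testing writes the image to a __Snapshots__ directory next to the test file by default; that image is what Claude inspects during the iteration process below.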

Iteration Process

Trott outlines a 10-step workflow that Claude can follow:

SwiftUI View Verification Workflow:

  1. Create SwiftUI View from specifications or reference image
  2. Run xcodegen to add the file to the project
  3. Modify the viewVerification test with the new View (see the sketch after this list)
  4. Run xcodebuild test -only-testing:"ViewSnapshotTests/ViewVerificationTests" -quiet
  5. Analyze the output image using multimodal capabilities
  6. Plan changes to bring the result closer to the target
  7. Implement the planned changes
  8. Re-run the test to generate new snapshot
  9. Repeat steps 5-8 for 2-5 iterations
  10. Reset test files after approval
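
To make step 3 concrete, here is a minimal sketch of the template pointed at a view under test. ProfileCardView and the 375×200 layout are hypothetical stand-ins, not from Trott's article; in practice the width and height would match the reference design's dimensions.

// Step 3 (sketch): the template's test, pointed at a hypothetical view.
@Test("ViewVerificationTest")
func viewVerification() {
    let view = ProfileCardView() // hypothetical view created in step 1
    assertSnapshot(
        of: view,
        as: .image(layout: .fixed(width: 375, height: 200)), // sized to the reference image
        record: true // re-record every run so step 5 analyzes the latest render
    )
}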

Visual Analysis Tools

Trott also explores ImageMagick commands for deeper visual analysis:

# Extract exact RGB values from specific coordinates
magick image.png -crop 1x1+200+300 txt:
# Output: (240,240,240,255) #F0F0F0FF grey94

# Check image dimensions and properties
magick identify image.png

# Get Root Mean Square Error between images
magick compare -verbose -metric RMSE reference.png snapshot.png null:

# Generate visual difference overlay
magick compare reference.png snapshot.png diff_output.png
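
To fold these checks into the loop programmatically, a test helper could shell out to ImageMagick. The sketch below assumes ImageMagick 7 is on the PATH; the function name is illustrative.

import Foundation

// Sketch: capture the RMSE that `magick compare` reports. ImageMagick writes
// the metric to stderr and exits non-zero when the images differ, so we read
// stderr rather than relying on the exit code.
func rmseScore(reference: String, snapshot: String) throws -> String {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = ["magick", "compare", "-metric", "RMSE", reference, snapshot, "null:"]
    let stderrPipe = Pipe()
    process.standardError = stderrPipe
    try process.run()
    process.waitUntilExit()
    let output = stderrPipe.fileHandleForReading.readDataToEndOfFile()
    return String(decoding: output, as: UTF8.self).trimmingCharacters(in: .whitespacesAndNewlines)
}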

Practical Results

Trott tested this by having Claude recreate a SwiftUI view from a reference image. The experiment revealed both capabilities and important limitations.

Key Findings

Trott discovered several challenges:

  • Claude tends toward system defaults even with reference images
  • Difficulty detecting subtle visual differences
  • Tendency to declare results “good enough” prematurely
  • ImageMagick analysis sometimes created more confusion

His conclusion: while promising, this approach isn’t ready for pixel-perfect design reproduction. Traditional methods (providing explicit design specs) still work better for precise Figma-to-SwiftUI conversion.

Other Visual Approaches

Trott also mentions alternative techniques for AI visual feedback:

Full XCTest UIAutomation

  • Captures complete simulator output including system UI
  • More complex setup requiring full app environment
  • Enables tap simulation and navigation testing
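
A minimal version of that setup might look like the following sketch; the "Details" button identifier is hypothetical.

import XCTest

// Sketch: launch the app, perform a navigation tap, and attach a
// full-simulator screenshot (including system UI) to the test results.
final class VisualCheckUITests: XCTestCase {
    func testScreenshotAfterNavigation() {
        let app = XCUIApplication()
        app.launch()
        app.buttons["Details"].tap() // hypothetical navigation step
        let screenshot = XCUIScreen.main.screenshot()
        let attachment = XCTAttachment(screenshot: screenshot)
        attachment.lifetime = .keepAlways // persist in the .xcresult bundle
        add(attachment)
    }
}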

XcodeBuildMCP Integration

  • Offers simulator automation features
  • Includes UI interaction and screenshot capabilities
  • Provides more comprehensive app testing environment

PeekabooMCP for macOS

  • System-wide screen content access
  • Primarily useful for macOS app development
  • Enables broader system-level visual analysis

Takeaways

Trott’s research suggests:

  • Keep iterations limited (3 max) before human intervention
  • Works better for “general feel” than precise reproduction
  • Still experimental—not ready for production workflows
  • Swift Testing has some technical constraints for this use case

Looking Forward

While not yet practical for pixel-perfect work, this research establishes a foundation for visual feedback loops in AI development. As multimodal AI capabilities improve, approaches like this could become increasingly valuable for SwiftUI development workflows.

Worth exploring if you’re interested in cutting-edge AI development techniques, but traditional design-to-code approaches remain more reliable for now.