Notes on Christopher Trott's research into Swift Snapshot Testing for AI-assisted SwiftUI development
This is a summary of Christopher Trott’s excellent research on visual feedback loops for AI-assisted SwiftUI development. The original article contains much more detail and context—definitely worth reading the full piece if you’re interested in this approach.
Trott explores an innovative method for giving Claude Code visual capabilities when working with SwiftUI, using Swift Snapshot Testing. The core idea: create a feedback loop where Claude can generate SwiftUI code, capture visual snapshots, and iterate based on what it “sees.”
Trott’s method treats Claude Code as an iterative agent that can make changes and visually verify results against target designs. The key insight: integrate Swift Snapshot Testing to give Claude multimodal feedback on its SwiftUI code generation.
Trott creates a dedicated ViewSnapshotTests target for temporary verification rather than permanent testing: the snapshots are reset after verification instead of becoming part of the test suite.
Template Test File:
/// ViewVerificationTests.swift
import SnapshotTesting
import SwiftUI
@testable import mytarget
import Testing

@Suite("ViewVerificationTests")
@MainActor
struct ViewVerificationTests {
    @Test("ViewVerificationTest")
    func viewVerification() {
        // Replace with the view under test
        let view = EmptyView()

        assertSnapshot(
            of: view,
            as: .image(layout: .fixed(width: 0, height: 0)),
            record: true
        )
    }
}
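To make the template's placeholders concrete, here is a minimal sketch of what one filled-in iteration might look like. ProfileCardView, its contents, and the 320×140 layout are illustrative assumptions rather than anything from Trott's article; the EmptyView() placeholder and the zero-size layout in the template are simply what gets swapped out on each pass.

// Hypothetical filled-in test file. ProfileCardView is defined inline here so the
// sketch is self-contained; in a real project it would live in the app target.
import SnapshotTesting
import SwiftUI
@testable import mytarget
import Testing

// Assumed example view, not from the article.
struct ProfileCardView: View {
    let name: String
    let role: String

    var body: some View {
        VStack(alignment: .leading, spacing: 4) {
            Text(name).font(.headline)
            Text(role).font(.subheadline).foregroundStyle(.secondary)
        }
        .padding()
    }
}

@Suite("ViewVerificationTests")
@MainActor
struct ViewVerificationTests {
    @Test("ViewVerificationTest")
    func viewVerification() {
        // The EmptyView() placeholder is replaced with the view under test
        let view = ProfileCardView(name: "Ada Lovelace", role: "iOS Engineer")

        assertSnapshot(
            of: view,
            as: .image(layout: .fixed(width: 320, height: 140)),  // realistic size instead of 0x0
            record: true  // always write the PNG so Claude has an image to inspect
        )
    }
}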
Trott outlines a 10-step workflow that Claude can follow. Key steps from his SwiftUI View Verification Workflow include:
- Run xcodegen to add the file to the project
- Replace the viewVerification test with the new View
- Run xcodebuild test -only-testing:"ViewSnapshotTests/ViewVerificationTests" -quiet
Trott also explores ImageMagick commands for deeper visual analysis:
# Extract exact RGB values from specific coordinates
magick image.png -crop 1x1+200+300 txt:
# Output: (240,240,240,255) #F0F0F0FF grey94
# Check image dimensions and properties
magick identify image.png
# Get Root Mean Square Error between images
magick compare -verbose -metric RMSE reference.png snapshot.png null:
# Generate visual difference overlay
magick compare reference.png snapshot.png diff_output.png
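As a side note, the tolerance Trott measures externally with RMSE has a rough in-library counterpart: SnapshotTesting's image strategy accepts precision parameters when a rendered view is compared against a kept reference rather than re-recorded. The sketch below assumes a reasonably recent SnapshotTesting release and would slot into the same viewVerification test; the 0.98 values and the device configuration are illustrative assumptions, not a step in Trott's workflow.

// Sketch: compare against an existing reference image with some tolerance
// for anti-aliasing and rendering differences. Values are assumptions.
assertSnapshot(
    of: view,
    as: .image(
        precision: 0.98,            // fraction of pixels that must match
        perceptualPrecision: 0.98,  // how closely each pixel must match perceptually
        layout: .device(config: .iPhoneX)
    ),
    record: false  // compare against the stored reference instead of overwriting it
)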
Trott tested this by having Claude recreate a SwiftUI view from a reference image. The experiment revealed both capabilities and important limitations.
Trott discovered several challenges along the way. His conclusion: while promising, this approach isn’t ready for pixel-perfect design reproduction. Traditional methods (providing explicit design specs) still work better for precise Figma-to-SwiftUI conversion.
Trott also mentions alternative techniques for AI visual feedback; the original article covers these in more detail.
Trott’s research suggests that, while not yet practical for pixel-perfect work, this approach establishes a foundation for visual feedback loops in AI development. As multimodal AI capabilities improve, approaches like this could become increasingly valuable for SwiftUI development workflows.
Worth exploring if you’re interested in cutting-edge AI development techniques, but traditional design-to-code approaches remain more reliable for now.