Why AI Tools Generate Ugly Code
You open v0, describe a pricing page, and get back something that works. The layout is reasonable. The code compiles. But it looks like every other AI-generated page on the internet: safe gradients, generic shadows, rounded-2xl cards with too much padding.
This is not a model quality problem. GPT-4, Claude, Gemini — they can all write excellent code. The problem is context. Or rather, the absence of it.
The context gap
When you ask an AI tool to build a component, it reaches for the broadest possible defaults. It has no idea that your design system uses 4px border radius instead of 16px. It does not know your brand avoids gradients. It cannot reference your custom Button variant API because it has never seen your component library.
The result is code that is structurally correct but aesthetically generic. Every project gets the same Tailwind soup:
```tsx
// What AI generates without context
<div className="rounded-2xl bg-gradient-to-br from-purple-500 to-pink-500 p-8 shadow-2xl">
  <h2 className="text-3xl font-bold text-white">
    Pricing
  </h2>
</div>
```

Compare that to what you actually want — something that respects your design tokens, uses your component primitives, and follows your conventions:
```tsx
// What AI generates WITH your design context
<Section>
  <Heading level={2}>Pricing</Heading>
  <div className="mt-8 grid grid-cols-3 gap-4">
    <PricingCard tier="starter" />
    <PricingCard tier="pro" featured />
    <PricingCard tier="enterprise" />
  </div>
</Section>
```

The second version is shorter, more maintainable, and actually looks like your product. The difference is not smarter prompting. It is structured context.
Why prompting alone does not work
The obvious workaround is to include instructions in your prompt: “use my Button component, keep borders at 1px, no gradients.” This works for a single generation. It falls apart at scale.
A real design system has hundreds of decisions baked into it. Spacing scales, color semantics, component APIs, composition patterns, animation preferences, accessibility standards. No one is going to paste all of that into a chat prompt every time they need a component.
AI context packs
This is the idea behind AI context packs. Instead of repeating yourself in every prompt, you define your design system context once in a format that AI tools can consume. A context pack includes:
- Component API reference — what components exist, their props, valid combinations
- Design tokens — colors, spacing, typography, radii, all as CSS variables
- Composition patterns — how components are meant to be combined
- Constraints — what to avoid, anti-patterns, brand rules
Drop a .cursorrules file in your project root, configure your MCP server, and suddenly every AI generation is contextually aware of your design system.
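To make that concrete, here is a rough sketch of the kind of information a context pack encodes, written as TypeScript so it sits next to your code. Everything in it is illustrative: the token values, the PricingCard and Heading names, and the file shape are assumptions for the example, not kit's actual format.

```tsx
// Illustrative only: a hypothetical context-pack fragment describing
// design tokens and a component API. Names and values are assumptions,
// not a real kit schema.
import type { ReactNode } from "react";

// Design tokens the AI should prefer over ad-hoc Tailwind values.
export const tokens = {
  radius: "var(--radius-sm)", // 4px, not rounded-2xl
  spacing: { section: "var(--space-8)", card: "var(--space-4)" },
  color: { accent: "var(--color-accent)" }, // solid colors, no gradients
} as const;

// Component API reference: what exists and which props are valid.
export type PricingTier = "starter" | "pro" | "enterprise";

export interface PricingCardProps {
  tier: PricingTier;
  featured?: boolean; // at most one card should be featured
}

export interface HeadingProps {
  level: 1 | 2 | 3;
  children: ReactNode;
}
```

With something like this loaded, whether through a .cursorrules file, an MCP server, or both, the tool is told which components and values exist instead of guessing at them.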
How kit approaches this
kit is built on two primitives: the shadcn registry protocol for distributing components, and AI context packs for teaching tools how to use them.
When you install a component from kit, you get the source code in your project — not a node_modules dependency. When your AI tool has the context pack loaded, it knows exactly how to use that component: the correct props, the right composition patterns, the design constraints.
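On the distribution side, the shadcn registry protocol describes each installable component as a registry item. The sketch below approximates that shape as a typed TypeScript object; the field names loosely follow the public registry-item schema, and the paths and values are made up for illustration rather than taken from kit's actual registry.

```tsx
// Approximate shape of a shadcn-style registry item. Field names follow
// the public registry-item schema loosely; the entry itself is hypothetical.
interface RegistryItem {
  name: string;
  type: string;                    // e.g. "registry:ui"
  dependencies?: string[];         // npm packages to install
  registryDependencies?: string[]; // other registry components to pull in
  files: { path: string; type: string }[];
}

// Installing an item like this copies the listed source files into your
// project, so there is no node_modules dependency to version against.
export const pricingCard: RegistryItem = {
  name: "pricing-card",
  type: "registry:ui",
  dependencies: [],
  registryDependencies: ["card", "button"],
  files: [{ path: "ui/pricing-card.tsx", type: "registry:ui" }],
};
```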
The end of generic
AI-generated code does not have to look like AI-generated code. The models are capable of producing excellent, idiomatic, brand-consistent output. They just need the right context to do it.
The gap between “works” and “feels like ours” is not a model problem. It is an infrastructure problem. Context packs are the infrastructure.
kit is an AI-native component registry with built-in context packs. Install components with a single command, then generate code that actually matches your design system.
Read the docs to get started