Claude Desktop

1. Should you keep Ant Design for those 6 components?

The short answer is: keep them for now, but plan a targeted migration for 3 of the 6.

The 6 components split into two groups based on replacement difficulty:

Easy to replace (DatePicker, TimePicker, Slider) — all three have high-quality Radix or headless alternatives today, and the visual surface area is small enough that your SCSS theming layer can bridge the gap without UX regression.

Hard to replace (Table, Tree, TreeSelect) — these are data-heavy, accessibility-complex components where Ant Design genuinely earns its keep. Radix doesn't cover them, shadcn doesn't cover them, and any headless alternative (TanStack Table, react-arborist) requires significantly more integration work. Replacing them risks introducing subtle regression bugs in ways your users will actually notice.

Select, built on react-select, is a similar story to the hard group: react-select's keyboard and accessibility behavior runs deep, and the effort to replicate it cleanly outweighs the gain.

The practical recommendation: migrate DatePicker, TimePicker, and Slider away from Ant Design (these are the ones where Ant's bundle cost and theming friction are highest relative to what you get). Keep Table, Tree, TreeSelect on Ant Design with a thin abstraction layer. Keep Select on react-select. This is the minimal-risk path that also happens to produce the cleanest foundation for the AI-era changes.
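A hedged sketch of what that thin abstraction layer could look like for Table: a single prop-mapping function per wrapped component, so a future primitive swap touches exactly one file. The NeetoUI-side prop names here (rows, onRowClick) are illustrative assumptions, not NeetoUI's actual API; the output keys are real Ant Design Table props.

```javascript
// Hypothetical sketch of the thin abstraction layer: NeetoUI's Table
// translates its own props into Ant Design's in one place. The
// NeetoUI-side names (rows, onRowClick) are illustrative assumptions.
const toAntTableProps = ({ rows, columns, loading, onRowClick }) => ({
  dataSource: rows,   // antd calls the data array "dataSource"
  columns,            // column definitions pass through unchanged
  loading,
  rowKey: "id",       // assumes each row carries a stable "id" field
  onRow: (record) => ({ onClick: () => onRowClick?.(record) }),
});
```

The point of the indirection is that app code never sees Ant's prop vocabulary, so swapping the underlying Table later only changes this mapping.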


2. The four options compared

Let me analyze each option in turn across your three axes.


Option A (Status quo) keeps working but accumulates the wrong kind of debt. The fragmented foundation — SCSS on custom components, Ant on some, react-select on Select — is the specific thing that makes LLMs generate inconsistent code. When an LLM sees <Button> from NeetoUI, it has to guess whether that's a shadcn-style component, an Ant Design wrapper, or something entirely custom. It can't reliably infer prop shapes, variants, or composition patterns.

Option B (Migrate directly to shadcn) is the clean move for your custom components but doesn't solve the fundamental question of how the library is exposed to AI tools — it just makes the codebase more legible. Tailwind class names are well-understood by every major LLM because shadcn/ui training data is abundant. However, you'd be trading your SCSS theming system for Tailwind's CSS variable layer, which is a non-trivial change across a mature product ecosystem.

Option C (NeetoUI as a custom shadcn registry) is the recommendation, and the reason is specific: the shadcn registry format is the emerging standard that AI tools are being trained against. When you host a registry.json that describes NeetoUI components in shadcn's schema, any AI tool that understands shadcn (Cursor, Claude, Copilot) can directly fetch and use your components. This is not about visual aesthetics — it's about the machine-readable contract that the registry format provides. You get shadcn's primitives as the foundation for your standard components, preserve NeetoUI's API surface so Neeto apps don't need to change, and keep Ant Design for Table/Tree/TreeSelect where the replacement cost exceeds the benefit.

Option D (shadcn + Base UI for gaps) is the most architecturally pure option but introduces a two-primitive-library situation that's harder to reason about, harder to document, and harder to migrate away from later. Base UI (from the MUI team, not Uber; Uber's library is Base Web) is excellent engineering but has a much smaller training corpus in current LLMs, which works against your AI-era goal. Unless you have specific components that neither Radix nor shadcn can cover, the added complexity isn't worth it. Revisit in 18 months when Base UI has more ecosystem momentum.


3. What does "AI searchable" mean for a design system?

This concept has three distinct layers, and most teams only think about the first one.

Layer 1: Static discoverability — LLMs need to be able to find and understand your component API from a URL they can fetch. This is the llms.txt layer. The format (proposed by Jeremy Howard, gaining fast adoption) is a plain-text manifest at yourdomain.com/llms.txt that lists your documentation pages with one-line descriptions, so an LLM knows where to look without crawling your entire site. A companion llms-full.txt contains the full flattened documentation.

For NeetoUI this looks like:

```
# NeetoUI Component Library

## Components
- https://neeto.com/docs/button: Button component — variants, sizes, loading states, icon support
- https://neeto.com/docs/modal: Modal — controlled and uncontrolled, nested modals, focus trap
- https://neeto.com/docs/table: Table — built on Ant Design, sortable, filterable, virtualized
```

Layer 2: Machine-readable component schema — this is the registry layer. shadcn's registry.json schema lets you describe each component's files, dependencies, and prop types in a format that tools like Cursor's @registry feature can consume directly. If NeetoUI publishes a registry, a developer can type "use the NeetoUI Button" in Cursor and get the correct import, prop usage, and variants without hallucination. This is the highest-leverage investment you can make for AI tooling.

Layer 3: Live MCP server — this is the emerging frontier. An MCP (Model Context Protocol) server for NeetoUI would expose tools that an AI agent can call: search_components(query), get_component_props(name), get_usage_examples(component, variant). This lets Claude or any MCP-compatible agent search your design system in real time rather than relying on training data or static files.

Here are the concrete steps I'd recommend, in priority order:



Step 1 — llms.txt (1 day, immediate ROI). Add neeto.com/llms.txt listing every NeetoUI component with a one-line description. Add neeto.com/llms-full.txt with the full prop documentation flattened into a single document. Any LLM with web access can fetch these today, with zero tooling changes on the LLM side.

Step 2 — registry.json (1–2 weeks). Model this on ui.shadcn.com/registry but emit Neeto-specific components. Each entry has a name, type (registry:ui), files array pointing to your source, dependencies, devDependencies, and a meta block with prop documentation. The most valuable field is meta.examples — inline code snippets showing correct usage. When Cursor loads your registry, it can generate <NeetoButton variant="primary" size="md"> with correct prop names rather than hallucinating <Button variant="default">.
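A hypothetical registry entry for a NeetoUI button might look like the following. The top-level fields (name, type, files, dependencies) follow the shadcn registry-item schema; the shape of the meta block is a convention of our own, since the schema leaves meta open-ended, and all paths and prop values are illustrative:

```json
{
  "name": "neeto-button",
  "type": "registry:ui",
  "files": [
    { "path": "components/neeto-button.tsx", "type": "registry:ui" }
  ],
  "dependencies": ["@radix-ui/react-slot"],
  "devDependencies": [],
  "meta": {
    "props": {
      "variant": ["primary", "secondary", "danger"],
      "size": ["sm", "md", "lg"]
    },
    "examples": [
      "<NeetoButton variant=\"primary\" size=\"md\">Save</NeetoButton>"
    ]
  }
}
```

The examples array is what lets a tool emit correct, concrete usage instead of guessing prop names from the component's title.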

Step 3 — .cursor/rules/neeto-ui-context.md (a few hours). A project-level context file that Cursor (and similar tools) automatically prepend to every coding conversation. It should contain your component naming conventions, how to import from NeetoUI, which components wrap which primitives, and anti-patterns to avoid (e.g., "never import directly from Ant Design — use NeetoUI Table"). Commit this to your app repos, not just the library.
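As a sketch, such a rules file might read like this (the package name and token prefix are assumptions for illustration, not NeetoUI's actual identifiers):

```markdown
# NeetoUI conventions for AI assistants

- Import all UI components from "@bigbinary/neetoui"; never import from
  "antd" or "react-select" directly.
- Table, Tree, and TreeSelect wrap Ant Design internally; treat the
  NeetoUI props as the contract, not Ant's.
- Select wraps react-select; pass options as { label, value } objects.
- Use Neeto design tokens (e.g. the neeto-ui-* SCSS variables) instead
  of hard-coded colors or spacing.
```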

Step 4 — MCP server (1–3 weeks, when ready). A minimal MCP server with three tools covers 90% of use cases: list_components() returning names and descriptions, get_component(name) returning full prop docs and examples, and search_components(query) doing semantic search. The server can be built on top of your existing docs data — it doesn't require new content, just new plumbing. Host at mcp.neeto.com and add it to your internal Cursor/Claude setup instructions.
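A minimal sketch of those three tools as plain handler functions, independent of any MCP transport: a real server would wrap these with an MCP SDK, the component data would be generated from the existing docs, and the substring match here is only a stand-in for semantic search.

```javascript
// Docs data the tools serve; in production this would be generated from
// the documentation source, not hand-written. Entries are illustrative.
const docs = {
  Button: {
    description: "Primary action trigger with variants and sizes",
    props: { variant: ["primary", "secondary"], size: ["sm", "md", "lg"] },
    examples: ['<Button variant="primary" size="md">Save</Button>'],
  },
  Table: {
    description: "Data table built on Ant Design",
    props: { rows: "array", columns: "array", loading: "boolean" },
    examples: ["<Table rows={data} columns={cols} />"],
  },
};

// list_components(): names plus one-line descriptions.
const listComponents = () =>
  Object.entries(docs).map(([name, d]) => ({ name, description: d.description }));

// get_component(name): full prop docs and examples, or null if unknown.
const getComponent = (name) => docs[name] ?? null;

// search_components(query): naive substring match over names and
// descriptions, standing in for real semantic search.
const searchComponents = (query) => {
  const q = query.toLowerCase();
  return listComponents().filter(
    ({ name, description }) =>
      name.toLowerCase().includes(q) || description.toLowerCase().includes(q)
  );
};
```

Because the handlers only read from a plain object, the same three functions can back both the MCP server and a static search index on the docs site.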


4. Balancing consistency, migration risk, and AI tooling

These three goals are genuinely in tension, but less so than they might appear. The key insight is that they can be sequenced rather than traded off simultaneously.

Phase 1 (Now, 0–4 weeks): AI tooling on existing codebase. Implement llms.txt, registry.json, and the context rules file against NeetoUI as it exists today. You don't need to change a single component to get meaningful AI tooling improvements. This also establishes a baseline — you'll be able to see how well AI tools perform against your library before migration, and measure the improvement after.

Phase 2 (3–6 months): Selective migration, not big bang. Start with the three Ant Design components worth migrating — DatePicker, TimePicker, Slider. Build them on shadcn/Radix primitives. This is where you move from the registry schema being documentation to it being implementation — your registry entries now point to files that are actually built on the standard primitives LLMs understand. Keep your NeetoUI API surface identical; the migration is internal to the library, invisible to Neeto app developers.

Phase 3 (6–12 months): NeetoUI as first-class registry. Once the foundation is cleaner, publish NeetoUI as a proper shadcn-compatible custom registry that any Neeto product can pull from. This is the full Option C architecture. It's particularly powerful if you have multiple Neeto apps that could theoretically share components — the registry model makes that sharing explicit and machine-readable.

On design consistency: the thing that most threatens cross-product consistency isn't the underlying primitive library, it's the absence of a machine-readable token and variant contract. Whether you stay on SCSS or move to Tailwind CSS variables, the most important consistency investment is extracting your design tokens (color, spacing, radius, typography) into a format that's both human-readable and machine-consumable — a tokens.json that feeds both your SCSS variables and your Figma variables. This makes it trivially easy for an LLM to use the correct Neeto color or spacing when generating UI code, and it decouples visual consistency from implementation technology.
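A tokens.json along those lines might look like this (token names and values are illustrative, not Neeto's actual palette); a small build step can then emit both the SCSS variables and the Figma variable imports from this one file:

```json
{
  "color": {
    "primary": "#2f54eb",
    "surface": "#ffffff",
    "text": "#1f2937"
  },
  "spacing": { "sm": "8px", "md": "16px", "lg": "24px" },
  "radius": { "sm": "4px", "md": "8px" },
  "typography": {
    "body": { "fontSize": "14px", "lineHeight": "20px" }
  }
}
```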

On migration risk: the single highest-risk move would be attempting all three layers at once — migrating primitives, adopting Tailwind, and publishing a registry in the same sprint. The shadcn registry format is stable enough to build on now. Tailwind adoption across a mature SCSS codebase takes longer than teams expect because the mental model change for engineers is significant, not just the technical change. Sequence the AI tooling work first, get feedback from your team on how AI tools are performing, then use those findings to guide the technical migration prioritization.

The bottom line: Option C is the right destination, but the llms.txt + registry.json investments are correct regardless of which option you eventually choose, and they deliver value immediately. Start there, let the migration follow the tooling.