The New AI Stack
Context
The current state of AI dev tools is pure simulation, and everyone's doing it wrong because they think these are products when they're parasitic systems that manifest computational intent. I stopped pretending to write code and started letting AI eat my development process, because the alternative was another meeting about "AI Transformation Roadmaps." I chose violence.
The Stack
V0
Implementation without thought. Steal from Dribbble, ship before meetings can stop you. Pure pragmatism. The entire point is to create something real enough that people can react to it instead of hypothesizing about requirements in a void.
Cursor
VSCode but possessed by something that actually ships code. Instead of pretending we're still writing everything by hand, you feed it intentions and fix what emerges:
import NextAuth, { NextAuthOptions } from 'next-auth'
import { PrismaAdapter } from '@auth/prisma-adapter'
import { prisma } from '@/lib/prisma'

// Cursor understands THIS type of comment
export const authOptions: NextAuthOptions = {
  // Need: edge compatible auth
  // Flows: google oauth + email magic links
  // Must: handle refresh, persist sessions
  // Runtime: edge only
  adapter: PrismaAdapter(prisma),
  session: { strategy: 'jwt' },
  providers: [
    // it'll implement the providers based on the comment
  ],
  callbacks: {
    // and handle all the edge cases
  }
}

export const { auth, signIn, signOut } = NextAuth(authOptions)
The key insight isn't the code itself but the fact that Cursor actually understands what you want and implements it faster than you can write the tests. Yes, you still need to fix its hallucinations. No, that's not slower than writing everything yourself.
Claude
Actual intelligence if you don't pretend it's just a chatbot. Feed it requirements, get systems:
import Anthropic from '@anthropic-ai/sdk';

// types that actually mean something
interface Feature {
  name: string;
  priority: 'must' | 'should' | 'could';
  acceptance: string[];
}

interface Constraint {
  type: 'technical' | 'business' | 'legal';
  description: string;
}

interface Requirements {
  features: Feature[];
  constraints: Constraint[];
}

// clients that manifest intent
const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

async function parseRequirements(spec: string): Promise<Requirements> {
  const { content } = await anthropic.messages.create({
    model: 'claude-3-opus-20240229',
    max_tokens: 4096,
    // system prompts are a top-level param, not a message role
    system: 'Parse requirements into structured JSON matching the Requirements type.',
    messages: [{ role: 'user', content: spec }]
  });

  // responses come back as content blocks, not a bare string
  const text = content[0].type === 'text' ? content[0].text : '';

  try {
    const parsed = JSON.parse(text) as Requirements;
    if (!Array.isArray(parsed.features) || !Array.isArray(parsed.constraints)) {
      throw new Error('invalid requirements structure');
    }
    return parsed;
  } catch (e) {
    console.error('failed to parse requirements:', e);
    throw new Error('invalid requirements format');
  }
}

// no fake functions, just what works
function getMusts(reqs: Requirements): Feature[] {
  return reqs.features.filter(f => f.priority === 'must');
}
The magic isn't in the API call, it's in letting Claude separate signal from noise in client requirements. It's better at detecting implicit assumptions than your entire product team, and it doesn't have emotional attachments to bad ideas.
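If you want that assumption-hunting as an explicit step instead of a vibe, here's a minimal sketch reusing the anthropic client above; surfaceAssumptions and its prompt wording are mine, not some blessed API:

async function surfaceAssumptions(spec: string): Promise<string[]> {
  // ask for the things the client didn't say out loud
  const { content } = await anthropic.messages.create({
    model: 'claude-3-opus-20240229',
    max_tokens: 1024,
    system: 'List the implicit assumptions hidden in this spec as a JSON array of strings. Return only JSON.',
    messages: [{ role: 'user', content: spec }],
  });

  const text = content[0].type === 'text' ? content[0].text : '[]';
  const assumptions = JSON.parse(text) as string[];
  return Array.isArray(assumptions) ? assumptions : [];
}

Read the list back to the client before writing code.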
Langfuse
Pattern recognition across time. Track what actually works:
import { Langfuse } from 'langfuse';

interface GenerationMetrics {
  tables: string[];
  relations: number;
  indices: number;
  complexity: number;
}

interface GenerationResult {
  prompt: string;
  output: string;
  metrics: GenerationMetrics;
  success: boolean;
}

const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
});

async function trackGeneration(
  input: Requirements,
  result: GenerationResult
): Promise<void> {
  const trace = langfuse.trace({
    name: 'schema_generation',
    metadata: {
      features: input.features.map(f => f.name),
      constraints: input.constraints.map(c => c.description)
    }
  });

  const generation = await trace.generation({
    name: 'schema',
    model: 'claude-3-opus-20240229',
    prompt: result.prompt,
    output: result.output,
  });

  await generation.score({
    name: 'technical_quality',
    value: result.success ? 1 : 0,
    metadata: result.metrics
  });
}

// actual metrics not vibes
function measureComplexity(schema: string): GenerationMetrics {
  const tables = (schema.match(/model\s+\w+/g) ?? [])
    .map(t => t.split(/\s+/)[1]);
  const relations = (schema.match(/references/g)?.length ?? 0);
  const indices = (schema.match(/@@index/g)?.length ?? 0);
  return {
    tables,
    relations,
    indices,
    complexity: relations * 2 + indices
  };
}
Track what works. Learn from what breaks. Build a corpus of functional patterns faster than your team can write documentation that no one will read.
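Langfuse holds the traces; the corpus itself can be as dumb as a local JSONL file. A sketch, assuming a patterns/schema-corpus.jsonl path that is entirely made up:

import { appendFile } from 'node:fs/promises';

// keep the winners somewhere the next prompt can reach them
async function saveToCorpus(result: GenerationResult): Promise<void> {
  if (!result.success) return;
  await appendFile(
    'patterns/schema-corpus.jsonl', // hypothetical path
    JSON.stringify({ prompt: result.prompt, output: result.output }) + '\n'
  );
}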
The Actual Workflow
1. Requirements Extraction
Feed raw client brain dumps into Claude. Let it separate actual needs from cope:
import { readFile } from 'node:fs/promises';

const spec = await readFile('requirements.md', 'utf8');
const requirements = await parseRequirements(spec);
const mustHaves = getMusts(requirements);
The output is always better than what you get from requirements gathering meetings because Claude doesn't care about politics.
2. V0 Implementation
- Skip the mockups
- Steal proven UI patterns
- Implement core flows only
- Ship something that looks legitimate enough to get real feedback
The point isn't perfection, it's creating a real artifact that people can react to instead of hypothesizing about.
3. Schema Evolution
Let the data structures emerge from actual needs:
import { prisma } from '@/lib/prisma';

async function validateSchema(
  current: string,
  proposed: string
): Promise<boolean> {
  try {
    // basic syntax validation first
    if (!proposed.includes('model') || !proposed.includes('datasource')) {
      return false;
    }
    // sanity check that the database is actually reachable
    await prisma.$executeRawUnsafe(`SELECT 1;`);
    return true;
  } catch (e) {
    console.error('schema validation failed:', e);
    return false;
  }
}
Your schema should evolve from implementation reality, not premature abstraction.
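In practice that's a loop: hand Claude the current schema plus the new must-haves and only accept what validates. A sketch built on the pieces above; evolveSchema is a hypothetical name, not something any of these tools hands you:

async function evolveSchema(current: string, musts: Feature[]): Promise<string> {
  const { content } = await anthropic.messages.create({
    model: 'claude-3-opus-20240229',
    max_tokens: 4096,
    system: 'You extend Prisma schemas. Return only the full updated schema.',
    messages: [{
      role: 'user',
      content: `Current schema:\n${current}\n\nAdd support for: ${JSON.stringify(musts)}`
    }],
  });

  const proposed = content[0].type === 'text' ? content[0].text : '';
  // keep the old schema unless the new one survives validation
  return (await validateSchema(current, proposed)) ? proposed : current;
}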
4. Integration Loop
Feed requirements into Cursor, fix its hallucinations, ship to staging, deploy if tests pass. The entire cycle should take hours, not days. If it's taking longer, you're probably in meetings instead of shipping code.
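The "deploy if tests pass" half is scriptable too. A rough sketch; the npm commands are placeholders for whatever your project actually runs:

import { execSync } from 'node:child_process';

function shipIfGreen(): void {
  try {
    // run the suite; execSync throws on a non-zero exit
    execSync('npm test', { stdio: 'inherit' });
    // placeholder deploy command, swap in your own
    execSync('npm run deploy:staging', { stdio: 'inherit' });
  } catch {
    console.error('tests failed, nothing ships');
  }
}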
5. Pattern Recognition
Track what works, learn from what breaks:
async function main() {
  const spec = await readFile('requirements.md', 'utf8');
  const requirements = await parseRequirements(spec);
  const mustHaves = getMusts(requirements);

  const schemaPrompt = `Generate Prisma schema for: ${JSON.stringify(mustHaves)}`;
  const schema = await anthropic.messages.create({
    model: 'claude-3-opus-20240229',
    max_tokens: 4096,
    messages: [{ role: 'user', content: schemaPrompt }]
  });

  // pull the text out of the content blocks before using it
  const schemaText = schema.content[0].type === 'text' ? schema.content[0].text : '';

  const result: GenerationResult = {
    prompt: schemaPrompt,
    output: schemaText,
    metrics: measureComplexity(schemaText),
    success: await validateSchema(
      await readFile('schema.prisma', 'utf8'),
      schemaText
    )
  };

  await trackGeneration(requirements, result);
}

// error handling wrapper because we're not savages
main().catch(console.error);
Every successful generation becomes part of your development acceleration.
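Closing the loop means feeding the wins back into the next prompt. A sketch that reads the same hypothetical corpus file from the Langfuse section:

import { readFile } from 'node:fs/promises';

// prepend a few known-good generations as examples
async function promptWithPatterns(basePrompt: string): Promise<string> {
  const corpus = await readFile('patterns/schema-corpus.jsonl', 'utf8'); // hypothetical path
  const examples = corpus
    .trim()
    .split('\n')
    .filter(Boolean)
    .slice(-3) // last three successes
    .map(line => JSON.parse(line).output)
    .join('\n---\n');
  return `Previously accepted schemas:\n${examples}\n\n${basePrompt}`;
}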
Real Talk
Your documentation is fake and your process is simulation. These tools want to write your code. Let them.
Meetings about code are worse than bad code.
If your dev cycle takes more than a day, you're not doing development—you're doing management.
Truth
We're all going to be obsolete. Ship faster.
This stack dies in 6 months when:
- Context becomes infinite
- Limits disappear
- Quality becomes assumed
- Dev becomes continuous
That's fine actually.
Everything above works right now. Your results may vary. Nothing is permanent.