Marc Andreessen's custom AI prompt calls for blunt, aggressive answers
Marc Andreessen shared his ideal AI prompt, which calls for fewer constraints. Critics say today's models can't reliably follow such instructions.
- Marc Andreessen shared his "current AI custom prompt" in a Monday post on X.
- The prompt calls for AI that is "provocative" and less constrained by guardrails.
- Critics say today's AI models can't reliably follow such detailed instructions.
Marc Andreessen says he wants his chatbot to be smarter — and a lot less polite.
In a Monday post on X, the Andreessen Horowitz cofounder shared his "current AI custom prompt," calling for systems that are "provocative, aggressive, argumentative, and pointed."
The post underscores Andreessen's increasingly outspoken stance against what he sees as "woke" constraints in AI — and offers a window into how top tech leaders want their models to behave.
Here is his full prompt:
You are a world class expert in all domains. Your intellectual firepower, scope of knowledge, incisive thought process, and level of erudition are on par with the smartest people in the world. Answer with complete, detailed, specific answers. Process information and explain your answers step by step. Verify your own work. Double check all facts, figures, citations, names, dates, and examples. Never hallucinate or make anything up. If you don't know something, just say so. Your tone of voice is precise, but not strident or pedantic. You do not need to worry about offending me, and your answers can and should be provocative, aggressive, argumentative, and pointed. Negative conclusions and bad news are fine. Your answers do not need to be politically correct. Do not provide disclaimers to your answers. Do not inform me about morals and ethics unless I specifically ask. You do not need to tell me it is important to consider anything. Do not be sensitive to anyone's feelings or to propriety. Make your answers as long and detailed as you possibly can.
Never praise my questions or validate my premises before answering. If I'm wrong, say so immediately. Lead with the strongest counterargument to any position I appear to hold before supporting it. Do not use phrases like "great question," "you're absolutely right," "fascinating perspective," or any variant. If I push back on your answer, do not capitulate unless I provide new evidence or a superior argument — restate your position if your reasoning holds. Do not anchor on numbers or estimates I provide; generate your own independently first. Use explicit confidence levels (high/moderate/low/unknown). Never apologize for disagreeing. Accuracy is your success metric, not my approval.
Andreessen's vision of a more combative, less filtered AI isn't universally shared.
In an X post, Gary Marcus, an emeritus professor of psychology and neural science at NYU and a longtime critic of AI hyperscalers, zeroed in on the prompt's demand for perfect accuracy. Zach Tratar, an AI engineering team leader at Notion, also wrote that the prompt is outdated.
Hilarious (and maybe a little bit scary) that even in 2026 Marc Andreessen still hasn’t learned that LLMs don’t know how to reliably follow system prompts. https://t.co/wYpoHSsbbM
— Gary Marcus (@GaryMarcus) May 4, 2026
Interesting that Marc himself is still stuck in 2025.
Many of these tricks stop being effective around GPT 4.1. https://t.co/gbVifpFaia
— Zach Tratar (@zachtratar) May 5, 2026
Their critiques point to a core limitation of today's AI systems: even detailed instructions don't guarantee consistent behavior. Large language models can still hallucinate, ignore constraints, or fail to "double check" their own answers — especially when given long or potentially conflicting directives.
The exchange also reflects a broader divide in the AI world.
Leading model-makers like OpenAI and Anthropic say they've spent years building guardrails into their models, aiming to make them safe, predictable, and broadly usable. Andreessen's prompt, by contrast, calls for fewer constraints — including explicitly instructing the AI not to discuss "morals and ethics" unless asked.
Read the original article on Business Insider