{
	"version": "https://jsonfeed.org/version/1",
	"title": "Neil Roberts",
	"icon": "https://cdn.micro.blog/pottedmeat/avatar.jpg",
	"home_page_url": "https://pottedmeat.blog/",
	"feed_url": "https://pottedmeat.blog/feed.json",
	"items": [
			{
				"id": "http://pottedmeat.micro.blog/2026/04/07/your-prompts-are-a-portrait/",
				"title": "Your Prompts Are a Portrait of You",
				"content_html": "<p>I&rsquo;m not going to dismiss the experience of someone who has made an attempt at using AI and written off the output as lowest-common-denominator generic slop. I <em>am</em> going to assume that a brain worm (all too common these days) was whispering into their ear some reductive argument that helped confirm that AI had no place in their life.</p>\n<p>I see two groups making this claim. The first hasn&rsquo;t seriously used AI to create something. They&rsquo;ve absorbed the criticism from the outside, from backlash essays and from the meme that every chatbot sounds like a press release, and concluded the output is boring by design. This group hasn&rsquo;t done the fieldwork. They&rsquo;re borrowing a conclusion.</p>\n<p>Let&rsquo;s take the second group more seriously. These are people who have really wanted to use AI. They opened a chat interface, asked for a draft, felt like what came back missed the mark, fought with it, eventually creating something they were happy with only to have to repeat the same process over the next time. Without building a framework around a process, without knowing how to build a framework around a process, it&rsquo;s a slog to keep fighting against the tendency of AI to create something you never would have.</p>\n<p>The strongest version of the argument goes like this: AI trains on a bunch of human output. The optimization objective, at some level, is to do a bunch of math to find the center of what a broad sample of evaluators preferred. Centers feel generic. The closer you get to the mean of human expression, the less the output sounds like any specific person.</p>\n<p>That logic holds together if that&rsquo;s what was really going on.</p>\n<h2 id=\"every-tool-has-someones-taste\">Every tool has someone&rsquo;s taste</h2>\n<p>You&rsquo;re almost never hitting the raw average. You&rsquo;ve left the center before you typed your first word.</p>\n<p>Every call to an AI has a system prompt. That prompt was written or generated by someone with opinions about how they want their chat to interact with you and what a good response should look like. Use a writing tool and you can be assured its author has embedded their view of what writing should sound like, about pacing, about tone, about structure, all before you&rsquo;ve typed a single character. The team behind a music generator has made choices about what the default output should sound like. Image tools carry style priors baked into the weights and the sampling parameters. None of these tools drop you into a perfect tidy little blank canvas.</p>\n<p>None of these defaults are yours. But none of them are the mean of human output either. They&rsquo;re someone else&rsquo;s taste. That&rsquo;s a different problem than the one this criticism is identifying. You&rsquo;re not getting the average. You&rsquo;re getting an opinion you didn&rsquo;t sign up for.</p>\n<p>The distinction matters because it changes what you do about it. If you&rsquo;re assuming AI genuinely defaults to the statistical mean, why not let it? Won&rsquo;t that resonate with the most people? But if it&rsquo;s defaulting to someone else&rsquo;s preferences and is, instead, probably acting on behalf of someone you disagree with or who just has plain bad taste, you&rsquo;re going to want to replace those preferences with your own.</p>\n<h2 id=\"the-em-dash-wasnt-an-accident\">The em dash wasn&rsquo;t an accident</h2>\n<p>Even the base model training isn&rsquo;t a clean average. 
The final step in training, where human reviewers rank responses, reflects the preferences of the specific people who happened to be picking their favorite responses.</p>\n<p>This is how em dashes got baked into some models: the people scoring outputs preferred sentences that used them. Not because em dashes represent some cross-cultural median of human writing. Because a specific group of people with a specific aesthetic were the primary signal.</p>\n<p>The unmodified chat model is downstream of dozens of aesthetic decisions you were never told about. The teams who labeled training data had opinions. The reviewers who ranked responses did too. Everyone in that pipeline had opinions. By the time you open a chat window, you&rsquo;re already interacting with the accumulated preferences of a group of people who aren&rsquo;t you. A group of people who doesn&rsquo;t represent an encompassing wisdom-of-the-crowd perspective, but just a subset of people who happened to be involved in how this model was developed.</p>\n<p>So when the output sounds like someone who isn&rsquo;t you: that&rsquo;s accurate. It sounds like whoever built the thing, and whoever trained the thing, and whoever ranked the responses. The problem isn&rsquo;t that it&rsquo;s generic. The problem is that it has the wrong opinions.</p>\n<p>And we can&rsquo;t allow anyone to have a wrong opinion.</p>\n<p>When you install a skill or paste in a tailored prompt that someone else built for a specific use case, it doesn&rsquo;t make this friction go away. In fact, it makes the problem worse. The tool&rsquo;s opinions start running into your opinions, and the collisions tell you something. Not about what the tool does wrong, but about what you actually want.</p>\n<p>If nobody pushes back, if the incentive to ship quickly wins and the output passes through unchanged, that&rsquo;s when slop gets created. But slop isn&rsquo;t an inherent property of AI. It&rsquo;s what happens when a lot of people are pushing out a lot of content, all with the baseline taste that created these models. ChatGPT slop is different than Claude slop but there is just so much of it that it all blends together.</p>\n<h2 id=\"you-change-the-output-the-output-changes-you\">You change the output. The output changes you.</h2>\n<p>Once you start trying to steer the ship, the gap between what you asked for and what came back becomes information. You ask for blunt and get wishy-washy. You ask for something that sounds like you wrote it and get something that even your furthest acquaintance would immediately recognize was written by someone else.</p>\n<p>A lot of people can live in this tension. Some shrug off the wrong tone or the shapeless structure and keep going. Some decide this means AI isn&rsquo;t useful for what they&rsquo;re trying to do.</p>\n<p>But the people who can&rsquo;t live with the annoyance start iterating and start collecting receipts. They keep a text file of prompts that worked. When something comes back wrong, they modify the prompt and try again. When they can&rsquo;t figure out what went wrong, they ask the AI why it didn&rsquo;t follow its instructions and eventually how to fix their instructions. There&rsquo;s nothing sophisticated required here. No special tools, no configuration files, no technical background. Just a loop of push, observe, adjust.</p>\n<p>As the prompts accumulate, as the corrections stack, something happens. 
At first you&rsquo;re patching individual failures, fixing the thing that was wrong last time so you don&rsquo;t have to fix it again. But over enough rounds, the patches start adding up to something. A vocabulary you keep reaching for. A structural preference that shows up in everything. An intolerance for hedging where you&rsquo;d just say the thing directly.</p>\n<p>Your taste gets embedded. Not all at once. Not necessarily done with intention. But it happens.</p>\n<h2 id=\"learning-the-language-is-a-real-barrier\">Learning the language is a real barrier</h2>\n<p>I realize this assumes something that sounds simple but isn&rsquo;t: knowing you can interact with the AI like this.</p>\n<p>Chat interfaces are the most visible thing about AI. You type, it responds, you type again. For a lot of people, that&rsquo;s the entire experience. The concept of a reusable prompt, of a skill that encodes a specific workflow, of an agent that can run a multi-step process with your preferences baked in: none of this is intuitive at first.</p>\n<p>Skills are well suited to the kind of thing I&rsquo;m describing. A skill is a structured set of instructions that does a specific job the same way every time, shaped by the preferences of whoever wrote it. But you need to know skills exist. You need to understand what they do and how they differ from chatting. That gap between &ldquo;I can ask it a question&rdquo; and &ldquo;I can shape how it works for me&rdquo; is real and it&rsquo;s where most people stall.</p>\n<p>My oft-repeated procedure is this: &ldquo;What about the instructions made you [do this thing I don&rsquo;t like]?&rdquo; &ldquo;How can the instructions be adjusted to make you more likely to [get this thing right in this way]?&rdquo;</p>\n<p>Some people hit the friction and stop. Some live with the annoyance indefinitely. But the annoyance is information. The people who tune into that discord end up building something that works for them. Something that sounds like them.</p>\n<h2 id=\"your-prompts-are-a-portrait-of-you\">Your prompts are a portrait of you</h2>\n<p>Getting the AI to stop doing the thing you don&rsquo;t like requires describing what you don&rsquo;t like. Sometimes you might just say &ldquo;it was too long&rdquo; or &ldquo;it was too formal&rdquo; and it either resolves or you have to keep tweaking until you&rsquo;ve built up a set of proclamations that all work together in just the right way. Sometimes the gap reveals a specific desire you didn&rsquo;t know you had and you say &ldquo;it gave me four structured sections when I wanted the argument to build continuously, without headers breaking it up.&rdquo; Or &ldquo;it hedged every claim where I&rsquo;d just state it.&rdquo;</p>\n<p>Either way, this feedback melds into something more exact than anything you&rsquo;d need to say to a collaborator who already understood you. They force a precision you didn&rsquo;t know you had.</p>\n<p>Here&rsquo;s the part that surprised me. As you refine prompts over time, you start preserving things about your thinking that you never thought to write down.</p>\n<p>You&rsquo;ve probably had opinions about how you write, how you move through a problem, what &ldquo;finished&rdquo; looks like to you. But opinions like these don&rsquo;t need to be made explicit when you&rsquo;re the only one working. When you&rsquo;re steering AI, explicit is the only option that works. 
The specificity required to get good output is also, accidentally, the specificity of self-documentation.</p>\n<p>You&rsquo;re formalizing aspects of your own creative process that were previously invisible. How much tolerance you have for hedged language. Whether you reach for the example before or after you&rsquo;ve stated the principle. What you&rsquo;re trying to sound like when you&rsquo;re trying to sound like yourself. Your process of creation. What annoys you. What you keep trying to protect in the output.</p>\n<p>Read your prompts back in six months. They&rsquo;ll tell you things about yourself you hadn&rsquo;t put into words. Not because you sat down to write a self-portrait. Because every time you pushed back on an AI tool, you left a mark, and the marks accumulated into a shape.</p>\n<p>Leave your mark.</p>\n",
				"date_published": "2026-04-07T19:00:00-05:00",
				"url": "https://pottedmeat.blog/2026/04/07/your-prompts-are-a-portrait/",
				"tags": ["ai","writing","tools","longform"]
			}
	]
}
