If you’ve been reading content online for the past two years, you’ve probably encountered some version of this disclaimer: “This content was created with AI assistance.” Sometimes it’s at the top of an article. Sometimes it’s buried in the footer. Sometimes it doesn’t exist at all, even when you’re pretty sure you’re reading something a machine helped write.

The disclosure landscape is a mess right now. Academic publishers have strict policies. Amazon KDP has specific rules. News outlets are developing their own standards. And individual bloggers and content creators are left trying to figure out what, if anything, they should be telling readers about their workflow.

I use AI tools extensively in my writing process. Not to replace thinking, but to structure it, draft it, and refine it. And I’ve spent considerable time figuring out what honest disclosure actually looks like for someone who isn’t a news organization, academic journal, or major publisher. This is what I’ve learned.

[Illustration: a split-screen image of a writer's scattered ideas on one side flowing into organized paragraphs on a screen on the other, representing the collaborative process of AI-assisted writing]

The Spectrum Nobody Talks About

The problem with most AI disclosure conversations is that they treat “AI-assisted” as a binary. Either you used AI or you didn’t. Either you need to disclose or you don’t. That’s not how writing actually works anymore.

Consider these scenarios:

Scenario One: You use Grammarly to fix typos and suggest better word choices. The tool flags passive voice, recommends more concise phrasing, and catches subject-verb agreement errors. You accept about half the suggestions. Is that AI-assisted writing?

Scenario Two: You dictate an article into your phone while driving. The transcription software uses AI to convert your speech to text, add punctuation, and format paragraphs. You edit the transcript later to fix errors and tighten the prose. Is that AI-assisted writing?

Scenario Three: You have a conversation with Claude where you explain your thinking on a topic, answer clarifying questions, and refine your position. The AI drafts content based on that conversation. You edit extensively, rewrite sections that don’t match your voice, and add examples the AI didn’t include. Is that AI-assisted writing?

Scenario Four: You feed a prompt into ChatGPT asking for a blog post about a topic. The AI generates 1,000 words. You publish it with minor edits. Is that AI-assisted writing?

Most people would say Scenario One doesn’t require disclosure. Most would say Scenario Four absolutely does. But Scenarios Two and Three exist in a gray area that current disclosure frameworks don’t handle well.

The distinction that actually matters isn’t whether AI was involved. It’s where the thinking happened.

The Two Questions That Actually Matter

Instead of asking “Did I use AI?”, ask these two questions:

First: Did I do the intellectual work? This means developing the core argument, choosing what matters, deciding what to emphasize, and determining what conclusions to draw. If you outsourced that thinking to an AI, you’re generating content, not writing.

Second: Would readers care about how this was created? For technical documentation, probably not. For personal essays about your experience, probably yes. For analysis pieces where your judgment is the value, definitely yes.

If you did the intellectual work and readers wouldn’t reasonably care about your process, disclosure is optional. If an AI did substantial thinking or if readers would expect to know, disclosure matters.

What My Process Actually Looks Like

I’m going to describe my workflow for some of the posts on this site, because I think specificity helps more than vague principles.

Step One: I develop the core idea through conversation. I’ll often start by talking through a topic with Claude or Perplexity. Not asking it to write something, but using it as a sounding board to clarify my own thinking. What’s the interesting angle here? What am I actually trying to say? What examples support this point versus that one?

This is similar to how I’d talk through an idea with a colleague or editor before writing. The AI asks questions I hadn’t considered. It points out gaps in my reasoning. It helps me figure out what I actually think before I try to write it down. Sometimes I’ve stated an opinion and my AI thought partner has pushed back with information that made me adjust my position.

Step Two: The AI drafts based on my thinking. Once I’ve clarified the argument, structure, and key points through conversation, I’ll ask the LLM to draft content. But this isn’t “write me a blog post about X.” It’s “based on everything we just discussed, draft a post that makes these specific points in this order with these examples.”

The output is usually 60-70% of what I actually publish. The core argument is mine. The structure is mine. The examples are mine. But the specific phrasing, transitions, and flow come from the AI’s interpretation of our conversation.

Step Three: I edit extensively. I rewrite sections that don’t sound like me. I add details that the AI couldn’t know. I cut tangents that seem AI-generated rather than purposeful. I adjust tone, add specificity, and make sure every sentence reflects what I actually think.

This takes longer than most people assume. I’m not just fixing typos. I’m ensuring the final piece genuinely represents my analysis and voice, not a machine’s approximation of it.

Where This Falls on the Disclosure Spectrum

My workflow fits somewhere between academic writing (which requires detailed disclosure of every AI interaction) and casual blogging (which often doesn’t disclose at all). The intellectual work is mine. The thinking is mine. The final editing and judgment calls are mine. But the drafting assistance is substantial enough that transparency matters.

So here’s what I’ve settled on: a simple end-of-article disclosure that explains the role AI played without overcomplicating things.

“This post was developed through conversation with AI tools that helped structure my thoughts and draft initial content. The analysis, opinions, and final editing are mine.”

That’s honest without being defensive. It clarifies what the AI did (structure, drafting) versus what I did (thinking, analysis, editing). And it respects readers’ right to know how content was created without suggesting I’m apologizing for using available tools.

Why This Matters Beyond Compliance

The easy answer to “should I disclose AI use?” is to look up whatever rules apply to your situation and follow them. Academic journals have policies. Publishers have guidelines. Platforms like Amazon KDP have specific requirements.

But the more interesting question is why you’d disclose even when you’re not required to.

Transparency builds trust. When readers know you’re upfront about your process, they’re more likely to trust your judgment. When they suspect you’re hiding something, that suspicion colors how they interpret everything else you write.

Setting norms matters. Right now, the AI-assisted writing landscape is chaotic. Some people disclose meticulously. Others hide AI use entirely. Many fall somewhere in between with no clear standard to follow. Early adopters who model responsible disclosure help establish what becomes standard practice.

Your reputation depends on being ahead of this, not behind it. At some point, probably soon, readers will expect disclosure for substantial AI assistance. Better to establish credibility now by being transparent than to get caught hiding your process later when expectations have shifted.

The Disclosure Nobody Needs

Here’s what doesn’t help: vague disclaimers that provide no useful information.

“Some content on this site may have been created with AI assistance.” What does that mean? Which content? How much assistance? For what purpose?

“This article was reviewed for accuracy.” Was it written by a human or AI? Who reviewed it? What did the review process actually entail?

“AI tools were used in the creation of this content.” Were they used to fix typos or write entire sections? This tells readers nothing.

If you’re going to disclose, make it specific enough to be meaningful. Explain what role the AI played. Clarify what you’re still responsible for. Vague disclosure is often worse than no disclosure because it raises questions without answering them.

Where I’m Still Figuring Things Out

I don’t have perfect answers here. The landscape is evolving faster than anyone can establish stable norms.

I’m still deciding whether to disclose AI assistance differently for different types of content. I’m watching to see whether readers actually care. So far, the disclosure I include at the end of posts hasn’t generated questions or pushback. That might mean it’s appropriately transparent, or it might mean readers don’t pay attention to disclosures at all.

I’m curious whether platform-level solutions will make individual disclosures unnecessary. If browsers start automatically detecting and flagging AI-generated content, does manual disclosure become redundant, or does it remain important for building trust?

What I know for certain: being thoughtful about disclosure now, even when it’s not required, positions you better than waiting until someone calls you out for hiding your process. The writers who figure out responsible AI assistance early will have credibility advantages over those who treat it as something to hide.

This is the standard I’m holding myself to. Your mileage may vary. But if you’re using AI substantially in your writing workflow, you should probably figure out your own disclosure policy before someone else figures it out for you.

This post was developed through conversation with AI tools that helped structure my thoughts and draft initial content. The analysis, opinions, and final editing are mine.

Read more about what I have to say about AI on my Substack.