Part 8 Postmortem: Writing This Series With AI

What this piece should be

This should not read like a novelty disclosure or a tooling diary.

The strongest version is a retrospective with a clear claim:

This series was mostly written with AI assistance, but the interesting part is not that AI generated text. The interesting part is that quality only emerged once the work was structured into plans, artifacts, review passes, and human editorial decisions.

That makes the postmortem consistent with the series itself. The process mirrored the argument.


The underlying thesis for Part 8

Three claims seem worth centering:

  1. AI made the series faster to produce, but not automatically coherent.
  2. The biggest gains came from structuring the work into artifacts and staged review loops.
  3. The final quality came from human editorial judgment, not from asking the model to try harder.

If the piece lands on those three points, it will feel like a real retrospective rather than a confession.


What actually happened

Phase 1: Discovery

  1. Brainstorm with ChatGPT voice chat.
  2. Explore the topic broadly enough to find the interesting tension.

The value here was speed and breadth. Voice chat was good for open-ended exploration before there was a clear outline.

Phase 2: Architecture

  1. Restructure everything into a coherent high-level plan.
  2. Identify the narrative spine.
  3. Slice the idea into distinct articles and create one plan for each.

This was probably the turning point. Before this step, the material was just ideas. After this step, it became a system of constraints.

Phase 3: Evidence

  1. Use ChatGPT deep research to find prior art online.
  2. Separate useful sources from weak or obscure ones.

This phase matters because AI is good at finding material, but not always good at ranking it by taste, relevance, credibility, or rhetorical usefulness.

Phase 4: Artifacts and persistence

  1. Move the high-level plan, research, and per-article plans into a markdown repository.

This step looks operational, but it is conceptually important. The repo turned conversational exploration into durable artifacts. It made the process inspectable, resumable, and easier to refine.
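
A minimal sketch of what that repository might have looked like; the file names and layout are illustrative assumptions, not the actual structure:

```
series/
  plan.md               # high-level plan and narrative spine
  research/
    sources.md          # deep-research findings with notes on quality
  articles/
    part-1-plan.md      # one plan per article, through part-8
    part-2-plan.md
```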

Phase 5: Drafting and controlled iteration

  1. Use Copilot to draft the articles.
  2. Ask the AI to review the drafts for gaps, duplication, weak arguments, unsubstantiated claims, poor transitions, and inverted logic.
  3. Manually identify recurring patterns that felt wrong and force the drafts through repeated revision; the failure-modes section below is where those patterns should be named concretely.
  4. Repeat the AI review pass and the human taste pass until the result is acceptable.

That last loop is where the authorship question becomes clearer. The human role was not just approving text. It was defining standards, spotting recurring defects, and deciding what kind of writing counted as finished.
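
For concreteness, here is a hedged sketch of what one of those AI review passes could look like as a standing instruction. The wording is illustrative; the defect categories are the ones listed above:

```
Review this draft against its article plan. Flag, with locations:
- gaps in the argument
- duplication with earlier parts of the series
- weak or unsubstantiated claims
- poor transitions
- inverted logic (a conclusion stated before the reasoning that earns it)
List the defects only. Do not rewrite anything yet.
```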


The most important reflection

The process validated the series argument.

The work got better every time it became more structured: when loose ideas became plans, when plans became durable artifacts, and when revision became staged review passes instead of open-ended chat.

In other words, the quality did not come from autonomous generation. It came from a workflow with handoffs, artifacts, constraints, and human judgment at the irreversible edges.

That symmetry is probably the most interesting thing to say in Part 8.


What AI was actually good at

AI was especially useful when the task was expansive, comparative, or iterative.


What still depended on human judgment

This is probably worth stating directly: the hardest part was not generation. It was convergence.


Where the real effort went

One strong retrospective angle is that drafting was not the expensive phase.

The expensive phase was editorial convergence: the repeated review passes and rewrites needed before a draft counted as finished.

This is important because many people assume AI shifts effort out of writing. In practice, it often shifts effort out of first-draft production and into evaluation, curation, and taste.


Failure modes worth naming explicitly

Part 8 will be stronger if it includes concrete examples of what the models kept doing badly.

These are not random defects. They are the cost of accepting plausible prose too early.


A good way to frame authorship

This part needs care.

It probably helps to avoid percentage claims like “AI wrote 80%” unless you really want that fight.

A more durable framing is:

The series was largely drafted with AI assistance, but it was planned, structured, constrained, sourced, and repeatedly rewritten through a human editorial process.

That is more precise and more interesting.


What I would do differently next time

This section should probably be concrete.

1. Lock the style guide earlier

Write down the anti-patterns at the start instead of discovering them only through frustration.

Examples:
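
A sketch of what that standing list might contain, drawing on the defects the review passes were already asked to flag; the exact entries are illustrative, not the project's actual style guide:

```
# Standing anti-patterns (excerpt)

Reject on sight:
- unsubstantiated claims presented as settled facts
- duplication of a point already made in an earlier part
- transitions that connect sections by proximity rather than logic
- inverted logic: the conclusion stated before the argument that earns it
```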

2. Define source tiers before drafting

Separate primary sources, high-quality reporting, vendor documentation, and weak filler sources before prose generation begins.
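
One lightweight way to capture that is a short policy note in the same markdown repository; the tier names follow the categories above, and the rest is illustrative:

```
# source-policy.md

Tier 1: primary sources
  Papers, specifications, first-party announcements. Cite directly.

Tier 2: high-quality reporting and analysis
  Established outlets and practitioners with direct experience. Cite selectively.

Tier 3: vendor documentation
  Reliable for mechanics; discount the framing.

Tier 4: weak filler
  SEO posts and unsourced summaries. Leads at most, never citations.
```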

3. Freeze the narrative spine sooner

Once the series argument is clear, treat it as a constraint. Do not let later drafting passes silently rewrite the thesis.

4. Separate factual review from prose review earlier

One pass should ask: is this true and well supported?

Another should ask: does this sound human, precise, and worth reading?

Combining both too early makes revision noisier.

5. Keep a running defect log

When a recurring flaw appears, record it once and treat it as a standing instruction for the rest of the project.
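
The log needs no tooling. A sketch of one possible entry, built around a defect category already named in this piece; the format is illustrative:

```
# defect-log.md

Defect: duplication across parts
Symptom: the same argument reappears in several articles with slightly different wording.
Standing instruction: each argument gets one home; later parts reference it instead of restating it.
```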

6. Expect editing, not magic

The main mistake would be assuming that better prompts remove the need for taste, judgment, and line-level rewriting. They do not.


Extra angles worth including in the retrospective

These feel like good candidates for Part 8 even if they are not central:

The division of labor by tool

The cost of the workflow

Not just time or tokens, but cognitive overhead.

The source-policy evolution

The series appears to have improved as the source bar rose and citations became more selective and more purposeful.

What remained irreducibly human

There is probably value in stating clearly that taste, fairness, structure, and rhetorical judgment were not automated away.

Whether disclosure changes how the series should be read

One possible closing question:

If a series is AI-assisted but governed by clear human standards, how much does the production method matter relative to the argument itself?


A concise closing direction for Part 8

If the piece needs a final note, this is the one worth building toward:

The lesson of this process was not that AI can write for me. It was that AI becomes useful when writing is treated as a system: plans, artifacts, review loops, and judgment. The more autonomous the drafting looked, the more human the editing had to become.

That ending would connect the retrospective back to the rest of the series cleanly.