Why Manual Publishing Doesn't Scale Past Three Sites
The AIUNITES network started with a single hub site and grew, over a few months, into eighteen interconnected products. Each is a separate GitHub repository with its own custom domain, its own UAT plan, changelog, AdSense registration, and Search Console verification.
At three sites, manually committing and pushing changes was slightly tedious. At ten, it was the dominant time sink in any given session. By the time the count crossed fifteen, the math broke: a fifteen-second commit and push, multiplied by eighteen repositories, is over four minutes of pure tooling overhead per round of edits, before any of the cross-site coordination tasks — sitemap regeneration, IndexNow submission, webring updates, ads.txt verification — even start.
The publishing pipeline that emerged was not a sudden architectural decision. It was a series of small automations layered onto a workflow until the round-trip cost of a change to any site dropped below the cost of thinking about which site needed the change.
The Core Loop
The pipeline runs on four pieces of infrastructure, in roughly this order of importance:
- auto-publish.ps1 — commits and pushes any changed repository in the network, every ten minutes, via Windows Task Scheduler.
- script-runner.ps1 — executes a queue of one-off and recurring scripts, with a self-comment-out behavior that turns the queue into a kind of durable to-do list.
- tray-monitor.ps1 — a small system tray window showing the publishing state in real time, with a manual run trigger when the next ten-minute cycle is too far away.
- Claude with the Anthropic MCP filesystem server — the long-context engineering partner that does the actual editing, refactoring, code review, and writing.
The flow of any given change is short. The operator describes what needs to happen, in plain English, in a chat session with Claude. Claude reads the relevant files via the MCP filesystem server, makes the necessary edits, and verifies the work. The auto-publish task picks up the changes within minutes and pushes them. GitHub Pages rebuilds. The change is live, on the relevant production domain, usually within fifteen minutes of the original instruction.
auto-publish.ps1
The auto-publish script is the load-bearing piece of the system. It iterates over every repository in the network's GitHub directory, runs git status --porcelain to detect changes, and for any repository with uncommitted modifications, stages the work, generates a descriptive commit message, and pushes to the appropriate branch.
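The detect-and-push loop can be sketched as follows. The production script is PowerShell; this is a Python sketch of the same logic, and the GitHub root path and commit-message format are illustrative assumptions, not the script's actual values.

```python
import subprocess
from pathlib import Path

# Assumed location of the network's repositories; the real path is whatever
# directory the operator keeps the eighteen repos in.
GITHUB_ROOT = Path.home() / "GitHub"

def changed(repo):
    """True if the repo has uncommitted modifications (git status --porcelain)."""
    out = subprocess.run(
        ["git", "-C", str(repo), "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    return bool(out.stdout.strip())

def publish(repo):
    """Stage everything, commit with a generated message, push."""
    subprocess.run(["git", "-C", str(repo), "add", "-A"], check=True)
    subprocess.run(["git", "-C", str(repo), "commit", "-m",
                    f"auto-publish: update {repo.name}"], check=True)
    subprocess.run(["git", "-C", str(repo), "push"], check=True)

def run_cycle():
    """One Task Scheduler cycle: visit every repo, push any with changes."""
    for repo in sorted(GITHUB_ROOT.iterdir()):
        if (repo / ".git").is_dir() and changed(repo):
            publish(repo)
```

The `--porcelain` flag is the important detail: it gives stable, script-friendly output, so an empty result reliably means "nothing to publish."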
Several details matter for an eighteen-repository setup that a single-repo workflow would never need to think about.
Branch detection per repository
Most repos in the network use master. A few — notably cloudsion-site — use main. The script tries the expected branch first and falls back gracefully when the expected branch is not present. This is unglamorous defensive coding that prevents one repo's branch quirks from breaking the entire publishing run.
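The fallback can be sketched like this, again in Python rather than the actual PowerShell; the candidate order is the behavior described above, not a quote from the script.

```python
import subprocess

def detect_branch(repo, preferred="master"):
    """Return the branch to push: the expected one if it exists, else a fallback."""
    for candidate in (preferred, "main", "master"):
        ok = subprocess.run(
            ["git", "-C", str(repo), "rev-parse", "--verify", "--quiet", candidate],
            capture_output=True, text=True,
        )
        if ok.returncode == 0:   # the ref exists in this repo
            return candidate
    raise RuntimeError(f"{repo}: neither {preferred!r}, 'main', nor 'master' exists")
```

Raising only when no recognizable branch exists keeps one oddly configured repo from aborting the run for the other seventeen.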
Stats generation in the same pass
The auto-publish run also produces a publish-stats.json file summarizing per-repo activity — commits per day, files changed, last push timestamp. This is published into cloudsion-site/data/ and consumed by the network's automation dashboard. Doing it in the same pass as the commit means the stats are always within ten minutes of accurate.
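A sketch of the stats pass, with hypothetical field names inferred from the description above; the real publish-stats.json schema may differ.

```python
import json
import subprocess
from pathlib import Path

def repo_stats(repo):
    """Summarize one repo's recent activity for the dashboard."""
    def git(*args):
        return subprocess.run(
            ["git", "-C", str(repo), *args],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    return {
        "commits_today": int(git("rev-list", "--count", "--since=midnight", "HEAD")),
        "last_push": git("log", "-1", "--format=%cI"),  # ISO timestamp of latest commit
    }

def write_stats(repos, out_path):
    """Write the combined stats file (e.g. cloudsion-site/data/publish-stats.json)."""
    stats = {Path(r).name: repo_stats(r) for r in repos}
    Path(out_path).write_text(json.dumps(stats, indent=2))
```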
Silent execution via VBS
Windows AppLocker on the operator's machine blocks executables launched from arbitrary directories. The auto-publish task runs through a wscript.exe launcher pointed at auto-publish-silent.vbs, which in turn invokes the PowerShell script. This indirection is required for the task to run silently in the background without an AppLocker prompt or a visible console window flashing every ten minutes.
The IndexNow side effect
Every successful push triggers a small follow-on call to Bing's IndexNow endpoint, submitting the changed URLs for fast re-crawl. IndexNow is an open protocol that Microsoft and a handful of other search engines have committed to, and for a participating engine it is the difference between seeing a change in two hours versus two weeks. The integration is one HTTP POST per push.
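The POST body follows the public IndexNow protocol. A minimal sketch; the key value and URLs are placeholders, and a real deployment must serve the key file at the `keyLocation` URL so the engine can verify ownership.

```python
import json
import urllib.request

# Either endpoint works per the protocol; bing.com/indexnow is equivalent.
ENDPOINT = "https://api.indexnow.org/indexnow"

def indexnow_payload(host, key, urls):
    """Build the one-POST body for a batch of changed URLs."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

def submit(host, key, urls):
    """POST the changed URLs; returns the HTTP status (200 or 202 on acceptance)."""
    body = json.dumps(indexnow_payload(host, key, urls)).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=body,
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```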
The Script Runner Pattern
Auto-publish handles the routine case — if you changed a file, your change goes live. Everything that is not a routine commit-and-push goes through the Script Runner instead.
The Script Runner is, structurally, a list of script paths in a PowerShell file. Each script in the list is run once on the next Task Scheduler cycle. A script that completes without throwing is automatically commented out of the list and will not run again. A script that throws stays in the list and will run again on the next cycle.
This produces a useful pair of behaviors. Successful one-off tasks remove themselves from the queue — "rebuild the OG image set after a logo update" runs once, succeeds, and is gone. Long-running observers stay queued by design — the cert watcher described in the recent HTTPS sprint article throws on every cycle until all monitored domains are healthy, at which point it returns clean and removes itself.
The Script Runner is also the right place for batch operations across the network. A script like "regenerate every site's sitemap.xml" gets queued, runs once, and self-comments. A script like "monitor for AdSense regressions" gets queued and stays queued indefinitely. The same primitive handles both shapes of work.
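The self-comment-out mechanic can be sketched as follows, with callables standing in for script paths; the real runner edits the lines of a PowerShell file rather than an in-memory list.

```python
def run_queue(lines, run_script):
    """Run each uncommented queue entry; comment out the ones that succeed.

    `lines` is the queue file's lines; `run_script(path)` raises on failure.
    Returns the rewritten lines to be written back to the queue file.
    """
    out = []
    for line in lines:
        entry = line.strip()
        if entry and not entry.startswith("#"):
            try:
                run_script(entry)
                line = "# " + line   # success: retire the entry
            except Exception:
                pass                 # failure: leave it queued for the next cycle
        out.append(line)
    return out
```

Note that "throws" is the only signal: a one-off task returns clean and retires itself, while a long-running observer deliberately throws until its condition is met.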
The strict ASCII rule
One non-obvious operating constraint: every script and every separator comment in the Script Runner queue must be ASCII-only. The runner reads its own source via Get-Content -Encoding UTF8, edits the queue line for the script that just ran, and writes it back via Set-Content -Encoding UTF8. Any non-ASCII characters in the file get re-encoded in subtly wrong ways across this round trip and corrupt the queue. Box-drawing characters, em dashes in comments, smart quotes pasted from a Word document — all of these have, at one time or another, broken the runner. The fix is a strict policy: ASCII or it doesn't ship.
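The rule can be enforced mechanically before any write-back. A sketch of such a guard (this check is an assumption, not part of the actual runner):

```python
def assert_ascii(text, name="queue"):
    """Raise if the text contains any character outside 7-bit ASCII."""
    for i, ch in enumerate(text):
        if ord(ch) > 127:
            raise ValueError(
                f"{name}: non-ASCII character {ch!r} (U+{ord(ch):04X}) at offset {i}"
            )
```

Reporting the exact character and offset matters in practice: a smart quote pasted into a comment looks identical to a plain quote when skimming the file.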
Where Claude Fits
The publishing pipeline is what makes a change deployable. The actual work of authoring the change — reading code, making correct edits, designing schemas, writing documentation, debugging, reviewing — is done in collaboration with Claude, accessed through Anthropic's chat interface and connected to the file system via Anthropic's MCP filesystem server.
This is worth describing carefully, because the popular framing of "AI engineering assistants" tends to drift into two unhelpful caricatures. One caricature treats the AI as an autonomous agent: type a goal, walk away, return to a finished product. The other treats it as a slightly fancier autocomplete: useful for boilerplate, irrelevant to the actual work. Neither maps to what is actually happening.
What is actually happening is that the operator describes a problem, and Claude, which has substantial context on the AIUNITES architecture from prior sessions and from the project files it can read on demand, proposes and implements solutions. The operator reviews, redirects, and approves. For many changes, this collaboration is fast: the operator describes the desired change in a sentence, Claude reads the relevant files, makes the edit, and the operator confirms. For more complex changes — a new template, an architectural shift, a debugging session with no obvious cause — the collaboration looks more like pair programming, with Claude proposing hypotheses, building diagnostic tools, and iterating in close coordination with the operator's judgment.
The MCP filesystem server
The technical mechanism that makes this work at scale is Anthropic's MCP (Model Context Protocol) filesystem server. The MCP server runs on the operator's machine and exposes a controlled set of file system operations — read, write, edit, list, search — to Claude over the protocol. The configured allowed directories include the GitHub root, AppData, Downloads, Documents, and a small handful of others. Operations outside those directories are simply unavailable.
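Configuration amounts to listing the allowed directories as arguments to the reference filesystem server in Claude's desktop configuration file. A minimal sketch, with illustrative paths and username:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\operator\\GitHub",
        "C:\\Users\\operator\\Downloads",
        "C:\\Users\\operator\\Documents"
      ]
    }
  }
}
```

The allowlist is the whole security model: a path not in the argument list is simply not reachable by any tool call.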
From Claude's side, this looks like a set of tools that can be invoked during a conversation. The tool calls produce real effects on the local filesystem. From the operator's side, this looks like asking a colleague to make a change, with the colleague reading the files in question and editing them in place, except that the colleague is an AI model and the read-write operations are happening in seconds rather than hours.
The combination of MCP file access and Claude's long context window is what makes the eighteen-site scale tractable. Claude can hold the structure of the network in working memory across a session: which sites use the modal-admin template versus the page-admin template, which repos use main versus master, which sites have AdSense enabled, which are flagged for content review. When a change needs to apply across multiple repos, Claude can apply it consistently without the operator having to context-switch between eighteen separate codebases.
Cloudsion: The Productized Version
The pipeline described above is what runs the AIUNITES network internally. Cloudsion is the productized version of the same pattern, packaged for clients who want the same publishing capability for their own sites without reinventing the auto-publish, Script Runner, and MCP integration components.
Cloudsion's pricing reflects what the underlying pipeline actually costs to operate. The on-premises setup is a one-time engagement that installs the auto-publish stack, configures Task Scheduler and the silent VBS launchers, sets up the MCP filesystem server with appropriate allowed directories, and walks the client through their first few publishing sessions. The monthly retainer covers ongoing pipeline operation: the actual editing, refactoring, and content work that runs through the chat interface, billed in time blocks against the same 80-dollar-per-hour rate that applies to the rest of the AIUNITES consulting work.
The running cost to a client of an established Cloudsion pipeline is approximately twenty dollars per month, all in. That figure is striking, and worth explaining. GitHub Pages hosting is free for public repositories. Custom domain registration is whatever the client is paying their registrar — typically twelve to fifteen dollars per domain per year. The pipeline scripts are free; the operator runs them on a machine they already own. The one recurring cost is the Anthropic Claude Pro subscription that powers the long-context editing sessions.
This is not a typical SaaS economics story. The pipeline is not selling compute or storage at a markup. It is selling a way of working — a publishing capability that turns ongoing edits and content updates into a flow rather than a project — and the underlying infrastructure costs are genuinely small.
Operating Discipline
Tools alone do not produce a stable network. A handful of discipline practices keep the AIUNITES pipeline from drifting into chaos as the site count grows.
UATEST plans, never deletions
Every site has its own UATEST_PLAN.md in its repository root, listing every feature with a checkbox status. The hard rule is that features are never deleted from a UATEST plan. Removed or deprecated features get marked with a strikethrough or a "deprecated" tag. This means the UAT plan history becomes the authoritative changelog for a site — what was once true, what is true now, and what changed between them — and it is the first file Claude reads when starting work on any given site.
Versioning in four places
Every site tracks its semantic version in four places: the UAT plan header, the JS config file that drives the admin panel, the admin panel's changelog tab, and the network-wide PROGRESS_LOG. The redundancy is intentional. A version visible only in one place gets out of sync. Versions visible in four places that all need to be updated together produce a small ritual that catches lapses early.
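A drift check across the four locations is easy to automate. A sketch under assumed conventions (the version-string format and the idea of scanning each file's text are illustrations, not the actual tooling):

```python
import re

# Matches a semantic version like 2.3.1, with or without a leading "v".
VERSION_RE = re.compile(r"\bv?(\d+\.\d+\.\d+)\b")

def first_version(text):
    """First semantic version found in a file's text, or None."""
    m = VERSION_RE.search(text)
    return m.group(1) if m else None

def versions_agree(texts):
    """True if every one of the four locations declares the same version."""
    found = {first_version(t) for t in texts}
    return len(found) == 1 and None not in found
```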
Transcripts and the journal
Substantial sessions are saved as Markdown transcripts in a per-site transcripts/ directory. A network-wide journal.txt captures the human-readable summary of each session. The transcripts are searchable by Claude in subsequent sessions, which means the system retains a kind of organizational memory across operators and across time without requiring anyone to remember to write documentation as they go.
Backups via a separate scripts repo
The auto-publish run also writes a snapshot of the scripts directory to a separate aiunites-scripts-backup repository on every cycle. This is the equivalent of a versioned backup with no operator effort. If a script gets corrupted — the ASCII rule above is the most common way this happens — the previous working version is one git log away.
What This Pipeline Is Not
It is worth being explicit about what the AIUNITES publishing pipeline is not, because the framing of "AI-powered everything" tends to overpromise.
It is not autonomous. There is always an operator in the loop, providing direction, reviewing changes, and exercising judgment. The pipeline accelerates the work; it does not replace the worker.
It is not magic. The auto-publish, Script Runner, and MCP integration are straightforward engineering. The novel piece is the combination — the pipeline is more useful than the sum of its parts because each part eliminates a different category of friction.
It is not zero-defect. Things break. The recent HTTPS sprint described in the companion article is one example of a class of failures that the pipeline cannot prevent and can only help diagnose and recover from. What the pipeline provides is the leverage to fix problems quickly when they appear, not the guarantee that they won't.
And it is not a replacement for engineering taste. The operator still has to know what the right answer is. Claude is a remarkably capable partner for executing on a vision, but it does not invent the vision. The pipeline amplifies the operator; it does not substitute for them.
Where This Goes Next
The current pipeline is sufficient for an eighteen-site network operated by a single person at a sustainable pace. Two areas of expansion are in progress.
The first is multi-operator support. Cloudsion clients are starting to engage with the pipeline as more than a single-operator tool. The Script Runner pattern generalizes naturally to a team setting — queues that survive across operator handoffs — but the MCP filesystem server is currently a single-machine concern, and the pattern for sharing it across collaborators is still being worked out.
The second is observability. The current system logs heavily but does not yet ship a single dashboard that says "the network is healthy" or "X needs attention." The publish-stats.json file is the closest thing, and it is already informative, but the gap between "useful logs" and "actionable status" is real and worth closing. The cert watcher described in the HTTPS sprint article is an early example of an observer that produces actionable signal; more of those, with a unified surface, is the natural direction.
For now, the pipeline does what it was built for. Eighteen sites stay current, AdSense stays compliant, content stays consistent, and the operator's available time stays mostly available for the actual work of building things rather than the work of shipping them.
Cloudsion: Publishing as a Capability
The same pipeline that runs AIUNITES, packaged for client networks. On-premises setup, monthly retainer, custom configurations available.
Learn About Cloudsion → Consulting →