The Trigger
The AIUNITES network runs eighteen separate production sites, each on its own custom apex domain. All eighteen had been registered with Google AdSense for monetization. As of late April 2026, the AdSense Sites dashboard showed every single one with the same status: Ads.txt status — Not found.
This was surprising, because the ads.txt file with the correct publisher line existed in every repository. It had been deployed via the network's auto-publish pipeline weeks earlier. A spot check of a handful of domains in a browser confirmed the file was there, served correctly, with the right content. And yet AdSense was uniformly reporting it missing.
Worse, several sites that had been showing as Authorized just days earlier had silently regressed to Not found. Whatever had broken, broke recently, and broke at scale.
First Wrong Hypothesis: The Public Suffix List
The first instinct was to check whether the AdSense documentation imposed any constraints that might apply to a network of sites hosted on GitHub Pages. AdSense's published guidance notes that ads.txt must live at the root of the domain, and that for hosts on the Public Suffix List, each subdomain needs its own ads.txt.
GitHub's github.io is on the Public Suffix List. For a moment, this looked like a possible explanation: maybe AdSense was ignoring the project-page ads.txt files because aiunites.github.io/some-repo/ads.txt isn't where the crawler would look.
This turned out to be a red herring. None of the eighteen AIUNITES sites was registered with AdSense at its github.io address. Every site was registered at its custom apex domain — aiunites.com, videobate.com, inthisworld.com, and so on. The Public Suffix List rule does not apply to a custom domain. AdSense was crawling the apex, not the GitHub project subpath.
The hypothesis was wrong, but the process of ruling it out clarified what to check next: what was AdSense actually able to fetch when it asked the apex for ads.txt?
Building a Proper Diagnostic
Rather than checking domains one at a time in a browser, the right move was a script that hit https://[domain]/ads.txt for every site in the network, captured the HTTP status, the final URL after redirects, and the first content line, and reported each result. The implementation was about 80 lines of PowerShell.
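In condensed form, the core of that probe looks roughly like the sketch below. The domain list and the publisher line are placeholders rather than the network's real values, and it is written for Windows PowerShell 5.1, which is where the TLS error text quoted below comes from.

```powershell
# Condensed ads.txt probe (placeholder domains and publisher line).
$domains  = @('aiunites.com', 'videobate.com', 'inthisworld.com')   # ... all eighteen
$expected = 'google.com, pub-0000000000000000, DIRECT'              # placeholder publisher line

foreach ($domain in $domains) {
    try {
        $resp  = Invoke-WebRequest -Uri "https://$domain/ads.txt" -MaximumRedirection 5 -UseBasicParsing -TimeoutSec 15
        $first = ($resp.Content -split "`n")[0].Trim()
        $match = if ($first -eq $expected) { 'MATCH' } else { 'WRONG CONTENT' }
        Write-Output ("{0,-20} {1}  {2,-13} {3}" -f $domain, [int]$resp.StatusCode, $match, $resp.BaseResponse.ResponseUri)
    }
    catch {
        # TLS failures, DNS failures, and HTTP errors all land here; report the raw message.
        Write-Output ("{0,-20} ERROR  {1}" -f $domain, $_.Exception.Message)
    }
}
```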
The output split the network cleanly into two groups. Eight sites returned 200 OK with the correct publisher line. Ten sites returned the same TLS error: "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel."
That error meant something specific. The DNS was resolving. A connection was being established to the right IP address. But the certificate the server presented during the TLS handshake was either expired, invalid, or for a different name than the client was asking for. The browser, in normal use, would fail to connect with a security warning. AdSense's crawler, in its own way, was doing the same thing — concluding that the page was unreachable, and recording that as ads.txt not found.
Inspecting the Pages API
The next question was why ten domains had broken TLS while eight worked. Same hosting platform, same network, same auto-publish pipeline. Something was different about those ten.
GitHub's REST API exposes the relevant state per repository at repos/{org}/{repo}/pages. The fields that matter are cname (the custom domain GitHub will serve at), https_certificate.state (the cert lifecycle state), https_certificate.domains (which names the cert covers), and https_enforced (whether GitHub redirects HTTP to HTTPS).
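Reading that state for a single repo takes one gh api call with a jq projection; the repo name here is illustrative.

```powershell
# Project the Pages fields that matter for this investigation (repo name is illustrative).
gh api repos/AIUNITES/videobate/pages `
  --jq '{cname, https_enforced, cert_state: .https_certificate.state, cert_domains: .https_certificate.domains}'
```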
A second diagnostic script walked all ten broken repos and pulled this state. The pattern was unambiguous:
- The eight working sites had cname set to the apex (e.g., aiunites.com).
- The ten broken sites had cname set to www.X.com.
That was the whole problem. When a GitHub Pages custom domain is set to www.example.com, Let's Encrypt issues a certificate whose Subject Alternative Names cover only www.example.com. A client connecting to the apex — example.com — reaches GitHub's edge over the apex A records, GitHub presents the cert it has, and the cert does not match. The TLS handshake aborts. AdSense's crawler, hitting https://example.com/ads.txt, gets exactly the failure mode our diagnostic captured.
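One way to confirm the mismatch directly is to dump the Subject Alternative Names of whatever certificate the apex actually presents. The sketch below is a generic check rather than part of the network's tooling; the domain is a placeholder, and the validation callback accepts any certificate so the handshake gets far enough to inspect it.

```powershell
# Show the subject and SANs of the certificate served at the apex (placeholder domain).
$domain = 'example.com'
$client = New-Object Net.Sockets.TcpClient($domain, 443)
$ssl    = New-Object Net.Security.SslStream($client.GetStream(), $false, { $true })  # accept any cert for inspection
$ssl.AuthenticateAsClient($domain)
$cert = New-Object Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
"Subject: $($cert.Subject)"
($cert.Extensions | Where-Object { $_.Oid.FriendlyName -eq 'Subject Alternative Name' }).Format($true)
$ssl.Dispose(); $client.Dispose()
```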
The eight sites that had been working were configured with the apex as the canonical name, so Let's Encrypt had issued multi-SAN certificates covering both apex and www. The ten broken sites had been registered with www as canonical somewhere in the network's setup history and had never been corrected.
The Fix That Should Have Worked
The fix appeared simple. For each of the ten broken repos:
- Rewrite the local CNAME file from www.X.com to X.com.
- Commit and push.
- Call gh api PUT repos/AIUNITES/[repo]/pages with the new cname value.
A script automated all three steps across the ten repos. Nine succeeded immediately. One returned a 404: "The certificate does not exist yet." A retry a few minutes later got the same error. Then, mysteriously, on a third attempt it succeeded. Race condition; no obvious cause; minor irritation.
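In condensed form, the per-repo automation looks roughly like this sketch; the repo and domain names are placeholders, and the real script iterated the ten broken repos from the network's configuration.

```powershell
# Condensed per-repo fix: rewrite CNAME, push, then update the Pages API.
$fixes = @(
    @{ Repo = 'videobate';   Apex = 'videobate.com' }
    @{ Repo = 'inthisworld'; Apex = 'inthisworld.com' }
    # ... remaining repos
)

foreach ($f in $fixes) {
    # Steps 1 and 2: rewrite the CNAME file in a local clone, commit, push.
    Set-Content -Path (Join-Path $f.Repo 'CNAME') -Value $f.Apex
    git -C $f.Repo add CNAME
    git -C $f.Repo commit -m "Switch custom domain to apex $($f.Apex)"
    git -C $f.Repo push

    # Step 3: point the Pages configuration at the apex as well.
    gh api -X PUT "repos/AIUNITES/$($f.Repo)/pages" -f "cname=$($f.Apex)"
}
```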
With all ten cname values updated, the next step was to wait for Let's Encrypt to issue fresh apex-covering certificates. GitHub's documentation suggests this typically takes fifteen to sixty minutes.
Sixty minutes passed. Nothing.
Two hours passed. Still nothing.
The Pages API now reported cert_state: dns_changed on every one of the ten domains. None had progressed to pending, issued, or approved. The cname change had been registered. The cert state machine had moved into the right initial state. And then it had simply stopped.
The dns_changed Trap
The dns_changed state is mentioned almost nowhere in GitHub's public documentation. Inferring its behavior required reading the source of the Pages API and a handful of community threads.
What it represents is an internal lock. When GitHub detects that the custom domain's effective DNS configuration has changed — for example, when the canonical name flips from www to apex, or when the apex A records are added for the first time — the cert state moves to dns_changed. From that state, GitHub schedules a fresh certificate issuance through Let's Encrypt. But the schedule is internal, opaque, and not user-triggerable through normal means.
A toggle on https_enforced — setting it false, then true a few seconds later — is documented in some community threads as a way to force the state machine forward. The script that automated this toggle across all ten domains did succeed in registering the request: every PUT call returned, sometimes with 200, sometimes with a 404 "certificate has not finished being issued". The 404 in this context is not a failure. It is GitHub saying the cert is mid-flight; your toggle has been recorded but cannot apply yet.
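For reference, the toggle itself is just two PUT calls a few seconds apart; the repo name below is a placeholder, and as the next sections explain, it did not actually move anything forward.

```powershell
# The https_enforced toggle described in community threads (placeholder repo name).
gh api -X PUT repos/AIUNITES/videobate/pages -F https_enforced=false
Start-Sleep -Seconds 10
gh api -X PUT repos/AIUNITES/videobate/pages -F https_enforced=true
```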
And yet hours later, all ten domains were still in dns_changed.
The Rate Limit
Let's Encrypt enforces a rate limit of five duplicate certificates per rolling seven-day window, where "duplicate" means a certificate covering exactly the same set of names. For each AIUNITES domain, the recent history was: an initial www-only cert, the failed retries during initial setup, the cert reissued when DNS was first corrected, and now the cert request triggered by the apex flip.
Several of the ten domains had quietly burned through their five attempts. From Let's Encrypt's perspective, the next request would have to wait until the oldest of the previous attempts aged out of the rolling window. From GitHub's perspective, this manifested as a cert that simply would not progress out of dns_changed, with no error message, because the request had been queued internally and would retry on its own schedule.
This was not a bug, in any sense. It was Let's Encrypt's intended behavior, GitHub's intended behavior, and the entirely natural consequence of having reconfigured the same custom domain too many times in close succession during a debugging session. The fix was to stop touching anything and wait.
The Self-Rescheduling Watcher
Waiting is not a satisfying engineering answer, but it was the right one. What needed to be built was not another fix — it was an observer that would notice when GitHub's internal scheduler eventually did re-issue each certificate, and at that moment flip https_enforced to true so the apex would start serving the new cert.
The pattern that worked is what we have started calling, internally, a self-rescheduling watcher. The full implementation is short:
- The watcher is a single PowerShell script that runs one observation cycle per invocation.
- Per cycle, for each of the ten domains, it reads the current Pages API state and checks whether https://[apex]/ads.txt serves the publisher line correctly.
- If the apex is serving correctly and https_enforced is already true, that domain is marked done and skipped on subsequent cycles.
- If the cert state has reached approved but the apex is not yet serving, the watcher sets https_enforced=true, which causes GitHub to start serving the new cert.
- If the cert is still in dns_changed, the watcher does nothing — just logs the state.
- At the end of the cycle, if any domain is still incomplete, the watcher throws an exception. If all are done, it returns cleanly.
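A condensed sketch of one observation cycle is below; the repo and domain pairs and the publisher line are placeholders, and the bookkeeping that remembers completed domains between cycles is omitted for brevity.

```powershell
# One watcher cycle (placeholder repos, domains, and publisher line).
$sites = @(
    @{ Repo = 'videobate';   Domain = 'videobate.com' }
    @{ Repo = 'inthisworld'; Domain = 'inthisworld.com' }
    # ... remaining domains
)
$expected   = 'google.com, pub-0000000000000000, DIRECT'   # placeholder publisher line
$incomplete = @()

foreach ($site in $sites) {
    $pages = gh api "repos/AIUNITES/$($site.Repo)/pages" | Out-String | ConvertFrom-Json

    # Does the apex already serve the correct ads.txt over valid TLS?
    $servesOk = $false
    try {
        $resp     = Invoke-WebRequest -Uri "https://$($site.Domain)/ads.txt" -UseBasicParsing -TimeoutSec 15
        $servesOk = (($resp.Content -split "`n")[0].Trim() -eq $expected)
    } catch { }

    if ($servesOk -and $pages.https_enforced) {
        Write-Output "$($site.Domain): done"
        continue
    }

    if ($pages.https_certificate.state -eq 'approved' -and -not $pages.https_enforced) {
        # The one real action: start serving the freshly issued cert.
        gh api -X PUT "repos/AIUNITES/$($site.Repo)/pages" -F https_enforced=true | Out-Null
        Write-Output "$($site.Domain): cert approved, https_enforced set"
    }
    else {
        # Still waiting (e.g. dns_changed): observe, log, do nothing else.
        Write-Output "$($site.Domain): waiting (cert state: $($pages.https_certificate.state))"
    }
    $incomplete += $site.Domain
}

# Throwing keeps the script queued for the next cycle; a clean return removes it.
if ($incomplete.Count -gt 0) { throw "Still waiting on: $($incomplete -join ', ')" }
```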
The watcher is queued in a network-wide Script Runner that fires every ten minutes via Windows Task Scheduler. The Script Runner has a run-once behavior: scripts that complete successfully are commented out of its queue and not run again. Scripts that throw stay in the queue. So a watcher that throws on incomplete state stays scheduled; a watcher that returns clean removes itself.
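For context, here is a minimal sketch of that run-once behavior, assuming the queue is a plain text file of script paths, one per line; the file name and layout are assumptions, not the real tooling.

```powershell
# Run each queued script; comment out the ones that finish cleanly (assumed queue format).
$queue = 'C:\ops\script-queue.txt'
$lines = @(Get-Content $queue)

for ($i = 0; $i -lt $lines.Count; $i++) {
    $entry = $lines[$i].Trim()
    if ($entry -eq '' -or $entry.StartsWith('#')) { continue }   # blank or already done
    try {
        & $entry                                                  # run the queued script
        $lines[$i] = "# done $(Get-Date -Format s) $entry"        # success: remove from future runs
    }
    catch {
        Write-Output "$entry threw; leaving it queued: $($_.Exception.Message)"
    }
}

Set-Content -Path $queue -Value $lines
```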
This pattern has two important properties. The first is that nothing has to know in advance how long the certificate provisioning will take. Whether it is twenty minutes or three days, the watcher will handle it. The second is that it does no damage on cycles where there is nothing to do. A cycle that observes dns_changed ten times in a row consumes negligible API quota and zero rate-limit budget against Let's Encrypt.
What we removed from the watcher
An earlier draft of the watcher attempted to be helpful by toggling https_enforced off and on during dns_changed cycles, in the belief that the toggle would speed things up. This was wrong in two ways. First, the toggle does not actually accelerate Let's Encrypt's rate-limit cooldown — nothing does. Second, repeated toggles consume cert request attempts against the rate-limit window. The watcher was making the situation it was supposed to be observing measurably worse.
The fix was the smallest possible: replace the toggle block with a no-op log line. The watcher's job is to observe and to act exactly once per domain, when there is a real action that will change the state in the desired direction. Restraint is a feature.
What AdSense Actually Does
Once each apex started serving valid TLS again, the question was how quickly AdSense would notice. The dashboard's "Last updated" column was the key signal. AdSense re-crawls each registered domain on its own schedule, ranging from hours to weeks depending on the activity level of the site.
A site can also be manually nudged toward a fresh crawl by clicking into its detail page in the AdSense Sites view and using the recheck action. This does not always trigger an immediate fetch — AdSense's crawler still has its own queue — but it advances the request in the queue from "scheduled" to "next available."
An interesting behavioral detail: AdSense's ads.txt crawler is more sensitive to TLS handshake errors than to other failure modes. A 404 on ads.txt generally results in a "Not found" status that clears within a day of fixing the file. A TLS handshake failure produces the same "Not found" status, but takes substantially longer to clear once fixed — consistent with the crawler tagging the host with a "do not retry too soon" flag.
Operating Lessons
Several lessons crystallized out of the sprint that are worth recording for any team running a multi-domain network on GitHub Pages with AdSense in the loop.
The cert covers the names you ask for, not the names you want
If your custom domain is configured as www.X.com, the cert will cover only www.X.com. Apex traffic will fail with a TLS error that looks like the kind of thing a support engineer would dismiss as a transient connection issue. Always set the GitHub Pages custom domain to the apex form. The www subdomain can still be served by adding a www CNAME DNS record pointing to the org page. GitHub will detect this and add www to the cert's SAN list automatically.
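A quick way to verify both halves of that DNS setup from a workstation is shown below; the domain is used only as an example. The apex should resolve to GitHub Pages' published A records, and www should be a CNAME to the org page.

```powershell
# Apex should return GitHub Pages' A records; www should be a CNAME to the org page.
Resolve-DnsName aiunites.com -Type A         | Select-Object Name, IPAddress
Resolve-DnsName www.aiunites.com -Type CNAME | Select-Object Name, NameHost
```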
ads.txt is a TLS test, not a content test
From AdSense's perspective, "Not found" can mean four very different things: the file is missing, the file is at the wrong path, the file has the wrong content, or the host's TLS is broken in a way that makes the file unreachable. A diagnostic script that hits the URL directly and reports the actual failure mode is worth more than any amount of dashboard staring.
Let's Encrypt rate limits are real and silent
Five duplicate certificates per seven days is enough for normal operation. It is not enough for an iterative debugging session. The rate limit is enforced silently from GitHub's perspective — you will not see an explicit error, just a cert state that does not progress. Plan changes in batches, get them right the first time, and resist the urge to "try one more thing."
Observability beats automation
The most useful piece of code in the whole sprint was not the script that fixed anything. It was the eighty-line diagnostic that hit each apex and reported its actual TLS state. Without that, the misdiagnosis would have continued indefinitely. Build the observer first; build the actuator second; and trust the observer when it disagrees with your model of the system.
The Final State
By the end of the sprint, all eighteen AIUNITES domains served HTTPS at their apex with valid Let's Encrypt certificates covering both apex and www. All eighteen ads.txt files returned the correct publisher line. The watcher continues to run every ten minutes as a passive monitor and is now part of the network's standard ops tooling — if any of the eighteen ever regresses, it will be detected within the next cycle.
The AdSense status across the network is gradually clearing. Some sites have already moved from "Not found" to "Authorized" on the next AdSense recheck. Others are still in the queue. None of the remaining work is technical — it is just calendar time.
This is the unglamorous side of running a connected web network at scale. The user-facing product surfaces are the part anyone sees. The cert provisioning, the DNS records, the third-party crawlers, and the rate limits are the part that determines whether any of those surfaces are reachable at all. Getting them right is not interesting work in the way that designing a notation system or shipping a feature is interesting work. It is, however, the work that determines whether the rest of it ships.
The AIUNITES Network
Eighteen connected products on custom domains, sharing a webring, an authentication layer, and an open notation standard for human movement. Built on plain HTML, vanilla JavaScript, and GitHub Pages.