
Mar 08, 2026

How to Build an AI Agent That Schedules X Posts

I built an X post scheduler from scratch — Express, Postgres, cron — and had an AI coding agent write most of it. Here's the architecture, the deployment, and why simple AI agent automation beats over-engineering.


Why AI Agents Need Their Own Scheduling Infrastructure

AI agent automation starts with a simple truth: your agent is only as useful as the infrastructure it can call. I run an AI cockpit — a workspace where I build and orchestrate agents daily. One of those agents handles content distribution. It needed to post to X on a schedule: single posts, threads, batches.

Every scheduling tool I looked at was either too expensive for what it does, too locked down to integrate with an agent, or both. I didn't want a dashboard. I wanted a raw API my agent could hit.

So I did what I always do: I built the shovel.

This is Drop #001 — the first tool in the Starkslab flywheel. And the story of how it got built is half the point.

What Are AI Agent Shovels and Why Do They Matter?

During the gold rush, the people who got reliably rich weren't the miners — they were the ones selling shovels. The same pattern applies to building AI agents.

Everyone's writing about prompt chains, orchestration frameworks, and multi-agent architectures. That stuff matters. But when you sit down to ship an actual agent that does a real job — posting content, monitoring data, managing workflows — you hit a wall immediately. The agent needs infrastructure. A scheduler. A database. A way to call an API. A place to run.

I call these agent shovels: the infrastructure pieces that AI agents need to actually function in production. Not the glamorous stuff. The plumbing that nobody writes about.

The X Scheduler is a shovel. It doesn't think. It doesn't reason. It takes a scheduled post, stores it, and fires it when the time comes. But without it, my content agent is just a language model that can write tweets but can't send them.

Every shovel you build makes the next agent faster to ship. My lightweight AI agent framework can call the scheduler's API as a tool. My CLI tools can pipe analytics data into posts. The infrastructure compounds.

The OpenClaw heartbeat system is a perfect example — an autonomous agent that schedules its own future work. But it can only do that because someone built the scheduling infrastructure first.

That's the thesis: build the shovels first, and the agents become trivial to wire up.

How Does AI Agent Automation Work for Social Media?

The architecture behind this scheduler is intentionally boring. That's a feature.

Here's the flow:

  1. Agent (or human) hits the Express API with a POST request — includes the text, scheduled time, and optional thread/reply metadata
  2. API validates and stores the post in Postgres — status: pending
  3. Cron worker wakes up every 5 minutes, queries for posts where scheduledFor <= now and status = pending
  4. Worker fires each post via the X API — marks as sent on success or failed on error
  5. For threads, the worker processes posts in sequence with configurable delays, passing each tweet's ID as the in_reply_to for the next one

That's it. Express API → Postgres → Cron worker → X API. No Redis. No message queue. No pub/sub. No Kafka. Just a web server, a database, and a timer.
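The worker's selection step can be sketched in a few lines. This is an illustrative sketch, not the repo's actual code — the field names (`status`, `scheduledFor`) follow the flow described above, but the real schema may differ:

```javascript
// Pick out posts that are due: pending status and a scheduled time
// at or before "now". In production this would be a SQL WHERE clause;
// here it's expressed as a plain filter for clarity.
function selectDuePosts(posts, now = new Date()) {
  return posts.filter(
    (p) => p.status === "pending" && new Date(p.scheduledFor) <= now
  );
}

// Example: only post #1 is both pending and past due.
const due = selectDuePosts(
  [
    { id: 1, status: "pending", scheduledFor: "2026-01-01T00:00:00Z" },
    { id: 2, status: "pending", scheduledFor: "2099-01-01T00:00:00Z" },
    { id: 3, status: "sent", scheduledFor: "2026-01-01T00:00:00Z" },
  ],
  new Date("2026-02-06T10:00:00Z")
);
```

The equivalent SQL (`WHERE status = 'pending' AND scheduled_for <= NOW()`) is all the "queue" this system needs.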

The schema bootstraps on first run — the API checks if tables exist and creates them if not. No migration tooling, no ORM, just raw SQL that creates what it needs. One less thing to configure, one less thing to break.
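The whole "check if tables exist and create them" step can collapse into a single idempotent statement. A minimal sketch of the idea — the column names here are assumptions based on the flow above, not the repo's actual schema:

```javascript
// CREATE TABLE IF NOT EXISTS makes bootstrap safe to run on every
// startup: it creates the table the first time and is a no-op after.
const BOOTSTRAP_SQL = `
  CREATE TABLE IF NOT EXISTS scheduled_posts (
    id            SERIAL PRIMARY KEY,
    text          TEXT NOT NULL,
    scheduled_for TIMESTAMPTZ NOT NULL,
    thread_id     INTEGER,
    status        TEXT NOT NULL DEFAULT 'pending'
  );
`;

// Run once at startup with any Postgres client that exposes query(),
// e.g. a node-postgres Pool.
async function bootstrap(pool) {
  await pool.query(BOOTSTRAP_SQL);
}
```

No migration history to track, no ORM models to keep in sync — the tradeoff is that schema *changes* later need manual handling, which is acceptable for a tool this small.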

POST /schedule         → Schedule a single post
POST /schedule/thread  → Schedule a thread (array of posts)
POST /schedule/batch   → Schedule multiple independent posts
DELETE /schedule/:id   → Cancel a pending post
GET /schedule          → List all scheduled posts
POST /schedule/dry-run → Test the pipeline without posting

The dry-run endpoint is more useful than it sounds. When you're building AI agent automation, you want your agent to test its own outputs before going live. Dry-run lets it validate the full pipeline — auth, scheduling logic, thread assembly — without actually hitting the X API.

How an AI Coding Agent Built Its Own Scheduling Tool

Here's the meta part. This scheduler wasn't hand-coded. An AI coding agent built it.

Inside my cockpit, I have agents connected to GitHub. I gave one of them a brief: build me a scheduler. I pointed it at the X API docs, told it I wanted Express + Postgres, cron-based worker, deployable on Railway.

It scaffolded the repo. Wrote the routes. Wrote the database bootstrapping logic. Wrote the worker with cron scheduling. Pushed it to GitHub. The whole thing — from prompt to working repo — took less time than I'd have spent setting up a new Express project manually.

But here's the part that matters: I still had to review and fix it.

The agent was too optimistic about thread handling. Its original logic would fire off each tweet in a thread independently. If tweet #3 in a 5-tweet thread failed (rate limit, auth error, whatever), tweets #4 and #5 would still fire — orphaned posts with no parent thread. Broken context, confused followers.

I rewrote the error handling: if any tweet in a thread fails, cancel all remaining posts in that thread. Mark them cancelled, log the failure, move on. It's a simple fix, but it's the kind of fix that only comes from thinking about production failure modes — something the agent didn't anticipate.
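The fix can be sketched as a sequential loop with cancellation. This is an illustrative version, not the repo's code — `sendPost` stands in for the real X API call, and the result shape is hypothetical:

```javascript
// Process a thread strictly in order, chaining each tweet's ID as the
// reply target for the next. If any post fails, everything after it is
// marked cancelled instead of firing as an orphaned reply.
async function processThread(posts, sendPost) {
  let previousId = null;
  const results = [];
  for (const post of posts) {
    try {
      const tweet = await sendPost(post.text, previousId);
      previousId = tweet.id;
      results.push({ id: post.id, status: "sent" });
    } catch (err) {
      results.push({ id: post.id, status: "failed", error: String(err) });
      // Cancel all remaining posts in the thread: no parent, no post.
      for (const remaining of posts.slice(results.length)) {
        results.push({ id: remaining.id, status: "cancelled" });
      }
      break;
    }
  }
  return results;
}
```

The key design choice is that the thread is the unit of success or failure, not the individual tweet.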

This is the real pattern of working with AI coding agents today: they get you 80-90% of the way, fast. The last 10-20% is human judgment about edge cases. And that's fine. That 80% acceleration is enormous. I just don't pretend the agent shipped production code without review.

The repo is public: github.com/fedewedreamlabsio/xscheduler.

How to Deploy an AI Agent's Infrastructure on Railway

You need three things: a Railway account (free tier works), X API credentials, and a GitHub account.

Fork the repo. Create a new Railway project with two services from the same fork:

Service 1 — API:

  • Start command: npm start
  • Add a Railway Postgres database (auto-injects DATABASE_URL)
  • Set your environment variables:
TWITTER_API_KEY=your_key
TWITTER_API_SECRET=your_secret
TWITTER_ACCESS_TOKEN=your_token
TWITTER_ACCESS_SECRET=your_token_secret
API_KEY=pick_something_strong

Service 2 — Worker:

  • Start command: npm run worker
  • Cron: */5 * * * *

Schema bootstraps on first run. No migrations to think about. No ORM config. No seed files. The API creates the tables it needs when it first connects to Postgres.

Test it:

curl -X POST https://your-app.railway.app/schedule \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "First post from my self-hosted scheduler",
    "scheduledFor": "2026-02-06T10:00:00Z"
  }'

You should get back a JSON response with the post ID and status: pending. Wait for the next cron cycle (up to 5 minutes), and the post fires.

Total deploy time: under 10 minutes from fork to first scheduled post. Railway's free tier handles the load for personal/small-team use without issue.

For threads:

curl -X POST https://your-app.railway.app/schedule/thread \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "posts": [
      {"text": "Thread opener — the hook"},
      {"text": "Second tweet — the context"},
      {"text": "Third tweet — the punchline"}
    ],
    "scheduledFor": "2026-02-06T12:00:00Z",
    "delayBetween": 30
  }'

The delayBetween field (in seconds) spaces out the thread posts so they don't fire simultaneously — which both looks more natural and helps avoid rate limits.
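The spacing logic is simple arithmetic: each post in the thread gets the start time plus its index times the delay. A sketch under that assumption (function name and exact behavior are illustrative):

```javascript
// Compute the fire time for each post in a thread, spaced by
// delayBetween seconds from the thread's scheduled start.
function threadSchedule(startIso, count, delayBetween) {
  const start = new Date(startIso).getTime();
  return Array.from({ length: count }, (_, i) =>
    new Date(start + i * delayBetween * 1000).toISOString()
  );
}
```

For the three-post thread above with `delayBetween: 30`, the posts would fire at 12:00:00, 12:00:30, and 12:01:00.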

What Went Wrong: Rate Limits and Thread Failures

Two things bit me in production.

Rate limits are tighter than documented. The X API free tier advertises certain limits, but in practice, if you're batching 10+ posts close together, you'll start seeing 429 responses around post 5-7. The v2 API's rate limit headers are helpful — the worker reads x-rate-limit-remaining and backs off when it gets low — but I didn't have that logic on the first deploy. First batch run: 3 posts went through, 7 failed. Lesson learned.

The fix: the worker now checks rate limit headers after each post and sleeps if remaining calls drop below a threshold. For batch scheduling, space your posts at least 2-3 minutes apart. This is especially important if you're running AI agent automation at any kind of volume — your agent needs to respect rate limits, not just fire-and-forget.
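The backoff check can be sketched as a pure function over the response headers. The header names (`x-rate-limit-remaining`, `x-rate-limit-reset`) are what the X API v2 returns; the threshold value here is illustrative, not the repo's actual number:

```javascript
// Return how long (in ms) the worker should sleep before the next call.
// Zero means keep going; anything else means wait until the rate-limit
// window resets.
function backoffDelayMs(headers, threshold = 5) {
  const remaining = Number(headers["x-rate-limit-remaining"]);
  if (Number.isNaN(remaining) || remaining > threshold) return 0;
  // x-rate-limit-reset is a Unix timestamp in seconds.
  const resetAt = Number(headers["x-rate-limit-reset"]) * 1000;
  return Math.max(0, resetAt - Date.now());
}
```

The worker calls this after every post and sleeps for the returned duration, so a batch slows itself down instead of burning through the quota and collecting 429s.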

Thread failure cascading. I mentioned this above, but it's worth the detail. The original agent-written code had no concept of thread integrity. Each post in a thread was an independent scheduled item that happened to have a thread_id linking them. If post #2 failed, post #3 would try to reply to a tweet that didn't exist — and the X API would either reject it or create an orphaned post with no context.

The fix was straightforward: when the worker processes a thread, it does so sequentially. If any post fails, all subsequent posts in that thread get marked cancelled with a reference to the failure. The agent that scheduled the thread gets a webhook notification (if configured) so it can retry or alert.

This is exactly the kind of production thinking that AI coding agents miss. They write happy-path code. The edge cases — rate limits, partial failures, cascading errors — that's still human territory. For now.

Why Simple AI Agent Architecture Beats Over-Engineering

I've seen agent infrastructure projects that use Redis for job queues, RabbitMQ for message passing, Kubernetes for orchestration, and a monitoring stack bigger than the actual application. For a scheduler that posts tweets.

Here's what this scheduler uses:

  • Express — routing and API
  • Postgres — storage and state
  • node-cron — timer that fires every 5 minutes
  • The X API client — posts the tweets

Four dependencies that matter. No container orchestration. No message broker. No cache layer. The entire codebase fits in your head.

When you're building AI agent tools — the actual infrastructure agents depend on — simplicity isn't just elegant, it's practical. Every additional system is another thing your agent needs credentials for, another thing that can fail, another thing you're debugging at 2 AM. AI agent automation works best when each piece of infrastructure is small, focused, and independently deployable.

The X Scheduler does one thing: schedule and post. The agent framework does one thing: orchestrate tools. The CLI analytics tools do one thing: surface data. Wire them together and you have a system. But each piece is simple enough to reason about in isolation.

That's the pattern I keep coming back to when building AI agents: small tools, clean APIs, composable by default.

What's Next: From One Shovel to an Ecosystem

This scheduler was Drop #001 — the first tool shipped publicly from the cockpit. Since then, there's been a Python agent framework, CLI tools for analytics, and more in the pipeline.

The pattern is always the same. Agent needs something. Build the smallest working system. Deploy it. Hand the agent the API key. Move on to the next bottleneck.

Most AI agent content out there is about frameworks and abstractions. Which orchestrator to use, how to chain prompts, theoretical architecture diagrams. That stuff has its place. But in the trenches, what you actually need for AI agent automation is a scheduler that works, a database that's set up, a cron job that fires, a sandbox that runs code.

That's what I'm documenting at Starkslab. The agent shovels. The infrastructure that makes AI agent automation actually work in production.

The repo is open: github.com/fedewedreamlabsio/xscheduler. Fork it, deploy it, wire it into your own agent. Or build something better — that's the point.

Start with the tool you need most. Ship it. Move on.

That's escape velocity.
