How I Built Synestrology
I used Synestrology for nearly a year before I decided to build it for others. I've always been drawn to astrology, the deep, complex kind, not the pop-astrology version. When I lived in Thailand, I took some Human Design classes and found them interesting. Both systems are deeply enmeshed with numerology, and I couldn't help wondering what happens when you stop treating them as separate systems and instead view them as three lenses looking at the same subject.
When I started this, nobody was synthesizing them because it's a genuinely hard problem. The systems use different languages, different frameworks, and different ways of interpreting a birth chart. Getting them to talk to each other in a way that feels coherent and not like three readings stitched together is the whole challenge.
I wanted to build the thing that does that. An engine that takes your birth data, calculates your chart across all three systems, and synthesizes a single narrative that weaves them together.
the engine
Under the hood, Synestrology is a FastAPI application deployed on Railway. When someone submits their birth data and pays through Stripe (which is integrated with webhook verification and session metadata), a chain of things happens:
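Stripe's documented webhook scheme signs each event with an HMAC-SHA256 over `timestamp.payload` using the endpoint secret. As a sketch of what that verification step checks (in practice you'd use `stripe.Webhook.construct_event`; this standalone version just shows the math):

```python
import hashlib
import hmac


def verify_stripe_signature(payload: bytes, timestamp: str,
                            v1_sig: str, secret: str) -> bool:
    """Recompute Stripe's v1 signature: HMAC-SHA256 over 'timestamp.payload'."""
    signed = f"{timestamp}.".encode() + payload
    expected = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, v1_sig)
```

Only after the signature verifies does the handler trust the session metadata and kick off the pipeline below.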
First, the birth location gets geocoded. We set up a dual-provider system for this: GeoNames as primary, Nominatim as fallback. This matters because astrology and Human Design both depend on precise latitude, longitude, and timezone for the birth moment. Get the timezone wrong, and the rising sign shifts. Get the coordinates wrong, and the house system breaks. Early on, 'Portland, Oregon' and 'Portland, OR' could return different coordinates, cascading into different timezones, cascading into different charts. The dual-provider fallback and coordinate validation eventually made it reliable.
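The fallback-plus-validation pattern can be sketched like this. The provider names, the `GeoResult` shape, and the validation bounds are illustrative, not the actual code:

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class GeoResult:
    lat: float
    lon: float
    timezone: str


def geocode_with_fallback(place: str,
                          providers: Sequence[Callable[[str], GeoResult]]) -> GeoResult:
    """Try each provider in order; accept only results with sane coordinates."""
    for provider in providers:
        try:
            result = provider(place)
        except Exception:
            continue  # provider down or errored: fall through to the next one
        if result and -90 <= result.lat <= 90 and -180 <= result.lon <= 180:
            return result
    raise LookupError(f"all geocoding providers failed for {place!r}")
```

In the real pipeline the two callables would wrap the GeoNames and Nominatim HTTP APIs; the point is that a bad or missing answer from the primary never silently produces a bad chart.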
Then three calculation engines run independently. Western Tropical Astrology uses Kerykeion, which sits atop the Swiss Ephemeris, the same ephemeris NASA cross-references. Human Design uses a calculation engine I built from scratch in Python, because I didn't want to depend on someone else's API for a core calculation. Getting it right took a few days of research and implementation. Numerology is Pythagorean: computationally simpler, but still requiring precision.
We designed an aggregator that combines all three outputs into a single structured XML context. That context becomes the prompt for Claude Sonnet via the Anthropic API. The synthesis prompt doesn't say 'write a reading.' It defines how the three systems should interact, where the narrative should find convergence points, what to do when systems contradict, and how to handle the somatic layer that makes the reading feel embodied. It took roughly 30 revisions of that prompt, tested across different chart types, before I settled on the current version.
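The aggregation step is conceptually simple: each engine emits a dict, and the aggregator serializes them under one root so the prompt can address each system by name. A minimal sketch, with element names and dict shapes that are illustrative only:

```python
import xml.etree.ElementTree as ET


def build_context(astrology: dict, human_design: dict, numerology: dict) -> str:
    """Combine the three engines' outputs into one XML context string."""
    root = ET.Element("chart_context")
    for name, data in (("astrology", astrology),
                       ("human_design", human_design),
                       ("numerology", numerology)):
        section = ET.SubElement(root, name)
        for key, value in data.items():
            ET.SubElement(section, key).text = str(value)
    return ET.tostring(root, encoding="unicode")
```

Structured XML rather than free prose in the prompt makes it easy for the synthesis instructions to reference specific facts ("the sun element inside astrology") without ambiguity.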
The raw output runs through a validator with 9 auto-fix checks and 2 detection-only flags: degree notation, house numbers, life path accuracy, Human Design type and profile validation, defined center counts. If something's wrong, the validator catches it before the reading gets rendered as a branded PDF via WeasyPrint and delivered through Resend.
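Two toy examples show the difference between the two validator categories. The regexes and function names are hypothetical sketches, not the production checks:

```python
import re

# Matches spelled-out degree notation like "23 degrees 14"
DEGREE_RE = re.compile(r"(\d{1,2})\s*deg(?:rees)?\s*(\d{1,2})", re.IGNORECASE)


def fix_degree_notation(text: str) -> str:
    """Auto-fix check: normalize '23 degrees 14' to symbol form (23\u00b014')."""
    return DEGREE_RE.sub(lambda m: f"{m.group(1)}\u00b0{m.group(2)}'", text)


def check_defined_centers(text: str, expected: int) -> bool:
    """Detection-only flag: True if the stated defined-center count matches
    the chart (or no count is stated at all)."""
    m = re.search(r"(\d+)\s+defined centers?", text)
    return m is None or int(m.group(1)) == expected
```

Auto-fix checks rewrite the text in place; detection-only flags can't safely rewrite (there's no mechanical way to fix a wrong center count in flowing prose), so they block the reading for regeneration instead.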
Claude also powers the email layer. Lunation emails adapt to each customer's Human Design type and authority. Follow-up upsell emails are personalized to chart data. Both run through Claude Haiku for cost efficiency at scale.
sneaky snake
The scariest failures were the ones that looked correct.
A reading would come back beautifully written, structurally sound, with accurate planetary positions in the text. Everything checked out, except the validator caught that the reading mentioned 4 defined centers when the chart had 5. The narrative was so fluent that a human reader would never notice.
This is a known characteristic of LLM output: confident fluency regardless of factual accuracy. The response reads like it's correct, and most of the time it is. But 'most of the time' isn't good enough when someone paid for their reading and will check whether their life path number is actually 7.
Same problem with astronomical data. When we built a verification pipeline that checked every entry against Swiss Ephemeris, the results were ugly. Wrong dates. Wrong degrees. Stations listed on the wrong date. Everyone copies from each other, and errors propagate. So we built the observatory: a full Swiss Ephemeris integration that computes planetary positions for any date, cross-checked against NASA's JPL Horizons API. No astronomical claim ships without computational verification.
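The core of any cross-check like this is a tolerance comparison, and it has one trap: ecliptic longitude wraps at 360°, so 359.9999° and 0.0001° are nearly identical, not 360° apart. A sketch of the comparison (the function name and the 1-arcsecond default are illustrative):

```python
def positions_agree(lon_a: float, lon_b: float, tol_arcsec: float = 1.0) -> bool:
    """Compare two ecliptic longitudes in degrees, handling 0/360 wraparound."""
    diff = abs(lon_a - lon_b) % 360.0
    diff = min(diff, 360.0 - diff)  # shortest angular distance
    return diff * 3600.0 <= tol_arcsec
```

In the observatory, one input would come from the local Swiss Ephemeris computation and the other from JPL Horizons; a disagreement beyond tolerance means the claim doesn't ship.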
our house
Early Synestrology had no database. Readings lived in memory. Customer data existed only in Stripe metadata. If the server restarted, everything in progress was gone.
Fine for testing. Not fine when real people are paying real money.
We migrated to Supabase. Six migrations over a few months, Row Level Security on every table. Customers, readings, email logs, calendar subscriptions, reviews, gift cards, a retry queue.
The retry queue was a specific lesson. When a reading generation fails (API timeout, geocoding error, or an ephemeris edge case), the job is enqueued with exponential backoff. The first retry fires at 5 minutes, the second at 15, the third at 60. If all three fail, an alert email fires. No customer should ever pay for a reading and not receive it. That's the line.
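The scheduling logic is small enough to show in full. This is a sketch under the schedule described above; the function and field names are hypothetical:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Backoff schedule from the text: 5, 15, then 60 minutes
RETRY_DELAYS_MIN = [5, 15, 60]


def next_retry_at(attempt: int,
                  now: Optional[datetime] = None) -> Optional[datetime]:
    """Return when a failed job should run again, or None after the final attempt.

    attempt is zero-based: attempt 0 schedules the first retry.
    """
    if attempt >= len(RETRY_DELAYS_MIN):
        return None  # retries exhausted: fire the alert email instead
    now = now or datetime.now(timezone.utc)
    return now + timedelta(minutes=RETRY_DELAYS_MIN[attempt])
```

A worker polling the queue runs any job whose `next_retry_at` is in the past; a `None` return is the signal to escalate to a human.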
Birth data is sensitive. People are trusting us with the exact moment and location of their birth. Service role only, no anonymous access.
banana pudding
257 passing tests cover calculation accuracy, API contract validation, database operations, email formatting, and retry logic. The infrastructure means that a customer in Tokyo at 3 AM gets the same reliable experience as someone in Brooklyn at noon.
GitHub Actions run daily cron jobs for email delivery, monitored through Sentry and tracked in PostHog. Analytics and error logs are checked daily. That data informs what gets built next.
See the product: synestrology.com
Free tool: Cosmic Blueprint