The Accidental Content Machine
Turning documentation into "Building in Public"
Life-Copilot Week 1 – Technical Progress
Five days ago I built a personal operating system using Claude Code. This week I’ve been using it daily — and each pain point became an engineering problem to solve.
Git workflow in practice
Day 2, I finally understood branches. Not from a tutorial — from realizing my AltsPipeline repo (separate from life-copilot, to be clear, and very much a work in progress) had 11 commits on a feature branch and 1 on main. Merged via GitHub, learned the mental model: main = production, feature branch = draft. Simple, but now I actually get it because I needed it.
Data pipeline expansion
Day 3, I wanted expense tracking. That meant modifying sync-all.py to handle a 9th Google Sheets tab, updating the schema configs in sheet_helper.py, and verifying the CSV output format matched my existing data structure. Added 15 lines of Python across two files. The system now syncs 9 data sources + calendar every 3 hours via Windows Task Scheduler.
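For a feel of what that change looked like, here's a minimal sketch of the pattern: a per-tab config plus a small sync function using gspread. The names here (TAB_CONFIGS, the "Expenses" tab, its columns, the spreadsheet title) are illustrative stand-ins, not my actual config.

```python
# Illustrative sketch only; config names, tab names, and columns are hypothetical.
import csv
import gspread

TAB_CONFIGS = {
    # ...the eight existing tabs...
    "Expenses": {"columns": ["date", "category", "amount", "notes"],
                 "output": "data/expenses.csv"},
}

def sync_tab(sheet, tab_name, config):
    """Pull one worksheet and write it out as CSV in the expected column order."""
    worksheet = sheet.worksheet(tab_name)
    rows = worksheet.get_all_records()  # list of dicts keyed by the header row
    with open(config["output"], "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=config["columns"], extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)

# gc = gspread.service_account(filename="credentials.json")
# sheet = gc.open("Life-Copilot")
# for name, cfg in TAB_CONFIGS.items():
#     sync_tab(sheet, name, cfg)
```

The nice part of this shape is that adding a data source is mostly a config entry, which is roughly why the real change stayed around 15 lines.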
Refactoring and tech debt
Day 4, I had 9 one-off scripts cluttering the root directory — fetch-calendar.py, add-reminder.py, etc. Archived them to archive/scripts/, consolidated functionality into the main sync module, and fixed stale documentation references. Classic cleanup: reduce surface area, single source of truth. (Note to self: always use plan mode… for pretty much everything but especially this).
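The rough shape of the consolidation, sketched below with hypothetical function and command names: the one-off scripts become functions behind a single entry point instead of nine files at the root.

```python
# Hypothetical shape of the consolidated sync module; names are illustrative.
import argparse

def fetch_calendar():
    """Formerly fetch-calendar.py."""
    ...

def add_reminder(text: str):
    """Formerly add-reminder.py."""
    ...

def main():
    parser = argparse.ArgumentParser(description="Life-Copilot sync utilities")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("fetch-calendar")
    reminder = sub.add_parser("add-reminder")
    reminder.add_argument("text")

    args = parser.parse_args()
    if args.command == "fetch-calendar":
        fetch_calendar()
    elif args.command == "add-reminder":
        add_reminder(args.text)

if __name__ == "__main__":
    main()
```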
Debugging a production issue
Day 5, auto-sync started failing. The scheduled task runs from a different directory than where I develop, so relative file paths broke. Fixed by updating the config loader to resolve paths relative to the script’s location, not the working directory. A classic “works on my machine” bug: it only surfaces when the code runs in a different context.
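A minimal sketch of that kind of fix, assuming a load_config helper and a config.json (the real loader may look different): anchor paths to the script's own directory instead of the current working directory.

```python
# Sketch: resolve config paths relative to this file, not the working directory.
import json
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent  # directory this script lives in

def load_config(name: str = "config.json") -> dict:
    config_path = BASE_DIR / name  # stable no matter where Task Scheduler launches from
    with open(config_path, encoding="utf-8") as f:
        return json.load(f)
```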
Configuration as documentation
After that debugging session, I added a Technical Infrastructure section to CLAUDE.md with @ file references — so the next time something breaks, debugging starts with the right context instead of grep searches.
Automated documentation for building in public
I wouldn’t have been able to recall all these details without the session archive system. Every session ends with a /shutdown command that logs what got done, what broke, what I learned. The archives become raw material — for this blog post, for future reference, potentially for automated pipelines to other platforms. I accidentally built a content capture layer while trying to give Claude context between sessions.
Security as a first-class concern
Day 4, I ran a security audit using Claude’s Plan Mode. The goal: verify sensitive data isn’t leaking into git. Result: credentials, state files, health data, and session archives all properly gitignored (not to mention this is probably staying all local, for now). But a one-time audit isn’t enough.
I’m considering building a security subagent — something that periodically scans for secrets in tracked files, sensitive patterns in committed code, dependency vulnerabilities, and gitignore gaps. I asked Claude what tooling exists for this: Semgrep for static analysis, GitLeaks or TruffleHog for secrets detection, Safety or Snyk for dependency scanning. For training/context materials: OWASP Secure Coding Guidelines, CWE Top 25, and Anthropic’s docs on building security-aware agents. Haven’t had a chance to look into them yet, and very much open to other recommendations — but the goal is treating security as continuous (almost like having an infosec team), not a one-time checkbox.
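To make that concrete, here's a rough sketch of the subagent's simplest check: scan git-tracked files for secret-looking patterns. The regexes and output format are illustrative assumptions; a real version would lean on GitLeaks, TruffleHog, or Semgrep rules rather than hand-rolled patterns.

```python
# Illustrative only: a naive secrets scan over git-tracked files.
# Real scanning should use dedicated tools (GitLeaks, TruffleHog, Semgrep).
import re
import subprocess
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key id
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key blocks
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def tracked_files() -> list[str]:
    """Ask git which files are tracked, so gitignored files are skipped by definition."""
    out = subprocess.run(["git", "ls-files"], capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def scan() -> list[tuple[str, int, str]]:
    findings = []
    for path in tracked_files():
        try:
            text = Path(path).read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    findings.append((path, lineno, pattern.pattern))
    return findings

if __name__ == "__main__":
    for path, lineno, pattern in scan():
        print(f"{path}:{lineno} matches {pattern}")
```

Running something like this on a schedule (or as a pre-commit hook) is what I mean by continuous rather than one-time.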
The stack so far
Python 3.11, Google Sheets/Calendar APIs via gspread and Google OAuth, Windows Task Scheduler for automation, Git for version control, structured markdown for state management.
The learning model
Real usage surfaces real problems. Real problems require real engineering. I’m not building a portfolio project — I’m building infrastructure I depend on, which means I have to actually make it work. I’m literally trying to run my life on this thing. It can’t miss stuff.

