# Luis Lacambra — Product Builder

> Product builder working at the intersection of software, finance, and AI.

Website: https://lacambra.org

I design and build products, lead teams, and explore how modern tools change the way software gets made. Currently focused on autonomous workflows, wealth infrastructure, and idea curation.

Product leader and builder with 15 years in fintech, SaaS, and crypto. I work at the intersection of strategy, execution, and design. Actively building an AI content distribution platform for creators and wealth management tools for customers sitting between retail and private banking, and advising startups on AI-native product development.

---

## About

I'm Luis. I've spent the last 15 years building products in fintech and SaaS, mostly in Europe, based out of Zurich. I've been part of early teams that grew into something much bigger, led product at a few different companies along the way, and more recently started building my own things. With that, I'm going deep down the building rabbit hole.

After all that time working alongside really talented engineers, designers, and product people, I thought I had a decent sense of how software gets made. The last nine months have genuinely challenged that. The week-on-week pace of change in my product development workflow is unlike anything I saw in the previous decade. I find it equal parts daunting and fascinating. New reasoning models and open-source AI are rapidly bringing new colours to the mix, and combinations of tools that I genuinely didn't think were possible even six months ago.

The logbook is where I document all of it: the workflows that work, the hacks worth stealing, and the time-sink experiments that absolutely did not.

### Topics I'm exploring

**Product building:** Modern tools and AI are changing how software gets designed, built, and shipped faster than most people realise.
When the cost of building a prototype approaches zero, product thinking has to evolve in response. That shift is what I'm most obsessed with right now.

**Wealth management:** Most people with meaningful assets are stuck between retail banking tools that are too basic and private banking that requires too much trust in opaque advice. There has to be a better way to understand and manage fragmented financial assets across banks, brokerages, crypto, and real estate.

**Agent-based systems:** Autonomous agents can now perform complex multi-step workflows that would have required a team six months ago. The interesting problem is not building the agents but making the collaboration between humans and agents reliable, legible, and genuinely useful.

**Idea curation:** Long-form video and podcasts contain some of the most valuable thinking being produced today, and most of it gets lost. Naive summarisation flattens the nuance. The real challenge is preserving the full context and intellectual structure of an idea, not just extracting the headline.

---

## Work

A selection of product work across software, finance, and infrastructure.

### uBroader — AI Content Distribution

- Role: Founder
- Period: 2026–present
- Website: ubroader.com
- Building an AI-powered platform that turns long-form video and podcast content into ready-to-publish social posts, threads, and newsletters. Focused on creator workflows and content distribution at scale.
- Key contributions:
  - Product strategy and end-to-end UX for content transformation flows
  - AI pipeline design for transcript extraction, idea identification, and multi-format output
  - Go-to-market positioning for creators, founders, and podcast hosts

### Axiomas — Fintech Venture Studio

- Role: Co-founder
- Period: 2025–present
- Website: axiomas.ai
- Building and scaling proprietary Fintech ventures — from digital banking to wealth management — combining deep product expertise with full-stack execution across regulated markets.
- Key contributions:
  - Venture creation and product strategy for proprietary Fintech builds
  - Full-stack execution from regulatory strategy to product launch
  - Advisory for aligned partners in digital banking and wealth management

### SOAX — B2B SaaS

- Role: Head of Product
- Period: 2024–2025
- Website: soax.com
- SOAX is a proxy and data infrastructure platform serving enterprise clients that need reliable, compliant web data access at scale. The market was heavily commoditised, with competitors racing to the bottom on price and margins deteriorating across the board.
- Key contributions:
  - Repositioned SOAX as a premium mid-market player, driving +$1M ARR and +10% EBITDA within 12 months
  - Executed portfolio restructuring, exiting two underperforming product lines and reallocating capacity to core growth bets
  - Achieved a Top 3 market ranking in a critical industry report through pricing and performance improvements

### Saidler & Co — Bitcoin Custody & Digital Media

- Role: Head of Product
- Period: 2022–2023
- Website: saidler.com
- Saidler & Co is a family office with a portfolio spanning digital assets and media. The product mandate covered two distinct businesses simultaneously: building an institutional Bitcoin self-custody platform from zero inside Swiss regulatory constraints, and repositioning a stagnant digital newspaper for growth.
- Key contributions:
  - Built the self-custody product from 0 to 1, led GTM strategy, and secured the firm's first enterprise clients on time and within budget
  - Implemented PLG activation and retention loops on Standard.sk, delivering 100% subscriber growth and a 50% MAU lift
  - Introduced the Shape Up methodology to run disciplined discovery and delivery cycles across both products

### Numbrs — Consumer Fintech

- Role: Head of Product & Design
- Period: 2013–2022
- Numbrs was a Swiss fintech startup building a multibanking platform that aggregated accounts, payments, and financial products across European banks. The company grew from early-stage startup to unicorn over a decade, navigating the complexity of PSD2, Open Banking regulation, and a highly competitive mobile banking market.
- Key contributions:
  - Turned a 2-star app into a 4.7-star, number-one-ranked banking app in the German App Store through data-driven, user-centric prioritisation
  - Scaled the platform to 2.2M downloads and 1.5M linked accounts, supporting unicorn valuation milestones
  - Led EU expansion using PSD2 and Open Banking, securing bank partnerships and reducing integration costs by 30%

---

## Builder's Logbook

A running record of iterations, experiments, and implementation learnings. As I build, I document every problem I hit, how I solved it, and what I'd do differently. Always keen to compare notes with fellow builders.

---

### #01 — Every deploy was a manual SSH session

Date: Dec 25, 2025
Tags: docker, devops, infrastructure
Stack: Docker, docker-compose, GitHub Actions

**Context:** Deploying backend services meant SSH-ing into a server, pulling code, running `npm install`, and praying nothing broke. I am not a devops engineer. I am a product builder who needed a repeatable way to ship backend code without the manual ritual.

**Change:**

Attempt 1 (Dec 25): I had never written a multi-stage Dockerfile before. Claude Code walked me through the pattern: one stage to compile TypeScript, a second stage that copies only the compiled output and production dependencies. The result is a small, deterministic image. Added `docker-compose.yml` so the full backend starts with one command.

Attempt 2 (Dec 25): Added a GitHub Actions workflow to validate Docker builds on every PR. Also wrote deployment and developer workflow docs so a future collaborator can get running without asking me.

**Result:** Local development went from 'install Node, install Postgres, configure env vars, hope for the best' to `docker-compose up`. Deployment became a container push instead of a manual ritual.
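The two-stage pattern from Attempt 1 can be sketched roughly like this. Treat it as a template, not the actual file: the base image, port, and `dist/main.js` entry point are assumptions based on the stack described here.

```dockerfile
# Stage 1: install all deps (including dev) and compile TypeScript
FROM node:22-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build                 # emits dist/

# Stage 2: ship only the compiled output and production deps
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
RUN npm prune --production        # drop devDependencies from the final image
USER node                         # run as a non-root user
EXPOSE 3000
CMD ["node", "dist/main.js"]
```

The point of the split is that nothing from the compile step (TypeScript, dev tooling, source files) survives into the image you ship.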
**What worked:** Multi-stage builds with `npm prune --production`. Final images are lean. Non-root user inside containers. Health check endpoints for orchestration.

**What didn't:** I did not anticipate how slow NestJS + Prisma startup would be inside containers. The database connection initialization takes longer than expected, which became a problem later when configuring health checks.

**My takeaway:** You do not need to be a devops engineer to containerize your services. A product builder with Claude Code can set this up in a day, and the payoff is immediate.

**Builder guide:**

1. Write a multi-stage Dockerfile: the first stage installs deps and compiles, the second stage copies `dist/` and runs `npm prune --production`.
2. Add a `docker-compose.yml` that starts all your services with one command.
3. Add a `/health` endpoint to each service.
4. Add `.dockerignore` files (copy your `.gitignore` as a starting point).
5. Test with `docker-compose up`. If it works locally, it will work in production.

---

### #02 — Shipping without a safety net

Date: Dec 25, 2025
Tags: ci-cd, github-actions, automation

**Context:** There was no CI. Every push to main was a leap of faith. I trusted myself to run tests locally, but that is not a system. It is a habit, and habits break.

**Change:**

Attempt 1 (Dec 25): Created the first GitHub Actions workflows for staging and production deployments alongside a batch of feature work. The idea is that deploys should be triggered by branch events, not by someone remembering to run a script. I had not written GitHub Actions before, so this was learning by doing.

**Result:** First automated deployment pipeline. Staging deploys on push, production on merge to main. Not perfect yet, but the foundation was there.

**What worked:** Having any automation at all. The psychological shift from 'I need to remember to deploy' to 'it just happens' is bigger than the technical change.

**What didn't:** The first pass was too coupled to feature work. It shipped inside a 47-file commit that also included frontend changes and database migrations, which made it harder to debug when workflows failed later.

**My takeaway:** Ship CI early, even if it is basic. A simple deploy-on-push workflow that takes 30 minutes to set up saves hours of anxiety every week.

**Builder guide:**

1. Create `.github/workflows/deploy-staging.yml` triggered on push to your staging branch.
2. Keep the workflow simple: checkout, build, deploy. Do not add tests yet; that comes later.
3. Create a separate workflow for production with `workflow_dispatch` (manual trigger only).
4. Iterate from there. The first version does not need to be complete.

---

### #03 — Docker builds locally but fails on Railway

Date: Jan 2, 2026
Tags: railway, deployment, infrastructure
Stack: Railway, railway.toml

**Context:** I chose Railway because it seemed straightforward for deploying Docker containers. But Railway defaulted to Nixpacks for builds, which skipped the TypeScript compilation step entirely. The container would start and immediately crash with 'Cannot find module /app/dist/main'. I spent more time debugging this than writing the Dockerfiles.

**Change:**

Attempt 1 (Jan 2): Added `railway.toml` to force Dockerfile-based builds. One config file, 15 lines.

Attempt 2 (Jan 2): Extended the health check timeout from the default to 120 seconds. NestJS + Prisma needs time to establish the database connection on cold start.

Attempt 3 (Jan 2): Set up the full environment pipeline: development, staging, production.

Attempt 4 (Jan 2): Repeated the pattern for the agent-service. By the fourth service, the `railway.toml` was copy-paste.

**Result:** All four backend services deploy reliably to Railway. Production requires a manual trigger. Health checks prevent serving traffic before the app is actually ready.

**What worked:** `railway.toml` for explicit build control. Manual-only production deploys. Once the pattern was established, adding new services was trivial.
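The shape of that copy-paste `railway.toml` is roughly the following. This is a sketch of Railway's config-as-code format as I understand it, not the actual file; double-check the exact key names and casing against Railway's current docs.

```toml
[build]
builder = "DOCKERFILE"          # opt out of Nixpacks auto-detection
dockerfilePath = "Dockerfile"

[deploy]
healthcheckPath = "/health"
healthcheckTimeout = 120        # seconds; NestJS + Prisma cold start needs the headroom
restartPolicyType = "ON_FAILURE"
```

The two lines doing the real work are the explicit builder (fixes the 'Cannot find module' crash) and the extended health check timeout (fixes the false failures on cold start).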
**What didn't:** I assumed Railway would auto-detect the Dockerfile. It did not. I also underestimated NestJS + Prisma startup time, so the default health check timeout caused false failures. Both cost me a few hours of debugging.

**My takeaway:** Platform magic is convenient until it does the wrong thing. When deploying containers, always be explicit about the builder and the health check timeout.

**Builder guide:**

1. Create a `railway.toml` in each service directory with `builder = 'dockerfile'`.
2. Set `healthcheckPath` to your `/health` endpoint.
3. Set `healthcheckTimeout` to at least 120s if your service connects to a database on startup.
4. Use `workflow_dispatch` for production deploys so merges do not auto-deploy.
5. Repeat the `railway.toml` for each service; the config is nearly identical across services.

---

### #04 — Anyone with the URL could see our staging environment

Date: Jan 14, 2026
Tags: vercel, security, edge-middleware
Stack: Vercel Edge Middleware, HTTP Basic Auth

**Context:** Non-production environments were publicly accessible. Anyone who guessed the Vercel preview URL could see the staging app. When you are building a financial product and sharing demo links with investors, that is not acceptable.

**Change:**

Attempt 1 (Jan 14): Implemented Vercel Edge Middleware that checks the deployment branch. If it is staging or development, it requires HTTP Basic Auth. Production stays public. The browser shows the native login popup, and credentials are cached for the session. Two environment variables, one middleware file.

**Result:** Staging and development environments are password-protected. Production remains public. No custom login UI to maintain.

**What worked:** Edge Middleware runs before the app loads, so there is zero impact on the React bundle. HTTP Basic Auth means no session management, no cookies, no logout flow to build.

**What didn't:** You cannot do role-based access or revoke individual credentials without changing the shared password. For a small team, this is fine. For a larger team, you would need something more sophisticated.

**My takeaway:** Do not build a custom auth wall for internal access. Edge Middleware plus Basic Auth is a 30-minute solution that covers most cases.

**Builder guide:**

1. Create `middleware.ts` in your project root.
2. Check the request hostname or Vercel environment to determine if auth is needed.
3. Read `STAGING_ACCESS_PASSWORD` / `DEV_ACCESS_PASSWORD` from environment variables.
4. Return a 401 with a `WWW-Authenticate` header if credentials are missing or wrong.
5. Production bypasses the check entirely.

---

### #05 — 30 seconds to start the dev server

Date: Jan 21, 2026
Tags: vite, frontend, dx
Stack: Vite, vite-tsconfig-paths

**Context:** Create React App took about 30 seconds to start the dev server, and HMR was noticeably slow. When you are iterating on UI details, those seconds compound into minutes of dead time every hour.

**Change:**

Attempt 1 (Jan 21): Full migration in one session. Replaced react-scripts and craco with Vite. Moved `index.html` to the root (a Vite requirement). Changed all `REACT_APP_*` env vars to `VITE_*`. Updated every source file to use `import.meta.env` instead of `process.env`. Deleted `craco.config.js`. Claude Code handled most of the mechanical renaming.

**Result:** The dev server starts in about 285ms. HMR is instantaneous. The migration touched 24 files, but the diff was mostly mechanical renaming.

**What worked:** A clean one-shot migration. vite-tsconfig-paths handled the path aliases seamlessly. No config hacks needed.

**What didn't:** The env var rename (`REACT_APP_` to `VITE_`) was tedious but unavoidable. I had to update every .env template and every file that read from `process.env`. Not hard, just boring.

**My takeaway:** If you are still on CRA, the migration to Vite is straightforward and the speed improvement is dramatic. Block an afternoon and do it.

**Builder guide:**

1. Install vite, @vitejs/plugin-react, and vite-tsconfig-paths. Remove react-scripts and craco.
2. Move `public/index.html` to the project root.
3. Create `vite.config.ts` with the React and tsconfig-paths plugins.
4. Find-and-replace `REACT_APP_` with `VITE_` in all .env files.
5. Find-and-replace `process.env` with `import.meta.env` in all source files.
6. Add `src/vite-env.d.ts` for TypeScript env var types.
7. Delete `craco.config.js`. Run `npm start`.

---

### #06 — Claude Code kept forgetting the project context

Date: Jan 21, 2026
Tags: claude-code, ai-tooling, developer-experience

**Context:** Every new Claude Code session started from zero. I had to re-explain the architecture, the tech stack constraints, the coding patterns. It was like onboarding a new developer every time I opened a terminal.

**Change:**

Attempt 1 (Jan 21): Wrote the first CLAUDE.md with an architecture overview, backend services, database schemas, Zustand slices, and the Docker workflow. Also created the `.claude/` directory for agents, skills, and session docs.

Attempt 2 (Jan 21): Major expansion: archived legacy components, added a comprehensive project structure, and established non-negotiables (no secrets in code, privacy first, stack consistency, test coverage, Swiss formatting). CLAUDE.md went from a sketch to a proper project manual.

**Result:** New sessions start with full project context. Claude Code follows the established patterns instead of suggesting libraries outside the stack. The CLAUDE.md became the single source of truth for project conventions.

**What worked:** Being very specific about what NOT to use. A 'DO NOT USE' column in the tech stack table prevents more drift than positive instructions. Explicit constraints work better than implicit conventions.

**What didn't:** The first version was too brief. Claude Code would still occasionally suggest patterns outside the stack. The fix was being more explicit and adding the architectural constraints section.

**My takeaway:** CLAUDE.md is the highest-leverage file in any AI-assisted project. Spend time on it. Be specific about constraints, not just conventions.

**Builder guide:**

1. Create CLAUDE.md in your project root.
2. Start with a tech stack table. Include a 'DO NOT USE' column for each layer.
3. Add your project structure with one-line descriptions for each directory.
4. List non-negotiable rules (formatting, testing, security).
5. Add common commands (dev server, build, test, deploy).
6. Update it every time you catch Claude Code drifting from your conventions.

---

### #07 — Unit tests pass but the app is broken

Date: Jan 23, 2026
Tags: playwright, testing, e2e
Stack: Playwright

**Context:** Unit tests covered logic but could not catch integration issues. A component renders correctly in isolation, but does the login flow actually work? Does the dashboard load data from the real API? I only found out by clicking through the app manually.

**Change:**

Attempt 1 (Jan 23): Set up the Playwright infrastructure alongside a batch of UI improvements. Added smoke tests that cover the critical paths: login, dashboard load, navigation. The idea is that these run against the local stack before deploying.

**Result:** Basic E2E smoke tests that catch the 'everything compiles but nothing works' class of bugs.

**What worked:** Playwright's API is clean. Test setup was straightforward. Having even a few smoke tests changes how confidently you deploy.

**What didn't:** Coverage is still thin. Smoke tests catch catastrophic failures but not subtle regressions. I need to expand coverage over time, but starting with something is better than waiting for comprehensive coverage.

**My takeaway:** You do not need 100% E2E coverage to get value. Three smoke tests that verify login, load, and navigation already catch most deploy-breaking bugs.

**Builder guide:**

1. Run `npm init playwright@latest` and accept the defaults.
2. Write three tests: can the app load, can a user log in, does the main page render data.
3. Add an `npm run test:e2e` script.
4. Run against your local stack before deploying. Add to CI later, when you are confident in the tests.

---

### #08 — Prototype code mixed with production code

Date: Feb 2, 2026
Tags: refactoring, code-hygiene, prototyping

**Context:** I had used Lovable to generate quick UI prototypes for onboarding, the dashboard, and other screens. The prototypes were useful for exploring ideas, but the generated code was still sitting in the repo alongside production components. Different styling approach, different component library, no tests.

**Change:**

Attempt 1 (Feb 2): Archived all completed Lovable prototypes into a dedicated archive directory. Moved legacy components to a reference folder. Added `.claudeignore` for build artifacts so Claude Code stops reading compiled files. 249 files reorganized in a single session.

**Result:** Clear separation between prototype code (archived for reference) and production code. The root directory is clean, and Claude Code no longer confuses prototype components with canonical ones.

**What worked:** Archiving instead of deleting. I can still reference the prototype designs when building the production version. The `.claudeignore` was a small but high-impact addition.

**What didn't:** I should have done this earlier. Having prototype code in the main tree confused both Claude Code and human developers about which components were canonical.

**My takeaway:** Prototyping tools are great for exploration, but clean up the output promptly. Archive, do not delete. Future you will want to reference those designs.

**Builder guide:**

1. Create an `archive/` or `_archived/` directory for prototype code.
2. Move all prototype output there. Do not delete it.
3. Add a `.claudeignore` (or equivalent) so your AI tool ignores build artifacts and archived code.
4. Do this cleanup as soon as the prototype has served its purpose, not weeks later.
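The whole cleanup is a few shell commands. A minimal sketch, assuming the prototypes live under a hypothetical `src/prototypes/` path (adjust the `mv` to wherever yours actually are):

```shell
# Archive, don't delete: keep prototype output reachable for reference.
mkdir -p _archived/lovable-prototypes

# Move prototype code out of the main tree (path is a placeholder).
# mv src/prototypes/* _archived/lovable-prototypes/

# Tell Claude Code to skip build artifacts and archived code.
cat > .claudeignore <<'EOF'
dist/
build/
node_modules/
_archived/
EOF
```

The `.claudeignore` entries mirror a typical `.gitignore`, plus the archive directory itself.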
---

### #09 — I kept losing track of what I shipped yesterday

Date: Feb 3, 2026
Tags: automation, github-actions, slack, ai
Stack: GitHub Actions, Claude API, Slack Webhook

**Context:** Working with Claude Code across multiple sessions per day, I would lose track of what actually shipped. By the next morning I could not reconstruct what changed without reading git logs. I wanted a daily digest in Slack without having to write it myself.

**Change:**

Attempt 1 (Feb 3): Initial daily summary workflow using the Claude Code CLI inside GitHub Actions. Reads the git log for the past 24 hours and sends a summary to Slack.

Attempt 2 (Feb 3): Improved the output with highlights and commit details.

Attempt 3 (Feb 3-5): Fixed JSON escaping issues and grep exit codes.

Attempt 4 (Feb 5-7): Cost optimization: switched from a larger model to Haiku.

Attempt 5 (Feb 7-8): Hardened against API errors with retry logic.

Attempt 6 (Feb 8): Replaced the Claude CLI with a direct curl call to the Anthropic API. The CLI was installing Node.js and npm packages on every run, which was slow and caused an unexpected cost spike.

**Result:** Every morning there is a summary in Slack with what shipped yesterday. Runs on Haiku for minimal cost. Direct API call, no CLI overhead.

**What worked:** The final version with direct curl to the Anthropic API. Simple, fast, cheap. Structured prompting to get specific, useful summaries instead of vague paragraphs.

**What didn't:** The Claude CLI approach caused an unexpected API cost spike. JSON escaping was a recurring headache. Model ID formats changed between attempts. It took 12 commits to get a stable workflow. Each fix was small, but they added up.

**My takeaway:** Automating with AI sounds simple until you hit the edge cases: JSON escaping, model pricing, CLI vs API tradeoffs. Budget for iteration. The 12-commit journey was worth it for a workflow I now rely on daily.

**Builder guide:**

1. Create a GitHub Actions workflow with a `schedule: cron` trigger.
2. Use curl to call the Anthropic API directly. Do not install the CLI in CI.
3. Pass the git log as context and ask for a structured summary.
4. Use the cheapest model that gives acceptable quality (Haiku works well for summaries).
5. Send the result to Slack via webhook.
6. Set `max_tokens` low and be specific in the prompt about what you want.

---

### #10 — Writing weekly updates by hand felt like busywork

Date: Feb 9, 2026
Tags: automation, github-actions, reporting, ai
Stack: GitHub Actions, Claude Code CLI

**Context:** I needed a weekly summary for stakeholders, but writing it manually meant spending 30-45 minutes every Monday reviewing git logs and composing an update. The information was all in the commits; it just needed to be compiled and presented clearly.

**Change:**

Attempt 1 (Feb 9): Created a GitHub Actions workflow that runs Claude Code with max-turns capped, reads the week's commits, and writes a structured briefing to a markdown file in the repo.

Attempt 2 (Feb 9-10): Immediate fixes: handle the missing file on first run, fix `GITHUB_OUTPUT` syntax for the commit step, add write permissions.

Attempt 3 (Feb 10-11): Fixed the workflow writing to the wrong filename. All six references to the file path were inconsistent.

Attempt 4 (Feb 11): Reduced max-turns from 15 to 6 to cap API cost.

**Result:** Every Sunday night, a briefing is generated and committed to the repo. The first one ran on Feb 16, then weekly after that.

**What worked:** Capping max-turns at 6. Output verification catches silent failures. The briefing format evolved into something I actually read every Monday.

**What didn't:** The filename inconsistency across the workflow was a silly but real bug. The initial max-turns of 15 was overkill for a summary task, burning API credits for no benefit.

**My takeaway:** For AI-powered automation, always set max-turns to the minimum needed. More turns does not mean better output for simple tasks; it just means higher cost.

**Builder guide:**

1. Create a weekly cron workflow in GitHub Actions.
2. Use Claude Code or a direct API call to read the week's git log.
3. Prompt for structured output (sections like: shipped, in progress, blockers).
4. Commit the output to the repo so it is versioned and searchable.
5. Set max-turns to 6 or less. A summary does not need many iterations.
6. Add a verification step that fails if the output file is empty.

---

### #11 — I needed specialists, not a generalist

Date: Feb 14, 2026
Tags: claude-code, ai-tooling, agents

**Context:** A single Claude Code session cannot be an expert at everything. I needed architectural reviews, UI/UX audits, financial calculation verification, and compliance checks. Asking the generalist to do all of these in one session meant shallow answers across the board.

**Change:**

Attempt 1 (Feb 14): Created the first batch of specialized agents: quant-lead for financial calculations, research infrastructure for verifying claims against sources.

Attempt 2 (Feb 14): Added the ui-designer agent for pixel-level visual polish.

Attempt 3 (Feb 14): Expanded to eight agents total: tech-lead, ux-guardian, ui-designer, historian, quant-lead, planning-expert, user-advocate, compliance-advisor. Each owns a specific domain and has explicit boundaries about what it should and should not do.

**Result:** Eight specialized agents that can be invoked for specific tasks. The tech-lead catches architectural drift. The ux-guardian runs design reviews. The compliance-advisor checks regulatory exposure. Each agent knows its lane.

**What worked:** Clear separation of concerns between agents. Explicit 'what NOT to do' sections prevent agents from overstepping. The three-way boundary between planning-expert, quant-lead, and compliance-advisor eliminated conflicting advice.

**What didn't:** The first agent definitions were too verbose. Iterating on the prompt format to find the right balance between specificity and brevity took a few rounds.
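For illustration, a trimmed agent definition in this spirit might look like the sketch below. The section names and wording are my own invention, not the actual files; only the agent names come from the list above.

```markdown
# quant-lead

## Role
Reviews financial calculations for correctness: rounding, currency
conversion, rate math, and edge cases around zero and negative values.

## Constraints
- Verify every formula against the spec before approving it.
- Flag any calculation that ships without a unit test.

## What NOT to do
- Do not give regulatory or compliance advice (compliance-advisor owns that).
- Do not propose architectural changes (tech-lead owns that).
```

The 'What NOT to do' section is what draws the three-way boundary mentioned above: each agent names the lanes that belong to its neighbours.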
**My takeaway:** Specialized agents beat a generalist for domain-specific tasks. Define clear boundaries and keep the definitions focused. An agent that tries to do everything does nothing well. **Builder guide:** 1. Create .claude/agents/ directory. 2. For each specialization, write a markdown file defining: role, expertise, constraints, and explicit boundaries. 3. Include a 'what NOT to do' section. This prevents more problems than the positive instructions. 4. Start with 2-3 agents for your most common review needs. Expand as patterns emerge. 5. Keep definitions under 500 lines. If they are longer, the agent has too broad a scope. --- ### #12 — Repeating the same multi-step instructions every session Date: Feb 14, 2026 Tags: claude-code, automation, developer-experience **Context:** Certain workflows required the same sequence of steps every time: commit, update the task tracker, write a session log. Or: run the design review, then the pixel polish pass, then summarize findings. Typing out these instructions repeatedly was tedious and error-prone. **Change:** Attempt 1 (Feb 14): Created /claude-setup-sync-check to audit consistency between CLAUDE.md and the actual agents, skills, and docs on disk. Attempt 2 (Feb 14): Simplified the /deploy-dev skill. Attempt 3 (Feb 14): Added /ship, which became the most-used skill. One command to: verify tests pass, write a session log, create or update the task tracker issue, commit, and push. Five manual steps became one slash command. **Result:** 16 custom skills covering the full development lifecycle: shipping, deployment, design review, testing, research, brainstorming, and more. **What worked:** /ship became the daily workhorse. It enforces the right process without me having to remember each step. /ui-review combining strategic and tactical design review in sequence saved significant time. **What didn't:** Some skills were too ambitious in their first version. Complexity crept in before the workflow was proven. 
The second iteration was always simpler. **My takeaway:** If you do the same sequence more than three times, make it a skill. The time investment is 15 minutes per skill. The time saved is every future session. **Builder guide:** 1. Create .claude/skills/ directory with a subdirectory per skill. 2. Each skill gets a SKILL.md file with step-by-step instructions. 3. Start with your most repeated workflow. For most developers, that is the ship/deploy sequence. 4. Keep skills focused. If a skill has more than 10 steps, split it. 5. Iterate: the first version of every skill will be too complex. Simplify after using it a few times. --- ### #13 — Tests only ran when I remembered to run them Date: Feb 15, 2026 Tags: ci-cd, testing, pre-commit, github-actions Stack: GitHub Actions, Husky, lint-staged **Context:** Tests existed but running them was voluntary. Pre-commit hooks did not exist. Linting was manual. The test suite would break silently between sessions, and I would only discover it when something failed after deploying. **Change:** Attempt 1 (Feb 15): Created the CI workflow with separate jobs for frontend tests, backend tests, and linting. Added Husky pre-commit hooks and lint-staged configuration. Attempt 2 (Feb 15): Expanded CI to run all backend tests. Attempt 3 (Feb 15-16): Fixed CI failures: a missing import in a test, a non-idempotent migration, an unused variable. Attempt 4 (Feb 16-17): Environment fixes: updated CI to Node 22, switched from npm ci to npm install for some jobs, added environment variables that tests needed. **Result:** Every push triggers tests. Every commit is linted. Pre-commit hooks catch issues before they enter the repo. CI runs about 700 frontend tests and 200 backend tests. **What worked:** lint-staged for fast pre-commit checks. Separate CI jobs for frontend, backend, and linting so they run in parallel. **What didn't:** npm ci was too strict about lock file format mismatches between Node versions. 
The first CI run surfaced pre-existing test failures that had been hiding because nobody was running the full suite consistently. **My takeaway:** CI does not just catch new bugs. It exposes the bugs you already had but did not know about. The first run will feel like everything is broken. That is the point. **Builder guide:** 1. Install Husky and lint-staged. Configure lint-staged to run your linter and formatter on staged files only. 2. Create .github/workflows/test.yml with separate jobs for frontend, backend, and linting. 3. Use matrix strategy if you have multiple backend services. 4. Run the full suite once and fix everything that breaks. This is the hard part. 5. After that, CI keeps things green automatically. --- ### #14 — Context disappeared between Claude Code sessions Date: Feb 15, 2026 Tags: claude-code, documentation, context-management **Context:** Claude Code sessions have finite context. When the context window fills up and compacts, details from earlier in the session are lost. The next session starts fresh with no knowledge of what was discussed, decided, or attempted. For multi-session features, this meant re-explaining context every time. **Change:** Attempt 1 (Feb 15): Started writing session logs manually alongside other work. Created the first log documenting a test coverage expansion session. Attempt 2 (Feb 15): Added .claude/logs to git tracking. Made session logs shared artifacts that any developer or future session can read for context. Tracked 18 session logs retroactively. Attempt 3 (Mar 2-5): Evolved into progressive logging: on the first context compaction, a session log segment is written inline. Each subsequent compaction appends a new segment. The key insight was capturing context at compaction time, when the information is still in the window, not after the session ends. **Result:** Every significant session produces a handoff document. Multi-session features have a paper trail. 
New sessions can read previous logs to pick up where the last one left off. **What worked:** Progressive logging during compaction. Writing segments inline (not via subagent) preserves more detail. Git-tracking the logs makes them searchable and shareable. **What didn't:** Post-session summaries written after the fact were too lossy. By the time the session ends, the context is already compacted and details are gone. **My takeaway:** If you use AI coding assistants with context limits, build a logging system that captures context incrementally. Post-session summaries lose too much detail. **Builder guide:** 1. Create a session log directory (e.g., .claude/logs/session-logs/). 2. Name files by date and topic: YYYY-MM-DD-initials-topic-slug.md. 3. Write the first log segment on your first context compaction, not at the end of the session. 4. Each subsequent compaction appends a new segment. 5. Track logs in git so they are searchable and available to other developers. 6. Use a /handoff skill to finalize the log when ending a session. --- ### #15 — Important decisions were forgotten between sessions Date: Feb 15, 2026 Tags: claude-code, memory, context-management **Context:** Key decisions, debugging lessons, and project conventions disappeared between sessions. I would fix a bug caused by a specific API quirk, but the next session would hit the same quirk because it had no memory of the previous fix. I was solving the same problems twice. **Change:** Attempt 1 (Feb 15): Started building a file-based memory system. MEMORY.md serves as an index pointing to individual memory files. Categories: user preferences, feedback corrections, project context, and external references. Attempt 2 (ongoing): Evolved the system with structured frontmatter on each memory file. Added rules for what should and should not be memorized. API quirks, debugging lessons, and architectural decisions go in. Ephemeral task details stay out. 
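A memory file under this scheme might look like the following. The frontmatter fields mirror the categories above; the file name and the specific quirk are invented for illustration:

```markdown
---
name: broker-api-pagination-quirk
description: The positions endpoint silently caps page size at 100
type: project
---

Requests with limit > 100 return 100 rows and no warning.
Paginate explicitly; do not trust the limit parameter.
```

MEMORY.md then holds one line per file: the name, a one-sentence description, and a link, which is what keeps the index small enough to load at session start.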
**Result:** A persistent memory system that carries context across sessions. When Claude Code starts, it loads MEMORY.md and has access to prior decisions, lessons, and conventions. **What worked:** File-based approach with an index. Simple, versionable, readable. The frontmatter format makes it easy to find relevant memories. Keeping the index under 200 lines forces curation. **What didn't:** Early versions stored too much. Every observation, every debugging step. The signal-to-noise ratio degraded. The fix was explicit rules about what goes into memory and what stays ephemeral. **My takeaway:** AI memory should be curated, not comprehensive. Store decisions and lessons. Do not store transient state. A small, high-quality memory beats a large, noisy one. **Builder guide:** 1. Create a memory/ directory in your Claude Code project config. 2. Create MEMORY.md as an index file with links to individual memory files. 3. Each memory file gets frontmatter: name, description, type (user/feedback/project/reference). 4. Store: API quirks, debugging lessons, architectural decisions, user preferences. 5. Do not store: ephemeral task details, git history, code patterns visible in the codebase. 6. Keep the index under 200 lines. If it grows beyond that, curate. --- ### #16 — One wrong merge could push to production Date: Feb 16, 2026 Tags: deployment, github-actions, supabase, safety Stack: GitHub Actions, Supabase CLI **Context:** The deployment pipeline had no environment gates. A merge to main would auto-deploy to production. Database migrations deployed without the right flags. GitHub Actions could not reach the database due to IPv6 issues. And the database project would pause if nobody hit it for a week. I learned each of these the hard way. **Change:** Attempt 1 (Feb 16): Made production deploy manual-only (workflow_dispatch). Attempt 2 (Feb 16): Added --include-all flag to database push commands. Attempt 3 (Feb 16): Disabled IPv6 in GitHub Actions runners. 
Attempt 4 (Feb 16): Added a daily keep-alive cron job. Attempt 5 (Feb 16): Removed the unused edge function deploy step. **Result:** Three-tier deployment: development auto-deploys, staging auto-deploys, production is manual. Database migrations deploy reliably. The database stays alive. **What worked:** Manual-only production deploys. The keep-alive cron. Disabling IPv6 was a quick fix for a confusing connectivity issue. **What didn't:** The IPv6 issue was particularly frustrating because the error messages were unhelpful. I spent time looking at authentication when it was a network issue. The unused edge function step had been producing flaky failures that were red herrings until I removed it. **My takeaway:** Every deploy workflow should have exactly the steps it needs and nothing more. Remove unused steps, add environment gates, and automate keep-alive for services that sleep. **Builder guide:** 1. Use workflow_dispatch for production deploys. Never auto-deploy to production on merge. 2. If using Supabase or similar BaaS, add --include-all to your db push commands. 3. Add sysctl net.ipv6.conf.all.disable_ipv6=1 to your CI runners if you hit mysterious connection failures. 4. Add a daily keep-alive cron if your database provider pauses inactive projects. 5. Remove any deploy steps for services you are not actually using. --- ### #17 — 44 markdown files in the root directory Date: Feb 16, 2026 Tags: documentation, organization, developer-experience **Context:** Documentation had grown organically. 44 markdown files sat in the root directory. Product strategy, deployment guides, architecture docs, briefings, tech debt lists, all at the same level. Finding anything meant scrolling through a flat list of files. **Change:** Attempt 1 (Feb 16): Wrote PRODUCT_STRATEGY.md as the first structured strategy document. Attempt 2 (Feb 16): Major cleanup: removed 425 dead files across obsolete directories, ad-hoc scripts, and tracked build artifacts. 49,630 lines deleted.
Attempt 3 (Feb 16): Moved all 44 root markdown files into a docs/ hierarchy: product/, devops/, engineering/, design/. Attempt 4 (Feb 16-17): Cleaned up the archive, created the tech debt backlog, added security and AI strategy docs. **Result:** From 44 scattered files to a structured docs/ tree. Product docs, engineering docs, design docs, and operations docs each have a home. The root directory is clean. **What worked:** Doing it all in one focused session. Incremental reorganization would have left orphans and broken references. The clean sweep was more efficient. **What didn't:** Some docs were moved multiple times across commits as the directory structure evolved. I could have planned the final structure before starting. But honestly, the structure became clear only while doing the work. **My takeaway:** Documentation entropy is real. Block a session, move everything at once, and commit to the new structure. Incremental cleanup never catches up. **Builder guide:** 1. Create a docs/ directory with subdirectories by audience: product/, engineering/, devops/, design/. 2. Move every markdown file from root into the appropriate subdirectory. 3. Delete dead files, archived prototypes, and tracked build artifacts. 4. Do it in one session, not incrementally. 5. Update CLAUDE.md (or your AI config) to reference the new locations. --- ### #18 — I kept missing the moment Claude finished Date: Feb 20, 2026 Tags: claude-code, hooks, productivity, macos Stack: terminal-notifier, Claude Code Hooks **Context:** When running long Claude Code sessions, I would switch to another task while waiting. But I had no way to know when Claude finished or needed input. I would come back minutes later to find it had been waiting for me. Dead time, multiple times per day. **Change:** Attempt 1 (Feb 20): Tried the Claude Code Notifier VS Code extension by Ashwin Santhanam. It worked for in-app alerts but not when VS Code was in the background or minimized. 
Attempt 2 (Feb 20): Troubleshooting session. Tried different VS Code notification settings, system-level notification permissions, focus mode exceptions. The fundamental issue was that VS Code extension notifications are not OS-level notifications. Attempt 3 (Feb 25): Switched to terminal-notifier with Claude Code's native hooks system. Three hook events: PostToolUse:AskUserQuestion (Claude has a question), PermissionRequest (needs approval), Stop (finished and waiting). One brew install, a few lines in settings.local.json, and it worked immediately.

```bash
# Install terminal-notifier
brew install terminal-notifier
```

**Result:** macOS notifications fire every time Claude needs attention. I can work on something else and get pulled back the moment input is needed. The dead time went from minutes per occurrence to seconds. **What worked:** terminal-notifier + Claude Code hooks. Three events, one brew install. The native hooks system is more reliable than any extension because it runs at the CLI level, not inside the editor. **What didn't:** VS Code extension approach. Extensions cannot reliably send OS-level notifications when the app is not focused. This is a platform limitation, not the extension's fault. **My takeaway:** A 20-minute setup eliminated hours of idle time per week. If your AI tool has a hooks or event system, use it for notifications before trying anything else. Initial idea from Claude Code Notifier VS Code extension by Ashwin Santhanam. The terminal-notifier approach is a different solution to the same problem. **Builder guide:** 1. brew install terminal-notifier. 2. Add hooks to .claude/settings.local.json for three events: PostToolUse (matcher: AskUserQuestion), PermissionRequest, and Stop. 3. Each hook runs: terminal-notifier -title 'Claude Code' -message '' -sound Glass. 4. Verify with: cat /tmp/claude-hook-debug.log after a session. 5.
Check macOS notification permissions if nothing appears: System Settings > Notifications > terminal-notifier. --- ### #19 — Background jobs failed silently for days Date: Feb 21, 2026 Tags: monitoring, webhooks, reliability Stack: Slack Webhooks, Database table **Context:** Background tasks like data snapshots and connection refreshes ran on cron schedules. When they failed, nothing happened. No alert, no log, no indication until someone noticed stale data days later. **Change:** Attempt 1 (Feb 21): Built a TaskStatusService that tracks every background task execution in a database table. Added a WebhookAlertService that fires Slack notifications on failures. Integrated with the main background tasks. Enhanced health endpoints to report task status alongside service health. **Result:** Every background task execution is tracked. Failures trigger immediate Slack alerts with error details. Health endpoints report task status so monitoring tools can check staleness. **What worked:** Database-backed task tracking gives a history of executions, not just current status. Webhook alerts are immediate. Health endpoint integration means standard monitoring tools work out of the box. **What didn't:** Deciding the right granularity for alerts was the hardest part. You do not want to be paged for every transient failure, but you cannot ignore persistent ones either. I erred on the side of alerting and will tune later. **My takeaway:** If a background job has no alerting, assume it is broken and you do not know about it yet. Task tracking plus webhook alerts is a one-day investment that prevents week-long data staleness. **Builder guide:** 1. Create a system_tasks table to track: task name, last run, status, error message. 2. Wrap each background task to log its start, success, or failure to this table. 3. Add a WebhookAlertService that sends to Slack/Discord on failure. 4. Expose task status in your /health endpoint so monitoring tools can check staleness. 5. 
Set up a Slack channel for alerts. Keep it separate from general chat so alerts do not get buried. --- ### #20 — Can I run Claude Code from my phone? Date: Feb 25, 2026 Tags: mobile, remote-development, exploration **Context:** Sometimes I am away from my desk but want to kick off a task or check on a running session. Claude Code is a CLI tool that runs on my Mac. I wanted to see if I could interact with it remotely from my phone. **Change:** Attempt 1 (Feb 25): Quick exploratory session in a git worktree. Tested SSH from an iPhone into the Mac to run Claude Code remotely. The idea was to see if the terminal-based interface is usable on a small screen and over a mobile connection. **Result:** It works, technically. SSH into the Mac, run Claude Code, interact via the terminal. The experience is functional but not comfortable for long sessions. **What worked:** SSH access is straightforward. Claude Code runs fine remotely since it is just a CLI process. For quick tasks like 'start this migration' or 'check the build status', it is usable. **What didn't:** The mobile terminal experience is not great for extended work. Small screen, no keyboard shortcuts, latency on cellular connections. This is a 'quick check' tool, not a replacement for sitting at the desk. **My takeaway:** Remote SSH to your development machine is a zero-cost way to access AI tools from mobile. Do not expect to do deep work, but for quick tasks it removes the 'I have to be at my desk' constraint. --- ### #21 — Task tracking lived outside the development flow Date: Feb 27, 2026 Tags: linear, mcp, project-management, ai-tooling Stack: Linear, MCP Server **Context:** Tasks were tracked in a separate tool that I had to switch to manually. Creating issues, updating status, linking to commits, all required context-switching out of the development environment. For a solo developer with an AI assistant, this friction was enough that tasks often went untracked. 
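Wiring the tracker into the assistant turned out to be a small amount of configuration. A project-scoped .mcp.json along these lines registers Linear's MCP server with Claude Code (the exact schema and endpoint URL are assumptions; verify both against the current Claude Code and Linear MCP docs):

```json
{
  "mcpServers": {
    "linear": {
      "type": "sse",
      "url": "https://mcp.linear.app/sse"
    }
  }
}
```

Once registered, the assistant can call the server's tools (create issue, update status, read project) the same way it reads files.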
**Change:** Attempt 1 (Feb 27): Connected Linear via an MCP server so Claude Code can create issues, update status, and read project context without leaving the session. Created the initial project structure and milestones directly from a Claude Code conversation. Attempt 2 (Feb 27): Integrated Linear into the /ship skill. When shipping work, Claude Code automatically creates or updates the corresponding issue. Task tracking became a side effect of shipping, not a separate step. **Result:** Task tracking happens inside the development flow. The /ship skill creates issues automatically. Over 60 issues were created directly from Claude Code sessions. **What worked:** MCP server integration is seamless. Claude Code reads and writes task data as naturally as reading files. The /ship integration means tasks are always tracked because tracking is part of the shipping process. **What didn't:** The main risk was over-tracking (creating too many granular issues), but that turned out to be manageable with good skill prompting. **My takeaway:** If your AI assistant supports MCP, connect your project management tool. Making task tracking a side effect of development instead of a separate step is the difference between tracking everything and tracking nothing. **Builder guide:** 1. Set up an MCP server for your project management tool (Linear, Jira, etc.). 2. Configure it in your Claude Code settings. 3. Integrate issue creation into your shipping workflow (/ship skill or equivalent). 4. Let the AI create and update issues as part of the commit process. 5. Review issues weekly to prune over-tracking. --- ### #22 — The same model should not write and review its own code Date: Mar 6, 2026 Tags: claude-code, codex, security, architecture, review Stack: Claude Code, OpenAI Codex CLI **Context:** Using a single AI model for all decisions creates blind spots. If Claude Code writes the code and also reviews the code, it tends to be generous with itself. 
It knows what it intended to build, so it evaluates against its intentions rather than against what the code actually does. I wanted a second pair of eyes. Not a human reviewer. An independent AI that uses a different model, different reasoning, and has no idea what shortcuts were taken during implementation. Security reviews up to this point were ad-hoc. I would sometimes check before shipping, sometimes not. That is not a system. **Change:** Attempt 1 (Mar 6): Decided to use Codex as an independent reviewer alongside Claude Code. The idea: Claude Code is the builder, Codex is the reviewer. Each operates with a different model, different reasoning patterns, and no shared context about shortcuts taken during implementation. Set up two Codex skill files: .codex/skills/security-officer/SKILL.md and .codex/skills/qa-officer/SKILL.md. Each defines review domains, a severity model, and a structured output format. The Security Officer focuses on auth, input validation, data exposure, and secrets. The QA Officer focuses on functional correctness, edge cases, and data flow integrity. I constrained what Codex could do. It can only append to two backlog files. It cannot touch source code. The reviewer reviews, it does not fix. I think that separation matters. Attempt 2 (Mar 6): First real run. I asked Codex to manually review a commit that added Fixed Income and Structured Products support. I had already tested it manually and was ready to ship. The Security Officer came back APPROVED WITH CONDITIONS. Medium severity: no server-side schema validation on the JSONB detail payloads. Frontend validation exists but can be bypassed by direct API calls. The QA Officer came back NOT READY. Two blocking findings. First, editing existing positions in the new asset classes forced users to re-enter all required fields. mapPositionToFormData() was not passing the detail keys the forms needed for hydration. 
Second, the underlyings field on structured products was silently dropped during PATCH updates. The frontend wrote it, the backend allow-list did not include it. Edits would disappear without any error. Neither showed up in manual testing because I was testing the create flow, not the edit flow. **Result:** Two bugs caught that were ready to ship to production. The dual-AI review concept proved its value on the first real run. The manual process works but requires me to remember to invoke it and to orchestrate the two reviews myself. **What worked:** Using a different model for review. A reviewer that does not know what was intended can only evaluate what the code actually does. Constraining permissions to review-only kept things clean. Using the ChatGPT subscription for Codex meant no separate API billing, which was the deciding factor. **What didn't:** The process is entirely manual. I have to remember to run it, copy the commit reference, invoke Codex separately, and then read back the results. For it to become a real part of the workflow, it needs to be automated into a single command. **My takeaway:** We do not let the same person write and approve their own code in a team. I do not think the same model should do it either. The manual version validated the concept. Next step is automating it into a single skill. **Next:** Automated version in [#25](/logbook/review-gate-automated-dual-ai). **Builder guide:** 1. Create .codex/skills/security-officer/SKILL.md and .codex/skills/qa-officer/SKILL.md. Define review domains, a severity model, and a structured output format for each. 2. Constrain what Codex can write to: backlog files only. No source code access. 3. Run reviews manually first. You need to understand what each reviewer catches well before automating. 4. Start with your highest-risk changes: auth, payments, data handling. 5. Treat review findings as backlog items, not immediate blockers (unless critical). 
--- ### #23 — Every component looked slightly different Date: Mar 2, 2026 Tags: design-system, documentation, ui-consistency **Context:** Components built in different sessions had inconsistent styling. Toggle buttons, card shadows, input fields, all varied slightly depending on when they were built. There was no single source of truth for how components should look, so each session made its own interpretation. **Change:** Attempt 1 (Mar 2): Standardized segmented controls, card elevation, and shadows across the entire app. Touched 43 files. Cleaned about 200 lines of redundant inline styles by moving defaults to the theme. Attempt 2 (Mar 2): Redesigned the navigation sidebar, wrote DESIGN_GUIDELINES.md covering philosophy, colors, typography, and spacing. Created component-level contracts so each component type has a specification. Audited all input fields for label consistency. Attempt 3 (Mar 2): Enhanced the guidelines and added a component system index with a decision tree for choosing the right component pattern. **Result:** A documented design system with philosophy, color palette, typography scale, spacing rules, and component contracts. The theme handles defaults so individual components do not need inline style overrides. **What worked:** Moving defaults to the theme. Components look consistent without developers or AI needing to remember the right values. The component contracts give Claude Code a spec to follow when building new UI. **What didn't:** Retroactively standardizing 43 files was painful. If the design system had existed from the start, each component would have been built correctly the first time. **My takeaway:** Write your design system documentation before you have too many components. Retrofitting consistency is always harder than building it in from the start. **Builder guide:** 1. Start with a DESIGN_GUIDELINES.md covering colors, typography, spacing, and elevation. 2. 
Move component defaults to your theme (MUI theme, Tailwind config, etc.) instead of inline styles. 3. Create a component contract for each component type: expected props, visual states, do/don't. 4. Add a decision tree: 'which component pattern should I use for X?' 5. Reference these docs in CLAUDE.md so your AI assistant follows them. --- ### #24 — Security reviews depended on me remembering to do them Date: Mar 6, 2026 Tags: security, codex, automation Stack: OpenAI Codex **Context:** Security reviews were ad-hoc. I would sometimes remember to check for vulnerabilities before shipping, sometimes not. There was no systematic way to get a security opinion on code changes. **Change:** Attempt 1 (Mar 6): Created a Codex-based security-officer skill with an agent configuration and a security backlog entry template. The idea is that you can run a security review as a skill and get findings formatted as actionable backlog items. **Result:** A dedicated security review skill that can be invoked on any code change. Findings are structured as backlog entries with severity, description, and remediation steps. **What worked:** Using a different AI model for security review adds a second perspective. The structured output format makes findings actionable instead of just advisory. **What didn't:** Still requires manual invocation. The ideal would be an automatic security check in CI, but starting with an on-demand skill is a reasonable first step. **My takeaway:** Having a dedicated security review skill, even if manual, is better than relying on memory. A different AI model gives you a second opinion you would not get otherwise. **Builder guide:** 1. Create a security review skill/prompt for your secondary AI tool. 2. Define a structured output format: severity, description, affected files, remediation. 3. Include a backlog entry template so findings translate directly into tasks. 4. Run it before shipping any auth, payment, or data-handling changes. 5. 
Consider automating it in CI once the manual version is proven. --- ### #25 — One command to run two AI reviewers in parallel Date: Mar 6, 2026 Tags: claude-code, codex, automation, security, review Stack: Claude Code custom skills, OpenAI Codex CLI Related: #22 **Context:** The manual dual-AI review from #22 worked but had friction. I had to remember to run it, invoke Codex manually, wait for results, and read them back. That is enough friction that it would get skipped under time pressure. The reviews needed to be a single command that runs both reviewers in parallel and returns a gate decision. **Change:** Attempt 1 (Mar 6): Built the /review-gate skill for Claude Code. When invoked, Claude shells out to Codex CLI with two parallel background invocations: one as Security Officer, one as QA Officer. Each uses its own skill file from .codex/skills/ with review domains, severity models, and output formats. Codex reads the code, writes findings to a temp file, and Claude reads the results back after both complete. Attempt 2 (Mar 6): Iterated on the Codex invocation format. --quiet does not exist as a flag. codex exec review --commit SHA with a prompt also does not work. You cannot combine them. What ended up working was piping the prompt via stdin with the - flag:

```bash
echo "You are the Security Officer. Stage: IMPLEMENTATION. \
Follow your skill at .codex/skills/security-officer/SKILL.md. \
Review the changes in commit 8ec97f3 for security risks." \
| codex exec --full-auto -o /tmp/codex-security-review.md -
```

Both reviews run as parallel background tasks. Claude waits for both to finish, parses the verdicts, and returns a gate decision: PASS, CONDITIONAL, or FAIL. Attempt 3 (Mar 6): First automated run against the same commit from #22. Same findings, but this time triggered with a single /review-gate command instead of manual orchestration. The entire flow from invocation to gate decision took about 90 seconds.
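The verdict-to-gate mapping is simple enough to sketch. A minimal version of the gate logic, assuming the reviewer verdict strings described in #22 (APPROVED, APPROVED WITH CONDITIONS, NOT READY):

```bash
# Combine the two reviewer verdicts into a single gate decision.
# Verdict strings mirror the reviewer outputs: APPROVED,
# APPROVED WITH CONDITIONS, NOT READY.
gate_decision() {
  combined="$1|$2"   # security verdict | qa verdict
  case "$combined" in
    *"NOT READY"*)       echo "FAIL" ;;        # either reviewer blocks
    *"WITH CONDITIONS"*) echo "CONDITIONAL" ;; # ship, create backlog items
    *)                   echo "PASS" ;;        # both clean
  esac
}

gate_decision "APPROVED" "NOT READY"                 # FAIL
gate_decision "APPROVED WITH CONDITIONS" "APPROVED"  # CONDITIONAL
```

Either reviewer saying NOT READY blocks the ship; conditions from either reviewer downgrade the decision to CONDITIONAL.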
**Result:** A /review-gate skill that orchestrates Security Officer and QA Officer reviews in parallel with a single command. The gate returns PASS (ship), CONDITIONAL (ship with backlog items created), or FAIL (block). Review prompts are versioned in the repo and improve over time as new patterns emerge. **What worked:** Piping the prompt via stdin turned out to be the simplest invocation pattern. Running both reviews as background tasks cuts the wall-clock time roughly in half. The gate decision model (PASS/CONDITIONAL/FAIL) maps cleanly to shipping decisions. **What didn't:** The invocation format took about 30 minutes to get right. The error messages from Codex CLI were not helpful. The first review prompts were too generic and needed a few iterations before they started catching specific patterns consistently. **My takeaway:** The setup took about two hours. The first run caught two bugs that were ready to ship. Run it manually 5 to 10 times before automating in CI. You need to learn what each reviewer catches well and what it misses before you can trust the gate. **Builder guide:** 1. Create .codex/skills/security-officer/SKILL.md and .codex/skills/qa-officer/SKILL.md. Define review domains, a severity model, and a structured output format for each. 2. Constrain what Codex can write to: two backlog files only. No source code access. 3. Pipe the prompt via stdin: echo "..." | codex exec --full-auto -o /tmp/output.md - 4. Run both reviews as parallel background tasks. Parse the verdicts in Claude after both complete. 5. Gate decision: PASS ships, CONDITIONAL ships with backlog items created, FAIL blocks. 6. Only automate in CI after 5 to 10 manual runs have shown you what each reviewer catches and what it misses. --- ## Contact Interested in product collaboration, startup ideas, AI systems, or wealth/fintech?
Reach out via the contact page at https://lacambra.org/contact. Happy to exchange building hacks, explore collaborations, advise early-stage startups, talk product, or just connect with fellow builders. --- ## Links - Website: https://lacambra.org - LinkedIn: https://linkedin.com/in/luislacambra - Work: https://lacambra.org/work - Logbook: https://lacambra.org/logbook - About: https://lacambra.org/about - Contact: https://lacambra.org/contact