The key to using AI effectively is combining rigorous inputs with deep probing of outputs. For generated code and architecture proposals, we repeatedly asked questions like "What about this error case?", "How does it perform?", and "Is this information actually correct?", raising quality while cross-checking against official documentation. Because AI can hallucinate (generate false information), it is essential to verify its output critically rather than accepting it at face value.
There is no single "right answer" in decision-making. Precisely because of that, it is important to make the rationale behind each decision explicit and leave no ambiguity. We used AI as a tool that streamlines the organization of options and the gathering of information.
Leveraging Code Generation
AI code generation quickly proposed implementation patterns aligned with Go best practices, such as parallel processing with errgroup and connection-pool settings for HTTP clients, which helped shorten research time. That said, there were failures in the early stages.
Separating spec / research greatly improved the quality of information sharing across the team. New members can catch up quickly, misalignments disappear, and specification changes become easier to track. Deliberation logs tend to balloon when using AI, but organizing them with this separation strategy lets you reach the information you need quickly.
6. Conclusion
A multi-cloud setup seemed intimidatingly complex at first, but once we actually tried it, cross-cloud authentication was handled entirely by Cloud Run's OIDC + AWS STS, and the build turned out simpler than expected.
With S3 + ElastiCache as the data stores and Cloud Run Jobs running the jobs, we finished in a short time. Being able to rely entirely on our existing Go and Docker skills was also a big plus.
Using Claude Code as a sounding board for technology selection sped up development. Structuring our requirements before consulting the AI streamlined the organization of options and let us gather decision-making material quickly. However, design policy and final judgments must remain with humans. Rather than issuing perfect instructions from the start, a dialogue process of questioning the AI's output and iterating on improvements is what leads to high-quality deliverables.
# Implement RPC
**Purpose of this command**: Implement a new RPC endpoint following the Test-Driven Development (TDD) methodology.
## Overview
Implement a new RPC endpoint following strict TDD (Test-Driven Development) methodology.
⚠️ **IMPORTANT NOTICE** ⚠️
Always carefully consider before executing commands from this file. Before implementation, verify:
- The RPC specification is clearly defined
- You understand the impact on the existing codebase
- You understand the Test-Driven Development process
## Arguments
- `rpc_name` (required): The name of the RPC method to implement
- `phase` (optional): The execution phase - "analyze" (default) or "implement"
## Prerequisites (MANDATORY)
**→ For detailed implementation standards, refer to the following sections in `coding-standards.md`:**
- RPC Implementation Standards
- Documentation Standards
- Test-Driven Development (TDD)
### Mandatory Pre-implementation Checklist:
1. **Verify RPC Specification**
2. **Study Project Documentation**
3. **Validate Proto Definition**
4. **Reference Existing Implementations**
5. **Check Auto-generated Files**
## Execution Modes
This command operates in two phases:
### Phase: "analyze" (Default)
When executed without phase argument or with `phase=analyze`:
- Analyzes the proto definition
- Provides guidance and templates for human implementation
- Shows examples of interfaces and tests to write
### Phase: "implement"
When executed with `phase=implement`:
- Reads human-written code from Phase 1
- Executes three parallel implementation tasks
- Generates complete implementation based on interfaces
## RPC Implementation Specification
Based on the provided `rpc_name` argument, the command will:
### RPC Details
- **RPC Name**: `{rpc_name}`
- **Functionality**: {Analyze proto definition to determine functionality}
- **Proto Messages**: {Extract Request/Response types from proto definition}
## Implementation Process
### When phase="analyze" (First Execution)
The command will:
1. **Review Documentation** (MANDATORY)
2. **Analyze Proto Definition**
3. **Generate E2E Test Implementation**
4. **Provide Implementation Guidance**
5. **Output Templates and Implementation**
**E2E Test Implementation** (`test/e2e/{snake_case_rpc_name}_test.go`):
### Human Implementation Phase (Between Commands)
**After reviewing the analysis and AI-generated E2E tests, humans should implement:**
1. **Layer Interfaces**
2. **Domain Layer** (if new domain concepts are needed)
### When phase="implement" (Second Execution)
**After human implementation is complete, the command will:**
1. **First, review all documentation** (CRITICAL)
- **MUST** re-read docs to ensure implementation follows project standards
- **MUST** verify patterns match architecture documentation
- **MUST** check RPC specification in `docs/03-api-reference/{snake_case_rpc_name}.md`
- **MUST** follow patterns from `docs/architecture/` and `docs/technical/`
- **MUST** ensure all implementations align with documentation
2. **Then execute parallel tasks following strict TDD methodology:**
#### Parallel Task 1: Interface Layer Implementation (RPC Handler) - TDD Process
**MANDATORY TDD Steps:**
1. **FIRST: Write failing unit tests**
2. **SECOND: Implement minimal code** to make tests pass (green phase)
3. **THIRD: Refactor** while keeping tests green
#### Parallel Task 2: UseCase Layer Implementation - TDD Process
**MANDATORY TDD Steps:**
1. **FIRST: Write failing unit tests**
2. **SECOND: Implement minimal code** to make tests pass (green phase)
3. **THIRD: Refactor** while keeping tests green
#### Parallel Task 3: Infrastructure Layer Implementation - TDD Process
**MANDATORY TDD Steps:**
1. **FIRST: Write failing unit tests**
2. **SECOND: Implement minimal code** to make tests pass (green phase)
3. **THIRD: Refactor** while keeping tests green
# Commit Ready
**Purpose of this command**: Prepare the codebase for commit by running format, lint, and test checks.
## Overview
Prepare the codebase for commit by running format, lint, and test checks. This command runs the essential pre-commit checks to ensure code quality and correctness before committing changes.
## CRITICAL REQUIREMENTS
**IMPORTANT**: This command MUST execute ALL of the following commands in order:
1. `make gomock`
2. `make fmt`
3. `make lint`
4. `make coverage`
5. `make test`
**MANDATORY**: The command MUST continue fixing all lint and test errors until they are completely resolved. Do not stop at the first error - continue iterating and fixing issues until all checks pass successfully.
## What this command does
1. **Generate mocks** (`make gomock`) - MUST be run first to regenerate all mock files ensuring they're up to date with current interfaces
2. **Format code** (`make fmt`) - Formats all Go code using goimports and gofumpt
3. **Lint code** (`make lint`) - Runs golangci-lint to check for code quality issues
4. **Check test coverage** (`make coverage`) - Generates test coverage report and ensures adequate coverage
5. **Verify test implementation** - Checks that all layers have proper tests:
6. **Run tests** (`make test`) - Executes all tests with race detection
## Execution Order and Error Handling
The commands are executed in this specific order:
1. `make gomock` - ALWAYS runs first to ensure mocks are up to date
2. `make fmt` - Formats the code (including newly generated mocks)
3. `make lint` - Checks code quality
- If lint errors are found, fix them and re-run `make lint`
- Continue fixing and re-running until all lint errors are resolved
4. `make coverage` - Checks test coverage
5. `make test` - Runs all tests
- If test failures occur, fix them and re-run `make test`
- Continue fixing and re-running until all tests pass
**IMPORTANT**: Do NOT stop at the first error. Keep fixing issues and re-running the failed command until it passes, then continue with the next command.
## Prerequisites
- All source code files should be saved
- Docker should be running (for Spanner emulator during tests)
- No ongoing file modifications
## Output
The command will:
- Show formatting results
- Display any lint issues that need to be fixed
- Display test coverage report with percentages per package
- Identify any missing tests or low coverage areas
- Run the full test suite and report results
- Indicate if the code is ready for commit
## Exit behavior
- If any step fails, the command will stop and show the error
- Only when all checks pass is the code considered commit-ready
- Fix any reported issues before attempting to commit
## Common issues and solutions
### Mock generation issues
- **Outdated mocks**: If interfaces have changed but mocks haven't been regenerated, tests will fail
- **Missing mocks**: New interfaces need mock generation before tests can be written
- **Solution**: Always run `make gomock` before committing to ensure all mocks are up to date
### Lint errors
- **godot**: Comments should end with a period
- **gofmt**: File formatting issues (automatically fixed by `make fmt`)
- **goimports**: Import organization issues (automatically fixed by `make fmt`)
### Test failures
- Check test output for specific failure details
- Ensure all mocks are properly configured
- Verify database schema is up to date
### Coverage issues
- **Missing E2E tests**: Check that all RPC endpoints have corresponding tests in `test/e2e/`
- **Low use case coverage**: Ensure all use case methods have unit tests with both success and error scenarios
- **Missing converter tests**: Add tests for all conversion and validation functions
- **Repository coverage**: Verify integration tests cover all repository methods
### Format issues
- Usually auto-fixed by `make fmt`
- Ensure consistent indentation and import grouping
# Create PR
**Purpose of this command**: Create a pull request following the rules in CLAUDE.md and add a review comment.
## Overview
This Claude Code project command creates a PR following the rules in CLAUDE.md and adds a review comment.
## Functionality
1. Ensures commits follow Semantic Git commits convention using `npx git-cz`
2. Checks current branch changes using git commands
3. Creates the PR following the CLAUDE.md PR creation rules
4. After PR creation, adds a self-review comment
## Implementation
This command is executed internally by Claude Code, not as a bash script. When invoked via `/create-pr`, Claude will:
1. Use `npx git-cz` for creating semantic commits if there are uncommitted changes
2. Run `git status`, `git diff`, and `git log` to analyze changes
3. Use `gh pr create` with proper template structure and semantic title
4. Add review comment using `gh pr comment`
## Commit Convention
All commits must follow Semantic Git commits format using `npx git-cz`:
- **feat**: A new feature
- **fix**: A bug fix
- **docs**: Documentation only changes
- **style**: Changes that do not affect the meaning of the code
- **refactor**: A code change that neither fixes a bug nor adds a feature
- **test**: Adding missing tests or correcting existing tests
- **chore**: Changes to the build process or auxiliary tools
PR titles should match the primary commit type and scope.
## Prerequisites
- Changes must be pushed to remote branch beforehand
- Must be executed from a branch other than main
- Ensure commits exist before execution
- GitHub CLI (`gh`) must be configured and authenticated