Internal Operations Document

QA Officer Protocol
Solo Field Manual

v1.0 — AI-accelerated development · HyperSync workflow · Manual testing
Solo Officer Edition · Multi-project (up to 8) · Manual Functional Testing · Web & Mobile Apps

Version: 1.0
Dev model: AI Agents
QA model: Human Manual
Flow: HyperSync PRD columns
Projects: Up to 8 concurrent
§ 00

Core principles for AI-accelerated development

Development using AI agents is 3–10× faster than traditional dev. This creates a specific pressure on QA: the bottleneck is now you, not the developers. The protocol is designed so that you can be fast and thorough — but never skip the fundamentals to keep up with velocity.

The Golden Rule

AI agents introduce bugs that look correct — they pass the happy path but fail on edge cases, permissions, partial states, and error handling. Your job is to be the adversarial user, not just the happy-path validator.

§ 01

Intake — when a PRD list arrives in your QA column

A PRD list arrives in your HyperSync column when developers have pushed to staging and marked it ready. Before you touch staging, complete this intake sequence. It takes 10–20 minutes and will save you hours of disorganized testing.

Step 1 — Read the full list before testing anything

  1. Open every task in the PRD list and read all titles and descriptions. Do not start testing yet.
  2. Classify each task — write next to it: BUG FIX, FEATURE, UI/UX, or CONFIG. If a task is ambiguous, classify it by its primary change.
  3. Count total tasks. If a single PRD drop contains more than 12 tasks and you have other active projects, flag it to management before starting — this is a load issue, not a QA failure.
  4. Check staging is actually deployed — navigate to the staging URL and verify the build is live. Never test a stale environment.

Bug Fix

Existing functionality was broken. The fix must restore it exactly — no regressions, no changed behavior outside the stated fix.

Max time: 30 min per task

New Feature

New capability added. Test against every AC item. Also test what happens when the feature is misused or accessed without permission.

Max time: 60 min per task

UI / UX

Visual or interaction change. Test across at least 2 browsers and 2 screen sizes. Interaction states matter: hover, focus, loading, empty, error.

Max time: 30 min per task
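The per-type time caps above can be tallied into a rough QA budget for a whole drop. This is an illustrative sketch only; the 15-minute default for unlisted types such as CONFIG is an assumption, since the cards above give no cap for it.

```python
# Per-type QA time caps, taken from the cards above (minutes per task).
MAX_MINUTES = {"BUG FIX": 30, "FEATURE": 60, "UI/UX": 30}

def qa_time_budget(task_types):
    """Sum the per-task caps; unlisted types (e.g. CONFIG) default to 15 min (an assumption)."""
    return sum(MAX_MINUTES.get(t, 15) for t in task_types)

# A 6-task drop: two bug fixes, three features, one UI change
drop = ["BUG FIX", "BUG FIX", "FEATURE", "FEATURE", "FEATURE", "UI/UX"]
print(qa_time_budget(drop))  # 2*30 + 3*60 + 30 = 270 minutes
```

Running this before you start tells you immediately whether a drop fits in your day or needs to be flagged as a load issue.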

Step 2 — Set your testing order

Priority | Task type | Why first
1st — CRITICAL | Bug fixes on live features | A live regression affects real users. Must be verified first regardless of complexity.
2nd — HIGH | New features with P1 priority | These block a release if untested. Follow AC order strictly.
3rd — MEDIUM | New features P2/P3, UI changes | Important but not release-blocking on their own.
4th — LOW | Config, copy, minor UI | Verify quickly, last. These rarely introduce regressions.
§ 02

Deriving test cases from technical acceptance criteria

Current PRD format uses technical ACs written for developers. This section shows you how to translate any AC into a concrete manual test, using the actual format shown in HyperSync tasks.

The Translation Rule

Every AC statement has the form: "When [action], [result] happens." Your job: do the action. Verify the result. Then do the action wrong and verify it fails gracefully. That is one complete test.

Live example — US-001: Allow status change on a Request

The breakdown below shows how to read the 5 ACs from this task and derive concrete test steps.

AC1 — Status change from the All Requests list updates immediately and reflects in the row.
  Happy path: Go to Requests → All Requests. Pick any request. Change its status. Confirm the row updates without a page reload.
  Failure test (the one AI devs miss): Change the status twice in rapid succession. Check for a race condition — does the second change override the first correctly?

AC2 — The same action works from the Request detail page.
  Happy path: Open a request via Requests → All Requests → [Request No.]. Change the status. Confirm the change is saved.
  Failure test: Change the status on the detail page, then navigate back to All Requests. Does the list row reflect the updated status?

AC3 — Users without approval permission cannot move a request to Pending Approval or Completed.
  Happy path: Log in as a non-approver user. Try to change the status to Pending Approval. Confirm the option is absent or blocked.
  Failure test: Inspect the request as a low-permission user — are restricted statuses hidden from the UI entirely, or just disabled? If disabled, can they be triggered via keyboard/tab?

AC4 — A successful change writes a log entry in System → System Logs.
  Happy path: Change a status. Go to System → System Logs. Confirm an entry exists with the correct user, request number, old status, and new status.
  Failure test: Change the same status twice. Confirm two separate log entries appear, not one overwritten entry.

AC5 — On server-side failure, the user sees an error message and the request stays in its prior status.
  Happy path: This requires simulating a failure — if you can throttle the network or have a staging flag, trigger an error and confirm the error message appears.
  Failure test: Even without simulating, check whether the UI shows a loading/disabled state during the change. If there is no loading indicator, the error state is likely also broken.
AC5-type tests (server-side failure states) are the hardest to test manually. If you cannot simulate a failure, note it in your report as "AC5: Could not simulate server failure — error handling not verified. Recommend dev provide a staging flag for this." Do not mark it as PASS if you cannot verify it.

The universal AC decoder — use for any task

  1. Read the AC. Identify: WHO (which user/role), WHERE (which page/path), WHAT ACTION, EXPECTED RESULT.
  2. Test the exact happy path as described. One AC = one happy-path test.
  3. Test with a different user role than stated (e.g. if AC says "Owner", test as a non-Owner).
  4. Test with an empty or invalid input where applicable (blank fields, very long strings, special characters).
  5. Test what happens if you refresh mid-action or navigate away during a process.
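The decoder steps above can be sketched as a small generator: one AC in, four concrete test stubs out (step 1 is the read/parse itself). The field names `who`, `where`, `action`, and `result` mirror step 1 and are illustrative:

```python
def derive_tests(who, where, action, result):
    """Expand one AC (WHO / WHERE / ACTION / RESULT) into the four decoder checks."""
    return [
        f"Happy path: as {who}, at {where}, {action}; expect {result}",
        f"Role swap: repeat as a user who is NOT {who}; expect graceful denial",
        f"Invalid input: attempt '{action}' with empty, overlong, and special-char input",
        f"Interrupt: refresh or navigate away mid-'{action}'; verify state is consistent",
    ]

for step in derive_tests("Owner", "Requests → All Requests",
                         "change status", "row updates immediately"):
    print("-", step)
```

One call per AC gives you a written checklist before you touch staging, which is exactly what the intake sequence in § 01 asks for.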
§ 03

Testing procedures by task type

Follow the procedure for the task's classification. Do not mix procedures — a bug fix is not tested the same way as a new feature, even if they appear in the same PRD.

Procedure A — Bug fix

  1. Reproduce the original bug first. Use the steps described in the ticket. If you can reproduce it, note "Bug confirmed before fix." If you cannot, note that too — the dev may have fixed it in a different commit, or staging may not have been updated.
  2. Test the fix. Follow the same steps. Confirm the bug no longer occurs. Confirm the correct behavior is now present.
  3. Regression check — 3 surrounding areas. AI bug fixes frequently break adjacent functionality. Identify the 2–3 features most likely to share code with the fix and test each briefly.
  4. Check data integrity. If the bug involved data being saved or displayed incorrectly, verify that existing records were not corrupted by the fix.

Procedure B — New feature

  1. Happy path, AC by AC. Test each acceptance criterion in order. Mark each as pass or fail as you go. Do not skip ahead.
  2. Empty states. What does the feature look like with no data? New UI features from AI agents routinely skip this. Look for blank lists, empty forms, 0-result searches.
  3. Boundary inputs. Maximum length strings. Negative numbers where positive are expected. Past dates in date pickers. Non-ASCII characters in text fields.
  4. Permission matrix. If the feature has any role-based behavior (even implied), test with at least two user roles.
  5. Navigation context. Does the feature work if you arrive at it via a direct URL? Does it work if you return to it after navigating away?
  6. Mobile/responsive (for web apps). Test at one mobile viewport (375px). Check that nothing is cut off, overlapping, or untappable.
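The boundary-input step above benefits from a fixed list you reuse on every feature. These values are illustrative starting points, not an exhaustive set:

```python
# A reusable boundary-input set for step 3 of Procedure B.
BOUNDARY_INPUTS = [
    "",                            # empty field
    "A" * 10_000,                  # maximum-length / overlong string
    "-1",                          # negative where a positive is expected
    "1999-01-01",                  # past date for date pickers
    "名前 ümlaut 🙂",               # non-ASCII characters
    "<script>alert(1)</script>",   # markup that must be escaped, never executed
]

for value in BOUNDARY_INPUTS:
    # In manual testing this is a checklist; printed here only to enumerate it.
    print(repr(value[:20]))
```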

Procedure C — UI / UX change

  1. Visual check vs. description. Read the task description. Confirm the visual change matches what was requested. Note anything that looks unintended.
  2. Interaction states. For every changed element, manually test: default, hover, active/click, focus (tab key), disabled (if applicable), loading (if applicable).
  3. Breakpoints. Test at desktop (1280px), tablet (768px), and mobile (375px). At minimum use browser DevTools for this.
  4. Cross-browser spot check. Test in Chrome + one other browser (Safari or Firefox). AI-generated CSS frequently uses properties not fully supported across browsers.
  5. Adjacent element check. Confirm the UI change did not shift, overlap, or break any elements around it on the same page.

Severity classification — use for every bug you find

S1 — Blocker

Core functionality broken. No workaround. Release must not proceed. Must be fixed before re-test.

S2 — Major

Important feature broken. Workaround may exist. Discuss with dev lead whether to block release.

S3 — Minor

Feature partially works, or a cosmetic issue has functional impact. Can ship with a noted caveat.

S4 — Cosmetic

Visual defect, copy error, or minor misalignment. No functional impact. Can ship; log for next cycle.
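The severity ladder above condenses into a small lookup plus the one release question each level implies. Wording is condensed from the S1–S4 cards; the helper name is illustrative:

```python
# The S1-S4 ladder from the cards above: (label, release consequence).
SEVERITY = {
    "S1": ("Blocker",  "block release; fix before re-test"),
    "S2": ("Major",    "discuss with dev lead whether to block"),
    "S3": ("Minor",    "can ship with a noted caveat"),
    "S4": ("Cosmetic", "can ship; log for next cycle"),
}

def blocks_release(severity):
    """Only S1 blocks unconditionally; S2 is a dev-lead judgment call."""
    return severity == "S1"
```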

§ 04

Managing 8 concurrent projects — the daily protocol

This is the most critical operational challenge. With AI development running at speed across multiple projects simultaneously, the QA column across projects can fill faster than one person can process. This section is about triage discipline, not testing shortcuts.

Daily opening routine (first 20 minutes)

0:00–0:05
Column sweep. Open HyperSync and review all QA columns across every active project. Count tasks waiting, note any that have been there more than 48 hours.
0:05–0:10
Severity triage. Look for any S1-equivalent tasks (items explicitly marked as blockers or critical production bugs). These override all other work regardless of project.
0:10–0:15
Set today's focus. Pick a maximum of 2 projects to give deep testing to in a single day. If more than 4 projects have active QA work, communicate the queue to management before starting.
0:15–0:20
Environment check. Verify staging is deployed and accessible for the 2 projects you're testing today. Stale staging = time wasted. Fix before starting.
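The 48-hour check in the column sweep can be sketched as a filter. HyperSync exposes no API in this document, so the input here is a hand-maintained list of `(project, task, hours_waiting)` tuples, an assumption for illustration:

```python
def stale_tasks(queue, ceiling_hours=48):
    """Return (project, task) pairs that have waited past the ceiling."""
    return [(p, t) for (p, t, h) in queue if h > ceiling_hours]

queue = [("Proj A", "US-001", 12), ("Proj B", "US-014", 60)]
print(stale_tasks(queue))  # [('Proj B', 'US-014')]
```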

The Context-Switch Rule

Never switch projects in the middle of a task. Finish one complete task (including writing your notes), then switch. Context switching mid-task is how bugs get missed and notes get incomplete.

Project batching — reduce cognitive load

Group projects by type when testing on the same day. Dashboard + dashboard uses the same mental model. E-commerce + e-commerce shares test patterns. Don't follow a dashboard session with an e-commerce session unless there's a priority reason.

Load ceiling — when to flag

Situation: More than 3 projects with active QA lists at once.
Action: Send a message to management. Do not silently process all of them — the quality will drop. Get explicit prioritization.

Situation: A single PRD drop has more than 15 tasks.
Action: Request that dev splits the list into batches. Large drops from AI agents are common; testing 15 tasks at once leads to fatigue-driven misses.

Situation: Staging environment is broken or outdated.
Action: Log it in HyperSync immediately and do not test. Move to a different project. Testing a broken environment wastes time and produces false results.

Situation: A task has no AC and no clear description.
Action: Push it back to technical with the note: "No acceptance criteria. Cannot determine pass/fail condition. Please define before QA."
§ 05

Reporting — how to write notes in HyperSync

Your notes in HyperSync are the primary communication between QA and development. They must be precise enough that a developer can understand the problem and reproduce it without asking you follow-up questions.

Pass notation — use this exact format

Template
STATUS: PASS
TESTED BY: [Your name] — [Date]
ENVIRONMENT: Staging — [staging URL or build ID if available]
SCOPE: AC1 ✓ AC2 ✓ AC3 ✓ AC4 ✓ AC5 – not verified (see note)
NOTE: AC5 server-failure simulation not possible in staging. Error state unverified.

Fail notation — use this exact format

Template
STATUS: FAIL — S2 (Major)
TESTED BY: [Your name] — [Date]
ENVIRONMENT: Staging — [URL]
FAILING ACs: AC2, AC4

BUG 1 (AC2) — Status change does not persist from detail page
Steps:
  1. Go to Requests → All Requests
  2. Click any request to open detail page
  3. Change status to "In Process"
  4. Navigate back to All Requests list
Expected: Row shows "In Process"
Actual: Row still shows previous status. Change reverted on navigation.
Attachment: [screenshot filename]

PASSED: AC1 ✓ AC3 ✓ AC5 ✓
Always attach a screenshot or screen recording to every failing AC note. Text descriptions of UI bugs are often misinterpreted. A 15-second recording using the browser's built-in screen capture is sufficient.
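If you write many of these notes per day, rendering them from structured fields keeps every report in the exact template shape. This is a minimal sketch; the function and field names are illustrative, not part of HyperSync:

```python
def pass_note(tester, date, env, scope, note=None):
    """Render a PASS note in the exact template format used above."""
    lines = [
        "STATUS: PASS",
        f"TESTED BY: {tester} — {date}",
        f"ENVIRONMENT: Staging — {env}",
        "SCOPE: " + " ".join(scope),
    ]
    if note:
        lines.append(f"NOTE: {note}")
    return "\n".join(lines)

print(pass_note("J. Doe", "2024-05-01", "staging.example.com",
                ["AC1 ✓", "AC2 ✓", "AC3 ✓"],
                "AC5 server-failure simulation not possible in staging."))
```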
§ 06

Push-back protocol — when to return to technical

Moving a PRD back to the technical column in HyperSync should be deliberate and documented. These are the conditions that qualify a push-back.

S1 BLOCKER found. Any single blocker-severity bug returns the entire task to technical, not just the affected AC. The developer must review the full diff again.
Staging is not deployed or is broken. Document which URL you checked and what error appeared. Return immediately — do not test anything else in the same PRD until confirmed fixed.
No acceptance criteria on a task. You cannot determine pass or fail without AC. Note: "AC missing. Define expected behavior before QA can proceed."
Multiple S2 bugs on the same feature. Two or more major bugs on a single feature indicates the implementation is not stable. Return rather than leaving the developer to patch one at a time.
Regression introduced. If a fix or feature broke existing, previously working functionality, this always returns to technical regardless of the original task's scope.
S3/S4 bugs only — do NOT push back. Minor or cosmetic bugs are documented in your notes and the task is marked PASS WITH NOTES. Create a separate follow-up task for them.
§ 07

Completion and sign-off checklist

Before marking a PRD list as completed and pushing it toward release, run through this checklist. Every item must be checked or explicitly noted as N/A.

The End-to-End Walk

The end-to-end walk is the final and most important item on the checklist. After all individual tasks pass, spend 5–10 minutes using the app naturally — as if you are the actual user. AI-agent development frequently produces features that pass individually but break when used in sequence. This walk is your final gate.