
Command Execution API v2 - Feature Analysis & Gaps

Date: 2025-11-29
Status: Analysis
Priority: High - Core User Experience

Executive Summary

The current command execution API provides real-time command execution via WebSocket (Socket.IO), but has critical gaps for the iMessage/chat app use case. Users cannot reconnect to running jobs, receive notifications when jobs complete while disconnected, or manage multiple long-running tasks across different repos.

Current Implementation Analysis

What We Have

Architecture:

API Flow:

1. Client connects via WebSocket
2. Client emits 'execute' event with command
3. Server spawns process, stores in jobs Map
4. Server streams output to ONLY the connected socket
5. On completion, server emits 'job-complete' and deletes from jobs Map
6. On disconnect, jobs continue running but outputs are lost

Code Location: server.js:1214-1337
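
For orientation, the handler can be summarized as below. This is a condensed sketch assembled from the excerpts quoted in the sections that follow, not the verbatim server.js code; generateJobId stands in for whatever ID logic the real handler uses.

const { spawn } = require('child_process');

// Condensed sketch of the current 'execute' flow (illustrative, not verbatim)
socket.on('execute', ({ command, repoPath }) => {
  const finalJobId = generateJobId();          // hypothetical helper
  const childProcess = spawn(command, { cwd: repoPath, shell: true, stdio: ['ignore', 'pipe', 'pipe'] });

  jobs.set(finalJobId, { process: childProcess, command, repoPath, startTime: Date.now(), socketId: socket.id });

  childProcess.stdout.on('data', (data) =>
    socket.emit('output', { jobId: finalJobId, data: data.toString(), stream: 'stdout' }));

  childProcess.on('close', (code, signal) => {
    const job = jobs.get(finalJobId);
    socket.emit('job-complete', { jobId: finalJobId, exitCode: code, signal, duration: Date.now() - job.startTime });
    jobs.delete(finalJobId);                   // metadata discarded immediately
  });
});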

What's Missing (Critical Gaps)

1. Job Persistence & Reconnection

Problem: Jobs are stored in memory only (the jobs Map). When a user switches repos, loses their connection, or closes the app, they cannot reconnect to see the running job's progress.

Impact:

Code Evidence:

// server.js:1266-1272
jobs.set(finalJobId, {
  process: childProcess,
  command: finalCommand,
  repoPath,
  startTime: Date.now(),
  socketId: socket.id  // ⚠️ Tied to specific socket
});

User Story:

As a user, I want to start a long build in Repo A, switch to Repo B to work on something else, then come back to Repo A to see the build results.

Current Behavior: ❌ Build output is lost when switching repos
Expected Behavior: ✅ Build continues, user sees results when returning


2. Output Buffering for Disconnected Clients

Problem: Output is only sent to the connected socket. If the client disconnects:

// server.js:1278-1293
childProcess.stdout.on('data', (data) => {
  socket.emit('output', {  // ⚠️ Only to this socket
    jobId: finalJobId,
    data: data.toString(),
    stream: 'stdout'
  });
});

All output since disconnect is permanently lost.

Impact:

User Story:

As a user, when I reconnect after network issues, I want to see what output I missed while disconnected.

Current Behavior: ❌ Output is gone forever
Expected Behavior: ✅ Buffered output delivered on reconnect
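
One way to close this gap, even before full database persistence, is to keep a bounded in-memory buffer per job and replay it when a client resubscribes. A minimal sketch that reuses the jobs, io, childProcess, and finalJobId names from the excerpt above; the buffer cap and 'subscribe-job' event are assumptions, not current behavior.

// Per-job output buffer with replay on reconnect (sketch)
const MAX_BUFFERED_CHUNKS = 5000;              // assumption: cap memory per job

function recordOutput(job, stream, data) {
  job.buffer = job.buffer || [];
  job.buffer.push({ stream, data, at: Date.now() });
  if (job.buffer.length > MAX_BUFFERED_CHUNKS) job.buffer.shift();   // drop oldest chunk
}

// Buffer first, then emit to whoever is currently attached to the job room
childProcess.stdout.on('data', (data) => {
  const job = jobs.get(finalJobId);
  recordOutput(job, 'stdout', data.toString());
  io.to(`job:${finalJobId}`).emit('output', { jobId: finalJobId, data: data.toString(), stream: 'stdout' });
});

// On resubscribe: replay the buffer, then receive live output via the room
socket.on('subscribe-job', ({ jobId }) => {
  const job = jobs.get(jobId);
  if (!job) return;
  socket.join(`job:${jobId}`);
  (job.buffer || []).forEach((chunk) =>
    socket.emit('output', { jobId, data: chunk.data, stream: chunk.stream }));
});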


3. Push Notifications / Job Completion Alerts

Problem: There is no mechanism to notify users when jobs complete while they are viewing a different repo, have the app backgrounded, or are disconnected.

Impact:

Code Evidence:

// server.js:1296-1307
childProcess.on('close', (code, signal) => {
  const job = jobs.get(finalJobId);
  if (job) {
    const duration = Date.now() - job.startTime;
    socket.emit('job-complete', {  // ⚠️ Only to connected socket
      jobId: finalJobId,
      exitCode: code,
      signal,
      duration
    });
    jobs.delete(finalJobId);  // ⚠️ Job metadata deleted immediately
  }
});

User Stories:

As a user, when I'm chatting in Repo B, I want a notification when the build in Repo A finishes.

As a mobile user, when the app is backgrounded, I want a push notification when my deploy completes.

Current Behavior: ❌ No notifications, must manually check
Expected Behavior: ✅ In-app badge + optional push notification
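
A room-based broadcast is the natural Socket.IO mechanism here: instead of emitting to the originating socket, emit to every socket that has joined the repo's room. A sketch, assuming clients join a repo:<id> room when they open a repo and that the job record also stores repoId (neither is in the current code).

// Broadcast completion to the repo room instead of a single socket (sketch)
childProcess.on('close', (code, signal) => {
  const job = jobs.get(finalJobId);
  if (!job) return;
  io.to(`repo:${job.repoId}`).emit('job-notification', {
    jobId: finalJobId,
    repoId: job.repoId,
    command: job.command,
    status: code === 0 ? 'completed' : 'failed',
    exitCode: code,
    signal,
    duration: Date.now() - job.startTime
  });
});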


4. Job History & Status Persistence

Problem: Job metadata is deleted immediately on completion:

jobs.delete(finalJobId);  // ⚠️ Gone forever

Impact:

User Story:

As a user, I want to see my command history for this repo, including exit codes and when they ran.

Current Behavior: ❌ Jobs disappear on completion
Expected Behavior: ✅ Jobs persisted to database with status, output, timestamps
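
Instead of discarding the record, the close handler could write the final status to the jobs table proposed later in this doc. A sketch, assuming SQLite via better-sqlite3 (the actual driver and database path are not specified here).

// Persist final job state instead of discarding it (sketch; assumes better-sqlite3)
const Database = require('better-sqlite3');
const db = new Database('data/app.db');        // assumption: database path

const finishJob = db.prepare(`
  UPDATE jobs
  SET status = ?, exit_code = ?, completed_at = CURRENT_TIMESTAMP
  WHERE id = ?
`);

childProcess.on('close', (code) => {
  finishJob.run(code === 0 ? 'completed' : 'failed', code, finalJobId);
  jobs.delete(finalJobId);                     // now safe to drop the in-memory handle
});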


5. Multi-User Job Visibility

Problem: Jobs are tied to a single socket ID. Other users (or the same user on a different device) can't see running jobs.

Impact:

User Story:

As a team member, I want to see what commands my teammate is running in our shared repo.

Current Behavior: ❌ Jobs are private to a single socket
Expected Behavior: ✅ Jobs associated with repo, visible to all authorized users
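
The minimal change is to key subscriptions by repo rather than by socket: every authorized client joins a per-repo room, and job events are emitted to that room. A sketch; the 'open-repo' event and room naming are assumptions.

// Associate sockets with repos instead of jobs with sockets (sketch)
socket.on('open-repo', ({ repoId }) => {
  // assumption: repo-level authorization is checked before joining
  socket.join(`repo:${repoId}`);
});

// Any job event for the repo now reaches every member on every device
function broadcastJobEvent(repoId, event, payload) {
  io.to(`repo:${repoId}`).emit(event, payload);
}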


6. Long-Running Job Management

Problem: After a disconnect there is no way to list, reattach to, or cancel a running job, so long-running processes are effectively orphaned:

Code Evidence:

// server.js:1331-1336
socket.on('disconnect', () => {
  console.log('Client disconnected:', socket.id);
  // Optional: Kill all jobs for this socket
  // For now, let jobs continue running  // ⚠️ Orphaned processes
});

Impact:

User Story:

As a user, I want to see all my running jobs across all repos and cancel old ones.

Current Behavior: ❌ Jobs run invisibly after disconnect
Expected Behavior: ✅ Dashboard showing all active jobs with reconnect/cancel options


Additional Functional Gaps

7. No Job Queueing

8. No Interactive Commands Support

stdio: ['ignore', 'pipe', 'pipe']  // stdin=ignore
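
Supporting interactive commands would mean opening stdin as a pipe and forwarding client keystrokes to it. A sketch; the 'job-input' event name is hypothetical.

// Sketch: pipe stdin so clients can answer prompts
const { spawn } = require('child_process');

const child = spawn(finalCommand, {
  cwd: repoPath,
  shell: true,
  stdio: ['pipe', 'pipe', 'pipe']              // stdin=pipe instead of ignore
});

socket.on('job-input', ({ jobId, data }) => {
  const job = jobs.get(jobId);
  if (job && job.process.stdin.writable) {
    job.process.stdin.write(data);             // forward keystrokes/answers to the process
  }
});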

9. No Job Metadata/Tagging

10. No Progress Indicators


Proposed Solution: Command Execution v2

Database Schema

CREATE TABLE jobs (
  id TEXT PRIMARY KEY,
  repo_id TEXT NOT NULL,
  command TEXT NOT NULL,
  mode TEXT CHECK(mode IN ('shell', 'agent')),
  status TEXT CHECK(status IN ('pending', 'running', 'completed', 'failed', 'cancelled')),
  exit_code INTEGER,
  output_path TEXT,  -- Path to buffered output file
  output_size INTEGER DEFAULT 0,
  started_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  completed_at TIMESTAMP,
  created_by TEXT,  -- User/session identifier
  tags TEXT,  -- JSON array
  FOREIGN KEY (repo_id) REFERENCES repos(id)
);

CREATE TABLE job_outputs (
  job_id TEXT NOT NULL,
  sequence INTEGER NOT NULL,
  stream TEXT CHECK(stream IN ('stdout', 'stderr')),
  data TEXT NOT NULL,
  timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (job_id, sequence),
  FOREIGN KEY (job_id) REFERENCES jobs(id) ON DELETE CASCADE
);

CREATE INDEX idx_jobs_repo_status ON jobs(repo_id, status);
CREATE INDEX idx_jobs_status ON jobs(status);

Key Features

1. Persistent Job Storage
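
A sketch of what persistence could look like at spawn time, assuming SQLite with better-sqlite3 and the schema above (the driver choice is an assumption, not a decision made in this doc).

// Sketch: record the job at spawn time so it survives disconnects and restarts
// db: better-sqlite3 handle as opened in the sketch under gap 4
const { spawn } = require('child_process');

const insertJob = db.prepare(`
  INSERT INTO jobs (id, repo_id, command, mode, status, created_by)
  VALUES (?, ?, ?, ?, 'running', ?)
`);

function startJob({ jobId, repoId, command, mode, userId, repoPath }) {
  insertJob.run(jobId, repoId, command, mode, userId);
  const child = spawn(command, { cwd: repoPath, shell: true });
  jobs.set(jobId, { process: child, repoId, command, startTime: Date.now() });
  return child;
}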

2. Reconnection Support

// Client reconnects
socket.emit('list-jobs', { repoId });
// Server responds with all jobs for this repo
socket.emit('jobs-list', { jobs });

// Client subscribes to specific job
socket.emit('subscribe-job', { jobId });
// Server sends buffered output + new output
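
On the server side, subscribe-job could replay persisted output before attaching the socket to the live stream. A sketch against the job_outputs table above; db is the better-sqlite3 handle assumed earlier.

// Sketch: server-side 'subscribe-job' handler with output replay
const getOutput = db.prepare(`
  SELECT sequence, stream, data FROM job_outputs
  WHERE job_id = ? AND sequence > ?
  ORDER BY sequence
`);

socket.on('subscribe-job', ({ jobId, fromSequence = 0 }) => {
  // 1. Replay everything the client missed
  for (const row of getOutput.all(jobId, fromSequence)) {
    socket.emit('output', { jobId, data: row.data, stream: row.stream, sequence: row.sequence });
  }
  // 2. Join the job room so new output arrives live
  socket.join(`job:${jobId}`);
});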

3. Job Lifecycle Management

States: pending → running → completed/failed/cancelled
- pending: Queued but not started
- running: Process active
- completed: Exit code 0
- failed: Exit code != 0
- cancelled: User cancelled

4. Notification System

// When job completes
await notifyJobCompletion({
  jobId,
  repoId,
  status: 'completed',
  exitCode: 0
});

// Broadcasts to:
// 1. All connected sockets for this repo
// 2. Push notification if user has enabled
// 3. Email digest if configured
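
notifyJobCompletion itself could be a thin dispatcher over whatever channels are configured. A sketch; the push and email helpers named in comments are placeholders, not existing modules.

// Sketch: fan-out dispatcher (sendPush / queueEmailDigest are hypothetical helpers)
async function notifyJobCompletion({ jobId, repoId, status, exitCode }) {
  // 1. Everyone currently viewing the repo
  io.to(`repo:${repoId}`).emit('job-notification', { jobId, repoId, status, exitCode });

  // 2. Push notification, if the user opted in (placeholder)
  // await sendPush(userIdFor(jobId), `Job ${status}: exit ${exitCode}`);

  // 3. Email digest, if configured (placeholder)
  // await queueEmailDigest(repoId, { jobId, status });
}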

5. Job History UI

┌─────────────────────────────────────┐
│ Repo: AhaiaApp                      │
├─────────────────────────────────────┤
│ Running (2)                         │
│  ▸ npm test (5m ago) [view]       │
│  ▸ git clone ... (10m ago) [view] │
│                                     │
│ Recent (10)                         │
│  ✓ npm install (1h ago) - 32s     │
│  ✗ npm run build (2h ago) - failed│
│  ✓ git status (3h ago) - 1s       │
└─────────────────────────────────────┘

6. Output Management


API Changes Required

New WebSocket Events

Client → Server:

// List jobs for a repo
socket.emit('list-jobs', {
  repoId,
  status: 'running',  // optional filter
  limit: 50
});

// Subscribe to job updates
socket.emit('subscribe-job', { jobId });

// Unsubscribe from job
socket.emit('unsubscribe-job', { jobId });

// Get job output history
socket.emit('get-job-output', {
  jobId,
  fromSequence: 0,  // For pagination
  limit: 1000
});

Server → Client:

// Job list response
socket.emit('jobs-list', { jobs: [...] });

// Job status update (for subscribed jobs)
socket.emit('job-status', {
  jobId,
  status: 'completed',
  exitCode: 0
});

// Job completion notification (broadcast to repo)
socket.emit('job-notification', {
  jobId,
  repoId,
  command: 'npm test',
  status: 'completed',
  duration: 1234
});
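
A server-side sketch of the list-jobs handler backed by the proposed jobs table; the query shape is illustrative and db is the better-sqlite3 handle assumed earlier.

// Sketch: 'list-jobs' handler
const listJobs = db.prepare(`
  SELECT id, command, mode, status, exit_code, started_at, completed_at
  FROM jobs
  WHERE repo_id = ? AND (? IS NULL OR status = ?)
  ORDER BY started_at DESC
  LIMIT ?
`);

socket.on('list-jobs', ({ repoId, status = null, limit = 50 }) => {
  socket.emit('jobs-list', { jobs: listJobs.all(repoId, status, status, limit) });
});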

New REST Endpoints

// Get job history
GET /api/repos/:repoId/jobs?status=completed&limit=20

// Get single job details
GET /api/jobs/:jobId

// Download job output
GET /api/jobs/:jobId/output.txt

// Cancel job (alternative to WebSocket)
POST /api/jobs/:jobId/cancel

// Delete a job record (cleanup of old jobs)
DELETE /api/jobs/:jobId
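
The REST surface is a thin layer over the same tables. An Express sketch; handlers are illustrative, auth middleware is omitted, and db is the better-sqlite3 handle assumed earlier.

// Sketch: REST endpoints over the jobs table
app.get('/api/repos/:repoId/jobs', (req, res) => {
  const { status = null, limit = 20 } = req.query;
  const rows = db.prepare(`
    SELECT id, command, status, exit_code, started_at, completed_at
    FROM jobs WHERE repo_id = ? AND (? IS NULL OR status = ?)
    ORDER BY started_at DESC LIMIT ?
  `).all(req.params.repoId, status, status, Number(limit));
  res.json({ jobs: rows });
});

app.post('/api/jobs/:jobId/cancel', (req, res) => {
  const job = jobs.get(req.params.jobId);      // in-memory handle for running jobs
  if (job) job.process.kill('SIGTERM');
  db.prepare(`UPDATE jobs SET status = 'cancelled', completed_at = CURRENT_TIMESTAMP WHERE id = ?`)
    .run(req.params.jobId);
  res.json({ ok: true });
});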

Implementation Priority

Phase 1: Critical (Must-Have)

  1. Job persistence - Save jobs to database
  2. Output buffering - Store outputs in database
  3. Reconnection - Subscribe to existing jobs
  4. Job listing - View active jobs per repo

Estimated effort: 2-3 days
Impact: Fixes critical UX gaps

Phase 2: Important (Should-Have)

  1. Job history UI - Browse past commands
  2. In-app notifications - Alert when jobs complete
  3. Job status dashboard - Global view of all jobs
  4. Output rotation - Cleanup old job data

Estimated effort: 2-3 days
Impact: Professional-grade UX

Phase 3: Nice-to-Have

  1. Push notifications - Mobile alerts
  2. Multi-user visibility - Team collaboration
  3. Interactive commands - stdin support
  4. Job queueing - Resource management

Estimated effort: 3-5 days
Impact: Advanced features


Security Considerations

  1. Job ownership - Users should only see their own jobs (or team jobs)
  2. Output sanitization - Strip sensitive data from outputs (API keys, passwords)
  3. Resource limits - Max jobs per user, max output size, max job duration
  4. Path validation - Already implemented, keep enforcing

Testing Strategy

Unit Tests

Integration Tests

E2E Tests


Success Metrics

Before (Current State):

After (Target State):


References

Current Code:

Database:

Related Features:


Conclusion

The current command execution API is functional for synchronous, attended operations but fails for the async, multi-tasking, chat-like UX this app is designed for.

Key Insight: The app is positioned as "iMessage for repos", but command execution currently feels like a traditional SSH terminal. Users expect to fire off a task, switch contexts, get notified when it finishes, and come back to the results later.

Recommendation: Implement Phase 1 (job persistence + reconnection) as highest priority to unlock the core UX vision. This is table-stakes for the iMessage-like experience.