Conversation


@slkzgm slkzgm commented Jan 15, 2026

This PR adds an optional daemon prototype that can host CodexMonitor's backend logic in a separate process.

Motivation:

What’s included:

  • New binary: src-tauri/src/bin/codex_monitor_daemon.rs
  • Simple line-delimited JSON-RPC protocol over TCP (requests/responses + notifications for events)
  • Default bind is 127.0.0.1:<port> and the daemon requires a shared token unless --insecure-no-auth is used.
  • Documentation: REMOTE_BACKEND_POC.md
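To make the wire format concrete, here is a minimal sketch of the line-delimited framing: one JSON object per line, terminated by `\n`. The field names follow JSON-RPC 2.0, but the method name and the `token` param are illustrative assumptions, not taken from the actual daemon code (real code would serialize with `serde_json`; plain formatting keeps the sketch dependency-free).

```rust
// Hypothetical framing helper: one JSON-RPC request per line. The "token"
// param stands in for the daemon's shared-token auth; its exact shape is
// an assumption.
fn frame_request(id: u64, method: &str, token: &str) -> String {
    format!(
        "{{\"jsonrpc\":\"2.0\",\"id\":{},\"method\":\"{}\",\"params\":{{\"token\":\"{}\"}}}}\n",
        id, method, token
    )
}

fn main() {
    let line = frame_request(1, "skills/list", "shared-secret");
    // Each request occupies exactly one line, so the peer can read it with
    // BufRead::read_line and parse each line independently.
    assert!(line.ends_with('\n'));
    assert_eq!(line.matches('\n').count(), 1);
    println!("{line}");
}
```

The one-line-per-message invariant is what makes the protocol trivial to parse on both sides: no length prefixes, just `read_line` and a JSON parse per line.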

What’s not included:

  • No wiring in the desktop app yet (no setting / proxy layer). This is intentionally kept as a standalone building block.

Testing:

  • cargo test (in src-tauri/)
  • Manual smoke test using REMOTE_BACKEND_POC.md


slkzgm commented Jan 15, 2026

Pushed a small follow-up commit to keep the daemon in sync with the current app-server RPC surface (e.g. turn/start, account/rateLimits/read, skills/list with cwd). This should make the upcoming remote-proxy work usable without having to retrofit the server later.

@Dimillian (Owner) commented:

When launching the app in dev:

```
❯ npm run tauri dev

> [email protected] tauri
> tauri dev

     Running BeforeDevCommand (`npm run dev`)

> [email protected] dev
> vite

  VITE v7.3.1  ready in 126 ms

  ➜  Local:   http://localhost:1420/
     Running DevCommand (`cargo  run --no-default-features --color always --`)
error: `cargo run` could not determine which binary to run. Use the `--bin` option to specify a binary, or the `default-run` manifest key.
available binaries: codex-monitor, codex_monitor_daemon
        Info Watching /Users/dimillian/Documents/Dev/CodexMonitor/src-tauri for changes...
```

@Dimillian (Owner) commented:

@codex review

@chatgpt-codex-connector bot left a comment
💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6aad947ee5


```rust
// Excerpt from handle_client (event-forwarding task):
let mut rx = events.subscribe();
let out_tx_events = out_tx.clone();
events_task = Some(tokio::spawn(async move {
    while let Ok(event) = rx.recv().await {
```


P2: Handle broadcast lag without killing the event stream

In handle_client the event-forwarding loop is while let Ok(event) = rx.recv().await, but tokio::sync::broadcast::Receiver::recv returns Err(Lagged(_)) when the receiver falls behind (e.g., a slow client or a burst of terminal output). This code exits the loop on that error, so the client silently stops receiving all future events even though the TCP connection stays open. Consider handling Err(Lagged) by continuing (and optionally notifying the client) or re-subscribing so a temporary backlog doesn’t permanently drop the event stream.

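The fix the review suggests is to treat a lagged receiver as recoverable and only exit on a closed channel. The sketch below mirrors the shape of `tokio::sync::broadcast::error::RecvError` with a plain mock so the match pattern can be shown synchronously, without an async runtime; in the real daemon the receiver is a `tokio::sync::broadcast::Receiver` and `recv()` is awaited.

```rust
// Mock of tokio's broadcast error type, just to demonstrate the pattern.
enum RecvError {
    Lagged(u64), // receiver fell behind; the u64 is the number of skipped events
    Closed,      // all senders dropped
}

fn forward_events(mut recv: impl FnMut() -> Result<String, RecvError>) -> Vec<String> {
    let mut delivered = Vec::new();
    loop {
        match recv() {
            Ok(event) => delivered.push(event),
            // A temporary backlog should not kill the stream: log, skip the
            // missed events, and keep forwarding.
            Err(RecvError::Lagged(skipped)) => {
                eprintln!("event stream lagged, skipped {skipped} events");
                continue;
            }
            // Only a closed channel ends the task.
            Err(RecvError::Closed) => break,
        }
    }
    delivered
}

fn main() {
    let mut script = vec![
        Ok("a".to_string()),
        Err(RecvError::Lagged(3)),
        Ok("b".to_string()),
        Err(RecvError::Closed),
    ]
    .into_iter();
    let delivered = forward_events(move || script.next().unwrap());
    // Events after the lag are still delivered.
    assert_eq!(delivered, vec!["a".to_string(), "b".to_string()]);
}
```

The key point is that `Lagged` carries the number of dropped messages, so the daemon could also forward a notification to the client telling it how many events it missed.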


slkzgm commented Jan 16, 2026

Pushed two small follow-ups addressing the dev-run issue and the event-stream robustness feedback:

  • Fixed npm run tauri dev (cargo run ambiguity with 2 bins) by setting default-run = "codex-monitor" in src-tauri/Cargo.toml.
  • Made the daemon event-forwarding task resilient to broadcast::RecvError::Lagged(_) (now continues on lag and only breaks on Closed / disconnect), so a slow client/burst won't permanently stop events.

Commits: fc62fac, 42b2f0b.
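For reference, the `cargo run` ambiguity fix amounts to a one-line manifest change (a sketch; other fields of the real `src-tauri/Cargo.toml` are elided):

```toml
# src-tauri/Cargo.toml — with two [[bin]] targets, `cargo run` needs a default
[package]
name = "codex-monitor"
default-run = "codex-monitor"
```

With `default-run` set, plain `cargo run` launches the desktop app, while the daemon remains reachable via `cargo run --bin codex_monitor_daemon`.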

@slkzgm slkzgm force-pushed the pr3-daemon-foundation branch from 42b2f0b to 2818820 Compare January 16, 2026 12:43

slkzgm commented Jan 16, 2026

CI was failing because upstream main added an optional codex_home parameter to spawn_workspace_session (GitHub Actions builds the PR merge commit).

I rebased this branch on current main (v0.6.8) and updated the daemon to pass the resolved legacy .codexmonitor dir as CODEX_HOME (or None otherwise). CI should now be green again.
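The CODEX_HOME handling described above can be sketched as follows. The helper name and the resolution logic are assumptions for illustration; the actual resolution lives in the daemon, and the point is only the `Some`/`None` split: set the env var when a legacy dir exists, otherwise let the child process fall back to its own default.

```rust
use std::path::PathBuf;
use std::process::Command;

// Hypothetical helper: resolve the legacy .codexmonitor dir if it exists.
fn legacy_codex_home(home: &PathBuf) -> Option<PathBuf> {
    let dir = home.join(".codexmonitor");
    dir.is_dir().then_some(dir)
}

fn main() {
    let home = PathBuf::from(std::env::var("HOME").unwrap_or_else(|_| ".".into()));
    let mut cmd = Command::new("codex");
    // Only set CODEX_HOME when a legacy dir was found; otherwise the child
    // uses its own default (the `None` case described in the comment above).
    if let Some(dir) = legacy_codex_home(&home) {
        cmd.env("CODEX_HOME", dir);
    }
    // cmd.spawn() would launch the session here.
}
```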

@Dimillian (Owner) commented:

Thanks!

@Dimillian Dimillian merged commit f030d3e into Dimillian:main Jan 16, 2026
2 checks passed