# mayros batch
Process multiple prompts in parallel with configurable concurrency. Input can be a JSONL file, a text file with --- separators, or piped from stdin. Results are streamed as JSON-lines to stdout or written to a file.
## Subcommands
### batch run [file]
Process a file of prompts in parallel. Each prompt is sent to the Gateway as an independent chat request.
Arguments:
| Argument | Description |
|---|---|
| `[file]` | Input file (JSONL or text with `---` separators). Use `-` for stdin. |
Options:
| Flag | Description | Default |
|---|---|---|
| `-c, --concurrency <n>` | Max concurrent prompts (1-16) | `4` |
| `-o, --output <file>` | Write results to file instead of stdout | — |
| `--json` | Output results as JSON-lines | `false` |
| `--session <key>` | Session key prefix (each item gets a unique suffix) | — |
| `--thinking <level>` | Thinking level for all prompts | — |
| `--timeout <ms>` | Per-prompt timeout in milliseconds | `120000` |
| `--url <url>` | Gateway WebSocket URL | — |
| `--token <token>` | Gateway auth token | — |
| `--password <password>` | Gateway password | — |
Input formats:

- **JSONL** — one JSON object per line with at least a `prompt` field. Optional `id` and `context` fields.
- **Text** — prompts separated by `---` on its own line. IDs are auto-assigned (1, 2, 3...).
- **stdin** — pipe data to `mayros batch run -`.

Format is auto-detected: if the first non-empty line starts with `{`, the file is parsed as JSONL; otherwise as `---`-separated text.
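The auto-detection rule can be sketched in plain POSIX shell. This is only an illustration of the documented behavior, not the actual `mayros` implementation; the `detect_format` helper and the sample file names are hypothetical:

```shell
#!/bin/sh
# Sketch of the documented auto-detection rule: if the first
# non-empty line starts with "{", treat the input as JSONL;
# otherwise treat it as ---separated text.
detect_format() {
  # grab the first non-blank line of the file
  first=$(grep -m1 -v '^[[:space:]]*$' "$1")
  case "$first" in
    "{"*) echo "jsonl" ;;
    *)    echo "text"  ;;
  esac
}

printf '{"id":"q1","prompt":"hi"}\n' > sample.jsonl
printf 'What is DI?\n---\nExplain observers\n' > sample.txt
detect_format sample.jsonl   # → jsonl
detect_format sample.txt     # → text
```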
Output format:

Each result is a JSON object with the following fields:

```json
{
  "id": "1",
  "status": "ok",
  "response": "The assistant response...",
  "durationMs": 1234
}
```

On errors, `status` is `"error"` and the `error` field contains the reason.
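Because each result is a standalone JSON object on its own line, the output stream can be post-processed line by line. A sketch using `jq`, assuming only the fields documented above (`results.jsonl` is a hypothetical output file, seeded here with example data):

```shell
#!/bin/sh
# Seed a results file with one success and one failure,
# shaped like the documented output format.
printf '%s\n' \
  '{"id":"1","status":"ok","response":"Answer one","durationMs":900}' \
  '{"id":"2","status":"error","error":"timeout after 120000ms"}' \
  > results.jsonl

# Print id and response for successful items
jq -r 'select(.status == "ok") | "\(.id)\t\(.response)"' results.jsonl

# List failed items with their error reasons
jq -r 'select(.status == "error") | "\(.id): \(.error)"' results.jsonl
```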
### batch status
Show batch processing capabilities and usage examples.
## Examples
```bash
# Process a JSONL file with 8 concurrent prompts
mayros batch run prompts.jsonl --concurrency 8

# Process a text file and write results to a file
mayros batch run prompts.txt --output results.jsonl

# Pipe a single prompt from stdin
echo '{"prompt":"Explain monads"}' | mayros batch run -

# Set a custom timeout and thinking level
mayros batch run prompts.jsonl --timeout 60000 --thinking high

# Use a specific gateway
mayros batch run prompts.jsonl --url ws://localhost:8080 --token mytoken
```
JSONL input example (prompts.jsonl):
```json
{"id": "q1", "prompt": "What is dependency injection?"}
{"id": "q2", "prompt": "Explain the observer pattern", "context": "In TypeScript"}
{"id": "q3", "prompt": "Compare REST and GraphQL"}
```
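When building a JSONL input file programmatically, prompts containing quotes or newlines need proper JSON escaping. One way to do this safely is with `jq -n` (a sketch; `jq` is not required by mayros itself, and the file name is just an example):

```shell
#!/bin/sh
# Build prompts.jsonl with jq so special characters in prompts
# are escaped correctly instead of hand-writing JSON strings.
: > prompts.jsonl
i=0
for prompt in 'What is "duck typing"?' 'Explain the observer pattern'; do
  i=$((i + 1))
  # -c emits compact one-line JSON; --arg passes the prompt as a
  # string variable so jq handles all escaping.
  jq -cn --arg id "q$i" --arg prompt "$prompt" \
    '{id: $id, prompt: $prompt}' >> prompts.jsonl
done
cat prompts.jsonl
# → {"id":"q1","prompt":"What is \"duck typing\"?"}
# → {"id":"q2","prompt":"Explain the observer pattern"}
```

The resulting file can then be fed to `mayros batch run prompts.jsonl` directly.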
Text input example (prompts.txt):
```text
What is dependency injection?
---
Explain the observer pattern
---
Compare REST and GraphQL
```
## Related
- Headless mode — single-prompt non-interactive mode
- `code` — interactive coding session