Basics

Maestro is designed to be used through conversation. You may be used to working with systems designed to handle very narrowly scoped requests; think of Maestro as your engineering partner instead.

Sessions

Currently, all Maestro sessions are independent. A single session is a checkpointable, resumable artefact used to create an Agent.

Memories

Dialog history, containing records of previous turns. All memories can be forgotten or compacted.

Files

Full support for source code and data in complex, enterprise-scale projects.

Tools

Comprehensive toolbox to manage and interact with files, gather information and accomplish goals.

The Maestro Partnership Model

User’s Role: Quality Controller & Strategic Guide

  • Set clear objectives and success criteria
  • Challenge logical inconsistencies and shortcuts
  • Enforce proper validation methodology
  • Push back when standards aren’t met

Maestro’s Role: Technical Implementation Partner

  • Deep technical implementation capability
  • Systematic analysis and problem-solving
  • Comprehensive testing and validation
  • Learning from feedback and course-correction

The Dynamic That Works

The best sessions feature active user oversight with immediate feedback:
  • Users who catch errors early prevent larger mistakes
  • Professional criticism improves output quality
  • Insistence on evidence leads to better solutions
  • Partnership creates better results than either alone
Maestro is primarily designed to tackle substantial tasks: thousands of lines of code, entire features, or whole software systems, not just tiny items like changing the color of a button. Most users also spin up multiple parallel sessions to tackle several engineering challenges at once. As such, the most important step is to guide Maestro to the problem you need solved. That normally involves confirming that Maestro understands what you need, that it has designed a plan of attack for your problem, and that it has clear, objective criteria against which to validate its work. For example, to add a new feature to an existing codebase, you might work through the phases below; a sample prompt sequence follows the list.

Complex Feature Implementation Phases

1. Discovery & Understanding

Goal: Deep comprehension of existing system
Pattern: “Clone X and walk me through how subsystem Y works”
Success Criteria: Maestro demonstrates understanding of architecture, constraints, and integration points
Time Investment: Essential upfront work that prevents later architectural mistakes

2. Strategic Analysis

Goal: Identify highest-value implementation approach
Pattern: “What are the 3 most valuable ways to extend this with Z?”
Success Criteria: Clear rationale for chosen approach, understanding of alternatives
Risk Mitigation: Prevents over-engineering or choosing wrong approach

3. Specification-Driven Development

Goal: Complete technical specification before implementation
Pattern: “Create a full spec for X, then implement it”
Success Criteria: Comprehensive spec covering edge cases, performance targets, testing requirements
Quality Gate: Implementation should follow the spec, not evolve organically

4. Implementation with Continuous Validation

Goal: Working implementation with proper integration
Success Criteria: Code compiles, basic functionality works, no obvious regressions
Early Warning: Watch for compilation issues, integration problems

5. Professional Validation

Goal: Systematic testing and performance validation
Pattern: “Validate your work systematically”
Success Criteria: All tests pass, performance meets targets, no regressions
User Vigilance Required: This is where user oversight is most critical

6. Comprehensive Integration

Goal: Complete test coverage, documentation updates, clean codebase
Pattern: “Run ALL tests, update docs, clean up WIP code”
Success Criteria: Production-ready code with full documentation
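
Put together, a session following these phases might look like the prompt sequence below. This is a sketch, not a real transcript: the repository URL, subsystem, and feature are hypothetical placeholders, and phase 4 is omitted because it is Maestro’s implementation work rather than a user prompt.

```
# Phase 1: Discovery
Clone https://github.com/example/payments-service and walk me through
how the transaction retry subsystem works.

# Phase 2: Strategic analysis
What are the 3 most valuable ways to extend this with idempotency keys?

# Phase 3: Specification
Create a full spec for idempotency key support, covering edge cases,
performance targets, and testing requirements. Then implement it.

# Phase 5: Validation
Validate your work systematically: run the full test suite and the
existing benchmarks, and show me the results.

# Phase 6: Integration
Run ALL tests, update the docs, and clean up any WIP code.
```
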
There are many strategies you can take, but the ideal scenarios are generally either greenfield codebases or codebases that are testable within Maestro’s sandbox environment. Your highest-value contributions are critical analysis and pushback.

What Makes Sessions Successful

🎯 Clear Success Criteria

Best Practice: Define measurable outcomes upfront
  • “Performance should exceed baseline by X%”
  • “All existing tests must continue passing”
  • “Implementation must be fully Redis-compatible”
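
For instance, an upfront brief with measurable criteria might look like this (the numbers and project details are illustrative, not prescriptive):

```
Implement the new caching layer. Success criteria:
- Read throughput exceeds the current baseline by at least 20%
- All existing tests continue passing
- The wire protocol remains fully Redis-compatible
```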

🔍 Demand Evidence, Not Claims

Pattern: Always ask for validation
  • ❌ Don’t Accept: “The implementation is performing well”
  • ✅ Demand: “Show me benchmarks against the baseline using the same methodology”
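
A concrete version of that demand might read (wording is illustrative):

```
Don’t tell me it’s faster; prove it. Re-run the same benchmark you used
for the baseline, on the same hardware and dataset, and paste the raw
numbers for both runs side by side.
```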

🚫 Never Accept Shortcuts

Quality Standards:
  • Zero test failures tolerated
  • Every performance claim must be validated
  • All edge cases must be tested
  • Regressions are unacceptable
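
When Maestro proposes a shortcut, push back in plain terms. A hypothetical example:

```
You suggested skipping the two failing tests to get a green build.
That’s not acceptable: zero test failures are tolerated. Diagnose and
fix the regressions before moving on.
```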

🛠️ Use Existing Infrastructure

Principle: Don’t reinvent testing/benchmark tools
  • “Use the existing test suite structure”
  • “Run the benchmark scripts already in the codebase”
  • “Follow the established patterns”
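
In practice, point Maestro at the project’s own entry points rather than letting it build new ones. The paths here are hypothetical; substitute your project’s real ones:

```
Use the existing test suite structure under tests/, and run the
benchmark script already in the repo (scripts/bench.sh) rather than
writing a new harness.
```
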
There are more layers and a learning curve involved, but put simply, you are in dialog with an intelligent entity equipped with multiple tools to complete your goals. You also have access to a variety of commands to control behaviour, and actions for managing context and other utilities.
Maestro is at its best when goals are clearly defined, with clear metrics of success and error. Be clear, concise and direct.

Capacity

Task completion can be blocked by the number of tokens Maestro is consuming: either the session exceeds the maximum token limit, or the context has become poisoned, in which case you should curate what the agent sees to restore its effectiveness. You can always see a live tracker of session capacity in the top-right corner, and click it for additional details to help inform context-management decisions.
Whilst sessions with long-term memories can be useful, they also increase cost. Consider using /compact and /forget to manage session memory, and /refresh to clear out old file iterations.
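
A typical cleanup pass as a session approaches its capacity limit might look like this. The comments paraphrase each command’s purpose as described above; exact argument forms may vary, so check the command reference:

```
/compact    # compact older memories into a shorter record
/forget     # forget memories the task no longer needs
/refresh    # clear out old file iterations
```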