
Getting the Best Results

Working with an AI agent is different from working with a developer. The agent isn’t ignoring you when it gets something wrong — it genuinely thinks it solved the problem. Understanding how it works will save you tokens and get you better results.

Every interaction costs tokens. This includes the AI reading your code, thinking about what to change, writing the changes, and verifying them. The more complex your project gets, the more tokens each interaction costs — because the AI reads and understands more code before making edits.

This is normal. A 50-screen app costs more per change than a 5-screen app, the same way a contractor charges more to renovate a room in a large building with complex wiring. Early changes are cheap. Later changes cost more.

The retry loop — the biggest token waste


The most expensive mistake is asking the agent to do the same thing again with the same words. If the AI didn’t get it right the first time, repeating your prompt usually produces the same result.

Instead of repeating, give the agent new information to work with:

Be specific about what’s wrong.

Don’t say:

It’s still not working.

Say:

The button is blue but I need it red.

Or:

The list shows 3 items but should show 5.

Describe what you see vs. what you expected.

I see a blank screen after tapping Login. I expected to see the home screen with my dashboard.

This tells the AI exactly where the gap is. A vague “it doesn’t work” forces the agent to guess — and it often guesses the same way it did last time.

Attach a screenshot.

The AI can see images. A screenshot showing the problem is often worth more than three rounds of text-only back and forth — and costs fewer tokens. See Attaching Context.

Ask for a different approach.

If something isn’t working after two attempts, say:

That approach isn’t working. Can you try solving this a completely different way?

This forces the agent to reconsider instead of repeating the same fix.

Large, complex prompts are risky. If something goes wrong halfway through, you’ve spent tokens on everything — including the parts that failed.

Do this — one step at a time:

  1. “Add a login screen with email and password fields”

  2. “Add a forgot password link below the login button”

  3. “Add form validation — email must be valid, password must be 8+ characters”

Not this — everything at once:

Add a complete authentication system with login, signup, forgot password, email verification, social login with Google and Apple, session management, and a profile page

Smaller steps mean less waste when something needs adjusting. You can verify each step works before moving on.
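To show how concrete step 3 already is, here is a minimal sketch of those two validation rules. This is illustrative Python, with a made-up `validate_login` helper and a deliberately simple email check; in a Flutter app, the agent would write the equivalent logic in Dart.

```python
import re

# Simple email format check (illustrative only — real email validation
# is more involved; this just requires something@something.tld).
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_login(email: str, password: str) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not EMAIL_RE.match(email):
        errors.append("email must be valid")
    if len(password) < 8:
        errors.append("password must be 8+ characters")
    return errors

print(validate_login("user@example.com", "hunter2222"))  # []
print(validate_login("not-an-email", "short"))
```

The point isn’t the code itself — it’s that a prompt this specific leaves the agent nothing to guess about.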

When the AI says “done” but something looks wrong


Sometimes the agent reports a change as complete, but you don’t see it in the preview. Before asking the agent to fix it:

Refresh the preview. The web preview sometimes needs a manual refresh to show new changes. This is the most common cause of “I don’t see the change.”

Check on a real device. Some layout changes that look wrong in the web preview render correctly on an actual phone screen. The web preview is a Flutter web build, not device emulation — they can differ. See Live Preview.

Be precise about what’s missing. Instead of “it didn’t work,” describe exactly what’s off:

The image you added is not showing in the header. I see the header text but no image above it.

This gives the agent a clear target instead of forcing it to re-examine everything.

When builds fail

Building your app into an APK or IPA can fail for reasons that have nothing to do with your code — signing certificates, configuration files, or third-party plugin issues.

  1. Don’t immediately ask the agent to fix it.

    Look at the error message. If it mentions signing, certificates, or provisioning, it’s likely a configuration issue that needs a specific fix — not a code change.

  2. Share the full error.

    If the build log says “173 more lines omitted,” the agent can only fix what it can see. Share as much of the error as possible.

  3. Ask the agent to explain the error first.

    Don’t change anything yet. What does this error mean?

    This costs far fewer tokens than a blind fix attempt and helps you understand whether the agent can actually solve it.

For specific build failure scenarios and solutions, see Debugging & Fixing Errors.

Switch to Claude Opus for hard problems

When the agent keeps failing at the same task, or a complex change breaks after multiple attempts, switch from Auto to Claude Opus manually in the model selector.

Opus is the most capable model. It costs more tokens per request, but it often solves hard problems on the first or second try where other models might take five or more attempts. When you factor in all those failed retries, Opus can be cheaper for difficult tasks.
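A back-of-the-envelope calculation makes the trade-off concrete. The numbers below are entirely hypothetical — real per-request costs vary by task and model — but they show why fewer attempts can beat a cheaper per-attempt price:

```python
# Hypothetical, made-up token costs purely to illustrate the trade-off.
standard_cost_per_try = 1_000   # tokens per attempt (assumed)
opus_cost_per_try = 3_000       # tokens per attempt (assumed ~3x)

standard_attempts = 5           # retries a weaker model might burn
opus_attempts = 1               # solves it first try (assumed)

standard_total = standard_cost_per_try * standard_attempts
opus_total = opus_cost_per_try * opus_attempts

print(standard_total, opus_total)  # 5000 3000 — Opus is cheaper here
```

Under these assumptions, the "expensive" model costs 40% less overall because it skips the failed retries.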

Switch to Opus when:

  • The agent has failed the same task 2–3 times
  • You’re making a complex change that touches many files
  • You need the agent to understand and fix a tricky bug
  • You’re working with complex business logic or data flows

You can switch back to Auto anytime. See Model Selection for the full comparison.

Quick tips for saving tokens

  • One change per prompt. Don’t bundle five requests into one message.
  • Use the Stop button. If you see the agent going in a direction you don’t want, stop it immediately. Don’t wait for it to finish — every second costs tokens.
  • After two failed attempts, change your approach. Describe the problem differently, attach a screenshot, or ask the agent to try a different solution.
  • Provide detail upfront. Colors, sizes, positions, exact text — every detail you give saves tokens the agent would spend guessing wrong.
  • Start simple, then refine. Get the basic version working first. Polish comes after.
  • Use @file references. Point the AI at specific files instead of letting it scan the whole project. See Attaching Context.