Prompting Fundamentals
Core prompting principles for writing effective prompts that get better results.
Working with an AI agent is different from working with a developer. The agent isn’t ignoring you when it gets something wrong — it genuinely thinks it solved the problem. Understanding how it works will save you tokens and get you better results.
Every interaction costs tokens. This includes the AI reading your code, thinking about what to change, writing the changes, and verifying them. The more complex your project gets, the more tokens each interaction costs — because the AI reads and understands more code before making edits.
This is normal. A 50-screen app costs more per change than a 5-screen app, the same way a contractor charges more to renovate a room in a large building with complex wiring. Early changes are cheap. Later changes cost more.
The most expensive mistake is asking the agent to do the same thing again with the same words. If the AI didn’t get it right the first time, repeating your prompt usually produces the same result.
Instead of repeating, give the agent new information to work with:
Be specific about what’s wrong.
Don’t say:
“It’s still not working.”
Say:
“The button is blue but I need it red.”
Or:
“The list shows 3 items but should show 5.”
Describe what you see vs. what you expected.
“I see a blank screen after tapping Login. I expected to see the home screen with my dashboard.”
This tells the AI exactly where the gap is. A vague “it doesn’t work” forces the agent to guess — and it often guesses the same way it did last time.
Attach a screenshot.
The AI can see images. A screenshot showing the problem is often worth more than three rounds of text-only back-and-forth — and costs fewer tokens. See Attaching Context.
Ask for a different approach.
If something isn’t working after two attempts, say:
“That approach isn’t working. Can you try solving this a completely different way?”
This forces the agent to reconsider instead of repeating the same fix.
Large, complex prompts are risky. If something goes wrong halfway through, you’ve spent tokens on everything — including the parts that failed.
Do this — one step at a time:
“Add a login screen with email and password fields”
“Add a forgot password link below the login button”
“Add form validation — email must be valid, password must be 8+ characters”
Not this — everything at once:
“Add a complete authentication system with login, signup, forgot password, email verification, social login with Google and Apple, session management, and a profile page”
Smaller steps mean less waste when something needs adjusting. You can verify each step works before moving on.
Sometimes the agent reports a change as complete, but you don’t see it in the preview. Before asking the agent to fix it:
Refresh the preview. The web preview sometimes needs a manual refresh to show new changes. This is the most common cause of “I don’t see the change.”
Check on a real device. Some layout changes that look wrong in the web preview render correctly on an actual phone screen. The web preview is a Flutter web build, not device emulation — they can differ. See Live Preview.
Be precise about what’s missing. Instead of “it didn’t work,” describe exactly what’s off:
“The image you added is not showing in the header. I see the header text but no image above it.”
This gives the agent a clear target instead of forcing it to re-examine everything.
Building your app into an APK or IPA can fail for reasons that have nothing to do with your code — signing certificates, configuration files, or third-party plugin issues.
Don’t immediately ask the agent to fix it.
Look at the error message. If it mentions signing, certificates, or provisioning, it’s likely a configuration issue that needs a specific fix — not a code change.
Share the full error.
If the build log says “173 more lines omitted,” the agent can only fix what it can see. Share as much of the error as possible.
Ask the agent to explain the error first.
“Don’t change anything yet. What does this error mean?”
This costs far fewer tokens than a blind fix attempt and helps you understand whether the agent can actually solve it.
For specific build failure scenarios and solutions, see Debugging & Fixing Errors.
When the agent keeps failing at the same task, or a complex change breaks after multiple attempts, switch from Auto to Claude Opus manually in the model selector.
Opus is the most capable model. It costs more tokens per request, but it often solves hard problems on the first or second try where other models might take five or more attempts. When you factor in all those failed retries, Opus can be cheaper for difficult tasks.
Switch to Opus when:
The agent keeps failing at the same task, even with new information.
A complex change breaks after multiple attempts.
You can switch back to Auto anytime. See Model Selection for the full comparison.