Most people using Kling 3.0 are describing the scene, not directing it. There's a difference, and it shows in the output.

Give your camera a job, not a description

Instead of "a woman walking through a forest," tell the camera what to do: "tracking shot following a woman through a dense forest, camera moves left to right at walking pace." Kling 3.0 responds to camera direction. Pan, tilt, dolly, crane, static wide — these words change the output. The model was trained on film data, so speak to it like a director, not a screenwriter.

Want the full list of camera moves that work? Read the guide -> https://vicsee.com/blog/kling-3-camera-prompts

Stop prompting color. Start prompting light.

If you want vibrant fuchsia in a Nano Banana 2 image, don't write "#FF00FF" or "bright pink." Write "intense direct sunlight hitting the subject." The model understands light physics better than color names. Subsurface scattering, rim lighting, golden hour — these are the levers that make colors pop. The color you see is a consequence of the light you describe.

More on why AI portraits look plastic (and the physics fix) -> https://vicsee.com/blog/seedream-vs-nano-banana-2


Quick question

What AI tool or model should we add to VicSee next? Just reply to this email. I read every response.

Try it yourself: https://vicsee.com — free credits on signup, no credit card required.

Until next time,

JZ
