Writing prompts for Vibe Coding is a technical skill. I've just translated the official Lovable prompting guide, which is full of practical advice. If you want to get the most out of AI with the least time and cost, I highly recommend reading this thread; it will broaden how you think about prompting. 🧵👇
First, the original guide: docs.lovable.dev/prompting/prom… The original version is very long.
The instructions you write to an AI are called "prompts." The clearer the prompt, the more accurately and efficiently the AI can build interfaces and write logic. Simply put: good prompts = good results. Prompts are more than random sentences. Written well, they let AI carry you through the whole process: automating repetitive tasks, surfacing debugging ideas faster, and building and optimizing workflows. And you don't need to be a programmer to get started.
Be clear when writing a prompt:
- Write in blocks: context / task / guidelines / constraints
- Provide background: don't just say "make a login page"; specify the framework, tools, and details
- Spell out constraints: which libraries to use and which boundaries not to cross all need to be written down
- Order matters: the beginning and end of a prompt carry the most weight, so put key points there and repeat them if needed
- Watch for forgetting: if the prompt is very long, the model may lose track of earlier text; restate the key points when necessary
- Know its limits: the model has no common sense and no knowledge of the latest news; don't assume it will fill in gaps on its own
Treat the AI as a diligent intern with no common sense. The clearer the instructions, the more reliable the results.
There is an easy-to-remember framework for writing prompts: CLEAR
- Concise: be direct and to the point, no filler
- Logical: stay organized; explain step by step
- Explicit: state clearly what you want and what you don't, ideally with examples
- Adaptive: if you're not satisfied, rewrite and iterate
- Reflective: review what worked, then summarize and improve it
Follow CLEAR and your prompts become more efficient and the results more controllable.
1️⃣ Structured Prompts (Training Wheels): When first starting to write prompts or tackling complex tasks, the four-part structure of "Context / Task / Guidelines / Constraints" is the safest approach. Spell out the context, objectives, methods, and constraints item by item. This is like putting training wheels on your AI: it forces you to think through your requirements while leaving the AI no room for ambiguity. It's ideal for beginners or large tasks.
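As a rough sketch, the four-part structure can be assembled like this. The helper name, section headers, and the React/Supabase scenario are all illustrative, not part of any Lovable API:

```python
# Assemble a four-part structured prompt: Context / Task / Guidelines / Constraints.
# The helper and the sample project details are illustrative, not Lovable-specific.

def build_structured_prompt(context: str, task: str, guidelines: str, constraints: str) -> str:
    """Join the four blocks with clear headers so the model can't confuse them."""
    return "\n\n".join([
        f"## Context\n{context}",
        f"## Task\n{task}",
        f"## Guidelines\n{guidelines}",
        f"## Constraints\n{constraints}",
    ])

prompt = build_structured_prompt(
    context="React + Supabase app; users table already exists.",
    task="Add a login page with email/password authentication.",
    guidelines="Use the existing form components; show inline validation errors.",
    constraints="Do not add new dependencies; keep all changes inside the login page.",
)
print(prompt)
```

Keeping the four blocks in a fixed order also exploits the "beginning and end matter most" effect: context opens the prompt, constraints close it.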
2️⃣ Conversational Prompts: Once you're comfortable, you can take off the "training wheels" and communicate naturally, as you would with a colleague. The key is to keep the logic clear and the details complete, missing no conditions. For example, describe feature points in sections: uploading an avatar, where it's stored, and how errors are handled. This is more flexible and better suited to rapid iteration across multiple rounds of conversation.
3️⃣ Meta-Prompting: A more advanced approach is to have the AI rewrite or optimize your prompts for you. Essentially, you use the AI as a language expert that spots ambiguities and fills in missing details. For example, you can ask it to review a prompt for ambiguity or to generate more precise phrasing. This quickly improves prompt quality, essentially giving you a personal prompt coach.
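A minimal sketch of meta-prompting: wrap a vague draft prompt in a request for the AI to critique and rewrite it. The draft and the wrapper wording below are examples, not an official recipe:

```python
# Meta-prompting: ask the model to review a draft prompt before you use it.
# Both the draft and the review instructions are illustrative examples.

draft = "Make a page where users can upload an avatar."

meta_prompt = (
    "Act as a prompt engineer. Review the prompt below, list every ambiguity "
    "(missing framework, storage location, file-size limits, error handling), "
    "and rewrite it as a precise, unambiguous version.\n\n"
    f"PROMPT TO REVIEW:\n{draft}"
)
print(meta_prompt)
```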
4️⃣ Reverse Meta: After completing a task, don't just end it. Instead, let the AI summarize the process and generate a useful prompt for the future. For example, it could organize the cause and solution of a JWT error and then output a reusable template for future use. This way, you can build a personal prompt library, turning past mistakes into valuable experience.
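Building on the JWT example above, a reverse meta-prompt can look like this sketch. The session summary and wording are illustrative:

```python
# Reverse meta-prompting: after a debugging session, ask the model to distill
# it into a reusable template. The session text here is a made-up example.

session_summary = "We fixed a JWT 401 error: the token was signed with the wrong secret."

reverse_meta = (
    "Summarize the root cause and the fix from the session below, then output "
    "a reusable prompt template I can paste the next time a similar auth error "
    "appears.\n\n"
    f"SESSION:\n{session_summary}"
)
print(reverse_meta)
```

Saving the model's answer to a snippets file is one simple way to grow the personal prompt library the thread describes.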
Advanced Tips: Zero-Shot vs. Few-Shot: Zero-Shot simply gives instructions without examples, allowing the model to complete the task based on existing training. It's suitable for common tasks, simple and efficient, such as "Translate this sentence into Spanish." Few-Shot, on the other hand, provides a few input and output examples in a prompt, which is equivalent to "teaching by example." The model will continue writing according to the format you demonstrate, making it ideal for controlling style or handling complex tasks, such as code comments or test cases. Usage suggestion: Try Zero-Shot first, and if the results are not satisfactory, add Few-Shot examples. Zero-Shot is fast and direct, while Few-Shot is more precise and controllable.
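Few-shot prompting is easiest to see in code. This sketch prepends two input/output examples so the model continues the demonstrated format; the code-comment task and examples are illustrative:

```python
# Few-shot: show a few input -> output pairs, then the real input.
# The model is expected to continue the pattern. Examples are illustrative.

examples = [
    ("add(a, b)", "# Returns the sum of a and b."),
    ("is_empty(xs)", "# Returns True if xs has no elements."),
]

target = "clamp(x, lo, hi)"

shots = "\n".join(f"Function: {sig}\nComment: {doc}" for sig, doc in examples)
few_shot_prompt = (
    "Write a one-line comment for each function, matching the style shown.\n\n"
    f"{shots}\nFunction: {target}\nComment:"
)
print(few_shot_prompt)
```

Ending the prompt mid-pattern (after "Comment:") nudges the model to complete it in the same style. Dropping the `examples` block turns this back into a zero-shot prompt.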
In Lovable, AI often "hallucinates": confidently making up functions, interfaces, or error summaries. You can't avoid this completely, but you can reduce the risk:
- Provide reliable context: put the PRD, user flows, tech stack, etc. into the knowledge base so the AI works from real material instead of guessing
- Attach real data to prompts: for example, include API doc snippets or JSON samples so it doesn't invent fields or methods
- Require step-by-step reasoning: have the AI explain its approach before giving code, which exposes errors early
- Tell it to be honest: explicitly write "if you are not sure, don't make it up"; many models will comply
- Verify iteratively: after getting output, have the AI check it against the requirements
If a result "looks magical," review it carefully; don't take it on faith. That's the only way to reduce hallucinations and keep the output accurate.
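Several of the points above can be combined in one prompt, as in this sketch: paste a real JSON sample, ask for step-by-step reasoning, and tell the model to admit uncertainty. The sample response fields are made up for illustration:

```python
# Grounding a prompt with real data plus an honesty instruction.
# The API sample below is fabricated for the example, not a real endpoint.

import json

api_sample = {"id": 42, "email": "a@example.com", "created_at": "2024-01-01"}

grounded_prompt = (
    "Using ONLY the fields present in this API response, write a TypeScript "
    "interface for it. If a field's type is unclear from the sample, say so "
    "instead of guessing. Explain your reasoning step by step before the code.\n\n"
    f"SAMPLE RESPONSE:\n{json.dumps(api_sample, indent=2)}"
)
print(grounded_prompt)
```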
Want AI tools to follow instructions better? You need to understand their "temperament" and the tricks for using them.
Distinguish two working modes: Chat Mode is for brainstorming, discussing solutions, or analyzing problems; it won't modify your code. Default Mode executes explicit instructions, such as writing code or creating components. The recommended workflow: "discuss" the solution in Chat Mode first, then switch to Default Mode to "do" it. (Translator's note: in tools like Claude Code, these are Plan Mode and Execution Mode.)
Break large tasks into small requests: be aware of the AI's output length limit (token limit). Don't expect it to write an entire complex module in one go; it can easily break off or get confused midway. Break large tasks into smaller ones, for example having it generate one function at a time.
State your formatting and code preferences: AI can't read your mind and doesn't know your team's standards. Say it explicitly in the prompt, such as "please follow this project's ESLint rules" or "please output the code in markdown format," so it follows your style.
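The "break large tasks into small requests" and "state your preferences" advice can be sketched as a sequence of focused prompts, one per step. The feature, steps, and wording are illustrative:

```python
# Task decomposition: one focused prompt per step instead of one giant prompt,
# with formatting preferences stated in each. The step list is illustrative.

feature = "user profile page"
steps = [
    "generate the data-fetching hook only",
    "generate the presentational component only",
    "generate the unit tests only",
]

prompts = [
    f"For the {feature}, {step}. Follow the project's ESLint rules "
    "and output the code in a markdown code block."
    for step in steps
]

for p in prompts:
    print(p, end="\n\n")
```

Sending these one at a time keeps each response well under the token limit and lets you review each piece before moving on.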