Prompting Best Practices

The way you prompt large language models (LLMs) matters. Poor prompts tend to yield lower-quality output, and well-crafted prompts tend to yield better output.

This document provides best practices to help improve the quality of your prompts and, in turn, the quality of the output.

1. The clearer and more concise you are, the better

Large language models (LLMs) are very literal and will attempt to execute your instructions exactly as written. Providing clear and concise instructions will yield better results.

"What are my top performing products?"

Less Likely to get desired insight 🔴

Regardless of clarity, Lumi will attempt to provide a response to your data request; however, you are more likely to get the insight you are looking for with clearer, more explicit instructions.

2. Spell attributes correctly!

Lumi does not know the correct spelling of a product, brand, or category. Misspelling an attribute in a data request will most likely return an empty response.

Consider an item catalog with the following categories:

How many items fall under bikes?

How many items fall under category bikes?

Tips to Avoid Errors

Use the exact attribute name.

Example: What is the injury rate in 2024 for the 'AUSTIN (6Z, 3023)' market?
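If you assemble prompts in a script before sending them to Lumi, one way to guard against misspelled attributes is to validate the name against a known list first. Below is a minimal sketch in Python; the `KNOWN_MARKETS` list and the helper name are hypothetical, not part of Lumi.

```python
# Hypothetical list of exact attribute names copied from the catalog;
# replace with the real market names from your data.
KNOWN_MARKETS = ["AUSTIN (6Z, 3023)"]

def build_market_prompt(market: str, year: int) -> str:
    """Build a prompt only if the market name matches the catalog exactly."""
    if market not in KNOWN_MARKETS:
        raise ValueError(f"Unknown market {market!r}; check the spelling against the catalog.")
    return f"What is the injury rate in {year} for the '{market}' market?"

example = build_market_prompt("AUSTIN (6Z, 3023)", 2024)
# -> "What is the injury rate in 2024 for the 'AUSTIN (6Z, 3023)' market?"
```

Failing fast on an unknown name surfaces the typo immediately, instead of Lumi quietly returning an empty response.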

3. Specifying which columns you want to see is helpful.

Explicitly specifying output requirements helps Lumi understand exactly which columns you want to see in the rendered output. You don’t need to do this with every prompt, but it can be helpful if you are trying to generate a very specific report.

Can you give me the top 10 brands that saw the largest increase in revenues in Jan 2024 (Jan 1 to Jan 15) vs. Jan 2023 (Jan 1 to Jan 15)? Output should include brand, 2023 January revenues, 2024 January revenues, and the delta.

Alternatively, we’ve found that including the phrase “include relevant details” at the end of your prompt works well - Lumi will use its best judgment to determine which columns are needed.

what are my top shipped items in June 2023? Include relevant details
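If you generate report prompts programmatically, appending the column list in a consistent way keeps the output requirements explicit. A small sketch; the helper name is hypothetical and the string format is just one reasonable convention:

```python
def prompt_with_columns(question: str, columns: list[str]) -> str:
    """Append an explicit output-column list to a data request."""
    return f"{question} Output should include {', '.join(columns)}."

prompt = prompt_with_columns(
    "Can you give me the top 10 brands that saw the largest increase in revenues "
    "in Jan 2024 (Jan 1 to Jan 15) vs. Jan 2023 (Jan 1 to Jan 15)?",
    ["brand", "2023 January revenues", "2024 January revenues", "the delta"],
)
```

Keeping the column list as data (rather than buried in the sentence) also makes it easy to reuse the same question with different output requirements.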

4. Break up a request for multiple metrics into multiple sentences.

When constructing queries, especially those involving multiple metrics or dimensions, it's beneficial to simplify the query by breaking it into separate sentences. This approach helps in clearly defining each part of the request and ensures that each metric is accurately addressed.

For example, consider a prompt where you want to know about item performance within a specific time frame, along with current stock levels. Instead of a single complex query, breaking it down can make it clearer:

What are the top 10 items with the highest order count in April 2024? Add another column that shows current inventory levels.

This prompt effectively separates the historical sales data request from the current inventory status, allowing Lumi to focus on discrete pieces of information sequentially.
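The one-metric-per-sentence pattern can also be applied mechanically when prompts are built in code. Below is a minimal sketch, assuming a hypothetical `compose_prompt` helper that joins each request as its own sentence:

```python
def compose_prompt(*sentences: str) -> str:
    """Join each metric or dimension request as its own sentence."""
    parts = []
    for s in sentences:
        s = s.strip()
        # Ensure each request ends with terminal punctuation before joining.
        if not s.endswith((".", "?", "!")):
            s += "."
        parts.append(s)
    return " ".join(parts)

prompt = compose_prompt(
    "What are the top 10 items with the highest order count in April 2024?",
    "Add another column that shows current inventory levels",
)
```

Each positional argument stays a discrete request, mirroring how Lumi is meant to address each metric sequentially.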

5. Specify the filter criteria where possible.

Providing clear and specific filter criteria in your queries is helpful for accurately narrowing down the data and extracting meaningful insights. Ambiguity in query prompts can lead to incomplete or irrelevant data being retrieved.

How many items fall under Dark Chocolates?

Less Likely to get desired insight 🔴
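When prompts are assembled in code, filters can be spelled out explicitly instead of left implied in the question. A sketch under the same hypothetical-helper assumption as above; the `attribute = 'value'` phrasing is just one explicit convention, not Lumi syntax:

```python
def prompt_with_filters(question: str, filters: dict[str, str]) -> str:
    """Spell out each filter criterion explicitly rather than leaving it implied."""
    clauses = ", ".join(f"{attr} = '{value}'" for attr, value in filters.items())
    return f"{question} Filter to {clauses}."

prompt = prompt_with_filters(
    "How many items do we carry?",
    {"category": "Dark Chocolates"},
)
# -> "How many items do we carry? Filter to category = 'Dark Chocolates'."
```

Naming the attribute being filtered (here, `category`) removes the ambiguity that makes a bare "fall under Dark Chocolates" request less likely to succeed.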
