Human Verification
Our workflows are designed to respond to user prompts quickly and accurately, but certain circumstances may require that an individual familiar with the data and associated model verify and sign off on the accuracy of the results.
With the "Human Verification" module, users can share and collect feedback verifying or rejecting the results of a workflow without having to leave the application.
Common Scenarios
Not all circumstances warrant requesting a human to verify a result, and requests will never prevent insights from being generated, but the following are cases that should be considered.
Result appears correct, but underlying SQL is complex — If the explanation, assumptions, and raw analytical code are overwhelming, it's useful to have a Knowledge Base Contributor who assembled the model review and verify the result on a user's behalf.
Concern for underlying data quality issues — The data results may look invalid, despite clear and successful code execution. In these cases, there may be a source system issue that a Knowledge Base Contributor may be able to better assess and comment on.
Results are essential for a critical business decision or presentation — Normally, you can expect high accuracy from Lumi Workflows in Chat and elsewhere, but when business decisions are critical and numbers need the highest level of confidence, you can use Human Verification to have an expert on your team review and confirm.
Conversely, cases where human verification is not warranted include:
A workflow failed or a result looks visibly inaccurate on a first attempt — If a data quality issue is unlikely, it's best to continue the conversation with Lumi AI and explore with follow-up prompts and additional context. Lumi Workflows make their best effort to provide insights, but may not always produce the perfect answer on the first try (e.g., due to an ambiguous prompt, data not available in the Knowledge Base, or a connection issue).
Result format, styling, or exact insights vary between attempts or chats for the same prompt — Because the underlying technology is generative, the output may vary, including the exact columns or formats used in results, or the summary produced. Unless the raw numbers vary dramatically, this is expected behaviour. You can use Memories instead to align on preferred calculations and formats.
Basic Patterns
There are two actors when observing Human Verification patterns: the requestor and the reviewer.
Requestor — The individual raising a request for Human Verification. Typically a Knowledge Base user who is not expected to have deep familiarity with the Knowledge Base's specifics.
Reviewer — The individual reviewing a request for Human Verification. Any Knowledge Base Contributor (inclusive of workspace Admins) can comment and definitively verify (approve) or reject a request.
Each request goes through 4 phases.
Submission — This is when a user in Chat requests Human Verification for a message (and in the process becomes a requestor).
Review — During this time the request is considered "Pending", and both the requestor and any reviewer(s) may comment on the request to help close it. The report is accessible via the Human Verification hub or the message itself (the option for requesting human verification is replaced with a link to the report in progress).
Qualification — This happens when a reviewer Approves (Verifies) or Rejects a request, and the message is qualified accordingly. This normally concludes the process, and the message receives an icon.
(Optional) Adjustment — In some cases, the original qualification must be changed (e.g., the response turns out to have been accurate after all, or another reviewer determines there was an issue). Any reviewer can change the qualification at any time, and the message and its icon will be updated accordingly.
Navigation
The Human Verification hub is accessible via the navbar on the left-hand side of the webapp. There are two sections in the hub:
My Requests — All submissions from the user, in any state.
My Reviews — All requests/reports available for review (must be a Knowledge Base Contributor or Workspace Admin for any to be present).
By default, both panels are filtered to "Pending" status, limiting the view to requests that require action. You can adjust the filters at any time to review historical examples.
FAQ
Some additional aspects to consider:
Can I use Human Verification for Boards, Cards, or other non-Chat elements in Lumi AI?
Answer: At present, Human Verification is only available for Chat. If you prefer to have an answer evaluated before including it on a Board, have the response verified in Chat first, then add it to the Board.
Can I approve my own request?
Answer: No. Self-approval would defeat the purpose of having another individual verify a result, so it is specifically prevented.