HOW WE DO IT
To achieve the best results, we team up with stakeholders and domain experts during implementation, using the platform tools designed for this purpose.
During the construction stage, the functional expert can enter self-improvement mode to converse with the assistant, give feedback, suggest desired answers, and make corrections. This process lets the assistant automatically modify different components of the implementation until it reaches the answer the expert wants.
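The cycle above can be sketched as a simple expert-in-the-loop refinement loop. This is a minimal illustration with hypothetical names (`Assistant`, `Feedback`, `refine`), not the platform's actual components: the expert reviews each answer and the assistant records corrections until the desired answer is produced.

```python
# Hypothetical sketch of the self-improvement cycle: expert reviews,
# assistant applies corrections, loop repeats until the answer matches.
from dataclasses import dataclass, field


@dataclass
class Feedback:
    approved: bool
    suggested_answer: str = ""


@dataclass
class Assistant:
    # Toy state: a real system would adjust prompts, tools, or configuration.
    corrections: list = field(default_factory=list)

    def answer(self, question: str) -> str:
        # Return the latest expert correction if one exists.
        return self.corrections[-1] if self.corrections else "I don't know yet."

    def apply(self, feedback: Feedback) -> None:
        # Record the expert's correction as the new desired behavior.
        self.corrections.append(feedback.suggested_answer)


def refine(assistant: Assistant, question: str, desired: str, max_rounds: int = 5) -> str:
    """Ask, review against the expert's desired answer, correct, repeat."""
    answer = assistant.answer(question)
    for _ in range(max_rounds):
        if answer == desired:
            return answer
        assistant.apply(Feedback(approved=False, suggested_answer=desired))
        answer = assistant.answer(question)
    return answer


bot = Assistant()
print(refine(bot, "What is the refund window?", "5 business days"))  # → 5 business days
```

In practice the correction step would rewrite prompts, swap tools, or change configuration rather than store literal answers; the loop structure is the point of the sketch.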
Where assistants, techniques, tools (default and custom), configurations, and prompting are defined.
Where the client can manage documents and access interaction monitoring.
Component used in implementations to measure the assistant's performance at each stage of the process. Using LLM-based metrics, embedding-based metrics, and traditional NLP metrics, we generate reports for the client on the assistant's performance at a given point in time. After the implementation stage, it is run periodically as a regression suite.
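As one illustration of the metric families mentioned above, the sketch below scores assistant answers against expert-approved references with a traditional NLP metric (token-level F1) and flags regressions below a threshold. All names here (`token_f1`, `run_regression`, the threshold value) are hypothetical, not the platform's actual API.

```python
# Illustrative regression-test harness using a token-overlap F1 score,
# one of the traditional NLP metrics mentioned above.
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted answer and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def run_regression(cases: list, threshold: float = 0.7) -> dict:
    """Score each case and flag those that fall below the threshold."""
    scores = {c["id"]: token_f1(c["answer"], c["reference"]) for c in cases}
    failures = [cid for cid, s in scores.items() if s < threshold]
    return {"scores": scores, "failures": failures}


cases = [
    {"id": "q1", "reference": "Refunds are processed within 5 business days",
     "answer": "Refunds are processed within 5 business days"},
    {"id": "q2", "reference": "Support is available 24/7 via chat",
     "answer": "Please contact sales"},
]
report = run_regression(cases)
print(report["failures"])  # → ['q2']
```

A production suite would combine this with embedding-based similarity and LLM-as-judge scoring, then roll the results into the client-facing report.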
Where the client can see, in real time and in pre-built dashboards, all interactions with the assistant and user feedback.