[Product Launch] Error summarization experiment

We’ve recently started rolling out an AI experiment that allows you to get additional information on an error within a failed step. Our goal is to help speed up the process of resolving common issues without requiring you to dig into documentation or search around the web.

This feature works by sending the error message within the step output to a third-party LLM. Prior to submitting the error message to the LLM, the message is passed through a filter to remove any sensitive information such as access keys and passwords.
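The exact rules of that filter aren't published, but the kind of redaction described can be sketched as follows. This is a minimal illustration only, assuming simple pattern matching; the pattern names and replacement tokens here are hypothetical, not the actual implementation:

```python
import re

# Hypothetical patterns illustrating pre-submission redaction;
# a real filter would cover many more credential formats.
SENSITIVE_PATTERNS = [
    # key=value style passwords and secrets
    (re.compile(r"(?i)\b(password|passwd|pwd|secret|token|api[_-]?key)\s*[:=]\s*\S+"),
     r"\1=[REDACTED]"),
    # AWS-style access key IDs (AKIA followed by 16 uppercase alphanumerics)
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED_ACCESS_KEY]"),
]

def redact(message: str) -> str:
    """Replace likely secrets in an error message before it leaves the system."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        message = pattern.sub(replacement, message)
    return message
```

With a filter like this, an error such as `login failed: password=hunter2` would be forwarded as `login failed: password=[REDACTED]`.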

The LLM provider we’re currently using is OpenAI. Data sent to the model is not used for training purposes.

Administrators can turn this experiment off by navigating to the “Advanced” section of the “Organization Settings” menu.

As this is just an experiment, we’re looking for feedback.
