[Product Launch] Error summarization experiment

We’ve recently started rolling out an AI experiment that allows you to get additional information on an error within a failed step. Our goal is to help you resolve common issues faster, without requiring you to dig into documentation or search around the web.

This feature works by sending the error message within the step output to a third-party LLM. Prior to submitting the error message to the LLM, the message is passed through a filter to remove any sensitive information such as access keys and passwords.
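To make the filtering step above more concrete, here is a minimal sketch of what such a redaction pass might look like. The pattern names and rules below are illustrative assumptions for the sketch, not CircleCI’s actual filter implementation:

```python
import re

# Illustrative redaction patterns -- these are assumptions for the sketch,
# not CircleCI's actual filter rules.
PATTERNS = [
    # AWS-style access key IDs
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    # key=value or key: value password assignments
    (re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    # bearer tokens in auth headers
    (re.compile(r"(?i)bearer\s+[a-z0-9._\-]+"), "Bearer [REDACTED]"),
]

def redact(error_message: str) -> str:
    """Replace likely secrets in a step's error output before it is sent anywhere."""
    for pattern, replacement in PATTERNS:
        error_message = pattern.sub(replacement, error_message)
    return error_message

print(redact("login failed: password=hunter2 with key AKIA1234567890ABCDEF"))
# -> login failed: password=[REDACTED] with key [REDACTED_AWS_KEY]
```

The key design point is that redaction happens before the message leaves the system, so the third-party LLM only ever sees the sanitized text.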

The LLM provider we’re currently using is OpenAI. Data sent to the LLM is not used for model training.

Administrators can turn this experiment off by navigating to the “Advanced” section of the “Organization Settings” menu.

As this is just an experiment, we’re looking for feedback.

July 3, 2024 Update: This functionality is now opt-in rather than opt-out.


Hi, the experiment still appears as an option for our organization even though we have it turned off.

Hi! Can you confirm whether clicking the button actually executes anything? Turning off the experiment disables the button’s execution, but it currently doesn’t hide the button.

No, I can’t. I don’t want to risk sending private information to a 3rd party.

This is the message you should see if you attempt to hit the button. We don’t send any information until the option is explicitly turned on.

But I can’t press the button, as I explained. You’re basically telling me to trust that a visible button which says it will send my data to a third party won’t send my data to a third party. I really just want it gone so our organization doesn’t have to worry about it.

Thanks for the feedback. I’ve confirmed that the button is indeed not sending any information currently. We’re going to look into disabling/hiding the button.

Hi, our company’s security team is evaluating this feature since our developers noticed it recently and are asking us if it is OK to use. Could you provide some more information on what is sent to OpenAI: is it just the error message, or does it include the stack trace, or does it include the entire build log for context?

When you say “The LLM we’re currently using is OpenAI”, is there a way to be notified of changes to the LLM in future if you switch to another third-party hosted LLM?

Will the CircleCI Sub-Processors page be updated to include OpenAI?