LogRocket Galileo AI Privacy & FAQ

LogRocket Galileo uses multiple AI models in tandem to generate relevant and actionable insights. LogRocket has taken numerous measures to ensure that customer and user data is kept safe at all times.

We make calls to OpenAI’s API for certain Galileo functions. OpenAI does not use data sent via its API to train its models, an approach our team has determined keeps customer and user data safe. More information about how OpenAI treats data sent to its API is available in OpenAI’s API data usage policies.

FAQ

Does LogRocket comply with GDPR, CCPA, and other relevant data protection regulations?

Yes. LogRocket and its downstream providers (OpenAI, GCP) comply with common data protection regulations, including GDPR and CCPA.

Does LogRocket use or plan to use customer information to improve and/or train future AI models? If so, can customers opt-out and how do they do so?

LogRocket does not use identifiable customer-specific data to train its general models; it uses redacted, non-textual data instead. Identifiable data may only be used in siloed processes restricted to the individual customer. To opt out of inclusion in our general models, please contact [email protected] to work with our legal team.

Which AI models and services does LogRocket use? What types of AI models does LogRocket use?

We have vendor relationships with both GCP and OpenAI and may use their commercial models, as well as open-source models, in the future. Most of the models we use are LLMs or multimodal transformer models.

Please see additional details about the vendors we use in our Vendor Management Policy.

What data was used to train the AI?

Our own models were trained on non-textual data from across our user base. Commercial off-the-shelf model providers do not reveal their training sets in detail.

What measures are in place to ensure the AI’s outputs are fair, unbiased, accurate, and truthful? How does LogRocket handle AI bias?

OpenAI provides extensive safety documentation: https://openai.com/safety

We use AI to describe and summarize user experiences. We do not use AI to make recommendations about individuals, so the potential for harm is minimal even with malicious inputs.

Is there a possibility of LogRocket’s AI making uncontrolled or unsupervised decisions that could impact other systems or people?

No.

Can LogRocket Galileo be used to generate code or integrate with other systems?

Not at present.