AI features in Aha! software

Aha! provides purpose-built generative artificial intelligence (AI) features in our products to assist product builders from discovery through delivery.

Security and confidentiality are top priorities in every AI feature we build. You can enable or disable generative AI features according to your needs.

Account-level controls

We recognize that customers have different policies regarding AI features, so we provide granular account-level settings.

Account administrators can enable or disable generative AI-powered features and configure the AI assistant's capabilities from Settings ⚙️ Account AI control.

Shared responsibility

Aha! AI features are designed to be used with trusted product data by trusted product builders. Aha! provides AI assistant controls to disable features that are not suited for your data, users, or internal policies. This includes controls for customer administrators to limit records by type and workspace to prevent processing of untrusted data. For example, disallowing the processing of idea records reduces the risk of unexpected behavior from the AI assistant when working with untrusted data submitted via an ideas portal. It also helps prevent accidental edits to idea records that may be visible in a public ideas portal.

Models we use

Aha! uses AWS, Google, and OpenAI models to power some of our recently announced generative AI features. These models were chosen for their industry-leading accuracy and robust safety controls. Because they are public, pre-trained models, their training data sets do not include any Aha! customer data.

Model training

Your account data will not be used to train AI models. This includes contractually prohibiting AWS, Google, and OpenAI from using your inputs and outputs for training purposes. Our standard Privacy Policy and Terms of Service also apply to AI features.

Data confidentiality

Aha! implements a comprehensive supplier management program with annual security and contractual checks as part of our ISO 27001-certified Information Security Management System. This program includes AI suppliers, and we ensured that AWS, Google, and OpenAI met rigorous confidentiality standards as part of AI feature implementation.

Our contractual agreements with AWS, Google, and OpenAI require them to maintain the security and confidentiality of customer data, and their control effectiveness is assessed through comprehensive third-party audits.

Data security

Aha! implements strong security controls for customer data during AI data processing. This includes encryption in transit using TLS 1.2 or later, strong authentication controls, and AES-256 encryption at rest. AI feature development follows our comprehensive secure software development lifecycle, which also includes AI-specific best practices and guidance.
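
If you want to confirm the transport-layer protection from your own environment, the short sketch below (illustrative only, not part of Aha! software) uses Python's standard ssl library to open a connection that refuses anything older than TLS 1.2 and reports what was negotiated. The hostname example.aha.io is a placeholder for your own account's subdomain.

    import socket
    import ssl

    # Placeholder hostname -- replace with your own Aha! account subdomain.
    HOST = "example.aha.io"

    # Refuse anything older than TLS 1.2, the minimum described above.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Negotiated protocol:", tls.version())  # e.g. "TLSv1.3"
            print("Cipher suite:", tls.cipher()[0])

Because the client context sets a TLS 1.2 minimum, the handshake fails outright if the server were to offer anything older.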

Permissions

Aha! AI features follow the permissions of the user who initiates the interaction. This means AI assistant interactions and summaries only have access to data that the user can already access within the Aha! interface. The ability of Aha! AI features to create and modify data is also limited by the initiating user's workspace permissions.
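
As a conceptual illustration only (this is not Aha! source code, and the names are invented), the pattern described above can be sketched as filtering every record through the initiating user's access before anything is passed to a model:

    from dataclasses import dataclass

    @dataclass
    class Record:
        id: int
        workspace: str
        body: str

    def visible_records(user_workspaces: set[str], records: list[Record]) -> list[Record]:
        # The AI interaction only ever sees records from workspaces the
        # initiating user can already open in the Aha! interface.
        return [r for r in records if r.workspace in user_workspaces]

    def build_prompt_context(user_workspaces: set[str], records: list[Record]) -> str:
        # Permission filtering happens before any text reaches the model.
        allowed = visible_records(user_workspaces, records)
        return "\n\n".join(r.body for r in allowed)

The same scoping applies when the assistant creates or modifies records, as noted above.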

Privacy

Aha! is typically used with roadmapping and strategy data, collecting only a limited amount of nonsensitive personal information in a business context. Aha! AI feature use cases do not include decision-making or high-risk processing. Consequently, Aha! AI features typically do not handle substantive personal information and are limited to incidental personal information (such as names included in record descriptions).

Data retention

Content created as part of AI features (such as accepted AI drafts) becomes part of your standard account data. This content is maintained and backed up like any other data entered into your account.

For some AI features, Aha! maintains a history of user prompts so that users can reuse or refine a previously used prompt. This data is retained only to provide prompt history and is not used for training.

Safety

Where appropriate, such as in our AI writing assistant, users can review model outputs for accuracy before accepting them. The Aha! AI assistant also asks for approval before taking significant actions, such as creating records, so a person remains in the loop for both outputs and actions.

Aha! handles model outputs with care, including disabling model-generated external image references, asking the user about external links, sandboxing model-generated artifacts, and not running model-provided code on Aha! servers. Aha! chose AWS, Google, and OpenAI models for their industry-leading safety controls, and we use automated moderation controls to help detect inappropriate model outputs.

Questions?

If you have other questions regarding Aha! security or AI features, please contact us by email at support@aha.io.