Definitions
- “AI Features” means any functionality of the Services that uses artificial intelligence, machine learning, or large language models to generate, predict, recommend, classify, summarize, extract, transform, or otherwise process data.
- “Customer Data” has the meaning set forth in this Agreement and includes Protected Health Information (“PHI”) as defined by HIPAA, to the extent Customer submits PHI to the Services and the Parties have executed a Business Associate Agreement (“BAA”).
- “AI Inputs” means prompts, queries, instructions, files, text, and other data submitted to AI Features, including Customer Data.
- “AI Outputs” means results generated by AI Features in response to AI Inputs.
- “AI Subprocessor” means a third party engaged by Provider to support AI Features (including any model, hosting, or inference provider).
Relationship to the BAA; PHI Handling
To the extent AI Features create, receive, maintain, or transmit PHI on Provider’s behalf, such processing will be governed by the BAA. In the event of a conflict between this Section and the BAA with respect to PHI, the BAA will control.
Security Program; Control Alignment
Provider will implement and maintain a written information security program designed to protect Customer Data (including PHI) processed by or in connection with AI Features. Such program will include administrative, technical, and physical safeguards appropriate to the nature of the Services and the risks presented; will be aligned to recognized security and risk management frameworks (including practices consistent with the NIST risk management approach); and will address, as applicable: risk assessment; access controls and least privilege; encryption in transit and at rest; secure development and change management; logging and monitoring; vulnerability management; incident response; business continuity; and third-party risk management. Provider will not state that the Services are certified under any standard (including HITRUST) unless expressly agreed in writing.
No Training on Customer Data (Including PHI)
Provider will not use Customer Data, including PHI, to train or fine-tune any artificial intelligence or machine learning models (whether Provider’s models or third-party models), except where Customer provides explicit written authorization (e.g., an opt-in program) that describes the scope and purpose of such training. For clarity, Provider may use Customer Data solely as necessary to provide the Services (including generating AI Outputs), to maintain and secure the Services, and to comply with applicable law and the BAA.
De-Identification and Aggregation
Provider may create and use de-identified and/or aggregated data derived from Customer Data for analytics, benchmarking, product improvement, security, and operational purposes, provided such data is de-identified in accordance with applicable law and, for PHI, in accordance with HIPAA de-identification standards and the BAA, and does not reasonably identify Customer or any individual.
AI Subprocessors and Future Model Providers
Provider may engage AI Subprocessors, now or in the future, to support AI Features. Provider will:
- perform risk-based due diligence and ongoing oversight of AI Subprocessors;
- enter into written agreements requiring AI Subprocessors to protect Customer Data (including PHI) in a manner consistent with this Agreement and, where applicable, the BAA (including executing appropriate HIPAA subcontractor/business associate terms);
- limit AI Subprocessors’ processing to what is necessary to provide AI Features and prohibit them from using Customer Data for training except as expressly permitted under Section 4; and
- remain responsible for AI Subprocessors’ acts and omissions to the same extent as for Provider’s own acts and omissions.
Customer Control; Configuration
Customer controls whether and how its users submit data to AI Features. Customer is responsible for (a) determining whether AI Outputs are appropriate for Customer’s use cases, (b) implementing internal policies and user instructions regarding acceptable AI Inputs (including PHI workflows), and (c) verifying AI Outputs prior to use in any clinical, operational, or other decision-making.
Output Quality; No Medical Advice
AI Outputs are generated by statistical methods and may be inaccurate, incomplete, or inappropriate. Customer is responsible for human review of AI Outputs before relying on them. AI Features and AI Outputs do not constitute medical advice or clinical decision-making unless expressly stated in the Services documentation. Provider makes no warranties regarding the accuracy, completeness, or suitability of AI Outputs.
High-Risk Use and Safeguards
Customer will not use AI Outputs as the sole basis for decisions that produce legal or similarly significant effects on individuals, including clinical diagnosis or treatment decisions, without appropriate human oversight and safeguards commensurate with the risks of such use.
Security Incidents
Provider will maintain incident response procedures applicable to AI Features. If Provider becomes aware of unauthorized access to or disclosure of Customer Data processed in connection with AI Features, Provider will notify Customer in accordance with this Agreement and, for PHI, in accordance with the BAA.
Rights in Data and Outputs
As between the Parties, Customer retains all rights in Customer Data. Subject to Customer’s compliance with this Agreement, Customer may use AI Outputs for its internal business purposes in connection with the Services. Provider retains all rights in the Services and AI Features, excluding Customer Data.
AI Disclaimer
TO THE MAXIMUM EXTENT PERMITTED BY LAW, AI FEATURES AND AI OUTPUTS ARE PROVIDED “AS IS” AND “AS AVAILABLE.” PROVIDER DISCLAIMS ALL WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING WARRANTIES OF ACCURACY, NON-INFRINGEMENT, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND THAT AI OUTPUTS WILL BE ERROR-FREE OR UNINTERRUPTED.