Box AI: Acceptable Use Policy & Guiding Principles
Effective date: May 2, 2023
At Box, ensuring the privacy and security of our customers’ content is a bedrock principle at the heart of the Content Cloud. We believe that the adoption of generative AI (“AI”) will bring incredible innovation to what our customers can do with their content in Box. We also understand that the adoption of AI brings unique risks that must be addressed responsibly. As part of our ongoing commitment to customer trust and transparency, we’d like to update you on our efforts to incorporate AI into our product offerings, share our policies and guiding principles, and outline your responsibilities as a user of Box AI.
The Box AI: Acceptable Use Policy & Guiding Principles (the “Policy”) incorporates and should be read in accordance with your underlying agreement with Box. This Policy and your agreement with Box are in place for your protection and ours. Box also reserves the right to update the Policy from time to time due to changes in vendor requirements, applicable laws, guidance, standards and/or industry best practices.
AI transparency and Box’s use of third parties
We are committed to being transparent about our AI practices, technology, vendors, and data usage. When you choose to enable Box AI, it’s important you understand that Box AI is powered by integrations of AI systems provided by our trusted vendors.
AI principles, customer responsibilities, and use
Our AI Principles outline our commitment to using the power of AI in a manner that prioritizes the interests of our customers and users. These principles provide a framework for the responsible use of AI within Box. The section below details Box’s intended use of Box AI by our customers and their respective end users, emphasizes customers’ responsibilities when using Box AI, and lists specific examples of prohibited usage. By adhering to this Policy, Box and our customers can best capitalize on the benefits of AI, while mitigating risks and ensuring our shared compliance with applicable laws.
Customer responsibilities. The customer has the sole responsibility to ensure their usage of Box AI is in compliance with their own legal obligations, standards, practices, and internal guiding principles. Provided below are certain key examples of customer responsibilities related to the use of Box AI.
- Understand Box AI limitations. There are inherent limitations of AI systems and customers should not rely solely on Box AI outputs for consequential and/or critical decision-making that may result in high-risk outcomes. For instance, customers should consider the context, input data, and potential biases that may affect Box AI-generated results.
- Responsible use of content shared and output generated. Customers are responsible for the content they choose to share with Box AI and any actions taken by them as a result of the AI-generated output.
- Responsible for human oversight. Customers should avoid relying entirely on Box AI-generated outputs for making automated decisions without human review. Combining human judgment with Box AI-generated insights can help ensure more accurate and responsible outcomes.
- Responsible use for high-risk activities. Customers are responsible for and should avoid use of Box AI for high-risk and/or high-stakes activities, including activities with a high risk of physical harm or activities that could have a consequential impact on individuals’ rights and freedoms, such as legal status, healthcare, social welfare, and other freedoms protected under law (as detailed below in the Prohibited Use section).
- Responsible for compliance with laws and regulations. Customers should ensure their use of Box AI complies with applicable laws, regulations, standards, and guidelines, such as those related to data protection, privacy and security, as well as adhering to requirements in countries or regions where the use of AI has been prohibited or otherwise limited.
- Responsible for transparency with all users. Customers should maintain transparency with their users concerning the Box AI product offering and provide safeguards to ensure applicable laws, regulations, standards, guidelines and this Policy are followed.
Intended use. Box AI can be used to gain valuable insight into your content. Below are some examples of intended use cases for Box AI:
- Quickly find suggested answers to specific questions based on a document’s content.
- Classify files, folders or documents.
- Extract specific data from one or multiple documents automatically.
- Digest key information from large documents into short, readable summaries.
- Reference multiple documents associated with a topic.
- Generate content based on an existing document.
- Assist in the creation or understanding of a document’s content.
- Create work product as part of the product development process.
Prohibited use. While we believe that Box AI has limitless potential use cases, certain uses are beyond the intended purpose of the product offering and are strictly prohibited. For instance, Box AI models are not trained explicitly for legal, financial, or medical advice and should not be relied on for this type of high-risk decision-making. Examples of prohibited use cases for Box AI include:
- Employment: Box AI should not be used to make employment decisions, including hiring, firing, promoting, demoting, and other employment-related activities.
- Healthcare: Box AI should not be used to make health-related decisions, including recommendations for treatment, analysis of images and health records, diagnoses, and patient care.
- Finance: Box AI should not be used to make financial decisions, including accepting or rejecting loan applications, credit scoring, algorithmic trading, financial, investment or tax advice, or fraud detection.
- Legal: Box AI should not be used to make decisions regarding legal matters, including generating legal claims and patents, analyzing legal claims, providing legal advice and other related actions that may be deemed to be the unauthorized practice of law.
- Education: Box AI should not be used to make decisions regarding matters related to education, including making educational institutional acceptance or rejection decisions, assessing student proficiency, and testing assessments.
- Illegal or unsafe behaviors: Box AI should not be used for any illegal or unsafe behaviors, including producing harassing or violent content, malware and/or hacking activity, activity that could result in economic harm, fraudulent or deceptive activity, plagiarism or other academic dishonesty, disinformation, spam, activities that violate privacy, classifying individuals based on protected characteristics and/or otherwise in furtherance of a crime or activity deemed illegal in your respective jurisdiction.
- Override or circumvent safety filters: Box AI should not be used to attempt to override or circumvent safety filters or intentionally drive the model to act in a manner that contravenes this Policy.
- Protected classes: Box AI should not be used to make decisions based on a protected class or characteristic, as defined by applicable law(s), such as the following: