Box AI

Principles and approach

Box AI principles


We’re committed to responsibly developing Box AI, ensuring transparency while adhering to permissions and data security protocols.

Effective date: May 2, 2023

Artificial intelligence (AI) offers significant opportunities to transform the way enterprises manage, analyze, create, and extract value from their content. In particular, new advancements in large language and other advanced models mean that AI now has the unprecedented ability to process unstructured content. This promises to streamline workflows, enhance decision-making processes, and unlock potential across all aspects of business operations.

However, the adoption of AI brings with it unique challenges and risks that must be addressed responsibly. Ensuring the security, privacy, and conscientious use of AI on your most critical corporate content is at the core of our mission at Box.

With this in mind, the Box AI Principles outline our commitment to applying the power of AI in a manner that prioritizes the interests (and protects the content) of our customers and users. These principles provide a framework for the responsible and secure use of AI within Box, and serve to ensure there is transparency around how Box will use AI. By adhering to these principles and guidelines, we can all capitalize on the benefits of AI without compromising the integrity of proprietary data or operations.


Box AI commitments

When using and providing AI capabilities in Box products, we commit to the following principles; as our experience in this space deepens, this list may evolve:

  1. Full customer control of AI usage. We’re committed to ensuring that our customers maintain control over their own data and processes. Customers may enable or disable the use of AI and decide whether AI should be applied to their content.
  2. No training models using customer content without explicit approval. Box won’t train AI models using customer content without the customer’s explicit authorization (for example, if a customer wants to create a customized AI model based on some of their content, they will need to explicitly agree to allow this application of AI to their content).
  3. Explanations of AI output. Wherever reasonable, we provide users with a clear understanding of how our AI systems work and the rationale behind the AI output, giving them context for the results.
  4. Strict adherence to permissions. AI systems adhere to the same strict controls and permissions policies that determine access to content across Box. Our architecture vigorously protects against data leakage and unauthorized access.
  5. Data security. We safeguard customer data by implementing robust security protocols, including encryption and data-security best practices, to maintain strict confidentiality and bolster security.
  6. Transparency. We are committed to being transparent about our AI practices, technology, vendors, and data usage.
  7. Protection of user and enterprise data. Box remains committed to complying with applicable privacy and security regulations by prioritizing the continued protection of both end-user and enterprise data.
  8. Trustworthy AI models. We are dedicated to using high-quality AI models from trusted vendors to support the accuracy, reliability, and safety of our AI solutions.



Enterprise responsibilities and acceptable use

When using Box AI, Box customers also have responsibilities to ensure they use the AI input and output responsibly, including: understanding the limitations of AI, avoiding illegal or unsafe behaviors, avoiding automatic decision-making by AI that could cause harm without human review, and staying in compliance with laws/regulations. More information about enterprise responsibilities is available in our Box AI Acceptable Use Policy & Guiding Principles.


AI Governance at Box: Unleashing innovation safely

At Box, ensuring the privacy and security of our customers’ content is a foundational principle at the heart of the Content Cloud. That's why we've established a thorough AI governance program that aligns with the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF).

The NIST AI RMF offers a structured approach to evaluating, controlling, and reducing risks related to integrating AI technologies such as generative AI powered by large language models. While voluntary, the NIST AI RMF establishes specific standards tailored to a company's role, use case, size, complexity, and other relevant factors. The following outlines how we have integrated the four key pillars of this framework, where applicable, into our own AI governance program.

  • Govern. We maintain AI policies and practices to make certain that our use of AI technology is in compliance with laws, regulations, certifications, and industry standards. We provide clear principles for employees to follow when Box and our product offerings integrate with AI technology service partners. These policies also establish a process for giving feedback on best practices.
  • Map. The Box AI Acceptable Use Policy and Guiding Principles detail how Box's customers and their end users should utilize Box AI technologies, including their responsibilities and what usage is prohibited. Adhering to these principles helps maximize the benefits of AI while minimizing risks and supporting adherence to applicable legal obligations.
  • Measure. We continually monitor our integration with AI technologies made available by our AI service partners, including the viability and reliability of outputs generated by Box AI.
  • Manage. We've established a cross-functional AI governance team to supervise the integration of AI technology provided by our AI service partners into Box's systems. The AI governance team includes Box’s most senior leaders from legal, security, compliance, product, engineering, supplier management, and employment, to ensure that we only integrate with AI service partners that align with Box’s policies and industry best practices.


Safeguarding your data

From a single Box user with a personal account to our largest Enterprise customers, we maintain enterprise-grade security, compliance, privacy, and trust to protect customer content. Box’s data encryption strategy is based on requirements from standards such as the HIPAA/HITECH Act, PCI DSS, and ISO 27001, and on adherence to NIST-recommended algorithms and methods, among others. Content uploaded to Box is encrypted in transit when sent through our website and Box-created applications. Content is also encrypted at rest. To learn more about how we prioritize security, compliance, data privacy, and reliability at Box, visit the Box Trust Center.



Related resources

  • AI Governance: Safeguarding Trust and Data Privacy (blog)
  • Protecting Privacy in an AI-driven World (video)
  • The evolving landscape of AI regulations (video)
  • Ensuring data privacy and trust across AI innovations (video)
  • Why centralizing your content matters for privacy (blog)
