AI Policy

Responsible AI

We are committed to advancing AI technology responsibly. We do this by utilizing rigorous, multidisciplinary review processes throughout our product development lifecycle, establishing diverse development teams to reduce biases, and collaborating with industry partners to mitigate potentially harmful uses of AI. We encourage AI policy measures that enable an open ecosystem and consider context- and sector-specific use cases and applications rather than horizontal requirements. We believe that an open AI ecosystem drives accessibility for all actors in the AI value chain and promotes a level playing field where innovation can thrive. Additionally, we strongly recommend that industry stakeholders adopt responsible AI principles or guidelines that enable human oversight of AI systems and address risks related to human rights, transparency, equity and inclusion, safety, security, reliability, and data privacy.

Accountability

We support AI accountability practices that promote ethical behavior, and we share information with other organizations to help ensure AI systems are responsibly developed and deployed. Using a risk-based approach, organizations should implement processes to evaluate and address the potential impacts and risks associated with the use, development, and deployment of AI systems. While numerous existing laws and regulations, such as privacy and consumer financial laws, already apply to the deployment and use of AI technology, new rules may need to be adopted where gaps exist.

Trust and Safety

We support a risk-based, multi-stakeholder approach to trustworthy AI development, informed by international standards (e.g., ISO/IEC) and frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework. These provide key guidance on the requirements underpinning the trust and safety of AI, such as data governance, transparency, accuracy, robustness, and bias mitigation. Regulatory agencies should also evaluate the use and impact of AI in relevant, specific use cases to clarify how existing laws apply to AI and how AI can be used in compliance with those laws. Where necessary, regulatory agencies may develop appropriate requirements in collaboration with industry and other stakeholders to address remaining concerns.

Generative AI

Generative AI refers to algorithms that create new data resembling human-generated content, including audio, code, images, text, simulations, and videos. These models are trained on existing content and data, enabling applications such as natural language processing, computer vision, metaverse experiences, and speech synthesis. As generative AI continues to improve, reliable access to verifiable, trustworthy information, including certainty that a particular piece of media is genuinely from the claimed source, will be increasingly critical in our society. Technology is likely to play an important role in meeting this need. We are working to mitigate risks and build trust by developing algorithms and architectures to determine whether content has been manipulated using AI techniques, and our research team is investigating new approaches to media authentication, including the use of AI and other methods to determine the authenticity of media content.
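
As a purely illustrative sketch of one part of this problem, confirming that media is genuinely from the claimed source, the example below verifies a publisher's digital signature over content bytes using the open-source Python "cryptography" library. It is a hypothetical minimal example, not a description of our algorithms or architectures; names such as sign_media and verify_media are assumptions made for illustration, and real provenance schemes carry far richer metadata. Detecting AI-based manipulation of content itself requires the kinds of research approaches described above.

    # Illustrative sketch only (hypothetical, not our production approach):
    # a publisher signs media bytes so a consumer can check that the content
    # originated from the claimed source and was not altered after signing.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )
    from cryptography.exceptions import InvalidSignature


    def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
        # The publisher signs the raw content bytes at creation time.
        return private_key.sign(media_bytes)


    def verify_media(
        public_key: Ed25519PublicKey, media_bytes: bytes, signature: bytes
    ) -> bool:
        # A consumer checks the signature against the publisher's known public key.
        try:
            public_key.verify(signature, media_bytes)
            return True
        except InvalidSignature:
            # Any change to the bytes after signing invalidates the signature.
            return False


    if __name__ == "__main__":
        key = Ed25519PrivateKey.generate()
        media = b"example media content"
        sig = sign_media(key, media)
        print(verify_media(key.public_key(), media, sig))          # True
        print(verify_media(key.public_key(), media + b"x", sig))   # False

A signature check of this kind can only establish provenance and integrity; it says nothing about whether the signed content was itself generated or manipulated with AI, which is why complementary media-authentication research remains necessary.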