Responsible AI
We are committed to advancing AI technology responsibly. We do this by utilizing rigorous, multidisciplinary review processes throughout our product development lifecycle, establishing diverse development teams to reduce biases, and collaborating with industry partners to mitigate potentially harmful uses of AI. We encourage AI policy measures that enable an open ecosystem and consider context and sector-specific use cases and applications rather than horizontal requirements. We believe that an open AI ecosystem drives accessibility for all actors in the AI value chain and promotes a level playing field where innovation can thrive. Additionally, we strongly recommend that industry stakeholders adopt responsible AI principles or guidelines that enable human oversight of AI systems and address risks related to human rights, transparency, equity and inclusion, safety, security, reliability, and data privacy.
Accountability
We support AI accountability practices that promote ethical behavior and share information with other organizations to help ensure AI systems are responsibly developed and deployed. Using a risk-based approach, organizations should implement processes to evaluate and address potential impacts and risks associated with the use, development, and deployment of AI systems. While numerous existing laws and regulations, such as privacy and consumer financial laws, apply to the deployment and use of AI technology, new rules may need to be adopted where gaps exist.
Trust and Safety
We support a risk-based, multi-stakeholder approach to trustworthy AI development, informed by international standards (e.g., ISO/IEC) and frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework. These provide key guidance on important requirements underpinning the trust and safety of AI, such as data governance, transparency, accuracy, robustness, and bias. Regulatory agencies should also evaluate the use and impact of AI in relevant, specific use cases to clarify how existing laws apply to AI and how AI can be used in compliance with them. If necessary, regulatory agencies may develop appropriate requirements in collaboration with industry and stakeholders to address additional concerns.
Generative AI
Generative AI describes the algorithms used to create new data that can resemble human-generated content, including audio, code, images, text, simulations, and videos. This technology is trained on existing content and data, enabling applications such as natural language processing, computer vision, the metaverse, and speech synthesis. As generative AI continues to improve, reliable access to verifiable, trustworthy information, including certainty that a particular piece of media genuinely comes from its claimed source, will be increasingly critical in our society, and technology is likely to play an important role in providing it. We are working to mitigate risks and build trust by developing algorithms and architectures that determine whether content has been manipulated using AI techniques. Our research team investigates new approaches to help determine the authenticity of media content; our research areas include using AI and other methods for media authentication.
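To make the media-authentication idea concrete, the sketch below shows one simple form of provenance checking: comparing a file's cryptographic hash against a digest published by the claimed source. This is an illustrative example only, not a description of our systems; the manifest, file name, and digest value are hypothetical placeholders, and production provenance standards such as C2PA additionally bind such manifests to the source with digital signatures.

    import hashlib

    # Hypothetical trusted manifest: digests the claimed source is assumed
    # to have published for its assets (placeholder value, not real data).
    TRUSTED_MANIFEST = {
        "interview.mp4": "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def sha256_digest(path: str) -> str:
        """Stream the file and return its SHA-256 digest as a hex string."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def appears_authentic(path: str, asset_name: str) -> bool:
        """True only if the file's digest matches the published manifest entry.

        Any post-publication alteration, including AI-driven manipulation,
        changes the digest and causes the check to fail.
        """
        expected = TRUSTED_MANIFEST.get(asset_name)
        return expected is not None and sha256_digest(path) == expected

A hash check of this kind confirms only that the bytes are unchanged since publication; deciding whether content was synthetically generated in the first place requires the kind of AI-based detection research described above.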