
Overview

Date: May 4, 2023
Location: The workshop will be held in hybrid mode, welcoming both in-person and virtual attendance (ICLR registration required).

In recent years, the landscape of AI has been significantly altered by advances in large-scale pre-trained models. Scaling models up with more data and parameters has markedly improved performance and achieved great success across a variety of applications, from natural language understanding to multi-modal representation learning. However, applying large-scale AI models to real-world applications raises concerns about their security, privacy, fairness, robustness, and ethics. In the wrong hands, machine learning could negatively impact mission-critical domains, including healthcare, education, and law, resulting in economic and environmental consequences as well as legal and ethical concerns. For example, existing studies have shown that large-scale pre-trained language models can produce toxic content in open-ended generation and risk amplifying bias against marginalized groups, such as BIPOC and LGBTQ+ communities. Moreover, large-scale models can unintentionally leak sensitive personal information from their pre-training data. Last but not least, machine learning models are often viewed as "black boxes" and may produce unpredictable, inaccurate, or unexplainable results, especially under domain shift or maliciously tailored attacks.

To address these negative societal impacts, researchers have investigated different approaches and principles for ensuring that large-scale AI systems are robust and trustworthy. This workshop is the first attempt to bridge the gap between research on security, privacy, fairness, and ethics and work on large-scale AI models, and it aims to discuss the principles and experiences of developing robust and trustworthy large-scale AI systems. The workshop also focuses on how future researchers and practitioners should prepare themselves to reduce the risks of unintended behavior in large ML models.

This workshop aims to bring together researchers from a broad range of disciplines who bring different perspectives to the emerging, interdisciplinary field of robustness and trustworthiness in large-scale foundation models. We will highlight recent related work from these communities, clarify the foundations of trustworthy machine learning, and chart important directions for future work and cross-community collaboration.