Call For Papers
We invite both ARR-based submissions and newly submitted manuscripts on the OpenReview platform, where each submitted paper will receive at least three reviews. Depending on the number of accepted submissions, papers will be presented in oral and/or poster sessions. Papers should be submitted in ACL format.
Important Dates
Submission Deadline: 5th May 2026, AOE
Notification Deadline: 31st May 2026
Camera-ready Submission Deadline: 5th June 2026
Submissions (four-page short papers or eight-page long papers) should address one of two primary themes, which include but are not limited to the following areas:
1. Development of Foundation Models for Social Good
- Responsible Development of Foundation Models: We welcome research on the responsible use of foundation models (e.g., efficient ML / low-carbon ML, methods for understanding the inner workings of foundation models), exploring both their opportunities and their challenges.
- Foundation Models for Low-Resource Languages: To ensure equitable access to the enormous potential of foundation models, they must be available across all languages. We invite work on developing efficient foundation models for low-resource languages.
- Small-Scale Foundation Models: Foundation models typically demand massive datasets and large-scale architectures, making them difficult to build and deploy across diverse systems. We encourage work on developing efficient, small-scale foundation models that are more accessible and usable for everyone.
- Multi-modality in Foundation Models: Multi-modality is vital for foundation models, as it integrates various data forms such as text, images, audio, and video to create more effective, context-aware solutions. This approach helps address real-world challenges and better serves diverse communities. We therefore also invite research on building multi-modal foundation models.
- Efficient Inference of LLMs: Reducing inference time is critical for deploying large language models (LLMs) in real-world applications, where latency, cost, and scalability directly affect usability. We encourage work on model compression techniques such as quantization (e.g., 8-bit or 4-bit weights), pruning, efficient decoding techniques, and knowledge distillation that reduce computation and memory footprint while preserving performance.
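As a purely illustrative example of the kind of compression work in scope, the sketch below shows symmetric per-tensor 8-bit weight quantization on a toy weight list; all names and values are hypothetical and not drawn from any submission.

```python
# Hypothetical sketch of symmetric 8-bit weight quantization.
# All function names and example values are illustrative only.

def quantize_8bit(weights):
    """Map float weights to int8-range values with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0  # symmetric range [-127, 127]
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.03, 0.9]
q, scale = quantize_8bit(weights)
approx = dequantize(q, scale)
# Each recovered weight differs from the original by at most scale / 2,
# while storage drops from 32 to 8 bits per value.
```

In practice, libraries apply this per channel or per group and combine it with calibration data, but the core trade-off, accuracy loss bounded by the quantization step versus a 4x memory reduction, is already visible in this sketch.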
2. Deployment of Foundation Models for Social Good
- Innovative Applications of Foundation Models: Considering the 17 UN Sustainable Development Goals (SDGs), we invite research contributions that apply foundation models in novel and underexplored domains.
The submission link is here.