GenAI Evaluation KDD2024: KDD Workshop on Evaluation and Trustworthiness of Generative AI Models
International Barcelona Convention Center, Barcelona, Spain, August 25-26, 2024

Conference website: https://genai-evaluation-kdd2024.github.io/genai-evalution-kdd2024/
Submission link: https://easychair.org/conferences/?conf=genaievaluationkdd20
Abstract registration deadline: June 21, 2024
Submission deadline: June 21, 2024
Welcome to GenAI Evaluation KDD 2024!
We are excited to invite you to contribute to the KDD Workshop on Evaluation and Trustworthiness of Generative AI Models at KDD 2024 in Barcelona, Spain. This workshop aims to serve as a platform for discussing advances at the forefront of Generative AI evaluation and trustworthiness.
Workshop Highlights & Important Dates
Workshop Highlights:
- Location: Barcelona, Spain
- Webpage: https://genai-evaluation-kdd2024.github.io/genai-evalution-kdd2024/
- Workshop Date: August 25th or 26th, 2024
Important Dates:
- Paper Submission Deadline: June 21st, 2024 (extended from June 10th, 2024)
- Paper Acceptance Notification: July 19th, 2024
- Camera-Ready Submission: August 12th, 2024
Call for Papers
Generative AI models, such as Large Language Models (LLMs) and Diffusion Models, have revolutionized various domains, underscoring the critical need for reliable Generative AI technologies. As these models increasingly influence decision-making processes, establishing robust evaluation metrics and methods becomes paramount. Our objective is to delve into diverse evaluation strategies to enhance the reliability of Generative AI models across applications. The workshop topics include, but are not limited to:
- Holistic Evaluation: Covering datasets, metrics, and methodologies
- Trustworthiness in Generative AI Models:
- Truthfulness: Counteracting misinformation, hallucination, inconsistency, sycophancy in responses, and adversarial factuality.
- Ensuring Safety and Security: Addressing privacy concerns, preventing harmful and toxic content.
- Addressing Bias and Fairness.
- Ethical Considerations: Social norm alignment, compliance with values, regulations, and laws.
- Privacy: Privacy awareness and privacy leakage.
- Enhancing misuse resistance, explainability, and robustness.
- User-Centric Assessment.
- Multi-perspective Evaluation: Emphasizing logical reasoning, knowledge depth, problem-solving, and user alignment.
- Cross-Modal Evaluation: Integrating text, image, audio, etc.
Submission Guidelines
- Papers should be submitted in PDF format through EasyChair at the following link: [link]
- Submission guidelines can be found at: [link]
Contact
Organizers: Yuan Ling, Shujing Dong, Zongyi Liu, Yarong Feng, George Karypis, Chandan K Reddy
Contact: kdd2024-ws-genai-eval@amazon.com