Evaluation Tasks


The 23rd China National Conference on Computational Linguistics (CCL 2024)

Call for technical evaluation tasks

July 25-28, 2024, Taiyuan

Conference Website: http://cips-cl.org/static/CCL2024/en/index.html

The Twenty-third China National Conference on Computational Linguistics (CCL 2024) will be held in Taiyuan, Shanxi, from July 25 to 28, 2024, hosted by Shanxi University. Founded in 1991 and organized by the Technical Committee on Computational Linguistics of the Chinese Information Processing Society of China, the conference has grown over more than 30 years into the most authoritative, largest, and most influential academic conference in the field of natural language processing in China. As the flagship conference of the Chinese Information Processing Society of China (a national first-level society), CCL focuses on intelligent computing and information processing for the languages of China, providing an extensive, high-level platform for discussing and disseminating the latest academic and technical achievements in computational linguistics.

The CCL conference has organized technical evaluations since 2017 to provide Chinese language processing researchers with a platform for testing related technologies, algorithms, and systems. To date, 43 evaluation tasks have been organized, covering morphology, syntax, semantics, pragmatics, and both modern and ancient Chinese, spanning fundamental Chinese language processing technologies and their applications in e-commerce, communications, justice, and other fields. Multiple open datasets have been released, and thousands of teams have participated in the competitions.

To better promote exchanges among domestic and international peers and to enhance the influence of the evaluation, this year's evaluation workshop (CCL24-Eval) will, at the proposal of the evaluation committee and following negotiations among the conference organizing committee, the publication committee, and ACL, again have a dedicated Proceedings under the CCL 2024 conference in the ACL Anthology. Summary papers of the evaluation tasks and papers from the participating teams will be eligible for inclusion. The evaluation committee will organize experts for double-blind review, and evaluation reports (in Chinese or English) of excellent content and writing quality will be included in both the CCL Anthology and the ACL Anthology.

This year the conference will again organize technical evaluations and sincerely solicits evaluation task proposals from scholars, research institutions, and enterprises in related fields. A proposal should describe the task content, evaluation criteria, preparation of evaluation data, and approximate schedule in detail (please refer to the template example below). The task leader (one person per task, responsible for communicating with the evaluation committee; changes to the task leader during the evaluation process are generally not allowed) should send the evaluation task application to libin.njnu@gmail.com and tanhongye@sxu.edu.cn.

Technical Evaluation Schedule

  • Task solicitation begins:
    November 1, 2023
  • Deadline for task collection:
    December 31, 2023
  • Release and promotion of each evaluation task:
    January 2024
  • Organizers release training sets:
    March-May 2024
  • CCL officially releases all evaluation tasks:
    March 1, 2024
  • Technical evaluation of each task:
    March-May 2024
  • Completion of evaluation tasks:
    May 31, 2024
  • Submission of Chinese or English technical reports:
    June 2024
  • Review of evaluation papers & acceptance notification:
    July 2024
  • Submission of camera-ready versions of evaluation papers:
    July 2024
  • Correction and typesetting, submission for inclusion in ACL and CCL Anthology:
    July 2024
  • CCL 2024 Evaluation Workshop:
    July 2024

Program (Review) Committee:

  • Pengyuan Liu (Researcher, College of Information Science, Beijing Language and Culture University)
  • Longlong Ma (Associate Researcher, Institute of Software, Chinese Academy of Sciences)
  • Qi Su (Researcher, School of Foreign Languages, Peking University)
  • Meng Wang (Associate Professor, School of Humanities, Jiangnan University)
  • Xuri Tang (Professor, School of Foreign Languages, Huazhong University of Science and Technology)
  • Dongbo Wang (Professor, College of Information Management, Nanjing Agricultural University)
  • Liu Liu (Associate Professor, College of Information Management, Nanjing Agricultural University)
  • Bo An (Assistant Researcher, Institute of Ethnology and Anthropology, Chinese Academy of Social Sciences)
  • Shehui Liang (Associate Professor, School of International Cultural Education, Nanjing Normal University)
  • Lin Li (Associate Professor, School of Computer Science, Qinghai Normal University)
  • Xiaofei Qian (Ph.D., College of Liberal Arts, Shanghai University)
  • Chengjie Sun (Associate Professor, Faculty of Computing, Harbin Institute of Technology)
  • Xianling Mao (Associate Professor, School of Computer Science and Technology, Beijing Institute of Technology)
  • Wenpeng Lu (Professor, Faculty of Computer Science and Technology, Qilu University of Technology/Shandong Academy of Sciences)
  • Meiling Liu (Associate Professor, College of Information and Computer Engineering, Northeast Forestry University)

More members to be announced...

Evaluation tasks can include but are not limited to the following topics:

  • Basic tasks of natural language processing
    • Lexical and syntactic analysis
    • Semantic analysis
    • Text and pragmatic analysis
    • Cross-lingual and low-resource language processing
  • Natural language processing applications
    • Knowledge graphs
    • Question answering and dialogue systems
    • Reading comprehension
    • Text generation
    • Information retrieval, recommender systems, and social media computing
    • Applications of natural language processing in medicine, education, the humanities, justice, and other fields
  • Application and evaluation of large language models
    • Specific applications of large language models
    • Evaluation and assessment methods for large language models
  • Multimodal computing
    • Analysis and modeling of multimodal data such as speech, text, images, videos, electroencephalography (EEG), and magnetic resonance imaging (MRI)
    • Language-related computing applications such as virtual reality (VR) and augmented reality (AR)

Solicitation of Corporate Sponsorship

CCL24-Eval is seeking sponsorship from various companies, with sponsorship options including:

(1) Sponsorship of the organizers, which grants the company corresponding promotional benefits.

(2) Sponsorship of a specific evaluation task, which grants the company naming rights for the task.

(3) Sponsorship options include but are not limited to: providing prizes (for winning teams), funds (to support the organizers), computing power (GPU support for the organizers and participating teams), and platforms (an online platform for displaying A/B leaderboards, etc.).

Evaluation Chairs:

Hongfei Lin, Dalian University of Technology

Bin Li, Nanjing Normal University

Hongye Tan, Shanxi University

November 1, 2023



CCL24-Eval *** Evaluation Task Application Template

Organizers and Contact Information

Leader and Contact Information [Responsible for communicating with the evaluation committee]

Contact Person and Contact Information [Responsible for communicating with the participating teams]

Please note: One person per task is responsible, and changes during the evaluation process are generally not allowed

1. Task Content

Please provide specific details about the evaluation (task definition, task settings, etc.)

2. Evaluation Data

Please provide information about the data used for the evaluation (data samples, data distribution, etc.)

3. Evaluation Criteria

Please provide the evaluation criteria for the evaluation task (evaluation criteria for each task, evaluation criteria for the final evaluation results, etc.)

4. Evaluation Schedule

According to the overall arrangement of CCL24-Eval, please provide the specific schedule for the evaluation, including: registration, data release, evaluation script release, leaderboard preparation (if applicable), result submission, A/B leaderboard or preliminary/final round arrangements (if applicable), result announcement, etc.

5. Funding Situation

Please provide the evaluation's award and sponsorship plan

6. Website Development and Paper Review

Please provide the URL for the evaluation task (GitHub is generally recommended; you may refer to the 2023 evaluation tasks)

7. Paper Format

Conference submissions must use the LaTeX template provided. Submitted papers may contain up to 6 pages of content, with no limit on the number of pages of references. As the conference adopts a double-blind review process, authors' names and affiliations must not appear in the submitted papers. Accordingly, authors should refer to their own prior work in the third person (e.g., "XX et al. propose...") rather than with first-person phrases such as "we propose". Papers that do not meet these requirements will be rejected without undergoing a full review.

Template download link (may be updated to the 2024 version later):

http://cips-cl.org/static/CCL2023/downloads/ccl2023_template.zip

8. Other

Please refer to the CCL23-Eval evaluation release page:

The 22nd China National Conference on Computational Linguistics - CCL 2023 (cips-cl.org)

http://cips-cl.org/static/CCL2023/cclEval/taskEvaluation/index.html