The 24th International Conference on Knowledge Engineering and Knowledge Management (EKAW-24) encompasses the diverse realms of eliciting, acquiring, modeling, and managing knowledge. The conference addresses the pivotal role of knowledge in constructing systems and services for the semantic web, knowledge management, knowledge discovery, information integration, natural language processing, intelligent systems, e-business, e-health, humanities, cultural heritage, and beyond.

EKAW-24's Special Theme: Knowledge in the Age of Language Models (LMs). In addition to the topics above, this year's conference invites research articles focusing on algorithms, tools, methodologies, and applications that leverage the interplay between knowledge and LMs. The contributions should deepen our understanding of how LMs contribute to the dynamic landscape of knowledge acquisition and engineering and vice versa.

All submissions, including those related to LMs, should establish a clear connection to Knowledge Engineering and Knowledge Management or demonstrate a significant impact on the field. While acknowledging the interdisciplinary nature of knowledge and its interplay with other disciplines and technologies, such as Machine Learning, Natural Language Processing, and Computer Vision, contributions lacking direct relevance to Knowledge Engineering and Knowledge Management will not be considered pertinent to the EKAW conference.

Topics of interest include, but are not limited to:
  • LM-enhanced ontology and knowledge engineering methodologies and tools
  • Ontology evaluation via LMs
  • (Ontological) knowledge memorization in LMs
  • Knowledge-based techniques for LMs (e.g., Retrieval-Augmented Generation, fact-checking, and bias mitigation)
  • Question answering over knowledge graphs via LMs

  • Methods, techniques, and tools for knowledge acquisition and ontology engineering (e.g., ontology learning and population, ontology design patterns)
  • Ontology mapping and alignment
  • Ontology evaluation and metrics
  • Collaborative knowledge engineering
  • Multimodal Knowledge Engineering and Acquisition

  • Methods, techniques, and tools for knowledge management and ontology governance
  • Knowledge evolution, maintenance, and preservation
  • Knowledge sharing and distribution
  • Methods for accelerating take-up of knowledge management technologies

  • Ethics, trust, and privacy in knowledge representation and reasoning
  • Explainable AI
  • Provenance, trust, and transparency in knowledge management
  • FAIR data and FAIR knowledge
  • Inclusivity and diversity in knowledge representation

  • Knowledge representation inspired by cognitive science
  • Synergies between humans and machines
  • Knowledge emerging from user interaction and (social) networks
  • Knowledge ecosystems
  • Crowdsourcing in knowledge management

  • Data mining for knowledge construction
  • Text mining and ontology engineering
  • Classification and clustering for knowledge management
  • Mining patterns and association rules
  • Neuro-symbolic Artificial Intelligence

  • eGovernment and public administration
  • Life sciences, health, and medicine
  • Humanities and Social Sciences
  • Cultural Heritage and Digital Libraries
  • ICT4D (Knowledge in the developing world)

Timeline

⚠ This section will be updated with specific dates for each track.

All submission deadlines are 23:59:59 AoE (Anywhere on Earth).


EKAW-24 distinguishes between research, in-use, and position papers. All paper types have the same status and follow the same formatting guidelines in the proceedings, but each is evaluated against its own criteria during the reviewing phase:

  • Research papers: These are standard papers presenting a novel method, technique, or analysis with appropriate empirical or other types of evaluation as a proof-of-concept. The main evaluation criteria here will be originality, technical soundness, and validation.

  • In-use papers: These are papers describing knowledge management and engineering applications in real environments. Applications must address a sufficiently interesting and challenging problem, work with real-world data, and involve real users. The focus is less on the originality of the approach and more on presenting systems that solve a significant problem while addressing the particular challenges that come with the use of real-world data. Evaluations are essential for this type of paper and should involve a representative subset of the actual users of the system.

  • Position papers: These are papers describing novel, innovative, and disruptive ideas. Position papers may also comprise an analysis of currently unsolved problems or review these problems from a new perspective to contribute to their better understanding within the research community. We expect that such papers will guide future research by highlighting critical assumptions, motivating the difficulty of a specific problem, or explaining why current techniques are not sufficient, possibly corroborated by quantitative and qualitative arguments.


Pre-submission of abstracts is a strict requirement. All abstracts and papers must be submitted electronically via EasyChair: https://easychair.org/conferences/?conf=ekaw2024

As in past editions, accepted papers will be published by Springer in an LNAI volume. Submissions must be in PDF, formatted in the style of LNCS conference proceedings. For details and available templates (LaTeX, Microsoft Word), see Springer's conference proceedings guidelines.

The following page limits (references excluded) apply:

  • Research and In-use papers: 15 pages;
  • Position papers: 8 pages.

Submissions must be in English and must be prepared for single-blind review. Manuscripts already available on arXiv but not published elsewhere may be submitted; however, dual submissions are not allowed.

Neither plagiarism nor self-plagiarism is tolerated. Please be advised that a plagiarism-checking tool may be used to screen all submissions.

Large Language Models Policy of EKAW-24: Papers that include text generated by a Large Language Model (LLM), such as ChatGPT, are prohibited unless the generated text is presented as part of the paper's experimental analysis. AI tools may be used to edit and polish the authors' own writing (e.g., automated grammar checks, word autocorrect, and other light editing), but text produced entirely by AI is not allowed. We follow the LLM policy stated by ICML 2023.


One full registration for the conference at the regular rate is required for each accepted paper.


Mehwish Alam
Télécom Paris, Institut Polytechnique de Paris, France

Marco Rospocher
University of Verona, Italy