Artificial intelligence (AI) is front and center in the data-driven revolution of recent years, fueled by the increasing availability of large amounts of data (“big data”) in virtually every domain. The now dominant paradigm of data-driven AI, powered by sophisticated machine learning algorithms, employs big data to build intelligent applications and support fact-based decision making. The focus of data-driven AI is on learning (domain) models and keeping those models up-to-date by using statistical methods over big data, in contrast to the manual modeling approach prevalent in traditional, knowledge-based AI.
While data-driven AI has led to significant breakthroughs, it also comes with a number of disadvantages. First, models generated by machine learning algorithms often cannot be inspected and understood by a human being and thus lack explainability. Furthermore, integrating preexisting domain knowledge into learned models – prior to or after learning – is difficult. Finally, the correct application of data-driven AI depends on the domain, the problem, and the organizational context, and must take human aspects into account as well. Conceptual modeling can be the key to applying data-driven AI in a meaningful, correct, and time-efficient way while improving maintainability, usability, and explainability.
While we welcome all kinds of contributions that bring together approaches from conceptual modeling and data-driven AI, we wish to put an emphasis on the following key areas:
1. Augmenting Data-Driven AI with Conceptual Modeling for Explainable AI
Data-driven AI generates models that may be either symbolic (e.g., rules, decision trees) or sub-symbolic (e.g., neural networks). These models are then used in application systems to implement specific functions and/or behavior, e.g., object detection in images, scene understanding in videos, medical diagnosis, or the interpretation of sensor data. Typically, these models cannot be inspected and understood by a human being: symbolic models tend to be too complex, whereas sub-symbolic models do not contain structural elements that humans can interpret. In many application scenarios, however, such interpretability is an important requirement. Various approaches to explainability, model testing, and verification are based on integrating data-driven AI with approaches from conceptual modeling.
2. Supporting Data-Driven Decision Making with Conceptual Modeling
Business intelligence (BI) and analytics projects require domain experts, business people, data scientists, and engineers to communicate with each other in a common language. Stakeholders must collaborate in various ways in order to gain a common understanding of the problem as well as to decide on the appropriate data model, architecture, algorithms, user interfaces, etc. Yet, BI and analytics projects often involve low-level, ad hoc data wrangling and programming, which increases development effort and reduces the usability of BI and analytics solutions. Furthermore, data analytics and machine learning techniques are often misapplied, adversely affecting the validity of analysis results in practice. A conceptual perspective on data-driven decision making ensures that analysts employ algorithms in the correct context and use appropriate systems to process the available data in a meaningful way. Conceptual modeling raises data-driven decision making to a higher level of abstraction, facilitating the implementation and use of data analytics solutions while enabling stakeholders with different skills to communicate with each other.
3. Using Data-Driven Model Generation for Conceptual Modeling
Conceptual modeling is not necessarily a (purely) manual undertaking but can benefit from approaches to model generation. For example, a first draft of a conceptual model might be generated from existing data and then extended and refined manually. Likewise, individual parts of a conceptual model may be generated automatically while other parts are handcrafted. Existing models might be (semi-)automatically improved based on regularities derived from newly available data, which may form the basis for self-tuning and self-repairing systems.
Topics of Interest
Further topics of interest include, but are not limited to:
- Combining generated and manually engineered models
- Combining symbolic with sub-symbolic models
- Conceptual (meta-)models as background knowledge for model learning
- Conceptual models for enabling explainability, model validation and plausibility checking
- Trade-off between interpretability and model performance
- Reasoning in generated models
- Data-driven modeling support
- Learning of meta-models
- Automatic, incremental model adaptation
- Case-based reasoning in the context of model generation and conceptual modeling
- Model-driven guidance and support for the data analytics lifecycle
- Conceptual models for supporting users in conducting data analysis
Workshop Program (CEST)
CMAI Session I chaired by Dominik Bork
13.30 - 13.40 Introduction to the Workshop: Conceptual Modeling and Artificial Intelligence: Mutual Benefits from Complementary Worlds
Dominik Bork
13.40 - 14.10 Why Should Machine Learning Require Conceptual Models?
Wolfgang Maass and Veda C. Storey
14.10 - 14.40 Conceptual Models for ML: Reflections and Guidelines (further reading: AAAI’21, ER’20, CAiSE’19, EMMSAD’21, DKE’21, MSIQ’21)
Arturo Castellanos, Alfred Castillo, Monica Chiarini Tremblay, Roman Lukyanenko, Jeffrey Parsons and Veda C. Storey
14.40 - 15.10 Using Conceptual Modeling to Drive Machine Learning Solutions Development - A Case Report on applying GR4ML
Soroosh Nalchigar and Eric Yu
CMAI Session II chaired by Ulrich Reimer
15.30 - 16.00 Searching for Models with Hybrid AI Techniques
Martin Eisenberg, Hans-Peter Pichler, Antonio Garmendia and Manuel Wimmer
16.00 - 16.30 Conceptual Modelling and Artificial Intelligence: Overview and research challenges from the perspective of predictive business process management
Peter Fettke
16.30 - 17.00 Wrap-Up
Important Dates
Paper submission: extended until 16 July 2021
Author notification: 6 August 2021
Camera-ready version: 20 August 2021
Paper Submission
Authors should consult Springer’s authors’ guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made.
Papers must not contain any author information (i.e., blind submission) and must not exceed 14 pages in length (including figures, references, etc.) using the LNCS template. Submissions are handled via the EasyChair system.
Accepted papers will be published in the LNCS series by Springer. Note that an accepted paper will only be published if at least one author presents it at the workshop.
Workshop Organizers
- Dominik Bork, TU Wien, Austria
- Peter Fettke, German Research Center for Artificial Intelligence, Saarland University, Germany
- Ulrich Reimer, Eastern Switzerland University of Applied Sciences, Switzerland
- Marina Tropmann-Frick, University of Applied Sciences Hamburg, Germany