
softXplain: Requirements for Self-Explaining Software

Subject Area Software Engineering and Programming Languages
Term since 2021
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 470146331
 
Software is becoming increasingly complex and is sometimes difficult to use. Users do not understand what the software does, and why. This is often due to a clash between user expectations and perceived system behavior. End users are exposed to complex software behavior without a human expert available to answer their questions. Explanations provided by the software itself could mitigate that confusion. Explainability refers to the ability of software to explain its own behavior. Like usability, security, or maintainability, explainability can be considered a quality attribute.

When a complex software-based system is specified, there are functional and non-functional requirements to consider. Quality aspects are specified in non-functional requirements, which determine the scope and degree of quality aspects in a software product. There are often trade-offs that call for prioritization and decisions. It is therefore important to specify quality aspects in a useful way: explainability requirements should be valuable when implemented, realistic, and in good balance with competing quality aspects. High-quality requirements on explainability are an important prerequisite for useful, self-explaining software. A number of key questions need to be answered before useful requirements on explainability can be specified: How can software know user expectations in a given situation and context? How can software recognize whether a user is confused and needs explanations? Under which conditions should an explanation be provided?

The goals of softXplain are to provide meaningful answers to these key questions, to deliver executable demonstrators of limited size, and to show empirical evidence for increased user performance and satisfaction when adequate explanations are provided by the demonstrators. This will lay the scientific foundations for useful requirements on explainability. The project vision is to (1) identify needs for explanations, (2) characterize appropriate types of explanations, and (3) describe how software can provide specified explanations.

Explainability is currently gaining importance due to the advent of complex software systems. The challenge of interpretability in machine learning makes software behavior even more difficult to understand. However, interpretability is already investigated in various other research projects. softXplain deals with the ability of software to explain itself in general, with or without machine learning involved, and will not study the details of interpretability.

Validation of the proposed solutions will be based on empirical studies and experiments. The demonstrators incorporate and illustrate the challenges and solutions developed in this project, so that different variants of explanations can be compared. Using these demonstrators, user studies can be conducted in a semi-controlled environment.
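As a purely illustrative sketch (not taken from the project description), the question "under which conditions should an explanation be provided?" could be operationalized in a demonstrator as a runtime trigger rule that maps observed interaction signals to an explanation decision. All names, signals, and thresholds below are assumptions for illustration, not the project's actual approach.

```python
"""Illustrative sketch only: hypothetical explanation-trigger rule.

The signals and thresholds are assumptions; in the project, such conditions
would be derived from explainability requirements and validated in user studies.
"""

from dataclasses import dataclass


@dataclass
class InteractionContext:
    """Hypothetical signals a demonstrator might observe (assumed names)."""
    repeated_undo_count: int   # user keeps undoing the same action
    idle_seconds: float        # time without interaction after a system action
    help_requested: bool       # user explicitly asked for help


def should_explain(ctx: InteractionContext) -> bool:
    """Decide whether to offer an explanation in the current context."""
    if ctx.help_requested:
        return True
    if ctx.repeated_undo_count >= 3:   # possible mismatch between expectation
        return True                    # and perceived system behavior
    if ctx.idle_seconds > 30.0:        # user may be stuck or confused
        return True
    return False


if __name__ == "__main__":
    ctx = InteractionContext(repeated_undo_count=4, idle_seconds=2.0, help_requested=False)
    print(should_explain(ctx))  # -> True under these assumed thresholds
```

Different variants of such rules (and of the explanations they trigger) could then be compared in the semi-controlled user studies mentioned above.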
DFG Programme Research Grants
 
 
