Mixing Description Logics in Privacy-Preserving Ontology Publishing
Talk by Adrian Nuradiansyah
- Location: APB 2026
- Start: 5. September 2019 at 1:00 pm
- End: 5. September 2019 at 2:00 pm
- Event series: KBS Seminar
In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an EL instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL EL. We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given EL concept can be computed. In the present paper, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the situations where the attacker's knowledge is given by an FL₀ or an FLE concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of FL₀, it turns out to be actually lower (polynomial) for the more expressive DL FLE.
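
As background, the following is a rough sketch of the compliance and safety notions from the earlier EL work (the exact formulation used in the talk may differ); in the mixed setting considered here, the attacker concept C' below ranges over the attacker's DL (FL₀ or FLE) rather than EL:

```latex
% Sketch of the central notions, assuming a policy given as a finite set
% of concepts \mathcal{P} = \{ P_1, \dots, P_k \}. Formulation follows the
% earlier EL work; details in the talk may differ.
\[
  C \text{ is compliant with } \mathcal{P}
    \;\iff\; C \not\sqsubseteq P \text{ for all } P \in \mathcal{P}
\]
\[
  C \text{ is safe for } \mathcal{P}
    \;\iff\; C \sqcap C' \text{ is compliant with } \mathcal{P}
    \text{ for every concept } C' \text{ that is compliant with } \mathcal{P}
\]
```

Intuitively, compliance says the published concept does not by itself entail any policy concept, while safety additionally guarantees that combining it with any compliant background knowledge of the attacker still cannot entail a policy concept.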