Inproceedings525714163: Difference between revisions
From International Center for Computational Logic
Current version as of 18 March 2025, 15:08
Syntactic vs Semantic Linear Abstraction and Refinement of Neural Networks
Calvin Chau, Jan Křetínský, Stefanie Mohr
Syntactic vs Semantic Linear Abstraction and Refinement of Neural Networks
In André, Étienne and Sun, Jun, eds., Automated Technology for Verification and Analysis, 401–421, 2023. Springer Nature Switzerland
- Abstract
Abstraction is a key verification technique to improve scalability. However, its use for neural networks is so far extremely limited. Previous approaches for abstracting classification networks replace several neurons with one of them that is similar enough. We can classify the similarity as defined either syntactically (using quantities on the connections between neurons) or semantically (on the activation values of neurons for various inputs). Unfortunately, the previous approaches only achieve moderate reductions, when implemented at all. In this work, we provide a more flexible framework, where a neuron can be replaced with a linear combination of other neurons, improving the reduction. We apply this approach both on syntactic and semantic abstractions, and implement and evaluate them experimentally. Further, we introduce a refinement method for our abstractions, allowing for finding a better balance between reduction and precision.
- Project: SEMECO-Q1
- Research Group: Algebraic and Logical Foundations of Computer Science
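The abstract's core idea, replacing a neuron with a linear combination of other neurons, can be sketched in a few lines. The snippet below is only an illustration, not the authors' implementation: it assumes a plain feed-forward layer, ignores biases and activation functions, and the function name abstract_neuron, the least-squares fit over sampled activations, and the weight-folding step are assumptions made for this example.

import numpy as np

def abstract_neuron(acts, w_next, j):
    """Remove hidden neuron j by expressing it as a linear combination of
    the remaining neurons (least squares over sampled activations), then
    folding the coefficients into the next layer's weights.

    acts   : (n_samples, n_neurons) activations of the hidden layer
    w_next : (n_out, n_neurons) weight matrix of the following layer
    j      : index of the neuron to remove
    """
    keep = [k for k in range(acts.shape[1]) if k != j]
    # Coefficients c with acts[:, j] ~ acts[:, keep] @ c (semantic similarity)
    c, *_ = np.linalg.lstsq(acts[:, keep], acts[:, j], rcond=None)
    # Fold the removed neuron's outgoing weights into the kept neurons
    w_reduced = w_next[:, keep] + np.outer(w_next[:, j], c)
    return keep, c, w_reduced

# Example: abstract away one of four hidden neurons on random data
acts = np.random.rand(100, 4)
w_next = np.random.rand(2, 4)
keep, c, w_reduced = abstract_neuron(acts, w_next, j=3)

A syntactic variant, in the paper's terminology, would fit the coefficients on the incoming weight vectors of the neurons instead of on sampled activation values.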
@inproceedings{CKM2023,
author = {Calvin Chau and Jan K{\v{r}}et{\'{\i}}nsk{\'{y}} and Stefanie
Mohr},
title = {Syntactic vs Semantic Linear Abstraction and Refinement of Neural
Networks},
editor = {Andr{\'{e}}, {\'{E}}tienne and Sun, Jun},
booktitle = {Automated Technology for Verification and Analysis},
publisher = {Springer Nature Switzerland},
year = {2023},
pages = {401--421},
doi = {10.1007/978-3-031-45329-8_19}
}