Inproceedings525714163: Difference between revisions

From the International Center for Computational Logic
(The page was newly created)
 
Calvin Chau (talk | contribs)
No edit summary
{{Publikation Erster Autor
|ErsterAutorVorname=Calvin
|ErsterAutorNachname=Chau
|FurtherAuthors=Jan Křetínský; Stefanie Mohr
}}
{{Inproceedings
|Referiert=0
|Title=Syntactic vs Semantic Linear Abstraction and Refinement of Neural Networks
|To appear=0
|Year=2023
|Booktitle=Automated Technology for Verification and Analysis
|Pages=401--421
|Publisher=Springer Nature Switzerland
|Editor=André, Étienne and Sun, Jun
}}
{{Publikation Details
|Abstract=Abstraction is a key verification technique to improve scalability. However, its use for neural networks is so far extremely limited. Previous approaches for abstracting classification networks replace several neurons with one of them that is similar enough. We can classify the similarity as defined either syntactically (using quantities on the connections between neurons) or semantically (on the activation values of neurons for various inputs). Unfortunately, the previous approaches only achieve moderate reductions, when implemented at all. In this work, we provide a more flexible framework, where a neuron can be replaced with a linear combination of other neurons, improving the reduction. We apply this approach both on syntactic and semantic abstractions, and implement and evaluate them experimentally. Further, we introduce a refinement method for our abstractions, allowing for finding a better balance between reduction and precision.
|DOI Name=10.1007/978-3-031-45329-8_19
|Projekt=SEMECO-Q2
|Forschungsgruppe=Verifikation und formale quantitative Analyse
}}

Revision as of 5 March 2025, 14:05


Syntactic vs Semantic Linear Abstraction and Refinement of Neural Networks

Calvin Chau, Jan Křetínský, Stefanie Mohr
Syntactic vs Semantic Linear Abstraction and Refinement of Neural Networks
In André, Étienne and Sun, Jun, eds., Automated Technology for Verification and Analysis, 401–421, 2023. Springer Nature Switzerland.
  • Abstract
    Abstraction is a key verification technique to improve scalability. However, its use for neural networks is so far extremely limited. Previous approaches for abstracting classification networks replace several neurons with one of them that is similar enough. We can classify the similarity as defined either syntactically (using quantities on the connections between neurons) or semantically (on the activation values of neurons for various inputs). Unfortunately, the previous approaches only achieve moderate reductions, when implemented at all. In this work, we provide a more flexible framework, where a neuron can be replaced with a linear combination of other neurons, improving the reduction. We apply this approach both on syntactic and semantic abstractions, and implement and evaluate them experimentally. Further, we introduce a refinement method for our abstractions, allowing for finding a better balance between reduction and precision.
  • Project: SEMECO-Q2
  • Research Group: Verifikation und formale quantitative Analyse
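The abstract describes replacing a neuron with a linear combination of other neurons and folding that combination into the downstream weights. The sketch below illustrates the semantic variant of this idea on a single hidden layer with linear activations; it is a hypothetical toy illustration, not the paper's implementation, and the function name, least-squares fit, and weight conventions are all assumptions made for the example.

```python
import numpy as np

def linearly_abstract_neuron(W_in, W_out, acts, drop, keep):
    """Remove hidden neuron `drop` by approximating its activation as a
    linear combination of the `keep` neurons' activations (semantic view),
    then folding the coefficients into the outgoing weights.

    W_in : (hidden, n_in) weights into the hidden layer.
    W_out: (n_out, hidden) weights out of the hidden layer.
    acts : (n_samples, hidden) sampled hidden activations.
    """
    # Least-squares fit: acts[:, drop] ~= acts[:, keep] @ coeffs
    coeffs, *_ = np.linalg.lstsq(acts[:, keep], acts[:, drop], rcond=None)
    # Redistribute the dropped neuron's outgoing weights onto the kept ones.
    W_out_new = W_out.astype(float).copy()
    W_out_new[:, keep] += np.outer(W_out[:, drop], coeffs)
    # Delete the dropped neuron's row (incoming) and column (outgoing).
    rows = [i for i in range(W_in.shape[0]) if i != drop]
    cols = [j for j in range(W_out.shape[1]) if j != drop]
    return W_in[rows], W_out_new[:, cols]
```

When one neuron's activation really is a linear combination of the others (as in a linear layer with redundant rows), this reduction is exact; in general the residual of the fit gives a handle on the precision loss that the paper's refinement step would then trade off against the reduction.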
@inproceedings{CKM2023,
  author    = {Calvin Chau and Jan K{\v{r}}et{\'{\i}}nsk{\'{y}} and Stefanie
               Mohr},
  title     = {Syntactic vs Semantic Linear Abstraction and Refinement of Neural
               Networks},
  editor    = {Andr{\'{e}}, {\'{E}}tienne and Sun, Jun},
  booktitle = {Automated Technology for Verification and Analysis},
  publisher = {Springer Nature Switzerland},
  year      = {2023},
  pages     = {401--421},
  doi       = {10.1007/978-3-031-45329-8_19}
}