Talks

Abstracts


Krishna S, IIT Bombay. Verification of Concurrent Programs Under Weak Memory

Abstract

This is an overview of some recent work on the verification of concurrent programs. Traditionally, concurrent programs are interpreted under sequential consistency (SC). Even though SC is intuitive and easy to use, modern multiprocessors do not implement SC for performance reasons, and instead use so-called "weak memory models". Some of the well-known weak memory models in vogue among modern multiprocessor architectures are Intel x86, IBM POWER and ARM. Weak memory is also prevalent in the C11 model, leading to its release-acquire fragment. In this talk, I will discuss some of these memory models, as well as recent efforts in the verification of programs under them.
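To make the SC/weak-memory distinction concrete, here is a small Python sketch (an illustration, not from the talk) of the classic "store buffering" litmus test: two threads each write one shared variable and then read the other. Enumerating all sequentially consistent interleavings shows that the outcome r1 = r2 = 0 is impossible under SC, whereas x86-TSO, with its per-core store buffers, does permit it.

```python
# Store-buffering litmus test, enumerated under sequential consistency (SC).
# T1: x := 1 ; r1 := y        T2: y := 1 ; r2 := x
T1 = [("w", "x", None), ("r", "y", "r1")]
T2 = [("w", "y", None), ("r", "x", "r2")]

def interleavings(a, b):
    # All merges of two sequences preserving each thread's program order.
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

outcomes = set()
for schedule in interleavings(T1, T2):
    mem, regs = {"x": 0, "y": 0}, {}
    for kind, var, reg in schedule:
        if kind == "w":
            mem[var] = 1        # writes hit memory immediately under SC
        else:
            regs[reg] = mem[var]
    outcomes.add((regs["r1"], regs["r2"]))

print(sorted(outcomes))   # (0, 0) is absent: SC forbids r1 = r2 = 0
# Under x86-TSO the writes may sit in store buffers past both reads,
# so (0, 0) becomes observable on real hardware.
```

Intuitively, if T1 reads y = 0 then T1's write of x already happened before T2's write of y, so T2 must read x = 1; weak architectures break exactly this reasoning.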


Sumanth Prabhu, TCS Research. Constrained Horn Clauses for Specification Synthesis

Abstract

Constrained Horn Clauses (CHCs) are gaining popularity as a logical intermediate representation for ensuring program safety. A program is guaranteed to be safe if its corresponding system of CHCs has a solution. With multiple frontends available for various programming languages, verification techniques can focus on solving the CHC system to prove the input program is safe. In our work over the last few years, we have used CHCs to synthesize missing program specifications, such as function contracts and preconditions. In this talk, I will discuss the challenges of encoding specification synthesis tasks as CHCs and how our techniques overcome these challenges within the infer-check-weaken framework.
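As a toy illustration (a hypothetical example, not from the talk) of how a safety proof becomes a CHC solving problem, consider the loop `x := 0; while x < 5: x := x + 1; assert x == 5`. Its verification conditions are three Horn clauses over an unknown invariant Inv, and any predicate satisfying all three witnesses safety. The sketch below checks a candidate solution by brute force over a small finite domain; real CHC solvers discharge these clauses symbolically with SMT reasoning.

```python
# Toy CHC encoding of: x := 0; while x < 5: x := x + 1; assert x == 5
# Clauses over an unknown invariant Inv:
#   (1) true             -> Inv(0)           (initiation)
#   (2) Inv(x) & x < 5   -> Inv(x + 1)       (consecution)
#   (3) Inv(x) & x >= 5  -> x == 5           (safety)

def is_solution(inv, domain=range(-10, 11)):
    # Brute-force check of all three clauses over a small finite domain.
    if not inv(0):                                   # clause (1)
        return False
    for x in domain:
        if inv(x) and x < 5 and not inv(x + 1):      # clause (2)
            return False
        if inv(x) and x >= 5 and x != 5:             # clause (3)
            return False
    return True

candidate = lambda x: 0 <= x <= 5    # a synthesized invariant candidate

print(is_solution(candidate))          # True: all clauses satisfied
print(is_solution(lambda x: True))     # False: "true" violates clause (3)
```

The CHC system has a solution exactly when the program is safe, which is what lets a solver's answer transfer back to the original program.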


Sayan Mukherjee, Université Libre de Bruxelles, Belgium. Greybox Learning of Languages Recognizable by Event-Recording Automata

Abstract

Given a (timed or untimed) language over finite words, inferring an automaton that recognizes the target language is an active area of research. Broadly, there are two frameworks for learning various classes of automata -- (i) active learning, where the learner makes queries to a teacher, and (ii) passive learning, where the learner is instead provided with a labelled (positive and negative) set of examples. In this talk, we revisit the active learning of timed languages recognizable by Event-Recording Automata (ERA) -- a determinizable subclass of Timed Automata. Our framework employs a method known as greybox learning, which enables learning an ERA with a minimal number of control states. This approach avoids learning the region automaton associated with the language, in contrast with existing methods. We have implemented our greybox learning algorithm with some heuristics that keep the computational cost low. We will demonstrate how the algorithm works through examples.

This is joint work with Anirban Majumdar and Jean-François Raskin.


Soumyadip Bandyopadhyay, ABB Corporate Research Lab. SamaTulyata4PLC: Behavioural Verification of Software Upgrade and Migration for PLC Code using Petri net based Model

Abstract

Industrial plants rely on Programmable Logic Controllers (PLCs) and Distributed Control Systems (DCSs) for automation. These evolve with market and technology changes, necessitating the migration and upgrading of the control software. Changes made during the migration process can introduce errors into the code, posing a safety risk. Detecting such errors in line with strict industry safety requirements is crucial. We propose Petri net models as a comprehensive representation of the behaviour of control software. These models precisely capture the semantics of the Sequential Function Chart (SFC) and Taylor Control Language (TCL). We have developed a Petri net based equivalence checker to verify behavioural equivalence between the source code and the upgraded/migrated code. Our approach provides a formal and efficient method for verifying control software behaviour, addressing the critical challenges in software upgrade and migration.


Turaga Venkata Hanumanta Prathamesh, Krea University. Formalizing Free Groups in Isabelle/HOL: Nielsen-Schreier and Conjugacy Problem

Abstract

Free groups are fundamental in group theory and have extensive applications in algebra, topology, and geometry. This talk focuses on formalizing two foundational results on free groups using the proof assistant Isabelle/HOL. First, we present a formal proof and a verified decision procedure for the conjugacy problem, resulting in a trusted program. Second, we discuss our formalization of the Nielsen-Schreier theorem, which asserts that any subgroup of a free group is free. This work, a collaboration with Aabid Seeyal Abdul Kharim, Shweta Rajiv, and Rishi Vyas, is the first formalization of the Nielsen-Schreier theorem in Isabelle/HOL, and the first of its combinatorial proof in any proof assistant. It is also the first formalization of the decision procedure for conjugacy in any proof assistant, and forms part of a larger project aimed at formally verified programs for group-theoretic computations.
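The conjugacy decision procedure for free groups is pleasantly elementary, which is what makes it a good target for verification. The sketch below is a plain Python rendering of the classical algorithm (an illustration, not the verified Isabelle/HOL artifact): generators are encoded as positive integers, inverses as their negations, and it uses the standard fact that two words are conjugate iff their cyclic reductions are cyclic rotations of one another.

```python
def free_reduce(w):
    # Cancel adjacent g * g^-1 pairs (inverses encoded as negated integers).
    out = []
    for g in w:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def cyclic_reduce(w):
    # Repeatedly strip matching inverse first/last letters.
    w = list(free_reduce(w))
    while len(w) >= 2 and w[0] == -w[-1]:
        w = w[1:-1]
    return tuple(w)

def conjugate(u, v):
    # u and v are conjugate in a free group iff their cyclic
    # reductions are cyclic rotations of one another.
    u, v = cyclic_reduce(u), cyclic_reduce(v)
    if len(u) != len(v):
        return False
    return any(v == u[i:] + u[:i] for i in range(max(1, len(u))))

# Generators: a = 1, b = 2; inverses: A = -1, B = -2.
a, A, b, B = 1, -1, 2, -2
print(conjugate((a, b), (b, a)))       # True: b a = a^-1 (a b) a
print(conjugate((a, b), (a, B)))       # False
print(conjugate((A, b, a), (b,)))      # True: a^-1 b a is conjugate to b
```

Since each step is a simple structural recursion on words, both termination and correctness are within comfortable reach of a proof assistant.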


Nikhil Balaji, IIT Delhi. Multiplicity equivalence of context-free grammars

Abstract

Language equivalence for context-free grammars is a classical undecidable problem in formal language theory. Two context-free grammars are called multiplicity equivalent iff every word over the common terminal alphabet is generated with the same degree of ambiguity by both. Decidability of the multiplicity equivalence problem is a long-standing open problem in formal language theory; however, many special cases are known to be decidable. In a series of papers by Kuich and Salomaa (1986), and later Raz (1993), decidability for several special classes of grammars was established. In this work, we show a better complexity bound for the problem (membership in the counting hierarchy) by reduction to the simple problem of computing the degree of a polynomial given by an arithmetic circuit. As a corollary, we obtain new complexity bounds on language inclusion of a nondeterministic finite automaton in an unambiguous context-free grammar, and language inclusion of a nondeterministic context-free grammar in an unambiguous finite automaton.

Based on joint work with Lorenzo Clemente, Klara Nosan, Mahsa Shirmohammadi and James Worrell, which appeared in LICS 2023.
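To illustrate what multiplicity equivalence asks (with hypothetical grammars, not examples from the paper), the sketch below counts parse trees with a CYK-style dynamic program for grammars in Chomsky normal form. The right-linear and left-linear grammars each generate every word a^n with ambiguity exactly 1, so they are multiplicity equivalent (here checked only up to a bound, which is all a brute-force check can do); a third grammar generates the same language with Catalan-number ambiguities.

```python
def count_parses(grammar, start, w):
    # CYK-style counting for a grammar in Chomsky normal form:
    # productions are A -> B C (pair) or A -> t (single terminal).
    n = len(w)
    table = {}  # table[(i, j)][A] = number of parse trees of w[i:j] from A
    for i in range(n):
        table[(i, i + 1)] = {A: sum(1 for p in prods if p == (w[i],))
                             for A, prods in grammar.items()}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            table[(i, j)] = {
                A: sum(table[(i, k)].get(B, 0) * table[(k, j)].get(C, 0)
                       for p in prods if len(p) == 2
                       for (B, C) in [p]
                       for k in range(i + 1, j))
                for A, prods in grammar.items()}
    return table[(0, n)].get(start, 0) if n else 0

G1 = {"S": [("A", "S"), ("a",)], "A": [("a",)]}   # right-linear a^+
G2 = {"S": [("S", "A"), ("a",)], "A": [("a",)]}   # left-linear  a^+
amb = {"S": [("S", "S"), ("a",)]}                 # all binary trees over a^+

words = ["a" * n for n in range(1, 7)]
print([count_parses(G1, "S", w) for w in words])   # [1, 1, 1, 1, 1, 1]
print([count_parses(G2, "S", w) for w in words])   # [1, 1, 1, 1, 1, 1]
print([count_parses(amb, "S", w) for w in words])  # [1, 1, 2, 5, 14, 42]
```

The hard question, of course, is deciding equality of these counts for *all* words at once, which is where the reduction to polynomial degree comes in.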


Piyush Kurur, IIT Palakkad. Kleene algebras, their equational theory, and an attempt at formalisation

Abstract

In this talk, I will give an overview of Kleene algebras and Dexter Kozen's equational theory for them. While the soundness and completeness of the Kleene algebra axioms has been known since 1991 [1], a complete formalisation (in Coq) was done only in 2011 by Thomas Braibant and Damien Pous [2], as part of a reflexive tactic for proving equations in Kleene algebra. I will introduce the problem and talk about my attempt at reformalising this work.

This is work in progress and no public code is available so far.

  1. A completeness theorem for Kleene Algebra and the algebra of regular events. Dexter Kozen, Proceedings Sixth Annual IEEE Symposium on Logic in Computer Science, 1991.
  2. Deciding Kleene Algebras in Coq, Thomas Braibant and Damien Pous, Log. Methods Comput. Sci., 2011, volume 8.
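For reference, Kozen's axiomatization from [1] is finitary: a Kleene algebra is an idempotent semiring equipped with a star operation satisfying two unfolding axioms and two Horn-style induction rules (with a ≤ b as an abbreviation for a + b = b):

```latex
\begin{align*}
&\text{(idempotent semiring)} && a+(b+c)=(a+b)+c,\quad a+b=b+a,\quad a+0=a,\quad a+a=a,\\
& && a(bc)=(ab)c,\quad 1a=a1=a,\quad 0a=a0=0,\\
& && a(b+c)=ab+ac,\quad (a+b)c=ac+bc,\\
&\text{(star, unfolding)} && 1+aa^{*}\le a^{*}, \qquad 1+a^{*}a\le a^{*},\\
&\text{(star, induction)} && b+ax\le x \implies a^{*}b\le x, \qquad b+xa\le x \implies ba^{*}\le x,\\
&\text{where } a\le b \iff a+b=b.
\end{align*}
```

Completeness says these axioms derive exactly the equations valid in the algebra of regular events, which is what a reflexive decision procedure can then exploit.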


Sudakshina Dutta, IIT Goa. Provable repair of Deep Neural Networks

Abstract

Deep Neural Networks (DNNs) have grown in popularity over the past decade and are now being used in safety-critical domains such as aircraft collision avoidance. This has motivated a large number of techniques for finding unsafe behavior in DNNs. The speaker will present a technique that tackles the problem of correcting a DNN once unsafe behavior is found. The provable repair problem will be introduced: the problem of repairing a network 𝑁 to construct a new network 𝑁′ that satisfies a given specification. If the safety specification is over a finite set of points, the Provable Point Repair algorithm can find a provably minimal repair satisfying the specification, regardless of the activation functions used.


Alphy George and Shabana AT, NIT Calicut. Verification of Deep Reinforcement Learning Systems

Abstract

In this talk, we explore the safety verification process for deep reinforcement learning (DRL) systems. DRL integrates reinforcement learning (RL) and deep learning principles, creating complex models that require thorough reliability assessments, especially in safety-critical domains. The formal verification of these systems is challenging due to the infinite and continuous nature of their state spaces and the non-transparent operations of deep neural networks. We present an algorithm and its implementation that abstracts states in accordance with the DRL policy, enabling effective safety verification of the abstracted system. The foundational concept for this approach is derived from the paper 'Probabilistic Guarantees for Safe Deep Reinforcement Learning' by Edoardo Bacci and David Parker.

This is an ongoing work in collaboration with Swarup Kumar Mohalik (Ericsson Research) and Deepak D'Souza (IISc, Bangalore).


Nikhil Hegde, IIT Dharwad. Revisiting model checking of parameterized systems with counter abstraction and environment abstraction

Abstract


Diganta Mukhopadhyay, TCS Research. Structural Abstractions of Neural Networks

Abstract

Neural networks are being used extensively in several safety-critical domains, which makes their formal verification extremely important. However, the size of neural networks remains a major hurdle for verification. Structural abstraction attempts to solve this problem by starting with a query involving a large network and reducing it to an over-approximate query involving a smaller abstract network. In this talk, we will survey a wide range of works on neural network abstraction and present the techniques used there. Finally, we will present a unifying framework we have developed that brings together several existing approaches to neural network abstraction.


Ramanujam R, Azim Premji University. Implicitly quantified modal logic for reasoning in large games

Abstract

When the number of players in a game is large, dynamics typically depend on choice distributions (rather than strategy profiles). If the choices a, b, c are made by 1/2, 1/4, 1/4 of the player population respectively, the payoff is (say) r, s, t, with r for those who chose a, s for those who chose b, etc. The players are "anonymous", and the study of improvement dynamics does not refer to specific players. Introducing variables and quantification to reason about such games is natural, but leads to undecidable logics. We consider a variable-free modal fragment with implicit quantification to reason about large games. Usually, logics on games are parameterised by the number of players, and the syntax refers to player identities. In large games, every model comes with its own set of players, offering challenges for decision procedures and axiomatisation.

This work is joint with Ramit Das (Intel Bangalore, India) and Anantha Padmanabha (IIT Madras). The talk is based on our paper that appeared in Synthese 201 (163), 2023.


Rajdeep Baruri, IIT Dharwad. Mission Decomposition in Multi-agent Systems

Abstract

In multi-agent systems, decomposing a mission specification into goals for individual agents enables the agents to work independently. When these goals are achieved by the agents, the fulfillment of the mission by the system is ensured. In this talk, we will consider a scenario in which an LTL mission is decomposed into goals (sequences of actions for individual agents) using an automata-theoretic approach.

The talk is based on a paper by Philipp Schillinger et al. titled "Decomposition of Finite LTL Specifications for Efficient Multi-agent Planning", 2016.


Habeeb P, IISc Bangalore. Approximate Conformance Verification of Deep Neural Networks

Abstract

We consider the problem of approximate conformance checking of deep neural networks. More precisely, given two neural networks and a conformance bound \epsilon, we need to check whether the outputs of the two networks are within \epsilon of each other on the same inputs drawn from a given input set. Our approach reduces the approximate conformance checking problem to a reachability analysis problem using transformations of neural networks. We provide an experimental comparison of \epsilon-conformance checking based on our approach using various reachability analysis tools, as well as other alternative \epsilon-conformance checking algorithms. We illustrate the benefits of our approach and identify reachability analysis tools that are conducive to conformance checking.

This work was published at NASA Formal Methods (NFM) 2024, and is a collaboration with Prof. Pavithra Prabhakar from Kansas State University.
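As a back-of-the-envelope illustration (toy networks with made-up weights; not the approach or tools from the paper), the sketch below grid-samples an input box and measures the worst output deviation between two near-identical ReLU networks. Sampling only under-approximates the true deviation; the point of a reachability-based reduction is to obtain a sound bound over the entire (continuous) input set.

```python
relu = lambda v: [max(0.0, x) for x in v]

def net(w1, b1, w2, b2):
    # Two-layer network: hidden ReLU layer, then a linear output neuron.
    def f(x):
        h = relu([sum(w * xi for w, xi in zip(row, x)) + b
                  for row, b in zip(w1, b1)])
        return sum(w * hi for w, hi in zip(w2, h)) + b2
    return f

# Hypothetical weights: g is f with one hidden weight perturbed by 0.01.
f = net([[1.0, -1.0], [0.5, 0.50]], [0.0, 0.1], [1.0, 1.0], 0.0)
g = net([[1.0, -1.0], [0.5, 0.51]], [0.0, 0.1], [1.0, 1.0], 0.0)

def max_deviation(f, g, lo=-1.0, hi=1.0, steps=21):
    # Grid-sample the box [lo, hi]^2 -- an under-approximation of the
    # worst-case deviation over the full input set.
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return max(abs(f((x1, x2)) - g((x1, x2)))
               for x1 in grid for x2 in grid)

dev = max_deviation(f, g)
print(dev <= 0.05)   # the two networks are 0.05-conformant on the grid
```

Since ReLU is 1-Lipschitz, the 0.01 weight perturbation here bounds the output gap by 0.01 on the unit box, so the 0.05 check passes with room to spare.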


Priyanka Golia, IIT Delhi. Designing Samplers is Easy

Abstract

Given a formula F, the problem of uniform sampling seeks to sample the solutions of F uniformly at random. The computational intractability of uniform sampling has led to the development of several samplers that rely heavily on heuristics and are not accompanied by a theoretical analysis of their distributions. Recently, Chakraborty and Meel (2019) designed the first scalable sampling tester, Barbarik, based on a grey-box sampling technique, for testing whether a sampler's distribution is close to uniform. The availability of Barbarik has the potential to spur the development of samplers: developers can design sampling methods that are accepted by Barbarik even when these samplers are not amenable to a detailed mathematical analysis. Building on the flexibility offered by the SAT solver CryptoMiniSat, we design the sampler CMSGen, which achieves a sweet spot between quality of distribution and runtime performance.
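As a tiny illustration (toy formula and an idealized sampler; not CMSGen or Barbarik themselves) of what "close to uniform" means operationally, the sketch below enumerates the solutions of a small CNF formula, draws samples from an ideal uniform sampler, and estimates the total variation distance from the uniform distribution -- the kind of quantity a tester like Barbarik probes for a black-box sampler.

```python
import itertools
import random
from collections import Counter

random.seed(0)

# A small CNF formula over x1..x3: (x1 | x2) & (~x1 | x3)
cnf = [[1, 2], [-1, 3]]

def satisfies(assign, cnf):
    # assign: dict var -> bool; a literal l is true iff assign matches its sign.
    return all(any(assign[abs(l)] == (l > 0) for l in clause)
               for clause in cnf)

solutions = [a for a in itertools.product([False, True], repeat=3)
             if satisfies({i + 1: v for i, v in enumerate(a)}, cnf)]

# An "ideal" sampler picks a solution uniformly; a real sampler under
# test would be a black box queried in the same way.
n = 20000
counts = Counter(random.choice(solutions) for _ in range(n))
tv = 0.5 * sum(abs(counts[s] / n - 1 / len(solutions)) for s in solutions)
print(len(solutions), tv < 0.05)
```

For an ideal sampler the empirical total variation distance shrinks with the sample count; a heuristic sampler with a skewed distribution would show a distance bounded away from zero.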


Tephilla Prince, IIT Dharwad. Deciding reachability and coverability in Lossy EOS and its implementations

Abstract

Elementary Object Systems (EOSs) are a representative of the nets-within-nets paradigm, where the places of a system Petri net (PN) can host object nets in place of standard black tokens. Recently, EOSs were put forward to naturally capture the notion of break-downs, under the let-it-crash approach, even in the absence of extra domain knowledge: break-downs can affect only the computational units modeled by object nets. A break-down is represented by removing all tokens from the internal marking of an object net, so as to enforce a deadlock. In this setting, one can check whether a property is robust, i.e., whether it holds even in the presence of at most a given number of break-downs (possibly infinitely many). The approach of induced deadlocks is reminiscent of lossy Petri nets, where some tokens (possibly all) may be non-deterministically lost. The partial loss of a marking can be seen as a more fine-grained variant of break-down, where the EOS components partially degrade. However, the decidability of reachability and coverability for EOSs with lossy steps had not been charted yet.

In this talk, I will briefly describe results from our PNSE '24 paper, where we fully chart the decidability status of reachability and coverability under various forms of lossiness: only at the object net level, only at the system net level, or at both. We do this for both EOSs and the fragment of conservative EOSs. Our results show that almost all the studied problems are undecidable. The only decidable cases concern conservative EOSs with any given positive number of losses, and EOSs with arbitrarily many losses at both the system and object net levels. The latter result is especially interesting, since finitely many losses still result in undecidability. To bridge the gap between theory and practice, and to build reusable software, we have implemented a prototype to simulate and analyze Petri nets and Elementary Object Systems from the nets-within-nets paradigm using the temporal ASP solver Telingo. This was presented at CILC '24.