Talks

Abstracts


Dhruv Nivetia, ETH Zurich, An Automata-Theoretic Characterisation of Weighted First-Order Logic

Slides

Abstract

Since the 1970s, with the work of McNaughton, Papert, and Schützenberger, a regular language has been known to be definable in first-order logic if and only if its syntactic monoid is aperiodic. This algebraic characterisation of a fundamental logical fragment was extended to the quantitative case by Droste and Gastin, who deal with polynomially ambiguous weighted automata and a restricted fragment of weighted first-order logic. In the quantitative setting, full weighted first-order logic (without the restriction on quantifier alternation that Droste and Gastin impose) is more powerful than weighted automata, and extensions of the automata with two-way navigation and pebble or nested capabilities have been introduced to capture it. In this work, we characterise the fragment of these extended weighted automata that recognises exactly full weighted first-order logic, under the condition that the automata are polynomially ambiguous. This is joint work with Benjamin Monmege.
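
A small illustration of the quantitative semantics (our sketch, assuming the standard sum quantifier of Droste and Gastin over the semiring of natural numbers with ordinary addition and multiplication): the sentence

    \Sigma x.\, P_a(x)

maps each word w to the number of positions of w carrying the letter a, since the sum quantifier adds up, over all positions x, the value of its body (1 if position x is an a, and 0 otherwise). Weighted formulas thus define functions from words to semiring values rather than languages.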


Alvin George, IISc Bangalore, Verifying Temporal Properties of Control Systems using Abstraction Refinement

Slides

Abstract

A control system consisting of a continuous-state plant and a controller is an infinite-state system. For the case where the controller is a recurrent neural network (RNN), given a temporal specification, our goal is to check whether the system satisfies the property. Our approach to the verification problem first constructs a finite-state abstract system using predicate abstraction, and then uses counterexample-guided abstraction refinement (CEGAR) to verify the abstract system against the LTL property. We check the feasibility of the counterexamples using recurrent sets.
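
At a high level, such a loop has the classical CEGAR shape. The sketch below is ours, and every helper (build_abstraction, model_check, is_feasible, refine_predicates) is a hypothetical placeholder standing in for predicate abstraction, LTL model checking of the abstraction, feasibility checking via recurrent sets, and predicate refinement:

```python
def cegar_verify(system, ltl_property, predicates):
    """Classical CEGAR loop, sketched; every helper called here is a
    hypothetical placeholder, named only for illustration."""
    while True:
        # Predicate abstraction yields a finite-state abstract system.
        abstract = build_abstraction(system, predicates)
        ok, cex = model_check(abstract, ltl_property)   # LTL model checking
        if ok:
            return "property holds"
        # A lasso-shaped abstract counterexample may be spurious; a recurrent
        # set certifies that its loop can actually be executed forever.
        if is_feasible(system, cex):
            return ("property violated", cex)
        # Spurious: learn predicates that rule this counterexample out.
        predicates = predicates | refine_predicates(system, cex)
```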


Pankaj Kumar Kalita, IIT Kanpur, Synthesizing Abstract Transformers

Slides

Abstract

This talk addresses the problem of creating abstract transformers automatically. The method we present automates the construction of static analyzers in a fashion similar to the way yacc automates the construction of parsers. Our method treats the problem as a program-synthesis problem. The user provides specifications of (i) the concrete semantics of a given operation op, (ii) the abstract domain A to be used by the analyzer, and (iii) the semantics of a domain-specific language L in which the abstract transformer is to be expressed. As output, our method creates an abstract transformer for op in abstract domain A, expressed in L (an “L-transformer for op over A”). Moreover, the abstract transformer obtained is a most-precise L-transformer for op over A; that is, there is no other L-transformer for op over A that is strictly more precise. We implemented our method in a tool called AMURTH. We used AMURTH to create sets of replacement abstract transformers for those used in two existing analyzers, and obtained essentially identical performance.
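
To make the terminology concrete, here is a small hand-written example (ours, not an AMURTH output) of a most-precise transformer for the operation op = abs over the interval abstract domain, expressed in Python rather than a DSL L:

```python
from collections import namedtuple

Interval = namedtuple("Interval", ["lo", "hi"])  # abstract domain A: intervals

def abs_transformer(x: Interval) -> Interval:
    """Most-precise interval transformer for op = abs.

    Soundness: for every c in [x.lo, x.hi], abs(c) lies in the result.
    Precision: both result bounds are attained by some concrete value."""
    if x.lo >= 0:               # interval entirely non-negative
        return x
    if x.hi <= 0:               # interval entirely non-positive
        return Interval(-x.hi, -x.lo)
    # Interval straddles zero: minimum 0, maximum of the endpoint magnitudes.
    return Interval(0, max(-x.lo, x.hi))

assert abs_transformer(Interval(-3, 2)) == Interval(0, 3)
```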


Ramchandra Phawade, IIT Dharwad, Decidability results for free choice time Petri nets

Slides

Abstract

For unbounded time Petri nets, most of the classical problems, such as reachability, firability, termination, liveness, and coverability, are undecidable. For the structural subclass of free choice time Petri nets, we show that firability, termination, and liveness are decidable.


Rahul Sharma, Microsoft Research, Large language models meet program synthesis

Slides

Abstract

Large pre-trained language models such as GPT-3, Codex, and Google’s language model are now capable of generating code from natural language specifications of programmer intent. We view these developments with a mixture of optimism and caution. On the optimistic side, such large language models have the potential to improve productivity by providing an automated AI pair programmer for every programmer in the world. On the cautionary side, since these large language models do not understand program semantics, they offer no guarantees about the quality of the suggested code. We present an approach to augment these large language models with post-processing steps based on program analysis and synthesis techniques that understand the syntax and semantics of programs. Further, we show that such techniques can make use of user feedback and improve with usage. We present our experiences from building and evaluating such a tool, Jigsaw, targeted at synthesizing code for the Python Pandas API using multi-modal inputs. Our experience suggests that as these large language models evolve for synthesizing code from intent, Jigsaw has an important role to play in improving the accuracy of such systems.
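
As a schematic illustration of the generate-then-validate idea (a sketch under our own simplifications, not Jigsaw's actual architecture or API; llm_complete is a hypothetical wrapper around a code-generation model):

```python
def synthesize(intent, examples, n_candidates=10):
    """Ask a code model for candidates and keep one that passes all
    input/output examples. `llm_complete` is a hypothetical wrapper
    around a code-generation model, used here only for illustration."""
    for candidate in llm_complete(intent, n=n_candidates):
        try:
            env = {}
            exec(candidate, env)              # candidate must define f(...)
            f = env["f"]
            if all(f(**inp) == out for inp, out in examples):
                return candidate              # semantically validated code
        except Exception:
            continue                          # discard failing candidates
    return None   # no candidate survived; fall back to, e.g., AST-level repair
```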


Sreejith A. V., IIT Goa, Formal methods in machine learning

Slides

Abstract

As intelligent algorithms continue to gain wider adoption, they are beginning to replace humans in certain contexts. Ensuring the reliability and security of these algorithms in high-risk areas such as self-driving cars or unmanned planes/trains is critical. However, conventional testing methods are often insufficient to address the increased level of risk involved. As a result, formal verification of these algorithms becomes necessary. During this introductory talk, we'll explore the challenges of using traditional formal methods to verify deep learning models, and consider some recent developments that offer hope in this rapidly evolving field of research.


Soumyadip Bandyopadhyay, NVidia, Industrial Application of Equivalence checking of programs

Slides

Abstract

Programs are often subjected to significant optimizing and parallelizing transformations based on extensive dependence analysis. Formal validation of such transformations needs modelling paradigms that can vividly capture both control and data dependences in the program. Being value-based, with an inherent scope for capturing parallelism, the untimed coloured Petri net (CPN) models reported in the literature fit the bill well; accordingly, they are likely to be more convenient as intermediate representations (IRs) of both the source and the transformed code for translation validation than strictly sequential, variable-based IRs such as control flow graphs (CFGs). Our main contribution is a novel path-based equivalence checking technique for Petri net based models of programs, for validating several parallelizing transformations.


Ravindra Metta, TRDDC Pune, (Non-)termination checking via tests

Slides

Abstract

Non-termination checking of programs is a theoretically hard and practically useful problem. In this talk, we present our recently published as well as ongoing work on checking for program non-termination as well as termination. Some of this work has been implemented in our tool VeriFuzz 1.4, which won the Termination category of SV-COMP 2023.
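
As a toy illustration of the recurrent-set idea used in such analyses (our example, not one of the benchmark programs): a set of states is recurrent if it is reachable and every loop iteration from it stays inside it with the guard still true, which certifies non-termination.

```python
# The set S = { x : x >= 0 } is recurrent for this loop: it is reachable
# (e.g. x = 0) and x >= 0 implies x + 1 >= 0, so the guard never fails.
x = 0
while x >= 0:
    x = x + 1   # never terminates (Python integers are unbounded)
```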


Abhisekh Sankaran, TRDDC Pune, First order Skolem function synthesis over finite structures

Slides

Abstract

The problem of Skolem function synthesis (SFS) asks if there is an algorithm that, given a prenex first order (FO) formula \varphi and a structure A, synthesizes the interpretations in A of the Skolem functions for the existential variables of \varphi, assuming that computable such interpretations exist. Here synthesis means constructing algorithms that compute the said interpretations. This problem has been studied extensively in the Boolean setting, but a commensurate study for FO is lacking in the literature. A recent paper of Akshay and Chakraborty (MFCS '22) provides a (possibly the first) systematic study of SFS for FO, showing amongst other results that SFS in general has a negative answer, and that if the structure is fixed, then SFS has a positive answer precisely when the elementary diagram of the structure is decidable. The characterization above immediately implies that SFS is in the affirmative for all finite structures. In fact, it can be shown that there is an algorithm that, for any given finite structure A of size n, synthesizes the interpretations in A of the Skolem functions for any given prenex FO formula \varphi, in time n^{O(|\varphi|)}. However, with \varphi and A both seen as part of the input, this running time is exponential. This motivates asking: under what conditions can we get polynomial time algorithms for SFS? This talk explores the stated question and proposes that a finitary adaptation of a classical model-theoretic notion called the Löwenheim-Skolem property might offer some answers.
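
A small example of what is being synthesized, assuming only the standard notion of Skolem functions: for \varphi = \forall x \exists y\, \psi(x,y) and a finite structure A,

    A \models \forall x \exists y\, \psi(x,y)  iff  A \models \forall x\, \psi(x, f(x)) for some f : A \to A,

and a naive synthesis procedure simply tabulates, for each element a of A, some witness b with A \models \psi(a,b). Iterating this over the quantifier prefix is what gives the n^{O(|\varphi|)} bound mentioned above.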


Habeeb P, IISc Bangalore, Verification of Camera-Based Autonomous Systems

Slides

Abstract

Autonomous technologies are becoming increasingly prevalent in the automotive industry due to the numerous benefits they offer, including improved safety and security, enhanced accessibility for the mobility-challenged, and the ability to navigate in hazardous environments. Autonomous vehicles rely on various inputs, including camera images, lidar readings, and infrared signals, which are processed by advanced neural-network-based controllers. These controllers play a critical role in determining the vehicle's trajectory, making it imperative that the closed-loop control system and perception modules are highly reliable and accurate. Testing the autonomous vehicle in real-world terrains or test tracks provides valuable information for debugging and gives an assurance about the system's reliability. However, these evaluations are often expensive and afford very limited coverage. A promising alternative is to reason about a model of the vehicle in simulated (or synthetic) environments. Both safe and unsafe trajectories from a simulated environment are known to transfer well to real environments. Thus, analysis in a simulated environment can give us an effective way to debug and gain confidence in our system. In this work, we consider the problem of verifying the safety of the trajectories of a camera-based autonomous vehicle in a given 3D scene, with a specified initial and target region within the scene. We give a baseline procedure and an abstraction-refinement-based algorithm to verify that all trajectories starting from a given initial region safely reach a specified target region without colliding with obstacles. We also present a prioritization-based falsification procedure whose objective is to find a large number of spatially distinct unsafe trajectories that could be used to re-train/re-design the neural network/controller. Both these techniques are based on the notion of invariant regions, which are volumes within which the camera records the same image of the 3D scene.


Prince Mathew, IIT Goa, One-deterministic-counter automata

Slides

Abstract

We introduce one-deterministic-counter automata (ODCAs), one-counter automata where all runs labeled by a given word exhibit the same counter effect. The talk will specifically focus on weighted ODCAs. We present a novel problem, the co-VS (complement to a vector space) reachability problem. Leveraging properties of matrices and vector spaces, we establish a pseudo-pumping lemma and prove that the co-VS reachability problem is in P. Furthermore, we demonstrate that the lexicographically minimal witness for this problem has a special form. Consequently, it follows that the equivalence problem of ODCAs is also in P. This is joint work with Dr. Sreejith A.V., Dr. Vincent Penelle, and Dr. Prakash Saivasan.


Saina Sunny, IIT Goa, Deciding edit-distance between transducers

Slides

Abstract

Given a metric over words, define the distance between two transducers with identical domain as the supremum of the distances of their respective outputs over all inputs. This is a useful generalisation for comparing transducers that goes beyond the equivalence problem. Two transducers are said to be close (resp. k-close) if their distance is bounded (resp. at most k). We will discuss the decidability of the problem that asks whether two given transducers are close (resp. k-close), for common edit distances. This is joint work with C. Aiswarya (CMI) and Amaldev Manuel (IIT Goa).
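
For concreteness, the word metric in the most common case is Levenshtein edit distance, computable by the classical dynamic program below (a textbook routine, included only as background; it is not the transducer-distance algorithm of the talk):

```python
def edit_distance(u: str, v: str) -> int:
    """Levenshtein distance: minimum insertions, deletions, substitutions."""
    m, n = len(u), len(v)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of u[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of v[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if u[i - 1] == v[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

assert edit_distance("kitten", "sitting") == 3
```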


Madhavan Mukund, CMI, Undecidability of distributed games with causal memory

Slides

Abstract

In this talk, we will look at Hugo Gimbert's recent result on the undecidability of distributed games with causal memory. The paper can be found at https://lmcs.episciences.org/10016.


Deepak D'Souza, IISc Bangalore, Symbolic Fixpoint Algorithms for Logical LTL Games

Slides

Abstract

Two-player games are a fruitful way to represent and reason about several important synthesis tasks like controller synthesis, program repair, and synchronization synthesis. In all these applications, a solution directly corresponds to a winning strategy for one of the players in the induced game. In turn, logically-specified games offer a powerful way to model these tasks for large or infinite-state systems. Many of the techniques proposed for solving such games rely on abstraction-refinement or template-based solutions. In this work, we show how to apply classical fixpoint algorithms in a symbolic logical setting. We implement our techniques in a tool called GenSysLTL and show that they are not only effective in synthesizing valid controllers for a variety of challenging benchmarks from the literature, but often compute maximal winning regions and maximally permissive controllers. This is joint work with Stanly Samuel and K V Raghavan.
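
For intuition, the classical fixpoint underlying such solvers is the attractor computation below, shown in explicit-state form for a reachability objective; a symbolic solver computes an analogous fixpoint over logical formulas rather than enumerated states. The encoding is ours, for illustration only:

```python
def attractor(states, edges, player, targets, owner):
    """Player-`player` attractor: the states from which `player` can force
    reaching `targets`. edges: set of (src, dst); owner[s]: who moves at s."""
    attr = set(targets)
    changed = True
    while changed:                       # iterate up to the least fixpoint
        changed = False
        for s in states:
            if s in attr:
                continue
            succs = {t for (u, t) in edges if u == s}
            if owner[s] == player:
                win = any(t in attr for t in succs)   # one good move suffices
            else:
                win = bool(succs) and all(t in attr for t in succs)
            if win:
                attr.add(s)
                changed = True
    return attr
```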


S P Suresh, CMI, The secrecy problem for protocols with assertions

Slides

Abstract

A fundamental property desired of security protocols is secrecy -- there is no run where a designated secret is leaked to the intruder. The resulting secrecy problem has been studied in various settings in the symbolic analysis of security protocols. It is in general undecidable, but decidable when we search for a leak involving a fixed finite number of sessions in each run. Even though the length of each run is bounded in this setting, the problem is non-trivial due to non-atomic substitutions. In this talk, we indicate how this problem is solved for protocols that only communicate terms, and how the solution is lifted to protocols that communicate assertions as well as terms.


Sanjana Singh, Optimal Stateless Model Checking based on View-equivalence

Slides

Abstract

Partitioning program executions into equivalence classes is central to efficient stateless model checking of concurrent programs. An equivalence relation drives the partitioning, and prior works have investigated various such relations toward minimizing the set of equivalence classes. This work introduces a novel view-equivalence partitioning relation that relies on the values read by memory accesses. Two program executions are view-equivalent if (i) they have the same set of read memory accesses and (ii) the read memory accesses read the same values. For any input program, view-equivalence is at least as coarse as any existing equivalence relation and, thus, induces the least number of equivalence classes. This work also presents a stateless model checker based on view-equivalence, called ViEqui, and shows that ViEqui is sound, complete, and optimal under view-equivalence. ViEqui is implemented for C/C++ input programs on top of the Nidhugg tool. We test its correctness over 16000+ litmus tests and compare its performance against prior stateless model checkers on challenging benchmarks. Our experiments reveal that ViEqui performs over 20x faster in ∼18% of tests and times out on significantly fewer tests than other existing techniques. The definition of view-equivalence is memory-model oblivious. Hence, as future work, one can explore the possibility of using view-equivalence for stateless model checking under weak memory models. Additionally, the notion of view-equivalence may also find applicability to database transactions.
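
To see the relation on a toy program (our illustration; the formal definition handles arbitrary sets of reads), the snippet below enumerates the interleavings of two writer threads and one reader and groups them by the value the single read observes; interleavings in the same group are view-equivalent, so one representative per group suffices:

```python
from itertools import permutations

def run(order):
    """Execute one interleaving of events; return the value the read observes."""
    x, r = 0, None
    for ev in order:
        if ev == "A":
            x = 1          # thread A: x = 1
        elif ev == "B":
            x = 2          # thread B: x = 2
        else:
            r = x          # thread R: r = x (the only read)
    return r

classes = {}
for order in permutations("ABR"):
    classes.setdefault(run(order), []).append("".join(order))

# The 6 interleavings collapse into 3 view-equivalence classes, keyed by
# the value read: r = 0, r = 1, or r = 2.
print(classes)
```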


Rajesh Kumar, BITS Pilani, RESilience-by-DEsign (RESIDE): a decision-theoretic framework for optimal incident response and damage control using statistical model checking

Slides

Abstract

Engineering safety-critical systems such as the steam-boiler system is a challenging task. Inherent to these systems are heterogeneity, concurrency, and sensitivity to the timing of various system processes. With these systems increasingly integrated with public networks, security issues have become prominent. In the literature, while a number of papers focus on optimal preventive countermeasures, a decision-theoretic framework for response and damage control is lacking. In this paper, we supplement state-of-the-art model-driven security risk analysis frameworks with a cyber-resilience mechanism to answer the various questions of interest that are typical during the system run. For example: can I maintain operational continuity by identifying and patching a few essential components? What are the cost implications of operator actions in the event of malicious ingress, such as rebooting a component or shutting down the entire system? We answer these questions using statistical model checking as an engine, with the popular UPPAAL SMC tool. Results of various scenario analyses and policy interventions are presented graphically, showcasing the optimal strategies. We use the industrial case study of the steam-boiler system to demonstrate the different steps of our framework.


Pranshu Gaba, TIFR Mumbai, Stochastic Window Mean-payoff Games

Slides

Abstract

Stochastic two-player games model systems with an environment that is both adversarial and stochastic. The environment is modelled by a player (Player 2) who tries to prevent the system (Player 1) from achieving its objective. We consider window mean-payoff objectives: these are finitary versions of the traditional mean-payoff objective, obtained by replacing the long-run average of the payoffs by average payoffs computed over a finite sliding window. We present complexity bounds and algorithmic solutions for computing strategies of Player 1 to ensure that the objective is satisfied with positive probability, with probability 1, or with probability at least p, regardless of the strategy of Player 2. The solution crucially relies on a reduction to the special case of non-stochastic two-player games with the same objective. We also give a general characterisation of prefix-independent objectives for which this reduction holds. This is joint work with Laurent Doyen and Shibashis Guha.
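
One common formalisation of the (direct) fixed window mean-payoff objective, assuming a window length \lambda \geq 1 and a payoff function w on edges: a play e_1 e_2 e_3 \ldots satisfies the objective iff

    \forall i \geq 1\; \exists j \in \{1, \ldots, \lambda\} : \frac{1}{j} \sum_{k=0}^{j-1} w(e_{i+k}) \geq 0,

that is, from every position some window of length at most \lambda has nonnegative average payoff. Unlike the long-run mean payoff, a violation is witnessed by a finite prefix, which is what makes the objective finitary.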


Sumanth Prabhu S, TRDDC Pune, Weakest Quantified Preconditions for Linear Array Programs

Slides

Abstract

Precondition synthesis is an important problem with many applications in verification and testing. The weakest (or maximal) precondition is desirable in most of these applications, but is difficult to infer, especially for programs with loops and arrays. Since the size of an array is often not constant in a program, weakest preconditions in general need to be expressed as quantified formulas. However, existing techniques have limitations in finding such preconditions. To address these challenges, we present an approach for weakest quantified precondition inference using an infer-check-weaken framework. A precondition is inferred by a novel technique called range abduction; the precondition is then checked for maximality and weakened if required. Range abduction attempts to propagate the given quantified postcondition backward, strengthening or weakening it as needed to infer a precondition. Maximality checking is achieved by non-trivial encodings of the problem. Weakening is done in a syntax-guided fashion. Our approach can effectively infer and prove weakest preconditions on a range of benchmarks and outperforms competing techniques.
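
A small example of the shape of the problem (illustrative only, not drawn from the paper's benchmarks): for the loop below, the weakest precondition of a quantified postcondition is itself quantified.

```python
# Program: increment every cell of A.
#   post:  forall i in [0, n). A[i] > 0
#   wp:    forall i in [0, n). A[i] >= 0   (since A[i]+1 > 0 iff A[i] >= 0)
def inc_all(A):
    for i in range(len(A)):
        A[i] = A[i] + 1

A = [0, 3, 7]          # satisfies the weakest precondition
inc_all(A)
assert all(a > 0 for a in A)   # postcondition holds
```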


Pankaj Kumar Kalita, IIT Kanpur, Program Synthesis beyond Programs

Slides

Abstract

Program synthesis is a technique to synthesize a program automatically from a given specification. Significant research is ongoing in this field to improve program synthesis tools. In this talk, we will discuss the synthesis of artifacts beyond regular programs. In the first problem, we synthesize fences and atomic blocks in concurrent programs under relaxed memory models. The second work focuses on synthesizing semantic actions of attribute grammars. The last work is about synthesizing abstract transformers for an abstract domain, to automate abstract interpretation.


Srivathsan B, CMI, Suffix-reading automata and their synthesis problem

Slides

Abstract

We present deterministic suffix-reading automata (DSA), a new automaton model over finite words. Edges of these automata are labelled with strings. A transition from a state is enabled when, at that state, the automaton reads a word with a suffix that matches its label. DSAs recognize regular languages, but have a more succinct representation than Deterministic Finite Automata (DFA) for some languages. We look at the problem of synthesizing succinct DSAs from DFAs. We describe an algorithm for this problem, investigate its complexity and discuss challenges in synthesizing a minimal DSA. We present some experiments with an implementation of our algorithm and exhibit examples of specifications for which the generated DSAs are smaller and more readable than the corresponding DFAs.
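
A sketch of one natural reading of the semantics (ours, not the paper's notation: buffer the input at the current state until some outgoing label matches a suffix of it). The two-state DSA below accepts exactly the words containing the factor abc, for which a DFA needs four KMP-style states:

```python
def run_dsa(word, edges, start, accepting):
    """edges: {state: [(label, target), ...]}; take a transition as soon as
    the input read at the current state ends with some outgoing label."""
    state, buf = start, ""
    for ch in word:
        buf += ch
        for label, target in edges.get(state, []):
            if buf.endswith(label):
                state, buf = target, ""   # fire the transition, reset buffer
                break
    return state in accepting

# DSA for "contains abc": a single edge, labelled with a string.
edges = {0: [("abc", 1)]}
assert run_dsa("xxabcy", edges, start=0, accepting={1})
assert not run_dsa("ababab", edges, start=0, accepting={1})
```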