Ryan Williams is a postdoctoral fellow at IBM Almaden Research Center in San Jose, California, supported by the Josef Raviv Memorial Fellowship. Previously he was a member of the Institute for Advanced Study in Princeton, New Jersey. Ryan received his PhD from Carnegie Mellon in 2007, advised by Manuel Blum, and B.A. and M.Eng. degrees from Cornell. His primary research interests are in the theory of algorithms and computational complexity.
web site: www.cs.cmu.edu/~ryanw/
Title of Invited Talk: Connecting SAT Algorithms and Complexity Lower Bounds
Abstract: I will describe prior and current work on connecting the art of finding good satisfiability algorithms with the art of finding obstructions (a.k.a. lower bounds), which are proofs of limitations on what problems can be solved by good algorithms. Surprisingly, we show that if the worst-case O(2^n) time bound for satisfiability can be improved even slightly in some situations, then interesting obstructions can be found in other situations. That is, somewhat "weak" algorithms for solving SAT can be turned into strong obstructions for another problem. These connections have made it possible to prove new complexity lower bounds that had long been conjectured, and they suggest concrete directions for further progress.
Download the 2-page abstract of Ryan's Talk
Koushik Sen is an assistant professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research interests lie in Software Engineering, Programming Languages, and Formal Methods. He is interested in developing software tools and methodologies that improve programmer productivity and software quality. He is best known for his work on directed automated random testing and concolic testing. He received the C.L. and Jane W-S. Liu Award in 2004, the C. W. Gear Outstanding Graduate Award in 2005, and the David J. Kuck Outstanding Ph.D. Thesis Award in 2007 from the UIUC Department of Computer Science. He has also received an NSF CAREER Award in 2008, a Haifa Verification Conference (HVC) Award in 2009, an IFIP TC2 Manfred Paul Award for Excellence in Software: Theory and Practice in 2010, and a Sloan Foundation Fellowship in 2011, and he has won three ACM SIGSOFT Distinguished Paper Awards. He holds a B.Tech. from the Indian Institute of Technology Kanpur, and M.S. and Ph.D. degrees in Computer Science from the University of Illinois at Urbana-Champaign.
web site: www.eecs.berkeley.edu/Faculty/Homepages/ksen.html
Title of Invited Talk: Concolic Testing and Constraint Satisfaction
Abstract: In this talk, I will describe concolic testing, also known as directed automated random testing or dynamic symbolic execution, a technique that paved the way for the development of practical automated test generation tools. Concolic testing improves on classical symbolic execution by performing symbolic execution of a program along a concrete execution path. Specifically, concolic testing executes a program starting with some given or random concrete input. It then gathers symbolic constraints on the inputs at the conditional statements encountered during the execution induced by that concrete input. Finally, a constraint solver is used to infer variants of the concrete input that steer the next execution of the program towards an alternative feasible execution path. This process is repeated systematically or heuristically until all feasible execution paths are explored or a user-defined coverage criterion is met.
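The loop described in the abstract can be sketched in miniature. In this hedged sketch, the "program under test" reports its own branch predicates as it runs, and the "constraint solver" is a brute-force search over small integers; real concolic tools instrument the program automatically and call an SMT solver instead. All names here (`program`, `solve`, `concolic_explore`) are illustrative, not from any actual tool.

```python
def program(x, y):
    """Toy program under test: returns a result and the path condition
    (each branch predicate with the direction taken)."""
    path = []
    if x > 5:
        path.append(('x>5', True))
        if y == x * 2:
            path.append(('y==2x', True))
            result = 'bug'      # the hard-to-reach path
        else:
            path.append(('y==2x', False))
            result = 'deep'
    else:
        path.append(('x>5', False))
        result = 'shallow'
    return result, tuple(path)

# Symbolic meaning of each branch predicate, for the toy "solver".
PREDICATES = {
    'x>5':   lambda x, y: x > 5,
    'y==2x': lambda x, y: y == x * 2,
}

def solve(prefix):
    """Brute-force stand-in for a constraint solver: find (x, y)
    satisfying every predicate in the prefix with its required outcome."""
    for x in range(-10, 11):
        for y in range(-30, 31):
            if all(PREDICATES[p](x, y) == want for p, want in prefix):
                return (x, y)
    return None  # prefix infeasible within the search bounds

def concolic_explore(seed=(0, 0)):
    """The concolic loop: run on a concrete input, collect the path
    condition, then negate each branch (keeping the prefix before it)
    and ask the solver for an input taking the other direction."""
    worklist, seen_paths, results = [seed], set(), {}
    while worklist:
        x, y = worklist.pop()
        result, path = program(x, y)
        if path in seen_paths:
            continue
        seen_paths.add(path)
        results[path] = result
        for i in range(len(path)):
            flipped = list(path[:i]) + [(path[i][0], not path[i][1])]
            inp = solve(flipped)
            if inp is not None:
                worklist.append(inp)
    return results
```

Starting from the arbitrary seed (0, 0), the loop discovers all three feasible paths of this toy program, including the 'bug' path guarded by the constraint y == 2x that random testing would be unlikely to hit.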
Download the 2-page abstract of Koushik's Talk
Topic: Establishing Standards for Empirical Evaluations
Moderator: Allen Van Gelder
- Armin Biere
- Marijn Heule
- Laurent Simon
This will be more like a forum than a panel. Everyone is welcome to speak, and the panelists are NOT supposed to be experts handing down their wisdom.
It is supposed to address standards that would provide guidelines for authors, referees, and program chairs as to what is expected, what is reasonable, etc. This topic is motivated by the observation that there was considerable dissatisfaction with empirical sections during the SAT 2011 reviewing process, and, very likely, a good number of dissatisfied authors as well. This is your chance to voice your opinion about how the whole process can be improved.
The topic is NOT intended to be just "what is a good set of standards?" I am more interested in how it gets implemented productively.
Ideally both authors and reviewers would respect the standards. We would need editors and PC chairs to do some publicity and enforcement.
Has work already been done on this? We should not re-invent the wheel (or an inferior version of the wheel).
A few references are attached here:
- Parberry, I., A Guide for New Referees in Theoretical Computer Science, Information and Computation, 1994. http://www.cse.ucsc.edu/~avg/Sat11panel/parberry-94-IandC.pdf
- Walsh, T., Empirical Methods in AI, AI Magazine, 1998. http://www.cse.ucsc.edu/~avg/Sat11panel/walsh-98-AImag.pdf
- Hey, T., Tansley, S., and Tolle, K., eds., The Fourth Paradigm: Data-Intensive Scientific Discovery, 2009. http://research.microsoft.com/en-us/collaboration/fourthparadigm/