

June 29, 2011: Old pages moved, photos added

June 13, 2011: Additional directions to housing added

June 2, 2011: Lodging at Baits is full

May 3, 2011: Conference schedule added

Apr 15, 2011: Final list of accepted papers added

Apr 4, 2011: Accommodation / registration opened

Feb 17, 2011: MaxSAT Evaluation deadline added

Feb 16, 2011: Paper submission deadline extended (Feb 21st instead of Feb 18th)

Dec 20, 2010: Final CFP added

Sep 24, 2010: Preliminary CFP added

Aug 31, 2010: Web site opens


Workshop proposals
Dec. 17, 2010

Abstract submission
Feb. 11, 2011

Paper submission
Feb. 21, 2011

Author notification
Mar. 20, 2011

Final version
Apr. 8, 2011

June 19th to 22nd, 2011

June 18th and 23rd, 2011

Affiliated Events Main Dates

SAT Competition
Web site
Benchmark submission:
Feb. 13, 2011
Solver submission:
(extended) March 2, 2011

PB Evaluation
Web Site
Solver/benchmark submission:
March 20, 2011

MAX-SAT Evaluation
Web Site
Solver submission:
March 25, 2011

Affiliated Workshops Main Dates

web site
Abstract: April 10th
Papers: April 17th

web site
Abstract: April 22nd
Papers: April 29th

web site
Abstract: April 17th
Papers: April 24th

SAT/SMT Solver Summer School 2011

The SAT 2011 conference takes place just a few days after the SAT/SMT Solver Summer School 2011 (June 12th to June 17th), held at MIT, Cambridge, USA.

Invited Speakers

Ryan Williams

Short Bio: Ryan Williams is a postdoctoral fellow at IBM Almaden Research Center in San Jose, California, supported by the Josef Raviv Memorial Fellowship. Previously he was a member of the Institute for Advanced Study in Princeton, New Jersey. Ryan received his PhD from Carnegie Mellon in 2007, advised by Manuel Blum, and B.A. and M.Eng. degrees from Cornell. His primary research interests are in the theory of algorithms and computational complexity.
web site:

Title of Invited Talk: Connecting SAT Algorithms and Complexity Lower Bounds
Abstract: I will describe prior and current work on connecting the art of finding good satisfiability algorithms with the art of finding obstructions (a.k.a. lower bounds), which are proofs of limitations on what problems can be solved by good algorithms. Surprisingly, we show that if the worst case O(2^n) time bound for satisfiability can be only slightly improved in some situations, then interesting obstructions can be found in other situations. That is, somewhat "weak" algorithms for solving SAT can be turned into strong obstructions for another problem. These connections have made it possible to prove new complexity lower bounds that had long been conjectured, and they suggest concrete directions for further progress.
Download the 2-page abstract of Ryan's Talk
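The O(2^n) worst-case bound mentioned in the abstract refers to the exhaustive-search baseline for satisfiability. As a point of reference, here is a minimal (and deliberately naive) sketch of that baseline for CNF formulas; the clause encoding (signed integers for literals) is an assumption borrowed from the common DIMACS convention, not from the talk itself.

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Exhaustive 2^n search over all assignments to n variables.

    Each clause is a list of nonzero ints: literal k means variable |k|
    must be True if k > 0, False if k < 0 (DIMACS-style encoding)."""
    for bits in product([False, True], repeat=n):
        # An assignment satisfies the formula if every clause has
        # at least one literal that evaluates to True under it.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None  # unsatisfiable

# (x1 OR ~x2) AND (x2 OR x3) AND (~x1 OR ~x3)
cnf = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(cnf, 3))
```

The talk's thesis is that even a modest improvement over this 2^n enumeration, in some restricted settings, can be converted into circuit lower bounds for other problems.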

Koushik Sen

Short Bio: Koushik Sen is an assistant professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research interests lie in Software Engineering, Programming Languages, and Formal Methods. He is interested in developing software tools and methodologies that improve programmer productivity and software quality. He is best known for his work on directed automated random testing and concolic testing. He received the C.L. and Jane W-S. Liu Award in 2004, the C. W. Gear Outstanding Graduate Award in 2005, and the David J. Kuck Outstanding Ph.D. Thesis Award in 2007 from the UIUC Department of Computer Science. He received an NSF CAREER Award in 2008, a Haifa Verification Conference (HVC) Award in 2009, an IFIP TC2 Manfred Paul Award for Excellence in Software: Theory and Practice in 2010, and a Sloan Foundation Fellowship in 2011. He has won three ACM SIGSOFT Distinguished Paper Awards. He holds a B.Tech. from the Indian Institute of Technology, Kanpur, and an M.S. and Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign.
web site:

Title of Invited Talk: Concolic Testing and Constraint Satisfaction
Abstract: In this talk, I will describe concolic testing, also known as directed automated random testing or dynamic symbolic execution, a technique that paved the way for the development of practical automated test generation tools. Concolic testing improves classical symbolic execution by performing symbolic execution of a program along a concrete execution path. Specifically, concolic testing executes a program starting with some given or random concrete input. It then gathers symbolic constraints on inputs at conditional statements during the execution induced by the concrete input. Finally, a constraint solver is used to infer variants of the concrete input to steer the next execution of the program towards an alternative feasible execution path. This process is repeated systematically or heuristically until all feasible execution paths are explored or a user-defined coverage criterion is met.
Download the 2-page abstract of Koushik's Talk
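The run-gather-negate-solve loop the abstract describes can be sketched in a few dozen lines. This is a toy illustration under stated assumptions: real concolic tools (CUTE, jCUTE, CREST, and their successors) instrument the program automatically and call an SMT solver, whereas here the branch conditions are recorded by hand and the "solver" simply enumerates small integers.

```python
def program(x, trace):
    """Instrumented program under test: appends each branch condition
    together with the outcome taken on this concrete execution."""
    if x > 10:
        trace.append(("x > 10", True))
        if x % 2 == 0:
            trace.append(("x % 2 == 0", True))
            return "big-even"
        trace.append(("x % 2 == 0", False))
        return "big-odd"
    trace.append(("x > 10", False))
    return "small"

def solve(path):
    """Stand-in for a constraint solver: brute-force search for an
    integer input satisfying every (condition, outcome) pair in path."""
    for cand in range(-50, 51):
        if all(eval(cond, {"x": cand}) == want for cond, want in path):
            return cand
    return None  # path constraint infeasible (within the search range)

def concolic_explore(seed):
    """The concolic loop: run on a concrete input, collect the path
    constraint, negate branch outcomes to steer later runs down
    alternative feasible paths."""
    seen, worklist, inputs = set(), [seed], []
    while worklist:
        x = worklist.pop()
        trace = []
        result = program(x, trace)
        path = tuple(trace)
        if path in seen:
            continue
        seen.add(path)
        inputs.append((x, result))
        # For each prefix of the path, flip the last branch outcome and
        # ask the solver for an input exercising that new path.
        for i in range(len(trace)):
            flipped = trace[:i] + [(trace[i][0], not trace[i][1])]
            new_input = solve(flipped)
            if new_input is not None:
                worklist.append(new_input)
    return inputs

covered = concolic_explore(seed=0)
print(sorted({result for _, result in covered}))
```

Starting from the single seed input 0, the loop discovers inputs reaching all three return paths, which is exactly the path-coverage behaviour the abstract describes.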

Panel Discussion

Topic: Establishing Standards for Empirical Evaluations
Moderator: Allen Van Gelder
Panel Members:

Some information about the panel at SAT 2011 on the topic "Establishing Standards for Empirical Evaluations":
This will be more like a forum than a panel. Everyone is welcome to speak and the panelists are NOT supposed to be experts handing down their wisdom.
It is supposed to address standards that will provide guidelines for authors, referees, and program chairs as to what is expected, what is reasonable, etc. This topic is motivated by the observation that there was a lot of dissatisfaction about empirical sections during the SAT 2011 reviewing process, and very likely a good number of dissatisfied authors as well. This is the chance to voice your opinion about how the whole process can be improved.
The topic is NOT intended to be just "what is a good set of standards?" I am more interested in how it gets implemented productively.
Ideally both authors and reviewers would respect the standards. We would need editors and PC chairs to do some publicity and enforcement.
Has work already been done on this? We should not re-invent the wheel (or an inferior version of the wheel).

A few references are attached here: