HW 2 -- CSP & Planning Heuristics
This week's Colab notebook can be found here: Colab Notebook (Click Me!)
1) Local search for Constraint Satisfaction Problems§
1.1) §
When there is some kind of information available about the “quality” of a non-satisfying assignment, we can do local search: try to improve that assignment by making a sequence of small improving changes.
Which of the following might make good measures of the quality of an assignment, which would decrease as the assignment improved toward a solution?
- Number of violated constraints
- Sum of the “amount” each constraint is violated (for example, how much two objects interpenetrate, given a collision-free constraint)
- Number of values in the domains of the variables involved in violated constraints
1.2) §
Consider an algorithm in which you iterate these steps until you find a solution:
- Find a violated constraint.
- Make a minimal reassignment of values to variables involved in it, so that it becomes satisfied.
Is it guaranteed to find a satisfying assignment if one exists?
- True
- False
1.3) §
Consider an algorithm in which you iterate these steps until you find a solution:
- Pick a variable at random and a possible assignment to it.
- If changing that variable to have that value decreases the number of constraints that are violated, make the change, otherwise do not.
Is it guaranteed to find a satisfying assignment if one exists?
- True
- False
1.4) §
Consider an algorithm in which you iterate these steps until you find a solution:
- Pick a variable at random and a possible assignment to it.
- If changing that variable to have that value improves the quality of the assignment, make the change.
- Otherwise, accept the change with probability e^{-\delta / T}, where \delta is the change in assignment quality (a small code sketch of this acceptance rule is given below the question).
Is there a way to manage the T parameter so this is guaranteed to find a satisfying assignment if one exists?
- True
- False
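To make the acceptance rule above concrete, here is a minimal sketch (not part of the assignment's code) of a simulated-annealing-style acceptance test, together with one common way of managing T, a geometric cooling schedule. The constants T0 and alpha are illustrative choices, not values from the Colab.

import math
import random

def accept_change(delta, T):
    # delta is the change in assignment quality; since quality decreases as the
    # assignment improves, delta > 0 means the change makes the assignment worse.
    if delta <= 0:
        return True                                  # always accept improving moves
    return random.random() < math.exp(-delta / T)    # sometimes accept worsening moves

# One common way to manage T: start "hot" and cool geometrically, so that
# worsening moves become increasingly unlikely over time.
T0, alpha = 10.0, 0.99
temperatures = [T0 * alpha ** k for k in range(1000)]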
2) Constraint Satisfaction Coding Problems§
2.1) CSP Warmup§
Create a BinaryCSP with the criteria described in the docstring.
For reference, our solution is 4 lines of code.
2.2) Backtracking Search§
Complete this implementation of backtracking search for binary CSPs.
- Run the provided implementation of AC-3 after you have made each assignment.
- Use the least-constraining-value heuristic.
- Use the MRV heuristic.
For reference, our solution is 31 lines of code.
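The Colab provides its own BinaryCSP representation and AC-3 implementation, so the following is only an illustrative sketch of the overall control flow (AC-3 propagation after each assignment, MRV variable ordering, least-constraining-value ordering) using a made-up dict-based CSP encoding, not the notebook's API.

from collections import deque

def ac3(domains, constraints):
    # Prune domains to arc consistency; return None if some domain empties,
    # otherwise return the pruned copy of the domains.
    domains = {v: set(d) for v, d in domains.items()}
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        # Revise: drop values of x that have no supporting value of y.
        removed = {vx for vx in domains[x]
                   if not any(constraints[(x, y)](vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return None
            # Values were removed from x, so re-check every arc ending at x.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains

def backtrack(assignment, domains, constraints):
    if len(assignment) == len(domains):
        return assignment
    # MRV: choose the unassigned variable with the fewest remaining values.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    # LCV: prefer values that rule out the fewest values in neighboring domains.
    def ruled_out(val):
        return sum(1
                   for (x, y), ok in constraints.items() if x == var
                   for vy in domains[y]
                   if not ok(val, vy))
    for val in sorted(domains[var], key=ruled_out):
        trial = dict(domains)
        trial[var] = {val}
        pruned = ac3(trial, constraints)      # propagate after each assignment
        if pruned is not None:
            result = backtrack({**assignment, var: val}, pruned, constraints)
            if result is not None:
                return result
    return None

# Toy usage: two variables that must take different values.
doms = {"X": {1, 2}, "Y": {1, 2}}
cons = {("X", "Y"): lambda a, b: a != b,
        ("Y", "X"): lambda a, b: a != b}
print(backtrack({}, doms, cons))              # e.g. {'X': 1, 'Y': 2}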
2.3) Sudoku Solving§
Use your implementation of backtracking search to solve sudoku puzzles like the following, where 0 marks an empty cell:
sudoku_puzzle = [
[0, 0, 3, 0, 2, 0, 6, 0, 0],
[9, 0, 0, 3, 0, 5, 0, 0, 1],
[0, 0, 1, 8, 0, 6, 4, 0, 0],
[0, 0, 8, 1, 0, 2, 9, 0, 0],
[7, 0, 0, 0, 0, 0, 0, 0, 8],
[0, 0, 6, 7, 0, 8, 2, 0, 0],
[0, 0, 2, 6, 0, 9, 5, 0, 0],
[8, 0, 0, 2, 0, 3, 0, 0, 9],
[0, 0, 5, 0, 1, 0, 3, 0, 0],
]
For reference, our solution is 61 lines of code.
In addition to all of the utilities defined at the top of the Colab notebook, the following functions are available in this question environment: run_backtracking_search. You may not need to use all of them.
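As a starting point, here is a sketch of just the constraint structure of sudoku viewed as a binary CSP (cells as variables, not-equal constraints within each row, column, and 3x3 box). It does not use the Colab's BinaryCSP class, and handling the given clues (restricting a cell's domain to a single value) is left to your implementation.

from itertools import combinations

def sudoku_neq_pairs():
    # Yield each pair of distinct cells (r, c) that must take different values:
    # every pair sharing a row, a column, or a 3x3 box.
    units = []
    units += [[(r, c) for c in range(9)] for r in range(9)]             # rows
    units += [[(r, c) for r in range(9)] for c in range(9)]             # columns
    units += [[(3 * br + dr, 3 * bc + dc) for dr in range(3) for dc in range(3)]
              for br in range(3) for bc in range(3)]                    # boxes
    seen = set()
    for unit in units:
        for a, b in combinations(unit, 2):
            if (a, b) not in seen:
                seen.add((a, b))
                yield a, b

# Each cell's domain is {1..9}, or the single given digit for pre-filled cells.
print(len(list(sudoku_neq_pairs())))   # 810 binary not-equal constraints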
2.4) AC3 and me§
Rerun the solver with and without AC3.
- How long did it take the solver to terminate with AC3 enabled?
  Select one: 1 second / Less than a minute / More than five minutes
- How long did it take the solver to terminate without AC3 enabled?
  Select one: 1 second / Less than a minute / More than five minutes
3) Warming up with Pyperplan§
In this homework, we will be using a Python PDDL planner called pyperplan.
Let's warm up by using pyperplan to solve a given blocks PDDL problem.
3.1) Let's Make a Plan§
Use run_planning to find a plan for the blocks problem defined at the top of the Colab file (BLOCKS_DOMAIN, BLOCKS_PROBLEM).
The run_planning function takes in a PDDL domain string, a PDDL problem string, the name of a search algorithm, and the name of a heuristic (if the search algorithm is informed) or a customized heuristic class. It then uses the Python planning library pyperplan to find a plan.
The plan returned by run_planning is a list of pyperplan Operators. You should not need to manipulate these data structures directly in this homework, but if you are curious about the definition, see here.
The search algorithms available in pyperplan are: astar, wastar, gbf, bfs, ehs, ids, sat. The heuristics available in pyperplan are: blind, hadd, hmax, hsa, hff, lmcut, landmark.
For this question, use the astar search algorithm with the hadd heuristic.
For reference, our solution is 2 line(s) of code.
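For example, a call might look like the following. This is a sketch based only on the argument order described above, shown with gbf and hff just to illustrate the interface (not the combination this question asks for).

# BLOCKS_DOMAIN and BLOCKS_PROBLEM are the PDDL strings from the Colab.
plan = run_planning(BLOCKS_DOMAIN, BLOCKS_PROBLEM, "gbf", "hff")
for op in plan:
    print(op.name)   # each element of the plan is a pyperplan Operator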
3.2) Fill in the Blanks§
You've received PDDL domain and problem strings from your boss and you need to make a plan, pronto! Unfortunately, some of the PDDL is missing.
Here's what you know. What you're trying to model is a newspaper delivery robot. The robot starts out at a "home base" where there are papers that it can pick up. The robot can hold arbitrarily many papers at once. It can then move around to different locations and deliver papers.
Not all locations want a paper -- the goal is to satisfy all the locations that do want a paper.
You also know:
- There are 6 locations in addition to 1 for the homebase. Locations 1, 2, 3, and 4 want paper; locations 5 and 6 do not.
- There are 8 papers at the homebase.
- The robot is initially at the homebase with no papers packed.
Use this description to complete the PDDL domain and problem. You can assume the PDDL planner will find the optimal solution.
If you are running into issues writing or debugging the PDDL you can check out this online PDDL editor, which comes with a built-in planner: editor.planning.domains. To use the editor, you can create two files, one for the domain and one for the problem. You can then click "Solve" at the top to use the built-in planner.
For a general reference on PDDL, check out planning.wiki. Note that the PDDL features supported by pyperplan are very limited: types and constants are supported, but that's about it. If you want to make a domain that involves more advanced features, you can try the built-in planner at editor.planning.domains, or you can use any other PDDL planner of your choosing.
Debugging PDDL can be painful. The online editor at editor.planning.domains is helpful: pay attention to the syntax highlighting and to the line numbers in error messages that result from trying to Solve. To debug, you can also comment out multiple lines of your files by highlighting and using command+/. Common issues to watch out for include:
- A predicate is not defined in the domain file
- A variable name does not start with a question mark
- An illegal character is used (we recommend sticking with alphanumeric characters, dashes, or underscores; and don't start any names with numbers)
- An operator is missing a parameter (in general, the parameters should be exactly the variables that are used anywhere in the operator's preconditions or effects)
- An operator is missing a necessary precondition or effect
- Using negated preconditions, which are not allowed in basic STRIPS
If you get stuck debugging your PDDL file for more than 10-15 min, please reach out and we'll help!
For reference, our solution is 89 line(s) of code.
4) Planning Heuristics§
Let's now compare different search algorithms and heuristics.
Let's consider two of the search algorithms available in pyperplan: astar and gbf.
And the following heuristics: blind, hadd, hmax, hff, lmcut. Of these, blind, hmax, and lmcut are admissible; hadd and hff are not.
Unfortunately the documentation for pyperplan is limited at the moment, but if you are curious to learn more about its internals, the code is open-sourced on this GitHub repo.
4.1) Comparison§
We have given you a set of blocks planning problems of different complexity, BW_BLOCKS_PROBLEM_1 to BW_BLOCKS_PROBLEM_6.
You can use the function test_run_planning found in the Colab to test the different combinations of search algorithm and heuristic on different problems. There is also a helper function in the Colab to plot the results.
If planning takes more than 30 seconds, just kill the process.
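If you prefer to sweep the combinations by hand rather than through test_run_planning, a rough timing loop along these lines works. BW_BLOCKS_DOMAIN is a placeholder name for whatever domain string accompanies these problems in the Colab; check the notebook for the actual variable.

import time

problems = [BW_BLOCKS_PROBLEM_1, BW_BLOCKS_PROBLEM_2]   # extend up to _6 as needed
for search in ["astar", "gbf"]:
    for heuristic in ["blind", "hadd", "hmax", "hff", "lmcut"]:
        for i, problem in enumerate(problems, start=1):
            start = time.perf_counter()
            plan = run_planning(BW_BLOCKS_DOMAIN, problem, search, heuristic)
            elapsed = time.perf_counter() - start
            print(f"{search}-{heuristic}, problem {i}: "
                  f"{len(plan)} steps in {elapsed:.1f}s")
            # Note: this loop has no timeout; interrupt the cell manually if a
            # combination runs past ~30 seconds.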
Which of the following statements are true?
- Problems 1, 2, and 3 are easy (plan in less than 30 seconds) for all the methods.
- Problem 4 is easy (plan in less than 30 seconds) for all the non-blind heuristics.
- Problem 4 is easy (plan in less than 30 seconds) for all the non-blind heuristics, except hmax.
- The blind heuristic is generally hopeless for the bigger problems.
- hadd is generally much better (leads to faster planning) than hmax.
- gbf-lmcut will always find paths of the same length as astar-lmcut (given sufficient time).
- astar-lmcut will always find paths of the same length as astar-hmax (given sufficient time).
- astar-lmcut usually finds paths of the same length as astar-hff (in these problems).
- gbf-hff is a good compromise between planning speed and plan length (in these problems).
4.2) Relax, It's Just Logistics§
Consider the PDDL domain for logistics.
We create a delete-relaxed version of the logistics domain by dropping the negative effects from each operator.
Now we want to compute h^{\textit{ff}} for the following initial state and goal:
I = {truck(t), package(p), location(a), location(b), location(c), location(d), location(e), at(t, a), at(p, c), road(a, b), road(b, c), road(c, d), road(a, e), road(b, a), road(c, b), road(d, c), road(e, a)}
G = {at(t, a), at(p, d)}
- What are the actions in the relaxed plan that we would extract from the RPG for this problem, using the h^{\textit{ff}} algorithm? Enter the plan as a list of strings; ordering is not important.
  Enter a list of strings, e.g. ["drive-truck(t,a,b)", "unload-truck(p,t,d)"].
- What is the value of the heuristic score of the initial state, h^{\textit{ff}}(I)?
  Enter an integer.
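As a reminder of the conventions used here (the standard delete relaxation and FF heuristic; double-check against the lecture's exact definitions), each operator a = \langle \textit{pre}(a), \textit{add}(a), \textit{del}(a) \rangle is relaxed by dropping its delete effects, and h^{\textit{ff}} counts the actions in a relaxed plan extracted from the RPG:

a^{+} = \langle \textit{pre}(a), \textit{add}(a), \emptyset \rangle, \qquad h^{\textit{ff}}(s) = |\pi^{+}|, \text{ where } \pi^{+} \text{ is the relaxed plan extracted backwards from the RPG built from } s \text{ for the goal } G.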
4.3) More Relaxing Fun§
Which of the following statements are true?
- If there exists a plan to achieve a goal with a single fact \{\texttt{g}\}, then that fact \texttt{g} must appear in some level of the relaxed planning graph (RPG).
- If there exists a plan to achieve a goal with a single fact \{\texttt{g}\}, then that fact \texttt{g} must appear in the last level of the RPG.
- The last level of the RPG is a superset of all of the facts that could ever be in the state by taking actions from the initial state.
- The initialization (initial state) of the RPG may affect the number of levels in the RPG, but it will not change the final set of facts in the last level of the RPG.
- If a fact \texttt{g} appears in some level of the RPG, then there exists a (non-relaxed) plan to achieve the goal \{\texttt{g}\}.
5) Implementing h^{\textit{ff}}§
In the code below, one of the arguments is a Pyperplan Task instance.
Please study these definitions; some of the methods may be useful in writing your code. Note that fact names are strings, as are the names of Operator instances.
Also, note that most of the attributes are frozensets; if you want to modify them, you can do set(x) to convert a frozenset x to a set.
5.1) Construct Relaxed Planning Graph (RPG)§
Given a Pyperplan Task instance and an optional state, return (F_t, A_t), two lists as defined in the RPG pseudo-code from lecture. Note that fact names in the documentation of Task are strings such as '(on a b)', and operator instances are instances of the Operator class; the name of an operator instance is a string such as '(unstack a c)'. If state is None, use task.initial_state.
For reference, our solution is 15 line(s) of code.
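Here is one possible shape for this construction. It is a sketch only, assuming the standard pyperplan attributes task.initial_state, task.goals, task.operators, op.preconditions, and op.add_effects; the exact contents and termination condition of F_t and A_t should follow the lecture's pseudo-code, which may differ in detail.

def construct_rpg_sketch(task, state=None):
    # Relaxed reachability: apply operators ignoring delete effects, layer by
    # layer, until the goal is reached or no new facts appear.
    facts = set(task.initial_state if state is None else state)
    F = [frozenset(facts)]          # fact layers F_0, F_1, ...
    A = []                          # action layers A_1, A_2, ...
    while True:
        applicable = [op for op in task.operators if op.preconditions <= facts]
        new_facts = set(facts)
        for op in applicable:
            new_facts |= op.add_effects
        A.append(applicable)
        F.append(frozenset(new_facts))
        if task.goals <= new_facts or new_facts == facts:
            return F, A             # goal reached, or the graph has leveled off
        facts = new_facts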
5.2) Implement h_add§
Given a Pyperplan Task instance and a state (set of facts or None), return h_add(state).
For reference, our solution is 7 line(s) of code.
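For reference, the standard additive-heuristic recursion with unit action costs (check that it matches the lecture's formulation) is:

h_{\textit{add}}(s) = \sum_{g \in G} \Delta(s, g), \qquad
\Delta(s, p) =
\begin{cases}
0 & \text{if } p \in s \\
\min_{a \,:\, p \in \textit{add}(a)} \Bigl[ 1 + \sum_{q \in \textit{pre}(a)} \Delta(s, q) \Bigr] & \text{otherwise,}
\end{cases}

with h_{\textit{add}}(s) = \infty if some goal fact is unreachable. These \Delta values can be computed with a simple fixed-point pass over the operators.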
In addition to all the utilities defined at the top of the Colab notebook, the following functions are available in this question environment: construct_rpg. You may not need to use all of them.
5.3) Implement h_ff (OPTIONAL)§
Given a Pyperplan Task and a state (set of facts or None), return h_ff(state).
For reference, our solution is 26 line(s) of code.
In addition to all the utilities defined at the top of the Colab notebook, the following functions are available in this question environment: construct_rpg. You may not need to use all of them.
6) Feedback§
- How many hours did you spend on this homework?
- Do you have any comments or suggestions (about the problems or the class)?