
HW 1 -- Search & Intro CSP

The questions below are due on Monday September 18, 2023; 11:59:00 PM.
 

This week's Colab notebook can be found here: Colab Notebook.

1) Written Questions§

1.1) Warm-Up: Algorithm Simulations (Optional)§

Consider the search graph shown below. S is the start state, and G is the (only) goal state. All edges are bidirectional. Note that the heuristic values are written in the circles corresponding to the states.

Make the following assumptions:

  • Ties are broken alphabetically in search. For example, a partial plan S \to X \to A would be popped from the frontier before S \to X \to B.
  • All edge costs are positive.

Also note that:

  • The algorithms in AIMA check for redundant paths when a child state is generated by the successor function before it has been added to the frontier.
  • All cost-based algorithms, implemented using best-first-search, terminate when a node corresponding to a goal state is popped from the frontier, not when the node is added to the frontier. Breadth-first search in AIMA terminates when a goal is generated by the successor function, before it is added to the frontier.
Search Graph Image

For each search algorithm below, give the ordering of all the states that the algorithm pops from the frontier. Further, give the path that each algorithm returns. Each answer is a string of state names separated by hyphens, such as "S-A-G".

  1. For breadth-first graph search (ignores costs), enter the states popped (in order).

  2. For breadth-first graph search (ignores costs), enter the path returned.

  3. For uniform-cost graph search, enter the states popped (in order).

  4. For uniform-cost graph search, enter the path returned.

  5. For greedy best-first graph search (ignores costs), enter the states popped (in order).

  6. For greedy best-first graph search (ignores costs), enter the path returned.

  7. For A* graph search, enter the states popped (in order).

  8. For A* graph search, enter the path returned.

1.2) Grids§

Consider a uniform grid that extends infinitely in all directions.

In the grid on the left, each state has four child states (movement on the grid is limited to up, down, left, and right). In the grid on the right, each state has eight children (diagonal moves are allowed).

We start at the grid point marked S. Assume the goal is far away and not shown in the figures.

1.2.1) BFS with NO pruning§

Answer the following questions about breadth-first search with no pruning. Specifically, by no pruning, we mean not even pruning out paths that contain cycles (i.e., include the same state more than once).

  1. In the left grid, how many paths with one edge are ever added to the frontier?

  2. In the left grid, how many paths with 2 edges are ever added to the frontier?

  3. In the right grid, how many paths with one edge are ever added to the frontier?

  4. In the right grid, how many paths with 2 edges are ever added to the frontier?

1.2.2) BFS with a reached test§

Answer the following questions about breadth-first search with a reached test, which never adds a state to the frontier more than once.

  1. In the left grid, how many paths of length 1 are ever added to the frontier?

  2. In the left grid, how many paths of length 2 are ever added to the frontier?

  3. In the right grid, how many paths of length 1 are ever added to the frontier?

  4. In the right grid, how many paths of length 2 are ever added to the frontier?
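These two BFS variants (no pruning vs. a reached test) can also be explored empirically. Below is a minimal sketch of our own, not the notebook's code: it counts how many paths of exactly `depth` edges are ever added to the frontier on an infinite grid, marking states as reached at generation time (matching the AIMA convention noted earlier).

```python
from collections import deque

# Four-action and eight-action move sets for the two grids.
FOUR = [(1, 0), (-1, 0), (0, 1), (0, -1)]
EIGHT = FOUR + [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def count_frontier_adds(moves, depth, reached_test):
    """Count paths of exactly `depth` edges ever added to the BFS frontier."""
    start = (0, 0)
    frontier = deque([(start,)])       # each frontier entry is a path of states
    reached = {start}
    added = 0
    while frontier:
        path = frontier.popleft()
        if len(path) - 1 >= depth:     # no need to extend paths past `depth`
            continue
        x, y = path[-1]
        for dx, dy in moves:
            child = (x + dx, y + dy)
            if reached_test:
                if child in reached:   # with a reached test, skip repeats
                    continue
                reached.add(child)     # mark at generation time
            frontier.append(path + (child,))
            if len(path) == depth:     # the new path has exactly `depth` edges
                added += 1
    return added
```

For example, `count_frontier_adds(FOUR, 2, False)` counts the 2-edge paths added on the left grid with no pruning, and switching the last argument to `True` gives the reached-test count.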

1.2.3) Uniform-Cost (UC)§

In the grid on the left, each state has four child states (movement on the grid is limited to up, down, left, and right). In the one on the right, each state has eight children (diagonal moves are allowed).

In this problem, we will consider uniform-cost search in the above domain. Assume that:

  • The cost of traversing a path is the Euclidean distance of the path.
  • The distance between points in the vertical and horizontal directions is 1 meter.

Answer the following questions about uniform-cost search without a reached test in the eight-action grid (the grid on the right). Your answers should be decimal numbers accurate to within 0.01.

    1. What is the cost of the second node popped?

    2. What is the cost of the sixth node popped?

    3. What is the cost of the tenth node popped?
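To experiment with these questions, one can run a bare uniform-cost loop directly on the eight-action grid. The sketch below is our own (not the notebook's `run_uniform_cost_search`): it keeps no reached set and simply records the cost of each node popped from the frontier.

```python
import heapq
import itertools
import math

def ucs_pop_costs(num_pops):
    """Costs of the first `num_pops` nodes popped by UCS (no reached test)."""
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    tie = itertools.count()                # tie-breaker keeps heap entries comparable
    frontier = [(0.0, next(tie), (0, 0))]  # (path cost, tie, state)
    costs = []
    while frontier and len(costs) < num_pops:
        cost, _, (x, y) = heapq.heappop(frontier)
        costs.append(cost)
        for dx, dy in moves:
            step = math.hypot(dx, dy)      # Euclidean edge cost: 1 or sqrt(2)
            heapq.heappush(frontier, (cost + step, next(tie), (x + dx, y + dy)))
    return costs
```

Because there is no reached test, repeated states stay in the frontier; the returned list is simply the nondecreasing sequence of popped path costs.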

    1.2.4) A*§

    The Manhattan distance between two states (x_0,y_0) and (x_1,y_1) on the grid is defined as:

    d_\text{Manhattan}\left(\left(x_0,y_0\right), \left(x_1,y_1\right)\right) = |x_0 - x_1| + |y_0 - y_1| .
    Assume that we are using A* search with this distance as the heuristic.

    1. In the left grid, are we guaranteed to find an optimal path?
      Yes
      No

    2. In the right grid, are we guaranteed to find an optimal path?
      Yes
      No

    1.3) Heuristics 1§

    Consider the following graph, where S is the start state and G is the (only) goal state, and the arcs are labeled with their actual costs.

    Heuristics Graph Image

    For each of the two heuristics h_1 and h_2 in the table below, determine whether the heuristic is admissible and/or consistent.

    Mark all that are true.
    h_1 is admissible.
    h_1 is consistent.
    h_2 is admissible.
    h_2 is consistent.

    1.4) Heuristics 2§

    1. Consider a search problem where all edges have cost 1 and the optimal solution has cost C. Let h be a heuristic which is \max(h^{*} - k, 0) where h^{*} is the actual cost to the closest goal and k is a nonnegative constant.

      Mark all that are necessarily true.
      h is admissible.
      h is consistent.

    2. Now consider the same search problem, but with a heuristic h' which is 0 at all states that lie along an optimal path to a goal and h^* elsewhere.

      Mark all that are necessarily true.
      h' is admissible.
      h' is consistent.

    1.5) Moving Day§

    Here is a really big problem to help you think about constructing heuristics. Imagine that you have to do the logistics planning for trucks that drive around and pick up packages from different businesses. You know the sizes of the packages (thankfully, all cubes, but not all of the same dimension) and the complete road map in advance.

    Your possible actions are:

    • to drive from one intersection to another in the city map (cost of this action is the length of the drive),
    • to pick up a package and place it in the truck at a specific position (in x, y, z, relative to a given corner of the cargo compartment) (cost of this action is 1).

    The requirements of this problem are:

    • You can only drive on roads.
    • You cannot unload a package from the truck once you pick it up.
    • You cannot re-arrange packages once you place them in the truck.
    • You cannot move packages through other packages inside the truck when placing the package.
    • When you place a package in the truck it has to be fully supported by the floor of the truck or by a bigger (or equal-sized) package.
    • The dimensions of the truck cargo compartment are given: (W, L, H) and the packages must be fully inside the cargo compartment.
    • All the distances between intersections (and the depot) are greater than or equal to 1.

    Given a list of packages (id, size, location) and a map, your job is to find the shortest path (in driving distance) starting at the depot, that picks up all the packages and returns to the depot.

    1. What information below must be included in the minimal state space for the search?

      Mark all that apply (such that your selection is minimal).
      truck location (an intersection)
      distance traveled to reach truck location (number)
      path traveled to reach truck location (list of intersections)
      uncollected package ids (set of ids)
      uncollected package locations (set of (id, intersection))
      uncollected package sizes (set of (id, size))
      location of the packages in the truck (set of (id, xyz))
      sizes of the packages in the truck (set of (id, size))

    2. Which of the following heuristics are admissible (and not always zero) for this problem class?

      Mark all that are admissible and not always zero.
      h_0: The straight-line distance from the current location to the depot
      h_1: The number of uncollected packages
      h_2: The sum of distances from current location to the locations of the uncollected packages
      h_3: The distance to the closest uncollected package

    3. Which of the following heuristics are admissible (and not always zero) for this problem class?

      Mark all that apply.
      h_0 + h_1
      h_0 + h_2
      h_0 + h_3
      h_1 + h_2
      h_1 + h_3
      h_2 + h_3
      \max(h_0, h_1)
      \max(h_0, h_2)
      \max(h_0, h_3)
      \max(h_1, h_2)
      \max(h_1, h_3)
      \max(h_2, h_3)

    4. If we consider the sub-problem of getting from the current state in the search to the goal and ignore some constraints, we get a relaxation. Which of the following relaxations produce non-trivial admissible heuristics (non-trivial in the sense that it is not always zero) for this problem class?

      Mark all that apply.
      Solve the problem assuming there are no packages to pick up
      Solve the problem ignoring the choice of package locations in the truck, assuming they will fit somewhere.
      Solve the problem ignoring the map constraint, assuming we can go to any location in a straight line.

    2) Search Algorithms §

    In this section you will implement the best-first search framework. To help with your implementation, we have provided various utilities in the Colab notebook. Before starting on the implementations, you should read the provided utilities carefully.

    2.1) Best-first Search§

    Complete an implementation of best-first search, which encompasses A*, GBFS, and UCS. You can assume any heuristics are consistent. You should follow the pseudocode given in lecture closely. In particular, your implementation should prune redundant paths by remembering the reached states.

    For reference, our solution is 55 line(s) of code.
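As a rough guide only (our sketch, not the graded solution, and with a made-up problem interface rather than the notebook's utilities), a best-first search with reached-based pruning can look like this; `priority(g, state)` is the knob that later specializes it to UCS, A*, or GBFS.

```python
import heapq
import itertools

def best_first_search(initial_state, successors, is_goal, priority):
    """Generic best-first graph search.

    successors(s) yields (action, next_state, step_cost) triples;
    priority(g, s) orders the frontier. Returns (path, cost) or None.
    Redundant paths are pruned by remembering the best g per reached state.
    """
    tie = itertools.count()
    frontier = [(priority(0.0, initial_state), next(tie), 0.0,
                 initial_state, [initial_state])]
    reached = {initial_state: 0.0}         # best path cost found so far per state
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):                 # terminate on pop, not on generation
            return path, g
        for _action, child, step in successors(state):
            g2 = g + step
            if child not in reached or g2 < reached[child]:
                reached[child] = g2
                heapq.heappush(frontier, (priority(g2, child), next(tie),
                                          g2, child, path + [child]))
    return None
```

With this shape, uniform-cost search uses `priority=lambda g, s: g`, A* uses `lambda g, s: g + h(s)`, and greedy best-first uses `lambda g, s: h(s)`.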

    2.2) Uniform Cost Search§

    Use your implementation of run_best_first_search to implement uniform cost search.

    For reference, our solution is 3 line(s) of code.

    In addition to all of the utilities defined at the top of the colab notebook, the following functions are available in this question environment: run_best_first_search. You may not need to use all of them.

    2.3) A* Search§

    Use your implementation of run_best_first_search to implement A* search.

    For reference, our solution is 3 line(s) of code.

    In addition to all of the utilities defined at the top of the colab notebook, the following functions are available in this question environment: run_best_first_search. You may not need to use all of them.

    2.4) Greedy Best-First Search§

    Use your implementation of run_best_first_search to implement GBFS.

    For reference, our solution is 3 line(s) of code.

    In addition to all of the utilities defined at the top of the colab notebook, the following functions are available in this question environment: run_best_first_search. You may not need to use all of them.

    3) Moving in Fractals §

    We now introduce you to a family of reward problems --- we will call these fractal problems. Make sure you familiarize yourself with these problems since some of the following questions depend on them!

    Fractal Problem. A fractal problem is a reward problem with a finite horizon H such that:

    • The state space \mathcal{S} is the entire plane \mathbb{R}^2
    • The action at each time step t (assuming the agent takes the first action at step t=1) moves to one of the eight neighboring directions, with length scaled down by 2^{-t}:
      \left\{(dx, dy) \mid (dx, dy) \in \{-2^{-t}, 0, 2^{-t}\}^2 \land (dx, dy) \neq (0, 0) \right\}
    • The initial state is the origin at (0, 0).
    • The entire state space is covered by a reward field. A reward field is a function that maps each point in the state space to a scalar reward: \mathbb{R}^2 \rightarrow \mathbb{R}^+.

    In this problem, we will consider fractal problems with horizon H = 6 (that is, the agent is allowed to step 6 times).
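To make the scaling concrete, here is a small sketch (ours; the notebook's class definitions are authoritative) of the step-t action set and of how far the agent can get within the horizon:

```python
def actions_at_step(t):
    """The eight moves available at step t, each coordinate scaled by 2**-t."""
    s = 2.0 ** -t
    return [(dx, dy) for dx in (-s, 0.0, s) for dy in (-s, 0.0, s)
            if (dx, dy) != (0.0, 0.0)]

def max_reach(horizon):
    """Farthest x (or y) coordinate reachable from the origin in `horizon` steps."""
    return sum(2.0 ** -t for t in range(1, horizon + 1))
```

Since the step sizes halve each time, `max_reach(6)` is 63/64 < 1, so every state reachable within the horizon lies strictly inside the square [-1, 1] x [-1, 1].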

    3.1) Visualize Reward Fields§

    We have defined for you three fractal problems in get_fractal_problems(). Below you can visualize the reward fields corresponding to these problems. We have also provided you with code in the Colab notebook to recreate these visualizations. You are encouraged to inspect the relevant class definitions for these problems to understand them better. You do not need to submit any code for this subsection. Instead, we will ask you questions in the following problems to confirm your understanding.

    Once you are ready, let's confirm that you understand these reward fields:

    1. What does reward-field-1 look most similar to?
      A smoothly changing gradient, where the four corners are equal local maxima.
      A smoothly changing gradient, where the four corners are equal local minima.
      A smoothly changing gradient, where the four corners are local maxima. Further, one corner's value is greater than the other three corners' values.
      A smoothly changing gradient, where the four corners are local minima. Further, one corner's value is less than the other three corners' values.
      Some noise.

    2. What does reward-field-2 look most similar to?
      A smoothly changing gradient, where the four corners are equal local maxima.
      A smoothly changing gradient, where the four corners are equal local minima.
      A smoothly changing gradient, where the four corners are local maxima. Further, one corner's value is greater than the other three corners' values.
      A smoothly changing gradient, where the four corners are local minima. Further, one corner's value is less than the other three corners' values.
      Some noise.

    3. What does reward-field-3 look most similar to?
      A smoothly changing gradient, where the four corners are equal local maxima.
      A smoothly changing gradient, where the four corners are equal local minima.
      A smoothly changing gradient, where the four corners are local maxima. Further, one corner's value is greater than the other three corners' values.
      A smoothly changing gradient, where the four corners are local minima. Further, one corner's value is less than the other three corners' values.
      Some noise.

    4. What is your best guess for the optimal plan for the problem reward-field-2? You may take any guess --- this question is not graded. But we will ask you again later!
      Move towards the upper-right corner.
      Move towards the lower-left corner.
      Move towards any corner.
      Move towards the origin after taking a random initial step.

    3.2) Path-cost Problem from Reward Problem§

    Implement the reduction from a reward problem to a path-cost problem as in lecture.

    For reference, our solution is 31 line(s) of code.

    You should now be able to solve reward problems by using the above reduction and the uniform cost search implementation from the previous section.
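One common version of this reduction (check it against the lecture's, since ours may differ in details): because every plan has the same length H, if R_max upper-bounds the per-step reward, then charging R_max - reward as each step's cost makes all costs nonnegative, and minimizing total cost over length-H plans is equivalent to maximizing cumulative reward. A toy sketch:

```python
def costs_from_rewards(step_rewards, r_max):
    """Map per-step rewards to nonnegative step costs (fixed-horizon plans)."""
    assert all(r <= r_max for r in step_rewards)
    return [r_max - r for r in step_rewards]

# Two plans of the same length: the higher-reward plan gets the lower cost.
plan_a = [0.9, 0.8, 0.7]   # cumulative reward 2.4
plan_b = [0.1, 0.2, 0.3]   # cumulative reward 0.6
cost_a = sum(costs_from_rewards(plan_a, r_max=1.0))
cost_b = sum(costs_from_rewards(plan_b, r_max=1.0))
```

Note that the equivalence relies on all plans having the same length; with variable-length plans the constant offset would bias the comparison.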

    What is the optimal plan for the problem reward-field-2? You should use path_cost_problem_from_reward_problem, run_uniform_cost_search and visualize_plan to help answer this question.
    Move towards the upper-right corner.
    Move towards the lower-left corner.
    Move towards any corner.
    Move towards the origin after taking a random initial step.

    4) Play with MCTS§

    4.1) Monte-carlo Warmup §

    Consider the problem of finding the maximum value in an array of size n without duplicates.

    1. The expected complexity of solving this problem by systematic search (i.e., by enumerating the array) is O(\underline{\quad\quad}).

      Fill in the blank above. (Enter an expression in n. Hint: don't overthink --- it's not a trick question).

    2. Consider solving this problem by sampling uniformly at random with replacement.

    import random, math
    def find_max_random(array):
        best, seen = -math.inf, set()
        while len(seen) < len(array):
            x = random.choice(array)
            if x > best:
                best = x
            seen.add(x)
        return best
    

    Let's think about the expected number of samples required for find_max_random to find the maximum value.

    If we knew in advance what the maximum value was, then we would just draw samples at random until we hit the value we were looking for. The expected number of samples required to hit a particular element of a set of size n is just the expected number of trials required to get heads, when flipping a coin with probability 1/n of coming up heads. According to facts about the geometric distribution, the expected number of trials required before winning is 1 / (1/n) = n.

    But we don't know what the maximum value is, so we can't stop as soon as we have encountered it! So, let's walk through another approach to the problem.

    Along the way, we will leave some blanks for you to fill in.

    Derivation

    To be sure of finding the maximum value, find_max_random has to visit all n elements by uniformly sampling.
    Let t_i be the time at which the algorithm first visits the i-th new index in the array (using 1-based indexing). In other words, t_i is the number of while-loop iterations completed when len(seen) first reaches i. So, for example, if the sequence of (randomly) visited indices is [4, 2, 4, 4, 5, 2, 5, 6, \dots], then t_1 = 1 and [t_2, t_3, t_4] = \underline{ \quad \mathsf{A} \quad }.

    Fill in the blank \mathsf{ A } above. (Enter a Python list)
    Using this notation, the expected number of trials until we've seen all the values is \mathbb{E}[t_n].

    Let d_i be the number of samples the algorithm takes to find the i-th new index after it has found the (i-1)-th new index. That is, d_i = t_i - t_{i-1} for i > 1, and d_1 = 1. So, for example, in the list of visited indices above, d_1 = 1 and [d_2, d_3, d_4] = \underline{ \quad \mathsf{B} \quad }.

    Fill in the blank \mathsf{ B } above. (Enter a Python list)
    In general, t_n = \underline{ \quad\quad \mathsf{C} \quad\quad }.
    Fill in the blank \mathsf{ C } above.
    d_1 + d_2 + \dots + d_{n-1}
    d_1 + d_2 + \dots + d_n
    d_1 + d_2 + \dots + d_{n+1}
    By \underline{ \quad\quad \mathsf{D} \quad\quad }, we have \mathbb{E}[t_n] = \mathbb{E}[\sum_i d_i] = \sum_i \mathbb{E}[d_i].
    Fill in the blank \mathsf{ D } above.
    The expectation of a sum of random variables is the sum of the expectations of the individual random variables
    The expectation of a product of random variables is the product of the expectations of the random variables
    The expectation of any function of random variables is that function of the expectations of the random variables

    Let’s think about \mathbb{E}[d_i] now! Suppose the algorithm has already found i-1 of the values and there are n total possible values. So the probability that the algorithm would draw a new value by sampling uniformly at random for one step is \underline{ \quad \mathsf{E} \quad }.

    Fill in the blank \mathsf{ E } above. (Enter an expression involving n and i and constants)
    Therefore, the expected number of samples for the algorithm to draw until getting the i-th value is: \mathbb{E}[d_i] = \underline{ \quad \mathsf{F} \quad }.
    Fill in the blank \mathsf{ F } above. (Enter an expression involving n and i and constants)
    So, \mathbb{E}[t_n] = \sum_i \underline{ \quad \mathsf{G} \quad }.
    Fill in the blank \mathsf{ G } above. (Enter an expression involving n and i and constants)
    Therefore,

    • when n = 10, \mathbb{E}[t_{10}] = \underline{ \quad \mathsf{H} \quad }.
      Fill in the blank \mathsf{ H } above. (Enter a number accurate to 2 decimal places)
    • when n = 100, \mathbb{E}[t_{100}] = \underline{ \quad \mathsf{I} \quad }.
      Fill in the blank \mathsf{ I } above. (Enter a number accurate to 2 decimal places)

    Conclusion. Under what circumstances does random sampling dominate systematic search in a problem like this? Answer: never!
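If you want to sanity-check your derivation empirically, the simulation below (our sketch, mirroring find_max_random) averages the number of samples drawn until every one of n values has been seen.

```python
import random

def samples_until_all_seen(n, rng):
    """Sample uniformly with replacement until all n values have appeared."""
    seen, samples = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        samples += 1
    return samples

def average_samples(n, trials=2000, seed=0):
    """Monte-Carlo estimate of E[t_n]."""
    rng = random.Random(seed)
    return sum(samples_until_all_seen(n, rng) for _ in range(trials)) / trials
```

Comparing `average_samples(n)` against your closed-form answer for \mathbb{E}[t_n] is a quick check on blanks \mathsf{H} and \mathsf{I}.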

    4.2) Practice with the upper confidence bound (UCB) §

    Consider MCTS on a problem where the initial state s_0 has two actions, a_0 and a_1. The UCB parameter C is 1.4. Suppose the search has completed 8 full iterations:

    • It selected action a_0 3 times, receiving utilities [0.1, 0.7, 0.3].
    • It selected action a_1 5 times, receiving utilities [0.4, 0.4, 0.4, 0.4, 0.4].
    1. On the 9-th iteration, what is the UCB value for action a_0 when expanding from s_0? (Enter a number accurate to 2 decimal places)

    2. On the 9-th iteration, what is the UCB value for action a_1 when expanding from s_0? (Enter a number accurate to 2 decimal places)

    3. Which of the two actions will be selected on the 9-th iteration?
      a_0
      a_1
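Assuming the standard UCB1 form from lecture, \bar{Q}_a + C\sqrt{\ln N / N_a} with N the total number of iterations at the node and N_a the action's visit count, the computation can be sketched as follows (our helper; double-check the exact formula your notebook uses before relying on it):

```python
import math

def ucb(utilities, total_visits, c=1.4):
    """UCB1 score for an action, given its list of observed utilities."""
    n_a = len(utilities)
    mean = sum(utilities) / n_a
    return mean + c * math.sqrt(math.log(total_visits) / n_a)

# Scores for a_0 and a_1 after the 8 iterations described above.
ucb_a0 = ucb([0.1, 0.7, 0.3], total_visits=8)
ucb_a1 = ucb([0.4] * 5, total_visits=8)
```

The action with the larger score is the one selected on the next iteration.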

    4.3) MCTS vs. UCS§

    We have provided you an implementation of MCTS for reward problems in the Colab notebook. You should make sure that you can run our implementation on the reward problems we have defined for you. You are also encouraged to look into our code to see how our MCTS is implemented. You do not need to submit any code for this question. Instead, we will ask you some questions to confirm your understanding.

    4.3.1) Which will work better?§

    For each of the problems defined in get_fractal_problems, what is your best guess of whether MCTS or UCS (uniform cost search) will work better? For concreteness, by "work better" we mean requiring fewer problem.step invocations, in expectation, to find an optimal plan.

    1. What is your guess of which one will work better for reward-field-1? You may take any guess --- this question is not graded.
      MCTS
      UCS
    2. What is your guess of which one will work better for reward-field-2? You may take any guess --- this question is not graded.
      MCTS
      UCS
    3. What is your guess of which one will work better for reward-field-3? You may take any guess --- this question is not graded.
      MCTS
      UCS

    4.3.2) Compare the performance§

    Now, for each of the problems defined in get_fractal_problems, let us compare the performances of MCTS vs. UCS empirically by running run_mcts_search and run_uniform_cost_search. In particular, you should:

    • Fix step_budget for both algorithms to 2500. Set iteration_budget for MCTS to infinity.
    • Run MCTS 20 times and record the average cumulative reward.
    • Run UCS once:
      • If it fails (due to running out of step budget), record the cumulative reward as 0.
      • If it succeeds, record the obtained cumulative reward. Hint: you might need to recover rewards from path costs.
    • Repeat the above for all three problems in get_fractal_problems.
    1. What is the average cumulative reward obtained by MCTS in reward-field-1?

    2. What is the cumulative reward obtained by UCS in reward-field-1?

    3. What is the average cumulative reward obtained by MCTS in reward-field-2?

    4. What is the cumulative reward obtained by UCS in reward-field-2?

    5. What is the average cumulative reward obtained by MCTS in reward-field-3?

    6. What is the cumulative reward obtained by UCS in reward-field-3?

    7. MCTS tends to be a better choice than UCS when: (choose all that apply)
      There is a single high-valued state and everything else has value 0.
      The value of the first part of a path gives you information about the value of its completions.
      You would rather have an "okay" answer soon than the optimal answer after a longer period of time.

    4.4) Adjusting C in MCTS §

    4.4.1) Choosing C§

    Consider the following scenarios:

    • An early trajectory has a terrible outcome and the algorithm never tries its initial action again.
    • The algorithm behaves very similarly to a systematic enumeration of all possible paths.
    • The algorithm behaves very similarly to a systematic enumeration of all possible path prefixes, with each followed by a random completion.

    For each of the scenarios, indicate whether:

    • It is likely to happen when C is too high.
    • It is likely to happen when C is too low.
    • It is unlikely to happen in MCTS.
    1. An early trajectory has a terrible outcome and we never try its initial action again.
      It is likely to happen when C is too high.
      It is likely to happen when C is too low.
      It is unlikely to happen in MCTS.

    2. The algorithm behaves very similarly to a systematic enumeration of all possible paths.
      It is likely to happen when C is too high.
      It is likely to happen when C is too low.
      It is unlikely to happen in MCTS.

    3. The algorithm behaves very similarly to a systematic enumeration of all possible path prefixes, with each followed by a random completion.
      It is likely to happen when C is too high.
      It is likely to happen when C is too low.
      It is unlikely to happen in MCTS.

    5) Constraint Satisfaction Problems§

    5.1) Class scheduling§

    Let's look at the formulation of a class scheduling problem in which we try to pick courses for 4 years at MIT. We're going to say that there are four "slots" each term, and each will be assigned to one course. So, there are 32 variables and the domain of each variable is the set of all courses. A complete assignment to the variables is a class schedule.

    Let's make a couple of assumptions:

    • Assume that you will take exactly 4 courses each term.

    • Assume that pre-requisites are a list of courses that must all be taken in previous terms. So, there are no "or" choices or co-requisites.

    The term slots are written as, e.g. (Fall, 1, 1) (Fall, 1, 2) ... (Spring, 4, 4), starting with the slots for the fall term of the first year and going through the slots for the spring term of the fourth year. So (Fall, 1, 2) refers to the second slot of the fall term of the first year.

    Indicate which of the following statements best describe the constraints required in this formulation. This means that you might mark some statements as false that are really, strictly speaking, true simply because there is a better choice on the list.
    A constraint between each pair of variables representing term slots for a given term, e.g. (Fall, 1, 1) (Fall, 1, 2) etc., which checks that the classes assigned to those slots do not have conflicting times.
    A constraint between each pair of variables representing term slots for a given term, e.g. (Fall, 1, 1) (Fall, 1, 2) etc., which checks that the number of courses taken that term is no more than four.
    A constraint between every pair of variables representing term slots for a given term, that ensures that the same course is not scheduled to be taken more than once.
    A constraint between every pair of variables representing all the term slots that ensures that the same course is not scheduled to be taken more than once.
    The domain of each variable is limited to the courses offered that term.
    The domain of each variable is limited to those courses with non-conflicting times.
    A constraint between every pair of variables that checks that if the course assigned to one variable is a pre-requisite of the other, then the pre-requisite course is scheduled in the earlier term.
    A constraint between every pair of variables in different terms that checks that if the course assigned to one variable is a pre-requisite of the other, then the pre-requisite course is scheduled in the earlier term.

    5.2) Map coloring§

    Let's look at a simple map coloring problem and explore the behavior of constraint propagation (AC-3), backtracking and forward checking in this context. Here is the constraint graph of the CSP:

    Map-coloring Image

    The letters indicate the initial domain of each variable. Each edge in the graph represents a constraint that the two variables it connects must have different colors.

    5.2.1) Arc consistency§

    Enter the values in each of the indicated variable domains after any changes required to achieve arc consistency for just that arc. Then, assume that the following arcs are done sequentially, with the effects on the domains propagating to the next question.

    Below we give you a sequence of six arcs, each defined by a pair of variables, e.g. V_1 - V_2. For each arc, you should enter the domains D_1 and D_2 after dropping any entries necessary to achieve arc consistency on that arc.

    1. V_1 - V_2
      D_1 =
      D_2 =
    2. V_1 - V_3
      D_1 =
      D_3 =
    3. V_2 - V_4
      D_2 =
      D_4 =
    4. V_3 - V_4
      D_3 =
      D_4 =
    5. V_1 - V_2
      D_1 =
      D_2 =
    6. V_1 - V_3
      D_1 =
      D_3 =
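For reference, making a single arc V_i - V_j consistent under a different-colors constraint is just the revise step below (a generic sketch; apply it by hand with the actual domains from the figure):

```python
def revise(d_i, d_j):
    """Return d_i with every value lacking a differing support in d_j removed."""
    return {x for x in d_i if any(y != x for y in d_j)}
```

For example, `revise({'R', 'G'}, {'R'})` drops 'R', since the singleton neighbor domain {'R'} offers it no support; making the arc consistent in the other direction is a separate `revise(d_j, d_i)` call.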

    5.2.2) Backtracking§

    Assume we use pure backtracking to search for a solution to this problem. We use a fixed variable ordering in the search (V_1, V_2, V_3, V_4) and the values are considered in the order shown in the picture above. Then, show the order in which assignments for each variable are considered by backtracking (this is analogous to the order in which nodes are expanded by depth-first search). Show the variable assignment even if it is immediately found to be inconsistent upon testing. Assume that the search continues even after finding a valid solution.

    Show a total of 10 variable assignments. We have split them into two segments so that you can check your partial answers.

    Enter a Python list of the first 5 assignments. Each assignment is a string consisting of a number from 1 to 4 (indicating the variable) followed by a letter, drawn from 'R', 'G', 'B', e.g. ['1R', '2G',...].

    Enter a Python list of the next 5 assignments. Each assignment is a string consisting of a number from 1 to 4 (indicating the variable) followed by a letter, drawn from 'R', 'G', 'B', e.g. ['1R', '2G',...].
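The bookkeeping this question asks for can be reproduced mechanically. The sketch below (ours, run on a hypothetical toy CSP, since the real domains are in the figure) records every attempted assignment for fixed-order pure backtracking, continuing past solutions as instructed:

```python
def backtrack_order(domains, conflicts):
    """List every assignment tried, as '1R'-style strings.

    domains: one ordered value list per variable (fixed variable ordering).
    conflicts: set of frozensets {(var, val), (var, val)} that may not co-occur.
    """
    tried = []

    def consistent(assignment, var, val):
        return all(frozenset({(v, a), (var, val)}) not in conflicts
                   for v, a in enumerate(assignment))

    def recurse(assignment):
        if len(assignment) == len(domains):
            return                      # solution found; keep searching
        var = len(assignment)
        for val in domains[var]:
            tried.append(f"{var + 1}{val}")
            if consistent(assignment, var, val):
                recurse(assignment + [val])

    recurse([])
    return tried

# Toy example: V_1 and V_2 must differ; domains are [R, G] and [R].
order = backtrack_order([['R', 'G'], ['R']],
                        {frozenset({(0, 'R'), (1, 'R')})})
```

Note that an assignment is recorded even when it is immediately found inconsistent, matching the convention in the question.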

    5.2.3) Backtracking with forward-checking§

    Repeat assuming we use backtracking with forward checking to search for a solution to this problem. We use the same variable ordering and value ordering as before. Show the order in which assignments are considered by BT-FC. Whenever propagating after an assignment causes a domain to become empty, the search backs up. Assume that the search continues even after finding a valid solution.

    Show a total of 10 assignments. We have split them into two segments so that you can check your partial answers.

    Enter a Python list of the first 5 assignments. Each assignment is a string of a number from 1 to 4 (indicating the variable) followed by a letter, drawn from 'R', 'G', 'B', e.g. ['1R', '2G',...].

    Enter a Python list of the next 5 assignments. Each assignment is a string of a number from 1 to 4 (indicating the variable) followed by a letter, drawn from 'R', 'G', 'B', e.g. ['1R', '2G',...].

    6) Feedback§

    1. How many hours did you spend on this homework?

    2. Do you have any comments or suggestions (about the problems or the class)?