State Space Search

Problem space (state space):
     States
     Transitions between states (operators for changing states)

Problem instance:
     A problem space
     A start state (initial state)
     A goal state

Example: Chess.
     States: board configurations
     Operators: legal moves (which map between board configurations)
     Initial state: the initial board configuration
     Goals: winning board configurations
---------------------------------------------
Graph Representation

     Nodes represent states
     Arcs represent operator applications

Then, problem solving = path finding:
     Find a path through the graph from an initial state to a goal state.
--------------------------------
*Actually, a goal is either
     a state with particular properties
         (e.g., a winning board configuration)
     a path to a state, where the path has particular properties
         (e.g., in the traveling salesman problem, the shortest path; see later)
-------------------------------------
More Examples

Rubik's cube:
     States: cube configurations
     Operator: a physical twist
     Initial state: some configuration of the cube
     Goal: a state in which each side is all one color
------
Traveling salesman: a salesman has to visit each of n cities and then
return home.  Find the shortest path.
     States: cities
     Initial state: the home city
     Operator: travel the distance to a new city
     Goal: a complete circuit with minimum cost (a path, not a state)

Blocks world:
     States: configurations of blocks
     Operators: put x on the table; put x on y
     Initial and goal states: configurations of the blocks
----------------------------------
*Note about the search space: search programs typically generate only
those parts of the state space that they decide to explore.

**Russell & Norvig, Chapter 3 slides, 4-5 and 11-62

=================================================
See the tree* files; lisp programs you can run.
=================================================

Another Lisp example: A grammar for NLP
-----------------------------------------

Problem: generate random English sentences, according to a grammar for a
tiny subset of English.  A simple example of common NLP problems.

The Grammar
-------
Sentence   --> NounPhrase VerbPhrase
NounPhrase --> Det Noun
VerbPhrase --> Verb NounPhrase
Det        --> the | a
Noun       --> cat | dog | mouse | table | garage
Verb       --> saw | chased

Example sentence: ``a mouse chased the dog''

A straightforward solution: represent each grammar rule by a Lisp function.

(defun Sentence ()
  (append (NounPhrase) (VerbPhrase)))

(defun NounPhrase ()
  (append (Det) (Noun)))

(defun VerbPhrase ()
  (append (Verb) (NounPhrase)))

(defun Det ()
  (oneOf '(the a)))

(defun Noun ()
  (oneOf '(cat dog mouse table garage)))

(defun Verb ()
  (oneOf '(chased saw)))

Start by calling Sentence:

> (Sentence)

oneOf chooses an option at random (written below).

We haven't written it yet, but looking at the code (in particular, the
``append'' in NounPhrase), what exactly does oneOf return?

Now let's write oneOf.  Some built-in functions we will use:

(elt <list> <n>)   returns the element at position <n> in the list
                   (positions start at 0)
(length <list>)    returns the number of elements in the list
(random <N>)       returns an integer from 0 to N-1; N is an integer

(defun oneOf (List)
  ; Return a random element of List, in a list
  (list (elt List (random (length List)))))
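The answer: oneOf returns its choice wrapped in a one-element list, because
append splices lists (not bare symbols) into one longer list.  A few example
evaluations (the choice is random, so results will vary):

(oneOf '(cat dog mouse))                      ; might return (MOUSE)
(append '(the) '(mouse))                      ; => (THE MOUSE)
(append '(the) '(mouse) '(saw) '(a) '(cat))   ; => (THE MOUSE SAW A CAT)
(append '(the) 'mouse)                        ; => (THE . MOUSE), not a list of words

So if Noun returned a bare symbol instead of a list, append could not splice
the words into a flat sentence.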
A typescript.  (Note: alias mylisp 'clim2xm_composer' is in my .cshrc file.)
-------------
Script started on Fri Aug 29 22:48:04 1997
cream:2:1> mylisp
Allegro CL 4.2 [SPARC; R1] (2/12/96 9:25)
Copyright (C) 1985-1993, Franz Inc., Berkeley, CA, USA.  All Rights Reserved.
;; Optimization settings: safety 1, space 1, speed 1, debug 2
;; For a complete description of all compiler switches given the current
;; optimization settings evaluate (EXPLAIN-COMPILER-SETTINGS).
USER(1): (load "temp.lisp")
; Loading /home/ch1/wiebe/temp.lisp.
T
USER(2): (Sentence)
(A CAT CHASED THE GARAGE)
USER(3): (Sentence)
(THE TABLE CHASED A GARAGE)
USER(4): (Sentence)
(THE GARAGE SAW THE GARAGE)
USER(5): (NounPhrase)
(THE DOG)
USER(6): (NounPhrase)
(THE GARAGE)
USER(7): (VerbPhrase)
(SAW A GARAGE)
USER(8): (Det)
(A)
USER(9): (Det)
(THE)
USER(10): (Noun)
(CAT)
USER(11): (trace Sentence Noun)
(NOUN SENTENCE)
USER(12): (Sentence)
 0: (SENTENCE)
   1: (NOUN)
   1: returned (MOUSE)
   1: (NOUN)
   1: returned (TABLE)
 0: returned (THE MOUSE SAW THE TABLE)
(THE MOUSE SAW THE TABLE)
USER(13): (exit)
; Exiting Lisp
cream:2:2> exit
script done on Fri Aug 29 22:51:05 1997
-----------------------------------------------
How good is this solution?
     Harder to read than the original grammar
     Doesn't scale up
------------------------------------
Add: nouns can be modified by
     an indefinite number of adjectives (the green cold silly dog)
     and prepositional phrases (PPs) (the boy in the garage).

NounPhrase --> Det Adj* Noun PP*
Adj*       --> 0 | Adj Adj*
PP*        --> 0 | PP PP*
PP         --> Prep NounPhrase
Adj        --> green | cold | silly
Prep       --> in | at | on | with

0: choice of nothing at all
*: Kleene star (0 or more occurrences); here it is just part of the name,
   not an operator

Example sentence: The silly green dog on the table in the garage saw a cat.
-------------------------------------------------
Our lisp functions get ugly:

(defun Adj* ()
  ; Need to return what for 0?
  ;   nil:  (append '(a b) nil) --> (a b)
  ; So, we want to return either nil or (append (Adj) (Adj*)).
  ; How can we decide whether to return nil?
  (cond ((equal (random 2) 1) nil)
        (t (append (Adj) (Adj*)))))

; In lisp, if-then-else conds can be written:
;   (if <test> <then> <else>)

(if (equal (random 2) 1)
    nil
    (append (Adj) (Adj*)))

; Here is the NounPhrase function that would call Adj*:

(defun NounPhrase ()
  (append (Det) (Adj*) (Noun) (PP*)))

-----------------------------------------
For the purpose of understanding Lisp: would the following version of Adj*
have worked?

(defun Adj* ()
  (oneOf '(nil (append (Adj) (Adj*)))))

Nope: it returns either ``nil'' or the unevaluated list
``(append (Adj) (Adj*))'' -- the quoted expression is data, so it is never
called.

OK, so would the following work?

(defun Adj* ()
  (oneOf (list nil (append (Adj) (Adj*)))))

Nope: infinite recursion -- ``list'' evaluates its arguments, so (Adj*) is
called again before oneOf ever gets to choose.
-----------------------------
What's wrong with the solution?
     As the grammar gets more complex, the functions will too.
     Why?  The knowledge is mixed in with the control structure.

AI problems involve lots of knowledge.
     Need to add and delete knowledge at will
     Need to be able to reuse existing knowledge

A data driven approach: generate.lisp (sketched below)
------------------------------
Generation as state-space search?
     States?  Sentence, NounPhrase, etc.
     Initial state?  Sentence
     Operators (state transitions)?  the grammar rules
     The ``successor'' function?  rewrites
     goalp?  Is the state a terminal of the grammar?

The state space is an ``and-or'' tree.
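To make the data-driven idea concrete, here is a minimal sketch in the spirit
of generate.lisp (the names *grammar*, random-elt, and generate are
illustrative, and the course's generate.lisp may differ; ``rewrites'' is the
successor function named above).  The grammar rules become a data structure,
and one small interpreter rewrites a category into one of its right-hand
sides:

(defparameter *grammar*
  '((Sentence   --> (NounPhrase VerbPhrase))
    (NounPhrase --> (Det Noun))
    (VerbPhrase --> (Verb NounPhrase))
    (Det        --> (the) (a))
    (Noun       --> (cat) (dog) (mouse) (table) (garage))
    (Verb       --> (saw) (chased))))

(defun rewrites (category)
  ; All right-hand sides for category, or nil if it is a terminal word.
  (rest (rest (assoc category *grammar*))))

(defun random-elt (choices)
  ; Like oneOf, but returns the chosen element itself, not wrapped in a list.
  (elt choices (random (length choices))))

(defun generate (phrase)
  ; Generate a random list of words from phrase, which may be a list of
  ; categories/words, a single category, or a single word.
  (cond ((listp phrase)
         ; a sequence of phrases: generate each part and splice the results
         (mapcan #'generate phrase))
        ((rewrites phrase)
         ; a category: pick one of its right-hand sides at random and rewrite
         (generate (random-elt (rewrites phrase))))
        (t
         ; a terminal word: return it in a list, as oneOf does
         (list phrase))))

(generate 'Sentence) behaves like the (Sentence) call in the transcript, e.g.
it might return (THE MOUSE CHASED A GARAGE).  The point: to add adjectives
and PPs we only add rules to *grammar*; generate itself does not change.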
======================
Comparison of Searches
======================

depth-first search better than Iterative Deepening?
     -- finite state space with goals at the leaves (grammar)
        (finite: don't need ID's termination guarantee)
        (leaves: why repeatedly search the shallow part?)

breadth-first search better than Iterative Deepening?
     -- small branching factor (space costs)
     -- goal nodes expected at a reasonably shallow depth (space costs)
     -- operators are expensive -- ??
        (if the operators are expensive, regenerating the shallow parts of
        the tree might be too costly)
        E.g.: representing states of the world; operators are actions;
        ``what changes?'' can be hard to infer

======================
Searching trees/graphs
======================

General point about searching; text, p. 82.

So far, we have considered only trees.  But often state spaces are graphs,
not trees:
     they contain cycles
     a node may have more than one parent
     (tic-tac-toe?  Rubik's cube?)

Simple example (in logical inference):
     States: P, Q
     Operators: P --> Q
                Q --> P

You can search this as if it were a tree, in the sense that you can consider
the graph to be a tree that has duplicate nodes.
[recall that the state space is being generated as you go along]

If you consider the same nodes more than once:
     infinite loop?  yes -- depth-first search
                     no  -- breadth-first search
     extra work (applying operators) either way

To avoid considering nodes more than once, we can keep a list of the nodes
already visited (on a list called ``closed'').  When you generate successors
during search, if you have seen a successor before, do not store it on
``open'' (that way, you will not search that sub-graph yet again).

Expensive:
     space
     time -- have we visited this node before?  (a search through ``closed'')

Often, we are searching for a good path to the goal.  Suppose we encounter a
node we saw before during search.  What would we like to do?  Save the best
path we found so far to that node.  This leads us to saving paths to nodes.
[getting very space expensive!]

For a particular problem, we need to weigh:
     -- Do I need to save the paths?  Will any path do, or do I need a good
        path?
     -- Can I afford the time to regenerate nodes?  Can I search the graph as
        if it were a tree?
     -- If states are big and the goal is not usually in a shallow part of
        the tree, there are space problems; what's the best thing to do?
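As a concrete version of this bookkeeping, here is a minimal breadth-first
search sketch that keeps a closed list.  The goalp and successors arguments
correspond to the goalp and ``successor'' function above and are assumed to
be supplied by the particular problem; the tree* files may organize this
differently.

(defun bfs (start goalp successors)
  ; Breadth-first search that never considers a state twice.
  ; open holds states waiting to be expanded; closed holds states already seen.
  (let ((open (list start))
        (closed nil))
    (loop while open do
      (let ((state (pop open)))
        (when (funcall goalp state)
          (return-from bfs state))          ; found a goal: return it
        (push state closed)
        (dolist (s (funcall successors state))
          ; the extra work: a search through closed (and open) for each successor
          (unless (or (member s closed :test #'equal)
                      (member s open :test #'equal))
            ; not seen before: store it at the back of open (FIFO = breadth-first)
            (setf open (append open (list s)))))))
    nil))                                   ; open exhausted: no goal found

This version returns only the goal state.  To recover a path, each entry on
open would also have to carry the path (or a pointer to its parent) that
produced it -- exactly the extra space cost weighed above.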
Forward vs. Backward reasoning/searching
========================================

You can search forward or backward in the same state space.  The same state
space, but different nodes may be searched (in different orders).

Forward:  start --> goal
Backward: goal --> start

Confirm or deny: ``I am a descendant of T. Jefferson''
     Start: TJ
     Goal: me
     Operators: child of

Simplifying assumptions:
     TJ born 250 yrs. ago
     A generation is 25 years
     Each person has three children

                TJ
              /  |  \
            k1   k2   k3
             .    .    .
             .    .    .
                  me

The path from TJ to me is (at least) 10 generations: 250 years / 25 years per
generation = 10.  Each level of the tree is a generation.

Work from the goal back to the start: 2^10 = 1,024 nodes examined
Work from the start to the goal:      3^10 = 59,049 nodes examined

This is a property of the *problem*.  Exhaustive search either way is still
exponential, but fewer nodes are examined in one direction than the other.

How about planning?
     start state: I'm at home.
     goal state: I'm at the airport with my baggage, tickets, etc.
     operators: all the various actions I could do.

start --> goal
     At home I could: watch TV, wash my socks, fix the table, clean out the
     fridge, read a book, go outside & get in my car & go to Santa Fe / The
     Mall / ....  What are the chances you are going to happen to hit upon
     the actions of packing your suitcase, finding your keys, finding your
     ticket, leaving, getting in your car, etc.?

goal --> start is better
     goal: at airport.  What could get me here?  (Not watching TV or washing
     my socks!)

Dendral (finds the molecular structure of organic compounds based on mass
spectrographic data and chemical knowledge):
     Start: a set of data
     Goal: the molecular structure

For a given set of data, there are only a few possible compounds.  But each
compound has many possible structures.  Better to search forward in this
case.

Bi-directional search: (text, p. 80ff)
======================