After the action, rhc (robot holds coffee) holds. The robot can deliver coffee when it is in the office and has coffee; it can deliver coffee whether or not Sam wants it. If Sam wanted coffee before the action, Sam no longer wants it afterward. If the effects list of an action act is [e1, ..., en], then each ei is true after the action. The precondition of each action is the same in both representations. A conditional effect is an effect of an action that depends on the value of other features.
The STRIPS representation is used to specify the values of primitive features in a state based on the previous state and the action taken by the agent. Definite clauses are used to determine the values of derived features from the values of the primitive features in any given state. The precondition of an action is a proposition — the conjunction of the elements of the set — that must be true before the action can be carried out. In terms of constraints, the robot is constrained so that it can only choose an action for which the precondition holds.
In Example 6, the action to move clockwise is always possible. The semantics relies on the STRIPS assumption: the values of all of the primitive features not mentioned in the effects of the action are unchanged by the action.
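The STRIPS update rule described above can be shown in a minimal JavaScript sketch. The function and variable names here (`applyAction`, `deliverCoffee`) are illustrative only, not part of any library; the action's precondition, add list, and delete list follow the coffee-delivery example from the text.

```javascript
// A state is a set of true propositions; anything absent is false.
// An action has a precondition list, an add list (made true), and a
// delete list (made false).
function applyAction(state, action) {
  // The action is only applicable when every precondition holds.
  if (!action.precondition.every(function (p) { return state.has(p); })) {
    return null; // precondition fails: this action cannot be chosen
  }
  var next = new Set(state); // STRIPS assumption: copy everything unchanged,
  action.del.forEach(function (p) { next.delete(p); }); // then apply deletions
  action.add.forEach(function (p) { next.add(p); });    // and additions
  return next;
}

// Deliver-coffee from the text: the robot must be in the office (off)
// holding coffee (rhc); afterwards it no longer holds coffee and Sam
// no longer wants coffee (swc).
var deliverCoffee = { precondition: ['off', 'rhc'], add: [], del: ['rhc', 'swc'] };

var next = applyAction(new Set(['off', 'rhc', 'swc']), deliverCoffee);
console.log(next.has('rhc')); // false: the coffee was handed over
console.log(next.has('off')); // true: location was not mentioned, so unchanged
```

Note how the unmentioned feature `off` survives the update: that is exactly the STRIPS assumption in action.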
The values of non-primitive features can be derived from the values of the primitive features at each time step. Both elements can be collected for building a weapon. You can see how other types of spells and treasures could be added by simply defining additional actions.
Now, how about the problem? The AI will have to figure out a plan for where to move, whom to attack, and what to do in order to build the weapon. We have a couple of monsters: an ogre and a dragon. Both monsters are guarding locations on the map. A character will need both elements in order to build a fireball weapon. If we run the domain and problem, we get the following optimal solution.
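For concreteness, a problem file for this scenario might look roughly like the sketch below. This is an illustrative reconstruction only: the object and predicate names (`element`, `treasure`, `has`, and the monster locations) are assumptions patterned after the magic-world problem shown later in the article, not the actual file from the strips examples.

```pddl
; Hypothetical reconstruction -- names and predicates are assumptions.
(define (problem build-fireball-weapon)
  (:domain magic-world)
  (:objects npc - player
            ogre dragon - monster
            town field river cave - location
            fire earth - element
            fireball - weapon)
  (:init (at npc town)
         (border town field)
         (border field river)
         (border river cave)
         (at ogre field)          ; assumed monster placements
         (at dragon cave)
         (treasure river fire)    ; assumed: the river chest holds fire
         (treasure cave earth))   ; assumed: the cave chest holds earth
  (:goal (and (has npc fireball))))
```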
The AI has successfully determined a plan, which involves moving from the town to the field and on to the river, where it attacks the ogre. It then attacks the dragon in the cave, and then opens the treasure chest in the river. The AI apparently wanted to attack the dragon before opening the treasure chest sitting at its feet; in reality, both actions had an equal depth cost, so the AI simply chose the first one that it found. It then collects the fire element from the treasure chest and moves to the cave.
The AI opens the treasure chest in the cave, collects the earth element, and finally builds the fireball weapon. What would happen if, instead of using breadth-first search, we try running this with depth-first search? Depth-first search produces a significantly longer set of steps to achieve the goal. It starts off the same as the optimal solution above, but at step 5, instead of opening the treasure in the river and collecting the fire element, the AI instead chooses to move into the cave and open the treasure there first.
Interestingly, after opening the treasure in the cave, it then moves back to the river and opens the box there. This is effectively backtracking.
Once again, it repeats its steps, moving back into the cave to collect the element, and back to the river to collect the second element. Laughably, the AI then walks all the way back to town before building the fireball weapon! The AI was simply following straight down a deep path of actions that leads to the goal state. Many other paths likely exist as well, including, of course, the optimal path found by breadth-first search.
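The behavior difference between the two searches can be demonstrated without the planner at all. The toy graph below is an illustration, not the magic-world state space: breadth-first search finds the direct route to the goal, while depth-first search (which expands the most recently added path first) dives down a long branch.

```javascript
// A tiny state graph: from 'a' the goal 'g' is one step away, but a
// long chain b -> c -> d also leads there eventually.
var graph = {
  a: ['g', 'b'],
  b: ['c'],
  c: ['d'],
  d: ['g'],
  g: []
};

function search(start, goal, useBfs) {
  var frontier = [[start]]; // each frontier entry is a full path
  var visited = new Set();
  while (frontier.length > 0) {
    // BFS takes from the front (queue); DFS takes from the back (stack).
    var path = useBfs ? frontier.shift() : frontier.pop();
    var node = path[path.length - 1];
    if (node === goal) return path;
    if (visited.has(node)) continue;
    visited.add(node);
    graph[node].forEach(function (next) {
      frontier.push(path.concat([next]));
    });
  }
  return null;
}

console.log(search('a', 'g', true));  // BFS: [ 'a', 'g' ]
console.log(search('a', 'g', false)); // DFS: [ 'a', 'b', 'c', 'd', 'g' ]
```

Both results are valid plans; only breadth-first search guarantees the shortest one, which is exactly the trade-off described above.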
You can see how STRIPS artificial intelligence planning allows the computer to prepare a detailed step-by-step plan for achieving a goal. Now, how would you use this with a game? In the main loop of a game, where the screen is continuously redrawn, there is usually associated logic for moving NPC characters and performing other necessary tasks at each tick.
An automated planner can be integrated into this loop to continuously update plans for each NPC character, depending on their goals. Since a player or another NPC character can affect the state of the world, the plan would need to be updated at each tick, so that it reacts to any changes in the current state.
In this manner, the problem PDDL file could contain a dynamically updated :init section describing the current state of the world. The :goal section would remain static, while the :init section changes. At each defined interval, the AI would re-execute the automated planner to produce a new plan, given the state of the world. It would then redirect the NPC character to whichever action is next in the computed plan.
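The per-tick replanning idea can be sketched in a few lines. Everything here is a stand-in: `planFor` is a hypothetical function representing a planner call (it is not part of the strips API), and the world model is deliberately trivial.

```javascript
// Hypothetical stand-in for a planner call: given the current world
// state and a goal, return an ordered list of action names. Here the
// "plan" is just a chain of move actions toward a goal location.
function planFor(state, goal) {
  var plan = [];
  for (var at = state.npcAt; at < goal.location; at++) {
    plan.push('move-' + at + '-to-' + (at + 1));
  }
  return plan;
}

function tick(world, npc) {
  // Re-plan from the *current* state: players or other NPCs may have
  // changed the world since the last tick, invalidating the old plan.
  var plan = planFor(world, npc.goal);
  if (plan.length > 0) {
    npc.nextAction = plan[0]; // redirect the NPC to the next planned step
  }
  return npc.nextAction;
}

var world = { npcAt: 0 };
var npc = { goal: { location: 3 }, nextAction: null };
console.log(tick(world, npc)); // 'move-0-to-1'
world.npcAt = 2;               // something moved the NPC mid-game...
console.log(tick(world, npc)); // '...and the next tick adapts: 'move-2-to-3'
```

In a real game, `planFor` would regenerate the :init section from the world state and invoke the planner, with the :goal section held fixed as described above.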
The resulting plan may contain fewer or more action steps to achieve the goal. As the initial state in the problem PDDL changes, so too would the formulated solution plan.
Since the automated planner may well be running at a frequent, defined time interval, the faster the search completes, the more responsive the application will be. For faster searching, you may even want to upgrade to other types of artificial intelligence planners, such as GraphPlan or hierarchical task network (HTN) planners.
Try any of the example domains, or create an account to design your own artificial intelligence planning domains and problems. The homepage for the strips library provides a higher-level overview of the library, including an example of the Starcraft domain.
Domains and problems can be loaded from plain text files into the node.js library. Running a problem set is a single call to the solve method. By default, depth-first search is used; you can change this to breadth-first search by adding a boolean parameter to the solve method. For more details, see the Starcraft strips example. Give it a try and have fun! This article was written by Kory Becker, software developer and architect, skilled in a range of technologies, including web application development, machine learning, artificial intelligence, and data science.
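The code snippets this copy of the article refers to were lost in extraction. As a rough sketch of what the load-and-solve calls look like, based on the strips package's README: treat the file names, the callback shape, and especially the meaning of the boolean search parameter as assumptions to be checked against the library's current documentation.

```javascript
// Sketch only -- verify signatures against the strips npm package docs.
var strips = require('strips');

strips.load('./domain.txt', './problem.txt', function (domain, problem) {
  // Solve with the default search strategy (depth-first).
  var solutions = strips.solve(domain, problem);

  // Adding a boolean parameter switches the search strategy
  // (assumed here to select breadth-first search).
  // var solutions = strips.solve(domain, problem, false);

  console.log(solutions[0]); // inspect the first solution found
});
```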
Bypassing the Dragon in the Field

```pddl
(define (problem sneak-past-dragon-to-castle)
  (:domain magic-world)
  (:objects npc - player
            dragon - monster
            town field castle tunnel river - location)
  (:init (border town field)
         (border town tunnel)
         (border field castle)
         (border tunnel river)
         (border river castle)
         (at npc town)
         (at dragon field)
         (guarded field))
  (:goal (and (at npc castle))))
```

Notice that this newly defined STRIPS PDDL problem looks very similar to our first one.