Uses of Class
net.pakl.rl.State

Packages that use State
net.pakl.rl These are the basic reinforcement learning model classes -- every reinforcement learning problem can be described (minimally) as a World (collection of states), Policy, ValueFunction, and Actions. 
net.pakl.rl.maze World and policy for a 2D problem with impassable obstacles. 
org.eyelanguage.rl.reading Code for the Adaptive Reading Agent; see ReadingMain for parameters and default values. 
 
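Taken together, the signatures listed on this page imply a small contract between World, State, and Action. The sketch below is illustrative only: the interface bodies and the toy one-dimensional world are assumptions, not the library's source.

```java
// Minimal sketch (assumed, not net.pakl.rl source) of the World/Action
// contract implied by the method lists on this page, exercised by a
// simple episode loop over a toy world whose states are the integers 0..3.
interface Action { }

interface World<S> {
    S getStartingState();
    S getNewState(S oldState, Action action);
    boolean isTerminalState(S state);
}

public class EpisodeSketch {
    enum Move implements Action { FORWARD }

    // Toy world: the only action advances the state by one; 3 is terminal.
    static final World<Integer> LINE = new World<Integer>() {
        public Integer getStartingState() { return 0; }
        public Integer getNewState(Integer s, Action a) { return s + 1; }
        public boolean isTerminalState(Integer s) { return s >= 3; }
    };

    // Run one episode to termination and count the steps taken.
    static int runEpisode(World<Integer> w) {
        Integer s = w.getStartingState();
        int steps = 0;
        while (!w.isTerminalState(s)) {
            s = w.getNewState(s, Move.FORWARD);
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(runEpisode(LINE)); // 3 steps from state 0 to state 3
    }
}
```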

Uses of State in net.pakl.rl
 

Subclasses of State in net.pakl.rl
 class StateActionPair
          Can be used to convert a ValueFunction into a Q-function.
 

Fields in net.pakl.rl declared as State
protected  State Agent.state
           
 

Methods in net.pakl.rl that return State
 State World.getNewState(State oldState, Action action)
          Given a state and an action, returns the resulting state.
 State World.getStartingState()
           
 

Methods in net.pakl.rl that return types with arguments of type State
 java.util.Set<State> ValueFunctionHashMap.getKeySet()
           
 

Methods in net.pakl.rl with parameters of type State
 void ActionSet.allowAction(State state, Action newAction)
          This method allows you to program in a possible action given a state.
 void Agent.experience(ValueFunction newValueFunction, ValueFunction valueFunction, State startState, State nextState, double reinforcement)
           
 void PolicyExtractor.forceInitialState(State s)
           
 java.util.List ActionSet.getAllPossibleActions(State state)
          This method returns a list of all possible actions an agent could take given a State.
 Action Agent.getBestActionForValueFrom(State s, ValueFunction vf, ActionSet p)
          A shared helper that selects the best action from a state under a given value function and action set; intended to be reused to reduce duplicated code.
 State World.getNewState(State oldState, Action action)
          Given a state and an action, returns the resulting state.
 Action ActionSet.getRandomAllowedAction(State state)
           
 double ReinforcementFunction.getReward(State state, Action action)
          Return the reward given an action from a particular state.
 double ReinforcementFunction.getReward(State state, Action action, State newState)
          Sometimes a reward is also based on the resulting state.
 double ValueFunctionResidualAlgorithmPerceptron.getValue(State state)
           
 double ValueFunctionResidualAlgorithmLinear.getValue(State state)
           
 double ValueFunctionPerceptron.getValue(State state)
           
 double ValueFunctionHashMap.getValue(State state)
          The value of a state is defined as the sum of reinforcements received when starting in that state and following some fixed policy to a terminal state; the optimal policy maps states to actions that maximize the sum of reinforcements received when starting in an arbitrary state and performing actions until the terminal state is reached.
 double ValueFunction.getValue(State state)
          Retrieve the value associated with a state (which may be different for non-stored states depending on the actual class implementing this value function).
 boolean IsWinnable.isDrawState(State s)
           
 boolean IsWinnable.isLoseState(State s)
           
 boolean World.isTerminalState(State state)
          Reports whether this particular state is a terminal state.
 boolean IsWinnable.isWinState(State s)
           
protected  void Agent.performValueIterationUpdateOnState(ValueFunction newValueFunction, ValueFunction valueFunction, State currentState)
           
 void ActionSet.removeAction(State state, Action oldAction)
          This method allows you to deprogram an action so it is no longer possible from a state.
 void ReinforcementFunction.setReward(State state, Action action, double newReward)
           
 void Agent.setState(State newState)
           
 void ValueFunctionResidualAlgorithmPerceptron.setValue(State state, double newValue)
           
 void ValueFunctionResidualAlgorithmLinear.setValue(State state, double newValue)
           
 void ValueFunctionPerceptron.setValue(State state, double newValue)
           
 void ValueFunctionHashMap.setValue(State state, double newValue)
           
 void ValueFunction.setValue(State state, double newValue)
           
 void ValueFunctionResidualAlgorithmPerceptron.setValue(State thisState, State nextState, double newValue, double discountFactor)
           
 void ValueFunctionResidualAlgorithmLinear.setValue(State thisState, State nextState, double newValue, double discountFactor)
           
 
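The getValue/setValue/isTerminalState methods above are exactly the pieces a value-iteration sweep needs. A hedged sketch of that usage on a toy three-state chain (the ChainSweep class, its constants, and the integer states are illustrative stand-ins, not part of net.pakl.rl):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not library code): a HashMap-backed value table,
// mirroring the getValue/setValue signatures above, updated by repeated
// sweeps over a chain of states 0 -> 1 -> 2, where state 2 is terminal.
public class ChainSweep {
    static final int TERMINAL = 2;     // state 2 is the terminal state
    static final double REWARD = -1.0; // step cost per transition
    static final double GAMMA = 0.9;   // discount factor

    // One value-iteration sweep: V(s) <- r + gamma * V(s'), with the
    // single available action moving one state to the right.
    static Map<Integer, Double> sweep(Map<Integer, Double> v) {
        Map<Integer, Double> next = new HashMap<>(v);
        for (int s = 0; s < TERMINAL; s++) {
            int s2 = s + 1;
            next.put(s, REWARD + GAMMA * v.getOrDefault(s2, 0.0));
        }
        return next;
    }

    public static void main(String[] args) {
        Map<Integer, Double> v = new HashMap<>();
        for (int i = 0; i < 50; i++) v = sweep(v);
        // Converges to V(1) = -1 and V(0) = -1 + 0.9 * (-1) = -1.9
        System.out.println(v.get(0) + " " + v.get(1));
    }
}
```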

Constructors in net.pakl.rl with parameters of type State
StateActionPair(State s, Action a)
           
 
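StateActionPair, constructed from a state and an action as above, is what lets a state-value table double as a Q-function (see the subclass note near the top of this page): the pair acts as a composite hash key. A sketch of that idea with stand-in types (the Pair class and String-valued states below are assumptions for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Sketch of the idea behind StateActionPair: a composite (state, action)
// key, with equals/hashCode defined over both parts, so an ordinary
// value table can be indexed like a Q-function. Illustrative only.
public class QTableSketch {
    static final class Pair {
        final String state;  // stand-in for State
        final String action; // stand-in for Action
        Pair(String s, String a) { state = s; action = a; }
        @Override public boolean equals(Object o) {
            return o instanceof Pair && ((Pair) o).state.equals(state)
                    && ((Pair) o).action.equals(action);
        }
        @Override public int hashCode() { return Objects.hash(state, action); }
    }

    public static void main(String[] args) {
        Map<Pair, Double> q = new HashMap<>();
        q.put(new Pair("s0", "left"), 0.5);
        q.put(new Pair("s0", "right"), 1.0);
        // Looking up an equal (state, action) pair finds the stored value.
        System.out.println(q.get(new Pair("s0", "right"))); // 1.0
    }
}
```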

Uses of State in net.pakl.rl.maze
 

Subclasses of State in net.pakl.rl.maze
 class State2D
          This class represents a state which corresponds to a position in a two-dimensional World.
 

Methods in net.pakl.rl.maze that return State
 State MazeWorld.getNewState(State oldState, Action action)
          This critical method returns the new state given an action from an old state; here it simply adds the action's offset to the position to simulate movement.
 State MazeWorld.getRandomState()
           
 State MazeWorld.getStartingState()
           
 

Methods in net.pakl.rl.maze with parameters of type State
 void MazeWorld.addTeleporter(State location, State destination)
           
 int MazeWorld.distance(State state1, State state2)
           
 State MazeWorld.getNewState(State oldState, Action action)
          This critical method returns the new state given an action from an old state; here it simply adds the action's offset to the position to simulate movement.
 boolean MazeWorld.isTerminalState(State state)
           
 
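MazeWorld.getNewState is described above as adding the action to the position, with impassable obstacles blocking some moves. A sketch of that movement rule (Pos, step, and the wall-as-no-op behavior are illustrative assumptions, not MazeWorld's actual code):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Sketch of the movement rule described for MazeWorld.getNewState:
// an action is a (dx, dy) offset added to the state's position, and a
// move into an impassable obstacle leaves the state unchanged.
public class MazeStep {
    static final class Pos {
        final int x, y;
        Pos(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Pos && ((Pos) o).x == x && ((Pos) o).y == y;
        }
        @Override public int hashCode() { return Objects.hash(x, y); }
    }

    static Pos step(Pos s, int dx, int dy, Set<Pos> walls) {
        Pos next = new Pos(s.x + dx, s.y + dy);
        return walls.contains(next) ? s : next; // blocked moves are no-ops
    }

    public static void main(String[] args) {
        Set<Pos> walls = new HashSet<>();
        walls.add(new Pos(1, 0)); // one obstacle to the east of the origin
        Pos open = step(new Pos(0, 0), 0, 1, walls);    // moves to (0, 1)
        Pos blocked = step(new Pos(0, 0), 1, 0, walls); // blocked, stays put
        System.out.println(open.x + "," + open.y + " / "
                + blocked.x + "," + blocked.y);
    }
}
```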

Uses of State in org.eyelanguage.rl.reading
 

Subclasses of State in org.eyelanguage.rl.reading
 class ReadingState
          This class represents the State of a reading Agent -- scroll down to Field Detail to see more detailed information for each dimension of the state.
 class ReadingStateParallelRelative
          In parallel states, we have to keep track of which words of the sentence have already been identified; for default compatibility with earlier states, we return information about the first word in the window for length, ID, and whether the word is identified.
 class ReadingStateRelative
          Inherits from ReadingState, but several methods have been altered so that the model of the environment is word-centric instead of absolute in the sentence (this allows for generalization to new circumstances).
 

Methods in org.eyelanguage.rl.reading that return State
 State SentenceWorldParallel.getNewState(State oldGenericState, Action genericAction)
          Calls the same code as the serial SentenceWorld newState(state, action), but also calls a new function, SentenceWorldParallel.performParallelLexicalProcessing(org.eyelanguage.rl.reading.ReadingStateParallelRelative), on the state.
 State SentenceWorld.getNewState(State oldGenericState, Action genericAction)
          This critical method returns the new state given an action from an old state; here it simply adds the action's offset to the position to simulate movement.
 State SentenceWorld.getNewState(State oldGenericState, Action genericAction, java.lang.String r)
           
 State SentenceWorld.getRandomState()
           
 State SentenceWorldParallel.getStartingState()
           
 State SentenceWorld.getStartingState()
           
 

Methods in org.eyelanguage.rl.reading with parameters of type State
 int SentenceWorld.distance(State state1, State state2)
           
 java.util.List ReadingPolicyParallel.getAllPossibleActions(State state)
           
 java.util.List ReadingPolicy.getAllPossibleActions(State state)
           
 State SentenceWorldParallel.getNewState(State oldGenericState, Action genericAction)
          Calls the same code as the serial SentenceWorld newState(state, action), but also calls a new function, SentenceWorldParallel.performParallelLexicalProcessing(org.eyelanguage.rl.reading.ReadingStateParallelRelative), on the state.
 State SentenceWorld.getNewState(State oldGenericState, Action genericAction)
          This critical method returns the new state given an action from an old state; here it simply adds the action's offset to the position to simulate movement.
 State SentenceWorld.getNewState(State oldGenericState, Action genericAction, java.lang.String r)
           
 double ReadingReinforcementFunction.getReward(State state, Action action)
           
 double ReadingParallelReinforcementFunction.getReward(State state, Action action)
           
 double ReadingReinforcementFunction.getReward(State state, Action action, State resultingState)
           
 double ReadingParallelReinforcementFunction.getReward(State state, Action action, State resultingState)
           
 double ParallelToSerialVFAdapter.getValue(State state)
           
 void SentenceWorld.growBounds(State state)
           
 boolean SentenceWorldParallel.isTerminalState(State state)
          The final word in the sentence, and all preceding words, must be identified.
 boolean SentenceWorld.isTerminalState(State state)
           
 void ParallelToSerialVFAdapter.setValue(State state, double newValue)