Package net.pakl.rl

These are the basic reinforcement learning model classes -- every reinforcement learning problem can be described (minimally) as a World (collection of states), Policy, ValueFunction, and Actions.


Interface Summary
Action A basic element of our Reinforcement Learning simulation framework; Actions are returned by an ActionSet, and when performed by an Agent, move it to another state in a World
HasVectorRepresentation Implemented by state objects that are expected to be used with neural network value functions.
IsWinnable A property that a game world may have.
SubdivisionIdentification Allows a state to identify the patch to which it belongs, so that an agent can retrieve the value of this state; typically the name of the patch is the name of the value function that the agent needs to load.
ValueFunction A ValueFunction maps states from a World to values, and may be implemented with a neural network or a HashMap.
World Basic specification of a world, which ensures that all worlds have certain methods (functions), can be built and traversed, and can have a physical distance measure defined.
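The relationships among these interfaces can be rendered as a minimal, hypothetical sketch: an Action, performed in a State, yields the successor State within a World, and a ValueFunction maps States to values. All names and signatures below are illustrative stand-ins; the actual net.pakl.rl declarations may differ.

```java
public class InterfaceSketch {
    /** A position in a World (sketch). */
    interface State { String name(); }

    /** Performing an Action in a State yields the successor State. */
    interface Action { State perform(State s); }

    /** Maps States to learned values. */
    interface ValueFunction { double getValue(State s); }

    /** A trivial concrete State for demonstration. */
    record Cell(String name) implements State {}

    public static void main(String[] args) {
        // A toy deterministic action: "move east" appends to the state name.
        Action east = s -> new Cell(s.name() + "+E");
        State next = east.perform(new Cell("origin"));
        System.out.println(next.name()); // origin+E
    }
}
```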

Class Summary
ActionSet An ActionSet maps States to possible Actions and is used by an Agent in considering its possible next moves.
Agent This class represents an agent, which is capable of iterating over all of the actions available from its current state, given an ActionSet.
AgentParallelized This class represents a multi-threaded agent (therefore faster if you have multiple CPUs) appropriate for reinforcement learning with full Value Iteration, since during value iteration each state can be updated independently of others (as long as testTrainSameVf is false).
HashMapDistributed A custom HashMap class that distributes itself over many machines, thereby pooling their RAM together.
HashMapFile  
HashMapRemote Instances of HashMaps that are actually distributed to the different machines -- to use, recompile with the JavaParty front-end to generate the class and stubs (adding the keyword 'remote', i.e. declaring a public remote class).
PolicyExtractor Outputs the optimal actions given a world (environment, which includes the starting state), a value function, and a generic policy (describing allowed actions), by picking actions that lead to the highest-valued next state.
Queue This class is a synchronized queue (FIFO) data structure.
RandomPerThread This class ensures that each thread that invokes it will get a unique random number generator.
ReinforcementFunction This class contains all of the required functionality for a reward function and should be applicable to all worlds.
State A State represents a position or attitude in a World (that is, physical and/or mental attitudes); from any State, some Action is possible.
StateActionPair Can be used to convert a ValueFunction into a Q-function.
ValueFunctionHashMap A HashMap-backed ValueFunction that maps states, which are positions in a World, to values; it may be replaced with a neural network.
ValueFunctionHashMapZipped  
ValueFunctionPerceptron  
ValueFunctionResidualAlgorithmBairdPerceptron  
ValueFunctionResidualAlgorithmLinear  
ValueFunctionResidualAlgorithmPatrykNetwork  
ValueFunctionResidualAlgorithmPerceptron  
VectorTools  
ZippedHashMap A custom HashMap class that zips all objects that fall into a particular row of the HashMap (Hash together, get zipped together.) The hope was that this would save a significant amount of memory and allow larger simulations to be run...
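The greedy selection that PolicyExtractor performs can be sketched concretely: from the current state, choose the successor whose learned value is highest. States here are plain integers and the value function a plain HashMap (in the spirit of ValueFunctionHashMap); none of these names come from the package itself.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GreedyExtractorSketch {
    /**
     * Return the successor state with the highest learned value.
     * Unseen states are treated as negative infinity, so they are
     * never preferred over any learned state.
     */
    public static int bestNext(int current, List<Integer> successors,
                               Map<Integer, Double> values) {
        int best = current;
        double bestV = Double.NEGATIVE_INFINITY;
        for (int s : successors) {
            double v = values.getOrDefault(s, Double.NEGATIVE_INFINITY);
            if (v > bestV) { bestV = v; best = s; }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<Integer, Double> values = new HashMap<>();
        values.put(1, 0.2);
        values.put(2, 0.9);
        values.put(3, 0.5);
        System.out.println(bestNext(0, List.of(1, 2, 3), values)); // prints 2
    }
}
```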

Exception Summary
NonLearnedStateException An exception thrown by a ValueFunction when a state has not been learned and safety is on.
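The "safety is on" behavior can be sketched with a toy HashMap-backed value function that throws when queried about a state it has never learned. Everything here except the exception's name is a placeholder for illustration.

```java
import java.util.HashMap;
import java.util.Map;

class NonLearnedStateException extends RuntimeException {
    NonLearnedStateException(String state) {
        super("State not learned: " + state);
    }
}

public class SafeValueFunctionSketch {
    private final Map<String, Double> values = new HashMap<>();
    private final boolean safety;

    public SafeValueFunctionSketch(boolean safety) { this.safety = safety; }

    public void learn(String state, double v) { values.put(state, v); }

    /** With safety on, unknown states throw rather than defaulting. */
    public double getValue(String state) {
        Double v = values.get(state);
        if (v == null) {
            if (safety) throw new NonLearnedStateException(state);
            return 0.0; // arbitrary default when safety is off
        }
        return v;
    }
}
```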

Package net.pakl.rl Description

These are the basic reinforcement learning model classes -- every reinforcement learning problem can be described (minimally) as a World (collection of states), Policy, ValueFunction, and Actions.
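Since the package's AgentParallelized is built around full Value Iteration, an illustrative tabular update on an invented three-state chain may help fix ideas: each sweep backs up a state's value from its successor, discounted by gamma. The world, rewards, and transitions below are fabricated for the example and do not come from the package.

```java
import java.util.HashMap;
import java.util.Map;

public class ValueIterationSketch {
    /**
     * Tabular value iteration on a deterministic chain 0 -> 1 -> 2,
     * where entering the terminal state 2 yields reward 1.0.
     */
    public static Map<Integer, Double> iterate(int sweeps, double gamma) {
        Map<Integer, Double> v = new HashMap<>(Map.of(0, 0.0, 1, 0.0, 2, 0.0));
        for (int i = 0; i < sweeps; i++) {
            v.put(1, 1.0 + gamma * v.get(2)); // reward 1.0 on entering terminal
            v.put(0, 0.0 + gamma * v.get(1)); // no immediate reward
        }
        return v;
    }

    public static void main(String[] args) {
        Map<Integer, Double> v = iterate(10, 0.9);
        System.out.println(v.get(1)); // 1.0
        System.out.println(v.get(0)); // 0.9
    }
}
```

In value iteration each state's backup depends only on the previous values of other states, which is why (as the AgentParallelized description notes) the states can be updated by independent threads.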