| Packages that use Action | |
|---|---|
| net.pakl.rl | These are the basic reinforcement learning model classes -- every reinforcement learning problem can be described (minimally) as a World (collection of states), Policy, ValueFunction, and Actions. |
| net.pakl.rl.maze | World and policy for a 2D problem with impassable obstacles. |
| org.eyelanguage.rl.reading | Code for the Adaptive Reading Agent; see ReadingMain for parameters and default values. |
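The package summary above names the core abstractions: World, Policy, ValueFunction, State, and Action. As a minimal sketch of how they might fit together (every concrete name below is an illustrative assumption; only the five abstraction names come from the summary):

```java
// Minimal sketch of the core RL abstractions named in the package summary.
// Cell, Move, and step are illustrative assumptions, not part of net.pakl.rl.
public class RlSketch {
    interface State {}
    interface Action {}
    interface World { State getNewState(State oldState, Action action); }
    interface Policy { Action chooseAction(State state); }
    interface ValueFunction { double getValue(State state); }

    // Toy instances: integer positions and integer moves.
    record Cell(int id) implements State {}
    record Move(int delta) implements Action {}

    // One transition step through a toy World that adds the move to the cell.
    public static int step(int cell, int delta) {
        World world = (oldState, action) ->
                new Cell(((Cell) oldState).id() + ((Move) action).delta());
        return ((Cell) world.getNewState(new Cell(cell), new Move(delta))).id();
    }
}
```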
| Uses of Action in net.pakl.rl |
|---|
| Methods in net.pakl.rl that return Action | |
|---|---|
| `Action` | `Agent.getBestActionForValueFrom(State s, ValueFunction vf, ActionSet p)`: A helper that should probably be called more often to reduce duplicated code. |
| `Action` | `ActionSet.getRandomAllowedAction(State state)` |
| Methods in net.pakl.rl with parameters of type Action | |
|---|---|
| `void` | `ActionSet.allowAction(State state, Action newAction)`: Programs in a possible action for a given state. |
| `State` | `World.getNewState(State oldState, Action action)`: Given a state and an action, returns the resulting state. |
| `double` | `ReinforcementFunction.getReward(State state, Action action)`: Returns the reward for taking an action from a particular state. |
| `double` | `ReinforcementFunction.getReward(State state, Action action, State newState)`: Sometimes a reward also depends on the resulting state. |
| `void` | `ActionSet.removeAction(State state, Action oldAction)`: Deprograms an action so it is no longer possible from the given state. |
| `void` | `ReinforcementFunction.setReward(State state, Action action, double newReward)` |
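The `ActionSet` methods above (allow, remove, random pick) suggest a per-state mapping of permitted actions. A sketch under that assumption (only the three method signatures come from the table; the map-backed implementation is hypothetical):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Sketch of the ActionSet contract implied by the table above. The
// map-backed storage and allowedCount helper are assumptions.
public class ActionSetSketch {
    interface State {}
    interface Action {}

    private final Map<State, List<Action>> allowed = new HashMap<>();
    private final Random random = new Random();

    // Program in a possible action for a state.
    public void allowAction(State state, Action newAction) {
        allowed.computeIfAbsent(state, s -> new ArrayList<>()).add(newAction);
    }

    // Deprogram an action so it is no longer possible from this state.
    public void removeAction(State state, Action oldAction) {
        List<Action> actions = allowed.get(state);
        if (actions != null) actions.remove(oldAction);
    }

    // Pick uniformly among the actions currently allowed from this state.
    public Action getRandomAllowedAction(State state) {
        List<Action> actions = allowed.getOrDefault(state, List.of());
        if (actions.isEmpty()) return null;
        return actions.get(random.nextInt(actions.size()));
    }

    public int allowedCount(State state) {
        return allowed.getOrDefault(state, List.of()).size();
    }
}
```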
| Constructors in net.pakl.rl with parameters of type Action | |
|---|---|
| `StateActionPair(State s, Action a)` | |
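A `StateActionPair` is the natural key for a Q-table. Only the constructor `StateActionPair(State s, Action a)` appears in the table above; the record form below (which supplies the `equals`/`hashCode` a map lookup needs) is an assumption:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of StateActionPair used as a Q-table key. The record definition
// and the setQ/getQ helpers are illustrative assumptions.
public class QTableSketch {
    interface State {}
    interface Action {}
    record StateActionPair(State s, Action a) {}

    private final Map<StateActionPair, Double> q = new HashMap<>();

    public void setQ(State s, Action a, double value) {
        q.put(new StateActionPair(s, a), value);
    }

    // Unseen pairs default to 0.0, a common Q-table initialization.
    public double getQ(State s, Action a) {
        return q.getOrDefault(new StateActionPair(s, a), 0.0);
    }
}
```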
| Uses of Action in net.pakl.rl.maze |
|---|
| Classes in net.pakl.rl.maze that implement Action | |
|---|---|
| `class` | `Action2D`: Represents a two-dimensional action, which could correspond to a movement vector in a 2D World such as `MazeWorld`. |
| Methods in net.pakl.rl.maze with parameters of type Action | |
|---|---|
| `State` | `MazeWorld.getNewState(State oldState, Action action)`: This critical method returns the new state given an action from an old state; in this case it simply adds the action's movement vector to the current position to simulate movement. |
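Since the maze package describes a 2D world with impassable obstacles, one plausible sketch of that transition (the obstacle check and all names below are assumptions; only the add-vector-to-position behavior is documented above):

```java
import java.util.Set;

// Sketch of a MazeWorld-style transition: add the action's movement vector
// to the position, rejecting moves into obstacle cells. Pos, Move, and the
// obstacles parameter are illustrative assumptions.
public class MazeStepSketch {
    record Pos(int x, int y) {}        // a 2D state
    record Move(int dx, int dy) {}     // a 2D action (cf. Action2D)

    // Returns the new position, or the old one if the move hits an obstacle.
    public static Pos getNewState(Pos oldState, Move action, Set<Pos> obstacles) {
        Pos next = new Pos(oldState.x() + action.dx(), oldState.y() + action.dy());
        return obstacles.contains(next) ? oldState : next;
    }
}
```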
| Uses of Action in org.eyelanguage.rl.reading |
|---|
| Classes in org.eyelanguage.rl.reading that implement Action | |
|---|---|
| `class` | `ReadingAction` |
| Methods in org.eyelanguage.rl.reading with parameters of type Action | |
|---|---|
| `State` | `SentenceWorldParallel.getNewState(State oldGenericState, Action genericAction)`: Calls the same code as the serial `SentenceWorld` `newState(state, action)`, but also calls `SentenceWorldParallel.performParallelLexicalProcessing(org.eyelanguage.rl.reading.ReadingStateParallelRelative)` on the state. |
| `State` | `SentenceWorld.getNewState(State oldGenericState, Action genericAction)`: This critical method returns the new state given an action from an old state; in this case it simply adds the action to the position to simulate movement. |
| `State` | `SentenceWorld.getNewState(State oldGenericState, Action genericAction, java.lang.String r)` |
| `double` | `ReadingReinforcementFunction.getReward(State state, Action action)` |
| `double` | `ReadingParallelReinforcementFunction.getReward(State state, Action action)` |
| `double` | `ReadingReinforcementFunction.getReward(State state, Action action, State resultingState)` |
| `double` | `ReadingParallelReinforcementFunction.getReward(State state, Action action, State resultingState)` |
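Each reinforcement function above exposes both `getReward(state, action)` and `getReward(state, action, newState)`. One plausible relationship between the two overloads (an assumption, not stated in the source, and all concrete names below are illustrative) is that the shorter form derives the successor state itself and delegates to the full form:

```java
// Sketch of the two getReward overloads listed above. The delegation
// pattern, the toy State/Action types, and the goal-at-3 reward are
// all illustrative assumptions.
public class RewardSketch {
    record State(int pos) {}
    record Action(int delta) {}

    static State getNewState(State s, Action a) {
        return new State(s.pos() + a.delta());
    }

    // Reward that depends on the resulting state: +1 for reaching position 3.
    static double getReward(State state, Action action, State newState) {
        return newState.pos() == 3 ? 1.0 : 0.0;
    }

    // Two-argument overload: derive the successor, then reuse the full form.
    static double getReward(State state, Action action) {
        return getReward(state, action, getNewState(state, action));
    }
}
```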