Metrics¶
d3rlpy provides scoring functions without compromising scikit-learn compatibility. You can evaluate a variety of metrics on held-out test episodes during training.
from d3rlpy.datasets import get_cartpole
from d3rlpy.algos import DQN
from d3rlpy.metrics.scorer import td_error_scorer
from d3rlpy.metrics.scorer import average_value_estimation_scorer
from d3rlpy.metrics.scorer import evaluate_on_environment
from sklearn.model_selection import train_test_split

# prepare the dataset and the environment
dataset, env = get_cartpole()

# hold out episodes for evaluation
train_episodes, test_episodes = train_test_split(dataset)

dqn = DQN()

# each scorer is evaluated on eval_episodes during training
dqn.fit(train_episodes,
        eval_episodes=test_episodes,
        scorers={
            'td_error': td_error_scorer,
            'value_scale': average_value_estimation_scorer,
            'environment': evaluate_on_environment(env)
        })
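A scorer is simply a callable that takes the algorithm and a list of episodes and returns a float, so you can pass your own function alongside the built-in ones. The sketch below illustrates the interface with stand-in classes (`DummyAlgo` and `DummyEpisode` are illustrative, not d3rlpy types):

```python
from typing import List

class DummyEpisode:
    """Stand-in for an episode: paired observations and actions."""
    def __init__(self, observations, actions):
        self.observations = observations
        self.actions = actions

class DummyAlgo:
    """Stand-in for a trained algorithm exposing predict()."""
    def predict(self, observations):
        # always predicts action 0
        return [0 for _ in observations]

def action_match_scorer(algo, episodes: List[DummyEpisode]) -> float:
    """Fraction of dataset actions reproduced by the algorithm (greater is better)."""
    total, matched = 0, 0
    for episode in episodes:
        predicted = algo.predict(episode.observations)
        for a_pred, a_data in zip(predicted, episode.actions):
            matched += int(a_pred == a_data)
            total += 1
    return matched / total

episodes = [DummyEpisode(observations=[[0.0], [1.0]], actions=[0, 1])]
print(action_match_scorer(DummyAlgo(), episodes))  # 0.5
```

Because the interface is a plain callable, such a function can be dropped into the `scorers` dictionary above without any registration step.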
You can also use them with scikit-learn utilities.
from sklearn.model_selection import cross_validate

# d3rlpy algorithms work as scikit-learn estimators
scores = cross_validate(dqn,
                        dataset,
                        scoring={
                            'td_error': td_error_scorer,
                            'environment': evaluate_on_environment(env)
                        })
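Note that `evaluate_on_environment(env)` is used differently from the other scorers: it is a factory that closes over the environment and returns the actual scorer function. The closure pattern can be sketched as follows, with a toy environment and algorithm (all names here are illustrative, not d3rlpy's implementation):

```python
def make_environment_scorer(env, n_trials=10):
    """Return a scorer that rolls out the algorithm in env and averages returns."""
    def scorer(algo, *args):
        total = 0.0
        for _ in range(n_trials):
            obs, done, episode_return = env.reset(), False, 0.0
            while not done:
                action = algo.predict([obs])[0]
                obs, reward, done = env.step(action)
                episode_return += reward
            total += episode_return
        return total / n_trials
    return scorer

class ToyEnv:
    """Toy environment: every episode lasts 3 steps with reward 1 per step."""
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        return float(self.t), 1.0, self.t >= 3

class ToyAlgo:
    def predict(self, observations):
        return [0 for _ in observations]

scorer = make_environment_scorer(ToyEnv(), n_trials=2)
print(scorer(ToyAlgo()))  # 3.0
```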
Algorithms¶
d3rlpy.metrics.scorer.td_error_scorer
    Returns average TD error (in negative scale).
d3rlpy.metrics.scorer.discounted_sum_of_advantage_scorer
    Returns average of discounted sum of advantage (in negative scale).
d3rlpy.metrics.scorer.average_value_estimation_scorer
    Returns average value estimation (in negative scale).
d3rlpy.metrics.scorer.value_estimation_std_scorer
    Returns standard deviation of value estimation (in negative scale).
d3rlpy.metrics.scorer.initial_state_value_estimation_scorer
    Returns mean estimated action-values at the initial states.
d3rlpy.metrics.scorer.soft_opc_scorer
    Returns Soft Off-Policy Classification metrics.
d3rlpy.metrics.scorer.continuous_action_diff_scorer
    Returns squared difference of actions between algorithm and dataset.
d3rlpy.metrics.scorer.discrete_action_match_scorer
    Returns percentage of identical actions between algorithm and dataset.
d3rlpy.metrics.scorer.evaluate_on_environment
    Returns scorer function of evaluation on environment.
d3rlpy.metrics.comparer.compare_continuous_action_diff
    Returns scorer function of action difference between algorithms.
d3rlpy.metrics.comparer.compare_discrete_action_match
    Returns scorer function of action matches between algorithms.
Dynamics¶
d3rlpy.metrics.scorer.dynamics_observation_prediction_error_scorer
    Returns MSE of observation prediction (in negative scale).
d3rlpy.metrics.scorer.dynamics_reward_prediction_error_scorer
    Returns MSE of reward prediction (in negative scale).
d3rlpy.metrics.scorer.dynamics_prediction_variance_scorer
    Returns prediction variance of ensemble dynamics (in negative scale).
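The dynamics scorers follow the same negated convention. The variance scorer is useful for model-based methods: high disagreement across an ensemble's predictions flags states where the learned dynamics model is unreliable. The computation can be sketched as below (a toy function operating on scalar predictions, not d3rlpy's implementation):

```python
def neg_ensemble_prediction_variance(predictions_per_model):
    """Average per-point variance across ensemble members, negated so that
    lower disagreement yields a higher score."""
    n_models = len(predictions_per_model)
    n_points = len(predictions_per_model[0])
    total_var = 0.0
    for i in range(n_points):
        values = [predictions_per_model[m][i] for m in range(n_models)]
        mean = sum(values) / n_models
        total_var += sum((v - mean) ** 2 for v in values) / n_models
    return -total_var / n_points

# two ensemble members agree on the first prediction but not the second
print(neg_ensemble_prediction_variance([[1.0, 2.0], [1.0, 4.0]]))  # -0.5
```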