Metrics¶
d3rlpy provides scoring functions without compromising scikit-learn compatibility, so you can evaluate many metrics on held-out test episodes during training.
from d3rlpy.datasets import get_cartpole
from d3rlpy.algos import DQN
from d3rlpy.metrics.scorer import td_error_scorer
from d3rlpy.metrics.scorer import average_value_estimation_scorer
from d3rlpy.metrics.scorer import evaluate_on_environment
from sklearn.model_selection import train_test_split
dataset, env = get_cartpole()
train_episodes, test_episodes = train_test_split(dataset)
dqn = DQN()
dqn.fit(train_episodes,
        eval_episodes=test_episodes,
        scorers={
            'td_error': td_error_scorer,
            'value_scale': average_value_estimation_scorer,
            'environment': evaluate_on_environment(env)
        })
You can also use them with scikit-learn utilities.
from sklearn.model_selection import cross_validate
scores = cross_validate(dqn,
                        dataset,
                        scoring={
                            'td_error': td_error_scorer,
                            'environment': evaluate_on_environment(env)
                        })
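A scorer is simply a callable taking an algorithm and a list of episodes and returning a float, which is what makes the scikit-learn integration above work. Below is a minimal sketch of a hypothetical custom scorer; the `predict` call and the `observations` attribute mirror d3rlpy's interfaces, and the mock classes exist only to demonstrate the calling convention.

```python
import numpy as np

def mean_action_magnitude_scorer(algo, episodes):
    """Hypothetical custom scorer: mean absolute predicted action.

    Any callable with the signature ``(algo, episodes) -> float`` can be
    passed in the ``scorers`` dict of ``fit()`` alongside the built-ins.
    """
    magnitudes = []
    for episode in episodes:
        actions = algo.predict(episode.observations)
        magnitudes.append(np.abs(actions).mean())
    return float(np.mean(magnitudes))

# Mock algorithm and episode objects, standing in for real d3rlpy ones.
class MockAlgo:
    def predict(self, observations):
        # pretend the policy always outputs action 2.0
        return np.full(len(observations), 2.0)

class MockEpisode:
    def __init__(self, n):
        self.observations = np.zeros((n, 4))

score = mean_action_magnitude_scorer(MockAlgo(), [MockEpisode(10), MockEpisode(5)])
# score == 2.0
```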
Algorithms¶
| Scorer | Description |
|---|---|
| td_error_scorer | Returns average TD error. |
| discounted_sum_of_advantage_scorer | Returns average of discounted sum of advantage. |
| average_value_estimation_scorer | Returns average value estimation. |
| value_estimation_std_scorer | Returns standard deviation of value estimation. |
| initial_state_value_estimation_scorer | Returns mean estimated action-values at the initial states. |
| soft_opc_scorer | Returns Soft Off-Policy Classification metrics. |
| continuous_action_diff_scorer | Returns squared difference of actions between algorithm and dataset. |
| discrete_action_match_scorer | Returns percentage of identical actions between algorithm and dataset. |
| evaluate_on_environment | Returns scorer function of evaluation on environment. |
| compare_continuous_action_diff | Returns scorer function of action difference between algorithms. |
| compare_discrete_action_match | Returns scorer function of action matches between algorithms. |
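Note that the last three entries are factories rather than scorers: you call them with their configuration first (an environment, or a baseline algorithm) and they return the scorer function. A minimal sketch of that pattern, with a hypothetical threshold-based scorer standing in for the real implementations:

```python
import numpy as np

def threshold_match_scorer(threshold):
    """Hypothetical factory-style scorer: fraction of predicted actions
    above a threshold. It mirrors the pattern of evaluate_on_environment
    and soft_opc_scorer, which return the scorer instead of being one."""
    def scorer(algo, episodes):
        fractions = []
        for episode in episodes:
            actions = algo.predict(episode.observations)
            fractions.append(np.mean(actions > threshold))
        return float(np.mean(fractions))
    return scorer

# Mock objects, standing in for a real algorithm and dataset episodes.
class MockAlgo:
    def predict(self, observations):
        return np.arange(len(observations), dtype=float)

class MockEpisode:
    def __init__(self, n):
        self.observations = np.zeros((n, 4))

# configure first, then use the result like any other scorer
scorer = threshold_match_scorer(threshold=1.5)
score = scorer(MockAlgo(), [MockEpisode(4)])
# actions are [0, 1, 2, 3]; two of the four exceed 1.5, so score == 0.5
```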
Dynamics¶
| Scorer | Description |
|---|---|
| dynamics_observation_prediction_error_scorer | Returns MSE of observation prediction. |
| dynamics_reward_prediction_error_scorer | Returns MSE of reward prediction. |
| dynamics_prediction_variance_scorer | Returns prediction variance of ensemble dynamics. |
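The prediction variance metric measures disagreement across ensemble members: high variance flags state-action regions where the learned dynamics model is uncertain. A minimal NumPy sketch of that quantity, under the assumption that variance is taken across ensemble predictions and then averaged:

```python
import numpy as np

# Hypothetical predictions from a 5-member dynamics ensemble for a batch
# of 3 transitions with 4-dimensional observations:
# shape (ensemble_size, batch_size, observation_dim).
rng = np.random.default_rng(0)
ensemble_predictions = rng.normal(size=(5, 3, 4))

# Variance across ensemble members, averaged over batch and dimensions.
# Large values indicate the ensemble disagrees, i.e. the model is uncertain.
variance = float(ensemble_predictions.var(axis=0).mean())
```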