d3rlpy.preprocessing.MinMaxRewardScaler

class d3rlpy.preprocessing.MinMaxRewardScaler(dataset=None, minimum=None, maximum=None)[source]

Min-Max reward normalization preprocessing.

\[r' = (r - \min(r)) / (\max(r) - \min(r))\]
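The simplest way to enable this scaler is to pass the string alias "min_max" as the reward_scaler argument when constructing an algorithm: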
from d3rlpy.algos import CQL

cql = CQL(reward_scaler="min_max")

You can also initialize it with a d3rlpy.dataset.MDPDataset object or manually with minimum and maximum values.

from d3rlpy.preprocessing import MinMaxRewardScaler

# initialize with dataset
scaler = MinMaxRewardScaler(dataset)

# initialize manually
scaler = MinMaxRewardScaler(minimum=0.0, maximum=10.0)

cql = CQL(reward_scaler=scaler)

Parameters

dataset (Optional[d3rlpy.dataset.MDPDataset]) – dataset object.

minimum (Optional[float]) – minimum value.

maximum (Optional[float]) – maximum value.

Methods
fit(episodes)[source]

Estimates scaling parameters from dataset.

Parameters

episodes (List[d3rlpy.dataset.Episode]) – list of episodes.

Return type

None
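A minimal sketch of fitting from a dataset is shown below; it assumes dataset is an already loaded d3rlpy.dataset.MDPDataset, whose episodes attribute provides the episode list.

from d3rlpy.preprocessing import MinMaxRewardScaler

# `dataset` is assumed to be an existing d3rlpy.dataset.MDPDataset
scaler = MinMaxRewardScaler()

# estimates the minimum and maximum from the episode rewards
scaler.fit(dataset.episodes)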

fit_with_env(env)

Gets scaling parameters from environment.

Note

RewardScaler does not support fitting with environment.

Parameters

env (gym.core.Env) – gym environment.

Return type

None

get_params(deep=False)[source]

Returns scaling parameters.

Parameters

deep (bool) – flag to deeply copy objects.

Returns

scaler parameters.

Return type

Dict[str, Any]
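For example, a manually initialized scaler returns its constructor arguments; the exact keys below are an assumption based on the signature, not confirmed by this page.

from d3rlpy.preprocessing import MinMaxRewardScaler

scaler = MinMaxRewardScaler(minimum=0.0, maximum=10.0)

# expected to contain the constructor arguments, e.g. minimum and maximum
params = scaler.get_params()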

get_type()

Returns a scaler type.

Returns

scaler type.

Return type

str
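The returned string matches the TYPE class attribute, as this short sketch illustrates.

from d3rlpy.preprocessing import MinMaxRewardScaler

scaler = MinMaxRewardScaler(minimum=0.0, maximum=10.0)

assert scaler.get_type() == "min_max"  # same value as MinMaxRewardScaler.TYPE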

reverse_transform(reward)[source]

Returns reversely processed rewards.

Parameters

reward (torch.Tensor) – reward.

Returns

reversely processed reward.

Return type

torch.Tensor
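A brief sketch of round-tripping a reward through transform and reverse_transform, assuming a manually initialized scaler.

import torch

from d3rlpy.preprocessing import MinMaxRewardScaler

scaler = MinMaxRewardScaler(minimum=0.0, maximum=10.0)

normalized = scaler.transform(torch.tensor([2.0, 5.0]))  # roughly [0.2, 0.5]
restored = scaler.reverse_transform(normalized)          # roughly [2.0, 5.0] again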

transform(reward)[source]

Returns processed rewards.

Parameters

reward (torch.Tensor) – reward.

Returns

processed reward.

Return type

torch.Tensor
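A minimal sketch, assuming a scaler initialized with minimum=0.0 and maximum=10.0 so that rewards map into the [0, 1] range according to the formula above.

import torch

from d3rlpy.preprocessing import MinMaxRewardScaler

scaler = MinMaxRewardScaler(minimum=0.0, maximum=10.0)

reward = torch.tensor([0.0, 5.0, 10.0])
print(scaler.transform(reward))  # roughly tensor([0.0, 0.5, 1.0])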

transform_numpy(reward)[source]

Returns transformed rewards in numpy array.

Parameters

reward (numpy.ndarray) – reward.

Returns

transformed reward.

Return type

numpy.ndarray
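The numpy counterpart behaves the same way; a short sketch under the same manual initialization.

import numpy as np

from d3rlpy.preprocessing import MinMaxRewardScaler

scaler = MinMaxRewardScaler(minimum=0.0, maximum=10.0)

reward = np.array([0.0, 5.0, 10.0])
print(scaler.transform_numpy(reward))  # roughly [0.0, 0.5, 1.0]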

Attributes

TYPE: ClassVar[str] = 'min_max'