d3rlpy.preprocessing.MultiplyRewardScaler¶
- class d3rlpy.preprocessing.MultiplyRewardScaler(multiplier=None)[source]¶
Multiplication reward preprocessing.
This preprocessor multiplies rewards by a constant number.
```python
from d3rlpy.algos import CQL
from d3rlpy.preprocessing import MultiplyRewardScaler

# multiply rewards by 10
reward_scaler = MultiplyRewardScaler(10.0)

cql = CQL(reward_scaler=reward_scaler)
```
- Parameters
multiplier (float) – constant multiplication value.
Methods
- fit(transitions)[source]¶
Estimates scaling parameters from dataset.
- Parameters
transitions (List[d3rlpy.dataset.Transition]) – list of transitions.
- Return type
None
- fit_with_env(env)¶
Gets scaling parameters from environment.
Note
RewardScaler does not support fitting with environment.
- Parameters
env (gym.core.Env) – gym environment.
- Return type
None
- reverse_transform(reward)[source]¶
Returns reversely processed rewards.
- Parameters
reward (torch.Tensor) – reward.
- Returns
reversely processed reward.
- Return type
torch.Tensor
- transform(reward)[source]¶
Returns processed rewards.
- Parameters
reward (torch.Tensor) – reward.
- Returns
processed reward.
- Return type
torch.Tensor
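Because this scaler is a plain constant multiplication, `transform` and `reverse_transform` are exact inverses of each other. A minimal sketch of that behavior in plain Python (not using d3rlpy itself; the `multiplier` name follows the constructor parameter above):

```python
# Sketch of MultiplyRewardScaler's transform/reverse_transform behavior:
# multiply on the way in, divide on the way out.
multiplier = 10.0

def transform(reward):
    # processed reward: scaled by the constant multiplier
    return reward * multiplier

def reverse_transform(reward):
    # reversely processed reward: undo the scaling
    return reward / multiplier

reward = 1.5
scaled = transform(reward)             # 15.0
recovered = reverse_transform(scaled)  # 1.5
```

The round trip recovers the original reward exactly, since no statistics are estimated from data.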
- transform_numpy(reward)[source]¶
Returns transformed rewards in numpy array.
- Parameters
reward (numpy.ndarray) – reward.
- Returns
transformed reward.
- Return type
numpy.ndarray
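`transform_numpy` applies the same constant scaling elementwise to a numpy array of rewards. A hedged sketch with plain numpy (this mirrors the multiplication described above; the function and variable names here are illustrative, not the library's internals):

```python
import numpy as np

multiplier = 10.0

def transform_numpy(reward):
    # elementwise multiplication over a batch of rewards
    return np.asarray(reward) * multiplier

rewards = np.array([0.0, -1.0, 2.5])
scaled = transform_numpy(rewards)  # elementwise: [0.0, -10.0, 25.0]
```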