d3rlpy.preprocessing.MinMaxRewardScaler¶
- class d3rlpy.preprocessing.MinMaxRewardScaler(dataset=None, minimum=None, maximum=None)[source]¶
Min-Max reward normalization preprocessing.
\[r' = (r - \min(r)) / (\max(r) - \min(r))\]

```python
from d3rlpy.algos import CQL

cql = CQL(reward_scaler="min_max")
```
You can also initialize with a d3rlpy.dataset.MDPDataset object or manually:

```python
from d3rlpy.preprocessing import MinMaxRewardScaler

# initialize with dataset
scaler = MinMaxRewardScaler(dataset)

# initialize manually
scaler = MinMaxRewardScaler(minimum=0.0, maximum=10.0)

cql = CQL(reward_scaler=scaler)
```
- Parameters
dataset (Optional[d3rlpy.dataset.MDPDataset]) – dataset object to estimate the minimum and maximum from.
minimum (Optional[float]) – minimum reward value.
maximum (Optional[float]) – maximum reward value.
Methods
- fit(episodes)[source]¶
Estimates scaling parameters from dataset.
- Parameters
episodes (List[d3rlpy.dataset.Episode]) – list of episodes.
- Return type
None
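The scaling parameters that fit estimates are simply the global minimum and maximum over all rewards in the given episodes. A minimal NumPy sketch of that estimation and the forward transform (plain arrays stand in for d3rlpy Episode objects here, which is an assumption for illustration):

```python
import numpy as np

# Rewards from three hypothetical episodes (stand-ins for Episode.rewards).
episode_rewards = [
    np.array([0.0, 1.0, 2.0]),
    np.array([5.0, 3.0]),
    np.array([10.0, 4.0, 1.0]),
]

# fit() scans every reward to find the global minimum and maximum.
all_rewards = np.concatenate(episode_rewards)
minimum = float(all_rewards.min())  # 0.0
maximum = float(all_rewards.max())  # 10.0

# The forward transform then maps rewards into [0, 1]:
#   r' = (r - min) / (max - min)
scaled = (all_rewards - minimum) / (maximum - minimum)
print(minimum, maximum)            # 0.0 10.0
print(scaled.min(), scaled.max())  # 0.0 1.0
```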
- fit_with_env(env)¶
Gets scaling parameters from environment.
Note
RewardScaler
does not support fitting with environment.

- Parameters
env (gym.core.Env) – gym environment.
- Return type
None
- reverse_transform(reward)[source]¶
Returns reversely processed rewards.
- Parameters
reward (torch.Tensor) – reward.
- Returns
reversely processed reward.
- Return type
torch.Tensor
- transform(reward)[source]¶
Returns processed rewards.
- Parameters
reward (torch.Tensor) – reward.
- Returns
processed reward.
- Return type
torch.Tensor
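For a manually initialized scaler, transform and reverse_transform are exact inverses of each other. A small sketch of that round trip using the same formulas in pure NumPy (this mirrors the math, not the d3rlpy internals themselves):

```python
import numpy as np

minimum, maximum = 0.0, 10.0  # as in the manual-initialization example above
rewards = np.array([0.0, 2.5, 10.0])

# transform: r' = (r - min) / (max - min)
scaled = (rewards - minimum) / (maximum - minimum)

# reverse_transform: r = r' * (max - min) + min
restored = scaled * (maximum - minimum) + minimum

print(scaled)    # [0.   0.25 1.  ]
print(restored)  # [ 0.   2.5 10. ]
```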
- transform_numpy(reward)[source]¶
Returns transformed rewards as a numpy array.
- Parameters
reward (numpy.ndarray) – reward.
- Returns
transformed reward.
- Return type
numpy.ndarray
Attributes