d3rlpy.preprocessing.StandardScaler

class d3rlpy.preprocessing.StandardScaler(dataset=None, mean=None, std=None, eps=0.001)[source]

Standardization preprocessing.

\[x' = (x - \mu) / \sigma\]

The mean \(\mu\) and standard deviation \(\sigma\) are estimated from the given dataset:

from d3rlpy.dataset import MDPDataset
from d3rlpy.algos import CQL

# prepare MDPDataset from observation, action, reward and terminal arrays
dataset = MDPDataset(observations, actions, rewards, terminals)

# initialize algorithm with StandardScaler
cql = CQL(scaler='standard')

# scaler is initialized from the given episodes
cql.fit(dataset.episodes)

You can also initialize the scaler directly with a d3rlpy.dataset.MDPDataset object, or manually with mean and std arrays.

from d3rlpy.preprocessing import StandardScaler

# initialize with dataset
scaler = StandardScaler(dataset)

# initialize manually
mean = observations.mean(axis=0)
std = observations.std(axis=0)
scaler = StandardScaler(mean=mean, std=std)

cql = CQL(scaler=scaler)
Parameters

dataset (d3rlpy.dataset.MDPDataset) – dataset object to estimate the mean and standard deviation from.

mean (numpy.ndarray) – mean array.

std (numpy.ndarray) – standard deviation array.

eps (float) – small constant value to avoid zero-division.

Methods

fit(episodes)[source]

Estimates the mean and standard deviation from the given episodes.

Parameters

episodes (List[d3rlpy.dataset.Episode]) – list of episodes.

Return type

None
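
A minimal sketch of calling fit directly, assuming dataset is an MDPDataset built as in the example above:

from d3rlpy.preprocessing import StandardScaler

scaler = StandardScaler()

# mean and std are estimated over all observations in the episodes
scaler.fit(dataset.episodes)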

fit_with_env(env)[source]

Gets scaling parameters from a gym environment.

Parameters

env (gym.core.Env) – gym environment.

Return type

None

get_params(deep=False)[source]

Returns scaling parameters.

Parameters

deep (bool) – flag to deeply copy objects.

Returns

scaler parameters.

Return type

Dict[str, Any]
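
A short sketch of inspecting the fitted parameters; the dictionary keys are assumed to mirror the constructor arguments (mean, std, eps):

# retrieve the scaling parameters as a dictionary
params = scaler.get_params()
print(params['mean'], params['std'])

# deep=True returns copies instead of references
params_copy = scaler.get_params(deep=True)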

get_type()

Returns the scaler type.

Returns

scaler type.

Return type

str
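
The returned string equals the TYPE class attribute, which is the same identifier passed to algorithms as scaler='standard':

from d3rlpy.preprocessing import StandardScaler

scaler = StandardScaler()
assert scaler.get_type() == 'standard'
assert scaler.get_type() == StandardScaler.TYPE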

reverse_transform(x)[source]

Returns observations transformed back to the original scale.

Parameters

x (torch.Tensor) – observation.

Returns

reversely transformed observation.

Return type

torch.Tensor

transform(x)[source]

Returns standardized observations.

Parameters

x (torch.Tensor) – observation.

Returns

processed observation.

Return type

torch.Tensor
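
A minimal sketch of the transform / reverse_transform round trip with a manually initialized scaler; the observation array here is hypothetical, and both methods operate on torch.Tensor inputs:

import numpy as np
import torch

from d3rlpy.preprocessing import StandardScaler

# hypothetical observations for illustration
observations = np.random.random((1000, 8)).astype('f4')

scaler = StandardScaler(mean=observations.mean(axis=0),
                        std=observations.std(axis=0))

# standardize a batch of observations
x = torch.tensor(observations[:32])
y = scaler.transform(x)

# reverse_transform restores the original scale
x_restored = scaler.reverse_transform(y)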

Attributes

TYPE: ClassVar[str] = 'standard'