padl
- class padl.Batchify(dim=0)
Mark end of preprocessing.
Batchify adds a batch dimension at dim. During inference, this unsqueezes tensors and, recursively, tuples thereof. Batchify also moves the input tensors to the device specified for the transform.
- Parameters
dim – Batching dimension.
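The unsqueeze-and-recurse behavior described above can be sketched in plain Python, with lists standing in for tensors (an illustration of the semantics, not PADL's implementation):

```python
def add_batch_dim(x):
    """Recursively add a leading batch dimension of size 1.

    Lists stand in for tensors; tuples are recursed into, mirroring how
    Batchify unsqueezes tensors and, recursively, tuples thereof.
    """
    if isinstance(x, tuple):
        return tuple(add_batch_dim(item) for item in x)
    return [x]  # like tensor.unsqueeze(0): shape (...) -> (1, ...)

print(add_batch_dim([1, 2, 3]))         # [[1, 2, 3]]
print(add_batch_dim(([1, 2], [3, 4])))  # ([[1, 2]], [[3, 4]])
```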
- class padl.Identity
Do nothing. Just pass on.
- class padl.IfEval(if_: padl.transforms.Transform, else_: Optional[padl.transforms.Transform] = None)
Perform if_ if called in “eval” mode, else perform else_.
- Parameters
if_ – Transform for the “eval” mode.
else_ – Transform otherwise (defaults to the identity transform).
- class padl.IfInMode(if_: padl.transforms.Transform, target_mode: Literal['infer', 'eval', 'train'], else_: Optional[padl.transforms.Transform] = None)
Perform if_ if called in mode target_mode, else perform else_.
Example:
>>> from padl import transform
>>> a = transform(lambda x: x + 10)
>>> b = transform(lambda x: x * 99)
>>> iim = IfInMode(a, 'infer', b)
>>> iim.infer_apply(1)
11
>>> list(iim.eval_apply([1]))
[99]
- Parameters
if_ – Transform to apply when the mode matches.
target_mode – Mode (one of ‘train’, ‘eval’, ‘infer’).
else_ – Transform to apply when the mode doesn’t match (defaults to the identity transform).
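The mode dispatch can be mimicked in plain Python (a sketch of the idea, not PADL's code — the real IfInMode picks the branch from the pipeline's mode rather than an explicit argument):

```python
def if_in_mode(if_, target_mode, else_=None):
    """Return a function of (x, mode) that applies if_ when mode matches.

    else_ defaults to the identity, mirroring IfInMode's default.
    """
    else_ = else_ if else_ is not None else (lambda x: x)

    def apply(x, mode):
        assert mode in ('infer', 'eval', 'train')
        return if_(x) if mode == target_mode else else_(x)

    return apply

iim = if_in_mode(lambda x: x + 10, 'infer', lambda x: x * 99)
print(iim(1, 'infer'))  # 11
print(iim(1, 'eval'))   # 99
```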
- class padl.IfInfer(if_: padl.transforms.Transform, else_: Optional[padl.transforms.Transform] = None)
Perform if_ if called in “infer” mode, else perform else_.
- Parameters
if_ – Transform for the “infer” mode.
else_ – Transform otherwise (defaults to the identity transform).
- class padl.IfTrain(if_: padl.transforms.Transform, else_: Optional[padl.transforms.Transform] = None)
Perform if_ if called in “train” mode, else perform else_.
- Parameters
if_ – Transform for the “train” mode.
else_ – Transform otherwise (defaults to the identity transform).
- class padl.Unbatchify(dim=0, cpu=True)
Mark start of postprocessing.
Unbatchify removes the batch dimension (inverse of Batchify) and moves the input tensors to ‘cpu’.
- Parameters
dim – Batching dimension.
cpu – If True, moves the output to the CPU after removing the batch dimension.
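The inverse of Batchify's unsqueeze can be sketched in plain Python, with lists standing in for tensors (illustrative only, not PADL's implementation):

```python
def remove_batch_dim(x):
    """Recursively remove a leading batch dimension of size 1,
    the inverse of Batchify's unsqueeze. Lists stand in for tensors."""
    if isinstance(x, tuple):
        return tuple(remove_batch_dim(item) for item in x)
    assert len(x) == 1, "batch dimension must have size 1"
    return x[0]  # like tensor.squeeze(0): shape (1, ...) -> (...)

print(remove_batch_dim([[1, 2, 3]]))  # [1, 2, 3]
```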
- padl.fulldump(transform_or_module)
Switch a Transform or module or package to the “fulldump” mode.
This means that the Transform, or any Transform from that module or package, will be fully dumped instead of only dumping the statement importing it.
- Parameters
transform_or_module – A Transform, module or package for which to enable full dump. Can also be a string. In that case, will enable full dump for the module or package with matching name.
- padl.group(transform: Union[padl.transforms.Rollout, padl.transforms.Parallel])
Group transforms. This prevents them from being flattened when used in a larger composition.
Example:
When writing a Rollout as (a + (b + c)), this is automatically flattened to (a + b + c) - i.e. the resulting Rollout transform expects a 3-tuple whose inputs are passed to a, b, c respectively. To prevent that, do (a + group(b + c)). The resulting Rollout will expect a 2-tuple whose first item will be passed to a and whose second item will be passed to b + c.
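The flattening semantics from the example above can be illustrated with plain tuples, using a marker type for grouped subtrees (a sketch of the behavior, not PADL's internals):

```python
class Group(tuple):
    """Marks a subtree that must not be flattened."""

def flatten(transforms):
    """Flatten nested tuples of transforms, stopping at Group markers -
    mirroring how (a + (b + c)) becomes (a + b + c), while
    (a + group(b + c)) keeps (b + c) as a single branch."""
    flat = []
    for t in transforms:
        if isinstance(t, tuple) and not isinstance(t, Group):
            flat.extend(flatten(t))
        else:
            flat.append(t)
    return tuple(flat)

print(flatten(('a', ('b', 'c'))))         # ('a', 'b', 'c') - three branches
print(flatten(('a', Group(('b', 'c')))))  # ('a', ('b', 'c')) - two branches
```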
- padl.importdump(transform_or_module)
Disable full dump (see padl.transforms.fulldump() for more).
- padl.load(path)
Load a transform (as saved with padl.save) from path.
- padl.save(transform: padl.transforms.Transform, path: Union[pathlib.Path, str], force_overwrite: bool = False, compress: bool = False)
Save the transform to a folder at path, or to a compressed (zip) file of the same name if compress == True.
The folder’s name should end with ‘.padl’; if no extension is given, it is added automatically.
If the folder already exists, call with force_overwrite = True to overwrite it; otherwise a FileExistsError is raised.
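The extension convention can be sketched with pathlib (a hypothetical helper for illustration; the real padl.save handles the naming internally):

```python
from pathlib import Path

def normalize_padl_path(path):
    """Append the '.padl' extension when no extension is given,
    mirroring padl.save's naming convention."""
    path = Path(path)
    if not path.suffix:
        path = path.with_suffix('.padl')
    return path

print(normalize_padl_path('mymodel'))       # mymodel.padl
print(normalize_padl_path('mymodel.padl'))  # mymodel.padl
```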
- padl.transform(wrappee, ignore_scope=False)
Transform wrapper / decorator. Use to wrap a class, module or callable.
- Parameters
wrappee – Class, module or callable to be wrapped.
ignore_scope – Don’t try to determine the scope (use the toplevel scope instead).
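The wrapper/decorator pattern can be sketched in plain Python (illustrative only; the real padl.transform also handles classes and modules, records scope for serialization, and provides mode-aware application). The `>>` composition mirrors PADL's sequential-composition operator:

```python
class WrappedTransform:
    """Minimal stand-in for a wrapped callable transform."""
    def __init__(self, wrappee):
        self.wrappee = wrappee

    def __call__(self, x):
        return self.wrappee(x)

    def __rshift__(self, other):
        # Compose sequentially, like PADL's `>>`: apply self, then other.
        return WrappedTransform(lambda x: other(self(x)))

def transform_sketch(wrappee):
    """Decorator form: wrap a callable into a transform-like object."""
    return WrappedTransform(wrappee)

@transform_sketch
def plus_one(x):
    return x + 1

pipeline = plus_one >> transform_sketch(lambda x: x * 2)
print(pipeline(3))  # 8
```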
- padl.unbatch = Unbatchify(dim=0, cpu=True)
See Unbatchify.
- padl.value(val, serializer=None)
Helper function that marks things in the code that should be stored by value.