:orphan:

:py:mod:`cnn.autoencoder`
=========================

.. py:module:: cnn.autoencoder

.. autoapi-nested-parse::

   A CNN-based autoencoder model.


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   cnn.autoencoder.AutoEncoder


Attributes
~~~~~~~~~~

.. autoapisummary::

   cnn.autoencoder.BASE_IMAGE_SIZE


.. py:data:: BASE_IMAGE_SIZE
   :annotation: :int = 16

   The width and height that images are downsampled to before being flattened into a code.


.. py:class:: AutoEncoder(number_channels: int = 3, input_size: int = 32, code_size: int = 16, number_filters: int = 4)

   Bases: :py:obj:`pytorch_lightning.LightningModule`

   An autoencoder model that incrementally downsamples its input through CNN layers to a flat code, then upsamples it back through CNN layers.

   .. py:method:: forward(self, activation: Union[torch.Tensor, List[torch.Tensor]]) -> torch.Tensor

      Overrides :class:`pl.LightningModule`.

   .. py:method:: forward_encode_decode(self, input: Union[torch.Tensor, List[torch.Tensor]]) -> torch.Tensor

      Performs both the encode and decode steps on a (batched) input.

      :param input: the input tensor.
      :returns: the tensor after encoding and decoding.

   .. py:method:: training_step(self, batch: List[torch.Tensor], batch_idx: int)

      Overrides :class:`pl.LightningModule`.

   .. py:method:: validation_step(self, batch: List[torch.Tensor], batch_idx: int)

      Overrides :class:`pl.LightningModule`.

   .. py:method:: test_step(self, batch: List[torch.Tensor], batch_idx: int)

      Overrides :class:`pl.LightningModule`.

   .. py:method:: predict_step(self, batch, batch_idx: int, dataloader_idx: int = None)

      Step function called during :meth:`~pytorch_lightning.trainer.trainer.Trainer.predict`. By default, it calls :meth:`~pytorch_lightning.core.lightning.LightningModule.forward`. Override to add any processing logic.

      The :meth:`~pytorch_lightning.core.lightning.LightningModule.predict_step` is used to scale inference on multiple devices.
      To prevent an OOM error, it is possible to use the :class:`~pytorch_lightning.callbacks.BasePredictionWriter` callback to write the predictions to disk or a database after each batch or at epoch end.

      The :class:`~pytorch_lightning.callbacks.BasePredictionWriter` should be used while using a spawn-based accelerator. This happens with ``Trainer(strategy="ddp_spawn")`` or when training on 8 TPU cores with ``Trainer(tpu_cores=8)``, as predictions won't be returned.

      Example::

          class MyModel(LightningModule):

              def predict_step(self, batch, batch_idx, dataloader_idx):
                  return self(batch)

          dm = ...
          model = MyModel()
          trainer = Trainer(gpus=2)
          predictions = trainer.predict(model, dm)

      Args:
          batch: Current batch.
          batch_idx: Index of current batch.
          dataloader_idx: Index of the current dataloader.

      Return:
          Predicted output.

   .. py:method:: configure_optimizers(self)

      Overrides :class:`pl.LightningModule`. This docstring replaces the parent docstring, which contained errors.