simpleml.pipelines.base_pipeline

Base Module for Pipelines

Module Contents

Classes

AbstractPipeline

Abstract Base class for all Pipeline objects.

DatasetSequence

Sequence wrapper for internal datasets. Only used for raw data mapping so return type is internal Split object.

Pipeline

Base class for all Pipeline objects.

TransformedSequence

Nested sequence class to apply transforms on batches in real-time and forward through as the next batch.

simpleml.pipelines.base_pipeline.LOGGER
simpleml.pipelines.base_pipeline.__author__ = 'Elisha Yadgaran'
class simpleml.pipelines.base_pipeline.AbstractPipeline(has_external_files=True, transformers=None, external_pipeline_class='default', fitted=False, **kwargs)

Bases: future.utils.with_metaclass()

Abstract Base class for all Pipeline objects.

Relies on mixin classes to define the split_dataset method; will throw an error on use otherwise.

params: pipeline parameter metadata for easy insight into hyperparameters across trainings

__abstract__ = True
object_type = 'PIPELINE'
params
X(self, split=None)

Get X for specific dataset split

_create_external_pipeline(self, external_pipeline_class, transformers, **kwargs)

Should return the desired pipeline object.

Parameters

external_pipeline_class – str of class to use, can be ‘default’ or ‘sklearn’
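A construction sketch for illustration. The transformer objects below are hypothetical sklearn steps, and passing them as (name, transformer) pairs mirrors add_transformer(name, transformer) but is an assumption, not confirmed API:

    from simpleml.pipelines.base_pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler

    # 'sklearn' wraps an sklearn pipeline object; 'default' uses the native implementation
    pipeline = Pipeline(
        external_pipeline_class='sklearn',
        transformers=[
            ('impute', SimpleImputer(strategy='mean')),  # hypothetical transformer steps
            ('scale', StandardScaler()),
        ],
    )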

_generator_transform(self, X, dataset_split=None, **kwargs)

Pass through method to external pipeline

Parameters

X – dataframe/matrix to transform, if None, use internal dataset

NOTE: Downstream objects expect to consume a generator with a tuple of X, y, other… not a Split object, so an ordered tuple will be returned
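A sketch of a downstream consumption loop under that contract (illustrative; the generator route is exposed publicly via transform(..., return_generator=True)):

    # Each yielded batch is an ordered tuple (X, y, ...), not a Split object
    for batch in pipeline.transform(None, return_generator=True):
        X_batch, y_batch = batch[0], batch[1]
        # feed X_batch/y_batch to a downstream consumer here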

_hash(self)

Hash is the combination of: 1) Dataset 2) Transformers 3) Transformer Params 4) Pipeline Config

_iterate_split(self, split, infinite_loop=False, batch_size=32, shuffle=True, **kwargs)

Turn a dataset split into a generator

_iterate_split_using_sequence(self, split, batch_size=32, shuffle=True, **kwargs)

Different version of iterate split that uses a keras.utils.Sequence object to play nicely with keras and enable thread-safe generation.
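For intuition only, a batching generator over an in-memory split might look roughly like the sketch below; the actual implementation operates on internal Split objects and supports the same flags:

    import numpy as np

    def iterate_split_sketch(X, y, infinite_loop=False, batch_size=32, shuffle=True):
        """Illustrative stand-in for _iterate_split, not the library's code."""
        indices = np.arange(len(X))
        while True:
            if shuffle:
                np.random.shuffle(indices)
            for start in range(0, len(indices), batch_size):
                batch = indices[start:start + batch_size]
                yield X[batch], y[batch]
            if not infinite_loop:
                break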

_sequence_transform(self, X, dataset_split=None, **kwargs)

Pass through method to external pipeline

Parameters

X – dataframe/matrix to transform, if None, use internal dataset

NOTE: Downstream objects expect to consume a sequence with a tuple of X, y, other… not a Split object, so an ordered tuple will be returned

_transform(self, X, dataset_split=None)

Pass through method to external pipeline

Parameters

X – dataframe/matrix to transform, if None, use internal dataset

Return type

Split object if no dataset is passed (X is None); otherwise, matrix return of input X
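The two return modes, sketched (pipeline and X_raw are placeholders for previously prepared objects):

    transformed = pipeline.transform(X_raw)  # matrix in, transformed matrix out
    split = pipeline.transform(None)         # X=None: internal dataset is used; a Split comes back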

add_dataset(self, dataset)

Setter method for dataset used

add_transformer(self, name, transformer)

Setter method for new transformer step
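A sketch of incremental assembly; the dataset object and sklearn transformer are placeholders:

    from simpleml.pipelines.base_pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler  # hypothetical transformer

    pipeline = Pipeline(external_pipeline_class='sklearn')
    pipeline.add_dataset(dataset)                    # `dataset` assumed created elsewhere
    pipeline.add_transformer('scale', StandardScaler())
    pipeline.remove_transformer('scale')             # steps can be dropped again by name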

assert_dataset(self, msg='')

Helper method to raise an error if dataset isn’t present

assert_fitted(self, msg='')

Helper method to raise an error if pipeline isn’t fit

property external_pipeline(self)

All pipeline objects require some file-based persisted object.

Wrapper around whatever underlying class is desired (e.g. sklearn or native)

fit(self)

Pass through method to external pipeline

fit_transform(self, **kwargs)

Wrapper for fit and transform methods. ASSUMES it only applies to the default (train) split
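Usage sketch, assuming a dataset has already been attached (the return value is assumed to be the transformed train split):

    pipeline.fit()                       # fit transformers against the train split
    X_train = pipeline.fit_transform()   # convenience wrapper; train split only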

property fitted(self)
get_dataset_split(self, split=None, return_generator=False, return_sequence=False, **kwargs)

Get specific dataset split. Assumes a Split object (simpleml.pipelines.validation_split_mixins.Split) is returned; inherit or implement similar expected attributes to replace.

Uses internal self._dataset_splits as the split container - assumes dictionary-like itemgetter access
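A retrieval sketch; the split name 'TRAIN' and the attribute access on the returned Split are assumptions based on the Split contract described above:

    split = pipeline.get_dataset_split(split='TRAIN')    # Split object
    train_gen = pipeline.get_dataset_split(
        split='TRAIN', return_generator=True)            # streaming variant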

get_feature_names(self)

Pass through method to external pipeline. Should return a list of the final features generated by this pipeline

get_params(self, **kwargs)

Pass through method to external pipeline

get_transformers(self)

Pass through method to external pipeline

load(self, **kwargs)

Extend main load routine to load relationship class

remove_transformer(self, name)

Delete method for transformer step

save(self, **kwargs)

Extend parent function with a few additional save routines:

  1. save params

  2. save transformer metadata

  3. save features

set_params(self, **params)

Pass through method to external pipeline
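Both get_params and set_params forward to the external pipeline, so they round-trip sklearn-style (sketch):

    params = pipeline.get_params()   # forwarded to the external pipeline
    pipeline.set_params(**params)    # round-trips the same parameter dict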

transform(self, X, return_generator=False, return_sequence=False, **kwargs)

Main transform routine - routes to generator or regular method depending on the flag

Parameters

return_generator – boolean, whether to use the transformation method that returns a generator object or the regular transformed input

return_sequence – boolean, whether to use the method that returns a keras.utils.Sequence object to play nice with keras models
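The three routes side by side (sketch; X_raw is a placeholder):

    matrix = pipeline.transform(X_raw)                      # eager, in-memory result
    gen = pipeline.transform(None, return_generator=True)   # python generator of tuples
    seq = pipeline.transform(None, return_sequence=True)    # keras.utils.Sequence wrapper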

y(self, split=None)

Get labels for specific dataset split
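Feature/label access sketch; split names depend on the configured split mixin, so 'TRAIN' is an assumption:

    X_train = pipeline.X(split='TRAIN')
    y_train = pipeline.y(split='TRAIN')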

class simpleml.pipelines.base_pipeline.DatasetSequence(split, batch_size, shuffle)

Bases: simpleml.imports.Sequence

Sequence wrapper for internal datasets. Only used for raw data mapping so return type is internal Split object. Transformed sequences are used to conform with external input types (keras tuples)

__getitem__(self, index)

Gets batch at position index.

Parameters

index – position of the batch in the Sequence

Returns

A batch

__len__(self)

Number of batches in the Sequence.

Returns

The number of batches in the Sequence.

on_epoch_end(self)

Method called at the end of every epoch.

static validated_split(split)

Confirms data is valid, otherwise returns None (makes downstream checking simpler)
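Wrapping sketch; the split object is a placeholder for an internal Split:

    from simpleml.pipelines.base_pipeline import DatasetSequence

    raw_sequence = DatasetSequence(split=split, batch_size=32, shuffle=True)
    num_batches = len(raw_sequence)    # __len__: number of batches
    first_batch = raw_sequence[0]      # __getitem__: batch 0, still an internal Split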

class simpleml.pipelines.base_pipeline.Pipeline(has_external_files=True, transformers=None, external_pipeline_class='default', fitted=False, **kwargs)

Bases: simpleml.pipelines.base_pipeline.AbstractPipeline

Base class for all Pipeline objects.

dataset_id: foreign key relation to the dataset used as input

__table_args__
__tablename__ = 'pipelines'
dataset
dataset_id
class simpleml.pipelines.base_pipeline.TransformedSequence(pipeline, dataset_sequence)

Bases: simpleml.imports.Sequence

Nested sequence class to apply transforms on batches in real-time and forward through as the next batch
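Chaining sketch, continuing from the DatasetSequence example above; feeding the result to a keras model is the intended use:

    from simpleml.pipelines.base_pipeline import TransformedSequence

    transformed_sequence = TransformedSequence(pipeline, raw_sequence)
    # A keras model can consume it directly, e.g. model.fit(transformed_sequence, epochs=5)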

__getitem__(self, *args, **kwargs)

Pass-through to dataset sequence - applies transform on raw data and returns batch

__len__(self)

Pass-through. Returns number of batches in dataset sequence

on_epoch_end(self)