simpleml.metrics.base_metric module

class simpleml.metrics.base_metric.AbstractMetric(name=None, has_external_files=False, author=None, project=None, version_description=None, save_method='disk_pickled', **kwargs)[source]

Bases: simpleml.persistables.base_persistable.Persistable

Abstract base class for all Metric objects

name: the metric name

values: JSON object with key: value pairs describing performance on the test dataset (ex: FPR: TPR pairs to trace a ROC curve). Single-value metrics take the form {'agg': value}
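For illustration, the two shapes of values might look like the following (the numbers are made up):

    # Single-value metric: one aggregate number under the 'agg' key
    values = {'agg': 0.87}

    # Curve-style metric: FPR -> TPR pairs (keys stored as strings in JSON)
    values = {'0.0': 0.0, '0.1': 0.45, '0.5': 0.92, '1.0': 1.0}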
add_model(model)[source]

Setter method for the model used to generate predictions
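A minimal usage sketch; ConcreteMetric and fitted_model are hypothetical placeholders for a concrete Metric subclass and a trained SimpleML model:

    metric = ConcreteMetric(name='my_metric')  # hypothetical subclass
    metric.add_model(fitted_model)             # associate the model before scoring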

load(**kwargs)[source]

Extend the main load routine to load the relationship class

object_type = 'METRIC'
save(**kwargs)[source]

Extend the parent function with a few additional save routines

score(**kwargs)[source]

Abstract method for each metric to define

Should set self.values
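A minimal sketch of a concrete subclass; AccuracyMetric is hypothetical, and the predict() interface on the associated model is an assumption for illustration, not a documented guarantee:

    from simpleml.metrics.base_metric import Metric

    class AccuracyMetric(Metric):  # hypothetical subclass
        def score(self, X, y, **kwargs):
            # self.model is populated via add_model() before scoring
            predictions = self.model.predict(X)  # assumes a predict() method
            # Single-value metrics follow the {'agg': value} convention
            self.values = {'agg': float((predictions == y).mean())}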

values = Column(None, JSON(), table=None, nullable=False)
class simpleml.metrics.base_metric.Metric(name=None, has_external_files=False, author=None, project=None, version_description=None, save_method='disk_pickled', **kwargs)[source]

Bases: simpleml.metrics.base_metric.AbstractMetric

Base class for all Metric objects

model_id: foreign key to the model that was used to generate predictions

TODO: Should the join criteria be a composite of model and dataset, to allow duplicate metric objects computed over different test datasets?
author
created_timestamp
filepaths
has_external_files
hash_
id
metadata_
model
model_id
modified_timestamp
name
project
registered_name
values
version
version_description
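
Putting the pieces together, a hedged end-to-end sketch using the hypothetical AccuracyMetric from above; fitted_model, X_test and y_test are placeholders for a trained model and held-out test data:

    metric = AccuracyMetric(name='test_accuracy', project='demo')
    metric.add_model(fitted_model)  # link the model that generates predictions
    metric.score(X_test, y_test)    # populates metric.values
    metric.save()                   # persist the metric and its model relationship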