simpleml.metrics.base_metric¶

Module Contents¶

Classes¶
AbstractMetric: Abstract Base class for all Metric objects
Metric: Base class for all Metric objects
class simpleml.metrics.base_metric.AbstractMetric(name=None, has_external_files=False, author=None, project=None, version_description=None, save_patterns=None, **kwargs)[source]¶

Bases: future.utils.with_metaclass()
Abstract Base class for all Metric objects

name: the metric name
values: JSON object with key: value pairs for performance on the test dataset
    (ex: FPR: TPR pairs to create an ROC curve)

Singular value metrics take the form {'agg': value}
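The two payload shapes described above can be illustrated with plain dictionaries (hypothetical example values, not output from SimpleML itself):

```python
# Curve-style metric: key/value pairs, e.g. FPR -> TPR points of an ROC curve.
roc_values = {0.0: 0.0, 0.25: 0.6, 0.5: 0.85, 1.0: 1.0}

# Singular-value metric: a single aggregate stored under the 'agg' key.
accuracy_values = {"agg": 0.93}
```

Both shapes are ordinary JSON-serializable mappings, which is what lets a single `values` column store curves and scalars alike.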
_get_dataset_split(self, **kwargs)[source]¶

Default accessor for dataset data. Refers to the raw dataset, not the pipeline superimposed on it. This means that datasets that do not define explicit splits will have no notion of downstream splits (e.g. those imposed by a RandomSplitPipeline).
_get_latest_version(self)[source]¶

Versions autoincrement for each object (constrained over friendly name and model). Executes a database lookup and increments.
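The autoincrement rule above can be sketched with a plain function. This is an illustrative emulation, assuming the database lookup simply returns the existing version numbers for the (name, model) pair; `next_version` is a hypothetical name, not SimpleML's API:

```python
def next_version(existing_versions):
    """Return the next autoincrementing version for one (name, model) pair.

    Emulates the database lookup described above with an in-memory list:
    take the highest existing version and add one, starting at 1 when no
    prior versions exist.
    """
    return max(existing_versions, default=0) + 1
```

Because the constraint is scoped to the friendly name and model, two metrics with different names each start at version 1 independently.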
_get_pipeline_split(self, column: str, split: str, **kwargs)[source]¶

For the special case where the metric's dataset is the same as the model's dataset, the dataset splits can refer to the splits imposed by the pipeline rather than the dataset's inherent splits. Use the pipeline split in that case (ex: a RandomSplitPipeline on a NoSplitDataset evaluating "in_sample" performance).
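The choice between the two accessors can be sketched as a simple conditional. Everything here is illustrative: the function name, the `get_split` method, and the boolean flag are assumptions for exposition, not SimpleML's actual interfaces:

```python
def resolve_split(dataset, pipeline, split, shares_model_dataset):
    """Return the requested data split, preferring pipeline-imposed splits
    when the metric's dataset is the same one the model was trained on."""
    if shares_model_dataset:
        # e.g. RandomSplitPipeline over a NoSplitDataset: the pipeline
        # defines TRAIN/TEST even though the dataset itself has no splits.
        return pipeline.get_split(split)
    # Otherwise fall back to the dataset's own (raw) splits.
    return dataset.get_split(split)
```

This mirrors the split between `_get_dataset_split` (raw dataset) and `_get_pipeline_split` (pipeline-imposed) described above.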
class simpleml.metrics.base_metric.Metric(name=None, has_external_files=False, author=None, project=None, version_description=None, save_patterns=None, **kwargs)[source]¶

Bases: simpleml.metrics.base_metric.AbstractMetric

Base class for all Metric objects
model_id: foreign key to the model that was used to generate predictions

TODO: Should the join criteria be a composite of model and dataset, to support multiple duplicate metric objects computed over different test datasets?