# General

:::tip
- For the configurations guide, please refer to the Configurations section.
- For the development guide, please refer to the Developer Guides section.
:::
## Introduction

In this section, we will introduce how to utilize `carefree-learn` to solve deep learning tasks in general. Recall that `Pipeline` serves as the (internal) user interface of `carefree-learn`, so one of the main efforts in utilizing `carefree-learn` will be constructing a suitable `Pipeline`.
:::info
- Please refer to the Configurations section for more details on how to construct a `Pipeline`.
- Please refer to the Supported Models section to see the currently supported models.
:::
After a `Pipeline` is constructed, another effort will be defining our dataset. Although different tasks require different data formats, `carefree-learn` introduces `DLDataModule` to unify the APIs, which means we can always utilize `carefree-learn` more or less in this way:
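(The snippet below is only a sketch: the `...` placeholders stand for the task-specific `DLDataModule` and `Pipeline` constructions described in the sections referenced afterwards.)

```python
# Construct the data module and the pipeline; both constructions are task specific,
# which is why placeholders are used here.
data = ...  # a `DLDataModule` (see the MLData / ImageFolderData sections)
m = ...     # a `Pipeline` (see the Configurations section)

# No matter the task, the high level APIs stay the same:
m.fit(data)                    # train the models on `data`
predictions = m.predict(data)  # run inference on (new) data
```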
:::info
- Please refer to the MLData section to see how to construct a `DLDataModule` for ML tasks.
- Please refer to the ImageFolderData section to see how to construct a `DLDataModule` for CV tasks.
:::
As shown above, `Pipeline` implements a `fit` method to train models on datasets, which is similar to `scikit-learn`. Details of the high-level APIs will be described in the following sections.
## `fit`

- `data`
  - The `DLDataModule` constructed from our dataset.
- `sample_weights` [default = `None`]
  - This is an experimental feature and is not fully supported.
- `cuda` [default = `None`]
  - Specify which `cuda` device we would like to train our models on.
  - If not provided, no `cuda` device will be used and the models will be trained on `cpu`.
### Example
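A minimal sketch, assuming `m` is an already-constructed `Pipeline` and `data` is an already-constructed `DLDataModule` (the exact form accepted by `cuda`, device index or string, may depend on the version):

```python
m.fit(data)          # train on cpu, since `cuda` is not provided
m.fit(data, cuda=0)  # train on the specified cuda device
```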
## `predict`

- `data`
  - The `DLDataModule` constructed from our (new) dataset.
- `batch_size` [default = `128`]
  - Specify the size of each batch we would like to use.
- `make_loader_kwargs` [default = `{}`]
  - Specify some extra configurations we would like to use when constructing the `DataLoader`.
- `predict_kwargs` [default = `{}`]
  - Specify some extra parameters we would like to use when running the forward pass.
### Example
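A minimal sketch, assuming `m` is a trained `Pipeline` and `test_data` is a `DLDataModule` constructed from the new samples (the structure of the returned predictions depends on the concrete task):

```python
# Run the forward pass on the new dataset with a larger batch size.
predictions = m.predict(test_data, batch_size=256)
```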
## `save`

- `export_folder`
  - Specify the export folder.
- `compress` [default = `True`]
  - Specify whether we should compress the exported stuff into a `.zip` file or not.
- `remove_original` [default = `True`]
  - Specify whether we should remove the original folder after compressing or not.
### Example
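A minimal sketch, assuming `m` is a trained `Pipeline` (the folder name `"saved"` is just an example):

```python
# With the defaults documented above, the exported folder will be compressed
# into a `.zip` file and the original (uncompressed) folder will be removed.
m.save("saved")
```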
## `load`

- `export_folder`
  - Specify the export folder.
- `cuda` [default = `None`]
  - Specify which `cuda` device we would like to load the models on.
  - If not provided, we will load the models to `cpu`.
- `compress` [default = `True`]
  - Specify whether the saved stuff is compressed or not.
- `states_callback` [default = `None`]
  - Specify the callback we would like to apply to the saved parameters.
- `pre_callback` [default = `None`]
  - Specify the callback we would like to apply to the saved configurations.
- `post_callback` [default = `None`]
  - Specify the callback we would like to apply after the `Pipeline` is loaded.
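### Example

A minimal sketch, assuming the saved `Pipeline` can be restored via a `load` entry point that takes `export_folder` as its first argument (whether it is exposed as a classmethod of `Pipeline` or as a module-level helper depends on the version, so treat the call site below as an assumption):

```python
# Restore the pipeline saved in the previous example and load it onto a cuda device.
# NOTE: `Pipeline.load` being a classmethod is an assumption; check the actual entry point.
m = Pipeline.load("saved", cuda=0)
```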
## `to_onnx`

- `export_folder`
  - Specify the export folder.
- `dynamic_axes` [default = `None`]
  - Specify the dynamic axes.
  - Notice that the first axis, which usually represents the batch size, should not be included here.
- `onnx_file` [default = `"model.onnx"`]
  - Specify the name of the saved onnx file.
- `opset` [default = `11`]
  - Specify the target opset version.
- `simplify` [default = `True`]
  - Specify whether we should simplify the exported onnx file using onnx-simplifier.
- `onnx_only` [default = `False`]
  - Specify whether we should save all the information or not.
- `forward_fn` [default = `None`]
  - If provided, we will replace the original forward pass with it when exporting to onnx.
- `output_names` [default = `None`]
  - Specify the names of the outputs.
  - If not provided, `carefree-learn` will infer the correct ones automatically.
- `input_sample` [default = `None`]
  - Specify the input sample for the forward pass.
  - If not provided, `carefree-learn` will utilize the training `DataLoader` to generate it.
    - However, sometimes the training `DataLoader` may not exist, in which case we should provide the `input_sample` manually.
- `num_samples` [default = `None`]
  - Specify whether we should use a dynamic batch size or not.
  - If not provided, the exported onnx file will have a dynamic batch size.
  - If provided, the exported onnx file will have a fixed batch size (equal to `num_samples`).
    - In most cases, `num_samples` will be either `None` or `1`.
- `compress` [default = `True`]
  - Specify whether we should compress the exported stuff into a `.zip` file or not.
- `remove_original` [default = `True`]
  - Specify whether we should remove the original folder after compressing or not.
- `verbose` [default = `True`]
  - Specify whether we want to log some messages to the console.
- `kwargs` [default = `{}`]
  - Specify other keyword arguments we want to use in `torch.onnx.export`.
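### Example

A minimal sketch, assuming `m` is a trained `Pipeline` (the folder name `"onnx_export"` is just an example):

```python
# With the defaults documented above, the onnx file will be named "model.onnx",
# opset 11 will be targeted, and the export folder will be compressed into a `.zip` file.
m.to_onnx("onnx_export")

# Export with a fixed batch size of 1 instead of a dynamic batch size.
m.to_onnx("onnx_export", num_samples=1)
```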