BetaML v0.8.0
- support for all models of the new "V2" API that implements a "standard"

  ```
  mod = Model([Options])
  fit!(mod,X,[Y])
  predict(mod,[X])
  ```

  workflow (details here; a concrete sketch follows below). The classic API is now deprecated: some of its functions will be removed in the next BetaML 0.9 versions, and some will no longer be exported.
- standardised function names to follow the Julia style guidelines and the new [BetaML code style guidelines](https://sylvaticus.github.io/BetaML.jl/dev/StyleGuide_templates.html)
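To illustrate the new workflow on a concrete model, here is a minimal sketch of a random forest regression (the data is made up, and the model name `RandomForestEstimator` with its `n_trees` option is inferred from the Random Forest autotuning defaults shown below):

```julia
using BetaML

x = rand(100,3)                          # 100 records, 3 features
y = 2 .* x[:,1] .+ x[:,2] .+ rand(100)   # a noisy numerical target

mod = RandomForestEstimator(n_trees=30)  # construct the model with its options
fit!(mod,x,y)                            # training happens in place
ŷ   = predict(mod,x)                     # predictions (here on the training data itself)
```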
- new hyper-parameter autotuning method:

  ```
  mod = ModelXX(autotune=true) # --> control autotune with the parameter `tunemethod`
  fit!(mod,x,[y])              # --> autotune happens here, together with the final fitting
  est = predict(mod,xnew)
  ```

  Autotune is multithreaded, with model-specific defaults. For example, for Random Forests the defaults are:

  ```
  tunemethod = SuccessiveHalvingSearch(
      hpranges     = Dict("n_trees"      => [10, 20, 30, 40],
                          "max_depth"    => [5,10,nothing],
                          "min_gain"     => [0.0, 0.1, 0.5],
                          "min_records"  => [2,3,5],
                          "max_features" => [nothing,5,10,30],
                          "beta"         => [0,0.01,0.1]),
      loss         = l2loss_by_cv, # works for both regression and classification
      res_shares   = [0.08, 0.1, 0.13, 0.15, 0.2, 0.3, 0.4], # data shares used at the successive iterations
      multithreads = false)        # RF are already multi-threaded
  ```

  With `SuccessiveHalvingSearch` the number of candidate models is progressively reduced at each iteration, while the share of data used to evaluate them grows, in order to arrive at a single model.
  Only the autotuning of supervised models is currently implemented; autotuning of GMM-based clustering, using `BIC` or `AIC`, is planned.
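Combining the two snippets above, autotuning a Random Forest over a custom (reduced) search space could look like this (a sketch: the data and the ranges are made up):

```julia
using BetaML

x = rand(300,3)
y = 2 .* x[:,1] .+ x[:,3] .+ rand(300)

mod = RandomForestEstimator(autotune=true,
        tunemethod=SuccessiveHalvingSearch(
            hpranges = Dict("n_trees" => [20,40], "max_depth" => [5,nothing])))
fit!(mod,x,y)          # the hyper-parameter search runs here, followed by the final fit
ŷ   = predict(mod,x)
```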
- new functions `model_load` and `model_save` to load/save trained models from the filesystem (a save/reload sketch follows this list)
- new `MinMaxScaler` (`StandardScaler` was already available as the classical API functions `scale` and `getScalingFactors`); a usage sketch follows this list
- many bugfixes/improvements in corner situations
- new MLJ interface models to `NeuralNetworkEstimator` (see the MLJ sketch below)
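A possible save/reload round trip with the new functions (a sketch: passing models as keyword arguments to `model_save` and retrieving them by name from `model_load` are assumptions, not a documented contract):

```julia
using BetaML

mod = RandomForestEstimator(n_trees=30)
fit!(mod, rand(50,3), rand(50))

model_save("mymodels.jld2"; mod)          # assumed: models are passed as keyword arguments
mod2 = model_load("mymodels.jld2","mod")  # assumed: a stored model is retrieved by its name
ŷ    = predict(mod2, rand(10,3))          # the reloaded model predicts like the original one
```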
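The new scaler should follow the same fit!/predict workflow. A sketch, assuming `MinMaxScaler` is plugged in as the method of a `Scaler` model (the wrapper name, the positional-method constructor and the default [0,1] output range are assumptions):

```julia
using BetaML

x   = [1.0 100; 2 200; 3 300; 4 400]

scl = Scaler(MinMaxScaler())   # assumed wrapper: a Scaler model parametrised by the scaling method
fit!(scl,x)                    # learn per-column minima and maxima
xs  = predict(scl,x)           # each column rescaled, assumed to the default [0,1] range
```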
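On the MLJ side, loading one of the new wrappers might look as follows (a sketch: the registered name `NeuralNetworkRegressor` is an assumption based on the current MLJ model registry, not on these release notes):

```julia
using MLJ

NNRegressor = @load NeuralNetworkRegressor pkg=BetaML  # assumed registry name
mod  = NNRegressor()
X, y = make_regression(100, 3)   # MLJ's synthetic regression dataset helper
mach = machine(mod, X, y)
fit!(mach)                       # train through the MLJ machinery
ŷ    = predict(mach, X)
```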
Closed issues:
- Improve `oneHotEncode` stability for encoding integers embedding categories (#29)
- `initVariances!` doesn't support mixed-type variances (#33)
- Error generating MLJ model registry (#37)
- WARNING: could not import Perceptron ... (#38)
- MLJ model `BetaMLGMMRegressor` predicting row vectors instead of column vectors (#40)