
Commit d82f2c6

committed: update homepage, h2 fix
1 parent f8390f0

1 file changed: docs/homepage.md (10 additions, 4 deletions)
@@ -89,7 +89,8 @@ Mambular simplifies data preprocessing with a range of tools designed for easy t
 
 
 <h2> Fit a Model </h2>
-Fitting a model in mambular is as simple as it gets. All models in mambular are sklearn BaseEstimators. Thus the `.fit` method is implemented for all of them. Additionally, this allows for using all other sklearn inherent methods such as their built in hyperparameter optimization tools.
+
+Fitting a model in mambular is as simple as it gets. All models in mambular are sklearn BaseEstimators. Thus, the `fit` method is implemented for all of them. Additionally, this allows for using all other sklearn-inherent methods, such as their built-in hyperparameter optimization tools.
 
 
 ```python
 from mambular.models import MambularClassifier
@@ -101,12 +102,12 @@ model = MambularClassifier(
     n_bins=50,
     d_conv=8
 )
-
 # X can be a dataframe or something that can be easily transformed into a pd.DataFrame as a np.array
 model.fit(X, y, max_epochs=150, lr=1e-04)
 ```
 
 Predictions are also easily obtained:
+
 ```python
 # simple predictions
 preds = model.predict(X)
@@ -116,6 +117,7 @@ preds = model.predict_proba(X)
 ```
 
 <h3> Hyperparameter Optimization</h3>
+
 Since all of the models are sklearn base estimators, you can use the built-in hyperparameter optimization from sklearn.
 
 ```python
@@ -145,7 +147,11 @@ random_search.fit(X, y, **fit_params)
 print("Best Parameters:", random_search.best_params_)
 print("Best Score:", random_search.best_score_)
 ```
-Note, that using this, you can also optimize the preprocessing. Just use the prefix ``prepro__`` when specifying the preprocessor arguments you want to optimize:
+
+
+**Note:** Using this, you can also optimize the preprocessing. Just use the prefix ``prepro__`` when specifying the preprocessor arguments you want to optimize:
+
+
 ```python
 param_dist = {
     'd_model': randint(32, 128),
@@ -156,7 +162,6 @@ param_dist = {
 
 ```
 
-
 Since we have early stopping integrated and return the best model with respect to the validation loss, setting max_epochs to a large number is sensible.
 
 
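The `prepro__` prefix in the hunks above is sklearn's standard nested-parameter routing (`step__param`). A minimal, sklearn-only sketch of how that routing behaves; the `Pipeline`, `KBinsDiscretizer`, and step names here are illustrative stand-ins, not part of the mambular API:

```python
import numpy as np
from scipy.stats import randint
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# A pipeline whose first step is named "prepro", mirroring the prefix
# convention the doc describes for Mambular's preprocessor arguments.
pipe = Pipeline([
    ("prepro", KBinsDiscretizer(encode="ordinal")),
    ("clf", LogisticRegression(max_iter=500)),
])

# "prepro__n_bins" is routed to the step named "prepro";
# "clf__C" is routed to the classifier.
param_dist = {
    "prepro__n_bins": randint(3, 10),
    "clf__C": [0.1, 1.0, 10.0],
}

search = RandomizedSearchCV(pipe, param_dist, n_iter=5, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)
```

The same double-underscore convention applies unchanged to any sklearn-compatible estimator that exposes nested components via `get_params`/`set_params`.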
@@ -300,6 +305,7 @@ Here's how you can implement a custom model with Mambular:
 ```
 
 # Custom Training
+
 If you prefer to set up custom training, preprocessing and evaluation, you can simply use the `mambular.base_models`.
 Just be careful that all basemodels expect lists of features as inputs. More precisely, a list for numerical features and a list for categorical features. A custom training loop, with random data, could look like this.
 
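The custom-training code itself falls outside this diff. A hedged sketch of a loop over random data in the shape the text describes (lists of numerical and categorical feature tensors); `ToyBaseModel` is a stand-in written for illustration, not a class from `mambular.base_models`:

```python
import torch
import torch.nn as nn

class ToyBaseModel(nn.Module):
    """Stand-in mimicking the described interface: lists of features as inputs."""
    def __init__(self, num_feature_dims, cat_cardinalities, d_model=16):
        super().__init__()
        self.num_proj = nn.ModuleList(nn.Linear(d, d_model) for d in num_feature_dims)
        self.cat_emb = nn.ModuleList(nn.Embedding(c, d_model) for c in cat_cardinalities)
        self.head = nn.Linear(d_model, 1)

    def forward(self, num_features, cat_features):
        # num_features / cat_features are lists of tensors, one per feature
        parts = [proj(x) for proj, x in zip(self.num_proj, num_features)]
        parts += [emb(x) for emb, x in zip(self.cat_emb, cat_features)]
        return self.head(torch.stack(parts).mean(dim=0)).squeeze(-1)

torch.manual_seed(0)
num_features = [torch.randn(64, 1) for _ in range(3)]          # three numerical features
cat_features = [torch.randint(0, 5, (64,)) for _ in range(2)]  # two categorical features
y = torch.randn(64)

model = ToyBaseModel(num_feature_dims=[1, 1, 1], cat_cardinalities=[5, 5])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(num_features, cat_features), y)
    loss.backward()
    opt.step()
```

A real base model would replace `ToyBaseModel`; the point is only the calling convention of passing two lists of per-feature tensors.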