
Commit a4b624c

Merge pull request #167 from basf/version_bump

version bump

2 parents: 978c49e + 4e4cde8

4 files changed: 4 additions, 5 deletions

File tree:
- README.md
- mambular/__version__.py
- mambular/arch_utils/layer_utils/embedding_layer.py
- mambular/configs/mlp_config.py

README.md

Lines changed: 1 addition, 1 deletion

@@ -16,7 +16,7 @@
 </div>
 
 <div style="text-align: center;">
-<h1>Mambular: Tabular Deep Learning</h1>
+<h1>Mambular: Tabular Deep Made Simple</h1>
 </div>
 
 Mambular is a Python library for tabular deep learning. It includes models that leverage the Mamba (State Space Model) architecture, as well as other popular models like TabTransformer, FTTransformer, TabM and tabular ResNets. Check out our paper `Mambular: A Sequential Model for Tabular Deep Learning`, available [here](https://arxiv.org/abs/2408.06291). Also check out our paper introducing [TabulaRNN](https://arxiv.org/pdf/2411.17207) and analyzing the efficiency of NLP inspired tabular models.

mambular/__version__.py

Lines changed: 1 addition, 1 deletion

@@ -1,4 +1,4 @@
 """Version information."""
 
 # The following line *must* be the last in the module, exactly as formatted:
-__version__ = "0.2.3"
+__version__ = "1.0.0"
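The jump from 0.2.3 to 1.0.0 is a major-version change under semantic versioning. As a minimal stdlib-only sketch (not Mambular code), version strings like these compare correctly once parsed into integer tuples:

```python
# Sketch (stdlib only, not part of Mambular): compare the old and new
# __version__ strings numerically. Assumes simple "X.Y.Z" versions.
def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

old_version = "0.2.3"
new_version = "1.0.0"

# Tuple comparison is element-wise, so (1, 0, 0) > (0, 2, 3),
# whereas naive string comparison would get "0.2.3" < "1.0.0" right
# only by accident (e.g. "0.10.0" < "0.2.3" as strings).
assert parse_version(new_version) > parse_version(old_version)
```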

mambular/arch_utils/layer_utils/embedding_layer.py

Lines changed: 1 addition, 1 deletion

@@ -54,7 +54,7 @@ def __init__(self, num_feature_info, cat_feature_info, config):
             d_embedding=self.d_model,
             n_frequencies=getattr(config, "n_frequencies", 48),
             frequency_init_scale=getattr(config, "frequency_init_scale", 0.01),
-            activation=self.embedding_activation,
+            activation=True,
             lite=getattr(config, "plr_lite", False),
         )
     elif self.embedding_type == "linear":
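The context lines around this change read optional config attributes via `getattr(config, name, default)`, so a missing field falls back to a default instead of raising. A small illustration of that pattern (the `Cfg` class here is a hypothetical stand-in, not the library's actual config object):

```python
# Sketch of the getattr-with-default pattern used in the diff above.
# Cfg is a hypothetical stand-in for the library's config object.
class Cfg:
    frequency_init_scale = 0.05  # explicitly set by the user

cfg = Cfg()

# Present attribute: the configured value wins.
scale = getattr(cfg, "frequency_init_scale", 0.01)

# Absent attribute: the default is used instead of raising AttributeError.
n_freq = getattr(cfg, "n_frequencies", 48)
```

This keeps the embedding layer usable with older or partial config objects that predate a given attribute.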

mambular/configs/mlp_config.py

Lines changed: 1 addition, 2 deletions

@@ -62,7 +62,7 @@ class DefaultMLPConfig:
     weight_decay: float = 1e-06
     lr_factor: float = 0.1
     layer_sizes: list = (256, 128, 32)
-    activation: callable = nn.SELU()
+    activation: callable = nn.ReLU()
     skip_layers: bool = False
     dropout: float = 0.2
     use_glu: bool = False
@@ -76,5 +76,4 @@ class DefaultMLPConfig:
     embedding_bias: bool = False
     layer_norm_after_embedding: bool = False
     d_model: int = 32
-    embedding_type: float = "plr"
     plr_lite: bool = False
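`DefaultMLPConfig` is a config class of hyperparameter defaults; the hunks above swap the default activation from SELU to ReLU and drop the `embedding_type` field. A torch-free sketch of such a defaults dataclass (field names taken from the diff; `activation` is stored as a string here only because `nn.ReLU()` would require PyTorch):

```python
from dataclasses import dataclass

# Torch-free sketch of a defaults dataclass like DefaultMLPConfig.
# Field names come from the diff above; activation is a string stand-in
# for the nn.Module default used in the real config.
@dataclass
class MLPConfigSketch:
    lr_factor: float = 0.1
    layer_sizes: tuple = (256, 128, 32)  # immutable, safe as a default
    activation: str = "relu"             # was "selu" before this commit
    dropout: float = 0.2
    d_model: int = 32
    plr_lite: bool = False

# Every field has a default, so callers override only what they need:
cfg = MLPConfigSketch(dropout=0.3)
```

Using a tuple rather than a list for `layer_sizes` sidesteps Python's mutable-default pitfall in dataclasses.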
