
GANs for tabular data

GANs are well known for their success in realistic image generation; however, they can also be applied to tabular data generation. We will review and examine some recent papers about tabular GANs in action.

How to use the library

  • Installation: pip install tabgan
  • To generate new training data by sampling and then filtering it with adversarial training, call GANGenerator().generate_data_pipe:
from tabgan.sampler import OriginalGenerator, GANGenerator
import pandas as pd
import numpy as np

# random input data
train = pd.DataFrame(np.random.randint(-10, 150, size=(50, 4)), columns=list("ABCD"))
target = pd.DataFrame(np.random.randint(0, 2, size=(50, 1)), columns=list("Y"))
test = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list("ABCD"))

# generate data
new_train1, new_target1 = OriginalGenerator().generate_data_pipe(train, target, test)
new_train2, new_target2 = GANGenerator().generate_data_pipe(train, target, test)

# example with all params defined
new_train3, new_target3 = GANGenerator(gen_x_times=1.1, cat_cols=None, bot_filter_quantile=0.001,
                                       top_filter_quantile=0.999,
                                       is_post_process=True,
                                       adversaial_model_params={
                                           "metrics": "AUC", "max_depth": 2,
                                           "max_bin": 100, "n_estimators": 500,
                                           "learning_rate": 0.02, "random_state": 42,
                                       }, pregeneration_frac=2, only_generated_data=False,
                                       epochs=500).generate_data_pipe(train, target,
                                                                      test, deep_copy=True,
                                                                      only_adversarial=False,
                                                                      use_adversarial=True)

Both samplers, OriginalGenerator and GANGenerator, take the same input parameters:

  • gen_x_times: float = 1.1 - how much data to generate; the output may be smaller because of postprocessing and adversarial filtering
  • cat_cols: list = None - list of categorical columns
  • bot_filter_quantile: float = 0.001 - bottom quantile for postprocess filtering
  • top_filter_quantile: float = 0.999 - top quantile for postprocess filtering
  • is_post_process: bool = True - whether to perform post-filtering; if False, bot_filter_quantile and top_filter_quantile are ignored
  • adversaial_model_params: dict - parameters for the adversarial filtering model; the default values are for a binary task
  • pregeneration_frac: float = 2 - at the generation step, gen_x_times * pregeneration_frac times the original amount of data will be generated; however, after postprocessing about (1 + gen_x_times) times the original amount of data will be returned
  • epochs: int = 500 - number of epochs to train the GAN sampler; ignored for OriginalGenerator
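A hedged illustration of how these row counts interact, based on our reading of the parameter descriptions above (this is back-of-the-envelope arithmetic, not the library's internals):

```python
# assumed row-count arithmetic implied by the parameter descriptions above
n_train = 1000
gen_x_times = 1.1
pregeneration_frac = 2

# rows produced at the generation step, before any filtering
pregenerated = int(n_train * gen_x_times * pregeneration_frac)

# approximate rows returned after postprocessing: original plus generated
returned = int(n_train * (1 + gen_x_times))

print(pregenerated, returned)  # 2200 2100
```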

The generate_data_pipe method parameters:

  • train_df: pd.DataFrame - train dataframe; the target is passed separately
  • target: pd.DataFrame - target for the train dataset
  • test_df: pd.DataFrame - test dataframe; the newly generated train dataframe should be close to it
  • deep_copy: bool = True - whether to copy the input dataframes; if False, the input dataframes will be modified in place
  • only_adversarial: bool = False - perform only adversarial filtering on the train dataframe
  • use_adversarial: bool = True - whether to perform adversarial filtering
  • only_generated_data: bool = False - return only the newly generated data, without concatenating the input train dataframe
  • @return: Tuple[pd.DataFrame, pd.DataFrame] - the newly generated train dataframe and its target
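The deep_copy flag matters because pandas dataframes are passed by reference. A minimal illustration with plain pandas (this shows generic Python behavior, not tabgan internals):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3]})

alias = df                 # no copy: both names point at the same object
alias["A"] = 0             # this also changes df
print(df["A"].tolist())    # [0, 0, 0]

df2 = pd.DataFrame({"A": [1, 2, 3]})
copied = df2.copy(deep=True)   # independent copy, as with deep_copy=True
copied["A"] = 0                # df2 is untouched
print(df2["A"].tolist())   # [1, 2, 3]
```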

Thus, you may use this library to improve your dataset quality:

import pandas as pd
import sklearn.datasets
import sklearn.ensemble
import sklearn.metrics
import sklearn.model_selection

from tabgan.sampler import OriginalGenerator, GANGenerator


def fit_predict(clf, X_train, y_train, X_test, y_test):
    clf.fit(X_train, y_train)
    return sklearn.metrics.roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])



dataset = sklearn.datasets.load_breast_cancer()
clf = sklearn.ensemble.RandomForestClassifier(n_estimators=25, max_depth=6)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(
    pd.DataFrame(dataset.data), pd.DataFrame(dataset.target, columns=["target"]), test_size=0.33, random_state=42)
print("initial metric", fit_predict(clf, X_train, y_train, X_test, y_test))

new_train1, new_target1 = OriginalGenerator().generate_data_pipe(X_train, y_train, X_test)
print("OriginalGenerator metric", fit_predict(clf, new_train1, new_target1, X_test, y_test))

new_train2, new_target2 = GANGenerator().generate_data_pipe(X_train, y_train, X_test)
print("GANGenerator metric", fit_predict(clf, new_train2, new_target2, X_test, y_test))

Datasets and experiment design

Running experiment

To run the experiment, follow these steps:

  1. Clone the repository. All required datasets are stored in the ./Research/data folder.
  2. Install the requirements: pip install -r requirements.txt
  3. Run all experiments: python ./Research/run_experiment.py. You may add more datasets and adjust the validation type and categorical encoders.
  4. Observe the metrics across all experiments in the console or in ./Research/results/fit_predict_scores.txt

Task formalization

Let's say we have T_train and T_test (the train and test sets, respectively). We need to train a model on T_train and make predictions on T_test. However, we will enlarge the train set with new data generated by a GAN, somehow similar to T_test, without using ground truth labels.

Experiment design

Let's say we have T_train and T_test (the train and test sets, respectively). T_train is smaller and may have a different data distribution. First, we train CTGAN on T_train with ground truth labels (step 1), then generate additional data T_synth (step 2). Second, we train a boosting model in an adversarial way on T_train and T_synth concatenated (target set to 0) against T_test (target set to 1) (steps 3 & 4). The goal is to apply the newly trained adversarial boosting model to obtain rows more like T_test. Note that the initial ground truth labels aren't used for adversarial training. As a result, we take the top rows from T_train and T_synth, sorted by their correspondence to T_test (steps 5 & 6), train a new boosting model on them, and check the results on T_test.
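The adversarial filtering in steps 3-6 can be sketched as follows. This is a simplified illustration with scikit-learn and synthetic data, not the pipeline's actual code; names such as test_likeness are ours:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# toy stand-ins: T_train + T_synth (label 0) vs T_test (label 1);
# the test distribution is shifted so the example is non-trivial
train_synth = pd.DataFrame(rng.normal(0.0, 1.0, size=(200, 4)), columns=list("ABCD"))
test = pd.DataFrame(rng.normal(0.5, 1.0, size=(100, 4)), columns=list("ABCD"))

X = pd.concat([train_synth, test], ignore_index=True)
y = np.r_[np.zeros(len(train_synth)), np.ones(len(test))]

# steps 3 & 4: train an adversarial model to tell train/synth rows from test rows
adv = RandomForestClassifier(n_estimators=50, random_state=42).fit(X, y)

# steps 5 & 6: score train/synth rows by how "test-like" they look, keep the top half
test_likeness = adv.predict_proba(train_synth)[:, 1]
keep = np.argsort(-test_likeness)[: len(train_synth) // 2]
filtered_train = train_synth.iloc[keep]

print(filtered_train.shape)  # (100, 4)
```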

Picture 1.1 Experiment design and workflow

Of course, for benchmarking purposes we also test ordinary training without these tricks, as well as the original pipeline but without CTGAN (in step 3 we don't use T_synth).

Datasets

All datasets come from different domains and have different numbers of observations and of categorical and numerical features. The objective for all datasets is binary classification. Preprocessing was simple: all time-based columns were removed; the remaining columns were either categorical or numerical.

Table 1.1 Used datasets

| Name | Total points | Train points | Test points | Number of features | Number of categorical features | Short description |
| --- | --- | --- | --- | --- | --- | --- |
| Telecom | 7.0k | 4.2k | 2.8k | 20 | 16 | Churn prediction for telecom data |
| Adult | 48.8k | 29.3k | 19.5k | 15 | 8 | Predict if a person's income exceeds $50k |
| Employee | 32.7k | 19.6k | 13.1k | 10 | 9 | Predict an employee's access needs, given his/her job role |
| Credit | 307.5k | 184.5k | 123k | 121 | 18 | Loan repayment |
| Mortgages | 45.6k | 27.4k | 18.2k | 20 | 9 | Predict if a house mortgage is funded |
| Taxi | 892.5k | 535.5k | 357k | 8 | 5 | Predict the probability of an offer being accepted by a certain driver |
| Poverty_A | 37.6k | 22.5k | 15.0k | 41 | 38 | Predict whether or not a given household for a given country is poor |

Results

To determine the best sampling strategy, the ROC AUC scores for each dataset were scaled (min-max scale) and then averaged across datasets.
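For concreteness, this scaling can be reproduced with pandas as follows; the numbers here are made up for illustration, not taken from the experiments:

```python
import pandas as pd

# hypothetical ROC AUC per dataset (rows) and sampling strategy (columns)
scores = pd.DataFrame(
    {"None": [0.80, 0.90], "gan": [0.85, 0.88], "sample_original": [0.82, 0.92]},
    index=["dataset_a", "dataset_b"],
)

# min-max scale within each dataset (row), then average across datasets
scaled = scores.sub(scores.min(axis=1), axis=0).div(
    scores.max(axis=1) - scores.min(axis=1), axis=0
)
print(scaled.mean(axis=0))
```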

Table 1.2 Different sampling results across the dataset, higher is better (100% - maximum per dataset ROC AUC)

| dataset_name | None | gan | sample_original |
| --- | --- | --- | --- |
| credit | 0.997 | 0.998 | 0.997 |
| employee | 0.986 | 0.966 | 0.972 |
| mortgages | 0.984 | 0.964 | 0.988 |
| poverty_A | 0.937 | 0.950 | 0.933 |
| taxi | 0.966 | 0.938 | 0.987 |
| adult | 0.995 | 0.967 | 0.998 |
| telecom | 0.995 | 0.868 | 0.992 |

Table 1.3 Different sampling results, higher is better for a mean (ROC AUC), lower is better for std (100% - maximum per dataset ROC AUC)

| sample_type | mean | std |
| --- | --- | --- |
| None | 0.980 | 0.036 |
| gan | 0.969 | 0.06 |
| sample_original | 0.981 | 0.032 |

Table 1.4 same_target_prop equals 1 when the target rates for train and test differ by no more than 5%. Higher is better.

| sample_type | same_target_prop | prop_test_score |
| --- | --- | --- |
| None | 0 | 0.964 |
| None | 1 | 0.985 |
| gan | 0 | 0.966 |
| gan | 1 | 0.945 |
| sample_original | 0 | 0.973 |
| sample_original | 1 | 0.984 |
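A hedged reading of the same_target_prop flag in Table 1.4, assuming "differ by no more than 5%" means five percentage points (the function name is ours, for illustration only):

```python
def same_target_prop(train_rate: float, test_rate: float) -> int:
    """1 if the train and test target rates differ by at most 0.05, else 0."""
    return int(abs(train_rate - test_rate) <= 0.05)

print(same_target_prop(0.30, 0.33))  # 1
print(same_target_prop(0.30, 0.40))  # 0
```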

Acknowledgments

The author would like to thank the Open Data Science community [8] for many valuable discussions and educational help in the growing field of machine and deep learning. Special thanks also to Sber [9] for the opportunity to solve such tasks and for providing computational resources.

Citation

If you use GAN-for-tabular-data in a scientific publication, we would appreciate references to the following BibTeX entries. The arXiv publication:

@misc{ashrapov2020tabular,
      title={Tabular GANs for uneven distribution}, 
      author={Insaf Ashrapov},
      year={2020},
      eprint={2010.00638},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

The library itself:

@misc{Diyago2020tabgan,
    author       = {Ashrapov, Insaf},
    title        = {GANs for tabular data},
    howpublished = {\url{https://github.com/Diyago/GAN-for-tabular-data}},
    year         = {2020}
}

References

[1] Jonathan Hui. GAN — What is Generative Adversarial Networks GAN? (2018). Medium article.

[2] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. Generative Adversarial Networks (2014). arXiv:1406.2661

[3] Lei Xu, Kalyan Veeramachaneni. Synthesizing Tabular Data using Generative Adversarial Networks (2018). arXiv:1811.11264 [cs.LG]

[4] Lei Xu, Maria Skoularidou, Alfredo Cuesta-Infante, Kalyan Veeramachaneni. Modeling Tabular Data using Conditional GAN (2019). arXiv:1907.00503 [cs.LG]

[5] Denis Vorotyntsev. Benchmarking Categorical Encoders (2019). Medium post.

[6] Insaf Ashrapov. GAN-for-tabular-data (2020). GitHub repository.

[7] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, Timo Aila. Analyzing and Improving the Image Quality of StyleGAN (2019). arXiv:1912.04958 [cs.CV]

[8] ODS.ai: Open Data Science community (2020), https://ods.ai/

[9] Sber (2020), https://www.sberbank.ru/
