Dlint is a tool for encouraging best coding practices and helping ensure Python code is secure.
> The most important thing I have done as a programmer in recent years is to aggressively pursue static code analysis. Even more valuable than the hundreds of serious bugs I have prevented with it is the change in mindset about the way I view software reliability and code quality.
>
> - John Carmack, 2011, "Static Code Analysis"

> For a static analysis project to succeed, developers must feel they benefit from and enjoy using it.
>
> - Caitlin Sadowski et al., 2018, "Lessons from Building Static Analysis Tools at Google"
For documentation and a full list of rules, see the docs.
Install with pip:

```
$ python -m pip install dlint
```
Use `python3` in place of `python` to install for a specific Python version.
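For example, to install for whatever interpreter `python3` points to:

```
$ python3 -m pip install dlint
```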
And double-check that it was installed correctly:

```
$ python -m flake8 -h
Usage: flake8 [options] file file ...
...
Installed plugins: dlint: 0.11.0, mccabe: 0.5.3, pycodestyle: 2.2.0, pyflakes: 1.3.0
```

Note the `dlint: 0.11.0` entry under `Installed plugins`.
Dlint builds on flake8 to perform its linting. This provides many useful features without re-inventing the wheel.
Let's run a simple check:
```
$ cat << EOF > test.py
print("TEST1")
exec('print("TEST2")')
EOF
```
```
$ python test.py
TEST1
TEST2
```
```
$ python -m flake8 --select=DUO test.py
test.py:2:1: DUO105 use of "exec" is insecure
```
Why `DUO`? Dlint was originally developed by the Duo Labs team, and its rule codes keep that prefix. The `--select=DUO` flag tells flake8 to run only Dlint lint rules.
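If you need to suppress an individual finding, flake8's standard `noqa` comments work for Dlint codes too:

```
exec('print("TEST2")')  # noqa: DUO105
```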
From here, we can easily run Dlint against a directory of Python code:
```
$ python -m flake8 --select=DUO /path/to/code
```
To fine-tune your linting, check out the flake8 help:

```
$ python -m flake8 --help
```
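For example, rather than passing `--select=DUO` on every run, you can persist it in flake8's configuration. A minimal sketch, placed in `setup.cfg`, `tox.ini`, or `.flake8` (note that `select` here enables *only* Dlint's rules):

```
[flake8]
select = DUO
```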
Dlint results can also be surfaced inline in your editor for fast feedback. This typically requires an editor plugin or extension; any editor integration that runs flake8 will pick up Dlint's rules automatically.
Dlint can easily be integrated into CI pipelines, or anything really.
For more information and examples see 'How can I integrate Dlint into XYZ?'.
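For instance, a minimal CI step might look like the following sketch (the `src/` path is an assumption about your layout). Because flake8 exits non-zero when it reports findings, most CI systems will fail the job automatically:

```
$ python -m pip install flake8 dlint
$ python -m flake8 --select=DUO src/
```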
To write a custom plugin, implement the `get_results` function appropriately and inherit from one of Dlint's base linter classes. See an example plugin for further details.
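As a rough illustration, a custom linter might look like the sketch below. The import path, the `Flake8Result` container, and the `visit_Call` hook are assumptions based on how Dlint's built-in linters are structured; treat the example plugin as the authoritative reference.

```python
# dlint_example.py -- a hypothetical sketch, not Dlint's verbatim API.
import ast

from dlint import base  # assumed import path for Dlint's base linter


class TimeSleepUseLinter(base.BaseLinter):
    """Example rule: flag any call to time.sleep."""

    _code = "DUO999"  # hypothetical, unclaimed rule code
    _error_tmpl = "DUO999 use of time.sleep found"

    def visit_Call(self, node):
        # Record a result when the call looks like time.sleep(...)
        if (isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "time"
                and node.func.attr == "sleep"):
            self.results.append(
                base.Flake8Result(
                    lineno=node.lineno,
                    col_offset=node.col_offset,
                    message=self._error_tmpl,
                )
            )
        self.generic_visit(node)
```

Here `get_results` is inherited from the base class, which returns the accumulated `self.results`.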
First, install development packages:
```
$ python -m pip install -r requirements.txt
$ python -m pip install -r requirements-dev.txt
$ python -m pip install -e .
```
Then run the test suite with coverage:

```
$ pytest --cov
```
To benchmark Dlint end-to-end against a Python file of your choice:

```
$ pytest -k test_benchmark_run --benchmark-py-file /path/to/file.py tests/test_benchmark/
```
Or get benchmark results for linters individually:
```
$ pytest -k test_benchmark_individual --benchmark-py-file /path/to/file.py tests/test_benchmark/
```
Or run against a single linter:
```
$ pytest -k test_benchmark_individual[DUO138-BadReCatastrophicUseLinter] --benchmark-py-file /path/to/file.py tests/test_benchmark/
```