Development#

This section provides guidance for those who wish to contribute to the project. It includes instructions for setting up the development environment, cloning the repository, installing the project in development mode, running tests, and building the documentation.

To ensure that new features and changes adhere to project standards, maintain quality, and keep the documentation up to date, contributors are required to follow:

  • Style-consistent formatting;

  • Documentation-oriented practices;

  • Test-driven development.

Minimal Workflow#

This is a quick guide to the development protocol; some important but obvious steps are omitted. See the sections below for more details.

Setup#

  1. Clone the repository to a local branch:

git clone https://github.com/iporepos/babilonia.git
  2. Install dependencies in dev mode:

python -m pip install -e .[dev,docs]

Development loop#

  1. Develop features under the ./src/babilonia/ folder;

  2. Develop unit tests for the features under ./tests/unit or ./tests/bcmk;

  3. Document features directly using docstrings;

  4. Document features in the API by editing ./docs/api.rst file;

  5. If ready, proceed to checkout; otherwise, repeat.
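As a sketch of steps 1-3 above, a tiny feature paired with its docstring and unit test might look like the following. The function `slugify`, its behavior, and the file locations mentioned in comments are illustrative assumptions, not actual project code:

```python
# Hypothetical feature that would live under src/babilonia/,
# paired with a unit test that would live under tests/unit/.
import unittest


def slugify(text):
    """
    Convert a string to a lowercase, hyphen-separated slug.

    **Examples**

    >>> slugify("Hello World")
    'hello-world'
    """
    return "-".join(text.lower().split())


class TestSlugify(unittest.TestCase):
    # Short test targeting feature behavior, per the unit-test convention.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")
```

The docstring doubles as documentation (step 3) and as an example test, while the `TestCase` covers step 2.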

Checkout#

  1. Apply style:

python -m dev.style
  2. Build docs (locally):

python -m dev.docs --open
  3. If the previous steps passed, run all CI-based tests:

python -m dev.tests
  4. If the previous step passed, stage and commit:

git add .
git commit -m "Message"
  5. If appropriate, tag and publish:

git tag -a vX.Y.Z -m "Release X.Y.Z (message)"
git push origin main
git push origin --tags

Cloning#

Use your IDE to authenticate with GitHub and clone the repository branch of interest to your local system.

Note

Of course, Git must be set up as the version control system.

Alternatively, clone via terminal:

# [CHANGE THIS] set username, repository and (optional) branch
git clone https://github.com/{username}/{repository}.git

Installing as a developer#

For development, it is recommended to set up a Python virtual environment (venv) locally for the repository. This helps you avoid dependency conflicts with your other projects.

Important

Of course, you need Python installed on your system.

Move to the repo root folder:

# [CHANGE THIS] set your own actual local path 
cd ./path/to/babilonia

Create a python venv:

python -m venv .venv

Activate the venv session. On Unix (Linux/Mac):

source .venv/bin/activate

Activate the venv on Windows (PowerShell):

.\.venv\Scripts\Activate.ps1

Now, under the venv session, install the project in editable mode (-e), including the dev and docs dependency groups (.[dev,docs]):

python -m pip install -e .[dev,docs]

This will install all dependencies needed for both development and documentation.


Versioning#

The project's versioning is based on Git, and the remote repository is hosted on GitHub.

Versioning cycle#

Before and after a development session, a healthy practice is to run:

git status

Considering the status output, add all changed files to the staging area:

git add .

After some substantial development, consider committing the changes to the local Git repository:

git commit -m "Commit message (e.g., 'Bug fixes')"

Repeat the cycle until it feels ready to publish to the remote host.

Publishing#

Before publishing, a healthy practice is to check the existing tags:

git tag

Considering the output, decide on a new tag and add it:

git tag -a vX.Y.Z -m "Release X.Y.Z (message)"
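Before tagging, it can help to sanity-check that the chosen tag matches the vMAJOR.MINOR.PATCH pattern used in this project. A minimal shell sketch (the helper name `validate_tag` is hypothetical, not part of the project tooling):

```shell
# Hypothetical helper: returns success only for tags shaped like vMAJOR.MINOR.PATCH
validate_tag() {
  echo "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}

validate_tag "v1.2.3" && echo "ok: v1.2.3"
validate_tag "1.2.3" || echo "rejected: 1.2.3 (missing leading v)"
```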

After tagging, publish explicitly:

git push origin main

Or simply:

git push

Then push the tags:

git push origin --tags

Tags convention#

This project's tags follow Semantic Versioning (vMAJOR.MINOR.PATCH), with the interpretations below.

Major vX.y.z — Project Maturity Level#

  • v0.x.x — Experimental

    • Playground for exploring architecture and project layout.

    • Breaking changes are expected.

  • v1.x.x — Stable Foundation

    • Production-ready core architecture.

    • Actively developed.

    • Backward compatibility is expected.

  • v2.x.x — Next Generation

    • May introduce new syntax or paradigms.

    • Can be incompatible with v1.x.x.

    • More mature, better documented, and more stable.

Minor vx.Y.z — Milestones#

  • Major feature additions.

  • Large refactors within the same architecture.

  • Treated as logical restore points.

Patch vx.y.Z — Maintenance#

  • Bug fixes.

  • Small improvements.

  • Documentation corrections.

  • No behavioral changes.
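One practical consequence of the vMAJOR.MINOR.PATCH scheme is that versions must be compared numerically, not as strings. A small Python sketch (the tag values are hypothetical):

```python
# Parse vMAJOR.MINOR.PATCH tags into comparable tuples of integers.
def parse_tag(tag):
    """Split a tag like 'v1.2.10' into a (major, minor, patch) tuple."""
    major, minor, patch = tag.lstrip("v").split(".")
    return (int(major), int(minor), int(patch))


tags = ["v0.9.1", "v0.10.0", "v0.2.3"]
# Tuple comparison orders versions numerically, so v0.10.0 sorts after
# v0.9.1; a plain string sort would get this wrong.
print(sorted(tags, key=parse_tag))  # → ['v0.2.3', 'v0.9.1', 'v0.10.0']
```

This is also why `git tag --sort=v:refname` exists: it applies version-aware ordering when listing tags.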

Releases#

Releases may receive human-readable names. Suggested thematic names:

  • Hammurabi

  • Sumuabum

  • SumulaEl

  • Samsuiluna

  • AbiEshuh

  • AmmiSaduqa

  • Sennacherib

  • Ashurbanipal

  • Nabopolassar

  • Nabonidus


Packaging#

This project relies on the PyPI platform for package distribution.

First-time distribution workflow#

For a first-time distribution, a manual workflow is recommended.

  1. Register and save API tokens from https://pypi.org/ and https://test.pypi.org.

  2. Install packaging utilities:

For building the distribution:

python -m pip install build

For uploading to PyPI:

python -m pip install twine
  3. Build the distribution:

Clean up previous build artifacts first (PowerShell shown):

Remove-Item -Recurse -Force dist, build, *.egg-info

Run the build command

python -m build

Output:

dist/
  yourpkg-0.1.0.tar.gz
  yourpkg-0.1.0-py3-none-any.whl

These are the built distribution packages. The dist/ folder is ignored by Git.

  4. Validate the build:

twine check dist/*
  5. Publish on TestPyPI:

twine upload --repository testpypi dist/*

Warning

Use the token from TestPyPI

  6. Check the test package in a clean environment:

python -m pip install --index-url https://test.pypi.org/simple --extra-index-url https://pypi.org/simple <yourpkg>==X.Y.Z
  7. Publish on PyPI:

twine upload dist/*
  8. Check the package installation in a clean environment:

python -m pip install <yourpkg>

Continuous distribution#

An automated system is set up for continuous distribution via GitHub Actions. The workflow file lives in .github/workflows/dist.yaml and is triggered only when a new tag is pushed.

Warning

To work properly, the API token for PyPI must be included in GitHub repository secrets as PYPI_API_TOKEN.
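A tag-triggered publishing workflow of this kind might be sketched roughly as below. This is a generic illustration, not a copy of the actual dist.yaml; job names, action versions, and the tag pattern are assumptions:

```yaml
# Illustrative sketch only -- the real .github/workflows/dist.yaml may differ.
name: dist
on:
  push:
    tags:
      - "v*"          # run only when a version tag is pushed
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - run: python -m pip install build twine
      - run: python -m build
      - run: twine upload dist/*
        env:
          TWINE_USERNAME: __token__
          TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
```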


Style#

In this project, we enforce the use of Black to ensure a consistent code style.

Since black is listed in the dev dependencies, you may run it manually before pushing:

black .

Run it from the repo root, under the venv session.

The built-in wrapper is:

python -m dev.style

Warning

Unformatted contributions will not pass, because GitHub checks for style consistency.


Documentation#

Documentation-oriented development is recommended. Every feature must be documented in the standard Sphinx (rST) format.

Build locally#

Use Sphinx to build the documentation website locally. Run this in a terminal:

sphinx-build -b html .\docs .\docs\_build --write-all

The built-in wrapper is:

python -m dev.docs --open

Important

Build documentation under a virtual environment session.

Note

The docs website is generated under docs/_build

Testing#

Test-driven development is recommended. Tests are split into the following categories:

  • Unit tests - short tests targeting feature behavior.

  • Benchmark tests - may take longer; target full performance and include input and output evaluations.

  • Example tests - single-line tests presented in docstrings.

Important

Run tests under a virtual environment session.

Unit tests#

Run all unit tests in /tests via terminal:

python -m unittest discover -s tests -p "test_*.py" -v

For a single unit test module:

python -m tests.unit.test_module
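For `python -m tests.unit.test_module` to work, the module needs a standard unittest entry point. A minimal sketch (the module, class, and test names are illustrative, not actual project code):

```python
# Hypothetical sketch of tests/unit/test_module.py.
import unittest


class TestSomething(unittest.TestCase):
    def test_addition_behaves(self):
        self.assertEqual(1 + 1, 2)


if __name__ == "__main__":
    # This entry point is what makes `python -m tests.unit.test_module` run
    # the tests when the module is executed directly.
    unittest.main()
```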

The built-in wrapper is:

python -m dev.tests

Variations include:

python -m dev.tests --which "unit"

For benchmarks (run these locally):

python -m dev.tests --which "bcmk"

For all tests:

python -m dev.tests --all

See also

See the unittest library documentation for details on unit tests.

Note

Example tests can be included in unit tests with their own testing script. A template for this is provided in /tests/test_doctest.py.
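Such a script could follow the standard `load_tests` protocol, which lets unittest pick up doctests alongside regular tests. A hedged sketch of what a template like this might look like (the actual /tests/test_doctest.py may differ; the doctested function here is a stand-in for real project modules):

```python
# Illustrative sketch of a doctest-collecting test module.
import doctest
import sys
import unittest


def double(x):
    """
    >>> double(3)
    6
    """
    return x * 2


def load_tests(loader, tests, ignore):
    # Standard unittest hook: attach all docstring examples found in this
    # module to the suite, so doctests run alongside unit tests. A real
    # template would point at project modules instead of this file.
    tests.addTests(doctest.DocTestSuite(sys.modules[__name__]))
    return tests
```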

Benchmark tests#

Benchmark tests are unit tests covering full integration of features, sometimes associated with input and output data. Some benchmark tests will download heavy datasets from provided URLs.

Enable benchmark tests#

To run benchmark tests, they must be enabled first, because benchmarks may take too long and can deplete resources on CI services. Once enabled, just run the unit tests as usual.

Enabling benchmarks on Unix:

export RUN_BENCHMARKS=1

Enabling benchmarks on Windows:

$env:RUN_BENCHMARKS="1"
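A guard of this kind is typically implemented with `unittest.skipUnless` on the environment variable. A hedged sketch, assuming the variable name from above (the actual project test code may differ):

```python
# Hypothetical benchmark test guarded by the RUN_BENCHMARKS variable.
import os
import unittest


@unittest.skipUnless(
    os.environ.get("RUN_BENCHMARKS") == "1", "benchmarks are disabled"
)
class TestHeavyBenchmark(unittest.TestCase):
    def test_full_run(self):
        # Placeholder body; a real benchmark would exercise heavy
        # input/output data here.
        self.assertTrue(True)
```

When the variable is unset, the class is reported as skipped rather than failed, so the regular unit-test run stays green.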

Enable large benchmark tests#

Large benchmark tests are exceptionally large tests. The same logic applies:

Enabling large benchmark tests on Unix:

export RUN_BENCHMARKS_XXL=1

Enabling large benchmark tests on Windows:

$env:RUN_BENCHMARKS_XXL="1"

Example tests#

Create single-line example tests in docstrings:

def add(num1, num2):
    """
    Add two numbers.

    **Examples**

    >>> add(1, 2)
    3

    """
    return num1 + num2

Test the module using doctest:

python -m doctest -v /path/to/module.py

Alternatively, test the module in the script part:

if __name__ == "__main__":
    import doctest
    doctest.testmod()

See also

See the doctest library documentation for details on example tests.