Synopsis

The Challenge

Building traditional AI/ML systems requires immense resources to train models, including computationally demanding hardware (expensive GPUs). Inputs to the training process include massive datasets (terabytes to petabytes for large models), and the resulting models range from a few hundred MiB to tens of GiB.

The process involves:

Our Solution

Basin enables object storage with verifiable data pipelines, a decentralized architecture, and built-in access control and ownership, providing tools that solve common challenges in ML/AI, including:

Collaboration Over Large Datasets

How it Works

Basin makes data available by replicating datasets and models to decentralized storage for open access.
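
Because the data is exposed as object storage, pushing a dataset snapshot or model artifact can slot into an existing training workflow. The sketch below is a minimal illustration assuming an S3-compatible endpoint; the endpoint URL, bucket names, and credentials are hypothetical placeholders, not Basin's actual API surface.

```python
# Minimal sketch: pushing a dataset shard and a model checkpoint to an
# S3-compatible object store so replicas can be served for open access.
# The endpoint, buckets, and credentials are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Upload a training dataset shard and a trained model checkpoint.
s3.upload_file("data/train-shard-0001.parquet", "my-dataset", "train/shard-0001.parquet")
s3.upload_file("checkpoints/model.safetensors", "my-models", "resnet50/model.safetensors")

# Anyone with read access can later retrieve the replicated objects.
s3.download_file("my-models", "resnet50/model.safetensors", "model.safetensors")
```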

Benefits

Provides redundancy, fault tolerance, and reliable retrieval to reduce hosted storage costs, guarantee data liveness, and enable open data access, driving a better experience for data consumers.

Data Provenance & Transparency

How it Works