Deploy Dask clusters
This page describes various ways to set up Dask on different hardware, either locally on your own machine or on a distributed cluster. If you are just getting started, then this page is unnecessary. Dask does not require any setup if you only want to use it on a single computer.
Dask has two families of task schedulers:
Single-machine scheduler: This scheduler provides basic features on a local process or thread pool. This scheduler was made first and is the default. It is simple and cheap to use. It can only be used on a single machine and does not scale.
Distributed scheduler: This scheduler is more sophisticated. It offers more features, but also requires a bit more effort to set up. It can run locally or distributed across a cluster.
High level collections are used to generate task graphs which can be executed on a single machine or a cluster. Using the Distributed scheduler enables creation of a Dask cluster for multi-machine computation.
If you import Dask, set up a computation, and call `compute`, then you will use the single-machine scheduler by default. To use the `dask.distributed` scheduler you must set up a `Client`:
```python
import dask.dataframe as dd
df = dd.read_csv(...)
df.x.sum().compute()  # This uses the single-machine scheduler by default

from dask.distributed import Client
client = Client(...)  # Connect to distributed cluster and override default
df.x.sum().compute()  # This now runs on the distributed system
```
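You can also choose among the single-machine schedulers explicitly, without creating a `Client` at all. A minimal sketch (the file path is a placeholder):

```python
import dask
import dask.dataframe as dd

df = dd.read_csv("data/*.csv")  # placeholder path

# Per-call selection of a single-machine scheduler:
df.x.sum().compute(scheduler="threads")    # local thread pool
df.x.sum().compute(scheduler="processes")  # local process pool

# Or set a default via the configuration system:
with dask.config.set(scheduler="synchronous"):
    df.x.sum().compute()  # single-threaded, handy for debugging
```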
Note that the newer `dask.distributed` scheduler is often preferable, even on single workstations. It contains many diagnostics and features not found in the older single-machine scheduler.
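For example, creating a `Client` with no cluster address starts a local cluster on your own machine and connects to it. A minimal sketch, assuming the `distributed` package is installed (the worker counts are illustrative):

```python
from dask.distributed import Client

# With no address, Client() starts a local scheduler and workers
# on this machine and connects to them.
client = Client(n_workers=4, threads_per_worker=2)  # illustrative sizing

print(client.dashboard_link)  # URL of the diagnostics dashboard
```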
There are also a number of different cluster managers available, so you can use Dask distributed with a range of platforms. These cluster managers deploy a scheduler and the necessary workers as determined by communicating with the resource manager. Dask Jobqueue, for example, is a set of cluster managers for HPC users and works with job queueing systems (in this case, the resource manager) such as PBS, Slurm, and SGE. Those workers are then allocated physical hardware resources.
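As a sketch of how such a cluster manager is used, here is Dask Jobqueue on a Slurm system; the queue name and resource sizes are placeholders for your site's configuration:

```python
from dask.distributed import Client
from dask_jobqueue import SLURMCluster

# Each job submitted to Slurm runs Dask worker processes.
cluster = SLURMCluster(
    queue="regular",  # placeholder: your Slurm partition
    cores=8,          # cores per job
    memory="24GB",    # memory per job
)
cluster.scale(jobs=10)  # submit 10 worker jobs to Slurm

client = Client(cluster)  # computations now run on Slurm-managed workers
```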
Figure: An overview of cluster management with Dask distributed.
To summarize, the default single-machine scheduler lets you use Dask on your local machine with no setup. If you'd like to use a cluster, or simply want to take advantage of the extensive diagnostics, you can use Dask distributed. The following resources explain in more detail how to set up Dask on a variety of local and distributed hardware:
- Single Machine:
  - Default Scheduler: The no-setup default. Uses local threads or processes for larger-than-memory processing.
  - dask.distributed: The sophistication of the newer system on a single machine. This provides more advanced features while still requiring almost no setup.
- Distributed computing:
  - Manual Setup: The command line interface to set up `dask-scheduler` and `dask-worker` processes. Useful for IT or anyone building a deployment solution.
  - SSH: Use SSH to set up Dask across an un-managed cluster (a minimal sketch appears after these lists).
  - High Performance Computers: How to run Dask on traditional HPC environments using tools like MPI, or job schedulers like SLURM, SGE, TORQUE, LSF, and so on.
  - Kubernetes: Deploy Dask with the popular Kubernetes resource manager using either Helm or a native deployment.
  - YARN / Hadoop: Deploy Dask on YARN clusters, such as those found in traditional Hadoop installations.
  - Dask Gateway: Provides a secure, multi-tenant server for managing Dask clusters and allows users to launch and use Dask clusters in a shared cluster environment.
  - Python API (advanced): Create `Scheduler` and `Worker` objects from Python as part of a distributed Tornado TCP application. This page is useful for those building custom frameworks.
  - Docker: Docker images are available and may be useful in some of the solutions above.
  - Cloud: Current recommendations on how to deploy Dask and Jupyter on common cloud providers like Amazon, Google, or Microsoft Azure.
- Hosted / managed Dask clusters (listed in alphabetical order):
  - Coiled: Handles the creation and management of Dask clusters in cloud computing environments (AWS, Azure, and GCP).
  - Saturn Cloud: Lets users create Dask clusters in a hosted platform or within their own AWS accounts.
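For the SSH option above, a minimal sketch follows; the hostnames are placeholders, and it assumes passwordless SSH access plus a consistent Dask installation (and the asyncssh dependency) on every machine:

```python
from dask.distributed import Client, SSHCluster

# Placeholder hostnames: the first runs the scheduler,
# the remaining hosts each run a worker.
cluster = SSHCluster(["host1", "host2", "host3"])
client = Client(cluster)
```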