
Weaviate is completely containerized; you can run it with Docker Compose or Kubernetes.


There are multiple ways to set up a Weaviate instance. To try Weaviate out, we recommend starting with Docker Compose. Cloud deployment suits both small and larger projects. For production setups and large-scale projects, we encourage you to use Kubernetes.

Customize your Weaviate Setup

You can use the configuration tool below to customize your Weaviate setup for your desired runtime (e.g. Docker Compose or Kubernetes):

Docker Compose

If you want to try out Weaviate locally and on a small scale, you can use Docker Compose.

If you are new to Docker (Compose) and containerization, check out our Docker Introduction for Weaviate Users.

To start Weaviate with docker-compose, you need a docker-compose configuration file. You can obtain it from the configuration tool above, or pick one of the examples below. Additional environment variables can be set in this file to configure your Weaviate setup: authentication and authorization, module settings, and data storage.

Example Docker Compose Setups

If you prefer not to use the configuration tool above, you can use one of the following three example docker-compose.yml files.

To run any of the below examples, save one of the snippets as docker-compose.yml and start it by running docker-compose up from within the same folder.

Weaviate with the text2vec-transformers model

An example docker-compose setup file with the transformers model sentence-transformers/msmarco-distilroberta-base-v2 is:

version: '3.4'
services:
  weaviate:
    image: semitechnologies/weaviate:1.2.1
    restart: on-failure:0
    ports:
      - "8080:8080"
    environment:
      DEFAULT_VECTORIZER_MODULE: text2vec-transformers
      ENABLE_MODULES: text2vec-transformers
      TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080
  t2v-transformers:
    image: semitechnologies/transformers-inference:sentence-transformers-msmarco-distilroberta-base-v2
    environment:
      ENABLE_CUDA: 0 # set to 1 to enable
      # NVIDIA_VISIBLE_DEVICES: all # enable if running with CUDA

Note that transformer models are neural networks built to run on GPUs. Running Weaviate with the text2vec-transformers module without a GPU is possible, but it will be slower. Enable CUDA if you have a GPU available (ENABLE_CUDA=1).

For more information on how to set up the environment with the text2vec-transformers module, see this page.

The text2vec-transformers module requires at least Weaviate version v1.2.0.

Weaviate with the text2vec-contextionary model

An example docker-compose setup file with the English-language contextionary model is:

version: '3.4'
services:
  weaviate:
    image: semitechnologies/weaviate:1.2.1
    ports:
      - 8080:8080
    restart: on-failure:0
    environment:
      CONTEXTIONARY_URL: contextionary:9999
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-contextionary'
      ENABLE_MODULES: text2vec-contextionary
      EXTENSIONS_STORAGE_ORIGIN: http://weaviate:8080
  contextionary:
    image: semitechnologies/contextionary:en0.16.0-v1.0.2

The text2vec-contextionary module is designed to run with CPU hardware and does not require or benefit from GPU-accelerated hardware.

For more information on how to set up the environment with the text2vec-contextionary module, see this page.

Note: at the moment, text vectorization modules cannot be combined in a single setup. This means that you can either enable the text2vec-contextionary, the text2vec-transformers or no text vectorization module.

Weaviate without any modules

An example docker-compose setup for Weaviate without any modules can be found below. In this case, no model inference is performed at either import or search time. You will need to provide your own vectors (e.g. from an outside ML model) at import and search time:

version: '3.4'
services:
  weaviate:
    image: semitechnologies/weaviate:1.2.1
    ports:
      - 8080:8080
    restart: on-failure:0
    environment:
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
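Since no vectorizer module is enabled in this setup, every object must carry its own vector at import time. A minimal sketch in Python of what such an import could look like via the REST API (the Article class, the property names, and the tiny 4-dimensional vector are made-up placeholders; real vectors from an outside model typically have hundreds of dimensions):

```python
import json
from urllib import request

def make_object_payload(class_name, properties, vector):
    """Build the JSON body for POST /v1/objects with a user-supplied vector."""
    return {
        "class": class_name,
        "properties": properties,
        "vector": vector,
    }

def import_object(base_url, payload):
    """POST the object to a running Weaviate instance (no vectorizer module)."""
    req = request.Request(
        f"{base_url}/v1/objects",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example payload with a vector from an external ML model (placeholder values)
payload = make_object_payload("Article", {"title": "Hello"}, [0.1, 0.2, 0.3, 0.4])
```

The same applies at search time: nearVector queries must be supplied with a query vector produced by the same outside model.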

Attaching to the log output of only Weaviate

The output of docker-compose up is quite verbose as it attaches to the logs of all containers.

You can attach the logs only to Weaviate itself, for example by running the following command instead of docker-compose up:

# Run Docker Compose
$ docker-compose up -d && docker-compose logs -f weaviate

Alternatively you can run docker-compose entirely detached with docker-compose up -d and poll {bindaddress}:{port}/v1/meta until you receive status 200 OK.
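The polling step can also be scripted. A small sketch in Python (the exact URL depends on your bind address and port mapping; http://localhost:8080/v1/meta matches the examples above):

```python
import json
import time
from urllib import request

def wait_until_ready(meta_url, timeout_s=60.0, interval_s=1.0):
    """Poll the /v1/meta endpoint until it answers 200 OK or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with request.urlopen(meta_url) as resp:
                if resp.status == 200:
                    return json.loads(resp.read())
        except OSError:
            pass  # container still starting up; retry after a short pause
        time.sleep(interval_s)
    raise TimeoutError(f"Weaviate not ready after {timeout_s}s")
```

For example, wait_until_ready("http://localhost:8080/v1/meta") blocks until the instance is up and returns the parsed meta response.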

Environment variables

An overview of environment variables in the docker-compose file:

| Variable | Description | Type | Example Value |
| --- | --- | --- | --- |
| ORIGIN | Set the http(s) origin for Weaviate | string - HTTP origin | |
| CONTEXTIONARY_URL | Service discovery for the contextionary container | string - URL | http://contextionary |
| PERSISTENCE_DATA_PATH | Where Weaviate Standalone stores its data | string - file path | /var/lib/weaviate |
| ENABLE_MODULES | Which modules to enable in the setup | string | text2vec-contextionary |
| TRANSFORMERS_INFERENCE_API | The endpoint where to reach the transformers module, if enabled | string | http://t2v-transformers:8080 |
| DEFAULT_VECTORIZER_MODULE | Default vectorizer module, so this doesn't need to be defined per class in the schema | string | text2vec-contextionary |
| AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED | Allow users to interact with Weaviate without auth | string - true/false | true |
| AUTHENTICATION_OIDC_ENABLED | Enable OIDC auth | string - true/false | false |
| AUTHORIZATION_ADMINLIST_ENABLED | Enable AdminList authorization mode | string - true/false | true |
| AUTHORIZATION_ADMINLIST_USERS | Users with admin permission | string - comma-separated | |
| AUTHORIZATION_ADMINLIST_READONLY_USERS | Users with read-only permission | string - comma-separated | |
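As an illustration, the authentication and authorization variables combine in the environment section of the weaviate service like this (the user emails are hypothetical placeholders; enabling OIDC additionally requires issuer and client settings not covered here):

```yaml
environment:
  AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'false'
  AUTHENTICATION_OIDC_ENABLED: 'true'
  AUTHORIZATION_ADMINLIST_ENABLED: 'true'
  AUTHORIZATION_ADMINLIST_USERS: 'admin@example.com'
  AUTHORIZATION_ADMINLIST_READONLY_USERS: 'reader@example.com'
```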

Cloud deployment

Weaviate is available on Google Cloud Marketplace, where you can find more details on deployment on the cloud.

Weaviate Cloud Service

You can create a free Weaviate sandbox cluster that lasts for 5 days. You can try it out here, and if you do, we would love to hear your feedback.

Kubernetes (K8s)

Generally, we recommend using Kubernetes to deploy Weaviate for any long-running deployments or those with specific availability expectations, such as production use cases. For local development or personal evaluation, Docker Compose will most likely be sufficient.

Requirements

  • A Kubernetes cluster with a recent version (e.g. between 1.14 and 1.19)
  • The cluster needs to be able to provision PersistentVolumes through PersistentVolumeClaims. No special file systems are required. Any default file system capable of ReadWriteOnce access mode is sufficient.
  • Helm (both v2 and v3 are compatible. Note that Helm v2 requires setting up Tiller on the cluster prior to installing charts)

Installing the official Weaviate Helm chart

To obtain and install the Weaviate chart on your Kubernetes cluster, take the following steps:

Verify tool setup and cluster access

# Check if helm is installed
$ helm version
# Make sure `kubectl` is configured correctly and you can access the cluster. 
# For example, try listing the pods in the currently configured namespace.
$ kubectl get pods

Obtain the Helm Chart

Get the Helm chart and values.yaml configuration file.

# Set the Weaviate chart version
export CHART_VERSION="v13.3.0"
# Download the Weaviate Helm chart
# Download an example values.yml (with the default configuration)

Adjust the configuration in the values.yaml (Optional)

Note: You can skip this step and run with all default values.

In the values.yaml file you can tweak the configuration to align it with your setup; the file is extensively documented to help you do so.

Out of the box, the configuration file is set up for:

  • 1 Weaviate replica. (This cannot be changed at the moment, see below)
  • The text2vec-contextionary module is enabled and running with 1 replica. (This can be adjusted based on the expected load).
  • Other modules, such as text2vec-transformers, qna-transformers or img2vec-neural are disabled by default. They can be enabled by setting the respective enabled flag to true.

See the resource requests and limits in the example values.yaml. You can adjust them based on your expected load and the resources available on the cluster.
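As a rough illustration, such a block follows the standard Kubernetes resource conventions (the values below are placeholders, and the exact location of the block within values.yaml may differ between chart versions):

```yaml
# Illustrative resource requests/limits for the Weaviate container
resources:
  requests:
    cpu: '500m'
    memory: 300Mi
  limits:
    cpu: '1000m'
    memory: 1Gi
```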

Deploy (install the Helm chart)

You can deploy the helm charts as follows:

# Create a Weaviate namespace
$ kubectl create namespace weaviate
# Deploy
$ helm upgrade \
  "weaviate" \
  weaviate.tgz \
  --install \
  --namespace "weaviate" \
  --values ./values.yaml

The above assumes that you have permissions to create a new namespace. If you have only namespace-level permissions, you can skip creating a new namespace and adjust the --namespace argument of helm upgrade to the name of your pre-configured namespace.

Updating the installation after the initial deployment

The above command (helm upgrade...) is idempotent; you can run it again, for example after adjusting your desired configuration.


Until Weaviate fully supports Horizontal Scalability (ETA end of Q3 2021), you cannot increase the Weaviate replicas beyond 1. Trying to do so leads to unexpected behavior: the load balancer would switch between replicas whose state is not in sync, so some data would land on some replicas and other data on others, and each query would return only a fragment of the entire dataset.

Additional Configuration Help

More Resources

If you can’t find the answer to your question here, please look at the:

  1. Frequently Asked Questions
  2. Knowledge base of old issues
  3. For questions: Stackoverflow
  4. For issues: Github
  5. The Slack channel