
Named Entity Recognition

In short

  • The Named Entity Recognition (NER) module is a Weaviate module for token classification.
  • The module depends on a NER Transformers model that should be running with Weaviate. There are pre-built models available, but you can also attach another HuggingFace Transformer or custom NER model.
  • The module adds a tokens {} filter to the GraphQL _additional {} field.
  • The module returns data objects as usual, with recognized tokens in the GraphQL _additional { tokens {} } field.


The Named Entity Recognition (NER) module is a Weaviate module that extracts entities from your existing Weaviate (text) objects on the fly. Entity extraction happens at query time. Note that for maximum performance, transformer-based models should run with GPUs. CPUs can be used, but the throughput will be lower.

There are currently two different NER modules available (both taken from Hugging Face): dbmdz-bert-large-cased-finetuned-conll03-english and dslim-bert-base-NER.

How to enable (module configuration)


The NER module can be added as a service to your Docker Compose file. You must also have a text vectorizer, such as text2vec-contextionary or text2vec-transformers, running. Below is an example Docker Compose file for using the ner-transformers module (dbmdz-bert-large-cased-finetuned-conll03-english) in combination with text2vec-contextionary:

version: '3.4'
services:
  weaviate:
    command:
    - --host
    - 0.0.0.0
    - --port
    - '8080'
    - --scheme
    - http
    image: semitechnologies/weaviate:1.7.0
    ports:
    - 8080:8080
    restart: on-failure:0
    environment:
      CONTEXTIONARY_URL: contextionary:9999
      NER_INFERENCE_API: "http://ner-transformers:8080"
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-contextionary'
      ENABLE_MODULES: 'text2vec-contextionary,ner-transformers'
  contextionary:
    environment:
      EXTENSIONS_STORAGE_ORIGIN: http://weaviate:8080
    image: semitechnologies/contextionary:en0.16.0-v1.0.2
    ports:
    - 9999:9999
  ner-transformers:
    image: semitechnologies/ner-transformers:dbmdz-bert-large-cased-finetuned-conll03-english

Variable explanations:

  • NER_INFERENCE_API: the endpoint where the ner-transformers inference container is running

How to use (GraphQL)

To make use of the module's capabilities, simply extend your query with the following new _additional property:

GraphQL Token

This module adds a search filter to the GraphQL _additional field in queries: tokens{}. This new filter takes the following arguments:

| Field | Data Type | Required | Example value | Description |
| --- | --- | --- | --- | --- |
| properties | list of strings | yes | ["summary"] | The properties of the queried class which contain text (text or string datatype). You must provide at least one property. |
| certainty | float | no | 0.75 | Desired minimal certainty or confidence that the recognized token must have. The higher the value, the stricter the token classification. If no certainty is set, all tokens found by the model will be returned. |
| limit | int | no | 1 | The maximum number of tokens returned per data object in total. |
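For illustration, the arguments in the table above can be rendered into the tokens filter string with a small helper. This is a hypothetical function, not part of any Weaviate client library:

```python
def tokens_filter(properties, certainty=None, limit=None):
    # Hypothetical helper: renders the filter arguments from the table
    # above into a GraphQL tokens filter string for the _additional field.
    args = [f"properties: {properties!r}".replace("'", '"')]
    if certainty is not None:
        args.append(f"certainty: {certainty}")
    if limit is not None:
        args.append(f"limit: {limit}")
    fields = "entity word property certainty startPosition endPosition"
    return f"tokens({', '.join(args)}) {{ {fields} }}"

print(tokens_filter(["title"], certainty=0.7, limit=10))
# tokens(properties: ["title"], certainty: 0.7, limit: 10) { entity word property certainty startPosition endPosition }
```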

Example query

  {
    Get {
      Article(
        limit: 1
      ) {
        title
        _additional {
          tokens(
            properties: ["title"],
            limit: 10,
            certainty: 0.7
          ) {
            certainty
            endPosition
            entity
            property
            startPosition
            word
          }
        }
      }
    }
  }


GraphQL response

The result is contained in a new GraphQL _additional property called tokens, which returns a list of tokens. Each token contains the following fields:

  • entity (string): The entity group (the classified token)
  • word (string): The word that is recognized as an entity
  • property (string): The property in which the token was found
  • certainty (float): 0.0–1.0; how certain the model is that the token is correctly classified
  • startPosition (int): The position of the first character of the word in the property value
  • endPosition (int): The position just after the last character of the word in the property value
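As a quick sanity check of the position fields, the offsets from the example response below can be applied to the original property value. Judging from that example, endPosition behaves as an exclusive index, so a plain slice recovers the word (Python sketch):

```python
title = "My name is Sarah and I live in London"

# startPosition indexes the first character of the word; endPosition is
# one past the last character, so slicing recovers the recognized entity.
assert title[11:16] == "Sarah"
assert title[31:37] == "London"
print(title[11:16], title[31:37])  # Sarah London
```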

Example response

  "data": {
    "Get": {
      "Article": [
          "_additional": {
            "tokens": [
                "property": "title",
                "entity": "PER",
                "certainty": 0.9894614815711975,
                "word": "Sarah",
                "startPosition": 11,
                "endPosition": 16
                "property": "title",
                "entity": "LOC",
                "certainty": 0.7529033422470093,
                "word": "London",
                "startPosition": 31,
                "endPosition": 37
          "title": "My name is Sarah and I live in London"
  "errors": null

Custom NER Transformer module

You can use the same approach as for text2vec-transformers, see here, i.e. either pick one of the pre-built containers or build your own container from your own model using the semitechnologies/ner-transformers:custom base image. Make sure that your model is compatible with Hugging Face's transformers.AutoModelForTokenClassification.

How it works (under the hood)

The code for the application in this repo works well with models that take in a text input like My name is Sarah and I live in London and return information in JSON format like this:

    "entity_group": "PER",
    "score": 0.9985478520393372,
    "word": "Sarah",
    "start": 11,
    "end": 16
    "entity_group": "LOC",
    "score": 0.999621570110321,
    "word": "London",
    "start": 31,
    "end": 37

The Weaviate NER module then takes this output and converts it into the GraphQL output described above.
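That conversion can be sketched in Python. The field names are taken from the two examples above; the real module is implemented inside Weaviate itself, so this is purely illustrative:

```python
def to_graphql_tokens(raw, prop, certainty=None, limit=None):
    # Sketch of the module's conversion step: map the model's JSON fields
    # (entity_group, score, start, end) to the GraphQL token fields, and
    # apply the optional certainty and limit filter arguments.
    tokens = [
        {
            "property": prop,
            "entity": e["entity_group"],
            "certainty": e["score"],
            "word": e["word"],
            "startPosition": e["start"],
            "endPosition": e["end"],
        }
        for e in raw
        if certainty is None or e["score"] >= certainty
    ]
    return tokens if limit is None else tokens[:limit]


raw = [
    {"entity_group": "PER", "score": 0.9985, "word": "Sarah", "start": 11, "end": 16},
    {"entity_group": "LOC", "score": 0.9996, "word": "London", "start": 31, "end": 37},
]
print(to_graphql_tokens(raw, "title", certainty=0.7, limit=10))
```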

More resources

If you can’t find the answer to your question here, please look at the:

  1. Frequently Asked Questions. Or,
  2. Knowledge base of old issues. Or,
  3. For questions: Stackoverflow. Or,
  4. For issues: Github. Or,
  5. Ask your question in the Slack channel: Slack.