Run 🤗 Transformers in your browser!

Demo

Don't believe us? Play around with some of the models in the interactive demo.

Notes:

  • Clicking Generate for the first time will download the corresponding model from the
    HuggingFace Hub.
    All subsequent requests will use the cached model.
  • For more information about the different parameters, check out HuggingFace’s
    guide to text generation.

Getting Started

Installation

If you use npm, you can install it using:

npm i @xenova/transformers

Alternatively, you can use it directly in the browser by loading it from a CDN.
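
For example, a minimal sketch of loading the library as an ES module straight from a CDN (the jsDelivr URL and the unpinned version are assumptions; pin a specific version in practice):

<script type="module">
  // Import the library from a CDN (URL assumed; pin a version in practice).
  import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers';

  let pipe = await pipeline('sentiment-analysis');
  console.log(await pipe('I love transformers!'));
</script>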

Basic example

It's super easy to translate from existing code!

Python (original):

from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
pipe = pipeline('sentiment-analysis')

out = pipe('I love transformers!')
# [{'label': 'POSITIVE', 'score': 0.999806941}]

JavaScript (ours):

import { pipeline } from "@xenova/transformers";

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');

let out = await pipe('I love transformers!');
// [{'label': 'POSITIVE', 'score': 0.999817686}]

In the same way as the Python library, you can use a different model by providing its
name as the second argument to the pipeline function. For example:

// Use a different model for sentiment-analysis
let pipe = await pipeline('sentiment-analysis', 'nlptown/bert-base-multilingual-uncased-sentiment');
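
Note that this particular model predicts a star rating ('1 star' through '5 stars') rather than POSITIVE/NEGATIVE labels. A quick sketch (the score shown is illustrative, not an exact value):

let pipe = await pipeline('sentiment-analysis', 'nlptown/bert-base-multilingual-uncased-sentiment');

let out = await pipe("C'est magnifique !");
// e.g. [{'label': '5 stars', 'score': 0.9...}] (illustrative)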

Custom setup

By default, Transformers.js uses hosted models and precompiled WASM binaries, which
should work out-of-the-box. You can override this behaviour as follows:

import { env } from "@xenova/transformers";

// Use a different host for models.
// - `remoteURL` defaults to the HuggingFace Hub
// - `localURL` defaults to '/models/onnx/quantized/'
env.remoteURL = 'https://www.example.com/';
env.localURL = '/path/to/models/';

// Set whether to use remote or local models. Defaults to true.
//  - If true, use the path specified by `env.remoteURL`.
//  - If false, use the path specified by `env.localURL`.
env.remoteModels = false;

// Set parent path of .wasm files. Defaults to use a CDN.
env.onnx.wasm.wasmPaths = '/path/to/files/';
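
For example, to serve both the models and the WASM binaries yourself (all paths below are hypothetical):

import { env, pipeline } from "@xenova/transformers";

// Resolve models via `env.localURL` instead of the HuggingFace Hub.
env.remoteModels = false;
env.localURL = '/models/';          // hypothetical path on our own server
env.onnx.wasm.wasmPaths = '/wasm/'; // hypothetical path to the .wasm files

// Pipelines created after this point load everything from our server.
let pipe = await pipeline('sentiment-analysis');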

Usage

We currently support the following tasks and models, which can be used with the
pipeline function (see the usage sketch after this list).

  1. sentiment-analysis (a.k.a. text-classification)

    Supported models: distilbert-base-uncased-finetuned-sst-2-english,
    nlptown/bert-base-multilingual-uncased-sentiment,
    distilgpt2.
    For more information, check out the Text Classification docs.

  2. question-answering

    Supported models: distilbert-base-cased-distilled-squad,
    distilbert-base-uncased-distilled-squad.
    For more information, check out the Question Answering docs.

  3. fill-mask

    Supported models: xlm-roberta-base, albert-large-v2,
    albert-base-v2, distilroberta-base, roberta-base,
    bert-base-cased, bert-base-uncased,
    bert-base-multilingual-uncased, bert-base-multilingual-cased,
    distilbert-base-cased, distilbert-base-uncased.
    For more information, check out the Language Modelling docs.

  4. summarization

    Supported models:
    t5-small,
    t5-base,
    t5-v1_1-small,
    t5-v1_1-base,
    sshleifer/distilbart-cnn-6-6,
    sshleifer/distilbart-cnn-12-6.
    For more information, check out the Summarization docs.

  5. translation (translation_en_to_de, translation_en_to_fr,
    translation_en_to_ro)

    Supported models:
    t5-small,
    t5-base,
    t5-v1_1-small,
    t5-v1_1-base.
    For more information, check out the Translation docs.

  6. text2text-generation

    Supported models:
    google/flan-t5-small,
    google/flan-t5-base,
    t5-small,
    t5-base,
    google/t5-v1_1-small,
    google/t5-v1_1-base,
    sshleifer/distilbart-cnn-6-6,
    sshleifer/distilbart-cnn-12-6.
    For more information, check out the Text Generation docs.

  7. text-generation

    Supported models:
    gpt2,
    distilgpt2,
    Salesforce/codegen-350M-mono,
    Salesforce/codegen-350M-multi,
    Salesforce/codegen-350M-nl.
    For more information, check out the Text Generation docs.

  8. automatic-speech-recognition

    Supported models:
    openai/whisper-tiny.en,
    openai/whisper-tiny,
    openai/whisper-small.en,
    openai/whisper-small,
    openai/whisper-base.en,
    openai/whisper-base.
    For more information, check out the Automatic Speech Recognition docs.

  9. image-to-text

    Supported models:
    nlpconnect/vit-gpt2-image-captioning.
    For more information, check out the Image-to-Text docs.

  10. image-classification

    Supported models:
    google/vit-base-patch16-224.
    For more information, check out the Image Classification docs.

  11. zero-shot-image-classification

    Supported models:
    openai/clip-vit-base-patch16,
    openai/clip-vit-base-patch32.
    For more information, check out the Zero-Shot Image Classification docs.

  12. embeddings

    Supported models:
    sentence-transformers/all-MiniLM-L6-v2,
    sentence-transformers/all-MiniLM-L12-v2,
    sentence-transformers/all-distilroberta-v1,
    sentence-transformers/paraphrase-albert-base-v2,
    sentence-transformers/paraphrase-albert-small-v2.
    For more information, check out the Embeddings docs.
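
As promised above, here is a short sketch of two more pipelines in use. The task names are taken from the list; the exact output shapes are illustrative, so check the docs linked above for specifics:

import { pipeline } from "@xenova/transformers";

// Question answering: pass the question, then the context to search.
let answerer = await pipeline('question-answering');
let answer = await answerer(
    'What do we use to run models in the browser?',
    'We use ONNX Runtime to run the models in the browser.'
);
// e.g. { answer: 'ONNX Runtime', score: 0.9... } (illustrative)

// Embeddings: `vectors` contains one embedding per input sentence.
let embedder = await pipeline('embeddings', 'sentence-transformers/all-MiniLM-L6-v2');
let vectors = await embedder(['Hello world.', 'Hi there!']);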


The following model types are supported (a sketch of using them outside the
pipeline function follows the list):

  1. BERT

    Tasks: Masked language modelling
    (AutoModelForMaskedLM),
    question answering
    (AutoModelForQuestionAnswering), and
    sequence classification
    (AutoModelForSequenceClassification).
    For more information, check out the BERT docs.

  2. ALBERT

    Tasks: Masked language modelling
    (AutoModelForMaskedLM).
    For more information, check out the ALBERT docs.

  3. DistilBERT

    Tasks: Masked language modelling
    (AutoModelForMaskedLM),
    question answering
    (AutoModelForQuestionAnswering), and
    sequence classification
    (AutoModelForSequenceClassification).
    For more information, check out the DistilBERT docs.

  4. T5

    Tasks: Sequence-to-sequence for
    translation/summarization
    (AutoModelForSeq2SeqLM).
    For more information, check out the T5 docs.

  5. T5v1.1

    Tasks: Sequence-to-sequence
    (AutoModelForSeq2SeqLM).
    For more information, check out the T5v1.1 docs.

  6. FLAN-T5

    Tasks: Sequence-to-sequence for over 1000 tasks
    (AutoModelForSeq2SeqLM).
    For more information, check out the FLAN-T5 docs.

  7. GPT2/DistilGPT2

    Tasks: Text generation
    (AutoModelForCausalLM).
    For more information, check out the
    GPT2 docs or
    DistilGPT2 docs.

  8. BART

    Tasks: Sequence-to-sequence for summarization
    (AutoModelForSeq2SeqLM).
    For more information, check out the BART docs.

  9. CodeGen

    Tasks: Text generation
    (AutoModelForCausalLM).
    For more information, check out the
    CodeGen docs.

  10. Whisper

    Tasks: Automatic speech recognition
    (AutoModelForSeq2SeqLM).
    For more information, check out the Whisper docs.

  11. CLIP

    Tasks: Zero-shot image classification
    (AutoModel).
    For more information, check out the CLIP docs.

  12. Vision Transformer (ViT)

    Tasks: Image classification
    (AutoModelForImageClassification).
    For more information, check out the Vision Transformer docs.
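
If you need more control than the pipeline function provides, the Auto classes above can be used directly. A minimal sketch, assuming the from_pretrained pattern mirrors the Python library (check the API docs for the exact signatures):

import { AutoTokenizer, AutoModelForSequenceClassification } from "@xenova/transformers";

// Load the tokenizer and model separately (downloaded on first use, then cached).
let tokenizer = await AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');
let model = await AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');

// Tokenize the input and run the model to get its raw outputs (logits).
let inputs = await tokenizer('I love transformers!');
let outputs = await model(inputs);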


Don't see your model type or task supported? Raise an
issue on GitHub, and if there's
enough demand, we will add it!

We use ONNX Runtime to run the models in the browser, so you must first convert your
PyTorch model to ONNX (which can be done using our conversion script).

Examples

Coming soon... In the meantime, check out the source code for the demo
here.

Read More