Your tool for data- and human-centric NLP

Rubrix brings your teams together to make your data better and your models more robust.

Monitor and improve training data without the hassle

A new way to iterate on data. Enhance your projects with more humans in the loop.

Are you eager to try it out?

Check out Rubrix's installation guide; or, if Docker is no stranger to you, just run:

  mkdir rubrix && cd rubrix
  pip install rubrix
  wget -O docker-compose.yml
  docker-compose up

Use the libraries you love

No need to wrap your models around a new abstraction. Combine your favourite libraries into novel workflows.

from transformers import pipeline
from datasets import load_dataset
import rubrix as rb

model = pipeline('zero-shot-classification')
dataset = load_dataset("ag_news", split="test[0:100]")

labels = dataset.features["label"].names

for record in dataset:

    prediction = model(record['text'], labels)

    item = rb.TextClassificationRecord(
        inputs=record["text"],
        # store the zero-shot scores as (label, score) pairs
        prediction=list(zip(prediction["labels"], prediction["scores"])),
    )
    rb.log(records=item, name="agnews_zeroshot")
import spacy
import rubrix as rb

text = "Paris a un enfant et la forêt a un oiseau."

nlp = spacy.load("fr_core_news_sm")

doc = nlp(text)

prediction = [
  (ent.label_, ent.start_char, ent.end_char)
  for ent in doc.ents
]

record = rb.TokenClassificationRecord(
  text=text,
  tokens=[token.text for token in doc],
  prediction=prediction,
)

rb.log(records=record, name="lesmiserables-ner")
import pandas as pd
import rubrix as rb

df = pd.read_csv("user_requests.csv")

for i, r in df.iterrows():

  record = rb.TextClassificationRecord(
      # free-form inputs shown in the UI; extra columns kept as metadata
      inputs={
          "message": r.text,
          "subject": r.subject,
      },
      metadata={
          "department": r.department,
          "source": r.source,
      },
  )

  rb.log(record, name="user_requests")
Check out the Hugging Face tutorial.

Explore, curate and label data with a search-driven and transparent interaction.
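The web app's search is driven by structured `field:value` queries over whatever you log. As a rough illustration in plain Python (this is not the Rubrix API, just a sketch of the idea):

```python
# Illustrative only: a tiny "field:value" search over logged records,
# similar in spirit to the search box in the Rubrix web app.
records = [
    {"text": "Invoice overcharged", "prediction": "billing", "status": "Default"},
    {"text": "Cannot log in", "prediction": "tech-support", "status": "Validated"},
]

def search(records, query):
    # a query like "status:Validated" matches one field against one value
    field, _, value = query.partition(":")
    return [r for r in records if str(r.get(field)) == value]

print(len(search(records, "status:Validated")))  # 1
```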


Bring models, data and people together

Improve data at every step of the life-cycle. Everyone can contribute.
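One concrete way to improve data in the loop is to compare model predictions against human annotations and surface disagreements for review. A minimal sketch in plain Python (illustrative only, not the Rubrix API; the records below are made up):

```python
# Hypothetical records: each holds the model's prediction and, where a
# human has already labelled it, an annotation.
records = [
    {"text": "Refund not received", "prediction": "billing", "annotation": "billing"},
    {"text": "App crashes on login", "prediction": "billing", "annotation": "tech-support"},
    {"text": "How do I reset my password?", "prediction": "tech-support", "annotation": None},
]

# Disagreements are good candidates for review or for retraining data.
needs_review = [r for r in records
                if r["annotation"] is not None and r["prediction"] != r["annotation"]]

# Unlabelled records are where more humans in the loop help most.
unlabeled = [r for r in records if r["annotation"] is None]

print(len(needs_review), len(unlabeled))  # 1 1
```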

[Diagram: the data life-cycle, connecting User, Data and Model in a loop]