
Image Data

hover supports bulk-labeling images through their URLs.

💡 Let's do a quickstart for images and note what's different from texts.

This page assumes that you know the basics

i.e. the simple usage of dataset and annotator. Please visit the quickstart tutorial if you haven't done so.

Running Python right here

Think of this page as almost a Jupyter notebook. You can edit code and press Shift+Enter to execute.

Behind the scenes is a Binder-hosted Python environment.

To download a notebook file instead, visit here.

Dataset for Images

hover handles images through their URL addresses. URLs are strings that can be easily stored, hashed, and looked up. They are also convenient for rendering tooltips in the annotation interface.

Similar to SupervisableTextDataset, we can build a SupervisableImageDataset:

from hover.core.dataset import SupervisableImageDataset
import pandas as pd

# this is a 1000-image-url set of ImageNet data
# with custom labels: animal, object, food
example_csv_path = "https://raw.githubusercontent.com/phurwicz/hover-gallery/main/0.7.0/imagenet_custom.csv"
df = pd.read_csv(example_csv_path).sample(frac=1).reset_index(drop=True)
df["SUBSET"] = "raw"
df.loc[500:800, 'SUBSET'] = 'train'
df.loc[800:900, 'SUBSET'] = 'dev'
df.loc[900:, 'SUBSET'] = 'test'

dataset = SupervisableImageDataset.from_pandas(df, feature_key="image", label_key="label")

# each subset can be accessed as its own DataFrame
dataset.dfs["raw"].head(5)
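
For a quick sanity check, we can print each subset's size through the same dataset.dfs mapping used above:

# inspect the number of rows in each subset
for key in ["raw", "train", "dev", "test"]:
    print(key, dataset.dfs[key].shape)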

Vectorizer for Images

We can follow a URL -> content -> image object -> vector path.

import requests
from functools import lru_cache
from io import BytesIO
from PIL import Image

@lru_cache(maxsize=10000)
def url_to_content(url):
    """
    Turn a URL into response content.
    """
    response = requests.get(url)
    return response.content

@lru_cache(maxsize=10000)
def url_to_image(url):
    """
    Turn a URL into a PIL Image.
    """
    img = Image.open(BytesIO(url_to_content(url))).convert("RGB")
    return img
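
To confirm that the helpers work, we can load a single image from the dataset; a minimal sketch using the first URL in the raw subset:

# fetch one example image and inspect its dimensions
example_url = dataset.dfs["raw"].loc[0, "image"]
img = url_to_image(example_url)
print(img.size)
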
Caching and reading from disk

This guide uses @wrappy.memoize in place of @functools.lru_cache for caching.

  • The benefit is that wrappy.memoize can persist the cache to disk, speeding up code across sessions.

Cached values for this guide have been pre-computed, making it much faster to run.

import torch
import wrappy
from efficientnet_pytorch import EfficientNet
from torchvision import transforms

# EfficientNet is a series of pre-trained models
# https://github.com/lukemelas/EfficientNet-PyTorch
effnet = EfficientNet.from_pretrained("efficientnet-b0")
effnet.eval()

# standard transformations for ImageNet-trained models
tfms = transforms.Compose(
    [
        transforms.Resize(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]
)

@wrappy.memoize(cache_limit=10000, persist_path='custom_cache/image_url_to_vector.pkl')
def vectorizer(url):
    """
    Using logits on ImageNet-1000 classes.
    """
    img = tfms(url_to_image(url)).unsqueeze(0)

    with torch.no_grad():
        outputs = effnet(img)

    return outputs.detach().numpy().flatten()
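
To verify the vectorizer end-to-end, we can run it on a single URL; EfficientNet-B0's ImageNet head makes the output 1000-dimensional:

# vectorize one example image; expect shape (1000,)
vec = vectorizer(dataset.dfs["raw"].loc[0, "image"])
print(vec.shape)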

Embedding and Plot

This is exactly the same as in the quickstart, just switching to image data:

# any kwargs will be passed onto the corresponding reduction
# for umap: https://umap-learn.readthedocs.io/en/latest/parameters.html
# for ivis: https://bering-ivis.readthedocs.io/en/latest/api.html
reducer = dataset.compute_nd_embedding(vectorizer, "umap", dimension=2)
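
For example, since any kwargs pass straight through to the reducer, UMAP's own parameters can be supplied directly; a hypothetical variant of the call above using the standard n_neighbors and min_dist knobs:

# hypothetical alternative: tune the UMAP layout through its own parameters
# reducer = dataset.compute_nd_embedding(vectorizer, "umap", dimension=2, n_neighbors=30, min_dist=0.1)
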
from hover.recipes.stable import simple_annotator

interactive_plot = simple_annotator(dataset)

# ---------- SERVER MODE: for the documentation page ----------
# because this tutorial is remotely hosted, we need explicit serving to expose the plot to you
from local_lib.binder_helper import binder_proxy_app_url
from bokeh.server.server import Server
server = Server({'/my-app': interactive_plot}, port=5007, allow_websocket_origin=['*'], use_xheaders=True)
server.start()
# visit the URL printed in the cell output to see the interactive plot
# locally this would be "http://localhost:5007/my-app"
binder_proxy_app_url('my-app', port=5007)

# ---------- NOTEBOOK MODE: for your actual Jupyter environment ---------
# this code will render the entire plot in Jupyter
# from bokeh.io import show, output_notebook
# output_notebook()
# show(interactive_plot, notebook_url='http://localhost:8888')
What's special for images?

Tooltips

For text, the tooltip shows the original value.

For images, the tooltip embeds the image based on URL.

  • images on the local file system can be served through python -m http.server.
  • they can then be accessed at http://localhost:<port>/relative/path/to/file, as in the sketch below.
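
As an illustration, here is a hypothetical sketch that maps relative file paths to servable URLs; the "path" column and the port are assumptions for the example:

# hypothetical: build URLs for files served by `python -m http.server 8000`
# (run the server from the directory containing the files)
port = 8000
df["image"] = df["path"].apply(lambda p: f"http://localhost:{port}/{p}")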

Search

For text, the search widget is based on regular expressions.

For images, the search widget is based on vector cosine similarity (see the sketch after this list).

  • the dataset remembers the vectorizer under the hood and passes it to the annotator.
  • please let us know if you think there's a better way to search images in this case.
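
Conceptually, the score behind the search widget resembles cosine similarity between vectorizer outputs; a minimal sketch of the idea, not hover's exact internals:

import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two 1-D vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# e.g. compare two images from the dataset
score = cosine_similarity(
    vectorizer(dataset.dfs["raw"].loc[0, "image"]),
    vectorizer(dataset.dfs["raw"].loc[1, "image"]),
)
print(score)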