FiftyOne Computer Vision Tips and Tricks — March 1, 2024
Welcome to our weekly FiftyOne tips and tricks blog where we recap interesting questions and answers that have recently popped up on Slack, GitHub, Stack Overflow, and Reddit.
As an open source community, FiftyOne is open to all. This means everyone is welcome to ask questions, and everyone is welcome to answer them. Continue reading to see the latest questions asked and answers provided!
Wait, what’s FiftyOne?
FiftyOne is an open source machine learning toolset that enables data science teams to improve the performance of their computer vision models by helping them curate high quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster.
- If you like what you see on GitHub, give the project a star.
- Get started! We’ve made it easy to get up and running in a few minutes.
- Join the FiftyOne Slack community; we're always happy to help.
Ok, let’s dive into this week’s tips and tricks!
Exporting and importing datasets
Community Slack member Hasan asked:
What is the preferred way to export datasets in the FiftyOne format and then import them into another machine?
For exporting datasets, check out the “Exporting FiftyOne Datasets” section in the Docs.
For importing datasets into another machine, check out the “Loading Data into FiftyOne” section in the Docs.
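As a minimal sketch, the snippet below exports a dataset in the FiftyOne format and reloads it on another machine. The dataset name and directory paths are placeholders; adjust them to your setup.

```python
import fiftyone as fo

# On the source machine: export the dataset (media, metadata, and labels)
# in the portable FiftyOneDataset format
dataset = fo.load_dataset("my_dataset")  # placeholder dataset name
dataset.export(
    export_dir="/tmp/my_dataset_export",  # placeholder path
    dataset_type=fo.types.FiftyOneDataset,
)

# Copy the export directory to the destination machine, then import it
dataset2 = fo.Dataset.from_dir(
    dataset_dir="/tmp/my_dataset_export",
    dataset_type=fo.types.FiftyOneDataset,
    name="my_dataset",
)
```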
Visualizing embeddings in FiftyOne
Community Slack member Daniel asked:
Is it possible to create embeddings in FiftyOne?
Yes! The FiftyOne Brain provides a powerful compute_visualization() method that you can use to generate low-dimensional representations of the samples and/or individual objects in your datasets.
These representations can be visualized natively in the FiftyOne App’s Embeddings panel, where you can interactively select points of interest and view the corresponding samples/labels in the Samples panel, and vice versa. Learn more in the “Visualizing Embeddings” section of the Docs. For a deeper dive, check out our recent blog post on dimensionality reduction techniques in FiftyOne.
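Here is a quick sketch that computes an embeddings visualization on the zoo’s quickstart dataset and opens it in the App. The brain_key name is arbitrary, and the quickstart dataset is just a convenient example.

```python
import fiftyone as fo
import fiftyone.zoo as foz
import fiftyone.brain as fob

# Load a small example dataset
dataset = foz.load_zoo_dataset("quickstart")

# Compute low-dimensional embeddings for every sample using a default image model
# (the default reduction method is UMAP, which requires the umap-learn package)
fob.compute_visualization(dataset, brain_key="img_viz")

# Explore the results in the App's Embeddings panel
session = fo.launch_app(dataset)
```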
Searching through videos inside the FiftyOne App
Community Slack member Yashovardhan asked:
Is there a way to search on videos that are a group of image sequences?
Yes! Check out the FiftyOne “Semantic Video Search” plugin that enables you to use a single prompt to find exactly what you are looking for across every frame in your dataset. This leverages the Twelve Labs semantic video search API!
Viewing images with multiple channels
Community Slack member Gantugs asked:
Is there a way in the FiftyOne App to view images with 5 channels? For context, I am working with microscopic images.
Yes! You can accomplish this by making use of FiftyOne’s grouped datasets feature. Grouped datasets contain multiple slices of samples of possibly different modalities (image, video, or point cloud) that are organized into groups.
Grouped datasets can be used, for example, to represent multiview scenes, where data for multiple perspectives of the same scene can be stored, visualized, and queried in ways that respect the relationships between the slices of data.
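As a minimal sketch, here is how this might look for a 5-channel microscopy image, assuming each channel has been saved as its own image file. The dataset name and filepaths are placeholders.

```python
import fiftyone as fo

dataset = fo.Dataset("microscopy-example")  # placeholder name
dataset.add_group_field("group", default="channel_1")

# One group per scene, with one slice per channel
group = fo.Group()
samples = [
    fo.Sample(
        filepath=f"/path/to/scene_001_ch{i}.png",  # placeholder paths
        group=group.element(f"channel_{i}"),
    )
    for i in range(1, 6)
]
dataset.add_samples(samples)
```

In the App, you can then switch between (or overlay) the channel slices of each group.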
Segmenting point clouds
Community Slack member Simon asked:
Is there a way to segment point clouds in the FiftyOne App? I would like to keep rgb and just show different segmentations on it.
You could achieve this by creating a grouped dataset where each slice contains a different .pcd file with the segmentations encoded as RGB values. Note that if you do this, the point cloud viewer will allow you to toggle on/off any number of point clouds, which could be very convenient.
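A minimal sketch of that setup might look like the following, assuming you have written one .pcd file per view with the desired colors baked into its RGB values. The dataset name, slice names, and paths are placeholders.

```python
import fiftyone as fo

dataset = fo.Dataset("pointcloud-segmentations")  # placeholder name
dataset.add_group_field("group", default="rgb")

# One group per scene: the raw RGB point cloud plus a segmentation-colored copy
group = fo.Group()
dataset.add_samples(
    [
        fo.Sample(filepath="/path/to/scene_rgb.pcd", group=group.element("rgb")),
        fo.Sample(filepath="/path/to/scene_seg.pcd", group=group.element("segmentation")),
    ]
)
```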