using the ScanObjectNN open-source dataset

  • Registration deadline: Ongoing
  • Submission window: Saturday, October 1, 2022 – Monday, January 23, 2023
  • Winners will be announced in late February 2023


Candelytics, a 3D data analytics startup, and MIT are launching an open-source competition to help users identify 3D objects within a virtual environment. Advancements in laser scanning and photogrammetry have made it possible to create real-world 3D environments digitally. Within this space, there is a growing and significant need to identify, classify, and annotate objects within a 3D digital environment.

For example, a new renovation project may require users to identify and document various furniture and appliances within the 3D scene, or the military may need to identify and clear landmines and other hazardous objects from a population center. Candelytics is working with both commercial and government stakeholders to provide 3D object detection capabilities in a variety of use cases. The mission of this competition is to identify proposals that deliver the highest quality and level of accuracy in detecting 3D objects using open-source datasets.


The objective of this open-source project is to create a compact Machine Learning (ML) model that takes as input an indoor scene in the form of a 3D point cloud and detects and labels the objects present inside the scene. The detected objects can then be extracted for further downstream processing tasks, including but not limited to mesh conversion, geometric search, and export to Blender.



The Prizes

First Place: $25,000

Second Place: $15,000

Third Place: $5,000

The Dataset

The dataset for this challenge is ScanObjectNN, which was created as a benchmark for testing 3D object classification models on real-world scans that contain background noise. The researchers behind the dataset also used the synthetic ModelNet40 dataset as an accuracy baseline when classifying objects in the more realistic ScanObjectNN data. The dataset, along with the academic paper describing its creation, can be found on the project website.

The Challenge

The challenge is to achieve an overall accuracy of 90% or higher across all categories present in the dataset. Participants are free to use existing open-source models, including those published by the dataset's authors, provided they tune relevant hyperparameters to increase accuracy.

Evaluation Metrics

We will rank all submitted methods according to overall accuracy, mean per-class accuracy, and per-class accuracy.
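To make the distinction between the two aggregate metrics concrete, here is a minimal sketch of how they are commonly computed. This is an illustrative implementation, not the organizers' official scoring code; the label encoding (integer class indices) is an assumption.

```python
from collections import defaultdict

def overall_accuracy(labels, preds):
    """Fraction of all test objects predicted correctly."""
    correct = sum(1 for y, p in zip(labels, preds) if y == p)
    return correct / len(labels)

def per_class_accuracy(labels, preds):
    """Accuracy computed separately within each ground-truth class."""
    totals, hits = defaultdict(int), defaultdict(int)
    for y, p in zip(labels, preds):
        totals[y] += 1
        if y == p:
            hits[y] += 1
    return {c: hits[c] / totals[c] for c in totals}

def mean_per_class_accuracy(labels, preds):
    """Unweighted mean of the per-class accuracies."""
    per_class = per_class_accuracy(labels, preds)
    return sum(per_class.values()) / len(per_class)
```

Note that overall accuracy weights each class by how often it appears in the test set, while mean per-class accuracy treats every category equally, so a model that neglects rare classes scores lower on the latter.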

Submission Policy

All models submitted will be published as open-source models for the community to use and improve upon. Your models will be published on the website, and authors will be credited for their work.

All participants must individually agree to have their submissions published; this agreement is collected at both registration and submission. If agreement is not obtained from each individual before the submission deadline, the submission is disqualified and will not be evaluated.

The Teams

Each team must include at least one current student, postdoctoral fellow, or research scientist with a .edu email address in order to register; that member will submit the application on behalf of the entire team. Submissions may come from individuals or teams of two, three, or four, and team members need not be from the same university.



Submission Format

All registered participants who qualify to participate will be sent a submission form with instructions.

Please include the following information in your submission:

  • Abbreviation of the name of your methodology
  • Author list
  • Link to the technical description, if applying an open-source method. Each submission should have an accompanying technical description, including the model used and any changes to hyperparameters or data preprocessing techniques. This encourages clear descriptions of methodology, allowing others to study and benefit from the model, or raise any issues.
  • A single .txt file containing the predictions of your model variant. The file should list all object predictions in the test set, with each row an [object ID] [label index] pair.
  • Any additional information that you’d like us to know. For example, you may include your own evaluation of the technique(s) applied, which may help us run our own evaluation and validate your results.
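As a sketch of the prediction-file format described above, the following writes and sanity-checks one "[object ID] [label index]" pair per row. The integer object IDs and the default of 15 categories are assumptions based on the ScanObjectNN benchmark, not part of the official submission instructions.

```python
def write_predictions(path, predictions):
    """Write one '[object ID] [label index]' pair per row."""
    with open(path, "w") as f:
        for object_id, label_index in predictions:
            f.write(f"{object_id} {label_index}\n")

def validate_predictions(path, num_classes=15):
    """Check that every row parses as two integers with an in-range label."""
    with open(path) as f:
        for line_no, line in enumerate(f, start=1):
            fields = line.split()
            if len(fields) != 2:
                raise ValueError(f"line {line_no}: expected 2 fields, got {len(fields)}")
            object_id, label_index = int(fields[0]), int(fields[1])
            if not 0 <= label_index < num_classes:
                raise ValueError(f"line {line_no}: label {label_index} out of range")
```

Running such a check before submitting catches malformed rows early, before the organizers' evaluation rejects the file.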


We’re compiling a list of Frequently Asked Questions (FAQs). If you have questions, please submit them to this Airtable form, and we’ll aggregate and post answers to this webpage as necessary.



