Shape Retrieval Contest for CAD models 2008

Evaluations released HERE

Welcome to the Engineering track of the Shape Retrieval Contest 2008 (SHREC). We are excited to announce a dedicated track for CAD models in this year's SHREC, and we would like to thank Dr. Remco Veltkamp for this opportunity. Details on how to participate in this track are given below, and we will keep updating this page as the track evolves. If you have any questions or comments, please don't hesitate to email us at shrec [at] purdue [dot] edu.

Why a CAD models track?
Engineering parts typically have high genus, rounding features (fillets and chamfers), and internal structure, and they are closed, watertight volumes. Engineering models can be parts or assemblies: a part is an atomic unit, and many parts are assembled to form an assembly. For example, a wheel can be a part, whereas a bike is an assembly. Moreover, the engineering context is unique in that part families and parametric models, i.e. models that differ only in the relative dimensions of various local geometries, are common. This track therefore focuses on engineering parts and the search tasks that arise in an engineering context.

The Dataset
The engineering track uses the Purdue Engineering Shape Benchmark (ESB) [Jayanti et al., 2006]. This established benchmark consists of closed triangulated meshes of CAD parts in vendor-neutral formats (.stl and .obj). The models are arranged in a file system, with each folder representing a class of parts. The ground-truth classification has two levels of hierarchy: three super-classes, each containing several sub-classes. The classification can be browsed at http://purdue.edu/shapelab. Please register to download the SHREC version of the database (~66 MB).
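Since the ground-truth classes are encoded directly in the folder layout, they can be recovered by walking the directory tree. Below is a minimal Python sketch; the root folder name, the exact super-class/sub-class nesting, and the helper name load_esb_labels are illustrative assumptions, not part of the benchmark's official tooling.

    from pathlib import Path

    def load_esb_labels(root):
        """Map each mesh file to its (super-class, sub-class) labels.

        Assumes the layout root/<super-class>/<sub-class>/<model>.stl|.obj
        described above; adjust if the released archive is organized
        differently.
        """
        labels = {}
        root = Path(root)
        for super_dir in sorted(p for p in root.iterdir() if p.is_dir()):
            for sub_dir in sorted(p for p in super_dir.iterdir() if p.is_dir()):
                for mesh in sub_dir.iterdir():
                    if mesh.suffix.lower() in {".stl", ".obj"}:
                        labels[mesh.name] = (super_dir.name, sub_dir.name)
        return labels

    if __name__ == "__main__":
        labels = load_esb_labels("ESB")  # path to the unpacked benchmark
        print(len(labels), "models in", len(set(labels.values())), "sub-classes")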

To Participate
Please REGISTER HERE and download the dataset for this contest.

Contest Evaluation
A query set consisting of models that are not present in the ESB will be released. These models will be similar to those in the ESB and will include parametric versions of randomly selected models from the database. Each participant is expected to submit results for tasks based on both overall similarity and partial similarity. Submissions will then be evaluated using various performance criteria; please refer to last year's evaluations for a list of possible criteria.
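To make the evaluation concrete, the sketch below computes precision and recall at a cutoff for a single query's ranked result list. Precision-recall is only one plausible criterion; the official measures will follow last year's evaluations, and the function name and inputs here are illustrative assumptions.

    def precision_recall_at_k(ranked_ids, relevant_ids, k):
        """Precision and recall of the top-k retrieved models.

        ranked_ids   -- model ids for one query, best match first
        relevant_ids -- set of ids sharing the query's ground-truth class
        NOTE: one plausible criterion only; the official measures are
        those listed in last year's SHREC evaluations.
        """
        top_k = ranked_ids[:k]
        hits = sum(1 for m in top_k if m in relevant_ids)
        precision = hits / k
        recall = hits / len(relevant_ids) if relevant_ids else 0.0
        return precision, recall

    # Example: 2 of the top 4 results are relevant, out of 5 relevant overall.
    p, r = precision_recall_at_k(["a", "x", "b", "y"], {"a", "b", "c", "d", "e"}, k=4)
    print(p, r)  # 0.5 0.4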

Timeline
  • Benchmark Dataset: available | Please register to download
  • Query Dataset: RELEASED: Proceed to download >>
  • Results Submission: Deadline: no later than March 17, 2008, 11:59 pm UTC. Participants should also submit a two-page summary. Click here for templates.
  • Final evaluations released