This webpage briefly summarizes the content of the proposed Dynamic Texture DataBase (DTDB) and provides download links. DTDB constitutes the largest dynamic texture dataset available, with > 10,000 videos and ≈ 3.5 million frames. The dataset is organized in two different and complementary ways, with 18 dynamics-based categories and 45 appearance-based categories (i.e. each video in the dataset has two labels: a dynamics-based label as well as an appearance-based label). The videos are collected from various sources, including the web and various handheld cameras that we employed, which helps ensure diversity and large intra-class variations. Each video is between 5 and 10 seconds long. For a clean dynamic texture dataset, we required that the target texture occupy ~90% of the spatial support of the video and all of its temporal support. Hence, the downloaded videos were cropped so that the resulting sequences had ~90% of their spatial support occupied by the target dynamic texture, while ensuring that the minimum accepted spatial dimension is 224 pixels. Figure 1 illustrates the complementary organization and provides thumbnail examples from the entire dataset. A corresponding video description and the entire dataset are provided here as well. For more details and experiments using DTDB, please see our project website.
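The cropping rule described above (texture occupies ~90% of the spatial support, minimum spatial dimension of 224 pixels) can be sketched as a small geometric computation. The function below is a hypothetical illustration, not the actual preprocessing code used to build DTDB: given the frame size and a bounding box around the target texture, it returns a crop rectangle in which the texture fills roughly 90% of each dimension, clamped so the crop is at least 224 pixels on its smaller side and stays inside the frame.

```python
def crop_to_texture(frame_w, frame_h, box, fill=0.9, min_dim=224):
    """Hypothetical sketch of the DTDB cropping rule.

    box: (x, y, w, h) bounding box of the target texture in the frame.
    Returns (cx, cy, cw, ch): a crop rectangle where the texture
    occupies ~`fill` of each dimension, with each crop dimension at
    least `min_dim` pixels and the rectangle clamped to the frame.
    """
    x, y, w, h = box
    # Expand the texture box so it fills ~`fill` of the crop,
    # then enforce the minimum spatial dimension.
    cw = min(max(int(round(w / fill)), min_dim), frame_w)
    ch = min(max(int(round(h / fill)), min_dim), frame_h)
    # Center the crop on the texture box, clamped to the frame borders.
    cx = min(max(x + w // 2 - cw // 2, 0), frame_w - cw)
    cy = min(max(y + h // 2 - ch // 2, 0), frame_h - ch)
    return cx, cy, cw, ch
```

Applying the same rectangle to every frame preserves the requirement that the texture occupy all of the temporal support of the resulting sequence.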
This video shows sample sequences from the new Dynamic Texture Database and illustrates the specifications of the proposed dynamics-based vs. appearance-based categories of DTDB.