Transpondancer
                            Transmit   +   Respond   +   Dance it

Introduction

The Transpondancer project builds on six years of ongoing artistic research into dance vocabulary and on one central question: how do we name a movement? Each dance creates its own body-relation system of knowledge, in sensing, anatomical structures, emotional codings of body parts, expression, and imagination, as dance poetics. With the help of AI, we can read and compose the multilingual textures of dance material collected from different sources.

Inspiration

So far, no multilingual dance encyclopedia has existed from which artists and audiences can learn, react to, and share movement-related dance vocabulary. We took inspiration from this gap and built an AI-based solution that helps bring such an encyclopedia into reality. The approach is not limited to current technology but can easily be extended to future use cases, since art and AI are strongly connected in Transpondancer.

What it does

The Transpondancer is a word creator of motion: a fluid dance encyclopedia and an ongoing learning system that grows, reacts, and relates to movement, sharing and composing body knowledge and capturing different ways of sensing it. The Transpondancer responds to your movements, naming them in a multilingual dance vocabulary so that you can react to them and dance to what you hear. Thus there is an ongoing correspondence between the dancing audience and the Transpondancer, a creative process with no end written into itself.

How we built it

  • We designed a framework to address the problem. Since finding large amounts of data is a great challenge for most AI problems, we collected our own dataset of different dance styles.
  • Since most of the images are taken directly from the internet, they need preprocessing before being passed to the model. This is done by the data_handler script, which transforms the images into the specified shapes and returns batches for both training and validation.
  • Finally, we trained deep-learning models that can identify dance poses of the selected genres and can serve as a starting point for future models.
  • As some of the models are too big to upload here, they are hosted on Google Drive and can be accessed via this link.
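The project's actual data_handler implementation is not shown here, but the two steps the list describes (transforming images into a specified shape and returning train/validation batches) can be sketched with standard-library Python only. The function names, the nearest-neighbour resize, and the 80/20 split below are illustrative assumptions, not the project's real code:

```python
import random

def resize_nearest(image, out_h, out_w):
    # Nearest-neighbour resize for an image stored as a nested list
    # (rows x cols of pixel values); a stand-in for real image transforms.
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

def make_batches(samples, batch_size, val_fraction=0.2, seed=0):
    # Shuffle (image, label) pairs, split into train/validation sets,
    # and group each split into fixed-size batches.
    rng = random.Random(seed)
    samples = samples[:]
    rng.shuffle(samples)
    n_val = int(len(samples) * val_fraction)
    val, train = samples[:n_val], samples[n_val:]
    def to_batches(xs):
        return [xs[i:i + batch_size] for i in range(0, len(xs), batch_size)]
    return to_batches(train), to_batches(val)

# Example: ten tiny 4x4 "images" with 3 pose labels, resized to 2x2
# and batched for training and validation.
images = [([[p] * 4 for _ in range(4)], p % 3) for p in range(10)]
resized = [(resize_nearest(img, 2, 2), lbl) for img, lbl in images]
train_batches, val_batches = make_batches(resized, batch_size=4)
```

In a real pipeline the nested-list images would be tensors and the resize would use an image library, but the shaping-then-batching flow stays the same.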

What we have learned and our Challenges

  • Dance vocabulary is very complex. We had to decide which pose would be recognizable for a named movement sequence and break the movement down into image stills.
  • The dataset of collected dance vocabulary was far too small, so we decided to work with images first to build a model, and we had to create our own movement dataset because the terms for concrete movements are not written down.
  • Even collecting the vocabulary is complex work, because what gets named varies: a position, a combination of steps, a sequence, or a movement quality. A small change can transform the naming altogether.
  • Each of us was involved in the precise observation and analysis of movements and their specific terms. During this process we analysed more than 100 dance videos just to pin down a single movement, and the process had to be repeated for each dance style and genre.
  • Dance vocabulary is a correlated poetics, with movements appearing and disappearing within the vocabulary; sometimes eight different movements happen in a split second.

Accomplishments

  • We are very proud of how we started with a complex idea and were able to put all of our ideas together to reach our current state.
  • Providing a complete framework for current and future use cases.
  • Creating a prototype on a real-time dance video for movement classification.
  • Collecting our own datasets to work with and successfully training deep-learning models.
  • Finally, incorporating the idea of sound classification and providing a solid foundation for future work.

Future of Transpondancer

  • Improving the datasets and providing multilingual text descriptions of the dances.
  • Enhancing movement classification in images and video (for example with time-series methods) to improve accuracy.
  • Incorporating sound classification: sound-design tools such as oscillators, filters, effects, and equalization (e.g. high-pass, low-pass, notch) can help recreate the sounds attributed to the various dance styles.
  • Creating an online platform of shared dance vocabulary to build up a fluid dance encyclopedia.
