Laboratory of Content-oriented Computational Culture & Arts

Laboratory of Content-oriented Computational Culture & Arts, Faculty of Informatics, Kansai University

A challenge for AI to entertain and to enjoy rhythm-based games

In 2017, Dance Dance Convolution¹ (DDC), a study on having AI generate scores for rhythm-based games, was published. A member of our research team, Dr. Yudai Tsujino (now an assistant professor at Ritsumeikan University, then a Ph.D. candidate), has long been a well-known player of these games under the name Yudai, and was strongly motivated to improve the system as both an AI researcher and a player, working under the supervision of Dr. Ryosuke Yamanishi (an associate professor at Kansai University, then an assistant professor at Ritsumeikan University), a researcher in entertainment computing.

DDC has motivated us

DDC is insightful because it builds on the observation that the scores of rhythm-based games are data in which annotations are strictly aligned with the waveform of the music, and can therefore serve as a resource for music informatics. In general, a machine learning framework requires a huge amount of annotated data to learn the relationships between statistical features and annotation labels. Such annotation is usually done by hand, which takes a great deal of time and human resources. Donahue et al., who developed DDC, noticed that the scores of rhythm-based games could serve as a ready-made dataset for modeling the relationships between the statistical features of music and the actions in the game. They then built DDC as a system that automatically generates rhythm-based game scores from music: a neural network learns these relationships from convolved musical features, hence the name Dance Dance Convolution.
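To make this concrete, the sketch below shows one way such data could be exploited: each existing chart gives per-frame "step / no step" annotations for the audio, and a small convolutional network classifies whether a step falls on a given spectrogram window. This is a minimal illustration in the spirit of DDC, not the published model; the class names, hyperparameters, and PyTorch framing are our own assumptions.

```python
# Minimal sketch (not the authors' implementation) of the idea behind DDC:
# a game chart annotates each audio frame with "step / no step", so a
# convolutional network can learn to map spectrogram windows to step onsets.
import torch
import torch.nn as nn

class StepPlacementNet(nn.Module):
    def __init__(self, n_mels: int = 80, context: int = 15):
        super().__init__()
        # Convolutions over (time, frequency) patches of a mel spectrogram window.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 7)), nn.ReLU(),
            nn.MaxPool2d((1, 3)),
            nn.Conv2d(16, 32, kernel_size=(3, 7)), nn.ReLU(),
            nn.MaxPool2d((1, 3)),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            flat = self.conv(torch.zeros(1, 1, context, n_mels)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, 1, context_frames, n_mels) -> logit "is there a step here?"
        return self.classifier(self.conv(window)).squeeze(-1)

# Training pairs come "for free" from existing charts: audio frames at annotated
# steps are positive examples, the remaining frames are negatives.
model = StepPlacementNet()
logits = model(torch.randn(8, 1, 15, 80))   # 8 spectrogram windows
labels = torch.randint(0, 2, (8,)).float()  # step / no-step labels from a chart
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
```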

Taking DDC as our starting point, we have tried to improve the system from the perspectives of both e-sports players and artificial intelligence research. The challenge: "AI should enjoy," and "AI should entertain us more!"

¹ C. Donahue, Z. C. Lipton, and J. McAuley: Dance Dance Convolution. Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1039–1048, 2017.

The challenge of entertaining

Analyzing the outputs of DDC, and rhythm-based games themselves, led us to the following observations:

  • Generating difficult scores is relatively easy, but scores that are both easy and enjoyable are hard for AI to generate
  • Even at the same difficulty level, scores can offer different directions of enjoyment

These are the points where DDC's results left us unsatisfied, and the points we set out to tackle. We have therefore proposed Dance Dance Adaptation (DDA) and Dance Dance Convolution – Clustered (DDCC).

Demo at The Lab.

In the demonstration at Knowledge Capital, we show a system in which the two methods are applied in sequence: DDCC and then DDA. First, DDCC generates the most difficult (level 7) score from the input music; DDCC also handles the directions of enjoyment in rhythm-based games, such as frequent jumps, up-down style, and technical style. DDA then adapts this difficult score to easier levels; that is, DDA translates the difficult score into easy (1), middle (3), and hard (5) levels without using any acoustic features.
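As a rough illustration of this pipeline, the sketch below treats a chart as a list of timed arrow events: a placeholder stands in for DDCC (which would need the learned model and the audio), and a purely rule-based "adaptation" thins the hardest chart down to easier levels using only the chart itself, with no acoustic features. The function names and the subdivision rule are hypothetical and only meant to convey the shape of the demo, not the actual DDCC/DDA algorithms.

```python
# Purely illustrative sketch of the demo pipeline (not the published DDCC/DDA code).
# A chart is a list of (beat_position, arrow) events.
from fractions import Fraction
from typing import List, Tuple

Note = Tuple[Fraction, str]  # (position in beats, arrow such as "L", "R", "U", "D")

def generate_hardest_chart(music_path: str, style: str = "technical") -> List[Note]:
    """Stand-in for DDCC: generate the most difficult (level 7) chart from audio,
    conditioned on a direction of enjoyment (e.g. 'jumps', 'up-down', 'technical')."""
    raise NotImplementedError("placeholder for the learned DDCC model")

def adapt_difficulty(hard_chart: List[Note], level: int) -> List[Note]:
    """Stand-in for DDA: derive an easier chart from the hard chart alone.
    Hypothetical rule: lower levels keep only notes on coarser beat subdivisions."""
    grid = {1: Fraction(1), 3: Fraction(1, 2), 5: Fraction(1, 4)}[level]
    return [(pos, arrow) for pos, arrow in hard_chart if pos % grid == 0]

# Example with a hand-written "hard" chart instead of real DDCC output.
hard = [(Fraction(i, 4), "LRUD"[i % 4]) for i in range(16)]  # 16th-note stream
easy, middle = adapt_difficulty(hard, 1), adapt_difficulty(hard, 3)
print(len(hard), len(middle), len(easy))  # 16 8 4
```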

We hope you will enjoy the games. We also hope that the experience prompts you to think about the collaboration between humans and AI in entertainment, the creativity of humans, and the characteristics of AI-generated content. Entertainment should be the next step for AI. We believe that research on entertainment (e.g., games, comics/anime, and music) is the key to the next generation of Artificial Intelligence!