Creating AI music tools for emerging creators
Soundtrap/Spotify

Contributions
Product proposals
UX/UI
Prototyping

Team
UX, Design, Prototyping (Myself)
UX, Design, Prototyping (Andreas W)
Product thinking (George M)
Product thinking (Joakim P)
Engineering (Johannes B)
Engineering (Pedro N)
Engineering (Song L)
Engineering (Johan)
Engineering manager (George B)
Sr product manager (Raquel S)
Project Lead (Per E)
Spotify Leadership (Gustav, Charlie, Sten)

For emerging, non-technical creators who primarily operate online, we explored AI-assisted tools designed to simplify song cover creation. Leveraging Soundtrap’s existing architecture, we envisioned a product within the Spotify ecosystem that radically lowers the barrier to entry for song creation while fostering creative expression.





New relationships between artists and fan creators.


Through fan engagement and content remixing, we imagined new channels of value creation between artists and fan creators.







Journeys that were personalised and integrated into the music library.


Through a personalisation lens, we explored workflows that covered inspiration, creation and, finally, the publishing of song covers.









Early explorations.


Plain-language creation, hidden complexity and step-by-step guidance were the dominant ideas driving our early explorations.











Version 1.0





Creation layer built on top of Soundtrap

01 Hides complexity with track groups (a rough sketch follows below).
02 Guides the user through steps.
03 Allows big changes in a few clicks through AI-powered instruments.


Transitioning into the full Soundtrap studio

01 Dive deeper by expanding groups.
02 Access the full studio.
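
As a rough illustration of this layered model (names and shapes are hypothetical, not Soundtrap's actual data model), a track group can be thought of as a bundle of studio tracks that stays collapsed in the creation layer and expands into individual tracks in the full studio:

```typescript
// Hypothetical sketch of the layered creation model: a simplified
// "track group" that hides individual studio tracks until expanded.
// Names and shapes are illustrative, not Soundtrap's actual API.

interface StudioTrack {
  id: string;
  instrument: string;   // e.g. "kick", "snare", "lead vocal"
  volume: number;       // 0..1
}

interface TrackGroup {
  label: string;        // what the creation layer shows, e.g. "Drums"
  expanded: boolean;    // false in the creation layer, true in the full studio
  tracks: StudioTrack[];
}

// The creation layer renders one control per group; expanding a group
// transitions the user toward the full Soundtrap studio view.
function visibleControls(groups: TrackGroup[]): string[] {
  return groups.flatMap(group =>
    group.expanded
      ? group.tracks.map(t => `${group.label} / ${t.instrument}`)
      : [group.label]
  );
}

const drums: TrackGroup = {
  label: "Drums",
  expanded: false,
  tracks: [
    { id: "t1", instrument: "kick", volume: 0.8 },
    { id: "t2", instrument: "snare", volume: 0.7 },
  ],
};

console.log(visibleControls([drums]));                        // ["Drums"]
console.log(visibleControls([{ ...drums, expanded: true }])); // ["Drums / kick", "Drums / snare"]
```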







First Principles guiding the product.


Limited access to research and testing constrained our ability to gather feedback and direction. We distilled the problem to its core and built on First Principles to guide our reasoning.




Ownership
Creation over modification.

To give a sense of ownership, we gave users greater control over instrument sounds by including more steps.

Authenticity
Unique yet familiar.

To let users leave their mark, we gave them innumerable instrument permutations to play with.

Ease
Sound shaping over arrangement.

To make things easy, we automated much of the arrangement work and let users focus on creativity.









Instrument configurations.


Each bubble represented an instrumentalist that could be easily tweaked.
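
A minimal sketch of what one of these bubbles might look like as data, assuming a handful of coarse, hypothetical parameters; the point is that a few simple controls already yield a large space of permutations:

```typescript
// Hypothetical instrumentalist configuration: a handful of coarse
// parameters, combinable into many distinct sounds. Parameter names
// are illustrative only.

interface Instrumentalist {
  role: "drums" | "bass" | "keys" | "guitar";
  style: "tight" | "loose" | "syncopated";
  energy: 1 | 2 | 3 | 4 | 5;
  tone: "warm" | "bright" | "gritty";
}

// Even this tiny sketch allows 4 * 3 * 5 * 3 = 180 permutations,
// which is the idea behind "unique yet familiar" sounds.
const bubble: Instrumentalist = {
  role: "drums",
  style: "loose",
  energy: 4,
  tone: "gritty",
};

// Tweaking a single parameter produces a new, related sound.
const tweaked: Instrumentalist = { ...bubble, tone: "warm" };
console.log(bubble, tweaked);
```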











Song structure.


Breaking down a song into its distinctive parts helped guide the user, hide complexity and focus attention on the most crucial elements. We explored two breakdowns, sketched in the example below:
Song structure based on instrument categories such as drums, vocals and bass.
Song structure based on sections such as intro, verse and chorus.
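
To make the two breakdowns concrete, here is a rough, hypothetical shape for the same song sliced along both axes; field names are assumptions for illustration only:

```typescript
// Hypothetical song model sliced along the two axes described above:
// instrument categories (drums, vocals, bass, ...) and sections
// (intro, verse, chorus, ...). Shapes are illustrative only.

type InstrumentCategory = "drums" | "vocals" | "bass" | "keys";
type SectionKind = "intro" | "verse" | "chorus" | "bridge" | "outro";

interface Section {
  kind: SectionKind;
  startBar: number;
  lengthBars: number;
}

interface Song {
  sections: Section[];                         // time axis: the song's parts
  stems: Record<InstrumentCategory, string>;   // sound axis: one stem per category
}

const cover: Song = {
  sections: [
    { kind: "intro",  startBar: 0,  lengthBars: 4 },
    { kind: "verse",  startBar: 4,  lengthBars: 8 },
    { kind: "chorus", startBar: 12, lengthBars: 8 },
  ],
  stems: {
    drums: "drums.wav",
    vocals: "vocals.wav",
    bass: "bass.wav",
    keys: "keys.wav",
  },
};

// The UI can then expose only the cell the user cares about,
// e.g. "the drums in the chorus", hiding everything else.
const chorus = cover.sections.find(s => s.kind === "chorus");
console.log(chorus, cover.stems.drums);
```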





Final Iteration.


In our final iteration, we introduced time-stamped lyrics as a way to navigate the song. Given the context of making song covers, this was a more appropriate way of interacting with the timeline than conventional controls.
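
A minimal sketch of that idea, assuming a simple list of time-stamped lyric lines where tapping a line seeks the playhead to its timestamp (all names are hypothetical):

```typescript
// Hypothetical time-stamped lyrics: each line doubles as a navigation
// target, so tapping a lyric seeks the playhead instead of scrubbing
// a conventional timeline. Names are illustrative only.

interface LyricLine {
  text: string;
  timeSec: number;   // where this line starts in the song
}

const lyrics: LyricLine[] = [
  { text: "First verse line",  timeSec: 12.0 },
  { text: "Second verse line", timeSec: 17.5 },
  { text: "Chorus hook",       timeSec: 34.0 },
];

// Seek to the line the user tapped.
function seekToLine(line: LyricLine, setPlayhead: (t: number) => void): void {
  setPlayhead(line.timeSec);
}

// Highlight whichever line the playhead is currently inside.
function currentLine(playheadSec: number, lines: LyricLine[]): LyricLine | undefined {
  return [...lines].reverse().find(l => l.timeSec <= playheadSec);
}

seekToLine(lyrics[2], t => console.log(`playhead -> ${t}s`)); // playhead -> 34s
console.log(currentLine(20, lyrics)?.text);                   // "Second verse line"
```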

A dedicated creation canvas allowed users to play around with various instrument sounds. Novel interaction patterns, first-principles thinking and rapid prototyping helped us create this final proof of concept.








 

Patented


Demo of instrument interactions. 


This prototype shows how AI-powered instruments could be used to create a signature sound. It went on to be patented.



Narrated by Johannes B.






Outcome



This proof of concept was set to be piloted with an internal cohort of users. However, amid Spotify’s restructuring and Soundtrap becoming an independent company again, the prototypes continued to evolve in different ways. They resulted in a patent for Soundtrap and influenced key product improvements in the existing web studio. The project also provided a working demonstration of how emerging audio AI technology can be leveraged to address a range of creation needs.