At ExpressiveLabs, we believe it's our responsibility to be transparent about how and why we use Artificial Intelligence (AI) technology in our products, and what measures we take to ensure this happens in an ethical way. On this page, we explain what we view as "ethical AI", and how we apply these ideas to the way we work.
Artificial Intelligence (AI) is a broad collection of technologies that allow computers to perform tasks that would normally require human intelligence. This includes tasks such as recognizing images, understanding speech, and making decisions.
At ExpressiveLabs, we focus on only a tiny area within this broad field, namely singing voice synthesis and linguistic modeling. In our Mikoto Studio product, we deploy a number of AI components that help users create music. Without going into the specifics of each component, all of them serve a strictly supportive role in the user's creative workflow.
Current concerns about AI can be broadly grouped into two questions: "what does AI mean for jobs?" and "where does all that data come from?". We address the second question later on this page; first, we'll focus on the impact of AI on jobs.
We do not develop products with the intention of, or the capability of, replacing human jobs and skills. We believe that human input is key to the creative process, and we create tools that support and enhance the user in their creative activities. We therefore limit our AI tools to a strictly supportive role by excluding generative AI components from our projects.
As artists and creators ourselves, we constantly ask how the technologies we build help us in our daily practice. We believe that AI can be a powerful tool in the hands of artists, and we want to make sure our products are designed to allow for this without violating the strict ethical guidelines we set for ourselves.
Every Mikoto Studio vocalist is modeled after the singing characteristics of a real human being. We contract professional voice actors and singers to provide the voice data for our virtual singer characters. This data is then used to train our AI models.
We take special care to ensure that the agreement we reach with each voice provider is tailored to their specific needs and wishes. We pay our voice providers through a revenue share, at rates significantly higher than the industry standard. Throughout the development and production process, the voice provider has a high level of input in shaping the final product.
In other words: the voice provider always knows what we're doing with their data.
As part of an ongoing effort to remain fully transparent, we publish and maintain lists of every dataset we use in both our commercial products and our research.
The list is split into categories because some datasets carry licenses that do not allow commercial use. Datasets that fall into this category are only used in research projects.
To protect innovative research, we may delay updating the list of datasets used in research projects by up to 6 months. The list was last updated on March 25, 2024.
Mikoto Studio is in active development. Any features and images shown here may not be representative of the final product.