What is your background?

My background is both as a computer scientist and a musician. I grew up playing flute and piano, classically trained. I thought for many years that I would become a classical musician, but then discovered computer programming when I was a teenager. That took me on a path of spending many years, in undergrad and in grad school, thinking about how I wanted to combine those two sets of interests. Today I have a PhD in computer science, but I teach in a creative computing institute and I work primarily with creative practitioners.

Rebecca Fiebrink during her keynote presentation at the International Conference on Live Interfaces (March 10, 2020). Photo by Shreejay Shrestha.

What motivated you as a teenager to start programming?

Ever since I was a child, I saw working with computers as a different type of creative practice. I found it fun to make things. I got started programming by programming my graphing calculator. I was making little images and animations for my friends and making web pages back in the days when you had to create web pages directly from HTML, when you did not have any other tools. I was making web pages with my friends that were sort of fan fiction sites for the TV shows we liked. There was always something that was a foundation for creative expression. Just like any other artistic medium.

What are your experiences being a woman in music technology?

When I got involved in music technology, I was very surprised that there were not more women in the field. I guess I’m much older now and I’m not surprised anymore, rather a little bit more cynical. But that said, I have no regrets about my career choice. I really like this field. I think it is full of a lot of fantastic people of different genders and different backgrounds. Personally, I found many sources of inspiration in women and others within music technology, but also in related fields such as music composition, human-computer interaction, and machine learning. We are well situated at the intersection of many fields, where a lot of different types of people have different ways of thinking about things. And that is something I have benefited from.

Closeup of Rebecca Fiebrink during her keynote presentation at the International Conference on Live Interfaces (March 10, 2020). Photo by Shreejay Shrestha.

Did you have a role model or mentor?

I have had many role models and mentors. For the most part they have been men and have been different from me in many ways, and they are often people who haven’t shared all of my same interests. And this has been okay. My first mentor was a researcher named David Huron, who did some fantastic work in music cognition and musicology. He was really the person who introduced me to the idea that you could use technology to expand people’s understanding of music. Since then I’ve had a variety of more senior people in my field and in neighboring fields who have been fantastic mentors and from whom I have learned a lot.

Could you introduce your latest work/project more closely?

I’ve got a lot of projects now, and one that I just wrapped up recently is something called ‘Sound Control’. That is a project that developed out of a collaboration with a set of music therapists and music teachers in the UK. Specifically, this project is for people who work with kids with a wide variety of physical and developmental disabilities. The project’s aim has been to augment the tools people have for making music but also making other types of sonic interactions, in one-on-one therapy sessions as well as in wider classrooms. That project resulted in some open-source software that people can download online (http://soundcontrolsoftware.com), and this software essentially allows people to take a variety of input devices like cameras, microphones, game controllers and build new musical instruments or sonic interaction interfaces without knowing really anything at all about interaction design, music technology, and machine learning (although machine learning is one of the tools we use to make it easy to build new things).

What are the exact tools you used for making it?

This piece of software is built using Max, and it is a Max standalone. Within that, we have some machine learning algorithms drawn from my earlier research project called RAPIDMIX. The software allows you to plug in different input devices, which you can select from a drop-down box. You also select one of a few types of musical control or sound control to interact with, for instance controlling MIDI notes or FM synthesis. Then you can show examples of how you might like to move, or how a kid you are working with might like to move, to make that sound, as well as giving specific examples of what sound or notes you want to play. Then, we use a set of machine learning algorithms to build you an instrument, in which your movements with that input device will result in sound.
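Sound Control itself is a Max standalone, but the core idea described above, learning a mapping from example (movement, sound) pairs, can be sketched outside of Max. The toy sketch below is a hypothetical illustration only: it uses a simple k-nearest-neighbour average in plain Python, and the feature layout (controller x/y positions mapping to a pitch and loudness pair) is my own invention, not the actual RAPIDMIX algorithms or Sound Control internals.

```python
import math

def train(examples):
    """Store (input_features, sound_params) demonstration pairs.

    examples: list of (features, params), each a tuple of floats.
    In Sound Control terms, each pair is one demonstration:
    "when I move like this, play this sound."
    """
    return list(examples)

def predict(model, features, k=3):
    """Map a new input to sound parameters by averaging the k nearest
    demonstrations -- a minimal stand-in for the learned instrument."""
    nearest = sorted((math.dist(features, f), p) for f, p in model)[:k]
    n_params = len(nearest[0][1])
    return tuple(
        sum(p[i] for _, p in nearest) / len(nearest) for i in range(n_params)
    )

# Hypothetical demonstrations: controller (x, y) -> (MIDI pitch, loudness)
demos = [
    ((0.0, 0.0), (48.0, 0.2)),  # bottom-left: low note, quiet
    ((1.0, 0.0), (72.0, 0.2)),  # bottom-right: high note, quiet
    ((0.0, 1.0), (48.0, 0.9)),  # top-left: low note, loud
    ((1.0, 1.0), (72.0, 0.9)),  # top-right: high note, loud
]
model = train(demos)

# A movement halfway between all four demonstrations lands halfway
# between their pitches and loudness levels.
pitch, loudness = predict(model, (0.5, 0.5), k=4)
```

The point of the sketch is the workflow, not the algorithm: the user never writes a mapping function, they only provide examples, and the system interpolates between them.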

Any follow up for this?

We will see. I think that right now we are in the very end stage of the project, where it is about documentation and about getting the word out so that other people know about it and maybe start using it. We’re also putting together resources from teachers who’ve used it, so that people can go to the website and learn not only how to use the software but also how to use it in the classroom. It has the potential to be a really great resource for a lot of people. It is my hope that the next project in this space comes out of organic observations of what people are doing with it. It is a tool that allows people to do something they haven’t done before, so it is going to take people a little while to figure out how they want to use it. There are certainly things that we have not anticipated as researchers; there will be interesting new challenges that come up, or different use cases that we have not thought of. And often that is what gives us the idea for the next project, rather than us researchers saying, ‘oh, I know what needs to happen next’. Working with the community to figure that out is fun.

What is your advice to women interested in this field?

My advice to women who are interested in getting started in this field is just to do it. There are so many different avenues into the field: whether someone is coming from a musical background and does not know anything about coding, or doesn’t know anything about audio processing, it doesn’t matter. There is a way in, and those are all skills that you can learn. Likewise, if somebody is coming from a background in computer science or engineering and likes music but does not necessarily feel like they have the authentic background of being a musician, that does not matter either; you just need an open mind, and you need to be able to listen to people who are musicians and composers. That is a really good way in. This is a field where people with all sorts of different backgrounds and different ways of looking at things have a space where they can contribute. The more people who come into this space, especially women and people who are not already well represented here, the better; we want them in the field.

Do you have anything to add?

I don’t think so.

Thank you!