Update README descriptions
dubreuia committed Feb 6, 2020
1 parent 023ad32 commit 2dd8361
Showing 10 changed files with 11 additions and 75 deletions.
8 changes: 1 addition & 7 deletions Chapter01/README.md
@@ -1,12 +1,6 @@
# Chapter 1 - Introduction on Magenta and generative art

In this chapter, you'll learn the basics of generative artwork and what already
exists. You'll learn about the new techniques of artwork generation, like
machine learning, and how those techniques can be applied to produce music and
art. Magenta will be introduced, along with Tensorflow, with an overview of its
different parts and the installation of the required software for this book.
We'll finish the installation by generating a simple MIDI file on the command
line.
This chapter will show you the basics of generative music and what already exists. You'll learn about the new techniques of artwork generation, such as machine learning, and how those techniques can be applied to produce music and art. Google's Magenta open source research platform will be introduced, along with TensorFlow, the open source machine learning platform it builds on, together with an overview of their different parts and the installation of the required software for this book. We'll finish the installation by generating a simple MIDI file on the command line.
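As an aside to the MIDI generation mentioned above: the MIDI file format itself is compact enough to sketch by hand. The following Magenta-independent Python sketch (all values, such as the 480-tick resolution, are illustrative) writes a playable one-note MIDI file using only the standard library:

```python
import struct

def write_minimal_midi(path, pitch=60, velocity=100):
    """Write a single-track MIDI file containing one quarter note."""
    # Header chunk: 6 data bytes, format 0, 1 track, 480 ticks per quarter.
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
    # Track events: note-on at delta 0, note-off 480 ticks later
    # (0x83 0x60 is the variable-length encoding of 480), then end-of-track.
    events = bytes([
        0x00, 0x90, pitch, velocity,    # delta 0, note-on on channel 0
        0x83, 0x60, 0x80, pitch, 0x00,  # delta 480, note-off
        0x00, 0xFF, 0x2F, 0x00,         # delta 0, end-of-track meta event
    ])
    track = b"MTrk" + struct.pack(">I", len(events)) + events
    with open(path, "wb") as f:
        f.write(header + track)

write_minimal_midi("one_note.mid")
```

This is only meant to demystify the file the command-line generation produces; in practice Magenta writes MIDI for you.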

## Code

8 changes: 1 addition & 7 deletions Chapter02/README.md
@@ -1,12 +1,6 @@
# Chapter 2 - Generating drum sequences with DrumsRNN

In this chapter, you'll learn what many consider the foundation of music:
percussion. We'll show the importance of Recurrent Neural Network (RNN) for
music generation. You'll then learn how to use an existing Magenta model called
"Drums RNN", by calling it on the command line and also directly in Python,
to generate drum sequences. We'll introduce the different model parameters,
including the model's MIDI encoding, and show how to interpret the output of
the model.
This chapter will show you what many consider the foundation of music: percussion. We'll show the importance of Recurrent Neural Networks (RNNs) for music generation. You'll then learn how to use the Drums RNN model with a pre-trained drum kit, calling it on the command line and directly in Python, to generate drum sequences. We'll introduce the different model parameters, including the model's MIDI encoding, and show how to interpret the output of the model.
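As a rough illustration of the kind of MIDI encoding mentioned above: Drums RNN reduces General MIDI percussion pitches to a small set of drum classes and encodes each time step as a binary vector over those classes. The sketch below uses a simplified, assumed class table (the real list lives in Magenta's source and covers more pitches), not the library's actual API:

```python
# Simplified drum-class table (assumed for illustration only; Magenta's
# real encoding uses 9 classes covering many more General MIDI pitches).
DRUM_CLASSES = [
    {36},        # class 0: bass drum
    {38, 40},    # class 1: snare
    {42, 44},    # class 2: closed hi-hat
    {46},        # class 3: open hi-hat
    {49, 57},    # class 4: crash
    {51, 59},    # class 5: ride
]

def encode_step(pitches):
    """Encode one time step (a set of MIDI pitches) as a class bitmask."""
    mask = 0
    for i, cls in enumerate(DRUM_CLASSES):
        if pitches & cls:  # any pitch of this class is hit at this step
            mask |= 1 << i
    return mask

# Kick + closed hi-hat on the same step: bits 0 and 2 set.
print(encode_step({36, 42}))  # 5
```

Each step thus becomes a small integer, which is what makes a sequence model over drum patterns tractable.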

## Code

9 changes: 1 addition & 8 deletions Chapter03/README.md
@@ -1,13 +1,6 @@
# Chapter 3 - Generating polyphonic melodies

Building on the last chapter where we created a drum sequence, we can now
proceed to create the heart of music: its melody. In this chapter, you'll learn
the importance of Long Short-Term Memory (LSTM) networks in generating longer
sequences. We'll see how to use a monophonic models, Melody RNN, an LSTM
network with loopback and attention configuration. You'll also learn to use two
polyphonic models, Polyphony RNN and Performance RNN, both LSTM networks using
a specific encoding, with the latter having support for velocity and
expressive timing.
This chapter will show the importance of Long Short-Term Memory (LSTM) networks in generating longer sequences. We'll see how to use a monophonic Magenta model, the Melody RNN, an LSTM network with loopback and attention configurations. You'll also learn to use two polyphonic models, the Polyphony RNN and Performance RNN, both LSTM networks using a specific encoding, with the latter having support for note velocity and expressive timing.
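To make the monophonic encoding concrete: Magenta's melody representation uses two special event values, -2 for "no event" and -1 for "note off", alongside plain MIDI pitches for note starts. The helper below is our own sketch of that idea (only the two special values match Magenta's representation):

```python
NO_EVENT, NOTE_OFF = -2, -1  # special values, as in Magenta's melody encoding

def to_events(notes, total_steps):
    """Convert (start_step, end_step, pitch) notes to a monophonic event
    list: the pitch at each note start, NOTE_OFF at each note end, and
    NO_EVENT everywhere else (a held or silent step)."""
    events = [NO_EVENT] * total_steps
    for start, end, pitch in notes:
        events[start] = pitch
        if end < total_steps:
            events[end] = NOTE_OFF
    return events

print(to_events([(0, 2, 60), (2, 4, 62)], 6))
# [60, -2, 62, -2, -1, -2]
```

Note how the second note's start overwrites the first note's NOTE_OFF, which is exactly what keeps the melody monophonic.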

## Code

9 changes: 1 addition & 8 deletions Chapter04/README.md
@@ -1,13 +1,6 @@
# Chapter 4 - Latent space interpolation with MusicVAE

In this chapter we’ll learn about the importance of continuous latent space
brought by Variational Autoencoders (VAE) and its importance in music generation
compared to standard Autoencoders (AE). We’ll use the MusicVAE model, a
hierarchical recurrent VAE, from Magenta, to sample sequences and then
interpolate between them, effectively morphing smoothly from one to another.
We'll then see how to add groove, or humanization, to an existing sequence,
using the GrooVAE model. We’ll finish by looking at the Tensorflow code used
to build the VAE model.
This chapter will show the continuous latent space brought by Variational Autoencoders (VAEs) and its importance in music generation compared to standard Autoencoders (AEs). We'll use the MusicVAE model, a hierarchical recurrent VAE from Magenta, to sample sequences and then interpolate between them, effectively morphing smoothly from one to another. We'll then see how to add groove, or humanization, to an existing sequence using the GrooVAE model. We'll finish by looking at the TensorFlow code used to build the VAE model.
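The "morphing" above comes down to taking evenly spaced points between two latent codes and decoding each one. MusicVAE ships its own interpolation method; the toy sketch below just shows the linear idea on plain Python lists (the 4-dimensional vectors are made up, real latent codes are much larger):

```python
def lerp(z1, z2, t):
    """Linear interpolation between two latent vectors at position t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(z1, z2)]

def interpolate(z1, z2, steps):
    """Evenly spaced latent points from z1 to z2, endpoints included."""
    return [lerp(z1, z2, i / (steps - 1)) for i in range(steps)]

# Morph between two toy 4-dimensional latent codes in 5 steps;
# decoding each point would yield a sequence partway between the two.
for z in interpolate([0.0, 0.0, 1.0, -1.0], [1.0, 0.0, -1.0, 1.0], 5):
    print(z)
```

Because the VAE's latent space is continuous, every intermediate point decodes to a plausible sequence, which is what an ordinary autoencoder cannot guarantee.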

## Code

8 changes: 1 addition & 7 deletions Chapter05/README.md
@@ -1,12 +1,6 @@
# Chapter 5 - Audio generation with NSynth and GANSynth

In this chapter, we'll be looking into audio generation. We'll first provide an
overview of WaveNet, an existing model for audio generation, especially
efficient in text to speech applications. In Magenta, we'll use NSynth, a
Wavenet Autoencoder model, to generate small audio clips, that can serve as
instruments for a backing MIDI score, and can also be transformed by scaling,
time stretching and mixing them. We'll also use GANSynth, a faster approach
based on Generative Adversarial Network (GAN).
This chapter will look into audio generation. We'll first provide an overview of WaveNet, an existing model for audio generation that is especially efficient in text-to-speech applications. In Magenta, we'll use NSynth, a WaveNet Autoencoder model, to generate small audio clips that can serve as instruments for a backing MIDI score. NSynth also enables audio transformations such as scaling, time stretching, and interpolation. We'll also use GANSynth, a faster approach based on Generative Adversarial Networks (GANs).
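To fix the vocabulary around the transformations mentioned above, here is a deliberately naive sketch of stretching a clip by resampling. Note that plain resampling changes pitch along with duration; the pitch-preserving stretching used in NSynth-style workflows is more involved, so this is only the baseline idea:

```python
def time_stretch(samples, factor):
    """Naively stretch a list of audio samples by linear resampling.
    factor > 1 lengthens the clip (and, with this naive method,
    also lowers the pitch; real time stretching preserves pitch)."""
    n = int(len(samples) * factor)
    out = []
    for i in range(n):
        pos = i / factor                      # position in the input clip
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

print(len(time_stretch([0.0, 1.0, 0.0, -1.0], 2.0)))  # 8 samples
```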

## Utils

9 changes: 1 addition & 8 deletions Chapter06/README.md
@@ -1,13 +1,6 @@
# Chapter 6 - Data preparation for training

Up until now, we’ve used existing Magenta's pre-trained models, since they are
quite powerful and easy to use. But training our own models is crucial, since
it allows us to generate music in a specific style, generate specific
structures or instruments. Building and preparing a dataset is the first step
before training our own model. To do that, we first look at existing datasets
and APIs to help us find meaningful data. Then, we build two datasets in MIDI
for specific styles: dance and jazz. Finally, we prepare the MIDI files for
training using data transformations and pipelines.
This chapter will show how training our own models is crucial, since it allows us to generate music in a specific style, generate specific structures, or use specific instruments. Building and preparing a dataset is the first step before training our own model. To do that, we first look at existing datasets and APIs to help us find meaningful data. Then, we build two datasets in MIDI for specific styles: dance and jazz. Finally, we prepare the MIDI files for training using data transformations and pipelines.
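The shape of such a preparation pipeline is filter stages chained into transformation stages. Magenta's actual tooling works on NoteSequence protos with its own pipeline API; the sketch below only mirrors that shape with plain Python generators and made-up song dictionaries:

```python
def filter_style(songs, style):
    """Filter stage: keep only songs tagged with the wanted style."""
    return (s for s in songs if s["style"] == style)

def quantize(songs, steps_per_bar=16):
    """Toy transformation stage: snap note times to a fixed grid."""
    for s in songs:
        yield dict(s, notes=[round(t * steps_per_bar) / steps_per_bar
                             for t in s["notes"]])

songs = [
    {"style": "jazz", "notes": [0.01, 0.52]},
    {"style": "dance", "notes": [0.26]},
]
# Chain the stages: only the jazz song survives, with quantized notes.
pipeline = quantize(filter_style(songs, "jazz"))
print(list(pipeline))
```

Because each stage is lazy, the whole dataset never has to fit in memory at once, which matters for large MIDI collections.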

## Utils

10 changes: 1 addition & 9 deletions Chapter07/README.md
@@ -1,14 +1,6 @@
# Chapter 7 - Training Magenta models

In this chapter, we’ll use the prepared data from the previous chapter to train
the some the RNN and VAE networks. Machine learning training is a finicky
process involving a lot of tuning, experimentation, and back and forth between
your data and your model. We’ll learn to tune hyperparameters, like batch size,
learning rate, and network size, to optimize network performance and training
time. We’ll also show common training problems such as overfitting and models
not converging. Once a model's training is complete, we'll show how to use the
trained model to generate new sequences. Finally, we'll show how to use Google
Cloud Platform to train models faster on the cloud.
This chapter will show how to tune hyperparameters, such as batch size, learning rate, and network size, to optimize network performance and training time. We'll also show common training problems, such as overfitting and models not converging. Once a model's training is complete, we'll show how to use the trained model to generate new sequences. Finally, we'll show how to use Google Cloud Platform to train models faster on the cloud.
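One learning-rate knob worth knowing by formula is exponential decay, a common schedule when tuning training runs. The sketch below computes it directly; the starting rate, decay rate, and step counts are hypothetical values, not recommendations:

```python
def decayed_learning_rate(initial_rate, decay_rate, decay_steps, step):
    """Exponential learning-rate decay: the rate is multiplied by
    `decay_rate` once every `decay_steps` training steps."""
    return initial_rate * decay_rate ** (step / decay_steps)

# Hypothetical schedule: start at 0.001, decay by 0.95 every 1000 steps.
for step in (0, 1000, 2000):
    print(step, decayed_learning_rate(0.001, 0.95, 1000, step))
```

A decaying rate lets training take large steps early and fine-grained steps later, which often helps a model converge instead of oscillating.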

## Code

9 changes: 1 addition & 8 deletions Chapter08/README.md
@@ -1,13 +1,6 @@
# Chapter 8 - Magenta in the browser with Magenta.js

In this chapter, we'll talk about Magenta.js, a JavaScript implementation of
Magenta that gained popularity for its ease of use, since it runs in the browser
and can be shared as a web page. We'll introduce Tensorflow.js, the technology
Magenta.js is built upon, and show what models are available in Magenta.js,
including how to convert our previously trained models. Then, we'll create
small web applications using GANSynth and MusicVAE for sampling audio and
sequences. Finally, we'll see how Magenta.js can interact with other
applications, using the Web MIDI API and Node.js.
This chapter will show Magenta.js, a JavaScript implementation of Magenta that gained popularity for its ease of use, since it runs in the browser and can be shared as a web page. We'll introduce TensorFlow.js, the technology Magenta.js is built upon, and show what models are available in Magenta.js, including how to convert our previously trained models. Then, we'll create small web applications using GANSynth and MusicVAE for sampling audio and sequences, respectively. Finally, we'll see how Magenta.js can interact with other applications using the Web MIDI API and Node.js.
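A small web app playing generated sequences has to convert quantized steps into wall-clock seconds for scheduling. The arithmetic is the same in any language; here it is in Python for consistency with the other sketches (the function name and parameters are illustrative, not the Magenta.js API):

```python
def steps_to_seconds(steps, qpm, steps_per_quarter):
    """Convert quantized sequence steps to seconds, given a tempo in
    quarter notes per minute (QPM) and a quantization resolution."""
    seconds_per_step = 60.0 / qpm / steps_per_quarter
    return steps * seconds_per_step

# 8 steps at 120 QPM with 4 steps per quarter note = 1 second of audio.
print(steps_to_seconds(8, 120, 4))
```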

## Code

10 changes: 1 addition & 9 deletions Chapter09/README.md
@@ -1,14 +1,6 @@
# Chapter 9 - Making Magenta interact with music applications

In this chapter, we'll see how Magenta fits in a broader picture by showing
how to make it interact with other music applications such as Digital Audio
Workstations (DAWs) and synthesizers. We'll explain how to send MIDI sequences
from Magenta to FluidSynth and DAWs using the MIDI interface. By doing so,
we'll learn how to handle MIDI ports on all platforms and how to loop MIDI
sequences in Magenta. We'll show how to synchronize multiple applications using
MIDI clocks and transport information. Finally, we'll cover Magenta Studio, a
standalone packaging of Magenta based on Magenta.js that can also integrates in
Ableton Live as a plugin.
This chapter will show how Magenta fits into a broader picture by making it interact with other music applications such as Digital Audio Workstations (DAWs) and synthesizers. We'll explain how to send MIDI sequences from Magenta to FluidSynth and DAWs using the MIDI interface. By doing so, we'll learn how to handle MIDI ports on all platforms and how to loop MIDI sequences in Magenta. We'll show how to synchronize multiple applications using MIDI clocks and transport information. Finally, we'll cover Magenta Studio, a standalone packaging of Magenta based on Magenta.js that can also integrate into Ableton Live as a plugin.
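On the synchronization point above: MIDI sync defines 24 clock messages per quarter note, so the interval between clocks depends only on the tempo. A one-line sketch of that timing math:

```python
def midi_clock_period(qpm):
    """Seconds between MIDI clock messages: the MIDI sync standard sends
    24 clocks per quarter note, so the period depends only on the tempo
    (in quarter notes per minute)."""
    return 60.0 / qpm / 24.0

# At 120 QPM, a clock message goes out roughly every 0.0208 seconds.
print(midi_clock_period(120))
```

Keeping every application counting the same clock stream is what lets Magenta loops stay in step with a DAW's transport.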

## Code

6 changes: 2 additions & 4 deletions README.md
@@ -1,8 +1,6 @@
# Hands-On Music Generation with Magenta: Explore the role of deep learning in music generation and assisted music composition
# Hands-On Music Generation with Magenta

Design and use machine learning models for music generation using Magenta and make them interact with existing music creation tools.

## Links
In Hands-On Music Generation with Magenta, we explore the role of deep learning in music generation and assisted music composition. Design and use machine learning models for music generation using Magenta and make them interact with existing music creation tools.

- **[Packt Publishing](https://www.packtpub.com/eu/data/hands-on-music-generation-with-magenta)** - Buy the book in ebook format or paperback
- [Code in Action](https://www.youtube.com/playlist?list=PLWPX7CYPrFFqvJW-vPU0puAo8vqyzq0A6) - Videos that show the code examples being executed and the resulting generation.
