Commit

Add magenta version information 1.1.7
dubreuia committed Jun 5, 2020
1 parent af59083 commit 09d8879
Showing 37 changed files with 76 additions and 2 deletions.
4 changes: 3 additions & 1 deletion Chapter01/README.md
@@ -1,5 +1,7 @@
# Chapter 1 - Introduction on Magenta and generative art

[![Magenta Version 1.1.7](../docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

This chapter will show you the basics of generative music and what already exists. You'll learn about new techniques for generating artwork, such as machine learning, and how those techniques can be applied to produce music and art. We'll introduce Google's Magenta open source research platform together with Google's open source machine learning platform TensorFlow, give an overview of their different parts, and install the software required for this book. We'll finish the installation by generating a simple MIDI file on the command line.

## Code
@@ -18,7 +20,7 @@ conda create --name magenta python=3.6
conda activate magenta
```

Then you can install Magenta version 1.1.7 and the dependencies for the book using:
Then you can install Magenta Version 1.1.7 and the dependencies for the book using:

```bash
pip install magenta==1.1.7 visual_midi tables
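
# Not part of this commit, just an illustrative smoke test: once the install
# finishes, a short drum MIDI file can be generated on the command line. It
# assumes the pre-trained drum_kit_rnn.mag bundle has already been downloaded
# into the current directory.
drums_rnn_generate \
  --config=drum_kit \
  --bundle_file=drum_kit_rnn.mag \
  --output_dir=output \
  --num_outputs=1 \
  --num_steps=64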
2 changes: 2 additions & 0 deletions Chapter02/README.md
@@ -1,5 +1,7 @@
# Chapter 2 - Generating drum sequences with DrumsRNN

[![Magenta Version 1.1.7](../docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

This chapter will show you what many consider the foundation of music: percussion. We'll show the importance of Recurrent Neural Networks (RNNs) for music generation. You'll then learn how to use the Drums RNN model with a pre-trained drum kit model, calling it both on the command line and directly in Python, to generate drum sequences. We'll introduce the different model parameters, including the model's MIDI encoding, and show how to interpret the output of the model.
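
As a rough sketch of the Python route mentioned above (an illustrative example, not part of this commit; it assumes the pre-trained `drum_kit_rnn.mag` bundle has already been downloaded next to the script):

```python
from magenta.models.drums_rnn import drums_rnn_sequence_generator
from magenta.music import midi_io, sequence_generator_bundle
from magenta.protobuf import generator_pb2, music_pb2

# Load the pre-trained drum kit bundle and build the generator.
bundle = sequence_generator_bundle.read_bundle_file("drum_kit_rnn.mag")
generator_map = drums_rnn_sequence_generator.get_generator_map()
generator = generator_map["drum_kit"](checkpoint=None, bundle=bundle)
generator.initialize()

# Generate four seconds of drums from an empty primer and save them as MIDI.
options = generator_pb2.GeneratorOptions()
options.generate_sections.add(start_time=0, end_time=4.0)
sequence = generator.generate(music_pb2.NoteSequence(), options)
midi_io.note_sequence_to_midi_file(sequence, "drums_sample.mid")
```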

## Code
2 changes: 2 additions & 0 deletions Chapter02/chapter_02_example_01.py
@@ -1,5 +1,7 @@
"""
This example shows a basic Drums RNN generation with a hard-coded primer.
VERSION: Magenta 1.1.7
"""

import os
2 changes: 2 additions & 0 deletions Chapter03/README.md
@@ -1,5 +1,7 @@
# Chapter 3 - Generating polyphonic melodies

[![Magenta Version 1.1.7](../docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

This chapter will show the importance of Long Short-Term Memory (LSTM) networks in generating longer sequences. We'll see how to use a monophonic Magenta model, the Melody RNN, an LSTM network with lookback and attention configurations. You'll also learn to use two polyphonic models, the Polyphony RNN and the Performance RNN, both LSTM networks using a specific encoding, with the latter supporting note velocity and expressive timing.
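
For instance, switching between the lookback and attention configurations is mostly a matter of picking a different bundle and generator id (a minimal sketch, not part of this commit; it assumes the `attention_rnn.mag` bundle has been downloaded, and generation then proceeds as in the Drums RNN example):

```python
from magenta.models.melody_rnn import melody_rnn_sequence_generator
from magenta.music import sequence_generator_bundle

# "basic_rnn", "lookback_rnn" or "attention_rnn" select the configuration.
bundle = sequence_generator_bundle.read_bundle_file("attention_rnn.mag")
generator = melody_rnn_sequence_generator.get_generator_map()["attention_rnn"](
    checkpoint=None, bundle=bundle)
generator.initialize()
```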

## Code
2 changes: 2 additions & 0 deletions Chapter03/chapter_03_example_01.py
@@ -1,6 +1,8 @@
"""
This example shows a melody (monophonic) generation using the melody rnn model
and 3 configurations: basic, lookback and attention.
VERSION: Magenta 1.1.7
"""

import math
2 changes: 2 additions & 0 deletions Chapter03/chapter_03_example_02.py
@@ -1,5 +1,7 @@
"""
This example shows a polyphonic generation with the polyphony rnn model.
VERSION: Magenta 1.1.7
"""

import math
2 changes: 2 additions & 0 deletions Chapter03/chapter_03_example_03.py
@@ -1,5 +1,7 @@
"""
This example shows a polyphonic generation with the performance rnn model.
VERSION: Magenta 1.1.7
"""

import math
2 changes: 2 additions & 0 deletions Chapter04/README.md
@@ -1,5 +1,7 @@
# Chapter 4 - Latent space interpolation with MusicVAE

[![Magenta Version 1.1.7](../docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

This chapter will show the importance of the continuous latent space of Variational Autoencoders (VAEs) for music generation, compared to standard Autoencoders (AEs). We'll use the MusicVAE model, a hierarchical recurrent VAE from Magenta, to sample sequences and then interpolate between them, effectively morphing smoothly from one to another. We'll then see how to add groove, or humanization, to an existing sequence using the GrooVAE model. We'll finish by looking at the TensorFlow code used to build the VAE model.
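
A rough sketch of the sampling and interpolation flow described above (illustrative only, not part of this commit; the checkpoint path `cat-drums_2bar_small.lokl.ckpt` is an assumption and should point at a downloaded MusicVAE checkpoint):

```python
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

# Load a small 2-bar drums configuration together with its pre-trained checkpoint.
model = TrainedModel(
    configs.CONFIG_MAP["cat-drums_2bar_small"],
    batch_size=4,
    checkpoint_dir_or_path="cat-drums_2bar_small.lokl.ckpt")

# Sample two sequences from the latent space, then morph from one to the other.
start, end = model.sample(n=2, length=32, temperature=1.0)
interpolated = model.interpolate(start, end, num_steps=6, length=32)
```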

## Code
2 changes: 2 additions & 0 deletions Chapter04/chapter_04_example_01.py
@@ -1,6 +1,8 @@
"""
This example shows how to sample, interpolate and humanize a drums sequence
using MusicVAE and various configurations.
VERSION: Magenta 1.1.7
"""

import os
2 changes: 2 additions & 0 deletions Chapter04/chapter_04_example_02.py
@@ -1,6 +1,8 @@
"""
This example shows how to sample and interpolate a melody sequence
using MusicVAE and various configurations.
VERSION: Magenta 1.1.7
"""

import os
2 changes: 2 additions & 0 deletions Chapter04/chapter_04_example_03.py
@@ -1,6 +1,8 @@
"""
This example shows how to sample a trio (drums, melody, bass) sequence
using MusicVAE and various configurations.
VERSION: Magenta 1.1.7
"""

import os
2 changes: 2 additions & 0 deletions Chapter05/README.md
@@ -1,5 +1,7 @@
# Chapter 5 - Audio generation with NSynth and GANSynth

[![Magenta Version 1.1.7](../docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

This chapter will cover audio generation. We'll first provide an overview of WaveNet, an existing model for audio generation that is especially efficient in text-to-speech applications. In Magenta, we'll use NSynth, a WaveNet autoencoder model, to generate small audio clips that can serve as instruments for a backing MIDI score. NSynth also enables audio transformations such as scaling, time stretching, and interpolation. We'll also use GANSynth, a faster approach based on generative adversarial networks (GANs).
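
A rough sketch of the NSynth interpolation idea (illustrative only, not part of this commit; the WaveNet checkpoint directory `wavenet-ckpt` and the two WAV files are assumptions about local files):

```python
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

# Load two short clips (4 seconds at 16 kHz) and encode them with the
# WaveNet autoencoder.
sample_length = 16000 * 4
wav1 = utils.load_audio("clip1.wav", sample_length=sample_length, sr=16000)
wav2 = utils.load_audio("clip2.wav", sample_length=sample_length, sr=16000)
checkpoint = "wavenet-ckpt/model.ckpt-200000"
encoding1 = fastgen.encode(wav1, checkpoint, sample_length)
encoding2 = fastgen.encode(wav2, checkpoint, sample_length)

# Mix the two encodings in latent space and decode the result back to audio.
mix = (encoding1 + encoding2) / 2.0
fastgen.synthesize(mix, save_paths=["clip_mix.wav"], checkpoint_path=checkpoint)
```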

## Utils
2 changes: 2 additions & 0 deletions Chapter05/chapter_05_example_01.py
@@ -1,5 +1,7 @@
"""
This example shows how to use NSynth to interpolate between pairs of sounds.
VERSION: Magenta 1.1.7
"""

import os
4 changes: 3 additions & 1 deletion Chapter05/chapter_05_example_02.py
@@ -1,6 +1,8 @@
"""
This example shows how to use GANSynth to generate intruments for a backing
This example shows how to use GANSynth to generate instruments for a backing
score from a MIDI file.
VERSION: Magenta 1.1.7
"""

import os
2 changes: 2 additions & 0 deletions Chapter06/README.md
@@ -1,5 +1,7 @@
# Chapter 6 - Data preparation for training

[![Magenta Version 1.1.7](../docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

This chapter will show why training our own models is crucial: it allows us to generate music in a specific style, with specific structures or instruments. Building and preparing a dataset is the first step before training our own model. To do that, we first look at existing datasets and APIs that help us find meaningful data. Then, we build two MIDI datasets for specific styles, dance and jazz. Finally, we prepare the MIDI files for training using data transformations and pipelines.
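
As a toy illustration of the kind of MIDI preparation this chapter walks through (not part of this commit; `song.mid` is a placeholder path), keeping only the drum instruments of a file with pretty_midi:

```python
import pretty_midi

# Load a MIDI file, drop everything that is not a drum instrument, and save it.
midi = pretty_midi.PrettyMIDI("song.mid")
midi.instruments = [instrument for instrument in midi.instruments if instrument.is_drum]
midi.write("song_drums.mid")
```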

## Utils
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_00.py
@@ -1,5 +1,7 @@
"""
Extract techno (four on the floor) drum rhythms.
VERSION: Magenta 1.1.7
"""
import argparse
import copy
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_01.py
@@ -1,5 +1,7 @@
"""
Artist extraction using LAKHs dataset matched with the MSD dataset.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_02.py
@@ -1,6 +1,8 @@
"""
Lists most common genres from the Last.fm API using the LAKHs dataset
matched with the MSD dataset.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_03.py
@@ -1,6 +1,8 @@
"""
Filter on specific tags from the Last.fm API using the LAKHs dataset
matched with the MSD dataset.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_04.py
@@ -1,5 +1,7 @@
"""
Get statistics on instrument classes from the MIDI files.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_05.py
@@ -2,6 +2,8 @@
Extract drums MIDI files. Some drum tracks are split into multiple separate
drum instruments, in which case we try to merge them into a single instrument
and save only 1 MIDI file.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_06.py
@@ -2,6 +2,8 @@
Extract piano MIDI files. Some piano tracks are split into multiple separate
piano instruments, in which case we keep them split and merge them into
multiple MIDI files.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_07.py
@@ -1,5 +1,7 @@
"""
Extract drums MIDI files corresponding to specific tags.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_08.py
@@ -1,5 +1,7 @@
"""
Extract piano MIDI files corresponding to specific tags.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter06/chapter_06_example_09.py
@@ -1,5 +1,7 @@
"""
Extract drums tracks from GMD.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter07/README.md
@@ -1,5 +1,7 @@
# Chapter 7 - Training Magenta models

[![Magenta Version 1.1.7](../docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

This chapter will show how to tune hyperparameters such as batch size, learning rate, and network size to optimize network performance and training time. We'll also cover common training problems such as overfitting and models that fail to converge. Once a model's training is complete, we'll show how to use it to generate new sequences. Finally, we'll show how to use the Google Cloud Platform to train models faster in the cloud.

## Code
2 changes: 2 additions & 0 deletions Chapter07/chapter_07_example_01.py
@@ -1,5 +1,7 @@
"""
Configuration for the MusicVAE model, using the MIDI bass programs.
VERSION: Magenta 1.1.7
"""

import tensorflow as tf
2 changes: 2 additions & 0 deletions Chapter07/chapter_07_example_02.py
@@ -1,6 +1,8 @@
"""
Tensor validator and note sequence splitter (training and evaluation datasets)
for the MusicVAE model.
VERSION: Magenta 1.1.7
"""
import argparse

2 changes: 2 additions & 0 deletions Chapter07/chapter_07_example_03.py
@@ -1,5 +1,7 @@
"""
Configuration for the Drums RNN model that inverts the snares and bass drums.
VERSION: Magenta 1.1.7
"""

import tensorflow as tf
2 changes: 2 additions & 0 deletions Chapter08/README.md
@@ -1,5 +1,7 @@
# Chapter 8 - Magenta in the browser with Magenta.js

[![Magenta Version 1.1.7](../docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

This chapter will show Magenta.js, a JavaScript implementation of Magenta that has gained popularity for its ease of use, since it runs in the browser and can be shared as a web page. We'll introduce TensorFlow.js, the technology Magenta.js is built upon, and show which models are available in Magenta.js, including how to convert our previously trained models. Then, we'll create small web applications using GANSynth and MusicVAE for sampling audio and sequences, respectively. Finally, we'll see how Magenta.js can interact with other applications using the Web MIDI API and Node.js.

## Code
2 changes: 2 additions & 0 deletions Chapter09/README.md
@@ -1,5 +1,7 @@
# Chapter 9 - Making Magenta interact with music applications

[![Magenta Version 1.1.7](../docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

This chapter will show how Magenta fits into a broader picture by making it interact with other music applications such as Digital Audio Workstations (DAWs) and synthesizers. We'll explain how to send MIDI sequences from Magenta to FluidSynth and to DAWs using the MIDI interface. Along the way, we'll learn how to handle MIDI ports on all platforms and how to loop MIDI sequences in Magenta. We'll show how to synchronize multiple applications using MIDI clocks and transport information. Finally, we'll cover Magenta Studio, a standalone packaging of Magenta based on Magenta.js that can also integrate into Ableton Live as a plugin.
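
A minimal sketch of the MIDI port handling mentioned above (illustrative only, not part of this commit; the "FluidSynth" port name is an assumption about the local setup):

```python
import time

import mido

# List the available MIDI output ports and open the one exposed by FluidSynth.
print(mido.get_output_names())
port_name = next(name for name in mido.get_output_names() if "FluidSynth" in name)
with mido.open_output(port_name) as port:
    # Send a single bass drum hit (MIDI note 36) and release it half a second later.
    port.send(mido.Message("note_on", note=36, velocity=100))
    time.sleep(0.5)
    port.send(mido.Message("note_off", note=36))
```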

## Code
2 changes: 2 additions & 0 deletions Chapter09/chapter_09_example_01.py
@@ -1,5 +1,7 @@
"""
Utility functions for finding and creating MIDI ports.
VERSION: Magenta 1.1.7
"""

import mido
2 changes: 2 additions & 0 deletions Chapter09/chapter_09_example_02.py
@@ -1,6 +1,8 @@
"""
This example shows a basic Drums RNN generation with synthesizer playback,
using a MIDI hub to send the sequence to an external device.
VERSION: Magenta 1.1.7
"""
import argparse
import os
2 changes: 2 additions & 0 deletions Chapter09/chapter_09_example_03.py
@@ -2,6 +2,8 @@
This example shows a basic Drums RNN generation with a
looping synthesizer playback, using a MIDI hub to send the sequence
to an external device.
VERSION: Magenta 1.1.7
"""
import argparse
import os
2 changes: 2 additions & 0 deletions Chapter09/chapter_09_example_04.py
@@ -1,6 +1,8 @@
"""
This example shows how to synchronize a Magenta application with an external
device using MIDI clock and transport messages.
VERSION: Magenta 1.1.7
"""

import argparse
2 changes: 2 additions & 0 deletions Chapter09/chapter_09_example_05.py
@@ -2,6 +2,8 @@
This example shows a basic Drums RNN generation with a
looping synthesizer playback, generating a new sequence at each loop,
using a MIDI hub to send the sequence to an external device.
VERSION: Magenta 1.1.7
"""
import argparse
import os
2 changes: 2 additions & 0 deletions README.md
@@ -1,5 +1,7 @@
# Hands-On Music Generation with Magenta

[![Magenta Version 1.1.7](./docs/magenta-v1.1.7-badge.svg)](https://github.com/magenta/magenta/releases/tag/1.1.7)

In *Hands-On Music Generation with Magenta*, we explore the role of deep learning in music generation and assisted music composition. You'll design and use machine learning models for music generation with Magenta and make them interact with existing music creation tools.

<p align="center">